
A Computational Model of Music Transcription (Machine Perception, Acoustics, Notation).

dc.contributor.author: Piszczalski, Martin
dc.date.accessioned: 2016-08-30T16:40:02Z
dc.date.available: 2016-08-30T16:40:02Z
dc.date.issued: 1986
dc.identifier.uri: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:8621354
dc.identifier.uri: https://hdl.handle.net/2027.42/127924
dc.description.abstract: The purpose of this research was to create a computational model of music transcription. The computer system that resulted processed natural musical sounds and automatically produced the music-notation symbols that represented those sounds. The learned, human skill of transcribing music is one of the most sophisticated auditory-based pattern-recognition tasks that humans perform. Two related signal-to-symbol, machine-perception disciplines are automatic speech recognition and computer vision. In the computational-model approach, hypotheses are implemented in precise algorithmic form on the computer. Any single algorithm, in turn, must work in harmony with a constellation of other algorithms that together form the integrated system. The robustness of the system was tested using unconstrained music played on a variety of musical instruments. A bottom-up (i.e., data-driven) approach was implemented in this working system. The digitized sound signal from monophonic (one-part) music was first transformed into its spectral representation, forming the basis for extracting the time-varying partials. Next, the time-varying pitch was established from these partials. Musical note segmentation was then performed by pitch and amplitude edge operators applied to this pitch information. The discrete acoustical events thus produced were classified into music-notation note symbols (representing pitch and duration). Finally, the musical information was presented in the graphical printed form of music familiar to millions of musicians. The backbone of the system was the pitch-detection and note-segmentation method. Highly precise pitch tracking was found not to be necessary, although context was important in determining the time-varying pitch. The automatic transcription system yielded notation that closely followed the original music performance. Additional research is necessary to incorporate higher-level musical knowledge that appears essential for the proper presentation of music notation. Other, more ambitious goals include automatic polyphonic music transcription.
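
The abstract outlines a complete bottom-up pipeline: spectrum, time-varying partials, pitch track, note segmentation, notation symbols. The Python sketch below illustrates that flow in miniature under loose assumptions: it substitutes a single FFT peak pick for the dissertation's partial tracking and a run-length grouping of quantized pitches for its pitch/amplitude edge operators, and every function name, frame size, and threshold here is illustrative, not the author's method.

import numpy as np

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def frame_pitches(signal, sr, frame=2048, hop=512):
    """Estimate one frequency per frame from the magnitude spectrum.
    A single FFT peak pick stands in for the dissertation's partial tracking."""
    pitches = []
    for start in range(0, len(signal) - frame, hop):
        windowed = signal[start:start + frame] * np.hanning(frame)
        spectrum = np.abs(np.fft.rfft(windowed))
        if spectrum.max() < 1e-6:           # near-silence: no pitch for this frame
            pitches.append(None)
        else:
            pitches.append(np.argmax(spectrum) * sr / frame)
    return pitches

def hz_to_midi(f):
    """Quantize a frequency in Hz to the nearest equal-tempered MIDI note."""
    return int(round(69 + 12 * np.log2(f / 440.0)))

def segment_notes(pitches, sr, hop=512):
    """Group runs of frames sharing one quantized pitch into note events,
    a crude analogue of the edge-operator segmentation in the abstract."""
    midis = [hz_to_midi(f) if f else None for f in pitches]
    notes, i = [], 0
    while i < len(midis):
        j = i
        while j < len(midis) and midis[j] == midis[i]:
            j += 1
        if midis[i] is not None:            # skip unpitched (silent) runs
            notes.append((midis[i], (j - i) * hop / sr))
        i = j
    return notes

def midi_to_name(m):
    """Render a MIDI number as a notation-style pitch name, e.g. 69 -> A4."""
    return f"{NOTE_NAMES[m % 12]}{m // 12 - 1}"

if __name__ == "__main__":
    # Synthesize a two-note monophonic test signal: A4 then C5, 0.5 s each.
    sr = 22050
    t = np.arange(int(0.5 * sr)) / sr
    melody = np.concatenate([np.sin(2 * np.pi * 440.00 * t),
                             np.sin(2 * np.pi * 523.25 * t)])
    for midi, dur in segment_notes(frame_pitches(melody, sr), sr):
        print(f"{midi_to_name(midi)}  {dur:.2f} s")

Run on the synthesized A4-then-C5 melody, the sketch prints two note events of roughly half a second each, echoing the abstract's finding that frame-level pitch need only be precise enough for correct note quantization, not exact.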
dc.format.extent: 309 p.
dc.language: English
dc.language.iso: EN
dc.subject: Acoustics
dc.subject: Computational
dc.subject: Machine
dc.subject: Model
dc.subject: Music
dc.subject: Notation
dc.subject: Perception
dc.subject: Transcription
dc.title: A Computational Model of Music Transcription (Machine Perception, Acoustics, Notation).
dc.type: Thesis
dc.description.thesisdegreename: PhD
dc.description.thesisdegreediscipline: Applied Sciences
dc.description.thesisdegreediscipline: Computer science
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/127924/2/8621354.pdf
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)


