In the comments to a previous post, Fred Himebaugh asked me to explain the difference between syntax and semantics in music. First, I will give a general definition of the terms. "Syntax" describes proper construction according to rules of grammar; thus in language, syntax describes the proper order of words under the grammatical rules of that language. "Semantics" describes the meaning assigned to a statement (or to symbols, if we want to be very general).
In tonal music, syntax is easy to describe: the expected order of harmonic progressions, melodic pitches, and rhythms, both at local levels and in hierarchical systems. The expected order is based upon rules that were (slowly) developed by observing the behavior of tonal music composed between about 1600 and 1880. Stefan Koelsch calls these rules "certain regularities." There is debate as to whether this syntax is learned by exposure to the music of one's culture or whether it is hardwired; the same debate exists in linguistics. But regardless of how it develops, there is no debate that music has a syntax.
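As a rough illustration of what "certain regularities" could mean computationally (this toy model is my own sketch, not anything from Koelsch's work), harmonic syntax can be caricatured as transition probabilities between chord functions: a chord that is improbable given its predecessor registers as a syntactic surprise. All probabilities below are invented for the example.

```python
# A toy first-order Markov model of tonal harmonic syntax.
# The transition probabilities are invented for illustration only;
# real models would be estimated from corpora of 1600-1880 tonal music.

import math

TRANSITIONS = {
    "I":    {"IV": 0.35, "V": 0.35, "vi": 0.20, "ii": 0.10},
    "ii":   {"V": 0.80, "vii0": 0.20},
    "IV":   {"V": 0.50, "I": 0.30, "ii": 0.20},
    "V":    {"I": 0.75, "vi": 0.20, "IV": 0.05},  # V -> IV is rare in this style
    "vi":   {"ii": 0.50, "IV": 0.50},
    "vii0": {"I": 1.00},
}

def surprise(prev, nxt):
    """Surprisal in bits: high values mark unexpected (syntax-breaking) chords."""
    p = TRANSITIONS.get(prev, {}).get(nxt, 0.001)  # tiny floor for unseen moves
    return -math.log2(p)

progression = ["I", "IV", "V", "IV", "I"]  # the V -> IV move breaks expectation
for prev, nxt in zip(progression, progression[1:]):
    print(f"{prev:>4} -> {nxt:<4} surprise = {surprise(prev, nxt):.2f} bits")
```

In this caricature, the unconventional V to IV move stands out with far higher surprisal than the stock moves around it, which is all "violating harmonic expectancy" means in the discussion below.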
Musical semantics is a different story. Some say that musical semantics is identical to musical syntax, that any meaning gleaned is only a perception of structure. But others say that music can communicate a variety of meanings. Koelsch describes four different aspects of meaning: 1) analogies to similar-sounding objects or to qualities of objects, such as imitating birdsong; 2) emotional meaning, such as happy or sad; 3) extramusical associations, such as literary references or conventional uses of a particular musical work (anthem, folk song); and 4) perception of syntactic structure.
The question that Koelsch's team has been investigating is whether our brains process musical semantics separately from musical syntax, and whether these neurocognitive processes are similar to those for language syntax and semantics. Previous research has identified two electrical signal changes (called event-related potentials, or ERPs) associated with harmonic expectancy: when a harmonic expectancy is violated, the ERAN and N500 peaks are affected. By playing musical excerpts together with sentences in the listeners' native language, sentences containing either violations of syntax or violations of semantics, Steinbeis and Koelsch found that syntactic violations in the sentence reduced the ERAN effect, and that semantic violations in the sentence reduced the N500 effect. Thus, if one accepts the premise that these two ERP peaks do correspond to harmonic expectancy, then harmonic expectancy involves two different processes that correlate with language's syntax and semantics.
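For readers unfamiliar with ERPs, here is a minimal sketch of how such a peak is typically quantified: many EEG trials time-locked to a stimulus are averaged so that random noise cancels out, and the mean amplitude within a time window is compared across conditions. The data below are synthetic and the window bounds are loose illustrations, not the parameters of the actual Steinbeis and Koelsch study.

```python
# Minimal sketch of how an event-related potential (ERP) is quantified.
# All data are synthetic; amplitudes and window bounds are rough
# illustrations, not values from the actual study.

import numpy as np

rng = np.random.default_rng(0)
fs = 500                            # sampling rate in Hz
t = np.arange(-0.2, 0.8, 1 / fs)    # epoch from -200 ms to +800 ms

def make_epochs(n_trials, eran_amp):
    """Synthesize trials: noise plus a negative deflection near 200 ms."""
    noise = rng.normal(0, 5, (n_trials, t.size))
    component = eran_amp * np.exp(-((t - 0.2) ** 2) / (2 * 0.05 ** 2))
    return noise - component        # a negativity, as for the ERAN

# Toy premise: irregular chords evoke a larger ERAN than regular ones
regular   = make_epochs(100, eran_amp=1.0)
irregular = make_epochs(100, eran_amp=4.0)

window = (t >= 0.15) & (t <= 0.25)  # illustrative ERAN time window
for name, epochs in [("regular", regular), ("irregular", irregular)]:
    erp = epochs.mean(axis=0)       # averaging cancels trial-by-trial noise
    print(f"{name:>9}: mean amplitude {erp[window].mean():+.2f} uV")
```

The finding described above amounts to a condition-by-condition comparison of this kind: the ERAN difference shrinks when the accompanying sentence carries a syntactic violation, and the N500 difference shrinks when it carries a semantic one.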
The implications of this? I'm not sure, beyond a possible negation of the opinion that musical syntax and musical semantics are identical. I suppose it opens up the possibility that there are universal meanings in music, with varying possible degrees of "universal." And it could lead to investigations into whether music-making evolved as a byproduct of language development, or vice versa.
2 comments:
"There is debate as to whether this syntax is learned by exposure to the music of one's culture or whether it is hardwired."
Not to mention the debate about what the proper description of this syntax is in the first place, and what the explanatory status of such a description would be.
On second thought: while these latter sorts of debates are furiously raging in the field of linguistics, their musical counterparts, as far as I can tell, have hardly seen the light of day (except during a brief period lasting approximately from 1950 to 1980). Anyone care to speculate on the reasons for this?
Might look at:
http://web.mit.edu/linguistics/people/faculty/pesetsky/Pesetsky_Cambridge_music_handout.pdf