Article
Non-musicians’ and musicians’ perception of bitonality
Psychology of Music 38(4) 423–445 © The Author(s) 2010. Reprints and permission: http://www.sagepub.co.uk/journalsPermission.nav. DOI: 10.1177/0305735609351917. http://pom.sagepub.com
Mayumi Hamamoto, Mauro Botelho and Margaret P. Munger, Davidson College, USA
Abstract
Bitonal music is characterized by a distinctive dissonant effect that had been believed to be clearly audible to everyone. However, Wolpert found that non-musicians were unable to identify bitonal versions of originally monotonal musical passages as such in a free response task. The present study replicated Wolpert’s findings, but also had participants rate song clips for likeableness, correctness and pleasantness. Bitonal music was rated lower on all dimensions independent of the individual’s level of musical training, with no difference between the ratings of non-musicians and musicians. In addition, following a brief training session, non-musicians (less than one year of musical training) identified whether clips were monotonal or bitonal at rates as high as the intermediate (mean 2.4 years) and expert (mean 9.2 years) musician groups.

Keywords
bitonality, musicians, non-musicians, perception, tonality
Introduction

An enormous number of differences have been observed between how expert musicians and naïve listeners respond to music using both behavioural and neurophysiological methods (e.g., Koelsch & Mulder, 2002; Peretz & Zatorre, 2005; Regnault, Bigand, & Besson, 2001). A difference we found particularly intriguing was the apparent inability of non-musicians to hear the unique effect of bitonal music (Wolpert, 1990, 2000), music that is characterized by harsh and unexpected dissonances. It had been believed that any person, regardless of previous musical experience, would find the effect of bitonality to be so unpleasant that bitonality would be immediately perceptible (e.g., Gosselin et al., 2006). In fact, consonant and dissonant chords elicit different patterns of neuronal activity in the primary auditory cortex for many animals, and the magnitude of the oscillatory pattern elicited by dissonant chords correlates with perceived dissonance in humans (Fishman et al., 2001).
Corresponding author: Margaret P. Munger, Davidson College, Box 7001, Davidson, NC, 28035–7001, USA. Email: [email protected]
Bitonality can be observed in the works of many composers, most notably in those of Milhaud, perhaps the most famous polytonalist, but also in the music of Bartók, Britten, Casella, Honegger, Ives, Koechlin, Prokofiev and Strauss, to mention only the more prominent names (Morgan, 1991; Symms, 1996; Watkins, 1995; Whittall, 2001b). Bitonality is ‘the simultaneous use of two ... keys. This may occur briefly or over an extended span’ (Randel, 1986; see also Whittall, 2001a). (Polytonality refers to the use of three or more keys simultaneously.) The group of composers associated with Milhaud known as ‘Les Six’ was especially fond of bi- and polytonality (DeVoto, 1993; Médicis, 2005; Messing, 1988). Composers themselves cite bi- and polytonality as integral to their work (Bartók, 1931/1976, 1943/1976; Casella, 1924; Koechlin, 1925; Milhaud, 1923/1982). Compositional treatises routinely present bi- and polytonality as a viable compositional procedure (Cope, 1977; Dallin, 1974; Smith Brindle, 1986). Bitonality may also be associated with a piece’s text or dramatic structure, as, for example, in Britten’s operas Peter Grimes and The Turn of the Screw (Whittall, 2001a). It should be noted that such interpretations often fall under the category of ‘eye music’: the notational peculiarities of bitonality such as differing key signatures that convey ‘symbolic meaning that is apparent to the eye but not to the ear’ (Dart, 2001). Disagreement among musicians and especially music theorists arises not in relation to the term’s definition, but whether or not bitonality can be perceived. In spite of the evidence outlined above, many theorists have been less than accepting of the concept of bitonality on the grounds that it cannot be perceived. Van den Toorn famously declared: The ‘bitonality’ or ‘polytonality’ of certain passages in [Stravinsky’s music] can no longer be taken seriously ... Presumably implying the simultaneous (C-scale tonally functional) unfolding of separate ‘tonalities’ or ‘keys’, these notions – real horrors of the musical imagination – have widely (and mercifully) been dismissed as too fantastic or illogical to be of assistance. (van den Toorn, 1983, pp. 63–64)
Two widely-used 20th-century theory and analysis texts reflect this scepticism: one does not mention the topic at all even though it discusses music that is treated as bitonal in other quarters (Straus, 1990), the other mentions bitonality briefly but questions its perceptibility (Williams, 1997). Studies sympathetic to bi- and polytonality exist, but are infrequent (e.g., Harrison, 1997; Stein, 2005). The source of music theory’s claims that bi- and polytonality are not perceptible is not entirely clear, but seems to be rooted in semantics, namely, in the logical contradiction inherent in the term ‘bitonality’ rather than in any experimental evidence (Boretz, 1973; Forte, 1955; see also Tymoczko, 2002). Tonal pitch structure is typically conceived to be strictly hierarchical (e.g., Lerdahl, 2001 or Lerdahl & Jackendoff, 1983). Thus, tonality is understood as ‘the projection in time of a single [consonant] triad by means of ... linear and harmonic prolongations of this triad’ (Babbitt, 1952, p. 261). Note the use of the word single in the definition: in other words, if music is to be heard in terms of a key or a pitch-class centre, all of its pitches will be heard in terms of this single governing and superordinating pitch class. Conversely, bitonality, as mentioned above, is ‘the simultaneous use of two keys’. Consequently, because tonality by definition admits only one pitch centre, many theorists consider bi- or polytonalism a figment of the imagination, and explain so-called bi- or polytonal passages as a partitioning of the chromatic scale, or of certain pitch-class sets such as the octatonic scale or octad (Berger, 1978; Straus, 1990; van den Toorn, 1983). The octatonic collection is an eight-note scale in which whole steps and half steps, or half steps and whole steps, alternate. The octad collection is also an eight-note scale consisting of two whole steps, three half steps, and two whole steps (Figure 1).
Figure 1. Eight-note scales: (a) the octatonic collection; (b) the octad collection.
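To make the two collections in Figure 1 concrete, the following minimal Python sketch (an illustration added here, not part of the original study) builds each one as a set of pitch classes, with C = 0, C sharp = 1, ..., B = 11, and checks them against the step patterns given above and the stacked-fifths derivation of the octad described in the text below.

```python
# Illustrative sketch only: build the two eight-note collections as
# pitch-class sets (C = 0, C sharp = 1, ..., B = 11).

def scale_from_steps(start, steps):
    """Return the pitch classes produced by walking the given step sizes."""
    pcs = [start]
    for step in steps:
        pcs.append((pcs[-1] + step) % 12)
    return pcs

# Octatonic collection: whole and half steps alternate (2-1-2-1-2-1-2).
octatonic = scale_from_steps(0, [2, 1, 2, 1, 2, 1, 2])

# Octad collection: two whole steps, three half steps, two whole steps;
# starting on C this yields C-D-E-F-F#-G-A-B.
octad = scale_from_steps(0, [2, 2, 1, 1, 1, 2, 2])

# The octad can also be derived from a stack of fifths, F-C-G-D-A-E-B,
# plus one additional fifth (F sharp), as explained in the text below.
fifths = [(5 + 7 * i) % 12 for i in range(8)]   # F, C, G, D, A, E, B, F#
assert sorted(fifths) == sorted(octad)

print(sorted(octatonic))  # [0, 2, 3, 5, 6, 8, 9, 11]
print(sorted(octad))      # [0, 2, 4, 5, 6, 7, 9, 11]
```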
An easy way to conceive of the octad is to arrange the pitches of a major scale in fifths rather than in steps, say, F–C–G–D–A–E–B, and then add one additional fifth to obtain the octad, F–C–G–D–A–E–B–F sharp.

Finally, it should also be noted that antagonism towards bi- and polytonality is as old as the concept itself. Early criticism of bitonality was often virulent, and frequently mixed in with nationalistic and anti-Semitic rhetoric (Médicis, 2005; Messing, 1988). Schoenberg’s opposition to bi- and polytonality, although especially vehement, emphasized musical and structural reasons (Schoenberg, 1923/1975, 1925/1975, 1926/1975; see especially Schoenberg’s second untitled essay in Stein, 1975; Messing, 1988 mentions a number of still unpublished essays by Schoenberg that also describe polytonality negatively). But it is important to remember that most of Schoenberg’s criticism of polytonality dates from the mid-1920s, that is, during the peak of bi- and polytonality’s popularity but also precisely the time he was composing his first 12-tone pieces (and enduring his own share of negative criticism).

Bitonality: perceptual issues

It is puzzling that the logical contradiction (inasmuch as music theorists are concerned) inherent in the concept of bitonality is capable of quashing discussion. Although we recognize that experimental studies in the perception of bitonality are at a beginning state, it is surprising that one can dismiss the concept as ‘fantastic or illogical’ without any experimental evidence. Bitonality, and the perceptual issues it raises, came to a head in an exchange between two music theorists and Stravinsky scholars: Dmitri Tymoczko (2002, 2003a, 2003b) and Pieter van den Toorn (2003). Briefly, Tymoczko questions if the octatonic scale should be considered the source of Stravinsky’s pitch materials as van den Toorn has frequently claimed (1975, 1983, 1987). Tymoczko shows that many passages in Stravinsky’s music can be explained as resulting from the superposition of musical elements. Echoing the views of many composers, Tymoczko points out that bitonality is a useful explanation for the construction of a piece (2003b). Side-stepping for a moment the contentious issue of perception of bitonality, Tymoczko calls such layering of elements ‘polyscalarity: the simultaneous use of musical objects which clearly suggest different source-collections. Polyscalarity is a kind of local heterogeneity, a willful combination of disparate and clashing musical elements’ (Tymoczko, 2002, p. 84).
Tymoczko goes on to address the issue of perception. While he admits that ‘a piece of music cannot, in the fullest and most robust sense of the term, be in two keys at once’ (2002, p. 84), bi- or polytonality as a concept is relevant for music in which distinct pitch configurations ‘naturally segregate themselves into independent auditory streams, each of which, if heard in isolation, would suggest a different tonal region’ (Tymoczko, 2002, p. 84; our emphasis). Auditory scene analysis is the ability to identify different sound sources within the auditory stream, and listeners use a variety of acoustic cues to distinguish auditory objects including frequency, intensity and spatial location (for review, see Bregman, 1990, 1993). In addition, a complex sound with multiple harmonics will be identified as two auditory objects when one of the harmonics is mistuned relative to the others (Alain, Arnott, & Picton, 2001). Using fundamental frequencies of 200 and 400 Hz, participants identified the complex sound as two distinct auditory objects when the degree of mistuning of a harmonic was 4 percent, which corresponds roughly to a semitone (Alain et al., 2001). Being able to distinguish multiple acoustic objects is not the same as being able to distinguish multiple tonalities, but it is clearly an initial requirement. We suggest that the unique effect created by bitonality results from the combination of two independent yet interacting factors: first, there is the sensory roughness that results when tones are too close in pitch, with the most dissonant intervals involving a difference of about 4 percent (a bit less than a semitone, Plomp & Levelt, 1965; Rasch & Plomp, 1999). Second, there is the abstract and high-level discordance created when distinct pitch configurations that project independent auditory streams, each in a different key, point to two different tonics, in violation of tonality’s basic premise. Note that it is not necessarily the case that all bitonal music will be dissonant (Dallin, 1974; Smith Brindle, 1986). Conversely, tonal music highly saturated with chromaticism and, for that matter, atonal music, can be very dissonant without giving the impression of bitonality. In his description of the perceptual experience of bi- and polytonality, Tymoczko avails himself of a rather colourful metaphor: ‘polytonal music tends to involve a very distinctive sort of “crunch”’ (Tymoczko, 2003b, p. 2). We understand Tymoczko’s ‘crunch’ to be synonymous with the more well-defined term ‘sensory roughness’. In spite of its colloquial ring, we adopted Tymoczko’s ‘crunch’ for its ready comprehensibility by non-musicians, part of our pool of participants. While we agree that independent auditory streams are essential for the perception of bitonality, one needs to take into account the varying degrees of independence between the musical structures that project the independent auditory streams. At one end of the spectrum lies Tymoczko’s extreme example of an oboist playing ‘My Country ’Tis of Thee’ in F major in one corner of a room, while in another corner a pianist plays ‘The Star-Spangled Banner’ in D-flat major (Tymoczko, 2003b). Two auditory streams thus project two different keys, but they are so independent, unrelated and uncoordinated that they would not be perceived as a bitonal piece, or, for that matter, any sort of piece, but rather as two pieces, each in one key, played simultaneously. 
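As a brief numerical aside (our arithmetic, not the authors'), the 4 percent figures cited above can be related to equal-tempered intervals: a semitone is a frequency ratio of 2^(1/12), roughly a 6 percent difference, so a 4 percent difference comes to about two-thirds of a semitone, consistent with the characterization of the roughest intervals as a bit less than a semitone apart.

```python
import math

# Illustrative arithmetic only: relate the 4 percent figures discussed above
# to equal-tempered intervals. A semitone is a frequency ratio of 2**(1/12).
semitone_ratio = 2 ** (1 / 12)
print(f"semitone ratio: {semitone_ratio:.4f} "
      f"(about {(semitone_ratio - 1) * 100:.1f}% difference)")

# A 4 percent frequency difference expressed in cents (100 cents = 1 semitone).
cents = 1200 * math.log2(1.04)
print(f"4% difference = {cents:.0f} cents, roughly two-thirds of a semitone")

# Hypothetical example with the 200 Hz fundamental used by Alain et al. (2001):
# mistuning, say, the third harmonic by 4% shifts it from 600 Hz to 624 Hz.
print(600 * 1.04)  # 624.0
```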
At the other end of the spectrum lies the locus classicus of bitonality, the so-called ‘Petroushka chord’ (Figure 2; the clarinets are notated in C). Traditionally, this passage has been explained as the bitonal superposition of the keys of C and F sharp major, even by the composer himself (Stravinsky & Craft, 1962). The passage is quite dissonant, even by early 20th-century standards: all but one simultaneity, the C sharp–E in bar 9, are non-triadic. The sustained diminished third A sharp–C, alternating with the minor second G–F sharp, is especially biting. As is readily apparent, this passage consists of two highly coordinated parts, played by clarinets 1 and 2: they are timbrally homogenous, and exhibit identical rhythm and contour. In fact, the two clarinet parts are so well coordinated that they cannot be considered as independent
auditory streams. In fact, this coordination helps to smooth out the otherwise dissonant effect of this passage. It should come as no surprise, then, that experimentation has corroborated van den Toorn’s hypothesis that this passage is heard in terms of the octatonic collection (Krumhansl, 1990; Krumhansl & Schmuckler, 1986). Figure 3 illustrates how the octatonic scale is partitioned between the two clarinets and bassoon: C, E and G, suggesting C major, are given to clarinet 1, and C sharp, F sharp and A sharp, suggesting F sharp major, are assigned to clarinet 2 and bassoon. Two pitch-classes of the octatonic collection, D sharp and A, are not used, and the bassoon’s G sharp (circled in Figure 2) is a ‘rogue’ element, that is, it does not belong to the octatonic scale (but fits in comfortably in F sharp major).

Somewhere in between the two examples above lie the two types of stimuli used in this study: the contrived bitonal examples created by Wolpert (1990, 2000) and also used in blocks 1 and 2 of this study, and the originally composed bitonal phrases taken from Milhaud’s piano suite Saudades do Brasil. Wolpert took a pre-existing piece, often well known and familiar to participants, and transposed the melody up a perfect fifth (1990), or up or down a whole step (2000). Like Wolpert (2000), we also took well-known pieces and transposed the melody up and down by a whole step. These contrived examples create a bitonal ‘crunch’ that is less jarring than the thought experiment of playing two different pieces in two distinct keys in opposite corners of a room, but a bit more pronounced than in the Petroushka passage. The passages taken from Saudades were composed intentionally as bitonal, and present a bitonal effect that is more subtle and nuanced than the contrived examples. Not only does Milhaud vary the interval separating the two keys, but dissonances are also better controlled, avoiding the arbitrary dissonances of the contrived examples. Moreover, a phrase often becomes monotonal at its conclusion.
Figure 2. Stravinsky, Petroushka, Second Tableau, bars 9–15: score (the clarinets are notated in C).
Figure 3. Stravinsky, Petroushka, Second Tableau, bars 9–15: partitioning of the octatonic collection by instruments.
Musicians vs. non-musicians

Remarkably, bitonal sensory roughness or ‘crunch’ is perceived easily and immediately by musicians but often goes unnoticed by non-musicians (Wolpert, 1990, 2000). In her study on melody recognition with different instrumental and harmonic accompaniments, Wolpert (1990) made an unexpected observation about non-musicians. Participants were presented with three versions of the same tune: the model (melody with accompaniment in same key), the same tune played by a different instrument (melody and accompaniment in same key), and the tune played by the same instrument as the model, but with the accompaniment in a different key from the melody. In choosing the melody that is more ‘similar’ to the model, musicians always selected the choice based on key equivalency, whereas 95 percent of non-musicians chose in terms of instrumentation, and so claimed the bitonal version was the most similar to the model. To non-musicians, timbre was more important than harmony, even harmony that made the musicians cringe. But even more intriguingly, while all of the musicians noticed the dissonant effect of a melody accompanied in a different key, over half of the non-musicians detected nothing. All musicians, within seconds of hearing the effect of bitonality, smiled or grimaced, but none of the non-musicians did so. This result was true for both familiar tunes, such as the nursery rhymes ‘Mary Had a Little Lamb’ and ‘Twinkle Twinkle Little Star’, and original melodies composed for the study.

Wolpert (2000) focused more directly on the perception of bitonality: Myrow and Gordon’s ‘You Make Me Feel So Young’ was sung by a professional singer with an orchestral accompaniment, and then manipulated so that the accompaniment was transposed either up or down two semitones. Participants listened to the three versions of the song (original, accompaniment transposed up, and accompaniment transposed down), and answered one open question: ‘What differences, if any, do you hear among the excerpts?’ Again, Wolpert found considerable differences in perception between musicians and non-musicians. Musicians unanimously identified the melody and accompaniment’s keys as the primary difference. In fact, musicians often did not hide the unpleasantness of their experience with bitonality; several asked to be excused from listening to the bitonal pieces in their entirety. On the other hand, only 40 percent of the non-musicians mentioned the difference in key.

Wolpert’s (1990, 2000) results contradict not only common assumptions regarding listeners familiar with western tonal music, but also behavioural and psychophysiological work on the perception of other aspects of pitch structure, namely harmony and mode (e.g., Costa, Fine, & Ricci Bitti, 2004; Gagnon & Peretz, 2003; see Peretz & Zatorre, 2005 for a review of psychophysiological work). For example, when non-musicians rate the happiness and sadness of a melody, they are influenced by mode (major or minor), either with relatively simple experimenter-created melodies (Gagnon & Peretz, 2003; Halpern, Martin, & Reed, 2008) or art music excerpts (Costa et al., 2004). While mode is certainly musically distinct from bitonality, it is surprising that mode would matter when bitonal ‘crunch’ (dissonance and/or incompatible independent auditory streams) does not. Significantly, these listeners were not asked to articulate what was different between the clips, as Wolpert (1990, 2000) asked, but simply to assign a rating.
We speculate that perhaps non-musicians do not have the vocabulary to describe the bitonal effects perceived when hearing Wolpert’s stimuli, and a rating task would reveal more clearly the musical sensitivity of non-musicians. For example, Regnault et al. (2001) presented sequences of chords to musicians and non-musicians with instructions to rate the final chord for consonance/dissonance, defining consonance as ‘pleasant, everything seems OK’ and dissonance as ‘unpleasant, something seems
wrong’ for the non-musicians. Musicians were 97 percent accurate overall, and with these instructions non-musicians had 81 percent accuracy overall, with 84 percent accuracy for dissonant chords. In addition, event-related brain potentials (ERPs) were recorded and revealed that musicians and non-musicians both show a larger late positive component for dissonant over consonant chords (late epoch: 300–800 ms following the chord). Musical expertise does interact with ERPs in an earlier epoch (100–200 ms), with musicians having larger positive peaks to dissonant chords while non-musicians have larger positive peaks to consonant chords. The non-musicians are processing consonant and dissonant chords differently, as revealed by the ERPs, and can identify the chords fairly accurately (Regnault et al., 2001). However, different ERP patterns for stimuli do not always correspond with listeners being able to explicitly identify the stimulus differences (Koelsch & Mulder, 2002). While Regnault et al. (2001) found that non-musicians can identify dissonant chords through pleasantness ratings, non-musicians struggle when asked to explicitly identify the mode of an excerpt (Halpern et al., 2008; Leaver & Halpern, 2004). Halpern and her colleagues found that non-musicians can label major and minor songs as ‘happy’ and ‘sad’ respectively, but asking them to label the same songs using ‘major’ and ‘minor’ results in chance performance. Even musicians have some difficulty with the ‘major/minor’ labels, though performance is above chance (Halpern et al., 2008; Leaver & Halpern, 2004). Leaver and Halpern (2004) explored different types of training for non-musicians, finding that a short lesson that included music theory and explicit links to the affective labels improved non-musicians’ performance, but not to the same levels as musicians. In addition, an ERP study revealed that musicians have a large late positive component when processing the note critical for mode classification that is missing in non-musicians (Halpern et al., 2008), highlighting the fact that non-musicians are processing music differently. Wolpert (1990, 2000) was interested in what non-musicians would spontaneously identify regarding music, emphasizing that her question was ‘what do people hear’ rather than ‘what can people hear’. Given the influence of other musical characteristics on ratings on emotion (e.g., Costa et al., 2004; Gagnon & Peretz, 2003) and the evidence that non-musicians are at least implicitly aware of harmony, as revealed by ERPs (e.g., Koelsch & Mulder, 2002; Regnault et al., 2001), we wondered how non-musicians would spontaneously use scales of likeableness, pleasantness and correctness to rate monotonal and bitonal music. As mentioned above, it is possible that non-musicians do hear bitonality as readily as musicians, but simply lack the vocabulary to describe it. In addition, we examined whether participants, especially non-musicians, could be taught to identify bitonality in concert-hall music where the composer had chosen to write in bitonality. As will be discussed more fully below, the selections from Milhaud’s Saudades offer a subtle bitonal ‘crunch’, and, in our estimation, present a more robust test than the one offered by contrived musical examples. For the purposes of this study, we assume that accompaniment and melody project independent auditory streams. This assumption makes common sense; it also seems implicit in Wolpert’s studies (1990, 2000). 
Importantly, it is also how the composer Milhaud chooses to separate the keys in Saudades. Melody and accompaniment are differentiated by register, rhythm and pacing, timbre and dynamic level, yet are sufficiently coordinated in terms of meter and especially pitch structure to be perceived as integral parts of an organic whole. All the bitonal excerpts used in this study, whether contrived from a pre-existing piece or originally composed by Milhaud, present the accompaniment in one key and the melody in another. Finally, we wish to draw a distinction between the terms ‘tonality’ and ‘key’ as used in this study. We understand ‘tonality’ to be a musical system where any and all pitch configurations are perceived as being organized in terms of one and only one referential pitch class: the tonic.
‘Key’ identifies the one major or minor triad that is projected by linear and harmonic prolongations. This being said, we also recognize that the term ‘tonality’ has a wide and often contradictory range of meanings, and is often used synonymously or interchangeably with ‘key’ (see Hyer, 2001). This is why ‘bitonality’, in spite of its name, means the simultaneous presence of two distinct keys or scales, not musical systems. At times, we will use the invented and tautological term ‘monotonal’ rather than ‘tonal’ solely to distinguish clearly from bitonal or polytonal. Thus, when participants are asked to ‘identify the tonality’ of an excerpt, they are in fact asked to determine whether the passage is (mono)tonal or bitonal.

The current experiment has three components. First, we would like to replicate Wolpert’s (2000) finding that non-musicians do not spontaneously identify change in tonality within a free response task. We will use musical excerpts from both the classical and mid- to late-20th-century popular songbook, and create manipulated (bitonal) versions of each clip by shifting the melody two semitones. Koelsch, Fritz, Cramon, Müller, & Friederici (2006) labeled a similar manipulation as ‘(permanently) dissonant’. Second, we would like to see how non-musicians, along with musicians, use rating scales for likeability, correctness and pleasantness for a similar set of original and manipulated (bitonal) clips. We are not going to instruct them on how to map these adjective scales onto the tonality of the clip, though previous work suggests that the (permanently) dissonant clips will be mapped onto the more negative ends of these scales (e.g., Gosselin et al., 2006; Koelsch et al., 2006). Finally, we would like to explore how readily non-musicians can learn to label bitonality. When asked to identify when the ‘pianist gets lost’, dissonance was correctly identified (Gosselin et al., 2006), suggesting non-musicians are sensitive to dissonance. However, when explicitly asking non-musicians to identify major and minor mode, Halpern et al. (2008) found that non-musicians struggled, even though they were able to accurately label major as happy and minor as sad. Ours is perhaps an even more challenging task because we are using Milhaud’s Saudades, which were originally bitonal compositions and are subtler than the (permanently) dissonant clips used in previous literature (Koelsch et al., 2006; Wolpert, 2000; our blocks 1 and 2).

Method

Participants

Forty-two participants (aged 18–22 years, gender was not recorded) were recruited from introductory psychology classes and student music ensembles. Musical experience was assessed by a brief survey at the end of the experiment, and revealed three distinct groups. Training in music theory was not considered, adopting Krumhansl’s (1983) idea of focusing on participants’ musical experience. If the participant’s only musical experience came from ensemble participation, such as school band or chorus, we divided the number of years by two for the purposes of assigning groups, because they had not received the attention of private instruction. Musicians (n = 14) had at least five years of private music lessons on an instrument (mean years of training = 9.2 years). Intermediate musicians (n = 14) had between one and five years of private lessons (mean years of training = 2.4 years), and non-musicians (n = 14) were those with less than one year of private lessons.
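As a compact restatement of the grouping rule just described, here is a minimal sketch (our illustration; the function name and the handling of mixed experience and of the exact five-year boundary are assumptions, not the authors' procedure):

```python
def musician_group(private_lesson_years, ensemble_only_years=0.0):
    """Assign an experience group following the rule described above.

    Years of private lessons are used directly; if a participant's only
    experience is ensemble participation (school band or chorus), those
    years are counted at half weight. Thresholds (assumed): at least 5
    effective years = musician, 1-5 = intermediate, under 1 = non-musician.
    """
    if private_lesson_years > 0:
        years = private_lesson_years
    else:
        years = ensemble_only_years / 2
    if years >= 5:
        return "musician"
    if years >= 1:
        return "intermediate"
    return "non-musician"


print(musician_group(9))      # musician
print(musician_group(0, 4))   # intermediate (4 ensemble-only years count as 2)
print(musician_group(0, 1))   # non-musician (0.5 effective years)
```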
Stimuli

Music clips were created in MIDI (Musical Instrument Digital Interface) files using Steinberg’s sequencer program Cubase SX (version 3.0). Sample sounds of orchestral instruments were
taken from Symphonic Orchestra Silver Edition (Eastwest), drum sounds from BFD ( fxpansion), and guitar sounds from Virtual Guitarist (Steinberg). Blocks 1 and 2 presented two versions of classical and popular music excerpts: the original tonal version, and an altered bitonal version where the accompaniment was moved up or down two semitones, matching the manipulation used by Wolpert (2000). Classical excerpts included the beginning phrases of Schubert’s ‘Ave Maria’ (Ellens Gesang III, Op. 52, No. 6, D. 839), the first movement of Mozart’s Bassoon Concerto in B flat Major, K. 186e, and the arias ‘Ev’ry Valley’ and ‘Rejoice, Rejoice’ from Handel’s Messiah. Popular music excerpts were taken from jazz and pop songs from the mid to late 20th century: Gordon’s ‘Unforgettable’, Lennon and McCartney’s ‘A Day in the Life’, McCartney’s ‘Yesterday’, and Menken and Ashman’s ‘Beauty and the Beast’. For block 3, we constructed stimuli in a manner completely opposite to those in Wolpert’s study (1990, 2000). Rather than start with a monotonal piece and then distort it to render it bitonal, we took pieces that had been written bitonally by a renowned composer, and then made small adjustments to make them monotonal. Thus, for block 3 we selected five phrases from Milhaud’s Saudades: ‘Corcovado’, ‘Copacabana’, ‘Ipanema’, ‘Paineras’ and ‘Botafogo’. All selections were taken from the initial phrase of the piece with the exception of ‘Ipanema’, which was taken from an internal phrase. Again, save for ‘Ipanema’, the stimuli are similar in their construction. ‘Botafogo’ is typical and will serve to illustrate the stimuli in block 3 (Figure 4).1 The phrases consist of an accompaniment built on a characteristic Brazilian syncopated rhythm that oscillates between tonic and dominant at the rate of one chord per measure. The left-hand accompaniment does not use a complete scale; rather, it projects its key using a pentachord formed from the first five pitches of the scale. The right-hand melody is built on a different scale; unlike the accompaniment, it utilizes all the pitches of the scale. Figure 5 shows how the pitches of each key are distributed among melody and accompaniment. Tonal versions of each excerpt were created by slightly altering the right-hand melody to match the key of the left-hand accompaniment, thus eliminating some of the bitonal ‘crunch’ (Figure 6). While the rewriting eliminated the bitonality, some dissonances remained. The alterations were minute in order to preserve the composer’s style. In fact, changes were so minimal that listeners would not perceive an alteration unless they were intimately acquainted with Saudades. (The accompaniment was not changed.) Finally, we note that Milhaud also uses a variety of intervals to separate the key of the melody from that of the accompaniment. Table 1 illustrates the key differences in the pieces chosen for this study. (‘Ipanema’ is missing in Table 1; as will be discussed below, it is not entirely bitonal.) There are significant differences between Wolpert’s and our contrived examples and Milhaud’s pieces. When a monotonal piece is altered by transposing the melody only to another key, dissonances are created at salient moments where consonances typically existed, such as at phrase beginnings, downbeats and strong beats, and especially at the cadence that closes the phrase. Milhaud also uses bitonality to create dissonances, but exercises greater control in their deployment, as illustrated in ‘Botafogo’ (Figure 4). The accompaniment is in F minor. 
After a two-bar vamp, the melody enters in F sharp minor. Striking dissonances are heard at salient moments: an augmented fifth on the downbeat of bar 3, a major seventh on the downbeat of bar 4 and the clashes created by the right-hand chords in bars 7–11. Dissonances ease toward the end of the phrase (this is mirrored by the diminuendo to mp), and with the C natural in bars 13–14 the melody slips out of F sharp minor into F minor, the key of the accompaniment. Even when a phrase does not begin with a dissonance, as is the case in ‘Copacabana’, striking dissonances are placed on most beats throughout the phrase (Figure 7). This particular
Figure 7. Milhaud, Saudades do Brasil, Op. 67, No. 4, ‘Copacabana’.
Block 2 asked participants to listen to single versions of a musical clip and use a computer mouse to answer three questions using Likert scales: How much do you like this song? (1, not at all likeable, to 5, like a lot.) How correct does this song sound? (1, wrong, to 5, correct.) How pleasant does this song feel? (1, unpleasant, to 5, pleasant.) All three scales appeared on-screen simultaneously, and participants could answer in any order. Over the course of block 2, participants heard two new classical and two new popular excerpts, in both original tonal and altered bitonal versions, for a total of eight clips to rate. The musical clips were presented in a unique random order for each participant. The musical excerpts used in blocks 1 and 2 were counterbalanced between participants, so that excerpts that appeared in block 1 for half the participants appeared in block 2 for the other half.

Block 3 comprised three phases (see Figure 8): defining monotonality and bitonality, training participants with feedback to identify the tonality of a musical clip, and then testing participants’ ability to identify the tonality with new musical clips. ‘Copacabana’ was always used in the training phase (Figure 7), and Saudades set 1 included original and manipulated (monotonal) versions of ‘Ipanema’ and ‘Paineras’, while Saudades set 2 included original and manipulated (monotonal) versions of ‘Botafogo’ and ‘Corcovado’. While the original, bitonal version of ‘Copacabana’ was presented, the following definition appeared on screen: ‘BITONAL: Notice sometimes there is a “crunch” in the sound. This should sound somewhat unpleasant and feel like it shouldn’t be that way.’ Afterwards, the altered monotonal version was presented, with the following definition on screen: ‘MONOTONAL: Now the song sounds smooth and more pleasant.’ Following these definitions and examples, two additional excerpts (either Saudades set 1 or Saudades set 2) were used for training. Both bitonal and monotonal versions of each were presented in random order for each participant (four total trials). Participants were asked to identify whether the passage was monotonal or bitonal, and then were given feedback regarding the accuracy of their response and the actual tonality of the clip. These training clips were repeated until participants correctly identified the tonality of four excerpts in a row. At this point, participants were asked to identify the tonality of two new excerpts from Saudades (Saudades sets were counterbalanced between participants). Both tonal and bitonal versions of each excerpt were presented in random order for each participant, for a total of four trials.
Figure 8 (flowchart, summarized):
Block 1: Listen to song set A (original and bitonal versions). Task: comment on any differences.
Block 2: Listen to song set B (original and bitonal versions). Task: rate likeability, correctness and pleasantness (1 = not at all likeable/wrong/unpleasant, 5 = likeable/correct/pleasant).
Block 3, Phase 1 (Definitions): Listen to the original ‘Copacabana’ (bitonal version); view on-screen definition ‘Bitonal: Notice sometimes there is a “crunch” in the sound. This should sound somewhat unpleasant and feel like it shouldn’t be that way.’ Then listen to the manipulated ‘Copacabana’ (monotonal version); view on-screen definition ‘Monotonal: Now the song sounds smooth and more pleasant.’
Block 3, Phase 2 (Training): Listen to Saudades set 1 (original and monotonal versions). Task: identify tonality, with feedback; repeated until 4 correct in a row.
Block 3, Phase 3 (Testing): Listen to Saudades set 2 (original and monotonal versions). Task: identify tonality, no feedback.
Figure 8. Experimental procedure. Song sets included two classical and two popular excerpts (original and manipulated bitonal versions) and were counterbalanced between participants. Saudades sets included two excerpts (original and manipulated monotonal versions) and were also counterbalanced between participants.
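The training phase of block 3 is, in effect, a criterion loop. The sketch below illustrates that logic under our own assumptions; the helpers play_clip and get_response are hypothetical placeholders, this is not the authors' experimental software, and the treatment of the four-in-a-row criterion across passes through the clip set is our reading of the procedure.

```python
import random

def run_training(training_clips, play_clip, get_response, criterion=4):
    """Present training clips in random order with feedback until the
    participant identifies the tonality of `criterion` clips in a row.

    `training_clips` is a list of (name, tonality) pairs, with tonality
    either 'monotonal' or 'bitonal'; `play_clip` and `get_response` stand
    in for the presentation and response-collection routines.
    Returns the number of errors made during training.
    """
    consecutive_correct = 0
    errors = 0
    while consecutive_correct < criterion:
        clips = list(training_clips)
        random.shuffle(clips)                  # unique random order per pass
        for name, tonality in clips:
            play_clip(name)
            answer = get_response()            # 'monotonal' or 'bitonal'
            if answer == tonality:
                consecutive_correct += 1
            else:
                consecutive_correct = 0
                errors += 1
            # Feedback: accuracy of the response and the clip's actual tonality.
            print(f"{name}: {'correct' if answer == tonality else 'wrong'} "
                  f"(it was {tonality})")
            if consecutive_correct >= criterion:
                break
    return errors
```

With the four training trials described above (both versions of two excerpts), an error-free participant would exit after a single pass, matching the minimum of four correct identifications in a row.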
After completing all three blocks, participants were briefly interviewed about their musical experience, answering questions about private music lessons, musical genres performed and listened to, and how many hours a day they listened to music.

Results

Free response

Block 1 presented monotonal and bitonal versions of the same musical excerpt, and asked participants to list any differences they heard. Mention of dissonance, differences in key between
melody and accompaniment, or something being out of tune earned 2 points. Discussion of a preference, correctness, pleasantness or overall key or pitch change earned 1 point. Two raters scored each of the free responses (interrater reliability = 0.91). The 15 differences between the raters (out of 168 responses) were resolved through discussion, and subsequent interrater reliability was 1.0. The points were totalled for the entire block so that the highest possible score was 8 (2 points maximum per clip for four clips). A one-way ANOVA between the three groups found a significant effect of group (F(2, 39) = 12.73, p < .01), with Scheffé post hoc comparisons revealing that musicians scored higher (M = 74%, SD = 27) than both intermediate musicians (M = 37%, SD = 35, p < .01) and non-musicians (M = 20%, SD = 25, p < .01), but no difference was observed between intermediate and non-musicians (p > .30). Free response scores and years of music lessons were significantly correlated (r = .64, p < .01).

Musicians’ superior performance on the free response task replicates Wolpert (2000), and highlights that musicians certainly have a different vocabulary for describing pitch structure. Of particular interest, 11 participants (four musicians, three intermediate musicians and one non-musician) used the phrase ‘out of tune’ to describe the bitonal stimuli, which, in our estimation, is a close approximation to a correct evaluation of the aural effect of the stimuli. Non-musicians were more likely to offer nonexistent differences, such as tempo change or added instruments, and to use vague descriptions such as ‘the sound was brighter’ or ‘more intense’. In fact, six of the non-musicians actually scored 0 points, mentioning no differences between the musical excerpts that could possibly be interpreted as relating to key, or even pleasantness.

Rating data

For the participant ratings in block 2 (see Table 2), we computed repeated-measures MANOVAs with musical experience as a between-participant factor, and a 3-adjective scale (likeability/correctness/pleasantness) by 2 tonality (monotonal/bitonal) by 2 genre (classical/popular) within design. All reported effects and interactions have p < .05. No main effect was found for musical experience (F < 1.0). Main effects were found for tonality, F(3, 37) = 42.75, and genre, F(3, 37) = 3.01, with monotonal excerpts rated higher than bitonal excerpts on all scales, and pop excerpts rated higher than classical excerpts on all scales. Significant interactions occurred between musical experience, tonality and two of the adjective scales: correctness, F(2, 39) = 5.54, and pleasantness, F(2, 39) = 3.27. Tukey’s HSD analysis revealed that musicians had a larger drop in correctness and pleasantness ratings when responding to bitonal clips compared to monotonal clips (see Figure 9).

One of the popular excerpts was accidentally repeated in blocks 1 and 2 for half the participants, and consequently separate ANOVAs for each adjective scale with song set as a between-participant factor along with 2 tonality by 2 genre within design were computed. A main effect of song set was observed only for correctness ratings, F(1, 36) = 7.41, and a t-test revealed that the participants who heard the bitonal popular clip twice (in blocks 1 and 2) rated it as more correct than those who heard it only once (t(40) = 2.31). As expected from the MANOVA reported above, tonality and genre were also main effects.
The interactions between tonality and musical experience reveal that musicians were more sensitive to tonality than the intermediate and non-musician groups when rating correctness and pleasantness (see Figure 9), but there was not the main effect of musical experience we expected based on the results of block 1 and Wolpert’s earlier work (1990, 2000).
Table 2. Mean ratings for non-musicians, intermediate musicians and musicians on the scales in block 2 (1 = not at all likeable/wrong/unpleasant to 5 = like a lot/correct/pleasant)

                                        Monotonal (original)         Bitonal (manipulated)
Group                    Task           Classical     Popular        Classical     Popular
Non-musicians            Likeability    3.71 (0.58)   3.96 (0.75)    2.43 (0.65)   2.93 (0.70)
                         Correctness    4.39 (0.71)   4.07 (0.90)    2.79 (0.91)   2.89 (0.98)
                         Pleasantness   4.25 (0.73)   4.21 (0.78)    2.82 (1.10)   3.11 (0.88)
Intermediate musicians   Likeability    3.61 (0.81)   3.54 (0.75)    2.21 (0.89)   2.57 (1.14)
                         Correctness    3.96 (0.89)   3.86 (0.93)    2.46 (1.08)   2.57 (1.21)
                         Pleasantness   4.00 (0.88)   4.07 (0.70)    2.54 (1.05)   2.93 (1.25)
Musicians                Likeability    4.00 (0.65)   4.18 (0.58)    2.04 (0.93)   2.64 (1.20)
                         Correctness    4.50 (0.68)   4.68 (0.37)    1.86 (1.03)   2.36 (0.99)
                         Pleasantness   4.50 (0.55)   4.64 (0.50)    2.25 (1.09)   2.71 (0.97)

Note: Standard deviations are shown in parentheses.
Figure 9. For each level of musical experience: non-musicians (less than one year of private music lessons, open square), intermediate musicians (between one and five years of lessons, open circle) and musicians (more than five years of lessons, closed triangle): (a) mean rating for correctness (1 = wrong, 5 = correct); (b) mean rating for pleasantness (1 = unpleasant, 5 = pleasant).
The results of block 2 make it clear that non-musicians do respond spontaneously to tonality, generally rating bitonal clips as unlikeable, incorrect and unpleasant, just as musicians do.

Identifying bitonality

Percentages correct for identifying the bitonality of the original Milhaud excerpts (and its absence in the adjusted excerpts) were calculated for each group: musicians (M = 86%,
SD = 16), intermediate musicians (M = 73%, SD = 18), and non-musicians (M = 80%, SD = 24). A one-way ANOVA found no main effect of group (F(2, 39) = 1.4, p > .26), with Scheffé post hoc comparisons revealing no differences between the groups (p’s > .2). All groups were equally accurate at distinguishing bitonal excerpts from monotonal excerpts. In fact, after training, there is no effect of musical experience, with non-musicians showing equivalent accuracy to musicians and intermediate musicians. There was also no correlation between accuracy for tonality and years of music lessons (r = .14, p = .37). Within the training session, each group made the same number of errors (musicians, 2.0 errors; intermediate musicians, 1.9 errors; non-musicians, 1.9 errors), with no correlation between number of errors and years of music lessons (r = –.06, p = .69). It is particularly noteworthy that the non-musicians, who were unable to identify the tonality differences in the free response task of block 1, performed equivalently to the most advanced musicians with only a small amount of training.

Accuracy for identifying the tonality of each Milhaud excerpt is presented in Figure 10, with 95 percent confidence intervals. All the participants correctly identified the tonality of ‘Paineras’, as revealed by the perfect score (and no confidence intervals), but performance was at chance for ‘Ipanema’ and the monotonal version of ‘Botafogo’.

Musical preferences

All participants, regardless of level of musical training, reported their listening preferences to be popular, rock or some variation (e.g., classic rock, oldies), with no more specifics. Both intermediate musicians and musicians reported performing almost exclusively classical music (nine of 14 intermediate musicians, 11 of 14 musicians), with only one musician indicating jazz and two others indicating ‘anything’ as the genre they regularly performed.
Figure 10. Ratings for tonality identification of Milhaud, Saudades do Brasil, Op. 67.
Discussion

Non-musicians in our study are sensitive to tonality, and will spontaneously rate bitonal music as ‘unlikeable’, ‘incorrect’ or ‘unpleasant’ at rates equivalent to experienced musicians, replicating Koelsch et al. (2006) and Gosselin et al. (2006). Non-musicians did not spontaneously include appropriate descriptions of bitonal music such as ‘bitonal’, ‘crunchy’, ‘dissonant’ or even ‘out of tune’ in a free response task, replicating Wolpert (2000). However, once their attention was directed to those characteristics, non-musicians were able to identify bitonality as well as musicians (block 3). These results complement research showing non-musicians’ sensitivity to other aspects of pitch structure, such as mode (Costa et al., 2004; Gagnon & Peretz, 2003) or how well a chord fits within its harmonic context (Regnault et al., 2001). Our data suggest that bitonality may be even more readily apparent than mode, given that non-musicians performed equivalently to musicians, in contrast with non-musicians’ difficulty in identifying mode (Halpern et al., 2008). Since bitonality is such an unusual compositional style, restricted to a very narrow time period, used by a relatively small number of composers and associated with short-lived aesthetic movements, it is not surprising that its characteristic sound (its ‘crunch’) is so unfamiliar to most non-musician and intermediate musician participants. Conversely, mode and its association with certain emotional states is well-known, ubiquitous, and used in all sorts of musical styles and over a long period – indeed, it is still used today (for a survey of the literature on mode change see Gabrielsson & Lindström, 2001).

Block 1 (free response) successfully replicated Wolpert’s (2000) finding that non-musicians do not spontaneously mention bitonality. There was a significant difference in the way musicians and non-musicians/intermediate musicians responded to bitonal and monotonal versions of the same music. While the musicians did not score perfectly in the free response task, they did identify the bitonality of the clips. Whereas Wolpert (2000) used professional musicians, we used musically involved undergraduate students, which probably accounts for the lower accuracy in our musician group. Non-musicians were more likely to offer non-existent differences, such as changes in tempo, instrumentation, or timbre. The scoring criteria were designed to be relatively generous, ignoring the made-up changes and giving full credit if more ambiguous terms than bitonality, such as ‘out of tune’, were mentioned. Nonetheless, the performance of musicians and non-musicians is clearly dissimilar.

Intermediate musicians (one to five years of private music lessons) did not perform statistically better than non-musicians in the free response task, which suggests that something may happen after the fifth year of lessons which contributes to one’s ability to recognize and correctly identify bitonality. We realize that the students in our study were not part of a single, organized music programme but came from widely diverse musical backgrounds, and included singers and instrumentalists, so we can only speculate on the source of the difference. It seems likely that intermediate musicians who received structured and traditional music instruction through individual lessons, playing from notated music rather than by ear, and emphasizing classical art music, would possess a greater sensitivity towards tonality.
As noted above, a few musicians and intermediate musicians used the phrase ‘out of tune’ to describe the bitonal stimuli. Although technically incorrect, other than bitonal, ‘out of tune’ is the most appropriate answer that a participant could possibly offer for a bitonal passage. Again, if participants have had music lessons coupled with extensive ensemble experience, such as orchestra and choir, they might be conscious of intonation and thus more sensitive to bitonal clashes.

The rating task (block 2) revealed that non-musicians and intermediate musicians spontaneously rate bitonal music as ‘unlikeable’, ‘incorrect’ or ‘unpleasant’ to the same degree as
musicians (Figure 9). It seems likely that the lack of comment from non-musicians and intermediate musicians on the bitonal ‘crunch’ effect during the free response trials has more to do with a lack of vocabulary than a lack of perception. In addition, all groups rated the pop excerpts as more likeable and pleasant compared to the classical excerpts, which fits with the listening habits of our participants, as revealed in the music experience survey. In regard to blocks 1 and 2, we should note that monotonal music that is altered or distorted to become bitonal by transposing the melody only up or down one or two semitones creates a strong and unique sort of ‘crunch’. (This particular experimental strategy was also integral to Wolpert, 2000.) As mentioned above, bitonal ‘crunch’ results from two different types of clash. First, there is the tonal incompatibility between two distinct pitch configurations that create independent auditory streams. In the case of this study, these are melody and accompaniment. Second, there are the dissonant intervals that may be created between such distinct configurations. While transposing the melody of a monotonal piece may, in and of itself, not create any more dissonances than a piece that is conceived bitonally by the composer, it will, invariably, create its strongest dissonances at moments that, according to tonal syntax, are, or ought to be, stable. These moments typically include phrase beginnings and endings, where the accompaniment would typically play the tonic chord and the melody would begin on a note of the tonic chord, and also downbeats of measures, and even strong beats. Block 3 investigated how well listeners could identify bitonality when a piece was conceived by the composer as bitonal. The contrast between an original bitonal piece and a pre-existing monotonal piece that is altered by transposing only the melody may be small but is critical. For instance, each original bitonal piece will present a unique dissonance profile. Consider, for example, the opening phrase of ‘Copacabana’ (Figure 7). Although it is as bitonal as the excerpts used in this study, it differs from the other phrases in that dissonances are not heard on downbeats and most strong beats. Also, Milhaud favours a strategy of bringing bitonal phrases to a monotonal and consonant close, again apparent in ‘Copacabana’. This particular characteristic is entirely absent in a monotonal piece that is modified to bitonal by transposing its melody. Incidentally, Morgan (1991) rejects the notion that the bitonalism in Saudades can actually be heard because of Milhaud’s particular compositional strategy. Morgan goes on to claim that the left-hand accompaniment controls the harmony of the passage, and the righthand melody provides merely a ‘dissonant “coloring”’ (1991, p. 165). For these reasons, we assume that the Milhaud passages would be more difficult to perceive, especially by participants with no or only intermediate musical experience. Their success at identifying the tonality of the Milhaud passages is thus particularly compelling, and points to a more sophisticated ear than previously supported. Our within-participant design allows for a relatively direct comparison between the free responses of block 1 and successful identification of tonality in block 3. 
Specifically, the same non-musicians who were at such a loss for words to identify what was different between the original and manipulated (bitonal) clips were able to identify the tonality of a clearly more sophisticated manipulation of tonality by block 3. That the clips involved more subtle manipulations of tonality is apparent from the difficulty all groups had with ‘Ipanema’. The most obvious pieces of the experimental design to facilitate this are the definition and training phases of block 3, which defined bitonality and monotonality with one excerpt (original and manipulated monotonal), then provided feedback regarding actual tonality for two additional excerpts (again, original and manipulated monotonal). Participants were then tested on two completely new excerpts and level of musical experience no longer separated participants’ ability to identify tonality. Of course, because this is a within-participant design, it is also possible that mere
exposure to bitonal versions of the classical and popular clips from blocks 1 and 2 also contributed to this improved identification. The current study cannot distinguish between an exposure effect and the training of block 3, but given that non-musicians improved only with training when asked to identify major and minor modes (Halpern et al., 2008; Leaver & Halpern, 2004), training seems likely to be a necessary component for identifying tonality in the current experiment. There are two aspects of the training that might be interesting to explore in future research: the effect of explicitly defining monotonality and bitonality using examples, and the practice with feedback. It is possible that the non-musicians’ improved identification of bitonality in block 3 was supported mainly by the explicit definition, and that the subsequent practice with feedback was unnecessary. The practice clips were repeated until participants had successfully identified the tonality of four clips in a row, and there was no difference in the number of errors for any group. In other words, our non-musicians and musicians did not require different amounts of training on the tonality identification, suggesting that the definition itself was more important than the practice.

From original bitonal excerpts from Saudades we created monotonal versions by changing certain notes of the melody to conform to the harmony of the accompaniment (Figure 6). In all cases, the changes were slight: shifting a few melody notes one or two semitones up or down sufficed to make it conform to the key of the accompaniment while preserving the melody’s original melodic contour. Consequently, our recomposed passages sounded very much like the originals but lacked only the bitonal ‘crunch’, characteristic of this and many other pieces by Milhaud. It is important to note that not every dissonance was eliminated in this way, but only those dissonances that resulted directly from the bitonal clash. So in the monotonal version of ‘Botafogo’, for example, a minor seventh between accompaniment and melody is still heard on the downbeat of bar 4 (Figure 6), but it has a milder effect than the harsher major seventh created by the bitonal superposition in the original (Figure 4). Because the bitonal stimuli in block 3 were composed originally as such, and because the monotonal stimuli in block 3 differ from their bitonal source in only minimal ways (dissonances preserved), we judge our block 3 to be a more robust test than blocks 1 or 2.

As mentioned above, ‘Ipanema’ is a bit different from all the other Milhaud passages (Figure 11). Results showed that participants detected bitonality at a rate no better than chance (see Figure 10). A few factors may have contributed to this divergent result. Although it is the beginning of a major section, the ‘Ipanema’ excerpt is an internal phrase. It lacks the brief vamp that introduces many of the opening phrases in Saudades. One role the vamp plays is to establish clearly the key of the accompaniment. Such clarity contributes to a strong opposition with a melody when it enters in a different key. But even setting aside the issue of the introductory vamp, the bitonalism of ‘Ipanema’ is less clear than all other stimuli (Figure 11). The right-hand melody suggests strongly F major, with the B natural (bar 54) and D sharp (bar 56) functioning as chromatic passing tones.
As mentioned above, ‘Ipanema’ is a bit different from all the other Milhaud passages (Figure 11). Results showed that participants detected bitonality at a rate no better than chance (see Figure 10). A few factors may have contributed to this divergent result. Although it marks the beginning of a major section, the ‘Ipanema’ excerpt is an internal phrase. It lacks the brief vamp that introduces many of the opening phrases in Saudades. One role the vamp plays is to establish clearly the key of the accompaniment; such clarity creates a strong opposition with the melody when it enters in a different key. But even setting aside the issue of the introductory vamp, the bitonality of ‘Ipanema’ is less clear than in all the other stimuli (Figure 11). The right-hand melody strongly suggests F major, with the B natural (bar 54) and D sharp (bar 56) functioning as chromatic passing tones. But the key of the left-hand accompaniment is less clear: although suggestive of C major if considered in isolation, it conforms quickly and perfectly to the key of the melody by incorporating the initial B flat to form a dominant seventh in the key of F major. While the move to the G flat major chord in bar 57 may seem unexpected and abrupt, and perhaps could even be interpreted as resulting from the bitonal ambiance of the piece, it is in fact a fairly common monotonal progression. Although the E natural of the right-hand melody suggests a G flat–F bitonal superposition, a listener is more likely to hear a G flat major–minor seventh chord, for the E natural is enharmonically equivalent to F flat. The C7–G flat 7 chord succession can be heard monotonally; jazz musicians would recognize it as a move from the regular dominant seventh to the tritone-substitute dominant seventh in F major (for a brief overview of jazz harmony see Strunk, 2002; jazz chord symbols and a traditional roman-numeral analysis are provided in Figure 11).
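For readers unfamiliar with this substitution, the brief sketch below shows in Python why the two chords are interchangeable in this context: C7 and G flat 7 contain the same tritone (E, spelled F flat in the G flat chord, and B flat), and their roots themselves lie a tritone apart. The pitch-class spellings are our own illustrative assumption and are not part of the experimental materials.

# Pitch-class illustration of the tritone substitution (C = 0, C sharp = 1, ..., B = 11).
# Both dominant sevenths contain the same tritone, E (enharmonically F flat) and B flat,
# which is why G flat 7 can stand in for C7 when resolving to F major.
C7_PITCH_CLASSES  = {0, 4, 7, 10}   # C, E, G, B flat
GB7_PITCH_CLASSES = {6, 10, 1, 4}   # G flat, B flat, D flat, F flat (= E)

shared = sorted(C7_PITCH_CLASSES & GB7_PITCH_CLASSES)
print(shared)          # [4, 10]: the shared tritone, E/F flat and B flat
print((6 - 0) % 12)    # 6: the roots C and G flat are themselves a tritone apart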
[Figure 11. Milhaud, Saudades do Brasil, Op. 67, No. 5, ‘Ipanema’; chord symbols and roman numeral analysis provided.]
Although a tonic resolution, albeit altered to a secondary dominant, is suggested in bar 62, the key of F major never comes to fruition.

Our recomposition of ‘Botafogo’ also yielded curious results (Figure 10). While participants had no trouble identifying the original version as bitonal, they were unable to identify the recomposed version as monotonal. We are at a loss to explain why this particular recomposition was not as successful as the others (see Figure 6 for a score of the recomposed version of ‘Botafogo’). Perhaps the dissonances that remained in the recomposition, in particular the seventh on the downbeat of bar 4, confused some participants.

Conclusion

With very little training, all our listeners were quite successful at differentiating between bitonal and monotonal excerpts, with the non-musicians performing at a rate equivalent to the musicians. The ease with which our less experienced groups learned to identify the unique sensory roughness of bitonality suggests that this is something they quite readily perceive and are able to articulate once they have acquired the appropriate vocabulary. Moreover, the results for the one tonally ambiguous passage, ‘Ipanema’ (Figures 10 and 11), indicate that listeners are able to distinguish between different kinds of dissonant passages: those that result from bitonality and those that do not. Our study also suggests that the dismissive attitude some music theorists have taken toward bitonality may be misguided. Musical analyses considering bitonality should be encouraged (perhaps along the lines of Harrison, 1997, or Stein, 2005), as should further exploration of auditory streaming in musical works.
Finally, music analysis textbooks should perhaps be less timid in their presentation of bitonality. Another interesting question for future research would be to examine the aesthetics of bitonality, and how musical experience affects preference ratings for bitonal versus monotonal music.

Note
1. Scores for all Milhaud original and recomposed excerpts can be accessed at http://www.davidson.edu/academic/psychology/Munger/additional/examples.htm.

References
Alain, C., Arnott, S. R., & Picton, T. W. (2001). Bottom-up and top-down influences on auditory scene analysis: Evidence from event-related brain potentials. Journal of Experimental Psychology: Human Perception and Performance, 27(1), 72–89.
Babbitt, M. (1952). Review of Structural Hearing, by Felix Salzer. Journal of the American Musicological Society, 5(3), 260–265.
Bartók, B. (1931/1976). The influence of peasant music on modern music. In B. Suchoff (Ed.), Béla Bartók Essays (pp. 340–344). Lincoln, NE: University of Nebraska Press.
Bartók, B. (1943/1976). Harvard lectures. In B. Suchoff (Ed.), Béla Bartók Essays (pp. 354–392). Lincoln, NE: University of Nebraska Press.
Berger, A. (1978). Problems of pitch organization in Stravinsky. Perspectives of New Music, 2(1), 11–42.
Boretz, B. (1973). Meta-variations, Part IV: Analytical fallout (I). Perspectives of New Music, 11(1), 146–223.
Bregman, A. S. (1990). Auditory scene analysis. Cambridge, MA: MIT Press.
Bregman, A. S. (1993). Auditory scene analysis: Hearing in complex environments. In S. McAdams & E. Bigand (Eds.), Thinking in sound: The cognitive psychology of human audition (pp. 10–36). Oxford: Oxford University Press.
Casella, A. (1924). Tone-problems of to-day. The Musical Quarterly, 10(2), 159–171.
Cope, D. (1977). New music composition. New York: Schirmer Books.
Costa, M., Fine, P., & Ricci Bitti, P. E. (2004). Interval distributions, mode, and tonal strength of melodies as predictors of perceived emotion. Music Perception, 22(1), 1–14.
Dallin, L. (1974). Techniques of twentieth century composition: A guide to the materials of modern music (3rd ed.). Dubuque, IA: William C. Brown.
Dart, T. (2001). Eye music. In S. Sadie & J. Tyrrell (Eds.), The new Grove dictionary of music and musicians (2nd ed.). London: Macmillan.
DeVoto, M. (1993). Paris, 1918–45. In R. P. Morgan (Ed.), Modern times: From World War I to the present (pp. 33–59). Englewood Cliffs, NJ: Prentice-Hall.
Fishman, Y. I., Volkov, I. O., Noh, M. D., Garell, P. C., Bakken, H., Arezzo, J. C., et al. (2001). Consonance and dissonance of musical chords: Neural correlates in auditory cortex of monkeys and humans. Journal of Neurophysiology, 86(6), 2761–2788.
Forte, A. (1955). Contemporary tone structures. New York: Columbia University Press.
Gabrielsson, A., & Lindström, E. (2001). The influence of musical structure on emotional expression. In P. N. Juslin & J. A. Sloboda (Eds.), Music and emotion: Theory and research (pp. 221–248). Oxford: Oxford University Press.
Gagnon, L., & Peretz, I. (2003). Mode and tempo relative contributions to ‘happy–sad’ judgments in equitone melodies. Cognition and Emotion, 17(1), 25–40.
Gosselin, N., Samson, S., Adolphs, R., Noulhiane, M., Roy, M., Hasboun, D., et al. (2006). Emotional responses to unpleasant music correlates with damage to the parahippocampal cortex. Brain, 129, 2585–2592.
Halpern, A. R., Martin, J. S., & Reed, T. D. (2008). An ERP study of major–minor classification in melodies. Music Perception, 25(3), 181–191.
Harrison, D. (1997). Bitonality, pentatonicism, and diatonicism in a work by Milhaud. In J. M. Baker, D. W. Beach, & J. Bernard (Eds.), Music theory in concept and practice (pp. 393–408). Rochester, NY: University of Rochester Press.
Hyer, B. (2001). Tonality. In S. Sadie & J. Tyrrell (Eds.), The new Grove dictionary of music and musicians (2nd ed.). London: Macmillan.
Koechlin, C. (1925). Évolution de l’harmonie: Période contemporaine, depuis Bizet et César Franck jusqu’à nos jours [The evolution of harmony: The contemporary period, from Bizet and César Franck to the present day]. In A. Lavignac & L. de la Laurencie (Eds.), Encyclopédie de la musique et dictionnaire du Conservatoire; 2e partie: technique, esthétique, pédagogie (pp. 591–760). Paris: Delagrave.
Koelsch, S., Fritz, T., Cramon, D. Y. V., Müller, K., & Friederici, A. D. (2006). Investigating emotion with music: An fMRI study. Human Brain Mapping, 27, 239–250.
Koelsch, S., & Mulder, J. (2002). Electric brain responses to inappropriate harmonies during listening to expressive music. Clinical Neurophysiology, 113, 862–869.
Krumhansl, C. L. (1983). Perceptual structures for tonal music. Music Perception, 1(1), 28–62.
Krumhansl, C. L. (1990). Cognitive foundations of musical pitch. Oxford: Oxford University Press.
Krumhansl, C. L., & Schmuckler, M. A. (1986). The Petroushka chord: A perceptual investigation. Music Perception, 4(2), 153–184.
Leaver, A. M., & Halpern, A. R. (2004). Effects of training and melodic features on mode perception. Music Perception, 22(1), 117–143.
Lerdahl, F. (2001). Tonal pitch space. Oxford: Oxford University Press.
Lerdahl, F., & Jackendoff, R. (1983). A generative theory of tonal music. Cambridge, MA: MIT Press.
Médicis, F. D. (2005). Darius Milhaud and the debate on polytonality in the French press of the 1920s. Music and Letters, 86(4), 573–591.
Messing, S. (1988). Neoclassicism in music: From the genesis of the concept through the Schoenberg/Stravinsky polemic. Ann Arbor, MI: UMI Research Press.
Milhaud, D. (1923/1982). Polytonalité et atonalité [Polytonality and atonality]. In J. Drake (Ed.), Notes sur la musique: Essais et chroniques (pp. 173–188). Paris: Flammarion.
Morgan, R. P. (1991). Twentieth-century music: A history of musical style in modern Europe and America. New York: W. W. Norton.
Peretz, I., & Zatorre, R. J. (2005). Brain organization for music processing. Annual Review of Psychology, 56, 89–114.
Plomp, R., & Levelt, W. J. M. (1965). Tonal consonance and critical bandwidth. Journal of the Acoustical Society of America, 37, 1110–1123.
Randel, D. M. (Ed.). (1986). The new Harvard dictionary of music. Cambridge, MA: The Belknap Press of Harvard University Press.
Rasch, R., & Plomp, R. (1999). The perception of musical tones. In D. Deutsch (Ed.), The psychology of music (pp. 89–112). San Diego, CA: Academic Press.
Regnault, P., Bigand, E., & Besson, M. (2001). Different brain mechanisms mediate sensitivity to sensory consonance and harmonic context: Evidence from auditory event-related brain potentials. Journal of Cognitive Neuroscience, 13, 241–255.
Schoenberg, A. (1923/1975). New music. In L. Stein (Ed.), Style and idea: Selected writings of Arnold Schoenberg (pp. 137–139). Berkeley and Los Angeles, CA: University of California Press.
Schoenberg, A. (1925/1975). Tonality and form. In L. Stein (Ed.), Style and idea: Selected writings of Arnold Schoenberg (pp. 255–257). Berkeley and Los Angeles, CA: University of California Press.
Schoenberg, A. (1926/1975). Opinion or insight? In L. Stein (Ed.), Style and idea: Selected writings of Arnold Schoenberg (pp. 258–264). Berkeley and Los Angeles, CA: University of California Press.
Simms, B. R. (1996). Music of the twentieth century: Style and structure (2nd ed.). New York: Schirmer Books.
Smith Brindle, R. (1986). Musical composition. Oxford: Oxford University Press.
Stein, D. (2005). Introduction to musical ambiguity. In D. Stein (Ed.), Engaging music: Essays in music analysis (pp. 77–88). New York: Oxford University Press.
Stein, L. (1975). Schoenberg: Five statements. Perspectives of New Music, 14(1), 161–173.
Straus, J. N. (1990). Introduction to post-tonal theory. Englewood Cliffs, NJ: Prentice-Hall.
Stravinsky, I., & Craft, R. (1962). Expositions and developments. New York: Doubleday.
Strunk, S. (2002). Harmony. In B. Kernfeld (Ed.), The new Grove dictionary of jazz (2nd ed.). London: Macmillan.
Tymoczko, D. (2002). Stravinsky and the octatonic: A reconsideration. Music Theory Spectrum, 24(1), 68–102.
Tymoczko, D. (2003a). Octatonicism reconsidered again. Music Theory Spectrum, 25(1), 185–202.
Tymoczko, D. (2003b). Polytonality and superimpositions. Unpublished paper. Retrieved January 2008, from http://www.music.princeton.edu/~dmitri/polytonality.pdf
van den Toorn, P. C. (1975). Some characteristics of Stravinsky’s diatonic music. Perspectives of New Music, 14(1), 104–138.
van den Toorn, P. C. (1983). The music of Igor Stravinsky. New Haven, CT: Yale University Press.
van den Toorn, P. C. (1987). Stravinsky and the Rite of Spring: The beginnings of a musical language. Berkeley and Los Angeles, CA: University of California Press.
van den Toorn, P. C. (2003). The sounds of Stravinsky. Music Theory Spectrum, 25(1), 167–185.
Watkins, G. (1995). Soundings: Music in the twentieth century. New York: Schirmer Books.
Whittall, A. (2001a). Bitonality. In S. Sadie & J. Tyrrell (Eds.), The new Grove dictionary of music and musicians (2nd ed.). London: Macmillan.
Whittall, A. (2001b). Form. In S. Sadie & J. Tyrrell (Eds.), The new Grove dictionary of music and musicians (2nd ed.). London: Macmillan.
Williams, J. K. (1997). Theories and analyses of twentieth-century music. Fort Worth, TX: Harcourt Brace College Publications.
Wolpert, R. S. (1990). Recognition of melody, harmonic accompaniment, and instrumentation: Musicians vs. nonmusicians. Music Perception, 8(1), 95–106.
Wolpert, R. S. (2000). Attention to key in a non-directed music listening task: Musicians vs. nonmusicians. Music Perception, 18(2), 225–230.

Biographies
Mauro Botelho is an Associate Professor of Music at Davidson College, USA, where he teaches music theory, analysis, music of Brazil and music of Latin America. He has an interest in the rhythm and form of tonal music, and has presented his research in the USA, Europe and Brazil. Address: Davidson College, Box 7131, Davidson, NC, 28035–7131, USA. [email: [email protected]]

Mayumi Hamamoto is a psychology graduate of Davidson College, USA. She can be contacted c/o Dr Margaret Munger.

Margaret P. Munger is a Professor of Psychology at Davidson College, USA, where she teaches cognitive psychology and research methods in attention and perception. She also co-authors a blog on research psychology, Cognitive Daily (http://www.scienceblogs.com/cognitivedaily).