Table of Contents
- A personal connection with Language and Music
- Exploring the historical and evolutionary perspectives of Language and Music
- Analysing Language and Music from structural perspectives
- Convergence and divergence from psychological perspectives for Language and Music
- The power of music in language development: insights and applications for education
- A future for more interaction between Language and Music
- Recommended Readings
- References
A personal connection with Language and Music
“Both language and music are conveyed by sounds, are ubiquitous elements in all cultures, are specific to humans, and are cultural artifacts that do not correspond to natural objects.”
The other day, I observed my daughters conversing in what sounded like gibberish, yet they seemed to understand each other perfectly. They were engaged in pretend play, set in a foreign culture so distant and unfamiliar that none of us in the outer circle could come close to comprehending the ‘language’. The rhythm of their speech, however, caught my attention much as it sounded like a musical tune. Intrigued, I asked them to decode their conversation. They revealed a few codes which were related to everyday greetings and the toys they were attending to – though they decided to keep the ‘more communicative’ elements of their secret language to themselves.
I couldn’t help but smile, remembering my childhood experiments with artificial languages. Those languages were deeply personal, serving as a playground for my imagination in a world separated from all others. Perhaps, like my daughters today, being surrounded by a variety of languages spurred my desire to create my own. Influenced by Sinitic languages, I was then eager to use tonal shifts to convey meaning. To me, it was akin to music composition.
Reflecting on my closest brushes with creative music, I admit I don’t play any instrument and my singing shines only during private karaoke nights with family and friends. Still, I’m captivated by the interconnection between music and language. Is there some hidden synergy that binds the two together naturally? Nursery rhymes have been argued to play an integral role in language development – does this imply that learning music can enhance language skills over time? How do language and music relate to each other from different perspectives? Most importantly, what practical implications and applications are there for language education?
In this article, I hope to explore some of these questions more thoroughly, although I need to put in a strong disclaimer that I’m no expert in this area of research. This piece represents my best attempt at synthesising what I could gather up to the time of writing. Consider this also an open invitation to those with expertise in this interdisciplinary field to enlighten me – and many readers on LEA.
Exploring the historical and evolutionary perspectives of Language and Music
“Language and music define us as human.”
The exploration of the interconnection between language and music has intrigued scholars for centuries, encompassing a wide range of fields such as philosophy, biology, literature, and linguistics. This general interest examines how music and language share structural and expressive similarities, influencing human communication and emotion. For instance, more than 2 millennia ago, Plato argued that certain musical modes could uplift people’s spirits particularly because they resemble the sounds of dignified speech (Patel, 2008, p.4).

In more recent history, the Genevan philosopher Jean-Jacques Rousseau advocated the view that music and language originated from a common ancestor and that language evolved out of music in order to structure human societies more rationally (Besson & Schön, 2001; Jentschke, 2014). The nineteenth-century English philosopher Herbert Spencer then further proposed in 1890 that the emotional impact of music stems from its similarity to speech, a perspective that continues to hold significance even today (Temperley, 2022).
Perhaps the most influential perspective is Darwin’s, which is reflected in many later discussions that assume a mutual precursor for both language and music (Johansson, 2008; Scharinger & Wiese, 2022). In his 1871 work, Darwin argued for a shared origin of language and music, suggesting an early, non-referential, song-like communication system called “protolanguage” (Besson & Schön, 2001; Jentschke, 2014; Patel, 2008, p.4; Scharinger & Wiese, 2022) – not too far from what my daughters (or my younger self) created. Interestingly, he considered modern music a “behavioural fossil” of that system. As an additional fun fact, his thesis highlighted primate reproductive calls as a vital feature within that system (Besson & Schön, 2001).

A key learning point from the historical perspective is that language and music are both seen as communication systems and universally present across human cultures (Besson & Schön, 2001; Brown, 1991, p.130-141; Jackendoff, 2009; McDermott & Hauser, 2005; Temperley, 2022). To a large extent, they are distinguishing traits between humans and other animal species. If that is so, what structural similarities and differences might we observe in the research evidence?
Analysing Language and Music from structural perspectives
Over the past few decades, music research has been increasingly shaped by constructs from syntax and prosody, along with methodologies from psycholinguistics and computational linguistics (Temperley, 2022), revealing fascinating insights into the shared traits of music and language. I’ll share a subset of those findings that are especially relevant to language education, but first, let’s consider a fundamental difference between language and music that sets the stage for our discussion.
I would argue that the core distinction between canonical human experiences of language and music lies in modality. Language is typically experienced through three modalities: auditory (e.g., speech), visual (e.g., written words), and kinaesthetic (e.g., signed gestures), with auditory and visual being more common. In contrast, music primarily engages us through sound. For most people, understanding music through reading or gestures requires special training, a skill that everyday folks like me don’t usually possess. Thus, many of the findings that compare language and music are rooted in the auditory modality.

Structural Similarities
With that in mind, let’s talk about some structural similarities that scholars have identified. First is the recognition that both language and music are structured hierarchically (Besson & Schön, 2001; Fedorenko et al., 2009; Johansson, 2008; Kunert et al., 2015; McDermott & Hauser, 2005; Patel, 2003, 2008; Scharinger & Wiese, 2022; Schön & François, 2011; Slevc, 2012). In language, sentences are composed of phrases, which in turn consist of words. Similarly, in music, compositions are made up of elaborate melodies which comprise musical phrases that further consist of notes and chords. This hierarchical structure allows for complex combinations of basic elements to create higher-order constructs in both domains.
Second is that sounds in both language and music are segmented along a continuum into identifiable units, allowing for structured analysis and communication (Besson & Schön, 2001; Haiduk & Fitch, 2022). Third is that both language and music are governed by rules and principles determining how basic elements can be combined (Besson & Schön, 2001; Fedorenko et al., 2009; Johansson, 2008; Kunert et al., 2015; McDermott & Hauser, 2005; Patel, 2003, 2008; Scharinger & Wiese, 2022; Schön & François, 2011; Slevc, 2012). In language, phonemes combine to form words, and words combine to form sentences according to grammatical rules (e.g., syntax, morphology). In music, individual notes and chords combine to create melodies and harmonies based on music theory. This rule-based nature allows an infinite number of combinations to be generated recursively yet creatively from a finite set of elements.
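For readers who enjoy a more computational illustration, the idea of generating unbounded output from a finite, rule-governed inventory can be sketched as a toy grammar. This is a playful analogy only – the rules, words, and depth limit below are all invented for illustration, not a model of actual linguistic or musical processing:

```python
import random

# A toy grammar: a finite set of rules and words that can nonetheless
# generate an unbounded number of sentences through recursion
# (an NP may contain a VP, which may contain another NP, and so on).
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],
    "VP": [["V", "NP"], ["V"]],
    "N":  [["cat"], ["dog"], ["melody"]],
    "V":  [["hears"], ["chases"], ["sings"]],
}

def generate(symbol="S", depth=0, max_depth=4):
    """Recursively expand a symbol into words, cutting off recursion when deep."""
    if symbol not in GRAMMAR:
        return [symbol]  # a terminal word, e.g. "cat"
    options = GRAMMAR[symbol]
    if depth >= max_depth:
        options = [options[0]]  # force the first (non-recursive-leaning) rule
    rule = random.choice(options)
    words = []
    for part in rule:
        words.extend(generate(part, depth + 1, max_depth))
    return words

print(" ".join(generate()))  # e.g. "the cat hears the dog"
```

The same finite-rules/infinite-output logic is what scholars point to when they describe both musical and linguistic creativity as rule-governed.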
Last, both music and language share a common feature: they can express emotions through specific structural tools such as pitch (how high or low a sound is), loudness (how loud or soft a sound is), and temporal density (how quickly or slowly sounds occur). Fundamentally, people can convey different types of emotions by changing these elements in both speaking and playing music (Patel, 2008, p.312; Temperley, 2022).
Structural Differences
Despite these similarities, there may be even more structural differences, as has been argued (Slevc, 2012), some of which would be self-evident even to the general layperson. Beyond that, researchers have also generally observed that the conventional categorisation of language into various components (phonetics, phonology, syntax, semantics, morphology, pragmatics, discourse) doesn’t find a clear parallel in music (Jackendoff, 2009; Temperley, 2022). Despite proposals for mapping these elements (Patel, 2008), none has received widespread endorsement in musical discourse (Temperley, 2022).
When we focus on syntax, we see that language has a much more complex but rigid syntactic system than music, which can involve many grammatical categories with a variety of parameters (e.g., animacy, definiteness) to determine agreement (e.g., how some verbs select for certain types of nouns as objects) and constituency (Besson & Schön, 2001; Patel, 2008, p.263; Slevc, 2012). Whereas for music, even when scholars talk about the notion of a musical grammar or syntax, the parallel is in fact substantially different with more flexibility and ambiguity built in (Asano & Boeckx, 2015; Besson & Schön, 2001; Patel, 2008, p.264).
As we move to meaning, the difference is even more accentuated. Meaning in language generally refers to propositions tied to extralinguistic references, where sounds form words that denote specific concepts or objects in the world (e.g., consonants and vowels come together to form the sound ‘cat’, which denotes that feline mammal); language is fundamentally symbolic, with a more direct relationship between sounds and meanings (Slevc, 2012; Temperley, 2022). Meaning in music, however, is fundamentally different (Besson & Schön, 2001; Jentschke, 2014; Temperley, 2022). While it can be self-referential (e.g., a recurring theme that echoes an earlier motif, or a dissonant note suggesting an eventual resolution), relate to emotions (as highlighted earlier), or act as a figurative portrayal of lived experiences, little of it reaches the specificity of reference to the extralinguistic/extramusical real world that language achieves (Besson & Schön, 2001; Jentschke, 2014; Koelsch et al., 2004; Slevc, 2012; Temperley, 2022). We can argue that a simple phrase like “I love you” can hold various meanings under different circumstances, but a musical phrase like “Fa-So-La” would invite far more ambiguity. The best exception scholars have raised, though, is leitmotifs: musical phrases intentionally designed to denote extramusical meanings such as a particular character, situation, or idea (Patel, 2008, p.328; Temperley, 2022).
Convergence and divergence from psychological perspectives for Language and Music
Till this point, structural perspectives comparing language and music suggest that both share properties that may lend themselves to common processing abilities in humans, albeit not totally. However, this was not the dominant view historically. Traditionally, music and language have been viewed as being processed separately by the brain, as reflected in earlier theories about brain lateralisation—the idea that each hemisphere of the brain specialises in certain tasks (Jäncke, 2012). It was believed, then, that language functions are mainly directed by the left hemisphere, while music functions are handled by the right hemisphere. Just to be categorical here: this is NOT to suggest anything about people being either left-brained or right-brained – a persistent neuro-myth that I hope no reader would take away.

In recent years, the view of specialised brain regions for language and music has been challenged, particularly with advances in neuroscientific methodologies and technologies (Jäncke, 2012), though there is also debate on whether any shared processing capabilities are rooted in the inherent qualities of general cognition rather than in the specialised faculties of language and music (Temperley, 2022). Nevertheless, let’s go into some interesting details of the research findings in this area.
Sound and Prosody Processing
Our ability to process complex auditory features, such as pitch, rhythm, contour, and tone, is crucial for understanding linguistic and musical structures. Numerous studies drawing on behavioural and neuroimaging evidence suggest substantial overlap in brain activity patterns during these processes (Besson & Schön, 2001; Jentschke, 2014; Johansson, 2008; Magne et al., 2006; Milovanov & Tervaniemi, 2011; Pino et al., 2023; Proverbio & Piotti, 2022; Schön & François, 2011; Schön et al., 2004; Zhang et al., 2023).
Of particular interest to language educators like us, many studies also suggest that musical training affords some form of positive music-language transfer effect: musicians generally demonstrate heightened sensitivity to acoustic properties, improving speech signal perception and phonological awareness of a new language – a sensitivity that non-musicians are not similarly endowed with (Choi et al., 2025; Delogu et al., 2006; Jansen et al., 2023; Milovanov & Tervaniemi, 2011; Moreno, 2009; Pino et al., 2023; Proverbio & Piotti, 2022; Scharinger & Wiese, 2022; Schön & François, 2011; Slevc, 2012; Talamini et al., 2018; Zhang et al., 2023). An interesting note is that such an effect can also be found in children with impaired language development (e.g., due to dyslexia) (Jentschke, 2014; Overy et al., 2003).
What this also implies is that musicians appear to have a comparative edge in learning tonal languages (e.g., Mandarin, Cantonese, Vietnamese, Thai, Punjabi), which depend heavily on subtle tonal shifts to convey different meanings – a finding demonstrated in some studies (Choi et al., 2025; Delogu et al., 2006).
Syntactic Processing
Naturally, we’d expect greater convergence in sound and prosody processing, given that music is inherently an auditory experience, as I’ve previously established. For syntax, would we also see more convergence or divergence? Apparently, some scholars lean towards convergence – the view that brain mechanisms for processing linguistic syntax overlap with those processing musical syntax (Besson & Schön, 2001; Fedorenko et al., 2009; Jentschke, 2014; Jentschke & Koelsch, 2009; Johansson, 2008; Kunert et al., 2015; Li et al., 2023; Patel, 2003, 2008; Slevc, 2012; Steinbeis & Koelsch, 2008; Sun et al., 2018). In other words, neural resources are shared even when syntactic representations are different.
When it comes to music-language transfer effects, however, the verdict is less conclusive. In processing studies, the evidence presented is generally either a comparable pattern of brain activity in EEG data or analogous activation of brain regions in fMRI data when processing linguistic and musical syntax.
However, in transfer effect studies, the dependent variable can range from degree of learning/acquisition to perception and awareness – on top of neuroimaging data if such methods are used. Individuals with musical training demonstrate a wider range of performance results on these measures when compared to those without musical training. For instance, some studies found a significant effect with a large effect size – musical aptitude or training has a positive influence on linguistic ability or learning (Jentschke & Koelsch, 2009; Kunert et al., 2015; Pino et al., 2023; Schön & François, 2011) – while others found either a very small or negligible effect (Talamini et al., 2018; Temperley, 2022).
Assuming the validity of all these different studies, we might want to acknowledge that despite the apparent convergence in processing, there are nuances that constrain the generalisation across all domains of syntax in both language and music. In other words, there remains dissociation in certain aspects of syntax that may lend themselves to separate processing mechanisms (Patel, 2008; Slevc, 2012; Temperley, 2022).
Meaning Processing
I’d guess some of us would presume that the greatest divergence in language and music processing would lie in the processing of meaning. As Slevc (2012, p.488) mentions, “the communication of specific denotative meanings is a core aspect of language, but it is far from obvious that there is a comparable sort of musical semantics.”
Of course, linguistic meaning is not restricted to semantic meaning – essentially the explicit meaning of words, phrases, sentences or texts based on a straightforward interpretation of the proposition put forward by the arrangement of individual units (e.g., the relationship between two words that determines which is the agent and which is the patient). It also includes pragmatic meaning – the meaning intended by the speaker/writer, derived from contextual factors (e.g., discourse, the extralinguistic world). However, while Patel (2008) proposes a framework for studying pragmatic processing in both music and language, such research efforts seem very limited – I didn’t manage to find any personally.
So, does semantic processing in language share neural mechanisms with semantic processing in music? There seems to be a certain degree of overlap, as some have suggested based on neuroimaging evidence: music seems to activate semantic expectancy – the state of anticipating or believing that units of meaning will appear in a certain position – in brain regions similar to those activated by language (Koelsch et al., 2004; Li et al., 2023; Steinbeis & Koelsch, 2008).
Irrespective of these findings, scholars remain conservative in asserting a strong overlap of shared neural resources for processing semantic meaning in language and music – the stronger view is that activation may not occur at similar levels, considering that language has a far larger potential to signify complicated meanings than music does (Besson & Schön, 2001; Johansson, 2008; Koelsch et al., 2004; Moreno, 2009; Patel, 2008, p.335). This, however, would need more empirical support for validation.
Is there, then, at least some form of music-language transfer effect in semantic processing? Research studies that specifically address this issue are rather limited. Again, both sides of the coin exist: a couple of studies found no evidence of transfer effects (Ju et al., 2024; Marie et al., 2011), while three other studies argued for a positive effect (Dittinger et al., 2016, 2018; Yu et al., 2017).
The power of music in language development: insights and applications for education
Having journeyed through the different perspectives on how language and music may be interconnected and the degree and limitations of that connectedness, perhaps the most important question we might have, as language educators, is how these insights can inform us. A number of researchers have purposefully explored ways to integrate music into language development and education, yielding insights into intervention practices – although much of this research does not draw upon the earlier perspectives I’ve presented to coherently explain why certain practices succeed or fail. Below are some of these findings:
- Engaging in chanting and rhythmic speaking helps students practice speech elements, such as fluency and articulation; singing activities allow students to practice rhythm, dynamics, and mood, which are concepts common to both music and language; rhythmic experiences can lead to improved reading skills, as fluency in language reading is enhanced through rhythmic chants and songs; selecting songs with simple melodies, repetitive lyrics, and culturally relevant content can effectively reinforce language skills, such as grammar, pronunciation, and vocabulary (Mizener, 2008).
- Learning vocabulary through a composed song leads to higher retention of the new vocabulary (Legg, 2009).
- Teaching songs in foreign language classes can effectively reduce anxiety levels, particularly for students who experience higher levels initially. The positive reception of music-based learning suggests that it can be a valuable pedagogical tool in language education, promoting a more enjoyable and effective learning environment (Dolean, 2016).
- Music can assist in promoting oracy by leveraging the natural connections between musical and linguistic skills: primary students who participated in a purpose-specific music programme in language classes demonstrated an increase in oracy, with varying degrees of improvement in pronunciation and vocabulary (McCormack & Klopper, 2016).
- Serving as a scaffold, music facilitates cognitive development, self-expression, and cultural sensitivity—important building blocks for literacy—and further stimulates mental imagery, a crucial element in reading comprehension and creative composition (Salmon, 2010).

Beyond practice-based insights, the detailed analysis of structural and psychological parallels between language and music does offer leverage points that inform our teaching approaches and curriculum design. For one, given the strong evidence of overlap in auditory processing mechanisms between music and language, we should recognise the value of auditory training. Musical training, even at a basic level, can enhance learners’ sensitivity to pitch, rhythm, and intonation—skills critical to language acquisition, especially in tonal languages. In a sense, incorporating music-based activities or exercises focusing on prosody (intonation, stress, rhythm) into language classrooms may reinforce students’ auditory discrimination abilities and improve their pronunciation and listening comprehension.
In addition to that, noting some of the structural similarities between language and music, we can integrate musical analogies into our instruction to illustrate the hierarchical organisation and rule-governed creativity inherent in language learning. By explicitly highlighting how musical phrases combine to form coherent melodies, we can motivate students to visualise and understand the way words and phrases build into sentences and larger units of discourse. We can also encourage our learners to experiment creatively within grammatical constraints, similar to musical improvisation, by practising varied sentence patterns or poetic forms in accordance with popular (from the perspective of our learners) musical melodies or musical phrases.
Maybe the two most relevant takeaways pertain to those among us who are dealing with tonal languages or with learners who have language-learning impairments. For the former, basic musical training can be particularly advantageous: leveraging more musical elements within classes may sensitise our learners to the naturalistic prosody of the target language. For the latter, music potentially acts as a mitigating intervention, easing the impairment’s impact. In either case, systematic experimentation may be necessary to discover the optimal approach.
A future for more interaction between Language and Music
Research still has a long way to go in distilling the interconnection between language and music, particularly on whether the established links are unique to these two domains or part and parcel of general cognition engaging similar faculties. Nonetheless, I hope my synthesis has provided an insightful basis for experimentation that some of us may be interested to work on. I also see this as an invitation for experts in this area to share more of your thoughts. Looking forward to learning from you!
Recommended Readings
- Besson, M., & Schön, D. (2001). Comparison between language and music. Annals of the New York Academy of Sciences, 930(1), 232-258.
- Jackendoff, R. (2009). Parallels and Nonparallels between Language and Music. Music Perception: An Interdisciplinary Journal, 26(3), 195-204.
- Jäncke, L. (2012). The Relationship between Music and Language. Frontiers in Psychology, 3, 123.
- Jansen, N., Harding, E. E., Loerts, H., Başkent, D., & Lowie, W. (2023). The relation between musical abilities and speech prosody perception: A meta-analysis. Journal of Phonetics, 101, 101278.
- Jentschke, S. (2014). The Relationship between Music and Language. In S. Hallam, I. Cross & M. H. Thaut (Eds) The Oxford Handbook of Music Psychology (2nd edition) (pp. 343-356).
- Johansson, B. B. (2008). Language and Music: What do they have in Common and how do they Differ? A Neuroscientific Approach. European Review, 16(4), 413-427.
- Moreno, S. (2009). Can Music Influence Language and Cognition?. Contemporary Music Review, 28(3), 329-345.
- Patel, A. D. (2003). Language, music, syntax and the brain. Nature Neuroscience, 6(7), 674-681.
- Patel, A. D. (2008). Music, language, and the brain. Oxford University Press.
- Pino, M. C., Giancola, M., & D’Amico, S. (2023). The Association between Music and Language in Children: A State-of-the-Art Review. Children, 10(5), 801.
- Scharinger, M. & Wiese, R. (2022). Introduction: How to conceptualize similarities between language and music. In M. Scharinger & R. Wiese (Eds) How Language Speaks to Music: Prosody from a Cross-domain Perspective (pp. 1-16). De Gruyter.
- Slevc, L. R. (2012). Language and music: sound, structure, and meaning. WIREs Cognitive Science, 3(4), 483-492.
- Temperley, D. (2022). Music and language. Annual Review of Linguistics, 8, 153-170.
References
Asano, R., & Boeckx, C. (2015). Syntax in language and music: what is the right level of comparison?. Frontiers in Psychology, 6, 942.
Besson, M., & Schön, D. (2001). Comparison between language and music. Annals of the New York Academy of Sciences, 930(1), 232-258.
Brown, D. E. (1991). Human Universals. McGraw-Hill.
Delogu, F., Lampis, G., & Olivetti Belardinelli, M. (2006). Music-to-language transfer effect: may melodic ability improve learning of tonal languages by native nontonal speakers?. Cognitive Processing, 7(3), 203-207.
Dittinger, E., Barbaroux, M., D’Imperio, M., Jäncke, L., Elmer, S., & Besson, M. (2016). Professional music training and novel word learning: From faster semantic encoding to longer-lasting word representations. Journal of Cognitive Neuroscience, 28(10), 1584-1602.
Dittinger, E., Valizadeh, S. A., Jäncke, L., Besson, M., & Elmer, S. (2018). Increased functional connectivity in the ventral and dorsal streams during retrieval of novel words in professional musicians. Human Brain Mapping, 39(2), 722-734.
Dolean, D. D. (2016). The effects of teaching songs during foreign language classes on students’ foreign language anxiety. Language Teaching Research, 20(5), 638-653.
Ettlinger, M., Margulis, E. H., & Wong, P. C. M. (2011). Implicit Memory in Music and Language. Frontiers in Psychology, 2.
Fedorenko, E., Patel, A., Casasanto, D., Winawer, J., & Gibson, E. (2009). Structural integration in language and music: Evidence for a shared system. Memory & Cognition, 37(1), 1-9.
Haiduk, F., & Fitch, W. T. (2022). Understanding design features of music and language: The choric/dialogic distinction. Frontiers in Psychology, 13, 786899.
Jackendoff, R. (2009). Parallels and Nonparallels between Language and Music. Music Perception: An Interdisciplinary Journal, 26(3), 195-204.
Jäncke, L. (2012). The Relationship between Music and Language. Frontiers in Psychology, 3, 123.
Jansen, N., Harding, E. E., Loerts, H., Başkent, D., & Lowie, W. (2023). The relation between musical abilities and speech prosody perception: A meta-analysis. Journal of Phonetics, 101, 101278.
Jentschke, S., & Koelsch, S. (2009). Musical training modulates the development of syntax processing in children. NeuroImage, 47(2), 735-744.
Jentschke, S. (2014). The Relationship between Music and Language. In S. Hallam, I. Cross & M. H. Thaut (Eds) The Oxford Handbook of Music Psychology (2nd edition) (pp. 343-356).
Johansson, B. B. (2008). Language and Music: What do they have in Common and how do they Differ? A Neuroscientific Approach. European Review, 16(4), 413-427.
Ju, P., Zhou, Z., Xie, Y., Hui, J., & Yang, X. (2024). Music training influences online temporal order processing during reading comprehension. Acta Psychologica, 248, 104340.
Koelsch, S., Kasper, E., Sammler, D., Schulze, K., Gunter, T., & Friederici, A. D. (2004). Music, language and meaning: brain signatures of semantic processing. Nature Neuroscience, 7(3), 302-307.
Kunert, R., Willems, R. M., Casasanto, D., Patel, A. D., & Hagoort, P. (2015). Music and Language Syntax Interact in Broca’s Area: An fMRI Study. PLOS ONE, 10(11), e0141069.
Legg, R. (2009). Using Music to Accelerate Language Learning: An Experimental Study. Research in Education, 82(1), 1-12.
Li, D., Wang, X., Li, Y., Song, D., & Ma, W. (2023). Resource sharedness between language and music processing: An ERP study. Journal of Neurolinguistics, 67, 101136.
Liu, J., Hilton, C. B., Bergelson, E., & Mehr, S. A. (2023). Language experience predicts music processing in a half-million speakers of fifty-four languages. Current Biology, 33(10), 1916-1925.
Magne, C. L., Schön, D., & Besson, M. (2006). Musician children detect pitch violations in both music and language better than nonmusician children: Behavioral and electrophysiological approaches. Journal of Cognitive Neuroscience, 18(2), 199-211.
Marie, C., Magne, C., & Besson, M. (2011). Musicians and the metric structure of words. Journal of Cognitive Neuroscience, 23(2), 294-305.
McCormack, B. A., & Klopper, C. (2016). The potential of music in promoting oracy in students with English as an additional language. International Journal of Music Education, 34(4), 416-432.
McDermott, J., & Hauser, M. D. (2005). The origins of music: Innateness, uniqueness, and evolution. Music Perception, 23, 29-59.
Milovanov, R., & Tervaniemi, M. (2011). The Interplay between Musical and Linguistic Aptitudes: A Review. Frontiers in Psychology, 2, 321.
Mizener, C. P. (2008). Enhancing Language Skills Through Music. General Music Today, 21(2), 11-17.
Moreno, S. (2009). Can Music Influence Language and Cognition?. Contemporary Music Review, 28(3), 329-345.
Overy, K., Nicolson, R. I., Fawcett, A. J., & Clarke, E. F. (2003). Dyslexia and music: Measuring musical timing skills. Dyslexia, 9(1), 18-36.
Patel, A. D. (2003). Language, music, syntax and the brain. Nature Neuroscience, 6(7), 674-681.
Patel, A. D. (2008). Music, language, and the brain. Oxford University Press.
Pino, M. C., Giancola, M., & D’Amico, S. (2023). The Association between Music and Language in Children: A State-of-the-Art Review. Children, 10(5), 801.
Proverbio, A. M., & Piotti, E. (2022). Common neural bases for processing speech prosody and music: An integrated model. Psychology of Music, 50(5), 1408-1423.
Proverbio, A. M., & Sanoubari, E. (2024). Music literacy shapes the specialization of a right hemispheric word reading area. NeuroImage: Reports, 4(4), 100219.
Salmon, A. (2010). Using music to promote children’s thinking and enhance their literacy development. Early Child Development and Care, 180(7), 937-945.
Scharinger, M. & Wiese, R. (2022). Introduction: How to conceptualize similarities between language and music. In M. Scharinger & R. Wiese (Eds) How Language Speaks to Music: Prosody from a Cross-domain Perspective (pp. 1-16). De Gruyter.
Schön, D., & François, C. (2011). Musical Expertise and Statistical Learning of Musical and Linguistic Structures. Frontiers in Psychology, 2.
Schön, D., Magne, C. L., & Besson, M. (2004). The music of speech: Music training facilitates pitch processing in both music and language. Psychophysiology, 41(3), 341-349.
Slevc, L. R. (2012). Language and music: sound, structure, and meaning. WIREs Cognitive Science, 3(4), 483-492.
Steinbeis, N. & Koelsch, S. (2008). Shared neural resources between music and language indicate semantic processing of musical tension-resolution patterns. Cerebral Cortex, 18, 1169-1178.
Sun, Y., Lu, X., Ho, H. T., Johnson, B. W., Sammler, D., & Thompson, W. F. (2018). Syntactic processing in music and language: Parallel abnormalities observed in congenital amusia. NeuroImage: Clinical, 19, 640-651.
Talamini, F., Grassi, M., Toffalini, E., Santoni, R., & Carretti, B. (2018). Learning a second language: Can music aptitude or music training have a role?. Learning and Individual Differences, 64, 1-7.
Temperley, D. (2022). Music and language. Annual Review of Linguistics, 8, 153-170.
Yu, M., Xu, M., Li, X., Chen, Z., Song, Y., & Liu, J. (2017). The Shared Neural Basis of Music and Language. Neuroscience, 357, 208-219.
Zhang, K., Tao, R., & Peng, G. (2023). The advantage of the music-enabled brain in accommodating lexical tone variabilities. Brain and Language, 247, 105348.
