Auditory Interference in Chinese-English Code-Switching: Immersion as Help and Hindrance
By Alexandra Ekshteyn
Thirty participants were recruited from the University of Utah, Westminster College, Salt Lake Community College, and Weber State University (17 female, 10 male, 3 other). Participants were of "traditional" college age, 18-24 years (M = 20.23, SD = 1.59), and were evenly divided into monolingual and Chinese-English bilingual subgroups. The fifteen monolingual participants were defined as fluent in no language other than English, disregarding introductory foreign language classes taken as part of college requirements. Bilingual participants were required to be either native speakers of both languages or natively fluent in one language with at least two years of extensive experience in the other.
Participants were asked to complete a lexical decision task to determine whether two words presented on the screen were translation equivalents in Mandarin Chinese and American English. The monolingual and bilingual groups took two different tests catering to their respective levels of Chinese comprehension.
The monolingual task consisted of 28 words in Mandarin Chinese, presented in pinyin (phonetic romanization of Chinese characters). Pinyin was used instead of the original Hanzi characters in order to limit the potential confounding variable of an orthographic shift, and to prevent monolinguals from memorizing the shapes of the characters instead of actually understanding their semantic meaning. The vocabulary consisted of numbers (1-10), colors, cardinal directions, and seasons, and words were selected because most included category markers that hint at their definition. For example, color words contain the suffix sè, so a participant presented with hóngsè (red) would already be cued to look for a corresponding color word in English. These markers also remedied the confusion inherent in Mandarin homophones: the category suffix tiān in dōngtiān (winter) differentiates it from dōng (east). Monolingual participants were given a study guide upon enrollment in the study for personal review, as well as 20 minutes to review it again immediately before the task. The study guide was designed to introduce the vocabulary using pinyin and pictures, without using English definitions.
The bilingual task was considerably more difficult, with 232 words taken from the New Practical Chinese Reader textbook, also in pinyin. Words included colors, animals, foods, locations, pronouns, and situational verbs, and amounted to an A2/B1 level on the Common European Framework of Reference for Languages (CEFR): advanced-beginner/early-intermediate proficiency, with the ability to carry out basic conversations about personal interests, occupation, and daily interactions (Council of Europe, 2011). Bilingual participants were not provided with a study guide prior to the task, under the assumption that the words presented were ones they used and understood well in daily interactions.
Finally, both monolingual and bilingual participants were exposed to three different auditory interference conditions: Chinese, English, and a block of no audio to establish a testing baseline. Audio for the Chinese and English blocks was spliced together from four different news broadcasts or talk shows in the respective languages, overlaid on one another, so that participants would be exposed to the vocabulary and prosody of each language without being distracted by following one coherent narrative within the interference.
Participants were asked to complete three blocks of a computerized lexical decision task identifying words in Mandarin pinyin and English, with each block corresponding to a different auditory interference condition. Using a clicker box ("1" for a match, "4" for a mismatch), participants responded to two words on the screen, one in each language. If the words were translation equivalents (for example, píngguǒ and apple), participants clicked "1"; if they were not (píngguǒ and elephant), they selected "4". Each word-pair trial remained on the screen for three seconds. The monolingual task had a total of 168 trials, with 56 per auditory condition, and the bilingual task had 464 trials, with 154 per auditory condition.
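The response-coding and accuracy scoring described above can be sketched as follows. This is a minimal illustration only, with a hypothetical word list and helper functions, not the actual experiment software:

```python
# Minimal sketch of the lexical decision scoring logic described above.
# "1" = the two words are translation equivalents; "4" = they are not.
# The word list is a small hypothetical sample, not the study's stimuli.

TRANSLATIONS = {
    "píngguǒ": "apple",
    "hóngsè": "red",
    "dōngtiān": "winter",
}

def correct_response(chinese_word: str, english_word: str) -> str:
    """Return the button a participant should press for this word pair."""
    return "1" if TRANSLATIONS.get(chinese_word) == english_word else "4"

def score_trials(trials):
    """trials: list of (chinese_word, english_word, participant_response).
    Returns the proportion of correct responses (accuracy)."""
    correct = sum(
        1 for zh, en, resp in trials if resp == correct_response(zh, en)
    )
    return correct / len(trials)

trials = [
    ("píngguǒ", "apple", "1"),     # match, answered correctly
    ("píngguǒ", "elephant", "4"),  # mismatch, answered correctly
    ("hóngsè", "red", "4"),        # match, answered incorrectly
]
print(score_trials(trials))  # 2 of 3 responses correct
```

In the real task, per-trial response times would also be logged alongside each button press to produce the RT measures analyzed below.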
Resulting data for both tasks were analyzed for accuracy and reaction time (RT) for each group (bilingual and monolingual) in each condition (Chinese, English, no audio) using a multivariable ANOVA. The mean accuracy scores for the bilingual participants were 90%, 89.5%, and 91.2% across the Chinese, English, and no audio conditions respectively, with RTs of 1082.1 ms, 1074.0 ms, and 991.2 ms. The mean monolingual accuracies were 83.9%, 83.1%, and 88.9%, with RTs of 1311.9 ms, 1304.9 ms, and 1200.2 ms across the same three conditions.
There was a significant difference in accuracy between monolingual and bilingual groups for the Chinese and English conditions, [F(1,28) = 8.1, p < 0.05; F(1,28) = 9.7, p < 0.05]. In addition, there was a significant difference in RT between monolingual and bilingual groups for the Chinese, English, and no audio conditions, [F(1,28) = 5.1, p < 0.05; F(1,28) = 5.7, p < 0.05; F(1,28) = 7.1, p < 0.05].
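Each of the reported tests is a between-group comparison with two groups of 15 participants, which is what gives the degrees of freedom (1, 28). A one-way ANOVA of this shape can be sketched as below; the per-participant accuracy scores are hypothetical placeholders, not the study's data:

```python
# Hand-rolled one-way ANOVA, matching the df = (1, 28) structure of the
# between-group tests reported above. Scores are hypothetical, not real data.

def one_way_anova(*groups):
    """Return (F, df_between, df_within) for a one-way ANOVA."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    # Between-group sum of squares: group size times squared mean deviation.
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    # Within-group sum of squares: squared deviations from each group mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Hypothetical per-participant accuracy scores (%) for one audio condition.
bilingual = [88, 91, 90, 92, 89, 90, 93, 88, 91, 90, 92, 89, 91, 90, 90]
monolingual = [82, 85, 84, 83, 86, 82, 85, 84, 83, 84, 85, 82, 84, 83, 85]

f, dfb, dfw = one_way_anova(bilingual, monolingual)
print(f"F({dfb},{dfw}) = {f:.1f}")  # df match the reported F(1,28) tests
```

The resulting F statistic would then be compared against the F distribution with (1, 28) degrees of freedom to obtain the p-values reported above.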
Analysis of the results supports both parts of the hypothesis. Bilinguals regularly outperformed the monolingual participants with respect to both accuracy and RT, which can be attributed both to more in-depth familiarity with both languages and to a better ability to code-switch while parsing auditory interference (as supported by Filippi et al.). However, bilingual and monolingual results follow the same pattern across the three conditions. The English condition produced the largest interference effects in both groups, with the lowest accuracies and elevated RTs, while the no audio condition produced the smallest interference effect, with the highest accuracies and lowest RTs. This is understandable, given that the common test-taking environment in United States classrooms is a silent one, so participants were more accurate and faster in a more predictable, standard environment with no distractions. Finally, both groups experienced a speed/accuracy trade-off in the Chinese audio block: accuracy scores increased under Chinese audio, but participants slowed down in order to be more accurate, resulting in higher RTs. For the bilingual participants, this can be explained by the fact that 10 of the 15 bilinguals were originally native English speakers who learned Chinese later in life. Despite extensive experience (an average of 3 years) using Chinese on a regular basis, they still benefitted from the immersive Chinese audio to some extent. From these results, it can be concluded that interference from L1 (English, the dominant and most commonly used language in daily life for both monolinguals and bilinguals) reduced participants' ability to accurately code-switch between the two languages, while exposure to L2 (Chinese for the monolinguals and most bilinguals) allowed for high accuracy in recall in the monolinguals and an overall increase in accuracy in bilinguals at the expense of speed.
It can be said, then, that auditory interference in L1/L2 affects the speed with which both bilinguals-in-training and fluent bilinguals can code-switch between their two languages more than the accuracy with which they do so. These results also indicate that language learning and processing draw on multiple modalities in tandem to acquire and apply linguistic knowledge, which speaks to the variety of teaching methods and classroom tools that must be incorporated to learn a language fluently. Further, this research into the underlying processing of bilingual code-switching calls into question the general benefits of bilingualism for larger cognitive processes and task-switching, specifically the potential deficits faced by bilinguals in comparison to monolingual counterparts.
This study was limited by an inability to reliably measure the intensiveness of self-study in the monolingual group. Monolinguals were provided the study guide upon enrollment and were given time before the task to review, but it cannot be known how long or how well they used the study guide and how that may have affected their performance. In addition, as previously stated, two-thirds of the bilinguals were originally native English speakers, and because they still benefited from the Chinese audio immersion to an extent, this study was not able to measure a truly bilingual reaction arising from simultaneous language learning and symmetrical code-switching costs. Finally, this study only looked at single-word matching and used Mandarin pinyin instead of the original Hanzi characters. Future research could incorporate full-sentence verification to examine differences in L1 and L2 syntactic structure or orthographic shifts, and how those differences and more extensive processes would be affected by auditory interference.
Alexandra Ekshteyn is a Neuroscience and Honors student from Salt Lake City whose interests include astronomy, special effects/movie magic, and soccer. "I've had an interest in linguistics for a very long time, and when I was researching potential senior projects, I decided to pursue it through a neuroscience lens."