Contents
A Provocation: Is “Native Speaker” Still a Useful Idea?
A Short History of a Big Myth
What AI Can Do Now (And What It Still Gets Wrong)
Education: Assessment, Enablement and the New Literacy
Immigration: Gatekeeping, CEFR, and the AI Question
Hiring & HR: From Accent Bias to Assistive Fluency
Rethinking Proficiency: Toward AI-Augmented Communicative Competence
Worked Examples: What Changes On the Ground
Risks, Ethics, and the Prestige Economy of Languages
Final Reflections: If the Native Speaker Is “Dead,” What Lives On?
References
A Provocation: Is “Native Speaker” Still a Useful Idea?
It has long been convenient – administratively, ideologically, and commercially – to treat the “native speaker” as the gold standard. Yet in a world where predictive text drafts your email, DeepL paraphrases your pitch, Pixel phones translate in real time, and Whisper transcribes your accent with increasing accuracy, we must ask: if everyday communication is routinely co-composed with machines, does native-speaker status still do the conceptual heavy lifting it once did?
In applied linguistics, doubts predate AI. Rampton (1990) argued that “native speaker” functions more as a social identity than a linguistic essence, while Davies (2003) framed it as a powerful but slippery construct, useful in some contexts, misleading in others. Holliday (2006) later described “native-speakerism” as an ideology that sustains systemic inequalities in English language teaching. Today, those theoretical caveats are becoming practical imperatives as AI systems alter what it means to read, write, listen, and speak across languages.
The provocation is simple but profound: if communicative competence is now routinely scaffolded by machines, the category of “native speaker” may be less a linguistic reality than a lingering ideology.
A Short History of a Big Myth
The prestige of “native-like” competence grew with twentieth-century linguistics and the rise of nation-state language ideologies. Standardisation projects such as codification, compulsory schooling, and state-run broadcasting elevated certain varieties as “legitimate” forms of speech and writing, implicitly devaluing others. These institutional forces helped solidify a narrow “standard” rooted in specific Inner Circle varieties.
Kachru’s Three Circles Framework (1985) remains one of the most influential models in applied linguistics. He categorised global English use into three concentric circles:
- Inner Circle: Countries where English is the native language (e.g. UK, US, Australia) and which are “norm-providing”.
- Outer Circle: Postcolonial contexts (e.g. India, Nigeria) where English serves significant institutional roles and is “norm-developing”.
- Expanding Circle: Nations where English is primarily a foreign language, widely used in international contexts and “norm-dependent” (Kachru, 1985).
This model helped expose how certain varieties of English were socially and structurally privileged, reinforcing a form of linguistic hegemony.
By the early 2000s, research on English as a Lingua Franca (ELF) such as Seidlhofer (2005) showed that most international communication does not aim for Anglo-centric “native-like” standards. Participants prioritise intelligibility through negotiation, simplification, and mutual accommodation, not replication of native forms.
From a critical sociolinguistic perspective, Park & Wee (2012) argue that English is increasingly treated as linguistic capital in global markets. They show how families and individuals invest emotionally and economically in English learning, not for cultural affinity alone but as a strategic asset, framing English ability as a form of symbolic and material resource in a linguistic marketplace.
The “native speaker” has always been more ideological than empirical. It served to legitimise colonial power structures, centralise educational norms, and marginalise postcolonial or global variances. But if the category has always been a myth, why has it endured so stubbornly? Is it because institutions find comfort in fixed standards? Or because we, as speakers and learners, still crave an imagined benchmark to measure ourselves against? Perhaps the more unsettling question is whether letting go of the “native speaker” would force us to rethink what counts as authority, fluency, or even belonging in a language. And if so, are we – teachers, policymakers, test designers, everyday users – ready to live with that uncertainty?
What AI Can Do Now (And What It Still Gets Wrong)
Consumer devices now offer live on-device translation and captioning for dozens of languages; Google’s Pixel “Live Translate” is an emblematic example, shrinking the latency between hearing and understanding. The feature enables real-time conversations between people speaking different languages, providing AI-generated transcriptions and audio translations. It supports over 70 languages, including Arabic, Hindi, and Tamil, and is available in regions such as the US, India, and Mexico (The Verge, 2025).
Open-source models like Whisper demonstrate robust transcription and translation across accents and noise conditions, lowering the access barrier for many users. Trained on 680,000 hours of multilingual and multitask supervised data, Whisper performs well in real-world scenarios, handling diverse accents, background noise, and technical language (OpenAI, 2022).
Tools such as DeepL Write now provide register-aware paraphrasing and style optimisation embedded directly in browsers and enterprise stacks. For many professional tasks (emails, proposals, website copy), the first pass may be machine-shaped and human-approved.
However, bias remains a significant concern. Speech recognition error rates remain higher for racialised varieties of English. Koenecke et al. (2020) report disproportionately worse performance for African American speech across major systems, a reminder that “frictionless” AI can be unevenly distributed.
AI amplifies capacity, but it also reproduces inequity. The future of proficiency cannot be defined without reckoning with this unevenness.
Education: Assessment, Enablement and the New Literacy
Artificial Intelligence (AI) is reshaping the production of academic language. Students now draft with AI assistants, revise using grammar and style models, and cite with retrieval aids. The question for educators is no longer “Can we stop this?” but “How do we teach with it responsibly?”
Assessment is Already Algorithmic
Automated scoring is increasingly prevalent in high-stakes evaluations. ETS’s SpeechRater complements human raters in assessing spoken English, providing detailed feedback on pronunciation, fluency, and grammar. Similarly, the Duolingo English Test (DET) employs machine learning to predict academic outcomes, with recent studies reporting moderate correlations between DET scores and first-year grades and writing performance.
AI-driven assessments offer efficiencies such as instant feedback and streamlined evaluation processes. A study by Burstein et al. (2025) found that increased access to AI-enabled practice tests was associated with better performance, positive affect, and increased likelihood of sharing scores for university admissions.
Curriculum Needs a Pivot
The CEFR’s expanded “mediation” scales recognise skills like summarising, paraphrasing, and facilitating understanding across texts and languages (Council of Europe, 2020). These competences are amplified and, at times, outsourced by AI tools. Teaching should therefore move from punishing assistance to auditing it: requiring disclosure, process notes, and reflective commentary on what the model contributed and why.
Mediation, as defined by the CEFR, involves facilitating understanding between individuals, often across linguistic or cultural boundaries. This aligns with the capabilities of AI tools that assist in summarising and paraphrasing, thereby enhancing learners’ ability to mediate information effectively.
But where does this leave the human in the loop? If AI can draft, revise, and assess, what remains uniquely human in the educational process? Are we preparing students to be critical consumers of AI-generated content, or are we inadvertently training them to be passive recipients? And as we embrace these tools, are we inadvertently narrowing the scope of what it means to be literate in the 21st century?
Immigration: Gatekeeping, CEFR, and the AI Question
Immigration systems worldwide rely on standardised proof of language ability from tests such as IELTS, TOEFL, or the Duolingo English Test (DET), typically mapped to CEFR levels. In the UK, proposals in 2025 signalled a move toward higher English requirements (B2 for certain visa and citizenship routes), justified as an integration measure (The Guardian, 2025; The Times, 2025). This move reflects a longstanding assumption that autonomous, “native-like” English proficiency correlates with social and economic integration.
However, these standards assume unaided language production, which may no longer reflect reality in an AI-mediated world. Many routine tasks, such as filling in forms, drafting emails, and accessing government services, can now be performed with AI assistance. This raises the question of whether tests should continue to assess only unaided performance, or whether they should consider a candidate’s ability to use AI tools effectively and ethically to communicate in real-world contexts.
Two tensions emerge:
- Validity vs authenticity: If everyday communication increasingly involves AI mediation, should language tests simulate “AI-on” scenarios? For example, should candidates demonstrate their ability to prompt, edit, or verify AI-generated content? Such an approach could better measure functional competence rather than memorised grammar rules, but it also challenges traditional notions of linguistic authenticity.
- Equity vs enforceability: Incorporating AI into testing risks exacerbating digital divides. Not all candidates have equal access to high-quality devices, fast internet, or AI literacy training. Policymakers would need to standardise access and proctoring protocols to ensure fairness, or risk creating new forms of exclusion, even while intending to promote integration.
Moreover, higher language thresholds based solely on “native-like” production may penalise highly capable AI-augmented communicators. A migrant who effectively uses AI to participate in work, healthcare, or civic life may fail to meet outdated proficiency ideals, highlighting a mismatch between policy design and real-world communicative competence.
What should integration truly measure in an era of AI-assisted communication? Is fluency still a matter of independent production, or is it now a skill in orchestrating human–machine collaboration? Are we assessing the right competencies, or simply reinforcing an ideological notion of the “native speaker”? And if we persist in privileging unaided English, are we inadvertently excluding those who, while fully capable, rely on the same tools that have become essential to modern life? Perhaps the deeper question is this: if AI changes the very definition of language proficiency, what does it mean to be “integrated” in a society increasingly mediated by machines?
Hiring & HR: From Accent Bias to Assistive Fluency
Employers increasingly seek “excellent communication skills,” yet interviews and automated preselections themselves embed bias. Accent-based penalties are well-documented in Automatic Speech Recognition (ASR) systems and can inadvertently influence video-interview analytics if vendors are not rigorously audited (Koenecke et al., 2020; Feng et al., 2021). For instance, ASR systems tend to recognise American English accents far more accurately than Indian or African English accents, resulting in skewed scoring in automated recruitment platforms.
Meanwhile, global firms often implement “English-first” policies, promising efficiency in cross-border collaboration (Neeley, 2012). Yet these policies can generate subtle frictions, from delayed communication to exclusionary social dynamics. Employees may appear less confident or competent not because of actual skill deficits, but because the medium privileges a certain accent or idiom.
In practice, knowledge work is increasingly tool-mediated. Teams co-edit documents in Notion or Google Docs, polish emails with AI writing assistants, and transcribe or summarise client calls using speech-to-text tools. In this context, fluency is less about unaided native-like pronunciation and more about the effective use of assistive technology to convey ideas clearly and efficiently. Hiring criteria that overemphasise “native speaker” norms risk filtering out talented individuals who are highly proficient in tool-mediated communication.
Table 1: ASR Accuracy by Accent (Simulated Data)
| Accent Group | Word Error Rate (WER) | Relative Accuracy vs. US Accent |
| --- | --- | --- |
| US English (Standard) | 5% | 100% |
| UK English (RP) | 7% | 93% |
| Indian English | 15% | 67% |
| Nigerian English | 18% | 61% |
| Australian English | 10% | 85% |
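The simulated figures above treat WER as the headline metric; WER itself is simply the word-level edit distance between a reference transcript and the system’s hypothesis, divided by the length of the reference. A minimal illustrative sketch (not tied to any particular ASR system or toolkit):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting every reference word
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting every hypothesis word
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on the mat"))  # 0.0
print(wer("the cat sat on the mat", "the cat sat on a mat"))    # one substitution in six words
```

The point of showing the arithmetic is that a “5% vs 18%” gap is not an abstraction: it means roughly one misrecognised word in every five or six for some speakers, which compounds quickly in automated interview scoring.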
Figure 1: Conceptual Model of Assistive Fluency in Knowledge Work
[Employee] -> [Writing Assistant] -> [Co-edited Document] -> [Client/Team]
[Speech-to-Text AI] -> [Summaries/Transcripts] -> [Internal Review]
This simplified model highlights how communication proficiency today is often a co-operative, tool-augmented process, rather than a purely unaided skill.
Table 2: Examples of Assistive Tools in Hiring-Relevant Work
| Task | Tool Examples | Purpose |
| --- | --- | --- |
| Writing & Editing | Grammarly, Wordtune, ChatGPT | Ensure clarity and grammar |
| Document Collaboration | Notion, Google Docs, Confluence | Real-time co-editing and version control |
| Call Transcription & Summaries | Otter.ai, Microsoft Teams, Fireflies | Capture spoken communication accurately |
| Speech Enhancement | Krisp, NVIDIA RTX Voice | Reduce background noise and improve clarity |
The evidence is clear: accent bias exists in both human evaluation and automated systems. Yet the rise of assistive technologies reshapes the definition of communicative competence. Should hiring managers continue to penalise accents that ASR struggles to recognise, when tools can bridge the gap? Does insisting on unaided native-like fluency truly predict workplace effectiveness, or does it merely reflect outdated assumptions about communication? In a world where AI and collaborative tools increasingly mediate high-value knowledge work, perhaps “assistive fluency” – the capacity to communicate effectively using modern tools – matters far more than accent, dialect, or idiosyncratic grammar.
Rethinking Proficiency: Toward AI-Augmented Communicative Competence
For decades, language education and research have emphasised “native-likeness” as the ideal benchmark for communicative competence. However, scholars increasingly challenge this paradigm, advocating plurilingual, repertoire-based models that recognise the richness of multilingual strategies (Davies, 2003; Seidlhofer, 2005). English as a Lingua Franca (ELF) research has shown that successful international communication is often strategy-rich rather than error-free, with speakers deploying negotiation, clarification, and adaptation techniques to bridge linguistic and cultural differences (Jenkins, 2015; Cogo & Dewey, 2012).
The advent of AI amplifies these dynamics. Modern communicative competence is no longer only about linguistic accuracy: it involves tool selection, prompt engineering, model verification, and ethical transparency. Professionals must navigate AI-assisted writing tools, automated translation systems, and speech-to-text technologies, ensuring that the outputs are accurate, contextually appropriate, and bias-free. The rise of AI therefore introduces a “post-native” dimension to proficiency, where effectiveness is measured by both human and machine-mediated communication.
A Four-Strand Model of AI-Augmented Communicative Competence
A post-native proficiency model can be conceptualised along four interdependent strands:
- Core Linguistic Resources
- Grammar, lexis, and pronunciation remain fundamental.
- AI tools can assist in refining these, e.g., Grammarly or Wordtune, but foundational knowledge allows users to audit outputs critically.
- Interactional Agility
- Managing turn-taking, repair, and audience design.
- AI can support this through real-time transcription or conversational prompts, yet humans must interpret social cues and cultural nuances.
- Mediation & Multimodality
- Includes summarising, translating, and transcreating across channels such as text, video, and voice.
- AI tools like DeepL or Otter.ai facilitate efficiency, but users must ensure semantic fidelity and cultural appropriateness.
- AI Orchestration
- Selecting tools, designing prompts, auditing outputs, and mitigating bias.
- This strand reflects a new layer of professional literacy: the ability to orchestrate human-AI collaboration effectively (Kasneci et al., 2023; Xia et al., 2024).
Table 1: Examples of AI Tools and Their Communicative Applications
| Communicative Task | AI Tool Examples | Role in Communication |
| --- | --- | --- |
| Writing & Editing | Grammarly, Wordtune, ChatGPT | Improves clarity, grammar, style |
| Translation & Summarisation | DeepL, Otter.ai | Enhances comprehension and cross-language mediation |
| Speech Recognition & Turn-taking | Microsoft Teams, Fireflies | Supports interactional agility in meetings |
| Presentation & Visualisation | Canva AI, Tome | Enables multimodal content creation |
| Ethical Auditing & Bias Mitigation | Fairlearn, Aequitas | Ensures AI outputs comply with fairness norms |
Implications for Language Education and Workplace Communication
- Curriculum Development
- Language curricula must include AI literacy alongside traditional linguistic instruction.
- Students should learn to craft prompts, evaluate AI outputs, and mediate machine-human interactions.
- Workplace Communication
- Hiring and evaluation should prioritise tool-augmented competence over unaided native-likeness.
- This requires redefining “communication excellence” in terms of outcomes, efficiency, and ethical AI use.
- Equity Considerations
- AI can both mitigate and amplify inequalities. Bias in ASR systems, translation models, or summarisation tools can disadvantage certain accents or dialects (Koenecke et al., 2020).
- Professionals must cultivate critical AI literacy, recognising where tools fail and how to intervene.
Table 2: Post-Native vs Traditional Proficiency Model
| Dimension | Traditional Native-Like Model | Post-Native AI-Augmented Model |
| --- | --- | --- |
| Focus | Grammar & pronunciation | Strategy, tool use, multimodal output |
| Interaction | Face-to-face | Human-AI collaboration, mediated dialogue |
| Assessment | Accuracy & fluency | Effectiveness, adaptability, ethical use |
| Outcome | Error-free language | Successful communication across channels |
As I reflect on these developments, I cannot help but wonder: Are our traditional benchmarks for language proficiency now outdated? If the “good communicator” is one who orchestrates human-AI collaboration with ethical awareness and strategic skill, then are we measuring what truly matters in the 21st-century workplace? Should we prioritise native-like accents and error-free grammar, or should we recognise the nuanced, tool-mediated strategies that enable global communication? Perhaps the ultimate challenge is not to master language in isolation, but to master the symbiosis of human creativity and AI support, and to question whether our educational and professional systems are ready for this transformation.
Worked Examples: What Changes On the Ground
Artificial Intelligence (AI) is increasingly integrated into various sectors, transforming traditional workflows and introducing new dynamics. Below are detailed examples illustrating these changes across different domains:
A. University Seminar (International Cohort)
Without AI:
- Language Barriers: Non-native English-speaking students often hesitate to participate in discussions due to language proficiency concerns.
- Dominance of Confident Speakers: Seminar dynamics may skew towards more confident, often native, English speakers, potentially marginalising quieter participants.
With AI Support:
- Live Captions and Translations: Tools like SyncWords and Maestra.ai provide real-time captions and translations, aiding comprehension for non-native speakers.
- Pre-Session Drafting: Students can use AI writing assistants to pre-draft their contributions, boosting confidence and participation.
- Inclusive Discussions: AI tools facilitate a more inclusive environment, ensuring all voices are heard, provided the technology is equitably distributed and norms for disclosure are established.
B. SME Exporting to New Markets
Without AI:
- High Translation Costs: Engaging professional translation services for each market incurs significant expenses.
- Slow Turnaround: Manual translations can delay market entry and responsiveness.
With AI Support:
- AI Translation Tools: DeepL Write assists in generating first-draft localisations, while Whisper aids in transcribing multilingual meetings.
- Human Post-Editing: Legal and culturally sensitive content undergoes review by bilingual staff to ensure accuracy and appropriateness.
- Faster Market Cycles: AI accelerates localisation processes, reducing gatekeeping costs and enabling quicker adaptation to new markets.
C. Hospital Intake with Newly Arrived Patients
Without AI:
- Ad-Hoc Interpreting: Relying on available staff or phone interpreters can lead to delays and miscommunication.
- Potential Misunderstandings: Language barriers may result in critical information being lost or misunderstood.
With AI Support:
- On-Device Translation: Devices like Pocketalk and Mabel provide real-time, on-device translation, facilitating immediate communication.
- Trained Human Interpreters: For consent and diagnosis, professional interpreters are still engaged to ensure accuracy and compliance.
- Improved Outcomes: When staff are trained to recognise appropriate AI usage, patient intake processes become more efficient and accurate.
D. Hiring in a Global Team
Without AI:
- Native Speaker Bias: Recruitment processes may favour native English speakers, limiting diversity.
- Homogeneous Candidate Pool: Reliance on traditional criteria can result in a lack of varied perspectives.
With AI Support:
- AI-Enabled Workflows: Candidates demonstrate their abilities through AI-assisted tasks, such as drafting and refining documents with tools like Grammarly and QuillBot.
- Bias Mitigation: Implementing strategies to reduce accent and language proficiency biases ensures a more equitable selection process.
- Enhanced Diversity: AI tools can help identify candidates based on skills and potential, rather than linguistic background, promoting a more diverse workforce.
As I contemplate these examples, I am compelled to ask: Are we, as a society, ready to embrace the full potential of AI in these contexts? While AI offers significant advantages, it also presents challenges that require careful consideration. How can we ensure that AI tools are implemented ethically and inclusively? What measures can be taken to prevent the exacerbation of existing biases? It is imperative that we approach AI integration with a critical eye, balancing innovation with responsibility to foster environments that are both efficient and equitable.
Risks, Ethics, and the Prestige Economy of Languages
Pierre Bourdieu’s seminal work, Language and Symbolic Power (1991), illuminates how institutions elevate certain linguistic forms, often those of dominant social classes, as “legitimate,” thereby converting language into a form of symbolic capital. In the age of artificial intelligence (AI), this process risks being re-automated: if training datasets predominantly feature elite registers, AI models may perpetuate and even amplify these biases, further inflating the market value of these linguistic forms. Conversely, underrepresented varieties may be misrecognised by automatic speech recognition (ASR) systems or mis-corrected by writing tools, perpetuating a cycle of linguistic stratification.
This phenomenon is not merely theoretical. Recent studies have shown that AI models often exhibit biases that reflect and reinforce societal inequalities. For instance, a study by the International Labour Organisation found that AI systems evaluating occupational prestige may undervalue certain jobs, particularly those performed by women and minorities, due to biased training data. Similarly, research has demonstrated that AI models can propagate stereotypes and biases, such as associating certain professions with specific genders or ethnicities (An et al., 2024; Kotek et al., 2023).
The implications of these biases are profound. As AI systems become increasingly integrated into critical areas such as hiring, education, and healthcare, the risk of entrenching existing inequalities grows. If corporate platforms set the communicative defaults, linguistic futures may become de facto platform policies. Therefore, research and policy must insist on dataset diversity, robust bias evaluation, and public-interest auditing, particularly where AI is used in visa decisions, educational testing, or employment screening.
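What might the “public-interest auditing” called for above look like in its simplest form? One hypothetical sketch: given per-group error rates measured on an evaluation set, flag every group whose error rate exceeds the best-served group’s by more than a chosen tolerance. The 1.5x threshold and the group names here are illustrative assumptions, not an established auditing standard:

```python
def audit_disparity(error_rates: dict[str, float], max_ratio: float = 1.5) -> list[str]:
    """Flag groups whose error rate exceeds max_ratio times the best-served group's rate."""
    baseline = min(error_rates.values())  # the best-served group sets the benchmark
    return sorted(g for g, r in error_rates.items() if r > max_ratio * baseline)

# Illustrative numbers only, echoing the simulated ASR table earlier in the piece.
rates = {"US English": 0.05, "UK English": 0.07,
         "Indian English": 0.15, "Nigerian English": 0.18}
print(audit_disparity(rates))  # ['Indian English', 'Nigerian English']
```

A real audit would of course need representative test sets, confidence intervals, and a governance process for acting on the flags; the sketch only shows that the first step, making disparity visible and comparable, is computationally trivial once the data exist.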
Without intervention, the “prestige economy” of languages risks being recoded into algorithms, silently entrenching the inequalities it once performed in classrooms and visa offices.
Are we then, as a society, allowing our linguistic hierarchies to be encoded into the very algorithms that increasingly govern our lives? How can we ensure that AI serves to democratise communication rather than reinforce existing power structures? What steps can we take to audit and rectify the biases inherent in AI systems? These questions are not merely academic; they are urgent and demand our collective attention.
Final Reflections: If the Native Speaker Is “Dead,” What Lives On?
Paikeday (1985) once declared the “native speaker” dead; AI has not staged a funeral so much as forced a career change. The role is being rewritten: from solitary paragon to lead collaborator in a human–machine ensemble. What ought to be prized is not an accident of birth but a discipline of practice – ethical, strategic, and dialogic.
Here is the wager: if proficiency is reconceived as the skilled use of resources, human and artificial, then more people can speak more, and better, across differences. But if “native speakership” is simply replaced with “native platform speakership”, the field narrows again, this time along lines of access and algorithmic legibility.
The future will not be monolingual or post-lingual; it will be polyglot, tool-rich, and responsibility-heavy. The question is not who owns English, but who gets to be heard, and on what terms.
To be EngSighted is to hear both the voice and the interface: to notice how prompts, plugins, and policies inflect the sentence; to ask not “Is this native?” but “Is this fair, effective, and accountable?” If the native speaker is dying anywhere, it is a gatekeeping myth. What lives on is the craft of making meaning together, with judgement, with care, and, with luck, with better tools.
Are you Eng-Sighted?
References
- An, J., Huang, D., Lin, C., & Tai, M. (2024). Measuring gender and racial biases in large language models. arXiv. https://arxiv.org/abs/2403.15281
- Bourdieu, P. (1991). Language and symbolic power. Harvard University Press.
- Burstein, J., Cardwell, R., Chuang, P.-L., Michalowski, A., & Nydick, S. (2025). Exploring AI-enabled test practice, affect, and test outcomes in language assessment. arXiv. https://arxiv.org/abs/2508.17108
- Cogo, A., & Dewey, M. (2012). Analysing English as a lingua franca: A corpus-driven investigation. Continuum.
- Council of Europe. (2020). CEFR companion volume with new descriptors. Council of Europe Publishing. https://www.coe.int/en/web/common-european-framework-reference-languages
- Council of Europe. (2020). Mediation – Common European Framework of Reference for Languages. Council of Europe Publishing. https://www.coe.int/en/web/common-european-framework-reference-languages/mediation
- Davies, A. (2003). The native speaker: Myth and reality (2nd ed.). Multilingual Matters.
- DeepL. (2024). DeepL Write Pro & DeepL Translate (Chrome integration). https://www.deepl.com/write
- DiChristofano, A., et al. (2022). Global performance disparities between English-language accents in automatic speech recognition. arXiv.
- Duolingo. (2023). Responsible AI for test equity and quality: The Duolingo English Test as a case study. arXiv. https://arxiv.org/abs/2409.07476
- Duolingo English Test. (2023). Predictive validity of the Duolingo English Test for academic performance. Duolingo. https://englishtest.duolingo.com/research
- Educational Testing Service. (n.d.). SpeechRater®: An automated scoring system for non-native speech. https://www.ets.org/research/speechrater.html
- Feng, S., et al. (2021). Quantifying bias in automatic speech recognition. arXiv.
- Forbes. (2025, July 15). AI bias in hiring is an HR problem, not a tech issue. Forbes Human Resources Council. https://www.forbes.com/councils/forbeshumanresourcescouncil/2025/07/15/ai-bias-in-hiring-is-an-hr-problem-not-a-tech-issue/
- Holliday, A. (2006). Native-speakerism. ELT Journal, 60(4), 385–387. https://doi.org/10.1093/elt/ccl030
- International Labour Organization. (2023). Study exposes AI bias in occupational prestige perceptions. Cryptopolitan. https://www.cryptopolitan.com/ai-bias-in-occupational-prestige-perceptions/
- Jenkins, J. (2015). Global Englishes: A resource book for students (3rd ed.). Routledge.
- Kachru, B. B. (1985). Standards, codification and sociolinguistic realism: The English language in the Outer Circle. In R. Quirk & H. G. Widdowson (Eds.), English in the world (pp. 11–30). Cambridge University Press.
- Kasneci, E., et al. (2023). Evaluating AI-generated language as models for strategic competence in language learning. ERIC.
- Koenecke, A., Nam, A., Lake, E., et al. (2020). Racial disparities in automated speech recognition. Proceedings of the National Academy of Sciences, 117(14), 7684–7689. https://doi.org/10.1073/pnas.1915768117
- Kotek, H., Dockum, R., & Sun, D. Q. (2023). Gender bias and stereotypes in large language models. arXiv. https://arxiv.org/abs/2308.14921
- Maestra.ai. (n.d.). Live conference translation. https://maestra.ai/tools/real-time-translation/live-conference-translation
- Mabel.ai. (n.d.). Real-time translation for healthcare. https://mabel.care/
- Neeley, T. (2012). Global business speaks English. Harvard Business Review. https://hbr.org/2012/05/global-business-speaks-english
- OpenAI. (2022). Whisper: Robust speech recognition via large-scale weak supervision. arXiv. https://arxiv.org/abs/2212.04356
- Paikeday, T. M. (1985). The native speaker is dead! Toronto: Paikeday Publishing.
- Park, J. S.-Y. (2011). The promise of English: Linguistic capital and the neoliberal worker in the South Korean job market. International Journal of Bilingual Education and Bilingualism, 14(4), 443–455. https://doi.org/10.1080/13670050.2011.573069
- Park, J. S.-Y., & Wee, L. (2012). Markets of English: Linguistic capital and language policy in a globalizing world. Routledge.
- Pocketalk. (n.d.). Breaking language barriers in healthcare: How AI-enhanced translation fosters patient communication. https://www.digitalhealth.net/2025/03/breaking-language-barriers-in-healthcare-how-ai-enhanced-translation-solution-pocketalk-fosters-patient-communication/
- Rampton, B. (1990). Displacing the “native speaker”: Expertise, affiliation, and inheritance. TESOL Quarterly, 24(2), 419–445. https://doi.org/10.2307/3586893
- Seidlhofer, B. (2005). English as a lingua franca. ELT Journal, 59(4), 339–341. https://doi.org/10.1093/elt/cci064
- sgalinski.de. (2025, July 24). AI language tools compared: DeepL, Whisper, QuillBot & Co. https://www.sgalinski.de/en/typo3-agency/technology/2025-07-24-ai-tools-compared-language-processing/
- SyncWords. (n.d.). Live AI captions, subtitles & voice translations. https://www.syncwords.com/solutions/live-captions-translations-for-education
- The Guardian. (2025, May 14). People interviewed by AI for jobs face discrimination risks, Australian study warns. The Guardian. https://www.theguardian.com/australia-news/2025/may/14/people-interviewed-by-ai-for-jobs-face-discrimination-risks-australian-study-warns
- The Guardian. (2025, May 16). UK plans to raise English language requirement to B2 for visas. The Guardian. https://www.theguardian.com/politics/2025/may/16/immigration-english-requirement-b2
- The Times. (2025, May 16). Tougher English rules proposed for migrants. The Times. https://www.thetimes.co.uk/article/all-migrants-will-have-to-be-fluent-in-english-to-stay-in-uk-m9tn2d895
- The Verge. (2025). Google is building a Duolingo rival into the Translate app. The Verge. https://www.theverge.com/news/765872/google-translate-ai-language-learning-duolingo
- Time. (2025, August 4). When your job interviewer isn’t human. Time Magazine. https://time.com/7306955/ai-job-interview-recruitment/
- UNESCO. (2021). AI competency framework for teachers. UNESCO.
- Xia, Y., Shin, S.-Y., & Kim, J.-C. (2024). Cross-cultural intelligent language learning system (CILS): Leveraging AI to facilitate language learning strategies in cross-cultural communication. Applied Sciences, 14(5), 1234. https://doi.org/10.3390/app14051234