Cross-Cultural Implications of Large Language Models: An Extended Comparative Analysis

  • Conference paper
  • First Online:
HCI International 2024 – Late Breaking Papers (HCII 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15375)


Abstract

This article examines the impacts of deploying large language models (LLMs) across diverse cultural contexts, emphasizing the challenges and opportunities related to their linguistic adaptability and cultural sensitivity. As globalization progresses, the need for LLMs to operate effectively and sensitively in multilingual and multicultural environments becomes increasingly critical. This study conducts a comprehensive multilingual analysis to explore how these models navigate linguistic nuances and cultural idiosyncrasies when generating and interpreting text. By investigating a diverse array of languages and cultural settings, the research identifies crucial challenges that current models face, such as biases and inaccuracies in languages with less digital representation. These biases not only reduce the accuracy of the models but may also exacerbate existing social inequalities, particularly in marginalized communities. To address these challenges, this article proposes strategies to enhance the cultural and linguistic effectiveness of LLMs. First, it emphasizes incorporating culturally inclusive training datasets during the development of AI systems, so that models are exposed to a diverse range of languages and cultural contexts. Second, it suggests integrating cultural experts into development teams to provide insight into linguistic peculiarities and cultural nuances, thereby improving the models' accuracy and sensitivity. Through quantitative and qualitative methods, the study assesses the performance of LLMs across various metrics, including cultural sensitivity and user satisfaction.
The quantitative analysis uses a series of culturally specific prompts to measure the accuracy of language generation and comprehension, while the qualitative evaluation draws on detailed feedback from language experts and native speakers to assess the contextual appropriateness and cultural relevance of the generated texts. The findings reveal that while LLMs perform well on resource-rich languages, a significant gap remains in their handling of lower-resource languages.
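The quantitative protocol described above (culturally specific prompts scored per language) can be sketched roughly as follows. Note that the prompt set, the keyword-coverage metric, and every function name here are illustrative assumptions, not the authors' actual benchmark or data.

```python
# Hypothetical sketch of a per-language evaluation over culturally
# specific prompts. The prompts, expected keywords, and the
# keyword-coverage metric are illustrative assumptions only.
from collections import defaultdict

# Each item: (language code, prompt, keywords a culturally apt answer
# should mention). Real studies would use far larger, expert-curated sets.
PROMPTS = [
    ("en", "Explain the idiom 'break the ice'.",
     {"social", "tension", "conversation"}),
    ("sw", "Eleza methali 'haraka haraka haina baraka'.",
     {"haste", "patience", "blessing"}),
]

def score_response(response: str, keywords: set) -> float:
    """Fraction of expected cultural keywords present in the response."""
    text = response.lower()
    return sum(kw in text for kw in keywords) / len(keywords)

def evaluate(model_fn) -> dict:
    """Average keyword-coverage score per language for a callable model."""
    totals, counts = defaultdict(float), defaultdict(int)
    for lang, prompt, keywords in PROMPTS:
        totals[lang] += score_response(model_fn(prompt), keywords)
        counts[lang] += 1
    return {lang: totals[lang] / counts[lang] for lang in totals}

# Stand-in model that handles the English prompt but not the Swahili one,
# mimicking the resource-rich vs. low-resource gap the study reports.
def toy_model(prompt: str) -> str:
    if "break the ice" in prompt:
        return "It means easing social tension to start a conversation."
    return "I am not sure."

scores = evaluate(toy_model)  # per-language averages, e.g. {"en": ..., "sw": ...}
```

A real harness would replace `toy_model` with an API call to the model under test and complement this automatic score with the expert ratings the study's qualitative arm describes.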



Author information

Corresponding author

Correspondence to Yuanyuan Xu.

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Shan, X., Xu, Y., Wang, Y., Lin, YS., Bao, Y. (2025). Cross-Cultural Implications of Large Language Models: An Extended Comparative Analysis. In: Coman, A., Vasilache, S., Fui-Hoon Nah, F., Siau, K.L., Wei, J., Margetis, G. (eds) HCI International 2024 – Late Breaking Papers. HCII 2024. Lecture Notes in Computer Science, vol 15375. Springer, Cham. https://doi.org/10.1007/978-3-031-76806-4_8

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-76806-4_8

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-76805-7

  • Online ISBN: 978-3-031-76806-4

  • eBook Packages: Computer Science, Computer Science (R0)
