
Enhancing Explainability in Medical AI: Developing Human-Centered Participatory Design Cards

  • Conference paper
  • First Online in: HCI International 2024 – Late Breaking Papers (HCII 2024)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15382)

Abstract

Explainable artificial intelligence (XAI) aims to develop AI systems that humans can readily understand and that explain their decision-making processes clearly. In high-risk fields such as medicine, the explainability of AI systems is particularly crucial, yet designers face significant challenges when carrying out explainable AI design activities in the medical domain. This research provides systematic guidance for designers on medical XAI design methods, addressing the challenges that both designers and users face in designing explainable medical AI. Through a comprehensive literature review and thematic analysis following the PRISMA process, we developed medical XAI Design Cards from a human-centered perspective to help designers explore solutions. Combining qualitative and quantitative research, we collected feedback from designers and users, validating the effectiveness of the Design Cards and their role in keeping the design process focused on medical explainability. This research fills a gap by enabling designers to engage in medical XAI design activities, enhancing the explainability of AI systems in the medical field.
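To make the kind of model-agnostic explanation the abstract refers to concrete, the sketch below shows a minimal perturbation-based feature-importance explanation for a toy clinical risk score. The "risk model", its weights, and all names here are illustrative assumptions for exposition; they are not the paper's method or data.

```python
def risk_model(age, bp, cholesterol):
    """Toy cardiovascular risk score (hypothetical linear weights)."""
    return 0.02 * age + 0.01 * bp + 0.005 * cholesterol

def explain(model, patient, delta=1.0):
    """Rank features by how much perturbing each one shifts the model's output.

    This is a simple perturbation-based, model-agnostic explanation:
    it treats the model as a black box and probes it with small changes.
    """
    base = model(**patient)
    importance = {}
    for feature, value in patient.items():
        perturbed = dict(patient, **{feature: value + delta})
        importance[feature] = abs(model(**perturbed) - base)
    # Most influential feature first
    return sorted(importance.items(), key=lambda kv: -kv[1])

patient = {"age": 62, "bp": 140, "cholesterol": 210}
print(explain(risk_model, patient))
```

For this toy model the ranking simply recovers the magnitude of each weight; with a real black-box classifier, the same probing loop would surface which patient attributes drive a particular prediction, which is the kind of information a clinician-facing explanation interface would present.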



Author information

Correspondence to Xin He.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Zhang, T., He, X. (2024). Enhancing Explainability in Medical AI: Developing Human-Centered Participatory Design Cards. In: Degen, H., Ntoa, S. (eds) HCI International 2024 – Late Breaking Papers. HCII 2024. Lecture Notes in Computer Science, vol 15382. Springer, Cham. https://doi.org/10.1007/978-3-031-76827-9_10


  • DOI: https://doi.org/10.1007/978-3-031-76827-9_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-76826-2

  • Online ISBN: 978-3-031-76827-9

  • eBook Packages: Computer Science, Computer Science (R0)
