Abdul, A., et al.: Trends and trajectories for explainable, accountable and intelligible systems: an HCI research agenda. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (2018)
Adadi, A., Berrada, M.: Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6, 52138–52160 (2018)
Amato, F., et al.: Artificial neural networks in medical diagnosis. J. Appl. Biomed. 11(2), 47–58 (2013)
Amershi, S., et al.: Guidelines for human-AI interaction. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (2019)
Tricco, A.C., Lillie, E., Zarin, W., et al.: PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann. Intern. Med. 169, 467–473 (2018). [Epub 4 September 2018]. https://doi.org/10.7326/M18-0850
Apple: Human Interface Guidelines. https://developer.apple.com/design/human-interface-guidelines/. Accessed 30 Dec 2010
Arya, V., et al.: One explanation does not fit all: a toolkit and taxonomy of AI explainability techniques. arXiv preprint arXiv:1909.03012 (2019)
Arrieta, A.B., et al.: Explainable Artificial Intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inform. Fusion 58, 82–115 (2020)
Baldauf, M., Peter, F., Rainer, E.: Trust me, I’m a doctor – user perceptions of AI-driven apps for mobile health diagnosis. In: Proceedings of the 19th International Conference on Mobile and Ubiquitous Multimedia (2020)
Band, S.S., et al.: Application of explainable artificial intelligence in medical health: a systematic review of interpretability methods. Informatics in Medicine Unlocked 101286 (2023)
Barda, A.J., Horvat, C.M., Hochheiser, H.: A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare. BMC Med. Inform. Decis. Mak. 20, 1–16 (2020)
Barac, R., et al.: Scoping review of toolkits as a knowledge translation strategy in health. BMC Med. Inform. Decis. Mak. 14, 1–9 (2014)
Bellucci, M., et al.: Towards a terminology for a fully contextualized XAI. Procedia Comput. Sci. 192, 241–250 (2021)
Blaschke, T., et al.: REINVENT 2.0: an AI tool for de novo drug design. J. Chem. Inform. Model. 60(12), 5918–5922 (2020)
Brewer, L.C., et al.: Promoting cardiovascular health and wellness among African-Americans: community participatory approach to design an innovative mobile-health intervention. PLoS ONE 14(8), e0218724 (2019)
Brand, G., et al.: Whose knowledge is of value? Co-designing healthcare education research with people with lived experience. Nurse Educ. Today 120, 105616 (2023)
Bødker, S., Pekkola, S.: Introduction to the debate section: a short review to the past and present of participatory design. Scand. J. Inf. Syst. 22(1), 4 (2010)
Bove, C., et al.: Contextualization and exploration of local feature importance explanations to improve understanding and satisfaction of non-expert users. In: 27th International Conference on Intelligent User Interfaces (2022)
Brown, T.: Change by design: how design thinking creates new alternatives for business and society. Collins Business (2009)
Buschek, D., Eiband, M., Hussmann, H.: How to support users in understanding intelligent systems? An analysis and conceptual framework of user questions considering user mindsets, involvement, and knowledge outcomes. ACM Trans. Interact. Intell. Syst. 12(4), 1–27 (2022)
Cabour, G., et al.: An explanation space to align user studies with the technical development of explainable AI. AI Soc. 38(2), 869–887 (2023)
Caruana, R., et al.: Intelligible models for healthcare: predicting pneumonia risk and hospital 30-day readmission. In: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2015)
Colonius, I., Sandra, B., Roberta, A.: Participatory design for challenging user groups: a case study. In: Proceedings of the 28th Annual European Conference on Cognitive Ergonomics (2010)
Chatti, M.A., et al.: Is more always better? The effects of personal characteristics and level of detail on the perception of explanations in a recommender system. In: Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization (2022)
Cheng, H.-F., et al.: Explaining decision-making algorithms through UI: strategies to help non-expert stakeholders. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (2019)
Chromik, M., Butz, A.: Human-XAI interaction: a review and design principles for explanation user interfaces. In: Human-Computer Interaction – INTERACT 2021: 18th IFIP TC 13 International Conference, Bari, Italy, August 30–September 3, 2021, Proceedings, Part II. Springer International Publishing (2021)
Crupi, R., et al.: Counterfactual explanations as interventions in latent space. Data Min. Knowl. Discov. 1–37 (2022)
Deloitte AI: Deloitte Insights (2019). https://www2.deloitte.com/us/en/insights/deloitte-insights-magazine.html
Deng, Y., Antle, A.N., Neustaedter, C.: Tango cards: a card-based design tool for informing the design of tangible learning games. In: Proceedings of the 2014 Conference on Designing Interactive Systems (2014)
Donetto, S., Tsianakas, V., Robert, G.: Using Experience-based Co-design (EBCD) to improve the quality of healthcare: mapping where we are now and establishing future directions, pp. 5–7. King’s College London, London (2014)
Eiband, M., et al.: Bringing transparency design into practice. In: 23rd International Conference on Intelligent User Interfaces (2018)
Gehrmann, S., et al.: Visual interaction with deep learning models through collaborative semantic inference. IEEE Trans. Visual. Comput. Graph. 26(1), 884–894 (2019)
Ghajargar, M., et al.: Graspable AI: physical forms as explanation modality for explainable AI. In: Proceedings of the Sixteenth International Conference on Tangible, Embedded, and Embodied Interaction (2022)
Gilpin, L.H., et al.: Explaining explanations: an overview of interpretability of machine learning. In: 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA). IEEE (2018)
Gobbo, B., et al.: xai-primer.com – a visual ideation space of interactive explainers. In: CHI Conference on Human Factors in Computing Systems Extended Abstracts (2022)
Google PAIR: People + AI Guidebook (2019). pair.withgoogle.com/guidebook
Greenhalgh, T., et al.: Achieving research impact through co-creation in community-based health services: literature review and case study. Milbank Quart. 94(2), 392–429 (2016)
Greenhalgh, T., et al.: Frameworks for supporting patient and public involvement in research: systematic review and co-design pilot. Health Expect. 22(4), 785–801 (2019)
Guesmi, M., et al.: On-demand personalized explanation for transparent recommendation. In: Adjunct Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization (2021)
Guidotti, R., et al.: A survey of methods for explaining black box models. ACM Comput. Surv. 51(5), 1–42 (2018)
Guo, L., et al.: Building trust in interactive machine learning via user contributed interpretable rules. In: 27th International Conference on Intelligent User Interfaces (2022)
Gustavsson, S.M.K., Andersson, T.: Patient involvement 2.0: experience-based co-design supported by action research. Action Res. 17(4), 469–491 (2019)
Hagen, P., et al.: Participatory design of evidence-based online youth mental health promotion, intervention and treatment (2012)
Herm, L.-V., et al.: A nascent design theory for explainable intelligent systems. Electron. Mark. 32(4), 2185–2205 (2022)
Hernandez-Bocanegra, D.C., Ziegler, J.: Conversational review-based explanations for recommender systems: exploring users’ query behavior. In: Proceedings of the 3rd Conference on Conversational User Interfaces (2021)
He, X., et al.: What are the users’ needs? Design of a user-centered explainable artificial intelligence diagnostic system. Int. J. Hum. Comput. Interact. 39(7), 1519–1542 (2023)
Hohman, F., et al.: Gamut: a design probe to understand how data scientists understand machine learning models. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (2019)
Holzinger, A., et al.: Causability and explainability of artificial intelligence in medicine. Wiley Interdiscipl. Rev. Data Min. Knowl. Discov. 9(4), e1312 (2019)
Hoofnagle, C.J., Van Der Sloot, B., Borgesius, F.Z.: The European Union General Data Protection Regulation: what it is and what it means. Inform. Commun. Technol. Law 28(1), 65–98 (2019)
Johnson, K.W., et al.: Artificial intelligence in cardiology. J. Am. Coll. Cardiol. 71(23), 2668–2679 (2018)
Josh, L.: Human-centered AI Cheat-sheet (2019). https://uxdesign.cc/human-centered-ai-cheat-sheet-1da130ba1bab
Kensing, F., Blomberg, J.: Participatory design: issues and concerns. Comput. Support. Cooperat. Work 7, 167–185 (1998)
Kim, C., et al.: Learn, generate, rank, explain: a case study of visual explanation by generative machine learning. ACM Trans. Interact. Intell. Syst. 11(3–4), 1–34 (2021)
Kim, M.-Y., et al.: A multi-component framework for the analysis and design of explainable artificial intelligence. Mach. Learn. Knowl. Extract. 3(4), 900–921 (2021)
Kvan, T.: Collaborative design: what is it? Autom. Constr. 9(4), 409–415 (2000)
Kouki, P., et al.: Generating and understanding personalized explanations in hybrid recommender systems. ACM Trans. Interact. Intell. Syst. 10(4), 1–40 (2020)
Leask, C.F., et al.: Framework, principles and recommendations for utilising participatory methodologies in the co-creation and evaluation of public health interventions. Res. Involve. Engage. 5, 1–16 (2019)
Lei, L., Li, J., Li, W.: Assessing the role of artificial intelligence in the mental healthcare of teachers and students. Soft Comput. 1–11 (2023)
Liao, Q.V., Gruen, D., Miller, S.: Questioning the AI: informing design practices for explainable AI user experiences. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (2020)
Lipton, Z.C.: The mythos of model interpretability: in machine learning, the concept of interpretability is both important and slippery. Queue 16(3), 31–57 (2018)
Liu, J., et al.: Increasing user trust in optimisation through feedback and interaction. ACM Trans. Comput.-Hum. Interact. 29(5), 1–34 (2023)
Lopes, P., et al.: XAI systems evaluation: a review of human and computer-centred methods. Appl. Sci. 12(19), 9423 (2022)
Markus, A.F., Kors, J.A., Rijnbeek, P.R.: The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies. J. Biomed. Inform. 113, 103655 (2021)
Meske, C., Bunde, E.: Design principles for user interfaces in AI-based decision support systems: the case of explainable hate speech detection. Inf. Syst. Front. 25(2), 743–773 (2023)
Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2019)
Mohseni, S., Zarei, N., Ragan, E.D.: A multidisciplinary survey and framework for design and evaluation of explainable AI systems. ACM Trans. Interact. Intell. Syst. 11(3–4), 1–45 (2021)
Morse, J.M., et al.: Verification strategies for establishing reliability and validity in qualitative research. Int. J. Qual. Meth. 1(2), 13–22 (2002)
Mucha, H., et al.: Interfaces for explanations in human-AI interaction: proposing a design evaluation approach. In: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (2021)
Muller, M.J., Kuhn, S.: Participatory design. Commun. ACM 36(6), 24–28 (1993)
Müller, J., et al.: A visual approach to explainable computerized clinical decision support. Comput. Graph. 91, 1–11 (2020)
Naiseh, M., et al.: Explainable recommendation: when design meets trust calibration. World Wide Web 24(5), 1857–1884 (2021)
Naiseh, M., et al.: How the different explanation classes impact trust calibration: the case of clinical decision support systems. Int. J. Hum. Comput. Stud. 169, 102941 (2023)
Nakao, Y., et al.: Toward involving end-users in interactive human-in-the-loop AI fairness. ACM Trans. Interact. Intell. Syst. 12(3), 1–30 (2022)
Nazar, M., et al.: A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access 9, 153316–153348 (2021)
Neerincx, M.A., et al.: Using perceptual and cognitive explanations for enhanced human-agent team performance. In: Harris, D. (ed.) Engineering Psychology and Cognitive Ergonomics. EPCE 2018. LNCS, vol. 10906. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-91122-9_18
Partogi, M., et al.: Sociotechnical intervention for improved delivery of preventive cardiovascular care to rural communities: participatory design approach. J. Med. Internet Res. 24(8), e27333 (2022)
Pollack, A.H., et al.: PD-atricians: leveraging physicians and participatory design to develop novel clinical information tools. In: AMIA Annual Symposium Proceedings, vol. 2016. American Medical Informatics Association (2016)
Rajkomar, A., Dean, J., Kohane, I.: Machine learning in medicine. N. Engl. J. Med. 380(14), 1347–1358 (2019)
Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?” Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2016)
Robert, G., et al.: Patients and staff as codesigners of healthcare services. BMJ 350 (2015)
Roy, R., Warren, J.P.: Card-based design tools: a review and analysis of 155 card decks for designers and designing. Des. Stud. 63, 125–154 (2019)
Rudin, C.: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1(5), 206–215 (2019)
Sanders, E.B.-N., Stappers, P.J.: Co-creation and the new landscapes of design. CoDesign 4(1), 5–18 (2008)
Schoonderwoerd, T.A.J., et al.: Human-centered XAI: developing design patterns for explanations of clinical decision support systems. Int. J. Hum.-Comput. Stud. 154, 102684 (2021)
Sekiguchi, K., Hori, K.: Organic and dynamic tool for use with knowledge base of AI ethics for promoting engineers’ practice of ethical AI design. AI Soc. 35(1), 51–71 (2020)
Shneiderman, B.: Creativity support tools: accelerating discovery and innovation. Commun. ACM 50(12), 20–32 (2007)
Shneiderman, B.: Bridging the gap between ethics and practice: guidelines for reliable, safe, and trustworthy human-centered AI systems. ACM Trans. Interact. Intell. Syst. 10(4), 1–31 (2020)
Shneiderman, B.: Human-centered artificial intelligence: reliable, safe & trustworthy. Int. J. Hum.-Comput. Interact. 36(6), 495–504 (2020)
Simkute, A., et al.: XAI for learning: narrowing down the digital divide between “new” and “old” experts. In: Adjunct Proceedings of the 2022 Nordic Human-Computer Interaction Conference (2022)
Sokol, K., Flach, P.: Explainability fact sheets: a framework for systematic assessment of explainable approaches. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (2020)
Song, D., et al.: A new xAI framework with feature explainability for tumors decision-making in ultrasound data: comparing with Grad-CAM. Comput. Meth. Programs Biomed. 235, 107527 (2023)
Speith, T.: A review of taxonomies of explainable artificial intelligence (XAI) methods. In: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (2022)
Springer, A., Whittaker, S.: Progressive disclosure: empirically motivated approaches to designing effective transparency. In: Proceedings of the 24th International Conference on Intelligent User Interfaces (2019)
Springer, A., Whittaker, S.: Progressive disclosure: when, why, and how do users want algorithmic transparency information? ACM Trans. Interact. Intell. Syst. 10(4), 1–32 (2020)
Sun, L., et al.: Capturing the trends, applications, issues, and potential strategies of designing transparent AI agents. In: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (2021)
Sun, J., et al.: Investigating explainability of generative AI for code through scenario-based design. In: 27th International Conference on Intelligent User Interfaces (2022)
Sun, T.Q., Medaglia, R.: Mapping the challenges of artificial intelligence in the public sector: evidence from public healthcare. Govern. Inform. Quart. 36(2), 368–383 (2019)
Szymanski, M., Millecamp, M., Verbert, K.: Visual, textual or hybrid: the effect of user expertise on different explanations. In: 26th International Conference on Intelligent User Interfaces (2021)
Tsai, C.-H., et al.: Exploring and promoting diagnostic transparency and explainability in online symptom checkers. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (2021)
Tsianakas, V., et al.: Implementing patient-centred cancer care: using experience-based co-design to improve patient experience in breast and lung cancer services. Support. Care Cancer 20, 2639–2647 (2012)
Van der Velden, M., Mörtberg, C.: Participatory design and design for values. In: Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains, pp. 41–66 (2015)
van der Waa, J., et al.: Evaluating XAI: a comparison of rule-based and example-based explanations. Artif. Intell. 291, 103404 (2021)
Verma, S., Dickerson, J., Hines, K.: Counterfactual explanations for machine learning: a review. arXiv preprint arXiv:2010.10596 (2020)
Vilone, G., Longo, L.: Classification of explainable artificial intelligence methods through their output formats. Mach. Learn. Knowl. Extract. 3(3), 615–661 (2021)
Wadley, G., et al.: Participatory design of an online therapy for youth mental health. In: Proceedings of the 25th Australian Computer-Human Interaction Conference: Augmentation, Application, Innovation, Collaboration (2013)
Wang, D., et al.: Designing theory-driven user-centric explainable AI. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (2019)
Wang, Q., et al.: Extending the nested model for user-centric XAI: a design study on GNN-based drug repurposing. IEEE Trans. Visual. Comput. Graph. 29(1), 1266–1276 (2022)
Wang, X., Yin, M.: Are explanations helpful? A comparative study of the effects of explanations in AI-assisted decision-making. In: 26th International Conference on Intelligent User Interfaces (2021)
Weitz, K., et al.: “Let me explain!”: exploring the potential of virtual agents in explainable AI interaction design. J. Multimodal User Interf. 15(2), 87–98 (2021)
Wiens, J., et al.: Do no harm: a roadmap for responsible machine learning for health care. Nat. Med. 25(9), 1337–1340 (2019)
Xie, Y., et al.: CheXplain: enabling physicians to explore and understand data-driven, AI-enabled medical imaging analysis. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (2020)
Yang, F., et al.: How do visual explanations foster end users’ appropriate trust in machine learning? In: Proceedings of the 25th International Conference on Intelligent User Interfaces (2020)
Yang, Q.: Machine learning as a UX design material: how can we imagine beyond automation, recommenders, and reminders? AAAI Spring Symp. 1(2), 1 (2018)
Yildirim, N., et al.: How experienced designers of enterprise applications engage AI as a design material. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (2022)
Zhang, A., et al.: Stakeholder-centered AI design: co-designing worker tools with gig workers through data probes. In: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (2023)