Document details

Explainable AI in education: fostering human oversight and shared responsibility


Description

Explainable artificial intelligence (XAI) is a sub-field of artificial intelligence (AI) that aims to explain why an AI-based system makes a particular decision or produces a given output (TechDispatch, 2023). The search for meaningful explanations is not new in the field of AI, but it was long a mainly technical concern for developers seeking reliable results from their AI systems so that those results could be accepted by end users in specific domains (Ali et al., 2023). The rapid advance of AI technology in recent years has turned these systems into general-purpose digital tools, and new considerations have arisen in this realm. In terms of ethical AI, the Ethics Guidelines for Trustworthy AI, published in 2019 by the European Commission's High-Level Expert Group on AI, established seven key requirements for trustworthy AI: (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) societal and environmental wellbeing, and (7) accountability.

Document Type Text
Language English
Contributor(s) Repositório Comum
CC Licence