Method & Standards at a Glance
XR5.0 has completed its interim Trustworthy AI Self-Assessment, taking stock of how the six pilot cases design, build, and operate AI. The aim is to keep our XR components aligned with the EU AI Act and to embed ethics by design from the very beginning. We used the European Commission’s Ethics Guidelines for Trustworthy AI and the seven key requirements (Figure 1) from the Assessment List for Trustworthy AI (ALTAI) as the backbone of the review: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.
Figure 1: Seven key requirements that AI systems should meet in order to be deemed trustworthy.
Methodologically, we combined ALTAI questionnaires completed by all pilot partners with brief interviews and document reviews. We mapped data flows, identified potential indirect identifiers, and checked documentation for the model lifecycle, human oversight procedures, logging and audit trails, and risk management. This work sits under WP1 Task 1.4 and was carried out with the support of the Ethics Advisory Board, ensuring that findings inform both day-to-day engineering choices and higher-level governance across the project. The assessment complements Deliverable D1.5 Ethical Management and Regulatory Compliance Framework by translating principles into concrete actions that teams can apply during design, testing, and pilot operations.
Insights & Priority Actions
Across pilots we saw strong intent and growing maturity: clearer roles for human oversight, early adoption of privacy by design, and better documentation of models and datasets. At the same time, the ALTAI responses highlighted areas that need tightening. Partners should make transparency measures more consistent, strengthen data governance with traceable lineage and access control, and clarify accountability where several organisations collaborate on the same pipeline.
Our recommendations focus on making safeguards practical and measurable. Standardise model and data cards and keep them current; define explicit human intervention steps for safety-critical actions; expand logs and audit trails so that explanations, inputs, and outputs can be traced; run regular bias and robustness checks linked to corrective actions; and keep a simple register of legal and ethical risks with owners, timelines, and evidence of closure. Together these steps raise confidence in the pilots and keep them aligned with the evolving EU AI Act.
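To make the last recommendation concrete, a risk register of the kind described can be as lightweight as a small structured record per risk. The sketch below is purely illustrative: the field names, statuses, and example entry are assumptions for demonstration, not an XR5.0 specification.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskEntry:
    """One entry in a simple legal/ethical risk register (illustrative)."""
    risk_id: str
    description: str
    owner: str            # accountable person or partner organisation
    due: date             # target date for the mitigation
    evidence: str = ""    # link or note proving closure (e.g. audit record)
    status: str = "open"  # open -> mitigating -> closed

    def close(self, evidence: str) -> None:
        # Closure requires recorded evidence, per the recommendation above.
        if not evidence:
            raise ValueError("closure requires evidence")
        self.evidence = evidence
        self.status = "closed"

# Hypothetical register with one entry, then closed with evidence.
register = [
    RiskEntry("R-01", "Indirect identifiers in pilot telemetry",
              owner="Pilot data steward", due=date(2025, 6, 30)),
]
register[0].close("DPIA update v2 + access-control review minutes")

open_risks = [r for r in register if r.status != "closed"]
```

Even a spreadsheet with the same columns would satisfy the recommendation; the point is that every risk has a named owner, a timeline, and auditable evidence when it is closed.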
Ongoing Alignment & Next Milestones
Trustworthy AI is a continuous process, not a single milestone. We will update the self-assessment on a rolling basis, track follow-up actions, and maintain a live mapping to European standards and guidance as they evolve, including the EU AI Act. The final consolidated version will feed into D1.5 at Month 36. In parallel, we will offer lightweight refreshers for pilot teams on transparency, oversight, and data governance. The goal is simple: turn high-level principles into everyday habits, so users, partners, and regulators can trust how XR5.0’s tools are designed, tested, and used from pilot phase to deployment.
Written by Marcelo Corrales Compagnucci

