AI Transparency and Explainability Training: Fostering Ethical Decision-Making
Training Objectives:
- Equip teams with the knowledge and skills to develop AI models that prioritize transparency and explainability.
- Align AI development practices with the EU AI Act's emphasis on transparent decision-making processes.
- Raise awareness about the significance of transparency and explainability in building trustworthy AI systems.
Training Duration: 1-day workshop
Training Fee: €1500
Module 1: Introduction to AI Transparency and Explainability
- Understanding the importance of transparency and explainability in AI decision-making.
- Exploring how transparent and explainable AI aligns with the EU AI Act's requirements.
Module 2: Principles of Transparent AI Models
- Exploring the fundamental principles and characteristics of transparent AI models.
- Understanding the role of transparency in building user trust and regulatory compliance.
Module 3: Techniques for Explainable AI
- Introducing techniques and methodologies to enhance the explainability of AI models.
- Exploring approaches such as LIME, SHAP, and rule-based explanations.
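Perturbation-based explainers such as LIME and SHAP share a simple core idea: change an input and observe how the model's output shifts. The sketch below is not from the course materials; it illustrates that idea with a minimal permutation-importance routine over a hypothetical, hand-written scoring function (all field names and numbers are invented for illustration):

```python
import random

# Stand-in "model": a hypothetical loan-scoring function.
# In practice this would be any trained black-box model.
def model(income, debt, age):
    return 0.6 * income - 0.3 * debt + 0.1 * age

# Small synthetic dataset (purely illustrative).
data = [
    {"income": 50, "debt": 20, "age": 30},
    {"income": 80, "debt": 10, "age": 45},
    {"income": 30, "debt": 25, "age": 52},
    {"income": 65, "debt": 40, "age": 28},
]

def predict(row):
    return model(row["income"], row["debt"], row["age"])

def permutation_importance(data, feature, seed=0):
    """Mean absolute change in prediction when one feature is shuffled."""
    rng = random.Random(seed)
    shuffled = [row[feature] for row in data]
    rng.shuffle(shuffled)
    deltas = []
    for row, value in zip(data, shuffled):
        perturbed = dict(row)
        perturbed[feature] = value  # perturb exactly one feature
        deltas.append(abs(predict(perturbed) - predict(row)))
    return sum(deltas) / len(deltas)

for feature in ("income", "debt", "age"):
    print(f"{feature}: {permutation_importance(data, feature):.2f}")
```

Libraries such as `shap` and `lime` build far more principled attributions on this same perturb-and-observe foundation; the workshop covers when each is appropriate.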
Module 4: Legal and Ethical Landscape
- Navigating legal and ethical considerations related to AI transparency and explainability.
- Analyzing the relationship between transparency, fairness, and accountability.
Module 5: Interpretable Model Development
- Practical guidance for developing AI models with built-in transparency and explainability features.
- Hands-on exercises in using interpretable algorithms and visualization techniques.
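One way to make "built-in transparency" concrete is a model whose every decision can be traced to a named, human-readable rule. The snippet below is an illustrative sketch only (the rules, thresholds, and field names are invented, not course material):

```python
# A minimal, hypothetical rule-based credit screen: the first matching
# rule decides, and the decision carries its own explanation.
RULES = [
    ("debt ratio above 0.6", lambda a: a["debt"] / a["income"] > 0.6, "decline"),
    ("income below 20k", lambda a: a["income"] < 20_000, "refer to reviewer"),
    ("default rule", lambda a: True, "approve"),
]

def decide(applicant):
    """Return (decision, explanation) -- the first matching rule wins."""
    for name, condition, outcome in RULES:
        if condition(applicant):
            return outcome, f"matched rule: {name}"

decision, why = decide({"income": 50_000, "debt": 40_000})
print(decision, "-", why)  # prints "decline - matched rule: debt ratio above 0.6"
```

Unlike a post-hoc explanation of a black-box model, the explanation here is the decision procedure itself, which simplifies both user-facing disclosure and regulatory audit.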
Module 6: The EU AI Act and Transparent Decision-Making
- Examining in depth the EU AI Act's mandates related to transparent decision-making.
- Mapping AI development practices to the Act's transparency and explainability requirements.
Module 7: Interactive Workshops and Case Studies
- Collaborative discussions on the challenges and benefits of transparent AI models.
- Analyzing real-world case studies that highlight the impact of explainability on AI deployment.
Module 8: Implementation Strategies
- Defining strategies to implement transparent and explainable AI practices within your organization.
- Addressing potential obstacles and creating an action plan for integration.
Training Approach:
- Engaging presentations by experts in AI transparency and ethics.
- Hands-on exercises to apply techniques for explainable AI model development.
- Group discussions to share insights and perspectives on transparent AI practices.
- Q&A sessions to clarify concepts and address participant inquiries.
Training Outcomes:
- Enhanced understanding of transparent and explainable AI concepts.
- Skill development in implementing AI models with built-in transparency features.
- Alignment with the EU AI Act's transparency and explainability requirements.
- Empowerment to build AI systems that foster trust and accountability.
Patricia has 20 years’ experience as a lawyer in data, technology and regulatory/government affairs and is a registered Solicitor in England and Wales, and the Republic of Ireland. She has authored and edited several works on law and regulation, policy, ethics, and AI.
She is an expert advisor on the Ethics Committee of the UK’s Digital Catapult Machine Intelligence Garage, working with AI startups, and is an expert advisor (“Maestro”, a title held by only three people in the world) on the IEEE’s CertifAIEd (previously known as ECPAIS) ethical certification panel. She sits on the IEEE’s P7003 (algorithmic bias), P2247.4 (adaptive instructional systems), and P7010.1 (AI and ESG/UN SDGs) standards programmes, is a ForHumanity Fellow working on the Independent Audit of AI Systems, is Chair of the Society for Computers and Law, and is a non-executive director on the boards of iTechlaw and Women Leading in AI. Until 2021, Patricia served on the RSA’s online harms advisory panel, whose work contributed to the UK’s Online Safety Bill.
Patricia is also a linguist and speaks fluent English, French, and German.
In 2021, Patricia was listed among the 100 Brilliant Women in AI Ethics™ and named on Computer Weekly’s longlist of the Most Influential Women in UK Technology.
Nell serves as Chair and Vice-Chair, respectively, of the IEEE’s ECPAIS Transparency Experts Focus Group and the 7001 Transparency of Autonomous Systems committee on AI Ethics & Safety, working to help safeguard algorithmic trust. More recently, as Chair of the IEEE P3152 Working Group, she is leading the development of signs and symbols to help the public understand whether they are dealing with a human, a machine, or some combination of the two.
She also chairs EthicsNet.org, a community teaching prosocial behaviors to machines, and CulturalPeace.org, which is crowd-crafting Geneva Conventions-style rules for cultural conflict. Her public speaking has inspired audiences to work towards a brighter future at venues such as The World Bank, the United Nations General Assembly, and The Royal Society.
Nell serves as an Executive Consultant on philosophical matters for Apple, as Senior Scientific Advisor to The Future Society, and as Senior Fellow to The Atlantic Council. She also holds Fellowships with the British Computer Society and the Royal Statistical Society, among others.
Ali Hessami is currently the Director of R&D and Innovation at Vega Systems, London, UK. He has an extensive track record in systems assurance and safety, security, sustainability, and knowledge assessment/management methodologies, and a background in the design and development of advanced control systems for business and safety-critical industrial applications.
Hessami represents the UK on the European Committee for Electrotechnical Standardization (CENELEC) and International Electrotechnical Commission (IEC) committees for safety systems and hardware and software standards. He was appointed by CENELEC as convener of several Working Groups for the review of the EN 50128 safety-critical software standard and the update and restructuring of the software, hardware, and system safety standards.
Ali is also a member of the Cyber Security Standardisation SGA16, SG24, and WG26 groups, and founded and chairs the IEEE Special Interest Group in Humanitarian Technologies and the Systems Council Chapters in the UK and Ireland Section. In 2017, Ali joined the IEEE Standards Association (SA), initially as a committee member for the new landmark IEEE 7000 standard, “Addressing Ethical Concerns in System Design.” He was subsequently appointed Technical Editor and later Chair of the P7000 working group. In November 2018, he was appointed Vice-Chair and Process Architect of the IEEE’s global Ethics Certification Programme for Autonomous & Intelligent Systems (ECPAIS).
Ruth’s career has spanned 30 years of developing and designing IT solutions for her clients as a network engineer, senior technical consultant, solutions architect, business analyst, and Technology Foresight professional. Her expertise lies in introducing new technologies to business, creating managed services, and building innovative governance models within organisations.
Ruth’s passion is to work towards the ethical and sustainable development and use of technology for the good of society, enabling her clients to make wise and informed decisions and investments today to enable their preferred futures.
Ruth is the Chair of the IEEE Society on Social Implications of Technology (SSIT) Standards Committee, a member of the IEEE Standards Association’s AsiaPac Regional Advisory Group, and the Standards Coordinator for IEEE SSIT Australia and the IEEE Victorian Section, and was an active contributing member of the IEEE 7000™-2021 Standard Model Process for Addressing Ethical Concerns during System Design.