Data Privacy and Security Training: Safeguarding Sensitive Data in AI
Training Objectives:
- Equip participants with a comprehensive understanding of data privacy and security considerations in AI.
- Provide practical strategies and best practices to ensure compliance with the EU AI Act's requirements for data protection.
- Empower teams to effectively handle and safeguard sensitive data throughout the AI lifecycle.
Training Duration: 1-day workshop
Training Fee: €1500
Module 1: Introduction to Data Privacy and Security in AI
- Understanding the significance of data privacy and security in AI systems.
- Exploring the implications of the EU AI Act's data protection mandates on AI practices.
Module 2: GDPR and Beyond: Legal Landscape for Data Protection
- A deep dive into GDPR regulations and their implications for AI data handling.
- Navigating other global data protection regulations and their alignment with the EU AI Act.
Module 3: Identifying Sensitive Data in AI
- Recognizing the types of sensitive data commonly used in AI models.
- Understanding the ethical and legal responsibilities of handling personal and sensitive data.
Module 4: Data Collection and Processing Best Practices
- Guidelines for ethically collecting, processing, and storing data in compliance with privacy regulations.
- Strategies to ensure data minimization, purpose limitation, and user consent.
Module 5: Securing Data Infrastructure in AI
- Exploring cybersecurity measures to protect data stored and processed by AI systems.
- Encryption, access controls, and monitoring for data security.
Module 6: The EU AI Act and Data Privacy Compliance
- Analyzing the EU AI Act's requirements related to data privacy and security.
- Mapping AI data practices to the Act's mandates to ensure compliance.
Module 7: Handling Data Breaches and Incidents
- Strategies for responding to data breaches and security incidents effectively.
- Preparing incident response plans to mitigate risks and minimize impact.
Module 8: Interactive Workshops and Case Studies
- Collaborative discussions on real-world scenarios related to data privacy and security in AI.
- Analyzing case studies to identify best practices for handling sensitive data.
Module 9: Building a Data Privacy Culture
- Strategies for fostering a culture of data privacy and security awareness within the organization.
- Empowering participants to champion data protection principles.
Training Format:
- Expert presentations on data privacy and security regulations, practices, and case studies.
- Group discussions to explore practical scenarios and ethical dilemmas.
- Q&A sessions to address participant queries and concerns.
Learning Outcomes:
- Comprehensive understanding of data privacy and security regulations in AI.
- Practical skills in handling and securing sensitive data throughout the AI lifecycle.
- Alignment with the EU AI Act's data protection requirements.
- Empowerment to build a data privacy culture within the organisation.
Patricia has 20 years' experience as a lawyer in data, technology, and regulatory/government affairs and is a registered Solicitor in England and Wales and in the Republic of Ireland. She has authored and edited several works on law and regulation, policy, ethics, and AI.
She is an expert advisor on the Ethics Committee of the UK’s Digital Catapult Machine Intelligence Garage, working with AI startups; serves as an expert advisor “Maestro” (a title held by only three people worldwide) on the IEEE’s CertifAIEd (previously known as ECPAIS) ethical certification panel; sits on the IEEE’s P7003 (algorithmic bias), P2247.4 (adaptive instructional systems), and P7010.1 (AI and ESG/UN SDGs) standards programmes; is a ForHumanity Fellow working on Independent Audit of AI Systems; is Chair of the Society for Computers and Law; and is a non-executive director on the boards of iTechlaw and Women Leading in AI. Until 2021, Patricia sat on the RSA’s online harms advisory panel, whose work contributed to the UK’s Online Safety Bill.
Patricia is also a linguist and speaks English, French, and German fluently.
In 2021, Patricia was listed among the 100 Brilliant Women in AI Ethics™ and named to Computer Weekly’s longlist of the Most Influential Women in UK Technology.
Nell serves as Chair of the IEEE’s ECPAIS Transparency Experts Focus Group and as Vice-Chair of the 7001 Transparency of Autonomous Systems committee on AI Ethics & Safety, working to help safeguard algorithmic trust. More recently, as Chair of the IEEE P3152 Working Group, she is leading the development of signs and symbols to help the public understand whether they are dealing with a human, a machine, or some combination of the two.
She also chairs EthicsNet.org, a community teaching prosocial behaviors to machines, and CulturalPeace.org, which is crowd-crafting Geneva Conventions-style rules for cultural conflict. Her public speaking has inspired audiences to work towards a brighter future at venues such as The World Bank, the United Nations General Assembly, and The Royal Society.
Nell serves as an Executive Consultant on philosophical matters for Apple, as well as serving as Senior Scientific Advisor to The Future Society, and Senior Fellow to The Atlantic Council. She also holds Fellowships with the British Computing Society and Royal Statistical Society, among others.
Ali Hessami is currently the Director of R&D and Innovation at Vega Systems, London, UK. He has an extensive track record in systems assurance and safety, security, sustainability, and knowledge assessment/management methodologies, with a background in the design and development of advanced control systems for business and safety-critical industrial applications.
Hessami represents the UK on the European Committee for Electrotechnical Standardization (CENELEC) and International Electrotechnical Commission (IEC) safety systems, hardware, and software standards committees. He was appointed by CENELEC as convener of several Working Groups for the review of the EN 50128 safety-critical software standard and for the update and restructuring of the software, hardware, and system safety standards.
Ali is also a member of the Cyber Security Standardisation SGA16, SG24, and WG26 groups, and founded and chairs the IEEE Special Interest Group in Humanitarian Technologies and the Systems Council chapters in the UK and Ireland Section. In 2017, Ali joined the IEEE Standards Association (SA), initially as a committee member for the new landmark IEEE 7000 standard focused on “Addressing Ethical Concerns in System Design.” He was subsequently appointed Technical Editor and later Chair of the P7000 working group. In November 2018, he was appointed Vice-Chair and Process Architect of the IEEE’s global Ethics Certification Programme for Autonomous & Intelligent Systems (ECPAIS).
Ruth’s career has spanned 30 years developing and designing IT solutions for her clients, as a network engineer, senior technical consultant, solutions architect, business analyst, and technology foresight professional. Her expertise is in introducing new technologies to business, creating managed services, and building innovative governance models within organisations.
Ruth’s passion is to work towards the ethical and sustainable development and use of technology for the good of society, enabling her clients to make wise and informed decisions and investments today to realise their preferred futures.
Ruth is the Chair of the IEEE Society on Social Implications of Technology (SSIT) Standards Committee, a member of the IEEE Standards Association’s AsiaPac Regional Advisory Group, and the Standards Coordinator for IEEE SSIT Australia and the IEEE Victorian Section, and was an active contributing member of the working group for IEEE 7000™-2021, Standard Model Process for Addressing Ethical Concerns during System Design.