Apr 2026
The IAPP Global Summit 2026 brought together privacy professionals, regulators and industry leaders from around the world to explore the evolving data protection landscape. From artificial intelligence (AI) governance to cross-border data challenges, the conference offered practical insights on how organisations can build trust, manage risk and adapt to rapid technological change. These insights are particularly timely for organisations in Bermuda as we mark the end of the first fully effective year of the Personal Information Protection Act (PIPA), which took effect on 1 January 2025.
Drawing on key themes from the conference, Andrew Barnes and Sarah Blair share several takeaways on privacy, cyber and AI-related risks.
Privacy as a Strategic Imperative
A consistent message throughout the Summit was that privacy should not be viewed as a barrier to innovation. Instead, it is central to maintaining trust and enabling sustainable growth. Speakers emphasised that embedding privacy into systems and processes is essential to ensuring that innovation supports, rather than undermines, society. This shift reflects a broader trend: organisations that prioritise responsible data use are better positioned to build long-term credibility with regulators, customers and stakeholders. These principles are now enshrined in PIPA, which requires organisations using personal information in Bermuda to adopt suitable measures and policies that give effect to its obligations and to the rights of individuals.
AI Governance Raises Fundamental Legal Questions
The rise of agentic AI – systems capable of making decisions and taking actions independently – is challenging traditional legal frameworks. Questions around liability, consent and accountability remain, particularly where AI operates without direct human oversight. Organisations should closely monitor legal and regulatory developments in this area, while establishing internal systems to ensure that high-risk decisions retain appropriate human involvement. While Bermuda does not currently have a dedicated AI regulatory framework, for regulated financial institutions the Bermuda Monetary Authority (BMA) published a Discussion Paper in July 2025 on the Responsible Use of Artificial Intelligence in Bermuda’s Financial Services Sector. A Stakeholder Letter issued in February 2026 summarised feedback and confirmed the BMA’s outcomes-based and principles-led approach to AI governance.
Practical Risk Management Over Formalistic Compliance
Regulators and litigators highlighted the importance of moving beyond formalistic compliance approaches. In practice, organisations should:
- invest in meaningful, scenario-based training rather than generic programmes;
- ensure alignment between legal, IT and marketing teams; and
- build cooperative relationships with regulators from the outset.
Poorly handled breach responses – including inadequate disclosures or attempts to shift blame – were noted as common triggers for increased regulatory scrutiny. Early legal input and clear internal coordination remain critical to managing incidents effectively. The BMA’s Stakeholder Letter also indicated that AI-related risks are largely addressed through existing regulatory frameworks, including corporate governance, risk management, cyber risk, operational resilience and third-party oversight requirements, and that any regulatory enhancements will be developed incrementally and proportionately.
Any Bermuda organisation deploying AI systems that use personal information must ensure compliance with PIPA’s existing requirements, particularly those regarding fairness, proportionality and transparency.
Data Minimisation Matters Under Privacy Laws
Data minimisation continues to be a cornerstone of effective privacy programmes. Collecting and retaining only what is necessary not only supports compliance with laws such as the GDPR and PIPA but also reduces risk exposure.
In parallel, privacy regimes are becoming increasingly aligned across jurisdictions. While differences remain, organisations should expect greater regulatory consistency alongside increased enforcement activity aimed at setting clear precedents. The Office of the Privacy Commissioner for Bermuda has committed to engaging with and fostering relationships with international regulators to ensure effective collaboration on data protection matters.
Managing Data in Transactions and Global Operations
Data protection considerations are now central to corporate transactions and global operations. A recurring challenge in mergers and acquisitions (M&A) activity is the absence of robust data mapping within target companies.
Best practices include maintaining:
- up-to-date data inventories and mapping;
- an inventory of data sharing agreements; and
- documented histories of data breaches/incidents.
During transactions, personal data sharing should follow a phased and controlled approach, limiting disclosure to what is necessary at each stage. PIPA deliberately provides a framework enabling the disclosure and use of personal information (often of employees or policyholders) for the purposes of a “business transaction” (for example, a purchase, merger or other acquisition or disposal) provided that certain conditions are complied with.
Employee Data Management Across the Lifecycle
Employee personal information is collected and used throughout the employment life cycle – from recruitment and onboarding, through payroll and benefits, policy compliance and investigations, to terminations and offboarding. One key area of compliance risk remains international transfers of employee personal data. Managing employee data across jurisdictions can present significant challenges, particularly where privacy laws differ from country to country. Often forgotten is the risk posed by intra-company transfers, for example when leveraging offshore human resources personnel or in the context of acquisitions.
Organisations should identify and map any transfers, ensuring that appropriate risk assessments are conducted and protection mechanisms put in place – for example, contractual arrangements or corporate codes of conduct. A structured approach – starting with understanding what data is held, who can access it and how it is used – is critical. Another high-risk area is the use of monitoring tools and AI-driven decision-making: these must be transparent, proportionate and supported by clear policies, with safeguards to prevent misuse or over-collection of data. Putting policies and systems in place is an important start, but ensuring they are implemented is key. Ultimately, policies must be supported by practical guidance and training to ensure they are understood and applied effectively in practice.
Conyers regularly advises on data protection and privacy compliance matters. For advice on your organisation’s data protection strategy, please contact the authors or your usual Conyers contact.