On July 29, members of the Freshfields Global AI Practice joined industry leaders at the 2025 Corporate Board Member AI Leadership Forum in our New York office. We were proud to lead two workshops: “Cybersecurity 2030: Establishing Your AI Risk Posture” and “A Regulatory Check-in: What Does the Future Hold Here and Abroad?”
These sessions offered practical guidance on navigating the cybersecurity challenges posed by AI and offered a strategic outlook on the evolving regulatory environment—both in the U.S. and internationally. We appreciated the opportunity to engage with a diverse group of board members and senior executives on how to proactively manage AI-related risks and prepare for what’s ahead.
Here are some key takeaways from the lively discussions:
The cybersecurity workshop addressed critical topics such as the risk assessment process for adopting AI capabilities, managing supply chain vulnerabilities, strengths and challenges in varying approaches to governance, and more. It also covered the practical aspects of maintaining visibility of AI activities across the enterprise and the importance of translating initial risk assessments into continuous risk management. Finally, the workshop explored how certain approaches to governance and risk management are particular to AI and whether and how different approaches will be required for evolving technologies, such as quantum computing.
- Strategic Risk Assessment and Sourcing: Before adopting AI, companies are well-advised to conduct a multi-disciplinary risk assessment of relevant aspects of the implementation. This process should consider whether AI security risks require special treatment compared to general technology risks. Companies must also carefully weigh the benefits and risks of building AI solutions in-house versus outsourcing to third-party providers.
- Implementation and Governance: Effective governance includes understanding the legal and regulatory challenges across the data lifecycle. Ongoing risk management and clear procedural controls strengthen such governance, with defined roles for core compliance staff, the C-suite, and the board.
- Crisis Preparedness: The workshop explored the value of incident preparation and tools that are available to put companies in a strong position to respond in a crisis. Such tools include AI Red Teaming, a method to proactively identify vulnerabilities specific to AI implementation. Companies will also benefit from a strategy to get ahead of potential reputational threats that can arise from the adoption of advanced technologies.
There are contrasting approaches of the EU and the US, the complexities of enforcement, and best practices for corporate compliance. The EU is moving forward with a comprehensive, risk-based framework called the AI Act, despite facing significant implementation delays and pushback from industry and some member states. In stark contrast, the US, under a new administration, is pursuing a strategy of reduced regulation to accelerate AI innovation and maintain global leadership. Enforcement of these new and existing laws is fragmented and presents challenges, as it involves multiple authorities and overlaps with other regulations like GDPR and the DSA. To navigate this complex and rapidly shifting environment, companies are advised to adopt proactive compliance strategies, including robust internal tracking, policy updates, and establishing cross-functional AI governance councils.
- EU Regulation vs. US Deregulation: The EU AI Act is a comprehensive, risk-based framework with high fines for non-compliance, aiming to set a global standard for AI regulation. However, this approach is being challenged by a contrasting shift in the US towards deregulation, with a new administration repealing previous AI orders and focusing on accelerating innovation.
- Implementation Challenges and Delays: The EU AI Act's phased implementation is facing delays in releasing crucial guidance and harmonized standards. For example, guidance on prohibited AI practices was only released after the provisions came into effect, and standards for high-risk AI systems are not expected until the end of 2025, which is a significant delay from the originally anticipated timeline.
- Fragmented and Overlapping Enforcement: The enforcement of AI regulations is fragmented, with both the European Commission and various national authorities involved, which could lead to different enforcement priorities and interpretations across member states. This is further complicated by the fact that AI enforcement overlaps with existing regulations such as GDPR and the Digital Services Act (DSA), creating multiple layers of scrutiny.
- Importance of Proactive Governance: Given the dynamic regulatory environment, companies are well advised to consider a proactive approach to AI governance and compliance. Key best practices include:
- Engaging with Regulators: Building dialogue with regulators can help companies navigate the evolving landscape and demonstrates a commitment to compliance.
- Robust Internal Tracking: It is essential to have a clear understanding of the AI systems a company uses, the data they collect, and the use of questionnaires, catalogues, and factbooks to maintain this knowledge.
- Updating Policies: Existing policies related to cyber, data/privacy, procurement, and employee handbooks need to be updated to address AI-specific considerations.
- Establishing an AI Council: A cross-functional council of senior leaders is necessary to direct the development and deployment of AI and to manage associated risks.
- Board-Level Responsibility: Corporate boards are expected to have a strong grasp of their organization's AI governance, risk management, and compliance frameworks. They need to understand how AI tools are developed and deployed, what data they use, and which third parties are involved.
The AI regulatory landscape is undoubtedly dynamic and presents both challenges and opportunities. By proactively addressing these issues, businesses can better navigate the complexities and ensure responsible innovation in the age of AI.