Santa Clara University School of Law hosted its annual symposium on March 28, 2025, at the Locatelli Center, bringing together esteemed legal and industry professionals to discuss one of the most pressing issues of our time: the governance of artificial intelligence (AI) and privacy. The event was hosted by the High Tech Law Journal and the Journal of International Law and coordinated by Jenna Tobin.

The symposium opened with a keynote address by Ruby Zefo, Chief Privacy Officer and Associate General Counsel for Privacy & Cybersecurity at Uber. An expert in AI policy and governance, Zefo provided critical insights into the challenges and opportunities presented by emerging AI technologies. Her address emphasized the need for proactive regulation and ethical oversight in AI development, highlighting the importance of interdisciplinary collaboration among legal professionals, policymakers, and technology leaders.

The first panel, “Compliance in Practice – Bridging AI Innovation and GDPR,” featured Andrew Scott, Senior Cybersecurity Counsel at Roblox; Hina Moheyuddin, Former Privacy and Cybersecurity Associate at Aleada Consulting; Lydia de la Torre, Of Counsel at Squire Patton Boggs; and Rafa Baca, Privacy, Cybersecurity, and AI Lawyer at Beckage Firm. Linsey Krolik, Assistant Clinical Professor of Law at Santa Clara Law, moderated the discussion.

The panel focused on the operational challenges and strategies for ensuring GDPR compliance while fostering AI innovation. Key themes included AI governance and regulatory compliance frameworks, addressing GDPR’s Article 22 and balancing automated decision-making with human oversight, and aligning AI’s data needs with GDPR’s data minimization principle. The discussion also covered GDPR and AI in industry-specific applications, including how AI startups can build GDPR-compliant models and lessons learned from multinational corporations.

Panelists highlighted topics such as staying abreast of new technology, involving the right stakeholders, AI governance, vendor management, opt-outs for AI systems, and the harmonization of AI laws. The discussion delved into transparency and explainability, algorithmic discrimination, data risk mitigation, and the future outlook, including the impact of the EU AI Act.

The second panel discussion was moderated by Hai-Ching Yang, Director of Legal at Cerebras Systems and Adjunct Professor of Law at SCU. Panelists included Barbara Lawler, Founder and President of Digital Stewardship Strategies, LLC; Jess Miers, Visiting Assistant Professor of Law at the University of Akron School of Law; and Gauri Manglik, Deputy General Counsel at GoFundMe. Each of these experts provided unique insights into the challenges and opportunities presented by AI governance.

Panelists assessed the current regulatory landscape in the U.S. and the biggest challenges facing AI governance across different sectors. The discussion explored whether AI regulation should follow a sector-specific approach, rely on broad legal principles, or necessitate a new federal AI framework. Another focal point was the balance between government intervention and private sector self-regulation. As AI systems become increasingly autonomous, the conversation also addressed how policymakers can ensure accountability, liability, and alignment with human intent.

With the EU AI Act set to take full effect in 2026, panelists discussed how U.S. companies can prepare for compliance with international AI safety, transparency, and rights protection standards. The conversation also analyzed the impact of President Trump’s decision to rescind Biden’s AI executive order in favor of a deregulatory approach. The potential long-term consequences on innovation and public trust in AI governance were a significant topic of debate. Additionally, panelists explored the role of global AI governance in addressing risks posed by Chinese AI models, such as DeepSeek, particularly concerning safety standards, censorship, and data security.

Legal professionals and compliance teams play a crucial role in managing AI-related risks, from bias and discrimination to contractual liability and intellectual property disputes. The panel outlined immediate steps that legal teams can take to mitigate these risks and develop robust AI governance programs. Special attention was given to strategies for small and mid-sized companies, which often struggle to keep pace with evolving regulations.

With AI governance rapidly evolving, legal professionals must stay ahead of the curve. The panel concluded by discussing key governance trends for the next five years and the skills and expertise that legal practitioners should develop to remain at the forefront of AI policy and compliance. 

Following the final panel discussion, attendees had the opportunity to continue their conversations during a networking mixer. This provided a more informal setting for students, faculty, and professionals to engage with speakers and fellow attendees, fostering deeper discussions and potential collaborations within the AI governance space.

Reflecting on the symposium’s success, Jenna Tobin remarked, “Our Symposium was a tremendous success, and it was incredibly rewarding to see everyone’s hard work and preparation come to life. We aimed to highlight a timely and impactful issue, and our fantastic speakers delivered insightful discussions that resonated with the audience. The strong turnout from both students and professionals in the community underscored the importance of these conversations, making the event both engaging and meaningful.” As AI technology continues to shape industries and legal frameworks, Santa Clara Law’s symposium provided a vital platform for forward-thinking discussion. This year’s event was an invaluable opportunity for students, professionals, and policymakers to gain insights into the future of AI governance.

Written by Daniel Zertuche, Student Writer for Santa Clara Law

Media Contact

Jennifer Wooliscroft | Director of Strategic Communication and Outreach | jwooliscroft@scu.edu | 408-551-1763