
AI Safety Prize

Jury and Advisors


Lennig Pedron

Advisor & Judge

Lennig Pedron has expertise in cybersecurity and emerging technologies applied to the digital trust economy. She works for the EPFL Innovation Park Foundation and is CEO of the Swiss initiative Trust Valley, a centre of excellence in digital trust and cybersecurity.


Dr. Adrian Hutter


Adrian Hutter is a Senior Research Engineer at Google, working on the safety of large language models. He holds a PhD in theoretical physics and has worked as a quantitative researcher on distributed machine learning.


Natalie Wu


Natalie Wu has worked as a software engineer in the spatial audio domain at Fraunhofer-IIS and at Google. She is now at the Swiss AI company Lakera, whose mission is to make AI safe, one of the biggest challenges of our generation.


Dr. Pavol Bielik


Pavol Bielik is co-founder and CTO of Swiss AI company LatticeFlow. He holds a PhD from ETH Zurich, where he worked at the intersection of machine learning and programming languages. Pavol is also a recipient of the Meta Distinguished Fellowship award.


Florian Dorner

Advisor & Judge

Florian Dorner is a doctoral fellow with the Max Planck ETH Center for Learning Systems. His work aims at a better understanding of the societal impacts of AI and how policy and technical research can interface to shape these impacts.



Anka Reuel

Anka Reuel is a computer science Ph.D. student at the Stanford Intelligent Systems Laboratory at Stanford University. Her goal is to contribute to responsible AI systems through technical and governance solutions. Questions she is interested in include: How can we embed ethical principles in AI systems? How can we make these systems more explainable to ensure they behave ethically?


David Lindner


David Lindner is a research scientist at Google DeepMind working on AI Alignment. His research aims to develop AI systems that are interpretable and trustworthy. He was previously a doctoral student at ETH Zurich, where he worked on making reinforcement learning from human feedback more efficient and robust.


Dr. Gérôme Bovet


Gérôme Bovet is the Head of Data Science at the Cyber-Defence Campus with armasuisse Science and Technology. His work focuses on machine and deep learning approaches, with an emphasis on anomaly detection, adversarial learning, and collaborative learning.


Chiara Gerosa


Chiara Gerosa is the AI Governance Lead at Impact Academy, where she is building a fellowship program for young and aspiring policy professionals. Previously, she was an AI Regulation Policy Advisor and a Counter-Disinformation Policy Advisor at the UK Government Department for Science, Innovation & Technology. Chiara is judging in a personal capacity.


Dr. Jean-Marc Rickli


Jean-Marc Rickli is the Head of Global and Emerging Risks and the Founder and Director of the Polymath Initiative at the Geneva Centre for Security Policy in Geneva. He is also the co-chair of the NATO Partnership for Peace Consortium Emerging Security Challenges Working Group and a senior advisor for the AI Initiative at the Future Society.



Dr. Myriam Dunn Cavelty 

Myriam Dunn Cavelty is Senior Scientist and Deputy for Research and Teaching at the Center for Security Studies at ETH Zurich. Her research focuses on how digital technologies influence political behaviour and societal values and on how and why specific arrangements to govern the use of digital technologies emerge.


Dr. Martín Ochoa

Advisor & Judge

Martín Ochoa is a Senior Researcher and Lecturer at the Zurich University of Applied Sciences and co-founder of thenti GmbH. He is interested in applications of machine learning to authentication systems, cyber-physical systems security, and software security.
