Hi! I'm Nora. I think a lot about how to make transformative AI go well.
Currently, I work on accelerating progress towards AI with quantitative safety guarantees via Safeguarded AI, a £59M-backed R&D programme.
I also think about and try to catalyse work on mechanisms for Flexible Hardware-Enabled Guarantees (flexHEG), which could undergird multi-stakeholder agreements on the governance of AI.
Previously, I co-founded and led a research initiative fostering interdisciplinary AI safety research (PIBBSS). My background spans complex systems, political theory, philosophy of science, and AI.
My Work
Safeguarded AI Programme
I serve as Technical Specialist to the Safeguarded AI programme, a >£59M R&D programme at the UK's Advanced Research and Invention Agency (ARIA). Safeguarded AI combines scientific world models and mathematical proofs to develop quantitative safety guarantees for safety-critical AI systems. Over the course of the ~4-year programme, we seek to demonstrate a general-purpose workflow that leverages frontier AI to accelerate the R&D of superhuman-but-task-specific AI systems with quantitative guarantees. Under Programme Director David 'davidad' Dalrymple and Scientific Director Yoshua Bengio, we want to answer the question of how we can harness the raw potential of highly advanced AI while keeping risks below a societally acceptable level and building up civilisational resilience. By developing a proof of concept, we intend to establish the viability of a new and complementary pathway for research and development toward safe and transformative AI.
Learn more on our website, or read the Programme Thesis.
Flexible Hardware-Enabled Guarantees
I have been trying to catalyse work towards Flexible Hardware-Enabled Guarantees (flexHEG) mechanisms. These could be added to AI accelerators to enable multilateral, privacy-preserving, and trustworthy verification and automated compliance guarantees for agreements regarding the development and use of advanced AI technology. I co-authored this interim report on flexHEG, and colleagues built and illustrated the design of a rapid prototype. I've also helped coordinate a funding round looking to accelerate the prototyping and iterative development of flexHEG-enabled guaranteeable chips.
PIBBSS (previously)
Between 2021 and 2024, I co-founded and directed PIBBSS (Principles of Intelligent Behaviour in Biological and Social Systems). In that period, PIBBSS raised >$2M, supported 6 in-house long-term research affiliates and ~50 research fellows through 3-month full-time fellowships, and organised 15+ AI safety research events and workshops. Over the years, we built a substantive and vibrant research community spanning many disciplines, across academia and industry, both inside and outside of AI safety. While I have handed over leadership, I continue to support PIBBSS as a member of the board.
Writing & Speaking
I'm not a full-time academic, but I occasionally write articles, white papers, and posts, or give talks. Here are some selected pieces:
- Dalrymple et al. "Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems." arXiv preprint arXiv:2405.06624 (2024).
- Petri et al. "Interim Report: Mechanisms for Flexible Hardware-Enabled Guarantees." (2024).
- Ammann, Nora. "AI Alignment and the Value Change Problem." Master's Dissertation (first) (2023).
- Ammann, Nora. "Epistemic & Scientific Challenges when Reasoning about Artificial Intelligent Systems and Alignment." Talk for the ML Alignment & Theory Scholars (MATS) programme (2023).
- Stauffer et al. "Policymaking for the Long-term Future: Improving Institutional Fit." (2021).