Hi! I'm Nora. I think a lot about how to make transformative AI go well.
Currently, I work on accelerating progress towards AI with quantitative safety guarantees, via a £59M-backed R&D programme called Safeguarded AI.
Most of my work could be described as trying to catalyse important R&D. I am particularly interested in AI and Resilience: how can we build civilisational resilience for the age of AI? And how can we leverage AI to build that resilience, better and faster?
Previously, I co-founded and led a research initiative fostering interdisciplinary AI safety research (PIBBSS). My background spans complex systems, political theory, philosophy of science, and AI.
My Work
Safeguarded AI Programme
I serve as Technical Specialist to the Safeguarded AI programme, a >£59M R&D programme at the UK's Advanced Research and Invention Agency. Safeguarded AI combines scientific world models and mathematical proofs to develop quantitative safety guarantees for safety-critical AI systems. Over the course of the ~4-year programme, we seek to demonstrate a general-purpose workflow for leveraging frontier AI to accelerate the R&D of superhuman-but-task-specific AI systems with quantitative guarantees. Under Programme Director David 'davidad' Dalrymple and Scientific Director Yoshua Bengio, we want to answer the question of how we can leverage the raw potential of highly advanced AI while keeping risks below a societally acceptable level and building up civilisational resilience. By developing a proof of concept, we intend to establish the viability of a new & complementary pathway for research and development toward safe and transformative AI.
Learn more on our website, or read the Programme Thesis.
Flexible Hardware-Enabled Guarantees
I have been trying to catalyse work towards Flexible Hardware-Enabled Guarantees (flexHEG) mechanisms. These could be added to AI accelerators to enable multilateral, privacy-preserving, and trustworthy verification and automated compliance guarantees for agreements regarding the development and use of advanced AI technology. I co-authored a three-part report on flexHEG, and colleagues built and illustrated the design of a rapid prototype. I've also helped coordinate a funding round looking to accelerate the prototyping and iterative development of flexHEG-enabled guaranteeable chips.
Gradual Disempowerment
In January 2025, my co-authors and I published a paper discussing the possibility of gradual human disempowerment, which may occur even without any sudden 'loss of control' event or discontinuous AI progress. In short, as critical societal systems, such as the economy, the state, and culture, become less reliant on human labor and cognition, the extent to which humans can explicitly or implicitly align them will decrease. As a result, these systems, and the outcomes they produce, might drift further from providing what humans want. Competitive pressures and 'wicked' interactions across systems and scales will make it systematically difficult to avoid outsourcing critical societal functions to AI. You can find the full paper at: https://gradual-disempowerment.ai/. I am now thinking about what security, verification and cooperation primitives could reduce the risk of gradual disempowerment.
Principles of Intelligent Behaviour
Between 2021 and 2024, I co-founded and directed PIBBSS - Principles of Intelligent Behaviour in Biological and Social Systems. In that period, PIBBSS raised >$2M, supported 6 in-house, long-term research affiliates and ~50 research fellows on 3-month full-time research fellowships, and organised 15+ AI safety research events. Over the years, we built a substantive and vibrant research community, spanning many disciplines, across academia and industry, both inside and outside of AI safety. While I have handed over leadership, I continue to support PIBBSS as a member of the board.
Writing & Speaking
I'm not a full-time academic, but I occasionally write articles, white papers or posts, or give talks. Here are some selected pieces:
- Kulveit, et al. "Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development." Accepted to ICML Position Paper Track. (2025).
- Petrie, et al. "Flexible Hardware-Enabled Guarantees (A Three-Part Report)." (2025).
- Dalrymple, et al. "Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems." arXiv preprint arXiv:2405.06624. (2024).
- Petrie, et al. "Interim Report: Mechanisms for Flexible Hardware-Enabled Guarantees." (2024).
- Ammann, Nora. "AI Alignment and the Value Change Problem." Master's Dissertation (first). (2023).
- Ammann, Nora. "Epistemic & Scientific Challenges when Reasoning about Artificial Intelligent Systems and Alignment." Talk for ML Alignment & Theory Scholars (MATS) programme. (2023).
- Stauffer, et al. "Policymaking for the Long-term Future: Improving Institutional Fit." (2021).