top of page
  • Twitter
  • LinkedIn

Theomachia Labs

Combining strategic AI safety research agendas with top-tier volunteer talent to scale up solutions for mitigating x-risk from ASI.

background1.png

About

Theomachia Labs is a volunteer-powered AI safety research organization dedicated to advancing alignment research through strategic coordination and accessible career pathways. We exist to harness the energy and talent of individuals passionate about AI safety who are ready to contribute, regardless of their current career stage.

The AI safety field faces a paradox: countless talented individuals are eager to contribute to alignment research, yet traditional opportunities remain scarce. Many capable people cannot secure permanent or paid positions, despite possessing the skills and dedication needed to advance this critical work. Meanwhile, existing fellowship programs, while valuable, often result in fragmented efforts—dozens of small, independent projects that lack strategic coordination, with many left incomplete.

Theomachia Labs reimagines how AI safety research can be organized. Unlike traditional fellowship models, we operate with a unified research agenda informed by expert advisors and prediction markets. This ensures our efforts address the most important problems in AI alignment with clarity and purpose. We provide a stable organizational home where contributors can work long-term, build expertise, and advance their careers. This is a platform for sustained contribution and growth. Also, research is just one way to contribute. We welcome volunteers across all functions—operations, communications, design, marketing, HR, and more. AI safety requires an ecosystem, and every role matters.

Theomachia Labs serves as a launchpad. Contributors gain real-world experience, build professional networks, and develop skills that prepare them for paid positions within our organization or elsewhere in the AI safety ecosystem. As volunteers demonstrate impact and leadership, they may advance to executive roles with compensation.

How we work:

  • Our priorities are defined through rigorous consultation with advisors and informed by prediction markets, ensuring we focus on high-impact problems

  • Rather than a multitude of independent groups, we operate as a cohesive organization with clear structure and accountability

  • Contributors receive mentorship, skill-building opportunities, and pathways to advancement

  • As team members take on executive responsibilities, they transition to salaried positions

  • No matter your background or experience level, there's a way to contribute meaningfully to AI safety

Whether you're a researcher, communicator, operations specialist, or someone with skills you believe could advance AI safety, Theomachia Labs offers a structured way to make an impact.

Latest Publications

Novel AI Control Protocol Classes Evaluation and Scalability

Building on Greenblatt et al.'s control evaluation framework, we propose investigating alternative AI control protocol classes for AI control and their scaling properties as model capabilities increase. While the original work demonstrated effective safety-usefulness tradeoffs using simple trusted-untrusted model pairs, I hypothesize that hierarchical and parallel control structures may offer superior Pareto frontiers, particularly as the capability gap between trusted and untrusted models widens. I also suggest testing the original setup on different scales of capabilities to see if the results generalise over capabilities.

background2.png

Join our mailing list for updates on publications and events

© 2025 by Theomachia Labs

bottom of page