We’d like you to meet our first AI Fellows cohort. Hailing from Hawaii to Dubai, and many places between, our AI Fellows will be writing on law, military, development economics, evals, China, biosecurity, and more.
We can’t wait to share their writing with you.
Kevin Wei
I am currently a fellow in RAND's Technology and Security Policy Center. My research agenda is centered on the science of AI evals and related AI law/governance questions, with recent work published in top AI and AI ethics conferences. As an Asterisk Fellow, I'm mostly interested in exploring questions around how we can make evals more rigorous and more useful for assessment of downstream societal implications, translating technical concepts for the policy community, and the politics of AI governance.
Previously, I was a Schwarzman Scholar in Beijing (where I researched Chinese AI policy), a project manager / data scientist at cloud infrastructure provider DigitalOcean, and an organizer on Democratic political campaigns. I'm also wrapping up a J.D. at Harvard Law School and completed a master's in ML from Georgia Tech. You can find me on social media @kevinlwei or at kevinlwei.com.
John Vastola
I'm John J. Vastola, an academic researcher who studies natural and artificial intelligence. I received a PhD in physics from Vanderbilt University in 2021, and since then have been a postdoctoral research fellow at Harvard University. I'm particularly excited to write about the technical ideas behind modern AI systems, since I think you can't truly understand something without knowing how it works. I'm also interested in writing about the interplay between neuroscience and AI, and the extent to which learning more about our brains can help us build more capable machines. I write at Theories of Intelligence.
Larissa Schiavo
I work with Eleos AI Research; previously I worked at OpenAI, MURAL, and on various video games. I write about AI welfare from an evidence-based and level-headed perspective. People say I’m funny. You can read my work at larissaschiavo.substack.com, and follow me on Twitter (@lfschiavo).
Buck Shlegeris
I'm the CEO of Redwood Research, a nonprofit that studies the risks posed by misaligned general-purpose AI, especially AI agents that conspire against their developer. To a greater extent than is reasonable, I like to draw inspiration from many different fields for my work. In this fellowship, I'm interested in exploring questions about AI misalignment risk that are of more general interest than my usual fare, and maybe I'll go even further afield than that. I usually write on the Redwood Research blog or on LessWrong.
Zilan Qian
Hi, I'm Zilan. I research and write about AI and geopolitics, focusing on overlooked (and fun) aspects of the U.S.–China competition, including humanoid robotics and access to banned U.S. AI models in China. For this fellowship, I hope to write more on the comparative dynamics of U.S. and Chinese AI development, including differences in the AI companion market and the geopolitical contexts.
I’m currently a research assistant at the Oxford Internet Institute, researching U.S.–China open-source governance, and a communications intern at the Safe AI Forum. I’m finishing my master’s in the Social Science of the Internet at Oxford, and I previously studied anthropology at Columbia University. I write on my Substack.
Max Nadeau
I'm Max Nadeau. I'm interested in how the abilities and usage of AI are evolving; I view the trend as one toward more autonomy, reliance, and trust. I plan to blog about this and related issues during my time as a fellow — I’m very grateful to Asterisk for supporting my writing! In my day job, I'm a grantmaker on the Technical AI Safety team at Open Philanthropy. I help support machine learning research on understanding and controlling AI models, especially research that aims to scale to future, more capable AI. I've worked on grants for benchmarks of consequential AI capabilities, methods for detecting reward hacking, and techniques for finding and fixing rare failure modes in AI agents. Note: Open Phil has supported Asterisk in the past, but I was not involved in that process. The views expressed in my blogging here are my own and do not necessarily represent those of Open Phil. I write on Substack.
Richard Moulange
Hi! I'm Richard and I'm the AI–Biosecurity Policy Manager at the Centre for Long-Term Resilience, where I lead work on helping mitigate biological risks arising from advanced AI systems and understanding how we can use AI to better deter and defend against biological threats. I'm also a current Fellow with the Johns Hopkins Center for Health Security's Emerging Leaders in Biosecurity Initiative.
I recently finished a biostatistical machine learning PhD at the University of Cambridge, studying how to use deep learning to better diagnose and treat cancers using transcriptomic data. I've previously worked for the UK Civil Service and for GlaxoSmithKline R&D, as well as completing a GovAI Summer Fellowship and managing Fellows on the ERA:AI programme. My spare time mostly involves doing parkruns and getting brunch with friends, playing DnD, and making too many niche pop culture references.
I blog at richardmoulange.substack.com where I write about AI, biosecurity, national security and their (increasingly important!) intersections.
Michelle Ma
I'm Michelle Ma. I recently graduated with a B.A. in Economics from UChicago, where I worked on AI governance research at the UChicago Existential Risk Laboratory. Since policy is highly time-sensitive, I'm interested in examining key cruxes around the speed and nature of AI progress, focusing on practical bottlenecks that could double as governance opportunities (e.g., data, sensors). Beyond that, I'd also like to map out the robustness of AI policies under different worldviews, as well as develop a Republican perspective on AI governance. I write on Substack, mostly about AI. I'm also fascinated by progress studies, especially pharmaceutical innovation; I recently wrote a piece for Works in Progress on Journavx, a new non-opioid painkiller. Outside of writing and research, I like to run, play board games, and dance (flail around in Zumba class).
Joseph Levine
I am a PhD student at the University of Oxford, where I research finance and politics in West Africa. I've previously forecasted technological progress, designed financing for satellites and rockets, and driven ambulances. During the Fellowship, I'll be writing about AI adoption in Africa, how policymakers interact with AI tools, and the fiscal effects of AI on very poor countries. I write at https://jablevine.com/writing/.
Josh Holder
Hi, I’m Josh Holder! I graduated summa cum laude with a degree in Mechanical Engineering from Rice University, and went on to do a master's at the University of Washington, where I published research on reinforcement learning and optimization. I now work as an aerospace robotics engineer on safety-critical systems in one of the most rigorous development environments on Earth. As such, I’m especially interested in how we can port some of the established safety frameworks from the aerospace industry to learning-based systems—and in turn, how automated reasoners might strengthen the safety of our existing systems. Outside of work, I'm an aspiring bird-watcher and love playing card games with friends.
Anson Ho
Hi! I'm Anson, a researcher at Epoch AI, where I study the mechanics of AI progress and its societal impacts. I'm interested in understanding the bottlenecks to further AI scaling, how to better track AI progress, and what the next few years of AI developments will look like. I write on my Substack.
Nate Fairbank
My name is Nate, and I’m interested in how AI works in the rain. Seriously. As an active-duty Army officer, I currently command a fighting formation of 120 Soldiers in the Pacific. My previous assignment was to Pittsburgh, PA, where I earned two graduate degrees from Carnegie Mellon University as an Army Artificial Intelligence Scholar before serving as a Data Engineer at the Artificial Intelligence Integration Center (AI2C). At the AI2C I founded CamoGPT, a secure LLM chat client used by tens of thousands of military members daily. I also prototyped HandMic, a “Siri for combat” that aims to streamline radio requests for Soldiers on the frontlines. I am deeply interested in the intersection of technology and tactics, and how they shape one another. When I’m not working, I’m typically reading or running; I’m an avid endurance athlete and have run the Boston Marathon, an Ironman, and most recently a 30-mile Spartan Race. I write Edge Conditions.
Emma Erickson
My name is Emma Erickson. I’m a machine learning research scientist in the CIPHER lab, a cybersecurity lab at the Georgia Tech Research Institute in Atlanta. Previously I was at the Auton Lab at Carnegie Mellon University, where my research focused on applied machine learning in specialized fields such as cardiology and predictive maintenance. I am looking forward to writing about AI in practice, particularly its interactions with the domain-specific cultures that shape its uptake. I write on my Substack.
Lawrence Chan
I work at METR doing evaluations of large language models. Previously, I did research on mechanistic interpretability and adversarial training at Redwood Research. I was also a PhD student at UC Berkeley with the Center for Human-Compatible AI and the InterACT Lab. I hold degrees in Computer Science and Business from the University of Pennsylvania, where I did research with the Good Judgment Project on using ML to assist forecasting. I write on my Substack.
Vince Vatter
I'm Vince Vatter, a professor of mathematics at the University of Florida, with a research background in combinatorics and graph theory. Recently, I've been interested in the intersection of math and AI, both in terms of how AI can help us to learn and do mathematics, and how AI does mathematics itself. In particular, I lead an NSF-funded project exploring how AI can help students learn to write mathematical proofs (https://hallmos.com/).
In this fellowship, I'm excited to explore what it really means for AI to "do" mathematics, whether current approaches represent genuine mathematical understanding or something fundamentally different, and what advances in AI mathematical capability might mean for the future of mathematics.