Today, we're opening applications for a writing fellowship aimed at improving the state of public discourse around AI. We're looking for 15 people with subject matter knowledge, nuanced opinions — and a niggling sensation that they should really, really start a blog. You can apply here.
In-person conversations at AI hubs and co-working spaces are often deeper, richer, and more thoughtful than what's available online. Too many of those important conversations never make it onto the internet, because writing is hard. We'd like to make it easier.
The fellowship lasts six weeks, from August 24 to October 6. Over the course of the program, you will produce three blog posts. The fellowship is remote and compatible with a full-time job. We expect a commitment of about 10 hours per week: mostly writing, but also participating in weekly workshops and giving feedback to peers. During that time, we'll provide:
Editorial Support: Work on your writing with Asterisk's editors as well as a full-time writing coach.
Peer Feedback: Discuss your latest piece in weekly peer workshopping sessions and in our group Slack.
Mentorship: Fellows will have access to a network of mentors with extensive experience writing about AI for public audiences, including:
Scott Alexander (Astral Codex Ten)
Jordan Schneider (ChinaTalk)
Dean Ball (Hyperdimensional/White House OSTP)
Sam Bowman (Anthropic)
Tim Lee (Understanding AI)
A Platform: In addition to writing for your own blogs, your work will be cross-posted to Asterisk's new AI vertical: asteriskmag.ai. We'll work with fellows to help them develop their audience and get comfortable writing in public.
A Stipend: $1,000. (Self-explanatory.)
An Excuse: Do you need a good reason to prioritize sitting down and trying to write? This is it.
Fortunately, there's a lot of good AI writing on the internet, but we think there's a need for much, much more. Here are some areas we think are particularly under-covered:
In-depth, qualitative evaluations of what AIs can and can't do, and how they're really being used on the ground in different fields
Accessible explanations and assessments of technical AI safety research
Explorations of what good futures with advanced AI might look like, how we could get there, and the most likely reasons we might not
We're also especially interested in writers who can bridge the inferential gap between AI camps with different fundamental assumptions. All that said, we're certainly not limited to fellows who want to write about these topics: they're suggestions, not requirements.
Apply here. It should take about 20 minutes. We'll close applications on August 1st, and notify those selected by August 15th. We look forward to hearing from you!
And if you’d like to receive nuanced perspectives on AI from our newest vertical, sign up below.