Risk Tokens: Economic Security in AI Safety
There is a lot of interesting research right now on understanding model internals, as well as on other areas of alignment and safety.
Drawing on some of our work at @CompoundVC @Compoundarxiv in biosecurity as well as crypto, I'm putting forward the idea of Risk Tokens: when you align economic incentives with desired behaviors, security becomes an emergent property rather than an imposed constraint.
A lot of this is inspired by recent work at @AnthropicAI on circuit tracing, among other things.
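As a toy sketch of the core claim (purely illustrative; the names, numbers, and stake-and-slash rule below are my own hypotheticals, not a Risk Token spec): if an agent must lock value against its behavior and misbehavior risks losing that stake, then honest behavior can become the economically dominant strategy without any hard constraint being imposed.

```python
# Toy sketch: economic incentives making honest behavior dominant.
# All names and parameters here are hypothetical illustrations,
# not a real Risk Token design.

from dataclasses import dataclass


@dataclass
class Agent:
    stake: float  # value locked as a risk-token bond


REWARD = 1.0       # payoff per round for behaving as desired
SLASH_RATE = 0.5   # fraction of stake burned if misbehavior is caught
DETECTION_P = 0.8  # probability that misbehavior is detected


def expected_payoff(agent: Agent, honest: bool, cheat_gain: float = 2.0) -> float:
    """Expected one-round payoff under a stake-and-slash rule."""
    if honest:
        return REWARD
    # Cheating pays cheat_gain up front, but risks losing
    # SLASH_RATE of the locked stake with probability DETECTION_P.
    return cheat_gain - DETECTION_P * SLASH_RATE * agent.stake


# With enough stake at risk, honesty dominates: nothing forbids
# cheating, it is simply unprofitable -- security as an emergent
# property of the incentive structure.
a = Agent(stake=10.0)
assert expected_payoff(a, honest=True) > expected_payoff(a, honest=False)
```

The point of the sketch is only the inequality at the bottom: the security property falls out of the payoff structure, not out of a rule that says "don't cheat."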
There's a kind of mutually agreed-upon politeness among investors, and between investors and founders, because:
1. Many of us collaborate at some point in our careers, and we don't actually want to actively harm each other.
2. Being a founder is hard, and we respect that.
But sometimes it is really hard not to look at certain projects, VC structures (in crypto), or companies (in deep tech) and say something for the good of the commons.
It feels more likely to me that market risk is skewed materially to the downside, versus even odds or better that we continue to rally past all-time highs for the rest of 2025.