This is in response to Anton Leicht’s article from 2025-02-17 titled “AI Safety Policy Can’t Go On Like This — A changed political gameboard means the 2023 playbook for safety policy is obsolete. Here’s what not to do next.”
Finally people are getting the hang of it and realizing that a reframing of AI safety is sorely needed! To me as an outsider, the AI safety movement still looks like it is only about "how do we prevent AI from killing us?". I know that's an oversimplification, but I believe that's how many people who don't really know about AI perceive it. And "strategies for not getting killed" is about the unsexiest political message there is. It's the same reason the global "fight" against climate change is failing, the war on drugs is still going on in many countries, and so on: the majority of voters are like small children who, metaphorically speaking, would rather vote for having a sandcastle right now than for improving the systems in the background.
For us "elites", systems thinking is cool. It makes us feel smart, like we're in control, like we "know more than the others". Whether that's actually true doesn't matter. The point is: the "safetyist view" will always remain a theoretical thing for academics, because it doesn't solve a real, acute problem that even those least knowledgeable about AI can feel.
AI safety advocates warn of the hypothetical scenario of AI somehow becoming dangerous to humans. But for the average Joe, this is just like climate change: he sees it in the news but has no motivation to act. At most, he thinks "yeah, someone should do something" and goes on with his day. As harsh as it may sound, 99% of people have better things to do in their everyday lives than care about AI safety.
I know this realization may feel unfortunate, but that's how it is. It's an unsexy topic packaged into long, unsexy articles like this one. And by "unsexy" I in no way mean "wrong" or "pointless"; I mean "inaccessible", "hard to understand", "theoretical", and so on. To the politicians who should care about it (and rightfully so!), it is ultimately just noise, not because AI safety is unimportant, but because no big chunk of voters cares about the topic. Always remember: politicians almost never do good for the people proactively; the vast majority of them act reactively. Only once the voters want something do they care about it. Before that, any advocacy is just noise to them. That's the blessing and the curse of democracy.