Before AGI: The Epistemic Collapse

More than 50% of recently published website texts are now written by AI. This means that, from today forward, the majority of all published texts are already synthetic. The same will soon hold true for every other form of content: images, video, and audio. In and of itself, AI-written text shouldn't be such a large issue. The problem is not text written by AI, but that we have simultaneously crossed the point where we can no longer reliably distinguish AI-generated content from human content. I hold the strong opinion that AI should sound like AI, that AI chatbots should be recognizable as such, and that AI-generated images and videos should carry deeply embedded watermarks. This is also why I believe parts of the EU AI Act and the California AI Transparency Act are net-positive for humanity. But why do I believe this?

The most pressing issue with AI-generated content is not so much the capability or alignment of AI models as the collapse of the epistemic commons before we even arrive at generally intelligent or superintelligent AI models. Here is what I mean:

Most text is now AI-generated, and within months the same will be true for video, images, and audio. When the cost and effort of creation collapse to zero, two things vanish simultaneously: trust and meaning.

We can no longer casually trust what we see. Every text, every video, every expert opinion becomes suspect. As social primates who evolved to trust patterns and authorities, we are losing the ability to distinguish signal from noise at the exact moment we need it most.

Perhaps the deeper crisis isn’t skepticism but the collapse of meaning. Scarcity and effort have always been core to how humans assign value and significance. When infinite content can be generated instantly and automatically for any purpose, these anchors disappear.

Most people see this primarily as economic disruption, but it may be far more psychological and civilizational, because we are eroding the foundations of shared reality before we have built alternatives.

Then there is this slippery slope: from now on, humans will increasingly interact with and read texts written by AI systems that were themselves trained on AI-generated texts. Soon the same will be true for photos, videos, and audio. This training loop has at least the potential to create cultural drift in directions we cannot yet predict. One thing we can be quite certain about is that human values are already being reshaped by AI systems in ways we cannot track. This in turn makes the question of “alignment” both more important and, at the same time, secondary.

The most pressing risk to human civilization is therefore not a hypothetical “misaligned” superintelligence, but rather the risk of arriving there divided, socially and epistemically.

What must be done is arguably harder than the alignment of AI systems:

  • Rebuilding trusted information infrastructure
  • Creating new forms of verifiable authenticity
  • Developing cultural “antibodies” to synthetic manipulation
  • Building meaning-making structures that aren’t dependent on scarcity or effort
  • Preserving and strengthening human coordination capacity
  • Etc.

This is harder than “alignment” because the more we look at these to-dos from a federal or global perspective, the more intractable they become.

Now, to move from the theoretical to the practical: Who are the 5 to 150 people you can still genuinely trust and coordinate with? Because everything else either emerges from functional groups, or it won’t emerge at all.

