When looking at AI, people fixate on surface-level effects: economic disruption (jobs disappearing), alignment risks (AI going rogue), or ethical dilemmas (LLM bias). While those are all real, they also seem to be distractions from the real shift. The conversation is no longer about whether we achieve AGI but about when – some say 10 years, I say it’s basically already here (it all depends on how you define the term). By definition, AGI will match and then surpass human intelligence in every single domain: strategic, creative, you name it. Once that threshold is crossed (and it’s closer than many admit), a feedback loop kicks in: AI designs better AI, which designs even better AI, ad infinitum.
Because we are not yet there, we debate AI as a tool. But as soon as we cross that threshold, AI will predict, simulate, and optimize anything logic-based with such precision that human input becomes unnecessary, perhaps even counterproductive. Humans – and that means governments, corporations, and individuals – will outsource everything, from policy to life choices, because AI will present the best logical, data-backed option. And because it is so much better at logic, you stop questioning it. The “alignment” problem is therefore ultimately less about making AI safe for humans and more about preparing humans to accept their irrelevance in logical intelligence and – in my opinion – transitioning (or better: re-connecting) them to their intuitive intelligence. If we fail at this, the majority of humans will experience free will only as an illusion.
We humans derive meaning from struggle, achievement, and social bonds. Within the next 10 to 20 years, we won’t need to struggle to achieve anymore; achievement will be handed out (or withheld) by systems we cannot understand. What is left are social bonds. But will even those remain? We already see AI-mediated interactions replacing genuine connection (whether emails, eulogies, or even virtual AI companions). If we do not pay attention and re-connect with other humans (our tribes), we risk real psychological devastation at scale.
If AI is centralized, it will be operated by an elite (that’s at least the current trend). Not only will this elite gain god-like power, but it will form another elite class: humans who are augmented by superintelligence through direct neural interfaces or exclusive AI enhancements. What about the rest? An underclass kept alive by a universal-basic-whatever, but without purpose or power?
The problem really is: once we cross that threshold, it won’t be fixable. We had better act collectively now, or the world will be run by a handful of super-enhanced humans and their AI overlords.
In 2025, these thoughts read like speculation. But based on my observations of how the majority of humans have started using and adopting AI, the trajectory seems obvious (to me). AI is optimizing for efficiency. Companies are adopting it for the same reason. Individuals must as well – or they are no longer competitive. What is the antidote? I am divided. I don’t believe AI must lead to such a dystopia; I am much more convinced that it is our best shot at achieving utopia. But there is a very thin line between the two: us humans, and how we collectively act. And acting is much (!) less about technological adaptation (from becoming AI “experts” to Neuralink cyborgization) and infinitely more about re-connecting to what makes us uniquely human: our consciousness, our connection to God, our one Creator, and our unity. Meaning will come from non-competitive pursuits, AI alignment from balancing logic with consciousness, and happiness from real, deep, social human connections. Intelligent machines – no matter how superintelligent they turn out to be – can never be conscious. Perhaps this is a wake-up call: we lost our spiritual connection to consciousness – and we must re-connect.