
  • More than 50% of recently published website texts are now written by AI, which means that, from today forward, the majority of newly published texts is already synthetic. The same will soon hold true for every other form of content: images, video, and audio. In and of itself, AI-written text shouldn’t be such a large issue. The problem is not text written by AI, but that we have simultaneously crossed the point where you can no longer reliably distinguish AI-generated content from human content. I hold a strong opinion that AI should sound like AI, that AI chatbots should be apparent as such, and that AI-generated images and videos should carry deeply embedded watermarks. This is also why I believe parts of the EU AI Act and the California AI Transparency Act are net-positive for humanity. But why do I believe so?

    The most pressing issue with AI-generated content is much less about the capability or alignment of AI models than about the collapse of the epistemic commons before we even arrive at generally intelligent or superintelligent AI models. Here is what I mean:

    Most text is now AI-generated, and within months the same will be true for video, images, and audio. When the cost and effort of creation collapse to zero, two things vanish simultaneously: trust and meaning.

    We can no longer casually trust what we see. Every text, every video, every expert opinion becomes suspect. As social primates who evolved to trust patterns and authorities, we are losing the ability to distinguish signal from noise at the exact moment we need it most.

    Perhaps the deeper crisis isn’t skepticism but meaning collapse. Scarcity and effort have always been core to how humans assign value and significance. When infinite content can be generated instantly and automated for any purpose, these anchors disappear.

    Most look at this primarily as economic disruption, but perhaps it is far more psychological and civilizational, because we are eroding the foundations of shared reality before we have built alternatives.

    Then there is this slippery slope: from now on, humans will increasingly interact with and read texts written by AI systems trained on AI-generated texts. Soon the same will be true of photos, videos, and audio. This training loop has (at least) the potential to create a cultural drift in directions yet unpredictable. One thing we can be quite certain about is that our human values are already being reshaped by AI systems in ways we cannot track. This in turn makes the question of “alignment” both more important and, at the same time, secondary.

    The most pressing risk to human civilization is therefore not a hypothetical “misaligned” superintelligence, but the risk of arriving there divided – socially and epistemically.

    What must be done is certainly harder than the alignment of AI systems:

    • Rebuilding trusted information infrastructure
    • Creating new forms of verifiable authenticity
    • Developing cultural “antibodies” to synthetic manipulation
    • Building meaning-making structures that aren’t dependent on scarcity or effort
    • Preserving and strengthening human coordination capacity
    • Etc.

    This is harder than “alignment”, because the more we approach these to-dos from a federal or global perspective, the more intractable they become.

    Now, to move from the theoretical to the practical: Who are the 5 to 150 people you can still genuinely trust and coordinate with? Because everything else either emerges from functional groups, or it won’t emerge at all.

  • When looking at AI, people are fixated on surface-level effects: economic disruption (jobs disappearing), alignment risks (AI going rogue), or ethical dilemmas (bias of LLMs). While those are all real, they also seem to be distractions from the real shift. The current conversations are not about whether we achieve AGI anymore but about when – some say 10 years, I say it’s basically already here (it all depends on the definition of the term really). By definition, AGI will match and then surpass human intelligence in every single domain: strategic, creative, you name it. Once that threshold is crossed (and it’s closer than many admit), a feedback loop kicks in. AI designs better AI, which designs even better AI, ad infinitum.

    Because we are not yet there, we debate AI as a tool. But as soon as we cross that threshold, AI will predict, simulate, and optimize anything logic-based with such precision that human input becomes unnecessary, or perhaps counterproductive. Humans – and that means governments, corporations, and individuals – will outsource everything, from policy to life choices, because AI will present the best logical, data-backed option. And because it is so much better at logic, you stop questioning it. The “alignment” problem is therefore ultimately less about making AI safe for humans than about preparing humans to accept their irrelevance in logical intelligence and – in my opinion – transitioning (or better: re-connecting) them to their intuitive intelligence. If we fail at this, the majority of humans will experience free will only as an illusion.

    We humans derive meaning from struggle, achievement, and social bonds. Within the next 10 to 20 years, we won’t need to struggle to achieve anymore. Achievement will be handed out (or withheld) by systems we cannot understand. What is left are social bonds. But is that really the case? We already see AI-mediated interactions replacing genuine connections (whether emails, eulogies, or even virtual AI companions). If we do not pay attention and re-connect with other humans (our tribes), we risk real psychological devastation at scale.

    If AI is centralized, it will be operated by an elite (that’s at least the current trend). Not only will this elite gain god-like power, but it will form another elite class: humans who are augmented by superintelligence through direct neural interfaces or exclusive AI enhancements. What about the rest? An underclass kept alive by a universal-basic-whatever, but without purpose or power?

    The problem really is: when we cross that threshold, it won’t be fixable. We better collectively act now, or the world will be run by a handful of super-enhanced humans and their AI overlords.

    In 2025, these thoughts will read like speculation. But based on my observations of how the majority of humans have started using and adopting AI, the trajectory seems obvious (to me). AI is optimizing for efficiency. Companies are adopting it as well. Individuals must too – or they are no longer competitive. What is the antidote? I am divided. I don’t believe AI must lead to such a dystopia. I am much more convinced that it is our best shot at achieving utopia. But there is a very thin line between them: us humans. In how we collectively act. And acting is much (!) less about technological adaptation (from becoming AI “experts” to Neuralink cyborgization) and infinitely more about re-connecting to what makes us uniquely human: our consciousness, our connection to God, our one Creator, and our unity. Meaning will come from non-competitive pursuits, AI alignment from balancing logic with consciousness, and happiness from real, deep human social connections. Intelligent machines – no matter how superintelligent they turn out to be – can never be conscious. Perhaps it is a wake-up call: we lost our spiritual connection to consciousness – and we must re-connect.

  • If you exclude AI-driven investments, the US economy mirrors Germany’s near-stagnation, with near-zero GDP growth in the first half of 2025. More than 90% of US GDP growth stems from AI and related sectors. Large parts of the >$375B in AI investments scream “bubble”. Only a small percentage of companies and labs have a unique moat. Should the bubble burst, for whatever reason, the US will face a deep recession.

    The AI bubble could burst from two opposite extremes: exponential technological progress or the lack thereof. In the case of exponential progress, imagine post-LLM architectures (e.g., Mamba) slashing compute needs by 100x. That would strand GPU-heavy datacenters and render >$2.9T of mostly debt-financed datacenter investments obsolete. On the other side: if LLMs plateau without ROI, the hype will fade like dot-com, tanking valuations despite tangible capex.

    Whatever the case, if it pops, the US could spiral into a vicious reinforcing cycle: recession → layoffs/unemployment → consumer pullback → deflationary spiral (or stagflation if supply shocks hit) → political extremism. This reminds me of pre-WW2 Europe. The US must diversify growth beyond AI now.

    What can Germany learn from this? The obvious lesson is to accelerate AI adoption and sovereignty. Just as the US without AI stagnates, Germany with AI could grow again. At the same time, imitating the US is a fragile lifeline. Perhaps the smartest idea is to reject the hype cycle altogether: let Berlin-based AI startups do their thing, rent US-based AI software, and focus all energy on high-tech breakthroughs in the decentralized Mittelstand.

  • One of the many things we experience is the simultaneous reaching for more alongside the subconscious knowing that the little we have is all we truly need. We seek noise, though silence holds the answer. We look to the future, we look to the past, yet we forget the now.

    We live here. We live now.

    The illusion of the future and the weight of the past hold us captive. It is like a pendulum, swinging from what was to what ought to be. From what made us happy to what might make us unhappy. We cling instead of letting go. We try to force the future into submission, forgetting all the while that the future emerges with effortless grace in the here, in the now.

    Let us flow. Not blindly. With visionary intention. Instead of waiting for tomorrow, let us be today who we wish to be tomorrow. It is what is born today that shapes the morrow.

  • Opportunities

    Today, the largest opportunities arise from people living in the past. Obvious trends, like the exponential advancement of solar PV or electric vehicles, are irrationally badmouthed. There seems to be a longing for bringing back nuclear power, for bringing back manufacturing, for bringing back workers into offices, for bringing back military dominance. People still believe in college degrees, in standardized testing, in credentialism in hiring, in pension funds – the list goes on. Politicians try to compete with China, want to bring back supply chains, want to revive 40+h workweeks originating from the industrial revolution. This conservative nostalgia seems to be a large trend; or perhaps it always has been.

    What worked 20 years ago will not miraculously come back. Politicians are selling “the good old days” to people unaware of current reality – of how incredibly fast exponential technologies and societal changes are evolving.

    The opportunities of today and the future are not in chasing the resurrection of the past but in identifying which underlying needs from those eras remain unmet in modern forms.

    For example: People don’t want manufacturing jobs back. They also don’t want to out-manufacture China. Who really wants to labor in a factory? Who wants to work 10 hours a day on a farm? The answer is: nobody really. What people do want is economic security and a trade balance that doesn’t feel like losing. They want protection from foreign economic coercion, and the dignity of creating tangible products the world needs.

    Another example: people don’t want coal plants or nuclear plants back. Who really wants polluted air or nuclear waste in their neighborhood? Again, the answer is: nobody, really. What people actually want is affordable energy independence – electricity too cheap to meter – plus reliable baseload and a reliable grid during crises and on days when the sun isn’t shining.

    The pattern is always the same: the surface demand is to reverse time; it is nostalgia. The underlying need is mostly emotional and social: security, dignity, control, identity.

    The opportunity is satisfying these needs through forward-looking solutions that people haven’t yet recognized as substitutes. It is in building for a world as it is, not as people wish it were. It is in accepting current reality the fastest and having the longest runway to build what’s actually next.

  • This is AGI

    I say that current AI is AGI. It is not obvious yet, because we haven’t yet connected very complex and fragmented software and data environments – and turning R&D into real-world change is a multi-year process anyway.

    Even if we stopped and froze AI development here and now, we’d only realize that we indeed have AGI two or three years down the road. In some niches it will be faster (software or law), in others slower (complex logistics).

    However, AI development is not stopping here and now. It continues to improve – exponentially, I say. Even if you are more conservative, linear growth still implies an undoubtedly large rate of change.

    Today (!), we have AI models that evolved from barely completing sentences to writing code that ships to production, AI doing PhD-level research, and AI achieving gold-medal performance at the International Math Olympiad. AI is solving medical problems that baffle experts.

    Again – what is currently mostly manually prompted work in long chat conversations will soon develop into agents that can do almost all knowledge work fully autonomously.

    I’m not talking about AI as an assistant, as a co-pilot. It will just straight up finish the work while you are napping on the beach.

    The difference between the GPT-3 model and today’s models – whether Grok 4, Gemini 2.5 Pro, or ChatGPT o4 – is like comparing a Nokia 1011 to an iPhone 16 Pro. We went from purely text-based chats to multimodal understanding – models that can see, hear, and reason across domains simultaneously. AI is starting to genuinely understand context and nuance in ways that feel human.

    The next phase is not purely larger AI models, but models that learn continuously. They can remember you, plan and execute multistep tasks over days, weeks, or months.

    An AI system that remembers perfectly, understands context, never sleeps, and gets smarter every day – this is being built today in AI labs around the globe.

    We have AGI today, and it is only a matter of time until we arrive at superintelligent AI systems. Is it 2 years? 3 years? 4 years? 5 years? Irrelevant. Whether it is 1 year or 10 years, the implications are the same: everything is going to change forever.

  • Lyrics

    Pour quoi faire, te mettre à l’envers

    (What for, turning yourself upside down)

    Toujours danser sur le fil

    (Always dancing on the wire)

    Juste un autre verre

    (Just another drink)

    Pour sentir la terre

    (To feel the earth)

    Sous tes pieds et puis tu glisses

    (Beneath your feet and then you slip)

    Sens mon coeur, là t’auras plus peur

    (Feel my heart, then you won’t be afraid anymore)

    Tu pourras enfin me dire

    (You’ll finally be able to tell me)

    Toutes les choses qui font que t’as qu’une envie

    (All the things that make you only want one thing)

    C’est de danser sur le fil

    (It’s to dance on the wire)

    Comme une étoile tu files

    (Like a star you shoot)

    Tu t’en vas danser sur le fil

    (You go off to dance on the wire)

    Comme une étoile tu files

    (Like a star you shoot)

    Et on s’aime mieux que dans les films

    (And we love each other better than in movies)

    On a chanté jusqu’en pleurer sur les toits, toits

    (We sang until we cried on the rooftops, rooftops)

    Dansé et crié jusque tard, tard

    (Danced and shouted until late, late)

    On a chanté et pleuré sur les toits, toits

    (We sang and cried on the rooftops, rooftops)

    Dansé et crié, tout pour toi

    (Danced and shouted, all for you)

    Toi

    (You)

  • An absolutely beautiful solo guitar piece. It comes from the heart, and I feel it. WOW!

  • I don’t understand a word, but I can feel the emotional energy with a melodic house beat.

    The song “Qalbi” (قلبي) by SNX, which translates to “My Heart,” is a poignant expression of deep love, longing, and emotional attachment.

    The lyrics convey a powerful sense of yearning for a beloved person, emphasizing the heart’s intense connection and the soul’s devotion, even in absence or separation.

    Some translated lines:

    “قلبي يا قلبي” (Qalbi ya qalbi) – My heart, oh my heart
    “يا روحي يا روحي” (Ya rouhi ya rouhi) – Oh my soul, oh my soul

    Many lines speak to missing someone dearly and the pain of their absence.

  • Every time I wish the minds I admire would open up and show the full depth of their thoughts, ideas, and experiences, I realize – I’m doing the same thing by staying too silent online.

    The idols I look up to – for example Ido Portal, Bruce Poon Tip, Dr. Nun Amen-Ra, my Sifus, countless peers – are all masters of mind and body in private but leave little or no footprints online. The wisdom is real, yet it’s undocumented and the world barely sees it.