Tag: AI


  • Human Debt

    Yesterday I read a book chapter in which the author, Thomas Gonschior, interviews the neuroscientist Gerald Hüther. Long before today’s AI, Hüther diagnosed what he called the “machine age mindset”: decades of efficiency optimization treated employees as objects, suppressed intuition, punished initiative, and produced a workforce that functions as prescribed but creates nothing novel. Companies then began complaining that “the spirit of innovation is gone”, oblivious to the fact that they had built the very systems that killed it.

    Hüther’s argument is more structural than sentimental: when you treat humans like machines, they lose the capacities that make them human.

    While reading the chapter, I realized that the entire “AI transition” debate might be built on a fantasy. Everyone says that once AI handles the boring routine work, humans will be “freed” for higher-order thinking – creativity, intuition, judgment, inspiration, vision, etc.

    But Hüther exposed this fantasy before today’s AI even launched. His point was that the machine age didn’t just automate tasks; it also mechanized the people operating the machines. Fifty years of KPI regimes, efficiency mandates, and management-by-fear produced exactly what they incentivized: a workforce that waits for instructions, avoids risks, and counts the hours until closing time. The human capacities that AI cannot replicate – the ones every CEO and HR department now claims to value – were systematically destroyed by the very management philosophies those CEOs inherited.

    I call this Human Debt: the accumulated deficit in creativity, intuition, courage, and intrinsic motivation created by decades of efficiency-first management. Like technical debt, it stayed invisible as long as the old system kept running. AI is the transformation that will now make it visible.

    Consequently, the companies most celebrated for their operational excellence – tight processes, lean operations, disciplined execution cultures – carry the most Human Debt. For decades they optimized for exactly the qualities and capabilities that AI now commoditizes, while destroying the capacities that AI cannot replicate.

    At the same time, this is not an easy training problem. You cannot rebuild intuition in a six-week “re-skilling” program, and you cannot restore courage through a change-management seminar. Hüther spoke of enthusiasm as a “spark that jumps”; it will be hard to legislate sparks in organizations that spent decades extinguishing them.

    This means – ironically – that the companies that win the AI transition will not be the ones deploying the most AI, but the ones that, against every incentive of the machine age and the new AI era, somehow preserved their people’s capacity to be human.

    If you are the CEO of a company, ask yourself honestly: if you free your people from routine and delegated tasks, can they actually create? If the answer is no, your AI “transformation” will bring efficiency gains, but gains indistinguishable from those of everyone you compete with. The organizations that combine AI automation with genuine human agency (not the PowerPoint version) will win.

    At the same time, Human Debt is a still-invisible, unpriced liability on every balance sheet. Companies with high operational discipline and a low-innovation culture are short a put they don’t know they sold. We are still early, but post-deployment we will see organizations whose AI productivity gains plateau within 12-18 months, because the “freed” humans have nothing genuinely creative or inventive to contribute. That plateau will be the Human Debt surfacing.

    For companies now facing decisions over AI-driven restructurings, the most important assessment is Human Debt. Organizations risk laying off people who are genuinely creative but AI-slow, while promoting and keeping those who operate the AI like machines. Those with high Human Debt will see AI ROI plateau faster than they expect. The ones that outcompete will be organizations with a culture of genuine autonomy, intrinsic motivation, and a lived tolerance for failure (very rare).

    For anyone interested: the source is a German book called “Auf den Spuren der Intuition” by Thomas Gonschior, which itself is based on his documentary series aired on BR.

  • Do we want to use AI for peace or war? Do we want to use it for unity, compassion, and love – or do we allow it to be used for separation, killing, control, and power? Delegating the killing of other humans to a machine without consciousness, without compassion, without karma is different from making the choice to kill someone yourself. If you delegate it, you avoid confronting the moral weight, the suffering, the separation it creates. Autonomous weapons are the ultimate sign that many world leaders are totally disconnected from nature, conscience, and love. It is the furthest you can get from the teachings of Jesus Christ.

  • Monoculture farming collapsed entire food supplies. I do believe we risk doing the same thing to corporate cognition.

    Today I read a thesis drawing on Gödel’s incompleteness theorems to argue that AI, by accelerating the construction of purely logical systems, will expose the rigidity and fragility of our cognitive and organizational structures.

    It is a loose but quite interesting analogy, because it instinctively points at something real that the AI adoption debate ignores entirely.

    When every organization offloads cognition to similar AI systems – trained on the same data, optimized for the same benchmarks and metrics – what you get is a cognitive monoculture at organizational (and societal) scale.

    Let’s call it algorithmic monocropping.

    Agriculture learned with the Irish potato and the American banana that a system optimized for a single output becomes catastrophically fragile the moment a single point of failure arises.

    Corporate (and societal) AI adoption is repeating this mistake – not because AI is dangerous, but because uniform AI adoption as we observe it today will by definition eliminate the cognitive variance that previously made organizations resilient.

    Individual atrophy is bad enough (offloading thinking makes you worse at thinking). But individuals work in organizations or run countries, and this is where collective atrophy becomes a serious problem. A company whose judgment layer relies entirely on a ChatGPT 6 or Claude Opus 5 model shares the same points of failure as every other company doing the same thing.
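
    A minimal sketch of why this matters, using made-up numbers (the firm count, flaw probability, and trial count are all illustrative assumptions): if every firm draws its judgment from one shared model, a single blind spot hits everyone at once; with independent models, simultaneous failure is astronomically unlikely.

      # Toy simulation: systemic failure under shared vs. diverse models.
      # All parameters are illustrative assumptions, not measured values.
      import random

      N_FIRMS = 100      # firms relying on AI for one critical judgment
      P_FLAW = 0.01      # chance a model has a blind spot on a given task
      TRIALS = 100_000

      def systemic_failures(shared_model: bool) -> int:
          """Count trials in which ALL firms fail on the same task."""
          count = 0
          for _ in range(TRIALS):
              if shared_model:
                  # One model, one draw: its blind spot hits every firm at once.
                  all_fail = random.random() < P_FLAW
              else:
                  # Independent models: all 100 must be flawed simultaneously.
                  all_fail = all(random.random() < P_FLAW for _ in range(N_FIRMS))
              count += all_fail
          return count

      print("shared model  :", systemic_failures(True))   # on the order of 1,000 (~1%)
      print("diverse models:", systemic_failures(False))  # ~0 (probability 0.01^100)

    The expected loss per firm is identical in both worlds; what the monoculture changes is the correlation of failures, which is exactly what “fail all at once” means below.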

    The biggest advantage over the next decade will not accrue to the individuals who adopt AI the fastest, but ironically to those who maintain their cognitive sovereignty – and to organizations that preserve their cognitive “biodiversity” alongside it. If you are capable of reasoning independently while models converge to the same average answer, you have a competitive advantage.

    The scariest part is that monocultures don’t fail gradually but ALL AT ONCE.

  • AGI?

    AGI. The most powerful word in tech has no definition. Which means whoever defines it, wins.

    There is no definition of what AGI (artificial general intelligence) actually means. There is no agreed definition of “general”. No agreed definition of “intelligent”. No agreed method for measuring either. No AI lab has published falsifiable AGI criteria they commit to being measured against.

    It is quite likely that this definitional “vacuum” is intentional: it serves whoever needs the term most at any given moment.

    Every foundation AI lab depends on proximity to “AGI” for its next capital raise. “Near-AGI” justifies valuations; “very good narrow AI” does not. There is zero incentive for those who develop AI to formalize what AGI actually means.

    And when there is no definition, there is no threshold. Every year that regulators and investors spend arguing over non-existent thresholds is a year in which AI labs develop without governance.

    Without a threshold: undisciplined capital inflow, regulatory forbearance, talent magnetism – all flowing toward a word no one has defined.

    I believe two seemingly inconsistent statements to be true:

    1. Today’s top-tier models are generally intelligent (in most cases, they act in logically intelligent ways)
    2. But they do not possess general intelligence (they cannot distinguish what they know from what they make up)

    The first statement is about the ability to process complex variables. The top models are already intelligent in the raw sense. Gemini 3.0 Deep Think can process complex variables, reason across domains, and demonstrate parity with elite specialists on hard problems.

    The second point is the load-bearing wall that relativizes the first. A scientist who fabricates data 13% of the time is not a bad scientist but a fraud. Currently we allow AI a tolerance we would never extend to humans.

    As long as hallucinations persist, you cannot rely on even the most advanced reasoning capabilities. And without reliability, there is no autonomous deployment, no liability transfer, no enterprise-grade trust.
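
    To make the reliability point concrete, here is a back-of-the-envelope sketch reusing the 13% figure from above and assuming, purely for illustration, that errors are independent across steps:

      # Per-step reliability compounds across an autonomous multi-step task.
      # The 13% rate is the essay's illustrative figure; independence across
      # steps is an assumption made to keep the arithmetic simple.
      p_clean_step = 1 - 0.13

      for steps in (1, 5, 10, 20):
          p_clean_run = p_clean_step ** steps
          print(f"{steps:>2} chained steps -> {p_clean_run:.1%} chance of a fabrication-free result")
      # 1 step -> 87.0%, 5 steps -> 49.8%, 10 steps -> 24.8%, 20 steps -> 6.2%

    At twenty chained steps, a per-step rate that sounds tolerable leaves roughly a one-in-sixteen chance of a clean run, which is why “no reliability, no autonomous deployment” is not hyperbole.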

    I do believe this friction is temporary. Next-generation model architectures will reduce hallucination rates below human error rates, even though this may require fundamentally different approaches. Models that help humans solve frontier physics problems today will, in 18 to 36 months, do so with persistent memory, tool use across systems, and without hallucinations.

    But will such systems be declared AGI?

    My best guess is “No”. We will lift the thresholds. Being able to answer hard problems that >99.99% of humans cannot answer will not be enough. We will define “general” to also include intuition, embodied judgment, and multi-decade research agendas. This way, the term will always remain 2-3 years away. “We achieved AGI” would end the fundraising narrative, and no one holding equity wants that sentence spoken out loud.

    I think it is best to ignore the AGI debate entirely.

  • After spreadsheets became standard in M&A, deals closed significantly slower. Deals went from 2-4 months from LOI to close (1970s to 1980s) to 6-12+ months (1980s to 1990s). And deals didn’t improve: 50-70% of acquisitions destroyed shareholder value, the same as pre-spreadsheet. Basically, deals got slower, but not any better.

    Why did that happen? Analysis paralysis, an illusion of precision, the replacement of judgment with calculation, and accountability shifting from the CEO and CFO to the analysts building the spreadsheets. Nobody except a few dealmakers like Warren Buffett or Peter Lynch realized it at the time (“I’ve never seen a deal that didn’t look good in a spreadsheet”).

    Perhaps you’ve recognized some parallels: with AI we are repeating the pattern, only faster and deeper. If human nature stays the same, it will result in an efficiency paradox. Everything will be analyzed and created even faster, but with more output will come slower completion. It will lead to false confidence and zero responsibility (“The AI models said so”). And authority is shifting from human to AI much faster than it ever shifted from human to spreadsheet.

    What will happen can perhaps be called a quality collapse. Average quality will increase, but top-end quality will decrease. Everything will be crowded out by AI-generated “pretty good”, and what will be missing is excellence. At the same time, the AI wave is hitting a succession and retirement wave: senior experts with real experiential intuition and judgment are retiring, and juniors completely dependent on AI have to take over.

    While it was previously a recognized truth that 30 years of experience >>>> 5 years of experience, we now live under the illusion that 5 years of experience + AI = 30 years of experience. We won’t realize the difference until totally novel problems arise that AI can’t handle because they are not in its training data, and by that stage the humans will already be cognitively crippled.

    We think we can just go back and “do it without AI if needed”, but it will be too late, because the neural pathways are atrophying right now. Organizations are reshaping all their processes around AI. Skills are no longer being taught to the next generation; they are being taught to AI. We are already in a state of dependency that looks like empowerment, and we won’t see it until the tool is removed.

    Try doing a 1970s M&A deal with just pen, paper, and a calculator. How many people globally could still do it? The same thing will happen with AI, but faster. The result, I fear, is that innovation in many organizations will slow down and they will commoditize.

    AI-driven productivity gains are a dangerous illusion. Not because of AI (an extremely great and powerful tool) but because of how we work with it. Spreadsheets optimized for what was modelable, not for what was innovative and couldn’t be captured in numbers. AI will do the same thing, not exclusively in finance but in all domains.

    What makes AI perhaps more “dangerous” is that it has no barrier to entry. It will enable a select (rare) few to truly master what they do (driving real innovation), but the majority (if they are not very careful and intentional) will destroy their own personal economic value.

    With spreadsheets, you had to learn formulas, understand logic, and debug errors, which was a protection against overuse. AI has none of that: nothing to learn (if you are really honest, dear AI coaches), no debugging, no logic, no barrier. The result is the instant universal adoption we are observing.

    So, back to the original observation: with spreadsheets everyone got more productive, but deals took longer and outcomes didn’t improve. Now with AI, everyone is getting more productive, but: are projects finishing faster? Is quality improving? Is innovation in the median corporation accelerating?

    I think: with spreadsheets, people began optimizing for “the model says yes” instead of “the deal is actually good”. Are we now optimizing AI use for “the AI approves” instead of “this is actually valuable”?

    We know that when a measure becomes a target, it ceases to be a good measure (Goodhart’s law).

    This is by no means an anti-AI or anti-spreadsheet stance. But I hope to provoke some careful thought about the relationship we have with AI and how to avoid the analysis paralysis of the spreadsheet era.

  • More than 50% of recently published website texts are now written by AI. This means that from today forward, the majority of newly published text is already synthetic. The same will hold true for every other form of content: images, video, and audio. In and of themselves, AI-written texts shouldn’t be such a large issue. The problem is not texts written by AI, but that we have simultaneously crossed the point where you can no longer reliably distinguish AI-generated content from human content. I have a strong opinion that AI should sound like AI; I also think that AI chatbots should be apparent as such, and that AI-generated images and videos should carry deeply embedded watermarks. This is also why I believe parts of the EU AI Act and the California AI Transparency Act are net-positive for humanity. But why do I believe so?

    The most pressing issue with AI-generated content is much less about the capability or alignment of AI models than about the collapse of the epistemic commons before we even arrive at generally intelligent or superintelligent AI models. Here is what I mean:

    Most text is now AI-generated, and within months the same will be true for video, images, and audio. When the cost and effort of creation collapse to zero, two things vanish simultaneously: trust and meaning.

    We can no longer casually trust what we see. Every text, every video, every expert opinion becomes suspect. As social primates who evolved to trust patterns and authorities, we are losing the ability to distinguish signal from noise at the exact moment we need it most.

    Perhaps the deeper crisis isn’t skepticism but meaning collapse. Scarcity and effort have always been core to how humans assign value and significance. When infinite content can be generated instantly and automated for any purpose, these anchors disappear.

    Most look at this as primarily economic disruption, but perhaps it is much more psychological and civilizational because we are eroding the foundations of shared reality before we have built alternatives.

    Then there is a slippery slope: from now on, humans will increasingly interact with and read texts written by AI systems trained on AI-generated texts. Soon the same will be true of photos, videos, and audio. This training loop has (at least) the potential to create cultural drift in directions not yet predictable. One thing we can be quite certain about is that our human values are already being reshaped by AI systems in ways we cannot track. This in turn makes the question of “alignment” both more important and, at the same time, secondary.
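
    A toy illustration of why such a training loop can drift (my own sketch, not a claim about any real model): fit a distribution to data, sample from the fit, refit on the samples, repeat. The numbers are arbitrary; the qualitative behavior, a wandering mean and a tendency for variance to shrink, mirrors published “model collapse” results in simplified form.

      # Toy recursive-training loop: each "generation" is trained only on
      # the previous generation's output. Diversity (std) tends to erode.
      import numpy as np

      rng = np.random.default_rng(42)
      data = rng.normal(loc=0.0, scale=1.0, size=50)  # small "human" corpus

      for gen in range(1, 21):
          mu, sigma = data.mean(), data.std()
          if gen == 1 or gen % 5 == 0:
              print(f"generation {gen:>2}: mean={mu:+.3f}  std={sigma:.3f}")
          # The next generation sees only synthetic samples from the current fit.
          data = rng.normal(loc=mu, scale=sigma, size=50)

    With a small corpus the drift is visible within a few dozen generations: the mean wanders away from the original and the spread tends to narrow, i.e., the loop loses exactly the variance that made the original data rich.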

    The most pressing risk to human civilization is therefore not a hypothetical “misaligned” superintelligence, but the risk of arriving there divided – socially and epistemically.

    What must be done is certainly harder than the alignment of AI systems:

    • Rebuilding trusted information infrastructure
    • Creating new forms of verifiable authenticity (see the sketch after this list)
    • Developing cultural “antibodies” to synthetic manipulation
    • Building meaning-making structures that aren’t dependent on scarcity or effort
    • Preserving and strengthening human coordination capacity
    • Etc.
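
    On verifiable authenticity: for text, one published approach is statistical watermarking (Kirchenbauer et al., 2023), in which the generator is softly biased toward a pseudorandom “green list” of tokens and a detector tests whether the green fraction exceeds chance. Below is a heavily simplified, self-contained sketch of the detection idea; the vocabulary, the sampling scheme, and all numbers are illustrative stand-ins, not the paper’s actual algorithm.

      # Simplified green-list text watermark: embed by preferring "green"
      # tokens, detect by comparing the green fraction to the ~50% chance
      # level. Illustrative only; real schemes bias logits inside the model.
      import hashlib
      import random

      def is_green(prev_token: str, token: str) -> bool:
          """Deterministically assign ~half of all bigrams to a green list."""
          digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
          return digest[0] % 2 == 0

      def green_fraction(tokens: list[str]) -> float:
          hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
          return hits / max(len(tokens) - 1, 1)

      rng = random.Random(0)
      vocab = [f"w{i}" for i in range(1000)]  # stand-in vocabulary

      def generate(watermark: bool, n: int = 300) -> list[str]:
          """Toy generator: sample candidate tokens, optionally prefer green ones."""
          tokens = [rng.choice(vocab)]
          while len(tokens) < n:
              candidates = [rng.choice(vocab) for _ in range(8)]
              greens = [c for c in candidates if is_green(tokens[-1], c)]
              tokens.append(greens[0] if watermark and greens else candidates[0])
          return tokens

      print("plain green fraction      :", round(green_fraction(generate(False)), 2))  # ~0.5
      print("watermarked green fraction:", round(green_fraction(generate(True)), 2))   # ~0.99

    The detector needs no access to the generator, only the keying function, which is what makes mandated watermarking technically plausible for text; robust watermarks for images and video are a harder, though active, research area.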

    This is harder than “alignment”, because the more we look at these to-dos from a federal or global perspective, the more impossible they become.

    Now, to move from the theoretical to the practical: Who are the 5 to 150 people you can still genuinely trust and coordinate with? Because everything else either emerges from functional groups, or it won’t emerge at all.

  • When looking at AI, people fixate on surface-level effects: economic disruption (jobs disappearing), alignment risks (AI going rogue), or ethical dilemmas (bias in LLMs). While those are all real, they also seem to be distractions from the real shift. The current conversations are no longer about whether we achieve AGI but about when – some say 10 years; I say it’s basically already here (it all depends on the definition of the term, really). By definition, AGI will match and then surpass human intelligence in every single domain: strategic, creative, you name it. Once that threshold is crossed (and it’s closer than many admit), a feedback loop kicks in: AI designs better AI, which designs even better AI, ad infinitum.

    Because we are not yet there, we debate AI as a tool. But as soon as we cross that threshold, AI will predict, simulate, and optimize anything logic-based with such precision that human input becomes unnecessary or perhaps counterproductive. Humans – and that means governments, corporations, and individuals – will outsource everything, from policy to life choices, because AI will present the best logical, data-backed option. And because it is so much better at logic, you stop questioning it. The “alignment” problem is therefore ultimately less about making AI safe for humans and more about preparing humans to accept their irrelevance in logical intelligence and – in my opinion – transitioning (or better: re-connecting) them to their intuitive intelligence. If we fail at this, the majority of humans will experience free will only as an illusion.

    We humans derive meaning from struggle, achievement, and social bonds. Within the next 10 to 20 years, we won’t need to struggle to achieve anymore. Achievement will be handed out (or withheld) by systems we cannot understand. What is left are social bonds. But is that really the case? We already see AI-mediated interactions replacing genuine connections (whether emails, eulogies, or even virtual AI companions). If we do not pay attention and re-connect with other humans (our tribes), we risk real psychological devastation at scale.

    If AI is centralized, it will be operated by an elite (that’s at least the current trend). Not only will this elite gain god-like power, but it will form another elite class: humans who are augmented by superintelligence through direct neural interfaces or exclusive AI enhancements. What about the rest? An underclass kept alive by a universal-basic-whatever, but without purpose or power?

    The problem really is: once we cross that threshold, it won’t be fixable. We had better act collectively now, or the world will be run by a handful of super-enhanced humans and their AI overlords.

    In 2025, these thoughts read like speculation. But based on my observations of how the majority of humans have started using and adopting AI, the trajectory seems obvious (to me). AI is optimizing for efficiency. Companies adopting it are as well. Individuals must too – or they are no longer competitive. What is the antidote? I am divided. I don’t believe AI must lead to such a dystopia. I am much more convinced that it is our best shot at achieving utopia. But there is a very thin line between them: us humans, and how we collectively act. And acting is much (!) less about technological adaptation (from becoming AI “experts” to Neuralink cyborgization) and infinitely more about re-connecting to what makes us uniquely human: our consciousness, our connection to God, our one Creator, and our unity. Meaning will come from non-competitive pursuits, AI alignment from balancing logic with consciousness, and happiness from real, deep, social human connections. Intelligent machines – no matter how superintelligent they turn out to be – can never be conscious. Perhaps it is a wake-up call: we lost our spiritual connection to consciousness, and we must re-connect.

  • If you exclude AI-driven investments, the US economy mirrors Germany’s near-stagnation, with near-zero GDP growth in the first half of 2025. More than 90% of US GDP growth stems from AI and related sectors. Large parts of the >$375B in AI investments scream “bubble”. Only a small percentage of companies and labs have a unique moat. Should it burst, for whatever reason, the US will face a severe recession.

    The AI bubble could burst from two opposite extremes: exponential technological progress or the lack thereof. In the first case, imagine post-LLM architectures (e.g., Mamba) slashing compute needs by 100x. That would strand GPU-heavy datacenters and make >$2.9T of mostly debt-financed datacenter investments obsolete. In the second: if LLMs plateau without ROI, the hype will fade as it did in the dot-com era, tanking valuations despite the tangible capex.

    Whatever the case, if it pops, the US could spiral into a vicious reinforcing cycle: recession → layoffs/unemployment → consumer pullback → deflationary spiral (or stagflation if supply shocks hit) → political extremism. This reminds me of pre-WW2 Europe. The US must diversify growth beyond AI now.

    What can Germany learn from this? The obvious lesson is to accelerate AI adoption and sovereignty. Just as the US without AI stagnates, Germany with AI could grow again. At the same time, imitating the US is a fragile lifeline. Perhaps the smartest idea is to reject the hype cycle altogether: let Berlin-based AI startups do their thing, rent US-based AI software, and focus all energy on high-tech breakthroughs in the decentralized Mittelstand.

  • This is AGI

    I say that current AI is AGI. It is just not obvious yet, because we haven’t yet connected it to our very complex and fragmented software and data environments – and turning R&D into real-world change is a multi-year process anyway.

    Even if we stopped and froze AI development here and now, we’d only realize 2 or 3 years down the road that we indeed have AGI. In some niches it will be faster (software or law), in others slower (complex logistics).

    However, AI development is not stopping here and now. It continues to improve – I say exponentially. Even if you are more conservative, the linear growth still undoubtedly has a large rate of change.

    Today (!), we have AI models that evolved from barely completing sentences to writing code that ships to production; we have AI doing PhD-level research and achieving gold-medal performance at the International Math Olympiad. AI is solving medical problems that baffle experts.

    Again – what is currently mostly manually prompted work in long chat conversations will soon develop into agents that can do almost all knowledge work fully autonomously.

    I’m not talking about AI as an assistant or a co-pilot. It will just straight up finish the work while you are napping on the beach.

    The difference between the GPT-3 model and today’s models – whether Grok 4, Gemini 2.5 Pro, or ChatGPT o4 – is like comparing a Nokia 1011 to an iPhone 16 Pro. We went from purely text-based chats to multimodal understanding – models that can see, hear, and reason across domains simultaneously. AI is starting to genuinely understand context and nuance in ways that feel human.

    The next phase is not purely larger AI models, but models that learn continuously – models that remember you and can plan and execute multi-step tasks over days, weeks, or months.

    An AI system that remembers perfectly, understands context, never sleeps, and gets smarter every day: this is being built today in AI labs around the globe.

    We have AGI today, and it is only a matter of time until we arrive at superintelligent AI systems. Is it 2 years? 3? 4? 5? Irrelevant. Whether it is 1 year or 10 years, the implications are the same: everything is going to change forever.

  • How We Use AI

    Whether current AI systems qualify as AGI is beside the point. Five years ago, if you had asked me to define AGI, my answer would have closely described what GPT o3 or Gemini 2.5 Pro are now. So if this is AGI, where are the breakthroughs?

    Valid question. The answer: we are the bottleneck.

    The limitation is no longer the model. The real limitation is that we haven’t figured out how to use LLMs properly. Even if AI development froze today and all we had available were o3- and Gemini 2.5 Pro-level LLMs, we would still see a decade of profound disruption and innovation across entire industries.

    Most users treat AI like Google, a friend, a mentor, or a novelty. Few understand prompting. Even those who do barely scratch the surface of what is possible when you give AI the right prompt, the relevant context, and access to specific or even proprietary data.

    Worse, we are not augmenting human intelligence; we are outsourcing it. TikTokified workflows, mindless automation, and a prompt-template copy-paste culture are commoditizing subpar outcomes. Instead of expanding our minds, we’re paralyzing them.

    The real potential lies hidden in tandem cognition: reimagining how we work with AI systems so that our uniquely human traits (intuition, creativity, vision, …) aren’t ignored but amplified. Without this shift, outputs will commoditize (across humans and organizations).

    We urgently need two things: first, a methodology for extracting maximum value from LLMs; and second, a philosophy that empowers our human genius instead of replacing it.

    The future is not AI versus human. It is human with AI, at full capacity. Currently, the focus is on maximum capacity for AI compute. Now it’s time we focus on maximum capacity for human genius.