Author: Marius Schober


  • After spreadsheets became standard in M&A, deals closed significantly slower: the typical time from LOI to close went from 2-4 months (1970s to 1980s) to 6-12+ months (1980s to 1990s). And deals didn’t improve: 50-70% of acquisitions destroyed shareholder value, the same as before spreadsheets. Basically, deals got slower, but not any better.

    Why did that happen? Analysis paralysis, an illusion of precision, the replacement of judgment with calculation, and a shift of accountability from the CEO and CFO to the analysts building the spreadsheets. Hardly anyone realized it at the time, except a few dealmakers like Warren Buffett or Peter Lynch (“I’ve never seen a deal that didn’t look good in a spreadsheet”).

    Perhaps you’ve recognized some parallels: with AI we are repeating the pattern, only faster and deeper. If human nature stays the same, it will result in an efficiency paradox. Everything will be analyzed and created even faster, but with more output will come slower completion. It will lead to false confidence and zero responsibility (“The AI models said so”). And authority is shifting from human to AI much faster than it ever shifted from human to spreadsheet.

    What will happen can perhaps be called a quality collapse. Average quality will increase, but top-end quality will decrease. Everything will be crowded with AI-generated “pretty good”; what will be missing is excellence. At the same time, the AI wave is hitting a succession and retirement wave: senior experts with real experiential intuition and judgment are retiring, and juniors who are completely dependent on AI have to take over.

    While it was previously a recognized truth that 30 years of experience >>>> 5 years of experience, we now live in the illusion that 5 years of experience + AI = 30 years of experience. We won’t realize the difference until totally novel problems arise that AI can’t handle because they are not in its training data – and by that stage, humans will already be cognitively crippled.

    We think we can just go back and “do it without AI if needed,” but it will be too late, because neural pathways are atrophying right now. Organizations are rebuilding all their processes around AI. Skills are no longer being taught to the next generation but to AI. We are basically already in a state of dependency that looks like empowerment, and we won’t see it until the tool is removed.

    Try doing a 1970s M&A deal with just pen, paper, and calculators. How many people globally could still do it? The same thing will happen with AI, only faster. The result – I fear – is that innovation in many organizations will slow down and they will commoditize.

    AI-driven productivity gains are a dangerous illusion. Not because of AI itself (an extremely powerful tool) but because of how we work with it. Spreadsheets optimized for what was modelable, not for what was innovative and couldn’t be seen in numbers. AI will do the same thing, not exclusively in finance but in all domains.

    What makes AI perhaps more “dangerous” is that it has no barrier to entry. It will enable a select (rare) few to truly master what they do (driving real innovation), but the majority (if they are not very careful and intentional) will destroy their own personal economic value.

    With spreadsheets, you had to learn formulas, understand logic, and debug errors – which was a protection against overuse. AI has none of that: nothing to learn (if you are really honest, dear AI coaches), no debugging, no logic, no barrier. The result is the instant universal adoption we are observing.

    So, back to the original observation: with spreadsheets, everyone got more productive, but deals took longer and outcomes didn’t improve. Now with AI, everyone is getting more productive, but: are projects finishing faster? Is quality improving? Is innovation in the median corporation accelerating?

    I think: with spreadsheets, people began optimizing for “the model says yes” instead of “the deal is actually good.” Are we now optimizing our AI use for “the AI approves” instead of “this is actually valuable”?

    We know that when a measure becomes a target, it ceases to be a good measure.

    This is by no means an anti-AI or anti-spreadsheet stance. But I hope to provoke some careful thought about our relationship with AI and how to avoid the analysis paralysis of the spreadsheet era.

  • Deep Work 2.0

    Deep work is a term Cal Newport uses to describe activities performed in a state of absolute, distraction-free concentration that push our cognitive capabilities to their limit. I never read the book because the idea is just so simple: schedule a time when you perform real work – no social media, no notifications, just you and the work in front of you.

    When I first heard about Deep Work, it was not a new concept to me. I had already practiced deep focus sessions regularly – usually early in the morning. But it definitely made me more serious about them. No matter how disciplined I attempted to be, the infinite dopamine from social media and the constant notifications from my phone ever more often crushed my flow state. Years ago, I tried and then purchased the blocking software Cold Turkey (for Mac and PC) and an Android app called Digital Detox. Both apps are absolutely great (yes: one-time purchases!). They allow you to block anything you want (for example, social media and YouTube) while making it extremely difficult (if not impossible) to circumvent the block.

    Recently, I felt quite unhappy about my lack of progress towards the goals I had set for myself. One part of the equation certainly was the birth of our daughter. Yet I still managed to schedule at least one Deep Work session each day. So what was the missing link? I realized that it is not only social media, YouTube, or news websites anymore – LLMs are now just as distracting as social media.

    Today I created a new blacklist filter in Cold Turkey that blocks all LLM apps and URLs. I may be one of the first people in the world to do so, but I realized that – for my ADHD-type brain – having AI accessible non-stop is just as much of a distraction as social media feeds: a source of noise and cheap dopamine.
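    If you don’t use Cold Turkey, here is a minimal sketch of the same idea at the operating-system level: a hosts-file blocklist that redirects LLM chat domains to an unroutable address. The domain list below is illustrative, not exhaustive – adjust it to whatever tools you actually use:

      # /etc/hosts (macOS/Linux) or C:\Windows\System32\drivers\etc\hosts (Windows)
      # Redirect common LLM chat domains to an unroutable address.
      0.0.0.0 chatgpt.com
      0.0.0.0 chat.openai.com
      0.0.0.0 claude.ai
      0.0.0.0 gemini.google.com
      0.0.0.0 grok.com
      0.0.0.0 perplexity.ai

    A hosts-file entry is trivial to undo, of course – which is exactly why a dedicated blocker like Cold Turkey, which makes circumvention genuinely difficult, works better.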

    I realized that using LLMs blindly leads to procrastination, analysis paralysis, decision fatigue, unoriginal thought, loss of free will, a decline in deep-thinking capacity, atrophy of overall cognitive function, and declining writing skills.

    To be totally honest: Instead of working, I prompted. Instead of writing, I prompted. Instead of thinking, I prompted.

    My personal insight is that I must be just as intentional and selective about using AI as I must be with social media. Instead of using it all the time, I now limit it to very specific tasks where it adds exponential value to the work.

    Let’s be clear: I’m not avoiding AI. I’m also not badmouthing it. I believe AI is one of the greatest technologies humans have invented. What I can tell from my personal experience and observations: AI can be a powerful lever or a heavy burden. Therefore, I believe it is time for Deep Work 2.0: deep focus sessions in which you intentionally do not use AI at all – at least not actively (i.e., you only use pre-prompted conversations or Deep Research reports that you saved as a PDF or Markdown file before the session).

    What if not only distractions and social algorithms, but also (supposed) AI efficiency, is a deadly enemy of our flow state?

  • More than 50% of recently published website texts are now written by AI. This means that from today forward, the majority of newly published text is already synthetic. The same will soon hold true for every other form of content: images, video, and audio. In and of themselves, AI-written texts shouldn’t be such a large issue. The problem is not texts written by AI, but that we have simultaneously crossed the point where we can no longer reliably distinguish AI-generated content from human content. I have a strong opinion that AI should sound like AI. I also think that AI chatbots should be recognizable as such, and that AI-generated images and videos should carry deeply embedded watermarks. This is also why I believe parts of the EU AI Act and the California AI Transparency Act are net-positive for humanity. But why do I believe so?

    The most pressing issue with AI-generated content is much less about the capability or alignment of AI models than about the collapse of the epistemic commons before we even arrive at generally intelligent or superintelligent AI models. Here is what I mean:

    Most text is now AI-generated, and within months the same will be true for video, images, and audio. When the cost and effort of creation collapse to zero, two things vanish simultaneously: trust and meaning.

    We can no longer casually trust what we see. Every text, every video, every expert opinion becomes suspect. As social primates who evolved to trust patterns and authorities, we are losing the ability to distinguish signal from noise at the exact moment we need it most.

    Perhaps the deeper crisis isn’t skepticism but meaning collapse. Scarcity and effort have always been core to how humans assign value and significance. When infinite content can be generated instantly and automated for any purpose, these anchors disappear.

    Most look at this as primarily economic disruption, but perhaps it is much more psychological and civilizational because we are eroding the foundations of shared reality before we have built alternatives.

    Then there is this slippery slope: from now on, humans will increasingly interact with and read texts written by AI systems trained on AI-generated texts. Soon the same will apply to photos, videos, and audio. This training loop has (at least) the potential to create cultural drift in directions we cannot yet predict. One thing we can be quite certain about is that our human values are already being reshaped by AI systems in ways we cannot track. This in turn makes the question of “alignment” both more important and, at the same time, secondary.

    The most pressing risk to human civilization is therefore not a hypothetical “misaligned” superintelligence, but rather the risk of arriving there divided – socially and epistemically.

    What must be done is certainly harder than the alignment of AI systems:

    • Rebuilding trusted information infrastructure
    • Creating new forms of verifiable authenticity
    • Developing cultural “antibodies” to synthetic manipulation
    • Building meaning-making structures that aren’t dependent on scarcity or effort
    • Preserving and strengthening human coordination capacity
    • Etc.

    This is harder than “alignment,” because the more we approach these to-dos from a federal or global perspective, the more impossible they become.

    Now, to move from the theoretical to the practical: Who are the 5 to 150 people you can still genuinely trust and coordinate with? Because everything else either emerges from functional groups, or it won’t emerge at all.

  • When looking at AI, people are fixated on surface-level effects: economic disruption (jobs disappearing), alignment risks (AI going rogue), or ethical dilemmas (bias in LLMs). While those are all real, they also seem to be distractions from the real shift. The conversation is no longer about whether we achieve AGI but about when – some say 10 years, I say it’s basically already here (it really all depends on the definition of the term). By definition, AGI will match and then surpass human intelligence in every single domain: strategic, creative, you name it. Once that threshold is crossed (and it’s closer than many admit), a feedback loop kicks in: AI designs better AI, which designs even better AI, ad infinitum.

    Because we are not yet there, we debate AI as a tool. But as soon as we cross that threshold, AI will predict, simulate, and optimize anything logic-based with such precision that human input becomes unnecessary or perhaps counterproductive. Humans – and that means governments, corporations, and individuals – will outsource everything, from policy to life choices, because AI will present the best logical, data-backed option. And because it is so much better at logic, you stop questioning it. The “alignment” problem is therefore ultimately less about making AI safe for humans than about preparing humans to accept their irrelevance in logical intelligence and – in my opinion – transitioning (or better: re-connecting) them to their intuitive intelligence. If we fail at this, the majority of humans will experience free will only as an illusion.

    We humans derive meaning from struggle, achievement, and social bonds. Within the next 10 to 20 years, we won’t need to struggle to achieve anymore. Achievement will be handed out (or withheld) by systems we cannot understand. What is left are social bonds. But is that really the case? We already see AI-mediated interactions replacing genuine connections (whether emails, eulogies, or even virtual AI companions). If we do not pay attention and re-connect with other humans (our tribes), we risk real psychological devastation at scale.

    If AI is centralized, it will be operated by an elite (that’s at least the current trend). Not only will this elite gain god-like power, but it will form another elite class: humans who are augmented by superintelligence through direct neural interfaces or exclusive AI enhancements. What about the rest? An underclass kept alive by a universal-basic-whatever, but without purpose or power?

    The problem really is: once we cross that threshold, it won’t be fixable. We had better act collectively now, or the world will be run by a handful of super-enhanced humans and their AI overlords.

    In 2025, these thoughts will read like speculation. But based on my observations of how the majority of humans have started using and adopting AI, the trajectory seems obvious (to me). AI is optimizing for efficiency. Companies are adopting it for the same reason. Individuals must too – or they are no longer competitive. What is the antidote? I am divided. I don’t believe AI must lead to such a dystopia. I am much more convinced that it is our best shot at achieving utopia. But there is a very thin line between them: us humans, and how we collectively act. And acting is much (!) less about technological adaptation (from becoming AI “experts” to Neuralink cyborgization) and infinitely more about re-connecting to what makes us uniquely human: our consciousness, our connection to God, our one Creator, and our unity. Meaning will come from non-competitive pursuits, AI alignment from balancing logic with consciousness, and happiness from real, deep, social human connections. Intelligent machines – no matter how superintelligent they turn out to be – can never be conscious. Perhaps it is a wake-up call: we lost our spiritual connection to consciousness – and we must re-connect.

  • If you exclude AI-driven investment, the US economy mirrors Germany’s near-stagnation, with near-zero GDP growth in the first half of 2025. More than 90% of US GDP growth stems from AI and related sectors. Large parts of the >$375B in AI investment scream “bubble.” Only a small percentage of companies and labs have a unique moat. Should the bubble burst, for whatever reason, the US will face a severe recession.

    The AI bubble could burst from either of two opposite extremes: exponential technological progress or the lack thereof. In the case of exponential technological progress: imagine post-LLM architectures (e.g., Mamba) slashing compute needs by 100x. That would strand GPU-heavy datacenters and render >$2.9T of mostly debt-financed datacenter investment obsolete. On the other side: if LLMs plateau without ROI, the hype will fade like dot-com, tanking valuations despite the tangible capex.

    Whatever the case, if it pops, the US could spiral into a vicious reinforcing cycle: recession → layoffs/unemployment → consumer pullback → deflationary spiral (or stagflation if supply shocks hit) → political extremism. This reminds me of pre-WW2 Europe. The US must diversify growth beyond AI now.

    What can Germany learn from this? The obvious answer is to accelerate AI adoption and sovereignty. Just as the US stagnates without AI, Germany with AI could grow again. At the same time, imitating the US is a fragile lifeline. Perhaps the smartest idea is to reject the hype cycle altogether: let Berlin-based AI startups do their thing, rent US-based AI software, and focus all energy on high-tech breakthroughs in the decentralized Mittelstand.

  • One of the many things we experience is the simultaneous reaching for more alongside the subconscious knowing that the little we have is all we truly need. We seek noise, though silence holds the answer. We look to the future, we look to the past, yet we forget the now.

    We live here. We live now.

    The illusion of the future and the weight of the past hold us captive. It is like a pendulum, swinging from what was to what ought to be. From what made us happy to what might make us unhappy. We cling instead of letting go. We try to force the future into submission, forgetting all the while that the future emerges with effortless grace in the here, in the now.

    Let us flow. Not blindly. With visionary intention. Instead of waiting for tomorrow, let us be today who we wish to be tomorrow. It is what is born today that shapes the morrow.

  • Opportunities

    Today, the largest opportunities arise from people living in the past. Obvious trends, like the exponential advancement of solar PV or electric vehicles, are irrationally badmouthed. There seems to be a longing for bringing back nuclear power, bringing back manufacturing, bringing workers back into offices, bringing back military dominance. People still believe in college degrees, standardized testing, credentialism in hiring, pension funds – the list goes on. Politicians try to compete with China, want to bring back supply chains, want to revive the 40+ hour workweek that originated in the industrial revolution. This conservative nostalgia seems to be a large trend; or perhaps it always has been.

    What worked 20 years ago will not miraculously work again. Politicians are selling “the good old days” to people unaware of current reality – of how incredibly fast exponential technologies and societal changes are evolving.

    The opportunities of today and the future are not in chasing the resurrection of the past but in identifying which underlying needs from those eras remain unmet in modern forms.

    For example: People don’t want manufacturing jobs back. They also don’t want to out-manufacture China. Who really wants to labor in a factory? Who wants to work 10 hours a day on a farm? The answer is: nobody really. What people do want is economic security and a trade balance that doesn’t feel like losing. They want protection from foreign economic coercion, and the dignity of creating tangible products the world needs.

    Another example: people don’t want coal plants or nuclear plants back. Who really wants polluted air or nuclear waste in their neighborhood? Again, the answer is: nobody really. What people actually want is affordable energy independence – which means electricity too cheap to meter – and a reliable baseload and grid during crises and on days when the sun isn’t shining.

    The pattern is always the same: the surface demand is to reverse time; it is nostalgia. The underlying need is mostly emotional and social: security, dignity, control, identity.

    The opportunity is satisfying these needs through forward-looking solutions that people haven’t yet recognized as substitutes. It is in building for a world as it is, not as people wish it were. It is in accepting current reality the fastest and having the longest runway to build what’s actually next.

  • This is AGI

    I say that current AI is AGI. It is not obvious yet, because we haven’t yet connected it to our very complex and fragmented software and data environments – and turning R&D into real-world change is a multi-year process anyway.

    Even if we stopped and froze AI development here and now, we’d only realize 2 or 3 years down the road that we indeed have AGI. In some niches it will be faster (software or law), in others slower (complex logistics).

    However, AI development is not stopping here and now. It continues to improve – exponentially, I say. Even if you are more conservative, the linear growth still undoubtedly has a large rate of change.

    Today (!), we have AI models that evolved from barely completing sentences to writing code that ships to production; we have AI doing PhD-level research and achieving gold-medal performance at the International Math Olympiad. AI is solving medical problems that baffle experts.

    Again – what is currently mostly manually prompted work in long chat conversations will soon develop into agents that can do almost all knowledge work fully autonomously.

    I’m not talking about AI as an assistant, as a co-pilot. It will just straight up finish the work while you are napping on the beach.

    The difference between the GPT-3 model and today’s models – whether Grok 4, Gemini 2.5 Pro, or ChatGPT o4 – is like comparing a Nokia 1011 to an iPhone 16 Pro. We went from purely text-based chat to multimodal understanding – models that can see, hear, and reason across domains simultaneously. AI is starting to genuinely understand context and nuance in ways that feel human.

    The next phase is not simply larger AI models, but models that learn continuously: models that remember you and can plan and execute multi-step tasks over days, weeks, or months.

    An AI system that remembers perfectly, understands context, never sleeps, and gets smarter every day – this is being built today in AI labs around the globe.

    We have AGI today, and it is only a matter of time until we arrive at superintelligent AI systems. Is it 2 years? 3 years? 4? 5? Irrelevant. Whether it is 1 year or 10 years, the implications are the same: everything is going to change forever.

  • Lyrics

    Pour quoi faire, te mettre à l’envers

    (What for, turning yourself upside down)

    Toujours danser sur le fil

    (Always dancing on the wire)

    Juste un autre verre

    (Just another drink)

    Pour sentir la terre

    (To feel the earth)

    Sous tes pieds et puis tu glisses

    (Beneath your feet and then you slip)

    Sens mon coeur, là t’auras plus peur

    (Feel my heart, then you won’t be afraid anymore)

    Tu pourras enfin me dire

    (You’ll finally be able to tell me)

    Toutes les choses qui font que t’as qu’une envie

    (All the things that make you only want one thing)

    C’est de danser sur le fil

    (It’s to dance on the wire)

    Comme une étoile tu files

    (Like a star you shoot)

    Tu t’en vas danser sur le fil

    (You go off to dance on the wire)

    Comme une étoile tu files

    (Like a star you shoot)

    Et on s’aime mieux que dans les films

    (And we love each other better than in movies)

    On a chanté jusqu’en pleurer sur les toits, toits

    (We sang until we cried on the rooftops, rooftops)

    Dansé et crié jusque tard, tard

    (Danced and shouted until late, late)

    On a chanté et pleuré sur les toits, toits

    (We sang and cried on the rooftops, rooftops)

    Dansé et crié, tout pour toi

    (Danced and shouted, all for you)

    Toi

    (You)

  • An absolutely beautiful solo guitar piece. It comes from the heart, and I feel it. WOW!