Category: Posts


  • There is this book from Cal Newport called “So Good They Can’t Ignore You.” It basically says: focus on the value you can offer and the rare and valuable skills you can acquire.

    I cautiously predict that in order to be “so good they can’t ignore you” in the future, all you need to do is ignore ChatGPT and every other LLM and simply do intellectually challenging work every day. In 3–6 years you’ll be one of the sought-after, employable, and highly paid individuals simply because you are cognitively sovereign.

    Yet, in order to survive the next 3–6 years without falling competitively behind those who do use ChatGPT et al., you must avoid computer-based knowledge work altogether. In other words: have a career that allows you to provide value (and make money) without using a computer.

    Besides the obvious blue-collar jobs (which are themselves antifragile if specialized), this particularly means sales- and deal-related work: sales, M&A, brokerage, and leader/founder/owner roles.

    By pursuing such AI-resilient jobs, while spending early mornings, late evenings, or weekends writing essays, doing manual research, solving difficult math or physics problems, or simply by reading real books, you’ll be so good they can’t ignore you.

    It will require much less effort than ever before – not because less effort is required for mastery, but merely because you are now part of the control group against a massive population that will experience cognitive atrophy, become dependent on AI, and thereby end up unoriginal, homogeneous, and uncreative.

  • I always thought the world was big – that there are many countries to choose from when you ask yourself, “Where should I live?” The truth is: the world is actually very small. Once you have certain criteria, the number of countries that qualify collapses from hundreds to only a handful of options. For example: if you believe in home-schooling, reject legally mandated childhood vaccination, and like sunshine, only 7 countries match these specific criteria. Of these, some are shitholes, which leaves you with 5 options. If you don’t want to live completely detached from society (in the jungle or on a remote island), then you have 3 options left. If you want to own property, there are 2 left: Portugal and Panama. If you don’t like the EU, Panama is the only option, with its own downsides – unless a medical exemption makes the USA the best option after all.

    Have you ever defined what principles are truly important to you – and then matched where in the world these criteria are fulfilled? You will be surprised.

  • Stagnation

    Most seem to overlook that this economic stagnation is global, not specific to their country, and only getting started. Without the ambitious valuations and debt-funded capex for AI, the US economy would be just as stagnant as the EU – or even more so.

    7 AI-focused companies now make up >30% of the entire US stock market. Banks are becoming increasingly exposed through credit lines to AI companies.

    Globally, growth rates are decelerating. Not only in Germany and the EU more broadly, but in basically all non-US developed economies, we are seeing growth below 1–2%. Basically zero job creation. Globally, honest inflation is higher than honest GDP growth. China faces massively underreported demographic challenges, and the impact of tariffs is brutal. Debt dynamics in the US, in key EU countries, and in emerging economies are vicious. Mexico stalls, Brazil slows. If AI valuations and investments falter for whatever reason (unmet productivity or overinvestment), the US will visibly be in a recession, which would trigger a global one.

    Ergo: AI must work and create real productivity growth in traditional economies. That is honestly a lot of pressure on the big AI labs and researchers.

  • Warren Buffett is ending his Thanksgiving letter with timeless advice:

    • Don’t beat yourself up over past mistakes.
    • Get the right heroes and copy them.
    • Decide what you would like your obituary to say and live the life to deserve it.
    • When you help someone in any of thousands of ways, you help the world.
    • The cleaning lady is as much a human being as the Chairman.

    His letter reads as though he feels his death is near. I learned a lot from everything Warren Buffett has written over his lifetime. I am extremely grateful for that. If there is one thing he inspired me to do, it is to write publicly. While it feels as if nobody is reading what I write today, I know that there will be one single person, 41 years from today, who will be just as grateful that I published my thinking instead of burying it.

  • After spreadsheets became standard in M&A, deals closed significantly slower. M&A deals went from 2-4 months from LOI to close (1970s to 1980s) to 6-12+ months from LOI to close (1980s to 1990s). Deals didn’t improve. 50-70% of acquisitions destroyed shareholder value (same as pre-spreadsheet). Basically, deals got slower, but not any better.

    Why did that happen? Analysis paralysis, an illusion of precision, the replacement of judgment with calculation, and a shift of accountability from CEO and CFO to the analysts building the spreadsheets. Hardly anyone except some dealmakers like Warren Buffett or Peter Lynch realized it at the time (“I’ve never seen a deal that didn’t look good in a spreadsheet”).

    Perhaps you’ve recognized some parallels: With AI we are repeating the pattern, only faster and deeper. If human nature stays the same, it will result in an efficiency paradox. Everything will be analyzed and created even faster. But with more output will come slower completion. It will lead to false confidence, zero responsibility (“The AI models said so”). Also, the authority is shifting from human to AI much faster than it shifted from human to spreadsheet.

    What will happen can perhaps be called a quality collapse. The average quality will increase, but the top-end quality will decrease. Everything will be crowded with AI-generated “pretty good,” but what will be missing is excellence. Then, at the same time, the AI wave is hitting a succession/retirement wave: senior experts with real experiential intuition and judgment are retiring, and juniors completely dependent on AI have to take over.

    While it was previously a recognized truth that 30 years of experience >>>> 5 years of experience, we now live in the illusion that 5 years of experience + AI = 30 years of experience. We won’t realize it until totally novel problems arise that AI can’t handle because they are not in the training data – while humans, at that stage, are already cognitively crippled.

    We think we can just go back and “do it without AI if needed” but it will be too late because neural pathways are atrophying right now. Organizations shift all their processes around AI. Skills are not being taught to the next generation but to AI. We are basically already in a state of dependency which looks like empowerment, and we won’t see it until the tool is removed.

    Try doing 1970s M&A deals just with pen, paper, and calculators. How many globally could do it? Same thing will happen with AI but faster. The result – I fear – is that innovation in many organizations will slow down and they will commoditize.

    AI-driven productivity gains are a dangerous illusion. Not because of AI (an extremely great and powerful tool) but because of how we work with it. Spreadsheets optimized for what was modelable, not for what was innovative and couldn’t be seen in numbers. AI will do the same thing – not just in finance, but in all domains.

    What makes AI perhaps more “dangerous” is that it has no barrier to entry. It will enable some selected (rare) individuals to really master what they do (driving real innovation), but the majority (if they are not very careful and intentional) will destroy their own personal economic value.

    With spreadsheets, you had to learn formulas, understand logic, debug errors – which was a protection against overuse. AI has none of that: nothing to learn (if you are really honest, dear AI coaches), no debugging, no logic, no barrier = the instant universal adoption we are observing.

    So, back to the original observation: with spreadsheets, everyone got more productive, but deals took longer and outcomes didn’t improve. Now with AI, everyone is getting more productive, but: are projects finishing faster? Is quality improving? Is innovation in the median corporation accelerating?

    I think: with spreadsheets, people began optimizing for “the model says yes” instead of “the deal is actually good.” Are we now optimizing AI use for “the AI approves” instead of “this is actually valuable”?

    We know that when a measure becomes a target, it ceases to be a good measure (Goodhart’s law).

    This is by no means an anti-AI or anti-spreadsheet stance. But I hope to provoke some careful thought about the relationship we have with AI and how to avoid the analysis paralysis of the spreadsheet era.

  • Deep Work 2.0

    Deep work is a term Cal Newport uses to describe the activities performed in a state of absolute distraction-free concentration to push our cognitive capabilities to their limit. I never read the book because the idea is just so simple: schedule a time when you perform real work, no social media, no notifications, just you and the work in front of you.

    When I first heard about Deep Work, it was not a new concept to me. I had already practiced deep focus sessions regularly – usually early in the morning. But it definitely made me more serious. No matter how disciplined I attempted to be, the infinite dopamine from social media and the constant notifications from my phone ever more often crushed my flow state. Years ago, I tried and then purchased the blocking software Cold Turkey (for Mac and PC) and an Android app called Digital Detox. Both apps are absolutely great (yes: one-time purchases!). They allow you to block anything you want (for example, social media and YouTube) and at the same time make it extremely difficult (if not impossible) to circumvent the block.

    Recently, I felt quite unhappy about the lack of progress towards the goals I had set for myself. One part of the equation certainly was the birth of our daughter. Yet I still managed to schedule at least one Deep Work session each day. What was the missing link? I realized that it is not only social media, YouTube, or news websites anymore – LLMs are now just as distracting as social media.

    Today I created a new blacklist filter in Cold Turkey where I now block all LLM apps and URLs. I may be one of the first people in the world to do so, but I realized that – for my ADHD-type brain – having AI accessible non-stop is just as much of a distraction as social media feeds: a source of noise and cheap dopamine.
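    For anyone without a dedicated blocker, a crude OS-level equivalent of such a blacklist is a hosts-file blocklist. The sketch below only generates the entries; the domain list is illustrative and incomplete, and Cold Turkey adds tamper-resistance that this approach lacks:

```python
# Write a hosts-style blocklist pointing common LLM domains at localhost.
# Domain list is illustrative, not exhaustive. Appending the file's contents
# to /etc/hosts (as root) is what actually enforces the block.
domains = ["chatgpt.com", "chat.openai.com", "claude.ai", "gemini.google.com"]

lines = [f"127.0.0.1 {d}" for d in domains]
with open("llm-blocklist.txt", "w") as f:
    f.write("\n".join(lines) + "\n")

print(len(lines))  # 4 entries written
```

    To activate it on macOS or Linux: `sudo sh -c 'cat llm-blocklist.txt >> /etc/hosts'`, then flush the DNS cache. Unlike Cold Turkey, this is trivially reversible – which is exactly why a dedicated blocker is the better tool.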

    I realized that using LLMs blindly leads to procrastination, analysis paralysis, decision fatigue, unoriginal thought, loss of free will, decline of deep-thinking capacity, atrophy of overall cognitive function, and declining writing skills.

    To be totally honest: Instead of working, I prompted. Instead of writing, I prompted. Instead of thinking, I prompted.

    My personal insight is that I must be just as intentional and selective about using AI as I must be with social media. Instead of using it all the time, I now limit it to very specific tasks where it adds exponential value to the work.

    Let’s be clear: I’m not avoiding AI. I’m also not badmouthing it. I believe AI is one of the greatest technologies humans have invented. What I can tell from my personal experience and observations: AI can be a powerful lever or a heavy burden. Therefore, I believe, it is time for Deep Work 2.0: Deep focus sessions where you intentionally do not use AI at all – at least not actively (i.e., only use pre-prompted conversations or Deep Research reports that you saved as a PDF or Markdown file for your Deep Work 2.0 session).

    What if not only distractions, social algorithms, but also (pretended) AI efficiency is a deadly enemy of our flow state?

  • More than 50% of recently published website texts are now written by AI. This means that from today forward, the majority of all published text is already synthetic. The same will hold true for every other form of content: images, video, and audio. In and of itself, AI-written text shouldn’t be such a large issue. The problem is not text written by AI, but that we have simultaneously crossed a point where you can no longer reliably distinguish AI-generated content from human content. I have a strong opinion that AI should sound like AI; I also think that AI chatbots should be apparent as such, and that AI-generated images and videos should have deeply embedded watermarks. This is also why I believe parts of the EU AI Act and the California AI Transparency Act are net-positive for humanity. But why do I believe so?

    The most pressing issue with AI-generated content is not so much the capability or alignment of AI models as the collapse of the epistemic commons before we even arrive at generally intelligent or superintelligent AI models. Here is what I mean:

    Most text is now AI-generated, and within months the same will be true for video, images, and audio. When creation costs and efforts collapse to zero, two things vanish simultaneously: trust and meaning.

    We can no longer casually trust what we see. Every text, every video, every expert opinion becomes suspect. As social primates who evolved to trust patterns and authorities, we are losing the ability to distinguish signal from noise at the exact moment we need it most.

    Perhaps the deeper crisis isn’t skepticism but meaning collapse. Scarcity and effort have always been core to how humans assign value and significance. When infinite content can be generated instantly and automated for any purpose, these anchors disappear.

    Most look at this as primarily economic disruption, but perhaps it is much more psychological and civilizational because we are eroding the foundations of shared reality before we have built alternatives.

    Then there is this slippery slope: from now on, humans will increasingly interact with and read texts written by AI systems trained on AI-generated texts. Soon the same will be true for photos, videos, and audio. This training loop has (at least) the potential to create a cultural drift in directions yet unpredictable. One thing we can be quite certain about is that our human values are already being reshaped by AI systems in ways we cannot track. This in turn makes the question of “alignment” both more important and, at the same time, secondary.

    The most pressing risk to human civilization is therefore not a hypothetical “misaligned” superintelligence, but rather the risk of arriving there divided – socially and epistemically.

    What must be done is certainly harder than aligning AI systems:

    • Rebuilding trusted information infrastructure
    • Creating new forms of verifiable authenticity
    • Developing cultural “antibodies” to synthetic manipulation
    • Building meaning-making structures that aren’t dependent on scarcity or effort
    • Preserving and strengthening human coordination capacity
    • Etc.

    This is harder than “alignment” because the more we look at these to-dos from a federal or global perspective, the more impossible they become.

    Now, to move from the theoretical to the practical: Who are the 5 to 150 people you can still genuinely trust and coordinate with? Because everything else either emerges from functional groups, or it won’t emerge at all.

  • When looking at AI, people fixate on surface-level effects: economic disruption (jobs disappearing), alignment risks (AI going rogue), or ethical dilemmas (LLM bias). While those are all real, they also seem to be distractions from the real shift. The conversation is no longer about whether we achieve AGI but about when – some say 10 years; I say it’s basically already here (it all depends on the definition of the term, really). By definition, AGI will match and then surpass human intelligence in every single domain: strategic, creative, you name it. Once that threshold is crossed (and it’s closer than many admit), a feedback loop kicks in: AI designs better AI, which designs even better AI, ad infinitum.

    Because we are not yet there, we debate AI as a tool. But as soon as we cross that threshold, AI will predict, simulate, and optimize anything logic-based with such precision that human input becomes unnecessary or perhaps counterproductive. Humans – and that means governments, corporations, and individuals – will outsource everything, from policy to life choices, because AI will present the best logical and data-backed option. And because it is so much better at logic, you stop questioning it. The “alignment” problem is therefore ultimately less about making AI safe for humans than about preparing humans to accept their irrelevance in logical intelligence and – in my opinion – transitioning (or better: re-connecting) them to their intuitive intelligence. If we fail at this, the majority of humans will experience free will only as an illusion.

    We humans derive meaning from struggle, achievement, and social bonds. Within the next 10 to 20 years, we won’t need to struggle to achieve anymore. Achievement will be handed out (or withheld) by systems we cannot understand. What is left are social bonds. But is that really the case? We already see AI-mediated interactions replacing genuine connections (whether emails, eulogies, or even virtual AI companions). If we do not pay attention and re-connect with other humans (our tribes), we risk real psychological devastation at scale.

    If AI is centralized, it will be operated by an elite (that’s at least the current trend). Not only will this elite gain god-like power, but it will form another elite class: humans who are augmented by superintelligence through direct neural interfaces or exclusive AI enhancements. What about the rest? An underclass kept alive by a universal-basic-whatever, but without purpose or power?

    The problem really is: when we cross that threshold, it won’t be fixable. We better collectively act now, or the world will be run by a handful of super-enhanced humans and their AI overlords.

    In 2025 these thoughts will read like speculation. But based on my observations of how the majority of humans have started using and adopting AI, the trajectory seems obvious (to me). AI is optimizing for efficiency. Companies are adopting it as well. Individuals must too – or they are no longer competitive. What is the antidote? I am divided. I don’t believe AI must lead to such a dystopia. I am much more convinced that it is our best shot at achieving utopia. But there is a very thin line between them: us humans, and how we collectively act. And acting is much (!) less about technological adaptation (from becoming AI “experts” to Neuralink cyborgization) and infinitely more about re-connecting to what makes us uniquely human: our consciousness, our connection to God, our one Creator, and our unity. Meaning will come from non-competitive pursuits, AI alignment from balancing logic with consciousness, and happiness from real, deep social human connections. Intelligent machines – no matter how superintelligent they turn out to be – can never be conscious. Perhaps it is a wake-up call: we lost our spiritual connection to consciousness, and we must re-connect.

  • One of the biggest problems of humanity is information overload.

    Think about how we used to get information just 50 years ago. We had to deliberately search for it.

    We had to engage in conversations, go to the library or bookshop to search for relevant books, or buy a newspaper.

    The internet gave birth to niche forums, and we got search engines which allowed us to find blogs and articles.

    Until social media arrived, it was a careful quest for information.

    Social media turned it around. Instead of searching for information, we now get bombarded with news, ideas, opinions – from anyone around the world, nonstop 24/7.

    With ChatGPT, we got a tool that goes beyond bombarding us with human-created information: we can now basically create our own information, endlessly.

    The worst: People now flood the internet and social platforms with content they didn’t even write themselves.

    What does this mean?

    It is now easier and cheaper than ever to access information. Which is great!

    But it is harder than ever to focus on what really matters to us.

    The now endless stream of information siphons our energy, distracting us from the intentional paths we truly wish to pursue.

    I think the best way to consume information consciously is to first have a clear picture of what we want to understand and know, and then to dedicate time for deep-reading and deep-writing.

    That means, not only searching for quick information on what truly interests you – but choosing one subject to study, research, and then write your own essay on it.

    Whether you publish that essay or not is irrelevant.

    Merely writing it keeps your thinking ability alive.

    The important thing is to do it consciously.

    Use AI only as a research partner – not a ghostwriter.

    Pick what you want to master. Then dedicate time to actually master it – not only consume endless information on whatever the world decides is important now.

  • OpenAI released its new o3 models and numerous people argue that this is in fact Artificial General Intelligence (AGI) – in other words, an AI system that is on par with human intelligence. Even if o3 is not yet AGI, the emphasis now lies on “yet,” and – considering the exponential progression – we can expect AGI to arrive within months, or at most one to two years.

    According to OpenAI, it only took 3 months to go from the o1 model to the o3 model. This is a 4x+ acceleration relative to previous progress. If this speed of AI advancement is maintained, it means that by the end of 2025 we will be as much ahead of o3 as o3 is ahead of GPT-3 (released in May 2020). And, after achieving AGI, the self-reinforcing feedback loop will only further accelerate exponential improvements of these AI systems.
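    To make that arithmetic explicit, here is a toy calculation of the claimed speed-up. It assumes a hypothetical 12-month cadence between earlier model generations versus the roughly 3 months cited for o1 → o3 – illustrative assumptions, not a forecast:

```python
# Hypothetical model-release cadences (months between generations).
old_cadence = 12   # assumed pre-o1 pace: roughly one generation per year
new_cadence = 3    # o1 -> o3 gap, per OpenAI's stated timeline

acceleration = old_cadence / new_cadence   # 4.0 -> the "4x" claim
generations_per_year = 12 / new_cadence    # 4.0 generations at the new pace

print(f"speed-up: {acceleration:.0f}x, "
      f"generations per year: {generations_per_year:.0f}")
```

    If the 3-month cadence held, four o3-sized leaps would fit into a single year – which is what the “as far ahead of o3 as o3 is ahead of GPT-3” claim amounts to.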

    But, most counterintuitively, even after we have achieved AGI, it will for quite some time look as if nothing has happened. You won’t feel any change, and your job and business will feel safe and untouchable. Big fallacy. We can expect that after AGI it will take many months, if not 1–2 years, for the real transformations to happen. Why? Because AGI in and of itself does not release value into the economy; it must be applied. But as AGI becomes cheaper, agentic, and embedded into the world, we will see a transformation explosion – replacing those businesses and jobs that are unprepared.

    I have thought a lot about the impact that the announced – and soon-to-be-released – o3 model, and the first AGI model, are going to have.

    To make it short: I am extremely confident that any skill or process that can be digitized will be. As a result, the majority of white-collar and skilled jobs are on track for massive disruption or elimination.

    Furthermore, I think many experts and think tanks are fooling themselves by believing that humans will maintain “some edge” and work peacefully side-by-side with an AI system. I don’t think AGI will augment knowledge workers – i.e. anyone working with language, code, numbers, or any kind of specialized software – it will replace them!

    So, if your job or business relies purely on standardized cognitive tasks, you are racing toward the cliff’s edge, and it is time to pivot now!

    Let’s start with the worst. Businesses and jobs in which you should pivot immediately – or at least not enter as of today – include but are not limited to anything that involves sitting at a computer:

    • anything with data entry or data processing (run as fast as you can!)
    • anything that involves writing (copywriting, technical writing, editing, proofreading, translation)
    • most coding and web development
    • SaaS (won’t exist in a couple of years)
    • banking (disrupted squared: AGI + Blockchain)
    • accounting and auditing (won’t exist as a job in 5-10 years)
    • insurance (will be disrupted)
    • law (excluding high-stake litigation, negotiation, courtroom advocacy)
    • any generic design, music, and video creation (graphic design, stock photography, stock videos)
    • market and investment research and analysis (AI will take over 100%)
    • trading, both quantitative and qualitative (don’t exit yet – profit now, but expect to be disrupted within 5 years)
    • any middle-layer-management (project and product management)
    • medical diagnostics (will be 100% AI within 5 years)
    • most standardized professional / consulting services

    However, I believe that in high-stakes domains (health, finance, governance), regulators and the public will demand a “human sign-off”. So if you are in accounting, auditing, law, or finance I’d recommend pivoting to a business model where the ability to anchor trust becomes a revenue source.

    The question is, where should you pivot to or what business to start in 2025?

    My First Principles of a Post-AGI Business Model

    First, even as AI becomes near-infallible, human beings will still crave real, raw, direct trust relationships. People form bonds around shared experiences, especially offline ones. I believe a truly future-proof venture leverages these primal instincts, which machines can never replicate at a deeply visceral level. Nevertheless, I believe it is a big mistake to assume that humans will “naturally” stick together just because we are the same species. AGI might quickly appear more reliable and less selfish than most human beings, and may even display emotional intelligence. So a business built upon the thesis of the “human advantage” must expertly harness and establish emotional ties, tribal belonging, and shared experiences – all intangible values that are far more delicate and complex than logic.

    First Principle: Operate in the Physical World

    • If your product or service can be fully digitalized and delivered via the cloud, AGI can replicate it with near-zero marginal cost
    • Infuse strategic real-world constraints (logistics, location-specific interactions, physical limitations, direct relationships) that create friction and scarcity – where AI alone will struggle

    Second Principle: Create Hyper Niche Human Experiences

    • The broader the audience, the easier it is for AI to dominate. Instead, cultivate specialized groups and subcultures with strong in-person and highly personalized experiences.
    • Offer creative or spiritual elements that defy pure rational patterns and thus remain less formulaic

    Third Principle: Emphasize Adaptive, Micro-Scale Partnerships

    • Align with small, local, or specialized stakeholders. Use alliances with artisan suppliers, local talents, subject-matter experts, and so on.
    • Avoid single points of failure; build a decentralized network that is hard for a single AI to replicate or disrupt

    Fourth Principle: Embed Extreme Flexibility

    • Structured, hierarchical organizations are easily out-iterated by AI that can reorganize and optimize instantly
    • Cultivate fluid teams with quickly reconfigurable structures; use agile, project-based collaboration that can pivot as soon as AGI-based competition arises

    Opportunity Vectors

    With all of that in mind, there are niches that previously looked unattractive because they were less scalable, but that today offer massive opportunities – let’s call them opportunity vectors.

    The first opportunity vector I have already touched upon:

    • Trust and Validation Services: Humans verifying or certifying that a certain AI outcome is ethically or legally sound – while irrational, it is exactly what humans will insist on, particularly where liability is high (medicine, finance, law, infrastructure)
    • Frontier Sectors with Regulatory and Ethical Friction: Think of markets where AI will accelerate R&D but human oversight, relationship management, and accountability remain essential: genetic engineering, biotech, advanced materials, quantum computing, etc.

    The second opportunity vector focuses on the human edge:

    • Experience & Community: Live festivals, immersive events, niche retreats, or spiritual explorations – basically any scenario in which emotional energy and human experience are the core product
    • Rare Craftsmanship & Creative Quirks: Think of hyper-personalized items, physical artwork, artisanal or hands-on creations. Items that carry an inherent uniqueness or intangible meaning that an AI might replicate in design, but can’t replicate in “heritage” or provenance.

    Risk Tactics

    Overall, the best insurance is fostering a dynamic brand and a loyal community that invests personally and emotionally in you. People will buy from those whose values they trust. If you stand for something real, you create an emotional bond that AI can’t break. I’m not talking about superficial corporate social responsibility (nobody cares) but about authenticity that resonates on a near-spiritual level.

    As you build your business, erect an ethical moat by providing “failsafe” services where your personal human liability and your brand act as a shield for AI decisions. This creates trust and differentiation amid anonymous pure-AGI-play businesses.

    Seek and create small, specialized, local, or digital micro-monopolies – areas too tiny or fractal for the “big AI players” to devote immediate resources to. Over time, multiply these micro-monopolies by rolling them up under one trusted brand.

    Furthermore, don’t avoid AI. You cannot out-AI the AI. So as you build a business on the human-edge moat, you should still harness AI to do 90% of the repetitive and analytic tasks – this frees your human capital to build human relationships, solve ambiguous problems, or invent new offerings.

    Bet on What Makes Us Human

    To summarize: AI is logical, combinatorial intelligence. The advancements in AI will commoditize logic and disrupt any job or business that is mainly built upon logic as capital. The human edge, on the other hand, is authenticity. What makes humans human and your brand authentic are elements of chaos, empathy, and spontaneity. In this context, being human means fostering embodied, emotional, culturally contextual, physically immersive experiences. Anything that requires raw creativity, emotional intelligence, local presence, or unique personal relationships will be more AI-resilient.

    Therefore, a Post-AGI business must involve:

    1. Tangibility: Physical goods, spaces, unique craftsmanship
    2. Human Connection: Emotional, face-to-face, improvisational experiences
    3. Comprehensive Problem Solving: Complex negotiations, messy real-world situations, diverse stakeholder management

    The inverse: a list of AGI-proof industries, each involving some or multiple of these aspects:

    • Physical, In-Person, Human-Intensive Services
      • Healthcare: Nursing, Physical therapy, Hands-on caregiving
      • Skilled trades & craftsmanship
    • High-Level Strategy & Complex Leadership
      • Diplomacy, Negotiation, Trust building
      • Visionary entrepreneurship
    • Deep Emotional / Experiential Offerings
      • Group experiences, retreats, spiritual or therapeutic gatherings
      • Artistic expression that thrives on “imperfection”, physical presence, or spontaneous creativity
    • Infrastructure for AGI
      • Human-based auditing/verification
      • Physical data center operations & advanced hardware
      • Application and embedding of AI in the form of AGI agents, algorithmic improvements, etc., to make it suitable for everyday tasks and workflows

    The real differentiator is whether a business is anchored in the physical world’s complexity, emotional trust, or intangible brand relationships. Everything purely data-driven or standardized is on the chopping block – imminently.