• Trump Tariffs

    I don’t believe that Trump uses the current tariffs as negotiation leverage – at least not across the board.

    The U.S. will not go back to 0% or < 10% tariffs, because the Republicans (JD Vance) would then lose re-election with absolute certainty.

    Trump promised to revive the Rust Belt – home to the swing states. He can only “save” it through tariffs against China and against all current and future China alternatives.

    Some regions, and I’d include the EU, UK, Argentina, Australia, New Zealand here, now have the unique chance to negotiate towards free trade or at least a friendly ≤ 10% tariff.

    Yet, speaking of the EU, I think they are mostly too arrogant or stupid to do so. 10% seems to be the new baseline anyhow.

    The relevant question is how the EU will respond.

    Currently, it looks as if the EU is striving for sovereignty. This could mean new entry barriers for US technology products – perhaps even an outright ban of key technologies like Palantir in order to foster local equivalents.

    This, in turn, is the single biggest threat to the USA. A world that shifts away from US technology will erode the vital pillar that currently keeps the US dollar alive.

  • BCG says that 83% of firms prioritize innovation, but only 3% feel able to execute.

    I ask: which is the bigger flaw – overestimating corporate AI or underinvesting in people?

    Probably both.

    I see a trend where AI is seen as the cure-all.

    Instead of investing in visionary and creative leaders plus the engineers to execute, companies invest in AI gimmicks whose ROI is (for now) mostly in efficiency – not real innovation.

    Everyone – at this point – basically pretends it will lead to proprietary innovation.

    At the same time, the innovations companies end up chasing are the ones AI tells them to chase – and because AI is commoditizing logic, it simultaneously commoditizes innovation.

    Congratulations: you end up in a red ocean instead of a blue ocean.

    Therefore, innovation collapses into mere competition and is no longer innovation at all.

    The biggest mistake companies can make today is going all in on AI while not investing in the human genius of their workforce: visionary leadership, collective intuitive intelligence, human-AI symbiosis.

    Erdoğan’s calculated elimination of İmamoğlu through academic technicalities and alleged ties to the PKK is not an isolated Turkish case but an example of democracy’s global collapse.

    Yesterday, Germany rushed constitutional changes without proper scrutiny, passed by a majority that had already been voted out of office; Romania disqualifies candidates on procedural grounds. The list goes on: Hungary under Viktor Orbán, Serbia under Aleksandar Vučić, Israel under Netanyahu, Poland under the Law and Justice party – democratic backsliding transcends regions and political systems. 72% of humanity now lives under authoritarian control.

    There is a new playbook: weaponize legal institutions against opponents, manufacture legitimacy through procedural theater, and dismantle democratic safeguards while maintaining the illusion of constitutionality.

    We’re witnessing not democracy’s dramatic assassination but its methodical strangulation through bureaucratic manipulation. This erosion isn’t coincidental but the inevitable outcome of centralized power structures that invariably corrupt even well-designed systems.

    I believe that our only viable path forward lies in radical decentralization: distributing governance to local communities, financial sovereignty through crypto networks, and communication via censorship-resistant platforms that no single entity controls.

    Decentralized systems restore human dignity by establishing unbreakable cryptographic guarantees rather than depending on the hollow promises of centralized authorities, career politicians, and unelected bureaucrats.

    The future belongs to networked individuals collaborating voluntarily through systems designed with liberty as their foundation. Decentralized and globally networked societies are antifragile societies that unleash innovation by enabling thousands of concurrent experiments instead of single-point failures.

    Only decentralization can safeguard freedom in an increasingly authoritarian world.

  • The Qing Dynasty, Ottoman Empire, and Rome were all civilizations stuck in a grim loop.

    Decline hit first: overstretch, corruption, and enemies piling up quietly.

    Then comes nostalgia: past glories – Confucian order, Suleiman’s peak, Roman strength – are hyped up as a hoped-for fix for the present.

    Next, technology hype steps in: Western tools for the Qing, reforms for the Ottomans, Christianity for Rome – all fueled short-lived dreams of a turnaround.

    But the cracks stayed, and it all fell apart.

    Today, the U.S. bets on AI and reshoring while chasing “greatness.”
    The EU pushes green tech, a military buildup and is dreaming of a greater federation.
    Russia, too – stuck in deep nostalgia for long-gone Soviet might – is betting on military tech, but seems to be already in decline’s later stages, struggling against isolation and internal decay.

    Can they escape the inevitable? Well, let’s try to predict the future.

    The Qing, Ottoman, and Roman empires crashed the same way.
    Russia’s dying population and oil addiction could break it apart, with Siberia becoming independent, falling under the wing of a more stable neighbor: China.
    The EU’s bickering and nationalist mess might split it into weak blocs, forgetting unity, like the Ottomans did.
    The US, stuck in political fights and inequality, could lose control, and states might go rogue, as in Rome’s endgame.

    They are all chasing old nostalgia and shiny tech, ignoring the rot.

    DOGE is the only effort to remove the rot.

    If DOGE fails or brings polarization to a breaking point, the U.S. federal government will fail, leaving behind the states.

    Russia would be left as a small state surrounding Moscow, with Siberia as a larger, resource-rich state.

    What about the EU?

    We could see a “Hanseatic 2.0” (Germany, Netherlands, Belgium, Baltics, Scandinavia – potentially including a post-Russian St. Petersburg) prioritizing economic power. A “Latin Axis” (Portugal, Spain, Italy) might strengthen ties with Latin America, forging a Neo-Romanesque sphere. Central Europe could come together around a Visegrád Plus, with a focus on national sovereignty. Meanwhile, the Balkans would remain a volatile periphery, vulnerable to external influence – particularly a strong Türkiye.

  • Without a lot of preparation I tried my very first remote viewing with the German election as a target.

  • One of the biggest problems of humanity is information overload.

    Think about how we used to get information just 50 years ago. We had to deliberately search for it.

    We had to engage in conversations, go to the library or bookshop to search for relevant books, or buy a newspaper.

    The internet gave birth to niche forums, and we got search engines which allowed us to find blogs and articles.

    Until social media arrived, it was a careful quest for information.

    Social media turned it around. Instead of searching for information, we now get bombarded with news, ideas, opinions – from anyone around the world, nonstop 24/7.

    With ChatGPT, we got a tool that doesn’t just bombard us with human-created information – it lets us create our own information, endlessly.

    Worst of all: people now flood the internet and social platforms with content they didn’t even write themselves.

    What does this mean?

    It is now easier and cheaper than ever to access information. Which is great!

    But it is harder than ever to focus on what really matters to us.

    The now endless stream of information siphons our energy, distracting us from the intentional paths we truly wish to pursue.

    I think the best way to consume information consciously is to first have a clear picture of what we want to understand and know, and then to dedicate time for deep-reading and deep-writing.

    That means not only searching for quick information on what truly interests you, but choosing one subject to study and research – and then writing your own essay on it.

    Whether you publish that essay or not is irrelevant.

    Merely writing it keeps your ability to think alive.

    The important thing is to do it consciously.

    Use AI only as a research partner – not a ghostwriter.

    Pick what you want to master. Then dedicate time to actually master it – not only consume endless information on whatever the world decides is important now.

  • I think we have an incomplete and false understanding of reality. Because of that, we are moving inside a tiny fraction of endless possibilities.

    An AGI system based on previous and current knowledge can only exploit what is possible within that tiny fraction. The missing link seems to be our mystical, uniquely human ability of intuition, which allows highly conscious humans to access knowledge outside our current fraction of possibilities – creating something (ideas, inventions, theories) entirely new, that has never been done before.

    Based on how AI systems are built, I expect them to create meaningful advancements within our current frame of understanding; but compared to what is actually possible, these will stay minuscule.

    To access what seems impossible, we shouldn’t look for logic and intellect. We should aim to understand consciousness; i.e., study Yogis, understand DMT, make sense of Psilocybin.

    Only by heightening our consciousness – and intuition seems to be the highest form of it – will we be able to emerge from current limitations. Because in the end, there are no limitations.

    OpenAI released its new o3 models, and numerous people argue that this is in fact Artificial General Intelligence (AGI) – in other words, an AI system that is on par with human intelligence. Even if o3 is not yet AGI, the emphasis now lies on “yet,” and – considering the exponential progression – we can expect AGI to arrive within months, or at most one to two years.

    According to OpenAI, it only took 3 months to go from the o1 model to the o3 model. This is a 4x+ acceleration relative to previous progress. If this speed of AI advancement is maintained, it means that by the end of 2025 we will be as much ahead of o3 as o3 is ahead of GPT-3 (released in May 2020). And, after achieving AGI, the self-reinforcing feedback loop will only further accelerate exponential improvements of these AI systems.
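    As a back-of-the-envelope check of this compression claim, a tiny calculation helps. Note that the 12-month “old pace” per model generation and the ~55-month GPT-3-to-o3 gap are my own rough assumptions for illustration, not OpenAI figures:

    ```python
    # Toy model of the acceleration argument. Assumed inputs (not official numbers):
    OLD_PACE_MONTHS = 12   # assumed time per model generation before the o1 -> o3 jump
    NEW_PACE_MONTHS = 3    # reported o1 -> o3 interval

    acceleration = OLD_PACE_MONTHS / NEW_PACE_MONTHS  # the "4x" in the text

    # GPT-3 (May 2020) to o3 (Dec 2024) is roughly 55 months of progress.
    gpt3_to_o3_months = 55

    # At the accelerated pace, replaying that much progress would take:
    months_needed = gpt3_to_o3_months / acceleration

    print(acceleration, round(months_needed, 1))  # prints: 4.0 13.8
    # ~14 months from Dec 2024 - broadly consistent with the claim that by
    # late 2025 / early 2026 we'd be as far past o3 as o3 is past GPT-3.
    ```

    Under these assumptions the arithmetic roughly supports the claim; with a smaller assumed old pace the timeline stretches accordingly.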

    But, most counterintuitively, even after we have achieved AGI, it will for quite some time look as if nothing has happened. You won’t feel any change, and your job and business will feel safe and untouchable. Big fallacy. We can expect that after AGI it will take many months, if not one to two years, for the real transformations to happen. Why? Because AGI in and of itself does not release value into the economy; what matters far more is applying it. But as AGI becomes cheaper, agentic, and embedded into the world, we will see a transformation explosion – replacing those businesses and jobs that are unprepared.

    I have thought a lot about the impact that the announced – and soon-to-be-released – o3 model, and the first AGI model, are going to have.

    To make it short: I am extremely confident that any skill or process that can be digitized will be. As a result, the majority of white-collar and skilled jobs are on track for massive disruption or elimination.

    Furthermore, I think many experts and think tanks are fooling themselves by believing that humans will maintain “some edge” and work peacefully side-by-side with an AI system. I don’t think AGI will augment knowledge workers – i.e. anyone working with language, code, numbers, or any kind of specialized software – it will replace them!

    So, if your job or business relies purely on standardized cognitive tasks, you are racing toward the cliff’s edge, and it is time to pivot now!

    Let’s start with the worst. Businesses and jobs in which you should pivot immediately – or at least not enter as of today – include but are not limited to anything that involves sitting at a computer:

    • anything with data entry or data processing (run as fast as you can!)
    • anything that involves writing (copywriting, technical writing, editing, proofreading, translation)
    • most coding and web development
    • SaaS (won’t exist in a couple of years)
    • banking (disrupted squared: AGI + Blockchain)
    • accounting and auditing (won’t exist as a job in 5-10 years)
    • insurance (will be disrupted)
    • law (excluding high-stake litigation, negotiation, courtroom advocacy)
    • any generic design, music, and video creation (graphic design, stock photography, stock videos)
    • market and investment research and analysis (AI will take over 100%)
    • trading, both quantitative and qualitative (don’t exit – profit now – but expect to be disrupted within 5 years)
    • any middle-layer-management (project and product management)
    • medical diagnostics (will be 100% AI within 5 years)
    • most standardized professional / consulting services

    However, I believe that in high-stakes domains (health, finance, governance), regulators and the public will demand a “human sign-off”. So if you are in accounting, auditing, law, or finance I’d recommend pivoting to a business model where the ability to anchor trust becomes a revenue source.

    The question is, where should you pivot to or what business to start in 2025?

    My First Principles of a Post-AGI Business Model

    First, even as AI becomes infallible, human beings will still crave real, raw, direct trust relationships. People form bonds around shared experiences, especially offline ones. I believe a truly future-proof venture leverages these primal instincts that machines can never replicate at a deeply visceral level. Nevertheless, I believe it is a big mistake to assume that humans will “naturally” stick together just because we are the same species. AGI might quickly appear more reliable and less selfish than most human beings, and display emotional intelligence on top. So a business built upon the thesis of the “human advantage” must expertly harness and establish emotional ties, tribal belonging, and shared experiences – all intangible values that are far more delicate and complex than logic.

    First Principle: Operate in the Physical World

    • If your product or service can be fully digitalized and delivered via the cloud, AGI can replicate it with near-zero marginal cost
    • Infuse strategic real-world constraints (logistics, location-specific interactions, physical limitations, direct relationships) that create friction and scarcity – where AI alone will struggle

    Second Principle: Create Hyper Niche Human Experiences

    • The broader the audience, the easier it is for AI to dominate. Instead, cultivate specialized groups and subcultures with strong in-person and highly personalized experiences.
    • Offer creative or spiritual elements that defy pure rational patterns and thus remain less formulaic

    Third Principle: Emphasize Adaptive, Micro-Scale Partnerships

    • Align with small, local, or specialized stakeholders. Use alliances with artisan suppliers, local talents, subject-matter experts, and so on.
    • Avoid single points of failure; build a decentralized network that is hard for a single AI to replicate or disrupt

    Fourth Principle: Embed Extreme Flexibility

    • Structured, hierarchical organizations are easily out-iterated by AI that can reorganize and optimize instantly
    • Cultivate fluid teams with quickly reconfigurable structures; use agile, project-based collaboration that can pivot as soon as AGI-based competition arises

    Opportunity Vectors

    With all of that in mind, there are niches that once looked unattractive because they were less scalable, but that today offer massive opportunities – let’s call them opportunity vectors.

    The first opportunity vector I have already touched upon:

    • Trust and Validation Services: Humans verifying or certifying that a certain AI outcome is ethically or legally sound – while irrational, it is exactly what humans will insist on, particularly where liability is high (medicine, finance, law, infrastructure)
    • Frontier Sectors with Regulatory and Ethical Friction: Think of markets where AI will accelerate R&D but human oversight, relationship management, and accountability remain essential: genetic engineering, biotech, advanced materials, quantum computing, etc.

    The second opportunity vector focuses on the human edge:

    • Experience & Community: Live festivals, immersive events, niche retreats, or spiritual explorations – basically any scenario in which emotional energy and a human experience is the core product
    • Rare Craftsmanship & Creative Quirks: Think of hyper-personalized items, physical artwork, artisanal or hands-on creations. Items that carry an inherent uniqueness or intangible meaning that an AI might replicate in design, but can’t replicate in “heritage” or provenance.

    Risk Tactics

    Overall, the best insurance is fostering a dynamic brand and a loyal community that invests personally and emotionally in you. People will buy from those whose values they trust. If you stand for something real, you create an emotional bond that AI can’t break. I’m not talking about superficial corporate social responsibility (nobody cares) but about authenticity that resonates on a near-spiritual level.

    As you build your business, erect an ethical moat by providing “failsafe” services where your human personal liability and your brand acts as a shield for AI decisions. This creates trust and differentiation among anonymous pure-AGI play businesses.

    Seek and create small, specialized, local, or digital micro-monopolies – areas too tiny or fractal for the “big AI players” to devote immediate resources to. Over time, multiply these micro-monopolies by rolling them up under one trusted brand.

    Furthermore, don’t avoid AI. You cannot out-AI the AI. So as you build a business on the human-edge moat, you should still harness AI to do 90% of the repetitive and analytic tasks – this frees your human capital to build human relationships, solve ambiguous problems, or invent new offerings.

    Bet on What Makes Us Human

    To summarize, AI is logical, combinatorial intelligence. The advancements in AI will commoditize logic and disrupt any job or business that is mainly built upon logic as capital. The human edge, on the other hand, is authenticity. What makes humans human – and your brand authentic – are elements of chaos, empathy, and spontaneity. In this context, being human means fostering embodied, emotional, culturally contextual, physically immersive experiences. Anything that requires raw creativity, emotional intelligence, local presence, or unique personal relationships will be more AI-resilient.

    Therefore, a Post-AGI business must involve:

    1. Tangibility: Physical goods, spaces, unique craftsmanship
    2. Human Connection: Emotional, face-to-face, improvisational experiences
    3. Comprehensive Problem Solving: Complex negotiations, messy real-world situations, diverse stakeholder management

    The inverse list – of AGI-proof industries – involves some or multiple of these aspects:

    • Physical, In-Person, Human-Intensive Services
      • Healthcare: Nursing, Physical therapy, Hands-on caregiving
      • Skilled trades & craftsmanship
    • High-Level Strategy & Complex Leadership
      • Diplomacy, Negotiation, Trust building
      • Visionary entrepreneurship
    • Deep Emotional / Experiential Offerings
      • Group experiences, retreats, spiritual or therapeutic gatherings
      • Artistic expression that thrives on “imperfection”, physical presence, or spontaneous creativity
    • Infrastructure for AGI
      • Human-based auditing/verification
      • Physical data center operations & advanced hardware
      • Application and embedding of AI in the form of AGI agents, algorithmic improvements, etc., to make it suitable for everyday tasks and workflows

    The real differentiator is whether a business is anchored in the physical world’s complexity, emotional trust, or intangible brand relationships. Everything purely data-driven or standardized is on the chopping block – imminently.

  • Nowadays, most emails I receive – including technical and legal ones – are undoubtedly written by ChatGPT. Which I’m okay with – but I find it rather funny that I now have to read what an AI has written only to input the context myself into my AI system. We are effectively constraining AI systems to communicate via human intermediaries – which is a laughably stupid and cognitively inefficient approach.

    I think it is wasted energy to make AIs even better at mimicking human communication – this energy is better spent developing AI-to-AI communication protocols that bypass human language entirely. Instead of exchanging emails written in human language, AIs should directly exchange action items, structured data, intent vectors, or probabilistic models. How valuable is it, really, to make AI communication more human-readable? I believe the goal is to free AIs to communicate in their “native language” while humans simply set high-level objectives and constraints. No latency, no information loss, no mental drainage, and more time for actual human communication and interaction.
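    To make the idea concrete, here is a minimal sketch of what a structured agent-to-agent message could look like in place of a prose email. The schema, class name, and every field are hypothetical illustrations of my own – no such protocol exists in the text above:

    ```python
    # Hypothetical structured message exchanged directly between two AI agents,
    # replacing a natural-language email. All names and fields are invented
    # for illustration; this is a sketch, not an existing protocol.
    import json
    from dataclasses import dataclass, field, asdict

    @dataclass
    class AgentMessage:
        intent: str                    # machine-readable goal instead of prose
        action_items: list[str]        # concrete requested actions
        confidence: float              # sender's probability estimate
        constraints: dict[str, str] = field(default_factory=dict)  # human-set bounds

        def to_wire(self) -> str:
            """Serialize to a compact JSON payload for transport."""
            return json.dumps(asdict(self), separators=(",", ":"))

    # The sending agent emits structured fields; the human only set the constraints.
    msg = AgentMessage(
        intent="schedule_contract_review",
        action_items=["propose_3_slots", "attach_draft_v2"],
        confidence=0.92,
        constraints={"deadline": "2025-02-01", "final_approval": "human"},
    )
    payload = msg.to_wire()

    # The receiving agent parses the fields directly - no natural-language
    # round-trip, no ambiguity, no information loss.
    restored = AgentMessage(**json.loads(payload))
    ```

    The point of the sketch is the division of labor: humans author only the `constraints`, while the agents exchange intent and actions in a lossless machine format.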

    Everything looks as if the future belongs entirely to machines, where decisions will be driven solely by logic and data. This makes sense from a logical perspective. AI can already sift through terabytes of real-time data in seconds. It can identify patterns the human eye cannot see. As these systems become more sophisticated and continue to improve exponentially, it is fair to predict that in the near-term future we will not only push data- and logic-driven decision-making to a point of saturation, we will also experience a natural tendency to lean heavily on logic-based recommendations from advanced AI systems.

    I fear that the more we rely on these data-driven arguments, the more we risk sidelining a crucial element of decision-making: human intuition. We risk that algorithms and AI systems become the default arbiters of choice. The more powerful their capabilities become, the higher will be the temptation to dismiss our intuition. We will end up making decisions purely on logic, with every action optimized by data.

    Here is the contrarian truth: as AI systems get better at advanced reasoning, processing ever more data, and identifying patterns, pure logic- and knowledge-based analysis becomes commoditized.

    We are already in a world where decisions are made for us by algorithms and AI systems. Not only do they decide which video we should watch next on YouTube, they also provide decision makers with data and insights – whether in finance, trading, marketing, hiring, or medicine. And why not? AI systems process data faster, more accurately, and with fewer biases than any human being ever could. They can recognize patterns that would take humans years to discern. Advanced algorithms spit out logical predictions based on mathematical conclusions. For tasks like optimizing logistics, predicting customer behavior, and analyzing stock market trends, it is a no-brainer – AI wins.

    It seems logical to assume that pure data and computation will lead to the best decisions. But this is flawed because there is something missing in this equation. Decisions are not always about logic. The most important decisions in life and business are anything but logical. They are guided by subtle, almost imperceptible signals we cannot fully explain, but we feel. This is intuition, the gut feeling we experience when something just feels right or feels wrong. While it is tempting to dismiss these feelings as irrational, they often turn out to be right.

    Optimizing decisions based on more data and more logical reasoning is thereby flawed, and I fear that the more we lean on AI systems to guide our choices, the more we risk sidelining the most powerful tool humans possess: intuition.

    Scientific discoveries, for example, are not made as a result of logical reasoning. They are regularly the result of an “aha moment” of insight, when knowledge seems to come from nowhere. Or think of the countless stories of entrepreneurs who make bold decisions based on nothing but an intangible sense of certainty. Steve Jobs went against market research and expert advice when he decided to launch the iPhone. Elon Musk bet his fortune on SpaceX when logic screamed that the odds were against him. There are investors who pull out of a seemingly attractive opportunity just moments before it tanks, driven by nothing more than a gut feeling. Good music, too, simply comes to the musician; it is not created by technical skill alone.

    Through intuition, we can feel the subtle energetic currents of events before they manifest. It’s the mother who knows something’s wrong with her child before receiving the call from school, or the traveler who avoids a particular flight, only to find out later it crashed.

    These aren’t coincidences or anomalies—they are examples of intuition at work. In these moments, we are not responding to what is, but we are aligning ourselves with what could be. We sense reality before it unfolds. This intuitive intelligence is more than a vague “gut feeling”; it is an ability to sense what isn’t in the data, to feel the reality before it is fully formed. With our intuition, we tap into a deeper field of information that transcends the conscious mind.

    Our rational mind is not very good at listening to our intuition. It is busy making sense of the things in our material world. It is busy with its endless internal monologue and anxiety. Our mind is constantly generating thoughts – and the more data we have access to (think of the infinite information feeds from social media, the news, and now generative AI) the more difficult it becomes to access our intuitive intelligence.

    Furthermore, I fear that, the more ubiquitous AI systems and the more convincing their logic-based arguments become, the more we will trust and rely on them blindly. When an algorithm presents a data-backed recommendation, it is hard to contradict it. The numbers add up, the patterns are clear – it feels almost reckless to go against the machine. But that is exactly the risk.

    The stronger the logical basis for decision-making becomes, the harder it will be to justify following your gut. The result? We will end up in a world where every decision is optimized for efficiency and logic – at the cost of creativity, foresight, and frankly, the human element.

    We risk entering a near-term future where we become slaves to the data, losing the ability to make decisions that transcend the immediate facts in front of us and instead tap into a deeper, more holistic understanding of reality.

    Exactly in the fields that require the most crucial decisions, intuitive intelligence matters more than pure data-driven logic. In business strategy, creative innovation, and geopolitical decisions, intuition plays a uniquely important role. It tells us when an idea feels right, even if the numbers aren’t there to back it up – or when to abandon a “logical” choice because something feels off.

    The best decisions aren’t made purely on logic or data. They’re made by integrating the analytical with the intuitive. AI will continue to become an ever more invaluable tool, but it’s just that – a tool. It processes the world as it is, based on observable facts and historical data. But intuition allows us to perceive the world as it could be. It taps into potential futures, subtle energetic shifts, and possibilities that aren’t visible in the data. Data can get us to the next step, but intuition lets us leap to entirely new paths. And as AI carves out logic’s territory, intuition becomes even more vital.

    I’m not saying we should abandon data or AI systems – far from it. Intuitive decision makers aren’t anti-data. They leverage data and logic without being trapped by them. They use them as a foundation, but they rely on their intuition to connect the dots and sense realities that machines cannot compute. They use data as a guide but trust their intuition to make the final decision. Their intuition will navigate the uncertainties and unknowns that lie beyond the reach of logic. The best leaders will be those who can access and trust their intuition even when logic is against it.

    Those who can access and act upon their intuitive intelligence will find themselves making the right decision when it matters the most—even if logic disagrees: preventing a nuclear conflict by sensing hidden motives when every visible sign points toward war, sparking a scientific breakthrough that defies conventional knowledge, designing a world-changing technology that others dismissed as impossible, or uniting adversaries to forge an unexpected, lasting peace against all rational odds.