How We Use AI

Whether current AI systems qualify as AGI is beside the point. Five years ago, if you had asked me to define AGI, my answer would have closely described what o3 or Gemini 2.5 Pro are today. So if this is AGI, where are the breakthroughs?

Valid question. The answer: we are the bottleneck.

The limitation is no longer the model. The real limitation is that we haven't figured out how to use LLMs properly. Even if AI development froze today and all we had were LLMs at the level of o3 and Gemini 2.5 Pro, we would still see a decade of profound disruption and innovation across entire industries.

Most users treat AI like Google, a friend, a mentor, or a novelty. Few understand prompting. Even those who do barely scratch the surface of what becomes possible when you give AI the right prompt, the relevant context, and access to specific or even proprietary data.

Worse, we are not augmenting human intelligence; we are outsourcing it. TikTokified workflows, mindless automation, and a prompt-template copy-paste culture are commoditizing subpar outcomes. Instead of expanding our minds, we're paralyzing them.

The real potential lies in tandem cognition: reimagining how we work with AI systems so that our uniquely human traits (intuition, creativity, vision, …) aren't sidelined but amplified. Without this shift, outputs will commoditize across people and organizations alike.

We urgently need two things: first, a methodology for extracting maximum value from LLMs; second, a philosophy that empowers human genius rather than replacing it.

The future is not AI versus human. It is human with AI, at full capacity. So far, the focus has been on maximum capacity for AI compute. Now it's time we focus on maximum capacity for human genius.


Discover more from Marius Schober

Subscribe to get the latest posts sent to your email.
