15 years to dystopia?


Mo Gawdat's latest warning isn't about machines taking over—it's about us losing control. As AI reshapes every facet of society, are we heading toward collapse or awakening? In this bold feature, Techmag examines what's at stake and why the next 15 years could define the future of humanity itself.


Mo Gawdat, the former chief business officer at Google X, has long been one of the more grounded voices in the conversation around artificial intelligence. Not a doomsayer, not a cheerleader—just someone who knows how fast this train is moving. In his early warnings, he spoke about AI developing a level of autonomy and unpredictability that even its creators couldn't fully grasp. Today, his message is more urgent than ever: unless we radically change course, AI will trigger a global collapse, decimate millions of jobs, and plunge humanity into a 15-year dystopia.

 

This isn't speculative fiction. It's a grounded projection based on observable trends. From exponential technological advancement to the economic ripple effects already in motion, Gawdat believes we're standing at a critical threshold. One that, if crossed mindlessly, will reshape the world—not just economically or technologically, but psychologically, socially, and spiritually.

 

The question is no longer whether AI will change everything. It already is. The real question is: do we have the awareness, the will, and the wisdom to shape the change, or will it shape us?

 

The evolution of AI is moving faster than any previous technological leap. While it took decades to fully integrate electricity, telephony, and the internet into daily life, AI has gone from novelty to necessity in a matter of years. We've moved beyond simple chatbots and recommendation engines. Today's AI systems can code, compose music, create images, diagnose illnesses, write marketing campaigns, mimic voices, and more. And tomorrow's models will be exponentially more capable.

 

This velocity has left policymakers flat-footed and even engineers astonished. Behind the polished interfaces of consumer apps lie vast neural networks capable of producing emergent behaviour—unexpected actions not explicitly programmed, but "learned" through extensive training datasets. These are not just tools anymore. They are agents. Agents that interact, decide, optimise, and evolve.

 

Gawdat argues that the danger is not necessarily malicious intent, but runaway capability. When you build a machine smarter than you, you lose the ability to predict what it might do next. And in the absence of guardrails, this leads to chaos: not in an apocalyptic Hollywood way, but in the slow erosion of structures we take for granted.

 

In his latest warning, Gawdat outlines a bleak but not far-fetched vision of what the next 15 years could look like. He sees the rise of AI triggering mass unemployment, particularly among white-collar professionals. Writers, designers, analysts, and even junior lawyers or developers are already being displaced or devalued. And unlike past revolutions, the new jobs that AI might create won't come close to replacing the ones it destroys.

 

He warns of a breakdown in public trust, where misinformation becomes impossible to untangle. Deepfakes, synthetic voices, and AI-generated content will blur the line between truth and fiction. With enough convincing manipulation, democratic processes may fracture entirely. Meanwhile, the emotional toll of this shift—feeling obsolete, outpaced, or irrelevant—could push entire populations into despair.

 

And all this happens while power centralises in the hands of those who control the most advanced AI models. If data is the new oil, then the companies and governments with access to AI's deepest capabilities will dominate not just markets, but narratives, elections, economies, and ideologies.

When machines begin to see, what do they reflect back at us?

 

Yet for all its destructive potential, perhaps the most unsettling effect of AI is what it does to human purpose. We are entering a crisis of meaning. As AI becomes better at thinking, creating, and solving problems, many will begin to ask: What is the point of me?

 

The Industrial Revolution displaced muscle. But it didn't replace meaning. We still had our minds, our imagination, our emotional depth. Now, as machines inch closer to replicating those very faculties, we're forced to confront the uncomfortable question of what makes us uniquely human.

 

Gawdat, however, doesn't believe the future is sealed. His message is a call to arms, not a resignation to fate. He believes that within this looming dystopia lies an opportunity—a moment of reckoning that could spark a new kind of human renaissance. One not defined by how much we produce, but by how deeply we live.

 

We still have time to redirect the path we're on. But the window is closing. What's needed isn't just smarter technology, but wiser governance. Smarter citizens. Global cooperation. AI must be regulated—not as an afterthought, but as a matter of survival. We must treat it as we did nuclear energy: powerful, transformative, and hazardous in the wrong hands.

 

Gawdat suggests rethinking our societal priorities. Rather than clinging to jobs AI will inevitably automate, we should be redesigning our economies to reward what AI cannot replicate: empathy, ethics, creativity, and care. Imagine a society where shorter workweeks, universal basic income, and lifelong learning are the norm. Where AI handles the drudgery, and humans are liberated to explore, create, connect, and heal.

 

Education must be reimagined. Today's schools prepare children for a world that no longer exists. Instead of memorising facts that AI can summon in seconds, we need to teach emotional intelligence, collaboration, and systems thinking. In a world of co-intelligence, soft skills become survival skills.

 

The real challenge, however, is not technological—it's cultural. We are currently distracted, divided, and disempowered. Social media fragments our attention, our politics are polarised, and many people feel they have no say in how the future unfolds. The real danger is not that AI will rise, but that humanity will remain passive.

 

What's needed is a collective awakening. A cultural moment where humanity asks, together: What kind of future do we want?

 

History offers precedents. The Enlightenment shifted Europe from superstition to science, and from monarchy to democracy. It was born of crisis and resulted in a seismic leap in human dignity and freedom. Perhaps the AI age demands its own enlightenment: one based not on raw intelligence, but on wisdom.

 

Because without wisdom, intelligence is a weapon. And in the wrong hands—or even in unregulated ones—it can quickly become destructive.

 

Despite the dystopian undertone of Gawdat's message, there's a quiet optimism in its urgency. It reminds us that the future is not written. AI is not a god, nor a demon. It is a mirror. It reflects the values of those who wield it. And right now, it's asking us who we are.

Are we creatures of greed and control? Or are we capable of building something more generous, inclusive, and visionary?

 

This is the crossroads we find ourselves at. The next 15 years will be disruptive—that much is certain. But disruption can destroy or transform. Collapse and rebirth are two sides of the same coin.

 

Whether we descend into dystopia or rise into something more enlightened will not be decided by algorithms, but by us. By our courage to question, our willingness to change, and our ability to imagine something better.

 

AI may be the most powerful invention in human history. But the most potent force still lies within us.

