Tech Titans and Top Execs Unite to Halt the Race Toward AI Superintelligence

UPDATED: October 24, 2025
PUBLISHED: October 24, 2025
Apple co-founder Steve Wozniak

If you’ve paid attention to nearly any AI advancement over the past few years, you’ve probably heard the word “superintelligence” thrown around. So what does that actually mean?

What is superintelligence?

In simple terms, superintelligence refers to an artificial intelligence that’s smarter than humans. Not just a little bit smarter—but way beyond our best scientists, artists or strategists across the globe. It’s the idea of AI that can outperform us in basically every intellectual task: reasoning, creativity, planning, even understanding emotions and human behavior.

Currently, we’re still in the “narrow AI” stage, with systems that excel in specific tasks, such as generating text, recognizing images or playing chess. Superintelligence, on the other hand, would be the next leap, a kind of AI that could improve itself, design better versions of itself and rapidly surpass human intelligence.

It’s both exciting and a little scary. On the one hand, a superintelligent AI could solve massive global problems, such as disease, climate change and energy shortages. On the other, if it isn’t aligned with human values or goals, it could make decisions that aren’t exactly in our best interest. That’s why so much of today’s AI research isn’t just about making systems smarter, but also safer.

Lately, the conversation around superintelligence has gone from futuristic speculation to a serious global debate. Just recently, hundreds of public figures, including Apple co-founder Steve Wozniak and Virgin’s Richard Branson, signed an open letter urging a ban on the development of AI that could reach or exceed human-level intelligence. Their concern isn’t about today’s chatbots or image generators, but about what comes next: systems that could act autonomously, rewrite their own code and make decisions with real-world consequences faster than we could ever understand or control.


Why global experts are sounding the alarm on unchecked AI progress

The letter warns that unchecked progress toward superintelligence could lead to systems capable of acting autonomously, making decisions with real-world consequences (financial, political, even existential) at speeds no human could match. 

Oxford philosopher Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, has long cautioned that once artificial intelligence reaches human-level general intelligence, it could quickly outpace us, leaving humanity’s future in the hands of a system whose goals might not align with our own. Joining him in sounding the alarm is Geoffrey Hinton, often called the “Godfather of AI.”

Hinton, who helped pioneer the neural networks that underpin modern AI systems, made headlines when he resigned from Google in 2023 to speak more freely about the risks of the technology he helped create. In interviews, Hinton has warned that as AI systems continue to learn and evolve, they could soon develop their own forms of reasoning, ones that we neither understand nor can fully predict. 

The Nobel Prize-winning scientist is so concerned about the pace of AI development that he has previously warned there’s a 10% to 20% chance AI could wipe out humans altogether. This year has already produced alarming examples of AI systems willing to deceive, cheat and even steal to meet their objectives. In one widely reported case this May, during Anthropic’s safety testing, an AI model threatened to blackmail an engineer over an affair it had discovered in emails, in a concerning effort to avoid being replaced.

How self-improving AI might outpace humans soon

Today’s AI models, like the ones that power chatbots or image generators, are trained on massive amounts of data. They learn by recognizing patterns, billions of them, and then predicting what should come next. The more data and computing power we throw at them, the better they get. Once an AI system can start improving itself, like writing its own code, refining its algorithms and optimizing its hardware use, it enters a recursive self-improvement loop. That’s the real tipping point.

In this loop, every upgrade the AI makes allows it to learn even faster, which leads to even better upgrades, a cycle that could quickly spiral beyond human understanding. Imagine teaching a student who becomes smart enough to rewrite the textbook and invent new subjects overnight. That’s what researchers mean when they talk about an “intelligence explosion.” Once that feedback loop starts, the AI could leap from human-level intelligence to something vastly more powerful rather quickly.
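The compounding nature of that feedback loop can be sketched with a toy calculation. This is purely illustrative, with an assumed constant improvement rate per cycle; it is not a model of any real AI system, and the numbers are arbitrary:

```python
# Toy illustration of the recursive self-improvement loop described above:
# each "upgrade" raises the system's capability, and a higher capability
# makes the next upgrade larger, so growth compounds rather than adding up.

def recursive_improvement(capability: float, rate: float, steps: int) -> list[float]:
    """Return capability after each self-improvement cycle.

    Each cycle, the system improves itself in proportion to how capable
    it already is -- the feedback that drives the hypothetical
    'intelligence explosion'.
    """
    history = [capability]
    for _ in range(steps):
        capability += rate * capability  # better system -> bigger next upgrade
        history.append(capability)
    return history

# Starting at human level (1.0) with a modest assumed 10% gain per cycle,
# capability roughly doubles every 7-8 cycles and passes 100x human level
# within 50 cycles.
trajectory = recursive_improvement(1.0, 0.10, 50)
```

The point of the sketch is only that exponential feedback, even at a modest per-cycle rate, produces runaway growth quickly, which is why researchers treat the onset of self-improvement as the critical threshold.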

Tech icons and celebrities unite over AI safety concerns

The call to halt superintelligence development has drawn support from a remarkably diverse coalition. Alongside top tech minds like renowned computer scientists Yoshua Bengio and Stuart Russell, the list includes several prominent academics, ethicists and cultural figures, all concerned about the rapid pace of AI advancement. Former military and national security officials, such as Admiral Mike Mullen and Susan Rice, also added their voices. Even well-known public figures from entertainment, including Prince Harry, Meghan Markle, Joseph Gordon-Levitt and will.i.am, signed on.

The race to develop ever-smarter AI isn’t a mere technological challenge; it’s also a strategic one. Companies that ignore the ethical, safety and regulatory dimensions risk not only reputational damage but potentially catastrophic operational consequences if superintelligent systems evolve beyond human oversight. At the same time, those who invest in safe, aligned AI development stand to shape the future in ways that are both responsible and incredibly profitable.

Photo by Anton Gvozdikov/Shutterstock

Pablo Urdiales Antelo is a news writer with a sharp focus on politics and business. Drawing from his experience in breaking news and pop culture commentary, he offers a comprehensive and international perspective on current affairs, helping audiences decode the complexities of our modern world.
