AI experts around the world believe that, at its current rate of progress, AI could hit the most dangerous milestone in human history by 2027: the point of no return, when AI stops being a tool and starts improving itself beyond our control, a moment after which humanity may never catch up.
As it turns out, this exponential growth means we may be stepping into uncharted territory in the near future. If technology keeps improving at its current pace, we will soon reach a stage where it is not just matching human intelligence but surpassing it. Couple that with the ability to learn and an incentive to survive, and we simply don’t know what happens next. This is what’s referred to as the technological singularity.
What is the Technological Singularity?
It’s a term borrowed from astrophysics. Singularity refers to a tipping point beyond which all laws that are currently known simply fall apart, like how the laws of physics fall apart beyond the singularity of a black hole. A technological singularity is a similar tipping point when technological progress is so overwhelming that we’ll no longer be in control of it or the things that it might lead to.
Once a technological singularity arrives, some futurists speculate, computers could make life-changing, Nobel Prize-worthy discoveries every five seconds. That may sound like an incredible future, or a potentially life-threatening one, and that is exactly why the prospect of a technological singularity is so complicated. On the one hand, rapidly progressing technology could effectively enslave humanity; on the other, it has immense potential to improve human life. That potential is why it is being developed so rapidly.
There are enormous incentives, economic and otherwise, to pour even more research and resources into the development of artificial intelligence. AI can help companies curate the products each customer is most likely to buy, something practically impossible to do by hand. It can predict when demand will be low to prevent waste. It can also conduct research faster than any human ever has or ever could.
AI’s Impact on Jobs and Economy
These innovations can lead to other, less inspiring changes in human society, too. After all, if scientific research can be done by a computer, what use is there for researchers anymore? If cars can drive themselves, nanobots can repair organs, and 3D printers can literally print bridges, are all jobs simply going to be replaced?
We are already staring this reality in the face. According to the IMF, around 40% of jobs globally could be impacted by AI. In advanced economies, the figure is closer to 60%. OpenAI, the creators of ChatGPT, estimate that 80% of the US workforce may see at least 10% of their tasks affected by AI and around 19% of workers could see 50% or more of their tasks impacted, especially office and knowledge workers.
If your employer decides that AI can handle more than half of your work, you’re either going to be let go so that fewer humans can do more tasks, or you’ll earn a lot less for doing the same work you were doing before. Without careful planning or government intervention, we may be headed for unemployment rates the likes of which humanity has never seen. And as recent events have made clear, this isn’t just about the economy or salaries; it’s also about the meaning most of us derive from our work. Not doing anything, as it turns out, is really boring.
What Happens When AI Surpasses Human Intellect
Technological progress is not slowing down anytime soon. What happens when computers replace not only our labor but also our intellect? What happens when they can mimic intelligence and learn on their own? It could be terrible, and it could be great; it’s not clear. But one thing seems certain: once unleashed, we may never be able to rein AGI back in.
Regular people like you and me who have good intentions for the future of humanity need to understand how these tools work and what goes on inside the black box. If we leave the future of AI in the hands of a few men in suits in Silicon Valley, we open ourselves up to all kinds of harm. But if all of us have a hand in building this tool, we can also have a hand in building the future of humanity.
If we continue to develop artificial intelligence, then there’s going to be more and more nonhuman intelligence that we just don’t have access to. Eventually, there’s going to be a point where we represent the minority of intelligence, maybe even a very minuscule amount.
Can We Really “Turn Off” AI?
We can just turn it off, right? But when modern humans began to take over the planet, why didn’t the chimps and the Neanderthals turn us off? If an artificial intelligence becomes super intelligent, learns continuously, and is connected to the internet, we can’t just shut down the entire internet. There is no off switch.
What happens if we end up stuck with an AI that is constantly and exponentially getting smarter than we are? What if it gets to a point that humans get in the way and the AI hits the off switch on us?
When people think of AI, they tend to think of super intelligent AI: AI that serves the human race but could also end us at a moment’s notice. But is this really going to happen? Pop culture, and movies in particular, tends to depict AI not as benevolent creations but as robots with malicious intent.
Narrow AI vs Artificial General Intelligence (AGI)
In reality, there are many different types of AI that serve different purposes. Artificial narrow intelligence, also known as weak AI, is the only form of artificial intelligence humanity has created so far. Even the most advanced video-generating AI tools fall into this category. Narrow AI does a very good job at what it is programmed to do.
Narrow AI was created for the sole purpose of handling one task. It is good at speech and image recognition and also at playing games like chess or Go or even complex ones like Dota 2.
This is how Spotify creates daily mixes based on the music you listen to, and how Amazon learns your buying habits to suggest new products. It may seem uncanny that these AIs teach themselves the task at hand. How is that even possible? Through a process called machine learning.
Machine learning is the science of getting computers to learn and think the way we humans do. It’s essentially how babies learn. We start off as small, screaming sacks of meat, but over time we get better at learning, taking in more data from observations and interactions, and most of the time we end up pretty smart.
The most popular technique for making a computer mimic a human brain is the neural network. Our brains are very good at solving problems, but each neuron in your brain is responsible for only a minuscule part of any one problem. It’s like an assembly line, where each neuron does one small job so that the whole network can solve the problem.
Of course, no real neural network is simple; many have millions of parameters and far more than one layer. The world is full of sounds, visuals, and data, and we take all of it in to form our view of reality. But as topics grow more complex and the data piles up, it becomes harder for humans to analyze it alone. This is where machine learning comes in: machines can not only analyze the data given to them, but also learn from it and adapt their own view of it.
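The assembly-line intuition can be made concrete with the smallest possible network: a single artificial neuron. Here is a minimal, self-contained Python sketch (the training data, learning rate, and epoch count are illustrative choices, not figures from this article) that learns the logical OR function from examples:

```python
# A single artificial neuron (a perceptron) learning logical OR.
# It predicts, compares with the correct answer, and nudges its
# weights toward the target -- the core loop of machine learning.

def step(x):
    """Fire (1) if the weighted input crosses the threshold, else 0."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = target - pred
            # Nudge each weight in the direction that reduces the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data)
for (x1, x2), target in or_data:
    assert step(w[0] * x1 + w[1] * x2 + b) == target
print("learned OR with weights", w, "and bias", b)
```

Real networks stack millions of these units in layers, but the learning loop, predict, compare to the target, adjust the weights, is conceptually the same.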
AGI is AI with more than a single purpose, at or near human-level intelligence. This is where we’re trying to get. But there is a problem: the deeper we dig into it, the harder it seems to achieve.
When someone asks you a complicated question, you have to sort through a ton of unrelated thoughts and observations and then articulate a concise response. This isn’t easy for computers. Humans can’t process information at the speed of light like computers can. But we can plan things and we can think of smart ways to solve problems without having to use brute force.
Getting a computer to human-level thinking is really, really hard. Humans can create, invent, build societies, play games, and laugh. These are very hard things to teach a computer. How can you teach a computer to create something that doesn’t exist or has never even been thought of? And what would even be its incentive to do so?
I personally believe AGI, or strong AI, is the most important artificial intelligence to be created. Machine learning is exponential. It starts off slow, but then there’s a tipping point where things begin to speed up drastically.
From AGI to Superintelligence
The difference between weak AI and strong AI is millions of times larger than the difference between strong AI and super intelligent AI. Once we have AGI that can function like a human being, it may help us reach super intelligence in only a few months or even weeks.
Many people picture intelligence as a scale: an ant sits near the bottom, a mouse a little higher, the average human far above that, and Einstein floating just a bit higher still. Ask most people where a super intelligent AI would land, and they’d probably put it just above Einstein. But that’s not the case. AI may not yet be at human-level intelligence, but when it gets there, it won’t stop. It will zoom past, becoming ever more advanced until that scale looks unrecognizable.
This is what is referred to as the technological singularity: the point where AI becomes so advanced that there’s an explosion of knowledge and information, some of it beyond human understanding. A super intelligent AI could improve upon itself, getting smarter with each iteration in less and less time. The first redesign might take a month, the second a week, the third a day, until it is billions of times smarter than all of humanity.
To grasp what might happen to our species during such an event, simply look at what history tells us about how a more intelligent species, us humans, treats its less intelligent counterparts, such as monkeys: the same monkeys we caged up, killed, and ran any and all tests on, with no ethical qualms until very recently.
Sam Harris provides an analogy in this regard to help us visualize how we might be treated based on our own past behavior. He draws on the relationship we humans have with ants by saying that we don’t hate them. We don’t go out of our way to harm them. In fact, sometimes we take pains not to harm them. We carefully step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, like an ant hill in your beautiful backyard or termites within the walls of your kitchen, we annihilate them without a single thought.
Experts Warn of AGI by 2027
How near is all of this? Ray Kurzweil, the renowned inventor and futurist, has long predicted that we will reach the singularity by 2045. He bases this oddly specific date on what he calls the price-performance of computation per constant dollar, which he has plotted from 1980 through 2050; at checkpoints such as 1981 and 2015, the actual numbers were roughly where his curve said they would be.
Others are skeptical of such claims, most notably Noam Chomsky, widely regarded as the father of modern linguistics and one of the most cited scholars alive. His perspective is interesting because he understands language more deeply than most of us, and language matters enormously for building a generally intelligent machine: understanding how humans communicate with one another may be the key to communicating with any potentially conscious machine.
While Kurzweil’s 2045 projection sounds far off, researchers in 2025 argue the timeline might actually be much, much shorter, possibly less than two years away. Ben Goertzel, founder of SingularityNET, has suggested that AGI could emerge by 2027, followed rapidly by artificial super intelligence, or ASI. The AI Futures Project, led by researcher Daniel Kokotajlo, has even outlined scenarios in which a fully autonomous AI surpasses human capabilities across the board by late 2027. Mo Gawdat, a former Google executive, also warns of a 2027 arrival, though he emphasizes the risk of social upheaval if the transition isn’t managed carefully.
A large-scale survey of AI researchers puts the chance of machines outperforming humans at every task by 2027 at about 10%, climbing sharply to 50% by 2047. So, while Kurzweil’s long-range vision places the singularity decades away, many experts working at the cutting edge today warn that 2027 may be the true breaking point, the year when artificial intelligence crosses the threshold from tool to uncontrollable force.
Max Tegmark, a Swedish-American physicist at MIT, studies the risk of human extinction from artificial intelligence. He likes the analogy of humanity’s discovery of fire: a wonderful discovery that paved the way for modern life, but one that hasn’t always been safe. Fires have caused enormous death, pain, and suffering along the way.
But we got here because we were able to learn from our mistakes and devise things like fire escapes and fire extinguishers. AI might be the same at the start, except for one difference: we may only get one shot. It’s all or nothing. If AI lights a fire we cannot extinguish, there will be no next time to learn from.
But other critics doubt that this is how the future will actually play out. Unlike Chomsky, they don’t question the exponential progress or AI’s ability to mimic human-like cognition; they question whether the future will be so aggressively set against our survival. A technological singularity is coming, they say, but we shouldn’t fear it. We should embrace the progress it could bring.
Such is the perspective of Garry Kasparov, widely considered one of the best chess players of all time. He was on the losing side in 1997, when IBM’s Deep Blue finally beat humanity’s best at the game we had invented. Kasparov believes that instead of seeing this as a man-versus-machine contest, we should take it as an opportunity to realize the potential of augmentation. There’s real comfort in that framing, considering we’ve been augmenting ourselves with technology for decades, first with calculators and more recently with smartphones.
The singularity doesn’t have to be an apocalyptic mess where every moving piece of metal is trying to kill us, or a deceptive reality where no one and nothing can be trusted. It can simply be a world where we are free to explore other dimensions of the human experience and engage in creative pursuits, not to put food on the table, but for their own sake.
Notice that neither of the major criticisms of the technological singularity actually denies that it’s coming. Critics argue either that it isn’t coming anytime soon, or that it won’t be as bad as we fear when it does. But you could just as easily argue that the singularity is nearer than we think, and that it will be much, much worse than we anticipated.
What happens then? Does it really matter how we as humans prepare? What policies could we possibly come up with? Whatever the answer, one thing remains the same. It’s coming. And if I were you, I’d get ready.
