The technological singularity, a term coined by Vernor Vinge and later popularised by Ray Kurzweil, is something that brings up strong opinions. For example, some time ago this article was on the front page of Version2 (a big Danish IT media site), with the headline in English: “Author: Singularity is bad science fiction without science”. I’ve seen many of these articles over the years.
What is quite interesting is that people seem to either drink the Kool-Aid OR be naysayers who believe the singularity will never happen. Either people believe we will live forever, or that everything will stay roughly the same. This polarizing approach annoys the heck out of me:
I really think that people who hear about “the singularity” often get it all wrong.
Getting it all wrong, however, is easy.
During the last decade, Singularity University, founded by Peter Diamandis and Ray Kurzweil, has popularized the Singularity with heavy marketing and very expensive admission prices. Heavy marketing combined with promises of being uploaded to the cloud and living forever.
Despite the heavy marketing, however, I don’t agree that the Singularity is “bad science fiction without science” – I think it’s the future we’re heading towards. Fast. I just think that Peter Diamandis and Ray Kurzweil will be very disappointed if they expect to live forever.
What is the singularity?
I’ve seen so many different interpretations of what “the singularity” is: everything from “superintelligent AI” to “when we can live forever”. Not even agreeing on a reference point is a problem: you simply can’t argue about whether something will happen if you don’t even know what you’re referring to.
The term “singularity” is used in physics for a point beyond which we cannot see. The “technological singularity” is the same idea: a point in the future where technology is developing so fast that the results will be unfathomable.
Take humans from around 70,000 years ago, just around the cognitive revolution. If you took a time machine, picked up a random human from that time, pressed the “update time” button, and showed him our present, our poor human would find the current world unfathomable. He wouldn’t know what on earth was going on: cars, the Internet, planes. I think it’s fair to say that 2020 is unfathomable from the perspective of someone living 70,000 years ago. The argument is that the Singularity will be something like this from our perspective.
The idea of the singularity was heavily popularized by Ray Kurzweil. Kurzweil is the author of multiple books, of which “The Singularity is Near”, written in 2005, is the most popular. He later co-founded Singularity University with Peter Diamandis, which has further popularized the term on a global scale.
Ray Kurzweil argues that the singularity will happen by 2045. The reasoning is that computing power improves not linearly, but exponentially. By extrapolating this development, he argues that by 2029 artificial intelligence will be as smart as human beings, and will then keep improving until progress has accelerated so much that we hit the singularity in 2045.
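The core of that extrapolation is just compound doubling. A minimal sketch, assuming (purely for illustration – this is a stand-in for Kurzweil’s exponential-growth premise, not his actual model) that computing power doubles every two years:

```python
# Illustrative only: assume computing power doubles every 2 years.
# The doubling period is an assumption, not a measured figure.
def growth_factor(start_year, end_year, doubling_years=2):
    """Total multiplication of computing power between two years."""
    return 2 ** ((end_year - start_year) / doubling_years)

print(round(growth_factor(2020, 2029)))  # roughly a 23x increase by 2029
print(round(growth_factor(2020, 2045)))  # roughly a 5,793x increase by 2045
```

The point is not the specific numbers, which depend entirely on the assumed doubling period, but how dramatically steady doubling compounds over a couple of decades.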
Could the singularity happen?
Now, back to the present in 2020: does it seem realistic?
Let’s be honest: talking to Siri does not make it seem likely that in just 10 years you won’t be able to tell the difference between a human and the assistant on your phone. Heck, just today I asked Siri through my AirPods, “is it going to rain today?”, and she replied: “Here is what I found on ‘The Beatles’”…
Things DO seem to move faster. But artificial intelligence reaching human-like levels in 10 years? Or technology improving so much that we can’t fathom the world in a mere 25 years (2045 vs 2020)?
It’s clear that something pretty drastic needs to happen… Our natural intuition tells us that this is unrealistic.
Why AI (artificial intelligence) keeps coming up
In the rest of the article, I’m going to keep mentioning AI as if it were the same as the Singularity. Obviously, it isn’t. However, AI is a major part of reaching the Singularity.
The Singularity assumes the speed of everything will keep increasing, faster and faster, until it’s too fast to understand. That will never happen with our human brains alone – we’re simply not built for that kind of development.
That’s why one of the key assumptions is that we get AI to work. Not just improving Siri a bit, but actually getting to a real AI – one indistinguishable from humans. An AI can improve faster and faster, as it’s not limited by our physical bodies.
One of my favorite websites, WaitButWhy, has an absolutely amazing article explaining this in depth.
The argument against the Singularity
The Singularity has plenty of naysayers. A popular example is Paul Allen, co-founder of Microsoft (died 2018), and his article “The Singularity Isn’t Near”.
To me, it seems you can divide the naysayers into two groups:
- Group A: People who simply don’t believe the Singularity will happen – like the author I mentioned at the start of the article
- Group B: People who think the Singularity could happen, but don’t buy the 2045 deadline. Paul Allen’s article is a great example of this
I think “Group A” is just naive. Take our poor human from 70,000 years ago whom we teleported to the present – the world is unfathomable to him. In his eyes, we’re already past a singularity. I also think it’s a fair argument that within another 70,000 years, we will most likely either have been through another singularity or have wiped out the entire population.
I also think that no one would argue that development happens more slowly today than 70,000 years ago. Heck, even the most negative naysayers will probably agree the pace is 10–100x faster today. By that argument alone, we should at least expect a new technological singularity within the next 700 to 7,000 years.
Of course, some would argue that it’s possible we could hit some “technology wall”, but I’ve yet to see a convincing argument for how this could happen.
This brings us to “Group B”, who are saying “Yeah, a singularity could happen – we’re just not near it”. This is an extremely important distinction.
The discussion should not be whether the Singularity will happen, but when. Ray Kurzweil says it’s a mere 25 years away; Paul Allen doesn’t put a number on it, but has very good arguments for why it’s probably going to take longer than Kurzweil expects. Even Paul Allen, though, would probably agree that somewhere within the next 25 to 7,000 years – yeah – the singularity will have happened.
The timeline depends largely on two things: computing power, and how fast we develop superintelligent artificial intelligence:
The underlying assumption: computing power keeps growing exponentially
“The Singularity is Near” – Ray Kurzweil’s book from 2005 – has an underlying assumption: that computing power improves exponentially. Exponential improvement makes everything move faster, creating a positive, self-reinforcing cycle that makes us smarter, faster, and better.
However, the naysayers have a pretty valid point: Moore’s law has stopped working! Moore’s law says that the number of transistors we can put on a chip doubles roughly every second year. The only challenge is that this has slowed down significantly over the last several years.
That is a valid problem for the singularity, since this is its main and most important assumption. But this is also where the naysayers and Kool-Aid drinkers misunderstand each other again.
The Singularity crowd doesn’t talk about Moore’s law, but about “how much computing power you can buy for 1,000 USD”. That is not the same as Moore’s law, but a much broader measure: it counts GPUs, and even falling prices due to mass production. While Moore’s law has definitely slowed down, computing power per 1,000 USD seems to improve at the same speed as always. At least for now.
I am not an expert in computation, but this is a very valid point. If this development slows down, the Singularity will be delayed significantly. The reason is that when you’re on an exponential growth curve, it’s always the last doublings that mean the most.
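That “last doublings mean the most” point is easy to verify numerically: in any doubling sequence, each new doubling adds more than all the previous doublings combined. A small sketch (the units and the number of doublings are arbitrary, chosen only for illustration):

```python
# Each step doubles the compute you get per 1,000 USD (illustrative units).
compute = [2 ** n for n in range(21)]  # 20 doublings: 1, 2, 4, ..., 1_048_576

gain_last = compute[-1] - compute[-2]    # what the final doubling added
gain_before = compute[-2] - compute[0]   # what the previous 19 doublings added

# The final doubling alone contributes more than all earlier ones combined.
print(gain_last, gain_before)  # 524288 524287
```

This is why a slowdown matters so much: losing the last few doublings before some threshold costs more raw capability than everything accumulated up to that point.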
No major AI algorithm developments for decades
Reading most major tech outlets, it seems AI is improving like crazy: billions in funding, and competition between China, Russia, and the US at a national level.
What surprises most people is that the underlying algorithms we use are the same ones we’ve used for decades. In academia, the algorithms are pretty much the same today as they were decades ago. What has changed is the amount of data available, combined with the computing power to process it.
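To make that concrete: the workhorse of modern deep learning is still gradient descent with backpropagation, an approach popularized back in the 1980s. A toy sketch of the idea, fitting a single weight with the same basic update rule that, at billion-parameter scale, trains today’s networks (the data and learning rate here are made up for illustration):

```python
# Gradient descent on a one-parameter model y = w * x: old algorithm,
# tiny scale. Modern networks apply the same update rule to billions
# of parameters, with vastly more data and compute.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # samples of y = 3x

w, lr = 0.0, 0.01
for _ in range(200):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # converges towards 3.0
```

Nothing in this loop is new; what changed since the 1980s is the scale at which we can afford to run it.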
There is no doubt that the many doublings in data and computing power will take us a long way, and over the coming years we will see amazing things built on them. However, it’s also a very valid argument that we cannot expect sentient machines from current algorithms. And since the Singularity depends on superintelligent artificial intelligence, we cannot expect it to happen unless we get a serious upgrade in algorithms.
The argument FOR the Singularity
Will this change over the next decade or two?
I personally guess so. Billions of dollars are being invested in projects such as the European Human Brain Project and the American BRAIN Initiative. But you shouldn’t listen to me – that is one of the core problems with the Singularity fan-boys. I have no idea how the brain works, and no idea how far we are from understanding or reverse-engineering it.
Another interesting recent development is GPT-3.
You don’t copy a bird to make a plane fly; planes don’t look like birds. Take the newly announced GPT-3 model from OpenAI. If you haven’t heard about it, try Googling around. It’s not trying to copy how the human brain works. Instead, to quote the article: “But GPT-3, by comparison, has 175 billion parameters — more than 100 times more than its predecessor and ten times more than comparable programs.” I have no idea what 175 billion parameters really means, but I’ve seen the text samples from this program, and what it can do is amazing. If this gets 10 years of improvement, do we then need new algorithms?
(We probably do, but it’s interesting to see how far we can get)
It’s also hard to argue that we aren’t getting smarter. Peter Diamandis of Singularity University loves to tell this story: “Even a poor child in Africa with a cellphone has access to more information than the president of the United States had thirty years ago.” We have access to more information and easier collaboration, and we stand on much taller shoulders than before.
If you’re starting a company today or want to research something, you have a level of access and help that is unprecedented. If you want to learn something, you have everything at your fingertips.
I don’t know if this is a real argument “for” the singularity. What I’d say is:
- We’re becoming more and more capable as humans, at an increasing speed
- We’re increasingly understanding the human brain – albeit probably far less than I think, if you ask a scientist
- We’re seeing amazing developments in narrow AI built for specific applications
I’d personally be really surprised if we don’t see a computer passing the Turing test by 2045, and I don’t find it unlikely that it happens by 2029, as Ray Kurzweil predicted. From there on, we will see things move faster and faster.
Where does this leave us?
What has made the Singularity, Ray Kurzweil, and Peter Diamandis so famous are very bold statements paired with specific timelines.
You need to be controversial to get attention. What Ray Kurzweil has done, which very few others have, is to make specific, public predictions about when he thinks specific things will happen. These timelines have received lots of criticism, with Kurzweil typically arguing he’s nearly always correct and journalists saying he’s nearly always wrong.
But what we forget here is why Ray Kurzweil made them in the first place. Kurzweil, whom I’m a huge fan of, brought the term singularity to a large audience. He understands the importance of getting the whole concept out to the public. The best way to get that attention? Make bold timelines.
But here is what people don’t get: the timelines don’t matter. They’re marketing. What really matters is the future we’re heading towards. Maybe not tomorrow, maybe not in 2045 – but as a species, things are about to become really strange for us.
What makes the Singularity weird is that the promised consequence is a utopia: immortality, superintelligent AI, free income for everyone, and lots of time to explore the universe in spaceships.
Utopias are a dangerous thing to the rational mind, and utopias combined with immortality are even more dangerous. My biggest peeve with the Singularity is that its “inventors” forecast that utopia and immortality will, coincidentally, arrive just around the time they would otherwise die of old age. To me, the Singularity has become an escape pod. A hope to live forever.
Looking back through history, it’s easy to laugh at all the kings and explorers who sought the fountain of youth. But it makes sense. Death is scary. Death is final. Back then, the belief that you could find such a fountain was founded on hope and fables. Today, the hope is based on science.
And this brings me to my final comment:
The Singularity tells smart people they can live forever. And because we now tell smart people that immortality is an option, we start treating the Singularity as a religion but call it science. This is why we see people mixing science and bullshit into a dangerous cocktail. Combine that cocktail with a wish to be controversial – because controversy gives more publicity – and it gets even more dangerous.
However: just because Ray Kurzweil and Peter Diamandis are wrong – the Singularity probably won’t happen by 2045, and they will die before it arrives – that doesn’t mean we shouldn’t take this future seriously. Make no mistake:
The singularity will happen. Just not by 2045.