A lot of people have been wondering how seriously they should be taking AI these days.
Basically, people are burnt out: past tech bubbles and dashed hopes have left them with a healthy dose of pessimism.
And to be fair, they’d be right to be skeptical. There have been two AI winters in the past 80 years, each spurred by AI’s failure to deliver on its sci-fi promises. Why should this time be any different?
And when you look at all the other tech bubbles that have burst or been overhyped in the past, it makes sense for the public to be leery of any more grand promises.
But let’s think about it another way:
The first neural networks and building blocks of AI were created in the 1940s and 1950s. There were already two major drives to achieve AI, in the ’70s and ’80s, before the bubbles burst and investors gave up.
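To give a sense of how simple those first building blocks were, here’s a minimal sketch of the 1943 McCulloch-Pitts threshold neuron – my own illustration in modern Python, since McCulloch and Pitts worked on paper, not in code:

```python
# A McCulloch-Pitts threshold neuron (1943): the unit "fires" (outputs 1)
# when the weighted sum of its binary inputs reaches a fixed threshold.

def threshold_neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A single unit can already compute basic logic gates:
and_gate = lambda a, b: threshold_neuron([a, b], [1, 1], threshold=2)
or_gate = lambda a, b: threshold_neuron([a, b], [1, 1], threshold=1)

assert and_gate(1, 1) == 1 and and_gate(1, 0) == 0
assert or_gate(0, 1) == 1 and or_gate(0, 0) == 0
```

Eighty years of research separate that single unit from today’s models, but the lineage is direct.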
Hardly any other technological concept has been in development for so long without the industry giving up on it entirely. If it were truly infeasible, everyone would have walked away by now.
But instead, today we are seeing the exact opposite. Every major tech company, fully aware of the AI winters of the past, is barreling full steam ahead to advance its AI technology stack as much as possible, as quickly as possible. The fact that AI has been worked on for so long is exactly why we should be hopeful that this time, it’s real.
My point isn’t only that they are moving quickly; they are also investing billions. They view it as life or death for their companies: whoever builds the best AI will win the economic races ahead.
The other factor that has skewed public perception is that GPT-3.5 is free while GPT-4 costs money.
Most people have only tried GPT-3.5, decided it was dumb, figured GPT-4 couldn’t be worth paying for, and gave up. In reality, GPT-4 is practically an omniscient god compared to GPT-3.5 – it was the void speaking back.
Many companies and researchers have noticed the power of these models and have raced to apply the transformer architecture these GPTs are built on to other domains, such as robotics and mechanical engineering.
The remaining question in all this is: can we take this AI revolution seriously? Frankly, I think the best tell is Sam Altman, the CEO of OpenAI.
It’s clear he’s taking this extremely seriously. Lately he strikes a much more somber tone whenever he talks about AI advances, as if he’s aware of immense capabilities behind the curtain that he can’t quite talk about yet – likely the kinds of advancements that will move mountains for the species.
And when Altman talks about AGI, he’s talking in earnest about what it will do for us, and trying to figure out what we as humans will do next. My guess is he’s got the same feeling bank tellers had when the ATM arrived.
There’s one more thing to keep in mind here regarding the pace of all of this.
ChatGPT launched with GPT-3.5 in November of 2022, and GPT-4 came out in March of 2023. It’s been only a bit over a year since this revolution kicked off. But the striking thing about these release dates is that GPT-4 actually finished training in August of 2022, several months before ChatGPT was ever released. In the meantime, no other company has managed to reach GPT-4 levels with any model the public has access to.
It would be silly to believe that OpenAI hasn’t been making capability advancements since then, either. We should expect big jumps in the coming months, and we should take seriously what it means to live in a world of ubiquitous intelligence and AGI.
In other words, unless it is proven impossible for AI to advance beyond its current publicly known state, I’d say this AI cycle is not hype. It is in fact the real deal – the culmination of 80 years of research, from the first theory of computational neural networks in 1943 to now.
We should expect to see enormous leaps in capability and intelligence in the coming years. Some will wonder whether this counts as AI if it isn’t also conscious, but that is likely a different question. I suspect we’ll find ways to optimize every cognitive process we use for intelligence without necessarily building a conscious machine out of the coordination of those processes.
Not on purpose, at least.