If you look back at predictions about the future from the middle of the last century, they often portrayed futuristic extensions of the innovations people were seeing in their own era.
Flying cars, jet packs, spaceships. These were all modernizations of things people saw regularly in their own lifetimes: cars, planes, rockets.
Yet these predictions could scarcely anticipate the forms the technological future would actually take.
For instance, very few people predicted the internet. Of those who did, even fewer could conceive of the massive ecosystem of software that emerged and lives on the internet today – an industry worth trillions. And even as the internet took off, many prominent authorities insisted for years that it was a fad.
In retrospect, it should have been obvious that the internet was going to bring massive changes, right? A global communication network that lets the world exchange all types of data instantaneously, exponentially increasing the net transfer of information – communications, knowledge, real-time collaboration. It was inevitable that this would exponentially accelerate the pace of human activity and development.
We’re in a similar moment today with AI. While some people are still debating in corporate boardrooms whether or not AI is a “fad”, the brightest minds of our generation are engaged in a multitude of Manhattan Projects to precipitate the creation of artificial general intelligence (AGI) and artificial superintelligence (ASI).
For reference, AGI is an intelligence that can generalize its knowledge to learn to accomplish any task, whereas a superintelligence is one that exceeds humans in all domains and likely improves itself exponentially.
What’s universally agreed upon at this moment is that it will be nearly impossible to predict what comes next for humanity after the rise of AGI. This is why that moment is often called the singularity – the event horizon of the black hole we cannot see beyond. But first, let’s break down what we know will happen.
There are so many things we do without a second thought today that were once long and laborious endeavors.
Sending a document to someone across the planet? Copy and paste a Google Drive link into an email.
Need to buy something very specific from anywhere in the country? Place an order on Amazon and a team of warehouse robots has it ready and at your door in two days.
Need groceries? Punch in an order on an app and have your groceries on your front porch within an hour.
All of these instantaneous conveniences were nearly unfathomable 100 years ago. Each of these tasks once required at least an hour of physical effort. And yet, we don’t think twice about them today.
Thus, the things that greatly inconvenience and cost us time and money today will certainly be amongst the first use cases consumed by AGI.
If you hate having to constantly do dishes and laundry, that’s bound to get taken over by AGI-powered robotics.
Hate doing your taxes? An AGI could take over the bulk of that work for you.
Hiring a team of software developers to build an app for you? They’re likely to be replaced by AGI.
Have a customer support team? You’ll eventually replace it with AGI.
The point here is that the worst of our inconveniences and the most expensive of the services we rely upon are bound to be replaced by an intelligence that can do all of them and more.
But that doesn’t really tell us where we’re going. It only tells us what will no longer be our primary preoccupation – what we will no longer be doing.
The fact is that the proliferation of AGI will bring us to a new level of civilization.
To put this in context, imagine the mostly agrarian civilization of America at the start of the 19th century, before the mass automation of farming, when 83% of people worked in agriculture.
When most people had to work the land in some fashion, their daily concerns were plowing by hand, planting seeds, and weeding the fields.
Advances in mechanical automation greatly reduced the need for farm labor, sending most people into the cities to work.
By the end of the 19th century, only 35% of Americans worked in agriculture. In the span of a century, technology took concerns that had plagued the majority of humans daily for 10,000 years and reduced them to a footnote that a much smaller minority of the population had to manage. The rest of the population no longer had to think about manual farm work at all. And today, the share of Americans working in agriculture is only about 1%.
It’s hard to fathom at present, but our civilization is already beginning to undergo a similar transformation as it moves from a pre-AGI to a post-AGI world – a larger transition than the aforementioned shift from an agrarian to an industrialized society.
Most of what we concern ourselves with today will be footnotes in our history. Our current daily concerns and struggles will be things we never think twice about in the future; they will be problems solved or intelligently automated.
But enough about what you won’t be doing anymore. You’re really more interested in what comes next.
For a clue, we should look at what Sam Altman of OpenAI had to say at a recent forum about AGI and superintelligence:
"If superintelligence can't discover novel physics, I don't think it's a superintelligence. And teaching it to clone the behavior of humans and human text - I don't think that's going to get there," he said. "And so there's this question which has been debated in the field for a long time: what do we have to do in addition to a language model to make a system that can go discover new physics?" – Sam Altman
In other words, if the system is unable to reason its way to the discovery of totally new breakthroughs in science, mathematics, and engineering from first principles of the mechanisms of the universe itself, it’s not really AGI.
And when OpenAI says they are building AGI and have a path to get there, I think we should absolutely take them seriously, instead of dismissing it as sci-fi salesman hullabaloo – especially because in recent months they have oddly shifted their vocabulary from AGI to ASI.
So where does that leave us?
If you looked at the number of people performing educated, white-collar work in 1800, it was reserved for a very small number of high-status aristocrats. By 1900, professional services were in much higher demand and education had begun to be much more widely available.
We are likely to see a similar occupational diffusion of scarce roles from today:
Few of us can make novel discoveries in physics or the sciences. AGI would make this an everyday occurrence. In fact, our new roles may require us to create and tie these discoveries together every day (if there are any roles left for us).
To put this another way, just as we are bearing witness to the outright commoditization of intelligence, we are also about to witness the commoditization of everything.
Making movies, building software projects, running marketing campaigns, implementing data science projects – anything you could ask a remote human to do over a Slack message – will become a commoditized API call, costing no more than a few cents to complete.
There is something to mourn in what will be lost in all of this. Thousands of occupations that people have spent their lifetimes perfecting will be consumed in a flash. We’ve seen a small version of this over the past few hundred years, but never what we’re about to see on this scale.
Absolute masters of their craft will find themselves suddenly being surpassed by teenagers who have only a surface level understanding of their field, and then eventually watch those same people get outcompeted by agentic AGI.
The most painful experience will be among those who live by creative passion: artists and musicians in particular, who are already feeling the burn of AI generation, with only more to come.
There will also be a serious identity crisis in the coming decades. Billions of people around the world have built their identities around their work and what they do for a living. Many have built identities around working in general.
For these people, their work is who they are. They are engineers, scientists, doctors, salesmen, pilots, filmmakers. A computer taking their work will be immensely demoralizing.
It’s not just that a computer is taking their work, though. What’s happening is that the skillsets they have honed for decades will no longer be a barrier to entry into their field. We are transitioning from a world of skills to a world of ideas. Those with the best ideas will at least have a brief period of immense success.
We will see the most pronounced effects of this in the arts in the next year or two, particularly in filmmaking. As image generation, video generation, and AI lip-syncing are perfected, anyone will be able to produce any movie, on any set, with any actor, with any voice, all with no budget. And the wildest people with the most novel and strangest ideas will come out of the woodwork to bring those ideas to life in a way they never could before. It is going to get weird.
In contrast, it is not often that those with the best skillsets in narrow domains also have the best ideas from a broader business perspective. To them it will feel like the rug was pulled out from underneath them. And for a brief time, everyone will need to be acting as a CEO of sorts. A role previously reserved for a few will be done by everyone as they navigate temporary and burgeoning domains.
That’s not to say this is the guaranteed outcome. Again, a core feature of a superintelligence singularity is that you can’t see what will happen beyond its inception.
We are almost certain to see a brief period of a select few individuals profiting wildly from AI models annihilating entire labor industries. We will see LinkedIn thought pieces about how easy it is to succeed in life by pressing a button and eliminating career path X or industry Y. There will be vicious, intense competition during this period too: people running agencies of thousands of AI agents, outcompeting thousands of traditional firms, will suddenly find themselves outcompeted by other agencies with better models or more compute.
Capital expenditures will rise greatly during this time as well, because there has hardly been a time in the past when spending more money on something so immediately and viscerally translated into more profit. When all you need is more compute for more AI inference to provide intelligent labor services across any domain, the case for more spending becomes immediately clear.
During the pandemic, we saw a capital war for talented labor, with money being thrown at anyone who could do valuable work remotely. The battle over compute, compute manufacturing, and AI model development in the next decade will make the pandemic labor spending look like a drop in the bucket.
And at some point, this will stop. A true superintelligence across all domains will be achieved. It will be widely available, it will be aligned to human goals, and it will run cheaply on personal hardware.
And one thing will become clear: Trying to put a human in the middle of all these processes to capture value and profits off of very temporarily viable business models will be viewed as a needlessly hectic inefficiency.
We will likely reach a stage where businesses have to give up on defeating each other over minor goals, and we will need to align as a civilization on where we’re heading and set the superintelligences to the task.
Meanwhile, people are likely to become deeply focused on and proactively involved in culture itself. Beyond material things, which are bound to be abundant, there will be much more status gained from having ideas and advocating for their influence within society. And with all of the free time people should have after the ensuing calamity of the next decade, they are certain to take an interest in affairs they never had the space or energy for before.
The other areas that will explode are personal psychology and interpersonal relationships. Most people’s lives are traditionally so busy that they have little time for self-reflection or for meaningfully tending to their relationships with others, save for brief moments throughout any given day or week.
As time frees up, people will have the opportunity to reflect on, evaluate, and expand facets of their lives in ways typically reserved for the holidays and for retirees.
In this way, it’s worth emphasizing that now is a good time to start directing the focus of your own life to the things that AI can never take away: the development of the spiritual, personal, and interpersonal self that only humans experience.
Boiling it down, what will remain important in our lives going into the future is the essence of what AI will never have: the very experience of being human.