Everywhere we look today, debates about AI are on the rise. Political movements are already forming around the role of AI and how to contain its impacts.
Some groups fear AI will kill all of humanity (unlikely). Others are simply worried about their jobs and being able to put food on the table. At the same time, techno-optimists are pushing to reap as many of AI’s benefits as quickly as possible.
Government leaders are caught in the middle of all of this: They need to mitigate harms such as job loss to avoid dissatisfied voters, leverage AI’s economic benefits, and make sure their national security isn’t threatened by adversarial countries that develop AI weapons faster. And different leaders are setting different priorities.
Table Stakes
AI can be viewed similarly to nuclear fusion, both in what it gives us and in the questions it raises about ourselves.
If nuclear fusion works, it forces us to ask: Who could we be if we have unlimited energy? What could we accomplish? Which insurmountable problems simply go away?
A computational intelligence is like an infinite source of energy for intellectual work. By extension, it is an accelerant for an entire society’s capabilities. It acts as a fundamental catalyst, similar to electricity.
The mass proliferation of AI makes us ask similar questions: Who could we be if we had unlimited intelligence? What could we do with unlimited talent? How much better could life be if we always made the optimal decision?
All of that is exciting. But it’s important to remember that the great challenge of civilization at scale is coordination. Despite any economic abundance that AI can create, societies and organizations will continue to compete.
Even in abundance, the level of prosperity will be relative between nations, and competitive innovations will unlock new tiers of goals for those most advanced nations to strive for.
The goals of tomorrow are beyond the comprehension of today’s world. However, we can imagine those future goals will be of similar relative difficulty to that future civilization as our current major goals are to ours.
Today, two of the major goals of human civilization are to build AGI (unlimited intelligence), and to build nuclear fusion (unlimited energy).
These goals were long considered practically impossible, yet they’ve been dreamed about and worked on for decades. We are now extremely close to both. Nations and corporations have both cooperated and competed on these goals, and now that they are legitimately within reach, competition to make them real is intensifying.
Let’s look at nuclear fusion first: Building a fusion reactor was a project openly shared among many governments for a long time. It was something of a pipe dream, but one still worth exploring. Now that accomplishing it is within reach, multiple private companies have spun up, each competing to build a commercially viable fusion reactor first.
We’re seeing a very similar phenomenon with AI. For over a decade, research has been open, and AI models have been shared openly alongside the research papers. Or, at the very least, training data and exact methods would often be shared, allowing end users to train the models themselves.
Today, multiple companies are all aiming to achieve artificial general intelligence. While far more AI research is happening now, there’s also far more competition, and a lower proportion of state-of-the-art research is being shared openly. The impossible has become attainable, and the most motivated and capable among us have decided they will benefit from obtaining it.
So let’s consider the massive projects of Fusion and AI on a planetary scale:
First, organizations shared resources to build these things. They shared expertise, learned together, and theorized about what it would take to make these things happen.
Then some actors decided the science had hit a point where they could move forward with building those things, and split off into their own organizations.
Now those organizations are all competing and on the verge of accomplishing these holy grails of their fields.
Why is this important? Because it sets the stage for the future of cooperation and competitiveness.
After we crack Fusion and AGI, the new set of goals they enable is unfathomable to us today. But again, we can expect that organizations will at first be open and cooperative with each other while figuring out how to accomplish seemingly impossible tasks.
Once the task starts to seem commercially viable, researchers will splinter off into their own companies to build what was thought to be impossible.
Here’s where this matters: By having researchers capable enough to participate in joint research efforts, nations gain institutional knowledge in accomplishing these new sets of impossible goals.
However, having AGI, and likely the nuclear fusion to power it, will be table stakes for participating in the cooperative phase of these goals before the self-interested competition over their final development begins.
Ultimately, the result of this is a star-bound race into humanity’s future. We’re only now approaching the end of the pre-qualifying round and lining up positions at the starting line — but having a good starting position in any race is worth a lot in itself.
War
Some nations with access to strong AI capabilities are prioritizing their use in war.
Autonomous weaponry is an entire field of its own. A sufficiently intelligent weapon can replace or supplant the need for soldiers and level the playing field. We’ve already seen the wide deployment of self-navigating kamikaze drones in battle.
Weapons with high degrees of onboard autonomous intelligence will be unmatched. Imagine drones that can identify enemy combatants and autonomously act to eliminate them, or autonomous fighter jets that can pull higher G-forces than any human pilot and evade surface-to-air missiles. A heat-seeking missile could instead use an onboard AI tracking system that wouldn’t be tricked by flares.
There’s also AI’s capacity to generate computer viruses for hacking other nations’ grids.
Basically, the nations with the most autonomous capabilities will have the upper hand in a war. For this reason, the United States has limited companies’ ability to export AI-capable GPUs to China – a move that has increased China’s desire to reclaim Taiwan, where most cutting-edge GPUs are produced.
Economics
It’s clear that AI’s biggest impacts will be economic. We will see deflationary effects across industries as labor costs plummet toward zero while the pace of producing and delivering products and services increases.
Take self-driving cars: As AI improves, they will become far more ubiquitous. Uber, Lyft, and taxi drivers will effectively disappear, and costs will go down significantly.
Self-driving cars will also enable people to own fewer cars. Cars spend 95% of their time parked. If people can use a self-driving ride service and pay only marginally more than fuel and maintenance for their rides, it’s highly likely they’ll end up saving over owning their own cars and insuring them.
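To make that concrete, here’s a minimal back-of-the-envelope sketch. Every number in it is a hypothetical placeholder (the mileage, depreciation, insurance, fuel, maintenance, parking, and the ride service’s markup are all assumptions, not data from any study):

```python
# Rough annual cost comparison: owning a car vs. using a self-driving
# ride service. All figures are hypothetical placeholders; substitute
# your own local numbers.

ANNUAL_MILES = 10_000

# Ownership costs per year (assumed values)
depreciation = 3_000   # value the car loses as it ages
insurance    = 1_500
fuel         = 1_200
maintenance  = 800
parking      = 500
owning = depreciation + insurance + fuel + maintenance + parking

# Ride-service costs per year: assume the service charges the underlying
# fuel + maintenance cost per mile, plus a 50% margin.
cost_per_mile = (fuel + maintenance) / ANNUAL_MILES   # $0.20/mile here
ride_service  = ANNUAL_MILES * cost_per_mile * 1.5    # 50% markup

print(f"Owning a car:  ${owning:,.0f}/year")
print(f"Ride service:  ${ride_service:,.0f}/year")
print(f"Difference:    ${owning - ride_service:,.0f}/year in the rider's favor")
```

Under these assumed numbers, the rider comes out roughly $4,000 ahead per year: depreciation, insurance, and parking for a vehicle that sits idle 95% of the time dominate the ownership bill, and a per-mile markup doesn’t come close to offsetting them.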
Reducing the fixed living expenses of a nation’s populace lets people become capital allocators in their own lives, which in effect means a higher quality of life.
We’ll see these effects across industries. If I previously needed a mechanic to fix a somewhat simple car issue, but can now just upload a picture of my engine to ChatGPT, I’ll have saved $100 that I can allocate to something else.
Or, if I needed a tutor in a subject but now have on-demand access to expertise, I can operate with a working knowledge of that field at little cost. Cross-disciplinary engineering firms stand to gain enormously from tools like these.
For these reasons, it’s obvious why nations that can see the benefits are rushing to gain access to these tools, and why some, like the US, are trying to prevent adversaries such as Russia and China from accessing them.
Of course, nations will need to resolve at a fundamental level what the end of exchanging labor for currency looks like – because it doesn’t matter how infinitely affordable everything becomes if no one has any way of bringing in spendable money.
Appealing to the Electorate
Government leaders have a lot to worry about in this moment. If they don’t ensure a smooth transition, they could be dealing with angry mobs.
Realistically, there will be a lot of work to do for quite a while, but a lot of people could easily be displaced in the interim.
We’ve seen tech leaders hint at this in Congress, where they’ve mentioned the potential for job losses. But no concrete solution has been proposed or taken seriously by Congress yet. UBI is one possibility, but the appetite for advocating it in an official capacity is absent.
Meanwhile, the EU has taken a markedly more anti-AI stance, disregarding competition between nations. They’re focused on preventing AI from generating any copyrighted material it has trained on, or generating any potentially illegal content. AI systems in use across a wide variety of industries will need to be registered and face their own regulations.
At the same time, French authorities raided NVIDIA’s offices, and Italy’s data-protection regulator temporarily banned ChatGPT entirely.
Europe’s sentiment has generally been about preventing things from changing significantly and causing any negative effects at all, rather than working with and adapting to a civilization-altering invention.
Doom
AI Doomers have also been gaining influence in the realms of government. They come up with useless probabilities, such as saying “we have a 10 to 90% chance that AI kills everyone,” and put a lot of effort into sounding smart. They typically gain traction amongst people who have a poor understanding of technology.
But it’s worth mentioning them here, because government leaders mostly have a poor understanding of technology. And as they try to understand AI, they’ll pay attention to the people who sound like experts, but are religiously convinced an AI Rapture Judgement Day is inevitably coming.
For nations that take these arguments seriously, we’ll see wider and more draconian policies trying to rein in open-source AI.
I suspect these efforts will mostly backfire. Most past efforts to stop open source have.