Wooly Mammoths and the Future of Disposable Software
Handcrafted code is going the way of the dodo
Many moons ago, a manager at an old job ranted to me about disposable vape pens. He hated their wastefulness.
If you don’t know what a vape pen is, it’s basically an electronic device that heats THC, CBD, nicotine, or other substances to a temperature where they turn into inhalable, psychoactive vapors.
When I was in high school, such devices were large, bulky, and relatively expensive to manufacture or buy.
Today, they sell disposable vape pens. Unlike typical vaporizers, they can’t be refilled or recharged; all of their inhalable contents and batteries are sealed away inside, destined to be used until emptied or dead, and then discarded.
It’s a little wild that these things are disposable: vape pens are hardware devices with a built-in lithium-ion battery and a microcontroller for managing voltage, temperature, and control of the device.
There’s probably more computational power in a disposable vape pen than was used in the moon landing.
My manager was having none of it. Every time he found one, he would take it home and rip the lithium-ion battery out to reuse it.
He owned a 3D printer and would do stuff like 3D print an electric toothbrush, and then stick the trashed vape-pen batteries inside of it.
Any time I see a disposable vape pen haphazardly littered somewhere these days, I think of him.
Anyway, I think we’re heading toward a future where people will increasingly be disposing of things that we’d look at today as insanely wasteful. Things will be much cheaper and quicker to create than they were before. Consequently, people will dispose of these creations just as easily.
This will apply to a lot of things, like movies and music, but I’m mainly looking at this from a software engineering perspective. Hundreds of billions of dollars are spent worldwide on software engineering. The code behind all the applications we use is meticulously (and not so meticulously) crafted. It’s costly in dollars and time to put it all together.
Wooly Mammoths
A lot of this is destined to change in the near future. First, let’s take a look at how software is done today:
At the moment, it’s typical for software engineers to consider flexibility when writing applications. Requirements could change, so it’s best to write code in a way that can be easily changed or extended to accommodate new requirements.
If the software isn’t written flexibly, making even simple changes can border on rewriting the entire app from scratch. We call this type of code “brittle”. Every change in brittle code has the potential to break everything, so you have to fix all the other things that broke just to make the app work again.
As you can imagine, brittle code is deeply expensive to maintain. So what tends to happen is companies pay an upfront cost in careful design and planning to ensure flexibility later down the road. This also influences how applications evolve.
Applications also tend to evolve down narrow business paths, almost genetically. Additional modifications and services are built and bolted on according to market demands, and sometimes experiments from leadership, like new organelles of a cell.
Applications must fit the business needs for a specific business environment, or they die.
A result of this, however, is that if a business environment changes dramatically, it can often be difficult for a large existing enterprise application to adapt.
Think about it like this: that enterprise app started small for a specific niche, evolved to become large, added a ton of new services, and then a big environmental change suddenly rendered it useless.
Maybe the niches they serve went extinct, or those niches found more modern and nimble service providers able to accommodate all their needs more easily and cheaply.
When large market changes come, those giant applications become like wooly mammoths at the end of the ice age: Adapted narrowly to the tundras of the glaciers, only to suddenly find themselves in a forest sauna, with humans nipping at their legs for their meat and fur.
Overall, the structure of a lot of development is on course to change. I’m not saying every application ever will be built this way. But a large fraction of code written on Earth will be AI-generated — especially considering that most people today can’t code, but most people can use AI to write code.
Soon, people will likely request that a narrow task be accomplished, and AI tools will build the software needed to accomplish it. The requirements of the application may be extremely narrow, and it may only need to be used one time.
The code the AI writes will likely not be brittle; the AI is trained on flexibly written applications and often tries to employ best practices. But realistically, it will be less time-consuming to modify the requirements of the task than to modify the code.
In a sense, these AI-generated applications will function just like those disposable vapes. They will be intricate, full of complicated logic that would have taken painstaking hours to build manually, with everything needed to accomplish that specific task built in (this logic is the vape juice). And when the AI is done using the software to complete its task, it will throw the app away.
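To make that concrete, here’s a minimal sketch of the generate-use-discard loop. It assumes nothing about any particular model; `generate_code` is a hypothetical stub standing in for whatever code-generating API you’d actually call:

```python
import os
import subprocess
import sys
import tempfile

def generate_code(task: str) -> str:
    """Hypothetical stand-in for a call to a code-generating model."""
    return f'print("pretend this app accomplishes: {task}")'

def run_disposable_app(task: str) -> None:
    source = generate_code(task)
    # Write the generated app to a throwaway file...
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        subprocess.run([sys.executable, path], check=True)  # ...use it once...
    finally:
        os.remove(path)  # ...then throw it away

run_disposable_app("compare dogsled fares")
```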
HyperPools
There’s a missing component here that I think will determine just how powerful application creation will be in the future: the HyperPool.
In Igniting the Everything Machine, I mentioned an example of asking an AI to build an Uber for Dogsleds. What I didn’t mention is that apps like that are bad examples for the kinds of things people will build with AI.
In the past, if you were building an Uber for Dogsleds, you’d need to build a whole company around it. You might hire devs, you might build an MVP, you might seek funding as soon as you have traction.
I don’t think any of that will be necessary moving forward with the HyperPool, because those types of applications will be too ephemeral to build a company around.
First, what is a HyperPool? It’s a massive, live pool of hyper-dimensional data for AI tools to continuously evaluate for usage in potential applications.
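Nothing like this exists as a product that I know of, so here’s a purely illustrative toy data model (every name below is invented) to make the idea concrete: a pool of live, timestamped signals that AI tooling can query along any dimension.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Signal:
    subject: str        # who or what the observation is about
    dimension: str      # e.g. "purchases", "location", "income"
    value: Any          # the observed data point
    observed_at: float  # unix timestamp, so the pool stays live

@dataclass
class HyperPool:
    signals: list[Signal] = field(default_factory=list)

    def add(self, signal: Signal) -> None:
        self.signals.append(signal)

    def along(self, dimension: str) -> list[Signal]:
        """Let an AI tool evaluate the pool along one dimension."""
        return [s for s in self.signals if s.dimension == dimension]
```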
So how would it work with an Uber for Dogsleds?
Let’s forget about the person trying to build the application. Let’s only think about the pieces involved:
First, you need someone trained in dogsledding.
Second, that person needs to have dogs.
Third, they need to be interested in economic gain from providing dogsled transport.
Fourth, there has to be some snow on the ground.
Fifth, you need someone who wants a ride somewhere.
OKAY. Those are all the people you need. Let’s pretend there is one HyperPool, an all-seeing panopticon; Judas Priest’s Electric Eye, but for helpful purposes.
And let’s just say that thanks to the HyperPool, the AI knows there’s a pro dogsledder nearby. The HyperPool data indicates they’ve been ordering dog food lately, so the AI knows they still have dogs. And their financial data says they’ve been picking up extra gigs lately, so they’re looking for that kind of work.
Now someone tells their Siri or Google or whatever that they want a ride to some remote, snowy area of Alaska.
An AI starts piecing together how to make this happen.
Looking for available helicopter pilots…
Looking for available snowmobile drivers…
Looking for available moose riders…
And then looking for dogsledders.
Suddenly, the AI has found our hero. They get pinged with the request. They weren’t ever signed up for any dogsledding Uber app, or actively soliciting this type of job, but they happily accept the work.
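Stripped of the magic, the matching step is just a filter over pooled signals. A toy version (all the data and field names here are made up for illustration) might look like:

```python
# Hypothetical signals a HyperPool might hold about nearby providers.
providers = [
    {"name": "our hero", "skill": "dogsled", "dog_food_orders_90d": 6,
     "picking_up_gigs": True, "miles_away": 12},
    {"name": "retired musher", "skill": "dogsled", "dog_food_orders_90d": 0,
     "picking_up_gigs": False, "miles_away": 3},
]

def find_dogsledders(radius_miles: int) -> list[dict]:
    return [
        p for p in providers
        if p["skill"] == "dogsled"
        and p["dog_food_orders_90d"] > 0    # still has dogs
        and p["picking_up_gigs"]            # open to gig work
        and p["miles_away"] <= radius_miles
    ]

print(find_dogsledders(radius_miles=25))  # -> only our hero gets pinged
```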
The AI tooling of the near future will excel at promoting these symbiotic connections and removing the frictions that prevented them. But it also means that a lot of applications today that function through facilitating “matching” will become obsolete. And this is a core function of many applications.
There’s also a long way to go before this happens. The likely intermediate step is that many companies will build their own hyperpools; some already have them. They will collect niche data and aggregate as much metadata as they can glean on top of it. They’ll then mix in AI tooling to provide as many additional services as they can with that metadata.
A perfect example for the near future would be dating, and there are a couple of companies that could be well positioned for it. Let’s look at Match Group. At the moment, they own:
Tinder
Plenty Of Fish
Match.com
OkCupid
Hinge
And like 40 other dating apps
A company like this could develop the perfect hyperpool! With a ton of dating data, and the ability to ask their users for any additional information an AI might need to find a match, they could create an incredibly successful dating service.
Unfortunately, their business model revolves around app usage and getting people to pay for digital ‘roses’ to send to each other, and not around successful matches.
The bright side is that another company out there like Meta (which owns Facebook, Instagram, and WhatsApp) could build an incredibly effective HyperPool for something like this, among many other useful applications.
Data as Currency
I think most hyperpools will be confined within individual companies at first. But I also think companies stand to gain symbiotic benefits by exchanging data with counterparts seeking to build out non-competing aspects of their business.
For instance, while Meta probably already has a lot of the data it needs to make dating matches, it lacks a lot of auxiliary data points directly related to dating, because users generally use the Match Group apps to store all that information.
On the other hand, Match Group lacks a lot of the data that keeps users constantly engaged and buying digital trinkets within apps. Meta probably has a lot to offer here.
This is a great example of a potential area where two companies could be mutually exchanging live data from their hyperpools. Why?
Match Group’s business model isn’t getting people to match; it’s app engagement and selling digital trinkets to boost interactions. But if two people actually match on their platform, they don’t need the dating apps anymore! It’s (shortsightedly) treated as lost revenue.
In contrast, Meta’s dating features are not really a core part of their platforms. And if two people match through their services, they probably aren’t going to also stop using Facebook / Instagram / WhatsApp.
Over time, this obviously may not be great for Match Group; people genuinely interested in relationships would move to Meta’s platforms as they hear more success stories, and Match Group would increasingly find itself competing with companies like OnlyFans in zero-sum games.
Other entities may choose different strategies; they may lock their data down entirely and keep paying for as much live data as possible to maintain their own hyperpools. This gets increasingly pricey if everyone tries to lock their data down in the same way.
This is already happening today: some websites are outright blocking AI models like ChatGPT from being able to scrape them, because they need user engagement for ad revenue.
Another great example is Twitter; in February it went from having a free API for gathering and analyzing Twitter data (and automating posts), to having a minimum API price plan of $42,000 per month for access to 0.3% of tweets.
It’s possible this is related to ChatGPT scraping; it may not be. But I believe it’s also possible we’ll see companies pursue similar strategies in the near future, locking down any data they deem worthy of another company’s hyperpool.
The winners in these scenarios will probably be the organizations that can learn to mix, match, and exchange data.
At least, until some umbrella organization comes in and figures out a way to single-handedly build out the giant pool of data across all dimensions: the all-seeing eye I mentioned earlier.
Ripping out the vape pen batteries
So what should we do with all these disposable applications anyway? In a sense, I actually hold a similar view to my old manager: We should find a way to reuse them.
But what’s actually worth reusing? Let’s look at the physical costs:
AI generations take a while (often at least a few minutes), and prompting the AI just right to get exactly what’s needed is tricky, so a winning prompt isn’t worth throwing away. On top of that, getting runtime configurations right so the app can actually run can be an even bigger hassle.
At the same time, it’s expensive to keep applications running and waiting when they aren’t needed. However, storage is extremely cheap, and boot-up times can be quick.
I think a lot of AI-generated applications could be cheaply stored. Their capabilities could be indexed, and anytime that logic is needed again, the apps could be rebooted and executed.
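Here’s a rough sketch of what “indexing capabilities” might mean in practice: store each generated app alongside a plain-English description of what it does, then search those descriptions when a similar task comes in. Everything here is illustrative; a real index would likely use embeddings or full-text search rather than keyword overlap.

```python
from dataclasses import dataclass

@dataclass
class StoredApp:
    app_id: str
    capability: str    # plain-English description of what the app does
    archive_path: str  # where the generated source is stored

index: list[StoredApp] = []

def register(app: StoredApp) -> None:
    index.append(app)

def find_capable_apps(task: str) -> list[StoredApp]:
    # Naive keyword overlap, just to show the shape of the lookup.
    task_words = set(task.lower().split())
    return [a for a in index if task_words & set(a.capability.lower().split())]

register(StoredApp("a1", "rename photo files by date taken", "/archive/a1.py"))
print(find_capable_apps("rename my photos by date"))  # -> finds app a1
```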
But why would this even be beneficial?
There are over 8 billion people in the world. Over 5 billion of them have internet access, and that number is quickly growing. The odds that each person will need completely custom applications for tasks in their lives seems low.
It seems more likely that many people will repeatedly need the same kinds of logical sequences for various tasks they’re asking an AI to automate. But often, these things are so mundane or unprofitable to build a business around that no one has ever built an app for them.
You might have 1 million users who would pay 1 cent for a type of task, or 10,000 users who would pay a buck at most. Either way, that’s roughly $10,000 in total revenue, nowhere near enough to fund a traditional development effort. In the past, the incentives just didn’t exist to build tools to automate these things.
But that can change when every possible app can be quickly generated, stored away, and booted up again at a moment’s notice.
I believe the current moment is ripe for a service that can store, index, and boot these small applications so that other AIs can build on top of them. AI agents can scour the index for just the right tool (or plugin), pluck it out for quick usage, and shut it down when done.
Or, if the service doesn’t exist yet, the AI can write it and store it away as well for later usage.
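Continuing the sketch above, the “boot it up again” half could be as simple as launching the archived source as a short-lived process; the agent captures the output, and the app shuts down on its own. Again, this is an assumption-laden illustration, not a real service:

```python
import subprocess
import sys
from typing import Optional

def boot_and_run(archive_path: str, args: Optional[list[str]] = None) -> str:
    """Boot an archived app as a short-lived process and capture its output."""
    result = subprocess.run(
        [sys.executable, archive_path, *(args or [])],
        capture_output=True, text=True, check=True,
    )
    return result.stdout  # process exits when done: booted, used, shut down

# e.g. boot_and_run("/archive/a1.py", ["--folder", "photos/"])
```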
The New High Level
But why is such a service even helpful? Because it supports ever-higher levels of abstraction in programming. Let’s look at some history of programming:
Early on, we had assembly: hardware-level instructions for moving values around in the physical registers of a processor. Then came C, higher-level code that compiled down to assembly and replaced much of assembly writing.
Then Objective-C, C++, and Java came along and replaced much of C. Today, Python, JavaScript, Go, Swift, and Rust are commonly replacing those predecessors.
The benefit of each evolution is that it gives its users more powerful, higher-level tooling, letting them save time over those still on the previous iteration.
However, it also isn’t magic that these evolutions happened. Many programmers were dragged kicking and screaming into the future. Many of them liked their assembly, and COBOL, and C. But they simply couldn’t compete economically with the programmers using higher-level tools.
For instance, someone writing Python today can accomplish in hours what would take an assembly coder years. There are massive Python libraries calling hundreds or thousands of C and C++ modules, which in turn compile down to millions of lines of assembly code.
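To see that leverage concretely: the single `@` below dispatches into heavily optimized, compiled linear-algebra routines. Hand-writing the equivalent in assembly would be a monumental project.

```python
import numpy as np

a = np.random.rand(1000, 1000)
b = np.random.rand(1000, 1000)
c = a @ b  # one line of Python, executed by compiled BLAS code underneath
print(c.shape)  # (1000, 1000)
```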
Soon, all the high-level code we know today will effectively be like assembly; those using AI tools will be leaps and bounds ahead in productivity, and many people will need to move to those tools simply to compete. We’ll reach a point where writing tasks and requirements in plain, concise English is the new high-level code.
Ultimately, the best way to get good at this is to become clear on what you actually want or need to do, and become an effective writer and communicator to make sure these AI tools understand exactly what you’re aiming for.
And the best way to do that is to start practicing now; write out general tasks you’d want automated, write out specific application requirements to get tasks done, and visualize everything you’d need a robot to do for you to get some complex task done.
Paradigm shifts are often perfect opportunities to take advantage of slow incumbents who don’t realize how much things are changing. Getting skilled with these tools now will provide massive returns, especially to outsiders coming into the field with fresh eyes and a high-level view of their own.