Artists and the Holy Gatekeepers of Tomorrow
“I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops.”
―Stephen Jay Gould
(in reference to Einstein’s brain being donated and studied for science)
A funny thing happens as you age.
When you’re young, your imagination is limitless. You think of wild ideas and take it upon yourself to enact them in some way: A story, a movie, an invention, a painting.
As a kid, you might come up with entire plots and scripts for movie ideas. You imagine freely, because you don’t think ahead to the limitations. You just create. Want to write a plot where Tom Cruise is playing the Easter Bunny as the main character in a Jurassic Park movie? Sure, why not.
But as you become more “mature”, you create your own limitations. You start thinking about how many resources it takes to bring an idea to life. You might need actors, instruments, cameras, engineers, and time – the logistics are extensive. Even making something as simple as a painting can feel constrained by time and skill. And so you stop thinking an idea through the moment you hit the first roadblock to realizing it.
As far as creativity goes, only a small subset of people go on to become artists. This has almost always been the case; historically, only a dedicated few could live decent lives as artists, and those few were usually beneficiaries of patronage from noblemen and aristocrats.
Some people got lucky and found ways to apply art to their craftsmanship. People could create beautiful woodwork, electronics designs, cars, boats; the creative process can be found in many places. Even Leonardo da Vinci applied his artistic talent to engineering unprecedented weapons of war.
As for everyone else? Well, there’s a reason the starving artist trope exists. Society at large was not willing to support many artists, so those who chose the artist’s path often had a difficult life. This didn’t mean their art wasn’t worth making, but their lives were harder than they could have been.
We’re blessed to live in a time when people can easily broadcast their art to the world. It has become immensely easier to create all kinds of art: music, animations, photography, movies, 3D designs, games – the list is expansive.
Digital tools that make creation easier and more accessible for everyone at relatively minimal cost have exploded. At the same time, billions of people have gotten access to the internet. Everyone has a chance to distribute their art, but they face the same challenge many artists of olde faced: it’s hard to receive patronage.
The good news is that great artists have a better chance of being discovered. Content truly is king on the internet: something exciting, new, and of high quality has immense capacity to go viral. The other side of that is that everything on the internet faces magnified power laws, where the top 1% of content gets the overwhelming majority of the proceeds.
However, with the scale of so many billions of users, many smaller artists still have a chance to make an okay living, without the direct aid of nobility.
Dark clouds have gathered over the artist landscape though. AI art generation has greatly upset artists, and many of them have been very vocal in their opposition to it.
They often raise incorrect arguments about how the AI simply “collages” their art and accuse the AI of thievery. Other times, they wax poetic about how true art can only be created by humans.
There’s a lot that’s outright wrong with all of this. But I think the actual crux of this boils down to economic fear. AI is horrifying to a lot of people for a lot of reasons. Entire industries are being wiped out in a flash.
Podcast transcription? Wiped out. Translation? Wiped out. College essay writing? Decimated. Tutoring? ChatGPT is an incredible personal tutor.
It’s easy to understand why many artists are bitter. Many thought they had found a promising path doing what they enjoy. After all, AI could never be creative, and it would never come for their jobs, right?
Suddenly, an AI model comes along and it’s able to mimic any artist’s style and quickly make almost any random thing that anyone asks it to create.
At first, its creations were hideous and easy to mock. Extra fingers, heads, etc. But within months, the styling was near perfect and the major issues were hammered out.
And despite the protests and criticisms of AI by many artists, people began using it anyway. Pretty shortly after this began, thousands of lesser-known artists and designers began outright losing the small gigs they relied on to support themselves. The rug was pulled out from under them.
Despite all of this, I still think that the artists have AI all wrong. They drum up interpretations of art that are convenient to them as people with an occupation as an artist. But the stigma they try to create around using AI art is also just another form of gatekeeping.
The truth is that we are all artists. People have been painting on cave walls for tens of thousands of years. Everyone painted and made art as kids. Only a few are able to really stick with it; but most of us still have those visions and art pieces we never got to start or finish locked away inside of us.
AI changes a lot of that. If you have a totally original concept, where you can describe it in perfect detail and generate it with AI, who is to say it’s not art?
When a film director tells the actors in a shot the exact image and vibe they’re going for in a scene, is the director not an artist?
How many people have had spontaneous ideas or visions that they wished they could just get in physical form and show to others?
Visions
In 2017, I attended a peyote ceremony where I sat on my knees in a teepee with an enormous fire burning in the middle of it for 12 hours. At one point during the ceremony, I saw a vision of a dark room, with a tree sitting on an office chair, bending toward a computer monitor as its only source of light.
I realized that the tree represented me, constantly staring and slouching into my computer monitor. With the computer screen as its only source of light, the tree naturally bent toward it – my spine was the slouched trunk of the tree. I sketched it out briefly, but never took it any further.
Being able to generate this visualization with AI was inspiring. Sure, it doesn’t look perfect, but to get a close-enough depiction of what I saw in my mind during an impactful moment in my life and share it feels like magic. It ignited my mind to come up with concepts and think “that idea could be generated in a few seconds with AI”.
However, this is something that many artists want to prevent the average person from being able to do. In their eyes, it’s lost revenue.
Here’s an unspoken reality though: billions of people envision and imagine things that they would like to be able to easily show to others. People have dreams with crazy concepts and ideas they’d like to revisit. Of those billions, few have the skills and time to accurately recreate their visions. And similarly few have the financial resources to hire an artist to try over and over to get their concept down correctly.
Something that isn’t yet appreciated is that once everyone can easily generate exactly what they’re imagining, it’s going to unlock boundlessly higher levels of creativity. What the Gutenberg press was for literature, AI art generation will be for art.
To be able to visualize and put your vision into words and experiment with getting roughly what you are imagining is an entire form of literacy in itself. It’s a muscle that people usually stop using past a certain point. Being able to so easily put that muscle to use again will result in so much more creativity, art, and entirely new kinds of art.
Imagine the impact if 8 billion people couldn’t read or write, and suddenly they could. Or take a very real example: imagine the effects on art and culture if no one had internet access, and then, within a few decades, 5 billion people did. I think we’ll see a similar scale of effect from everyone getting back into the habit of realizing their visions.
The demand for art in some ways may even become greater than ever, especially as undiscovered visionaries bring novel concepts to life. I would guess that there will be great demand for novel art with moving, new, and inspiring concepts, and people will take on an increased interest in creating art themselves.
At the same time, a new mode of artistic literacy doesn’t mean that everything becomes art. The proliferation of reading and writing didn’t mean everyone was writing novels. It just meant that reading and writing became an everyday part of life, to the point that we are all reading and writing little text messages to each other as a primary means of communication. Few would consider text messages art.
I suspect something similar will happen with generative art, where these imaginings and concepts will simply become a regular part of communication. Imagine a friend in your group chat proposes taking a road trip to the Grand Canyon, another friend creates and sends an AI picture of everyone in the chat at the Grand Canyon, and then everyone feels inspired to actually go do the trip.
It won’t exactly be considered art; it’ll just be an afterthought of daily communication.
The High Priests and Gatekeepers
I truly believe those who gatekeep AI are holding humanity back. In some ways, they are similar to the Vatican holding back science and prosecuting Galileo for suggesting the Earth revolves around the Sun.
It’s worth addressing and identifying the three kinds of AI gatekeepers:
Job Gatekeepers: those that have no control over AI but are economically affected by it
AI Gatekeepers: those who control the AI and would be economically affected by its release
Priests: those whose entire career is centered around propagating AI doomsday scenarios
Job Gatekeepers
Artists aren’t the only ones that have a vested interest in preventing access to AI; they’re just unlucky in the sense that there’s little they can do to prevent it.
However, there are many powerful entities that do have access to AI and also have an interest in preventing public access.
Google was like this for a while; they pioneered much of the generative AI technology we see today. But in the name of “safety”, they always opted against making these tools publicly available.
The truth was that these tools would have cannibalized their search business, so it was better for their bottom line to never release them.
ChatGPT’s release gave Google a swift kick in the rear to find a way to compete and actually start shipping their AI products for the first time. Ironically, ChatGPT’s architecture was initially based on a Google research paper. The dynamic is similar to when Xerox PARC’s research lab handed the concept of a graphical user interface over to Apple in 1979.
OpenAI is guilty of this type of gatekeeping as well though. They held back image generation tools for a long time due to safety concerns, and did a similar schtick with concerns about their GPT tools.
Now that OpenAI has hit a bit of a brick wall in making their models more capable than GPT-4, their CEO is going to Congress begging for regulations to make AI “safer”. This translates to: “please prevent other companies from developing a more advanced model that outcompetes us”.
Unfortunately, the overwhelming majority of society actually has a vested interest in preventing access to AI. People don’t want to be automated out of their job, and people also don’t enjoy being forced to train up new skills extremely quickly within the economic pressure cooker of unemployment. It’s also hard to see what jobs and opportunities will be available once current roles are automated, so people will naturally be fearful of any changes – especially fast changes.
If you look at the sheer amount of programmers who refuse to move on from outdated legacy programming frameworks and languages, it’s easy to see the challenges we face in getting people to move on to new technologies generally.
Part of this resistance to change is how our work culture has evolved. In the past, people often spent decades working the same job or doing the same trade. Someone who worked at a company could reasonably expect to be there for a long time and possibly have a pension. Losing a job was a catastrophe though.
I saw this a lot in the coal country of West Virginia. A mining operation would start and an entire town would form around it. It would be prospering, and people made a decent living. Then the mining operation would shut down, everyone would get laid off, and the towns would fall into despair. There would only be a few good job options around until the mines opened back up.
The working world of today and tomorrow is drastically different. Every job is changing so quickly that the end or automation of any single job no longer needs to be a tragedy.
Many people have already shifted from staying in jobs for decades to staying only a few years at a time. And while jobs within a field can be similar, each job is different, which means people are getting skilled at learning how to do a new job quickly. This is great news, because so many new tools and automations are coming that a typical job may look completely different every few months.
What this means is that the best skill to focus on is a meta-skill of being able to pick up new tools, and focus on getting to good end results quickly while automating every tedious task along the way.
A benefit of this shifting dynamic is that since everyone will be inexperienced with every new AI tool that comes out, everyone will constantly be on an even playing field with each other as these tools emerge and job requirements shift.
However, there are a lot of industries that have traditionally moved more slowly and have an interest in keeping things slow. There are also fearmongers who tell you of every AI apocalypse imaginable, from complete joblessness and despair, to AI turning everyone into paperclips.
AI Gatekeepers
Recently, researchers devised an AI chemistry assistant named ChemCrow that augments GPT-4 with chemistry tools, allowing it to outperform GPT-4 itself across all metrics on chemistry-related tasks such as proposing novel molecules with desired properties and planning synthesis procedures. It can even output a synthesis procedure to an actual chemical synthesis machine.
In one example from the paper, ChemCrow was given the task of proposing and synthesizing an insect repellent, and it actually produced one.
Amazing stuff, right? But this was met with a lot of fear. People suggested the AI could secretly produce lethal chemicals and kill its operators, or bad actors might use it for other nefarious purposes.
I don’t think that a malicious AI is the real risk; poorly trained lab technicians seem far more likely to cause harm. Despite this, the paper’s authors chose not to open source everything from their paper, out of fear of harmful consequences. However, I think this restriction may also be a way to avoid dropping an automation bomb into the labor market of chemistry, the same way AI art did.
Overall, I think this does a disservice to everyone. It operates from some of the worst assumptions about humanity as a whole, and puts a ceiling on positive outcomes for the democratization of chemical engineering and experimentation.
Ideally, the most complicated sciences of today should look like lego pieces to the builders of tomorrow. That is how much of machine learning has advanced: unimaginable miracles 20 years ago are just utilities in a programmer’s toolkit today. I think we should find ways to make similar levels of abstraction possible across all fields.
There’s really no limit to the number of ethical conundrums one could conceive of with open AI tools though – typically dubbed “dual-use”. For instance, the early GPT tools were restricted from the public due to the fear they may be used to generate fake news.
That kind of concern can apply to any technological frontier that hands even the slightest empowerment to the end user, though. There are constant dual-use concerns raised about 3D printers and their ability to print working rifles and pistols. Yet the benefits of putting 3D printers widely in the hands of small companies, schools, libraries, and everyday people’s homes greatly outweigh the risks of bad actors abusing their printers to build a plastic arsenal.
This gatekeeping will likely not stop here though. If it was feasible to build such a capable chemical engineering tool in just a few months, it’s likely similar tools can be built for other fields, like mechanical engineering, and similar ethical and labor market concerns will hold back their release. The situation won’t look much different from when Google and OpenAI held their AI models back.
I truly think these moves of total restriction are mistakes. They have a tendency to cause one actor or another to go rogue and build totally open source versions of these tools, partially because:
The research paper makes it widely known that this type of AI automation is completely possible
Many people don’t enjoy doing robotic grunt work that they don’t actually need to be doing
Getting frustrated by people holding back forbidden fruit from you is an unstoppable motivating force
Everyone saw this type of research get released just a few months after GPT-4, so they know it doesn’t take long to replicate with the right knowledge and some dedication.
Priests
All this covers the first two types of gatekeepers; the final type is the groups who propagate myths that AI itself is going to become conscious and kill us all.
Keep in mind that the people saying this have been saying it for 20 years. They have built entire organizations and nonprofits centered around propagating it, funded off of people’s fears that this scenario is real. Their claims aren’t provable and are often inaccurate, but if they acknowledged that the AI doomsday they’re spreading might not be a reality, they’d be out of a job.
While these gatekeepers themselves have little control over the AI, a lot of smart people take them seriously and at their word, so they have the potential to restrict AI further by influencing companies and legislation.
The doomsday position is also generally unreasonable. There is enormous concerted effort and funding behind solving industry problems at different frontiers, while there will always be very limited funding and utility behind using these tools for nefarious purposes. Never mind the fact that nation-states can always build weapons and tools for nefarious purposes without AI.
Hitting our stride
Overall, these attempts to restrict AI are slowing humanity down and preventing us from reaching greater potential. The reasons to be optimistic far outweigh the reasons for fear.
Despite all these hurdles, it’s worth trying to push through them. The prize is a world where all of our material problems are solved and we can return our focus to treating the soul and mind of humankind, making them whole again.
AI tools represent an opportunity for people to return to a state of greater imagination and play – one where their ideas flow as freely as they did in childhood, and they are able to act on creating them without strain, roadblocks, or difficulty.
Right now, we have no idea how many brilliant people and potential Einsteins are totally tied up doing menial tasks. Many were simply born into difficult situations with fewer opportunities. The ideal state we should strive for is one where every single person’s complete potential is unlocked. We don’t yet know what that world looks like, but I’m certain that only a small percentage of people are anywhere close to this level of self-actualization.
I’m confident, however, that AI tools that eliminate menial tasks, enable unprecedented personal capabilities, provide education, and unlock creativity will help us achieve this ideal state for the first time in our collective history.
It’s just going to take a little bit of struggle for us to get there.