There’s been a lot of upheaval over the past few days about new code-writing automation tools.
These tools claim to act as full-fledged software engineers, implementing new software end-to-end.
This, naturally, has put many students into a state of distress. Hundreds of thousands of recent computer science graduates and students are on the job market for junior developer roles, and few are landing them.
I don’t think this is any reason to fret though, and here’s why:
An entire era of research in the middle of the last century was dedicated to building compilers that could turn written English into executable programs.
It just couldn’t be done. English was too loose: there were too many possible interpretations, and, most importantly, people were unable to be specific enough in their instructions.
Ultimately, programming languages are what we converged on to build things that worked as intended.
Programming languages were modeled as closely on English as possible, with each of a language’s available instructions mapping to a specific intent of the programmer.
There was no room for looseness of interpretation here. The programmer could misinterpret what a business owner wanted, but the compiler always interpreted exactly what the programmer wrote.
Programming has been the closest we could get to computable English, because it forces you to express what you want as the exact logical statements needed to make your intent legible, with no room for misinterpretation at all.
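To make that concrete, here’s a small illustration of my own (not drawn from any historical compiler project): the English request “sort these names” leaves several decisions open, and code has to commit to every one of them.

```python
names = ["delta", "Alpha", "charlie", "Bravo"]

# The English request "sort these names" leaves each of these choices open.
# Code cannot leave them open; every decision must be made explicitly:
sorted_names = sorted(
    names,
    key=str.lower,   # case-insensitive? English didn't say; the code must decide
    reverse=False,   # ascending or descending? again, the code must commit
)

print(sorted_names)  # ['Alpha', 'Bravo', 'charlie', 'delta']
```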
What does this have to do with an AI that can program?
As AI becomes much more intelligent, two things will happen:
1. The AI will become much more capable across all domains, including programming.
2. AI will be constrained in its potential by its user.
This is what happened with computers: most computers are incredibly capable, yet most people barely know how they work or how to fully leverage their capabilities, despite using them every day.
What we’re going to find is that many people are unable to articulate, with exact specificity, what they need the computer to do, and as a result it won’t give them what they want.
OpenAI is attempting to solve this; they claim to be eliminating the need for prompt engineering.
What they mean is that they’re adding an additional step between a user’s request and an AI response: they have their model interpret what the user is probably asking for, and then generate a very detailed prompt that steers the model toward exactly what the user is asking for.
They already do this today with their image and video generators, and we’ll see more of it moving forward for agents, code, and any other tools their AI can access.
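As a rough sketch of what that two-step pattern could look like from the outside, here’s a minimal pipeline using the OpenAI Python SDK. To be clear, this is an assumption about the shape of the technique, not OpenAI’s actual internals; the model name, the prompt wording, and the answer_with_prompt_expansion helper are all illustrative.

```python
# A sketch of the two-step "prompt expansion" pattern described above.
# This is a guess at the shape of the technique, not OpenAI's internals;
# the model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer_with_prompt_expansion(user_request: str) -> str:
    # Step 1: have the model interpret the request and expand it into a
    # detailed, unambiguous prompt.
    expansion = client.chat.completions.create(
        model="gpt-4o",  # assumed model; use whatever you have access to
        messages=[
            {
                "role": "system",
                "content": "Rewrite the user's request as a detailed, "
                           "unambiguous prompt. Make every implicit "
                           "requirement explicit.",
            },
            {"role": "user", "content": user_request},
        ],
    )
    detailed_prompt = expansion.choices[0].message.content

    # Step 2: answer the detailed prompt instead of the raw request.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": detailed_prompt}],
    )
    return response.choices[0].message.content


print(answer_with_prompt_expansion("make me a to-do list app"))
```

The point of the extra hop is that the second call receives a far more explicit specification than anything the user typed; the looseness gets compiled away before the real work starts.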
Here’s why it’s not over for programmers.
If you asked someone to write, in English, a detailed specification for how they wanted a new piece of software to work, it would probably contain inherent logical inconsistencies.
If they generated that software with AI, it likely would not work as intended.
Let’s fast-forward to the future, where the AI can go back and forth with you on the specifics of your implementation, generate a detailed prompt, and give you something pretty close to what you want.
The problem with this approach is that it’s basically like flying a passenger jet on autopilot.
For a programmer able to tease out exactly what they want in an accurate English specification, it’s more like flying a fighter jet: they’ll have more agility, and they’ll be able to tell more quickly whether they’re getting exactly what they want.
Will they be writing code anymore? Not necessarily, but they’ll still very much be programming: writing specifications in a human language.
This doesn’t just apply to code, though. Someone who can think computationally and programmatically will have an edge across many fields when enabled by AI.
For instance, in filmmaking, a programmer would be inclined to describe a scene with high specificity: every actor in it, all motion within it, what is said, and each character’s expressions. That degree of programmatic specificity will create compelling and captivating pieces.
Fundamentally, the skill of programming is about attention to detail and piecing logic together. That skill will always be needed, and it’s universally applicable. A tool like AI simply lets programmatic critical thinkers apply their skills to non-programming domains as easily as they have within software.
So, if I wasn’t clear: learn some programming. It’s going to be useful for expressing your intent and getting exactly what you want out of AI systems.
Thinking computationally and programmatically will become dramatically more useful in the coming years, even if you never write a single line of computer code for the rest of your life.