Is this the end for all programming languages?
In the beginning
In the beginning was machine code. Back when men were men and most programmers were women. Once upon a time all code was written in machine code. Some of the feats that were achieved back then are simply mind-boggling. Creating games for computers with less power than my watch. Hell, my toaster probably has more compute today. If you’re of a certain age, you probably learned something like 68k assembler at university; I know I did. Do they even bother to teach this any more?
But then we invented higher level languages like Fortran and COBOL. Structured programming became a thing. Goto was declared evil; procedures and functions were the new shiny. Then we developed an unhealthy obsession with objects, one that lasted decades. Then functional programming became cool, OO languages developed functional idioms, and languages like Scala crossed the divide. Along the way we’ve invented macros and templating and generic types - all abstractions to allow us to think more about problems and less about solutions.
Communication
Any fool can write code that a computer can understand. Good programmers write code that humans can understand.
There’s plenty of Martin Fowler’s writing I don’t like, but this to me sums up how programming evolved: as languages became higher level and we thought more in terms of problems than solutions, the art of programming became more about expressing your intent than simply making the machine do your bidding. Expressing the intent behind code is important even in a reasonably sized code base that you work on as a solo developer; but as development teams begin to grow, this communication becomes even more important. Once you start working with a handful of colleagues, or dozens or even hundreds, then expressing the intent and reasoning becomes infinitely more important.
So software development is really all about communication. Put a team of developers in a room with a whiteboard to solve a problem and before long there’ll be heated discussion and the whiteboard will be covered in scribble. This is how developers communicate: we draw pictures, we explain how we see the world, how the world might look. We’re trying to exchange mental models, to reach a shared understanding of the problem and potential solutions.
Software has a canonical written form. Whatever the language might be, there’s syntax and structure and idioms and patterns. But try abstracting higher: look at software design, zoom out to the solution architecture. Here there isn’t a canonical written form. Sure, we use box-and-line diagrams, or UML if you’re so inclined. There’ll be lots of annotations and arrows and explanations. This is the cave art written on the whiteboard. It almost always needs an explanation - I’ve never seen a software architecture diagram that, for someone outside the project team, didn’t need an explanation of what it all meant. We don’t have a canonical form for software design, just pictures in our heads - so we use language and diagrams to explain what we mean: we communicate.
And then came machines
The new AI tools like GitHub Copilot and Cursor are, quite frankly, amazing. The ability to have the machine generate code still impresses me, no matter how much I use it. There’s an argument that these tools are pretty much at the level of a junior developer today: I can describe a change to the code, a function, a small increment, and the machine can generate suitable code based on my natural language explanation. Sure, it needs careful review as there can be mistakes and misunderstandings. This is much the same stimulus/response you’d get from a junior developer, although much quicker. If I’ve broken my work down into a number of small tickets, we’re not far from the point where each of those is conceivably an instruction for the AI to implement. How long is it until my AI copilot just takes tickets directly from Jira?
But what does the next step look like? The next step I think is collaboration. To progress up the career ladder, junior developers begin taking on bigger pieces of work, less clear pieces of work. A more experienced developer will be able to pick up a ticket where the solution isn’t clear and needs some investigation - report back to a senior dev with your findings, discuss possible options and engineering trade-offs. For human developers, this is the next step in the evolution. Will AI follow the same path? At what point will I be able to ask the AI to investigate a change where we don’t know what the solution looks like? And have it report back with different alternatives and recommend an approach? At what point do we collaborate with the AI, rather than have it simply do what we ask?
CTO life
Compare for a moment with my life as a CTO. I work with different software teams, effectively writing code for me. Sure I could go in and read the source code they produce (and very occasionally do), but why would I bother? The developers I work with are the experts, I don’t need to get involved in the details at that level. But I do talk to them about the code. I understand the key engineering trade-offs we’ve taken, the key concepts of the software, the overall software architecture. When we have a change to make or a new requirement has emerged, I talk to the team about it. I can explain how I imagine it working out, at least at a high level. They’ll understand the details and tell me what is or isn’t possible, given the code base as it stands. And then we agree a change to make and they go off and do it.
You can imagine a possible future where a developer is able to have this kind of conversation with an AI. No longer constrained to work at the level of individual lines of source code, the developer works with an AI that can write code infinitely faster. Sure, the developer could go in and read the code, and probably sometimes does - but when your AI copilot can explain what the different implementation options are and go off and do it, why bother? At this point there is true collaboration: a discussion of trade-offs and different engineering approaches, of different possible future directions and design decisions, which problems to tackle today and which to leave for tomorrow.
The end
Is this the end of programming as a discipline? I don’t think so. That expert-level understanding of how machines work and the complexities involved in translating human requirements into working software will always require someone like a developer. Improvements to tooling just increase the amount of work we can get done in a given amount of time - as has happened throughout history. These tools may be novel, but I don’t think they fundamentally change how humans use tools.
What you can imagine happening, though, is for line-by-line source code to become like machine code today. Sure, I could go in and read machine code, but why on earth would I bother? I have a compiler that does a decent job of translating higher level language into machine code, so outside of some very specific optimisation problems - why would I ever need to get down to that level of detail? Could the same become true once the AI acts as our “compiler”? Why bother reading the code when I can ask the AI to summarise it for me? When I need a change I can ask the AI to make it, and have it prompt me when what I’ve asked for changes something fundamental or isn’t currently possible without large-scale change. Outside of some highly specific use cases, at that point why bother reading and writing individual lines of code at all?
An example
An old colleague of mine is a dyed-in-the-wool Matlab programmer. I tried to persuade him to move to Python nearly 15 years ago, but still he clings on to Matlab. However, recently one of the engineers on his team was doing something cool with PyTorch. He wanted to try it himself, but realised there was no alternative but to do this in Python. Once upon a time, this would mean learning Python to a decent enough level, then learning PyTorch and a bunch of other libraries along the way. Instead, he was able to ask ChatGPT to generate some PyTorch code for him. He understands the machine learning principles behind PyTorch, so knows at a conceptual level what he wants the code to do - so he was able to work with ChatGPT at a conceptual level and have it handle writing the actual code.
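To give a flavour of what that looks like in practice: a conceptual request like “fit a straight line to some data with gradient descent” comes back as a handful of lines of PyTorch. This is a minimal sketch of the sort of thing you get (not my colleague’s actual code - the data and model here are invented for illustration):

```python
import torch

# Toy data: learn y = 2x + 1 from twenty points on [0, 1].
x = torch.linspace(0, 1, 20).unsqueeze(1)
y = 2 * x + 1

# A single linear layer is exactly "fit a straight line".
model = torch.nn.Linear(1, 1)
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

# Standard PyTorch training loop: forward pass, loss, backward pass, step.
for _ in range(500):
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimiser.step()
```

The point isn’t the code itself - it’s that you only need to know the concepts (model, loss, optimiser, training loop) to check the machine has done what you asked; the API details are the machine’s problem.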
This to me is a prime example of where we’re getting to with these tools. No longer is arcane API knowledge or experience of working with particular libraries the mark of a senior developer. A senior developer has the experience and conceptual understanding of how to achieve what they want to achieve; the actual mechanics of how that’s rendered into code can increasingly be left to the machines. Sam Altman has talked about the single person unicorn - it’s hard to disagree that this is now a question of when not if.
This change won’t happen overnight, and it probably won’t happen as quickly as many tech evangelists would have us believe. But I think this is the ultimate direction for software development. Little by little, as developers work with AI copilots more, we’ll become ever more separated from the underlying source code. This is no bad thing - it will enable developers to be more productive and achieve things that were unimaginable only a few years ago. But it fundamentally changes our relationship with source code. This raises plenty of other questions: what happens to principles like software craftsmanship, when the AI is the craftsman? How do juniors progress in an industry when the usual path of becoming an expert in a language or ecosystem becomes irrelevant? Do we even need as many programmers as we have today? These are all great questions that deserve posts of their own.