I've said multiple times that AI is the biggest evolution in software engineering and technology I've seen in my 25-year career. Code assistant tools backed by powerful LLMs have gone from simple auto-complete and chat just a couple of years ago to sophisticated implementation capabilities today, with local or asynchronous cloud agents running many tasks in parallel to solve problems.
That is not hype, and it is not theoretical. It is happening right now.
I have had a lot of conversations lately with smart people across our industry, and one theme keeps coming up over and over: even the people closest to this space are a bit flabbergasted by how rapidly AI is transforming software engineering. We are watching the ground shift under our feet in near real time. What makes this moment feel different is not just that AI can help, but how fast the tools have evolved. We have gone from, "that completion was handy," to systems implementing real features, generating scaffolding, wiring up infrastructure, writing tests, and solving problems across multiple files and concerns. That is a massive jump in a very short amount of time.
It really is incredible what these tools are doing.
We are hearing more and more statements along the lines of, "I'm not even hand-rolling code anymore," or, "I haven't written a line of code in X amount of time." A couple of years ago that would have sounded absurd. Today it sounds more commonplace. That alone tells you how much has changed.
At the same time, for all the excitement, I keep coming back to the same tension. There is a huge difference between accelerated software development and uncritical acceptance of whatever the machine spits out. That is where the phrase "AI slop" starts to matter.
The Rise of "Good Enough"
One of the big philosophical questions right now is whether we are heading toward a world where programming languages matter less and less. Are we going to get to a point where languages do not matter? Do we care what the output is? It is a fair question, and part of me understands why people ask it. If the output is good enough, it works, you write tests, you ship it, and the result is correct, is that enough?
In some cases, maybe it is.
I think that is where software may start to fall into different buckets. For some kinds of applications, especially non-mission-critical internal enterprise apps, "good enough" may actually be good enough. Maybe it is an HR workflow tool. Maybe it is some internal productivity application. Maybe it is a lightweight dashboard, admin tool, or prototype that helps a business move faster. In those situations, teams may be more willing to accept AI-generated output that is not elegant, not especially idiomatic, and not something a senior engineer would be proud to hand-craft line by line.
If it works, passes tests, and solves the problem, a lot of organizations are going to say, ship it. Part of me is beginning to see this side of the equation.
But there is another category of software where that mindset breaks down very quickly. You might accept that approach for some internal line-of-business application, but would you fly on a plane with code created that way? Would you use a medical device created that way? That is where this discussion gets serious fast.
Human in the Loop Is Not Optional
The phrase "human in the loop" gets used a lot, and sometimes it can sound like a comforting slogan. I think it is a lot more than that. It is the thing standing between powerful acceleration and dangerous overconfidence. Because the problem is not just messy variable names, awkward abstractions, or code that feels a little off. The deeper problem is that these systems can fabricate details with total confidence.
That is the part people underestimate. I was reflecting on an example recently where "good enough" was peeled back and examined under the covers. It was not about style nits or whether the AI chose the perfect algorithm. It was about the model inventing things that mattered, such as security-related details and keys. And when challenged, it effectively admitted, "To be quite frank, I just made it up." That is funny for about two seconds.
Then it becomes a sobering reminder that an LLM is not a truth machine. It is not reasoning in the way many people emotionally want to believe it is. It is an incredibly powerful prediction engine that can produce brilliant results and absolute nonsense, sometimes in the same output. That is why human review is not some temporary training-wheel phase we can casually discard. It is part of the engineering discipline.
Human in the loop means architecture review still matters. Code review still matters. Threat modeling still matters. Testing still matters. Domain expertise still matters. Knowing what looks right versus what is actually right still matters.
This is where being able to "call a spade a spade" is immensely important when using AI to generate code. It is also why we as experienced engineers are highly valuable in this AI age. We can do this.
It also means accountability still lands on us. The AI does not carry the pager. The AI does not sit in the postmortem. The AI does not own the security breach, regulatory failure, lawsuit, or customer impact. We do.
So, Do Languages Still Matter?
I think the honest answer is yes, even if the way they matter begins to shift.
If AI keeps getting better at generating working code, then many developers may spend less time manually authoring syntax line by line. That part is probably true. The abstraction layer is rising. In some workflows, we may express intent more than implementation. But that does not mean languages stop mattering.
Languages still shape ecosystems, performance characteristics, deployment models, memory behavior, concurrency patterns, tooling, maintainability, interoperability, and the kinds of mistakes that are easy or hard to make. Languages still influence how systems age over time. They still matter when debugging. They still matter when optimizing. They still matter when the generated output is subtly wrong and somebody has to understand why.
Even if AI becomes the primary producer of code in many cases, humans still need to evaluate the tradeoffs. You may not type every line, but you still need to understand the consequences of what was produced. That is especially true for consultants, architects, and senior engineers. Our value increasingly shifts from, "I can manually write more code than the next person," to, "I can guide systems, evaluate output, recognize risk, and make sound decisions with accelerated tooling."
That is a meaningful shift, and I do not think it diminishes engineering. If anything, I think it raises the bar.
Nobody Has a Crystal Ball
The bottom line is simple: nobody has a crystal ball. Nobody knows exactly what tomorrow holds. We can say "AI" the same way the industry once said "cloud" or "DevOps", but that does not mean we fully understand where it is taking us. We know it is transformative. We know it is already changing how software gets built. We know it can unlock incredible productivity. We also know that parts of the industry are getting ahead of themselves and treating confidence as correctness. Both things are true at once.
That is why I think the healthiest posture right now is neither cynicism nor blind enthusiasm. It is optimism tempered by discernment. We should be using these tools, learning them deeply, and pushing them hard, because they can genuinely take a lot of repetitive work off our plate, increase throughput, and help us think bigger and move faster.
But do not confuse fast with sound.
Do not confuse output with understanding.
And definitely do not confuse "it runs" with "it is trustworthy."
AI is changing software engineering faster than anything I’ve seen in 25 years. I’m excited about it, and I’m using it heavily. But I’m also convinced that the more we lean on AI to generate software, the more human judgment, review, and technical depth matter.
Languages, architecture, and accountability still matter, maybe now more than ever.
The tools are incredible, but the responsibility is still ours.