Nobody Knows How To Live In The Future


Automation was the promise. Ever since the industrial revolution, the goal, it seemed, was to have machines do all of humanity's menial labor and free us up to do, well, other stuff. Automation meant we got everything we needed faster and cheaper, and that we'd never suffer scarcity.

With the advent of the information age, as much work became sit-down, "thinking" work, we became accustomed to the idea that most high-paying jobs involved some sort of information processing and deliberation that required specialized skills or knowledge. And then, just a couple of years ago, AI large language models (LLMs) started to advance. All of a sudden, computers could write their own programs in just about any language. You didn't need specialized programming knowledge. You didn't even have to understand programming concepts. You could "vibe code" anything: just tell the LLM what you wanted your program to do, and it would whip something up. And if you didn't know how to use the program, it would tell you, and even run it for you.

And this doesn't just apply to programming per se. If you're having trouble installing a particular program, troubleshooting a problem on your phone or computer, or trying to configure your new router, an LLM can help with that as well. And it never gives up. If it messes up, you can go back and ask it to try again. It will endlessly offer new approaches and things to try, without ever complaining. And it will do so faster than a human.

LLMs are the bicycles of the information age. They are the biggest boost to information processing efficiency since the invention of computers. They're the ultimate abstraction of technology. You can have zero understanding of technology, and with the right tool stack (e.g. LLMs, speech-to-text, vision models, LLM agent-based coding and execution), you can just say what you want to create and make it a reality.

Naturally, this makes people very upset. People who spent a long time learning a skill are finding that employers are essentially replacing them (to varying extents) with LLMs. A few years ago, having a particular IT or dev skill and experience meant an almost guaranteed job. If an employer needed a programmer (software developer) to write or maintain an application, they would hire one. If they needed someone to implement or maintain IT infrastructure, they'd hire a human.

Now, things are different.

When someone has a need, their first thought is going to be whether they can meet that need with their existing tools, which now include AI. Do they need to hire a new developer to write a new set of features, or can they use who they already have? Or could they outsource to a third-party service on a contract basis?

Software development is a commodity. Infrastructure configuration is a commodity. You still need humans to do much of the physical labor, but a lot of the mental work can be done by an LLM. Its output still has to be checked and managed by a human, because LLMs will do stupid, wrong things. But they will also do a lot of things right, and extremely fast. "Fail faster" has been a mantra in the IT world for many years. It's a pithy way of saying to use trial and error to find what works. Well, LLMs are very good at failing faster.

Rather than complaining about AI, we should be thankful for it. It's going to free us up to focus on more important things.