My take on AI: 20% of my skills are now obsolete, but the other 80% just got way more valuable.
## Why?
1. Engineering is all about trade-offs. If someone could define every trade-off in detail up front, they could probably just write the code themselves.
2. Engineering exists because humans suck at precisely describing what they want and how to build it.
3. My real job? Helping people figure out what they actually want, when they need it, and how to build it. This means digging into context, constraints, and trade-offs.
4. What I'm really shipping is a well-designed system with clear boundaries, not just code.
I've always treated code as disposable anyway. So I'm stoked that LLMs can help me crank out boring code faster and reduce the mental overhead of writing it.
## The Data Behind It
Here's what I think makes LLMs actually decent at programming: we've got tons of high-quality training data that's been verified by compilers. It's like having a massive codebase that's been battle-tested.
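To make that concrete, here's a toy sketch of what "verified by a compiler" looks like as a data filter: keep only the snippets that at least compile. This is purely illustrative and assumes nothing about how any lab actually builds its datasets.

```python
def compiles_cleanly(snippet: str) -> bool:
    """Return True if the snippet is syntactically valid Python."""
    try:
        # compile() checks syntax without executing the code
        compile(snippet, "<snippet>", "exec")
        return True
    except SyntaxError:
        return False


candidates = [
    "def add(a, b):\n    return a + b\n",  # compiles fine
    "def broken(:\n    return\n",          # syntax error
]

# Keep only the snippets the compiler accepts
verified = [s for s in candidates if compiles_cleanly(s)]
print(len(verified), "of", len(candidates), "snippets pass")  # 1 of 2 snippets pass
```

Real pipelines can go further than a syntax check: type checkers, test suites, and actual usage in the wild all act as cheap correctness signals on top of this.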
But here's the catch: I'm skeptical that this success can be replicated in other domains where the training data is sparse and hard to validate.