What AI Is Still Bad At (And Why That Matters)
AI is improving at an extraordinary pace.
It writes code, explains concepts, drafts documentation, suggests refactors, and sometimes even anticipates what we’re trying to build. The progress is real. Pretending otherwise would be naïve.
But progress does not mean completeness.
AI is powerful — yet it still has a shape. And understanding that shape is what separates fear from clarity.

TL;DR
- AI is strong at pattern recognition and output generation
- It is weak at context, ambiguity, and ownership
- It does not think long-term
- It does not understand organizational dynamics
- It cannot carry responsibility
- Knowing these limits reduces anxiety and increases leverage
---
AI Has a Shape
It’s tempting to divide the world into extremes.
Either AI is magic. Or AI is useless.
Neither is true.
AI excels in environments where:
- The problem is clearly stated
- The context is bounded
- The objective is explicit
- The evaluation criteria are defined
In other words: structured tasks.
Software engineering, unfortunately, rarely looks like that.
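For contrast, here is what a genuinely structured task looks like. This is a minimal, hypothetical sketch (the function and its spec are invented for illustration): the input, the output, and the success criterion are all stated up front, which is exactly the environment where AI-generated code shines.

```python
def dedupe_preserving_order(items: list[str]) -> list[str]:
    """Return items with duplicates removed, keeping first occurrences.

    Spec: clearly stated problem, bounded context, explicit objective.
    """
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

# The evaluation criterion is just as explicit as the spec:
assert dedupe_preserving_order(["a", "b", "a", "c", "b"]) == ["a", "b", "c"]
```

Notice that everything an AI needs is inside the four lines of the docstring and the assertion. Real tickets are rarely this self-contained.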
---
1. Context (The Invisible Variable)
AI works with the context it is given.
Humans work with the context they sense.
In real projects, context includes:
- Business constraints
- Historical decisions
- Political sensitivities
- Legacy compromises
- Unwritten expectations
Most of this is not in the prompt.
Most of this is not in the documentation.
And yet it shapes every important decision.
AI does not “feel” context drift. It does not notice tension in a roadmap meeting. It does not understand why a technically clean solution might be strategically wrong.
That matters.
---
2. Long-Term Thinking
AI optimizes for immediate correctness.
Engineering often optimizes for long-term survivability.
There is a difference.
Long-term thinking includes:
- Maintainability over cleverness
- Simplicity over abstraction
- Stability over novelty
- Predictable failure over hidden fragility
AI can generate a solution that works.
It cannot evaluate how that solution will age across two years of feature pressure and team turnover.
That kind of thinking is accumulated through experience.
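To make "maintainability over cleverness" concrete, here is a deliberately small, hypothetical example (the data shape and function names are invented for illustration). Both versions are correct today; only one is easy to change under two years of feature pressure.

```python
# "Cleverness": correct, compact, and hostile to the next edit.
def summarize(orders):
    return {k: sum(o["total"] for o in orders if o["region"] == k)
            for k in {o["region"] for o in orders}}

# "Maintainability": same behavior, but each step has a name,
# and adding a filter or a second metric is a one-line change.
def summarize_by_region(orders):
    totals = {}
    for order in orders:
        region = order["region"]
        totals[region] = totals.get(region, 0) + order["total"]
    return totals

orders = [
    {"region": "eu", "total": 10},
    {"region": "us", "total": 5},
    {"region": "eu", "total": 3},
]
assert summarize(orders) == summarize_by_region(orders) == {"eu": 13, "us": 5}
```

An AI asked for either version will produce it happily. Choosing which one survives the next two years of the roadmap is the part that requires judgment.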
---
3. Ambiguous Requirements
Many engineering problems begin in ambiguity.
- “We need something scalable.”
- “This should be more flexible.”
- “Make it production-ready.”
AI requires clarity.
Strong engineers create clarity.
A significant portion of the job is turning vague desires into precise questions.
AI can answer well-formed questions.
It cannot reliably detect that the question itself is flawed.
---
4. Organizational Dynamics
Software does not live in isolation.
It lives inside organizations.
Decisions are shaped by:
- Team skill levels
- Delivery pressure
- Cross-team dependencies
- Leadership priorities
- Cultural norms
An architecturally perfect solution may fail in a politically misaligned environment.
AI does not model trust. It does not model incentives. It does not model fatigue.
But engineers must.
---
5. Responsibility
This is the most important limitation.
AI generates output.
Humans own consequences.
When a system fails:
- Someone investigates
- Someone explains
- Someone absorbs the impact
- Someone decides what to change
That responsibility cannot be delegated to a model.
It is not a technical gap.
It is structural.
---
Why This Should Calm You
If your identity is tied to typing code, AI will feel threatening.
If your identity is tied to judgment, ownership, and system thinking, AI becomes a tool.
Understanding what AI is bad at clarifies what you should double down on.
- Context awareness
- Long-term thinking
- Ambiguity navigation
- Organizational literacy
- Responsibility
These are not diminishing skills.
They are compounding ones.
---
The Important Reframe
AI is not replacing engineering.
It is compressing the cost of structured tasks.
The less structured the problem, the more human judgment matters.
And real-world engineering is full of unstructured problems.
---
What’s Next
In the next post, I want to explore a related tension:
how hiring, team structures, and expectations are shifting as AI becomes part of everyday development work.
---
Want to Discuss This?
I don’t run comments on this blog.
If this resonates — or if you see it differently — feel free to reach out to me on LinkedIn. I genuinely enjoy thoughtful discussions.
