Six things humans keep owning when AI helps write the code.
Almost everything an LLM can do cheaply will eventually feel like a commodity: typing boilerplate, wiring endpoints, filling in tests from patterns. What remains rare (and therefore valuable) are the judgment calls that shape what gets built at all.
Here's a six-part lens companies can use to look at their engineers, and at themselves, differently. This is the work humans keep, and where AI becomes a tool instead of a threat.
Turn "we need AI" into a specific pain you can actually fix.
Talk to real people. Ask where time, money, or sanity is leaking. Write it down in plain language. AI can summarize interviews, but only you can feel what actually hurts.
Decide what "good" looks like this quarter, not in a sci-fi future.
Take the messy wish list and slice it into must-have, nice-to-have, and not-now. Trade scope for focus. Let AI help explore variations, but you choose the bet.
Choose the shape of the system: contexts, boundaries, data flow.
Map who owns what. In Phoenix, that's your contexts. Elsewhere, it's services, modules, or bounded contexts. AI can sketch diagrams—but you decide what's allowed to talk to what.
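The "what's allowed to talk to what" decision can be written down as plain data and even enforced with a trivial check. A minimal sketch in Python (the context names and allow-list are hypothetical, not from the text):

```python
# Hypothetical context map: which boundary may call which.
# The human decision is this data; enforcing it is the easy part.
ALLOWED_CALLS = {
    "orders":    {"inventory", "billing"},
    "billing":   {"payments"},
    "inventory": set(),
    "payments":  set(),
}

def call_allowed(caller: str, callee: str) -> bool:
    """Return True if the architecture permits caller -> callee."""
    return callee in ALLOWED_CALLS.get(caller, set())
```

A check like this can run in CI so that an accidental `inventory -> orders` dependency fails the build instead of quietly eroding the boundary.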
Inside each boundary, choose the public API and responsibilities.
Group behaviors that belong together. Keep data and operations close. Let AI propose function sets, types, and schemas, then trim until a junior can understand it in an afternoon.
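"Keep data and operations close" can be shown in a few lines: one small public surface, internals hidden behind it. A sketch with a hypothetical cart boundary (names are illustrative only):

```python
from dataclasses import dataclass, field

@dataclass
class Cart:
    # Internal state stays private; callers only see add() and total_items().
    _items: dict = field(default_factory=dict)  # sku -> quantity

    def add(self, sku: str, qty: int = 1) -> None:
        """Add qty units of sku; rejects non-positive quantities."""
        if qty <= 0:
            raise ValueError("qty must be positive")
        self._items[sku] = self._items.get(sku, 0) + qty

    def total_items(self) -> int:
        """Total units across all SKUs."""
        return sum(self._items.values())
```

The trimming step in the text is exactly this: an AI might propose a dozen methods; you cut until the surface fits in a junior's head.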
Match patterns to problems instead of cargo-culting whatever's trendy.
Event-sourcing or plain CRUD? Worker pool or one process per thing? AI can list pros/cons, but only you know the team's skills, appetite for complexity, and how long this system must live.
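The worker-pool side of that trade-off fits in a few lines of standard-library Python; a sketch with a stand-in task (the URLs and fetch function are placeholders, not a real workload):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url: str) -> str:
    # Stand-in for real I/O work; in practice this would hit the network.
    return f"fetched {url}"

urls = [f"https://example.com/{i}" for i in range(10)]

# Worker pool: a fixed number of workers drain the task queue,
# instead of spawning one thread or process per item.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(fetch, urls))
```

The code is trivial either way; the human call is whether a bounded pool (backpressure, simpler ops) or one-process-per-thing (isolation, per-item crash containment) matches the team and the system's lifespan.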
Decide what must never break and how you'll notice when it does.
Think in failure stories: what happens if this silently stops working? AI can spit out test cases from your spec, but humans pick the invariants, alerts, and dashboards that matter.
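Picking an invariant often means writing one small guard and one small check. A sketch, assuming a hypothetical account whose balance must never go negative:

```python
class Account:
    """Toy account; the invariant is balance >= 0, always."""

    def __init__(self, balance: int = 0):
        self.balance = balance

    def withdraw(self, amount: int) -> None:
        # Guard the invariant at the write path.
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def check_invariant(acct: Account) -> bool:
    """The rule a human chose to alert on, not just test."""
    return acct.balance >= 0
```

AI can generate hundreds of test cases around this; the human contribution is deciding that this is the rule whose violation should wake someone up.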
Rule of thumb: if this changes the shape of the system or who apologizes when it fails, a human must stay in charge—even if AI typed most of the code.
Most devs say: "AI saves me time, but I still feel overloaded." This demo shows a better question: where do we spend the saved time?
Move the sliders and switches. Watch how the time shifts from low-leverage typing to higher-leverage thinking.
With a bit of AI help, code and test drafting speed up. You can reclaim time for architecture decisions and better test planning.
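The arithmetic behind the demo can be sketched in a few lines (all numbers are hypothetical, chosen only to show the shape of the calculation):

```python
# Toy model: if AI speeds up drafting, the freed hours can be
# deliberately reassigned to design and test planning, not absorbed.
week_hours = 40
typing_share = 0.5    # half the week spent drafting code and tests
ai_speedup = 0.4      # drafting takes 40% less time with AI help

typing_before = week_hours * typing_share        # hours drafting today
typing_after = typing_before * (1 - ai_speedup)  # hours drafting with AI
reclaimed = typing_before - typing_after         # hours freed for thinking
```

The sliders in the demo are just these three inputs; the point of the exercise is deciding where `reclaimed` goes.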
If you treat engineers as "people who type code," AI will look like a direct threat. If you treat them as "people who shape systems in reality," AI becomes leverage, not replacement.
When you interview, ask candidates to reshape a vague request into a concrete plan. You can always teach a new library; it's harder to teach "what problem are we really solving?"
Treat context maps, specs, and design docs as living artifacts. AI tools should plug into those, not replace them with vibes in a chat window.
A 3–4 hour block of focused work is where understanding and good judgment show up. Let AI shrink the busywork around it, but defend the block itself.