Don't Automate Broken Processes with AI
A concerning pattern is playing out with AI in the enterprise: teams rush to bolt AI onto existing tools and flawed workflows, and the result is faster, messier failure.
AI is not the process, and it can't always overcome a bad one. AI amplifies what you give it. If the underlying process is broken or undefined, automating it simply scales the problem. That’s why “AI-first” initiatives that ignore process quality often produce faster errors, amplified user frustration, and brittle systems that are costly to repair. Fix the process first.
Make up your mind: Human-in-the-loop or AI-in-the-loop
One of the most common mistakes is fuzzy role definition between people and AI. At kickoff, be explicit about who makes final decisions, which steps are advisory versus automated, and when humans must review or intervene. “Just build something” is fine for a PoC, but a working proof of concept is not a license to remove humans from the loop.
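To make this concrete, here is a minimal sketch of what writing the roles down might look like. The step names and the `WorkflowStep` structure are hypothetical, not a prescribed schema; the point is that each step answers the role questions explicitly.

```python
from dataclasses import dataclass
from enum import Enum


class Mode(Enum):
    ADVISORY = "advisory"    # AI suggests; a human decides
    AUTOMATED = "automated"  # AI acts; a human audits afterwards


@dataclass
class WorkflowStep:
    name: str
    mode: Mode
    final_decision_by: str       # "human" or "ai", never ambiguous
    human_review_required: bool


# Hypothetical workflow: every step states who decides, up front.
STEPS = [
    WorkflowStep("draft_reply", Mode.AUTOMATED, "ai", human_review_required=False),
    WorkflowStep("flag_risky_items", Mode.ADVISORY, "human", human_review_required=True),
    WorkflowStep("send_to_customer", Mode.ADVISORY, "human", human_review_required=True),
]
```

The code itself is trivial; what matters is that "who decides?" is answered per step, in writing, before anything ships.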
PoCs don’t prove operational readiness
Proofs of concept are useful experiments, not production-ready systems. PoCs can hide integration complexity, edge-case behavior, governance gaps, and the ongoing human effort required for quality control. Treat PoC success as a hypothesis to validate before scaling.
The myth of autonomous AI
Real-world AI systems are not fully autonomous. Even highly automated systems require humans to set goals, make strategic choices, ensure resources, and intervene in edge cases. In enterprise settings, AI should draft and suggest; humans should decide and take responsibility.
A company I worked with tried to auto-generate Google Slides from an outline via the Slides API. The automation produced quick drafts but never reliable final slides: creative work needs aesthetic judgment and iteration, so a human still had to clean up and curate the output. When you think about it, this is not surprising. The model doesn't know what it's doing or why; a human has to continually drive AI toward the goal, and just stating the goal usually isn't enough.
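The core of that kind of automation can be tiny, which is part of the temptation. Here is a stripped-down sketch, assuming already-authorized Google API credentials; the outline format and the function name are made up, and the caveat is baked into the docstring:

```python
from googleapiclient.discovery import build


def draft_slides(creds, presentation_id, outline):
    """Turn an outline into DRAFT slides. A human still has to curate
    layout, imagery, and flow before these are anywhere near final."""
    service = build("slides", "v1", credentials=creds)
    requests = []
    for i, (title, body) in enumerate(outline):
        title_id, body_id = f"title_{i}", f"body_{i}"
        requests.append({
            "createSlide": {
                "slideLayoutReference": {"predefinedLayout": "TITLE_AND_BODY"},
                "placeholderIdMappings": [
                    {"layoutPlaceholder": {"type": "TITLE"}, "objectId": title_id},
                    {"layoutPlaceholder": {"type": "BODY"}, "objectId": body_id},
                ],
            }
        })
        requests.append({"insertText": {"objectId": title_id, "text": title}})
        requests.append({"insertText": {"objectId": body_id, "text": body}})
    service.presentations().batchUpdate(
        presentationId=presentation_id, body={"requests": requests}
    ).execute()
```

Twenty lines gets you drafts. It does not get you a deck anyone would present.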
AI is like a smart toddler, and can be just as messy
AI is powerful but messy, like a smart toddler that needs direction and redirection. Left unsupervised in critical tools or flawed workflows, it will make a huge mess. Design systems assuming humans will clean up after AI, and minimize that cleanup through better templates, guardrails, and clear handoffs.
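One way to minimize the cleanup is to refuse to let raw model output touch anything important. Here is a sketch of that guardrail idea; the names (`check_draft`, `queue_for_human`, `apply_to_system`) and the required fields are placeholders for whatever your pipeline actually does:

```python
import json

REQUIRED_FIELDS = {"customer_id", "amount", "currency"}


def check_draft(raw_output: str) -> dict | None:
    """Accept a model draft only if it parses and is structurally sane."""
    try:
        draft = json.loads(raw_output)
    except json.JSONDecodeError:
        return None  # the toddler scribbled outside the lines
    if not isinstance(draft, dict) or not REQUIRED_FIELDS.issubset(draft):
        return None  # missing fields: reject, don't silently patch
    return draft


def queue_for_human(raw: str) -> None:
    print("needs human review:", raw[:80])     # stand-in for a real review queue


def apply_to_system(draft: dict) -> None:
    print("applying validated draft:", draft)  # stand-in for the real write


def handle(raw_output: str) -> None:
    draft = check_draft(raw_output)
    if draft is None:
        queue_for_human(raw_output)  # explicit handoff, not silent cleanup
    else:
        apply_to_system(draft)
```

The handoff is the design choice: rejected output goes to a person by default, rather than being force-fit into the system.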
AI is just a tool, not a mind
AI can dramatically increase velocity and productivity when paired with well-defined processes and clear human oversight. Just as you wouldn't use a screwdriver to cut a steak, don't force AI into broken processes or unclear roles. Treat it as a tool, because that's what it is.
Checklist
If you can't answer these questions decisively, tread carefully before going all-in:
- Who makes final decisions? Every workflow has a final step. Is it AI or human?
- Which steps are advisory versus automated? It's generally safe to have AI make suggestions that a human can choose to implement or ignore.
- When does a human need to review or intervene? Multi-agent workflows can easily compound errors. In addition to programmatic checks, there might need to be a human-in-the-loop somewhere.
- What are escalation and rollback procedures? If a particular model suddenly starts returning nonsense, or stops responding altogether, does the whole pipeline stop? Hospitals have plans to "go to paper" if the electronic medical record system goes down. Your AI pipelines need the same backup plan; see the sketch after this list.
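To illustrate that last item, here is a minimal circuit-breaker sketch of the "go to paper" idea for an AI pipeline. The thresholds, the failure test, and the fallback are all placeholders for your own escalation procedure:

```python
import time


class ModelCircuitBreaker:
    """Trip after repeated model failures and route traffic to a fallback
    (a simpler model, cached answers, or a human queue) until a cooldown passes."""

    def __init__(self, max_failures: int = 3, cooldown_s: float = 60.0):
        self.max_failures = max_failures
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.tripped_at = None

    def call(self, model_fn, fallback_fn, prompt: str) -> str:
        if self.tripped_at is not None:
            if time.monotonic() - self.tripped_at < self.cooldown_s:
                return fallback_fn(prompt)      # the "go to paper" path
            self.tripped_at = None              # cooldown over; try the model again
            self.failures = 0
        try:
            reply = model_fn(prompt)
            if not reply or not reply.strip():  # empty output counts as a failure
                raise ValueError("empty response")
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.tripped_at = time.monotonic()
            return fallback_fn(prompt)
        self.failures = 0
        return reply
```

The real procedure will be messier, but the principle stands: decide the fallback before the model misbehaves, not during the incident.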