The jagged frontier describes the unpredictable boundary of AI capability: AI excels at some complex tasks while failing at seemingly simple ones, so teams need to map and understand its strengths and weaknesses.
AI nails your hardest task - then faceplants on the easy one.
What It Is
The jagged frontier is the irregular, unpredictable boundary of what AI can and cannot do. Not a clean line between "simple" and "complex" - the real boundary is jagged. AI excels at some seemingly difficult tasks while failing spectacularly at apparently simple ones.
The term was coined by Dell'Acqua, Mollick et al. in an experiment with 758 BCG consultants (Harvard Business School, 2023). The results were clear:
- Tasks inside the frontier: AI users finished 25% faster and produced roughly 40% higher-quality work
- Tasks outside the frontier: AI users were 19 percentage points less likely to produce correct solutions than unassisted colleagues
The worst outcome? Consultants who didn't know where the frontier was. They trusted AI blindly and got burned on the wrong tasks.
How To Spot It
- AI delivers a complex analysis flawlessly, then fails at a simple data summary in the same workflow
- Someone says "AI can't do X" while their colleague uses AI for something far more complex
- Results vary wildly depending on the task, not the difficulty level
What To Do (FL1 - Team Level)
- Map your frontier: Systematically test which tasks in your workflow AI handles well and which it doesn't
- Update regularly: The frontier moves with every model update. What failed 3 months ago may work now
- Share the map: Make your team's frontier knowledge explicit, not tribal
- Design handoffs: Use Centaur Work patterns - human handles outside-frontier tasks, AI handles inside-frontier tasks (see the sketch after this list)
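To make the last three bullets concrete, here is a minimal sketch of what an explicit, shared frontier map and a Centaur-style handoff could look like in code. Everything in it is a hypothetical placeholder: the task names, the FrontierEntry structure, and the run_with_ai / run_manually hooks stand in for whatever your team's workflow actually uses; nothing here comes from the original study or any specific tool.

```python
"""Sketch: an explicit frontier map plus a Centaur-style task router.
All names here are hypothetical placeholders, not a real tool or API."""
from dataclasses import dataclass
from datetime import date
from typing import Callable


@dataclass
class FrontierEntry:
    task: str              # workflow task that was tested
    inside_frontier: bool  # did AI output meet your quality bar on this task?
    last_tested: date      # the frontier moves; retest after model updates


# Shared, explicit map instead of tribal knowledge (example entries only)
FRONTIER_MAP = {
    "draft-client-summary": FrontierEntry("draft-client-summary", True, date(2024, 5, 1)),
    "reconcile-financials": FrontierEntry("reconcile-financials", False, date(2024, 5, 1)),
}


def route(task: str,
          run_with_ai: Callable[[str], str],
          run_manually: Callable[[str], str]) -> str:
    """Centaur handoff: AI handles inside-frontier tasks; a human handles
    outside-frontier or unmapped tasks."""
    entry = FRONTIER_MAP.get(task)
    if entry is not None and entry.inside_frontier:
        return run_with_ai(task)
    return run_manually(task)


# Example usage with stub handlers
if __name__ == "__main__":
    print(route("draft-client-summary",
                run_with_ai=lambda t: f"AI drafted: {t}",
                run_manually=lambda t: f"Human handled: {t}"))
```

The design point is that the map is shared and dated: anyone on the team can see which tasks have been tested, when, and which side of the frontier they landed on, and unmapped tasks default to a human until someone tests them.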
The Trap
Assuming AI capability scales linearly with task difficulty. It doesn't. The frontier is jagged, and overconfidence on the wrong side costs more than not using AI at all.