Someone asked me five questions last week. Not abstract questions about AI's future — operational questions. The kind that decide what a company builds on Monday morning.
I spent three days researching the answers. What follows isn't a landscape survey or a capability catalog. It's five conclusions, each backed by the strongest data I could find, each ending with what it means for someone who has to make decisions this quarter.
1. Where Is AI Coding, and Where Is It Going?
The honest answer is: further than the skeptics think, not as far as the hype suggests, and the trajectory is steep but has a ceiling nobody talks about.
Right now: 51% of GitHub commits are AI-assisted. Microsoft says AI writes 20-30% of its code — "fantastic" Python, C++ needs work. Andrej Karpathy hasn't typed code since December 2025 — 80% agent, 20% manual. DX's 135,000-developer study found 26.9% of production code is now AI-authored, up from 22% last quarter. 62% of developers use an AI coding assistant daily. This isn't early adoption. This is the new baseline.
But the quality data is sobering. CodeRabbit found AI code has 1.7x more issues than human code. Technical debt grows 30-41% within 90 days of heavy AI adoption. Maintenance costs reach 4x by year two if unmanaged. The safe zone, according to multiple enterprise studies, is 25-40% AI code — above 50%, rework rates jump to 20-30%.
THE TRAJECTORY
Agentic tools complete the shift from "pair programmer" to "junior engineer on parallel workstreams." Multi-agent orchestration becomes standard. Specification-driven development enters mainstream discourse — Martin Fowler is already writing about it. Kiro, Spec Kit, and Tessl are building tooling around it.
The next stage: AI handling 67%+ of production bug fixes and routine features, multi-day autonomous sprints normalized for well-scoped projects, and SWE-bench Pro scores approaching 40-50%. By then the bottleneck has fully shifted from implementation to specification quality.
ai-2027.com projects "Automated Coders" by late 2027. Deloitte projects the developer pool expanding from 30M to 100M with citizen developers. But Gartner also projects a 2,500% increase in defects from citizen developers by 2028 and 40%+ of agentic AI projects cancelled by 2027.
The conclusion: AI coding is real, accelerating, and has a quality ceiling that most companies will hit within the year. The companies that win aren't the ones that adopt fastest — they're the ones that build verification infrastructure before the defect wave arrives. McKinsey found that firms which redesigned workflows before choosing tools were 2x more likely to report significant returns. The tool is not the hard part. The process around the tool is the hard part.
2. What Do Engineers Actually Do Now?
They're becoming judges, not authors.
The data on this is remarkably consistent across sources. Senior engineers spend 3.5x more time reviewing AI suggestions (4.3 minutes) than human code (1.2 minutes). Anthropic's own internal data shows feature implementation via AI jumped from 14.3% to 36.9% of Claude usage, while code design and planning jumped from 1% to 9.9%. Karat surveyed 400 engineering leaders: the skills becoming less important are writing clean code from scratch and language syntax. The skills becoming more important are problem-solving, adaptability, and judgment.
"We're going to start to see the title of 'software engineer' go away. It's just going to be 'builder' or 'product manager.'"
— Boris Cherny, creator of Claude Code
The organizational evidence is sharper. Shopify now factors AI usage into performance reviews and requires teams to justify new headcount against what AI can do. Klarna cut 40% of its workforce, then admitted it went too far and started rehiring. Nubank reported a 12x reduction in engineering hours using Devin for database migrations — the kind of well-scoped, repetitive work where AI excels. One agriculture company (unnamed in the Optimum Partners report) went from two-pizza teams to one-pizza teams.
The entry-level pipeline is collapsing in real time. Entry-level engineering positions have declined ~40% versus pre-2022 levels. AI engineer roles are growing 300% faster than traditional SWE roles. Gartner says 80% of software engineers will need AI-assisted development upskilling by 2027. The median SWE salary is $130K in 2026; AI/ML specialists command $134K-$193K — a 30-50% premium for the skills that haven't been automated yet.
The conclusion: Your engineers are shifting from writing code to specifying intent, reviewing output, and orchestrating agents. This isn't coming — it's already happened. The org chart implication: you need fewer people who can write a for-loop and more people who can write a specification that an agent won't misinterpret. Optimum Partners put it well: "Organizations that win in 2026 transition early-career talent from Code Generators to System Verifiers." If you're still hiring for syntax fluency, you're hiring for a skill with a two-year shelf life.
3. What Products Do We Build?
Not SaaS replacements. Not everything from scratch. The answer is more nuanced than either extreme.
Start with what's dying. The seat-based SaaS model is under severe pressure. A $2 trillion SaaS selloff hit between January 15 and February 14, 2026 — forward P/E collapsed from ~35x to ~20x. Atlassian reported its first-ever enterprise seat decline and slashed 10% of its workforce to fund AI investments. Salesforce stock dropped 26-28%. IDC predicts 70% of vendors will move away from pure seat-based pricing by 2028. 40% of enterprise SaaS contracts now use outcome-based pricing, up from 15% two years ago.
Meanwhile, Retool surveyed 817 builders and found 35% have already replaced at least one SaaS product with a custom build. 78% expect to build more custom tools in 2026. Development economics have collapsed: MVP cost dropped from ~$25K to ~$7K. The vibe coding market hit $4.7B in 2025 and is projected to reach $12.3B by 2027. Lovable is at $300M ARR.
But here's the counter-data that the "build everything" crowd ignores: MIT research shows fully internal builds succeed only ~33% of the time, versus 67% for vendor partnerships. 60% of builders in the Retool survey created tools outside IT oversight — shadow IT risk. And 45% of AI-generated code contains security vulnerabilities.
| Category | Verdict | Why |
|---|---|---|
| Workflow automations | Build or kill | Most vulnerable (35% already replaced). AI agents handle this natively. |
| Internal admin tools | Build | 33% already replaced. Domain-specific, low compliance burden. |
| CRM / BI / project management | Evaluate | Vulnerable but complex. Nearly half of new CRM spend is going to data architecture. |
| Accounting / ERP / compliance | Buy | Requires 100% accuracy. Regulatory burden too high for custom builds. |
| Security / observability | Buy | AI creates new attack surfaces faster than you can secure them internally. |
| AI governance | Build the layer | Only 22% of enterprises have a defined AI governance strategy. Massive white space. |
The conclusion: Build the glue layer. The products worth building are the ones that connect AI agents to your specific domain logic — the integrations, the governance rules, the data pipelines that no vendor can build for you because they don't know your business. Don't replace your ERP. Don't build a general-purpose CRM. Build the connective tissue between your systems of record and the AI agents that will increasingly operate on them. And budget for the 33% failure rate on custom builds — it's real.
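What a "glue layer" can look like in practice: a thin wrapper that routes every action an AI agent proposes through your domain-specific governance rules before it touches a system of record. A minimal sketch — all names (`GlueLayer`, `no_billing_writes`, the tool strings) are hypothetical, not any vendor's API:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class ProposedAction:
    """An action an AI agent wants to perform against a system of record."""
    tool: str        # e.g. "crm.update_contact"
    payload: dict
    agent_id: str

# A governance rule returns a rejection reason, or None to allow the action.
Rule = Callable[[ProposedAction], Optional[str]]

@dataclass
class GlueLayer:
    """Connective tissue between commodity agents and your systems of record."""
    handlers: dict = field(default_factory=dict)   # tool name -> callable
    rules: list = field(default_factory=list)      # domain governance rules
    audit_log: list = field(default_factory=list)  # every decision, kept either way

    def execute(self, action: ProposedAction) -> dict:
        # Every proposed action passes through every rule before dispatch.
        for rule in self.rules:
            reason = rule(action)
            if reason is not None:
                self.audit_log.append(("rejected", action.tool, reason))
                return {"status": "rejected", "reason": reason}
        result = self.handlers[action.tool](action.payload)
        self.audit_log.append(("allowed", action.tool, None))
        return {"status": "ok", "result": result}

# An example domain rule: agents never write to billing without a human.
def no_billing_writes(action: ProposedAction) -> Optional[str]:
    if action.tool.startswith("billing."):
        return "billing writes require human approval"
    return None
```

The point of the sketch is where the value lives: the handlers and rules encode business knowledge no vendor has, while the agent behind it stays swappable.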
4. Do We Even Need a CMS?
Yes. But not for what it used to do.
The CMS market is not dying — it's growing. $30.9B in 2025, projected to reach $48.2B by 2031 at 7.68% CAGR. The AI-specific CMS segment is growing even faster: $3.94B in 2026, projected to reach $22.28B by 2034 — a roughly 24% CAGR. Nobody is eliminating their CMS. In my research, I found zero documented cases of any company successfully operating without one.
But what the CMS does is changing fundamentally. The editing interface — the part where a human types words into a box — is dying. Sanity (which just raised an $85M Series C) declared "the end of the CMS era" — meaning the editing UI, not the data layer. They're rebranding as a "Content Operating System." Contentstack launched "Agent OS" with a "Brand Guardian" that audits AI content against brand guidelines before publishing. Their CEO says content management as we knew it is dead.
Dries Buytaert, the founder of Drupal, laid out the clearest vision: the CMS becomes the audit and version control layer for AI-generated content. His scenario — AI overnight rewrites 500 product descriptions, CMS makes that auditable and reversible — is not hypothetical. It's what Contentstack's AI agents already handle for 90% of translation and cultural adaptation work automatically.
The three-layer architecture emerging across the industry:
System of Record — Content governance, version control, audit trails, compliance. This is the CMS layer that survives and grows.
System of Context — Real-time data integration, personalization signals, user behavior. Feeds into agent decision-making.
System of Action — AI agent orchestration. The agents that actually create, modify, distribute, and optimize content.
The governance requirement is non-negotiable for any regulated industry. Healthcare, finance, legal — all need audit trails, confidence scores, reasoning paths, model version tracking, and prompt context preservation. These are now standard features in enterprise CMS platforms. A CMS Healthcare Playbook (v4) recommends prompt-level safeguards, auditable data lineage, and a "Shared AI Governance Hub."
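To make those audit requirements concrete, here is one hedged sketch of what a record for an AI content change might capture — the confidence score, reasoning path, model version, and prompt context listed above. The field names and the 0.8 review threshold are my own illustration, not any platform's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIContentRevision:
    """Audit record for one AI-generated or AI-modified piece of content."""
    content_id: str
    model_version: str     # which model produced the change
    prompt_context: str    # the prompt (or a hash of it) that drove the change
    reasoning_path: str    # the model's stated rationale, preserved verbatim
    confidence: float      # 0.0-1.0, reported by the generating pipeline
    previous_revision: str # pointer to the prior version, so the change is reversible
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def needs_human_review(self, threshold: float = 0.8) -> bool:
        """Low-confidence changes are routed to a human before publishing."""
        return self.confidence < threshold
```

With records like this, "AI rewrote 500 product descriptions overnight" becomes 500 auditable, reversible rows instead of an untracked mutation of your content.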
The conclusion: Your CMS isn't a passive content store — it's a control plane. The human editing UI is dead. What survives and becomes more important is the structured data layer, the governance rules, the audit trails, and the orchestration of AI agents that generate and modify content on your behalf. If you're in a regulated industry, this layer is legally required. If you're not, it's still the difference between "AI wrote 500 product descriptions overnight" and "AI wrote 500 product descriptions overnight and three of them promised things we can't deliver." The sober take in an AI-drunk world is: AI needs guardrails, and your CMS is one of them.
5. What's the North Star?
This is the question underneath all the other questions. If AI writes the code, if AI generates the content, if seat-based SaaS is dying, if engineers are becoming judges — what is a company actually for?
Here's the pattern I keep finding. Every success story in my research shares the same architecture, whether they know it or not:
Stripe ships 1,300 AI-generated PRs per week with zero human-written code and zero unreviewed merges. It works because their CI pipeline, type system, devboxes, and code review infrastructure were built for humans years before LLMs existed. Intercom's Apex 1.0 beats GPT-5.4 and Claude Opus 4.5 at customer support resolution — not with a bigger model but with domain-specific post-training on billions of conversations. Salesforce stopped hiring engineers entirely and reports 30% productivity gains — but has disclosed nothing about output quality, customer satisfaction, or defect rates.
And then there's the counter-evidence. NBER surveyed 6,000 executives: 89% report zero productivity impact from AI over three years. Duke's CFO survey projects 502,000 AI-related job cuts — even as the gains CFOs perceive far exceed the gains they can measure. The DX study found the same AI tools produce 50% fewer incidents at some companies and 2x more incidents at others. Same tools. Different outcomes. The variable isn't the AI — it's the organization.
THE SPECIFICATION IS THE PRODUCT
Gabriel Gonzalez, principal engineer: "A sufficiently detailed spec is code." The emerging discipline of Specification-Driven Development — now covered by Martin Fowler, ThoughtWorks, and InfoQ — represents the logical endpoint. If AI writes the implementation, then the specification IS the product. The company that can write the best specifications — the clearest expression of intent, constraints, edge cases, and acceptance criteria — wins. Everything else is execution, and execution is being commoditized.
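One way to read "a sufficiently detailed spec is code": write the specification as executable acceptance criteria, and let the agent supply whatever implementation makes them pass. A toy sketch — the discount function and its business rules are invented for illustration, not drawn from any real spec-driven toolchain:

```python
# The specification: intent, constraints, and edge cases as executable checks.
# In this workflow a human writes and owns this function; an agent writes the
# implementation below it, and the spec decides whether that implementation ships.

def spec_for_discount(apply_discount) -> bool:
    """Acceptance criteria for a hypothetical order-discount function."""
    assert apply_discount(100.0, "SAVE10") == 90.0   # 10% off with a valid code
    assert apply_discount(100.0, "BOGUS") == 100.0   # unknown codes change nothing
    assert apply_discount(5.0, "SAVE10") == 5.0      # no discount below the $10 minimum
    assert apply_discount(0.0, "SAVE10") == 0.0      # zero total stays zero
    return True

# A candidate implementation — the part that is commoditized:
def apply_discount(total: float, code: str) -> float:
    if code == "SAVE10" and total >= 10.0:
        return round(total * 0.9, 2)
    return total
```

Notice where the judgment lives: the $10 minimum and the unknown-code behavior are business decisions encoded in the spec. Any implementation that passes is interchangeable; the spec is the asset.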
This reframes every question above:
AI coding trajectory? It gets better at execution. The bottleneck is increasingly your ability to specify what you want. Invest there.
What engineers do? They write specifications, review outputs, and maintain the infrastructure that makes AI reliable. Hire for judgment, not syntax.
What products to build? The glue layer — the specifications, governance rules, and domain logic that connect commodity AI to your specific business. That's your moat.
Do you need a CMS? You need a control plane for AI-generated content. Your CMS is becoming that, whether the vendor calls it that or not.
The north star is this: don't reorganize around AI. Reorganize around judgment. AI amplifies whatever is already there. DORA's 2025 report found that "the success of AI depends less on the sophistication of tools and more on organizational systems." If your engineering process is strong, AI makes it faster. If it's weak, AI makes it fail at scale. The same tool that gives Stripe 1,300 PRs per week gives other companies a 67% rejection rate and 4x maintenance costs.
The sober take isn't that AI changes nothing. It's that AI changes everything about execution and nothing about judgment. The companies that survive the next two years are the ones that figured this out before their competitors did — and invested in the specifications, the review infrastructure, the governance layers, and the people capable of exercising judgment under uncertainty.
That's the Monday morning answer.