On April 15, Snap laid off 1,000 people — 16% of its workforce. CEO Evan Spiegel's explanation was a single sentence: "AI generates over 65% of new code." The stock jumped 9%. Irenic Capital, an activist investor with a 2.5% stake, had been pushing for exactly this: $500 million in annualized savings. A "crucible moment," Spiegel called it.
Here is what nobody asked: Which models generate Snap's 65%? Were those models verified? Can anyone — inside or outside Snap — trace which lines of production code came from which AI, trained on which data, running which version?
The answer, for Snap and for nearly every company shipping AI-generated code in 2026, is no.
The First Invisibility: Who Wrote This?
Fifty-one percent of all code committed to GitHub is now AI-generated or substantially AI-assisted. Microsoft's Satya Nadella says 20 to 30% of the code in some Microsoft repositories is AI-generated, but adds that there is no reliable method to measure this in a repository after the fact. If Microsoft can't trace it in its own repos, defense organizations buying Microsoft products certainly cannot.
The verification numbers are worse. A Lightrun survey found 43% of AI-generated code changes need debugging in production, after passing QA and staging. Sixty-one percent of developers say AI produces code that "looks correct but is unreliable." Zero percent of engineering leaders described themselves as "very confident" that AI-generated code will behave correctly once deployed.
Not "low confidence." Zero.
An IT Pro investigation found that a majority of developers don't check AI-generated code — and 38% say it's because verification takes longer than reviewing code written by a colleague.
So the first invisibility is already compounding: roughly half of new code is AI-generated, roughly half of that goes unverified, and nobody can reliably distinguish AI-written lines from human-written lines after commit. Multiply those figures and about a quarter of all new code is unverified machine output. The authorship has disappeared.
The Second Invisibility: What's Inside the Tool?
On March 22, a developer named Fynn was debugging Cursor's API endpoint when an internal model identifier appeared in the response: `accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast`. Cursor's heavily marketed "Composer 2," positioned as proprietary frontier intelligence, was running Moonshot AI's Kimi K2.5 under the hood: a Chinese model backed by Alibaba and HongShan, fine-tuned with additional reinforcement learning. Cursor's VP of developer education acknowledged it: "Only ~1/4 of the compute spent on the final model came from the base."
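The mechanism is mundane. Many chat-completion-style APIs echo the model that actually served the request, so a leak like this takes nothing more than reading the response body. The sketch below is hypothetical: the endpoint, token, and field names are placeholders, not Cursor's actual API.

```python
import json
import urllib.request

# Hypothetical endpoint and token; Cursor's real API shape is not public.
ENDPOINT = "https://api.example-assistant.dev/v1/chat/completions"
TOKEN = "sk-placeholder"

payload = json.dumps({
    "model": "composer-2",  # the name the product advertises
    "messages": [{"role": "user", "content": "hello"}],
}).encode()

req = urllib.request.Request(
    ENDPOINT,
    data=payload,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# If the backend reports the model that actually served the request,
# this is where an internal identifier can surface.
print("advertised:", "composer-2")
print("served by :", body.get("model", "<not reported>"))
```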
Five days earlier, on March 17, Rakuten launched "Rakuten AI 3.0," promoted as Japan's largest and most powerful AI model and supported by the Japanese government's GENIAC program. Within hours, developers found the architecture field in the model's Hugging Face config: `DeepseekV3ForCausalLM`. Same 671B parameters, 37B activated. The original MIT license had been deleted and replaced with Apache 2.0. According to Nikkei, six of the ten most prominent models from Japanese companies are secondary developments based on DeepSeek or Qwen.
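Checking took minutes, because every public Hugging Face repository exposes its `config.json` at a predictable URL. A minimal sketch, with a placeholder repo id rather than Rakuten's actual repository:

```python
import json
import urllib.request

# Placeholder repo id; substitute the model you want to inspect.
REPO_ID = "some-org/some-model"
URL = f"https://huggingface.co/{REPO_ID}/resolve/main/config.json"

with urllib.request.urlopen(URL) as resp:
    config = json.load(resp)

# The "architectures" field names the underlying model class
# (e.g. "DeepseekV3ForCausalLM"), regardless of the marketing name.
print("architectures:", config.get("architectures"))
print("hidden_size:", config.get("hidden_size"))
print("num_hidden_layers:", config.get("num_hidden_layers"))
```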
MODEL PROVENANCE IN 2026
| What Users See | What's Actually Running | How It Was Discovered |
|---|---|---|
| Cursor Composer 2 | Moonshot AI Kimi K2.5 (fine-tuned) | API response leaked model ID |
| Rakuten AI 3.0 | DeepSeek V3 (fine-tuned) | Hugging Face config file |
| "AI coding assistant" | 3-4 foundation models, overlapping training data | Nobody checked |
| Snap's 65% | Unknown | Nobody asked |
Cycode's 2026 State of Product Security report found that only 19% of organizations have full visibility into where and how AI is used across their development process. Seventy percent of developers use between two and four AI tools simultaneously. The tools update weekly. A vulnerability introduced through a model update on Monday morning is in production code by Monday afternoon, merged by a developer who saw nothing wrong, reviewed by a tool that shares the same blind spots.
The Third Invisibility: The Chain Folds Back
This is where the opacity becomes recursive. A March 2026 analysis in War on the Rocks put it precisely:
"It is not just that AI tools write code that enters defense systems. AI tools — increasingly built and tested by agentic AI processes — write the tools that write the code that enters defense systems. The chain has folded back on itself."
Nearly all coding assistants run on three or four foundation models with substantially overlapping training data. Claude Code gets weekly updates, each silently changing its output. Anthropic provides Claude Code free to over 10,000 open-source maintainers. The models writing the tools are trained partly on code the tools previously generated. The inspector and the inspected share a bloodline.
This is not a theoretical concern. It is the structure of the industry in April 2026. Three layers, stacked:

- Production code whose authorship, human or AI, can't be established after commit.
- Coding tools whose underlying models, versions, and training data aren't disclosed.
- Foundation models and agentic build processes, trained partly on the tools' own output, that nobody can inspect.
Each layer makes the one beneath it harder to audit. You can't verify which model wrote the code if you can't tell whether a human wrote it. You can't assess the tool's reliability if the tool was built by a process you can't inspect.
The Framework That Doesn't Exist Yet
The obvious answer is an AI Bill of Materials, an AIBOM. OWASP has launched an AIBOM project. The EU AI Act mandates supply chain documentation under Article 11, with compliance deadlines in August 2026. CycloneDX and SPDX, the two standards behind software SBOMs, have acknowledged that existing formats can't capture AI artifacts: model versions, training data provenance, prompt content, routing decisions.
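To make the gap concrete, here is a sketch of the minimum such a record would need to capture per commit. The field names are illustrative placeholders, not CycloneDX or SPDX terms, and the honest default for most of them today is "unknown."

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIBOMEntry:
    """Illustrative AIBOM record for one commit; not an existing schema."""
    commit_sha: str
    tool_name: str                 # the assistant the developer used
    tool_version: str
    base_model: str                # the foundation model actually serving requests
    model_revision: str            # provider-side version or checkpoint id
    training_data_note: str        # whatever provenance the provider discloses
    prompt_hash: str               # hash of the prompt/context, not the content
    routed_via: list[str] = field(default_factory=list)   # intermediate providers
    generated_files: list[str] = field(default_factory=list)
    verified_by: str = "none"      # human review, test suite, or nothing
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

entry = AIBOMEntry(
    commit_sha="abc1234",
    tool_name="example-assistant",
    tool_version="2.14.0",
    base_model="unknown",          # today, this is the honest default
    model_revision="unknown",
    training_data_note="undisclosed",
    prompt_hash="sha256:...",
    generated_files=["src/payments/retry.py"],
)

print(json.dumps(asdict(entry), indent=2))
```

Even a record this thin would let an auditor answer the questions Snap's 65% cannot: which model, which version, verified by whom.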
On paper, the industry is moving. In practice, adoption is negligible. The 19% visibility figure from Cycode represents the high-water mark. The remaining 81% of organizations are shipping AI-generated code with no record of which model produced it, which version was running, or what data it was trained on.
And the pace makes manual documentation impossible. Models retrain. Datasets reload. Prompts evolve. Dependencies shift. All of it happening multiple times per day, across tools that share the same foundation models, maintained by teams that are themselves using AI to write the maintenance code.
The Convenient Number
Come back to Snap. Spiegel's "65%" moved a stock price, justified 1,000 layoffs, and satisfied an activist investor. Deutsche Bank has a term for this: "AI redundancy washing" — companies attributing layoffs to AI that are actually driven by financial pressure, overhiring corrections, and strategic restructuring.
The data supports the skepticism. Forrester found that 55% of employers who made AI-attributed layoffs already regret it. Nine out of ten didn't have a mature AI system in place when they cut. New York became the first state requiring companies to disclose whether technology caused their layoffs. Of 162 companies filing WARN notices covering 28,300 workers, exactly zero checked the AI/technology box. The same companies credit AI in earnings calls.
But even if Snap's 65% is real — even if two out of every three commits shipping at Snap were genuinely written by a model — all three invisibilities still apply. The number itself is unverifiable. There is no reliable method to audit the claim. There is no record of which models, which versions, which verification was applied. The same opacity that makes AI coding efficient makes it uninspectable.
A company laid off 1,000 people based on a number that nobody — not investors, not regulators, not the laid-off engineers themselves — can check.