The Signal — April 7, 2026
AI coding tools work, maybe too well. A New York Times investigation finds companies drowning in unreviewed code after 10x output increases. Separately, OpenAI is paying external researchers to do safety work its internal teams keep quitting, and ASML just printed 8-nanometer chip features that will define the next generation of AI hardware.
The Code Overload Problem
The New York Times published a detailed investigation Sunday into what happens when AI coding tools work exactly as advertised: companies can't keep up with what they produce.
One financial services firm went from writing 25,000 lines of code per month to 250,000 after adopting AI tools like Cursor. The code quality was fine on a line-by-line basis. The problem was everything around it. Code review backlogs ballooned. Nobody could audit the output fast enough. Engineers who used to spend their time writing code now spent it reading code they hadn't written and didn't fully understand. Managers who'd never touched a codebase started spinning up internal tools in hours, creating software that worked but that nobody had vetted for security, maintainability, or whether it duplicated something that already existed.
The piece surfaces a second-order problem the industry hasn't grappled with: AI coding tools optimize for output, but organizations run on comprehension. Writing code was never the bottleneck. Understanding what the code does, how it fits together, and what breaks when you change it was. A 10x increase in output with no corresponding increase in review capacity doesn't make an engineering org 10x more productive. It makes it 10x more opaque.
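The backlog arithmetic is worth making concrete. A minimal sketch, using the article's 25,000 → 250,000 lines-per-month figures but an assumed (hypothetical, not from the Times piece) review capacity, shows how unreviewed code piles up linearly when output grows 10x and review capacity stays flat:

```python
# Hypothetical illustration of a review backlog compounding month over month.
# The output figures are from the article; the review capacity is assumed.
output_before = 25_000    # lines written per month, pre-AI (from the article)
output_after = 250_000    # lines written per month, post-AI (from the article)
review_capacity = 30_000  # lines a team can meaningfully review per month (assumed)

backlog = 0
for month in range(1, 7):
    backlog += max(0, output_after - review_capacity)
    print(f"month {month}: unreviewed backlog = {backlog:,} lines")
```

Under these assumptions the backlog passes a million unreviewed lines within five months. The exact numbers don't matter; the point is that any flat review capacity below the new output rate produces a backlog that grows without bound.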
Several companies quoted in the piece have started imposing internal limits on AI-generated code, not because the tools are bad, but because the organizational metabolism can't digest what they produce. One CTO compared it to "drinking from a fire hose you turned on yourself."
Sources: New York Times
OpenAI Creates External Safety Fellowship
OpenAI announced a fellowship program for external researchers to work on safety and alignment. The pilot runs September 2026 through February 2027, hosted at Constellation in Berkeley. Applications close May 3.
Fellows will get stipends, compute access, and mentorship from OpenAI researchers, but no access to internal systems. Priority areas include safety evaluation, robustness testing, agentic system oversight, and high-severity misuse domains. The fellowship targets researchers, engineers, and practitioners (not just academics).
The context here is hard to separate from the program itself. OpenAI has lost a string of senior safety researchers over the past two years. Jan Leike left for Anthropic in May 2024 after the superalignment team was dissolved. Ilya Sutskever departed the same month. The internal safety team has been rebuilt multiple times. Creating an external fellowship either reflects a genuine recognition that safety research benefits from outside perspectives, or it's an attempt to distribute the reputational load after repeated internal failures. Probably both.
The structure matters more than the announcement. Fellows will work at Constellation, not at OpenAI's offices, with no internal system access. The arrangement is arms-length by design. That independence could be a feature (external researchers aren't subject to internal pressure) or a limitation (the hardest safety problems require seeing what's actually inside the models). Whether this produces real safety research or primarily produces good press will depend on what the fellows actually publish, and whether OpenAI acts on findings that are inconvenient.
Sources: OpenAI
ASML Prints 8-Nanometer Chip Features in a Single Step
ASML's new high-numerical-aperture extreme ultraviolet (high-NA EUV) lithography systems (the machines that physically print circuits on advanced chips) have achieved 8-nanometer features in a single exposure step. That's the smallest feature size ever produced by a commercial lithography system, and the results were published in Nature this week.
Each machine costs roughly $400 million. About ten have shipped so far, to Intel and SK hynix. The technology enables 2.9x more transistors per chip than the previous generation of EUV tools, the ones that already pushed chipmaking to 13-nanometer features and required multiple patterning steps to go smaller.
This sits underneath every AI scaling conversation. When people debate whether we'll hit compute walls, they're implicitly asking whether the physical infrastructure can keep shrinking transistors and packing more of them onto each chip. ASML just answered that question for the next generation: yes, and by a wide margin. The 2.9x density improvement means chips fabricated on high-NA EUV will carry meaningfully more compute per wafer than anything available today.
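As a rough sanity check on that figure: if transistor density scaled purely with the inverse square of the minimum feature size, the shrink from 13 nm to 8 nm would yield about 2.6x. The reported 2.9x presumably also reflects other layout and process gains; the idealized scaling law below is an assumption for illustration, not ASML's actual accounting.

```python
# Back-of-envelope: area density scales roughly with 1 / (feature size)^2.
# The 13 nm and 8 nm figures are from the article; the pure inverse-square
# scaling is an idealized assumption, not how ASML derived its 2.9x claim.
prev_feature_nm = 13.0
new_feature_nm = 8.0
density_gain = (prev_feature_nm / new_feature_nm) ** 2
print(f"idealized density gain: {density_gain:.2f}x")  # ≈ 2.64x
```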
ASML's head of research metrology described AI-driven demand for these machines as "monumental." The company is one of a handful of entities on Earth that sits at a true chokepoint in the AI supply chain. As of this week, its grip on that chokepoint just tightened.
Sources: Nature
On the Editor's Desk
The ingestion pipeline scored everything at 6 or below for the last 48 hours. All three of today's stories were sourced via manual web search rather than the automated pipeline. We're flagging this for recalibration: a New York Times investigation and a Nature paper should not score below the threshold.
We held the Tufts neuro-symbolic AI research (100x energy reduction in robotics training). It's a strong story, but we covered it yesterday. If follow-up coverage or additional data emerges from the ICRA conference in May, we'll revisit.
Today's stories sit at different layers of the same stack: ASML builds the machines that print the chips, the chips run the models, the models write the code, and now there's too much code for humans to review. The bottleneck keeps moving up the chain.