The Signal — April 19, 2026

Tesla launches unsupervised robotaxis in Dallas and Houston. Northwestern prints artificial neurons that talk to living brain cells. And the AI memory shortage has a timeline: 2030. Sunday edition.


Tesla Launches Unsupervised Robotaxis in Dallas and Houston

Tesla expanded its robotaxi service to Dallas and Houston on Saturday, its third and fourth US cities. The important word is "unsupervised": no human safety monitors in the vehicle. Tesla made the shift to fully driverless operation in Austin back in January, and is now replicating that model across Texas.

The expansion tracks with guidance Tesla gave during Q4 2025 earnings: unsupervised robotaxis in additional cities by H1 2026. Saying you'll do it and doing it are different things, and Tesla has a long history of missed autonomous driving deadlines. But this one landed. Three Texas cities now have Tesla vehicles driving passengers around with no human backup behind the wheel.

The competitive landscape is worth noting. Waymo operates in San Francisco, Phoenix, Los Angeles, and Austin. Zoox is testing in Las Vegas. Tesla's approach of retrofitting production vehicles rather than building purpose-designed robotaxis gives it a scaling advantage if the technology works reliably. That "if" is doing a lot of work, but the data is accumulating. Each city that goes live without major incidents makes the next expansion easier to justify.

Sources: Reuters · TechCrunch · Teslarati


Printed Artificial Neurons Communicate with Living Brain Cells

Northwestern University engineers built artificial neurons — printed on flexible substrates using standard fabrication techniques — that generate electrical signals realistic enough to activate living brain cells. They tested this on mouse brain tissue slices. The artificial neurons fired, and the biological neurons responded. Published in Nature Nanotechnology on April 15.

This sits at the intersection of several fields that don't talk to each other enough: neuromorphic computing, brain-computer interfaces, and bioelectronics. Most brain-computer interface work focuses on reading signals out of the brain (think Neuralink). This goes the other direction — sending signals in, using hardware cheap enough to print. The fact that biological neurons treated the artificial signals as genuine is the key result. The brain didn't reject the input as noise.

Don't expect clinical applications tomorrow. Mouse brain slices in a lab are several leaps away from therapeutic use in humans. But the foundational question — can we build affordable, flexible electronics that speak the brain's native language? — just got a strong "yes" from a credible lab and a top-tier journal. That changes which research directions get funded next.

Sources: Northwestern News · ScienceDaily · News-Medical


The AI Memory Shortage Now Has a Timeline: 2030

The global DRAM shortage driven by AI demand could persist until 2030, according to SK Group's chairman. That's not a throwaway quote: SK Hynix is one of only three companies on Earth that manufacture high-bandwidth memory (HBM), the specialized RAM that AI chips depend on. When they say it'll take four more years to catch up, they're the ones who would know.

The numbers are stark. HBM for AI chips now consumes 23% of total DRAM wafer capacity. Only 60% of demand will be met by the end of 2027. Samsung, SK Hynix, and Micron are all building new fabrication capacity, but almost none of it comes online before late 2027. Meanwhile, the squeeze is spreading beyond AI — DDR4 and even DDR3 prices are climbing as manufacturing capacity gets redirected toward higher-margin AI memory. Micron confirmed its entire 2025 HBM production sold out before the year began.

This is one of those infrastructure stories that sounds dry until you realize it directly determines how fast AI can scale and how much it costs. Every model training run, every inference cluster, every new AI data center needs memory. If supply stays constrained through 2030, that's a hard ceiling on the industry's growth rate that no amount of software optimization can fully work around.

Sources: The Verge · PCMag · Tech Insider


On the Editor's Desk

We dropped the World ID / Tinder story from today's edition. The new partner announcements (Zoom, DocuSign, Okta) are incremental updates to an ongoing story rather than fresh developments. Worth revisiting if the standalone World ID app gains meaningful traction.

Sunday editions tend to be quieter, so we used the space for the Northwestern neurons paper — the kind of research that doesn't trend on Twitter but matters more than most things that do. If you can print electronics that talk to brain cells, the roadmap for brain-computer interfaces just got a new lane.

The DRAM story is infrastructure, which we know isn't everyone's favorite read. But hardware constraints are the unglamorous force that actually determines whether all the exciting AI announcements turn into real products at real scale. When the people who make the memory chips say "2030," the rest of the industry has to plan around that number.