The Brittle Stack: How We Lost Control of Our Software (and How Hackers Are Taking It Back)
The technology industry in 2026 is caught in a profound, accelerating crisis of agency. Across the digital landscape, we are rapidly ceding control of our infrastructure to two disparate but equally opaque forces: autonomous AI agents generating mountains of unmaintainable code, and legislative bodies demanding unprecedented structural access to our private data. The result is a software ecosystem that is increasingly hostile, fragile, and disconnected from human understanding.
Yet, against this backdrop of deteriorating software quality and top-down surveillance, a quiet counter-movement is taking root. By returning to the physical foundations of computing—stripping components from crashed cars and mapping undocumented networks—engineers are proving that complex technology can still be dismantled, understood, and reclaimed.
The Agentic Fever Dream and the Erosion of Quality
It has been roughly a year since the initial wave of autonomous coding agents arrived to replace precursors like Aider and early iterations of Cursor. What began as an exhilarating way to accelerate side projects has rapidly morphed into a systemic liability. As developers eagerly deployed these tools into production environments, driven by free API credits from OpenAI and Anthropic over the Christmas break, the consequences of this "progress" have become impossible to ignore.
As developer Mario Zechner recently noted in a sobering retrospective, "Thoughts on slowing the fuck down," software has regressed into a "brittle mess." The industry is quietly accepting a new reality where 98% uptime is becoming the norm rather than the exception. The symptoms of this agentic addiction are everywhere. Companies boasting that their products are written entirely by AI are shipping releases plagued with gigabyte-scale memory leaks, bizarre UI glitches, and broken features that any human QA team would have caught easily.
We are witnessing the fallout of a culture that prioritizes producing the most code in the shortest amount of time, consequences be damned. Whispers of an AI-caused outage at AWS, swiftly followed by an internal 90-day reset, and rumors of Windows degrading as Microsoft leans heavily into AI-generated code highlight the risks of this transition. Developers are building orchestration layers to command swarms of autonomous agents, only to find themselves installing malware like Beads, which they cannot cleanly uninstall, simply because an AI recommended it. Grand experiments, such as Anthropic using an agent swarm to build a C compiler or Cursor attempting to build a browser, have produced largely broken results. Teams across the industry are realizing they have "agentically coded themselves into a corner," saddled with a gazillion unrequested features, zero code review, and architectural decisions delegated entirely to machines.
Legislative Overreach in a Crumbling Ecosystem
While developers struggle to maintain systems they no longer fully comprehend, regulatory bodies are attempting to mandate further compromises to our digital infrastructure. In Europe, the push to structurally weaken digital privacy continues unabated. According to recent campaigns from privacy advocates, "The EU still wants to scan your private messages and photos."
This ongoing legislative effort, heavily criticized by decentralized communities on platforms like Mastodon (operating under banners like @fightchatcontrol), seeks to impose mass surveillance on private communications. The implications are deeply troubling when viewed alongside the broader software degradation trend. Mandating scanning mechanisms and backdoors in private messaging platforms requires highly complex, secure, and resilient codebases. Yet, we are demanding these delicate architectural compromises precisely at the moment when the industry is abandoning disciplined software engineering in favor of generating buggy, unreviewed code via LLM slot machines. We are building surveillance mandates into software foundations that are already cracking.
Reclaiming the Black Box
If AI-generated technical debt and government surveillance represent a loss of human agency, the antidote can be found in the methodical, deeply manual work of hardware reverse engineering. Security researchers are actively rejecting the opacity of modern systems by physically tearing them apart.
To participate in a vehicle bug bounty program, security researcher xdavidhu completely bypassed official simulators and emulators. Instead, as documented in "Running Tesla Model 3's computer on my desk using parts from crashed cars," he painstakingly built a fully functional Tesla computer environment directly on his desk. Sourcing components from salvage companies on eBay, he acquired a water-cooled Media Control Unit (MCU) and layered Autopilot (AP) computer, roughly the size of an iPad and the thickness of a 500-page book, for $200 to $300.
This project required a return to fundamental engineering problem-solving. Faced with chopped cables and proprietary hardware, the researcher relied on Tesla's publicly available "Electrical Reference" manuals to understand the internal wiring. When the required Rosenberger 99K10D-1D5A5-D connector proved completely unobtainable on the open market, human ingenuity took over: he realized a widely used BMW LVDS video cable was a near-perfect match. Powered by a 10A, 0-30V Amazon power supply and utilizing prior research by @lewurm to map the car's internal webservers via Ethernet, the researcher successfully booted the crashed car's operating system.
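The kind of network mapping described above, finding which hosts on a bench-top unit answer HTTP requests over a direct Ethernet link, can be sketched with a simple TCP sweep. This is a generic illustration only: the subnet and port list below are hypothetical placeholders, not the actual addresses documented in @lewurm's research or in the Tesla writeup.

```python
# Minimal sketch of mapping a salvaged unit's internal webservers over
# a direct Ethernet link. The subnet and ports are HYPOTHETICAL
# placeholders for illustration, not documented Tesla addresses.
import socket


def probe(host: str, port: int, timeout: float = 0.3) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def scan_subnet(subnet: str = "192.168.90", ports=(80, 443, 8080)):
    """Try every host in subnet .1-.254 and yield (host, port)
    pairs that accept a TCP connection."""
    for last_octet in range(1, 255):
        host = f"{subnet}.{last_octet}"
        for port in ports:
            if probe(host, port):
                yield host, port


# Usage (on a machine wired directly to the unit):
#   for host, port in scan_subnet():
#       print(f"responding: {host}:{port}")
```

Listening ports found this way can then be explored with an ordinary browser or curl, which is essentially how internal status pages on embedded devices are discovered once you share a link with them.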
What This Means
The contrast between these trends defines the 2026 technology landscape. On one hand, we are hurtling toward a future where our software is an unmaintainable black box, written by autonomous agents that compound errors with zero learning, and mandated by regulators to include structural surveillance. On the other hand, the foundational hacking ethos remains alive and well, proving that with enough patience, a salvaged touchscreen, and a 10-amp power supply, even the most proprietary black boxes can be illuminated.
Ultimately, the industry must face a reckoning. We cannot continue to compound "booboos with zero learning" while simultaneously trusting these fragile systems with our most private digital lives. Reclaiming our software will require slowing down, prioritizing human discipline over agentic speed, and remembering how our systems actually work beneath the surface.
True innovation in 2026 might not be writing more code, but finally taking the time to understand the code we already rely on.