Why "Voidware" Is A Big Deal
Could this be to SaaS what the algorithmic News Feed was to internet content?
We’ve all been amazed at how capable AI is at building software apps for the better part of a year now; the Voidware post, though, takes it one step further. It’s an incredible illustration of where the future of software may be headed with AI.
In this post, I want to: (1) bring it to your attention – definitely read the full thing – and (2) share some of my own thoughts about it.
The App Does Not Exist
Here is how Ohad Eder-Pressman opens his description of the demo he cobbled together:
Last week I made a todo app appear out of thin air. No code. No database. No backend. Just an AI hallucinating an entire application into existence, one request at a time.
It works. Which is the part I still can't quite believe.
We've been using AI to write code faster. Cursor and Claude Code build entire features from natural language. These tools are incredible, but they're still operating in the old paradigm: we build software, then we run it.
What I'm about to show you breaks that assumption entirely.
Picture visiting a website where nothing exists until you arrive. No code waiting on a server. No routes defined. When the page loads, an AI generates the interface on the spot. You click a button, but there's no event handler. The AI sees the click and decides what should happen. You submit a form to /api/todos. That endpoint doesn't exist, has never existed. But the AI receives the request and responds with exactly what a todo API should return.
The model isn't a tool that helps build the app. The model IS the app.
Consider my recent expense-tracking app story – yes, it was mind-blowing to watch my friend generate it on his iPhone while waiting in line to board a plane. The artifact, however, was still a traditional software stack: a web UI, a backend API, a web server, a database. The entire stack had to be generated and deployed before we could use the app.
Ohad’s demo, on the other hand, has none of that.
The incredible thing to realize is that there is no code deployed anywhere; it’s all generated on the fly.
When we needed our expense-tracking app to support currency conversions, my friend had to log into the Base44 admin view, explain the feature, review it, and deploy. Yes, it’s incredible that he could do all that through his phone within 20 minutes – but how much more incredible would it have been if he didn’t have to do anything at all? What if the app could just “figure out” that we needed this extra feature, and make multi-currency support appear out of thin air?
Voidware completely bypasses the vibe-coding engineer.
That’s what’s so fascinating about this demo. There is no predefined UI, no predefined API, no predefined backend code. The app simply emerges in response to user behavior.
Everything is done through ~50 lines of code that pipe HTTP requests to an LLM (again, read the full post for more technical details).
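To make the mechanism concrete, here is a rough sketch of what such a server could look like. This is my own illustration under assumed choices – Flask plus OpenAI’s chat-completions client, and a made-up system prompt – not Ohad’s actual ~50 lines:

```python
# A minimal sketch of the idea (my illustration, not the demo's real code):
# one catch-all route that forwards every HTTP request to an LLM and
# returns whatever the model generates. Any equivalent stack would do.
from flask import Flask, Response, request
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a running web application (a todo app). You will receive raw "
    "HTTP requests. Reply with the response body only: full HTML for page "
    "requests, JSON for anything under /api/. Invent consistent, plausible "
    "behavior."
)

@app.route("/", defaults={"path": ""}, methods=["GET", "POST", "PUT", "DELETE"])
@app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def handle_anything(path: str) -> Response:
    # There is no real handler for any route; we just describe the incoming
    # request to the model and return its answer to the browser.
    request_text = f"{request.method} /{path}\n\n{request.get_data(as_text=True)}"
    completion = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": request_text},
        ],
    )
    body = completion.choices[0].message.content or ""
    mimetype = "application/json" if path.startswith("api/") else "text/html"
    return Response(body, mimetype=mimetype)

if __name__ == "__main__":
    app.run(port=8000)
```

Note that this sketch is stateless: every request is answered in isolation, so a todo you “create” wouldn’t survive a refresh. The real demo presumably threads earlier requests and responses back into the model’s context, so the application’s entire state lives as conversation history – which is what makes it feel like an app rather than a parlor trick.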
The News Feed Analogy
One parallel I keep thinking about is the algorithmic News Feed. At first – when the internet was just getting started – news websites were simply digital versions of physical newspapers. It was incredible that everyone could access any publication, from anywhere in the world, the instant new articles were published – but it was still a single editor making the same choices for the entire reader base. The key unlock came when Facebook invented the News Feed: suddenly, every user received their own personal editor, choosing the stories most relevant to their interests. Figuring this out was a key driver of Facebook’s success, as it dramatically increased user engagement and established a moat.
We all, however, still consume our personalized content through the same generic experience. We each have different posts in our feed, but that feed resides within the same Facebook (or, at this point, probably Instagram) app, with the same look-and-feel and the same user experience. Does this really make sense, though? With billions of active users, there are so many different personas hiring apps like Instagram for so many different jobs-to-be-done; wouldn’t they benefit from tailored app experiences?
Through a Voidware-style AI-generating-code-on-the-fly model, not only do users get their own personal editor, but each user also gets their own PM, designer, and engineering team!
A user turning to Twitter (fine, X) for a quick dopamine hit of memes can’t possibly be expected to share the same UX as a user looking to catch up on the latest insights around AI-assisted coding. While the former is probably well served by the existing app design, the latter would benefit from a NotebookLM-style research canvas with AI summaries. As I sometimes play both of these personas, it would be amazing if the app could – under different circumstances – understand the job I’m currently hiring it for, and morph itself accordingly.
Imagine your personal LLM-PM tracking your Instagram usage, saying to itself something like “Oh, we’re diving into recipe ideas now, are we? Perhaps I should add a little button on top that switches to a table view of your favorite recipes so far.” Then the LLM-designer sketches something, and the LLM-Software-Engineer goes ahead and adds the button. Instantly. Just for you.
Like, Meta never employed a human editor to analyze my behavior and determine that it should suggest egg-based recipe videos to me. The algorithm just decided to do so, and I – ashamed to admit – just keep watching them. Why can’t a whole personalized app be generated around it the same way? Once I started using the recipes view, it would evolve based on my personal way of engaging with it; if I never clicked the recipes button, it would simply disappear over time.
It might take several years, or a decade, to figure out how to build and scale such a concept properly. And yet, it is mind-blowing to consider all these possibilities.
While the Voidware demo is nothing but ~50 lines of LLM-wrapping code for now, it might signal the next stage in human-computer interaction: the era of the fully personalized software application.