Skeuomorphism pulls, originality pushes

Skeuomorphism helps pull technology into the present, but the future is always non-skeuomorphic

The term “skeuomorphic” entered the popular lexicon following the iPhone's release, as the original iPhone compensated for the lack of physical buttons with lighting and detail that made icons and buttons appear to be real objects. The term peaked in 2013, when Apple released iOS 7 and we got a clear vision of what a non-skeuomorphic UI might look like. This was something of a design choice, but it was also a technical one: iPhones were finally capable of GPU-accelerated background blur, which enabled depth as a metaphor for UI. Suddenly we had a native way to experience services delivered through a glassy rectangle.
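(For a sense of how cheap that depth effect has become, here's the web analog; a sketch in React, not how iOS itself does it, but the same single GPU-composited blur layer.)

```tsx
import * as React from "react";

// The web analog of iOS 7's translucent "depth" material: one
// GPU-composited CSS property blurs whatever scrolls beneath the panel.
const FrostedPanel = ({ children }: { children: React.ReactNode }) => (
  <div
    style={{
      backdropFilter: "blur(20px)",                // hardware-accelerated in modern browsers
      backgroundColor: "rgba(255, 255, 255, 0.6)", // translucency reveals the blurred layer below
    }}
  >
    {children}
  </div>
);

export default FrostedPanel;
```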
Naturally, the iPhone wasn't the only arena for skeuomorphism. Newspapers began publishing skeuomorphs of their print editions before they figured out how to make web-native websites.
NYT on the web

Skeuomorphism has something of a negative connotation now because it implies dragging along the past, but, as the iPhone did, it's often necessary to rely on established metaphors to pull the future forward. Then, when the technology becomes familiar enough and powerful enough to eschew the skeuomorphs, great minds push things into the world that are newer, more native, and better.

AI is the first major technological wave since the iPhone's launch, and the world abounds in skeuomorphism again. I posted about this in the realm of vibecoding:

There's a lot of pent-up FOMO from people who didn't get to participate in the upside of the app explosion or make the money one could have made as an engineer over the last 15 years. Showing code being written and deployed in a chat is something of a painkiller for that FOMO. It elicits the feeling of I'm finally writing code, which is half the reason why vibecoding apps stream their output.

By the standards of pre-AI coding, it's preposterous to call AI code generation slow, but consider this thought experiment: what if it were instantaneous? Imagine typing a request for an app and it just appears. You'd lose the opportunity to give the user an artificial IKEA effect, which would change who wants to use these apps.

Were code generation instantaneous, it's obvious you wouldn't use a conversational UI in a vibecoding app: you'd type a prompt and edit the result on a GUI-enriched canvas, similar to how you already search for and then edit templates in Figma. Why show code at all?

You see the same hallmarks of skeuomorphism in vibecoding that we saw in the early days of iOS.

  1. Technological limitations driving design patterns that function as muzak (AI code generation taking minutes, not seconds, which requires a lot of UI and UX to entertain the user)
  2. More painkiller-type apps (ugh, I don't want to learn to code) than vitamin-type apps

But this extends well beyond vibecoding. Everywhere you look, you see applications that drag the metaphors of the past, the constraints of earlier technologies, and the limitations of physical reality into their interfaces.

AI therapy apps create a single therapist persona

AI could in theory free us from the one-patient/one-therapist pairing. What if therapy were more like interacting with a room of people? As with everything below, I have no idea if this is a good idea. The Auren app is one interesting attempt at this, and this thread points to a more multilateral form of therapy.

AI browsers

I'm confused why this is even a category, beyond the fact that it's familiar enough to garner some traction that could later be parlayed into another, less skeuomorphic thing. The very thing AI is good at is forgoing the need to browse websites. Why pair that with a…site browser?

Education is still tutor- and material-based

Most of what you see in education and learning is about creating new learning material and delivering chatbot-centric differentiated tutoring. Alpha School is a notable exception: school becomes an experience that looks nothing like the school we know, with formal academics comprising only a small part of the day. The product is literally called TimeBack because, instead of merely replacing a teacher with technology, it does away with the entire temporal structure of schooling.

In a similar vein, what if you could spin up a synthetic world that helps people learn in the same way they do through browsing the internet?

Websites are still deployed

Why are websites intermediated by deployment? React exists to make code more readable and maintainable, which helps LLMs too, but it all transpiles to JavaScript and ultimately renders as HTML/CSS anyway. This post imagines a future where state is handled through the context window. Claude Imagine is a first glimpse.
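To make the layering concrete, here's roughly what that transpilation step looks like; a sketch, since the exact output depends on your compiler settings:

```tsx
import * as React from "react";

// What you (or the LLM) write: JSX, optimized for readability.
const Greeting = ({ name }: { name: string }) => (
  <h1 className="hero">Hello, {name}</h1>
);

// Roughly what the compiler emits: plain function calls, no JSX.
const GreetingCompiled = ({ name }: { name: string }) =>
  React.createElement("h1", { className: "hero" }, "Hello, ", name);

// Either way, React reconciles it into ordinary DOM at runtime:
// <h1 class="hero">Hello, Ada</h1>
```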
And say what you want about Chad IDE, but at least they're doing something new and contemporary with the otherwise idle time spent watching agents agent, rather than just streaming the agent's logs.

People using AI to write docs that other people use AI to summarize

The roundtrip seems wasteful of tokens. Maybe there's a Babelfish for vectors such that you don't need to intermediate a detailed idea or plan with an actual document.
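Purely as illustration, the sending half of that Babelfish is already trivial with today's tools (this sketch assumes the OpenAI SDK; the unsolved half is a receiver that can consume another model's vectors directly):

```ts
import OpenAI from "openai";

const openai = new OpenAI(); // assumes OPENAI_API_KEY in the environment

// Hypothetical "Babelfish" send: ship the idea as a vector, not a document,
// so no downstream model has to re-read and re-summarize the prose.
async function shareIdea(plan: string): Promise<number[]> {
  const res = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: plan,
  });
  // ~1,536 floats in place of pages of prose; decoding on the
  // receiving end is the part that doesn't exist yet.
  return res.data[0].embedding;
}
```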

Images and videos are still bits

This is maybe the dumbest idea because images and video are kinda Lindy, but what would it be like if every image and video were as tappable and swappable as software? This post on making a product demo video with code (via Claude) is interesting. Gizmo is notable because it's so agnostic about what kind of media is delivered in the feed.

If I had a good idea for a skeuomorphic AI app, I'd be pulling $2M in ARR as you read this. But I'd also be missing out on pushing out a much weirder, more exciting future.

¯\_(ツ)_/¯