<h1>AI still doesn't really understand memes</h1>
<p>November 10, 2023</p>
<h2>Introduction</h2>
<p>Antimatter is a platform for teaching and learning through memes. Teachers use Antimatter to invite students to create memes based on their current unit as a means of what teachers call formative assessment, because, in the classroom as in life, in order to create a good story, joke, or puzzle, you have to really understand the subject matter.</p>
<p>Antimatter was founded in January 2021, before DALL·E and other foundation models became a sensation. Since then we’ve naturally fielded a lot of questions about whether AI will come for memes. Those questions are usually pointed toward “AI-generated meme templates”, which we think misses the mark. We believe that meme templates are, at their core, found objects, and so novel, instantly AI-generated images will fail to resonate because they carry no cultural context. Unless, of course, an AI-generated image gains notoriety through the meme template supply chain (4chan and then Reddit), in which case it is still found, not created, by the people who ultimately meme it. The meme template below is one such example; this image became a template because it’s bad, not because it’s good.</p>
<p><img src="https://preview.redd.it/a-confusing-scene-at-walmart-v0-4hnq8np23wsb1.jpg?auto=webp&s=1827f864f08055c276d22603aaae96f4f49ce7a6" alt="">
<span class="caption">Source: <a href="https://www.reddit.com/r/MemeTemplatesOfficial/comments/oe4q8a/a_confusing_scene_at_walmart_v04/">/r/MemeTemplatesOfficial</a></span></p>
<p>Below we present our findings on how well the state of the art can create and interpret memes. As you may have gathered from the title of this post, it still can’t really do either well. Maybe we’re biased because we deal in memes from morning to night, but we believe memes are an interesting litmus test for AI because of how uniquely human they are, even more so than text, which has a somewhat <a href="https://whoo.ps/2022/02/17/what-the-turing-test-got-wrong">reflexive</a> relationship with computing itself. </p>
<h2>What is and isn’t a meme?</h2>
<p>It’s important to distinguish what kind of memes we’re talking about, because people often conflate lots of things that aren’t actually memes. Memes compress information, which is to say they use very few pixels to express complex ideas. Many of the things that people conflate with memes do the opposite: they use a lot of pixels to express something very simple. The image below, for example, is not a meme.</p>
<p><img src="https://media1.giphy.com/media/UO5elnTqo4vSg/giphy.webp" alt="Shaq Shimmy">
<span class="caption">Shaq Shimmy: A Reaction GIF, not a meme</span></p>
<p>This GIF is used to express, “I am excited by this”. The literal rendering of the words “I am excited by this” on your computer screen consumes roughly three orders of magnitude fewer pixels and is no less informative. Reaction GIFs are flair; they're as harmless but also as meaningless as the pins worn on suspenders at TGI Fridays. Reaction GIFs are closer to stickers than they are to memes, and memes are closer to textual information than they are to reaction GIFs. </p>
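<p>The “three orders of magnitude” claim is easy to sanity-check with back-of-envelope arithmetic. All dimensions below are assumptions chosen for illustration (a modest GIF frame size and a typical monospace glyph cell), not measurements of the actual Shaq GIF:</p>

```python
# Back-of-envelope comparison: pixels consumed by a reaction GIF
# versus the same sentiment rendered as plain text.
# All sizes are illustrative assumptions, not measurements.

GIF_WIDTH, GIF_HEIGHT = 480, 270   # assumed frame dimensions
GIF_FRAMES = 20                    # assumed number of frames

GLYPH_WIDTH, GLYPH_HEIGHT = 8, 16  # assumed monospace glyph cell
text = "I am excited by this"

gif_pixels = GIF_WIDTH * GIF_HEIGHT * GIF_FRAMES
text_pixels = len(text) * GLYPH_WIDTH * GLYPH_HEIGHT

print(gif_pixels)                  # 2,592,000 pixels across all frames
print(text_pixels)                 # 2,560 pixels for the rendered text
print(gif_pixels // text_pixels)   # ~1000x: three orders of magnitude
```

<p>Even counting only a single frame, the GIF uses about 50× the pixels; across the animation, the ratio lands right around 1000×.</p>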
<p>It’s not all so neat and tidy, of course. Here’s a captioned image, generated by an AI on <a href="https://memecam.io">memecam.io</a>. I gave MemeCam my headshot and this is what it generated:</p>
<p><img src="https://i.imgur.com/aejV6Dy.jpeg" alt="">
<span class="caption">Auto-generated meme by MemeCam</span></p>
<p>Sure, it used Impact font. It looks like a meme. But it’s not a meme because the idea expressed in the image — that I am a poor dresser — is about the image. This captioned image isn’t a vehicle for an idea the way a good meme is; it’s the destination. Note how the very same image and text, sent in Messages, differs only in aesthetics.</p>
<p><img src="https://i.imgur.com/23Ee7ni.jpg" alt="">
<span class="caption">The same quasi-meme works just as well when delivered as sequential messages</span></p>
<p>Contrast this with some of the memes created by students in classrooms on Antimatter about <a href="https://antimatter.systems/discover/9ed1f8a4-e3f4-4e82-92a6-655ac7e30569?content=8ed310d2-b18d-47a6-983b-c9ba84d317e7">natural selection</a>…
<img src="https://antimatter.systems/_next/image?url=https%3A%2F%2Faoqysnuwpwgkhlzkdwnh.supabase.co%2Fstorage%2Fv1%2Fobject%2Fpublic%2Fimages%2F799ab0cd-e43d-4f4f-86b1-22449bc72a43-176473595.png&w=2048&q=75" alt="">
<span class="caption">A meme made by a student on Antimatter about Natural Selection</span></p>
<p>Or the <a href="https://antimatter.systems/discover/6abfb355-1bb9-456b-b0d8-ab0da50a60c8?content=b303f5b8-60d7-4e99-acd8-b117eee2362e">plum pudding model of the atom</a>…
<img src="https://antimatter.systems/_next/image?url=https%3A%2F%2Faoqysnuwpwgkhlzkdwnh.supabase.co%2Fstorage%2Fv1%2Fobject%2Fpublic%2Fimages%2F7d2ce516-6f76-4e23-8d8d-61526ad5bef3-825557566.png&w=2048&q=75" alt="">
<span class="caption">A meme made by a student on Antimatter about The Plum Pudding Model of the atom</span></p>
<p>Or <a href="https://antimatter.systems/discover/33e88155-0073-4b85-bf2d-ff987fe4b20a?content=ef869f6b-3973-452f-b068-6fcf273f24fa">Roman architecture</a>…
<img src="https://antimatter.systems/_next/image?url=https%3A%2F%2Faoqysnuwpwgkhlzkdwnh.supabase.co%2Fstorage%2Fv1%2Fobject%2Fpublic%2Fimages%2F17b3a327-414e-4132-9878-0a67cf1fa2cc-867029196.png&w=2048&q=75" alt="">
<span class="caption">A meme made by a student on Antimatter about Roman Architecture</span></p>
<p>Or <a href="https://antimatter.systems/discover/4384e320-121f-4b61-b1b1-69148b6590bd">the Proclamation of 1763</a>…
<img src="https://antimatter.systems/_next/image?url=https%3A%2F%2Faoqysnuwpwgkhlzkdwnh.supabase.co%2Fstorage%2Fv1%2Fobject%2Fpublic%2Fimages%2F36747d9c-b926-4f36-8023-8ed95ca46b1a-976463165.png&w=2048&q=75" alt="">
<span class="caption">A meme made by a student on Antimatter about The Proclamation of 1763</span></p>
<p>What separates these memes from the Shaq Shimmy reaction GIF and the pseudo-meme generated by MemeCam is that:</p>
<ol>
<li><strong>The important concept behind each meme is not contained in the images themselves</strong>. They’re not expository.</li>
<li><strong>The punchline derives from your understanding of the subject matter.</strong> Conversely, if you don’t know anything about any of these topics, the meme isn’t funny.</li>
</ol>
<p>With those definitions in mind, let’s walk through how well AI does at creating and interpreting memes. Note that while we've tested just about every AI vendor, we've tested OpenAI most thoroughly by a long shot. All of the tests below were run on ChatGPT; no other platform performed notably better in any regard.</p>
<h2>AI creation of memes</h2>
<p>Let’s start with an image created by DALL·E 2 about natural selection. This was made in April 2023.</p>
<p><img src="https://yale-club.vercel.app/_next/image?url=https%3A%2F%2Frbithsrcebntfmvcxlhw.supabase.co%2Fstorage%2Fv1%2Fobject%2Fpublic%2Fimages%2Fcd0b2031-b41e-49f3-af16-da8134e46a79-269989072.jpeg&w=2048&q=75" alt="">
<span class="caption">A meme made about Natural Selection by DALL·E 2</span></p>
<p>Here’s another, made today (November 9, 2023), on DALL·E 3.</p>
<p><img src="https://i.imgur.com/xV2rm0x.png" alt="">
<span class="caption">A meme made about Natural Selection by DALL·E 3</span></p>
<p>The first is a facsimile of memes’ aesthetic language and the second is more a cartoon than a meme. Neither is interesting or insightful—had the HMS Beagle made Island Seeds Co. its first stop, it may actually have delayed Darwin’s articulation of natural selection.</p>
<p>I’d love to know how much, if at all, OpenAI and others have tried to make real memes (in contrast to fake-ish memes like MemeCam). Nevertheless, even if there's progress, we suspect it's something of a moving target. One of the reasons that memes have proliferated so much over the past few years is that they are a response to the otherwise high degree of legibility you can feel while using the internet. Memes in some sense are definitionally a step ahead of whatever computers can read and write. If OpenAI becomes as good at generating memes as it is at generating text, those pesky human memers may just start memeing in new ways, like a surfer gliding just ahead of the break.</p>
<h2>OpenAI's interpretation of memes</h2>
<p>As mentioned above, all of the following tests were done with OpenAI’s recently released GPT-Vision. We have only done cursory tests on other AIs.</p>
<p>For testing purposes we bucketed memes into four categories, ordered by how much they rely on visuals and context rather than captions:</p>
<ol>
<li>Memes where the visuals are mere adornment</li>
<li>Memes where the context is spare</li>
<li>Memes with complex visual narratives</li>
<li>Memes with no textual content</li>
</ol>
<h3>Memes where the visuals are mere adornment</h3>
<p>The following meme was taken from <a href="https://www.reddit.com/r/HistoryMemes/comments/ee3ka9/black_panther_flashbacks/">r/HistoryMemes</a>. The text reads like a sentence and the visuals are mere adornment.</p>
<p><img src="https://preview.redd.it/silgj4opx5641.jpg?auto=webp&s=37c8f3bae8dac0b107de77ec2d03258d2c6eec5c" alt="">
<span class="caption">A meme found on r/HistoryMemes where the imagery is mere adornment</span></p>
<p>Here is ChatGPT’s response when we asked it to interpret this quasi-meme:</p>
<blockquote>
<p><span class="blockquote-long">This meme is structured as a satirical take on the subject of questions that are considered impolite or sensitive in different contexts. It plays on the cultural norms of what is traditionally deemed inappropriate to ask different groups of people and juxtaposes these with a critical view of historical events:</span></p>
<p><span class="blockquote-long">1. The first panel says, "Never Ask A Woman Her Age," which is a reference to the social taboo against asking a woman about her age. It is considered impolite because it can imply concerns about youth and beauty standards.</span></p>
<p><span class="blockquote-long">2. The second panel states, "A Man, His Salary," implying that it is taboo to ask a man about his salary. This reflects the belief that financial earnings are a private matter and can also be linked to self-worth or status.</span></p>
<p><span class="blockquote-long">3. The final panel, featuring the flag of the United Kingdom, says, "Or Britan How They Got So Many Artifacts In Their Museum," which is a humorous critique of British colonial history. It implies that many artifacts in British museums were acquired during the era of the British Empire, often in ways that are now considered unethical or were outright acts of plunder.</span></p>
<p><span class="blockquote-long">The humor arises from the unexpected twist in the last panel. While the first two panels deal with personal sensitivities, the third panel escalates to a national or historical sensitivity, mocking the contentious legacy of colonialism and the controversies surrounding the acquisition of cultural artifacts. The misspelling of "Britain" as "Britan" might be an unintentional error or a deliberate choice to mimic the often informal and incorrect spelling seen on the internet, which can add to the meme's casual and irreverent tone.</span></p>
</blockquote>
<p>Here ChatGPT performs very well, but as noted above, the concept delivered in this meme would come across nearly as well, if less memorably, without the imagery. Interpreting it is basically easy OCR plus textual inference.</p>
<h3>Memes where the context is spare</h3>
<p>This meme was created by a student in an activity on Antimatter. Note how if you merely read the text “Going into contemporary issues / Leaving contemporary issues” you’d think or feel nothing at all.</p>
<p><img src="https://antimatter.systems/_next/image?url=https%3A%2F%2Faoqysnuwpwgkhlzkdwnh.supabase.co%2Fstorage%2Fv1%2Fobject%2Fpublic%2Fimages%2F33afea05-240e-49e4-b274-6fcf5ffcbd7d-936911351.png&w=3840&q=75" alt="">
<span class="caption">A meme made on Antimatter about a unit on Contemporary Issues</span></p>
<p>Here is ChatGPT’s response when we asked it to interpret this legitimate, though fairly basic meme:</p>
<blockquote>
<p><span class="blockquote-long">This meme utilizes a before-and-after format to create a humorous effect based on the contrast between two different states of being. Here's a step-by-step explanation:</span></p>
<p><span class="blockquote-long">1. Top Image (Before): This part of the meme shows characters from the children's television show "Teletubbies," which is known for its colorful and cheerful content. The characters are brightly colored, smiling, and appear to be dancing or jumping joyfully. The text "Going into Contemporary Issues" implies that this is the state of the person before they encounter contemporary issues. It suggests optimism and naiveté.</span></p>
<p><span class="blockquote-long">2. Bottom Image (After): The bottom part of the meme shows the same characters but edited to look like pandas in a black-and-white photo, suggesting they are worn out or have lost their original color and vibrancy. The photo is also grainy, which could imply a historical or tired look. The text "Leaving Contemporary Issues" indicates that this is the state after dealing with contemporary issues. This part suggests that the experience of engaging with contemporary issues has been draining or negative, taking away the color and joy similar to how vibrant Teletubbies have turned into monochrome pandas.</span></p>
<p><span class="blockquote-long">3. The humor comes from the dramatic transformation from a state of colorful happiness to one of grayscale exhaustion, metaphorically representing the effect that engaging with modern-day problems can have on a person. The exaggeration of the change plays on the common feeling that contemporary issues can be overwhelming and disheartening, which many people can relate to. The use of well-known children's show characters, generally associated with innocence and simplicity, to convey a message about the complexities of modern life creates a juxtaposition that is also a source of humor.</span></p>
</blockquote>
<p>ChatGPT fares pretty well here in terms of characterizing the narrative and what it implies about the content of the two captions. There are a few hallucinations—the Teletubbies obviously aren’t intended to look like pandas in a black-and-white photo—which veer the response somewhat off course.
There’s also something odd and Conehead-like about ChatGPT’s response. It reads like an agent that has never hung out with humans and has to spend excessive effort narrating things that no person would need any conscious effort to interpret.
To be fair, there is probably little need to ever rely on AI for interpreting a meme like this. The context is easily shared between the individual who created this meme and the small group that enjoyed it, and there’s nothing really universal or objective about this topic the way there is about our next meme.</p>
<h3>Memes with complex visual narratives</h3>
<p>This meme was also made in an Antimatter classroom, in an activity about the Colonization of the Carolinas. It’s worth noting that the activity, called Educated Guess, explicitly asks the student to create a puzzling meme that does not use any words from the teacher’s prompt. This meme is cartoon-like—the objects in the image tell a story of their own. </p>
<p><img src="https://antimatter.systems/_next/image?url=https%3A%2F%2Faoqysnuwpwgkhlzkdwnh.supabase.co%2Fstorage%2Fv1%2Fobject%2Fpublic%2Fimages%2Fc6464ebc-2e84-49ff-9177-141db3d712d3-10835274.png&w=3840&q=75" alt="">
<span class="caption">A meme made on Antimatter about The Colonization of the Carolinas</span></p>
<p>Here is ChatGPT’s response when we asked it to interpret this good meme that relies on a fairly legible visual narrative:</p>
<blockquote>
<p><span class="blockquote-long">This meme features the "Polandball" or "countryball" format, which is a user-generated Internet meme that personifies the various countries of the world and represents political and historical situations, stereotypes, and international relations through interactions between these personified countries.</span></p>
<p><span class="blockquote-long">Here's a breakdown of the elements in the meme:</span></p>
<p><span class="blockquote-long">1. Countryballs: There are three countryballs represented here, identifiable by their flags:
- The one on the left has the flag of Spain.
- The one on the right has the flag of the United Kingdom.
- The smaller one below with a feather on its head, looking annoyed, has the flag of South Carolina, which is a state in the United States.</span></p>
<p><span class="blockquote-long">2. Expression and Posture:
- The Spanish and British countryballs appear to be holding or pulling at the South Carolina countryball, which is represented as a torn piece of cloth.
- The South Carolina countryball has an expression of frustration or exasperation, with the caption "Really?" which suggests disbelief or irritation at the situation.</span></p>
<p><span class="blockquote-long">3. Historical Context:
- This meme likely alludes to historical events where both Spain and the United Kingdom had colonial interests in what is now the United States, with South Carolina being one of the original Thirteen Colonies.
-The torn cloth might symbolize the state being "pulled" or influenced by the historical colonial powers.</span></p>
<p><span class="blockquote-long">4. Current Event or Commentary:
- Depending on when the meme was made, it could be commenting on a current event or political situation. The meme might be suggesting that Spain and the UK are, metaphorically, trying to exert influence or control over South Carolina, which the state itself finds ridiculous or objectionable.
The meme uses humor and exaggeration to comment on historical or current political relationships. The simplicity of the drawings and the direct expressions are key to the humor and accessibility of these memes.</span></p>
</blockquote>
<p>ChatGPT performs quite poorly here. It correctly identifies the general subject but fails to pinpoint the meme’s commentary on colonialism's impact on Native Americans.
It’s sort of correct that, between the spare illustration style and the reference to countries, the imagery is evocative of <a href="https://www.reddit.com/r/polandball/">r/Polandball</a>, but that is not the correct origin of the imagery. It even seems distracted by the Polandball imagery, failing to identify that “the smaller one with a feather on its head” represents a Native American, and incorrectly asserting that that figure has a South Carolina flag.</p>
<p>It also hallucinates a torn cloth, which may be the source of some of its distraction or misinterpretation.</p>
<h3>Memes with no textual content</h3>
<p>This meme was taken from <a href="https://www.reddit.com/r/physicsmemes/comments/kncao1/made_with_paint/">r/physicsmemes</a> and represents the most challenging test for ChatGPT. The meme has no captions. What's more, it only makes sense to someone who is familiar with the <a href="https://en.wikipedia.org/wiki/Double-slit_experiment">double slit experiment</a>, which demonstrated that light can behave as either a wave or a particle depending on whether it’s being observed. This is known as <a href="https://en.wikipedia.org/wiki/Observer_effect_(physics)">the Observer Effect</a>.</p>
<p><img src="https://i.redd.it/pz6s3irlge861.png" alt="">
<span class="caption">A meme with no captions found on r/PhysicsMemes</span></p>
<p>Here is ChatGPT’s response when we asked it to interpret this superb meme that relies on a highly contextual visual narrative with no captions:</p>
<blockquote>
<p><span class="blockquote-long">This meme features a four-panel image with two distinct elements. On the left, there's a puppet-like character with a somewhat surprised or confused facial expression. On the right, there are drawings of what appear to be a door in two different states: closed and then heavily shaded.</span></p>
<p><span class="blockquote-long">The humor in this meme arises from the juxtaposition of the character's facial expressions and the state of the door. The first and third panels show the character looking towards the door with a neutral or inquisitive expression. The second and fourth panels depict the door, first normal and then with a dark scribble over it.
The pattern of the meme typically represents an initial state or concept in the first half, followed by a dramatic or absurd change in the second half. The character's reaction is what provides the comic effect; it looks at the initial state with normalcy and then turns back with a changed expression that suggests surprise, confusion, or realization after seeing the altered state.</span></p>
<p><span class="blockquote-long">In the context of this meme, the humor might come from the absurdity of the door's transformation, with no explanation or context given, and the character's bewildered reaction to it. This meme format is often used to convey reactions to surprising or inexplicable situations, with the door's alteration standing in for the surprising element.</span></p>
</blockquote>
<p>This response from ChatGPT is the equivalent of the <a href="https://www.youtube.com/watch?v=e6Y2uQn_wvc">Guy Goma BBC Interview</a>: It was doomed from the start.
Most importantly, ChatGPT misidentifies the illustration of the double slit experiment as a door, <em>“first normal and then with a dark scribble over it”</em>. Without that context, ChatGPT has nothing to say about what this meme conveys about the Observer Effect.</p>
<p>Interestingly, ChatGPT fails to identify the <a href="https://knowyourmeme.com/memes/awkward-look-monkey-puppet">Awkward Look Monkey Puppet</a> meme but does pick up on the way this meme is typically used: to express an impish disbelief. The only problem is that the imagery in this meme is not about surprise; it’s about literally looking at something and then not.</p>
<p>You can give ChatGPT an A for effort, as you can Guy Goma, who was not hired for the data cleanser role.</p>
<h2>Final note</h2>
<p>We’re as optimistic as anyone that ChatGPT will eventually pass all these tests, but also unclear on the timeline. It’s as easy to imagine ChatGPT 5 getting there as it is to imagine memes representing something of a last-mile problem for AI. Good memes aren’t expository; they’re highly contextual and puzzle-like, which makes them more obscure to an AI interpreter. </p>
<p>If you’d like to test this for yourself, you can always try this on ChatGPT 4 on web as we did, using the prompt: “Please explain this meme step-by-step and why it is funny”.</p>
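<p>If you’d rather script the test than use the web UI, the same prompt can be sent through OpenAI’s API. A minimal sketch, assuming the chat-completions vision payload shape as of late 2023 (the model name, token limit, and image URL are illustrative assumptions):</p>

```python
# Sketch: building a request that asks GPT-4 with vision to interpret
# a meme. The model name and limits below are illustrative assumptions.

def build_meme_request(image_url: str) -> dict:
    """Return a chat-completions payload asking for a step-by-step
    interpretation of the meme found at image_url."""
    return {
        "model": "gpt-4-vision-preview",
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": "Please explain this meme step-by-step "
                                "and why it is funny",
                    },
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        "max_tokens": 500,
    }

# To actually send it (requires the openai package and an API key):
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.chat.completions.create(**build_meme_request(url))
#   print(response.choices[0].message.content)

request = build_meme_request("https://i.redd.it/pz6s3irlge861.png")
print(request["messages"][0]["content"][0]["text"])
```

<p>The payload is built separately from the network call so you can inspect or log exactly what the model is asked before spending tokens.</p>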
<p>You’re also welcome to try out our best attempt at overcoming ChatGPT’s interpretive shortcomings: Our most up-to-date prompts that give ChatGPT some assistance are available on <a href="https://antimatter.systems/sorcerer">Sorcerer</a>, our AI-powered product for students and learners. See for yourself if you can interpret the memes created by students on Antimatter.</p>
<p>Jonathan Libov</p>
<h1>Remember when</h1>
<p>April 2, 2023</p>
<p>I’m mid-life now, which means I’ve lived long enough to see the trends from my formative years, the 90’s, come back in fashion. When it comes to literal fashion—Friends sweatshirts, baggier jeans and so forth—that feels charming and regular. But when it comes to digital fashion, that feels weird and bad.</p>
<p>Consider the return of Chicago font, 8-bit artwork and terms like Web3. Maybe I’m overthinking this; maybe the return of Chicago and 8-bit in digital design is just another instance of fashion being an eternal remix. But there’s an obvious difference between fashion and products—fashion is just how we adorn ourselves, while digital products should always point to a new and better future. It’s really problematic if the ethos of digital product design is that we need to restore things to the way things once were.</p>
<p>Some examples of retro design:</p>
<p><img src="https://whoops-images.s3.amazonaws.com/images/coinage.jpg" alt="Coinage">
<span class="caption">In the absence of a new idea, Coinage recreated an old Bill Gates photoshoot</span></p>
<p><img src="https://whoops-images.s3.amazonaws.com/images/seedclub.jpg" alt="SeedClub">
<img src="https://whoops-images.s3.amazonaws.com/images/seedclub2.jpg" alt="SeedClub">
<span class="caption">In the absence of a new idea, SeedClub merely lifted Apple’s original OS styles</span></p>
<p><img src="https://www.larvalabs.com/public/images/product/cryptopunks/punk-variety-2x.png" alt="Cryptopunks">
<span class="caption">In the absence of a new design motif, Cryptopunks appeals to the nostalgia of early gaming</span></p>
<p><img src="https://whoops-images.s3.amazonaws.com/images/poolsuite.jpeg" alt="Poolsuite">
<span class="caption">In the absence of a modern spin on radio, Poolsuite just cribs the Apple II OS design</span></p>
<p><img src="https://whoops-images.s3.amazonaws.com/images/vacation.gif" alt="Vacation">
<span class="caption">Vacation sunscreen, a reminder that in spite of all the grunge and Prozac, the 90’s were more carefree</span></p>
<p>(Aside: Forgive me if you had a hand in designing any of the above. If your response is, "Wow this dude has gotten cranky in middle age," you'd be right.)</p>
<p>And then you’ve got the mother of all Remember When’s: the term “Web3”
<img src="https://whoops-images.s3.amazonaws.com/images/web3.gif" alt="Web3">
<span class="caption">The term “crypto” was too original, so out of impatience for traction we switched to something more familiar</span></p>
<p>The pervasiveness of the term Web3 is sort of forgivable if it merely helps to complete the <a href="https://twitter.com/cdixon/status/1459036992050716697?lang=en">Read (Web1) => Write (Web2) => Own (Web3) gestalt</a>. But it’s really lame inasmuch as it was pushed top-down by folks with lots of things to sell, similar to "Metaverse". At the peak of Metaverse/Web3 it really felt like the 90’s, when PepsiCo was pushing Pepsi as <a href="https://whoops-images.s3.amazonaws.com/images/pepsi.jpg">"the choice of a new generation"</a>, except instead of approaching it with an alright-look-I’ll-at-least-try-it attitude that befits corporate marketing, we interpreted Metaverse/Web3 as a bottom-up phenomenon.</p>
<p>And then there’s the most sinister reason for the use of Web3, Chicago font, and 8-bit art: It’s an appeal to a time when lots of optimistic, technical- and design-minded people had very wealthy futures in store. It presupposes that there’s a new, lucrative increment of Web_ every 10 years as reliably as Moore’s Law predicts the shrinking of circuits.</p>
<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">It's worse than that<br><br>Implicit in "web3" is the message that "you missed the lucrative boat of web2 but don't worry there's another lucrative boat embarking now". Which is partly why web3 often feels brute-forced and certainly why nostalgia is the primary aesthetic in crypto</p>— Jonathan Libov (@libovness) <a href="https://twitter.com/libovness/status/1615379356766838787?ref_src=twsrc%5Etfw">January 17, 2023</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">This captures a sentiment I’ve felt. I want there to be a next big thing in software and to live 2010s again. But there doesn’t always <em>have</em> to be one. And the more I learn about fascinating things are going on in other fields, the more bored I am listening to other sw guys. <a href="https://t.co/hncDpxuWaj">https://t.co/hncDpxuWaj</a></p>— Dan Grover (@DanGrover) <a href="https://twitter.com/DanGrover/status/1474276063882665984?ref_src=twsrc%5Etfw">December 24, 2021</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p>For as little as I’ve followed ChatGPT, Midjourney and other AI products—I run a non-AI startup and have two kids at home that occupy all of my time—it’s refreshing to see that their proliferation has required zero marketing, just product. And <em>that</em> feels much more like the halcyon days of Web1 and Web2 than Web3 and its accouterments ever have.</p>
<p>Crypto, as I prefer to call it, is in fact a bottom-up phenomenon, but in my estimation real crypto stuff requires the kind of patience that only people who couldn’t bear to work on anything else would have. The Homebrew Computer Club, after all, wasn’t guaranteed a generation of wealth. If crypto is—<a href="https://twitter.com/libovness/status/1635626651475165184?s=20">still very plausibly</a>—10 years away from mainstream use, how many people and investors really have the stomach to wait that long?</p>
<p>Appealing to the aesthetics of that era is like marketing a risk-free path to prosperity: A movement that relies on hindsight more so than foresight, an industry that will welcome your community-building skills if you don’t have any technical skills, and a mutiny against the fiat system that is somehow supposed to be sanctioned by US authorities. If you’re swayed by this aesthetic, it may be a sign you’re eager to be swayed. (Which honestly is totally fine if you’re buying sunscreen).</p>
<p>As <a href="https://www.youtube.com/watch?v=j9kL3it5cFI">Tony Soprano put it</a>, "'Remember When' is the lowest form of conversation". Which is to say, things that are the future should really look like the future, not the past, and as Tony found out himself, that’s difficult.</p>
<p>Jonathan Libov</p>
<h1>The honor system is the best system</h1>
<p>December 12, 2022</p>
<p>New technology often stirs fears of cheating or stealing. Some examples:</p>
<ul>
<li>Napster and torrents, which were overtaken by Spotify and other streaming platforms</li>
<li>Deepfakes, which as it turns out <a href="https://medium.com/swlh/deepfakes-paranoia-considered-pointless-507ca4705ee6">have very little audience</a></li>
<li>Writing an essay for school via GPT, which has very recently <a href="https://avc.com/2022/12/sign-everything/">forced questions about how to conduct education</a></li>
</ul>
<p>Now consider this image:</p>
<p><img src="https://whoops-images.s3.amazonaws.com/images/skis.jpg" alt="Ski rack"></p>
<p>These are skis put aside by skiers on a mountain while they're drinking hot chocolate in the chalet. Anyone, including that guy sitting in the Adirondack chair looking at his phone, could steal any of these skis. He could put a dozen pairs in his car and sell them, or he could take just one and never have to rent again. But he doesn’t, because people generally don’t cheat.</p>
<p>Life presents us with opportunities to cheat all the time:</p>
<ul>
<li>Bittorrent is still alive and well</li>
<li>You could just walk out of a restaurant or a taxi without tipping. Remember, Uber ratings happen <em>before</em> the tip is revealed.</li>
<li>You can steal someone else’s joke or media on social media and present it as your own</li>
<li>You can lie to would-be investors about the performance of your business</li>
</ul>
<p>Alex Danco <a href="https://alexdanco.com/2019/12/17/ten-predictions-for-the-2020s/">on the curious infrequency of scams in Silicon Valley</a> (namely, founders absconding with a few million dollars that they immediately, and legally, convert into a bonus):</p>
<blockquote>
<p>I’ve long believed that the real mystery of Silicon Valley isn’t the outsider question, “How is Silicon Valley so wild and crazy”, but actually the insider question: “How is Silicon Valley so stable?” It’s built on speculative finance, it’s full of experiments whose outcome you can’t know for years, and it has to move fast enough and fluidly enough that (at early stage anyway) it effectively works on the honour system despite the FOMO environment. It’s so interesting how, in this environment, there aren’t any scams like this...</p>
</blockquote>
<p>Why is it that people don't cheat more often when it seems like they'd have something to gain and almost nothing to lose?</p>
<h3>Why don’t people cheat more often?</h3>
<p>Sure, there are people out there who, when faced with these opportunities, take the immoral path. But most don’t. Three reasons:</p>
<p><em>Fear of getting caught</em>. This is why we have a justice system, and to be clear, I’m not suggesting that if convenience stores or car dealerships switched to a “pay what you want” model, people would generally pay fair value. But from Bittorrenting to tipping to stealing memes on social media, there’s almost no mechanism for meting out punishment, and people still generally do the right thing. (Yes, of course every Instagram meme account is just re-posting others’ memes, but I think everyone, blockchain-provenance maximalists aside, would agree we have a pretty well-functioning meme economy.)</p>
<p><em>Self-esteem</em>. I’d venture to guess that 90% of people believe they’re in the top 10% of moral behavior, and that’s an important belief to maintain in order to get out of bed every morning. By contrast, the reason people believe that sociopaths like SBF, Elizabeth Holmes and Bernie Madoff will eventually get caught—that is, why we all know they won’t cheat just enough to get ahead and then stop—is that tolerance for lying is a fatal flaw; a person with that kind of sociopathy has as much chance at survival as an organism with no fear of pain or ability to experience it. It’s also worth noting that people’s overestimation of their own rectitude explains why they overestimate other people’s proclivity to cheat: surely the 90% of people who are less moral, everyone thinks, must be jumping at the chance to cheat. But of course only 10% of those people are actually in the top 10%.</p>
<p><em>People are convenience-seekers more than they are capital-seekers</em>. We all know how the Napster/Bittorrent story ends: Spotify and Netflix etc. have such good user experiences that they’re worth the $12/month. It’s for similar reasons that people don’t just steal other people’s skis; ski rentals are convenient enough to be worth the price. It’s not so much an economic calculation as a Larry-David-esque <em>eh</em> when comparing the convenient paid option with the free immoral option. Heck, even the ski dude on the Adirondack chair fiddling with his phone is evidence of this; fiddling with your phone is more convenient than stealing a bunch of skis.</p>
<h3>Anxiety about cheating generally reveals anxiety about systems that are already flawed</h3>
<p>As Byrne Hobart puts it in <a href="https://medium.com/swlh/deepfakes-paranoia-considered-pointless-507ca4705ee6">Deepfakes Paranoia Considered Pointless</a>, the anxiety about deepfakes is similar to the anxiety we once had about Napster: it’s generally an expression of a system whose distribution monopoly is eroding. </p>
<p>Music once had an atom-based distribution monopoly on the production of records and CDs, the same way the media once had a distribution monopoly on plopping newspapers on people’s driveways. The erosion of those monopolies naturally causes finger-pointing at the bad things that will happen in this new world: artists not getting paid because of torrents, and people with no editorial standards influencing the conversation. Sure, we experienced some pain and attrition with Napster, just as we’re experiencing some pain now with the unbundling of media organizations. But if you’re a media organization today, you might look at how the music groups all survived just fine. If Soundcloud was born out of a new bottom-up promise in music, and Substack is to media what Soundcloud is to music, that’s a pretty tolerable outcome.</p>
<p>The same is true in education, where there are newfound concerns with cheating via GPT. But in reality cheating isn’t really much of a problem:</p>
<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Cheating is a minor issue in education and the AI cheating arms race that is about to occur doesn’t matter that much.<br><br>Education is primarily about helping a student willingly learn. It is not primarily about validating qualifications for a credential.</p>— Matt Bateman (@mbateman) <a href="https://twitter.com/mbateman/status/1598876316794658818?ref_src=twsrc%5Etfw">December 3, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p>Cheating isn’t a problem in school because deep down, even the less motivated kids acknowledge that there’s <em>some</em> merit in school, and cheating your way through K-12 and college would render all those years a waste of time and opportunity. Also, cheating feels bad.</p>
<p>It’s also true that institutional Education has some monopoly on education, but we can deal with that. Some new technologies have become inconveniently convenient to the status quo, but not only is human behavior pretty <a href="https://www.nytimes.com/2021/06/17/style/lindy.html">Lindy</a>, people are generally more upright than we give them credit for.</p>
Jonathan Libov

Creating and playing games (2022-12-07)
<p>I, for one, welcome our AI underlords. So does Ben Thompson. Yesterday in <a href="https://stratechery.com/2022/ai-homework">AI Homework</a>:</p>
<blockquote>
<p>Instead of insisting on top-down control of information, embrace abundance, and entrust individuals to figure it out. In the case of AI, don’t ban it for students — or anyone else for that matter; leverage it to create an educational model that starts with the assumption that content is free and the real skill is editing it into something true or beautiful; only then will it be valuable and reliable.</p>
</blockquote>
<p>Come at the king (of business illustration), you best not miss, so I’ll do my best to illustrate what I think AI means for learning and education. One of the bedrocks of educational theory is <a href="https://cft.vanderbilt.edu/guides-sub-pages/blooms-taxonomy/">Bloom’s Taxonomy</a>, a framework that ranks categories of learning from simple to complex, concrete to abstract, and strictly necessary to aspirational. Here it is, without markup:</p>
<p><img src="https://whoops-images.s3.amazonaws.com/images/Blooms-Taxonomy-650x366.jpg" alt="Bloom’s Taxonomy">
<span class="caption">Bloom’s Taxonomy</span></p>
<p>As far as academic frameworks go, Bloom’s Taxonomy is remarkably durable and intuitive (it’s probably durable because it’s so intuitive). It’s the framework through which I think all these <em>What happens to learning in a ChatGPT world?</em> questions should be viewed.</p>
<p>The most common but also the most rote form of learning is <em>Remembering</em>. Utilitarian as it may be, you need to memorize the dates of the Battle of Yorktown in order to understand the story of the Revolutionary War, much as you need to memorize a chunk of the Periodic Table of Elements. All the way at the top of Bloom’s Taxonomy you have <em>Creation</em>, the most elusive and aspirational form of learning. That is, to truly understand a subject matter, you must be capable of producing a novel work or story, or <a href="https://whoo.ps/2022/04/05/shitposting-is-the-highest-form-of-consciousness">even a novel joke</a>, with it.</p>
<h3>Learning products in Bloom's Taxonomy</h3>
<p>If you take Bloom’s Taxonomy as the model for learning and plug in all of today’s tooling—now maybe including ChatGPT—you get something like this:</p>
<p><a href="https://whoops-images.s3.amazonaws.com/images/blooms.png"><img src="https://whoops-images.s3.amazonaws.com/images/blooms.png" alt="Bloom’s Taxonomy Market Map"></a>
<span class="caption">Bloom’s Taxonomy wrt commoditization</span></p>
<p>The closer you are to the bottom of Bloom’s Taxonomy, the greater the risk of being commoditized by AI. Paraphrasing Wikipedia on your way to a five-paragraph essay isn’t exactly new, and it’s now even easier with GPT-3. Nor does it seem far off at all that ChatGPT could reproduce rote exercises and utilities like flashcards and coding and math exercises.</p>
<p>It's certainly true that <em>creating</em> flashcards is actually more useful than practicing with flashcards you found. This is foundational to Anki and even Quizlet—my understanding is that Quizlet's marketplace of flashcards belies the core, valuable activity of creating flashcards on your own or with friends. But we're talking human behavior here; ChatGPT flashcards are to self-made flashcards as Keurig is to roasting and grinding your own coffee beans. The former will dominate.</p>
<p>I think Ben Thompson mostly got it right. Evaluating whether and in what ways the AI captured the subject matter accurately is not only an interesting exercise (albeit a little contrived), it’s arguably more valuable than reproducing the material that other learners have already produced a million times over. Flashcards and exercises <em>are</em> commodity products that computers can generate better than humans. We simply had an interim period where some online tools helped facilitate that kind of production because the internet transforms distribution.</p>
<p>As Austen Allred, founder of the coding academy BloomTech (yes, it’s the same Bloom), pointed out yesterday, machines can accelerate all this rote learning—<em>Remembering</em> and <em>Understanding</em> in Bloom’s Taxonomy terms—by enabling more people to play more games against the computer (more on that below).</p>
<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">In 1984 an education researcher named Benjamin Bloom found that the median student using personalized tutoring and mastery-based progression performs better than 98% of students in a traditional classroom.<br><br>Computers and AI are now making those two things cost $0.<br><br>Buckle up. <a href="https://t.co/RlJGVtikGf">pic.twitter.com/RlJGVtikGf</a></p>— Austen Allred (@Austen) <a href="https://twitter.com/Austen/status/1599732259774697472?ref_src=twsrc%5Etfw">December 5, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p>At the tippy top of the Taxonomy, the outlook for learning and education is arguably brighter than ever. Because all the utilities at the bottom will get commoditized, we have more time to spend in the most valuable area of learning: Creation and Evaluation. What’s more, new tools and services make Creation and Evaluation easier than they’ve ever been. Think:</p>
<ul>
<li><p>Creating games on <a href="https://replit.com">Replit</a></p></li>
<li><p>Telling authentic stories via <a href="https://flip.com">Flip</a></p></li>
<li><p>Creating learning memes on <a href="https://antimatter.systems">Antimatter</a> (full and shameless disclosure: I’m the founder of Antimatter)</p></li>
</ul>
<h3>A Cat and a Mouse</h3>
<p>Why won’t creating games, telling stories, or creating memes be commoditized the way exercises and utilities will? Because, let’s not forget, ChatGPT isn’t studying history so much as it’s <em>learning</em> from the stories that humans have told about history. Every novel story created by a human is a story that ChatGPT has yet to learn. And every story told by ChatGPT is potential raw material for humans to tell a new, novel story. We’re one step ahead by definition.</p>
<p>Consider <a href="https://antimatter.systems/studio/1st-block/unit-revolutionary-war-review">these memes created by students on Antimatter about the Revolutionary War</a>. Or this Guess the Macbeth Character meme created by a teacher on Antimatter:</p>
<p><img src="https://whoops-images.s3.amazonaws.com/images/macbeth.jpg" alt="MacBeth">
<span class="caption">To be or not to be (intensely human)</span></p>
<p>There remains a very wide gap between AI’s ability to tell good jokes or puzzle-like stories and what humans are able to produce. This isn’t totally a coincidence. Memes are in some ways a <em>reaction</em> to the legibility of the internet today, the very same legibility that serves as one of the foundations of LLMs. It’s inevitable that LLMs will become capable of being genuinely funny or creating real puzzles, but that capability may be closer to the last mile than to wherever we are today, and it may turn out to be an infinite cat-and-mouse game.</p>
<h3>More Games</h3>
<p>Speaking of games, it's my general belief that technology inevitably makes everything more game-like. Less time putzing around, more time playing measurable status games on Facebook and Twitter for Likes and Retweets or actual games on Fortnite, Minecraft, etc. Less time watching sports, more time playing fantasy sports. Less time saving with a steady APR, more time investing in markets that go up and down. </p>
<p>In a sense, the <em>What happens to Learning and Education in a world where content is free and AI is abundant?</em> question could be re-phrased in game-like terms: <em>Students + AI vs. Education</em>. To be clear, that's the fear ChatGPT is drumming up; it's not and never should be adversarial. Still, to that end, I think it’s instructive to look at the diverging outcomes of the two games that have been solved in some form by math. </p>
<p><em>Chess</em>. It seems quaint now to reflect on the time when we weren’t sure if computers would ever beat the best humans at chess. They did, long ago, but it’s uplifting to think that chess is <a href="https://www.chess.com/article/view/why-is-chess-so-popular-right-now">nevertheless more popular than ever</a>. To boot, people still love playing against the computer, or against a highly liquid supply of human competitors, at incrementally increasing levels of difficulty. This is a testament to the durability of puzzles.</p>
<p><em>Baseball</em>. Math hasn’t ruined baseball so much as baseball management’s inertia in saving the game from the math has. The math tells managers to pull starting pitchers and swap relievers in and out according to their as-little-as-one-batter specialties; the math also teaches you not to steal bases and to shift infielders in ways that eliminate the mano a mano qualities that once made baseball fun to watch. Whether it’s out of inertia or nostalgia, baseball’s management has been dreadfully slow in adjusting the game to preserve what made it special. It’s a lesson in inaction.</p>
<p>While chess and baseball's courses aren't diametrically opposed—baseball needs to be saved while chess naturally fits more seamlessly into the world we live in now—we're nevertheless presented with two models of dealing with the math. With respect to learning and education, we should avoid baseball's inertia and nostalgia at all costs. To reiterate Ben Thompson's point, "Instead of insisting on top-down control of information, embrace abundance, and entrust individuals to figure it out." </p>
<p>Similarly, we should embrace the infinite distribution of challenges that computers and the internet have brought to the world of chess. Consider how viewers of chess game streams look at the model during the game, which enhances the conversation around the game and makes everyone smarter. More liquidity, more infinite games for learners to play. There's little to fear but inaction.</p>
Jonathan Libov

Cynical optimism for crypto (2022-10-03)
<p>Here are my three most heterodox takes on crypto:</p>
<ol>
<li>It’s often said that UX is what’s holding crypto back. I think crypto would have far <em>fewer</em> users if the UX were better. Crypto’s mediocre UX provides a sort of positive friction that augments a sense of in-groupiness.</li>
<li>Crypto is more like Scientology than it is web2, or prior waves of technology, in that it’s driven by a strange brew of religion and avarice. In some respects crypto resembles today’s far political left, in its religiosity and inscrutability, more than it does web1 or web2, which were very utilitarian.</li>
<li>Many people working in crypto have a stated preference for building the future but a revealed preference of longing for the past. Between 8-bit art, <a href="https://twitter.com/VittoStack/status/1563152763441147904?s=20&t=6L7nkBuFAcQDTdv03joyXw">market maps that enumerate web3 versions of web2 products</a>, and even the term web3 itself, many of web3’s builders view web1/web2 not with healthy competitiveness but with deep nostalgia: longing for the increasingly inaccessible freedom and paths to prosperity that web2 generated in its early days.</li>
</ol>
<p>With all that said, I’m quite optimistic that crypto is going to work.</p>
<p>Why? For one, crypto has a great use case right now: belonging. With churchgoing and other forms of IRL community on the decline, many things need to fill the void. And since having private thoughts and beliefs separate from work is sort of passé, crypto is an excellent vehicle for infusing work-life with a community-driven mission. This renders the “Isn’t it just tokens and financialization all the way down?” critique sort of moot: After all, religions and other tight-knit communities also work just fine without being reducible to underlying, real-world, income-bearing assets. They just have other kinds of growth loops. Good luck competing with religions.</p>
<p>And of all the religions we could have invented to replace what’s on the decline, crypto is a great one. Satoshi and Vitalik’s contributions to this world have been as selfless and benevolent as one could ever hope for: They’ve inspired an entire generation of young, capable people to be relentlessly optimistic, which is awesome. There are surely thousands of S-Tier people who would work on really hard problems in crypto in their spare time even if it didn’t pay. </p>
<p>But of course crypto does pay, and that’s where all the analogies to web2 and <em>every other technology wave I can think of</em> break down. As Antonio García Martínez articulated, Web2 was product-first and is still struggling to figure out business models; Web3 is business-model first and struggling to figure out product.</p>
<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">In many ways web 3 is the reverse of web 2. <br><br>Web 2: Find a viral consumer use case, and figure out how to monetize it later. <br><br>Web 3: Get the economics of how to provide the infra behind a service right first, and then figure out the consumer use case later.</p>— Antonio García Martínez (agm.eth) (@antoniogm) <a href="https://twitter.com/antoniogm/status/1534608112581169152?ref_src=twsrc%5Etfw">June 8, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p>Though it incentivizes a lot of unsavoriness and scamminess due to the economics, a technology wave with a business model and distribution nearly baked in is quite a growth loop.</p>
<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">If all the money and brains are running in one direction, like it or not, that's where the future lies.</p>— Antonio García Martínez (agm.eth) (@antoniogm) <a href="https://twitter.com/antoniogm/status/1574093438009516032?ref_src=twsrc%5Etfw">September 25, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p>Which is to say that crypto has the benefit of all the fervor and <a href="https://twitter.com/fintechjunkie/status/1541450811066155009?s=46&t=K0dtpNVrprizyNyleOr-_A">convenient obfuscation</a> that come with religion <em>and</em> near-term, irresistible, liquid financial incentives. It seems like that will be more than enough to keep lots and lots of people hanging around the rim until the Fed has a major liquidity crisis or some nation breaks and goes rogue, and crypto breaks through.</p>
<p><img src="https://whoops-images.s3.amazonaws.com/images/crypto.png" alt="Waiting for crypto"></p>
<p>Were I 10 years younger (not quite as young as the kid on the tricycle, but younger than I am now) I’d surely be patient and optimistic enough to invest my time in crypto. But now I’m the dad, and I’m counting on the money I’ve put in crypto to put that kid through college. I’m optimistic.</p>
Jonathan Libov

Shitposting is the highest form of consciousness (2022-04-05)
<p>Here is the order of consciousness, from the most base to the noblest:</p>
<ol>
<li><em>Uninformed complaining</em>. Everyone has someone in their life who seems poorly read or uninquisitive yet still feels empowered to complain about the state of affairs. They complain in conversation or on social media. This is distinct from people who don’t have skin in the game because many are not fortunate enough to afford the skin.<br><br></li>
<li><em>Informed fandom</em>. Some people are well read or inquisitive in nature, though it’s often obvious that they’re always arguing from priors and they tend to focus much more on what’s wrong with the world than what can be done in a definite, actionable way. Supporting <em>anything</em> that moves the needle for a cause is mere fandom, even if you’re well read. Folks who fall in this camp are prone to prattling and engaging, which is better than hiding behind mere slogans, but are disinclined to listen. <br><br></li>
<li><em>Informed exposition</em>. These are people who are well read, inquisitive, and generally present a positive, definite, and actionable vision of the future. But they express themselves in a fairly straightforward manner and sometimes come across as combative or obstinate with people who disagree. Don’t get me wrong, this is on the noble end of the spectrum. Everything <a href="https://twitter.com/DKThomp">Derek Thompson</a> is <a href="https://www.theatlantic.com/author/derek-thompson/">writing</a> and podcasting these days to help us understand and improve our world is extraordinarily noble and never combative. It’s necessary, but it’s not sufficient.<br><br></li>
<li><em>Shitposting</em>. There’s an unusually intuitive framework for learning called <a href="https://cft.vanderbilt.edu/guides-sub-pages/blooms-taxonomy/">Bloom’s Taxonomy</a>, which emphasizes that Creation and Evaluation are the highest form of learning. Why? Because exposition often, but not always, relies on regurgitation and repetition (the bottom of Bloom’s Taxonomy), whereas creating something new based on the subject matter relies not only on a fundamental understanding of that subject but also the opposing opinions on that subject (also see Bryan Caplan’s <a href="https://www.econlib.org/archives/2011/06/the_ideological.html">Ideological Turing Test</a>). It’s impossible to tell <em>good</em> jokes about subjects you don’t fully understand, and very difficult to tell jokes that people with opposing opinions will enjoy and grapple with. To accomplish the latter you need to be informed enough about opposing opinion, generous enough to take it seriously, and above all confident enough to tell a joke that, taken at face value, inverts your actual opinion. This is shitposting.<br><br></li>
</ol>
<p>An important distinction between Informed Exposition and Shitposting, the two more noble forms of consciousness, is the generosity expressed by crafting your opinion in the form of a puzzle. From Wordle to buying JPEGs to DAOs to game shows with strategic elements (Survivor, Top Chef) to gaining clout on social media to strategy games (Factorio, Monopoly), people love puzzles. They bring humor and joy through manufactured friction; they bring us together.</p>
<p>From the internet's founding until quite recently (Netscape through TikTok), the battleground for puzzles was mostly in user experience. One of the most useful <a href="https://twitter.com/BradUSV/status/78512458419015680">investing theses</a> of all time ran its course through the mid-2010s. The game back then was this: How do you craft interactions between people and data that so effectively remove friction from computing that you can suck in an entire ecosystem? This was true from Snapchat to Salesforce.</p>
<p>But now we’ve solved just about all of the user experience problems. There’s a playbook and even a Figma library of components for whatever kind of experience you want to craft. With user experience solved, those puzzles have moved up the stack to content and social systems. Namely, how do you manufacture content that’s challenging and frictionful (think: puzzles) enough to engage? </p>
<p>In the realm of engagement and persuasion on the internet, that now takes the form of shitposting, and its name belies its generosity.<br><br></p>
<hr>
<p><em>Shameless but earned plug</em>: Raising consciousness through shitposting is at the very core of the learning platform we’re building at Antimatter. Here's our <a href="https://twitter.com/universeandmore?lang=en">Physics Teacher by Day/Front End Engineer by Night</a> on <a href="https://youtu.be/GKE-0_GXbNo">Antimatter and Bloom's Taxonomy</a>. In particular we’re hiring designers (see <a href="https://antimatter.systems/jobs/head-of-design">here</a> and <a href="https://antimatter.systems/jobs/ui-designer">here</a>) — we’d love to talk.</p>
Jonathan Libov

What the Turing test got wrong (2022-02-17)
<p>The most common manifestation of the <a href="https://en.wikipedia.org/wiki/Turing_test">Turing test</a> looks something like this: A human judge has conversations with two participants, one a person and one a computer, with their identities obscured. If the judge can’t reliably identify which is the human, the computer (and its programmer) has passed the Turing test.</p>
<p>What Turing and almost every technological optimist predicted is that computers would eventually become adept enough to sound like humans. What would have been more difficult to predict is what has actually come to pass: humans now speak more like computers. In other words, it’s not that computers merely caught up to human conversational facility; the two converged.</p>
<p>The cause is legibility. When Google and others made the internet a marketplace of information, they set the criteria for getting discovered. It was all innocent enough when Google and others asked for metadata to help bespoke publishers get surfaced, but then content farms and <a href="https://alexskra.com/blog/the-mermaid-is-taking-over-google-search-in-norway/">other neat tricks</a> figured out how to game the system. The latter is a technical hack that eventually gets quashed; the former undermines the criteria for the Turing test.</p>
<p>Consider recipes. It is <em>really</em> weird that the only way to get a highly legible, structured recipe seen from a website that doesn’t already have lots of adjacent SEO juice is to prepend it with a seemingly illegible, unstructured backstory about the recipe. Examples taken from the search results for “BLT recipe”, via <a href="https://twitter.com/byrnehobart">Byrne Hobart’s</a> <a href="https://onezero.medium.com/war-peace-bacon-eggs-understanding-the-endless-recipe-site-preamble-a890b3c55ad">Why Recipe Bloggers Make You Scroll so Far to Read the Recipe</a>:</p>
<p><img src="https://whoops-images.s3.amazonaws.com/images/blt1.png" alt="BLT">
<img src="https://whoops-images.s3.amazonaws.com/images/blt2.png" alt="BLT"></p>
<p>To be clear, those are Probably-Not-<em>Un</em>True-type stories about BLTs that no person wants to write and no one wants to read, but it’s the only way to vault a blog onto the first page of Google results. And they’re on the first page because people want authenticity, and Google knows that, so it’s scanning the web for the most legible sources of authenticity. It’s like the Turing test, except that the obscured person is talking with the obscured computer, and the computer is talking with the person taking the test.</p>
<p><img src="https://whoops-images.s3.amazonaws.com/images/turingtest.png" alt="The Turing test"></p>
<p>This isn’t just a Google phenomenon, nor even a computer-legibility phenomenon. It’s also a social signaling phenomenon on social platforms. Inasmuch as fortune cookies are mechanical, you get humans converging to the mechanical once they’ve garnered enough followers:</p>
<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">The rhetorical style of any Twitter account that continues to gain followers converges on that of a fortune cookie.</p>— Eugene Wei (@eugenewei) <a href="https://twitter.com/eugenewei/status/998714814749802496?ref_src=twsrc%5Etfw">May 21, 2018</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p>While I adore the unrelenting optimism in web3 communities, I’ll be damned if the “gm”/“wagmi”/“maxi”/etc. phenomenon isn’t an exercise in making your signal so common and so legible that parsing out any real authentic content has become almost impossible. Anecdotally speaking, I think the below is less a Discord problem than a <a href="https://www.urbandictionary.com/define.php?term=pbkac">PBKAC</a> problem:</p>
<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Opinion - Discord is unusable. Endless feed of stuff, all the time, everything gets lost, zero curation, cant track anything, like 1000 people shouting at each other in a bar, like Twitter without a basic algorithm. Urgh! We cant built web 3 communities on this...</p>— Raoul Pal (@RaoulGMI) <a href="https://twitter.com/RaoulGMI/status/1493760406657150977?ref_src=twsrc%5Etfw">February 16, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p><a href="https://twitter.com/dkb868">@dkb868</a>'s <a href="https://dkb.io/post/google-search-is-dying">“Google Search is Dying”</a> laid out the case that the reason Reddit has still, somehow, yet to peak on Google buzz is that people are increasingly explicitly seeking out more authentic experiences, where “authentic” means something like, “Places where humans still talk like humans”:
<br></p>
<blockquote>
<p><span class="blockquote-long">[There’s] some general sense that the authentic web is gone. The SEO marketers gaming their way to the top of every Google search result might as well be robots. Everything is commercialized. Someone’s always trying to sell you something. Whether they’re a bot or human, they are decidedly fake.</span></p>
<p><span class="blockquote-long">So how can we regain authenticity? What if you want to know what a genuine real life human being thinks about the latest Lenovo laptop?</span></p>
<p><span class="blockquote-long">You append “reddit” to your query (or hacker news, or stack overflow, or some other community you trust).</span></p>
</blockquote>
<p>This is also the thrust of Ben Thompson’s <a href="https://stratechery.com/2020/social-networking-2-0/">Social Networking 2.0</a>: “[The traditional social networks] still have value, but primarily as a tool for distribution and reach of content that will increasingly be created in one place, and discussed in another.”</p>
<p><img src="https://i0.wp.com/stratechery.com/wp-content/uploads/2020/12/socialv2-4.png?resize=768%2C575&ssl=1" alt="Social Networking 2.0"></p>
<p>One problem with computer-like human speech on the internet is the very nature of text: it’s easy to hide in text. You can paraphrase Wikipedia your way to a five-paragraph essay in school, the same as you can publish a popular tweet where you and your audience only 70% know what you’re talking about. This isn’t a problem for Reddit because pseudonymity makes performativity useless, which makes hiding moot. It’s also why even YouTubers feel more authentic than much of Twitter, Instagram, and Facebook, even though YouTubers <em>explicitly</em> treat YouTube-ing as a profession.</p>
<p>The Turing test, in a way, implies some kind of elimination of friction: The friction between what computers can tell humans and what humans want to hear. Humans have sort of pushed the ball over the goal line with our recipes written for Google and our fortune cookie tweets, but it's a touchdown nevertheless. Fortunately, and optimistically, if Reddit’s continued ascent is any guide, it’s all the more evidence for the idea that technology doesn’t mostly displace stuff; it's mostly additive. Humans still gonna human.</p>
Jonathan Libovtag:whoo.ps,2005:Post/2092022-02-04T17:24:34Z2022-02-04T17:32:17ZWalking through doors<p><a href="https://en.wikipedia.org/wiki/The_Doorway_Effect">The Doorway Effect</a> is a psychological phenomenon in which one is prone to forget things they otherwise would not have forgotten had they not walked through a door. </p>
<p>Thank goodness for The Doorway Effect because for as much as we all strive to improve our memory — if I were a more accurate and fecund witness to my own history, I might be <a href="https://twitter.com/RamsNFL/status/1040067912478781440?s=20&t=xLPCqEX4txoCUmxpjf3nkg">coaching in the Super Bowl</a> — remembering everything, from the intellectual to the emotional, seems burdensome. Forgetting is part of how we cope with the monotony of everyday life and how we look at problems in new ways.</p>
<p>Doors are so important and so often overlooked. Walking through a heavy door with a tactile doorknob is one of the simple, great pleasures in life. Simon Sarris, in his series on <a href="https://simonsarris.medium.com/designing-a-new-old-home-part-1-cf298b58ed41">Designing a New, Old Home</a>:</p>
<blockquote>
<p><span class="blockquote-long">Hardware [in modern homes] is of poor quality, even in half-million-dollar-plus homes. Cheap and ugly faucets, light fixtures, doorknobs, and paper-thin doors dominate...[These] homes are not built by people intending to live in them. Instead, they are built by builders, who mostly want to flash-form 60 “units” overnight out of sticks and drywall. Everything from sun positioning to doorknobs becomes not just an afterthought, but a no-thought…How much you spend on something like knobs is up to you, but my feeling is that the hardware you physically interact with, the things you touch every day, are worth the extra cost. They should be heavy and feel like solid, purposeful objects, and not a hollow piece of brass connected to a hollow door.</span></p>
</blockquote>
<p>In my first few years living in New York City post-college, I lived in Prospect Heights in Brooklyn and took the Q train back and forth to the office where I worked near Union Square. The Q train travels on the Manhattan Bridge, which at the right times of day and times of year provides an immaculate, sun-low-in-the-sky view of the Brooklyn waterfront and the Financial District. When I crossed the bridge at the end of the day, daytime was unambiguously over.</p>
<p>I struggled severely working from home during the pandemic. Working from home while trading off with my wife looking after the kid, and then the kids as of July 2021, was unrelenting. Even our walks outdoors felt like a mere amble over to a different part of the room, or, at best, a slick passage through a cheap, hollow door.</p>
<p>I’ve been going to an office every single day since June 2021. The subway ride on the 4 train, from where I live now in Brooklyn, isn’t as cleansing—inasmuch as a <em>subway</em> ride can be cleansing—as those rides on the Q train, but it’s cleansing nevertheless. As is getting up from my desk and walking over to a meeting room for a meeting, be it over Zoom or, ideally, in-person.</p>
<p>Surely some part of this is my approaching middle age. Like someone who was young during the Great Depression and carried frugality and economic unease throughout their entire life, I have some calcified sense of how a productive day should be shaped. And that’s not to dismiss how much never-leave-your-home Zoom has changed my life: I raised money for <a href="https://antimatter.systems">a company</a> that way.</p>
<p>Much as homes have exterior and interior doors, our work life has exterior doors (the office and meeting room doors) and interior doors (whatever is on our computer screen). These are doors:</p>
<p><img src="https://whoops-images.s3.amazonaws.com/images/doors.jpg" alt="Doors"></p>
<p>In that vein, we ask every potential new hire at our company about their flow state. Where were you the last time you completely lost yourself in your work? What room were you in? What time was it? What app(s) did you have open? Who were you talking to, or not talking to? We think it’s our job to align everyone’s flow state with their personal mission at the company. </p>
<p>For now <a href="https://whoo.ps/2021/11/08/no-slack">we don’t use Slack</a>; we used Teamflow for some time but it fell into disuse. Strong opinions held loosely here, but I feel quite strongly that a team huddled in one giant room (where a different channel in Slack is more like ambling over to another part of the same room than entering a different room) is a huge tax on productivity and, in turn, improving and thriving in your craft. This is true even of group Figma collaborations; conversing and commenting in Figma is technically spatial but it’s mainly linear. That’s why it’s easy to discuss UI in Figma and very difficult to discuss product. We inevitably end up rewriting and organizing lots of comments from the Figma in a Google Doc, because doors.</p>
<p>Focusing almost exclusively on productivity rather than serendipity and camaraderie is of course a choice. It’s by no means to suggest that as a team we don’t experience any camaraderie; quite the opposite, when we congregate to show our work it’s all the more joyous because there are bright lines between meetings and the rest of our time, which is primarily devoid of meetings. I’d like to think we enjoy our time together all the more, because doors. </p>
Jonathan Libovtag:whoo.ps,2005:Post/2082022-01-21T14:36:51Z2022-01-23T16:11:03ZWeaponizing Christensen and Perez<p>Who are Web3’s two most important people? Satoshi and Vitalik? Maybe they’re Clayton Christensen and Carlota Perez.</p>
<p>Both Clayton Christensen’s Disruptive Innovation and Carlota Perez’s Technological Revolutions models came to prominence during the rise of Web2. The former teaches us, as <em>investors</em>, that incumbents are blind to innovations that serve some small segment of a growing market. The latter teaches us, as investors, that financial exuberance will eventually latch onto the toys enjoyed by that small, growing market, pull the future forward, and <em>make it real</em> faster than its inherent economics support.</p>
<p>These models are remarkably explanatory, in hindsight, for maybe two centuries of technological innovation, and hence highly lucrative for those who adopted those frameworks between 2003 and 2017, when they became broadly popular. That’s obviously within range of the rise of Web3; in other words, we already knew the playbook well when Web3 came along. And as a result, those models became more <em>causal</em> than explanatory for the success of Web3.</p>
<p>In Christensen’s parlance, the PC reduced friction for a small, early market of people who spent time in word processing and spreadsheet calculations. By comparison, what real-world problems did early crypto solve for a small, early market? My primary suggestion (a few more further below) might seem cynical but it’s real: People love numbers that go up and down. They love the stock market, they love sports, they love App Stores charts, and they used to love the box office before it became irrelevant. As a numbers-go-up-and-down product that’s open 24/7 and built by hackers, crypto trading was fun for an early market of people that like to try new stuff. Classic Christensen. </p>
<p>In citing the numbers-that-go-up-and-down game, I’m hardly dismissing the ideology that Satoshi unleashed on the world—extra-governmental, extra-corporate currencies—nor the thousands of open-source developers who saw a much better, post-Web2, post-central-bank future, but I don’t think it’s controversial to say that the vast majority of people who have played the numbers-going-up-and-down game the last 10 years weren’t previously experiencing all that much friction with TradFi, which has only grown more user friendly (having a fun, modern, rewarding job in finance is a different story — <a href="https://iterative.capital/thesis/">Iterative Capital’s thesis</a> on this topic is by far the best). We can at least agree that the friction that crypto removes from TradFi is a much more distant proposition than PCs were for people using typewriters.</p>
<p>Christensen’s model is more causal than explanatory for Web3 because it instilled so much confidence in early adopters. Web3 was, at the time Christensen’s model became widely known, a cheap toy, and we already knew what can happen with products that appear to be cheap toys. (Contrast this with, say, early VR, which was an expensive toy. Unit bias and fractionalization mattered and matter bigly for crypto.) </p>
<p>The causal nature of Carlota Perez’s model is even more literal and more stark. Perez’s framework was <a href="https://twitter.com/search?q=perez%20frenzy%20crypto&src=typed_query">dropped early and often</a> during the crypto run-up and crash of 2017.</p>
<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">2/ I was rereading Carlota Perez’s excellent book on Technological Revolutions the other day (published in 2002, well before the bitcoin white paper), and she points out a highly relevant fact:</p>— AriannaSimpson.eth (@AriannaSimpson) <a href="https://twitter.com/AriannaSimpson/status/944985763967533056?ref_src=twsrc%5Etfw">December 24, 2017</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p>This was not only an argument to hodl, but a call to avoid being cynical about all the capital that had been burned after the ICO run-up. More specifically, it was an argument that we should view the overfunding as a natural and positive development for crypto, as it helped to fund the construction of the rails that would (in fact) later support the deployment phase, a la broadband in the late 90s. In other words, Perez’s model was in progress and would reward anyone who saw it through (narrator: “It did”).</p>
<p>(<em>Update</em>: Via <a href="https://twitter.com/maxkufner">@maxkufner</a>, it seems Perez herself doesn't believe crypto to be a technological revolution and is well aware that many people do). </p>
<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">FWIW, as far as I understand, Perez doesn't consider crypto to be a new technological revolution at all, but a extension of the current ICT paradigm:<br><br>"I definitely don't think it's a revolution, even though many who have read my book believe so."</p>— Max Kufner (@maxkufner) <a href="https://twitter.com/maxkufner/status/1485196300320751618?ref_src=twsrc%5Etfw">January 23, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p>We arrive at an interesting question: Do Christensen and Perez’s models matter anymore in a world where everyone is an investor and everyone is aware of disruptive innovation and cycles of financial capital? Are those models like financial strategies that, like almost every financial strategy, get arbed away with broader awareness?</p>
<p>Some supporting evidence: Facebook has eaten Christensen’s model for breakfast. Zuck saw how Bill Gates’ Microsoft got disrupted and bought all the things that would disrupt him. He’s refashioned the company to eat the future. They also went early in crypto via <s>Libra</s> Diem, though the jury’s still out on that one. Facebook, Microsoft and others are experiencing second and third waves of growth precisely because they’ve been willing to disrupt themselves in ways that many of their predecessors didn’t. </p>
<p>Further supporting evidence: Even if you think we’re following Perez’s cycle of technological revolutions and financial capital, we’re <em>way</em> off from her timeline. Something has changed. Maybe it’s just the speed? Maybe.</p>
<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">a lot of people who cite Carlota Perez seem to think this part is 2-3 years <a href="https://t.co/ftelJNdgF1">pic.twitter.com/ftelJNdgF1</a></p>— Jonathan Libov (@libovness) <a href="https://twitter.com/libovness/status/1079007603852804096?ref_src=twsrc%5Etfw">December 29, 2018</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p>What’s more, crypto must be the first disruptive innovation-type product that succeeded primarily because of the belief in future financial returns more than any of the near-term human needs it satisfied. I cited one human need it did solve—24/7 access to numbers that go up and down—and there are obviously others, though you kind of have to squint to avoid seeing them as pursuits of future returns: </p>
<ul>
<li>People love collecting things (Web2: Pinterest, Pokemon Go; Web3: NFTs and cryptos in general)</li>
<li>People love having reasons to build apps (Web2: chatbots when they appeared to be a wave; Web3: new projects, wallets, etc.) </li>
<li>People love being a part of a big wave (Web1: the IPO frenzy of the late 90s, mobile circa 2010; Web3: Web3)</li>
</ul>
<p>That’s all cynical but optimistic. Cynical because citing the promise of future returns and features like <em>numbers going up and down</em> diminishes the very real ideological change that Satoshi introduced. Cynical because Web3 relies so heavily on <em>marketing</em> to sustain product innovation:</p>
<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">It’s interesting that some social phenomenon like Facebook, basically just grew organically with little marketing <br><br>Whereas others like crypto has been supported by an extraordinary amount of paid advertising.</p>— Joe Weisenthal (@TheStalwart) <a href="https://twitter.com/TheStalwart/status/1416185148714979331?ref_src=twsrc%5Etfw">July 16, 2021</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p>And that last human need—people love being a part of a big wave—is difficult to extricate from the belief in future financial returns; the venture arena is prone to identifying new waves because they entail an entirely new class of large outcomes (<a href="https://whoo.ps/2015//futures-of-text">I’ve made this mistake</a>), rather than needle-in-a-haystack odds within an existing, waning wave. </p>
<p>What’s different between Web3 and other false-start waves in tech is that everyone is an investor now. Not just because crypto is naturally fully liquid, but also because it rose toward the end of a booming decade in tech, leaving us with a much larger class of individual investors, all of whom can readily cite Christensen and Perez. That’s cynical inasmuch as it’s divorced from all the world-changing ideology that Satoshi and Vitalik unleashed on the world. </p>
<p>And yet, none of that is inherently bad! Almost everyone agrees that financial returns, and Web3 just being cooler and in-group-ier than Web2, are pulling a far better and more equitable future closer to the present. But I think it also means that Christensen and Perez’s models don’t matter as much as they once did. They can’t; we know too much.</p>
Jonathan Libov