How to make $183 billion disappear
What Claude's viral NYC pop-up and "thinking" cap reveal about tech culture's hunger for meaning
Over the weekend, AI company Anthropic took over Air Mail, a chic newsstand and café in New York’s West Village, to create a “thinking space” for users of their generative AI app, Claude.
The pop-up was simply designed: free caps with “thinking” printed on them, free tote bags when the caps ran out, free coffee if you downloaded the Claude app, and copies of Anthropic CEO Dario Amodei’s book up for grabs. You just had to show the app on your phone to get in. The broader “Keep Thinking” campaign included a “say no to slop” vibe that contrasted Claude with the recent deluge of low-quality AI content.
Lines to Air Mail’s front door snaked down the sidewalk all weekend. People waited two hours, three, for a cap and a latte. My feed—an actual ocean away from the West Village—was filled with photos of the caps aesthetically arranged on coffee tables, captions gushing about the vibes and the aesthetic of it all.
The online commentary was effusive. People said the pop-up was “tasteful”, described it as feeling “like a warm room”, even called it “the best marketing campaign, almost Apple-like”. Indeed, nearly one in every five posts on my feed was a Claude billboard juxtaposed with Apple’s popular “Think Different” campaign.
Here’s where I got curious. Pop-ups aren’t new, and free merch is a tale as old as time. David vs Goliath framings are practically a Silicon Valley cliché at this point. So what made the Anthropic campaign land the way it did? I think the answer isn’t immediately obvious, which is part of what makes this entire thing so incredibly interesting.
The circle is actually a spiral
I’ll start with the cap which, I’ll admit, is right up my alley. It’s a simple baseball cap with “thinking” embroidered onto it in classic serif against weathered fabric, with no other obvious branding.
The simplicity of this cap is the whole point. It functions as a costly signal, where the cost is time (spent waiting in line) and knowledge (about Claude, having the app, being plugged into the right networks to know about the event). This is what Pierre Bourdieu called cultural capital: the ability to recognise and value this aesthetic choice becomes, in itself, a marker of belonging. The cap is legible only to insiders; to everyone else, it’s a cap with a word on it. And that’s precisely what makes it valuable to those in the know.
Giving away Dario’s book was a particular stroke of genius, IMO. In tech culture, origin stories matter enormously. The narrative of principled researchers leaving the behemoth OpenAI over safety concerns, to do AI “the right way”, gives people something to believe in beyond the features. Dario’s book makes that narrative tangible and portable, and giving it away for free makes that founder mythology spread that much more organically. The more people subscribe to that narrative, the more likely they are to choose that product of their own volition.
So on one level of the spiral, you have free offerings, which in general carry psychological weight. In traditional marketing, even if there’s something free up for grabs, there’s a clear transaction: give us your email or data, and we’ll give you something in return. Economists call this a market exchange.
This pop-up probably felt different. The giving felt generous rather than extractive. The staff weren’t upselling you or collecting detailed information, they were handing you a whole bunch of free stuff. The singular ask—download the app if you want coffee—felt more like an optional joining of a community than like completing a transaction. From the many tweets I saw, people walked away feeling like they’d been given something, and not like something had been extracted from them. That feeling matters more than the economic reality of the exchange.
On the next level: the venue. I saw a lot of people ask: why NYC? Why not SF? I don’t think that choice was arbitrary. I think SF has reached tech saturation, so an Anthropic pop-up would register as just another tech thing in a deluge of tech things. Preaching to the choir. Choosing NYC, on the other hand, signals something beyond “we’re a tech company”.
Air Mail, for starters, sells carefully curated magazines, CBD-infused tonics, and other tangible and intangible markers of IYKYK taste. It was also founded by Graydon Carter, the legendary Vanity Fair editor and taste-maker, which means it’s wrapped in cultural cachet. Its location in NYC’s West Village, a neighbourhood not lacking in cultural significance, adds to the value.
By choosing this location, Anthropic borrowed some of that credibility for itself, bathed in its alternative-boujie halo. They aligned themselves with a particular kind of taste: intellectual, established, earnest, cultural rather than just trendy. It’s not an image you’d associate with a tech company, which was the point. They made a statement as much through the venues they chose not to host in as through the one they did.
OpenAI already owns the tech mindshare. So Anthropic goes after something else: cultural legitimacy, intellectual seriousness, the sense that they care about craft and meaning. The aesthetic choices—classic serif typefaces, warm and intimate photography, a city of culture—all signalled care and intentionality at a human scale.
On the next level, you have the crowd itself. The people in line had inadvertently become collaborators in something much larger than a weekend pop-up, though most probably didn’t realise it.
Whether intentionally or intuitively, the event was designed for virality. Everything was photogenic: the warm lighting, the aesthetic cohesion, the free stuff. It was time-limited and location-specific, creating natural scarcity and FOMO. Conversations were kicking up while people were waiting in line, about which AI tool they use and what that says about them. (I called out, in an earlier essay, that our choice of software has become a subtle performance of identity.)
Once enough people are doing something, others join not because they’ve independently evaluated whether it’s worth spending a sunny Saturday on, but because the crowd itself signals value. You see a long line, you assume it must be worth the wait. The line validates the event, which attracts more people, and so on. That happened here, and those lines made it to photos on the Twittersphere, creating an illusion of massive scale.
I think that digital amplification spiraled outwards in ways that broke past the walls of that chic little newsstand. People who weren’t in NYC at the right time weren’t the traditional target of this pop-up, but many of them became the actual converts, downloading Claude to see what warranted such a response—driven by social proof generated by people, half of whom might have been there because it was the weekend and there was free coffee.
To cap it all off, I don’t think Anthropic tried to hide that this was marketing. Their branding was visible. The request to download the app was explicit. They announced the pop-up from their official social accounts. They were admirably upfront about it all.
Design has this concept called “honest materials”, where something doesn’t try to pretend it’s something else. This campaign had that quality. It’s what people mean by anti-marketing marketing: it works precisely because it doesn’t feel like it’s trying to manipulate you. Commercial activity was laundered through such careful attention to taste and culture that it felt like something else entirely.
The perfect storm
So if those were the mechanics, what made them work the way they did? I spent hours going over the reactions to this campaign. Some reactions painted Anthropic as the makers of a thoughtful collaborator, pitted against the giants high up above:
Other reactions weren’t as keen to drink the Kool-Aid:
I somewhat agree that this says something about how tech culture’s tribalism operates, and how starved we are for anything that feels genuine. And granted, accounting for algorithmic bias, much of this reaction might have been contained within tech circles.
But to analyse why the philosophy and execution worked the way they did, it’s important to pay attention to the zeitgeist.
The current AI moment is uniquely anxious:
excited about tech
terrified about job displacement
disgusted by slop
deeply cynical about big tech, and
desperate for goodness.
This anxiety creates specific receptivity. We are primed to want an alternative that feels different: agentic not predatory, collaborative not manipulative, thoughtful not slop-ful.
The anxiety reached a particular pitch over the past couple of weeks, when the AI landscape was especially active. OpenAI launched Sora 2, their video generation model, and socials were immediately flooded with the type of rapid-fire content production that starts to blur together after a while. Other companies like Google and Meta were also pulsing out updates and new models, adding to the general noise.
Beyond the immediate timing, something slower and more fundamental has been happening in how tech culture evolves. We’ve been living in meta-ironic tech culture for years. Everything is a meme, nothing is sincere. Earnestness is met with skepticism, even mockery.
This ironic distance was very much a product of environments where hype cycles are short and disappointments are frequent. When you’ve watched enough companies promise to change the world and then pivot to selling, I don’t know, ads, “lol jk unless” becomes the sensible default.
But cultural exhaustion has been building, and the pressure cooker is about to explode. There is a generational shift towards post-post-ironic sincerity; people are tired of everything being deeply unserious and want to believe in something, anything, again. The people yearn for genuineness.
Anthropic’s pop-up, intentionally or not, landed in this cultural moment. Their pop-up doubled as emotional reassurance: human-scaled, considered, and feeding a hunger for something that was neither spontaneous ragebait (ahem, Cluely and friends) nor A/B tested to death. It offered permission to engage as a community, not a cult or a permanent subscriber to The Future. The anti-slop framing emerged organically from the timing of it all; coming right after a wave of AI tools focused on output, generation, and just more, the emphasis on “keep thinking” took on the tone of implicit critique. For an audience that is fed up to the back teeth with hype cycles and dangerous tech, this particular restraint registered as different, curious.
(This positioning is also largely consistent with the product itself. Claude already has that particular tone and emotive predisposition; the models feel more conversational, more human, less like you’re talking to an optimisation engine.)
It helps that Anthropic’s position in the market made this dynamic all the more potent. If OpenAI had done this exact same pop-up, it would have read completely differently, corporate and calculated. Anthropic benefits enormously from being in second position, because it lends credibility to that David vs Goliath narrative. They can position themselves as the thoughtful alternative, the principled choice, the underdog with better values.
All of this made the perfect storm that caused this reaction to what was, on paper, a small event with free merch.
The marketing scales
Whether this positioning and posturing reflects reality is, rather cleverly, beside the point. That’s the most subtle part of this whole thing, and the one we need to pay attention to.
Anthropic is valued at a whopping $183 billion. They’re backed by Google and Amazon. Anthropic is far, far from a scrappy startup operating out of a garage, fighting against impossible odds. They’re one of the most well-funded companies in Silicon Valley, competing for dominance in what might be the most important technology race of the decade. The pop-up was also part of Anthropic’s major brand campaign, a multi-million dollar effort spanning TV, streaming, print, and OOH advertising. To put that in perspective: this isn’t David vs Goliath, it’s Goliath vs Godzilla.
And yet, in the lines stretching down West Village sidewalks and making their appearance on feeds all over the world, it seemed like people chose not to let that valuation complicate their enthusiasm. One could argue the event conveyed the same sense of familiarity and warmth as a new neighbourhood coffee shop or mom-and-pop store, which makes the intimate, grassroots feeling that much more remarkable.
That leads me to think that the pop-up worked precisely because it made Anthropic’s massive scale feel invisible. Every single choice created perceptual distance from what Anthropic actually is: a $183 billion company with backing from two of late-stage capitalism’s Final Bosses.
(To be clear: large companies must have principles. Size and values aren’t inherently incompatible. But the interesting bit is how readily Anthropic’s scale disappeared from the conversation.)
Here’s the other thing: we’re unlikely to know how successful this campaign was in traditional terms. How many people converted to Claude at the event, or got a paid plan there or after? How many will still use it a month later? How many were first-time users versus poached from competitors? Will they go after London, or Tokyo, or Paris next, or will this remain a one-hit wonder? I’d argue that releasing this data would puncture the narrative, and keeping it vague means everyone can project their own definition of success onto it.
Of course, this could all be speculation, smoke and mirrors. But whatever their actual reasoning, I think the pop-up—and the reactions to it—reveals a lot about how we construct meaning around our choices, especially in times of anxiety and uncertainty.
We want to believe in something. We want our consumption choices to align with our values while propelling the collective forward. We want to feel like we’re backing the good guys. And when a company provides the right signals, the right emotional narrative, we’re willing to buy it.
Whether those signals reflect deeper reality or sophisticated positioning is almost beside the point. The feeling was real and, given time, feelings create their own truths.