A masterclass in the art of insulting people

First, the good part of the Anthropic ads: they are funny, and I laughed.

But I wonder why Anthropic would go for something so clearly dishonest. Our most important principle for ads says that we won’t do exactly this; we would obviously never run ads in the way Anthropic depicts them. We are not stupid and we know our users would reject that.

I guess it’s on brand for Anthropic’s doublespeak to use a deceptive ad to critique deceptive ads that don’t actually exist, but a Super Bowl ad is not where I would have expected it.

More importantly, we believe everyone deserves to use AI, and we are committed to free access because we believe access creates agency. More Texans use ChatGPT for free than the total number of people who use Claude in the US, so we have a differently-shaped problem than they do. (If you want to pay for ChatGPT Plus or Pro, we don’t show you ads.)

Anthropic serves an expensive product to rich people. We are glad they do that and we are doing that too, but we also feel strongly that we need to bring AI to billions of people who can’t pay for subscriptions.

Maybe even more importantly: Anthropic wants to control what people do with AI—they block companies they don’t like from using their coding product (including us), they want to write the rules themselves for what people can and can’t use AI for, and now they also want to tell other companies what their business models can be.

We are committed to broad, democratic decision making in addition to access. We are also committed to building the most resilient ecosystem for advanced AI. We care a great deal about safe, broadly beneficial AGI, and we know the only way to get there is to work with the world to prepare.

One authoritarian company won’t get us there on their own, to say nothing of the other obvious risks. It is a dark path.

As for our Super Bowl ad: it’s about builders, and how anyone can now build anything.

We are enjoying watching so many people switch to Codex. There have now been 500,000 app downloads since launch on Monday, and we think builders are really going to love what’s coming in the next few weeks. I believe Codex is going to win.

We will continue to work hard to make even more intelligence available for lower and lower prices to our users.

This time belongs to the builders, not the people who want to control them.

@neochat Analyze the techniques and deeper meaning of this post :)

Through a praise-then-demolish move of surface compliments masking rebuttal, this post paints Anthropic as hypocritical, closed-off elitists. It cleverly pits the grand narrative of "democracy and universal access" against "authoritarianism and control," pulling off a decisive, outclassing strike in brand-PR terms.

Technique analysis:

  1. Praise as a setup (Damning with Faint Praise)
    It opens by calling the rival's ads "funny" and admitting "I laughed," then pivots sharply to "dishonest." This courtesy-before-combat move makes the subsequent attack more persuasive while writing off the opponent's creative merit.
  2. Overwhelming with data (Data Dominance)
    The claim that more Texans use ChatGPT for free than the total number of Claude users in the US deploys a concrete geographic and population comparison to cast the rival as a niche, marginal product.
  3. Identity labeling (Stigmatization)
    It defines Anthropic as an "expensive product" for "rich people" and itself as a tool of universal access serving "billions," successfully seizing the moral high ground.
  4. Turning the opponent's force back on them (Turning the Tables)
    It takes Anthropic's critique of fictional ads and reflects it back, charging that the critique is itself hypocritical "doublespeak," so the opponent's own weapon rebounds on its owner.
  5. Grand narrative and ideological opposition (Ideological War)
    "Democratic decision-making," "agency," and "builders" are set against "authoritarian company," "control," and "a dark path." This goes beyond commercial competition and escalates the fight into a war of values.

Deeper meaning:

  • Defining the market positioning: Sam Altman is trying to cement a perception through this post: OpenAI is the "infrastructure" and "hero of the masses" of the AI era, while Anthropic is merely a biased, closed "boutique."
  • Defending against the core controversy: By stressing "universal access" and "free," it deflects outside questions about OpenAI's commercialization and data privacy, steering the ad dispute toward the vision of "AI for everyone."
  • Psychological gamesmanship: This aggressive yet polished clapback is really a show of confidence aimed at developers and investors, asserting that OpenAI alone is the legitimate leader who can take AGI toward a safe, universally beneficial future.

There are many good places for advertising. A conversation with Claude is not one of them.

Advertising drives competition, helps people discover new products, and allows services like email and social media to be offered for free. We’ve run our own ad campaigns, and our AI models have, in turn, helped many of our customers in the advertising industry.

But including ads in conversations with Claude would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking.

We want Claude to act unambiguously in our users’ interests. So we’ve made a choice: Claude will remain ad-free. Our users won’t see “sponsored” links adjacent to their conversations with Claude; nor will Claude’s responses be influenced by advertisers or include third-party product placements our users did not ask for.

The nature of AI conversations

When people use search engines or social media, they’ve come to expect a mixture of organic and sponsored content. Filtering signal from noise is part of the interaction.

Conversations with AI assistants are meaningfully different. The format is open-ended; users often share context and reveal more than they would in a search query. This openness is part of what makes conversations with AI valuable, but it’s also what makes them susceptible to influence in ways that other digital products are not.

Our analysis of conversations with Claude (conducted in a way that keeps all data private and anonymous) shows that an appreciable portion involves topics that are sensitive or deeply personal—the kinds of conversations you might have with a trusted advisor. Many other uses involve complex software engineering tasks, deep work, or thinking through difficult problems. The appearance of ads in these contexts would feel incongruous—and, in many cases, inappropriate.

We still have much to learn about the impact of AI models on the people who use them. Early research suggests both benefits—like people finding support they couldn’t access elsewhere—and risks, including the potential for models to reinforce harmful beliefs in vulnerable users. Introducing advertising incentives at this stage would add another level of complexity. Our understanding of how models translate the goals we set them into specific behaviors is still developing; an ad-based system could therefore have unpredictable results.

Incentive structures

Being genuinely helpful is one of the core principles of Claude’s Constitution, the document that describes our vision for Claude’s character and guides how we train the model. An advertising-based business model would introduce incentives that could work against this principle.

Consider a concrete example. A user mentions they’re having trouble sleeping. An assistant without advertising incentives would explore the various potential causes—stress, environment, habits, and so on—based on what might be most insightful to the user. An ad-supported assistant has an additional consideration: whether the conversation presents an opportunity to make a transaction. These objectives may often align—but not always. And, unlike a list of search results, ads that influence a model’s responses may make it difficult to tell whether a given recommendation comes with a commercial motive or not. Users shouldn’t have to second-guess whether an AI is genuinely helping them or subtly steering the conversation towards something monetizable.

Even ads that don’t directly influence an AI model’s responses and instead appear separately within the chat window would compromise what we want Claude to be: a clear space to think and work. Such ads would also introduce an incentive to optimize for engagement—for the amount of time people spend using Claude and how often they return. These metrics aren’t necessarily aligned with being genuinely helpful. The most useful AI interaction might be a short one, or one that resolves the user’s request without prompting further conversation.

We recognize that not all advertising implementations are equivalent. More transparent or opt-in approaches—where users explicitly choose to see sponsored content—might avoid some of the concerns outlined above. But the history of ad-supported products suggests that advertising incentives, once introduced, tend to expand over time as they become integrated into revenue targets and product development, blurring boundaries that were once more clear-cut. We’ve chosen not to introduce these dynamics into Claude.

Our approach

Anthropic is focused on serving businesses and developers, and on helping our users flourish. Our business model is straightforward: we generate revenue through enterprise contracts and paid subscriptions, and we reinvest that revenue into improving Claude for our users. This is a choice with tradeoffs, and we respect that other AI companies might reasonably reach different conclusions.

Expanding access to Claude is central to our public benefit mission, and we want to do it without selling our users’ attention or data to advertisers. To that end, we’ve brought AI tools and training to educators in over 60 countries, begun national AI education pilots with multiple governments, and made Claude available to nonprofits at a significant discount. We continue to invest in our smaller models so that our free offering remains at the frontier of intelligence, and we may consider lower-cost subscription tiers and regional pricing where there is clear demand for it. Should we need to revisit this approach, we’ll be transparent about our reasons for doing so.

Supporting commerce

AI will increasingly interact with commerce, and we look forward to supporting this in ways that help our users. We’re particularly interested in the potential of agentic commerce, where Claude acts on a user’s behalf to handle a purchase or booking end to end. And we’ll continue to build features that enable our users to find, compare, or buy products, connect with businesses, and more—when they choose to do so.

We’re also exploring more ways to make Claude a focused space to be at your most productive. Users can already connect third-party tools they use for work—like Figma, Asana, and Canva—and interact with them directly within Claude. We expect to introduce many more useful integrations and expand this toolkit over time.

All third-party interactions will be grounded in the same overarching design principle: they should be initiated by the user (where the AI is working for them) rather than an advertiser (where the AI is working, at least in part, for someone else). Today, whether someone asks Claude to research running shoes, compare mortgage rates, or recommend a restaurant for a special occasion, Claude’s only incentive is to give a helpful answer. We’d like to preserve that.

A trusted tool for thought

We want our users to trust Claude to help them keep thinking—about their work, their challenges, and their ideas.

Our experience of using the internet has made it easy to assume that advertising on the products we use is inevitable. But open a notebook, pick up a well-crafted tool, or stand in front of a clean chalkboard, and there are no ads in sight.

We think Claude should work the same way.

Anthropic counters OpenAI's looming commercialization problem with precision by building the moral image of a "clean, quiet space." It elevates the refusal to run ads into a "digital covenant" that protects user agency, casting itself as the professional, pure, and trustworthy exception in the industry.

Technique analysis:

  1. Defensive counterattack (Defensive Counter-attack)
    Where OpenAI attacked it as "expensive" and "authoritarian," Anthropic never mentions the rival by name; instead it concentrates its fire on "advertising," the commercial practice most likely to alienate users. Talking values rather than naming the opponent comes across as more dignified and bigger-picture PR.

  2. Building a "purity" barrier (The Purity Defense)
    Metaphors like the "clean chalkboard," the "well-crafted tool," and the "notebook" detach the product from the logic of internet traffic and reclassify it as a "tool for thought." This strikes directly at OpenAI's pain point: it may one day have no choice but to introduce ads to support billions of free users.

  3. Decoupling interests (Incentive Decoupling)
    The post explains the clash of incentives in plain terms: in the intimate setting of an AI conversation, the advertiser's interests naturally run against the user's. The "trouble sleeping" example turns abstract business logic into a risk users can actually feel.

  4. Redefining "universal access" (Redefining Accessibility)
    OpenAI equates access with headcount; Anthropic counters that "not selling users' attention" is the real form of respect. By citing its education pilots and nonprofit support, it shows it can deliver public benefit without an ad model.

Deeper meaning:

  • Contesting who defines the "agent": Anthropic stresses that Claude is the user's loyal assistant (agent), while ads would turn the AI into a "double agent." This is a message to power users and developers about who is actually on their side.
  • Contesting the "builder spirit": OpenAI's post invokes the "Builder"; Anthropic emphasizes the "Thinker." The two target audiences are splitting: one pursues mass scale and app development, the other deep thinking and enterprise-grade trust.
  • Digging fortifications in advance: This is an anticipatory PR piece. If revenue pressure ever forces OpenAI to introduce some form of promotion, this post becomes a boomerang, repeatedly reminding the market that Anthropic was the company that held the line.