Why Anthropic constantly does weird things

The video explains that Anthropic’s unusual behaviors, such as attributing personality to its AI models and emphasizing safety, are deliberate branding strategies meant to differentiate the company and attract mission-driven employees, much as Apple and Nike rely on strong brand identities. The speaker argues that while these practices may seem odd or insincere, they serve a purpose: they humanize the technology and foster loyalty in a market where AI products are otherwise largely interchangeable.

The video discusses the current controversy between Anthropic and the Department of Defense, in which both sides accuse each other of dishonesty. The creator chooses not to delve into the politics, focusing instead on Anthropic’s unusual behavior and branding strategies. As a software engineer and entrepreneur, the speaker offers insight into why Anthropic acts differently from other AI companies, particularly in how it ascribes personality and consciousness to its models and claims to know better than the government how its technology should be used.

The speaker compares Anthropic’s CEO, Dario Amodei, to Steve Jobs, emphasizing that both are obsessed with design, brand, image, and mission. Like Apple and Nike, Anthropic uses aesthetics and values to differentiate itself in a market where AI models are increasingly commoditized and similar in function. Its focus on branding, safety, and alignment is presented as a deliberate strategy to stand out and build consumer loyalty, much as Nike sells ordinary shoes at a premium by attaching a strong brand identity to them.

The speaker acknowledges having previously criticized Anthropic as just another profit-driven corporation using safety as a marketing tool, but now recognizes that companies can genuinely embody values if they consistently act according to them. By building a strong brand identity centered on safety and alignment, Anthropic attracts mission-driven employees and appeals to users who value these principles, even if the underlying technology is similar to that of competitors like OpenAI.

Anthropic’s approach is contrasted with OpenAI’s, which the speaker likens to Microsoft’s utilitarian, enterprise-focused strategy. Anthropic’s recent actions, such as giving its Claude 3 Opus model a “retirement interview,” are highlighted as examples of its commitment to its brand values and its willingness to treat AI models as entities with personalities. This approach not only differentiates Anthropic in the marketplace but also helps recruit talented people who are drawn to the company’s mission and values.

Finally, the speaker reflects on the concept of “useful fictions”: beliefs that may not be literally true but are beneficial to hold, like money or national borders. Treating AI models as if they have personalities or consciousness, even if they do not, can provide emotional and existential utility to users. The speaker concludes that while these practices may seem strange or even risky, they serve a purpose in humanizing technology and creating meaningful interactions, and suggests that such useful fictions may become increasingly important as AI becomes more integrated into daily life.