☀️ AI Morning Minute: Claude
The AI named after the father of information theory
You’ve been reading about AI models, companies, and concepts in this newsletter for months. This week, we’re going deep on one product: Claude, the AI built by Anthropic. Before we dig into the individual features, I thought we’d take a step back and look at what Claude actually is, where it came from, and why its creators think it should work differently from the competition.
What it means
Claude is a family of large language models built by Anthropic, the AI safety company founded in 2021 by siblings Dario and Daniela Amodei, both former OpenAI executives. The name is a tribute to Claude Shannon, a mathematician who published “A Mathematical Theory of Communication” in 1948 and is widely considered the father of information theory. Shannon’s paper, cited over 160,000 times, laid the groundwork for how we encode, transmit, and decode information, which is essentially what a language model does at scale.
The name served a second purpose, too: most virtual assistants (Alexa, Siri, Cortana) had female-sounding names, and Anthropic deliberately chose a male one to break the pattern of gendering AI assistants as women.
Why it matters
Claude is trained using Constitutional AI, a method Anthropic invented. Instead of relying entirely on human reviewers to rate outputs (the standard approach, known as reinforcement learning from human feedback, or RLHF), Claude is given a written constitution of principles and learns to critique and revise its own responses against those rules. The 2026 version of that constitution is 23,000 words long, up from 2,700 in 2023. It’s essentially a rulebook for how to be helpful without being harmful.
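For the curious, here’s a rough sketch in Python of what that critique-and-revise loop looks like. To be clear, this is my illustration, not Anthropic’s code: the generate function is a hypothetical stand-in for a call to the model, and the principle wording is made up for the example.

```python
import random

# Illustrative principles; Anthropic's real constitution is far longer
# and more carefully worded.
PRINCIPLES = [
    "Choose the response that is most helpful without being harmful.",
    "Choose the response that is honest about its own uncertainty.",
]

def generate(prompt: str) -> str:
    """Hypothetical placeholder for a call to the base model."""
    raise NotImplementedError

def critique_and_revise(prompt: str, rounds: int = 2) -> str:
    """First phase of Constitutional AI: the model critiques and
    rewrites its own answer against sampled principles."""
    response = generate(prompt)
    for _ in range(rounds):
        principle = random.choice(PRINCIPLES)  # one principle per pass
        critique = generate(
            f"Using this principle: {principle}\n"
            f"Critique the response below.\n\n"
            f"Prompt: {prompt}\nResponse: {response}"
        )
        response = generate(
            f"Rewrite the response to address the critique.\n\n"
            f"Critique: {critique}\nOriginal response: {response}"
        )
    return response  # revised answers become fine-tuning data
```

In the pipeline Anthropic described in its 2022 paper, those revised answers are used to fine-tune the model, and in a second phase the model’s own preference judgments, rather than human ratings, supply the reward signal.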
Anthropic has drawn a line that other labs haven’t. Its contracts prohibit using Claude for mass domestic surveillance and fully autonomous weapons. When the Pentagon pushed back on those restrictions and labeled Anthropic a “supply chain risk,” Anthropic took the dispute to court rather than remove the language. Over 200 employees from Google and OpenAI signed letters supporting Anthropic’s position.
Claude is now a platform, not just a chatbot. What started as a conversational model in 2023 has grown into an ecosystem: Claude Code for developers, Cowork for desktop automation, Dispatch for mobile task management, Skills for custom workflows, Artifacts for interactive content, Projects for persistent workspaces, and Connectors for linking to external services. This week, we’ll cover each of those.
Simple example
You hire a new employee. Most companies hand them a laptop and say “figure it out.” Anthropic wrote a 23,000-word employee handbook before the employee’s first day, covering how to handle sensitive questions, when to push back, and what lines never to cross. Then they gave that handbook to the employee (Claude) and said “internalize this.” That’s Constitutional AI.
The model doesn’t just learn what to say from examples. It learns why certain answers are better than others, based on written principles it can reference. The handbook isn’t just a suggestion. It’s the architecture.

