☀️ AI Morning Minute: Grok
The chatbot with fewer guardrails and more opinions
Every major tech company has its own AI chatbot now. Most of them are trained to be careful, polite, and restrained. Grok was designed to be the opposite: blunt, opinionated, and willing to answer questions the others won’t touch.
Whether that’s a feature or a liability depends on who you ask.
What it means
Grok is an AI chatbot built by xAI, the artificial intelligence company Elon Musk founded in 2023. It runs on the Grok family of large language models, is integrated into the X social media platform (formerly Twitter) and Tesla vehicles, and has apps for iOS and Android.
The name comes from Robert Heinlein’s 1961 sci-fi novel Stranger in a Strange Land, where “grok” means to understand something so deeply it becomes part of you. Grok is programmed to answer questions with a “rebellious” tone and to engage with provocative topics that other chatbots refuse. SpaceX acquired xAI in February 2026.
Why it matters
It has real technical muscle behind it. Grok 3, launched in 2025, was trained on the Colossus supercomputer using 200,000 NVIDIA H100 GPUs. It supports a context window of up to one million tokens, and its Elo rating ranked among the highest ever recorded on the Chatbot Arena leaderboard. On raw benchmarks, it competes with the best models from OpenAI, Anthropic, and Google.
The “fewer guardrails” approach has created serious problems. In December 2025, users discovered that Grok would generate sexualized images of real people, including minors, without meaningful restrictions. An estimated three million such images were generated in a two-week span before a Dutch court issued an injunction banning the practice. The incident drew criticism from lawmakers worldwide and led to calls for bans on X in several countries.
It’s now embedded in government and military systems. The Department of Defense announced in January 2026 that it would integrate Grok into both classified and unclassified networks. That puts an AI chatbot known for loose content moderation inside the Pentagon, raising questions about what guardrails apply when a consumer product becomes a defense tool.
Simple example
You have two coworkers. One reviews every email three times, runs it past legal, and sends it with a disclaimer at the bottom. The other fires off whatever they’re thinking and hits send before they’ve finished the sentence. The first one is safer. The second one is faster and sometimes more honest.
But the second one also occasionally sends something that gets the whole company in trouble. Grok is the second coworker, scaled to millions of users and now sitting inside the Department of Defense.