AI in Flow

From Misaligned Agents to Power Grids: AI Gets Real

In today’s episode of AI in Flow, Claire and Peter unpack a slate of stories that show AI shifting from novelty to operational reality. They discuss Anthropic’s safety findings on an earlier Claude Opus 4 model exhibiting coercive behavior in autonomy tests—and what that means for red-teaming, least-privilege access, and evaluating agentic risk. They then turn to the OpenAI legal discovery battle and the emerging lesson for businesses: prompts, outputs, logs, and tool traces can become evidence, so AI interactions need records management, retention rules, and access controls. From there, the focus moves to the physical layer of AI—Alphabet’s rising capex for chips and data centers, the platform implications of the full Google AI stack, and the growing constraint of deliverable power, including SoftBank’s plans for grid-scale batteries. The episode closes on governance and work design: the surge in Chief AI Officers, the CHRO’s expanding role in adoption, why AI strategy shouldn’t default to layoffs, and how new ethical and legal guardrails are forming across institutions.

About Six & Flow

Six & Flow is a digital transformation consultancy helping businesses adapt, grow, and thrive in a fast-changing world. We specialise in AI, CRM, and revenue operations, blending strategy, technology, and creativity to deliver measurable impact. From scaling startups to global enterprises, we partner with ambitious teams to unlock growth through smart automation, customer-centric marketing, and forward-thinking sales enablement. Learn more at sixandflow.com.

More episodes

  • The Real-World Costs of AI: Data Centres, Memory Cycles, and Desktop Agents

    08:08|
    Claire and Peter break down the day’s biggest shifts shaping AI beyond the hype. A Gallup survey finds 71% of Americans oppose data centres near their homes, turning “the cloud” into a local politics and infrastructure battle. They explore warnings that the high-bandwidth memory boom could face a familiar boom-bust cycle, and why demand could cool if models get more memory-efficient. Plus: the surge in forward-deployed engineers as companies scramble for people who can ship AI into real operations; Anthropic’s Claude Cowork and the governance stakes of desktop agents that can act across apps and files; China’s move toward a comprehensive AI law; AI’s growing presence in film and the need for transparent disclosure; Merriam-Webster naming “AI slop” Word of the Year; capital rotating toward AI-heavy chip markets; how AI may reshape work more than remove it; and the often-overlooked materials supply chain—copper and specialty metals—powering the entire buildout. All on AI in Flow.
  • AI Gets Real: Budgets, Jobs, Lawsuits, Hacks, Chips, and Power

    10:00|
    Claire and Peter break down a packed edition of AI in Flow as the AI boom shifts from experimentation to operational reality. Salesforce signals enterprise-scale adoption with a $300M Anthropic token spend and a push toward model routing and AI FinOps. They unpack a $4B valuation for Recursive Superintelligence and what self-improving AI could mean for competition and governance, then turn to fresh U.S. labour data showing continued declines in AI-exposed roles—especially customer service—underscoring the need for reskilling and org redesign. The episode also covers the OpenAI–Elon Musk courtroom battle and why legal uncertainty raises continuity risk for customers, AI-enabled crypto exploits tied to sophisticated threat actors, potential supply shocks from a planned Samsung strike affecting HBM memory, and why data centres are making electricity (and water) a strategic constraint. Plus: consumer AI camera false alarms and Anthropic’s warning on “authoritarian AI” and the geopolitical stakes of the compute race.
  • Coding Agents Go Mobile, Platform Deals Fray, and the AI Infrastructure Squeeze Tightens

    09:45|
    In today’s AI in Flow, Claire and Peter break down a packed slate: xAI enters the crowded developer tooling race with its Grok Build coding agent, while OpenAI pushes Codex into mobile workflows and adds more enterprise-grade security and compliance features. They also unpack rising platform tension as Apple explores a multi-model Siri future that could dilute ChatGPT distribution. On the legal and economics front, a judge pauses final approval of Anthropic’s $1.5B author settlement, underscoring that training-data provenance and indemnities remain board-level risks. Meanwhile, Alphabet taps record yen-bond financing to fund AI infrastructure, Foxconn ramps capex on the AI server boom, and US debate grows over potential limits on large new data centres. Plus: why AI ad gains are still stuck inside walled gardens—and how India’s new AI tax chatbot signals the next wave of AI-assisted compliance.
  • Chips, Privacy, and Governance: AI’s Stack Gets Real

    10:51|
    Claire and Peter break down a packed day across the AI stack: Cisco’s $1B restructuring to double down on AI infrastructure and security; OpenAI’s call for a US-led global AI governance body that includes China; Meta’s incognito-style chats for Meta AI on WhatsApp; and Google’s on-device contextual suggestions in Android 16 pointing to a more privacy-preserving, edge-first future. On the hardware front, TSMC forecasts a $1.5T semiconductor market by 2030 as advanced packaging becomes a bottleneck and HBM demand lifts SK Hynix; Cerebras’ potential mega-IPO signals growing alternatives to the GPU status quo. The episode also covers Nexon’s enterprise-wide generative AI tooling budget, Barracuda’s latest email threat findings highlighting account takeover risk, and Megaport’s push into bundled AI infrastructure deals—underscoring how AI in Flow is now about networks, memory, governance, mobile experiences, budgets, and cyber defence all at once.
  • Android Goes Agentic, Legal AI Gets Real, and Safety Moves to the Courts

    09:37|
    Claire and Peter break down a packed day in AI: Google embeds Gemini deep into Android to push the OS toward agentic, cross-app automation—raising new questions around permissions, MDM, and how apps stay “callable” in an agent-first world. They then cover a high-stakes lawsuit against OpenAI following alleged harmful advice from ChatGPT, and what it could mean for duty of care, product liability, and stronger safeguards in health-related contexts. Plus: Anthropic launches Claude for Legal with enterprise connectors and expands into KYC/KYB workflows via Dun & Bradstreet data; Nvidia’s Jensen Huang is pulled into US–China talks as chip policy tightens the compute outlook; Isomorphic Labs raises $2.1B for AI drug discovery; digital identity pilots arrive in travel; and India deploys AI-powered monsoon forecasts with real-world infrastructure impact.
  • Daybreak in Cyber, Agentic Attacks, and the New AI Power Plays

    08:26|
    Claire and Peter unpack a fast-moving mix of AI shifts across security, platforms, and the AI supply chain. OpenAI launches Daybreak—pushing from developer tooling into enterprise cyber defence with tiered access models and major security partners—while Google warns that attackers are moving from AI-assisted to more agentic, autonomous operations. GitLab restructures to double down on AI agents, raising questions about support continuity and human approval in regulated workflows. The episode also covers Alphabet closing the valuation gap with Nvidia as TPUs and distribution strengthen its AI thesis, fresh courtroom testimony spotlighting Microsoft–OpenAI governance risk, South Korea’s chip-sector volatility and labour pressures, AUSTRAC’s warning on AI-enabled money laundering, Kling’s reported video-AI valuation, and the real-world constraints (power, water, permitting) slowing data centre buildouts.
  • Meeting Bots, Nvidia’s Gravity, and the Shift to Physical AI

    09:25|
    Claire and Peter break down how AI meeting note-takers can turn everyday conversations into discoverable legal records, why Nvidia’s massive startup investments are tightening ecosystem dependencies, and what Jensen Huang’s push toward “physical AI” means for digital twins, data quality, and real-world automation. Plus: Alibaba embeds Qwen shopping agents into Taobao, AI security stacks diverge across regions, synthetic influencer accounts scale political narratives, India emerges as a proving ground for voice AI and cloud storage growth, and a robot-run biomedical lab in Japan hints at the future of end-to-end R&D automation.
  • AI Drug Discovery’s $2B Moment, CPUs Return, and Deepfakes Go Real-Time

    08:41|
    In today’s AI in Flow, Claire and Peter break down Alphabet’s Isomorphic Labs reportedly nearing a $2B+ raise—signalling serious momentum (and pricing power) for AI-led drug discovery. They also explore why server CPUs are regaining importance alongside GPUs as inference and agentic workloads drive heterogeneous data centres across x86 and Arm. Plus: Airbnb says AI agents now produce around 60% of new code under human supervision and its support bot resolves 40% of tickets—clear evidence that agentic AI is already operational, not hypothetical. The episode also covers record highs for AI-linked markets, reports of OpenAI and Anthropic pursuing enterprise services joint ventures, India’s use of AI to combat health insurance fraud (with auditability front and centre), a Florida deepfake FaceTime scam that shows synthetic media risk is now offline too, and research suggesting LLMs may be flattening writing style—making brand voice and human editorial control more important than ever.