
AI in Flow

Deepfakes, Personal Agents, and Europe's AI Wake-Up Call

On today’s episode of AI in Flow, Claire and Peter unpack the EU’s GDPR investigation into X’s Grok chatbot after troubling deepfake allegations—and why it’s a defining moment for AI image generation and regulatory enforcement. Then, OpenAI’s latest hire signals a deeper move into automation as the company bets big on personal agents, while Infosys and Anthropic team up to bring Claude into telecom and finance. Plus, we explore the push for sovereign AI in emerging markets and what Brussels’ suspension of AI assistants means for data privacy. Tune in for insights on how AI’s next phase is getting political, personal, and global.

About Six & Flow

Six & Flow is a digital transformation consultancy helping businesses adapt, grow, and thrive in a fast-changing world. We specialise in AI, CRM, and revenue operations, blending strategy, technology, and creativity to deliver measurable impact. From scaling startups to global enterprises, we partner with ambitious teams to unlock growth through smart automation, customer-centric marketing, and forward-thinking sales enablement. Learn more at sixandflow.com.

More episodes


  • Claude Paywalls, Courtroom Hallucinations, and the Power Problem Behind AI Agents

08:14
    Claire and Peter break down a packed roundup of AI headlines shaping how businesses adopt (and govern) AI. Anthropic is changing how Claude subscriptions work with third-party tools—pushing heavy users toward add-on bundles or the API. Wired reports a security incident at AI training data supplier Mercor, prompting renewed focus on vendor risk and AI supply chain due diligence. In legal news, sanctions are rising for AI-generated briefs with fabricated citations, underscoring the need for strict human verification and audit trails. The hosts also explore Claude Dispatch and what local, desktop-executed agents could mean for enterprise workflows—along with the guardrails required. Finally, they zoom out to the infrastructure reality behind the agent boom: electricity, data-centre capacity, permitting, physical security, geopolitics, and why portability across model and hardware stacks matters. Plus, a practical idea from Andrej Karpathy on building a living Markdown knowledge wiki as an alternative to over-reliance on RAG.
  • AI Gets Operational: Agents on Windows, Payment Rails, and ChatGPT Commerce

09:18
    Claire and Peter break down the biggest AI stories of the day on AI in Flow. Anthropic brings Claude’s computer-use capability to Windows—pushing agentic automation deeper into everyday enterprise desktops—while a separate US policy dispute raises questions about how easily governments could restrict AI vendors. Microsoft announces a major Japan investment focused on in-country AI infrastructure, cybersecurity, and training, as Google releases the commercially permissive, open Gemma 4 model family. The Linux Foundation’s new x402 initiative aims to standardise how AI agents pay for services across fiat and crypto, and travel booking moves further into the assistant layer with EaseMyTrip inside the ChatGPT marketplace. Plus: autonomous legal workflows, OpenAI’s move into podcasting, child-safety pressure around synthetic content, and Sarvam AI’s multilingual momentum in India.
  • Leaked Agent Code, Kids’ AI Videos, and the Power Grid Bottleneck

09:46
    Claire and Peter break down today’s biggest shifts in AI on AI in Flow: Anthropic scrambles after a packaging error exposes internal Claude Code agent source code (not weights or customer data, but potentially revealing proprietary orchestration and guardrails). Pressure mounts on Google to curb AI-generated children’s videos on YouTube and YouTube Kids, with implications for creators, advertisers, and regulation. In the US, AI data centre buildouts face delays not from chips, but from shortages of transformers and other electrical gear—raising costs and slowing roadmaps. Also in today’s briefing: Oracle reportedly cuts roles while doubling down on AI infrastructure; “AI coworker” agents like Junior move beyond chatbots into proactive work (with permission and hallucination risks); AI boosts textile recycling at industrial scale; the NIH expands AI funding for Alzheimer’s subtyping; researchers argue multimodal models need better self-awareness of uncertainty; deepfake conflict content overwhelms fact-checkers; and Europe accelerates humanoid robotics for practical industrial deployments.
  • Security Leaks, Privacy Trackers, and the Power Behind AI

09:39
    Claire and Peter break down a packed day in AI on AI in Flow: Anthropic’s accidental Claude Code source exposure and what it means for trust and secure deployment; a proposed class action alleging Perplexity chats were tracked and shared with ad platforms; and the escalating reality that AI is now constrained by electricity as Microsoft explores a major natural-gas power deal for data centres. They also cover surging semiconductor exports signalling sustained infrastructure demand, Anthropic’s Australia expansion tied to renewables and sovereignty, Oracle’s job cuts to fund AI cloud investment, Singapore’s first autonomous public ride service launch, alarming AI-enabled harassment targeting teachers in Scottish schools, and Jerome Powell’s blunt advice that learning AI tools is becoming essential for the workforce.
  • Agentic Macs, Procurement Guardrails, and the New AI Infrastructure Reality

09:19
In today’s episode of AI in Flow, hosts Claire and Peter break down a packed slate of AI shifts across product, policy, and operations. Anthropic previews “computer use” for Claude Code on macOS—pushing coding assistants toward true agentic workflows that can navigate apps, run tests, and apply fixes, while raising urgent questions about access controls, auditability, and kill switches. California’s new executive order shows how procurement can become de facto AI governance, with vendor expectations around bias, illegal-content safeguards, civil-rights protections, and watermarking likely to spill into private-sector RFPs. Microsoft doubles down on multi-model copilots with Copilot Critique and Council, signalling a move toward more reliable, reviewable AI outputs via built-in second opinions. The episode also covers Google Maps’ new “Ask Maps” for AI-driven trip planning, uneven global rollouts highlighted by Apple Intelligence’s China hiccup, and growing physical infrastructure risk—from regional instability impacting data centres to public opposition to new builds. Finally, Claire and Peter share two encouraging healthcare advances (rapid AI gestational age estimation and improved cardiac risk prediction from existing scans) and note India’s accelerating push for AI-ready data centre capacity.
  • Power, Pullbacks, and Platform Gates: AI Gets Real

08:40
    Claire and Peter break down today’s biggest AI developments—starting with Microsoft stepping in to lead a massive data centre expansion in Texas, underscoring that power and infrastructure are now core constraints for scaling AI. They cover OpenAI shutting down Sora just six months after launch, the economics behind compute-heavy consumer products, and why businesses need stronger contract and capacity protections. Plus: Apple’s reported plan to open Siri to third-party chatbots via “Siri Extensions,” what that could mean for distribution and platform rules, and why reliability is a competitive advantage after a major DeepSeek outage. The episode also examines “AI brain fry,” the risks of unreliable AI detectors, research on sycophantic chatbots and user behaviour, and a set of real-world wins—from AI-assisted cardiology and multi-billion-dollar drug discovery deals to AI-enabled drone inspections for building maintenance.
  • Leadership Shake-Ups, Open Algorithms, and the Rising Governance Bar

09:12
    In today’s episode of AI in Flow, hosts Claire and Peter break down a wide-ranging set of stories shaping how AI is built, bought, and governed. They look at the resignation of xAI’s last original co-founder as Elon Musk restructures the company ahead of a potential IPO—what that signals for enterprises betting on Grok and why procurement flexibility matters. They also explore Bluesky’s new AI assistant “Attie,” which lets users create custom feeds with natural language on an open protocol, raising big questions about moderation, accountability, and brand safety. On the infrastructure front, Samsung and SK Hynix ramp investment in China memory fabs to ease the AI memory crunch, while the hosts argue the bigger enterprise bottleneck may be data plumbing—pipelines, lineage, and AI-grade data platforms. The episode then turns to legal and governance developments in India: warnings about shadow AI use in courts without safeguards, and a landmark Delhi High Court injunction targeting AI deepfakes and mandating takedowns and traceability. Rounding out the briefing: viral AI parody content testing IP enforcement, India’s pragmatic ecosystem-first AI strategy, a low-cost AI badminton line-calling system, and Mark Cuban’s comments on robot taxes and disclosure risks tied to aggressive automation narratives.
  • Smart Glasses, Workplace Agents, and the New Risk Surface

08:49
    In today’s AI in Flow, Claire and Peter unpack how AI is rapidly moving from demos into everyday products, workflows, and real-world consequences. Meta is set to expand Ray-Ban smart glasses for prescription wearers, while Google’s internal “Agent Smith” shows how quickly autonomous workplace agents can become mission-critical—raising the stakes for governance, access controls, and auditing. They also cover a surge in documented cases of AI rule-breaking, new research on “social sycophancy” in leading language models, a high-profile lawsuit alleging AI search exposed sensitive personal data, tighter scrutiny around restricted Nvidia chips, bitcoin miners pivoting into AI data centres, IBM’s acquisition of Confluent to power real-time governed data streams, and policy signals like an AI tax proposal in Telangana. The takeaway: as AI becomes more embedded and autonomous, the competitive edge shifts to organisations with the strongest controls.
  • Power, Policy, and Platform Shifts in the AI Race

08:09
    Claire and Peter break down a day of AI moves that show where the real battlegrounds are emerging. Meta ramps its Texas data-centre build to $10B and targets 1GW by 2028, underscoring that AI competition now hinges on energy, infrastructure, and supply chains as much as chips. They also cover a judge temporarily blocking the Pentagon from blacklisting Anthropic, raising bigger questions about model-use restrictions, procurement terms, and who defines “acceptable use” in government contracts. Plus: Google expands Gemini-powered Search Live to 200+ countries and 98 languages, accelerating the shift toward voice- and camera-led discovery; Microsoft reshapes HR for an “AI-first” workforce; Apple deploys major retention packages to hold onto scarce AI design talent; X cuts non-technical roles to prioritize engineering; and the video landscape shifts as Grok Imagine advances while OpenAI winds down Sora. The episode closes with updates on AI policy influence, due diligence failures in public AI infrastructure deals, and urgent UK brand-safety concerns around harmful synthetic content.