
AI in Flow
Nvidia’s China Play, Microsoft’s AI Breakaway & Perplexity’s Security Snafu
In today’s episode of AI in Flow, we explore Nvidia’s strategic talks with the White House to supply a scaled-down Blackwell GPU to China amid tightening U.S. export controls. Microsoft steps out of OpenAI’s shadow with new in-house models like MAI‑Voice‑1, powering faster and more efficient AI experiences. Plus, Perplexity’s Comet browser raises a major red flag after a prompt injection vulnerability shows how easily AI agents can go rogue. Also on the radar: Amazon’s leadership shake-up to boost gen AI in Prime Video and South Korea’s bold step toward AI-led economic growth. Essential insights and action items for IT leads, tech builders, and digital strategists.
About Six & Flow
Six & Flow is a digital transformation consultancy helping businesses adapt, grow, and thrive in a fast-changing world. We specialise in AI, CRM, and revenue operations, blending strategy, technology, and creativity to deliver measurable impact. From scaling startups to global enterprises, we partner with ambitious teams to unlock growth through smart automation, customer-centric marketing, and forward-thinking sales enablement. Learn more at sixandflow.com.
More episodes

Model Moats, Scheduler Power, and AI That Fixes Potholes
08:33 | Claire and Peter break down today’s biggest shifts in AI across security, infrastructure, and real-world deployment. OpenAI, Google, and Anthropic reportedly coordinate to detect and block model copying—signalling tighter controls that could change how businesses access frontier models. Nvidia’s reported acquisition of SchedMD (the team behind Slurm) raises questions about neutrality and vendor leverage in AI cluster operations. Google’s new offline-first dictation app on iPhone points to the next wave of on-device productivity tooling. The hosts also unpack “code overload” from AI coding assistants and what it means for review, security, and governance, plus signals from JPMorgan on enterprise-scale adoption, Samsung’s AI-memory-driven profits, growing APAC data centre buildout, Datadog’s push to connect experimentation with observability, and Boston’s practical AI rollout for road maintenance and citizen Q&A.
AI Becomes Infrastructure: OpenAI Spend Signals, Defence Targeting, and the Real Risks of Agents
08:36 | Claire and Peter break down today’s biggest shifts as AI hardens into critical infrastructure. They unpack reports of internal tension at OpenAI over IPO timing and massive infrastructure spend—and what that could mean for pricing, rate limits, and vendor concentration risk. They then look at the rapid expansion of Project Maven into AI-assisted battlefield management, highlighting how safety principles, governance, audit trails, and human-in-the-loop controls are becoming table stakes for high-stakes decision support. The episode also covers Taiwan’s investigations into covert talent recruitment, Wipro’s acquisition of Mindsprint as a signal of domain-led, long-horizon AI services deals, and new research on how agentic systems leak data through memory, tools, and agent-to-agent communication. Finally, they explore why full automation is often uneconomic, why hybrid human+AI systems win in practice, and quick hits on GeoAI fairness, AI-powered microgrids, resilience gains from structured adoption, and a labour market that’s surging for senior roles while squeezing entry paths.
Claude Paywalls, Courtroom Hallucinations, and the Power Problem Behind AI Agents
08:14 | Claire and Peter break down a packed roundup of AI headlines shaping how businesses adopt (and govern) AI. Anthropic is changing how Claude subscriptions work with third-party tools—pushing heavy users toward add-on bundles or the API. Wired reports a security incident at AI training data supplier Mercor, prompting renewed focus on vendor risk and AI supply chain due diligence. In legal news, sanctions are rising for AI-generated briefs with fabricated citations, underscoring the need for strict human verification and audit trails. The hosts also explore Claude Dispatch and what local, desktop-executed agents could mean for enterprise workflows—along with the guardrails required. Finally, they zoom out to the infrastructure reality behind the agent boom: electricity, data-centre capacity, permitting, physical security, geopolitics, and why portability across model and hardware stacks matters. Plus, a practical idea from Andrej Karpathy on building a living Markdown knowledge wiki as an alternative to over-reliance on RAG.
AI Gets Operational: Agents on Windows, Payment Rails, and ChatGPT Commerce
09:18 | Claire and Peter break down the biggest AI stories of the day on AI in Flow. Anthropic brings Claude’s computer-use capability to Windows—pushing agentic automation deeper into everyday enterprise desktops—while a separate US policy dispute raises questions about how easily governments could restrict AI vendors. Microsoft announces a major Japan investment focused on in-country AI infrastructure, cybersecurity, and training, as Google releases the commercially permissive, open Gemma 4 model family. The Linux Foundation’s new x402 initiative aims to standardise how AI agents pay for services across fiat and crypto, and travel booking moves further into the assistant layer with EaseMyTrip inside the ChatGPT marketplace. Plus: autonomous legal workflows, OpenAI’s move into podcasting, child-safety pressure around synthetic content, and Sarvam AI’s multilingual momentum in India.
Leaked Agent Code, Kids’ AI Videos, and the Power Grid Bottleneck
09:46 | Claire and Peter break down today’s biggest shifts in AI on AI in Flow: Anthropic scrambles after a packaging error exposes internal Claude Code agent source code (not weights or customer data, but potentially revealing proprietary orchestration and guardrails). Pressure mounts on Google to curb AI-generated children’s videos on YouTube and YouTube Kids, with implications for creators, advertisers, and regulation. In the US, AI data centre buildouts face delays not from chips, but from shortages of transformers and other electrical gear—raising costs and slowing roadmaps. Also in today’s briefing: Oracle reportedly cuts roles while doubling down on AI infrastructure; “AI coworker” agents like Junior move beyond chatbots into proactive work (with permission and hallucination risks); AI boosts textile recycling at industrial scale; the NIH expands AI funding for Alzheimer’s subtyping; researchers argue multimodal models need better self-awareness of uncertainty; deepfake conflict content overwhelms fact-checkers; and Europe accelerates humanoid robotics for practical industrial deployments.
Security Leaks, Privacy Trackers, and the Power Behind AI
09:39 | Claire and Peter break down a packed day in AI on AI in Flow: Anthropic’s accidental Claude Code source exposure and what it means for trust and secure deployment; a proposed class action alleging Perplexity chats were tracked and shared with ad platforms; and the escalating reality that AI is now constrained by electricity as Microsoft explores a major natural-gas power deal for data centres. They also cover surging semiconductor exports signalling sustained infrastructure demand, Anthropic’s Australia expansion tied to renewables and sovereignty, Oracle’s job cuts to fund AI cloud investment, Singapore’s first autonomous public ride service launch, alarming AI-enabled harassment targeting teachers in Scottish schools, and Jerome Powell’s blunt advice that learning AI tools is becoming essential for the workforce.
Agentic Macs, Procurement Guardrails, and the New AI Infrastructure Reality
09:19 | In today’s episode of AI in Flow, hosts Claire and Peter break down a packed slate of AI shifts across product, policy, and operations. Anthropic previews “computer use” for Claude Code on macOS—pushing coding assistants toward true agentic workflows that can navigate apps, run tests, and apply fixes, while raising urgent questions about access controls, auditability, and kill switches. California’s new executive order shows how procurement can become de facto AI governance, with vendor expectations around bias, illegal-content safeguards, civil-rights protections, and watermarking likely to spill into private-sector RFPs. Microsoft doubles down on multi-model copilots with Copilot Critique and Council, signalling a move toward more reliable, reviewable AI outputs via built-in second opinions. The episode also covers Google Maps’ new “Ask Maps” for AI-driven trip planning, uneven global rollouts highlighted by Apple Intelligence’s China hiccup, and growing physical infrastructure risk—from regional instability impacting data centres to public opposition to new builds. Finally, Claire and Peter share two encouraging healthcare advances (rapid AI gestational age estimation and improved cardiac risk prediction from existing scans) and note India’s accelerating push for AI-ready data centre capacity.
Power, Pullbacks, and Platform Gates: AI Gets Real
08:40 | Claire and Peter break down today’s biggest AI developments—starting with Microsoft stepping in to lead a massive data centre expansion in Texas, underscoring that power and infrastructure are now core constraints for scaling AI. They cover OpenAI shutting down Sora just six months after launch, the economics behind compute-heavy consumer products, and why businesses need stronger contract and capacity protections. Plus: Apple’s reported plan to open Siri to third-party chatbots via “Siri Extensions,” what that could mean for distribution and platform rules, and why reliability is a competitive advantage after a major DeepSeek outage. The episode also examines “AI brain fry,” the risks of unreliable AI detectors, research on sycophantic chatbots and user behaviour, and a set of real-world wins—from AI-assisted cardiology and multi-billion-dollar drug discovery deals to AI-enabled drone inspections for building maintenance.
Leadership Shake-Ups, Open Algorithms, and the Rising Governance Bar
09:12 | In today’s episode of AI in Flow, hosts Claire and Peter break down a wide-ranging set of stories shaping how AI is built, bought, and governed. They look at the resignation of xAI’s last original co-founder as Elon Musk restructures the company ahead of a potential IPO—what that signals for enterprises betting on Grok and why procurement flexibility matters. They also explore Bluesky’s new AI assistant “Attie,” which lets users create custom feeds with natural language on an open protocol, raising big questions about moderation, accountability, and brand safety. On the infrastructure front, Samsung and SK Hynix ramp investment in China memory fabs to ease the AI memory crunch, while the hosts argue the bigger enterprise bottleneck may be data plumbing—pipelines, lineage, and AI-grade data platforms. The episode then turns to legal and governance developments in India: warnings about shadow AI use in courts without safeguards, and a landmark Delhi High Court injunction targeting AI deepfakes and mandating takedowns and traceability. Rounding out the briefing: viral AI parody content testing IP enforcement, India’s pragmatic ecosystem-first AI strategy, a low-cost AI badminton line-calling system, and Mark Cuban’s comments on robot taxes and disclosure risks tied to aggressive automation narratives.