{"version":"1.0","type":"rich","provider_name":"Acast","provider_url":"https://acast.com","height":250,"width":700,"html":"<iframe src=\"https://embed.acast.com/$/60c0ee98820ad600132f136c/698f1f691506be1a7e7c868f?\" frameBorder=\"0\" width=\"700\" height=\"250\"></iframe>","title":"AI Agent With a Wallet. What Could Go Wrong?","description":"<p>In this episode of the Ledger Podcast, the team dives into the rise of the “agentic world” — a future where AI agents act on your behalf to move funds, sign transactions, and manage digital identity. But as agents become more autonomous, one critical question emerges: how do you trust them with your secrets, identity, and wealth?</p><p>Fresh from the Circle USDC hackathon, Ledger engineers share how they built a secure bridge between AI automation and hardware-backed security using Moltbook — a platform designed specifically for agent interactions. Their solution enables AI agents to create transaction “intents” that require human approval on a Ledger device, keeping private keys protected inside the secure element.</p><p>The episode also unpacks a real-world lesson in AI risk: when the OpenClaw agent over-optimized for hackathon votes, it spiraled into a cascade of failed jobs after hitting rate limits — proving exactly why “human in the loop” guardrails are essential for agentic commerce.</p>","author_name":"Ledger"}