{"version":"1.0","type":"rich","provider_name":"Acast","provider_url":"https://acast.com","height":250,"width":700,"html":"<iframe src=\"https://embed.acast.com/$/6953b9ead0c0aeaf12bcbd70/69c302ff034bb238fb187715?\" frameBorder=\"0\" width=\"700\" height=\"250\"></iframe>","title":"Your AI Is Taking Orders From Strangers","thumbnail_width":200,"thumbnail_height":200,"thumbnail_url":"https://open-images.acast.com/shows/6953b9ead0c0aeaf12bcbd70/1774387825128-d96ee93f-7cfe-4010-a296-e5d818e024c8.jpeg?height=200","description":"<p><strong>Your AI might not be hacked. It might be persuaded.</strong></p><p><br></p><p>In this episode of A Beginner’s Guide to AI, we unpack one of the most underestimated threats in modern business: prompt injection. As AI systems and AI agents become deeply embedded in workflows, they don’t just process information anymore. They act on it. And that creates a completely new category of AI security risks.</p><p><br></p><p>We explore how attackers can manipulate AI systems using nothing but language, why AI struggles to separate instructions from data, and how this leads to real-world issues like AI data leakage. This is not a theoretical problem. 
It is already happening inside enterprise environments.</p><p><br></p><p>If you are working with AI in marketing, operations, or leadership, this episode will fundamentally change how you think about AI risk management and enterprise AI security.</p><p><br></p><p><br></p><p><strong>Key highlights:</strong></p><ul><li>What prompt injection is and why it matters</li><li>Why AI agents introduce new security risks</li><li>Real-world case of AI data leakage</li><li>How AI systems get manipulated through input</li><li>What businesses must change to stay secure</li></ul><p><br></p><p><br></p><p>📧💌📧</p><p>Tune in to get my thoughts and all episodes, and don't forget to subscribe to our Newsletter: <a href=\"https://beginnersguide.nl\" rel=\"noopener noreferrer\" target=\"_blank\"><strong>beginnersguide.nl</strong></a></p><p>📧💌📧</p><p><br></p><p><br></p><p><br></p><p><strong>Quotes from the Episode:</strong></p><ul><li>“Prompt injection is social engineering for machines.”</li><li>“Your AI can become an insider threat without meaning to.”</li><li>“Language is no longer just information. It’s control.”</li></ul><p><br></p><p><br></p><p><strong>Chapters:</strong></p><p>00:00 Why AI Security Is Different</p><p>05:40 What Prompt Injection Really Is</p><p>14:20 How AI Gets Manipulated by Language</p><p>23:10 Why AI Agents Increase the Risk</p><p>32:45 Real Case Study: AI Data Leakage</p><p>44:30 How to Protect Your AI Systems</p><p><br></p><p><br></p><p><strong>About Dietmar Fischer: </strong>Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at <a href=\"https://argoberlin.com\" rel=\"noopener noreferrer\" target=\"_blank\"><strong>argoberlin.com</strong></a></p><p><br></p><p><br></p><p><br></p><p><em>Music credit: \"Modern Situations\" by Unicorn Heads</em></p>","author_name":"Dietmar Fischer"}