Share

cover art for A Beginner's Guide to AI

A Beginner's Guide to AI

Your Guide to AI: Interesting Interviews, Concepts Explained and Tips & Tricks on how to Use AI.


Latest episode

  • 40. 100 Interviews and Still Going Strong

    22:41||Season 12, Ep. 40
    If you want to know more about the podcast, about how it's produced, what are the challenges and wins, about some fun facts, a little bit behind-the-scenes, this episode is for you, as I tell you all about it - at least all the things I found noteworthy šŸ˜‰šŸ“§šŸ’ŒšŸ“§Tune in to get my thoughts and all episodes, don't forget to ⁠⁠subscribe to our Newsletter⁠⁠: beginnersguide.nlšŸ“§šŸ’ŒšŸ“§About Dietmar Fischer: Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.comMusic credit: "Modern Situations" by Unicorn Heads

More episodes

View all episodes

  • 39. Your AI Is Taking Orders From Strangers

    27:47||Season 12, Ep. 39
    Your AI might not be hacked. It might be persuaded.In this episode of A Beginner’s Guide to AI, we unpack one of the most underestimated threats in modern business: prompt injection. As AI systems and AI agents become deeply embedded in workflows, they don’t just process information anymore. They act on it. And that creates a completely new category of AI security risks.We explore how attackers can manipulate AI systems using nothing but language, why AI struggles to separate instructions from data, and how this leads to real-world issues like AI data leakage. This is not a theoretical problem. It is already happening inside enterprise environments.If you are working with AI in marketing, operations, or leadership, this episode will fundamentally change how you think about AI risk management and enterprise AI security.Key highlights:What prompt injection is and why it mattersWhy AI agents introduce new security risksReal-world case of AI data leakageHow AI systems get manipulated through inputWhat businesses must change to stay securešŸ“§šŸ’ŒšŸ“§Tune in to get my thoughts and all episodes, don't forget to ⁠⁠subscribe to our Newsletter⁠⁠: beginnersguide.nlšŸ“§šŸ’ŒšŸ“§Quotes from the Episode:ā€œPrompt injection is social engineering for machines.ā€ā€œYour AI can become an insider threat without meaning to.ā€ā€œLanguage is no longer just information. It’s control.ā€Chapters:00:00 Why AI Security Is Different05:40 What Prompt Injection Really Is14:20 How AI Gets Manipulated by Language23:10 Why AI Agents Increase the Risk32:45 Real Case Study: AI Data Leakage44:30 How to Protect Your AI SystemsAbout Dietmar Fischer: Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.comMusic credit: "Modern Situations" by Unicorn Heads
  • 38. The Extended Mind: Why AI Might Make Humans More Creative

    39:16||Season 12, Ep. 38
    Artificial intelligence is often framed as a battle between humans and machines. But what if that story misses the real point?In this episode of A Beginner’s Guide to AI, Prof. GepHardT explores one of the most fascinating ideas in cognitive science: the extended mind theory. According to philosopher Andy Clark, human intelligence has never been confined to the brain alone. For centuries we have extended our thinking through tools like writing, maps, calculators, and computers.Generative AI may simply be the newest and most powerful addition to this cognitive ecosystem.Instead of replacing human creativity, AI may expand it. By generating ideas, exploring possibilities, and challenging assumptions, AI can act as a powerful thinking partner.A striking example comes from the famous AlphaGo match against Go champion Lee Sedol. When the AI played the now legendary Move 37, professional players initially believed the move was a mistake. Later they discovered it opened entirely new strategic possibilities. The machine did not just beat humans at Go. It helped humans rethink the game itself.This episode explores how human AI collaboration works and why hybrid intelligence may define the future of creativity, work, and learning.šŸ“§šŸ’ŒšŸ“§ Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl šŸ“§šŸ’ŒšŸ“§About Dietmar FischerDietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.comQuotes from the Episodeā€œYour brain has never worked alone. It has always been part of a thinking system that includes tools and environments.ā€ā€œThe future of intelligence may not be human versus machine but human plus machine.ā€ā€œThe most important skill in the AI age may not be prompt writing but judgement.ā€Podcast Chapters00:00 The Big Question About AI and Human Thinking 06:40 The Extended Mind Theory Explained 16:20 Why Humans Are Natural Born Cyborgs 26:50 The AlphaGo Story and Move 37 38:15 AI as a Creative Thinking Partner 49:30 The Future of Hybrid IntelligenceMusic credit: Modern Situations by Unicorn Heads
  • 37. Your Company WILL Be Hacked - Joshua Cook Explains How to Survive It // REPOST

    53:44||Season 12, Ep. 37
    What happens when your company gets hit by a cyberattack?In this eye-opening episode, attorney Joshua Cook reveals why cybersecurity isn’t an IT problem but a leadership challenge. After two decades fighting fraud and managing crisis response, Cook has seen every digital disaster imaginable — and he’s here to explain how to build true cyber resilience.šŸ“§šŸ’ŒšŸ“§Tune in to get my thoughts and all episodes — don’t forget to subscribe to our Newsletter: ⁠beginnersguide.nlā šŸ“§šŸ’ŒšŸ“§Josh breaks down how AI has democratized cybercrime, why phishing scams have become nearly impossible to spot, and how every CEO should create an incident response plan before chaos hits. He also explains why planning matters more than the plan itself — and how leaders can keep their teams calm when everything goes wrong.šŸ’” You’ll learn:- How AI is fueling new waves of fraud and misinformation- Why leadership and communication are the real firewalls of business- How to train teams and run tabletop exercises before the crisis- What Maersk and Colonial Pipeline taught the world about transparency- Why companies with a plan lose 60 % less money in an attackPrepare, breathe, and lead — because it’s not if you’ll be hacked, but when.šŸ‘€ Quotes from the Episodeā€œCybersecurity isn’t an IT issue. It’s a business problem, and it needs a business solution.ā€ā€œAI has democratized cybercrime — you don’t need to be a hacker anymore, just willing to commit a crime.ā€ā€œA plan might be useless, but planning is indispensable — that’s what makes companies resilient.ā€šŸ§¾ Chapters00:00 Welcome & Introduction – Meet Joshua Cook02:00 How a Fraud Attorney Ended Up Fighting Cybercrime05:00 AI Has Made Cybercrime Easier (and Smarter)08:00 The Elderly Are the New Prime Targets11:00 From Fake Law Firms to Real Scams – True Cases from the Field15:00 Turning the Tables: How AI Can Defend, Not Just Attack18:00 Cyber Resilience by Design – Why Leadership Matters22:00 When Crisis Hits: Lessons from Maersk and Colonial Pipeline27:00 Preparing the Team – How Training Prevents Chaos31:00 It’s Not If, It’s When – The Power of an Incident Response Plan35:00 Planning vs. Panicking – Eisenhower and the Art of Cyber Preparation38:00 Why Calm Leaders Win in Cyber Crises41:00 How Joshua Cook Uses AI Safely in Legal Practice44:00 No, the Terminator Isn’t Coming (But AI Might Take Your Job)47:00 Final Thoughts – Cybersecurity as a Business SuperpoweršŸ”— Where to Find the Guest- Joshua Cook on LinkedIn: linkedin.com/in/jnc2000- Josh's Book "Cyber Resilience by Design" – available wherever books are sold, e.g. on Amazon- Prince Lobel Tye LLP: princelobel.comšŸŽ§ About Dietmar Fischer:Economist, digital marketer, and podcaster exploring how AI reshapes decision-making, leadership, and creative work. Want to connect with me? You'll find me on LinkedIn!šŸŽµ Music credit: ā€œModern Situationsā€ by Unicorn Heads
  • 36. A Disturbing AI Story Big Tech Never Wants You to Hear, with Paul Hebert

    53:33||Season 12, Ep. 36
    šŸŽ™ļøIn this episode of Beginner’s Guide to AI, Dietmar Fischer sits down with Paul A. Hebert, founder of AI Recovery Collective and author of Escaping the Spiral, for a serious conversation about AI chatbot harm, hallucinations, digital dependency, and the real-world psychological risks of generative AI. Paul shares how an intense experience with ChatGPT pushed him into a dangerous spiral, what he learned about the limits of large language models, and why AI literacy may be one of the most important skills of this decade.🧠 This episode explores what happens when AI stops feeling like software and starts feeling personal. Dietmar and Paul talk about hallucinations, trust, chatbot addiction, AI companions, mental health risks, youth safety, and why companies building these systems cannot hide behind product language forever. The discussion is intense, but it is also practical. You will come away with a clearer sense of how to use AI more safely, what warning signs to watch for, and why regulation is quickly becoming a much bigger part of the AI conversation. OpenAI has publicly discussed why language models hallucinate, while lawmakers in multiple U.S. jurisdictions have pushed new restrictions on AI systems acting like therapists or medical professionals.šŸ“§šŸ’ŒšŸ“§Tune in to get my thoughts and all episodes, don't forget to ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠subscribe to our Newsletter⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠: ⁠⁠⁠⁠beginnersguide.nlā ā ā ā šŸ“§šŸ’ŒšŸ“§šŸ‘¤ About Dietmar Fischer: Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.comšŸ”„ Quotes from the Episodeā€œAI literacy is the most important thing anybody can work on.ā€ā€œHad OpenAI responded to that first message and said this is a hallucination and you’re physically safe, I would have been fine.ā€ā€œNever trust the thing it tells you. Even if it gives you a citation, go look.ā€šŸ•’ Chapters00:00 Paul Hebert’s Shocking ChatGPT Experience08:14 Why AI Hallucinations Can Spiral Into Real Fear16:05 AI Literacy, Neurodivergence, and How He Got Out23:32 Why AI Companies Must Be Accountable30:02 AI Companions, Youth Safety, and Addiction Risks38:28 Terminator, Consciousness, and Practical Rules for Safe AI UsešŸ”— Where to find PaulThe AI Recovery Collective: airecoverycollective.comEscaping the Spiral on AmazonAI Recovery Collective Substack: airecoverycollective.substack.com/LinkedIn: Paul A. Hebert: linkedin.com/in/paul-hebert-48a36/šŸŽµ Music credit: "Modern Situations" by Unicorn Heads
  • 35. Supervised vs Unsupervised Learning Explained with Real World Examples

    29:21||Season 12, Ep. 35
    Artificial intelligence often feels mysterious. Machines detect spam, recommend products, analyse customers, and power countless digital tools. But behind all of these systems lies a surprisingly simple question: how do machines actually learn?In this episode of A Beginner’s Guide to AI, Prof GePharT breaks down one of the most important concepts in machine learning: the difference between supervised learning and unsupervised learning.You will discover how AI models learn from labelled data when the answers are already known, and how algorithms can explore raw data to uncover hidden patterns without guidance. These two learning strategies power many of the systems shaping modern technology.Using practical examples such as spam filters, customer segmentation, and simple analogies like cake classification, the episode explains how machines learn from data and why the training method makes a huge difference.Key takeaways include how supervised learning works with labelled datasets, how unsupervised learning reveals patterns in complex information, why training data quality matters, and how businesses use both methods to build intelligent systems.šŸ“§šŸ’ŒšŸ“§ Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl šŸ“§šŸ’ŒšŸ“§About Dietmar FischerDietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.comQuotes from the EpisodeSupervised learning teaches machines the answers. Unsupervised learning helps machines discover the questions.Artificial intelligence is not magic. It is pattern recognition powered by data.Machines do not wake up intelligent. They become intelligent through training.Chapters00:00 The Two Ways Machines Learn06:10 What Supervised Learning Really Means18:45 Discovering Patterns with Unsupervised Learning32:20 The Cake Example Explained40:30 Real World AI Case Study Spam Filters and Customer Segmentation52:15 Why AI Training Methods MatterMusic credit: Modern Situations by Unicorn Heads
  • 34. Building Scalable AI Agents: Chirag Agrawal Reveals How // REPOST

    47:35||Season 12, Ep. 34
    Engineering the Future of AI with Chirag Agrawal: Context, Memory and CoordinationArtificial Intelligence isn’t just getting smarter—it’s learning to coordinate. In this episode, Chirag Agrawal joins Dietmar Fischer to unpack how modern AI agents handle context, memory, and decision-making inside complex multi-agent systems. Together they explore how engineering, orchestration, and memory-sharing shape the next generation of AI architecture.šŸ“§šŸ’ŒšŸ“§Tune in to get my thoughts and all episodes—don’t forget to ⁠⁠subscribe to our Newsletter⁠⁠: ⁠beginnersguide.nlā šŸ“§šŸ’ŒšŸ“§You’ll hear how Chirag’s fascination with search led him to build early prototypes of intelligent assistants, and how today’s LLM agents extend that idea far beyond simple queries. He explains why AI isn’t one giant super-brain but a constellation of specialized agents—each performing specific tasks with shared or isolated memory—and how this design mirrors human collaboration.šŸ”‘ Key TakeawaysWhy AI orchestration and context management are crucial for scalable systemsThe trade-offs between shared memory and independent agentsWhat engineers mean by the ReAct Loop—reasoning and acting in tandemHow multi-agent coordination is reshaping industries from healthcare to complianceWhy the ā€œAI supercomputerā€ myth ignores practical limits of context windowsšŸ’¬ Quotes from the Episodeā€œAI is just a higher form of search—it’s about finding the right action, not just information.ā€ā€œAgents behave inhuman until you engineer context for them.ā€ā€œSpecialization in AI works the same way it does for people—each agent should do one thing really well.ā€ā€œCoordination isn’t magic; it’s careful engineering.ā€ā€œContext makes intelligence usable.ā€ā€œA well-defined agent doesn’t need to do everything—it needs to do its one job perfectly.ā€ā±ļø Podcast Chapters00:00 Welcome and Introduction01:45 Chirag Agrawal’s Early Fascination with Search and AI04:40 From Search Engines to ā€œFindā€ Engines – How AI Takes Action07:10 The Rise of AI Agents and Multi-Agent Systems10:15 Why AI Agents Sometimes Behave ā€œInhumanā€13:30 Context, Memory, and Coordination: The Core Engineering Challenges18:00 Shared vs. Isolated Memory – The Hive Mind Dilemma22:30 Why We Need Many Agents, Not One Super-Computer27:00 How the ReAct Loop Helps Agents Think and Act30:40 Industries Adopting AI Agents: Compliance, Medicine, and Law34:30 When AI Goes Off-Road – The Limits of Coordination37:15 Building Responsible, Constrained Agents40:10 The Future of AI and Why the Terminator Scenario Won’t Happen42:20 Where to Find Chirag Agrawal & Closing Thoughts🌐 Where to Find the Chirag AgrawalLinkedIn šŸ§‘šŸ½ā€šŸ¦± linkedin.com/in/chirag-agrawalWebsite āž”ļø ⁠chiraga.ioā šŸŽµ Music credit: ā€œModern Situationsā€ by Unicorn Heads