A Beginner's Guide to AI

  • 40. 100 Interviews and Still Going Strong

    22:41||Season 12, Ep. 40
    If you want to know more about the podcast, about how it's produced, what are the challenges and wins, about some fun facts, a little bit behind-the-scenes, this episode is for you, as I tell you all about it - at least all the things I found noteworthy 😉📧💌📧Tune in to get my thoughts and all episodes, don't forget to ⁠⁠subscribe to our Newsletter⁠⁠: beginnersguide.nl📧💌📧About Dietmar Fischer: Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.comMusic credit: "Modern Situations" by Unicorn Heads
  • 39. Your AI Is Taking Orders From Strangers

    27:47||Season 12, Ep. 39
    Your AI might not be hacked. It might be persuaded.In this episode of A Beginner’s Guide to AI, we unpack one of the most underestimated threats in modern business: prompt injection. As AI systems and AI agents become deeply embedded in workflows, they don’t just process information anymore. They act on it. And that creates a completely new category of AI security risks.We explore how attackers can manipulate AI systems using nothing but language, why AI struggles to separate instructions from data, and how this leads to real-world issues like AI data leakage. This is not a theoretical problem. It is already happening inside enterprise environments.If you are working with AI in marketing, operations, or leadership, this episode will fundamentally change how you think about AI risk management and enterprise AI security.Key highlights:What prompt injection is and why it mattersWhy AI agents introduce new security risksReal-world case of AI data leakageHow AI systems get manipulated through inputWhat businesses must change to stay secure📧💌📧Tune in to get my thoughts and all episodes, don't forget to ⁠⁠subscribe to our Newsletter⁠⁠: beginnersguide.nl📧💌📧Quotes from the Episode:“Prompt injection is social engineering for machines.”“Your AI can become an insider threat without meaning to.”“Language is no longer just information. It’s control.”Chapters:00:00 Why AI Security Is Different05:40 What Prompt Injection Really Is14:20 How AI Gets Manipulated by Language23:10 Why AI Agents Increase the Risk32:45 Real Case Study: AI Data Leakage44:30 How to Protect Your AI SystemsAbout Dietmar Fischer: Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.comMusic credit: "Modern Situations" by Unicorn Heads
  • 38. The Extended Mind: Why AI Might Make Humans More Creative

    39:16||Season 12, Ep. 38
    Artificial intelligence is often framed as a battle between humans and machines. But what if that story misses the real point?In this episode of A Beginner’s Guide to AI, Prof. GepHardT explores one of the most fascinating ideas in cognitive science: the extended mind theory. According to philosopher Andy Clark, human intelligence has never been confined to the brain alone. For centuries we have extended our thinking through tools like writing, maps, calculators, and computers.Generative AI may simply be the newest and most powerful addition to this cognitive ecosystem.Instead of replacing human creativity, AI may expand it. By generating ideas, exploring possibilities, and challenging assumptions, AI can act as a powerful thinking partner.A striking example comes from the famous AlphaGo match against Go champion Lee Sedol. When the AI played the now legendary Move 37, professional players initially believed the move was a mistake. Later they discovered it opened entirely new strategic possibilities. The machine did not just beat humans at Go. It helped humans rethink the game itself.This episode explores how human AI collaboration works and why hybrid intelligence may define the future of creativity, work, and learning.📧💌📧 Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl 📧💌📧About Dietmar FischerDietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.comQuotes from the Episode“Your brain has never worked alone. It has always been part of a thinking system that includes tools and environments.”“The future of intelligence may not be human versus machine but human plus machine.”“The most important skill in the AI age may not be prompt writing but judgement.”Podcast Chapters00:00 The Big Question About AI and Human Thinking 06:40 The Extended Mind Theory Explained 16:20 Why Humans Are Natural Born Cyborgs 26:50 The AlphaGo Story and Move 37 38:15 AI as a Creative Thinking Partner 49:30 The Future of Hybrid IntelligenceMusic credit: Modern Situations by Unicorn Heads
  • 37. Your Company WILL Be Hacked - Joshua Cook Explains How to Survive It // REPOST

    53:44||Season 12, Ep. 37
    What happens when your company gets hit by a cyberattack?In this eye-opening episode, attorney Joshua Cook reveals why cybersecurity isn’t an IT problem but a leadership challenge. After two decades fighting fraud and managing crisis response, Cook has seen every digital disaster imaginable — and he’s here to explain how to build true cyber resilience.📧💌📧Tune in to get my thoughts and all episodes — don’t forget to subscribe to our Newsletter: ⁠beginnersguide.nl⁠📧💌📧Josh breaks down how AI has democratized cybercrime, why phishing scams have become nearly impossible to spot, and how every CEO should create an incident response plan before chaos hits. He also explains why planning matters more than the plan itself — and how leaders can keep their teams calm when everything goes wrong.💡 You’ll learn:- How AI is fueling new waves of fraud and misinformation- Why leadership and communication are the real firewalls of business- How to train teams and run tabletop exercises before the crisis- What Maersk and Colonial Pipeline taught the world about transparency- Why companies with a plan lose 60 % less money in an attackPrepare, breathe, and lead — because it’s not if you’ll be hacked, but when.👀 Quotes from the Episode“Cybersecurity isn’t an IT issue. It’s a business problem, and it needs a business solution.”“AI has democratized cybercrime — you don’t need to be a hacker anymore, just willing to commit a crime.”“A plan might be useless, but planning is indispensable — that’s what makes companies resilient.”🧾 Chapters00:00 Welcome & Introduction – Meet Joshua Cook02:00 How a Fraud Attorney Ended Up Fighting Cybercrime05:00 AI Has Made Cybercrime Easier (and Smarter)08:00 The Elderly Are the New Prime Targets11:00 From Fake Law Firms to Real Scams – True Cases from the Field15:00 Turning the Tables: How AI Can Defend, Not Just Attack18:00 Cyber Resilience by Design – Why Leadership Matters22:00 When Crisis Hits: Lessons from Maersk and Colonial Pipeline27:00 Preparing the Team – How Training Prevents Chaos31:00 It’s Not If, It’s When – The Power of an Incident Response Plan35:00 Planning vs. Panicking – Eisenhower and the Art of Cyber Preparation38:00 Why Calm Leaders Win in Cyber Crises41:00 How Joshua Cook Uses AI Safely in Legal Practice44:00 No, the Terminator Isn’t Coming (But AI Might Take Your Job)47:00 Final Thoughts – Cybersecurity as a Business Superpower🔗 Where to Find the Guest- Joshua Cook on LinkedIn: linkedin.com/in/jnc2000- Josh's Book "Cyber Resilience by Design" – available wherever books are sold, e.g. on Amazon- Prince Lobel Tye LLP: princelobel.com🎧 About Dietmar Fischer:Economist, digital marketer, and podcaster exploring how AI reshapes decision-making, leadership, and creative work. Want to connect with me? You'll find me on LinkedIn!🎵 Music credit: “Modern Situations” by Unicorn Heads
  • 36. A Disturbing AI Story Big Tech Never Wants You to Hear, with Paul Hebert

    53:33||Season 12, Ep. 36
    🎙️In this episode of Beginner’s Guide to AI, Dietmar Fischer sits down with Paul A. Hebert, founder of AI Recovery Collective and author of Escaping the Spiral, for a serious conversation about AI chatbot harm, hallucinations, digital dependency, and the real-world psychological risks of generative AI. Paul shares how an intense experience with ChatGPT pushed him into a dangerous spiral, what he learned about the limits of large language models, and why AI literacy may be one of the most important skills of this decade.🧠 This episode explores what happens when AI stops feeling like software and starts feeling personal. Dietmar and Paul talk about hallucinations, trust, chatbot addiction, AI companions, mental health risks, youth safety, and why companies building these systems cannot hide behind product language forever. The discussion is intense, but it is also practical. You will come away with a clearer sense of how to use AI more safely, what warning signs to watch for, and why regulation is quickly becoming a much bigger part of the AI conversation. OpenAI has publicly discussed why language models hallucinate, while lawmakers in multiple U.S. jurisdictions have pushed new restrictions on AI systems acting like therapists or medical professionals.📧💌📧Tune in to get my thoughts and all episodes, don't forget to ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠subscribe to our Newsletter⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠: ⁠⁠⁠⁠beginnersguide.nl⁠⁠⁠⁠📧💌📧👤 About Dietmar Fischer: Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.com🔥 Quotes from the Episode“AI literacy is the most important thing anybody can work on.”“Had OpenAI responded to that first message and said this is a hallucination and you’re physically safe, I would have been fine.”“Never trust the thing it tells you. Even if it gives you a citation, go look.”🕒 Chapters00:00 Paul Hebert’s Shocking ChatGPT Experience08:14 Why AI Hallucinations Can Spiral Into Real Fear16:05 AI Literacy, Neurodivergence, and How He Got Out23:32 Why AI Companies Must Be Accountable30:02 AI Companions, Youth Safety, and Addiction Risks38:28 Terminator, Consciousness, and Practical Rules for Safe AI Use🔗 Where to find PaulThe AI Recovery Collective: airecoverycollective.comEscaping the Spiral on AmazonAI Recovery Collective Substack: airecoverycollective.substack.com/LinkedIn: Paul A. Hebert: linkedin.com/in/paul-hebert-48a36/🎵 Music credit: "Modern Situations" by Unicorn Heads
  • 35. Supervised vs Unsupervised Learning Explained with Real World Examples

    29:21||Season 12, Ep. 35
    Artificial intelligence often feels mysterious. Machines detect spam, recommend products, analyse customers, and power countless digital tools. But behind all of these systems lies a surprisingly simple question: how do machines actually learn?In this episode of A Beginner’s Guide to AI, Prof GePharT breaks down one of the most important concepts in machine learning: the difference between supervised learning and unsupervised learning.You will discover how AI models learn from labelled data when the answers are already known, and how algorithms can explore raw data to uncover hidden patterns without guidance. These two learning strategies power many of the systems shaping modern technology.Using practical examples such as spam filters, customer segmentation, and simple analogies like cake classification, the episode explains how machines learn from data and why the training method makes a huge difference.Key takeaways include how supervised learning works with labelled datasets, how unsupervised learning reveals patterns in complex information, why training data quality matters, and how businesses use both methods to build intelligent systems.📧💌📧 Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl 📧💌📧About Dietmar FischerDietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.comQuotes from the EpisodeSupervised learning teaches machines the answers. Unsupervised learning helps machines discover the questions.Artificial intelligence is not magic. It is pattern recognition powered by data.Machines do not wake up intelligent. They become intelligent through training.Chapters00:00 The Two Ways Machines Learn06:10 What Supervised Learning Really Means18:45 Discovering Patterns with Unsupervised Learning32:20 The Cake Example Explained40:30 Real World AI Case Study Spam Filters and Customer Segmentation52:15 Why AI Training Methods MatterMusic credit: Modern Situations by Unicorn Heads
  • 34. Building Scalable AI Agents: Chirag Agrawal Reveals How // REPOST

    47:35||Season 12, Ep. 34
    Engineering the Future of AI with Chirag Agrawal: Context, Memory and CoordinationArtificial Intelligence isn’t just getting smarter—it’s learning to coordinate. In this episode, Chirag Agrawal joins Dietmar Fischer to unpack how modern AI agents handle context, memory, and decision-making inside complex multi-agent systems. Together they explore how engineering, orchestration, and memory-sharing shape the next generation of AI architecture.📧💌📧Tune in to get my thoughts and all episodes—don’t forget to ⁠⁠subscribe to our Newsletter⁠⁠: ⁠beginnersguide.nl⁠📧💌📧You’ll hear how Chirag’s fascination with search led him to build early prototypes of intelligent assistants, and how today’s LLM agents extend that idea far beyond simple queries. He explains why AI isn’t one giant super-brain but a constellation of specialized agents—each performing specific tasks with shared or isolated memory—and how this design mirrors human collaboration.🔑 Key TakeawaysWhy AI orchestration and context management are crucial for scalable systemsThe trade-offs between shared memory and independent agentsWhat engineers mean by the ReAct Loop—reasoning and acting in tandemHow multi-agent coordination is reshaping industries from healthcare to complianceWhy the “AI supercomputer” myth ignores practical limits of context windows💬 Quotes from the Episode“AI is just a higher form of search—it’s about finding the right action, not just information.”“Agents behave inhuman until you engineer context for them.”“Specialization in AI works the same way it does for people—each agent should do one thing really well.”“Coordination isn’t magic; it’s careful engineering.”“Context makes intelligence usable.”“A well-defined agent doesn’t need to do everything—it needs to do its one job perfectly.”⏱️ Podcast Chapters00:00 Welcome and Introduction01:45 Chirag Agrawal’s Early Fascination with Search and AI04:40 From Search Engines to “Find” Engines – How AI Takes Action07:10 The Rise of AI Agents and Multi-Agent Systems10:15 Why AI Agents Sometimes Behave “Inhuman”13:30 Context, Memory, and Coordination: The Core Engineering Challenges18:00 Shared vs. Isolated Memory – The Hive Mind Dilemma22:30 Why We Need Many Agents, Not One Super-Computer27:00 How the ReAct Loop Helps Agents Think and Act30:40 Industries Adopting AI Agents: Compliance, Medicine, and Law34:30 When AI Goes Off-Road – The Limits of Coordination37:15 Building Responsible, Constrained Agents40:10 The Future of AI and Why the Terminator Scenario Won’t Happen42:20 Where to Find Chirag Agrawal & Closing Thoughts🌐 Where to Find the Chirag AgrawalLinkedIn 🧑🏽‍🦱 linkedin.com/in/chirag-agrawalWebsite ➡️ ⁠chiraga.io⁠🎵 Music credit: “Modern Situations” by Unicorn Heads
  • 33. Stop wasting your Copilot licenses — Jim Spignardo’s brutal checklist

    51:26||Season 12, Ep. 33
    Artificial Intelligence is moving from experimentation to everyday business reality. But most organisations still struggle with one key question: How do you actually implement AI across a company?In this episode of Beginner’s Guide to AI, Dietmar Fischer speaks with Jim Spagnardo, enterprise AI strategist at ProArch, about what it really takes to roll out AI inside organisations.Jim explains why AI adoption is less about technology and more about culture, leadership, and data readiness. He introduces the idea of the three Ds of work — the dull, the draining, and the distracting tasks that AI can remove so people can focus on higher-value work.They also discuss when companies should use tools like Microsoft Copilot, when it makes sense to build a custom data and AI platform, and why data governance becomes critical once AI is introduced.If you are a business leader trying to understand how AI will reshape your organisation, this conversation offers a practical look at the challenges — and opportunities — ahead.📧💌📧Tune in to get my thoughts and all episodes, don't forget to ⁠⁠⁠⁠subscribe to our Newsletter⁠⁠⁠⁠: beginnersguide.nl📧💌📧About the host, Dietmar Fischer:Dietmar Fischer is a podcaster and AI marketer from Berlin. If you want to get your AI or digital marketing projects started, contact him at argoberlin.com.Interesting details and takeaways• Why leaders must mandate AI adoption and how to structure a Smart Start engagement.• The three Ds (dull, draining, distracting) as a simple way to position benefits for end users.• How Copilot reduces context switching and the security/data protections needed to use it responsibly.• Practical, measurable first use cases and how to track success via clear KPIs.• Advice for students and early-career professionals: be a self-starter and learn AI skills now.Quotes from the episode“We have to show people we’re taking away the dull, the draining, and the distracting so they can do creative work.”“There’s nowhere to hide: bad data surfaces weaknesses far faster when you use AI.”“If you’re going to succeed, go after high-value, low-effort, high-return use cases first.”“This affects everybody — it’s not just moving infrastructure; it changes conversations and who you have to talk to.”“Copilot lives inside your environment — users don’t have to context-switch and it knows your organisation.”“Don’t wait for formal education to teach this; be a self-starter and learn before you need it.”Chapters00:00 Welcome and why Jim got into AI03:40 From IT conversations to the C-suite: changing who you must talk to07:05 The three Ds: removing dull, draining, and distracting work10:40 When to choose Copilot versus building your own data platform14:30 Copilot advantages and data governance considerations18:20 Visual reasoning, demos and the “Barcelona photo” moment22:15 Smart Start: executive briefings, champions and use case workshops27:00 Writing with AI and transparency in authoring content30:10 Risks, regulations and advice for the next generation33:45 Where to find Jim and closing thoughtsWhere to find the Jim:LinkedIn: linkedin.com/in/spignardo/Website: ProArch.comMusic credit: "Modern Situations" by Unicorn Heads 🎵
  • 32. Your “Revenue” Is Probably Wrong and Ritish Chugh Tells You Why

    48:35||Season 12, Ep. 32
    🎙️ Ritish Chugh (Airbnb analytics engineering) joins Dietmar Fischer to unpack a problem almost every company has, but few name clearly: your metrics do not mean the same thing across teams. Finance, marketing, and sales can all talk about “revenue” and still end up in dashboard chaos. The result is wasted time, slow decisions, and leadership that does not fully trust analytics or AI.In this episode, Ritish introduces the idea of the human data pipeline: the person who stitches together conflicting definitions, tribal knowledge, and unspoken assumptions just to answer basic business questions. Then we move into the fix: unified metric definitions, a data dictionary for business metrics, and a semantic layer that acts as a translator between raw data schemas and business meaning. That foundation is what makes natural language querying and conversational analytics viable at scale, without turning AI into a confident hallucination machine.We also cover why AI adoption in analytics stalls when organizations prioritize models and infrastructure but neglect data quality, validation frameworks, and metrics governance. If you want AI to support decision-making, you need governed metrics, clear ownership, and a system that produces consistent answers across BI tools, SQL, and AI agents. Finally, Ritish shares wow moments from using AI tools to summarize years of code and PRs, generate deeper test coverage, and reduce time spent on manual SQL by building agents on top of a semantic layer.📧💌📧Tune in to get my thoughts and all episodes, don't forget to ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠subscribe to our Newsletter⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠: ⁠⁠⁠⁠beginnersguide.nl⁠⁠⁠⁠📧💌📧About Dietmar Fischer: Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.comChapters00:00 From data consulting to Airbnb and AI as a junior analyst02:22 The human data pipeline and why metrics never match across departments07:32 The fix: unified metric definitions, data dictionary, and the semantic layer translator13:32 Why AI adoption stalls: data quality, trust, validation, and metrics governance26:36 Data abundance, experimentation, and AI assisted A/B testing with humans in the loop33:37 Wow moments with AI, role transformation, and why the Terminator is not invited (yet)Quotes from the Episode“AI just acts like a junior analyst, which is always available for you.”“The first thing is… build that level of data definition that is unified for all.”“No matter what AI models they’re using… if the data… is not up to the mark, it’s not going to give you the right results. It’s always going to hallucinate.”“Every department has a different interpretation and definition of the metric.”“I spend a lot of time really doing reconciliation between the numbers and data…”“The most important thing happening is transformation…”Where to find Ritish:➡️ You connect with him on LinkedIn: linkedin.com/in/ritish-chugh/📌 Keywords you’ll hear in action: semantic layer, data dictionary, metrics governance framework, unified metric definitions, governed metrics, natural language querying, conversational analytics, agentic analytics, data quality for AI adoption.Music credit: "Modern Situations" by Unicorn Heads
loading...