{"version":"1.0","type":"rich","provider_name":"Acast","provider_url":"https://acast.com","height":250,"width":700,"html":"<iframe src=\"https://embed.acast.com/$/688bd9d76bbbf6afc7811bec/68d13464146cfd1a65679d2e?\" frameBorder=\"0\" width=\"700\" height=\"250\"></iframe>","title":"How Do Large Language Models Learn? | AI Series Pt. 3","description":"<p>How does ChatGPT actually <em>learn</em>? In Episode 8 of <em>Mr. Fred’s Tech Talks</em>, we go behind the curtain to explore how Large Language Models are trained and why the process isn’t as magical as it looks.</p><p><br></p><p>You’ll discover:</p><ul><li>How Large Language Models train step by step: billions of rounds of “guess the next word.”</li><li>Why tokens are the LEGO bricks of AI learning.</li><li>How GPUs, servers, and warehouse-sized data centers form the backbone of AI training.</li><li>A simple library analogy that makes servers easy to understand.</li><li>What happens when training goes wrong: bias, hallucinations, and overfitting.</li></ul><p><br></p><p>AI isn’t just software; it’s powered by huge amounts of energy, specialized hardware, and careful fine-tuning by humans. And while it’s powerful, it’s not perfect. These systems don’t “think” like we do; they predict patterns, one token at a time.</p><p><br></p><p>💡 <em>Tech Tip of the Episode:</em> Try asking ChatGPT/CoPilot/Gemini/Grok etc. a simple question, then twist it:</p><ul><li>“Who was the first person to walk on the moon?” → Neil Armstrong</li><li>“Who was the first person to walk on the sun?” → Watch how it tries to answer the impossible.</li></ul><p><br></p><p>This little experiment shows why AI can amaze us one moment and confuse us the next.</p><p><br></p><p>Join me in Episode 8 to learn how AI models are trained, what really happens inside data centers, and why understanding this matters for parents, teachers, and anyone curious about the future of technology.</p>","author_name":"Fred Aebli"}