{"version":"1.0","type":"rich","provider_name":"Acast","provider_url":"https://acast.com","height":250,"width":700,"html":"<iframe src=\"https://embed.acast.com/$/66cf6d924960e4eb18d4aa8d/67225de077f0e7cbfb9ba08b?\" frameBorder=\"0\" width=\"700\" height=\"250\"></iframe>","title":"AI in K-12 Education Part 2: Tackling Cheating, Privacy, and Policy","thumbnail_width":200,"thumbnail_height":200,"thumbnail_url":"https://open-images.acast.com/shows/66cf6d924960e4eb18d4aa8d/1730305112858-bec2d6e7-36de-44a2-bfa6-6504c787bd3a.jpeg?height=200","description":"<p>In this episode of IT SPARC Cast Deep Dive, John and Lou continue their three-part discussion on AI in K-12 and primary education with part two. They dive into how schools address AI-assisted cheating, privacy challenges, and the role of IT departments in crafting responsible AI policies. Learn about real-life strategies from school IT leaders and the importance of collaboration in using AI effectively in education.</p><p><br></p><p><strong>Show Notes:</strong></p><p><br></p><p><strong>Intro:</strong></p><p><br></p><p>John and Lou kick off by recapping the first episode, where they compared two school districts’ approaches to AI in education.</p><p><br></p><p><strong>Deep Dive:</strong></p><p><br></p><p><strong>AI and Cheating Detection</strong></p><ul><li>Concerns around plagiarism: districts are exploring AI detection tools but worry about false positives.</li><li>Some schools now require all assignments to be written in Google Docs to track typing patterns and deter AI-assisted submissions.</li><li>Discussion of the evolving tactics students might use to bypass these measures.</li></ul><p><br></p><p><strong>Collaborative Policy Development</strong></p><ul><li>Districts are conducting “AI tours” and working with digital learning specialists to educate teachers on safe AI tools and data privacy.</li><li>Schools emphasize collaboration among IT, teachers, and administrators to ensure that AI policies align with classroom needs.</li></ul><p><br></p><p><strong>Managing Student Devices</strong></p><ul><li>IT departments are limiting AI tools on student Chromebooks while allowing teachers access to approved educational AI applications.</li><li>Google’s admin console gives schools control over Chromebooks, enabling restrictions that align with educational goals and privacy requirements.</li></ul><p><br></p><p><strong>Experimenting with Prompt “Poisoning” to Detect AI Usage</strong></p><ul><li>John and Lou test a strategy in which obscure references are added to prompts to flag AI-generated work, with mixed results.</li><li>They discuss the importance of educating teachers to recognize AI-generated assignments and to use critical questioning to assess student knowledge.</li></ul><p><br></p><p><strong>Wrap Up:</strong></p><p><br></p><p>John and Lou encourage feedback from educators and IT professionals on AI’s role in schools, inviting emails at feedback@itsparccast.com and comments on X @ITSPARCCast.</p><p><br></p><p>Listeners are urged to subscribe, share, and stay tuned for next week’s episode on AI’s future in education.</p>","author_name":"John Barger"}