{"version":"1.0","type":"rich","provider_name":"Acast","provider_url":"https://acast.com","height":250,"width":700,"html":"<iframe src=\"https://embed.acast.com/$/66f2a0897a3d63d20ff54509/68b9754af8dc6bde38dd0bef?\" frameBorder=\"0\" width=\"700\" height=\"250\"></iframe>","title":"Will ChatGPT's new parental controls actually work?","description":"<p>We’re joined by Elaine Burke, tech journalist and host of the For Tech’s Sake podcast, to ask: are these new controls a meaningful step towards safety, or just a sticking plaster on a much deeper problem?</p><p><br></p><p>We’re all grappling with how to use new AI tools, or whether to avoid them entirely. For some people, they've become a source of support. But what happens when a chatbot becomes a trusted confidant for a teenager in crisis?</p><p><br></p><p>Following a lawsuit in the United States from the parents of a teenager who died by suicide, OpenAI is rolling out parental controls for ChatGPT. The move comes as data suggests mental health queries are a common type of prompt from Irish users, with local experts and regulators issuing stark warnings about the dangers of using AI for therapy.</p>","author_name":"The Journal"}