{"version":"1.0","type":"rich","provider_name":"Acast","provider_url":"https://acast.com","height":250,"width":700,"html":"<iframe src=\"https://embed.acast.com/$/65b7c372124cd20018028989/69b2b51b645f7e43f27161fd?\" frameBorder=\"0\" width=\"700\" height=\"250\"></iframe>","title":"AI security in 2026: How to stay ahead","thumbnail_width":200,"thumbnail_height":200,"thumbnail_url":"https://open-images.acast.com/shows/65b7c372124cd20018028989/1773329921014-93d9dad0-189b-41a3-ae8a-2fe9c1761ce3.jpeg?height=200","description":"<p>Host Gail Lundgren (Grey Matter) is joined by Shannon Murphy (Trend AI) to unpack the biggest cyber security threats facing organisations in 2026 and what security protocols need to be in place as AI adoption accelerates. They explore the AI adoption curve (adopters, builders, scalers), how security stacks evolve across each stage, and where AI initiatives typically originate inside organisations - along with the risks that follow. The conversation also covers emerging challenges like prompt injection, agentic/MCP abuse, and sensitive data exposure, plus practical best practices to reduce risk while enabling innovation. Gail and Shannon also reflect on women in tech and advice for those pursuing careers in security.</p>","author_name":"Grey Matter"}