{"version":"1.0","type":"rich","provider_name":"Acast","provider_url":"https://acast.com","height":250,"width":700,"html":"<iframe src=\"https://embed.acast.com/$/63c42dcb7ae74e0010ef5e7b/661d37647b8b6100171f0a91?\" frameBorder=\"0\" width=\"700\" height=\"250\"></iframe>","title":"E38: OpenAI Grant Winner: Building AI That Actually Understands What Matters to People","thumbnail_width":200,"thumbnail_height":200,"thumbnail_url":"https://open-images.acast.com/shows/63c42dcb7ae74e0010ef5e7b/1713791380139-d7ccc8ea23d6d5a91e28e68ebacf63d7.jpeg?height=200","description":"<p>In this episode of Polyweb, host Sara Landi Tortoli interviews Oliver Klingefjord, the co-founder and technical lead at the Meaning Alignment Institute.</p><p>The Institute won an OpenAI grant to explore \"democratic fine-tuning\" as an approach that differs from Constitutional AI. </p><p>During the conversation, Oliver and Sara discuss the process of creating a \"moral graph\" based on people's discussions of values, the potential risks of personalized AI models, and the broader implications of their work in aligning technology with human meaning and purpose.</p><p><br></p><p>00:00 Intro</p><p>04:42 How the Meaning Alignment Institute won the OpenAI grant to explore democratic AI alignment</p><p>08:13 Comparing democratic fine-tuning to RLHF and Constitutional AI</p><p>12:33 Institute's process of surfacing underlying values through moral dilemmas</p><p>16:00 Creating a \"moral graph\" to capture wisdom and agreements</p><p>22:04 Risks of personalized AI models and the need for shared human values</p><p>26:25 Using the moral graph to guide and evaluate AI model outputs</p><p>32:12 Potential applications of the moral graph</p><p>36:07 Challenges of scaling to superintelligent AI</p><p>39:55 Broader implications and the need for people to articulate their values</p><p>43:55 Where to find more information about the Meaning Alignment Institute</p><p><br></p><p>👉 CHECK OUT THE MEANING ALIGNMENT 
INSTITUTE: https://www.meaningalignment.org/</p><p><br></p><p><br></p><p>📚 OTHER RESOURCES MENTIONED: </p><p>Meaning Alignment Institute Substack: https://meaningalignment.substack.com/</p><p>Introducing Democratic Fine-Tuning: https://meaningalignment.substack.com/p/introducing-democratic-fine-tuning</p><p>Value Discovery GPT: https://chat.openai.com/g/g-TItg5klMA-values-discovery</p><p><br></p><p>🔗 CONNECT WITH OLIVER:</p><p><br></p><p>👥 LinkedIn - https://www.linkedin.com/in/oliver-klingefjord-45b662105/</p><p>🐦 Twitter - https://twitter.com/klingefjord</p><p><br></p><p>🔗 CONNECT WITH SARA:</p><p><br></p><p>📸 Instagram - https://bit.ly/saralanditortoli-instagram</p><p>🐦 Twitter - https://twitter.com/Sara_LandiTor</p><p>👥 LinkedIn - https://bit.ly/saralanditortoli-linkedin</p><p>💌 Newsletter - https://polyweb.beehiiv.com/</p><p><br></p><p>🪢 ABOUT POLYWEB:</p><p><br></p><p>Polyweb navigates the intricate nexus of technology, product innovation, and human creativity. Engage with leading entrepreneurs and experts to uncover how we can forge impactful companies and lead lives enriched with purpose in the Technology Era.</p><p><br></p><p>📺 Subscribe to our YouTube channel: https://www.youtube.com/@polywebhq</p>","author_name":"Sara Landi Tortoli"}