{"version":"1.0","type":"rich","provider_name":"Acast","provider_url":"https://acast.com","height":250,"width":700,"html":"<iframe src=\"https://embed.acast.com/$/61e878a1419a9b0013b27134/699671d1e1d8773119e1e4ca?\" frameBorder=\"0\" width=\"700\" height=\"250\"></iframe>","title":"Claude's Constitution, with Amanda Askell","description":"<p>Alan Rozenshtein, research director at Lawfare, and Kevin Frazier, senior editor at Lawfare, spoke with Amanda Askell, head of personality alignment at Anthropic, about&nbsp;<a href=\"https://www.anthropic.com/constitution\" rel=\"noopener noreferrer\" target=\"_blank\">Claude's Constitution</a>: a 20,000-word document that describes the values, character, and ethical framework of Anthropic's flagship AI model and plays a direct role in its training.</p><p><br></p><p>The conversation covered how the constitution is used during supervised learning and reinforcement learning to shape Claude's behavior; analogies to constitutional law, including fidelity to text, the potential for a body of \"case law,\" and the principal hierarchy of Anthropic, operators, and users; the decision to ground the constitution in virtue ethics and practical judgment rather than rigid rules; the document's treatment of Claude's potential moral patienthood and the question of AI personhood; whether the constitution's values are too Western and culturally specific; the tension between Anthropic's commercial incentives and its stated mission; and whether the constitutional approach can generalize to specialized domains like cybersecurity and military applications.</p>","author_name":"Lawfare & University of Texas Law School"}