Artificial Intelligence is now an unavoidable part of business, creating new risks and opportunities. In 2024, Mark Lyster led the development of a toolkit designed to help investors understand and navigate the key challenges, risks, and human rights impacts associated with AI.
The AI and Human Rights: Investor Toolkit, whose development Mark Lyster co-chaired, is a pioneering resource addressing the intersection of AI and human rights. Developed in collaboration with industry leaders, ESG analysts, and human rights advocates, the toolkit equips investors with the knowledge and strategies needed to engage companies on responsible AI practices.
Key highlights from the toolkit include:
Comprehensive Risk Frameworks: Tools for identifying and evaluating potential human rights risks within AI applications.
Engagement Strategies: Practical guidance for investors to influence companies toward responsible AI governance and transparency.
Global Benchmarks: Case studies and best practices from leading organisations to foster accountability and innovation.
Mark’s leadership was pivotal in shaping the toolkit to address the ethical and operational complexities of AI. “AI represents both incredible potential and significant risk. This toolkit is designed to empower investors with the knowledge and strategies needed to advocate for AI systems that uphold fundamental human rights,” he shared.
As businesses increasingly adopt AI, understanding its implications for privacy, equality, and social justice has never been more crucial. The AI and Human Rights: Investor Toolkit is a vital resource for organisations looking to integrate ethical considerations into their technology investments.
For more information, see the full toolkit and its recommendations. We are proud to showcase Mark’s role in this critical initiative and look forward to the continued impact of this work in the ESG landscape.