Trust and responsibility should be core principles of AI. We encourage you to infuse these guiding principles, along with technologies for trust and transparency, into your AI projects.
The LF AI Trusted AI Committee is a global group working on policies, guidelines, tools, and industry use cases to ensure that trustworthy AI systems, and the processes used to develop them, continue to improve over time. The starting point was a survey of existing open source Trusted AI projects and outreach inviting them to join LF AI efforts. Future directions include creating a badging or certification process for open source projects that meet the Trusted AI policies and guidelines defined by LF AI. We invite you to join and contribute to an evolving document that describes the basic concepts and definitions related to Trusted AI and aims to standardize the vocabulary and terminology.
To view the Trusted AI projects on the LF AI landscape, please click here.