
3 Trusted AI Toolkits Join LF AI as Newest Incubation Projects

September 22, 2020

LF AI Foundation (LF AI), the organization building an ecosystem to sustain open source innovation in artificial intelligence (AI), machine learning (ML), and deep learning (DL), is today announcing three Trusted AI toolkits as its latest Incubation Projects: the AI Fairness 360 Toolkit, the Adversarial Robustness Toolbox, and the AI Explainability 360 Toolkit. All three toolkits were originally released and open sourced by IBM.

AI Fairness 360 Toolkit

The AI Fairness 360 (AIF360) Toolkit is an open source toolkit that helps detect and mitigate unwanted bias in machine learning models and datasets. With the toolkit, developers and data scientists can easily check for and mitigate bias at multiple points along the machine learning lifecycle, using the fairness metrics appropriate to their circumstances. It provides metrics to test for bias and algorithms to mitigate bias in datasets and models. The AI Fairness 360 interactive experience provides a gentle introduction to the concepts and capabilities. Recently, AIF360 also announced compatibility with scikit-learn and an interface for R users.
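To make that workflow concrete, here is a minimal sketch of the typical AIF360 pattern: compute a fairness metric on a dataset, apply a mitigation algorithm, and check the metric again. The tiny synthetic dataset, column names, and group definitions below are illustrative assumptions, not part of the announcement.

```python
# A minimal sketch of the typical AIF360 workflow (pip install aif360).
# The tiny synthetic dataset and group definitions are illustrative only.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute (1 = privileged group).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.6, 0.7, 0.5, 0.4, 0.6, 0.3],
    "label": [1, 1, 0, 1, 0, 0, 1, 0],
})
dataset = BinaryLabelDataset(
    df=df, label_names=["label"], protected_attribute_names=["sex"],
    favorable_label=1, unfavorable_label=0,
)
privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Fairness metric on the raw data: disparate impact is the ratio of
# favorable-outcome rates (unprivileged / privileged); 1.0 is ideal.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Disparate impact before:", metric.disparate_impact())

# Mitigate by reweighing instances to balance the groups, then re-check.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)
metric_transf = BinaryLabelDatasetMetric(
    dataset_transf, unprivileged_groups=unprivileged,
    privileged_groups=privileged)
print("Disparate impact after: ", metric_transf.disparate_impact())
```

Reweighing is one of AIF360's pre-processing mitigations; its in-processing and post-processing algorithms follow a similar pattern.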

Adversarial Robustness Toolbox

The Adversarial Robustness Toolbox (ART) is a Python library for machine learning security. ART provides tools that enable developers and researchers to evaluate, defend, and verify machine learning models and applications against the adversarial threats of evasion, poisoning, extraction, and inference. ART supports all popular machine learning frameworks (TensorFlow, Keras, PyTorch, MXNet, scikit-learn, XGBoost, LightGBM, CatBoost, GPy, etc.), all data types (images, tables, audio, video, etc.), and machine learning tasks (classification, object detection, generation, certification, etc.).
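To illustrate the library's shape, here is a minimal sketch of an evasion attack: train an ordinary scikit-learn classifier, wrap it in an ART estimator, and craft adversarial test inputs with the Fast Gradient Method. The dataset, model, and eps value are illustrative choices, not part of the announcement.

```python
# A minimal sketch of an ART evasion attack
# (pip install adversarial-robustness-toolbox).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

x, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.3, random_state=0)

# Train an ordinary scikit-learn model, then wrap it for ART.
model = SVC(C=1.0, kernel="linear")
model.fit(x_train, y_train)
classifier = SklearnClassifier(model=model, clip_values=(x.min(), x.max()))

# Accuracy on clean test data.
clean_acc = np.mean(classifier.predict(x_test).argmax(axis=1) == y_test)

# Craft adversarial examples with the Fast Gradient Method (an evasion
# attack); eps bounds the perturbation and is an illustrative choice here.
attack = FastGradientMethod(estimator=classifier, eps=0.3)
x_test_adv = attack.generate(x=x_test)
adv_acc = np.mean(classifier.predict(x_test_adv).argmax(axis=1) == y_test)

print(f"Accuracy on clean examples:       {clean_acc:.2f}")
print(f"Accuracy on adversarial examples: {adv_acc:.2f}")
```

The same generate-and-evaluate pattern applies to ART's other attack classes, and its defenses (such as adversarial training) wrap estimators in a comparable way.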

AI Explainability 360 Toolkit

The AI Explainability 360 (AIX360) Toolkit is a comprehensive open source toolkit of diverse algorithms, code, guides, tutorials, and demos that support the interpretability and explainability of machine learning models. The AI Explainability 360 interactive experience provides a gentle introduction to the concepts and capabilities by walking through an example use case for different consumer personas.
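As a small taste of the toolkit's API, here is a minimal sketch using one of its algorithms, ProtoDash, which explains a dataset by selecting a handful of prototypical samples with importance weights. The synthetic data and the choice of m are illustrative assumptions.

```python
# A minimal sketch using AIX360's ProtoDash explainer (pip install aix360).
# ProtoDash summarizes a dataset by selecting m prototypical samples with
# importance weights. The synthetic data below is illustrative only.
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer

rng = np.random.default_rng(0)
# Toy data: two well-separated clusters of 2-D points.
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)),
               rng.normal(5.0, 1.0, size=(50, 2))])

explainer = ProtodashExplainer()
# Select m=4 prototypes from X that best represent X itself;
# W holds (unnormalized) importance weights, S the chosen row indices.
W, S, _ = explainer.explain(X, X, m=4)

for weight, idx in sorted(zip(W, S), reverse=True):
    print(f"prototype row {idx}: weight {weight:.3f}, point {X[idx]}")
```

AIX360's other explainers, such as rule-based models and contrastive explanations, target different consumer personas; the toolkit's guides help match an algorithm to a persona.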

See IBM’s full announcement of the donation here. Since the donation, the three Trusted AI projects have been approved by the LF AI Technical Advisory Council (TAC) and formally moved into the LF AI Foundation, complete with all the legal formalities, new logos, websites, and GitHub locations.

Dr. Ibrahim Haddad, Executive Director of LF AI, said: “We are very pleased to welcome these Trusted AI open source projects to LF AI. For the past year, the Trusted AI Committee at LF AI has been actively working on building its community and defining a set of principles that AI software is expected to honor. With the addition of these three tools, our efforts now have a venue to codify these principles and provide an opportunity to collaborate on the code with the global community under a vendor-neutral and open governance. We look forward to supporting these projects and helping them to thrive and grow their community of adopters and contributors.” 

LF AI supports projects through a wide range of benefits, and the first step is joining as an Incubation Project. LF AI will provide neutral, open governance for these Trusted AI projects to help foster their growth.

“At IBM, at our core, we believe in the fair and equitable use of technology and this is especially true of artificial intelligence. Developers must ensure applications are built with trust and transparency,” said Todd Moore, Vice President of Open Technology at IBM. “Our AI Fairness toolkits and Watson OpenScale are enabling developers and data scientists to address bias, and explain the behavior of our models. By open sourcing IBM’s Adversarial Robustness 360, AI Fairness 360, and AI Explainability 360 toolkits through The LF AI Foundation, we all can advance the technology, in an open governance community and encourage the best and brightest to collaborate on one of the most pressing issues in this technological area.”

Trusted AI Video Series

To learn more about the Trusted AI projects, take a look at the 7-episode video series on YouTube, created by Anessa Petteruti (Computer Science senior at Brown University) in collaboration with LF AI:

EPISODE 1: Introducing Trusting AI: Unlocking the Black Box with Animesh Singh 

Artificial intelligence unlocks countless possibilities for the human race. But is there a darker side to the technology? In the opening episode of Trusting AI: Unlocking the Black Box, Animesh Singh, Chief Architect of the Artificial Intelligence and Machine Learning OSS Platform at IBM, introduces IBM and the Linux Foundation’s Trusted AI efforts.

EPISODE 2: Trusted AI Research with Aleksandra Mojsilović

Learn specifically about research conducted for Trusted AI in this interview with the Foundations of Trusted AI Lead, Aleksandra Mojsilović. Dr. Mojsilović, an IBM Fellow, also co-directs IBM’s Science for Social Good.

EPISODE 3: AI Explainability and Factsheets with Michael Hind

Michael Hind, Distinguished Research Staff Member at IBM Research AI, discusses one of Trusted AI’s toolboxes, AI Explainability 360, as well as AI Factsheets 360 in depth.

EPISODE 4: Ethical AI in Higher Education with Michael Littman

Universities all over the world have made efforts to incorporate ethical teachings into artificial intelligence and machine learning courses. In Providence, Rhode Island, Professor Michael Littman of Brown University discusses AI ethics in higher education and the computer science department’s Responsible CS program.

EPISODE 5: Adversarial Robustness with Mathieu Sinn

Artificial intelligence presents numerous benefits as well as security vulnerabilities. Mathieu Sinn, Senior Technical Staff Member and Manager of AI Security and Privacy at IBM, delves into Trusted AI’s Adversarial Robustness Toolbox and the program’s efforts to combat cyber attacks in AI.

EPISODE 6: Open Source and the Linux Foundation with Ibrahim Haddad

Ibrahim Haddad, Vice President of Strategic Programs at the Linux Foundation and Executive Director of LF AI, discusses the importance of open source in software development, specifically artificial intelligence.

EPISODE 7: Trusted AI in Production and MLOps with Tommy Li and Andrew Butler

How can developers use Trusted AI in their own projects? Find out from IBM software engineers Tommy Li and Andrew Butler, who guide you through MLOps and using Trusted AI in production.

Get Involved

Check out the Trusted AI GitHub for Getting Started guides to start working with these projects today. Learn more about the Trusted AI toolkits on their websites (AI Fairness 360, AI Explainability 360, Adversarial Robustness Toolbox), and be sure to join the Trusted-AI-360-Announce and Trusted-AI-360-Technical-Discuss mailing lists to join the community and stay connected on the latest updates.

A warm welcome to these Trusted AI projects! We look forward to the projects’ continued growth and success as part of the LF AI Foundation. To learn about how to host an open source project with us, visit the LF AI website.


Author

  • Andrew Bringaze

Andrew Bringaze is the senior developer for The Linux Foundation. With over 10 years of experience, his focus is on open source code, WordPress, React, and site security.