LF AI Welcomes ONNX, Ecosystem for Interoperable AI Models, as Graduate Project

By Blog, Press Release

Active contributors to the ONNX code base include over 30 blue chip AI companies, among them Amazon, Facebook, Intel®, Microsoft and NVIDIA

San Francisco – November 14, 2019 – The LF AI Foundation, the organization building an ecosystem to sustain open source innovation in artificial intelligence (AI), machine learning (ML) and deep learning (DL), is announcing today that the Open Neural Network eXchange (ONNX) is its newest graduate-level project. Moving ONNX under the umbrella of LF AI governance and management is viewed as a key milestone in establishing ONNX as a vendor-neutral open format standard. 

ONNX is an open format used to represent machine learning and deep learning models. An ecosystem of products supporting ONNX provides capabilities like model creation and export, visualization, optimization, and acceleration. Among its many advantages, ONNX provides portability, allowing AI developers to more easily move AI models between tools that are part of trusted AI/ML/DL workflows. 

“We are pleased to welcome ONNX to the LF AI Foundation. We see ONNX as a key project in the continued growth of open source AI,” said Mazin Gilbert, Chair of the LF AI Foundation Governing Board. “We are committed to expanding open source AI, and to supporting trusted, transparent and accessible community development.”

“ONNX is not just a spec that companies endorse, it’s already being actively implemented in their products. This is because ONNX is an open format and is committed to developing and supporting a wide choice of frameworks and platforms. Joining the LF AI shows a determination to continue on this path, and will help accelerate technical development and connections with the wider open source AI community around the world,” said Dr. Ibrahim Haddad, Executive Director of the LF AI Foundation. “We’re happy to provide ONNX an independent home and work with the community to boost its profile as a vendor neutral standard. ONNX will retain its existing OSI-approved open source license, its governance structure and its established development practices. We are committed to providing ONNX with a host of supporting services especially in the area of marketing and community events to extend its reach and adoption.”

The ONNX community was established in 2017 by Facebook and Microsoft to create an open ecosystem for interchangeable models. Support for ONNX has grown to over 30 registered companies, plus many more end user companies, around the world. ONNX is a mature project with 138 contributors, over 7K stars and over 1K forks. 

“Huawei welcomes the ONNX project joining LF AI. As one of the initial premier members of the LF AI Foundation as well as an early member of the ONNX community, Huawei has been fully supportive throughout the entire process,” said XiaoLi Jiang, GM of Open Source Ecosystem in Huawei, “ONNX as model exchange format standard is a great addition for interoperability which enables broader collaboration for LF AI. Huawei will continue to increase our open source investments and collaborate with the open source community to advance the application and the development in Artificial Intelligence.”

“IBM is delighted to see the ONNX community joining the Linux Foundation as it will encourage accelerated development in the space. ONNX allows the transfer of models between deep learning frameworks and simplifies the deployment of trained models to inference servers in hybrid cloud environments via Watson Machine Learning. In addition, ONNX provides a standardized format to run models on both Watson Machine Learning Accelerator and IBM Z that benefit from IBM Power, enabling clients to infuse deep learning insights into transactions as they occur,” said Steven Eliuk, IBM Vice President, Deep Learning & Governance Automation, IBM Global Chief Data Office.

“Intel is committed to support ONNX in our software products such as nGraph, Intel® Distribution of the OpenVINO™ Toolkit, and PLAIDML to provide acceleration on existing Intel hardware platforms and upcoming deep learning ASICs,” said Dr. Harry Kim, Head of AI Software Product Management.  “Developers from Intel have already contributed to defining quantization specs and large model support for ONNX, and we expect even more contributions to the project as ONNX joins the Linux Foundation.”

“We’re thrilled that the Linux Foundation has selected ONNX to expand open source AI capabilities for developers,” said Eric Boyd, corporate vice president, AI Platform, Microsoft Corp. “Microsoft has been deeply committed to open source and continues to invest in tools and platforms that empower developers to be more productive. We use ONNX across our product portfolio – such as Azure AI – to deliver improved experiences to consumers and developers alike.”

“NVIDIA GPUs and TensorRT offer a programmable, high-performance platform for inference. The ONNX open format complements these and benefits from wide adoption within the AI ecosystem,” said Rob Ober, chief platform architect for Data Center Products at NVIDIA. “As an active contributor to ONNX, we’re glad to see the project join the Linux Foundation and look forward to industry-wide collaboration and innovation.”

For more information on getting involved immediately with ONNX, please see the following resources:

ONNX GitHub: https://github.com/onnx/
Discussions: https://gitter.im/onnx/
DockerHub: https://hub.docker.com/r/onnx/onnx-ecosystem
Main Site: https://onnx.ai
Twitter: https://twitter.com/onnxai
Facebook: https://facebook.com/onnxai

Companies that have already implemented ONNX in products include Alibaba, AMD, Apple, ARM, AWS, Baidu, BITMAIN, CEVA, Facebook, Graphcore, Habana, HP, Huawei, IBM, Idein, Intel, Mathworks, Mediatek, Microsoft, NVIDIA, NXP, Oath, Preferred Networks, Qualcomm, SAS, Skymizer, Synopsys, Tencent, Xiaomi and Unity.

LF AI Resources

About LF AI Foundation

The LF AI Foundation, a Linux Foundation project, accelerates and sustains the growth of Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) open source projects. Backed by many of the world’s largest technology leaders, LF AI is a neutral space for harmonization and ecosystem engagement to advance AI, ML and DL innovation. To get involved with the LF AI Foundation, please visit https://lfai.foundation.

About Linux Foundation 

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation projects like Linux, Kubernetes, Node.js and more are considered critical to the development of the world’s most important infrastructure. Its development methodology leverages established best practices and addresses the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

# # #

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.

Technical Presentations from Angel Meetup – Oct 13 – Shenzhen


The Angel technical meetup was held October 13 in Shenzhen. It was an excellent afternoon of presentations, technical discussions and networking. Thank you to everyone who participated!

Presentations and a write-up from the meetup are available here:

Angel is an LF AI Foundation incubation project that offers a high-performance distributed machine learning platform based on the Parameter Server philosophy. It is tuned for performance with big data, has a wide range of applicability and stability, and demonstrates an increasing advantage in handling higher-dimensional models.

To learn more on Angel, please visit: https://lfai.foundation/projects/angel-ml/


NYU Joins LF AI as Associate Member


The LF AI Foundation welcomes New York University (NYU) joining the LF AI as an Associate member. 

NYU is a well-known private research university, founded in 1831, and based in New York City. NYU also has degree-granting campuses in Abu Dhabi and Shanghai, and academic centers around the world.

The university has over 50,000 students, split roughly evenly between undergraduate and graduate programs.

Prof. Shivendra Panwar, Director of the NY State Center for Advanced Technology in Telecommunications (CATT), will be the representative to LF AI. We are looking forward to collaborating with NYU on open source AI tools and methodologies, including Acumos and more.


Apache NiFi ↔ AI Fairness 360 (AIF360) Integration – Trusted AI Architecture Development Report 1


By Romeo Kienzler, Chief Data Scientist, IBM Center for Open Source Data and AI Technologies

We’re currently in the process of integrating the Trusted AI Toolkits into Acumos AI. There are of course many possibilities in doing so – therefore we’ve started the journey with an architecture development method. Maybe not many of you are familiar with TOGAF – The Open Group Architecture Framework – but we’re making use of it here in order to make sure the architectural choices of integrating the Trusted AI Toolkits are sound. As you can see below in Figure 1, the development of an architecture is an iterative process. Since we’ve completed one iteration, we want to give you an update on the actual development of those individual process steps.

Figure 1 – The Open Group Architecture Framework (TOGAF)


It is agreed by many AI practitioners that Trusted AI is a key property for large-scale adoption of AI in the enterprise. The set of questions illustrated in Figure 2, taken from Todd Moore’s (IBM VP, Open Technology) keynote at the Linux Foundation conference in Lyon in 2019, needs to be asked about every AI model, apart from its generic performance metrics like accuracy, F1 score or area under ROC.

Figure 2: Questions to be asked once AI model training is done

Since open source toolkits that answer these questions already exist within the Linux Foundation AI, the task is to find out how they can be used most efficiently.

Architecture Vision

To answer the questions above, the following checks and mitigations must be performed on an AI model deployment candidate:

  • Adversarial Robustness Assessment and Mitigation
  • Bias Detection and Mitigation
  • Explainability
  • Accountability (Repeatability / Data and Model Lineage)

A set of open source tools exists for each of those tasks. In the following we want to identify the correct tools and the correct ways of integrating them to maximize their positive impact. The overall value proposition of this architecture is therefore the removal of major roadblocks to bringing AI models into production, by generating, qualifying and quantifying trust, affecting stakeholders like regulators, auditors and business representatives. Ease of use and adoption rate are the main drivers for transformation and can be seen as the main KPIs here.

Therefore, the main risk is complexity. The higher the complexity, the more adoption rate declines.

Information Systems Architecture

In this current iteration we focus only on bias detection using the AIF360 toolkit. Bias mitigation, adversarial robustness, explainability and accountability will be covered in future versions of this document. Although such a component can be deployed in many different information systems architectures, we’ve identified only two generic integration scenarios, which we call data driven integration and model driven integration.

Data driven integration

In data driven integration, the AI model takes part only insofar as it generates predictions, which are stored to a database. Using the model’s predictions and the ground truth, bias metrics can be computed. This works in exactly the same way as ordinary model validation on a test set that hasn’t been used for training the model: the test set is applied to the model, and its performance is assessed, using a metric algorithm, on the predictions the model generates for the test set. The same rule applies here, but instead of an ordinary metric algorithm, the algorithms for bias measurement are used. This process is illustrated in Figure 3.

Figure 3: Data driven integration uses predictions created by the model and ground truth data to compute bias metrics
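The metric step of this scenario can be made concrete with a small sketch. AIF360 ships such group fairness metrics ready-made (for example via its ClassificationMetric class); the hand-rolled functions and toy data below are our own illustration of the underlying arithmetic, not AIF360’s API.

```python
# Hand-rolled illustration of two group fairness metrics that AIF360
# provides ready-made; function names and data here are our own.

def selection_rate(preds, protected, group):
    """Fraction of positive predictions within one protected group."""
    group_preds = [p for p, g in zip(preds, protected) if g == group]
    return sum(group_preds) / len(group_preds)

def statistical_parity_difference(preds, protected):
    """P(pred=1 | unprivileged) - P(pred=1 | privileged); 0.0 means no gap."""
    return selection_rate(preds, protected, 0) - selection_rate(preds, protected, 1)

def disparate_impact(preds, protected):
    """Ratio of the two selection rates; values below ~0.8 commonly flag bias."""
    return selection_rate(preds, protected, 0) / selection_rate(preds, protected, 1)

# Stored model predictions and group membership (1 = privileged group),
# exactly the inputs the database holds in the data driven scenario.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
protected = [1, 1, 1, 1, 0, 0, 0, 0]

print(statistical_parity_difference(predictions, protected))  # -0.5
```

The stored ground truth enters in the same way for metrics that compare error rates between groups, such as equal opportunity difference.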

Model driven integration

On the other hand, a second integration scenario might be feasible as well, which we call “model driven integration”. In this case, no data is provided to the bias detection library, only the model and a configuration object which contains information on the protected attributes, the label columns and the schema. The model then has to be executed first in an appropriate runtime using artificial data. Whether this is a feasible way of integration will be determined during the next iterations of this project. Figure 4 illustrates this.

Figure 4: The model is created using data but only the standalone model is assessed without any further need for data
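A hypothetical sketch of what such a configuration object and the artificial data generation could look like; the key names and value conventions below are illustrative assumptions, since the actual interface is still to be determined.

```python
# Hypothetical configuration object for model driven integration and a
# generator for artificial rows that conform to its schema. All names
# here are illustrative assumptions, not a finalized interface.
import random

config = {
    "protected_attributes": ["gender"],
    "label_column": "approved",
    "schema": {
        "gender": [0, 1],             # categorical: list of allowed values
        "income": (20_000, 150_000),  # numeric: (low, high) range
        "approved": [0, 1],           # the label column
    },
}

def generate_rows(config, n, seed=42):
    """Draw n artificial rows matching the configured schema."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        row = {}
        for column, spec in config["schema"].items():
            if isinstance(spec, tuple):   # numeric range
                row[column] = rng.uniform(*spec)
            else:                         # categorical values
                row[column] = rng.choice(spec)
        rows.append(row)
    return rows

# The artificial rows would be fed through the model, and bias metrics
# computed on its predictions per protected attribute value.
rows = generate_rows(config, 100)
```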

Hybrid integration

Since model driven integration is not yet confirmed to work, we finally propose a hybrid integration, as illustrated in Figure 5, where the model is executed in an appropriate environment but has access to the test data set. This is very similar to the data driven approach, with the difference that model predictions are not needed beforehand but are created on the fly during execution of the validation process. This might have advantages in the area of data lineage / accountability, or facilitate operational aspects.

Figure 5: Hybrid integration allows the bias detection component accessing the test data set

Technology Architecture

Among others, integration into Acumos AI is one of the most important aspects of this project. Although other integration points exist, we first started with an evaluation of integration into Apache Nifi, since Nifi will be part of the next release of Acumos AI and will play a central role in data integration tasks. Therefore, a POC was conducted on integrating the LF AI AIF360 (AI Fairness 360) toolkit as a custom processor into Apache Nifi using the data driven approach.

Opportunities and Solutions

For simplicity we started with the ExecuteStreamCommand processor within Nifi, which allows wrapping any executable that reads from STDIN and writes to STDOUT into a Nifi processor.
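As a minimal sketch of the STDIN-to-STDOUT shape such a wrapped executable takes: the CSV column names below are hypothetical stand-ins, and the real processor computes AIF360 bias metrics rather than this toy count.

```python
#!/usr/bin/env python3
# Minimal shape of an executable that ExecuteStreamCommand can wrap:
# read the flowfile content from STDIN, write the result to STDOUT.
# The CSV column names are hypothetical stand-ins for illustration.
import csv
import io
import sys

def process(text):
    """Count positive predictions per protected group in a small CSV."""
    counts = {}
    for row in csv.DictReader(io.StringIO(text)):
        if row["prediction"] == "1":
            counts[row["group"]] = counts.get(row["group"], 0) + 1
    return "\n".join(f"{group},{n}" for group, n in sorted(counts.items()))

if __name__ == "__main__":
    sys.stdout.write(process(sys.stdin.read()))
```

Nifi feeds the incoming flowfile to the wrapped command on STDIN, so the same pattern holds for a script that emits bias metrics instead of this toy count.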

Implementation Details

Environment Setup

We’ve used an Ubuntu 18.04 Server LTS system and installed Apache Nifi on top of it. The following script installs Apache Nifi and takes care of the necessary configuration:

apt update
apt install -y openjdk-8-jdk unzip git
wget http://mirror.easyname.ch/apache/nifi/1.9.2/nifi-1.9.2-bin.zip
unzip nifi-1.9.2-bin.zip
apt install -y python3-pip
pip3 install --upgrade pip
apt install -y python3-venv
python3 -m venv venv
source venv/bin/activate
pip3 install aif360
git clone https://github.com/romeokienzler/lfai_nifi.git
./nifi-1.9.2/bin/nifi.sh start


Now it’s time to test. In a browser, Nifi can be accessed on port 8080. The file https://github.com/romeokienzler/lfai_nifi/blob/master/AIF360.xml contains the template to create the flow. The “fair” and “unfair” test data can now be copied into the “in” folder, where they will be consumed by the Nifi flow. This flow is illustrated in Figure 6.

Figure 6: The sample Nifi flow which uses the bias detection processor

After the flow has run, the bias metrics are attached as attributes to the flowfile as shown in Figure 7.

Figure 7: Bias metrics are attached as attributes to the flowfile

Future Work

In the next steps we’ll add bias mitigation as well to this prototype. Then we’ll evaluate the other integration scenarios mentioned and identify the best way of integrating them to Acumos AI. Finally, we’ll integrate the remaining toolkits for adversarial robustness, explainability and accountability into Acumos AI.


We’ve shown that, using a custom processor, Trusted AI toolkits can be integrated into Apache Nifi and Acumos AI.

Trusted AI Committee Established by LF AI Foundation


Can We Trust AI?

AI is advancing rapidly within the enterprise, with more than half of organizations launching at least one AI deployment in production. As organizations work to improve the performance of AI, the teams building and deploying it also have to grapple with the challenge of determining whether the AI models can be trusted. Implementing trusted AI processes requires assessing to what degree the AI models are fair, secure and explainable, and have well-documented lineage.

Collection of logos for the LF AI member organizations who are part of the Trusted AI Committee. See full list on the LF AI Wiki.

LF AI Trusted AI Committee

LF AI is an umbrella foundation of the Linux Foundation that supports open source innovation in artificial intelligence, machine learning, and deep learning. To build trust in the adoption of AI, the Trusted AI Committee has been established as part of Linux Foundation AI (LF AI). The mission of the committee is to:

  • (a) define policies, guidelines, tooling and use cases by industry to create responsible and trusted AI;
  • (b) survey and contact current open source trusted AI related projects to join LF AI efforts;
  • (c) create a badging or certification process for open source projects that meet the trusted AI policies/guidelines defined by LF AI; and
  • (d) create a document that describes the basic concepts and definitions in relation to trusted AI and aims to standardize the vocabulary/terminology.

The Trusted AI Committee has three chairs, spread across regions (Asia, Europe and U.S.A). 

Please refer to the wiki for more details.

Formation of Working Groups

To begin this work, the Trusted AI Committee has established two working groups made up of a diverse range of committee members from multiple LF AI member organizations around the world. The two working groups are: (1) the Principles Working Group, and (2) the Use Cases Working Group. Both working groups recognize the importance of diversity in the voices that contribute to solving problems in this space, and will work to increase the diversity of contributors while maintaining a balance between Europe, Asia and America.

Principles of Trusted AI

The Principles Working Group (PWG) is creating an initial whitepaper that surveys a wide range of prior work and will propose practical guidelines. The PWG has set ambitious goals that will inform the work of the Use Cases Working Group. First, the PWG will define a set of baseline definitions for trusted AI. To inform this, they will collect existing reference materials, analyze the materials according to an appropriate methodology, identify a set of common principles, and propose guidelines for any AI open source project – that can be iteratively refined as principles are put into practice via operational guidelines. They will then identify tools and open source libraries that can be used to implement these common principles. They will discuss and document the relevance of self-certification and audit programs as needed to ensure trust in open AI tools and libraries. 

Use Cases by Project, Industry, and Technology

The Use Cases Working Group (UWG) is creating code for specific industry applications of AI (use cases) that can be assessed using the guidelines developed by the PWG, and will provide feedback to drive updates. This working group aims to identify open source trusted AI tools from member and non-member companies. The distinction of use cases by industry is imperative for adoption, so the group seeks to identify and implement industry use cases for the financial industry, the automotive industry, and others, as well as use cases that outline technical integration between open source projects, e.g. Acumos and AIF360. Next, the UWG will work to create technical guidelines, integration and best practices for trusted ML functions which can be used in the context of MLOps. As necessary, the UWG will identify and implement integration points between external projects.

Future Goals of the Working Groups

After achieving as many of these goals as possible, the Use Cases Working Group will define the set of initial projects to drive the integration work of additional projects. The UWG will build a team of core contributors with an emphasis on maintaining collaboration between Europe, Asia and America. This team will work toward the creation of best practices and a reference architecture for MLOps in the context of trusted AI, the creation of Kubeflow Pipelines for Trusted AI Committee projects to be consumed within ML platforms, and Apache Nifi pipelines with trusted AI projects for Acumos consumption. The UWG will also define requirements around lineage tracking, metadata collection, etc. Lastly, with so many telecommunications companies under the LF AI umbrella, the working group plans to dive into telco use cases for trustworthy AI.

LF AI Welcomes Adlik, Toolkit for Accelerating Deep Learning Inference, as Newest Incubation Project


Contributed by ZTE, Adlik allows deep learning models to be deployed to different platforms with high performance and flexibility

San Francisco – October 21, 2019 – The LF AI Foundation, the organization building an ecosystem to sustain open source innovation in artificial intelligence (AI), machine learning (ML) and deep learning (DL), today welcomes Adlik, joining the LF AI as an incubation project. Adlik comes from LF AI Premier member ZTE, which has committed to hosting Adlik in a neutral environment with an open governance model that encourages contributions and adoption.

“We are extremely pleased to welcome Adlik to the LF AI. Today’s announcement is an important contribution to the growing ecosystem of open source AI,” said Dr. Ibrahim Haddad, Executive Director of LF AI. “Adlik optimizes models developed in widely used frameworks like Tensorflow, Keras and Caffe and has the potential for wide impact in the AI space. Adlik is poised to help the overall growth of open source AI, and we look forward to supporting Adlik’s technology development and user adoption around the world.”

The goal of Adlik is to accelerate the deep learning inference process in both cloud and embedded environments. Adlik consists of two sub-projects: a model compiler and a serving platform. The model compiler supports several optimization technologies like pruning, quantization and structural compression to optimize models developed in major frameworks like Tensorflow, Keras and Caffe, so that they can run with lower latency and higher computing efficiency. The serving platform provides deep learning models with an optimized runtime based on the deployment environment, such as CPU, GPU or FPGA. Starting from a deep learning model, users of Adlik can optimize it with the model compiler and then deploy it using the serving platform.
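To give a feel for one of these optimizations, quantization replaces floating point weights with small integers plus a scale factor, trading a little precision for lower memory traffic and cheaper arithmetic. The sketch below is our own illustration of symmetric linear quantization, not Adlik’s actual compiler interface.

```python
# Illustrative sketch of symmetric linear quantization, the idea behind
# one optimization Adlik's model compiler supports; not Adlik's API.

def quantize(weights, num_bits=8):
    """Map float weights onto signed integers with a shared scale factor."""
    qmax = 2 ** (num_bits - 1) - 1           # 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    """Recover approximate float weights from the integer representation."""
    return [q * scale for q in q_weights]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize(weights)    # q = [50, -127, 2, 100]
approx = dequantize(q, scale)   # each entry within scale/2 of the original
```

Stored as 8-bit integers, the weights take a quarter of the space of float32, and inference kernels can work in integer arithmetic, applying the scale factor at the end.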

“We are very pleased to share knowledge and explore deploying deep learning technologies together. Adlik is a tool for models, and it will support more and more training frameworks and model optimization algorithms in the near future,” said Wei Meng, Director of Standard and Open Source Planning, Technology Planning Dept., ZTE Corporation. “We are happy to see a good ecosystem based on Adlik collaborate with other projects. Any developers are welcome to contribute to Adlik. Let’s all work together!” 

Release 1 of Adlik is expected before the end of the year with the following main features:

  • Support for optimizations like quantization and pruning
  • Support for compilation of models from a wider range of frameworks
  • Support for customization of the runtime and service core
  • Support for an FPGA runtime
  • Support for multiple instances for serving models

For more information on getting involved immediately with Adlik, please see the following Adlik resources:


A Guide to Hosting Your Project in LF AI


Building an open source software project and wanting to gain traction? Just providing a software repo, mailing list, and a website is not enough. A much wider set of services, including scalable and neutral governance, is critical for increasing adoption of open source projects. 

The LF AI Foundation (LF AI) provides a wide range of services for its hosted projects with a focus on increasing development and innovation in the open source AI ecosystem. By being part of LF AI, a hosted project gets access to program management services, event management services, marketing services and programs, PR support, legal services, and staff eager to help grow your project. 

All of these services act as enablers to propel your project further, providing solid ground on which organizations and interested individuals will feel compelled to join the project and become part of its community of users and/or contributors, rather than joining other projects. 

Why host a project under LF AI?

  1. You believe your project will gain wider community adoption if it’s no longer solely affiliated with a corporate partner
  2. Several companies are working on very similar projects, and transferring management to an open source foundation would unite people under a common project
  3. There are legal or administrative tasks essential to the health of your project, and it’s not clear which current participant should own these tasks. These types of needs typically only arise after a project has already become reasonably established, with an active contributor community and often one or more dedicated corporate partners.

How are projects on-boarded into LF AI?

Projects are on-boarded and progress pursuant to the LF AI Foundation’s Project Process and Lifecycle Document.

LF AI hosted projects fall into one of three stages: Incubation, Graduation, or Emeritus.

The five core requirements for a project to qualify as an Incubation project are:

  • Use an approved OSI open source license
  • Be supported by an LF AI member
  • Fit within the mission and scope of LF AI
  • Allow neutral ownership of project assets such as a trademark, domain or GitHub account (the community can define rules and manage them)
  • Have a neutral governance that allows anyone to participate in the technical community, whether or not a financial member or supporter of the project
In addition to the Incubation requirements, a Graduation project must meet these requirements:

  • Have a healthy number of committers from at least two organizations
  • Have achieved and maintained a Core Infrastructure Initiative Best Practices Badge
  • Demonstrate a substantial and ongoing flow of commits and merged contributions
  • Document current project owners, and current and emeritus committers, in OWNERS.md and COMMITTERS.md files
  • Document the project’s governance (we help projects create a governance model that works for them, or simply help them document their existing governance)
Emeritus projects are projects which the maintainers feel have reached or are nearing end-of-life. Emeritus projects have contributed to the ecosystem, but are not necessarily recommended for modern development, as there may be more actively maintained choices.

Accepting Incubation projects into LF AI requires a positive vote of the Technical Advisory Council (TAC). Accepting Graduation projects requires a positive vote of both the TAC and the Governing Board.

How does Your Project Transition from Incubation to Graduation?

The TAC undertakes an annual review of all LF AI hosted projects to assess whether each Incubation stage project is making adequate progress towards the Graduation stage, and whether each Graduation stage project is maintaining progress to remain at Graduation level. 

The TAC then provides a set of recommendations for each project to improve and/or a recommendation to the LF AI Governing Board on moving a project across stages. 

Common Benefits to Incubation and Graduation Projects

  • Access to a larger community within the same ecosystem, leading to a larger pipeline of potential users and contributors
  • Validation from the Linux Foundation, a trusted source that hosts over 180 large-scale open source projects
  • Scalable and neutral governance accessible to all 
  • Neutral hosting of your project’s trademark and any related assets and accounts 
  • Marketing and awareness
  • Collaboration opportunities with other LF AI projects and broadly other Linux Foundation projects
  • Compliance scans with reports delivered to the projects’ mailing lists 
  • Infrastructure and IT enablement (specifics depend on each project and the hosting level)

Specific Benefits for Incubation Projects 

In addition to the above stated common benefits, Incubation projects enjoy these additional benefits:

  • Your project has the right to refer to itself as an “LF AI Foundation Incubation Project”
  • Appointment of an existing TAC member that will act as a sponsor of your project and provide recommendations regarding governance best practices
  • Access to LF AI booth space at various events for demo purposes  and for meeting the developer community, based on availability

Specific Benefits for Graduation Projects 

In addition to the above stated common benefits, Graduation projects enjoy these additional benefits:

  • Your project has the right to refer to itself as an “LF AI Graduation Project,” which signals to the market that your project has reached a high level of technical maturity with confidence in its readiness for deployment 
  • Projects designated as Graduation Projects by the Governing Board get a voting seat on the TAC
  • Graduation projects are eligible to request and receive funding support contingent on Governing Board approval
  • Priority access to LF AI booth space at various events for demo purposes and for meeting the developer community 
  • Graduation projects have a technical lead appointed to represent the project on the TAC

Join LF AI as a Project

We’re constantly looking for new projects to join our family. Please reach out to info@lfai.foundation if you’d like to discuss the prospect of your open source AI project joining LF AI as a hosted project.


The Institute for Ethical AI & Machine Learning Joins LF AI


The LF AI Foundation welcomes The Institute for Ethical AI & Machine Learning (IEAIML), joining the LF AI as an Associate member. 

IEAIML carries out research into processes and frameworks that help guide AI and ML development. The UK-based institute is led by cross-functional teams of volunteers including ML engineers, data scientists, industry experts, policy-makers and professors in STEM, Humanities and Social Sciences. 

LF AI is excited to gain IEAIML contributions to two core efforts in particular. The first is the ML Workflow effort, with the goal of defining a standardized ML pipeline that includes ethics management, and providing a reference implementation using LF AI hosted projects. This would encourage integration across LF AI projects and also help create harmonization across the stack. The second is supporting the Trusted AI committee, which focuses on translating various ethics guidelines and policies into tooling for fairness, robustness, explainability and lineage. 

In addition, LF AI will be collaborating with IEAIML on improving the LF AI Foundation Interactive Landscape based on surveys of open source ML projects that IEAIML has conducted and provided on GitHub.

“We are thrilled to contribute to the LF AI foundation which plays a key role in empowering the global ecosystem of developers to build and extend systems in a responsible, ethical and conscientious way,” said Alejandro Saucedo, Chief Scientist, The Institute for Ethical AI & Machine Learning. “LF AI provides a very important forum that will support the development and extension of professional responsibility frameworks, codes of ethics, standards and beyond.”

Supporting open source AI development is a key element of IEAIML’s mission, and one that aligns well with LF AI’s. IEAIML sees increasingly tough challenges around privacy, security and trust relating to the application of AI systems. Ethical frameworks and industry standards will play a critical role, and developers and decision-makers will need the right tools to ensure these are in place. IEAIML believes open source will play a key role in ensuring these tools and frameworks exist, which was a major factor in IEAIML’s decision to join LF AI.

IEAIML Resources

  • The 8 principles of responsible ML development – A high level set of principles to empower delivery teams to design, develop and operate machine learning systems in a professionally responsible way
  • The AI Procurement Framework – A practical set of templates (RFP, RFI) to empower industry stakeholders to assess and evaluate the maturity of machine learning systems based on IEAIML’s 8 principles
  • Production Machine Learning List – A list of open source libraries maintained by the community that the IEAIML community is looking forward to contributing to the LF AI ecosystem

Contact IEAIML directly and join the Ethical ML Network (BETA) here: https://ethical.institute/index.html#contact

LF AI Resources

LF AI Day – Paris Edition, Recap

By Blog
Nicolas Demassieux (SVP, Orange Labs Research) in his opening speech 

The LF AI Day – Paris was held September 16 at Orange Gardens, 44 Avenue de la République, Paris-Châtillons, France. It was a fantastic day with presentations and a panel discussion from well-known organizations like Orange, NTT, Nokia, Ericsson, IBM, LF AI Foundation, and more.

LF AI Days are regional, one-day events hosted and organized by local members with support from LF AI and its hosted projects. These events are open to all and have no cost to attend. More information on LF AI events is available here.

Dr. Margriet Groenendijk from IBM discussing Trusted AI 


Nicolas Demassieux, SVP, Orange Labs Research, spoke of 3 challenges:

  1. The need to control “AI economics,” meaning ROI optimization and risk management when introducing AI models
  2. The need to speed up development of end-to-end AI tools and interoperability with enterprise data lakes
  3. The need to set up guidelines for trusted and fair AI

Masakatsu Fujiwara, Project Manager, NTT Network Technology Laboratories, talked about NTT’s view that the future of network management will be based on AI-driven autonomous maintenance loops.

Anwar Aftab, Director, Inventive Science, AT&T Labs, discussed how the future for network AI will be autonomous, contextual and predictive networks that drive new experiences at higher velocities.

Philippe Carré, Senior Specialist Open Source, Nokia Bell-Labs & CTO, covered how Nokia’s three priorities for AI operations are Security, Fault Management, and Configuration Management.

The Startups Panel Discussion, entitled “Barriers for AI development,” covered several types of barriers, with participation from François Tillerot, Intrapreneur-CMO, Orange AI Marketplace; Rahul Chakkara, Co-Founder, Manas AI; Laurent Depersin, Research & Innovation Home Lab Director, Interdigital; Marion Carré, CEO, Ask Mona; and Sana Ben Jemaa, Project Manager Radio & AI, Orange Labs Networks.

  • Why introduce AI – It can be difficult to describe an AI use case, translate it into business benefit, and demonstrate a clear ROI.
    • Supporting customers in their strategy to introduce AI and build ROI can address this issue
  • Project technical or HR (skills) issues – Multiple environments and tools constrain end-to-end solutions, companies lack talent and skills, and deployment and scaling are difficult.
    • Open source solutions, in particular LF AI and the Acumos AI project, can facilitate mutualized approaches and multi-skill collaboration to work around these issues
  • Readiness (technical or mindset) – The data supply chain is not ready, and there is a lack of trust.
    • Trusted AI approaches and better awareness of AI capabilities (including avoiding “overselling” AI) are potential solutions to tackle this.

The presentations from the day have been made available:

Startup Panel discussing barriers for AI development

Come join us next time, and join the open source AI conversation!

Announcing the First Ever LF AI Summit

By Blog

The LF AI Foundation is proud to announce our first LF AI Summit. This is an important step forward in our support for open source AI development around the world. The LF AI Summit will facilitate presentations, discussions and networking among an incredible group of leading AI specialists and organizations like AT&T, Amazon, Capital One, Google, IBM, and more. The 3-day event will bring together individuals and organizations to share information and best practices and to help make important decisions in areas of privacy, ethics, training and much more.

The LF AI Summit will be co-located with the Open Source Summit EU, being held in Lyon, France, October 28-30. Attendees register for the Open Source Summit EU. There is no extra fee to attend the LF AI Summit.

Explore LF AI Summit Agenda

The LF AI Summit takes place over three days and is devoted to a wide array of open source AI topics. There are 19 presentations scheduled, covering AI privacy, ethics, training pipelines, model versioning, and interworking with network automation and edge cloud technologies.

The LF AI Foundation itself will have a dedicated booth with demos of AI Marketplace, Acumos, plus LF AI hosted projects including Horovod, Angel, Pyro, and EDL. Please come by the booth and ask any questions. Find out how Acumos and the LF AI projects are expanding AI beyond specialists to all groups within companies. And find out how you can host your own project in LF AI!

Join us by registering to attend the Open Source Summit EU – Register Now! 
