LF AI Receives Best Contribution Award from Chinese Association for Artificial Intelligence (CAAI)

LF AI is pleased to receive the Best Contribution Award from the Chinese Association for Artificial Intelligence (CAAI). Communities are built on contribution, which gives this award special significance for us.

Thank you!

CAAI is devoted to academic activities in science and technology in the People’s Republic of China. It has 40 branches covering the fields of science and smart technology, and it is the only state-level science and technology organization in the field of artificial intelligence under the Ministry of Civil Affairs.

LF AI co-organized three successful national AI conferences with CAAI in 2019. We look forward to deeper involvement in 2020, both through projects with CAAI members and through collaboration on events.

Pictures from GAITC 2019, a CAAI event

LF AI Delivers Acumos AI Clio Release

Third Acumos AI release adds and extends Model Onboarding, Design Studio, Federation, and License Management

San Francisco – November 26, 2019 – The LF AI Foundation, the organization building an open AI community to drive open source innovation in artificial intelligence (AI), machine learning (ML) and deep learning (DL), today announced the third software release of the Acumos AI Project, codenamed Clio. Clio focuses on users’ first-hand feature requests: easier onboarding of AI models, design and management of support for pluggable frameworks, simpler federation with ONAP and O-RAN, license management, and more.

Acumos AI is a platform and open source framework that makes it easy to build, share, and deploy AI apps. Acumos standardizes the infrastructure stack and components required to run an out-of-the-box general AI environment. This frees data scientists and model trainers to focus on their core competencies and accelerates innovation.

“Clio is an important step forward in the democratization of AI, making open source AI technology available and easy to use, not limited to AI specialists and data scientists. The features introduced in this new Acumos release continue to pave the path toward AI accessibility, further extending users’ ability to build and share AI applications more easily,” said Dr. Ibrahim Haddad, Executive Director of the LF AI Foundation. “Clio marks a higher level of maturity; this is deployment grade. If you’re considering Acumos AI, we invite you to experiment with it and experience the live instance of the marketplace available at https://marketplace.acumos.org.”

Major highlights of the Clio release include:

  • ONAP Integration: Acumos AI/ML models can now be used in ONAP for automation and optimization of virtualized network elements.
  • O-RAN 5G RIC/xApp Deployment: Acumos AI/ML models can be deployed in the O-RAN RIC as xApp microservices to help optimize the virtualized RAN.
  • IPR/Cross-Company Licensing Entitlement: Companies can publish commercial models and negotiate license agreements for use of their models. A new License Usage Manager feature supports license entitlements for Acumos models.
  • Data/Training Pipelines: A new feature helps data scientists create, integrate, and deploy NiFi data pipelines.
  • Pluggable MLWB Framework: Enterprise design tools are now integrated through a pluggable framework.
  • Support for C/C++ Clients: Acumos can now onboard and deploy ML models written in C and C++ (in addition to Java, Python, and R).
  • Onboarding Spark/Java: Acumos can now take advantage of the popular Apache Spark cluster computing framework when onboarding and deploying ML models.
  • Model Ratings: An automated way of assigning initial ratings to models based on available metadata.
  • Enhanced Platform Adaptation on Kubernetes: Through integration of a Jenkins server as a workflow engine, Acumos now supports customization to meet operator needs in mission-critical business functions, such as how Acumos ML solutions are assessed for compliance with license and security policies, and where and how they are deployed for use under Kubernetes.

“LF AI members, including Amdocs, AT&T, Ericsson, Huawei, Nokia, Tech Mahindra, and others are contributing to the evolution of the platform to ease the onboarding and deployment of AI models,” said Dr. Ofer Hermoni, Chair of the LF AI Technical Advisory Council. “Acumos was the first hosted project at the Graduate level, with several ongoing efforts to integrate it with other LF AI projects. This highlights an increase in collaboration across hosted projects, serving our mission to reduce redundancy and increase cross-pollination and harmonization of efforts across projects.”

Full release notes available here: https://wiki.acumos.org/display/REL/Acumos_Clio_Release

Supporting Quotes

“Since the days of Claude Shannon, AT&T has played an active role in the evolution of AI. And Acumos is a shining example of how open collaboration and community engagement can accelerate this evolution faster than ever before,” said Mazin Gilbert, Vice President of Advanced Technology and Systems at AT&T. “We’re thrilled to work with industry leaders to make AI better, smarter and more accessible to everyone.”

Sachin Desai, LF AI Board Member and Ericsson VP of Global AI Accelerator, said, “Open source contributions and collaborations are critical aspects of Ericsson’s AI journey. We are excited to contribute to the Acumos Clio release in areas such as licensing and ONAP integration, supporting its commercialization and industrialization to address the needs of our customers. We look forward to building on this in collaboration with our customers and Acumos ecosystem partners.”

“Huawei congratulates the community on the successful delivery of the Acumos Clio release. The Acumos-ONAP integration work will enable model-driven intelligence to enhance network orchestration and automation. This deliverable can rapidly automate new services and support complete lifecycle management,” said Bill Ren, Chief Open Source Liaison Officer, Huawei.

“Nokia welcomes the third release of Acumos, Clio. The community has been very successful in continually improving Acumos over the past releases, and Clio shows the dedication of the Acumos community,” said Jonne Soininen, Head of Open Source Initiatives, Nokia. “Nokia believes Acumos occupies a unique place in the industry, and this successful cooperation shows the power of working together on AI in open source.”

“Orange is proud to contribute to the LF AI Acumos project, as we are convinced that this open source machine learning platform will cover important future AI needs of telecom operators in the field of 5G and virtualized networks,” said Emmanuel Lugagne Delpon, SVP Orange Labs Networks and Group CTO. “With Clio, a new step has been taken toward fully automated deployment of AI models in open source platforms such as ONAP and O-RAN, and we are eager to use these new features in our Orange AI Marketplace.”

“Tech Mahindra is excited to see Acumos reaching its third major milestone with the release of Clio. As part of our TechMNxt charter, co-innovation and co-creation in the field of AI and ML are among our key focus areas, and we will continue to work towards the enhancement of the Acumos platform,” said Sachin Saraf, SVP and VBU Head, CME Business, Tech Mahindra. “Our enterprise grade solutions are playing a key role in accelerating the adoption of Linux Foundation Artificial Intelligence open source projects.”

Acumos AI Key Statistics

  • 8 Projects
  • 62 Repositories
  • 1.27M+ lines of code
  • 4K+ Commits
  • 84 Contributors
  • 78 Reviewers

Acumos Resources

About LF AI Foundation

The LF AI Foundation, a Linux Foundation project, accelerates and sustains the growth of Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) open source projects. Backed by many of the world’s largest technology leaders, LF AI is a neutral space for harmonization and ecosystem engagement to advance AI, ML and DL innovation. To get involved with the LF AI Foundation, please visit https://lfai.foundation.

About The Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation’s projects, including Linux, Kubernetes, Node.js, and more, are critical to the world’s infrastructure. Its methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

# # #

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Pyro 1.0 Has Arrived!

Congratulations to the Pyro team! The 1.0 release is out and available for immediate download.

Pyro joined LF AI in January 2019 as an Incubation Project. Since then, Pyro has had an extremely productive year, producing eight releases following fast development cycles. 

Pyro is a universal probabilistic programming language (PPL) written in Python and supported by PyTorch on the backend. Pyro enables flexible and expressive deep probabilistic modeling, unifying the best of modern deep learning and Bayesian modeling. It is developed and maintained by Uber AI and community contributors. The Pyro project page is here.

According to Fritz Obermeyer, Pyro Engineering Lead and Senior Research Scientist at Uber, the objective of this release is to stabilize Pyro’s interface, making it safer to build high-level components on top of Pyro.

New Features

  • Many new normalizing flows and reorganized pyro.distributions.transforms module
  • New module pyro.contrib.timeseries for fast inference in temporal Gaussian Processes and state space models
  • Improved support for model serving, including a new Predictive utility for predicting from trained models and a PyroModule class that adds Pyro effects to a PyTorch nn.Module; together these utilities allow serving of JIT-compiled Pyro models (see the sketch below)
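
As a brief illustration of these serving utilities, here is a minimal sketch in the spirit of the Pyro documentation; the toy regression model and random inputs are hypothetical:

import torch
import pyro
import pyro.distributions as dist
from pyro.nn import PyroModule, PyroSample
from pyro.infer import Predictive

# PyroModule adds Pyro effects to an nn.Module: fixed parameters become priors
class BayesianRegression(PyroModule):
    def __init__(self, in_dim=1):
        super().__init__()
        self.linear = PyroModule[torch.nn.Linear](in_dim, 1)
        self.linear.weight = PyroSample(dist.Normal(0., 1.).expand([1, in_dim]).to_event(2))
        self.linear.bias = PyroSample(dist.Normal(0., 10.).expand([1]).to_event(1))

    def forward(self, x, y=None):
        mean = self.linear(x).squeeze(-1)
        with pyro.plate("data", x.shape[0]):
            pyro.sample("obs", dist.Normal(mean, 0.1), obs=y)
        return mean

model = BayesianRegression()
x = torch.randn(100, 1)  # hypothetical inputs
# With no guide or posterior samples, Predictive draws from the prior;
# after training, pass guide=... to draw posterior-predictive samples
predictive = Predictive(model, num_samples=200)
samples = predictive(x)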

Full Pyro 1.0 release notes are available here.

Community Stats at a Glance

Pyro has 5,700 Stars, 661 Forks, and is used by 130 other repos. A wide range of academic institutions are using Pyro including Stanford, Harvard, MIT, UPenn, Oxford, Florida State University, University of British Columbia (UBC), New York University (NYU), University of Copenhagen, Columbia, National University of Singapore (NUS), and more. 

Feedback 

We’re always looking to learn who is using Pyro and how you are using it. Please reach out to us at forum.pyro.ai! We’d like to know about your requirements, get your feedback and improve Pyro based on usage models. Even if you are not looking to contribute to the project, we’d like to include you in our community as a user.

Interested in Deep Probabilistic Modeling?

With the major 1.0 release out, now is a great time to join the Pyro community! Start by looking through the Pyro Documentation or contact us directly.

Pyro Key Links:

LF AI Resources

About LF AI Foundation

The LF AI Foundation, a Linux Foundation project, accelerates and sustains the growth of Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) open source projects. Backed by many of the world’s largest technology leaders, LF AI is a neutral space for harmonization and ecosystem engagement to advance AI, ML and DL innovation. To get involved with the LF AI Foundation, please visit https://lfai.foundation.

LF AI Welcomes ONNX, Ecosystem for Interoperable AI Models, as Graduate Project

Active contributors to the ONNX code base include over 30 blue chip AI companies, including Amazon, Facebook, Intel®, Microsoft, and NVIDIA

San Francisco – November 14, 2019 – The LF AI Foundation, the organization building an ecosystem to sustain open source innovation in artificial intelligence (AI), machine learning (ML) and deep learning (DL), today announced that the Open Neural Network eXchange (ONNX) is its newest graduate-level project. Moving ONNX under the umbrella of LF AI governance and management is viewed as a key milestone in establishing ONNX as a vendor-neutral open format standard.

ONNX is an open format used to represent machine learning and deep learning models. An ecosystem of products supporting ONNX provides capabilities like model creation and export, visualization, optimization, and acceleration. Among its many advantages, ONNX provides portability, allowing AI developers to more easily move AI models between tools that are part of trusted AI/ML/DL workflows.
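
As an illustration of that portability, here is a minimal sketch of our own (not from the announcement; the toy model and file name are hypothetical): a model built in PyTorch is exported to the ONNX format and then served by an ONNX-capable runtime such as onnxruntime, with no PyTorch dependency at inference time.

import numpy as np
import torch
import onnxruntime as ort

# Toy PyTorch network standing in for a trained model (hypothetical)
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)

# Export to ONNX; the dummy input fixes the input shape of the graph
dummy = torch.randn(1, 4)
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Any ONNX-capable runtime can now load and run the model
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(None, {"input": np.random.randn(1, 4).astype(np.float32)})
print(outputs[0])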

“We are pleased to welcome ONNX to the LF AI Foundation. We see ONNX as a key project in the continued growth of open source AI,” said Mazin Gilbert, Chair of the LF AI Foundation Governing Board. “We are committed to expanding open source AI and to supporting trusted, transparent, and accessible community development.”

“ONNX is not just a spec that companies endorse, it’s already being actively implemented in their products. This is because ONNX is an open format and is committed to developing and supporting a wide choice of frameworks and platforms. Joining LF AI shows a determination to continue on this path, and will help accelerate technical development and connections with the wider open source AI community around the world,” said Dr. Ibrahim Haddad, Executive Director of the LF AI Foundation. “We’re happy to provide ONNX an independent home and work with the community to boost its profile as a vendor-neutral standard. ONNX will retain its existing OSI-approved open source license, its governance structure, and its established development practices. We are committed to providing ONNX with a host of supporting services, especially in the areas of marketing and community events, to extend its reach and adoption.”

The ONNX community was established in 2017 by Facebook and Microsoft to create an open ecosystem for interchangeable models. Support for ONNX has grown to over 30 registered companies, plus many more end user companies, around the world. ONNX is a mature project with 138 contributors, over 7K stars and over 1K forks. 

“Huawei welcomes the ONNX project joining LF AI. As one of the initial premier members of the LF AI Foundation as well as an early member of the ONNX community, Huawei has been fully supportive throughout the entire process,” said XiaoLi Jiang, GM of Open Source Ecosystem at Huawei. “ONNX as a model exchange format standard is a great addition for interoperability, enabling broader collaboration for LF AI. Huawei will continue to increase our open source investments and collaborate with the open source community to advance the application and development of Artificial Intelligence.”

“IBM is delighted to see the ONNX community joining the Linux Foundation, as it will encourage accelerated development in the space. ONNX allows the transfer of models between deep learning frameworks and simplifies the deployment of trained models to inference servers in hybrid cloud environments via Watson Machine Learning. In addition, it provides a standardized format to run models on both Watson Machine Learning Accelerator, which benefits from IBM Power, and IBM Z, enabling clients to infuse deep learning insights into transactions as they occur,” said Steven Eliuk, IBM Vice President, Deep Learning & Governance Automation, IBM Global Chief Data Office.

“Intel is committed to supporting ONNX in our software products such as nGraph, the Intel® Distribution of the OpenVINO™ Toolkit, and PlaidML to provide acceleration on existing Intel hardware platforms and upcoming deep learning ASICs,” said Dr. Harry Kim, Head of AI Software Product Management. “Developers from Intel have already contributed to defining quantization specs and large model support for ONNX, and we expect even more contributions to the project as ONNX joins the Linux Foundation.”

“We’re thrilled that the Linux Foundation has selected ONNX to expand open source AI capabilities for developers,” said Eric Boyd, corporate vice president, AI Platform, Microsoft Corp. “Microsoft has been deeply committed to open source and continues to invest in tools and platforms that empower developers to be more productive. We use ONNX across our product portfolio – such as Azure AI – to deliver improved experiences to consumers and developers alike.”

“NVIDIA GPUs and TensorRT offer a programmable, high-performance platform for inference. The ONNX open format complements these and benefits from wide adoption within the AI ecosystem,” said Rob Ober, chief platform architect for Data Center Products at NVIDIA. “As an active contributor to ONNX, we’re glad to see the project join the Linux Foundation and look forward to industry-wide collaboration and innovation.”

For more information on getting involved immediately with ONNX, please see the following resources:

ONNX GitHub: https://github.com/onnx/
Discussions: https://gitter.im/onnx/
DockerHub: https://hub.docker.com/r/onnx/onnx-ecosystem
Main Site: https://onnx.ai
Twitter: https://twitter.com/onnxai
Facebook: https://facebook.com/onnxai

Companies that have already implemented ONNX in products include Alibaba, AMD, Apple, ARM, AWS, Baidu, BITMAIN, CEVA, Facebook, Graphcore, Habana, HP, Huawei, IBM, Idein, Intel, Mathworks, Mediatek, Microsoft, NVIDIA, NXP, Oath, Preferred Networks, Qualcomm, SAS, Skymizer, Synopsys, Tencent, Xiaomi, and Unity.

LF AI Resources

About LF AI Foundation

The LF AI Foundation, a Linux Foundation project, accelerates and sustains the growth of Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) open source projects. Backed by many of the world’s largest technology leaders, LF AI is a neutral space for harmonization and ecosystem engagement to advance AI, ML and DL innovation. To get involved with the LF AI Foundation, please visit https://lfai.foundation.

About Linux Foundation 

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation projects like Linux, Kubernetes, Node.js and more are considered critical to the development of the world’s most important infrastructure. Its development methodology leverages established best practices and addresses the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

# # #

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.

Technical Presentations from Angel Meetup – Oct 13 – Shenzhen

The Angel technical meetup was held October 13 in Shenzhen. It was an excellent afternoon of presentations, technical discussions and networking. Thank you to everyone who participated!

Presentations and a write-up from the meetup are available here:

Angel is an LF AI Foundation incubation project that offers a high-performance distributed machine learning platform based on the Parameter Server philosophy. It is tuned for performance with big data, offers wide applicability and stability, and demonstrates increasing advantages in handling higher-dimensional models.

To learn more on Angel, please visit: https://lfai.foundation/projects/angel-ml/

LF AI Resources

NYU Joins LF AI as Associate Member

The LF AI Foundation welcomes New York University (NYU) to LF AI as an Associate member.

NYU is a well-known private research university, founded in 1831, and based in New York City. NYU also has degree-granting campuses in Abu Dhabi and Shanghai, and academic centers around the world.

The university has over 50,000 students, split roughly in half between undergraduate and graduate programs.

Prof. Shivendra Panwar, Director of the NY State Center for Advanced Technology in Telecommunications (CATT), will be NYU’s representative to LF AI. We look forward to collaborating with NYU on open source AI tools and methodologies, including Acumos and more.

Welcome! 

Apache NiFi ↔ AI Fairness 360 (AIF360) Integration – Trusted AI Architecture Development Report 1

By Romeo Kienzler, Chief Data Scientist, IBM Center for Open Source Data and AI Technologies

We’re currently in the process of integrating the Trusted AI toolkits into Acumos AI. There are of course many ways to do so, which is why we’ve started the journey with an architecture development method. Not everyone may be familiar with TOGAF, The Open Group Architecture Framework, but we’re using it here to make sure the architectural choices for integrating the Trusted AI toolkits are sound. As you can see below in Figure 1, developing an architecture is an iterative process. Since we’ve completed one iteration, we want to give you an update on how each of those process steps actually developed.

Figure 1 – The Open Group Architecture Framework (TOGAF)

Preliminary

Many AI practitioners agree that trusted AI is a key property for large-scale adoption of AI in the enterprise. A set of questions illustrated in Figure 2, taken from Todd Moore’s (IBM VP, Open Technology) keynote at the Linux Foundation’s conference in Lyon in 2019, needs to be asked about every AI model beyond its generic performance metrics like accuracy, F1 score, or area under the ROC curve.

Figure 2: Questions to be asked once AI model training is done

Since open source toolkits that answer these questions already exist within Linux Foundation AI, the task is to find out how to use them most efficiently.

Architecture Vision

To answer the questions above, the following checks and mitigations must be performed on an AI model deployment candidate:

  • Adversarial Robustness Assessment and Mitigation
  • Bias Detection and Mitigation
  • Explainability
  • Accountability (Repeatability / Data and Model Lineage)

A set of open source tools exists for each of those tasks. In what follows, we want to identify the correct tools and the correct ways of integrating them to maximize their positive impact. The overall value proposition of this architecture is therefore the removal of major roadblocks to bringing AI models into production by generating, qualifying, and quantifying trust, affecting stakeholders like regulators, auditors, and business representatives. Ease of use and adoption rate are the main drivers for transformation and can be seen as the main KPIs here.

Therefore, the main risk is complexity: the higher the complexity, the more the adoption rate declines.

Information Systems Architecture

In the current iteration we focus only on bias detection using the AIF360 toolkit. Bias mitigation, adversarial robustness, explainability, and accountability will be covered in future versions of this document. Although such a component can be deployed in many different possible information systems architectures, we’ve identified only two generic integration scenarios, which we call data driven integration and model driven integration.

Data driven integration

In data driven integration, the AI model participates only by generating predictions, which are stored in a database. Using the model’s predictions and the ground truth, bias metrics can be computed. This works exactly the same way as ordinary model validation on a test set that hasn’t been used for training: the test set is applied to the model, and the model’s performance is assessed on the predictions it generates, using a metric algorithm. The same rule applies here, except that instead of an ordinary metric algorithm, the algorithms for bias measurement are used. This process is illustrated in Figure 3.

Figure 3: Data driven integration uses predictions created by the model and ground truth data to compute bias metrics
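
As a minimal sketch of this scenario, AIF360 can compute bias metrics by comparing a ground-truth dataset against a copy of it carrying the model’s stored predictions. The file and column names below are hypothetical:

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import ClassificationMetric

# Ground truth and stored model predictions (hypothetical files and columns;
# "sex" is the protected attribute, "label" the binary outcome)
df_true = pd.read_csv("ground_truth.csv")
df_pred = df_true.copy()
df_pred["label"] = pd.read_csv("predictions.csv")["label"]

kwargs = dict(label_names=["label"], protected_attribute_names=["sex"])
dataset_true = BinaryLabelDataset(df=df_true, **kwargs)
dataset_pred = BinaryLabelDataset(df=df_pred, **kwargs)

# Compare predictions against ground truth per protected group
metric = ClassificationMetric(dataset_true, dataset_pred,
                              unprivileged_groups=[{"sex": 0}],
                              privileged_groups=[{"sex": 1}])
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())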

Model driven integration

A second integration scenario might be feasible as well, which we call “model driven integration”. In this case, no data is provided to the bias detection library, only the model and a configuration object containing information on the protected attributes, the label columns, and the schema. The model then has to be executed in an appropriate runtime using artificial data. Whether this is a feasible way of integrating will be determined during the next iterations of this project. Figure 4 illustrates this scenario.

Figure 4: The model is created using data but only the standalone model is assessed without any further need for data

Hybrid integration

Since model driven integration is not yet confirmed to work, we finally propose a hybrid integration, as illustrated in Figure 5, where the model is executed in an appropriate environment but has access to the test data set. This is very similar to the data driven approach, with the difference that model predictions are not needed beforehand but are created on the fly during execution of the validation process. This might have advantages in the areas of data lineage and accountability, or in facilitating operational aspects.

Figure 5: Hybrid integration allows the bias detection component to access the test data set

Technology Architecture

Integration into Acumos AI is one of the most important aspects of this project. Although other integration points exist, we started with an evaluation of integration into Apache NiFi, since NiFi will be part of the next release of Acumos AI and will play a central role in data integration tasks. Therefore, a POC was conducted on integrating the LF AI AIF360 (AI Fairness 360) toolkit into Apache NiFi as a custom processor, using the data driven approach.

Opportunities and Solutions

For simplicity, we started with the ExecuteStreamCommand processor within NiFi, which allows any executable that reads from STDIN and writes to STDOUT to be wrapped as a NiFi processor.
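
A rough sketch of such an executable follows (it is not the actual processor from the POC repository, and the column names are hypothetical): it reads a CSV flowfile from STDIN, computes a dataset-level bias metric with AIF360, and writes the result to STDOUT, where NiFi picks it up.

#!/usr/bin/env python3
# Sketch of an executable wrapped by ExecuteStreamCommand (hypothetical columns)
import sys
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.read_csv(sys.stdin)  # the flowfile content arrives on STDIN
dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["sex"])
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=[{"sex": 0}],
                                  privileged_groups=[{"sex": 1}])
sys.stdout.write(str(metric.disparate_impact()))  # STDOUT becomes the output flowfile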

Implementation Details

Environment Setup

We used an Ubuntu 18.04 Server LTS system and installed Apache NiFi on top of it. The following script installs Apache NiFi, sets up a Python environment with AIF360, and takes care of the necessary configuration:

# Install Java and tooling, then download and unpack Apache NiFi
apt update
apt install -y openjdk-8-jdk unzip git
wget http://mirror.easyname.ch/apache/nifi/1.9.2/nifi-1.9.2-bin.zip
unzip nifi-1.9.2-bin.zip

# Set up a Python 3 virtual environment and install AIF360
apt install -y python3-pip
pip3 install --upgrade pip
apt install -y python3-venv
python3 -m venv venv
source venv/bin/activate
pip3 install aif360

# Fetch the custom processor resources and start NiFi
git clone https://github.com/romeokienzler/lfai_nifi.git
./nifi-1.9.2/bin/nifi.sh start

Test

Now it’s time to test. NiFi can be accessed in a browser on port 8080. The file https://github.com/romeokienzler/lfai_nifi/blob/master/AIF360.xml contains the template to create the flow. It is now possible to copy the “fair” and “unfair” test data into the “in” folder, where it will be consumed by the NiFi flow. This flow is illustrated in Figure 6.

Figure 6: The sample NiFi flow which uses the bias detection processor

After the flow has run, the bias metrics are attached as attributes to the flowfile as shown in Figure 7.

Figure 7: Bias metrics are attached as attributes to the flowfile

Future Work

In the next steps we’ll also add bias mitigation to this prototype. Then we’ll evaluate the other integration scenarios mentioned above and identify the best way of integrating them into Acumos AI. Finally, we’ll integrate the remaining toolkits for adversarial robustness, explainability, and accountability into Acumos AI.

Conclusion

We’ve shown that, using a custom processor, Trusted AI toolkits can be integrated into Apache NiFi and, through it, into Acumos AI.

Trusted AI Committee Established by LF AI Foundation

Can We Trust AI?

AI is advancing rapidly in the enterprise, with more than half of organizations launching at least one AI deployment in production. As organizations work to improve the performance of AI, the teams building and deploying it must also grapple with the challenge of determining whether their AI models can be trusted. Implementing trusted AI processes requires assessing the degree to which AI models are fair, secure, and explainable, and whether they have well-documented lineage.

Collection of logos for the LF AI member organizations who are part of the Trusted AI Committee. See full list on the LF AI Wiki.

LF AI Trusted AI Committee

LF AI is an umbrella foundation of the Linux Foundation that supports open source innovation in artificial intelligence, machine learning, and deep learning. To build trust in the adoption of AI, the Trusted AI Committee has been established as part of Linux Foundation AI (LF AI). The mission of the committee is to:

  • (a) define policies, guidelines, tooling, and use cases by industry to create responsible and trusted AI;
  • (b) survey and contact current open source trusted AI related projects to join LF AI efforts;
  • (c) create a badging or certification process for open source projects that meet the trusted AI policies/guidelines defined by LF AI; and
  • (d) create a document that describes the basic concepts and definitions in relation to trusted AI and aims to standardize the vocabulary/terminology.


The Trusted AI Committee has three chairs, spread across regions (Asia, Europe, and the U.S.).

Please refer to the wiki for more details.

Formation of Working Groups

To begin this work, the Trusted AI Committee has established two working groups made up of a diverse range of members from multiple LF AI member organizations around the world. The two working groups are: (1) the Principles Working Group, and (2) the Use Cases Working Group. Both groups recognize the importance of diverse voices in solving problems in this space and will work to increase the diversity of contributors while maintaining a balance between Europe, Asia, and America.

Principles of Trusted AI

The Principles Working Group (PWG) is creating an initial whitepaper that surveys a wide range of prior work and will propose practical guidelines. The PWG has set ambitious goals that will inform the work of the Use Cases Working Group. First, the PWG will define a set of baseline definitions for trusted AI. To inform this, it will collect existing reference materials, analyze them according to an appropriate methodology, identify a set of common principles, and propose guidelines for any open source AI project that can be iteratively refined as principles are put into practice via operational guidelines. It will then identify tools and open source libraries that can be used to implement these common principles, and it will discuss and document the relevance of self-certification and audit programs as needed to ensure trust in open AI tools and libraries.

Use Cases by Project, Industry, and Technology

The Use Cases Working Group (UWG) is creating code for specific industry applications of AI (use cases) that can be assessed using the guidelines developed by the PWG, providing feedback to update them. This working group aims to identify open source trusted AI tools from member and non-member companies. Distinguishing use cases by industry is imperative for adoption, so the group seeks to identify and implement use cases for the financial industry, automotive industry, and others, including use cases that outline technical integration between open source projects, e.g., Acumos and AIF360. Next, the UWG will work to create technical guidelines, integrations, and best practices for trusted ML functions that can be used in the context of MLOps. As necessary, the UWG will identify and implement integration points between external projects.

Future Goals of the Working Groups

After achieving as many of these goals as possible, the Use Cases Working Group will define the set of initial projects to drive the integration work of additional projects. The UWG will build a team of core contributors with an emphasis on maintaining collaboration between Europe, Asia, and America. This team will work toward the creation of best practices and a reference architecture for MLOps in the context of trusted AI, the creation of Kubeflow Pipelines for Trusted AI Committee projects to be consumed within ML platforms, and Apache NiFi pipelines with trusted AI projects for Acumos consumption. The UWG will also define requirements around lineage tracking, metadata collection, and related concerns. Lastly, with so many telecommunications companies under the LF AI umbrella, the working group plans to dive into telco use cases for trustworthy AI.

LF AI Welcomes Adlik, Toolkit for Accelerating Deep Learning Inference, as Newest Incubation Project

Contributed by ZTE, Adlik allows deep learning models to be deployed to different platforms with high performance and flexibility

San Francisco – October 21, 2019 – The LF AI Foundation, the organization building an ecosystem to sustain open source innovation in artificial intelligence (AI), machine learning (ML) and deep learning (DL), today welcomes Adlik to LF AI as an incubation project. Adlik comes from LF AI Premier member ZTE, which has committed to hosting Adlik in a neutral environment with an open governance model that encourages contributions and adoption.

“We are extremely pleased to welcome Adlik to LF AI. Today’s announcement is an important contribution to the growing ecosystem of open source AI,” said Dr. Ibrahim Haddad, Executive Director of LF AI. “Adlik optimizes models developed in widely used frameworks like TensorFlow, Keras, and Caffe, and has the potential for wide impact in the AI space. Adlik is poised to help the overall growth of open source AI, and we look forward to supporting Adlik’s technology development and user adoption around the world.”

The goal of Adlik is to accelerate the deep learning inference process in both cloud and embedded environments. Adlik consists of two subprojects: a model compiler and a serving platform. The model compiler supports several optimization technologies like pruning, quantization, and structural compression to optimize models developed in major frameworks like TensorFlow, Keras, and Caffe, so that they run with lower latency and higher computing efficiency. The serving platform provides deep learning models with an optimized runtime based on the deployment environment, such as CPU, GPU, or FPGA. Starting from a deep learning model, users of Adlik can optimize it with the model compiler and then deploy it using the serving platform.
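
Adlik’s own compiler interface is not shown in this announcement; purely to illustrate the kind of post-training optimization such a compiler performs, here is a TensorFlow Lite quantization sketch (explicitly not Adlik code, with a hypothetical toy model):

import tensorflow as tf

# Toy Keras model standing in for a trained network (hypothetical)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(2),
])

# Post-training quantization rewrites weights in a lower-precision
# representation so the model runs with lower latency and higher efficiency
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)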

“We are very pleased to share knowledge and explore deploying deep learning technologies together. Adlik is a tool for models, and it will support more and more training frameworks and model optimization algorithms in the near future,” said Wei Meng, Director of Standard and Open Source Planning, Technology Planning Dept., ZTE Corporation. “We are happy to see a good ecosystem grow around Adlik in collaboration with other projects. All developers are welcome to contribute to Adlik. Let’s all work together!”

Release 1 of Adlik is expected before the end of the year with the following main features:

  • Support for optimizations like quantization and pruning
  • Support for compiling models from a wider range of frameworks
  • Support for customization of the runtime and service core
  • Support for an FPGA runtime
  • Support for multiple instances for serving models

For more information on getting involved immediately with Adlik, please see the following Adlik resources:

LF AI Resources

About LF AI Foundation

The LF AI Foundation, a Linux Foundation project, accelerates and sustains the growth of Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) open source projects. Backed by many of the world’s largest technology leaders, LF AI is a neutral space for harmonization and ecosystem engagement to advance AI, ML and DL innovation. To get involved with the LF AI Foundation, please visit https://lfai.foundation.

About Linux Foundation 

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation projects like Linux, Kubernetes, Node.js and more are considered critical to the development of the world’s most important infrastructure. Its development methodology leverages established best practices and addresses the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

# # #

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

A Guide to Hosting Your Project in LF AI

Building an open source software project and wanting to gain traction? Just providing a software repo, mailing list, and a website is not enough. A much wider set of services, including scalable and neutral governance, is critical for increasing adoption of open source projects. 

The LF AI Foundation (LF AI) provides a wide range of services for its hosted projects with a focus on increasing development and innovation in the open source AI ecosystem. By being part of LF AI, a hosted project gets access to program management services, event management services, marketing services and programs, PR support, legal services, and staff eager to help grow your project. 

All of these services act as enablers to propel your project further, providing solid ground on which organizations and interested individuals will feel confident joining your project, rather than another, and becoming part of its community of users and contributors.

Why host a project under LF AI?

  1. You believe your project will gain wider community adoption if it’s no longer solely affiliated with a corporate partner
  2. Several companies are working on very similar projects, and transferring management to an open source foundation would unite people under a common project
  3. There are legal or administrative tasks essential to the health of your project, and it’s not clear which current participant should own these tasks. These types of needs typically only arise after a project has already become reasonably established, with an active contributor community and often one or more dedicated corporate partners.

How are projects on-boarded into LF AI?

Projects are onboarded and progress pursuant to the LF AI Foundation’s Project Process and Lifecycle Document.

LF AI hosted projects fall into one of three stages: Incubation, Graduation, or Emeritus.

Incubation

The five core requirements for a project to qualify as an Incubation project are:

  • Use an approved OSI open source license
  • Be supported by an LF AI member
  • Fit within the mission and scope of LF AI
  • Allow neutral ownership of project assets such as a trademark, domain, or GitHub account (the community can define rules and manage them)
  • Have a neutral governance that allows anyone to participate in the technical community, whether or not they are a financial member or supporter of the project

Accepting Incubation projects into LF AI requires a positive vote of the Technical Advisory Council (TAC).

Graduation

In addition to the Incubation requirements, a Graduation project must:

  • Have a healthy number of committers from at least two organizations
  • Have achieved and maintained a Core Infrastructure Initiative Best Practices Badge
  • Demonstrate a substantial and ongoing flow of commits and merged contributions
  • Document current project owners, and current and emeritus committers, in OWNERS.md and COMMITTERS.md files
  • Document the project’s governance (we help projects create a governance model that works for them, or simply help them document their existing governance)

Accepting Graduation projects into LF AI requires a positive vote of both the TAC and the Governing Board.

Emeritus

Emeritus projects are projects which the maintainers feel have reached or are nearing end-of-life. Emeritus projects have contributed to the ecosystem but are not necessarily recommended for modern development, as there may be more actively maintained choices.

How does Your Project Transition from Incubation to Graduation?

The TAC undertakes an annual review of all LF AI hosted projects to assess whether each Incubation stage project is making adequate progress towards the Graduation stage, and whether each Graduation stage project is maintaining progress to remain at Graduation level. 

The TAC then provides a set of recommendations for each project to improve and/or a recommendation to the LF AI Governing Board on moving a project across stages. 

Common Benefits to Incubation and Graduation Projects

  • Access to a larger community within the same ecosystem, leading to a larger pipeline of potential users and contributors
  • Validation from the Linux Foundation, a trusted source that hosts over 180 large-scale open source projects
  • Scalable and neutral governance accessible to all 
  • Neutral hosting of your project’s trademark and any related assets and accounts 
  • Marketing and awareness
  • Collaboration opportunities with other LF AI projects and broadly other Linux Foundation projects
  • Compliance scans with reports delivered to the projects’ mailing lists 
  • Infrastructure and IT enablement (specifics depend on each project and the hosting level)

Specific Benefits for Incubation Projects 

In addition to the above stated common benefits, Incubation projects enjoy these additional benefits:

  • Your project has the right to refer to the project as an “LF AI Foundation Incubation Project”
  • Appointment of an existing TAC member who will act as a sponsor of your project and provide recommendations regarding governance best practices
  • Access to LF AI booth space at various events for demo purposes and for meeting the developer community, based on availability

Specific Benefits for Graduation Projects 

In addition to the above stated common benefits, Graduation projects enjoy these additional benefits:

  • Your project has the right to refer to itself as an “LF AI Graduation Project,” which signals to the market that your project has reached a high level of technical maturity with confidence in its readiness for deployment 
  • Projects designated as Graduation Projects by the Governing Board get a voting seat on the TAC
  • Graduation projects are eligible to request and receive funding support contingent on Governing Board approval
  • Priority access to LF AI booth space at various events for demo purposes and for meeting the developer community 
  • Graduation projects have a technical lead appointed to represent the project on the TAC

Join LF AI as a Project

We’re constantly looking for new projects to join our family. Please reach out to info@lfai.foundation if you’d like to discuss the prospect of your open source AI project joining LF AI as a hosted project.

LF AI Resources