Updated LF AI Project Proposal Process and Lifecycle Document

By Blog

Download the new and improved “LF AI Projects: Process and Lifecycle” (PDF) to learn the process of submitting your open source project for hosting under the LF AI Foundation!


The LF AI Foundation is today releasing an updated version of its Project Process and Lifecycle document, a little over a year after the first version was published in August 2018. Over the past year, we have welcomed four new projects to the Foundation (Angel, EDL, Horovod, and Pyro) and gone through the onboarding process with each of them. These experiences, along with various incoming feedback, revealed many opportunities for improvement. We took action, and the outcome is an improved document that better describes the various stages of a project, explains how a project transitions from one stage to another, and details the various ways we support projects.

If you are interested in hosting your open source AI/ML/DL project in LF AI, please review the document and email us at info@lfai.foundation. We’re eager to discuss the possibilities with you.

For further reading, please visit these pages:

LF AI Day – Shanghai: Full Day Deep Dive into Open Source AI

By Blog

Register Now! 

Huawei, Tencent, and the LF AI Foundation are pleased to announce LF AI Day – Shanghai, being held on September 17 at the Huawei Institute of Research and Development in beautiful Shanghai. LF AI Days are regional, one-day events hosted and organized by local members with support from LF AI and our hosted projects. The event will feature speakers from leading operators and the AI industry, with a focus on open source strategies for machine learning and deep learning.

These events are open to all for participation and have no cost to attend.

The agenda is available at: https://www.lfasiallc.com/events/lf-ai-day-shanghai-2019/program/agenda/

For questions, please contact info@lfai.foundation

To view LF AI Days happening in other geographical regions, please visit the LF AI Events page. 

Register Now! 

Introducing FATE 1.0: Milestone Version Introduces New Features, Stability and Performance Enhancements

By Blog

This is a guest blog post by the FATE community, a Linux Foundation project with interests in common with LF AI.

The FATE community is excited to announce the availability of FATE 1.0. We are striving to improve the development of federated learning technologies to achieve more powerful functions and applications. We consider FATE 1.0 to be a milestone version which empowers the FATE community with more powerful tools and a significantly improved developer experience.

FATE (Federated AI Technology Enabler) is a federated learning framework that fosters collaboration across companies and institutes to perform AI model training and inference in accordance with user privacy, data confidentiality and government regulations.

FATE recently joined the Linux Foundation with several organizations supporting the project including 4Paradigm, CETC Big Data Research Institute, Clustar, JD Intelligent Cities Research, Squirrel AI Learning, Tencent and WeBank.

What’s new in the FATE 1.0 release

  • FATEBoard, a visual tool for federated learning modeling for end-users
  • FATEFlow, an end-to-end pipeline platform for federated learning
  • Performance updates for all algorithm modules
  • Mature features of online federated inference

FATE 1.0 benefits and features

  • FATEBoard visualizes the federated learning process
    • Greatly improving the federated modeling experience, FATEBoard allows end-users to explore and understand models easily and effectively
    • FATEBoard supports visualization in the status changes of training, model graphs, logs tracking, and much more, which makes federated learning modeling easier to understand, debug and optimize
    • Click here for more information
  • FATEFlow builds highly flexible, high performance federated learning pipeline production service
    • FATEFlow supports model lifecycle management functions, implementing state management of pipelines and the collaborative scheduling of operations. It automatically tracks the data, models, metrics, and logs generated in a task to facilitate analysis by users
    • Learn more or get started here
  • Performance Updates provide high flexibility, high stability and high performance for federated learning
    • FATE 1.0 supports use of DSL to describe federated modeling workflow
    • FATE 1.0 introduces a new Homomorphic Encryption algorithm based on Affine Transforms
    • FATE 1.0 also supports the Nesterov Momentum SGD Optimizer, which makes the federated learning algorithm converge quickly
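
For intuition on the last point: the Nesterov update evaluates the gradient at a “look-ahead” point before applying momentum, which is what accelerates convergence. Below is a minimal plain-Python sketch of the update rule on a toy 1-D problem; it illustrates the general optimizer, not FATE’s federated implementation:

```python
# Toy sketch of the Nesterov momentum SGD update (illustrative only,
# not FATE's code): minimize f(x) = (x - 3)^2 from its gradient.

def nesterov_sgd(grad, x, lr=0.1, mu=0.9, steps=100):
    """Minimize a 1-D function given its gradient function `grad`."""
    v = 0.0
    for _ in range(steps):
        # Evaluate the gradient at the "look-ahead" point x + mu*v.
        g = grad(x + mu * v)
        v = mu * v - lr * g
        x = x + v
    return x

x_min = nesterov_sgd(grad=lambda x: 2 * (x - 3), x=0.0)
print(round(x_min, 3))  # converges to the minimum near 3.0
```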

Getting Started

FATE supports three deployment modes: Standalone in Docker, Standalone Compiled, and Cluster Compiled. The Cluster in Docker mode is expected to come with the next release. Stay tuned by joining the Fate-FedAI mailing list, or visit the FATE README.

Suggestions or Contributions

Join our community via regular meetings or our mailing list and give us your feedback.

Anyone interested in federated learning is welcome to contribute code and submit Issues or Pull Requests. Please refer to the FATE project contribution guide first.

About the FATE Project

FATE is an open-source project initiated by WeBank’s AI Group to provide a secure computing framework for building the federated AI ecosystem. It implements a secure computation version of various machine learning algorithms, including logistic regression, tree-based algorithms, deep learning and transfer learning. For developers who need more than out-of-box algorithms, FATE provides a framework to implement new machine learning algorithms in a secure MPC architecture. Learn more about the project at https://github.com/WeBankFinTech/FATE.

IBM Joins LF AI Foundation

By Blog

San Diego – Open Source Summit North America – Aug 21, 2019 – The LF AI Foundation, the organization building an open AI community to drive open source innovation in artificial intelligence (AI), machine learning (ML) and deep learning (DL), today announced IBM is joining LF AI as a General Member. IBM, a global leader in delivering AI solutions and an acknowledged leader in bringing AI into widespread use with commercial solutions like Watson, has been working closely with the LF AI Foundation on an informal basis, participating in events worldwide and contributing to open source trusted AI workflows through efforts like the LF AI Foundation Technical Advisory Committee’s ML Workflow project.

“IBM is a world leader in AI. They provide leadership not only technically, but also from an ethical and trusted AI standpoint. IBM will help spread AI with clear guidelines on ethics, fairness, robustness, and explainability that benefit all participants of the open source AI ecosystem. This is a big step forward in strengthening the reach of AI and helping data scientists and developers worldwide,” said Dr. Ibrahim Haddad, Executive Director of the LF AI Foundation. “We’re excited to have IBM join the LF AI Foundation to promote and shape the future of trusted AI workflows, and foster synergy and collaboration across multiple Linux Foundation umbrella foundations.”

IBM is a Platinum Member of the Linux Foundation and, in addition to joining the LF AI Foundation, is also a member of the Open Mainframe Project, the Cloud Native Computing Foundation, the OpenJS Foundation, the Hyperledger Foundation, the R Consortium, LF Edge, LF Energy, LF ONAP, ODPi, and many others. IBM is strongly committed to the development and advancement of the open source ecosystem. IBM’s data and AI product offerings are built on open source and are strengthened by the accelerated pace and energy of community development. With their membership in LF AI, they are committing to support open source AI technologies and to collaborate with the global community in a vendor-neutral environment for advancing the open source AI platform. 

“IBM has a long history of contributions to open source foundations and community projects,” said Todd Moore, IBM VP of Open Technologies and Developer Ecosystem. “The time is right for IBM to join LF AI as a General Member to work closely with existing members and the broader community to lay the foundations for trusted AI workflows together. IBM Research is a leader in trusted AI and ethical AI guidelines, and IBM’s Data and AI offerings are built on open source components.”

“We are very excited to have IBM join the LF AI Foundation. They are a key piece in the continued growth of the LF AI Foundation. We have been working closely together on areas like our ML Workflow effort and exploring possible collaboration with other industry initiatives and other Linux Foundation hosted projects. IBM has already been active suggesting additions to the LF AI landscape, including projects in the area of trusted AI. This announcement expands an already strong relationship,” said Dr. Ofer Hermoni, Director of Product Strategy at Amdocs and Chair of the LF AI Technical Advisory Council. “IBM is well-known for their leadership in open source AI ethics, and we welcome their strong contributions in these areas as part of the LF AI Foundation.”

“AT&T is excited to welcome IBM and its AI expertise to the LF AI community. It is encouraging to see industry leaders commit to open innovation with ethics as a strong foundation,” said Mazin Gilbert, Vice President at AT&T Labs.

“IBM is a welcome addition to the LF AI Foundation membership as a leader in the development of open source AI through their Center for Open Source Data and AI Technologies,” said Dr. Jamil Chawki, Chairman of the LF AI Foundation Outreach Committee. “We look forward to working closely with IBM and helping chart the future of AI together, including the area of trusted AI workflows.”

“As one of the founding members of LF AI, Tech Mahindra is excited to see the AI ecosystem growing. We extend a warm welcome to IBM for joining LF AI and bringing in expertise in Ethical AI, which will play a crucial role in ensuring that we build AI right,” said Dr. Satish Pai, Sr. Vice President, Americas Communications, Media and Entertainment, Tech Mahindra. “Tech Mahindra looks forward to collaborating and creating synergies with IBM across LF AI hosted projects, including Acumos.”

“IBM is a pioneer in the field of AI. Tencent sends its warmest congratulations on joining the LF AI Foundation,” said Dr. Han Xiao, Engineering Lead, Tencent AI Lab. “IBM’s continued success using and developing AI applications will help strengthen the message of the LF AI Foundation, and together with all the members we can build an open and collaborative AI ecosystem.”

About IBM Data and AI

For more information go to https://www.ibm.com/analytics/ and https://developer.ibm.com/code/open/centers/codait/.

About LF AI Foundation

The LF AI Foundation, a Linux Foundation project, accelerates and sustains the growth of Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) open source projects. Backed by many of the world’s largest technology leaders, LF AI is a neutral space for harmonization and ecosystem engagement to advance AI, ML and DL innovation. To get involved with the LF AI Foundation, please visit https://lfai.foundation.

Horovod Updates

By Blog

Version 0.17.0 of Horovod, the distributed deep learning framework, has been released. With the new release, Horovod extends and improves support for machine learning platforms and libraries. The release also contains a new run tool, performance improvements, and minor bug fixes.

Horovodrun

Running Horovod training directly using Open MPI gives a lot of flexibility and allows fine-grained control over options and settings. The flexibility comes with the challenge of providing a significant number of parameters and values, even for simple operations. Missing or wrong parameters or values will prevent Horovod from running successfully.

With this release, the command-line utility horovodrun is introduced. The horovodrun utility is an Open MPI-based wrapper for running Horovod scripts without the complexity of composing Open MPI commands. It automatically detects and sets parameters, and it can display the underlying MPI command if desired.

Example

Let’s say we have a Horovod script train.py and want to run it on one machine using four GPUs. The horovodrun command would be:

horovodrun -np 4 -H localhost:4 python train.py

The -np flag specifies the number of processes, and the -H flag specifies the host. If more machines are used, list the hosts separated by commas, e.g. -H server1:4,server2:4.

The equivalent Open MPI command would be:

mpirun -np 4 \
    -H localhost:4 \
    -bind-to none -map-by slot \
    -x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH \
    -mca pml ob1 -mca btl ^openib \
    python train.py

Apache MXNet 

Apache MXNet is a high-performance deep learning framework used for building, training, and deploying deep neural networks, with support for distributed training.

Apache MXNet 1.4.1 and 1.5.0 are the releases officially supporting Horovod. Previously, the MXNet 1.4.0 release supported Horovod only on certain operating systems, and users had to run the master branch version of MXNet for Horovod support. In addition, the DistributedTrainer object is now introduced to better support Gluon APIs and to enable Automatic Mixed Precision (AMP) in MXNet.

MPI-less Horovod alpha

MPI is used extensively in the supercomputing community for high-performance parallel computing, but it can be difficult to install and configure for the first time. This release introduces support for Facebook’s Gloo as an alternative to running Horovod with MPI. Gloo comes included with Horovod and allows users to run Horovod without requiring MPI to be installed.

For environments that support both MPI and Gloo, users can choose their preferred library at runtime with a single flag to horovodrun:

$ horovodrun --gloo -np 2 python train.py

Gloo support is still early in its development, and more features are coming soon, most notably fault tolerance. Stay tuned!

TensorFlow 2.0 support

TensorFlow 2.0 introduces some significant changes, not only when it comes to new features, but also when it comes to the API. The changes include removing redundant APIs and making the API more consistent, with a focus on improving the integration experience. Horovod supports the new TensorFlow 2.0 features and APIs in the latest release.

Intel MLSL

Intel Machine Learning Scaling Library (MLSL) offers a set of communication features that can benefit distributed performance, such as asynchronous progress for compute/communication overlap, message prioritization, support for data/model/hybrid parallelism, and the use of multiple background processes for communication.

Horovod supports different communication backends, such as MPI, NCCL, and DDL, and with the latest release, Horovod also supports Intel MLSL. Using MLSL as the communication backend improves both the scalability of communication-bound workloads and the compute/communication ratio. 

Improvements

Horovod version 0.16.3 contains performance improvements for existing features with the most noteworthy being updates for PyTorch and large clusters. 

PyTorch performance

PyTorch is an open source, Python-based framework built for easy and efficient deep learning. PyTorch can utilize GPUs to accelerate tensor computation, and provides great flexibility and speed. 

In the new release of Horovod, performance has been improved for gradient clipping, which is a method used for preventing instabilities caused by gradients with excessively large values. 
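
To make the technique concrete, here is a minimal plain-Python sketch of gradient clipping by global norm, the general method described above (illustrative only, not Horovod’s internal implementation): all gradients are scaled down uniformly whenever their combined L2 norm exceeds a threshold.

```python
# Sketch of gradient clipping by global norm (illustrative, not
# Horovod's code). Gradients are represented as plain floats.
import math

def clip_by_global_norm(grads, max_norm):
    """Scale all gradients down uniformly if their global L2 norm
    exceeds max_norm; otherwise return them unchanged."""
    global_norm = math.sqrt(sum(g * g for g in grads))
    if global_norm <= max_norm:
        return grads
    scale = max_norm / global_norm
    return [g * scale for g in grads]

clipped = clip_by_global_norm([3.0, 4.0], max_norm=1.0)
print(clipped)  # approximately [0.6, 0.8]: norm rescaled from 5.0 to 1.0
```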

Large cluster performance

Performance for ultra-large clusters is improved in Horovod 0.16.3. One example of an ultra-large cluster that takes advantage of this improvement is Oak Ridge National Laboratory’s Summit supercomputer. Summit has more than 27,000 GPUs and was built to provide computing power for large-scale deep learning tasks that demand great complexity and high fidelity.

In Horovod, network communication is used in two distinct ways. First and foremost, network communication is used to carry out the collective operations to allreduce/allgather/broadcast tensors across workers during training. To drive these operations, network communication is also used for coordination/control to determine tensor readiness across all workers, and subsequently, what collective operations to carry out. With large cases on these systems spanning many hundreds to thousands of GPUs, the coordination/control logic alone can become a severe limiter to obtaining good parallel efficiency. To alleviate this bottleneck, NVIDIA contributed an improvement to the coordination/control implementation in Horovod to reduce the network communication usage for this phase of operation. In the improved implementation, a caching mechanism is introduced to store tensor metadata that was, in the original implementation, redundantly communicated across the network at each training step. With this change, coordination/control requires as little as a single bit per tensor communicated across the network per training step, instead of several bytes of serialized metadata per tensor.
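
The gist of such a caching scheme can be sketched in a few lines of plain Python. This is a hypothetical simplification, not Horovod’s actual wire format: once all workers agree on a fixed bit position per tensor, each training step only needs to exchange a small bitmask of readiness bits rather than re-sending serialized metadata.

```python
# Simplified sketch of bitmask-based readiness coordination
# (hypothetical illustration, not Horovod's implementation).

class TensorReadinessCache:
    def __init__(self, tensor_names):
        # Assign each known tensor a fixed bit position, agreed on once.
        self.bit = {name: i for i, name in enumerate(tensor_names)}
        self.names = list(tensor_names)

    def encode(self, ready_tensors):
        """Pack a worker's set of ready tensors into an integer bitmask."""
        mask = 0
        for name in ready_tensors:
            mask |= 1 << self.bit[name]
        return mask

    def ready_on_all(self, masks):
        """Tensors ready on every worker: AND all worker bitmasks."""
        combined = ~0
        for m in masks:
            combined &= m
        return [n for n in self.names if combined & (1 << self.bit[n])]

cache = TensorReadinessCache(["conv1.grad", "conv2.grad", "fc.grad"])
masks = [cache.encode({"conv1.grad", "fc.grad"}),
         cache.encode({"conv1.grad", "conv2.grad", "fc.grad"})]
print(cache.ready_on_all(masks))  # ['conv1.grad', 'fc.grad']
```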

RDMA support in the provided docker containers

Starting with Horovod version 0.16.4, RDMA support is available with Docker containers, which increases Horovod’s performance.

Previously it was necessary to build your own Docker image with the appropriate libraries, such as MOFED, to run Horovod with RDMA. That is no longer necessary, as the provided containers now support RDMA. 

If you have Mellanox NICs, we recommend that you mount your Mellanox devices (/dev/infiniband) in the container and enable the IPC_LOCK capability for memory registration:

$ nvidia-docker run -it --network=host -v /mnt/share/ssh:/root/.ssh --cap-add=IPC_LOCK --device=/dev/infiniband horovod:latest
root@c278c88dd552:/examples# …

You need to specify these additional configuration options on primary and secondary workers.

Curious about how Horovod can make your model training faster and more scalable? Check out these new updates and try out the framework for yourself. Be sure to join the Horovod Announce and Horovod Technical-Discuss mailing lists.

Angel 3.0 Available Now – Major Milestone in Providing Full ML Stack

By Blog

Angel 3.0 is now available via https://github.com/Angel-ML/angel. Angel offers a full-stack machine learning platform designed for sparse data and huge-model scenarios, built on a high-performance Parameter Server (PS). Angel is used by Tencent and more than 100 other companies in products or internally within their organizations. It boasts 4200+ stars on GitHub, 7 sub-projects, 1100+ forks, and 2000+ commits.

Angel joined the LF AI Foundation in August 2018 as an incubation project from Tencent, a Premier member of the Foundation. 

Angel 3.0 Features

Angel 3.0 adds Auto Feature Engineering and new or enhanced computation engines, including Angel native, Spark on Angel (SONA), and PyTorch on Angel (PyTONA). It therefore allows users to switch to Angel from Spark or PyTorch smoothly, with nearly zero cost. 

A detailed white paper on Angel 3.0, authored by Fitz Wang, Ph.D., Senior Researcher at Tencent and an Angel maintainer and core developer, introduces the new features of Angel 3.0 and shows what distinguishes Angel from existing machine learning platforms such as TensorFlow, PyTorch, MXNet, PaddlePaddle, and Spark. It is available here:

LF AI Foundation Projects 

LF AI is an LF umbrella foundation founded in March 2018 to support and sustain collaboration and open source innovation in AI, machine learning, and deep learning. It offers a neutral environment to its hosted open source projects and supports them with a number of services to help them gain wider adoption. Current projects include Acumos AI, Angel, Elastic Deep Learning (EDL), Horovod, and Pyro. For more information on these projects, please visit: https://lfai.foundation/projects/

The LF AI Foundation supports open source AI developers and organizations around the world. We are constantly looking to host and support additional projects. People interested in hosting their projects under the LF AI Foundation are encouraged to email us at info@lfai.foundation. Details on proposing projects for hosting in LF AI are available via https://github.com/lfai/proposing-projects.

Meet Angel’s Developers at OSS NA

Angel core maintainers and other developers are presenting on August 20 at the LF AI Meetings in San Diego, co-located with Open Source Summit NA, and will also be at the LF AI booth Aug 21-23 to show demos and answer questions.

For more information, including both schedules and more, please see:

LF AI Meetings, San Diego – How to Register 

LF AI Meetings, San Diego – Agenda

LF AI Booth #43 – Developer Schedule at Open Source Summit

Integration Among Tools – Key to Machine Learning Implementation Success

By Blog

Use and adoption of Machine Learning (ML) and Deep Learning (DL) technologies is exploding with the availability of dozens of open source libraries, frameworks, and platforms, not to mention all the proprietary solutions. While there are many applications and tools out there, integration between them can be complicated, can pose additional challenges, especially around long-term sustainability, and may present a barrier to adoption as part of a commercial product or service. 

To help developers and data scientists make sense of the diversity of projects, the LF AI landscape (Figure 1) was originally published in December 2018 and has been continuously updated ever since. The LF AI landscape is an interactive tool that shows both how fragmented the space is and the wide range of projects in each technology category.

Figure 1: LF AI Landscape available via https://l.lfai.foundation

Most open source AI projects started as proprietary efforts and are the result of years of investment and talent acquisition. At some point, the founding company (or companies) decided to open source the project in order to build an ecosystem around it and collaborate with others on constructing a platform. The end result of this phenomenon is a large ecosystem of open source projects.

The important question from an adoption perspective is which open source project to adopt and how to integrate it with other open source solutions (libraries, frameworks, etc.) and internal proprietary stacks.  

Goal: Better Integration among Projects and Tools

One of the goals of the LF AI Foundation is to build integration among LF AI projects and generally available open source solutions, so users can easily take advantage of a wide array of options and further the adoption of open source AI. This effort to improve integration and collaboration aims to bring everyone up to the same level of understanding of common ML workflow deployments. Few companies are willing or able to provide this. This filtering and analysis is uniquely suited to a foundation like the LF AI Foundation, since we can look across specialties and provide help and guidance.

In his talk “How Linux Foundation is Changing the (Machine-Learning) World,” Ofer Hermoni, Ph.D., Director of Product Strategy, CTO Office, Amdocs, and Chairperson of the LF AI Technical Advisory Council, highlights one of the key goals of LF AI: 

“Harmonization, Interoperability – Increase efforts to harmonize open source projects and reduce fragmentation; increase the interoperability among projects”

This has led the LF AI Technical Advisory Committee (TAC) to push to clarify the current landscape. First, what is a typical workflow? Second, what projects already available under the LF AI umbrella can implement parts of that workflow? Finally, what open source projects are out there that help fill the gaps and provide good alternatives? This way, users can quickly understand the larger picture (landscape) and gain a solid understanding not just of the available open source components in the AI/ML/DL space, but also of how to integrate them into an end-to-end ML workflow. At the same time, LF AI can better evaluate where integration is already strong and where there are gaps that present opportunities to collaborate and fill, following the open source approach, for the benefit of the broader open source AI community.

The reference ML workflow produced by the TAC is summed up in three main phases. 

We started with reviewing existing published flows. We then built on them and extended them to create an entire workflow that covers the lifetime of ML integration across three major phases, starting with data preparation including data governance, moving through model creation, including ethics management, and then moving toward solution rollout including security management.

Figure 2: ML Workflow as defined by the LF AI TAC

Second, we identified the existing LF AI hosted projects and where they fit in the ML workflow. 

Figure 3: ML Workflow showcasing the fit of the LF AI hosted projects (Acumos, EDL, Angel, Horovod and Pyro)  

And third, we mapped other open source projects onto the ML workflow and where they fit in, such as TensorFlow, Keras, PyTorch, Kubeflow, and many more.

Figure 4: Same ML Workflow highlighting the fit of other existing open source projects

The figures are a great way to quickly grasp the entire process and identify the scope of the applications and tools that are needed, and they are especially helpful in identifying integration opportunities across these different projects. The result is a better understanding of the connections, or lack thereof, and a path to create those connections or integration points. 

Who should use this?

We would like to hear from as many developers and data scientists as possible, since we are just getting started. There are certainly more connections and gaps to be identified. Integration work takes time, and this effort has been built up over the past year. The activity is open not only to LF AI members but to the entire community, and many companies already participate in the discussions.

How Does My Project Get Involved?

The ML Workflow effort is open for participation and we are soliciting feedback to improve our reference workflow. There are various ways in which you can participate and get involved:

Meet the LF AI Team in San Diego (August 20, 2019)

LF AI is hosting an open meeting in San Diego on August 20th with the goal of discussing ongoing projects, exploring new collaboration opportunities, and providing face-to-face feedback and updates on the Foundation’s various ongoing technical efforts. We welcome you to join, meet our members, projects, and staff, and explore ways to get involved in our efforts. 

For more information please visit: https://lfai.foundation/event/lf-ai-meetings-in-san-diego/.

About the author

As the Director of Product Strategy in Amdocs’ CTO office, Dr. Ofer Hermoni is responsible for leading all of Amdocs’ activities in the Machine-Learning open-source community, including defining Amdocs’ product strategy in the area of AI/machine learning. In addition, he is the Chairperson of the LF AI Foundation Technical Advisory Council and a member of the LF AI Foundation Governing Board. Ofer is also an active contributor to the Acumos AI project, and a member of the Acumos AI Technical Steering Committee.



Let’s Talk Open Source AI! Open Invitation to the LF AI Meetings on August 20th in San Diego

By Blog

Come join us! The LF AI Meetings are being held in San Diego on Aug 20, 9am-12:30pm, one day prior to Open Source Summit North America, San Diego (Aug 21-23). LF AI members meet to discuss ongoing projects, explore new collaboration opportunities, and provide face-to-face feedback and updates.

It’s a great opportunity to meet with AI developers working on LF AI hosted projects and LF AI staff, too!

Meet the developers! – Who’s at the LF AI Booth #43?

Wednesday – Aug 21
10:30 am – 12:30 pm: Angel, Acumos
2:00 pm – 4:00 pm: Horovod, Acumos
4:00 pm – 6:00 pm: Horovod, Acumos
6:00 pm – 7:00 pm: Acumos

*Booth Crawl & ELC Tech Showcase – 5:30 – 7:00 pm

Thursday – Aug 22
10:30 am – 12:30 pm: Acumos
2:00 pm – 4:00 pm: Horovod, Acumos
4:00 pm – 5:30 pm: Horovod, Acumos

Friday – Aug 23
10:30 am – 12:30 pm: Angel, Acumos
2:00 pm – 4:00 pm: Angel, Acumos

For registration information and location details, please see: https://lfai.foundation/event/lf-ai-meetings-in-san-diego/

Looking forward to seeing you there!

Registration for LF AI Day – Paris Now Open! Expanding Open Source AI Engagement Across the Globe

By Blog

Register Now! 

Orange and the LF AI Foundation are excited to announce LF AI Day – Paris, coming up on September 16 in Paris-Châtillon. LF AI Days are regional, one-day events hosted and organized by local members with support from LF AI and its hosted projects. These events are open to all and have no cost to attend.

Hosted at the beautiful Orange Gardens, 44 Avenue de la République, LF AI Day – Paris will feature keynote speakers from leading operators and the AI industry, including Orange, NTT, Nokia, Deutsche Telekom, the LF AI Foundation, and more. The agenda will focus on open source strategies and ongoing technical developments in open source machine learning and deep learning. Various AI topics will be covered, including technical presentations, demonstrations of the Orange AI Marketplace based on Acumos, an LF AI Graduate project, and a startups panel discussion.

Agenda (updated Sept 12)

The agenda for the full-day free event is as follows. 

  • Check-in and registration
  • Welcome Message – Nicolas Demassieux, SVP, Orange Labs Research
  • Building Sustainable Open Source AI Ecosystem – Ibrahim Haddad, Executive Director, LF AI Foundation
  • Orange AI Activities – Steve Jarrett, VP, Orange Data & AI
  • NTT’s Challenges of AI for Innovative Network Operation – Masakatsu Fujiwara, Project Manager, NTT Network Technology Laboratories
  • Coffee Break
  • Acumos AI – Platform Overview, Releases and Use Cases – Anwar Aftab, Director, Inventive Science, AT&T Labs
  • We Make AI Accessible – Jamil Chawki, Intrapreneur-CEO, Orange AI Marketplace, and Chair of the LF AI Outreach Committee
  • Trusted AI – Reproducible, Unbiased and Robust AI Pipelines using Open Source – Romeo Kienzler, Chief Data Scientist, IBM Center for Open Source Data and AI Technologies
  • Activities in LF AI and Acumos – Sahar Tahvili, PhD, Lead Data Scientist, Ericsson, Global Artificial Intelligence Accelerator (GAIA), Sweden
  • Lunch
  • Acumos & Orange AI Marketplace Demonstration – Philippe Dooze, Project Technical Lead, Orange Labs Networks
  • Nokia, AI and Open Source – Philippe Carré, Senior Specialist Open Source, Nokia Bell-Labs & CTO
  • Startups Panel Discussion: Barriers for AI Development – François Tillerot, Intrapreneur-CMO, Orange AI Marketplace, with Rahul Chakkara, Co-Founder, Manas AI; Laurent Depersin, Research & Innovation Home Lab Director, Interdigital; Marion Carré, CEO, Ask Mona; Sana Ben Jemaa, Project Manager Radio & AI, Orange Labs Networks
  • Open Discussion and Closing Session

There will be a welcome reception after the event. Details will be posted on the event’s page.

For questions, please contact info@lfai.foundation

To view LF AI Days happening in other geographical regions, please visit the LF AI Events page. 

Register Now! 

AT&T, Orange, Tech Mahindra Adoption of Acumos AI Builds Foundation for Growth

By Blog

by John Murray, Assistant Vice President of Inventive Science, Intelligent Systems and Incubation, AT&T

With the release of Acumos AI in late 2018, the core idea was to create a sustainable open source AI ecosystem by making it easy to create AI products and services using open source technologies in a neutral environment. Acumos AI was aimed squarely at reducing the need for specialists and lowering the barriers to AI.

Fundamentally, lowering barriers to AI means making it easier to create and train models.

The new Boreas release, just announced in June, does exactly that. Users now have readily available tools to create and train models, enabling the full lifecycle of development from model onboarding and designing, to sharing and deploying. Jupyter Notebooks and NiFi, two popular and well-known tools for interactive notebooks and data-flow design, are now integrated in the pipeline. An enhanced UX in the portal gives users access to publishing, unpublishing, deploying, onboarding, model building, chaining, and more.

At the same time, AI model suppliers will be able to provide a software license with their models to ensure that the user has acquired the right to use the model. This is key for marketplace-like transactions. Boreas explicitly supports licenses and Right-To-Use for proprietary models. It also now supports license scans of models and metadata.

The new features in Boreas move AI development forward significantly, allowing developers and data scientists who are not AI specialists to develop and deploy apps.

Leadership and Real World Implementations

The LF AI Foundation charter promises to connect members and contributors with the innovative technical projects, companies, and developer communities that are transforming AI and Machine Learning. But the question is always, is it being used? And how does it perform in the real world?

AT&T, Orange and Tech Mahindra are three great examples of how Acumos AI has jumped forward quickly in the last 6 months. All three companies are founding members of the LF AI Foundation and have been providing leadership in both development resources and real world implementations of the Acumos AI framework and marketplace. The reach of their current deployments is distinctly international and extremely ambitious.

AT&T – Infusing AI Across Operations – Big and Small

Two years ago, AT&T saw an opportunity to make AI more accessible and reduce barriers to this exciting industry. Together with Tech Mahindra and The Linux Foundation, AT&T developed Acumos AI to serve as an open marketplace for innovators to create and exchange the best AI solutions possible. Two years and two releases later, we’ve seen firsthand the success of this open approach. It’s led to the creation of new solutions from students, developers, startups and several groups across AT&T’s varied business.

At AT&T, we’re not only helping to improve the Acumos AI code for the public, we’re also using it to improve efficiencies in our own organization. In the past year, AT&T has leveraged Acumos models across customer care, network security, and a variety of different aspects of the business. And, with each release comes additional enhancements, capabilities and opportunities to infuse AI across operations – big and small.

Orange – We make AI accessible and ready for 5G

Orange is using Acumos for its new AI Marketplace. Orange is a leading telecommunications company with 273 million customers worldwide and revenue of €41B (2017). The Orange AI Marketplace is an AI app store where developers can publish and share AI services that can be quickly and easily deployed by customers.

Orange has increased its involvement in Acumos significantly. Orange’s contributions to Acumos AI include the onboarding enhancements seen in the new Acumos Boreas release. After testing the publication and export of AI models for operations use cases – such as incident detection and ticket classification – Acumos was deployed as the basis for the Orange AI Marketplace.

The second half of 2019 will see even more implementations and further growth by Acumos AI. Please come back to find out more information here on the LF AI Foundation blog covering innovative use cases and key implementations of Acumos AI worldwide.

Acumos was also proposed by Orange as an AI platform for the European research project AI4EU. The goals of AI4EU are ambitious, including making the promise of AI “real” for the EU, and creating a collaborative European AI platform to nurture economic growth. Involving 80 partners across 21 countries, the project kicked off in January 2019 and will run for three years; it is expected to implement Acumos by the end of 2019.

Tech Mahindra GAiA – Democratizing AI

Tech Mahindra GAiA is the first enterprise-grade open source AI platform. It hosts a marketplace of AI models which can be applied to use cases in multiple industry verticals. These are used as the basis for building, sharing and rapidly deploying AI-driven services and applications to solve business critical problems.

GAiA is available for commercial products and services and supports open source distribution at the same time. Tech Mahindra is aiming to fully democratize AI. The core concept behind GAiA is that the knowledge and expertise around AI should be universally accessible.

The launch of the GAiA platform is in line with Tech Mahindra’s TechMNxt charter which focuses on leveraging next generation technologies like AI to address real world problems and meet the customer’s evolving and dynamic needs.

Getting Involved in the LF AI Foundation and Acumos

Want to get involved? It’s easy to get started! You can get involved with specific projects with development, review, events, documentation, and much more. You can participate in the Technical Advisory Committee (TAC) by joining the discussions on bi-weekly calls, identifying collaboration opportunities, inviting speakers to outside events, evaluating new projects, and more. And you can take advantage of marketing and outreach provided by the LF AI Foundation. 

The full “Getting Involved Guide” is available for current and prospective members.

“We’ve written this guide to provide you a complete reference to the LF AI community. You will learn how to engage with your communities of interest, all the different ways you can contribute, and how to get help when you need it. If you have suggestions for enhancing this guide, please get in touch with LF AI staff.”

If you are interested in joining the LF AI Foundation: https://lfai.foundation/about/join/


John Murray Bio

John Murray is the Assistant Vice President of Inventive Science, Intelligent Systems and Incubation at AT&T. He leads the Intelligent Systems and Incubation organization, which uses software, platforms, data, analytics, AI and machine learning to deliver solutions that address AT&T’s needs. He is an expert in designing and building advanced communications systems and is involved in key initiatives such as ONAP, Acumos, data management, and automation and communications systems.