Horovod Version 0.19.0 Now Available!

Horovod, an LF AI Foundation Incubation Project, has released version 0.19.0 and we’re thrilled to see the results of their hard work. Horovod is a distributed deep learning framework that improves the speed, scale, and resource utilization of deep learning training.

In version 0.19.0, Horovod adds tighter integration with Apache Spark, including a new high-level Horovod Spark Estimator framework and support for accelerator-aware task-level scheduling in the upcoming Spark 3.0 release. With Horovod Spark Estimators, you can train your deep neural network directly on your existing Spark DataFrame, leveraging Horovod’s ability to scale to hundreds of workers in parallel without any specialized code for distributed training. This enables deep learning frameworks to integrate seamlessly with ETL jobs, allowing for more streamlined production jobs, with faster iteration between feature engineering and model training. 

This release also contains experimental new features, including a join operation for PyTorch and the ability to launch Horovod jobs programmatically from environments like notebooks using a new interactive run mode.

With the new join operation, users no longer need to worry about how evenly their dataset divides when training. Just add a join step at the end of each epoch, and Horovod will train on any extra batches without causing the waiting workers to deadlock.
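
To see why uneven division matters, consider how a dataset shards across workers. This is a generic illustration in plain Python, not Horovod's API:

```python
def shard_sizes(num_samples, num_workers):
    """Split num_samples as evenly as possible across num_workers."""
    base, rem = divmod(num_samples, num_workers)
    # The first `rem` workers each receive one extra sample.
    return [base + (1 if i < rem else 0) for i in range(num_workers)]

# 10 samples over 4 workers: two workers end up with an extra batch.
print(shard_sizes(10, 4))  # [3, 3, 2, 2]
```

Here the last two workers run out of data one step early; without something like the join operation, a collective allreduce issued for the extra batches would block forever waiting on them.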

Using Horovod’s new interactive mode, users can launch distributed training jobs in a single line of Python. Define the distributed training function, execute it with multiple parallel processes, then return the results as a Python list of objects. This new API mirrors horovod.spark, but can run on any nodes you would normally use with horovodrun.
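
The execution model can be pictured with the standard library alone. The sketch below uses multiprocessing rather than Horovod's actual API, purely to illustrate "run a function on N parallel workers and collect the results as a list":

```python
from multiprocessing import Pool

def train_fn(rank):
    # Stand-in for a per-worker training function; with Horovod each
    # worker would run a real training loop and return, say, final loss.
    return {"rank": rank, "loss": 1.0 / (rank + 1)}

if __name__ == "__main__":
    # Execute the function on 4 parallel processes and gather one
    # result per worker, returned as an ordinary Python list.
    with Pool(processes=4) as pool:
        results = pool.map(train_fn, range(4))
    print([r["rank"] for r in results])  # [0, 1, 2, 3]
```

In the real interactive mode, Horovod additionally wires up the communication layer between the processes so collective operations work inside the training function.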

Full release notes for Horovod version 0.19.0 are available here. Curious about how Horovod can make your model training faster and more scalable? Check out these new updates and try out the framework. And be sure to subscribe to the Horovod Announce and Horovod Technical-Discuss mailing lists to join the community and stay connected on the latest updates.

Congratulations to the Horovod team and we look forward to continued growth and success as part of the LF AI Foundation! To learn about hosting an open source project with us, visit the LF AI Foundation website here.

LF AI Foundation Announces Graduation of Angel Project

Distributed machine learning platform has evolved into a full stack machine learning platform, ready for large scale deployment

SAN FRANCISCO – December 19, 2019 – The LF AI Foundation, the organization building an ecosystem to sustain open source innovation in artificial intelligence (AI), machine learning (ML) and deep learning (DL), is announcing today that hosted project Angel is moving from an Incubation to a Graduation Level Project. This graduation is the result of Angel demonstrating thriving adoption, an ongoing flow of contributions from multiple organizations, and a documented and structured open governance process. Angel has achieved a Core Infrastructure Initiative Best Practices Badge, and demonstrated a strong commitment to community.

Angel is a distributed machine learning platform based on the parameter-server architecture. It was open sourced by Tencent, the project founder, in July 2017 and then joined LF AI as an Incubation Project in August 2018. The initial focus of Angel was on sparse data and big model training. However, Angel now includes feature engineering, model training, hyper-parameter tuning and model serving, and has evolved into a full stack machine learning platform.
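
The parameter-server idea behind Angel can be sketched in a few lines of Python. This is a toy single-process illustration of the pattern, not Angel's actual interfaces:

```python
class ParameterServer:
    """Toy parameter server: holds global weights, applies pushed gradients."""

    def __init__(self, dim, lr=0.1):
        self.weights = [0.0] * dim
        self.lr = lr

    def pull(self):
        # Workers fetch a copy of the current global weights.
        return list(self.weights)

    def push(self, gradient):
        # Workers push gradients; the server applies an SGD update.
        self.weights = [w - self.lr * g for w, g in zip(self.weights, gradient)]

# Two "workers" each push a gradient computed on their own data shard.
ps = ParameterServer(dim=3)
ps.push([1.0, 0.0, -1.0])
ps.push([1.0, 2.0, 1.0])
print(ps.pull())  # [-0.2, -0.2, 0.0]
```

At scale the weights are sharded across many server nodes, which is what makes the pattern suit the sparse, high-dimensional models Angel targets.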

“With Angel, we’ve seen impressive speed in adding new features and rollout in large corporations at scale. With the 3.0 release of Angel, we have witnessed excellent progress in features, adoption and contributions in a short period of time,” said Dr. Ibrahim Haddad, Executive Director of the LF AI Foundation. “This is a big step forward signaling to the market a maturing open source technology ready for large scale deployment. Congratulations, Angel!”

More than 100 companies and institutions use Angel, either in products or internally behind the firewall. The extensive list of implementations includes well-known names like Weibo, Huawei, Xiaomi, Baidu, DiDi, and many more.

“We are excited to move from Incubation to Graduate Level Project in LF AI, and we see that as just another important milestone in the process, not the end goal. We need to continue to push both technically and with community outreach, to increase momentum, adoption and encourage additional contributions. We will continue to aim for lofty goals,” said Fitz Wang, Senior Researcher at Tencent, Angel Technical Project Lead. “We will be deeply involved in LF AI events in 2020 and present at several events under the LF AI booth. If you’d like to contribute to Angel, please reach out to us via our mailing lists and visit the LF AI booth at any of the LF events.”

Feature Roadmap for 2020

  • Version 3.2 – Graph Computing, adding more algorithms
    • Traditional graph algorithms: Closeness, HyperANF, more
    • Graph Embedding algorithms: Node2Vec, DeepWalk
    • Graph neural network: GraphSAGE
  • Version 3.3 – Federated Learning

About LF AI Foundation

The LF AI Foundation, a Linux Foundation project, accelerates and sustains the growth of Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) open source projects. Backed by many of the world’s largest technology leaders, LF AI is a neutral space for harmonization and ecosystem engagement to advance AI, ML and DL innovation. To get involved with the LF AI Foundation, please visit https://lfai.foundation.

About Linux Foundation 

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation projects like Linux, Kubernetes, Node.js and more are considered critical to the development of the world’s most important infrastructure. Its development methodology leverages established best practices and addresses the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

# # #

LF AI Foundation Welcomes ZILLIZ as Premier Member

LF AI continues fast pace of membership and project portfolio growth

GPU hardware-accelerated Analytics Platform for Massive-Scale Geospatial and Temporal Data

SAN FRANCISCO – December 17, 2019 – The LF AI Foundation, the organization building an ecosystem to sustain open source innovation in artificial intelligence (AI), machine learning (ML) and deep learning (DL), is announcing today that ZILLIZ has joined the Foundation as a Premier member.

ZILLIZ was founded in 2016 and is headquartered in Shanghai. Guided by its vision to “Reinvent Data Science”, ZILLIZ focuses on developing open source data science software that leverages a new generation of heterogeneous computing technologies. Milvus, a high-performance vector search engine for deep learning applications recently open sourced by ZILLIZ, is gathering momentum in the open source AI community.

“We are pushing forward a globalization strategy that fully incorporates global open source communities. We believe open development leads to greater implementation and greater good for all,” said ZILLIZ Founder and CEO Charles Xie. “We think the most critical data challenge today is processing unstructured data, which is growing explosively. Even for structured data, we need new approaches as 5G/IoT applications gain dominance in the next decade. We believe open source and open collaboration will foster more innovation to address these challenges.”

“As a pioneer of data science software embracing heterogeneous hardware, ZILLIZ is enabling enterprises to transform unstructured data from digital content into data assets, which is essential to building high-quality AI systems and services,” said Dr. Ibrahim Haddad, Executive Director of the LF AI Foundation. “We are pleased to welcome ZILLIZ as a Premier member of LF AI and excited to support their contributions to the open source AI community, including their Milvus project.”

LF AI Project Portfolio Growth

2019 has been a growth year for LF AI, with the foundation quickly adding to its portfolio of hosted projects. LF AI currently hosts the following projects: Acumos, Angel, Elastic Deep Learning, Horovod, Pyro, Adlik and ONNX. Two more projects will be joining in December and will be announced at a later date.

To learn more about hosting a project in LF AI and the benefits, please visit https://lfai.foundation/ and explore the “Projects” main menu item.

A full list of the LF AI hosted projects is available here: https://lfai.foundation/projects/

# # #

Thank you! LF AI Day Shanghai Summary

Organizer: LF AI Foundation
Co-organizers: Huawei, Tencent, Baidu, Alibaba, DiDi, WeBank, Tesra
Sponsors: Huawei, Tencent

From Jessica Kim, LF AI Outreach Committee Chairperson: “With China’s first commercial deployment of 5G, the real era of intelligence has arrived. But we still have many technical issues that need to be explored and solved in a practical way, and people from different industries and different technical fields need to work together.

On September 17th, 2019, at the first LF AI Day in China, held at the Huawei Research Institute in Shanghai, senior technical experts from Huawei, Tencent, Baidu, Alibaba Cloud, DiDi, Tesra and WeBank gathered to share AI applications and practices. Online live-stream viewership exceeded 1,500 viewers. For those who missed the full-day live event, the Huawei editorial team prepared this summary so the highlights of the gathering can be revisited!”

LF AI Receives Best Contribution Award from Chinese Association for Artificial Intelligence (CAAI)

LF AI is pleased to receive the Best Contribution Award from the Chinese Association for Artificial Intelligence (CAAI). Communities are based on contributing; this award has special significance in that regard. 

Thank you!

CAAI is devoted to academic activities in science and technology in the People’s Republic of China. CAAI has 40 branches covering the fields of science and smart technology. It is the only state-level science and technology organization in the field of artificial intelligence under the Ministry of Civil Affairs. 

LF AI co-organized three successful national AI conferences with CAAI in 2019. We look forward to more involvement in 2020, both in projects with their members and in collaboration on events.

Pictures from GAITC 2019, a CAAI event

LF AI Delivers Acumos AI Clio Release

Third Acumos AI release adds and extends Model On Boarding, Design Studio, Federation, License Management

San Francisco – November 26, 2019 – The LF AI Foundation, the organization building an open AI community to drive open source innovation in artificial intelligence (AI), machine learning (ML) and deep learning (DL), today announced the third software release of the Acumos AI Project, codenamed Clio. Clio focuses on improving the user experience around top feature requests: easier onboarding of AI models, design and management support for pluggable frameworks, simpler federation with ONAP and O-RAN, license management, and more.

Acumos AI is a platform and open source framework that makes it easy to build, share, and deploy AI apps. Acumos standardizes the infrastructure stack and components required to run an out-of-the-box general AI environment. This frees data scientists and model trainers to focus on their core competencies and accelerates innovation.

“Clio is an important step forward in the democratization of AI, making open source AI technology available and easily utilized, not limited to AI specialists and Data Scientists. The features introduced in this new Acumos release continue to pave the path towards AI accessibility, further extending users’ ability to build and share AI applications more easily,” said Dr. Ibrahim Haddad, Executive Director of the LF AI Foundation. “Clio marks a higher level of maturity; this is deployment grade. If you’re considering Acumos AI, we invite you to experiment with it and experience the live instance of the marketplace available from https://marketplace.acumos.org.”

Major highlights of the Clio release include:

  • ONAP Integration: Acumos AI/ML models can now be used in ONAP for virtualized network elements automation and optimization.
  • O-RAN 5G RIC/xApp Deployment: Acumos AI/ML models can be deployed in the O-RAN RIC as xApp microservices to help optimize the virtualized RAN.
  • IPR/Cross Company Licensing Entitlement: Companies can publish commercial models and negotiate license agreements for use of their models. Implementation of a new License Usage Manager feature enables the support of license entitlements for Acumos models.
  • Data/Training Pipelines: New feature to help data scientists create, integrate and deploy NiFi data pipelines.
  • Pluggable MLWB Framework: Enterprise design tools are now integrated to support a pluggable framework.
  • Support for C/C++ Client: Acumos has added the ability to onboard and deploy ML models in the C and C++ languages (in addition to Java, Python, and R).
  • Onboarding Spark/Java: Acumos can now take advantage of the popular Apache Spark cluster computing framework when onboarding and deploying ML models.
  • Model Ratings: Automated way of assigning initial ratings to models based on available metadata.
  • Enhanced Platform Adaptation on Kubernetes: Through integration of a Jenkins server as a workflow engine, Acumos now supports customization to meet Operator needs in mission-critical business functions such as how Acumos ML solutions are assessed for compliance with license and security policies, and where/how Acumos ML solutions are deployed for use under Kubernetes.

“LF AI members, including Amdocs, AT&T, Ericsson, Huawei, Nokia, Tech Mahindra, and others are contributing to the evolution of the platform to ease the onboarding and the deployment of AI models,” said Dr. Ofer Hermoni, Chair of LF AI Technical Advisory Council. “Acumos was the first hosted project at the Graduate level, with several ongoing efforts to integrate it with other LF AI projects. This highlights an increase in collaboration across hosted projects, serving our mission to reduce redundancies, and increase cross pollination and harmonization of efforts across projects.”

Full release notes available here: https://wiki.acumos.org/display/REL/Acumos_Clio_Release

Supporting Quotes

“Since the days of Claude Shannon, AT&T has played an active role in the evolution of AI. And, Acumos is a shining example of how open collaboration and community engagement can accelerate this evolution faster than ever before,” said Mazin Gilbert, Vice President of Advanced Technology and Systems at AT&T. “We’re thrilled to work with industry leaders to make AI better, smarter and more accessible to everyone.”

Sachin Desai, LF AI Board Member and Ericsson VP of Global AI Accelerator said, “Open Source contributions and collaborations are critical aspects of Ericsson’s AI journey. We are excited to contribute to Acumos Clio release in areas such as licensing and integration with ONAP, related to its commercialization and industrialization to address the needs of our customers. We look forward to building on this in collaboration with our customers and Acumos ecosystem partners.”

“Huawei congratulates the successful delivery of Acumos Clio release. The Acumos-ONAP Integration work will enable model-driven intelligence to enhance the network orchestration and automation. This deliverable can rapidly automate new services and support complete lifecycle management,” said Bill Ren, Chief Open Source Liaison Officer, Huawei.

“Nokia welcomes the third release of Acumos, Clio. We find that the community has been very successful in continually improving Acumos over the past releases, and the Clio release shows the dedication of the Acumos community,” said Jonne Soininen, Head of Open Source Initiatives, Nokia. “Nokia believes Acumos occupies a unique place in the industry, and this successful cooperation shows the power of working together on AI in open source.”

“Orange is proud to contribute to the LF AI Acumos project, as we are convinced that this open source machine learning platform will cover important future AI needs of telecom operators in the field of 5G and virtualized networks,” said Emmanuel Lugagne Delpon, SVP Orange Labs Networks and Group CTO. “With Clio, a new step has been taken toward the fully automated deployment of AI models in open source platforms such as ONAP and O-RAN, and we are eager to use these new features in our Orange AI Marketplace.”

“Tech Mahindra is excited to see Acumos reaching its third major milestone with the release of Clio. As part of our TechMNxt charter, co-innovation and co-creation in the field of AI and ML are among our key focus areas, and we will continue to work towards the enhancement of the Acumos platform,” said Sachin Saraf, SVP and VBU Head, CME Business, Tech Mahindra. “Our enterprise grade solutions are playing a key role in accelerating the adoption of Linux Foundation Artificial Intelligence open source projects.”

Acumos AI Key Statistics

  • 8 Projects
  • 62 Repositories
  • 1.27M+ lines of code
  • 4K+ Commits
  • 84 Contributors
  • 78 Reviewers

# # #

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Pyro 1.0 Has Arrived!

Congratulations to the Pyro team! The 1.0 release is out and available for immediate download

Pyro joined LF AI in January 2019 as an Incubation Project. Since then, Pyro has had an extremely productive year, producing eight releases following fast development cycles. 

Pyro is a universal probabilistic programming language (PPL) written in Python and supported by PyTorch on the backend. Pyro enables flexible and expressive deep probabilistic modeling, unifying the best of modern deep learning and Bayesian modeling. It is developed and maintained by Uber AI and community contributors. The Pyro project page is here.
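
Pyro automates inference for models far beyond anything tractable by hand, but the Bayesian updating it builds on can be shown with a conjugate beta-binomial model in plain Python. This is an illustration of the math only; real Pyro models are written with `pyro.sample` and fit with stochastic variational inference:

```python
def beta_binomial_update(alpha, beta, heads, tails):
    """Exact posterior for a coin's bias under a Beta(alpha, beta) prior."""
    # Conjugacy: observing heads/tails simply shifts the Beta parameters.
    return alpha + heads, beta + tails

# Uniform Beta(1, 1) prior; observe 7 heads and 3 tails.
alpha, beta = beta_binomial_update(1.0, 1.0, heads=7, tails=3)
posterior_mean = alpha / (alpha + beta)
print(round(posterior_mean, 3))  # 0.667
```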

According to Fritz Obermeyer, Pyro Engineering Lead and Senior Research Scientist at Uber, the objective of this release is to stabilize Pyro’s interface, making it safer to build high level components on top of Pyro. 

New Features

  • Many new normalizing flows and reorganized pyro.distributions.transforms module
  • New module pyro.contrib.timeseries for fast inference in temporal Gaussian Processes and state space models
  • Improved support for model serving, including a new Predictive utility for predicting from trained models and a PyroModule class that adds Pyro effects to PyTorch nn.Module; together these utilities allow serving of jit-compiled Pyro models

Full Pyro 1.0 release notes are available here.

Community Stats at a Glance

Pyro has 5,700 Stars, 661 Forks, and is used by 130 other repos. A wide range of academic institutions are using Pyro including Stanford, Harvard, MIT, UPenn, Oxford, Florida State University, University of British Columbia (UBC), New York University (NYU), University of Copenhagen, Columbia, National University of Singapore (NUS), and more. 

Feedback 

We’re always looking to learn who is using Pyro and how you are using it. Please reach out to us at forum.pyro.ai! We’d like to know about your requirements, get your feedback and improve Pyro based on usage models. Even if you are not looking to contribute to the project, we’d like to include you in our community as a user.

Interested in Deep Probabilistic Modeling?

With the major 1.0 release out, now is a great time to join the Pyro community! Start by looking through the Pyro Documentation or contact us directly.

LF AI Welcomes ONNX, Ecosystem for Interoperable AI Models, as Graduate Project

Active contributors to ONNX code base include over 30 blue chip companies in AI including Amazon, Facebook, Intel®, Microsoft, NVIDIA

San Francisco – November 14, 2019 – The LF AI Foundation, the organization building an ecosystem to sustain open source innovation in artificial intelligence (AI), machine learning (ML) and deep learning (DL), is announcing today that the Open Neural Network Exchange (ONNX) is its newest graduate-level project. Moving ONNX under the umbrella of LF AI governance and management is viewed as a key milestone in establishing ONNX as a vendor-neutral open format standard.

ONNX is an open format used to represent machine learning and deep learning models. An ecosystem of products supporting ONNX provides AI capabilities like model creation and export, visualization, optimization, and acceleration capabilities. Among its many advantages, ONNX provides portability, allowing AI developers to more easily move AI models between tools that are part of trusted AI/ML/DL workflows. 
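
The core idea, a framework-neutral description of a model's operator graph, can be illustrated with a simple serialization round trip. This is conceptual only; real ONNX models are protobuf files conforming to a versioned operator set:

```python
import json

# A tiny "model" described as a graph of named operators, decoupled
# from whichever framework produced it.
model = {
    "inputs": ["x"],
    "nodes": [
        {"op": "MatMul", "inputs": ["x", "W"], "output": "h"},
        {"op": "Relu", "inputs": ["h"], "output": "y"},
    ],
    "outputs": ["y"],
}

exported = json.dumps(model)      # framework A exports the graph...
imported = json.loads(exported)   # ...framework B reads the same graph.
assert imported == model
print([n["op"] for n in imported["nodes"]])  # ['MatMul', 'Relu']
```

Any tool that understands the shared format can visualize, optimize, or execute the graph without knowing which framework created it, which is the portability the ONNX ecosystem provides.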

“We are pleased to welcome ONNX to the LF AI Foundation. We see ONNX as a key project in the continued growth of open source AI,” said Mazin Gilbert, Chair of the LF AI Foundation Governing Board. “We are committed to expanding open source AI, and supporting trusted, transparent and accessible community development.”

“ONNX is not just a spec that companies endorse; it’s already being actively implemented in their products. This is because ONNX is an open format and is committed to developing and supporting a wide choice of frameworks and platforms. Joining LF AI shows a determination to continue on this path, and will help accelerate technical development and connections with the wider open source AI community around the world,” said Dr. Ibrahim Haddad, Executive Director of the LF AI Foundation. “We’re happy to provide ONNX an independent home and work with the community to boost its profile as a vendor-neutral standard. ONNX will retain its existing OSI-approved open source license, its governance structure, and its established development practices. We are committed to providing ONNX with a host of supporting services, especially in the areas of marketing and community events, to extend its reach and adoption.”

The ONNX community was established in 2017 by Facebook and Microsoft to create an open ecosystem for interchangeable models. Support for ONNX has grown to over 30 registered companies, plus many more end user companies, around the world. ONNX is a mature project with 138 contributors, over 7K stars and over 1K forks. 

“Huawei welcomes the ONNX project joining LF AI. As one of the initial premier members of the LF AI Foundation, as well as an early member of the ONNX community, Huawei has been fully supportive throughout the entire process,” said XiaoLi Jiang, GM of Open Source Ecosystem at Huawei. “ONNX, as a model-exchange format standard, is a great addition for interoperability and enables broader collaboration within LF AI. Huawei will continue to increase our open source investments and collaborate with the open source community to advance the application and development of Artificial Intelligence.”

“IBM is delighted to see the ONNX community joining the Linux Foundation, as it will encourage accelerated development in the space. ONNX allows the transfer of models between deep learning frameworks and simplifies the deployment of trained models to inference servers in hybrid cloud environments via Watson Machine Learning. In addition, it provides a standardized format to run models on both Watson Machine Learning Accelerator, which benefits from IBM Power, and IBM Z, enabling clients to infuse deep learning insights into transactions as they occur,” said Steven Eliuk, IBM Vice President, Deep Learning & Governance Automation, IBM Global Chief Data Office.

“Intel is committed to supporting ONNX in our software products such as nGraph, the Intel® Distribution of the OpenVINO™ Toolkit, and PlaidML, providing acceleration on existing Intel hardware platforms and upcoming deep learning ASICs,” said Dr. Harry Kim, Head of AI Software Product Management. “Developers from Intel have already contributed to defining quantization specs and large model support for ONNX, and we expect even more contributions to the project as ONNX joins the Linux Foundation.”

“We’re thrilled that the Linux Foundation has selected ONNX to expand open source AI capabilities for developers,” said Eric Boyd, corporate vice president, AI Platform, Microsoft Corp. “Microsoft has been deeply committed to open source and continues to invest in tools and platforms that empower developers to be more productive. We use ONNX across our product portfolio – such as Azure AI – to deliver improved experiences to consumers and developers alike.”

“NVIDIA GPUs and TensorRT offer a programmable, high-performance platform for inference. The ONNX open format complements these and benefits from wide adoption within the AI ecosystem,” said Rob Ober, chief platform architect for Data Center Products at NVIDIA. “As an active contributor to ONNX, we’re glad to see the project join the Linux Foundation and look forward to industry-wide collaboration and innovation.”

For more information on getting involved immediately with ONNX, please see the following resources:

ONNX GitHub: https://github.com/onnx/
Discussions: https://gitter.im/onnx/
DockerHub: https://hub.docker.com/r/onnx/onnx-ecosystem
Main Site: https://onnx.ai
Twitter: https://twitter.com/onnxai
Facebook: https://facebook.com/onnxai

Companies that have already implemented ONNX in products include Alibaba, AMD, Apple, ARM, AWS, Baidu, BITMAIN, CEVA, Facebook, Graphcore, Habana, HP, Huawei, IBM, Idein, Intel, Mathworks, Mediatek, Microsoft, NVIDIA, NXP, Oath, Preferred Networks, Qualcomm, SAS, Skymizer, Synopsys, Tencent, Xiaomi, Unity.

# # #

Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.

Technical Presentations from Angel Meetup – Oct 13 – Shenzhen

The Angel technical meetup was held October 13 in Shenzhen. It was an excellent afternoon of presentations, technical discussions and networking. Thank you to everyone who participated!

Presentations and a write-up from the meetup are available here:

Angel is an LF AI Foundation Incubation Project that offers a high-performance distributed machine learning platform based on the parameter-server philosophy. It is tuned for performance with big data, has a wide range of applicability and stability, and demonstrates increasing advantages in handling higher-dimensional models.

To learn more on Angel, please visit: https://lfai.foundation/projects/angel-ml/

NYU Joins LF AI as Associate Member

The LF AI Foundation welcomes New York University (NYU), which is joining LF AI as an Associate member.

NYU is a well-known private research university, founded in 1831, and based in New York City. NYU also has degree-granting campuses in Abu Dhabi and Shanghai, and academic centers around the world.

The university has over 50,000 students, split roughly evenly between undergraduate and graduate programs.

Prof. Shivendra Panwar, Director of the NY State Center for Advanced Technology in Telecommunications (CATT), will be the representative to LF AI. We are looking forward to collaborating with NYU on open source AI tools and methodologies, including Acumos and more.

Welcome!