
Newly Elected ONNX Steering Committee Announced!


Author(s): The ONNX Steering Committee

The ONNX community continues to grow with new tools supporting the spec and nearly two hundred individuals from one hundred organizations attending the April 2020 community meeting. Along with the strong growth of this open source project, we are excited to announce that the governance structure is working well and elections have resulted in newly appointed steering committee members. This is another important step to ensure an open, adaptive, sustainable future for the ONNX project.

The ONNX Steering Committee members as of June 1st are: 

The community expresses sincere gratitude to the three former members for their exemplary service and for their continuing participation in and support of the ONNX spec and community: 

The past and present steering committee members wish to thank all those who self-nominated as well as those who voted in the election. Solid contributions to SIGs, Working Groups, and Community Meetings continue to be the best way to build standing in the ONNX community, and for those who plan to self-nominate in next year’s election, such participation is essential. Community outreach to other projects in the LF AI Foundation and contributions to defining the ONNX Roadmap are also encouraged.

ONNX is an open format to represent and optimize deep learning and machine learning models that deploy and execute on diverse hardware platforms and clouds. ONNX allows AI developers to more easily move AI models between tools that are part of trusted AI/ML/DL workflows. The ONNX community was established in 2017 to create an open ecosystem for interchangeable models, and quickly grew as tool vendors and enterprises adopted ONNX for their products and internal processes. Support for the ONNX spec as an industry standard continues to grow, with contributors from across geographies and industry sectors. ONNX is a graduated project of the LF AI Foundation under multi-vendor open governance, in accordance with industry best practice. The ONNX community values are: open, welcoming, respectful, transparent, accessible, meritorious, and speedy. In accordance with our ONNX community principle of being welcoming, all ONNX Steering Committee meetings are open to the community to attend. We welcome your contributions to ONNX.

Congrats to everyone involved and thank you for your contributions to the ONNX project!

The ONNX Steering Committee

ONNX Key Links

LF AI Resources

Acumos Demeter Release Now Available!


Acumos, an LF AI Foundation Graduated Project, has announced their fourth software release, codenamed Demeter. We’re thrilled to see another great release from the community!

This new version introduces a fully cloud-enabled platform which is quick and easy to deploy and features elastic performance based upon load, making it suitable for use by small teams or large companies. Demeter also introduces bidirectional communications on the federation link, allowing developers to securely receive information about model performance and provide users with automatic and timely updates.

Acumos is a platform and open source framework that makes it easy to build, share, and deploy AI apps. Acumos standardizes the infrastructure stack and components required to run an out-of-the-box general AI environment. This frees data scientists and model trainers to focus on their core competencies and accelerates innovation.

Major highlights of the Demeter release include:

Cloud Enablement:

  • Containerized platform deployment incorporating cloud native functions, horizontal scaling, and implementation flexibility.

Onboarding:

  • CLI message response with the Acumos Docker model.
  • Support for pre-dockerized and dockerized model URIs with a protobuf file, making models usable in Design Studio.

Licensing:

  • Activity tracking and reporting – License usage manager (LUM) maintains logs of model usage.
  • Integration of License module with Portal UI.

Training:

  • Bidirectional communication over the federation link between subscriber and supplier instances to support ML life cycle management and continuous learning.

ML Workbench:

  • Predictor Manager
    • The Predictor Manager handles model deployment, visualization of deployment metadata, and association of a predictor with a project.
  • Data source
    • The Data Source feature allows users to associate project data with a model and to create, update, and delete data sets used for training, validation, and testing.

To learn more about the Acumos Demeter release, check out the full release notes. Want to get involved with Acumos? Be sure to join the Acumos-Announce and Acumos Technical-Discuss mailing lists to join the community and stay connected on the latest updates. 

Congratulations to the Acumos team and we look forward to continued growth and success as part of the LF AI Foundation! To learn about hosting an open source project with us, visit the LF AI Foundation website.

Mazin Gilbert, VP of Technology and Innovation, AT&T, said: “AT&T is proud to be a founding member of the Acumos platform, which has radically improved over the past 2 years. As AI begins to play a central role in 5G, openness and industry collaboration are proving more valuable than ever. Acumos has brought together some of the brightest minds in the industry, and we’re consistently encouraged by the progress.”

Sachin Desai, LF AI Board Member and Ericsson VP of Global AI Accelerator, said: “In the Acumos Demeter release, Ericsson has contributed in areas such as bidirectional communications and enhanced licensing functionalities which enable continuous training, transfer learning and its commercialization. This new release will enable a broader range of applications and collaborations among Acumos ecosystem partners and support more complete life cycle management of AI in telecommunication. We are excited to continue providing Ericsson’s technology leadership related to AI/ML in telecommunication domain through our active engagement in the open source community.”

Jonne Soininen, Head of Open Source Initiatives at Nokia, said: “Acumos with the Demeter Release offers a complete experience for ML modelers and model consumers beyond the marketplace. As a founding member of the LF AI, Nokia is excited about the possibilities of Acumos in powering ML model marketplaces around network automation. We are always impressed with the great results of the Acumos community and the Demeter release is no different.” 

Emmanuel Lugagne Delpon, SVP Orange Labs Networks and Group CTO, said: “The Demeter Release of Acumos is a new step towards a seamless way to produce and deploy AI models in various environments such as 5G and ONAP network automation: this will be eased thanks to the Cloud Enablement and Predictor Manager features. The addition of Data Source – a keystone of each AI project – is also a crucial feature for companies who want to manage all of their AI models in a centralized Marketplace catalogue (from the model training to the deployment). The development of the Demeter release during the unique situation the world is facing today is really impressive: it demonstrates the strength of open source communities combined with individual commitments of each contributor. To favor cross cooperation between communities and to meet its operational objectives, Orange is also active in the integration of this new release with ONAP in order to test 5G use cases.”

Sachin Saraf, SVP and VBU Head, CME Business, Tech Mahindra, said: “Tech Mahindra is excited to see Acumos AI reaching its fourth major milestone in a short span of two years with the release of Demeter. Seamless hosting of the Acumos AI platform has been energized by the Cloud Enablement feature, as have predictor management and the data source features of ML Workbench, which allow users to manage models and associate them with projects and data. As part of our TechMNxt charter, the latest Acumos AI release fosters co-innovation and co-creation in the field of AI and ML and is a mainstay of our key focus areas. We will continue to work towards the enrichment of Acumos AI platform. Our enterprise grade solutions are anchoring a key role in accelerating the adoption of Linux Foundation Artificial Intelligence open source projects.”

Bingtao Han, Chief System Architecture Expert, ZTE, said: “Acumos Demeter release marks a significant milestone for Acumos development with enhanced support for bringing ML models into production, like much more flexible cloud enablement and deployment management of ML models. ZTE believes that Acumos has great potential to be a key enabler of network intelligence in 5G evolution. We’ll continue to support Acumos and embrace the opportunities in open source practices.”

Acumos Key Links

LF AI Resources

Join the LF AI Foundation at OSS+ELC NA 2020


The LF AI Foundation is excited to announce our participation at the upcoming Open Source Summit + Embedded Linux Conference North America 2020! The event will be held virtually, and registration is only $50 for four days of learning and collaboration. 

Below are all the different ways to interact with the LF AI Foundation at the conference. We hope to see you there!

Attend Sessions in the AI/ML/DL Track 

The LF AI Foundation will be hosting an AI/ML/DL Track at OSS NA. Join these sessions to learn the latest updates from our projects and hear from leaders in the AI industry. Register for OSS NA to attend.

Visit us at the LF AI Booth!

Come chat with us at our virtual booth at OSS NA, located in the Bronze Hall. Various LF AI community members will be around all week to answer any questions you have. You’ll also be able to get more information on how to get involved with the LF AI Foundation.

Attend the LF AI Mini Summit!

We invite you to join us for our LF AI Foundation Mini Summit where we will cover the latest updates from the Foundation, Technical Advisory Council, Trusted AI Committee, and more. We look forward to uncovering new collaboration opportunities among our growing community. 

Join us by registering to attend the Open Source Summit NA – Register Now

The LF AI Mini Summit is co-located with the Open Source Summit NA and will be held virtually on Thursday, July 2 at 10:00 – 11:30am Central Daylight Time (UTC -5). You will need to be registered for OSS NA to attend. OSS NA registration costs $50 USD, which includes access to the LF AI Mini Summit.

The LF AI Foundation’s mission is to build and support an open AI community and to drive open source innovation in the AI, ML, and DL domains by enabling collaboration and the creation of new opportunities for all members of the community. 

Want to get involved with the LF AI Foundation? Be sure to subscribe to our mailing lists to join the community and stay connected on the latest updates. 

LF AI Resources

Virtual LF AI Day EU – June 22, 2020


Orange and the LF AI Foundation are pleased to announce the upcoming Virtual LF AI Day* EU – Europe 2020, to be held via Zoom on Monday, June 22. 

This event will feature keynote speakers from leading operators in the AI industry with a focus on open source strategies for machine learning and deep learning.

During this event, various AI topics will be covered, including technical presentations from startups, demonstrations of the AI Marketplace, and discussions of LF AI projects.

Registration is now open and the event is free to attend. Capacity is limited to 200 attendees. For up-to-date information on this virtual meetup, please visit the event website.

Note: Due to the Novel Coronavirus situation (COVID-19), the event hosts have decided to make this a virtual-only event via Zoom in order to ensure the safety of our event participants and organizers. We look forward to connecting with you at a future event in person.

Event host, Orange, is a leading telecommunications company with headquarters in France. They are the largest telecoms operator in France, with the bulk of their operations in Europe, Africa and the Middle East.

As an LF AI General Member, Orange is involved in the LF AI Governing Board, Outreach Committee, and Trusted AI Committee, and is an active contributor to the LF AI Acumos project.

*LF AI Day is a regional, one-day event hosted and organized by local members with support from LF AI and its Projects. Learn more about the LF AI Foundation here.

LF AI Resources

All-In Open Source: Why I Quit a Tech Giant and Founded My OSS Startup


Author: Han Xiao, Founder & CEO of Jina AI. Former board member of the LF AI Foundation.

In February 2020, I left Tencent AI and founded my startup, Jina AI. Jina AI is a neural search company that provides cloud-native neural search solutions powered by AI and deep learning. On April 28, 2020, we released our core product, Jina, in open source. You can use Jina for searching anything: image-to-image, video-to-video, tweet-to-tweet, audio-to-audio, code-to-code, etc. To understand our ambition at Jina AI, I often explain Jina with two analogies.

  • A “TensorFlow” for search. TensorFlow, PyTorch, MXNet, and MindSpore are universal frameworks for deep learning. You can use them for recognizing cats from dogs, or for playing Go and DOTA. They are powerful and versatile but not optimized for a specific domain. In Jina, we focus on one domain only: search. We build on top of a universal deep learning framework and provide an infrastructure for any AI-powered search application.
  • A design pattern. There are design patterns for every era, from functional programming to object-oriented programming. The same goes for search systems. Thirty years ago, it all started with a simple textbox. Many design patterns have been proposed for implementing the search system behind that textbox, some of them incredibly successful commercially. In the era of neural search, a query can go beyond a few keywords: an image, a video, a code snippet, or an audio file. Because traditional symbolic search systems cannot effectively handle these data formats, people need a new design pattern for building neural search systems. That’s what Jina is: a new design pattern for this new era (a minimal sketch of this pattern follows below).
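
To make the neural-search pattern above concrete, here is a minimal, generic sketch of the encode-index-query flow. It is illustrative only: it does not use Jina’s actual API, and the `embed` function is a hypothetical stand-in for any pretrained encoder.

```python
import numpy as np

def embed(item: str) -> np.ndarray:
    """Hypothetical encoder: stands in for any pretrained deep model that
    maps an item (text, image, audio, ...) to a dense vector."""
    rng = np.random.default_rng(abs(hash(item)) % (2**32))
    return rng.standard_normal(128)

# 1. Index: encode every document into a vector once and store the vectors.
docs = ["red dress with floral print", "black leather jacket", "blue denim jeans"]
index = np.stack([embed(d) for d in docs])

# 2. Query: encode the query with the same encoder and rank documents by
#    cosine similarity (a brute-force stand-in for an approximate index).
query_vec = embed("flowery summer dress")
scores = index @ query_vec / (np.linalg.norm(index, axis=1) * np.linalg.norm(query_vec))
ranking = np.argsort(-scores)
print([docs[i] for i in ranking])
```

In a real system the encoder would be a deep model producing meaningful embeddings, and the brute-force similarity scan would be replaced by an approximate nearest-neighbor index.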

Who set me on this path?

I’ve been working in the field of AI, especially in open-source AI, for some time. You may have heard of or used my previous work on Fashion-MNIST and bert-as-service. From 2018 to early 2020, I was an Engineering Lead at Tencent AI Lab, where I led a team to build the search infrastructure of China’s everyday app: WeChat.

In 2019, I represented Tencent as a board member of the LF AI Foundation. That is the year I learned how a professional open-source initiative works. Besides reviewing proposals for high-quality open source projects, I actively engaged in meetings of the Governing Board, Technical Advisory Council, Outreach Committee, and Trusted AI Committee, providing input to this global community. I co-organized multiple offline events, including LF AI Day Shanghai and a Christmas gathering. I helped foster an open tech culture and expand LF AI’s influence within the company. By the end of 2019, Tencent had a seat in each subcommittee and was among the most engaged corporate members of the Foundation.

Two things I learned during my work at the LF AI Foundation:

  • Open source = Open source code + Open governance. Community is the key.
  • Open source AI infrastructure is the future, and I need to act now.

I’m sure many share the same vision as I do. But my belief is so strong that it drove me to jump out of the tech giant and build Jina AI as a startup from scratch. Challenging as it is, this is an opportunity I cannot miss, and this is the future I believe in. All of my team share this belief as strongly as I do. At Jina AI, we only do what we believe in. I always tell my team: the people who actually make change are the ones who believe that change is possible.

Challenges of an OSS company

Running an open-source software (OSS) company takes courage, an open mindset, and a strong belief. 

As an OSS company, when you first show your codebase to the world, you need courage. The code quality is now a symbol of the company. Are you following best practices? Are you accumulating tech debt here and there? Open source is an excellent touchstone to help you understand and improve the quality of your software engineering and development procedures. 

Embracing the community is vital for an OSS company, and it requires an open mindset. Doing open source is not the same as doing a press release or a spotlight speech: it is not one-way communication. You need to walk into the community, talk to them, solve their issues, answer their questions, and accept their criticisms. You need to manage your ego and do trivial things such as maintenance and housekeeping.

Some people may think that big tech companies are in a better position to commit to open source because they can leverage better resources. That is not true. No matter how big the company is, each has its comfort zone built over the years. For many tech companies, open source is a new game: the value it brings is often not quantifiable through short-term KPIs/OKRs, and the rules of play are not familiar to everyone. Not every decision-maker in the company believes in it. It’s like a person who has been playing Go for years, with a high rank, and enjoys it. One day you just show up and tell this person: hey, let’s play mahjong, mahjong is fun! And you expect them to say “sure”? Regardless of the company’s size, it is always important to make everyone inside the company believe in the value of open source. After all, it is always individuals who get things done.

Best time for AI engineering

For engineers who want to do open source in AI, this is the best time. Thanks to deep learning frameworks and off-the-shelf pre-trained models, there are many opportunities in the end-to-end application market for individuals to make significant contributions. Ask your colleagues or friends “which AI package do you use for daily tasks such as machine translation/image enhancement/data compression/code completion?” and you will get different answers from person to person. That is often an indicator that the market is still uncontested, and that there is ample opportunity for growth and for building a community around it.

One thing I like to remind AI open-source developers about is the sustainability of their project. With new AI algorithms popping up every day, how do you keep up the pace? What is the scope of your project? How do you maintain the project in the face of community requests? When I was developing bert-as-service, I received many requests to extend it to ALBERT, DistilBERT, BioBERT, etc. I prioritized those that fit into my roadmap. Sometimes that means hurt feelings for some people. But let’s be frank: you can’t solve every issue, not by yourself. That is not how open source works, and it is certainly not how you should work. The biggest risk for open-source software is that the core developers behind it burn out. The best open source project may not be the shiniest, but the one that lives the longest. So keep your enthusiasm and stay for the long run!

Doing open-source is doing a startup

In the end, doing an open source project is like doing a startup: technical advantage is only part of the story.

Uploading the code to GitHub is just a starting point; there are also tasks such as operations, branding, and community management to consider. As in entrepreneurship, you need to paint a “pie” that encapsulates the passions and dreams of the community. You need determination and a precise target so that you don’t get sidetracked by community issues.

As someone with a Machine Learning Ph.D., I’ve never believed that some black-magic algorithm would be the competitive advantage of an open-source project. Instead, I’m convinced that sound engineering, attention to detail, a slick user experience, and a community-driven governance model ultimately determine user retention.

The most important thing is often your understanding and belief in open source. If you are an idealist, then you will inspire those idealists to march with you. If you’re a detail-oriented person, every little feature in your project will be worshipped by those who care about the details. If you are a warm-hearted person, then the community you build up will appreciate your selfless giving.

Whichever kind of person you are, it is what you believe about open source that makes open source what it is.

Jina AI Key Links

Milvus v0.9.0 Release Now Available!


Milvus, an LF AI Foundation Incubation-Stage Project, has released version 0.9.0. We’re thrilled to see lots of momentum from this community!

In version 0.9.0, Milvus adds a lot of new features, improvements, and bug fixes: 

New features

  • Checks the CPU instruction set, GPU driver version, and CUDA version when Milvus starts up. #2054 #2111
  • Prevents multiple Milvus instances from accessing the same Milvus database at the same time. #2059
  • Supports log file rotating. #2206
  • Suspends index building when a search request comes in. #2283

Improvements

  • Refactors log output. #221
  • Upgrades OpenBLAS to improve Milvus’ performance. #1796
  • Unifies the vector distance calculation algorithms among FAISS, NSG, HNSW, and ANNOY (a brief sketch of these distance metrics follows this list). #1965
  • Supports SSE4.2 instruction set. #2039
  • Refactors the configuration files. #2149 #2167
  • Uses Elkan K-means algorithm to improve the IVF index performance. #2178
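
As context for the distance-calculation item above, here is a minimal NumPy illustration of the two basic metrics that vector indexes such as FAISS, NSG, HNSW, and ANNOY accelerate. This is a generic sketch for orientation only, not Milvus’ internal code or client API.

```python
import numpy as np

rng = np.random.default_rng(42)
vectors = rng.random((1000, 64), dtype=np.float32)  # a toy collection of 64-d vectors
query = rng.random(64, dtype=np.float32)

# Euclidean (L2) distance: smaller means more similar.
l2 = np.linalg.norm(vectors - query, axis=1)

# Inner product (IP): larger means more similar; it equals cosine similarity
# when the vectors are normalized to unit length.
ip = vectors @ query

print("nearest by L2:", int(np.argmin(l2)))
print("nearest by IP:", int(np.argmax(ip)))
```

An index structure avoids the brute-force scan shown here by organizing the vectors so that only a small fraction need to be compared against each query.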

Bug fixes and API changes 

The Milvus Project invites you to adopt or upgrade to version 0.9.0 in your application, and welcomes feedback. To learn more about the Milvus 0.9.0 release, check out the full release notes. Want to get involved with Milvus? Be sure to join the Milvus-Announce and Milvus Technical-Discuss mailing lists to join the community and stay connected on the latest updates. 

Congratulations to the Milvus team! We look forward to continued growth and success as part of the LF AI Foundation. To learn about hosting an open source project with us, visit the LF AI Foundation website.

Milvus Key Links

LF AI Resources

Welcome LF AI Newly Elected Leaders


We are excited to welcome three newly elected leaders to the LF AI Foundation. We look forward to their leadership in the upcoming year and for their help in supporting open source innovation and projects within the artificial intelligence (AI), machine learning (ML), and deep learning (DL) space.

A huge thank you to previously elected leaders for all of their contributions, and congratulations to the newly elected Governing Board Chairperson, Treasurer, and Technical Advisory Council Chairperson. For more details on the leadership roles, please take a look at the LF AI Foundation Charter available here. Learn more about each leader below:

Charles “Starlord” Xie, Chairperson, Governing Board

Starlord, Founder and CEO of Zilliz, was elected as the new Chairperson of the LF AI Governing Board. Starlord will lead community partners in further enhancing the leading position of the open source AI community in the industry.

Starlord is an expert in databases and AI with more than 18 years of experience. He is the founder and CEO of Zilliz, an open-source software company with a mission to reinvent data science. Before Zilliz, Starlord worked for many years at Oracle’s US headquarters, developing Oracle’s relational database systems. Starlord is a founding member and a key contributor of the Oracle 12c cloud database project; Oracle 12c has been a huge success, realizing more than $10B in accumulated revenue. Starlord received a master’s degree in computer science from the University of Wisconsin-Madison and a bachelor’s degree from Huazhong University of Science and Technology.

Starlord said: “It is a great honor to be elected as the chairman of the LF AI Governing Board, and I thank the global open source AI community for their support and trust. In the past two years, the LF AI Foundation has developed rapidly and has incubated a group of excellent AI projects from Microsoft, IBM, Facebook, Tencent, Baidu, AT&T, ZTE, Zilliz, and other organizations. Let us work together to build a wider and more dynamic open source AI community!”

Jonne Soininen, Treasurer, Governing Board

Jonne Soininen has been re-elected as Treasurer of the LF AI Governing Board for the third year.

Jonne is an open source enthusiast and Head of Open Source Initiatives at Nokia. In addition to the LF AI Governing Board, Jonne serves as Treasurer of the Linux Foundation Networking (LFN) Governing Board and as Chair of the Strategic Planning Committee (SPC). Prior to his current position at Nokia, he worked in different roles within Nokia, Nokia Siemens Networks, Renesas Mobile, and Broadcom, and has an extensive history in telecommunications spanning more than 20 years.

Jonne said: “I am very grateful for the renewed opportunity to serve the LF AI community as the Treasurer of the LF AI Governing Board. The LF AI has evolved tremendously over the first two years of existence. I am both excited and humbled for being trusted to continue contributing to the development of this community.”

Jim Spohrer, Chairperson, Technical Advisory Council (TAC)

Jim Spohrer, Director of the Cognitive Opentech Group (COG) at IBM Research, has been elected as Chairperson for the LF AI’s Technical Advisory Council (TAC).

Dr. Spohrer directs IBM’s open source Artificial Intelligence developer ecosystem effort, including IBM’s Center for Open-Source Data and AI Technologies (CODAIT). At IBM, he was CTO of the IBM Venture Capital Relations Group, co-founded IBM Almaden Service Research, and led IBM Global University Programs. After earning his BS in Physics at MIT, he developed speech recognition systems at Verbex (Exxon) before receiving his PhD in Computer Science/AI from Yale. In the 1990s, he attained Apple Computer’s Distinguished Engineer, Scientist, and Technologist role for next-generation learning platforms. With over ninety publications and nine patents, he has received the Gummesson Service Research award, the Vargo and Lusch Service-Dominant Logic award, and the Daniel Berg Service Systems award, and was named a PICMET Fellow for advancing Service Science.

Dr. Spohrer said: “Grateful to be elected to serve the community as LF AI TAC Chairperson, while we work together to advance open source data and AI technologies at this exciting time in the history of Artificial Intelligence. AI is hard, and will take decades to solve, but the foundations are being put in place at LF AI today with open source projects such as ONNX, Horovod, Angel, Acumos, Ludwig, ForestFlow, Adlik, EDL, NNStreamer, Marquez, Milvus, Pyro, and sparklyr.”

Join the LF AI Community!

The LF AI Foundation is committed to building an open source AI community in the fields of artificial intelligence (AI), machine learning (ML) and deep learning (DL). LF AI drives open source innovation in the field of AI by creating new opportunities for all community members to collaborate with each other. Interested in joining the LF AI community as a member? Learn more here.

LF AI Resources

ONNX 1.7 Now Available!


ONNX, an LF AI Foundation Graduated Project, has released version 1.7 and we’re thrilled to see this latest set of improvements. ONNX is an open format to represent deep learning models. With ONNX, AI developers can more easily move models between state-of-the-art tools and choose the combination that is best for them. 

In version 1.7, you can find the following:

  • Model training introduced as a technical preview, which expands ONNX beyond its original inference capabilities 
  • New and updated operators to support more models and data types
  • Functions are enhanced to enable dynamic function body registration and multiple operator sets
  • Operator documentation is also updated with more details to clarify the expected behavior
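
As a small, general example of working with ONNX models (not specific to the 1.7 release), the sketch below loads a model with the `onnx` Python package, validates it against the spec, and prints the operator sets it imports; `model.onnx` is a placeholder path for any exported model file.

```python
import onnx

# "model.onnx" is a placeholder for any exported ONNX model file.
model = onnx.load("model.onnx")

# Validate the model against the ONNX spec (graph structure, types, opset usage).
onnx.checker.check_model(model)

# Each opset_import entry records an operator-set domain and the version
# of that operator set the model relies on.
for opset in model.opset_import:
    print(opset.domain or "ai.onnx", opset.version)
```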

To learn more about the ONNX 1.7 release, check out the full release notes. Want to get involved with ONNX? Be sure to join the ONNX Announce and ONNX Technical-Discuss mailing lists to join the community and stay connected on the latest updates. 

Congratulations to the ONNX team and we look forward to continued growth and success as part of the LF AI Foundation! To learn about hosting an open source project with us, visit the LF AI Foundation website.

ONNX Key Links

LF AI Resources

Angel 3.1.0 Release Now Available!


Angel, an LF AI Foundation Graduated Project, has released version 3.1.0 and we’re thrilled to see lots of momentum within this community. The Angel Project is a high-performance distributed machine learning platform based on the Parameter Server paradigm, running on YARN and Apache Spark. It is tuned for performance with big data and provides advantages in handling higher-dimensional models. It supports big and complex models with billions of parameters, partitions the parameters of complex models across multiple parameter-server nodes, and implements a variety of machine learning algorithms using efficient model-updating interfaces and functions, as well as flexible consistency models for synchronization.
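
For readers unfamiliar with the parameter-server pattern Angel is built around, here is a toy, single-process sketch of the idea: workers pull the current parameters, compute gradients on their own data shard, and push updates back to a central server. This is illustrative only and is not Angel’s API, which targets the JVM, YARN, and Apache Spark.

```python
import numpy as np

class ParameterServer:
    """Toy stand-in for a parameter-server node: holds the shared weights."""
    def __init__(self, dim: int, lr: float = 0.1):
        self.weights = np.zeros(dim)
        self.lr = lr

    def pull(self) -> np.ndarray:
        return self.weights.copy()          # worker fetches current parameters

    def push(self, gradient: np.ndarray) -> None:
        self.weights -= self.lr * gradient  # server applies the update

def worker_gradient(weights, X, y):
    # Gradient of mean squared error for a linear model on one data shard.
    return 2 * X.T @ (X @ weights - y) / len(y)

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 3.0])
server = ParameterServer(dim=3)

for step in range(200):
    for _ in range(4):                      # four "workers", each with its own shard
        X = rng.standard_normal((32, 3))
        y = X @ true_w
        server.push(worker_gradient(server.pull(), X, y))

print(np.round(server.weights, 2))          # approaches [ 1. -2.  3.]
```

In a real deployment the server state is itself partitioned across many parameter-server nodes and the workers run on separate machines, which is what lets platforms in this style handle models with billions of parameters.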

In version 3.1.0, Angel adds a variety of improvements, including: 

  • New graph learning features, following the trend of graph data structures being adopted for many applications such as social network analysis and recommendation systems
  • A collection of well-implemented graph algorithms spanning traditional learning, graph embedding, and graph deep learning – these algorithms can be used directly in production models through simple configuration
  • An operator API for graph manipulation, including building graphs and operating on vertices and edges
  • Support for GPU devices in the PyTorch-on-Angel running mode – with this feature it is possible to leverage the hardware to speed up computation-intensive algorithms

The Angel Project invites you to adopt or upgrade to Angel 3.1.0 in your application, and welcomes feedback. To learn more about the Angel 3.1.0 release, check out the full release notes. Want to get involved with Angel? Be sure to join the Angel-Announce and Angel Technical-Discuss mailing lists to join the community and stay connected on the latest updates. 

Congratulations to the Angel team and we look forward to continued growth and success as part of the LF AI Foundation! To learn about hosting an open source project with us, visit the LF AI Foundation website.

Angel Key Links

LF AI Resources

Thank You IBM & ONNX for a Great LF AI Day


A big thank you to IBM and ONNX for hosting a great virtual meetup! The LF AI Day ONNX Community Virtual Meetup was held on April 9, 2020 and was a great success with close to 200 attendees joining live. 

The meetup included ONNX Community updates, partner/end-user stories, and SIG/WG updates. The virtual meetup was an opportunity to connect with and hear from people working with ONNX across a variety of groups. A special thank you to Thomas Truong and Jim Spohrer from IBM for working closely with the ONNX Technical Steering Committee, SIGs, and Working Groups to curate the content. 

Missed the meetup? Check out the recordings at bit.ly/lfaiday-onnxmeetup-040920.

This meetup took on a virtual format but we look forward to connecting again at another event in person soon. LF AI Day is a regional, one-day event hosted and organized by local members with support from LF AI, its members, and projects. If you are interested in hosting an LF AI Day please email info@lfai.foundation to discuss.

ONNX, an LF AI Foundation Graduated Project, is an open format to represent deep learning models. With ONNX, AI developers can more easily move models between state-of-the-art tools and choose the combination that is best for them. Be sure to join the ONNX Announce mailing list and ONNX Gitter to join the community and stay connected on the latest updates. 

ONNX Key Links

LF AI Resources