All-In Open Source: Why I Quit a Tech Giant and Founded My OSS Startup

Author: Han Xiao, Founder & CEO of Jina AI. Former board member of the LF AI Foundation.

In February 2020, I left Tencent AI Lab and founded my startup, Jina AI. Jina AI is a neural search company that provides cloud-native neural search solutions powered by AI and deep learning. On April 28, 2020, we released our core product, “Jina”, in open source. You can use Jina to search anything: image-to-image, video-to-video, tweet-to-tweet, audio-to-audio, code-to-code, etc. To explain our ambition at Jina AI, I often describe Jina with two analogies.

  • A “TensorFlow” for search. TensorFlow, PyTorch, MXNet, and MindSpore are universal frameworks for deep learning. You can use them to tell cats from dogs, or to play Go and DOTA. They are powerful and versatile but not optimized for any specific domain. At Jina, we focus on one domain only: search. We build on top of a universal deep learning framework and provide the infrastructure for any AI-powered search application.
  • A design pattern. There are design patterns for every era: from functional programming to object-oriented programming. The same goes for search systems. Thirty years ago, it all started with a simple text box. Many design patterns have been proposed for implementing the search system behind that text box, some of which have been incredibly successful commercially. In the era of neural search, a query can go beyond a few keywords: it can be an image, a video, a code snippet, or an audio file. Since traditional symbolic search systems cannot handle these data formats effectively, people need a new design pattern for building neural search systems. That is what Jina is: a new design pattern for this new era (sketched below).
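To make the neural-search pattern concrete, here is a minimal, framework-agnostic sketch in Python. It deliberately does not use Jina’s actual API; the encoder is a stand-in for a real deep model (say, a BERT- or CNN-based embedder), and all names are hypothetical. The pattern itself is simple: embed documents and queries with the same model, then rank documents by vector similarity.

```python
import numpy as np

DIM = 128

def encode(items):
    """Hypothetical encoder standing in for a deep model (e.g. a BERT-
    or CNN-based embedder). It maps each item deterministically to a
    vector within a run, so identical inputs get identical embeddings."""
    vecs = []
    for item in items:
        rng = np.random.default_rng(abs(hash(item)) % (2**32))
        vecs.append(rng.normal(size=DIM))
    return np.asarray(vecs, dtype=np.float32)

def build_index(docs):
    """Index = L2-normalized document embeddings, so a dot product
    against a normalized query equals cosine similarity."""
    emb = encode(docs)
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)

def search(index, docs, query, top_k=3):
    """Embed the query with the same encoder and rank docs by similarity."""
    q = encode([query])[0]
    q /= np.linalg.norm(q)
    scores = index @ q
    best = np.argsort(-scores)[:top_k]
    return [(docs[i], float(scores[i])) for i in best]

docs = ["red dress", "blue jeans", "leather boots", "red scarf"]
index = build_index(docs)
print(search(index, docs, "red dress"))  # the exact-match doc ranks first
```

Because everything is a vector, the same loop works for images, audio, or code; only the encoder changes.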

Who set me on this path?

I’ve been working in the field of AI, especially open-source AI, for some time. You may have heard of or used my previous work, Fashion-MNIST and bert-as-service. From 2018 to early 2020, I was an Engineering Lead at Tencent AI Lab, where I led a team building the search infrastructure of China’s everyday app: WeChat.

In 2019, I represented Tencent as a board member of the LF AI Foundation. It was that year that I learned how a professional open-source initiative works. Besides reviewing proposals for high-quality open source projects, I actively engaged in meetings of the Governing Board, Technical Advisory Council, Outreach Committee, and Trusted AI Committee, providing input to this global community. I co-organized multiple offline events, including LF AI Day Shanghai and a Christmas gathering. I helped foster an open tech culture and expand LF AI’s influence within the company. By the end of 2019, Tencent had a seat in each subcommittee and was among the most engaged corporate members of the Foundation.

Two things I learned during my work at the LF AI Foundation:

  • Open source = Open source code + Open governance. Community is the key.
  • Open source AI infrastructure is the future, and I need to act now.

I’m sure many share the same vision I do. But my belief is so strong that it drove me to jump out of the tech giant and build Jina AI as a startup from scratch. Challenging as it is, this is an opportunity I cannot miss, and this is the future I believe in. Everyone on my team shares this belief as strongly as I do. At Jina AI, we only do what we believe in. I always tell my team: the people who actually make change are the ones who believe that change is possible.

Challenges of an OSS company

Running an open-source software (OSS) company takes courage, an open mindset, and a strong belief.

As an OSS company, you need courage when you first show your codebase to the world. The code quality is now a symbol of the company. Are you following best practices? Are you accumulating tech debt here and there? Open source is an excellent touchstone for understanding and improving the quality of your software engineering and development procedures.

Embracing the community is vital for an OSS company, and it requires an open mindset. Doing open source is not the same as doing a press release or a spotlight speech: it is not one-way communication. You need to walk into the community, talk to people, solve their issues, answer their questions, and accept their criticisms. You need to manage your ego and do trivial things such as maintenance and housekeeping.

Some people may think that big tech companies are better positioned to commit to open source because they can leverage more resources. That is not true. No matter how big a company is, each has a comfort zone built over the years. For many tech companies, open source is a new game: the value it brings is often not quantifiable through short-term KPIs/OKRs, and the rules of play are not familiar to everyone. Not every decision-maker in the company believes in it. It’s like a person who has been playing Go for years, holds a high rank, and enjoys it. One day you just show up and tell this person: hey, let’s play mahjong, mahjong is fun! And you expect them to say “sure”? Regardless of the company’s size, it is always important to make everyone inside the company believe in the value of open source. After all, it is always individuals who get things done.

Best time for AI engineering

For engineers who want to do open source in AI, this is the best time. Thanks to deep learning frameworks and off-the-shelf pre-trained models, there are many opportunities in the end-to-end application market for individuals to make significant contributions. Ask your colleagues or friends “which AI package do you use for daily tasks such as machine translation, image enhancement, data compression, or code completion?” and you will get different answers from person to person. That is often an indicator that the market is still uncontested, and there is ample opportunity for growth and for building a community around it.

One thing I like to remind AI open-source developers about is the sustainability of the project. With new AI algorithms popping up every day, how do you keep up the pace? What is the scope of your project? How do you maintain the project in the face of community requests? When I was developing bert-as-service, I received many requests to extend it to ALBERT, DistilBERT, BioBERT, etc. I prioritized those that fit my roadmap. Sometimes this meant hard feelings for some people. But let’s be frank: you can’t solve every issue, not by yourself. That is not how open source works, and certainly not how you should work. The biggest risk to open-source software is that the core developers behind it burn out. The best open source project may not be the shiniest, but the one that lives the longest. So keep your enthusiasm and stay in it for the long run!

Doing open-source is doing a startup

In the end, doing an open source project is like doing a startup: technical advantage is only part of the story.

Uploading the code to GitHub is just a starting point; there are also tasks such as operations, branding, and community management to consider. As in entrepreneurship, you need to draw a “pie” that encapsulates the passions and dreams of the community. You need determination and a precise target so you don’t get sidetracked by community issues.

As someone with a Machine Learning Ph.D., I’ve never believed that some black-magic algorithm would be the competitive advantage of an open-source project. Instead, I’m convinced that sound engineering, attention to detail, a slick user experience, and a community-driven governance model ultimately determine user retention.

The most important thing is often your understanding of and belief in open source. If you are an idealist, you will inspire other idealists to march with you. If you are a detail-oriented person, every little feature in your project will be cherished by those who care about the details. If you are a warm-hearted person, the community you build will appreciate your selfless giving.

Whichever kind of person you are, it is what you believe about open source that makes open source what it is.

Milvus v0.9.0 Release Now Available!

Milvus, an LF AI Foundation Incubation-Stage Project, has released version 0.9.0. We’re thrilled to see lots of momentum from this community!

In version 0.9.0, Milvus adds a number of new features, improvements, and bug fixes:

New features

  • Checks the CPU instruction set, GPU driver version, and CUDA version when Milvus starts up. #2054 #2111
  • Prevents multiple Milvus instances from accessing the same Milvus database at the same time. #2059
  • Supports log file rotating. #2206
  • Suspends index building when a search request comes in. #2283

Improvements

  • Refactors log output. #221
  • Upgrades OpenBLAS to improve Milvus’ performance. #1796
  • Unifies the vector distance calculation algorithms among FAISS, NSG, HNSW, and ANNOY. #1965
  • Supports SSE4.2 instruction set. #2039
  • Refactors the configuration files. #2149 #2167
  • Uses Elkan K-means algorithm to improve the IVF index performance. #2178

Bug fixes and API changes 

The Milvus Project invites you to adopt or upgrade to version 0.9.0 in your application, and welcomes feedback. To learn more about the Milvus 0.9.0 release, check out the full release notes. Want to get involved with Milvus? Be sure to join the Milvus-Announce and Milvus Technical-Discuss mailing lists to join the community and stay connected on the latest updates. 
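If you are new to Milvus, the sketch below shows what a minimal insert-and-search round trip looked like with the Python SDK (pymilvus) of the 0.x era. Treat this as a hedged illustration rather than official sample code: the call names and parameter dictionaries (create_collection, insert, create_index, search, nlist, nprobe) are recalled from that era’s documentation and varied across 0.x releases, so verify them against the SDK docs for your version.

```python
import random
from milvus import Milvus, IndexType, MetricType

client = Milvus(host='localhost', port='19530')  # default Milvus port

# Create a collection of 128-dimensional float vectors compared by L2 distance.
client.create_collection({
    'collection_name': 'demo',
    'dimension': 128,
    'index_file_size': 1024,   # max size (MB) of a raw-data segment file
    'metric_type': MetricType.L2,
})

# Insert 1,000 random vectors.
vectors = [[random.random() for _ in range(128)] for _ in range(1000)]
status, ids = client.insert(collection_name='demo', records=vectors)

# Build an IVF_FLAT index, then fetch the 5 nearest neighbors of one query.
client.create_index('demo', IndexType.IVF_FLAT, {'nlist': 1024})
status, results = client.search(collection_name='demo',
                                query_records=vectors[:1],
                                top_k=5,
                                params={'nprobe': 16})
```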

Congratulations to the Milvus team! We look forward to continued growth and success as part of the LF AI Foundation. To learn about hosting an open source project with us, visit the LF AI Foundation website.

Welcome LF AI Newly Elected Leaders

We are excited to welcome three newly elected leaders to the LF AI Foundation. We look forward to their leadership in the upcoming year and for their help in supporting open source innovation and projects within the artificial intelligence (AI), machine learning (ML), and deep learning (DL) space.

A huge thank you to previously elected leaders for all of their contributions, and congratulations to the newly elected Governing Board Chairperson, Treasurer, and Technical Advisory Council Chairperson. For more details on the leadership roles, please take a look at the LF AI Foundation Charter available here. Learn more about each leader below:

Charles “Starlord” Xie, Chairperson, Governing Board

Starlord, Founder and CEO of Zilliz, was elected the new Chairperson of the LF AI Governing Board. He will work with community partners to further strengthen the open source AI community’s leading position in the industry.

Starlord is an expert in databases and AI with more than 18 years of experience. He is the founder and CEO of Zilliz, an open-source software company with a mission to reinvent data science. Before Zilliz, Starlord worked for many years at Oracle’s US headquarters, developing Oracle’s relational database systems. He is a founding member of and a key contributor to the Oracle 12c cloud database project; Oracle 12c has been a huge success, realizing more than $10B in accumulated revenue. Starlord received a master’s degree in computer science from the University of Wisconsin-Madison and a bachelor’s degree from Huazhong University of Science and Technology.

Starlord said: “It is a great honor to be elected as the chairperson of the LF AI Governing Board, and I thank the global open source AI community for their support and trust. In the past two years, the LF AI Foundation has developed rapidly and has incubated a group of excellent AI projects from Microsoft, IBM, Facebook, Tencent, Baidu, AT&T, ZTE, Zilliz, and other companies. Let us work together to build a broader and more dynamic open source AI community!”

Jonne Soininen, Treasurer, Governing Board

Jonne Soininen has been re-elected as Treasurer of the LF AI Governing Board for the third year.

Jonne is an open source enthusiast and Head of Open Source Initiatives at Nokia. In addition to the LF AI Governing Board, Jonne serves as Treasurer of the Linux Foundation Networking (LFN) Governing Board and Chair of its Strategic Planning Committee (SPC). Prior to his current position at Nokia, he worked in various roles within Nokia, Nokia Siemens Networks, Renesas Mobile, and Broadcom, and has an extensive history in telecommunications spanning over 20 years.

Jonne said: “I am very grateful for the renewed opportunity to serve the LF AI community as Treasurer of the LF AI Governing Board. LF AI has evolved tremendously over its first two years of existence. I am both excited and humbled to be trusted to continue contributing to the development of this community.”

Jim Spohrer, Chairperson, Technical Advisory Council (TAC)

Jim Spohrer, Director of the Cognitive Opentech Group (COG) at IBM Research, has been elected as Chairperson for the LF AI’s Technical Advisory Council (TAC).

Dr. Spohrer directs IBM’s open source Artificial Intelligence developer ecosystem effort, including IBM’s Center for Open-Source Data and AI Technologies (CODAIT). At IBM, he was CTO of the IBM Venture Capital Relations Group, co-founded IBM Almaden Service Research, and led IBM Global University Programs. After earning his BS in Physics at MIT, he developed speech recognition systems at Verbex (Exxon) before receiving his PhD in Computer Science/AI from Yale. In the 1990s, he attained Apple Computer’s Distinguished Engineer, Scientist, and Technologist role for next-generation learning platforms. With over ninety publications and nine patents, he has received the Gummesson Service Research award, the Vargo and Lusch Service-Dominant Logic award, and the Daniel Berg Service Systems award, and was named a PICMET Fellow for advancing Service Science.

Dr. Spohrer said: “Grateful to be elected to serve the community as LF AI TAC Chairperson, while we work together to advance open source data and AI technologies at this exciting time in the history of Artificial Intelligence. AI is hard, and will take decades to solve, but the foundations are being put in place at LF AI today with open source projects such as ONNX, Horovod, Angel, Acumos, Ludwig, ForestFlow, Adlik, EDL, NNStreamer, Marquez, Milvus, Pyro, and sparklyr.”

Join the LF AI Community!

The LF AI Foundation is committed to building an open source AI community in the fields of artificial intelligence (AI), machine learning (ML) and deep learning (DL). LF AI drives open source innovation in the field of AI by creating new opportunities for all community members to collaborate with each other. Interested in joining the LF AI community as a member? Learn more here.

ONNX 1.7 Now Available!

ONNX, an LF AI Foundation Graduated Project, has released version 1.7 and we’re thrilled to see this latest set of improvements. ONNX is an open format to represent deep learning models. With ONNX, AI developers can more easily move models between state-of-the-art tools and choose the combination that is best for them. 

In version 1.7, you can find the following:

  • Model training introduced as a technical preview, which expands ONNX beyond its original inference capabilities 
  • New and updated operators to support more models and data types
  • Functions are enhanced to enable dynamic function body registration and multiple operator sets
  • Operator documentation is also updated with more details to clarify the expected behavior
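To illustrate the interchange workflow ONNX enables, here is a minimal sketch that exports a small PyTorch model to ONNX and validates it with the ONNX checker. The model and file name are arbitrary placeholders; opset 12 is used here since that is the operator-set version introduced alongside this release.

```python
import torch
import onnx

# A tiny stand-in model; any torch.nn.Module works the same way.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)
model.eval()

dummy_input = torch.randn(1, 4)  # example input that fixes the graph shapes
torch.onnx.export(model, dummy_input, "tiny.onnx", opset_version=12)

# Reload the exported graph and check that it is a well-formed ONNX model.
onnx_model = onnx.load("tiny.onnx")
onnx.checker.check_model(onnx_model)
print(onnx.helper.printable_graph(onnx_model.graph))
```

The resulting .onnx file can then be loaded by any runtime or tool that speaks the format, which is precisely the portability the project is about.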

To learn more about the ONNX 1.7 release, check out the full release notes. Want to get involved with ONNX? Be sure to join the ONNX Announce and ONNX Technical-Discuss mailing lists to join the community and stay connected on the latest updates. 

Congratulations to the ONNX team and we look forward to continued growth and success as part of the LF AI Foundation! To learn about hosting an open source project with us, visit the LF AI Foundation website.

Angel 3.1.0 Release Now Available!

Angel, an LF AI Foundation Graduated Project, has released version 3.1.0, and we’re thrilled to see lots of momentum within this community. The Angel Project is a high-performance distributed machine learning platform based on the Parameter Server paradigm, running on YARN and Apache Spark. It is tuned for performance with big data and provides advantages in handling higher-dimension models. It supports big and complex models with billions of parameters, partitions the parameters of complex models across multiple parameter-server nodes, and implements a variety of machine learning algorithms using efficient model-updating interfaces and functions, as well as flexible consistency models for synchronization.

In version 3.1.0, Angel adds a variety of improvements, including: 

  • New features in graph learning, reflecting the trend of graph data structures being adopted in many applications such as social network analysis and recommendation systems
  • A collection of well-implemented graph algorithms covering traditional learning, graph embedding, and graph deep learning; these algorithms can be used directly in production models by calling them with simple configurations
  • An operator API for graph manipulation, including building graphs and operating on vertices and edges
  • GPU support within the PyTorch-on-Angel running mode, making it possible to leverage the hardware to speed up computation-intensive algorithms

The Angel Project invites you to adopt or upgrade to Angel version 3.1.0 in your application, and welcomes feedback. To learn more about the Angel 3.1.0 release, check out the full release notes. Want to get involved with Angel? Be sure to join the Angel-Announce and Angel-Technical-Discuss mailing lists to join the community and stay connected on the latest updates.

Congratulations to the Angel team and we look forward to continued growth and success as part of the LF AI Foundation! To learn about hosting an open source project with us, visit the LF AI Foundation website.

Thank You IBM & ONNX for a Great LF AI Day

A big thank you to IBM and ONNX for hosting a great virtual meetup! The LF AI Day ONNX Community Virtual Meetup was held on April 9, 2020 and was a great success with close to 200 attendees joining live. 

The meetup included ONNX Community updates, partner/end-user stories, and SIG/WG updates. The virtual meetup was an opportunity to connect with and hear from people working with ONNX across a variety of groups. A special thank you to Thomas Truong and Jim Spohrer from IBM for working closely with the ONNX Technical Steering Committee, SIGs, and Working Groups to curate the content.

Missed the meetup? Check out the recordings at bit.ly/lfaiday-onnxmeetup-040920.

This meetup took on a virtual format, but we look forward to connecting again at another event in person soon. LF AI Day is a regional, one-day event hosted and organized by local members with support from LF AI, its members, and projects. If you are interested in hosting an LF AI Day, please email info@lfai.foundation to discuss.

ONNX, an LF AI Foundation Graduated Project, is an open format to represent deep learning models. With ONNX, AI developers can more easily move models between state-of-the-art tools and choose the combination that is best for them. Be sure to join the ONNX Announce mailing list and ONNX Gitter to join the community and stay connected on the latest updates. 

sparklyr 1.2.0 Now Available!

sparklyr, an LF AI Foundation Incubation Project, has released version 1.2.0 and we’re excited to see a great release with contributions from several members of the community. sparklyr is an R Language package that lets you analyze data in Apache Spark, the well-known engine for big data processing, while using familiar tools in R. The R Language is widely used by data scientists and statisticians around the world and is known for its advanced features in statistical computing and graphics. 

In version 1.2.0, sparklyr adds a variety of improvements, including: 

  • sparklyr now supports Databricks Connect
  • A number of interop issues with Spark 3.0.0-preview were fixed
  • The `registerDoSpark` method was implemented to allow Spark to be used as a `foreach` parallel backend in sparklyr (see registerDoSpark.Rd)
  • And more… A complete list of changes can be found in the sparklyr 1.2.0 section of the NEWS.md file: sparklyr-1.2.0

The power of open source projects lies in the aggregate contributions from different community members and organizations, which collectively drive the advancement of the projects and their roadmaps. The sparklyr community is a great example of this process and was instrumental in producing this release. A special THANK YOU goes out to the community members who contributed commits and pull request reviews!

To learn more about the sparklyr 1.2.0 release, check out the full release notes. Want to get involved with sparklyr? Be sure to join the sparklyr-Announce and sparklyr Technical-Discuss mailing lists to join the community and stay connected on the latest updates. 

Congratulations to the sparklyr team and we look forward to continued growth and success as part of the LF AI Foundation! To learn about hosting an open source project with us, visit the LF AI Foundation website.

ForestFlow Joins LF AI as New Incubation Project

The LF AI Foundation (LF AI), the organization building an ecosystem to sustain open source innovation in artificial intelligence (AI), machine learning (ML), and deep learning (DL), today is announcing ForestFlow as its latest Incubation Project. ForestFlow is a scalable, policy-based, cloud-native machine learning model server. ForestFlow strives to strike a balance between the flexibility it offers data scientists and the adoption of standards, while reducing friction between Data Science, Engineering, and Operations teams. ForestFlow was released and open sourced by DreamWorks Animation.

“We are very pleased to welcome ForestFlow to LF AI. ForestFlow provides an easy way to deploy ML models to production and realize business value on an open source platform that can scale as the user’s projects and requirements scale,” said Dr. Ibrahim Haddad, Executive Director of LF AI. “We look forward to supporting this project and helping it to thrive under a neutral, vendor-free, and open governance.” LF AI supports projects via a wide range of benefits, and the first step is joining as an Incubation Project.

Ahmad Alkilani, Principal Architect and developer of ForestFlow at DreamWorks Animation, said, “We developed ForestFlow in response to our need to move ML models into production that affected the scheduling and placement of rendering jobs and the throughput of our rendering pipeline, which has a material impact on our bottom line. Our focus was on maintaining our own teams’ agility and keeping ML models fresh in response to changes in data, features, or simply the production tools that historical data was associated with. Another pillar for developing ForestFlow was the openness of the solution we chose. We were looking to minimize vendor lock-in, with a solution equally amenable to on-premise and cloud deployments, while offloading deployment complexities from the job description of a Data Scientist. We want our team to focus on extracting the most value they can out of the data we have and not have to worry about operational concerns. We also needed a hands-off approach to quickly iterate and promote or demote models based on observed metrics of staleness and performance. With these goals in mind, we also realize the value of open source software and the value the Linux Foundation brings to any project, and specifically LF AI in this space. DreamWorks Animation is pleased that LF AI will manage the neutral open governance for ForestFlow to help foster the growth of the project.”

Continuous deployment and lifecycle management of Machine Learning/Deep Learning models is widely accepted as a primary bottleneck for gaining value from ML projects. Hear from the ForestFlow team about why they set out to create this project:

  • We wanted to reduce friction between our data science, engineering and operations teams
  • We wanted to give data scientists the flexibility to use the tools they wanted (H2O, TensorFlow, Spark export to PFA, etc.)
  • We wanted to automate certain lifecycle management aspects of model deployments, like automatic performance- or time-based routing and retirement of stale models
  • We wanted a model server that allows easy A/B testing, Shadow (listen-only) deployments, and Canary deployments. This allows our Data Scientists to experiment with real production data without impacting production, using the same tooling they would when deploying to production.
  • We wanted something that was easy to deploy and scale for different deployment scenarios (on-prem local data center single instance, cluster of instances, Kubernetes-managed, cloud-native, etc.)
  • We wanted the ability to treat inference requests as a stream and log predictions as a stream. This allows us to test new models against a stream of older inference requests.
  • We wanted to avoid the “super-hero” data scientist who knows how to dockerize an application, apply the science, build an API, and deploy to production. That approach does not scale well and is difficult to support and maintain.
  • Most of all, we wanted repeatability. We didn’t want to reinvent the wheel once we had support for a specific framework.

ForestFlow is policy-based to support the automation of Machine Learning/Deep Learning operations, which is critical to scaling human resources. ForestFlow lends itself well to workflows based on automatic retraining, version control, A/B testing, canary model deployments, shadow testing, automatic time- or performance-based model deprecation, and time- or performance-based model routing in real time. The aim of ForestFlow is to provide data scientists a simple means of deploying models to a production system with minimal friction, accelerating the development-to-production value proposition. Check out the quickstart guide for an overview of setting up ForestFlow and an example of inference.

Learn more about ForestFlow here, and be sure to join the ForestFlow-Announce and ForestFlow-Technical-Discuss mailing lists to join the community and stay connected on the latest updates.

A warm welcome to ForestFlow and we look forward to the project’s continued growth and success as part of the LF AI Foundation. To learn about how to host an open source project with us, visit the LF AI website.

LF AI Hosted Projects Cross Collaboration: Angel and Acumos

Guest Author(s): LF AI Graduated Projects, Angel and Acumos

The goal of the LF AI Foundation (LF AI) is to accelerate and sustain the growth of Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) open source projects. Backed by many of the world’s largest technology leaders, LF AI is a neutral space for harmonization and ecosystem engagement to advance AI, ML, and DL innovation. Projects are hosted in either of two stages: graduation and incubation. At the time of publishing this blog post, LF AI hosts three graduation level projects (Acumos, Angel, and ONNX) and eight incubation level projects (Adlik, Elastic Deep Learning, Horovod, Marquez, Milvus, NNStreamer, Pyro and sparklyr).

The incubation stage is designated for new or early-stage projects that are aligned with the LF AI mission and require help to foster adoption and contribution in order to sustain and grow the project. Incubation projects may receive mentorship from the LF AI Technical Advisory Council (TAC) and are expected to actively develop their community of contributors, governance, project documentation, and other variables that factor into broad success and adoption.

Incubation projects are eligible to graduate when they meet a certain number of criteria demonstrating significant growth of contributors and adopters, commitment to open governance, achieving and maintaining a CII best practices badge, and establishing collaboration with other LF AI hosted projects. Getting to this stage requires work, perseverance, and tangible signs of progress.

The graduation stage, on the other hand, designates projects that have achieved significant growth in contributors and adopters, are important to the ecosystem, and are eligible for foundational financial support.

Angel Project

Angel joined LF AI as an incubation project in August 2018. It is a high-performance distributed machine learning platform based on the philosophy of the Parameter Server. It is tuned for high performance, offers a wide range of applicability and stability, and demonstrates an increasing advantage in handling higher-dimension models. The Angel Project has proactively collaborated with the Acumos Project community, with positive outcomes for both communities.

In its effort to move to graduation, the Angel Project community looked at the full range of LF AI hosted projects and chose Acumos for integration.

Why Acumos?

Within the AI open source community, cross-project collaboration is essential. The Angel platform focuses on training models with machine learning algorithms but does not host a public model marketplace. Acumos, on the other hand, supports an AI marketplace that empowers data scientists to publish adaptive AI models, while shielding them from the need to custom-develop fully integrated solutions.

This makes Angel and Acumos a perfect match: after integration, the two work like a factory and a distributor, creating a synergy effect. The Angel team believed that integration with Acumos could encourage and facilitate algorithm sharing by Angel users and therefore benefit the overall community.

In the following sections, we will explore some of the challenges the projects faced during the process and how integration was achieved.

Integration Challenges

Challenge A: There was no reference for onboarding a Java-based model to the Acumos marketplace, which was dominated by Python models. This challenge was solved with the assistance of Acumos technical gurus from AT&T, Tech Mahindra, and Orange, who provided clear guidance and instructions covering jar package access, configuration, and Java model preparation.

Challenge B: The teams needed a deployed, internet-accessible environment. Huawei generously offered access to Acumos environments set up on its public cloud in Hong Kong. However, the uploading process wasn’t all smooth sailing, as several attempts failed due to unsuccessful generation of artifacts. The problem was later solved with help from AT&T and Huawei by restarting Nexus and cleaning the disk to address an insufficient-storage issue.

What Was Achieved?

The successful integration of Angel and Acumos demonstrated that Angel’s Java-based models could be onboarded to a marketplace dominated by Python projects.

At the same time, connecting Angel and Acumos in both API invocation and production deployment allows more developers to use the Angel framework to train domain-specific algorithms and share their work with people around the world. Acumos also becomes a stronger platform by gaining more frameworks and users.

Cross-project collaboration played a key role in Angel’s graduation, as it proved that the project was an open system that could be connected with other projects. Only by demonstrating the capability of linking both upstream and downstream components in a production data pipeline can a project be deemed a member of the global machine learning community, rather than an isolated system.

The collaboration between Angel and Acumos sets an example for other incubation-level projects hosted by LF AI. The Foundation hopes that more projects will follow in the footsteps of Angel and Acumos, and that, with collective effort, the sustainable development of a harmonized community can be achieved soon.

Next Steps

To encourage further collaboration, Angel plans to invite a diverse set of global users to publish their models to Acumos. In parallel, Angel will also look at opportunities to integrate with other components, such as the MLflow framework, a web portal and monitoring system, support for more model file formats, etc.

To learn more about these two LF AI hosted projects, and to view all projects, visit the LF AI Projects page. If you would like to learn more about hosting a project in LF AI and the benefits, click here.

NNStreamer Joins LF AI as New Incubation Project

The LF AI Foundation (LF AI), the organization building an ecosystem to sustain open source innovation in artificial intelligence (AI), machine learning (ML), and deep learning (DL), today is announcing NNStreamer as its latest Incubation Project. NNStreamer is a set of GStreamer plugins that make it easy and efficient for GStreamer developers to adopt neural network models, and for neural network developers to manage neural network pipelines and their filters. NNStreamer was released and open sourced by Samsung.

“We are very pleased to welcome NNStreamer to LF AI. Machine Learning applications often process online stream input data in real-time, which can create a complex system. NNStreamer can be used to easily represent and efficiently execute against these challenges,” said Dr. Ibrahim Haddad, Executive Director of LF AI. “We look forward to supporting this project and helping it to thrive under a neutral, vendor-free, and open governance.” LF AI supports projects via a wide range of benefits, and the first step is joining as an Incubation Project. Full details on why you should host your open source project with LF AI are available here.

NNStreamer promotes easier and more efficient development of on-device AI systems by allowing the description of general systems, with various inputs, outputs, processors, and neural networks, in a pipe-and-filter architecture. It provides easy-to-use APIs with corresponding SDKs: C APIs (all platforms), Tizen.NET (C#), and Android (Java), along with support for a wide range of neural network frameworks and software platforms (Ubuntu, macOS, OpenEmbedded). NNStreamer became an open source project in 2018 and is under active development within the Tizen project and across a wide range of consumer electronics devices.
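As a flavor of that pipe-and-filter style, here is a sketch that launches an NNStreamer pipeline from Python through GStreamer’s GObject bindings. The tensor_converter, tensor_filter, and tensor_sink elements come from the NNStreamer plugin set, but the model path and filter properties below are placeholder assumptions; adapt them to a real model and consult the NNStreamer docs for the exact property names on your version.

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# Video frames -> tensors -> neural network -> sink, as one pipeline string.
# 'model.tflite' is a placeholder; swap in an actual TensorFlow Lite model.
pipeline = Gst.parse_launch(
    'videotestsrc num-buffers=100 ! videoconvert ! videoscale ! '
    'video/x-raw,width=224,height=224,format=RGB ! '
    'tensor_converter ! '
    'tensor_filter framework=tensorflow-lite model=model.tflite ! '
    'tensor_sink'
)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
# Block until the stream ends or an error is raised.
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```

The same pipeline could equally be composed in C or launched from the gst-launch command line; the point is that the neural network is just another filter element in the stream.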

Learn more about NNStreamer via their GitHub. You can also check out the recording of the NNStreamer presentation at the 2018 GStreamer Conference here, as well as their presentation at the Samsung Developer Conference in 2019 here. And be sure to join the NNStreamer-Announce and NNStreamer-Technical-Discuss mailing lists to join the community and stay connected on the latest updates.

A warm welcome to NNStreamer and we look forward to the project’s continued growth and success as part of the LF AI Foundation. To learn about how to host an open source project with us, visit the LF AI website.
