Are you in Government or the Public Sector? The Call for Participation for the AAAI AI in Government and Public Sector Fall Symposium is Open!

By Blog

Government is on the front lines of the democratization of AI. The scale of participation and the importance of these systems in citizens’ lives mean that government and public sector approaches to open source AI will be a central component of how development changes and evolves in the coming years.

The Association for the Advancement of Artificial Intelligence (AAAI) is holding its 2019 Fall Symposium Series in Washington, DC, Nov 7–9, 2019.

This symposium will focus on a wide array of government and public sector AI topics. From the Call for Papers (see the attached PDF for more information):

“There are hundreds of open source AI related projects focusing on several AI sub-domains such as deep learning, machine learning, models, natural language processing, speech recognition, data, reinforcement learning, notebook environments, ethics and many more.  How can government entities leverage the abundance of open source AI projects and solutions in building their own platforms and services? Based on which criteria should we evaluate various projects aiming to solve same or similar problems? What kind of framework should be in place to validate these projects, and allowing the trust in AI code that will be deployed for public service?”

Submit your proposal by July 26 through the AAAI EasyChair.org site choosing the AAAI/FSS-19 Artificial Intelligence in Government and Public Sector track: https://easychair.org/conferences/?conf=fss19

Contact Frank Stein (fstein@us.ibm.com) with any questions.

LF Deep Learning Becomes LF AI Foundation to Encompass Growing Portfolio of Technologies

Today we’re announcing a name change to our Foundation, but it’s really about so much more than a name. It’s about reflecting the growing scope of our organization and the increasing number of technologies being built in our community. That’s why the new name is LF AI Foundation, which encompasses AI (artificial intelligence), machine learning, deep learning and more.

We are on the precipice of a major technological shift with AI, which is exactly the point in any technology evolution where open source software and community come into play. Interest in and contributions to our work are accelerating, and the name change reflects that.

Over the past year, we’ve welcomed new projects, seen rapid code releases within those projects, and gained additional members supporting, adopting and contributing to this work. Our portfolio of projects, in particular, is expanding in ways that are supporting developer communities across AI, all under our stewardship. From Acumos to Angel, Elastic Deep Learning, Horovod and Pyro, we are building an upstream technical open source community that crosses artificial intelligence, machine learning, deep learning and other AI sub-domains. It’s a natural time to more accurately reflect the intensive and comprehensive collaboration at work within our community.

The LF AI Foundation will formally expand its scope to support a growing ecosystem of AI, machine learning and deep learning technologies. In just the last six months, the overall ecosystem captured in our landscape has grown from 80 to more than 170 projects with a combined 350 million lines of code from more than 80 different organizations around the world. This level and pace of collaborative open source development is similar to the earliest days of Linux, blockchain, cloud and containers. The time to put the proper infrastructure and scope in place is at hand.

Join Us at the Open Source Summit NA

We’re hosting our LF AI members and community for meetings and discussion sessions on August 20th, one day before the Open Source Summit NA. Please join us in exploring and discussing LF AI and our projects. You can register to attend as part of your OSS NA registration.

About LF AI Foundation

The LF AI Foundation, a Linux Foundation project, accelerates and sustains the growth of Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) open source projects. Backed by many of the world’s largest technology leaders, LF AI is a neutral space for harmonization and ecosystem engagement to advance AI, ML and DL innovation. To get involved with the LF AI Foundation, please visit https://lfai.foundation.

The LF Deep Learning Foundation Welcomes New Member Sylabs

Sylabs, the company offering products, services, and solutions based on Singularity, the open source container solution for compute-based workloads, is the latest member to join the LF Deep Learning Foundation at the Linux Foundation.

Singularity was developed originally for high-performance computing (HPC) and is rooted in the open source community.

“We’re developing a solution for containerization that targets those with compute-driven workloads, so the fit with LF Deep Learning is highly relevant,” said Gregory Kurtzer, CEO and founder of Sylabs. “There’s a massive need to properly containerize and support workflows related to artificial intelligence and machine/deep learning.”

With a focus on compute-centric containerization, Sylabs also joined the Linux Foundation initiative behind Kubernetes, the Cloud Native Computing Foundation.

“AI in containerization is evolving rapidly and we are pleased to welcome forward-thinking new member companies like Sylabs,” said Ibrahim Haddad, executive director of LF Deep Learning Foundation. “Sylabs brings experience in both containers and AI to the LFDL and we are looking forward to working together to benefit the open source community.”    

Sylabs’ integration between Singularity and Kubernetes leverages the Open Container Initiative (OCI) image and runtime specifications as of the recent Singularity 3.1.0 release. Sylabs’ recent blog post demonstrates this integration with a deep learning use case.

Sylabs is excited to get more involved in the LF DL community and to advance cloud native computing and AI innovation and efficiency for compute-driven workloads. Sylabs will be attending KubeCon+CloudNativeCon North America later this year, while LF DL community members Huawei and Uber are taking part in KubeCon+CloudNativeCon+Open Source Summit China, June 24-26, 2019, in Shanghai, where the latest open source AI/DL/ML developments will be featured.

LF Deep Learning is building a sustainable ecosystem that makes it easy to create new AI products and services using open source technologies. Today, LF DL includes the following projects:

    • Acumos, a platform to build, share and deploy AI apps;
    • Angel ML, a flexible and powerful parameter server for large-scale machine learning;
    • EDL, an Elastic Deep Learning framework designed to build cluster cloud services;
    • Horovod, a framework for distributed training across multiple machines; and
    • Pyro, a deep probabilistic programming framework that facilitates large-scale exploration of AI models.

For more on LF DL news and progress, join our mailing list and follow us on Twitter.

Horovod Adds Support for PySpark and Apache MXNet and Additional Features for Faster Training

Carsten Jacobsen, Open Source Developer Advocate @ Uber 

Excerpt: Horovod adds support for more frameworks in the latest release and introduces new features to improve versatility and productivity.

Horovod, a distributed deep learning framework created by Uber, makes distributed deep learning fast and easy to use. Horovod improves the speed, scale, and resource allocation of training machine learning (ML) models with TensorFlow, Keras, PyTorch, and Apache MXNet. LF Deep Learning, a Linux Foundation project which supports and sustains open source innovation in artificial intelligence and machine learning, accepted Horovod as one of its hosted projects in December 2018. Since the project was accepted as a hosted project, contributions and collaboration beyond Uber have grown quickly, thanks to the neutral environment, open governance and set of enablers that LF Deep Learning offers its projects.

The updates in this latest release improve Horovod in three key ways: adding support and integration for more frameworks, improving existing features, and preparing the framework for changes coming with TensorFlow 2.0. Combined, these new functionalities and capabilities make Horovod easier, faster, and more versatile for its growing base of users, including NVIDIA and the Oak Ridge National Laboratory. Horovod has also been integrated with various deep learning ecosystems, including AWS, Google, Azure, and IBM Watson.

With this release, a number of new use cases for Horovod have been added with the purpose of making the framework a more versatile tool for training deep learning models. As the list of integrations and supported frameworks grows, users can leverage Horovod to accelerate a larger number of open source models, and use the same techniques across multiple frameworks.

PySpark and Petastorm support

Capable of handling a massive volume of data, Apache Spark is used across many machine learning environments. Its ease of use, in-memory processing capabilities, near real-time analytics, and rich set of integration options, like Spark MLlib and Spark SQL, have made Spark a popular choice.

Given its scalability and ease of use, Horovod has received interest from the broader Python-based machine learning community, including Apache Spark users. With the release of PySpark support and integration, Horovod becomes useful to a wider set of users.

A typical PySpark workflow before Horovod was to do data preparation in PySpark, save the results to intermediate storage, run a separate deep learning training job on a different cluster solution, export the trained model, and then run evaluation back in PySpark. Horovod’s integration with PySpark allows all of these steps to be performed in the same environment.

In order to smooth out data transfer between PySpark and Horovod in Spark clusters, Horovod relies on Petastorm, an open source data access library for deep learning developed by Uber Advanced Technologies Group (ATG). Petastorm, open sourced in September 2018, enables single machine or distributed training and evaluation of deep learning models directly from multi-terabyte datasets.

A typical Petastorm use case entails preprocessing the data in PySpark, writing it out to storage in Apache Parquet, a highly efficient columnar storage format, and reading the data in TensorFlow or PyTorch using Petastorm.

Both Apache Spark and Petastorm are also used in some applications internally at Uber, so extending Horovod’s support to include PySpark and Petastorm has been a natural step in the process of making Horovod a more versatile tool.

Apache MXNet support

Apache MXNet (incubating) is an open source deep learning framework that facilitates more flexible and efficient neural network training. Amazon is a large contributor to both Horovod and MXNet, and natively supports both frameworks on Amazon EC2 P3 instances and Amazon SageMaker.

Like its recent support of PySpark, Horovod’s integration with MXNet is part of a larger effort to make Horovod available to a broader community, further expanding access to faster and easier model training.

Autotuning

The third update in this latest release is the introduction of an alpha version of autotuning. In this release, autotuning is optional, but it will be turned on by default in future releases.

Horovod supports a number of internal parameters that can be adjusted to improve performance for variations in hardware and model architecture. Such parameters include the fusion buffer threshold for determining how many tensors can be batched together into a single allreduce, cycle time for controlling the frequency of allreduce batches, and hierarchical allreduce as an alternative to single-ring allreduce when the number of hosts becomes very large.
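These knobs are typically set through environment variables; a hypothetical configuration might look like this (the values are illustrative, not recommendations):

```shell
# Fusion buffer threshold: max bytes of tensors batched into one allreduce.
export HOROVOD_FUSION_THRESHOLD=67108864   # 64 MB

# Cycle time: milliseconds between allreduce batches.
export HOROVOD_CYCLE_TIME=5

# Use hierarchical allreduce when the number of hosts is very large.
export HOROVOD_HIERARCHICAL_ALLREDUCE=1
```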

Finding the right values for these parameters can yield performance improvements of as much as 30 percent. However, trying different parameters by hand is a time-consuming exercise in trial and error.

Horovod’s autotuning system removes the guesswork by dynamically exploring and selecting the best internal parameter values using Bayesian optimization.

Autotuning automates the otherwise manual process of trying different options and parameter values to identify the best configuration, which must be repeated if there are changes in hardware, scale, or models. Courtesy of automation, autotuning makes parameter optimization more efficient for faster model training.
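Since autotuning is optional in this release, it is enabled explicitly; a hypothetical setup (the log path is illustrative):

```shell
# Let Horovod's Bayesian-optimization autotuner pick parameter values
# dynamically during training instead of setting them by hand.
export HOROVOD_AUTOTUNE=1

# Optionally record the explored configurations and their scores.
export HOROVOD_AUTOTUNE_LOG=/tmp/autotune_log.csv
```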

Embedding improvements

Embedding is commonly used in machine learning use cases involving natural language processing (NLP) and learning from tabular data. At Uber, trip data is stored as tabular data with categorical features, so both the number of embeddings and the size of each embedding grow with the data. With this latest release, Horovod has enhanced its ability to scale deep learning models that make heavy use of embeddings, such as Transformer and BERT.

In addition, these improvements speed up the processing of large embedding gradients and enable the fusion of small embedding gradients, allowing models with a large number of embeddings to train faster.

Eager execution support in TensorFlow

Eager execution will be the default mode in TensorFlow 2.0. Eager execution allows developers to create models in an imperative programming environment, where operations are evaluated immediately and results are returned as concrete values. Eager execution eliminates the need to create sessions and work with graphs.

With eager execution’s support for dynamic models, model evaluation and debugging is made easier and faster. Eager execution also makes working with TensorFlow more intuitive for less experienced developers.

In the past, running Horovod with eager execution meant calculating each tensor gradient across all workers sequentially, without any tensor batching or parallelism. With the latest release, eager execution is fully supported. Tensor batching with eager execution improved performance by over 6x in our experiments. Additionally, users can now make use of a distributed implementation of TensorFlow’s GradientTape to record operations for automatic differentiation.

Mixed precision training

Mixed precision is the combined use of different numerical precisions in a computational method. Using precision lower than FP32 reduces memory requirements by using smaller tensors, allowing deployment of larger networks. In addition, data transfers take less time, and compute performance increases dramatically. GPUs with Tensor Cores support mixed precision and enable users to capitalize on the benefits of lower memory usage and faster data transfers.

Mixed precision training of deep neural networks achieves two main objectives:

  1. Decreases the required amount of memory, enabling training of larger models or training with larger mini-batches;
  2. Shortens training or inference time by using lower-precision arithmetic, which reduces the required compute resources.
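The memory half of this trade-off is easy to see directly (a small NumPy illustration; the compute speedups from Tensor Cores of course require supporting hardware):

```python
import numpy as np

# The same 1024x1024 tensor in FP32 and FP16.
t32 = np.ones((1024, 1024), dtype=np.float32)
t16 = t32.astype(np.float16)

# FP16 halves the memory footprint, so larger networks or larger
# mini-batches fit in the same amount of device memory.
print(t32.nbytes, t16.nbytes)  # 4194304 2097152
```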

In the past, mixed precision training used to break Horovod’s fusion logic, since the sequence of FP16 tensors would be frequently broken by FP32 tensors, and tensors of different precisions could not participate in a single fusion transaction.

With the latest release, NVIDIA contributed an improvement to tensor fusion logic that allows FP16 and FP32 tensor sequences to be processed independently via a look-ahead mechanism.  We have seen up to 26 percent performance improvement with this change.
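To illustrate the idea, here is a simplified pure-Python model of the batching logic (not Horovod's actual implementation): tensors are grouped by dtype, so an interleaved FP32 tensor no longer cuts an FP16 fusion batch short.

```python
from collections import defaultdict

def fuse_by_dtype(tensors, capacity):
    """Group (name, dtype, size) tensors into fusion batches.

    Each batch holds tensors of a single dtype whose total size fits
    within `capacity` (the fusion buffer threshold); pending batches
    for different dtypes are filled independently, mimicking the
    look-ahead over mixed FP16/FP32 sequences.
    """
    pending, sizes, batches = defaultdict(list), defaultdict(int), []
    for name, dtype, size in tensors:
        if pending[dtype] and sizes[dtype] + size > capacity:
            batches.append((dtype, pending[dtype]))
            pending[dtype], sizes[dtype] = [], 0
        pending[dtype].append(name)
        sizes[dtype] += size
    for dtype, names in pending.items():
        if names:
            batches.append((dtype, names))
    return batches

# An FP32 tensor interleaved in an FP16 run still lets "a" and "c"
# fuse into one FP16 batch:
print(fuse_by_dtype([("a", "fp16", 4), ("b", "fp32", 4), ("c", "fp16", 4)], 16))
# [('fp16', ['a', 'c']), ('fp32', ['b'])]
```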

Curious about how Horovod can make your model training faster and more scalable? Check out these new updates and try out the framework for yourself, and be sure to join the Deep Learning Foundation’s Horovod announcement and technical discussion mailing lists.

Introducing the Interactive Deep Learning Landscape

The artificial intelligence (AI), deep learning (DL) and machine learning (ML) space is changing rapidly, with new projects and companies launching, existing ones growing, expanding and consolidating. More companies are also releasing their internal AI, ML, DL efforts under open source licenses to leverage the power of collaborative development, benefit from the innovation multiplier effect of open source, and provide faster, more agile development and accelerated time to market.

To make sense of it all and keep up to date on an ongoing basis, the LF Deep Learning Foundation has created an interactive Deep Learning Landscape, based on the Cloud Native Landscape pioneered by CNCF. This landscape is intended as a map to explore open source AI, ML and DL projects. It also showcases the member companies of the LF Deep Learning Foundation, who contribute heavily to open source AI, ML and DL and bring their own projects to be housed at the Foundation.

This tool allows viewers to filter, obtain detailed information on a specific project or technology, and easily share via stateful URLs. It is intended to help developers, end users and others navigate the complex AI, DL and ML landscape.

All data is also available in a GitHub repo, and anyone may update or add to the landscape by submitting a pull request on GitHub.

We encourage you to spend some time with this tool, learn more about the current AI, DL and ML space, and begin contributing to it.

LF Deep Learning Foundation Announces Project Contribution Process

The LF Deep Learning Foundation is now accepting proposals for the contribution of projects.

I am very pleased to announce that the LF Deep Learning Foundation has approved a project lifecycle and contribution process to enable the contribution, support and growth of artificial intelligence, machine learning and deep learning open source projects. With these documents in place, the LF Deep Learning Foundation is now accepting proposals for the contribution of projects.

The LF Deep Learning Foundation, a community umbrella project of The Linux Foundation with the mission of supporting artificial intelligence, machine learning and deep learning open source projects, is working to build a self-sustaining ecosystem of projects.  Having a clear roadmap for how to contribute projects is a first step. Contributed projects operate under their own technical governance with collaboration resources allocated and provided by the LF Deep Learning Foundation’s Governing Board. Membership in the LF Deep Learning Foundation is not required to propose a project contribution.

The project lifecycle and contribution process documents can be found here: https://lists.deeplearningfoundation.org/g/tac-general/wiki/Lifecycle-Document-and-Project-Proposal-Process. Note that sign-up to the general LF Deep Learning Foundation mailing list is required to access these materials.

If you are interested in contributing a project, please review the steps and requirements described in the above materials. We are very excited to see what kinds of innovative, forward-thinking projects the community creates.

If you have any questions on how to contribute a project or the types of support LF Deep Learning Foundation is providing to its projects, please reach out to me at snicholas@linuxfoundation.org.

For more information on the LF Deep Learning Foundation, please visit https://www.linuxfoundation.org/projects/deep-learning/.

Compete in the ACUMOS AI Challenge for a Chance to Win $50,000

The Acumos AI Challenge, presented by AT&T and Tech Mahindra, is an open source developer competition seeking innovative, ground-breaking AI solutions; enter now.

Artificial Intelligence (AI) has quickly evolved over the past few years and is changing the way we interact with the world around us. From digital assistants to AI apps interpreting MRIs and operating self-driving cars, there has been significant momentum and interest in the potential of machine learning technologies applied to AI.

The Acumos AI Challenge, presented by AT&T and Tech Mahindra, is an open source developer competition seeking innovative, ground-breaking AI solutions from students, developers, and data scientists. We are awarding over $100,000 in prizes, including the chance for finalists to travel to San Francisco to pitch their solutions during the finals on September 11, 2018. Finalists will also have the chance to have their solutions featured in the Acumos Marketplace, gain exposure, and meet with AT&T and Tech Mahindra executives.

Acumos AI is a platform and open source framework that makes it easy to build, share, and deploy AI applications. The Acumos AI platform, hosted by The Linux Foundation, simplifies development and provides a marketplace for accessing, using and enhancing AI apps.  

We created the Acumos AI Challenge to enable and accelerate AI adoption and innovation, while recognizing developers who are paving the future of AI development. The Acumos AI Challenge seeks innovative AI models across all use cases. Some example use cases include, but are not limited to:

5G & SDN

Build an AI app that improves the overall performance and efficiencies of 5G networks and Software-Defined Networking.

Media & Entertainment

Build an AI model targeting a media or entertainment use case. Examples include solutions for:

  • Broadcast media, internet, film, social media, and ad campaign analysis
  • Video and image recognition, speech and sound recognition, video insight tools, etc.

Security

Build an AI app around network security use cases such as advanced threat protection, cyber security, IoT security, and more.

Enterprise Solutions

Build an AI model targeting an enterprise use case, including solutions for Automotive, Home Automation, Infrastructure, and IoT.

Because it is so easy to onboard new models into Acumos, there is a nearly unlimited range of use cases to consider that can benefit consumers and businesses across a multitude of disciplines. When submitting your entry, we encourage you to consider the scenarios that you are passionate about.

The Acumos AI Challenge will accept submissions from May 31 through August 5, 2018. Teams are required to submit a working AI model, a test dataset, and a demo video under three minutes. Team registration opens May 31, 2018. We encourage you to register early so that you can begin to plan and build your solution and create your demo video.

Prize Packages

Register today and submit your AI solution for a chance to be one of the top three teams to pitch their app at the Palace of Fine Arts in San Francisco on September 11, 2018. The top three teams will each receive:

  • $25,000 Cash
  • Trip to the finals in San Francisco, including air and hotel (for two team members)
  • Meetings with AT&T and Tech Mahindra executives
  • AI Solution featured in Acumos Marketplace

The team that wins the finale will take home an additional $25,000 grand prize, for a total of $50,000.

We look forward to your entry and hope to see you in San Francisco in September!

REGISTER FOR THE ACUMOS AI CHALLENGE

Open Source AI For Everyone: Three Projects to Know

We look at three open source AI projects aimed at simplifying access to AI tools and insights.

At the intersection of open source and artificial intelligence, innovation is flourishing, and companies ranging from Google to Facebook to IBM are open sourcing AI and machine learning tools.

According to research from IT Intelligence Markets, the global artificial intelligence software market is expected to reach 13.89 billion USD by the end of 2022. However, talk about AI has accelerated faster than actual deployments. According to a detailed McKinsey report on the growing impact of AI, “only about 20 percent of AI-aware companies are currently using one or more of its technologies in a core business process or at scale.” Here, we look at three open source AI projects aimed at simplifying access to AI tools and insights.

TensorFlow

Google has open sourced a software framework called TensorFlow that it spent years developing to support its AI software and other predictive and analytics programs. TensorFlow is the engine behind several Google tools you may already use, including Google Photos and the speech recognition found in the Google app.

Google has also released two new AIY kits that let individuals easily get hands-on with artificial intelligence. Focused on computer vision and voice assistants, the two kits come as small self-assembly cardboard boxes with all the components needed for use. The kits are currently available at Target in the United States and, notably, are both based on the open source Raspberry Pi platform—more evidence of how much is going on at the intersection of open source and AI.

Sparkling Water

H2O.ai, formerly known as 0xdata, has carved out a niche in the machine learning and artificial intelligence arena, offering platform tools as well as Sparkling Water, a package that works with Apache Spark. H2O.ai’s tools, which you can access simply by downloading them, operate under Apache licenses, and you can run them on clusters powered by Amazon Web Services (AWS) and others for just a few hundred dollars. Never before has this kind of AI-focused data-sifting power been so affordable and easy to deploy.

Sparkling Water includes a toolchain for building machine learning pipelines on Apache Spark. In essence, Sparkling Water is an API that allows Spark users to leverage H2O’s open source machine learning platform instead of or alongside the algorithms that are included in Spark’s existing machine-learning library. H2O.ai has published several use cases for how Sparkling Water and its other open tools are used in fields ranging from genomics to insurance, demonstrating that organizations everywhere can now leverage open source AI tools.

H2O.ai’s Vinod Iyengar, who oversees business development at the company, says they are working to bring the power of AI to businesses. “Our machine learning platform features advanced algorithms that can be applied to specialized use cases and the wide variety of problems that organizations face,” he notes.

Just as open source focused companies such as Red Hat have combined commercial products and services with free and open source ones, H2O.ai is exploring the same model on the artificial intelligence front. Driverless AI is a new commercial product from H2O.ai that aims to ease AI and data science tasks at enterprises. With Driverless AI, non-technical users can gain insights from data, optimize algorithms, and apply machine learning to business processes. Note that, although it leverages tools with open source roots, Driverless AI is a commercial product.

Acumos

Acumos is another open source project aimed at simplifying access to AI. Acumos AI, which is part of the LF Deep Learning Foundation, is a platform and open source framework that makes it easy to build, share, and deploy AI apps. According to the website, “It standardizes the infrastructure stack and components required to run an out-of-the-box general AI environment. This frees data scientists and model trainers to focus on their core competencies and accelerates innovation.”

The goal is to make these critical new technologies available to developers and data scientists, including those who may have limited experience with deep learning and AI. Acumos also has a thriving marketplace where you can grab and deploy applications.

“An open and federated AI platform like the Acumos platform allows developers and companies to take advantage of the latest AI technologies and to more easily share proven models and expertise,” said Jim Zemlin, executive director at The Linux Foundation. “Acumos will benefit developers and data scientists across numerous industries and fields, from network and video analytics to content curation, threat prediction, and more.” You can learn more about Acumos here.

7 Axioms for Calm Technology

By 2020, 50 billion devices will be online. That projection was made by researchers at Cisco, and it was a key point in Amber Case’s Embedded Linux Conference keynote address, titled “Calm Technology: Design for the Next 50 Years,” which is now available for replay.

Case, an author and a fellow at Harvard University’s Berkman Klein Center, referred to the “Dystopian Kitchen of the Future” as she discussed so-called smart devices that are invading our homes and lives, when the way they are implemented is not always so smart. “Half of it is hackable,” she said. “I can imagine your teapot getting hacked and someone gets away with your password. All of this just increases the surface area for attack. I don’t know about you, but I don’t want to have to be a system administrator just to live in my own home.”

Support and Recede

Case also discussed the era of “interruptive technology.” “It’s not just that we are getting text messages and robotic notifications all the time, but we are dealing with bad battery life, disconnected networks and servers that go down,” she said. “How do we design technology for sub-optimal situations instead of the perfect situations that we design for in the lab?”

“What we need is calm technology,” she noted, “where the tech recedes into the background and supports us, amplifying our humanness. The only time a technology understands you the first time is in Star Trek or in films, where they can do 40 takes. Films have helped give us unrealistic expectations about how our technology understands us. We don’t even understand ourselves, not to mention the person standing next to us. How can technology understand us better than that?”

Case noted that the age of calm technology was referenced long ago at Xerox PARC, by early ubiquitous computing researchers, who paved the way for the Internet of Things (IoT). “What matters is not technology itself, but its relationship to us,” they wrote.

7 Axioms

She cited this quote from Xerox researcher Mark Weiser: “A good tool is an invisible tool. By invisible, we mean that the tool does not intrude on your consciousness; you focus on the task, not the tool.”

Case supplied some ordered axioms for developing calm technology:

  1. Technology shouldn’t require all of our attention, just some of it, and only when necessary.
  2. Technology should empower the periphery.
  3. Technology should inform and calm.
  4. Technology should amplify the best of technology and the best of humanity.
  5. Technology can communicate, but it doesn’t need to speak.
  6. Technology should consider social norms.
  7. The right amount of technology is the minimum amount to solve the problem.

In summing up, Case said that calm technology allows people to “accomplish the same goal with the least amount of mental cost.” In addition to her presentation at the Embedded Linux Conference, Case also maintains a website on calm technology, which offers related papers, exercises and more.

Watch the complete presentation below: