Machine Learning in the Deployment Age

06 March 2020

The last ten years were an incredible decade for artificial intelligence and machine learning, arguably the most productive in the sixty-year history of the field. From ImageNet to AlphaGo, we saw deep neural networks climb out of the academic dustbin and into the mainstream, mobilizing millions of data scientists and hundreds of billions of dollars in investment. While consumer-facing products - like digital assistants and self-driving cars - got most of the headlines, the real impact of machine learning has been playing out behind the scenes, as more companies look to AI to solve a growing number of business challenges.

At Zetta, we’ve been following the rise of enterprise machine learning since our founding in 2013. As the first fund focused on the area, we’ve been fortunate to partner with a number of the pioneering AI-first enterprise startups, which has given us a front-row seat to the first wave of enterprise AI adoption. Over the past seven years, we’ve seen machine learning go from obscurity to curiosity to delivering real business value. But machine learning has a long way to go before it can be widely deployed and fully utilized by businesses: production-grade systems are expensive to build, hard to operationalize and vulnerable to a range of threats we are only just starting to understand.

In her famous book on technological revolutions, Carlota Perez described the tendency of new technologies to follow similar S-shaped development cycles, split into two distinct periods. The first, which she calls the installation phase, is marked by rapid technological development, heavy investment and hype, which leads to bubbles and recoveries but ultimately paves the way for a second period, the deployment phase, in which the technology is more widely adopted and real value is created. While we haven’t seen a full collapse-and-recovery cycle in machine learning, we seem to have arrived at a similar inflection point, one in which both the promise and the current perils of the technology are becoming clear. If the last decade was about getting AI to work, this one will have to be about getting AI to work for people and businesses.

To get to the deployment phase, machine learning will have to overcome some key technical limitations and tackle the operational and strategic obstacles standing in the way of broader enterprise adoption. At Zetta, we believe these challenges are among the biggest opportunities that startups can take on. As we kick off the new decade, here are some of the areas we’re most excited about.

Making AI Cheaper and Easier to Build

The first wave of real-world machine learning shows how hard it is to build intelligent systems at enterprise scale. The cost of AI projects has soared, with some estimates suggesting that up to 87% of projects never even make it to production. That explains why nearly two-thirds of all AI spending is being funneled to consultants and cloud vendors. Companies looking to build internal capabilities face intense competition for data scientists, who are commanding higher salaries and churning more often than ever. The imbalance between supply and demand is a hole that won’t be plugged by talent alone; closing it will require taking aim at the drivers of cost and complexity that plague machine learning today.

Data Quality

Outside of talent, data is the first and most visible cost center for AI projects. While many businesses sit on troves of proprietary data, it’s rarely in a form that is conducive to AI. As a result, a multi-billion-dollar data preparation market has emerged to help businesses unify, enrich and annotate their data for the purpose of training machine learning models. Today, this work is highly manual, dominated by a mix of high-end consultants wrangling data and low-cost labor doing annotation. The high costs and explosive demand for these services, especially in markets like autonomous vehicles, have led to some eye-popping valuations in the private markets while underscoring how unsustainable these kinds of services are likely to be in the long term.

Making machine learning accessible to businesses will require new, more efficient ways of improving data quality. Better, more automated tools for data cleaning and enrichment will be important, but the cost of data annotation, or labeling, is by far the biggest obstacle to overcome. A range of new tools are making annotation more efficient by reducing the amount of labeled data a model needs to train. Techniques like active learning surface the most informative data points, allowing labelers to focus on the highest-yield examples. Similarly, transfer learning lets data scientists bootstrap a new model from one already trained on a related task. For smaller or more sensitive datasets, augmenting training sets with synthetic data has been shown to boost performance while improving privacy.
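To make the active learning idea concrete, here is a minimal sketch of uncertainty sampling using scikit-learn. The synthetic dataset, logistic regression model and batch size are illustrative assumptions, not a reference to any particular vendor's tooling; in practice, the "query" step is where human annotators come in.

```python
# Minimal active-learning loop: label the examples the model is least sure about.
# Assumes scikit-learn and numpy; the dataset and batch size are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labeled = np.zeros(len(X), dtype=bool)
labeled[np.random.RandomState(0).choice(len(X), 50, replace=False)] = True  # small seed set

for round_ in range(5):
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[~labeled])
    # Uncertainty sampling: pick the unlabeled points closest to the decision boundary.
    uncertainty = 1 - probs.max(axis=1)
    query = np.argsort(uncertainty)[-25:]        # 25 most uncertain examples this round
    unlabeled_idx = np.where(~labeled)[0]
    labeled[unlabeled_idx[query]] = True         # "send" them to annotators
    print(f"round {round_}: {labeled.sum()} labels, accuracy={model.score(X, y):.3f}")
```

The payoff is that each labeling dollar goes toward the examples the model would learn the most from, rather than toward data it already handles well.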

However, the biggest hope for less laborious labeling is in semi- and self-supervised learning: techniques that use the structure of raw data to infer labels, or the output of one model to supervise the training of another. Some of the leading AI researchers, like Facebook’s Yann LeCun, believe these techniques will cover a lot more ground than today’s supervised methods, pointing to the fact that most human learning is gained through experience and reasoning rather than explicit instruction. Compared to supervised learning, these techniques are in their early days, but open-source projects like Snorkel out of Stanford are bringing them closer to real-world applications.
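In the same spirit, here is a stripped-down sketch of programmatic, or weak, supervision: a few noisy heuristic labeling functions vote on each example, producing training labels without hand annotation. The heuristics and the simple majority vote are illustrative assumptions; Snorkel itself learns a label model to weight and denoise its sources.

```python
# Weak supervision sketch: heuristic labeling functions stand in for hand labels.
# The heuristics and majority vote are illustrative; Snorkel learns to weight sources.
import numpy as np

SPAM, HAM, ABSTAIN = 1, 0, -1

def lf_contains_offer(text):      # noisy heuristic #1
    return SPAM if "free offer" in text.lower() else ABSTAIN

def lf_has_unsubscribe(text):     # noisy heuristic #2
    return SPAM if "unsubscribe" in text.lower() else ABSTAIN

def lf_short_personal(text):      # noisy heuristic #3
    return HAM if len(text.split()) < 8 else ABSTAIN

labeling_functions = [lf_contains_offer, lf_has_unsubscribe, lf_short_personal]

def weak_label(texts):
    """Combine labeling-function votes into one label per example (or abstain)."""
    labels = []
    for t in texts:
        votes = [lf(t) for lf in labeling_functions if lf(t) != ABSTAIN]
        labels.append(np.bincount(votes).argmax() if votes else ABSTAIN)
    return np.array(labels)

emails = ["Claim your FREE OFFER now, unsubscribe below", "See you at lunch?"]
print(weak_label(emails))   # -> [1 0]: usable training labels, no annotators involved
```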

Developer Tools

The tools available to data scientists are still in their infancy, so even with the right data and team in place, building production-grade machine learning is needlessly complex. Today, data scientists spend far too much of their time building and maintaining infrastructure, stitching together disjointed tools and building piecemeal pipelines to move models from development into production. Many of today’s most widely used tools are showing their age and failing to meet the challenges posed by new kinds of data and system architectures as machine learning becomes more performant and real-time. This has led some of the big tech companies to develop their own internal tools, leaving others to scale back or shelve their more ambitious projects.

To bring machine learning into the enterprise era, we’ll need a new suite of tools for data scientists that are more tightly integrated, production oriented and capable of managing real-time data and distributed systems. Collaborative, cloud-native notebooks are an important starting point to bring more of the data science workflow online and make it easier to connect the different pieces of the pipeline. Better tools for model training, tuning and testing are sorely needed as well. Lastly, we need smarter tools for deploying models across distributed architectures and managing complex infrastructure at scale, like Project Ray out of Berkeley.
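To give a flavor of what distributed tooling like Ray looks like in practice, the sketch below parallelizes an embarrassingly parallel job, such as evaluating candidate model configurations, with a single decorator. The evaluate() function and the configurations are placeholders standing in for real training or scoring work.

```python
# Sketch of distributing work with Ray: the @ray.remote decorator turns a plain
# Python function into a task that runs across the cores/nodes Ray manages.
# The evaluate() body is a placeholder for real model training or scoring.
import ray

ray.init()  # starts a local cluster; pass an address to join an existing one

@ray.remote
def evaluate(config):
    # Placeholder for an expensive step such as training a model with `config`.
    return {"config": config, "score": sum(config.values())}

configs = [{"lr": 0.1 * i, "depth": i} for i in range(1, 9)]
futures = [evaluate.remote(c) for c in configs]   # schedule tasks in parallel
results = ray.get(futures)                        # block until all tasks finish
print(max(results, key=lambda r: r["score"]))
```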

While tools like Ray are making data scientists more productive, another class of tools, like Google’s AutoML, is trying to take AI development out of the hands of data scientists altogether, allowing non-engineers and domain experts to build intelligent models. A range of tools from Google, Oracle and startups like Akkio, Runway and Lobe are trying to make machine learning more accessible through standalone tools and integrations into existing products like Google Sheets and Photoshop. While these tools are early, they’re being buoyed by the rising popularity of no-code and low-code tools, which are opening up enterprise app development to more non-engineers.

Making AI More Robust and Safe to Deploy

The last decade of AI research has produced some truly impressive demonstrations but reproducing those results in the real-world has proven more challenging than expected. Under normal real-world circumstances, models can behave in unexpected and sometimes dangerous ways while also being vulnerable to intentional attacks. The opaque nature of deep learning makes it difficult to predict machine behavior and understand why specific predictions were made. Together, these factors have made it difficult for businesses to use machine learning in mission-critical applications and in regulated industries where they could do a lot of good. To bring machine learning into the deployment phase we’ll need new approaches to building more robust and secure models that can be safely deployed.

Model Robustness

Since the early days of ImageNet, the brittleness of deep learning has been on display: from harmless classifiers mistaking chihuahuas for blueberry muffins to more disturbing cases involving facial recognition and race. Beyond unintentional errors, researchers have found ways to deliberately manipulate models: designing objects to fool security cameras and making small changes to signs and road markers to steer autonomous vehicles into traffic. It’s unclear if these kinds of attacks have been waged on live systems, but for a lot of real-world applications it’s not worth the risk.

Building robust machine learning is one of the core technical challenges of the field and has become a major focus of research over the past few years. Techniques like adversarial training make models more resilient by exposing them during training to deliberately perturbed or misleading examples, while other approaches use generative models, like the GANs behind deepfakes, to reconstruct clean examples from malicious ones. The ability to test model robustness under different circumstances is an important piece of the puzzle, and researchers are starting to build a new class of model-testing tools around it. Long term, researchers hope that more reasoning-based approaches to machine learning will be immune to the brittleness of today’s deep learning, but if and until then, better tools for robust training and testing are sorely needed.
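For readers who want the mechanics, below is a minimal sketch of one common form of adversarial training, using the fast gradient sign method (FGSM) to perturb each batch before the model learns from it. The model, data shapes and epsilon are illustrative assumptions, and real robustness pipelines typically use stronger attacks such as PGD.

```python
# Adversarial training sketch (FGSM): perturb each batch in the direction that
# increases the loss, then train on the perturbed batch. Model/data are toy stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
epsilon = 0.1  # attack budget; illustrative value

def fgsm_perturb(x, y):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step in the sign of the input gradient: the classic FGSM perturbation.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

for step in range(100):
    x = torch.randn(32, 20)                 # stand-in for a real training batch
    y = torch.randint(0, 2, (32,))
    x_adv = fgsm_perturb(x, y)
    optimizer.zero_grad()
    loss = loss_fn(model(x_adv), y)         # learn from the adversarial examples
    loss.backward()
    optimizer.step()
```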

Data Security

In an age of frequent data breaches and growing concerns around user privacy, machine learning introduces a range of new risks. Training models often involves moving large pools of user data out of secure databases and into local, unprotected environments where it is vulnerable to theft or exposure. But even without the underlying data, researchers have shown it’s possible to infer sensitive information from the model itself, a risk that actually grows with more explainable models. As machine learning has grown in popularity, so have the range of attacks and the scope of vulnerabilities, which has made it hard for regulated industries - like healthcare and consumer finance - to adopt machine learning for sensitive applications.

Fortunately, there are a number of new techniques strengthening data privacy in machine learning which we expect to be commercialized over the next few years. The most high-profile, federated learning, allows models to train on local devices without pooling user data in the cloud. Google, which developed the technique, has been using it to power auto-suggestions on the Android keyboard since 2017 with good results. Other approaches use cryptography - like secure multi-party computation and homomorphic encryption - to allow models to train and make inferences on encrypted data. An encrypted version of Google’s TensorFlow framework has even been gaining popularity in security circles. It’s not yet clear how well these solutions will scale or what effect they will have on performance, but it’s obvious that privacy-preserving tools will be a critical piece of machine learning’s deployment phase.
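To show the basic shape of the idea, here is a stripped-down sketch of federated averaging: each client trains on its own data, and only the updated model weights, never the raw data, are sent back and averaged. Everything here (the linear model, gradient step and client data) is an illustrative assumption; production systems layer secure aggregation, client sampling and compression on top.

```python
# Federated averaging sketch: clients train locally, the server averages weights.
# Linear model + plain gradient steps are illustrative; real systems add secure aggregation.
import numpy as np

rng = np.random.RandomState(0)
true_w = np.array([2.0, -1.0, 0.5])
clients = []
for _ in range(5):                                   # each client's private dataset
    X = rng.randn(100, 3)
    y = X @ true_w + 0.1 * rng.randn(100)
    clients.append((X, y))

def local_update(w, X, y, lr=0.05, epochs=5):
    """One client's training pass; only the updated weights leave the device."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

global_w = np.zeros(3)
for round_ in range(20):
    client_weights = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(client_weights, axis=0)       # FedAvg aggregation step
print(global_w)   # approaches true_w without raw data ever leaving the "devices"
```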

Explainability

For certain critical applications, robust and privacy-preserving machine learning won’t be enough. In areas like diagnosing disease and assessing creditworthiness, understanding how decisions are made has important ethical, business and regulatory implications. The ‘black box’ nature of many machine learning models makes algorithmic decisions difficult to interpret, slowing AI adoption in certain industries. These concerns are likely to increase as more countries adopt privacy laws like GDPR, which is widely read as including a “right to explanation”.

Different approaches to explainability have emerged over the past few years, focusing on different domains and application types. In computer vision, researchers at MIT built a tool that dissects the layers of a neural network and lets users identify the individual nodes responsible for identifying certain features in a scene. More recent work builds on this approach by using generative models to offer plain-language, domain-specific explanations accessible to business users. Other approaches, like functional transparency, aim to combine deep learning with statistical and causal modeling by training deep neural networks to uncover and follow causal relationships in the data they are fed.
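As a simple, generic example of the attribution style of explainability (not the MIT dissection tool itself), the sketch below computes an input-gradient saliency map: the gradient of a prediction with respect to the input shows which features pushed the model toward its decision. The model and input are illustrative placeholders.

```python
# Input-gradient saliency sketch: which input features most influenced a prediction?
# The model and input are placeholders; this is a generic attribution technique.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

x = torch.randn(1, 10, requires_grad=True)   # one example to explain
score = model(x)[0, 1]                       # score of the class of interest
score.backward()                             # gradients flow back to the input

saliency = x.grad.abs().squeeze()            # feature importance by gradient magnitude
top = torch.topk(saliency, k=3).indices.tolist()
print(f"features driving the prediction most strongly: {top}")
```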

These are just some of the areas we’re excited about at Zetta and just a few thoughts on the many things that will have to come together for businesses to truly leverage AI. We’d love to hear what you think and especially what you’re working on!