Innovations of the Next Decade

03 December 2019

We are on the brink of an exciting new decade for AI. We are seeing a proliferation of mind-boggling applications that go far beyond the breakthroughs of the past decade. So far, companies have embraced AI to optimize existing enterprise workflows (applications optimized with AI) or to augment and automate human labor (applications automated with AI). In the next decade, AI will be used to tackle problems previously unsolvable by the human race.

We previously wrote about seven key areas:

1. Making models work on ‘small data’ or less data;

2. Creating smaller neural nets that can run with less compute or on edge devices;

3. Model reproducibility or explainability, so these complicated applications can be trusted by society;

4. The Commercialization of GANs (Generative Adversarial Networks);

5. Model security;

6. Algorithmic bias; and

7. Privacy preserving machine learning.

These seven areas are explained in more detail below, and we are adding an eighth area: AI efficiency.

AI efficiency. While AI holds vast promise for reducing environmental and economic waste, building and training modern ML systems comes with surprising costs. An analysis from OpenAI showed that the amount of compute required to train large models has doubled every 3.4 months since 2012, resulting in a staggering 300,000x increase (compared to a 7x increase in processing power via Moore’s Law). Researchers blame this spike in demand on the emergence of massively parallel training architectures and more sophisticated algorithms that learn from simulated experiments and self-play instead of labeled data.
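
To put the doubling math in perspective, here is a quick back-of-the-envelope check in Python; the elapsed time is approximate (AlexNet in 2012 through the AlphaGo Zero era):

```python
# Back-of-the-envelope check of the OpenAI figure: a 3.4-month doubling
# time, compounded over roughly 5.2 years, implies about a 300,000x
# increase in training compute.

doubling_months = 3.4
months_elapsed = 62        # ~5.2 years, 2012 to late 2017 (approximate)

ai_growth = 2 ** (months_elapsed / doubling_months)
print(f"Implied AI compute growth: {ai_growth:,.0f}x")           # ~300,000x

# Moore's Law (doubling every ~24 months) over the same window is in
# the same ballpark as the ~7x figure cited above.
moore_growth = 2 ** (months_elapsed / 24)
print(f"Moore's Law over the same period: {moore_growth:.1f}x")  # ~6x
```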

As these techniques become more popular, the cost of developing cutting-edge AI could rise significantly - putting it out of reach of startups and academic researchers - and the environmental costs are likely to mount unless researchers can develop more efficient methods. Fortunately, a number of efforts to train models more efficiently are underway. A team from MIT has demonstrated a Neural Architecture Search algorithm that achieves state-of-the-art performance with 200x less compute, while a team at Stanford is developing new chips, modeled after the human brain, that can train models with 100x less energy than current GPUs. These are early efforts, to be sure, but this is an area in which we expect to see a lot more work over the next decade.

Here are the seven previous areas we highlighted as candidates for innovation breakthroughs in the next decade.

Privacy preserving machine learning. In the age of nearly constant data breaches, data privacy has become a defining issue for tech regulators, leading to sweeping laws like GDPR in Europe and San Francisco’s recent decision to ban facial recognition software. This poses a particular challenge for machine learning where user data is collected and pooled in the cloud in order to train intelligent systems. However, a new class of privacy-preserving machine learning architectures such as ‘Federated Learning’ could hold the key to maintaining user privacy while allowing models to train on real world data.

In federated learning, a model is downloaded to an edge device — like a mobile phone — where it runs locally and sends a periodic summary of its learnings as an encrypted message to the cloud. There, thousands of individual summaries are averaged together to update the model without user data ever leaving the device. Originally developed at Google in 2017 to power keyboard recommendations, federated learning has since inspired a range of new techniques and applications.
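
To make the idea concrete, here is a minimal sketch of the federated averaging loop in Python; the toy weight vector and fake gradient step are ours for illustration, not Google's actual implementation:

```python
import numpy as np

# A minimal sketch of federated averaging: each device updates a copy of
# the global model locally and returns only weights, never raw user data.

def local_update(global_weights, lr=0.01):
    """Runs on the device: start from the global model, train on local
    data, and return only the updated weights. The gradient here is
    faked; a real client would run SGD on its own data."""
    fake_gradient = np.random.randn(*global_weights.shape)
    return global_weights - lr * fake_gradient

def federated_average(client_weights):
    """Runs in the cloud: average many per-device updates into one model."""
    return np.mean(client_weights, axis=0)

global_weights = np.zeros(10)            # toy model with 10 parameters
for _ in range(5):                       # a few communication rounds
    updates = [local_update(global_weights) for _ in range(100)]  # 100 devices
    global_weights = federated_average(updates)
```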

In particular, privacy-preserving ML is catching on in healthcare, where it's being used to read EEGs, develop drugs and predict cardiac arrest through smart speakers at home. Other groups, concerned with device-level vulnerabilities, are combining federated learning with encrypted hardware and are working with major healthcare systems like Stanford.

It’s still early days for the technology — with plenty of performance limitations and security vulnerabilities to work out — but we expect to see many more applications of privacy-preserving machine learning in the coming decade.

The ethics of algorithmic bias. Algorithmic bias is a growing concern as more decision making is turned over to intelligent software. Discrimination - on the basis of gender, race, socio-economic and health status - has been observed across a range of predictive systems, from sentencing to loan applications to resume screening. Beyond the clear ethical implications, algorithmic bias potentially creates liability for businesses that risk violating laws like the Fair Housing Act (FHA), the Americans with Disabilities Act (ADA) and the Fair Credit Reporting Act (FCRA).

No single solution will suffice because algorithmic bias has a variety of causes and forms, but AI researchers are working on a range of new techniques for ensuring fairer decision making. Researchers at Columbia and MIT built a 'counterfactual model' for comparing algorithms against alternatives and human discretion, while a team at IBM Research created a set of open-source fairness benchmarks against which to test models. Other researchers have proposed building models with more representative subsampling and have developed techniques for hiding sensitive information in training data to mitigate bias and build fairer systems; a variation of the latter was recently implemented by LinkedIn to reduce bias in its search rankings.
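
As one concrete illustration, here is a minimal sketch (with made-up data and an arbitrary tolerance) of a demographic parity check, one of the simpler metrics bundled in toolkits like IBM's open-source AIF360:

```python
import numpy as np

# Demographic parity compares a model's positive-prediction rate across
# groups; a large gap flags potential disparate impact.

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # toy model decisions
group       = np.array(["a", "a", "a", "b", "b", "b", "a", "b", "b", "a"])

def positive_rate(preds, groups, g):
    """Fraction of members of group g who received a positive decision."""
    return preds[groups == g].mean()

rate_a = positive_rate(predictions, group, "a")
rate_b = positive_rate(predictions, group, "b")
print(f"group a: {rate_a:.2f}, group b: {rate_b:.2f}")

if abs(rate_a - rate_b) > 0.1:           # tolerance chosen for the example
    print("Warning: positive-prediction rates differ across groups")
```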

While it’s early days, we expect to see much more work done in the area of algorithmic bias across startups, large companies and policy organizations.

Making models work on 'small data' or less data. There will be a greater need to make AI models work on less data. The most conventional approach will be 'transfer learning', which allows new models to be easily created from other, more generic models. Building AI-first companies on small data is hard, but very valuable. Google recently pioneered AutoML so new models could be created and trained where previous models left off. For example, a generic city model trained on a large variety of data could be the starting point for transfer learning to create a model for New York or a model for Chicago. While transfer learning will reduce the effort and data science skill required, only a subset of applications will allow a new model to be created with 'small data'; without enough 'big data', the algorithm will often fail. Other promising techniques are being created, including reinforcement learning, agent-based modeling, expert swarms and data reduction frameworks. At least one of these could prove useful for making new models work well on 'small data'.
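
To make the transfer-learning idea concrete, here is a minimal PyTorch sketch of adapting a generic pretrained model to a small dataset; the 10-class head and the choice to freeze everything else are illustrative:

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer learning in miniature: reuse a generic model's learned
# features and retrain only a small task-specific head on 'small data'.

model = models.resnet18(pretrained=True)       # generic model trained on ImageNet

for param in model.parameters():               # freeze the generic features
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 10) # new head for the small dataset

# Only the new head is optimized, so far less data (and labeling) is needed.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```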

Creating more compact neural nets. In the past few months, we have seen research breakthroughs at MIT, Princeton, Berkeley and SRI that can compress neural nets to as little as 10% of their original size without loss of accuracy. This has the potential to let us run learning models on low-power sensors and cell phones, opening up a whole new world of AI that goes beyond GPUs to standard CPUs and small, inexpensive devices by pushing AI computing to the edge. This will empower a new set of real-time, local applications, from self-driving cars to consumer electronics: a next-generation world of embedded devices.
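
One technique behind results like these is magnitude pruning: zero out the smallest weights, then fine-tune what remains. Here is a minimal single-pass sketch; the papers above use more sophisticated, iterative prune-and-retrain variants:

```python
import numpy as np

# Magnitude pruning: keep only the largest weights by absolute value.
# Reaching ~10% of the original size without accuracy loss typically
# requires repeated prune/fine-tune cycles; this shows one pass.

def magnitude_prune(weights, keep_fraction=0.1):
    """Return the weights with all but the top `keep_fraction` zeroed,
    plus the sparsity mask used to do it."""
    threshold = np.quantile(np.abs(weights).ravel(), 1 - keep_fraction)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

layer = np.random.randn(256, 256)               # a toy dense layer
pruned, mask = magnitude_prune(layer, keep_fraction=0.1)
print(f"Nonzero weights remaining: {mask.mean():.0%}")   # ~10%
```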

Model reproducibility and explainability. Over the past six months, McKinsey has written two pieces looking at both sides of AI: the side that offers great economic impact — as much as $13T by 2030 — and the side that highlights the risks of AI — loss of life, the compromise of national security, reputational damage, and regulatory backlash. While these risks certainly include issues with data, technology, and security, a key risk is models misbehaving. For these new applications, model reproducibility and explainability will be a must, allowing a company to quickly backtrack before losing public trust. Pete Warden at Google refers to this as the ML reproducibility crisis. IBM refers to creating trust and transparency as the key to the AI revolution. At Zetta, we are investors in Domino Data Lab, which pioneered and has advanced the industry's leading, patent-pending model reproducibility engine as part of its model management platform.
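
In practice, reproducibility starts with mechanics as simple as pinning every source of randomness and recording a fingerprint of the exact config and data behind a run. A minimal sketch, with illustrative field names and a hypothetical data file:

```python
import hashlib
import json
import random

import numpy as np

# Pin random seeds so a training run can be repeated bit-for-bit, and
# record what was run so it can be audited later.

SEED = 42
random.seed(SEED)
np.random.seed(SEED)

config = {"model": "resnet18", "lr": 1e-3, "epochs": 10, "seed": SEED}

def fingerprint(path):
    """Hash a training data file so the run is tied to exact inputs."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

run_record = {
    "config": config,
    # "data_sha256": fingerprint("train.csv"),   # hypothetical data file
}
print(json.dumps(run_record, indent=2))
```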

The Commercialization of GANs (Generative Adversarial Networks). Generative Adversarial Networks use two neural nets to train a model. One neural net, the generator, creates 'close' artificial samples based on a small amount of training data. The other neural net, the discriminator, decides whether each sample is real; the two get smarter as they compete. Like a super 'creative and discriminating' human, the system can understand patterns and compositions in unlabeled datasets and produce new real-world data. Ian Goodfellow and his colleagues introduced the concept of GANs in 2014, and AI pioneer Yann LeCun has described GANs as one of the biggest breakthroughs in machine learning in the last 20 years. GANs have been used in fashion and advertising, art generation, video games, and facial and voice imitation. While toy-like in application, the results can sometimes be quite sophisticated: Christie's sold a human portrait created by a GAN for $432,500 in October 2018. We will see serious breakthrough applications in marketing, cybersecurity, architecture, crime-solving, scientific discovery, climate change mitigation, and medical imaging enhancement in the next decade. For example, earlier this year, GANs were used to successfully model the distribution of dark matter in space. GANs will produce a decade of applications where computers solve problems beyond the capabilities of even the smartest and most well-equipped human beings.
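
To make the generator-versus-discriminator dynamic concrete, here is a minimal PyTorch sketch of a GAN learning a one-dimensional Gaussian; real applications swap in images and far larger networks:

```python
import torch
import torch.nn as nn

# A tiny GAN: the generator learns to produce samples the discriminator
# cannot tell apart from draws of a true distribution (a 1-D Gaussian).

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))               # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()) # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0      # samples from the true distribution
    fake = G(torch.randn(64, 8))               # generator's attempts

    # Discriminator: learn to label real as 1 and fake as 0.
    d_loss = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: learn to fool the discriminator into labeling fakes as 1.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```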

Making models more secure. Model security is a growing concern as machine learning becomes more pervasive. Researchers have demonstrated that neural networks can be easily fooled into misclassifying objects — mistaking a toy turtle for a gun or a stop sign for a speed limit marker — and have even steered a Tesla into oncoming traffic by adding a few small stickers to the road. Luckily, researchers are working on ways to protect AI against these so-called 'adversarial attacks' by training models on examples designed to fool them and by nesting models within each other to smooth the space exploited by certain kinds of attacks. Research from UMD, CMU and OpenAI has shown these techniques to be effective against the most common forms of attack, but the authors are quick to admit they can be overcome by bad actors with greater computing power and by new modes of attack. Over the next decade, we expect model security to be a major area of development and a key enabler of AI adoption in sectors like transportation, medicine and defense.
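
As an illustration of the defense described above, here is a minimal PyTorch sketch of adversarial training against FGSM (the fast gradient sign method), one of the most common attacks; the model and data are toys:

```python
import torch
import torch.nn as nn

# Adversarial training in miniature: craft an input perturbation that
# maximizes the model's loss, then train on the perturbed examples.

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def fgsm(x, y, epsilon=0.1):
    """Perturb x in the direction that most increases the model's loss."""
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).detach()

x, y = torch.randn(32, 4), torch.randint(0, 2, (32,))   # toy batch
x_adv = fgsm(x, y)                                      # adversarial batch

# Train on the adversarial examples so the model learns to resist them.
loss = loss_fn(model(x_adv), y)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```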

We will be writing more about AI-enabled applications and the technological breakthroughs needed to support them every month for the rest of this year as a forerunner to an exciting decade ahead for AI investors. We welcome and look forward to your thoughts — the wisdom of our community — to help us refine our criteria and thinking.