In 1960, the physicist Eugene Wigner wrote a famous essay titled “The Unreasonable Effectiveness of Mathematics in the Natural Sciences,” in which he explored why mathematics is so remarkably useful in describing the physical world.

A contemporary example of such “unreasonable effectiveness” is the success machine learning has had in transforming many disciplines over the past decade. Particularly impressive is the progress in autonomous vehicles. In the 2004 DARPA Grand Challenge, which popularized the idea of driverless cars, none of the vehicles was able to complete a relatively simple route through the Mojave Desert. At the time, I thought it unlikely that I would see driverless cars operating in urban environments in my lifetime.

Since that time, progress in this area has been phenomenal, thanks to rapid advances in using machine learning for sensing and navigation (and in building low-cost sensors and controls). Driverless long-haul trucks are apparently just a few years away, and the main worry now is not so much the safety of these trucks as the specter of unemployment facing the millions of people currently employed as truck drivers.

In contrast to these astonishing changes, the impact of machine learning on traditional computer science areas like computer architecture, operating systems, and programming languages has been minimal.

By this, I don’t mean that computer architects and systems designers have ignored machine learning. Systems like GraphLab and Galois have been designed to make it easier to write machine learning applications on high-performance parallel platforms. The architecture community is exploring energy-efficient implementations of deep neural nets, and last year’s ISCA had no fewer than three sessions on architectures for deep neural nets! What these examples show, however, is that good systems research can have an impact on the implementation of machine learning applications; what I am looking for are examples of how the systems area itself has been transformed by machine learning.

Here’s the kind of thing I have in mind.

Back in 2001, my colleague Calvin Lin and his student Daniel Jimenez used a perceptron for branch prediction, and they showed that their predictor was more accurate than conventional predictors like two-bit counters because it was more effective at exploiting long branch histories. AMD uses a perceptron-based predictor in its new Ryzen processor (I should point out, though, that the 2016 Branch Predictor Championship was won by Andre Seznec’s TAGE predictor, which beat all perceptron-based predictors!).
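For readers unfamiliar with the idea, here is a minimal software sketch of a perceptron predictor. It is meant only to convey the concept, not to reproduce Jimenez and Lin’s hardware design; the table size, history length, and training threshold below are illustrative choices.

```python
# Minimal sketch of a perceptron branch predictor (illustrative only;
# not the hardware design from the 2001 Jimenez-Lin paper).

class PerceptronPredictor:
    def __init__(self, history_length=32, num_perceptrons=1024):
        self.history_length = history_length
        self.num_perceptrons = num_perceptrons
        # One weight vector (bias + one weight per history bit) per
        # table entry, indexed by a hash of the branch address.
        self.weights = [[0] * (history_length + 1)
                        for _ in range(num_perceptrons)]
        # Global history register: +1 for taken, -1 for not taken.
        self.history = [1] * history_length
        # Training threshold; roughly 1.93 * h + 14 is the heuristic
        # cited in the original paper.
        self.theta = int(1.93 * history_length + 14)

    def _output(self, pc):
        w = self.weights[pc % self.num_perceptrons]
        return w[0] + sum(wi * hi for wi, hi in zip(w[1:], self.history))

    def predict(self, pc):
        # Predict taken if the dot product is non-negative.
        return self._output(pc) >= 0

    def update(self, pc, taken):
        y = self._output(pc)
        t = 1 if taken else -1
        w = self.weights[pc % self.num_perceptrons]
        # Train only on a misprediction or when confidence is low.
        if (y >= 0) != taken or abs(y) <= self.theta:
            w[0] += t
            for i, h in enumerate(self.history):
                w[i + 1] += t * h
        # Shift the new outcome into the global history register.
        self.history = [t] + self.history[:-1]
```

The weights let the predictor learn which of the last several branch outcomes actually correlate with the current branch, which is precisely what makes long histories exploitable.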

Another example is the use of regression techniques from machine learning to build models of program behavior. If I replace 64-bit arithmetic with 32-bit arithmetic in a program, how much does it change the output of the program, and how much does it reduce energy consumption? For many programs, it is not feasible to build analytical models to answer such questions (among other things, the answers usually depend on the input values), but Xin Sui has shown that, given enough training data, you can often use regression to build (non-linear) proxy functions that answer them fairly accurately.
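To make the approach concrete, here is a hypothetical sketch of how such a proxy model might be trained with off-the-shelf regression. The feature set and the synthetic data are invented for illustration; the real work depends on program-specific features and measured outputs.

```python
# Hypothetical sketch: learning a proxy for the output error introduced
# when 64-bit arithmetic is lowered to 32-bit. Features and data here
# are synthetic, purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Each row: features summarizing one (program, input) configuration,
# e.g. input magnitude, loop trip count, fraction of ops lowered.
X_train = rng.random((500, 3))
# Target: measured relative output error for that configuration
# (synthesized here with an arbitrary nonlinear function).
y_train = np.exp(4 * X_train[:, 0]) * X_train[:, 2] + 0.1 * X_train[:, 1]

# A non-linear regressor serves as the proxy function.
proxy = RandomForestRegressor(n_estimators=100, random_state=0)
proxy.fit(X_train, y_train)

# Query the proxy instead of re-running the (expensive) program
# under every candidate precision configuration.
X_new = rng.random((5, 3))
predicted_error = proxy.predict(X_new)
```

The payoff is that once the proxy is trained, exploring the precision/energy trade-off space becomes a matter of cheap model queries rather than costly program runs.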

Neither of these examples, however, is a truly transformative use of machine learning in systems building. By transformative, I mean something that causes people to wonder whether computer architects, programmers, compiler writers, or operating systems builders will soon be joining truck drivers and limo drivers in the unemployment line!

It is possible that I am wrong and that machine learning has already revolutionized some aspects of systems building. If so, I welcome your feedback about these revolutions, and I will summarize the more interesting responses in my next blog post.

But if I am not wrong and the revolution has not yet begun, isn’t it time to get to work?

Source: Computer Architecture Today