Machine Learning Technology Stack – Decision Making Steps

22 Jan 2020

The technology arena is moving incredibly quickly. Therefore, I believe it’s important to bring experts together to discuss, debate, and exchange ideas to help us all navigate the rapidly changing landscape. We organize and sponsor TechDebates.org to accomplish exactly this. Recently, we held a TechDebate in Chicago to discuss the best way to approach machine learning technology stacks. Dan Kirsche, head of software engineering for Enova, brought his rich experience to the discussion and shared lessons from his tech leadership roles.

After the TechDebate, I had the opportunity to talk with Dan a bit more about the topic.

There’s a drive toward requiring less machine learning training data and collecting that data more quickly. What is your view on this?

DAN: Humans, both adults and children, are very good at learning from a small sample size and then applying that knowledge to other situations. Because the knowledge is transferable, this greatly reduces the time it takes to learn something new. Computers, on the other hand, need huge data sets even for basic decision-making. Recently there has been a focus in machine learning on developing ways to reduce the number of data points needed to build high-performing models. This would greatly shorten model building simply by cutting the time spent collecting data.
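To make the idea of transferable knowledge concrete, here is a minimal transfer-learning sketch, not from the interview: a backbone pretrained on a large dataset is frozen and only a small new head is trained, so the new task can be learned from far fewer labelled examples. The five-class task and the random tensors are placeholders for real data.

```python
# Minimal transfer-learning sketch (PyTorch / torchvision).
# Assumption: the 5-class task and random tensors stand in for a real, small dataset.
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on ImageNet and freeze its weights.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer; only this small head is trained on the new task.
num_classes = 5  # placeholder for the new task
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a small labelled dataset (e.g., a few hundred images).
images = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, num_classes, (32,))

for epoch in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```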

Consider self-driving cars as an example. There are many variables a computer needs to consider to make a driving decision, so if data acquired through actual experience is the only data available to teach the computer, it’s going to take a long time to gather data for nearly limitless decision-making.

There is another way to reduce data collection time: generate simulated data. Waymo, formerly the Google self-driving car project, had driven about eight million miles as of its announcement in the summer of 2018, but it had driven more than five billion miles in simulation. Learning from that simulated data is allowing Waymo to become far smarter far more quickly than if it were solely reliant on learning from actual miles driven.
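As a toy illustration of the simulate-then-train idea (the braking scenario, feature names, and rule below are invented for illustration and are not Waymo's actual pipeline), a simulator can produce millions of cheap labelled examples, with a small set of real observations reserved for checking the result:

```python
# Toy "simulate, then train" sketch. All names and the scenario are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate_scenarios(n):
    """Simulate (speed, distance_to_obstacle) pairs and whether braking is required."""
    speed = rng.uniform(0, 30, n)       # m/s
    distance = rng.uniform(1, 100, n)   # m
    # Simple physics-like rule standing in for a real driving simulator.
    must_brake = (distance < speed * 2).astype(int)
    return np.column_stack([speed, distance]), must_brake

# Millions of cheap simulated examples vs. a small amount of "real" data.
X_sim, y_sim = simulate_scenarios(1_000_000)
X_real, y_real = simulate_scenarios(500)   # stand-in for logged real miles

model = LogisticRegression().fit(X_sim, y_sim)
print("accuracy on held-out 'real' data:", model.score(X_real, y_real))
```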

What are the most challenging aspects of machine learning projects?

DAN: Many teams first focus on automating training and deployment, but that is actually the easiest part. Data is the biggest challenge. Collecting the right data, transforming it, ensuring canonical data is used, and cleansing it are the more challenging aspects of machine learning. Another challenge is using models in production in a performant manner. Models often require both customer-supplied data and aggregated data, and it’s not easy to generate those features, score them, and return the prediction with minimal latency.
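One common pattern for keeping scoring latency low is to precompute the aggregated features offline and cache them, so a request only has to merge its own fields with the cache and call a model already in memory. The sketch below illustrates that pattern; the feature names, customer ID, and toy model are assumptions, not Enova's actual stack.

```python
# Sketch of low-latency scoring: merge request-time inputs with cached aggregates.
# Assumptions: feature names, customer IDs, and the toy model are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURE_ORDER = ["requested_amount", "income", "avg_balance_90d", "num_prior_loans"]

# Stand-in for a model loaded once at service startup (here, trained on toy data).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, len(FEATURE_ORDER)))
y_train = (X_train[:, 1] - X_train[:, 0] > 0).astype(int)
MODEL = LogisticRegression().fit(X_train, y_train)

# Stand-in for a feature store / cache refreshed offline with aggregated data.
AGGREGATE_CACHE = {
    "customer_123": {"avg_balance_90d": 1850.0, "num_prior_loans": 2},
}

def score(customer_id, form_data):
    """Merge customer-supplied fields with cached aggregates and return a probability."""
    features = {**form_data, **AGGREGATE_CACHE.get(customer_id, {})}
    row = [features.get(name, 0.0) for name in FEATURE_ORDER]
    return MODEL.predict_proba([row])[0, 1]

print(score("customer_123", {"requested_amount": 2000.0, "income": 55000.0}))
```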

How do you prioritize machine learning decision-making and build out your machine learning technology stack?

DAN: It’s good to start with a manual process and then refine based on your objectives. There are four steps in managing the data that the tech stack can automate: collecting, storing, cleansing and transforming, and generating data sets. During model building, there are two additional steps: determining which inputs to use and which algorithms perform best.
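A minimal sketch of how those steps might map onto a standard toolkit, assuming scikit-learn and a synthetic dataset in place of real collected data:

```python
# Sketch mapping the steps above onto scikit-learn.
# Assumption: make_classification stands in for data already collected and stored.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Collect / store: stand-in for previously collected data.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=6, random_state=0)

# Generate data sets (the held-out split is kept for a final check).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Cleanse & transform, choose inputs, then compare candidate algorithms.
for name, estimator in [("logreg", LogisticRegression(max_iter=1000)),
                        ("forest", RandomForestClassifier())]:
    pipe = Pipeline([
        ("impute", SimpleImputer()),               # cleansing
        ("scale", StandardScaler()),               # transforming
        ("select", SelectKBest(f_classif, k=10)),  # which inputs to keep
        ("model", estimator),                      # candidate algorithm
    ])
    scores = cross_val_score(pipe, X_train, y_train, cv=5)
    print(name, scores.mean())
```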

Finally, once the model is in production, things change (e.g., consumer habits, the business itself, seasonality), so you need to retrain the model accordingly.
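One simple retraining policy, sketched below with an assumed tolerance and rolling window of recent data, is to monitor recent performance and refit whenever it degrades noticeably, so the model keeps tracking seasonal or behavioural change:

```python
# Sketch of a performance-triggered retraining check; threshold and window are assumptions.
from sklearn.base import clone

def maybe_retrain(model, X_recent, y_recent, baseline_score, tolerance=0.05):
    """Refit the model on recent data if its score has dropped noticeably."""
    current = model.score(X_recent, y_recent)
    if current < baseline_score - tolerance:
        model = clone(model).fit(X_recent, y_recent)   # retrain on fresh data
    return model
```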

Each use case has different needs, so looking at where you spend the most time is a good place to invest in automation first.

Machine learning technologies progress rapidly, so you want to do as little custom building as possible. It’s also important to use popular and progressive technologies so you can more easily recruit top talent, since good engineers want to stay relevant and build skills with the current toolbox of technologies.

Where is machine learning heading?

DAN: There’s the near term, and there’s the long term. Machine learning used to be a very manual process of building models, training them, and retraining them, and hardware was expensive. Now we’re building better algorithms, our hardware is getting better, and there is automation throughout. So I think in the near term machine learning will continue to see more automation. We are seeing the data selection step become more automated, which allows us to build models more cheaply and quickly, accelerating the spread of machine learning into more facets of our lives.

In the long term, I think the question is more of what machine learning is doing to our communities and our world. Using deep learning and neural networks in addition to statistical algorithms, machines are getting smarter quickly. They’re beginning to bridge the gap between what a machine is capable of doing and what humans can do.

This is similar to what happened during the Industrial Revolution, when machines took over the manual work that humans were doing. In this revolution, it’s the knowledge work that machines are performing. Mundane tasks are the first to be automated, and even on complex tasks, machines are augmenting humans to be more productive. The future of work is an interesting topic these days and something all of us will get to figure out.

To learn more about rapidly evolving machine learning and other technological advancements, watch for upcoming TechDebates being held around the world. Our company, Sphere Software, is a sponsor and organizer of TechDebates.