August 12, 2020

Successful ML deployments and how to deliver them

Paul Clauson

Machine learning is vital for AI projects. But why do we see so few successful ML deployments? And how can you make your project a success?

Artificial intelligence projects are becoming extremely common in all areas of business. There’s little doubt that AI will prove as revolutionary as personal computers, the Internet, and the cloud. However, we are still in the early stages of the revolution. AI relies on machine learning models. Creating and deploying these is still a real challenge for most companies. In this blog, I look at what makes for a successful ML deployment and give some tips on delivering success.

The transformative power of AI

AI has the potential to disrupt and transform almost every business and industry. This is because it can leverage all your data and generate actionable insights. Three applications of AI are particularly relevant in a business context: forecasting, anomaly detection, and knowledge discovery.

Forecasting

Forecasting allows you to predict the future and plan accordingly. It can be used to forecast demand for stock so you can plan inventory optimally. You can use it to predict spikes in demand for power and help avoid brownouts. It can even tell you the best time to launch a marketing campaign. All of these require you to build suitable ML models from your data.
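
To make this concrete, here is a minimal sketch of demand forecasting with scikit-learn, using lag features on a hypothetical daily sales series. The data and column names are made up for illustration; a real project would use your own data and likely a more capable model.

```python
# Minimal demand-forecasting sketch (hypothetical data and column names).
# It turns a daily demand series into lag features and fits a simple regressor.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical daily demand series with a weekly-ish pattern plus noise
rng = pd.date_range("2020-01-01", periods=120, freq="D")
demand = pd.Series(100 + 10 * np.sin(np.arange(120) / 7) + np.random.randn(120), index=rng)

# Build lag features: demand over the previous 7 days predicts today's demand
df = pd.DataFrame({f"lag_{i}": demand.shift(i) for i in range(1, 8)})
df["target"] = demand
df = df.dropna()

X, y = df.drop(columns="target"), df["target"]
model = LinearRegression().fit(X[:-14], y[:-14])       # train on all but the last two weeks
print("Held-out R^2:", model.score(X[-14:], y[-14:]))  # rough check on recent data
```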

Anomaly detection

Companies often need to identify anomalies and outliers. These can be a sign of impending machine failure, or they might indicate that your server is currently under attack. One of the most common uses is fraud detection in financial services.
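
As a rough illustration, the sketch below flags outlying transaction amounts with scikit-learn's Isolation Forest. The data is synthetic and the contamination setting is an assumption you would tune for your own use case.

```python
# Minimal anomaly-detection sketch using an Isolation Forest (synthetic data).
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical transaction amounts: mostly normal, with a few extreme outliers
normal = np.random.normal(loc=50, scale=10, size=(500, 1))
outliers = np.array([[500.0], [720.0], [1500.0]])
X = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = detector.predict(X)          # -1 marks an anomaly, 1 marks normal
print("Flagged transactions:", X[labels == -1].ravel())
```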

Knowledge discovery

Often, businesses are presented with the problem of finding specific data within a large unstructured dataset. A typical example is finding all the emails relating to a subject in order to respond to a lawsuit. But it can also be used to identify prior art for a possible patent application. And it can be combined with natural language processing to create an intelligent support bot.
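
One simple way to illustrate this kind of search is TF-IDF ranking. The sketch below scores a handful of made-up "emails" against a query; a production system would use a proper search index or language model, but the idea is the same.

```python
# Minimal knowledge-discovery sketch: rank documents against a query with TF-IDF.
# The documents and query are hypothetical stand-ins for an email archive.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

emails = [
    "Quarterly forecast for the EMEA sales team",
    "Patent filing draft for the new sensor design",
    "Lunch options near the office on Friday",
]
query = ["prior art for sensor patent"]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(emails)
query_vector = vectorizer.transform(query)

scores = cosine_similarity(query_vector, doc_vectors).ravel()
for email, score in sorted(zip(emails, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {email}")
```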

Machine learning—the power behind the AI

Machine learning powers the majority of these AI applications. As a result, delivering an AI project requires you to achieve a successful ML deployment. But what does that actually mean? Let’s look at the process of actually creating and deploying an ML model.

Seven steps to successful ML deployment

Creating and deploying an ML model is a long and complex process. It starts with identifying the data you have available and ends with a successful deployment in production. Generally, people identify seven distinct steps in the process.

Collect the data

First, you need to collect all the data you need. This means finding data sources, analyzing them, and identifying the data you need. Historical data can be especially problematic, since the formats are often obsolete and may have changed over time. Getting this right typically takes weeks. And bear in mind that, as a general rule, the more good-quality data you have, the better your model will perform.

Migrate to the cloud

Some ML models can run locally on laptops or powerful microcontrollers. However, training ML models requires enormous computing power. You will also be handling large volumes of data, sometimes reaching terabytes or even petabytes. So, you have to move your data into the cloud. This is a specialized task and can take a long time, especially if you are migrating large amounts of legacy data.

Clean up the data

Most real-world datasets are messy and inconsistent. Some variables may be missing; others may be wrong. So, you need a data scientist to pre-process your data. This includes removing duplicates, filtering, combining variables, and even labeling the data for supervised learning. The whole process is slow, and many enterprises say preparing data for AI is one of their biggest challenges.
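
As an illustration, here is a minimal pandas clean-up sketch. The columns and rules are hypothetical, but they mirror the steps above: deduplication, imputing missing values, and filtering out invalid records.

```python
# Minimal data-cleaning sketch with pandas (hypothetical columns and rules).
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3],
    "amount": [100.0, 100.0, None, 250.0, 250.0],
    "region": ["us", "us", "eu", None, "eu"],
})

df = df.drop_duplicates()                                   # remove duplicate rows
df["amount"] = df["amount"].fillna(df["amount"].median())   # impute missing values
df["region"] = df["region"].fillna("unknown")               # label missing categories
df = df[df["amount"] > 0]                                   # filter out invalid records
print(df)
```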

Select your ML model

Now you’re finally ready to start creating your AI model. This starts with model selection. Machine learning is a hugely active field of research, and there are thousands of different ML models out there. Some are used for supervised learning and others for unsupervised learning. They include all kinds of algorithms and structures, such as neural networks, decision trees, and random forests. Choosing the correct model for your application is vital, but there are often no clear rules. In many cases, it comes down to gut instinct and experience.
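
One common, if rough, way to narrow the field is to compare a few candidate models with cross-validation. The sketch below does this with scikit-learn on a synthetic dataset; your own features, labels, and shortlist of models would of course differ.

```python
# Minimal model-selection sketch: compare candidate models with cross-validation.
# The dataset is a synthetic stand-in for your own features and labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(),
    "random_forest": RandomForestClassifier(),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```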

Train and verify the model

So, you have a model and a large lake of data available in the cloud. The next step is actually training the model. (The following assumes you are using supervised learning; there is an equivalent step for unsupervised learning, but I won’t describe it here.) Training starts with splitting your data into three sets. The bulk of your data (typically 75-80%) is used for training; the rest is set aside for the next steps. The goal of training is to teach your model to correctly identify the features in your data that interest you. For instance, you might feed in photos of pets and teach the model to identify which photos show dogs.

Training is a long and iterative process. Each time you train the model, you need to assess whether it is accurate using data that wasn’t used for training. This is what some of the remaining data is used for. Eventually, you will have a model that is reasonably accurate.
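
Here is a minimal sketch of that split-then-train workflow using scikit-learn. The dataset is synthetic, and the split percentages simply follow the rough guideline above.

```python
# Minimal supervised-training sketch: split the data three ways, train, then
# check accuracy on data the model has never seen (synthetic dataset).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Carve off 20% as a final test set, then take 20% of the rest for validation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("Validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
```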

Validate the model is fit for purpose

At this stage, you have a trained model and can test whether it is suitable for the job at hand. In the validation step, you tune the model’s control parameters. These are called hyperparameters, and they affect things like how sensitive the model is to noisy data. Validation typically takes days to weeks, depending on the task, and you will usually have to repeat the process many times. This step uses the last of the data you set aside in step 5.
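
A common way to automate this tuning is a grid search with cross-validation. The sketch below shows the idea with scikit-learn; the parameter grid is a hypothetical example, not a recommendation.

```python
# Minimal hyperparameter-tuning sketch with a grid search (hypothetical grid).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [5, 10, None],   # one lever on how sensitive the model is to noise
}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)
print("Best hyperparameters:", search.best_params_)
print("Best cross-validated accuracy:", round(search.best_score_, 3))
```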

Deploy the model in production

The final stage is the critical one, and it is often overlooked. How do you actually embed your model in your production system? How do you make sure you can use the model for its intended purpose? In short, how do you ensure a successful ML deployment? Fortunately, there are a few standard approaches that can help, as we will see next.

Deploying an ML model in the field

ML models can be deployed in two ways: they can run in the cloud on specialized virtual servers and be accessed via an API, or they can run on an edge device, relying on high-powered ARM microcontrollers or GPU-enabled laptops. In the first case, your challenge is creating a suitable API that makes it easy to call the model. You need to be able to feed in your new data, run the model, and retrieve the results. This is a pure programming problem of the sort full-stack developers are used to solving.
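
For the cloud case, a minimal serving sketch might look like the following, here using FastAPI. The model file, feature layout, endpoint path, and module name are all assumptions to be replaced with your own.

```python
# Minimal model-serving sketch with FastAPI. The model file name, feature
# layout, and endpoint path are hypothetical; swap in your own trained model.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")    # a previously trained scikit-learn model


class Features(BaseModel):
    values: list[float]                # one row of feature values


@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.values])
    return {"prediction": prediction.tolist()}

# If this file were named serve.py, you could run it with:
#   uvicorn serve:app --host 0.0.0.0 --port 8000
```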

In the second case, the problem is enabling a complex model to run on relatively low-power hardware. Typically, the solution is to turn to TensorFlow Lite or a similar lightweight ML framework. You can then create a program that embeds the model alongside the source of the data (e.g. sensors). This is a whole topic in itself; suffice it to say that it requires engineers with experience of building real-time applications.
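
For the edge case, the conversion step typically looks something like this sketch, which shrinks a trained TensorFlow model into a TensorFlow Lite file ready to embed on a device. The saved-model path is a placeholder for your own trained model.

```python
# Minimal TensorFlow Lite conversion sketch: shrink a trained model so it can
# run on an edge device. "saved_model_dir" is a placeholder path.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # quantize to reduce size
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```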

Conclusions

Successful ML deployments are achievable if you allow yourself enough time. However, you will need to employ skilled engineers and spend a significant amount renting virtual servers. Alternatively, you could try Sonasoft Nugene, our AI bot factory. This shortens the entire process and delivers working ML models that are ready to plug and play.
