What is MLOps? Understanding Machine Learning in Production

WekaIO Inc. December 15, 2020
The last decade has seen major advancements in artificial intelligence capabilities and in the ability to process and evaluate big data. Consequently, organizations have begun adopting machine learning for use in business solutions. In turn, it has become increasingly important to institute processes for developing, deploying, and maintaining these machine learning systems in a reliable and efficient manner. Enter MLOps.

MLOps represents a set of practices for developing and maintaining machine learning applications. Below, we’ll delve into some of the practices that govern an MLOps implementation and how these practices can help data engineers, developers, and operations personnel in producing machine learning solutions that are as effective and dependable as possible.

What is MLOps? And how does it compare with DevOps?

As mentioned, MLOps represents a set of practices for developing and maintaining machine learning systems. If you are familiar with DevOps (and by this point you almost certainly are), then this sentence may sound familiar: DevOps is a set of guidelines and practices that assist teams looking to develop reliable software within shortened development cycles. 

This is accomplished through the use of process automation and tooling that enables continuous integration, continuous delivery, and increased reliability with the help of automated unit and integration testing. When implemented properly, these practices and workflows help development and operations teams to validate code changes in an efficient manner and deliver high-quality functionality in shorter time frames.

In many respects, the goals of DevOps and MLOps are similar. While the objective of DevOps is to develop reliable software systems in shorter cycles, the objective of MLOps is to more efficiently develop and deliver reliable machine learning systems. That said, traditional software products and machine learning systems differ greatly, and the processes for managing them need to account for these differences. Before we get into MLOps best practices, let's take a look at the unique complexities associated with developing and maintaining machine learning systems.

The Challenges in Building and Managing Machine Learning Systems

Machine learning (ML) systems are trained on large datasets. Before training, the data within these datasets is cleaned in a manner that facilitates the development of an effective machine learning model. To do so, data engineers develop data pipelines that produce clean datasets.

The process of data collection never yields a result set that can be used immediately to train an ML model. Data pipelines therefore contain cleansing processes that are crucial to producing usable datasets: correcting inaccurate data, removing duplicate records, removing irrelevant or invariable fields, dealing with outlying records (those that differ significantly from the rest of the dataset), and more. These operations refine the data into clean datasets, free of the inconsistencies found in raw collected data. Clean datasets allow the model to recognize patterns in the data more easily and accurately, enabling the development of consistent predictive behavior. Training a model on unclean data, in contrast, can produce erratic and inconsistent model behavior, rendering the end product useless.
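The cleansing steps described above can be sketched in code. The following is a minimal illustration, not a production pipeline; the record structure and the 3-standard-deviation outlier rule are assumptions made for the example.

```python
from statistics import mean, stdev

def clean_dataset(records):
    """Apply basic cleansing to a raw dataset (a list of dicts with identical keys)."""
    # Remove duplicate records.
    seen, deduped = set(), []
    for rec in records:
        key = tuple(sorted(rec.items()))
        if key not in seen:
            seen.add(key)
            deduped.append(rec)
    # Remove invariable fields (those holding a single value across all records).
    keys = [k for k in deduped[0] if len({r[k] for r in deduped}) > 1]
    deduped = [{k: r[k] for k in keys} for r in deduped]
    # Deal with outlying records: drop any record whose numeric fields fall
    # more than 3 standard deviations from that field's mean (an assumed rule).
    numeric = [k for k in keys
               if all(isinstance(r[k], (int, float)) for r in deduped)]
    stats = {k: (mean(r[k] for r in deduped), stdev(r[k] for r in deduped))
             for k in numeric}
    def is_outlier(rec):
        return any(s > 0 and abs(rec[k] - m) / s > 3
                   for k, (m, s) in stats.items())
    return [r for r in deduped if not is_outlier(r)]
```

In a real pipeline these steps would typically be implemented with a dataframe library and run as scheduled, versioned jobs rather than a single function.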

Data scientists then pick up the process from here, developing an ML pipeline that produces a model that provides the desired predictive capabilities. Finally, the resulting model pipeline is made available to software engineers who operationalize it for use in production.

In other words, instead of managing one pipeline, validating one code base, and monitoring one application (as would be the case with a traditional software product), an ML system has a greater number of properties that need to be managed and monitored. These include the data pipeline, the model pipeline, the release process, and the resulting model's performance in production, the last of which can change as the data being provided to it evolves.

What do we mean by the performance of the production model changing over time? Let's consider the case of a typical software development pipeline. The behavior of a traditional software product is defined solely by its code. Requirements are defined by the business, the code is written to satisfy these requirements, and a packaged application is deployed to production where it can be utilized by the end users. Once deployed, the code does not change and, therefore, neither does the behavior of the application. With a machine learning system, however, an additional element defines the behavior of the final product: the data.

Due to the impact that data has on the end product, AI practitioners must use structured processes for ML model development and closely monitor the solution in production for the system to remain effective. The data being evaluated by the model in production continuously evolves. And when it strays too far from the data that was used to train and test the model, the model's predictive capabilities begin to decay.

By monitoring model performance, this decline can be detected early, and analysis can be performed to determine the root cause (in the case we are describing, data drift). And with structured processes in place to manage the ML development lifecycle, the data science team can make the necessary adjustments and retrain and test efficiently, ensuring that an updated, effective solution can be redeployed as soon as possible.
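A very simple form of the drift detection described above is to compare a feature's distribution in production against the training data. This is only a sketch: the statistic (mean shift measured in training standard deviations) and the threshold are illustrative assumptions, and real systems typically use richer tests.

```python
from statistics import mean, stdev

def feature_drifted(train_values, live_values, threshold=0.5):
    """Flag drift when the live mean shifts by more than `threshold`
    training standard deviations from the training mean (assumed rule)."""
    m, s = mean(train_values), stdev(train_values)
    if s == 0:
        return mean(live_values) != m
    return abs(mean(live_values) - m) / s > threshold
```

A check like this would run on a schedule against recent production inputs, alerting the data science team when a feature drifts so retraining can be considered.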

Leveraging MLOps to Simplify the Process for Managing Machine Learning Workflows

So now we know what MLOps is, and we understand some of the complexities involved in developing a machine learning system. Let’s consider some best practices for implementing MLOps in a way that helps organizations effectively manage their ML-powered applications in production.

Automate the ML lifecycle

As with DevOps, automation is a key component of implementing MLOps. This includes creating reusable ML pipelines for model reproducibility and automating other aspects of the process so that models with more accurate predictive capabilities can be produced as time goes on. One example of this additional automation is automated data validation, which ensures that the datasets used to train the model are as complete and clean as possible. This means building automated processes for operating on raw data.

For instance, maybe the data acquired during the collection process contains fields that are irrelevant in the context of the model being developed. A process can be built to drop these fields. Additionally, maybe one of the fields contains inconsistently formatted responses, such as using “yes,” “Y,” and “true” interchangeably. Practitioners can develop procedures that standardize these responses so that the data science team is dealing with data that is consistent during the model development and training processes. Training your model with invalid and inconsistent data can significantly alter the predictive capabilities of your model, resulting in an inefficient or unusable end product.

Increase collaboration by breaking down silos

Building a machine learning system requires the expertise of personnel with varying backgrounds. Teams often include data engineers, data scientists, software developers, and IT personnel. Increasing transparency across all steps of the build and deployment processes is necessary to ensure that effective systems are deployed as efficiently as possible. This means version control for evolving data and machine learning pipelines, and less time spent working in isolation.

Monitor, monitor, monitor

While a traditional application's functionality is defined by its code, a machine learning system's behavior is defined by a combination of code and data. Code (once released) is static, yet data is constantly changing, and changing data can cause the performance of a once-effective machine learning model to deteriorate over time. Thus, MLOps advocates monitoring the predictive capabilities of the machine learning model in production. This enables data scientists and engineers to grade the reliability of the system as time goes on and empowers them to identify when and how the system must be altered to regain, or even improve upon, its previous level of effectiveness.
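The production monitoring described here can be sketched as a rolling accuracy tracker that flags the model when its recent performance drops below an acceptable level. The window size and accuracy threshold are illustrative assumptions; real deployments would tune both and feed the signal into an alerting system.

```python
from collections import deque

class ModelMonitor:
    """Track prediction outcomes in production and flag performance decay."""

    def __init__(self, window=500, min_accuracy=0.9):
        # Rolling window of outcomes; True means the prediction was correct.
        self.outcomes = deque(maxlen=window)
        self.min_accuracy = min_accuracy

    def record(self, prediction, actual):
        """Record one prediction once the true outcome becomes known."""
        self.outcomes.append(prediction == actual)

    def accuracy(self):
        """Accuracy over the current window, or None before any outcomes arrive."""
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else None

    def needs_attention(self):
        """True when recent accuracy has decayed below the configured floor."""
        acc = self.accuracy()
        return acc is not None and acc < self.min_accuracy
```

When `needs_attention()` fires, the team can investigate the cause (often data drift) and kick off the retraining process described earlier.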

Wrapping Up

Over the last decade, DevOps has proven to be extremely effective in managing the processes for the development and maintenance of traditional software applications. This methodology has enabled development teams to produce reliable applications that are released with great frequency.

Given the complexity of machine learning systems, it's no surprise that there has been a movement to apply "DevOps-esque" principles to developing and releasing them reliably and efficiently. Known as MLOps, these practices of reusable ML pipelines, automated validation, monitoring, and more enable data engineers, data scientists, and software development and operations personnel to work in lockstep with one another to develop and maintain production-worthy machine learning applications.

Bio

Scott Fitzpatrick has over five years of experience as a software developer. He has worked with many languages, including Java, ColdFusion, HTML/CSS, JavaScript and SQL.
