MLOps Best Practices: From Experimentation to Production

Published on December 15, 2025

Building a model is only a small part of the ML journey; the real challenge is deploying, monitoring, and maintaining it in production. MLOps brings DevOps principles to machine learning, enabling reproducible experiments, automated pipelines, and reliable deployments.

The MLOps Lifecycle

A mature MLOps practice covers five key areas:

  1. Experiment Tracking: Logging parameters, metrics, and artifacts
  2. Model Registry: Versioning and managing model lifecycles
  3. CI/CD Pipelines: Automated testing and deployment
  4. Model Serving: Exposing models behind reliable APIs
  5. Monitoring: Detecting drift and performance degradation

1. Experiment Tracking with MLflow

import mlflow
import mlflow.sklearn
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score

# Set tracking URI (local or remote)
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("crack-detection-v2")

with mlflow.start_run(run_name="rf-baseline") as run:
    # Log parameters
    params = {
        "n_estimators": 100,
        "max_depth": 10,
        "min_samples_split": 5
    }
    mlflow.log_params(params)
    
    # Train model (X_train, y_train, X_test, y_test are assumed loaded earlier)
    model = RandomForestClassifier(**params)
    model.fit(X_train, y_train)
    
    # Log metrics
    y_pred = model.predict(X_test)
    mlflow.log_metric("accuracy", accuracy_score(y_test, y_pred))
    mlflow.log_metric("f1_score", f1_score(y_test, y_pred))
    
    # Log model
    mlflow.sklearn.log_model(model, "model")
    
    # Log artifacts (plots, configs)
    mlflow.log_artifact("confusion_matrix.png")
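
Once a few runs accumulate, the tracking server can be queried programmatically rather than through the UI. A minimal sketch using mlflow.search_runs to rank runs by F1 score (the experiment name matches the one set above):

import mlflow

# Return the top runs as a pandas DataFrame, best F1 first
runs = mlflow.search_runs(
    experiment_names=["crack-detection-v2"],
    order_by=["metrics.f1_score DESC"],
    max_results=5,
)
print(runs[["run_id", "params.n_estimators", "metrics.f1_score"]])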

2. Model Registry and Versioning

# Register model to MLflow Model Registry
# ("run" is the training run captured above via "with ... as run")
model_uri = f"runs:/{run.info.run_id}/model"
model_details = mlflow.register_model(model_uri, "CrackDetectionModel")

# Transition model stages
from mlflow.tracking import MlflowClient
client = MlflowClient()

# Move to staging
client.transition_model_version_stage(
    name="CrackDetectionModel",
    version=model_details.version,
    stage="Staging"
)

# After validation, promote to production
client.transition_model_version_stage(
    name="CrackDetectionModel",
    version=model_details.version,
    stage="Production"
)
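
Before promoting, it is worth smoke-testing the staged version by loading it through its stage URI. A short sketch (X_val is an assumed held-out feature set, not shown above):

import mlflow.pyfunc

# Load the current Staging version and sanity-check its predictions
staging_model = mlflow.pyfunc.load_model("models:/CrackDetectionModel/Staging")
preds = staging_model.predict(X_val)  # X_val: assumed validation features
assert len(preds) == len(X_val)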

3. CI/CD Pipeline with GitHub Actions

# .github/workflows/ml-pipeline.yml
name: ML Pipeline

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install pytest pytest-cov
      
      - name: Run tests
        run: pytest tests/ --cov=src --cov-report=xml
      
      - name: Model validation
        run: python scripts/validate_model.py

  deploy:
    needs: test
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Deploy to production
        run: |
          # Build and push the Docker image (assumes registry credentials
          # were configured earlier, e.g. with a docker login step)
          docker build -t myregistry/model:${{ github.sha }} .
          docker push myregistry/model:${{ github.sha }}
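
The Model validation step acts as a quality gate: if scripts/validate_model.py exits non-zero, the job fails and the deploy job never runs. A minimal sketch of what such a script might check, assuming a hypothetical load_validation_set() helper and an illustrative 0.85 F1 threshold:

# scripts/validate_model.py (illustrative sketch)
import sys

import mlflow.sklearn
from sklearn.metrics import f1_score

from src.data.loaders import load_validation_set  # hypothetical helper

F1_THRESHOLD = 0.85  # illustrative quality bar

def main() -> int:
    X_val, y_val = load_validation_set()
    model = mlflow.sklearn.load_model("models:/CrackDetectionModel/Staging")
    score = f1_score(y_val, model.predict(X_val))
    print(f"Validation F1: {score:.3f} (threshold {F1_THRESHOLD})")
    return 0 if score >= F1_THRESHOLD else 1  # non-zero exit fails the CI job

if __name__ == "__main__":
    sys.exit(main())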

4. Model Serving with FastAPI

from fastapi import FastAPI, File, UploadFile
import mlflow.sklearn
from PIL import Image
import io

app = FastAPI(title="Crack Detection API")

# Load production model (sklearn flavor, so predict_proba is available)
model = mlflow.sklearn.load_model("models:/CrackDetectionModel/Production")

@app.post("/predict")
async def predict(file: UploadFile = File(...)):
    """Predict crack presence in uploaded image."""
    image_bytes = await file.read()
    image = Image.open(io.BytesIO(image_bytes))
    
    # Preprocess and predict (preprocess() mirrors training-time transforms)
    features = preprocess(image)
    proba = model.predict_proba(features)[0]
    label = int(proba.argmax())
    
    return {
        "filename": file.filename,
        "prediction": "crack" if label == 1 else "no_crack",
        "confidence": float(proba.max())
    }

@app.get("/health")
def health_check():
    return {"status": "healthy", "model_version": "1.2.0"}

5. Monitoring and Drift Detection

from evidently.report import Report
from evidently.metrics import DataDriftTable, DatasetDriftMetric

def check_data_drift(reference_data, current_data):
    """Detect data drift between training and production data."""
    report = Report(metrics=[
        DatasetDriftMetric(),
        DataDriftTable()
    ])
    
    report.run(
        reference_data=reference_data,
        current_data=current_data
    )
    
    # Extract the dataset-level drift flag from the report
    drift_detected = report.as_dict()['metrics'][0]['result']['dataset_drift']
    
    if drift_detected:
        # alert_team() is a placeholder for your notification hook (Slack, email, etc.)
        alert_team("Data drift detected! Consider retraining.")
    
    return report
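
In practice this check runs on a schedule, comparing the training reference set against a recent window of logged production inputs. A usage sketch (both parquet paths are hypothetical):

import pandas as pd

reference = pd.read_parquet("data/processed/train_features.parquet")   # training data
current = pd.read_parquet("logs/production_features_last_7d.parquet")  # recent inputs

report = check_data_drift(reference, current)
report.save_html("reports/drift_report.html")  # readable report for the team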

Project Structure for MLOps

ml-project/
├── .github/workflows/      # CI/CD pipelines
├── data/
│   ├── raw/               # Original data
│   └── processed/         # Transformed data
├── models/                # Saved model artifacts
├── notebooks/             # Exploration notebooks
├── src/
│   ├── data/             # Data processing
│   ├── features/         # Feature engineering
│   ├── models/           # Model training
│   └── serving/          # API endpoints
├── tests/                 # Unit and integration tests
├── Dockerfile
├── requirements.txt
├── mlflow.yaml           # MLflow config
└── dvc.yaml              # Data version control

Conclusion

MLOps transforms ML from experimental notebooks to production-ready systems. Start with experiment tracking, then progressively add model registry, CI/CD, and monitoring. The investment pays off in reproducibility, reliability, and faster iteration cycles.