Get full visibility into and versioning of models and their metadata, and compare metrics across runs.


Overview

Competition

LangSmith (from LangChain) offers:

  • Debugging
  • Playground
  • Prompt Management
  • Annotation
  • Testing
  • Monitoring

Install

PROTIP: Use uv rather than pip:

  1. Confirm your Python version:
    python --version
    
  2. Ensure the latest uv utility is installed, along with its global configuration directory ~/.config/uv/ and uv.toml file:
    uv config --show
    uv --version
    

    uv 0.9.13 (Homebrew 2025-11-26)

  3. PROTIP: Create a project folder populated with a .git folder plus .gitignore, pyproject.toml, README.md, and .python-version files:
    uv init mlflow1
    cd mlflow1
    
  4. PROTIP: Install the MLflow CLI tool using pipx:
    pipx install mlflow
    mlflow --version
    
    mlflow, version 3.7.0
  5. NOTE: MLflow describes its releases at: https://github.com/mlflow/mlflow/releases
  6. PROTIP: Scan MLflow’s dependencies for vulnerabilities with a safety report:
    pipx runpip mlflow list --format=freeze | safety scan --stdin
    
  7. Research any CVEs found. CAUTION: Rather than detailing security issues in public, follow the (email) disclosure procedure in the project’s SECURITY.md.
  8. Start the MLflow Tracking UI server:
    mlflow ui
    

    Without configuration, these warning messages appear:

    Backend store URI not provided. Using sqlite:///mlflow.db
    Registry store URI not provided. Using backend store URI.
    
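    To silence those warnings, the store URIs can be passed explicitly (a sketch; adjust the SQLite path and port to your own setup):

    mlflow ui --backend-store-uri sqlite:///mlflow.db --port 5000
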

    Look for:

    INFO:     Uvicorn running on http://127.0.0.1:5000 (Press CTRL+C to quit)
    ...
    INFO:     Application startup complete.
    
  9. Open another CLI Terminal window
  10. Open your default browser:
    open http://127.0.0.1:5000
    

    MLflow artifacts

    The menu that appears lists the three artifacts MLflow works with:

    mlflow3.7-menu-202x202.png

    • Experiments, which contain runs
    • Models, registered by users
    • Prompts

    MLflow workflows

    1. Log traces
    2. Train models
    3. Run evaluation
    4. Register prompts
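
    A minimal sketch of the tracking portion of that workflow, assuming the local server started above (experiment, run, parameter, and metric names here are illustrative):

    import mlflow

    mlflow.set_tracking_uri("http://127.0.0.1:5000")  # the server started above
    mlflow.set_experiment("demo-experiment")          # created if it does not yet exist

    with mlflow.start_run(run_name="first-run"):
        mlflow.log_param("solver", "lbfgs")           # a hyperparameter
        mlflow.log_metric("accuracy", 0.93)           # a metric charted in the UI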

    MLflow tool selection

  11. Click “Docs” at the upper-right corner to see that the documentation presents two approaches, running either on open-source MLflow (your own servers) or on Databricks servers:

    • (Classic) Model Training - Access comprehensive guides for experiment tracking, model packaging, registry management, and deployment. Get started with MLflow’s core functionality for traditional machine learning workflows, hyperparameter tuning, and model lifecycle management.
      • https://www.mlflow.org/docs/latest/ml/
      • https://docs.databricks.com/aws/en/mlflow/
      • https://docs.databricks.com/aws/en/getting-started/free-edition

    • GenAI Apps & Agents - Explore tools for GenAI tracing, prompt management, foundation model deployment, and evaluation frameworks. Learn how to track, evaluate, and optimize your generative AI applications and agent workflows with MLflow.
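
    A sketch of the GenAI tracing side, assuming MLflow 3.x (the function below is a stand-in for a real LLM call, and the names are made up):

    import mlflow

    mlflow.set_experiment("genai-demo")    # illustrative experiment name

    @mlflow.trace                          # records inputs, outputs, and latency as a trace
    def summarize(text: str) -> str:
        return text[:80]                   # stand-in for a real LLM call

    summarize("MLflow records this call under the experiment's Traces tab.")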

    Automation from manual tracking

    Without MLflow, Machine Learning engineers track their runs using a spreadsheet such as this:

    mlflow-spreadsheet-1436x319.png

    The clumsiness of spreadsheets is well known.

    MLflow provides a GUI to present data in many different ways.

    Aliases, such as “@Challenger”, can be associated with specific model versions in the Model Registry.
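
    An alias can also be set from code; a sketch assuming a registered model named “my-model” already has a version 1 (both names are placeholders):

    from mlflow import MlflowClient

    client = MlflowClient(tracking_uri="http://127.0.0.1:5000")

    # Point the "Challenger" alias at version 1 of the registered model:
    client.set_registered_model_alias(name="my-model", alias="Challenger", version="1")

    # The model can later be loaded by alias rather than by a hard-coded version:
    # mlflow.pyfunc.load_model("models:/my-model@Challenger")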

    Experiment Artifacts

    Path: mlflow-artifacts:/…

    For Logistic Regression:

    • MLmodel
    • conda.yaml (if you’re using Conda environment)
    • model.pkl (the pickled model)
    • python_env.yaml
    • requirements.txt
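
    Those files are written when a model is logged; a sketch assuming MLflow 3.x (where log_model takes name=) and scikit-learn, with illustrative names:

    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=200).fit(X, y)

    with mlflow.start_run():
        # Writes MLmodel, python_env.yaml, requirements.txt, and model.pkl
        # (plus conda.yaml when a Conda environment is used) as run artifacts.
        mlflow.sklearn.log_model(model, name="logreg_model")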

    A sample print(classification_report(y_test, y_pred_xgb)) after an experiment run yields:

    mlflow3.7-report-744x250.png
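
    A sketch of how that report can be produced and its per-class values logged as MLflow metrics (y_test and y_pred_xgb are assumed to come from your own train/test split and model):

    import mlflow
    from sklearn.metrics import classification_report

    # y_test and y_pred_xgb come from your own split and trained model.
    print(classification_report(y_test, y_pred_xgb))
    report = classification_report(y_test, y_pred_xgb, output_dict=True)

    with mlflow.start_run(run_name="xgboost"):
        mlflow.log_metric("accuracy", report["accuracy"])
        mlflow.log_metric("recall_class_0", report["0"]["recall"])
        mlflow.log_metric("recall_class_1", report["1"]["recall"])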

    Metrics for a single experiment

    mlflow3.7-1metric-478x211.png

    The UI charts each metric for a run, such as recall_class_0 and recall_class_1.

    Compare metrics from selected experiments

    mlflow3.7-compare-1530x535.png

    Dagshub

    https://github.com/code/mlflow_dagshub_demo
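
    DagsHub can host a remote MLflow tracking server; a sketch of pointing runs at it (the URI and credentials below are placeholders, shown in the repo’s Remote menu on DagsHub):

    import os
    import mlflow

    os.environ["MLFLOW_TRACKING_USERNAME"] = "<dagshub-username>"
    os.environ["MLFLOW_TRACKING_PASSWORD"] = "<dagshub-token>"
    mlflow.set_tracking_uri("https://dagshub.com/<dagshub-username>/<repo>.mlflow")

    with mlflow.start_run():
        mlflow.log_metric("accuracy", 0.93)   # appears in the DagsHub-hosted MLflow UI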

References:

[1] VIDEO by codebasics.io, which offers a class.

https://mlflow.github.io/mlflow-website/blog/deep-learning-part-2/ Deep Learning with MLflow (Part 2) uses dataset https://huggingface.co/datasets/coastalcph/lex_glue/viewer/unfair_tos

https://medium.com/@mohsenim/tracking-machine-learning-experiments-with-mlflow-and-dockerizing-trained-models-germany-car-price-e539303b6f97 Tracking Machine Learning Experiments with MLflow and Dockerizing Trained Models: Germany Car Price Prediction Case Study

https://aws.amazon.com/blogs/machine-learning/securing-mlflow-in-aws-fine-grained-access-control-with-aws-native-services/

https://mlflow.org/docs/latest/ml/tracking/tutorials/remote-server Remote Experiment Tracking with MLflow Tracking Server

https://viso.ai/deep-learning/mlflow-machine-learning-experimentation/ MLflow: Simplifying Machine Learning Experimentation

https://arxiv.org/pdf/2202.10169 MACHINE LEARNING OPERATIONS: A SURVEY ON MLOPS TOOL SUPPORT by Nipuni Hewage and Dulani Meedeniya


25-12-14 v006 cli & References :2025-01-16-mlflow.md created 2025-01-16