Getting Started With Snowflake Snowpark ML: A Step-by-Step Guide

Snowflake’s Snowpark brings machine learning (ML) closer to your data by enabling developers and data scientists to use Python for ML workflows directly within the Snowflake Data Cloud. This guide will walk you through setting up the Snowpark ML library, configuring your environment, and implementing a basic ML use case.

Why Use Snowpark for Machine Learning?

Snowpark offers several advantages for ML workflows:

  1. Process data and build models within Snowflake, reducing data movement and latency.

  2. Scale ML tasks efficiently using Snowflake's elastic compute capabilities.

  3. Centralize data pipelines, transformations, and ML workflows in one environment.

  4. Write code in Python, Java, or Scala for seamless library integration.

  5. Integrate Snowpark with tools like Jupyter and Streamlit for enhanced workflows.

Whether you're a data scientist or a developer, Snowpark simplifies ML workflows by bringing computation closer to your data.


Step 1: Prerequisites

Before diving into Snowpark ML, ensure you have the following:

  • Snowflake account.

  • SnowSQL CLI or any supported Snowflake IDE (e.g., Snowsight).

  • Python 3.8+ installed locally.

  • Necessary Python packages: snowflake-snowpark-python and scikit-learn.

Install the required packages using pip:
pip install snowflake-snowpark-python scikit-learn
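To confirm the installation, you can print the installed versions (a minimal check; exact versions will vary):

from importlib.metadata import version

# Both lookups succeed only if the pip install above worked
print("snowflake-snowpark-python:", version("snowflake-snowpark-python"))
print("scikit-learn:", version("scikit-learn"))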


Step 2: Set Up Snowpark ML Library

  1. Ensure Snowpark is enabled for your Snowflake account. If you plan to use third-party packages such as scikit-learn inside Snowflake, your account administrator also needs to accept the Anaconda terms in Snowsight.

  2. Create a stage to hold the Python packages and model artifacts your ML code will use:
    CREATE STAGE my_python_lib;

  3. Upload your required Python packages (like scikit-learn) to the stage (a Python alternative is shown after this list):
    snowsql -q "PUT file://path/to/your/package.zip @my_python_lib AUTO_COMPRESS=TRUE;"

  4. Grant permissions to the Snowpark role to use external libraries:
    GRANT USAGE ON STAGE my_python_lib TO ROLE my_role;
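As an alternative to the snowsql PUT command, you can upload files to the stage from Python with Snowpark's file API. This is a sketch that assumes you have already created the Snowpark session described in Step 3; the local path is a placeholder:

# Hypothetical Python alternative to the snowsql PUT command above
session.file.put(
    "path/to/your/package.zip",   # local package archive (placeholder path)
    "@my_python_lib",             # target stage created in this step
    auto_compress=True,
    overwrite=True
)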


Step 3: Configure Snowflake Connection in Python

Set up your Python script to connect to Snowflake:

from snowflake.snowpark import Session

# Define your Snowflake connection parameters
connection_parameters = {
    "account": "your_account",
    "user": "your_username",
    "password": "your_password",
    "role": "your_role",
    "warehouse": "your_warehouse",
    "database": "your_database",
    "schema": "your_schema"
}

# Create a Snowpark session
session = Session.builder.configs(connection_parameters).create()
print("Connection successful!")

Step 4: A Simple ML Use Case – Predicting Customer Attrition

Data Preparation

  1. Load a sample dataset into Snowflake:


CREATE OR REPLACE TABLE cust_data (
cust_id INT,
age INT,
monthly_exp FLOAT,
attrition INT
);

INSERT INTO cust_data VALUES
(1, 25, 50.5, 0),
(2, 45, 80.3, 1),
(3, 30, 60.2, 0),
(4, 50, 90.7, 1);

  2. Access the data in Snowpark:
    df = session.table("cust_data")
    print(df.collect())
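Because df is a Snowpark DataFrame, filters and aggregations run inside Snowflake's SQL engine and only the results come back to the client. A small illustrative example:

from snowflake.snowpark.functions import col

# The filter is pushed down to Snowflake; only matching rows are returned
older_customers = df.filter(col("age") > 40)
older_customers.show()
print("Rows with age > 40:", older_customers.count())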

Building an Attrition Prediction Model

  1. Extract features and labels:


from snowflake.snowpark.functions import col
features = df.select(col("age"), col("monthly_exp"))
labels = df.select(col("attrition"))

  2. Train a Logistic Regression model locally using scikit-learn:


from sklearn.linear_model import LogisticRegression
import numpy as np

# Prepare data
X = np.array(features.collect())
y = np.array(labels.collect()).ravel()

# Train model
model = LogisticRegression()
model.fit(X, y)
print("Model trained successfully!")

 

  3. Save the model locally, then upload it to the stage (AUTO_COMPRESS=FALSE keeps the file's .pkl name on the stage, which matters when the UDF loads it later):


import pickle

# Serialize the trained model to a local file
pickle.dump(model, open("attrition_model.pkl", "wb"))

Then upload it from your shell:
snowsql -q "PUT file://attrition_model.pkl @my_python_lib AUTO_COMPRESS=FALSE;"

Predict Customer Attrition in Snowflake

Use a Snowflake UDF to load the model and score rows:

from snowflake.snowpark.types import IntegerType, FloatType
import pickle

# Define the prediction function
def predict_attrition(age, monthly_exp):
    # For a real deployment the model file must be attached to the UDF so it is
    # available inside Snowflake (see the staged-model sketch after the next step).
    model = pickle.load(open("attrition_model.pkl", "rb"))
    return int(model.predict([[age, monthly_exp]])[0])

# Register the UDF and keep a handle for use in DataFrame expressions
predict_attrition_udf = session.udf.register(
    predict_attrition,
    return_type=IntegerType(),
    input_types=[IntegerType(), FloatType()]
)

  1. Apply the UDF to predict attrition:
    result = df.select(
        "cust_id",
        predict_attrition_udf(col("age"), col("monthly_exp")).alias("attrition_prediction")
    )
    result.show()
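The inline UDF above loads the pickle from the local working directory, which is fine for a quick demo but is not how the function finds the file when it actually executes inside Snowflake. A more production-oriented registration, sketched below, attaches the staged model via the imports parameter and pins scikit-learn as a package. It assumes attrition_model.pkl was staged uncompressed on @my_python_lib, and the name predict_attrition_staged is illustrative:

from snowflake.snowpark.types import IntegerType, FloatType

def predict_attrition_staged(age, monthly_exp):
    # Standard-library imports inside the handler so they resolve on the server
    import os, sys, pickle
    # Files listed in `imports` are copied into the UDF's import directory at runtime
    import_dir = sys._xoptions.get("snowflake_import_directory")
    with open(os.path.join(import_dir, "attrition_model.pkl"), "rb") as f:
        model = pickle.load(f)
    return int(model.predict([[age, monthly_exp]])[0])

predict_attrition_staged_udf = session.udf.register(
    predict_attrition_staged,
    return_type=IntegerType(),
    input_types=[IntegerType(), FloatType()],
    imports=["@my_python_lib/attrition_model.pkl"],   # staged model file (assumption)
    packages=["scikit-learn"],                        # resolved from Snowflake's Anaconda channel
    name="predict_attrition_staged",
    replace=True
)

Unpickling the model on every row is wasteful; in practice you would cache it in a module-level variable or use a vectorized UDF, as sketched in the best-practices section below.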


Best Practices for Snowflake Snowpark in ML

  1. Use Snowflake's SQL engine for preprocessing to boost performance.

  2. Design efficient UDFs for non-native computations and limit the data passed to them (a vectorized UDF sketch follows this list).

  3. Version and store models centrally for easy deployment and tracking.

  4. Monitor resource usage with query profiling and optimize warehouse scaling.

  5. Validate pipelines with sample data before running on full datasets.
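For point 2, one way to cut per-row overhead is a vectorized (batch) UDF, which receives a pandas DataFrame per batch so the model is unpickled once per batch instead of once per row. The sketch below reuses the staged model from the previous section and is illustrative rather than a drop-in implementation; the registration parameters follow the regular UDF registration and may need adjusting for your Snowpark version:

import pandas as pd
from snowflake.snowpark.functions import pandas_udf
from snowflake.snowpark.types import IntegerType, FloatType, PandasDataFrameType, PandasSeriesType

def predict_attrition_batch(batch: pd.DataFrame) -> pd.Series:
    # One call scores a whole batch of rows
    import os, sys, pickle
    import_dir = sys._xoptions.get("snowflake_import_directory")
    with open(os.path.join(import_dir, "attrition_model.pkl"), "rb") as f:
        model = pickle.load(f)
    return pd.Series(model.predict(batch.to_numpy()))

predict_attrition_batch_udf = pandas_udf(
    predict_attrition_batch,
    return_type=PandasSeriesType(IntegerType()),
    input_types=[PandasDataFrameType([IntegerType(), FloatType()])],
    imports=["@my_python_lib/attrition_model.pkl"],   # staged model file (assumption)
    packages=["scikit-learn", "pandas"],
    max_batch_size=1000
)

You would call it the same way as the scalar UDF, e.g. df.select("cust_id", predict_attrition_batch_udf(col("age"), col("monthly_exp"))).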


Conclusion

You’ve successfully set up Snowpark, configured your environment, and implemented a basic attrition prediction model. Snowpark lets you scale ML workflows directly within Snowflake, reducing data movement and improving operational efficiency.

For organizations looking to streamline their DevOps, DevSecOps, DataOps, or ML Ops workflows, ZippyOPS offers expert consulting, implementation, and management services. Explore our services, products, and solutions. For more insights, check out our YouTube playlist.

If this sounds interesting, please email us at [email protected] to schedule a call.

 
