
Easy Setup of n8n on Azure for Workflow Automation


Introduction to n8n Azure Integration for Workflow Automation

Microsoft Azure provides a robust, scalable cloud environment that is well suited to deploying machine learning models. Integrating it with n8n, a powerful workflow automation tool, lets you streamline your deployment pipelines with ease. Whether you're working with Azure Blob Storage, Virtual Machines, or Azure ML, this guide shows you how to set up n8n on Azure for seamless workflow automation.

 

n8n System Requirements and Prerequisites

Make sure you have:

  • An Azure account
  • n8n installed locally or hosted on a server
  • A trained ML model (TensorFlow, PyTorch, scikit-learn, etc.)
  • An authenticated Azure CLI
  • Docker, if you plan on containerized deployments (optional)

Step 1: Set Up n8n and the Azure Environment

1. Set Up n8n

Install n8n locally

To install n8n, run the following command:

npm install -g n8n

 

Then run:

n8n

 

n8n spins up a service on your local machine, accessible at http://localhost:5678 in any web browser.

Server Deployment (Docker Method)

If you want to host n8n on a server, you can do so with Docker:

docker run -it --rm \
  -p 5678:5678 \
  -v ~/.n8n:/home/node/.n8n \
  n8nio/n8n

With the container running on an Azure VM, n8n becomes publicly accessible via your server's IP on port 5678.

2. Create a Container in Azure Blob Storage

  • Head over to the Azure Portal > Storage Accounts > select Create.

  • Once the storage account is ready, go to Containers > select + Container.

  • Name the container (for instance, ml-model-container).

  • Upload your model file (e.g., model.pkl, model.h5).

     

    Using the n8n Azure Blob integration, you can later fetch this model programmatically; a script-based upload is sketched below.
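
If you prefer to upload the model from a script rather than through the portal, the snippet below is a minimal sketch using the azure-storage-blob Python SDK (the same library the Flask example later in this guide relies on). The connection string is a placeholder for your own value.

from azure.storage.blob import BlobServiceClient

# Placeholder: replace with your storage account's connection string
connection_string = 'YOUR_AZURE_STORAGE_CONNECTION_STRING'
container_name = 'ml-model-container'

# Connect to the storage account and point at the target blob
blob_service_client = BlobServiceClient.from_connection_string(connection_string)
blob_client = blob_service_client.get_blob_client(container=container_name, blob='model.pkl')

# Upload the local model file, overwriting any previous version
with open('model.pkl', 'rb') as data:
    blob_client.upload_blob(data, overwrite=True)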

3. Set up the Machine Learning Model on Azure

  • Log into Azure Machine Learning Studio > open the Models tab > select Register Model.

  • Set the model's source as Azure Blob Storage.

  • Assign the model to an endpoint.

     

    This step is crucial if you run ML inference as part of workflow automation on Azure; a sketch of the same registration done in code follows.
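
The registration can also be scripted. Below is a minimal sketch using the Azure ML Python SDK v2 (azure-ai-ml), assuming the model file has already been downloaded locally; the subscription, resource group, and workspace values are placeholders.

from azure.ai.ml import MLClient
from azure.ai.ml.entities import Model
from azure.identity import DefaultAzureCredential

# Placeholders: replace with your own Azure identifiers
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id='YOUR_SUBSCRIPTION_ID',
    resource_group_name='YOUR_RESOURCE_GROUP',
    workspace_name='YOUR_WORKSPACE',
)

# Register the local model file as a custom model asset
model = Model(path='model.pkl', name='ml-model', type='custom_model')
ml_client.models.create_or_update(model)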

4. Deploy on an Azure Virtual Machine (Optional Final Deployment Step)

  • Log into the Azure Portal > select Virtual Machines > select Create.

  • Choose a suitable instance type (e.g., Standard_D2_v2 for small models or Standard_NC6 for GPU-based models).

  • Set up networking for HTTP access (allow port 5000 for Flask or 8000 for FastAPI).

  • SSH into the VM and install the prerequisites:

sudo apt update && sudo apt install python3-pip -y
pip install flask tensorflow scikit-learn azure-storage-blob

 

Step 2: Create the Deployment Workflow in n8n

1. Develop the Workflow

  • Configure an HTTP Trigger (Webhook) Node to accept API calls.
  • Add an Azure Blob Storage Node to pull the model from Blob Storage.
  • For Azure ML inference, use the Azure Machine Learning Node; for VM deployments, use an HTTP Request Node.
  • Process the incoming data and return the required output.

With Azure n8n integration, orchestrating data movement and model logic becomes effortless.

2. Save and Deploy the Workflow

  • Select the Activate Workflow option.
  • Test by sending sample JSON input, for example:
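
A quick test from Python; the webhook URL below is a hypothetical placeholder, so copy the real one displayed on your Webhook (HTTP Trigger) node.

import requests

# Placeholder URL: n8n shows the actual webhook URL on the trigger node
webhook_url = 'http://localhost:5678/webhook/predict'
sample_input = {'features': [5.1, 3.5, 1.4, 0.2]}

# Send sample JSON and print the workflow's response
response = requests.post(webhook_url, json=sample_input, timeout=30)
print(response.status_code, response.json())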

 

Step 3: Exposing the Model as an API

  • For Azure ML, use the provided endpoint URL (a scoring sketch follows the Flask example below).
  • For Azure VM deployments, serve the model with Flask:
import pickle
from flask import Flask, request, jsonify
from azure.storage.blob import BlobServiceClient

app = Flask(__name__)

# Get model from Azure Blob Storage
connection_string = 'YOUR_AZURE_STORAGE_CONNECTION_STRING'
container_name = 'ml-model-container'
blob_name = 'model.pkl'

# Get blob service client
blob_service_client = BlobServiceClient.from_connection_string(connection_string)
blob_client = blob_service_client.get_blob_client(container=container_name, blob=blob_name)

# Download the model file once at startup
with open('model.pkl', 'wb') as f:
    f.write(blob_client.download_blob().readall())

model = pickle.load(open('model.pkl', 'rb'))

@app.route('/predict', methods=['POST'])
def predict():
    # Get model prediction
    data = request.json['features']
    prediction = model.predict([data])
    return jsonify({'prediction': prediction.tolist()})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

 

Make sure network security groups permit traffic on port 5000.
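
For the Azure ML route, the endpoint can be called the same way your n8n HTTP Request Node would call it. The sketch below assumes key-based authentication; the endpoint URL and key are placeholders taken from the endpoint's Consume tab in Azure ML Studio.

import requests

# Placeholders: copy the real values from your endpoint's Consume tab
endpoint_url = 'https://YOUR-ENDPOINT.YOUR-REGION.inference.ml.azure.com/score'
api_key = 'YOUR_ENDPOINT_KEY'

headers = {
    'Content-Type': 'application/json',
    'Authorization': f'Bearer {api_key}',
}
payload = {'features': [5.1, 3.5, 1.4, 0.2]}

# Send the features to the managed endpoint and print the prediction
response = requests.post(endpoint_url, json=payload, headers=headers, timeout=30)
print(response.json())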

 

Step 4: Automating Model Updates

  • Use an n8n Schedule Trigger Node to check for a new model automatically on a regular interval.
  • Download the new version and trigger a workflow redeploy, as sketched below.

    This is where running n8n on Azure Container Apps or wiring in Azure DevOps can play a supporting role for more robust CI/CD.
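
A minimal sketch of what the scheduled check might do, assuming the model lives in the same Blob container as above: compare the blob's last-modified timestamp with the local copy and re-download only when the remote version is newer.

import os
from datetime import datetime, timezone
from azure.storage.blob import BlobServiceClient

# Placeholder connection string; container and blob match the earlier steps
connection_string = 'YOUR_AZURE_STORAGE_CONNECTION_STRING'
blob_client = BlobServiceClient.from_connection_string(connection_string) \
    .get_blob_client(container='ml-model-container', blob='model.pkl')

# Timestamp of the model currently in Blob Storage
remote_modified = blob_client.get_blob_properties().last_modified

# Timestamp of the local copy (epoch if none exists yet)
if os.path.exists('model.pkl'):
    local_modified = datetime.fromtimestamp(os.path.getmtime('model.pkl'), tz=timezone.utc)
else:
    local_modified = datetime.fromtimestamp(0, tz=timezone.utc)

if remote_modified > local_modified:
    # Newer model available: download it, then trigger the redeploy workflow
    with open('model.pkl', 'wb') as f:
        f.write(blob_client.download_blob().readall())
    print('Model updated; trigger workflow redeploy')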

Conclusion

Integrating n8n workflow automation with Azure ML and Blob Storage streamlines the entire machine learning deployment pipeline. Whether you deploy on VMs or take advantage of Azure Container Apps, n8n on Azure offers agility, reach, and acceleration.

Ready to optimize your AI infrastructure? Contact us today and leverage our AI/ML expertise!  
