Lesson 06: Roboflow on Raspberry Pi 5
Roboflow is a powerful end-to-end platform designed to simplify and accelerate the development of computer vision projects. It provides tools to manage, preprocess, and label image datasets, train machine learning models, and deploy them for real-world applications. Whether you're building object detection, image classification, or instance segmentation models, Roboflow offers a streamlined workflow tailored to developers and data scientists of all skill levels.
Key Features
- Dataset Management: Organize and preprocess images with automated tools to resize, augment, and split datasets.
- Annotation Tools: Simplify labeling with an intuitive interface for bounding boxes, segmentation, and more.
- Model Training: Integrate with popular frameworks like TensorFlow, PyTorch, and YOLO, or use Roboflow’s hosted training options.
- Deployment: Deploy trained models via APIs for real-time inference in production environments.
- Collaboration: Work seamlessly with teams through shared datasets and version control.
Why Use Roboflow?
- Saves time by automating repetitive tasks in computer vision workflows.
- Supports a wide range of file formats and ML frameworks.
- Easy to use for beginners while offering advanced features for experts.
- Scales from prototyping to production-level applications.
Roboflow is widely used in industries such as robotics, healthcare, retail, and agriculture to build vision-driven solutions.
6.2 Installing and Setting Up Roboflow on Raspberry Pi 5
Follow these steps to install and set up Roboflow for inference on your Raspberry Pi 5:
Step 1: Install Docker
Roboflow's inference server runs as a microservice using Docker. Begin by installing Docker on your Raspberry Pi 5, following Docker's official installation guide for Raspberry Pi OS / Debian.
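One common route on Raspberry Pi OS is Docker's official convenience script; a minimal sketch (the group change is optional and only takes effect after logging out and back in):

```shell
# Download and run Docker's official convenience install script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# (Optional) allow the current user to run docker without sudo
sudo usermod -aG docker $USER

# Verify the installation
docker --version
```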
Step 2: Pull the Inference Server
- Pull the Roboflow inference server image from Docker Hub:
sudo docker pull roboflow/inference-server:cpu
Docker will automatically detect your Raspberry Pi's architecture and download the appropriate version.
- For Raspberry Pi (ARM CPU):
If you are using a Raspberry Pi or another ARM-based device, you can pull and run the optimized Docker container with the following command:
sudo docker run -it --rm -p 9001:9001 roboflow/roboflow-inference-server-arm-cpu
Step 3: Run the Inference Server
Run the inference server, sharing the host's network stack:
sudo docker run --net=host roboflow/inference-server:cpu
or, for the ARM-specific image:
sudo docker run -it --rm -p 9001:9001 roboflow/roboflow-inference-server-arm-cpu
Verification
After starting the server, verify it is running by navigating to http://localhost:9001 in your browser on the Raspberry Pi (or using curl in headless mode). A welcome message indicates the server is operational.
curl http://localhost:9001
{
"server": {
"package": "roboflow-inference-server-cpu",
"version": "1.4.0"
},
"roboflow": {
"package": "roboflow-node",
"version": "0.2.25"
}
}
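The response above can also be checked programmatically. A minimal sketch, embedding the sample response shown above and parsing it with the standard library (in a real health check you would fetch the text with `urllib.request.urlopen("http://localhost:9001")` instead of embedding it):

```python
import json

# Sample response from the inference server (as shown above)
response_text = """
{
    "server": {
        "package": "roboflow-inference-server-cpu",
        "version": "1.4.0"
    },
    "roboflow": {
        "package": "roboflow-node",
        "version": "0.2.25"
    }
}
"""

# Parse the JSON and report the server details
info = json.loads(response_text)
print("Server package:", info["server"]["package"])
print("Server version:", info["server"]["version"])
```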
Step 4: Install the Roboflow Python SDK
You can use the Roboflow Python SDK to interact with your inference server or Roboflow's Hosted Inference API. Follow these steps:
- Set up a Python virtual environment:
python -m venv roboflow
source roboflow/bin/activate
- Install the Roboflow SDK:
pip install roboflow
Step 5: Test Your Setup with a Python Script
Create a Python script (infer.py) to test the Roboflow module. Use the following example:
import json
# import the Roboflow Python package
from roboflow import Roboflow
VERSION_NUMBER = 1
# instantiate the Roboflow object and authenticate with your credentials
rf = Roboflow(api_key="YOUR_PRIVATE_API_KEY")
# load/connect to your project
#project = rf.workspace("YOUR_WORKSPACE").project("YOUR_PROJECT")
project = rf.workspace().project("YOUR_PROJECT")
# load/connect to your trained model, routing inference to the local server
model = project.version(VERSION_NUMBER, local="http://localhost:9001").model
# perform inference on a local image file
prediction = model.predict("YOUR_IMAGE.jpg")
# print prediction results as formatted JSON
formatted_json = json.dumps(prediction.json(), indent=4)
print(formatted_json)
# Predict on a hosted image via file name
prediction = model.predict("YOUR_IMAGE.jpg", hosted=True)
# Predict on a hosted image via URL
prediction = model.predict("https://...", hosted=True)
# save the annotated inference image
prediction.save("result.jpg")
# Plot the prediction in an interactive environment
prediction.plot()
# Convert predictions to JSON
print(prediction.json())
- Replace YOUR_PRIVATE_API_KEY, YOUR_WORKSPACE, YOUR_PROJECT, VERSION_NUMBER, and YOUR_IMAGE.jpg with your actual credentials and inputs.
- Save the file and run it:
python3 infer.py
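The JSON returned by the script lists one object per detection, with each bounding box in Roboflow's center-based format: x and y give the box center, plus width and height. A minimal sketch of converting those boxes to corner coordinates and filtering by confidence (the sample predictions below are made-up illustrative values, not real model output):

```python
# Convert a Roboflow center-format box (x, y = box center) to
# (x_min, y_min, x_max, y_max) corner coordinates.
def to_corners(pred):
    x, y, w, h = pred["x"], pred["y"], pred["width"], pred["height"]
    return (x - w / 2, y - h / 2, x + w / 2, y + h / 2)

# Keep only detections at or above the confidence threshold.
def filter_predictions(predictions, min_confidence=0.5):
    return [p for p in predictions if p["confidence"] >= min_confidence]

# Made-up sample predictions in the shape Roboflow returns
sample = [
    {"x": 100, "y": 80, "width": 40, "height": 20,
     "class": "cat", "confidence": 0.91},
    {"x": 30, "y": 30, "width": 10, "height": 10,
     "class": "dog", "confidence": 0.32},
]

for p in filter_predictions(sample):
    print(p["class"], to_corners(p))  # → cat (80.0, 70.0, 120.0, 90.0)
```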
Step 6: Enable Remote Inference
If you'd like to perform inference from another machine on the network, update the local parameter in the model definition to use the Raspberry Pi's local IP address instead of localhost. For example:
model = project.version(VERSION_NUMBER, local="http://<Raspberry_Pi_IP>:9001/").model
This setup lets the Raspberry Pi act as an inference server, receiving image data from other devices on the network.
With these steps complete, Roboflow is fully installed and ready for use on your Raspberry Pi 5.
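A remote client can also call the server over plain HTTP without the SDK. The endpoint pattern sketched below (model slug and version in the path, API key as a query parameter, base64-encoded image in the POST body) mirrors Roboflow's hosted inference API; treat it as an assumption and confirm it against your server's documentation:

```python
import base64
import urllib.parse
import urllib.request

def build_infer_url(host, model_slug, version, api_key, port=9001):
    # Assumed endpoint layout mirroring Roboflow's hosted API
    query = urllib.parse.urlencode({"api_key": api_key})
    return f"http://{host}:{port}/{model_slug}/{version}?{query}"

def infer_remote(url, image_path):
    # POST the base64-encoded image to the inference server
    with open(image_path, "rb") as f:
        payload = base64.b64encode(f.read())
    req = urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# Example usage from another machine on the network (not executed here):
# url = build_infer_url("<Raspberry_Pi_IP>", "YOUR_PROJECT", 1, "YOUR_PRIVATE_API_KEY")
# print(infer_remote(url, "YOUR_IMAGE.jpg"))
```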