# Production Scaling

## Overview
Scaling Anthias in production involves managing multiple services effectively, configuring Docker containers, handling deployment via balenaCloud, and ensuring optimal resource allocation.
## Step 1: Docker Compose Configuration

Scaling starts with the `docker-compose.yml` file, which defines the services required for the deployment. The service that most often needs to be scaled is `nginx`, which serves static content and acts as a reverse proxy.
```yaml
version: '3'

services:
  rpi-imager:
    build:
      dockerfile: Dockerfile.rpi-imager
  nginx:
    image: nginx:stable-alpine
    volumes:
      - .:/usr/share/nginx/html:ro
    ports:
      - 8080:80
```
## Step 2: Dockerfile Optimization

In the `Dockerfile`, ensure that only the necessary dependencies are included. For production, it is essential to build a slim image to reduce overhead and improve performance.
```dockerfile
FROM --platform=linux/arm/v7 balenalib/raspberrypi3:bookworm AS builder

# Install essential packages
RUN apt-get update && apt-get install -y \
    build-essential \
    nodejs \
    python3-pip \
    ...

FROM debian:bookworm

# Set up the environment
ENV ENVIRONMENT production
WORKDIR /build
COPY --from=builder /usr/lib/ /usr/src/app/usr/lib/
```
## Step 3: Environment Configuration

Use environment variables to configure the production environment. Set the variables the services need at runtime, such as the device IP and memory limits derived from the total system RAM.
```shell
export MY_IP=$(ip -4 route get 8.8.8.8 | awk '{print $7}')
TOTAL_MEMORY_KB=$(grep MemTotal /proc/meminfo | awk '{print $2}')
export VIEWER_MEMORY_LIMIT_KB=$(echo "$TOTAL_MEMORY_KB * 0.8" | bc)
export SHM_SIZE_KB=$(echo "$TOTAL_MEMORY_KB * 0.3" | bc | cut -d'.' -f1)
```
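The memory arithmetic above is easy to get wrong on a constrained device, so a small sanity check can help. The following is a minimal Python sketch of the same calculation; the function name `compute_limits` is hypothetical and not part of Anthias.

```python
def compute_limits(total_memory_kb: int) -> dict:
    """Mirror the shell arithmetic above: budget 80% of RAM for the
    viewer and 30% for shared memory, truncated to whole kilobytes."""
    return {
        "viewer_memory_limit_kb": int(total_memory_kb * 0.8),
        "shm_size_kb": int(total_memory_kb * 0.3),
    }

# Example: a device with roughly 1 GB of RAM
limits = compute_limits(1_000_000)
print(limits)  # {'viewer_memory_limit_kb': 800000, 'shm_size_kb': 300000}
```

Running this against your device's `MemTotal` value lets you confirm the exported limits before wiring them into the compose file.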
## Step 4: Deployment with Balena

Use balenaCloud to manage your fleets. Create a fleet, then set the required device configuration variables with the CLI:

```shell
balena env add BALENA_HOST_CONFIG_gpu_mem $GPU_MEM_VALUE --fleet $FLEET_NAME
balena env add BALENA_HOST_CONFIG_dtoverlay vc4-kms-v3d --fleet $FLEET_NAME
```
Deploy your application using the following command:

```shell
./bin/deploy_to_balena.sh --board $BOARD_TYPE --fleet $FLEET_NAME
```
To deploy local changes, use:

```shell
./bin/deploy_to_balena.sh --board $BOARD_TYPE --fleet $FLEET_NAME --dev
```
## Step 5: Implementing Auto-Scaling

Auto-scaling can be achieved by setting thresholds for service performance metrics, such as CPU and memory usage. Use Docker's resource limit parameters when deploying:
```yaml
viewer:
  image: anthias-viewer
  deploy:
    resources:
      limits:
        cpus: '2'
        memory: 2G
```
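Anthias does not ship a built-in autoscaler, so the threshold check itself has to live in your own tooling. A minimal sketch of such a check, with all names hypothetical, might look like:

```python
def should_scale_up(cpu_percent: float, memory_percent: float,
                    cpu_threshold: float = 80.0,
                    mem_threshold: float = 75.0) -> bool:
    """Hypothetical helper: signal a scale-up when either metric
    crosses its threshold. Thresholds are example values, not
    Anthias defaults."""
    return cpu_percent > cpu_threshold or memory_percent > mem_threshold

print(should_scale_up(90.0, 40.0))  # True: CPU above the 80% threshold
print(should_scale_up(50.0, 40.0))  # False: both metrics below their thresholds
```

In practice you would feed this from whatever metrics source you already collect (e.g. container stats) and react by adjusting the resource limits or replica counts shown above.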
## Step 6: Monitoring and Diagnostics

Implement monitoring to assess the health and performance of the deployed services. Collect metrics and logs to understand usage patterns and to track request/response load over time.
```python
import os

def get_load_avg():
    """Return the system load averages, rounded to two decimals."""
    load_avg = {}
    system_load = os.getloadavg()
    load_avg['1 min'] = round(system_load[0], 2)
    ...
    return load_avg
```
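Load average alone misses memory pressure. As a complement, a small sketch can derive a memory-usage percentage from `/proc/meminfo`-style text; the helper names here are hypothetical and not from the Anthias codebase.

```python
def parse_meminfo(text: str) -> dict:
    """Parse 'Key: value kB' lines (the /proc/meminfo format) into
    a dict of integer kilobyte values."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            info[key.strip()] = int(fields[0])
    return info

def memory_usage_percent(meminfo: dict) -> float:
    """Percentage of RAM in use, based on MemTotal and MemAvailable."""
    used = meminfo["MemTotal"] - meminfo["MemAvailable"]
    return round(100.0 * used / meminfo["MemTotal"], 2)

sample = "MemTotal: 1000000 kB\nMemAvailable: 400000 kB"
print(memory_usage_percent(parse_meminfo(sample)))  # 60.0
```

On a live device you would read the real file with `open("/proc/meminfo").read()` and export the result alongside the load averages.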
Integrating a dashboard or logging system is key to successful scaling.
## Conclusion
Having proper configurations, streamlined deployments, and robust monitoring systems ensures that Anthias can efficiently scale in production. Adhere to best practices in managing resources, dependencies, and configurations to maximize performance and reliability.
*Source: Various source files from the Anthias repository.*