Production Deployment

Prerequisites

Ensure Docker and Docker Compose are installed on your system. This guide assumes you are comfortable with both tools and have some familiarity with TypeScript and its build tooling.
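
You can confirm both tools are available before proceeding:

docker --version
docker compose version   # or: docker-compose --version for the standalone binary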

Step 1: Set Up Environment Variables

Create a .env file in the root of your project with the necessary configuration values. Docker Compose reads this file (via env_file) and passes the variables to the backend and background services.

# .env
DATABASE_URL="your_database_url"
REDIS_URL="redis:6379"
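
Since this file holds credentials, it is worth restricting its permissions; the command below is a general precaution, not something specific to this project:

chmod 600 .env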

Step 2: Define Services in docker-compose.yml

The docker-compose.yml file defines the services needed for deployment: Redis, the backend, the frontend, the background worker, and an optional local embedding reranker that sits behind a Compose profile. The structure looks like this:

version: '3.8'

services:
  redis:
    image: redis:6.0.16
    restart: always
    volumes:
      - ./redis-data:/data
    command: ["redis-server", "--loglevel", "warning"]

  backend:
    image: tidbai/backend:0.2.8
    restart: always
    depends_on:
      - redis
    ports:
      - "8000:80"
    env_file:
      - .env
    volumes:
      - ./data:/shared/data
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "6"

  frontend:
    image: tidbai/frontend:0.2.8
    restart: always
    depends_on:
      - backend
    ports:
      - "3000:3000"
    environment:
      BASE_URL: http://backend
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "6"

  background:
    image: tidbai/backend:0.2.8
    restart: always
    depends_on:
      - redis
    ports:
      - "5555:5555"
    env_file:
      - .env
    volumes:
      - ./data:/shared/data
    command: /usr/bin/supervisord
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "6"

  local-embedding-reranker:
    image: tidbai/local-embedding-reranker:v3-with-cache
    ports:
      - "5001:5001"
    environment:
      - PRE_LOAD_DEFAULT_EMBEDDING_MODEL=true
      - PRE_LOAD_DEFAULT_RERANKER_MODEL=false
      - TRANSFORMERS_OFFLINE=1
    profiles:
      - local-embedding-reranker
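
Because the local-embedding-reranker service sits behind a Compose profile, a plain up skips it. You can validate the merged configuration and opt into the profile like this:

# Check that the file parses and the .env values resolve
docker-compose config

# Start the stack including the optional reranker
docker-compose --profile local-embedding-reranker up -d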

Step 3: Build the Docker Images

The docker-compose.yml above references published images (tidbai/backend:0.2.8, tidbai/frontend:0.2.8), so building is only needed if you add build: contexts that point at the project's Dockerfiles. With build contexts in place, compile the frontend and backend images with:

docker-compose build
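
If you do build locally, one way is a docker-compose.override.yml that adds the build contexts; the paths below are assumptions about the repository layout and may need adjusting:

# docker-compose.override.yml (merged automatically by docker-compose)
services:
  backend:
    build:
      context: ./backend        # assumed location of the backend Dockerfile
  frontend:
    build:
      context: ./frontend       # assumed location of the frontend Dockerfile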

Step 4: Run the Docker Containers

Start the services defined in docker-compose.yml with:

docker-compose up -d

The -d flag runs the containers in detached mode. This starts Redis, the backend, the frontend, and the background worker in the background; the local embedding reranker only starts when its Compose profile is enabled (see Step 2).
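
A quick way to verify that everything came up is to list the containers and their state:

docker-compose ps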

Step 5: Access the Application

After the services are up, the application can be accessed via:

  • Frontend: http://localhost:3000
  • Backend API: http://localhost:8000
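
A simple reachability check from the host can confirm that both endpoints respond; the backend's exact health-check route is not covered here, so the root URL is used only as a probe:

curl -I http://localhost:3000
curl -I http://localhost:8000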

Step 6: Monitoring and Logging

Monitor logs for each service using:

docker-compose logs -f

This tails the logs of all services continuously, which makes it easier to spot errors as they happen.
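
To focus on a single service, name it and limit the history, for example:

docker-compose logs -f --tail=100 backend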

Step 7: Stopping and Restarting the Services

To stop the application and remove its containers and network, run:

docker-compose down

To restart, simply run docker-compose up -d again.
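
Two related operations come up often in practice: recreating one service after a configuration change, and upgrading to newer images. Both use standard Compose commands:

# Recreate only the backend after editing .env
docker-compose up -d backend

# Pull newer image tags referenced in docker-compose.yml, then recreate
docker-compose pull
docker-compose up -d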

Step 8: Building with Dockerfile

The Dockerfile shows how the production frontend image is assembled: the runner stage serves the Next.js standalone build as a non-root user, sets the production environment variables, and exposes port 3000:

FROM node:20-alpine AS base

# ... Other build processes ...

FROM base AS runner
WORKDIR /tidb.ai

ENV NODE_ENV=production
ENV PORT=3000
ENV HOSTNAME=0.0.0.0

RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001

COPY --from=builder --chown=nextjs:nodejs /tidb.ai/frontend/app/.next/standalone .
COPY --from=builder --chown=nextjs:nodejs /tidb.ai/frontend/app/.next/static app/.next/static
COPY --from=builder /tidb.ai/frontend/app/public app/public

USER nextjs

EXPOSE 3000

CMD ["node", "app/server.js"]
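
If you need to rebuild this image outside of Compose, a plain docker build works; the path to the Dockerfile below is an assumption about the repository layout:

docker build -t tidbai/frontend:local -f frontend/Dockerfile .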

Conclusion

Follow the steps above to deploy the application in a production environment, and adjust the environment variables and service configuration to match your own setup.

Source: Information gathered from the docker-compose.yml and Dockerfile provided in the project structure.