GenAI Stack

Understanding the GenAI Stack and its purpose.

Environment Configuration

Creating and configuring the .env file. Understanding the available variables and their descriptions.
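As a sketch, a minimal .env might look like the following; the exact variable names and defaults depend on the version of the stack you are running, so treat these as illustrative:

```env
# Neo4j connection (defaults typically match the bundled database container)
NEO4J_URI=neo4j://database:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=password

# LLM and embedding model selection
LLM=llama2
EMBEDDING_MODEL=sentence_transformer

# Only needed when using a hosted LLM instead of Ollama
OPENAI_API_KEY=sk-...
```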

LLM Configuration

Installing and running Ollama on macOS and Linux. Using pre-built models or bringing your own API keys.
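On Linux, the setup typically comes down to a few commands (the model name should match the LLM variable in your .env; llama2 here is just an example):

```shell
# Linux install script; macOS users can download the app from ollama.com
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model to use with the stack
ollama pull llama2

# Verify the Ollama server is answering on its default port
curl http://localhost:11434
```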

Docker Compose

Understanding the docker-compose.yml file. Running docker compose up, docker compose up --build, docker compose watch, and docker compose down.
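The four commands side by side, with what each one does:

```shell
# Start the stack (pulls or builds images on first run)
docker compose up

# Force a rebuild of the images before starting
docker compose up --build

# Rebuild or sync services automatically as source files change
docker compose watch

# Stop and remove the containers
docker compose down
```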

Applications

Understanding the different applications in the stack. Exploring the main files, compose names, URLs, and descriptions for each application.

Support Bot

Learning how to answer support questions based on recent entries. Demonstrating the difference between RAG Disabled and RAG Enabled modes. Generating support tickets based on the style of highly rated questions in the database.
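The RAG toggle can be understood as a difference in prompt construction: with RAG disabled, the LLM sees only the question; with RAG enabled, entries retrieved from the database are prepended as grounding context. A minimal sketch of that idea (function names and prompt wording are illustrative, not the stack's actual code):

```python
def build_prompt(question: str, retrieved: list[str], rag_enabled: bool) -> str:
    """Assemble the prompt sent to the LLM.

    With RAG disabled the model answers from its training data alone;
    with RAG enabled, retrieved support entries ground the answer.
    """
    if not rag_enabled or not retrieved:
        return f"Answer the support question:\n{question}"
    context = "\n---\n".join(retrieved)
    return (
        "Answer the support question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# RAG disabled: the question goes through unchanged.
print(build_prompt("How do I reset my password?", [], rag_enabled=False))

# RAG enabled: relevant entries are injected as context.
docs = ["Q: reset password? A: Use the 'Forgot password' link."]
print(build_prompt("How do I reset my password?", docs, rag_enabled=True))
```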

Loader

Importing recent Stack Overflow data for selected tags into a knowledge graph. Embedding questions and answers and storing them in a vector index. Choosing tags, running imports, and monitoring progress and statistics for the data in the database.
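The core of the import loop is: for each Q&A pair, compute an embedding and store it alongside the text. A toy sketch of that shape, with a deterministic stand-in for a real embedding model and an in-memory list standing in for the Neo4j vector index:

```python
import hashlib
import math

def embed(text: str, dim: int = 8) -> list[float]:
    # Toy deterministic embedding standing in for a real model.
    h = hashlib.sha256(text.encode()).digest()
    v = [b / 255.0 for b in h[:dim]]
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

# A minimal in-memory "vector index" of imported (question, answer) pairs.
index: list[dict] = []
for q, a in [("How to tag a Docker image?", "Use docker tag SOURCE TARGET."),
             ("What is a Compose service?", "A containerized component.")]:
    index.append({"question": q, "answer": a, "vector": embed(q + " " + a)})

print(f"Imported {len(index)} Q&A pairs")
```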

PDF Reader

Splitting a local PDF into text chunks and embedding them into Neo4j. Asking questions about the PDF contents and having the LLM answer them using vector similarity search.
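The chunk-then-retrieve pipeline can be sketched end to end in a few lines. Here a bag-of-words vector stands in for a real embedding model, and cosine similarity picks the chunk most relevant to the question; the chunk sizes and sample text are made up for illustration:

```python
import math

def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    # Overlapping character chunks, the usual shape of a PDF text splitter.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text: str, vocab: list[str]) -> list[float]:
    # Toy bag-of-words vector standing in for a real embedding model.
    words = text.lower().split()
    return [float(words.count(w)) for w in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

pdf_text = ("Our refund policy allows returns within 30 days. "
            "Shipping is free on orders over 50 dollars.")
chunks = chunk(pdf_text)
vocab = sorted({w for c in chunks for w in c.lower().split()})

query = "refund policy"
qv = embed(query, vocab)
best = max(chunks, key=lambda c: cosine(qv, embed(c, vocab)))
print(best)  # the chunk mentioning the refund policy
```

In the real application the best-matching chunks are not printed but passed to the LLM as context, exactly as in the support bot's RAG mode.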

Standalone HTTP API

Understanding the functionality and endpoints of the standalone HTTP API. Using non-streaming and SSE streaming endpoints.
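A non-streaming endpoint returns one JSON body; an SSE streaming endpoint emits a sequence of `data:` frames, one per token, each terminated by a blank line. A minimal sketch of the framing (the payload shape is illustrative, not the API's actual schema):

```python
import json

def sse_event(data: dict) -> str:
    """Format one payload as a Server-Sent Events frame."""
    return f"data: {json.dumps(data)}\n\n"

def stream_answer(tokens):
    """Generator yielding SSE frames, as a streaming endpoint would."""
    for t in tokens:
        yield sse_event({"token": t})
    yield sse_event({"done": True})

body = "".join(stream_answer(["Neo4j", " stores", " graphs."]))
print(body)
```

A client reassembles the answer by concatenating the `token` fields until it sees the final `done` frame.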

Static front-end

Building the application separately from the back-end code using modern best practices. Instant auto-reload on changes using the Docker watch sync config.
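The watch behavior comes from a develop/watch section in the compose file: source edits are synced into the running container so the dev server hot-reloads, while dependency changes trigger a rebuild. An illustrative fragment (service and path names are assumptions, not necessarily the stack's exact layout):

```yaml
services:
  front-end:
    build: ./front-end
    develop:
      watch:
        # Sync source edits into the container for instant reload
        - action: sync
          path: ./front-end/src
          target: /app/src
        # Rebuild the image when dependencies change
        - action: rebuild
          path: ./front-end/package.json
```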

Dockerfiles

Understanding the purpose and contents of each Dockerfile.
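A recurring pattern in the Dockerfiles is worth noticing: the dependency list is copied and installed before the application code, so the expensive install layer stays cached across code edits. An illustrative sketch for a Python service (file names, port, and entrypoint are assumptions):

```dockerfile
FROM python:3.11-slim
WORKDIR /app

# Install dependencies first so this layer is cached across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Then copy the application code
COPY . .
EXPOSE 8504
ENTRYPOINT ["streamlit", "run", "bot.py"]
```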

Scripts

Understanding the purpose and usage of each script.

Troubleshooting and Debugging

Learning how to troubleshoot and debug issues with the stack.
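A few commands cover most debugging sessions (the service name after logs comes from docker-compose.yml; bot is just an example):

```shell
# Inspect container status and follow the output of a failing service
docker compose ps
docker compose logs -f bot

# Check that Ollama is reachable from the host
curl http://localhost:11434

# Start fresh if state in the database volume is suspect
docker compose down --volumes
```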

Deployment

Understanding how to deploy the stack in various environments.

Codebase

Learning the codebase in order to contribute to genai-stack.