A hands-on workshop for building a movie search application with Redis Cloud, Vector Search, RAG (Retrieval Augmented Generation), and Semantic Caching. Learn how to implement several search techniques, including vector similarity search, hybrid search, full-text search, and more! The project also includes a Help Center with guardrails built on a semantic router and PII protection.
- Overview
- Workshop Challenges
- Architecture
- Quick Start
- Search Types
- Help Center & RAG
- Troubleshooting
- Resources
This workshop guides you through building a complete movie recommendation system that leverages:
- Redis Cloud - Fully managed Redis database with vector search capabilities
- Vector Similarity Search - Find movies by semantic meaning
- Full-Text Search - Traditional keyword-based search with BM25 scoring
- Hybrid Search - Combine vector and text search for best results
- Filtered Search - Apply metadata filters (genre, rating) to vector results
- Range Queries - Find results within a semantic distance threshold
- Semantic Caching - Cache LLM responses for faster repeated queries
- Help Center RAG - AI-powered customer support with article retrieval
- Semantic Router Guardrails - Block off-topic queries using semantic routing
- PII Protection - Prevent caching of personally identifiable information
Complete these challenges in order to build out the full application. Look for `# TODO` comments in the code.
| # | Challenge | File | Description |
|---|---|---|---|
| 1 | Index Schema | `backend/config.py` | Define the Redis index schema with field types (text, tag, numeric, vector) |
| 2 | Vector Query | `backend/search_engine.py` | Create a `VectorQuery` for semantic similarity search |
| 3 | Text Query | `backend/search_engine.py` | Create a `TextQuery` for BM25 keyword search |
| 4 | Hybrid Query | `backend/search_engine.py` | Create an `AggregateHybridQuery` combining vector + text search |
| 5 | Range Query | `backend/search_engine.py` | Modify the distance threshold and observe how the results change |
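Stuck? The sketch below shows roughly what challenges 1-3 look like with RedisVL. The index name, field names, and 384-dimension vector (matching all-MiniLM-L6-v2) are illustrative assumptions, not necessarily what the workshop files use; follow the hints above each `# TODO` for the exact shapes.

```python
from redisvl.index import SearchIndex
from redisvl.query import TextQuery, VectorQuery
from redisvl.schema import IndexSchema
from redisvl.utils.vectorize import HFTextVectorizer

# Challenge 1 (sketch): an index schema with text, tag, numeric, and vector fields.
schema = IndexSchema.from_dict({
    "index": {"name": "movies", "prefix": "movie:"},
    "fields": [
        {"name": "title", "type": "text"},
        {"name": "genre", "type": "tag"},
        {"name": "rating", "type": "numeric"},
        {"name": "embedding", "type": "vector",
         "attrs": {"dims": 384, "algorithm": "hnsw",
                   "distance_metric": "cosine", "datatype": "float32"}},
    ],
})
index = SearchIndex(schema, redis_url="redis://default:PASSWORD@ENDPOINT:PORT")

# Embed the user's query with the same model the workshop uses for movies.
vectorizer = HFTextVectorizer(model="sentence-transformers/all-MiniLM-L6-v2")
query_embedding = vectorizer.embed("murder movies with a twist")

# Challenge 2 (sketch): semantic similarity search over the vector field.
vector_query = VectorQuery(
    vector=query_embedding,
    vector_field_name="embedding",
    return_fields=["title", "genre", "rating"],
    num_results=10,
)

# Challenge 3 (sketch): BM25 keyword search over a text field.
text_query = TextQuery(
    text="murder movies with twist",
    text_field_name="title",
    return_fields=["title", "genre", "rating"],
    num_results=10,
)

results = index.query(vector_query)
```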
| # | Challenge | File | Description |
|---|---|---|---|
| 6 | Create Cache | `backend/semantic_cache.py` | Initialize a `SemanticCache` with Redis |
| 7 | Check Cache | `backend/semantic_cache.py` | Query the cache for semantically similar entries |
| 8 | Store in Cache | `backend/semantic_cache.py` | Store query-response pairs in the cache |
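For challenges 6-8, RedisVL's `SemanticCache` extension covers all three steps. A minimal sketch, assuming an illustrative cache name and distance threshold:

```python
from redisvl.extensions.llmcache import SemanticCache

# Challenge 6 (sketch): create the cache; a lower threshold means stricter matching.
cache = SemanticCache(
    name="help_center_cache",
    redis_url="redis://default:PASSWORD@ENDPOINT:PORT",
    distance_threshold=0.1,
)

# Challenge 7 (sketch): look up a semantically similar, previously answered prompt.
if hits := cache.check(prompt="How do I reset my password?"):
    print(hits[0]["response"])   # cache hit -- skip the LLM call

# Challenge 8 (sketch): store a new prompt/response pair for future queries.
cache.store(
    prompt="How do I reset my password?",
    response="Go to Settings > Account > Reset Password.",
)
```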
| # | Challenge | File | Description |
|---|---|---|---|
| 9 | PII Detection | `backend/guardrails.py` | Add regex patterns to detect sensitive information |
| 10 | Semantic Router | `backend/guardrails.py` | Create a `SemanticRouter` to filter off-topic queries |
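For challenges 9-10, here is one possible shape of both guardrails. The regex patterns, route name, and reference phrases are illustrative assumptions, not the workshop's exact definitions:

```python
import re
from redisvl.extensions.router import Route, SemanticRouter

# Challenge 9 (sketch): simple regexes for common PII; real patterns may be stricter.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def contains_pii(text: str) -> bool:
    return any(pattern.search(text) for pattern in PII_PATTERNS.values())

# Challenge 10 (sketch): route queries by semantic similarity to reference phrases.
allowed = Route(
    name="help_center",
    references=["reset my password", "billing question", "movie recommendations"],
    distance_threshold=0.7,
)
router = SemanticRouter(
    name="topic-router",
    routes=[allowed],
    redis_url="redis://default:PASSWORD@ENDPOINT:PORT",
)

match = router("What's the weather today?")
if match.name != "help_center":
    print("Blocked: off-topic query")
```

Calling `router(query)` returns the best-matching route (or none), so anything that does not land on an allowed route can be rejected before it ever reaches the LLM.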
- Search for `# TODO` in your IDE to find all challenges
- Each challenge has hints in the comments above it
- Test your changes using the UI at http://localhost:3000
```
┌─────────────────────────────────────────────────────────────────┐
│                        Frontend (React)                         │
│  Movie Search UI  │  Help Center Chat      http://localhost:3000│
└─────────────────────────────────────────────────────────────────┘
                                 │
                                 ▼
┌─────────────────────────────────────────────────────────────────┐
│                       NGINX Reverse Proxy                       │
│                    Routes /api/* to backend                     │
└─────────────────────────────────────────────────────────────────┘
                                 │
                                 ▼
┌─────────────────────────────────────────────────────────────────┐
│                        Backend (FastAPI)                        │
│                      http://localhost:8000                      │
│                                                                 │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │                     MovieSearchEngine                     │  │
│  │  • Vector Search  • Hybrid Search  • Range Search         │  │
│  │  • Filtered Search  • Keyword Search  • Embeddings Cache  │  │
│  └───────────────────────────────────────────────────────────┘  │
│                                                                 │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │                      HelpCenterEngine                     │  │
│  │  • RAG Pipeline  • Semantic Cache  • OpenAI LLM           │  │
│  └───────────────────────────────────────────────────────────┘  │
│                                │                                │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │                         Guardrails                        │  │
│  │  • Semantic Router (topic filtering)                      │  │
│  │  • PII Detection (email, phone, SSN, credit card)         │  │
│  └───────────────────────────────────────────────────────────┘  │
│                                │                                │
│                ┌───────────────┴───────────────┐                │
│                ▼                               ▼                │
│  ┌───────────────────────────┐   ┌───────────────────────────┐  │
│  │  HuggingFace Vectorizer   │   │    OpenAI GPT-4o-mini     │  │
│  │    (all-MiniLM-L6-v2)     │   │   (Response Generation)   │  │
│  └───────────────────────────┘   └───────────────────────────┘  │
└─────────────────────────────────────────────────────────────────┘
                                 │
                                 ▼
┌─────────────────────────────────────────────────────────────────┐
│                           Redis Cloud                           │
│             redis://default:***@your-endpoint:port              │
│                                                                 │
│  ┌────────────────┐  ┌────────────────┐  ┌───────────────────┐  │
│  │  Movie Index   │  │ Help Articles  │  │  Semantic Cache   │  │
│  │  (HNSW/FLAT)   │  │     Index      │  │  (LLM Responses)  │  │
│  └────────────────┘  └────────────────┘  └───────────────────┘  │
│  ┌────────────────┐  ┌────────────────┐                         │
│  │   Embeddings   │  │  Router Index  │                         │
│  │     Cache      │  │  (Guardrails)  │                         │
│  └────────────────┘  └────────────────┘                         │
└─────────────────────────────────────────────────────────────────┘
```
- Python 3.11+ & Node.js 20+ (recommended)
- Docker & Docker Compose (alternative)
- Redis Cloud account - redis.io/try-free
- OpenAI API Key - platform.openai.com/api-keys (for Help Center)
- Create a free database at redis.io/try-free
- Copy your connection URL: `redis://default:PASSWORD@ENDPOINT:PORT`
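To sanity-check the connection before going further, a quick ping with redis-py (optional; not part of the workshop code, and the URL below is a placeholder):

```python
# Requires `pip install redis`
import redis

r = redis.from_url("redis://default:PASSWORD@ENDPOINT:PORT")
print(r.ping())  # True means the database is reachable
```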
```bash
# Clone and configure
git clone <repo-url>
cd full_stack_redis_ai_workshop

# Create .env file
echo "REDIS_URL=redis://default:YOUR_PASSWORD@YOUR_ENDPOINT:PORT" > .env
echo "OPENAI_API_KEY=your_key_here" >> .env
```

**Terminal 1 - Backend:**

```bash
python3 -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r backend/requirements.txt
uvicorn backend.main:app --reload --port 8000
```

**Terminal 2 - Frontend:**

```bash
cd frontend
npm install
npm run dev
```

Access at: http://localhost:5173

```bash
# Import movie data
./scripts/import_data.sh

# Create search index (via UI or curl)
curl -X POST http://localhost:8000/api/create-index
```

**Docker (alternative):**

```bash
docker-compose up --build
```

Access at: http://localhost:3000
Make ports 3000, 5173, and 8000 public in the Ports tab.
| Type | Best For | Example Query |
|---|---|---|
| Vector | Semantic meaning | "Murder movies with twist" |
| Keyword | Exact matches | "Murder movies with twist" |
| Hybrid | Best of both | "college friends story" (alpha=0.5) |
| Filtered | With constraints | "emotional story" + genre:romance |
| Range | High relevance only | "smuggling syndicate" + threshold < 0.45 |
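Filtered and range search are small variations on the basic `VectorQuery`. A hedged sketch, reusing the assumed `embedding` field and all-MiniLM-L6-v2 embeddings from the challenge sketch above:

```python
from redisvl.query import RangeQuery, VectorQuery
from redisvl.query.filter import Tag
from redisvl.utils.vectorize import HFTextVectorizer

vectorizer = HFTextVectorizer(model="sentence-transformers/all-MiniLM-L6-v2")

# Filtered search (sketch): constrain vector results with a genre tag filter.
filtered = VectorQuery(
    vector=vectorizer.embed("emotional story"),
    vector_field_name="embedding",
    return_fields=["title", "genre", "rating"],
    num_results=10,
    filter_expression=Tag("genre") == "romance",
)

# Range search (sketch): only return hits closer than a distance threshold.
ranged = RangeQuery(
    vector=vectorizer.embed("smuggling syndicate"),
    vector_field_name="embedding",
    return_fields=["title", "genre"],
    distance_threshold=0.45,   # tighten or loosen this and compare the result set
)
```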
The Help Center uses a complete RAG pipeline:
Query → Guardrails → Cache Check → Article Search → LLM Response
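One way these stages might chain together is sketched below; the helper callables (`is_blocked`, `retrieve_articles`, `generate_answer`) are placeholders for the workshop's guardrails, article search, and OpenAI pieces, not its actual function names:

```python
from typing import Callable
from redisvl.extensions.llmcache import SemanticCache

def answer_help_query(
    query: str,
    cache: SemanticCache,
    is_blocked: Callable[[str], bool],           # guardrails: semantic router + PII
    retrieve_articles: Callable[[str], list],    # vector search over help articles
    generate_answer: Callable[[str, list], str], # OpenAI call grounded in articles
) -> str:
    # 1. Guardrails: refuse off-topic or PII-bearing queries up front
    if is_blocked(query):
        return "Sorry, I can only answer questions about this service."

    # 2. Cache check: reuse a semantically similar answer if one is stored
    if hits := cache.check(prompt=query):
        return hits[0]["response"]

    # 3 & 4. Article search + LLM response, then cache the result for next time
    articles = retrieve_articles(query)
    response = generate_answer(query, articles)
    cache.store(prompt=query, response=response)
    return response
```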
Try it:
- "How do I reset my password?" β LLM response
- Same question again β Cached response
- "What's the weather?" β Blocked (off-topic)
Setup:

```bash
curl -X POST http://localhost:8000/api/help/ingest
```

If you see connection errors:
- Verify your Redis Cloud database is running (check the dashboard)
- Ensure your `REDIS_URL` is correct in the `.env` file
- Check that your IP is whitelisted in Redis Cloud security settings
If RIOT import fails:
- Ensure RIOT is installed: `riot --version`
- Verify your `REDIS_URL` environment variable is set correctly
- Check that the `resources/movies.json` file exists
If /api/create-index fails:
- Ensure the RIOT import was run first (check for `movie:*` keys in Redis)
- Check the backend logs for detailed error messages
- Verify Redis Cloud connectivity with `/api/health`
- Ensure your database has enough memory (the 30 MB free tier should be sufficient)
This project is licensed under the MIT License - see the LICENSE file for details.
- Redis Cloud - Free Trial
- Redis Vector Library (RedisVL)
- RedisVL Semantic Router
- RedisVL LLM Semantic Cache
- RediSearch Documentation
- RIOT - Redis Input/Output Tools