# RAGO Documentation
Welcome to the RAGO (Retrieval Augmented Generation Optimizer) documentation!
## 📚 Documentation Structure
```
📁 docs/
├── 📁 code_architecture       # Code Architecture
│   └── 📄 overview.md
├── 📁 installation            # Installation guide
│   ├── 📄 elasticsearch.md
│   └── 📄 ollama.md
└── 📁 usage_guide             # Usage guide
    ├── 📁 dataset             # Generate and load datasets
    │   ├── 📄 data_loader.md
    │   └── 📄 generator.md
    ├── 📁 evaluation          # Evaluation and metrics
    │   └── 📄 metrics.md
    ├── 📁 optimization        # Optimization methods and strategies
    │   ├── 📄 run_experiment.md
    │   └── 📄 tpe.md
    └── 📁 rag                 # RAG concepts, configurations and components
        ├── 📄 rag_concepts.md
        ├── 📄 rag_configuration.md
        ├── 📄 reader.md
        └── 📄 retriever.md
```
## 🎯 Quick Navigation
### 🚀 Getting Started
- Installation - Setup & Ollama configuration
- Quick Start - Your first optimization
### 📖 Core Documentation
- RAG Concepts - Understanding RAG
- RAG Configuration - Parameters & search space
- Retriever - Retrieval methods
- Reader - Generation strategies
### ⚙️ Optimization
- Run Optimization - Optimization parameters and strategies
- TPE Algorithm - Bayesian optimization theory
### 🔧 Evaluation & Datasets
- Evaluators - Evaluators overview (BertScore, SimilarityScore, LLM-as-Judge)
- Metrics - Evaluation metrics
- Dataset Loader - Dataset loading and format
- Dataset Generator - Dataset generators
## 🔬 Core Concepts
RAG (Retrieval Augmented Generation) combines three steps:

1. Retrieve relevant documents from a knowledge base
2. Augment the LLM prompt with the retrieved context
3. Generate an informed answer
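The three steps can be sketched in plain Python. This is a toy illustration, not RAGO's implementation: the helper names are hypothetical, retrieval is naive keyword overlap standing in for a real retriever, and the LLM call is stubbed so the sketch runs offline.

```python
def retrieve(query, knowledge_base, top_k=2):
    """Step 1: rank documents by naive keyword overlap with the query
    (a stand-in for a real retriever such as BM25 or dense embeddings)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def augment(query, context_docs):
    """Step 2: prepend the retrieved context to the LLM prompt."""
    context = "\n".join(context_docs)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

def generate(prompt):
    """Step 3: call an LLM -- stubbed here to keep the sketch runnable."""
    return "Answer based on: " + prompt.splitlines()[1]

kb = [
    "Paris is the capital of France.",
    "The Eiffel Tower is in Paris.",
    "Mount Everest is the highest mountain.",
]
query = "What is the capital of France?"
docs = retrieve(query, kb)
answer = generate(augment(query, docs))
print(answer)
```

In a real pipeline, `retrieve` would query a vector store or Elasticsearch index and `generate` would call an LLM (e.g. via Ollama); only the three-step shape carries over.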
RAG Optimization automatically finds the best configuration (retriever, embeddings, LLM parameters) for your use case using the Optuna optimization framework (Bayesian TPE method by default).
→ Learn more: RAG Concepts | Config Space
## 📖 External Resources
### Research Papers
- Tree-structured Parzen Estimator - TPE optimization algorithm
- BERTScore - Semantic evaluation metrics
- LLM-as-a-Judge - Using LLMs for evaluation
### Dependencies
- Optuna - Hyperparameter optimization framework
- LangChain - LLM application framework
- LlamaIndex - Data framework for LLMs
- Ollama - Run LLMs locally
## 💡 Need Help?
- 💬 Ask in GitHub Discussions
- 🐛 Report bugs in Issues