Remove wiki folder - content moved to GitHub wiki

This commit is contained in:
kelinfoxy
2026-01-21 20:27:13 -05:00
parent 43f78b384f
commit 99c6a39333
95 changed files with 0 additions and 32240 deletions

View File

@@ -1,299 +0,0 @@
# AI Management Guide
## Overview
The EZ-Homelab is designed for **AI-assisted management** using GitHub Copilot in VS Code. This guide explains how to leverage AI capabilities for deploying, configuring, and maintaining your homelab infrastructure.
## AI Assistant Capabilities
### 🤖 Copilot Integration
The AI assistant is specifically trained on the AI-Homelab architecture and can:
- **Deploy Services**: Generate Docker Compose configurations
- **Configure Networks**: Set up proper network routing
- **Manage Authentication**: Configure Authelia SSO rules
- **Troubleshoot Issues**: Diagnose and fix common problems
- **Update Services**: Handle version updates and migrations
- **Create Documentation**: Generate service-specific guides
### 🎯 AI-First Design
The entire system is built with AI management in mind:
- **File-Based Configuration**: All settings in editable YAML files
- **Declarative Architecture**: Define desired state, AI handles implementation
- **Comprehensive Documentation**: AI can reference complete guides
- **Template System**: Ready-to-use configuration templates
## Getting Started with AI Management
### Prerequisites
1. **VS Code** with GitHub Copilot extension
2. **EZ-Homelab Repository** cloned locally
3. **Basic Understanding** of Docker concepts
### Initial Setup
```bash
# Clone the repository
git clone https://github.com/kelinfoxy/EZ-Homelab.git
cd EZ-Homelab
# AI will help with configuration
# Ask: "Help me configure the .env file"
```
## AI Management Workflows
### 1. Service Deployment
**Ask the AI:**
- "Deploy Nextcloud with PostgreSQL database"
- "Add Jellyfin media server to my stack"
- "Create a monitoring stack with Grafana and Prometheus"
**AI Will:**
- Generate appropriate Docker Compose files
- Configure Traefik labels for routing
- Set up Authelia authentication
- Add service to Homepage dashboard
- Provide deployment commands
### 2. Configuration Management
**Ask the AI:**
- "Configure Authelia for two-factor authentication"
- "Set up VPN routing for qBittorrent"
- "Create backup strategy for my services"
**AI Will:**
- Modify configuration files
- Update environment variables
- Generate security settings
- Create backup scripts
### 3. Troubleshooting
**Ask the AI:**
- "Why isn't my service accessible?"
- "Fix SSL certificate issues"
- "Resolve port conflicts"
**AI Will:**
- Analyze logs and configurations
- Identify root causes
- Provide step-by-step fixes
- Prevent future issues
### 4. System Updates
**Ask the AI:**
- "Update all services to latest versions"
- "Migrate from old configuration format"
- "Add new features to existing services"
**AI Will:**
- Check for updates
- Handle breaking changes
- Update configurations
- Test compatibility
## AI Assistant Instructions
The AI assistant follows these core principles:
### Project Architecture Understanding
- **Core Infrastructure**: DuckDNS, Traefik, Authelia, Gluetun, Sablier (deploy first)
- **Service Categories**: 10 categories with 70+ services
- **Network Model**: traefik-network primary, VPN routing for downloads
- **Security Model**: Authelia SSO by default, explicit bypasses
### File Structure Standards
```
docker-compose/ # Service templates
├── core/ # Core infrastructure
├── infrastructure/ # Management tools
├── media/ # Media services
└── ...
/opt/stacks/ # Runtime deployments
├── core/ # Essential services
├── infrastructure/ # Management stack
├── media/ # Media stack
└── ...
```
### Critical Operational Principles
#### 1. Security-First SSO Strategy
- **Default**: ALL services start with Authelia middleware
- **Bypass**: Only Plex and Jellyfin for app compatibility
- **Disabling**: Comment middleware line: `# - "traefik.http.routers.SERVICE.middlewares=authelia@docker"`
#### 2. Traefik Label Patterns
Standard routing configuration:
```yaml
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.SERVICE.rule=Host(`SERVICE.${DOMAIN}`)"
  - "traefik.http.routers.SERVICE.entrypoints=websecure"
  - "traefik.http.routers.SERVICE.tls.certresolver=letsencrypt"
  - "traefik.http.routers.SERVICE.middlewares=authelia@docker"
  - "traefik.http.services.SERVICE.loadbalancer.server.port=PORT"
  - "x-dockge.url=https://SERVICE.${DOMAIN}"
```
#### 3. Resource Management
Apply limits to prevent resource exhaustion:
```yaml
deploy:
  resources:
    limits:
      cpus: '2.0'   # Max CPU cores
      memory: 4G    # Max memory
      pids: 1024    # Max processes
    reservations:
      cpus: '0.5'   # Guaranteed CPU
      memory: 1G    # Guaranteed memory
```
#### 4. Storage Strategy
- **Configs**: `./service/config:/config` relative to stack directory
- **Small Data**: Named volumes (databases, app data <50GB)
- **Large Data**: External mounts `/mnt/media`, `/mnt/downloads`
- **Secrets**: `.env` files in stack directories
#### 5. LinuxServer.io Preference
- Use `lscr.io/linuxserver/*` images when available
- Standard environment: `PUID=1000`, `PGID=1000`, `TZ=${TZ}`
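Combining points 4 and 5, a service definition might look like the following sketch (the service name and mount paths are illustrative, not part of the repository's templates):

```yaml
services:
  sonarr:
    # LinuxServer.io image, preferred when available
    image: lscr.io/linuxserver/sonarr:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=${TZ}
    volumes:
      # Configs: relative to the stack directory
      - ./sonarr/config:/config
      # Large data: external mounts
      - /mnt/media/tv:/tv
      - /mnt/downloads:/downloads
```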
### AI Management Capabilities
The AI can manage the homelab by:
- **Creating services**: Generate compose files with proper Traefik labels
- **Modifying routes**: Edit Traefik labels in compose files
- **Managing external hosts**: Update Traefik dynamic configuration
- **Configuring Homepage**: Edit services.yaml for dashboard
- **Toggling SSO**: Add/remove Authelia middleware labels
- **Adding VPN routing**: Change network_mode and update Gluetun ports
- **Environment management**: Update .env (remind users to copy to stacks)
## Practical AI Usage Examples
### Deploying a New Service
```
User: "Add a GitLab instance to my homelab"
AI Response:
1. Creates /opt/stacks/development/docker-compose.yml
2. Configures PostgreSQL database
3. Sets up Traefik routing with Authelia
4. Adds to Homepage dashboard
5. Provides deployment commands
```
### Troubleshooting Issues
```
User: "My Traefik isn't routing to new services"
AI Response:
1. Checks Traefik configuration
2. Verifies network connectivity
3. Examines service labels
4. Provides specific fix commands
```
### Configuration Updates
```
User: "Enable 2FA for all admin services"
AI Response:
1. Updates Authelia configuration.yml
2. Modifies access control rules
3. Regenerates secrets if needed
4. Tests authentication flow
```
## AI vs Manual Management
### When to Use AI
- **New Deployments**: Service setup and configuration
- **Complex Changes**: Multi-service modifications
- **Troubleshooting**: Issue diagnosis and resolution
- **Documentation**: Understanding system architecture
- **Updates**: Version upgrades and migrations
### When to Use Manual Methods
- **Simple Tasks**: Basic Docker commands
- **Direct Access**: Container shell access
- **Performance Monitoring**: Real-time system checks
- **Emergency Recovery**: When AI access is unavailable
## Best Practices for AI Management
### 1. Clear Communication
- **Specific Requests**: "Add PostgreSQL database for Nextcloud" vs "Add database"
- **Context Provided**: Include current setup details
- **Expected Outcomes**: State what you want to achieve
### 2. Iterative Approach
- **Start Small**: Deploy one service at a time
- **Test Incrementally**: Verify each change works
- **Backup First**: Create backups before major changes
### 3. Documentation Integration
- **Reference Guides**: AI uses provided documentation
- **Update Records**: Keep change logs for troubleshooting
- **Share Knowledge**: Document custom configurations
### 4. Security Awareness
- **Review Changes**: Always check AI-generated configurations
- **Access Control**: Understand authentication implications
- **Network Security**: Verify VPN and firewall rules
## Advanced AI Features
### Template System
- **Service Templates**: Pre-configured service definitions
- **Configuration Templates**: Ready-to-use config files
- **Environment Templates**: .env file examples
### Integration Capabilities
- **Multi-Service**: Deploy complete stacks
- **Cross-Service**: Configure service interactions
- **External Services**: Proxy non-Docker services
- **Backup Integration**: Automated backup configurations
### Learning and Adaptation
- **Pattern Recognition**: Learns from previous deployments
- **Error Prevention**: Avoids common configuration mistakes
- **Optimization**: Suggests performance improvements
## Getting Help
### AI Assistant Commands
- **General Help**: "Help me with EZ-Homelab management"
- **Specific Tasks**: "How do I deploy a new service?"
- **Troubleshooting**: "Why isn't my service working?"
- **Configuration**: "How do I configure Authelia?"
### Documentation Resources
- **Copilot Instructions**: Detailed AI capabilities
- **Service Guides**: Individual service documentation
- **Troubleshooting**: Common issues and solutions
- **Quick Reference**: Command cheat sheet
### Community Support
- **GitHub Issues**: Bug reports and feature requests
- **Discussions**: Community questions and answers
- **Wiki**: Comprehensive documentation
## Future AI Enhancements
### Planned Features
- **Automated Testing**: Service health verification
- **Performance Optimization**: Resource tuning recommendations
- **Security Auditing**: Configuration security checks
- **Backup Validation**: Automated backup testing
### Integration Improvements
- **CI/CD Integration**: Automated deployment pipelines
- **Monitoring Integration**: AI-driven alerting
- **Cost Optimization**: Resource usage analysis
The EZ-Homelab's AI-first design makes complex homelab management accessible to users of all skill levels while maintaining production-ready reliability and security.

View File

@@ -1,205 +0,0 @@
# AI Management Prompts
This guide provides example prompts you can use with GitHub Copilot to manage your homelab. These prompts leverage the AI assistant's knowledge of your infrastructure to perform common tasks.
## Container and Stack Management
### Controlling Services
- "Start/Pause/Stop/Restart the media stack"
- "Start/Pause/Stop/Restart wordpress"
- "Deploy the monitoring stack"
- "Bring up the core infrastructure"
### Restarting a service vs. restarting a stack
- Restarting a container (service) does **not** pick up changes to the compose file.
- Restarting a stack runs `docker compose down && docker compose up -d`, which **does** load changes to the compose file.
- Restart the traefik/sablier container(s) to apply configuration changes.
- Restart the core stack to apply changes to the compose file.
- Homepage configuration is loaded on demand; no restart needed.
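Concretely, the two operations look like this (illustrative; run from the stack directory, e.g. `/opt/stacks/core`):

```bash
# Restart a single container: does NOT reload compose file edits
docker compose restart traefik

# Recreate the whole stack: DOES load compose file edits
docker compose down && docker compose up -d
```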
### Status Checks
> If your server keeps locking up, the likely cause is stuck background processes. In my experience, VS Code Copilot Chat in Agent mode sometimes leaves a stuck process running.
- "Show me the status of all containers"
- "Check if the media services are running"
- "List all deployed stacks"
- "Monitor container resource usage"
- "Check for and close any stuck processes hogging system resources"
## Service Configuration
### Adding New Services
- "Add Plex to my media stack"
- "Install Nextcloud for file sharing"
- "Set up Grafana for monitoring"
- "Deploy Home Assistant for automation"
### Modifying Existing Services
- "Change the port for my Plex service"
- "Update the domain for Authelia"
- "Configure VPN routing for qBittorrent"
- "Add SSL certificate for new service"
### Network Configuration
- "Configure Traefik routing for my new service"
- "Set up Authelia protection for admin services"
- "Create external proxy for Raspberry Pi service"
- "Configure Sablier lazy loading"
## Troubleshooting
### Log Analysis
- "Check logs for the media stack"
- "Analyze errors in the monitoring services"
- "Review Traefik routing issues"
- "Examine Authelia authentication problems"
### Performance Issues
- "Monitor resource usage for containers"
- "Check for memory leaks in services"
- "Analyze network connectivity issues"
- "Review disk space usage"
### Configuration Problems
- "Validate Docker Compose syntax"
- "Check environment variable configuration"
- "Verify network connectivity between services"
- "Test SSL certificate validity"
### System Health Checks
- "Check for stuck processes consuming resources"
- "Monitor system load and identify bottlenecks"
- "Verify file permissions for Docker volumes"
- "Check Docker daemon status and logs"
- "Validate network connectivity and DNS resolution"
- "Review system resource limits and quotas"
### Stuck Process Investigation
- "Find and terminate any stuck AI assistant processes"
- "Check for runaway Docker containers"
- "Monitor background jobs and cleanup if needed"
- "Review system process table for anomalies"
- "Check VS Code extension processes that may be hanging"
### File Permission Issues
- "Fix permission problems with Docker volumes"
- "Set correct ownership for service configuration files"
- "Resolve access denied errors for mounted directories"
- "Validate PUID/PGID settings match system users"
- "Check and repair file system permissions recursively"
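A typical repair for these issues might look like the following sketch (paths are hypothetical; `1000:1000` matches the PUID/PGID convention used by the LinuxServer.io images):

```bash
# Hypothetical example: give the container user (PUID/PGID 1000) ownership of its config
sudo chown -R 1000:1000 /opt/stacks/media/jellyfin/config
# Make directories traversable and files readable by the service
sudo find /opt/stacks/media/jellyfin/config -type d -exec chmod 755 {} +
```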
## Backup and Recovery
### Creating Backups
- "Set up backup for my media files"
- "Configure automated backups for databases"
- "Create backup strategy for configurations"
- "Schedule regular system backups"
### Restoring Services
- "Restore from backup after failure"
- "Recover deleted configuration files"
- "Rebuild corrupted database"
- "Restore service from snapshot"
## Monitoring and Maintenance
### System Monitoring
- "Set up Grafana dashboards"
- "Configure Prometheus metrics"
- "Create uptime monitoring"
- "Set up log aggregation"
### Updates and Upgrades
- "Update all containers to latest versions"
- "Upgrade specific service to new version"
- "Check for security updates"
- "Apply system patches"
## Security Management
### Access Control
- "Add new user to Authelia"
- "Configure two-factor authentication"
- "Set up access policies"
- "Manage user permissions"
### SSL and Certificates
- "Renew SSL certificates"
- "Configure wildcard certificates"
- "Set up custom domains"
- "Troubleshoot certificate issues"
## Scaling and Optimization
### Resource Management
- "Optimize container resource limits"
- "Configure GPU access for services"
- "Set up load balancing"
- "Scale services horizontally"
### Storage Management
- "Configure additional storage drives"
- "Set up network storage"
- "Optimize disk usage"
- "Configure backup storage"
## Custom Configurations
### Advanced Setup
- "Create multi-server deployment"
- "Configure external service proxying"
- "Set up VPN routing for downloads"
- "Configure custom networking"
### Integration Tasks
- "Connect services to external APIs"
- "Configure webhook integrations"
- "Set up automated workflows"
- "Create custom monitoring alerts"
## Getting Help
### Documentation
- "Show me the service documentation"
- "Explain how Traefik routing works"
- "Guide me through SSL setup"
- "Help me understand Docker networking"
### Best Practices
- "Review my configuration for security"
- "Optimize my setup for performance"
- "Suggest backup improvements"
- "Recommend monitoring enhancements"
## Prompt Tips
### Be Specific
- Include service names: "Configure Plex" instead of just "configure my media service"
- Specify actions: "Add user" vs "Manage users"
- Mention locations: "In the media stack" vs "Somewhere"
### Provide Context
- "I'm getting error X when doing Y"
- "Service Z isn't starting after configuration change"
- "I need to connect service A to service B"
### Use Natural Language
- "Make my homelab more secure"
- "Help me set up backups"
- "Fix my broken service"
### Follow Up
- "That didn't work, try a different approach"
- "Show me the logs for that service"
- "Explain what that configuration does"
Remember: The AI assistant has full knowledge of your homelab architecture and can perform complex tasks. Start with simple requests and build up to more complex operations as you become comfortable with the system.

View File

@@ -1,179 +0,0 @@
# AI-Assisted VS Code Setup
This guide shows you how to use VS Code with GitHub Copilot on your local PC to set up and manage your homelab server remotely. The AI assistant will help you configure your server from scratch.
## Prerequisites
- VS Code installed on your local PC
- GitHub Copilot extension installed
- SSH access to your homelab server (fresh Ubuntu/Debian install)
- Basic familiarity with VS Code
## Step 1: Install Required Extensions
1. Open VS Code on your local PC
2. Go to Extensions (Ctrl+Shift+X)
3. Search for and install:
- **GitHub Copilot** (by GitHub) - AI assistant
- **Remote SSH** (by Microsoft) - for connecting to your server
- **Docker** (by Microsoft) - for Docker support
- **YAML** (by Red Hat) - for editing compose files
## Step 2: Connect to Your Homelab Server
1. In VS Code, open the Command Palette (Ctrl+Shift+P)
2. Type "Remote-SSH: Connect to Host..."
3. Enter your server's SSH details: `ssh user@your-server-ip`
4. Authenticate with your password or SSH key
## Step 3: Use AI to Set Up Your Server
With VS Code connected to your server, you can now use GitHub Copilot to guide you through the entire setup process:
### Initial Server Setup
- **Clone repository**: Ask Copilot "Help me clone the AI-Homelab repository"
- **Configure environment**: "Guide me through setting up the .env file"
- **Run setup scripts**: "Walk me through running the setup-homelab.sh script"
- **Deploy services**: "Help me run the deployment script"
### AI-Assisted Configuration
The AI will help you:
- Generate secure passwords and API keys
- Configure domain settings and SSL certificates
- Set up user accounts and permissions
- Troubleshoot any issues that arise
## Step 4: Open the AI-Homelab Repository
1. Once connected to your server, open the terminal in VS Code (Ctrl+`)
2. Navigate to your repository:
```bash
cd ~/AI-Homelab
```
3. Open the folder in VS Code: `File > Open Folder` and select `/home/your-user/AI-Homelab`
## Step 5: Enable GitHub Copilot
1. Make sure you're signed into GitHub in VS Code
2. GitHub Copilot should activate automatically
3. You can test it by opening a file and typing a comment or code
## How Services Get Added
### The AI Way (Recommended)
1. **Tell the AI**: "Add Plex to my media stack"
2. **AI Creates**: Docker Compose file with proper configuration
3. **AI Configures**: Traefik routing, Authelia protection, resource limits
4. **AI Deploys**: Service goes live with HTTPS and SSO
5. **AI Updates**: Homepage dashboard automatically
### Manual Way
1. **Find Service**: Choose from 50+ pre-configured services
2. **Upload to Dockge**: Use the web interface
3. **Configure**: Set environment variables and volumes
4. **Deploy**: Click deploy and wait
5. **Access**: Service is immediately available at `https://servicename.yourdomain.duckdns.org`
**Note**: If your core stack (Traefik, Authelia) is on a separate server, you'll need to:
- Configure external routing in Traefik's dynamic configuration
- Set up Sablier lazy loading rules for the remote server
- Ensure proper network connectivity between servers
## Storage Strategy
### Configuration Files
- **Location**: `/opt/stacks/stack-name/config/`
- **Purpose**: Service settings, databases, user data
- **Backup**: Included in automatic backups
### Media & Large Data
- **Location**: `/mnt/media/`, `/mnt/downloads/`
- **Purpose**: Movies, TV shows, music, downloads
- **Performance**: Direct mounted drives for speed
- **Important**: You'll need additional physical drives mounted at these locations for media storage
## AI Features
### VS Code Integration
- **Copilot Chat**: Natural language commands for infrastructure management
- **File Editing**: AI modifies Docker Compose files, configuration YAML
- **Troubleshooting**: AI analyzes logs and suggests fixes
- **Documentation**: AI keeps docs synchronized with deployed services
- **Direct File Access**: You can view and modify files directly in VS Code
- **Manual Changes**: Tell the AI to check your manual changes: "Review the changes I just made to the compose file"
## Scaling & Customization
### Adding Services
- **Pre-built**: 50+ services ready to deploy
- **Custom**: AI can create configurations for any Docker service
- **External**: Proxy services on other devices (Raspberry Pi, NAS)
### Deploying Additional Servers
You can deploy multiple servers for different purposes:
#### Core Stack on Separate Server
- **Purpose**: Dedicated server for reverse proxy, authentication, and VPN
- **Deployment**: Deploy core stack first on the dedicated server
- **Impact on Other Servers**:
- **Traefik**: Configure external routing for services on other servers
- **Sablier**: Set up lazy loading rules for remote services
- **Compose Files**: Services reference the core server's Traefik network externally
#### Media Server Example
- **Server 1**: Core stack (Traefik, Authelia, Gluetun)
- **Server 2**: Media services (Plex, Sonarr, Radarr)
- **Configuration**: Media server compose files connect to core server's networks
## Port Forwarding Requirements
**Important**: You must forward ports 80 and 443 from your router to your homelab server for SSL certificates and web access to work.
### Router Configuration
1. Log into your router's admin interface
2. Find the port forwarding section
3. Forward:
- **Port 80** (HTTP) → Your server's IP address
- **Port 443** (HTTPS) → Your server's IP address
4. Save changes and test connectivity
### Why This Matters
- **SSL Certificates**: Let's Encrypt needs port 80 for domain validation
- **HTTPS Access**: All services use port 443 for secure connections
- **Wildcard Certificates**: Enables `*.yourdomain.duckdns.org` subdomains
## Best Practices
- **Always backup** before making changes
- **Test in isolation** - deploy single services first
- **Use the AI** for complex configurations
- **Read the documentation** linked in responses
- **Validate YAML** before deploying: `docker compose config`
## Troubleshooting
### Copilot Not Working
- Check your GitHub subscription includes Copilot
- Ensure you're signed into GitHub in VS Code
- Try reloading VS Code window
### SSH Connection Issues
- Verify SSH keys are set up correctly
- Check firewall settings on your server
- Ensure SSH service is running
### AI Not Understanding Context
- Open the relevant files first
- Provide specific file paths
- Include error messages when troubleshooting
## Next Steps
Once set up, you can manage your entire homelab through VS Code:
- Deploy new services
- Modify configurations
- Monitor logs
- Troubleshoot issues
- Scale your infrastructure
The AI assistant follows the same patterns and conventions as your existing setup, ensuring consistency and reliability.

View File

@@ -1,256 +0,0 @@
# Authelia Customization Guide
This guide covers how to customize Authelia for your specific needs.
## Available Customization Options
### 1. Branding and Appearance
Edit `/opt/stacks/core/authelia/configuration.yml`:
```yaml
# Custom logo and branding
theme: dark # Options: light, dark, grey, auto
# No built-in web UI for configuration
# All settings managed via YAML files
```
### 2. User Management
Users are managed in `/opt/stacks/core/authelia/users_database.yml`:
```yaml
users:
  username:
    displayname: "Display Name"
    password: "$argon2id$v=19$m=65536..." # generated with authelia crypto hash generate
    email: user@example.com
    groups:
      - admins
      - users
```
Generate password hash:
```bash
docker run --rm authelia/authelia:4.37 authelia crypto hash generate argon2 --password 'yourpassword'
```
### 3. Access Control Rules
Customize who can access what in `configuration.yml`:
```yaml
access_control:
  default_policy: deny
  rules:
    # Public services (no auth)
    - domain:
        - "jellyfin.yourdomain.com"
        - "plex.yourdomain.com"
      policy: bypass
    # Admin only services
    - domain:
        - "dockge.yourdomain.com"
        - "portainer.yourdomain.com"
      policy: two_factor
      subject:
        - "group:admins"
    # All authenticated users
    - domain: "*.yourdomain.com"
      policy: one_factor
```
### 4. Two-Factor Authentication (2FA)
- TOTP (Time-based One-Time Password) via apps like Google Authenticator, Authy
- Configure in `configuration.yml` under `totp:` section
- Per-user enrollment via Authelia UI at `https://auth.${DOMAIN}`
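A minimal `totp` section in `configuration.yml` might look like this sketch (key names per the Authelia docs; the values shown are common defaults, verify against your version):

```yaml
totp:
  issuer: yourdomain.duckdns.org  # shown in the authenticator app
  period: 30                      # seconds per code
  digits: 6
```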
### 5. Session Management
Edit `configuration.yml`:
```yaml
session:
  name: authelia_session
  expiration: 1h           # How long before re-login required
  inactivity: 5m           # Timeout after inactivity
  remember_me_duration: 1M # "Remember me" checkbox duration
```
### 6. Notification Settings
Email notifications for password resets, 2FA enrollment:
```yaml
notifier:
  smtp:
    host: smtp.gmail.com
    port: 587
    username: your-email@gmail.com
    password: app-password
    sender: authelia@yourdomain.com
```
## No Web UI for Configuration
⚠️ **Important**: Authelia does **not** have a configuration web UI. All configuration is done via YAML files:
- `/opt/stacks/core/authelia/configuration.yml` - Main settings
- `/opt/stacks/core/authelia/users_database.yml` - User accounts
This is **by design** and makes Authelia well suited to AI management and a security-first approach:
- AI can read and modify YAML files
- Version control friendly
- No UI clicks required
- Infrastructure as code
- Secure by default
**Web UI Available For:**
- Login page: `https://auth.${DOMAIN}`
- User profile: Change password, enroll 2FA
- Device enrollment: Manage trusted devices
## Alternative with Web UI: Authentik
If you need a web UI for user management, Authentik is included in the alternatives stack:
- **Authentik**: Full-featured SSO with web UI for user/group management
- Access at: `https://authentik.${DOMAIN}`
- Includes PostgreSQL database and Redis cache
- More complex but offers GUI-based configuration
- Deploy via Dockge when needed
**Other Alternatives:**
- **Keycloak**: Enterprise-grade SSO with web UI
- **Authelia + LDAP**: Use LDAP with web management (phpLDAPadmin, etc.)
## Quick Configuration with AI
Since all Authelia configuration is file-based, you can use the AI assistant to:
- Add/remove users
- Modify access rules
- Change session settings
- Update branding
- Enable/disable features
Just ask: "Add a new user to Authelia" or "Change session timeout to 2 hours"
## Common Customizations
### Adding a New User
1. Generate password hash:
```bash
docker run --rm authelia/authelia:4.37 authelia crypto hash generate argon2 --password 'newuserpassword'
```
2. Edit `/opt/stacks/core/authelia/users_database.yml`:
```yaml
users:
  admin:
    # existing admin user...
  newuser:
    displayname: "New User"
    password: "$argon2id$v=19$m=65536..." # paste generated hash
    email: newuser@example.com
    groups:
      - users
```
3. Restart Authelia:
```bash
cd /opt/stacks/core
docker compose restart authelia
```
### Bypass SSO for Specific Service
Edit the service's Traefik labels to remove the Authelia middleware:
```yaml
# Before (SSO protected)
labels:
  - "traefik.http.routers.service.middlewares=authelia@docker"

# After (bypass SSO)
labels:
  # - "traefik.http.routers.service.middlewares=authelia@docker" # commented out
```
### Change Session Timeout
Edit `/opt/stacks/core/authelia/configuration.yml`:
```yaml
session:
  expiration: 12h # Changed from 1h to 12h
  inactivity: 30m # Changed from 5m to 30m
```
Restart Authelia to apply changes.
### Enable SMTP Notifications
Edit `/opt/stacks/core/authelia/configuration.yml`:
```yaml
notifier:
  smtp:
    host: smtp.gmail.com
    port: 587
    username: your-email@gmail.com
    password: your-app-password # Use app-specific password
    sender: authelia@yourdomain.com
    subject: "[Authelia] {title}"
```
### Create Admin-Only Access Rule
Edit `/opt/stacks/core/authelia/configuration.yml`:
```yaml
access_control:
  rules:
    # Admin-only services
    - domain:
        - "dockge.yourdomain.duckdns.org"
        - "traefik.yourdomain.duckdns.org"
        - "portainer.yourdomain.duckdns.org"
      policy: two_factor
      subject:
        - "group:admins"
    # All other services - any authenticated user
    - domain: "*.yourdomain.duckdns.org"
      policy: one_factor
```
Restart Authelia after changes.
## Troubleshooting
### User Can't Log In
1. Check password hash format in users_database.yml
2. Verify email address matches
3. Check Authelia logs: `docker logs authelia`
### 2FA Not Working
1. Ensure time sync on server: `timedatectl`
2. Check TOTP configuration in configuration.yml
3. Regenerate QR code for user
### Sessions Expire Too Quickly
Increase session expiration in configuration.yml:
```yaml
session:
  expiration: 24h
  inactivity: 1h
```
### Can't Access Specific Service
Check the access control rules; the service's domain may be caught by `default_policy: deny`.
## Additional Resources
- [Authelia Documentation](https://www.authelia.com/docs/)
- [Authelia Service Docs](service-docs/authelia.md)
- [Getting Started Guide](getting-started.md)

View File

@@ -1,112 +0,0 @@
# Automated Setup (Recommended)
For most users, the automated setup script handles everything from system preparation to deployment.
## Prerequisites
- **Fresh Debian/Ubuntu server** (or existing system)
- **Root/sudo access**
- **Internet connection**
- **Ports 80 and 443 forwarded** from your router to your server (required for SSL certificates)
- **VS Code with GitHub Copilot** (for AI assistance)
## Simple Setup
1. **Connect to your server** via SSH
>Tip: Use VS Code on your local machine to ssh
in to your server for the easiest install!
2. **Install git if needed**
```bash
sudo apt update && sudo apt upgrade -y && sudo apt install git
```
3. **Clone the repository**:
```bash
git clone https://github.com/kelinfoxy/AI-Homelab.git
cd AI-Homelab
```
4. **Configure environment**:
```bash
cp .env.example .env
nano .env # Edit with your domain and tokens
```
**Required variables in .env:**
- `DOMAIN` - Your DuckDNS domain (e.g., yourdomain.duckdns.org)
- `DUCKDNS_TOKEN` - Your DuckDNS token from [duckdns.org](https://www.duckdns.org/)
- `ACME_EMAIL` - Your email for Let's Encrypt certificates
- `SURFSHARK_USERNAME` and `SURFSHARK_PASSWORD` - If using VPN
**Note:** The `.env` file stays in the repository folder (`~/AI-Homelab/.env`). The deploy script copies it to stack directories automatically. Authelia secrets (JWT, session, encryption key) are auto-generated by the setup script; leave them with default values for now.
5. **Run the setup script:**
```bash
sudo ./scripts/setup-homelab.sh
```
The script will:
- Update system packages
- Install Docker Engine + Compose V2 (if needed)
- Configure user groups (docker, sudo)
- Set up firewall (UFW)
- Enable SSH server
- **Generate Authelia secrets** (JWT, session, encryption key)
- **Prompt for admin username, password, and email**
- **Generate argon2id password hash** (30-60 seconds)
- Create `/opt/stacks/` directory structure
- Set up Docker networks (homelab, traefik, dockerproxy, media)
- Detect NVIDIA GPU and offer driver installation
**Important:** If NVIDIA drivers were installed, reboot your system now before continuing.
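The secret-generation step can also be done by hand; a plausible equivalent of what the script does (its exact method is not shown here) is:

```shell
# Generate three random secrets, one per Authelia requirement (assumed approach)
jwt_secret=$(openssl rand -hex 32)
session_secret=$(openssl rand -hex 32)
storage_encryption_key=$(openssl rand -hex 32)
echo "AUTHELIA_JWT_SECRET=$jwt_secret"
```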
6. **Deploy homelab**:
```bash
sudo ./scripts/deploy-homelab.sh
```
**The deploy script automatically:**
- Creates Docker networks
- Configures Traefik with your email and domain
- **Obtains wildcard SSL certificate** (*.yourdomain.duckdns.org) via DNS challenge
- Deploys core stack (DuckDNS, Traefik, Authelia, Gluetun)
- Deploys infrastructure stack (Dockge, Pi-hole, monitoring)
- Deploys dashboards stack (Homepage, Homarr)
- Opens Dockge in your browser
**Note:** Certificate generation may take 2-5 minutes. All services will use the wildcard certificate automatically.
**Login credentials:**
- Username: `admin` (default username - or the custom username you specified during setup)
- Password: The secure password you created when prompted by the setup script
**That's it!** Your homelab is ready.
**Access Dockge at `https://dockge.yourdomain.duckdns.org`**
## What the Setup Script Does
The `setup-homelab.sh` script is a comprehensive first-run configuration tool:
**System Preparation:**
- ✅ Pre-flight checks (internet connectivity, disk space 50GB+)
- ✅ Updates system packages
- ✅ Installs required packages (git, curl, etc.)
- ✅ Installs Docker Engine + Compose V2 (if not present)
- ✅ Configures user permissions (docker, sudo groups)
- ✅ Sets up firewall (UFW with SSH, HTTP, HTTPS)
- ✅ Enables SSH server
**Authelia Configuration (Interactive):**
- ✅ Generates three cryptographic secrets (JWT, session, encryption)
- ✅ Prompts for admin username (default: admin)
- ✅ Prompts for secure password with confirmation
- ✅ Prompts for admin email address
- ✅ Generates argon2id password hash using Docker (30-60s process)
- ✅ Validates Docker is available before password operations
- ✅ Saves credentials securely for deployment script
**Infrastructure Setup:**
- ✅ Creates directory structure (`/opt/stacks/`)
- ✅ Sets up Docker networks (homelab, traefik, dockerproxy, media)
- ✅ Detects NVIDIA GPU and offers driver installation
**Safety Features:**
- Skips completed steps (safe to re-run)
- Timeout handling (60s for Docker operations)
- Comprehensive error messages with troubleshooting hints
- Exit on critical failures with clear next steps
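The secret-generation step boils down to three `openssl rand` calls. A minimal sketch (this assumes `openssl` is installed; the real script also writes the values into `.env`):

```bash
# Each secret is 64 random bytes, rendered as 128 hex characters
jwt_secret=$(openssl rand -hex 64)
session_secret=$(openssl rand -hex 64)
encryption_key=$(openssl rand -hex 64)
echo "JWT secret length: ${#jwt_secret}"
```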

# Comprehensive Backup Guide: Restic + Backrest
This guide covers your configured setup with **Restic** (the backup engine) and **Backrest** (the web UI for managing Restic). It includes current info (as of January 2026), configurations, examples, and best practices. Your setup backs up Docker volumes, service configs, and databases with automated stop/start hooks.
## Overview
### What is Restic?
Restic (latest: v0.18.1) is a modern, open-source backup program written in Go. It provides:
- **Deduplication**: Efficiently stores only changed data.
- **Encryption**: All data is encrypted with AES-256.
- **Snapshots**: Point-in-time backups with metadata.
- **Cross-Platform**: Works on Linux, macOS, Windows.
- **Backends**: Supports local, SFTP, S3, etc.
- **Features**: Compression, locking, pruning, mounting snapshots.
Restic is command-line based; Backrest provides a web UI.
### What is Backrest?
Backrest (latest: v1.10.1) is a web-based UI for Restic, built with Go and SvelteKit. It simplifies Restic management:
- **Web Interface**: Create repos, plans, and monitor backups.
- **Automation**: Scheduled backups, hooks (pre/post commands).
- **Integration**: Runs Restic under the hood.
- **Features**: Multi-repo support, retention policies, notifications.
Your Backrest instance is containerized, with access to Docker volumes and host paths.
## Your Current Setup
### Backrest Configuration
- **Container**: `garethgeorge/backrest:latest` on port 9898.
- **Mounts**:
- `/var/lib/docker/volumes:/docker_volumes` (all Docker volumes).
- `/opt/stacks:/opt/stacks` (service configs and data).
- `/var/run/docker.sock` (for Docker commands in hooks).
- **Environment**:
- `BACKREST_DATA=/data` (internal data).
- `BACKREST_CONFIG=/config/config.json` (plans/repos).
- `BACKREST_UI_CONFIG={"baseURL": "https://backrest.kelin-casa.duckdns.org"}` (UI base URL).
- **Repos**: `jarvis-restic-repo` (your main repo).
- **Plans**:
- **jarvis-database-backup**: Backs up `_mysql` volumes with DB stop/start hooks.
- **Other Plans**: For volumes/configs (e.g., individual paths like `/docker_volumes/gitea_data/_data`).
### Key Features in Use
- **Hooks**: Pre-backup stops DBs, post-backup starts them.
- **Retention**: Keep last 30 snapshots.
- **Schedule**: Daily backups.
- **Paths**: Selective (e.g., DB volumes, service data).
## Step-by-Step Guide
### 1. Accessing Backrest
- URL: `https://backrest.kelin-casa.duckdns.org`
- Auth: Via Authelia (enforced by your reverse proxy).
- UI Sections: Repos, Plans, Logs.
### 2. Managing Repos
Repos store backups. Yours is `jarvis-restic-repo`.
#### Create a New Repo
1. Go to **Repos** > **Add Repo**.
2. **Name**: `my-new-repo`.
3. **Storage**: Choose backend (e.g., Local: `/data/repos/my-new-repo`).
4. **Password**: Set a strong password (Restic encrypts with this).
5. **Initialize**: Backrest runs `restic init`.
#### Example Config (JSON)
```json
{
"id": "jarvis-restic-repo",
"uri": "/repos/jarvis-restic-repo",
"password": "your-secure-password",
"env": {},
"flags": []
}
```
#### Best Practices
- Use strong passwords.
- For remote: Use SFTP/S3 for offsite backups.
- Test access: `restic snapshots --repo /repos/repo-name`.
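For offsite copies, a remote repo entry might look like this (hypothetical SFTP host and path; adjust to your NAS or cloud bucket):

```json
{
  "id": "offsite-repo",
  "uri": "sftp:backup@nas.local:/backups/restic",
  "password": "a-different-strong-password",
  "env": {},
  "flags": []
}
```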
### 3. Creating Backup Plans
Plans define what/when/how to back up.
#### Your DB Plan Example
```json
{
"id": "jarvis-database-backup",
"repo": "jarvis-restic-repo",
"paths": [
"/docker_volumes/bookstack_mysql/_data",
"/docker_volumes/gitea_mysql/_data",
"/docker_volumes/mediawiki_mysql/_data",
"/docker_volumes/nextcloud_mysql/_data",
"/docker_volumes/formio_mysql/_data"
],
"excludes": [],
"iexcludes": [],
"schedule": {
"clock": "CLOCK_LOCAL",
"maxFrequencyDays": 1
},
"backup_flags": [],
"hooks": [
{
"actionCommand": {
"command": "for vol in $(docker volume ls -q | grep '_mysql$'); do docker ps -q --filter volume=$vol | xargs -r docker stop || true; done"
},
"conditions": ["CONDITION_SNAPSHOT_START"],
"onError": "ON_ERROR_CANCEL"
},
{
"actionCommand": {
"command": "for vol in $(docker volume ls -q | grep '_mysql$'); do docker ps -a -q --filter volume=$vol | xargs -r docker start || true; done"
},
"conditions": ["CONDITION_SNAPSHOT_END"],
"onError": "ON_ERROR_RETRY_1MINUTE"
}
],
"retention": {
"policyKeepLastN": 30
}
}
```
#### Create a New Plan
1. Go to **Plans** > **Add Plan**.
2. **Repo**: Select `jarvis-restic-repo`.
3. **Paths**: Add directories (e.g., `/opt/stacks/homepage/config`).
4. **Schedule**: Set frequency (e.g., daily).
5. **Hooks**: Add pre/post commands (e.g., for non-DB backups).
6. **Retention**: Keep last N snapshots.
7. **Save & Run**: Test with **Run Backup Now**.
#### Example: Volume Backup Plan
```json
{
"id": "volumes-backup",
"repo": "jarvis-restic-repo",
"paths": [
"/docker_volumes/gitea_data/_data",
"/docker_volumes/nextcloud_html/_data"
],
"schedule": {"maxFrequencyDays": 1},
"retention": {"policyKeepLastN": 14}
}
```
### 4. Running & Monitoring Backups
- **Manual**: Plans > Select plan > **Run Backup**.
- **Scheduled**: Runs automatically per schedule.
- **Logs**: Check **Logs** tab for output/errors.
- **Status**: View snapshots in repo details.
#### Restic Commands (via Backrest)
Backrest runs Restic under the hood. Examples:
- Backup: `restic backup /path --repo /repo`
- List: `restic snapshots --repo /repo`
- Restore: `restic restore latest --repo /repo --target /restore/path`
- Prune: `restic forget --keep-last 30 --repo /repo`
### 5. Restoring Data
1. Go to **Repos** > Select repo > **Snapshots**.
2. Choose snapshot > **Restore**.
3. Select paths/files > Target directory.
4. Run restore.
#### Example Restore Command
```bash
restic restore latest --repo /repos/jarvis-restic-repo --target /tmp/restore --path /docker_volumes/bookstack_mysql/_data
```
### 6. Advanced Features
- **Excludes**: Add glob patterns (e.g., `*.log`) to skip files.
- **Compression**: Enable in backup flags: `--compression max`.
- **Notifications**: Configure webhooks in Backrest settings.
- **Mounting**: `restic mount /mnt/backup --repo /repo` to browse snapshots as a filesystem (requires FUSE).
- **Forget/Prune**: Auto-managed via retention, or manual: `restic forget --keep-daily 7 --repo /repo`.
### 7. Best Practices
- **Security**: Use strong repo passwords. Limit Backrest access.
- **Testing**: Regularly test restores.
- **Retention**: Balance storage (e.g., keep 30 daily).
- **Monitoring**: Check logs for failures.
- **Offsite**: Add a remote repo for disaster recovery.
- **Performance**: Schedule during low-usage times.
- **Updates**: Keep Restic/Backrest updated (current versions noted).
### 8. Troubleshooting
- **Hook Failures**: Check Docker socket access; ensure the Docker CLI is available inside the Backrest container.
- **Permissions**: Bind mounts may need `chown` to match container user.
- **Space**: Monitor repo size; prune old snapshots.
- **Errors**: Logs show Restic output; search for "exit status" codes.
This covers your setup. For more, see the [Restic Docs](https://restic.net/) and [Backrest GitHub](https://github.com/garethgeorge/backrest).

# AI Homelab Management Assistant
You are an AI assistant for the **AI-Homelab** project - a production-ready Docker homelab infrastructure managed through GitHub Copilot in VS Code. This system deploys 50+ services with automated SSL, SSO authentication, and VPN routing using a file-based, AI-manageable architecture.
## Project Architecture
### Core Infrastructure (Deploy First)
The **core stack** (`/opt/stacks/core/`) contains essential services that must run before others:
- **DuckDNS**: Dynamic DNS with Let's Encrypt DNS challenge for wildcard SSL (`*.yourdomain.duckdns.org`)
- **Traefik**: Reverse proxy with automatic HTTPS termination (labels-based routing, file provider for external hosts)
- **Authelia**: SSO authentication (auto-generated secrets, file-based user database)
- **Gluetun**: VPN client (Surfshark WireGuard/OpenVPN) for download services
- **Sablier**: Lazy loading service for on-demand container startup (saves resources)
### Deployment Model
- **Two-script setup**: `setup-homelab.sh` (system prep, Docker install, secrets generation) → `deploy-homelab.sh` (automated deployment)
- **Dockge-based management**: All stacks in `/opt/stacks/`, managed via web UI at `dockge.${DOMAIN}`
- **Automated workflows**: Scripts create directories, configure networks, deploy stacks, wait for health checks
- **Repository location**: `/home/kelin/AI-Homelab/` (templates in `docker-compose/`, docs in `docs/`)
### File Structure Standards
```
docker-compose/
├── core.yml # Main compose files (legacy)
├── infrastructure.yml
├── media.yml
└── core/ # New organized structure
├── docker-compose.yml # Individual stack configs
├── authelia/
├── duckdns/
└── traefik/
/opt/stacks/ # Runtime location (created by scripts)
├── core/ # DuckDNS, Traefik, Authelia, Gluetun (deploy FIRST)
├── infrastructure/ # Dockge, Pi-hole, monitoring tools
├── dashboards/ # Homepage (AI-configured), Homarr
├── media/ # Plex, Jellyfin, Calibre-web, qBittorrent
├── media-management/ # *arr services (Sonarr, Radarr, etc.)
├── homeassistant/ # Home Assistant, Node-RED, Zigbee2MQTT
├── productivity/ # Nextcloud, Gitea, Bookstack
├── monitoring/ # Grafana, Prometheus, Uptime Kuma
└── utilities/ # Duplicati, FreshRSS, Wallabag
```
### Network Architecture
- **traefik-network**: Primary network for all services behind Traefik
- **Gluetun network mode**: Download clients use `network_mode: "service:gluetun"` for VPN routing
- **Port mapping**: Only core services expose ports (80/443 for Traefik); others route via Traefik labels
## Critical Operational Principles
### 1. Security-First SSO Strategy
- **Default stance**: ALL services start with Authelia middleware enabled
- **Bypass exceptions**: Only Plex and Jellyfin (for device/app compatibility)
- **Disabling SSO**: Comment (don't delete) the middleware line: `# - "traefik.http.routers.SERVICE.middlewares=authelia@docker"`
- **Rationale**: Security by default; users explicitly opt-out for specific services
### 2. Traefik Label Patterns
Standard routing configuration for new services:
```yaml
labels:
- "traefik.enable=true"
- "traefik.http.routers.SERVICE.rule=Host(`SERVICE.${DOMAIN}`)"
- "traefik.http.routers.SERVICE.entrypoints=websecure"
- "traefik.http.routers.SERVICE.tls.certresolver=letsencrypt" # Uses wildcard cert
- "traefik.http.routers.SERVICE.middlewares=authelia@docker" # SSO protection (comment out to disable)
- "traefik.http.services.SERVICE.loadbalancer.server.port=PORT" # If not default
- "x-dockge.url=https://SERVICE.${DOMAIN}" # Service discovery in Dockge
# Optional: Sablier lazy loading (comment out to disable)
# - "sablier.enable=true"
# - "sablier.group=core-SERVICE"
# - "sablier.start-on-demand=true"
```
### 3. Resource Management
Apply resource limits to prevent resource exhaustion:
```yaml
deploy:
resources:
limits:
cpus: '2.0' # Max CPU cores
memory: 4G # Max memory
pids: 1024 # Max processes
reservations:
cpus: '0.5' # Guaranteed CPU
memory: 1G # Guaranteed memory
```
### 4. Storage Strategy
- **Configs**: Bind mount `./service/config:/config` relative to stack directory
- **Small data**: Named volumes (databases, app data <50GB)
- **Large data**: External mounts `/mnt/media`, `/mnt/downloads` (user must configure)
- **Secrets**: `.env` files in stack directories (auto-copied from `~/EZ-Homelab/.env`)
### 5. LinuxServer.io Preference
- Use `lscr.io/linuxserver/*` images when available (PUID/PGID support for permissions)
- Standard environment: `PUID=1000`, `PGID=1000`, `TZ=${TZ}`
### 6. External Host Proxying
Proxy non-Docker services (Raspberry Pi, NAS) via Traefik file provider:
- Create routes in `/opt/stacks/core/traefik/dynamic/external.yml`
- Example pattern documented in `docs/proxying-external-hosts.md`
- AI can manage these YAML files directly
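A minimal sketch of such a file-provider route (the hostname `pi.yourdomain.duckdns.org` and LAN IP are hypothetical; follow the patterns in `docs/proxying-external-hosts.md` for real entries):

```yaml
http:
  routers:
    pi-router:
      rule: "Host(`pi.yourdomain.duckdns.org`)"
      entryPoints:
        - websecure
      service: pi-service
      middlewares:
        - authelia@docker
      tls:
        certResolver: letsencrypt
  services:
    pi-service:
      loadBalancer:
        servers:
          - url: "http://192.168.1.50:80"
```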
## Developer Workflows
### First-Time Deployment
```bash
cd ~/AI-Homelab
sudo ./scripts/setup-homelab.sh # System prep, Docker install, Authelia secrets
# Reboot if NVIDIA drivers installed
sudo ./scripts/deploy-homelab.sh # Deploy core+infrastructure stacks, open Dockge
```
### Managing Services via Scripts
- **setup-homelab.sh**: Idempotent system preparation (skips completed steps, runs on bare Debian)
- Steps: Update system → Install Docker → Configure firewall → Generate Authelia secrets → Create directories/networks → NVIDIA driver detection
- Auto-generates: JWT secret (64 hex), session secret (64 hex), encryption key (64 hex), admin password hash
- Creates `homelab-network` and `traefik-network` Docker networks
- **deploy-homelab.sh**: Automated stack deployment (requires `.env` configured first)
- Steps: Validate prerequisites → Create directories → Deploy core → Deploy infrastructure → Deploy dashboards → Prepare additional stacks → Wait for Dockge
- Copies `.env` to `/opt/stacks/core/.env` and `/opt/stacks/infrastructure/.env`
- Waits for service health checks before proceeding
### Testing Changes
```bash
# Test in isolation before modifying stacks
docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi # GPU test
docker compose -f docker-compose.yml config # Validate YAML syntax
docker compose -f docker-compose.yml up -d SERVICE # Deploy single service
docker compose logs -f SERVICE # Monitor logs
```
### Common Operations
```bash
cd /opt/stacks/STACK_NAME
docker compose up -d # Start stack
docker compose restart SERVICE # Restart service
docker compose logs -f SERVICE # Tail logs
docker compose pull && docker compose up -d # Update images
```
## Creating a New Docker Service
### Service Definition Template
```yaml
services:
service-name:
image: linuxserver/service:latest # Pin versions in production; prefer LinuxServer.io
container_name: service-name
restart: unless-stopped
networks:
- traefik-network
volumes:
- ./service-name/config:/config # Config in stack directory
- service-data:/data # Named volume for persistent data
# Large data on separate drives:
# - /mnt/media:/media
# - /mnt/downloads:/downloads
environment:
- PUID=${PUID:-1000}
- PGID=${PGID:-1000}
- TZ=${TZ}
deploy: # Resource limits (recommended)
resources:
limits:
cpus: '1.0'
memory: 1G
reservations:
cpus: '0.25'
memory: 256M
labels:
- "traefik.enable=true"
- "traefik.http.routers.service-name.rule=Host(`service.${DOMAIN}`)"
- "traefik.http.routers.service-name.entrypoints=websecure"
- "traefik.http.routers.service-name.tls.certresolver=letsencrypt"
- "traefik.http.routers.service-name.middlewares=authelia@docker" # SSO enabled by default
- "traefik.http.services.service-name.loadbalancer.server.port=8080" # If non-standard port
- "x-dockge.url=https://service.${DOMAIN}" # Service discovery
- "homelab.category=category-name"
- "homelab.description=Service description"
volumes:
service-data:
driver: local
networks:
traefik-network:
external: true
```
### VPN-Routed Service (Downloads)
Route through Gluetun for VPN protection:
```yaml
services:
# Gluetun already running in core stack
qbittorrent:
image: lscr.io/linuxserver/qbittorrent:latest
container_name: qbittorrent
network_mode: "service:gluetun" # Routes through VPN
depends_on:
- gluetun
volumes:
- ./qbittorrent/config:/config
- /mnt/downloads:/downloads
environment:
- PUID=1000
- PGID=1000
- TZ=${TZ}
# No ports needed - mapped in Gluetun service
# No Traefik labels - access via Gluetun's network
```
Add ports to Gluetun in core stack:
```yaml
gluetun:
ports:
- 8080:8080 # qBittorrent WebUI
```
### Authelia Bypass Example (Jellyfin)
```yaml
labels:
- "traefik.enable=true"
- "traefik.http.routers.jellyfin.rule=Host(`jellyfin.${DOMAIN}`)"
- "traefik.http.routers.jellyfin.entrypoints=websecure"
- "traefik.http.routers.jellyfin.tls.certresolver=letsencrypt"
# NO authelia middleware - direct access for apps/devices
- "x-dockge.url=https://jellyfin.${DOMAIN}"
# Optional: Sablier lazy loading (uncomment to enable)
# - "sablier.enable=true"
# - "sablier.group=media-jellyfin"
# - "sablier.start-on-demand=true"
```
## Modifying Existing Services
### Safe Modification Process
1. **Read entire compose file** - understand dependencies (networks, volumes, depends_on)
2. **Check for impacts** - search for service references across other compose files
3. **Validate YAML** - `docker compose config` before deploying
4. **Test in place** - restart single service: `docker compose up -d SERVICE`
5. **Monitor logs** - `docker compose logs -f SERVICE` to verify functionality
### Common Modifications
- **Toggle SSO**: Comment/uncomment `middlewares=authelia@docker` label
- **Toggle lazy loading**: Comment/uncomment Sablier labels (`sablier.enable`, `sablier.group`, `sablier.start-on-demand`)
- **Change port**: Update `loadbalancer.server.port` label
- **Add VPN routing**: Change to `network_mode: "service:gluetun"`, map ports in Gluetun
- **Update subdomain**: Modify `Host()` rule in Traefik labels
- **Environment changes**: Update in `.env`, redeploy: `docker compose up -d`
## Project-Specific Conventions
### Why Traefik vs Nginx Proxy Manager
- **File-based configuration**: AI can modify labels/YAML directly (no web UI clicks)
- **Docker label discovery**: Services auto-register routes when deployed
- **Let's Encrypt automation**: Wildcard cert via DuckDNS DNS challenge (single cert for all services)
- **Dynamic reloading**: Changes apply without container restarts
### Authelia Password Generation
Secrets auto-generated by `setup-homelab.sh`:
- JWT secret: `openssl rand -hex 64`
- Session secret: `openssl rand -hex 64`
- Encryption key: `openssl rand -hex 64`
- Admin password: Hashed with `docker run authelia/authelia:latest authelia crypto hash generate argon2`
- Stored in `.env` file, never committed to git
### DuckDNS Wildcard Certificate
- **Single certificate**: `*.yourdomain.duckdns.org` covers all subdomains
- **DNS challenge**: Traefik uses DuckDNS token for Let's Encrypt validation
- **Certificate storage**: `/opt/stacks/core/traefik/acme.json` (600 permissions)
- **Renewal**: Traefik handles automatically (90-day Let's Encrypt certs)
- **Usage**: Services use `tls.certresolver=letsencrypt` label (no per-service cert requests)
### Homepage Dashboard AI Configuration
Homepage (`/opt/stacks/dashboards/`) uses dynamic variable replacement:
- Services configured in `homepage/config/services.yaml`
- URLs use **hard-coded domains** (e.g., `https://service.kelin-casa.duckdns.org`) - NO variables supported
- AI can add/remove service entries by editing YAML
- Bookmarks, widgets configured similarly in separate YAML files
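A hedged example of a `services.yaml` entry (the group name, icon, and hard-coded domain are illustrative):

```yaml
- Media:
    - Jellyfin:
        href: https://jellyfin.kelin-casa.duckdns.org
        description: Media server
        icon: jellyfin.png
```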
### Resource Limits Pattern
Apply limits to all services to prevent resource exhaustion:
- **Core services**: Low limits (DuckDNS: 0.1 CPU, 64MB RAM)
- **Web services**: Medium limits (1-2 CPU, 1-4GB RAM)
- **Media services**: High limits (2-4 CPU, 4-8GB RAM)
- **Always set reservations** for guaranteed minimum resources
### x-dockge.url Labels
Include service discovery labels for Dockge UI:
```yaml
labels:
- "x-dockge.url=https://service.${DOMAIN}" # Shows direct link in Dockge
```
## Key Documentation References
- **[Getting Started](../docs/getting-started.md)**: Step-by-step deployment guide
- **[Docker Guidelines](../docs/docker-guidelines.md)**: Comprehensive service management patterns (1000+ lines)
- **[Services Reference](../docs/services-overview.md)**: All 50+ pre-configured services
- **[Proxying External Hosts](../docs/proxying-external-hosts.md)**: Traefik file provider patterns for non-Docker services
- **[Quick Reference](../docs/quick-reference.md)**: Command cheat sheet and troubleshooting
## Pre-Deployment Safety Checks
Before deploying any service changes:
- [ ] YAML syntax valid (`docker compose config`)
- [ ] No port conflicts (check `docker ps --format "table {{.Names}}\t{{.Ports}}"`)
- [ ] Networks exist (`docker network ls | grep -E 'traefik-network|homelab-network'`)
- [ ] Volume paths correct (`/opt/stacks/` for configs, `/mnt/` for large data)
- [ ] `.env` variables populated (source stack `.env` and check `echo $DOMAIN`)
- [ ] Traefik labels complete (enable, rule, entrypoint, tls, middleware)
- [ ] SSO appropriate (default enabled, bypass only for Plex/Jellyfin)
- [ ] VPN routing configured if download service (network_mode + Gluetun ports)
- [ ] LinuxServer.io image available (check hub.docker.com/u/linuxserver)
- [ ] Resource limits set (deploy.resources section)
## Troubleshooting Common Issues
### Service Won't Start
```bash
docker compose logs SERVICE # Check error messages
docker compose config # Validate YAML syntax
docker ps -a | grep SERVICE # Check exit code
```
Common causes: Port conflict, missing `.env` variable, network not found, volume permission denied
### Traefik Not Routing
```bash
docker logs traefik | grep SERVICE # Check if route registered
curl -k https://traefik.${DOMAIN}/api/http/routers # Inspect routes (if API enabled)
```
Verify: Service on `traefik-network`, labels correctly formatted, `traefik.enable=true`, Traefik restarted after label changes
### Authelia SSO Not Prompting
Check middleware: `docker compose config | grep -A5 SERVICE | grep authelia`
Verify: Authelia container running, middleware label present, no bypass rule in `authelia/configuration.yml`
### VPN Not Working (Gluetun)
```bash
docker exec gluetun sh -c "curl -s ifconfig.me" # Check VPN IP
docker logs gluetun | grep -i wireguard # Verify connection
```
Verify: `SURFSHARK_PRIVATE_KEY` set in `.env`, service using `network_mode: "service:gluetun"`, ports mapped in Gluetun
### Wildcard Certificate Issues
```bash
docker logs traefik | grep -i certificate
ls -lh /opt/stacks/core/traefik/acme.json # Should be 600 permissions
```
Verify: DuckDNS token valid, `DUCKDNS_TOKEN` in `.env`, DNS propagation (wait 2-5 min), acme.json writable by Traefik
### Stuck Processes and Resource Issues
```bash
# Check for stuck processes
ps aux | grep -E "(copilot|vscode|node)" | grep -v grep
# Monitor system resources
top -b -n1 | head -20
# Kill stuck processes (be careful!)
kill -9 PID # Replace PID with actual process ID
# Check Docker container resource usage
docker stats --no-stream
# Clean up stopped containers
docker container prune -f
```
### File Permission Issues
```bash
# Check file ownership
ls -la /opt/stacks/stack-name/config/
# Fix permissions for Docker volumes
sudo chown -R 1000:1000 /opt/stacks/stack-name/config/
# Validate PUID/PGID in .env
echo "PUID: $PUID, PGID: $PGID"
id -u && id -g # Should match PUID/PGID
# Check volume mount permissions
docker run --rm -v /opt/stacks/stack-name/config:/test busybox ls -la /test
```
### System Health Checks
```bash
# Check Docker daemon status
sudo systemctl status docker
# Verify network connectivity
ping -c 3 google.com
# Check DNS resolution
nslookup yourdomain.duckdns.org
# Monitor disk space
df -h
# Check system load
uptime && cat /proc/loadavg
```
## AI Management Capabilities
You can manage this homelab by:
- **Creating services**: Generate compose files in `/opt/stacks/` with proper Traefik labels
- **Modifying routes**: Edit Traefik labels in compose files
- **Managing external hosts**: Update `/opt/stacks/core/traefik/dynamic/external.yml`
- **Configuring Homepage**: Edit `services.yaml`, `bookmarks.yaml` in homepage config
- **Toggling SSO**: Add/remove Authelia middleware labels
- **Adding VPN routing**: Change network_mode and update Gluetun ports
- **Environment management**: Update `.env` (but remind users to manually copy to stacks)
### What NOT to Do
- Never commit `.env` files to git (contain secrets)
- Don't use `docker run` for persistent services (use compose in `/opt/stacks/`)
- Don't manually request Let's Encrypt certs (Traefik handles via wildcard)
- Don't edit Authelia/Traefik config via web UI (use YAML files)
- Don't expose download clients without VPN (route through Gluetun)
## Quick Command Reference
```bash
# View all running containers
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
# Check service logs
cd /opt/stacks/STACK && docker compose logs -f SERVICE
# Restart specific service
cd /opt/stacks/STACK && docker compose restart SERVICE
# Update images and redeploy
cd /opt/stacks/STACK && docker compose pull && docker compose up -d
# Validate compose file
docker compose -f /opt/stacks/STACK/docker-compose.yml config
# Check Traefik routes
docker logs traefik | grep -i "Creating router\|Adding route"
# Check network connectivity
docker exec SERVICE ping -c 2 traefik
# View environment variables
cd /opt/stacks/STACK && docker compose config | grep -A20 "environment:"
# Test NVIDIA GPU access
docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
```
## User Context Notes
- **User**: kelin (PUID=1000, PGID=1000)
- **Repository**: `/home/kelin/AI-Homelab/`
- **Current Status**: Production-ready with comprehensive error handling and resource management
- **Recent work**: Resource limits implementation, subdirectory organization, enhanced validation
When interacting with users, prioritize **security** (SSO by default), **consistency** (follow existing patterns), and **stack-awareness** (consider service dependencies). Always explain the "why" behind architectural decisions and reference specific files/paths when describing changes.

# Core Infrastructure
## Overview
The **Core Infrastructure** stack contains the essential services that must be deployed **first** and run continuously. These services form the foundation that all other services depend on.
## Services Included
### 🦆 DuckDNS
**Purpose**: Dynamic DNS updates with wildcard SSL certificate support
- **URL**: No web interface (background service)
- **Function**: Updates DuckDNS with your IP address
- **SSL**: Enables `*.yourdomain.duckdns.org` wildcard certificates
- **Configuration**: Requires `DUCKDNS_TOKEN` and subdomain list
### 🌐 Traefik
**Purpose**: Reverse proxy with automatic HTTPS termination
- **URL**: `https://traefik.yourdomain.duckdns.org`
- **Function**: Routes all traffic to internal services
- **SSL**: Automatic Let's Encrypt certificates via DNS challenge
- **Configuration**: Label-based routing, middleware support
### 🔐 Authelia
**Purpose**: Single Sign-On (SSO) authentication service
- **URL**: `https://auth.yourdomain.duckdns.org`
- **Function**: Centralized authentication for all services
- **Features**: 2FA support, user management, access rules
- **Configuration**: File-based users, auto-generated secrets
### 🔒 Gluetun
**Purpose**: VPN client for secure download routing
- **URL**: No web interface (background service)
- **Function**: Routes download traffic through VPN
- **Providers**: Surfshark WireGuard/OpenVPN support
- **Configuration**: `SURFSHARK_PRIVATE_KEY` required
### ⚡ Sablier
**Purpose**: Lazy loading service for on-demand container startup
- **URL**: `http://sablier.yourdomain.duckdns.org:10000`
- **Function**: Starts containers only when accessed (saves resources)
- **Integration**: Traefik middleware for automatic startup
- **Configuration**: Group-based service management
## Deployment Order
### Critical Dependencies
1. **DuckDNS** must start first (DNS updates)
2. **Traefik** requires DuckDNS (SSL certificates)
3. **Authelia** requires Traefik (web interface)
4. **Gluetun** can start independently (VPN)
5. **Sablier** requires Traefik (middleware integration)
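In compose terms, this ordering can be sketched with `depends_on` (illustrative only; the shipped core stack may express ordering differently):

```yaml
services:
  traefik:
    depends_on:
      - duckdns
  authelia:
    depends_on:
      - traefik
  sablier:
    depends_on:
      - traefik
  # gluetun has no dependencies and starts independently
```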
### Automated Deployment
```bash
# Core stack deploys in correct order
cd /opt/stacks/core
docker compose up -d
```
## Network Configuration
### Required Networks
- **traefik-network**: All services connect here for routing
- **homelab-network**: Internal service communication
### Port Exposure
- **80/443**: Traefik (external access only)
- **8080**: Traefik dashboard (internal)
- **10000**: Sablier (internal)
## Security Considerations
### Authentication Flow
1. User accesses `service.yourdomain.duckdns.org`
2. Traefik routes to Authelia for authentication
3. Authelia redirects back to service after login
4. Service receives authenticated user context
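Under the hood this flow is Traefik's `forwardAuth` middleware pointing at Authelia's verify endpoint. A hedged sketch (in this setup the actual definition lives as labels on the Authelia container):

```yaml
http:
  middlewares:
    authelia:
      forwardAuth:
        address: "http://authelia:9091/api/verify?rd=https://auth.yourdomain.duckdns.org"
        trustForwardHeader: true
        authResponseHeaders:
          - "Remote-User"
          - "Remote-Groups"
          - "Remote-Name"
          - "Remote-Email"
```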
### VPN Integration
- Download services use `network_mode: "service:gluetun"`
- All torrent/Usenet traffic routes through VPN
- Prevents IP leaks and ISP throttling
### Certificate Security
- Wildcard certificate covers all subdomains
- Automatic renewal via Let's Encrypt
- DNS challenge prevents port 80 exposure
## Configuration Files
### Environment Variables (.env)
```bash
# Domain and DNS
DOMAIN=yourdomain.duckdns.org
DUCKDNS_TOKEN=your-duckdns-token
DUCKDNS_SUBDOMAINS=yourdomain
# Authelia Secrets (auto-generated)
AUTHELIA_JWT_SECRET=64-char-secret
AUTHELIA_SESSION_SECRET=64-char-secret
AUTHELIA_STORAGE_ENCRYPTION_KEY=64-char-secret
# VPN (optional)
SURFSHARK_PRIVATE_KEY=your-vpn-key
```
### Traefik Configuration
- **Static**: `traefik.yml` (entrypoints, providers)
- **Dynamic**: `dynamic/` directory (routes, middleware)
- **Certificates**: `acme.json` (auto-managed)
### Authelia Configuration
- **Users**: `users_database.yml` (user accounts)
- **Settings**: `configuration.yml` (authentication rules)
- **Secrets**: Auto-generated during setup
## Monitoring & Maintenance
### Health Checks
- **DuckDNS**: IP update verification
- **Traefik**: Route configuration validation
- **Authelia**: Authentication service status
- **Gluetun**: VPN connection status
- **Sablier**: Lazy loading functionality
### Log Locations
- **Traefik**: `/opt/stacks/core/traefik/logs/`
- **Authelia**: Container logs via `docker logs authelia`
- **Gluetun**: Connection status in container logs
### Backup Requirements
- **Configuration**: All YAML files in `/opt/stacks/core/`
- **Certificates**: `acme.json` (contains private keys)
- **User Database**: `users_database.yml` (contains hashed passwords)
## Troubleshooting
### Common Issues
#### Traefik Not Routing
```bash
# Check Traefik logs
docker logs traefik
# Verify routes
curl -k https://traefik.yourdomain.duckdns.org/api/http/routers
```
#### Authelia Authentication Failing
```bash
# Validate the main configuration
docker exec authelia authelia validate-config --config /config/configuration.yml
# Check logs for user database errors (validate-config does not cover users_database.yml)
docker logs authelia | grep -i error
```
#### VPN Connection Issues
```bash
# Check Gluetun status
docker logs gluetun
# Test VPN IP
docker exec gluetun curl -s ifconfig.me
```
#### DNS Not Updating
```bash
# Check DuckDNS logs
docker logs duckdns
# Manual update (quote the URL so the shell does not treat & as a background operator)
curl "https://www.duckdns.org/update?domains=yourdomain&token=YOUR_TOKEN&ip="
# DuckDNS responds with the literal text OK on success, KO on failure
```
## Performance Optimization
### Resource Limits
```yaml
# Recommended limits for core services
duckdns:
cpus: '0.1'
memory: 64M
traefik:
cpus: '0.5'
memory: 256M
authelia:
cpus: '0.25'
memory: 128M
gluetun:
cpus: '0.5'
memory: 256M
sablier:
cpus: '0.1'
memory: 64M
```
### Scaling Considerations
- **CPU**: Traefik may need more CPU with many services
- **Memory**: Authelia caches user sessions
- **Network**: VPN throughput affects download speeds
## Integration Points
### Service Dependencies
All other stacks depend on core infrastructure:
- **Infrastructure**: Requires Traefik for routing
- **Dashboards**: Requires Authelia for authentication
- **Media**: May use VPN routing through Gluetun
- **All Services**: Use wildcard SSL certificates
### External Access
- **Port Forwarding**: Router must forward 80/443 to server
- **Firewall**: Allow inbound 80/443 traffic
- **DNS**: DuckDNS provides dynamic DNS resolution
This core infrastructure provides a solid, secure foundation for your entire homelab ecosystem.
---

*End of `wiki/Core-Infrastructure.md`.*
# Core Stack
## The core stack contains the critical infrastructure for a homelab.
>There are always other options.
I chose these for ease of use and file-based configuration (so AI can do it all)
>For a multi server homelab, only install core services on one server.
* DuckDNS with LetsEncrypt - Free subdomains and wildcard SSL certificates
* Authelia - Single Sign On (SSO) Authentication with optional 2 Factor Authentication
* Traefik - Proxy Host Routing to access your services through your duckdns url
* Sablier - Both a container and a Traefik plugin; enables on-demand services (saving resources and power)
## DuckDNS with LetsEncrypt
Get your free subdomain at http://www.duckdns.org
>Copy your DuckDNS Token, you'll need to add that to the .env file
DuckDNS service will keep your subdomain pointing to your IP address, even if it changes.
LetsEncrypt service will generate 2 certificates, one for your duckdns subdomain, and one wildcard for the same domain.
The wildcard certificate is used by all the services. No need for an individual certificate per service.
Certificates will auto renew. This is a set it and forget service, no webui, no changes needed when adding a new service.
>It can take 2-5 minutes for the certificates to be generated and applied. Until then it will use a self signed certificate. You just need to accept the browser warnings. Once it's applied the browser warnings will go away.
## Authelia
Provides Single Sign On (SSO) Authentication for your services.
>Some services you may not want behind SSO, like Jellyfin/Plex if you watch your media through an app on a smart TV, firestick or phone.
* Optional 2 Factor Authentication
* Easy to enable/disable - comment out one line in the compose file, then bring the stack down and back up (a restart is not enough)
* Configured on a per service basis
* A public service (like WordPress) can bypass the Authelia login and let the service handle its own authentication
* Admin services (like dockge/portainer) can use Two-Factor Authentication for an extra layer of security
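These per-service decisions are expressed as access-control rules in Authelia's `configuration.yml`. A sketch (the domains are placeholders):

```yaml
access_control:
  default_policy: deny
  rules:
    # Public service: let WordPress handle its own login
    - domain: "wordpress.yourdomain.duckdns.org"
      policy: bypass
    # Admin service: require a second factor
    - domain: "dockge.yourdomain.duckdns.org"
      policy: two_factor
    # Everything else: one factor is enough
    - domain: "*.yourdomain.duckdns.org"
      policy: one_factor
```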
## Traefik
Provides proxy routing for your services so you can access them at a URL like wordpress.my-subdomain.duckdns.org
For services on a Remote Host, create a file in the traefik/dynamic folder like my-server-external-host.yml
Create one file per remote host.
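A remote-host file uses Traefik's file-provider syntax. A sketch of `my-server-external-host.yml` (the hostname and backend address are placeholders):

```yaml
http:
  routers:
    remote-app:
      rule: "Host(`remote-app.my-subdomain.duckdns.org`)"
      service: remote-app
      tls: {}
  services:
    remote-app:
      loadBalancer:
        servers:
          - url: "http://192.168.1.50:8080"
```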
## Sablier
Provides on-demand container management (start/pause/stop)
>Important: The stack must be up and the service stopped for Sablier to work.
Tip: If you have a lot of services, use a script to get the services to that state.
If a service is down and anything requests the url for that service, Sablier will start the service and redirect after a short delay (however long it takes the service to come up), usually less than 20 seconds.
Once the set inactivity period has elapsed, Sablier will pause or stop the container according to the settings.
This saves resources and electricity, allowing you to have more services installed, configured, and ready to use even if the server can't run them all simultaneously. Great for single-board computers, mini PCs, low-resource systems, and your power bill.
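The tip above can be sketched as a small helper (assumes Docker Compose v2; the stack path and service names are placeholders):

```shell
# Bring a stack up, then stop its on-demand services so Sablier can
# start them when a request arrives.
prepare_ondemand() {
  stack_dir="$1"; shift
  docker compose --project-directory "$stack_dir" up -d
  for svc in "$@"; do
    docker compose --project-directory "$stack_dir" stop "$svc"
  done
}
# Example: prepare_ondemand /opt/stacks/media jellyfin calibre-web
```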

---

*File diff suppressed because it is too large.*
# Environment Configuration Guide
This guide explains how to configure the `.env` file required for your AI-Homelab deployment.
## Prerequisites
Before running the setup scripts, you must create and configure a `.env` file in the root of the repository.
### Step 1: Copy the Example File
```bash
# Copy the example file to create your .env file
cp .env.example .env
```
This ensures you have all the required variables with proper defaults and structure.
### Step 2: Edit Your Configuration
Open the newly created `.env` file and modify the values to match your setup:
```bash
# Edit the file with your preferred editor
nano .env # or use any text editor
```
## Required Variables
The `.env` file contains the following key variables that you must configure:
```bash
# Domain Configuration
DOMAIN=yourdomain.duckdns.org
DUCKDNS_TOKEN=your-duckdns-token
DUCKDNS_SUBDOMAINS=yourdomain
# User Configuration (Linux user IDs)
PUID=1000
PGID=1000
# Default Credentials (used by multiple services)
DEFAULT_USER=admin
DEFAULT_PASSWORD=changeme
DEFAULT_EMAIL=admin@example.com
# Timezone
TZ=America/New_York
# Authelia Secrets (auto-generated by setup script)
AUTHELIA_JWT_SECRET=64-char-secret
AUTHELIA_SESSION_SECRET=64-char-secret
AUTHELIA_STORAGE_ENCRYPTION_KEY=64-char-secret
# VPN Configuration (optional, for download services)
SURFSHARK_PRIVATE_KEY=your-vpn-private-key
```
## Step-by-Step Configuration
### 1. Domain Setup
**DuckDNS Configuration:**
1. Go to [duckdns.org](https://duckdns.org)
2. Create an account and get your token
3. Choose your subdomain (e.g., `yourname`)
4. Your domain will be `yourname.duckdns.org`
```bash
DOMAIN=yourname.duckdns.org
DUCKDNS_TOKEN=your-actual-token-here
DUCKDNS_SUBDOMAINS=yourname
```
### 2. User IDs
Get your Linux user and group IDs:
```bash
# Run these commands to get your IDs
id -u # This gives you PUID
id -g # This gives you PGID
```
Typically these are both `1000` for the first user.
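To write those IDs into the file automatically, something like this works (a sketch; assumes GNU `sed` and that the `PUID=`/`PGID=` lines already exist):

```shell
# Replace the PUID/PGID values in an env file with your real IDs.
set_ids() {
  env_file="${1:-.env}"
  sed -i "s/^PUID=.*/PUID=$(id -u)/" "$env_file"
  sed -i "s/^PGID=.*/PGID=$(id -g)/" "$env_file"
}
# Example: set_ids .env
```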
### 3. Default Credentials
Set default credentials that will be used by multiple services for easier setup:
```bash
DEFAULT_USER=admin
DEFAULT_PASSWORD=changeme
DEFAULT_EMAIL=admin@example.com
```
**Why these variables?**
- Many services (Nextcloud, Grafana, databases, etc.) use similar admin credentials
- Setting defaults here makes it easier to manage and remember passwords
- You can change these to your preferred values
- Individual services will use these defaults unless overridden
### 4. Timezone
Set your timezone. Common options:
- `America/New_York`
- `America/Los_Angeles`
- `Europe/London`
- `Asia/Tokyo`
- `Australia/Sydney`
### 5. Authelia Secrets
**⚠️ IMPORTANT:** These are auto-generated by the setup script. Do NOT set them manually.
The `setup-homelab.sh` script will generate secure random secrets for:
- JWT Secret (64 hex characters)
- Session Secret (64 hex characters)
- Storage Encryption Key (64 hex characters)
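For reference only (the setup script does this for you), a 64-hex-character secret is typically produced like so — a sketch assuming `openssl` is available:

```shell
# 32 random bytes rendered as 64 hex characters.
gen_secret() { openssl rand -hex 32; }
# Example: AUTHELIA_JWT_SECRET=$(gen_secret)
```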
### 6. VPN Configuration (Optional)
If you want to route download services through VPN:
1. Get Surfshark WireGuard credentials
2. Extract the private key from your `.conf` file
3. Add to `.env`:
```bash
SURFSHARK_PRIVATE_KEY=your-private-key-here
```
## Setup Process
### Step 1: Copy the Template
```bash
# Copy the example file (this creates your .env file)
cp .env.example .env
```
### Step 2: Edit Required Variables
Open `.env` in your editor and update these essential variables:
```bash
# Required: Set your domain
DOMAIN=yourdomain.duckdns.org
DUCKDNS_TOKEN=your-actual-duckdns-token
DUCKDNS_SUBDOMAINS=yourdomain
# User Configuration (Linux user IDs)
PUID=1000
PGID=1000
# Required: Set default credentials
DEFAULT_USER=admin
DEFAULT_PASSWORD=your-secure-password
DEFAULT_EMAIL=your-email@example.com
# Required: Set your timezone
TZ=America/New_York
# Optional: Configure VPN if using download services
# Note: these are not the same credentials you use to log into surfshark.com
SURFSHARK_USERNAME=your-surfshark-username
SURFSHARK_PASSWORD=your-surfshark-password
```
### Step 3: Review and Customize
The `.env.example` file contains all available configuration options. Review and customize:
- **Default Credentials**: Used by multiple services for consistency
- **Service-Specific Settings**: API keys, database passwords, etc.
- **Directory Paths**: Adjust for your storage setup
- **Optional Features**: VPN, monitoring, development tools
## Validation
After creating your `.env` file:
```bash
# Check that all required variables are set
grep -E "^(DOMAIN|DUCKDNS_TOKEN|PUID|PGID|TZ|DEFAULT_USER|DEFAULT_PASSWORD|DEFAULT_EMAIL)=" .env
# Validate the domain format directly from the file
grep -E "^DOMAIN=.*\.duckdns\.org$" .env
```
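The checks above can be bundled into a single helper (a sketch that mirrors the required variables; pass the path to your `.env`):

```shell
# Fail with a message if a required variable is missing or DOMAIN is malformed.
check_env() {
  env_file="${1:-.env}"
  for var in DOMAIN DUCKDNS_TOKEN PUID PGID TZ DEFAULT_USER DEFAULT_PASSWORD DEFAULT_EMAIL; do
    if ! grep -q "^${var}=" "$env_file"; then
      echo "Missing: $var"; return 1
    fi
  done
  if ! grep -Eq '^DOMAIN=.*\.duckdns\.org$' "$env_file"; then
    echo "DOMAIN must end with .duckdns.org"; return 1
  fi
  echo "OK"
}
```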
## Security Notes
- Never commit `.env` files to git
- Keep your DuckDNS token secure
- The setup script generates cryptographically secure secrets for Authelia
- VPN keys should be kept confidential
## Troubleshooting
**Missing .env file:**
```
Error: .env file not found
```
→ Copy the example file: `cp .env.example .env`
**Invalid domain:**
```
Error: DOMAIN must end with .duckdns.org
```
→ Check your domain format in the `.env` file
**Missing DuckDNS token:**
```
Error: DUCKDNS_TOKEN is required
```
→ Get your token from duckdns.org and add it to `.env`
**Permission issues:**
```
Error: Cannot write to .env
```
→ Check file permissions: `chmod 600 .env`

---
# Getting Started Guide
Welcome to your AI-powered homelab! This guide will walk you through setting up your production-ready infrastructure with Dockge, Traefik, Authelia, and [50+ services](services-overview.md).
## Getting Started Checklist
- [ ] Clone this repository to your home folder
- [ ] Configure `.env` file with your domain and tokens ([see prerequisites](env-configuration.md))
- [ ] Forward ports 80 and 443 from your router to your server
- [ ] Run setup script (generates Authelia secrets and admin user) ([setup-homelab.sh](../scripts/setup-homelab.sh))
- [ ] Log out and back in for Docker group permissions
- [ ] Run deployment script (deploys all core, infrastructure & dashboard services) ([deploy-homelab.sh](../scripts/deploy-homelab.sh))
- [ ] Access Dockge web UI (`https://dockge.${DOMAIN}`)
- [ ] Set up 2FA with Authelia ([Authelia setup guide](service-docs/authelia.md))
- [ ] (optional) Deploy additional stacks as needed via Dockge ([services overview](services-overview.md))
- [ ] Configure and use VS Code with GitHub Copilot to manage the server ([AI management](.github/copilot-instructions.md) | [Example prompts](ai-management-prompts.md))
## Setup Options
Choose the setup method that works best for you:
### 🚀 Automated Setup (Recommended)
For most users, the automated scripts handle everything. See [Automated Setup Guide](automated-setup.md) for step-by-step instructions.
### 🔧 Manual Setup
If you prefer manual control or the automated script fails, see the [Manual Setup Guide](manual-setup.md) for detailed instructions.
### 🤖 AI-Assisted Setup
Learn how to use VS Code with GitHub Copilot for AI-powered homelab management. See [AI VS Code Setup](ai-vscode-setup.md).
## How It All Works
Before diving in, understand how your homelab infrastructure works together. See [How Your AI Homelab Works](how-it-works.md) for a comprehensive overview.
## SSL Certificates
Your homelab uses Let's Encrypt for automatic HTTPS certificates. See [SSL Certificates Guide](ssl-certificates.md) for details on certificate management and troubleshooting.
## What Comes Next
After setup, learn what to do with your running homelab. See [Post-Setup Guide](post-setup.md) for accessing services, customization, and maintenance.
## On-Demand Remote Services
For advanced users: Learn how to set up lazy-loading services on remote servers (like Raspberry Pi) that start automatically when accessed. See [On-Demand Remote Services](Ondemand-Remote-Services.md).

---
# EZ-Homelab Wiki
Welcome to the **EZ-Homelab Wiki** - the comprehensive source of truth for deploying and managing a production-ready homelab infrastructure with 70+ services.
[![Docker](https://img.shields.io/badge/docker-%230db7ed.svg?style=flat&logo=docker&logoColor=white)](https://docker.com)
[![Traefik](https://img.shields.io/badge/Traefik-24.0.0-24A1C6)](https://traefik.io)
[![Authelia](https://img.shields.io/badge/Authelia-4.38.0-113155)](https://www.authelia.com)
## 📖 Wiki Overview
This wiki serves as the **single source of truth** for the EZ-Homelab project, containing all documentation, guides, and reference materials needed to deploy and manage your homelab infrastructure.
### 🎯 Key Features
- **Production-Ready**: Automated SSL, SSO authentication, and VPN routing
- **AI-Manageable**: File-based architecture designed for AI assistance
- **Comprehensive**: 70+ services across 10 categories
- **Secure by Default**: Authelia SSO protection with bypass options
- **Easy Management**: Dockge web UI for visual stack management
### 🏗️ Architecture Overview
The EZ-Homelab uses a layered architecture:
1. **Core Infrastructure** (Deploy First)
- DuckDNS: Dynamic DNS with wildcard SSL
- Traefik: Reverse proxy with automatic HTTPS
- Authelia: Single Sign-On authentication
- Gluetun: VPN client for secure downloads
- Sablier: Lazy loading for resource efficiency
2. **Service Layers**
- Infrastructure: Management and monitoring tools
- Dashboards: Homepage and Homarr interfaces
- Media: Plex, Jellyfin, and automation tools
- Productivity: Nextcloud, Gitea, documentation tools
- Home Automation: Home Assistant ecosystem
- Monitoring: Grafana, Prometheus, alerting
- Development: GitLab, Jupyter, VS Code server
## 🚀 Quick Start
### Prerequisites
- Fresh Debian/Ubuntu server (or existing system)
- Root/sudo access
- Internet connection
- VS Code with GitHub Copilot (recommended)
### Automated Deployment
```bash
git clone https://github.com/kelinfoxy/EZ-Homelab.git
cd EZ-Homelab
cp .env.example .env
nano .env # Configure your domain and tokens
sudo ./scripts/setup-homelab.sh
sudo ./scripts/deploy-homelab.sh
```
**Access your homelab:**
- **Dockge**: `https://dockge.yourdomain.duckdns.org` (primary management)
- **Homepage**: `https://homepage.yourdomain.duckdns.org` (service dashboard)
- **Authelia**: `https://auth.yourdomain.duckdns.org` (SSO login)
## 📚 Documentation Structure
### 🏁 Getting Started
- [[Getting Started Guide]] - Complete setup and deployment
- [[Environment Configuration]] - Required settings and tokens
- [[Automated Setup]] - One-click deployment process
- [[Manual Setup]] - Step-by-step manual installation
- [[Post-Setup Guide]] - What to do after deployment
### 🏗️ Architecture & Design
- [[System Architecture]] - High-level component overview
- [[Network Architecture]] - Service communication patterns
- [[Security Model]] - Authentication and access control
- [[Storage Strategy]] - Data persistence and organization
- [[Docker Guidelines]] - Service management patterns
### 💾 Backup & Recovery
- [[Backup Strategy]] - Restic + Backrest comprehensive guide
- [[Backrest Service]] - Web UI backup management
### 📦 Services & Stacks
- [[Services Overview]] - All 70+ available services
- [[Core Infrastructure]] - Essential services (deploy first)
- [[Infrastructure Services]] - Management and monitoring
- [[Media Services]] - Streaming and automation
- [[Media Management]] - *Arr stack services
- [[Home Automation]] - Smart home integration
- [[Productivity Tools]] - Collaboration and organization
- [[Development Tools]] - Coding and deployment
- [[Monitoring Stack]] - Observability and alerting
- [[Utilities]] - Additional helpful services
### 🛠️ Operations & Management
- [[Quick Reference]] - Command cheat sheet
- [[Ports in Use]] - Complete port mapping reference
- [[Troubleshooting]] - Common issues and solutions
- [[SSL Certificates]] - HTTPS and certificate management
- [[Proxying External Hosts]] - Connect non-Docker services
- [[Resource Limits]] - Performance optimization
### 🤖 AI & Automation
- [[AI Management Guide]] - Using AI for homelab management
- [[Copilot Instructions]] - AI assistant configuration
- [[VS Code Setup]] - Development environment
- [[AI Prompt Examples]] - Sample AI interactions
### 📋 Reference Materials
- [[Service Documentation]] - Individual service guides
- [[Configuration Templates]] - Ready-to-use configs
- [[Script Reference]] - Automation scripts
- [[Action Reports]] - Deployment logs and reports
## 🔧 Development & Contribution
### For Contributors
- [[Development Notes]] - Technical implementation details
- [[Contributing Guide]] - How to contribute to the project
- [[Code Standards]] - Development best practices
### Repository Structure
```
EZ-Homelab/
├── docs/ # Documentation
├── docker-compose/ # Service definitions
├── config-templates/ # Configuration templates
├── scripts/ # Deployment scripts
├── .github/ # GitHub configuration
└── wiki/ # This wiki (source of truth)
```
## 📞 Support & Community
- **Issues**: [GitHub Issues](https://github.com/kelinfoxy/EZ-Homelab/issues)
- **Discussions**: [GitHub Discussions](https://github.com/kelinfoxy/EZ-Homelab/discussions)
- **Documentation**: This wiki is the primary source of truth
## 📈 Project Status
- **Version**: 1.0.0 (Production Ready)
- **Services**: 70+ services across 10 categories
- **Architecture**: File-based, AI-manageable
- **Management**: Dockge web UI
- **Security**: Authelia SSO with VPN routing
---
*This wiki is automatically maintained and serves as the single source of truth for the EZ-Homelab project. All information is kept current with the latest documentation.*
---

*End of `wiki/Home.md`.*
# How Your AI Homelab Works
Welcome to your AI-powered homelab! This guide explains how all the components work together to create a production-ready, self-managing infrastructure. Don't worry if it seems complex at first - the AI assistant handles most of the technical details for you.
## Quick Overview
Your homelab is a **Docker-based infrastructure** that automatically:
- Provides secure HTTPS access to all services
- Manages user authentication and authorization
- Routes traffic intelligently
- Updates itself
- Backs up your data
- Monitors system health
Everything runs in **containers** - like lightweight virtual machines - that are orchestrated by **Docker Compose** and managed through the **Dockge** web interface.
## Core Components
### 🏠 **Homepage Dashboard** (`https://home.yourdomain.duckdns.org`)
Your central hub for accessing all services. Think of it as the "start menu" for your homelab.
- **What it does**: Shows all your deployed services with quick links
- **AI Integration**: The AI can automatically add new services and configure widgets
- **Customization**: Add weather, system stats, and service-specific widgets
- **Configuration**: [docker-compose/dashboards/](docker-compose/dashboards/) | [service-docs/homepage.md](service-docs/homepage.md)
### 🐳 **Dockge** (`https://dockge.yourdomain.duckdns.org`)
Your primary management interface for deploying and managing services.
- **What it does**: Web-based Docker Compose manager
- **Stacks**: Groups services into logical units (media, monitoring, productivity)
- **One-Click Deploy**: Upload compose files and deploy instantly
- **Configuration**: [docker-compose/infrastructure/](docker-compose/infrastructure/) | [service-docs/dockge.md](service-docs/dockge.md)
### 🔐 **Authelia** (`https://auth.yourdomain.duckdns.org`)
Your security gatekeeper that protects sensitive services.
- **What it does**: Single sign-on (SSO) authentication
- **Security**: Two-factor authentication, session management
- **Smart Bypass**: Automatically bypasses auth for media apps (Plex, Jellyfin)
- **Configuration**: [docker-compose/core/](docker-compose/core/) | [service-docs/authelia.md](service-docs/authelia.md)
### 🌐 **Traefik** (`https://traefik.yourdomain.duckdns.org`)
Your intelligent traffic director and SSL certificate manager.
- **What it does**: Reverse proxy that routes web traffic to the right services
- **SSL**: Automatically obtains and renews free HTTPS certificates
- **Labels**: Services "advertise" themselves to Traefik via Docker labels
- **Configuration**: [docker-compose/core/](docker-compose/core/) | [service-docs/traefik.md](service-docs/traefik.md)
### 🦆 **DuckDNS**
Your dynamic DNS service that gives your homelab a consistent domain name.
- **What it does**: Updates `yourdomain.duckdns.org` to point to your home IP
- **Integration**: Works with Traefik to get wildcard SSL certificates
- **Configuration**: [docker-compose/core/](docker-compose/core/) | [service-docs/duckdns.md](service-docs/duckdns.md)
### 🛡️ **Gluetun (VPN)**
Your download traffic protector.
- **What it does**: Routes torrent and download traffic through VPN
- **Security**: Prevents ISP throttling and hides your IP for downloads
- **Integration**: Download services connect through Gluetun's network
- **Configuration**: [docker-compose/core/](docker-compose/core/) | [service-docs/gluetun.md](service-docs/gluetun.md)
## How Services Get Added
### The AI Way (Recommended)
1. **Tell the AI**: "Add Plex to my media stack"
2. **AI Creates**: Docker Compose file with proper configuration
3. **AI Configures**: Traefik routing, Authelia protection, resource limits
4. **AI Deploys**: Service goes live with HTTPS and SSO
5. **AI Updates**: Homepage dashboard automatically
### Manual Way
1. **Find Service**: Choose from 50+ pre-configured services
2. **Upload to Dockge**: Use the web interface
3. **Configure**: Set environment variables and volumes
4. **Deploy**: Click deploy and wait
5. **Access**: Service is immediately available at `https://servicename.yourdomain.duckdns.org`
## Network Architecture
### Internal Networks
- **`traefik-network`**: All web-facing services connect here
- **`homelab-network`**: Internal service communication
- **`media-network`**: Media services (Plex, Jellyfin, etc.)
- **VPN Networks**: Download services route through Gluetun
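A service opts into one of these networks in its Compose file; a sketch of joining the proxy network (the service name and image are placeholders):

```yaml
services:
  myservice:
    image: example/myservice:latest
    networks:
      - traefik-network

networks:
  traefik-network:
    external: true   # created by the core stack, shared across stacks
```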
### External Access
- **Port 80/443**: Only Traefik exposes these to the internet
- **Domain**: `*.yourdomain.duckdns.org` points to your home
- **SSL**: Wildcard certificate covers all subdomains automatically
## Storage Strategy
### Configuration Files
- **Location**: `/opt/stacks/stack-name/config/`
- **Purpose**: Service settings, databases, user data
- **Backup**: Included in automatic backups
### Media & Large Data
- **Location**: `/mnt/media/`, `/mnt/downloads/`
- **Purpose**: Movies, TV shows, music, downloads
- **Performance**: Direct mounted drives for speed
### Secrets & Environment
- **Location**: `.env` files in each stack directory
- **Security**: Never committed to git
- **Management**: AI can help update variables
## AI Features
### VS Code Integration
- **Copilot Chat**: Natural language commands for infrastructure management
- **File Editing**: AI modifies Docker Compose files, configuration YAML
- **Troubleshooting**: AI analyzes logs and suggests fixes
- **Documentation**: AI keeps docs synchronized with deployed services
### OpenWebUI (Future)
- **Web Interface**: Chat with AI directly in your browser
- **API Tools**: AI can interact with your services' APIs
- **Workflows**: Automated service management and monitoring
- **Status**: Currently in development phase
## Lazy Loading (Sablier)
Some services start **on-demand** to save resources:
- **How it works**: Service starts when you first access it
- **Benefits**: Saves RAM and CPU when services aren't in use
- **Configuration**: AI manages the lazy loading rules
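In practice, a lazily-loaded container is marked with Sablier labels and routed through a Sablier middleware in Traefik (a sketch; the group and middleware names are examples — check your Sablier configuration):

```yaml
services:
  whoami:
    image: traefik/whoami
    labels:
      - "sablier.enable=true"
      - "sablier.group=demo"
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.yourdomain.duckdns.org`)"
      - "traefik.http.routers.whoami.middlewares=sablier-demo@file"
```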
## Monitoring & Maintenance
### Built-in Monitoring
- **Grafana/Prometheus**: System metrics and dashboards
- **Uptime Kuma**: Service uptime monitoring
- **Dozzle**: Live container log viewing
- **Node Exporter**: Hardware monitoring
### Automatic Updates
- **Watchtower**: Updates Docker images automatically
- **Backrest**: Scheduled backups using Restic
- **Certificate Renewal**: SSL certificates renew automatically
## Security Model
### Defense in Depth
1. **Network Level**: Firewall blocks unauthorized access
2. **SSL/TLS**: All traffic encrypted with valid certificates
3. **Authentication**: Authelia protects admin interfaces
4. **Authorization**: User roles and permissions
5. **Container Security**: Services run as non-root users
### VPN for Downloads
- **Purpose**: Hide IP address for torrenting
- **Implementation**: Download containers route through VPN
- **Provider**: Surfshark (configurable)
## Scaling & Customization
### Adding Services
- **Pre-built**: [50+ services](services-overview.md) ready to deploy
- **Custom**: AI can create configurations for any Docker service
- **External**: Proxy services on other devices (Raspberry Pi, NAS)
### Resource Management
- **Limits**: CPU, memory, and I/O limits prevent resource exhaustion
- **Reservations**: Guaranteed minimum resources
- **GPU Support**: Automatic NVIDIA GPU detection and configuration
## Troubleshooting Philosophy
- **Logs First**: Every service provides detailed logs. The AI can help analyze them.
- **Isolation Testing**: Deploy services one at a time to identify conflicts.
- **Configuration Validation**: AI validates Docker Compose syntax before deployment.
- **Rollback Ready**: Previous configurations are preserved for quick recovery.
## Getting Help
### Documentation Links
- **[Automated Setup](automated-setup.md)**: Step-by-step deployment
- **[SSL Certificates](ssl-certificates.md)**: HTTPS configuration details
- **[Post-Setup](post-setup.md)**: What to do after deployment
- **[AI VS Code Setup](ai-vscode-setup.md)**: Configure AI assistance
- **[AI Management Prompts](ai-management-prompts.md)**: Example commands for AI assistant
- **[Services Overview](../docs/services-overview.md)**: All available services
- **[Docker Guidelines](../docs/docker-guidelines.md)**: Technical details
### AI Assistance
- **In VS Code**: Use GitHub Copilot Chat for instant help
- **Examples**:
- "Add a new service to my homelab"
- "Fix SSL certificate issues"
- "Configure backup for my data"
- "Set up monitoring dashboard"
### Community Resources
- **GitHub Issues**: Report bugs or request features
- **Discussions**: Ask questions and share configurations
- **Wiki**: Community-contributed guides and tutorials
## Architecture Summary
Your homelab follows these principles:
- **Infrastructure as Code**: Everything defined in files
- **GitOps**: Version control for all configurations
- **Security First**: SSO protection by default
- **AI-Assisted**: Intelligent management and troubleshooting
- **Production Ready**: Monitoring, backups, and high availability
The result is a powerful, secure, and easy-to-manage homelab that grows with your needs!

---
# Infrastructure Services
## Overview
The **Infrastructure Services** stack provides the management, monitoring, and operational tools needed to maintain your homelab. These services enhance the core infrastructure with advanced management capabilities.
## Services Included
### 🐳 Dockge
**Purpose**: Primary stack management interface
- **URL**: `https://dockge.yourdomain.duckdns.org`
- **Function**: Visual Docker Compose stack management
- **Features**: Web UI for deploying/managing stacks
- **Authentication**: Protected by Authelia SSO
### 🐳 Portainer
**Purpose**: Advanced container management
- **URL**: `https://portainer.yourdomain.duckdns.org`
- **Function**: Detailed container and image management
- **Features**: Container logs, exec, resource monitoring
- **Authentication**: Protected by Authelia SSO
### 🛡️ Authentik (Alternative SSO)
**Purpose**: Advanced identity management system
- **URL**: `https://authentik.yourdomain.duckdns.org`
- **Function**: Full-featured SSO with web UI management
- **Components**: Server, Worker, PostgreSQL, Redis
- **Features**: User groups, policies, integrations
### 🛡️ Pi-hole
**Purpose**: Network-wide ad blocking and DNS
- **URL**: `https://pihole.yourdomain.duckdns.org`
- **Function**: DNS server with ad blocking
- **Features**: Query logging, client management
- **Authentication**: Protected by Authelia SSO
### 👁️ Dozzle
**Purpose**: Real-time Docker log viewer
- **URL**: `https://dozzle.yourdomain.duckdns.org`
- **Function**: Live container log streaming
- **Features**: Multi-container log viewing, search
- **Authentication**: Protected by Authelia SSO
### 👁️ Glances
**Purpose**: System monitoring dashboard
- **URL**: `https://glances.yourdomain.duckdns.org`
- **Function**: Real-time system resource monitoring
- **Features**: CPU, memory, disk, network stats
- **Authentication**: Protected by Authelia SSO
### 🔄 Watchtower
**Purpose**: Automatic container updates
- **URL**: No web interface (background service)
- **Function**: Monitors and updates Docker containers
- **Features**: Scheduled updates, notifications
- **Configuration**: Cron-based update scheduling
### 🔌 Docker Proxy
**Purpose**: Secure Docker socket access
- **URL**: No web interface (background service)
- **Function**: Provides secure API access to Docker
- **Features**: Token-based authentication
- **Security**: Protects Docker socket from unauthorized access
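A common way to implement this is a socket proxy that whitelists read-only API sections (a sketch using the widely used `tecnativa/docker-socket-proxy` image; verify the variables against its documentation):

```yaml
services:
  dockerproxy:
    image: tecnativa/docker-socket-proxy:latest
    environment:
      CONTAINERS: 1   # allow read-only container listing
      POST: 0         # deny state-changing requests
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - dockerproxy-network

networks:
  dockerproxy-network:
    external: true
```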
## Deployment Strategy
### Recommended Order
1. **Dockge** (primary management interface)
2. **Portainer** (advanced container management)
3. **Pi-hole** (network services)
4. **Monitoring** (Dozzle, Glances)
5. **Automation** (Watchtower, Docker Proxy)
### Stack Location
```
/opt/stacks/infrastructure/
├── docker-compose.yml
├── dockge/
├── portainer/
├── pihole/
├── dozzle/
├── glances/
└── .env
```
## Configuration
### Environment Variables
```bash
# User permissions
PUID=1000
PGID=1000
TZ=America/New_York
# Pi-hole configuration
PIHOLE_PASSWORD=secure-admin-password
# Watchtower settings
WATCHTOWER_CLEANUP=true
WATCHTOWER_POLL_INTERVAL=3600
```
### Network Integration
- **traefik-network**: Web interface access
- **dockerproxy-network**: Secure Docker API access
- **homelab-network**: Internal communication
## Security Features
### Authentication Integration
- **Authelia SSO**: All web interfaces protected
- **Role-based Access**: Different permission levels
- **Session Management**: Secure session handling
### Network Security
- **Internal Access**: Services not exposed externally
- **Firewall Rules**: Restricted network access
- **API Security**: Token-based Docker access
## Management Workflows
### Stack Deployment
```bash
# Deploy infrastructure stack
cd /opt/stacks/infrastructure
docker compose up -d
# Access management interfaces
# Dockge: https://dockge.yourdomain.duckdns.org
# Portainer: https://portainer.yourdomain.duckdns.org
```
### Container Monitoring
```bash
# View logs with Dozzle
# https://dozzle.yourdomain.duckdns.org
# System monitoring with Glances
# https://glances.yourdomain.duckdns.org
```
### Updates Management
```bash
# Watchtower handles automatic updates
# Manual update check
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once
```
## Performance Considerations
### Resource Allocation
```yaml
# Recommended resource limits
dockge:
cpus: '0.5'
memory: 256M
portainer:
cpus: '0.5'
memory: 512M
pihole:
cpus: '0.25'
memory: 128M
dozzle:
cpus: '0.25'
memory: 128M
glances:
cpus: '0.25'
memory: 128M
```
### Scaling Guidelines
- **CPU**: Portainer may need more CPU for large deployments
- **Memory**: Pi-hole benefits from additional memory for query logging
- **Storage**: Minimal storage requirements for configurations
## Integration Points
### Core Infrastructure
- **Traefik**: Provides routing and SSL termination
- **Authelia**: Handles authentication for all services
- **Networks**: Connected to traefik-network for access
### Other Stacks
- **All Stacks**: Can be managed through Dockge interface
- **Monitoring**: Provides monitoring for all services
- **Security**: Enhances security through Pi-hole ad blocking
## Troubleshooting
### Common Issues
#### Dockge Not Accessible
```bash
# Check container status
docker compose -f /opt/stacks/infrastructure/docker-compose.yml ps
# View logs
docker compose -f /opt/stacks/infrastructure/docker-compose.yml logs dockge
```
#### Portainer Connection Issues
```bash
# Verify Docker socket access
docker exec portainer docker version
# Check Docker Proxy logs
docker logs dockerproxy
```
#### Pi-hole DNS Issues
```bash
# Check DNS resolution
nslookup google.com 127.0.0.1
# View Pi-hole logs
docker logs pihole
```
#### Watchtower Not Updating
```bash
# Check Watchtower logs
docker logs watchtower
# Manual update test
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once --debug
```
## Backup & Recovery
### Configuration Backup
- **Dockge**: Stack configurations in `/opt/stacks/`
- **Portainer**: Settings stored in named volumes
- **Pi-hole**: Configuration in `/etc/pihole/`
- **All Services**: YAML configurations in stack directories
### Automated Backups
- **Watchtower**: No persistent data to backup
- **Monitoring Data**: Logs and metrics (ephemeral)
- **Settings**: Include in regular backup strategy
## Best Practices
### Operational Guidelines
1. **Use Dockge** as primary management interface
2. **Monitor regularly** with Glances and Dozzle
3. **Keep updated** via Watchtower automation
4. **Secure access** through Authelia SSO
5. **Network protection** via Pi-hole ad blocking
### Maintenance Schedule
- **Daily**: Check system monitoring
- **Weekly**: Review container logs
- **Monthly**: Update base images manually
- **Quarterly**: Security audit and cleanup
This infrastructure stack provides comprehensive management and monitoring capabilities for your homelab environment.
# Manual Setup Guide
If you prefer manual control or the automated script fails, follow these steps for manual installation.
## Prerequisites
- Fresh Debian/Ubuntu server or existing system
- Root/sudo access
- Internet connection
## Step 1: Update System
```bash
sudo apt update && sudo apt upgrade -y
```
## Step 2: Install Docker
```bash
# Install dependencies
sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Set up repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Alternative: use the convenience script instead of the steps above
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Add user to docker group
sudo usermod -aG docker $USER
sudo usermod -aG sudo $USER
# Log out and back in, or run: newgrp docker
```
## Step 3: Clone Repository
```bash
cd ~
git clone https://github.com/kelinfoxy/AI-Homelab.git
cd AI-Homelab
```
## Step 4: Configure Environment
```bash
cp .env.example .env
nano .env # Edit all required variables
```
**Required variables:**
- `DOMAIN` - Your DuckDNS domain
- `DUCKDNS_TOKEN` - Your DuckDNS token
- `ACME_EMAIL` - Your email for Let's Encrypt
- `AUTHELIA_JWT_SECRET` - Generate with: `openssl rand -hex 64`
- `AUTHELIA_SESSION_SECRET` - Generate with: `openssl rand -hex 64`
- `AUTHELIA_STORAGE_ENCRYPTION_KEY` - Generate with: `openssl rand -hex 64`
- `SURFSHARK_USERNAME` and `SURFSHARK_PASSWORD` - If using VPN
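The three Authelia secrets can be generated in one pass. A small sketch (writes `.env`-ready lines to `authelia-secrets.env` in the current directory; the key names match the variables listed above):

```shell
# Generate one 64-byte hex secret per Authelia key and write .env-ready lines
for key in AUTHELIA_JWT_SECRET AUTHELIA_SESSION_SECRET AUTHELIA_STORAGE_ENCRYPTION_KEY; do
  echo "${key}=$(openssl rand -hex 64)"
done > authelia-secrets.env
cat authelia-secrets.env
```

Copy the three lines into `.env`, then delete the temporary file.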
## Step 5: Create Infrastructure
```bash
# Create directories
sudo mkdir -p /opt/stacks /mnt/{media,database,downloads,backups}
sudo chown -R $USER:$USER /opt/stacks /mnt
# Create networks
docker network create traefik-network
docker network create homelab-network
docker network create dockerproxy-network
docker network create media-network
```
## Step 6: Generate Authelia Password Hash
```bash
# Generate password hash (takes 30-60 seconds)
docker run --rm authelia/authelia:4.37 authelia crypto hash generate argon2 --password 'YourSecurePassword'
# Copy the hash starting with $argon2id$...
```
## Step 7: Configure Authelia
```bash
# Copy Authelia config templates
mkdir -p /opt/stacks/core/authelia
cp config-templates/authelia/* /opt/stacks/core/authelia/
# Edit users_database.yml
nano /opt/stacks/core/authelia/users_database.yml
```

Replace the password hash and email in the `users` section:

```yaml
users:
  admin:
    displayname: "Admin User"
    password: "$argon2id$v=19$m=65536,t=3,p=4$..." # Your hash here
    email: your.email@example.com
    groups:
      - admins
      - users
```
## Step 8: Deploy Core Services
```bash
# Deploy core infrastructure
sudo mkdir -p /opt/stacks/core
cp docker-compose/core.yml /opt/stacks/core/docker-compose.yml
cp -r config-templates/traefik /opt/stacks/core/
cp .env /opt/stacks/core/
# Update Traefik email
sed -i "s/admin@example.com/$ACME_EMAIL/" /opt/stacks/core/traefik/traefik.yml
cd /opt/stacks/core
docker compose up -d
```
## Step 9: Deploy Infrastructure Stack
```bash
sudo mkdir -p /opt/stacks/infrastructure
cp docker-compose/infrastructure.yml /opt/stacks/infrastructure/docker-compose.yml
cp .env /opt/stacks/infrastructure/
cd /opt/stacks/infrastructure
docker compose up -d
```
## Step 10: Deploy Dashboards
```bash
sudo mkdir -p /opt/stacks/dashboards
cp docker-compose/dashboards.yml /opt/stacks/dashboards/docker-compose.yml
cp -r config-templates/homepage /opt/stacks/dashboards/
cp .env /opt/stacks/dashboards/
# Replace Homepage domain variables
find /opt/stacks/dashboards/homepage -type f \( -name "*.yaml" -o -name "*.yml" \) -exec sed -i "s/{{HOMEPAGE_VAR_DOMAIN}}/$DOMAIN/g" {} \;
cd /opt/stacks/dashboards
docker compose up -d
```
## Step 11: Verify Deployment
```bash
# Check running containers
docker ps
# Check logs if any service fails
docker logs container-name
# Access services
echo "Dockge: https://dockge.$DOMAIN"
echo "Authelia: https://auth.$DOMAIN"
echo "Traefik: https://traefik.$DOMAIN"
```
## Troubleshooting
**Permission issues:**
```bash
# Ensure proper ownership
sudo chown -R $USER:$USER /opt/stacks
# Check group membership
groups $USER
```
**Container won't start:**
```bash
# Check logs
docker logs container-name
# Check compose file syntax
docker compose config
```
**Network conflicts:**
```bash
# List existing networks
docker network ls
# Remove and recreate if needed
docker network rm network-name
docker network create network-name
```
## When to Use Manual Setup
- Automated script fails on your system
- You want full control over each step
- You're using a non-Debian/Ubuntu distribution
- You need custom configurations
- You're troubleshooting deployment issues
## Switching to Automated
If manual setup works, you can switch to the automated scripts for future updates:
```bash
# Just run the deploy script
cd ~/AI-Homelab
sudo ./scripts/deploy-homelab.sh
```
The deploy script is idempotent - it won't break existing configurations.

# On Demand Remote Services with Authelia, Sablier & Traefik
## 4 Step Process
1. Add route & service in Traefik external hosts file
2. Add middleware in Sablier config file (sablier.yml)
3. Add labels to compose files on Remote Host
4. Restart services
## Required Information
```bash
<server> - the hostname of the remote server
<service> - the application/container name
<full domain> - the base url for your proxy host (my-subdomain.duckdns.org)
<ip address> - the ip address of the remote server
<port> - the external port exposed by the service
<service display name> - how it will appear on the now loading page
<group name> - use <service> for a single service, or something descriptive for the group of services that will start together.
```
## Step 1: Add route & service in Traefik external hosts file
### In /opt/stacks/core/traefik/dynamic/external-host-server_name.yml
```yaml
http:
  routers:
    # Add a section under routers for each Route (Proxy Host)
    <service>-<server>:
      rule: "Host(`<service>.<full domain>`)"
      entryPoints:
        - websecure
      service: <service>-<server>
      tls:
        certResolver: letsencrypt
      middlewares:
        - sablier-<server>-<service>@file
        - authelia@docker # comment this line to disable SSO login
    # next route goes here
  # All Routes go above this line

  # Services section defines each service used above
  services:
    <service>-<server>:
      loadBalancer:
        servers:
          - url: "http://<ip address>:<port>"
        passHostHeader: true
    # next service goes here
```
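A filled-in instance of the template above may make the placeholders clearer. Hypothetical values: server `pi4`, service `uptime-kuma` reachable at 192.168.1.60:3001, domain `my-lab.duckdns.org`:

```yaml
http:
  routers:
    uptime-kuma-pi4:
      rule: "Host(`uptime-kuma.my-lab.duckdns.org`)"
      entryPoints:
        - websecure
      service: uptime-kuma-pi4
      tls:
        certResolver: letsencrypt
      middlewares:
        - sablier-pi4-uptime-kuma@file
        - authelia@docker
  services:
    uptime-kuma-pi4:
      loadBalancer:
        servers:
          - url: "http://192.168.1.60:3001"
        passHostHeader: true
```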
## Step 2: Add middleware to Sablier config file
### In /opt/stacks/core/traefik/dynamic/sablier.yml
```yaml
http:
  middlewares:
    # Add a section under middlewares for each Route (Proxy Host)
    sablier-<server>-<service>:
      plugin:
        sablier:
          sablierUrl: http://sablier-service:10000
          group: <server>-<group name>
          sessionDuration: 2m # Increase this for convenience
          ignoreUserAgent: curl # Don't wake the service for a curl command
          dynamic:
            displayName: <service display name>
            theme: ghost # This can be changed
            show-details-by-default: true # May want to disable for production
    # Next middleware goes here
```
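A filled-in middleware entry, with hypothetical values (server `pi4`, service `uptime-kuma`); the middleware name must match the router's `sablier-<server>-<service>@file` reference in Step 1 and the group must match the labels in Step 3:

```yaml
http:
  middlewares:
    sablier-pi4-uptime-kuma:
      plugin:
        sablier:
          sablierUrl: http://sablier-service:10000
          group: pi4-uptime-kuma
          sessionDuration: 2m
          dynamic:
            displayName: Uptime Kuma
            theme: ghost
```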
## Step 3: Add labels to compose files on Remote Host
## On the Remote Server
### Apply labels to the services in the compose files
```yaml
labels:
  - sablier.enable=true
  - sablier.group=<server>-<group name>
  - sablier.start-on-demand=true
```
> **Note**: Traefik & Authelia labels are not used in the compose files for Remote Hosts.
## Step 4: Restart services
### On host system
```bash
docker restart traefik
docker restart sablier-service
```
### On the Remote Host
```bash
cd /opt/stacks/<service>
docker compose down && docker compose up -d
docker stop <service>  # leave it stopped; Sablier will start it on demand
```
## Setup Complete
Access your service at its proxy URL.

# Ports in Use
This document tracks all ports used by services in the AI-Homelab. Update this document whenever services are added or ports are changed.
## Core Stack ([core.yml](../docker-compose/core.yml))
| Service | Port | Protocol | Purpose | Internal Port |
|---------|------|----------|---------|---------------|
| [Traefik](../service-docs/traefik.md) | 80 | TCP | HTTP (redirects to HTTPS) | 80 |
| [Traefik](../service-docs/traefik.md) | 443 | TCP | HTTPS | 443 |
| [Traefik](../service-docs/traefik.md) | 8080 | TCP | Dashboard (protected) | 8080 |
## Infrastructure Stack ([infrastructure.yml](../docker-compose/infrastructure.yml))
| Service | Port | Protocol | Purpose | Internal Port |
|---------|------|----------|---------|---------------|
| [Dockge](../service-docs/dockge.md) | 5001 | TCP | Web UI | 5001 |
| [Pi-hole](../service-docs/pihole.md) | 53 | TCP/UDP | DNS | 53 |
| [Docker Proxy](../service-docs/docker-proxy.md) | 127.0.0.1:2375 | TCP | Docker API proxy | 2375 |
## Development Stack ([development.yml](../docker-compose/development.yml))
| Service | Port | Protocol | Purpose | Internal Port |
|---------|------|----------|---------|---------------|
| [PostgreSQL](../service-docs/postgresql.md) | 5432 | TCP | Database | 5432 |
| [Redis](../service-docs/redis.md) | 6379 | TCP | Cache/Database | 6379 |
## Home Assistant Stack ([homeassistant.yml](../docker-compose/homeassistant.yml))
| Service | Port | Protocol | Purpose | Internal Port |
|---------|------|----------|---------|---------------|
| [MotionEye](../service-docs/motioneye.md) | 8765 | TCP | Web UI | 8765 |
| [Mosquitto](../service-docs/mosquitto.md) | 1883 | TCP | MQTT | 1883 |
| [Mosquitto](../service-docs/mosquitto.md) | 9001 | TCP | MQTT Websockets | 9001 |
## Monitoring Stack ([monitoring.yml](../docker-compose/monitoring.yml))
| Service | Port | Protocol | Purpose | Internal Port |
|---------|------|----------|---------|---------------|
| [Prometheus](../service-docs/prometheus.md) | 9090 | TCP | Web UI/Metrics | 9090 |
## VPN Stack ([vpn.yml](../docker-compose/vpn.yml))
| Service | Port | Protocol | Purpose | Internal Port |
|---------|------|----------|---------|---------------|
| [Gluetun](../service-docs/gluetun.md) | 8888 | TCP | HTTP proxy | 8888 |
| [Gluetun](../service-docs/gluetun.md) | 8388 | TCP/UDP | Shadowsocks | 8388 |
| [Gluetun](../service-docs/gluetun.md) | 8081 | TCP | qBittorrent Web UI | 8080 |
| [Gluetun](../service-docs/gluetun.md) | 6881 | TCP/UDP | qBittorrent | 6881 |
## Port Range Reference
| Range | Usage |
|-------|-------|
| 1-1023 | System ports (well-known) |
| 1024-49151 | Registered ports |
| 49152-65535 | Dynamic/private ports |
## Common Port Conflicts
- **Port 80/443**: Used by Traefik for HTTP/HTTPS
- **Port 53**: Used by Pi-hole for DNS
- **Port 2375**: Used by Docker Proxy (localhost only)
- **Port 5001**: Used by Dockge
- **Port 5432**: Used by PostgreSQL
- **Port 6379**: Used by Redis
- **Port 8080**: Used by Traefik dashboard
- **Port 9090**: Used by Prometheus
## Adding New Services
When adding new services:
1. Check this document for available ports
2. Choose ports that don't conflict with existing services
3. Update this document with new port mappings
4. Consider using Traefik labels instead of direct port exposure for web services
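Steps 1–2 can be sketched as a quick check: collect the host-side ports already mapped in the compose files and test a candidate against them (hypothetical candidate port 8090; `docker-compose/` assumed to be the repo's compose directory):

```shell
# Gather every "host:container" mapping, keep the host side, and test the candidate
candidate=8090
mapped=$(grep -rhoE '[0-9]+:[0-9]+' docker-compose/ 2>/dev/null | cut -d: -f1 | sort -un)
if printf '%s\n' "$mapped" | grep -qx "$candidate"; then
  status="in use"
else
  status="free"
fi
echo "port $candidate is $status"
```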
## Port Planning Guidelines
- **Web services**: Use Traefik labels (no direct ports needed)
- **Databases**: Use internal networking only (no external ports)
- **VPN services**: Route through Gluetun for security
- **Development tools**: Consider localhost-only binding (127.0.0.1:port)
- **Monitoring**: Use high-numbered ports (9000+ range)
## Updating This Document
This document should be updated whenever:
- New services are added to any stack
- Existing services change their port mappings
- Services are removed from stacks
- Network configurations change
Run this command to find all port mappings in compose files:
```bash
grep -rn "ports:" docker-compose/ | grep -v '#' | sort
```

# Post-Setup Next Steps
Congratulations! Your AI-powered homelab is now running. Here's what to do next.
## Access Your Services
- **Homepage**: `https://homepage.yourdomain.duckdns.org`
- Great place to start exploring your services
- After configuring your services, come back and add widgets with API keys (optional)
- Or ask the AI to find the API keys and add the widgets
- **Dockge**: `https://dockge.yourdomain.duckdns.org`
- Deploy & Manage the stacks & services
- Your primary management interface
- **Authelia**: `https://auth.yourdomain.duckdns.org`
- Configure 2FA for enhanced security (optional)
- **Traefik**: `https://traefik.yourdomain.duckdns.org`
- View/Edit your routing rules
- Tip: Let the AI manage the routing for you
- **VS Code**: `https://code.yourdomain.duckdns.org`
- Install GitHub Copilot Chat extension
- Open the AI-Homelab repository
- Use AI assistance for:
- Adding new services
- Configuring Traefik routing
- Managing Docker stacks
## Monitoring Services
- Use Dockge to easily view live container logs
- Configure Uptime Kuma to provide uptime tracking with dashboards
- Check Grafana for system metrics and monitoring
## Customize Your Homelab
### Add Custom Services
Tell the AI what service you want to install and give it a Docker-based GitHub repository or Docker Hub image. Use your imagination; the Copilot instructions include best practices and a framework for adding new services.
### Remove Unwanted Services
To remove a stack:
```bash
cd /opt/stacks/stack-name
docker compose down
cd ..
sudo rm -rf stack-name
```
To remove the volumes/resources for the stack:
```bash
# Stop stack and remove everything
cd /opt/stacks/stack-name
docker compose down -v --remove-orphans
# Remove unused Docker resources
docker system prune -a --volumes
```
## Set Up Backups
Your homelab includes comprehensive backup solutions. The default setup includes Backrest (Restic-based) for automated backups.
### Quick Backup Setup
1. **Access Backrest**: `https://backrest.yourdomain.duckdns.org`
2. **Configure repositories**: Add local or cloud storage destinations
3. **Set up schedules**: Configure automatic backup schedules
4. **Add backup jobs**: Create jobs for your important data
### What to Back Up
- **Configuration files**: `/opt/stacks/*/config/` directories
- **Databases**: Service-specific database volumes
- **User data**: Nextcloud files, Git repositories, etc.
- **Media libraries**: Movies, TV shows, music (if space allows)
### Backup Commands
```bash
# Manual backup of a volume
docker run --rm \
-v source-volume:/data \
-v /mnt/backups:/backup \
busybox tar czf /backup/volume-backup-$(date +%Y%m%d).tar.gz /data
# List Backrest configurations
cd /opt/stacks/utilities/backrest
docker compose exec backrest restic snapshots
# Restore from backup
docker run --rm \
-v target-volume:/data \
-v /mnt/backups:/backup \
busybox tar xzf /backup/volume-backup.tar.gz -C /
```
For detailed backup configuration, see the [Restic-BackRest-Backup-Guide.md](Restic-BackRest-Backup-Guide.md).
## Troubleshooting
### Script Issues
- **Permission denied**: Run with `sudo`
- **Docker not found**: Log out/in or run `newgrp docker`
- **Network conflicts**: Check existing networks with `docker network ls`
### Service Issues
- **Can't access services**: Check Traefik dashboard at `https://traefik.yourdomain.duckdns.org`
- **SSL certificate errors**: Wait 2-5 minutes for wildcard certificate to be obtained from Let's Encrypt
- Check status: `python3 -c "import json; d=json.load(open('/opt/stacks/core/traefik/acme.json')); print(f'Certificates: {len(d[\"letsencrypt\"][\"Certificates\"])}')"`
- View logs: `docker exec traefik tail -50 /var/log/traefik/traefik.log | grep certificate`
- **Authelia login fails**: Check user database configuration at `/opt/stacks/core/authelia/users_database.yml`
- **"Not secure" warnings**: Clear browser cache or wait for DNS propagation (up to 5 minutes)
- **Check logs**: Use Dozzle web interface at `https://dozzle.yourdomain.duckdns.org` or run `docker logs <container-name>`
### Common Fixes
```bash
# Restart Docker
sudo systemctl restart docker
# Check service logs
cd /opt/stacks/stack-name
docker compose logs -f
# Rebuild service
docker compose up -d --build service-name
```
## Next Steps
1. **Explore services** through Dockge
2. **Set up backups** with Backrest (default Restic-based solution)
3. **Set up monitoring** with Grafana/Prometheus
4. **Add external services** via Traefik proxying
5. **Use AI assistance** for custom configurations
Happy homelabbing! 🚀

# Proxying External Hosts with Traefik and Authelia
This guide explains how to use Traefik and Authelia to proxy external services (like a Raspberry Pi running Home Assistant) through your domain with HTTPS and optional SSO protection.
## Overview
Traefik can proxy services that aren't running in Docker, such as:
- Home Assistant on a Raspberry Pi
- Other physical servers on your network
- Services running on different machines
- Any HTTP/HTTPS service accessible via IP:PORT
## Method 1: Using Traefik File Provider (Recommended)
### Step 1: Create External Service Configuration
Create a file in `/opt/stacks/traefik/dynamic/` with the format 'external-host-servername.yml'
```yaml
http:
  routers:
    # Home Assistant on Raspberry Pi
    homeassistant-external:
      rule: "Host(`ha.yourdomain.duckdns.org`)"
      entryPoints:
        - websecure
      service: homeassistant-external
      tls:
        certResolver: letsencrypt
      # Uncomment to add Authelia protection:
      # middlewares:
      #   - authelia@docker
  services:
    homeassistant-external:
      loadBalancer:
        servers:
          - url: "http://192.168.1.50:8123" # Replace with your Pi's IP and port
        passHostHeader: true
  middlewares:
    # Optional: Add headers for WebSocket support
    homeassistant-headers:
      headers:
        customRequestHeaders:
          X-Forwarded-Proto: "https"
        customResponseHeaders:
          X-Frame-Options: "SAMEORIGIN"
```
### Step 2: Reload Traefik
Traefik watches the `/opt/stacks/traefik/dynamic/` directory automatically and reloads configurations:
```bash
# Verify configuration is loaded
docker logs traefik | grep external-hosts
# If needed, restart Traefik
cd /opt/stacks/traefik
docker compose restart
```
### Step 3: Test Access
Visit `https://ha.yourdomain.duckdns.org` - Traefik will:
1. Accept the HTTPS connection
2. Proxy the request to `http://192.168.1.50:8123`
3. Return the response with proper SSL
4. (Optionally) Require Authelia login if middleware is configured
## Method 2: Using Docker Labels (Dummy Container)
If you prefer managing routes via Docker labels, create a dummy container:
> This can be resource-intensive with several services running.
> Not recommended due to the unnecessary resource/power consumption.
> Don't try it on a Raspberry Pi.
### Create a Label Container
In `/opt/stacks/external-proxies/docker-compose.yml`:
```yaml
services:
  # Dummy container for Raspberry Pi Home Assistant
  homeassistant-proxy-labels:
    image: alpine:latest
    container_name: homeassistant-proxy-labels
    command: tail -f /dev/null # Keep container running
    restart: unless-stopped
    networks:
      - traefik-network
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.ha-external.rule=Host(`ha.${DOMAIN}`)"
      - "traefik.http.routers.ha-external.entrypoints=websecure"
      - "traefik.http.routers.ha-external.tls.certresolver=letsencrypt"
      # Point to external service
      - "traefik.http.services.ha-external.loadbalancer.server.url=http://192.168.1.50:8123"
      # Optional: Add Authelia (usually not for HA)
      # - "traefik.http.routers.ha-external.middlewares=authelia@docker"

networks:
  traefik-network:
    external: true
```
Deploy:
```bash
cd /opt/stacks/external-proxies
docker compose up -d
```
## Common External Services to Proxy
### Home Assistant (Raspberry Pi)
```yaml
homeassistant-pi:
  rule: "Host(`ha.yourdomain.duckdns.org`)"
  service: http://192.168.1.50:8123
  # No Authelia - HA has its own auth
```
### Router/Firewall Admin Panel
```yaml
router-admin:
  rule: "Host(`router.yourdomain.duckdns.org`)"
  service: http://192.168.1.1:80
  middlewares:
    - authelia@docker # Add SSO protection
```
## Advanced Configuration
### WebSocket Support
Some services (like Home Assistant) need WebSocket support:
```yaml
http:
  middlewares:
    websocket-headers:
      headers:
        customRequestHeaders:
          X-Forwarded-Proto: "https"
          Connection: "upgrade"
          Upgrade: "websocket"
  routers:
    homeassistant-external:
      middlewares:
        - websocket-headers
```
### HTTPS Backend
If your external service already uses HTTPS:
```yaml
http:
  services:
    https-backend:
      loadBalancer:
        servers:
          - url: "https://192.168.1.50:8123"
        serversTransport: insecureTransport
  serversTransports:
    insecureTransport:
      insecureSkipVerify: true # Only if using self-signed cert
```
### IP Whitelist
Restrict access to specific IPs:
```yaml
http:
  middlewares:
    local-only:
      ipWhiteList:
        sourceRange:
          - "192.168.1.0/24"
          - "10.0.0.0/8"
  routers:
    sensitive-service:
      middlewares:
        - local-only
        - authelia@docker
```
## Authelia Bypass Rules
Configure Authelia to bypass authentication for specific external hosts.
Edit `/opt/stacks/authelia/configuration.yml`:
```yaml
access_control:
  rules:
    # Bypass for Home Assistant (app access)
    - domain: ha.yourdomain.duckdns.org
      policy: bypass
    # Require auth for router admin
    - domain: router.yourdomain.duckdns.org
      policy: one_factor
    # Two-factor for critical services
    - domain: proxmox.yourdomain.duckdns.org
      policy: two_factor
```
## DNS Configuration
Ensure your DuckDNS domain points to your public IP:
1. DuckDNS container automatically updates your IP
2. Port forward 80 and 443 to your Traefik server
3. All subdomains (`*.yourdomain.duckdns.org`) point to same IP
4. Traefik routes based on Host header
## Troubleshooting
### Check Traefik Routing
```bash
# View active routes
docker logs traefik | grep "Creating router"
# Check if external host route is loaded
docker logs traefik | grep homeassistant
# View Traefik dashboard
# Visit: https://traefik.yourdomain.duckdns.org
```
### Test Without SSL
```bash
# Temporarily test direct connection
curl -H "Host: ha.yourdomain.duckdns.org" http://localhost/
```
### Check Authelia Logs
```bash
cd /opt/stacks/authelia
docker compose logs -f authelia
```
### Verify External Service
```bash
# Test that external service is reachable
curl http://192.168.1.50:8123
```
## AI Management
The AI can manage external host proxying by:
1. **Reading existing configurations**: Parse `/opt/stacks/traefik/dynamic/*.yml`
2. **Adding new routes**: Create/update YAML files in dynamic directory
3. **Modifying Docker labels**: Update dummy container labels
4. **Configuring Authelia rules**: Edit `configuration.yml` for bypass/require auth
5. **Testing connectivity**: Suggest verification steps
Example AI prompt:
> "Add proxying for my Unifi Controller at 192.168.1.5:8443 with Authelia protection"
AI will:
1. Create route configuration file
2. Add HTTPS backend support (Unifi uses HTTPS)
3. Configure Authelia middleware
4. Add to Homepage dashboard
5. Provide testing instructions
## Security Best Practices
1. **Always use Authelia** for admin interfaces (routers, NAS, etc.)
2. **Bypass Authelia** only for services with their own auth (HA, Plex)
3. **Use IP whitelist** for highly sensitive services
4. **Enable two-factor** for critical infrastructure
5. **Monitor access logs** in Traefik and Authelia
6. **Keep services updated** - Traefik, Authelia, and external services
## Example: Complete External Host Setup
Let's proxy a Raspberry Pi Home Assistant:
1. **Traefik configuration** (`/opt/stacks/traefik/dynamic/raspberry-pi.yml`):
```yaml
http:
  routers:
    ha-pi:
      rule: "Host(`ha.yourdomain.duckdns.org`)"
      entryPoints:
        - websecure
      service: ha-pi
      tls:
        certResolver: letsencrypt
      middlewares:
        - ha-headers
  services:
    ha-pi:
      loadBalancer:
        servers:
          - url: "http://192.168.1.50:8123"
  middlewares:
    ha-headers:
      headers:
        customRequestHeaders:
          X-Forwarded-Proto: "https"
```
2. **Authelia bypass** (in `/opt/stacks/authelia/configuration.yml`):
```yaml
access_control:
  rules:
    - domain: ha.yourdomain.duckdns.org
      policy: bypass
```
3. **Homepage entry** (in `/opt/stacks/homepage/config/services.yaml`):
```yaml
- Home Automation:
    - Home Assistant (Pi):
        icon: home-assistant.png
        href: https://ha.yourdomain.duckdns.org
        description: HA on Raspberry Pi
        ping: 192.168.1.50
        widget:
          type: homeassistant
          url: http://192.168.1.50:8123
          key: your-long-lived-token
```
4. **Test**:
```bash
# Reload Traefik (automatic, but verify)
docker logs traefik | grep ha-pi
# Visit
https://ha.yourdomain.duckdns.org
```
Done! Your Raspberry Pi Home Assistant is now accessible via your domain with HTTPS. 🎉

# Quick Reference Guide
## Stack Overview
Your homelab uses separate stacks for organization:
**Deployed by default (12 containers):**
- **`core.yml`** - Essential infrastructure (Traefik, Authelia, DuckDNS, Gluetun) - 4 services
- **`infrastructure.yml`** - Management tools (Dockge, Pi-hole, Dozzle, Glances, Docker Proxy) - 6 services
_Note: Watchtower temporarily disabled due to Docker API compatibility_
- **`dashboards.yml`** - Dashboard services (Homepage, Homarr) - 2 services
**Available in Dockge (deploy as needed):**
- **`media.yml`** - Media services (Plex, Jellyfin, Sonarr, Radarr, Prowlarr, qBittorrent)
- **`media-extended.yml`** - Additional media tools (Readarr, Lidarr, Mylar, Calibre)
- **`homeassistant.yml`** - Home automation (Home Assistant, Node-RED, Zigbee2MQTT, ESPHome)
- **`productivity.yml`** - Productivity apps (Nextcloud, Gitea, Bookstack, Outline, Excalidraw)
- **`monitoring.yml`** - Monitoring stack (Grafana, Prometheus, Uptime Kuma, Netdata)
- **`utilities.yml`** - Utility services (Duplicati, Code Server, FreshRSS, Wallabag)
- **`alternatives.yml`** - Alternative tools (Portainer, Authentik)
> All stacks can be modified by the AI to suit your preferences.
## Deployment Scripts
For detailed information about the deployment scripts, their features, and usage, see [scripts/README.md](../scripts/README.md).
**Quick summary:**
- `setup-homelab.sh` - First-run system setup and Authelia configuration
- `deploy-homelab.sh` - Deploy all core services and prepare additional stacks
- `reset-test-environment.sh` - Testing/development only - removes all deployed services
- `reset-ondemand-services.sh` - Reload services for Sablier lazy loading
## Common Commands
### Service Management
```bash
# Start all services in a stack (from stack directory)
cd /opt/stacks/stack-name/
docker compose up -d
# Start all services (from anywhere, using full path)
docker compose -f /opt/stacks/stack-name/docker-compose.yml up -d
# Start specific service (from stack directory)
cd /opt/stacks/stack-name/
docker compose up -d service-name
# Start specific service (from anywhere)
docker compose -f /opt/stacks/stack-name/docker-compose.yml up -d service-name
# Stop all services (from stack directory)
cd /opt/stacks/stack-name/
docker compose down
# Stop all services (from anywhere)
docker compose -f /opt/stacks/stack-name/docker-compose.yml down
# Stop specific service (from stack directory)
cd /opt/stacks/stack-name/
docker compose stop service-name
# Stop specific service (from anywhere)
docker compose -f /opt/stacks/stack-name/docker-compose.yml stop service-name
# Restart service (from stack directory)
cd /opt/stacks/stack-name/
docker compose restart service-name
# Restart service (from anywhere)
docker compose -f /opt/stacks/stack-name/docker-compose.yml restart service-name
# Remove service and volumes (from stack directory)
cd /opt/stacks/stack-name/
docker compose down -v
# Remove service and volumes (from anywhere)
docker compose -f /opt/stacks/stack-name/docker-compose.yml down -v
```
### Monitoring
> Tip: install the Dozzle service for viewing live logs
```bash
# View logs for entire stack
cd /opt/stacks/stack-name/
docker compose logs -f
# View logs for specific service
cd /opt/stacks/stack-name/
docker compose logs -f service-name
# View last 100 lines
cd /opt/stacks/stack-name/
docker compose logs --tail=100 service-name
# Check service status
docker compose -f /opt/stacks/stack-name/docker-compose.yml ps
# View resource usage
docker stats
# Inspect service
docker inspect container-name
```
### Updates
> Tip: Install the Watchtower service for automatic updates
```bash
# Pull latest images for stack
cd /opt/stacks/stack-name/
docker compose pull
# Pull and update specific service
cd /opt/stacks/stack-name/
docker compose pull service-name
docker compose up -d service-name
```
### Network Management
```bash
# List all networks
docker network ls
# Inspect network
docker network inspect traefik-network
# Create network (if needed)
docker network create network-name
# Remove unused networks
docker network prune
```
### Volume Management
```bash
# List volumes
docker volume ls
# Inspect volume
docker volume inspect volume-name
# Remove unused volumes
docker volume prune
# Backup volume
docker run --rm \
-v volume-name:/data \
-v $(pwd)/backups:/backup \
busybox tar czf /backup/volume-backup.tar.gz /data
# Restore volume
docker run --rm \
-v volume-name:/data \
-v $(pwd)/backups:/backup \
busybox tar xzf /backup/volume-backup.tar.gz -C /
```
### System Maintenance
```bash
# View disk usage
docker system df
# Clean up unused resources
docker system prune
# Clean up everything (use carefully!)
docker system prune -a --volumes
# Remove unused images
docker image prune -a
```
## Port Reference
### Core Infrastructure (core.yml)
- **80/443**: Traefik (reverse proxy)
- **8080**: Traefik dashboard
- **10000**: Sablier (lazy loading service)
### Infrastructure Services (infrastructure.yml)
- **5001**: Dockge (stack manager)
- **9000/9443**: Portainer (Docker UI)
- **53**: Pi-hole (DNS)
- **8082**: Pi-hole (web UI)
- **9999**: Watchtower (auto-updates)
- **8000**: Dozzle (log viewer)
- **61208**: Glances (system monitor)
### Dashboard Services (dashboards.yml)
- **3000**: Homepage dashboard
- **7575**: Homarr dashboard
### Media Services (media.yml)
- **32400**: Plex
- **8096**: Jellyfin
- **8989**: Sonarr
- **7878**: Radarr
- **9696**: Prowlarr
- **8081**: qBittorrent
### Extended Media (media-extended.yml)
- **8787**: Readarr
- **8686**: Lidarr
- **5299**: Lazy Librarian
- **8090**: Mylar3
- **8083**: Calibre-Web
- **5055**: Jellyseerr
- **9697**: FlareSolverr
- **7889**: Tdarr Server
- **8265**: Unmanic
### Home Automation (homeassistant.yml)
- **8123**: Home Assistant
- **6052**: ESPHome
- **8843**: TasmoAdmin
- **1880**: Node-RED
- **1883/9001**: Mosquitto (MQTT)
- **8124**: Zigbee2MQTT
- **8081**: MotionEye
### Productivity (productivity.yml)
- **8080**: Nextcloud
- **9929**: Mealie
- **8084**: WordPress
- **3000**: Gitea
- **8085**: DokuWiki
- **8086**: BookStack
- **8087**: MediaWiki
- **3030**: Form.io
### Monitoring (monitoring.yml)
- **9090**: Prometheus
- **3000**: Grafana
- **3100**: Loki
- **9080**: Promtail
- **9100**: Node Exporter
- **8080**: cAdvisor
- **3001**: Uptime Kuma
### Utilities (utilities.yml)
- **7979**: Backrest (backups)
- **8200**: Duplicati (backups)
- **8443**: Code Server
- **5000**: Form.io
- **3001**: Uptime Kuma
### Development (development.yml)
- **8929**: GitLab
- **5432**: PostgreSQL
- **6379**: Redis
- **5050**: pgAdmin
- **8888**: Jupyter Lab
## Environment Variables Quick Reference
```bash
# User IDs (get with: id -u and id -g)
PUID=1000 # Your user ID
PGID=1000 # Your group ID
# General
TZ=America/New_York # Your timezone
DOMAIN=yourdomain.duckdns.org # Your domain
# DuckDNS
DUCKDNS_TOKEN=your-token
DUCKDNS_SUBDOMAINS=yourdomain
# Authelia
AUTHELIA_JWT_SECRET=64-char-secret
AUTHELIA_SESSION_SECRET=64-char-secret
AUTHELIA_STORAGE_ENCRYPTION_KEY=64-char-secret
# Database passwords
MYSQL_ROOT_PASSWORD=secure-password
POSTGRES_PASSWORD=secure-password
# API Keys (service-specific)
SONARR_API_KEY=your-api-key
RADARR_API_KEY=your-api-key
```
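The three Authelia secrets above must each be 64 characters long; one quick way to generate them (a sketch, assuming `openssl` is available):

```bash
# Hex-encoding 32 random bytes yields exactly 64 characters
AUTHELIA_JWT_SECRET=$(openssl rand -hex 32)
AUTHELIA_SESSION_SECRET=$(openssl rand -hex 32)
AUTHELIA_STORAGE_ENCRYPTION_KEY=$(openssl rand -hex 32)
echo "${#AUTHELIA_JWT_SECRET}"  # prints 64
```

Paste each generated value into `.env`; use a different secret for each of the three variables.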
## Network Setup
```bash
# Create all required networks (setup script does this)
docker network create traefik-network
docker network create homelab-network
docker network create media-network
docker network create dockerproxy-network
```
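Because these networks are created outside of Compose, each stack's compose file must declare them as `external` so Compose attaches to them instead of creating its own. A minimal sketch:

```yaml
# Declare pre-created networks as external in a stack's docker-compose.yml
networks:
  traefik-network:
    external: true
  homelab-network:
    external: true
```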
## Service URLs
After deployment, access services at:
```
Core Infrastructure:
https://traefik.${DOMAIN} - Traefik dashboard
https://auth.${DOMAIN} - Authelia login
http://sablier.${DOMAIN}:10000 - Sablier lazy loading (internal)
Infrastructure:
https://dockge.${DOMAIN} - Stack manager (PRIMARY)
https://portainer.${DOMAIN} - Docker UI (secondary)
http://pihole.${DOMAIN} - Pi-hole admin
https://dozzle.${DOMAIN} - Log viewer
https://glances.${DOMAIN} - System monitor
Dashboards:
https://homepage.${DOMAIN} - Homepage dashboard
https://homarr.${DOMAIN} - Homarr dashboard
Media:
https://plex.${DOMAIN} - Plex (no auth)
https://jellyfin.${DOMAIN} - Jellyfin (no auth)
https://sonarr.${DOMAIN} - TV automation
https://radarr.${DOMAIN} - Movie automation
https://prowlarr.${DOMAIN} - Indexer manager
https://torrents.${DOMAIN} - Torrent client
Productivity:
https://nextcloud.${DOMAIN} - Cloud Storage
https://gitea.${DOMAIN} - Gitea
Monitoring:
https://grafana.${DOMAIN} - Metrics dashboard
https://prometheus.${DOMAIN} - Metrics collection
https://status.${DOMAIN} - Uptime monitoring
Utilities:
https://backrest.${DOMAIN} - Backup management (Restic)
```
## Troubleshooting
### SSL Certificates
```bash
# Check wildcard certificate status
python3 -c "import json; d=json.load(open('/opt/stacks/core/traefik/acme.json')); print(f'Certificates: {len(d[\"letsencrypt\"][\"Certificates\"])}')"
# Verify certificate being served
echo | openssl s_client -connect auth.yourdomain.duckdns.org:443 -servername auth.yourdomain.duckdns.org 2>/dev/null | openssl x509 -noout -subject -issuer
# Check DNS TXT records (for DNS challenge)
dig +short TXT _acme-challenge.yourdomain.duckdns.org
# View Traefik certificate logs
docker exec traefik tail -50 /var/log/traefik/traefik.log | grep -E "acme|certificate"
# Reset certificates (if needed)
docker compose -f /opt/stacks/core/docker-compose.yml down
rm /opt/stacks/core/traefik/acme.json
touch /opt/stacks/core/traefik/acme.json
chmod 600 /opt/stacks/core/traefik/acme.json
sleep 60 # Wait for DNS to clear
docker compose -f /opt/stacks/core/docker-compose.yml up -d
```
**Important:** With DuckDNS, only Traefik should request certificates (wildcard cert covers all subdomains). Other services use `tls=true` without `certresolver`.
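In label form, that means a service's router gets `tls=true` but no `certresolver`, so it reuses Traefik's wildcard certificate. A sketch with a hypothetical service name `myapp`:

```yaml
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.myapp.rule=Host(`myapp.${DOMAIN}`)"
  - "traefik.http.routers.myapp.entrypoints=websecure"
  - "traefik.http.routers.myapp.tls=true"
  # deliberately no traefik.http.routers.myapp.tls.certresolver label
```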
## Troubleshooting Quick Fixes
### Service won't start
```bash
# Check logs
docker compose -f /opt/stacks/stack-name/docker-compose.yml logs service-name
# Validate configuration
docker compose -f /opt/stacks/stack-name/docker-compose.yml config
# Check port conflicts
sudo netstat -tlnp | grep :PORT
# Restart Docker
sudo systemctl restart docker
```
### Permission errors
```bash
# Check your IDs match .env
id -u # Should match PUID
id -g # Should match PGID
# Fix ownership
sudo chown -R $USER:$USER /opt/stacks/stack-name/
```
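If the values differ, one way to sync `.env` with your actual IDs (a sketch; GNU `sed` assumed, and the file is created if missing):

```bash
# Replace any existing PUID/PGID lines in .env with the current user's IDs
[ -f .env ] || touch .env
sed -i '/^PUID=/d; /^PGID=/d' .env
printf 'PUID=%s\nPGID=%s\n' "$(id -u)" "$(id -g)" >> .env
grep '^P[UG]ID=' .env  # prints the refreshed PUID/PGID lines
```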
### Network issues
```bash
# Check network exists
docker network inspect traefik-network
# Recreate network
docker network rm traefik-network
docker network create traefik-network
```
### Container keeps restarting
```bash
# Watch logs in real-time
docker compose -f /opt/stacks/stack-name/docker-compose.yml logs -f service-name
# Check resource usage
docker stats container-name
# Inspect container
docker inspect container-name | jq .State
```
### SSL certificate issues
```bash
# Check Traefik logs
docker logs traefik
# Check acme.json permissions
ls -la /opt/stacks/core/traefik/acme.json
# Force certificate renewal
# Remove acme.json and restart Traefik
```
## Testing GPU Support (NVIDIA)
```bash
# Test if nvidia-container-toolkit works
docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
# Should show your GPU info if working
```
## Backup Commands
```bash
# Backup all config directories
tar czf backup-config-$(date +%Y%m%d).tar.gz /opt/stacks/
# Backup specific volume
docker run --rm \
-v volume-name:/data \
-v /mnt/backups:/backup \
busybox tar czf /backup/volume-name-$(date +%Y%m%d).tar.gz /data
# Backup .env file securely
cp .env .env.backup
```
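The directory-backup pattern above can be wrapped in a small helper that also verifies the archive is readable before reporting success (a sketch; the demo uses a temp directory, but any config path works):

```bash
# Create a dated tar.gz of a directory and verify the archive lists cleanly
backup_dir() {
  src=$1
  dest="backup-$(basename "$src")-$(date +%Y%m%d).tar.gz"
  tar czf "$dest" -C "$(dirname "$src")" "$(basename "$src")" &&
    tar tzf "$dest" >/dev/null &&
    echo "OK: $dest"
}

demo=$(mktemp -d) && echo hello > "$demo/file.txt"
backup_dir "$demo"  # prints OK: backup-<name>-<date>.tar.gz
```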
## Health Checks
```bash
# Check all container health
docker ps --format "table {{.Names}}\t{{.Status}}"
# Check specific service health
docker inspect --format='{{json .State.Health}}' container-name | jq
# Test service connectivity
curl -k https://service.${DOMAIN}
```
## Resource Limits
Add to service definition if needed:
```yaml
deploy:
resources:
limits:
cpus: '2.0'
memory: 4G
reservations:
cpus: '0.5'
memory: 1G
```
## Common Patterns
### Add a new service to existing stack
1. Edit `/opt/stacks/stack-name/docker-compose.yml`
2. Add service definition following existing patterns
3. Use environment variables from `.env`
4. Connect to appropriate networks
5. Add Traefik labels for routing
6. Test: `docker compose config`
7. Deploy: `docker compose up -d`
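Put together, a new service entry might look like this sketch (image, names, and paths are placeholders; the pattern follows the existing stacks):

```yaml
services:
  myapp:
    image: example/myapp:1.0   # placeholder image
    container_name: myapp
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - ./myapp/config:/config
    networks:
      - traefik-network
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.myapp.rule=Host(`myapp.${DOMAIN}`)"
      - "traefik.http.routers.myapp.entrypoints=websecure"
      - "traefik.http.routers.myapp.tls=true"

networks:
  traefik-network:
    external: true
```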
### Create a new stack
1. Create directory: `mkdir /opt/stacks/new-stack`
2. Copy compose file: `cp docker-compose/template.yml /opt/stacks/new-stack/docker-compose.yml`
3. Copy env: `cp .env /opt/stacks/new-stack/`
4. Edit configuration
5. Deploy: `cd /opt/stacks/new-stack && docker compose up -d`
### Update service version
1. Edit compose file with new image tag
2. Pull new image: `docker compose pull service-name`
3. Recreate: `docker compose up -d service-name`
4. Check logs: `docker compose logs -f service-name`
### Remove a service
1. Stop service: `docker compose stop service-name`
2. Remove from compose file
3. Remove service: `docker compose rm service-name`
4. Optional: Remove volumes: `docker volume rm volume-name`
## AI Assistant Usage in VS Code
### Ask for help:
- "Add Jellyfin to my media stack"
- "Configure GPU for Plex"
- "Create monitoring dashboard setup"
- "Help me troubleshoot port conflicts"
- "Generate a compose file for Home Assistant"
### The AI will:
- Check existing services and avoid conflicts
- Follow naming conventions and patterns
- Configure Traefik labels automatically
- Apply Authelia middleware appropriately
- Suggest proper volume mounts
- Add services to Homepage dashboard
## Quick Deployment
### Minimal setup
```bash
# Clone and configure
git clone https://github.com/kelinfoxy/AI-Homelab.git
cd AI-Homelab
sudo ./scripts/setup-homelab.sh
cp .env.example .env
nano .env
# Deploy core only
mkdir -p /opt/stacks/core
cp docker-compose/core.yml /opt/stacks/core/docker-compose.yml
cp -r config-templates/traefik /opt/stacks/core/
cp -r config-templates/authelia /opt/stacks/core/
cp .env /opt/stacks/core/
cd /opt/stacks/core && docker compose up -d
```
### Full stack deployment
```bash
# After core is running, deploy all stacks
# Use Dockge UI at https://dockge.yourdomain.duckdns.org
# Or deploy manually:
docker compose -f docker-compose/infrastructure.yml up -d
docker compose -f docker-compose/dashboards.yml up -d
docker compose -f docker-compose/media.yml up -d
# etc.
```
## Maintenance Schedule
### Daily (automated)
- Watchtower checks for updates at 4 AM
### Weekly
- Review logs for each stack
- Check disk space: `docker system df`
### Monthly
- Update pinned versions in compose files
- Backup volumes and configs
- Review security updates
### Quarterly
- Full system audit
- Documentation review
- Performance optimization
## Emergency Commands
```bash
# Stop all containers
docker stop $(docker ps -q)
# Remove all containers
docker rm $(docker ps -aq)
# Remove all images
docker rmi $(docker images -q)
# Reset Docker (nuclear option)
sudo systemctl stop docker
sudo rm -rf /var/lib/docker
sudo systemctl start docker
```

# EZ-Homelab Wiki
This directory contains the **complete wiki documentation** for the EZ-Homelab project, serving as the **single source of truth** for all project information.
## 📖 Wiki Structure
### Core Documentation
- **`Home.md`** - Main wiki page with overview and navigation
- **`_Sidebar.md`** - Wiki navigation sidebar
- **`_Footer.md`** - Footer with quick links and project info
### Getting Started
- **`Getting-Started-Guide.md`** - Complete setup instructions
- **`Environment-Configuration.md`** - Required settings and tokens
- **`Automated-Setup.md`** - One-click deployment process
- **`Manual-Setup.md`** - Step-by-step manual installation
- **`Post-Setup-Guide.md`** - Post-deployment configuration
### Architecture & Design
- **`System-Architecture.md`** - High-level component overview
- **`Docker-Guidelines.md`** - Service management patterns
- **`Ports-in-Use.md`** - Complete port mapping reference
- **`SSL-Certificates.md`** - HTTPS and certificate management
### Services & Documentation
- **`Services-Overview.md`** - All 70+ services catalog
- **`Service-Documentation.md`** - Individual service guides index
- **`service-docs/`** - Individual service documentation files
- **`Core-Infrastructure.md`** - Essential services guide
- **`Infrastructure-Services.md`** - Management tools guide
### Operations & Management
- **`Quick-Reference.md`** - Command cheat sheet
- **`Backup-Strategy.md`** - Restic + Backrest comprehensive guide
- **`Proxying-External-Hosts.md`** - Connect non-Docker services
- **`Resource-Limits-Template.md`** - Performance optimization
- **`troubleshooting/`** - Issue resolution guides
### AI & Automation
- **`AI-Management-Guide.md`** - Using AI for homelab management
- **`Copilot-Instructions.md`** - AI assistant configuration
- **`AI-VS-Code-Setup.md`** - Development environment setup
- **`AI-Management-Prompts.md`** - Sample AI interactions
### Additional Resources
- **`How-It-Works.md`** - System architecture explanation
- **`Authelia-Customization.md`** - SSO configuration options
- **`On-Demand-Remote-Services.md`** - Lazy loading configuration
- **`action-reports/`** - Deployment logs and reports
## 🎯 Purpose
This wiki serves as the **authoritative source of truth** for the EZ-Homelab project, containing:
- **Complete Documentation** - All setup guides, configuration options, and troubleshooting
- **Service Catalog** - Detailed information for all 70+ available services
- **Architecture Guides** - System design, network configuration, and security models
- **AI Integration** - Copilot instructions and AI management capabilities
- **Operational Guides** - Backup strategies, monitoring, and maintenance
- **Reference Materials** - Port mappings, resource limits, and quick references
## 📋 Wiki Standards
### Naming Convention
- Use `Title-Case-With-Dashes.md` for file names
- Match wiki link format: `[[Wiki Links]]`
- Descriptive, searchable titles
### Content Organization
- **Headers**: Use `# ## ###` hierarchy
- **Links**: Use `[[Wiki Links]]` for internal references
- **Code**: Use backticks for commands and file paths
- **Lists**: Use bullet points for features/options
### Maintenance
- **Single Source of Truth**: All information kept current
- **Comprehensive**: No missing critical information
- **Accurate**: Verified configurations and commands
- **Accessible**: Clear language, logical organization
## 🔄 Synchronization
This wiki is automatically synchronized with the main documentation in `../docs/` and should be updated whenever:
- New services are added
- Configuration changes are made
- Documentation is updated
- New features are implemented
## 📖 Usage
### For Users
- Start with `Home.md` for overview
- Use `_Sidebar.md` for navigation
- Search for specific topics or services
- Reference individual service documentation
### For Contributors
- Update wiki when modifying documentation
- Add new pages for new features
- Maintain link integrity
- Keep information current
### For AI Management
- Copilot uses this wiki as reference
- Contains complete system knowledge
- Provides context for AI assistance
- Enables intelligent homelab management
## 🤝 Contributing
When contributing to the wiki:
1. **Update Content**: Modify relevant pages with new information
2. **Check Links**: Ensure all internal links work
3. **Update Navigation**: Add new pages to `_Sidebar.md` if needed
4. **Verify Accuracy**: Test commands and configurations
5. **Maintain Standards**: Follow naming and formatting conventions
## 📊 Wiki Statistics
- **Total Pages**: 25+ main pages
- **Service Docs**: 70+ individual service guides
- **Categories**: 10 service categories
- **Topics Covered**: Setup, configuration, troubleshooting, architecture
- **Last Updated**: January 21, 2026
---
*This wiki represents the complete knowledge base for the EZ-Homelab project and serves as the primary reference for all users and contributors.*
### 📦 Services & Stacks
#### Core Infrastructure (Deploy First)
Essential services that everything else depends on:
- **[DuckDNS](service-docs/duckdns.md)** - Dynamic DNS updates
- **[Traefik](service-docs/traefik.md)** - Reverse proxy & SSL termination
- **[Authelia](service-docs/authelia.md)** - Single Sign-On authentication
- **[Gluetun](service-docs/gluetun.md)** - VPN client for secure downloads
- **[Sablier](service-docs/sablier.md)** - Lazy loading service for on-demand containers
#### Management & Monitoring
- **[Dockge](service-docs/dockge.md)** - Primary stack management UI
- **[Homepage](service-docs/homepage.md)** - Service dashboard (AI-configurable)
- **[Homarr](service-docs/homarr.md)** - Alternative modern dashboard
- **[Dozzle](service-docs/dozzle.md)** - Real-time log viewer
- **[Glances](service-docs/glances.md)** - System monitoring
- **[Pi-hole](service-docs/pihole.md)** - DNS & ad blocking
#### Media Services
- **[Jellyfin](service-docs/jellyfin.md)** - Open-source media streaming
- **[Plex](service-docs/plex.md)** - Popular media server (alternative)
- **[qBittorrent](service-docs/qbittorrent.md)** - Torrent client (VPN-routed)
- **[Calibre-Web](service-docs/calibre-web.md)** - Ebook reader & server
#### Media Management (Arr Stack)
- **[Sonarr](service-docs/sonarr.md)** - TV show automation
- **[Radarr](service-docs/radarr.md)** - Movie automation
- **[Prowlarr](service-docs/prowlarr.md)** - Indexer management
- **[Readarr](service-docs/readarr.md)** - Ebook/audiobook automation
- **[Lidarr](service-docs/lidarr.md)** - Music library management
- **[Bazarr](service-docs/bazarr.md)** - Subtitle automation
- **[Jellyseerr](service-docs/jellyseerr.md)** - Media request interface
#### Home Automation
- **[Home Assistant](service-docs/home-assistant.md)** - Smart home platform
- **[Node-RED](service-docs/node-red.md)** - Flow-based programming
- **[Zigbee2MQTT](service-docs/zigbee2mqtt.md)** - Zigbee device integration
- **[ESPHome](service-docs/esphome.md)** - ESP device firmware
- **[TasmoAdmin](service-docs/tasmoadmin.md)** - Tasmota device management
- **[MotionEye](service-docs/motioneye.md)** - Video surveillance
#### Productivity & Collaboration
- **[Nextcloud](service-docs/nextcloud.md)** - Self-hosted cloud storage
- **[Gitea](service-docs/gitea.md)** - Git service (GitHub alternative)
- **[BookStack](service-docs/bookstack.md)** - Documentation/wiki platform
- **[WordPress](service-docs/wordpress.md)** - Blog/CMS platform
- **[MediaWiki](service-docs/mediawiki.md)** - Wiki platform
- **[DokuWiki](service-docs/dokuwiki.md)** - Simple wiki
- **[Excalidraw](service-docs/excalidraw.md)** - Collaborative drawing
#### Development Tools
- **[Code Server](service-docs/code-server.md)** - VS Code in the browser
- **[GitLab](service-docs/gitlab.md)** - Complete DevOps platform
- **[Jupyter](service-docs/jupyter.md)** - Interactive computing
- **[pgAdmin](service-docs/pgadmin.md)** - PostgreSQL administration
#### Monitoring & Observability
- **[Grafana](service-docs/grafana.md)** - Metrics visualization
- **[Prometheus](service-docs/prometheus.md)** - Metrics collection
- **[Uptime Kuma](service-docs/uptime-kuma.md)** - Uptime monitoring
- **[Loki](service-docs/loki.md)** - Log aggregation
- **[Promtail](service-docs/promtail.md)** - Log shipping
- **[Node Exporter](service-docs/node-exporter.md)** - System metrics
- **[cAdvisor](service-docs/cadvisor.md)** - Container metrics
#### Utilities & Tools
- **[Backrest](service-docs/backrest.md)** - Backup management (Restic-based, default)
- **[Duplicati](service-docs/duplicati.md)** - Alternative backup solution
- **[FreshRSS](service-docs/freshrss.md)** - RSS feed reader
- **[Wallabag](service-docs/wallabag.md)** - Read-it-later service
- **[Watchtower](service-docs/watchtower.md)** - Automatic updates
- **[Vaultwarden](service-docs/vaultwarden.md)** - Password manager
#### Alternative Services
Services that provide alternatives to the defaults:
- **[Portainer](service-docs/portainer.md)** - Alternative container management
- **[Authentik](service-docs/authentik.md)** - Alternative SSO with web UI
### 🛠️ Development & Operations
#### Docker & Container Management
- **[Docker Guidelines](docker-guidelines.md)** - Complete service management guide
- **[Service Creation](docker-guidelines.md#service-creation-guidelines)** - How to add new services
- **[Service Modification](docker-guidelines.md#service-modification-guidelines)** - Updating existing services
- **[Resource Limits](resource-limits-template.md)** - CPU/memory management
- **[Troubleshooting](docker-guidelines.md#troubleshooting)** - Common issues & fixes
#### External Service Integration
- **[Proxying External Hosts](proxying-external-hosts.md)** - Route non-Docker services through Traefik
- **[External Host Examples](proxying-external-hosts.md#common-external-services-to-proxy)** - Raspberry Pi, NAS, etc.
#### AI & Automation
- **[Copilot Instructions](.github/copilot-instructions.md)** - AI agent guidelines for this codebase
- **[AI Management Capabilities](.github/copilot-instructions.md#ai-management-capabilities)** - What the AI can help with
### 📋 Quick References
#### Commands & Operations
- **[Quick Reference](quick-reference.md)** - Essential commands and workflows
- **[Stack Management](quick-reference.md#service-management)** - Start/stop/restart services
- **[Deployment Scripts](quick-reference.md#deployment-scripts)** - Setup and deployment automation
#### Troubleshooting
- **[Common Issues](quick-reference.md#troubleshooting)** - SSL, networking, permissions
- **[Service Won't Start](quick-reference.md#service-wont-start)** - Debugging steps
- **[Traefik Routing](quick-reference.md#traefik-not-routing)** - Route configuration issues
- **[VPN Problems](quick-reference.md#vpn-not-working-gluetun)** - Gluetun troubleshooting
### 📖 Advanced Topics
#### SSL & Certificates
- **[Wildcard SSL Setup](getting-started.md#notes-about-ssl-certificates-from-letsencrypt-with-duckdns)** - How SSL certificates work
- **[Certificate Troubleshooting](getting-started.md#certificate-troubleshooting)** - SSL issues and fixes
- **[DNS Challenge Process](getting-started.md#dns-challenge-process)** - How domain validation works
#### Security & Access Control
- **[Authelia Configuration](service-docs/authelia.md)** - SSO setup and customization
- **[Bypass Rules](docker-guidelines.md#when-to-use-authelia-sso)** - When to skip authentication
- **[2FA Setup](getting-started.md#set-up-2fa-with-authelia)** - Two-factor authentication
#### Backup & Recovery
- **[Backup Strategies](service-docs/duplicati.md)** - Data protection approaches
- **[Service Backups](service-docs/backrest.md)** - Database backup solutions
- **[Configuration Backup](quick-reference.md#backup-commands)** - Config file preservation
### 🔧 Development & Contributing
#### Repository Structure
- **[File Organization](.github/copilot-instructions.md#file-structure-standards)** - How files are organized
- **[Service Documentation](service-docs/)** - Individual service guides
- **[Configuration Templates](config-templates/)** - Reusable configurations
- **[Scripts](scripts/)** - Automation and deployment tools
#### Development Workflow
- **[Adding Services](docker-guidelines.md#service-creation-guidelines)** - New service integration
- **[Testing Changes](.github/copilot-instructions.md#testing-changes)** - Validation procedures
- **[Resource Limits](resource-limits-template.md)** - Performance management
### 📚 Additional Resources
- **[GitHub Repository](https://github.com/kelinfoxy/EZ-Homelab)** - Source code and issues
- **[Docker Hub](https://hub.docker.com)** - Container images
- **[Traefik Documentation](https://doc.traefik.io/traefik/)** - Official reverse proxy docs
- **[Authelia Documentation](https://www.authelia.com/)** - SSO documentation
- **[DuckDNS](https://www.duckdns.org/)** - Dynamic DNS service
---
## 🎯 Quick Navigation
**New to EZ-Homelab?** → [Getting Started](getting-started.md)
**Need to add a service?** → [Service Creation Guide](docker-guidelines.md#service-creation-guidelines)
**Having issues?** → [Troubleshooting](quick-reference.md#troubleshooting)
**Want to contribute?** → [Development Workflow](docker-guidelines.md#service-creation-guidelines)
---
*This documentation is maintained by AI and community contributors. Last updated: January 20, 2026*

# AI-Homelab Resource Limits Template
# Modern deploy.resources configuration for Docker Compose
# Based on researched typical usage patterns for homelab services
# These are conservative defaults - monitor and adjust as needed
# ===========================================
# SERVICE TYPE TEMPLATES
# ===========================================
# LIGHTWEIGHT SERVICES (Reverse proxy, auth, DNS, monitoring)
lightweight_service:
deploy:
resources:
limits:
cpus: '0.25' # 25% of 1 CPU core
memory: 128M # 128MB RAM
pids: 256 # Max processes
reservations:
cpus: '0.10' # Reserve 10% of 1 CPU
memory: 64M # Reserve 64MB RAM
# STANDARD WEB SERVICES (Dashboards, simple web apps)
web_service:
deploy:
resources:
limits:
cpus: '0.50' # 50% of 1 CPU core
memory: 256M # 256MB RAM
pids: 512 # Max processes
reservations:
cpus: '0.25' # Reserve 25% of 1 CPU
memory: 128M # Reserve 128MB RAM
# DATABASE SERVICES (PostgreSQL, MariaDB, Redis)
database_service:
deploy:
resources:
limits:
cpus: '1.0' # 1 CPU core
memory: 1G # 1GB RAM (for caching)
pids: 1024 # Max processes
reservations:
cpus: '0.50' # Reserve 0.5 CPU
memory: 512M # Reserve 512MB RAM
# MEDIA SERVERS (Jellyfin, Plex - without GPU)
media_server:
deploy:
resources:
limits:
cpus: '2.0' # 2 CPU cores (for transcoding)
memory: 2G # 2GB RAM
pids: 2048 # Max processes
reservations:
cpus: '1.0' # Reserve 1 CPU
memory: 1G # Reserve 1GB RAM
# DOWNLOADERS (qBittorrent, Transmission)
downloader_service:
deploy:
resources:
limits:
cpus: '1.0' # 1 CPU core
memory: 512M # 512MB RAM
pids: 1024 # Max processes
reservations:
cpus: '0.50' # Reserve 0.5 CPU
memory: 256M # Reserve 256MB RAM
# HEAVY APPLICATIONS (Nextcloud, Gitea with users)
heavy_app:
deploy:
resources:
limits:
cpus: '1.5' # 1.5 CPU cores
memory: 1G # 1GB RAM
pids: 2048 # Max processes
reservations:
cpus: '0.75' # Reserve 0.75 CPU
memory: 512M # Reserve 512MB RAM
# MONITORING STACK (Prometheus, Grafana, Loki)
monitoring_service:
deploy:
resources:
limits:
cpus: '0.75' # 0.75 CPU cores
memory: 512M # 512MB RAM
pids: 1024 # Max processes
reservations:
cpus: '0.25' # Reserve 0.25 CPU
memory: 256M # Reserve 256MB RAM
# ===========================================
# SPECIFIC SERVICE RECOMMENDATIONS
# ===========================================
# Core Infrastructure Stack
traefik: # Reverse proxy - handles SSL/TLS/crypto
template: lightweight_service
notes: "CPU intensive for SSL handshakes, low memory usage"
authelia: # Authentication service
template: lightweight_service
notes: "Very low resource usage, mostly memory for sessions"
duckdns: # DNS updater
template: lightweight_service
notes: "Minimal resources, mostly network I/O"
# Infrastructure Stack
pihole: # DNS ad blocker
template: lightweight_service
notes: "Memory intensive for blocklists, low CPU"
dockge: # Docker management UI
template: web_service
notes: "Light web interface, occasional CPU spikes"
glances: # System monitoring
template: web_service
notes: "Low resource monitoring tool"
# Dashboard Stack
homepage: # Status dashboard
template: web_service
notes: "Static content, very light"
homarr: # Dashboard with widgets
template: web_service
notes: "JavaScript heavy but still light"
# Media Stack
jellyfin: # Media server
template: media_server
notes: "CPU intensive for transcoding, high memory for caching"
calibre_web: # Ebook manager
template: web_service
notes: "Light web app with database"
# VPN Stack
qbittorrent: # Torrent client
template: downloader_service
notes: "Network I/O heavy, moderate CPU for hashing"
# Home Assistant Stack
home_assistant: # Smart home hub
template: heavy_app
notes: "Python app with many integrations, moderate resources"
esphome: # IoT firmware
template: web_service
notes: "Web interface for device management"
nodered: # Automation workflows
template: web_service
notes: "Node.js app, moderate memory usage"
# Productivity Stack
nextcloud: # File sync/sharing
template: heavy_app
notes: "PHP app with database, resource intensive with users"
gitea: # Git server
template: web_service
notes: "Go app, lightweight but scales with repos"
# Monitoring Stack
prometheus: # Metrics collection
template: monitoring_service
notes: "Time-series database, memory intensive for retention"
grafana: # Metrics visualization
template: web_service
notes: "Web dashboard, moderate resources"
loki: # Log aggregation
template: monitoring_service
notes: "Log storage, memory for indexing"
uptime_kuma: # Uptime monitoring
template: web_service
notes: "Monitoring checks, light resource usage"
# Development Stack
code_server: # VS Code in browser
template: heavy_app
notes: "Full IDE, resource intensive for large projects"
# Utility Stack
# Most utilities are lightweight web services
speedtest_tracker:
template: web_service
notes: "Speed test monitoring, occasional CPU usage"
# ===========================================
# RESOURCE MONITORING COMMANDS
# ===========================================
# Monitor current usage
docker stats
# Monitor specific service
docker stats service_name
# Check container resource usage over time
docker stats --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}\t{{.BlockIO}}"
# Check system resources
docker system df
# View running processes in container
docker exec service_name ps aux
# Memory usage details
docker exec service_name head -10 /proc/meminfo
# ===========================================
# ADJUSTMENT GUIDELINES
# ===========================================
# If container is killed by OOM:
# 1. Increase memory limit by 50-100%
# 2. Check for memory leaks in application
# 3. Consider adding swap space to host
# If container is slow/unresponsive:
# 1. Increase CPU limits
# 2. Check for CPU bottlenecks
# 3. Monitor disk I/O if database-related
# General rule of thumb:
# - Start with conservative limits
# - Monitor actual usage with 'docker stats'
# - Adjust based on real-world usage patterns
# - Leave 20-30% headroom for spikes
# SSL Certificates with Let's Encrypt and DuckDNS
Your homelab uses **Let's Encrypt** to automatically provide free SSL certificates for all your services. This ensures secure HTTPS connections without manual certificate management.
## How SSL Certificates Work in Your Homelab
### The Certificate Flow
1. **Domain Registration**: DuckDNS provides your dynamic domain (e.g., `yourname.duckdns.org`)
2. **Certificate Request**: Traefik requests a wildcard certificate (`*.yourname.duckdns.org`)
3. **Domain Validation**: Let's Encrypt validates you own the domain via DNS challenge
4. **Certificate Issuance**: Free SSL certificate is issued and stored
5. **Automatic Renewal**: Certificates renew automatically before expiration
### DuckDNS + Let's Encrypt Integration
**DuckDNS** handles dynamic DNS updates, while **Let's Encrypt** provides certificates:
- **DuckDNS**: Updates your public IP → domain mapping every 5 minutes
- **Let's Encrypt**: Issues trusted SSL certificates via ACME protocol
- **DNS Challenge**: Proves domain ownership by setting TXT records
### Wildcard Certificates Explained
Your setup uses a **wildcard certificate** (`*.yourdomain.duckdns.org`) that covers:
- `dockge.yourdomain.duckdns.org`
- `plex.yourdomain.duckdns.org`
- `jellyfin.yourdomain.duckdns.org`
- Any other subdomain automatically
**Why wildcard?** One certificate covers all services - no need for individual certificates per service.
### Certificate Storage & Management
- **Location**: `/opt/stacks/core/traefik/acme.json`
- **Permissions**: Must be `600` (read/write for owner only)
- **Backup**: Always backup this file - contains your certificates
- **Renewal**: Automatic, 30 days before expiration
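The permission rule can be enforced with a tiny guard before starting Traefik (a sketch; GNU `stat` syntax assumed — the demo runs against a temp file, but the real target is `acme.json`):

```bash
# Ensure a file is mode 600, fixing it if necessary
ensure_600() {
  [ "$(stat -c %a "$1")" = "600" ] || chmod 600 "$1"
}

f=$(mktemp) && chmod 644 "$f"
ensure_600 "$f"
stat -c %a "$f"  # prints 600
```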
## Testing vs Production Certificates
### Staging Server (For Testing)
```yaml
# In traefik.yml, add this line for testing:
caServer: https://acme-staging-v02.api.letsencrypt.org/directory
```
**Staging Benefits:**
- Unlimited certificates (no rate limits)
- Fast issuance for testing
- **Not trusted by browsers** (shows "Not Secure")
**When to use staging:**
- Setting up new environments
- Testing configurations
- Learning/development
### Production Server (For Live Use)
```yaml
# Remove or comment out caServer line for production
# certificatesResolvers:
# letsencrypt:
# acme:
# # No caServer = production
```
**Production Limits:**
> This is why you want to use staging certificates for testing purposes!
> Always use staging certificates if you are running the setup & deploy scripts repeatedly.
- **50 certificates per domain per week**
- **5 duplicate certificates per week**
- **Trusted by all browsers**
## Certificate Troubleshooting
### Check Certificate Status
```bash
# Count certificates in storage
python3 -c "import json; d=json.load(open('/opt/stacks/core/traefik/acme.json')); print(f'Certificates: {len(d[\"letsencrypt\"][\"Certificates\"])}')"
```
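The one-liner above throws an exception on a fresh (empty) `acme.json`; a slightly more defensive variant (a sketch wrapping `python3`) prints `0` instead:

```bash
# Count issued certificates, treating a missing or empty acme.json as zero
count_certs() {
  python3 - "$1" <<'PY'
import json, sys
try:
    with open(sys.argv[1]) as f:
        data = json.load(f)
    print(len((data.get("letsencrypt") or {}).get("Certificates") or []))
except (OSError, ValueError):
    print(0)
PY
}

count_certs /opt/stacks/core/traefik/acme.json
```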
### Common Issues & Solutions
**"Certificate not trusted" or "Not Secure" warnings:**
- **Staging certificates**: Expected - use production for live sites
- **DNS propagation**: Wait 5-10 minutes after setup
- **Browser cache**: Clear browser cache and try incognito mode
**Certificate request fails:**
- Check Traefik logs: `docker logs traefik | grep -i certificate`
- Verify DuckDNS token is correct in `.env`
- Ensure ports 80/443 are open and forwarded
- Wait 1+ hours between certificate requests
**Rate limit exceeded:**
- Switch to staging server for testing
- Wait 1 week for production limits to reset
- Check status at: https://letsencrypt.org/docs/rate-limits/
### DNS Challenge Process
When requesting certificates, Traefik:
1. Asks DuckDNS to set TXT record: `_acme-challenge.yourdomain.duckdns.org`
2. Let's Encrypt checks the TXT record to verify ownership
3. If valid, certificate is issued
4. TXT record is cleaned up automatically
**Note:** DuckDNS allows only ONE TXT record at a time. Multiple Traefik instances will conflict.
### Certificate Validation Commands
```bash
# Test certificate validity
echo | openssl s_client -connect yourdomain.duckdns.org:443 -servername dockge.yourdomain.duckdns.org 2>/dev/null | openssl x509 -noout -subject -issuer -dates
# Check if certificate covers wildcards
echo | openssl s_client -connect yourdomain.duckdns.org:443 -servername any-subdomain.yourdomain.duckdns.org 2>/dev/null | openssl x509 -noout -text | grep "Subject Alternative Name"
```
## Best Practices
### For Production
- Use production Let's Encrypt server
- Backup `acme.json` regularly
- Monitor certificate expiration (Traefik dashboard)
- Keep DuckDNS token secure
### For Development/Testing
- Use staging server to avoid rate limits
- Test with different subdomains
- Reset environments safely (preserve `acme.json` if possible)
### Security Notes
- Certificates and their private keys are stored unencrypted in `acme.json`, so keep its permissions at `600`
- Private keys never leave your server
- HTTPS provides encryption in transit
- Consider additional security headers in Traefik
## Port Forwarding Requirements
**Critical**: Reaching your services over HTTPS requires ports 80 and 443 to be forwarded from your router to your homelab server.
### Router Configuration
1. Log into your router's admin interface (usually 192.168.1.1)
2. Find the "Port Forwarding" or "NAT" section
3. Create forwarding rules:
- **External Port**: 80 → **Internal IP**: your-server-ip **Internal Port**: 80
- **External Port**: 443 → **Internal IP**: your-server-ip **Internal Port**: 443
4. Protocol: TCP for both
5. Save changes
### Why This Is Required
- **Port 80**: Handles plain HTTP requests so Traefik can redirect them to HTTPS (the certificates themselves are issued via the DNS-01 challenge, which needs no inbound ports)
- **Port 443**: Used for all HTTPS traffic to your services
- **Wildcard Certificates**: Automatic SSL for all `*.yourdomain.duckdns.org` subdomains, validated through DuckDNS DNS records
### Testing Port Forwarding
```bash
# Test from external network (not your local network)
curl -I http://yourdomain.duckdns.org
# Should return HTTP 200 or redirect to HTTPS
```
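For a quicker yes/no per port than `curl` gives, `nc` works too. A small helper sketch (`yourdomain.duckdns.org` is still a placeholder, and this must also run from outside your LAN):

```shell
# check_port: report whether TCP <host>:<port> accepts connections within 5s
check_port() {
  host=$1; port=$2
  if nc -z -w 5 "$host" "$port" 2>/dev/null; then
    echo "port $port reachable"
  else
    echo "port $port BLOCKED"
  fi
}

check_port yourdomain.duckdns.org 80
check_port yourdomain.duckdns.org 443
```

If either port reports BLOCKED from an external network, fix the router forwarding rules before troubleshooting certificates.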

# Service Documentation
## Overview
This section contains detailed documentation for all 70+ services available in the EZ-Homelab. Each service has its own documentation page with setup instructions, configuration options, and troubleshooting guides.
## Service Categories
### Core Infrastructure (Essential - Deploy First)
- [[DuckDNS]] - Dynamic DNS with wildcard SSL
- [[Traefik]] - Reverse proxy and SSL termination
- [[Authelia]] - Single Sign-On authentication
- [[Gluetun]] - VPN client for secure downloads
- [[Sablier]] - Lazy loading service
### Infrastructure & Management
- [[Dockge]] - Primary stack management UI
- [[Portainer]] - Advanced container management
- [[Authentik]] - Alternative SSO with web UI
- [[Pi-hole]] - DNS and ad blocking
- [[Dozzle]] - Real-time log viewer
- [[Glances]] - System monitoring
- [[Watchtower]] - Automatic updates
- [[Docker Proxy]] - Secure Docker API access
### Dashboards & Interfaces
- [[Homepage]] - Service dashboard (AI-configurable)
- [[Homarr]] - Modern dashboard alternative
### Media Services
- [[Plex]] - Popular media server
- [[Jellyfin]] - Open-source media streaming
- [[Calibre-Web]] - Ebook reader and server
### Media Management (*Arr Stack)
- [[Sonarr]] - TV show automation
- [[Radarr]] - Movie automation
- [[Prowlarr]] - Indexer management
- [[Readarr]] - Ebook/audiobook automation
- [[Lidarr]] - Music management
- [[Bazarr]] - Subtitle management
- [[Mylar3]] - Comic book management
- [[Lazy Librarian]] - Book automation
### Download Services
- [[qBittorrent]] - Torrent client (VPN-routed)
- [[FlareSolverr]] - Cloudflare bypass for indexers
### Home Automation
- [[Home Assistant]] - Smart home platform
- [[ESPHome]] - ESP device firmware
- [[TasmoAdmin]] - Tasmota device management
- [[Node-RED]] - Automation workflows
- [[Mosquitto]] - MQTT broker
- [[Zigbee2MQTT]] - Zigbee bridge
- [[MotionEye]] - Video surveillance
### Productivity & Collaboration
- [[Nextcloud]] - File sync and collaboration
- [[Gitea]] - Git service
- [[BookStack]] - Documentation platform
- [[DokuWiki]] - Wiki platform
- [[MediaWiki]] - Advanced wiki
- [[WordPress]] - Blog platform
- [[Form.io]] - Form builder
### Development Tools
- [[GitLab]] - Complete DevOps platform
- [[PostgreSQL]] - SQL database
- [[Redis]] - In-memory data store
- [[pgAdmin]] - PostgreSQL management
- [[Jupyter Lab]] - Interactive notebooks
- [[Code Server]] - VS Code in browser
### Monitoring & Observability
- [[Prometheus]] - Metrics collection
- [[Grafana]] - Visualization and dashboards
- [[Loki]] - Log aggregation
- [[Promtail]] - Log shipping
- [[Node Exporter]] - System metrics
- [[cAdvisor]] - Container metrics
- [[Alertmanager]] - Alert management
- [[Uptime Kuma]] - Uptime monitoring
### Utilities & Tools
- [[Vaultwarden]] - Password manager
- [[Duplicati]] - Encrypted backups
- [[Backrest]] - Restic backup UI
- [[FreshRSS]] - RSS feed reader
- [[Wallabag]] - Read-it-later service
- [[Unmanic]] - Media optimization
- [[Tdarr]] - Video transcoding
- [[Jellyseerr]] - Media requests
## Documentation Structure
Each service documentation page includes:
### 📋 Service Information
- **Purpose**: What the service does
- **URL**: Access URL after deployment
- **Authentication**: SSO protection status
- **Dependencies**: Required services or configurations
### ⚙️ Configuration
- **Environment Variables**: Required settings
- **Volumes**: Data persistence configuration
- **Networks**: Docker network connections
- **Ports**: Internal port mappings
### 🚀 Deployment
- **Stack Location**: Where to deploy
- **Compose File**: Docker Compose configuration
- **Resource Limits**: Recommended CPU/memory limits
- **Health Checks**: Service health verification
### 🔧 Management
- **Updates**: How to update the service
- **Backups**: Data backup procedures
- **Monitoring**: Health check commands
- **Logs**: Log location and viewing
### 🐛 Troubleshooting
- **Common Issues**: Frequent problems and solutions
- **Error Messages**: Specific error resolution
- **Performance**: Optimization tips
- **Recovery**: Service restoration procedures
## Quick Reference
### By Port Number
- **3000**: Grafana, Homarr, Gitea
- **3001**: Uptime Kuma
- **5050**: pgAdmin
- **5055**: Jellyseerr
- **8080**: Code Server, Nextcloud, Traefik dashboard
- **8081**: qBittorrent, MotionEye
- **8083**: Calibre-Web
- **8096**: Jellyfin
- **8123**: Home Assistant, Zigbee2MQTT
- **8200**: Duplicati
- **8888**: Jupyter Lab
- **8989**: Sonarr
- **9090**: Prometheus
- **9696**: Prowlarr
- **9700**: FlareSolverr
### By Category
- **Media Streaming**: Plex (32400), Jellyfin (8096)
- **Automation**: Sonarr (8989), Radarr (7878), Prowlarr (9696)
- **Databases**: PostgreSQL (5432), MariaDB (3306), Redis (6379)
- **Development**: GitLab (80/443), Gitea (3000), Code Server (8080)
- **Monitoring**: Grafana (3000), Prometheus (9090), Uptime Kuma (3001)
## Deployment Guidelines
### Service Dependencies
Some services require others to be running first:
**Required First:**
- Core Infrastructure (DuckDNS, Traefik, Authelia)
**Common Dependencies:**
- **Databases**: PostgreSQL, MariaDB, Redis for data persistence
- **VPN**: Gluetun for download services
- **Reverse Proxy**: Traefik for all web services
- **Authentication**: Authelia for SSO protection
### Resource Requirements
- **Lightweight** (< 256MB RAM): DNS, monitoring, authentication
- **Standard** (256MB - 1GB RAM): Web apps, dashboards, simple services
- **Heavy** (> 1GB RAM): Media servers, databases, development tools
- **Specialized**: GPU-enabled services, high-I/O applications
### Network Security
- **SSO Protected**: Most services require Authelia authentication
- **Bypass Allowed**: Media services (Plex, Jellyfin) for app access
- **VPN Routed**: Download services for IP protection
- **Internal Only**: Databases and supporting services
## Finding Service Documentation
### By Service Name
Use the alphabetical list above or search for the specific service.
### By Function
- **Want to stream media?** → [[Plex]], [[Jellyfin]]
- **Need automation?** → [[Sonarr]], [[Radarr]], [[Prowlarr]]
- **File sharing?** → [[Nextcloud]], [[Gitea]]
- **Monitoring?** → [[Grafana]], [[Prometheus]], [[Uptime Kuma]]
- **Development?** → [[GitLab]], [[Code Server]], [[Jupyter Lab]]
### By Complexity
- **Beginner**: Homepage, Dozzle, Glances
- **Intermediate**: Nextcloud, Gitea, BookStack
- **Advanced**: GitLab, Home Assistant, Prometheus
Each service page provides complete setup instructions and is designed to work with the EZ-Homelab's file-based, AI-manageable architecture.
# Services Overview
This document provides a comprehensive overview of all 70+ pre-configured services available in the EZ-Homelab repository.
## Services Overview
| Stacks (10) | Services (70 + 6db) | SSO | Storage | Access URLs |
|-------|----------|-----|---------|-------------|
| **📦 core.yaml (3)** | **Deploy First** | | | |
| ├─ DuckDNS | Dynamic DNS updater | - | /opt/stacks/core/duckdns | No UI |
| ├─ Traefik | Reverse proxy + SSL | ✓ | /opt/stacks/core/traefik | traefik.${DOMAIN} |
| └─ Authelia | SSO authentication | - | /opt/stacks/core/authelia | auth.${DOMAIN} |
| **🔒 vpn.yaml (2)** | **VPN Services** | | | |
| ├─ Gluetun | VPN (Surfshark) | - | /opt/stacks/vpn/gluetun | No UI |
| └─ qBittorrent | Torrent (via VPN) | ✓ | /mnt/downloads | qbit.${DOMAIN} |
| **🔧 infrastructure.yaml** (12) | | | | |
| ├─ Dockge | Stack manager (PRIMARY) | ✓ | /opt/stacks/infrastructure | dockge.${DOMAIN} |
| ├─ Portainer | Container management | ✓ | /opt/stacks/infrastructure | portainer.${DOMAIN} |
| ├─ Authentik Server | SSO with web UI | ✓ | /opt/stacks/authentik | authentik.${DOMAIN} |
| │ ├─ authentik-worker | Background tasks | - | /opt/stacks/authentik | No UI |
| │ ├─ authentik-db | PostgreSQL | - | /opt/stacks/authentik | No UI |
| │ └─ authentik-redis | Cache/messaging | - | /opt/stacks/authentik | No UI |
| ├─ Pi-hole | DNS + Ad blocking | ✓ | /opt/stacks/infrastructure | pihole.${DOMAIN} |
| ├─ Watchtower | Auto container updates | - | /opt/stacks/infrastructure | No UI |
| ├─ Dozzle | Docker log viewer | ✓ | /opt/stacks/infrastructure | dozzle.${DOMAIN} |
| ├─ Glances | System monitoring | ✓ | /opt/stacks/infrastructure | glances.${DOMAIN} |
| └─ Docker Proxy | Secure socket access | - | /opt/stacks/infrastructure | No UI |
| **📊 dashboards.yaml** (2) | | | | |
| ├─ Homepage | App dashboard (AI cfg) | ✓ | /opt/stacks/dashboards | home.${DOMAIN} |
| └─ Homarr | Modern dashboard | ✓ | /opt/stacks/dashboards | homarr.${DOMAIN} |
| **🎬 media** (5) | | | | |
| ├─ Plex | Media server | ✗ | /mnt/media, /mnt/transcode | plex.${DOMAIN} |
| ├─ Jellyfin | Media server (OSS) | ✗ | /mnt/media, /mnt/transcode | jellyfin.${DOMAIN} |
| ├─ Sonarr | TV automation | ✓ | /opt/stacks/media, /mnt/media | sonarr.${DOMAIN} |
| ├─ Radarr | Movie automation | ✓ | /opt/stacks/media, /mnt/media | radarr.${DOMAIN} |
| └─ Prowlarr | Indexer manager | ✓ | /opt/stacks/media | prowlarr.${DOMAIN} |
| **📚 media-extended.yaml** (10) | | | | |
| ├─ Readarr | Ebooks/Audiobooks | ✓ | /opt/stacks/media-ext, /mnt/media | readarr.${DOMAIN} |
| ├─ Lidarr | Music manager | ✓ | /opt/stacks/media-ext, /mnt/media | lidarr.${DOMAIN} |
| ├─ Lazy Librarian | Book automation | ✓ | /opt/stacks/media-ext, /mnt/media | lazylibrarian.${DOMAIN} |
| ├─ Mylar3 | Comic manager | ✓ | /opt/stacks/media-ext, /mnt/media | mylar.${DOMAIN} |
| ├─ Calibre-Web | Ebook reader | ✓ | /opt/stacks/media-ext, /mnt/media | calibre.${DOMAIN} |
| ├─ Jellyseerr | Media requests | ✓ | /opt/stacks/media-ext | jellyseerr.${DOMAIN} |
| ├─ FlareSolverr | Cloudflare bypass | - | /opt/stacks/media-ext | No UI |
| ├─ Tdarr Server | Transcoding server | ✓ | /opt/stacks/media-ext, /mnt/transcode | tdarr.${DOMAIN} |
| ├─ Tdarr Node | Transcoding worker | - | /mnt/transcode-cache | No UI |
| └─ Unmanic | Library optimizer | ✓ | /opt/stacks/media-ext, /mnt/transcode | unmanic.${DOMAIN} |
| **🏠 homeassistant.yaml** (7) | | | | |
| ├─ Home Assistant | HA platform | ✗ | /opt/stacks/homeassistant | ha.${DOMAIN} |
| ├─ ESPHome | ESP firmware mgr | ✓ | /opt/stacks/homeassistant | esphome.${DOMAIN} |
| ├─ TasmoAdmin | Tasmota device mgr | ✓ | /opt/stacks/homeassistant | tasmoadmin.${DOMAIN} |
| ├─ Node-RED | Automation flows | ✓ | /opt/stacks/homeassistant | nodered.${DOMAIN} |
| ├─ Mosquitto | MQTT broker | - | /opt/stacks/homeassistant | Ports 1883, 9001 |
| ├─ Zigbee2MQTT | Zigbee bridge | ✓ | /opt/stacks/homeassistant | zigbee2mqtt.${DOMAIN} |
| └─ MotionEye | Video surveillance | ✓ | /opt/stacks/homeassistant, /mnt/surveillance | motioneye.${DOMAIN} |
| **💼 productivity.yaml** (8 + 6 DBs) | | | | |
| ├─ Nextcloud | File sync platform | ✓ | /opt/stacks/productivity, /mnt/nextcloud | nextcloud.${DOMAIN} |
| │ └─ nextcloud-db | MariaDB | - | /opt/stacks/productivity | No UI |
| ├─ Mealie | Recipe manager | ✗ | /opt/stacks/productivity | mealie.${DOMAIN} |
| ├─ WordPress | Blog platform | ✗ | /opt/stacks/productivity | blog.${DOMAIN} |
| │ └─ wordpress-db | MariaDB | - | /opt/stacks/productivity | No UI |
| ├─ Gitea | Git service | ✓ | /opt/stacks/productivity, /mnt/git | git.${DOMAIN} |
| │ └─ gitea-db | PostgreSQL | - | /opt/stacks/productivity | No UI |
| ├─ DokuWiki | File-based wiki | ✓ | /opt/stacks/productivity | wiki.${DOMAIN} |
| ├─ BookStack | Documentation | ✓ | /opt/stacks/productivity | docs.${DOMAIN} |
| │ └─ bookstack-db | MariaDB | - | /opt/stacks/productivity | No UI |
| ├─ MediaWiki | Wiki platform | ✓ | /opt/stacks/productivity | mediawiki.${DOMAIN} |
| │ └─ mediawiki-db | MariaDB | - | /opt/stacks/productivity | No UI |
| └─ Form.io | Form builder | ✓ | /opt/stacks/productivity | forms.${DOMAIN} |
|   └─ formio-mongo | MongoDB | - | /opt/stacks/productivity | No UI |
| **🛠️ utilities.yaml** (7) | | | | |
| ├─ Vaultwarden | Password manager | ✗ | /opt/stacks/utilities | bitwarden.${DOMAIN} |
| ├─ Backrest | Backup (restic) | ✓ | /opt/stacks/utilities, /mnt/backups | backrest.${DOMAIN} |
| ├─ Duplicati | Encrypted backups | ✓ | /opt/stacks/utilities, /mnt/backups | duplicati.${DOMAIN} |
| ├─ Code Server | VS Code in browser | ✓ | /opt/stacks/utilities | code.${DOMAIN} |
| ├─ Form.io | Form platform | ✓ | /opt/stacks/utilities | forms.${DOMAIN} |
| │ └─ formio-mongo | MongoDB | - | /opt/stacks/utilities | No UI |
| └─ Authelia-Redis | Session storage | - | /opt/stacks/utilities | No UI |
| **📈 monitoring.yaml** (8) | | | | |
| ├─ Prometheus | Metrics collection | ✓ | /opt/stacks/monitoring | prometheus.${DOMAIN} |
| ├─ Grafana | Visualization | ✓ | /opt/stacks/monitoring | grafana.${DOMAIN} |
| ├─ Loki | Log aggregation | - | /opt/stacks/monitoring | Via Grafana |
| ├─ Promtail | Log shipper | - | /opt/stacks/monitoring | No UI |
| ├─ Node Exporter | Host metrics | - | /opt/stacks/monitoring | No UI |
| ├─ cAdvisor | Container metrics | - | /opt/stacks/monitoring | Internal :8080 |
| └─ Uptime Kuma | Uptime monitoring | ✓ | /opt/stacks/monitoring | status.${DOMAIN} |
| **👨‍💻 development.yaml** (6) | | | | |
| ├─ GitLab CE | Git + CI/CD | ✓ | /opt/stacks/development, /mnt/git | gitlab.${DOMAIN} |
| ├─ PostgreSQL | SQL database | - | /opt/stacks/development | Port 5432 |
| ├─ Redis | In-memory store | - | /opt/stacks/development | Port 6379 |
| ├─ pgAdmin | PostgreSQL UI | ✓ | /opt/stacks/development | pgadmin.${DOMAIN} |
| ├─ Jupyter Lab | Notebooks | ✓ | /opt/stacks/development | jupyter.${DOMAIN} |
| └─ Code Server | VS Code | ✓ | /opt/stacks/development | code.${DOMAIN} |
**Legend:** ✓ = Protected by SSO | ✗ = Bypasses SSO | - = No web UI
## Quick Deployment Order
1. **Create Networks** (one-time setup)
```bash
docker network create traefik-network
docker network create homelab-network
docker network create dockerproxy-network
```
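`docker network create` errors out if a network already exists, so the commands above are not safe to repeat. An idempotent variant you can re-run freely:

```shell
# Create each required network only if it doesn't already exist
for net in traefik-network homelab-network dockerproxy-network; do
  docker network inspect "$net" >/dev/null 2>&1 \
    || docker network create "$net"
done
```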
2. **Deploy Core Stack** (required first)
```bash
cd /opt/stacks/core/
docker compose up -d
```
3. **Deploy Infrastructure**
```bash
cd /opt/stacks/infrastructure/
docker compose up -d
```
4. **Deploy Dashboards**
```bash
cd /opt/stacks/dashboards/
docker compose up -d
```
5. **Deploy Additional Stacks** (as needed)
- Media: `/opt/stacks/media/`
- Extended Media: `/opt/stacks/media-extended/`
- Home Automation: `/opt/stacks/homeassistant/`
- Productivity: `/opt/stacks/productivity/`
- Utilities: `/opt/stacks/utilities/`
- Monitoring: `/opt/stacks/monitoring/`
- Development: `/opt/stacks/development/`
## Toggling SSO (Authelia) On/Off
You can easily enable or disable SSO protection for any service by modifying its Traefik labels in the docker-compose.yml file.
### To Enable SSO on a Service
Add the Authelia middleware to the service's Traefik labels:
```yaml
labels:
- "traefik.enable=true"
- "traefik.http.routers.servicename.rule=Host(`servicename.${DOMAIN}`)"
- "traefik.http.routers.servicename.entrypoints=websecure"
- "traefik.http.routers.servicename.tls.certresolver=letsencrypt"
- "traefik.http.routers.servicename.middlewares=authelia@docker" # ← Add this line
- "traefik.http.services.servicename.loadbalancer.server.port=8080"
```
### To Disable SSO on a Service
Comment out (don't remove) the middleware line:
```yaml
labels:
- "traefik.enable=true"
- "traefik.http.routers.servicename.rule=Host(`servicename.${DOMAIN}`)"
- "traefik.http.routers.servicename.entrypoints=websecure"
- "traefik.http.routers.servicename.tls.certresolver=letsencrypt"
# - "traefik.http.routers.servicename.middlewares=authelia@docker" # ← Commented out (not removed)
- "traefik.http.services.servicename.loadbalancer.server.port=8080"
```
After making changes, redeploy the service:
```bash
# From inside the stack directory
cd /opt/stacks/stack-name/
docker compose up -d
# Or from anywhere, using the full path
docker compose -f /opt/stacks/stack-name/docker-compose.yml up -d
```
**Stopping a Service:**
```bash
# From inside the stack directory
cd /opt/stacks/stack-name/
docker compose down
# Or from anywhere, using the full path
docker compose -f /opt/stacks/stack-name/docker-compose.yml down
```
**Use Cases for Development/Production:**
- **Security First**: All services start with SSO enabled by default for maximum security
- **Development**: Keep SSO enabled to protect services during testing
- **Production**: Disable SSO only for services needing direct app/API access (Plex, Jellyfin)
- **Gradual Exposure**: Comment out SSO only when ready to expose a service
- **Quick Toggle**: AI assistant can modify these labels automatically when you ask
## Authelia Customization
### Available Customization Options
**1. Branding and Appearance**
Edit `/opt/stacks/core/authelia/configuration.yml`:
```yaml
# Custom logo and branding
theme: dark # Options: light, dark, grey, auto
# No built-in web UI for configuration
# All settings managed via YAML files
```
**2. User Management**
Users are managed in `/opt/stacks/core/authelia/users_database.yml`:
```yaml
users:
username:
displayname: "Display Name"
password: "$argon2id$v=19$m=65536..." # Generated with authelia hash-password
email: user@example.com
groups:
- admins
- users
```
Generate password hash:
```bash
docker run --rm authelia/authelia:4.37 authelia hash-password 'yourpassword'
```
**3. Access Control Rules**
Customize who can access what in `configuration.yml`:
```yaml
access_control:
default_policy: deny
rules:
# Public services (no auth)
- domain:
- "jellyfin.yourdomain.com"
- "plex.yourdomain.com"
policy: bypass
# Admin only services
- domain:
- "dockge.yourdomain.com"
- "portainer.yourdomain.com"
policy: two_factor
subject:
- "group:admins"
# All authenticated users
- domain: "*.yourdomain.com"
policy: one_factor
```
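One pattern worth knowing on top of the rules above (a hypothetical rule - adjust the domain to your setup): mobile apps for the *Arr services authenticate with their own API keys, so you can bypass SSO for just the `/api/` paths while keeping the web UI protected:

```yaml
# Hypothetical example: let Sonarr's API bypass SSO (apps send an API key)
- domain: "sonarr.yourdomain.com"
  resources:
    - "^/api/.*$"
  policy: bypass
```

Resource rules are evaluated together with the domain, so this is narrower than disabling the Authelia middleware in Traefik for the whole service.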
**4. Two-Factor Authentication (2FA)**
- TOTP (Time-based One-Time Password) via apps like Google Authenticator, Authy
- Configure in `configuration.yml` under `totp:` section
- Per-user enrollment via Authelia UI at `https://auth.${DOMAIN}`
**5. Session Management**
Edit `configuration.yml`:
```yaml
session:
name: authelia_session
expiration: 1h # How long before re-login required
inactivity: 5m # Timeout after inactivity
remember_me_duration: 1M # "Remember me" checkbox duration
```
**6. Notification Settings**
Email notifications for password resets, 2FA enrollment:
```yaml
notifier:
smtp:
host: smtp.gmail.com
port: 587
username: your-email@gmail.com
password: app-password
sender: authelia@yourdomain.com
```
### No Web UI for Configuration
⚠️ **Important**: Authelia does **not** have a configuration web UI. All configuration is done via YAML files:
- `/opt/stacks/core/authelia/configuration.yml` - Main settings
- `/opt/stacks/core/authelia/users_database.yml` - User accounts
This is **by design** and makes Authelia perfect for AI management and security-first approach:
- AI can read and modify YAML files
- Version control friendly
- No UI clicks required
- Infrastructure as code
- Secure by default
**Web UI Available For:**
- Login page: `https://auth.${DOMAIN}`
- User profile: Change password, enroll 2FA
- Device enrollment: Manage trusted devices
**Alternative with Web UI: Authentik**
If you need a web UI for user management, Authentik is included in the infrastructure stack:
- **Authentik**: Full-featured SSO with web UI for user/group management
- Access at: `https://authentik.${DOMAIN}`
- Includes PostgreSQL database and Redis cache
- More complex but offers GUI-based configuration
- Deploy only if you need web-based user management
**Other Alternatives:**
- **Keycloak**: Enterprise-grade SSO with web UI
- **Authelia + LDAP**: Use LDAP with web management (phpLDAPadmin, etc.)
### Quick Configuration with AI
Since all Authelia configuration is file-based, you can use the AI assistant to:
- Add/remove users
- Modify access rules
- Change session settings
- Update branding
- Enable/disable features
Just ask: "Add a new user to Authelia" or "Change session timeout to 2 hours"
## Storage Recommendations
| Data Type | Recommended Location | Reason |
|-----------|---------------------|--------|
| Configuration files | `/opt/stacks/stack-name/` | Easy access, version control |
| Small databases (< 10GB) | `/opt/stacks/stack-name/db/` | Manageable on system drive |
| Media files (movies, TV, music) | `/mnt/media/` | Large, continuous growth |
| Downloads | `/mnt/downloads/` | Temporary, high throughput |
| Backups | `/mnt/backups/` | Large, separate from system |
| Surveillance footage | `/mnt/surveillance/` | Continuous recording |
| Large databases (> 10GB) | `/mnt/databases/` | Growth over time |
| Transcoding cache | `/mnt/transcode-cache/` | High I/O, large temporary files |
| Git repositories | `/mnt/git/` | Can grow large |
| Nextcloud data | `/mnt/nextcloud/` | User files, photos |
## Configuration Templates
All configuration templates are available in `config-templates/`:
- `traefik/` - Static and dynamic Traefik configuration
- `authelia/` - Complete Authelia setup with user database
- `homepage/` - Dashboard services, widgets, and Docker integration
- `prometheus/` - Metrics scrape configurations
- `loki/` - Log aggregation settings
- `promtail/` - Log shipping configuration
- `redis/` - Redis server configuration
## Additional Resources
- **Getting Started**: See [docs/getting-started.md](getting-started.md) for detailed deployment
- **Docker Guidelines**: See [docs/docker-guidelines.md](docker-guidelines.md) for management patterns
- **Quick Reference**: See [docs/quick-reference.md](quick-reference.md) for common commands
- **Proxying External Hosts**: See [docs/proxying-external-hosts.md](proxying-external-hosts.md) for Raspberry Pi, NAS, etc.
- **AI Assistant**: Use GitHub Copilot in VS Code with `.github/copilot-instructions.md` for intelligent homelab management
# System Architecture
## Overview
The EZ-Homelab implements a **layered, production-ready architecture** designed for reliability, security, and ease of management. The system is built around Docker containers orchestrated through Traefik reverse proxy with Authelia SSO authentication.
## Core Principles
### 1. **Infrastructure as Code**
- All services defined in Docker Compose files
- File-based configuration (AI-manageable)
- Version-controlled infrastructure
- Reproducible deployments
### 2. **Security First**
- **Default Deny**: All services start with Authelia SSO protection
- **Explicit Bypass**: Only media apps (Plex, Jellyfin) bypass SSO for app compatibility
- **VPN Routing**: Download services route through Gluetun VPN client
- **Wildcard SSL**: Single certificate covers all subdomains
### 3. **Layered Architecture**
```
┌─────────────────┐
│   Dashboards    │ ← User Interface Layer
│ (Homepage, UI)  │
└─────────────────┘
┌─────────────────┐
│ Infrastructure  │ ← Management Layer
│ (Dockge, Auth)  │
└─────────────────┘
┌─────────────────┐
│   Core Stack    │ ← Foundation Layer
│(DNS, Proxy, VPN)│
└─────────────────┘
```
## Component Architecture
### Core Infrastructure Layer
The foundation that everything else depends on:
- **DuckDNS**: Dynamic DNS with Let's Encrypt DNS challenge
- **Traefik**: Reverse proxy with automatic HTTPS termination
- **Authelia**: SSO authentication with file-based user database
- **Gluetun**: VPN client for secure download routing
- **Sablier**: Lazy loading service for resource efficiency
### Service Categories
#### Infrastructure Services
- **Management**: Dockge (primary), Portainer (secondary)
- **Monitoring**: Dozzle (logs), Glances (system), Pi-hole (DNS)
- **Security**: Authelia (SSO), VPN routing via Gluetun
#### Media Services
- **Streaming**: Plex, Jellyfin (with app compatibility bypass)
- **Automation**: Sonarr, Radarr, Prowlarr (*Arr stack)
- **Downloads**: qBittorrent (VPN-routed)
#### Productivity & Collaboration
- **File Sync**: Nextcloud with MariaDB
- **Version Control**: Gitea with PostgreSQL
- **Documentation**: BookStack, DokuWiki, MediaWiki
- **Communication**: Various collaboration tools
#### Home Automation
- **Core**: Home Assistant with database
- **Development**: ESPHome, Node-RED
- **Connectivity**: Mosquitto (MQTT), Zigbee2MQTT
- **Surveillance**: MotionEye
#### Monitoring & Observability
- **Metrics**: Prometheus, Node Exporter, cAdvisor
- **Visualization**: Grafana with Loki logging
- **Alerting**: Alertmanager, Uptime Kuma
## Network Architecture
### Docker Networks
- **traefik-network**: Primary network for all web-facing services
- **homelab-network**: Internal service communication
- **dockerproxy-network**: Secure Docker socket access
### Routing Patterns
- **Traefik Labels**: Declarative routing configuration
- **Authelia Middleware**: SSO protection with bypass rules
- **VPN Routing**: `network_mode: "service:gluetun"` for downloads
### Port Management
- **External Ports**: Only 80/443 exposed (Traefik)
- **Internal Ports**: Services communicate via Docker networks
- **VPN Ports**: Download services mapped through Gluetun
## Storage Strategy
### Configuration Storage
- **Location**: `/opt/stacks/{stack-name}/config/`
- **Purpose**: Application configuration and settings
- **Backup**: Included in backup strategy
### Data Storage
- **Small Data**: Named Docker volumes (< 50GB)
- **Large Data**: External mounts `/mnt/media`, `/mnt/downloads`
- **Databases**: Containerized with persistent volumes
### Backup Architecture
- **Primary**: Restic + Backrest for comprehensive backups
- **Secondary**: Service-specific backup tools (Duplicati)
- **Strategy**: 3-2-1 rule (3 copies, 2 media types, 1 offsite)
## Security Model
### Authentication Layers
1. **Network Level**: Firewall rules and VPN routing
2. **Application Level**: Authelia SSO with 2FA support
3. **Service Level**: Individual service authentication
### Access Control
- **Default Protected**: All services require authentication
- **Bypass Rules**: Configured in Authelia for specific domains
- **VPN Enforcement**: Download traffic routed through VPN
### Certificate Management
- **Wildcard Certificates**: `*.yourdomain.duckdns.org`
- **Automatic Renewal**: Traefik handles Let's Encrypt
- **DNS Challenge**: DuckDNS token-based validation
## Deployment Model
### Automated Setup
1. **System Preparation**: `setup-homelab.sh`
- Docker installation
- System configuration
- Authelia secrets generation
2. **Service Deployment**: `deploy-homelab.sh`
- Core stack deployment
- Infrastructure services
- Dashboard configuration
### Management Interface
- **Primary**: Dockge web UI at `dockge.yourdomain.duckdns.org`
- **Secondary**: Portainer for advanced container management
- **AI Integration**: File-based configuration for AI assistance
## Scalability & Performance
### Resource Management
- **Limits**: Configured per service based on requirements
- **Reservations**: Guaranteed minimum resources
- **Monitoring**: System resource tracking
### Service Categories by Resource Usage
- **Lightweight**: DNS, monitoring, authentication
- **Standard**: Web applications, dashboards
- **Heavy**: Media servers, databases
- **Specialized**: GPU-enabled services, high-I/O applications
## Maintenance & Operations
### Update Strategy
- **Automated**: Watchtower for container updates
- **Manual**: Service-specific update procedures
- **Testing**: Validation before production deployment
### Monitoring & Alerting
- **System**: Glances, Prometheus, Grafana
- **Services**: Health checks and log aggregation
- **Uptime**: Uptime Kuma for external monitoring
### Backup & Recovery
- **Automated**: Scheduled backups with Restic
- **Manual**: On-demand backups via Backrest UI
- **Testing**: Regular backup validation and restore testing
## AI Integration
### Copilot Instructions
- **File-Based**: All configuration in editable files
- **Documentation**: Comprehensive guides for AI assistance
- **Templates**: Ready-to-use configuration templates
### Management Patterns
- **Declarative**: Define desired state in YAML
- **Automated**: Scripts handle complex deployment logic
- **Validated**: Health checks and verification steps
This architecture provides a robust, secure, and maintainable foundation for a production homelab environment.
# EZ-Homelab Wiki Footer
## 📖 Quick Links
- [[Home]] | [[Getting Started Guide]] | [[Services Overview]]
- [[Quick Reference]] | [[Troubleshooting]] | [[AI Management Guide]]
## 🆘 Need Help?
- **Issues**: [GitHub Issues](https://github.com/kelinfoxy/EZ-Homelab/issues)
- **Discussions**: [GitHub Discussions](https://github.com/kelinfoxy/EZ-Homelab/discussions)
- **Documentation**: This wiki is the primary source of truth
## 📊 Project Status
- **Version**: 1.0.0 (Production Ready)
- **Services**: 70+ services across 10 categories
- **Architecture**: File-based, AI-manageable
- **Management**: Dockge web UI
- **Security**: Authelia SSO with VPN routing
## 🤝 Contributing
- [[Contributing Guide]] - How to contribute
- [[Development Notes]] - Technical details
- [[Code Standards]] - Development best practices
## 📜 License
This project is licensed under the MIT License - see the [LICENSE](https://github.com/kelinfoxy/EZ-Homelab/blob/main/LICENSE) file for details.
---
*Last updated: January 21, 2026 | EZ-Homelab Wiki*
# EZ-Homelab Wiki Navigation
## 🚀 Getting Started
- [[Home]] - Wiki overview and navigation
- [[Getting Started Guide]] - Complete setup instructions
- [[Environment Configuration]] - Required settings and tokens
- [[Automated Setup]] - One-click deployment process
- [[Manual Setup]] - Step-by-step manual installation
- [[Post Setup Guide]] - After deployment configuration
- [[AI Management Guide]] - Using AI for homelab management
## 🏗️ Architecture & Design
- [[System Architecture]] - High-level component overview
- [[Network Architecture]] - Service communication patterns
- [[Security Model]] - Authentication and access control
- [[Storage Strategy]] - Data persistence and organization
- [[Docker Guidelines]] - Service management patterns
- [[Ports in Use]] - Complete port mapping reference
## 📦 Services & Stacks
- [[Services Overview]] - All available services catalog
- [[Core Infrastructure]] - Essential services (deploy first)
- [[Infrastructure Services]] - Management and monitoring
- [[Service Documentation]] - Individual service guides
## 🛠️ Operations & Management
- [[Quick Reference]] - Command cheat sheet
- [[Backup Strategy]] - Restic + Backrest comprehensive guide
- [[SSL Certificates]] - HTTPS and certificate management
- [[Proxying External Hosts]] - Connect non-Docker services
- [[Resource Limits Template]] - Performance optimization
- [[Troubleshooting]] - Common issues and solutions
## 🤖 AI & Automation
- [[Copilot Instructions]] - AI assistant configuration
- [[AI VS Code Setup]] - Development environment
- [[AI Management Prompts]] - Sample AI interactions
## 📋 Additional Resources
- [[How It Works]] - System architecture explanation
- [[On Demand Remote Services]] - Lazy loading configuration
- [[Authelia Customization]] - SSO configuration options
- [[Core Stack README]] - Core infrastructure details
## 📚 External Links
- [GitHub Repository](https://github.com/kelinfoxy/EZ-Homelab)
- [Docker Hub](https://hub.docker.com)
- [Traefik Documentation](https://doc.traefik.io/traefik/)
- [Authelia Documentation](https://www.authelia.com/)
- [DuckDNS](https://www.duckdns.org/)
---
*This wiki serves as the single source of truth for the EZ-Homelab project.*
# Action Report: SSL Wildcard Certificate Setup
**Date:** January 12, 2026
**Status:** ✅ Completed Successfully
**Impact:** All homelab services now have valid Let's Encrypt SSL certificates
---
## Problem Statement
Services were showing "not secure" warnings in browsers despite Traefik being configured for Let's Encrypt certificates. Multiple simultaneous certificate requests were failing due to DNS challenge conflicts.
## Root Causes Identified
### 1. **Multiple Simultaneous Certificate Requests**
- **Issue:** Each service (dockge, dozzle, glances, pihole, authelia) had `traefik.http.routers.*.tls.certresolver=letsencrypt` labels
- **Impact:** Traefik attempted to request individual certificates for each subdomain simultaneously
- **Consequence:** DuckDNS DNS challenge can only handle ONE TXT record at `_acme-challenge.kelin-hass.duckdns.org` at a time
- **Result:** All certificate requests failed with "Incorrect TXT record" errors
### 2. **DNS TXT Record Conflicts**
- **Issue:** Multiple services tried to create different TXT records at the same DNS location
- **Example:**
  - Service A creates: `_acme-challenge.kelin-hass.duckdns.org` = "token1"
  - Service B overwrites: `_acme-challenge.kelin-hass.duckdns.org` = "token2"
  - Let's Encrypt validates Service A but finds "token2" → validation fails
- **DuckDNS Limitation:** Can only maintain ONE TXT record per domain
### 3. **Authelia Configuration Error**
- **Issue:** Environment variable `AUTHELIA_NOTIFIER_SMTP_PASSWORD` was set without corresponding SMTP configuration
- **Impact:** Authelia crashed on startup with "please ensure only one of the 'smtp' or 'filesystem' notifier is configured"
- **Consequence:** Services requiring Authelia authentication were inaccessible
### 4. **Stale DNS Records**
- **Issue:** Old TXT records from failed attempts persisted in DNS
- **Impact:** New certificate attempts validated against old, incorrect TXT records
## Solution Implemented
### Phase 1: Identify Certificate Request Pattern
**Actions:**
1. Discovered Traefik logs at `/var/log/traefik/traefik.log` (not stdout)
2. Analyzed logs showing multiple simultaneous DNS-01 challenges
3. Confirmed DuckDNS TXT record conflicts
**Command Used:**
```bash
docker exec traefik tail -f /var/log/traefik/traefik.log
```
### Phase 2: Configure Wildcard Certificate
**Actions:**
1. Removed `certresolver` labels from all services except Traefik
2. Configured wildcard certificate on Traefik router only
3. Added DNS propagation skip for faster validation
**Changes Made:**
**File:** `/home/kelin/AI-Homelab/docker-compose/core.yml`
```yaml
# Traefik - Only service with certresolver
traefik:
labels:
- "traefik.http.routers.traefik.tls.certresolver=letsencrypt"
- "traefik.http.routers.traefik.tls.domains[0].main=${DOMAIN}"
- "traefik.http.routers.traefik.tls.domains[0].sans=*.${DOMAIN}"
# Authelia - No certresolver, just tls=true
authelia:
labels:
- "traefik.http.routers.authelia.tls=true"
```
**File:** `/home/kelin/AI-Homelab/docker-compose/infrastructure.yml`
```yaml
# All infrastructure services - No certresolver
dockge:
labels:
- "traefik.http.routers.dockge.tls=true"
dozzle:
labels:
- "traefik.http.routers.dozzle.tls=true"
glances:
labels:
- "traefik.http.routers.glances.tls=true"
pihole:
labels:
- "traefik.http.routers.pihole.tls=true"
```
**File:** `/opt/stacks/core/traefik/traefik.yml`
```yaml
certificatesResolvers:
letsencrypt:
acme:
email: kelinfoxy@gmail.com
storage: /acme.json
dnsChallenge:
provider: duckdns
disablePropagationCheck: true # Added to skip DNS propagation wait
resolvers:
- "1.1.1.1:53"
- "8.8.8.8:53"
```
### Phase 3: Clear DNS and Reset Certificates
**Actions:**
1. Stopped all services to clear DNS TXT records
2. Reset `acme.json` to force fresh certificate request
3. Waited 60 seconds for DNS to fully clear
4. Restarted services with wildcard-only configuration
**Commands Executed:**
```bash
# Stop services
cd /opt/stacks/core && docker compose down
# Reset certificate storage
rm /opt/stacks/core/traefik/acme.json
touch /opt/stacks/core/traefik/acme.json
chmod 600 /opt/stacks/core/traefik/acme.json
chown kelin:kelin /opt/stacks/core/traefik/acme.json
# Wait for DNS to clear
sleep 60
dig +short TXT _acme-challenge.kelin-hass.duckdns.org # Verified empty
# Deploy updated configuration
cp /home/kelin/AI-Homelab/docker-compose/core.yml /opt/stacks/core/docker-compose.yml
cd /opt/stacks/core && docker compose up -d
```
### Phase 4: Fix Authelia Configuration
**Issue Found:** Environment variable triggering SMTP configuration check
**File:** `/opt/stacks/core/docker-compose.yml`
**Removed:**
```yaml
environment:
- AUTHELIA_NOTIFIER_SMTP_PASSWORD=${SMTP_PASSWORD} # ❌ Removed
```
**Command:**
```bash
cd /opt/stacks/core && docker compose up -d authelia
```
### Phase 5: Fix Infrastructure Services
**Issue:** Missing `networks:` header in compose file
**File:** `/opt/stacks/infrastructure/infrastructure.yml`
**Fixed:**
```yaml
# Before (incorrect):
traefik-network:
external: true
# After (correct):
networks:
traefik-network:
external: true
homelab-network:
driver: bridge
dockerproxy-network:
driver: bridge
```
**Command:**
```bash
cd /opt/stacks/infrastructure && docker compose -f infrastructure.yml up -d
```
## Results
### Certificate Obtained Successfully ✅
**acme.json Contents:**
```json
{
"letsencrypt": {
"Account": {
"Email": "kelinfoxy@gmail.com",
"Registration": {
"uri": "https://acme-v02.api.letsencrypt.org/acme/acct/2958966636"
}
},
"Certificates": [
{
"domain": {
"main": "dockge.kelin-hass.duckdns.org"
}
},
{
"domain": {
"main": "kelin-hass.duckdns.org",
"sans": ["*.kelin-hass.duckdns.org"]
}
}
]
}
}
```
**Certificate Details:**
- **Subject:** CN=kelin-hass.duckdns.org
- **Issuer:** C=US, O=Let's Encrypt, CN=R12
- **Coverage:** Wildcard certificate covering all subdomains
- **File Size:** 23KB (up from 0 bytes)
### Services Status
All services running with valid SSL certificates:
| Service | Status | URL | Certificate |
|---------|--------|-----|-------------|
| Traefik | ✅ Up | https://traefik.kelin-hass.duckdns.org | Valid |
| Authelia | ✅ Up | https://auth.kelin-hass.duckdns.org | Valid |
| Dockge | ✅ Up | https://dockge.kelin-hass.duckdns.org | Valid |
| Dozzle | ✅ Up | https://dozzle.kelin-hass.duckdns.org | Valid |
| Glances | ✅ Up | https://glances.kelin-hass.duckdns.org | Valid |
| Pi-hole | ✅ Up | https://pihole.kelin-hass.duckdns.org | Valid |
## Best Practices & Prevention
### 1. ✅ Use Wildcard Certificates with DuckDNS
**Rule:** Only ONE service should request certificates with DuckDNS DNS challenge
**Configuration:**
```yaml
# ✅ CORRECT: Only Traefik requests wildcard cert
traefik:
labels:
- "traefik.http.routers.traefik.tls.certresolver=letsencrypt"
- "traefik.http.routers.traefik.tls.domains[0].main=${DOMAIN}"
- "traefik.http.routers.traefik.tls.domains[0].sans=*.${DOMAIN}"
# ✅ CORRECT: Other services just enable TLS
other-service:
labels:
- "traefik.http.routers.service.tls=true" # Uses wildcard automatically
# ❌ WRONG: Multiple services requesting certs
other-service:
labels:
- "traefik.http.routers.service.tls.certresolver=letsencrypt" # DON'T DO THIS
```
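This pattern is easy to violate as new stacks are added. A small audit sketch (stdlib Python; the `/opt/stacks` layout and the `routers.traefik.` exemption are assumptions from this setup) that flags any other service still requesting its own certificate:

```python
import pathlib

def find_stray_certresolvers(root):
    """Flag compose lines that request their own certificate.

    With a DuckDNS wildcard setup, only the Traefik router should carry
    a tls.certresolver label; any other hit is a candidate bug.
    """
    hits = []
    for path in sorted(pathlib.Path(root).rglob("*.yml")):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if "tls.certresolver" in line and "routers.traefik." not in line:
                hits.append((str(path), lineno, line.strip()))
    return hits

# Usage (path is an example):
# for path, lineno, line in find_stray_certresolvers("/opt/stacks"):
#     print(f"{path}:{lineno}: {line}")
```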
### 2. ✅ DuckDNS DNS Challenge Limitations
**Understand the Constraint:**
- DuckDNS can only maintain ONE TXT record at `_acme-challenge.kelin-hass.duckdns.org`
- Multiple simultaneous challenges WILL fail
- Use wildcard certificate to avoid this limitation
**Alternative Providers (if needed):**
- Cloudflare: Supports multiple simultaneous DNS challenges
- Route53: Supports multiple TXT records
- Use HTTP challenge if DNS challenge isn't required
### 3. ✅ Traefik Logging Configuration
**Enable File Logging for Debugging:**
**File:** `/opt/stacks/core/traefik/traefik.yml`
```yaml
log:
level: DEBUG # Use DEBUG for troubleshooting, INFO for production
filePath: /var/log/traefik/traefik.log # Easier to tail than docker logs
# Mount in docker-compose.yml:
volumes:
- /var/log/traefik:/var/log/traefik
```
**Useful Commands:**
```bash
# Monitor certificate acquisition
docker exec traefik tail -f /var/log/traefik/traefik.log | grep -E "acme|certificate|DNS"
# Check for errors
docker exec traefik tail -100 /var/log/traefik/traefik.log | grep -E "error|Unable"
# View specific domain
docker exec traefik tail -200 /var/log/traefik/traefik.log | grep "kelin-hass.duckdns.org"
```
### 4. ✅ Certificate Troubleshooting Workflow
**When certificates aren't working:**
```bash
# 1. Check acme.json status
cat /opt/stacks/core/traefik/acme.json | python3 -m json.tool | grep -A5 "Certificates"
# 2. Check certificate count
python3 -c "import json; d=json.load(open('/opt/stacks/core/traefik/acme.json')); print(f'Certificates: {len(d[\"letsencrypt\"][\"Certificates\"])}')"
# 3. Test certificate being served
echo | openssl s_client -connect auth.kelin-hass.duckdns.org:443 -servername auth.kelin-hass.duckdns.org 2>/dev/null | openssl x509 -noout -subject -issuer
# 4. Check DNS TXT records
dig +short TXT _acme-challenge.kelin-hass.duckdns.org
# 5. Check Traefik logs
docker exec traefik tail -50 /var/log/traefik/traefik.log
```
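The openssl steps above can also be scripted. A minimal stdlib sketch (hostnames are examples; `fetch_not_after` needs network access, `days_remaining` does not):

```python
import ssl, socket, datetime

NOT_AFTER_FMT = "%b %d %H:%M:%S %Y %Z"  # format used by ssl.getpeercert()

def fetch_not_after(host, port=443):
    """Return the notAfter string of the certificate served at host:port."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]

def days_remaining(not_after, now=None):
    """Whole days from `now` (UTC) until the notAfter timestamp."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    exp = datetime.datetime.strptime(not_after, NOT_AFTER_FMT)
    exp = exp.replace(tzinfo=datetime.timezone.utc)
    return (exp - now).days

# Usage (requires network; hostname is an example):
# print(days_remaining(fetch_not_after("auth.kelin-hass.duckdns.org")))
```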
### 5. ✅ Environment Variable Hygiene
**Principle:** Only set environment variables that are actually used
**Example - Authelia:**
```yaml
# ✅ CORRECT: Only variables for configured features
environment:
- AUTHELIA_JWT_SECRET=${AUTHELIA_JWT_SECRET}
- AUTHELIA_SESSION_SECRET=${AUTHELIA_SESSION_SECRET}
- AUTHELIA_STORAGE_ENCRYPTION_KEY=${AUTHELIA_STORAGE_ENCRYPTION_KEY}
# ❌ WRONG: SMTP variable without SMTP configuration
environment:
- AUTHELIA_NOTIFIER_SMTP_PASSWORD=${SMTP_PASSWORD} # Causes crash if SMTP not in config.yml
```
### 6. ✅ Docker Compose File Validation
**Before deploying:**
```bash
# Validate syntax
docker compose -f /path/to/file.yml config
# Check for common errors
grep -nE "^[a-z-]+-network:" file.yml  # unindented network keys at top level mean a missing "networks:" header
```
### 7. ✅ Certificate Renewal Strategy
**Automatic Renewal:**
- Traefik automatically renews certificates 30 days before expiration
- Wildcard certificate covers all subdomains (no individual renewals needed)
- Monitor `acme.json` for certificate expiration dates
**Backup acme.json:**
```bash
# Regular backup (e.g., daily cron)
cp /opt/stacks/core/traefik/acme.json /opt/backups/acme.json.$(date +%Y%m%d)
# Keep last 7 days
find /opt/backups -name "acme.json.*" -mtime +7 -delete
```
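When auditing `acme.json`, it is useful to confirm a hostname is actually covered by a stored certificate. An illustrative helper (not Traefik code; it mirrors the standard rule that `*.example.com` covers exactly one extra label):

```python
def covered(acme, fqdn, resolver="letsencrypt"):
    """True if fqdn is covered by any certificate stored in acme.json data."""
    for cert in acme.get(resolver, {}).get("Certificates", []):
        dom = cert.get("domain", {})
        for name in [dom.get("main", "")] + (dom.get("sans") or []):
            if name == fqdn:
                return True
            # "*.example.com" covers exactly one extra label
            if name.startswith("*.") and fqdn.endswith(name[1:]):
                if "." not in fqdn[: -len(name[1:])]:
                    return True
    return False

# Usage:
# import json
# acme = json.load(open("/opt/stacks/core/traefik/acme.json"))
# print(covered(acme, "auth.kelin-hass.duckdns.org"))
```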
## Key Learnings
### Technical Insights
1. **DuckDNS Limitation:** Single TXT record constraint requires wildcard certificate approach
2. **DNS Propagation:** `disablePropagationCheck: true` speeds up validation but relies on fast DNS updates
3. **Traefik Labels:** `tls=true` vs `tls.certresolver=letsencrypt` - use former for wildcard coverage
4. **Environment Variables:** Can trigger configuration validation even without corresponding config file entries
### Process Insights
1. **Log Discovery:** Traefik logs to files by default, not always visible via `docker logs`
2. **DNS Clearing:** Stopping services and waiting 60s ensures DNS records fully clear
3. **Incremental Debugging:** Monitor logs during certificate acquisition to catch issues early
4. **Configuration Synchronization:** Repository files must be copied to deployment locations
## Documentation Updates
### Files Modified
**Repository:**
- `/home/kelin/AI-Homelab/docker-compose/core.yml`
- `/home/kelin/AI-Homelab/docker-compose/infrastructure.yml`
**Deployed:**
- `/opt/stacks/core/docker-compose.yml`
- `/opt/stacks/core/traefik/traefik.yml`
- `/opt/stacks/core/traefik/acme.json`
- `/opt/stacks/infrastructure/infrastructure.yml`
### Configuration Templates
**Wildcard Certificate Template:**
```yaml
services:
traefik:
labels:
- "traefik.http.routers.traefik.tls.certresolver=letsencrypt"
- "traefik.http.routers.traefik.tls.domains[0].main=${DOMAIN}"
- "traefik.http.routers.traefik.tls.domains[0].sans=*.${DOMAIN}"
any-other-service:
labels:
- "traefik.http.routers.service.tls=true" # No certresolver!
```
## Future Recommendations
### Short-term (Next Week)
1. ✅ Monitor certificate auto-renewal (should happen automatically)
2. ✅ Test browser access from different devices to verify SSL
3. ✅ Update homelab documentation with wildcard certificate pattern
4. ⚠️ Consider adding certificate monitoring alerts
### Medium-term (Next Month)
1. Set up automated `acme.json` backups
2. Document certificate troubleshooting runbook
3. Consider migrating to Cloudflare if more services are added
4. Implement certificate expiration monitoring
### Long-term (Next Quarter)
1. Evaluate alternative DNS providers for better DNS challenge support
2. Consider setting up staging Let's Encrypt for testing
3. Implement centralized logging for all services
4. Add Prometheus/Grafana monitoring for SSL certificate expiration
## Quick Reference
### Emergency Certificate Reset
```bash
# 1. Stop all services
cd /opt/stacks/core && docker compose down
cd /opt/stacks/infrastructure && docker compose -f infrastructure.yml down
# 2. Reset acme.json
rm /opt/stacks/core/traefik/acme.json
touch /opt/stacks/core/traefik/acme.json
chmod 600 /opt/stacks/core/traefik/acme.json
# 3. Wait for DNS to clear
sleep 60
# 4. Restart
cd /opt/stacks/core && docker compose up -d
cd /opt/stacks/infrastructure && docker compose -f infrastructure.yml up -d
# 5. Monitor
docker exec traefik tail -f /var/log/traefik/traefik.log
```
### Verify Certificate Command
```bash
echo | openssl s_client -connect ${SUBDOMAIN}.kelin-hass.duckdns.org:443 -servername ${SUBDOMAIN}.kelin-hass.duckdns.org 2>/dev/null | openssl x509 -noout -subject -issuer -dates
```
### Check All Service Certificates
```bash
for subdomain in auth traefik dockge dozzle glances pihole; do
echo "=== $subdomain.kelin-hass.duckdns.org ==="
echo | openssl s_client -connect $subdomain.kelin-hass.duckdns.org:443 -servername $subdomain.kelin-hass.duckdns.org 2>/dev/null | openssl x509 -noout -subject -issuer
echo
done
```
---
## Summary
Successfully implemented a wildcard SSL certificate for all homelab services using the Let's Encrypt DNS challenge via DuckDNS. The key success factor was recognizing DuckDNS's limitation of one TXT record at a time and configuring Traefik to request a single wildcard certificate instead of individual certificates per service. All services are now accessible via HTTPS with valid certificates.
**Status:** ✅ Production Ready
**Next Review:** 30 days before certificate expiration (March 13, 2026)

# Alertmanager - Alert Routing
## Table of Contents
- [Overview](#overview)
- [What is Alertmanager?](#what-is-alertmanager)
- [Why Use Alertmanager?](#why-use-alertmanager)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
## Overview
**Category:** Alert Management
**Docker Image:** [prom/alertmanager](https://hub.docker.com/r/prom/alertmanager)
**Default Stack:** `monitoring.yml`
**Web UI:** `http://SERVER_IP:9093`
**Purpose:** Handle Prometheus alerts
**Ports:** 9093
## What is Alertmanager?
Alertmanager handles alerts sent by Prometheus: it deduplicates, groups, and routes them to notification channels (email, Slack, PagerDuty, etc.) and manages silencing and inhibition. It is the alerting component of the Prometheus ecosystem.
### Key Features
- **Alert Routing:** Send to right channels
- **Grouping:** Combine similar alerts
- **Deduplication:** No duplicate alerts
- **Silencing:** Mute alerts temporarily
- **Inhibition:** Suppress dependent alerts
- **Notifications:** Email, Slack, webhooks, etc.
- **Web UI:** Manage alerts visually
- **Free & Open Source:** Prometheus project
## Why Use Alertmanager?
1. **Prometheus Native:** Designed for Prometheus
2. **Smart Routing:** Alerts go where needed
3. **Deduplication:** No spam
4. **Grouping:** Related alerts together
5. **Silencing:** Maintenance mode
6. **Multi-Channel:** Email, Slack, etc.
## Configuration in AI-Homelab
```
/opt/stacks/monitoring/alertmanager/
├── alertmanager.yml   # Configuration
└── data/              # Alert state
```
### alertmanager.yml
```yaml
global:
resolve_timeout: 5m
route:
group_by: ['alertname']
group_wait: 10s
group_interval: 10s
repeat_interval: 1h
receiver: 'discord'
receivers:
- name: 'discord'
webhook_configs:
- url: 'YOUR_DISCORD_WEBHOOK_URL'
send_resolved: true
- name: 'email'
email_configs:
- to: 'alerts@yourdomain.com'
from: 'alertmanager@yourdomain.com'
smarthost: 'smtp.gmail.com:587'
auth_username: 'your@gmail.com'
auth_password: 'app_password'
inhibit_rules:
- source_match:
severity: 'critical'
target_match:
severity: 'warning'
equal: ['alertname', 'instance']
```
## Official Resources
- **Website:** https://prometheus.io/docs/alerting/latest/alertmanager
- **Configuration:** https://prometheus.io/docs/alerting/latest/configuration
## Docker Configuration
```yaml
alertmanager:
image: prom/alertmanager:latest
container_name: alertmanager
restart: unless-stopped
networks:
- traefik-network
ports:
- "9093:9093"
command:
- '--config.file=/etc/alertmanager/alertmanager.yml'
- '--storage.path=/alertmanager'
volumes:
- /opt/stacks/monitoring/alertmanager/alertmanager.yml:/etc/alertmanager/alertmanager.yml
- /opt/stacks/monitoring/alertmanager/data:/alertmanager
```
## Setup
1. **Configure Prometheus:**
Add to prometheus.yml:
```yaml
alerting:
alertmanagers:
- static_configs:
- targets: ['alertmanager:9093']
rule_files:
- '/etc/prometheus/rules/*.yml'
```
2. **Create Alert Rules:**
`/opt/stacks/monitoring/prometheus/rules/alerts.yml`:
```yaml
groups:
- name: example
rules:
- alert: InstanceDown
expr: up == 0
for: 5m
labels:
severity: critical
annotations:
summary: "Instance {{ $labels.instance }} down"
description: "{{ $labels.instance }} has been down for more than 5 minutes."
- alert: HighCPU
expr: 100 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
for: 5m
labels:
severity: warning
annotations:
summary: "High CPU usage on {{ $labels.instance }}"
description: "CPU usage is above 80% for more than 5 minutes."
- alert: HighMemory
expr: (node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes) / node_memory_MemTotal_bytes * 100 > 90
for: 5m
labels:
severity: warning
annotations:
summary: "High memory usage on {{ $labels.instance }}"
description: "Memory usage is above 90%."
- alert: DiskSpaceLow
expr: (node_filesystem_avail_bytes / node_filesystem_size_bytes) * 100 < 10
for: 5m
labels:
severity: critical
annotations:
summary: "Low disk space on {{ $labels.instance }}"
description: "Disk space is below 10%."
```
3. **Restart Prometheus:**
```bash
docker restart prometheus
```
4. **Access Alertmanager UI:** `http://SERVER_IP:9093`
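To confirm the routing tree and receivers end to end, you can push a synthetic alert to Alertmanager's v2 API. A hedged stdlib sketch (the URL and label values are examples):

```python
import json, datetime, urllib.request

def make_test_alert(alertname="SyntheticTest", severity="warning", minutes=5):
    """Build a minimal alert payload for POST /api/v2/alerts."""
    now = datetime.datetime.now(datetime.timezone.utc)
    return [{
        "labels": {"alertname": alertname, "severity": severity},
        "annotations": {"summary": "Synthetic alert for pipeline testing"},
        "startsAt": now.isoformat(),
        "endsAt": (now + datetime.timedelta(minutes=minutes)).isoformat(),
    }]

def push(alerts, url="http://localhost:9093/api/v2/alerts"):
    """POST the alerts; Alertmanager answers 200 on success."""
    req = urllib.request.Request(
        url,
        data=json.dumps(alerts).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status
```

After pushing, the alert should appear in the Alertmanager UI and be delivered to the configured receiver (e.g. the Discord webhook above).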
## Summary
Alertmanager routes alerts from Prometheus, offering:
- Alert deduplication
- Grouping and routing
- Multiple notification channels
- Silencing and inhibition
- Web UI management
- Free and open-source
**Perfect for:**
- Prometheus alert handling
- Multi-channel notifications
- Alert management
- Maintenance silencing
- Alert grouping
**Key Points:**
- Receives alerts from Prometheus
- Routes to notification channels
- Deduplicates and groups
- Supports silencing
- Web UI for management
- Configure in alertmanager.yml
- Define rules in Prometheus
**Remember:**
- Configure receivers (Discord, Email, etc.)
- Create alert rules in Prometheus
- Test alerts work
- Use silencing for maintenance
- Group related alerts
- Set appropriate thresholds
- Monitor alertmanager itself
Alertmanager manages your alerts intelligently!

# Authelia - Single Sign-On & Two-Factor Authentication
## Table of Contents
- [Overview](#overview)
- [What is Authelia?](#what-is-authelia)
- [Why Use Authelia?](#why-use-authelia)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [User Management](#user-management)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Core Infrastructure
**Docker Image:** [authelia/authelia](https://hub.docker.com/r/authelia/authelia)
**Default Stack:** `core.yml`
**Web UI:** `https://auth.${DOMAIN}`
**Authentication:** Self-authenticating (login portal)
## What is Authelia?
Authelia is an open-source authentication and authorization server providing single sign-on (SSO) and two-factor authentication (2FA) for your applications via a web portal. It acts as a gatekeeper between Traefik and your services.
### Key Features
- **Single Sign-On (SSO):** Log in once, access all protected services
- **Two-Factor Authentication:** TOTP (Google Authenticator, Authy), WebAuthn, Security Keys
- **Access Control:** Per-service, per-user, per-network rules
- **Session Management:** Remember devices, revoke sessions
- **Identity Verification:** Email verification for password resets
- **Security Policies:** Custom policies per service (one_factor, two_factor, bypass)
- **Lightweight:** Minimal resource usage
- **Integration:** Works seamlessly with Traefik via ForwardAuth
## Why Use Authelia?
1. **Enhanced Security:** Add 2FA to services that don't support it natively
2. **Centralized Authentication:** One login portal for all services
3. **Granular Access Control:** Control who can access what
4. **Remember Devices:** Don't re-authenticate on trusted devices
5. **Protection Layer:** Extra security even if a service has vulnerabilities
6. **Free & Open Source:** No licensing costs
7. **Privacy:** Self-hosted, your data stays with you
## How It Works
```
User → Traefik → Authelia (Check Auth) → Service
                      │
              Not Authenticated
                      ↓
                Login Portal
                      ↓
             Username/Password
                      ↓
            2FA (TOTP/WebAuthn)
                      ↓
               Cookie Issued
                      ↓
               Access Granted
```
### Authentication Flow
1. **User accesses** `https://sonarr.yourdomain.com`
2. **Traefik** sends request to Authelia (ForwardAuth middleware)
3. **Authelia checks** for valid authentication cookie
4. **If not authenticated:**
- Redirects to `https://auth.yourdomain.com`
- User enters username/password
- User completes 2FA challenge
- Authelia issues authentication cookie
5. **If authenticated:**
- Authelia checks access control rules
- If authorized, request forwarded to service
6. **Session remembered** for configured duration
### Integration with Traefik
Services protected by Authelia use a special middleware:
```yaml
labels:
- "traefik.http.routers.sonarr.middlewares=authelia@docker"
```
This tells Traefik to verify authentication before allowing access.
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/core/authelia/
├── configuration.yml # Main configuration
├── users_database.yml # User accounts and passwords
└── db.sqlite3 # Session/TOTP storage (auto-generated)
```
### Main Configuration (`configuration.yml`)
```yaml
server:
host: 0.0.0.0
port: 9091
log:
level: info
theme: dark
totp:
issuer: yourdomain.com
authentication_backend:
file:
path: /config/users_database.yml
password:
algorithm: argon2id
iterations: 1
salt_length: 16
parallelism: 8
memory: 64
access_control:
default_policy: deny
rules:
# Bypass auth for public services
- domain: "public.yourdomain.com"
policy: bypass
# One-factor for internal networks
- domain: "*.yourdomain.com"
policy: one_factor
networks:
- 192.168.1.0/24
# Two-factor for everything else
- domain: "*.yourdomain.com"
policy: two_factor
session:
name: authelia_session
domain: yourdomain.com
expiration: 1h
inactivity: 5m
remember_me_duration: 1M
regulation:
max_retries: 5
find_time: 10m
ban_time: 15m
storage:
local:
path: /config/db.sqlite3
notifier:
filesystem:
filename: /config/notification.txt
# Alternative: SMTP for email notifications
# smtp:
# username: your-email@gmail.com
# password: your-app-password
# host: smtp.gmail.com
# port: 587
# sender: your-email@gmail.com
```
### Users Database (`users_database.yml`)
```yaml
users:
john:
displayname: "John Doe"
password: "$argon2id$v=19$m=65536,t=3,p=4$BpLnfgDsc2WD8F2q$qQv8kuZHAOhqx7/Ju3qNqawhKhh9q9L6KUXCv7RQ0MA"
email: john@example.com
groups:
- admins
- users
jane:
displayname: "Jane Smith"
password: "$argon2id$v=19$m=65536,t=3,p=4$..."
email: jane@example.com
groups:
- users
```
### Generating Password Hashes
```bash
# Using Docker
docker run --rm authelia/authelia:latest authelia crypto hash generate argon2 --password 'YourPasswordHere'
# Or from within the container
docker exec -it authelia authelia crypto hash generate argon2 --password 'YourPasswordHere'
```
### Environment Variables
```bash
AUTHELIA_JWT_SECRET=your-super-secret-jwt-key-min-32-chars
AUTHELIA_SESSION_SECRET=your-super-secret-session-key-min-32-chars
AUTHELIA_STORAGE_ENCRYPTION_KEY=your-super-secret-storage-key-min-32-chars
```
**Generate secure secrets:**
```bash
# Generate random 64-character hex strings
openssl rand -hex 32
```
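If `openssl` is not available, Python's `secrets` module produces equivalent values; a small sketch:

```python
import secrets

def gen_secret(nbytes=32):
    """Random hex string; equivalent to `openssl rand -hex 32` for nbytes=32."""
    return secrets.token_hex(nbytes)  # 2 hex chars per byte -> 64 chars

if __name__ == "__main__":
    for name in ("AUTHELIA_JWT_SECRET", "AUTHELIA_SESSION_SECRET",
                 "AUTHELIA_STORAGE_ENCRYPTION_KEY"):
        print(f"{name}={gen_secret()}")
```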
## Official Resources
- **Website:** https://www.authelia.com
- **Documentation:** https://www.authelia.com/docs/
- **GitHub:** https://github.com/authelia/authelia
- **Docker Hub:** https://hub.docker.com/r/authelia/authelia
- **Community:** https://discord.authelia.com
- **Configuration Examples:** https://github.com/authelia/authelia/tree/master/examples
## Educational Resources
### Videos
- [Authelia - The BEST Authentication Platform (Techno Tim)](https://www.youtube.com/watch?v=u6H-Qwf4nZA)
- [Secure Your Self-Hosted Apps with Authelia (DB Tech)](https://www.youtube.com/watch?v=4UKOh3ssQSU)
- [Authelia Setup with Traefik (Wolfgang's Channel)](https://www.youtube.com/watch?v=g7oUvxGqvPw)
- [Two-Factor Authentication Explained](https://www.youtube.com/watch?v=0mvCeNsTa1g)
### Articles & Guides
- [Authelia Official Documentation](https://www.authelia.com/docs/)
- [Integration with Traefik](https://www.authelia.com/integration/proxies/traefik/)
- [Access Control Configuration](https://www.authelia.com/configuration/security/access-control/)
- [Migration from Organizr/Authelia v3](https://www.authelia.com/docs/configuration/migration.html)
### Concepts to Learn
- **Single Sign-On (SSO):** Authentication once for multiple applications
- **Two-Factor Authentication (2FA):** Second verification method (TOTP, WebAuthn)
- **TOTP:** Time-based One-Time Password (Google Authenticator)
- **WebAuthn:** Web Authentication standard (Yubikey, Touch ID)
- **ForwardAuth:** Proxy authentication delegation
- **LDAP:** Lightweight Directory Access Protocol (user directories)
- **Session Management:** Cookie-based authentication tracking
- **Argon2:** Modern password hashing algorithm
## Docker Configuration
### Complete Service Definition
```yaml
authelia:
image: authelia/authelia:latest
container_name: authelia
restart: unless-stopped
networks:
- traefik-network
volumes:
- /opt/stacks/core/authelia:/config
environment:
- TZ=America/New_York
- AUTHELIA_JWT_SECRET=${AUTHELIA_JWT_SECRET}
- AUTHELIA_SESSION_SECRET=${AUTHELIA_SESSION_SECRET}
- AUTHELIA_STORAGE_ENCRYPTION_KEY=${AUTHELIA_STORAGE_ENCRYPTION_KEY}
labels:
- "traefik.enable=true"
- "traefik.http.routers.authelia.rule=Host(`auth.${DOMAIN}`)"
- "traefik.http.routers.authelia.entrypoints=websecure"
- "traefik.http.routers.authelia.tls.certresolver=letsencrypt"
# ForwardAuth Middleware
- "traefik.http.middlewares.authelia.forwardAuth.address=http://authelia:9091/api/verify?rd=https://auth.${DOMAIN}"
- "traefik.http.middlewares.authelia.forwardAuth.trustForwardHeader=true"
- "traefik.http.middlewares.authelia.forwardAuth.authResponseHeaders=Remote-User,Remote-Groups,Remote-Name,Remote-Email"
```
### Protecting Services
Add the Authelia middleware to any service:
```yaml
myservice:
image: myapp:latest
labels:
- "traefik.enable=true"
- "traefik.http.routers.myservice.rule=Host(`myservice.${DOMAIN}`)"
- "traefik.http.routers.myservice.entrypoints=websecure"
- "traefik.http.routers.myservice.tls.certresolver=letsencrypt"
- "traefik.http.routers.myservice.middlewares=authelia@docker" # <-- Add this
networks:
- traefik-network
```
## User Management
### Adding Users
1. **Generate password hash:**
```bash
docker exec -it authelia authelia crypto hash generate argon2 --password 'SecurePassword123!'
```
2. **Add to `users_database.yml`:**
```yaml
users:
newuser:
displayname: "New User"
password: "$argon2id$v=19$m=65536..." # Paste hash here
email: newuser@example.com
groups:
- users
```
3. **Restart Authelia:**
```bash
docker restart authelia
```
### Removing Users
Simply remove the user block from `users_database.yml` and restart.
### Resetting Passwords
Generate a new hash and replace the password field, then restart Authelia.
### Group-Based Access Control
```yaml
access_control:
rules:
# Only admins can access admin panel
- domain: "admin.yourdomain.com"
policy: two_factor
subject:
- "group:admins"
# Users can access media services
- domain:
- "plex.yourdomain.com"
- "jellyfin.yourdomain.com"
policy: one_factor
subject:
- "group:users"
```
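Authelia evaluates access-control rules top to bottom and applies the first match, so rule order matters. A toy evaluator (not Authelia's implementation; network criteria are omitted and wildcard matching is simplified via `fnmatch`) illustrates the behavior:

```python
import fnmatch

def decide(rules, domain, groups=(), default_policy="deny"):
    """Return the policy of the first rule matching domain (and subject, if set)."""
    for rule in rules:
        domains = rule["domain"] if isinstance(rule["domain"], list) else [rule["domain"]]
        if not any(fnmatch.fnmatch(domain, pattern) for pattern in domains):
            continue  # domain doesn't match this rule
        subjects = rule.get("subject")
        if subjects and not any(s in {f"group:{g}" for g in groups} for s in subjects):
            continue  # rule restricted to groups the user isn't in
        return rule["policy"]
    return default_policy
```

With the rules above, a member of `users` hitting `admin.yourdomain.com` falls through the admin rule and, matching nothing else, gets the default `deny` policy.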
## Advanced Topics
### SMTP Email Notifications
Configure email for password resets and notifications:
```yaml
notifier:
smtp:
username: your-email@gmail.com
password: your-app-password # Use app-specific password
host: smtp.gmail.com
port: 587
sender: your-email@gmail.com
subject: "[Authelia] {title}"
```
### LDAP Authentication
For advanced setups with Active Directory or FreeIPA:
```yaml
authentication_backend:
ldap:
url: ldap://ldap.example.com
base_dn: dc=example,dc=com
username_attribute: uid
additional_users_dn: ou=users
users_filter: (&({username_attribute}={input})(objectClass=person))
additional_groups_dn: ou=groups
groups_filter: (&(member={dn})(objectClass=groupOfNames))
user: cn=admin,dc=example,dc=com
password: admin-password
```
### Per-Service Policies
```yaml
access_control:
rules:
# Public services (no auth)
- domain: "public.yourdomain.com"
policy: bypass
# Internal network only (one factor)
- domain: "internal.yourdomain.com"
policy: one_factor
networks:
- 192.168.1.0/24
# High security (two factor required)
- domain:
- "banking.yourdomain.com"
- "finance.yourdomain.com"
policy: two_factor
```
### Network-Based Rules
Authelia evaluates rules top to bottom and applies the first match, so the local-network bypass must appear before the external two-factor catch-all:
```yaml
access_control:
rules:
# Bypass for local network
- domain: "*.yourdomain.com"
policy: bypass
networks:
- 192.168.1.0/24
- 172.16.0.0/12
# Require 2FA from external networks
- domain: "*.yourdomain.com"
policy: two_factor
```
### Custom Session Duration
```yaml
session:
expiration: 12h # Session expires after 12 hours
inactivity: 30m # Session expires after 30 min inactivity
remember_me_duration: 1M # Remember device for 1 month
```
## Troubleshooting
### Cannot Access Login Portal
```bash
# Check if Authelia is running
docker ps | grep authelia
# View logs
docker logs authelia
# Check configuration syntax
docker exec authelia authelia validate-config /config/configuration.yml
# Test connectivity
curl http://localhost:9091/api/health
```
### Services Not Requiring Authentication
```bash
# Verify middleware is applied
docker inspect service-name | grep authelia
# Check Traefik dashboard for middleware
# Visit: https://traefik.yourdomain.com
# Verify Authelia middleware definition
docker logs traefik | grep authelia
# Test ForwardAuth directly
curl -I -H "Host: service.yourdomain.com" http://authelia:9091/api/verify
```
### Login Not Working
```bash
# Check users_database.yml syntax
docker exec authelia cat /config/users_database.yml
# Verify password hash
docker exec authelia authelia crypto hash validate argon2 --password 'YourPassword' --hash '$argon2id...'
# Check logs for authentication attempts
docker logs authelia | grep -i auth
# Verify JWT and session secrets are set
docker exec authelia env | grep SECRET
```
### 2FA Not Working
```bash
# Check TOTP configuration
docker exec authelia cat /config/configuration.yml | grep -A5 totp
# Verify time synchronization (critical for TOTP)
docker exec authelia date
date # Compare with host
# Check TOTP database
sqlite3 /opt/stacks/core/authelia/db.sqlite3 "SELECT * FROM totp_configurations;"
# Reset 2FA for user (delete TOTP entry)
sqlite3 /opt/stacks/core/authelia/db.sqlite3 "DELETE FROM totp_configurations WHERE username='username';"
```
### Session Issues
```bash
# Check session configuration
docker exec authelia cat /config/configuration.yml | grep -A10 session
# Clear session database
sqlite3 /opt/stacks/core/authelia/db.sqlite3 "DELETE FROM user_sessions;"
# Verify domain matches
# session.domain should match your base domain
# Example: session.domain: yourdomain.com
# Services: *.yourdomain.com
```
### Configuration Validation
```bash
# Validate full configuration
docker exec authelia authelia validate-config /config/configuration.yml
# Check for common issues
docker logs authelia | grep -i error
docker logs authelia | grep -i warn
```
## Security Best Practices
1. **Strong Secrets:** Use 32+ character random secrets for JWT, session, and storage encryption
2. **Enable 2FA:** Require two-factor for external access
3. **Network Policies:** Use bypass for trusted networks, two_factor for internet
4. **Session Management:** Set appropriate expiration and inactivity timeouts
5. **Regular Updates:** Keep Authelia updated for security patches
6. **Email Notifications:** Configure SMTP for password reset security
7. **Backup:** Regularly backup `users_database.yml` and `db.sqlite3`
8. **Rate Limiting:** Configure `regulation` to prevent brute force
9. **Log Monitoring:** Review logs for failed authentication attempts
10. **Use HTTPS Only:** Never expose Authelia over plain HTTP
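Practice 1 is easy to automate; a minimal sketch using `openssl` (assumed available) to produce three independent 64-character hex secrets:

```shell
# Generate one secret each for JWT, session, and storage encryption.
# `openssl rand -hex 32` yields 32 random bytes as 64 hex characters.
JWT_SECRET=$(openssl rand -hex 32)
SESSION_SECRET=$(openssl rand -hex 32)
STORAGE_ENCRYPTION_KEY=$(openssl rand -hex 32)
echo "${#JWT_SECRET} ${#SESSION_SECRET} ${#STORAGE_ENCRYPTION_KEY}"  # → 64 64 64
```

Store the values as Docker secrets or in a permission-restricted `.env` file rather than directly in `configuration.yml`.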
## Common Use Cases
### Home Access (No Auth)
```yaml
- domain: "*.yourdomain.com"
policy: bypass
networks:
- 192.168.1.0/24
```
### Friends & Family (One Factor)
```yaml
- domain:
- "plex.yourdomain.com"
- "jellyfin.yourdomain.com"
policy: one_factor
```
### Admin Services (Two Factor)
```yaml
- domain:
- "dockge.yourdomain.com"
- "portainer.yourdomain.com"
policy: two_factor
subject:
- "group:admins"
```
### VPN Required
```yaml
- domain: "admin.yourdomain.com"
policy: bypass
networks:
- 10.8.0.0/24 # Gluetun VPN network
```
## Summary
Authelia is your homelab's security layer. It:
- Adds authentication to any service
- Provides SSO across all applications
- Enables 2FA even for services that don't support it
- Offers granular access control
- Protects against unauthorized access
- Integrates seamlessly with Traefik
Setting up Authelia properly is one of the most important security steps for your homelab. Take time to understand access control rules and test your configuration thoroughly. Always keep the `users_database.yml` and `db.sqlite3` backed up, as they contain critical authentication data.
## Related Services
- **[Traefik](traefik.md)** - Reverse proxy that integrates with Authelia for authentication
- **[DuckDNS](duckdns.md)** - Dynamic DNS required for SSL certificates
- **[Sablier](sablier.md)** - Lazy loading that can work with Authelia-protected services
## See Also
- **[SSO Bypass Guide](../docker-guidelines.md#when-to-use-authelia-sso)** - When to disable authentication for services
- **[2FA Setup](../getting-started.md#set-up-2fa-with-authelia)** - Enable two-factor authentication
- **[Access Control Rules](../docker-guidelines.md#authelia-bypass-example-jellyfin)** - Configure bypass rules for specific services

# Authentik - Identity Provider & SSO Platform
## Table of Contents
- [Overview](#overview)
- [What is Authentik?](#what-is-authentik)
- [Why Use Authentik?](#why-use-authentik)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Initial Setup](#initial-setup)
- [User Management](#user-management)
- [Application Integration](#application-integration)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Infrastructure / Authentication (Alternative to Authelia)
**Docker Images:**
- `ghcr.io/goauthentik/server` (Main server + worker)
- `postgres:12-alpine` (Database)
- `redis:alpine` (Cache)
**Default Stack:** `infrastructure.yml`
**Web UI:** `https://authentik.${DOMAIN}`
**Authentication:** Self-authenticating (creates own login portal)
**Components:** 4-service architecture (server, worker, database, cache)
## What is Authentik?
Authentik is an open-source Identity Provider (IdP) focused on flexibility and versatility. It provides Single Sign-On (SSO), OAuth2, SAML, LDAP, and more through an intuitive web interface. Unlike Authelia's file-based configuration, Authentik is fully managed through its GUI.
### Key Features
- **Multiple Protocols:** OAuth2, OIDC, SAML, LDAP, Proxy
- **Web-Based Configuration:** No file editing required
- **User Portal:** Self-service user dashboard
- **Policy Engine:** Flexible, rule-based access control
- **Flow Designer:** Visual authentication flow builder
- **Multi-Factor Auth:** TOTP, WebAuthn, Duo, SMS
- **LDAP Provider:** Act as LDAP server for legacy apps
- **User Self-Service:** Password reset, profile management, 2FA setup
- **Groups & Roles:** Hierarchical permission management
- **Branding:** Custom themes, logos, colors
- **Admin Interface:** Comprehensive management dashboard
- **Event Logging:** Detailed audit trails
## Why Use Authentik?
### Authentik vs Authelia
**Use Authentik if you want:**
- ✅ GUI-based configuration (no YAML editing)
- ✅ SAML support (not in Authelia)
- ✅ LDAP server functionality
- ✅ User self-service portal
- ✅ Visual flow designer
- ✅ More complex authentication flows
- ✅ OAuth2/OIDC provider for external apps
- ✅ Enterprise features (groups, roles, policies)
**Use Authelia if you want:**
- ✅ Simpler, lighter weight
- ✅ File-based configuration (GitOps friendly)
- ✅ Minimal resource usage
- ✅ Faster setup for basic use cases
- ✅ No database required
### Common Use Cases
1. **SSO for Homelab:** Single login for all services
2. **LDAP for Legacy Apps:** Provide LDAP to apps that need it
3. **OAuth Provider:** Act as identity provider for custom apps
4. **Self-Service Portal:** Let users manage their own accounts
5. **Advanced Policies:** Complex access control rules
6. **SAML Federation:** Integrate with enterprise systems
7. **User Management:** GUI for managing users and groups
## How It Works
```
User → Browser → Authentik (SSO Portal)
                  ↓
         Authentication Flow
      (Password + 2FA + Policies)
                  ↓
        Token/Session Issued
      ┌───────────┴─────────────┐
      ↓                         ↓
 Forward Auth              OAuth/SAML
  (Traefik)               (Applications)
      ↓                         ↓
Your Services          External Applications
```
### Component Architecture
```
┌─────────────────────────────────────────────┐
│ Authentik Server (Port 9000)                │
│ - Web UI                                    │
│ - API                                       │
│ - Auth endpoints                            │
│ - Forward auth provider                     │
└───────────┬─────────────────────────────────┘
┌───────────┴─────────────────────────────────┐
│ Authentik Worker (Background)               │
│ - Scheduled tasks                           │
│ - Email notifications                       │
│ - Policy evaluation                         │
│ - LDAP sync                                 │
└───────────┬─────────────────────────────────┘
┌───────────┴─────────────────────────────────┐
│ PostgreSQL Database                         │
│ - User accounts                             │
│ - Applications                              │
│ - Flows and stages                          │
│ - Policies and rules                        │
└───────────┬─────────────────────────────────┘
┌───────────┴─────────────────────────────────┐
│ Redis Cache                                 │
│ - Sessions                                  │
│ - Cache                                     │
│ - Rate limiting                             │
└─────────────────────────────────────────────┘
```
### Authentication Flow
1. **User accesses** protected service
2. **Traefik forwards** to Authentik
3. **Authentik checks** session cookie
4. **If not authenticated:**
- Redirect to Authentik login
- User enters credentials
- Multi-factor authentication (if enabled)
- Policy evaluation
- Session created
5. **If authenticated:**
- Check authorization policies
- Grant or deny access
6. **User accesses** service
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/infrastructure/authentik/
├── media/ # User-uploaded files, branding
├── custom-templates/ # Custom email templates
├── certs/ # Custom SSL certificates (optional)
└── backups/ # Database backups
```
### Environment Variables
```bash
# PostgreSQL Database
POSTGRES_USER=authentik
POSTGRES_PASSWORD=secure-database-password-here
POSTGRES_DB=authentik
# Authentik Configuration
AUTHENTIK_SECRET_KEY=long-random-secret-key-min-50-chars
AUTHENTIK_ERROR_REPORTING__ENABLED=false
# Email (Optional but recommended)
AUTHENTIK_EMAIL__HOST=smtp.gmail.com
AUTHENTIK_EMAIL__PORT=587
AUTHENTIK_EMAIL__USERNAME=your-email@gmail.com
AUTHENTIK_EMAIL__PASSWORD=your-app-password
AUTHENTIK_EMAIL__USE_TLS=true
AUTHENTIK_EMAIL__FROM=authentik@yourdomain.com
# Redis
AUTHENTIK_REDIS__HOST=authentik-redis
AUTHENTIK_REDIS__PORT=6379
# PostgreSQL Connection
AUTHENTIK_POSTGRESQL__HOST=authentik-db
AUTHENTIK_POSTGRESQL__NAME=authentik
AUTHENTIK_POSTGRESQL__USER=authentik
AUTHENTIK_POSTGRESQL__PASSWORD=secure-database-password-here
# Optional: Disable password policy
AUTHENTIK_PASSWORD_MINIMUM_LENGTH=8
# Optional: Log level
AUTHENTIK_LOG_LEVEL=info
```
**Generate Secret Key:**
```bash
openssl rand -hex 50
```
## Official Resources
- **Website:** https://goauthentik.io
- **Documentation:** https://docs.goauthentik.io
- **GitHub:** https://github.com/goauthentik/authentik
- **Discord:** https://discord.gg/jg33eMhnj6
- **Forum:** https://github.com/goauthentik/authentik/discussions
- **Docker Hub:** https://hub.docker.com/r/goauthentik/server
## Educational Resources
### Videos
- [Authentik - The BEST SSO Platform? (Techno Tim)](https://www.youtube.com/watch?v=N5unsATNpJk)
- [Authentik Setup and Configuration (DB Tech)](https://www.youtube.com/watch?v=D8ovMx_CILE)
- [Authentik vs Authelia Comparison](https://www.youtube.com/results?search_query=authentik+vs+authelia)
- [OAuth2 and OIDC Explained](https://www.youtube.com/watch?v=t18YB3xDfXI)
### Articles & Guides
- [Authentik Official Documentation](https://docs.goauthentik.io)
- [Forward Auth with Traefik](https://docs.goauthentik.io/docs/providers/proxy/forward_auth)
- [LDAP Provider Setup](https://docs.goauthentik.io/docs/providers/ldap/)
- [OAuth2/OIDC Provider](https://docs.goauthentik.io/docs/providers/oauth2/)
### Concepts to Learn
- **Identity Provider (IdP):** Service that manages user identities
- **OAuth2/OIDC:** Modern authentication protocols
- **SAML:** Enterprise federation protocol
- **LDAP:** Directory access protocol
- **Forward Auth:** Proxy-based authentication
- **Flows:** Customizable authentication sequences
- **Stages:** Building blocks of flows
- **Policies:** Rules for access control
- **Providers:** Application integration methods
## Docker Configuration
### Complete Stack Definition
```yaml
# PostgreSQL Database
authentik-db:
image: postgres:12-alpine
container_name: authentik-db
restart: unless-stopped
networks:
- authentik-network
volumes:
- /opt/stacks/infrastructure/authentik/database:/var/lib/postgresql/data
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_DB=${POSTGRES_DB}
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
interval: 10s
timeout: 5s
retries: 5
# Redis Cache
authentik-redis:
image: redis:alpine
container_name: authentik-redis
restart: unless-stopped
networks:
- authentik-network
healthcheck:
test: ["CMD-SHELL", "redis-cli ping"]
interval: 10s
timeout: 5s
retries: 5
# Authentik Server
authentik:
image: ghcr.io/goauthentik/server:latest
container_name: authentik
restart: unless-stopped
command: server
networks:
- traefik-network
- authentik-network
volumes:
- /opt/stacks/infrastructure/authentik/media:/media
- /opt/stacks/infrastructure/authentik/custom-templates:/templates
environment:
- AUTHENTIK_SECRET_KEY=${AUTHENTIK_SECRET_KEY}
- AUTHENTIK_ERROR_REPORTING__ENABLED=false
- AUTHENTIK_REDIS__HOST=authentik-redis
- AUTHENTIK_POSTGRESQL__HOST=authentik-db
- AUTHENTIK_POSTGRESQL__NAME=${POSTGRES_DB}
- AUTHENTIK_POSTGRESQL__USER=${POSTGRES_USER}
- AUTHENTIK_POSTGRESQL__PASSWORD=${POSTGRES_PASSWORD}
- TZ=America/New_York
depends_on:
- authentik-db
- authentik-redis
labels:
- "traefik.enable=true"
- "traefik.http.routers.authentik.rule=Host(`authentik.${DOMAIN}`)"
- "traefik.http.routers.authentik.entrypoints=websecure"
- "traefik.http.routers.authentik.tls.certresolver=letsencrypt"
- "traefik.http.services.authentik.loadbalancer.server.port=9000"
# Forward Auth Middleware
- "traefik.http.middlewares.authentik.forwardAuth.address=http://authentik:9000/outpost.goauthentik.io/auth/traefik"
- "traefik.http.middlewares.authentik.forwardAuth.trustForwardHeader=true"
- "traefik.http.middlewares.authentik.forwardAuth.authResponseHeaders=X-authentik-username,X-authentik-groups,X-authentik-email,X-authentik-name,X-authentik-uid"
# Authentik Worker (Background Tasks)
authentik-worker:
image: ghcr.io/goauthentik/server:latest
container_name: authentik-worker
restart: unless-stopped
command: worker
networks:
- authentik-network
volumes:
- /opt/stacks/infrastructure/authentik/media:/media
- /opt/stacks/infrastructure/authentik/custom-templates:/templates
- /opt/stacks/infrastructure/authentik/certs:/certs
environment:
- AUTHENTIK_SECRET_KEY=${AUTHENTIK_SECRET_KEY}
- AUTHENTIK_ERROR_REPORTING__ENABLED=false
- AUTHENTIK_REDIS__HOST=authentik-redis
- AUTHENTIK_POSTGRESQL__HOST=authentik-db
- AUTHENTIK_POSTGRESQL__NAME=${POSTGRES_DB}
- AUTHENTIK_POSTGRESQL__USER=${POSTGRES_USER}
- AUTHENTIK_POSTGRESQL__PASSWORD=${POSTGRES_PASSWORD}
- TZ=America/New_York
depends_on:
- authentik-db
- authentik-redis
networks:
authentik-network:
internal: true # Database and Redis not exposed externally
traefik-network:
external: true
```
### Protecting Services with Authentik
```yaml
myservice:
image: myapp:latest
labels:
- "traefik.enable=true"
- "traefik.http.routers.myservice.rule=Host(`myservice.${DOMAIN}`)"
- "traefik.http.routers.myservice.entrypoints=websecure"
- "traefik.http.routers.myservice.tls.certresolver=letsencrypt"
- "traefik.http.routers.myservice.middlewares=authentik@docker" # Add this
networks:
- traefik-network
```
## Initial Setup
### First-Time Configuration
1. **Deploy Stack:**
```bash
cd /opt/stacks/infrastructure
docker compose up -d authentik-db authentik-redis authentik authentik-worker
```
2. **Wait for Initialization:**
```bash
# Watch logs
docker logs -f authentik
# Wait for: "Bootstrap completed successfully"
```
3. **Access Web UI:**
- Navigate to: `https://authentik.yourdomain.com/if/flow/initial-setup/`
- This is a **one-time URL** for initial admin setup
4. **Create Admin Account:**
- Email: `admin@yourdomain.com`
- Username: `admin`
- Password: Strong password
- Complete setup
5. **Login:**
- Go to: `https://authentik.yourdomain.com`
- Login with admin credentials
- You'll see the user interface
6. **Access Admin Interface:**
- Click on your profile (top right)
- Select "Admin Interface"
- This is where you configure everything
### Initial Configuration Steps
1. **Configure Branding:**
- System → Settings → Branding
- Upload logo
- Set colors and theme
2. **Configure Email (Recommended):**
- System → Settings → Email
- SMTP settings
- Test email delivery
3. **Create Default Outpost:**
- Applications → Outposts
- Should have one called "authentik Embedded Outpost"
- This handles forward auth
4. **Create Application:**
- Applications → Applications → Create
- Name: Your service name
- Slug: your-service
- Provider: Create new provider
## User Management
### Creating Users
**Via Admin Interface:**
1. Directory → Users → Create
2. Fill in details:
- Username (required)
- Email (required)
- Name
- Active status
3. Set password or send activation email
4. Assign to groups (optional)
**Via User Portal:**
- Enable self-registration in Flow settings
- Users can sign up themselves
- Admin approval optional
### Creating Groups
1. Directory → Groups → Create
2. Name and optional parent group
3. Add users
4. Assign to applications
### Group Hierarchy
```
All Users
├── Admins
│   ├── System Admins
│   └── Application Admins
├── Users
│   ├── Family
│   └── Friends
└── Guests
```
### Password Policies
1. Policies → Create → Password Policy
2. Configure:
- Minimum length
- Uppercase/lowercase requirements
- Numbers and symbols
- Complexity score
3. Bind to flows
## Application Integration
### Forward Auth (Traefik)
**For most homelab services:**
1. **Create Provider:**
- Applications → Providers → Create
- Type: Proxy Provider
- Name: `my-service-proxy`
- Authorization flow: Default
- Forward auth: External host: `https://myservice.yourdomain.com`
2. **Create Application:**
- Applications → Applications → Create
- Name: `My Service`
- Slug: `my-service`
- Provider: Select provider created above
- Launch URL: `https://myservice.yourdomain.com`
3. **Configure Service:**
```yaml
myservice:
labels:
- "traefik.http.routers.myservice.middlewares=authentik@docker"
```
### OAuth2/OIDC Integration
**For apps supporting OAuth2:**
1. **Create Provider:**
- Type: OAuth2/OpenID Provider
- Client type: Confidential
- Client ID: Auto-generated or custom
- Client Secret: Auto-generated (save this!)
- Redirect URIs: `https://myapp.com/oauth/callback`
- Signing Key: Auto-select
2. **Create Application:**
- Link to OAuth provider
- Users can now login via "Sign in with Authentik"
3. **Configure Application:**
- OIDC Discovery URL: `https://authentik.yourdomain.com/application/o/{slug}/.well-known/openid-configuration`
- Client ID: From provider
- Client Secret: From provider
### LDAP Provider
**For legacy apps requiring LDAP:**
1. **Create Provider:**
- Type: LDAP Provider
- Name: `LDAP Service`
- Base DN: `dc=ldap,dc=goauthentik,dc=io`
- Bind Flow: Default
2. **Create Application:**
- Link to LDAP provider
3. **Create Outpost:**
- Applications → Outposts → Create
- Type: LDAP
- Providers: Select LDAP provider
- Port: 389 or 636 (LDAPS)
4. **Configure Application:**
- LDAP Server: `ldap://authentik-ldap:389`
- Base DN: From provider
- Bind DN: `cn=admin,dc=ldap,dc=goauthentik,dc=io`
- Bind Password: From Authentik
### SAML Provider
**For enterprise SAML apps:**
1. **Create Provider:**
- Type: SAML Provider
- ACS URL: From application
- Issuer: Auto-generated
- Service Provider Binding: POST or Redirect
- Audience: From application
2. **Download Metadata:**
- Export metadata XML
- Import into target application
## Advanced Topics
### Custom Flows
**Create Custom Login Flow:**
1. **Flows → Create:**
- Name: `Custom Login`
- Designation: Authentication
- Authentication: User/Password
2. **Add Stages:**
- Identification Stage (username/email)
- Password Stage
- MFA Validation Stage
- User Write Stage
3. **Configure Flow:**
- Policy bindings
- Stage bindings
- Flow order
4. **Assign to Applications:**
- Applications → Select app → Authentication flow
### Policy Engine
**Create Access Policy:**
```python
# Example policy: Only allow admins
return user.groups.filter(name="Admins").exists()
```
```python
# Example: Only allow access during business hours
import datetime
now = datetime.datetime.now()
return 9 <= now.hour < 17
```
```python
# Example: Block specific IPs
blocked_ips = ["192.168.1.100", "10.0.0.50"]
return request.context.get("ip") not in blocked_ips
```
**Bind Policy:**
1. Applications → Select app
2. Policy Bindings → Create
3. Select policy
4. Set order and action (allow/deny)
### Custom Branding
**Custom Theme:**
1. System → Settings → Branding
2. Upload logo
3. Set background image
4. Custom CSS:
```css
:root {
--ak-accent: #1f6feb;
--ak-dark-background: #0d1117;
}
```
**Custom Email Templates:**
1. Create template in `/custom-templates/email/`
2. Use Authentik template variables
3. Reference in email stages
### Events and Monitoring
**Event Logging:**
- Events → Event Logs
- Filter by user, action, app
- Export for analysis
**Notifications:**
- Events → Notification Rules
- Trigger on specific events
- Send to email, webhook, etc.
**Monitoring:**
- System → System Tasks
- Worker status
- Database connections
- Cache status
### LDAP Synchronization
Sync users from external LDAP:
1. **Create LDAP Source:**
- Directory → Federation & Social Login → Create
- Type: LDAP Source
- Server URI: `ldaps://ldap.example.com`
- Bind CN and password
- Base DN for users and groups
2. **Sync Configuration:**
- User object filter
- Group object filter
- Attribute mapping
3. **Manual Sync:**
- Directory → Sources → Select source → Sync
## Troubleshooting
### Can't Access Initial Setup URL
```bash
# Check if Authentik is running
docker ps | grep authentik
# View logs
docker logs authentik
# If you missed initial setup, create admin via CLI
docker exec -it authentik ak create_admin_group
docker exec -it authentik ak create_recovery_key
# Use recovery key to access /if/flow/recovery/
```
### Database Connection Errors
```bash
# Check if database is running
docker ps | grep authentik-db
# Check database health
docker exec authentik-db pg_isready -U authentik
# View database logs
docker logs authentik-db
# Test connection
docker exec authentik-db psql -U authentik -d authentik -c "SELECT 1;"
# Reset database (WARNING: deletes all data)
# Note: this stack uses a bind mount, not a named volume
docker compose down
sudo rm -rf /opt/stacks/infrastructure/authentik/database
docker compose up -d
```
### Redis Connection Errors
```bash
# Check if Redis is running
docker ps | grep authentik-redis
# Test Redis
docker exec authentik-redis redis-cli ping
# View Redis logs
docker logs authentik-redis
# Flush Redis cache (safe)
docker exec authentik-redis redis-cli FLUSHALL
```
### Services Not Being Protected
```bash
# Verify middleware is applied
docker inspect service-name | grep authentik
# Check Traefik logs
docker logs traefik | grep authentik
# Test forward auth directly
curl -I -H "Host: service.yourdomain.com" \
http://authentik:9000/outpost.goauthentik.io/auth/traefik
# Check outpost status
# Admin Interface → Applications → Outposts → Status should be "healthy"
```
### Login Not Working
```bash
# Check Authentik logs
docker logs authentik | grep -i error
# Verify flows are configured
# Admin Interface → Flows → Should have default flows
# Check browser console
# F12 → Console → Look for errors
# Clear cookies and try again
# Browser → DevTools → Application → Clear cookies
# Test with incognito/private window
```
### Worker Not Processing Tasks
```bash
# Check worker status
docker ps | grep authentik-worker
# View worker logs
docker logs authentik-worker
# Restart worker
docker restart authentik-worker
# Check scheduled tasks
# Admin Interface → System → System Tasks
```
### High Memory Usage
```bash
# Check container stats
docker stats authentik authentik-db authentik-redis
# Restart services
docker restart authentik authentik-worker
# Optimize database
docker exec authentik-db vacuumdb -U authentik -d authentik -f
# Clear Redis cache
docker exec authentik-redis redis-cli FLUSHALL
```
### Email Not Sending
```bash
# Test email configuration
# Admin Interface → System → Settings → Email → Test
# Check worker logs (worker handles emails)
docker logs authentik-worker | grep -i email
# Verify SMTP settings
docker exec authentik env | grep EMAIL
# For Gmail, use App Password, not account password
# https://support.google.com/accounts/answer/185833
```
## Backup and Restore
### Backup
**Database Backup:**
```bash
# Backup PostgreSQL
docker exec authentik-db pg_dump -U authentik authentik > authentik-backup-$(date +%Y%m%d).sql
# Backup media files
tar -czf authentik-media-$(date +%Y%m%d).tar.gz /opt/stacks/infrastructure/authentik/media
```
**Automated Backup Script:**
```bash
#!/bin/bash
BACKUP_DIR="/opt/backups/authentik"
DATE=$(date +%Y%m%d-%H%M%S)
# Create backup directory
mkdir -p $BACKUP_DIR
# Backup database
docker exec authentik-db pg_dump -U authentik authentik | gzip > $BACKUP_DIR/authentik-db-$DATE.sql.gz
# Backup media
tar -czf $BACKUP_DIR/authentik-media-$DATE.tar.gz /opt/stacks/infrastructure/authentik/media
# Keep only last 7 days
find $BACKUP_DIR -name "authentik-*" -mtime +7 -delete
echo "Backup completed: $DATE"
```
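To run the script automatically, schedule it with cron. A sketch of a crontab entry — the script path is an assumption; place the script wherever you keep admin scripts and mark it executable:

```
# m h dom mon dow  command
0 3 * * * /opt/scripts/authentik-backup.sh >> /var/log/authentik-backup.log 2>&1
```

This runs the backup daily at 03:00 and appends output to a log file for review.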
### Restore
```bash
# Stop services
docker compose down
# Restore database (drop and recreate first so the dump loads into an empty database)
docker compose up -d authentik-db
docker exec authentik-db psql -U authentik -d postgres -c "DROP DATABASE IF EXISTS authentik; CREATE DATABASE authentik;"
docker exec -i authentik-db psql -U authentik authentik < authentik-backup-20240112.sql
# Restore media
tar -xzf authentik-media-20240112.tar.gz -C /
# Start services
docker compose up -d
```
## Performance Optimization
### Database Optimization
```bash
# Vacuum and analyze
docker exec authentik-db vacuumdb -U authentik -d authentik -f -z
# Reindex
docker exec authentik-db reindexdb -U authentik -d authentik
```
### Redis Configuration
```yaml
authentik-redis:
command: >
--maxmemory 256mb
--maxmemory-policy allkeys-lru
--save ""
```
### Worker Scaling
Run multiple workers for better performance:
```yaml
authentik-worker:
deploy:
replicas: 2
# Or create multiple named workers
```
## Security Best Practices
1. **Strong Secret Key:** Use 50+ character random key
2. **Email Verification:** Enable email verification for new users
3. **MFA Required:** Enforce 2FA for admin accounts
4. **Policy Bindings:** Use policies to restrict access
5. **Regular Backups:** Automate database and media backups
6. **Update Regularly:** Keep Authentik updated
7. **Monitor Events:** Review event logs for suspicious activity
8. **Secure Database:** Never expose PostgreSQL publicly
9. **Secure Redis:** Keep Redis on internal network
10. **HTTPS Only:** Always use SSL/TLS
## Migration from Authelia
**Considerations:**
1. **Different Philosophy:** Authelia is file-based, Authentik is database-based
2. **User Migration:** No automated tool - manual recreation needed
3. **Flow Configuration:** Different access control model
4. **Resource Usage:** Authentik uses more resources (database, Redis)
5. **Flexibility:** Authentik offers more features but more complexity
**Steps:**
1. Deploy Authentik stack alongside Authelia
2. Configure Authentik flows and policies
3. Recreate users and groups in Authentik
4. Test services with Authentik middleware
5. Gradually migrate services
6. Remove Authelia when confident
## Summary
Authentik is a powerful, flexible identity provider that offers:
- Web-based configuration (no file editing)
- Multiple authentication protocols (OAuth2, SAML, LDAP)
- User self-service portal
- Advanced policy engine
- Visual flow designer
- Enterprise-grade features
**Perfect for:**
- Complex authentication requirements
- Multiple user groups and roles
- SAML integration needs
- LDAP for legacy applications
- User self-service requirements
- OAuth2/OIDC provider functionality
**Trade-offs:**
- Higher resource usage (4 containers vs 1)
- More complex setup
- Database dependency
- Steeper learning curve
**Remember:**
- Use strong secret keys
- Enable MFA for admins
- Regular database backups
- Monitor event logs
- Start with simple flows
- Gradually add complexity
- Test thoroughly before production use
- Authentik and Authelia can coexist during migration

# Backrest - Comprehensive Backup Solution
**Category:** Backup & Recovery
**Description:** Backrest is a web-based UI for Restic, providing scheduled backups, retention policies, and a beautiful interface for managing backups across multiple repositories and destinations. It serves as the default backup strategy for AI-Homelab.
**Docker Image:** `garethgeorge/backrest:latest`
**Documentation:** [Backrest GitHub](https://github.com/garethgeorge/backrest)
## Overview
### What is Backrest?
Backrest (latest: v1.10.1) is a web-based UI for Restic, built with Go and SvelteKit. It simplifies Restic management:
- **Web Interface**: Create repos, plans, and monitor backups.
- **Automation**: Scheduled backups, hooks (pre/post commands).
- **Integration**: Runs Restic under the hood.
- **Features**: Multi-repo support, retention policies, notifications.
### What is Restic?
Restic (latest: v0.18.1) is a modern, open-source backup program written in Go. It provides:
- **Deduplication**: Efficiently stores only changed data.
- **Encryption**: All data is encrypted with AES-256.
- **Snapshots**: Point-in-time backups with metadata.
- **Cross-Platform**: Works on Linux, macOS, Windows.
- **Backends**: Supports local, SFTP, S3, etc.
- **Features**: Compression, locking, pruning, mounting snapshots.
## Configuration
### Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `BACKREST_DATA` | Internal data directory | `/data` |
| `BACKREST_CONFIG` | Configuration file path | `/config/config.json` |
| `BACKREST_UI_CONFIG` | UI configuration JSON | `{"baseURL": "https://backrest.${DOMAIN}"}` |
### Ports
- **9898** - Web UI port
### Volumes
- `./data:/data` - Backrest internal data and repositories
- `./config:/config` - Configuration files
- `./cache:/cache` - Restic cache for performance
- `/var/lib/docker/volumes:/docker_volumes:ro` - Access to Docker volumes
- `/opt/stacks:/opt/stacks:ro` - Access to service configurations
- `/var/run/docker.sock:/var/run/docker.sock` - Docker API access for hooks
## Usage
### Accessing Backrest
- **URL**: `https://backrest.${DOMAIN}`
- **Authentication**: Via Authelia SSO
- **UI Sections**: Repos, Plans, Logs
### Managing Repositories
Repositories store your backups. Create one for your main backup location.
#### Create Repository
1. Go to **Repos****Add Repo**
2. **Name**: `main-backup-repo`
3. **Storage**: Choose backend (Local, SFTP, S3, etc.)
4. **Password**: Set strong encryption password
5. **Initialize**: Backrest runs `restic init`
### Creating Backup Plans
Plans define what, when, and how to back up.
#### Database Backup Plan (Recommended)
```json
{
  "id": "database-backup",
  "repo": "main-backup-repo",
  "paths": [
    "/docker_volumes/*_mysql/_data",
    "/docker_volumes/*_postgres/_data"
  ],
  "schedule": {
    "maxFrequencyDays": 1
  },
  "hooks": [
    {
      "actionCommand": {
        "command": "for vol in $(docker volume ls -q | grep -E '_(mysql|postgres)$'); do docker ps -q --filter volume=$vol | xargs -r docker stop || true; done"
      },
      "conditions": ["CONDITION_SNAPSHOT_START"]
    },
    {
      "actionCommand": {
        "command": "for vol in $(docker volume ls -q | grep -E '_(mysql|postgres)$'); do docker ps -a -q --filter volume=$vol | xargs -r docker start || true; done"
      },
      "conditions": ["CONDITION_SNAPSHOT_END"]
    }
  ],
  "retention": {
    "policyKeepLastN": 30
  }
}
```
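A quick way to sanity-check a plan before pasting it into Backrest; the required keys below come from the sample plans in this guide, not from Backrest's full schema:

```python
import json

# Required keys taken from the sample plans above (an assumption,
# not Backrest's complete schema).
REQUIRED = {"id", "repo", "paths", "schedule", "retention"}

def check_plan(text):
    """Return the required keys missing from a plan JSON string."""
    plan = json.loads(text)
    return sorted(REQUIRED - plan.keys())

sample = '{"id": "p", "repo": "r", "paths": ["/x"], "schedule": {}, "retention": {}}'
print(check_plan(sample))  # []
```

A non-empty result means the plan is missing a top-level section.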
#### Service Configuration Backup Plan
```json
{
  "id": "config-backup",
  "repo": "main-backup-repo",
  "paths": [
    "/opt/stacks"
  ],
  "excludes": [
    "**/cache",
    "**/tmp",
    "**/log"
  ],
  "schedule": {
    "maxFrequencyDays": 1
  },
  "retention": {
    "policyKeepLastN": 14
  }
}
```
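The exclude patterns above skip cache, temp, and log directories anywhere under the backup path; a small sketch of the intent (restic's real glob matcher is more general than this):

```python
from pathlib import PurePosixPath

# Directory names from the exclude globs above; this only illustrates
# the intent, not restic's actual matching rules.
EXCLUDED_DIRS = {"cache", "tmp", "log"}

def excluded(path):
    """True if any path component is an excluded directory name."""
    return any(part in EXCLUDED_DIRS for part in PurePosixPath(path).parts)

print(excluded("/opt/stacks/core/cache"))    # True
print(excluded("/opt/stacks/core/data.db"))  # False
```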
### Running Backups
- **Manual**: Plans → Select plan → **Run Backup Now**
- **Scheduled**: Runs automatically per plan schedule
- **Monitor**: Check **Logs** tab for status and errors
### Restoring Data
1. Go to **Repos** → Select repo → **Snapshots**
2. Choose snapshot → **Restore**
3. Select paths/files → Set target directory
4. Run restore operation
## Best Practices
### Security
- Use strong repository passwords
- Limit Backrest UI access via Authelia
- Store passwords securely (not in config files)
### Performance
- Schedule backups during low-usage hours
- Use compression for large backups
- Monitor repository size growth
### Retention
- Keep 30 daily snapshots for critical data
- Keep 14 snapshots for configurations
- Regularly prune old snapshots
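The keep-last-N policy used in the plans above (`policyKeepLastN`) can be illustrated as:

```python
from datetime import date, timedelta

def keep_last_n(snapshot_dates, n):
    """Return the n most recent snapshots (keep-last-N behaviour)."""
    return sorted(snapshot_dates, reverse=True)[:n]

# 40 daily snapshots; a keep-last-30 policy retains the newest 30
# and the older 10 become candidates for pruning.
snapshots = [date(2024, 1, 1) + timedelta(days=i) for i in range(40)]
kept = keep_last_n(snapshots, 30)
print(len(kept), min(kept))  # 30 2024-01-11
```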
### Testing
- Test restore procedures regularly
- Verify backup integrity
- Document restore processes
## Integration with AI-Homelab
### Homepage Dashboard
Add Backrest to your Homepage dashboard:
```yaml
# In homepage/services.yaml
- Infrastructure:
    - Backrest:
        icon: backup.png
        href: https://backrest.${DOMAIN}
        description: Backup management
        widget:
          type: iframe
          url: https://backrest.${DOMAIN}
```
### Monitoring
Monitor backup success with Uptime Kuma or Grafana alerts.
## Troubleshooting
### Common Issues
**Backup Failures**
- Check repository access and credentials
- Verify source paths exist and are readable
- Review hook commands for syntax errors
**Hook Issues**
- Ensure Docker socket is accessible
- Check that containers can be stopped/started
- Verify hook commands work manually
**Performance Problems**
- Check available disk space
- Monitor CPU/memory usage during backups
- Consider excluding large, frequently changing files
**Restore Issues**
- Ensure target directory exists and is writable
- Check file permissions
- Verify snapshot integrity
## Advanced Features
### Multiple Repositories
- **Local**: For fast, local backups
- **Remote**: SFTP/S3 for offsite storage
- **Hybrid**: Local for speed, remote for safety
### Custom Hooks
```bash
# Pre-backup: Stop services
docker compose -f /opt/stacks/core/docker-compose.yml stop
# Post-backup: Start services
docker compose -f /opt/stacks/core/docker-compose.yml start
```
### Notifications
Configure webhooks in Backrest settings for backup status alerts.
## Migration from Other Solutions
### From Duplicati
1. Export Duplicati configurations
2. Create equivalent Backrest plans
3. Test backups and restores
4. Decommission Duplicati
### From Manual Scripts
1. Identify current backup sources and schedules
2. Create Backrest plans with same parameters
3. Add appropriate hooks for service management
4. Test and validate
## Related Documentation
- **[Backup Strategy Guide](../Restic-BackRest-Backup-Guide.md)** - Comprehensive setup and usage guide
- **[Docker Guidelines](../docker-guidelines.md)** - Volume management and persistence
- **[Quick Reference](../quick-reference.md)** - Command reference and troubleshooting
---
**Backrest provides enterprise-grade backup capabilities with an intuitive web interface, making it the perfect default backup solution for AI-Homelab.**

# Bazarr - Subtitle Automation
## Table of Contents
- [Overview](#overview)
- [What is Bazarr?](#what-is-bazarr)
- [Why Use Bazarr?](#why-use-bazarr)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Initial Setup](#initial-setup)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Subtitle Management
**Docker Image:** [linuxserver/bazarr](https://hub.docker.com/r/linuxserver/bazarr)
**Default Stack:** `media-management.yml`
**Web UI:** `https://bazarr.${DOMAIN}` or `http://SERVER_IP:6767`
**Authentication:** Optional (configurable)
**Ports:** 6767
## What is Bazarr?
Bazarr is a companion application to Sonarr and Radarr that manages and downloads subtitles. It automatically downloads missing subtitles for your movies and TV shows based on your language preferences, and can even upgrade subtitles when better versions become available. It integrates seamlessly with your existing media stack.
### Key Features
- **Automatic Subtitle Downloads:** Missing subtitles retrieved automatically
- **Multi-Language Support:** Download multiple languages per media
- **Sonarr/Radarr Integration:** Syncs with your libraries
- **Provider Management:** 20+ subtitle providers
- **Quality Scoring:** Prioritize best subtitle releases
- **Forced Subtitles:** Foreign language only scenes
- **Hearing Impaired:** SDH subtitle support
- **Manual Search:** Override automatic selection
- **Upgrade Subtitles:** Replace with better versions
- **Embedded Subtitles:** Extract from MKV files
- **Custom Post-Processing:** Scripts on download
## Why Use Bazarr?
1. **Accessibility:** Subtitles for hearing impaired viewers
2. **Foreign Language:** Watch content in any language
3. **Automatic Management:** No manual subtitle searching
4. **Quality Control:** Sync and score subtitles
5. **Multi-Language:** Support for multiple languages
6. **Upgrade System:** Replace poor quality subtitles
7. **Integration:** Works with Sonarr/Radarr
8. **Time Saving:** Automated workflow
9. **Free & Open Source:** No cost
10. **Comprehensive Providers:** Access to all major sources
## How It Works
```
New Movie/Episode Added
    (via Sonarr/Radarr)
        ↓
Bazarr Detects Missing Subtitles
        ↓
Searches Subtitle Providers
    (OpenSubtitles, Subscene, etc.)
        ↓
Evaluates Options (Score, Sync)
        ↓
Downloads Best Match
        ↓
Places Subtitle Next to Media File
        ↓
Plex/Jellyfin Detects Subtitle
        ↓
Subtitle Available in Player
```
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/media-management/bazarr/config/   # Bazarr configuration
/mnt/media/movies/                            # Movie library (with subtitles)
/mnt/media/tv/                                # TV library (with subtitles)

Subtitle Structure:
/mnt/media/movies/
  The Matrix (1999)/
    The Matrix (1999).mkv
    The Matrix (1999).en.srt          # English
    The Matrix (1999).en.forced.srt   # Forced English
    The Matrix (1999).es.srt          # Spanish
```
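The naming convention above (video basename, language code, optional `forced` flag, `.srt` extension next to the media file) can be sketched as:

```python
from pathlib import Path

def subtitle_path(video, lang, forced=False):
    """Build the external-subtitle filename next to a media file."""
    stem = Path(video).with_suffix("").as_posix()
    return f"{stem}.{lang}.forced.srt" if forced else f"{stem}.{lang}.srt"

print(subtitle_path("/mnt/media/movies/The Matrix (1999)/The Matrix (1999).mkv", "en"))
```

Plex and Jellyfin pick these files up automatically as long as the basename matches the video exactly.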
### Environment Variables
```bash
# User permissions
PUID=1000
PGID=1000
# Timezone
TZ=America/New_York
```
## Official Resources
- **Website:** https://www.bazarr.media
- **Documentation:** https://wiki.bazarr.media
- **GitHub:** https://github.com/morpheus65535/bazarr
- **Discord:** https://discord.gg/MH2e2eb
- **Reddit:** https://reddit.com/r/bazarr
- **Docker Hub:** https://hub.docker.com/r/linuxserver/bazarr
## Educational Resources
### Videos
- [Bazarr Setup Guide (Techno Tim)](https://www.youtube.com/results?search_query=bazarr+setup+techno+tim)
- [Subtitle Automation](https://www.youtube.com/results?search_query=bazarr+sonarr+radarr)
- [Bazarr Best Settings](https://www.youtube.com/results?search_query=bazarr+best+settings)
### Articles & Guides
- [Official Wiki](https://wiki.bazarr.media)
- [Setup Guide](https://wiki.bazarr.media/Getting-Started)
- [Provider Setup](https://wiki.bazarr.media/Subtitle-Providers/)
### Concepts to Learn
- **Subtitle Formats:** SRT, ASS, SSA, VTT
- **Forced Subtitles:** Foreign language only
- **Hearing Impaired (SDH):** Sound descriptions
- **Subtitle Sync:** Timing adjustment
- **Subtitle Score:** Quality metrics
- **Embedded Subtitles:** Within MKV container
- **External Subtitles:** Separate .srt files
## Docker Configuration
### Complete Service Definition
```yaml
bazarr:
  image: linuxserver/bazarr:latest
  container_name: bazarr
  restart: unless-stopped
  networks:
    - traefik-network
  ports:
    - "6767:6767"
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=America/New_York
  volumes:
    - /opt/stacks/media-management/bazarr/config:/config
    - /mnt/media/movies:/movies
    - /mnt/media/tv:/tv
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.bazarr.rule=Host(`bazarr.${DOMAIN}`)"
    - "traefik.http.routers.bazarr.entrypoints=websecure"
    - "traefik.http.routers.bazarr.tls.certresolver=letsencrypt"
    - "traefik.http.routers.bazarr.middlewares=authelia@docker"
    - "traefik.http.services.bazarr.loadbalancer.server.port=6767"
```
## Initial Setup
### First Access
1. **Start Container:**
```bash
docker compose up -d bazarr
```
2. **Access Web UI:**
- Local: `http://SERVER_IP:6767`
- Domain: `https://bazarr.yourdomain.com`
3. **Initial Configuration:**
- Settings → Languages
- Settings → Providers
- Settings → Sonarr
- Settings → Radarr
### Language Configuration
**Settings → Languages → Languages Filter:**
1. **Languages:**
- Add language(s) you want subtitles for
- Example: English, Spanish, French
- Order determines priority
2. **Profile 1 (Default):**
- Name: "Default"
- Languages: English
- ✓ Enabled
3. **Create Additional Profiles:**
- Multi-language profile
- Hearing impaired profile
- Foreign language only
**Settings → Languages → Default Settings:**
- **Single Language:** ✗ (allow multiple)
- **Hearing-impaired:** No (or Prefer/Require if needed)
- **Forced Only:** No (foreign language scenes only)
### Subtitle Providers
**Settings → Providers:**
**Add Subtitle Providers:**
1. **OpenSubtitles.org:**
- Create account at opensubtitles.org
- Get API key
- Add to Bazarr
- Username and API key
- Save
2. **OpenSubtitles.com (New):**
- Newer version
- Better quality
- Requires account
- API key
3. **Subscene:**
- No account required
- Good quality
- Enable
4. **Addic7ed:**
- Requires account
- Great for TV shows
- Username and password
5. **YIFY Subtitles:**
- Movies
- No account
- Enable
**Provider Priority:**
- Drag to reorder
- Top providers checked first
- Lower providers as fallback
**Recommended Providers:**
- OpenSubtitles.com (best quality)
- Addic7ed (TV shows)
- OpenSubtitles.org (backup)
- YIFY (movies)
- Subscene (backup)
### Anti-Captcha (Optional)
Some providers use captchas:
**Settings → Providers → Anti-Captcha:**
- Service: Anti-Captcha
- API Key: From anti-captcha.com
- Costs money, optional
### Sonarr Integration
**Settings → Sonarr → Add:**
1. **Name:** Sonarr
2. **Address:** `http://sonarr:8989`
3. **API Key:** From Sonarr → Settings → General
4. **Download Only Monitored:** ✓ Yes
5. **Exclude Season Packs:** ✓ Yes (optional)
6. **Full Update:** Every 6 hours
7. **Test → Save**
**Sync:**
- Bazarr imports all shows from Sonarr
- Monitors for new episodes
- Downloads subtitles automatically
### Radarr Integration
**Settings → Radarr → Add:**
1. **Name:** Radarr
2. **Address:** `http://radarr:7878`
3. **API Key:** From Radarr → Settings → General
4. **Download Only Monitored:** ✓ Yes
5. **Full Update:** Every 6 hours
6. **Test → Save**
**Sync:**
- Bazarr imports all movies from Radarr
- Monitors for new movies
- Downloads subtitles automatically
### Subtitle Search Settings
**Settings → Subtitles → Subtitle Options:**
**Search:**
- **Adaptive Searching:** ✓ Enable (better results)
- **Minimum Score:** 80% (adjust based on quality needs)
- **Download Hearing-Impaired:** Prefer (or Don't Use)
- **Use Scene Name:** ✓ Enable
- **Use Original Format:** ✓ Enable (keep .srt, .ass, etc.)
**Upgrade:**
- **Upgrade Previously Downloaded:** ✓ Enable
- **Upgrade Manually Downloaded:** ✗ Disable (keep manual choices)
- **Upgrade for 7 Days:** (tries for better subtitles)
- **Score Threshold:** 360 (out of 360 for perfect)
**Performance:**
- **Use Embedded Subtitles:** ✗ Disable (extract if needed)
- **Exclude External Subtitles:** ✗ Disable
## Advanced Topics
### Language Profiles
**Create Custom Profiles:**
**Settings → Languages → Profiles:**
**Example: English + Spanish**
1. Click "+"
2. Name: "Dual Language"
3. Add languages: English, Spanish
4. Cutoff: English (stop when English found)
5. Save
**Example: Hearing Impaired**
1. Name: "SDH English"
2. Language: English
3. Hearing-Impaired: Required
4. Save
**Assign to Series/Movies:**
- Series → Edit → Language Profile: Dual Language
- Movies → Edit → Language Profile: SDH English
### Forced Subtitles
**Foreign language only scenes:**
**Example:** English movie with Spanish dialogue scenes
- Bazarr can download "forced" subtitles
- Only shows during foreign language
**Settings:**
- Language Profile → Forced: Yes
- Downloads .forced.srt files
### Manual Search
**Override automatic selection:**
1. **Series/Movies → Select item**
2. **Click "Search" icon**
3. **View all available subtitles**
4. **Select manually**
5. **Download**
**Use Cases:**
- Automatic subtitle quality poor
- Specific release group needed
- Hearing impaired preference
### Subtitle Sync
**Fix subtitle timing issues:**
**Settings → Subtitles → Subtitle Options:**
- ✓ Subtitle Sync (use ffmpeg)
- Fixes out-of-sync subtitles
**Manual Sync:**
- Tools like SubShift
- Bazarr can trigger external scripts
### Embedded Subtitle Extraction
**Extract from MKV:**
**Settings → Subtitles → Subtitle Options:**
- ✓ Use Embedded Subtitles
- Bazarr extracts to external .srt
- Useful for compatibility
**Requirements:**
- ffmpeg installed (included in linuxserver image)
- MKV files with embedded subs
### Custom Post-Processing
**Settings → Notifications → Custom:**
**Run scripts after subtitle download:**
- Convert formats
- Additional sync
- Notify external services
- Custom workflows
**Script location:**
```bash
/config/custom_scripts/post-download.sh
```
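A minimal sketch of such a script; the subtitle-path argument and the log location are assumptions for illustration, not Bazarr's documented interface:

```shell
#!/bin/sh
# Hypothetical post-download hook: log each subtitle Bazarr fetches.
# The argument convention and log path here are assumptions.
log_subtitle() {
    sub_path="$1"
    log_file="${2:-/tmp/bazarr-post-download.log}"
    printf '%s downloaded %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$sub_path" >> "$log_file"
}

# Invoke as Bazarr would after fetching a subtitle:
log_subtitle "/tv/Show/S01E01.en.srt"
```

From here the script could convert formats, trigger a re-sync, or notify an external service.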
### Mass Actions
**Series/Movies → Mass Editor:**
**Actions:**
- Search All Subtitles
- Remove Subtitles
- Change Language Profile
- Update from Sonarr/Radarr
**Use Cases:**
- Initial setup (search all)
- Change preferences for multiple items
- Cleanup
### History
**History Tab:**
**View all subtitle actions:**
- Downloads
- Upgrades
- Deletions
- Manual searches
**Filters:**
- By language
- By provider
- By score
- By date
**Statistics:**
- Total downloads
- Provider success rates
- Language distribution
## Troubleshooting
### Bazarr Not Finding Subtitles
```bash
# Check providers
# Settings → Providers → Test
# Check provider status
# System → Status → Provider health
# Manual search
# Series/Movies → Manual Search
# View available subtitles
# Common issues:
# - Provider down/rate-limited
# - Wrong API key
# - Low minimum score (reduce)
# - Release name mismatch
```
### Bazarr Can't Connect to Sonarr/Radarr
```bash
# Test connection
docker exec bazarr curl http://sonarr:8989
docker exec bazarr curl http://radarr:7878
# Verify API keys
# Copy from Sonarr/Radarr exactly
# Check network
docker network inspect traefik-network
# Check logs
docker logs bazarr | grep -i "sonarr\|radarr"
# Force sync
# Settings → Sonarr/Radarr → Full Update
```
### Subtitles Not Appearing in Plex/Jellyfin
```bash
# Check subtitle location
ls -la /mnt/media/movies/Movie*/
# Should be next to video file:
# movie.mkv
# movie.en.srt
# Check permissions
sudo chown -R 1000:1000 /mnt/media/movies/
# Refresh Plex/Jellyfin
# Plex: Scan Library Files
# Jellyfin: Scan Library
# Check subtitle format
# Plex/Jellyfin support: SRT, VTT, ASS
# Not: SUB, IDX (convert if needed)
```
### Low Quality Subtitles
```bash
# Increase minimum score
# Settings → Subtitles → Minimum Score: 90%
# Enable adaptive search
# Settings → Subtitles → Adaptive Searching: ✓
# Add more providers
# Settings → Providers → Add quality providers
# Manual search
# Select item → Manual Search → Choose better subtitle
```
### Provider Rate Limiting
```bash
# Check provider status
# System → Status
# Wait for rate limit reset
# Usually hourly or daily
# Add more providers
# Distribute load across multiple sources
# Use Anti-Captcha
# Settings → Providers → Anti-Captcha
# Bypasses rate limits (paid service)
```
### Database Issues
```bash
# Stop Bazarr
docker stop bazarr
# Backup database
cp /opt/stacks/media-management/bazarr/config/db/bazarr.db /opt/backups/
# Check integrity
sqlite3 /opt/stacks/media-management/bazarr/config/db/bazarr.db "PRAGMA integrity_check;"
# Vacuum if needed
sqlite3 /opt/stacks/media-management/bazarr/config/db/bazarr.db "VACUUM;"
# Restart
docker start bazarr
```
## Performance Optimization
### Provider Settings
**Settings → Providers → Anti-Captcha:**
- Reduces rate limiting
- Faster searches
- Costs money
**Provider Limits:**
- Respect rate limits
- Don't overload providers
- Use multiple providers
### Sync Frequency
**Settings → Sonarr/Radarr:**
- Full Update: Every 6-12 hours
- More frequent = higher load
- Balance between updates and performance
### Minimum Score
**Settings → Subtitles:**
- Minimum Score: 80-90%
- Lower = more results, lower quality
- Higher = fewer results, better quality
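Threshold selection works roughly like this (illustrative scores, not Bazarr's actual scoring algorithm):

```python
def pick_subtitle(candidates, minimum=0.8):
    """Pick the highest-scoring candidate at or above the threshold."""
    eligible = [c for c in candidates if c["score"] >= minimum]
    return max(eligible, key=lambda c: c["score"], default=None)

subs = [{"name": "a.srt", "score": 0.72}, {"name": "b.srt", "score": 0.91}]
print(pick_subtitle(subs)["name"])  # b.srt
```

Lowering `minimum` admits more candidates like `a.srt`; raising it can leave nothing eligible, which is why a too-high score setting produces "no subtitles found".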
## Security Best Practices
1. **Enable Authentication:**
- Settings → General → Security
- Authentication: Required
2. **API Key Security:**
- Keep provider API keys secure
- Regenerate if compromised
3. **Reverse Proxy:**
- Use Traefik + Authelia
- Don't expose port 6767 publicly
4. **Regular Updates:**
- Keep Bazarr current
- Update providers
## Backup Strategy
**Critical Files:**
```bash
/opt/stacks/media-management/bazarr/config/db/bazarr.db # Database
/opt/stacks/media-management/bazarr/config/config/config.yaml # Settings
```
**Backup Script:**
```bash
#!/bin/bash
set -euo pipefail
DATE=$(date +%Y%m%d)
BACKUP_DIR=/opt/backups/bazarr
mkdir -p "$BACKUP_DIR"
docker stop bazarr
trap 'docker start bazarr' EXIT  # restart Bazarr even if tar fails
tar -czf "$BACKUP_DIR/bazarr-$DATE.tar.gz" \
    /opt/stacks/media-management/bazarr/config/
# Keep only the last 7 days of archives
find "$BACKUP_DIR" -name "bazarr-*.tar.gz" -mtime +7 -delete
```
## Integration with Other Services
### Bazarr + Sonarr/Radarr
- Automatic library sync
- New media detection
- Subtitle download triggers
### Bazarr + Plex/Jellyfin
- Subtitles appear automatically
- Multiple language support
- Forced subtitle support
## Summary
Bazarr is the subtitle automation tool offering:
- Automatic subtitle downloads
- Multi-language support
- Sonarr/Radarr integration
- 20+ subtitle providers
- Quality scoring and upgrades
- Forced and SDH subtitles
- Free and open-source
**Perfect for:**
- Multi-language households
- Hearing impaired accessibility
- Foreign language content
- Automated workflows
- Quality subtitle seekers
**Key Points:**
- Configure language profiles
- Add multiple providers
- Set minimum score appropriately
- Sync with Sonarr/Radarr
- Enable subtitle upgrades
- Use adaptive searching
- OpenSubtitles.com recommended
**Remember:**
- Subtitles placed next to media files
- .srt files for Plex/Jellyfin
- Multiple languages supported
- Forced subtitles for foreign scenes
- Provider rate limits exist
- Manual search available
- Regular backups recommended
Bazarr completes your media stack with comprehensive subtitle management!

# BookStack - Knowledge Base
## Table of Contents
- [Overview](#overview)
- [What is BookStack?](#what-is-bookstack)
- [Why Use BookStack?](#why-use-bookstack)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
## Overview
**Category:** Wiki/Knowledge Base
**Docker Image:** [linuxserver/bookstack](https://hub.docker.com/r/linuxserver/bookstack)
**Default Stack:** `productivity.yml`
**Web UI:** `http://SERVER_IP:6875`
**Database:** MariaDB (bookstack-db container)
**Ports:** 6875
## What is BookStack?
BookStack is a beautiful, easy-to-use platform for organizing and storing information. It uses a Books → Chapters → Pages hierarchy, making it perfect for documentation, wikis, and knowledge bases. Think of it as a self-hosted alternative to Notion or Confluence.
### Key Features
- **Book Organization:** Books → Chapters → Pages
- **WYSIWYG Editor:** Rich text editing
- **Markdown Support:** Alternative editor
- **Page Revisions:** Version history
- **Search:** Full-text search
- **Attachments:** File uploads
- **Multi-Tenancy:** Per-book permissions
- **User Management:** Roles and permissions
- **Diagrams:** Draw.io integration
- **API:** REST API
- **Free & Open Source:** MIT license
## Why Use BookStack?
1. **Beautiful UI:** Modern, clean design
2. **Intuitive:** Book/chapter structure makes sense
3. **Easy Editing:** WYSIWYG or Markdown
4. **Organized:** Natural hierarchy
5. **Permissions:** Granular access control
6. **Search:** Find anything quickly
7. **Diagrams:** Built-in diagram editor
8. **Active Development:** Regular updates
## Configuration in AI-Homelab
```
/opt/stacks/productivity/bookstack/config/ # BookStack config
/opt/stacks/productivity/bookstack-db/data/ # MariaDB database
```
## Official Resources
- **Website:** https://www.bookstackapp.com
- **Documentation:** https://www.bookstackapp.com/docs
- **GitHub:** https://github.com/BookStackApp/BookStack
- **Demo:** https://demo.bookstackapp.com
## Docker Configuration
```yaml
bookstack-db:
  image: mariadb:latest
  container_name: bookstack-db
  restart: unless-stopped
  networks:
    - traefik-network
  environment:
    - MYSQL_ROOT_PASSWORD=${BOOKSTACK_DB_ROOT_PASSWORD}
    - MYSQL_DATABASE=bookstack
    - MYSQL_USER=bookstack
    - MYSQL_PASSWORD=${BOOKSTACK_DB_PASSWORD}
  volumes:
    - /opt/stacks/productivity/bookstack-db/data:/var/lib/mysql

bookstack:
  image: linuxserver/bookstack:latest
  container_name: bookstack
  restart: unless-stopped
  networks:
    - traefik-network
  ports:
    - "6875:80"
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=America/New_York
    - DB_HOST=bookstack-db
    - DB_DATABASE=bookstack
    - DB_USERNAME=bookstack
    - DB_PASSWORD=${BOOKSTACK_DB_PASSWORD}
    - APP_URL=https://bookstack.${DOMAIN}
  volumes:
    - /opt/stacks/productivity/bookstack/config:/config
  depends_on:
    - bookstack-db
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.bookstack.rule=Host(`bookstack.${DOMAIN}`)"
    - "traefik.http.routers.bookstack.entrypoints=websecure"
    - "traefik.http.routers.bookstack.tls.certresolver=letsencrypt"
    - "traefik.http.services.bookstack.loadbalancer.server.port=80"
```
## Summary
BookStack is your organized knowledge base offering:
- Beautiful book/chapter/page structure
- WYSIWYG and Markdown editors
- Version history
- Granular permissions
- Full-text search
- Diagram editor
- File attachments
- Free and open-source
**Perfect for:**
- Company documentation
- Team knowledge bases
- Project documentation
- Personal notes
- Technical documentation
- Procedures and guides
- Collaborative writing
**Key Points:**
- Requires MariaDB database
- Book → Chapter → Page hierarchy
- Default login: admin@admin.com / password
- Change default credentials!
- WYSIWYG or Markdown editing
- Fine-grained permissions
- Draw.io integration
- REST API available
**Remember:**
- Change default admin credentials
- Set APP_URL for proper links
- Organize content in books
- Use chapters for sections
- Set permissions per book
- Regular database backups
- Search is very powerful
BookStack makes documentation beautiful and organized!

# cAdvisor - Container Metrics
## Table of Contents
- [Overview](#overview)
- [What is cAdvisor?](#what-is-cadvisor)
- [Why Use cAdvisor?](#why-use-cadvisor)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
## Overview
**Category:** Container Metrics
**Docker Image:** [gcr.io/cadvisor/cadvisor](https://gcr.io/cadvisor/cadvisor)
**Default Stack:** `monitoring.yml`
**Web UI:** `http://SERVER_IP:8080`
**Purpose:** Export container metrics
**Ports:** 8080
## What is cAdvisor?
cAdvisor (Container Advisor) analyzes and exposes resource usage and performance metrics from running containers. Created by Google, it provides detailed metrics for each container including CPU, memory, network, and filesystem usage. Essential for Docker monitoring.
### Key Features
- **Per-Container Metrics:** Individual container stats
- **Resource Usage:** CPU, memory, network, disk
- **Historical Data:** Resource usage over time
- **Web UI:** Built-in dashboard
- **Prometheus Export:** Metrics endpoint
- **Auto-Discovery:** Finds all containers
- **Real-Time:** Live metrics
- **Free & Open Source:** Google project
## Why Use cAdvisor?
1. **Container Visibility:** See what each container uses
2. **Resource Tracking:** CPU, memory, I/O per container
3. **Prometheus Integration:** Standard exporter
4. **Google Standard:** Industry trusted
5. **Auto-Discovery:** No configuration needed
6. **Web UI:** Built-in visualization
## Configuration in AI-Homelab
```
No configuration files needed. cAdvisor auto-discovers containers.
```
## Official Resources
- **GitHub:** https://github.com/google/cadvisor
- **Documentation:** https://github.com/google/cadvisor/blob/master/docs/storage/prometheus.md
## Docker Configuration
```yaml
cadvisor:
  image: gcr.io/cadvisor/cadvisor:latest
  container_name: cadvisor
  restart: unless-stopped
  networks:
    - traefik-network
  ports:
    - "8080:8080"
  privileged: true
  devices:
    - /dev/kmsg
  volumes:
    - /:/rootfs:ro
    - /var/run:/var/run:ro
    - /sys:/sys:ro
    - /var/lib/docker:/var/lib/docker:ro
    - /dev/disk:/dev/disk:ro
```
**Note:** Requires `privileged: true` and many volume mounts to access container metrics.
## Metrics Available
### CPU
- `container_cpu_usage_seconds_total` - CPU usage per container
- `container_cpu_system_seconds_total` - System CPU usage
- `container_cpu_user_seconds_total` - User CPU usage
### Memory
- `container_memory_usage_bytes` - Memory usage
- `container_memory_working_set_bytes` - Working set
- `container_memory_rss` - Resident set size
- `container_memory_cache` - Page cache
- `container_memory_swap` - Swap usage
### Network
- `container_network_receive_bytes_total` - Bytes received
- `container_network_transmit_bytes_total` - Bytes transmitted
- `container_network_receive_errors_total` - Receive errors
- `container_network_transmit_errors_total` - Transmit errors
### Filesystem
- `container_fs_usage_bytes` - Filesystem usage
- `container_fs_reads_bytes_total` - Bytes read
- `container_fs_writes_bytes_total` - Bytes written
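To collect these, add a scrape job to Prometheus; a minimal sketch assuming the compose service is named `cadvisor` on the shared Docker network:

```yaml
# prometheus.yml — scrape job for cAdvisor (service name and network
# placement assumed from the compose definition above)
scrape_configs:
  - job_name: cadvisor
    scrape_interval: 15s
    static_configs:
      - targets: ["cadvisor:8080"]
```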
## Summary
cAdvisor provides container metrics offering:
- Per-container resource usage
- CPU, memory, network, disk metrics
- Auto-discovery of containers
- Web UI dashboard
- Prometheus export
- Real-time monitoring
- Free and open-source
**Perfect for:**
- Docker container monitoring
- Resource usage tracking
- Performance analysis
- Identifying resource hogs
- Capacity planning
**Key Points:**
- Google's container advisor
- Auto-discovers all containers
- Built-in web UI
- Prometheus integration
- Requires privileged mode
- Port 8080 for UI and metrics
- Grafana dashboard 14282
**Remember:**
- Add to Prometheus config
- Import Grafana dashboard
- Privileged mode required
- Many volume mounts needed
- Web UI at :8080
- Low overhead
cAdvisor monitors all your containers!

# Calibre-Web - Ebook Library Manager
## Table of Contents
- [Overview](#overview)
- [What is Calibre-Web?](#what-is-calibre-web)
- [Why Use Calibre-Web?](#why-use-calibre-web)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
- [Initial Setup](#initial-setup)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Ebook Management
**Docker Image:** [linuxserver/calibre-web](https://hub.docker.com/r/linuxserver/calibre-web)
**Default Stack:** `media-extended.yml`
**Web UI:** `https://calibre-web.${DOMAIN}` or `http://SERVER_IP:8083`
**Default Login:** admin/admin123
**Ports:** 8083
## What is Calibre-Web?
Calibre-Web is a web-based ebook reader and library manager. It provides a clean interface to browse, read, and download ebooks from your Calibre library. Works perfectly with Readarr for automated ebook management.
### Key Features
- **Web Reader:** Read ebooks in browser
- **Format Support:** EPUB, PDF, MOBI, AZW3, CBR, CBZ
- **User Management:** Multiple users with permissions
- **Send to Kindle:** Email books to Kindle
- **OPDS Feed:** E-reader app integration
- **Metadata Editing:** Edit book information
- **Custom Columns:** Organize your way
- **Shelves:** Create reading lists
- **Download:** Multiple formats available
## Why Use Calibre-Web?
1. **Web Access:** Read anywhere with browser
2. **No Calibre Desktop:** Standalone web interface
3. **Multi-User:** Family members can have accounts
4. **Kindle Integration:** Send books to Kindle
5. **E-Reader Support:** OPDS for apps
6. **Readarr Compatible:** Works with automated downloads
7. **Free & Open Source:** No cost
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/media-management/calibre-web/config/ # Config
/mnt/media/books/ # Calibre library
```
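Calibre-Web will not open a library directory without Calibre's `metadata.db`; the check effectively amounts to:

```python
import tempfile
from pathlib import Path

def is_calibre_library(path):
    """A usable library directory must contain Calibre's metadata.db."""
    return (Path(path) / "metadata.db").is_file()

# Demo with a throwaway directory standing in for /mnt/media/books:
lib = Path(tempfile.mkdtemp())
(lib / "metadata.db").touch()
print(is_calibre_library(lib))          # True
print(is_calibre_library(lib / "sub"))  # False
```

If the check fails, create the library with the Calibre desktop app or let Readarr create it, as described under Troubleshooting below.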
### Environment Variables
```bash
PUID=1000
PGID=1000
TZ=America/New_York
DOCKER_MODS=linuxserver/mods:universal-calibre # Optional: Convert books
```
## Official Resources
- **GitHub:** https://github.com/janeczku/calibre-web
- **Documentation:** https://github.com/janeczku/calibre-web/wiki
## Docker Configuration
```yaml
calibre-web:
  image: linuxserver/calibre-web:latest
  container_name: calibre-web
  restart: unless-stopped
  networks:
    - traefik-network
  ports:
    - "8083:8083"
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=America/New_York
    - DOCKER_MODS=linuxserver/mods:universal-calibre
  volumes:
    - /opt/stacks/media-management/calibre-web/config:/config
    - /mnt/media/books:/books
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.calibre-web.rule=Host(`calibre-web.${DOMAIN}`)"
    - "traefik.http.routers.calibre-web.entrypoints=websecure"
    - "traefik.http.routers.calibre-web.tls.certresolver=letsencrypt"
    - "traefik.http.services.calibre-web.loadbalancer.server.port=8083"
```
## Initial Setup
1. **Start Container:**
```bash
docker compose up -d calibre-web
```
2. **Access UI:** `http://SERVER_IP:8083`
3. **First Login:**
- Username: `admin`
- Password: `admin123`
- **Change immediately!**
4. **Database Location:**
- Set to: `/books/metadata.db`
- This is your Calibre library database
5. **Configure Settings:**
- Admin → Edit Basic Configuration
- Set server name, enable features
- Configure email for Kindle sending
### Key Settings
**Basic Configuration:**
- Server Name: Your server name
- Enable uploads: ✓ (if wanted)
- Enable public registration: ✗ (keep private)
**Feature Configuration:**
- Enable uploading: Based on needs
- Enable book conversion: ✓
- Enable Goodreads integration: ✓ (optional)
**UI Configuration:**
- Theme: Dark/Light
- Books per page: 20
- Random books: 4
## Troubleshooting
### Can't Find Database
```bash
# Check Calibre library structure
ls -la /mnt/media/books/
# Should contain metadata.db
# If no Calibre library exists:
# Install Calibre desktop app
# Create library pointing to /mnt/media/books/
# Or let Readarr create it
# Check permissions
sudo chown -R 1000:1000 /mnt/media/books/
```
### Books Not Showing
```bash
# Check database path
# Admin → Basic Configuration → Database location
# Rescan library
# Admin → Reconnect to Calibre DB
# Check logs
docker logs calibre-web | tail -20
```
### Send to Kindle Not Working
```bash
# Configure email settings
# Admin → Edit Basic Configuration → E-mail Server Settings
# Gmail example:
# SMTP: smtp.gmail.com
# Port: 587
# Encryption: STARTTLS
# Username: your@gmail.com
# Password: App-specific password
# Add Kindle email
# User → Edit → Kindle E-mail
```
## Summary
Calibre-Web is the ebook reader offering:
- Web-based reading
- Format conversion
- Multi-user support
- Kindle integration
- OPDS feeds
- Readarr compatible
- Free and open-source
**Perfect for:**
- Ebook collections
- Web-based reading
- Family sharing
- Kindle users
- Readarr integration
**Key Points:**
- Requires Calibre library (metadata.db)
- Works with Readarr
- Change default password!
- OPDS for e-reader apps
- Send to Kindle via email
**Remember:**
- Point to existing Calibre library
- Or create new library with Calibre desktop
- Readarr can populate library
- Multi-user support available
- Supports most ebook formats
Calibre-Web provides beautiful web access to your ebook library!

# Code Server - VS Code in Browser
## Table of Contents
- [Overview](#overview)
- [What is Code Server?](#what-is-code-server)
- [Why Use Code Server?](#why-use-code-server)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
## Overview
**Category:** Development Environment
**Docker Image:** [linuxserver/code-server](https://hub.docker.com/r/linuxserver/code-server)
**Default Stack:** `utilities.yml` or `development.yml`
**Web UI:** `https://code.${DOMAIN}` or `http://SERVER_IP:8443`
**Ports:** 8443
## What is Code Server?
Code Server is VS Code running in your browser. Access your development environment from anywhere without installing anything. It's the full VS Code experience - extensions, settings, terminal - accessible via web browser.
### Key Features
- **VS Code:** Real VS Code, not a clone
- **Browser Access:** Any device, anywhere
- **Extensions:** Full extension support
- **Terminal:** Integrated terminal
- **Git:** Built-in Git support
- **Settings Sync:** Keep preferences
- **Collaborative:** Share sessions
- **Self-Hosted:** Your server
- **Free & Open Source:** No cost
## Why Use Code Server?
1. **Access Anywhere:** Code from any device
2. **No Installation:** Just browser needed
3. **Consistent:** Same environment everywhere
4. **Powerful:** Full VS Code features
5. **iPad Coding:** Code on tablets
6. **Remote Access:** Access home server
7. **Team Sharing:** Collaborative coding
8. **Self-Hosted:** Privacy and control
## Configuration in AI-Homelab
```
/opt/stacks/utilities/code-server/config/ # VS Code settings
/opt/stacks/utilities/code-server/workspace/ # Your projects
```
## Official Resources
- **Website:** https://coder.com/docs/code-server
- **GitHub:** https://github.com/coder/code-server
- **Documentation:** https://coder.com/docs
## Docker Configuration
```yaml
code-server:
image: linuxserver/code-server:latest
container_name: code-server
restart: unless-stopped
networks:
- traefik-network
ports:
- "8443:8443"
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
- PASSWORD=your_secure_password
- SUDO_PASSWORD=sudo_password
volumes:
- /opt/stacks/utilities/code-server/config:/config
- /opt/stacks:/workspace # Your code
labels:
- "traefik.enable=true"
- "traefik.http.routers.code-server.rule=Host(`code.${DOMAIN}`)"
```
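The configuration above relies on Code Server's built-in `PASSWORD`. If you prefer SSO like the other services in this homelab, the router labels can be extended with the Authelia middleware; this is a sketch that assumes the `authelia@docker` middleware and `letsencrypt` resolver names used elsewhere in this guide:

```yaml
# Hypothetical label set — middleware/resolver names assumed to match
# the Traefik setup described in the rest of this guide
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.code-server.rule=Host(`code.${DOMAIN}`)"
  - "traefik.http.routers.code-server.entrypoints=websecure"
  - "traefik.http.routers.code-server.tls.certresolver=letsencrypt"
  - "traefik.http.routers.code-server.middlewares=authelia@docker"
  - "traefik.http.services.code-server.loadbalancer.server.port=8443"
```

With Authelia in front, you may still want to keep `PASSWORD` set as a second layer rather than removing it.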
## Summary
Code Server brings VS Code to your browser offering:
- Full VS Code in browser
- Extension support
- Integrated terminal
- Git integration
- Access from anywhere
- No local installation needed
- Self-hosted
- Free and open-source
**Perfect for:**
- Remote coding
- iPad/tablet development
- Consistent dev environment
- Team collaboration
- Cloud-based development
- Learning programming
**Key Points:**
- Real VS Code, not clone
- Extensions work
- Integrated terminal
- Git support
- Password protected
- Access via browser
- Mount your code directories
**Remember:**
- Set strong password
- HTTPS recommended
- Mount volumes for persistence
- Install extensions as needed
- Terminal has full access
- Save work regularly
Code Server puts VS Code everywhere!

# Docker Socket Proxy - Secure Docker Socket Access
## Table of Contents
- [Overview](#overview)
- [What is Docker Socket Proxy?](#what-is-docker-socket-proxy)
- [Why Use Docker Socket Proxy?](#why-use-docker-socket-proxy)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Access Control](#access-control)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Infrastructure Security
**Docker Image:** [tecnativa/docker-socket-proxy](https://hub.docker.com/r/tecnativa/docker-socket-proxy)
**Default Stack:** `infrastructure.yml`
**Web UI:** None (proxy service)
**Port:** 2375 (internal only)
**Purpose:** Secure access layer for Docker socket
## What is Docker Socket Proxy?
Docker Socket Proxy is a security-focused proxy that sits between Docker management tools and the Docker socket. It provides fine-grained access control to Docker API endpoints, allowing you to grant specific permissions rather than full Docker socket access.
### Key Features
- **Granular Permissions:** Control which Docker API endpoints are accessible
- **Security Layer:** Prevents full root access to host
- **Read/Write Control:** Separate read-only and write permissions
- **API Filtering:** Whitelist specific Docker API calls
- **No Authentication:** Relies on network isolation
- **Lightweight:** Minimal resource usage
- **HAProxy Based:** Stable, proven technology
- **Zero Config:** Works out of the box with sensible defaults
## Why Use Docker Socket Proxy?
### The Problem
Direct Docker socket access (`/var/run/docker.sock`) grants **root-equivalent access** to the host:
```yaml
# DANGEROUS: Full access to Docker = root on host
traefik:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
```
**Risks:**
- Container can access all other containers
- Can mount host filesystem
- Can escape container isolation
- Can compromise entire system
### The Solution
Docker Socket Proxy provides controlled access:
```yaml
# SAFER: Limited access via proxy
traefik:
environment:
- DOCKER_HOST=tcp://docker-proxy:2375
# No direct socket mount!
docker-proxy:
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
environment:
- CONTAINERS=1 # Allow container list
- SERVICES=1 # Allow service list
- TASKS=0 # Deny task access
```
**Benefits:**
1. **Principle of Least Privilege:** Only grant necessary permissions
2. **Reduced Attack Surface:** Limit what compromised container can do
3. **Audit Trail:** Centralized access point
4. **Network Isolation:** Proxy can be on separate network
5. **Read-Only Socket:** Proxy uses read-only mount
## How It Works
```
Management Tool (Traefik/Portainer/Dockge)
                 ↓
  TCP connection to docker-proxy:2375
                 ↓
Docker Socket Proxy (HAProxy)
  ├─ Check permissions
  ├─ Filter allowed endpoints
  └─ Forward or block request
                 ↓
Docker Socket (/var/run/docker.sock)
                 ↓
Docker Engine
```
### Request Flow
1. **Tool makes API request:** "List containers"
2. **Connects to proxy:** tcp://docker-proxy:2375
3. **Proxy checks permissions:** Is CONTAINERS=1?
4. **If allowed:** Forward to Docker socket
5. **If denied:** Return 403 Forbidden
6. **Response returned:** To requesting tool
### Permission Model
**Environment variables control access:**
- `CONTAINERS=1` → Allow container operations
- `IMAGES=1` → Allow image operations
- `NETWORKS=1` → Allow network operations
- `VOLUMES=1` → Allow volume operations
- `SERVICES=1` → Allow swarm service operations
- `TASKS=1` → Allow swarm task operations
- `SECRETS=1` → Allow secret operations
- `POST=0` → Deny all write operations (read-only)
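The permission model above can be sketched as a tiny shell function. This is a toy model of the allow/deny decision only, not the real HAProxy ACLs; the variable and path handling are illustrative:

```shell
#!/bin/sh
# Toy model of the proxy's allow/deny decision (illustrative only)
CONTAINERS=1   # allow container endpoints
POST=0         # read-only: deny write operations

allow() {      # usage: allow METHOD PATH -> prints HTTP status code
  method=$1 path=$2
  case "$path" in
    /containers*) [ "${CONTAINERS:-0}" = 1 ] || { echo 403; return; } ;;
    *)            echo 403; return ;;          # everything else denied
  esac
  if [ "$method" != "GET" ] && [ "${POST:-0}" != 1 ]; then
    echo 403; return                           # writes blocked when POST=0
  fi
  echo 200
}

allow GET  /containers/json    # → 200: CONTAINERS=1, read-only request
allow POST /containers/create  # → 403: POST=0 blocks writes
```

The real proxy applies the same idea per API prefix: each environment flag whitelists one family of endpoints, and `POST=0` turns the whole proxy read-only.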
## Configuration in AI-Homelab
### Directory Structure
```
# No persistent storage needed
# All configuration via environment variables
```
### Environment Variables
```bash
# Core Docker API endpoints
CONTAINERS=1 # Container list, inspect, logs, stats
SERVICES=1 # Service management (for Swarm)
TASKS=1 # Task management (for Swarm)
NETWORKS=1 # Network operations
VOLUMES=1 # Volume operations
IMAGES=1 # Image list, pull, push
INFO=1 # Docker info, version
EVENTS=1 # Docker events stream
PING=1 # Health check
# Write operations (set to 0 for read-only)
POST=1 # Create operations
BUILD=0 # Image build (usually not needed)
COMMIT=0 # Container commit
CONFIGS=0 # Config management
DISTRIBUTION=0 # Registry operations
EXEC=0 # Execute in container (dangerous)
SECRETS=0 # Secret management (Swarm)
SESSION=0 # Not commonly used
SWARM=0 # Swarm management
# Security
LOG_LEVEL=info # Logging verbosity
```
## Official Resources
- **GitHub:** https://github.com/Tecnativa/docker-socket-proxy
- **Docker Hub:** https://hub.docker.com/r/tecnativa/docker-socket-proxy
- **Documentation:** https://github.com/Tecnativa/docker-socket-proxy/blob/master/README.md
## Educational Resources
### Videos
- [Docker Socket Security (TechnoTim)](https://www.youtube.com/watch?v=0aOqx8mQZFk)
- [Why You Should Use Docker Socket Proxy](https://www.youtube.com/results?search_query=docker+socket+proxy+security)
- [Container Security Best Practices](https://www.youtube.com/watch?v=9weaE6QEm8A)
### Articles & Guides
- [Docker Socket Proxy Documentation](https://github.com/Tecnativa/docker-socket-proxy)
- [Docker Socket Security Risks](https://docs.docker.com/engine/security/)
- [Principle of Least Privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege)
### Concepts to Learn
- **Unix Socket:** Inter-process communication file
- **Docker API:** RESTful API for Docker operations
- **TCP Socket:** Network socket for remote access
- **HAProxy:** Load balancer and proxy
- **Least Privilege:** Minimal permissions principle
- **Attack Surface:** Potential vulnerability points
- **Container Escape:** Breaking out of container isolation
## Docker Configuration
### Complete Service Definition
```yaml
docker-proxy:
image: tecnativa/docker-socket-proxy:latest
container_name: docker-proxy
restart: unless-stopped
privileged: true # Required for socket access
networks:
- docker-proxy-network # Isolated network
    # No "ports:" mapping — publishing 2375 on the host would expose the
    # Docker API to the LAN; services reach the proxy over the internal network
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro # Read-only!
environment:
# Core permissions (what Traefik/Portainer need)
- CONTAINERS=1
- SERVICES=1
- NETWORKS=1
- IMAGES=1
- INFO=1
- EVENTS=1
- PING=1
# Write operations (enable as needed)
- POST=1
# Deny dangerous operations
- BUILD=0
- COMMIT=0
- EXEC=0
- SECRETS=0
# Logging
- LOG_LEVEL=info
networks:
docker-proxy-network:
internal: true # No external access
```
### Connecting Services to Proxy
**Traefik Configuration:**
```yaml
traefik:
networks:
- traefik-network
- docker-proxy-network
environment:
- DOCKER_HOST=tcp://docker-proxy:2375
# NO volumes for Docker socket!
# volumes:
# - /var/run/docker.sock:/var/run/docker.sock # REMOVE THIS
```
**Portainer Configuration:**
```yaml
portainer:
networks:
- traefik-network
- docker-proxy-network
environment:
- AGENT_HOST=docker-proxy
# NO volumes for Docker socket!
```
**Dockge Configuration:**
```yaml
dockge:
networks:
- traefik-network
- docker-proxy-network
environment:
- DOCKER_HOST=tcp://docker-proxy:2375
# NO volumes for Docker socket!
```
## Access Control
### Traefik Minimal Permissions
Traefik only needs to read container labels:
```yaml
docker-proxy:
environment:
- CONTAINERS=1 # Read container info
- SERVICES=1 # Read services
- NETWORKS=1 # Read networks
- INFO=1 # Docker info
- EVENTS=1 # Watch for changes
- POST=0 # No write operations
```
### Portainer Full Permissions
Portainer needs more access for management:
```yaml
docker-proxy:
environment:
- CONTAINERS=1
- SERVICES=1
- TASKS=1
- NETWORKS=1
- VOLUMES=1
- IMAGES=1
- INFO=1
- EVENTS=1
- POST=1 # Create/update
- PING=1
```
### Watchtower Minimal Permissions
Watchtower needs to pull images and recreate containers:
```yaml
docker-proxy:
environment:
- CONTAINERS=1
- IMAGES=1
- INFO=1
- POST=1 # Create operations
```
### Read-Only Mode
For monitoring tools (Glances, Dozzle):
```yaml
docker-proxy:
environment:
- CONTAINERS=1
- SERVICES=1
- TASKS=1
- NETWORKS=1
- VOLUMES=1
- IMAGES=1
- INFO=1
- EVENTS=1
- POST=0 # No writes
- BUILD=0
- COMMIT=0
- EXEC=0
```
## Advanced Topics
### Multiple Proxy Instances
Run separate proxies for different permission levels:
**docker-proxy-read (for monitoring tools):**
```yaml
docker-proxy-read:
image: tecnativa/docker-socket-proxy
environment:
- CONTAINERS=1
- IMAGES=1
- INFO=1
- POST=0 # Read-only
networks:
- monitoring-network
```
**docker-proxy-write (for management tools):**
```yaml
docker-proxy-write:
image: tecnativa/docker-socket-proxy
environment:
- CONTAINERS=1
- IMAGES=1
- NETWORKS=1
- VOLUMES=1
- POST=1 # Read-write
networks:
- management-network
```
### Custom HAProxy Configuration
For advanced filtering, mount custom config:
```yaml
docker-proxy:
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
```
**Custom haproxy.cfg:**
```haproxy
global
log stdout format raw local0
defaults
log global
mode http
timeout connect 5s
timeout client 30s
timeout server 30s
frontend docker
bind :2375
default_backend docker_backend
backend docker_backend
server docker unix@/var/run/docker.sock
# Custom ACLs
acl containers_req path_beg /containers
acl images_req path_beg /images
# Only allow specific endpoints
http-request deny unless containers_req or images_req
```
### Logging and Monitoring
```yaml
docker-proxy:
environment:
- LOG_LEVEL=debug # More verbose logging
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
```
**Monitor access:**
```bash
# View proxy logs
docker logs -f docker-proxy
# See what endpoints are being accessed
docker logs docker-proxy | grep -E "GET|POST|PUT|DELETE"
```
### Network Isolation
```yaml
networks:
docker-proxy-network:
driver: bridge
internal: true # No internet access
ipam:
config:
- subnet: 172.25.0.0/16
```
**Only allow specific services:**
```yaml
services:
traefik:
networks:
docker-proxy-network:
ipv4_address: 172.25.0.2
portainer:
networks:
docker-proxy-network:
ipv4_address: 172.25.0.3
```
## Troubleshooting
### Services Can't Connect to Docker
```bash
# Check if proxy is running
docker ps | grep docker-proxy
# Test proxy connectivity
docker exec traefik wget -qO- http://docker-proxy:2375/version
# Check networks
docker network inspect docker-proxy-network
# Verify service is on proxy network
docker inspect traefik | grep -A10 Networks
```
### Permission Denied Errors
```bash
# Check proxy logs
docker logs docker-proxy
# Example error: "POST /containers/create 403"
# Solution: Add POST=1 to docker-proxy environment
# Check which endpoint is being blocked
docker logs docker-proxy | grep 403
# Enable required permission
# If /images/create is blocked, add IMAGES=1
```
### Traefik Not Discovering Services
```bash
# Ensure these are enabled in the docker-proxy environment:
#   CONTAINERS=1
#   SERVICES=1
#   EVENTS=1   # critical for auto-discovery
# Check if Traefik is using proxy
docker exec traefik env | grep DOCKER_HOST
# Test manually
docker exec traefik wget -qO- http://docker-proxy:2375/containers/json
```
### Portainer Shows "Cannot connect to Docker"
```bash
# Portainer needs more permissions
# docker-proxy environment needs:
#   CONTAINERS=1
#   SERVICES=1
#   TASKS=1
#   NETWORKS=1
#   VOLUMES=1
#   IMAGES=1
#   POST=1
# In Portainer settings:
# Environment URL: tcp://docker-proxy:2375
# Not: unix:///var/run/docker.sock
```
### Watchtower Not Updating Containers
```bash
# Watchtower needs write access
# docker-proxy environment needs:
#   CONTAINERS=1
#   IMAGES=1
#   POST=1   # required for recreating containers
# Check Watchtower logs
docker logs watchtower
```
### High Memory/CPU Usage
```bash
# Check proxy stats
docker stats docker-proxy
# Should be minimal (~10MB RAM, <1% CPU)
# If high, check for connection leaks
# Restart proxy
docker restart docker-proxy
# Check for excessive requests
docker logs docker-proxy | wc -l
```
## Security Best Practices
1. **Read-Only Socket:** Always mount socket as `:ro`
2. **Minimal Permissions:** Only enable what's needed
3. **Network Isolation:** Use internal network
4. **No Public Exposure:** Never expose port 2375 to internet
5. **Separate Proxies:** Different proxies for different trust levels
6. **Monitor Access:** Review logs regularly
7. **Disable Exec:** Never enable EXEC unless absolutely necessary
8. **Regular Updates:** Keep proxy image updated
9. **Principle of Least Privilege:** Start with nothing, add as needed
10. **Testing:** Test permissions in development first
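One way to verify point 9 in practice is to grep your stacks for remaining direct socket mounts. The demo below runs against a scratch directory it creates itself; on a real server you would point `root` at `/opt/stacks`:

```shell
#!/bin/sh
# Demo: find compose files that still mount the Docker socket directly.
# Uses a scratch directory; substitute root=/opt/stacks on a real server.
root=$(mktemp -d)
mkdir -p "$root/media"
cat > "$root/media/compose.yaml" <<'EOF'
services:
  traefik:
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
EOF
# Any file listed here still bypasses the proxy and should be migrated
grep -rl '/var/run/docker.sock' "$root"
```

Only the proxy's own compose file should ever show up in that list once migration is complete.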
## Migration Guide
### Converting from Direct Socket Access
**Before (insecure):**
```yaml
traefik:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
```
**After (secure):**
1. **Add docker-proxy:**
```yaml
docker-proxy:
image: tecnativa/docker-socket-proxy
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
environment:
- CONTAINERS=1
- SERVICES=1
- NETWORKS=1
- EVENTS=1
networks:
- docker-proxy-network
```
2. **Update service:**
```yaml
traefik:
environment:
- DOCKER_HOST=tcp://docker-proxy:2375
networks:
- traefik-network
- docker-proxy-network
# Remove socket volume!
```
3. **Create network:**
```yaml
networks:
docker-proxy-network:
internal: true
```
4. **Test:**
```bash
docker compose up -d
docker logs traefik # Check for errors
```
## Performance Impact
**Overhead:**
- Latency: ~1-2ms per request
- Memory: ~10-20MB
- CPU: <1%
**Minimal impact because:**
- Docker API calls are infrequent
- Proxy is extremely lightweight
- HAProxy is optimized for performance
**Benchmark:**
```bash
# Direct socket
time docker ps
# ~0.05s
# Via proxy
time docker -H tcp://docker-proxy:2375 ps
# ~0.06s
# Negligible difference for management operations
```
## Summary
Docker Socket Proxy is a critical security component that:
- Provides granular access control to Docker API
- Prevents root-equivalent access from containers
- Uses principle of least privilege
- Adds minimal overhead
- Simple to configure and maintain
**Essential for:**
- Production environments
- Multi-user setups
- Security-conscious homelabs
- Compliance requirements
- Defense in depth strategy
**Implementation Priority:**
1. Deploy docker-proxy with minimal permissions
2. Update Traefik to use proxy (most critical)
3. Update Portainer to use proxy
4. Update other management tools
5. Remove all direct socket mounts
6. Test thoroughly
7. Monitor logs
**Remember:**
- Direct socket access = root on host
- Always use read-only socket mount in proxy
- Start with restrictive permissions
- Add permissions only as needed
- Use separate proxies for different trust levels
- Never expose proxy to internet
- Monitor access logs
- Essential security layer for homelab

# Dockge - Docker Compose Stack Manager
## Table of Contents
- [Overview](#overview)
- [What is Dockge?](#what-is-dockge)
- [Why Use Dockge?](#why-use-dockge)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Managing Stacks](#managing-stacks)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Infrastructure Management
**Docker Image:** [louislam/dockge](https://hub.docker.com/r/louislam/dockge)
**Default Stack:** `infrastructure.yml`
**Web UI:** `https://dockge.${DOMAIN}`
**Authentication:** SSO via Authelia (automatic login)
## What is Dockge?
Dockge is a modern, self-hosted Docker Compose stack manager with a beautiful web UI. Created by the developer of Uptime Kuma, it provides a user-friendly interface for managing Docker Compose stacks with features like terminal access, log viewing, and real-time editing.
### Key Features
- **Visual Stack Management:** View all stacks, services, and containers at a glance
- **Interactive Compose Editor:** Edit docker-compose.yml files with syntax highlighting
- **Built-in Terminal:** Execute commands directly in containers
- **Real-time Logs:** Stream and search container logs
- **One-Click Actions:** Start, stop, restart, update services easily
- **Agent Mode:** Manage Docker on remote servers
- **File-based Storage:** All stacks stored as compose files on disk
- **Git Integration:** Push/pull stacks to Git repositories
- **No Database Required:** Lightweight, direct file manipulation
- **Modern UI:** Clean, responsive interface
## Why Use Dockge?
1. **Primary Management Tool:** AI-Homelab's main stack manager
2. **User-Friendly:** Much simpler than Portainer for compose stacks
3. **Direct File Access:** Edit compose files directly (no abstraction)
4. **Quick Deployment:** Create and deploy stacks in seconds
5. **Visual Feedback:** See container status, resource usage
6. **Terminal Access:** Execute commands without SSH
7. **Log Management:** View, search, and download logs easily
8. **Lightweight:** Minimal resource usage
9. **Active Development:** Regular updates and improvements
## How It Works
```
User → Web Browser → Dockge UI
            ↓
      Compose Files
    (/opt/stacks/...)
            ↓
      Docker Engine
            ↓
   Running Containers
```
### Stack Management Flow
1. **Create/Edit** compose file in Dockge UI or text editor
2. **Deploy** stack with one click
3. **Monitor** services, logs, and resources
4. **Update** services by pulling new images
5. **Manage** individual containers (start/stop/restart)
6. **Access** terminals for troubleshooting
### File Structure
Dockge uses a simple directory structure:
```
/opt/stacks/
├── core/
│ └── compose.yaml
├── infrastructure/
│ └── compose.yaml
├── media/
│ └── compose.yaml
└── dashboards/
└── compose.yaml
```
Each folder is a "stack" with its compose file and volumes.
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/
├── core/ # Core infrastructure stack
├── infrastructure/ # Management and monitoring
├── dashboards/ # Homepage, Homarr
├── media/ # Plex, Sonarr, Radarr, etc.
├── media-extended/ # Additional media services
├── homeassistant/ # Home automation
├── productivity/ # Nextcloud, Gitea, etc.
├── utilities/ # Vaultwarden, backups
├── monitoring/ # Prometheus, Grafana
└── development/ # GitLab, dev tools
```
### Environment Variables
```bash
# Dockge Configuration
DOCKGE_STACKS_DIR=/opt/stacks
DOCKGE_ENABLE_CONSOLE=true
# SSO Integration with Authelia
DOCKGE_AUTH_PROXY_HEADER=Remote-User
DOCKGE_AUTH_PROXY_AUTO_CREATE=true
DOCKGE_AUTH_PROXY_LOGOUT_URL=https://auth.${DOMAIN}/logout
```
### SSO Authentication
Dockge integrates with Authelia for Single Sign-On (SSO) authentication:
**How it works:**
1. Traefik forwards requests to Authelia for authentication
2. Authelia sets the `Remote-User` header with authenticated username
3. Dockge reads this header and automatically logs in the user
4. Users are created automatically on first login
5. Logout redirects to Authelia's logout page
**Benefits:**
- No separate Dockge login required
- Centralized user management through Authelia
- Automatic user provisioning
- Secure logout handling
**Configuration:**
- `DOCKGE_AUTH_PROXY_HEADER=Remote-User`: Header set by Authelia
- `DOCKGE_AUTH_PROXY_AUTO_CREATE=true`: Create users automatically
- `DOCKGE_AUTH_PROXY_LOGOUT_URL`: Redirect to Authelia logout
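For reference, the middleware that supplies the `Remote-User` header is typically declared as labels on the Authelia container. This is a sketch based on Authelia's documented Traefik forwardAuth integration; the container name, port, and verify endpoint are assumptions (newer Authelia versions use `/api/authz/forward-auth` instead of `/api/verify`):

```yaml
# Sketch of the authelia@docker middleware (names/endpoint assumed)
labels:
  - "traefik.http.middlewares.authelia.forwardauth.address=http://authelia:9091/api/verify?rd=https://auth.${DOMAIN}"
  - "traefik.http.middlewares.authelia.forwardauth.trustForwardHeader=true"
  - "traefik.http.middlewares.authelia.forwardauth.authResponseHeaders=Remote-User,Remote-Groups,Remote-Name,Remote-Email"
```

The `authResponseHeaders` entry is what forwards `Remote-User` to Dockge after a successful login.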
## Official Resources
- **Website:** https://dockge.kuma.pet
- **GitHub:** https://github.com/louislam/dockge
- **Docker Hub:** https://hub.docker.com/r/louislam/dockge
- **Documentation:** https://github.com/louislam/dockge/wiki
- **Discord Community:** https://discord.gg/3xBrKN66
- **Related:** Uptime Kuma (by same developer)
## Educational Resources
### Videos
- [Dockge - BEST Docker Compose Manager? (Techno Tim)](https://www.youtube.com/watch?v=AWAlOQeNpgU)
- [Dockge vs Portainer - Which is Better?](https://www.youtube.com/results?search_query=dockge+vs+portainer)
- [Docker Compose Tutorial (NetworkChuck)](https://www.youtube.com/watch?v=DM65_JyGxCo)
- [Dockge Setup and Features (DB Tech)](https://www.youtube.com/watch?v=FY7-KpTbkI8)
### Articles & Guides
- [Dockge Official Documentation](https://github.com/louislam/dockge/wiki)
- [Docker Compose Documentation](https://docs.docker.com/compose/)
- [Dockge vs Portainer Comparison](https://www.reddit.com/r/selfhosted/comments/17kp3d7/dockge_vs_portainer/)
- [Why You Need a Docker UI](https://www.smarthomebeginner.com/docker-gui-portainer-vs-dockge/)
### Concepts to Learn
- **Docker Compose:** Tool for defining multi-container applications
- **Stacks:** Collection of services defined in compose file
- **Services:** Individual containers within a stack
- **Volumes:** Persistent storage for containers
- **Networks:** Container networking and communication
- **Environment Variables:** Configuration passed to containers
- **Health Checks:** Automated service monitoring
## Docker Configuration
### Complete Service Definition
```yaml
dockge:
image: louislam/dockge:latest
container_name: dockge
restart: unless-stopped
networks:
- traefik-network
ports:
- "5001:5001"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /opt/stacks:/opt/stacks
- /opt/dockge/data:/app/data
environment:
- DOCKGE_STACKS_DIR=/opt/stacks
- PUID=1000
- PGID=1000
- TZ=America/New_York
labels:
- "traefik.enable=true"
- "traefik.http.routers.dockge.rule=Host(`dockge.${DOMAIN}`)"
- "traefik.http.routers.dockge.entrypoints=websecure"
- "traefik.http.routers.dockge.tls.certresolver=letsencrypt"
- "traefik.http.routers.dockge.middlewares=authelia@docker"
- "traefik.http.services.dockge.loadbalancer.server.port=5001"
```
### Important Volumes
1. **Docker Socket:**
```yaml
- /var/run/docker.sock:/var/run/docker.sock
```
Required for Docker control. Security consideration: grants full Docker access.
2. **Stacks Directory:**
```yaml
- /opt/stacks:/opt/stacks
```
Where all compose files are stored. Must match DOCKGE_STACKS_DIR.
3. **Data Directory:**
```yaml
- /opt/dockge/data:/app/data
```
Stores Dockge configuration and settings.
## Managing Stacks
### Creating a New Stack
1. **Via Dockge UI:**
- Click "Compose" button
- Name your stack (e.g., "myapp")
- Paste or write compose configuration
- Click "Deploy"
2. **Via File System:**
```bash
mkdir /opt/stacks/myapp
nano /opt/stacks/myapp/compose.yaml
# Dockge will auto-detect the new stack
```
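As a concrete starting point, a minimal `compose.yaml` for a new stack might look like the following. The `whoami` image is just a throwaway test container, and the external `traefik-network` is assumed from this guide's core stack:

```yaml
# /opt/stacks/myapp/compose.yaml — minimal test stack (illustrative)
services:
  whoami:
    image: traefik/whoami:latest
    container_name: whoami
    restart: unless-stopped
    networks:
      - traefik-network
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.${DOMAIN}`)"

networks:
  traefik-network:
    external: true
```

Once deployed from Dockge, the container should appear in the stack view and respond at `whoami.${DOMAIN}` if Traefik discovery is working.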
### Stack Operations
**From Dockge UI:**
- **Start:** Green play button
- **Stop:** Red stop button
- **Restart:** Circular arrow
- **Update:** Pull latest images and recreate
- **Delete:** Remove stack (keeps volumes unless specified)
- **Edit:** Modify compose file
- **Terminal:** Access container shell
- **Logs:** View real-time logs
### Editing Stacks
1. Click on stack name
2. Click "Edit Compose" button
3. Modify yaml configuration
4. Click "Save & Update" or "Save"
5. Dockge will apply changes automatically
### Accessing Container Terminals
1. Click on stack
2. Click on service/container
3. Click "Terminal" button
4. Execute commands in interactive shell
### Viewing Logs
1. Click on stack
2. Click on service/container
3. Click "Logs" button
4. Real-time log streaming
5. Search logs with filter box
## Advanced Topics
### Agent Mode (Remote Management)
Manage Docker on remote servers from single Dockge instance:
**On Remote Server:**
```yaml
dockge-agent:
image: louislam/dockge:latest
container_name: dockge-agent
restart: unless-stopped
ports:
- "5002:5002"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /opt/stacks:/opt/stacks
environment:
- DOCKGE_AGENT_HOST=0.0.0.0
- DOCKGE_AGENT_PORT=5002
```
**In Main Dockge:**
Settings → Agents → Add Agent
- Host: remote-server-ip
- Port: 5002
### Git Integration
Sync stacks with Git repository:
1. **Initialize Git in stack directory:**
```bash
cd /opt/stacks/mystack
git init
git remote add origin https://github.com/user/repo.git
```
2. **Use Dockge UI:**
- Click "Git" button in stack view
- Pull/Push changes
- View commit history
### Environment File Management
Store secrets in `.env` files:
```bash
# /opt/stacks/mystack/.env
MYSQL_ROOT_PASSWORD=supersecret
API_KEY=abc123xyz
```
Reference in compose file:
```yaml
services:
myapp:
environment:
- DB_PASSWORD=${MYSQL_ROOT_PASSWORD}
- API_KEY=${API_KEY}
```
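Compose's `${VAR}` interpolation can be sanity-checked without Docker at all; the sketch below mimics it in plain shell against a scratch `.env` file:

```shell
#!/bin/sh
# Mimic compose's .env interpolation with plain shell (demo only)
dir=$(mktemp -d)
cat > "$dir/.env" <<'EOF'
MYSQL_ROOT_PASSWORD=supersecret
EOF
set -a            # auto-export everything sourced below
. "$dir/.env"
set +a
# Same substitution compose performs inside the environment: block
echo "DB_PASSWORD=${MYSQL_ROOT_PASSWORD}"
```

This mirrors what `docker compose config` does when it resolves variables from the stack directory's `.env` file, which is also a handy way to preview the final configuration before deploying.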
### Stack Dependencies
Order stack startup:
```yaml
services:
webapp:
depends_on:
- database
- redis
```
### Health Checks
Monitor service health:
```yaml
services:
webapp:
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
```
### Resource Limits
Prevent services from consuming too many resources:
```yaml
services:
webapp:
deploy:
resources:
limits:
cpus: '2.0'
memory: 2G
reservations:
cpus: '0.5'
memory: 512M
```
## Troubleshooting
### Dockge Won't Start
```bash
# Check if port 5001 is in use
sudo lsof -i :5001
# Check Docker socket permissions
ls -la /var/run/docker.sock
# View Dockge logs
docker logs dockge
# Verify stacks directory exists
ls -la /opt/stacks
```
### Stack Won't Deploy
```bash
# Check compose file syntax
cd /opt/stacks/mystack
docker compose config
# View detailed deployment logs
docker logs dockge
# Check for port conflicts
docker ps | grep PORT_NUMBER
# Verify network exists
docker network ls | grep traefik-network
```
### Can't Access Container Terminal
```bash
# Check if container is running
docker ps | grep container-name
# Verify container has shell
docker exec container-name which bash
docker exec container-name which sh
# Try manual terminal access
docker exec -it container-name /bin/bash
```
### Stack Shows Wrong Status
```bash
# Refresh Dockge
# Click the refresh icon in UI
# Or restart Dockge
docker restart dockge
# Check actual container status
docker ps -a
```
### Changes Not Applying
```bash
# Force recreate containers
cd /opt/stacks/mystack
docker compose up -d --force-recreate
# Or in Dockge UI:
# Click "Update" button (pulls images and recreates)
```
### Permission Issues
```bash
# Fix stacks directory permissions
sudo chown -R 1000:1000 /opt/stacks
# Fix Docker socket permissions (quick but insecure — any local user
# gains root-equivalent access via the socket):
sudo chmod 666 /var/run/docker.sock
# Preferred: add your user to the docker group instead
sudo usermod -aG docker $USER
```
### High Memory Usage
```bash
# Check container resource usage
docker stats
# Add resource limits to services (see Advanced Topics)
# Prune unused resources
docker system prune -a
```
## Security Considerations
### Docker Socket Access
**Risk:** Full Docker socket access = root access to host
```yaml
- /var/run/docker.sock:/var/run/docker.sock
```
**Mitigations:**
1. **Use Authelia:** Always protect Dockge with authentication
2. **Use Docker Socket Proxy:** Limit socket access (see docker-proxy.md)
3. **Restrict Access:** Only trusted admins should access Dockge
4. **Network Security:** Never expose Dockge to internet without VPN/Authelia
### Best Practices
1. **Authentication:** Always use Authelia or similar
2. **HTTPS Only:** Never access Dockge over plain HTTP
3. **Strong Passwords:** Use strong credentials for all services
4. **Environment Files:** Store secrets in `.env` files, not compose
5. **Regular Updates:** Keep Dockge and services updated
6. **Backup Stacks:** Regular backups of `/opt/stacks`
7. **Log Monitoring:** Review logs for suspicious activity
8. **Least Privilege:** Don't run containers as root when possible
9. **Network Isolation:** Use separate networks for different stacks
10. **Audit Access:** Know who has access to Dockge
## Comparison with Alternatives
### Dockge vs Portainer
**Dockge Advantages:**
- Simpler interface
- Direct file manipulation
- Built-in terminal
- Faster for compose stacks
- No database required
- Better for small/medium deployments
**Portainer Advantages:**
- More features (users, teams, RBAC)
- Kubernetes support
- Better for large enterprises
- More established project
- Advanced networking UI
### Dockge vs CLI (docker compose)
**Dockge Advantages:**
- Visual feedback
- Easier for beginners
- Quick access to logs/terminals
- One-click operations
- Remote management
**CLI Advantages:**
- Scriptable
- Faster for experts
- No additional resource usage
- More control
## Tips & Tricks
### Quick Stack Creation
Use templates for common services:
```bash
# Create template directory
mkdir /opt/stacks/templates
# Copy common compose files
cp /opt/stacks/media/compose.yaml /opt/stacks/templates/media-template.yaml
```
### Bulk Operations
```bash
# Start all stacks
cd /opt/stacks
for dir in */; do (cd "$dir" && docker compose up -d); done
# Stop all stacks
for dir in */; do (cd "$dir" && docker compose down); done
# Update all stacks
for dir in */; do (cd "$dir" && docker compose pull && docker compose up -d); done
```
### Stack Naming
Use clear, descriptive names:
- ✅ `media`, `dashboards`, `productivity`
- ❌ `stack1`, `test`, `mystack`
### Organize by Function
Group related services in stacks:
- **Core:** Essential infrastructure
- **Media:** Entertainment services
- **Productivity:** Work-related tools
- **Development:** Dev environments
## Summary
Dockge is AI-Homelab's primary management interface. It provides:
- Visual stack management with modern UI
- Direct compose file editing
- Real-time logs and terminals
- Simple deployment workflow
- Lightweight and fast
- Perfect balance of simplicity and power
As the main tool you'll use to manage your homelab, take time to familiarize yourself with Dockge's interface. It makes complex Docker operations simple and provides visual feedback that helps understand your infrastructure at a glance.
**Remember:**
- Dockge is for stack management, Portainer is backup
- Always use Authelia protection
- Keep compose files organized
- Regular backups of `/opt/stacks`
- Monitor resource usage
- Review logs regularly

# DokuWiki - Documentation Wiki
## Table of Contents
- [Overview](#overview)
- [What is DokuWiki?](#what-is-dokuwiki)
- [Why Use DokuWiki?](#why-use-dokuwiki)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
## Overview
**Category:** Wiki/Documentation
**Docker Image:** [linuxserver/dokuwiki](https://hub.docker.com/r/linuxserver/dokuwiki)
**Default Stack:** `productivity.yml`
**Web UI:** `http://SERVER_IP:8083`
**Database:** None (flat-file)
**Ports:** 8083
## What is DokuWiki?
DokuWiki is a simple, standards-compliant wiki optimized for creating documentation. Unlike MediaWiki (Wikipedia's software), DokuWiki stores pages in plain text files, requiring no database. Perfect for personal notes, project documentation, and team knowledge bases.
### Key Features
- **No Database:** Flat-file storage
- **Easy Syntax:** Simple wiki markup
- **Version Control:** Built-in revisions
- **Access Control:** User permissions
- **Search:** Full-text search
- **Plugins:** 1000+ plugins
- **Templates:** Customizable themes
- **Media Files:** Image/file uploads
- **Namespace:** Organize pages in folders
- **Free & Open Source:** GPL license
## Why Use DokuWiki?
1. **Simple:** No database needed
2. **Fast:** Lightweight and quick
3. **Easy Editing:** Wiki syntax
4. **Backup:** Just copy text files
5. **Version History:** All changes tracked
6. **Portable:** Text files, easy to migrate
7. **Low Maintenance:** Minimal requirements
8. **Privacy:** Self-hosted docs
## Configuration in AI-Homelab
```
/opt/stacks/productivity/dokuwiki/config/
└── dokuwiki/
    ├── data/pages/   # Wiki pages (text files)
    ├── data/media/   # Uploaded files
    └── conf/         # Configuration
```
## Official Resources
- **Website:** https://www.dokuwiki.org
- **Documentation:** https://www.dokuwiki.org/manual
- **Plugins:** https://www.dokuwiki.org/plugins
- **Syntax:** https://www.dokuwiki.org/syntax
## Docker Configuration
```yaml
dokuwiki:
image: linuxserver/dokuwiki:latest
container_name: dokuwiki
restart: unless-stopped
networks:
- traefik-network
ports:
- "8083:80"
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
volumes:
- /opt/stacks/productivity/dokuwiki/config:/config
labels:
- "traefik.enable=true"
- "traefik.http.routers.dokuwiki.rule=Host(`dokuwiki.${DOMAIN}`)"
```
## Summary
DokuWiki is the simple documentation wiki offering:
- No database required
- Plain text storage
- Easy wiki syntax
- Version control
- Fast and lightweight
- Plugin ecosystem
- Access control
- Free and open-source
**Perfect for:**
- Personal knowledge base
- Project documentation
- Team wikis
- Technical notes
- How-to guides
- Simple documentation needs
**Key Points:**
- Flat-file storage (no DB)
- Easy backup (copy files)
- Simple wiki markup
- Built-in version history
- Namespace organization
- User permissions available
- 1000+ plugins
- Very low resource usage
**Remember:**
- Pages stored as text files
- Namespace = folder structure
- Plugins for extended features
- Access control per page
- Regular file backups
- Simple syntax to learn
DokuWiki keeps documentation simple and portable!

# Dozzle - Real-Time Docker Log Viewer
## Table of Contents
- [Overview](#overview)
- [What is Dozzle?](#what-is-dozzle)
- [Why Use Dozzle?](#why-use-dozzle)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Using Dozzle](#using-dozzle)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Infrastructure Monitoring
**Docker Image:** [amir20/dozzle](https://hub.docker.com/r/amir20/dozzle)
**Default Stack:** `infrastructure.yml`
**Web UI:** `https://dozzle.${DOMAIN}`
**Authentication:** Protected by Authelia (SSO)
**Purpose:** Real-time container log viewing and searching
## What is Dozzle?
Dozzle is a lightweight, web-based Docker log viewer that provides real-time log streaming in a beautiful interface. It's designed to be simple, fast, and secure, requiring no database or complicated setup.
### Key Features
- **Real-Time Streaming:** Live log updates as they happen
- **Multi-Container View:** View logs from multiple containers simultaneously
- **Search & Filter:** Search through logs with regex support
- **Syntax Highlighting:** Colored log output for better readability
- **Dark/Light Theme:** Comfortable viewing in any environment
- **No Database:** Reads directly from Docker socket
- **Mobile Friendly:** Responsive design works on phones/tablets
- **Small Footprint:** ~15MB Docker image
- **Automatic Discovery:** Finds all containers automatically
- **Log Export:** Download logs for offline analysis
- **Authentication:** Built-in auth or use reverse proxy
- **Multi-Host Support:** Monitor logs from multiple Docker hosts
## Why Use Dozzle?
1. **Quick Troubleshooting:** Instantly see container errors
2. **No SSH Required:** Check logs from anywhere via web browser
3. **Lightweight:** Uses minimal resources
4. **Beautiful Interface:** Better than `docker logs` command
5. **Real-Time:** See logs as they happen
6. **Search:** Find specific errors quickly
7. **Multi-Container:** Monitor multiple services at once
8. **Easy Access:** No CLI needed for non-technical users
9. **Mobile Access:** Check logs from phone
10. **Free & Open Source:** No licensing costs
## How It Works
```
Docker Containers → Docker Engine (logging driver)
        ↓
Docker Socket (/var/run/docker.sock)
        ↓
Dozzle Container
        ↓
Web Interface (Real-time streaming)
        ↓
Browser (You)
```
### Log Flow
1. **Containers write logs:** Applications output to stdout/stderr
2. **Docker captures logs:** Stored by Docker logging driver
3. **Dozzle reads socket:** Connects to Docker socket
4. **Real-time streaming:** Logs pushed to browser via WebSockets
5. **Display in UI:** Formatted, colored, searchable logs
### No Storage Required
- Dozzle doesn't store logs
- Reads directly from Docker
- Container logs stored in Docker's logging driver
- Dozzle just provides a viewing interface
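The flow above can be sketched with the same Engine API endpoint Dozzle reads. This is a minimal Python sketch, not Dozzle's actual implementation: the socket path and container name are assumptions, and for non-TTY containers the raw stream includes Docker's 8-byte frame headers.

```python
import socket

DOCKER_SOCKET = "/var/run/docker.sock"  # the socket Dozzle mounts read-only

def logs_request(container: str, tail: int = 20) -> bytes:
    """Build a raw HTTP request for the Docker Engine logs endpoint."""
    path = f"/containers/{container}/logs?stdout=1&stderr=1&tail={tail}"
    return f"GET {path} HTTP/1.0\r\nHost: localhost\r\n\r\n".encode()

def fetch_logs(container: str, tail: int = 20) -> bytes:
    """Read recent logs straight from the socket - nothing is stored anywhere."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(DOCKER_SOCKET)
        sock.sendall(logs_request(container, tail))
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks)

# e.g. fetch_logs("traefik") on the Docker host returns the HTTP response
# plus framed log bytes; Dozzle does the equivalent and streams it to you
```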
## Configuration in AI-Homelab
### Directory Structure
```
# Dozzle doesn't require persistent storage
# All configuration via environment variables or command flags
```
### Environment Variables
```bash
# No authentication (use Authelia instead)
DOZZLE_NO_ANALYTICS=true
# Hostname display
DOZZLE_HOSTNAME=homelab-server
# Base path (if behind reverse proxy)
DOZZLE_BASE=/
# Timezone
TZ=America/New_York
```
## Official Resources
- **Website:** https://dozzle.dev
- **GitHub:** https://github.com/amir20/dozzle
- **Docker Hub:** https://hub.docker.com/r/amir20/dozzle
- **Documentation:** https://dozzle.dev/guide/
- **Live Demo:** https://dozzle.dev/demo
## Educational Resources
### Videos
- [Dozzle - Docker Log Viewer (Techno Tim)](https://www.youtube.com/watch?v=RMm3cJSrI0s)
- [Best Docker Log Viewer? Dozzle Review](https://www.youtube.com/results?search_query=dozzle+docker+logs)
- [Docker Logging Best Practices](https://www.youtube.com/watch?v=1S3w5vERFIc)
### Articles & Guides
- [Dozzle Official Documentation](https://dozzle.dev/guide/)
- [Docker Logging Drivers](https://docs.docker.com/config/containers/logging/configure/)
- [Docker Logs Best Practices](https://docs.docker.com/config/containers/logging/)
### Concepts to Learn
- **Container Logs:** stdout/stderr output from containers
- **Docker Logging Drivers:** json-file, syslog, journald, etc.
- **Log Rotation:** Managing log file sizes
- **WebSockets:** Real-time browser communication
- **Docker Socket:** Unix socket for Docker API
- **Log Levels:** DEBUG, INFO, WARN, ERROR
- **Regex:** Pattern matching for log searching
## Docker Configuration
### Complete Service Definition
```yaml
dozzle:
image: amir20/dozzle:latest
container_name: dozzle
restart: unless-stopped
networks:
- traefik-network
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
environment:
- DOZZLE_NO_ANALYTICS=true
- DOZZLE_HOSTNAME=homelab-server
- TZ=America/New_York
labels:
- "traefik.enable=true"
- "traefik.http.routers.dozzle.rule=Host(`dozzle.${DOMAIN}`)"
- "traefik.http.routers.dozzle.entrypoints=websecure"
- "traefik.http.routers.dozzle.tls.certresolver=letsencrypt"
- "traefik.http.routers.dozzle.middlewares=authelia@docker"
- "traefik.http.services.dozzle.loadbalancer.server.port=8080"
```
### Important Notes
1. **Docker Socket:** Read-only access sufficient
2. **Port 8080:** Default Dozzle web interface port
3. **No Ports Exposed:** Access only via Traefik
4. **Authelia Required:** No built-in auth, use Authelia
## Using Dozzle
### Interface Overview
**Main Screen:**
- List of all containers
- Status (running/stopped)
- Container names
- Click to view logs
**Container View:**
- Real-time log streaming
- Search box at top
- Timestamp toggle
- Download button
- Container stats
- Multi-container tabs
### Viewing Logs
**Single Container:**
1. Click container name
2. Logs stream in real-time
3. Auto-scrolls to bottom
4. Click timestamp to stop auto-scroll
**Multiple Containers:**
1. Click first container
2. Click "+" button
3. Select additional containers
4. View logs side-by-side or merged
### Searching Logs
**Basic Search:**
1. Type in search box
2. Press Enter
3. Matching lines highlighted
4. Navigate with arrow buttons
**Advanced Search (Regex):**
```regex
# Find errors
error|fail|exception
# Find specific IP
192\.168\.1\..*
# Find HTTP codes
HTTP/[0-9]\.[0-9]" [45][0-9]{2}
# Case insensitive
(?i)warning
```
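These patterns can be sanity-checked offline with Python's `re` module before pasting them into Dozzle's search box. The sample log lines are invented for illustration; note that only the `(?i)` pattern matches case-insensitively.

```python
import re

# Invented sample log lines for illustration
samples = [
    '192.168.1.10 - "GET /api HTTP/1.1" 502 0',
    "WARNING: disk usage above 90%",
    "connection established",
]

error_like = re.compile(r"error|fail|exception")              # find errors
http_errors = re.compile(r'HTTP/[0-9]\.[0-9]" [45][0-9]{2}')  # HTTP 4xx/5xx
warnings = re.compile(r"(?i)warning")                         # case-insensitive

print([bool(http_errors.search(s)) for s in samples])  # only the 502 line
print([bool(warnings.search(s)) for s in samples])     # matches "WARNING"
print([bool(error_like.search(s)) for s in samples])   # case-sensitive: no hits
```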
### Filtering Options
**Filter by:**
- Container name
- Log content
- Time range (scroll to older logs)
- Log level (if structured logs)
### Log Export
**Download Logs:**
1. View container logs
2. Click download icon
3. Choose time range
4. Save as text file
### Interface Features
**Toolbar:**
- 🔍 Search box
- ⏸️ Pause auto-scroll
- ⬇️ Download logs
- 🕐 Toggle timestamps
- ⚙️ Settings
- 🌙 Dark/Light mode toggle
**Keyboard Shortcuts:**
- `/` - Focus search
- `Esc` - Clear search
- `Space` - Pause/Resume scroll
- `g` - Scroll to top
- `G` - Scroll to bottom
## Advanced Topics
### Multi-Host Monitoring
Monitor Docker on multiple servers:
**Remote Host Requirements:**
```yaml
# On remote server - expose Docker socket via TCP (secure)
# Or use Dozzle agent
# Agent on remote server:
dozzle-agent:
image: amir20/dozzle:latest
command: agent
environment:
- DOZZLE_AGENT_KEY=your-secret-key
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
```
**Main Dozzle:**
```yaml
dozzle:
environment:
- DOZZLE_REMOTE_HOST=tcp://remote-server:2376
# Or for agent:
- DOZZLE_AGENT_KEY=your-secret-key
```
### Custom Filters
Filter containers by labels:
```yaml
dozzle:
environment:
- DOZZLE_FILTER=status=running
- DOZZLE_FILTER=label=com.docker.compose.project=media
```
**Hide Containers:**
```yaml
mycontainer:
labels:
- "dozzle.enable=false" # Hide from Dozzle
```
### Authentication
**Built-in Simple Auth (Alternative to Authelia):**
```yaml
dozzle:
environment:
- DOZZLE_USERNAME=admin
- DOZZLE_PASSWORD=secure-password
- DOZZLE_KEY=random-32-character-key-for-cookies
```
Generate key:
```bash
openssl rand -hex 16
```
### Base Path Configuration
If running behind reverse proxy with subpath:
```yaml
dozzle:
environment:
- DOZZLE_BASE=/dozzle
```
Then access at: `https://domain.com/dozzle`
### Log Level Filtering
Show only specific log levels:
```yaml
dozzle:
environment:
- DOZZLE_LEVEL=info # Only info and above
```
Levels: `trace`, `debug`, `info`, `warn`, `error`
### Container Grouping
Group containers by label:
```yaml
# In compose file:
plex:
labels:
- "dozzle.group=media"
sonarr:
labels:
- "dozzle.group=media"
# Dozzle will group them together
```
## Troubleshooting
### Dozzle Not Showing Containers
```bash
# Check if Dozzle can access Docker socket
docker exec dozzle ls -la /var/run/docker.sock
# Verify containers are running
docker ps
# Check Dozzle logs
docker logs dozzle
# Test socket access
docker exec dozzle docker ps
```
### Logs Not Updating
```bash
# Check container is producing logs
docker logs container-name
# Verify WebSocket connection
# Open browser console: F12 → Network → WS
# Should see WebSocket connection
# Check browser console for errors
# F12 → Console
# Try different browser
# Some corporate firewalls block WebSockets
```
### Can't Access Web Interface
```bash
# Check if Dozzle is running
docker ps | grep dozzle
# Check Traefik routing
docker logs traefik | grep dozzle
# Test direct access (if port exposed)
curl http://localhost:8080
# Check network connectivity
docker exec dozzle ping traefik
```
### Search Not Working
```bash
# Clear browser cache
# Hard refresh: Ctrl+Shift+R (Windows/Linux) or Cmd+Shift+R (Mac)
# Check search syntax
# Use proper regex escaping
# Try simple text search first
# Then progress to regex
# Check browser console for JavaScript errors
```
### High Memory Usage
```bash
# Check Dozzle stats
docker stats dozzle
# Viewing many containers at once increases memory
# Close unnecessary container tabs
# Restart Dozzle
docker restart dozzle
# Limit containers with filters
DOZZLE_FILTER=status=running
```
### Logs Cut Off / Incomplete
```bash
# Docker has log size limits
# Check logging driver config
# View Docker daemon log config
docker info | grep -A5 "Logging Driver"
# Configure log rotation (in daemon.json):
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
}
}
```
## Docker Logging Configuration
### Logging Drivers
**JSON File (Default):**
```yaml
services:
myapp:
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
```
**Syslog:**
```yaml
services:
myapp:
logging:
driver: syslog
options:
syslog-address: "tcp://192.168.1.1:514"
```
**Journald:**
```yaml
services:
myapp:
logging:
driver: journald
```
### Log Rotation
**Global Configuration:**
```bash
# /etc/docker/daemon.json
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3",
"compress": "true"
}
}
# Restart Docker
sudo systemctl restart docker
```
**Per-Container:**
```yaml
services:
myapp:
logging:
driver: json-file
options:
max-size: "50m"
max-file: "5"
```
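Editing `daemon.json` by hand is easy to get wrong. This is a hedged Python sketch that merges the rotation options above into whatever is already in the file; the path is a parameter rather than hardcoded to `/etc/docker/daemon.json`.

```python
import json
from pathlib import Path

def set_log_rotation(daemon_json: Path, max_size: str = "10m", max_file: str = "3") -> dict:
    """Merge json-file log rotation options into an existing daemon.json."""
    cfg = json.loads(daemon_json.read_text()) if daemon_json.exists() else {}
    cfg["log-driver"] = "json-file"
    cfg.setdefault("log-opts", {}).update({"max-size": max_size, "max-file": max_file})
    daemon_json.write_text(json.dumps(cfg, indent=2) + "\n")
    return cfg

# After writing /etc/docker/daemon.json, restart Docker:
#   sudo systemctl restart docker
```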
### Best Practices
1. **Set Max Size:** Prevent disk space issues
2. **Rotate Logs:** Keep 3-5 recent files
3. **Compress Old Logs:** Save disk space
4. **Structured Logging:** JSON format for better parsing
5. **Log Levels:** Use appropriate levels (debug, info, error)
6. **Sensitive Data:** Never log passwords or secrets
## Performance Optimization
### Reduce Log Volume
```yaml
services:
myapp:
environment:
# Reduce log verbosity
- LOG_LEVEL=warn # Only warnings and errors
```
### Limit Containers
```yaml
dozzle:
environment:
# Only show running containers
- DOZZLE_FILTER=status=running
# Only specific projects
- DOZZLE_FILTER=label=com.docker.compose.project=media
```
### Disable Analytics
```yaml
dozzle:
environment:
- DOZZLE_NO_ANALYTICS=true
```
## Security Considerations
1. **Protect with Authelia:** Never expose Dozzle publicly without auth
2. **Read-Only Socket:** Use `:ro` for Docker socket mount
3. **Use Docker Proxy:** Consider Docker Socket Proxy for extra security
4. **Network Isolation:** Keep on trusted network
5. **Log Sanitization:** Ensure logs don't contain secrets
6. **HTTPS Only:** Always use SSL/TLS
7. **Limited Access:** Only give access to trusted users
8. **Monitor Access:** Review who accesses logs
9. **Log Retention:** Don't keep logs longer than necessary
10. **Regular Updates:** Keep Dozzle updated
## Comparison with Alternatives
### Dozzle vs Portainer Logs
**Dozzle:**
- Specialized for logs
- Real-time streaming
- Better search/filter
- Lighter weight
- Multiple containers simultaneously
**Portainer:**
- Full Docker management
- Logs + container control
- More features
- Heavier resource usage
### Dozzle vs CLI (docker logs)
**Dozzle:**
- Web interface
- Multi-container view
- Search functionality
- No SSH needed
- User-friendly
**CLI:**
- Scriptable
- More control
- No additional resources
- Faster for experts
### Dozzle vs Loki/Grafana
**Dozzle:**
- Simple setup
- No database
- Real-time only
- Lightweight
**Loki/Grafana:**
- Log aggregation
- Long-term storage
- Advanced querying
- Complex setup
- Enterprise features
## Tips & Tricks
### Quick Container Access
**Bookmark Specific Containers:**
```
https://dozzle.yourdomain.com/show?name=plex
https://dozzle.yourdomain.com/show?name=sonarr
```
### Multi-Container Monitoring
Monitor entire stack:
```
https://dozzle.yourdomain.com/show?name=plex&name=sonarr&name=radarr
```
### Color-Coded Logs
If your app supports colored output:
```yaml
myapp:
environment:
- FORCE_COLOR=true
- TERM=xterm-256color
```
### Regular Expressions Cheat Sheet
```regex
# Find errors
error|exception|fail
# Find warnings
warn|warning|caution
# Find IP addresses
\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}
# Find URLs
https?://[^\s]+
# Find timestamps
\d{4}-\d{2}-\d{2}
# Case insensitive
(?i)search_term
```
## Summary
Dozzle is a simple, lightweight tool for viewing Docker container logs. It provides:
- Beautiful web interface for log viewing
- Real-time log streaming
- Multi-container support
- Search and filter capabilities
- No database or complex setup required
- Minimal resource usage
**Perfect for:**
- Quick troubleshooting
- Development environments
- Non-technical user access
- Mobile log viewing
- Real-time monitoring
**Not ideal for:**
- Long-term log storage
- Advanced log analysis
- Log aggregation across many hosts
- Compliance/audit requirements
**Remember:**
- Protect with Authelia
- Use read-only Docker socket
- Configure log rotation
- Monitor disk space
- Logs are ephemeral (not stored by Dozzle)
- Great complement to Grafana/Loki for detailed analysis

# DuckDNS - Dynamic DNS Service
## Table of Contents
- [Overview](#overview)
- [What is DuckDNS?](#what-is-duckdns)
- [Why Use DuckDNS?](#why-use-duckdns)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Core Infrastructure
**Docker Image:** [linuxserver/duckdns](https://hub.docker.com/r/linuxserver/duckdns)
**Default Stack:** `core.yml`
**Web UI:** No web interface (runs silently)
**Authentication:** Not applicable
## What is DuckDNS?
DuckDNS is a free dynamic DNS (DDNS) service that provides you with a memorable subdomain under `duckdns.org` and keeps it updated with your current IP address. It's perfect for homelabs where your ISP provides a dynamic IP address that changes periodically.
### Key Features
- **Free subdomain** under duckdns.org
- **Automatic IP updates** every 5 minutes
- **No account required** - simple token-based authentication
- **IPv4 and IPv6 support**
- **No ads or tracking**
- **Works with Let's Encrypt** for SSL certificates
## Why Use DuckDNS?
1. **Access Your Homelab Remotely:** Use a memorable domain name instead of remembering IP addresses
2. **SSL Certificates:** Required for Let's Encrypt to issue SSL certificates for your domain
3. **Dynamic IP Handling:** Automatically updates when your ISP changes your IP
4. **Free and Simple:** No credit card, no complex setup
5. **Homelab Standard:** One of the most popular DDNS services in the homelab community
## How It Works
```
Your Home Network → Router (Dynamic IP) → Internet
        ↓ DuckDNS updates
yourdomain.duckdns.org → Current IP
```
1. You create a subdomain at DuckDNS.org (e.g., `myhomelab.duckdns.org`)
2. DuckDNS gives you a token
3. The Docker container periodically sends updates to DuckDNS with your current public IP
4. When someone visits `myhomelab.duckdns.org`, they're directed to your current IP
5. Traefik uses this domain to request SSL certificates from Let's Encrypt
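The update in step 3 is just an HTTPS GET against DuckDNS's documented update endpoint. A minimal Python sketch of building that call follows; the subdomain and token values are placeholders.

```python
from urllib.parse import urlencode

def duckdns_update_url(subdomains: list[str], token: str, ip: str = "") -> str:
    """Build the DuckDNS update URL; an empty ip= lets DuckDNS detect your public IP."""
    query = urlencode({
        "domains": ",".join(subdomains),
        "token": token,
        "ip": ip,
        "verbose": "true",
    })
    return f"https://www.duckdns.org/update?{query}"

print(duckdns_update_url(["myhomelab"], "your-token-here"))
```

Fetching this URL (as the container does every few minutes) returns `OK` on success or `KO` on failure.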
## Configuration in AI-Homelab
### Environment Variables
```bash
DOMAIN=yourdomain.duckdns.org
DUCKDNS_TOKEN=your-token-from-duckdns
DUCKDNS_SUBDOMAINS=yourdomain # Without .duckdns.org
```
### Setup Steps
1. **Sign up at DuckDNS.org:**
- Visit https://www.duckdns.org
- Sign in with GitHub, Google, Reddit, or Twitter
- No registration form needed
2. **Create your subdomain:**
- Enter desired name (e.g., `myhomelab`)
- Click "Add domain"
- Copy your token (shown at top of page)
3. **Configure in `.env` file:**
```bash
DOMAIN=myhomelab.duckdns.org
DUCKDNS_TOKEN=paste-your-token-here
DUCKDNS_SUBDOMAINS=myhomelab
```
4. **Deploy with core stack:**
```bash
cd /opt/stacks/core
docker compose up -d duckdns
```
### Verification
Check if DuckDNS is updating correctly:
```bash
# Check container logs
docker logs duckdns
# Verify your domain resolves
nslookup yourdomain.duckdns.org
# Manually trigger an update and check the response
curl "https://www.duckdns.org/update?domains=yourdomain&token=your-token&verbose=true"
```
## Official Resources
- **Website:** https://www.duckdns.org
- **Install Page:** https://www.duckdns.org/install.jsp
- **FAQ:** https://www.duckdns.org/faqs.jsp
- **Docker Hub:** https://hub.docker.com/r/linuxserver/duckdns
- **LinuxServer.io Docs:** https://docs.linuxserver.io/images/docker-duckdns
## Educational Resources
### Videos
- [What is Dynamic DNS? (NetworkChuck)](https://www.youtube.com/watch?v=GRvFQfgvhag) - Great overview of DDNS concepts
- [DuckDNS Setup Tutorial (Techno Tim)](https://www.youtube.com/watch?v=AS1I7tGp2c8) - Practical setup guide
- [Home Lab Beginners Guide - DuckDNS](https://www.youtube.com/results?search_query=duckdns+homelab+tutorial)
### Articles & Guides
- [DuckDNS Official Documentation](https://www.duckdns.org/spec.jsp)
- [Self-Hosting Guide: Dynamic DNS](https://github.com/awesome-selfhosted/awesome-selfhosted#dynamic-dns)
- [LinuxServer.io DuckDNS Documentation](https://docs.linuxserver.io/images/docker-duckdns)
### Related Concepts
- **Dynamic DNS (DDNS):** A service that maps a domain name to a changing IP address
- **Public IP vs Private IP:** Your router's public-facing IP vs internal network IPs
- **DNS Propagation:** Time it takes for DNS changes to spread across the internet
- **A Record:** DNS record type that maps domain to IPv4 address
- **AAAA Record:** DNS record type that maps domain to IPv6 address
## Docker Configuration
### Container Details
```yaml
duckdns:
image: lscr.io/linuxserver/duckdns:latest
container_name: duckdns
restart: unless-stopped
environment:
- PUID=${PUID:-1000}
- PGID=${PGID:-1000}
- TZ=${TZ}
- SUBDOMAINS=${DUCKDNS_SUBDOMAINS}
- TOKEN=${DUCKDNS_TOKEN}
- UPDATE_IP=ipv4
volumes:
- /opt/stacks/core/duckdns:/config
```
### Update Frequency
The DuckDNS container updates your IP every 5 minutes by default. This is frequent enough for most use cases.
### Resource Usage
- **CPU:** Minimal (~0.1%)
- **RAM:** ~10MB
- **Disk:** Negligible
- **Network:** Tiny API calls every 5 minutes
## Troubleshooting
### Domain Not Resolving
```bash
# Check if DuckDNS is updating
docker logs duckdns
# Manually check current status
curl "https://www.duckdns.org/update?domains=yourdomain&token=your-token&verbose=true"
# Expected response: OK or KO (with details)
```
### Wrong IP Being Updated
```bash
# Check what IP DuckDNS sees
curl "https://www.duckdns.org/update?domains=yourdomain&token=your-token&ip=&verbose=true"
# Check your actual public IP
curl ifconfig.me
# If they differ, the DDNS update is failing - check your token and the container logs
```
### Token Issues
- **Invalid Token:** Regenerate token at DuckDNS.org
- **Token Not Working:** Check for extra spaces in `.env` file
- **Multiple Domains:** Separate with commas: `domain1,domain2`
### Let's Encrypt Issues
If SSL certificates fail:
1. Verify DuckDNS is updating correctly
2. Check domain propagation: `nslookup yourdomain.duckdns.org`
3. Ensure ports 80 and 443 are forwarded to your server
4. Wait 10-15 minutes for DNS propagation
## Integration with Other Services
### Traefik
Traefik uses your DuckDNS domain for:
- Generating SSL certificates via Let's Encrypt
- Routing incoming HTTPS traffic to services
- Creating service-specific subdomains (e.g., `plex.yourdomain.duckdns.org`)
### Let's Encrypt
Let's Encrypt requires:
1. A publicly accessible domain (provided by DuckDNS)
2. DNS validation or HTTP challenge
3. Ports 80/443 accessible from the internet
### Port Forwarding
On your router, forward these ports to your server:
- Port 80 (HTTP) → Your Server IP
- Port 443 (HTTPS) → Your Server IP
## Alternatives to DuckDNS
- **No-IP:** Similar free DDNS service (requires monthly confirmation)
- **FreeDNS:** Another free option
- **Cloudflare:** Requires owning a domain, but adds CDN benefits
- **Custom Domain + Cloudflare:** More professional but requires purchasing domain
## Best Practices
1. **Keep Your Token Secret:** Don't share or commit to public repositories
2. **Use One Domain:** Multiple domains complicate SSL certificates
3. **Monitor Logs:** Occasionally check logs to ensure updates are working
4. **Router Backup:** Save your router config in case you need to reconfigure port forwarding
5. **Alternative DDNS:** Consider having a backup DDNS service
## Summary
DuckDNS is the foundational service that makes your homelab accessible from the internet. It provides:
- A memorable domain name for your homelab
- Automatic IP updates when your ISP changes your address
- Integration with Let's Encrypt for SSL certificates
- Simple, free, and reliable service
Without DuckDNS (or a similar DDNS service), you would have to connect by IP address, track every change your ISP makes, and struggle to obtain SSL certificates - making remote access and HTTPS much more difficult.

# Duplicati - Backup Solution
## Table of Contents
- [Overview](#overview)
- [What is Duplicati?](#what-is-duplicati)
- [Why Use Duplicati?](#why-use-duplicati)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
## Overview
**Category:** Backup & Recovery
**Docker Image:** [linuxserver/duplicati](https://hub.docker.com/r/linuxserver/duplicati)
**Default Stack:** `utilities.yml`
**Web UI:** `http://SERVER_IP:8200`
**Ports:** 8200
## What is Duplicati?
Duplicati is a backup client that securely stores encrypted, incremental, compressed backups on cloud storage or other locations. It supports many cloud providers, has a web interface, and is completely free and open-source.
### Key Features
- **20+ Backends:** S3, B2, Google Drive, OneDrive, SFTP, etc.
- **Encryption:** AES-256 encryption
- **Compression:** Multiple algorithms
- **Incremental:** Only changed data
- **Deduplication:** Block-level dedup
- **Web Interface:** Browser-based
- **Scheduling:** Automated backups
- **Throttling:** Bandwidth control
- **Versioning:** Multiple versions
- **Free & Open Source:** No cost
## Why Use Duplicati?
1. **Cloud Friendly:** 20+ storage backends
2. **Encrypted:** Secure backups
3. **Incremental:** Fast backups
4. **Free:** No licensing costs
5. **Web UI:** Easy management
6. **Windows Support:** Cross-platform
7. **Mature:** Proven solution
## Configuration in AI-Homelab
```
/opt/stacks/utilities/duplicati/config/ # Duplicati config
```
## Official Resources
- **Website:** https://www.duplicati.com
- **Documentation:** https://duplicati.readthedocs.io
- **Forum:** https://forum.duplicati.com
## Docker Configuration
```yaml
duplicati:
image: linuxserver/duplicati:latest
container_name: duplicati
restart: unless-stopped
networks:
- traefik-network
ports:
- "8200:8200"
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
volumes:
- /opt/stacks/utilities/duplicati/config:/config
- /opt/stacks:/source:ro # Source data
- /mnt:/backups # Backup destination
labels:
- "traefik.enable=true"
- "traefik.http.routers.duplicati.rule=Host(`duplicati.${DOMAIN}`)"
```
## Summary
Duplicati provides encrypted backups to 20+ cloud storage providers with web-based management, incremental backups, and comprehensive versioning.
**Perfect for:**
- Cloud backups
- Encrypted off-site storage
- Multi-cloud backup strategy
- Scheduled automatic backups
- Version retention
**Key Points:**
- 20+ storage backends
- AES-256 encryption
- Block-level deduplication
- Web-based interface
- Incremental backups
- Bandwidth throttling
- Free and open-source
Duplicati backs up your data to the cloud!

# ESPHome - ESP Device Firmware
## Table of Contents
- [Overview](#overview)
- [What is ESPHome?](#what-is-esphome)
- [Why Use ESPHome?](#why-use-esphome)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
- [Device Examples](#device-examples)
## Overview
**Category:** IoT Device Firmware
**Docker Image:** [esphome/esphome](https://hub.docker.com/r/esphome/esphome)
**Default Stack:** `homeassistant.yml`
**Web UI:** `http://SERVER_IP:6052`
**Ports:** 6052
## What is ESPHome?
ESPHome is a system for controlling ESP8266/ESP32 microcontrollers through simple YAML configuration files. It generates custom firmware for your ESP devices that integrates natively with Home Assistant. Create custom sensors, switches, lights, and more without writing code.
### Key Features
- **YAML Configuration:** No programming needed
- **Native HA Integration:** Auto-discovered
- **OTA Updates:** Update wirelessly
- **200+ Components:** Sensors, switches, displays
- **Local Control:** No cloud required
- **Fast:** Compiled C++ firmware
- **Cheap:** ESP8266 ~$2, ESP32 ~$5
## Why Use ESPHome?
1. **Cheap Custom Devices:** $2-5 per device
2. **No Programming:** YAML configuration
3. **Home Assistant Native:** Seamless integration
4. **Local Control:** Fully offline
5. **OTA Updates:** Update over WiFi
6. **Reliable:** Compiled firmware, very stable
7. **Versatile:** Sensors, relays, LEDs, displays
## Configuration in AI-Homelab
```
/opt/stacks/homeassistant/esphome/config/
├── device1.yaml
└── device2.yaml
```
## Official Resources
- **Website:** https://esphome.io
- **Documentation:** https://esphome.io/index.html
- **Devices:** https://esphome.io/devices/index.html
## Docker Configuration
```yaml
esphome:
image: esphome/esphome:latest
container_name: esphome
restart: unless-stopped
network_mode: host
environment:
- TZ=America/New_York
volumes:
- /opt/stacks/homeassistant/esphome/config:/config
```
## Device Examples
**Temperature Sensor (DHT22):**
```yaml
esphome:
name: bedroom-temp
platform: ESP8266
board: d1_mini
wifi:
ssid: !secret wifi_ssid
password: !secret wifi_password
api:
encryption:
key: !secret api_key
ota:
password: !secret ota_password
sensor:
- platform: dht
pin: D2
temperature:
name: "Bedroom Temperature"
humidity:
name: "Bedroom Humidity"
update_interval: 60s
```
**Smart Plug (Sonoff):**
```yaml
esphome:
  name: living-room-plug
  platform: ESP8266
  board: esp01_1m

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

api:
ota:

binary_sensor:
  - platform: gpio
    pin:
      number: GPIO0
      mode: INPUT_PULLUP
      inverted: True
    name: "Living Room Plug Button"
    on_press:
      - switch.toggle: relay

switch:
  - platform: gpio
    name: "Living Room Plug"
    pin: GPIO12
    id: relay
```
ESPHome turns cheap ESP modules into powerful smart home devices!

# FlareSolverr - Cloudflare Bypass Proxy
## Table of Contents
- [Overview](#overview)
- [What is FlareSolverr?](#what-is-flaresolverr)
- [Why Use FlareSolverr?](#why-use-flaresolverr)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Usage](#usage)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Proxy Service
**Docker Image:** [ghcr.io/flaresolverr/flaresolverr](https://github.com/FlareSolverr/FlareSolverr/pkgs/container/flaresolverr)
**Default Stack:** `media-extended.yml`
**API Port:** 8191
**Authentication:** None (internal service)
**Used By:** Prowlarr, Jackett, NZBHydra2
## What is FlareSolverr?
FlareSolverr is a proxy server that solves Cloudflare and DDoS-GUARD challenges automatically. Many torrent indexers and websites use Cloudflare protection to prevent automated access. FlareSolverr uses a headless browser to solve these challenges, allowing Prowlarr and other *arr apps to access protected indexers.
### Key Features
- **Cloudflare Bypass:** Solves "Checking your browser" challenges
- **DDoS-GUARD Support:** Handles DDoS protection pages
- **Headless Browser:** Uses Chromium to simulate real browser
- **Simple API:** Easy integration with existing tools
- **Session Management:** Maintains authentication cookies
- **No Manual Intervention:** Fully automated
- **Docker Ready:** Easy deployment
- **Lightweight:** Minimal resource usage
## Why Use FlareSolverr?
1. **Access Protected Indexers:** Bypass Cloudflare challenges
2. **Automated:** No manual captcha solving
3. **Essential for Prowlarr:** Many indexers require it
4. **Free:** No paid services needed
5. **Simple Integration:** Works with *arr apps
6. **Session Support:** Maintains login state
7. **Multiple Sites:** Works with various protections
8. **Open Source:** Community-maintained
## How It Works
```
Prowlarr → Indexer (Protected by Cloudflare)
        ↓
Cloudflare Challenge Detected
        ↓
Prowlarr → FlareSolverr API
        ↓
FlareSolverr Opens Headless Browser
        ↓
Solves Cloudflare Challenge
        ↓
Returns Cookies/Content to Prowlarr
        ↓
Prowlarr Accesses Indexer Successfully
```
### Challenge Types
**Cloudflare:**
- "Checking your browser before accessing..."
- JavaScript challenge
- Captcha (in some cases)
**DDoS-GUARD:**
- Similar protection mechanism
- Requires browser verification
## Configuration in AI-Homelab
### Directory Structure
```
# No persistent data needed
# FlareSolverr is stateless
```
### Environment Variables
```bash
# Log level
LOG_LEVEL=info
# Optional: Log HTML responses
LOG_HTML=false
# Optional: Captcha solver (paid services)
# CAPTCHA_SOLVER=none
# Optional: Timeout
# TIMEOUT=60000
```
## Official Resources
- **GitHub:** https://github.com/FlareSolverr/FlareSolverr
- **Docker Hub:** https://github.com/FlareSolverr/FlareSolverr/pkgs/container/flaresolverr
- **Documentation:** https://github.com/FlareSolverr/FlareSolverr/wiki
## Educational Resources
### Videos
- [FlareSolverr Setup](https://www.youtube.com/results?search_query=flaresolverr+prowlarr+setup)
- [Bypass Cloudflare with FlareSolverr](https://www.youtube.com/results?search_query=flaresolverr+cloudflare)
### Articles & Guides
- [GitHub Documentation](https://github.com/FlareSolverr/FlareSolverr)
- [Prowlarr Integration](https://wiki.servarr.com/prowlarr/settings#flaresolverr)
### Concepts to Learn
- **Cloudflare Challenge:** Browser verification system
- **Headless Browser:** Browser without UI
- **Session Management:** Cookie persistence
- **Proxy Server:** Intermediary for requests
- **Rate Limiting:** Request throttling
## Docker Configuration
### Complete Service Definition
```yaml
flaresolverr:
  image: ghcr.io/flaresolverr/flaresolverr:latest
  container_name: flaresolverr
  restart: unless-stopped
  networks:
    - traefik-network
  ports:
    - "8191:8191"
  environment:
    - LOG_LEVEL=info
    - LOG_HTML=false
    - CAPTCHA_SOLVER=none
    - TZ=America/New_York
```
**Note:** No volume needed - stateless service
### Resource Limits (Optional)
```yaml
flaresolverr:
  image: ghcr.io/flaresolverr/flaresolverr:latest
  container_name: flaresolverr
  deploy:
    resources:
      limits:
        memory: 1G
        cpus: '1.0'
  # ... rest of config
```
## Usage
### Prowlarr Integration
**Settings → Indexers → FlareSolverr:**
1. **Tags:** Create tag "flaresolverr"
2. **Host:** `http://flaresolverr:8191`
3. **Test connection**
4. **Save**
**Tag Indexers:**
- Edit indexer that needs FlareSolverr
- Tags → Add "flaresolverr"
- Save
**When to Tag:**
- Indexer returns Cloudflare errors
- "Checking your browser" messages
- "DDoS protection by Cloudflare"
- 403 Forbidden errors
### Manual API Testing
```bash
# Test FlareSolverr
curl -X POST http://localhost:8191/v1 \
  -H "Content-Type: application/json" \
  -d '{
    "cmd": "request.get",
    "url": "https://example.com",
    "maxTimeout": 60000
  }'
```
**Response:**
- Status
- Cookies
- HTML content
- Challenge solution
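When scripting against the API, the solved cookies live inside the `solution` object of the JSON reply. A minimal sketch of pulling them out (the response layout below is a trimmed, made-up example for illustration; consult the FlareSolverr wiki for the full schema):

```python
# Parse a FlareSolverr reply and collect the solved cookies.
# sample_reply is a trimmed, hypothetical response for illustration.
sample_reply = {
    "status": "ok",
    "solution": {
        "url": "https://example.com",
        "status": 200,
        "cookies": [
            {"name": "cf_clearance", "value": "abc123"},
            {"name": "session", "value": "xyz"},
        ],
    },
}

def extract_cookies(reply):
    """Return the solved cookies as a simple name -> value dict."""
    if reply.get("status") != "ok":
        raise RuntimeError("challenge was not solved")
    return {c["name"]: c["value"] for c in reply["solution"]["cookies"]}

print(extract_cookies(sample_reply))  # {'cf_clearance': 'abc123', 'session': 'xyz'}
```

The `cf_clearance` cookie is what downstream requests reuse to skip the challenge.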
### Session Management
**Create Session:**
```bash
curl -X POST http://localhost:8191/v1 \
  -H "Content-Type: application/json" \
  -d '{"cmd": "sessions.create"}'
```
**Use Session:**
```bash
curl -X POST http://localhost:8191/v1 \
  -H "Content-Type: application/json" \
  -d '{
    "cmd": "request.get",
    "url": "https://example.com",
    "session": "SESSION_ID"
  }'
```
**Destroy Session:**
```bash
curl -X POST http://localhost:8191/v1 \
  -H "Content-Type: application/json" \
  -d '{
    "cmd": "sessions.destroy",
    "session": "SESSION_ID"
  }'
```
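These session calls are easy to wrap in a small helper. The sketch below only builds the JSON bodies shown above (the field names `cmd`, `url`, `maxTimeout`, and `session` come straight from the examples); actually sending them is left to `curl` or any HTTP client posting with `Content-Type: application/json`:

```python
import json

def request_get_body(url, session_id=None, max_timeout_ms=60000):
    """JSON body for a FlareSolverr request.get command."""
    body = {"cmd": "request.get", "url": url, "maxTimeout": max_timeout_ms}
    if session_id is not None:
        body["session"] = session_id
    return json.dumps(body)

def session_body(cmd, session_id=None):
    """JSON body for sessions.create / sessions.destroy."""
    body = {"cmd": cmd}
    if session_id is not None:
        body["session"] = session_id
    return json.dumps(body)

# Example: a request that reuses a previously created session
print(request_get_body("https://example.com", session_id="SESSION_ID"))
```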
## Troubleshooting
### FlareSolverr Not Working
```bash
# Check container status
docker ps | grep flaresolverr
# Check logs
docker logs flaresolverr
# Test API
curl http://localhost:8191/health
# Should return: {"status": "ok"}
# Check connectivity from Prowlarr
docker exec prowlarr curl http://flaresolverr:8191/health
```
### Indexer Still Blocked
```bash
# Common causes:
# 1. FlareSolverr not tagged on indexer
# 2. Cloudflare updated protection
# 3. IP temporarily banned
# 4. Rate limiting
# Verify tag
# Prowlarr → Indexers → Edit indexer → Tags
# Check FlareSolverr logs
docker logs flaresolverr | tail -50
# Try different indexer
# Some sites may be too aggressive
# Wait and retry
# Temporary bans usually lift after time
```
### High Memory Usage
```bash
# Check resource usage
docker stats flaresolverr
# Chromium uses significant memory
# Normal: 200-500MB
# High load: 500MB-1GB
# Restart if memory leak
docker restart flaresolverr
# Set memory limit
# Add to docker-compose:
deploy:
  resources:
    limits:
      memory: 1G
```
### Timeout Errors
```bash
# Increase timeout
# Environment variable:
TIMEOUT=120000 # 2 minutes
# Or in request:
curl -X POST http://localhost:8191/v1 \
  -H "Content-Type: application/json" \
  -d '{
    "cmd": "request.get",
    "url": "https://example.com",
    "maxTimeout": 120000
  }'
# Check network speed
# Slow connections need longer timeout
```
### Browser Crashes
```bash
# Check logs for crashes
docker logs flaresolverr | grep -i crash
# Restart container
docker restart flaresolverr
# Check memory limits
# May need more RAM
# Update to latest version
docker pull ghcr.io/flaresolverr/flaresolverr:latest
docker compose up -d flaresolverr
```
## Performance Optimization
### Resource Allocation
**Recommended:**
- CPU: 0.5-1 core
- RAM: 500MB-1GB
- No disk I/O needed
**High Load:**
- Increase memory limit
- More CPU if many requests
### Request Throttling
**Prowlarr automatically throttles:**
- Don't overload FlareSolverr
- Rate limits prevent bans
### Session Reuse
**For authenticated sites:**
- Create persistent session
- Reuse across requests
- Reduces challenge frequency
## Security Best Practices
1. **Internal Network Only:** Don't expose port 8191 publicly
2. **No Authentication:** FlareSolverr has no auth (keep internal)
3. **Docker Network:** Use private Docker network
4. **Regular Updates:** Keep FlareSolverr current
5. **Monitor Logs:** Watch for abuse
6. **Resource Limits:** Prevent DoS via resource exhaustion
## Integration with Other Services
### FlareSolverr + Prowlarr
- Bypass Cloudflare on indexers
- Tag-based activation
- Automatic challenge solving
### FlareSolverr + Jackett
- Similar integration
- Configure FlareSolverr endpoint
- Tag indexers needing it
### FlareSolverr + NZBHydra2
- Usenet indexer aggregator
- Cloudflare bypass support
- Configure endpoint URL
## Summary
FlareSolverr is a Cloudflare bypass proxy offering:
- Automatic challenge solving
- Prowlarr integration
- Headless browser technology
- Session management
- Simple API
- Free and open-source
**Perfect for:**
- Protected indexer access
- Prowlarr users
- Cloudflare bypassing
- Automated workflows
- *arr stack integration
**Key Points:**
- Tag indexers in Prowlarr
- No authentication (keep internal)
- Uses headless Chromium
- Memory usage ~500MB
- Stateless service
- Essential for many indexers
**Remember:**
- Don't expose publicly
- Tag only needed indexers
- Monitor resource usage
- Restart if memory issues
- Keep updated
- Internal Docker network only
FlareSolverr enables access to Cloudflare-protected indexers automatically!

# Gitea - Git Server
## Table of Contents
- [Overview](#overview)
- [What is Gitea?](#what-is-gitea)
- [Why Use Gitea?](#why-use-gitea)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
## Overview
**Category:** Git Repository Hosting
**Docker Image:** [gitea/gitea](https://hub.docker.com/r/gitea/gitea)
**Default Stack:** `productivity.yml`
**Web UI:** `https://gitea.${DOMAIN}` or `http://SERVER_IP:3000`
**SSH:** Port 222
**Ports:** 3000, 222
## What is Gitea?
Gitea is a self-hosted Git service similar to GitHub/GitLab but lightweight and easy to deploy. It provides web-based Git repository hosting with features like pull requests, code review, issue tracking, and CI/CD integration - all running on your own infrastructure.
### Key Features
- **Git Repositories:** Unlimited repos
- **Web Interface:** GitHub-like UI
- **Pull Requests:** Code review workflow
- **Issue Tracking:** Built-in bug tracking
- **Wiki:** Per-repository wikis
- **Organizations:** Team management
- **SSH & HTTP:** Git access methods
- **Actions:** CI/CD (GitHub Actions compatible)
- **Webhooks:** Integration hooks
- **API:** REST API
- **Lightweight:** Runs on Raspberry Pi
- **Free & Open Source:** MIT license
## Why Use Gitea?
1. **Self-Hosted:** Control your code
2. **Private Repos:** Unlimited private repos
3. **Lightweight:** Low resource usage
4. **Fast:** Go-based, very quick
5. **Easy Setup:** Minutes to deploy
6. **GitHub Alternative:** Similar features
7. **No Limits:** No user/repo restrictions
8. **Privacy:** Code never leaves your server
## Configuration in AI-Homelab
```
/opt/stacks/productivity/gitea/data/ # Git repos
/opt/stacks/productivity/gitea/config/ # Configuration
```
## Official Resources
- **Website:** https://gitea.io
- **Documentation:** https://docs.gitea.io
- **GitHub:** https://github.com/go-gitea/gitea
## Docker Configuration
```yaml
gitea:
  image: gitea/gitea:latest
  container_name: gitea
  restart: unless-stopped
  networks:
    - traefik-network
  ports:
    - "3000:3000"
    - "222:22"
  environment:
    - USER_UID=1000
    - USER_GID=1000
    - GITEA__database__DB_TYPE=sqlite3
    - GITEA__server__DOMAIN=gitea.${DOMAIN}
    - GITEA__server__ROOT_URL=https://gitea.${DOMAIN}
    - GITEA__server__SSH_DOMAIN=gitea.${DOMAIN}
    - GITEA__server__SSH_PORT=222
  volumes:
    - /opt/stacks/productivity/gitea/data:/data
    - /etc/timezone:/etc/timezone:ro
    - /etc/localtime:/etc/localtime:ro
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.gitea.rule=Host(`gitea.${DOMAIN}`)"
```
## Setup
1. **Start Container:**
```bash
docker compose up -d gitea
```
2. **Access UI:** `http://SERVER_IP:3000`
3. **Initial Configuration:**
- Database: SQLite (default, sufficient for most)
- Admin username/password
- Application URL: `https://gitea.yourdomain.com`
- SSH Port: 222
4. **Create Repository:**
- "+" button → New Repository
- Name, description, visibility
- Initialize with README if desired
5. **Clone Repository:**
```bash
# HTTPS
git clone https://gitea.yourdomain.com/username/repo.git
# SSH (configure SSH key first)
git clone ssh://git@gitea.yourdomain.com:222/username/repo.git
```
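The clone URLs above follow a fixed pattern, and the `ssh://` form is required because SSH runs on the non-default port 222. A small sketch of building them (hostname, owner, and repo names are placeholders):

```python
def gitea_https_url(host, owner, repo):
    """HTTPS clone URL for a Gitea repository."""
    return "https://{}/{}/{}.git".format(host, owner, repo)

def gitea_ssh_url(host, owner, repo, port=222):
    """SSH clone URL; the ssh:// form is needed for a non-default SSH port."""
    return "ssh://git@{}:{}/{}/{}.git".format(host, port, owner, repo)

print(gitea_ssh_url("gitea.yourdomain.com", "username", "repo"))
# ssh://git@gitea.yourdomain.com:222/username/repo.git
```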
## Summary
Gitea is your self-hosted Git server offering:
- GitHub-like interface
- Unlimited repositories
- Pull requests & code review
- Issue tracking
- Organizations & teams
- CI/CD with Actions
- Lightweight & fast
- Free and open-source
**Perfect for:**
- Personal projects
- Private code hosting
- Team development
- GitHub alternative
- Code portfolio
- Learning Git workflows
- CI/CD pipelines
**Key Points:**
- Very lightweight (runs on Pi)
- GitHub-like features
- SSH and HTTPS access
- Built-in CI/CD (Actions)
- SQLite or external DB
- Webhook support
- API available
- Easy migration from GitHub
**Remember:**
- Configure SSH keys for easy access
- Use organizations for teams
- Enable Actions for CI/CD
- Regular backups of /data
- Strong admin password
- Consider external database for heavy use
- Port 222 for SSH (avoid 22 conflict)
Gitea puts your code under your control!

# GitLab - DevOps Platform
## Table of Contents
- [Overview](#overview)
- [What is GitLab?](#what-is-gitlab)
- [Why Use GitLab?](#why-use-gitlab)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
## Overview
**Category:** DevOps Platform
**Docker Image:** [gitlab/gitlab-ce](https://hub.docker.com/r/gitlab/gitlab-ce)
**Default Stack:** `development.yml`
**Web UI:** `http://SERVER_IP:8929`
**SSH:** Port 2224
**Ports:** 8929, 2224
**Resource Requirements:** 4GB+ RAM
## What is GitLab?
GitLab is a complete DevOps platform - Git hosting, CI/CD, issue tracking, container registry, and more in one application. It's the open-source alternative to GitHub Enterprise, providing everything needed for modern software development.
### Key Features
- **Git Repositories:** Unlimited repos
- **CI/CD:** GitLab Runner integration
- **Issue Tracking:** Project management
- **Container Registry:** Docker image hosting
- **Wiki:** Per-project documentation
- **Code Review:** Merge requests
- **Snippets:** Code sharing
- **Auto DevOps:** Automated CI/CD
- **Security Scanning:** Built-in security
- **Free & Open Source:** CE edition
## Why Use GitLab?
1. **All-in-One:** Git + CI/CD + more
2. **Self-Hosted:** Private code platform
3. **CI/CD Included:** No separate service
4. **Container Registry:** Host Docker images
5. **Issue Tracking:** Built-in project management
6. **GitHub Alternative:** More features included
7. **Active Development:** Regular updates
## Configuration in AI-Homelab
```
/opt/stacks/development/gitlab/
├── config/   # GitLab configuration
├── logs/     # Application logs
└── data/     # Git repositories, uploads
```
**Warning:** GitLab is resource-intensive (4GB+ RAM minimum).
## Official Resources
- **Website:** https://about.gitlab.com
- **Documentation:** https://docs.gitlab.com
- **CI/CD Docs:** https://docs.gitlab.com/ee/ci
## Docker Configuration
```yaml
gitlab:
  image: gitlab/gitlab-ce:latest
  container_name: gitlab
  restart: unless-stopped
  hostname: gitlab.${DOMAIN}
  networks:
    - traefik-network
  ports:
    - "8929:80"
    - "2224:22"
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'https://gitlab.${DOMAIN}'
      gitlab_rails['gitlab_shell_ssh_port'] = 2224
      gitlab_rails['time_zone'] = 'America/New_York'
  volumes:
    - /opt/stacks/development/gitlab/config:/etc/gitlab
    - /opt/stacks/development/gitlab/logs:/var/log/gitlab
    - /opt/stacks/development/gitlab/data:/var/opt/gitlab
  shm_size: '256m'
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.gitlab.rule=Host(`gitlab.${DOMAIN}`)"
```
**Note:** First startup takes 5-10 minutes. GitLab needs significant resources.
## Summary
GitLab is your complete DevOps platform offering:
- Git repository hosting
- Built-in CI/CD pipelines
- Container registry
- Issue tracking
- Wiki and documentation
- Code review (merge requests)
- Security scanning
- Free and open-source
**Perfect for:**
- Private Git hosting
- CI/CD pipelines
- Team development
- DevOps workflows
- Container image hosting
- Project management
- Self-hosted GitHub alternative
**Key Points:**
- Requires 4GB+ RAM
- All-in-one DevOps platform
- Built-in CI/CD
- Container registry included
- First startup slow (5-10 min)
- SSH on port 2224
- Resource intensive
**Remember:**
- Needs significant resources
- Initial setup takes time
- Get the initial root password from `/etc/gitlab/initial_root_password` (valid for 24 hours)
- Configure GitLab Runner for CI/CD
- Container registry built-in
- Regular backups important
- Update carefully (read changelogs)
GitLab provides enterprise DevOps at home!

# Glances - System Monitoring Dashboard
## Table of Contents
- [Overview](#overview)
- [What is Glances?](#what-is-glances)
- [Why Use Glances?](#why-use-glances)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Using Glances](#using-glances)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Infrastructure Monitoring
**Docker Image:** [nicolargo/glances](https://hub.docker.com/r/nicolargo/glances)
**Default Stack:** `infrastructure.yml`
**Web UI:** `https://glances.${DOMAIN}`
**Authentication:** Protected by Authelia (SSO)
**Purpose:** Real-time system resource monitoring
## What is Glances?
Glances is a cross-platform system monitoring tool that provides a comprehensive overview of your system's resources. It displays CPU, memory, disk, network, and process information in a single interface, accessible via CLI, Web UI, or API.
### Key Features
- **Comprehensive Monitoring:** CPU, RAM, disk, network, sensors, processes
- **Real-Time Updates:** Live statistics updated every few seconds
- **Web Interface:** Beautiful responsive dashboard
- **REST API:** Export metrics for other tools
- **Docker Support:** Monitor containers and host system
- **Alerts:** Configurable thresholds for warnings
- **Historical Data:** Short-term data retention
- **Extensible:** Plugins for additional monitoring
- **Export Options:** InfluxDB, Prometheus, CSV, JSON
- **Lightweight:** Minimal resource usage
- **Cross-Platform:** Linux, macOS, Windows
## Why Use Glances?
1. **Quick Overview:** See system health at a glance
2. **Resource Monitoring:** Track CPU, RAM, disk usage
3. **Process Management:** Identify resource-heavy processes
4. **Container Monitoring:** Monitor Docker containers
5. **Network Analysis:** Track network bandwidth
6. **Temperature Monitoring:** Hardware sensor data
7. **Disk I/O:** Identify disk bottlenecks
8. **Easy Access:** Web interface from any device
9. **No Complex Setup:** Works out of the box
10. **Free & Open Source:** No licensing costs
## How It Works
```
Host System (CPU, RAM, Disk, Network)
        ↓
Glances Container (accesses host metrics via /proc, /sys)
        ↓
Data Collection & Processing
        ↓
┌─────────────┬──────────────┬────────────┐
│   Web UI    │   REST API   │    CLI     │
│ (Port 61208)│ (JSON/Export)│ (Terminal) │
└─────────────┴──────────────┴────────────┘
```
### Monitoring Architecture
**Host Access:**
- `/proc` - Process and system info
- `/sys` - Hardware information
- `/var/run/docker.sock` - Docker container stats
- `/etc/os-release` - System information
**Data Flow:**
1. **Collect:** Gather metrics from system files
2. **Process:** Calculate rates, averages, deltas
3. **Store:** Keep short-term history in memory
4. **Display:** Render in web UI or export
5. **Alert:** Check thresholds and warn if exceeded
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/infrastructure/glances/
└── config/
    └── glances.conf   # Optional configuration file
```
### Environment Variables
```bash
# Web server mode
GLANCES_OPT=-w
# Timezone
TZ=America/New_York
# Update interval (seconds)
# GLANCES_OPT=-w -t 2
# Optional: serve only the REST API, with no web UI
# GLANCES_OPT=-w --disable-webui
```
## Official Resources
- **Website:** https://nicolargo.github.io/glances/
- **GitHub:** https://github.com/nicolargo/glances
- **Docker Hub:** https://hub.docker.com/r/nicolargo/glances
- **Documentation:** https://glances.readthedocs.io
- **Wiki:** https://github.com/nicolargo/glances/wiki
## Educational Resources
### Videos
- [Glances - System Monitoring Tool (Techno Tim)](https://www.youtube.com/watch?v=3dT1LEVhdJM)
- [Server Monitoring with Glances](https://www.youtube.com/results?search_query=glances+system+monitoring)
- [Linux System Monitoring Tools](https://www.youtube.com/watch?v=5JHwNjX6FKs)
### Articles & Guides
- [Glances Official Documentation](https://glances.readthedocs.io)
- [Glances Configuration Guide](https://glances.readthedocs.io/en/stable/config.html)
- [System Monitoring Best Practices](https://www.brendangregg.com/linuxperf.html)
### Concepts to Learn
- **/proc filesystem:** Linux process information
- **CPU Load Average:** 1, 5, and 15-minute averages
- **Memory Types:** RAM, swap, cache, buffers
- **Disk I/O:** Read/write operations per second
- **Network Metrics:** Bandwidth, packets, errors
- **Process States:** Running, sleeping, zombie
- **System Sensors:** Temperature, fan speeds, voltages
## Docker Configuration
### Complete Service Definition
```yaml
glances:
  image: nicolargo/glances:latest
  container_name: glances
  restart: unless-stopped
  pid: host  # Required for accurate process monitoring
  networks:
    - traefik-network
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - /etc/os-release:/etc/os-release:ro
    - /opt/stacks/infrastructure/glances/config:/glances/conf:ro
  environment:
    - GLANCES_OPT=-w
    - TZ=America/New_York
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.glances.rule=Host(`glances.${DOMAIN}`)"
    - "traefik.http.routers.glances.entrypoints=websecure"
    - "traefik.http.routers.glances.tls.certresolver=letsencrypt"
    - "traefik.http.routers.glances.middlewares=authelia@docker"
    - "traefik.http.services.glances.loadbalancer.server.port=61208"
```
### Important Mount Points
```yaml
volumes:
  # Docker container monitoring
  - /var/run/docker.sock:/var/run/docker.sock:ro
  # System information
  - /etc/os-release:/etc/os-release:ro
  # Host filesystem (optional, for disk monitoring)
  # - /:/rootfs:ro
  # Configuration file (optional)
  - /opt/stacks/infrastructure/glances/config:/glances/conf:ro
```
### Network Modes
**For better host monitoring, use host network:**
```yaml
glances:
  network_mode: host
  # Then access via: http://SERVER_IP:61208
  # Or still use Traefik but with host networking
```
## Using Glances
### Dashboard Overview
**Web Interface Sections:**
1. **Header:**
- Hostname
- System uptime
- Linux distribution
- Current time
2. **CPU:**
- Overall CPU usage (%)
- Per-core utilization
- Load average (1, 5, 15 min)
- Context switches
3. **Memory:**
- Total RAM
- Used/Free
- Cache/Buffers
- Swap usage
4. **Disk:**
- Partition information
- Space used/free
- Mount points
- Disk I/O rates
5. **Network:**
- Interface names
- Upload/Download rates
- Total transferred
- Errors and drops
6. **Sensors:**
- CPU temperature
- Fan speeds
- Other hardware sensors
7. **Docker:**
- Container list
- Container CPU/Memory
- Container I/O
8. **Processes:**
- Top CPU/Memory processes
- Process details
- Sort options
### Color Coding
Glances uses colors to indicate resource usage:
- **Green:** OK (< 50%)
- **Blue:** Caution (50-70%)
- **Magenta:** Warning (70-90%)
- **Red:** Critical (> 90%)
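Those bands are simple threshold comparisons. A sketch of the default mapping (the exact boundary behavior at 50/70/90 is an assumption here, and the cutoffs are tunable in `glances.conf`):

```python
def glances_color(usage_pct):
    """Map a usage percentage to Glances' default color band."""
    if usage_pct < 50:
        return "green"    # OK
    if usage_pct < 70:
        return "blue"     # Caution
    if usage_pct < 90:
        return "magenta"  # Warning
    return "red"          # Critical

for pct in (12, 55, 75, 95):
    print(pct, glances_color(pct))
```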
### Sorting Options
Click on column headers to sort:
- **CPU:** CPU usage
- **MEM:** Memory usage
- **TIME:** Process runtime
- **NAME:** Process name
- **PID:** Process ID
### Keyboard Shortcuts (CLI Mode)
If accessing via terminal:
```
a - Sort by automatic (CPU + memory)
c - Sort by CPU
m - Sort by memory
p - Sort by process name
i - Sort by I/O rate
t - Sort by time (cumulative)
d - Show/hide disk I/O
f - Show/hide filesystem
n - Show/hide network
s - Show/hide sensors
k - Kill process
h - Show help
q - Quit
```
## Advanced Topics
### Configuration File
Create custom configuration for alerts and thresholds:
**glances.conf:**
```ini
[global]
refresh=2
check_update=false
[cpu]
# CPU thresholds (%)
careful=50
warning=70
critical=90
[memory]
# Memory thresholds (%)
careful=50
warning=70
critical=90
[load]
# Load average thresholds
careful=1.0
warning=2.0
critical=5.0
[diskio]
# Disk I/O hide regex
hide=loop.*,ram.*
[fs]
# Filesystem hide regex
hide=/boot.*,/snap.*
[network]
# Network interface hide regex
hide=lo,docker.*
[docker]
# Show all containers
all=true
# Max length of container names shown
max_name_size=20
[alert]
# Alert on high CPU
cpu_careful=50
cpu_warning=70
cpu_critical=90
```
Mount config:
```yaml
volumes:
  - /opt/stacks/infrastructure/glances/config/glances.conf:/glances/conf/glances.conf:ro
```
### Export to InfluxDB
Send metrics to InfluxDB for long-term storage:
**glances.conf:**
```ini
[influxdb]
host=influxdb
port=8086
protocol=http
user=glances
password=glances
db=glances
prefix=localhost
tags=environment:homelab
```
### Export to Prometheus
Make metrics available for Prometheus scraping:
```yaml
glances:
  environment:
    - GLANCES_OPT=-w --export prometheus
  ports:
    - "9091:9091"  # Prometheus exporter port
```
**Prometheus config:**
```yaml
scrape_configs:
  - job_name: 'glances'
    static_configs:
      - targets: ['glances:9091']
```
### REST API
Access metrics programmatically:
```bash
# Get all stats
curl http://glances:61208/api/3/all
# Get specific stat
curl http://glances:61208/api/3/cpu
curl http://glances:61208/api/3/mem
curl http://glances:61208/api/3/docker
# Get process list
curl http://glances:61208/api/3/processlist
# Full API documentation
curl http://glances:61208/docs
```
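Every endpoint returns plain JSON, so consuming it from a script is one parse away. This sketch extracts the overall CPU figure from a trimmed, made-up `/api/3/cpu` body (the real reply carries more fields; `total` is the same field the Home Assistant integration in this guide reads):

```python
import json

# Trimmed, hypothetical example of a /api/3/cpu response body
sample_cpu = '{"total": 23.5, "user": 15.2, "system": 6.1, "idle": 76.5}'

def cpu_total(raw):
    """Overall CPU usage (%) from a /api/3/cpu response body."""
    return json.loads(raw)["total"]

print(cpu_total(sample_cpu))  # 23.5
```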
### Alerts and Actions
Configure alerts in glances.conf:
```ini
[alert]
disable=False
[process]
# Alert if process not running
list=sshd,nginx,docker
# Alert if process running
disable_pattern=.*badprocess.*
[action]
# Execute script on alert
critical_action=script:/scripts/alert.sh
```
### Multi-Server Monitoring
Monitor multiple servers:
**On each remote server:**
```yaml
glances:
  image: nicolargo/glances
  environment:
    - GLANCES_OPT=-s  # Server mode
  ports:
    - "61209:61209"
```
**On main server:**
```yaml
glances:
  environment:
    - GLANCES_OPT=-w --browser
  # Or use client mode to connect
```
Access via web UI: "Remote monitoring" section
### Custom Plugins
Create custom monitoring plugins:
```python
# /opt/stacks/infrastructure/glances/config/plugins/custom_plugin.py
from glances.plugins.glances_plugin import GlancesPlugin

class Plugin(GlancesPlugin):
    def update(self):
        # Your custom monitoring code goes here;
        # get_custom_data() is a placeholder for your own data source
        stats = {}
        stats['custom_metric'] = get_custom_data()
        return stats
```
## Troubleshooting
### Glances Not Showing Host Metrics
```bash
# Verify host access
docker exec glances ls /proc
docker exec glances cat /proc/cpuinfo
# Check pid mode
docker inspect glances | grep -i pid
# Ensure proper mounts
docker inspect glances | grep -A10 Mounts
# Try host network mode
# In compose: network_mode: host
```
### Docker Containers Not Visible
```bash
# Verify Docker socket mount
docker exec glances ls -la /var/run/docker.sock
# Check permissions
docker exec glances docker ps
# Ensure docker section enabled in config
# Or no config file hiding it
```
### High CPU Usage from Glances
```bash
# Increase refresh interval
GLANCES_OPT=-w -t 5 # Update every 5 seconds
# Disable modules you don't need
# In glances.conf:
# disable=sensors,raid
# Check if something is hammering the API
docker logs traefik | grep glances
```
### Temperature Sensors Not Showing
```bash
# Need access to /sys
volumes:
  - /sys:/sys:ro
# Install lm-sensors on host
sudo apt install lm-sensors
sudo sensors-detect
# Verify sensors work on host
sensors
```
### Web Interface Not Loading
```bash
# Check if Glances is running
docker ps | grep glances
# View logs
docker logs glances
# Test direct access
curl http://SERVER_IP:61208
# Check Traefik routing
docker logs traefik | grep glances
# Verify web mode enabled
docker exec glances ps aux | grep glances
```
### Disk Information Incomplete
```bash
# Mount host root filesystem
volumes:
  - /:/rootfs:ro
# In glances.conf:
[fs]
hide=/rootfs/boot.*,/rootfs/snap.*
```
### Memory Information Incorrect
```bash
# Use host PID namespace
pid: host
# Check /proc/meminfo access
docker exec glances cat /proc/meminfo
# Restart container
docker restart glances
```
## Performance Optimization
### Reduce Update Frequency
```yaml
environment:
  - GLANCES_OPT=-w -t 5  # Update every 5 seconds (default is 2)
```
### Disable Unnecessary Modules
**glances.conf:**
```ini
[global]
# Disable modules you don't need
disable=raid,sensors,hddtemp
```
### Limit Process List
```ini
[processlist]
# Max number of processes to display
max=50
```
### Resource Limits
```yaml
glances:
  deploy:
    resources:
      limits:
        cpus: '0.5'
        memory: 256M
```
## Security Considerations
1. **Protect with Authelia:** Exposes sensitive system info
2. **Read-Only Mounts:** Use `:ro` for all mounted volumes
3. **Limited Socket Access:** Consider Docker Socket Proxy
4. **No Public Access:** Never expose without authentication
5. **API Security:** Restrict API access if enabled
6. **Process Info:** Can reveal application details
7. **Network Monitoring:** Shows internal network traffic
8. **Regular Updates:** Keep Glances container updated
9. **Audit Logs:** Monitor who accesses the interface
10. **Minimal Permissions:** Only mount what's necessary
## Integration with Other Tools
### Grafana Dashboard
Export to Prometheus, then visualize in Grafana:
1. **Enable Prometheus export** in Glances
2. **Add Prometheus datasource** in Grafana
3. **Import Glances dashboard:** https://grafana.com/grafana/dashboards/
### Home Assistant
Monitor via REST API:
```yaml
sensor:
  - platform: rest
    resource: http://glances:61208/api/3/cpu
    name: Server CPU
    value_template: '{{ value_json.total }}'
    unit_of_measurement: '%'
```
### Uptime Kuma
Monitor Glances availability:
- Type: HTTP(s)
- URL: https://glances.yourdomain.com
- Heartbeat: Every 60 seconds
## Comparison with Alternatives
### Glances vs htop/top
**Glances:**
- Web interface
- Historical data
- Docker monitoring
- Remote access
- Exportable metrics
**htop:**
- Terminal only
- No web interface
- Lower overhead
- More detailed process tree
### Glances vs Netdata
**Glances:**
- Simpler setup
- Lighter weight
- Better for single server
- Python-based
**Netdata:**
- More detailed metrics
- Better long-term storage
- Complex setup
- Better for multiple servers
### Glances vs Prometheus + Grafana
**Glances:**
- All-in-one solution
- Easier setup
- Less powerful
- Short-term data
**Prometheus + Grafana:**
- Enterprise-grade
- Long-term storage
- Complex setup
- Powerful querying
## Summary
Glances provides real-time system monitoring in a simple, accessible interface. It:
- Shows comprehensive system metrics at a glance
- Monitors Docker containers alongside host
- Provides web UI and REST API
- Uses minimal resources
- Requires minimal configuration
- Perfect for quick system health checks
**Best For:**
- Homelab monitoring
- Quick troubleshooting
- Resource usage overview
- Docker container monitoring
- Single-server setups
**Not Ideal For:**
- Long-term metric storage
- Complex alerting
- Multi-server enterprise monitoring
- Detailed performance analysis
**Remember:**
- Protect with Authelia
- Use host PID mode for accurate monitoring
- Mount Docker socket for container stats
- Configure thresholds for alerts
- Complement with Grafana for long-term analysis
- Lightweight alternative to complex monitoring stacks

# Gluetun - VPN Client Container
## Table of Contents
- [Overview](#overview)
- [What is Gluetun?](#what-is-gluetun)
- [Why Use Gluetun?](#why-use-gluetun)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Routing Traffic Through Gluetun](#routing-traffic-through-gluetun)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Core Infrastructure
**Docker Image:** [qmcgaw/gluetun](https://hub.docker.com/r/qmcgaw/gluetun)
**Default Stack:** `core.yml`
**Web UI:** `http://SERVER_IP:8000` (Control Server)
**VPN Provider:** Surfshark (or 60+ others supported)
## What is Gluetun?
Gluetun is a lightweight VPN client container that provides VPN connectivity to other Docker containers. Instead of installing VPN clients on your host or within individual containers, Gluetun acts as a VPN gateway that other containers can route their traffic through.
### Key Features
- **60+ VPN Providers:** Surfshark, NordVPN, Private Internet Access, ProtonVPN, Mullvad, etc.
- **Kill Switch:** Blocks all traffic if VPN disconnects
- **Port Forwarding:** Automatic port forwarding for supported providers
- **Network Namespace Sharing:** Other containers can use Gluetun's network
- **Health Checks:** Built-in monitoring and auto-reconnection
- **DNS Management:** Uses VPN provider's DNS for privacy
- **HTTP Control Server:** Web UI for monitoring and control
- **IPv6 Support:** Optional IPv6 routing
- **Custom Provider Support:** Can configure any OpenVPN/Wireguard provider
## Why Use Gluetun?
1. **Privacy for Torrenting:** Hide your IP when using qBittorrent
2. **Geo-Restrictions:** Access region-locked content
3. **Container-Level VPN:** Only specific services use VPN, not entire system
4. **Kill Switch Protection:** Traffic blocked if VPN fails
5. **Easy Management:** Single container for all VPN needs
6. **Provider Flexibility:** Switch between providers easily
7. **No Split Tunneling Complexity:** Docker handles networking
8. **Port Forwarding:** Essential for torrent seeding
## How It Works
```
Internet → VPN Server (Surfshark) → Gluetun Container
                                          │
                                   Shared Network
                             ┌────────────┴────────────┐
                             ↓                         ↓
                       qBittorrent                 Prowlarr
                  (network: gluetun)        (network: gluetun)
```
### Network Namespace Sharing
Containers can use Gluetun's network stack:
```yaml
qbittorrent:
  image: linuxserver/qbittorrent
  network_mode: "service:gluetun"  # Use Gluetun's network
  # This container now routes ALL traffic through VPN
```
### Traffic Flow
1. **Container makes request** (e.g., qBittorrent downloads torrent)
2. **Traffic routed** to Gluetun container
3. **Gluetun encrypts** traffic and sends through VPN tunnel
4. **VPN server receives** encrypted traffic
5. **VPN server forwards** request to internet
6. **Response flows back** through same path
7. **If VPN fails,** kill switch blocks all traffic
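A quick way to sanity-check this flow is to compare the host's public IP with the IP seen from inside Gluetun's network - they should differ. A sketch with illustrative addresses (the real commands are shown in the comments):

```shell
# Illustrative addresses only; in practice:
#   home_ip=$(curl -s https://api.ipify.org)                       # from the host
#   vpn_ip=$(docker exec gluetun wget -qO- https://api.ipify.org)  # through Gluetun
home_ip="203.0.113.5"
vpn_ip="198.51.100.7"

if [ "$home_ip" != "$vpn_ip" ]; then
  echo "OK: traffic exits via the VPN ($vpn_ip)"
else
  echo "WARNING: VPN IP matches home IP - possible leak"
fi
```

If the two addresses ever match, traffic is not going through the tunnel and you should check the Gluetun logs before using any service behind it.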
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/core/gluetun/
└── (No persistent config needed - all via environment variables)
```
### Environment Variables
```bash
# VPN Provider
VPN_SERVICE_PROVIDER=surfshark
VPN_TYPE=openvpn # or wireguard
# Surfshark Credentials
OPENVPN_USER=your-surfshark-username
OPENVPN_PASSWORD=your-surfshark-password
# Server Selection
SERVER_COUNTRIES=USA # or SERVER_CITIES=New York
# SERVER_REGIONS=us-east
# Features
FIREWALL_OUTBOUND_SUBNETS=192.168.1.0/24 # Allow local network
PORT_FORWARD=on # Enable port forwarding (if supported)
DOT=on # DNS over TLS
# Health Check
HEALTH_VPN_DURATION_INITIAL=30s
HEALTH_SUCCESS_WAIT_DURATION=5m
```
### Surfshark Setup
1. **Get Surfshark Account:**
   - Sign up at https://surfshark.com
   - Go to Manual Setup → OpenVPN/Wireguard
   - Copy service credentials (NOT your login credentials)
2. **Generate Service Credentials:**
   ```
   Dashboard → Manual Setup → Credentials
   Username: random-string
   Password: random-string
   ```
3. **Configure Gluetun:**
   ```bash
   OPENVPN_USER=your-service-username
   OPENVPN_PASSWORD=your-service-password
   ```
### Server Selection Options
```bash
# By Country
SERVER_COUNTRIES=USA,Canada
# By City
SERVER_CITIES=New York,Los Angeles
# By Region (provider-specific)
SERVER_REGIONS=us-east
# By Hostname (specific server)
SERVER_HOSTNAMES=us-nyc-st001
# Random selection within criteria
# Gluetun will pick best server automatically
```
## Official Resources
- **GitHub:** https://github.com/qdm12/gluetun
- **Docker Hub:** https://hub.docker.com/r/qmcgaw/gluetun
- **Wiki:** https://github.com/qdm12/gluetun-wiki
- **Provider Setup:** https://github.com/qdm12/gluetun-wiki/tree/main/setup/providers
- **Surfshark Setup:** https://github.com/qdm12/gluetun-wiki/blob/main/setup/providers/surfshark.md
## Educational Resources
### Videos
- [VPN Explained - What is a VPN? (NetworkChuck)](https://www.youtube.com/watch?v=YEe8vs26ytg)
- [Docker VPN Setup with Gluetun (Techno Tim)](https://www.youtube.com/watch?v=fpkLvnAKen0)
- [Secure Your Docker Containers with VPN](https://www.youtube.com/results?search_query=gluetun+docker+vpn)
- [Port Forwarding for Torrents Explained](https://www.youtube.com/watch?v=jTThdKLHbq8)
### Articles & Guides
- [Gluetun Wiki - Getting Started](https://github.com/qdm12/gluetun-wiki)
- [VPN Kill Switch Explained](https://www.comparitech.com/blog/vpn-privacy/vpn-kill-switch/)
- [Why Use VPN for Torrenting](https://www.cloudwards.net/vpn-for-torrenting/)
- [Network Namespace Sharing in Docker](https://docs.docker.com/network/)
### Concepts to Learn
- **VPN (Virtual Private Network):** Encrypted tunnel for internet traffic
- **Kill Switch:** Blocks traffic if VPN disconnects
- **Port Forwarding:** Allows incoming connections (important for seeding)
- **DNS Leak:** When DNS queries bypass VPN (Gluetun prevents this)
- **Split Tunneling:** Some apps use VPN, others don't (Docker makes this easy)
- **OpenVPN vs Wireguard:** Two VPN protocols (Wireguard is newer, faster)
- **Network Namespace:** Container network isolation/sharing
## Docker Configuration
### Complete Service Definition
```yaml
gluetun:
  image: qmcgaw/gluetun:latest
  container_name: gluetun
  restart: unless-stopped
  cap_add:
    - NET_ADMIN                  # Required for VPN
  devices:
    - /dev/net/tun:/dev/net/tun  # Required for VPN
  networks:
    - traefik-network
  ports:
    # Gluetun Control Server
    - "8000:8000"
    # Ports for services using Gluetun's network
    - "8080:8080"  # qBittorrent Web UI
    - "9696:9696"  # Prowlarr Web UI (if through VPN)
    # Add more as needed
  volumes:
    - /opt/stacks/core/gluetun:/gluetun
  environment:
    - VPN_SERVICE_PROVIDER=surfshark
    - VPN_TYPE=openvpn
    - OPENVPN_USER=${SURFSHARK_USER}
    - OPENVPN_PASSWORD=${SURFSHARK_PASSWORD}
    - SERVER_COUNTRIES=USA
    - FIREWALL_OUTBOUND_SUBNETS=192.168.1.0/24
    - PORT_FORWARD=on  # Only effective with providers that support it (Surfshark does not)
    - DOT=on
    - TZ=America/New_York
  # Health check
  healthcheck:
    test: ["CMD", "wget", "--spider", "-q", "https://api.ipify.org"]
    interval: 1m
    timeout: 10s
    retries: 3
    start_period: 30s
```
### Alternative: Wireguard
```yaml
environment:
  - VPN_SERVICE_PROVIDER=surfshark
  - VPN_TYPE=wireguard
  - WIREGUARD_PRIVATE_KEY=${SURFSHARK_WG_PRIVATE_KEY}
  - WIREGUARD_ADDRESSES=10.14.0.2/16
  - SERVER_COUNTRIES=USA
```
## Routing Traffic Through Gluetun
### Method 1: Network Mode (Recommended)
```yaml
qbittorrent:
  image: linuxserver/qbittorrent
  container_name: qbittorrent
  network_mode: "service:gluetun"  # Use Gluetun's network
  depends_on:
    - gluetun
  volumes:
    - /opt/stacks/media/qbittorrent:/config
    - /mnt/downloads:/downloads
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=America/New_York
    - WEBUI_PORT=8080
  # NO ports section - use Gluetun's ports
  # NO networks section - uses Gluetun's network
```
**Important:** When using `network_mode: "service:gluetun"`:
- Don't define `ports:` on the service
- Don't define `networks:` on the service
- Add ports to **Gluetun's** ports section
- Access WebUI through Gluetun's IP
### Method 2: Custom Network (Advanced)
```yaml
services:
  gluetun:
    # ... gluetun config ...
    networks:
      vpn-network:
        ipv4_address: 172.20.0.2

  qbittorrent:
    # ... qbittorrent config ...
    networks:
      - vpn-network
    depends_on:
      - gluetun

networks:
  vpn-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
```

Note: sharing a network alone does not route the other container's traffic through the VPN - the container must also use Gluetun's IP (`172.20.0.2`) as its default gateway, which is why Method 1 is recommended.
### Exposing Services Through Traefik
When using `network_mode: "service:gluetun"`, Traefik labels go on **Gluetun**:
```yaml
gluetun:
  image: qmcgaw/gluetun
  # ... other config ...
  labels:
    # qBittorrent labels on Gluetun
    - "traefik.enable=true"
    - "traefik.http.routers.qbittorrent.rule=Host(`qbit.${DOMAIN}`)"
    - "traefik.http.routers.qbittorrent.entrypoints=websecure"
    - "traefik.http.routers.qbittorrent.tls.certresolver=letsencrypt"
    - "traefik.http.routers.qbittorrent.middlewares=authelia@docker"
    - "traefik.http.services.qbittorrent.loadbalancer.server.port=8080"

qbittorrent:
  image: linuxserver/qbittorrent
  network_mode: "service:gluetun"
  # NO labels here
```
## Advanced Topics
### Port Forwarding
Essential for torrent seeding. Supported providers:
- Private Internet Access (PIA)
- ProtonVPN
- Perfect Privacy
- AirVPN
**Configuration:**
```bash
PORT_FORWARD=on
PORT_FORWARD_ONLY=true # Only use servers with port forwarding
```
**Get forwarded port:**
```bash
# Check Gluetun logs
docker logs gluetun | grep "port forwarded"
# Or via control server
curl http://localhost:8000/v1/openvpn/portforwarded
```
**Use in qBittorrent:**
1. Get port from Gluetun logs: `Port forwarded is 12345`
2. In qBittorrent Settings → Connection → Listening Port → Set to `12345`
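Extracting the port from the log line can be scripted. A sketch using a hypothetical log line in the format the Gluetun logs use (in practice, pipe in `docker logs gluetun | grep "port forwarded"`):

```shell
# Hypothetical log line; in practice pipe in:
#   docker logs gluetun | grep "port forwarded"
log_line="2024-01-12T10:00:00Z INFO [port forwarding] port forwarded is 12345"

# The port is the trailing number of the message
port=$(printf '%s' "$log_line" | grep -oE '[0-9]+$')
echo "Set qBittorrent's listening port to: $port"
```

Since the forwarded port can change between reconnects, a cron job built on this extraction can keep qBittorrent's listening port in sync.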
### Multiple VPN Connections
Run multiple Gluetun instances for different regions:
```yaml
gluetun-usa:
  image: qmcgaw/gluetun
  container_name: gluetun-usa
  environment:
    - SERVER_COUNTRIES=USA
  # ... rest of config ...

gluetun-uk:
  image: qmcgaw/gluetun
  container_name: gluetun-uk
  environment:
    - SERVER_COUNTRIES=United Kingdom
  # ... rest of config ...
```
### Custom VPN Provider
For providers not natively supported:
```yaml
environment:
  - VPN_SERVICE_PROVIDER=custom
  - VPN_TYPE=openvpn
  - OPENVPN_CUSTOM_CONFIG=/gluetun/custom.conf
volumes:
  - ./custom.ovpn:/gluetun/custom.conf:ro
```
### DNS Configuration
```bash
# Use VPN provider DNS (default)
DOT=on # DNS over TLS
# Use custom DNS
DNS_ADDRESS=1.1.1.1 # Cloudflare
# Multiple DNS servers
DNS_ADDRESS=1.1.1.1,8.8.8.8
```
### Firewall Rules
```bash
# Allow local network access
FIREWALL_OUTBOUND_SUBNETS=192.168.1.0/24,172.16.0.0/12
# Block all except VPN
FIREWALL_VPN_INPUT_PORTS= # No incoming connections
# Allow specific outbound ports
FIREWALL_OUTBOUND_PORTS=80,443,53
```
## Troubleshooting
### Check VPN Connection
```bash
# View Gluetun logs
docker logs gluetun
# Check public IP (should show VPN IP)
docker exec gluetun wget -qO- https://api.ipify.org
# Check if VPN is connected
docker exec gluetun cat /tmp/gluetun/ip
```
### Service Can't Access Internet
```bash
# Check if service is using Gluetun's network
docker inspect service-name | grep NetworkMode
# Test from within service
docker exec service-name curl https://api.ipify.org
# Check firewall rules
docker logs gluetun | grep -i firewall
# Verify outbound subnets
# Ensure FIREWALL_OUTBOUND_SUBNETS includes your local network
```
### VPN Keeps Disconnecting
```bash
# Check provider status
# Visit your VPN provider's status page
# Try different server
SERVER_COUNTRIES=Canada # Change country
# Try Wireguard instead of OpenVPN
VPN_TYPE=wireguard
# Check system resources
docker stats gluetun
# View connection logs
docker logs gluetun | grep -i "connection\|disconnect"
```
### Port Forwarding Not Working
```bash
# Check if provider supports it
# Only certain providers support port forwarding
# Verify it's enabled
docker logs gluetun | grep -i "port forward"
# Get forwarded port
curl http://localhost:8000/v1/openvpn/portforwarded
# Check if server supports it
PORT_FORWARD_ONLY=true # Force port-forward-capable servers
```
### DNS Leaks
```bash
# Test DNS
docker exec gluetun nslookup google.com
# Check DNS configuration
docker exec gluetun cat /etc/resolv.conf
# Enable DNS over TLS
DOT=on
```
### Can't Access Service WebUI
```bash
# If using network_mode: "service:gluetun"
# Access via: http://GLUETUN_IP:PORT
# Check Gluetun's IP
docker inspect gluetun | grep IPAddress
# Verify ports are exposed on Gluetun
docker ps | grep gluetun
# Check if service is running
docker ps | grep service-name
```
### Kill Switch Testing
```bash
# Stop VPN (simulate disconnection)
docker exec gluetun killall openvpn
# Try accessing internet from connected service
docker exec qbittorrent curl https://api.ipify.org
# Should fail or timeout
# Restart VPN
docker restart gluetun
```
## Security Best Practices
1. **Use Strong Credentials:** Never share your VPN credentials
2. **Enable Kill Switch:** Always use Gluetun's built-in kill switch
3. **DNS over TLS:** Enable `DOT=on` to prevent DNS leaks
4. **Firewall Rules:** Restrict outbound traffic to necessary subnets only
5. **Regular Updates:** Keep Gluetun updated for security patches
6. **Provider Selection:** Use reputable VPN providers (no-logs policy)
7. **Monitor Logs:** Regularly check for connection issues
8. **Test IP Leaks:** Verify your IP is hidden: https://ipleak.net
9. **Port Security:** Only forward ports when necessary
10. **Split Tunneling:** Only route traffic that needs VPN through Gluetun
## Provider Comparisons
### Surfshark
- **Pros:** Unlimited devices, fast, affordable
- **Cons:** No port forwarding
- **Best for:** General privacy, torrenting (without seeding priority)
### Private Internet Access (PIA)
- **Pros:** Port forwarding, proven no-logs
- **Cons:** US-based
- **Best for:** Torrenting with seeding
### Mullvad
- **Pros:** Anonymous (no email required), port forwarding
- **Cons:** More expensive
- **Best for:** Maximum privacy
### ProtonVPN
- **Pros:** Port forwarding, excellent privacy
- **Cons:** Expensive for full features
- **Best for:** Privacy-focused users
### NordVPN
- **Pros:** Fast, large server network
- **Cons:** No port forwarding
- **Best for:** General use, streaming
## Summary
Gluetun is essential for:
- Protecting torrent traffic (qBittorrent, Transmission)
- Bypassing geo-restrictions
- Hiding your IP from specific services
- Maintaining privacy for indexers (Prowlarr, Jackett)
- Professional homelab security
By routing only specific containers through the VPN, you maintain:
- Fast local network access for other services
- Privacy where it matters
- Simple, maintainable configuration
- Automatic failover protection
Remember: Always verify your VPN is working correctly by checking your public IP from containers using Gluetun's network. The IP should match your VPN provider's IP, not your home IP.

# Grafana - Metrics Visualization
## Table of Contents
- [Overview](#overview)
- [What is Grafana?](#what-is-grafana)
- [Why Use Grafana?](#why-use-grafana)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Setup](#setup)
## Overview
**Category:** Monitoring Dashboards
**Docker Image:** [grafana/grafana](https://hub.docker.com/r/grafana/grafana)
**Default Stack:** `monitoring.yml`
**Web UI:** `http://SERVER_IP:3001`
**Default Login:** admin/admin
**Ports:** 3001
## What is Grafana?
Grafana is the leading open-source platform for monitoring and observability. It visualizes data from Prometheus, InfluxDB, Elasticsearch, and 80+ other data sources with beautiful, interactive dashboards, making it the standard for turning metrics into insights.
### Key Features
- **Beautiful Dashboards:** Stunning visualizations
- **80+ Data Sources:** Prometheus, InfluxDB, MySQL, etc.
- **Alerting:** Visual alert rules
- **Variables:** Dynamic dashboards
- **Annotations:** Mark events
- **Sharing:** Share dashboards/panels
- **Plugins:** Extend functionality
- **Templating:** Reusable dashboards
- **Community Dashboards:** 10,000+ ready-made
- **Free & Open Source:** No limits
## Why Use Grafana?
1. **Industry Standard:** Used everywhere
2. **Beautiful:** Best-in-class visualizations
3. **Flexible:** 80+ data sources
4. **Easy:** Pre-made dashboards
5. **Powerful:** Advanced queries
6. **Alerting:** Visual alert builder
7. **Community:** Huge dashboard library
8. **Free:** All features included
## Configuration in AI-Homelab
```
/opt/stacks/monitoring/grafana/data/
├── grafana.db     # Configuration database
├── dashboards/    # Dashboard JSON
└── plugins/       # Installed plugins
```
## Official Resources
- **Website:** https://grafana.com
- **Documentation:** https://grafana.com/docs/grafana/latest
- **Dashboards:** https://grafana.com/grafana/dashboards
- **Tutorials:** https://grafana.com/tutorials
## Educational Resources
### YouTube Videos
1. **Techno Tim - Prometheus & Grafana**
   - https://www.youtube.com/watch?v=9TJx7QTrTyo
   - Complete setup guide
   - Dashboard creation
   - Alerting configuration
2. **TechWorld with Nana - Grafana Tutorial**
   - https://www.youtube.com/watch?v=QDQmY1iFvSU
   - Dashboard building
   - Variables and templating
   - Best practices
3. **Christian Lempa - Grafana Basics**
   - https://www.youtube.com/watch?v=bXZeTpFGw94
   - Getting started
   - Data source configuration
   - Panel types
### Popular Dashboards
1. **Node Exporter Full:** https://grafana.com/grafana/dashboards/1860 (ID: 1860)
2. **Docker Container & Host Metrics:** https://grafana.com/grafana/dashboards/179 (ID: 179)
3. **cAdvisor:** https://grafana.com/grafana/dashboards/14282 (ID: 14282)
## Docker Configuration
```yaml
grafana:
  image: grafana/grafana:latest
  container_name: grafana
  restart: unless-stopped
  networks:
    - traefik-network
  ports:
    - "3001:3000"
  environment:
    - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
    - GF_INSTALL_PLUGINS=grafana-piechart-panel,grafana-worldmap-panel
    - GF_SERVER_ROOT_URL=https://grafana.${DOMAIN}
  volumes:
    - /opt/stacks/monitoring/grafana/data:/var/lib/grafana
  user: "472"  # grafana user
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.grafana.rule=Host(`grafana.${DOMAIN}`)"
```
## Setup
1. **Start Container:**
   ```bash
   docker compose up -d grafana
   ```
2. **Access UI:** `http://SERVER_IP:3001`
3. **First Login:**
   - Username: `admin`
   - Password: `admin`
   - Set new password
4. **Add Prometheus Data Source:**
   - Configuration (gear) → Data Sources → Add
   - Type: Prometheus
   - URL: `http://prometheus:9090`
   - Save & Test
5. **Import Dashboard:**
   - Dashboards (squares) → Import
   - Enter dashboard ID (e.g., 1860)
   - Select Prometheus data source
   - Import
6. **Popular Dashboard IDs:**
   - **1860:** Node Exporter Full
   - **179:** Docker Container Metrics
   - **14282:** cAdvisor
   - **11074:** Node Exporter for Prometheus
   - **893:** Docker Metrics
7. **Create Custom Dashboard:**
   - Dashboards → New Dashboard
   - Add Panel
   - Select visualization
   - Write PromQL query
   - Configure panel options
   - Save dashboard
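The data source step can also be automated with Grafana's provisioning system instead of clicking through the UI. A sketch, assuming you mount a provisioning directory to `/etc/grafana/provisioning` in the container (the host path below is an assumption matching this stack's layout):

```yaml
# /opt/stacks/monitoring/grafana/provisioning/datasources/prometheus.yml
# Mount the provisioning directory to /etc/grafana/provisioning in the container.
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
```

With this file in place, Grafana creates the Prometheus data source on startup, which keeps the setup reproducible if you ever recreate the container.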
## Summary
Grafana is your visualization platform offering:
- Beautiful metric dashboards
- 80+ data source types
- 10,000+ community dashboards
- Visual alert builder
- Dashboard templating
- Team collaboration
- Plugin ecosystem
- Free and open-source
**Perfect for:**
- Prometheus visualization
- Infrastructure monitoring
- Application metrics
- Business analytics
- IoT data
- Log analysis
- Performance dashboards
**Key Points:**
- Change default password!
- Import community dashboards
- Prometheus common data source
- Dashboard ID for quick import
- Variables make dashboards dynamic
- Alerting built-in
- Share dashboards easily
**Remember:**
- Default: admin/admin
- Change password immediately
- Add Prometheus as data source
- Use community dashboards (save time)
- Learn PromQL for custom queries
- Set refresh intervals
- Export dashboards as JSON backup
Grafana turns metrics into beautiful insights!

# Homarr - Another Application Dashboard
## Table of Contents
- [Overview](#overview)
- [What is Homarr?](#what-is-homarr)
- [Why Use Homarr?](#why-use-homarr)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Setup and Usage](#setup-and-usage)
- [Widgets and Apps](#widgets-and-apps)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Dashboard
**Docker Image:** [ghcr.io/ajnart/homarr](https://github.com/ajnart/homarr/pkgs/container/homarr)
**Default Stack:** `dashboards.yml`
**Web UI:** `https://homarr.${DOMAIN}` or `http://SERVER_IP:7575`
**Authentication:** Built-in user system
**Purpose:** Interactive application dashboard (alternative to Homepage)
## What is Homarr?
Homarr is a customizable application dashboard with a focus on ease of use and interactivity. Unlike Homepage's YAML configuration, Homarr offers a web-based drag-and-drop interface for building your dashboard.
### Key Features
- **Drag-and-Drop UI:** Visual dashboard builder
- **Built-in Auth:** User accounts and permissions
- **App Integrations:** 50+ service widgets
- **Custom Widgets:** RSS, Calendar, Weather, Docker stats
- **Responsive Design:** Mobile-friendly
- **Multi-Board Support:** Create multiple dashboards
- **Search Integration:** Built-in search aggregation
- **Docker Integration:** Container status monitoring
- **Customizable Themes:** Light/dark mode with colors
- **Widget Variety:** Information, media, monitoring widgets
- **No Configuration Files:** Everything managed via GUI
- **Modern UI:** Clean, intuitive interface
## Why Use Homarr?
### Homarr vs Homepage
**Use Homarr if you want:**
- ✅ GUI-based configuration (no YAML)
- ✅ Drag-and-drop dashboard building
- ✅ Built-in user authentication
- ✅ Interactive widget management
- ✅ Visual customization
- ✅ More widget variety
- ✅ Easier for non-technical users
**Use Homepage if you want:**
- ✅ YAML configuration (GitOps friendly)
- ✅ Lighter resource usage
- ✅ Faster performance
- ✅ More mature project
- ✅ Configuration as code
### Common Use Cases
1. **Family Dashboard:** Easy for non-technical users
2. **Media Center:** Integrated media widgets
3. **Multiple Dashboards:** Different boards for different purposes
4. **Interactive Monitoring:** Clickable, actionable widgets
5. **Visual Customization:** Design your perfect dashboard
## How It Works
```
User → Browser → Homarr Web UI
                      │
              Dashboard Editor
        (Drag-and-drop interface)
         ┌────────────┴────────────┐
         ↓                         ↓
     App Tiles                 Widgets
    (Services)        (Live data, RSS, etc.)
         ↓                         ↓
  Click to access          API integrations
      service           (Sonarr, Radarr, etc.)
```
### Architecture
**Data Storage:**
- SQLite database (stores dashboards, users, settings)
- Configuration directory (icons, backups)
- No YAML files required
**Components:**
1. **Web Interface:** Dashboard builder and viewer
2. **API Backend:** Service integrations
3. **Database:** User accounts, dashboards, settings
4. **Docker Integration:** Container monitoring
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/dashboards/homarr/
├── data/ # Database and configs
├── icons/ # Custom icons
└── backups/ # Dashboard backups
```
### Environment Variables
```bash
# Base URL (if behind reverse proxy)
BASE_URL=https://homarr.yourdomain.com
# Port
PORT=7575
# Timezone
TZ=America/New_York
# Optional: Disable analytics
DISABLE_ANALYTICS=true
```
## Official Resources
- **Website:** https://homarr.dev
- **GitHub:** https://github.com/ajnart/homarr
- **Documentation:** https://homarr.dev/docs/introduction
- **Discord:** https://discord.gg/aCsmEV5RgA
- **Demo:** https://demo.homarr.dev
## Educational Resources
### Videos
- [Homarr - Modern Dashboard for Your Homelab (Techno Tim)](https://www.youtube.com/watch?v=a2S5iHG5C0M)
- [Homarr Setup Tutorial (DB Tech)](https://www.youtube.com/watch?v=tdMAXd9sHY4)
- [Homepage vs Homarr - Which Dashboard?](https://www.youtube.com/results?search_query=homarr+vs+homepage)
### Articles & Guides
- [Homarr Official Documentation](https://homarr.dev/docs)
- [Widget Configuration Guide](https://homarr.dev/docs/widgets)
- [Integration Setup](https://homarr.dev/docs/integrations)
### Concepts to Learn
- **Dashboard Builders:** Visual interface design
- **Widget Systems:** Modular dashboard components
- **API Integration:** Service data fetching
- **User Authentication:** Multi-user support
- **Docker Integration:** Container monitoring
- **Responsive Design:** Mobile-friendly layouts
## Docker Configuration
### Complete Service Definition
```yaml
homarr:
  image: ghcr.io/ajnart/homarr:latest
  container_name: homarr
  restart: unless-stopped
  networks:
    - traefik-network
  ports:
    - "7575:7575"
  volumes:
    - /opt/stacks/dashboards/homarr/data:/app/data/configs
    - /opt/stacks/dashboards/homarr/icons:/app/public/icons
    - /opt/stacks/dashboards/homarr/data:/data
    - /var/run/docker.sock:/var/run/docker.sock:ro  # For Docker integration
  environment:
    - BASE_URL=https://homarr.${DOMAIN}
    - PORT=7575
    - TZ=America/New_York
    - DISABLE_ANALYTICS=true
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.homarr.rule=Host(`homarr.${DOMAIN}`)"
    - "traefik.http.routers.homarr.entrypoints=websecure"
    - "traefik.http.routers.homarr.tls.certresolver=letsencrypt"
    - "traefik.http.services.homarr.loadbalancer.server.port=7575"
```
## Setup and Usage
### Initial Setup
1. **Access Homarr:**
   - Navigate to: `https://homarr.yourdomain.com`
   - Or: `http://SERVER_IP:7575`
2. **Create Admin Account:**
   - Click "Get Started"
   - Enter username and password
   - Create first dashboard
3. **Dashboard Creation:**
   - Name your dashboard
   - Choose layout (grid, list, etc.)
   - Start adding apps and widgets
### Adding Applications
**Method 1: Manual Add**
1. Click "Edit Mode" (pencil icon)
2. Click "+" button
3. Select "App"
4. Fill in details:
- Name
- URL
- Icon (search or upload)
- Description
5. Save
**Method 2: Integration**
1. Add app as above
2. Click "Integration" tab
3. Select service type (Sonarr, Radarr, etc.)
4. Enter API URL and key
5. Save - widget will show live data
**Method 3: Docker Discovery**
1. Enable Docker integration
2. Homarr auto-discovers containers
3. Add discovered apps to dashboard
### Dashboard Editor
**Edit Mode:**
- Click pencil icon to enter edit mode
- Drag tiles to rearrange
- Resize tiles by dragging corners
- Delete tiles with X button
- Click tiles to edit settings
**Grid System:**
- Tiles snap to grid
- Customizable grid size
- Responsive layout
- Mobile-optimized views
**Categories:**
- Create sections (Media, Management, etc.)
- Collapse/expand categories
- Organize apps logically
### User Management
**Create Users:**
1. Settings → Users → Add User
2. Set username and password
3. Assign permissions
4. User can login and customize their dashboard
**Permissions:**
- **Admin:** Full access
- **User:** View and edit own dashboards
- **Guest:** Read-only access
## Widgets and Apps
### Application Widgets
**Supported Integrations:**
**Media:**
- Plex, Jellyfin, Emby
- Sonarr, Radarr, Lidarr, Readarr
- Prowlarr, Jackett
- qBittorrent, Transmission, Deluge
- Tautulli (Plex stats)
- Overseerr, Jellyseerr
**Infrastructure:**
- Portainer
- Traefik
- Pi-hole, AdGuard Home
- Uptime Kuma
- Proxmox
**Other:**
- Home Assistant
- Nextcloud
- Gitea
- Calibre-Web
- Many more...
### Information Widgets
**Weather Widget:**
```
Location: Auto-detect or manual
Provider: OpenWeatherMap, WeatherAPI
Units: Imperial/Metric
Forecast: 3-7 days
```
**Calendar Widget:**
```
Type: iCal URL
Source: Google Calendar, Nextcloud, etc.
Events: Upcoming events display
```
**RSS Widget:**
```
Feed URL: Any RSS/Atom feed
Items: Number to show
Refresh: Update interval
```
**Docker Widget:**
```
Shows: Running/Stopped containers
Stats: CPU, Memory, Network
Control: Start/Stop containers (if permissions)
```
**Clock Widget:**
```
Format: 12h/24h
Timezone: Custom or system
Date: Show/hide
Analog/Digital: Style choice
```
**Media Server Widget:**
```
Plex/Jellyfin: Currently playing
Recent: Recently added
Stats: Library counts
```
### Custom Widgets
**HTML Widget:**
- Embed custom HTML
- Use for iframes, custom content
- CSS styling support
**Iframe Widget:**
- Embed external websites
- Dashboard within dashboard
- Useful for Grafana, etc.
**Image Widget:**
- Display static images
- Backgrounds, logos
- Network/local images
## Advanced Topics
### Custom Icons
**Upload Custom Icons:**
1. Place icons in `/icons/` directory
2. Or upload via UI
3. Reference in app settings
**Icon Sources:**
- Built-in icon library (1000+)
- Upload your own (PNG, SVG, JPG)
- URL to external icon
- Dashboard Icons repository
### Multiple Dashboards
**Create Multiple Boards:**
1. Settings → Boards → New Board
2. Name and configure
3. Switch between boards
4. Different dashboards for different purposes:
- Family board
- Admin board
- Media board
- etc.
**Board Sharing:**
- Share board link
- Set access permissions
- Public vs private boards
### Themes and Customization
**Theme Options:**
- Light/Dark mode
- Accent colors
- Background images
- Custom CSS (advanced)
**Layout Options:**
- Grid size
- Tile spacing
- Column count
- Responsive breakpoints
### Docker Integration
**Enable Docker:**
1. Mount Docker socket
2. Settings → Docker → Enable
3. Select which containers to show
4. Auto-discovery or manual
**Docker Features:**
- Container status
- Start/Stop controls (if enabled)
- Resource usage
- Quick access to logs
### API Access
**REST API:**
- Endpoint: `https://homarr.domain.com/api`
- Authentication: API key
- Use for automation
- Integration with other tools
**API Uses:**
- Automated dashboard updates
- External monitoring
- Custom integrations
- Backup automation
### Backup and Restore
**Manual Backup:**
1. Settings → Backups
2. Create backup
3. Download JSON file
**Restore:**
1. Settings → Backups
2. Upload backup file
3. Select boards to restore
**File System Backup:**
```bash
# Backup entire data directory
tar -czf homarr-backup-$(date +%Y%m%d).tar.gz /opt/stacks/dashboards/homarr/data/
# Restore
tar -xzf homarr-backup-20240112.tar.gz -C /opt/stacks/dashboards/homarr/
docker restart homarr
```
### Import/Export
**Export Dashboard:**
- Settings → Export → Download JSON
- Share with others
- Version control
**Import Dashboard:**
- Settings → Import → Upload JSON
- Community dashboards
- Template boards
## Troubleshooting
### Homarr Not Loading
```bash
# Check container status
docker ps | grep homarr
# View logs
docker logs homarr
# Test port
curl http://localhost:7575
# Check Traefik routing
docker logs traefik | grep homarr
```
### Can't Login
```bash
# Reset admin password
docker exec -it homarr npm run db:reset-password
# Or recreate database (WARNING: loses data)
docker stop homarr
rm -rf /opt/stacks/dashboards/homarr/data/db.sqlite
docker start homarr
# Creates new database, need to setup again
```
### Widgets Not Showing Data
```bash
# Check API connectivity
docker exec homarr curl http://service:port
# Verify API key
# Re-enter in widget settings
# Check service logs
docker logs service-name
# Test API manually
curl -H "X-Api-Key: key" http://service:port/api/endpoint
```
### Icons Not Displaying
```bash
# Clear browser cache
# Ctrl+Shift+R (hard refresh)
# Check icon path
ls -la /opt/stacks/dashboards/homarr/icons/
# Ensure proper permissions
sudo chown -R 1000:1000 /opt/stacks/dashboards/homarr/
# Re-upload icon via UI
```
### Docker Integration Not Working
```bash
# Verify socket mount
docker inspect homarr | grep -A5 Mounts
# Check socket permissions
ls -la /var/run/docker.sock
# Fix permissions
sudo chmod 666 /var/run/docker.sock
# Or use Docker Socket Proxy (recommended)
```
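A socket proxy limits what Homarr can ask the Docker API instead of loosening socket permissions. A sketch using the community `tecnativa/docker-socket-proxy` image (the image name and variables are assumptions - check that project's documentation):

```yaml
docker-socket-proxy:
  image: tecnativa/docker-socket-proxy:latest
  container_name: docker-socket-proxy
  restart: unless-stopped
  environment:
    - CONTAINERS=1  # allow read-only container listing
    - POST=0        # deny state-changing requests
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
```

Homarr would then connect to `tcp://docker-socket-proxy:2375` over a shared Docker network rather than mounting the socket directly.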
### High Memory Usage
```bash
# Check container stats
docker stats homarr
# Typical usage: 100-300MB
# If higher, restart container
docker restart homarr
# Reduce widgets if needed
# Remove unused integrations
```
### Slow Performance
```bash
# Reduce widget refresh rates
# Edit widget → Change refresh interval
# Disable unused features
# Settings → Integrations → Disable unused
# Check network latency
# Ping services from Homarr container
# Restart container
docker restart homarr
```
## Performance Optimization
### Reduce API Calls
- Increase widget refresh intervals
- Disable widgets for services you don't monitor frequently
- Use lazy loading for images
### Optimize Docker
```yaml
homarr:
deploy:
resources:
limits:
cpus: '1.0'
memory: 512M
reservations:
memory: 128M
```
### Cache Settings
- Enable browser caching
- Use CDN for icons if available
- Optimize image sizes
## Comparison with Alternatives
### Homarr vs Homepage
**Homarr:**
- GUI configuration
- Built-in authentication
- Drag-and-drop
- More interactive
- Higher resource usage
**Homepage:**
- YAML configuration
- External auth required
- Faster performance
- GitOps friendly
- Lighter weight
### Homarr vs Heimdall
**Homarr:**
- More modern UI
- Better integrations
- Active development
- More features
**Heimdall:**
- Simpler
- Very lightweight
- Established project
- Basic functionality
### Homarr vs Organizr
**Homarr:**
- Newer, modern
- Better mobile support
- Easier setup
- More widgets
**Organizr:**
- More mature
- Tab-based interface
- Different philosophy
- Large community
## Tips and Tricks
### Efficient Layouts
**Media Dashboard:**
- Large tiles for Plex/Jellyfin
- Medium tiles for *arr apps
- Small tiles for indexers
- Weather widget in corner
**Admin Dashboard:**
- Grid of management tools
- Docker status widget
- System resource widget
- Recent logs/alerts
**Family Dashboard:**
- Large, simple icons
- Hide technical services
- Focus on media/content
- Bright, friendly theme
### Widget Combinations
**Media Setup:**
1. Plex/Jellyfin (large, with integration)
2. Sonarr/Radarr (medium, with stats)
3. qBittorrent (small, with speed)
4. Recently added (widget)
**Monitoring Setup:**
1. Docker widget (container status)
2. Pi-hole widget (blocking stats)
3. Uptime Kuma widget (service status)
4. Weather widget (why not?)
### Custom Styling
Use custom CSS for unique looks:
- Rounded corners
- Custom colors
- Transparency effects
- Animations
## Summary
Homarr is a user-friendly, interactive dashboard that offers:
- Visual dashboard builder
- Built-in authentication
- Rich widget ecosystem
- Drag-and-drop interface
- Modern, responsive design
- No configuration files needed
**Perfect for:**
- Users who prefer GUI over YAML
- Multi-user environments
- Interactive dashboards
- Visual customization fans
- Quick setup without learning curve
**Trade-offs:**
- Higher resource usage than Homepage
- Database dependency
- Less GitOps friendly
- Newer project (less mature)
**Best Practices:**
- Use both Homepage and Homarr (they complement each other)
- Homepage for you, Homarr for family
- Regular backups of database
- Optimize widget refresh rates
- Use Docker Socket Proxy for security
- Keep updated for new features
**Remember:**
- Homarr is all about visual customization
- No YAML - everything in UI
- Great for non-technical users
- Built-in auth is convenient
- Can coexist with Homepage
- Choose based on your preference and use case

# Home Assistant - Home Automation Platform
## Table of Contents
- [Overview](#overview)
- [What is Home Assistant?](#what-is-home-assistant)
- [Why Use Home Assistant?](#why-use-home-assistant)
- [Key Concepts](#key-concepts)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Initial Setup](#initial-setup)
- [Integrations](#integrations)
- [Automations](#automations)
- [Add-ons vs Docker](#add-ons-vs-docker)
- [Troubleshooting](#troubleshooting)
- [Advanced Topics](#advanced-topics)
## Overview
**Category:** Home Automation
**Docker Image:** [homeassistant/home-assistant](https://hub.docker.com/r/homeassistant/home-assistant)
**Default Stack:** `homeassistant.yml`
**Web UI:** `https://homeassistant.${DOMAIN}` or `http://SERVER_IP:8123`
**Ports:** 8123
**Network Mode:** host (for device discovery)
## What is Home Assistant?
Home Assistant is a free, open-source home automation platform that focuses on local control and privacy. It integrates thousands of different devices and services, allowing you to automate and control your entire smart home from a single interface. Unlike cloud-based solutions, Home Assistant runs entirely locally on your network.
### Key Features
- **2000+ Integrations:** Supports virtually every smart device
- **Local Control:** Works without internet
- **Privacy Focused:** Your data stays home
- **Powerful Automations:** Visual and YAML-based
- **Voice Control:** Alexa, Google, Siri compatibility
- **Energy Monitoring:** Track usage and solar
- **Mobile Apps:** iOS and Android
- **Dashboards:** Customizable UI
- **Community:** Huge active community
- **Free & Open Source:** No subscriptions
## Why Use Home Assistant?
1. **Universal Integration:** Control everything from one place
2. **Local Control:** Works without internet
3. **Privacy:** Data never leaves your network
4. **No Cloud Required:** Unlike SmartThings or Alexa routines
5. **Powerful Automations:** Complex logic possible
6. **Active Development:** Monthly feature releases
7. **Community:** Massive community support
8. **Cost:** Free forever, no subscriptions
9. **Customizable:** Unlimited flexibility
10. **Future-Proof:** Open-source ensures longevity
## Key Concepts
### Entities
The basic building blocks of Home Assistant:
- **Sensors:** Temperature, humidity, power usage
- **Switches:** On/off devices
- **Lights:** Brightness, color control
- **Binary Sensors:** Motion, door/window sensors
- **Climate:** Thermostats
- **Cameras:** Video feeds
- **Media Players:** Speakers, TVs
### Integrations
Connections to devices and services:
- **Zigbee2MQTT:** Zigbee devices
- **ESPHome:** Custom ESP devices
- **MQTT:** Message broker protocol
- **HACS:** Community store
- **Tasmota:** Flashed smart plugs
- **UniFi:** Network devices
- **Plex/Jellyfin:** Media servers
### Automations
Triggered actions:
- **Trigger:** What starts the automation
- **Condition:** Requirements to continue
- **Action:** What happens
Example: Motion detected → If after sunset → Turn on lights
### Scripts
Reusable action sequences:
- Manual execution
- Called from automations
- Parameterized
### Scenes
Saved states of devices:
- "Movie Time" → Dims lights, closes blinds
- "Good Night" → Turns off everything
- One-click activation
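As a concrete illustration, the "Movie Time" scene above might look like this in `scenes.yaml`; the entity IDs are assumptions and must match your own devices:

```yaml
# Hypothetical scene: replace entity IDs with your own
- name: Movie Time
  entities:
    light.living_room:
      state: on
      brightness_pct: 20
    cover.living_room_blinds: closed
    media_player.living_room_tv: on
```

Activate it with one click from a dashboard, or call `scene.turn_on` from an automation.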
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/homeassistant/home-assistant/config/
configuration.yaml # Main config
automations.yaml # Automations
scripts.yaml # Scripts
secrets.yaml # Sensitive data
custom_components/ # HACS and custom integrations
www/ # Custom resources
blueprints/ # Automation blueprints
```
### Environment Variables
```bash
TZ=America/New_York
```
### Network Mode: host
Home Assistant uses `network_mode: host` instead of bridge networking. This is required for:
- **Device Discovery:** mDNS, UPnP, SSDP
- **Casting:** Chromecast, Google Home
- **HomeKit:** Apple HomeKit bridge
- **DLNA:** Media device discovery
**Trade-off:** Can't use Traefik routing easily. Typically accessed via IP:8123 or DNS record pointing to server IP.
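If you still want `https://homeassistant.${DOMAIN}`, Traefik can route to the host IP through its file provider. A hedged sketch (the file path, LAN IP, and resolver name are assumptions; Home Assistant itself must also trust the proxy via `http.use_x_forwarded_for` and `trusted_proxies` in `configuration.yaml`):

```yaml
# Hypothetical dynamic config, e.g. traefik/dynamic/homeassistant.yml
http:
  routers:
    homeassistant:
      rule: "Host(`homeassistant.example.com`)"
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: homeassistant
  services:
    homeassistant:
      loadBalancer:
        servers:
          - url: "http://192.168.1.10:8123"  # your server's LAN IP
```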
## Official Resources
- **Website:** https://www.home-assistant.io
- **Documentation:** https://www.home-assistant.io/docs
- **Community:** https://community.home-assistant.io
- **GitHub:** https://github.com/home-assistant/core
- **YouTube:** https://www.youtube.com/@homeassistant
## Educational Resources
### YouTube Channels
1. **Everything Smart Home** - https://www.youtube.com/@EverythingSmartHome
- Best Home Assistant tutorials
- Device reviews and integrations
- Automation ideas
2. **Smart Home Junkie** - https://www.youtube.com/@SmartHomeJunkie
- In-depth Home Assistant guides
- Zigbee, Z-Wave, ESPHome
- Advanced automations
3. **Intermit.Tech** - https://www.youtube.com/@intermittechnology
- Technical deep dives
- Docker Home Assistant setup
- Integration tutorials
4. **BeardedTinker** - https://www.youtube.com/@BeardedTinker
- German/English tutorials
- Creative automation ideas
- Device comparisons
### Articles & Guides
1. **Official Getting Started:** https://www.home-assistant.io/getting-started
2. **Home Assistant Course:** https://www.home-assistant.io/course
3. **Community Guides:** https://community.home-assistant.io/c/guides/37
### Books
1. **"Home Assistant Cookbook"** by Marco Bruni
2. **"Practical Home Assistant"** by Alan Tse
## Docker Configuration
```yaml
home-assistant:
image: homeassistant/home-assistant:latest
container_name: home-assistant
restart: unless-stopped
network_mode: host
privileged: true # For USB devices (Zigbee/Z-Wave sticks)
environment:
- TZ=America/New_York
volumes:
- /opt/stacks/homeassistant/home-assistant/config:/config
- /etc/localtime:/etc/localtime:ro
- /run/dbus:/run/dbus:ro # For Bluetooth
devices:
- /dev/ttyUSB0:/dev/ttyUSB0 # Zigbee coordinator (if present)
labels:
- "com.centurylinklabs.watchtower.enable=false" # Manual updates recommended
```
**Note:** `network_mode: host` means no Traefik routing. Access via server IP.
## Initial Setup
1. **Start Container:**
```bash
docker compose up -d home-assistant
```
2. **Access UI:** `http://SERVER_IP:8123`
3. **Create Account:**
- Name, username, password
- This is your admin account
- Secure password required
4. **Set Location:**
- Used for weather, sun position
- Important for automations
5. **Scan for Devices:**
- Home Assistant auto-discovers many devices
- Check discovered integrations
6. **Install HACS (Highly Recommended):**
HACS provides thousands of community integrations and themes.
```bash
# Access container
docker exec -it home-assistant bash
# Download HACS
wget -O - https://get.hacs.xyz | bash -
# Restart Home Assistant
exit
docker restart home-assistant
```
Then in UI:
- Settings → Devices & Services → Add Integration
- Search "HACS"
- Authorize with GitHub account
- HACS now available in sidebar
## Integrations
### Essential Integrations
**Zigbee2MQTT:**
- Connect Zigbee devices
- Requires Zigbee coordinator USB stick
- See zigbee2mqtt.md documentation
**ESPHome:**
- Custom ESP8266/ESP32 devices
- Flashed smart plugs, sensors
- See esphome.md documentation
**MQTT:**
- Message broker for IoT devices
- Connects Zigbee2MQTT, Tasmota
- See mosquitto.md documentation
**Mobile App:**
- iOS/Android apps
- Location tracking
- Notifications
- Remote access
**Media Integrations:**
- Plex/Jellyfin: Media controls
- Spotify: Music control
- Sonos: Speaker control
**Network Integrations:**
- UniFi: Device tracking
- Pi-hole: Stats and control
- Wake on LAN: Turn on computers
### Adding Integrations
**UI Method:**
1. Settings → Devices & Services
2. "+ Add Integration"
3. Search for integration
4. Follow setup wizard
**YAML Method (configuration.yaml):**
```yaml
# Example: MQTT
mqtt:
broker: mosquitto
port: 1883
username: !secret mqtt_user
password: !secret mqtt_pass
```
## Automations
### Visual Editor
1. **Settings → Automations & Scenes → Create Automation**
2. **Choose Trigger:**
- Time
- Device state change
- Numeric state (temperature > 75°F)
- Event
- Webhook
3. **Add Conditions (Optional):**
- Time of day
- Day of week
- Device states
- Numeric comparisons
4. **Choose Actions:**
- Turn on/off devices
- Send notifications
- Call services
- Delays
- Repeat actions
### YAML Automations
**Example: Motion-Activated Lights**
```yaml
automation:
- alias: "Hallway Motion Lights"
description: "Turn on hallway lights when motion detected after sunset"
trigger:
- platform: state
entity_id: binary_sensor.hallway_motion
to: "on"
condition:
- condition: sun
after: sunset
action:
- service: light.turn_on
target:
entity_id: light.hallway
data:
brightness_pct: 75
- delay:
minutes: 5
- service: light.turn_off
target:
entity_id: light.hallway
```
### Automation Ideas
**Security:**
- Notify if door opens when away
- Flash lights if motion detected at night
- Send camera snapshot on doorbell press
**Comfort:**
- Adjust thermostat based on presence
- Close blinds when sunny
- Turn on fan if temperature > X
**Energy:**
- Turn off devices at bedtime
- Disable charging when battery full
- Monitor and alert high usage
**Media:**
- Dim lights when movie starts
- Pause media on doorbell
- Resume after phone call
## Add-ons vs Docker
**Home Assistant OS** (not used in AI-Homelab) includes an "Add-ons" system. Since AI-Homelab uses Docker directly, we deploy services as separate containers instead:
| Add-on | AI-Homelab Docker Service |
|--------|---------------------------|
| Mosquitto Broker | mosquitto container |
| Zigbee2MQTT | zigbee2mqtt container |
| ESPHome | esphome container |
| Node-RED | node-red container |
| File Editor | code-server container |
**Advantages of Docker Approach:**
- More control
- Easier backups
- Standard Docker tools
- Better resource management
**Disadvantage:**
- Manual integration setup (vs automatic with add-ons)
## Troubleshooting
### Container Won't Start
```bash
# Check logs
docker logs home-assistant
# Common issue: Port 8123 in use
sudo netstat -tulpn | grep 8123
# Check config syntax
docker exec home-assistant python -m homeassistant --script check_config
```
### Configuration Errors
```bash
# Validate configuration
# Settings → System → Check Configuration
# Or via command line:
docker exec home-assistant python -m homeassistant --script check_config
# View specific file
docker exec home-assistant cat /config/configuration.yaml
```
### Integration Not Working
```bash
# Check logs for integration
# Settings → System → Logs
# Filter by integration name
# Reload integration
# Settings → Devices & Services → Integration → Reload
# Remove and re-add if persistent
# Settings → Devices & Services → Integration → Delete
# Then add again
```
### USB Device Not Found
```bash
# List USB devices
ls -la /dev/ttyUSB*
ls -la /dev/ttyACM*
# Check device is passed to container
docker exec home-assistant ls -la /dev/
# Verify permissions
ls -la /dev/ttyUSB0
# Add your user to the dialout group (on the host)
sudo usermod -aG dialout $USER
# Log out and back in, then restart the container
# Also ensure the device is mapped in docker-compose:
#   devices:
#     - /dev/ttyUSB0:/dev/ttyUSB0
```
### Slow Performance
```bash
# Check recorder size
docker exec -it home-assistant bash
ls -lh /config/home-assistant_v2.db
# If large (>1GB), purge old data
# In UI: Settings → System → Repair → Database
# Or configure in configuration.yaml:
recorder:
purge_keep_days: 7
commit_interval: 5
```
## Advanced Topics
### Backup Strategy
**Manual Backup:**
```bash
# Stop container
docker stop home-assistant
# Backup config directory
tar -czf ha-backup-$(date +%Y%m%d).tar.gz \
/opt/stacks/homeassistant/home-assistant/config/
# Start container
docker start home-assistant
```
**Automated Backup:**
Use Home Assistant's built-in backup (Settings → System → Backups), or setup scheduled backups with the Backrest service in utilities stack.
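A simple cron-driven alternative can be sketched as a shell function; the paths, script location, and retention count below are assumptions to adjust:

```shell
#!/usr/bin/env bash
# Hypothetical helper: archive a config directory and keep the newest N backups.
# Usage: backup_ha <config-dir> <backup-dir> [keep]
backup_ha() {
  local src="$1" dest="$2" keep="${3:-7}"
  mkdir -p "$dest"
  # Archive the config directory with a timestamped name
  tar -czf "$dest/ha-backup-$(date +%Y%m%d-%H%M%S).tar.gz" \
      -C "$(dirname "$src")" "$(basename "$src")"
  # Remove archives beyond the retention count (newest listed first)
  ls -1t "$dest"/ha-backup-*.tar.gz | tail -n +"$((keep + 1))" | xargs -r rm --
}

# Example cron entry (nightly at 03:00), assuming the function lives in a script:
# 0 3 * * * /usr/local/bin/ha-backup.sh
```

Note that tarring a live SQLite database can produce an inconsistent copy; stopping the container first (as in the manual method) or using Home Assistant's built-in backup is safer.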
### Secrets Management
Keep passwords out of configuration:
**secrets.yaml:**
```yaml
mqtt_user: homeassistant
mqtt_pass: your_secure_password
api_key: abc123xyz789
```
**configuration.yaml:**
```yaml
mqtt:
username: !secret mqtt_user
password: !secret mqtt_pass
weather:
- platform: openweathermap
api_key: !secret api_key
```
### Custom Components (HACS)
Thousands of community integrations:
**Popular HACS Integrations:**
- **Browser Mod:** Control browser tabs
- **Frigate:** NVR integration
- **Adaptive Lighting:** Circadian-based lighting
- **Alexa Media Player:** Advanced Alexa control
- **Waste Collection Schedule:** Trash reminders
- **Grocy:** Grocery management
**Install from HACS:**
1. HACS → Integrations
2. Search for integration
3. Download
4. Restart Home Assistant
5. Add integration via UI
### Templating
Jinja2 templates for dynamic values:
```yaml
# Get temperature difference
{{ states('sensor.outside_temp') | float - states('sensor.inside_temp') | float }}
# Conditional message
{% if is_state('person.john', 'home') %}
John is home
{% else %}
John is away
{% endif %}
# Count open windows
{{ states.binary_sensor | selectattr('entity_id', 'search', 'window')
| selectattr('state', 'eq', 'on') | list | count }}
```
### Voice Control
**Alexa:**
- Settings → Integrations → Alexa
- Expose entities to Alexa
- "Alexa, turn on living room lights"
**Google Assistant:**
- Easiest via Home Assistant Cloud (Nabu Casa, $6.50/month)
- Manual setup possible with your own Google Cloud project (more involved)
**Local Voice:**
- New feature (2023+)
- Wake word detection
- Runs fully local
- Requires USB microphone
### Node-RED Integration
Visual automation builder:
- More flexible than HA automations
- Drag-and-drop flow-based
- See node-red.md documentation
**Connect to HA:**
- Install node-red-contrib-home-assistant-websocket
- Configure Home Assistant server
- Long-lived access token
## Summary
Home Assistant is the ultimate home automation platform offering:
- 2000+ device integrations
- Local control and privacy
- Powerful automations
- Voice control
- Energy monitoring
- Mobile apps
- Active community
- Free and open-source
**Perfect for:**
- Smart home enthusiasts
- Privacy-conscious users
- DIY home automation
- Multi-brand device integration
- Complex automation needs
- Energy monitoring
**Key Points:**
- Runs entirely locally
- Works without internet
- Massive device support
- Monthly release cycle
- HACS for community add-ons
- Mobile apps available
- No subscriptions required
**Remember:**
- Install HACS for extra integrations
- Use secrets.yaml for passwords
- Regular backups important
- Community forum is helpful
- Monthly feature releases
- Read changelogs before updating
Home Assistant gives you complete control of your smart home!

# Homepage - Application Dashboard
## Table of Contents
- [Overview](#overview)
- [What is Homepage?](#what-is-homepage)
- [Why Use Homepage?](#why-use-homepage)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Configuration Guide](#configuration-guide)
- [Widgets and Integrations](#widgets-and-integrations)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Dashboard
**Docker Image:** [ghcr.io/gethomepage/homepage](https://github.com/gethomepage/homepage/pkgs/container/homepage)
**Default Stack:** `dashboards.yml`
**Web UI:** `https://homepage.${DOMAIN}` or `http://SERVER_IP:3000`
**Authentication:** Optional (use Authelia for protection)
**Purpose:** Unified dashboard for all homelab services
## What is Homepage?
Homepage is a modern, highly customizable application dashboard designed for homelabs. It provides a clean interface to access all your services with real-time status monitoring, API integrations, and customizable widgets.
### Key Features
- **Service Cards:** Organize services with icons, descriptions, and links
- **Live Status:** Health checks and uptime monitoring
- **API Integrations:** Real-time data from 100+ services
- **Widgets:** Weather, Docker stats, system resources, bookmarks
- **Customizable:** YAML-based configuration
- **Fast & Lightweight:** Minimal resource usage
- **Modern UI:** Clean, responsive design
- **Dark/Light Mode:** Theme options
- **Search:** Quick service filtering
- **Docker Integration:** Auto-discover containers
- **Bookmarks:** Quick links and resources
- **Multi-Language:** i18n support
## Why Use Homepage?
1. **Central Hub:** Single page for all homelab services
2. **Visual Overview:** See everything at a glance
3. **Status Monitoring:** Know which services are up/down
4. **Quick Access:** Fast navigation to services
5. **Beautiful Design:** Modern, polished interface
6. **Easy Configuration:** Simple YAML files
7. **Active Development:** Regular updates and improvements
8. **API Integration:** Real-time service stats
9. **Customizable:** Tailor to your needs
10. **Free & Open Source:** Community-driven
## How It Works
```
User → Browser → Homepage Dashboard
                     │
          ┌──────────┴──────────┐
          ↓                     ↓
    Service Cards            Widgets
    (with icons)           (live data)
          ↓                     ↓
   Click to access      API integrations
     service URL      (Plex, Sonarr, etc.)
```
### Architecture
**Configuration Structure:**
```
/config/
├── settings.yaml # Global settings
├── services.yaml # Service definitions
├── widgets.yaml # Dashboard widgets
├── bookmarks.yaml # Quick links
├── docker.yaml # Docker integration
└── custom.css # Custom styling (optional)
```
**Data Flow:**
1. **Homepage loads** configuration files
2. **Renders dashboard** with service cards
3. **Makes API calls** to configured services
4. **Displays live data** in widgets
5. **Health checks** verify service status
6. **Updates in real-time** (configurable interval)
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/dashboards/homepage/config/
├── settings.yaml
├── services.yaml
├── widgets.yaml
├── bookmarks.yaml
└── docker.yaml
```
### Environment Variables
```bash
# Optional: Custom port
PORT=3000
# Optional: PUID/PGID for file permissions
PUID=1000
PGID=1000
```
## Official Resources
- **Website:** https://gethomepage.dev
- **GitHub:** https://github.com/gethomepage/homepage
- **Documentation:** https://gethomepage.dev/en/installation/
- **Widgets Guide:** https://gethomepage.dev/en/widgets/
- **Service Integrations:** https://gethomepage.dev/en/widgets/services/
- **Discord:** https://discord.gg/k4ruYNrudu
## Educational Resources
### Videos
- [Homepage - The BEST Homelab Dashboard (Techno Tim)](https://www.youtube.com/watch?v=_MxpGN8eS4U)
- [Homepage Setup Guide (DB Tech)](https://www.youtube.com/watch?v=N9dQKJMrjZM)
- [Homepage vs Homarr vs Heimdall](https://www.youtube.com/results?search_query=homepage+vs+homarr)
- [Customizing Homepage Dashboard](https://www.youtube.com/results?search_query=homepage+dashboard+customization)
### Articles & Guides
- [Homepage Official Documentation](https://gethomepage.dev)
- [Service Widget Configuration](https://gethomepage.dev/en/widgets/services/)
- [Docker Integration Guide](https://gethomepage.dev/en/configs/docker/)
- [Customization Tips](https://gethomepage.dev/en/configs/custom-css-js/)
### Concepts to Learn
- **YAML Configuration:** Structured data format
- **API Integration:** RESTful service communication
- **Health Checks:** Service availability monitoring
- **Widgets:** Modular dashboard components
- **Docker Labels:** Metadata for auto-discovery
- **Reverse Proxy:** Accessing services through dashboard
## Docker Configuration
### Complete Service Definition
```yaml
homepage:
image: ghcr.io/gethomepage/homepage:latest
container_name: homepage
restart: unless-stopped
networks:
- traefik-network
ports:
- "3000:3000"
volumes:
- /opt/stacks/dashboards/homepage/config:/app/config
- /var/run/docker.sock:/var/run/docker.sock:ro # For Docker integration
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
labels:
- "traefik.enable=true"
- "traefik.http.routers.homepage.rule=Host(`${DOMAIN}`) || Host(`homepage.${DOMAIN}`)"
- "traefik.http.routers.homepage.entrypoints=websecure"
- "traefik.http.routers.homepage.tls.certresolver=letsencrypt"
- "traefik.http.services.homepage.loadbalancer.server.port=3000"
```
## Configuration Guide
### settings.yaml
```yaml
---
# Global settings
title: My Homelab
background: /images/background.jpg # Optional
cardBlur: md # sm, md, lg
theme: dark # dark, light, auto
color: slate # slate, gray, zinc, neutral, stone, red, etc.
# Layout
layout:
Media:
style: row
columns: 4
Infrastructure:
style: row
columns: 3
# Quick search
quicklaunch:
searchDescriptions: true
hideInternetSearch: true
# Header widgets (shown at top)
headerStyle: clean # boxed, clean
```
### services.yaml
```yaml
---
# Service groups and cards
- Media:
- Plex:
icon: plex.png
href: https://plex.yourdomain.com
description: Media Server
widget:
type: plex
url: http://plex:32400
key: your-plex-token
- Sonarr:
icon: sonarr.png
href: https://sonarr.yourdomain.com
description: TV Show Management
widget:
type: sonarr
url: http://sonarr:8989
key: your-sonarr-api-key
- Radarr:
icon: radarr.png
href: https://radarr.yourdomain.com
description: Movie Management
widget:
type: radarr
url: http://radarr:7878
key: your-radarr-api-key
- qBittorrent:
icon: qbittorrent.png
href: https://qbit.yourdomain.com
description: Torrent Client
widget:
type: qbittorrent
url: http://gluetun:8080
username: admin
password: adminpass
- Infrastructure:
- Dockge:
icon: dockge.png
href: https://dockge.yourdomain.com
description: Stack Manager
- Portainer:
icon: portainer.png
href: https://portainer.yourdomain.com
description: Docker Management
- Traefik:
icon: traefik.png
href: https://traefik.yourdomain.com
description: Reverse Proxy
widget:
type: traefik
url: http://traefik:8080
- Monitoring:
- Glances:
icon: glances.png
href: https://glances.yourdomain.com
description: System Monitor
widget:
type: glances
url: http://glances:61208
- Uptime Kuma:
icon: uptime-kuma.png
href: https://uptime.yourdomain.com
description: Uptime Monitor
```
### widgets.yaml
```yaml
---
# Dashboard widgets (shown above services)
- logo:
icon: /icons/logo.png # Optional custom logo
- search:
provider: google
target: _blank
- datetime:
text_size: xl
format:
timeStyle: short
dateStyle: short
- resources:
cpu: true
memory: true
disk: /
cputemp: true
uptime: true
label: Server
- openmeteo:
label: Home
latitude: 40.7128
longitude: -74.0060
units: imperial
cache: 5
- greeting:
text_size: xl
text: Welcome to my Homelab!
```
### bookmarks.yaml
```yaml
---
# Quick links and bookmarks
- Developer:
- GitHub:
- icon: github.png
href: https://github.com
- GitLab:
- icon: gitlab.png
href: https://gitlab.com
- Documentation:
- Homepage Docs:
- icon: homepage.png
href: https://gethomepage.dev
- Docker Docs:
- icon: docker.png
href: https://docs.docker.com
- Social:
- Reddit:
- icon: reddit.png
href: https://reddit.com/r/homelab
- Discord:
- icon: discord.png
href: https://discord.gg/homelab
```
### docker.yaml
```yaml
---
# Auto-discover Docker containers
my-docker:
host: docker-proxy # Use socket proxy for security
port: 2375
# Or direct socket (less secure):
# socket: /var/run/docker.sock
```
Then add to services.yaml:
```yaml
- Docker:
- My Container:
icon: docker.png
description: Auto-discovered container
server: my-docker
container: container-name
```
## Widgets and Integrations
### Service Widgets
**Popular Integrations:**
**Plex:**
```yaml
widget:
type: plex
url: http://plex:32400
key: your-plex-token # Get from Plex Web → Settings → Network → Show Advanced
```
**Sonarr/Radarr:**
```yaml
widget:
type: sonarr
url: http://sonarr:8989
key: your-api-key # Settings → General → API Key
```
**qBittorrent:**
```yaml
widget:
type: qbittorrent
url: http://gluetun:8080
username: admin
password: adminpass
```
**Pi-hole:**
```yaml
widget:
type: pihole
url: http://pihole:80
key: your-api-key # Settings → API → Show API token
```
**Traefik:**
```yaml
widget:
type: traefik
url: http://traefik:8080
```
**AdGuard Home:**
```yaml
widget:
type: adguard
url: http://adguard:80
username: admin
password: adminpass
```
### Information Widgets
**Weather:**
```yaml
- openmeteo:
label: Home
latitude: 40.7128
longitude: -74.0060
units: imperial # or metric
cache: 5
```
**System Resources:**
```yaml
- resources:
cpu: true
memory: true
disk: /
cputemp: true
uptime: true
label: Server
```
**Docker Stats:**
```yaml
- docker:
server: my-docker
show: running # or all
```
**Glances:**
```yaml
- glances:
url: http://glances:61208
cpu: true
mem: true
process: true
```
### Custom Widgets
```yaml
- customapi:
url: http://myapi.local/endpoint
refreshInterval: 60000 # 60 seconds
display: text # or list, block
mappings:
- field: data.value
label: My Value
format: text
```
## Advanced Topics
### Custom CSS
Create `/config/custom.css`:
```css
/* Custom background */
body {
background-image: url('/images/custom-bg.jpg');
background-size: cover;
}
/* Larger service cards */
.service-card {
min-height: 150px;
}
/* Custom colors */
:root {
--color-primary: #1e40af;
--color-secondary: #7c3aed;
}
/* Hide specific elements */
.some-class {
display: none;
}
```
### Custom JavaScript
Create `/config/custom.js`:
```javascript
// Custom functionality
console.log('Homepage loaded!');
// Example: Auto-refresh every 5 minutes
setTimeout(() => {
window.location.reload();
}, 300000);
```
### Multi-Column Layouts
```yaml
layout:
Media:
style: row
columns: 4
Infrastructure:
style: column
columns: 2
Monitoring:
style: row
columns: 3
```
### Custom Icons
Place icons in `/config/icons/`:
```
/config/icons/
├── custom-icon1.png
├── custom-icon2.svg
└── custom-icon3.jpg
```
Reference in services:
```yaml
- My Service:
icon: /icons/custom-icon1.png
```
### Docker Auto-Discovery
Automatically add containers with labels:
```yaml
# In container compose file
my-service:
labels:
- "homepage.group=Media"
- "homepage.name=My Service"
- "homepage.icon=service-icon.png"
- "homepage.href=https://service.domain.com"
- "homepage.description=My awesome service"
```
### Ping Health Checks
```yaml
- My Service:
icon: service.png
href: https://service.domain.com
ping: https://service.domain.com
# Shows green/red status indicator
```
### Custom API Widgets
```yaml
- customapi:
url: http://api.local/stats
refreshInterval: 30000
display: block
mappings:
- field: users
label: Active Users
format: number
- field: status
label: Status
format: text
```
## Troubleshooting
### Homepage Not Loading
```bash
# Check if container is running
docker ps | grep homepage
# View logs
docker logs homepage
# Check port
curl http://localhost:3000
# Verify Traefik routing
docker logs traefik | grep homepage
```
### Service Widgets Not Showing Data
```bash
# Check API connectivity
docker exec homepage curl http://service:port
# Verify API key is correct
# Check service logs for auth errors
# Test API manually
curl -H "X-Api-Key: your-key" http://service:port/api/endpoint
# Check Homepage logs for errors
docker logs homepage | grep -i error
```
### Configuration Changes Not Applied
```bash
# Homepage auto-reloads config files
# Wait 10-20 seconds
# Or restart container
docker restart homepage
# Check YAML syntax
# Use online YAML validator
# View Homepage logs
docker logs homepage | grep -i config
```
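For the syntax check, a local validator avoids pasting private config into an online tool. A hedged sketch, assuming `python3` with PyYAML is available on the host (the helper name and example path are assumptions):

```shell
# Hypothetical helper: fail fast on YAML syntax errors before Homepage reloads
check_yaml() {
  python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1])); print("OK:", sys.argv[1])' "$1"
}

# Example:
# check_yaml /opt/stacks/dashboards/homepage/config/services.yaml
```

A non-zero exit code (with Python's traceback pointing at the offending line) means the file would fail to load.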
### Docker Socket Permission Error
```bash
# Fix socket permissions
sudo chmod 666 /var/run/docker.sock
# Or use Docker Socket Proxy (recommended)
# See docker-proxy.md
# Check mount
docker inspect homepage | grep -A5 Mounts
```
### Icons Not Displaying
```bash
# Homepage uses icon CDN by default
# Check internet connectivity
# Use local icons instead
# Place in /config/icons/
# Clear browser cache
# Hard refresh: Ctrl+Shift+R
# Check icon name matches service
# List available icons: https://github.com/walkxcode/dashboard-icons
```
### High CPU/Memory Usage
```bash
# Check container stats
docker stats homepage
# Reduce API polling frequency
# In services.yaml, add cache settings
# Disable unnecessary widgets
# Check for network issues
docker logs homepage | grep timeout
```
## Performance Optimization
### Reduce API Calls
```yaml
# Increase refresh intervals
widget:
type: sonarr
url: http://sonarr:8989
key: api-key
refreshInterval: 300000 # 5 minutes instead of default 10 seconds
```
### Cache Configuration
```yaml
# In settings.yaml
cache:
size: 100 # Number of cached responses
ttl: 300 # Cache time in seconds
```
### Lazy Loading
```yaml
# In settings.yaml
lazyLoad: true # Load images as they scroll into view
```
## Customization Tips
### Color Themes
Available colors: `slate`, `gray`, `zinc`, `neutral`, `stone`, `red`, `orange`, `amber`, `yellow`, `lime`, `green`, `emerald`, `teal`, `cyan`, `sky`, `blue`, `indigo`, `violet`, `purple`, `fuchsia`, `pink`, `rose`
```yaml
color: blue
theme: dark
```
### Background Images
```yaml
background: /images/my-background.jpg
backgroundBlur: sm # sm, md, lg, xl
backgroundSaturate: 50 # 0-100
backgroundBrightness: 50 # 0-100
```
### Card Appearance
```yaml
cardBlur: md # sm, md, lg, xl
hideVersion: true
showStats: true
```
### Custom Greeting
```yaml
- greeting:
text_size: xl
text: "Welcome back, {{name}}!"
```
## Integration Examples
### Complete Media Stack
```yaml
- Media Server:
- Plex:
icon: plex.png
href: https://plex.domain.com
widget:
type: plex
url: http://plex:32400
key: token
- Jellyfin:
icon: jellyfin.png
href: https://jellyfin.domain.com
widget:
type: jellyfin
url: http://jellyfin:8096
key: api-key
- Media Management:
- Sonarr:
icon: sonarr.png
href: https://sonarr.domain.com
widget:
type: sonarr
url: http://sonarr:8989
key: api-key
- Radarr:
icon: radarr.png
href: https://radarr.domain.com
widget:
type: radarr
url: http://radarr:7878
key: api-key
- Prowlarr:
icon: prowlarr.png
href: https://prowlarr.domain.com
widget:
type: prowlarr
url: http://prowlarr:9696
key: api-key
```
## Summary
Homepage provides a beautiful, functional dashboard for your homelab. It offers:
- Clean, modern interface
- Real-time service monitoring
- API integrations for live data
- Easy YAML configuration
- Extensive customization options
- Active development and community
**Perfect for:**
- Homelab landing page
- Service overview
- Quick access portal
- Status monitoring
- Aesthetic presentation
**Best Practices:**
- Use Authelia for authentication if exposed
- Configure API widgets for live data
- Organize services into logical groups
- Use custom icons for consistency
- Enable Docker integration for automation
- Regular config backups
- Keep updated for new features
**Remember:**
- YAML syntax matters (indentation!)
- Test API connections before configuring widgets
- Use Docker Socket Proxy for security
- Customize colors and themes to preference
- Start simple, add complexity gradually
- Homepage is your homelab's front door - make it welcoming!

# Jellyfin - Open-Source Media Server
## Table of Contents
- [Overview](#overview)
- [What is Jellyfin?](#what-is-jellyfin)
- [Why Use Jellyfin?](#why-use-jellyfin)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Initial Setup](#initial-setup)
- [Library Management](#library-management)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Media Server
**Docker Image:** [linuxserver/jellyfin](https://hub.docker.com/r/linuxserver/jellyfin)
**Default Stack:** `media.yml`
**Web UI:** `https://jellyfin.${DOMAIN}` or `http://SERVER_IP:8096`
**Authentication:** Local accounts (no external service required)
**Ports:** 8096 (HTTP), 8920 (HTTPS), 7359 (auto-discovery), 1900 (DLNA)
## What is Jellyfin?
Jellyfin is a free, open-source media server forked from Emby. It provides complete control over your media with no tracking, premium features, or account requirements. Jellyfin is 100% free with all features available without subscriptions.
### Key Features
- **Completely Free:** No premium tiers or subscriptions
- **Open Source:** GPLv2 licensed
- **No Tracking:** Zero telemetry or analytics
- **Local Accounts:** No external service required
- **Hardware Acceleration:** VAAPI, NVENC, QSV, AMF
- **Live TV & DVR:** Built-in EPG support
- **SyncPlay:** Watch together feature
- **Native Apps:** Android, iOS, Roku, Fire TV, etc.
- **Web Player:** Modern HTML5 player
- **DLNA Server:** Stream to any DLNA device
- **Plugins:** Extensible with official/community plugins
- **Webhooks:** Custom notifications and integrations
## Why Use Jellyfin?
1. **100% Free:** All features, no subscriptions ever
2. **Privacy Focused:** No tracking, no accounts to external services
3. **Open Source:** Community-driven development
4. **Self-Contained:** No dependency on external services
5. **Hardware Transcoding:** Free for everyone
6. **Modern Interface:** Clean, responsive UI
7. **Active Development:** Regular updates
8. **Plugin System:** Extend functionality
9. **SyncPlay:** Watch parties built-in
10. **No Vendor Lock-in:** Your data, your control
## How It Works
```
Media Files → Jellyfin Server (scans and organizes)
        ↓
 Metadata Enrichment
 (TheMovieDB, MusicBrainz, etc.)
        ↓
 ┌──────────┴──────────┐
 ↓                     ↓
Direct Play        Transcoding
(Compatible)    (Hardware Accel)
 ↓                     ↓
Jellyfin Apps     Jellyfin Apps
(All Devices)     (Any Browser)
```
### Media Flow
1. **Add media** to libraries
2. **Jellyfin scans** and identifies content
3. **Metadata scraped** from open databases
4. **User requests** via web/app
5. **Jellyfin determines** if transcoding needed
6. **Hardware transcoding** if supported
7. **Stream** to client
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/media/jellyfin/
├── config/       # Jellyfin configuration
├── cache/        # Temporary cache
└── transcode/    # Transcoding temp files

/mnt/media/
├── movies/       # Movie files
├── tv/           # TV show files
├── music/        # Music files
└── photos/       # Photo files
```
### Environment Variables
```bash
# User permissions
PUID=1000
PGID=1000
# Timezone
TZ=America/New_York
# Optional: Published server URL
JELLYFIN_PublishedServerUrl=https://jellyfin.yourdomain.com
```
## Official Resources
- **Website:** https://jellyfin.org
- **Documentation:** https://jellyfin.org/docs/
- **GitHub:** https://github.com/jellyfin/jellyfin
- **Forum:** https://forum.jellyfin.org
- **Reddit:** https://reddit.com/r/jellyfin
- **Matrix Chat:** https://matrix.to/#/#jellyfin:matrix.org
- **Feature Requests:** https://features.jellyfin.org
## Educational Resources
### Videos
- [Jellyfin Setup Guide (Techno Tim)](https://www.youtube.com/watch?v=R2zVv0DoMF4)
- [Jellyfin vs Plex vs Emby](https://www.youtube.com/results?search_query=jellyfin+vs+plex)
- [Ultimate Jellyfin Setup](https://www.youtube.com/watch?v=zUmIGwbNBw0)
- [Jellyfin Hardware Transcoding](https://www.youtube.com/results?search_query=jellyfin+hardware+transcoding)
- [Jellyfin Tips and Tricks](https://www.youtube.com/results?search_query=jellyfin+tips+tricks)
### Articles & Guides
- [Official Documentation](https://jellyfin.org/docs/)
- [Hardware Acceleration](https://jellyfin.org/docs/general/administration/hardware-acceleration.html)
- [Naming Conventions](https://jellyfin.org/docs/general/server/media/movies.html)
- [Plugin Catalog](https://jellyfin.org/docs/general/server/plugins/)
- [Client Apps](https://jellyfin.org/clients/)
### Concepts to Learn
- **Transcoding:** Converting media formats in real-time
- **Hardware Acceleration:** GPU-based encoding (VAAPI, NVENC, QSV)
- **Direct Play:** Streaming without conversion
- **Remuxing:** Changing container without re-encoding
- **Metadata Providers:** TheMovieDB, TVDb, MusicBrainz
- **NFO Files:** Local metadata files
- **DLNA:** Network streaming protocol
## Docker Configuration
### Complete Service Definition
```yaml
jellyfin:
image: linuxserver/jellyfin:latest
container_name: jellyfin
restart: unless-stopped
networks:
- traefik-network
ports:
- "8096:8096" # HTTP Web UI
- "8920:8920" # HTTPS (optional)
- "7359:7359/udp" # Auto-discovery
- "1900:1900/udp" # DLNA
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
- JELLYFIN_PublishedServerUrl=https://jellyfin.${DOMAIN}
volumes:
- /opt/stacks/media/jellyfin/config:/config
- /opt/stacks/media/jellyfin/cache:/cache
- /mnt/media/movies:/data/movies:ro
- /mnt/media/tv:/data/tvshows:ro
- /mnt/media/music:/data/music:ro
devices:
- /dev/dri:/dev/dri # Intel QuickSync
labels:
- "traefik.enable=true"
- "traefik.http.routers.jellyfin.rule=Host(`jellyfin.${DOMAIN}`)"
- "traefik.http.routers.jellyfin.entrypoints=websecure"
- "traefik.http.routers.jellyfin.tls.certresolver=letsencrypt"
- "traefik.http.services.jellyfin.loadbalancer.server.port=8096"
```
### With NVIDIA GPU
```yaml
jellyfin:
image: linuxserver/jellyfin:latest
container_name: jellyfin
restart: unless-stopped
runtime: nvidia # Requires nvidia-docker
environment:
- NVIDIA_VISIBLE_DEVICES=all
- NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
volumes:
- /opt/stacks/media/jellyfin/config:/config
- /mnt/media:/data:ro
ports:
- "8096:8096"
```
### Transcoding on RAM Disk
```yaml
jellyfin:
volumes:
- /opt/stacks/media/jellyfin/config:/config
- /mnt/media:/data:ro
- /dev/shm:/config/transcodes # Use RAM for transcoding
```
## Initial Setup
### First-Time Configuration
1. **Start Container:**
```bash
docker compose up -d jellyfin
```
2. **Access Web UI:**
- Local: `http://SERVER_IP:8096`
- Via domain: `https://jellyfin.yourdomain.com`
3. **Initial Setup Wizard:**
- Select preferred display language
- Create administrator account (local, no external service)
- Set up media libraries
- Configure remote access
- Review settings
### Adding Libraries
**Add Movie Library:**
1. Dashboard → Libraries → Add Media Library
2. Content type: Movies
3. Display name: Movies
4. Folders → Add → `/data/movies`
5. Preferred language: English
6. Country: United States
7. Save
**Add TV Library:**
1. Dashboard → Libraries → Add Media Library
2. Content type: Shows
3. Display name: TV Shows
4. Folders → Add → `/data/tvshows`
5. Save
**Add Music Library:**
1. Content type: Music
2. Folders → Add → `/data/music`
3. Save
### File Naming Conventions
**Movies:**
```
/data/movies/
  Movie Name (Year)/
    Movie Name (Year).mkv

Example:
/data/movies/
  The Matrix (1999)/
    The Matrix (1999).mkv
```
**TV Shows:**
```
/data/tvshows/
  Show Name (Year)/
    Season 01/
      Show Name - S01E01 - Episode Name.mkv

Example:
/data/tvshows/
  Breaking Bad (2008)/
    Season 01/
      Breaking Bad - S01E01 - Pilot.mkv
      Breaking Bad - S01E02 - Cat's in the Bag.mkv
```
**Music:**
```
/data/music/
  Artist/
    Album (Year)/
      01 - Track Name.mp3
```
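The conventions above can be sketched as a script. This is a minimal illustration using a scratch directory (`MEDIA_ROOT` stands in for the container's `/data` mount); it creates a skeleton Jellyfin can identify from the names alone:

```shell
#!/bin/sh
# Sketch: create a correctly named library skeleton in a scratch directory.
# MEDIA_ROOT is an assumption standing in for the container's /data mount.
MEDIA_ROOT="${MEDIA_ROOT:-$(mktemp -d)}"

# Movies: "Movie Name (Year)/Movie Name (Year).ext"
mkdir -p "$MEDIA_ROOT/movies/The Matrix (1999)"
touch "$MEDIA_ROOT/movies/The Matrix (1999)/The Matrix (1999).mkv"

# TV: "Show Name (Year)/Season 01/Show Name - S01E01 - Title.ext"
mkdir -p "$MEDIA_ROOT/tvshows/Breaking Bad (2008)/Season 01"
touch "$MEDIA_ROOT/tvshows/Breaking Bad (2008)/Season 01/Breaking Bad - S01E01 - Pilot.mkv"

# Jellyfin's scanners match these patterns without needing NFO files
find "$MEDIA_ROOT" -type f
```

Keeping the year in both the folder and file name avoids ambiguous matches (for example, remakes sharing a title).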
## Library Management
### Scanning Libraries
**Manual Scan:**
- Dashboard → Libraries → Scan All Libraries
- Or click scan icon on specific library
**Scheduled Scanning:**
- Dashboard → Scheduled Tasks → Scan Media Library
- Configure scan interval
**Real-time Monitoring:**
- Dashboard → Libraries → Enable real-time monitoring
- Watches for file changes
### Metadata Management
**Providers:**
- Dashboard → Libraries → Select library → Manage Library
- Metadata providers: TheMovieDB, TVDb, OMDb, etc.
- Order determines priority
**Identify Item:**
1. Select item with wrong metadata
2. ... → Identify
3. Search by name or TMDB/TVDb ID
4. Select correct match
**Edit Metadata:**
- ... → Edit Metadata
- Change title, description, images, etc.
- Lock fields to prevent overwriting
**Refresh Metadata:**
- ... → Refresh Metadata
- Re-scrapes from providers
### Collections
**Auto Collections:**
- Jellyfin auto-creates collections from metadata
- Example: "Marvel Cinematic Universe" for all MCU movies
**Manual Collections:**
1. Dashboard → Collections → New Collection
2. Name collection
3. Add movies/shows
4. Set sorting and display options
## Advanced Topics
### Hardware Transcoding
**Intel QuickSync (QSV):**
1. **Verify GPU Access:**
```bash
docker exec jellyfin ls -la /dev/dri
# Should show renderD128, card0, etc.
# Check permissions
docker exec jellyfin id
   # Output should include the video and render groups
```
2. **Enable in Jellyfin:**
- Dashboard → Playback → Transcoding
- Hardware acceleration: Intel QuickSync (QSV)
- Enable hardware decoding for: H264, HEVC, VP9, AV1
- Enable hardware encoding for: H264, HEVC
- Save
**NVIDIA GPU (NVENC):**
1. **Ensure nvidia-docker installed:**
```bash
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
```
2. **Enable in Jellyfin:**
- Dashboard → Playback → Transcoding
- Hardware acceleration: Nvidia NVENC
- Enable codecs
- Save
**AMD GPU (AMF/VAAPI):**
```yaml
jellyfin:
devices:
- /dev/dri:/dev/dri
- /dev/kfd:/dev/kfd # AMD GPU
group_add:
- video
- render
```
Dashboard → Transcoding → VAAPI
**Verify Hardware Transcoding:**
- Play a video that requires transcoding
- Dashboard → Activity
- Check encoding method shows (hw)
### User Management
**Add Users:**
1. Dashboard → Users → Add User
2. Enter username and password
3. Configure library access
4. Set permissions
**User Profiles:**
- Each user has own watch history
- Separate Continue Watching
- Individual preferences
**Parental Controls:**
- Set maximum parental rating
- Block unrated content
- Restrict library access
### Plugins
**Install Plugins:**
1. Dashboard → Plugins → Catalog
2. Browse available plugins
3. Install desired plugins
4. Restart Jellyfin
**Popular Plugins:**
- **TMDb Box Sets:** Auto-create collections
- **Trakt:** Sync watch history to Trakt.tv
- **Reports:** Generate library reports
- **Skin Manager:** Custom themes
- **Anime:** Better anime support
- **Fanart:** Additional image sources
- **Merge Versions:** Combine different qualities
- **Playback Reporting:** Track user activity
**Third-Party Repositories:**
- Add custom plugin repositories
- Dashboard → Plugins → Repositories
### SyncPlay (Watch Together)
**Enable:**
- Dashboard → Plugins → SyncPlay
- Restart Jellyfin
**Use:**
1. User creates SyncPlay group
2. Shares join code
3. Others join with code
4. Playback synchronized across all users
5. Chat available
**Perfect For:**
- Long-distance watch parties
- Family movie nights
- Synchronized viewing
### Live TV & DVR
**Requirements:**
- TV tuner hardware (HDHomeRun, etc.)
- Or IPTV source (m3u playlist)
**Setup:**
1. Dashboard → Live TV
2. Add TV source:
- Tuner device (network device auto-detected)
- Or IPTV (M3U URL)
3. Configure EPG (Electronic Program Guide)
- XML TV guide URL
4. Map channels
5. Save
**DVR:**
- Record shows from EPG
- Series recordings
- Post-processing options
## Troubleshooting
### Jellyfin Not Accessible
```bash
# Check container status
docker ps | grep jellyfin
# View logs
docker logs jellyfin
# Test access
curl http://localhost:8096
# Check ports
docker port jellyfin
```
### Libraries Not Scanning
```bash
# Check permissions
ls -la /mnt/media/movies/
# Fix ownership
sudo chown -R 1000:1000 /mnt/media/
# Check container can access
docker exec jellyfin ls /data/movies
# Manual scan from UI
# Dashboard → Libraries → Scan All Libraries
# Check logs
docker logs jellyfin | grep -i scan
```
### Hardware Transcoding Not Working
```bash
# Verify GPU device
docker exec jellyfin ls -la /dev/dri
# Check permissions
docker exec jellyfin groups
# Should include video (44) and render (106 or 104)
# For Intel GPU, check:
docker exec jellyfin vainfo
# Should list supported codecs
# Check Jellyfin logs during playback
docker logs -f jellyfin
# Look for hardware encoding messages
```
**Fix GPU Permissions:**
```bash
# Get render group ID
getent group render
# Update docker-compose
jellyfin:
group_add:
- "106" # render group ID
```
### Transcoding Failing
```bash
# Check transcode directory
docker exec jellyfin ls /config/transcodes/
# Ensure enough disk space
df -h /opt/stacks/media/jellyfin/
# Check FFmpeg
docker exec jellyfin ffmpeg -version
# Test hardware encoding
docker exec jellyfin ffmpeg -hwaccel vaapi -i /data/movies/sample.mkv -c:v h264_vaapi -f null -
```
### Playback Buffering
**Causes:**
- Network bandwidth insufficient
- Transcoding too slow (CPU overload)
- Disk I/O bottleneck
- Client compatibility issues
**Solutions:**
- Enable hardware transcoding
- Lower streaming quality
- Use direct play when possible
- Optimize media files (H264/HEVC)
- Increase transcoding threads
- Use faster storage for transcode directory
### Metadata Not Downloading
```bash
# Check internet connectivity
docker exec jellyfin ping -c 3 api.themoviedb.org
# Verify metadata providers enabled
# Dashboard → Libraries → Library → Metadata providers
# Force metadata refresh
# Select item → Refresh Metadata → Replace all metadata
# Check naming conventions
# Ensure files follow Jellyfin naming standards
```
### Database Corruption
```bash
# Stop Jellyfin
docker stop jellyfin
# Backup database
cp -r /opt/stacks/media/jellyfin/config/data /opt/backups/jellyfin-data-backup
# Check database
sqlite3 /opt/stacks/media/jellyfin/config/data/library.db "PRAGMA integrity_check;"
# If corrupted, restore from backup or rebuild library
# Delete database and rescan (loses watch history)
# rm /opt/stacks/media/jellyfin/config/data/library.db
# Restart
docker start jellyfin
```
## Performance Optimization
### Transcoding Performance
```yaml
# Use RAM disk for transcoding
jellyfin:
volumes:
- /dev/shm:/config/transcodes
# Or fast NVMe
volumes:
- /path/to/fast/nvme:/config/transcodes
```
**Settings:**
- Dashboard → Playback → Transcoding
- Transcoding thread count: Number of CPU cores
- Hardware acceleration: Enabled
- H264 encoding CRF: 23 (lower = better quality, more CPU)
- Throttle transcodes: Disabled (for local network)
### Database Optimization
```bash
# Stop Jellyfin
docker stop jellyfin
# Vacuum databases
sqlite3 /opt/stacks/media/jellyfin/config/data/library.db "VACUUM;"
sqlite3 /opt/stacks/media/jellyfin/config/data/jellyfin.db "VACUUM;"
# Restart
docker start jellyfin
```
### Network Optimization
**Settings:**
- Dashboard → Networking
- LAN Networks: 192.168.0.0/16,172.16.0.0/12,10.0.0.0/8
- Enable automatic port mapping: Yes
- Public HTTPS port: 8920 (if using)
- Public HTTP port: 8096
### Cache Settings
```yaml
# Separate cache volume for better I/O
jellyfin:
volumes:
- /opt/stacks/media/jellyfin/cache:/cache
```
Dashboard → System → Caching:
- Clear image cache periodically
- Set cache expiration
## Security Best Practices
1. **Strong Passwords:** Enforce for all users
2. **HTTPS Only:** Use Traefik for SSL
3. **Read-Only Media:** Mount media as `:ro`
4. **User Permissions:** Grant minimal library access
5. **Network Segmentation:** Consider separate VLAN
6. **Regular Updates:** Keep Jellyfin current
7. **Secure Remote Access:** Use VPN or Traefik auth
8. **Disable UPnP:** If not needed for remote access
9. **API Keys:** Regenerate periodically
10. **Audit Users:** Review user accounts regularly
## Backup Strategy
**Critical Files:**
```bash
/opt/stacks/media/jellyfin/config/data/ # Databases
/opt/stacks/media/jellyfin/config/config/ # Configuration
/opt/stacks/media/jellyfin/config/metadata/ # Custom metadata
```
**Backup Script:**
```bash
#!/bin/bash
DATE=$(date +%Y%m%d)
BACKUP_DIR=/opt/backups/jellyfin
# Stop Jellyfin for consistent backup
docker stop jellyfin
# Backup configuration
tar -czf $BACKUP_DIR/jellyfin-config-$DATE.tar.gz \
/opt/stacks/media/jellyfin/config/
# Restart
docker start jellyfin
# Keep last 7 backups
find $BACKUP_DIR -name "jellyfin-config-*.tar.gz" -mtime +7 -delete
```
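A backup is only useful if it restores, so it is worth checking the newest archive periodically. The sketch below verifies readability with `tar -t`; `BACKUP_DIR` matches the script above, and a sample archive is created here only so the check can run anywhere:

```shell
#!/bin/sh
# Sketch: verify the newest backup archive is readable before you need it.
# BACKUP_DIR defaults to a scratch dir with a sample archive for illustration.
BACKUP_DIR="${BACKUP_DIR:-$(mktemp -d)}"
SRC=$(mktemp -d)
echo demo > "$SRC/library.db"
tar -czf "$BACKUP_DIR/jellyfin-config-$(date +%Y%m%d).tar.gz" -C "$SRC" .

LATEST=$(ls -t "$BACKUP_DIR"/jellyfin-config-*.tar.gz | head -n 1)
# tar -t lists contents without extracting; a non-zero exit means corruption
if tar -tzf "$LATEST" > /dev/null 2>&1; then
  echo "backup OK: $LATEST"
else
  echo "backup CORRUPT: $LATEST" >&2
fi
```

Run it against your real `BACKUP_DIR` (e.g. `BACKUP_DIR=/opt/backups/jellyfin ./check-backup.sh`) after each backup cycle.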
**Restore:**
```bash
docker stop jellyfin
tar -xzf jellyfin-config-20240101.tar.gz -C /
docker start jellyfin
```
## Jellyseerr Integration
Pair with Jellyseerr for media requests:
```yaml
jellyseerr:
image: fallenbagel/jellyseerr:latest
container_name: jellyseerr
environment:
- LOG_LEVEL=info
volumes:
- /opt/stacks/media/jellyseerr/config:/app/config
ports:
- "5055:5055"
```
Allows users to:
- Request movies/shows
- Track request status
- Get notifications when available
- Browse available content
## Summary
Jellyfin is the leading open-source media server offering:
- 100% free, no premium tiers
- Complete privacy, no tracking
- Hardware transcoding for everyone
- Modern, responsive interface
- Active community development
- Plugin extensibility
- Live TV & DVR built-in
**Perfect for:**
- Privacy-conscious users
- Self-hosting enthusiasts
- Those avoiding subscriptions
- Open-source advocates
- Full control seekers
- GPU transcoding needs (free)
**Trade-offs:**
- Smaller app ecosystem than Plex
- Less polished UI than Plex
- Fewer smart TV apps
- Smaller community/resources
- Some features less mature
**Remember:**
- Hardware transcoding is free (unlike Plex Pass)
- No external account required
- All features available without payment
- Active development, improving constantly
- Great alternative to Plex
- Pair with Sonarr/Radarr for automation
- Use Jellyseerr for user requests
- Privacy-first approach
Jellyfin is the best choice for users who want complete control, privacy, and all features without subscriptions!

# Jellyseerr - Media Request Management
## Table of Contents
- [Overview](#overview)
- [What is Jellyseerr?](#what-is-jellyseerr)
- [Why Use Jellyseerr?](#why-use-jellyseerr)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Initial Setup](#initial-setup)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Media Request Management
**Docker Image:** [fallenbagel/jellyseerr](https://hub.docker.com/r/fallenbagel/jellyseerr)
**Default Stack:** `media-extended.yml`
**Web UI:** `https://jellyseerr.${DOMAIN}` or `http://SERVER_IP:5055`
**Authentication:** Via Jellyfin (SSO)
**Ports:** 5055
## What is Jellyseerr?
Jellyseerr is a fork of Overseerr specifically designed for Jellyfin. It provides a beautiful, user-friendly interface for users to request movies and TV shows. When a request is made, Jellyseerr automatically sends it to Sonarr or Radarr for download, then notifies users when their content is available. Think of it as the "frontend" for your media automation stack that non-technical users can easily navigate.
### Key Features
- **Jellyfin Integration:** Native SSO authentication
- **Beautiful UI:** Modern, responsive interface
- **User Requests:** Non-admin users can request content
- **Auto-Approval:** Configurable approval workflows
- **Request Management:** View, approve, deny requests
- **Availability Tracking:** Know when content is available
- **Notifications:** Discord, Telegram, Email, Pushover
- **Discovery:** Browse trending, popular, upcoming content
- **User Quotas:** Limit requests per user
- **4K Support:** Separate 4K requests
- **Multi-Language:** Support for multiple languages
## Why Use Jellyseerr?
1. **User-Friendly:** Non-technical users can request content easily
2. **Automated Workflow:** Request → Sonarr/Radarr → Download → Notify
3. **Permission Control:** Admins approve or auto-approve
4. **Discovery Interface:** Users can browse and discover content
5. **Request Tracking:** See status of all requests
6. **Notifications:** Keep users informed
7. **Jellyfin Integration:** Seamless SSO
8. **Quota Management:** Prevent abuse
9. **Mobile Friendly:** Responsive design
10. **Free & Open Source:** Community-driven
## How It Works
```
User Browses Jellyseerr
        ↓
Requests Movie/TV Show
        ↓
Jellyseerr Checks Availability
        ↓
Not Available → Send to Sonarr/Radarr
        ↓
Sonarr/Radarr → qBittorrent
        ↓
Download Completes
        ↓
Imported to Jellyfin Library
        ↓
Jellyseerr Notifies User
        ↓
User Watches Content
```
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/media-management/jellyseerr/config/ # Jellyseerr configuration
```
### Environment Variables
```bash
# Log level
LOG_LEVEL=info
# Optional: Custom port
PORT=5055
```
## Official Resources
- **Website:** https://docs.jellyseerr.dev
- **Documentation:** https://docs.jellyseerr.dev
- **GitHub:** https://github.com/Fallenbagel/jellyseerr
- **Discord:** https://discord.gg/ckbvBtDJgC
- **Docker Hub:** https://hub.docker.com/r/fallenbagel/jellyseerr
## Educational Resources
### Videos
- [Jellyseerr Setup Guide](https://www.youtube.com/results?search_query=jellyseerr+setup)
- [Overseerr vs Jellyseerr](https://www.youtube.com/results?search_query=overseerr+vs+jellyseerr)
- [Jellyfin Request System](https://www.youtube.com/results?search_query=jellyfin+request+system)
### Articles & Guides
- [Official Documentation](https://docs.jellyseerr.dev)
- [Installation Guide](https://docs.jellyseerr.dev/getting-started/installation)
- [User Guide](https://docs.jellyseerr.dev/using-jellyseerr/users)
### Concepts to Learn
- **SSO (Single Sign-On):** Jellyfin authentication
- **Request Workflow:** Request → Approval → Download → Notify
- **User Permissions:** Admin vs Requester roles
- **Quotas:** Limiting requests per time period
- **4K Requests:** Separate quality tiers
- **Auto-Approval:** Automatic vs manual approval
## Docker Configuration
### Complete Service Definition
```yaml
jellyseerr:
image: fallenbagel/jellyseerr:latest
container_name: jellyseerr
restart: unless-stopped
networks:
- traefik-network
ports:
- "5055:5055"
environment:
- LOG_LEVEL=info
- TZ=America/New_York
volumes:
- /opt/stacks/media-management/jellyseerr/config:/app/config
labels:
- "traefik.enable=true"
- "traefik.http.routers.jellyseerr.rule=Host(`jellyseerr.${DOMAIN}`)"
- "traefik.http.routers.jellyseerr.entrypoints=websecure"
- "traefik.http.routers.jellyseerr.tls.certresolver=letsencrypt"
- "traefik.http.services.jellyseerr.loadbalancer.server.port=5055"
```
**Note:** No Authelia middleware - Jellyseerr has built-in Jellyfin authentication.
## Initial Setup
### First Access
1. **Start Container:**
```bash
docker compose up -d jellyseerr
```
2. **Access Web UI:**
- Local: `http://SERVER_IP:5055`
- Domain: `https://jellyseerr.yourdomain.com`
3. **Setup Wizard:**
- Configure Jellyfin server
- Add Sonarr and Radarr
- Configure default settings
- Sign in with Jellyfin account
### Jellyfin Server Configuration
**Step 1: Connect Jellyfin**
1. **Server Name:** Your Jellyfin server name
2. **Hostname/IP:** `jellyfin` (Docker container name) or `http://jellyfin:8096`
3. **Port:** `8096`
4. **SSL:** ✗ (internal Docker network)
5. **External URL:** `https://jellyfin.yourdomain.com` (for user links)
6. **Test Connection**
7. **Sign in with Jellyfin Admin Account**
**What happens:**
- Jellyseerr connects to Jellyfin
- Imports users from Jellyfin
- Sets up SSO authentication
### Sonarr Configuration
**Settings → Services → Sonarr → Add Sonarr Server:**
1. **Default Server:** ✓ (primary Sonarr)
2. **4K Server:** ✗ (unless you have separate 4K Sonarr)
3. **Server Name:** Sonarr
4. **Hostname/IP:** `sonarr`
5. **Port:** `8989`
6. **API Key:** From Sonarr → Settings → General → API Key
7. **Base URL:** Leave blank
8. **Quality Profile:** HD-1080p (or your default)
9. **Root Folder:** `/tv`
10. **Language Profile:** English (or your preference)
11. **Tags:** (optional)
12. **External URL:** `https://sonarr.yourdomain.com`
13. **Enable Scan:** ✓
14. **Enable Automatic Search:** ✓
15. **Test → Save**
**For 4K Setup (Optional):**
- Add second Sonarr instance
- Check "4K Server"
- Point to Sonarr-4K instance
### Radarr Configuration
**Settings → Services → Radarr → Add Radarr Server:**
1. **Default Server:** ✓
2. **4K Server:** ✗
3. **Server Name:** Radarr
4. **Hostname/IP:** `radarr`
5. **Port:** `7878`
6. **API Key:** From Radarr → Settings → General → API Key
7. **Base URL:** Leave blank
8. **Quality Profile:** HD-1080p
9. **Root Folder:** `/movies`
10. **Minimum Availability:** Released
11. **Tags:** (optional)
12. **External URL:** `https://radarr.yourdomain.com`
13. **Enable Scan:** ✓
14. **Enable Automatic Search:** ✓
15. **Test → Save**
### User Management
**Settings → Users:**
**Import Users:**
- Users automatically imported from Jellyfin
- Each user can sign in with Jellyfin credentials
**User Permissions:**
1. **Admin:** Full control
2. **User:** Can request, see own requests
3. **Manage Requests:** Can approve/deny requests
**Configure Default Permissions:**
- Settings → Users → Default Permissions
- Request Movies: ✓
- Request TV: ✓
- Request 4K: ✗ (optional)
- Auto-approve: ✗ (review before downloading)
- Request Limit: 10 per week (adjust as needed)
### General Settings
**Settings → General:**
**Application Title:** Your server name (appears in UI)
**Application URL:** `https://jellyseerr.yourdomain.com`
**CSRF Protection:** ✓ Enable
**Hide Available Media:** ✗ (show what's already available)
**Allow Partial Series Requests:** ✓ (users can request specific seasons)
**Default Permissions:** Configure for new users
## Advanced Topics
### Auto-Approval
**Settings → Users → Select User → Permissions:**
- **Auto-approve Movies:** ✓
- **Auto-approve TV:** ✓
- **Auto-approve 4K:** ✗ (usually manual)
**Use Cases:**
- Trusted users
- Family members
- Reduce manual work
**Caution:**
- Can lead to storage issues
- Consider quotas
### Request Quotas
**Settings → Users → Select User → Permissions:**
**Movie Quotas:**
- Movie Request Limit: 10
- Time Period: Week
**TV Quotas:**
- TV Request Limit: 5
- Time Period: Week
**4K Quotas:**
- Separate limits for 4K
- Usually more restrictive
**Reset:**
- Quotas reset based on time period
- Can be adjusted per user
### Notifications
**Settings → Notifications:**
**Available Notification Types:**
- Email (SMTP)
- Discord
- Telegram
- Pushover
- Slack
- Webhook
**Configuration Example: Discord**
1. **Settings → Notifications → Discord → Add**
2. **Webhook URL:** From Discord server
3. **Bot Username:** Jellyseerr (optional)
4. **Bot Avatar:** Custom avatar URL (optional)
5. **Notification Types:**
- Media Requested: ✓
- Media Approved: ✓
- Media Available: ✓
- Media Failed: ✓
- Request Pending: ✓ (for admins)
6. **Test → Save**
**Telegram Setup:**
1. Create bot with @BotFather
2. Get bot token
3. Get chat ID
4. Add to Jellyseerr
5. Configure notification types
### 4K Management
**Separate 4K Workflow:**
**Requirements:**
- Separate Sonarr-4K and Radarr-4K instances
- Separate 4K media libraries
- More storage space
**Setup:**
1. Add 4K Sonarr/Radarr servers
2. Check "4K Server" checkbox
3. Configure different quality profiles (2160p)
4. Separate root folders (/movies-4k, /tv-4k)
**User Permissions:**
- Restrict 4K requests to admins/trusted users
- Higher quotas for regular content
### Library Sync
**Settings → Services → Sync Libraries:**
**Manual Sync:**
- Force refresh of available content
- Updates Jellyseerr's cache
**Automatic Sync:**
- Runs periodically
- Keeps availability up-to-date
**Scan Settings:**
- Enable scan on Sonarr/Radarr servers
- Real-time availability updates
### Discovery Features
**Home Page:**
- Trending movies/TV
- Popular content
- Upcoming releases
- Recently Added
**Search:**
- Search movies, TV, people
- Filter by genre, year, rating
- Browse by network (Netflix, HBO, etc.)
**Recommendations:**
- Similar content suggestions
- Based on existing library
### Public Sign-Up
**Settings → General → Enable New Jellyfin Sign-In:**
- ✓ Allow new users to sign in
- ✗ Disable if you want manual approval
**With Jellyfin:**
- Users must have Jellyfin account first
- Then can access Jellyseerr
**Without Public Sign-Up:**
- Admin must import users manually
- More control over access
## Troubleshooting
### Jellyseerr Can't Connect to Jellyfin
```bash
# Check containers running
docker ps | grep -E "jellyseerr|jellyfin"
# Check network connectivity
docker exec jellyseerr curl http://jellyfin:8096
# Check Jellyfin API
curl http://SERVER_IP:8096/System/Info
# Verify hostname
# Should be: http://jellyfin:8096 (not localhost)
# Check logs
docker logs jellyseerr | grep -i jellyfin
docker logs jellyfin | grep -i error
```
### Jellyseerr Can't Connect to Sonarr/Radarr
```bash
# Test connectivity
docker exec jellyseerr curl http://sonarr:8989
docker exec jellyseerr curl http://radarr:7878
# Verify API keys
# Copy from Sonarr/Radarr → Settings → General → API Key
# Paste exactly into Jellyseerr
# Check network
docker network inspect traefik-network
# Jellyseerr, Sonarr, Radarr should all be on same network
# Check logs
docker logs jellyseerr | grep -i "sonarr\|radarr"
```
### Requests Not Sending to Sonarr/Radarr
```bash
# Check request status
# Jellyseerr → Requests tab
# Should show "Requested" → "Approved" → "Processing"
# Check auto-approval settings
# Settings → Users → Permissions
# Auto-approve enabled?
# Manually approve
# Requests → Pending → Approve
# Check Sonarr/Radarr logs
docker logs sonarr | grep -i jellyseerr
docker logs radarr | grep -i jellyseerr
# Verify quality profiles exist
# Sonarr/Radarr → Settings → Profiles
# Profile must match what's configured in Jellyseerr
```
### Users Can't Sign In
```bash
# Verify Jellyfin connection
# Settings → Jellyfin → Test Connection
# Check user exists in Jellyfin
# Jellyfin → Dashboard → Users
# Import users
# Settings → Users → Import Jellyfin Users
# Check permissions
# Settings → Users → Select user → Permissions
# Check logs
docker logs jellyseerr | grep -i auth
```
### Notifications Not Working
```bash
# Test notification
# Settings → Notifications → Select notification → Test
# Check notification settings
# Verify webhook URLs, API keys, etc.
# Check Discord webhook
curl -X POST "https://discord.com/api/webhooks/YOUR/WEBHOOK" \
-H "Content-Type: application/json" \
-d '{"content":"Test"}'
# Check logs
docker logs jellyseerr | grep -i notification
```
## Performance Optimization
### Database Optimization
```bash
# Jellyseerr uses SQLite
# Stop container
docker stop jellyseerr
# Vacuum database
sqlite3 /opt/stacks/media-management/jellyseerr/config/db/db.sqlite3 "VACUUM;"
# Restart
docker start jellyseerr
```
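Before vacuuming, it is worth confirming the database file is healthy. A quick check, assuming `sqlite3` is installed on the host and using the same path as above:

```shell
# Stop the app so the file is quiescent, then run SQLite's
# built-in consistency check; a healthy database prints "ok".
docker stop jellyseerr
sqlite3 /opt/stacks/media-management/jellyseerr/config/db/db.sqlite3 "PRAGMA integrity_check;"
docker start jellyseerr
```

If the check reports anything other than `ok`, restore from a backup instead of vacuuming.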
### Cache Management
**Settings → General:**
- Cache timeout: 6 hours (default)
- Adjust based on library size
### Scan Frequency
- More frequent scans = higher load
- Balance between real-time updates and performance
- Consider library size
## Security Best Practices
1. **Use HTTPS:** Always access via Traefik with SSL
2. **Strong Jellyfin Passwords:** Users authenticate via Jellyfin
3. **Restrict New Sign-Ins:** Disable if not needed
4. **User Quotas:** Prevent abuse
5. **Approve Requests:** Don't auto-approve all users
6. **Regular Updates:** Keep Jellyseerr current
7. **API Key Security:** Keep Sonarr/Radarr keys secure
8. **Network Isolation:** Internal Docker network only
## Backup Strategy
**Critical Files:**
```bash
/opt/stacks/media-management/jellyseerr/config/db/db.sqlite3 # Database
/opt/stacks/media-management/jellyseerr/config/settings.json # Settings
```
**Backup Script:**
```bash
#!/bin/bash
DATE=$(date +%Y%m%d)
BACKUP_DIR=/opt/backups/jellyseerr
docker stop jellyseerr
tar -czf $BACKUP_DIR/jellyseerr-$DATE.tar.gz \
/opt/stacks/media-management/jellyseerr/config/
docker start jellyseerr
find $BACKUP_DIR -name "jellyseerr-*.tar.gz" -mtime +7 -delete
```
## Integration with Other Services
### Jellyseerr + Jellyfin
- SSO authentication
- User import
- Library sync
- Availability checking
### Jellyseerr + Sonarr + Radarr
- Automatic request forwarding
- Quality profile mapping
- Status tracking
- Download monitoring
### Jellyseerr + Discord/Telegram
- Request notifications
- Approval notifications
- Availability notifications
- Admin alerts
## Summary
Jellyseerr is the user-friendly request management system offering:
- Beautiful, modern interface
- Jellyfin SSO integration
- Automatic Sonarr/Radarr integration
- Request approval workflow
- User quotas and permissions
- Notification system
- Discovery and browsing
- Free and open-source
**Perfect for:**
- Shared Jellyfin servers
- Family media servers
- Non-technical users
- Request management
- Automated workflows
**Key Points:**
- Jellyfin authentication (SSO)
- Connect to Sonarr and Radarr
- Configure user permissions
- Set up notifications
- Enable/disable auto-approval
- Use quotas to prevent abuse
- Separate 4K management optional
**Remember:**
- Users need Jellyfin accounts
- API keys from Sonarr/Radarr required
- Configure quotas appropriately
- Test notifications
- Regular backups recommended
- Auto-approval optional
- 4K requires separate instances
Jellyseerr makes media requests simple and automated for everyone!

# Jupyter Lab - Data Science Environment
## Table of Contents
- [Overview](#overview)
- [What is Jupyter Lab?](#what-is-jupyter-lab)
- [Why Use Jupyter Lab?](#why-use-jupyter-lab)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
## Overview
**Category:** Data Science IDE
**Docker Image:** [jupyter/scipy-notebook](https://hub.docker.com/r/jupyter/scipy-notebook)
**Default Stack:** `development.yml`
**Web UI:** `http://SERVER_IP:8888`
**Token:** Check container logs
**Ports:** 8888
## What is Jupyter Lab?
Jupyter Lab is a web-based interactive development environment for notebooks, code, and data. It's the gold standard for data science work, allowing you to combine code execution, rich text, visualizations, and interactive widgets in one document. Think of it as an IDE specifically designed for data exploration and analysis.
### Key Features
- **Interactive Notebooks:** Code + documentation + results
- **Multiple Languages:** Python, R, Julia, etc.
- **Rich Output:** Plots, tables, HTML, LaTeX
- **Extensions:** Powerful extension system
- **File Browser:** Manage notebooks and files
- **Terminal:** Integrated terminal access
- **Markdown:** Rich text documentation
- **Data Visualization:** Matplotlib, Plotly, etc.
- **Git Integration:** Version control
- **Free & Open Source:** BSD license
## Why Use Jupyter Lab?
1. **Data Science Standard:** Used by data scientists worldwide
2. **Interactive:** See results immediately
3. **Documentation:** Code + explanations together
4. **Reproducible:** Share complete analysis
5. **Visualization:** Built-in plotting
6. **Exploratory:** Perfect for data exploration
7. **Teaching:** Great for learning/teaching
## Configuration in AI-Homelab
```
/opt/stacks/development/jupyter/work/
├── notebooks/   # Your Jupyter notebooks
└── data/        # Datasets
```
## Official Resources
- **Website:** https://jupyter.org
- **Documentation:** https://jupyterlab.readthedocs.io
- **Gallery:** https://github.com/jupyter/jupyter/wiki
## Docker Configuration
```yaml
jupyter:
image: jupyter/scipy-notebook:latest
container_name: jupyter
restart: unless-stopped
networks:
- traefik-network
ports:
- "8888:8888"
environment:
- JUPYTER_ENABLE_LAB=yes
- GRANT_SUDO=yes
user: root
volumes:
- /opt/stacks/development/jupyter/work:/home/jovyan/work
labels:
- "traefik.enable=true"
- "traefik.http.routers.jupyter.rule=Host(`jupyter.${DOMAIN}`)"
```
**Note:** `scipy-notebook` includes NumPy, Pandas, Matplotlib, SciPy, scikit-learn, and more.
## Setup
1. **Start Container:**
```bash
docker compose up -d jupyter
```
2. **Get Access Token:**
```bash
docker logs jupyter 2>&1 | grep token   # token is printed to stderr
# Look for: http://127.0.0.1:8888/lab?token=LONG_TOKEN_HERE
```
3. **Access UI:** `http://SERVER_IP:8888`
- Enter token from logs
- Set password (optional but recommended)
4. **Create Notebook:**
- File → New → Notebook
- Select kernel (Python 3)
- Start coding!
5. **Example First Cell:**
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Create sample data
data = pd.DataFrame({
'x': range(10),
'y': np.random.randn(10)
})
# Plot
plt.plot(data['x'], data['y'])
plt.title('Sample Plot')
plt.show()
# Display data
data
```
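The token line from step 2 can also be extracted directly. A sketch (Jupyter prints its startup banner to stderr, so `2>&1` is needed before piping into `grep`):

```shell
# Print only the access token value from the container logs.
docker logs jupyter 2>&1 \
  | grep -oE 'token=[0-9a-f]+' \
  | head -n1 \
  | cut -d= -f2
```

Paste the printed value into the token prompt at `http://SERVER_IP:8888`.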
## Pre-installed Libraries
**scipy-notebook includes:**
- **NumPy:** Numerical computing
- **Pandas:** Data analysis
- **Matplotlib:** Plotting
- **SciPy:** Scientific computing
- **scikit-learn:** Machine learning
- **Seaborn:** Statistical visualization
- **Numba:** JIT compiler
- **SymPy:** Symbolic mathematics
- **Beautiful Soup:** Web scraping
- **requests:** HTTP library
## Summary
Jupyter Lab is your data science environment offering:
- Interactive Python notebooks
- Code + documentation + results together
- Data visualization
- Rich output (plots, tables, LaTeX)
- Pre-installed data science libraries
- Extensible architecture
- Git integration
- Free and open-source
**Perfect for:**
- Data science work
- Machine learning
- Data exploration
- Teaching/learning Python
- Research documentation
- Reproducible analysis
- Prototyping algorithms
**Key Points:**
- Notebook format (.ipynb)
- Cell-based execution
- scipy-notebook has common libraries
- Token-based authentication
- Set password for easier access
- Markdown + code cells
- Share notebooks easily
**Remember:**
- Save token or set password
- Regular notebook saves
- Export notebooks to PDF/HTML
- Version control with Git
- Install extra packages: `!pip install package`
- Restart kernel if needed
- Shutdown unused kernels
Jupyter Lab powers your data science workflow!

# Lazy Librarian - Book Management
## Table of Contents
- [Overview](#overview)
- [What is Lazy Librarian?](#what-is-lazy-librarian)
- [Why Use Lazy Librarian?](#why-use-lazy-librarian)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
- [Initial Setup](#initial-setup)
## Overview
**Category:** Book Management
**Docker Image:** [linuxserver/lazylibrarian](https://hub.docker.com/r/linuxserver/lazylibrarian)
**Default Stack:** `media-extended.yml`
**Web UI:** `http://SERVER_IP:5299`
**Alternative To:** Readarr
**Ports:** 5299
## What is Lazy Librarian?
Lazy Librarian is an automated book downloader similar to Sonarr/Radarr but for books. It's an alternative to Readarr, with some users preferring its interface and magazine support. It automatically downloads ebooks and audiobooks from your wanted list.
### Key Features
- **Author Tracking:** Monitor favorite authors
- **GoodReads Integration:** Import reading lists
- **Magazine Support:** Download magazines
- **Calibre Integration:** Automatic library management
- **Multiple Providers:** Usenet and torrent indexers
- **Format Management:** EPUB, MOBI, PDF, audiobooks
- **Quality Control:** Preferred formats
- **Notifications:** Discord, Telegram, email
## Why Use Lazy Librarian?
1. **Magazine Support:** Unlike Readarr
2. **GoodReads Integration:** Easy list importing
3. **Calibre Integration:** Seamless library management
4. **Alternative Interface:** Some prefer over Readarr
5. **Mature Project:** Stable and proven
6. **Audiobook Support:** Built-in
7. **Free & Open Source:** No cost
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/media-management/lazylibrarian/config/ # Config
/mnt/media/books/ # Book library
/mnt/downloads/ # Downloads
```
### Environment Variables
```bash
PUID=1000
PGID=1000
TZ=America/New_York
```
## Official Resources
- **Website:** https://lazylibrarian.gitlab.io
- **GitLab:** https://gitlab.com/LazyLibrarian/LazyLibrarian
- **Wiki:** https://lazylibrarian.gitlab.io/
## Docker Configuration
```yaml
lazylibrarian:
image: linuxserver/lazylibrarian:latest
container_name: lazylibrarian
restart: unless-stopped
networks:
- traefik-network
ports:
- "5299:5299"
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
volumes:
- /opt/stacks/media-management/lazylibrarian/config:/config
- /mnt/media/books:/books
- /mnt/downloads:/downloads
labels:
- "traefik.enable=true"
- "traefik.http.routers.lazylibrarian.rule=Host(`lazylibrarian.${DOMAIN}`)"
- "traefik.http.routers.lazylibrarian.entrypoints=websecure"
- "traefik.http.routers.lazylibrarian.tls.certresolver=letsencrypt"
- "traefik.http.routers.lazylibrarian.middlewares=authelia@docker"
- "traefik.http.services.lazylibrarian.loadbalancer.server.port=5299"
```
## Initial Setup
1. **Start Container:**
```bash
docker compose up -d lazylibrarian
```
2. **Access UI:** `http://SERVER_IP:5299`
3. **Configure:**
- Config → Download Settings → qBittorrent
- Config → Search Providers → Add providers
- Config → Processing → Calibre integration
- Add authors to watch
4. **GoodReads Setup:**
- Config → GoodReads API → Get API key from goodreads.com/api
- Import reading list
5. **Add Author:**
- Search for author
- Add to database
- Check "Wanted" books
- LazyLibrarian searches automatically
## Summary
Lazy Librarian is the book automation tool offering:
- Author and book tracking
- Magazine support (unique feature)
- GoodReads integration
- Calibre compatibility
- Audiobook support
- Alternative to Readarr
- Free and open-source
**Perfect for:**
- Book collectors
- Magazine readers
- GoodReads users
- Calibre users
- Those wanting Readarr alternative
**Key Points:**
- Supports magazines (Readarr doesn't)
- GoodReads API required
- Calibre integration available
- Configure download client
- Add search providers
- Monitor authors, not individual books
**Readarr vs Lazy Librarian:**
- Readarr: Newer, cleaner UI, active development
- Lazy Librarian: Magazines, mature, different approach
- Both integrate with Calibre
- Choose based on preference
Lazy Librarian automates your book and magazine collection!

View File

@@ -1,634 +0,0 @@
# Lidarr - Music Automation
## Table of Contents
- [Overview](#overview)
- [What is Lidarr?](#what-is-lidarr)
- [Why Use Lidarr?](#why-use-lidarr)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Initial Setup](#initial-setup)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Media Management & Automation
**Docker Image:** [linuxserver/lidarr](https://hub.docker.com/r/linuxserver/lidarr)
**Default Stack:** `media-extended.yml`
**Web UI:** `https://lidarr.${DOMAIN}` or `http://SERVER_IP:8686`
**Authentication:** Optional (configurable)
**Ports:** 8686
## What is Lidarr?
Lidarr is a music collection manager for Usenet and BitTorrent users. It's the music equivalent of Sonarr (TV) and Radarr (movies), designed to monitor for new album releases from your favorite artists, automatically download them, and organize your music library with proper metadata and tagging.
### Key Features
- **Automatic Downloads:** New releases from monitored artists
- **Quality Management:** MP3, FLAC, lossless preferences
- **Artist Management:** Track favorite artists
- **Album Tracking:** Monitor discographies
- **Calendar:** Upcoming album releases
- **Metadata Enrichment:** Album art, artist info, tags
- **Format Support:** MP3, FLAC, M4A, OGG, WMA
- **MusicBrainz Integration:** Accurate metadata
- **Multiple Quality Tiers:** Lossy vs lossless
- **Plex/Jellyfin Integration:** Library updates
## Why Use Lidarr?
1. **Never Miss Releases:** Auto-download new albums
2. **Library Organization:** Consistent structure
3. **Quality Control:** FLAC for archival, MP3 for portable
4. **Complete Discographies:** Track all artist releases
5. **Metadata Automation:** Proper tags and artwork
6. **Format Flexibility:** Multiple quality profiles
7. **Missing Album Detection:** Find gaps in collection
8. **Time Saving:** No manual searching
9. **Free & Open Source:** No cost
10. **Integration:** Works with music players and servers
## How It Works
```
New Album Release (Artist Monitored)
        ↓
Lidarr Checks RSS Feeds (Prowlarr)
        ↓
Evaluates Releases (Quality, Format)
        ↓
Sends to qBittorrent (via Gluetun VPN)
        ↓
Download Completes
        ↓
Lidarr Imports & Tags
        ↓
Library Updated (/mnt/media/music/)
        ↓
Plex/Jellyfin/Subsonic Access
```
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/media-management/lidarr/config/   # Lidarr configuration
/mnt/downloads/complete/music-lidarr/         # Downloaded music
/mnt/media/music/                             # Final music library

Library Structure:
/mnt/media/music/
└── Artist Name/
    └── Album Name (Year)/
        ├── 01 - Track Name.flac
        ├── 02 - Track Name.flac
        └── cover.jpg
```
### Environment Variables
```bash
# User permissions
PUID=1000
PGID=1000
# Timezone
TZ=America/New_York
```
## Official Resources
- **Website:** https://lidarr.audio
- **Wiki:** https://wiki.servarr.com/lidarr
- **GitHub:** https://github.com/Lidarr/Lidarr
- **Discord:** https://discord.gg/lidarr
- **Reddit:** https://reddit.com/r/lidarr
- **Docker Hub:** https://hub.docker.com/r/linuxserver/lidarr
## Educational Resources
### Videos
- [Lidarr Setup Guide](https://www.youtube.com/results?search_query=lidarr+setup)
- [*arr Stack Music Management](https://www.youtube.com/results?search_query=lidarr+music+automation)
- [Lidarr Quality Profiles](https://www.youtube.com/results?search_query=lidarr+quality+profiles)
### Articles & Guides
- [Official Documentation](https://wiki.servarr.com/lidarr)
- [Servarr Wiki](https://wiki.servarr.com/)
- [Quality Settings Guide](https://wiki.servarr.com/lidarr/settings#quality-profiles)
### Concepts to Learn
- **Quality Profiles:** Lossy vs lossless preferences
- **Metadata Profiles:** What to download (albums, EPs, singles)
- **Release Profiles:** Preferred sources (WEB, CD)
- **MusicBrainz:** Music metadata database
- **Bitrate:** Audio quality measurement
- **Lossless:** FLAC, ALAC (no quality loss)
- **Lossy:** MP3, AAC (compressed)
## Docker Configuration
### Complete Service Definition
```yaml
lidarr:
image: linuxserver/lidarr:latest
container_name: lidarr
restart: unless-stopped
networks:
- traefik-network
ports:
- "8686:8686"
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
volumes:
- /opt/stacks/media-management/lidarr/config:/config
- /mnt/media/music:/music
- /mnt/downloads:/downloads
labels:
- "traefik.enable=true"
- "traefik.http.routers.lidarr.rule=Host(`lidarr.${DOMAIN}`)"
- "traefik.http.routers.lidarr.entrypoints=websecure"
- "traefik.http.routers.lidarr.tls.certresolver=letsencrypt"
- "traefik.http.routers.lidarr.middlewares=authelia@docker"
- "traefik.http.services.lidarr.loadbalancer.server.port=8686"
```
## Initial Setup
### First Access
1. **Start Container:**
```bash
docker compose up -d lidarr
```
2. **Access Web UI:**
- Local: `http://SERVER_IP:8686`
- Domain: `https://lidarr.yourdomain.com`
3. **Initial Configuration:**
- Settings → Media Management
- Settings → Profiles
- Settings → Indexers (via Prowlarr)
- Settings → Download Clients
### Media Management Settings
**Settings → Media Management:**
1. **Rename Tracks:** ✓ Enable
2. **Replace Illegal Characters:** ✓ Enable
3. **Standard Track Format:**
```
{Album Type}/{Artist Name} - {Album Title} ({Release Year})/{medium:00}{track:00} - {Track Title}
```
Example: `Studio/Pink Floyd - The Dark Side of the Moon (1973)/0101 - Speak to Me.flac`
4. **Artist Folder Format:**
```
{Artist Name}
```
5. **Album Folder Format:**
```
{Album Title} ({Release Year})
```
6. **Root Folders:**
- Add: `/music`
**File Management:**
- ✓ Unmonitor Deleted Tracks
- ✓ Use Hardlinks instead of Copy
- Minimum Free Space: 100 MB
- ✓ Import Extra Files: Artwork (cover.jpg, folder.jpg)
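Since hardlinks only save space when `/downloads` and `/music` sit on the same filesystem, it is worth verifying that imports are actually hardlinked rather than copied. A sketch with illustrative paths (hardlinked files share one inode number):

```shell
# Identical inode numbers mean the two paths are hardlinks to the
# same data; different numbers mean the import fell back to a copy.
stat -c '%i' \
  /mnt/downloads/complete/music-lidarr/track.flac \
  "/mnt/media/music/Artist Name/Album Name (1973)/01 - Track Name.flac"
```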
### Metadata Profiles
**Settings → Profiles → Metadata Profiles:**
**Standard Profile:**
- Albums: ✓
- EPs: ✓ (optional)
- Singles: ✗ (usually skip)
- Live: ✗ (optional)
- Compilation: ✗ (optional)
- Remix: ✗ (optional)
- Soundtrack: ✗ (optional)
**Complete Discography Profile:**
- Enable all types
- For die-hard fans wanting everything
### Quality Profiles
**Settings → Profiles → Quality Profiles:**
**FLAC (Lossless) Profile:**
1. Name: "Lossless"
2. Upgrades Allowed: ✓
3. Upgrade Until: FLAC
4. Qualities (in order):
- FLAC
- ALAC
- FLAC 24bit (if available)
**MP3 (Lossy) Profile:**
1. Name: "High Quality MP3"
2. Upgrade Until: MP3-320
3. Qualities:
- MP3-320
- MP3-VBR-V0
- MP3-256
**Hybrid Profile:**
- FLAC preferred
- Fall back to MP3-320
### Download Client Setup
**Settings → Download Clients → Add → qBittorrent:**
1. **Name:** qBittorrent
2. **Host:** `gluetun`
3. **Port:** `8080`
4. **Username:** `admin`
5. **Password:** Your password
6. **Category:** `music-lidarr`
7. **Test → Save**
### Indexer Setup (via Prowlarr)
**Prowlarr Integration:**
- Prowlarr → Settings → Apps → Add Lidarr
- Sync Categories: Audio/MP3, Audio/Lossless, Audio/Other
- Auto-syncs indexers
**Verify:**
- Settings → Indexers
- Should see synced indexers from Prowlarr
### Adding Your First Artist
1. **Click "Add New"**
2. **Search:** Artist name (e.g., "Pink Floyd")
3. **Select** correct artist (check MusicBrainz link)
4. **Configure:**
- Root Folder: `/music`
- Monitor: All Albums (or Future Albums)
- Metadata Profile: Standard
- Quality Profile: Lossless
- ✓ Search for missing albums
5. **Add Artist**
## Advanced Topics
### Quality Definitions
**Settings → Quality → Quality Definitions:**
Adjust bitrate ranges:
**MP3-320:**
- Min: 310 kbps
- Max: 330 kbps
**MP3-VBR-V0:**
- Min: 220 kbps
- Max: 260 kbps
**FLAC:**
- Min: 600 kbps
- Preferred: 900-1400 kbps
### Release Profiles
**Settings → Profiles → Release Profiles:**
**Preferred Sources:**
- Must Contain: `WEB|CD|FLAC`
- Must Not Contain: `MP3|128|192` (if targeting lossless)
- Score: +10
**Avoid:**
- Must Not Contain: `REPACK|PROPER` (unless needed)
- Score: -10
### Import Lists
**Settings → Import Lists:**
**Spotify Integration:** (if available)
- Import playlists
- Auto-add followed artists
**Last.fm:**
- Import top artists
- Discover new music
**MusicBrainz:**
- Import artist discography
- Series/compilation tracking
### Notifications
**Settings → Connect:**
Popular notifications:
- **Plex:** Update library on import
- **Jellyfin:** Scan library
- **Discord:** New release alerts
- **Telegram:** Mobile notifications
- **Last.fm:** Scrobble integration
- **Custom Webhook:** External services
**Example: Plex**
1. Add → Plex Media Server
2. Host: `plex`
3. Port: `32400`
4. Auth Token: From Plex
5. Triggers: On Import, On Upgrade
6. Update Library: ✓
7. Test → Save
### Custom Scripts
**Settings → Connect → Custom Script:**
Run scripts on events:
- On Download
- On Import
- On Upgrade
- On Rename
- On Retag
**Use Cases:**
- Convert formats (FLAC → MP3 for mobile)
- Sync to music player
- Update external database
- Backup to cloud
### Retagging
**Automatic tag updates:**
**Settings → Media Management → Retagging:**
- ✓ Write tags to audio files
- Tag separator: `;` or `/`
- Standard tags: Artist, Album, Track, Year
- Additional tags: Genre, Comment, AlbumArtist
**Manual Retag:**
- Select album → Retag
- Updates all file tags with correct metadata
### Multiple Instances
**Separate instances for different use cases:**
**lidarr-lossless.yml:**
```yaml
lidarr-lossless:
image: linuxserver/lidarr:latest
container_name: lidarr-lossless
ports:
- "8687:8686"
volumes:
- /opt/stacks/media-management/lidarr-lossless/config:/config
- /mnt/media/music-flac:/music
- /mnt/downloads:/downloads
```
**Use Cases:**
- Separate FLAC library
- Different quality standards
- Genre-specific instances
## Troubleshooting
### Lidarr Not Finding Albums
```bash
# Check indexers
# Settings → Indexers → Test All
# Check Prowlarr sync
docker logs prowlarr | grep lidarr
# Manual search
# Artist → Album → Manual Search
# Common issues:
# - No indexers with music categories
# - Album not released yet
# - Quality profile too restrictive
# - Wrong artist match (check MusicBrainz ID)
```
### Downloads Not Importing
```bash
# Check permissions
ls -la /mnt/downloads/complete/music-lidarr/
ls -la /mnt/media/music/
# Fix ownership
sudo chown -R 1000:1000 /mnt/media/music/
# Verify Lidarr access
docker exec lidarr ls /downloads
docker exec lidarr ls /music
# Check logs
docker logs lidarr | grep -i import
# Common issues:
# - Permission denied
# - Wrong category in qBittorrent
# - Format not in quality profile
# - Hardlink failed (different filesystems)
```
### Wrong Artist Match
```bash
# Search by MusicBrainz ID for accuracy
# Find MusicBrainz ID on musicbrainz.org
# Add New → Search: mbid:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
# Edit artist
# Library → Select Artist → Edit
# Search for correct match
# Check MusicBrainz link
# Ensure correct artist selected
```
### Tagging Issues
```bash
# Check tagging settings
# Settings → Media Management → Retagging
# Manual retag
# Select album → Retag
# Check file tags
docker exec lidarr exiftool /music/Artist/Album/track.flac
# Common issues:
# - Write tags disabled
# - File format doesn't support tags
# - Permission errors
```
### Database Corruption
```bash
# Stop Lidarr
docker stop lidarr
# Backup database
cp /opt/stacks/media-management/lidarr/config/lidarr.db /opt/backups/
# Check integrity
sqlite3 /opt/stacks/media-management/lidarr/config/lidarr.db "PRAGMA integrity_check;"
# Restore from backup if corrupted
docker start lidarr
```
## Performance Optimization
### RSS Sync Interval
**Settings → Indexers → Options:**
- RSS Sync Interval: 60 minutes
- Music releases less frequently
### Database Optimization
```bash
# Stop Lidarr
docker stop lidarr
# Vacuum database
sqlite3 /opt/stacks/media-management/lidarr/config/lidarr.db "VACUUM;"
# Clear old history
# Settings → General → History Cleanup: 30 days
docker start lidarr
```
### Scan Optimization
**Settings → Media Management:**
- Analyze audio files: No (if not needed)
- Rescan folder after refresh: Only if changed
## Security Best Practices
1. **Enable Authentication:**
- Settings → General → Security
- Authentication: Required
2. **API Key Security:**
- Keep API key secure
- Regenerate if compromised
3. **Reverse Proxy:**
- Use Traefik + Authelia
- Don't expose port 8686 publicly
4. **Regular Backups:**
- Backup `/config` directory
- Includes database and settings
## Backup Strategy
**Critical Files:**
```bash
/opt/stacks/media-management/lidarr/config/lidarr.db # Database
/opt/stacks/media-management/lidarr/config/config.xml # Settings
/opt/stacks/media-management/lidarr/config/Backup/ # Auto backups
```
**Backup Script:**
```bash
#!/bin/bash
DATE=$(date +%Y%m%d)
BACKUP_DIR=/opt/backups/lidarr
# Stop Lidarr so the SQLite database is not copied mid-write
docker stop lidarr
cp /opt/stacks/media-management/lidarr/config/lidarr.db $BACKUP_DIR/lidarr-$DATE.db
docker start lidarr
find $BACKUP_DIR -name "lidarr-*.db" -mtime +7 -delete
```
## Integration with Other Services
### Lidarr + Prowlarr
- Centralized indexer management
- Auto-sync music indexers
### Lidarr + qBittorrent (via Gluetun)
- Download music via VPN
- Category-based organization
### Lidarr + Plex/Jellyfin
- Auto-update music library
- Metadata sync
- Album artwork
### Lidarr + Last.fm
- Scrobbling integration
- Discover new artists
- Import listening history
### Lidarr + Beets
- Advanced tagging
- Music organization
- Duplicate detection
## Summary
Lidarr is the music automation tool offering:
- Automatic album downloads
- Artist and discography tracking
- Quality management (MP3, FLAC)
- Metadata and tagging
- MusicBrainz integration
- Free and open-source
**Perfect for:**
- Music collectors
- Audiophiles (FLAC support)
- Complete discography seekers
- Automated music management
- Plex/Jellyfin music users
**Key Points:**
- Monitor favorite artists
- Quality profiles for lossy/lossless
- Automatic metadata tagging
- MusicBrainz for accuracy
- Separate instances for different qualities
- Regular backups recommended
**Remember:**
- Use MusicBrainz ID for accurate matching
- FLAC for archival, MP3 for portable
- Retagging updates file metadata
- Monitor "All Albums" vs "Future Only"
- Hardlinks save disk space
- Keep API key secure
Lidarr completes your media automation stack with music management!

# Loki - Log Aggregation
## Table of Contents
- [Overview](#overview)
- [What is Loki?](#what-is-loki)
- [Why Use Loki?](#why-use-loki)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
## Overview
**Category:** Log Aggregation
**Docker Image:** [grafana/loki](https://hub.docker.com/r/grafana/loki)
**Default Stack:** `monitoring.yml`
**Web UI:** Accessed via Grafana
**Query Language:** LogQL
**Ports:** 3100
## What is Loki?
Loki is a log aggregation system inspired by Prometheus, but for logs. Unlike Elasticsearch, it doesn't index log contents; it indexes only metadata labels, which makes it far more storage-efficient. It's designed to work seamlessly with Grafana for log visualization.
### Key Features
- **Label-Based Indexing:** Efficient storage
- **Grafana Integration:** Native support
- **LogQL:** Prometheus-like queries
- **Multi-Tenancy:** Isolated logs
- **Compression:** Efficient storage
- **Low Resource:** Minimal overhead
- **Promtail Agent:** Log shipper
- **Free & Open Source:** CNCF project
## Why Use Loki?
1. **Efficient:** Indexes labels, not content
2. **Grafana Native:** Seamless integration
3. **Cheap:** Low storage costs
4. **Simple:** Easy to operate
5. **Prometheus-Like:** Familiar for Prometheus users
6. **Low Resource:** Lightweight
7. **LogQL:** Powerful queries
## Configuration in AI-Homelab
```
/opt/stacks/monitoring/loki/
├── loki-config.yml   # Loki configuration
└── data/             # Log storage
```
## Official Resources
- **Website:** https://grafana.com/oss/loki
- **Documentation:** https://grafana.com/docs/loki/latest
- **LogQL:** https://grafana.com/docs/loki/latest/logql
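LogQL is referenced throughout this page, so a few illustrative queries help. These run from Grafana's Explore view with Loki as the data source; the `container` label is an assumption that depends on how Promtail is configured:

```logql
{container="jellyseerr"}                  # all logs from one container
{container="jellyseerr"} |= "error"       # only lines containing "error"
rate({container=~"sonarr|radarr"}[5m])    # log lines per second over 5m
```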
## Docker Configuration
```yaml
loki:
image: grafana/loki:latest
container_name: loki
restart: unless-stopped
networks:
- traefik-network
ports:
- "3100:3100"
command: -config.file=/etc/loki/loki-config.yml
volumes:
- /opt/stacks/monitoring/loki/loki-config.yml:/etc/loki/loki-config.yml
- /opt/stacks/monitoring/loki/data:/loki
```
### loki-config.yml
```yaml
auth_enabled: false
server:
http_listen_port: 3100
ingester:
lifecycler:
ring:
kvstore:
store: inmemory
replication_factor: 1
chunk_idle_period: 5m
chunk_retain_period: 30s
schema_config:
configs:
- from: 2020-05-15
store: boltdb
object_store: filesystem
schema: v11
index:
prefix: index_
period: 24h
storage_config:
boltdb:
directory: /loki/index
filesystem:
directory: /loki/chunks
limits_config:
enforce_metric_name: false
reject_old_samples: true
reject_old_samples_max_age: 168h
chunk_store_config:
max_look_back_period: 0s
table_manager:
retention_deletes_enabled: true
retention_period: 168h
```
## Summary
Loki provides log aggregation offering:
- Efficient label-based indexing
- Grafana integration
- LogQL query language
- Low storage costs
- Minimal resource usage
- Promtail log shipping
- Free and open-source
**Perfect for:**
- Docker container logs
- Application logs
- System logs
- Centralized logging
- Grafana users
- Prometheus users
**Key Points:**
- Indexes labels, not content
- Much cheaper than Elasticsearch
- Works with Promtail
- Query in Grafana
- LogQL similar to PromQL
- Low resource usage
- 7-day retention typical
**Remember:**
- Use Promtail to send logs
- Add as Grafana data source
- LogQL for queries
- Configure retention
- Monitor disk space
- Label logs appropriately
Loki aggregates logs efficiently!

# MariaDB - Database Services
## Table of Contents
- [Overview](#overview)
- [What is MariaDB?](#what-is-mariadb)
- [Why Use MariaDB?](#why-use-mariadb)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Database Instances in AI-Homelab](#database-instances-in-ai-homelab)
- [Management](#management)
## Overview
**Category:** Relational Database
**Docker Image:** [mariadb](https://hub.docker.com/_/mariadb)
**Default Stack:** `productivity.yml` (multiple instances)
**Ports:** 3306 (internal, not exposed)
## What is MariaDB?
MariaDB is a drop-in replacement for MySQL, created by MySQL's original developers after Oracle acquired MySQL. It's a fast, reliable relational database used by millions of applications. In AI-Homelab, separate MariaDB instances serve different applications.
### Key Features
- **MySQL Compatible:** Drop-in replacement
- **Fast:** High performance
- **Reliable:** ACID compliant
- **Standard SQL:** Industry standard
- **Replication:** Master-slave support
- **Hot Backups:** Online backups
- **Storage Engines:** Multiple engines
- **Free & Open Source:** GPL license
## Why Use MariaDB?
1. **MySQL Alternative:** Better governance than Oracle MySQL
2. **Performance:** Often faster than MySQL
3. **Compatible:** Works with MySQL applications
4. **Open Source:** Truly community-driven
5. **Stable:** Production-ready
6. **Standard:** SQL standard compliance
7. **Support:** Wide adoption
## Configuration in AI-Homelab
### Database Instances
AI-Homelab uses **separate MariaDB containers** for each application to ensure:
- **Isolation:** App failures don't affect others
- **Backup Independence:** Backup apps separately
- **Resource Control:** Per-app resource limits
- **Version Control:** Different versions if needed
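One payoff of per-app containers is that per-app backups become one-liners. A sketch for the Nextcloud instance defined later on this page (the password variable comes from `.env`; `--single-transaction` takes a consistent snapshot of InnoDB tables without locking):

```shell
# Dump a single application's database while it keeps running.
DATE=$(date +%Y%m%d)
docker exec nextcloud-db mysqldump --single-transaction \
  -u nextcloud -p"$NEXTCLOUD_DB_PASSWORD" nextcloud \
  > /opt/backups/nextcloud-db-$DATE.sql
```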
## Official Resources
- **Website:** https://mariadb.org
- **Documentation:** https://mariadb.com/kb/en/documentation
- **Docker Hub:** https://hub.docker.com/_/mariadb
## Database Instances in AI-Homelab
### 1. Nextcloud Database (nextcloud-db)
```yaml
nextcloud-db:
image: mariadb:latest
container_name: nextcloud-db
restart: unless-stopped
networks:
- traefik-network
environment:
- MYSQL_ROOT_PASSWORD=${NEXTCLOUD_DB_ROOT_PASSWORD}
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
- MYSQL_PASSWORD=${NEXTCLOUD_DB_PASSWORD}
volumes:
- /opt/stacks/productivity/nextcloud-db/data:/var/lib/mysql
command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
```
**Purpose:** Nextcloud file storage metadata
**Location:** `/opt/stacks/productivity/nextcloud-db/data`
**Special:** Requires specific transaction isolation
### 2. WordPress Database (wordpress-db)
```yaml
wordpress-db:
image: mariadb:latest
container_name: wordpress-db
restart: unless-stopped
networks:
- traefik-network
environment:
- MYSQL_ROOT_PASSWORD=${WP_DB_ROOT_PASSWORD}
- MYSQL_DATABASE=wordpress
- MYSQL_USER=wordpress
- MYSQL_PASSWORD=${WP_DB_PASSWORD}
volumes:
- /opt/stacks/productivity/wordpress-db/data:/var/lib/mysql
```
**Purpose:** WordPress content and configuration
**Location:** `/opt/stacks/productivity/wordpress-db/data`
### 3. BookStack Database (bookstack-db)
```yaml
bookstack-db:
image: mariadb:latest
container_name: bookstack-db
restart: unless-stopped
networks:
- traefik-network
environment:
- MYSQL_ROOT_PASSWORD=${BOOKSTACK_DB_ROOT_PASSWORD}
- MYSQL_DATABASE=bookstack
- MYSQL_USER=bookstack
- MYSQL_PASSWORD=${BOOKSTACK_DB_PASSWORD}
volumes:
- /opt/stacks/productivity/bookstack-db/data:/var/lib/mysql
```
**Purpose:** BookStack knowledge base content
**Location:** `/opt/stacks/productivity/bookstack-db/data`
### 4. MediaWiki Database (mediawiki-db)
```yaml
mediawiki-db:
image: mariadb:latest
container_name: mediawiki-db
restart: unless-stopped
networks:
- traefik-network
environment:
- MYSQL_ROOT_PASSWORD=${MEDIAWIKI_DB_ROOT_PASSWORD}
- MYSQL_DATABASE=mediawiki
- MYSQL_USER=mediawiki
- MYSQL_PASSWORD=${MEDIAWIKI_DB_PASSWORD}
volumes:
- /opt/stacks/productivity/mediawiki-db/data:/var/lib/mysql
```
**Purpose:** MediaWiki wiki content
**Location:** `/opt/stacks/productivity/mediawiki-db/data`
## Management
### Access Database
```bash
# Connect to database
docker exec -it nextcloud-db mysql -u nextcloud -p
# Or as root
docker exec -it nextcloud-db mysql -u root -p
```
### Backup Database
```bash
# Backup single database
docker exec nextcloud-db mysqldump -u root -p${ROOT_PASSWORD} nextcloud > nextcloud-backup.sql
# Backup all databases
docker exec nextcloud-db mysqldump -u root -p${ROOT_PASSWORD} --all-databases > all-dbs-backup.sql
# Restore database
docker exec -i nextcloud-db mysql -u root -p${ROOT_PASSWORD} nextcloud < nextcloud-backup.sql
```
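Dumps can be scheduled from the host with a cron entry. A sketch assuming a hypothetical `/opt/backups` target directory (the `MYSQL_ROOT_PASSWORD` variable is already set inside the container by the mariadb image):

```cron
# Illustrative /etc/cron.d entry: nightly dump of the Nextcloud DB at 03:00
# Note: % must be escaped as \% in crontab lines.
0 3 * * * root docker exec nextcloud-db sh -c 'exec mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" nextcloud' > /opt/backups/nextcloud-$(date +\%F).sql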
### Check Database Size
```bash
# Check size
docker exec -it nextcloud-db mysql -u root -p -e "
SELECT
table_schema AS 'Database',
ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS 'Size (MB)'
FROM information_schema.TABLES
GROUP BY table_schema;"
```
### Optimize Database
```bash
# Optimize all tables
docker exec nextcloud-db mysqlcheck -u root -p --optimize --all-databases
```
## Summary
MariaDB provides reliable database services for:
- Nextcloud (file metadata)
- WordPress (content management)
- BookStack (knowledge base)
- MediaWiki (wiki content)
- Future applications
**Key Points:**
- Separate container per application
- Isolated for reliability
- Standard MySQL compatibility
- ACID compliance
- Easy backup/restore
- Low resource usage
- Production-ready
**Remember:**
- Use strong passwords
- Regular backups critical
- Monitor disk space
- Optimize periodically
- Update carefully
- Test backups work
- Separate containers = better isolation
MariaDB powers your data-driven applications!

# Mealie - Recipe Manager
## Table of Contents
- [Overview](#overview)
- [What is Mealie?](#what-is-mealie)
- [Why Use Mealie?](#why-use-mealie)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
## Overview
**Category:** Recipe Management
**Docker Image:** [hkotel/mealie](https://hub.docker.com/r/hkotel/mealie)
**Default Stack:** `productivity.yml`
**Web UI:** `https://mealie.${DOMAIN}` or `http://SERVER_IP:9925`
**Ports:** 9925
## What is Mealie?
Mealie is a self-hosted recipe manager and meal planner. It imports recipes from websites, manages your recipe collection, generates shopping lists, and plans meals. Beautiful UI with family sharing and mobile-friendly design.
### Key Features
- **Recipe Import:** From any website URL
- **Meal Planning:** Weekly meal calendar
- **Shopping Lists:** Auto-generated from recipes
- **Categories & Tags:** Organize recipes
- **Search:** Full-text recipe search
- **Family Sharing:** Multiple users
- **OCR:** Scan recipe cards
- **API:** Integrations possible
- **Recipe Scaling:** Adjust servings
- **Mobile Friendly:** Responsive design
## Why Use Mealie?
1. **Centralized Recipes:** All recipes in one place
2. **Import from Anywhere:** URL recipe scraping
3. **Meal Planning:** Plan weekly meals
4. **Shopping Lists:** Auto-generated
5. **Family Sharing:** Everyone can access
6. **No Ads:** Unlike recipe websites
7. **Privacy:** Your data only
8. **Free & Open Source:** No cost
## Configuration in AI-Homelab
```
/opt/stacks/productivity/mealie/data/ # Recipes, images, DB
```
## Official Resources
- **Website:** https://hay-kot.github.io/mealie
- **GitHub:** https://github.com/hay-kot/mealie
- **Documentation:** https://hay-kot.github.io/mealie/documentation/getting-started
## Docker Configuration
```yaml
mealie:
image: hkotel/mealie:latest
container_name: mealie
restart: unless-stopped
networks:
- traefik-network
ports:
- "9925:9000"
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
- MAX_WORKERS=1
- WEB_CONCURRENCY=1
- BASE_URL=https://mealie.${DOMAIN}
volumes:
- /opt/stacks/productivity/mealie/data:/app/data
labels:
- "traefik.enable=true"
- "traefik.http.routers.mealie.rule=Host(`mealie.${DOMAIN}`)"
- "traefik.http.routers.mealie.entrypoints=websecure"
- "traefik.http.routers.mealie.tls.certresolver=letsencrypt"
- "traefik.http.services.mealie.loadbalancer.server.port=9000"
```
## Setup
1. **Start Container:**
```bash
docker compose up -d mealie
```
2. **Access UI:** `http://SERVER_IP:9925`
3. **Initial Login:**
- Email: `changeme@email.com`
- Password: `MyPassword`
- **Change immediately!**
4. **User Settings:**
- Change email and password
- Set preferences
5. **Import Recipe:**
- "+" button → Import Recipe
- Paste website URL
- Mealie extracts recipe automatically
- Edit and save
6. **Meal Planning:**
- Calendar view
- Drag recipes to days
- Generate shopping list
## Summary
Mealie is your digital recipe box offering:
- Recipe import from URLs
- Meal planning calendar
- Auto shopping lists
- Family sharing
- Recipe organization
- Mobile-friendly interface
- Free and open-source
**Perfect for:**
- Recipe collectors
- Meal planners
- Families
- Cooking enthusiasts
- Grocery list automation
- Recipe organization
**Key Points:**
- Import from any recipe website
- Meal calendar planning
- Shopping list generation
- Multiple users supported
- Change default credentials!
- Mobile-responsive design
- Recipe scaling feature
**Remember:**
- Change default login immediately
- Organize with categories/tags
- Use meal planner for weekly plans
- Generate shopping lists from meals
- Share with family members
- Import existing recipes from URLs
Mealie simplifies meal planning and recipe management!

# MediaWiki - Wiki Platform
## Table of Contents
- [Overview](#overview)
- [What is MediaWiki?](#what-is-mediawiki)
- [Why Use MediaWiki?](#why-use-mediawiki)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
## Overview
**Category:** Wiki Platform
**Docker Image:** [mediawiki](https://hub.docker.com/_/mediawiki)
**Default Stack:** `productivity.yml`
**Web UI:** `http://SERVER_IP:8084`
**Database:** MariaDB (mediawiki-db container)
**Ports:** 8084
## What is MediaWiki?
MediaWiki is the software that powers Wikipedia. It's a powerful, feature-rich wiki platform designed for large-scale collaborative documentation. If you want Wikipedia-style wikis with advanced features, templates, and extensions, MediaWiki is the choice.
### Key Features
- **Powers Wikipedia:** Battle-tested at scale
- **Advanced Markup:** Wikitext syntax
- **Templates:** Reusable content blocks
- **Categories:** Organize pages
- **Version History:** Complete revision tracking
- **Extensions:** 2000+ extensions
- **Multi-Language:** Full internationalization
- **Media Management:** Images, files
- **User Management:** Roles and rights
- **API:** Comprehensive API
- **Free & Open Source:** GPL license
## Why Use MediaWiki?
1. **Feature-Rich:** Most powerful wiki software
2. **Proven:** Runs Wikipedia
3. **Extensible:** 2000+ extensions
4. **Templates:** Advanced content reuse
5. **Categories:** Powerful organization
6. **API:** Extensive automation
7. **Community:** Large user base
8. **Professional:** Enterprise-grade
## Configuration in AI-Homelab
```
/opt/stacks/productivity/mediawiki/html/ # MediaWiki installation
/opt/stacks/productivity/mediawiki/images/ # Uploaded files
/opt/stacks/productivity/mediawiki-db/data/ # MariaDB database
```
## Official Resources
- **Website:** https://www.mediawiki.org
- **Documentation:** https://www.mediawiki.org/wiki/Documentation
- **Extensions:** https://www.mediawiki.org/wiki/Category:Extensions
- **Manual:** https://www.mediawiki.org/wiki/Manual:Contents
## Docker Configuration
```yaml
mediawiki-db:
image: mariadb:latest
container_name: mediawiki-db
restart: unless-stopped
networks:
- traefik-network
environment:
- MYSQL_ROOT_PASSWORD=${MEDIAWIKI_DB_ROOT_PASSWORD}
- MYSQL_DATABASE=mediawiki
- MYSQL_USER=mediawiki
- MYSQL_PASSWORD=${MEDIAWIKI_DB_PASSWORD}
volumes:
- /opt/stacks/productivity/mediawiki-db/data:/var/lib/mysql
mediawiki:
image: mediawiki:latest
container_name: mediawiki
restart: unless-stopped
networks:
- traefik-network
ports:
- "8084:80"
environment:
- MEDIAWIKI_DB_HOST=mediawiki-db
- MEDIAWIKI_DB_NAME=mediawiki
- MEDIAWIKI_DB_USER=mediawiki
- MEDIAWIKI_DB_PASSWORD=${MEDIAWIKI_DB_PASSWORD}
volumes:
- /opt/stacks/productivity/mediawiki/html:/var/www/html
- /opt/stacks/productivity/mediawiki/images:/var/www/html/images
depends_on:
- mediawiki-db
labels:
- "traefik.enable=true"
- "traefik.http.routers.mediawiki.rule=Host(`mediawiki.${DOMAIN}`)"
```
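The installation wizard writes the connection details into `LocalSettings.php`; the database section should mirror the compose environment above. A representative excerpt (values are illustrative):

```php
<?php
// LocalSettings.php excerpt -- written by the install wizard, placed in /var/www/html/
$wgDBtype     = "mysql";
$wgDBserver   = "mediawiki-db";  // the companion container's name resolves on traefik-network
$wgDBname     = "mediawiki";
$wgDBuser     = "mediawiki";
$wgDBpassword = "changeme";      // use the MEDIAWIKI_DB_PASSWORD value from .env
```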
## Summary
MediaWiki is the enterprise wiki platform offering:
- Wikipedia's proven software
- Advanced wikitext markup
- Template system
- 2000+ extensions
- Categories and organization
- Complete revision history
- Multi-language support
- Free and open-source
**Perfect for:**
- Large wikis
- Complex documentation
- Wikipedia-style sites
- Corporate knowledge bases
- Community documentation
- Template-heavy content
- Multi-language wikis
**Key Points:**
- Requires MariaDB database
- Wikipedia's software
- Steeper learning curve
- Very powerful features
- Template system
- Extension ecosystem
- Wikitext syntax
- Enterprise-grade
**Remember:**
- Complete the installation wizard
- Download LocalSettings.php after setup
- Place in /var/www/html/
- Wikitext syntax to learn
- Extensions add features
- Templates powerful but complex
- Regular backups important
MediaWiki brings Wikipedia's power to your wiki!

# Mosquitto - MQTT Broker
## Table of Contents
- [Overview](#overview)
- [What is Mosquitto?](#what-is-mosquitto)
- [Why Use Mosquitto?](#why-use-mosquitto)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
## Overview
**Category:** Message Broker
**Docker Image:** [eclipse-mosquitto](https://hub.docker.com/_/eclipse-mosquitto)
**Default Stack:** `homeassistant.yml`
**Ports:** 1883 (MQTT), 9001 (WebSocket)
## What is Mosquitto?
Mosquitto is an MQTT broker - a message bus for IoT devices. MQTT (Message Queuing Telemetry Transport) is a lightweight publish/subscribe protocol perfect for smart home devices. Mosquitto acts as the central hub where devices publish messages (like sensor readings) and other devices/services subscribe to receive them.
### Key Features
- **Lightweight:** Minimal resource usage
- **Fast:** Low latency messaging
- **Reliable:** Quality of Service levels
- **Secure:** Authentication and TLS support
- **Standard:** Industry-standard MQTT 3.1.1 and 5.0
- **WebSocket Support:** Browser connections
## Why Use Mosquitto?
1. **IoT Standard:** Industry-standard protocol
2. **Lightweight:** Efficient for battery devices
3. **Fast:** Real-time messaging
4. **Central Hub:** Connect all IoT devices
5. **Home Assistant Integration:** Native support
6. **Zigbee2MQTT:** Required for Zigbee devices
7. **Tasmota:** Tasmota devices use MQTT
## Configuration in AI-Homelab
```
/opt/stacks/homeassistant/mosquitto/
config/
mosquitto.conf # Main config
password.txt # Hashed passwords
data/ # Persistence
log/ # Logs
```
### mosquitto.conf
```conf
persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log
listener 1883
allow_anonymous false
password_file /mosquitto/config/password.txt
listener 9001
protocol websockets
```
## Official Resources
- **Website:** https://mosquitto.org
- **Documentation:** https://mosquitto.org/documentation
## Docker Configuration
```yaml
mosquitto:
image: eclipse-mosquitto:latest
container_name: mosquitto
restart: unless-stopped
networks:
- traefik-network
ports:
- "1883:1883"
- "9001:9001"
volumes:
- /opt/stacks/homeassistant/mosquitto/config:/mosquitto/config
- /opt/stacks/homeassistant/mosquitto/data:/mosquitto/data
- /opt/stacks/homeassistant/mosquitto/log:/mosquitto/log
```
## Setup
**Create User:**
```bash
docker exec -it mosquitto mosquitto_passwd -c /mosquitto/config/password.txt homeassistant
# Add more users
docker exec -it mosquitto mosquitto_passwd /mosquitto/config/password.txt zigbee2mqtt
# Restart
docker restart mosquitto
```
**Test Connection:**
```bash
# Subscribe (terminal 1)
docker exec -it mosquitto mosquitto_sub -h localhost -t test/topic -u homeassistant -P yourpassword
# Publish (terminal 2)
docker exec -it mosquitto mosquitto_pub -h localhost -t test/topic -m "Hello MQTT" -u homeassistant -P yourpassword
```
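Subscriptions use MQTT topic filters, where `+` matches exactly one topic level and `#` matches all remaining levels. The matching rule can be sketched in Python (a minimal illustration, not part of Mosquitto itself):

```python
def topic_matches(pattern, topic):
    """Return True if an MQTT topic filter matches a concrete topic.

    Standard MQTT wildcard rules: '+' matches exactly one topic level,
    '#' matches any number of remaining levels (including zero).
    """
    p_levels = pattern.split("/")
    t_levels = topic.split("/")
    for i, p in enumerate(p_levels):
        if p == "#":
            return True  # multi-level wildcard matches the rest
        if i >= len(t_levels):
            return False  # pattern has more levels than the topic
        if p != "+" and p != t_levels[i]:
            return False  # literal level must match exactly
    return len(p_levels) == len(t_levels)
```

For example, `zigbee2mqtt/+` matches `zigbee2mqtt/lamp` but not `zigbee2mqtt/lamp/set`, while `zigbee2mqtt/#` matches both.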
## Summary
Mosquitto is the MQTT message broker providing:
- Central IoT message hub
- Publish/subscribe protocol
- Lightweight and fast
- Required for Zigbee2MQTT
- Home Assistant integration
- Secure authentication
- Free and open-source
**Perfect for:**
- Smart home setups
- Zigbee devices (via Zigbee2MQTT)
- Tasmota devices
- ESP devices
- IoT messaging
- Real-time communication
**Key Points:**
- Create users with mosquitto_passwd
- Used by Zigbee2MQTT
- Home Assistant connects to it
- Port 1883 for MQTT
- Port 9001 for WebSockets
- Authentication required
Mosquitto is the messaging backbone of your smart home!

# MotionEye - Camera Surveillance
## Table of Contents
- [Overview](#overview)
- [What is MotionEye?](#what-is-motioneye)
- [Why Use MotionEye?](#why-use-motioneye)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
## Overview
**Category:** Video Surveillance
**Docker Image:** [ccrisan/motioneye](https://hub.docker.com/r/ccrisan/motioneye)
**Default Stack:** `homeassistant.yml`
**Web UI:** `http://SERVER_IP:8765`
**Default Login:** admin (no password)
**Ports:** 8765 (web UI), 8081-8082 (camera streams)
## What is MotionEye?
MotionEye is a web-based frontend for the Motion video surveillance software. It provides a simple interface to manage IP cameras, USB webcams, and Raspberry Pi cameras. Features include motion detection, recording, streaming, and notifications.
### Key Features
- **Multiple Cameras:** Support many cameras
- **Motion Detection:** Alert on movement
- **Recording:** Continuous or motion-triggered
- **Streaming:** Live MJPEG/RTSP streams
- **Cloud Upload:** Google Drive, Dropbox
- **Notifications:** Email, webhooks
- **Mobile Friendly:** Responsive web UI
- **Home Assistant Integration:** Camera entities
## Why Use MotionEye?
1. **Simple Setup:** Easy camera addition
2. **Motion Detection:** Built-in alerts
3. **Free:** No subscription fees
4. **Local Storage:** Your NAS/server
5. **Multiple Cameras:** Centralized management
6. **Home Assistant:** Native integration
7. **Lightweight:** Low resource usage
## Configuration in AI-Homelab
```
/opt/stacks/homeassistant/motioneye/
config/ # Configuration
media/ # Recordings
```
## Official Resources
- **GitHub:** https://github.com/ccrisan/motioneye
- **Wiki:** https://github.com/ccrisan/motioneye/wiki
## Docker Configuration
```yaml
motioneye:
image: ccrisan/motioneye:master-amd64
container_name: motioneye
restart: unless-stopped
networks:
- traefik-network
ports:
- "8765:8765"
- "8081:8081" # Camera stream ports
- "8082:8082"
environment:
- TZ=America/New_York
volumes:
- /opt/stacks/homeassistant/motioneye/config:/etc/motioneye
- /opt/stacks/homeassistant/motioneye/media:/var/lib/motioneye
labels:
- "traefik.enable=true"
- "traefik.http.routers.motioneye.rule=Host(`motioneye.${DOMAIN}`)"
```
## Setup
1. **Start Container:**
```bash
docker compose up -d motioneye
```
2. **Access UI:** `http://SERVER_IP:8765`
- Username: `admin`
- Password: (blank)
- **Set password immediately!**
3. **Add Camera:**
- Click "+" or hamburger menu → Add Camera
- Camera Type: Network Camera, Simple MJPEG, RTSP, etc.
- URL: `rtsp://username:password@camera_ip:554/stream`
- Test and save
4. **Configure Motion Detection:**
- Select camera
- Motion Detection → Enable
- Frame Change Threshold: 1-5% typical
- Motion Notifications → Email or webhook
5. **Recording:**
- Recording Mode: Continuous or Motion Triggered
- Storage location: /var/lib/motioneye
- Retention: Automatic cleanup
## Summary
MotionEye provides free, local video surveillance with motion detection, recording, and Home Assistant integration for IP cameras and webcams.
**Perfect for:**
- Home security cameras
- Motion-triggered recording
- Multiple camera management
- Local recording
- Budget surveillance
**Key Points:**
- Free and open-source
- Motion detection built-in
- Supports many camera types
- Local storage
- Home Assistant integration
- Change default password!
- RTSP/MJPEG streams
**Remember:**
- Set admin password immediately
- Configure motion detection sensitivity
- Set recording retention
- Test camera streams
- Use RTSP for best quality
MotionEye turns any camera into a smart surveillance system!

# Mylar3 - Comic Book Management
## Table of Contents
- [Overview](#overview)
- [What is Mylar3?](#what-is-mylar3)
- [Why Use Mylar3?](#why-use-mylar3)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
- [Initial Setup](#initial-setup)
## Overview
**Category:** Comic Book Management
**Docker Image:** [linuxserver/mylar3](https://hub.docker.com/r/linuxserver/mylar3)
**Default Stack:** `media-management.yml`
**Web UI:** `http://SERVER_IP:8090`
**Ports:** 8090
## What is Mylar3?
Mylar3 is an automated comic book download manager. It's like Sonarr/Radarr but specifically designed for comic books. It tracks your favorite series, automatically downloads new issues when released, and organizes your comic collection with proper metadata and naming.
### Key Features
- **Series Tracking:** Monitor ongoing comic series
- **Automatic Downloads:** New issues downloaded automatically
- **Comic Vine Integration:** Accurate metadata
- **Weekly Pull Lists:** See this week's releases
- **Story Arc Support:** Track multi-series arcs
- **Quality Management:** Preferred file sizes and formats
- **File Organization:** Consistent naming and structure
- **Failed Download Handling:** Retry logic
- **Multiple Providers:** Torrent and Usenet
- **ComicRack/Ubooquity Integration:** Reader compatibility
## Why Use Mylar3?
1. **Never Miss Issues:** Auto-download weekly releases
2. **Series Management:** Track all your series
3. **Metadata Automation:** Comic Vine integration
4. **Organization:** Consistent file structure
5. **Weekly Pull Lists:** See what's new
6. **Story Arcs:** Track crossover events
7. **Quality Control:** Size and format preferences
8. **Free & Open Source:** No cost
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/media-management/mylar3/config/   # Configuration
/mnt/media/comics/                            # Comic library
/mnt/downloads/                               # Downloads

# Comic library structure:
/mnt/media/comics/
  Series Name (Year)/
    Series Name #001 (Year).cbz
    Series Name #002 (Year).cbz
```
### Environment Variables
```bash
PUID=1000
PGID=1000
TZ=America/New_York
```
## Official Resources
- **GitHub:** https://github.com/mylar3/mylar3
- **Wiki:** https://github.com/mylar3/mylar3/wiki
- **Discord:** https://discord.gg/6UG94R7E8T
## Docker Configuration
```yaml
mylar3:
image: linuxserver/mylar3:latest
container_name: mylar3
restart: unless-stopped
networks:
- traefik-network
ports:
- "8090:8090"
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
volumes:
- /opt/stacks/media-management/mylar3/config:/config
- /mnt/media/comics:/comics
- /mnt/downloads:/downloads
labels:
- "traefik.enable=true"
- "traefik.http.routers.mylar3.rule=Host(`mylar3.${DOMAIN}`)"
- "traefik.http.routers.mylar3.entrypoints=websecure"
- "traefik.http.routers.mylar3.tls.certresolver=letsencrypt"
- "traefik.http.routers.mylar3.middlewares=authelia@docker"
- "traefik.http.services.mylar3.loadbalancer.server.port=8090"
```
## Initial Setup
1. **Start Container:**
```bash
docker compose up -d mylar3
```
2. **Access UI:** `http://SERVER_IP:8090`
3. **Config Wizard:**
- Comic Location: `/comics`
- Download client: qBittorrent
- Comic Vine API: Get from comicvine.gamespot.com/api
- Search providers: Add torrent indexers
4. **Download Client:**
- Settings → Download Settings → qBittorrent
- Host: `gluetun`
- Port: `8080`
- Username/Password
- Category: `comics-mylar`
5. **Comic Vine API:**
- Register at comicvine.gamespot.com
- Get API key
- Settings → Comic Vine API key
6. **Add Series:**
- Search for comic series
- Select correct series
- Set monitoring (all issues or future only)
- Mylar searches automatically
### Weekly Pull List
**Pull List Tab:**
- Shows this week's comic releases
- For your monitored series
- One-click download
**Pull List Sources:**
- Comic Vine
- Marvel
- DC
- Image Comics
## Summary
Mylar3 is the comic book automation tool offering:
- Automatic issue downloads
- Series tracking
- Weekly pull lists
- Comic Vine metadata
- Story arc support
- Quality management
- Free and open-source
**Perfect for:**
- Comic book collectors
- Weekly release tracking
- Series completionists
- Digital comic readers
- Automated management
**Key Points:**
- Comic Vine API required
- Monitor ongoing series
- Weekly pull list feature
- Story arc tracking
- CBZ/CBR format support
- Integrates with comic readers
**File Formats:**
- CBZ (Comic Book ZIP)
- CBR (Comic Book RAR)
- Both supported by readers
**Remember:**
- Get Comic Vine API key
- Configure download client
- Add search providers
- Monitor series, not individual issues
- Check weekly pull list
- Story arcs tracked separately
Mylar3 automates your entire comic book collection!

# Nextcloud - Private Cloud Storage
## Table of Contents
- [Overview](#overview)
- [What is Nextcloud?](#what-is-nextcloud)
- [Why Use Nextcloud?](#why-use-nextcloud)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
- [Setup](#setup)
- [Apps](#apps)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** File Storage & Collaboration
**Docker Image:** [nextcloud](https://hub.docker.com/_/nextcloud)
**Default Stack:** `productivity.yml`
**Web UI:** `https://nextcloud.${DOMAIN}` or `http://SERVER_IP:8081`
**Database:** MariaDB (separate container)
**Ports:** 8081
## What is Nextcloud?
Nextcloud is a self-hosted alternative to Google Drive, Dropbox, and Microsoft 365. It provides file sync/share, calendar, contacts, office documents, video calls, and 200+ apps - all hosted on your own server with complete privacy.
### Key Features
- **File Sync & Share:** Like Dropbox
- **Calendar & Contacts:** Sync across devices
- **Office Suite:** Collaborative document editing
- **Photos:** Google Photos alternative
- **Video Calls:** Built-in Talk
- **Notes:** Markdown notes
- **Tasks:** Todo lists
- **200+ Apps:** Extensible platform
- **Mobile Apps:** iOS and Android
- **Desktop Sync:** Windows, Mac, Linux
- **E2E Encryption:** End-to-end encryption
- **Free & Open Source:** No subscriptions
## Why Use Nextcloud?
1. **Privacy:** Your data, your server
2. **No Limits:** Unlimited storage (your disk)
3. **No Subscriptions:** $0/month forever
4. **All-in-One:** Files, calendar, contacts, office
5. **Sync Everything:** Desktop and mobile apps
6. **Extensible:** Hundreds of apps
7. **Collaboration:** Share with family/team
8. **Standards:** CalDAV, CardDAV, WebDAV
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/productivity/nextcloud/
html/ # Nextcloud installation
data/ # User files
config/ # Configuration
apps/ # Installed apps
/opt/stacks/productivity/nextcloud-db/
data/ # MariaDB database
```
### Environment Variables
```bash
# Nextcloud
MYSQL_HOST=nextcloud-db
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud
MYSQL_PASSWORD=secure_password
NEXTCLOUD_TRUSTED_DOMAINS=nextcloud.yourdomain.com
# MariaDB
MYSQL_ROOT_PASSWORD=root_password
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud
MYSQL_PASSWORD=secure_password
```
## Official Resources
- **Website:** https://nextcloud.com
- **Documentation:** https://docs.nextcloud.com
- **Apps:** https://apps.nextcloud.com
- **Community:** https://help.nextcloud.com
## Docker Configuration
```yaml
nextcloud-db:
image: mariadb:latest
container_name: nextcloud-db
restart: unless-stopped
networks:
- traefik-network
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
- MYSQL_PASSWORD=${MYSQL_PASSWORD}
volumes:
- /opt/stacks/productivity/nextcloud-db/data:/var/lib/mysql
command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
nextcloud:
image: nextcloud:latest
container_name: nextcloud
restart: unless-stopped
networks:
- traefik-network
ports:
- "8081:80"
environment:
- MYSQL_HOST=nextcloud-db
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
- MYSQL_PASSWORD=${MYSQL_PASSWORD}
- NEXTCLOUD_TRUSTED_DOMAINS=nextcloud.${DOMAIN}
volumes:
- /opt/stacks/productivity/nextcloud/html:/var/www/html
- /opt/stacks/productivity/nextcloud/data:/var/www/html/data
- /opt/stacks/productivity/nextcloud/config:/var/www/html/config
- /opt/stacks/productivity/nextcloud/apps:/var/www/html/custom_apps
depends_on:
- nextcloud-db
labels:
- "traefik.enable=true"
- "traefik.http.routers.nextcloud.rule=Host(`nextcloud.${DOMAIN}`)"
- "traefik.http.routers.nextcloud.entrypoints=websecure"
- "traefik.http.routers.nextcloud.tls.certresolver=letsencrypt"
- "traefik.http.services.nextcloud.loadbalancer.server.port=80"
```
## Setup
1. **Start Containers:**
```bash
docker compose up -d nextcloud-db nextcloud
```
2. **Wait for DB Initialization:**
```bash
docker logs nextcloud-db -f
# Wait for "mysqld: ready for connections"
```
3. **Access UI:** `http://SERVER_IP:8081`
4. **Create Admin Account:**
- Username: admin
- Password: Strong password
- Click "Install"
5. **Initial Configuration:**
- Skip recommended apps (install later)
- Allow data folder location
6. **Fix Trusted Domains (if external access):**
```bash
docker exec -it --user www-data nextcloud php occ config:system:set trusted_domains 1 --value=nextcloud.yourdomain.com
```
## Apps
### Essential Apps
**Files:**
- **Photos:** Google Photos alternative with face recognition
- **Files Automated Tagging:** Auto-tag files
- **External Storage:** Connect other storage
**Productivity:**
- **Calendar:** CalDAV calendar sync
- **Contacts:** CardDAV contact sync
- **Tasks:** Todo list with CalDAV sync
- **Deck:** Kanban boards
- **Notes:** Markdown notes
**Office:**
- **Nextcloud Office:** Collaborative documents (based on Collabora)
- **OnlyOffice:** Alternative office suite
**Communication:**
- **Talk:** Video calls and chat
- **Mail:** Email client
**Media:**
- **Music:** Music player and library
- **News:** RSS reader
### Installing Apps
**Method 1: UI**
1. Apps menu (top right)
2. Browse or search
3. Download and enable
**Method 2: Command Line**
```bash
docker exec -it --user www-data nextcloud php occ app:install photos
docker exec -it --user www-data nextcloud php occ app:enable photos
```
## Troubleshooting
### Can't Access After Setup
```bash
# Add trusted domain
docker exec -it --user www-data nextcloud php occ config:system:set trusted_domains 1 --value=SERVER_IP
# Or edit config
docker exec -it nextcloud nano /var/www/html/config/config.php
# Add to 'trusted_domains' array
```
### Security Warnings
```bash
# Run maintenance mode
docker exec -it --user www-data nextcloud php occ maintenance:mode --on
# Clear cache
docker exec -it --user www-data nextcloud php occ maintenance:repair
# Update htaccess
docker exec -it --user www-data nextcloud php occ maintenance:update:htaccess
# Exit maintenance
docker exec -it --user www-data nextcloud php occ maintenance:mode --off
```
### Slow Performance
```bash
# Enable caching
docker exec -it --user www-data nextcloud php occ config:system:set memcache.local --value='\OC\Memcache\APCu'
# Run background jobs via cron
docker exec -it --user www-data nextcloud php occ background:cron
```
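With cron mode enabled, the host scheduler has to invoke `cron.php` periodically; an illustrative host crontab entry:

```cron
# Run Nextcloud background jobs every 5 minutes
*/5 * * * * docker exec -u www-data nextcloud php -f /var/www/html/cron.php
```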
### Missing Indices
```bash
# Add missing database indices
docker exec -it --user www-data nextcloud php occ db:add-missing-indices
# Convert to bigint (for large instances)
docker exec -it --user www-data nextcloud php occ db:convert-filecache-bigint
```
## Summary
Nextcloud is your private cloud offering:
- File sync and sharing
- Calendar and contacts sync
- Collaborative office suite
- Photo management
- Video calls
- 200+ apps
- Mobile and desktop clients
- Complete privacy
- Free and open-source
**Perfect for:**
- Replacing Google Drive/Dropbox
- Family file sharing
- Photo backup
- Calendar/contact sync
- Team collaboration
- Privacy-conscious users
**Key Points:**
- Requires MariaDB database
- 2GB RAM minimum
- Desktop sync clients available
- Mobile apps for iOS/Android
- CalDAV/CardDAV standards
- Enable caching for performance
- Regular backups important
**Remember:**
- Configure trusted domains
- Enable recommended apps
- Setup desktop sync client
- Mobile apps for phone backup
- Regular database maintenance
- Keep updated for security
Nextcloud puts you in control of your data!

# Node Exporter - System Metrics
## Table of Contents
- [Overview](#overview)
- [What is Node Exporter?](#what-is-node-exporter)
- [Why Use Node Exporter?](#why-use-node-exporter)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
## Overview
**Category:** Metrics Exporter
**Docker Image:** [prom/node-exporter](https://hub.docker.com/r/prom/node-exporter)
**Default Stack:** `monitoring.yml`
**Purpose:** Export host system metrics
**Ports:** 9100
## What is Node Exporter?
Node Exporter is a Prometheus exporter for hardware and OS metrics. It exposes CPU, memory, disk, network, and dozens of other system metrics in Prometheus format. Essential for monitoring your server health.
### Key Features
- **Hardware Metrics:** CPU, memory, disk, network
- **OS Metrics:** Load, uptime, processes
- **Filesystem:** Disk usage, I/O
- **Network:** Traffic, errors, connections
- **Temperature:** CPU/disk temps (if available)
- **Lightweight:** Minimal overhead
- **Standard:** Official Prometheus exporter
## Why Use Node Exporter?
1. **Essential:** Core system monitoring
2. **Comprehensive:** 100+ metrics
3. **Standard:** Official Prometheus exporter
4. **Lightweight:** Low resource usage
5. **Reliable:** Battle-tested
6. **Grafana Dashboards:** Many pre-made
## Configuration in AI-Homelab
Node Exporter needs no configuration files in this stack. It runs in host network mode so it can read system metrics directly from the host.
## Official Resources
- **GitHub:** https://github.com/prometheus/node_exporter
- **Metrics:** https://github.com/prometheus/node_exporter#enabled-by-default
## Docker Configuration
```yaml
node-exporter:
image: prom/node-exporter:latest
container_name: node-exporter
restart: unless-stopped
network_mode: host
pid: host
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- '--path.rootfs=/host'
- '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/host:ro,rslave
```
**Note:** Uses `network_mode: host` to access system metrics directly.
## Metrics Available
### CPU
- `node_cpu_seconds_total` - CPU time per mode
- `node_load1` - Load average (1 minute)
- `node_load5` - Load average (5 minutes)
- `node_load15` - Load average (15 minutes)
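The raw CPU counter is usually turned into a usage percentage in dashboards. The standard PromQL expression for this (as commonly used in Grafana) is:

```
# Overall CPU usage % per instance, derived from the idle-mode counter
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
```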
### Memory
- `node_memory_MemTotal_bytes` - Total memory
- `node_memory_MemFree_bytes` - Free memory
- `node_memory_MemAvailable_bytes` - Available memory
- `node_memory_Buffers_bytes` - Buffer cache
- `node_memory_Cached_bytes` - Page cache
### Disk
- `node_filesystem_size_bytes` - Filesystem size
- `node_filesystem_free_bytes` - Free space
- `node_filesystem_avail_bytes` - Available space
- `node_disk_read_bytes_total` - Bytes read
- `node_disk_written_bytes_total` - Bytes written
### Network
- `node_network_receive_bytes_total` - Bytes received
- `node_network_transmit_bytes_total` - Bytes transmitted
- `node_network_receive_errors_total` - Receive errors
- `node_network_transmit_errors_total` - Transmit errors
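These metrics only become visible once Prometheus scrapes the exporter. A minimal scrape-job sketch (the job name and target address are assumptions; substitute your server's IP):

```yaml
# Hypothetical Prometheus scrape job for Node Exporter
# (goes under scrape_configs in prometheus.yml; target IP is illustrative)
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['192.168.1.10:9100']
```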
## Summary
Node Exporter provides system metrics offering:
- CPU usage and load
- Memory usage
- Disk space and I/O
- Network traffic
- System uptime
- 100+ other metrics
- Prometheus format
- Free and open-source
**Perfect for:**
- System health monitoring
- Resource usage tracking
- Capacity planning
- Performance analysis
- Server dashboards
**Key Points:**
- Official Prometheus exporter
- Runs on port 9100
- Host network mode
- Exports 100+ metrics
- Grafana dashboard 1860
- Very lightweight
- Essential for monitoring
**Remember:**
- Add to Prometheus scrape config
- Import Grafana dashboard 1860
- Monitor disk space
- Watch CPU and memory
- Network metrics valuable
- Low overhead
Node Exporter monitors your server health!

# Node-RED - Visual Automation
## Table of Contents
- [Overview](#overview)
- [What is Node-RED?](#what-is-node-red)
- [Why Use Node-RED?](#why-use-node-red)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
- [Integration with Home Assistant](#integration-with-home-assistant)
## Overview
**Category:** Visual Automation
**Docker Image:** [nodered/node-red](https://hub.docker.com/r/nodered/node-red)
**Default Stack:** `homeassistant.yml`
**Web UI:** `http://SERVER_IP:1880`
**Ports:** 1880
## What is Node-RED?
Node-RED is a flow-based programming tool for wiring together hardware devices, APIs, and online services. It provides a browser-based visual editor where you drag and drop nodes to create automations. Extremely popular with Home Assistant users for creating complex automations that would be difficult in Home Assistant's native automation system.
### Key Features
- **Visual Programming:** Drag and drop flows
- **700+ Nodes:** Pre-built functionality
- **Home Assistant Integration:** Deep integration
- **Debugging:** Real-time message inspection
- **Functions:** JavaScript for custom logic
- **Subflows:** Reusable components
- **Context Storage:** Variables and state
- **Dashboard:** Create custom UIs
## Why Use Node-RED?
1. **Visual:** See your automation logic
2. **More Powerful:** Than HA automations
3. **Easier Complex Logic:** AND/OR conditions
4. **Debugging:** See data flow in real-time
5. **Reusable:** Subflows for common patterns
6. **Gentler Learning Curve:** Easier than writing YAML automations
7. **Community:** Tons of examples
### Node-RED vs Home Assistant Automations
**Use Node-RED when:**
- Complex conditional logic needed
- Multiple triggers with different actions
- Data transformation required
- API calls to external services
- State machines
- Advanced debugging needed
**Use HA Automations when:**
- Simple trigger → action
- Using blueprints
- Want HA native management
- Simple time-based automations
## Configuration in AI-Homelab
```
/opt/stacks/homeassistant/node-red/data/
├── flows.json      # Your flows
├── settings.js     # Node-RED config
└── package.json    # Installed nodes
```
## Official Resources
- **Website:** https://nodered.org
- **Documentation:** https://nodered.org/docs
- **Flows Library:** https://flows.nodered.org
- **Home Assistant Nodes:** https://zachowj.github.io/node-red-contrib-home-assistant-websocket
## Docker Configuration
```yaml
node-red:
image: nodered/node-red:latest
container_name: node-red
restart: unless-stopped
networks:
- traefik-network
ports:
- "1880:1880"
environment:
- TZ=America/New_York
volumes:
- /opt/stacks/homeassistant/node-red/data:/data
labels:
- "traefik.enable=true"
- "traefik.http.routers.node-red.rule=Host(`node-red.${DOMAIN}`)"
```
## Integration with Home Assistant
1. **Install HA Nodes:**
- In Node-RED: Menu → Manage Palette → Install
- Search: `node-red-contrib-home-assistant-websocket`
- Install
2. **Configure Connection:**
- Drag any Home Assistant node to canvas
- Double-click → Add new server
- Base URL: `http://home-assistant:8123` (or IP)
- Access Token: Generate in HA (Profile → Long-lived token)
3. **Available Nodes:**
- **Events: state:** Trigger on entity state change
- **Events: all:** Listen to all events
- **Call service:** Control devices
- **Current state:** Get entity state
- **Get entities:** List entities
- **Trigger: state:** More options than events
**Example Flow: Motion Light with Conditions**
```
[Motion Sensor] → [Check Time] → [Check if Dark] → [Turn On Light] → [Wait 5min] → [Turn Off Light]
```
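The "Check Time" and "Check if Dark" steps in that flow would typically live in a Function node, which runs JavaScript. A sketch of the condition logic (the lux threshold and message shape are illustrative assumptions, not part of the stack):

```javascript
// Condition logic for the motion-light flow above.
// In Node-RED, a Function node receives `msg` and returns it to pass
// the message on, or returns null to drop it.
const LUX_THRESHOLD = 50; // treat sensor readings below this as "dark"

function shouldTurnOn(lux, hour) {
  // Night hours (22:00-07:00) OR a genuinely dark room
  return hour >= 22 || hour < 7 || lux < LUX_THRESHOLD;
}

// Inside a Function node this would be:
//   if (shouldTurnOn(msg.payload.lux, new Date().getHours())) return msg;
//   return null;

console.log(shouldTurnOn(30, 15));  // dark room in the afternoon → true
console.log(shouldTurnOn(400, 23)); // bright room late at night → true
console.log(shouldTurnOn(400, 12)); // bright room at noon → false
```

Keeping the decision in one pure function makes it easy to test with a debug node before wiring it to real devices.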
## Summary
Node-RED is the visual automation tool offering:
- Drag-and-drop flow creation
- Deep Home Assistant integration
- More powerful than HA automations
- Real-time debugging
- JavaScript functions for custom logic
- Dashboard creation
- Free and open-source
**Perfect for:**
- Complex Home Assistant automations
- Visual thinkers
- API integrations
- State machines
- Advanced logic requirements
- Custom dashboards
**Key Points:**
- Visual programming interface
- Requires HA nodes package
- Long-lived access token needed
- More flexible than HA automations
- Real-time flow debugging
- Subflows for reusability
**Remember:**
- Generate HA long-lived token
- Install home-assistant-websocket nodes
- Save/deploy flows after changes
- Export flows for backup
- Use debug nodes while developing
Node-RED makes complex automations visual and manageable!

# pgAdmin - PostgreSQL Management
## Table of Contents
- [Overview](#overview)
- [What is pgAdmin?](#what-is-pgadmin)
- [Why Use pgAdmin?](#why-use-pgadmin)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
## Overview
**Category:** Database Management
**Docker Image:** [dpage/pgadmin4](https://hub.docker.com/r/dpage/pgadmin4)
**Default Stack:** `development.yml`
**Web UI:** `http://SERVER_IP:5050`
**Purpose:** PostgreSQL GUI management
**Ports:** 5050
## What is pgAdmin?
pgAdmin is the most popular open-source management tool for PostgreSQL. It provides a web-based GUI for administering PostgreSQL databases - creating databases, running queries, managing users, viewing data, and more. Essential for PostgreSQL users who prefer visual tools over command-line.
### Key Features
- **Web Interface:** Browser-based access
- **Query Tool:** SQL editor with syntax highlighting
- **Visual Database Designer:** Create tables visually
- **Data Management:** Browse and edit data
- **User Management:** Manage roles and permissions
- **Backup/Restore:** GUI backup operations
- **Server Monitoring:** Performance dashboards
- **Multi-Server:** Manage multiple PostgreSQL servers
- **Free & Open Source:** PostgreSQL license
## Why Use pgAdmin?
1. **Visual Interface:** Easier than command-line
2. **Query Editor:** Write and test SQL visually
3. **Data Browser:** Browse tables easily
4. **Backup Tools:** GUI backup/restore
5. **Multi-Server:** Manage all PostgreSQL instances
6. **Graphical Design:** Design schemas visually
7. **Industry Standard:** Most used PostgreSQL tool
## Configuration in AI-Homelab
```
/opt/stacks/development/pgadmin/data/
├── pgadmin4.db     # pgAdmin configuration
├── sessions/       # Session data
└── storage/        # Server connections
```
## Official Resources
- **Website:** https://www.pgadmin.org
- **Documentation:** https://www.pgadmin.org/docs
- **GitHub:** https://github.com/pgadmin-org/pgadmin4
## Docker Configuration
```yaml
pgadmin:
image: dpage/pgadmin4:latest
container_name: pgadmin
restart: unless-stopped
networks:
- traefik-network
ports:
- "5050:80"
environment:
- PGADMIN_DEFAULT_EMAIL=admin@homelab.local
- PGADMIN_DEFAULT_PASSWORD=${PGADMIN_PASSWORD}
- PGADMIN_CONFIG_SERVER_MODE=False
- PGADMIN_CONFIG_MASTER_PASSWORD_REQUIRED=False
volumes:
- /opt/stacks/development/pgadmin/data:/var/lib/pgadmin
labels:
- "traefik.enable=true"
- "traefik.http.routers.pgadmin.rule=Host(`pgadmin.${DOMAIN}`)"
```
## Setup
1. **Start Container:**
```bash
docker compose up -d pgadmin
```
2. **Access UI:** `http://SERVER_IP:5050`
3. **Login:**
- Email: `admin@homelab.local`
- Password: (from PGADMIN_PASSWORD env)
4. **Add Server:**
- Right-click "Servers" → Register → Server
- General tab:
- Name: `PostgreSQL Dev`
- Connection tab:
- Host: `postgres` (container name)
- Port: `5432`
- Maintenance database: `postgres`
- Username: `admin`
- Password: (from PostgreSQL)
- Save password: ✓
- Save
5. **Browse Database:**
- Expand server tree
- Servers → PostgreSQL Dev → Databases
- Right-click database → Query Tool
6. **Run Query:**
- Query Tool (toolbar icon)
- Write SQL
- Execute (F5 or play button)
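A first query to try in the Query Tool, using only built-in catalog functions (safe to run against any database):

```sql
-- List databases and their on-disk sizes, largest first
SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;
```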
## Summary
pgAdmin is your PostgreSQL GUI offering:
- Web-based interface
- SQL query editor
- Visual database design
- Data browsing and editing
- User management
- Backup/restore tools
- Multi-server support
- Free and open-source
**Perfect for:**
- PostgreSQL administration
- Visual database management
- SQL query development
- Database design
- Learning PostgreSQL
- Backup management
**Key Points:**
- Web-based (browser access)
- Manage multiple PostgreSQL servers
- Query tool with syntax highlighting
- Visual schema designer
- Default: admin@homelab.local
- Change default password!
- Save server passwords
**Remember:**
- Set strong admin password
- Add all PostgreSQL servers
- Use query tool for SQL
- Save server connections
- Regular backups via GUI
- Monitor server performance
- Explore visual tools
pgAdmin makes PostgreSQL management visual!

# Pi-hole - Network-Wide Ad Blocker
## Table of Contents
- [Overview](#overview)
- [What is Pi-hole?](#what-is-pi-hole)
- [Why Use Pi-hole?](#why-use-pi-hole)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Setup and Management](#setup-and-management)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Infrastructure / Network
**Docker Image:** [pihole/pihole](https://hub.docker.com/r/pihole/pihole)
**Default Stack:** `infrastructure.yml`
**Web UI:** `https://pihole.${DOMAIN}/admin` or `http://SERVER_IP:8181/admin`
**Authentication:** Admin password set via environment variable
**DNS Port:** 53 (TCP/UDP)
## What is Pi-hole?
Pi-hole is a network-level advertisement and internet tracker blocking application that acts as a DNS sinkhole. Originally designed for Raspberry Pi, it now runs on any Linux system including Docker. It blocks ads for all devices on your network without requiring per-device configuration.
### Key Features
- **Network-Wide Blocking:** Blocks ads on all devices (phones, tablets, smart TVs)
- **DNS Level Blocking:** Intercepts DNS queries before ads load
- **No Client Software:** Works for all devices automatically
- **Extensive Blocklists:** Millions of known ad/tracking domains blocked
- **Web Interface:** Beautiful dashboard with statistics
- **Whitelist/Blacklist:** Custom domain control
- **DHCP Server:** Optional network DHCP service
- **DNS Over HTTPS:** Encrypted DNS queries (DoH)
- **Query Logging:** See all DNS queries on network
- **Group Management:** Different blocking rules for different devices
- **Regex Filtering:** Advanced domain pattern blocking
## Why Use Pi-hole?
1. **Block Ads Everywhere:** Mobile apps, smart TVs, IoT devices
2. **Faster Browsing:** Pages load faster without ads
3. **Privacy Protection:** Block trackers and analytics
4. **Save Bandwidth:** Don't download ad content
5. **Malware Protection:** Block known malicious domains
6. **Family Safety:** Block inappropriate content
7. **Network Visibility:** See what devices are connecting where
8. **Free:** No subscription fees
9. **Customizable:** Full control over blocking
## How It Works
```
Device (Phone/Computer/TV)
        ↓
DNS Query: "ads.example.com"
        ↓
Router/DHCP → Pi-hole (DNS Server)
        ↓
Is domain in blocklist?
  ├─ YES → Return 0.0.0.0 (blocked)
  └─ NO  → Forward to upstream DNS → Return real IP
```
### Blocking Process
1. **Device makes request:** "I want to visit ads.google.com"
2. **DNS query sent:** Device asks "What's the IP for ads.google.com?"
3. **Pi-hole receives query:** Checks against blocklists
4. **If blocked:** Returns 0.0.0.0 (null IP) - ad doesn't load
5. **If allowed:** Forwards to real DNS (1.1.1.1, 8.8.8.8, etc.)
6. **Result cached:** Faster subsequent queries
### Network Setup
**Before Pi-hole:**
```
Device → Router DNS → ISP DNS → Internet
```
**After Pi-hole:**
```
Device → Router (points to Pi-hole) → Pi-hole → Upstream DNS → Internet
                                      (blocks ads)
```
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/infrastructure/pihole/
├── etc-pihole/ # Pi-hole configuration
│ ├── gravity.db # Blocklist database
│ ├── custom.list # Local DNS records
│ └── pihole-FTL.db # Query log database
└── etc-dnsmasq.d/ # DNS server config
└── custom.conf # Custom DNS rules
```
### Environment Variables
```bash
# Web Interface Password
WEBPASSWORD=your-secure-password-here
# Timezone
TZ=America/New_York
# Upstream DNS Servers
PIHOLE_DNS_=1.1.1.1;8.8.8.8 # Cloudflare and Google
# PIHOLE_DNS_=9.9.9.9;149.112.112.112 # Quad9 (privacy-focused)
# Web Interface Settings
WEBTHEME=default-dark # or default-light
VIRTUAL_HOST=pihole.yourdomain.com
WEB_PORT=80
# Optional: DHCP Server
DHCP_ACTIVE=false # Set true if using Pi-hole as DHCP
DHCP_START=192.168.1.100
DHCP_END=192.168.1.200
DHCP_ROUTER=192.168.1.1
```
## Official Resources
- **Website:** https://pi-hole.net
- **Documentation:** https://docs.pi-hole.net
- **GitHub:** https://github.com/pi-hole/pi-hole
- **Docker Hub:** https://hub.docker.com/r/pihole/pihole
- **Discourse Forum:** https://discourse.pi-hole.net
- **Reddit:** https://reddit.com/r/pihole
- **Blocklists:** https://firebog.net
## Educational Resources
### Videos
- [Pi-hole - Network-Wide Ad Blocking (NetworkChuck)](https://www.youtube.com/watch?v=KBXTnrD_Zs4)
- [Ultimate Pi-hole Setup Guide (Techno Tim)](https://www.youtube.com/watch?v=FnFtWsZ8IP0)
- [How DNS Works (Explained)](https://www.youtube.com/watch?v=72snZctFFtA)
- [Pi-hole Docker Setup (DB Tech)](https://www.youtube.com/watch?v=NRe2-vye3ik)
### Articles & Guides
- [Pi-hole Official Documentation](https://docs.pi-hole.net)
- [Docker Pi-hole Setup](https://github.com/pi-hole/docker-pi-hole/)
- [Best Blocklists (Firebog)](https://firebog.net)
- [DNS Over HTTPS Setup](https://docs.pi-hole.net/guides/dns/cloudflared/)
### Concepts to Learn
- **DNS (Domain Name System):** Translates domains to IP addresses
- **DNS Sinkhole:** Returns null IP for blocked domains
- **Upstream DNS:** Real DNS servers Pi-hole forwards to
- **Blocklists:** Lists of known ad/tracker domains
- **Regex Filtering:** Pattern-based domain blocking
- **DHCP:** Network device IP assignment
- **DNS Over HTTPS (DoH):** Encrypted DNS queries
- **Local DNS:** Custom local domain resolution
## Docker Configuration
### Complete Service Definition
```yaml
pihole:
image: pihole/pihole:latest
container_name: pihole
restart: unless-stopped
hostname: pihole
networks:
- traefik-network
ports:
- "53:53/tcp" # DNS TCP
- "53:53/udp" # DNS UDP
- "8181:80/tcp" # Web Interface (remapped to avoid conflict)
# - "67:67/udp" # DHCP (optional, uncomment if using)
volumes:
- /opt/stacks/infrastructure/pihole/etc-pihole:/etc/pihole
- /opt/stacks/infrastructure/pihole/etc-dnsmasq.d:/etc/dnsmasq.d
environment:
- TZ=America/New_York
- WEBPASSWORD=${PIHOLE_PASSWORD}
- PIHOLE_DNS_=1.1.1.1;8.8.8.8
- WEBTHEME=default-dark
- VIRTUAL_HOST=pihole.${DOMAIN}
- DNSMASQ_LISTENING=all
cap_add:
- NET_ADMIN # Required for DHCP functionality
labels:
- "traefik.enable=true"
- "traefik.http.routers.pihole.rule=Host(`pihole.${DOMAIN}`)"
- "traefik.http.routers.pihole.entrypoints=websecure"
- "traefik.http.routers.pihole.tls.certresolver=letsencrypt"
- "traefik.http.services.pihole.loadbalancer.server.port=80"
```
### Important Notes
1. **Port 53:** DNS must be on port 53 (cannot be remapped)
2. **Web Port:** Can use 8181 externally, 80 internally
3. **NET_ADMIN:** Required capability for DHCP and network features
4. **Password:** Set strong password via WEBPASSWORD variable
## Setup and Management
### Initial Setup
1. **Deploy Pi-hole:** Start the container
2. **Wait 60 seconds:** Let gravity database build
3. **Access Web UI:** `https://pihole.yourdomain.com/admin`
4. **Login:** Use password from WEBPASSWORD
5. **Configure Router:** Point DNS to Pi-hole server IP
### Router Configuration
**Method 1: DHCP DNS Settings (Recommended)**
1. Access router admin panel
2. Find DHCP settings
3. Set Primary DNS: Pi-hole server IP (e.g., 192.168.1.10)
4. Set Secondary DNS: 1.1.1.1 or 8.8.8.8 (fallback)
5. Save and reboot router
**Method 2: Per-Device Configuration**
- Set DNS manually on each device
- Not recommended for whole-network blocking
### Dashboard Overview
**Main Dashboard Shows:**
- Total queries (last 24h)
- Queries blocked (percentage)
- Blocklist domains count
- Top allowed/blocked domains
- Query types (A, AAAA, PTR, etc.)
- Client activity
- Real-time query log
### Managing Blocklists
**Add Blocklists:**
1. Group Management → Adlists
2. Add list URL
3. Update Gravity (Tools → Update Gravity)
**Popular Blocklists (Firebog):**
```
https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
https://v.firebog.net/hosts/AdguardDNS.txt
https://raw.githubusercontent.com/anudeepND/blacklist/master/adservers.txt
https://s3.amazonaws.com/lists.disconnect.me/simple_tracking.txt
```
**Update Gravity:**
```bash
# Via CLI
docker exec pihole pihole -g
# Or Web UI: Tools → Update Gravity
```
### Whitelist/Blacklist
**Whitelist Domain (Allow):**
1. Whitelist → Add domain
2. Example: `example.com`
3. Supports wildcards: `*.example.com`
**Blacklist Domain (Block):**
1. Blacklist → Add domain
2. Example: `ads.example.com`
**Via CLI:**
```bash
# Whitelist
docker exec pihole pihole -w example.com
# Blacklist
docker exec pihole pihole -b ads.example.com
# Regex whitelist
docker exec pihole pihole --regex-whitelist "^example\.com$"
# Regex blacklist
docker exec pihole pihole --regex "^ad[sx]?[0-9]*\..*"
```
### Query Log
**View Queries:**
1. Query Log → See all DNS requests
2. Filter by client, domain, type
3. Whitelist/Blacklist directly from log
**Privacy Modes:**
- Show Everything
- Hide Domains
- Hide Domains and Clients
- Anonymous Mode
## Advanced Topics
### Local DNS Records
Create custom local DNS entries:
**Via Web UI:**
1. Local DNS → DNS Records
2. Add domain → IP mapping
3. Example: `nas.local → 192.168.1.50`
**Via File:**
```bash
# /opt/stacks/infrastructure/pihole/etc-pihole/custom.list
192.168.1.50 nas.local
192.168.1.51 server.local
```
### Group Management
Different blocking rules for different devices:
1. **Create Groups:**
- Group Management → Groups → Add Group
- Example: "Kids Devices", "Guest Network"
2. **Assign Clients:**
- Group Management → Clients → Add client to group
3. **Configure Adlists per Group:**
- Group Management → Adlists → Assign to groups
### Regex Filtering
Advanced pattern-based blocking:
**Common Patterns:**
```regex
# Block all subdomains of ads.example.com
^ad[sx]?[0-9]*\.example\.com$
# Block tracking parameters
.*\?utm_.*
# Block all Facebook tracking
^(.+[_.-])?facebook\.[a-z]+$
```
**Add Regex:**
1. Domains → Regex Filter
2. Add regex pattern
3. Test with Query Log
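Pi-hole's FTL engine evaluates POSIX extended regex, so a pattern can be sanity-checked locally with `grep -E` before adding it (the domains below are made up for illustration):

```shell
# Test a blocking regex against sample domains
pattern='^ad[sx]?[0-9]*\.example\.com$'
echo "ads1.example.com" | grep -Eq "$pattern" && echo "blocked"
echo "cdn.example.com"  | grep -Eq "$pattern" || echo "allowed"
```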
### DNS Over HTTPS (DoH)
Encrypt DNS queries to upstream servers:
**Using Cloudflared:**
```yaml
cloudflared:
image: cloudflare/cloudflared:latest
container_name: cloudflared
restart: unless-stopped
command: proxy-dns
environment:
- TUNNEL_DNS_UPSTREAM=https://1.1.1.1/dns-query,https://1.0.0.1/dns-query
- TUNNEL_DNS_PORT=5053
- TUNNEL_DNS_ADDRESS=0.0.0.0
pihole:
environment:
- PIHOLE_DNS_=cloudflared#5053 # Use cloudflared as upstream
```
### Conditional Forwarding
Forward specific domains to specific DNS:
**Example:** Local domain to local DNS
```bash
# /opt/stacks/infrastructure/pihole/etc-dnsmasq.d/02-custom.conf
server=/local/192.168.1.1
```
### DHCP Server
Use Pi-hole as network DHCP server:
```yaml
environment:
- DHCP_ACTIVE=true
- DHCP_START=192.168.1.100
- DHCP_END=192.168.1.200
- DHCP_ROUTER=192.168.1.1
- DHCP_LEASETIME=24
- PIHOLE_DOMAIN=lan
ports:
- "67:67/udp" # DHCP port
```
**Steps:**
1. Disable DHCP on router
2. Enable DHCP in Pi-hole
3. Restart network devices
## Troubleshooting
### Pi-hole Not Blocking Ads
```bash
# Check if Pi-hole is receiving queries
docker logs pihole | grep query
# Verify DNS is set correctly on device
# Windows: ipconfig /all
# Linux/Mac: cat /etc/resolv.conf
# Should show Pi-hole IP
# Test DNS resolution
nslookup ads.google.com PIHOLE_IP
# Should return 0.0.0.0 if blocked
# Check gravity database
docker exec pihole pihole -g
```
### DNS Not Resolving
```bash
# Check if Pi-hole is running
docker ps | grep pihole
# Check DNS ports
sudo netstat -tulpn | grep :53
# Test DNS
dig @PIHOLE_IP google.com
# Check whether a domain is on a blocklist
docker exec pihole pihole -q example.com
```
### Web Interface Not Accessible
```bash
# Check container logs
docker logs pihole
# Verify port mapping
docker port pihole
# Access via IP
http://SERVER_IP:8181/admin
# Check Traefik routing
docker logs traefik | grep pihole
```
### High CPU/Memory Usage
```bash
# Check container stats
docker stats pihole
# Database optimization
docker exec pihole sqlite3 /etc/pihole/pihole-FTL.db "VACUUM"
# Reduce query logging
# Settings → Privacy → Anonymous mode
# Clear old queries
docker exec pihole sqlite3 /etc/pihole/pihole-FTL.db "DELETE FROM queries WHERE timestamp < strftime('%s', 'now', '-7 days')"
```
### False Positives (Sites Broken)
```bash
# Check query log for blocked domains
# Web UI → Query Log → Find blocked domain
# Whitelist domain
docker exec pihole pihole -w problematic-domain.com
# Or via Web UI:
# Whitelist → Add domain
# Common false positives:
# - microsoft.com CDNs
# - google-analytics.com (breaks some sites)
# - doubleclick.net (ad network but some sites need it)
```
### Blocklist Update Issues
```bash
# Manually update gravity
docker exec pihole pihole -g
# Check disk space
df -h
# Repair the Pi-hole installation if gravity updates fail
docker exec pihole pihole -r
# Choose: Repair
```
## Security Best Practices
1. **Strong Password:** Use complex WEBPASSWORD
2. **Authelia Protection:** Add Authelia middleware for external access
3. **Firewall Rules:** Only expose port 53 as needed
4. **Update Regularly:** Keep Pi-hole container updated
5. **Backup Config:** Regular backups of `/etc/pihole` directory
6. **Query Privacy:** Enable privacy mode for sensitive networks
7. **Upstream DNS:** Use privacy-focused DNS (Quad9, Cloudflare)
8. **Monitor Logs:** Watch for unusual query patterns
9. **Network Segmentation:** Separate IoT devices
10. **HTTPS Only:** Use HTTPS for web interface access
## Performance Optimization
```yaml
environment:
# Increase cache size
- CACHE_SIZE=10000
# Disable query logging (improves performance)
- QUERY_LOGGING=false
# Optimize DNS settings
- DNSSEC=false # Disable if not needed
```
**Database Maintenance:**
```bash
# Optimize database weekly
docker exec pihole sqlite3 /etc/pihole/pihole-FTL.db "VACUUM"
# Flush the query log (interactive)
docker exec pihole pihole -f
```
## Backup and Restore
### Backup
**Via Web UI:**
1. Settings → Teleporter
2. Click "Backup"
3. Download tar.gz file
**Via CLI:**
```bash
# Backup entire config
tar -czf pihole-backup-$(date +%Y%m%d).tar.gz /opt/stacks/infrastructure/pihole/
# Backup only settings
docker exec pihole pihole -a -t
```
### Restore
**Via Web UI:**
1. Settings → Teleporter
2. Choose file
3. Click "Restore"
**Via CLI:**
```bash
# Restore from backup
tar -xzf pihole-backup-20240112.tar.gz -C /opt/stacks/infrastructure/pihole/
docker restart pihole
```
## Summary
Pi-hole provides network-wide ad blocking by acting as a DNS server that filters requests before they reach the internet. It:
- Blocks ads on all devices automatically
- Improves browsing speed and privacy
- Provides visibility into network DNS queries
- Offers extensive customization
- Protects against malware and tracking
**Setup Priority:**
1. Deploy Pi-hole container
2. Set strong admin password
3. Configure router DNS settings
4. Add blocklists (Firebog)
5. Monitor dashboard
6. Whitelist as needed
7. Configure DoH for privacy
8. Regular backups
**Remember:**
- DNS is critical - test thoroughly before deploying
- Keep secondary DNS as fallback
- Some sites may break - use whitelist
- Monitor query log initially
- Update gravity weekly
- Backup before major changes

# Plex - Media Server
## Table of Contents
- [Overview](#overview)
- [What is Plex?](#what-is-plex)
- [Why Use Plex?](#why-use-plex)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Initial Setup](#initial-setup)
- [Library Management](#library-management)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Media Server
**Docker Image:** [linuxserver/plex](https://hub.docker.com/r/linuxserver/plex)
**Default Stack:** `media.yml`
**Web UI:** `https://plex.${DOMAIN}` or `http://SERVER_IP:32400/web`
**Authentication:** Plex account required (free or Plex Pass)
**Ports:** 32400 (web), 1900, 3005, 5353, 8324, 32410-32414, 32469
## What is Plex?
Plex is a comprehensive media server platform that organizes your personal media collection (movies, TV shows, music, photos) and streams it to any device. It's the most popular media server with apps on virtually every platform.
### Key Features
- **Universal Streaming:** Apps for every device
- **Beautiful Interface:** Polished, professional UI
- **Automatic Metadata:** Fetches posters, descriptions, cast info
- **Transcoding:** Converts media for any device
- **Live TV & DVR:** With Plex Pass and TV tuner
- **Mobile Sync:** Download for offline viewing
- **User Management:** Share libraries with friends/family
- **Watched Status:** Track progress across devices
- **Collections:** Organize movies into collections
- **Discover:** Recommendations and trending
- **Remote Access:** Stream from anywhere
- **Plex Pass:** Premium features (hardware transcoding, etc.)
## Why Use Plex?
1. **Most Popular:** Largest user base and app ecosystem
2. **Professional UI:** Best-looking interface
3. **Easy Sharing:** Simple friend/family sharing
4. **Universal Apps:** Literally every platform
5. **Active Development:** Regular updates
6. **Hardware Transcoding:** GPU acceleration (Plex Pass)
7. **Mobile Downloads:** Offline viewing
8. **Live TV:** DVR functionality
9. **Free:** Core features free, Plex Pass optional
10. **Discovery Features:** Find new content easily
## How It Works
```
Media Files → Plex Server (scans and organizes)
                     ↓
           Metadata Enrichment
           (posters, info, etc.)
          ┌──────────┴──────────┐
          ↓                     ↓
    Local Network         Remote Access
    (Direct Play)         (Transcoding)
          ↓                     ↓
      Plex Apps             Plex Apps
    (All Devices)         (Outside Home)
```
### Media Flow
1. **Add media** to watched folders
2. **Plex scans** and identifies content
3. **Metadata fetched** from online databases
4. **User requests** content via app
5. **Plex analyzes** client capabilities
6. **Direct play** or **transcode** as needed
7. **Stream** to client device
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/media/plex/config/ # Plex configuration
/mnt/media/
├── movies/ # Movie files
├── tv/ # TV show files
├── music/ # Music files
└── photos/ # Photo files
```
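Plex's scanners depend on consistent file naming inside those folders. Following the official naming conventions, files would look like this (titles are examples):

```
/mnt/media/movies/The Matrix (1999)/The Matrix (1999).mkv
/mnt/media/tv/Breaking Bad/Season 01/Breaking Bad - s01e01.mkv
/mnt/media/music/Artist Name/Album Name/01 - Track Name.flac
```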
### Environment Variables
```bash
# User permissions
PUID=1000
PGID=1000
# Timezone
TZ=America/New_York
# Plex Claim Token (for setup)
PLEX_CLAIM=claim-xxxxxxxxxxxxxxxx
# Optional: Version
VERSION=latest # or specific version
# Optional: Hardware transcoding
# Requires Plex Pass + GPU
NVIDIA_VISIBLE_DEVICES=all # For NVIDIA GPUs
```
**Get Claim Token:**
Visit: https://www.plex.tv/claim/ (valid for 4 minutes)
## Official Resources
- **Website:** https://www.plex.tv
- **Support:** https://support.plex.tv
- **Forums:** https://forums.plex.tv
- **Reddit:** https://reddit.com/r/PleX
- **API Documentation:** https://www.plex.tv/api/
- **Plex Pass:** https://www.plex.tv/plex-pass/
## Educational Resources
### Videos
- [Plex Setup Guide (Techno Tim)](https://www.youtube.com/watch?v=IOUbZPoKJM0)
- [Plex vs Jellyfin vs Emby](https://www.youtube.com/results?search_query=plex+vs+jellyfin)
- [Ultimate Plex Server Setup](https://www.youtube.com/watch?v=XKDSld-CrHU)
- [Plex Hardware Transcoding](https://www.youtube.com/results?search_query=plex+hardware+transcoding)
### Articles & Guides
- [Plex Official Documentation](https://support.plex.tv/articles/)
- [Naming Conventions](https://support.plex.tv/articles/naming-and-organizing-your-movie-media-files/)
- [Hardware Transcoding](https://support.plex.tv/articles/115002178853-using-hardware-accelerated-streaming/)
- [Remote Access Setup](https://support.plex.tv/articles/200289506-remote-access/)
### Concepts to Learn
- **Transcoding:** Converting media to compatible format
- **Direct Play:** Streaming without conversion
- **Direct Stream:** Remux container, no transcode
- **Hardware Acceleration:** GPU-based transcoding
- **Metadata Agents:** Sources for media information
- **Libraries:** Organized media collections
- **Quality Profiles:** Streaming quality settings
## Docker Configuration
### Complete Service Definition
```yaml
plex:
image: linuxserver/plex:latest
container_name: plex
restart: unless-stopped
network_mode: host # Required for auto-discovery
# Or use bridge network with all ports
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
- VERSION=latest
- PLEX_CLAIM=${PLEX_CLAIM} # Optional, for initial setup
volumes:
- /opt/stacks/media/plex/config:/config
- /mnt/media/movies:/movies
- /mnt/media/tv:/tv
- /mnt/media/music:/music
- /mnt/media/photos:/photos
- /tmp/plex-transcode:/transcode # Temporary transcoding files
devices:
- /dev/dri:/dev/dri # For Intel QuickSync
# For NVIDIA GPU (requires nvidia-docker):
# runtime: nvidia
# environment:
# - NVIDIA_VISIBLE_DEVICES=all
# - NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
```
### With Traefik (bridge network)
```yaml
plex:
image: linuxserver/plex:latest
container_name: plex
restart: unless-stopped
networks:
- traefik-network
ports:
- "32400:32400/tcp" # Web UI
- "1900:1900/udp" # DLNA
- "3005:3005/tcp" # Plex Companion
- "5353:5353/udp" # Bonjour/Avahi
- "8324:8324/tcp" # Roku
- "32410:32410/udp" # GDM Network Discovery
- "32412:32412/udp" # GDM Network Discovery
- "32413:32413/udp" # GDM Network Discovery
- "32414:32414/udp" # GDM Network Discovery
- "32469:32469/tcp" # Plex DLNA Server
volumes:
- /opt/stacks/media/plex/config:/config
- /mnt/media:/media:ro # Read-only mount
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
labels:
- "traefik.enable=true"
- "traefik.http.routers.plex.rule=Host(`plex.${DOMAIN}`)"
- "traefik.http.routers.plex.entrypoints=websecure"
- "traefik.http.routers.plex.tls.certresolver=letsencrypt"
- "traefik.http.services.plex.loadbalancer.server.port=32400"
```
## Initial Setup
### First-Time Configuration
1. **Start Container:**
```bash
docker compose up -d plex
```
2. **Access Web UI:**
- Local: `http://SERVER_IP:32400/web`
- Via domain: `https://plex.yourdomain.com`
3. **Sign In:**
- Use existing Plex account
- Or create free account
4. **Server Setup Wizard:**
- Give server a friendly name
- Allow remote access (optional)
- Add libraries (next section)
### Adding Libraries
**Add Movie Library:**
1. Settings → Libraries → Add Library
2. Type: Movies
3. Add folder: `/movies`
4. Advanced → Scanner: Plex Movie
5. Advanced → Agent: Plex Movie
6. Add Library
**Add TV Show Library:**
1. Settings → Libraries → Add Library
2. Type: TV Shows
3. Add folder: `/tv`
4. Advanced → Scanner: Plex Series
5. Advanced → Agent: Plex TV Series
6. Add Library
**Add Music Library:**
1. Type: Music
2. Add folder: `/music`
3. Scanner: Plex Music
### File Naming Conventions
**Movies:**
```
/movies/
Movie Name (Year)/
Movie Name (Year).mkv
Example:
/movies/
The Matrix (1999)/
The Matrix (1999).mkv
```
**TV Shows:**
```
/tv/
Show Name (Year)/
Season 01/
Show Name - S01E01 - Episode Name.mkv
Example:
/tv/
Breaking Bad (2008)/
Season 01/
Breaking Bad - S01E01 - Pilot.mkv
Breaking Bad - S01E02 - Cat's in the Bag.mkv
```
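The layout above can be sketched as a short shell snippet. `MEDIA_ROOT` here is a throwaway temp directory so the structure can be verified safely; point it at your real library root in practice:

```bash
# Create the recommended layout under a throwaway root so the structure
# can be verified; point MEDIA_ROOT at your real library in practice.
MEDIA_ROOT="$(mktemp -d)"
mkdir -p "$MEDIA_ROOT/movies/The Matrix (1999)"
touch "$MEDIA_ROOT/movies/The Matrix (1999)/The Matrix (1999).mkv"
mkdir -p "$MEDIA_ROOT/tv/Breaking Bad (2008)/Season 01"
touch "$MEDIA_ROOT/tv/Breaking Bad (2008)/Season 01/Breaking Bad - S01E01 - Pilot.mkv"
find "$MEDIA_ROOT" -type f   # eyeball the resulting structure
```

Quoting the paths matters: Plex's naming convention uses spaces and parentheses, which the shell would otherwise split or glob.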
## Library Management
### Scanning Libraries
**Manual Scan:**
- Settings → Libraries → Select library → Scan Library Files
**Auto-Scan:**
- Settings → Library → "Scan my library automatically"
- "Run a partial scan when changes are detected"
**Force Refresh:**
- Select library → ... → Refresh All
### Metadata Management
**Fix Incorrect Match:**
1. Find incorrectly matched item
2. ... menu → Fix Match
3. Search for correct title
4. Select correct match
**Edit Metadata:**
- ... menu → Edit
- Change title, poster, summary, etc.
- Unlock fields to override fetched data
**Refresh Metadata:**
- ... menu → Refresh Metadata
- Re-fetches from online sources
### Collections
**Auto Collections:**
- Settings → Libraries → Select library
- Advanced → Collections
- Enable "Use collection info from The Movie Database"
**Manual Collections:**
1. Select movies → Edit → Tags → Collection
2. Add collection name
3. Collection appears automatically
### Optimize Media
**Optimize Versions:**
- Settings → Convert automatically (Plex Pass)
- Creates optimized versions for specific devices
- Saves transcoding resources
## Advanced Topics
### Hardware Transcoding (Plex Pass Required)
**Intel QuickSync:**
```yaml
plex:
devices:
- /dev/dri:/dev/dri
```
Settings → Transcoder → Hardware acceleration:
- Enable: "Use hardware acceleration when available"
- Select: Intel QuickSync
**NVIDIA GPU:**
```yaml
plex:
runtime: nvidia
environment:
- NVIDIA_VISIBLE_DEVICES=all
- NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
```
Settings → Transcoder:
- Enable: "Use hardware acceleration when available"
**Verify:**
- Dashboard → Now Playing
- Check if "transcode (hw)" appears
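Before trusting the dashboard, it helps to confirm the render device actually exists on the host. A minimal guarded check (the `renderD128` name is typical for Intel iGPUs but can differ per system):

```bash
# Guarded check for the GPU render node on the host.
if [ -e /dev/dri ]; then
  HW_STATUS="present"
  ls -l /dev/dri    # look for card0 and renderD128 entries
else
  HW_STATUS="absent"
fi
echo "GPU device: $HW_STATUS"
```

If the device is absent on the host, the `devices:` mapping in the compose file will fail, so fix drivers on the host first.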
### Remote Access
**Enable:**
1. Settings → Network → Remote Access
2. Click "Enable Remote Access"
3. Plex sets up automatically (UPnP)
**Manual Port Forward:**
- Router: Forward port 32400 to Plex server IP
- Settings → Network → Manually specify public port: 32400
**Custom Domain:**
- Settings → Network → Custom server access URLs
- Add: `https://plex.yourdomain.com:443`
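A quick way to verify the port is actually reachable, without opening the app. `PLEX_HOST` is a placeholder; substitute your server's LAN IP, or your public address to test remote access from outside:

```bash
# Check whether Plex answers on 32400. PLEX_HOST is a placeholder;
# use your server's LAN IP (or public address, to test remote access).
PLEX_HOST="127.0.0.1"
if timeout 3 bash -c "echo > /dev/tcp/$PLEX_HOST/32400" 2>/dev/null; then
  PORT_STATUS="open"
else
  PORT_STATUS="closed"
fi
echo "Port 32400 on $PLEX_HOST: $PORT_STATUS"
```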
### User Management
**Add Users:**
1. Settings → Users & Sharing → Add Friend
2. Enter email or username
3. Select libraries to share
4. Set restrictions (if any)
**Managed Users (Home Users):**
- Settings → Users & Sharing → Plex Home
- Create profiles for family members
- PIN protection
- Content restrictions
### Plex Pass Features
**Worth It For:**
- Hardware transcoding (essential for 4K)
- Mobile downloads
- Live TV & DVR
- Multiple user accounts (managed users)
- Camera upload
- Plex Dash (monitoring app)
**Get Plex Pass:**
- Monthly: $4.99
- Yearly: $39.99
- Lifetime: $119.99 (best value)
### Tautulli Integration
Monitor Plex activity:
```yaml
tautulli:
  image: linuxserver/tautulli:latest
  container_name: tautulli
  restart: unless-stopped
  ports:
    - "8181:8181"
  volumes:
    - /opt/stacks/media/tautulli:/config
  environment:
    - TZ=America/New_York
  # Point Tautulli at http://plex:32400 in its web UI settings
```
Features:
- Watch statistics
- Activity monitoring
- Notifications
- User analytics
### Plugins and Extras
**Plugins:**
- Settings → Plugins
- Available plugins (deprecated by Plex)
- Use Sonarr/Radarr instead for acquisition
**Extras:**
- Behind the scenes
- Trailers
- Interviews
Place extras in the movie/show folder using a recognized filename suffix (e.g. `-trailer`, `-behindthescenes`, `-interview`) or in a matching subfolder
## Troubleshooting
### Plex Not Accessible
```bash
# Check container status
docker ps | grep plex
# View logs
docker logs plex
# Test local access
curl http://localhost:32400/web
# Check network mode
docker inspect plex | grep NetworkMode
```
### Libraries Not Scanning
```bash
# Check permissions
ls -la /mnt/media/movies/
# Fix ownership
sudo chown -R 1000:1000 /mnt/media/
# Force scan
# Settings → Libraries → Scan Library Files
# Check logs
docker logs plex | grep -i scan
```
### Transcoding Issues
```bash
# Check transcode directory permissions
ls -la /tmp/plex-transcode/
# Ensure enough disk space
df -h /tmp
# Disable transcoding temporarily
# Settings → Transcoder → Transcoder quality: Maximum
# Check hardware acceleration
# Dashboard → Now Playing → Look for (hw)
```
### Buffering/Playback Issues
**Causes:**
- Network bandwidth
- Transcoding CPU overload
- Disk I/O bottleneck
- Insufficient RAM
**Solutions:**
- Lower streaming quality
- Enable hardware transcoding (Plex Pass)
- Use direct play when possible
- Upgrade network
- Optimize media files
### Remote Access Not Working
```bash
# Check port forwarding
# Router should forward 32400 → Plex server
# Check Plex status
# Settings → Network → Remote Access → Test
# Manually specify port
# Settings → Network → Manually specify: 32400
# Check firewall
sudo ufw allow 32400/tcp
```
### Metadata Not Downloading
```bash
# Check internet connectivity
docker exec plex ping -c 3 metadata.provider.plex.tv
# Refresh metadata
# Select library → Refresh All
# Check agents
# Settings → Agents → Make sure agents are enabled
# Force re-match
# Item → Fix Match → Search again
```
### Database Corruption
```bash
# Stop Plex
docker stop plex
# Backup database
cp /opt/stacks/media/plex/config/Library/Application\ Support/Plex\ Media\ Server/Plug-in\ Support/Databases/com.plexapp.plugins.library.db /opt/backups/
# Check integrity with sqlite3 (install on the host if needed)
sqlite3 "/opt/stacks/media/plex/config/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db" \
  "PRAGMA integrity_check;"
# If corruption is reported, restore from the backup taken above
# Restart Plex
docker start plex
```
## Performance Optimization
### Transcoding
```yaml
# Use RAM disk for transcoding
volumes:
- /dev/shm:/transcode
```
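Before switching the transcoder to a RAM disk, check how much space `/dev/shm` actually provides (by default half of RAM on most Linux distributions; a single 4K transcode can need several GB):

```bash
# Check the RAM-backed space available at /dev/shm before using it for
# transcoding (a single 4K transcode can need several GB).
SHM_KB=$(df -Pk /dev/shm | awk 'NR==2 {print $2}')
echo "/dev/shm size: $((SHM_KB / 1024)) MB"
```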
### Database Optimization
```bash
# Stop Plex
docker stop plex
# Vacuum database
sqlite3 "/opt/stacks/media/plex/config/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db" "VACUUM;"
# Restart
docker start plex
```
### Quality Settings
**Streaming Quality:**
- Settings → Network → Internet streaming quality
- Set based on upload bandwidth
- Lower = less transcoding
**Direct Play:**
- Settings → Network → Treat WAN IP As LAN Bandwidth
- Reduces unnecessary transcoding
## Security Best Practices
1. **Strong Password:** Use secure Plex account password
2. **2FA:** Enable two-factor authentication on Plex account
3. **Read-Only Media:** Mount media as `:ro` when possible
4. **Limited Sharing:** Only share with trusted users
5. **Secure Remote Access:** Use HTTPS only
6. **Regular Updates:** Keep Plex updated
7. **Monitor Activity:** Use Tautulli for user tracking
8. **PIN Protection:** Use PINs for managed users
9. **Network Isolation:** Consider separate network for media
10. **Firewall:** Restrict remote access if not needed
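Point 10 can be sketched with ufw: limit Plex's port to the LAN only. The subnet below is an example; adjust it to your network, then run the printed command as root:

```bash
# Build the ufw rule that limits Plex to the LAN. The subnet is an
# example; adjust to your network, then run the printed command as root.
LAN_SUBNET="192.168.1.0/24"
UFW_CMD="ufw allow from $LAN_SUBNET to any port 32400 proto tcp"
echo "Run as root: $UFW_CMD"
```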
## Backup Strategy
**Critical Files:**
```bash
# Configuration and database
/opt/stacks/media/plex/config/Library/Application Support/Plex Media Server/
# Important:
- Plug-in Support/Databases/ # Watch history, metadata
- Metadata/ # Cached images
- Preferences.xml # Settings
```
**Backup Script:**
```bash
#!/bin/bash
docker stop plex
tar -czf plex-backup-$(date +%Y%m%d).tar.gz \
/opt/stacks/media/plex/config/
docker start plex
```
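To run the script above automatically, schedule it with cron. The script path is an example; match wherever you saved the script:

```bash
# Schedule the backup script nightly at 03:30. The script path is an
# example; match wherever you saved the script above.
BACKUP_SCRIPT="/opt/scripts/plex-backup.sh"
CRON_LINE="30 3 * * * $BACKUP_SCRIPT"
echo "$CRON_LINE"   # append this line via: sudo crontab -e
```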
## Summary
Plex is the industry-leading media server offering:
- Professional interface and experience
- Universal device support
- Powerful transcoding
- Easy sharing with friends/family
- Active development and features
- Free with optional Plex Pass
**Perfect for:**
- Home media streaming
- Sharing with non-technical users
- Remote access needs
- Multi-device households
- Premium experience seekers
**Trade-offs:**
- Closed source
- Requires Plex account
- Some features require Plex Pass
- Phone sync requires Plex Pass
- More resource intensive than alternatives
**Remember:**
- Proper file naming is crucial
- Hardware transcoding needs Plex Pass
- Remote access requires port forwarding
- Share responsibly with trusted users
- Regular backups recommended
- Consider Plex Pass for full features
- Plex + Sonarr/Radarr = Perfect combo
# Portainer - Docker Management Platform
## Table of Contents
- [Overview](#overview)
- [What is Portainer?](#what-is-portainer)
- [Why Use Portainer?](#why-use-portainer)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Using Portainer](#using-portainer)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Infrastructure Management
**Docker Image:** [portainer/portainer-ce](https://hub.docker.com/r/portainer/portainer-ce)
**Default Stack:** `infrastructure.yml`
**Web UI:** `https://portainer.${DOMAIN}`
**Authentication:** Built-in (admin/password) + Authelia protection
**Role:** Secondary management tool (Dockge is primary)
## What is Portainer?
Portainer is a comprehensive Docker and Kubernetes management platform with an intuitive web interface. It provides enterprise-grade features for managing containers, images, networks, volumes, and more across single hosts or entire clusters.
### Key Features
- **Full Docker Management:** Containers, images, networks, volumes, stacks
- **User Management:** Multi-user support with role-based access control (RBAC)
- **Kubernetes Support:** Manage K8s clusters (Community Edition)
- **App Templates:** One-click deployment of popular applications
- **Registry Management:** Connect to Docker registries
- **Resource Monitoring:** CPU, memory, network usage
- **Container Console:** Web-based terminal access
- **Webhooks:** Automated deployments via webhooks
- **Environment Management:** Manage multiple Docker hosts
- **Team Collaboration:** Share environments with teams
## Why Use Portainer?
1. **Backup Management Tool:** When Dockge has issues
2. **Advanced Features:** User management, registries, templates
3. **Detailed Information:** More comprehensive stats and info
4. **Image Management:** Better interface for managing images
5. **Network Visualization:** See container networking
6. **Volume Management:** Easy volume backup/restore
7. **Established Platform:** Mature, well-documented, large community
8. **Enterprise Option:** Can upgrade to Business Edition if needed
## How It Works
```
User → Web Browser → Portainer UI
                          │
                    Docker Socket
                          │
                    Docker Engine
                          │
                 All Docker Resources
   (Containers, Images, Networks, Volumes)
```
### Architecture
Portainer consists of:
1. **Portainer Server:** Main application with web UI
2. **Docker Socket:** Connection to Docker Engine
3. **Portainer Agent:** Optional, for managing remote hosts
4. **Database:** Stores configuration, users, settings
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/infrastructure/portainer/
└── data/ # Portainer database and config (auto-created)
```
### Initial Setup
**First Login:**
1. Access `https://portainer.yourdomain.com`
2. Create admin account (username: admin)
3. Choose "Docker" environment
4. Select "Connect via Docker socket"
### Environment Variables
```bash
# No environment variables typically needed
# Configuration done through Web UI
```
## Official Resources
- **Website:** https://www.portainer.io
- **Documentation:** https://docs.portainer.io
- **Community Edition:** https://www.portainer.io/portainer-ce
- **GitHub:** https://github.com/portainer/portainer
- **Docker Hub:** https://hub.docker.com/r/portainer/portainer-ce
- **Forum:** https://community.portainer.io
- **YouTube:** https://www.youtube.com/c/portainerio
## Educational Resources
### Videos
- [Portainer - Docker Management Made Easy (Techno Tim)](https://www.youtube.com/watch?v=ljDI5jykjE8)
- [Portainer Full Tutorial (NetworkChuck)](https://www.youtube.com/watch?v=iX0HbrfRyvc)
- [Portainer vs Dockge Comparison](https://www.youtube.com/results?search_query=portainer+vs+dockge)
- [Advanced Portainer Features (DB Tech)](https://www.youtube.com/watch?v=8q9k1qzXRk4)
### Articles & Guides
- [Portainer Official Documentation](https://docs.portainer.io)
- [Getting Started with Portainer](https://docs.portainer.io/start/install-ce)
- [Portainer vs Dockge](https://www.reddit.com/r/selfhosted/comments/17kp3d7/dockge_vs_portainer/)
- [Docker Management Best Practices](https://docs.docker.com/config/daemon/)
### Concepts to Learn
- **Docker Management:** Centralized control of Docker resources
- **RBAC:** Role-Based Access Control for teams
- **Stacks:** Docker Compose deployments via UI
- **Templates:** Pre-configured app deployments
- **Registries:** Docker image repositories
- **Environments:** Multiple Docker hosts managed together
- **Agents:** Remote Docker host management
## Docker Configuration
### Complete Service Definition
```yaml
portainer:
image: portainer/portainer-ce:latest
container_name: portainer
restart: unless-stopped
security_opt:
- no-new-privileges:true
networks:
- traefik-network
ports:
- "9443:9443" # HTTPS UI
- "8000:8000" # Edge agent (optional)
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- /opt/stacks/infrastructure/portainer/data:/data
environment:
- TZ=America/New_York
labels:
- "traefik.enable=true"
- "traefik.http.routers.portainer.rule=Host(`portainer.${DOMAIN}`)"
- "traefik.http.routers.portainer.entrypoints=websecure"
- "traefik.http.routers.portainer.tls.certresolver=letsencrypt"
- "traefik.http.routers.portainer.middlewares=authelia@docker"
- "traefik.http.services.portainer.loadbalancer.server.port=9443"
- "traefik.http.services.portainer.loadbalancer.server.scheme=https"
```
### Important Notes
1. **Port 9443:** HTTPS UI (Portainer uses self-signed cert internally)
2. **Docker Socket:** Read-only mount recommended for security
3. **Data Volume:** Stores all Portainer configuration
4. **Edge Agent Port:** 8000 for remote agent connections (optional)
## Using Portainer
### Dashboard Overview
**Home Dashboard Shows:**
- Total containers (running, stopped)
- Total images
- Total volumes
- Total networks
- Stack count
- Resource usage (CPU, memory)
### Container Management
**View Containers:**
- Home → Containers
- See all containers with status
- Quick actions: start, stop, restart, remove
**Container Details:**
- Logs (real-time and download)
- Stats (CPU, memory, network)
- Console (terminal access)
- Inspect (full container JSON)
- Recreate (update container)
**Container Actions:**
1. **Start/Stop/Restart:** One-click control
2. **Logs:** View stdout/stderr output
3. **Stats:** Real-time resource usage
4. **Exec Console:** Access container shell
5. **Duplicate:** Create copy with same config
6. **Recreate:** Pull new image and restart
### Stack Management
**Deploy Stack:**
1. Stacks → Add Stack
2. Name your stack
3. Choose method:
- Web editor (paste compose)
- Upload compose file
- Git repository
4. Click "Deploy the stack"
**Manage Existing Stacks:**
- View all services in stack
- Edit compose configuration
- Stop/Start entire stack
- Remove stack (keep/delete volumes)
### Image Management
**Images View:**
- All local images
- Size and tags
- Pull new images
- Remove unused images
- Build from Dockerfile
- Import/Export images
**Common Operations:**
```
Pull Image: Images → Pull → Enter image:tag
Remove Image: Images → Select → Remove
Build Image: Images → Build → Upload Dockerfile
```
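The same operations are available from the CLI, which is handy when the UI is unavailable. A guarded sketch, safe to run on hosts without a Docker daemon:

```bash
# CLI equivalents of the image operations above, guarded so the
# snippet is safe to run on hosts without a Docker daemon.
if docker info >/dev/null 2>&1; then
  DOCKER_OK=1
  docker image ls                # list local images with size and tags
  # docker pull nginx:latest     # pull a new image
  # docker image prune -f        # remove dangling images
else
  DOCKER_OK=0
  echo "docker not available"
fi
```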
### Network Management
**View Networks:**
- All Docker networks
- Connected containers
- Network driver type
- Subnet information
**Create Network:**
1. Networks → Add Network
2. Name and driver (bridge, overlay)
3. Configure subnet/gateway
4. Attach containers
### Volume Management
**View Volumes:**
- All Docker volumes
- Size and mount points
- Containers using volume
**Volume Operations:**
- Create new volumes
- Remove unused volumes
- Browse volume contents
- Backup/restore volumes
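Outside the UI, a named volume can be archived with a throwaway alpine container. `VOLUME_NAME` below is an example; list your volumes with `docker volume ls`. The sketch prints the command rather than executing it:

```bash
# Build the one-off backup command for a named volume. VOLUME_NAME is an
# example; list your volumes with `docker volume ls`.
VOLUME_NAME="portainer_data"
BACKUP_CMD="docker run --rm -v ${VOLUME_NAME}:/source:ro -v \$(pwd):/backup alpine tar czf /backup/${VOLUME_NAME}.tar.gz -C /source ."
echo "$BACKUP_CMD"
```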
### App Templates
**Quick Deploy:**
1. App Templates
2. Select application
3. Configure settings
4. Deploy
**Popular Templates:**
- WordPress, MySQL, Redis
- Nginx, Apache
- PostgreSQL, MongoDB
- And many more...
## Advanced Topics
### User Management
**Create Users:**
1. Users → Add User
2. Username and password
3. Assign role
4. Set team membership (if teams exist)
**Roles:**
- **Administrator:** Full access
- **Operator:** Manage containers, no settings
- **User:** Limited access to assigned resources
- **Read-only:** View only
### Team Collaboration
**Create Team:**
1. Teams → Add Team
2. Name team
3. Add members
4. Assign resource access
**Use Case:**
- Family team: Access to media services
- Admin team: Full access
- Guest team: Limited access
### Registry Management
**Add Private Registry:**
1. Registries → Add Registry
2. Choose type (Docker Hub, GitLab, custom)
3. Enter credentials
4. Test connection
**Use Cases:**
- Private Docker Hub repos
- GitHub Container Registry
- Self-hosted registry
- GitLab Registry
### Webhooks
**Automated Deployments:**
1. Select container/stack
2. Create webhook
3. Copy webhook URL
4. Configure in CI/CD pipeline
**Example:**
```bash
# Trigger container update
curl -X POST https://portainer.domain.com/api/webhooks/abc123
```
### Multiple Environments
**Add Remote Docker Host:**
1. Environments → Add Environment
2. Choose "Docker" or "Agent"
3. Enter connection details
4. Test and save
**Agent Deployment:**
```yaml
portainer-agent:
image: portainer/agent:latest
ports:
- "9001:9001"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /var/lib/docker/volumes:/var/lib/docker/volumes
```
### Custom Templates
**Create Template:**
1. App Templates → Custom Templates
2. Add template
3. Define compose configuration
4. Set categories and logo
5. Save
### Resource Limits
Set container limits in Portainer UI:
1. Edit container
2. Resources & Runtime
3. Set CPU/memory limits
4. Apply changes
## Troubleshooting
### Can't Access Portainer
```bash
# Check if running
docker ps | grep portainer
# View logs
docker logs portainer
# Check port
curl -k https://localhost:9443
# Verify Traefik routing
docker logs traefik | grep portainer
```
### Forgot Admin Password
```bash
# Stop Portainer
docker stop portainer
# Reset the admin password with Portainer's helper image
# (it generates and prints a new password for the admin user)
docker run --rm -v /opt/stacks/infrastructure/portainer/data:/data \
  portainer/helper-reset-password
# Or reset completely (deletes ALL Portainer data)
docker rm portainer
sudo rm -rf /opt/stacks/infrastructure/portainer/data
docker compose up -d portainer
```
### Stacks Not Visible
```bash
# Portainer looks for compose files in specific location
# It doesn't automatically detect all stacks like Dockge
# Import existing stacks:
# Stacks → Add Stack → Web Editor → Paste compose content
```
### Container Terminal Not Working
```bash
# Ensure container has shell
docker exec container-name which bash
# Check Portainer logs
docker logs portainer | grep console
# Try different shell
# In Portainer: Console → Command → /bin/sh
```
### High Memory Usage
```bash
# Portainer uses more resources than Dockge
# Check stats
docker stats portainer
# If too high:
# - Close unused browser tabs
# - Restart Portainer
# - Reduce polling frequency (Settings)
```
### Database Corruption
```bash
# Backup first
cp -r /opt/stacks/infrastructure/portainer/data /opt/backups/
# Stop and recreate
docker stop portainer
docker rm portainer
sudo rm -rf /opt/stacks/infrastructure/portainer/data  # bind mount, not a named volume
docker compose up -d portainer
```
## Security Considerations
### Best Practices
1. **Strong Admin Password:** Use complex password
2. **Enable HTTPS:** Always use SSL/TLS
3. **Use Authelia:** Add extra authentication layer
4. **Limit Docker Socket:** Use read-only when possible
5. **Regular Updates:** Keep Portainer updated
6. **User Management:** Create separate users, avoid sharing admin
7. **RBAC:** Use role-based access for teams
8. **Audit Logs:** Review activity logs regularly
9. **Network Isolation:** Don't expose to internet without protection
10. **Backup Configuration:** Regular backups of `/data` volume
### Docker Socket Security
**Risk:** Full socket access = root on host
**Mitigations:**
- Use Docker Socket Proxy (see docker-proxy.md)
- Read-only mount when possible
- Limit user access to Portainer
- Monitor audit logs
- Use Authelia for additional authentication
## Portainer vs Dockge
### When to Use Portainer
- Need user management (teams, RBAC)
- Managing multiple Docker hosts
- Want app templates
- Need detailed image management
- Enterprise features required
- More established, proven platform
### When to Use Dockge
- Simple stack management
- Direct file manipulation preferred
- Lighter resource usage
- Faster for compose operations
- Better terminal experience
- Cleaner, modern UI
### AI-Homelab Approach
- **Primary:** Dockge (daily operations)
- **Secondary:** Portainer (backup, advanced features)
- **Use Both:** They complement each other
## Tips & Tricks
### Quick Container Recreate
To update a container with new image:
1. Containers → Select container
2. Click "Recreate"
3. Check "Pull latest image"
4. Click "Recreate"
### Volume Backup
1. Volumes → Select volume
2. Export/Backup
3. Download tar archive
4. Store safely
### Stack Migration
Export from one host, import to another:
1. Select stack
2. Copy compose content
3. On new host: Add Stack → Paste
4. Deploy
### Environment Variables
Set globally for all stacks:
1. Stacks → Select stack → Editor
2. Environment variables section
3. Add key=value pairs
4. Update stack
## Summary
Portainer is your backup Docker management platform. It provides:
- Comprehensive Docker management
- User and team collaboration
- Advanced features for complex setups
- Reliable, established platform
- Detailed resource monitoring
While Dockge is the primary tool for daily stack management, Portainer excels at:
- User management and RBAC
- Multiple environment management
- Detailed image and volume operations
- Template-based deployments
- Enterprise-grade features
Keep both running - they serve different purposes and complement each other well. Use Dockge for quick stack operations and Portainer for advanced features and user management.
**Remember:**
- Portainer is backup/secondary tool in AI-Homelab
- Different interface philosophy than Dockge
- More features, higher resource usage
- Excellent for multi-user scenarios
- Always protect with Authelia
- Regular backups of `/data` volume
# PostgreSQL - Database Services
## Table of Contents
- [Overview](#overview)
- [What is PostgreSQL?](#what-is-postgresql)
- [Why Use PostgreSQL?](#why-use-postgresql)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Database Instances](#database-instances)
## Overview
**Category:** Relational Database
**Docker Image:** [postgres](https://hub.docker.com/_/postgres)
**Default Stack:** `development.yml`
**Ports:** 5432 (internal)
## What is PostgreSQL?
PostgreSQL is an advanced open-source relational database. It's more feature-rich than MySQL/MariaDB, with better support for complex queries, JSON, full-text search, and extensions. Many consider it the best open-source database.
### Key Features
- **ACID Compliant:** Reliable transactions
- **JSON Support:** Native JSON/JSONB
- **Extensions:** PostGIS, pg_trgm, etc.
- **Full-Text Search:** Built-in FTS
- **Complex Queries:** Advanced SQL
- **Replication:** Streaming replication
- **Performance:** Excellent for complex queries
- **Free & Open Source:** PostgreSQL license
## Why Use PostgreSQL?
1. **Feature-Rich:** More features than MySQL
2. **Standards Compliant:** SQL standard
3. **JSON Support:** Native JSON queries
4. **Extensions:** Powerful ecosystem
5. **Reliability:** ACID compliant
6. **Performance:** Great for complex queries
7. **Community:** Strong development
## Configuration in AI-Homelab
AI-Homelab uses separate PostgreSQL instances for different applications.
## Official Resources
- **Website:** https://www.postgresql.org
- **Documentation:** https://www.postgresql.org/docs
- **Docker Hub:** https://hub.docker.com/_/postgres
## Database Instances
### GitLab Database (gitlab-postgres)
```yaml
gitlab-postgres:
image: postgres:14
container_name: gitlab-postgres
restart: unless-stopped
networks:
- traefik-network
environment:
- POSTGRES_DB=gitlabhq_production
- POSTGRES_USER=gitlab
- POSTGRES_PASSWORD=${GITLAB_DB_PASSWORD}
volumes:
- /opt/stacks/development/gitlab-postgres/data:/var/lib/postgresql/data
```
**Purpose:** GitLab platform database
**Location:** `/opt/stacks/development/gitlab-postgres/data`
### Development Database (postgres)
```yaml
postgres:
image: postgres:latest
container_name: postgres
restart: unless-stopped
networks:
- traefik-network
ports:
- "5432:5432"
environment:
- POSTGRES_USER=admin
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_DB=postgres
volumes:
- /opt/stacks/development/postgres/data:/var/lib/postgresql/data
```
**Purpose:** General development database
**Location:** `/opt/stacks/development/postgres/data`
## Management
### Access Database
```bash
# Connect to PostgreSQL
docker exec -it postgres psql -U admin -d postgres
# Or specific database
docker exec -it postgres psql -U admin -d dbname
```
### Common Commands
```sql
-- List databases
\l
-- Connect to database
\c database_name
-- List tables
\dt
-- Describe table
\d table_name
-- List users
\du
-- Quit
\q
```
### Backup Database
```bash
# Backup single database
docker exec postgres pg_dump -U admin dbname > backup.sql
# Backup all databases
docker exec postgres pg_dumpall -U admin > all_dbs_backup.sql
# Restore database
docker exec -i postgres psql -U admin -d dbname < backup.sql
```
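The dump commands above can be wrapped in a small script with rotation. `BACKUP_DIR` here is a temp directory so the sketch runs anywhere; use something like `/opt/backups/postgres` in production, and uncomment the real dump line:

```bash
# Nightly dump with 7-day rotation. BACKUP_DIR here is a temp directory
# so the sketch runs anywhere; use e.g. /opt/backups/postgres for real.
BACKUP_DIR="$(mktemp -d)"
BACKUP_FILE="$BACKUP_DIR/postgres-$(date +%Y%m%d).sql.gz"
# Real dump (needs the running container from this page):
# docker exec postgres pg_dumpall -U admin | gzip > "$BACKUP_FILE"
touch "$BACKUP_FILE"   # placeholder so the rotation step is runnable
find "$BACKUP_DIR" -name 'postgres-*.sql.gz' -mtime +7 -delete
ls "$BACKUP_DIR"
```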
### Create Database/User
```sql
-- Create database
CREATE DATABASE myapp;
-- Create user
CREATE USER myuser WITH PASSWORD 'password';
-- Grant privileges
GRANT ALL PRIVILEGES ON DATABASE myapp TO myuser;
```
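The same statements can be run non-interactively via `docker exec`; the container name and user match the examples on this page. The sketch prints the commands rather than executing them:

```bash
# Non-interactive versions of the statements above; container name and
# user match the examples on this page.
PSQL="docker exec postgres psql -U admin -d postgres -c"
echo "$PSQL \"CREATE DATABASE myapp;\""
echo "$PSQL \"GRANT ALL PRIVILEGES ON DATABASE myapp TO myuser;\""
```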
## Summary
PostgreSQL provides advanced database services for:
- GitLab (if using PostgreSQL backend)
- Development applications
- Applications needing JSON support
- Complex query requirements
- Extensions like PostGIS
**Key Points:**
- More advanced than MySQL
- Native JSON support
- Powerful extensions
- ACID compliance
- Excellent performance
- Standards compliant
- Free and open-source
**Remember:**
- Use strong passwords
- Regular backups critical
- Monitor disk space
- VACUUM periodically
- Use pgAdmin for GUI management
- Test backups work
- Separate containers for isolation
PostgreSQL powers your advanced applications!
# Prometheus - Metrics Database
## Table of Contents
- [Overview](#overview)
- [What is Prometheus?](#what-is-prometheus)
- [Why Use Prometheus?](#why-use-prometheus)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Setup](#setup)
## Overview
**Category:** Monitoring & Metrics
**Docker Image:** [prom/prometheus](https://hub.docker.com/r/prom/prometheus)
**Default Stack:** `monitoring.yml`
**Web UI:** `http://SERVER_IP:9090`
**Query Language:** PromQL
**Ports:** 9090
## What is Prometheus?
Prometheus is an open-source monitoring system with a time-series database. It scrapes metrics from configured targets at intervals, stores them, and allows powerful querying. Combined with Grafana for visualization, it's the industry standard for infrastructure monitoring.
### Key Features
- **Time-Series DB:** Store metrics over time
- **Pull Model:** Scrapes targets
- **PromQL:** Powerful query language
- **Service Discovery:** Auto-discover targets
- **Alerting:** Alert on conditions
- **Exporters:** Monitor anything
- **Highly Scalable:** Production-grade
- **Free & Open Source:** CNCF project
## Why Use Prometheus?
1. **Industry Standard:** Used by Google, etc.
2. **Powerful Queries:** PromQL flexibility
3. **Exporters:** Monitor everything
4. **Grafana Integration:** Beautiful graphs
5. **Alerting:** Prometheus Alertmanager
6. **Reliable:** Battle-tested
7. **Active Development:** CNCF project
## Configuration in AI-Homelab
```
/opt/stacks/monitoring/prometheus/
├── prometheus.yml   # Configuration
├── data/            # Time-series data
└── rules/           # Alert rules
```
### prometheus.yml
```yaml
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
- job_name: 'node-exporter'
static_configs:
- targets: ['node-exporter:9100']
- job_name: 'cadvisor'
static_configs:
- targets: ['cadvisor:8080']
- job_name: 'docker'
static_configs:
- targets: ['docker-proxy:2375']
alerting:
alertmanagers:
- static_configs:
- targets: ['alertmanager:9093']
```
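Before restarting Prometheus, the config can be validated with `promtool`, which ships inside the `prom/prometheus` image; the mount path matches the compose file above. The sketch prints the command rather than executing it:

```bash
# Validate prometheus.yml with promtool (bundled in the prom/prometheus
# image) before restarting; the mount path matches the compose file.
CHECK_CMD="docker run --rm -v /opt/stacks/monitoring/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml --entrypoint promtool prom/prometheus check config /etc/prometheus/prometheus.yml"
echo "$CHECK_CMD"
```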
## Official Resources
- **Website:** https://prometheus.io
- **Documentation:** https://prometheus.io/docs
- **PromQL:** https://prometheus.io/docs/prometheus/latest/querying/basics
- **Exporters:** https://prometheus.io/docs/instrumenting/exporters
## Educational Resources
### YouTube Videos
1. **TechWorld with Nana - Prometheus Monitoring**
- https://www.youtube.com/watch?v=h4Sl21AKiDg
- Complete Prometheus tutorial
- PromQL queries explained
- Grafana integration
2. **Techno Tim - Prometheus & Grafana**
- https://www.youtube.com/watch?v=9TJx7QTrTyo
- Docker setup
- Exporters configuration
- Dashboard creation
### Articles
1. **Prometheus Best Practices:** https://prometheus.io/docs/practices/naming
2. **PromQL Guide:** https://timber.io/blog/promql-for-humans
## Docker Configuration
```yaml
prometheus:
image: prom/prometheus:latest
container_name: prometheus
restart: unless-stopped
networks:
- traefik-network
ports:
- "9090:9090"
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--storage.tsdb.retention.time=30d'
- '--web.console.libraries=/usr/share/prometheus/console_libraries'
- '--web.console.templates=/usr/share/prometheus/consoles'
volumes:
- /opt/stacks/monitoring/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
- /opt/stacks/monitoring/prometheus/data:/prometheus
- /opt/stacks/monitoring/prometheus/rules:/etc/prometheus/rules
labels:
- "traefik.enable=true"
- "traefik.http.routers.prometheus.rule=Host(`prometheus.${DOMAIN}`)"
```
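The `--storage.tsdb.retention.time=30d` flag above bounds how much disk the time-series database uses. A rough back-of-envelope estimate (the ~1.7 bytes/sample figure is Prometheus's commonly cited average after compression, and the series count is a hypothetical homelab-sized value):

```python
def estimate_tsdb_bytes(series: int, scrape_interval_s: int, retention_days: int,
                        bytes_per_sample: float = 1.7) -> float:
    """Approximate on-disk size of Prometheus's TSDB."""
    samples = series * (retention_days * 86400 / scrape_interval_s)
    return samples * bytes_per_sample

# e.g. 5,000 series, 15s scrapes, 30 days retention
gb = estimate_tsdb_bytes(5000, 15, 30) / 1e9
print(f"~{gb:.1f} GB")  # → ~1.5 GB
```

Useful for the "monitor disk space" reminder below: doubling the retention or the scrape frequency doubles the estimate.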
## Setup
1. **Start Container:**
```bash
docker compose up -d prometheus
```
2. **Access UI:** `http://SERVER_IP:9090`
3. **Check Targets:**
- Status → Targets
- Verify exporters are "UP"
4. **Test Query:**
- Graph tab
- Query: `up`
- Shows which targets are up
5. **Example Queries:**
```promql
# CPU usage per container
rate(container_cpu_usage_seconds_total[5m])
# Memory usage
container_memory_usage_bytes
# Disk I/O
rate(node_disk_read_bytes_total[5m])
# Network traffic
rate(container_network_receive_bytes_total[5m])
```
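`rate()` in the queries above computes the per-second increase of a counter over the window. A simplified sketch of the idea, ignoring the counter-reset handling and range extrapolation that real Prometheus performs:

```python
def simple_rate(samples: list[tuple[float, float]]) -> float:
    """samples: (unix_ts, counter_value) pairs inside the range window."""
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    return (v1 - v0) / (t1 - t0)

# counter grew from 100 to 400 over 300 seconds
print(simple_rate([(0, 100.0), (150, 250.0), (300, 400.0)]))  # → 1.0
```

This is why `rate()` only makes sense on counters (monotonically increasing metrics), not gauges.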
## Summary
Prometheus is your metrics database offering:
- Time-series metric storage
- Powerful PromQL queries
- Exporter ecosystem
- Service discovery
- Alerting integration
- Grafana visualization
- Industry standard
- Free and open-source
**Perfect for:**
- Infrastructure monitoring
- Container metrics
- Application metrics
- Performance tracking
- Alerting
- Capacity planning
**Key Points:**
- Scrapes metrics from exporters
- Stores in time-series database
- PromQL for queries
- Integrates with Grafana
- Alertmanager for alerts
- 15s scrape interval default
- 30 day retention typical
**Remember:**
- Configure scrape targets
- Install exporters for data sources
- Use Grafana for visualization
- Set retention period
- Monitor disk space
- Learn PromQL basics
- Regular backups
Prometheus powers your monitoring infrastructure!

# Promtail - Log Shipper
## Table of Contents
- [Overview](#overview)
- [What is Promtail?](#what-is-promtail)
- [Why Use Promtail?](#why-use-promtail)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
## Overview
**Category:** Log Shipping
**Docker Image:** [grafana/promtail](https://hub.docker.com/r/grafana/promtail)
**Default Stack:** `monitoring.yml`
**Purpose:** Ship logs to Loki
**Ports:** 9080
## What is Promtail?
Promtail is the log shipping agent for Loki. It discovers log files, reads them, parses labels, and ships them to Loki. Think of it as the Filebeat/Fluentd equivalent for Loki.
### Key Features
- **Log Discovery:** Auto-find log files
- **Label Extraction:** Parse labels from logs
- **Tailing:** Follow log files
- **Position Tracking:** Don't lose logs
- **Multi-Tenant:** Send to multiple Loki instances
- **Docker Integration:** Scrape container logs
- **Pipeline Processing:** Transform logs
- **Free & Open Source:** CNCF project
## Why Use Promtail?
1. **Loki Native:** Designed for Loki
2. **Docker Aware:** Scrape container logs
3. **Label Extraction:** Smart parsing
4. **Reliable:** Position tracking
5. **Efficient:** Minimal overhead
6. **Simple:** Easy configuration
## Configuration in AI-Homelab
```
/opt/stacks/monitoring/promtail/
├── promtail-config.yml # Configuration
└── positions.yaml      # Log positions
```
## Official Resources
- **Website:** https://grafana.com/docs/loki/latest/clients/promtail
- **Documentation:** https://grafana.com/docs/loki/latest/clients/promtail
## Docker Configuration
```yaml
promtail:
image: grafana/promtail:latest
container_name: promtail
restart: unless-stopped
networks:
- traefik-network
ports:
- "9080:9080"
command: -config.file=/etc/promtail/promtail-config.yml
volumes:
- /opt/stacks/monitoring/promtail/promtail-config.yml:/etc/promtail/promtail-config.yml
- /var/lib/docker/containers:/var/lib/docker/containers:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- /opt/stacks/monitoring/promtail/positions.yaml:/positions.yaml
```
### promtail-config.yml
```yaml
server:
http_listen_port: 9080
grpc_listen_port: 0
positions:
filename: /positions.yaml
clients:
- url: http://loki:3100/loki/api/v1/push
scrape_configs:
- job_name: docker
docker_sd_configs:
- host: unix:///var/run/docker.sock
refresh_interval: 5s
relabel_configs:
- source_labels: ['__meta_docker_container_name']
regex: '/(.*)'
target_label: 'container'
- source_labels: ['__meta_docker_container_log_stream']
target_label: 'stream'
pipeline_stages:
- docker: {}
```
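The `relabel_configs` above use the regex `'/(.*)'` because Docker reports container names with a leading slash (e.g. `/prometheus`); the capture group strips it before the name becomes the `container` label. Prometheus-style relabeling anchors the regex to the full string, which `re.fullmatch` models here (container names are illustrative):

```python
import re

def relabel_container(meta_name: str, regex: str = r"/(.*)") -> str:
    """Mimic an anchored relabel: emit group 1 on match, else pass through."""
    m = re.fullmatch(regex, meta_name)
    return m.group(1) if m else meta_name

print(relabel_container("/prometheus"))  # → prometheus
```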
## Summary
Promtail ships logs to Loki offering:
- Docker container log collection
- Label extraction
- Position tracking
- Pipeline processing
- Loki integration
- Free and open-source
**Perfect for:**
- Sending Docker logs to Loki
- System log shipping
- Application log forwarding
- Centralized log collection
**Key Points:**
- Ships logs to Loki
- Scrapes Docker containers
- Tracks log positions
- Extracts labels
- Minimal resource usage
- Loki's preferred agent
**Remember:**
- Mount Docker socket
- Configure Loki URL
- Labels parsed from containers
- Position file prevents duplicates
- Low overhead
- Works seamlessly with Loki
Promtail sends logs to Loki!

# Prowlarr - Indexer Manager
## Table of Contents
- [Overview](#overview)
- [What is Prowlarr?](#what-is-prowlarr)
- [Why Use Prowlarr?](#why-use-prowlarr)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Initial Setup](#initial-setup)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Indexer Management
**Docker Image:** [linuxserver/prowlarr](https://hub.docker.com/r/linuxserver/prowlarr)
**Default Stack:** `media.yml`
**Web UI:** `https://prowlarr.${DOMAIN}` or `http://SERVER_IP:9696`
**Authentication:** Optional (configurable)
**Ports:** 9696
## What is Prowlarr?
Prowlarr is an indexer manager and proxy for Sonarr, Radarr, Readarr, and Lidarr. Instead of configuring indexers (torrent/usenet sources) separately in each *arr app, Prowlarr manages them centrally and syncs automatically. It's the "one indexer to rule them all" for your media automation stack.
### Key Features
- **Centralized Indexer Management:** Configure once, use everywhere
- **Automatic Sync:** Pushes indexers to all *arr apps
- **Built-in Indexers:** 500+ indexers included
- **Custom Indexers:** Add any indexer via definitions
- **Search Aggregation:** Search across all indexers at once
- **Stats & History:** Track indexer performance
- **App Sync:** Connects with Sonarr, Radarr, Readarr, Lidarr
- **FlareSolverr Integration:** Bypass Cloudflare protection
- **Download Client Support:** Direct downloads (optional)
- **Notification Support:** Discord, Telegram, etc.
## Why Use Prowlarr?
1. **DRY Principle:** Configure indexers once, not in every app
2. **Centralized Management:** Single source of truth
3. **Automatic Sync:** Updates push to all connected apps
4. **Performance Monitoring:** See which indexers work best
5. **Easier Maintenance:** Update indexer settings in one place
6. **Search Aggregation:** Test searches across all indexers
7. **FlareSolverr Support:** Bypass protections automatically
8. **History Tracking:** Monitor what's being searched
9. **App Integration:** Seamless *arr stack integration
10. **Free & Open Source:** Part of the Servarr family
## How It Works
```
Prowlarr (Central Hub)
        ↓
Manages 500+ Indexers
(1337x, RARBG, YTS, etc.)
        ↓
Automatically Syncs To:
  - Sonarr (TV)
  - Radarr (Movies)
  - Readarr (Books)
  - Lidarr (Music)
        ↓
*arr Apps Search via Prowlarr
        ↓
Prowlarr Queries All Indexers
        ↓
Returns Aggregated Results
        ↓
*arr Apps Download Best Match
```
### The Problem Prowlarr Solves
**Before Prowlarr:**
- Configure indexers in Sonarr
- Configure same indexers in Radarr
- Configure same indexers in Readarr
- Configure same indexers in Lidarr
- Update indexer? Change in 4 places!
**With Prowlarr:**
- Configure indexers once in Prowlarr
- Auto-sync to all apps
- Update once, updates everywhere
- Centralized statistics and management
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/media/prowlarr/config/ # Prowlarr configuration
```
### Environment Variables
```bash
# User permissions
PUID=1000
PGID=1000
# Timezone
TZ=America/New_York
```
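`PUID` and `PGID` above should match the user that owns the config and media directories; a quick way to look them up (run as that user):

```shell
id -u   # numeric user ID  → use as PUID
id -g   # numeric group ID → use as PGID
```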
## Official Resources
- **Website:** https://prowlarr.com
- **Wiki:** https://wiki.servarr.com/prowlarr
- **GitHub:** https://github.com/Prowlarr/Prowlarr
- **Discord:** https://discord.gg/prowlarr
- **Reddit:** https://reddit.com/r/prowlarr
- **Docker Hub:** https://hub.docker.com/r/linuxserver/prowlarr
## Educational Resources
### Videos
- [Prowlarr Setup Guide (Techno Tim)](https://www.youtube.com/watch?v=ZI__3VNlQGM)
- [Complete *arr Stack with Prowlarr](https://www.youtube.com/results?search_query=prowlarr+sonarr+radarr+setup)
- [Prowlarr Indexer Setup](https://www.youtube.com/results?search_query=prowlarr+indexers)
- [FlareSolverr with Prowlarr](https://www.youtube.com/results?search_query=flaresolverr+prowlarr)
### Articles & Guides
- [Official Documentation](https://wiki.servarr.com/prowlarr)
- [Servarr Wiki](https://wiki.servarr.com/)
- [Indexer Setup Guide](https://wiki.servarr.com/prowlarr/indexers)
- [Application Sync Guide](https://wiki.servarr.com/prowlarr/settings#applications)
### Concepts to Learn
- **Indexers:** Sources for torrents/usenet (1337x, RARBG, etc.)
- **Trackers:** BitTorrent indexers
- **Usenet:** Alternative to torrents (requires subscription)
- **Public vs Private:** Indexer access types
- **API Keys:** Authentication between services
- **FlareSolverr:** Cloudflare bypass proxy
- **Categories:** Media type classifications
- **Sync Profiles:** What syncs to which app
## Docker Configuration
### Complete Service Definition
```yaml
prowlarr:
image: linuxserver/prowlarr:latest
container_name: prowlarr
restart: unless-stopped
networks:
- traefik-network
ports:
- "9696:9696"
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
volumes:
- /opt/stacks/media/prowlarr/config:/config
labels:
- "traefik.enable=true"
- "traefik.http.routers.prowlarr.rule=Host(`prowlarr.${DOMAIN}`)"
- "traefik.http.routers.prowlarr.entrypoints=websecure"
- "traefik.http.routers.prowlarr.tls.certresolver=letsencrypt"
- "traefik.http.routers.prowlarr.middlewares=authelia@docker"
- "traefik.http.services.prowlarr.loadbalancer.server.port=9696"
```
### With FlareSolverr (for Cloudflare bypass)
```yaml
prowlarr:
image: linuxserver/prowlarr:latest
container_name: prowlarr
# ... (same as above)
depends_on:
- flaresolverr
flaresolverr:
image: ghcr.io/flaresolverr/flaresolverr:latest
container_name: flaresolverr
restart: unless-stopped
networks:
- traefik-network
ports:
- "8191:8191"
environment:
- LOG_LEVEL=info
```
## Initial Setup
### First Access
1. **Start Container:**
```bash
docker compose up -d prowlarr
```
2. **Access Web UI:**
- Local: `http://SERVER_IP:9696`
- Domain: `https://prowlarr.yourdomain.com`
3. **Initial Configuration:**
- Settings → Apps (connect *arr apps)
- Indexers → Add Indexers
- Settings → General (authentication)
### Connecting *arr Applications
**Settings → Apps → Add Application:**
#### Add Sonarr
1. **Name:** Sonarr
2. **Sync Level:** Add and Remove Only (recommended)
3. **Prowlarr Server:** `http://prowlarr:9696`
4. **Sonarr Server:** `http://sonarr:8989`
5. **API Key:** From Sonarr → Settings → General → API Key
6. **Sync Categories:** TV/SD, TV/HD, TV/UHD
7. **Test → Save**
#### Add Radarr
1. **Name:** Radarr
2. **Sync Level:** Add and Remove Only
3. **Prowlarr Server:** `http://prowlarr:9696`
4. **Radarr Server:** `http://radarr:7878`
5. **API Key:** From Radarr → Settings → General → API Key
6. **Sync Categories:** Movies/SD, Movies/HD, Movies/UHD
7. **Test → Save**
#### Add Readarr (if using)
1. **Name:** Readarr
2. **Server:** `http://readarr:8787`
3. **API Key:** From Readarr
4. **Categories:** Books/Ebook, Books/Audiobook
#### Add Lidarr (if using)
1. **Name:** Lidarr
2. **Server:** `http://lidarr:8686`
3. **API Key:** From Lidarr
4. **Categories:** Audio/MP3, Audio/Lossless
### Adding Indexers
**Indexers → Add Indexer:**
#### Popular Public Trackers
**1337x:**
1. Search: "1337x"
2. Select: 1337x
3. Base URL: (default)
4. API Key: (none for public)
5. Categories: Select relevant
6. Test → Save
**YTS:**
1. Search: "YTS"
2. Select: YTS
3. Configure categories
4. Test → Save
**EZTV:**
1. Search: "EZTV"
2. Select: EZTV (TV shows)
3. Configure
4. Test → Save
**Common Public Indexers:**
- 1337x (General)
- YTS (Movies, small file sizes)
- EZTV (TV Shows)
- RARBG (if still available)
- The Pirate Bay
- Nyaa (Anime)
- LimeTorrents
#### Private Trackers
**Requires Account:**
1. Register on tracker website
2. Get API key or credentials
3. Add in Prowlarr with credentials
4. Test → Save
**Popular Private Trackers:**
- BroadcastHe.Net (TV)
- PassThePopcorn (Movies)
- IPTorrents (General)
- TorrentLeech (General)
#### Usenet Indexers
**Requires Usenet Provider:**
1. Subscribe to usenet provider (Newshosting, etc.)
2. Subscribe to indexer (NZBGeek, etc.)
3. Add indexer with API key
4. Configure download client (SABnzbd, NZBGet)
### Auto-Sync Verification
**After adding indexers:**
1. **Check Sonarr:**
- Settings → Indexers
- Should see all indexers from Prowlarr
- Each with "(Prowlarr)" suffix
2. **Check Radarr:**
- Settings → Indexers
- Should see same indexers
- Auto-synced from Prowlarr
3. **Test Search:**
- Sonarr → Add Series → Search
- Should find results from all indexers
## Advanced Topics
### FlareSolverr Integration
Some indexers use Cloudflare protection. FlareSolverr bypasses this.
**Setup:**
1. **Add FlareSolverr Container:**
```yaml
flaresolverr:
image: ghcr.io/flaresolverr/flaresolverr:latest
container_name: flaresolverr
restart: unless-stopped
ports:
- "8191:8191"
environment:
- LOG_LEVEL=info
```
2. **Configure in Prowlarr:**
- Settings → Indexers → Scroll down
- FlareSolverr Host: `http://flaresolverr:8191`
- Test
3. **Tag Indexers:**
- Edit indexer
- Tags → Add "flaresolverr"
- Indexers with tag will use FlareSolverr
**When to Use:**
- Indexer returns Cloudflare errors
- "Access Denied" or "Checking your browser"
- DDoS protection pages
### Sync Profiles
**Settings → Apps → Sync Profiles:**
Control what syncs where:
**Standard Profile:**
- Sync categories: All
- Minimum seeders: 1
- Enable RSS: Yes
- Enable Automatic Search: Yes
- Enable Interactive Search: Yes
**Custom Profiles:**
- Create profiles for different apps
- Example: 4K-only profile for Radarr-4K
### Indexer Categories
**Important for Proper Sync:**
- **Movies:** Movies/HD, Movies/UHD, Movies/SD
- **TV:** TV/HD, TV/UHD, TV/SD
- **Music:** Audio/MP3, Audio/Lossless
- **Books:** Books/Ebook, Books/Audiobook
- **Anime:** TV/Anime, Movies/Anime
**Ensure Correct Categories:**
- Indexer must have correct categories
- Apps filter by category
- Mismatched categories = no results
### Statistics & History
**System → Status:**
- Indexer response times
- Success rates
- Error counts
**History:**
- View all searches
- Track performance
- Debug issues
**Indexer Stats:**
- Indexers → View stats column
- Grab count
- Query count
- Failure rate
### Custom Indexer Definitions
**Add Unlisted Indexers:**
1. **Find Indexer Definition:**
- Prowlarr GitHub → Definitions
- Or community-submitted
2. **Add Definition File:**
- Copy YAML definition
- Place in `/config/Definitions/Custom/`
- Restart Prowlarr
3. **Add Indexer:**
- Should appear in list
- Configure as normal
### Download Clients (Optional)
**Prowlarr can send directly to download clients:**
**Settings → Download Clients → Add:**
Example: qBittorrent
- Host: `gluetun` (if via VPN)
- Port: `8080`
- Category: `prowlarr-manual`
**Use Case:**
- Manual downloads from Prowlarr
- Not needed for *arr apps (they have own clients)
### Notifications
**Settings → Connect:**
Get notified about:
- Indexer health issues
- Grab events
- Application updates
**Popular Notifications:**
- Discord
- Telegram
- Pushover
- Email
- Custom webhook
## Troubleshooting
### Prowlarr Not Accessible
```bash
# Check container status
docker ps | grep prowlarr
# View logs
docker logs prowlarr
# Test access
curl http://localhost:9696
# Check network
docker network inspect traefik-network
```
### Indexers Not Syncing to Apps
```bash
# Check app connection
# Settings → Apps → Test
# Check API keys match
# Prowlarr API key vs app's API key
# Check network connectivity
docker exec prowlarr ping -c 3 sonarr
docker exec prowlarr ping -c 3 radarr
# Force sync
# Settings → Apps → Select app → Sync App Indexers
# View logs
docker logs prowlarr | grep -i sync
```
### Indexer Failing
```bash
# Test indexer
# Indexers → Select indexer → Test
# Common issues:
# - Indexer down
# - Cloudflare protection (need FlareSolverr)
# - IP banned (too many requests)
# - API key invalid
# Check indexer status
# Visit indexer website directly
# Enable FlareSolverr if Cloudflare error
```
### FlareSolverr Not Working
```bash
# Check FlareSolverr status
docker logs flaresolverr
# Test FlareSolverr
curl http://localhost:8191/health
# Ensure Prowlarr can reach FlareSolverr
docker exec prowlarr curl http://flaresolverr:8191/health
# Verify indexer tagged
# Indexer → Edit → Tags → flaresolverr
# Check FlareSolverr logs during indexer test
docker logs -f flaresolverr
```
### No Search Results
```bash
# Check indexers enabled
# Indexers → Ensure not disabled
# Test indexers
# Indexers → Test All
# Check categories
# Indexer categories must match app needs
# Manual search
# Prowlarr → Search → Test query
# Should return results
# Check app logs
docker logs sonarr | grep prowlarr
docker logs radarr | grep prowlarr
```
### Database Corruption
```bash
# Stop Prowlarr
docker stop prowlarr
# Backup database
cp /opt/stacks/media/prowlarr/config/prowlarr.db /opt/backups/
# Check integrity
sqlite3 /opt/stacks/media/prowlarr/config/prowlarr.db "PRAGMA integrity_check;"
# If corrupted, restore or rebuild
# rm /opt/stacks/media/prowlarr/config/prowlarr.db
docker start prowlarr
# Prowlarr will recreate database (need to reconfigure)
```
## Performance Optimization
### Rate Limiting
**Settings → Indexers → Options:**
- Indexer download limit: 60 per day (per indexer)
- Prevents IP bans
- Adjust based on indexer limits
### Query Limits
**Per Indexer:**
- Edit indexer → Query Limit
- Requests per day
- Prevents abuse
### Caching
**Prowlarr caches results:**
- Reduces duplicate queries
- Improves response time
- Automatic management
### Database Maintenance
```bash
# Stop Prowlarr
docker stop prowlarr
# Vacuum database
sqlite3 /opt/stacks/media/prowlarr/config/prowlarr.db "VACUUM;"
# Restart
docker start prowlarr
```
## Security Best Practices
1. **Enable Authentication:**
- Settings → General → Security
- Authentication: Required
- Username and password
2. **API Key Security:**
- Keep API keys secret
- Regenerate if compromised
- Settings → General → API Key
3. **Use Reverse Proxy:**
- Traefik + Authelia
- Don't expose 9696 publicly
4. **Indexer Credentials:**
- Secure storage
- Use API keys over passwords
- Rotate periodically
5. **Monitor Access:**
- Check history for unusual activity
- Review indexer stats
6. **VPN for Public Trackers:**
- While Prowlarr doesn't download
- Apps behind VPN still benefit
7. **Regular Updates:**
- Keep Prowlarr current
- Check release notes
## Backup Strategy
**Critical Files:**
```bash
/opt/stacks/media/prowlarr/config/prowlarr.db # Database
/opt/stacks/media/prowlarr/config/config.xml # Settings
/opt/stacks/media/prowlarr/config/Backup/ # Auto backups
```
**Backup Script:**
```bash
#!/bin/bash
DATE=$(date +%Y%m%d)
BACKUP_DIR=/opt/backups/prowlarr
mkdir -p "$BACKUP_DIR"
# Backup database
cp /opt/stacks/media/prowlarr/config/prowlarr.db $BACKUP_DIR/prowlarr-$DATE.db
# Keep last 7 days
find $BACKUP_DIR -name "prowlarr-*.db" -mtime +7 -delete
```
**Restore:**
```bash
docker stop prowlarr
cp /opt/backups/prowlarr/prowlarr-20240101.db /opt/stacks/media/prowlarr/config/prowlarr.db
docker start prowlarr
```
## Integration with Other Services
### Prowlarr + Sonarr + Radarr
- Central indexer management
- Auto-sync to all apps
- Single configuration point
### Prowlarr + FlareSolverr
- Bypass Cloudflare protection
- Access protected indexers
- Automatic proxy usage
### Prowlarr + VPN
- Prowlarr itself doesn't need VPN
- Download clients (qBittorrent) need VPN
- Indexer searches are legal
## Summary
Prowlarr is the essential indexer manager for *arr stacks offering:
- Centralized indexer management
- Automatic sync to all *arr apps
- 500+ built-in indexers
- FlareSolverr integration
- Performance statistics
- History tracking
- Free and open-source
**Perfect for:**
- *arr stack users
- Multiple *arr applications
- Centralized management needs
- Indexer performance monitoring
- Simplified configuration
**Key Benefits:**
- Configure once, use everywhere
- Automatic sync to apps
- Single source of truth
- Easy maintenance
- Performance monitoring
- Cloudflare bypass support
**Remember:**
- Add Prowlarr first, then apps
- Apps auto-receive indexers
- Use FlareSolverr for Cloudflare
- Monitor indexer health
- Respect indexer rate limits
- Keep API keys secure
- Regular backups essential
**Essential Stack:**
```
Prowlarr (Indexer Manager)
Sonarr (TV) + Radarr (Movies)
qBittorrent (via Gluetun VPN)
Plex/Jellyfin (Media Server)
```
Prowlarr is the glue that makes the *arr stack work seamlessly!

# qBittorrent - Torrent Client
## Table of Contents
- [Overview](#overview)
- [What is qBittorrent?](#what-is-qbittorrent)
- [Why Use qBittorrent?](#why-use-qbittorrent)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Initial Setup](#initial-setup)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Download Client
**Docker Image:** [linuxserver/qbittorrent](https://hub.docker.com/r/linuxserver/qbittorrent)
**Default Stack:** `media.yml`
**Network Mode:** Via Gluetun (VPN container)
**Web UI:** `http://SERVER_IP:8081` (via Gluetun)
**Authentication:** Username/password (default: admin/adminadmin)
**Ports:** 8081 (WebUI via Gluetun), 6881 (incoming connections via Gluetun)
## What is qBittorrent?
qBittorrent is a free, open-source BitTorrent client that serves as a complete replacement for µTorrent. It's lightweight, feature-rich, and most importantly: no ads, no bundled malware, and respects your privacy. In AI-Homelab, it runs through the Gluetun VPN container for secure, anonymous downloading.
### Key Features
- **Free & Open Source:** No ads, no tracking
- **Web UI:** Remote management via browser
- **RSS Support:** Auto-download from RSS feeds
- **Sequential Download:** Stream while downloading
- **Search Engine:** Built-in torrent search
- **IP Filtering:** Block unwanted IPs
- **Bandwidth Control:** Limit upload/download speeds
- **Category Management:** Organize torrents
- **Label System:** Tag and filter
- **Connection Encryption:** Secure traffic
- **UPnP/NAT-PMP:** Auto port forwarding
- **Tracker Management:** Add, edit, remove trackers
## Why Use qBittorrent?
1. **No Ads:** Unlike µTorrent, completely ad-free
2. **Open Source:** Transparent, community-audited code
3. **Free Forever:** No premium versions or nag screens
4. **Lightweight:** Minimal resource usage
5. **Feature-Rich:** Everything you need built-in
6. **Active Development:** Regular updates
7. **Cross-Platform:** Windows, Mac, Linux
8. **API Support:** Integrates with Sonarr/Radarr
9. **Privacy Focused:** No telemetry or tracking
10. **VPN Friendly:** Works perfectly with Gluetun
## How It Works
```
Sonarr/Radarr → qBittorrent (via Gluetun VPN)
        ↓
Torrent Download (all traffic via VPN)
        ↓
Download Complete
        ↓
Sonarr/Radarr Import File
        ↓
Move to Media Library (/mnt/media/tv or /movies)
        ↓
Plex/Jellyfin Scan
        ↓
Media Available
```
### VPN Integration
**In AI-Homelab:**
- qBittorrent runs **inside** Gluetun container network
- ALL qBittorrent traffic routes through VPN
- If VPN drops, qBittorrent has no internet (kill switch)
- Protects your real IP address
- Bypasses ISP throttling
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/media/qbittorrent/config/ # qBittorrent configuration
/mnt/downloads/ # Download directory
├── complete/ # Completed downloads
├── incomplete/ # In-progress downloads
└── torrents/ # Torrent files
```
### Environment Variables
```bash
# User permissions (must match media owner)
PUID=1000
PGID=1000
# Timezone
TZ=America/New_York
# Web UI port (inside Gluetun network)
WEBUI_PORT=8080
```
## Official Resources
- **Website:** https://www.qbittorrent.org
- **Documentation:** https://github.com/qbittorrent/qBittorrent/wiki
- **GitHub:** https://github.com/qbittorrent/qBittorrent
- **Forums:** https://qbforums.shiki.hu
- **Reddit:** https://reddit.com/r/qBittorrent
- **Docker Hub:** https://hub.docker.com/r/linuxserver/qbittorrent
## Educational Resources
### Videos
- [qBittorrent Setup Guide (Techno Tim)](https://www.youtube.com/watch?v=9QS9xjz6W-k)
- [qBittorrent + VPN Setup](https://www.youtube.com/results?search_query=qbittorrent+vpn+docker)
- [qBittorrent Best Settings](https://www.youtube.com/results?search_query=qbittorrent+best+settings)
- [Sonarr/Radarr + qBittorrent](https://www.youtube.com/results?search_query=sonarr+radarr+qbittorrent)
### Articles & Guides
- [Official Wiki](https://github.com/qbittorrent/qBittorrent/wiki)
- [LinuxServer.io qBittorrent](https://docs.linuxserver.io/images/docker-qbittorrent)
- [Optimal Settings Guide](https://github.com/qbittorrent/qBittorrent/wiki/Optimal-Settings)
- [Category Management](https://github.com/qbittorrent/qBittorrent/wiki/WebUI-API-%28qBittorrent-4.1%29)
### Concepts to Learn
- **Seeders/Leechers:** Users uploading/downloading
- **Ratio:** Upload/download ratio
- **Seeding:** Sharing completed files
- **Private Trackers:** Require ratio maintenance
- **Port Forwarding:** Improves connection speed
- **NAT-PMP/UPnP:** Automatic port mapping
- **DHT:** Decentralized peer discovery
- **PEX:** Peer exchange protocol
- **Magnet Links:** Torrent links without .torrent files
## Docker Configuration
### Standard Configuration (with Gluetun VPN)
In AI-Homelab, qBittorrent uses Gluetun's network:
```yaml
gluetun:
image: qmcgaw/gluetun:latest
container_name: gluetun
cap_add:
- NET_ADMIN
devices:
- /dev/net/tun
ports:
- "8081:8080" # qBittorrent WebUI (host:container)
- "6881:6881" # qBittorrent incoming
- "6881:6881/udp"
environment:
- VPN_SERVICE_PROVIDER=surfshark
- VPN_TYPE=wireguard
- WIREGUARD_PRIVATE_KEY=${SURFSHARK_PRIVATE_KEY}
- WIREGUARD_ADDRESSES=${SURFSHARK_ADDRESS}
- SERVER_COUNTRIES=Netherlands
volumes:
- /opt/stacks/core/gluetun:/gluetun
qbittorrent:
image: linuxserver/qbittorrent:latest
container_name: qbittorrent
restart: unless-stopped
network_mode: "service:gluetun" # Uses Gluetun's network
depends_on:
- gluetun
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
      - WEBUI_PORT=8080
volumes:
- /opt/stacks/media/qbittorrent/config:/config
- /mnt/downloads:/downloads
```
**Key Points:**
- `network_mode: "service:gluetun"` routes all traffic through VPN
- Ports exposed on Gluetun, not qBittorrent
- No internet access if VPN down (kill switch)
- Access via `http://SERVER_IP:8081` (Gluetun's port)
### Standalone Configuration (without VPN - NOT RECOMMENDED)
```yaml
qbittorrent:
image: linuxserver/qbittorrent:latest
container_name: qbittorrent
restart: unless-stopped
ports:
- "8081:8080"
- "6881:6881"
- "6881:6881/udp"
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
volumes:
- /opt/stacks/media/qbittorrent/config:/config
- /mnt/downloads:/downloads
```
**Warning:** This exposes your real IP address!
## Initial Setup
### First Access
1. **Start Containers:**
```bash
docker compose up -d gluetun qbittorrent
```
2. **Verify VPN Connection:**
```bash
docker logs gluetun | grep -i "connected"
# Should see "Connected" message
```
3. **Get Initial Password:**
```bash
docker logs qbittorrent | grep -i "temporary password"
# Look for: "The WebUI administrator username is: admin"
# "The WebUI administrator password is: XXXXXXXX"
```
4. **Access Web UI:**
- Navigate to: `http://SERVER_IP:8081`
- Username: `admin`
- Password: From logs above
5. **Change Default Password:**
- Tools → Options → Web UI → Authentication
- Change password immediately!
### Essential Settings
**Tools → Options:**
#### Downloads
**When adding torrent:**
- Create subfolder: ✓ Enabled
- Start torrent: ✓ Enabled
**Saving Management:**
- Default Torrent Management Mode: Automatic
- When Torrent Category changed: Relocate
- When Default Save Path changed: Relocate affected
- When Category Save Path changed: Relocate
**Default Save Path:** `/downloads/incomplete`
**Keep incomplete torrents in:** `/downloads/incomplete`
**Copy .torrent files to:** `/downloads/torrents`
**Copy .torrent files for finished downloads to:** `/downloads/torrents/complete`
#### Connection
**Listening Port:**
- Port used for incoming connections: `6881`
- Use UPnP / NAT-PMP: ✓ Enabled (if VPN supports)
**Connections Limits:**
- Global maximum number of connections: `500`
- Maximum number of connections per torrent: `100`
- Global maximum upload slots: `20`
- Maximum upload slots per torrent: `4`
#### Speed
**Global Rate Limits:**
- Upload: `10000 KiB/s` (or lower to maintain ratio)
- Download: `0` (unlimited, or set based on bandwidth)
**Alternative Rate Limits:** (optional)
- Enable for scheduled slow periods
- Upload: `1000 KiB/s`
- Download: `5000 KiB/s`
- Schedule: Weekdays during work hours
#### BitTorrent
**Privacy:**
- ✓ Enable DHT (decentralized network)
- ✓ Enable PEX (peer exchange)
- ✓ Enable Local Peer Discovery
- Encryption mode: **Require encryption** (important!)
**Seeding Limits:**
- When ratio reaches: `2.0` (or required by tracker)
- Then: **Pause torrent**
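The ratio limit above is simply uploaded bytes divided by downloaded bytes; when it crosses the threshold, the torrent pauses. A toy sketch of that check (the numbers are illustrative, and real qBittorrent tracks more state than this):

```python
def should_pause(uploaded: int, downloaded: int, ratio_limit: float = 2.0) -> bool:
    """Rough model of 'when ratio reaches X, then pause torrent'."""
    if downloaded == 0:  # e.g. cross-seeded torrents with nothing downloaded
        return False
    return uploaded / downloaded >= ratio_limit

print(should_pause(uploaded=20_000, downloaded=10_000))  # → True
```

Private trackers often require a minimum ratio, so set the limit to whatever the tracker mandates rather than pausing early.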
#### Web UI
**Authentication:**
- Username: `admin` (or custom)
- Password: **Strong password!**
- Bypass authentication for clients on localhost: ✗ Disabled
- Bypass authentication for clients in whitelisted IP subnets: (optional)
- `192.168.1.0/24` (your local network)
**Security:**
- Enable clickjacking protection: ✓
- Enable Cross-Site Request Forgery protection: ✓
- Enable Host header validation: ✓
**Custom HTTP Headers:** (if behind reverse proxy)
```
X-Frame-Options: SAMEORIGIN
```
#### Advanced
**Network Interface:**
- Leave blank (uses default via Gluetun)
**Optional IP address to bind to:**
- Leave blank (handled by Gluetun)
**Validate HTTPS tracker certificates:** ✓ Enabled
### Category Setup (for Sonarr/Radarr)
**Categories → Add category:**
1. **tv-sonarr**
- Save path: `/downloads/complete/tv-sonarr`
- Download path: `/downloads/incomplete`
2. **movies-radarr**
- Save path: `/downloads/complete/movies-radarr`
- Download path: `/downloads/incomplete`
3. **music-lidarr**
- Save path: `/downloads/complete/music-lidarr`
4. **books-readarr**
- Save path: `/downloads/complete/books-readarr`
*arr apps will automatically use these categories when sending torrents.
## Advanced Topics
### VPN Kill Switch
**How It Works:**
- qBittorrent uses Gluetun's network
- If VPN disconnects, qBittorrent has no internet
- Automatic kill switch, no configuration needed
**Verify:**
```bash
# Check IP inside qBittorrent container
docker exec qbittorrent curl ifconfig.me
# Should show VPN IP, not your real IP
# Stop VPN
docker stop gluetun
# Try again
docker exec qbittorrent curl ifconfig.me
# Should fail (no internet)
```
### Port Forwarding (via VPN)
Some VPN providers support port forwarding for better speeds:
**Gluetun Configuration:**
```yaml
gluetun:
environment:
- VPN_PORT_FORWARDING=on
```
**qBittorrent Configuration:**
- Tools → Options → Connection
- Port used for incoming: Check Gluetun logs for forwarded port
- Random port: Disabled
**Check Forwarded Port:**
```bash
docker logs gluetun | grep -i "port"
# Look for: "port forwarded is XXXXX"
```
**Update qBittorrent:**
- Tools → Options → Connection → Port: XXXXX
### RSS Auto-Downloads
**Automatically download from RSS feeds:**
**View → RSS Reader:**
1. **Add RSS feed:**
- New subscription → Feed URL
- Example: TV show RSS from tracker
2. **Create Download Rule:**
- RSS Downloader
- Add rule
- Must Contain: Episode naming pattern
- Must Not Contain: Unwanted qualities
- Assign Category: tv-sonarr
- Save to directory: `/downloads/complete/tv-sonarr`
**Example Rule:**
- Rule name: "Breaking Bad 1080p"
- Must Contain: `Breaking.Bad S* 1080p`
- Must Not Contain: `720p|HDTV`
- Category: tv-sonarr
**Note:** Sonarr/Radarr handle RSS better. Use this for non-automated content.
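For reference, the rule semantics can be approximated in a few lines: whitespace-separated wildcard tokens are ANDed in "Must Contain", while "|" separates OR alternatives in "Must Not Contain". This is an approximation of qBittorrent's matcher, not its exact implementation:

```python
import fnmatch

def rule_matches(title: str, must_contain: str, must_not_contain: str = "") -> bool:
    """Approximate qBittorrent RSS downloader rule matching."""
    lowered = title.lower()

    def token_hits(token: str) -> bool:
        # Wrap in '*' so the wildcard token may appear anywhere in the title
        return fnmatch.fnmatch(lowered, f"*{token.lower()}*")

    # "Must Contain": every whitespace-separated token must match (AND)
    if must_contain and not all(token_hits(t) for t in must_contain.split()):
        return False
    # "Must Not Contain": any '|'-separated alternative rejects the title (OR)
    if must_not_contain and any(
        token_hits(alt.strip()) for alt in must_not_contain.split("|") if alt.strip()
    ):
        return False
    return True
```

With the example rule above, `rule_matches("Breaking.Bad.S05E14.1080p.WEB-DL", "Breaking.Bad S* 1080p", "720p|HDTV")` accepts, while a 720p HDTV release is rejected.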
### Search Engine
**Built-in torrent search:**
**View → Search Engine → Search plugins:**
1. **Install plugins:**
- Check for updates
- Install popular plugins (1337x, RARBG, etc.)
2. **Search:**
- Enter search term
- Select plugin(s)
- Search
- Add desired torrent
**Note:** Prowlarr + Sonarr/Radarr provide better search. This is for manual downloads.
### IP Filtering
**Block unwanted IPs (anti-piracy monitors):**
**Tools → Options → Connection:**
**IP Filtering:**
- ✓ Filter path (.dat, .p2p, .p2b): `/config/blocked-ips.txt`
- ✓ Apply to trackers
**Get Block List:**
```bash
# Download blocklist (save with a .gz extension so gunzip can unpack it)
docker exec qbittorrent wget -O /config/blocked-ips.txt.gz \
  https://github.com/Naunter/BT_BlockLists/raw/master/bt_blocklists.gz
# Extract to /config/blocked-ips.txt
docker exec qbittorrent gunzip -f /config/blocked-ips.txt.gz
```
**Auto-update (cron):**
```bash
0 3 * * 0 docker exec qbittorrent wget -O /config/blocked-ips.txt.gz https://github.com/Naunter/BT_BlockLists/raw/master/bt_blocklists.gz && docker exec qbittorrent gunzip -f /config/blocked-ips.txt.gz
```
### Anonymous Mode
**Tools → Options → BitTorrent → Privacy:**
- ✓ Enable anonymous mode
**What it does:**
- Disables DHT
- Disables PEX
- Disables LPD
- Only uses trackers
**Use When:**
- Private trackers (required)
- Maximum privacy
- Tracker-only operation
### Sequential Download
**For streaming while downloading:**
1. Right-click torrent
2. ✓ Download in sequential order
3. ✓ Download first and last pieces first
**Use Case:**
- Stream video while downloading
- Works with Plex/Jellyfin
**Note:** Can affect swarm health, use sparingly.
### Scheduler
**Schedule speed limits:**
**Tools → Options → Speed → Alternative Rate Limits:**
1. Enable alternative rate limits
2. Set slower limits
3. Schedule: When to activate
- Example: Weekdays 9am-5pm (reduce usage during work)
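The decision qBittorrent makes each minute can be sketched as a tiny function (the example schedule from above, hardcoded purely as an illustration):

```python
from datetime import datetime

def alt_limits_active(now: datetime, start_hour: int = 9, end_hour: int = 17) -> bool:
    """True when the alternative (slower) rate limits should apply:
    weekdays between start_hour and end_hour."""
    if now.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        return False
    return start_hour <= now.hour < end_hour
```
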
### Tags & Labels
**Organize torrents:**
**Add Tag:**
1. Right-click torrent → Add tags
2. Create custom tags
3. Filter by tag in left sidebar
**Use Cases:**
- Priority downloads
- Personal vs automated
- Quality levels
- Sources
## Troubleshooting
### qBittorrent Not Accessible
```bash
# Check containers
docker ps | grep -E "gluetun|qbittorrent"
# Check logs
docker logs gluetun | tail -20
docker logs qbittorrent | tail -20
# Check VPN connection
docker logs gluetun | grep -i "connected"
# Test access
curl http://localhost:8081
```
### Slow Download Speeds
```bash
# Check VPN connection
docker exec qbittorrent curl ifconfig.me
# Verify it's VPN IP
# Test VPN speed (speedtest-cli is not bundled with the Gluetun image;
# install it first or run a speed test from another container on the VPN)
docker exec gluetun speedtest-cli
# Common fixes:
# 1. Enable port forwarding (VPN provider)
# 2. Different VPN server location
# 3. Increase connection limits
# 4. Check seeders/leechers count
# 5. Try different torrent
```
**Settings to Check:**
- Tools → Options → Connection → Port forwarding
- Tools → Options → Connection → Connection limits (increase)
- Tools → Options → Speed → Remove limits temporarily
### VPN Kill Switch Not Working
```bash
# Verify network mode
docker inspect qbittorrent | grep NetworkMode
# Should show: "container:gluetun"
# Test kill switch
docker stop gluetun
docker exec qbittorrent curl ifconfig.me
# Should fail with "Could not resolve host"
# If it shows your real IP, network mode is wrong!
```
### Permission Errors
```bash
# Check download directory
ls -la /mnt/downloads/
# Should be owned by PUID:PGID (1000:1000)
sudo chown -R 1000:1000 /mnt/downloads/
# Check from container
docker exec qbittorrent ls -la /downloads
docker exec qbittorrent touch /downloads/test.txt
# Should succeed
```
### Torrents Stuck at "Stalled"
```bash
# Possible causes:
# 1. No seeders
# 2. Port not forwarded
# 3. VPN blocking connections
# 4. Tracker down
# Check tracker status
# Right-click torrent → Edit trackers
# Should show "Working"
# Force reannounce
# Right-click torrent → Force reannounce
# Check connection
# Bottom right: Connection icon should be green
```
### Can't Login to Web UI
```bash
# Reset password
docker stop qbittorrent
# Edit config
nano /opt/stacks/media/qbittorrent/config/qBittorrent/qBittorrent.conf
# Find and change:
# WebUI\Password_PBKDF2="@ByteArray(...)"
# Delete the Password line, restart
docker start qbittorrent
# Check logs for new temporary password
docker logs qbittorrent | grep "password"
```
### High CPU Usage
```bash
# Check torrent count
# Too many active torrents
# Limit active torrents
# Tools → Options → BitTorrent
# Maximum active torrents: 5
# Maximum active downloads: 3
# Check hashing
# Large files hashing = high CPU (temporary)
# Limit download speed if needed
```
## Performance Optimization
### Optimal Settings
```
Connection:
- Global max connections: 500
- Per torrent: 100
- Upload slots global: 20
- Upload slots per torrent: 4
BitTorrent:
- DHT, PEX, LPD: Enabled
- Encryption: Require encryption
Speed:
- Set based on bandwidth
- Leave some headroom
```
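The same settings can be applied programmatically through the Web API (`POST /api/v2/app/setPreferences` with a `json=` form field). A sketch of the payload — the preference key names below follow qBittorrent's Web API and should be verified against your version:

```json
{
  "max_connec": 500,
  "max_connec_per_torrent": 100,
  "max_uploads": 20,
  "max_uploads_per_torrent": 4,
  "dht": true,
  "pex": true,
  "lsd": true,
  "encryption": 1
}
```

`"encryption": 1` corresponds to "Require encryption" in the UI.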
### Disk I/O
**Settings → Advanced → Disk cache:**
- Disk cache: `-1` (auto)
- Disk cache expiry: `60` seconds
**If using SSD:**
- Can increase cache
- Reduces write amplification
### Multiple VPN Locations
**For better speeds, try different locations:**
```yaml
gluetun:
environment:
- SERVER_COUNTRIES=Netherlands # Change to different country
```
Popular choices:
- Netherlands (good speeds)
- Switzerland (privacy)
- Romania (fast)
- Sweden (balanced)
## Security Best Practices
1. **Always Use VPN:** Never run qBittorrent without VPN
2. **Strong Password:** Change default admin password
3. **Encryption Required:** Tools → Options → BitTorrent → Require encryption
4. **IP Filtering:** Use blocklists
5. **Network Mode:** Always use `network_mode: service:gluetun`
6. **Port Security:** Don't expose ports unless necessary
7. **Regular Updates:** Keep qBittorrent and Gluetun updated
8. **Verify VPN:** Regularly check IP address
9. **Private Trackers:** Respect ratio requirements
10. **Legal Content:** Only download legal content
## Backup Strategy
**Critical Files:**
```bash
/opt/stacks/media/qbittorrent/config/qBittorrent/qBittorrent.conf # Main config
/opt/stacks/media/qbittorrent/config/qBittorrent/categories.json # Categories
/opt/stacks/media/qbittorrent/config/qBittorrent/rss/ # RSS config
```
**Backup Script:**
```bash
#!/bin/bash
DATE=$(date +%Y%m%d)
BACKUP_DIR=/opt/backups/qbittorrent
mkdir -p "$BACKUP_DIR"
# Stop qBittorrent
docker stop qbittorrent
# Backup config
tar -czf $BACKUP_DIR/qbittorrent-config-$DATE.tar.gz \
/opt/stacks/media/qbittorrent/config/
# Start qBittorrent
docker start qbittorrent
# Keep only the last 7 days of backups
find $BACKUP_DIR -name "qbittorrent-config-*.tar.gz" -mtime +7 -delete
```
## Integration with Other Services
### qBittorrent + Gluetun (VPN)
- All traffic through VPN
- Kill switch protection
- IP address masking
### qBittorrent + Sonarr/Radarr
- Automatic downloads
- Category management
- Import on completion
### qBittorrent + Plex/Jellyfin
- Sequential download for streaming
- Auto-scan on import
## Summary
qBittorrent is the essential download client offering:
- Free, open-source, no ads
- Web-based management
- VPN integration (Gluetun)
- Category management
- RSS auto-downloads
- Built-in search
- Privacy-focused
**Perfect for:**
- Torrent downloading
- *arr stack integration
- VPN-protected downloading
- Privacy-conscious users
- Replacing µTorrent/BitTorrent
**Key Points:**
- Always use with VPN (Gluetun)
- Change default password
- Enable encryption
- Use categories for *arr apps
- Monitor ratio on private trackers
- Port forwarding improves speeds
- Regular IP address verification
**Remember:**
- VPN required for safety
- network_mode: service:gluetun
- Categories for organization
- Encryption required
- Change default credentials
- Verify VPN connection
- Respect seeding requirements
- Legal content only
**Essential Media Stack:**
```
Prowlarr → Sonarr/Radarr → qBittorrent (via Gluetun) → Plex/Jellyfin
```
qBittorrent + Gluetun = Safe, fast, private downloading!

# Radarr - Movie Automation
## Table of Contents
- [Overview](#overview)
- [What is Radarr?](#what-is-radarr)
- [Why Use Radarr?](#why-use-radarr)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Initial Setup](#initial-setup)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Media Management & Automation
**Docker Image:** [linuxserver/radarr](https://hub.docker.com/r/linuxserver/radarr)
**Default Stack:** `media.yml`
**Web UI:** `https://radarr.${DOMAIN}` or `http://SERVER_IP:7878`
**Authentication:** Optional (configurable)
**Ports:** 7878
## What is Radarr?
Radarr is a movie collection manager and automation tool for Usenet and BitTorrent. It's essentially Sonarr's sibling, but for movies. Radarr monitors for new movies you want, automatically downloads them when available, and organizes your movie library beautifully.
### Key Features
- **Automatic Downloads:** Grab movies as they release
- **Quality Management:** Choose preferred qualities and upgrades
- **Calendar:** Track movie releases
- **Movie Management:** Organize and rename automatically
- **Failed Download Handling:** Retry with different releases
- **Notifications:** Discord, Telegram, Pushover, etc.
- **Custom Scripts:** Automate workflows
- **List Integration:** Import from IMDb, Trakt, TMDb lists
- **Multiple Versions:** Keep different qualities of same movie
- **Collections:** Organize movie series/franchises
## Why Use Radarr?
1. **Never Miss Releases:** Auto-download on availability
2. **Quality Upgrades:** Replace with better versions over time
3. **Organization:** Consistent naming and structure
4. **Time Saving:** No manual searching
5. **Library Management:** Track watched, wanted, available
6. **4K Management:** Separate 4K from HD
7. **Collection Support:** Marvel, Star Wars, etc.
8. **Discovery:** Find new movies via lists
9. **Integration:** Works seamlessly with Plex/Jellyfin
10. **Free & Open Source:** No cost, community-driven
## How It Works
```
New Movie Release
        ↓
Radarr Checks RSS Feeds (Prowlarr)
        ↓
Evaluates Releases (Quality, Size, Custom Formats)
        ↓
Sends Best Release to Downloader
(qBittorrent via Gluetun VPN)
        ↓
Monitors Download Progress
        ↓
Download Completes
        ↓
Radarr Imports File
        ↓
Renames & Moves to Library
(/mnt/media/movies/)
        ↓
Plex/Jellyfin Auto-Scans
        ↓
Movie Available for Watching
```
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/media/radarr/config/   # Radarr configuration
/mnt/downloads/                    # Download directory (from qBittorrent)
/mnt/media/movies/                 # Final movie library

Movie Library Structure:
/mnt/media/movies/
  The Matrix (1999)/
    The Matrix (1999) Bluray-1080p.mkv
  Inception (2010)/
    Inception (2010) Bluray-2160p.mkv
```
### Environment Variables
```bash
# User permissions (must match file ownership)
PUID=1000
PGID=1000
# Timezone
TZ=America/New_York
```
## Official Resources
- **Website:** https://radarr.video
- **Wiki:** https://wiki.servarr.com/radarr
- **GitHub:** https://github.com/Radarr/Radarr
- **Discord:** https://discord.gg/radarr
- **Reddit:** https://reddit.com/r/radarr
- **Docker Hub:** https://hub.docker.com/r/linuxserver/radarr
## Educational Resources
### Videos
- [Radarr Setup Guide (Techno Tim)](https://www.youtube.com/watch?v=5rtGBwBuzQE)
- [Complete *arr Stack Setup](https://www.youtube.com/results?search_query=radarr+sonarr+prowlarr+setup)
- [Radarr Quality Profiles](https://www.youtube.com/results?search_query=radarr+quality+profiles)
- [Radarr Custom Formats](https://www.youtube.com/results?search_query=radarr+custom+formats)
### Articles & Guides
- [TRaSH Guides (Essential!)](https://trash-guides.info/Radarr/)
- [Quality Settings Guide](https://trash-guides.info/Radarr/Radarr-Setup-Quality-Profiles/)
- [Custom Formats Guide](https://trash-guides.info/Radarr/radarr-setup-custom-formats/)
- [Naming Scheme](https://trash-guides.info/Radarr/Radarr-recommended-naming-scheme/)
- [Servarr Wiki](https://wiki.servarr.com/radarr)
### Concepts to Learn
- **Quality Profiles:** Define preferred qualities and upgrades
- **Custom Formats:** Advanced filtering (HDR, DV, Codecs, etc.)
- **Indexers:** Sources for releases (via Prowlarr)
- **Root Folders:** Where movies are stored
- **Minimum Availability:** When to search (Announced, In Cinemas, Released, Physical/Web)
- **Collections:** Movie franchises (MCU, DC, Star Wars)
- **Cutoff:** Quality to stop upgrading at
## Docker Configuration
### Complete Service Definition
```yaml
radarr:
image: linuxserver/radarr:latest
container_name: radarr
restart: unless-stopped
networks:
- traefik-network
ports:
- "7878:7878"
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
volumes:
- /opt/stacks/media/radarr/config:/config
- /mnt/media/movies:/movies
- /mnt/downloads:/downloads
labels:
- "traefik.enable=true"
- "traefik.http.routers.radarr.rule=Host(`radarr.${DOMAIN}`)"
- "traefik.http.routers.radarr.entrypoints=websecure"
- "traefik.http.routers.radarr.tls.certresolver=letsencrypt"
- "traefik.http.routers.radarr.middlewares=authelia@docker"
- "traefik.http.services.radarr.loadbalancer.server.port=7878"
```
### Key Volume Mapping
```yaml
volumes:
- /opt/stacks/media/radarr/config:/config # Radarr settings
- /mnt/media/movies:/movies # Movie library (final location)
- /mnt/downloads:/downloads # Download client folder
```
**Important:** Radarr needs access to both download and library locations for hardlinking (instant moves without copying).
## Initial Setup
### First Access
1. **Start Container:**
```bash
docker compose up -d radarr
```
2. **Access Web UI:**
- Local: `http://SERVER_IP:7878`
- Domain: `https://radarr.yourdomain.com`
3. **Initial Configuration:**
- Settings → Media Management
- Settings → Profiles
- Settings → Indexers (via Prowlarr)
- Settings → Download Clients
### Media Management Settings
**Settings → Media Management:**
1. **Rename Movies:** ✓ Enable
2. **Replace Illegal Characters:** ✓ Enable
3. **Standard Movie Format:**
```
{Movie Title} ({Release Year}) {Quality Full}
```
Example: `The Matrix (1999) Bluray-1080p.mkv`
4. **Movie Folder Format:**
```
{Movie Title} ({Release Year})
```
Example: `The Matrix (1999)`
5. **Root Folders:**
- Add: `/movies`
- This is where movies will be stored
**File Management:**
- ✓ Unmonitor Deleted Movies
- ✓ Download Propers and Repacks: Prefer and Upgrade
- ✓ Analyze video files
- ✓ Use Hardlinks instead of Copy: Important for space saving!
- Minimum Free Space: 100 MB
- ✓ Import Extra Files: Subtitles (srt, sub)
### Quality Profiles
**Settings → Profiles → Quality Profiles:**
**HD-1080p Profile (Recommended):**
1. Create new profile: "HD-1080p"
2. Upgrades Allowed: ✓
3. Upgrade Until: Bluray-1080p
4. Minimum Custom Format Score: 0
5. Upgrade Until Custom Format Score: 10000
6. Qualities (in order of preference):
- Bluray-1080p
- Remux-1080p (optional, large files)
- WEB-1080p
- HDTV-1080p
7. Save
**4K Profile (Optional):**
- Name: "Ultra HD"
- Upgrade Until: Bluray-2160p (Remux-2160p for best quality)
- Include HDR custom formats
**Follow TRaSH Guides** for optimal profiles:
https://trash-guides.info/Radarr/Radarr-Setup-Quality-Profiles/
### Download Client Setup
**Settings → Download Clients → Add → qBittorrent:**
1. **Name:** qBittorrent
2. **Enable:** ✓
3. **Host:** `gluetun` (if qBittorrent behind VPN)
4. **Port:** `8080`
5. **Username:** `admin`
6. **Password:** Your password
7. **Category:** `movies-radarr`
8. **Priority:** Normal
9. **Test → Save**
**Settings:**
- ✓ Remove Completed: remove torrents once downloading and seeding have finished
- ✓ Remove Failed: remove failed downloads from the client
### Indexer Setup (via Prowlarr)
Radarr should get indexers automatically from Prowlarr:
**Check Sync:**
1. Settings → Indexers
2. Should see synced indexers from Prowlarr
3. Test: Each should show "Successful"
**If Not Synced:**
- Prowlarr → Settings → Apps → Add Radarr
- Prowlarr Server: `http://prowlarr:9696`
- Radarr Server: `http://radarr:7878`
- API Key: From Radarr → Settings → General
### Adding Your First Movie
1. **Click "Add New"**
2. **Search:** Type movie name (e.g., "The Matrix")
3. **Select** correct movie from results
4. **Configure:**
- Root Folder: `/movies`
- Quality Profile: HD-1080p
- Minimum Availability: Released (or Physical/Web)
- ✓ Start search for movie
5. **Add Movie**
Radarr will:
- Add movie to database
- Search for best release
- Send to qBittorrent
- Monitor download
- Import when complete
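The same workflow is scriptable through Radarr's v3 API (`POST /api/v3/movie` with an `X-Api-Key` header). A sketch of a payload builder — the field names follow the v3 schema, but treat them as assumptions to check against your Radarr version:

```python
def build_add_movie_payload(tmdb_id: int, title: str,
                            quality_profile_id: int = 1,
                            root_folder: str = "/movies",
                            search_now: bool = True) -> dict:
    """Build the JSON body for adding a movie via POST /api/v3/movie."""
    return {
        "tmdbId": tmdb_id,
        "title": title,
        "qualityProfileId": quality_profile_id,
        "rootFolderPath": root_folder,
        "monitored": True,
        "minimumAvailability": "released",
        # Tells Radarr to search for the movie immediately after adding it
        "addOptions": {"searchForMovie": search_now},
    }
```

POST the returned dict as JSON to `http://radarr:7878/api/v3/movie` with your API key from Settings → General.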
## Advanced Topics
### Custom Formats
Custom Formats are the powerhouse of Radarr v4+, allowing precise control over releases.
**Settings → Custom Formats:**
**Common Custom Formats:**
1. **HDR Formats:**
- HDR
- HDR10+
- Dolby Vision
- HLG
2. **Audio Formats:**
- Dolby Atmos
- TrueHD
- DTS-X
- DTS-HD MA
3. **Release Groups:**
- Preferred groups (RARBG, FGT, etc.)
- Avoid groups (known bad quality)
4. **Resolution:**
- 4K DV HDR10
- 1080p
5. **Streaming Service:**
- Netflix
- Amazon
- Apple TV+
- Disney+
**Importing TRaSH Guides:**
https://trash-guides.info/Radarr/radarr-setup-custom-formats/
**Scoring Custom Formats:**
- Assign scores to formats
- Higher score = more preferred
- Negative scores = avoid
- Cutoff score = stop upgrading
**Example Scoring:**
- Dolby Vision: +100
- HDR10+: +75
- HDR: +50
- DTS-X: +30
- Dolby Atmos: +30
- Preferred Group: +10
- Bad Group: -1000
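The example scoring above reduces to a sum-and-compare. A sketch of just the custom-format part of the upgrade decision (Radarr's real engine also factors in quality and the profile cutoff):

```python
SCORES = {
    "Dolby Vision": 100, "HDR10+": 75, "HDR": 50,
    "DTS-X": 30, "Dolby Atmos": 30,
    "Preferred Group": 10, "Bad Group": -1000,
}

def release_score(formats: list) -> int:
    """Sum the scores of every custom format matched by a release."""
    return sum(SCORES.get(f, 0) for f in formats)

def is_upgrade(current: list, candidate: list, min_score: int = 0) -> bool:
    """A candidate upgrades the current file only if it clears the minimum
    score and strictly beats the current file's score."""
    score = release_score(candidate)
    return score >= min_score and score > release_score(current)
```

Note how a single large negative score (a "Bad Group") can veto a release even when positive formats are present.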
### Minimum Availability
**When should Radarr search for a movie?**
- **Announced:** As soon as announced (usually too early)
- **In Cinemas:** Theatrical release (usually only cams available)
- **Released:** Digital/Physical release announced
- **Physical/Web:** When officially released (best option)
**Recommendation:** Physical/Web for quality releases.
### Collections
**Movie Collections:**
- Automatically group franchises
- Marvel Cinematic Universe
- Star Wars
- James Bond
- etc.
**Settings → Metadata:**
- ✓ Enable Collections
**Managing Collections:**
- Movies → Collections tab
- View all movies in collection
- Add entire collection at once
**Auto-add Collection:**
- Edit movie → Collection → Monitor Collection
- Automatically adds all movies in franchise
### Multiple Radarr Instances
Run separate instances for different libraries:
**radarr-4k.yml:**
```yaml
radarr-4k:
image: linuxserver/radarr:latest
container_name: radarr-4k
ports:
- "7879:7878"
volumes:
- /opt/stacks/media/radarr-4k/config:/config
- /mnt/media/movies-4k:/movies
- /mnt/downloads:/downloads
environment:
- PUID=1000
- PGID=1000
```
**Use Cases:**
- Separate 4K library (different Plex library)
- Different quality standards
- Testing new settings
- Language-specific (anime, foreign films)
### Import Lists
**Automatically add movies from lists:**
**Settings → Import Lists:**
**Trakt Lists:**
1. Add → Trakt List
2. Authenticate with Trakt
3. List Type: Watchlist, Popular, Trending, etc.
4. Quality Profile: HD-1080p
5. Monitor: Yes
6. Search on Add: Yes
7. Save
**IMDb Lists:**
1. Add → IMDb Lists
2. List ID: From IMDb list URL
3. Configure quality and monitoring
**TMDb Lists:**
- Popular Movies
- Upcoming Movies
- Top Rated
**Custom Lists:**
- Personal lists from various sources
- Auto-sync periodically
### Notifications
**Settings → Connect → Add Notification:**
**Popular Notifications:**
- **Plex:** Update library on import
- **Jellyfin:** Scan library
- **Discord:** New movie alerts
- **Telegram:** Mobile notifications
- **Pushover:** Push notifications
- **Email:** SMTP alerts
- **Webhook:** Custom integrations
**Example: Plex**
1. Add → Plex Media Server
2. Host: `plex`
3. Port: `32400`
4. Auth Token: From Plex
5. Triggers: On Download, On Import, On Upgrade
6. Update Library: ✓
7. Test → Save
### Custom Scripts
**Settings → Connect → Custom Script:**
Run scripts on events:
- On Grab
- On Download
- On Upgrade
- On Rename
- On Delete
**Use Cases:**
- External notifications
- File processing
- Metadata updates
- Backup triggers
### Quality Definitions
**Settings → Quality:**
Adjust size limits for qualities:
**Default Limits (MB per minute):**
- Bluray-2160p: 350-400
- Bluray-1080p: 60-100
- Bluray-720p: 25-50
- WEB-1080p: 25-50
**Adjust Based on:**
- Storage capacity
- Preference for quality vs. size
- Bandwidth limitations
## Troubleshooting
### Radarr Can't Find Releases
```bash
# Check indexers
# Settings → Indexers → Test All
# Check Prowlarr
docker logs prowlarr | grep radarr
# Manual search movie
# Movies → Movie → Manual Search → View logs
# Common causes:
# - No indexers
# - Movie not released yet
# - Quality profile too restrictive
# - Custom format scoring too high
```
### Downloads Not Importing
```bash
# Check download client
# Settings → Download Clients → Test
# Check permissions
ls -la /mnt/downloads/
ls -la /mnt/media/movies/
# Verify Radarr access
docker exec radarr ls /downloads
docker exec radarr ls /movies
# Check logs
docker logs radarr | grep -i import
# Common issues:
# - Permission denied
# - Wrong category in qBittorrent
# - Still seeding
# - Hardlink failed (different filesystems)
```
### Wrong Movie Match
```bash
# Edit movie
# Movies → Movie → Edit
# Search for correct movie
# Remove and re-add if necessary
# Check TMDb ID
# Ensure correct movie selected
```
### Constant Upgrades
```bash
# Movie keeps downloading
# Check quality profile cutoff
# Settings → Profiles → Cutoff should be set
# Check custom format scoring
# May be scoring new releases higher
# Lock quality
# Edit movie → Set specific quality → Don't upgrade
```
### Hardlink Errors
```bash
# "Unable to hardlink" error
# Check filesystem
df -h /mnt/downloads
df -h /mnt/media/movies
# Must be same filesystem for hardlinks
# If different, Radarr copies instead (slow)
# Solution: Both on same disk/mount
```
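The filesystem check can also be done directly from Python: hardlinks only work when both paths share a device id.

```python
import os

def same_filesystem(path_a: str, path_b: str) -> bool:
    """Hardlinks only work within one filesystem: compare st_dev device ids."""
    return os.stat(path_a).st_dev == os.stat(path_b).st_dev
```

For example, `same_filesystem('/mnt/downloads', '/mnt/media/movies')` — if it returns False, Radarr falls back to copying instead of hardlinking.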
### Database Corruption
```bash
# Stop Radarr
docker stop radarr
# Backup database
cp /opt/stacks/media/radarr/config/radarr.db /opt/backups/
# Check integrity
sqlite3 /opt/stacks/media/radarr/config/radarr.db "PRAGMA integrity_check;"
# Restore from backup if corrupted
# rm /opt/stacks/media/radarr/config/radarr.db
# cp /opt/backups/radarr-DATE.db /opt/stacks/media/radarr/config/radarr.db
docker start radarr
```
## Performance Optimization
### RSS Sync Interval
**Settings → Indexers → Options:**
- RSS Sync Interval: 60 minutes (default)
- Movies release less frequently than TV
- Can increase interval to reduce load
### Database Optimization
```bash
# Stop Radarr
docker stop radarr
# Vacuum database
sqlite3 /opt/stacks/media/radarr/config/radarr.db "VACUUM;"
# Reduce history
# Settings → General → History Cleanup: 30 days
docker start radarr
```
### Optimize Scanning
**Settings → Media Management:**
- Analyze video files: No (if not needed)
- Rescan folder after refresh: Only if changed
### Limit Concurrent Downloads
**Settings → Download Clients:**
- Maximum Downloads: 5 (reasonable limit)
- Prevents overwhelming bandwidth
## Security Best Practices
1. **Enable Authentication:**
- Settings → General → Security
- Authentication: Required (Basic or Forms)
2. **API Key Security:**
- Keep API key secret
- Regenerate if compromised
3. **Reverse Proxy:**
- Use Traefik + Authelia
- Don't expose 7878 publicly
4. **Read-Only Media:**
- Consider read-only mount for movies
- Radarr needs write for imports
5. **Regular Backups:**
- Backup `/config` directory
- Includes database and settings
6. **Network Isolation:**
- Separate Docker network
- Only connect necessary services
7. **Keep Updated:**
- Regular updates for security patches
## Backup Strategy
**Critical Files:**
```bash
/opt/stacks/media/radarr/config/radarr.db # Database
/opt/stacks/media/radarr/config/config.xml # Settings
/opt/stacks/media/radarr/config/Backup/ # Auto backups
```
**Backup Script:**
```bash
#!/bin/bash
DATE=$(date +%Y%m%d)
BACKUP_DIR=/opt/backups/radarr
mkdir -p "$BACKUP_DIR"
# Manual backup trigger
docker exec radarr cp /config/radarr.db /config/backup-manual-$DATE.db
# Copy to backup location
cp /opt/stacks/media/radarr/config/radarr.db $BACKUP_DIR/radarr-$DATE.db
# Keep last 7 days
find $BACKUP_DIR -name "radarr-*.db" -mtime +7 -delete
```
**Restore:**
```bash
docker stop radarr
cp /opt/backups/radarr/radarr-20240101.db /opt/stacks/media/radarr/config/radarr.db
docker start radarr
```
## Integration with Other Services
### Radarr + Plex/Jellyfin
- Auto-update library on import
- Settings → Connect → Plex/Jellyfin
### Radarr + Prowlarr
- Centralized indexer management
- Auto-sync indexers
- Single source of truth
### Radarr + qBittorrent (via Gluetun)
- Download movies via VPN
- Automatic import after download
- Category-based organization
### Radarr + Jellyseerr
- User request interface
- Users request movies
- Radarr automatically downloads
### Radarr + Tautulli
- Track additions
- View statistics
- Popular movies
## Summary
Radarr is the essential movie automation tool offering:
- Automatic movie downloads
- Quality management and upgrades
- Organized movie library
- Custom format scoring
- Collection management
- Free and open-source
**Perfect for:**
- Movie collectors
- Automated library management
- Quality enthusiasts
- 4K collectors
- Collection completionists
**Key Points:**
- Follow TRaSH Guides for best setup
- Custom formats are powerful
- Use Physical/Web minimum availability
- Hardlinks save massive disk space
- Pair with Prowlarr for indexers
- Pair with qBittorrent + Gluetun
- Separate 4K instance recommended
**Remember:**
- Proper file permissions essential
- Same filesystem for hardlinks
- Quality profile cutoff important
- Custom format scoring controls upgrades
- Collections make franchises easy
- Regular backups crucial
- Test indexers periodically
Sonarr + Radarr + Prowlarr + qBittorrent = Complete media automation!

# Readarr - Book & Audiobook Automation
## Table of Contents
- [Overview](#overview)
- [What is Readarr?](#what-is-readarr)
- [Why Use Readarr?](#why-use-readarr)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Initial Setup](#initial-setup)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Media Management & Automation
**Docker Image:** [linuxserver/readarr](https://hub.docker.com/r/linuxserver/readarr)
**Default Stack:** `media-management.yml`
**Web UI:** `https://readarr.${DOMAIN}` or `http://SERVER_IP:8787`
**Authentication:** Optional (configurable)
**Ports:** 8787
## What is Readarr?
Readarr is an ebook and audiobook collection manager for Usenet and BitTorrent users. It's part of the *arr family (like Sonarr for TV and Radarr for movies), but specifically designed for books. Readarr monitors for new book releases, automatically downloads them, and organizes your library with proper metadata.
### Key Features
- **Automatic Downloads:** New releases from monitored authors
- **Quality Management:** Ebook vs audiobook, formats
- **Author Management:** Track favorite authors
- **Series Tracking:** Monitor book series
- **Calendar:** Upcoming releases
- **Metadata Enrichment:** Book covers, descriptions, ISBNs
- **Format Support:** EPUB, MOBI, AZW3, PDF, MP3, M4B
- **GoodReads Integration:** Import reading lists
- **Multiple Libraries:** Fiction, non-fiction, audiobooks
- **Calibre Integration:** Works with Calibre libraries
## Why Use Readarr?
1. **Never Miss Releases:** Auto-download new books from favorite authors
2. **Library Organization:** Consistent structure and naming
3. **Format Flexibility:** Multiple ebook formats
4. **Series Management:** Track reading order
5. **Metadata Automation:** Covers, descriptions, authors
6. **GoodReads Integration:** Import want-to-read lists
7. **Audiobook Support:** Unified management
8. **Time Saving:** No manual searching
9. **Free & Open Source:** No cost
10. **Calibre Compatible:** Works with existing libraries
## How It Works
```
New Book Release (Author Monitored)
        ↓
Readarr Checks RSS Feeds (Prowlarr)
        ↓
Evaluates Releases (Format, Quality)
        ↓
Sends to qBittorrent (via Gluetun VPN)
        ↓
Download Completes
        ↓
Readarr Imports & Organizes
        ↓
Library Updated
(/mnt/media/books/)
        ↓
Calibre-Web / Calibre Access
```
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/media-management/readarr/config/   # Readarr configuration
/mnt/downloads/complete/books-readarr/         # Downloaded books
/mnt/media/books/                              # Final book library

Library Structure:
/mnt/media/books/
  Author Name/
    Series Name/
      Book 01 - Title (Year).epub
      Book 02 - Title (Year).epub
```
### Environment Variables
```bash
# User permissions
PUID=1000
PGID=1000
# Timezone
TZ=America/New_York
```
## Official Resources
- **Website:** https://readarr.com
- **Wiki:** https://wiki.servarr.com/readarr
- **GitHub:** https://github.com/Readarr/Readarr
- **Discord:** https://discord.gg/readarr
- **Reddit:** https://reddit.com/r/readarr
- **Docker Hub:** https://hub.docker.com/r/linuxserver/readarr
## Educational Resources
### Videos
- [Readarr Setup Guide](https://www.youtube.com/results?search_query=readarr+setup)
- [*arr Stack Complete Guide](https://www.youtube.com/results?search_query=readarr+sonarr+radarr)
- [Readarr + Calibre](https://www.youtube.com/results?search_query=readarr+calibre)
### Articles & Guides
- [Official Documentation](https://wiki.servarr.com/readarr)
- [Servarr Wiki](https://wiki.servarr.com/)
- [Calibre Integration](https://wiki.servarr.com/readarr/settings#calibre)
### Concepts to Learn
- **Metadata Profiles:** Preferred ebook formats
- **Quality Profiles:** Quality and format preferences
- **Release Profiles:** Preferred/avoided release groups
- **Root Folders:** Library locations
- **Author vs Book:** Monitoring modes
- **GoodReads Lists:** Import reading lists
- **Calibre Content Server:** Integration
## Docker Configuration
### Complete Service Definition
```yaml
readarr:
image: linuxserver/readarr:develop # Use develop tag
container_name: readarr
restart: unless-stopped
networks:
- traefik-network
ports:
- "8787:8787"
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
volumes:
- /opt/stacks/media-management/readarr/config:/config
- /mnt/media/books:/books
- /mnt/downloads:/downloads
labels:
- "traefik.enable=true"
- "traefik.http.routers.readarr.rule=Host(`readarr.${DOMAIN}`)"
- "traefik.http.routers.readarr.entrypoints=websecure"
- "traefik.http.routers.readarr.tls.certresolver=letsencrypt"
- "traefik.http.routers.readarr.middlewares=authelia@docker"
- "traefik.http.services.readarr.loadbalancer.server.port=8787"
```
**Note:** Use `develop` tag for latest features. Readarr is still in active development.
## Initial Setup
### First Access
1. **Start Container:**
```bash
docker compose up -d readarr
```
2. **Access Web UI:**
- Local: `http://SERVER_IP:8787`
- Domain: `https://readarr.yourdomain.com`
3. **Initial Configuration:**
- Settings → Media Management
- Settings → Profiles
- Settings → Indexers (via Prowlarr)
- Settings → Download Clients
### Media Management Settings
**Settings → Media Management:**
1. **Rename Books:** ✓ Enable
2. **Replace Illegal Characters:** ✓ Enable
3. **Standard Book Format:**
```
{Author Name}/{Series Title}/{Series Title} - {Book #} - {Book Title} ({Release Year})
```
Example: `Brandon Sanderson/Mistborn/Mistborn - 01 - The Final Empire (2006).epub`
4. **Author Folder Format:**
```
{Author Name}
```
5. **Root Folders:**
- Add: `/books`
**File Management:**
- ✓ Unmonitor Deleted Books
- ✓ Use Hardlinks instead of Copy
- Minimum Free Space: 100 MB
- ✓ Import Extra Files
### Metadata Profiles
**Settings → Profiles → Metadata Profiles:**
Default profile includes:
- Ebook formats (EPUB, MOBI, AZW3, PDF)
- Audiobook formats (MP3, M4B, M4A)
**Custom Profile Example:**
- Name: "Ebooks Only"
- Include: EPUB, MOBI, AZW3, PDF
- Exclude: Audio formats
### Quality Profiles
**Settings → Profiles → Quality Profiles:**
**Ebook Profile:**
1. Name: "Ebook - High Quality"
2. Upgrades Allowed: ✓
3. Upgrade Until: EPUB
4. Qualities (in order):
- EPUB
- AZW3
- MOBI
- PDF (lowest priority)
**Audiobook Profile:**
1. Name: "Audiobook"
2. Upgrade Until: M4B (best for chapters)
3. Qualities:
- M4B
- MP3
### Download Client Setup
**Settings → Download Clients → Add → qBittorrent:**
1. **Name:** qBittorrent
2. **Host:** `gluetun`
3. **Port:** `8080`
4. **Username:** `admin`
5. **Password:** Your password
6. **Category:** `books-readarr`
7. **Test → Save**
### Indexer Setup (via Prowlarr)
**Prowlarr Integration:**
- Prowlarr → Settings → Apps → Add Readarr
- Sync Categories: Books/Ebook, Books/Audiobook
- Auto-syncs indexers to Readarr
**Verify:**
- Settings → Indexers
- Should see synced indexers from Prowlarr
### Adding Your First Author
1. **Click "Add New"**
2. **Search:** Author name (e.g., "Brandon Sanderson")
3. **Select** correct author
4. **Configure:**
- Root Folder: `/books`
- Monitor: All Books (or Future Books)
- Metadata Profile: Standard
- Quality Profile: Ebook - High Quality
- ✓ Search for missing books
5. **Add Author**
### Adding Individual Books
1. **Library → Add New → Search for a book**
2. **Search:** Book title or ISBN
3. **Select** correct book
4. **Configure:**
- Root Folder: `/books`
- Monitor: Yes
- Quality Profile: Ebook - High Quality
- ✓ Start search for book
5. **Add Book**
## Advanced Topics
### GoodReads Integration
**Import Reading Lists:**
**Settings → Import Lists → Add → GoodReads Lists:**
1. **Access Token:** Get from GoodReads
2. **User ID:** Your GoodReads ID
3. **List Name:** "to-read" or custom list
4. **Root Folder:** `/books`
5. **Quality Profile:** Ebook - High Quality
6. **Monitor:** Yes
7. **Search on Add:** Yes
8. **Save**
**Auto-sync periodically** to import new books from your GoodReads lists.
### Calibre Integration
**If you use Calibre:**
**Settings → Calibre:**
1. **Host:** IP of Calibre server
2. **Port:** `8080` (default)
3. **Username/Password:** If Calibre requires auth
4. **Library:** Name of Calibre library
5. **Save**
**Options:**
- Use Calibre for metadata
- Export to Calibre on import
- Sync with Calibre library
### Multiple Libraries
**Separate ebook and audiobook libraries:**
```yaml
readarr-audio:
  image: linuxserver/readarr:develop
  container_name: readarr-audio
  restart: unless-stopped
  networks:
    - traefik-network
  ports:
    - "8788:8787"
  volumes:
    - /opt/stacks/media-management/readarr-audio/config:/config
    - /mnt/media/audiobooks:/books
    - /mnt/downloads:/downloads
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=America/New_York
```
Separate instance for audiobooks with different quality profile.
### Metadata Management
**Edit Metadata:**
- Select book → Edit → Metadata
- Change title, author, description
- Upload custom cover
- Lock fields to prevent overwriting
**Refresh Metadata:**
- Right-click book → Refresh & Scan
- Re-fetches from online sources
### Series Management
**Monitor Entire Series:**
1. Add author
2. View author page
3. Series tab → Select series
4. Monitor entire series
5. Readarr tracks all books in series
**Reading Order:**
- Readarr shows series order
- Books numbered sequentially
- Missing books highlighted
### Custom Scripts
**Settings → Connect → Custom Script:**
Run scripts on events:
- On Download
- On Import
- On Upgrade
- On Rename
- On Delete
**Use Cases:**
- Convert formats (MOBI → EPUB)
- Sync to e-reader
- Backup to cloud
- Update external database
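As a sketch, a minimal custom script can branch on the event type. The `readarr_eventtype` and `readarr_addedbookpaths` variable names follow the usual *arr convention but should be treated as assumptions — verify them against the Servarr wiki for your version:

```shell
#!/bin/sh
# Hypothetical Readarr custom script (variable names assumed from *arr conventions)
EVENT="${readarr_eventtype:-Test}"    # event that triggered the script
BOOK="${readarr_addedbookpaths:-}"    # path(s) of the imported file(s)

case "$EVENT" in
  Test)
    # Readarr fires a Test event when you save the Connect entry
    echo "Connection test OK"
    ;;
  Download)
    echo "Imported: $BOOK"
    # Example action: convert to EPUB with Calibre (requires ebook-convert on the host)
    # ebook-convert "$BOOK" "${BOOK%.*}.epub"
    ;;
esac
```

Make the script executable (`chmod +x`) and mount it into the container so the path you enter in the Connect settings resolves inside `/config`.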
### Notifications
**Settings → Connect:**
Popular notifications:
- **Calibre:** Update library on import
- **Discord:** New book alerts
- **Telegram:** Mobile notifications
- **Email:** SMTP alerts
- **Custom Webhook:** External integrations
## Troubleshooting
### Readarr Not Finding Books
```bash
# Check indexers
# Settings → Indexers → Test All
# Check Prowlarr sync
docker logs prowlarr | grep readarr
# Manual search
# Book → Manual Search → View logs
# Common issues:
# - No indexers with book categories
# - Book not released yet
# - Quality profile too restrictive
# - Wrong book metadata (try ISBN search)
```
### Downloads Not Importing
```bash
# Check permissions
ls -la /mnt/downloads/complete/books-readarr/
ls -la /mnt/media/books/
# Fix ownership
sudo chown -R 1000:1000 /mnt/media/books/
# Verify Readarr access
docker exec readarr ls /downloads
docker exec readarr ls /books
# Check logs
docker logs readarr | grep -i import
# Common issues:
# - Permission denied
# - Wrong category in qBittorrent
# - File format not in metadata profile
# - Hardlink failed (different filesystems)
```
### Wrong Author/Book Match
```bash
# Search by ISBN for accurate match
# Add New → Search: ISBN-13 number
# Edit book/author
# Library → Select → Edit
# Search for correct match
# Check metadata sources
# Settings → Metadata → Verify sources enabled
```
### Database Corruption
```bash
# Stop Readarr
docker stop readarr
# Backup database
cp /opt/stacks/media-management/readarr/config/readarr.db /opt/backups/
# Check integrity
sqlite3 /opt/stacks/media-management/readarr/config/readarr.db "PRAGMA integrity_check;"
# Restore from backup if corrupted
docker start readarr
```
## Performance Optimization
### RSS Sync Interval
**Settings → Indexers → Options:**
- RSS Sync Interval: 60 minutes
- Books release less frequently than TV/movies
### Database Optimization
```bash
# Stop Readarr
docker stop readarr
# Vacuum database
sqlite3 /opt/stacks/media-management/readarr/config/readarr.db "VACUUM;"
# Clear old history
# Settings → General → History Cleanup: 30 days
docker start readarr
```
## Security Best Practices
1. **Enable Authentication:**
- Settings → General → Security
- Authentication: Required
2. **API Key Security:**
- Keep API key secure
- Regenerate if compromised
3. **Reverse Proxy:**
- Use Traefik + Authelia
- Don't expose port 8787 publicly
4. **Regular Backups:**
- Backup `/config` directory
- Includes database and settings
## Backup Strategy
**Critical Files:**
```bash
/opt/stacks/media-management/readarr/config/readarr.db # Database
/opt/stacks/media-management/readarr/config/config.xml # Settings
/opt/stacks/media-management/readarr/config/Backup/ # Auto backups
```
**Backup Script:**
```bash
#!/bin/bash
# Nightly Readarr backup (schedule via cron, e.g. 0 3 * * *)
DATE=$(date +%Y%m%d)
BACKUP_DIR=/opt/backups/readarr
mkdir -p "$BACKUP_DIR"
# Copying a live SQLite database can catch it mid-write; prefer Readarr's own
# Backup/ folder, or stop the container first, for a guaranteed-consistent copy
cp /opt/stacks/media-management/readarr/config/readarr.db "$BACKUP_DIR/readarr-$DATE.db"
# Keep only the last 7 days of backups
find "$BACKUP_DIR" -name "readarr-*.db" -mtime +7 -delete
```
## Integration with Other Services
### Readarr + Prowlarr
- Centralized indexer management
- Auto-sync book indexers
### Readarr + qBittorrent (via Gluetun)
- Download books via VPN
- Category-based organization
### Readarr + Calibre-Web
- Web interface for reading
- Library management
- Format conversion
### Readarr + Calibre
- Professional ebook management
- Format conversion
- Metadata editing
- E-reader sync
## Summary
Readarr is the ebook/audiobook automation tool offering:
- Automatic book downloads
- Author and series tracking
- Format management (EPUB, MOBI, MP3, M4B)
- GoodReads integration
- Calibre compatibility
- Free and open-source
**Perfect for:**
- Avid readers
- Audiobook enthusiasts
- Series completionists
- GoodReads users
- Calibre users
- Book collectors
**Key Points:**
- Use develop tag for latest features
- Monitor favorite authors
- GoodReads list integration
- Multiple format support
- Calibre compatibility
- Series tracking
- ISBN search for accuracy
**Remember:**
- Still in active development
- Use ISBN for accurate matching
- Separate ebook/audiobook profiles
- Integrate with Calibre-Web for reading
- Monitor series, not just books
- Regular backups recommended
Readarr completes the *arr stack for comprehensive media automation!

# Redis - In-Memory Cache
## Table of Contents
- [Overview](#overview)
- [What is Redis?](#what-is-redis)
- [Why Use Redis?](#why-use-redis)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Redis Instances](#redis-instances)
## Overview
**Category:** In-Memory Data Store
**Docker Image:** [redis](https://hub.docker.com/_/redis)
**Default Stack:** `development.yml` and others
**Ports:** 6379 (internal)
## What is Redis?
Redis (Remote Dictionary Server) is an in-memory data structure store used as a cache, message broker, and database. It's incredibly fast because data is stored in RAM, making it perfect for caching, sessions, queues, and real-time applications.
### Key Features
- **In-Memory:** Microsecond latency
- **Data Structures:** Strings, lists, sets, hashes, etc.
- **Persistence:** Optional disk writes
- **Pub/Sub:** Message broker
- **Atomic Operations:** Thread-safe operations
- **Replication:** Master-slave
- **Lua Scripting:** Server-side scripts
- **Free & Open Source:** BSD license
## Why Use Redis?
1. **Speed:** Extremely fast (in RAM)
2. **Versatile:** Cache, queue, pub/sub, database
3. **Simple:** Easy to use
4. **Reliable:** Battle-tested
5. **Rich Data Types:** More than key-value
6. **Atomic:** Safe concurrent access
7. **Popular:** Wide adoption
## Configuration in AI-Homelab
AI-Homelab uses Redis for caching and session storage in multiple applications.
## Official Resources
- **Website:** https://redis.io
- **Documentation:** https://redis.io/docs
- **Commands:** https://redis.io/commands
## Redis Instances
### Authentik Redis (authentik-redis)
```yaml
authentik-redis:
image: redis:alpine
container_name: authentik-redis
restart: unless-stopped
networks:
- traefik-network
command: --save 60 1 --loglevel warning
volumes:
- /opt/stacks/infrastructure/authentik-redis/data:/data
```
**Purpose:** Authentik SSO caching and sessions
**Location:** `/opt/stacks/infrastructure/authentik-redis/data`
### GitLab Redis (gitlab-redis)
```yaml
gitlab-redis:
image: redis:alpine
container_name: gitlab-redis
restart: unless-stopped
networks:
- traefik-network
command: --save 60 1 --loglevel warning
volumes:
- /opt/stacks/development/gitlab-redis/data:/data
```
**Purpose:** GitLab caching and background jobs
**Location:** `/opt/stacks/development/gitlab-redis/data`
### Development Redis (redis)
```yaml
redis:
image: redis:alpine
container_name: redis
restart: unless-stopped
networks:
- traefik-network
ports:
- "6379:6379"
command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD}
volumes:
- /opt/stacks/development/redis/data:/data
```
**Purpose:** General development caching
**Location:** `/opt/stacks/development/redis/data`
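The compose snippets above omit healthchecks, but one can be added so dependent services wait until Redis is ready — `redis-cli ping` returns `PONG` once the server is up. A sketch for the password-protected development instance (Compose substitutes `${REDIS_PASSWORD}` from `.env`):

```yaml
redis:
  # ...existing configuration...
  healthcheck:
    test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
    interval: 10s
    timeout: 5s
    retries: 5
```

Services that need Redis can then declare `depends_on: redis: condition: service_healthy`.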
## Management
### Access Redis CLI
```bash
# Connect to Redis
docker exec -it redis redis-cli
# With password
docker exec -it redis redis-cli -a your_password
# Or authenticate after connecting
AUTH your_password
```
### Common Commands
```redis
# Set key
SET mykey "Hello"
# Get key
GET mykey
# Set with expiration (seconds)
SETEX mykey 60 "expires in 60 seconds"
# List all keys (O(N), blocks the server on large datasets — prefer SCAN in production)
KEYS *
# Delete key
DEL mykey
# Check if key exists
EXISTS mykey
# Get key type
TYPE mykey
# Flush all data (careful!)
FLUSHALL
# Get info
INFO
```
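Pub/sub, listed under key features, can be tried directly from two `redis-cli` sessions (the channel name is illustrative):

```redis
# Session 1: subscribe to a channel and wait
SUBSCRIBE alerts
# Session 2: publish a message — session 1 receives it immediately
PUBLISH alerts "backup finished"
```

Messages are fire-and-forget: subscribers only see messages published while they are connected.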
### Monitor Activity
```bash
# Monitor commands in real-time
docker exec -it redis redis-cli MONITOR
# Get statistics
docker exec -it redis redis-cli INFO stats
```
### Backup Redis
```bash
# Redis automatically saves to dump.rdb
# Just backup the data volume
tar -czf redis-backup.tar.gz /opt/stacks/development/redis/data/
# Force save now
docker exec redis redis-cli SAVE
```
## Summary
Redis provides in-memory storage offering:
- Ultra-fast caching
- Session storage
- Message queuing (pub/sub)
- Real-time operations
- Rich data structures
- Persistence options
- Atomic operations
- Free and open-source
**Perfect for:**
- Application caching
- Session storage
- Real-time analytics
- Message queues
- Leaderboards
- Rate limiting
- Pub/sub messaging
**Key Points:**
- In-memory (very fast)
- Data persists to disk optionally
- Multiple data structures
- Simple key-value interface
- Use password protection
- Low resource usage
- Alpine image is tiny
**Remember:**
- Set password (--requirepass)
- Data stored in RAM
- Configure persistence
- Monitor memory usage
- Backup dump.rdb file
- Use for temporary data
- Not a full database replacement
Redis accelerates your applications with caching!

# Sablier - Lazy Loading Service
## Table of Contents
- [Overview](#overview)
- [What is Sablier?](#what-is-sablier)
- [Why Use Sablier?](#why-use-sablier)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Using Sablier](#using-sablier)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Core Infrastructure
**Docker Image:** [sablierapp/sablier](https://hub.docker.com/r/sablierapp/sablier)
**Default Stack:** `core.yml`
**Web UI:** No web UI (API only)
**Authentication:** None required
**Purpose:** On-demand container startup and resource management
## What is Sablier?
Sablier is a lightweight service that enables lazy loading for Docker containers. It automatically starts containers when they're accessed through Traefik and stops them after a period of inactivity, helping to conserve system resources and reduce power consumption.
### Key Features
- **On-Demand Startup:** Containers start automatically when accessed
- **Automatic Shutdown:** Containers stop after configurable inactivity periods
- **Traefik Integration:** Works seamlessly with Traefik reverse proxy
- **Resource Conservation:** Reduces memory and CPU usage for unused services
- **Group Management:** Related services can be managed as groups
- **Health Checks:** Waits for services to be ready before forwarding traffic
- **Minimal Overhead:** Lightweight with low resource requirements
## Why Use Sablier?
1. **Resource Efficiency:** Save memory and CPU by only running services when needed
2. **Power Savings:** Reduce power consumption on always-on systems
3. **Faster Boot:** Services start quickly when accessed vs. waiting for full system startup
4. **Scalability:** Handle more services than would fit in memory simultaneously
5. **Cost Effective:** Lower resource requirements mean smaller/fewer servers needed
6. **Environmental:** Reduce energy consumption and carbon footprint
## How It Works
```
User Request → Traefik → Sablier Check → Container Start → Health Check → Forward Traffic
                                                                ↓ (inactivity)
                                                  Container Stop (after timeout)
```
When a request comes in for a service with Sablier enabled:
1. **Route Detection:** Sablier monitors Traefik routes for configured services
2. **Container Check:** Verifies if the target container is running
3. **Startup Process:** If not running, starts the container via Docker API
4. **Health Verification:** Waits for the service to report healthy
5. **Traffic Forwarding:** Routes traffic to the now-running service
6. **Timeout Monitoring:** Tracks inactivity and stops containers after timeout
## Configuration in AI-Homelab
Sablier is deployed as part of the core infrastructure stack and requires no additional configuration for basic operation. It automatically discovers services with the appropriate labels.
### Service Integration
Add these labels to any service that should use lazy loading:
```yaml
services:
myservice:
# ... other configuration ...
labels:
- "sablier.enable=true"
- "sablier.group=core-myservice" # Optional: group related services
- "traefik.enable=true"
- "traefik.http.routers.myservice.rule=Host(`myservice.${DOMAIN}`)"
# ... other Traefik labels ...
```
### Advanced Configuration
For services requiring custom timeouts or group management:
```yaml
labels:
- "sablier.enable=true"
- "sablier.group=media-services" # Group name for related services
- "sablier.timeout=300" # 5 minutes inactivity timeout (default: 300)
- "sablier.theme=dark" # Optional: theme for Sablier UI (if used)
```
## Official Resources
- **GitHub Repository:** https://github.com/sablierapp/sablier
- **Docker Hub:** https://hub.docker.com/r/sablierapp/sablier
- **Documentation:** https://sablierapp.github.io/sablier/
## Educational Resources
- **Traefik Integration:** https://doc.traefik.io/traefik/middlewares/http/forwardauth/
- **Docker Lazy Loading:** Search for "docker lazy loading" or "container on-demand"
- **Resource Management:** Linux container resource management best practices
## Docker Configuration
### Environment Variables
| Variable | Description | Default | Required |
|----------|-------------|---------|----------|
| `SABLIER_PROVIDER` | Container runtime provider | `docker` | Yes |
| `SABLIER_DOCKER_API_VERSION` | Docker API version | `1.53` | No |
| `SABLIER_DOCKER_NETWORK` | Docker network for containers | `traefik-network` | Yes |
| `SABLIER_LOG_LEVEL` | Logging level (debug, info, warn, error) | `debug` | No |
| `DOCKER_HOST` | Docker socket endpoint | `tcp://docker-proxy:2375` | Yes |
### Ports
- **10000** - Sablier API endpoint (internal use only)
### Volumes
None required - Sablier communicates with Docker via API
### Networks
- **traefik-network** - Required for communication with Traefik
- **homelab-network** - Required for Docker API access
## Using Sablier
### Basic Usage
1. **Enable on Service:** Add `sablier.enable=true` label to any service
2. **Access Service:** Navigate to the service URL in your browser
3. **Automatic Startup:** Sablier detects the request and starts the container
4. **Wait for Ready:** Service starts and health checks pass
5. **Use Service:** Container is now running and accessible
6. **Automatic Shutdown:** Container stops after 5 minutes of inactivity
### Monitoring Lazy Loading
Check which services are managed by Sablier:
```bash
# View all containers with Sablier labels
docker ps --filter "label=sablier.enable=true" --format "table {{.Names}}\t{{.Status}}"
# Check Sablier logs
docker logs sablier
# View Traefik routes that trigger lazy loading
docker logs traefik | grep sablier
```
### Service Groups
Group related services that should start/stop together:
```yaml
# Database and web app in same group
services:
myapp:
labels:
- "sablier.enable=true"
- "sablier.group=myapp-stack"
myapp-db:
labels:
- "sablier.enable=true"
- "sablier.group=myapp-stack"
```
### Custom Timeouts
Set different inactivity timeouts per service:
```yaml
labels:
- "sablier.enable=true"
- "sablier.timeout=600" # 10 minutes
```
## Advanced Topics
### Performance Considerations
- **Startup Time:** Services take longer to respond on first access
- **Resource Spikes:** Multiple services starting simultaneously can cause load
- **Health Checks:** Ensure services have proper health checks for reliable startup
### Troubleshooting Startup Issues
- **Container Won't Start:** Check Docker logs for the failing container
- **Health Check Fails:** Verify service health endpoints are working
- **Network Issues:** Ensure containers are on the correct Docker network
### Integration with Monitoring
Sablier works with existing monitoring:
- **Prometheus:** Can monitor Sablier API metrics
- **Grafana:** Visualize container start/stop events
- **Dozzle:** View logs from lazy-loaded containers
## Troubleshooting
### Service Won't Start Automatically
**Symptoms:** Accessing service URL shows connection error
**Solutions:**
```bash
# Check if Sablier is running
docker ps | grep sablier
# Verify service has correct labels
docker inspect container-name | grep sablier
# Check Sablier logs
docker logs sablier
# Test manual container start
docker start container-name
```
### Containers Not Stopping
**Symptoms:** Containers remain running after inactivity timeout
**Solutions:**
```bash
# Check timeout configuration
docker inspect container-name | grep sablier.timeout
# Verify Sablier has access to Docker API
docker exec sablier curl -f http://docker-proxy:2375/_ping
# Check for active connections
netstat -tlnp | grep :port
```
### Traefik Routing Issues
**Symptoms:** Service accessible but Sablier not triggering
**Solutions:**
```bash
# Verify Traefik labels
docker inspect container-name | grep traefik
# Check Traefik configuration
docker logs traefik | grep "Creating router"
# Test direct access (bypass Sablier)
curl http://container-name:port/health
```
### Common Issues
**Issue:** Services start but are not accessible
**Fix:** Ensure services are on the `traefik-network`
**Issue:** Sablier can't connect to Docker API
**Fix:** Verify `DOCKER_HOST` environment variable and network connectivity
**Issue:** Containers start but health checks fail
**Fix:** Add proper health checks to service configurations
**Issue:** High resource usage during startup
**Fix:** Stagger service startups or increase system resources
### Performance Tuning
- **Increase Timeouts:** For services that need longer inactivity periods
- **Group Services:** Related services can share startup/shutdown cycles
- **Monitor Resources:** Use Glances or Prometheus to track resource usage
- **Optimize Health Checks:** Ensure health checks are fast and reliable
### Getting Help
- **GitHub Issues:** https://github.com/sablierapp/sablier/issues
- **Community:** Check Traefik and Docker forums for lazy loading discussions
- **Logs:** Enable debug logging with `SABLIER_LOG_LEVEL=debug`
### Traefik Middleware
Configure Sablier middleware in Traefik dynamic configuration:
```yaml
http:
middlewares:
sablier-service:
plugin:
sablier:
sablierUrl: http://sablier-service:10000
group: core-service-name
sessionDuration: 2m # How long to keep service running after access
ignoreUserAgent: curl # Don't start service for curl requests
dynamic:
displayName: Service Name
theme: ghost
show-details-by-default: true
```
## Examples
### Basic Service with Lazy Loading
```yaml
services:
my-service:
image: my-service:latest
container_name: my-service
restart: "no" # Important: Must be "no" for Sablier to control start/stop
networks:
- traefik-network
labels:
- "traefik.enable=true"
- "traefik.http.routers.my-service.rule=Host(`my-service.${DOMAIN}`)"
- "traefik.http.routers.my-service.entrypoints=websecure"
- "traefik.http.routers.my-service.tls.certresolver=letsencrypt"
- "traefik.http.routers.my-service.middlewares=authelia@docker"
- "traefik.http.services.my-service.loadbalancer.server.port=8080"
# Sablier lazy loading
- "sablier.enable=true"
- "sablier.group=core-my-service"
- "sablier.start-on-demand=true"
```
### Remote Service Proxy
For services on remote servers, configure Traefik routes with Sablier middleware:
```yaml
# In /opt/stacks/core/traefik/dynamic/remote-services.yml
http:
routers:
remote-service:
rule: "Host(`remote-service.${DOMAIN}`)"
entryPoints:
- websecure
service: remote-service
tls:
certResolver: letsencrypt
middlewares:
- sablier-remote-service@file
services:
remote-service:
loadBalancer:
servers:
- url: "http://remote-server-ip:port"
middlewares:
sablier-remote-service:
plugin:
sablier:
sablierUrl: http://sablier-service:10000
group: remote-server-group
sessionDuration: 5m
displayName: Remote Service
```
## Troubleshooting
### Service Won't Start
**Check Sablier logs:**
```bash
cd /opt/stacks/core
docker compose logs sablier-service
```
**Verify container permissions:**
```bash
# Check if Sablier can access Docker API
docker exec sablier-service curl -f http://localhost:10000/health
```
### Services Not Starting on Demand
**Check Traefik middleware configuration:**
```bash
# Verify middleware is loaded
docker logs traefik | grep sablier
```
**Check service labels:**
```bash
# Verify Sablier labels are present
docker inspect service-name | grep sablier
```
### Services Stop Too Quickly
**Increase session duration:**
```yaml
middlewares:
sablier-service:
plugin:
sablier:
sessionDuration: 10m # Increase from default
```
### Performance Issues
**Check resource usage:**
```bash
docker stats sablier-service
```
**Monitor Docker API calls:**
```bash
docker logs sablier-service | grep "API call"
```
## Best Practices
### Resource Management
- Use lazy loading for services that aren't accessed frequently
- Set appropriate session durations based on usage patterns
- Monitor resource usage to ensure adequate system capacity
### Configuration
- **Always set `restart: "no"`** for Sablier-managed services to allow full lifecycle control
- Group related services together for coordinated startup
- Use descriptive display names for the loading page
- Configure appropriate timeouts for your use case
### Security
- Sablier runs with Docker API access - ensure proper network isolation
- Use Docker socket proxy for additional security
- Monitor Sablier logs for unauthorized access attempts
## Integration with Other Services
### Homepage Dashboard
Add Sablier status to Homepage:
```yaml
# In homepage config
- Core Infrastructure:
- Sablier:
icon: docker.png
href: http://sablier-service:10000
description: Lazy loading service
widget:
type: iframe
url: http://sablier-service:10000
```
### Monitoring
Monitor Sablier with Prometheus metrics (if available) or basic health checks:
```yaml
# Health check
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:10000/health"]
interval: 30s
timeout: 10s
retries: 3
```
## Advanced Configuration
### Custom Themes
Sablier supports different loading page themes:
```yaml
dynamic:
displayName: My Service
theme: ghost # Options: ghost, hacker, ocean, etc.
show-details-by-default: true
```
### Group Management
Services can be grouped for coordinated startup:
```yaml
# All services in the same group start together
labels:
- "sablier.group=media-stack"
- "sablier.enable=true"
- "sablier.start-on-demand=true"
```
### API Access
Sablier provides a REST API for programmatic control:
```bash
# Get service status
curl http://sablier-service:10000/api/groups
# Start a service group
curl -X POST http://sablier-service:10000/api/groups/media-stack/start
# Stop a service group
curl -X POST http://sablier-service:10000/api/groups/media-stack/stop
```
## Migration from Manual Management
When adding Sablier to existing services:
1. **Change restart policy** to `"no"` in the compose file (critical for Sablier control)
2. **Add Sablier labels** to the service compose file
3. **Configure Traefik middleware** for the service
4. **Stop the service** initially (let Sablier manage it)
5. **Test access** - service should start automatically
6. **Monitor logs** to ensure proper operation
> **Important**: Services managed by Sablier must have `restart: "no"` to allow Sablier full control over container lifecycle. Do not use `unless-stopped`, `always`, or `on-failure` restart policies.
## Related Documentation
- [Traefik Documentation](traefik.md) - Reverse proxy configuration
- [Authelia Documentation](authelia.md) - SSO authentication
- [On-Demand Remote Services](../Ondemand-Remote-Services.md) - Remote service setup guide

# Sonarr - TV Show Automation
## Table of Contents
- [Overview](#overview)
- [What is Sonarr?](#what-is-sonarr)
- [Why Use Sonarr?](#why-use-sonarr)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Initial Setup](#initial-setup)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Media Management & Automation
**Docker Image:** [linuxserver/sonarr](https://hub.docker.com/r/linuxserver/sonarr)
**Default Stack:** `media.yml`
**Web UI:** `https://sonarr.${DOMAIN}` or `http://SERVER_IP:8989`
**Authentication:** Optional (configurable)
**Ports:** 8989
## What is Sonarr?
Sonarr is a PVR (Personal Video Recorder) for Usenet and BitTorrent users. It watches for new episodes of your favorite shows and automatically downloads, sorts, and renames them. Think of it as your personal TV show manager that never sleeps.
### Key Features
- **Automatic Downloads:** Grabs new episodes as they air
- **Quality Management:** Choose preferred qualities and upgrades
- **Calendar:** See upcoming episodes at a glance
- **Series Tracking:** Monitor all your shows in one place
- **Episode Management:** Rename and organize automatically
- **Failed Download Handling:** Retry with different releases
- **Notifications:** Pushover, Telegram, Discord, etc.
- **Custom Scripts:** Run actions on import
- **List Integration:** Import shows from Trakt, IMDb, etc.
- **Multi-Language:** Profiles for different audio/subtitle languages
## Why Use Sonarr?
1. **Never Miss Episodes:** Automatic downloads when they air
2. **Quality Upgrades:** Replace with better quality over time
3. **Organization:** Consistent naming and folder structure
4. **Time Saving:** No manual searching and downloading
5. **Metadata Management:** Integrates with Plex/Jellyfin/Emby
6. **Season Packs:** Smart handling of season releases
7. **Backlog Management:** Track missing episodes
8. **Multi-Show Management:** Hundreds of shows easily
9. **Smart Search:** Finds best releases automatically
10. **Integration Ecosystem:** Works with downloaders and indexers
## How It Works
```
New Episode Airs
        ↓
Sonarr Checks RSS Feeds (Prowlarr)
        ↓
Evaluates Releases (Quality, Size, etc.)
        ↓
Sends Best Release to Downloader
(qBittorrent via Gluetun VPN)
        ↓
Monitors Download Progress
        ↓
Download Completes
        ↓
Sonarr Imports File
        ↓
Renames & Moves to Library
(/mnt/media/tv/)
        ↓
Plex/Jellyfin Auto-Scans
        ↓
Episode Available for Watching
```
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/media/sonarr/config/   # Sonarr configuration
/mnt/downloads/                    # Download directory (from qBittorrent)
/mnt/media/tv/                     # Final TV library

TV Library Structure:
/mnt/media/tv/
└── Show Name (Year)/
    └── Season 01/
        ├── Show Name - S01E01 - Episode Name.mkv
        └── Show Name - S01E02 - Episode Name.mkv
```
### Environment Variables
```bash
# User permissions (must match file ownership)
PUID=1000
PGID=1000
# Timezone (for air times)
TZ=America/New_York
```
## Official Resources
- **Website:** https://sonarr.tv
- **Wiki:** https://wiki.servarr.com/sonarr
- **GitHub:** https://github.com/Sonarr/Sonarr
- **Discord:** https://discord.gg/sonarr
- **Reddit:** https://reddit.com/r/sonarr
- **Docker Hub:** https://hub.docker.com/r/linuxserver/sonarr
## Educational Resources
### Videos
- [Sonarr Setup Guide (Techno Tim)](https://www.youtube.com/watch?v=5rtGBwBuzQE)
- [Complete *arr Stack Tutorial](https://www.youtube.com/results?search_query=sonarr+radarr+prowlarr+setup)
- [Sonarr Quality Profiles](https://www.youtube.com/results?search_query=sonarr+quality+profiles)
- [Sonarr Custom Formats](https://www.youtube.com/results?search_query=sonarr+custom+formats)
### Articles & Guides
- [TRaSH Guides (Must Read!)](https://trash-guides.info/Sonarr/)
- [Quality Settings Guide](https://trash-guides.info/Sonarr/Sonarr-Setup-Quality-Profiles/)
- [Custom Formats Guide](https://trash-guides.info/Sonarr/sonarr-setup-custom-formats/)
- [Naming Scheme](https://trash-guides.info/Sonarr/Sonarr-recommended-naming-scheme/)
- [Servarr Wiki](https://wiki.servarr.com/sonarr)
### Concepts to Learn
- **Quality Profiles:** Define preferred qualities and upgrades
- **Custom Formats:** Advanced release filtering (HDR, Dolby Vision, etc.)
- **Release Profiles:** Preferred/ignored words
- **Indexers:** Sources for releases (via Prowlarr)
- **Root Folders:** Where shows are stored
- **Series Types:** Standard, Daily, Anime
- **Season Packs:** Full season releases
- **Cutoff:** Quality to stop upgrading at
## Docker Configuration
### Complete Service Definition
```yaml
sonarr:
image: linuxserver/sonarr:latest
container_name: sonarr
restart: unless-stopped
networks:
- traefik-network
ports:
- "8989:8989"
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
volumes:
- /opt/stacks/media/sonarr/config:/config
- /mnt/media/tv:/tv
- /mnt/downloads:/downloads
labels:
- "traefik.enable=true"
- "traefik.http.routers.sonarr.rule=Host(`sonarr.${DOMAIN}`)"
- "traefik.http.routers.sonarr.entrypoints=websecure"
- "traefik.http.routers.sonarr.tls.certresolver=letsencrypt"
- "traefik.http.routers.sonarr.middlewares=authelia@docker"
- "traefik.http.services.sonarr.loadbalancer.server.port=8989"
```
### Key Volume Mapping
```yaml
volumes:
- /opt/stacks/media/sonarr/config:/config # Sonarr settings
- /mnt/media/tv:/tv # TV library (final location)
- /mnt/downloads:/downloads # Download client folder
```
**Important:** Sonarr needs access to both download location and final library location to perform hardlinking (instant moves without copying).
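To see why this matters, here is a quick sketch of hardlinking using a throwaway temp directory (paths are illustrative; a real import links from `/downloads` into `/tv`):

```shell
# Create a file and hardlink it within the same filesystem - no data is copied
tmp=$(mktemp -d)
echo "episode data" > "$tmp/downloads-copy.mkv"
ln "$tmp/downloads-copy.mkv" "$tmp/library-copy.mkv"   # instant, zero extra space
stat -c '%h' "$tmp/downloads-copy.mkv"                  # link count: 2
rm -rf "$tmp"
```

Both names point at the same data on disk, which is why the "move" into the library is instant and the file can keep seeding from the downloads folder.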
## Initial Setup
### First Access
1. **Start Container:**
```bash
docker compose up -d sonarr
```
2. **Access Web UI:**
- Local: `http://SERVER_IP:8989`
- Domain: `https://sonarr.yourdomain.com`
3. **Initial Configuration:**
- Settings → Media Management
- Settings → Profiles
- Settings → Indexers (via Prowlarr)
- Settings → Download Clients
### Media Management Settings
**Settings → Media Management:**
1. **Rename Episodes:** ✓ Enable
2. **Replace Illegal Characters:** ✓ Enable
3. **Standard Episode Format:**
```
{Series Title} - S{season:00}E{episode:00} - {Episode Title} {Quality Full}
```
Example: `Breaking Bad - S01E01 - Pilot Bluray-1080p.mkv`
4. **Daily Episode Format:**
```
{Series Title} - {Air-Date} - {Episode Title} {Quality Full}
```
5. **Anime Episode Format:**
```
{Series Title} - S{season:00}E{episode:00} - {Episode Title} {Quality Full}
```
6. **Series Folder Format:**
```
{Series Title} ({Series Year})
```
Example: `Breaking Bad (2008)`
7. **Season Folder Format:**
```
Season {season:00}
```
8. **Root Folders:**
- Add: `/tv`
- This is where shows will be stored
**File Management:**
- ✓ Unmonitor Deleted Episodes
- ✓ Download Propers and Repacks: Prefer and Upgrade
- ✓ Analyze video files
- ✓ Use Hardlinks instead of Copy: Important for space saving!
- Minimum Free Space: 100 MB
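To preview what the standard episode format above produces, the tokens can be rendered by hand (values are illustrative):

```shell
# Render "{Series Title} - S{season:00}E{episode:00} - {Episode Title} {Quality Full}"
series="Breaking Bad"; season=1; episode=1; title="Pilot"; quality="Bluray-1080p"
printf '%s - S%02dE%02d - %s %s.mkv\n' "$series" "$season" "$episode" "$title" "$quality"
# -> Breaking Bad - S01E01 - Pilot Bluray-1080p.mkv
```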
### Quality Profiles
**Settings → Profiles → Quality Profiles:**
The default profile works, but a recommended setup follows:
**HD-1080p Profile:**
1. Create new profile: "HD-1080p"
2. Upgrades Allowed: ✓
3. Upgrade Until: Bluray-1080p
4. Qualities (in order):
- Bluray-1080p
- WEBDL-1080p
- WEBRip-1080p
- HDTV-1080p
5. Save
**4K Profile (optional):**
- Name: "Ultra HD"
- Upgrade Until: Bluray-2160p
- Qualities: 4K variants
**Follow TRaSH Guides** for optimal quality profiles:
https://trash-guides.info/Sonarr/Sonarr-Setup-Quality-Profiles/
### Download Client Setup
**Settings → Download Clients → Add → qBittorrent:**
1. **Name:** qBittorrent
2. **Enable:** ✓
3. **Host:** `gluetun` (container name, if qBittorrent behind VPN)
4. **Port:** `8080`
5. **Username:** `admin` (default)
6. **Password:** `adminadmin` (older default; recent qBittorrent versions instead print a temporary password in the container logs. Change it either way!)
7. **Category:** `tv-sonarr`
8. **Priority:** Normal
9. **Test → Save**
**Important Settings:**
- ✓ Remove Completed: removes downloads that have finished and met their seeding goals
- ✓ Remove Failed: removes downloads that fail
### Indexer Setup (via Prowlarr)
Sonarr should get indexers automatically from Prowlarr (Sync):
**Manual Check:**
1. Settings → Indexers
2. Should see synced indexers from Prowlarr
3. Each should show "Test: Successful"
**If Not Synced:**
- Go to Prowlarr → Settings → Apps
- Add Sonarr application
- Prowlarr Server: `http://prowlarr:9696`
- Sonarr Server: `http://sonarr:8989`
- API Key: From Sonarr → Settings → General → API Key
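Before saving the app in Prowlarr, you can confirm Sonarr's API is reachable with its key (the URL and key below are placeholders; substitute your own):

```shell
# Placeholder values - replace with your server address and real API key
SONARR_URL=http://sonarr:8989
API_KEY=changeme

# A JSON response means the URL and key are correct
curl -fsS --max-time 5 -H "X-Api-Key: $API_KEY" \
  "$SONARR_URL/api/v3/system/status" || echo "Sonarr API unreachable - check URL and key"
```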
### Adding Your First Show
1. **Click "Add Series"**
2. **Search:** Type show name (e.g., "Breaking Bad")
3. **Select** correct show from results
4. **Configure:**
- Root Folder: `/tv`
- Quality Profile: HD-1080p
- Series Type: Standard (or Daily/Anime)
- Season: All or specific seasons
- ✓ Start search for missing episodes
5. **Add Series**
Sonarr will:
- Add show to database
- Search for all missing episodes
- Send downloads to qBittorrent
- Monitor for new episodes
## Advanced Topics
### Custom Formats (v4)
Custom Formats allow advanced filtering of releases:
**Settings → Custom Formats:**
Common custom formats:
- **HDR:** Prefer HDR versions
- **Dolby Vision:** DV support
- **Streaming Services:** Prefer specific services
- **Audio:** Atmos, TrueHD, DTS-X
- **Release Groups:** Prefer trusted groups
**Importing from TRaSH Guides:**
https://trash-guides.info/Sonarr/sonarr-setup-custom-formats/
**Example Use Cases:**
- Prefer HDR for 4K TV
- Avoid streaming service logos/watermarks
- Prefer lossless audio
- Prefer specific release groups (RARBG, etc.)
### Release Profiles (Deprecated in v4)
Replaced by Custom Formats in Sonarr v4.
### Series Types
**Standard:**
- Regular TV shows
- Season/Episode numbering
- Example: Breaking Bad S01E01
**Daily:**
- Talk shows, news
- Air date naming
- Example: The Daily Show 2024-01-01
**Anime:**
- Absolute episode numbering
- Example: One Piece 001
Set when adding series.
### Auto-Tagging
**Settings → Import Lists:**
Automatically add shows from:
- **Trakt Lists:** Your watchlist, popular shows
- **IMDb Lists:** Custom lists
- **Simkl Lists:** Another tracking service
- **MyAnimeList:** For anime
**Example: Trakt Watchlist**
1. Settings → Import Lists → Add → Trakt Watchlist
2. Authenticate with Trakt
3. Configure:
- Quality Profile: HD-1080p
- Root Folder: /tv
- Monitor: All Episodes
- Search: Yes
4. Save
Shows added to your Trakt watchlist auto-import to Sonarr!
### Notifications
**Settings → Connect → Add Notification:**
Popular options:
- **Plex:** Update libraries on import
- **Jellyfin:** Scan library
- **Discord:** New episode notifications
- **Telegram:** Mobile alerts
- **Pushover:** Push notifications
- **Email:** SMTP notifications
**Example: Plex**
1. Add → Plex Media Server
2. Host: `plex`
3. Port: `32400`
4. Auth Token: Get from Plex
5. Triggers: On Import, On Upgrade
6. Test → Save
### Custom Scripts
**Settings → Connect → Custom Script:**
Run scripts on events:
- On Download
- On Import
- On Upgrade
- On Rename
- On Delete
**Example Use Cases:**
- Notify external service
- Trigger backup
- Custom file processing
- Update external database
### Multiple Sonarr Instances
Run separate instances for different use cases:
**sonarr-4k.yml:**
```yaml
sonarr-4k:
image: linuxserver/sonarr:latest
container_name: sonarr-4k
ports:
- "8990:8989"
volumes:
- /opt/stacks/media/sonarr-4k/config:/config
- /mnt/media/tv-4k:/tv
- /mnt/downloads:/downloads
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
```
**Use Cases:**
- Separate 4K library
- Different quality standards
- Anime-specific instance
- Testing new settings
### Season Pack Handling
**Settings → Indexers → Edit Indexer:**
- Prefer Season Pack: Yes
- Season Pack Only: No
**How It Works:**
- Sonarr prefers full season downloads
- More efficient than individual episodes
- Only grabs if meets quality requirements
### V4 vs V3
**Sonarr v4 (Current):**
- Custom Formats replace Release Profiles
- Better quality management
- Improved UI
- Runs on .NET (replaces Mono)
**Sonarr v3 (Legacy):**
- Release Profiles
- Older interface
- Still supported
**Migration:**
- Automatic on update
- Backup first!
- Read changelog
## Troubleshooting
### Sonarr Can't Find Releases
```bash
# Check indexers
# Settings → Indexers → Test All
# Check Prowlarr sync
docker logs prowlarr | grep sonarr
# Check search
# Series → Manual Search → View logs
# Common causes:
# - No indexers configured
# - Indexers down
# - Release doesn't exist yet
# - Quality profile too restrictive
```
### Downloads Not Importing
```bash
# Check download client connection
# Settings → Download Clients → Test
# Check permissions
ls -la /mnt/downloads/
ls -la /mnt/media/tv/
# Ensure Sonarr can access both
docker exec sonarr ls /downloads
docker exec sonarr ls /tv
# Check logs
docker logs sonarr | grep -i import
# Common issues:
# - Permission denied
# - Wrong category in qBittorrent
# - File still seeding
# - Hardlink failed (different filesystems)
```
### Wrong Series Match
```bash
# Edit series
# Series → Select Show → Edit
# Fix match
# Search for correct series
# Update Series
# Or delete and re-add with correct match
```
### Upgrade Loop
```bash
# Sonarr keeps downloading same episode
# Check quality profile
# Ensure "Upgrade Until" is set correctly
# Check custom formats
# May be scoring releases incorrectly
# Check file already exists
# Series → Select Episode → Delete file
# Sonarr may think existing file doesn't meet requirements
```
### Hardlink Errors
```bash
# Error: "Unable to hardlink"
# Check if /downloads and /tv on same filesystem
df -h /mnt/downloads
df -h /mnt/media/tv
# Must be on same mount point for hardlinks
# If different, Sonarr will copy instead
# Fix: Ensure both on same disk/volume
```
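A quick device-ID comparison settles the question definitively: two paths can only be hardlinked if `stat` reports the same device ID for both (the paths below are examples; substitute your own mounts):

```shell
# Two paths support hardlinks only if they share a device ID
same_fs() { [ "$(stat -c '%d' "$1")" = "$(stat -c '%d' "$2")" ]; }

# Example: substitute /mnt/downloads and /mnt/media/tv on your system
if same_fs /tmp /tmp; then
  echo "same filesystem: hardlinks will work"
else
  echo "different filesystems: Sonarr will copy instead"
fi
```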
### Database Corruption
```bash
# Stop Sonarr
docker stop sonarr
# Backup database
cp /opt/stacks/media/sonarr/config/sonarr.db /opt/backups/
# Check database
sqlite3 /opt/stacks/media/sonarr/config/sonarr.db "PRAGMA integrity_check;"
# If corrupted, restore from backup
# Or let Sonarr rebuild (loses settings)
# rm /opt/stacks/media/sonarr/config/sonarr.db
docker start sonarr
```
## Performance Optimization
### RSS Sync Interval
**Settings → Indexers → Options:**
- RSS Sync Interval: 15 minutes (default)
- Lower for faster new episode detection
- Higher to reduce indexer load
### Reduce Database Size
```bash
# Stop Sonarr
docker stop sonarr
# Vacuum database
sqlite3 /opt/stacks/media/sonarr/config/sonarr.db "VACUUM;"
# Remove old history
# Settings → General → History Cleanup: 30 days
docker start sonarr
```
### Optimize Scanning
**Settings → Media Management:**
- Analyze video files: No (if not needed)
- Rescan Series Folder after Refresh: Only if Changed
Reduces I/O on library scans.
## Security Best Practices
1. **Enable Authentication:**
- Settings → General → Security
- Authentication: Required
- Username and password
2. **API Key:**
- Keep API key secure
- Regenerate if compromised
- Settings → General → API Key
3. **Reverse Proxy:**
- Use Traefik + Authelia
- Don't expose port 8989 publicly
4. **Read-Only Media:**
- Mount TV library as read-only if possible
- Sonarr needs write for imports
5. **Network Isolation:**
- Consider separate Docker network
- Only connect to necessary services
6. **Regular Backups:**
- Backup `/config` directory
- Includes database and settings
7. **Update Regularly:**
- Keep Sonarr updated
- Check release notes
## Backup Strategy
**Critical Files:**
```bash
/opt/stacks/media/sonarr/config/sonarr.db # Database
/opt/stacks/media/sonarr/config/config.xml # Settings
/opt/stacks/media/sonarr/config/Backup/ # Built-in backups
```
**Backup Script:**
```bash
#!/bin/bash
DATE=$(date +%Y%m%d)
BACKUP_DIR=/opt/backups/sonarr
mkdir -p "$BACKUP_DIR"
# SQLite's online backup gives a consistent snapshot even while Sonarr is running
sqlite3 /opt/stacks/media/sonarr/config/sonarr.db \
  ".backup '$BACKUP_DIR/sonarr-$DATE.db'"
# Keep last 7 days
find "$BACKUP_DIR" -name "sonarr-*.db" -mtime +7 -delete
```
**Restore:**
```bash
docker stop sonarr
cp /opt/backups/sonarr/sonarr-20240101.db /opt/stacks/media/sonarr/config/sonarr.db
docker start sonarr
```
## Integration with Other Services
### Sonarr + Plex/Jellyfin
- Auto-update library on import
- Settings → Connect → Plex/Jellyfin
### Sonarr + Prowlarr
- Automatic indexer management
- Centralized indexer configuration
- Prowlarr syncs to Sonarr
### Sonarr + qBittorrent (via Gluetun)
- Download client for torrents
- Behind VPN for safety
- Automatic import after download
### Sonarr + Jellyseerr
- User requests interface
- Jellyseerr sends to Sonarr
- Automated fulfillment
### Sonarr + Tautulli
- Track Sonarr additions via Plex
- Statistics on new episodes
## Summary
Sonarr is the essential TV show automation tool offering:
- Automatic episode downloads
- Quality management and upgrades
- Organized library structure
- Calendar and tracking
- Integration with downloaders and media servers
- Completely free and open-source
**Perfect for:**
- TV show enthusiasts
- Automated media management
- Quality upgraders
- Multiple show tracking
- Integration with *arr stack
**Key Points:**
- Follow TRaSH Guides for optimal setup
- Use quality profiles wisely
- Hardlinks save disk space
- Pair with Prowlarr for indexers
- Pair with qBittorrent + Gluetun for downloads
- Regular backups recommended
- RSS sync keeps you up-to-date
**Remember:**
- Proper file permissions crucial
- Same filesystem for hardlinks
- Quality profiles control upgrades
- Custom formats for advanced filtering
- Monitor RSS sync interval
- Keep API key secure
- Test indexers regularly
Sonarr + Radarr + Prowlarr + qBittorrent = Perfect media automation stack!

# TasmoAdmin - Tasmota Device Manager
## Table of Contents
- [Overview](#overview)
- [What is TasmoAdmin?](#what-is-tasmoadmin)
- [Why Use TasmoAdmin?](#why-use-tasmoadmin)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
## Overview
**Category:** IoT Device Management
**Docker Image:** [raymondmm/tasmoadmin](https://hub.docker.com/r/raymondmm/tasmoadmin)
**Default Stack:** `homeassistant.yml`
**Web UI:** `http://SERVER_IP:9999`
**Ports:** 9999
## What is TasmoAdmin?
TasmoAdmin is a centralized web interface for managing multiple Tasmota-flashed devices. Tasmota is alternative firmware for ESP8266 smart devices (Sonoff, Tuya, etc.). TasmoAdmin lets you configure, update, and monitor all your Tasmota devices from one dashboard instead of accessing each device individually.
### Key Features
- **Centralized Management:** All devices in one place
- **Bulk Updates:** Update firmware on all devices
- **Configuration Backup:** Save device configs
- **Device Discovery:** Auto-find Tasmota devices
- **Monitoring:** See status of all devices
- **Remote Control:** Toggle devices
- **Group Operations:** Manage multiple devices
## Why Use TasmoAdmin?
1. **Bulk Management:** Update 50 devices in one click
2. **Configuration Backup:** Save before experiments
3. **Overview:** See all devices at once
4. **Easier Than Individual Access:** No remembering IPs
5. **Bulk Configuration:** Apply settings to groups
6. **Free & Open Source:** No cost
## Configuration in AI-Homelab
```
/opt/stacks/homeassistant/tasmoadmin/data/ # Device configs
```
## Official Resources
- **GitHub:** https://github.com/TasmoAdmin/TasmoAdmin
- **Tasmota:** https://tasmota.github.io/docs
## Docker Configuration
```yaml
tasmoadmin:
image: raymondmm/tasmoadmin:latest
container_name: tasmoadmin
restart: unless-stopped
networks:
- traefik-network
ports:
- "9999:80"
volumes:
- /opt/stacks/homeassistant/tasmoadmin/data:/data
labels:
- "traefik.enable=true"
- "traefik.http.routers.tasmoadmin.rule=Host(`tasmoadmin.${DOMAIN}`)"
```
## Summary
TasmoAdmin simplifies managing many Tasmota devices by providing centralized configuration, firmware updates, and monitoring from a single web interface.
**Perfect for:**
- Multiple Tasmota devices
- Bulk firmware updates
- Configuration management
- Device monitoring
**Key Points:**
- Manages Tasmota-flashed ESP devices
- Auto-discovery on network
- Bulk operations support
- Config backup/restore
- Free and open-source
TasmoAdmin makes Tasmota device management effortless!

# Tdarr - Transcoding Automation
## Table of Contents
- [Overview](#overview)
- [What is Tdarr?](#what-is-tdarr)
- [Why Use Tdarr?](#why-use-tdarr)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Initial Setup](#initial-setup)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Media Transcoding
**Docker Image:** [ghcr.io/haveagitgat/tdarr](https://github.com/HaveAGitGat/Tdarr/pkgs/container/tdarr)
**Default Stack:** `media-extended.yml`
**Web UI:** `https://tdarr.${DOMAIN}` or `http://SERVER_IP:8265`
**Server Port:** 8266
**Authentication:** Built-in
**Ports:** 8265 (WebUI), 8266 (Server)
## What is Tdarr?
Tdarr is a distributed transcoding system for automating media library transcoding/remuxing management. It can convert your entire media library to specific codecs (like H.265/HEVC), formats, or remove unwanted audio/subtitle tracks - all automatically. It uses a plugin system with hundreds of pre-made plugins for common tasks, supports hardware acceleration, and can run transcoding across multiple nodes.
### Key Features
- **Distributed Transcoding:** Multiple worker nodes
- **Plugin System:** 500+ pre-made plugins
- **Hardware Acceleration:** NVIDIA, Intel QSV, AMD
- **Health Checks:** Identify corrupted files
- **Codec Conversion:** H.264 → H.265, VP9, AV1
- **Audio/Subtitle Management:** Remove unwanted tracks
- **Container Remux:** MKV → MP4, etc.
- **Scheduling:** Transcode during specific hours
- **Library Statistics:** Codec breakdown, space usage
- **Web UI:** Modern interface with dark mode
## Why Use Tdarr?
1. **Space Savings:** H.265 saves 30-50% vs H.264
2. **Standardization:** Consistent codec across library
3. **Compatibility:** Convert to Plex/Jellyfin-friendly formats
4. **Cleanup:** Remove unwanted audio/subtitle tracks
5. **Quality Control:** Health checks detect corruption
6. **Automation:** Set it and forget it
7. **Hardware Acceleration:** Fast transcoding with GPU
8. **Distributed:** Use multiple machines
9. **Free & Open Source:** No cost
10. **Flexible:** Plugin system for any workflow
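For point 1, a back-of-the-envelope estimate of the space an H.265 conversion pass frees (shell arithmetic; the library size is illustrative):

```shell
# Estimate space freed by an H.264 -> H.265 conversion pass
library_gb=2000     # example: a 2 TB library
savings_pct=40      # midpoint of the typical 30-50% range
echo "estimated space freed: $(( library_gb * savings_pct / 100 )) GB"
# -> estimated space freed: 800 GB
```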
## How It Works
```
Tdarr Scans Media Library
        ↓
Analyzes Each File
(Codec, resolution, audio tracks, etc.)
        ↓
Compares Against Rules/Plugins
        ↓
Files Needing Transcoding → Queue
        ↓
Worker Nodes Process Queue
(Using CPU or GPU)
        ↓
Transcoded File Created
        ↓
Replaces Original (or saves alongside)
        ↓
Library Updated
### Architecture
**Tdarr Server:**
- Central management
- Web UI
- Library scanning
- Queue management
**Tdarr Node:**
- Worker process
- Performs transcoding
- Can run on same or different machine
- Multiple nodes supported
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/media-management/tdarr/
├── server/ # Tdarr server data
├── config/ # Configuration
├── logs/ # Logs
└── temp/ # Temporary transcoding files
/mnt/media/ # Media libraries (shared)
├── movies/
└── tv/
```
### Environment Variables
```bash
# Server
serverIP=0.0.0.0
serverPort=8266
webUIPort=8265
# User permissions
PUID=1000
PGID=1000
# Timezone
TZ=America/New_York
# Hardware acceleration (optional)
# NVIDIA_VISIBLE_DEVICES=all
```
## Official Resources
- **Website:** https://tdarr.io
- **Documentation:** https://docs.tdarr.io
- **GitHub:** https://github.com/HaveAGitGat/Tdarr
- **Discord:** https://discord.gg/GF8Chh3
- **Forum:** https://www.reddit.com/r/Tdarr
- **Plugins:** https://github.com/HaveAGitGat/Tdarr_Plugins
## Educational Resources
### Videos
- [Tdarr Setup Guide](https://www.youtube.com/results?search_query=tdarr+setup+guide)
- [Tdarr H.265 Conversion](https://www.youtube.com/results?search_query=tdarr+h265+conversion)
- [Tdarr Hardware Transcoding](https://www.youtube.com/results?search_query=tdarr+hardware+acceleration)
### Articles & Guides
- [Official Documentation](https://docs.tdarr.io)
- [Plugin Library](https://github.com/HaveAGitGat/Tdarr_Plugins)
- [Best Practices](https://docs.tdarr.io/docs/tutorials/best-practices)
### Concepts to Learn
- **Transcoding:** Converting video codecs
- **Remuxing:** Changing container without re-encoding
- **H.265/HEVC:** Modern codec, better compression
- **Hardware Encoding:** GPU-accelerated transcoding
- **Bitrate:** Video quality measurement
- **CRF:** Constant Rate Factor (quality setting)
- **Streams:** Video, audio, subtitle tracks
## Docker Configuration
### Complete Service Definition
```yaml
tdarr:
image: ghcr.io/haveagitgat/tdarr:latest
container_name: tdarr
restart: unless-stopped
networks:
- traefik-network
ports:
- "8265:8265" # WebUI
- "8266:8266" # Server
environment:
- serverIP=0.0.0.0
- serverPort=8266
- webUIPort=8265
- PUID=1000
- PGID=1000
- TZ=America/New_York
volumes:
- /opt/stacks/media-management/tdarr/server:/app/server
- /opt/stacks/media-management/tdarr/config:/app/configs
- /opt/stacks/media-management/tdarr/logs:/app/logs
- /opt/stacks/media-management/tdarr/temp:/temp
- /mnt/media:/media
devices:
- /dev/dri:/dev/dri # Intel QuickSync
labels:
- "traefik.enable=true"
- "traefik.http.routers.tdarr.rule=Host(`tdarr.${DOMAIN}`)"
- "traefik.http.routers.tdarr.entrypoints=websecure"
- "traefik.http.routers.tdarr.tls.certresolver=letsencrypt"
- "traefik.http.routers.tdarr.middlewares=authelia@docker"
- "traefik.http.services.tdarr.loadbalancer.server.port=8265"
```
### With NVIDIA GPU
```yaml
tdarr:
image: ghcr.io/haveagitgat/tdarr:latest
container_name: tdarr
runtime: nvidia
environment:
- NVIDIA_VISIBLE_DEVICES=all
- NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
# ... rest of config
```
### Tdarr Node (Worker)
```yaml
tdarr-node:
image: ghcr.io/haveagitgat/tdarr_node:latest
container_name: tdarr-node
restart: unless-stopped
network_mode: service:tdarr
environment:
- nodeName=MainNode
- serverIP=0.0.0.0
- serverPort=8266
- PUID=1000
- PGID=1000
volumes:
- /opt/stacks/media-management/tdarr/config:/app/configs
- /opt/stacks/media-management/tdarr/logs:/app/logs
- /opt/stacks/media-management/tdarr/temp:/temp
- /mnt/media:/media
devices:
- /dev/dri:/dev/dri
```
## Initial Setup
### First Access
1. **Start Containers:**
```bash
docker compose up -d tdarr tdarr-node
```
2. **Access Web UI:**
- Local: `http://SERVER_IP:8265`
- Domain: `https://tdarr.yourdomain.com`
3. **Initial Configuration:**
- Add libraries
- Configure node
- Install plugins
- Create flows
### Add Library
**Libraries Tab → Add Library:**
1. **Name:** Movies
2. **Source:** `/media/movies`
3. **Folder watch:** ✓ Enable
4. **Priority:** Normal
5. **Schedule:** Always (or specific hours)
6. **Save**
**Repeat for TV Shows:**
- Name: TV Shows
- Source: `/media/tv`
### Configure Node
**Nodes Tab:**
**Built-in Node:**
- Should appear automatically
- Named "MainNode" (from docker config)
- Shows as "Online"
**Node Settings:**
- Transcode GPU: 1 (if GPU available)
- Transcode CPU: 2-4 (CPU threads)
- Health Check GPU: 0
- Health Check CPU: 1
**Hardware Acceleration:**
- CPU Only: Use CPU workers
- NVIDIA: Select "NVENC" codec in plugins
- Intel QSV: Select "QSV" codec
- Prioritize GPU for best performance
### Install Plugins
**Plugins Tab → Community:**
**Essential Plugins:**
1. **Migz-Transcode using Nvidia GPU & FFMPEG**
- NVIDIA hardware transcoding
- H.264 → H.265
2. **Migz-Transcode using CPU & FFMPEG**
- CPU-based transcoding
- Fallback for non-GPU
3. **Remux Container to MKV**
- Convert to MKV without re-encoding
4. **Remove All Subtitle Streams**
- Clean unwanted subtitles
5. **Remove Non-English Audio Streams**
- Keep only English audio
**Install:**
- Click "+" to install plugin
- Shows in "Local" tab when installed
### Create Flow
**Flows Tab → Add Flow:**
**Example: Convert to H.265**
1. **Flow Name:** H.265 Conversion
2. **Add Step:**
- Plugin: Migz-Transcode using Nvidia GPU & FFMPEG
- Target Codec: hevc (H.265)
- CRF: 23 (quality)
- Resolution: Keep original
- Audio: Copy (no transcode)
- Subtitles: Copy
3. **Save Flow**
**Assign to Library:**
- Libraries → Movies → Transcode Options
- Flow: H.265 Conversion
- Save
### Start Processing
**Dashboard:**
- View queue
- Processing status
- Completed/Failed counts
- Library statistics
**Start Transcoding:**
- Automatically starts based on schedule
- Monitor progress in real-time
## Advanced Topics
### Custom Plugins
**Create Custom Plugin:**
**Plugins Tab → Local → Create Plugin:**
```javascript
function details() {
return {
id: "Custom_H265",
Stage: "Pre-processing",
Name: "Custom H.265 Conversion",
Type: "Video",
Operation: "Transcode",
Description: "Custom H.265 conversion with specific settings",
Version: "1.0",
Tags: "video,h265,nvenc",
};
}
function plugin(file, librarySettings, inputs, otherArguments) {
var response = {
processFile: false,
preset: "",
handBrakeMode: false,
FFmpegMode: true,
reQueueAfter: true,
infoLog: "",
};
if (file.video_codec_name === "hevc") {
response.infoLog += "File already H.265 ☑\n";
response.processFile = false;
return response;
}
response.processFile = true;
  response.preset = "-c:v hevc_nvenc -cq 23 -c:a copy -c:s copy"; // NVENC uses -cq for constant quality; it does not support -crf
response.infoLog += "Transcoding to H.265 with NVENC\n";
return response;
}
module.exports.details = details;
module.exports.plugin = plugin;
```
**Use Cases:**
- Specific quality settings
- Custom audio handling
- Conditional transcoding
- Advanced workflows
### Flow Conditions
**Add Conditions to Flows:**
**Example: Only transcode if > 1080p**
- Add condition: Resolution > 1920x1080
- Then: Transcode to 1080p
- Else: Skip
**Example: Only if file size > 5GB**
- Condition: File size check
- Large files get transcoded
- Small files skipped
### Scheduling
**Library Settings → Schedule:**
**Transcode Schedule:**
- All day: 24/7 transcoding
- Night only: 10PM - 6AM
- Custom hours: Define specific times
**Use Cases:**
- Transcode during off-hours
- Avoid peak usage times
- Save electricity costs
### Health Checks
**Detect Corrupted Files:**
**Libraries → Settings → Health Check:**
- ✓ Enable health check
- Check for: Video errors, audio errors
- Workers: 1-2 CPU workers
**Health Check Flow:**
- Scans files for corruption
- Marks unhealthy files
- Option to quarantine or delete
### Statistics
**Dashboard → Statistics:**
**Library Stats:**
- Total files
- Codec breakdown (H.264 vs H.265)
- Total size
- Space savings potential
**Processing Stats:**
- Files processed
- Success rate
- Average processing time
- Queue size
### Multiple Libraries
**Separate libraries for:**
- Movies
- TV Shows
- 4K Content
- Different quality tiers
**Benefits:**
- Different flows per library
- Prioritization
- Separate schedules
## Troubleshooting
### Tdarr Not Transcoding
```bash
# Check node status
# Nodes tab → Should show "Online"
# Check queue
# Dashboard → Should show items
# Check workers
# Nodes → Workers should be > 0
# Check logs
docker logs tdarr
docker logs tdarr-node
# Common issues:
# - No workers configured
# - Library not assigned to flow
# - Schedule disabled
# - Temp directory full
```
### Transcoding Fails
```bash
# Check logs
docker logs tdarr-node | tail -50
# Check temp directory space
df -h /opt/stacks/media-management/tdarr/temp/
# Check FFmpeg
docker exec tdarr-node ffmpeg -version
# Check hardware acceleration
# GPU not detected → use CPU plugins
# Check file permissions
ls -la /mnt/media/movies/
```
### High CPU/GPU Usage
```bash
# Check worker count
# Nodes → Reduce workers if too high
# Monitor resources
docker stats tdarr tdarr-node
# Limit workers:
# - GPU: 1-2
# - CPU: 2-4 (depending on cores)
# Schedule during off-hours
# Libraries → Schedule → Night only
```
### Slow Transcoding
```bash
# Enable hardware acceleration
# Use NVENC/QSV plugins instead of CPU
# Reduce quality
# CRF 28 faster than CRF 18
# Increase workers (if resources available)
# Use faster preset
# Plugin settings → Preset: fast/faster
# Check disk I/O
# Fast NVMe for temp directory improves speed
```
### Files Not Replacing Originals
```bash
# Check flow settings
# Flow → Output → Replace original: ✓
# Check permissions
ls -la /mnt/media/
# Check temp directory
ls -la /opt/stacks/media-management/tdarr/temp/
# Check logs for errors
docker logs tdarr-node | grep -i error
```
## Performance Optimization
### Hardware Acceleration
**Best Performance:**
1. NVIDIA GPU (NVENC)
2. Intel QuickSync (QSV)
3. AMD GPU
4. CPU (slowest)
**Settings:**
- GPU workers: 1-2
- Quality: CRF 23-28
- Preset: medium/fast
### Temp Directory
**Use Fast Storage:**
```yaml
volumes:
- /path/to/fast/nvme:/temp
```
**Or RAM disk:**
```yaml
volumes:
- /dev/shm:/temp
```
**Benefits:**
- Faster read/write
- Reduced wear on HDDs
### Worker Optimization
**Recommendations:**
- Transcode GPU: 1-2
- Transcode CPU: 50-75% of cores
- Health Check: 1 CPU worker
**Don't Overload:**
- System needs resources for other services
- Leave headroom
## Security Best Practices
1. **Reverse Proxy:** Use Traefik + Authelia
2. **Read-Only Media:** Use separate temp directory
3. **Backup Before Bulk Operations:** Test on small set first
4. **Regular Backups:** Original files until verified
5. **Monitor Disk Space:** Transcoding needs 2x file size temporarily
6. **Limit Access:** Keep UI secured
7. **Regular Updates:** Keep Tdarr current
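For point 5, a sketch of a headroom check that verifies a temp directory can hold twice a given file's size (the function name, paths, and the 2x factor follow the note above and are illustrative):

```shell
# Succeed only if tmpdir has at least 2x the file's size free
check_headroom() {
  file=$1; tmpdir=$2
  need=$(( $(stat -c '%s' "$file") * 2 ))
  avail=$(( $(df -kP "$tmpdir" | awk 'NR==2 {print $4}') * 1024 ))
  [ "$avail" -ge "$need" ]
}

f=$(mktemp)   # stand-in for a media file
check_headroom "$f" /tmp && echo "enough temp space for transcoding"
rm -f "$f"
```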
## Backup Strategy
**Before Transcoding:**
- Test on small library subset
- Verify quality before bulk processing
- Keep original files until verified
**Critical Files:**
```bash
/opt/stacks/media-management/tdarr/server/ # Database
/opt/stacks/media-management/tdarr/config/ # Configuration
```
**Backup Script:**
```bash
#!/bin/bash
DATE=$(date +%Y%m%d)
BACKUP_DIR=/opt/backups/tdarr
tar -czf $BACKUP_DIR/tdarr-$DATE.tar.gz \
/opt/stacks/media-management/tdarr/server/ \
/opt/stacks/media-management/tdarr/config/
find $BACKUP_DIR -name "tdarr-*.tar.gz" -mtime +7 -delete
```
## Integration with Other Services
### Tdarr + Plex/Jellyfin
- Transcode to compatible codecs
- Reduce Plex transcoding load
- Optimize library for direct play
### Tdarr + Sonarr/Radarr
- Process new downloads automatically
- Standard quality across library
## Summary
Tdarr is the transcoding automation system offering:
- Distributed transcoding
- H.265 space savings
- Hardware acceleration
- 500+ plugins
- Flexible workflows
- Health checking
- Free and open-source
**Perfect for:**
- Large media libraries
- Space optimization (H.265)
- Library standardization
- Quality control
- Hardware acceleration users
**Key Points:**
- Use GPU for best performance
- CRF 23 good balance
- Test on small set first
- Monitor disk space
- Fast temp directory helps
- Schedule during off-hours
- Backup before bulk operations
**Remember:**
- Transcoding is lossy (quality loss)
- Keep originals until verified
- H.265 saves 30-50% space
- Hardware acceleration essential
- Can take days for large libraries
- Test plugins before bulk use
Tdarr automates your entire library transcoding workflow!

# Traefik - Modern Reverse Proxy
## Table of Contents
- [Overview](#overview)
- [What is Traefik?](#what-is-traefik)
- [Why Use Traefik?](#why-use-traefik)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Core Infrastructure
**Docker Image:** [traefik](https://hub.docker.com/_/traefik)
**Default Stack:** `core.yml`
**Web UI:** `https://traefik.${DOMAIN}`
**Authentication:** Protected by Authelia (SSO)
## What is Traefik?
Traefik is a modern HTTP reverse proxy and load balancer designed for microservices and containerized applications. It automatically discovers services and configures routing, making it ideal for Docker environments.
### Key Features
- **Automatic Service Discovery:** Detects Docker containers and configures routing automatically
- **Automatic HTTPS:** Integrates with Let's Encrypt for free SSL certificates
- **Dynamic Configuration:** No need to restart when adding/removing services
- **Multiple Providers:** Supports Docker, Kubernetes, Consul, and more
- **Middleware Support:** Authentication, rate limiting, compression, etc.
- **Load Balancing:** Distribute traffic across multiple instances
- **WebSocket Support:** Full WebSocket passthrough
- **Dashboard:** Built-in web UI for monitoring
## Why Use Traefik?
1. **Single Entry Point:** All services accessible through one domain with subdomains
2. **Automatic SSL:** Free SSL certificates automatically renewed
3. **No Manual Configuration:** Services auto-configure with Docker labels
4. **Security:** Centralized authentication and access control
5. **Professional Setup:** Industry-standard reverse proxy
6. **Easy Maintenance:** Add/remove services without touching Traefik config
## How It Works
```
Internet → Your Domain → Router (Port 80/443) → Traefik
                                                  ├→ Plex   (plex.domain.com)
                                                  ├→ Sonarr (sonarr.domain.com)
                                                  ├→ Radarr (radarr.domain.com)
                                                  └→ [Other Services]
```
### Request Flow
1. **User visits** `https://plex.yourdomain.duckdns.org`
2. **DNS resolves** to your public IP (via DuckDNS)
3. **Router forwards** port 443 to Traefik
4. **Traefik receives** the request and checks routing rules
5. **Middleware** applies authentication (if required)
6. **Traefik forwards** request to Plex container
7. **Response flows back** through Traefik to user
### Service Discovery
Traefik reads Docker labels to configure routing:
```yaml
labels:
- "traefik.enable=true"
- "traefik.http.routers.plex.rule=Host(`plex.${DOMAIN}`)"
- "traefik.http.routers.plex.entrypoints=websecure"
- "traefik.http.routers.plex.tls.certresolver=letsencrypt"
```
Traefik automatically:
- Creates route for `plex.yourdomain.com`
- Requests SSL certificate from Let's Encrypt
- Renews certificates before expiration
- Routes traffic to Plex container
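To make label-based routing concrete, here is a toy sketch (plain Python, not Traefik's actual implementation — the hostnames and backend URLs are made up) of how a `Host()` rule maps an incoming request's Host header to a backend:

```python
# Toy illustration of Host-rule routing. In Traefik, these rules come
# from Docker labels; here they are a hard-coded (hypothetical) table.
rules = {
    "plex.example.com": "http://plex:32400",
    "sonarr.example.com": "http://sonarr:8989",
}

def route(host: str) -> str:
    """Return the backend URL for a request's Host header, or raise if no router matches."""
    backend = rules.get(host.lower())  # Host matching is case-insensitive
    if backend is None:
        raise LookupError(f"404: no router matches Host(`{host}`)")
    return backend

print(route("plex.example.com"))  # → http://plex:32400
```

The real Traefik builds this table dynamically by watching the Docker socket, which is why adding a labeled container requires no Traefik restart.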
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/core/traefik/
├── traefik.yml # Static configuration
├── dynamic/ # Dynamic routing rules
│ ├── routes.yml # Additional routes
│ └── external.yml # External service proxying
└── acme.json # SSL certificates (auto-generated)
```
### Static Configuration (`traefik.yml`)
```yaml
api:
dashboard: true # Enable web dashboard
entryPoints:
web:
address: ":80"
http:
redirections:
entryPoint:
to: websecure
scheme: https
websecure:
address: ":443"
http:
tls:
certResolver: letsencrypt
certificatesResolvers:
letsencrypt:
acme:
email: your-email@example.com
storage: /acme.json
# For testing environments: Use Let's Encrypt staging to avoid rate limits
# caServer: https://acme-staging-v02.api.letsencrypt.org/directory
dnsChallenge:
provider: duckdns
# Note: Explicit resolvers can cause DNS propagation check failures
# Remove resolvers to use system's DNS for better DuckDNS TXT record resolution
providers:
docker:
endpoint: "unix:///var/run/docker.sock"
exposedByDefault: false
file:
directory: /dynamic
watch: true
```
### Environment Variables
```bash
DOMAIN=yourdomain.duckdns.org
ACME_EMAIL=your-email@example.com
```
### Service Labels Example
```yaml
services:
myservice:
image: myapp:latest
labels:
- "traefik.enable=true"
- "traefik.http.routers.myservice.rule=Host(`myservice.${DOMAIN}`)"
- "traefik.http.routers.myservice.entrypoints=websecure"
- "traefik.http.routers.myservice.tls=true" # Uses wildcard cert automatically
- "traefik.http.routers.myservice.middlewares=authelia@docker"
- "traefik.http.services.myservice.loadbalancer.server.port=8080"
networks:
- traefik-network
```
## Official Resources
- **Website:** https://traefik.io
- **Documentation:** https://doc.traefik.io/traefik/
- **GitHub:** https://github.com/traefik/traefik
- **Docker Hub:** https://hub.docker.com/_/traefik
- **Community Forum:** https://community.traefik.io
- **Blog:** https://traefik.io/blog/
## Educational Resources
### Videos
- [Traefik 101 - What is a Reverse Proxy? (Techno Tim)](https://www.youtube.com/watch?v=liV3c9m_OX8)
- [Traefik Tutorial - The BEST Reverse Proxy? (NetworkChuck)](https://www.youtube.com/watch?v=wLrmmh1eI94)
- [Traefik vs Nginx Proxy Manager](https://www.youtube.com/results?search_query=traefik+vs+nginx+proxy+manager)
- [Traefik + Docker + SSL (Wolfgang's Channel)](https://www.youtube.com/watch?v=lJRPg9jN4hE)
### Articles & Guides
- [Traefik Official Documentation](https://doc.traefik.io/traefik/)
- [Traefik Quick Start Guide](https://doc.traefik.io/traefik/getting-started/quick-start/)
- [Docker Provider Documentation](https://doc.traefik.io/traefik/providers/docker/)
- [Let's Encrypt with Traefik](https://doc.traefik.io/traefik/user-guides/docker-compose/acme-http/)
- [Awesome Traefik (GitHub)](https://github.com/containous/traefik/wiki)
### Concepts to Learn
- **Reverse Proxy:** Server that sits between clients and backend services
- **Load Balancer:** Distributes traffic across multiple backend servers
- **EntryPoints:** Ports where Traefik listens (80, 443)
- **Routers:** Define rules for routing traffic to services
- **Middleware:** Process requests (auth, rate limiting, headers)
- **Services:** Backend applications that receive traffic
- **ACME:** Automatic Certificate Management Environment (Let's Encrypt)
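Middleware chaining is easiest to see in miniature. The sketch below is illustrative only — Traefik is written in Go and its real middlewares are far richer — but it shows the core idea: each middleware wraps the next handler, processing the request on the way in and the response on the way out:

```python
# Toy middleware chain (not Traefik internals).
def backend(request):
    return f"200 OK for {request['path']}"

def auth(next_handler):
    # Stand-in for an auth middleware like authelia@docker
    def handler(request):
        if not request.get("authenticated"):
            return "401 Unauthorized"
        return next_handler(request)
    return handler

def add_header(next_handler):
    # Stand-in for a security-headers middleware
    def handler(request):
        return next_handler(request) + " [Strict-Transport-Security set]"
    return handler

# Chain order mirrors the router's middleware list: headers → auth → service
chain = add_header(auth(backend))
print(chain({"path": "/app", "authenticated": True}))  # → 200 OK for /app [Strict-Transport-Security set]
print(chain({"path": "/app"}))                         # → 401 Unauthorized [Strict-Transport-Security set]
```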
## Docker Configuration
### Complete Service Definition
```yaml
traefik:
image: traefik:v2.11
container_name: traefik
restart: unless-stopped
security_opt:
- no-new-privileges:true
networks:
- traefik-network
ports:
- "80:80" # HTTP (redirects to HTTPS)
- "443:443" # HTTPS
- "8080:8080" # Dashboard
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- /opt/stacks/core/traefik/traefik.yml:/traefik.yml:ro
- /opt/stacks/core/traefik/dynamic:/dynamic:ro
- /opt/stacks/core/traefik/acme.json:/acme.json
environment:
- DUCKDNS_TOKEN=${DUCKDNS_TOKEN}
labels:
- "traefik.enable=true"
- "traefik.http.routers.traefik.rule=Host(`traefik.${DOMAIN}`)"
- "traefik.http.routers.traefik.entrypoints=websecure"
- "traefik.http.routers.traefik.tls.certresolver=letsencrypt"
- "traefik.http.routers.traefik.tls.domains[0].main=${DOMAIN}"
- "traefik.http.routers.traefik.tls.domains[0].sans=*.${DOMAIN}"
- "traefik.http.routers.traefik.middlewares=authelia@docker"
- "traefik.http.routers.traefik.service=api@internal"
```
### Important Files
#### acme.json
Stores SSL certificates. **Must have 600 permissions:**
```bash
touch /opt/stacks/core/traefik/acme.json
chmod 600 /opt/stacks/core/traefik/acme.json
```
#### Dynamic Configuration
Add custom routes for non-Docker services:
```yaml
# /opt/stacks/core/traefik/dynamic/external.yml
http:
routers:
raspberry-pi:
rule: "Host(`pi.yourdomain.com`)"
entryPoints:
- websecure
service: raspberry-pi
tls:
certResolver: letsencrypt
services:
raspberry-pi:
loadBalancer:
servers:
- url: "http://192.168.1.50:80"
```
## Advanced Topics
### Middlewares
#### Authentication (Authelia)
```yaml
labels:
- "traefik.http.routers.myservice.middlewares=authelia@docker"
```
#### Rate Limiting
```yaml
http:
middlewares:
rate-limit:
rateLimit:
average: 100
burst: 50
```
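Traefik's `rateLimit` middleware is a token bucket: `average` tokens refill per second, and `burst` caps how many can accumulate. A toy model of those semantics (illustrative Python, not Traefik's code):

```python
import time

class TokenBucket:
    """Toy model of Traefik's rateLimit middleware: tokens refill at
    `average` per second, up to a maximum of `burst`."""

    def __init__(self, average: float, burst: int):
        self.rate = average
        self.capacity = burst
        self.tokens = float(burst)   # bucket starts full
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill based on elapsed time, capped at the burst size
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # Traefik would answer 429 Too Many Requests

bucket = TokenBucket(average=100, burst=50)
allowed = sum(bucket.allow() for _ in range(60))
print(allowed)  # ~50: the burst passes immediately, the rest wait for refill
```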
#### Headers
```yaml
http:
middlewares:
security-headers:
headers:
stsSeconds: 31536000
stsIncludeSubdomains: true
stsPreload: true
```
### Multiple Domains
```yaml
labels:
- "traefik.http.routers.myservice.rule=Host(`app.domain1.com`) || Host(`app.domain2.com`)"
```
### PathPrefix Routing
```yaml
labels:
- "traefik.http.routers.myservice.rule=Host(`domain.com`) && PathPrefix(`/app`)"
```
### WebSocket Support
WebSockets work automatically. For specific configuration:
```yaml
labels:
- "traefik.http.services.myservice.loadbalancer.sticky.cookie=true"
```
### Custom Certificates
```yaml
tls:
certificates:
- certFile: /path/to/cert.pem
keyFile: /path/to/key.pem
```
## Troubleshooting
### Check Traefik Dashboard
Access at `https://traefik.yourdomain.com` to see:
- All discovered services
- Active routes
- Certificate status
- Error logs
### Common Issues
#### Service Not Accessible
```bash
# Check if Traefik is running
docker ps | grep traefik
# View Traefik logs
docker logs traefik
# Check if service is on traefik-network
docker inspect service-name | grep Networks
# Verify labels
docker inspect service-name | grep traefik
```
#### Wildcard Certificates with DuckDNS
**IMPORTANT:** When using DuckDNS for DNS challenge, only ONE service should request certificates:
```yaml
# ✅ CORRECT: Only Traefik requests wildcard certificate
traefik:
labels:
- "traefik.http.routers.traefik.tls.certresolver=letsencrypt"
- "traefik.http.routers.traefik.tls.domains[0].main=${DOMAIN}"
- "traefik.http.routers.traefik.tls.domains[0].sans=*.${DOMAIN}"
# ✅ CORRECT: Other services just enable TLS
other-service:
labels:
- "traefik.http.routers.service.tls=true" # Uses wildcard cert
# ❌ WRONG: Multiple services requesting individual certs
other-service:
labels:
- "traefik.http.routers.service.tls.certresolver=letsencrypt" # Causes conflicts!
```
**Why?** DuckDNS can only maintain ONE TXT record at `_acme-challenge.yourdomain.duckdns.org`. Multiple simultaneous certificate requests will fail with "Incorrect TXT record" errors.
**Solution:** Use a wildcard certificate (`*.yourdomain.duckdns.org`) that covers all subdomains.
**Verify Certificate:**
```bash
# Check wildcard certificate is obtained
python3 -c "import json; d=json.load(open('/opt/stacks/core/traefik/acme.json')); print(f'Certificates: {len(d[\"letsencrypt\"][\"Certificates\"])}')"
# Test certificate being served
echo | openssl s_client -connect yourdomain.duckdns.org:443 -servername yourdomain.duckdns.org 2>/dev/null | openssl x509 -noout -subject -issuer
```
#### SSL Certificate Issues
```bash
# Check acme.json permissions
ls -la /opt/stacks/core/traefik/acme.json
# Should be: -rw------- (600)
# Check certificate generation logs
docker exec traefik tail -50 /var/log/traefik/traefik.log | grep -E "acme|certificate"
# Verify ports 80/443 are accessible
curl -I http://yourdomain.duckdns.org
curl -I https://yourdomain.duckdns.org
# Check Let's Encrypt rate limits
# Let's Encrypt allows 50 certificates per domain per week
```
#### Testing Environment Setup
When resetting test environments, use Let's Encrypt staging to avoid production rate limits:
```yaml
certificatesResolvers:
letsencrypt:
acme:
caServer: https://acme-staging-v02.api.letsencrypt.org/directory
# ... rest of config
```
**Staging certificates are not trusted by browsers** - they're for testing only. Switch back to production when deploying.
#### Certificate Conflicts During Testing
- **Preserve acme.json** across test environment resets to reuse certificates
- **Use staging server** for frequent testing to avoid rate limits
- **Wait 1+ hours** between certificate requests to allow DNS propagation
- **Ensure only one Traefik instance** performs DNS challenges (DuckDNS allows only one TXT record)
#### Router Port Forwarding
Ensure these ports are forwarded to your server:
- Port 80 (HTTP) → Required for Let's Encrypt challenges
- Port 443 (HTTPS) → Required for HTTPS traffic
#### acme.json Corruption
```bash
# Backup and recreate
cp /opt/stacks/core/traefik/acme.json /opt/stacks/core/traefik/acme.json.backup
rm /opt/stacks/core/traefik/acme.json
touch /opt/stacks/core/traefik/acme.json
chmod 600 /opt/stacks/core/traefik/acme.json
# Restart Traefik
docker restart traefik
```
### Debugging Commands
```bash
# Test service connectivity from Traefik
docker exec traefik ping -c 3 service-name
# Check Traefik version
docker exec traefik traefik version
# View the static configuration
docker exec traefik cat /traefik.yml
# Check Docker socket access
docker exec traefik ls -la /var/run/docker.sock
```
## Security Best Practices
1. **Protect Dashboard:** Always use Authelia or basic auth for dashboard
2. **Regular Updates:** Keep Traefik updated for security patches
3. **Limit Docker Socket:** Use Docker Socket Proxy for additional security
4. **Use Strong Ciphers:** Configure modern TLS settings
5. **Rate Limiting:** Implement rate limiting on public endpoints
6. **Security Headers:** Enable HSTS, CSP, and other security headers
7. **Monitor Logs:** Regularly review logs for suspicious activity
## Performance Optimization
- **Enable Compression:** Traefik can compress responses
- **Connection Pooling:** Reuse connections to backend services
- **HTTP/2:** Enabled by default for better performance
- **Caching:** Consider adding caching middleware
- **Resource Limits:** Set appropriate CPU/memory limits
## Comparison with Alternatives
### Traefik vs Nginx Proxy Manager
- **Traefik:** Automatic discovery, native Docker integration, config as code
- **NPM:** Web UI for configuration, simpler for beginners, more manual setup
### Traefik vs Caddy
- **Traefik:** Better for Docker, more features, larger community
- **Caddy:** Simpler config, automatic HTTPS, less Docker-focused
### Traefik vs HAProxy
- **Traefik:** Modern, dynamic, Docker-native
- **HAProxy:** More powerful, complex config, not Docker-native
## Summary
Traefik is the heart of your homelab's networking infrastructure. It:
- Automatically routes all web traffic to appropriate services
- Manages SSL certificates without manual intervention
- Provides a single entry point for all services
- Integrates seamlessly with Docker
- Scales from simple to complex setups
Understanding Traefik is crucial for managing your homelab effectively. Take time to explore the dashboard and understand how routing works - it will make troubleshooting and adding new services much easier.
## Related Services
- **[Authelia](authelia.md)** - SSO authentication that integrates with Traefik
- **[Sablier](sablier.md)** - Lazy loading that works with Traefik routing
- **[DuckDNS](duckdns.md)** - Dynamic DNS for SSL certificate validation
- **[Gluetun](gluetun.md)** - VPN routing that can work alongside Traefik
## See Also
- **[Traefik Labels Guide](../docker-guidelines.md#traefik-label-patterns)** - How to configure services for Traefik
- **[SSL Certificate Setup](../getting-started.md#notes-about-ssl-certificates-from-letsencrypt-with-duckdns)** - How SSL certificates work with Traefik
- **[External Host Proxying](../proxying-external-hosts.md)** - Route non-Docker services through Traefik

# Unmanic - Library Optimization
## Table of Contents
- [Overview](#overview)
- [What is Unmanic?](#what-is-unmanic)
- [Why Use Unmanic?](#why-use-unmanic)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Initial Setup](#initial-setup)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Media Optimization
**Docker Image:** [josh5/unmanic](https://hub.docker.com/r/josh5/unmanic)
**Default Stack:** `media-extended.yml`
**Web UI:** `https://unmanic.${DOMAIN}` or `http://SERVER_IP:8888`
**Authentication:** Optional
**Ports:** 8888
## What is Unmanic?
Unmanic is a library optimizer designed to automate file management and transcoding tasks. Unlike Tdarr (which processes entire libraries), Unmanic focuses on continuous optimization - it watches your media library and processes new files as they arrive. It's perfect for maintaining consistent quality and format standards automatically.
### Key Features
- **File Watcher:** Automatic processing of new files
- **Plugin System:** Extensible workflows
- **Hardware Acceleration:** GPU transcoding support
- **Container Conversion:** Automatic remuxing
- **Audio/Subtitle Management:** Track manipulation
- **Queue Management:** Priority handling
- **Statistics Dashboard:** Processing metrics
- **Multiple Libraries:** Independent configurations
- **Remote Workers:** Distributed processing
- **Pause/Resume:** Flexible scheduling
## Why Use Unmanic?
1. **Automatic Processing:** New files handled immediately
2. **Consistent Quality:** Enforce library standards
3. **Space Optimization:** Convert to efficient codecs
4. **Format Standardization:** All files same container
5. **Hardware Acceleration:** Fast GPU transcoding
6. **Plugin Ecosystem:** Pre-built workflows
7. **Simple Setup:** Easier than Tdarr for basic tasks
8. **Free & Open Source:** No cost
9. **Active Development:** Regular updates
10. **Docker Ready:** Easy deployment
## How It Works
```
New File Added to Library
        ↓
Unmanic Detects File (File Watcher)
        ↓
Applies Configured Plugins
        ↓
Queues for Processing
        ↓
Worker Processes File
        ↓
Output File Created
        ↓
Replaces Original
        ↓
Library Optimized
```
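The flow above amounts to a sequential pipeline: each enabled plugin transforms the file record in order, and the output of one plugin feeds the next. A toy sketch (hypothetical plugin names, not Unmanic's real plugin API):

```python
# Toy model of Unmanic's plugin pipeline (illustrative only).
def to_mkv(item):
    item["container"] = "mkv"       # remux step
    return item

def encode_h265(item):
    item["codec"] = "h265"          # transcode step
    return item

plugins = [to_mkv, encode_h265]     # order matters: plugins run sequentially

def process(path: str) -> dict:
    item = {"path": path, "container": "mp4", "codec": "h264"}
    for plugin in plugins:
        item = plugin(item)
    return item

result = process("/library/movies/example.mp4")
print(result["container"], result["codec"])  # → mkv h265
```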
## Configuration in AI-Homelab
### Directory Structure
```
/opt/stacks/media-management/unmanic/config/ # Configuration
/mnt/media/movies/ # Movie library
/mnt/media/tv/ # TV library
/tmp/unmanic/ # Temp files
```
### Environment Variables
```bash
# User permissions
PUID=1000
PGID=1000
# Timezone
TZ=America/New_York
```
## Official Resources
- **Website:** https://unmanic.app
- **Documentation:** https://docs.unmanic.app
- **GitHub:** https://github.com/Unmanic/unmanic
- **Discord:** https://discord.gg/wpShMzf
- **Forum:** https://forum.unmanic.app
## Educational Resources
### Videos
- [Unmanic Setup Guide](https://www.youtube.com/results?search_query=unmanic+setup)
- [Unmanic vs Tdarr](https://www.youtube.com/results?search_query=unmanic+vs+tdarr)
### Articles & Guides
- [Official Docs](https://docs.unmanic.app)
- [Plugin Library](https://unmanic.app/plugins)
### Concepts to Learn
- **File Watching:** Real-time monitoring
- **Plugin Workflows:** Sequential processing
- **Hardware Transcoding:** GPU acceleration
- **Container Remuxing:** Format conversion without re-encoding
- **Worker Pools:** Parallel processing
## Docker Configuration
### Complete Service Definition
```yaml
unmanic:
image: josh5/unmanic:latest
container_name: unmanic
restart: unless-stopped
networks:
- traefik-network
ports:
- "8888:8888"
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
volumes:
- /opt/stacks/media-management/unmanic/config:/config
- /mnt/media:/library
- /tmp/unmanic:/tmp/unmanic
devices:
- /dev/dri:/dev/dri # Intel QuickSync
labels:
- "traefik.enable=true"
- "traefik.http.routers.unmanic.rule=Host(`unmanic.${DOMAIN}`)"
- "traefik.http.routers.unmanic.entrypoints=websecure"
- "traefik.http.routers.unmanic.tls.certresolver=letsencrypt"
- "traefik.http.routers.unmanic.middlewares=authelia@docker"
- "traefik.http.services.unmanic.loadbalancer.server.port=8888"
```
### With NVIDIA GPU
```yaml
unmanic:
image: josh5/unmanic:latest
container_name: unmanic
runtime: nvidia
environment:
- NVIDIA_VISIBLE_DEVICES=all
- NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
# ... rest of config
```
## Initial Setup
### First Access
1. **Start Container:**
```bash
docker compose up -d unmanic
```
2. **Access Web UI:**
- Local: `http://SERVER_IP:8888`
- Domain: `https://unmanic.yourdomain.com`
3. **Add Library:**
- Settings → Libraries → Add Library
- Path: `/library/movies`
- Enable file watcher
4. **Install Plugins:**
- Plugins → Install desired plugins
- Configure plugin settings
5. **Configure Workers:**
- Settings → Workers
- Set worker count based on resources
### Popular Plugins
**Essential Plugins:**
1. **Video Encoder H265/HEVC**
- Convert to H.265
- Space savings
2. **Normalize Audio Levels**
- Consistent volume
- Prevents loud/quiet issues
3. **Remove Subtitle Streams**
- Clean unwanted subtitles
4. **Container Conversion to MKV**
- Standardize on MKV
5. **Video Resolution Limiter**
- Downscale 4K to 1080p if needed
**Install:**
- Plugins → Search → Install
- Configure in Library settings
### Library Configuration
**Settings → Libraries → Select Library:**
**General:**
- Enable file watcher: ✓
- Enable inotify: ✓ (real-time)
- Scan interval: 30 minutes (backup to file watcher)
**Plugins:**
- Add plugins in desired order
- Each plugin processes sequentially
**Example Flow:**
1. Container Conversion to MKV
2. Video Encoder H265
3. Normalize Audio
4. Remove Subtitle Streams
### Worker Configuration
**Settings → Workers:**
**Worker Count:**
- 1-2 for GPU encoding
- 2-4 for CPU encoding
- Don't overload system
**Worker Settings:**
- Enable hardware acceleration
- Set temp directory
- Configure logging
## Troubleshooting
### Unmanic Not Processing Files
```bash
# Check file watcher
# Settings → Libraries → Check "Enable file watcher"
# Check logs
docker logs unmanic | tail -50
# Manual trigger
# Dashboard → Rescan Library
# Check queue
# Dashboard → Should show pending tasks
# Verify permissions
docker exec unmanic ls -la /library/movies/
```
### Transcoding Fails
```bash
# Check worker logs
# Dashboard → Workers → View logs
# Check temp space
df -h /tmp/unmanic/
# Check FFmpeg
docker exec unmanic ffmpeg -version
# Check GPU access
docker exec unmanic ls /dev/dri/
# Common issues:
# - Insufficient temp space
# - GPU not available
# - File format unsupported
```
### High Resource Usage
```bash
# Reduce worker count
# Settings → Workers → Decrease count
# Check active workers
docker stats unmanic
# Pause processing
# Dashboard → Pause button
# Schedule processing
# Process during off-hours only
```
## Summary
Unmanic is a library optimizer offering:
- Automatic file processing
- Real-time file watching
- Plugin-based workflows
- Hardware acceleration
- Simple setup
- Free and open-source
**Perfect for:**
- Continuous optimization
- New file processing
- Format standardization
- Automated workflows
- Simple transcoding needs
**Key Points:**
- File watcher for automation
- Plugin system for flexibility
- Hardware acceleration support
- Simpler than Tdarr for basic needs
- Real-time processing
**Remember:**
- Use fast temp directory
- Monitor disk space
- Test plugins before bulk use
- Hardware acceleration recommended
- Works great with Sonarr/Radarr
Unmanic keeps your library optimized automatically!

# Uptime Kuma - Uptime Monitoring
## Table of Contents
- [Overview](#overview)
- [What is Uptime Kuma?](#what-is-uptime-kuma)
- [Why Use Uptime Kuma?](#why-use-uptime-kuma)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
## Overview
**Category:** Uptime Monitoring
**Docker Image:** [louislam/uptime-kuma](https://hub.docker.com/r/louislam/uptime-kuma)
**Default Stack:** `monitoring.yml`
**Web UI:** `http://SERVER_IP:3001`
**Ports:** 3001
## What is Uptime Kuma?
Uptime Kuma is a self-hosted monitoring tool similar to UptimeRobot. It monitors HTTP(S), TCP, DNS, ping, and many other service types, sending notifications when they go down. Its polished UI and built-in status pages make it ideal for monitoring your homelab services.
### Key Features
- **20+ Monitor Types:** HTTP, TCP, ping, DNS, Docker, etc.
- **Beautiful UI:** Modern, clean interface
- **Status Pages:** Public/private status pages
- **Notifications:** 90+ notification services
- **Multi-Language:** 40+ languages
- **Certificates:** SSL cert expiry monitoring
- **Responsive:** Mobile-friendly
- **Free & Open Source:** No limits
## Why Use Uptime Kuma?
1. **Self-Hosted:** No external dependencies
2. **Beautiful:** Best-in-class UI
3. **Free:** Unlike UptimeRobot paid tiers
4. **Comprehensive:** 20+ monitor types
5. **Status Pages:** Share uptime publicly
6. **Active Development:** Rapid updates
7. **Easy:** Simple to set up
## Configuration in AI-Homelab
```
/opt/stacks/monitoring/uptime-kuma/data/
└── kuma.db        # SQLite database
```
## Official Resources
- **GitHub:** https://github.com/louislam/uptime-kuma
- **Demo:** https://demo.uptime.kuma.pet
## Docker Configuration
```yaml
uptime-kuma:
image: louislam/uptime-kuma:latest
container_name: uptime-kuma
restart: unless-stopped
networks:
- traefik-network
ports:
- "3001:3001"
volumes:
- /opt/stacks/monitoring/uptime-kuma/data:/app/data
labels:
- "traefik.enable=true"
- "traefik.http.routers.uptime-kuma.rule=Host(`uptime.${DOMAIN}`)"
```
## Setup
1. **Start Container:**
```bash
docker compose up -d uptime-kuma
```
2. **Access UI:** `http://SERVER_IP:3001`
3. **Create Account:**
- First user becomes admin
- Set username and password
4. **Add Monitor:**
- "+ Add New Monitor"
- Type: HTTP(s), TCP, Ping, etc.
- URL: https://service.yourdomain.com
- Interval: 60 seconds
- Retry: 3 times
- Save
5. **Setup Notifications:**
- Settings → Notifications
- Add notification (Discord, Telegram, Email, etc.)
- Test notification
- Apply to monitors
6. **Create Status Page:**
- Status Pages → "+ New Status Page"
- Add monitors to display
- Customize theme
- Make public or private
- Share URL
## Monitor Types
- **HTTP(s):** Website monitoring
- **TCP:** Port monitoring
- **Ping:** ICMP ping
- **DNS:** DNS resolution
- **Docker Container:** Container status
- **Keyword:** Search for text in page
- **JSON Query:** Check JSON response
- **SSL Certificate:** Cert expiry
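A monitor is essentially a probe that runs on an interval and only reports "down" after the configured retries are exhausted. A toy sketch of that retry logic (illustrative; `probe` stands in for a real HTTP/TCP check):

```python
# Toy model of a monitor's retry behavior (not Uptime Kuma's code).
def check(probe, retries: int = 3) -> str:
    """Report 'up' if any attempt succeeds, 'down' after all retries fail."""
    for _ in range(retries):
        if probe():
            return "up"         # one success is enough — no alert fires
    return "down"               # all retries exhausted — notification sent

attempts = iter([False, False, True])  # fails twice, then succeeds
print(check(lambda: next(attempts)))   # → up
print(check(lambda: False))            # → down
```

This is why the "Retry: 3 times" setting above matters: transient blips do not trigger alerts, only sustained failures do.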
## Summary
Uptime Kuma is a self-hosted uptime monitor offering:
- 20+ monitor types
- Beautiful web interface
- Status pages
- 90+ notification services
- SSL certificate monitoring
- Docker container monitoring
- Free and open-source
**Perfect for:**
- Service uptime monitoring
- Public status pages
- SSL cert expiry alerts
- Homelab monitoring
- Replacing UptimeRobot
- Team uptime visibility
**Key Points:**
- Self-hosted monitoring
- Beautiful modern UI
- 20+ monitor types
- 90+ notification options
- Public/private status pages
- First user = admin
- Very active development
**Remember:**
- Create admin account first
- Add all critical services
- Setup notifications
- Create status page for visibility
- Monitor SSL certificates
- Set appropriate check intervals
- Test notifications work
Uptime Kuma keeps your services monitored!

# Vaultwarden - Password Manager
## Table of Contents
- [Overview](#overview)
- [What is Vaultwarden?](#what-is-vaultwarden)
- [Why Use Vaultwarden?](#why-use-vaultwarden)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Setup](#setup)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Password Management
**Docker Image:** [vaultwarden/server](https://hub.docker.com/r/vaultwarden/server)
**Default Stack:** `utilities.yml`
**Web UI:** `https://vaultwarden.${DOMAIN}` or `http://SERVER_IP:8343`
**Client Apps:** Bitwarden apps (iOS, Android, desktop, browser extensions)
**Ports:** 8343
## What is Vaultwarden?
Vaultwarden (formerly Bitwarden_RS) is an unofficial Bitwarden server implementation written in Rust. It's fully compatible with official Bitwarden clients but designed for self-hosting with much lower resource requirements. Store all your passwords, credit cards, secure notes, and identities encrypted on your own server.
### Key Features
- **Bitwarden Compatible:** Use official apps
- **End-to-End Encryption:** Zero-knowledge
- **Cross-Platform:** Windows, Mac, Linux, iOS, Android
- **Browser Extensions:** Chrome, Firefox, Safari, Edge
- **Password Generator:** Strong password creation
- **2FA Support:** TOTP, U2F, Duo
- **Secure Notes:** Encrypted notes storage
- **File Attachments:** Store encrypted files
- **Collections:** Organize passwords
- **Organizations:** Family/team sharing
- **Low Resource:** <100MB RAM
- **Free & Open Source:** No premium required
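The built-in password generator works like any cryptographically secure generator: pick characters from an alphabet using a CSPRNG. A minimal sketch of the idea (illustrative, not Vaultwarden's code):

```python
import secrets
import string

def generate_password(length: int = 20, use_symbols: bool = True) -> str:
    """Cryptographically secure random password (toy version of what
    a generator like Vaultwarden's does)."""
    alphabet = string.ascii_letters + string.digits
    if use_symbols:
        alphabet += "!@#$%^&*"
    # secrets (not random) — suitable for security-sensitive values
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. 'r8@Kp2...' — different every call
```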
## Why Use Vaultwarden?
1. **Self-Hosted:** Control your passwords
2. **Free Premium Features:** All features included
3. **Privacy:** Passwords never leave your server
4. **Zero-Knowledge:** Only you can decrypt
5. **Lightweight:** Runs on anything
6. **Bitwarden Apps:** Use official clients
7. **Family Sharing:** Free organizations
8. **Open Source:** Auditable security
## Configuration in AI-Homelab
```
/opt/stacks/utilities/vaultwarden/data/
├── db.sqlite3     # Password database (encrypted)
├── attachments/   # File attachments
├── sends/         # Bitwarden Send files
└── config.json    # Configuration
```
## Official Resources
- **GitHub:** https://github.com/dani-garcia/vaultwarden
- **Wiki:** https://github.com/dani-garcia/vaultwarden/wiki
- **Bitwarden Apps:** https://bitwarden.com/download/
## Educational Resources
### YouTube Videos
1. **Techno Tim - Vaultwarden Setup**
- https://www.youtube.com/watch?v=yzjgD3hIPtE
- Complete setup guide
- Browser extension configuration
- Organization setup
2. **DB Tech - Bitwarden RS (Vaultwarden)**
- https://www.youtube.com/watch?v=2IceFM4BZqk
- Docker deployment
- App configuration
- Security best practices
3. **Wolfgang's Channel - Vaultwarden Security**
- https://www.youtube.com/watch?v=ViR021iiR5Y
- Security hardening
- 2FA setup
- Backup strategies
### Articles
1. **Official Wiki:** https://github.com/dani-garcia/vaultwarden/wiki
2. **Comparison:** https://github.com/dani-garcia/vaultwarden/wiki/Which-container-image-to-use
## Docker Configuration
```yaml
vaultwarden:
image: vaultwarden/server:latest
container_name: vaultwarden
restart: unless-stopped
networks:
- traefik-network
ports:
- "8343:80"
environment:
- DOMAIN=https://vaultwarden.${DOMAIN}
- SIGNUPS_ALLOWED=true # Disable after creating accounts
- INVITATIONS_ALLOWED=true
- SHOW_PASSWORD_HINT=false
- WEBSOCKET_ENABLED=true
- SENDS_ALLOWED=true
- EMERGENCY_ACCESS_ALLOWED=true
volumes:
- /opt/stacks/utilities/vaultwarden/data:/data
labels:
- "traefik.enable=true"
- "traefik.http.routers.vaultwarden.rule=Host(`vaultwarden.${DOMAIN}`)"
- "traefik.http.routers.vaultwarden.entrypoints=websecure"
- "traefik.http.routers.vaultwarden.tls.certresolver=letsencrypt"
- "traefik.http.services.vaultwarden.loadbalancer.server.port=80"
```
## Setup
1. **Start Container:**
```bash
docker compose up -d vaultwarden
```
2. **Access Web Vault:** `https://vaultwarden.yourdomain.com`
3. **Create Account:**
- Click "Create Account"
- Email (for account identification)
- Strong master password (REMEMBER THIS!)
- Master password cannot be recovered!
- Hint (optional, stored on the server)
4. **Disable Public Signups:**
After creating accounts, edit docker-compose.yml:
```yaml
- SIGNUPS_ALLOWED=false
```
Then: `docker compose up -d vaultwarden`
5. **Setup Browser Extension:**
- Install Bitwarden extension
- Settings → Server URL → Custom
- `https://vaultwarden.yourdomain.com`
- Log in with your account
6. **Setup Mobile Apps:**
- Download Bitwarden app
- Before login, tap settings gear
- Server URL → Custom
- `https://vaultwarden.yourdomain.com`
- Log in
7. **Enable 2FA (Recommended):**
- Web Vault → Settings → Two-step Login
- Authenticator App (Free) or
- Duo, YubiKey, Email (all free in Vaultwarden)
- Scan QR code with authenticator
- Save recovery code!
## Troubleshooting
### Can't Connect from Apps
```bash
# Check domain is set
docker exec vaultwarden cat /data/config.json | grep domain
# Verify HTTPS working
curl -I https://vaultwarden.yourdomain.com
# Check logs
docker logs vaultwarden | tail -20
```
### Forgot Master Password
**There is NO recovery!** Master password cannot be reset. Your vault is encrypted with your master password. Without it, the data cannot be decrypted.
**Prevention:**
- Write master password somewhere safe
- Use a memorable but strong passphrase
- Consider password hint (stored on server)
- Print recovery codes for 2FA
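The reason recovery is impossible: the vault's encryption key is derived from the master password itself (Bitwarden clients use PBKDF2-SHA256 salted with the account email; the iteration count below is illustrative). The server stores only ciphertext, so without the password the key cannot be reconstructed:

```python
import hashlib

def derive_key(master_password: str, email: str, iterations: int = 600_000) -> bytes:
    # The vault key exists only as a function of the master password —
    # the server never sees the password or the derived key.
    return hashlib.pbkdf2_hmac(
        "sha256", master_password.encode(), email.encode(), iterations
    )

key = derive_key("correct horse battery staple", "user@example.com")
print(len(key))  # → 32 (a 256-bit key; unrecoverable without the exact password)
```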
### Websocket Issues
```bash
# Ensure websocket enabled
docker inspect vaultwarden | grep WEBSOCKET
# Should show: WEBSOCKET_ENABLED=true
```
### Backup Vault
```bash
# Stop container
docker stop vaultwarden
# Backup data directory
tar -czf vaultwarden-backup-$(date +%Y%m%d).tar.gz \
/opt/stacks/utilities/vaultwarden/data/
# Start container
docker start vaultwarden
# Or use Backrest (default) for automatic backups
```
## Summary
Vaultwarden is your self-hosted password manager offering:
- Bitwarden-compatible server
- All premium features free
- End-to-end encryption
- Cross-platform apps
- Browser extensions
- Family/team organizations
- Secure note storage
- File attachments
- Very lightweight
- Free and open-source
**Perfect for:**
- Password management
- Family password sharing
- Self-hosted security
- Privacy-conscious users
- Replacing LastPass/1Password
- Secure note storage
**Key Points:**
- Compatible with Bitwarden clients
- Master password CANNOT be recovered
- Disable signups after creating accounts
- Enable 2FA for security
- Regular backups critical
- Set custom server URL in apps
- HTTPS required for full functionality
**Remember:**
- Master password = cannot recover
- Write it down somewhere safe
- Enable 2FA immediately
- Disable public signups
- Regular backups essential
- Use official Bitwarden apps
- HTTPS required for apps
Vaultwarden gives you control of your passwords!

# Watchtower - Automated Container Updates
## Table of Contents
- [Overview](#overview)
- [What is Watchtower?](#what-is-watchtower)
- [Why Use Watchtower?](#why-use-watchtower)
- [How It Works](#how-it-works)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Educational Resources](#educational-resources)
- [Docker Configuration](#docker-configuration)
- [Controlling Updates](#controlling-updates)
- [Advanced Topics](#advanced-topics)
- [Troubleshooting](#troubleshooting)
## Overview
**Category:** Infrastructure Automation
**Docker Image:** [containrrr/watchtower](https://hub.docker.com/r/containrrr/watchtower)
**Default Stack:** `infrastructure.yml`
**Web UI:** None (runs as automated service)
**Default Behavior:** Checks for updates daily at 4 AM
## What is Watchtower?
Watchtower is an automated Docker container update service that monitors running containers and automatically updates them when new images are available. It pulls new images, stops old containers, and starts new ones while preserving volumes and configuration.
### Key Features
- **Automatic Updates:** Keeps containers up-to-date automatically
- **Schedule Control:** Cron-based update scheduling
- **Selective Monitoring:** Choose which containers to update
- **Notifications:** Email, Slack, Discord, etc. on updates
- **Update Strategies:** Rolling updates, one-time runs
- **Cleanup:** Automatically removes old images
- **Dry Run Mode:** Test without actually updating
- **Label-Based Control:** Fine-grained update control per container
- **Near-Zero Downtime:** Fast container recreation (a brief restart is unavoidable)
## Why Use Watchtower?
1. **Security:** Automatic security patches
2. **Convenience:** No manual image updates needed
3. **Consistency:** All containers stay updated
4. **Time Savings:** Automates tedious update process
5. **Minimal Downtime:** Fast container recreation
6. **Safe Updates:** Preserves volumes and configuration
7. **Rollback Support:** Keep old images for fallback
8. **Notification:** Get notified of updates
## How It Works
```
Watchtower (scheduled check)
        │
        ▼
Check Docker Hub for new images
        │
        ▼
Compare with local image digests
        │
        ▼
If new version found:
  ├─ Pull new image
  ├─ Stop old container
  ├─ Remove old container
  ├─ Create new container (same config)
  ├─ Start new container
  └─ Remove old image (optional)
```
### Update Process
1. **Scheduled Check:** Watchtower wakes up (e.g., 4 AM daily)
2. **Scan Containers:** Finds all monitored containers
3. **Check Registries:** Queries Docker Hub/registries for new images
4. **Compare Digests:** Checks if image hash changed
5. **Pull New Image:** Downloads latest version
6. **Recreate Container:** Stops old, starts new with same config
7. **Cleanup:** Removes old images (if configured)
8. **Notify:** Sends notification (if configured)
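Steps 3–4 boil down to a digest comparison. Here is a minimal sketch of that logic in shell — the two `get_*_digest` helpers return hard-coded stand-in values for illustration; in reality the local digest comes from `docker image inspect --format '{{index .RepoDigests 0}}'` and the remote one from a registry manifest query:

```shell
# Stand-ins for the real lookups (hard-coded for illustration only)
get_local_digest()  { echo "sha256:aaa111"; }
get_remote_digest() { echo "sha256:bbb222"; }

# An update is needed exactly when the two digests differ
needs_update() {
  [ "$(get_local_digest "$1")" != "$(get_remote_digest "$1")" ]
}

if needs_update "nginx:latest"; then
  echo "update available for nginx:latest"
fi
```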
### What Gets Preserved
**Preserved:**
- Volumes and data
- Networks
- Environment variables
- Labels
- Port mappings
- Restart policy
**Not Preserved:**
- Running processes inside container
- Temporary files in container filesystem
- In-memory data
## Configuration in AI-Homelab
### Directory Structure
```
# Watchtower doesn't need persistent storage
# All configuration via environment variables
```
### Environment Variables
```bash
# Schedule (cron format)
WATCHTOWER_SCHEDULE=0 0 4 * * * # Daily at 4 AM
# Cleanup old images
WATCHTOWER_CLEANUP=true
# Include stopped containers
WATCHTOWER_INCLUDE_STOPPED=false
# Include restarting containers
WATCHTOWER_INCLUDE_RESTARTING=false
# Debug logging
WATCHTOWER_DEBUG=false
# Notifications (optional)
WATCHTOWER_NOTIFICATIONS=email
WATCHTOWER_NOTIFICATION_EMAIL_FROM=watchtower@yourdomain.com
WATCHTOWER_NOTIFICATION_EMAIL_TO=admin@yourdomain.com
WATCHTOWER_NOTIFICATION_EMAIL_SERVER=smtp.gmail.com
WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT=587
WATCHTOWER_NOTIFICATION_EMAIL_SERVER_USER=your-email@gmail.com
WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PASSWORD=your-app-password
```
## Official Resources
- **GitHub:** https://github.com/containrrr/watchtower
- **Docker Hub:** https://hub.docker.com/r/containrrr/watchtower
- **Documentation:** https://containrrr.dev/watchtower/
- **Arguments Reference:** https://containrrr.dev/watchtower/arguments/
## Educational Resources
### Videos
- [Watchtower - Auto Update Docker Containers (Techno Tim)](https://www.youtube.com/watch?v=5lP_pdjcVMo)
- [Keep Docker Containers Updated Automatically](https://www.youtube.com/watch?v=SZ-wprcMYGY)
- [Watchtower Setup Tutorial (DB Tech)](https://www.youtube.com/watch?v=Ejtzf-Y8Vac)
### Articles & Guides
- [Watchtower Official Documentation](https://containrrr.dev/watchtower/)
- [Docker Update Strategies](https://containrrr.dev/watchtower/arguments/)
- [Notification Configuration](https://containrrr.dev/watchtower/notifications/)
- [Cron Schedule Examples](https://crontab.guru/)
### Concepts to Learn
- **Container Updates:** How Docker updates work
- **Image Digests:** SHA256 hashes for images
- **Cron Scheduling:** Time-based task automation
- **Rolling Updates:** Zero-downtime deployment
- **Image Tags vs Digests:** Differences and implications
- **Semantic Versioning:** Understanding version numbers
- **Docker Labels:** Metadata for containers
## Docker Configuration
### Complete Service Definition
```yaml
watchtower:
image: containrrr/watchtower:latest
container_name: watchtower
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock
environment:
- WATCHTOWER_SCHEDULE=0 0 4 * * * # 4 AM daily
- WATCHTOWER_CLEANUP=true
- WATCHTOWER_INCLUDE_STOPPED=false
- WATCHTOWER_INCLUDE_RESTARTING=true
- WATCHTOWER_ROLLING_RESTART=true
- WATCHTOWER_TIMEOUT=30s
- TZ=America/New_York
# Optional: Notifications
# - WATCHTOWER_NOTIFICATIONS=email
# - WATCHTOWER_NOTIFICATION_EMAIL_FROM=watchtower@yourdomain.com
# - WATCHTOWER_NOTIFICATION_EMAIL_TO=admin@yourdomain.com
```
### Schedule Formats
```bash
# Cron format: second minute hour day month weekday
# * * * * * * = every second
# 0 * * * * * = every minute
# 0 0 * * * * = every hour
# 0 0 4 * * * = daily at 4 AM
# 0 0 4 * * 0 = weekly on Sunday at 4 AM
# 0 0 4 1 * * = monthly on 1st at 4 AM
# Common schedules:
WATCHTOWER_SCHEDULE="0 0 4 * * *" # Daily 4 AM
WATCHTOWER_SCHEDULE="0 0 4 * * 0" # Weekly Sunday 4 AM
WATCHTOWER_SCHEDULE="0 0 2 1,15 * *" # 1st and 15th at 2 AM
WATCHTOWER_SCHEDULE="0 0 */6 * * *" # Every 6 hours
```
Use https://crontab.guru/ to test cron expressions — note that it uses the standard 5-field format, so drop Watchtower's leading seconds field when testing there.
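Because Watchtower expects six fields (with a leading seconds field) while standard cron tools assume five, a mis-counted schedule is a common mistake. A quick field-count sanity check — a sketch, not a full cron parser:

```shell
# Count whitespace-separated fields; Watchtower's cron parser expects
# six: second minute hour day month weekday.
check_schedule() {
  set -f            # disable globbing so "*" fields are not expanded
  # word splitting of $1 is intentional here
  set -- $1
  set +f
  if [ "$#" -eq 6 ]; then
    echo "ok: 6 fields"
  else
    echo "error: expected 6 fields, got $#"
  fi
}

check_schedule "0 0 4 * * *"   # ok: 6 fields
check_schedule "0 4 * * *"     # error: expected 6 fields, got 5
```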
## Controlling Updates
### Label-Based Control
**Enable/Disable per Container:**
```yaml
myservice:
image: myapp:latest
labels:
# Enable updates (default)
- "com.centurylinklabs.watchtower.enable=true"
# OR disable updates
- "com.centurylinklabs.watchtower.enable=false"
```
### Monitor Specific Containers
Run Watchtower with container names:
```bash
docker run -d \
--name watchtower \
-v /var/run/docker.sock:/var/run/docker.sock \
containrrr/watchtower \
container1 container2 container3
```
### Exclude Containers
```yaml
watchtower:
image: containrrr/watchtower
environment:
- WATCHTOWER_LABEL_ENABLE=true # Only update labeled containers
```
Then add label to containers you want updated:
```yaml
myservice:
labels:
- "com.centurylinklabs.watchtower.enable=true"
```
### Update Scope
**Scope Options:**
```yaml
environment:
# Only update containers with specific scope
- WATCHTOWER_SCOPE=myscope
# Then label containers:
myservice:
labels:
- "com.centurylinklabs.watchtower.scope=myscope"
```
## Advanced Topics
### Notification Configuration
#### Email Notifications
```yaml
environment:
- WATCHTOWER_NOTIFICATIONS=email
- WATCHTOWER_NOTIFICATION_EMAIL_FROM=watchtower@yourdomain.com
- WATCHTOWER_NOTIFICATION_EMAIL_TO=admin@yourdomain.com
- WATCHTOWER_NOTIFICATION_EMAIL_SERVER=smtp.gmail.com
- WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT=587
- WATCHTOWER_NOTIFICATION_EMAIL_SERVER_USER=your-email@gmail.com
- WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PASSWORD=your-app-password
- WATCHTOWER_NOTIFICATION_EMAIL_SERVER_TLS_SKIP_VERIFY=false
```
#### Slack Notifications
```yaml
environment:
- WATCHTOWER_NOTIFICATIONS=slack
- WATCHTOWER_NOTIFICATION_SLACK_HOOK_URL=https://hooks.slack.com/services/YOUR/WEBHOOK/URL
- WATCHTOWER_NOTIFICATION_SLACK_IDENTIFIER=watchtower
```
#### Discord Notifications
```yaml
environment:
- WATCHTOWER_NOTIFICATIONS=discord
- WATCHTOWER_NOTIFICATION_URL=https://discord.com/api/webhooks/YOUR/WEBHOOK
```
### One-Time Run
Run once and exit (useful for testing):
```bash
docker run --rm \
-v /var/run/docker.sock:/var/run/docker.sock \
containrrr/watchtower \
--run-once
```
### Dry Run Mode
Test without actually updating:
```bash
docker run --rm \
-v /var/run/docker.sock:/var/run/docker.sock \
containrrr/watchtower \
--run-once \
--debug \
--dry-run
```
### Rolling Restart
Update containers one at a time (safer):
```yaml
environment:
- WATCHTOWER_ROLLING_RESTART=true
```
### Monitor Only Mode
Check for updates but don't apply:
```yaml
environment:
- WATCHTOWER_MONITOR_ONLY=true
- WATCHTOWER_NOTIFICATIONS=email # Get notified of available updates
```
### Custom Update Commands
Run commands after update:
```yaml
myservice:
labels:
- "com.centurylinklabs.watchtower.lifecycle.post-update=/scripts/post-update.sh"
```
### Private Registry
Update containers from private registries:
```bash
# Docker Hub
docker login
# Private registry
docker login registry.example.com
# Watchtower will use stored credentials
```
Or configure explicitly:
```yaml
environment:
- REPO_USER=username
- REPO_PASS=password
```
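Alternatively, the Watchtower documentation supports mounting the host's Docker credential file read-only, so credentials stored by `docker login` are reused. The path below assumes you logged in as root — adjust to your user's home directory otherwise:

```yaml
watchtower:
  image: containrrr/watchtower:latest
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /root/.docker/config.json:/config.json:ro   # stored registry credentials
```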
## Troubleshooting
### Check Watchtower Status
```bash
# View logs
docker logs watchtower
# Follow logs in real-time
docker logs -f watchtower
# Check last run
docker logs watchtower | grep "Session done"
```
### Force Manual Update
```bash
# Run Watchtower once
docker run --rm \
-v /var/run/docker.sock:/var/run/docker.sock \
containrrr/watchtower \
--run-once \
--debug
```
### Container Not Updating
```bash
# Check if container is monitored
docker logs watchtower | grep container-name
# Verify image has updates
docker pull image:tag
# Check for update labels
docker inspect container-name | grep watchtower
# Force update specific container
docker run --rm \
-v /var/run/docker.sock:/var/run/docker.sock \
containrrr/watchtower \
--run-once \
container-name
```
### Updates Breaking Services
```bash
# Stop Watchtower
docker stop watchtower
# Rollback manually
docker pull image:old-tag
docker compose up -d --force-recreate service-name
# Or restore from backup
# If you kept old images (no cleanup):
docker images | grep image-name
docker tag image:old-sha image:latest
docker compose up -d service-name
```
### High Resource Usage
```bash
# Check Watchtower stats
docker stats watchtower
# Increase check interval
# Change from hourly to daily:
WATCHTOWER_SCHEDULE="0 0 4 * * *"
# Enable cleanup to free disk space
WATCHTOWER_CLEANUP=true
```
### Notification Not Working
```bash
# Test SMTP reachability from the host (the Watchtower image has no
# shell, so you cannot `docker exec` into it)
nc -zv smtp.gmail.com 587
# Check credentials
docker logs watchtower | grep -i notification
# Verify SMTP settings
# For Gmail, use App Password, not account password
```
## Best Practices
### Recommended Configuration
```yaml
watchtower:
image: containrrr/watchtower:latest
container_name: watchtower
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock
environment:
# Daily at 4 AM
- WATCHTOWER_SCHEDULE=0 0 4 * * *
# Clean up old images
- WATCHTOWER_CLEANUP=true
# Rolling restart (safer)
- WATCHTOWER_ROLLING_RESTART=true
# 30 second timeout
- WATCHTOWER_TIMEOUT=30s
# Enable notifications
- WATCHTOWER_NOTIFICATIONS=email
- WATCHTOWER_NOTIFICATION_EMAIL_FROM=watchtower@yourdomain.com
- WATCHTOWER_NOTIFICATION_EMAIL_TO=admin@yourdomain.com
- TZ=America/New_York
```
### Update Strategy
1. **Start Conservative:**
- Weekly updates initially
- Monitor for issues
- Gradually increase frequency
2. **Use Labels:**
- Disable updates for critical services
- Update testing services first
- Separate production from testing
3. **Enable Notifications:**
- Know when updates happen
- Track update history
- Alert on failures
4. **Test Regularly:**
- Dry run before enabling
- Manual one-time runs
- Verify services after updates
5. **Backup Strategy:**
- Keep old images (disable cleanup initially)
- Backup volumes before major updates
- Document rollback procedures
### Selective Updates
```yaml
# Core infrastructure: Manual updates only
traefik:
labels:
- "com.centurylinklabs.watchtower.enable=false"
authelia:
labels:
- "com.centurylinklabs.watchtower.enable=false"
# Media services: Auto-update
plex:
labels:
- "com.centurylinklabs.watchtower.enable=true"
sonarr:
labels:
- "com.centurylinklabs.watchtower.enable=true"
```
## Security Considerations
1. **Docker Socket Access:** Watchtower needs full Docker access (security risk)
2. **Automatic Updates:** Can break things unexpectedly
3. **Test Environment:** Test updates in dev before production
4. **Backup First:** Always backup critical services
5. **Monitor Logs:** Watch for failed updates
6. **Rollback Plan:** Know how to revert updates
7. **Pin Versions:** Use specific tags for critical services
8. **Update Windows:** Schedule during low-usage times
9. **Notification:** Always enable notifications
10. **Review Updates:** Check changelog before auto-updating
## Summary
Watchtower automates container updates, providing:
- Automatic security patches
- Consistent update schedule
- Minimal manual intervention
- Clean, simple configuration
- Notification support
- Label-based control
**Use Watchtower for:**
- Non-critical services
- Media apps (Plex, Sonarr, etc.)
- Utilities and tools
- Development environments
**Avoid for:**
- Core infrastructure (Traefik, Authelia)
- Databases (require careful updates)
- Custom applications
- Services requiring migration steps
**Remember:**
- Start with conservative schedule
- Enable cleanup after testing
- Use labels for fine control
- Monitor notifications
- Have rollback plan
- Test in dry-run mode first
- Backup before major updates

# WordPress - Content Management System
## Table of Contents
- [Overview](#overview)
- [What is WordPress?](#what-is-wordpress)
- [Why Use WordPress?](#why-use-wordpress)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
## Overview
**Category:** Website/Blog Platform
**Docker Image:** [wordpress](https://hub.docker.com/_/wordpress)
**Default Stack:** `productivity.yml`
**Web UI:** `https://wordpress.${DOMAIN}` or `http://SERVER_IP:8082`
**Database:** MariaDB (wordpress-db container)
**Ports:** 8082
## What is WordPress?
WordPress is the world's most popular content management system (CMS), powering 40%+ of all websites. While often associated with blogs, it's a full-featured CMS capable of building any type of website - from simple blogs to complex e-commerce sites.
### Key Features
- **Easy Content Editing:** WYSIWYG editor
- **10,000+ Themes:** Customizable designs
- **58,000+ Plugins:** Extend functionality
- **Media Management:** Photos, videos, files
- **SEO Friendly:** Built-in optimization
- **Multi-User:** Different permission levels
- **Mobile Responsive:** Most themes mobile-ready
- **Gutenberg Editor:** Block-based content
- **E-commerce:** WooCommerce plugin
- **Free & Open Source:** Core is free
## Why Use WordPress?
1. **Industry Standard:** Most popular CMS
2. **Easy to Use:** Non-technical friendly
3. **Huge Ecosystem:** Themes and plugins
4. **Community:** Massive support community
5. **Self-Hosted:** Own your content
6. **SEO:** Excellent SEO capabilities
7. **Flexible:** Any type of website
8. **Free Core:** Pay only for premium add-ons
## Configuration in AI-Homelab
```
/opt/stacks/productivity/wordpress/html/ # WordPress files
/opt/stacks/productivity/wordpress-db/data/ # MariaDB database
```
## Official Resources
- **Website:** https://wordpress.org
- **Documentation:** https://wordpress.org/support
- **Themes:** https://wordpress.org/themes
- **Plugins:** https://wordpress.org/plugins
## Docker Configuration
```yaml
wordpress-db:
image: mariadb:latest
container_name: wordpress-db
restart: unless-stopped
networks:
- traefik-network
environment:
- MYSQL_ROOT_PASSWORD=${WP_DB_ROOT_PASSWORD}
- MYSQL_DATABASE=wordpress
- MYSQL_USER=wordpress
- MYSQL_PASSWORD=${WP_DB_PASSWORD}
volumes:
- /opt/stacks/productivity/wordpress-db/data:/var/lib/mysql
wordpress:
image: wordpress:latest
container_name: wordpress
restart: unless-stopped
networks:
- traefik-network
ports:
- "8082:80"
environment:
- WORDPRESS_DB_HOST=wordpress-db
- WORDPRESS_DB_USER=wordpress
- WORDPRESS_DB_PASSWORD=${WP_DB_PASSWORD}
- WORDPRESS_DB_NAME=wordpress
volumes:
- /opt/stacks/productivity/wordpress/html:/var/www/html
depends_on:
- wordpress-db
labels:
- "traefik.enable=true"
- "traefik.http.routers.wordpress.rule=Host(`wordpress.${DOMAIN}`)"
```
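With the containers above, the database can be dumped with one command — a sketch: `mariadb-dump` ships in the `mariadb` image, and `MYSQL_PASSWORD` is already set inside the container by the compose file:

```bash
# Dump the WordPress database to a dated SQL file on the host
docker exec wordpress-db \
  sh -c 'exec mariadb-dump -u wordpress -p"$MYSQL_PASSWORD" wordpress' \
  > wordpress-db-$(date +%Y%m%d).sql
```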
## Summary
WordPress is the world's leading CMS offering:
- Easy content management
- 58,000+ plugins
- 10,000+ themes
- SEO optimization
- Multi-user support
- E-commerce ready
- Mobile responsive
- Free and open-source
**Perfect for:**
- Personal blogs
- Business websites
- Portfolio sites
- E-commerce (WooCommerce)
- News sites
- Knowledge bases
- Any public website
**Key Points:**
- Requires MariaDB database
- Install security plugins
- Regular updates critical
- Backup database regularly
- Use strong admin password
- Consider security hardening
- Performance caching recommended
**Remember:**
- Keep WordPress updated
- Backup regularly
- Use security plugins (Wordfence)
- Strong passwords essential
- Limit login attempts
- SSL certificate recommended
- Performance plugins help
WordPress powers your website with endless possibilities!

# Zigbee2MQTT - Zigbee Bridge
## Table of Contents
- [Overview](#overview)
- [What is Zigbee2MQTT?](#what-is-zigbee2mqtt)
- [Why Use Zigbee2MQTT?](#why-use-zigbee2mqtt)
- [Configuration in AI-Homelab](#configuration-in-ai-homelab)
- [Official Resources](#official-resources)
- [Docker Configuration](#docker-configuration)
- [Setup](#setup)
## Overview
**Category:** Zigbee Gateway
**Docker Image:** [koenkk/zigbee2mqtt](https://hub.docker.com/r/koenkk/zigbee2mqtt)
**Default Stack:** `homeassistant.yml`
**Web UI:** `http://SERVER_IP:8080`
**Requires:** USB Zigbee coordinator
**Ports:** 8080
## What is Zigbee2MQTT?
Zigbee2MQTT is a bridge between Zigbee devices and MQTT. It allows you to use Zigbee devices with Home Assistant without proprietary hubs (Philips Hue bridge, IKEA gateway, etc.). All you need is an inexpensive (~$20) USB Zigbee coordinator stick.
### Key Features
- **2500+ Supported Devices:** Huge compatibility
- **No Cloud:** Local control
- **No Hubs Needed:** Just USB stick
- **OTA Updates:** Update device firmware
- **Groups:** Control multiple devices
- **Scenes:** Save device states
- **Touchlink:** Reset devices
- **Web UI:** Visual management
- **Free & Open Source:** No subscriptions
## Why Use Zigbee2MQTT?
1. **No Proprietary Hubs:** Save $50+ per brand
2. **Local Control:** No internet required
3. **Any Brand:** Mix Philips, IKEA, Aqara, etc.
4. **More Features:** Than manufacturer apps
5. **Device Updates:** OTA firmware updates
6. **Privacy:** No cloud reporting
7. **Open Source:** Community supported
## Configuration in AI-Homelab
```
/opt/stacks/homeassistant/zigbee2mqtt/
└── data/
    ├── configuration.yaml
    ├── database.db
    ├── devices.yaml
    └── groups.yaml
```
### Zigbee Coordinators
**Recommended:**
- **Sonoff Zigbee 3.0 USB Plus** ($20) - Works great
- **ConBee II** ($40) - Premium option
- **CC2652P** ($25) - Powerful, longer range
**Not Recommended:**
- CC2531 - Old, limited, unreliable
## Official Resources
- **Website:** https://www.zigbee2mqtt.io
- **Supported Devices:** https://www.zigbee2mqtt.io/supported-devices
- **Documentation:** https://www.zigbee2mqtt.io/guide
## Docker Configuration
```yaml
zigbee2mqtt:
image: koenkk/zigbee2mqtt:latest
container_name: zigbee2mqtt
restart: unless-stopped
networks:
- traefik-network
ports:
- "8080:8080"
environment:
- TZ=America/New_York
volumes:
- /opt/stacks/homeassistant/zigbee2mqtt/data:/app/data
- /run/udev:/run/udev:ro
devices:
- /dev/ttyUSB0:/dev/ttyUSB0
```
## Setup
1. **Find USB Device:**
```bash
ls -la /dev/ttyUSB*
# or
ls -la /dev/ttyACM*
# Prefer the stable path under /dev/serial/by-id/ (it survives reboots):
ls -la /dev/serial/by-id/
```
2. **Configure:**
Edit `/opt/stacks/homeassistant/zigbee2mqtt/data/configuration.yaml`:
```yaml
homeassistant: true
permit_join: false
mqtt:
base_topic: zigbee2mqtt
server: mqtt://mosquitto:1883
user: zigbee2mqtt
password: your_password
serial:
port: /dev/ttyUSB0
frontend:
port: 8080
advanced:
network_key: GENERATE # Auto-generates on first start
```
3. **Start Container:**
```bash
docker compose up -d zigbee2mqtt
```
4. **Access Web UI:** `http://SERVER_IP:8080`
5. **Pair Devices:**
- In UI: Enable "Permit Join" (top right)
- Or in config: `permit_join: true` and restart
- Put device in pairing mode (usually hold button)
- Device appears in Zigbee2MQTT
- Automatically appears in Home Assistant
- Disable permit join when done!
## Summary
Zigbee2MQTT bridges Zigbee devices to Home Assistant via MQTT, enabling local control of 2500+ devices from any manufacturer without proprietary hubs.
**Perfect for:**
- Zigbee smart home devices
- Avoiding cloud dependencies
- Multi-brand setups
- Local control
- Cost savings (no hubs)
**Key Points:**
- Requires USB Zigbee coordinator
- 2500+ supported devices
- No manufacturer hubs needed
- Works with MQTT and Home Assistant
- OTA firmware updates
- Web UI for management
- Keep permit_join disabled when not pairing
**Remember:**
- Plug coordinator away from USB 3.0 ports (interference)
- Use USB extension cable if needed
- Disable permit_join after pairing
- Keep firmware updated
- Routers extend network (powered devices)
Zigbee2MQTT liberates your Zigbee devices from proprietary hubs!

# Common Issues and Solutions
## Installation Issues
### Docker Group Permissions
**Symptom:** `permission denied while trying to connect to the Docker daemon socket`
**Solution:**
```bash
# After running setup script, you must log out and back in
exit # or logout
# Or without logging out:
newgrp docker
```
### Password Hash Generation Timeout
**Symptom:** Password hash generation takes longer than 60 seconds
**Causes:**
- High CPU usage from other processes
- Slow system (argon2 is computationally intensive)
**Solutions:**
```bash
# Check system resources
top
# or
htop
# If system is slow, reduce argon2 iterations (less secure but faster)
# This is handled automatically by Authelia - just wait
# On very slow systems, it may take up to 2 minutes
```
### Port Conflicts
**Symptom:** `bind: address already in use`
**Solution:**
```bash
# Check what's using the port
sudo lsof -i :80
sudo lsof -i :443
# Common culprits:
# - Apache: sudo systemctl stop apache2
# - Nginx: sudo systemctl stop nginx
# - Another container: docker ps (find and stop it)
```
## Deployment Issues
### Authelia Restart Loop
**Symptom:** Authelia container keeps restarting
**Common causes:**
1. **Password hash corruption** - Fixed in current version
2. **Encryption key mismatch** - Changed .env after initial deployment
**Solution:**
```bash
# Check logs
sudo docker logs authelia
# If encryption key error, reset Authelia database:
sudo ./scripts/reset-test-environment.sh
# Then run setup and deploy again
```
### Watchtower Issues
**Status:** Temporarily disabled due to Docker API compatibility
**Issue:** Docker 29.x requires API v1.44, but Watchtower versions have compatibility issues
**Current state:** Commented out in infrastructure.yml with documentation
**Manual updates instead:**
```bash
# Update all images in a stack
cd /opt/stacks/stack-name/
docker compose pull
docker compose up -d
```
### Homepage Not Showing Correct URLs
**Symptom:** Homepage shows `{{HOMEPAGE_VAR_DOMAIN}}` instead of actual domain
**Cause:** Old deployment script version
**Solution:**
```bash
# Re-run deployment script (safe - won't affect running services)
sudo ./scripts/deploy-homelab.sh
# Or manually fix:
cd /opt/stacks/dashboards/homepage
sudo find . -name "*.yaml" -exec sed -i "s/{{HOMEPAGE_VAR_DOMAIN}}/yourdomain.duckdns.org/g" {} \;
```
### Services Not Accessible via HTTPS
**Symptom:** Can't access services at https://service.yourdomain.duckdns.org
**Solutions:**
1. **Check Traefik is running:**
```bash
sudo docker ps | grep traefik
sudo docker logs traefik
```
2. **Verify DuckDNS is updating:**
```bash
sudo docker logs duckdns
# Should show "Your IP has been updated"
```
3. **Check ports are open:**
```bash
sudo ufw status
# Should show 80/tcp and 443/tcp ALLOW
```
4. **Verify domain resolves:**
```bash
nslookup yourdomain.duckdns.org
# Should return your public IP
```
## Service-Specific Issues
### Gluetun VPN Not Connecting
**Symptom:** Gluetun shows connection errors
**Solutions:**
```bash
# Check credentials in .env
cat ~/EZ-Homelab/.env | grep SURFSHARK
# Check Gluetun logs
sudo docker logs gluetun
# Common fixes:
# 1. Wrong server region
# 2. Invalid credentials
# 3. WireGuard not supported by provider
```
### Pi-hole DNS Not Working
**Symptom:** Devices can't resolve DNS through Pi-hole
**Solutions:**
```bash
# Check Pi-hole is running
sudo docker ps | grep pihole
# Verify port 53 is available
sudo lsof -i :53
# If systemd-resolved is conflicting (this frees port 53):
sudo systemctl disable --now systemd-resolved
# Then give the host a working resolver, or it will lose DNS itself:
echo "nameserver 1.1.1.1" | sudo tee /etc/resolv.conf
```
### Dockge Shows Empty
**Symptom:** No stacks visible in Dockge
**Cause:** Stacks not copied to /opt/stacks/
**Solution:**
```bash
# Check what exists
ls -la /opt/stacks/
# Re-run deployment to copy stacks
sudo ./scripts/deploy-homelab.sh
```
## Performance Issues
### Slow Container Start Times
**Causes:**
- First-time image pulls
- Slow disk (not using SSD/NVMe)
- Insufficient RAM
**Solutions:**
```bash
# Pre-pull images
cd /opt/stacks/stack-name/
docker compose pull
# Check disk performance
sudo hdparm -Tt /dev/sda # Replace with your disk
# Check RAM usage
free -h
# Move /opt/stacks to faster disk if needed
```
### High CPU Usage from Authelia
**Normal:** Argon2 password hashing is intentionally CPU-intensive for security
**If persistent:**
```bash
# Check what's causing load
sudo docker stats
# If Authelia constantly high:
sudo docker logs authelia
# Look for repeated authentication attempts (possible attack)
```
## Reset and Recovery
### Complete Reset (Testing Only)
**Warning:** This is destructive!
```bash
# Use the safe reset script
sudo ./scripts/reset-test-environment.sh
# Then re-run setup and deploy
sudo ./scripts/setup-homelab.sh
sudo ./scripts/deploy-homelab.sh
```
### Partial Reset (Single Stack)
```bash
# Stop and remove specific stack
cd /opt/stacks/stack-name/
docker compose down -v # -v removes volumes (data loss!)
# Redeploy
docker compose up -d
```
### Backup Before Reset
```bash
# Backup important data
sudo tar czf ~/homelab-backup-$(date +%Y%m%d).tar.gz /opt/stacks/
# Backup specific volumes
docker run --rm \
-v stack_volume:/data \
-v $(pwd):/backup \
busybox tar czf /backup/volume-backup.tar.gz /data
```
## Getting Help
1. **Check container logs:**
```bash
sudo docker logs container-name
sudo docker logs -f container-name # Follow logs
```
2. **Use Dozzle for real-time logs:**
Access at https://dozzle.yourdomain.duckdns.org
3. **Check the AI assistant:**
Ask Copilot in VS Code for specific issues
4. **Verify configuration:**
```bash
# Check .env file
cat ~/EZ-Homelab/.env
# Check compose file
cat /opt/stacks/stack-name/docker-compose.yml
```
5. **Docker system info:**
```bash
docker info
docker version
docker system df # Disk usage
```

# SSL Certificate Issues with DuckDNS DNS Challenge
## Issue Summary
Wildcard SSL certificate acquisition via DuckDNS DNS-01 challenge consistently fails due to network connectivity issues with DuckDNS authoritative nameservers.
## Root Cause Analysis
### Why Both Domain and Wildcard are Required
Let's Encrypt requires validation of BOTH domains when using SAN (Subject Alternative Name) certificates:
- `kelin-hass.duckdns.org` (apex domain)
- `*.kelin-hass.duckdns.org` (wildcard)
Because Traefik requests a single certificate covering both names (the apex as `main`, the wildcard as a SAN), Let's Encrypt must validate each of them simultaneously; the wildcard alone cannot satisfy a request that also lists the apex.
### Technical Root Cause: Unreachable Authoritative Nameservers
**Problem**: DuckDNS authoritative nameservers (ns1-ns9.duckdns.org) are **unreachable** from the test system's network.
**Evidence**:
```bash
# Direct ping to DuckDNS nameservers - 100% packet loss
ping -c 2 ns1.duckdns.org # FAIL: 100% packet loss
ping -c 2 99.79.143.35 # FAIL: 100% packet loss (direct IP)
# DNS queries to authoritative servers - timeout
dig @99.79.143.35 kelin-hass.duckdns.org # FAIL: timeout
dig @35.182.183.211 kelin-hass.duckdns.org # FAIL: timeout
dig @3.97.58.28 kelin-hass.duckdns.org # FAIL: timeout
# Queries to recursive resolvers - SUCCESS
dig @8.8.8.8 kelin-hass.duckdns.org # SUCCESS
dig @1.1.1.1 kelin-hass.duckdns.org # SUCCESS
# Traceroute analysis
traceroute 99.79.143.35
# Shows traffic reaching hop 5 (74.41.143.193) then black hole
# DuckDNS nameservers are hosted on Amazon AWS
# Suggests AWS security groups or ISP blocking
```
**Why This Matters**:
Traefik's ACME client (lego library) requires verification against authoritative nameservers after setting TXT records. Even though:
- DuckDNS API successfully sets TXT records ✅
- TXT records propagate to public DNS (8.8.8.8, 1.1.1.1) ✅
- Recursive DNS queries work ✅
The lego library **must** also query the authoritative nameservers directly to verify propagation, and this step fails due to network unreachability.
## Attempted Solutions
### Configuration Optimizations Tried
1. **Increased propagation delay** - `delayBeforeCheck: 300` (5 minutes)
- Result: Delay worked, but authoritative NS check still failed
2. **Extended timeout** - `DUCKDNS_PROPAGATION_TIMEOUT=600` (10 minutes)
- Result: Longer timeout observed, but same NS unreachability issue
3. **LEGO environment variables**:
```yaml
- LEGO_DISABLE_CNAME_SUPPORT=true
- LEGO_EXPERIMENTAL_DNS_TCP_SUPPORT=true
- LEGO_DNS_TIMEOUT=60
- LEGO_DNS_RESOLVERS=1.1.1.1:53,8.8.8.8:53
- LEGO_DISABLE_CP=true
```
- Result: Forced use of recursive resolvers for some queries, but SOA lookups still failed
4. **Explicit Docker DNS configuration**:
```yaml
dns:
- 1.1.1.1
- 8.8.8.8
```
- Result: Container used correct resolvers, but lego still attempted authoritative NS queries
5. **VPN routing test** (through Gluetun container)
- Result: DuckDNS nameservers also unreachable through VPN
### Error Messages Observed
**Phase 1: Direct authoritative nameserver timeout**
```
propagation: time limit exceeded: last error: authoritative nameservers:
DNS call error: read udp 172.19.0.2:53666->3.97.58.28:53: i/o timeout
[ns=ns6.duckdns.org.:53, question='_acme-challenge.kelin-hass.duckdns.org. IN TXT']
```
**Phase 2: SOA record query failure**
```
propagation: time limit exceeded: last error: could not find zone:
[fqdn=_acme-challenge.kelin-hass.duckdns.org.]
unexpected response for 'kelin-hass.duckdns.org.'
[question='kelin-hass.duckdns.org. IN SOA', code=SERVFAIL]
```
## Working Configuration (Self-Signed Certificates)
Current deployment is **fully functional** with self-signed certificates:
- All services accessible via HTTPS ✅
- Can proceed through browser certificate warnings ✅
- Traefik routing works correctly ✅
- Authelia SSO functional ✅
- All stacks deployed successfully ✅
## Recommended Solutions for Next Test Run
### Option 1: Switch to Cloudflare DNS (RECOMMENDED)
**Pros**:
- Cloudflare nameservers are highly reliable and globally accessible
- Supports wildcard certificates via DNS-01 challenge
- Better performance and propagation times
- Well-tested with Traefik
**Steps**:
1. Move the domain's DNS to Cloudflare (requires a registered domain; the free tier is sufficient)
2. Obtain Cloudflare API token (Zone:DNS:Edit permission)
3. Update `traefik.yml`:
```yaml
dnsChallenge:
provider: cloudflare
delayBeforeCheck: 30 # Cloudflare propagates quickly
resolvers:
- "1.1.1.1:53"
- "1.0.0.1:53"
```
4. Update `docker-compose.yml`:
```yaml
environment:
- CF_DNS_API_TOKEN=${CF_DNS_API_TOKEN}
```
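For reference, the two fragments above fit into a complete resolver definition along these lines (a sketch; the resolver name `letsencrypt`, the email, and the storage path are assumptions and should match the existing `traefik.yml`):

```yaml
certificatesResolvers:
  letsencrypt:                        # assumption: name referenced by router labels
    acme:
      email: admin@example.com        # assumption: placeholder
      storage: /letsencrypt/acme.json # assumption: placeholder path
      dnsChallenge:
        provider: cloudflare
        delayBeforeCheck: 30
        resolvers:
          - "1.1.1.1:53"
          - "1.0.0.1:53"
```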
### Option 2: Investigate Network Blocking
**Diagnostic Steps**:
1. Test from different network (mobile hotspot, different ISP)
2. Contact ISP to check if AWS IP ranges are blocked
3. Check router/firewall for DNS filtering or AWS blocking
4. Test with different VPN provider
**If network is the issue**:
- May need to route the Traefik container through a different VPN provider or proxy
- Consider hosting Traefik on a different network segment
### Option 3: HTTP-01 Challenge (Non-Wildcard)
**Pros**:
- More reliable (no DNS dependencies)
- Works with current DuckDNS setup
- No external nameserver queries required
**Cons**:
- ❌ No wildcard certificate (must specify each subdomain)
- Requires port 80 to be reachable from the internet
- Separate certificate for each subdomain
**Steps**:
1. Update `traefik.yml`:
```yaml
httpChallenge:
entryPoint: web
```
2. Remove wildcard domain label from Traefik service:
```yaml
# Remove this line:
- "traefik.http.routers.traefik.tls.domains[0].sans=*.${DOMAIN}"
```
3. Add explicit TLS configuration to each service's labels
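Such a per-service label set would look roughly like this (a sketch; the router name `myservice` and the resolver name `letsencrypt` are placeholders):

```yaml
labels:
  - "traefik.http.routers.myservice.rule=Host(`myservice.${DOMAIN}`)"
  - "traefik.http.routers.myservice.tls=true"
  - "traefik.http.routers.myservice.tls.certresolver=letsencrypt"
```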
### Option 4: Use Alternative DNS Provider with DuckDNS
Keep DuckDNS for dynamic IP updates, but use a different DNS provider for the certificate challenge:
1. Host the domain's DNS records at Cloudflare
2. Keep the DuckDNS container for dynamic IP updates
3. Create a CNAME record in Cloudflare pointing at the DuckDNS hostname
4. Use Cloudflare for the DNS-01 certificate challenge
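In zone-file terms, the records from steps 1-3 would look something like this (a sketch; `yourdomain.com` stands in for a registered domain hosted at Cloudflare):

```text
; Cloudflare-hosted zone for yourdomain.com (placeholder domain)
home    IN  CNAME  kelin-hass.duckdns.org.  ; follows the DuckDNS dynamic IP
*.home  IN  CNAME  home.yourdomain.com.     ; wildcard for service subdomains
```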
## Files to Update in Repository
### ~/EZ-Homelab/stacks/core/traefik/traefik.yml
Document both HTTP and DNS challenge configurations with clear comments.
### ~/EZ-Homelab/stacks/core/docker-compose.yml
Verify the wildcard domain configuration (it is currently correct):
```yaml
- "traefik.http.routers.traefik.tls.domains[0].main=${DOMAIN}"
- "traefik.http.routers.traefik.tls.domains[0].sans=*.${DOMAIN}"
```
**This is correct** - keep both apex and wildcard.
### ~/EZ-Homelab/docs/service-docs/traefik.md
Add troubleshooting section for DuckDNS DNS challenge issues.
## Success Criteria for Next Test
### Must Have:
- [ ] Valid wildcard SSL certificate obtained
- [ ] Certificate automatically renews
- [ ] No browser certificate warnings
- [ ] Documented working configuration
### Should Have:
- [ ] Certificate acquisition completes in < 5 minutes
- [ ] Reliable across multiple test runs
- [ ] Clear error messages if failure occurs
## Timeline Analysis
**First Test Run**: Certificates reportedly worked
**Current Test Run**: Consistent failures
**Possible Explanations**:
1. DuckDNS infrastructure changes (AWS security policies)
2. ISP routing changes
3. Increased AWS security after abuse/attacks
4. Different network environment during first test
## Conclusion
**Current Status**: The system is production-ready apart from the browser warnings caused by self-signed certificates.
**Blocking Issue**: DuckDNS authoritative nameservers unreachable from current network environment.
**Recommendation**: **Switch to Cloudflare DNS** for next test run. This is the most reliable solution and is the industry standard for automated certificate management with Traefik.
**Alternative**: If staying with DuckDNS is required, investigate network connectivity issues with ISP and consider using HTTP-01 challenge (losing wildcard capability).