Deployment Guide

Quick Reference

  • Platform: Linux / macOS / Docker
  • Min Requirements: 2 vCPU, 4GB RAM (8GB recommended for parallel swarming)
  • Python Version: 3.10+
  • Health Check: python3 gemini-run.py "health check"

System Requirements

Component   Minimum       Recommended
CPU         1 Core        2+ Cores
RAM         2GB           8GB (for large DAGs)
Disk        100MB         1GB+ (for SQLite growth)
Runtime     Python 3.10   Python 3.12

Environment Variables

Variable          Description                        Required    Default
GOOGLE_API_KEY    Primary Gemini API access key.     Yes         (none)
MISTRAL_API_KEY   Fallback Mistral API access key.   Yes         (none)
KAGGLE_USERNAME   For KaggleAgent operations.        Optional    (none)
KAGGLE_KEY        For KaggleAgent operations.        Optional    (none)
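
A startup script can fail fast when a required key from the table above is missing. A sketch (the `require_env` helper is illustrative, not part of the project; `${!var}` indirect expansion requires bash):

```bash
# Check that each named environment variable is set and non-empty.
require_env() {
  for var in "$@"; do
    if [ -z "${!var}" ]; then        # bash indirect expansion: value of $var's name
      echo "Missing required variable: $var" >&2
      return 1
    fi
  done
}

require_env GOOGLE_API_KEY MISTRAL_API_KEY && echo "environment ok" || echo "environment incomplete"
```

Only the two required keys are checked; the Kaggle variables are optional and can be validated separately by KaggleAgent workflows that need them.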

Security

Never commit .env files to source control. Use a secrets manager (e.g., AWS Secrets Manager, GitHub Secrets) in production environments.
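
One way to enforce this rule locally is to ignore and untrack the file in git. A sketch, assuming the `.env` file sits at the repository root:

```bash
# Keep the local secrets file out of version control.
echo ".env" >> .gitignore
git rm --cached .env 2>/dev/null || true   # untrack it if it was ever committed
```

Note that `git rm --cached` only removes the file from future commits; keys already pushed to history should be rotated.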

Local Setup

Follow these steps to get the system running on your local workstation.

bash
# Step 1: Clone the repository
git clone https://github.com/kizabgd123/gemma_neki.git
cd gemma_neki

# Step 2: Create virtual environment
python3 -m venv venv
source venv/bin/activate

# Step 3: Install dependencies
pip install google-generativeai mistralai pandas numpy pytest

# Step 4: Configure environment
cp .env.example .env
# Edit .env with your actual API keys

# Step 5: Verify installation
export PYTHONPATH=.
pytest tests/
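
For Step 4, the variable names your `.env` needs are exactly those from the Environment Variables table. A sketch of what the edited file might look like, with placeholder values:

```bash
# .env -- placeholder values; replace with real keys and never commit this file.
GOOGLE_API_KEY=your-google-api-key
MISTRAL_API_KEY=your-mistral-api-key
# Optional, only needed for KaggleAgent operations:
KAGGLE_USERNAME=your-kaggle-username
KAGGLE_KEY=your-kaggle-key
```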

Running Your First Workflow

Use the main entrypoint gemini-run.py to trigger the orchestrator.

bash
python3 gemini-run.py "Research and implement a secure JWT authentication module."

CI/CD Pipeline

The project uses a standard 4-gate verification pipeline:

mermaid
graph LR
    A["🔀 Git Push"] --> B["🧪 Unit Tests (Pytest)"]
    B --> C["⚖️ Debate Simulation"]
    C --> D["📦 Build Docker Image"]
    D --> E["🚀 Deploy to Staging"]
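
The first gates can be approximated locally before pushing. A sketch only: the debate-simulation gate has no standalone CLI documented here, so just the gates with known commands appear, and the `run_gate`/`run_pipeline` helpers are illustrative:

```bash
set -e                         # stop at the first failing gate

run_gate() {                   # print the gate name, then run its command
  echo "=== Gate: $1 ==="
  shift
  "$@"
}

run_pipeline() {
  run_gate "Unit tests"   env PYTHONPATH=. pytest tests/
  run_gate "Docker build" docker build -t gemma_neki:staging .
}

# Invoke from the repository root:
# run_pipeline
```

Calling `run_pipeline` from the repo root mirrors the test and build gates; the debate-simulation and staging-deploy stages remain CI-only.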
