Deploy nfyio in 5 Minutes with Docker Compose
A step-by-step walkthrough for deploying nfyio on your own infrastructure — from zero to your first S3 bucket in under 5 minutes.
nfyio Team
Talya Smart & Technoplatz JV
nfyio is a self-hosted cloud infrastructure platform. In this guide, you’ll go from zero to a running instance with S3-compatible object storage, AI embeddings, and Keycloak-based auth — all on your own infrastructure.
Prerequisites
Before you begin, make sure you have:
- Docker (version 24 or higher)
- Docker Compose (v2+)
- A server with at least 2 GB RAM and 20 GB disk
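You can quickly confirm the Docker requirement by checking the major version. A minimal sketch (the sample version string below is a stand-in; substitute the real output of `docker --version`):

```shell
# Quick prerequisite check: extract the Docker major version and compare
# it against the required minimum. Replace the sample line with the real
# output of `docker --version`.
version_line="Docker version 24.0.7, build afdd53b"
major=${version_line#Docker version }   # strip the "Docker version " prefix
major=${major%%.*}                      # keep only the major number
if [ "$major" -ge 24 ]; then
  echo "Docker is new enough"
else
  echo "Docker $major is too old; version 24+ required"
fi
```

`docker compose version` works the same way for the Compose v2 check.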
1. Clone the Repository
git clone https://github.com/hilaltechnologic/nfyio.git
cd nfyio
2. Configure Environment Variables
Copy the example env file:
cp .env.example .env
Open .env and set the required fields:
# Session encryption (minimum 64 characters)
SESSION_SECRET=your-super-secret-session-key-at-least-64-chars-long
# Keycloak admin credentials
KEYCLOAK_ADMIN_PASSWORD=change-me-in-production
# PostgreSQL password
POSTGRES_PASSWORD=change-me-too
# Redis password
REDIS_PASSWORD=redis-secret
# Allowed origins
ALLOWED_ORIGINS=https://app.yourdomain.com,https://yourdomain.com
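Since SESSION_SECRET must be at least 64 characters, one easy way to generate a suitable value is with openssl (a sketch; any high-entropy string of 64+ characters works):

```shell
# 32 random bytes, hex-encoded = exactly 64 characters
SESSION_SECRET=$(openssl rand -hex 32)
echo "${#SESSION_SECRET}"   # → 64
```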
3. Start the Platform
docker compose up -d
This starts:
- PostgreSQL with pgvector extension (port 5432)
- Redis for job queues and session store (port 6379)
- SeaweedFS for blob storage (ports 9333, 8080)
- Keycloak for authentication (port 8080 inside its container; since SeaweedFS also uses 8080, check docker-compose.yml for the host port Keycloak is actually published on)
- nfyio Gateway (port 3000)
- nfyio Storage Proxy (port 7007)
- nfyio Agent Service (port 7010)
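If two services contend for the same host port (both SeaweedFS and Keycloak default to 8080), the host-side mapping can be changed in docker-compose.yml. A hypothetical excerpt, not taken from the nfyio repository (the service name and image tag are assumptions; adapt to the actual file):

```yaml
# Hypothetical override: publish Keycloak on host port 8081 while the
# container keeps listening on 8080. Service and image names are assumptions.
services:
  keycloak:
    image: quay.io/keycloak/keycloak:24.0
    ports:
      - "8081:8080"   # host:container
```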
4. Verify Health
# Check all containers are running
docker compose ps
# Verify gateway health
curl http://localhost:3000/health
# → {"status":"ok","version":"0.9.0"}
# Verify storage proxy
curl http://localhost:7007/health
# → {"status":"ok","backend":"seaweedfs"}
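Right after docker compose up, the services can take a few seconds to become healthy, so a one-shot curl may fail. A small retry helper (a sketch; the endpoint comes from this guide and the timeout is arbitrary):

```shell
# Retry a command every 2 seconds, up to 30 attempts (~60 s), until it
# succeeds. Returns non-zero on timeout.
wait_for() {
  tries=0
  until "$@" >/dev/null 2>&1; do
    tries=$((tries + 1))
    [ "$tries" -ge 30 ] && return 1
    sleep 2
  done
}

# Example: block until the gateway reports healthy
# wait_for curl -fsS http://localhost:3000/health && echo "gateway up"
```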
5. Create Your First Bucket
Use any S3-compatible client. Here’s an example with the AWS CLI:
# Configure AWS CLI to point to nfyio
aws configure set aws_access_key_id your-access-key
aws configure set aws_secret_access_key your-secret-key
aws configure set region us-east-1
# Create a bucket
aws s3 mb s3://my-first-bucket \
--endpoint-url http://localhost:7007
# Upload a file
aws s3 cp README.md s3://my-first-bucket/ \
--endpoint-url http://localhost:7007
# List objects
aws s3 ls s3://my-first-bucket \
--endpoint-url http://localhost:7007
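To avoid repeating --endpoint-url on every command, recent AWS CLI versions (v2.13+) honor the AWS_ENDPOINT_URL environment variable:

```shell
# Point every aws invocation at the nfyio storage proxy
export AWS_ENDPOINT_URL=http://localhost:7007

# The flag can then be dropped:
#   aws s3 ls s3://my-first-bucket
```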
6. Generate an Access Key
Access keys are scoped to specific routes and permissions. Create one via the API:
curl -X POST http://localhost:3000/api/access-keys \
-H "Authorization: Bearer $YOUR_JWT" \
-H "Content-Type: application/json" \
-d '{
"name": "my-first-key",
"scopes": ["storage:read", "storage:write"],
"workspaceId": "your-workspace-id"
}'
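To use the new key in scripts, you can extract it from the JSON response. A sketch with a hypothetical response shape (the field names are assumptions; check your instance's actual API response):

```shell
# Stand-in for the curl output above; the field names are hypothetical.
response='{"name":"my-first-key","accessKeyId":"AKIA_EXAMPLE","secret":"nfy_example_secret"}'

# Extract the secret with sed (no jq dependency)
secret=$(printf '%s' "$response" | sed -E 's/.*"secret":"([^"]*)".*/\1/')
echo "$secret"   # → nfy_example_secret
```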
What’s Next?
With your instance running, these follow-up guides cover production deployments:
- Deploy nfyio on Kubernetes with Helm Charts: a production-ready Kubernetes deployment using Helm charts, including PostgreSQL, Redis, SeaweedFS, Keycloak, and the nfyio gateway.
- Multi-Region Deployment for nfyio: deploy across multiple geographic regions with data replication, latency-based routing, and disaster recovery between sites.