Storage Overview
S3-compatible object storage overview with SeaweedFS backend. Virtual-hosted URLs, presigned URLs, multipart upload, versioning, SSE, ACLs, and AI pipeline integration.
NFYio provides S3-compatible object storage backed by SeaweedFS — a distributed, high-performance storage system. Use the same APIs, SDKs, and tools you already know from AWS S3, while keeping your data on your own infrastructure.
Architecture
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│  Client (SDK,   │────►│  NFYio Storage   │────►│   SeaweedFS     │
│   CLI, API)     │     │  Proxy (S3 API)  │     │   (Backend)     │
└─────────────────┘     └──────────────────┘     └─────────────────┘
                                 │
                                 ▼
                        ┌──────────────────┐
                        │    PostgreSQL    │
                        │    (Metadata)    │
                        └──────────────────┘
The storage proxy translates S3 API requests to SeaweedFS operations and stores metadata (buckets, ACLs, versioning state) in PostgreSQL. Your objects are stored as blobs in SeaweedFS.
Key Capabilities
Virtual-Hosted URLs
Access buckets via virtual-hosted style URLs for cleaner, domain-based addressing:
https://my-bucket.nfyio.yourdomain.com/object-key
Path-style URLs are also supported:
https://nfyio.yourdomain.com/my-bucket/object-key
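The two styles address the same object; they differ only in where the bucket name appears. As an illustration (the helper below is hypothetical, not part of any NFYio SDK):

```javascript
// Build both S3 addressing styles for the same object.
// baseHost is your NFYio endpoint host (assumption: nfyio.yourdomain.com).
function objectUrls(baseHost, bucket, key) {
  // Encode each path segment, but keep "/" separators intact.
  const encodedKey = key.split('/').map(encodeURIComponent).join('/');
  return {
    virtualHosted: `https://${bucket}.${baseHost}/${encodedKey}`,
    pathStyle: `https://${baseHost}/${bucket}/${encodedKey}`,
  };
}

const urls = objectUrls('nfyio.yourdomain.com', 'my-bucket', 'reports/q1 2024.pdf');
// urls.virtualHosted → https://my-bucket.nfyio.yourdomain.com/reports/q1%202024.pdf
```

Virtual-hosted style requires a wildcard DNS entry for your endpoint domain; path style works without one, which is why many SDKs expose a "force path style" option.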
Presigned URLs
Generate time-limited URLs for upload or download without exposing credentials. Ideal for:
- Direct browser uploads
- Temporary download links
- Sharing with external users
// Generate a 1-hour download URL (AWS SDK for JavaScript v2)
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ endpoint: 'https://storage.yourdomain.com', signatureVersion: 'v4' });

const url = await s3.getSignedUrlPromise('getObject', {
  Bucket: 'my-bucket',
  Key: 'document.pdf',
  Expires: 3600 // seconds
});
Multipart Upload
Upload large files (over 100 MB) in parallel chunks for better throughput and resumability:
aws s3 cp large-file.zip s3://my-bucket/ --endpoint-url https://storage.yourdomain.com
The AWS CLI switches to multipart upload automatically once a file exceeds its configured threshold (8 MB by default, tunable via the multipart_threshold setting).
Versioning
Keep multiple versions of objects to recover from accidental overwrites or deletions. Enable versioning at the bucket level and list/restore previous versions via the API.
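The behavior of a versioned key can be pictured with this conceptual model (an illustration only, not NFYio internals): every PUT appends a new version, DELETE adds a delete marker, and a plain GET returns the latest version unless it is a delete marker.

```javascript
// Conceptual model of bucket versioning for a single object key.
class VersionedKey {
  constructor() { this.versions = []; }
  put(body) { this.versions.push({ id: this.versions.length + 1, body, deleteMarker: false }); }
  delete() { this.versions.push({ id: this.versions.length + 1, deleteMarker: true }); }
  get(versionId) {
    const v = versionId
      ? this.versions.find((x) => x.id === versionId)
      : this.versions[this.versions.length - 1]; // latest
    return v && !v.deleteMarker ? v.body : undefined; // undefined ≈ 404
  }
}
```

Recovering from an accidental delete is then just a GET of an older version ID followed by a fresh PUT.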
Server-Side Encryption (SSE)
Encrypt objects at rest using AES-256. Specify encryption on upload:
aws s3 cp secret.pdf s3://my-bucket/ \
--server-side-encryption AES256 \
--endpoint-url https://storage.yourdomain.com
Access Control Lists (ACLs)
Control read/write permissions at the bucket and object level. Use canned ACLs (private, public-read, bucket-owner-full-control) or custom ACLs with granular grants.
Integration with AI Pipeline
NFYio storage integrates seamlessly with the AI/RAG pipeline:
- Ingest — Upload documents (PDF, DOCX, images) to a bucket
- Trigger — The agent service watches configured buckets or paths
- Embed — Documents are chunked, embedded, and indexed in pgvector
- Query — Semantic search retrieves relevant chunks for RAG responses
Configure ingestion paths in your workspace to automatically process new objects. The AI pipeline respects bucket ACLs and access keys.
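The "chunked" part of the Embed step can be sketched as a sliding window over the extracted text (the chunk size and overlap below are illustrative defaults, not NFYio's actual configuration):

```javascript
// Split extracted document text into overlapping chunks for embedding.
// chunkSize/overlap are illustrative; tune them for your embedding model.
function chunkText(text, chunkSize = 500, overlap = 50) {
  const chunks = [];
  const step = chunkSize - overlap; // each chunk repeats the last `overlap` chars
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // final chunk reached the end
  }
  return chunks;
}
```

Overlapping chunks keep sentences that straddle a boundary retrievable from either side, at the cost of some duplicated embedding work.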
S3 API Compatibility
NFYio implements the core S3 API operations:
| Operation | Supported |
|---|---|
| CreateBucket, DeleteBucket, ListBuckets | ✅ |
| PutObject, GetObject, DeleteObject | ✅ |
| ListObjects, ListObjectsV2 | ✅ |
| MultipartUpload (Initiate, Upload, Complete, Abort) | ✅ |
| CopyObject | ✅ |
| GetBucketVersioning, PutBucketVersioning | ✅ |
| GetObjectAcl, PutObjectAcl | ✅ |
| Presigned URLs | ✅ |
| Server-Side Encryption (AES-256) | ✅ |
Endpoints
- Storage API: https://storage.yourdomain.com (or your configured endpoint)
- Web Console: Use the NFYio dashboard to manage buckets and objects visually
Next Steps
- Managing Buckets — Create and configure buckets
- Working with Objects — Upload, download, and manage objects
- Access Keys — Create credentials for API access