Understanding tiCrypt Infrastructure: Components, Connectivity, and Deployment Options
Planning a tiCrypt deployment starts with understanding the infrastructure that powers it. This guide walks through the core components, how they connect, and the deployment architectures available — from a lightweight demo system to a full-scale production environment with batch processing.
Note: This guide covers infrastructure planning and setup. The tiCrypt installation and software deployment process is covered separately.
Core Components
tiCrypt is built from a set of modular components, each with a distinct role. The sections below describe what each one does.
tiCrypt Backend
The backend is the heart of the system, composed of 11 services. The most critical ones include:
- ticrypt-rest — The HTTPS entry point for the entire system. All other services depend on it. It runs behind Nginx as a reverse-proxied virtual domain.
- ticrypt-auth — Handles authentication, authorization, and serves as the global coordinator across all backend services.
- ticrypt-vm — Manages the full virtual machine lifecycle, including advanced features like SLURM integration for batch processing.
- ticrypt-logger — Maintains a tamper-resistant, blockchain-structured relational log of all system activity, designed for processing by tiCrypt Audit.
- ticrypt-proxy — Creates secure tunnels between users and their VMs, enabling RDP sessions, application access, and other connectivity.
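Because every other service depends on ticrypt-rest, start order matters. As a rough illustration, a valid start order for the services named above can be computed with a topological sort. The dependency map below is hypothetical beyond the documented fact that all services depend on ticrypt-rest and that ticrypt-auth acts as the global coordinator:

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library

# Hypothetical dependency map for the services named above: each service
# lists what must be running before it starts. Only the edges described
# in the text (everything depends on ticrypt-rest; ticrypt-auth
# coordinates the rest) are taken from the documentation.
DEPENDS_ON = {
    "ticrypt-rest": set(),
    "ticrypt-auth": {"ticrypt-rest"},
    "ticrypt-vm": {"ticrypt-rest", "ticrypt-auth"},
    "ticrypt-logger": {"ticrypt-rest", "ticrypt-auth"},
    "ticrypt-proxy": {"ticrypt-rest", "ticrypt-auth"},
}

def start_order(deps):
    """Return a service start order that respects the dependency map."""
    return list(TopologicalSorter(deps).static_order())

if __name__ == "__main__":
    print(start_order(DEPENDS_ON))  # ticrypt-rest comes first
```

The same idea extends to all 11 services once their actual dependencies are known.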
tiCrypt Audit
tiCrypt Audit is a dedicated system for processing logs, generating reports and alerts, and running ad hoc queries. It is designed around three principles:
- Isolation — Audit does not require direct access to the tiCrypt backend. The backend pushes live logs to Audit over port 25000, but the reverse path does not exist. This means security teams can use Audit without gaining access to any other part of the system.
- Full History — Audit retains logs for the lifetime of the deployment. The complete system history can be reconstructed at any point in the future.
- High Performance — Audit is built on ClickHouse with specialized data-loading techniques, so most ad hoc queries return in under a second. Individual reports export in 2–10 seconds, and generating thousands of reports takes only minutes.
Data Ingress
tiCrypt provides two mechanisms for securely acquiring data from external sources:
- ticrypt-sftp — SFTP-based data ingestion. Requires an HTTPS endpoint and an SFTP port (22 or 2022).
- ticrypt-mailbox — Web-based data ingestion. Requires an HTTPS endpoint.
Both services share the same underlying architecture and are intentionally deployed outside the secure infrastructure perimeter. This allows external collaborators to submit data without accessing the secure system. However, both require a network path to the tiCrypt backend REST interface — an important consideration if the backend sits behind a VPN.
Virtual Machine Hosting
tiCrypt manages one or more VM hosts with varying configurations of memory, CPU cores, and GPUs. The hardware does not need to be uniform across hosts.
VM hosts run secure, tiCrypt-managed virtual machines that interact with the backend and, in a tightly controlled manner, with each other. Direct internet access is not required — only connectivity to the backend server.
VM performance depends on three factors: host hardware, the distributed filesystem, and network speed. For production environments, high-performance storage and fast networking are essential.
Batch Processing with SLURM
tiCrypt supports batch processing through SLURM integration. A dedicated component, ticrypt-host-manager, coordinates between SLURM and the tiCrypt backend.
SLURM hosts require the same filesystem access and backend connectivity as standard VM hosts. While it's possible to run both interactive VMs and SLURM workloads on the same host, separating them simplifies the setup.
Setup Requirements
Connectivity
| Connection | Requirement |
|---|---|
| ticrypt-vm → VM Hosts | SSH access for VM lifecycle management |
| ticrypt-proxy → VM Hosts | Access to ports 5900–6256 |
| ticrypt-logger → tiCrypt Audit | Access to port 25000 |
| ticrypt-sftp / ticrypt-mailbox → ticrypt-rest | Access to the HTTPS frontend |
| ticrypt-rest, Audit, sftp, mailbox | Each requires its own Nginx frontend for HTTPS |
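Before installation, the table above can be turned into a quick TCP preflight check. A minimal sketch; the hostnames below are placeholders for your own infrastructure, not names tiCrypt requires:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Illustrative checks mirroring the connectivity table; substitute real hosts.
CHECKS = [
    ("vmhost1.example.edu", 22),     # ticrypt-vm -> VM hosts (SSH)
    ("vmhost1.example.edu", 5900),   # ticrypt-proxy -> VM hosts
    ("audit.example.edu", 25000),    # ticrypt-logger -> tiCrypt Audit
    ("backend.example.edu", 443),    # sftp/mailbox -> ticrypt-rest (HTTPS)
]

if __name__ == "__main__":
    for host, port in CHECKS:
        status = "open" if port_open(host, port) else "BLOCKED"
        print(f"{host}:{port} {status}")
```

Running this from each source machine in the table catches firewall and VPN gaps before they surface as installation failures.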
DNS and Certificates
Each HTTPS-enabled service requires a dedicated virtual domain and its own TLS certificate. Multi-domain certificates are not recommended, as they are considered less secure. A suggested naming convention (note that DNS hostname labels may contain only letters, digits, and hyphens):
| Service | Subdomain Example |
|---|---|
| ticrypt-rest | backend.my-system.my-domain.edu |
| tiCrypt Audit | audit.my-system.my-domain.edu |
| ticrypt-sftp | sftp.my-system.my-domain.edu |
| ticrypt-mailbox | mailbox.my-system.my-domain.edu |
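Because each virtual domain carries its own certificate, each one also expires independently, so it is worth monitoring them separately. A small expiry-check sketch using the standard library; the `parse_not_after` helper and the monitoring approach are our own, not part of tiCrypt:

```python
import socket
import ssl
from datetime import datetime, timezone

def parse_not_after(not_after):
    """Parse the 'notAfter' field of ssl.getpeercert(),
    e.g. 'Jun  1 12:00:00 2030 GMT' (always expressed in GMT)."""
    dt = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return dt.replace(tzinfo=timezone.utc)

def cert_days_remaining(host, port=443, timeout=5.0):
    """Days until the TLS certificate presented by host:port expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return (parse_not_after(cert["notAfter"])
            - datetime.now(timezone.utc)).days
```

Calling `cert_days_remaining` for each of the four subdomains from a cron job gives early warning before any single certificate lapses.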
Port Access
All HTTPS traffic is served through Nginx with virtual domains and reverse-proxied to local ports (typically 8080–8084).
| Service | Port(s) | Notes |
|---|---|---|
| ticrypt-rest | HTTPS → 8080 | Port 443 open to users |
| ticrypt-proxy | 6000–6100 | Same visibility as port 443 |
| tiCrypt Audit | 25000 (logs), HTTPS → 8081 | Port 443 open to admins |
| ticrypt-sftp | 2022 (SFTP), HTTPS → 8082 | Port 443 open to the world |
| ticrypt-mailbox | HTTPS → 8083 | Port 443 open to the world |
| SSH | 22 | Management access and Libvirt on VM hosts |
| VM Hosts | 5900–6256 | Port forwarding from the backend |
Storage
| Service | Mount Point | Minimum Size |
|---|---|---|
| ticrypt-rest | /storage/vault | 100 GB+ |
| VM Hosts / ticrypt-vm | /storage/libvirt | 1 TB+ |
| tiCrypt Audit | /var/clickhouse | 10 GB+ |
Storage needs scale with usage. Large deployments can reach 10 TB+ for the vault and 1 PB+ for VM disk images.
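A quick way to confirm the minimum sizes above before deployment is to inspect each mount point directly. A sketch using the standard library, assuming the mount points from the table exist on the machine being checked:

```python
import shutil

# Minimum sizes from the storage table, in bytes (illustrative baseline;
# production deployments should be sized well above these floors).
MINIMUMS = {
    "/storage/vault": 100 * 10**9,   # ticrypt-rest
    "/storage/libvirt": 10**12,      # VM hosts / ticrypt-vm
    "/var/clickhouse": 10 * 10**9,   # tiCrypt Audit
}

def meets_minimum(path, required_bytes):
    """True if the filesystem backing `path` has at least the
    required total capacity."""
    return shutil.disk_usage(path).total >= required_bytes

if __name__ == "__main__":
    for path, required in MINIMUMS.items():
        try:
            ok = meets_minimum(path, required)
        except FileNotFoundError:
            ok = False
        print(path, "ok" if ok else "TOO SMALL or missing")
```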
Deployment Architectures
tiCrypt scales from a single-server demo to a multi-node production cluster. Below are the most common configurations.
Single Server (Demo/Test)
Everything — backend services, Audit, data ingress, and VM hosting — runs on one machine. This is suitable for demos and testing only, not production use.
Minimum specs: 32 cores (with virtualization extensions), 128 GB RAM, 1 TB storage.
Small Production System
A three-node setup that separates concerns for reliability and access control:
| Role | Specs | Access |
|---|---|---|
| ticrypt-sftp + ticrypt-mailbox (VM) | 4 cores, 16 GB RAM, 100 GB storage | World-facing |
| tiCrypt Audit (VM) | 2+ cores, 16 GB+ RAM, 100 GB+ storage | Admin/security teams |
| Backend + VM hosting (server) | 64+ cores, 512 GB RAM, 10 TB+ storage | Internal |
Storage is locally attached to the backend server.
Production System with Interactive VMs
This architecture separates the backend from dedicated VM hosts for better scalability:
| Role | Specs | Access |
|---|---|---|
| ticrypt-sftp + ticrypt-mailbox (VM) | 4 cores, 16 GB RAM, 100 GB storage | World-facing |
| tiCrypt Audit (VM) | 8+ cores, 64 GB+ RAM, 1 TB+ storage | Admin/security teams |
| Backend (server or VM) | 32 cores, 128 GB RAM | Internal |
| VM hosts (vm1, vm2, …) | Varies by workload | Internal |
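Since the VM host count "varies by workload," a back-of-the-envelope capacity calculation helps during planning. The function below is purely illustrative arithmetic, not a tiCrypt tool; the overcommit and per-host reserve parameters are assumptions you should tune to your environment:

```python
import math

def hosts_needed(n_vms, vm_cores, vm_ram_gb, host_cores, host_ram_gb,
                 overcommit=1.0, reserve_cores=2, reserve_ram_gb=8):
    """Rough count of identical VM hosts needed for n_vms interactive VMs.

    `overcommit` > 1 allows CPU overcommit; RAM is never overcommitted
    here. A small reserve is held back per host for the hypervisor.
    """
    usable_cores = (host_cores - reserve_cores) * overcommit
    usable_ram = host_ram_gb - reserve_ram_gb
    per_host = min(usable_cores // vm_cores, usable_ram // vm_ram_gb)
    if per_host < 1:
        raise ValueError("host too small to fit a single VM")
    return math.ceil(n_vms / per_host)
```

For example, 100 VMs at 4 cores / 16 GB each on 64-core, 512 GB hosts works out to 7 hosts with the default reserves and no overcommit.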
Production System with Interactive VMs and Batch Processing
The most comprehensive deployment adds SLURM nodes alongside interactive VM hosts:
| Role | Notes |
|---|---|
| ticrypt-sftp + ticrypt-mailbox | World-facing VM |
| tiCrypt Audit | Admin/security-access VM |
| Backend | Dedicated server or VM |
| VM hosts (vm1, vm2, …) | Libvirt for interactive VMs |
| SLURM hosts (slurm1, slurm2, …) | SLURM + Libvirt for batch VMs |
This configuration scales to a large number of SLURM nodes. Special interactive VMs manage the formation and security of private SLURM clusters on top of the global SLURM scheduler — which is why direct, high-performance connectivity between VM hosts and SLURM hosts is required.
Small SLURM Demo System
A variation of the single-server setup with added SLURM capacity:
- One server runs all tiCrypt components plus interactive VM hosting.
- Two or more additional SLURM hosts handle batch processing.
Flexible by Design
tiCrypt's modular architecture means there is no single "correct" deployment. A research group running a handful of interactive VMs on a single server and a large institution operating hundreds of SLURM batch nodes across a dedicated cluster are both valid configurations. The same components simply scale and redistribute across available infrastructure. Storage backends, VM host hardware, and network topology can all vary to match what your environment already provides. As requirements evolve, components like additional VM hosts or SLURM nodes can be introduced without redesigning the existing setup.
