tiCrypt Backend Installation Guide

ticrypt-setup automates the installation and configuration of the tiCrypt backend and worker nodes using Ansible. It replaces the manual installation process, which is slower and more prone to configuration errors, and is therefore the recommended method for all tiCrypt backend deployments.

Prerequisites

  1. Control machine (where you run install.sh)

    • Ansible installed on the control node
    • SSH key-based access to all target hosts
    • Passwordless sudo on all target hosts, or run as root (a quick pre-flight check follows this list)
  2. Target hosts

    • Backend host reachable via SSH
    • VM hosts reachable via SSH (at least one)
    • Optional: Slurm hosts reachable via SSH (only if Slurm is used)
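
A quick pre-flight check from the control machine can confirm both SSH and sudo access at once. This loop is a sketch: the host list and remote user are placeholders, so substitute your own values.

# Verify key-based SSH and passwordless sudo on every target host (placeholder IPs/user)
for h in 203.0.113.10 203.0.113.11 203.0.113.12; do
  ssh -o BatchMode=yes admin@"$h" 'sudo -n true' \
    && echo "$h: OK" \
    || echo "$h: FAILED (check SSH keys / sudo rules)"
done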

Step 1 — Create a working directory

Use a dedicated directory to keep your inventory and configuration files:

mkdir -p /root/ticrypt-setup
cd /root/ticrypt-setup

Step 2 — Download and extract ticrypt-setup 0.1.9

Download and extract the installer archive:

cd /root/ticrypt-setup

curl -LO https://storage.googleapis.com/ticrypt/install/ticrypt-setup-0.1.9.tgz
tar -xzvf ticrypt-setup-0.1.9.tgz
rm -f ticrypt-setup-0.1.9.tgz

Verify the extracted folder name (important)

The archive normally extracts into a directory named after the archive (for example ticrypt-setup-0.1.9/). To confirm the name:

ls -1

For the rest of this guide, we’ll refer to the extracted directory as:

  • ticrypt-setup-0.1.9/

If your extracted folder name differs, substitute accordingly.


Step 3 — Create a convenience symlink

Create a convenience link so you can run the installer without typing versioned paths:

ln -sf ticrypt-setup-0.1.9/scripts/install.sh ./install.sh
chmod +x ./install.sh

When you upgrade the installer later, you only need to update this symlink to point at the new version.
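
For example, if a hypothetical 0.2.0 release were extracted alongside the current one, re-pointing the link is a single command:

ln -sf ticrypt-setup-0.2.0/scripts/install.sh ./install.sh   # hypothetical future version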


Step 4 — Initialize the installer (creates config stubs)

Run:

./install.sh --init

This initialization step creates the configuration files the installer uses. Your installation requires two primary files:

  • inventory.ini — host list and per-host settings (where Ansible connects and host networking details)
  • ticrypt.yml — tiCrypt configuration variables used to generate service configs and install components

Correction
Some older docs incorrectly reference ticrypt.conf. The installer configuration file is ticrypt.yml.
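
Assuming --init writes the stubs into the current working directory, you can confirm both files exist before editing:

ls -1 inventory.ini ticrypt.yml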


Step 5 — Edit inventory.ini

inventory.ini is an Ansible inventory in INI format. It defines which machines are in your deployment.

Inventory structure and rules

  • [backend] must exist and, in a typical deployment, contains exactly one backend host
  • [vm-hosts] must exist and contain at least one VM host
  • [slurm-hosts] is optional and only required if you are installing Slurm support

Required host variables

Each host must set:

  • ansible_host — IP/hostname Ansible should connect to
  • bridge_secure_ip — IP for the secure Open vSwitch bridge interface on that host

Example inventory.ini (with clarified comments)

# ============================================
# tiCrypt Ansible Inventory
# ============================================
# For each host you may set:
# - ansible_host: IP/hostname used for SSH connectivity by Ansible
# - bridge_secure_ip: IP assigned to the secure Open vSwitch bridge on that host
#
# Notes:
# - All hosts must be reachable from the control machine over SSH.
# - If you run the installer on the backend host itself, you STILL must define the backend host here.
# - bridge_secure_ip MUST fall within the secure network CIDR configured in ticrypt.yml (network.secure.base/no).
# ============================================

[backend]
# Backend host (REST API, Auth, Storage services, etc.)
ticrypt-backend ansible_host=203.0.113.10 bridge_secure_ip=192.168.128.1

[vm-hosts]
# VM worker hosts (libvirt, VM lifecycle management)
ticrypt-vm-1 ansible_host=203.0.113.11 bridge_secure_ip=192.168.128.2
ticrypt-vm-2 ansible_host=203.0.113.12 bridge_secure_ip=192.168.128.3

[slurm-hosts]
# Optional: Slurm worker hosts (only if Slurm is enabled/used)
# ticrypt-slurm-1 ansible_host=203.0.113.13 bridge_secure_ip=192.168.128.4
# ticrypt-slurm-2 ansible_host=203.0.113.14 bridge_secure_ip=192.168.128.5
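
Once the inventory is filled in, you can sanity-check connectivity with Ansible's standard ping module before proceeding (add -u or --become flags if your environment requires them):

ansible -i inventory.ini all -m ping

Every host should report SUCCESS; failures here usually indicate SSH key or DNS problems rather than tiCrypt issues.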

Step 6 — Edit ticrypt.yml

ticrypt.yml is the main configuration file used by the installer to:

  • generate tiCrypt service configuration files
  • configure networking (bridges, NAT, DHCP ranges)
  • set permissions and filesystem paths
  • enable/disable features (MFA, mailboxes, escrow, etc.)

Passwords and tokens (generate securely)

Generate secure values:

# Tokens (long, random)
openssl rand -base64 32

# Passwords (still random, slightly shorter is fine)
openssl rand -base64 16

You should treat these values as secrets:

  • auth.selfRegistration.token
  • auth.mfa.tokenSalt
  • mongodb.password
  • global.sqlPassword

The example below covers the minimum that most deployments need. It is intentionally verbose, with comments explaining what each variable does.

# ============================================
# tiCrypt installer configuration (ticrypt.yml)
# ============================================
# This file is consumed by ticrypt-setup and Ansible.
# It is used to generate service configs and configure networking.
#
# Tip:
# - Keep this file under change control.
# - If you must store secrets here, consider Ansible Vault or a protected secrets workflow.
# ============================================

# -------------------------
# Network configuration
# -------------------------
network:
  # If you want ticrypt-setup to create VLAN NICs on top of a trunk/bond, define baseNIC and vlan IDs.
  # If you manage networking outside of ticrypt-setup, you can leave baseNIC/vlan commented out and
  # just ensure the named NICs exist.
  # baseNIC: "bond0"

  # Secure network (for VMs and secure inter-host traffic)
  secure:
    bridge: "br-secure"                          # Open vSwitch bridge name for the secure network
    gateway: "192.168.128.1"                     # Gateway IP on the secure network (often backend bridge_secure_ip)
    base: "192.168.128.0"                        # Network base address
    no: 17                                       # CIDR suffix (e.g., /17). Must be large enough for all secure nodes/VMs.
    dhcpRange: "192.168.129.1,192.168.255.254"   # DHCP range inside the secure CIDR
    nic: "enp4s0f1"                              # Physical NIC for secure network; override per-host in inventory if needed
    # vlan: 1081                                 # Optional VLAN tag if you build VLAN on top of baseNIC

  # Service network (NATed) for service VMs and internal service traffic
  service:
    bridge: "br-service"
    gateway: "192.168.122.1"
    base: "192.168.122.0"
    no: 24
    dhcpRange: "192.168.122.2,192.168.122.254"
    nic: "enp4s0f2"
    # vlan: 1082

  # Data-in network (NATed) typically used for inbound/SFTP/data acquisition workflows
  datain:
    bridge: "br-datain"
    gateway: "192.168.123.1"
    base: "192.168.123.0"
    no: 24
    dhcpRange: "192.168.123.2,192.168.123.254"
    nic: "enp4s0f3"
    # vlan: 1083

# -------------------------
# Global deployment settings
# -------------------------
global:
  rpmToken: "REPLACE_WITH_TOKEN_FROM_TERA_INSIGHTS"   # Token used to access TI RPM repositories
  backendDomain: "ticrypt.example.edu"                # Public DNS name of backend (must match TLS cert)
  poolsDirectory: "/storage/libvirt/pools"            # Libvirt storage pools location
  storagePath: "/mnt/storage/ticrypt/storage"         # tiCrypt Vault storage path (must exist and be writable)
  ssl_cert: "ticrypt.crt"                             # TLS cert filename (relative to current working directory)
  ssl_key: "ticrypt.key"                              # TLS private key filename (relative to current working directory)
  sqlPassword: "REPLACE_WITH_STRONG_PASSWORD"         # Password for SQL 'ticrypt' user

  # System user/group IDs used on all nodes (keep consistent across hosts)
  ticryptUser:
    uid: 977
    gid: 977
    home: "/var/lib/ticrypt"

  libvirtGroup:
    gid: 978

  # Slurm is extremely sensitive to controller hostname consistency.
  # Use the real hostname of your Slurm controller (often the backend host).
  slurm_server_name: "ticrypt-backend"

# -------------------------
# Feature flags
# -------------------------
features:
  # If your MongoDB is configured with auth enabled, set this true.
  mongoAuthentication: false

  auth:
    keyEscrow: true          # Enable key escrow feature (requires sitekey configuration in your deployment)
    selfRegistration: true   # Allow users to self-register (accounts start disabled until approved)
    mailboxes: true          # Enable mailbox/inbox features
    mfa: false               # Enable MFA
    splitCredentials: false  # Requires MFA; strengthens security by separating encryption material across factors

  logger:
    # pathTranslation supports deployments where backend and worker nodes mount storage at different paths.
    # Example: backend uses /mnt/storage but workers see the same mount at /storage.
    pathTranslation: false

# -------------------------
# Authentication service settings
# -------------------------
auth:
  selfRegistration:
    token: "REPLACE_ME"   # Provide this token to Tera Insights so it can be embedded in deployment files
    reason: "Account has not been approved yet by an admin"

  mfa:
    name: "fake-shibboleth"   # Example provider name (replace with real provider if MFA enabled)
    tokenSalt: "REPLACE_ME"   # Random salt used for generating MFA tokens (keep secret)

# -------------------------
# MongoDB settings (example fields)
# -------------------------
mongodb:
  # If mongoAuthentication is true, set a secure password here and ensure MongoDB is configured accordingly.
  password: "REPLACE_WITH_STRONG_PASSWORD"

# -------------------------
# VM / libvirt settings
# -------------------------
vm:
  # Licensing server type. Keep as documented by your deployment guidance.
  licensingServer: "firewalld"

  # Default VM realm name (must match the realms section in your full config if applicable)
  defaultRealm: "primary"

  # Optional path translations if features.logger.pathTranslation is enabled.
  # Maps a source path as seen by tiCrypt services to the destination path as mounted on worker nodes.
  # Example: backend paths under /mnt/storage correspond to /storage on VM hosts.
  pathTranslations:
    - src: "/storage"
      dst: "/mnt/storage"
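
Before moving to the checklist, you can catch YAML syntax mistakes early. The one-liner below assumes python3 with PyYAML is available on the control machine; any YAML linter works equally well:

python3 -c 'import yaml; yaml.safe_load(open("ticrypt.yml")); print("YAML OK")'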

Configuration validation checklist (before install)

  1. Inventory

    • Backend host exists in [backend]
    • At least one VM host exists in [vm-hosts]
    • Every host has ansible_host and bridge_secure_ip
  2. Networking

    • network.secure.base/no defines a CIDR that contains every bridge_secure_ip (quick checks for this and the TLS items appear after this list)
    • DHCP ranges are inside their respective subnets
    • NIC names match your OS (use ip link to confirm)
  3. TLS

    • global.backendDomain matches the TLS certificate CN/SAN
    • ssl_cert and ssl_key files exist in /root/ticrypt-setup (or your working directory) when you run the installer
  4. Storage

    • global.storagePath exists and is available on the backend
    • If you use shared storage, it is mounted consistently where expected
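
The one-liners below cover the networking and TLS items above. They are sketches using this guide's example values; substitute your own CIDR, IPs, NIC names, and certificate file:

# Check that every bridge_secure_ip falls inside network.secure.base/no
python3 -c 'import ipaddress; net = ipaddress.ip_network("192.168.128.0/17"); ips = ["192.168.128.1", "192.168.128.2", "192.168.128.3"]; print([ip for ip in ips if ipaddress.ip_address(ip) not in net] or "all in range")'

# Confirm a NIC name exists on a host
ip link show enp4s0f1

# Inspect the certificate subject and SANs (requires OpenSSL 1.1.1+ for -ext)
openssl x509 -in ticrypt.crt -noout -subject -ext subjectAltName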

Step 7 — Run the installer

Full install:

./install.sh --all

Who can run the installer?

You can run ./install.sh as any user as long as that user has:

  1. Passwordless sudo privileges on all target hosts
  2. SSH access to all target hosts using keys (no interactive password prompts)

Running as root is the most common approach in server deployments.
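
If you choose a dedicated non-root user instead, each target host needs a matching sudoers rule. A minimal example entry (the username is a placeholder; install it with visudo and narrow the scope to fit your security policy):

deploy ALL=(ALL) NOPASSWD: ALL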


Troubleshooting and re-runs

List available steps

./install.sh --list

Re-run from the failing step

If a run fails, fix the configuration and restart at the failing step number:

./install.sh --start-at FAILED_STEP_NUMBER

Run a single step

./install.sh --only STEP_NUMBER

Show installer options

Run the script with no arguments to print the available options:

./install.sh

Quick reference: common doc errors corrected here

  • ✅ Updated installer archive reference: ticrypt-setup-0.1.9.tgz
  • ✅ Correct config file name: ticrypt.yml (not ticrypt.conf)
  • ✅ Removed broken “see section …” placeholders
  • ✅ Fixed incomplete sentence (“User to run ./install.sh as …”)
  • ✅ Clarified bridge_secure_ip must be within network.secure.* CIDR

Appendix — Optional hardening recommendations

  • Store secrets outside ticrypt.yml using Ansible Vault or a secrets manager (a minimal Vault sketch follows this list)
  • Restrict SSH access to control machine and limit sudo scope where feasible
  • Ensure firewall rules and TLS are configured per your institution’s security baseline
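
For the first item, standard Ansible Vault commands can encrypt the configuration at rest. Filenames follow this guide's layout; verify that your install.sh invocation can consume vault-encrypted input before relying on this:

# Encrypt ticrypt.yml in place (you will be prompted for a vault password)
ansible-vault encrypt ticrypt.yml

# Review or edit without leaving plaintext on disk
ansible-vault view ticrypt.yml
ansible-vault edit ticrypt.yml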