tiCrypt Backend Installation Guide

ticrypt-setup uses Ansible to automate the installation and configuration of the tiCrypt backend and worker nodes, replacing the tedious and error-prone manual installation process. It is the recommended installation method for all tiCrypt backend installations.

Prerequisites

  • Install Ansible on the control machine.
  • Set up root SSH access to all target servers.
  • Configure SSH keys between the control machine and all hosts, and test a manual connection to each server first (see the sketch below).
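
For example, a minimal key setup from the control machine could look like the following (the host addresses are taken from the example inventory later in this guide; substitute your own):

# Generate a key pair on the control machine (skip if one already exists)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""
# Copy the public key to every target host, including the backend itself
ssh-copy-id root@10.22.122.2
ssh-copy-id root@10.22.122.3
# Verify that the connection works without a password prompt
ssh root@10.22.122.2 true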

Setup configuration files and run installer

  1. Create a directory that will contain the configuration files used by the Ansible scripts. We recommend /root/ticrypt-setup; run the remaining commands from this directory.
  2. Download and extract the latest version of the tiCrypt-setup archive.
curl -O https://storage.googleapis.com/ticrypt/install/ticrypt-setup-0.1.0.tgz
tar -xzvf ticrypt-setup-0.1.0.tgz
# Creates directory ticrypt-setup-0.1.0
rm -rf ticrypt-setup-0.1.0.tgz
  3. Create a symbolic link to the installation script, for convenience.
ln -sf ticrypt-setup-0.1.0/scripts/install.sh .

If you install a new version of the installer, simply recreate the link to the new version of the install.sh executable.
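
For example, after downloading and extracting a hypothetical 0.2.0 release:

ln -sf ticrypt-setup-0.2.0/scripts/install.sh .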

  4. Initialize the installation process with
./install.sh --init

This creates the files inventory.ini and ticrypt.yml. These are the configuration files the installer needs to configure your system. You will need to edit them extensively to fit your tiCrypt installation.

  5. Edit inventory.ini and describe your hardware (see Configure Inventory File below).
  6. Edit ticrypt.yml and set your tiCrypt backend configuration information (see Configure tiCrypt Variables below).
  7. Run the installation script (for a more extensive set of options, see Dealing with installation errors below).
./install.sh --all
User to run ./install.sh as

The installation script uses Ansible to perform the installation. You can run it as any user, as long as that user has:

  1. Passwordless sudo privileges on all target hosts (see the example below).
  2. SSH access to all target hosts using SSH keys, without a password prompt.
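
For example, a non-root installation user (the user name ansible below is just a placeholder) can be granted passwordless sudo on each target host with a drop-in sudoers file:

# /etc/sudoers.d/ansible  (create with: visudo -f /etc/sudoers.d/ansible)
ansible ALL=(ALL) NOPASSWD: ALL
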
Dealing with installation errors

Inevitably, errors in the configuration files will result in installation failures. Keep track of the exact step that failed, fix the error in the configuration files, and re-run the installation script with the --start-at option:

./install.sh --start-at FAILED_STEP_NUMBER

where FAILED_STEP_NUMBER is the number of the step that failed (as shown in the installation output).

You can list the available steps with

./install.sh --list

You can execute individual steps with

./install.sh --only STEP_NUMBER

To see all the options supported by the installation script, run

./install.sh

Configure Inventory File

Here is an example of an inventory.ini file:

# Options for each host
# - ansible_host: IP or hostname to connect to via SSH
# - bridge_secure_ip: IP address of the secure bridge interface on the host

# This must always exist
[backend]
ticrypt-backend ansible_host=127.0.0.1 bridge_secure_ip=192.168.128.1

# Define VM hosts here. At least one must exist.
[vm-hosts]
ticrypt-vm-1 ansible_host=10.22.122.2 bridge_secure_ip=192.168.128.2
ticrypt-vm-2 ansible_host=10.22.122.3 bridge_secure_ip=192.168.128.3

# Define Slurm hosts here. Can be missing if Slurm is not used.
[slurm-hosts]
ticrypt-slurm-1 ansible_host=10.22.122.4 bridge_secure_ip=192.168.128.4
ticrypt-slurm-2 ansible_host=10.22.122.5 bridge_secure_ip=192.168.128.5

Setting ansible_host variable

Each host must have the ansible_host variable set to the IP address or hostname that Ansible will use to connect via SSH. Each of the hosts must:

  1. Be reachable via SSH from the control machine.
  2. Allow root (or a regular user with sudo) login via SSH without a password (e.g. using SSH keys).

This must include the backend host itself, even if you run the installation script on the backend host.
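
A quick way to verify both requirements before running the installer is Ansible's ping module, run from the directory containing inventory.ini (the --become flag additionally checks passwordless sudo):

ansible all -i inventory.ini -m ping
ansible all -i inventory.ini -m ping --become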

Setting bridge_secure_ip variable

Each host must have the bridge_secure_ip variable set to the IP address that will be assigned to the secure Open vSwitch bridge interface on that host. This IP address must fall within the secure network range defined by network.secure.base and network.secure.no in ticrypt.yml.
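
For example, with base 192.168.128.0 and no 17, valid addresses range from 192.168.128.1 to 192.168.255.254. A quick sanity check from the control machine (assuming python3 is available):

python3 -c 'import ipaddress; print(ipaddress.ip_address("192.168.128.2") in ipaddress.ip_network("192.168.128.0/17"))'
# Prints True if the address is inside the secure range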

Configure tiCrypt Variables

Most of the tiCrypt configuration is done in the ticrypt.yml file. This information is used to:

  1. Generate tiCrypt configuration files (replacing the need to manually edit configuration files).
  2. Set up networking.
  3. Configure the backend server and VM/Slurm hosts.
  4. Fix permissions on directories and files.
Setting token and password variables

The variables auth.selfRegistration.token and auth.mfa.tokenSalt must be set to secure random strings. You can generate each of them using the following command:

openssl rand -base64 32

The mongodb.password and global.sqlPassword variables must also be set to strong passwords. You can generate them using the command:

openssl rand -base64 16

These values must be kept secret.
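
As a convenience, all four secrets can be generated at once and then pasted into ticrypt.yml (this snippet is only an illustration, not part of the installer):

echo "auth.selfRegistration.token: $(openssl rand -base64 32)"
echo "auth.mfa.tokenSalt:          $(openssl rand -base64 32)"
echo "mongodb.password:            $(openssl rand -base64 16)"
echo "global.sqlPassword:          $(openssl rand -base64 16)"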

Here is an example of the minimum configuration required:

## tiCrypt configuration file

## Network configuration
network: {
# baseNIC: "bond0", # Define this and vlans if you want the network nics to be created by ticrypt

# Used for all secure VMs (Slurm included). Must be a private large network with at least /17 addresses.
secure: {
bridge: "br-secure", # Name of the Open vSwitch bridge for secure network
gateway: "192.168.128.1", # Gateway for secure network
base: "192.168.128.0", # Base network address for secure network
no: 17, # Size of the secure network (CIDR suffix)
dhcpRange: "192.168.129.1,192.168.255.254", # DHCP range for secure network
nic: "enp4s0f1", # Physical NIC to use for secure network. Override in inventory if needed.
# vlan: 1081, # Define this if you want a vlan on top of baseNIC for secure network
},
# Used for service network (NATed). Must be a private network. /24 is usually sufficient.
service: {
bridge: "br-service", # Name of the Open vSwitch bridge for service network
gateway: "192.168.122.1", # Gateway for service network
base: "192.168.122.0", # Base network address for service network
no: 24, # Size of the service network (CIDR suffix)
dhcpRange: "192.168.122.2,192.168.122.254", # DHCP range for service network
nic: "enp4s0f2", # Physical NIC to use for service network. Override in inventory if needed.
# vlan: 1082, # Define this if you want a vlan on top of baseNIC for service network
},
# Used for data-in network (NATed). Must be a private network. /24 is usually sufficient.
datain: {
bridge: "br-datain", # Name of the Open vSwitch bridge for data-in network
gateway: "192.168.123.1", # Gateway for data-in network
base: "192.168.123.0", # Base network address for data-in network
no: 24, # Size of the data-in network (CIDR suffix)
dhcpRange: "192.168.123.2,192.168.123.254", # DHCP range for data-in network
nic: "enp4s0f3", # Physical NIC to use for data-in network. Override in inventory if needed.
# vlan: 1083, # Define this if you want a vlan on top of baseNIC for data-in network
},
}


## General setup

global: {
rpmToken: "ask Tera Insights for a token", # RPM token for accessing Tera Insights repositories
backendDomain: "ticrypt.mydomain.edu", # URL of the backend server
poolsDirectory: "/storage/libvirt/pools", # Directory where libvirt storage pools are created
storagePath: "/mnt/storage/ticrypt/storage", # Path to the Vault storage. Must be accessible by the ticrypt user.
ssl_cert: "ticrypt-test.crt", # SSL certificate file name from current directory
ssl_key: "ticrypt-test.key", # SSL key file name from current directory
sqlPassword: "REPLACE_WITH_STRONG_PASSWORD", # Password for the SQL user 'ticrypt'. Replace with a strong password.
ticryptUser: {
uid: 977,
gid: 977,
home: "/var/lib/ticrypt",
},
libvirtGroup: {
gid: 978,
},
# This must be the hostname of the Slurm controller. Slurm is very sensitive to this value
slurm_server_name: "backend", # Hostname of the Slurm controller, usually the backend
}

## Overall tiCrypt features. Use this section to enable/disable features globally.
## Most of these features should be enabled in a production system.
features: {
mongoAuthentication: false, # Do we need to authenticate to MongoDB?
auth: {
keyEscrow: true, # Enable key escrow feature? This requires sitekey to be set.
selfRegistration: true, # Allow self registration of users? You almost always want this enabled since accounts start disabled.
mailboxes: true, # Enable mailboxes features? This should be set to true unless you have a specific reason not to.
mfa: false, # Enable multi-factor authentication (MFA) for enhanced security?
splitCredentials: false, # This feature requires mfa to be enabled. It significantly strengthens security by storing the salt and IV used to encrypt the user's key on the backend server and protecting them using MFA.
},
logger: {
externalLogger: false, # Are the logs pushed to an external logging service?
},
maintenance: {
accountLocker: true, # Enable automatic account locking for inactive accounts?
garbageCollectors: {
files: true, # Enable garbage collection for unused files?
drives: true, # Enable garbage collection for deleted VM drives?
escrowKeys: true, # Enable garbage collection for old escrowed keys?
directories: true, # Enable garbage collection for unused directories?
},
},
rest: {
jsonValidation: true, # Validate JSON input for REST API requests?
responseValidation: true, # Validate JSON output for REST API responses?
fileStats: true, # Print file statistics after each request?
stackTrace: true, # Print stack trace on error?
},
vm: {
licensingServer: true, # Enable licensing server feature? This allows secure VMs to access the internet through a controlled licensing server.
schedulingLog: false, # Enable logging of VM scheduling events? Disable unless you have logging problems you want to debug.
pathTranslation: false, # Enable path translation feature? This allows mapping of Libvirt paths differently on the backend and worker nodes. If the Libvirt storage is mapped to the same path on all nodes, you can disable this.
},
}

## Mongo setup (all services)
mongodb: {
# user and password used only if mongoAuthentication is true
user: "ticrypt",
password: "REPLACE_WITH_STRONG_PASSWORD", # Password for the MongoDB user 'ticrypt'. Replace with a strong password.
}

## ticrypt-auth setup
auth: {
selfRegistration: {
token: "REPLACE_ME", # Token required for self registration. Must be provided to Tera Insights to include in deployment file.
reason: "Account has not been approved yet by an admin" # Reason shown to users when their account is not yet approved.
},
mfa: {
name: "fake-shibboleth", # Name of the MFA provider
tokenSalt: "REPLACE_ME", # Salt used for generating MFA tokens
url: "https://gv.terainsights.net/auth.php", # Where is the MFA hosted? Must be full path.
certTTL: "15 min", # How long the MFA certificate is valid. Specifically, how much time the user has to hand it over to the backend after obtaining it from the MFA server.
tokenTTL: "2 days", # How long the MFA token is valid
publicKey: "/etc/pki/ticrypt/mfa/fake-shibboleth.pem" # Path to the public key used to verify MFA tokens
},
XSS: {
killSession: true, # Kill user session on XSS detection?
lockUser: true, # Lock user account on XSS detection?
lockReason: "Account locked due to suspected malicious activity" # Reason shown to users when their account is locked due to XSS detection.
}
}

## ticrypt-logger
logger: {
externalLogger: {
host: "localhost", # Hostname of the external logging server
port: 25000, # Port of the external logging server
sendTimeout: "30s", # Timeout for sending logs
retryTimeout: "5s", # Timeout between retries
},
}

## ticrypt-maintenance
# frequency values: "10m", "1h", "4h", "12h", "1d". How often to run the task.
maintenance: {
accountLocker: {
frequency: "4h", # How often to check for inactive accounts
timeBeforeAccountOld: "365d", # When to consider an account old/inactive
},
fileGarbageCollector: {
retentionPeriod: "90d", # How old files must be to be considered for deletion if unused
amountGrouped: 100, # Batch size for deletions. Used for improving performance.
frequency: "1d", # How often to run the garbage collector
},
driveGarbageCollector: {
frequency: "1d", # How often to run the garbage collector
log: true, # Log deleted drives?
dryRun: false, # Only set to true if you want to test the garbage collector
trashDirectory: "/mnt/storage/libvirt/trash/drives", # Where deleted drives are moved to before permanent deletion
sourceDirectory: "/mnt/storage/libvirt/pools/ticrypt-vm-drives", # Where the active drives are stored
},
escrowKeyGarbageCollector: {
frequency: "1d", # How often to run the garbage collector
},
directoryGarbageCollector: {
frequency: "1d", # How often to run the garbage collector
},
}

## ticrypt-proxy
proxy: {
ports: "6000-6100", # Ports to use for proxying VM connections
singleUse: true, # Should proxy sessions be single-use?
ttl: "5m", # Time-to-live for proxy sessions
}

## ticrypt-rest
rest: {
# Empty for now
}

## ticrypt-vm
vm: {
# Type of licensing server: "firewalld" is the only value supported currently
licensingServer: "firewalld",
defaultRealm: "primary", # Which of the realms described below is the default one?
# Used for path translation feature. Maps paths on worker nodes to paths on the backend.
# Only used if pathTranslation feature is enabled.
pathTranslations: [{
src: "/storage",
dst: "/mnt/storage"
}],
realms: [
{
id: "primary", # Unique identifier for the realm
driver: "libvirt", # Driver to use for this realm. "libvirt" or "nutanix"
enabled: true, # Is this realm enabled?
name: "Libvirt", # Human-readable name for the realm
registrationTimeout: "5 minutes", # How long to wait for a VM to register before killing it
registrationTimeoutDebug: "60 minutes", # Same but for VMs in debug mode
drives: {
minSize: "8 MiB",
maxSize: "5 TiB",
# Documentation on what these options mean can be found at:
# https://doc.opensuse.org/documentation/leap/virtualization/html/book-virtualization/cha-cachemodes.html
cache: "writeback", # Options: "default", "writethrough", "writeback", "none"
},
# This provides support for alternative VM controllers that provide
# experimental or beta features.
vmControllers: [
{
id: "beta",
name: "beta",
description: "New but somewhat experimental features"
},
{
id: "experimental",
name: "Experimental",
description: "Possibly unstable version of the controller"
}
]
},
],
# Hardware profiles define sets of hardware resources available on VM hosts.
# Each VM host must be assigned a hardware profile that matches its resources.
hardwareProfiles: [
{
id: "timonsterHP", # ID. Must be unique
name: "tiMonster", # Display name of the hardware profile
description: "Hardware profile for timonster",
cores: 240, # Reserve at least 2 cores for the host OS
memory: 960, # in GiB. Reserve at least 8 GiB for the host OS
# This describes available devices on the host that can be assigned to VMs.
# One entry per device. The "addr" is the PCI address of the device as reported by the OS
# Run lspci to get the PCI addresses of devices (see the example after this configuration block).
devices: [
{
type: "gpu-nvidia", # This is just an ID. Use same ID for all devices with the same capabilities
addr: "04:00.0",
}
],
},
{
id: "tiwebHP",
name: "tiWeb",
description: "Hardware profile for tiweb",
cores: 60,
memory: 240, # in GiB
devices: [],
}
],
}
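
To find the PCI addresses for the devices list in a hardware profile, run lspci on the VM host; for example, to list NVIDIA GPUs (the grep pattern is only an illustration, adjust it for your devices):

# The address at the start of each line (e.g. 04:00.0 in the example profile above)
# is the value to use for "addr" in the hardware profile.
lspci | grep -i nvidia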

Troubleshooting

  • SSH errors: Verify root access and known_hosts.
  • NIC errors: Confirm the secure network NIC (network.secure.nic) is correct (use ip link to check; see the quick checks below).
  • Permission errors: Ensure proper ownership on target files.
  • Variable undefined: Check ticrypt.yml and inventory.ini.
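
For example, quick checks from the control machine (host addresses are from the example inventory above):

ssh root@10.22.122.2 true          # should return without a password prompt
ssh root@10.22.122.2 ip link show  # list the NICs on the VM host and verify the secure NIC name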