
Configuring ticrypt Virtual Machines

ticrypt-vm config

This service is one of the most complicated parts of tiCrypt. The configuration options are extensive and require careful attention.

Parameters for the section ticrypt.vm:

mongodb                          Section    See ???
cost-functions                   Section    See cost-fct
hardware-profiles                Section    See hardware-profile
realms                           Section    See tc-realms
licensing-server-driver          String     none, firewalld
reload-freq                      Duration   How often to re-build the rules
firewalld                        Section    Setup for firewalld driver
akka.remote.netty.tcp.hostname              See ???
akka.remote.netty.tcp.port                  See ???

Parameters for ticrypt.vm.firewalld:

file-access   String   File where rules are written.

Make sure that the file-access option in ticrypt.vm.firewalld matches the mechanism you use to transform rules into firewall access. If using the ticrypt-allowedlist service, the file must be /etc/ticrypt/allowedlist.txt.
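As a sketch, a minimal ticrypt.vm networking setup using the firewalld driver might look like the following; the reload-freq value is illustrative:

```
ticrypt.vm {
  licensing-server-driver = "firewalld"

  # Re-build the firewall rules every 30 seconds (illustrative value)
  reload-freq = 30s

  firewalld {
    # Must match the file read by the ticrypt-allowedlist service
    file-access = "/etc/ticrypt/allowedlist.txt"
  }
}
```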

tiCrypt realms

A realm is a fully independent VM execution environment that has its own:

  • VM Images: the content of the boot drive used to start the VMs

  • Drives: the user drives that store the encrypted data

Two types of realms are supported as of version 3.1.2: Libvirt and Nutanix.

The realms section specifies realms and includes files for their config. Example:

realms {
  libvirt1 = { include "realms/libvirt1.conf" }
  libvirt2 = { include "realms/libvirt2.conf" }
  nutanix = { include "realms/nutanix.conf" }
}

You can specify as many Libvirt and as many Nutanix realms as you want.

Configuring the two types of realms requires careful, realm-specific explanation. The next two sections deal with that.

Since the file is included from the realms section, both the common and the realm-specific parameters must appear at the top level of the file, not nested inside a section.

Nutanix Realm Configuration

The Nutanix realm uses a Nutanix Cluster to execute the VMs and to host the VM Drives and Images.

The parameters required by the Nutanix driver depend on an already set-up environment in Nutanix PRISM. In particular, you need to:

  1. Set up an account used by tiCrypt. We suggest the user name ticrypt and a strong password. The account must have enough privileges to create/remove VMs and drives.

  2. Create a network and obtain the network ID.

  3. Create a storage container for drives and obtain the ID.

disabled               Boolean    Is the realm disabled?
driver                 nutanix    This is a Nutanix realm
name                   String     Displayable name of realm
network-id             String     The network ID in Nutanix
storage-id             String     The storage container ID in Nutanix
hostname               String     The hostname of the Nutanix cluster
port                   Port       The port of the PRISM API, default 9440
username               String     Defaults to ticrypt
password               String     The password of the Nutanix account
task-poll-freq         Duration   How often to get task updates
registration-timeout   Duration   VM startup timeout
drives.min-size        Size       Minimum drive size
drives.max-size        Size       Maximum drive size

registration-timeout is somewhat critical. If it is too short, VMs do not have enough time to register and are killed.

To debug VMs that do not correctly register, increase registration-timeout and attach a console.

You almost always want to increase drives.max-size. The default value is 32 GiB.

VM Images can only be imported via the Nutanix PRISM interface.
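Putting the parameters together, a Nutanix realm file (e.g. realms/nutanix.conf, included from the realms section) might look like the sketch below. All values, IDs, and durations are illustrative placeholders, not recommended settings:

```
# realms/nutanix.conf -- all values illustrative
disabled = false
driver = "nutanix"
name = "Nutanix Cluster"
hostname = "prism.example.org"
port = 9440
username = "ticrypt"
password = "<strong-password>"
network-id = "<network-uuid>"
task-poll-freq = 5s
registration-timeout = 10m
drives.min-size = 1 GiB
drives.max-size = 256 GiB
```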

VM Naming Configuration

In addition, Nutanix VM names are automatically prefixed with a tag marking them as part of tiCrypt. By default, VMs are named "tc-<name>", but the automatic naming can be changed via the nutanix realm’s config.

In the nutanix realm config:

vm-name: "tc-<name>"
vm-desc: "tiCrypt managed VM | Owner: <owner> (<ownerID>) | Brick: <brick> (<brickID>)"

The tags in angle brackets will be filled in with that VM’s information; the following tags can be used:

<name>      VM name                Nutanix VM 1
<team>      Team name              Tera Insights
<owner>     VM owner’s full name   TiCrypt User
<ownerID>   VM owner’s ID          289f4326-a3a8-49e5-8327-2bdba30d3aba
<project>   Project name           TiCrypt Testing
<brick>     Brick name             Nutanix-CentOS
<brickID>   Brick ID               bdd65456-0f1e-4d29-aa65-fdcc71926431
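For instance, with the default vm-name and vm-desc patterns above, a VM whose tag values match the examples in the table would appear in PRISM as:

```
vm-name: tc-Nutanix VM 1
vm-desc: tiCrypt managed VM | Owner: TiCrypt User (289f4326-a3a8-49e5-8327-2bdba30d3aba) | Brick: Nutanix-CentOS (bdd65456-0f1e-4d29-aa65-fdcc71926431)
```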

Drive Naming Configuration

The same mechanism is available for drives on Nutanix.

In the nutanix realm config:

drive-name: "tc-<name>"
drive-desc: "tiCrypt managed drive | Owner: <owner> (<ownerID>) | Format: <format>"

The tags in angle brackets will be filled in with that drive’s information; the following tags can be used:

<name>      Drive name                Nutanix Drive 1
<team>      Team name                 Tera Insights
<owner>     Drive owner’s full name   TiCrypt User
<ownerID>   Drive owner’s ID          289f4326-a3a8-49e5-8327-2bdba30d3aba
<project>   Project name              TiCrypt Testing
<format>    Drive format              ext4

Libvirt Realm Configuration

The Libvirt realm uses KVM/QEMU to control the VMs. For this to work, we need:

  1. The ticrypt-vm service must have root access to all the VM Host servers. This is accomplished by injecting the RSA-2048 public key of ticrypt-vm into the /root/.ssh/authorized_keys file on each of the VM Hosts.

  2. A distributed file system shared by all the VM Hosts must be set up. The storage must be large enough to accommodate the VM Images, VM Drives, and Libvirt temporary files.

Two independent tasks need to be accomplished for the Libvirt realms to function correctly:

  • Realm setup

  • Hardware description

The realm setup is specified in the file included from the realms section of ticrypt-vm. The hardware description goes into the ticrypt-vm config file itself.

Configuring the Libvirt Realm

All the parameters go in the included file specified in the realms section.

These parameters should not be part of sections inside the included file.

disabled                 Boolean    Is the realm disabled?
driver                   libvirt    This is a Libvirt realm
name                     String     Displayable name of realm
volumes-pool             String     Libvirt name for volumes pool
drives-pool              String     Libvirt name for drives pool
bricks-pool              String     Libvirt name for VM Images
log-scheduling           Boolean    Debug VM scheduler?
registration-timeout     Duration   VM startup timeout
poll-frequency           Duration   How often a VM is checked
network-filter           String     Default network filter
drives.lazy-allocation   on, off    Delay allocation?
drives.cache             String     Cache strategy
drives.min-size          Size       Minimum drive size
drives.max-size          Size       Maximum drive size
uploads                  Section    Reserved for future use
scaling                  Section    Reserved for future use

Some notes on the above are in order.

  1. volumes-pool, drives-pool and bricks-pool have default values ticrypt-vm, ticrypt-vm-drives and ticrypt-bricks. Unless you are hosting multiple tiCrypt instances, there is no reason to change the defaults.

  2. registration-timeout cannot be too short, since some VMs might not have time to boot and register within the specified time and will fail.

  3. network-filter is an obscure feature that needs to be specified only if problems arise.

  4. You almost always want drives.lazy-allocation=on. This way, you "pay" for the storage only when it is needed.

  5. drives.cache is best left at "default". In general, it is tricky to select. Possible values are:

    1. "default": System default caching strategy, may be either "writethrough" or "writeback" depending on the version of qemu-kvm

    2. "writethrough": Host page cache is enabled, but disk write cache is disabled, meaning all writes to the drive are completed only when the data has been committed to the underlying storage. Similar to adding O_DSYNC flag to writes.

    3. "writeback": Both the host page cache and disk write cache are enabled. Disk behaves similarly to a raid controller with RAM cache.

    4. "none": Bypasses host page cache entirely. Similar to adding O_DIRECT flag to reads and writes.
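Combining the parameters and notes above, a Libvirt realm file (e.g. realms/libvirt1.conf, included from the realms section) might look like this sketch; the pool names are the documented defaults, while all other values are illustrative:

```
# realms/libvirt1.conf -- values illustrative
disabled = false
driver = "libvirt"
name = "Libvirt Realm 1"
volumes-pool = "ticrypt-vm"
drives-pool = "ticrypt-vm-drives"
bricks-pool = "ticrypt-bricks"
log-scheduling = false
registration-timeout = 10m
poll-frequency = 10s
drives.lazy-allocation = on
drives.cache = "default"
drives.min-size = 1 GiB
drives.max-size = 256 GiB
```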

Hardware specification preliminaries

There are four main components in VM host hardware configuration:

Curves
Built-in parameterized functions that take the value of a single attribute (e.g. number of VMs, amount of memory used) and return a cost based on that attribute. Used in the construction of Cost Functions.

Cost Functions
Configurable rules that determine how much it costs to schedule resources on a specific node. Used to determine whether VMs can be scheduled to specific nodes, and which node is most optimal.

Hardware Profiles
Descriptions of a hardware configuration. Link together a cost function along with resource information. Assigned to hosts to allow for scheduling and device allocation.

Hosts
Individual machines used to run client VMs. Hosts are assigned a hardware profile to use for scheduling decisions.

Cost Functions and Hardware Profiles are managed in the ticrypt-vm configuration file, while Hosts are dynamically managed using the tiCrypt user interface.

The types of values used in the description of parameters are:

String        "an example"                   String value
Int           42                             Whole number value
Bytes         32, 64 MiB, 18 GiB             A number of bytes with an optional unit
Real          1.0, 1                         A real number with an optional decimal place
RealOrBytes   1.0, 32 MiB, 2 GiB             Either a real number or a number of bytes with unit
Curve         curve: invalid                 A curve definition
CurveMap      id1: ..., id2: ...             A mapping of IDs to curve definitions
Device        type: "gpu", addr: "02:03.4"   Object containing a device’s type and PCI address
DeviceList    [..., ...]                     A list of Device entries


Curves are configuration objects with a single required string parameter, curve. This refers to one of the following built-in curves, which take a single attribute and compute a cost from it. The attribute may be the number of VMs running, the amount of memory in use in bytes, or the number of devices of a specific type in use. Additional parameters may be specified depending on the curve type.

If a curve has multiple values listed for its types, any of those can be used as the value of the curve attribute.

The following parameters are common to all curve types:

scale   Real   1.0    The scale factor to apply to the curve result.
max     Real   None   If specified, any input past this value will return Infinity.

Common Curve Parameters
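To illustrate the common parameters, the sketch below scales a linear curve and caps its valid input; the specific numbers are illustrative:

```
# Cost 2 per unit of input (slope 1, scaled by 2);
# any input above 8 returns Infinity
{ curve: linear, slope: 1, base: 0, scale: 2.0, max: 8 }
```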

Constant curve

Types: constant

Always returns a fixed value.

value   Real   required   The constant value to return.

Constant Curve Parameters


{ curve: constant, value: 1.0 }


Invalid Curve

Types: invalid

Always returns positive infinity, preventing the VM from being scheduled. Mostly useful in conjunction with the Piecewise Curve.


{ curve: invalid }


Linear Curve

Types: linear

Computes a value from the input based on a line with a given slope and base.

slope   RealOrBytes   required   The slope of the line.
base    RealOrBytes   required   The value to return when the input is 0.

Linear Curve Parameters


{ curve: linear, slope: 1.0, base: 0.0 }
{ curve: linear, slope: 10.0, base: 5.0 }


Hard-Soft Curve

Types: hard-soft, soft-hard

Creates a mostly continuous piecewise linear curve with a soft and a hard cap. The cost ramps up slowly until the input reaches soft, at which point the cost ramps up much more quickly until the input reaches hard. After that, the cost is infinity.

soft      RealOrBytes   required   The input value at which softAmt should be returned.
hard      RealOrBytes   required   The input value at which hardAmt should be returned.
softAmt   Real          1.0        The cost returned when the input value reaches soft.
hardAmt   Real          10.0       The cost returned when the input value reaches hard.

Hard-Soft Curve Parameters

Hard-Soft Curve Examples


# Returns 0 when input is 0, 1 when input is 1, 10 when input is 2, infinity when input > 2
{ curve: "hard-soft", soft: 1.0, hard: 2.0 }

# Both of these are equivalent
{ curve: "hard-soft", soft: 32 GiB, hard: 48 GiB, softAmt: 10, hardAmt: 100 }
{ curve: "hard-soft", soft: 32 GiB, hard: 48 GiB, scale: 10.0 }


Unavailable Curve

Types: unavailable

Creates a piecewise curve that returns either 0 if the input is less than or equal to zero, or infinity if the input is greater than zero. This is useful for denoting resources that are unavailable.

This curve is the default for any devices that do not otherwise have curves associated with them.

Unavailable Curve Examples


{ curve: unavailable }


Piecewise Curve

Aliases: piecewise, low-high, high-low

Returns the result of one of two curves depending on the input value. Allows for more complex piecewise linear curves.

cutoff   RealOrBytes   required   The cutoff point for switching from low to high.
low      Curve         required   The curve to use when the input is less than or equal to cutoff.
high     Curve         required   The curve to use when the input is greater than cutoff.

Piecewise Curve Parameters

Piecewise Curve Examples


# Constant cost of 5 up to a hard cutoff when input is 6
{ curve: piecewise, cutoff: 6.0, low: { curve: constant, value: 5}, high: { curve: invalid } }

# Same as a hard-soft with soft = 1, hard = 2, softAmt = 1, hardAmt = 10
{
  curve: piecewise
  cutoff: 1
  low: {
    curve: linear
    slope: 1
    base: 0
  }
  high: {
    curve: piecewise
    cutoff: 2
    low: {
      curve: linear
      slope: 9
      base: -8
    }
    high: {
      curve: invalid
    }
  }
}

# Same as unavailable
{
  curve: piecewise
  cutoff: 0
  low: { curve: constant, value: 0 }
  high: { curve: invalid }
}


Cost Functions

Cost Functions are managed in the ticrypt.vm.cost-functions section of ticrypt-vm.conf, which is a mapping of cost function IDs to their definitions. Cost function definitions have the following structure. See the section ??? for more information about parameters with the type Curve.

name          String     Same as ID   A human-readable name for the cost function.
description   String     Empty        A description of the cost function.
vms           Curve      None         Curve used to assign cost based on number of VMs running.
vcpus         Curve      None         Curve used to assign cost based on number of virtual CPU cores used.
memory        Curve      None         Curve used to assign cost based on memory usage in bytes.
devices       CurveMap   None         Mapping of device ID to curve for cost of using devices of that type.
scale         Real       1.0          Scaling factor to multiply all costs generated by this function.
offset        Real       0.0          Static offset to add to all costs generated by this function. Not affected by the scale.

Cost Function Parameters

Any device types not listed in the devices section will act as if they were defined with the unavailable curve.

A curve should be set for at least one of vms, vcpus, or memory; otherwise, VMs that use no devices can be scheduled to hosts using this cost function without restriction, causing rapid resource exhaustion.


Basic Cost Function


ticrypt.vm.cost-functions {
  simple {
    name: "Simple"
    description: "A basic cost function"

    # Add 1 cost per VM scheduled to the host
    vms: { curve: "linear", slope: 1, base: 0 }

    # VCPUs cost 0.5 each until 8, then 3.5 each until 16
    vcpus: { curve: "hard-soft", soft: 8, hard: 16, softAmt: 4, hardAmt: 32 }

    # Add 10 cost at 16 GiB of memory, 100 at 32 GiB
    memory: { curve: "hard-soft", soft: 16 GiB, hard: 32 GiB, scale: 10 }
    # Alternative way to specify the same thing:
    #memory: { curve: "hard-soft", soft: 16 GiB, hard: 32 GiB, softAmt: 10, hardAmt: 100 }
  }
}


Cost Function with GPUs


ticrypt.vm.cost-functions {
  nvidia-gpu {
    name: "GPU Node"
    description: "Used for GPU nodes with the standard configuration"

    # Same as simple
    vms: { curve: "linear", slope: 1, base: 0 }
    vcpus: { curve: "hard-soft", soft: 8, hard: 16, softAmt: 4, hardAmt: 32 }
    memory: { curve: "hard-soft", soft: 16 GiB, hard: 32 GiB, scale: 10 }

    # NVidia GPUs cost 1 each, up to a max of 2 used
    devices.nvidia-gpu: { curve: linear, slope: 1, base: 0, max: 2 }

    # Extremely high offset so that VMs do not get scheduled here unless there is no
    # other choice
    offset: 100000
  }
}


Cost Functions Using Pools and Includes

# ticrypt-vm.conf

ticrypt.vm.cost-functions {
  pool1 {
    include "cost-functions/pool-common.conf"
    name: "Pool 1"
  }
  pool2 {
    include "cost-functions/pool-common.conf"
    name: "Pool 2"
    # Add a flat 1000 so that hosts in pool 2 will not be used unless the VM could not be
    # scheduled to hosts in pool 1
    offset: 1000
  }
}

# cost-functions/pool-common.conf

vms: { curve: "linear", slope: 1, base: 0 }
vcpus: { curve: "hard-soft", soft: 8, hard: 16, softAmt: 4, hardAmt: 32 }
memory: { curve: "hard-soft", soft: 16 GiB, hard: 32 GiB, scale: 10 }


Hardware Profiles

Hardware Profiles are defined in the ticrypt.vm.hardware-profiles section in ticrypt-vm.conf. Each Hardware profile should be defined as an ID mapped to a configuration block with the following parameters:

name            String       Same as ID   A human-readable name for the hardware profile.
description     String       Empty        A description of the hardware profile.
cores           Int          required     The number of CPU cores available on the host.
memory          Bytes        required     The amount of memory in bytes available on the host.
devices         DeviceList   []           The PCI devices available for VM use.
cost-function   String       required     The ID of the cost function to use for this profile.

Hardware Profile Parameters

Hardware Profile Examples


ticrypt.vm.hardware-profiles {
  pool1: {
    name: "Pool 1"
    description: "Standard host in pool 1"
    cores: 8
    memory: 32 GiB
    cost-function: "pool1"
  }
  pool2: {
    name: "Pool 2"
    description: "Standard host in pool 2"
    cores: 8
    memory: 32 GiB
    cost-function: "pool2"
  }
  nvidia-gpu-node: {
    name: "NVidia GPU Node"
    description: "Special nodes with NVidia GPUs attached"
    cores: 8
    memory: 32 GiB
    cost-function: nvidia-gpu
    devices = [
      {type: "nvidia-gpu", addr: "0000:01:02.0"},
      # PCI addresses can omit the domain if it is all 0
      {type: "nvidia-gpu", addr: "02:03.0"}
    ]
  }
}



Hosts are managed from the Management tab in the tiCrypt user interface. Administrators are able to add, enable, disable, and remove hosts from this interface.

Each host’s configuration consists of the following pieces of information:

Name
A descriptive name of the specific host.

Connection URI
The Libvirt connection URI used to connect to the host. Full documentation on the various URI formats is available in the Libvirt documentation. Some examples include:

Local QEMU connection: qemu:///system

Remote QEMU connection over SSH: qemu+ssh://root@<hostname>/system

Hardware Profile
One of the defined hardware profiles to use for resource and scheduling information.

State
The operational state of the host. Can be one of the following:

Active
A connection will be established to the host, and VMs will be scheduled to it.

No Scheduling
A connection will be established to the host, but no new VMs will be scheduled to it. This allows for the management of existing VMs without allowing new VMs. Useful for nodes that need to be brought down gracefully.

Disabled
No connection to the host will be made, and no VMs will be scheduled to it.

Static Address Translation
Optional configuration used if VMs are on networks that are not directly reachable from the tiCrypt server. If needed, an IP address and a base port can be specified. tiCrypt will then connect to the specified address when communicating with VMs, with the port determined by the base port plus the last octet of the VM’s IP address. For example, with base port 20000, a VM whose IP address ends in .57 is reached on port 20057.

Setting up Libvirt Pools

If you used the recommended default values for volumes-pool, drives-pool and bricks-pool, there are four pools that need to be set up in libvirt: ticrypt-vm, ticrypt-vm-drives, ticrypt-bricks and ticrypt-vm-snapshots.

In order to define a pool, e.g. ticrypt-vm-snapshots, you can:

```
virsh pool-define-as ticrypt-vm-snapshots dir - - - - "<path/to/pool>"
virsh pool-build ticrypt-vm-snapshots
virsh pool-start ticrypt-vm-snapshots
virsh pool-autostart ticrypt-vm-snapshots
```