ticrypt-vm config

This service is one of the most complicated parts of tiCrypt: the configuration options are extensive and require careful setup.

Parameters for the section ticrypt.vm:

| Parameter | Type | Required | Description |
|---|---|---|---|
| mongodb | Section | Required | See the MongoDB configuration section |
| cost-functions | Section | Required | See Cost Functions |
| hardware-profiles | Section | Required | See Hardware Profiles |
| realms | Section | Required | See tiCrypt realms |
| licensing-server-driver | String | Optional | none, firewalld |
| reload-freq | Duration | Optional | How often to re-build the rules |
| firewalld | Section | Optional | Setup for firewalld driver |
| akka.remote.netty.tcp.hostname | | | Hostname used by the Akka remoting transport |
| akka.remote.netty.tcp.port | | | Port used by the Akka remoting transport |
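Putting the table together, a minimal ticrypt.vm skeleton might look like the following sketch. The included file names (mongodb.conf, cost-functions.conf, hardware-profiles.conf) are illustrative placeholders, not required names:

```
ticrypt.vm {
  # Included file names below are placeholders -- use your own layout
  mongodb { include "mongodb.conf" }
  cost-functions { include "cost-functions.conf" }
  hardware-profiles { include "hardware-profiles.conf" }
  realms {
    libvirt1 = { include "realms/libvirt1.conf" }
  }

  licensing-server-driver = "firewalld"
  reload-freq = 30 seconds
  firewalld {
    file-access = "/etc/ticrypt/allowedlist.txt"
  }
}
```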

Parameters for ticrypt.vm.firewalld:

| Parameter | Type | Required | Description |
|---|---|---|---|
| file-access | String | Required | File where rules are written. |

Make sure that the file-access option in ticrypt.vm.firewalld matches the mechanism you use to transform rules into firewall access. If you use the ticrypt-allowedlist service, the file must be /etc/ticrypt/allowedlist.txt.
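For instance, when pairing with the ticrypt-allowedlist service, the section could look like:

```
ticrypt.vm.firewalld {
  file-access = "/etc/ticrypt/allowedlist.txt"
}
```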

tiCrypt realms

A realm is a fully independent VM execution environment that has its own:

  • VM Images: the content of the boot drive used to start the VMs

  • Drives: the user drives that store the encrypted data

Two types of realms are supported as of version 3.1.2: Libvirt and Nutanix.

The realms section names each realm and includes a file with its configuration. Example:

realms {
  libvirt1 = { include "realms/libvirt1.conf" }
  libvirt2 = { include "realms/libvirt2.conf" }
  nutanix = { include "realms/nutanix.conf" }
}

You can specify as many Libvirt and as many Nutanix realms as you want.

Configuring the two types of realms requires careful, realm-specific explanation. The next two sections deal with that.

Since the file is included, neither the common nor the realm-specific parameters should be wrapped in a section.

Nutanix Realm Configuration

The Nutanix realm uses a Nutanix Cluster to execute the VMs and to host the VM Drives and Images.

The parameters required by the Nutanix driver assume an environment that is already set up in Nutanix PRISM. In particular, you need to:

  1. Set up an account used by tiCrypt. We suggest the user name ticrypt and a strong password. The account must have enough privileges to create and remove VMs and drives.

  2. Create a network and obtain its network ID.

  3. Create a storage container for drives and obtain its ID.

| Parameter | Type | Required | Description |
|---|---|---|---|
| disabled | Boolean | Required | Is the realm disabled? |
| driver | nutanix | Required | This is a Nutanix realm |
| name | String | Required | Displayable name of realm |
| network-id | String | Required | The network ID in Nutanix |
| drives.storage-container-id | String | Required | The storage container ID |
| hostname | String | Required | The hostname of the Nutanix cluster |
| port | Port | Optional | The port of the PRISM API, default 9440 |
| username | String | Optional | Defaults to ticrypt |
| password | String | Required | The password of the Nutanix account |
| task-poll-freq | Duration | Optional | How often to get task updates |
| registration-timeout | Duration | Optional | VM startup timeout |
| drives.min-size | Size | Optional | Minimum drive size |
| drives.max-size | Size | Optional | Maximum drive size |

registration-timeout is somewhat critical: if it is too short, VMs that do not have enough time to register get killed.

To debug VMs that do not correctly register, increase registration-timeout and attach a console.

You almost always want to increase drives.max-size. The default value is 32 GiB.

VM Images can only be imported via the Nutanix PRISM interface.
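Putting the parameters together, an included realm file (e.g. realms/nutanix.conf) might look like the following sketch. The hostname, IDs, and password are placeholders that must be replaced with values from your PRISM setup:

```
# realms/nutanix.conf -- all values illustrative, not in a section
disabled = false
driver = "nutanix"
name = "Nutanix Cluster"
hostname = "prism.example.org"      # placeholder
port = 9440
username = "ticrypt"
password = "replace-with-a-strong-password"
network-id = "00000000-0000-0000-0000-000000000000"                 # placeholder
drives.storage-container-id = "00000000-0000-0000-0000-000000000000" # placeholder

# Generous timeout helps when debugging VMs that fail to register
registration-timeout = 10 minutes
# The default max drive size (32 GiB) is usually too small
drives.max-size = 256 GiB
```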

VM Naming Configuration

In addition, Nutanix VM names are automatically prepended with a tag to specify that they are part of tiCrypt. By default they are named "tc-<name>", but the automatic naming can be changed via the Nutanix realm’s config.

In the nutanix realm config:

vm-name: "tc-<name>"
vm-desc: "tiCrypt managed VM | Owner: <owner> (<ownerID>) | Brick: <brick> (<brickID>)"

The tags in angle brackets are filled in with that VM’s information; the following tags can be used:

| Tag | Description | Example |
|---|---|---|
| <name> | VM name | Nutanix VM 1 |
| <team> | Team name | Tera Insights |
| <owner> | VM owner’s full name | TiCrypt User |
| <ownerID> | VM owner’s ID | 289f4326-a3a8-49e5-8327-2bdba30d3aba |
| <project> | Project name | TiCrypt Testing |
| <brick> | Brick name | Nutanix-CentOS |
| <brickID> | Brick ID | bdd65456-0f1e-4d29-aa65-fdcc71926431 |

Drive Naming Configuration

The same mechanism is available for drives on Nutanix.

In the nutanix realm config:

drive-name: "tc-<name>"
drive-desc: "tiCrypt managed drive | Owner: <owner> (<ownerID>) | Format: <format>"

The tags in angle brackets are filled in with that drive’s information; the following tags can be used:

| Tag | Description | Example |
|---|---|---|
| <name> | VM name | Nutanix VM 1 |
| <team> | Team name | Tera Insights |
| <owner> | VM owner’s full name | TiCrypt User |
| <ownerID> | VM owner’s ID | 289f4326-a3a8-49e5-8327-2bdba30d3aba |
| <project> | Project name | TiCrypt Testing |
| <format> | Drive format | ext4 |

Libvirt Realm Configuration

The Libvirt realm uses KVM/QEMU to control the VMs. For this to work, we need:

  1. The ticrypt-vm service must have root access to all the VM Host servers. This is accomplished by injecting the RSA-2048 public key of ticrypt-vm into the /root/.ssh/authorized_keys file on each of the VM Hosts.

  2. A distributed file-system shared by all the VM Hosts must be set up. The storage must be large enough to accommodate the VM Images, VM Drives, and Libvirt temporary files.

Two independent tasks need to be accomplished for the Libvirt realms to function correctly:

  • Realm setup

  • Hardware description

The realm setup is specified in the file included from the realms section of ticrypt-vm. The hardware description goes into the ticrypt-vm config file itself.

Configuring the Libvirt Realm

All the parameters go in the included file specified in the realms section.

These parameters should not be part of sections inside the included file.

| Parameter | Type | Required | Description |
|---|---|---|---|
| disabled | Boolean | Required | Is the realm disabled? |
| driver | libvirt | Required | This is a Libvirt realm |
| name | String | Required | Displayable name of realm |
| volumes-pool | String | Optional | Libvirt name for volumes pool |
| drives-pool | String | Optional | Libvirt name for drives pool |
| bricks-pool | String | Optional | Libvirt name for VM Images |
| log-scheduling | Bool | Optional | Debug VM scheduler? |
| registration-timeout | Duration | Optional | VM startup timeout |
| poll-frequency | Duration | Optional | How often a VM is checked |
| network-filter | String | Optional | Default network filter |
| drives.lazy-allocation | on, off | Required | Delay allocation? |
| drives.cache | String | Optional | Cache strategy |
| drives.min-size | Size | Optional | Minimum drive size |
| drives.max-size | Size | Optional | Maximum drive size |
| uploads | Section | Optional | Reserved for future use |
| scaling | Section | Optional | Reserved for future use |

Some notes on the above are in order.

  1. volumes-pool, drives-pool and bricks-pool have the default values ticrypt-vm, ticrypt-vm-drives and ticrypt-bricks. Unless you are hosting multiple tiCrypt instances, there is no reason to change the defaults.

  2. registration-timeout cannot be too short, since some VMs might not have time to boot and register within the specified time and will fail.

  3. network-filter is an obscure feature that needs to be specified only if problems arise. See https://libvirt.org/formatnwfilter.html for a complicated explanation.

  4. You almost always want drives.lazy-allocation=on. This way, you "pay" for the storage only when it is needed.

  5. drives.cache is best left at "default". In general, it is tricky to select. Possible values are:

    1. "default": System default caching strategy; may be either "writethrough" or "writeback" depending on the version of qemu-kvm.

    2. "writethrough": The host page cache is enabled, but the disk write cache is disabled, meaning all writes to the drive only complete when the data has been committed to the underlying storage. Similar to adding the O_DSYNC flag to writes.

    3. "writeback": Both the host page cache and the disk write cache are enabled. The disk behaves like a RAID controller with a RAM cache.

    4. "none": Bypasses the host page cache entirely. Similar to adding the O_DIRECT flag to reads and writes.
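Combining the notes above, an included Libvirt realm file (e.g. realms/libvirt1.conf) might look like the following sketch; the values are illustrative, not recommendations:

```
# realms/libvirt1.conf -- all values illustrative, not in a section
disabled = false
driver = "libvirt"
name = "Libvirt Pool 1"

# Default pool names (ticrypt-vm, ticrypt-vm-drives, ticrypt-bricks) are kept

# Generous timeout so slow-booting VMs still register
registration-timeout = 10 minutes
poll-frequency = 30 seconds

drives.lazy-allocation = on   # pay for storage only when needed
drives.cache = "default"
drives.max-size = 256 GiB
```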

Hardware specification preliminaries

There are four main components in VM host hardware configuration:

Curves
Built-in parameterized functions taking in the value of a single attribute (e.g. number of VMs, amount of memory used) and returning a cost based on that attribute. Used in the construction of Cost Functions.

Cost Functions
Configurable rules that determine how much it costs to schedule resources on a specific node. Used to determine whether VMs can be scheduled to specific nodes, and which node is optimal.

Hardware Profiles
Descriptions of a hardware configuration, linking a cost function with resource information. Assigned to hosts to allow for scheduling and device allocation.

Hosts
Individual machines used to run client VMs. Hosts are assigned a hardware profile to use for scheduling decisions.

Cost Functions and Hardware Profiles are managed in the ticrypt-vm configuration file, while Hosts are dynamically managed using the tiCrypt user interface.

The types of values used in the description of parameters are:

| Type | Example | Description |
|---|---|---|
| String | "an example" | String value |
| Int | 42 | Whole number value |
| Bytes | 32, 64 MiB, 18 GiB | A number of bytes with an optional unit |
| Real | 1.0, 1 | A real number with an optional decimal part |
| RealOrBytes | 1.0, 32 MiB, 2 GiB | Either a real number or a number of bytes with a unit |
| Curve | { curve: invalid } | A curve definition |
| CurveMap | id1: ..., id2: ... | A mapping of IDs to curve definitions |
| Device | type: "gpu", addr: "02:03.4" | Object containing a device’s type and PCI address |
| DeviceList | [..., ...] | A list of Device entries |

Curves

Curves are configuration objects with a single required string parameter, curve, which names one of the following built-in curves. Each curve takes a single attribute (such as the number of VMs running, the amount of memory used in bytes, or the number of devices of a specific type in use) and computes a cost from it. Additional parameters may be specified depending on the curve type.

If a curve has multiple values listed for its types, any of those can be used as the value of the curve attribute.

The following parameters are common to all curve types:

Common Curve Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| scale | Real | No | 1.0 | The scale factor to apply to the curve result. |
| max | Real | No | None | If specified, any input past this value will return Infinity. |
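For example, the common parameters can scale and cap any of the curves below:

```
# Linear cost, doubled by scale, and invalid once the input exceeds 100
{ curve: linear, slope: 1.0, base: 0.0, scale: 2.0, max: 100 }
```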

Constant curve

Types: constant

Always returns a fixed value.

Constant Curve Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| value | Real | Yes | | The constant value to return. |

```
{ curve: constant, value: 1.0 }
```

Invalid Curve

Types: invalid

Always returns positive infinity, preventing the VM from being scheduled. Mostly useful in conjunction with the Piecewise Curve.

```
{ curve: invalid }
```

Linear Curve

Types: linear

Computes a value from the input based on a line with a given slope and base.

Linear Curve Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| slope | RealOrBytes | Yes | | The slope of the line. |
| base | RealOrBytes | Yes | | The value to return when the input is 0. |

```
{ curve: linear, slope: 1.0, base: 0.0 }
{ curve: linear, slope: 10.0, base: 5.0 }
```

Hard-Soft Curve

Types: hard-soft, soft-hard

Creates a mostly continuous piecewise linear curve with a soft and a hard cap. The cost ramps up slowly until the input reaches soft, at which point it ramps up much more quickly until the input reaches hard. After that, the cost is infinity.

Hard-Soft Curve Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| soft | RealOrBytes | Yes | | The input value at which softAmt should be returned. |
| hard | RealOrBytes | Yes | | The input value at which hardAmt should be returned. |
| softAmt | Real | No | 1.0 | The cost returned when the input value reaches soft. |
| hardAmt | Real | No | 10.0 | The cost returned when the input value reaches hard. |

Hard-Soft Curve Examples

```
# Returns 0 when input is 0, 1 when input is 1, 10 when input is 2, infinity when input > 2
{ curve: "hard-soft", soft: 1.0, hard: 2.0 }

# Both of these are equivalent
{ curve: "hard-soft", soft: 32 GiB, hard: 48 GiB, softAmt: 10, hardAmt: 100 }
{ curve: "hard-soft", soft: 32 GiB, hard: 48 GiB, scale: 10.0 }
```

Unavailable Curve

Types: unavailable

Creates a piecewise curve that returns either 0 if the input is less than or equal to zero, or infinity if the input is greater than zero. This is useful for denoting resources that are unavailable.

This curve is the default for any devices that do not otherwise have curves associated with them.

Unavailable Curve Examples

```
{ curve: unavailable }
```

Piecewise Curve

Types: piecewise, low-high, high-low

Returns the result of one of two curves depending on the input value. Allows for more complex piecewise linear curves.

Piecewise Curve Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| cutoff | RealOrBytes | Yes | | The cutoff point for switching from low to high |
| low | Curve | Yes | | The curve to use when the input is less than or equal to cutoff |
| high | Curve | Yes | | The curve to use when the input is greater than cutoff |

Piecewise Curve Examples

```
# Constant cost of 5 up to a hard cutoff when input is 6
{ curve: piecewise, cutoff: 6.0, low: { curve: constant, value: 5 }, high: { curve: invalid } }

# Same as a hard-soft with soft = 1, hard = 2, softAmt = 1, hardAmt = 10
{
  curve: piecewise
  cutoff: 1
  low: {
    curve: linear
    slope: 1
    base: 0
  }
  high: {
    curve: piecewise
    cutoff: 2
    low: {
      curve: linear
      slope: 9
      base: -8
    },
    high: {
      curve: invalid
    }
  }
}

# Same as unavailable
{
  curve: piecewise,
  cutoff: 0,
  low: { curve: constant, value: 0 },
  high: { curve: invalid }
}
```

Cost Functions

Cost Functions are managed in the ticrypt.vm.cost-functions section of ticrypt-vm.conf, which is a mapping of cost function IDs to their definitions. Cost function definitions have the following structure. See the Curves section for more information about parameters with the type Curve.

Cost Function Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| name | String | Same as ID | A human-readable name for the cost function. |
| description | String | Empty | A description of the cost function. |
| vms | Curve | None | Curve used to assign cost based on the number of VMs running. |
| vcpus | Curve | None | Curve used to assign cost based on the number of virtual CPU cores used. |
| memory | Curve | None | Curve used to assign cost based on memory usage in bytes. |
| devices | CurveMap | None | Mapping of device ID to curve for the cost of using devices of that type. |
| scale | Real | 1.0 | Scaling factor to multiply all costs generated by this function. |
| offset | Real | 0.0 | Static offset to add to all costs generated by this function. Not affected by the scale. |

Any device types not listed in the devices section will act as if they were defined with the unavailable curve.

A curve should be set for at least one of vms, vcpus, or memory; otherwise, VMs that use no devices can be scheduled to hosts using this cost function without restriction, causing rapid resource exhaustion.

Examples

Basic Cost Function

```
ticrypt.vm.cost-functions {
  simple {
    name: "Simple"
    description: "A basic cost function"

    # Add 1 cost per VM scheduled to the host
    vms: { curve: "linear", slope: 1, base: 0 }

    # VCPUs cost 0.5 each until 8, then 3.5 each until 16
    vcpus: { curve: "hard-soft", soft: 8, hard: 16, softAmt: 4, hardAmt: 32 }

    # Add 10 cost at 16 GiB of memory, 100 at 32 GiB
    memory: { curve: "hard-soft", soft: 16 GiB, hard: 32 GiB, scale: 10 }
    # alternative way to specify the same thing:
    #memory: { curve: "hard-soft", soft: 16 GiB, hard: 32 GiB, softAmt: 10, hardAmt: 100 }
  }
}
```

Cost Function with GPUs

```
ticrypt.vm.cost-functions {
  nvidia-gpu {
    name: "GPU Node"
    description: "Used for GPU nodes with the standard configuration"

    # Same as simple
    vms: { curve: "linear", slope: 1, base: 0 }
    vcpus: { curve: "hard-soft", soft: 8, hard: 16, softAmt: 4, hardAmt: 32 }
    memory: { curve: "hard-soft", soft: 16 GiB, hard: 32 GiB, scale: 10 }

    # NVidia GPUs cost 1 each, up to a max of 2 used
    devices.nvidia-gpu: { curve: linear, slope: 1, base: 0, max: 2 }

    # Extremely high offset so that VMs do not get scheduled here unless there is
    # no other choice
    offset: 100000
  }
}
```

Cost Functions Using Pools and Includes

ticrypt-vm.conf:

```
ticrypt.vm.cost-functions {
  pool1 {
    include "cost-functions/pool-common.conf"
    name: "Pool 1"
  }
  pool2 {
    include "cost-functions/pool-common.conf"
    name: "Pool 2"
    # Add a flat 1000 so that hosts in pool 2 will not be used unless the VM
    # could not be scheduled to hosts in pool 1
    offset: 1000
  }
}
```

cost-functions/pool-common.conf:

```
vms: { curve: "linear", slope: 1, base: 0 }
vcpus: { curve: "hard-soft", soft: 8, hard: 16, softAmt: 4, hardAmt: 32 }
memory: { curve: "hard-soft", soft: 16 GiB, hard: 32 GiB, scale: 10 }
```

Hardware Profiles

Hardware Profiles are defined in the ticrypt.vm.hardware-profiles section in ticrypt-vm.conf. Each Hardware profile should be defined as an ID mapped to a configuration block with the following parameters:

Hardware Profile Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| name | String | No | Same as ID | A human-readable name for the hardware profile. |
| description | String | No | Empty | A description of the hardware profile. |
| cores | Int | Yes | | The number of CPU cores available on the host. |
| memory | Bytes | Yes | | The amount of memory in bytes available on the host. |
| devices | DeviceList | No | [] | The PCI devices available for VM use. |
| cost-function | String | Yes | | The ID of the cost function to use for this profile. |

Hardware Profile Examples

```
ticrypt.vm.hardware-profiles {
  pool1: {
    name: "Pool 1"
    description: "Standard host in pool 1"
    cores: 8
    memory: 32 GiB
    cost-function: "pool1"
  }
  pool2: {
    name: "Pool 2"
    description: "Standard host in pool 2"
    cores: 8
    memory: 32 GiB
    cost-function: "pool2"
  }
  nvidia-gpu-node: {
    name: "NVidia GPU Node"
    description: "Special nodes with NVidia GPUs attached"
    cores: 8
    memory: 32 GiB
    cost-function: nvidia-gpu
    devices = [
      {type: "nvidia-gpu", addr: "0000:01:02.0"}
      # PCI addresses can omit the domain if it is all 0
      {type: "nvidia-gpu", addr: "02:03.0"},
    ]
  }
}
```

Hosts

Hosts are managed from the Management tab in the tiCrypt user interface. Administrators are able to add, enable, disable, and remove hosts from this interface.

Each host’s configuration consists of the following pieces of information:

Name
A descriptive name of the specific host.

URI
The Libvirt connection URI used to connect to the host. Full documentation on the various URI formats is available in the Libvirt documentation. Some examples include:

Local QEMU Connection
qemu:///system

Remote QEMU Connection
qemu+ssh://[user@]host[:port]/system

Hardware Profile
One of the defined hardware profiles to use for resource and scheduling information.

State
The operational state of the host. Can be one of the following:

Enabled
A connection will be established to the host, and VMs will be scheduled to it.

No Scheduling
A connection will be established to the host, but no new VMs will be scheduled to it. This allows for the management of existing VMs without allowing new VMs. Useful for nodes that need to be brought down gracefully.

Disabled
No connection to the host will be made, and no VMs will be scheduled to it.

Static Address Translation
Optional configuration used if VMs are on networks that are not directly reachable from the tiCrypt server. If needed, an IP address and base port can be specified. tiCrypt will then make connections to the specified address when communicating with VMs, with the port determined by the base port plus the last octet of the VM’s IP address. For example, with a base port of 20000, a VM whose IP ends in .37 is reached on port 20037.

Setting up Libvirt Pools

If you used the recommended default values for volumes-pool, drives-pool and bricks-pool, there are four pools that need to be set up in libvirt: ticrypt-vm, ticrypt-vm-drives, ticrypt-bricks and ticrypt-vm-snapshots.

In order to define pools, e.g. ticrypt-vm-snapshots, you can:

```
virsh pool-define-as ticrypt-vm-snapshots dir - - - - "<path/to/pool>"
virsh pool-build ticrypt-vm-snapshots
virsh pool-start ticrypt-vm-snapshots
virsh pool-autostart ticrypt-vm-snapshots
```