
· 3 min read
tiCrypt Team

Administrator roles

There are three administrator roles within tiCrypt: super-admin, admin, and sub-admin. As with everything else in tiCrypt, security is paramount, and it is enforced through an administrator hierarchy.

Super-administrator

A super-administrator is the highest role in the system. This user can do ANYTHING in the system; however, the main purpose of a super-admin is to manage global, system-wide settings such as deployment settings. Super-admins are also responsible for accessing and restarting system services, and they can modify all types of users. This role is limited to a very select few administrators within the organization. As with all admins, super-admins do not have access to user data unless a user explicitly shares it with them.

Administrator

Admins are managers of the entire system. The role is a traditional system-manager role with no access to user data unless explicitly shared. Admins can modify only basic user and subadmin permissions; they cannot modify super-admin permissions, nor can they perform the super-admin-specific tasks described above.

Sub-administrator

Subadmins are the lowest level of administrators in the system. The subadmin role allows specific users to act as administrators within tiCrypt for their team. Admins in the system assign a managed object to each subadmin.

The subadmin has access to the Management tab within tiCrypt, just like the admins above, but subadmins only see the members, VMs, projects, etc. associated with their assigned team. This allows a subadmin to activate team members, change team members' permissions, manage tiCrypt projects, or perform any other admin action for that specific team only.

Three guiding principles act as rules for a subadmin:

If a user is deactivated and belongs to no team, a subadmin can place that user into their team. This allows subadmins to onboard and activate users without the need for a Super Admin/UMD IT Admin. This rule prevents the subadmin from managing existing members of the system who are not part of their team.

If a user explicitly belongs to a team, then the subadmin of that team can manage that user. If a user is removed from a team and is no longer a member of any team, that account becomes deactivated, and default permissions are restored. Once the account is deactivated, a Super Admin/UMD IT Admin will need to change the role. This rule is in place to prevent possible malicious permission changes.

Subadmins can create new teams, but new teams will have a default quota. The quota must be increased by the Super Admin/UMD IT Admin. This prevents subadmins from over-utilizing (or over-allocating) resources in the system without permission from UMD IT.

· 7 min read
Alin Dobra

tiCrypt Vault Storage

The tiCrypt Vault offers a file system-like facility that allows files and directories to be created and used. All the metadata, such as file properties, directory entries, access information, and decryption keys, is stored in the MongoDB database used by the tiCrypt file storage service.

In tiCrypt Vault, the file content is broken into 8 MB chunks, each encrypted independently using the file key. Each chunk occupies 8 MB + 64 bytes on disk (unless it is the last, incomplete chunk). The extra 64 bytes contain the IV (initialization vector) for AES encryption. For each file, the chunks are numbered from 0 onwards. Based on the file ID, the chunks are stored in a directory structure visible only to the tiCrypt backend (and preferably not to the VM hosts). The storage location can be configured in the configuration file of the tiCrypt-storage service.
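The on-disk arithmetic can be illustrated with a short sketch, assuming the final partial chunk also carries its own 64-byte IV (the constants are those stated above; the function name is illustrative):

```python
# Sketch: on-disk size of a tiCrypt Vault file, given the layout above:
# 8 MB plaintext chunks, each stored with an extra 64 bytes holding the IV.
CHUNK_SIZE = 8 * 1024 * 1024   # 8 MB of plaintext per chunk
IV_OVERHEAD = 64               # bytes of IV stored with each chunk

def on_disk_size(file_size: int) -> int:
    """Total bytes on disk for a file of `file_size` plaintext bytes."""
    if file_size == 0:
        return 0
    full_chunks, remainder = divmod(file_size, CHUNK_SIZE)
    chunks = full_chunks + (1 if remainder else 0)
    return file_size + chunks * IV_OVERHEAD

# A 20 MB file occupies three chunks (two full, one partial):
print(on_disk_size(20 * 1024 * 1024))  # 20971712 = 20 MB + 3 * 64 bytes
```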

tiCrypt is not opinionated about the file system or the integration with other systems for this storage, but for compliance reasons it is recommended that access be restricted to the tiCrypt backend only.

Without the decryption keys, which only users can recover, the content of the chunk files is completely indecipherable. It is safe to back up these files using any method (including non-encrypted backups, cloud storage, etc.). The strong encryption, coupled with the unavailability of the key to even administrators, ensures that from a compliance point of view this content can be replicated outside the secure environment.

tiCrypt Encrypted Drives in Libvirt

tiCrypt virtual machines use a boot disk image that is not encrypted and one or more encrypted drives. Both disk images and encrypted drives are stored as files in the underlying distributed file system available to all Libvirt host servers. The specific mechanism uses the notion of Libvirt disk pools. Independent disk pools can be defined for disk images, encrypted drives, drive snapshots (for backup), ISOs, etc. Each pool is located in a different directory within the distributed file system.

Libvirt (and tiCrypt, by extension) is agnostic to the choice of file system where the various disk pools are defined. A good practice is to place the disk pools on a distributed file system and to mount that file system on all VM hosts (preferably at the same location). Any file system can be used, including NFS, BeeGFS, Lustre, etc.

As part of the virtualization mechanism, Libvirt makes the files corresponding to drives, stored on the host file system, appear as devices to the OS running within the VM. Any writes to the virtual device get translated into changes of the underlying file. The situation is somewhat more complex when snapshots are used since multiple files on the disk will form the virtual device.

Encrypted Drive Creation

Upon drive creation, tiCrypt instructs Libvirt to create a drive. This results in a file being created in the underlying file system in the corresponding drive pool. Two types of drives are supported: raw, with extension .raw, and QCOW2, with extension .qcow2.

For a raw drive, a file as large as the indicated drive size gets created. The file content is initialized with zeros (corresponding to a blank drive). Writes to the virtual drive result in writes to the corresponding file at exactly the same position (e.g., if block 10244 of the virtual drive is written, block 10244 of the raw file gets changed as well).

For a QCOW2 drive, only changed blocks get written; the file format is quite complex and supports advanced features like copy-on-write. The initial file size is small (low megabytes) when the drive is new; the file grows in size as more data is written to the disk.

The qemu-img tool can be used to convert between the two formats. Usually, tiCrypt takes care of setting up the drives without the need for this tool.

A newly created tiCrypt disk is blank: no formatting of the drive or any other preparation has been performed. The drive will be formatted the first time it is attached to a tiCrypt virtual machine. The main reasons for this are:

  • The encryption/decryption key for the encrypted drive is kept secret from the infrastructure. This includes the tiCrypt backend, Libvirt, and underlying VM host.
  • The choice of file system is delegated to the underlying operating system and the tiCrypt VM Controller. Libvirt is not aware, nor does it need to be, of the actual file system on the drive. For Linux-formatted drives, inspecting the files backing the drives reveals nothing: not even whether the drive is formatted at all, let alone any information on the content or type of file system.

Encrypted Drive Formatting

As far as Libvirt is concerned, only low-level disk reads and writes exist. Whatever operation the operating system performs gets translated into read/write operations on the virtual disk; in turn, these result in read/write operations on the underlying file in the disk pool.

In Windows, a normal NTFS file system is created, but immediately (before the drive is made available to the user) BitLocker is turned on. This ensures that all files created subsequently are encrypted. BitLocker uses so-called "full volume encryption," i.e., all new data is encrypted, including the metadata. An external tool scanning the backing file can determine that the drive is NTFS-formatted and read any non-encrypted content. Since tiCrypt turns on encryption immediately, very little information is visible.

In Linux, the LUKS full disk encryption mechanism is used. It essentially places an encryption block layer between the raw drive (virtual drive in this case) and the file system (usually EXT4). This way, absolutely all information on the disk is encrypted. An external tool can only tell which disk blocks have been written to (are non-zero) but can derive no information about the rest of the content.

tiCrypt Non-secure Drives

Two types of non-secure drives are supported in tiCrypt: ISOs and read-only NFS shares.

Attaching ISOs

ISOs are made available using read-only CD-ROM devices. As such, they are always safe to mount in a secure tiCrypt VM. Both Linux and Windows can readily mount such ISOs and make them available as "drives."

ISOs are particularly useful if the NFS shares described below are not used. For example, Python or R packages could be made available as ISOs so that various VMs can install the required packages locally.

Attaching NFS file systems

By allowing, through firewall rules, access to a local NFS server, various tiCrypt VMs can mount a common file system for the purpose of accessing public (non-secure) data, packages, software, etc.

From a security point of view, the NFS server should export the data as read-only. The tiCrypt secure VMs should never be allowed to mount a read-write NFS share, since data could be exfiltrated through it, defeating the great effort tiCrypt puts into protections against data exfiltration. It would also almost certainly make the tiCrypt deployment non-compliant.

A further restriction relates to the location of the NFS server. The NFS file system must be under the full control of the tiCrypt system administrators. It has to be part of the tiCrypt security envelope; for example, one of the servers that is part of the tiCrypt infrastructure can take this role. This restriction stems from compliance complications: the security envelope extends to all parts of the system, so a remote NFS server becomes part of the secure environment and is subject to all the security restrictions.

A practical recommendation is to create a local NFS server inside the security envelope and regularly sync content from an external NFS server. System administrators responsible for the operations need to be identified in the security plan (compliance requirement).

· 5 min read
tiCrypt Team

Business case

This blog post discusses what a configuration file is and why you, as an admin, should care about it when running your virtual machines. A configuration file can alter the internal settings of your virtual machine.

Why do we need this?

When the administrator creates a VM image, they need two things: the tiCrypt software to manage the image and a configuration file. tiCrypt provides a default configuration file; the administrator can use it or supply their own. Allowing users to upload their own configuration files enables customization for what the researcher needs.

What types of parameters are in a config file?

Below is the default configuration file with notes on what each object and its parameters mean.

[terminal]

  • enabled = false
    Whether or not the terminal service is enabled. On Windows, the terminal is PowerShell; on Linux, it is the shell set by command below (by default, bash).

  • #command = "/bin/bash"
    The default command to use when running terminals on Linux.

  • #command = "powershell.exe"
    The default command to use when running terminals on Windows.

  • scrollback = 10000
    Default number of lines of terminal scrollback history kept.

[tunnel]

  • enabled = false
    Whether or not the tunnel service is enabled.

  • serverPort = 22
    TCP port on which to bind the tunneling service.

  • allowedPorts = []
    This is the default: an empty list. Tunneling uses a list of allowed ports. The following examples show the values allowedPorts accepts.

  • allowedPorts = 5901
    A single port can be used.

  • allowedPorts = "5901-5905"
    A range of ports can be used.

  • allowedPorts = [ 14500, 5901 ]
    An array of ports can be used.

  • addGroups = []
    This is our default list. addGroups is a list of additional system groups that users with tunneling permissions will be added to.

  • addGroups = [ "Remote Desktop Users" ]
    Used by Windows ONLY: allow access to RDP

  • idleTimeout = "15m"
    This is the timeout for idle tunnels. It defaults to 15 minutes. If set to positive duration, tunnels without active forwarded connections will be killed after the specified timeout. The minimum non-zero idle timeout is 1 second.

  • sftpEnabled = false
    Whether SFTP support is enabled. If enabled, an SSH daemon will be run that is configured to only allow SFTP connections. Enabling SFTP allows for one way SFTP from the local client to the virtual machine.

  • sftpPort = 2022
    The local port on which the SFTP SSH daemon runs. This will be automatically added to the allowed tunnel ports.

  • sshDirPath = ""
    An sshPath is NEEDED if SFTP is enabled. The path to the directory containing the sshd(.exe) and ssh-keygen(.exe) executables. If not set, the following will be checked for the executable:

    1. The assets archive at bin/ssh/
    2. The system path
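The three shapes allowedPorts accepts above (a single port, a "low-high" range string, or an array) could be normalized as in this sketch; parse_allowed_ports is a hypothetical helper, not part of tiCrypt:

```python
def parse_allowed_ports(value) -> list[int]:
    """Normalize an allowedPorts value (int, "low-high" string, or list)
    into a flat list of port numbers."""
    if isinstance(value, int):
        return [value]                       # single port
    if isinstance(value, str):
        low, high = (int(p) for p in value.split("-"))
        return list(range(low, high + 1))    # inclusive range
    return [int(p) for p in value]           # array of ports

print(parse_allowed_ports(5901))           # [5901]
print(parse_allowed_ports("5901-5905"))    # [5901, 5902, 5903, 5904, 5905]
print(parse_allowed_ports([14500, 5901]))  # [14500, 5901]
```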

[tunnel.services]

  • xpra = 14500

Optional names for ports, which may be referred to in the connection instructions for the VM. This is only needed on Linux and is ignored on Windows. It is recommended to use the xpra information that we provide. More information can be found here.

[tunnel.cert]

  • country = "US"
    If specified, country MUST be a two-letter country code.
  • organization = "unspecified"

[users]

  • changeAdminPassword = false
    If true, the VMC will attempt to change the admin password at startup to a random password. This prevents anyone from knowing the password.

  • managersAsAdmin = false
    If set to true, then on Windows, managers or owners of the VM can perform admin tasks without a password. On Linux, managers are added to a group called "sudoers" and can act as admins without a password.

  • createDirs = []

  • createHiddenDirs = []
    These two parameters apply to Windows only. They list names of directories to be created automatically in the user's home on the encrypted drive if they do not already exist, so that the directories can be linked into the user's profile on the C: drive even if they did not originally exist. If left blank, everything in the home drive will be a junction.

[commands]

  • commands.rootCommands
    Commands that are run by root at startup.

  • commands.rootCommands.runOnlyOnceCommands
    Commands that are run only once.

  • commands.rootCommands.runEveryTimeCommands
    Commands that are run every time. Entries have the form event_name = {command0_name = "command0", command1_name = "command1"}.

  • commands.userCommands
    Commands that are run by the user.

  • commands.userCommands.runOnlyOnceCommands
    Commands that are run only once.

  • commands.userCommands.runEveryTimeCommands
    Commands that are run every time.

tiCrypt Default Config

[terminal]
enabled = false
# command = "/bin/bash"       (Linux)
# command = "powershell.exe"  (Windows)
scrollback = 10000
[tunnel]
enabled = false
serverPort = 22
allowedPorts = []
addGroups = []
# addGroups = [ "Remote Desktop Users" ]  (Windows only: allow RDP access)
idleTimeout = "15m"
sftpEnabled = false
sftpPort = 2022
sshDirPath = ""
[tunnel.services]
xpra = 14500
[tunnel.cert]
country = "US"
organization = "unspecified"
[users]
changeAdminPassword = false
managersAsAdmin = false
createDirs = []
createHiddenDirs = []
[commands]
[commands.rootCommands]
[commands.rootCommands.runOnlyOnceCommands]
[commands.rootCommands.runEveryTimeCommands]
[commands.userCommands]
[commands.userCommands.runOnlyOnceCommands]
[commands.userCommands.runEveryTimeCommands]

· 8 min read
Alin Dobra

Core principles

The backup design introduces the following notions:

  • full backup/checkpoint is a complete backup of the resources. Recovering the resource only requires the information in such a backup.
  • incremental backup is a "delta" change from previous incremental backups or checkpoints. All the incremental backups between the recovery point and a checkpoint are required to recover the resources covered by the backup.
  • backup strategies specify what type of resource to backup and how often.
  • backup domains specify specific resources to back up together with the backup strategy. All the resources covered by a backup domain are backed up together.
  • backup: an incremental or full backup of a specific backup domain at a specific point in time.
  • external backup server: server with SFTP connectivity that holds the backup files externally.
  • backup solution: Software running on the external backup server that actually backs up the files on tapes/cloud.

Except for SFTP interface availability, there is no requirement on the external backup server. In particular, any operating system and backup solution can be used.

Backup strategies

A backup strategy is described by the following fields:

  • name: displayable name for the backup strategy
  • id: internal ID
  • incrementalInterval: the interval between incremental backups in seconds.
  • checkpointInterval: the interval between full backups/checkpoints in seconds.
  • teamStrategy: the strategy for the teams
  • projectStrategy: the strategy for the projects
  • userStrategy: the strategy for the users

The strategies are maps from names to boolean (true/false). The possible names and meanings are:

  • userVault [team,project]: should we back up the vaults of the users who are part of the team/project?
  • userForms [team,project]: should we back up the forms of the users in the team/project?
  • groups [user,team,project]: should we back up the Vault content related to groups of the user/team/project?
  • drives [user,team,project]: should we back up the drives (marked for backup) of the user/team/project?
  • vault [user]: back up the user's vault
  • forms [user,project]: back up user/project forms
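A strategy under this scheme is just a record whose sub-strategies are name-to-boolean maps. A sketch, using the field names from the lists above (the values are illustrative, not tiCrypt defaults):

```python
# Illustrative backup strategy record; field names follow the lists above.
strategy = {
    "name": "Nightly research backup",
    "id": "strat-001",
    "incrementalInterval": 24 * 3600,     # incremental backup once a day
    "checkpointInterval": 7 * 24 * 3600,  # full checkpoint once a week
    "teamStrategy": {
        "userVault": True,   # back up vaults of the team's users
        "userForms": False,
        "groups": True,
        "drives": True,      # drives marked for backup
    },
}

def covered(strategy: dict, scope: str, resource: str) -> bool:
    """Is `resource` covered by the given scope's map? Missing names mean no."""
    return strategy.get(scope, {}).get(resource, False)

print(covered(strategy, "teamStrategy", "drives"))     # True
print(covered(strategy, "teamStrategy", "userForms"))  # False
```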

Backup domains

Backup domains are simply collections of users/projects/teams that need to be backed up together.

The backup domains can be set up at sub-admin+ level and do not require full admin access. Only resources that the sub-admin has access to can be included in a backup domain. Specifically:

  • if a sub-admin manages a team, the team can be added
  • if a sub-admin manages a project, the project can be added
  • if a user is in a team or a project managed by a sub-admin, the user can be added

At a high level, the backup domains specify:

  • name: displayable name of the domain
  • owner: Who owns the domain
  • managers: Other users that can manage this domain
  • sftp: SFTPSpec object
  • strategy: ID of the backup strategy to use

The SFTPSpec object has the following structure:

interface SFTPSpec {
  server: string,    // the name or IP of the SFTP server
  port: number,      // the port, default 22
  user: string,      // the SFTP user name
  directory: string, // the directory where backups are stored
}

The objects in the domain to backup are specified using a "membership" model. Specifically, a CRUD interface that allows listing, adding, and deleting objects to a domain is needed. Listing and changing the membership can be done by specifying:

interface BackupDomainSpec {
  domainID: string,                  // the ID of the domain
  type: "user" | "team" | "project", // type of the object
  objID: string,                     // object ID
}
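The membership model above can be sketched as an in-memory stand-in for the CRUD interface; the function names are illustrative, and the real service would back this with a database:

```python
# Sketch of the backup-domain membership model: each entry mirrors a
# BackupDomainSpec (domainID, type, objID) triple.
memberships: set[tuple[str, str, str]] = set()

def add_member(domain_id: str, obj_type: str, obj_id: str) -> None:
    assert obj_type in ("user", "team", "project")
    memberships.add((domain_id, obj_type, obj_id))

def remove_member(domain_id: str, obj_type: str, obj_id: str) -> None:
    memberships.discard((domain_id, obj_type, obj_id))

def list_members(domain_id: str) -> list[tuple[str, str]]:
    """List (type, objID) pairs belonging to one domain."""
    return sorted((t, o) for d, t, o in memberships if d == domain_id)

add_member("dom-1", "team", "team-A")
add_member("dom-1", "user", "alice")
print(list_members("dom-1"))  # [('team', 'team-A'), ('user', 'alice')]
```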

Backups

Backups can be used for two different reasons:

  • disaster recovery: In situations when the system needs to be re-created from scratch (disastrous failure of storage, accidental system deletion, system cloning)
  • time machine: Recovering the state of a resource from the past, e.g. state of a drive as of 3 days ago, recover deleted file from 2 weeks ago, etc.

Some important principles are:

  • security: the backup is encrypted to the fullest extent possible. Neither the external backup server nor the backup solution used needs to be trusted.
  • minimize traffic: only the files strictly needed to perform recovery from backup should be required. This ensures minimal costs for recovery.

Creating a backup

A backup consists of:

  1. Entries in the ticrypt-backup service database indicating information on what is backed up and where
  2. Files placed on the tiCrypt backend server, later transferred to the remote sftp backup server indicated by the backup domain metadata.

Creating a backup will involve the following steps:

  1. A backup directory is created to host the metadata and files for the specific backup of the specific backup domain
  2. The server computes which resources (files, forms, drives) are backed up. For "full" backups, all such objects permitted by the backup strategies are listed; for "incremental" backups, only objects that changed in the backup period are listed
  3. Server prepares a "metadata-file" containing all the relevant backup information and the list of backed up files with the corresponding auxiliary files
  4. For each type of object, auxiliary files are copied into the "backup directory"
  • files: The file chunks
  • forms: the form entries into an SQLite database
  • drives: the .qcow drive image (full backup), or the image snapshot (incremental backup)
  5. Metadata needed to recover the information about the resources is added to the metadata file. Specifically:
  • files: file metadata and all file keys
  • forms: form metadata and all form keys
  • drives: drive metadata and all drive keys
  6. The backup information on this specific backup is stored in the ticrypt-backup service's database
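The metadata file produced in the steps above could look roughly like this single document; all field names and values here are illustrative, since the actual on-disk format is not specified in this post:

```python
import json
import time

# Illustrative metadata file for one incremental backup of a domain.
# Per the steps above it lists the backed-up objects, their auxiliary
# files, and the (encrypted) keys needed to recover them.
metadata = {
    "domainID": "dom-42",
    "kind": "incremental",          # or "full" for a checkpoint
    "timestamp": int(time.time()),
    "files": [
        {"fileID": "f-1", "chunks": [0, 1], "encryptedFileKey": "..."},
    ],
    "forms": [
        {"formID": "frm-7", "database": "forms.sqlite",
         "encryptedFormKey": "..."},
    ],
    "drives": [
        {"driveID": "d-3", "image": "d-3.snap.qcow2",
         "encryptedDriveKey": "..."},
    ],
}

print(json.dumps(metadata, indent=2))
```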

Recovering a resource from the backup

This is primarily performed to recover a previous version of a resource, i.e. time machine functionality.

To recover a resource from the backup, the following steps are taken:

  1. Recover the metadata file from the backend server or remote SFTP server
  2. Determine other backup metadata files (incremental backups need access to all the metadata files of all incremental backups they depend on)
  3. Compute all other auxiliary files needed
  4. Recover the content of the auxiliary files on the system backend (via SFTP if needed)
  5. Recover the state of the resource:
  • files: recover the content of missing chunks and the metadata
  • forms: recover the content of the SQLite database and metadata
  • drives: recover the state of the drive, by "merging" snapshots on top of the most recent checkpoint.
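Step 2, collecting the chain of incremental backups back to the most recent full checkpoint, can be sketched as follows; `backups` is a hypothetical oldest-first list of backup records:

```python
def recovery_chain(backups: list[dict], target_index: int) -> list[dict]:
    """Return the backups needed to recover the state at `target_index`:
    the target plus every earlier incremental back to (and including)
    the most recent full checkpoint."""
    chain = []
    for i in range(target_index, -1, -1):
        chain.append(backups[i])
        if backups[i]["kind"] == "full":
            break  # a checkpoint is self-contained; stop here
    return list(reversed(chain))

backups = [
    {"id": "b0", "kind": "full"},
    {"id": "b1", "kind": "incremental"},
    {"id": "b2", "kind": "incremental"},
    {"id": "b3", "kind": "full"},
    {"id": "b4", "kind": "incremental"},
]
print([b["id"] for b in recovery_chain(backups, 2)])  # ['b0', 'b1', 'b2']
print([b["id"] for b in recovery_chain(backups, 4)])  # ['b3', 'b4']
```

This also illustrates why caching metadata matters more than caching auxiliary files: the chain must be resolved before any content can be fetched.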

Recovering the entire state from the backup

This is primarily performed as a form of disaster recovery. Specifically, to recover all resources covered by the backup.

The process is the same as above but it spans all resources covered by a backup.

Caching backup files for better performance

In order to speed up recovery, the backup files can be cached at multiple levels:

  • backend server: can store, without guarantee of availability, any of the backup files/directories.
  • external backup server: same as above thus removing the need to recover from the actual backup solution.

It is more valuable to cache metadata rather than auxiliary files. The metadata is required even to figure out what can be recovered.

User stories

  • As a sub-admin+, I want to list existing backup strategies
  • As an admin+, I want to be able to add/delete/modify backup strategies. This requires admin+ level since it is based on global system policy/requirements. It is also intended to limit the choices for backups to remove decision overload.
  • As a sub-admin+, I want to list backup domains. Depending on role and visibility, only some of the domains are visible here.
  • As a sub-admin+, I want to be able to add/delete/modify backup domains
  • As a sub-admin+, I want to add/delete other sub-admins to the list of managers for specific backup domains
  • As a sub-admin+, I want to list objects covered by backup domains. Depending on role and visibility, only some of the objects are visible here.
  • As a sub-admin+, I want to add/delete objects to a backup domain
  • As a sub-admin+, I want to list backups associated with a backup domain
  • As a sub-admin+, I want to check the status of an ongoing backup
  • As a sub-admin+, I want to initiate a backup (incremental or full) immediately on a specific domain. This might be needed to allow backups before dangerous operations. It probably requires special permission.
  • As a sub-admin+, I want to list resources that can be recovered by a specific backup
  • As a sub-admin+, I want to list backup files needed to recover a resource
  • As a sub-admin+, I want to list backup files needed to recover the full state of a backup domain
  • As a sub-admin+, I want to recover a specific resource, possibly in a "copy" of the original resource. This requires all files listed as backup files to be available on either the tiCrypt backend or the remote SFTP server.
  • As a sub-admin+, I want to recover a complete domain. This requires all files listed as backup files to be available on either the tiCrypt backend or the remote SFTP server.