
· 12 min read
Betuel Gag

Abstract

Large projects with many users require effective management at scale in tiCrypt.
In a scaling scenario, both admins and super-admins should know how to:

  • Manage multiple users simultaneously in a large number of projects.
  • Make bulk changes to user status.
  • Implement global changes in the system when needed.
  • Manage bulk changes in the tiCrypt backend.
  • Proactively use bulk VM actions.

This blog outlines tiCrypt features that streamline the management of a large number of projects at scale.

Global Management at Scale

Most of the time, System Admins and Principal Investigators take on extensive project responsibilities.
The goal of global management is to perform bulk actions efficiently and enhance data consistency, reducing human error.

The Management section enables admins and project managers to manage their projects efficiently.
Many tiCrypt features are designed to empower management teams to perform various bulk actions.

1. Make Global Announcements

Before deploying large projects, all admins or sub-admins may be required to set up a management infrastructure. The global announcement feature allows Project Managers and Admins to send secured global messages within the system.

To make a global announcement to all users or admins in projects, follow the instructions in Make an Announcement in a Project from Management.

2. User Profiles in Management

The User Profiles section in Management is a powerful tool for creating personas. User profiles allow you to tag users without altering their default permission settings.

Suppose you manage a large project with 1000+ users.

You must organize users into categories based on management requirements, project compliance, and level of access.
Organizing 1000+ users manually is tedious and time-consuming.

As a result, tiCrypt allows you to use the User Profiles feature to create custom user/admin profiles.
Each user profile includes custom roles and permissions, enabling unique actions and events during project deployment.

Once user profiles are created, they can be applied in bulk to project or team members as needed.

caution

Use this feature with caution. Improper use of permissions can block certain actions for users assigned to the user profile.

Follow the instructions in Create a User Profile.

3. Apply Profiles in Bulk

Once you have built your desired user profiles in management, you can apply them to users in bulk by selecting multiple users at once.

Follow the instructions in Apply User Profiles.

4. Bulk Email

In a large project, communication is crucial. tiCrypt offers alternative ways to communicate via email, allowing admins to Copy all project member emails or Download them with a single click.

Follow the instructions in Bulk Email a User from the Vault.

5. Bulk Refresh User Information

If a large number of users are updated at different times and you want to generate a report for audit purposes, you can use this option to bulk refresh all user data.

Follow the instructions in Refresh a User's Information from the Vault.

6. Add Multiple Certifications

Adding multiple certifications at once can streamline management efforts. This feature allows admins and project managers to certify multiple users for a security requirement within a security level of a tagged (classified) project.

Follow the instructions in Certify User(s) with a Certification for a Security Requirement.

7. Bulk Mark Certifications as Expired

Whenever a project requirement changes or is updated, admins and project managers can revoke access for all project members to a security level by marking their certifications as expired.

Follow the instructions in Mark a User Certification as Expired.

8. Add Multiple Users to a Project

Significant project processes may require adding many users to a project; this can be achieved using the bulk Add to Project option.

Follow the instructions in Add User(s) to a Project from Management.

9. Add Multiple Users to Multiple Projects

Large projects with multiple subprojects may require admins and project managers to add many users with similar roles to multiple projects at once. This action can be sped up using the Add members to projects option.

note

Select multiple projects before clicking the Add member(s) button.

Follow the instructions in Add User(s) to a Project from Management.

10. Assign Subadmins to Multiple Projects

Successful project managers and admins are often supported by effective sub-admins. tiCrypt allows admins to assign projects in bulk to sub-admins.

Follow the instructions in Assign a Project to Sub-Admin.

11. Change Roles in Bulk

Changing roles for multiple users is rarely needed; however, tiCrypt allows admins and super-admins to change the roles of many users simultaneously.

Follow the instructions in Change Role (Promote or Demote) of a User in Management.

12. Change States in Bulk

When users leave the organization indefinitely, you can change their states to inactive in bulk. This option also helps you activate new users by setting their states to active and escrow on the next login in bulk.

Follow the instructions in Change State of a User from Management.

13. Bulk Delete Objects

As a super-admin, you can bulk delete most objects in tiCrypt; however, you cannot delete cryptographically enhanced objects (i.e., Groups, VMs, Drives, etc.) unless you are the owner.

To delete objects in bulk, select multiple objects and follow the corresponding deletion instructions.

14. Bulk Export in JSON or CSV

Admins and project managers can bulk export data in JSON or CSV format from the Management and Virtual Machines sections. The export options are globally displayed for most tiCrypt objects.

  1. Go to Management.
  2. Select the object you would like to export in any of the subsections.
  3. Click either the Export CSV or the Export JSON button in the top right panel.
  4. Select one of the following export quantities:
    • Export all items.
    • Export visible items.
    • Export selected items.
  5. In the pop-up, click Save.

More export instructions are available in the related documentation.

15. Bulk Change Host States

Changing host states in bulk helps manage how an extensive VM infrastructure connects to the hosts. When hosts require maintenance or updates that require all VMs to be disconnected, super-admins can use this option.

Follow the instructions in Change the State of a Libvirt Host.

16. Bulk Check Host Utilizations

The Check host utilization option is bulk by default. This checks all hosts in the system, allowing super-admins to verify resource usage.

Follow the instructions in Check the Utilization of VMs, Cores, Memory, and Devices on a Libvirt Host.

17. Bulk Shutdown VMs by Hosts

You can bulk shut down VMs by host. This action allows for a complete shutdown of all VMs on a host in urgent situations.

caution

Please be aware that using this option will turn off all VMs of the host; all unsaved work in the VMs may be lost.

Follow the instructions in Shut Down All VMs in a Libvirt Host.

18. Bulk Manage User or Team Access in Hardware Setups

Follow the instructions in Manage User Access in a VM Hardware Setup.

19. Bulk Change Hardware Setup Images

Follow the instructions in Change the Image of an Existing VM Hardware Setup.

20. Bulk Replace Hardware Setup Instructions

Follow the instructions in Replace Instructions in a VM Hardware Setup.

21. Bulk Set Projects in Running VMs

Some significant projects demand multiple VMs to be connected to them. You can bulk tag numerous VMs to a project simultaneously.

Follow the instructions in Set Project in a Running VM Configuration from Management.

22. Bulk Shut Down Running VMs

When a project is complete and its data is saved on drives, the VMs are no longer in use; hence, you can shut them down in bulk.

Follow the instructions in Shut Down a Running VM Configuration from Management.

23. Bulk Power Up Service VMs

When starting a large project, the service VMs supporting it can be powered up simultaneously.

Follow the instructions in Power Up a Service VM.

24. Bulk Fetch Libvirt XML Descriptions of Service VMs

Super-admins can view the differences between each Service VM's XML description.

Follow the instructions in View the Libvirt XML Description of a Service VM.

25. Bulk Restart Controllers of Service VMs

Service VM controllers may be restarted in bulk to fix errors or apply updates.

Follow the instructions in Restart Controller of a Service VM.

26. Bulk Create Deletion Request of Escrow Users

When an entire group of escrow users is changed, you can create deletion requests in bulk.

Follow the instructions in Delete an Existing Escrow User.

27. Bulk Execute Signed Certificates

A similar situation applies to bulk-executing signed certificates. Super-admins have permission to bulk upload site-key admin-signed certificates into tiCrypt.

Follow the instructions in Execute a Signed Escrow Certificate.

28. Bulk Attach & Mount Drives to VM

tiCrypt allows users to bulk attach and mount an unlimited number of drives to a VM, thanks to its flexible infrastructure and functionality.

You can only attach drives in the ready state, attaching them to VMs as either read-write or read-only.

caution

If you attach multiple drives to a VM, consider resource utilization and VM architecture best practices.

Follow the instructions in Attach a Drive to an Existing VM.

29. Bulk Change Project Tag in Drives

Follow the instructions in Add or Change a Project in a Drive.

info

You cannot re-tag drives with different projects simultaneously. All selected drives must be tagged with the same project to change the project in bulk.

30. Bulk Add Users to a VM

Adding multiple users to a VM is a common action in project management.

Follow the instructions in Add Users to a Virtual Machine.

31. Unshare Drives from Everyone Else

You can unshare drives from all users simultaneously. This action allows the drive owner to keep a drive private.

Follow the instructions in Unshare the Drive from Everyone Else.

32. Bulk Transfer via SFTP

Large projects often require research data at scale. A simple way to transfer large amounts of data into projects is via SFTP.
Before transferring, you must create an endpoint for your data.

Follow the instructions in Create an SFTP Inbox.
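
As a sketch, once the inbox exists, a bulk transfer from the command line could look like the following; the hostname, port, user, and paths here are illustrative assumptions, not tiCrypt defaults.

# Hypothetical SFTP transfer into an inbox -- adjust host, port, user, and paths.
sftp -P 22 researcher@sftp.example.org <<'EOF'
cd inbox
put -r ./project-data
EOF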

Virtual Machine Management at Scale

Virtual Machines can be managed in bulk to accomplish complex tasks at scale. You can use the following features from the Management or Virtual Machine tabs.

1. VM User Profiles

The VM User Profiles feature in the Management section is a powerful tool for creating personas within the virtual machine environment. They allow you to tag virtual machine users by changing their permissions in the VM.

If you manage an extensive VM infrastructure with many users, you should leverage VM user profiles to organize user permissions and levels of control within VMs and drives.

  • Regardless of user roles in the system, you can flexibly create a VM user profile.
    • Example 1: Super-admins of the system may be standard VM users if they belong to a VM profile designed for that purpose.
    • Example 2: Standard users in the system may have VM manager roles if they belong to a VM profile designed for that purpose.
  • Multiple users may have multiple VM user profiles.
  • Regardless of user role, each VM user can have a maximum of one VM user profile per virtual machine.

Follow the instructions in Add User Profiles in a Virtual Machine.

info

To learn more about VM User Profiles, see What is the Purpose of VM Profiles?.

2. Create Access Directory for Large VM Groups

Access directories play a significant role in large VM group management. There are four access options for an access directory:

  • Everybody: All VM users have access to the directory.
  • Nobody: None of the VM users have access to the directory except the VM owner.
  • Managers: Only the users with manager roles in the VM have access to the directory.
  • Custom: Users with custom permissions set by the VM owner or VM managers can access the directory.

Follow the instructions in Create an Access Directory for a Virtual Machine Group.

Miscellaneous Management at Scale

There are several complementary features that can be used as additional tools for management at scale.

1. Global Login Message

In certain scenarios, you may need to perform maintenance in the backend, which may require pausing the system for a few days. Before starting maintenance, it is recommended to have at least one channel to contact all users about the maintenance outside the system.

As a best practice, use the global login message feature to inform everyone about maintenance periods or significant project updates that may affect all users.
Optionally, you can set custom colors, symbols, and display frequencies for your global message.

Follow the instructions in Display a Global Login Message.

2. Global Terms of Service

The Terms & Conditions prompt can be used for any relevant information or updates users should know about, e.g., "The system will be down for 14 days due to a large project maintenance."

Follow the instructions in Implement Terms of Service into the System.

3. The Terminals

The Terminals feature helps you keep track of running VMs when dealing with complex workflows. It is a complementary feature for large projects, allowing you to manage multiple VMs conveniently and simultaneously.

info

To learn more about the Terminals, see the Access the Terminals section.

· 2 min read
Thomas Samant

R is an indispensable tool for statistical computing and graphics, employed across a wide range of sectors. For users of tiCrypt, managing R in offline virtual machines (VMs) is essential to comply with security protocols while maintaining essential functionality. Here’s how to manage R libraries locally and through NFS mounts.

General Instructions for tiCrypt Users

Operating R in offline mode requires handling libraries and packages locally or from a network file system (NFS) mount when direct internet access is not available.

Installing R Packages Locally:

  1. Download Packages: On an internet-enabled machine, download the R package files (typically .tar.gz files) from CRAN or other repositories.
  2. Transfer to VM: Transfer these downloaded package files to your tiCrypt VM through approved secure methods.
  3. Install Locally: Install the packages from the local files using the following command in R:
    install.packages(path_to_file, repos = NULL, type = "source")
    Replace path_to_file with the path to your downloaded package file.
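
For step 1 above, the download on the internet-enabled machine might look like the following sketch; the URL follows CRAN's standard source-package layout, and the bracketed placeholders must be replaced as in the other examples.

# Download a source package from CRAN (replace the placeholders).
wget https://cran.r-project.org/src/contrib/[PACKAGE_NAME]_[VERSION].tar.gz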

Installing from an NFS Mount:

If a CRAN repository is mirrored on an NFS accessible to your VM, you can install packages directly from there, bypassing the need for local file transfers:

  1. Set Library Location: Specify the library location to your NFS mount where the CRAN mirror resides:
    .libPaths("file:///path_to_cran_mirror")
    Replace path_to_cran_mirror with the actual NFS mount path to the CRAN repository.
  2. Install Packages: Install the R packages using the following command:
    install.packages("[PACKAGE_NAME]", repos=NULL, type="source")
    Replace [PACKAGE_NAME] with the actual name of the R package you want to install.

Specific Guidelines for Citadel Users

In the Citadel environment, RStudio is easily accessible from the applications toolbar, and specific embedded instructions simplify the package installation process:

Installing R Packages in Citadel:

  1. Access R or RStudio: Launch R or RStudio from the applications toolbar.
  2. Set Library Location: Use this command to direct R to the NFS mount where the CRAN repository is mirrored:
    .libPaths("file:///mnt/modules/cran/src/contrib")
  3. Install Packages: Install your desired packages with either of the following commands, depending on whether the global library path has been set:
    install.packages("[PACKAGE_NAME]", repos = NULL, type = "source")
    Or specify the contriburl directly:
    install.packages("[PACKAGE_NAME]", contriburl="file:///mnt/modules/cran2/src/contrib", type = "source")

R Startup Message:

For Citadel users, the R startup message includes instructions on installing packages, ensuring you have the necessary steps readily available.

· 5 min read
Cristian Dobra

In today's digital age, the security of data has become paramount. As we increasingly rely on digital devices for storing sensitive information—ranging from personal documents to financial records—protecting this data from unauthorized access is crucial. This is where LUKS encryption comes into play.

What is LUKS Encryption?

LUKS, which stands for Linux Unified Key Setup, is a standard for disk encryption. It is widely used on Linux systems to protect data at rest, meaning the data stored on a physical medium (like a hard drive or SSD). LUKS provides a robust and transparent way to encrypt entire block devices, ensuring that all data written to the disk is automatically encrypted.

Why Use LUKS Encryption?

Data Confidentiality: LUKS encryption ensures that the data on your disk is unreadable without the proper decryption key. This is particularly important for protecting sensitive information from unauthorized access, whether due to theft, loss, or unauthorized users attempting to access the data.

Physical Security:

In the event that a physical device is stolen, the data remains protected. Without the correct passphrase or key, the encrypted data appears as random, unreadable bytes, rendering it useless to anyone who doesn't have access.

Compliance with Regulations:

Many industries and organizations are required to adhere to strict data protection regulations, such as GDPR, HIPAA, or PCI-DSS. Implementing LUKS encryption helps ensure compliance with these regulations by safeguarding sensitive information and maintaining data privacy standards.

Protection Against Data Breaches:

Data breaches can have severe consequences, including financial loss, reputational damage, and legal repercussions. By encrypting data at rest, LUKS provides an additional layer of security that helps prevent unauthorized data access, even if a breach occurs.

Ease of Use:

Despite its powerful capabilities, LUKS is relatively easy to use. With tools like cryptsetup, setting up and managing LUKS encryption can be straightforward, making it accessible for both novice and experienced Linux users.

The Purpose of LUKS Encryption

The primary purpose of LUKS encryption is to protect data integrity and confidentiality. By encrypting the entire disk or specific partitions, LUKS ensures that:

  • Only authorized users can access the data: Decryption requires the correct passphrase or key, making unauthorized access significantly more difficult.
  • Data remains secure in various scenarios: Whether the device is lost, stolen, or improperly accessed, the encrypted data remains protected.
  • Users maintain control over their data: Encryption keys are managed by the user, ensuring that only they have the ability to decrypt and access the data.

In essence, LUKS encryption serves as a vital tool in the arsenal of data security practices. It provides a comprehensive, reliable, and efficient means of protecting sensitive information, ensuring that your data remains confidential and secure in an increasingly interconnected world.

Conclusion

Implementing LUKS encryption on your Linux systems is not just a technical decision; it's a crucial step towards safeguarding your data against a myriad of potential threats. Whether you're a business looking to protect client information or an individual concerned about personal privacy, LUKS encryption offers a robust solution for securing your digital assets.

Example of how to use LUKS encryption

In the example below, we will guide you through encrypting a mounted disk on Rocky Linux and releasing the unused space.

This assumes the virtual machine (named VM in the commands below) has already been created.

On HOST server:

Add a SCSI disk to the VM. The disk must be in qcow2 format.

Create disk image:

qemu-img create -f qcow2 /VM/disks/fstrim.qcow2 2G

Check that the format is qcow2:

[root@myserver ]# qemu-img info fstrim.qcow2
image: fstrim.qcow2
file format: qcow2
virtual size: 2 GiB (2147483648 bytes)

Edit the virtual machine definition and add the disk:

virsh edit VM

Add the lines below after the existing disk definition:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' discard='unmap'/>
  <source file='/VM/disks/fstrim.qcow2'/>
  <target dev='vdb' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
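
After saving the definition, you can verify that the new disk is visible to the domain. This check assumes the domain is named VM, as above.

# The output should list a vdb entry pointing at /VM/disks/fstrim.qcow2.
virsh domblklist VM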

On Virtual Machine

Install cryptsetup

yum -y install cryptsetup

Encrypt the disk with LUKS (it will format the disk and lose all the data):

cryptsetup luksFormat /dev/sdx

Open the disk to set up a filesystem

cryptsetup luksOpen --allow-discards  /dev/sdx mydisk

Format new encrypted disk

mkfs.xfs /dev/mapper/mydisk

Mount encrypted disk

mount /dev/mapper/mydisk /mnt/

Create files to test. On both the VM and the HOST server, the used space will increase by 1G.

dd if=/dev/random of=/mnt/1.img bs=1M count=1024

After deleting the 1G file in the VM's /mnt, the space on the HOST will not decrease. To decrease it, the commands below must be run in the VM:

rm /mnt/1.img
fstrim -v /mnt

ON HOST SERVER - check the disk space:

BEFORE TRIM

[root@myserver ]# du -sh fstrim.qcow2
1.1G fstrim.qcow2

AFTER TRIM

[root@myserver ]# du -sh fstrim.qcow2
85M fstrim.qcow2

The trim can be done manually, as described above, with the command 'fstrim -v /mnt', or automatically once per week by enabling and starting the fstrim.timer service:

systemctl enable fstrim.timer
systemctl start fstrim.timer

After a reboot, the file system must be mounted manually, because the passphrase must be entered again for the encrypted disk.

cryptsetup luksOpen --allow-discards  /dev/sdx mydisk
mount /dev/mapper/mydisk /mnt/

· 7 min read
Alin Dobra

Adding batch-processing capabilities to tiCrypt is one of the most requested new features. It will allow large computational jobs to be executed within the secure environment provided by tiCrypt, with full cryptographic isolation and security, achieving batch processing in a fully compliant CMMC/NIST environment.

A natural solution is integrating tiCrypt with Slurm, the most popular batch-processing system. This document provides a technical discussion of the integration and security challenges.

Slurm Overview

Slurm is a batch-processing system that allows users to submit jobs to a cluster of machines. The jobs are queued and executed when resources become available. Slurm provides a rich set of features, including:

  • sophisticated job scheduling
  • resource management
  • job accounting and reporting (including billing)
  • job execution, most notably for MPI jobs
  • job monitoring and control

For secure computation, especially when covered by CUI restrictions, Slurm is a poor choice. While building somewhat secure systems around Slurm is possible, it is difficult and often results in significant performance degradation. The main difficulty to overcome is that Slurm is designed to run jobs on the same machine where the Slurm controller is running. This does not protect against malicious code running on the same machine as the Slurm controller. Moreover, Slurm cannot isolate jobs or infrastructure from each other.

Ideal tiCrypt-Slurm Integration

The ideal integration of tiCrypt with Slurm would provide the following features:

  • Isolation of jobs from each other. This is the most critical feature: jobs cannot interfere with each other.
  • Isolation of jobs from the infrastructure. This is also very important. Ideally, Slurm should not be aware of the code that is being executed, nor have access to the data processed.
  • Integration with tiCrypt. The integration should be as seamless as possible. In particular, data encrypted by tiCrypt should be seamlessly integrated.
  • Minimal performance degradation. The integration should not significantly degrade the performance of Slurm. Ideally, it should not degrade the performance at all.
  • Keep the excellent Slurm capabilities. tiCrypt should rely on Slurm's scheduling, resource management, job accounting and reporting, and job monitoring and control capabilities.

Architecture of the solution

The above goals seem difficult to achieve because most of Slurm's capabilities must be retained, but the security must be "fixed". The key idea is to separate Slurm functionality into two parts:

  1. Global Slurm Scheduler. This is the part of Slurm that is responsible for scheduling jobs, managing resources, accounting, and reporting. This component knows who executes the jobs, what resources they require, tracks the jobs, etc. It is not aware of the code being executed, nor does it have access to the data processed. This Slurm instance will run globally, outside tiCrypt, and interact with the tiCrypt Backend through the Slurm REST API.

  2. Local Slurm Executor. This is the part of Slurm that is responsible for executing the jobs. This component is aware of the code that is being executed and has access to the data processed. It can also provide Slurm with advanced execution capabilities, such as MPI. This Slurm instance will run locally, inside tiCrypt-managed Virtual Machines, and interact only with tiCrypt VM Controller. Each CUI project will have its own local Slurm Executor, managed by the tiCrypt VM Controller.

The Global Slurm Scheduler will not be aware of the Local Slurm Executors, and vice versa. The interaction between the two will be through the tiCrypt Backend and tiCrypt VM Controller. The tiCrypt VM Controller will hide most details from the tiCrypt backend (such as the code being executed, the data processed, etc.) and only provide information on the resources needed. This ensures that the Global Slurm Scheduler is not aware of the code being executed and has no access to the data processed. The tiCrypt backend will hide global details such as what other jobs run in the system, who is running them, etc. This ensures that the Local Slurm Executor is not aware of what else is running in the system, who is running it, etc.

The main mechanism used by tiCrypt to "trick" Slurm into running as described above is extensive use of Slurm's plugin architecture and a rewrite of the job and statistics reporting mechanisms. The following two sections describe the specific mechanisms used to achieve the above goals.

Global Slurm Scheduler

The Global Slurm Scheduler will be a full-fledged Slurm instance, running outside of tiCrypt but side-by-side with tiCrypt backend. Specifically, slurmctld, slurmdbd and slurmrestd will run on the same machine as the tiCrypt backend. It will be configured separately from tiCrypt and can use any of the Slurm features, most importantly, various plugins. Slurm will be configured only to allow the tiCrypt backend to submit jobs; specifically, the Slurm API will be guarded against any other access.
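
For illustration only, a query against slurmrestd could resemble the sketch below; the API version path and the token handling vary by Slurm release, and the values shown are assumptions rather than tiCrypt's actual integration.

# Hypothetical job-list query via the Slurm REST API (slurmrestd, default port 6820).
# The version segment (v0.0.39) depends on the installed Slurm release.
curl -s -H "X-SLURM-USER-NAME: ticrypt" \
     -H "X-SLURM-USER-TOKEN: $SLURM_JWT" \
     http://localhost:6820/slurm/v0.0.39/jobs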

For each of the VM Hosts used by tiCrypt for batch processing, a Slurm node will be configured. Specifically, slurmd will run on each VM Host. The tiCrypt backend will feed "fake" jobs to Slurm to simulate the actual execution (see below). Special programs that do nothing except coordinate with the tiCrypt Backend will be submitted to Slurm. This technique is similar to the PCOCC system. Custom Lua plugins will be used to intercept job execution (a quirk of Slurm: this is the only way to block job execution) and to "adjust" the job statistics based on actual execution (see below).

tiCrypt Controlling VM

For each CUI project, tiCrypt will create an interactive VM (CVM) that will be used to run the jobs for that project. This VM, fully controlled by the project members, will be used to manage the Local Slurm Executor, provision all the security, and interact with the tiCrypt Backend. This mechanism is similar to the management of tiCrypt Interactive Clusters. Specifically, the CVM will:

  • Interact with tiCrypt users using the same mechanism as the regular tiCrypt interactive VMs.
  • Provide the distributed file system used by the batch execution. This uses the same mechanism as the tiCrypt Interactive Clusters.
  • Provide secure communication between the CVM and the worker VMs using an automatically provisioned VPN (using StrongSwan).
  • Manage the Local Slurm Executor and local job submission.
  • Ask tiCrypt Backend for VMs to be provisioned for batch processing. This is based on submitted Slurm jobs.
  • Inform Local Slurm Executor when resources are available and jobs can be executed.
  • Decommission the VMs when the batch processing is done.

Users with access to the CVM will be able to submit jobs to the Local Slurm Executor using sbatch and srun commands.
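
As a sketch, a submission from the CVM would follow standard Slurm usage; the script below is a generic example (the job name, resources, and executable are hypothetical), not a tiCrypt-specific format.

#!/bin/bash
#SBATCH --job-name=cui-analysis    # hypothetical job name
#SBATCH --ntasks=4                 # request four tasks
#SBATCH --time=01:00:00            # one-hour time limit

srun ./my_analysis                 # hypothetical executable on the project drive

The script would be submitted with sbatch job.sh, and its status monitored with squeue.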

Local Slurm Executor

The Local Slurm Executor will be a full-fledged Slurm instance, running inside tiCrypt-managed Virtual Machines. Specifically, slurmctld (and, if strictly necessary, slurmdbd) will run on the CVM, and slurmd will run on each VM that is part of the batch job (managed by local VM Controllers coordinated via the CVM). The provisioning and execution will be controlled by the tiCrypt CVM. Via the Slurm command-line tools or the tiCrypt Frontend integration, users can see the jobs' status, cancel them, etc.

To provide integration with the rest of the tiCrypt infrastructure and the Global Slurm Scheduler, Lua plugins and other tiCrypt mechanisms will intercept job submission, execution, and statistics reporting.

Challenges and Solutions

The above plan might sound complicated, but we think the complexity is manageable. The main challenges are:

  • Learning Slurm. While the Tera Insights team has no experience running Slurm at scale, we have extensive experience dealing with complex systems and "coding around" their limitations and quirks.
  • Using LUA plugins. The Slurm documentation on plugins is not particularly extensive, but we can draw much inspiration from the PCOCC system implementation.
  • Adding new features to tiCrypt. The tiCrypt team has extensive experience in adding new features to tiCrypt. We have a well-established process for adding new features, including design and implementation.

· 7 min read
Thomas Samant

tiCrypt Vault Storage

The tiCrypt Vault offers a file system-like facility that allows files and directories to be created and used. All the metadata, such as file properties, directory entries, access information, and decryption keys, are stored in the MongoDB database used by the tiCrypt-file storage service.

In the tiCrypt Vault, the file content is broken into 8 MB chunks, each encrypted independently using the file key. On disk, each chunk occupies 8 MB + 64 bytes (unless it is the last, incomplete chunk). The extra 64 bytes contain the IV (initialization vector) for the AES encryption. For each file, the chunks are numbered from 0 onwards. The chunks are stored in a directory structure based on the file ID, visible only to the tiCrypt backend (and preferably not to the VM hosts). The storage location can be configured in the configuration file of the tiCrypt-storage service.
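
As a back-of-the-envelope sketch of this layout, the on-disk footprint of a file can be estimated from the chunk format described above; the script assumes 8 MB means 8 MiB and ignores any final-chunk padding details.

# Estimate the Vault's on-disk size for a file of a given plaintext size.
PLAINTEXT=$((100 * 1024 * 1024))                 # example: a 100 MiB file
CHUNK=$((8 * 1024 * 1024))                       # 8 MiB of plaintext per chunk
NCHUNKS=$(( (PLAINTEXT + CHUNK - 1) / CHUNK ))   # chunks numbered from 0 onwards
ONDISK=$(( PLAINTEXT + NCHUNKS * 64 ))           # plus 64 bytes of IV per chunk
echo "$NCHUNKS chunks, $ONDISK bytes on disk"    # prints: 13 chunks, 104858432 bytes on disk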

tiCrypt is not opinionated on the file system or integration with other systems for this storage, but for compliance reasons, it is recommended that the access is restricted to only the tiCrypt backend.

Without the decryption keys that only users can recover, the content of the chunk files is entirely indecipherable. It is safe to back up these files using any method (including non-encrypted backup, cloud, etc.). The strong encryption, coupled with the unavailability of the key to even administrators, ensures that from a compliance point of view this content can be replicated outside the secure environment.

tiCrypt Encrypted Drives in Libvirt

tiCrypt virtual machines use a boot disk image that is not encrypted and one or more encrypted drives. Both disk images and encrypted drives are stored as files in the underlying distributed file system available to all Libvirt host servers. The specific mechanism uses the notion of Libvirt disk pools. Independent disk pools can be defined for disk images, encrypted drives, ISOs, etc. Each pool is located in a different directory within the distributed file system.

Libvirt (and tiCrypt, by extension) is agnostic to the choice of file system where the various disk pools are defined. A good practice is to place the different disk pools on a visible distributed file system (preferably in the same location) and to mount the file system on all VM hosts. Any file system, including NFS, BeeGFS, Lustre, etc., can be used.

As part of the virtualization mechanism, Libvirt makes the files corresponding to drives stored on the host file system appear as devices to the OS running within the VM. Any writes to the virtual device get translated into changes in the underlying file. The situation is somewhat more complex when snapshots are used since multiple files on the disk will form the virtual device.

Encrypted Drive Creation

Upon drive creation, tiCrypt instructs Libvirt to create a drive. This results in a file being created in the underlying file system in the corresponding drive pool. Two drive formats are supported: raw, with extension .raw, and QCOW2, with extension .qcow2.

For a raw drive, a file as large as the indicated drive size gets created. The file content is initialized with 0s (corresponding to a blank drive). Writes to the virtual drive result in writes to the corresponding file at the same position (e.g., if block 10244 of the virtual drive is written, block 10244 of the raw file gets changed as well).

For a QCOW2 drive, only changed blocks get written; the file format is quite complex and supports advanced features like copy-on-write. The initial file size is small (low megabytes) when the drive is new; the file grows in size as more data is written to the disk.

The qemu-img tool can be used to convert between the two formats. Usually, tiCrypt sets the drives up without the need for this tool.
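
Should a manual conversion ever be needed, the standard qemu-img invocations look like this (the file names are illustrative):

qemu-img convert -f raw -O qcow2 drive.raw drive.qcow2    # raw to QCOW2
qemu-img convert -f qcow2 -O raw drive.qcow2 drive.raw    # QCOW2 back to raw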

A newly created tiCrypt disk is blank. No formatting of the drive or any other preparation has been performed. The drive will be formatted the first time it is attached to a tiCrypt virtual machine. The main reasons for this are:

  • The encryption/decryption key for the encrypted drive is kept secret from the infrastructure. This includes the tiCrypt backend, Libvirt, and underlying VM host.
  • The choice of the file system to use is delegated to the underlying operating system and the tiCrypt VM Controller. Libvirt is not aware of, nor does it need to know, the actual file system on the drive. For Linux-formatted drives, inspecting the files backing the drives gives no way to tell whether the drive is even formatted at all, let alone any information about the content or type of file system.

Encrypted Drive Formatting

As far as Libvirt is concerned, only low-level disk reads and writes exist. Whatever operation the operating system performs gets translated into read/write operations on the virtual disk; in turn, this results in read/write operations on the underlying file in the disk pool.

In Windows, a standard NTFS file system is created, but immediately (before the drive is made available to the user), BitLocker is turned on. This ensures that all files created subsequently are encrypted. BitLocker uses so-called "full volume encryption," i.e., all the new data will be encrypted, including the metadata. An external tool scanning the backing file can determine that the drive is an NTFS formatted drive and read all non-encrypted content. Since tiCrypt turns on encryption immediately, minimal information is visible.

In Linux, the LUKS full-disk encryption mechanism is used. It essentially places an encryption block layer between the raw drive (the virtual drive in this case) and the file system (usually EXT4). This way, absolutely all information on the disk is encrypted. An external tool can only tell which disk blocks have been written to (are non-zero) but can derive no information about the rest of the content.

tiCrypt Non-secure Drives

Two types of non-secure drives are supported in tiCrypt: ISOs and read-only NFS shares.

Attaching ISOs

ISOs are made available using read-only CD-ROM devices. As such, they are always safe to mount in a secure tiCrypt VM. Linux and Windows can readily mount such ISOs and make them available as "drives."

ISOs are particularly useful if the NFS shares described below are not used. For example, Python or R packages could be made available as ISOs so that various VMs can install the required packages locally.
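
As a sketch of how such an ISO could be produced outside the secure environment (the tool choice and paths are assumptions; genisoimage is one common option):

# Pack a staging directory of downloaded packages into an ISO image.
genisoimage -o packages.iso -R -J ./pkg-staging/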

Attaching NFS file systems

By allowing, through firewall rules, access to a local NFS server, various tiCrypt VMs can mount a common file system for the purpose of accessing public (non-secure) data, packages, software, etc.

From a security point of view, the NFS server should export the data as read-only. The tiCrypt secure VMs should never be allowed to mount a read-write NFS share, since data could be exfiltrated, defeating the tremendous effort put into tiCrypt's protections against data exfiltration. This would unquestionably make the tiCrypt deployment non-compliant.
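
For illustration, a read-only export on such an NFS server could look like the following /etc/exports entry; the path and subnet are hypothetical.

# /etc/exports -- expose the module tree read-only to the VM subnet.
/srv/modules  10.10.0.0/24(ro,root_squash,no_subtree_check)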

A further restriction is related to the location of the NFS server. The system administrators of tiCrypt must control the NFS file system. It has to be part of the tiCrypt system envelope; for example, one of the servers part of the tiCrypt infrastructure can take this role. This restriction is due to compliance complications: the security envelope extends to all system parts; a remote NFS server becomes part of the secure environment and is subject to all security restrictions.

A practical recommendation is to create a local NFS server inside the security envelope and a regular

· 19 min read
Betuel Gag

More sophisticated data access restriction scenarios involve "hierarchical data"; these are situations where various data sets possess distinct scopes and hierarchical restrictions. While simple data protection can be achieved straightforwardly using the project tagging mechanism in tiCrypt, more complex scenarios like the one described below require more careful consideration.

Problem Statement

Suppose you have a research group that has access to two-level sets of data: federal-level data and state-level data. Both sets contain CUI from the following states: Florida, Texas, and California.

We assume each of the states has distinct data.

The following data restrictions are in place:

  • You can only combine the state-level data with the federal-level data per state.
  • You cannot combine data sets between states.

To apply the zero-trust principle, you may want to enforce restrictions on specific data sets.

Knowing that all members work with federal-level data across the states, some work with state-level data, and a third group works with both federal- and state-level data, we want to set up an infrastructure where:

  • All group members have access to federal-level data.
  • Specific members have access to Texas data.
  • Specific members have access to Florida data.
  • Specific members have access to California data.
  • No member can combine federal or state-level data sets between states at any time.
  • No member is allowed access to state-level data unless they are actively working on that state.
  • Data declassification is limited to the project PIs.
  • Data access and downloads are prevented and access-controlled.
  • CUI is safe at all times under the project infrastructure.

How can you use groups and projects to achieve this infrastructure?

This blog provides two possible solutions using the interplay between tiCrypt teams, groups, and project tags, shedding light on how they can be combined to accomplish this goal.

Background Information

Before exploring the interplays, we will examine the usability of tiCrypt teams, groups, and projects to understand how they interact.

Let's delve into the details.

Teams

Teams are access-controlled. They must have at least one member.

  • Separated from groups and projects: There is no interplay between groups and teams or projects and teams.
  • Familiarity: Same team members are recommended to join the same group/project for better collaboration.
  • Resource usage: Teams can help control resource allocation in a project or group.

Groups

Groups are cryptographic. They make it easy to share encrypted information between users.

  • Independent from teams: Try adding a user to a team, add them to a group, then remove them from the team. The user will still be in the group.
  • Fully encrypted: Interaction between group members is cryptographically based on the user's public-private keys with no ACL-based operations.
  • Activity isolation: Try creating a group with two team members, then promote one of them to owner and leave the group. You can no longer decrypt the content shared between the members unless you become part of their group again.
  • Satisfy compliance requirements: You should use groups to enforce compliance standards between members.
note

You may be in a situation where you share a file with another tiCrypt user. However, with time, you share files more often with the same user, and it becomes a habit. At this point, you should create a group with the respective user where you can share files unconditionally.

Projects

Projects are access-controlled. They must be active in your session to be accessible.

  • Tags: You can tag almost anything in the system with your project.
  • Separation of powers: Access to the project is granted by your project membership, not by your admin.
  • Doubled security & enforced compliance via security levels: You can only access project resources when you have a membership and satisfy project security levels.
    • A project with no security levels still requires a project membership.
    • The more security requirements, the more effort is required to join the project.
note

Unlocked projects displaying the Unlocked tag are restriction-free. If a resource has a project tag, the project restrictions apply.

The Interplay between Groups & Projects

Groups are encrypted collaboration objects that allow encrypted file sharing. Projects are a tagging mechanism limiting access to objects such as files, directories, groups, VMs, and drives. Regarding resource distribution, groups commonly share resources in group directories where all group members can view and access them; projects require members to specifically share a resource with other project members to allow access to it.

By combining projects with groups, you achieve an encrypted, access-controlled group directory that combines group permissions, project membership, and project security levels in one place.

The interplay takes place at a resource level. There are four classes of resources in a group tagged by a project:

  1. Resources belonging to the group, not tagged by the project: accessible only to the group members, inaccessible to project members.
  2. Resources belonging to the group and tagged by the project: accessible only to group members who are also project members, have an active project membership, and satisfy the project security requirements.
  3. Resources not belonging to the group but tagged by the project: inaccessible to group members, accessible only to project members with an active project membership and satisfied project security requirements.
  4. Resources neither belonging to the group nor tagged by the project: such outside files are inaccessible to both group and project members.
tip
  • To access a tagged group, you need group keys, project membership, and satisfied security levels.
  • Tagged groups can have resources with distinct project tags unrelated to the group's tag, enabling group owners to tag files with other projects they belong to. For clean data management, we do not recommend having one group with many unrelated project tags.

Tagged Group Permissions

Tagged groups allow system-managed access control, meaning the following permissions, or a combination of them, can be deployed for each group member:

  • Add Members: Add other users to the current group.
  • Remove Members: Remove existing members from the group.
  • Change Permissions: Change group permissions for themselves and other members.
  • Add Directory Entries: Upload files to the group directory using the upload action.
  • Remove Directory Entries: Delete files from the group directory using the delete action.
  • Create Directories: Create new (sub)directories in the group directory using the create new directory action.
  • Modify Project Tag: Change the tags of files and directories in the group directory using the change project action.
note
  • Once a user is added to a group, they become a group member.
  • A group member with user role becomes a group member with manager role when their Add Members permission is ticked.
  • As a group owner, you cannot change your permissions in the group.
  • Mistakenly unticking the Change Permissions box as a group member will block you from changing any permissions, including your own. To restore your permissions, you must ask the group owner to tick the Change Permissions box for you again.
  • By ticking the Modify Project Tag permission, you allow subproject members to untag (declassify) tagged group directory files from the subproject and keep them hidden within their group directory, making them visible only to their group members.
tip

For good practice, you may use the following permissions structure for your tagged groups:

  • Group User: Add directory entries, Create directories.
  • Group Manager: Add directory entries, Remove Directory Entries, Change Permissions, Add Members, Remove Members, Create directories.
  • Group Owner: All permissions by default, including Modify Project Tag.

A tagged group can hold resources with different restriction levels by project, similar to our earlier example with federal- and state-level data containing CUI from the states of Florida, Texas, and California.

The following two workflows show the practical side of project infrastructure using the right interplay between groups and projects.

Workflow 1: One Group to Span All Projects

You can create an "umbrella" group for a project with subprojects for the data. You will manage subproject memberships via project tags.

One large member group is associated with a project tag with subprojects that individually protect the data. Each group member gets certified for a subproject which allows them to access only the group directories associated with their respective subproject. The PI controls access via subproject security levels and active memberships.

Note that this is a single, fully encrypted group without cryptographic isolation between sub-directories, yet it is secure and compliant. This method protects users from admin impersonation and uses access control between sub-directories.

  • Next, you need a subproject structure with security levels and security requirements. For this example, we will use:

    • Subprojects 01, 02, 03, 04, and 05.
    • Security levels 1, 2, 3, 4, and 5.
    • Security requirements 1, 2, 3, 4, and 5.
    • Security requirement 1 will correspond to security level 1, security requirement 2 to security level 2, security requirement 3 to security level 3, etc.
    • Security level 1 will correspond to subproject 01, security level 2 to subproject 02, security level 3 to subproject 03, etc.
  • Create security requirements for each security level.

  • Create a project which will be the project under the "umbrella" group.
    • Optionally, add a security level with security requirements to the project.
  • Re-login to view the project.
caution

When you apply a security level to the project under the "umbrella" group, you will force all group members of all subprojects to satisfy the security requirements for the main project which will tag the group.

  • Create multiple sub-projects and link them to the appropriate security levels previously created.
  • Re-login to view the subprojects.
    • Files per subproject will be visible only to users who have active memberships for those sub-projects and satisfy the project requirements.
note

A subproject created with no security levels is considered part of the main project by all members of the newly created subproject.

As a member with a user role in the "umbrella" group, you can only access tagged group directories with colored tags. Other group directories will only show their names with a question mark instead of the tag, indicating that they are inaccessible due to a locked session. If a group directory bears a project tag that is not part of your active session (not your current project), you cannot view its contents.

note

Active memberships in the project allow active sessions in the tagged group directory.

To test your workflow infrastructure, open a group sub-directory you are a member of that was tagged with a corresponding subproject you belong to and upload a file into the group sub-directory. All members of the subproject tagging the sub-directory should be able to view, access, and perform all actions with the sub-directory files if they:

  • Are an active group member of the "umbrella" group.
  • Are members of the subproject that tags the sub-directory.
  • Satisfy all security requirements of the subproject that tags the sub-directory.

Even though users will not be able to view the other members of the subprojects (beyond their own membership), they will be able to view the members of their "umbrella" group, who are also the members of all subprojects. This behavior may be altered by an admin in the permissions panel of each user or by using the apply profile option.

From now on, you can either manage all subproject tags of the "umbrella" group or delegate each subproject tagged sub-directory to a sub-admin.

To delegate your tagged subproject to a sub-admin:

  • Promote a subproject member to the manager role.
  • Optionally, remove yourself from the subproject to allow independent collaboration between subproject members and the new project manager.

You have successfully completed a tagged group with projects infrastructure.

Workflow 2: One Project to Span All Groups

You can create an "umbrella" project to create a set of groups corresponding to subprojects of the "umbrella" project. You will manage project membership via tagged groups. This workflow is complex and requires more thoughtful management.

You have a single project with full encryption between layers (group directories). The PI controls access to each group directory via group permissions, security levels per subproject, and project memberships. This setup is cryptographically isolated and access-controlled, which is both secure and compliant.

caution

When you apply a security level to the "umbrella" project, you will force all members of all groups of all subprojects to satisfy the security requirements for the "umbrella" project.

  • Re-login to view the project.
  • Next, you need a subproject structure with security levels and security requirements. For this example, we will use:

    • Subprojects 1, 2, 3, 4, and 5.
    • Security levels 1, 2, 3, 4, and 5.
    • Security requirements 1, 2, 3, 4, and 5.
    • Security requirement 1 will correspond to security level 1, security requirement 2 to security level 2, security requirement 3 to security level 3, etc.
    • Security level 1 will correspond to subproject 1, security level 2 to subproject 2, security level 3 to subproject 3, etc.
  • Create security requirements for each security level.

note

You can add multiple security requirements per security level, depending on your project infrastructure.

  • Certify yourself with the corresponding security requirements for the security levels belonging to each subproject.
  • Alternatively, certify each sub-admin who will run one of the subprojects.
  • Re-login to get access to the subprojects.
  • Create multiple small groups, and tag each group by its corresponding subproject.
  • At the same time, add the appropriate members to the group who will belong to the corresponding subproject. For this example, we will use:
    • Groups 1, 2, 3, 4, and 5.
    • Subprojects 1, 2, 3, 4, and 5, which we created previously.
    • Group 1 will correspond to subproject 1, group 2 to subproject 2, group 3 to subproject 3, etc.
    • Subproject 1 tags group 1, subproject 2 tags group 2, subproject 3 tags group 3, etc.
    • A random user will be added to each group corresponding to each subproject for demo purposes.
  • Re-login to view the changes in user memberships.

As a member with the user role in the "umbrella" project, you can view all files from all group directories.

As a subproject member, you can view only your group directory tagged with your subproject tag. Suppose you do not satisfy the subproject security requirements. In that case, you can view the name of the files, their type, owner, creation date and size but cannot perform the following actions: view, download, change project, share, access file history, edit or delete files.

note

Users must be notified about the group they belong to.

To test your workflow infrastructure, open a group directory you are a member of that was tagged with a corresponding subproject and upload a file into the group directory. All members of that group should be able to view, access, and perform all actions with the group files if they:

  • Are active group members.
  • Are members of the subproject that tags the group.
  • Satisfy all security requirements of the subproject that tags the group.

Even though users will not be able to view the other members of the subproject (beyond their own membership), they will be able to view the members of their group, who are also the members of their subproject. This behavior may be altered by an admin in the permissions panel of each user or by using the apply profile option.

As shown below, you can either manage the tagged groups of subprojects or delegate each tagged group to a sub-admin.

To delegate your tagged group to a sub-admin:

  • Promote a group member to the owner of the group.
  • Optionally, leave the group to allow independent collaboration between members and the group owner.
  • Next, set up the subproject membership of the group owner for the subproject that tags the group.
    • User role: allows the group owner to control the group but not to untag files from the subproject-tagged group.
    • Manager role: allows the group owner to untag files from the group, pulling them out of the subproject (this is not recommended).
note
  • A declassified file from the subproject which tags the group is still secured within the group but only visible and accessible to the group members.
  • When you tag a file in the subproject-tagged group directory with the "umbrella" project tag, the group members who have access to that directory will also have access to the file, since it belongs to the parent project and is located within the appropriate group directory.

In contrast, the group owner can move a file outside the group but still have it tagged with the subproject.

  • The file will still be classified by the subproject tag but not visible to other group members except the group owner.
tip
  • For clean data management purposes, you should never tag random files with other project tags in a tagged group. This results in many tagged files in the group directory structure that have nothing to do with the project that tags the group.
  • Do not create groups with people who do not know each other; we recommend creating groups with team members. The frontend will only allow you to access other users' resources if they are on your team.
  • Align team boundaries with group boundaries.
caution
  • If a group has a project tag, you cannot get the group key unless the project is active in your session.
  • If the project tagging the group is not active, there is no way to access the group.
  • After user permission changes in the project, you must re-login to update the current session.

You have successfully completed a tagged project with groups infrastructure.

Other Hybrid Workflows

Besides the two workflows above, there are many other hybrid workflows you can create to fit your project needs and customize your infrastructure.

Hybrid Example 1:

  • Create an "umbrella" group to create a project with subprojects for the data.
  • Separately, create an "umbrella" project.
  • Create a set of groups and tag them by the "umbrella" project.
  • Create subdirectories in each group directory and tag them with the subprojects of the project that tags the "umbrella" group.
  • You will manage project memberships via tagged groups and subproject memberships via project tags at the same time.

Hybrid Example 2:

  • Create an "umbrella" group to create an "umbrella" project.
  • Create a set of subprojects under the "umbrella" project.
  • Create small groups tagged by the "umbrella" project.
  • Each small group may have subdirectories tagged independently by the previously created subprojects.

To conclude, by leveraging the power of project tags and groups, you can establish an efficient workflow for your research infrastructure.

· 3 min read
Thomas Samant
Betuel Gag
Alin Dobra

1. Security First

  • Security is the top priority, and the architecture is designed with security as the central consideration.
  • A comprehensive approach to security is taken, going far beyond perimeter protection with Firewall, VPN, and intrusion detection systems.
  • Zero-trust is implemented using cryptography rather than solely relying on access control lists (ACLs).
  • The goal is to architect a complete solution rather than "patching" security vulnerabilities.
  • Features are only added if they do not compromise security.
  • There is no notion of public/unsecured data; explicit sharing is the only allowed method.
  • Default shut is favored over default open.
  • Public-key cryptography (PKC) is the core concept, with all security mechanisms based on PKC.
  • Password-based authentication is not used, and extensive use of cryptography is implemented.
  • End-to-end encryption is utilized, and each resource is independently protected.
  • Encryption keys are managed using PKC, and cryptographic isolation is enforced.

2. Separation of Duties

  • Admin power is decentralized uniformly throughout the system to prevent data breach entry points, even if an admin account is compromised.
  • Access control and end-to-end encryption are used together, with the addition of two-factor authentication (2FA) for enhanced security.
  • Extreme flexibility is provided in terms of operating system support (Windows and Linux), tooling support (AI + GPUs), and the full software stack.
  • The overhead for small and large projects is kept minimal.
  • Researchers are empowered to manage and control their data and workflows, decentralizing management and minimizing the role of admins.
  • Admins define mechanisms and monitor usage but have no access to user data.

3. Mechanism Instead of Policy

  • The focus is on enforcing behavior through mechanisms rather than relying solely on policies.
  • Mechanisms are designed to prevent and deter bad behavior, with system-enforced capabilities.
  • Automated system-enforced mechanisms reduce the risk of human error and ensure consistent adherence to security protocols.
  • System-enforced mechanisms severely reduce the number of FTEs needed to "police" behavior.
  • Policies should only dictate the mechanisms used for enforcement.

4. Support Diverse Research Workflows

  • tiCrypt supports diverse research workflows with Windows and Linux OS support, AI + GPU capabilities, and compatibility with various hardware devices.
  • It provides flexibility in deployment, allowing on-premises bare-metal servers, cloud deployment (AWS, Azure, Google Cloud, etc.), hyper-converged solutions (Nutanix, Red Hat, etc.), and hybrid models (on-prem + cloud).
  • It accommodates non-uniformity and can "borrow" VM hosts from both cloud and HPC clusters.
  • Compatibility with existing security and infrastructure solutions such as Duo, Shibboleth, firewalls, and VPNs is ensured.

5. Detailed Auditing

  • Auditing is fully integrated into the secure system, addressing compliance requirements directly.
  • Different projects may have specific auditing requirements, and the system caters to those needs.
  • The tiCrypt solution includes an audit system that produces compliance reports, maintains a very detailed audit trail, and retains audit logs for the entire history of the system.
  • Reports allow audit predictions of data behavior.

Conclusion

  • Partial success can be achieved with significant effort, but there may be system blind spots and limited supported workflows. tiCrypt is the result of a collaboration with the University of Florida over ten years, designed to address all compliance and security needs, making it a proven security unicorn.
  • The three pillars of compliance are strong security, system enforcement, and comprehensive auditing and reporting.
  • All features are designed to meet the rigorous compliance standards of public institutions.

· 7 min read
Alin Dobra

Secure virtual machines in tiCrypt run in almost complete isolation from each other and the environment. The main motivation for this isolation is to provide a high degree of security.

Setup

In this blog we will refer to a number of tiCrypt components that participate in the activities described here. They are:

  • ticrypt-connect is the application running on the user's device
  • ticrypt-frontend runs in the browser on the user's device and is served by ticrypt-connect
  • ticrypt-backend is the set of tiCrypt services running on the Backend server
  • ticrypt-rest is the entry point into ticrypt-backend and uses the REST protocol.
  • ticrypt-proxy mediates communication between components by replicating traffic between two other connections.
  • ticrypt-allowedlist mediates access to external licensing servers
  • ticrypt-vmc controls the secure VM and is responsible for all VM related security mechanisms

Overview

The VM security is provided through the following mechanisms:

  1. A reverse mechanism to talk to the tiCrypt frontend using the ticrypt-proxy component
  2. The only open port on a VM is port 22, used for traffic tunneling from tiCrypt Connect
  3. A special mechanism to reach port 22 of VMs, with no general mechanism to reach any other port from ticrypt-backend
  4. Strictly controlled outgoing traffic through the ticrypt-allowedlist component

We take each mechanism in turn and provide details on how it works.

ticrypt-proxy mediated VM communication

Because all ports except 22 are blocked and ticrypt-vmc (the VM controller) controls this port for port forwarding, there is no way to contact a secure VM directly. Instead, a mechanism fully controlled by ticrypt-vmc is employed. The mechanism relies on ticrypt-proxy (a component of ticrypt-backend) to mediate communication with the user in the following manner:

  1. ticrypt-vmc maintains a web-socket connection with ticrypt-proxy at all times.
  2. When a user wants to connect to a VM, ticrypt-frontend creates a web-socket to match the existing VM one in ticrypt-proxy. All traffic between the two matching web-sockets is replicated by ticrypt-proxy. Immediately, ticrypt-vmc creates a new web-socket for future communication.
  3. The first messages sent by both ticrypt-frontend and ticrypt-vmc are digital-signature proofs of identity and Diffie-Hellman secret-key negotiation (sketched below).
  4. If the digital signature fails or the user is not authorized (as determined by ticrypt-vmc), the connection is immediately closed. Similarly, if the VM validation fails, ticrypt-frontend closes the connection.
  5. All subsequent messages (in both directions) are encrypted using the negotiated key. Except for the setup messages, all traffic is hidden from ticrypt-proxy and any other part of the infrastructure.
  6. All commands, terminal info, keys, etc. are sent through this encrypted channel.
info

The only message sent in the clear to ticrypt-vmc contains both the authentication (digital signature) and the key negotiation. Any other message or functionality is immediately rejected.

info

The communication does not rely on listening on regular ports and can only be mediated by ticrypt-proxy.

info

The public key of the VM owner cannot change after it is learned by ticrypt-vmc; hijacking the communication therefore requires compromising the user's private key.

tip

Most of the VM functionality only requires this communication method and can only be performed using this mechanism. The only exception is application port tunneling.
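
To make steps 1-5 concrete, the sketch below shows one plausible shape for the first clear-text message and the session-key derivation. The specific primitives (X25519 for Diffie-Hellman, Ed25519 for identity signatures, HKDF-SHA256 for key derivation) and the message framing are illustrative assumptions, not tiCrypt's confirmed choices.

```python
# Hypothetical sketch of the ticrypt-frontend / ticrypt-vmc handshake.
# Primitives are assumptions for illustration (X25519, Ed25519, HKDF-SHA256).
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def make_hello(identity_key: Ed25519PrivateKey):
    """Build the single clear-text message: an ephemeral DH public key
    signed with the sender's long-term identity key (proof of identity)."""
    ephemeral = X25519PrivateKey.generate()
    ephemeral_pub = ephemeral.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    return ephemeral, ephemeral_pub, identity_key.sign(ephemeral_pub)

def derive_session_key(ephemeral: X25519PrivateKey, peer_pub: bytes,
                       peer_identity: Ed25519PublicKey,
                       peer_sig: bytes) -> bytes:
    """Verify the peer's signature over its ephemeral key, then derive the
    shared session key; a failed verification raises, and the connection
    is closed immediately (step 4)."""
    peer_identity.verify(peer_sig, peer_pub)  # raises InvalidSignature
    shared = ephemeral.exchange(X25519PublicKey.from_public_bytes(peer_pub))
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"ticrypt-session").derive(shared)
```

All later traffic (step 5) would then be encrypted under the derived key, so ticrypt-proxy only ever replicates ciphertext.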

Application port tunneling

To allow rich application deployment in VMs, a generic mechanism is provided that tunnels TCP traffic on specific ports from the VM to the user's device. The mechanism is highly controlled (as explained below) but, in principle, can make any network application accessible.

note

The application tunneling is similar to the SSH reverse-port-forwarding mechanism but uses TLS.

The communication pathway relies on a number of independent segments, any of which can prevent the mechanism from functioning. All traffic on these segments is proxied/forwarded, secured overall with TLS encryption, and authenticated with digital signatures (not passwords). At no point can any intermediate component intercept the communication without breaking TLS.

note

Access to VMs is limited to port 22, not SSH as a service. ticrypt-connect mediates port forwarding of any desired port using the mechanism described, with ticrypt-vmc listening on port 22. For example, port 3389 can be forwarded for RDP. Usually, multiple such ports are forwarded to allow richer functionality.

Communication setup

Initiating the forwarding tunnel setup requires the following steps:

  1. ticrypt-frontend, using an authenticated session, tells ticrypt-rest (the entry point of ticrypt-backend) to create the pathway to a specific VM.
  2. The request is validated, and ticrypt-proxy is informed of the operation.
  3. ticrypt-proxy sets up a listening endpoint on one of the designated ports (usually 6000-6100). The endpoint is strictly scoped and only allows access from the IP address used to make the request.
  4. The ticrypt-frontend is informed of the allocated port.
  5. ticrypt-frontend asks ticrypt-connect to generate a TLS certificate for authentication and connection encryption (a sketch follows below).
  6. ticrypt-frontend tells ticrypt-vmc to accept the application forwarding and provides the authentication TLS certificate.
  7. ticrypt-vmc replies with a list of ports that need to be tunneled.
  8. ticrypt-frontend tells ticrypt-connect to start the connection and use the previous certificate.
  9. ticrypt-vmc, upon connection creation, checks that the digital signature is correct and initiates the TLS-mediated tunneling.
  10. Traffic to/from the local port on the user's device is tunneled and re-created within the VM to allow application access.
tip

A special case is provided for SFTP over port 2022 if the feature is enabled in ticrypt-vmc. It can be used to move large amounts of data from the user's device to the VM.
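
As a rough illustration of step 5, a short-lived, self-signed certificate is one plausible way ticrypt-connect could produce the authentication certificate. The key type, names, and one-hour lifetime below are assumptions, not tiCrypt's actual choices.

```python
# Hypothetical sketch of step 5: ticrypt-connect generates a short-lived,
# self-signed TLS certificate that ticrypt-vmc later checks when the tunnel
# connection is created (step 9). Names and lifetime are assumptions.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())            # tunnel key pair
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "ticrypt-tunnel")])
now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                                   # self-signed
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(hours=1))  # short-lived
    .sign(key, hashes.SHA256())
)
```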

Communication pathway

The communication pathway for port tunneling is:

  1. ticrypt-connect accesses the allocated port controlled by ticrypt-proxy. The connection is pinned to the IP address the setup request came from (see the sketch below).
  2. ticrypt-proxy forwards the request to the VM Host endpoint (explained below).
  3. The VM Host endpoint forwards the traffic to port 22 of the correct VM.
  4. ticrypt-vmc listens on port 22 and runs the TLS protocol with port tunneling.
tip

Port 22 is deliberately selected to ensure that no SSH server can be deployed there to allow normal access to the VM. The traffic on port 22 is the TLS protocol under ticrypt-vmc control, not the SSH protocol.
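
The relaying in steps 1-2 can be pictured as below: a listener on the allocated port accepts connections only from the IP address pinned at setup and replicates bytes in both directions without ever decrypting them. The addresses and ports are assumptions for illustration.

```python
# Sketch of ticrypt-proxy's per-tunnel behavior: an IP-pinned listener that
# relays traffic toward the VM Host endpoint. All values are assumptions.
import asyncio

ALLOWED_IP = "203.0.113.7"                    # IP from the setup request
UPSTREAM = ("vmhost.example.internal", 6022)  # hypothetical VM Host endpoint

async def pump(reader, writer):
    """Copy bytes one way until EOF, then close the writer."""
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(client_reader, client_writer):
    # Enforce the IP pinning established during the tunnel setup.
    if client_writer.get_extra_info("peername")[0] != ALLOWED_IP:
        client_writer.close()
        return
    up_reader, up_writer = await asyncio.open_connection(*UPSTREAM)
    # Replicate traffic in both directions; TLS stays end-to-end.
    await asyncio.gather(pump(client_reader, up_writer),
                         pump(up_reader, client_writer))

async def main():
    server = await asyncio.start_server(handle, port=6000)  # allocated port
    async with server:
        await server.serve_forever()

asyncio.run(main())
```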

Secure VM isolation

To ensure VM security, only port 22 is open inbound. Furthermore, all outbound traffic is limited to ticrypt-rest traffic unless an exception is provided by ticrypt-allowedlist (next section).

The specific mechanisms deployed for VM isolation are:

  1. All traffic to ports other than 22 is blocked internally by the VM, isolating any other access. Note that application access is provided by the port tunneling above.
  2. All traffic to the VM IP on ports other than 22 is blocked using firewall rules on the VM host.
  3. External traffic to VMs is not routed by the VM host (with the exception of port 22).
  4. Traffic from VMs to the outside is blocked unless the ticrypt-allowedlist mechanism is used.
info

The specific mechanism that allows access on port 22 but no other traffic is to block all traffic routing while providing a forwarding mechanism from a port range on the VM host to port 22 of the IP range dedicated to the secure VMs. There is simply no way for a server outside the specific VM host to access any other VM port.
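
As a rough sketch of this idea (not tiCrypt's actual configuration), the VM-host rules could be driven along these lines; the subnet, port range, and VM addresses are hypothetical.

```python
# Illustrative VM-host firewall setup: drop routed traffic to secure VMs by
# default, then DNAT one dedicated host port per VM to that VM's port 22.
# Subnet, port base, and VM IPs are made-up values.
import subprocess

VM_SUBNET = "10.10.0.0/24"                  # assumed secure-VM IP range
FORWARD_BASE = 6000                         # assumed host port range start
VMS = {0: "10.10.0.10", 1: "10.10.0.11"}    # slot -> VM IP (assumed)

def iptables(*args):
    subprocess.run(["iptables", *args], check=True)

# Default: no routed traffic reaches the secure VMs.
iptables("-A", "FORWARD", "-d", VM_SUBNET, "-j", "DROP")

for slot, ip in VMS.items():
    host_port = str(FORWARD_BASE + slot)
    # Forward exactly one host port to this VM's port 22...
    iptables("-t", "nat", "-A", "PREROUTING", "-p", "tcp",
             "--dport", host_port,
             "-j", "DNAT", "--to-destination", f"{ip}:22")
    # ...and accept only that forwarded traffic, inserted ahead of the DROP.
    iptables("-I", "FORWARD", "-p", "tcp", "-d", ip,
             "--dport", "22", "-j", "ACCEPT")
```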

tip

Outbound traffic from secure VMs is blocked because it could be used to exfiltrate data, resulting in data breaches.

Licensing server access

In order to use most commercial software, access to external licensing servers needs to be provided. Most of the time, the licensing servers reside within the organization, but occasionally, the servers are external. Recognizing this, tiCrypt provides a strictly controlled mechanism to allow access to licensing servers.

The ticrypt-allowedlist component mediates setting up the mechanism that allows access to licensing servers. It does so by manipulating:

  1. Firewall and port forwarding rules on the server running ticrypt-backend
  2. DNS replies to requests that VMs make

The mechanism is strict and very specific. Unless a mapping is provided using ipsets and firewall rules, any outgoing traffic from VMs is blocked.
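
For illustration only, an allowlist entry for a single external licensing server might be created along these lines; the set name, server address, and port are hypothetical, and the DNS side of the mechanism is not shown.

```python
# Hypothetical allowlist entry using the ipset + firewall-rule approach:
# outgoing VM traffic is accepted only when the destination matches the set.
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

LICENSE_SERVER = "198.51.100.20"   # hypothetical external licensing server
LICENSE_PORT = "27000"             # hypothetical license-manager port

run(["ipset", "create", "tc-allowlist", "hash:ip,port", "-exist"])
run(["ipset", "add", "tc-allowlist",
     f"{LICENSE_SERVER},tcp:{LICENSE_PORT}", "-exist"])
# Match destination IP and destination port against the set ("dst,dst").
run(["iptables", "-I", "FORWARD", "-m", "set",
     "--match-set", "tc-allowlist", "dst,dst", "-j", "ACCEPT"])
```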

The ticrypt-allowedlist component works in conjunction with ticrypt-frontend to provide a convenient way to enable/disable access. This mechanism requires SuperAdmin access.