
· 3 min read
tiCrypt Team

Drive Access for VMs in tiCrypt

In the realm of research, where data integrity and control are paramount, tiCrypt's drive access modes are essential tools for researchers. With read-only access, researchers maintain strict control over who can modify their valuable data, ensuring its integrity remains intact. Conversely, read-write access grants researchers the ability to edit, collaborate, and refine their data with precision. By leveraging these drive modes, researchers maintain full control over their data, fostering a secure and collaborative environment for their research endeavors.

Read-only Drives

Sharing a drive in read-only mode offers researchers the benefit of preserving data integrity while facilitating collaboration. By withholding editing privileges from the users the drive is shared with, this mode ensures that research findings remain accurate and reliable. It enables seamless teamwork and knowledge sharing without compromising data security. Thus, sharing a drive in read-only mode empowers researchers to collaborate effectively while safeguarding their valuable data assets.

Read-write Drives

Sharing a drive in read-write mode offers researchers the invaluable benefit of seamless collaboration and real-time data editing. By granting edit access to all authorized users, this mode facilitates dynamic teamwork, allowing researchers to collectively refine, analyze, and manage shared data. With everyone on the team empowered to contribute and make edits, collaboration becomes more efficient, accelerating the pace of research and fostering innovation. Ultimately, read-write drives enable researchers to work together seamlessly, maximizing productivity and advancing scientific discovery.

Read-only Mode

  • Restricts unauthorized modifications, reducing cybersecurity risks.
  • Users can view files but cannot edit, ensuring data integrity.
  • Facilitates secure collaboration while safeguarding data integrity.

Read-write Mode

  • Enables editing, collaboration, and comprehensive data management.
  • Allows both reading and writing operations for efficient collaboration.
  • Provides control over drive settings for streamlined project management.
tip

In tiCrypt, when working on a collaborative project, consider granting multiple team members read-write access to your drive.

Granting Users Drive Access

Drive modes are applied when sharing a drive with other users or groups. Here's how to do it:

  1. Navigate to the drives table under Virtual Machines.
  2. Select the drive you want to share.
  3. Click the "Share" icon.
  4. Enter the names of users/groups you wish to share the drive with.
  5. Choose either "read-only" or "read-write" access.
  6. Click "Share" to apply the settings.
tip

For more detailed instructions, refer to the share a drive section in tiCrypt's documentation.

Understanding and properly applying drive modes in tiCrypt is essential for effective data management and collaboration. By leveraging read-only and read-write modes appropriately, users can ensure data security, access control, and seamless collaboration within the application.

· 7 min read
Alin Dobra

Adding batch-processing capabilities to tiCrypt is one of the most requested new features. It will allow large computational jobs to be executed within the secure environment provided by tiCrypt, with full cryptographic isolation and security, achieving batch processing in a fully compliant CMMC/NIST environment.

A natural solution is integrating tiCrypt with Slurm, the most popular batch-processing system. This document provides a technical discussion of the integration and security challenges.

Slurm Overview

Slurm is a batch-processing system that allows users to submit jobs to a cluster of machines. The jobs are queued and executed when resources become available. Slurm provides a rich set of features, including:

  • sophisticated job scheduling
  • resource management
  • job accounting and reporting (including billing)
  • job execution, most notably for MPI jobs
  • job monitoring and control

For secure computation, especially when covered by CUI restrictions, Slurm is a poor choice. While building somewhat secure systems around Slurm is possible, it is difficult and often results in significant performance degradation. The main difficulty to overcome is that Slurm is designed to run jobs in the same security domain as the Slurm controller; it does not protect against malicious code running on the same machines as the Slurm infrastructure. Moreover, Slurm cannot isolate jobs from each other or from the infrastructure.

Ideal tiCrypt-Slurm Integration

The ideal integration of tiCrypt with Slurm would provide the following features:

  • Isolation of jobs from each other. This is the most critical feature: jobs cannot interfere with each other.
  • Isolation of jobs from the infrastructure. This is also very important. Ideally, Slurm should not be aware of the code being executed, nor should it have access to the data processed.
  • Integration with tiCrypt. The integration should be as seamless as possible. In particular, data encrypted by tiCrypt should be seamlessly integrated.
  • Minimal performance degradation. The integration should not significantly degrade the performance of Slurm. Ideally, it should not degrade the performance at all.
  • Keep the excellent Slurm capabilities. tiCrypt should rely on Slurm's scheduling, resource management, job accounting and reporting, and job monitoring and control capabilities.

Architecture of the solution

The above goals seem difficult to achieve because most of Slurm's capabilities must be retained, but the security must be "fixed". The key idea is to separate Slurm functionality into two parts:

  1. Global Slurm Scheduler. This is the part of Slurm responsible for scheduling jobs, managing resources, accounting, and reporting. This component knows who executes the jobs and what resources they require, tracks the jobs, etc. It is not aware of the code being executed, nor does it have access to the data processed. This Slurm instance will run globally, outside tiCrypt, and interact with the tiCrypt Backend through the Slurm REST API.

  2. Local Slurm Executor. This is the part of Slurm that is responsible for executing the jobs. This component is aware of the code that is being executed and has access to the data processed. It can also provide Slurm with advanced execution capabilities, such as MPI. This Slurm instance will run locally, inside tiCrypt-managed Virtual Machines, and interact only with tiCrypt VM Controller. Each CUI project will have its own local Slurm Executor, managed by the tiCrypt VM Controller.

The Global Slurm Scheduler will not be aware of the Local Slurm Executors, and vice versa. The interaction between the two will be through the tiCrypt Backend and the tiCrypt VM Controller. The tiCrypt VM Controller will hide most details from the tiCrypt Backend (such as the code being executed, the data processed, etc.) and only provide information on the resources needed. This ensures that the Global Slurm Scheduler is not aware of the code being executed and has no access to the data processed. The tiCrypt Backend will hide global details such as what other jobs run in the system, who is running them, etc. This ensures that the Local Slurm Executor is not aware of what else is running in the system, who is running it, etc.

The main mechanism tiCrypt uses to "trick" Slurm into operating as described above is extensive use of Slurm's plugin architecture, together with a rewrite of the job and statistics reporting mechanisms. The following two sections describe the specific mechanisms used to achieve the above goals.

Global Slurm Scheduler

The Global Slurm Scheduler will be a full-fledged Slurm instance, running outside of tiCrypt but side-by-side with the tiCrypt backend. Specifically, slurmctld, slurmdbd, and slurmrestd will run on the same machine as the tiCrypt backend. It will be configured separately from tiCrypt and can use any of the Slurm features, most importantly various plugins. Slurm will be configured to allow only the tiCrypt backend to submit jobs; specifically, the Slurm API will be guarded against any other access.

For each of the VM Hosts used by tiCrypt for batch processing, a Slurm node will be configured. Specifically, slurmd will run on each VM Host. The tiCrypt backend will feed "fake" jobs to Slurm to simulate the actual execution (see below): special programs that do nothing except coordinate with the tiCrypt Backend will be submitted to Slurm. This technique is similar to the one used by the PCOCC system. Custom LUA plugins will be used to intercept the job execution (a quirk of Slurm: this is the only way to block job execution) and to "adjust" the job statistics based on the actual execution (see below).
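
As a rough illustration of this interaction, the sketch below shows how a backend-side component could submit such a placeholder job through slurmrestd. It is a minimal sketch under stated assumptions, not tiCrypt code: the API version, authentication headers, payload fields, and the placeholder script path all depend on the deployed Slurm release.

```python
# Hypothetical sketch: submitting a placeholder ("fake") job to the Global
# Slurm Scheduler via slurmrestd. Endpoint path, API version, and payload
# fields are assumptions that vary by Slurm release.
import requests

SLURMRESTD_URL = "http://localhost:6820"   # assumed slurmrestd address
API = "/slurm/v0.0.39"                     # assumed API version

def submit_placeholder_job(user: str, token: str, partition: str,
                           cpus: int, mem_mb: int) -> int:
    """Submit a do-nothing job that only reserves resources and waits for
    coordination with the tiCrypt Backend (the real work runs elsewhere)."""
    payload = {
        "job": {
            "name": "ticrypt-placeholder",
            "partition": partition,
            "tasks": 1,
            "cpus_per_task": cpus,
            "memory_per_node": mem_mb,
            "current_working_directory": "/tmp",
            "environment": ["PATH=/usr/bin:/bin"],
        },
        # The script does nothing except coordinate with the tiCrypt Backend.
        "script": "#!/bin/bash\n/opt/ticrypt/bin/placeholder-wait\n",
    }
    resp = requests.post(
        f"{SLURMRESTD_URL}{API}/job/submit",
        headers={"X-SLURM-USER-NAME": user, "X-SLURM-USER-TOKEN": token},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["job_id"]
```

The returned job id is what such a component would later use to reconcile the placeholder job with the actual execution statistics reported by the LUA plugins.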

tiCrypt Controlling VM

For each CUI project, tiCrypt will create an interactive VM (CVM) that will be used to run the jobs for that project. This VM is fully controlled by the project members and will be used to manage the Local Slurm Executor, provision all the security, and interact with the tiCrypt Backend. This mechanism is similar to the management of tiCrypt Interactive Clusters. Specifically, the CVM will:

  • Interact with tiCrypt users using the same mechanism as the regular tiCrypt interactive VMs.
  • Provide the distributed file system used by the batch execution. This will use the same mechanism as the tiCrypt Interactive Clusters.
  • Provide secure communication between the CVM and the worker VMs using automatically provisioned VPN (using StrongSwan).
  • Manage the Local Slurm Executor and local job submission.
  • Ask tiCrypt Backend for VMs to be provisioned for batch processing. This is based on submitted Slurm jobs.
  • Inform Local Slurm Executor when resources are available and jobs can be executed.
  • Decommission the VMs when the batch processing is done.

Users with access to the CVM will be able to submit jobs to the Local Slurm Executor using sbatch and srun commands.
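
For example, a project member could wrap the submission in a small helper like the one below. This is a hypothetical sketch that assumes only standard sbatch and squeue behavior; the script name and resource flags are placeholders.

```python
# Minimal sketch of submitting a batch job from the CVM to the Local Slurm
# Executor. Only standard sbatch/squeue behavior is assumed.
import re
import subprocess

def submit(script_path: str, cpus: int = 4, mem: str = "8G") -> str:
    """Run sbatch and return the Slurm job id parsed from its output."""
    out = subprocess.run(
        ["sbatch", f"--cpus-per-task={cpus}", f"--mem={mem}", script_path],
        check=True, capture_output=True, text=True,
    ).stdout
    # sbatch normally prints: "Submitted batch job <id>"
    return re.search(r"Submitted batch job (\d+)", out).group(1)

job_id = submit("analysis.sbatch")   # hypothetical job script
print(subprocess.run(["squeue", "-j", job_id],
                     capture_output=True, text=True).stdout)
```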

Local Slurm Executor

The Local Slurm Executor will be a full-fledged Slurm instance, running inside tiCrypt-managed Virtual Machines. Specifically, slurmctld (and, if strictly necessary, slurmdbd) will run on the CVM, and slurmd will run on each VM that is part of the batch job (managed by local VM Controllers coordinated via the CVM). The provisioning and execution will be controlled by the tiCrypt CVM. Via the Slurm command-line tools or the tiCrypt Frontend integration, users can see the jobs' status, cancel them, etc.

To provide integration with the rest of the tiCrypt infrastructure and the Global Slurm Scheduler, LUA plugins and other tiCrypt mechanisms will intercept the job submission, execution, and statistics reporting.

Challenges and Solutions

The above plan might sound complicated, but we think the complexity is manageable. The main challenges are:

  • Learning Slurm. While the Tera Insights team has no experience running Slurm at scale, we have extensive experience dealing with complex systems and "coding around" their limitations and quirks.
  • Using LUA plugins. The Slurm documentation on plugins is not particularly extensive, but we can draw much inspiration from the PCOCC system implementation.
  • Adding new features to tiCrypt. The tiCrypt team has extensive experience adding new features, following a well-established process that covers design and implementation.

· 7 min read
Thomas Samant

tiCrypt Vault Storage

The tiCrypt Vault offers a file system-like facility that allows files and directories to be created and used. All the metadata, such as file properties, directory entries, access information, and decryption keys, are stored in the MongoDB database used by the tiCrypt-file storage service.

In the tiCrypt Vault, the file content is broken into 8 MB chunks, each encrypted independently using the file key. On disk, each chunk occupies 8 MB + 64 bytes (unless it is the last, incomplete chunk). The extra 64 bytes contain the IV (the initialization vector for AES encryption). For each file, the chunks are numbered from 0 onwards. The chunks are stored in a directory structure based on the file ID, visible only to the tiCrypt backend (preferably not to the VM hosts). The storage location can be configured in the configuration file of the tiCrypt-storage service.
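
The arithmetic of this layout is easy to reproduce. The sketch below only illustrates the numbers quoted above, assuming the final partial chunk also carries its own 64-byte IV; it is not tiCrypt code.

```python
# Back-of-the-envelope sketch of the Vault chunk layout: 8 MB of plaintext per
# chunk plus 64 bytes of per-chunk overhead (the IV) on disk. Assumes the last,
# partial chunk also carries its 64-byte IV.
CHUNK_PLAINTEXT = 8 * 1024 * 1024   # 8 MB of file content per chunk
CHUNK_OVERHEAD = 64                 # extra bytes holding the IV

def on_disk_size(file_size: int) -> tuple[int, int]:
    """Return (number_of_chunks, total_bytes_on_disk) for a file."""
    full, rest = divmod(file_size, CHUNK_PLAINTEXT)
    chunks = full + (1 if rest else 0)
    return chunks, file_size + chunks * CHUNK_OVERHEAD

# A 100 MB file -> 13 chunks (12 full + 1 partial), 100 MB + 13 * 64 bytes on disk.
print(on_disk_size(100 * 1024 * 1024))
```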

tiCrypt is not opinionated on the file system or integration with other systems for this storage, but for compliance reasons, it is recommended that the access is restricted to only the tiCrypt backend.

Without the decryption keys that only users can recover, the content of the chunk files is entirely indecipherable. It is safe to back up these files using any method (including non-encrypted backup, cloud, etc.). The strong encryption, coupled with the unavailability of the key to even administrators, ensures that from a compliance point of view this content can be replicated outside the secure environment.

tiCrypt Encrypted Drives in Libvirt

tiCrypt virtual machines use a boot disk image that is not encrypted and one or more encrypted drives. Both disk images and encrypted drives are stored as files in the underlying distributed file system available to all Libvirt host servers. The specific mechanism uses the notion of Libvirt disk pools. Independent disk pools can be defined for disk images, encrypted drives, drive snapshots (for backup), ISOs, etc. Each pool is located in a different directory within the distributed file system.

Libvirt (and tiCrypt, by extension) is agnostic to the choice of file system where the various disk pools are defined. A good practice is to place the different disk pools on a distributed file system visible to all VM hosts (preferably in the same location) and to mount the file system on all VM hosts. Any file system, including NFS, BeeGFS, Lustre, etc., can be used.

As part of the virtualization mechanism, Libvirt makes the files corresponding to drives stored on the host file system appear as devices to the OS running within the VM. Any writes to the virtual device get translated into changes in the underlying file. The situation is somewhat more complex when snapshots are used since multiple files on the disk will form the virtual device.

Encrypted Drive Creation

Upon drive creation, tiCrypt instructs Libvirt to create a drive. This results in a file being created in the underlying file system in the corresponding drive pool. Two drive formats are supported: raw, with extension .raw, and QCOW2, with extension .qcow2.

For a raw drive, a file as large as the indicated drive size gets created. The file content is initialized with 0s (corresponding to a blank drive). Writes to the virtual drive result in writes to the corresponding file at the same position (e.g., if block 10244 of the virtual drive is written, block 10244 of the raw file gets changed as well).

For a QCOW2 drive, only changed blocks get written; the file format is quite complex and supports advanced features like copy-on-write. The initial file size is small (low megabytes) when the drive is new; the file grows in size as more data is written to the disk.

The qemu-img tool can be used to convert between the two formats. Usually, tiCrypt sets the drives up without the need for this tool.
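
If such a conversion is ever needed, it can be scripted as below. This is a generic qemu-img sketch with placeholder file names, not part of the tiCrypt tooling.

```python
# Illustrative use of qemu-img to inspect a drive file and convert the fully
# allocated raw format into the sparse QCOW2 format. File names are placeholders.
import subprocess

def qcow2_from_raw(raw_path: str, qcow2_path: str) -> None:
    # Show size/format information for the source file.
    subprocess.run(["qemu-img", "info", raw_path], check=True)
    # Convert the raw image into a QCOW2 image (only written blocks are stored).
    subprocess.run(
        ["qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw_path, qcow2_path],
        check=True,
    )

qcow2_from_raw("drive-1234.raw", "drive-1234.qcow2")
```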

A newly created tiCrypt disk is blank. No formatting of the drive or any other preparation has been performed. The drive will be formatted the first time it is attached to a tiCrypt virtual machine. The main reasons for this are:

  • The encryption/decryption key for the encrypted drive is kept secret from the infrastructure. This includes the tiCrypt backend, Libvirt, and underlying VM host.
  • The choice of the file system to use is delegated to the underlying operating system and the tiCrypt VM Controller. Libvirt is not aware, nor does it need to be aware, of the actual file system on the drive. For Linux-formatted drives, inspecting the files backing the drives reveals nothing: there is no way to tell even whether the drive is formatted at all, let alone any information on the content or the type of file system.

Encrypted Drive Formatting

As far as Libvirt is concerned, only low-level disk reads and writes exist. Whatever operation the operating system performs gets translated into read/write operations on the virtual disk; in turn, these result in read/write operations on the underlying file in the disk pool.

In Windows, a standard NTFS file system is created, but immediately (before the drive is made available to the user), BitLocker is turned on. This ensures that all files created subsequently are encrypted. BitLocker uses so-called "full volume encryption," i.e., all the new data will be encrypted, including the metadata. An external tool scanning the backing file can determine that the drive is an NTFS formatted drive and read all non-encrypted content. Since tiCrypt turns on encryption immediately, minimal information is visible.

In Linux, the LUKS full-disk encryption mechanism is used. It essentially places an encryption block layer between the raw drive (a virtual drive in this case) and the file system (usually EXT4). This way, absolutely all information on the disk is encrypted. An external tool can only tell which disk blocks have been written to (are non-zero) but can derive no information about the rest of the content.
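
Conceptually, the first-attach formatting on Linux boils down to the steps sketched below. This is only an illustration of the LUKS-over-virtual-disk idea, with placeholder device names and simplified key handling; the actual work happens inside the VM, driven by the tiCrypt VM Controller, with a key the infrastructure never sees.

```python
# Conceptual sketch of first-attach formatting: LUKS layer on the blank virtual
# disk, then EXT4 on top. Device names, mount point, and key handling are
# placeholders, not the tiCrypt implementation.
import subprocess

def format_encrypted_drive(device: str, mapper_name: str, passphrase: bytes) -> None:
    def cryptsetup(*args: str) -> None:
        # --key-file - reads the key material from stdin, keeping it off the command line.
        subprocess.run(["cryptsetup", "--batch-mode", "--key-file", "-", *args],
                       input=passphrase, check=True)

    cryptsetup("luksFormat", device)              # write the LUKS header onto the blank disk
    cryptsetup("open", device, mapper_name)       # expose the decrypted block device
    subprocess.run(["mkfs.ext4", f"/dev/mapper/{mapper_name}"], check=True)  # file system on top
    subprocess.run(["mount", f"/dev/mapper/{mapper_name}", "/mnt/drive"], check=True)
```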

tiCrypt Non-secure Drives

Two types of non-secure drives are supported in tiCrypt: ISOs and read-only NFS shares.

Attaching ISOs

ISOs are made available using read-only CD-ROM devices. As such, they are always safe to mount in a secure tiCrypt VM. Linux and Windows can readily mount such ISOs and make them available as "drives."

ISOs are particularly useful if the NFS shares described below are not used. For example, Python or R packages could be made available as ISOs so that various VMs can install the required packages locally.

Attaching NFS file systems

By allowing, through firewall rules, access to a local NFS server, various tiCrypt VMs can mount a common file system for the purpose of accessing public (non-secure) data, packages, software, etc.

From a security point of view, the NFS server should export the data as read-only. The tiCrypt secure VMs should never be allowed to mount a read-write NFS share, since data could be exfiltrated through it, defeating the considerable effort tiCrypt puts into preventing data exfiltration. This would almost certainly make the tiCrypt deployment non-compliant.
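
As an illustration, a read-only setup could look like the sketch below: the export is marked ro on the server, and the VM mounts it read-only as well. The export path, network range, server address, and mount point are placeholders, not tiCrypt defaults.

```python
# Hedged sketch of a read-only NFS setup matching the recommendation above.
# All paths and addresses are placeholders.
import subprocess

EXPORTS_LINE = "/srv/public-data 10.0.0.0/24(ro,root_squash,no_subtree_check)\n"

def export_read_only() -> None:
    # On the local NFS server: publish the share read-only.
    with open("/etc/exports", "a") as f:
        f.write(EXPORTS_LINE)
    subprocess.run(["exportfs", "-ra"], check=True)   # reload the export table

def mount_in_vm(server: str = "10.0.0.5") -> None:
    # Inside a tiCrypt VM: mount the share read-only as a second line of defense
    # (the authoritative restriction is the ro export on the server).
    subprocess.run(
        ["mount", "-t", "nfs", "-o", "ro,nosuid,nodev",
         f"{server}:/srv/public-data", "/mnt/public"],
        check=True,
    )
```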

A further restriction is related to the location of the NFS server. The system administrators of tiCrypt must control the NFS file system. It has to be part of the tiCrypt system envelope; for example, one of the servers part of the tiCrypt infrastructure can take this role. This restriction is due to compliance complications: the security envelope extends to all system parts; a remote NFS server becomes part of the secure environment and is subject to all security restrictions.

A practical recommendation is to create a local NFS server inside the security envelope and a regular

· 23 min read
Betuel Gag

Larger projects with many users require effective management at scale in tiCrypt. In a scaling scenario, both admins and super-admins must know how to:

  • Manage multiple users simultaneously in a large number of projects.
  • Make bulk changes to users' statuses.
  • Adopt global changes in the system when needed.
  • Manage bulk changes in the tiCrypt backend.
  • Proactively make use of bulk VM actions.

In this blog, we present a set of tiCrypt features that help you streamline the management of a significant number of projects at scale.


Global Management at Scale

Most of the time, the System Admins and Project Investigators carry the extensive project responsibilities. The scope of global management is to perform bulk actions with less effort and to enhance data consistency, avoiding human error.

The tab allows admins and project managers to be creative about their projects. Many tiCrypt features were designed to give management teams the power to perform various bulk actions.

Make Global Announcements

Before deploying large projects, all admins or sub-admins may be required to set up a management infrastructure. The global announcement feature allows Project Managers and Admins to send secure global messages within the system.

To make a global announcement to all users or admins, navigate to the tab in the Users section.

  • Click the Make announcement button in the top left corner.
  • Follow the instructions from Make Global Announcement section.

Management User Profiles

The User Profiles tab is a powerful tool for creating personas. Profiles are a way to tag users without altering their default permission settings.

Scenario: Suppose you manage a large project with 1000+ users.

You must organize the users into categories based on your management requirements, project compliance, and level of access. It is tedious and time-consuming to organize 1000+ users manually.

To address this, tiCrypt allows you to use the User Profiles feature to create your own user/admin avatars. Each user profile includes custom roles and permissions to allow unique actions and events during project deployment.

Once the user profiles are created, they can be applied in bulk to project or team members, whichever is the case.

caution

Use this feature with caution and only when necessary. Careless use of permissions can unexpectedly block certain actions for users assigned the user profile.

info

Learn more about User Profiles in the User profiles example section.

To create a user profile, navigate to the tab in the User Profiles section.

  • Click the Create new user role button in the top right.
  • Follow the instructions from Create User Profile section.

Apply Profiles in Bulk

Once you have built your desired user profiles in management, you can apply them to users in bulk.

To apply profiles to users, navigate to the tab in the Users section.

  • Select the users you would like to apply profiles to.
  • Click the Apply profile button in the top right.
  • Follow the instructions from Apply Profile section.

Bulk Email

In a large project, communication is crucial. tiCrypt offers alternative ways to communicate via email, allowing admins to email all project members at the click of a button.

To bulk email users, navigate to the tab in the Users section.

  • Select the users you would like to bulk email.
  • Click the Bulk-email button in the top right.
  • Follow the instructions from Bulk Email section.

Bulk Refresh Users Information

If a large number of users is updated at different times and you want to build a report of them for audit purposes, you can use this option to bulk refresh all user data.

To bulk refresh user information, navigate to the tab in the Users section.

  • Select the users you would like to refresh information for.
  • Click the Refresh user(s) info button in the top right.
  • Follow the instructions from Refresh User Info section.

Add Multiple Certifications

Adding multiple certifications at once can automate management efforts. This feature allows admins and project managers to certify multiple users for a security requirement within a security level of a tagged (classified) project.

To add certifications to multiple users, navigate to the tab in the Users section.

  • Select the users you want to add the certifications to.
  • Click the Add certification button in the top right.
  • Follow the instructions from Add Certification section.

Bulk Mark Certifications as Expired

Whenever a project requirement changes or is updated, admins and project managers can turn off the access for all project members to a security level by marking their certifications as expired.

To mark multiple user certifications as expired, navigate to the tab in the User Certifications section.

  • Select the user certifications you want to mark as expired.
  • Click Mark as expired in the top right.
  • Follow the instructions from Mark Certification As Expired section.

Add Multiple Users to a Project

Significant project processes may require adding many users to a project; this can be achieved using the bulk Add to project option.

To add multiple users to a project, navigate to the tab in the Users section.

  • Select the users you want to add to a project.
  • Click the Add to project(s) button in the top right.
  • Follow the instructions from Add to Project section.

Add Users to Multiple Projects

Large projects with multiple subprojects may require admins and project managers to add many users with similar roles in the projects. This action can be sped up using the Add members to projects option.

To add multiple users to many projects, navigate to the tab in the Projects section.

  • Select the projects you would like to add users to.
  • Click the Add member(s) button in the top right.
  • In the prompt, type the name of the members you want to add to the selected projects.
  • Click the button on the right.
  • Scroll down, then optionally type an expiration date for the users in the project.
  • Next, select user roles in the projects.
    • Select user restrictions in the projects.
    • Select whether you want to skip or update their expiration in the projects.
  • Click .

Assign Subadmins to Multiple Projects

Successful project managers and admins are often supported by strong sub-admins. tiCrypt allows admins to assign projects in bulk to sub-admins.

To assign projects to sub-admins, navigate to the tab in the Projects section.

Change Roles in Bulk

Changing the roles of multiple users may be a rare scenario. However, tiCrypt allows admins and super-admins to change other users' roles simultaneously.

To change users' roles, navigate to the tab in the Users section.

  • Select the users you would like to change the roles of.
  • Click the Change role button in the top right.
  • Follow the instructions from Change Role section.

Change States in Bulk

When users leave the organization indefinitely, you can change their states to inactive in bulk. This option also helps you onboard new users by setting their states in bulk to active and escrow on the next login.

To change the state of users, navigate to the tab in the Users section.

  • Select the user(s) you would like to change state of.
  • Click the Change state button in the top right.
  • Follow the instructions from Change State section.

Disable Multiple Accounts Until

When multiple users have gone on holiday or overseas, certain compliance factors may require you to prevent their access to the project. You can use the Disable account until option to pause their access for a limited time and automatically resume it later.

To disable user accounts until a specified date, navigate to the tab in the Users section.

  • Select the user(s) you want to disable the account of.
  • Click the Disable account until button in the top right.
  • Follow the instructions from Disable Account Until section.
note

The practical difference between changing the state of users to inactive and using the Disable account until feature is the inactivity period. If users leave for a short time, use the Disable account until feature; if they are likely to never come back, change their state to inactive and eventually delete them.

Bulk Delete Objects

As a super-admin, you can bulk delete the majority of the objects in tiCrypt; however, you cannot delete anything that is cryptographically enhanced (e.g., Groups, VMs, Drives, etc.) unless you are their owner.

To bulk delete objects, navigate to the relevant tab in any section or sub-tab.

  • Select the object you would like to delete.
  • Click the Delete button usually in the top right.
  • In the prompt, click .
  • Alternatively, click to confirm the deletion.
info

View a specific example to Bulk delete VM configurations.

Bulk Export in JSON or CSV

Admins and project managers can bulk export data in JSON or CSV format from most tabs. The export options are globally displayed for most tiCrypt objects.

To bulk export objects in JSON or CSV, navigate to the relevant tab in any section or sub-tab.

  • Select the object you would like to export.
  • Click either the CSV Export option or the JSON Export option.
  • Finally, click one of the following export quantities:
    • All items.
    • Visible items.
    • Selected items.
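
As a follow-up, an exported file can be turned into a quick audit summary with a few lines of scripting. The sketch below assumes a JSON export saved as users-export.json containing a list of objects with a state field; the file and field names are assumptions, so adjust them to match your actual export.

```python
# Summarize a downloaded JSON export for audit purposes. The file name and the
# "state" field are placeholders for whatever columns your export contains.
import json
from collections import Counter

with open("users-export.json") as f:
    users = json.load(f)                      # expected: a list of objects

by_state = Counter(u.get("state", "unknown") for u in users)
for state, count in by_state.most_common():
    print(f"{state}: {count}")
```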

Bulk Change Host States

Changing host states in bulk manages how an extensive VM infrastructure connects to the hosts. When hosts need maintenance or updates that require all VMs to be disconnected from them, super-admins can use this option.

To change the state of a host, navigate to the tab in the Hosts section.

  • Select the host you would like to change the state of.
  • Click the Change state button in the top right.
  • Follow the instructions from Change Host State section.

Bulk Check Host Utilizations

The Check host utilization option is bulk by default. This checks all hosts of the system, allowing super-admins to verify the flow of resources in the host.

To check the utilization of a host, navigate to the tab in the Hosts section.

  • Select the host where you would like to check the utilization.
  • Click the Check utilization button in the top right.
  • Follow the instructions from Check Host Utilization section.

Bulk Shutdown VMs by Hosts

Similar to changing host states to inactive, you can bulk shut down VMs by host. This action allows for a complete shutdown of all VMs of a host in urgent situations.

caution

Please be aware that using this option will turn off all VMs of the host; all unsaved work in the VMs may be lost.

To shut down all VMs of a host, navigate to the tab in the Hosts section.

  • Select the hosts where you would like to shut the VMs down.
  • Click the Shutdown all VMs button in the top right.
  • Follow the instructions from Shut Down All VMs in Host section.

Bulk Manage Hardware Setups Access

To bulk manage access in VM Hardware Setups, navigate to the tab in the Hardware Setups section.

  • Select the VM Hardware Setups you would like to manage the access of.
  • Click the Manage Access button in the top right.
  • Follow the instructions from Manage Hardware Setups Access section.

Bulk Change Hardware Setups Images

To bulk change the image in VM Hardware Setups, navigate to the tab in the Hardware Setups section.

  • Select the VM Hardware Setups you would like to change the image of.
  • Click the Change Image button in the top right.
  • Follow the instructions from Change Hardware Setups Images section.

Bulk Replace Hardware Setups Instructions

To bulk replace instructions in VM Hardware Setups, navigate to the tab in the Hardware Setups section.

  • Select the VM Hardware Setups you would like to replace the instructions for.
  • Click the Replace Instructions button in the top right.
  • Follow the instructions from Replace Hardware Setups Instructions section.

Bulk Set Projects in Running VMs

Some significant projects demand multiple VMs to be connected to them. You can bulk tag numerous VMs with a project simultaneously.

To set projects in running VMs, navigate to the tab in the Running VMs section.

  • Select the running VMs for which you would like to set the project.
  • Click the Set Project button in the top right.
  • Follow the instructions from Set Projects in Running VMs section.

Bulk Shut Down Running VMs

When a project is complete and the data is saved on drives, the VMs are no longer in use; hence, you can shut them down in bulk.

To shut down running VMs, navigate to the tab in the Running VMs section.

  • Select the connected VMs you want to shut down.
  • Click the Shut down button in the top right.
  • Follow the instructions from Shut Down Running VMs section.

Bulk Power Up Service VMs

When starting a large project, the VMs in place for service may be powered up simultaneously.

To bulk power up Service VMs, navigate to the tab in the Service VMs section.

  • Select the service VMs you would like to power up.
  • Click the Power Up button in the top right.
  • Follow the instructions from Power Up Service VMs section.

Bulk Fetch Libvirt XML description of Service VMs

Super-admins can view the difference between each Service VM's XML description.

To bulk fetch the Libvirt XML description, navigate to the tab in the Virtual Machines section.

  • Select the connected VMs you want to view the Libvirt XML description of.
  • Click the Three dots button in the top right.
  • In the prompt, click the Libvirt XML Description option.
  • Follow the instructions from View Libvirt XML Description of Running VMs section.

Bulk Restart Controllers of Service VMs

Service VM controllers may be restarted in bulk to fix errors or apply changes to the VM controllers.

To restart controllers in Service VMs, navigate to the tab in the Service VMs section.

  • Select the service VMs you would like to restart the controllers of.
  • Click the Restart button in the top right.
  • Follow the instructions from Restart Controller section.

Bulk Create Deletion Request of Escrow Users

When an entire group of escrow users changes, you can bulk-create deletion requests and follow the process in the Delete Escrow Users section to execute the deletions appropriately.

To delete an escrow user from the system, navigate to the tab in the Escrow Users section.

  • Select the escrow user you would like to delete.
  • Click the Create deletion request button in the top right.
  • Follow the instructions from Delete Escrow Users section.

Bulk Execute Signed Certificates

A similar situation applies to bulk-executing signed certificates. Super-admins have permission to bulk upload the site-key admin-signed certificates into tiCrypt.

To bulk upload a signed certificate, navigate to the tab in the Escrow Certificates section.

Bulk Attach & Mount Drives to VM

tiCrypt allows users to bulk attach and mount unlimited drives to a VM. This action is possible due to flexible infrastructure and functionality.

You can attach either read-only or read-write drives.

caution

If you attach multiple drives to a VM, consider the amount of resource utilization and VM architecture best practices.

To attach a drive in read-only or read-write mode, navigate to the tab in the section.

  • Select the Virtual machine you want to attach the drives to.
  • Scroll down and click the Drive Management card.
  • Click the Attach drive(s) button in the top center.
  • Follow the read-only or read-write mode instructions from Attach more drives in a Running VM section.
note

The following example is for a read-only drive.

Bulk Change Project Tag in Drives

To change the project of a drive, navigate to the tab in the Drives section.

  • Select the drive you would like to change the project for.
  • Click the Change project button in the top right.
  • Follow the instructions from Change Project in Drive section.
caution

You cannot re-tag VMs with different projects simultaneously. Your VMs must be tagged to the same project to change the project in bulk.

Bulk Add Users to a VM

Adding multiple users to a VM is a frequent action in project management.

To bulk add users to a VM configuration, navigate to the tab in the Virtual Machines section.

  • Select the virtual machine you want to add users to.
  • Click the User Management card on the right panel.
  • Click the Add User(s) button at the top panel.
  • Follow the instructions from Add Users to VM section.

Unshare Drives from Everyone Else

You can unshare drives from all users simultaneously. This action allows the owner of the drive to keep a drive private to themselves.

To unshare a drive with everyone, navigate to the tab in the Drives section.

  • Select the drive you would like to unshare.
  • Click the Unshare from everyone else button in the top right.
  • Follow the instructions from Unshare Drives with Everyone section.

Bulk Transfer via SFTP

Research data at scale is necessary for large projects. A simple way to transfer large amounts of data into a project is via SFTP. Before you make a transfer, you must create an endpoint for your data to land on.

To create an SFTP endpoint, navigate to the tab in the section.

  • Select an existing directory you want to turn into an SFTP inbox.
  • Click the Manage Inbox icon in the top right center.
  • Follow the instructions from SFTP Overview section.
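
Once the inbox exists, uploads can also be scripted from the client side. The sketch below uses the paramiko library; the host name, credentials, key path, and inbox path are placeholders for your deployment, and the actual endpoint details come from the SFTP Overview section.

```python
# Illustrative client-side upload to an SFTP inbox using paramiko. All
# connection details below are placeholders.
import paramiko

def upload_to_inbox(host: str, user: str, key_path: str, local: str, remote: str) -> None:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # pin host keys in production
    client.connect(hostname=host, port=22, username=user, key_filename=key_path)
    try:
        sftp = client.open_sftp()
        sftp.put(local, remote)          # bulk transfers can loop over many files
        sftp.close()
    finally:
        client.close()

upload_to_inbox("sftp.example.edu", "researcher", "/home/researcher/.ssh/id_ed25519",
                "dataset.tar.gz", "/inbox/dataset.tar.gz")
```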

VM Management at Scale

Virtual machines can be managed in bulk to achieve complex tasks at scale. You can use the following features by accessing the Management tab or the Virtual Machine tab.

Virtual Machines User Profiles

The VM User Profiles are a powerful tool for creating personas within the virtual machine realm. They are a way to tag virtual machine users by changing their permissions in the VM.

Scenario: Suppose you manage an extensive VM infrastructure with a vast number of users. You are advised to leverage the VM profiles to organize the users' permissions and levels of control within the VMs and Drives.

  • Despite the user roles in the system, you can flexibly create a VM user profile.
    • Eg1: super-admins of the system may be standard VM users if they belong to a VM User Profile designed for that purpose.
    • Eg2: standard users in the system may have VM manager roles if they belong to a VM User Profile designed for that purpose.
  • Multiple users may have multiple VM user profiles.
  • No matter the user role, each VM user can have a maximum of one VM user profile per virtual machine.

To create a VM User Profile, navigate to the tab in the section.

  • Click the button to make sure your VM is connected.
  • Next, navigate to the VM User Profiles card.
  • Click the Add profile button in the top center.
  • In the prompt, type the VM User profile name.
    • Select the profile role.
    • Tick the appropriate profile permissions.
  • Click .
info

To learn more about VM User Profiles, follow the instructions in the VM User Profiles (in Virtual Machines) section.

Create Access Directory for Large VM Groups

Access directories play a significant role in managing large VM groups. By default, there are three access groups for an access directory, plus a custom option:

  • Everybody: All VM users have access to the directory.
  • Nobody: None of the VM users have access to the directory, except the owner of the VM.
  • Managers: Only the users with manager roles in the VM have access to the directory.
  • Custom: Users with custom permissions set by the VM owner or VM managers have access to the directory.

To create a new access directory, navigate to the tab in the section.

  • Select the connected VM you want to create the access directory in.
  • Click the Access Directory Management card.
  • Click the button in the top left.
  • Follow the instructions from Create Access Directory section.

Miscellaneous Management at Scale

It is worth mentioning several complementary features that may be used as tools to perform management at scale.

Global Login Message

In specific scenarios, you may need to conduct maintenance in the backend, which may require you to pause the system for a few days. Before starting the maintenance project, it is recommended to have at least one channel to contact all users about the maintenance work outside the system.

For good practice, you can use the global login message feature to inform everyone about a maintenance period or a significant project update that may affect all users. Optionally, you can set custom colors, symbols and display frequencies for your global message.

Global Terms of Services

As an alternative to global login messages, you can apply the same principle immediately after the users have logged in.

The Terms & Conditions prompt may be used for any relevant information or update users should know about, e.g., "The system will be down for 14 days due to a large project maintenance."

The Terminal Hub

The Terminal Hub helps you keep track of the running VMs when you deal with many terminals. It is a complementary feature for large projects since it allows you to manage multiple VMs conveniently at the same time.

info

To learn more about the Terminal Hub, navigate to Terminal Hub Overview section.

· 3 min read
Betuel Gag

The file transfer hub allows you to transfer data between an online cloud platform, your vault and your local machine.

Vault to Local Machine Transfer

The local transfer opens a direct pathway from tiCrypt to your local machine. You can upload or download files from/to your local machine with a simple drag-and-drop action in the file transfer hub.

note

You can bulk transfer files/directories from/to your vault.

Cloud Platforms

You can use the following cloud providers to perform a transfer to your vault:

  • Dropbox
  • Google Drive
  • OneDrive

Connect to the Cloud

Before initiating a transfer, you must connect to the cloud and follow the prompts to create an API integration.

note

Your Vault data is fully encrypted, unseen by the cloud providers.

Initiate the Transfer

For this transfer example, we will display a OneDrive transfer.

info

You can only transfer files/directories into a read-write drive. If the drive is read-only, you will not be able to transfer any files/directories to it.

Layouts & Views

Your drive will always be on the right side while your Vault will be on the left side by default. However, you can change the transfer layouts to have the Vault on top and the drive on the bottom.

Alternatively, you can switch the panels between the cozy and compact views.

tip

Use compact view when you have many files and directories in one place.

Hidden Cloud Files

Very often, cloud drives may have technical hidden files which are not visible to standard users.

You can unhide the files by using the Show hidden items button in the top right.

note

You can hide the files again using the Hide hidden items button in the top right.

Vault to VM Transfer

Before moving your data to your VM, you first contain it in your Vault.

Even if you mistakenly transfer a malicious file/directory to your Vault, it will not be able to cause any harm due to the isolation and containment of the Vault. Any malicious attempt to break out of the Vault is not an option.

info

To learn more about Vault to VM transfer, read the Transfer a File to the Virtual Machine section.

· 19 min read
Betuel Gag

More sophisticated data access restriction scenarios involve "hierarchical data"; these are situations where various data sets possess distinct scopes and hierarchical restrictions. While simple data protection can be achieved straightforwardly using the project tagging mechanism in tiCrypt, more complex scenarios like the one described below require more careful consideration.

Problem Statement

Suppose you have a research group that has access to two levels of data sets: federal-level data and state-level data. Both sets contain CUI from the following states: Florida, Texas, and California.

We assume each of the states has distinct data.

The following data restrictions are in place:

  • You can only combine the state-level data with the federal-level data per state.
  • You cannot combine data sets between states.

To apply the zero-trust principle, you may want to enforce restrictions on specific data sets.

Knowing that all members work with federal-level data across the states, others work with state-level data, and a third group works with both federal- and state-level data, we want to set up an infrastructure where:

  • All group members have access to federal-level data.
  • Specific members have access to Texas data.
  • Specific members have access to Florida data.
  • Specific members have access to California data.
  • No member can combine federal or state-level data sets between states at any time.
  • No member is allowed access to state-level data unless they are actively working on that state.
  • Data declassification is limited to the project PIs.
  • Data access and downloads are prevented and access-controlled.
  • CUI is safe at all times under the project infrastructure.

How can you use groups and projects to achieve this infrastructure?

This blog aims to provide two possible solutions using the interplay between tiCrypt teams, groups, and project tags, shedding light on how they can be combined to accomplish this goal.

Background information

Before exploring the interplays, we will examine the usability of tiCrypt teams, groups, and projects to understand how they interact.

Let's delve into the details.

Teams

Teams are access-controlled. They must have at least one member.

  • Separated from groups and projects: There is no interplay between groups and teams or projects and teams.
  • Familiarity: Members of the same team are recommended to join the same group/project for better collaboration.
  • Resource usage: Teams can help control resource allocation in a project or group.

Groups

Groups are cryptographic. They make it easy to share encrypted information between users.

  • Independent from teams: Try adding a user to a team, add them to a group, then remove them from the team. The user will still be in the group.
  • Fully encrypted: Interaction between group members is cryptographically based on the user's public-private keys with no ACL-based operations.
  • Activity isolation: Try creating a group with two team members, then promote one of them to owner and leave the group: you can no longer decrypt the content shared between the members unless you become part of their group again.
  • Satisfy compliance requirements: You should use groups to enforce compliance standards between members.
note

You may be in a situation where you share a file with another tiCrypt user. However, with time, you share files more often with the same user, and it becomes a habit. At this point, you should create a group with the respective user where you can share files unconditionally.

Projects

Projects are access-controlled. They must be active in your session to be accessible.

  • Tags: You can tag almost anything in the system with your project.
  • Separation of powers: Access to the project is granted by your project membership, not by your admin.
  • Doubled security & enforced compliance via security levels: You can only access project resources when you have a membership and satisfy project security levels.
    • A project with no security levels still requires a project membership.
    • The more security requirements, the more effort is required to join the project.
note

Unlocked projects displaying the Unlocked tag are restriction-free. If a resource has a project tag, the project restrictions apply.

The Interplay between Groups & Projects

Groups are encrypted collaboration objects that allow encrypted file sharing. Projects are a tagging mechanism limiting access to objects such as files, directories, groups, VMs, and drives. Regarding resource distribution, groups commonly share resources in group directories where all group members can view and access them; projects require members to specifically share a resource with other project members to allow access to it.

By combining projects with groups, you achieve an encrypted, access-controlled group directory; that includes group permissions, project membership, and project security levels in one place.

The interplay takes place at a resource level. There are four classes of resources in a group tagged by a project:

  1. Resources belonging to the group, not tagged by the project: accessible only to the group members, inaccessible to project members.
  2. Resources belonging to the group, also tagged by the project: accessible only to group members who are also project members, have an active project membership, and satisfy project security requirements.
  3. Resources not belonging to the group but tagged by the project: inaccessible to group members, accessible only to project members with an active project membership and satisfied project security requirements.
  4. Resources neither belonging to the group nor tagged by the project: these outside files are inaccessible to both group and project members.
tip
  • To access a tagged group, you need group keys, project membership, and satisfied security levels.
  • Tagged groups can have resources with distinct project tags unrelated to the group's tag, enabling group owners to tag files with other projects they belong to. For clean data management, we do not recommend having one group with many unrelated project tags.
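
The four resource classes above can be captured by a simple access check. The sketch below is a conceptual model only, not tiCrypt's actual implementation; the flags stand in for holding the group keys, having an active project membership, and satisfying the project security levels.

```python
# Conceptual model of the four resource classes in a tagged group.
# Not tiCrypt code; field names are illustrative.
from dataclasses import dataclass

@dataclass
class User:
    group_member: bool           # holds the group keys
    active_project_member: bool  # membership active in the current session
    meets_security_levels: bool  # satisfies the project's security requirements

def can_access(user: User, in_group: bool, tagged_by_project: bool) -> bool:
    if not in_group and not tagged_by_project:
        return False                                  # class 4: outside file
    if in_group and not user.group_member:
        return False                                  # no group keys
    if tagged_by_project and not (user.active_project_member
                                  and user.meets_security_levels):
        return False                                  # project restrictions apply
    return True

# Class 1 example: a group resource without the project tag is reachable with group keys alone.
print(can_access(User(True, False, False), in_group=True, tagged_by_project=False))  # True
```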

Tagged Groups Permissions

Tagged groups allow system-managed access control, meaning that the following permissions, or a combination of them, are deployed for each group member:

  • Add Members: Add other users to the current group.
  • Remove Members: Remove existing members from the group.
  • Change Permissions: Change group permissions for themselves and other members.
  • Add Directory Entries: Upload files to the group directory using the upload action.
  • Remove Directory Entries: Delete files from the group directory using the delete action.
  • Create Directories: Create new (sub)directories in the group directory using the create new directory action.
  • Modify Project Tag: Change the tags of files and directories in the group directory using the change project action.
note
  • Once a user is added to a group, they become a group member.
  • A group member with user role becomes a group member with manager role when their Add Members permission is ticked.
  • As a group owner, you cannot change your permissions in the group.
  • Mistakenly unticking the Change Permissions box as a group member will block you from changing any permissions, including your own. To restore your permissions, you must ask the group owner to tick the Change Permissions box for you again.
  • By ticking the Modify Project Tag permission, you allow subproject members to untag (declassify) tagged group directory files from the subproject and keep them hidden within their group directory, making them visible only to their group members.
tip

For good practice, you may use the following permissions structure for your tagged groups:

  • Group User: Add directory entries, Create directories.
  • Group Manager: Add directory entries, Remove Directory Entries, Change Permissions, Add Members, Remove Members, Create directories.
  • Group Owner: All permissions by default, including Modify Project Tag.

A tagged group can hold resources with different restriction levels by project, similar to our earlier example with federal- and state-level data containing CUI from the states of Florida, Texas, and California.

The following two workflows show the practical side of project infrastructure using the right interplay between groups and projects.

Workflow 1: One Group to Span All Projects

You can create an "umbrella" group for a project with subprojects for the data. You will manage subproject memberships via project tags.

One large member group is associated with a project tag with subprojects that individually protect the data. Each group member gets certified for a subproject which allows them to access only the group directories associated with their respective subproject. The PI controls access via subproject security levels and active memberships.

Note that this is a single, fully encrypted group, without cryptographic isolation between sub-directories, yet it is secure and compliant. This method protects users from admin impersonation and uses access control between sub-directories.

  • Next, you need a subproject structure with security levels and security requirements. For this example, we will use:

    • Subprojects 01, 02, 03, 04, and 05.
    • Security levels 1, 2, 3, 4, and 5.
    • Security requirements 1, 2, 3, 4, and 5.
    • Security requirement 1 will correspond to security level 1, security requirement 2 to security level 2, security requirement 3 to security level 3, etc.
    • Security level 1 will correspond to subproject 01, security level 2 to subproject 02, security level 3 to subproject 03, etc.
  • Create security requirements for each security level.

  • Create a project which will be the project under the "umbrella" group.
    • Optionally, add a security level with security requirements to the project.
  • Re-login to view the project.
caution

When you apply a security level to the project under the "umbrella" group, you will force all group members of all subprojects to satisfy the security requirements for the main project which will tag the group.

  • Create multiple sub-projects and link them to the appropriate security levels previously created.
  • Re-login to view the subprojects.
    • Files per subproject will be visible only to users who have active memberships in those sub-projects and satisfy the project requirements.
note

A subproject created with no security levels will be considered part of the main project for all members of the newly created subproject.

As a member with a user role in the "umbrella" group, you can only access the tagged group directory with colored tags. Other group directories will only show their names with a Question Mark instead of the tag, indicating that they are inaccessible due to a locked session. If a group directory bears a project tag that is not part of your active session (not your current project), you cannot view its contents.

note

Active memberships in the project allow active sessions in the tagged group directory.

To test your workflow infrastructure, open a group sub-directory you are a member of that was tagged with a corresponding subproject you belong to and upload a file into the group sub-directory. All members of the subproject tagging the sub-directory should be able to view, access, and perform all actions with the sub-directory files if they:

  • Are an active group member of the "umbrella" group.
  • Are members of the subproject that tags the sub-directory.
  • Satisfy all subproject security requirements which tag the sub-directory.

Even though users cannot view the other members of the subprojects (beyond their own membership), they can view the members of their "umbrella" group, who are also the members of all subprojects. This behavior may be altered by an admin in the permissions panel of each user or by using the apply profile option.

From now on, you can either manage all subproject tags of the "umbrella" group or delegate each subproject tagged sub-directory to a sub-admin.

To delegate your tagged subproject to a sub-admin:

  • Promote a subproject member to the manager role.
  • Optionally, remove yourself from the subproject to allow independent collaboration between subproject members and the new project manager.

You have successfully completed a tagged group with projects infrastructure.

Workflow 2: One Project to Span All Groups

You can create an "umbrella" project to create a set of groups corresponding to subprojects of the "umbrella" project. You will manage project membership via tagged groups. This workflow is complex and requires more thoughtful management.

You have a single project with full encryption between layers (group directories). The PI controls access to your group directory via group permissions, security levels per subproject, and project memberships. You are cryptographically isolated and access-controlled, which is both secure and compliant.

caution

When you apply a security level to the "umbrella" project, you will force all members of all groups of all subprojects to satisfy the security requirements for the "umbrella" project.

  • Re-login to view the project.
  • Next, you need a subproject structure with security levels and security requirements. For this example, we will use:

    • Subprojects 1, 2, 3, 4, and 5.
    • Security levels 1, 2, 3, 4, and 5.
    • Security requirements 1, 2, 3, 4, and 5.
    • Security requirement 1 will correspond to security level 1, security requirement 2 to security level 2, security requirement 3 to security level 3, etc.
    • Security level 1 will correspond to subproject 1, security level 2 to subproject 2, security level 3 to subproject 3, etc.
  • Create security requirements for each security level.

note

You can add multiple security requirements per security level, depending on your project infrastructure.

  • Certify yourself with the corresponding security requirements for the security levels belonging to each subproject.
  • Alternatively, certify each sub-admin who will run one of the subprojects.
  • Re-login to get access to the subprojects.
  • Create multiple small groups, and tag each group with its corresponding subproject.
  • At the same time, add the appropriate members to the group who will belong to the corresponding subproject. For this example, we will use:
    • Group 1,2,3,4 and 5.
    • Subproject 1,2,3,4 and 5 which we created previously.
    • Group 1 will correspond to Subproject 1, group 2 to subproject 2, group 3 to subproject 3, etc.
    • Subproject 1 tags group 1, subproject 2 tags group 2, subproject 3 tags group 3, etc.
    • A random user will be added to each group corresponding to each subproject for demo purposes.
  • Re-login to view the changes in user memberships.
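
The one-to-one correspondence used in this example can be summarized as a small mapping. This is only an illustration of the naming scheme above; the objects and names are hypothetical:

```python
# Illustration of the example's naming scheme only; all names are hypothetical.
structure = {
    f"Group {i}": {
        "tagged_by": f"Subproject {i}",
        "security_level": f"Security level {i}",
        "requirements": [f"Security requirement {i}"],
    }
    for i in range(1, 6)
}

for group, info in structure.items():
    print(f'{group} -> {info["tagged_by"]} ({info["security_level"]})')
```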

As a member with the user role in the "umbrella" project, you can view all files from all group directories.

As a subproject member, you can view only the group directory tagged with your subproject tag. If you do not satisfy the subproject security requirements, you can see the file names, types, owners, creation dates, and sizes, but you cannot view, download, change project, share, access file history, edit, or delete the files.

note

Users must be notified about the group they belong to.

To test your workflow infrastructure, open a group directory you are a member of that is tagged with a corresponding subproject, and upload a file into it. All members of that group should be able to view, access, and perform all actions on the group files if they:

  • Are active group members.
  • Are members of the subproject that tags the group.
  • Satisfy all security requirements of the subproject that tags the group.

Although users cannot view the other members of the subproject (beyond their own membership), they can view the members of their group who are also members of their subproject. This behavior can be altered by an admin in the permissions panel of each user or by using the apply profile option.

As shown below, you can either manage the tagged groups of subprojects or delegate each tagged group to a sub-admin.

To delegate your tagged group to a sub-admin:

  • Promote a group member to the owner of the group.
  • Optionally, leave the group to allow independent collaboration between the members and the group owner.
  • Next, set the subproject membership role of the new group owner in the subproject that tags the group:
    • User role: the group owner has control of the group but cannot untag files from the subproject that tags the group.
    • Manager role: the group owner can untag files from the group, pulling them out of the subproject (this is not recommended).
note
  • A declassified file from the subproject which tags the group is still secured within the group but only visible and accessible to the group members.
  • When you tag a file in the subproject-tagged group directory with the "umbrella" project tag, the group members who have access to that directory will also have access to the "umbrella"-tagged file, since it belongs to the parent project and is located within the appropriate group directory.

In contrast, the group owner can move a file outside the group but still have it tagged with the subproject.

  • The file will still be classified by the subproject tag but will not be visible to the other group members, only to the group owner.
tip
  • For clean data management, do not tag files in a tagged group with unrelated project tags. This results in many tagged files in the group directory structure that have nothing to do with the project that tags the group.
  • Do not create groups with people who do not know each other; we recommend creating groups with team members. The frontend will only allow you to access other users' resources if they are on your team.
  • Set the team boundaries lined up with group boundaries.
caution
  • If a group has a project tag, you cannot get the group key unless the project is active in your session.
  • If the project tagging the group is not active, there is no way to access the group.
  • After user permission changes in the project, you must re-login to update the current session.

You have successfully completed a tagged project with groups infrastructure.

Other Hybrid Workflows

Besides the two workflows above, there are many other hybrid workflows you can create to fit your project needs and customize your infrastructure.

Hybrid Example 1:

  • Create an "umbrella" group to create a project with subprojects for the data.
  • Separately, create an "umbrella" project.
  • Create a set of groups and tag them by the "umbrella" project.
  • Create subdirectories for each group directory and tag them with the project's subprojects that tag the "umbrella" group.
  • You will manage project memberships via tagged groups and subproject memberships via project tags at the same time.

Hybrid Example 2:

  • Create an "umbrella" group to create an "umbrella" project.
  • Create a set of subprojects under the "umbrella" project.
  • Create small groups tagged by the "umbrella" project.
  • Each small group may have sub-directories tagged independently by the subprojects created previously.

To conclude, by leveraging the power of project tags and groups, you can establish an efficient workflow for your research infrastructure.

· 3 min read
Thomas Samant
Betuel Gag
Alin Dobra

1. Security First

  • Security is the top priority, and the architecture is designed with security as the central consideration.
  • A comprehensive approach to security is taken, going far beyond perimeter protection with Firewall, VPN, and intrusion detection systems.
  • Zero-trust is implemented using cryptography rather than solely relying on access control lists (ACLs).
  • The goal is to architect a complete solution rather than "patching" security vulnerabilities.
  • Features are only added if they do not compromise security.
  • There is no notion of public/unsecured data; explicit sharing is the only allowed method.
  • Default-closed is favored over default-open.
  • Public-key cryptography (PKC) is the core concept, with all security mechanisms based on PKC.
  • Password-based authentication is not used, and extensive use of cryptography is implemented.
  • End-to-end encryption is utilized, and each resource is independently protected.
  • Encryption keys are managed using PKC, and cryptographic isolation is enforced.

2. Separation of Duties

  • Admin power is decentralized uniformly throughout the system to prevent data breach entry points, even if an admin account is compromised.
  • Access control and end-to-end encryption are used together, with the addition of two-factor authentication (2FA) for enhanced security.
  • Extreme flexibility is provided in terms of operating system support (Windows and Linux), tooling support (AI + GPUs), and the full software stack.
  • The overhead for small and large projects is kept minimal.
  • Researchers are empowered to manage and control their data and workflows, decentralizing management and minimizing the role of admins.
  • Admins define mechanisms and monitor usage but have no access to user data.

3. Mechanism instead of policy

  • The focus is on enforcing behavior through mechanisms rather than relying solely on policies.
  • Mechanisms are designed to prevent and deter bad behavior, with system-enforced capabilities.
  • Automated system-enforced mechanisms reduce the risk of human error and ensure consistent adherence to security protocols.
  • The number of FTEs and "police"-style behavior responses needed is severely reduced.
  • Policies should only dictate the mechanisms used for enforcement.

4. Support diverse research workflows

  • tiCrypt supports diverse research workflows with Windows and Linux OS support, AI + GPU capabilities, and compatibility with various hardware devices.
  • It provides flexibility in deployment, allowing on-premises bare-metal servers, cloud deployment (AWS, Azure, Google Cloud, etc.), hyper-converged solutions (Nutanix, Red Hat, etc.), and hybrid models (on-prem + cloud).
  • Non-uniform infrastructure is accommodated, and VM hosts can be "borrowed" from both cloud and HPC clusters.
  • Compatibility with existing security and infrastructure solutions such as Duo, Shibboleth, firewalls, and VPNs is ensured.

5. Detailed auditing

  • Auditing is fully integrated into the secure system, addressing compliance requirements directly.
  • Different projects may have specific auditing requirements, and the system caters to those needs.
  • The tiCrypt solution includes an audit system that produces compliance reports, maintains a very detailed audit trail, and retains audit logs for the entire history of the system.
  • Reports allow audit predictions of data behavior.

Conclusion

  • Partial success can be achieved with significant effort, but such approaches may leave system blind spots and support only a limited set of workflows. tiCrypt is the result of a collaboration with the University of Florida over ten years, designed to address all compliance and security needs, making it a proven security unicorn.
  • The three pillars of compliance include strong security, system enforcement, and comprehensive auditing and reporting.
  • All features are designed to meet the rigorous compliance standards of public institutions.

· 7 min read
Alin Dobra

Secure virtual machines in tiCrypt run in almost complete isolation from each other and from the environment. The main motivation for this isolation is to provide a high degree of security.

Setup

In this blog post, we refer to a number of tiCrypt components that participate in the activities described here. They are:

  • ticrypt-connect runs the application on the user's device
  • ticrypt-frontend runs in the browser on the user's device and is served by ticrypt-connect
  • ticrypt-backend is the set of tiCrypt services running on the Backend server
  • ticrypt-rest is the entry point into ticrypt-backend and uses the REST protocol.
  • ticrypt-proxy mediates communication between components by replicating traffic between two other connections.
  • ticrypt-allowedlist mediates access to external licensing servers
  • ticrypt-vmc controls the secure VM and is responsible for all VM related security mechanisms

Overview

The VM security is provided through the following mechanisms:

  1. A reverse-connection mechanism to talk to the tiCrypt frontend using the ticrypt-proxy component
  2. The only open port on a VM is port 22, used for traffic tunneling from tiCrypt Connect
  3. A special mechanism to reach port 22 of VMs, but no general mechanism to reach any other port from ticrypt-backend
  4. Strictly controlled outgoing traffic through the ticrypt-allowedlist component

We take each mechanism in turn and provide details on how it works.

ticrypt-proxy mediated VM communication

Because all ports are blocked except 22, and because ticrypt-vmc (the VM controller) controls this port for port forwarding, there is no way to contact a secure VM directly. Instead, a mechanism fully controlled by ticrypt-vmc is employed. The mechanism relies on ticrypt-proxy (a component of ticrypt-backend) to mediate communication with the user in the following manner:

  1. The ticrypt-vmc keeps a web-socket connection open with ticrypt-proxy at all times.
  2. When a user wants to connect to a VM, the ticrypt-frontend creates a web-socket to match the existing VM one in ticrypt-proxy. All traffic between the two matching web-sockets is replicated by ticrypt-proxy. Immediately, ticrypt-vmc creates a new web-socket for future communication.
  3. The first message sent by both the ticrypt-frontend and ticrypt-vmc is a digital-signature proof of identity together with Diffie-Hellman secret key negotiation (sketched after this list).
  4. If the digital signature fails or if the user is not authorized (as determined by ticrypt-vmc), the connection is immediately closed. Similarly, if the VM validation fails, ticrypt-frontend will close the connection.
  5. All subsequent messages (in both directions) are encrypted using the negotiated key. Except for the setup messages, all traffic is hidden from ticrypt-proxy and any other part of the infrastructure.
  6. All commands, terminal info, keys, etc. are sent through this encrypted channel.
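
For intuition, the first-message exchange can be pictured as each side signing an ephemeral Diffie-Hellman public key with its long-term identity key; the other side verifies the signature before deriving the shared channel key. The sketch below uses X25519 and Ed25519 from the Python cryptography library purely as stand-in primitives; tiCrypt's actual key types, message formats, and wire protocol are not specified here.

```python
# Conceptual sketch only: stand-in primitives, not tiCrypt's actual protocol or formats.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Long-term identity keys; each side already knows the other's public half.
user_identity = Ed25519PrivateKey.generate()
vmc_identity = Ed25519PrivateKey.generate()

def first_message(identity_key):
    """Ephemeral DH public key, signed by the long-term identity key."""
    eph = X25519PrivateKey.generate()
    eph_pub = eph.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    return eph, eph_pub, identity_key.sign(eph_pub)

def verify_and_derive(own_eph, peer_identity_pub, peer_eph_pub, peer_sig):
    """Reject the peer (raise) if the signature fails; otherwise derive the channel key."""
    peer_identity_pub.verify(peer_sig, peer_eph_pub)   # raises InvalidSignature on failure
    shared = own_eph.exchange(X25519PublicKey.from_public_bytes(peer_eph_pub))
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"channel key").derive(shared)

# Both sides send their first message through ticrypt-proxy ...
u_eph, u_pub, u_sig = first_message(user_identity)
v_eph, v_pub, v_sig = first_message(vmc_identity)

# ... and each verifies the other before accepting any further traffic.
key_at_vmc = verify_and_derive(v_eph, user_identity.public_key(), u_pub, u_sig)
key_at_user = verify_and_derive(u_eph, vmc_identity.public_key(), v_pub, v_sig)
assert key_at_vmc == key_at_user   # both ends now share the same symmetric channel key
```
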
info

The only message sent in the clear to the ticrypt-vmc contains both authentication (digital signature) and key negotiation. Any other message or functionality is immediately rejected.

info

The communication does not rely on listening on regular ports and can only be mediated by ticrypt-proxy.

info

The public key of the VM owner cannot change after it is learned by ticrypt-vmc; thus, hijacking the communication requires compromising the user's private key.

tip

Most VM functionality requires only this communication method and can be performed only through this mechanism. The only exception is application port tunneling.

Application port tunneling

To allow rich application deployment in VMs, a generic mechanism is provided that tunnels TCP traffic on specific ports from the VM to the user's device. The mechanism is highly controlled (as explained below) but, in principle, can make any network application accessible.

note

Application tunneling is similar to the SSH tunneling mechanism used for reverse port forwarding, but it uses TLS.

The communication pathway relies on a number of independent segments, any of which can prevent the mechanism from functioning. All traffic on these segments is proxied/forwarded and, overall, is secured with TLS encryption and authenticated with digital signatures (not passwords). At no point can any intermediate component intercept the communication without breaking TLS.

note

Access to VMs is limited to port 22, not to SSH as a service. ticrypt-connect mediates port forwarding of any desired port using the mechanism described, with ticrypt-vmc listening on port 22. Specifically, port 3389 for RDP can be forwarded. Usually, multiple such ports are forwarded to allow richer functionality.

Communication setup

Initiating the forwarding tunnel setup requires the following steps:

  1. The ticrypt-frontend, using an authenticated session, tells ticrypt-rest (the entry point of ticrypt-backend) to create the pathway to a specific VM.
  2. The request is validated and ticrypt-proxy is informed of the operation.
  3. The ticrypt-proxy sets up a listening endpoint on one of the designated ports (usually 6000-6100). The endpoint is strictly set up and only allows access from the IP address used to make the request (a generic sketch of such a pinned endpoint follows this list).
  4. The ticrypt-frontend is informed of the allocated port.
  5. ticrypt-frontend asks ticrypt-connect to generate a TLS certificate for authentication and connection encryption.
  6. ticrypt-frontend tells ticrypt-vmc to accept the application forwarding and provides the authentication TLS certificate.
  7. ticrypt-vmc replies with a list of ports that need to be tunneled.
  8. ticrypt-frontend tells ticrypt-connect to start the connection and use the previous certificate.
  9. ticrypt-vmc, upon connection creation, checks that the digital signature is correct and initiates the TLS-mediated tunneling.
  10. The traffic to/from the local port on the user's device is tunneled and re-created within the VM to allow application access.
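
To make step 3 concrete, the sketch below shows the general shape of an IP-pinned forwarding endpoint: it accepts connections only from the requesting IP address and relays bytes to a fixed upstream endpoint. It is a generic asyncio illustration with made-up addresses and ports, not ticrypt-proxy's actual implementation.

```python
# Generic illustration of an IP-pinned forwarding endpoint; not ticrypt-proxy's actual code.
import asyncio

ALLOWED_CLIENT_IP = "203.0.113.10"            # hypothetical: the IP that made the setup request
LISTEN_PORT = 6000                            # one of the designated ports (e.g. 6000-6100)
UPSTREAM = ("vm-host.internal.example", 22)   # hypothetical VM host endpoint

async def pipe(reader, writer):
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(client_reader, client_writer):
    peer_ip = client_writer.get_extra_info("peername")[0]
    if peer_ip != ALLOWED_CLIENT_IP:          # the endpoint is pinned to the requester's IP
        client_writer.close()
        return
    up_reader, up_writer = await asyncio.open_connection(*UPSTREAM)
    # Relay bytes in both directions; the payload is TLS between ticrypt-connect
    # and ticrypt-vmc, so this relay never sees plaintext.
    await asyncio.gather(pipe(client_reader, up_writer), pipe(up_reader, client_writer))

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", LISTEN_PORT)
    async with server:
        await server.serve_forever()

# asyncio.run(main())
```
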
tip

A special case is provided for SFTP over port 2022 if the feature is enabled in ticrypt-vmc. It can be used to move large amounts of data from the user's device to the VM.

Communication pathway

The communication pathway for port tunneling is:

  1. The ticrypt-connect accesses the allocated port controlled by ticrypt-proxy. The connection is pinned to the IP address the setup request comes from.
  2. ticrypt-proxy forwards the request to the VM Host endpoint (explained below).
  3. The VM Host endpoint forwards the traffic to port 22 of the correct VM.
  4. ticrypt-vmc listens on port 22 and runs the TLS protocol with port tunneling.
tip

Port 22 is specifically selected to ensure that no SSH server can be deployed there to allow normal access to the VM. The traffic on port 22 is the TLS protocol under ticrypt-vmc control, not the SSH protocol.

Secure VM isolation

To ensure VM security, only port 22 is opened inbound. Furthermore, all outbound traffic is limited to ticrypt-rest traffic unless an exception is provided by ticrypt-allowedlist (next section).

The specific mechanisms deployed for VM isolation are:

  1. All traffic to ports other than 22 is blocked internally by the VM, thus isolating any other access. Note that application access is provided by the port tunneling above.
  2. All traffic to the VM IP on ports other than 22 is blocked using firewall rules on the VM host.
  3. External traffic to VMs is not routed by the VM host (with the exception of port 22).
  4. Traffic from VMs to the outside is blocked unless the ticrypt-allowedlist mechanism is used.
info

The specific mechanism that allows access on port 22 but no other traffic is to block all traffic routing and instead provide a forwarding mechanism from a port range on the VM host to port 22 of the IP range dedicated to the secure VMs (a minimal sketch of such forwarding rules follows). There is simply no way for a server outside the specific VM host to access any other VM port.
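
As an illustration of the idea only (not the actual tiCrypt host configuration), such forwarding can be expressed as per-VM DNAT rules mapping a host-side port range onto port 22 of each VM IP, with everything else dropped. The port range and addresses below are hypothetical.

```python
# Illustrative iptables rule generation; hypothetical addresses and port range,
# not the actual tiCrypt VM-host configuration.
BASE_PORT = 10022                                   # hypothetical start of the host-side range
VM_IPS = [f"10.50.0.{i}" for i in range(10, 13)]    # hypothetical secure-VM addresses

def forwarding_rules(base_port, vm_ips):
    rules = ["iptables -P FORWARD DROP"]            # default: nothing is routed to the VMs
    for offset, vm_ip in enumerate(vm_ips):
        host_port = base_port + offset
        # Forward one host port to port 22 of exactly one VM ...
        rules.append(f"iptables -t nat -A PREROUTING -p tcp --dport {host_port} "
                     f"-j DNAT --to-destination {vm_ip}:22")
        # ... and allow only that forwarded traffic through.
        rules.append(f"iptables -A FORWARD -p tcp -d {vm_ip} --dport 22 -j ACCEPT")
    return rules

for rule in forwarding_rules(BASE_PORT, VM_IPS):
    print(rule)
```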

tip

Outbound traffic from secure VMs is blocked since it could be used to exfiltrate data, resulting in data breaches.

Licensing server access

In order to use most commercial software, access to external licensing servers needs to be provided. Most of the time, the licensing servers reside within the organization, but occasionally, the servers are external. Recognizing this, tiCrypt provides a strictly controlled mechanism to allow access to licensing servers.

The ticrypt-allowedlist component mediates setting up the mechanism that allows access to licensing servers. It does so by manipulating:

  1. Firewall and port forwarding rules on the server running ticrypt-backend
  2. DNS replies to requests that VMs make

The mechanism is strict and very specific. Unless a mapping is provided using ipsets and firewall rules (illustrated below), any outgoing traffic from VMs is blocked.
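
For intuition only, such a mapping typically combines an ipset of permitted license-server addresses with a firewall rule that matches the set, while everything else stays blocked; the DNS side (answering VM queries with the mapped address) is not shown. The names and addresses below are hypothetical and do not reflect ticrypt-allowedlist's actual configuration format.

```python
# Illustrative only: hypothetical license-server mapping, not ticrypt-allowedlist's real format.
LICENSE_SERVERS = {
    "license.vendor.example.com": "198.51.100.20",   # hypothetical external licensing server
}

def allowlist_commands(servers):
    # One ipset holds every permitted license-server address.
    cmds = ["ipset create ticrypt-license hash:ip -exist"]
    for hostname, ip in servers.items():
        cmds.append(f"ipset add ticrypt-license {ip} -exist   # {hostname}")
    # Outgoing VM traffic is accepted only toward members of the set; everything else is dropped.
    cmds.append("iptables -A FORWARD -m set --match-set ticrypt-license dst -j ACCEPT")
    cmds.append("iptables -A FORWARD -j DROP")
    return cmds

for cmd in allowlist_commands(LICENSE_SERVERS):
    print(cmd)
```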

The ticrypt-allowedlist component works in conjunction with ticrypt-frontend to provide a convenient way to enable/disable access. This mechanism requires SuperAdmin access.