In research, where data integrity and control are paramount, tiCrypt's drive access modes are essential tools. Read-only access lets researchers keep strict control over their valuable data, ensuring its integrity remains intact, while read-write access grants collaborators the ability to edit, refine, and manage that data with precision. By leveraging these drive modes, researchers maintain full control over their data, fostering a secure and collaborative environment for their research endeavors.
Sharing a drive in read-only mode preserves data integrity while still facilitating collaboration. Because recipients cannot edit the drive, research findings remain accurate and reliable, and teamwork and knowledge sharing proceed without compromising data security. Sharing a drive in read-only mode thus empowers researchers to collaborate effectively while safeguarding their valuable data assets.
Sharing a drive in read-write mode offers researchers the invaluable benefit of seamless collaboration and real-time data editing. By granting edit access to all authorized users, this mode facilitates dynamic teamwork, allowing researchers to collectively refine, analyze, and manage shared data. With everyone on the team empowered to contribute and make edits, collaboration becomes more efficient, accelerating the pace of research and fostering innovation. Ultimately, read-write drives enable researchers to work together seamlessly, maximizing productivity and advancing scientific discovery.
Drive modes are applied when sharing a drive with other users or groups. Here's how to do it:
Navigate to the drives table under Virtual Machines.
Select the drive you want to share.
Click the "Share" icon.
Enter the names of users/groups you wish to share the drive with.
Choose either "read-only" or "read-write" access.
Click "Share" to apply the settings.
tip
For more detailed instructions, refer to the share a drive section in tiCrypt's documentation.
Understanding and properly applying drive modes in tiCrypt is essential for effective data management and collaboration.
By leveraging read-only and read-write modes appropriately, users can ensure data security, access control, and seamless collaboration within the application.
Adding batch-processing capabilities to tiCrypt is one of the most requested new features. It will allow
large computational jobs to be executed within the secure environment provided by tiCrypt with full cryptographic
isolation and security, achieving batch processing in a fully compliant CMMC/NIST environment.
A natural solution is integrating tiCrypt with Slurm, the most popular batch-processing system. This document provides a
technical discussion of the integration and security challenges.
Slurm is a batch-processing system that allows users to submit jobs to a cluster of machines. The jobs are queued and
executed when resources become available. Slurm provides a rich set of features, including:
sophisticated job scheduling
resource management
job accounting and reporting (including billing)
job execution, most notably for MPI jobs
job monitoring and control
For secure computation, especially when covered by CUI restrictions, Slurm is a poor choice. While building somewhat secure systems around Slurm is possible, it is difficult and often results in significant performance degradation. The main difficulty to overcome is that Slurm is designed to run jobs on the same machine where the Slurm controller is running; this offers no protection against malicious code running on the same machine as the Slurm controller. Moreover, Slurm cannot isolate jobs from each other or from the infrastructure.
The ideal integration of tiCrypt with Slurm would provide the following features:
Isolation of jobs from each other. This is the most critical feature: jobs must not be able to interfere with each other.
Isolation of jobs from the infrastructure. This is also very important. Ideally, Slurm should not be aware of the code being executed, nor should it have access to the data processed.
Integration with tiCrypt. The integration should be as seamless as possible. In particular, data encrypted by tiCrypt should be seamlessly accessible to batch jobs.
Minimal performance degradation. The integration should not significantly degrade the performance of Slurm. Ideally, it should not degrade the performance at all.
Keep the excellent Slurm capabilities. tiCrypt should rely on Slurm's scheduling, resource management, job accounting and reporting, and job monitoring and control capabilities.
The above goals seem difficult to achieve because most of Slurm's capabilities must be retained, but the security must be "fixed".
The key idea is to separate Slurm functionality into two parts:
Global Slurm Scheduler. This is the part of Slurm responsible for scheduling jobs, managing resources, accounting, and reporting. This component knows who executes the jobs, what resources they require, tracks the jobs, etc. It is not aware of the code being executed, nor does it have access to the data processed. This Slurm instance will run globally, outside tiCrypt, and interact with the tiCrypt Backend through the Slurm REST API.
Local Slurm Executor. This is the part of Slurm that is responsible for executing the jobs. This component is aware of the code that is being executed and has access to the data processed. It can also provide Slurm with advanced execution capabilities, such as MPI. This Slurm instance will run locally,
inside tiCrypt-managed Virtual Machines, and interact only with tiCrypt VM Controller. Each CUI project will have its own local Slurm Executor, managed by the tiCrypt VM Controller.
The Global Slurm Scheduler will not be aware of the Local Slurm Executors, and vice versa. The interaction between the two will go through the tiCrypt Backend and tiCrypt VM Controller.
The tiCrypt VM Controller will hide most details from the tiCrypt backend (such as the code being executed, the data processed, etc.) and only provide information on the resources needed.
This ensures that the Global Slurm Scheduler is not aware of the code being executed nor has access to the data processed.
The tiCrypt backend will hide global details, such as what other jobs run in the system and who is running them. This ensures that the Local Slurm Executor is not aware of what else is running in the system, who is running it, etc.
The main mechanism tiCrypt uses to "trick" Slurm into running as described above is extensive use of Slurm's plugin architecture together with a rewrite of the job and statistics reporting mechanisms. The following two sections describe the specific mechanisms used to achieve the above goals.
The Global Slurm Scheduler will be a full-fledged Slurm instance, running outside of tiCrypt but side by side with the tiCrypt backend. Specifically, slurmctld, slurmdbd, and slurmrestd will run on the same machine as the tiCrypt backend. It will be configured separately from tiCrypt and can use any of the Slurm features, most importantly various plugins.
Slurm will be configured to allow only the tiCrypt backend to submit jobs; specifically, the Slurm API will be guarded against any other access.
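As a rough illustration of this interaction, the sketch below builds a job-submission payload of the kind the tiCrypt backend could POST to slurmrestd. The endpoint path and field names are assumptions based on recent versions of the Slurm REST API and may differ between Slurm releases.

```python
import json

def build_submit_payload(name, partition, script):
    """Assemble a slurmrestd-style job submission body (field names assumed)."""
    return {
        "script": script,  # the batch script the job will run
        "job": {
            "name": name,
            "partition": partition,
            "environment": {"PATH": "/usr/bin:/bin"},
            "current_working_directory": "/tmp",
        },
    }

payload = build_submit_payload(
    "ticrypt-batch-001", "batch", "#!/bin/bash\nsleep 60\n")
# Serialized body, e.g. POSTed to /slurm/v0.0.39/job/submit
body = json.dumps(payload)
```

In the proposed design, only the resource-related fields would carry real information; the script itself would be one of the "fake" coordination programs described below.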
For each of the VM Hosts used by tiCrypt for batch processing, a Slurm node will be configured. Specifically, slurmd will run on each VM Host. The tiCrypt backend will feed "fake" jobs to Slurm to simulate the actual execution (see below): special programs that do nothing except coordinate with the tiCrypt Backend will be submitted to Slurm. This technique is similar to the one used by the PCOCC system. Custom Lua plugins will be used to intercept job execution (a Slurm quirk makes this the only way to block job execution) and to "adjust" the job statistics based on actual execution (see below).
For each CUI project, tiCrypt will create an interactive VM (CVM) that will be used to run the jobs for that project. The project members will fully control this VM, which will be used to manage the Local Slurm Executor, provision all the security, and interact with the tiCrypt Backend. This mechanism is similar to the management of tiCrypt Interactive Clusters. Specifically, the CVM will:
Interact with tiCrypt users using the same mechanism as the regular tiCrypt interactive VMs.
Provide the distributed file system used by the batch execution, using the same mechanism as the tiCrypt Interactive Clusters.
Provide secure communication between the CVM and the worker VMs using automatically provisioned VPN (using StrongSwan).
Manage the Local Slurm Executor and local job submission.
Ask tiCrypt Backend for VMs to be provisioned for batch processing. This is based on submitted Slurm jobs.
Inform Local Slurm Executor when resources are available and jobs can be executed.
Decommission the VMs when the batch processing is done.
Users with access to the CVM will be able to submit jobs to the Local Slurm Executor using the sbatch and srun commands.
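A minimal sketch of what such a submission could look like from the CVM: the snippet generates a standard Slurm batch script that a user would hand to `sbatch`. The job name, resource values, and command are purely illustrative.

```python
def batch_script(job_name, ntasks, time_limit, command):
    """Generate a minimal Slurm batch script (illustrative values)."""
    lines = [
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",   # name shown in squeue
        f"#SBATCH --ntasks={ntasks}",       # number of tasks requested
        f"#SBATCH --time={time_limit}",     # wall-clock limit HH:MM:SS
        command,                            # the actual work, e.g. via srun
    ]
    return "\n".join(lines) + "\n"

# Hypothetical example: a 4-task job submitted as `sbatch job.sh`
script = batch_script("cui-analysis", 4, "01:00:00", "srun ./analyze")
```

From the user's perspective nothing changes relative to a stock Slurm cluster; the interception described below happens behind the scenes.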
The Local Slurm Executor will be a full-fledged Slurm instance, running inside tiCrypt-managed Virtual Machines. Specifically, slurmctld (and, if strictly necessary, slurmdbd) will run on the CVM, and slurmd will run on each VM that is part of the batch job (managed by local VM Controllers coordinated via the CVM). The provisioning and execution will be controlled by the tiCrypt CVM. Via the Slurm command-line tools or the tiCrypt Frontend integration, users can see the jobs' status, cancel them, etc.
To provide integration with the rest of the tiCrypt infrastructure and the Global Slurm Scheduler, Lua plugins and other tiCrypt mechanisms will intercept job submission, execution, and statistics reporting.
The above plan might sound complicated, but we think the complexity is manageable. The main challenges are:
Learning Slurm. While the Tera Insights team has no experience running Slurm at scale, we have extensive experience dealing with complex systems and "coding around" their limitations and quirks.
Using Lua plugins. The Slurm documentation on plugins is not particularly extensive, but we can draw much inspiration from the PCOCC system implementation.
Adding new features to tiCrypt. The tiCrypt team has extensive experience adding new features to tiCrypt, with a well-established process covering design and implementation.
The tiCrypt Vault offers a file system-like facility that allows files and directories to be created and used. All the metadata, such as file properties, directory entries, access information, and decryption keys, is stored in the MongoDB database used by the tiCrypt-storage service.
In the tiCrypt Vault, the file content is broken into 8 MB chunks, each encrypted independently using the file key. Each chunk occupies 8 MB + 64 bytes on disk (unless it is the last, incomplete chunk); the extra 64 bytes contain the IV (initialization vector) for AES encryption. For each file, the chunks are numbered from 0 onwards. The chunks are stored in a directory structure based on the file ID, visible only to the tiCrypt backend (and preferably not to the VM hosts). The storage location can be configured in the configuration file of the tiCrypt-storage service.
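The on-disk layout described above can be sketched with a little arithmetic. The assumption that the final, incomplete chunk also carries the 64-byte IV is ours, made for illustration.

```python
CHUNK_DATA = 8 * 1024 * 1024   # 8 MB of plaintext per chunk
IV_OVERHEAD = 64               # per-chunk IV stored alongside the ciphertext

def chunk_sizes_on_disk(file_size):
    """Return the on-disk size of each chunk for a file of the given size."""
    full, remainder = divmod(file_size, CHUNK_DATA)
    sizes = [CHUNK_DATA + IV_OVERHEAD] * full
    if remainder:
        # Assumption: the last, partial chunk still stores its own IV.
        sizes.append(remainder + IV_OVERHEAD)
    return sizes

# A 20 MB file occupies three chunks: two full ones plus a 4 MB remainder.
sizes = chunk_sizes_on_disk(20 * 1024 * 1024)
```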
tiCrypt is not opinionated on the file system or integration with other systems for this storage, but for compliance reasons, it is recommended that the access is restricted to only the tiCrypt backend.
Without the decryption keys that only users can recover, the content of the chunk files is entirely indecipherable. It is safe to back up these files using any method (including non-encrypted backups, cloud storage, etc.). The strong encryption, coupled with the unavailability of the key even to administrators, ensures that, from a compliance point of view, this content can be replicated outside the secure environment.
tiCrypt virtual machines use a boot disk image that is not encrypted and one or more encrypted drives. Both disk images and encrypted drives are stored as files in the underlying distributed file system available to all Libvirt host servers. The specific mechanism uses the notion of Libvirt disk pools. Independent disk pools can be defined for disk images, encrypted drives, drive snapshots (for backup), ISOs, etc. Each pool is located in a different directory within the distributed file system.
Libvirt (and tiCrypt, by extension) is agnostic to the choice of file system where the various disk pools are defined. A good practice is to place the different disk pools on a distributed file system visible to all VM hosts (preferably in the same location) and to mount that file system on all VM hosts. Any file system, including NFS, BeeGFS, Lustre, etc., can be used.
As part of the virtualization mechanism, Libvirt makes the files corresponding to drives stored on the host file system appear as devices to the OS running within the VM. Any writes to the virtual device get translated into changes in the underlying file. The situation is somewhat more complex when snapshots are used since multiple files on the disk will form the virtual device.
Upon drive creation, tiCrypt instructs Libvirt to create a drive. This results in a file being created in the underlying file system, in the corresponding drive pool. Two drive formats are supported: raw, with extension .raw, and QCOW2, with extension .qcow2.
For a raw drive, a file as large as the indicated drive size gets created. The file content is initialized with zeros (corresponding to a blank drive). Writes to the virtual drive result in writes to the corresponding file at the same position (e.g., if block 10244 of the virtual drive is written, block 10244 of the raw file gets changed as well).
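The one-to-one mapping between virtual-disk blocks and raw-file offsets can be sketched as follows; the 512-byte block size is an assumption for illustration.

```python
BLOCK_SIZE = 512  # assumed block size; raw maps blocks to offsets one-to-one

def backing_file_offset(block_no, block_size=BLOCK_SIZE):
    """Byte offset in the raw backing file for a given virtual-disk block."""
    return block_no * block_size

# A write to virtual block 10244 lands at this offset in the .raw file.
offset = backing_file_offset(10244)
```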
For a QCOW2 drive, only changed blocks get written; the file format is quite complex and supports advanced features like copy-on-write. The initial file size is small (low megabytes) when the drive is new; the file grows in size as more data is written to the disk.
The qemu-img tool can be used to convert between the two formats. Usually, tiCrypt sets up the drives without the need for this tool.
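For reference, a raw-to-QCOW2 conversion uses `qemu-img convert`; the sketch below assembles the command line (the file names are hypothetical).

```python
def qemu_img_convert(src, dst, src_fmt="raw", dst_fmt="qcow2"):
    """Build the argv for converting a disk image between formats."""
    return ["qemu-img", "convert", "-f", src_fmt, "-O", dst_fmt, src, dst]

cmd = qemu_img_convert("drive.raw", "drive.qcow2")
# e.g. subprocess.run(cmd, check=True) on a host with qemu-img installed
```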
A newly created tiCrypt disk is blank. No formatting of the drive or any other preparation has been performed. The drive will be formatted the first time it is attached to a tiCrypt virtual machine. The main reasons for this are:
The encryption/decryption key for the encrypted drive is kept secret from the infrastructure. This includes the tiCrypt backend, Libvirt, and underlying VM host.
The choice of file system is delegated to the underlying operating system and the tiCrypt VM Controller. Libvirt is not aware, nor does it need to be aware, of the actual file system on the drive. For Linux-formatted drives, inspecting the files backing the drives reveals nothing: there is no way to tell whether the drive is formatted at all, let alone anything about the content or the type of file system.
As far as Libvirt is concerned, only low-level disk reads and writes exist. Whatever operation the operating system performs gets translated into read/write operations on the virtual disk; in turn, these result in read/write operations on the underlying file in the disk pool.
In Windows, a standard NTFS file system is created, but immediately (before the drive is made available to the user) BitLocker is turned on. This ensures that all files created subsequently are encrypted. BitLocker uses so-called "full volume encryption," i.e., all new data is encrypted, including the metadata. An external tool scanning the backing file can determine that the drive is NTFS-formatted and read any non-encrypted content; since tiCrypt turns on encryption immediately, minimal information is visible.
In Linux, the LUKS full-disk encryption mechanism is used. It essentially places an encryption block layer between the raw drive (a virtual drive in this case) and the file system (usually EXT4). This way, absolutely all information on the disk is encrypted. An external tool can only tell which disk blocks have been written to (are non-zero) but can derive no information about the rest of the content.
ISOs are made available using read-only CD-ROM devices. As such, they are always safe to mount in a secure tiCrypt VM. Linux and Windows can readily mount such ISOs and make them available as "drives."
ISOs are particularly useful if the NFS shares described below are not used. For example, Python or R packages could be made available as ISOs so that various VMs can install the required packages locally.
By allowing, through firewall rules, access to a local NFS server, various tiCrypt VMs can mount a common file system for the purpose of accessing public (non-secure) data, packages, software, etc.
From a security point of view, the NFS server should export the data as read-only. The tiCrypt secure VMs should never be allowed to mount a read-write NFS share, since data could be exfiltrated through it, defeating the tremendous effort tiCrypt puts into protections against data exfiltration. It would almost certainly make the tiCrypt deployment non-compliant.
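As a sketch, a read-only export on such an NFS server could look like the following `/etc/exports` entry; the path and subnet are hypothetical and would match your tiCrypt VM network.

```
/export/packages  10.0.0.0/24(ro,root_squash,no_subtree_check)
```

The `ro` option enforces the read-only requirement at the server, so even a misconfigured VM mount cannot write back to the share.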
A further restriction is related to the location of the NFS server. The system administrators of tiCrypt must control the NFS file system. It has to be part of the tiCrypt system envelope; for example, one of the servers part of the tiCrypt infrastructure can take this role. This restriction is due to compliance complications: the security envelope extends to all system parts; a remote NFS server becomes part of the secure environment and is subject to all security restrictions.
A practical recommendation is to create a local NFS server inside the security envelope and a regular
Larger projects with many users require effective management at scale in tiCrypt.
In a scaling scenario, both admins and super-admins must know how to:
Manage multiple users simultaneously in a large number of projects.
Make bulk changes to users' statuses.
Adopt global changes in the system when needed.
Manage bulk changes in the tiCrypt backend.
Proactively make use of bulk VM actions.
In this blog, we present a set of tiCrypt features that help you streamline the maintenance of a significant number of projects at scale.
Global Management at Scale
Most of the time, System Admins and Project Investigators take on the bulk of project responsibilities.
The scope of global management is to perform bulk actions with less effort and to enhance data consistency, avoiding human error.
The tab allows admins and project managers to be creative about their projects.
Many tiCrypt features were designed to give management teams leverage, allowing various bulk actions to be performed.
Before deploying large projects, all admins or sub-admins may be required to set up a management infrastructure. The global announcement feature allows Project Managers and Admins to send secure global messages within the system.
To make a global announcement to all users or admins, navigate to the tab in the Users section.
Click the Make announcement button in the top left corner.
The User Profiles tab is a powerful tool for creating personas. Profiles are a way to tag users without altering their default permission settings.
Scenario: Suppose you manage a large project with 1000+ users.
You must organize the users in categories based on your management requirements, project compliance and level of access.
It is tedious and time-consuming to organize 1000+ users manually.
As a result, tiCrypt allows you to use the User Profiles feature to create your own user/admin avatar.
Each user profile includes custom roles and permissions to allow unique actions and events during the project deployment.
Once the user profiles are created, they can be applied in bulk to project or team members, whichever is the case.
caution
Use this feature with caution and only when necessary. Careless use of permissions can block unexpected actions for users assigned the user profile.
In a large project, communication is crucial. tiCrypt offers alternative ways to communicate via email, allowing admins to email all project members at the click of a button.
To bulk email users, navigate to the tab in the Users section.
If a large number of users is updated at different times and you want to build a report of them for audit purposes, you can use this option to bulk refresh all user data.
To bulk refresh user information, navigate to the tab in the Users section.
Select the users you would like to refresh information for.
Click the Refresh user(s) info button in the top right.
Adding multiple certifications at once can automate management efforts. This feature allows admins and project managers to certify multiple users for a security requirement within a security level of a tagged (classified) project.
To add multiple user certifications to users, navigate to the tab in the Users section.
Select the users you want to add the certifications to.
Click the Add certification button in the top right.
Whenever a project requirement changes or is updated, admins and project managers can turn off the access for all project members to a security level by marking their certifications as expired.
To mark multiple user certifications as expired, navigate to the tab in the User Certifications section.
Select the user certifications you want to mark as expired.
Large projects with multiple subprojects may require admins and project managers to add many users with similar roles in the projects. This action can be sped up using the Add members to projects option.
To add multiple users to many projects, navigate to the tab in the Projects section.
Select the projects you would like to add users to.
Click the Add member(s) button in the top right.
In the prompt, type the name of the members you want to add to the selected projects.
Click the button on the right.
Scroll down, then optionally type an expiration date for the users in the project.
Next, select user roles in the projects.
Select user restrictions in the projects.
Select whether you want to skip or update their expiration in the projects.
Changing the roles of multiple users may be a rare scenario. However, tiCrypt allows admins and super-admins to change other users' roles simultaneously.
To change users' roles, navigate to the tab in the Users section.
Select the users you would like to change the roles of.
When users leave the organization indefinitely, you can change their states to inactive in bulk. This option also helps you onboard new users by setting their states in bulk to active and escrow on the next login.
To change the state of users, navigate to the tab in the Users section.
Select the user(s) whose state you would like to change.
Click the Change state button in the top right.
Follow the instructions from the Change State section.
When multiple users have gone on holiday or overseas, certain compliance factors may require you to prevent their access to the project. You can use the Disable account until option to pause their access for a limited time and automatically resume it later.
To disable user accounts until a specified date, navigate to the tab in the Users section.
Select the user(s) you want to disable the account of.
Click the Disable account until button in the top right.
The practical difference between changing users' state to inactive and using the Disable account until feature is the inactivity period. If users leave for a short time, use Disable account until; if they are likely never to come back, change their state to inactive and eventually delete them.
As a super-admin, you can bulk delete the majority of objects in tiCrypt; however, you cannot delete anything that is cryptographically protected (e.g., groups, VMs, drives) unless you own it.
To bulk delete any objects, navigate to any of the following tabs, in any section or sub-tab.
Admins and project managers can bulk export data in JSON or CSV format from the relevant tabs.
The export options are globally displayed for most tiCrypt objects.
To bulk export any objects in JSON or CSV, navigate to any of the following tabs, in any section or sub-tab.
Select the object you would like to export.
Click either the CSV Export option or the JSON Export option.
Finally, click one of the following export quantities:
Changing host states in bulk manages how an extensive VM infrastructure connects to the hosts. When hosts need maintenance or updates that require all VMs to be disconnected from them, super-admins can use this option.
To change the state of a host, navigate to the tab in the Hosts section.
Select the host you would like to change the state of.
The Check host utilization option is bulk by default. It checks all hosts in the system, allowing super-admins to verify the flow of resources on each host.
To check the utilization of a host, navigate to the tab in the Hosts section.
Select the host where you would like to check the utilization.
Click the Check utilization button in the top right.
Similarly to admins changing host states to inactive, you can bulk shut down VMs by host. This action allows for a complete shutdown of all VMs on a host in urgent situations.
caution
Please be aware that using this option will turn off all VMs of the host; all unsaved work in the VMs may be lost.
To shut down all VMs of a host, navigate to the tab in the Hosts section.
Select the hosts where you would like to shut the VMs down.
Click the Shutdown all VMs button in the top right.
When an entire group of escrow users changes, you can bulk-create deletion requests and follow the process from the Delete Escrow Users section to execute the deletions appropriately.
To delete an escrow user from the system, navigate to the tab in the Escrow Users section.
Select the escrow user you would like to delete.
Click the Create deletion request button in the top right.
A similar situation applies to bulk-executing signed certificates. Super-admins have permission to bulk upload the site-key admin-signed certificates into tiCrypt.
To bulk upload signed certificates, navigate to the tab in the Escrow Certificates section.
Click the Execute Signed Certificates button in the top right.
Research data at scale is necessary for large projects. A simple way to transfer large amounts of data into projects is via SFTP.
Before you make a transfer, you must create an endpoint for your data to land on.
To create an SFTP endpoint, navigate to the tab in the section.
Select an existing directory you want to turn into an SFTP inbox.
Click the Manage Inbox icon in the top right center.
Virtual machines can work in bulk to achieve complex tasks at scale. You can use the following features by accessing the Management tab or the Virtual Machine tab.
The User Profiles tab is a powerful tool for creating personas within the virtual machine realm. Profiles are a way to tag virtual machine users by changing their permissions in the VM.
Scenario: Suppose you manage an extensive VM infrastructure with a vast number of users. You are advised to leverage VM profiles to organize users' permissions and level of control within the VMs and drives.
Regardless of user roles in the system, you can flexibly create a VM user profile.
Example 1: super-admins of the system may be standard VM users if they belong to a VM User Profile designed for that purpose.
Example 2: standard users in the system may have VM manager roles if they belong to a VM User Profile designed for that purpose.
Multiple users may have multiple VM user profiles.
No matter the user role, each VM user can have a maximum of one VM user profile per virtual machine.
To create a VM User Profile, navigate to the tab section.
Click the button to make sure your VM is connected.
In specific scenarios, you may need to conduct maintenance in the backend, which may require you to pause the system for a few days.
Before starting the maintenance project, it is recommended to have at least one channel to contact all users about the maintenance work outside the system.
For good practice, you can use the global login message feature to inform everyone about a maintenance period or a significant project update that may affect all users.
Optionally, you can set custom colors, symbols and display frequencies for your global message.
As an alternative to global login messages, you can apply the same principle immediately after the users have logged in.
The Terms & Conditions prompt may be used for any relevant information or update users should know about.
E.g., "The system will be down for 14 days due to a large project maintenance."
The Terminal Hub helps you keep track of the running VMs when you deal with many terminals.
It is a complementary feature for large projects since it allows you to manage multiple VMs conveniently at the same time.
The local transfer opens a direct pathway from tiCrypt to your local machine. You can upload or download files to and from your local machine with a simple drag-and-drop in the file transfer hub.
note
You can bulk transfer files/directories from/to your vault.
For this transfer example, we will display a OneDrive transfer.
info
You can only transfer files/directories into a read-write drive. If the drive is read-only you will not be able to transfer any files/directories to it.
Your drive will always be on the right side while your Vault will be on the left side by default.
However, you can change the transfer layouts to have the Vault on top and the drive on the bottom.
Alternatively, you can switch the panels between the cozy or compact view.
tip
Use compact view when you have many files and directories in one place.
Before moving your data to your VM, you first contain it in your Vault.
Even if you mistakenly transfer a malicious file or directory to your Vault, it cannot do any harm, thanks to the isolation and containment of the Vault.
Malicious attempts to break out of the Vault are not an option.
More sophisticated data access restriction scenarios involve "hierarchical data"; these are situations where various data sets possess distinct scopes and hierarchical restrictions.
While simple data protection can be achieved straightforwardly using the project tagging mechanism in tiCrypt, more complex scenarios like the one described below require more careful consideration.
Suppose you have a research group that has access to two levels of data sets: federal-level data and state-level data. Both sets contain CUI from the following states: Florida, Texas, and California.
We assume each of the states has distinct data.
The following data restrictions are in place:
You can only combine the state-level data with the federal-level data per state.
You cannot combine data sets between states.
To apply the zero-trust principle, you may want to enforce restrictions on specific data sets.
Knowing that some members work with federal-level data across the states, others work with state-level data, and a third group works with both federal- and state-level data, we want to set up an infrastructure where:
All group members have access to federal-level data.
Specific members have access to Texas data.
Specific members have access to Florida data.
Specific members have access to California data.
No member can combine federal or state-level data sets between states at any time.
No member is allowed access to state-level data unless they are actively working on that state.
Data declassification is limited to the project PIs.
Data access and downloads are prevented and access-controlled.
CUI is safe at all times under the project infrastructure.
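The cross-state combination rule above can be sketched as a simple check: datasets are modeled as (level, state) pairs, and a session may mix federal- and state-level data only within a single state. The labels and the single-state interpretation are our illustrative assumptions.

```python
def combination_allowed(datasets):
    """Allow a combination only if every dataset belongs to one state."""
    states = {state for _level, state in datasets}
    return len(states) <= 1

# Federal + state data for Florida: allowed (same state).
ok = combination_allowed({("federal", "FL"), ("state", "FL")})
# Florida + Texas state data: forbidden (crosses states).
bad = combination_allowed({("state", "FL"), ("state", "TX")})
```

The solutions below show how group membership and project tags enforce this check organizationally rather than in code.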
How can you use groups and projects to achieve this infrastructure?
This blog aims to provide two possible solutions using the interplay between tiCrypt teams, groups, and project tags, shedding light on how they can be combined to accomplish this goal.
Groups are cryptographic. They make it easy to share encrypted information between users.
Independent from teams: Try adding a user to a team, adding them to a group, then removing them from the team. The user will still be in the group.
Fully encrypted: Interaction between group members is cryptographically based on the user's public-private keys with no ACL-based operations.
Activity isolation: Try creating a group with two team members, then promote one of them to owner and leave the group. You can no longer decrypt the content shared between the members unless you become part of their group again.
Satisfy compliance requirements: You should use groups to enforce compliance standards between members.
note
You may occasionally share a file with another tiCrypt user. Over time, you may find yourself sharing files with the same user more and more often.
At that point, you should create a group with that user, where you can share files unconditionally.
Projects are access-controlled. They must be active in your session to be accessible.
Tags: You can tag almost anything in the system with your project.
Separation of powers: Access to the project is granted by your project membership, not by your admin.
Doubled security & enforced compliance via security levels: You can only access project resources when you have a membership and satisfy project security levels.
A project with no security levels still requires a project membership.
The more security requirements, the more effort is required to join the project.
note
Unlocked projects displaying the Unlocked tag are restriction-free.
If a resource has a project tag, the project restrictions apply.
Groups are encrypted collaboration objects that allow encrypted file sharing. Projects are a tagging mechanism limiting access to objects such as files, directories, groups, VMs, and drives.
Regarding resource distribution, groups commonly share resources in group directories where all group members can view and access them; projects require members to specifically share a resource with other project members to allow access to it.
By combining projects with groups, you achieve an encrypted, access-controlled group directory; that includes group permissions, project membership, and project security levels in one place.
The interplay takes place at a resource level. There are four classes of resources in a group tagged by a project:
Resources belonging to the group, not tagged by the project: accessible only to the group members, inaccessible to project members.
Resources belonging to the group, also tagged by the project: accessible only to group members who are also project members, have an active project membership, and satisfy project security requirements.
Resource not belonging to the group but tagged by the project: inaccessible to group members, accessible only to project members with active project membership and satisfied project security requirements.
Resource not belonging to the group nor tagged by the project: Outside files are inaccessible to the group and project members.
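The four resource classes above reduce to a single access decision. The sketch below encodes that logic under assumed names (`User`, `Resource`, `can_access` are invented for illustration and are not part of the tiCrypt API); "project membership" here stands in for an active membership that also satisfies the project's security requirements.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class User:
    groups: set = field(default_factory=set)    # groups whose keys the user holds
    projects: set = field(default_factory=set)  # active memberships that satisfy
                                                # the project's security requirements

@dataclass
class Resource:
    group: Optional[str] = None     # owning group, if any
    project: Optional[str] = None   # project tag, if any

def can_access(res: Resource, user: User) -> bool:
    """Decide access for the four resource classes described above."""
    in_group = res.group in user.groups
    project_ok = res.project in user.projects
    if res.group and res.project:   # group resource with a project tag
        return in_group and project_ok
    if res.group:                   # group resource, untagged
        return in_group
    if res.project:                 # tagged resource outside the group
        return project_ok
    return False                    # outside file: inaccessible to both
```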
tip
To access a tagged group, you need group keys, project membership, and satisfied security levels.
Tagged groups can have resources with distinctive project tags unrelated to the group's tag, enabling group owners to tag files with other projects they belong to. We do not recommend having one group with many unrelated project tags for clean data management.
Groups allow system-managed access control, meaning each group member is assigned the following permissions, or a combination of them:
1. Add Members: Add other users to the current group.
2. Remove Members: Remove existing members from the group.
3. Change Permissions: Change group permissions for themselves and other members.
4. Add Directory Entries: Upload files in the group directory using the upload action.
5. Remove Directory Entries: Delete files from the group directory using the delete action.
6. Create Directories: Create new (sub)directories in the group directory using the create new directory action.
7. Modify Project Tag: Change the tags of files and directories in the group directory using the change project action.
note
Once a user is added to a group, they become a group member.
A group member with user role becomes a group member with manager role when their Add Members permission is ticked.
As a group owner, you cannot change your permissions in the group.
Mistakenly unticking the Change Permissions box as a group member will block you from changing any permissions, including your own.
To restore your permissions, you must ask the group owner to tick back the Change Permissions box for you.
By ticking the Modify Project Tag permission, you allow subproject members to untag (declassify) tagged group directory files from the subproject and keep them hidden within their group directory, making them visible only to their group members.
tip
For good practice, you may use the following permissions structure for your tagged groups:
Group User: Add Directory Entries, Create Directories.
Group Owner: All permissions by default, including Modify Project Tag.
A tagged group can hold resources with different restriction levels by project, similar to our earlier example with federal- and state-level data containing CUI from the states of Florida, Texas, and California.
Two workflows show the practical side of project infrastructure using the right interplay between groups and projects.
You can create an "umbrella" group for a project with subprojects for the data. You will manage subproject memberships via project tags.
One large member group is associated with a project tag with subprojects that individually protect the data.
Each group member gets certified for a subproject which allows them to access only the group directories associated with their respective subproject.
The PI controls access via subproject security levels and active memberships.
Note that this is a single, fully encrypted group: there is no cryptographic isolation between sub-directories, yet the setup is secure and compliant.
This method protects users from admin impersonation and uses access-control between sub-directories.
Next, you need a subproject structure with security levels and security requirements. For this example, we will use:
Subprojects 01, 02, 03, 04, and 05.
Security levels 1, 2, 3, 4, and 5.
Security requirements 1, 2, 3, 4, and 5.
Security requirement 1 will correspond to security level 1, security requirement 2 to security level 2, security requirement 3 to security level 3, etc.
Security level 1 will correspond to subproject 01, security level 2 to subproject 02, security level 3 to subproject 03, etc.
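This one-to-one correspondence can be written out as a small lookup table. The following is only an illustrative sketch mirroring the example above; the names are not part of any tiCrypt API.

```python
# Illustrative encoding of the correspondence above:
# security requirement i <-> security level i <-> subproject 0i.
structure = {
    i: {
        "requirement": f"Security requirement {i}",
        "level": f"Security level {i}",
        "subproject": f"Subproject {i:02d}",
    }
    for i in range(1, 6)
}

# e.g. a member certified for security requirement 3 satisfies security
# level 3 and may therefore join Subproject 03.
```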
Create a project; this will be the project under the "umbrella" group.
Optionally, add a security level with security requirements to the project.
Re-login to view the project.
caution
When you apply a security level to the project under the "umbrella" group, you force all group members of all subprojects to satisfy the security requirements for the main project, which tags the group.
As a member with a user role in the "umbrella" group, you can only access the tagged group directory with colored tags.
Other group directories will only show their names with a Question Mark instead of the tag, indicating that they are inaccessible due to a locked session.
If a group directory bears a project tag that is not part of your active session (not your current project), you cannot view its contents.
note
Active memberships in the project allow active sessions in the tagged group directory.
Subproject 01 will tag sub-directory 01, subproject 02 sub-directory 02, subproject 03 sub-directory 03, etc.
To test your workflow infrastructure, open a group sub-directory you are a member of that was tagged with a corresponding subproject you belong to and upload a file into the group sub-directory. All members of the subproject tagging the sub-directory should be able to view, access, and perform all actions with the sub-directory files if they:
Are an active group member of the "umbrella" group.
Are members of the subproject that tags the sub-directory.
Satisfy all subproject security requirements which tag the sub-directory.
Although users cannot view the other members of the subprojects (beyond their own membership), they can view the members of their "umbrella" group who are also members of all subprojects.
This action may be altered by an admin in the permissions panel of each user or using the apply profile option.
From now on, you can either manage all subproject tags of the "umbrella" group or delegate each subproject tagged sub-directory to a sub-admin.
To delegate your tagged subproject to a sub-admin:
Alternatively, you can create an "umbrella" project with a set of groups corresponding to the subprojects of the "umbrella" project. You will manage project membership via tagged groups.
This workflow is complex and requires more thoughtful management.
You have a single project with full encryption between layers (group directories). The PI controls access to your group directory via group permissions, security levels per subproject, and project memberships. You are cryptographically isolated and access-controlled which is both secure and compliant.
When you apply a security level to the "umbrella" project, you will force all members of all groups of all subprojects to satisfy the security requirements for the "umbrella" project.
Re-login to view the project.
Next, you need a subproject structure with security levels and security requirements. For this example, we will use:
Subprojects 1, 2, 3, 4, and 5.
Security levels 1, 2, 3, 4, and 5.
Security requirements 1, 2, 3, 4, and 5.
Security requirement 1 will correspond to security level 1, security requirement 2 to security level 2, security requirement 3 to security level 3, etc.
Security level 1 will correspond to subproject 1, security level 2 to subproject 2, security level 3 to subproject 3, etc.
Subprojects 1, 2, 3, 4, and 5, which we created previously.
Group 1 will correspond to Subproject 1, group 2 to subproject 2, group 3 to subproject 3, etc.
Subproject 1 tags group 1, subproject 2 tags group 2, subproject 3 tags group 3, etc.
A random user will be added to each group corresponding to each subproject for demo purposes.
Re-login to view the changes in user memberships.
As a member with the user role in the "umbrella" project, you can view all files from all group directories.
As a subproject member, you can view only your group directory tagged with your subproject tag. Suppose you do not satisfy the subproject security requirements. In that case, you can view the file names, types, owners, creation dates, and sizes, but cannot perform the following actions: view, download, change project, share, access file history, edit, or delete files.
note
Users must be notified about the group they belong to.
Next, certify all group members with the security requirements corresponding to the subproject their group is tagged with.
To test your workflow infrastructure, open a group directory you are a member of that was tagged with a corresponding subproject and upload a file into the group directory. All members of that group should be able to view, access, and perform all actions with the group files if they:
Are active group members.
Are members of the subproject that tags the group.
Satisfy all subproject security requirements which tag the group.
Although users cannot view the other members of the subproject (beyond their own membership), they can view the members of their group who are also members of their subproject.
This action may be altered by an admin in the permissions panel of each user or using the apply profile option.
As shown below, you can either manage the tagged groups of subprojects or delegate each tagged group to a sub-admin.
To delegate your tagged group to a sub-admin:
Promote group member to the owner of the group.
Optionally, leave the group to allow independent collaboration between members and the group owner.
Next, set up the subproject membership of the group owner of the tagged group corresponding to the subproject that tags the group.
User role: allows the group owner to control the group but not to untag files by subproject from the tagged group.
Manager role: allows the group owner to untag files from the group, pulling them out of the subproject (this is not recommended).
note
A declassified file from the subproject which tags the group is still secured within the group but only visible and accessible to the group members.
When you tag a file in the subproject-tagged group directory with the "umbrella" project tag, group members who have access to that directory will also have access to the file, since it belongs to the parent project and is located within the appropriate group directory.
In contrast, the group owner can move a file outside the group but still have it tagged with the subproject.
The file will still be classified by the subproject tag but not visible to other group members except the group owner.
tip
For clean data management purposes, you should never tag random files with other project tags in a tagged group. Doing so results in many tagged files in the group directory structure that have nothing to do with the project that tags the group.
Do not create groups with people who do not know each other; we recommend creating groups with team members. The frontend will only allow you to access other users' resources if they are on your team.
Set the team boundaries lined up with group boundaries.
caution
If a group has a project tag, you cannot get the group key unless the project is active in your session.
If the project tagging the group is not active, there is no way to access the group.
After user permission changes in the project, you must re-login to update the current session.
You have successfully completed a tagged project with groups infrastructure.
Auditing is fully integrated into the secure system, addressing compliance requirements directly.
Different projects may have specific auditing requirements, and the system caters to those needs.
The tiCrypt solution includes an audit system that produces compliance reports, maintains a very detailed audit trail, and retains audit logs for the entire history of the system.
Reports allow audit predictions of data behavior.
Partial success can be achieved with significant effort, but there may be system blind spots and limited supported workflows.
tiCrypt is the result of a ten-year collaboration with the University of Florida, designed to address all compliance and security needs, making it a proven security unicorn.
The three pillars of compliance include strong security, system enforcement, and comprehensive auditing and reporting.
All features are designed to meet the rigorous compliance standards of public institutions.
Secure virtual machines in tiCrypt run in almost complete isolation from each other and the environment. The main motivation for this isolation is to provide a high degree of security.
Because all ports are blocked except 22, and because ticrypt-vmc (the VM controller) controls this port for port forwarding, there is no mechanism to contact a secure VM directly. Instead, a mechanism fully controlled by ticrypt-vmc is employed. The mechanism relies on ticrypt-proxy (a component of ticrypt-backend) to mediate communication with the user in the following manner:
ticrypt-vmc maintains a web-socket connection with ticrypt-proxy at all times.
When a user wants to connect to a VM, ticrypt-frontend creates a web-socket to match the existing VM one in ticrypt-proxy. All traffic between the two matching web-sockets is replicated by ticrypt-proxy. Immediately, ticrypt-vmc creates a new web-socket for future communication.
The first messages sent by ticrypt-frontend and ticrypt-vmc are digital-signature proofs of identity and Diffie-Hellman secret-key negotiation.
If the digital signature fails, or if the user is not authorized (as determined by ticrypt-vmc), the connection is immediately closed. Similarly, if the VM validation fails, ticrypt-frontend will close the connection.
All subsequent messages (in both directions) are encrypted using the negotiated key. Except for the setup messages, all traffic is hidden from ticrypt-proxy and any other part of the infrastructure.
All commands, terminal info, keys, etc. are sent through this encrypted channel.
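The message discipline above can be modeled as a small state machine: the first (and only cleartext) message must carry authentication and key material, and anything else closes the connection. The sketch below is purely conceptual; the class, fields, and stubbed-out cryptography are invented for illustration and are not tiCrypt's implementation.

```python
import hashlib

class VmcChannel:
    """Toy model of the ticrypt-vmc side of the mediated channel."""

    def __init__(self, expected_pubkey: str):
        self.expected_pubkey = expected_pubkey  # pinned owner key; never changes
        self.session_key = None
        self.closed = False

    def receive(self, msg: dict) -> dict:
        if self.closed:
            raise ConnectionError("channel closed")
        if self.session_key is None:
            # First message must be auth + key negotiation; anything else
            # (or a bad signature) immediately closes the connection.
            if msg.get("type") != "auth" or msg.get("pubkey") != self.expected_pubkey:
                self.closed = True
                raise ConnectionError("authentication failed")
            # Stand-in for the Diffie-Hellman exchange: derive a shared key.
            self.session_key = hashlib.sha256(msg["dh_public"].encode()).hexdigest()
            return {"type": "auth-ok"}
        # Subsequent messages must arrive under the negotiated session key.
        if msg.get("key_id") != self.session_key[:8]:
            self.closed = True
            raise ConnectionError("not encrypted with session key")
        return {"type": "ok"}
```

Note how the pinned `expected_pubkey` mirrors the property described below: once the owner's public key is learned, hijacking the channel requires compromising the corresponding private key.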
info
The only message sent in the clear to the ticrypt-vmc contains both authentication (digital signature) and key negotiation. Any other message or functionality is immediately rejected.
info
The communication does not rely on listening on regular ports and can only be mediated by ticrypt-proxy.
info
The public key of the VM owner cannot change after it is learned by ticrypt-vmc; thus, hijacking the communication requires compromising the user's private key.
tip
Most of the VM functionality only requires this communication method and can only be performed using this mechanism. The only exception is application port tunneling.
To allow rich application deployment in VMs, a generic mechanism is provided that tunnels TCP traffic on specific ports from the VM to the user's device. The mechanism is highly controlled (as explained below) but, in principle, can make any network application accessible.
note
Application tunneling is similar to the SSH tunneling mechanism used for reverse port forwarding, but uses TLS.
The communication pathway relies on a number of independent segments, any of which can prevent the mechanism from functioning. All traffic on these segments is proxied/forwarded and is secured overall with TLS encryption, authenticated with digital signatures (not passwords). At no point can any intermediate component intercept the communication without breaking TLS.
note
Access to VMs is limited to port 22, not SSH as a service. ticrypt-connect mediates port forwarding of any desired port using the mechanism described, with ticrypt-vmc listening on port 22. Specifically, port 3389 for RDP can be forwarded. Usually, multiple such ports are forwarded to allow richer functionality.
Initiating the forwarding tunnel setup requires the following steps:
ticrypt-frontend, using an authenticated session, tells ticrypt-rest (the entry point of ticrypt-backend) to create the pathway to a specific VM.
The request is validated, and ticrypt-proxy is informed of the operation.
The ticrypt-proxy sets up a listening endpoint on one of the designated ports (usually 6000-6100). The endpoint is strictly set up and only allows access from the IP address used to make the request.
The ticrypt-frontend is informed of the allocated port.
ticrypt-frontend asks ticrypt-connect to generate a TLS certificate for authentication and connection encryption.
ticrypt-frontend tells ticrypt-vmc to accept the application forwarding and provides the authentication TLS certificate.
ticrypt-vmc replies with a list of ports that need to be tunneled.
ticrypt-frontend tells ticrypt-connect to start the connection and use the previous certificate.
ticrypt-vmc, upon connection creation, checks that the digital signature is correct and initiates the TLS mediated tunneling.
The traffic to/from the local port on the user's device is tunneled and re-created within the VM to allow application access.
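The proxy-side port allocation with IP pinning described above can be sketched as follows. This is a conceptual model only; the class name, port range handling, and method names are assumptions for illustration, not tiCrypt internals.

```python
class ProxyAllocator:
    """Toy model of ticrypt-proxy allocating a pinned tunnel endpoint."""

    PORT_RANGE = range(6000, 6101)   # designated listening ports (per the text)

    def __init__(self):
        self.endpoints = {}          # port -> pinned client IP

    def allocate(self, client_ip: str) -> int:
        """Set up a listening endpoint, pinned to the requester's IP."""
        for port in self.PORT_RANGE:
            if port not in self.endpoints:
                self.endpoints[port] = client_ip
                return port
        raise RuntimeError("no free tunnel ports")

    def accept(self, port: int, src_ip: str) -> bool:
        """Only the IP address that made the setup request may connect."""
        return self.endpoints.get(port) == src_ip
```

The IP pinning matters because the allocated port is reachable before the TLS handshake completes; rejecting all other source addresses shrinks the exposed surface to the original requester.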
tip
A special case is provided for SFTP over port 2022 if the feature is enabled in ticrypt-vmc.
It can be used to move large amounts of data from the user's device to the VM.
ticrypt-connect accesses the allocated port controlled by ticrypt-proxy. The connection is pinned to the IP address the setup request came from.
ticrypt-proxy forwards the request to the VM host endpoint (explained below).
The VM host endpoint forwards the traffic to port 22 of the correct VM.
ticrypt-vmc listens on port 22 and runs the TLS protocol with port tunneling.
tip
Port 22 is specifically selected to ensure that no SSH server can be deployed instead to allow normal access to the VM. The traffic on port 22 is the TLS protocol under ticrypt-vmc control, not the SSH protocol.
To ensure VM security, only port 22 is open inbound. Furthermore, all outbound traffic is limited to ticrypt-rest traffic unless an exception is provided by ticrypt-allowlist (next section).
The specific mechanisms deployed for VM isolation are:
All traffic to ports other than 22 is blocked internally by the VM, isolating any other access. Note that application access is provided by the port tunneling above.
All traffic to the VM IP on ports other than 22 is blocked using firewall rules on the VM host.
External traffic to VMs is not routed by the VM host (with the exception of port 22).
Traffic from VMs to the outside is blocked unless the ticrypt-allowlist mechanism is used.
info
The specific mechanism that allows access on port 22 but no other traffic is to block all traffic routing and instead provide a forwarding mechanism from a port range on the VM host to port 22 of the IP ranges dedicated to the secure VMs. There is simply no way for a server outside the specific VM host to access any other VM port.
tip
Outbound traffic from secure VMs is blocked since it could be used to exfiltrate data, resulting in data breaches.
In order to use most commercial software, access to external licensing servers needs to be provided. Most of the time, the licensing servers reside within the organization, but occasionally, the servers are external. Recognizing this, tiCrypt provides a strictly controlled mechanism to allow access to licensing servers.
The ticrypt-allowlist component mediates setting up the mechanism that allows access to licensing servers. It does so by manipulating:
Firewall and port forwarding rules on the server running ticrypt-backend
DNS replies to requests that VMs make
The mechanism is strict and very specific. Unless a mapping is provided using ipsets and firewall rules, any outgoing traffic from VMs is blocked.
The ticrypt-allowlist component works in conjunction with ticrypt-frontend to provide a convenient way to enable/disable access. This mechanism requires SuperAdmin access.
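The deny-by-default behavior of the allowlist can be sketched as follows: unless a mapping has been provided, outbound lookups from VMs simply fail. This toy model is for illustration only; the class and method names are assumptions, and the real component manipulates ipsets, firewall rules, and DNS replies rather than an in-memory table.

```python
class Allowlist:
    """Toy model of the ticrypt-allowlist deny-by-default policy."""

    def __init__(self):
        self.mappings = {}   # licensing-server hostname -> forwarder address

    def add(self, hostname: str, forwarder: str) -> None:
        """Register a licensing-server exception (a SuperAdmin operation)."""
        self.mappings[hostname] = forwarder

    def resolve(self, hostname: str) -> str:
        """DNS reply handed to the VM: only allowlisted names resolve."""
        if hostname not in self.mappings:
            raise LookupError("blocked: destination not allowlisted")
        return self.mappings[hostname]
```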