Free Training
We provide complimentary training sessions. For more information or to request assistance, please visit our training page.
Deploy the product
First, subscribe to the product on the AWS Marketplace, and then deploy this CloudFormation file.
Steps
A comprehensive list of steps to ensure a successful deployment:
- Verify you are in the correct AWS account.
- Ensure you are in the appropriate region.
- Subscribe to the product using the link provided above, but do not launch the product directly from the AWS Marketplace; deployment happens through the CloudFormation template instead.
- Deploy the product using the CloudFormation template link provided above.
- Wait for the deployment to complete while continuing to review the remaining documentation.
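The console steps above can also be sketched from the command line. A minimal outline, assuming the AWS CLI is installed and configured; the stack name, region, and template file name are placeholders you must replace with your own values:

```shell
# Placeholders -- replace with your own values.
STACK_NAME="sftp-server"
EXPECTED_REGION="us-east-1"

# Verify account and region before deploying (uncomment with the AWS CLI configured):
#   aws sts get-caller-identity --query Account --output text
#   aws configure get region
#
# Deploy the CloudFormation template downloaded from the link above:
#   aws cloudformation create-stack \
#     --stack-name "$STACK_NAME" \
#     --template-body file://template.yaml \
#     --capabilities CAPABILITY_IAM
echo "Would deploy stack $STACK_NAME in $EXPECTED_REGION"
```

This is a sketch of the sequence, not a replacement for the console flow; the Marketplace subscription step still has to happen in the browser first.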
Initial Startup
Expect a slight delay in the startup time of our product relative to launching an instance with a standard AMI. This occurs as our bespoke software configuration is applied to tailor the product to your needs, extending the initialization process by a few minutes.
Server Connection
Using SSM
All of our products are designed to support AWS Systems Manager (SSM) right out of the box. We strongly believe in security, and the fewer ports exposed to the public, the better. The SSM service provided by AWS perfectly aligns with this approach.
When you need to connect to an instance, use Session Manager. Once you have access, run
sudo su ec2-user
to switch to the user account where all of our tools are located. This ensures you have the appropriate permissions and access to the necessary resources.
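For reference, a Session Manager session can also be opened from a local terminal. A sketch, assuming the AWS CLI and the Session Manager plugin are installed; the instance ID is a placeholder:

```shell
# Placeholder instance ID -- replace with your own.
INSTANCE_ID="i-0123456789abcdef0"

# Open an interactive shell via Session Manager (uncomment to run):
#   aws ssm start-session --target "$INSTANCE_ID"
#
# Once connected, switch to the tools user:
#   sudo su ec2-user
echo "Target instance: $INSTANCE_ID"
```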
Using SSH
This approach is also available to you. You can access the instance using the ec2-user username and the SSH key you selected at deployment time.
Working with PGP - Optional
Should you opt to enable encryption at rest during deployment, this section is designed to guide you through verifying the auto-generated key, generating new keys for users, and decrypting data.
View the Auto-Generated PGP Key
Upon the initial boot of the EC2 instance, our product automatically generates a PGP key. It's crucial to identify this key's ID for executing all subsequent commands effectively.
- Log into the instance.
- Elevate to the root user by executing sudo su -.
- Navigate to the root user's home directory by typing cd and pressing Enter.
- Display the auto-generated key's details with gpg --list-secret-keys --keyid-format=long.
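To experiment safely before touching the server's keyring, the listing can be reproduced locally against a throwaway keyring. A sketch, assuming GnuPG 2.x is installed; the name and email are invented:

```shell
if command -v gpg >/dev/null 2>&1; then
  # Use a temporary keyring so your real one stays untouched.
  GNUPGHOME="$(mktemp -d)"; export GNUPGHOME
  chmod 700 "$GNUPGHOME"
  # Generate a throwaway key without a passphrase.
  gpg --batch --pinentry-mode loopback --passphrase '' \
      --quick-generate-key "Demo <demo@example.com>" rsa2048 default never
  # Same listing command as on the server.
  gpg --list-secret-keys --keyid-format=long
  rm -rf "$GNUPGHOME"
fi
DEMO_DONE=1
```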
Generating Keys for Users
Creating a subkey for each user allows for tailored access and control. Follow these steps to generate user-specific keys:
- Begin by selecting the main key for editing: gpg --edit-key MAIN_KEY_ID
- In the interactive mode, type the addkey command, then choose option 6 for RSA (encrypt only), pick the size of the key, and finally set the expiration.
- To confirm the creation of the new subkey, list the keys: gpg --list-secret-keys --keyid-format=long
- For user access, export the newly created subkey: gpg --armor --export-secret-key SUB_KEY_ID > KEY_NAME.asc
This streamlined process ensures your data remains secure while providing necessary access to authorized users.
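The interactive addkey session above can also be scripted. A non-interactive sketch using gpg's --quick-add-key, run here against a throwaway keyring so nothing real is modified; the identity is invented, and the export uses --export-secret-subkeys rather than the per-ID command shown above:

```shell
if command -v gpg >/dev/null 2>&1; then
  GNUPGHOME="$(mktemp -d)"; export GNUPGHOME
  chmod 700 "$GNUPGHOME"
  # Stand-in for the server's auto-generated main key.
  gpg --batch --pinentry-mode loopback --passphrase '' \
      --quick-generate-key "Demo <demo@example.com>" rsa2048 default never
  # Fingerprint of the main key (needed to attach the subkey).
  FPR="$(gpg --list-secret-keys --with-colons | awk -F: '/^fpr/ {print $10; exit}')"
  # Add an encrypt-only RSA subkey: the scripted equivalent of "addkey" + option 6.
  gpg --batch --pinentry-mode loopback --passphrase '' \
      --quick-add-key "$FPR" rsa2048 encr never
  # Export the secret subkeys for handing to a user.
  gpg --batch --pinentry-mode loopback --passphrase '' \
      --armor --export-secret-subkeys "$FPR" > "$GNUPGHOME/user-key.asc"
  ls -l "$GNUPGHOME/user-key.asc"
  rm -rf "$GNUPGHOME"
fi
SUBKEY_DEMO_DONE=1
```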
Decrypt the Data
Decrypting PGP-encrypted files requires access to the matching private key. Here's how to manage and use decryption keys on Linux, macOS, and Windows: on Linux and macOS the key is imported and used through the gpg CLI, while Windows requires Gpg4win.
Linux
Before you can decrypt a file on Linux, ensure that the private key, which corresponds to the public key used for encryption, is imported into your keyring.
- To import a private key, use the following command:
gpg --import /path/to/private-key-file
- Once the key is imported, you can decrypt files using the command:
gpg -o decrypted_file.extension --decrypt encrypted_file.extension
This command searches your keyring for the appropriate private key to decrypt the file, prompting for the passphrase if necessary.
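The import/decrypt cycle can be rehearsed end to end in a throwaway keyring before working with real data. A self-contained sketch, assuming GnuPG 2.x; the identity and file names are invented:

```shell
RESULT="skipped"
if command -v gpg >/dev/null 2>&1; then
  GNUPGHOME="$(mktemp -d)"; export GNUPGHOME
  chmod 700 "$GNUPGHOME"
  # The "default" algorithm creates a signing primary plus an encryption subkey.
  gpg --batch --pinentry-mode loopback --passphrase '' \
      --quick-generate-key "Demo <demo@example.com>" default default never
  echo "hello sftp" > "$GNUPGHOME/secret.txt"
  # Encrypt to the demo recipient...
  gpg --batch --yes --trust-model always -r demo@example.com \
      --encrypt "$GNUPGHOME/secret.txt"
  # ...then decrypt, exactly as in the command above.
  gpg --batch --pinentry-mode loopback --passphrase '' \
      -o "$GNUPGHOME/decrypted.txt" --decrypt "$GNUPGHOME/secret.txt.gpg"
  RESULT="$(cat "$GNUPGHOME/decrypted.txt")"
  rm -rf "$GNUPGHOME"
fi
echo "$RESULT"
```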
macOS
Similar to Linux, macOS users must ensure the private key is available in their keyring.
- Import your private key using the Terminal with the command:
gpg --import /path/to/private-key-file
- After importing your key, decrypt files by executing:
gpg -o decrypted_file.extension --decrypt encrypted_file.extension
As with Linux, you will be prompted to enter the passphrase for your private key during the decryption process.
Windows (Using Gpg4win)
Windows does not include built-in CLI support for PGP. To decrypt PGP-encrypted files on Windows, you must first install Gpg4win, which provides both Kleopatra (a GUI application) and command-line tools.
- Install Gpg4win: Download and install Gpg4win from the official website. This suite includes the necessary GPG command-line tools.
- Import Private Key: Use Kleopatra or the command line to import your private key. If using the command line, open Command Prompt or PowerShell and run: gpg --import /path/to/private-key-file
- Decrypt Files: With the private key imported, use the command line to decrypt files: gpg -o decrypted_file.extension --decrypt encrypted_file.extension
Make sure to replace /path/to/private-key-file with the actual path to your private key file, and adjust file names as necessary. This process will prompt you for the passphrase of the private key when decrypting the file.
By following these steps, you can manage and use PGP keys across different operating systems, ensuring secure decryption of your encrypted data. Remember, the availability and use of the gpg command-line tool on Windows require the installation of Gpg4win, bridging the functionality gap between Windows and Unix-like operating systems (Linux and macOS).
Understanding GPG Key Output
When managing cryptographic keys with GnuPG (GPG) for secure communication and data storage, it's crucial to understand the output of key management commands. This section explains the output of the gpg --list-secret-keys --keyid-format=long
command, which is used to list the secret (private) keys stored in the GPG keyring.
/root/.gnupg/secring.gpg
------------------------
sec 4096R/8E323866C962F513 2024-02-26 [expires: 2100-12-12]
uid SFTP Server (Without password) <sftp.server@0x4447.email>
ssb 2048R/C056A6CB2B48A905 2024-02-27
Keyring File Path
- /root/.gnupg/secring.gpg: Indicates the file path to the keyring storing your secret keys. GPG uses this directory to keep its configuration files and keys secure.
Secret Key Details
- sec 4096R/8E323866C962F513 2024-02-26 [expires: 2100-12-12]: This line provides detailed information about a secret key:
- sec: Stands for "secret key" and denotes a private key.
- 4096R: Specifies the key's length (4096 bits) and the encryption algorithm (RSA), indicating strong security.
- 8E323866C962F513: The key ID, uniquely identifying the key. It is displayed in long format for detailed reference.
- 2024-02-26: The creation date of the key.
- [expires: 2100-12-12]: The expiration date of the key, set far in the future for extended use.
User ID
- uid SFTP Server (Without password) <sftp.server@0x4447.email>: Identifies the key owner, including a name and email. This key is designated for an SFTP server, indicating its use in secure file transfers.
Secret Subkey Details
- ssb 2048R/C056A6CB2B48A905 2024-02-27: Details a subkey associated with the primary key:
- ssb: Stands for "secret subkey", used primarily for encryption.
- 2048R: Indicates a 2048-bit RSA subkey, balancing security and performance.
- C056A6CB2B48A905: The subkey ID.
- 2024-02-27: The creation date of the subkey.
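When scripting against this output, the long key ID can be pulled out of the sec line. A small sketch parsing the sample line shown above (for real scripts, gpg's --with-colons format is more robust):

```shell
# Sample line, copied from the listing above.
SEC_LINE='sec   4096R/8E323866C962F513 2024-02-26 [expires: 2100-12-12]'
# Field 2 is "bits+algo/keyid"; take the part after the slash.
KEY_ID="$(printf '%s\n' "$SEC_LINE" | awk '{split($2, a, "/"); print a[2]}')"
echo "$KEY_ID"    # prints 8E323866C962F513
```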
Migrating to v1.3.x
If you're planning to upgrade to version v1.3.x and wish to keep all the PGP keys, follow these step-by-step instructions for a seamless transition:
- Log in to the instance.
- Ensure you are operating as ec2-user.
- Navigate to the home directory of ec2-user.
- Download the backup and restore CLI script to the instance using curl or wget from this link: pgp_backup_restore.sh.
- Make the script executable with chmod +x pgp_backup_restore.sh.
- Execute the script with pgp_backup_restore.sh backup.
- Run ls -la to verify the contents of the current directory.
- Look for a file named 0x4447-sftp-backup.tar.gz.
- Copy this file to your local machine using your preferred method (SCP, for example), as the data will be lost after the update.
- Proceed with the product update.
- Upload the backup file to the root directory of ec2-user.
- Re-download the backup and restore CLI script to the instance using curl or wget from this link: pgp_backup_restore.sh.
- Execute the script again, but this time with pgp_backup_restore.sh restore.
If everything goes as planned, all your configurations will be restored.
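Condensed, the copy-out/copy-back portion of the sequence looks like this from your local machine. A sketch with placeholder host and key names; pgp_backup_restore.sh is the script linked above:

```shell
# Placeholders -- replace with your own values.
INSTANCE="ec2-user@203.0.113.10"
SSH_KEY="my-key.pem"
BACKUP_FILE="0x4447-sftp-backup.tar.gz"

# On the instance (via SSM or SSH):
#   chmod +x pgp_backup_restore.sh && ./pgp_backup_restore.sh backup
#
# Copy the archive off before updating (uncomment to run):
#   scp -i "$SSH_KEY" "$INSTANCE:~/$BACKUP_FILE" .
#
# After the update, copy it back and restore:
#   scp -i "$SSH_KEY" "$BACKUP_FILE" "$INSTANCE:~/"
#   ./pgp_backup_restore.sh restore
echo "Backup archive: $BACKUP_FILE"
```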
Advanced details
Key aspects
- Unlimited storage for uploaded data.
- Ability to easily browse pre-existing EFS drives.
- Optional PGP support for encrypting individual files at rest.
Example use cases
Your imagination is your limit, but here are some ideas worth considering:
- Ingest vast amounts of data at a fixed price.
- Enable secure data sharing with financial institutions.
- Seamlessly browse existing EFS drives within your account and easily access their contents.
- Provide a secure storage solution for highly sensitive data, encrypted with PGP.
Resilience
Our product incorporates built-in resilience measures to prevent data loss and ensure uninterrupted connectivity, even in the event of changing IP addresses. The CloudFormation template we provide offers a streamlined and efficient way to deploy and set up all the necessary components, allowing you to get up and running swiftly with everything you need.
Test the setup
Before going into production, it is important to thoroughly test the product. This is not because we lack confidence in its functionality, but rather to ensure that you become familiar with how it works and can address any potential challenges or issues beforehand. Testing will help you gain confidence in the product's performance and make necessary adjustments, if needed, before deploying it in a live production environment.
Security Concerns
Below we give you a list of potential ideas to consider regarding security, but this list is not exhaustive – it is just a good starting point.
- Limit access to the server to a specific fixed IP.
- Restrict root access to only yourself.
How To
How to change the instance type
Before changing the instance type, make sure you back up your drive(s). Then follow these steps:
- Go to the CloudFormation console.
- Click on the stack that you want to update.
- Click the Update button.
- Keep the default selection and click Next.
- On the Parameters page, change the instance type from the drop-down.
- Click Next until the end.
Please wait for the stack to finish updating.
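The same update can be issued from the AWS CLI. A sketch; the stack name and the parameter key InstanceType are assumptions, so check your stack's Parameters tab for the real key name before running it:

```shell
# Placeholders -- replace with your own values.
STACK_NAME="my-sftp-stack"
NEW_TYPE="t3.large"

# Reuse the existing template and override only the instance type
# (uncomment with the AWS CLI configured; the parameter key name is assumed):
#   aws cloudformation update-stack \
#     --stack-name "$STACK_NAME" \
#     --use-previous-template \
#     --parameters ParameterKey=InstanceType,ParameterValue="$NEW_TYPE" \
#     --capabilities CAPABILITY_IAM
#   aws cloudformation wait stack-update-complete --stack-name "$STACK_NAME"
echo "Would switch $STACK_NAME to $NEW_TYPE"
```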
F.A.Q
These are some of the common solutions to problems you may encounter:
Not authorized for images
My CloudFormation stack encountered a failure with the following error in the Events tab: API: ec2:RunInstances Not authorized for images:...
Solution
Before using our CloudFormation file, please ensure that you accept the subscription from the AWS Marketplace.
The product is misbehaving
I followed all the instructions from the documentation.
Solution
Please verify if the values entered in the UserData section have been successfully passed to the instance itself.
sudo cat /var/lib/cloud/instance/user-data.txt
UserData seems OK
The UserData reached the instance, but the product is not behaving as expected.
Solution
Use the following command to check if there were any errors during the boot process.
sudo cat /var/log/messages | grep 0x4447
Issue with EFS backup restoration
I launched the product using an EFS drive restored from a backup, but unfortunately, the product is not functioning as expected.
Solution
You need to reorganize the EFS drive. AWS restores the data, even on a new and empty drive, into special folders named aws-backup-restore_timestamp-of-restore. In other words, AWS does not recreate the original folder structure during the restoration process. Check how AWS restores EFS Backups to learn more.
Before utilizing the restored drive, you have the option to reorganize it using our SFTP product.
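At the file-system level, the reorganization amounts to moving the restored content out of the aws-backup-restore_* folder back to the drive root. A simulated sketch against a temporary directory (the timestamp, paths, and file names are invented):

```shell
# Simulate a restored EFS drive in a temporary directory.
DRIVE="$(mktemp -d)"
mkdir -p "$DRIVE/aws-backup-restore_2024-02-27T00-00-00/uploads"
echo "data" > "$DRIVE/aws-backup-restore_2024-02-27T00-00-00/uploads/file.txt"

# Move everything out of the restore folder to the drive root,
# then remove the now-empty folder.
mv "$DRIVE"/aws-backup-restore_*/* "$DRIVE"/
rmdir "$DRIVE"/aws-backup-restore_*

CONTENT="$(cat "$DRIVE/uploads/file.txt")"
echo "$CONTENT"    # prints data
rm -rf "$DRIVE"
```

Note that the shell glob skips hidden files; on a real drive, a tool like rsync handles those and merges into existing folders more safely.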