We provide free training; feel free to reach us through our training page.
- SSL is enabled by default, so logs are accepted only over a secure channel.
- User access is over SSH using passwords, and is limited to the log folder only.
- Logs are organized in separate folders using the remote host name.
Example use cases
Your imagination is your limit, but here are some ideas that are worth considering:
- If you design your infrastructure so that no one has direct remote access to your production servers, you can stream the logs from those servers into our product for secure access to production logs - ideal for developers who need to debug potential issues.
- Stream logs from Docker containers by configuring Docker to pass container logs to the host OS, which can then forward those messages to our product.
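One way to realize the Docker use case above is the `syslog` logging driver, which hands container logs to the host's syslog daemon; the host-level Rsyslog forwarder then ships them to the server. This is a minimal sketch: the file path is kept local here for illustration (on a real host the file is `/etc/docker/daemon.json`, and Docker must be restarted afterwards), and the tag template is just one sensible choice.

```shell
# Sketch: route container logs to the host's syslog daemon, which your
# host-level Rsyslog forwarder then ships to the log server.
# DAEMON_JSON points at a local path here for illustration; on a real
# host this file lives at /etc/docker/daemon.json (restart Docker after).
DAEMON_JSON="${DAEMON_JSON:-./daemon.json}"

cat > "$DAEMON_JSON" <<'EOF'
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "unixgram:///dev/log",
    "tag": "{{.Name}}"
  }
}
EOF
```

With this in place, each container's output arrives in the host's syslog stream tagged with the container name, so the per-hostname folders on the server stay easy to read.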
Our complimentary CloudFormation template requires you to provide an internal IP that you'd like the server to always use. This way, even if the instance is terminated for a server type change, the IP remains the same and the clients can reconnect without any changes.
Our product is configured to allow any server to send it logs. The data is sent over an encrypted connection, but there is no credential system to prevent arbitrary instances from sending data. For this reason, the product should not be accessible from the public internet. It was designed to be deployed in a private subnet within a VPC, so that only local servers can send it logs.
Our product includes a bash script that, when run on a client, will automatically install and configure the Rsyslog service. The bash script is uploaded to S3, and from there your clients can pull it and run it. Once the server instance is deployed, check the S3 bucket and review the script before using it on a client to make sure it is safe for your environment. If needed, you can configure each client manually, or modify the script to fit your needs.
Complete feature list
This section lists all the features of this product for easy referencing.
The product itself
- Default SSL connection.
- Bash script for client configuration.
- Separate users for log access.
If you were to use our CloudFormation file, you’d also get
- An Alarm to check for CPU Bursts.
- An Alarm to check for CPU Load.
- An Alarm to check for Disk usage.
- An Alarm to auto-recover the instance if it suddenly fails due to an AWS hardware problem.
- An Alarm for EC2 Instance termination protection.
- An SNS Topic to receive notifications from the above alarms.
- The ability to set the same local IP for the server, so that even after a termination the clients won't need reconfiguration.
We provide a CloudFormation file. Before you click the orange button to deploy the stack, make sure to subscribe first to the product on the AWS Marketplace, and if you want to check the CloudFormation prior to deployment, follow this link.
What will be deployed
- 1x EC2 instance with 0x4447 custom AMI:
- 1x IAM Role.
- 1x IAM Policy.
- 1x Security Group.
- 1x Instance profile.
- 4x CloudWatch Alarms:
- CPU Burst.
- CPU Load.
- Disk Usage.
- EC2 Instance Recovery.
- 1x SNS Topic:
- 1x SNS Policy.
- 1x Topic Subscription.
- 1x CloudWatch Dashboard for instance overview.
- 1x EFS drive:
- 1x Mount target.
- 1x Security group.
- 1x Backup:
- 1x Plan.
- 1x Role.
- 1x Selection.
- 1x Vault.
- 1x S3 Bucket to store external scripts.
The First Boot
The boot time of our product will be longer than that of an instance started from a clean AMI, because our custom code needs to run to prepare the product for you. This process can take a few minutes longer than usual.
Connecting to the Server
If you need to connect to the server: get its IP, then connect to the instance over SSH with the username ec2-user, using the private key you selected at deployment time. Once connected, you should be greeted with a custom MOTD detailing the product information.
Automatic Client Setup
This step is optional. If you know what you are doing, feel free to configure your client servers yourself. Alternatively, if you are using another product that forwards logs, use whatever UI that product provides to set it up.
Once the server (our product) is deployed correctly, you can configure your clients with the following commands (make sure to replace the placeholder values with real ones, and make sure the EC2 instances in which you run these commands have access to the S3 bucket where the custom script is located).
These commands can be executed:
- by hand
- by placing them in the EC2 Instance UserData
- by executing them remotely through AWS Systems Manager
#!/bin/bash
aws s3 cp s3://PARAM_BUCKET_RSYSLOG/bash/rsyslog-client-setup.sh /tmp/rsyslog-client-setup.sh
chmod +x /tmp/rsyslog-client-setup.sh
/tmp/rsyslog-client-setup.sh PARAM_RSYLOG_SERVER_IP
- Copy the bash script which will configure the client
- Make the script executable
- Configure the client to send the logs to the Rsyslog server
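If you prefer to configure a client by hand rather than run the bundled script, the core of the setup is a single rsyslog forwarding rule. The sketch below writes such a rule to a local directory for illustration; on a real client the file belongs in `/etc/rsyslog.d/` and rsyslog must be restarted afterwards. The IP, port, and file name are placeholders, and the product's own script may do more than this (for example TLS and queueing), so treat this only as the minimal manual equivalent.

```shell
# Minimal sketch of a hand-configured client: one rsyslog rule that
# forwards everything to the server. "@@" means TCP; a single "@"
# would mean UDP. All values below are placeholders.
# CONF_DIR points at the current directory here; on a real client use
# /etc/rsyslog.d and then restart rsyslog (sudo systemctl restart rsyslog).
CONF_DIR="${CONF_DIR:-.}"
RSYSLOG_SERVER_IP="10.0.1.50"   # placeholder - your server's private IP

cat > "$CONF_DIR/50-forward-0x4447.conf" <<EOF
*.* @@${RSYSLOG_SERVER_IP}:514
EOF
```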
To allow other team members to access the logs from remote servers through our product, we created a special user group called rsyslog that has access only to the remote logs. Below is a reminder of how to manage users and passwords under Linux.
How to create a user
sudo useradd -g rsyslog PARAM_USER_NAME
How to set a password
sudo passwd PARAM_USER_NAME
How to delete a user
sudo userdel PARAM_USER_NAME
How to change a password
sudo passwd PARAM_USER_NAME
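The create-and-set-password steps above can be wrapped in a small helper so new log users always land in the rsyslog group. The function name `add_log_user` is our own, not part of the product; run it on the server as a user with sudo rights.

```shell
# Convenience wrapper around the commands above; the name add_log_user
# is our own invention, not part of the product. Run with sudo rights.
add_log_user() {
  local name="$1"
  # Create the account in the rsyslog group so it can only read logs.
  sudo useradd -g rsyslog "$name" || return 1
  # Interactively set the account's password.
  sudo passwd "$name"
}

# Usage: add_log_user alice
```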
The logs can be found in the /var/log/0x4447-rsyslog folder. Inside it, you'll find more folders named after each remote hostname for easy identification.
Test the setup
Before you go into production, make sure to test the product so you get used to how it works.
Below we give you a list of potential ideas to consider regarding security, but this list is not exhaustive – it is just a good starting point.
- Never expose this server to the public. Use it only inside a private network to limit who can send it logs
- Allow logging only from specific subnets
- Block public SSH access
- Allow SSH connection only from limited subnets
- Ideally allow SSH connection only from another central instance
- Don't give root access to anyone but yourself
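Several of the points above come down to security group rules. The sketch below builds the corresponding AWS CLI commands; every value is a placeholder (group ID, ports, subnet CIDRs), and the commands are assembled into variables and echoed so you can review them before running them for real.

```shell
# Sketch of tightening the security group with the AWS CLI. Every value
# below is a placeholder - substitute your own group ID, ports, and
# subnet CIDRs. The commands are built into variables and echoed so you
# can review them before actually running them.
SG_ID="sg-0123456789abcdef0"   # placeholder security group ID
APP_CIDR="10.0.0.0/24"         # subnet that is allowed to send logs
ADMIN_CIDR="10.0.99.0/28"      # subnet that is allowed to SSH in

# Accept syslog traffic only from the application subnet
# (TCP 514 here; adjust the port if your clients use TLS).
ALLOW_LOGS="aws ec2 authorize-security-group-ingress \
  --group-id $SG_ID --protocol tcp --port 514 --cidr $APP_CIDR"

# Accept SSH only from the admin subnet, never from 0.0.0.0/0.
ALLOW_SSH="aws ec2 authorize-security-group-ingress \
  --group-id $SG_ID --protocol tcp --port 22 --cidr $ADMIN_CIDR"

echo "$ALLOW_LOGS"
echo "$ALLOW_SSH"
```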
How to change the instance type
If you need more memory and CPU capacity, you can change your instance type to a bigger one. To do so, follow these instructions:
- Go to the CloudFormation console
- Click on the stack that you want to update.
- Click the Update button.
- Keep the default selection and click Next.
- On the Parameters page, change the instance type from the drop-down menu.
- Click Next till the end.
- Wait for the stack to finish updating.
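The same change can also be scripted with the AWS CLI. The stack name and the InstanceType parameter key below are assumptions - check your template's actual parameter names first. Note that with --use-previous-template, any other template parameters would need ParameterKey=...,UsePreviousValue=true entries as well. The command is assembled into a variable and echoed for review rather than executed.

```shell
# CLI alternative to the console steps above. STACK_NAME and the
# "InstanceType" parameter key are placeholders/assumptions - verify
# them against your template. Echoed for review; run it yourself once
# you've confirmed the values.
STACK_NAME="my-rsyslog-stack"   # placeholder stack name
NEW_TYPE="t3.large"             # the instance size you want

UPDATE_CMD="aws cloudformation update-stack \
  --stack-name $STACK_NAME \
  --use-previous-template \
  --parameters ParameterKey=InstanceType,ParameterValue=$NEW_TYPE \
  --capabilities CAPABILITY_IAM"

echo "$UPDATE_CMD"
```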
Troubleshooting
These are some of the common solutions to problems you may run into:
Not authorized for images
My CloudFormation stack failed with the following error
API: ec2:RunInstances Not authorized for images:... in the Event tab.
You have to accept the subscription from the AWS Marketplace first, before you use our CloudFormation file.
The product is misbehaving
I followed all the instructions in the documentation.
Check if the values entered in the UserData have reached the instance itself.
sudo cat /var/lib/cloud/instance/user-data.txt
UserData seems OK
The UserData reached the instance, and yet the product is not acting as it should.
Use the following command to see if there were any errors during the boot process.
sudo grep 0x4447 /var/log/messages