- Off-site backup for personal files
- Backup to S3 script
- Prerequisites
- Provision AWS resources
- Deploy stack
- Get bucket name
- Get access key
- Setup rclone
- Backup to S3
- Single file backup
- Directory backup
- Subdirectories backup
- Example
- Questions and answers
- Why IAM user?
- Why .tar.gz archives?
- Why stream archive?
- Why calculate the hash?
- Why rclone?
- Why S3 Glacier Flexible Retrieval?
- Problems and considerations
- Automatic backups
- Backup from Google Photos (and similar)
- Disk operations vs power-saving mode
- Backblaze – an easy alternative
- Conclusion
Needing to back up my personal files in the cloud, I wrote a script that archives the data into an Amazon S3 bucket. After some fine-tuning and solving a bunch of edge cases, it’s limited mainly by the disk read speed and my internet upload speed. And it costs me only $3.70 per TiB per month.
Instead of reinventing the wheel, I started with research. There must be a good, easy-to-use cloud backup service, right? But everything I found was too complex and/or expensive. So I wrote the backup script myself.
Then I did the research again, and the results were quite different – this time, I found a few reasonable services I could use. But I already had the script, I had fun writing it, and I will continue using it, so I decided to share it.
Off-site backup for personal files
The question of personal backup systems comes up from time to time on Hacker News. The commonly recommended approach is the 3-2-1 backup strategy:
- 3 copies
- on 2 different media
- with at least 1 copy off-site
I have an external disk with an archive of documents and photos. This, however, is just one copy, kept right next to my laptop. And hard drives fail.
So I needed an off-site backup.
Looking for personal cloud backup solutions, I found some overcomplicated, some expensive, and one or two reasonable services. But then I remembered that I work with AWS, and Amazon S3 storage is cheap. This is especially true if you want to archive data and don’t touch it too often.
The result is the script I wrote for backing up files to an S3 bucket.
Backup to S3 script
Below is the full, detailed explanation. If you are interested only in the script and usage instructions, you can find the link to the GitHub repository at the end.
Prerequisites
The script uses rclone and GNU Parallel.
On macOS, you can install them with Homebrew:
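```bash
brew install rclone parallel
```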
Provision AWS resources
To store backups in an S3 bucket, you first need such a bucket. And while you could create and configure it by hand, it’s easier to provision it with a simple CloudFormation template.
The template defines two resources: the BackupBucket and the BackupUser.
The BackupBucket has public access blocked for all the objects, as you don’t want any of the files to be publicly accessible by mistake.
It also enables object versioning. When uploading new versions of existing files – fresh backups of the same files – the previous ones will be kept instead of being immediately overwritten.
On the other hand, to avoid keeping old backups indefinitely (and paying for them), the bucket has a lifecycle rule that automatically removes old file versions. It keeps only the last 3 versions and removes older ones 30 days after they become noncurrent.
The other lifecycle rule aborts incomplete multipart uploads after 3 days. The script uploads big files in multiple chunks. If the process fails or is interrupted, you are still charged for the uploaded chunks until you complete or abort the upload. This rule prevents incomplete uploads from lingering forever and generating charges.
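For reference, the bucket definition with both lifecycle rules could look roughly like the sketch below. It’s not the exact template from the repository, just an outline, with the file name template.yaml assumed for the deploy command later on.

```bash
# Rough sketch of the BackupBucket resource (see the repository for the full template)
cat > template.yaml <<'EOF'
Resources:
  BackupBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Block any form of public access to the backed-up files
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      # Keep previous versions of overwritten objects
      VersioningConfiguration:
        Status: Enabled
      LifecycleConfiguration:
        Rules:
          # Keep at most 3 noncurrent versions; remove older ones after 30 days
          - Id: expire-old-versions
            Status: Enabled
            NoncurrentVersionExpiration:
              NoncurrentDays: 30
              NewerNoncurrentVersions: 3
          # Clean up abandoned multipart uploads after 3 days
          - Id: abort-incomplete-uploads
            Status: Enabled
            AbortIncompleteMultipartUpload:
              DaysAfterInitiation: 3
EOF
```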
The second resource is the BackupUser. It’s an IAM user with permission to upload files to the bucket.
Deploy stack
To deploy the stack, run:
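```bash
# Assuming the template is saved as template.yaml;
# CAPABILITY_NAMED_IAM is required because the stack creates an IAM user
aws cloudformation deploy \
  --template-file template.yaml \
  --stack-name backupToS3 \
  --capabilities CAPABILITY_NAMED_IAM
```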
Get bucket name
After the deployment is complete, go to CloudFormation in the AWS Console and find the backupToS3 stack.
Then, in the “Outputs” tab, you will see the BackupBucketName key with the generated S3 bucket name.
You will need it in a moment.
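If you prefer the CLI over the console, you can also read the output with a query along these lines:

```bash
aws cloudformation describe-stacks \
  --stack-name backupToS3 \
  --query "Stacks[0].Outputs[?OutputKey=='BackupBucketName'].OutputValue" \
  --output text
```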
Get access key
Similarly, you will find the BackupUserName output with the IAM user name.
Go to IAM, open the user details, and create an access key in the “Security credentials” tab.
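The same can be done from the CLI, substituting the user name taken from the stack outputs:

```bash
aws iam create-access-key --user-name <BackupUserName>
```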
Setup rclone
rclone requires setting up the storage backend upfront.
You can do this by running rclone config and setting up the S3 backend, or by editing the configuration file manually.
In the configuration, set the access key ID and secret access key generated for the IAM user.
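A minimal remote definition could look like the sketch below; the remote name s3, the region, and the config path are assumptions to adjust to your setup:

```bash
# Append a minimal S3 remote to the default rclone config location
cat >> ~/.config/rclone/rclone.conf <<'EOF'
[s3]
type = s3
provider = AWS
access_key_id = <ACCESS_KEY_ID>
secret_access_key = <SECRET_ACCESS_KEY>
region = eu-west-1
EOF
```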
Backup to S3
Generally, the idea is straightforward: we copy everything to the S3 bucket.
But things are rarely so simple. So let’s break it down, step by step.
The script is based on the minimal Bash script template. Bash provides the easiest way to glue together various CLI programs and tools.
The script expects at least three parameters: the S3 bucket name (--bucket), the backup name (--name), and the local path to be backed up (--path). The backup name serves as an S3 prefix to separate distinct backups.
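A hypothetical invocation could look like this (the script name, bucket name, and path are illustrative):

```bash
./backup-to-s3.sh \
  --bucket backuptos3-backupbucket-1a2b3c4d5e6f \
  --name photos \
  --path /Volumes/External/Photos
```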
After parsing the input arguments, the script does four things.
Firstly, it converts the path to an absolute one.
Secondly, it calculates the chunk size for the multipart file upload based on the max archive size. More on this later.
Thirdly, it creates an array of common parameters for rclone.
And finally, it executes the backup based on the provided arguments.
Single file backup
If the backup path points to a file, the script uses the rclone copy command to simply upload the file to the bucket.
rclone calculates the MD5 of the file and uploads it only if a file with the same name and checksum does not already exist in the bucket. This prevents wasting time uploading a file that hasn’t changed since the last backup.
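Under the hood, this boils down to something along these lines (the remote name s3 and the exact flags are assumptions, not the literal script code):

```bash
# Copy a single file; rclone skips the upload if the remote copy already matches
rclone copy --s3-storage-class GLACIER "$backup_path" "s3:$bucket/$name/"
```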
Directory backup
If the path points to a directory, things get more complex.
The directory upload function starts by cleaning the path from parts like /./ and creating the archive name. The archive is named after the directory, with a .tar.gz extension.
The subsequent process is best explained with a flowchart:
After the directory is compressed and uploaded, the script creates two additional text files in the S3 bucket: one contains the calculated MD5 hash of the files, and the other contains the list of archived files.
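In simplified form, the core of that flow is a single pipeline; the variable names below are illustrative, not the exact script code:

```bash
# Stream the compressed directory straight to S3, never writing the archive to disk
tar -czf - -C "$(dirname "$clean_path")" "$(basename "$clean_path")" \
  | rclone rcat \
      --s3-chunk-size "${chunk_size_mb}M" \
      --s3-storage-class GLACIER \
      "s3:$bucket/$name/$archive_name"
```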
Subdirectories backup
Since we compress the directory and calculate its hash to avoid re-uploading it unnecessarily, it makes sense to archive individual subdirectories separately. This way, if the content of one of them changes, only that archive must be updated, not everything.
At the same time, we should aim to have a smaller number of bigger archives instead of creating too many small ones. This will make the backup and restore process more effective, both in time and cost.
For those reasons, an optional --split-depth parameter defines how many levels down the directory tree the script should go to create separate archives.
Each found directory is archived with the backup_path() function, the same as before.
Additionally, all the files in directories above the --split-depth level are archived as a _files.tar.gz.
To illustrate this, let’s take this file structure:
With --split-depth 1, the disk will be backed up as four archives:
And with --split-depth 2:
Example
In my run, two directories were found in the path and uploaded.
The first one was 41 GiB in total and was transferred in 1h57m. The other was 177 GiB and took 8h12m to upload. If you calculate that, it almost perfectly matches my internet upload speed of 50 Mbit/s.
Questions and answers
Why IAM user?
Generally, using IAM users is not the best practice. Instead, it’s far better (and more common) to use SSO and grant access through IAM roles. Furthermore, it’s even safer (and more convenient) to use tools like Leapp or aws-vault to manage AWS access.
However, credentials obtained this way are valid for only up to 12 hours and may require manual actions, like providing an MFA token, to refresh them. Depending on the directory size and your internet connection, uploading a backup may take longer than that.
For that reason, we use an IAM user with static access credentials set directly in rclone. This user has access limited to only the backup bucket.
Putting the user access credentials directly in the rclone configuration has one additional perk. It allows for the backup to be done in the background without affecting your other work with AWS.
Why .tar.gz archives?
If you just want to upload the whole directory recursively to the S3 bucket, the rclone copy or sync commands will handle it. So why bother with compressing files into a .tar.gz archive?
Uploading or downloading a single archive file is much faster than doing the same with hundreds or thousands of individual files. Since this is a backup, not regular storage, there is no need to quickly fetch a single file.
Working with a smaller number of big files also reduces the cost of S3 PUT and GET operations.
This may be simplified in the future when rclone gets built-in archive capability.
Why stream archive?
The .tar.gz archive is streamed with the pipe (|) operator to the rclone rcat command, which reads the data from the standard input and sends it to the storage. The archive file is never created on the disk, so you don’t need free space in memory or on the disk equal to the backup file size.
This, however, brings some consequences.
One is that rclone does not know the total size of the archive upfront. Sending a big file to S3 is done with a multipart upload: first, the upload process is started, then the chunks of the file are sent separately, and, finally, the transfer is completed. There is a limit, though, of 10,000 parts per upload, so each chunk must be big enough. Since rclone does not know the total file size, we must set the chunk size manually.
This is where the previously calculated $chunk_size_mb comes into play. By default, it’s set so that the total file size limit is 1 TiB. You can use the --max-size parameter to modify it.
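The calculation itself is simple arithmetic; a sketch with assumed variable names:

```bash
# S3 multipart uploads allow at most 10,000 parts,
# so each chunk must be at least max_size / 10,000 (rounded up)
max_size_gb=${max_size_gb:-1024}                          # default limit: 1 TiB
chunk_size_mb=$(( (max_size_gb * 1024 + 9999) / 10000 ))  # ceiling division, in MiB
```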
The other consequence of streaming the archive file is that rclone can’t calculate its checksum before uploading.
Why calculate the hash?
Typically, rclone calculates the file checksum and compares it with the checksum of the file already existing in the storage. For example, this happens when we back up a single file using rclone copy.
But since we are streaming the archive, rclone can’t calculate the checksum on its own, so we need to do it ourselves.
We never create the whole archive file locally, so we can’t calculate its hash directly. Instead, we calculate the MD5 of every file and then compute the MD5 of those hashes:
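A sketch of such a hash-of-hashes calculation (the actual script may differ in the details):

```bash
# Hash every file in a stable order, then hash the list of hashes;
# md5sum comes from GNU coreutils, GNU Parallel spreads the work across CPU cores
find "$clean_path" -type f -print0 \
  | sort -z \
  | parallel -0 --keep-order md5sum \
  | awk '{print $1}' \
  | md5sum \
  | awk '{print $1}'
```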
This hash is uploaded as a separate object next to the archive. On the next script run, the hash for the local files is re-calculated and compared with the one in the storage.
Calculating MD5 hashes for thousands of files can be time-consuming. However, the biggest bottleneck of the backup process is the upload speed. Therefore, calculating the hash and possibly skipping the archive upload can reduce the total process time, especially for rarely modified files.
Why rclone?
rclone provides a layer on top of the AWS SDK for interacting with S3 buckets. While the same could be achieved with the AWS CLI, rclone displays upload progress when sending data from the input stream, and the AWS CLI does not.
However, the progress shown by rclone is not 100% accurate. rclone reports data as transferred once it is buffered by the AWS SDK, not when it is actually sent. For that reason, the displayed progress may speed up, slow down, or even stall from time to time, while in reality the data is uploaded at a steady rate.
Another reason to use rclone is that, because access credentials for rclone are set separately in its config file, ongoing backup upload does not interfere with other activities and tools that use local AWS credentials. So we can work with other AWS accounts at the same time.
Why S3 Glacier Flexible Retrieval?
AWS S3 has different storage classes, with various pricing per GB and operations.
The S3 Glacier Flexible Retrieval class is over 6 times cheaper per GB stored than the default S3 Standard. In the least expensive AWS regions (like us-east-1 or eu-west-1), the storage price is only $3.69 per TiB per month.
On the other hand, fetching objects from S3 Glacier Flexible Retrieval is not instantaneous. First, you have to request an object restore. It is free if you are okay with waiting 5-12 hours for the object to become available. Otherwise, retrieval can be sped up by choosing the Expedited tier, but for $30 per TiB.
An even cheaper storage class is S3 Glacier Deep Archive. Using it can further reduce costs to only $1.01 per TiB per month. But, unlike Flexible Retrieval, it does not provide an option to retrieve the data in less than a few hours. Also, there is no free data retrieval option, although it costs only $2.56 per TiB if you are willing to wait up to 48 hours.
With personal backups in mind, data retrieval should be rare and, hopefully, not time-critical. Thus the S3 Glacier Flexible Retrieval storage class provides a reasonable balance between costs and data access options.
Also keep in mind that with S3 Glacier Flexible Retrieval, you are billed for at least 90 days of storage for each object, even if you remove it sooner.
The storage class can be set with the --storage-class parameter.
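For example, to use Deep Archive instead (assuming the parameter accepts the S3 storage class identifiers; the invocation is illustrative):

```bash
./backup-to-s3.sh \
  --bucket backuptos3-backupbucket-1a2b3c4d5e6f \
  --name documents \
  --path /Volumes/External/Documents \
  --storage-class DEEP_ARCHIVE
```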
Please see the S3 pricing page for all the details.
Problems and considerations
Automatic backups
Backups are best if they are done regularly. Otherwise, you can find yourself looking for a backup to restore, only to discover that the last one was made a year ago.
If you want to back up local files from your computer, nothing stops you from automating it with a CRON job.
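For example, a crontab entry along these lines (the schedule and paths are illustrative) would back up a local directory every Sunday night:

```bash
# m h dom mon dow  command
0 3 * * 0  /Users/me/bin/backup-to-s3.sh --bucket backuptos3-backupbucket-1a2b3c4d5e6f --name documents --path /Users/me/Documents >> /Users/me/backup-to-s3.log 2>&1
```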
Unfortunately, automation may be impossible with external drives that are attached only from time to time. Then backups rely solely on your good habits.
Backup from Google Photos (and similar)
Apart from physical drives and local files, backing up photos and documents from services like Google Photos, Google Drive, etc., is not a bad idea either. Accidents happen, even with cloud services.
As a Google Photos user, I hoped to fetch photos with rclone. Unfortunately, it’s not possible to download photos in the original resolution this way, which makes it unfit for backup.
The only possibility I found is to use Google Takeout to get a data dump from Photos, Drive, Gmail, and other Google services, and then upload it to S3 with the backup script.
Disk operations vs power-saving mode
Starting a backup and leaving the laptop for the night may not bring the best results. I did that and was surprised to see that calculating the file checksums had not completed after 8 hours. But when I re-ran the script, it finished in an hour or so.
Even when not running on battery, computers tend to throttle background operations when idle.
On macOS, in Settings -> Battery -> Power Adapter, you can uncheck “Put hard disks to sleep when possible”. A more aggressive option is to disable sleep altogether.
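Another option is to keep the machine awake only for the duration of the backup with the built-in caffeinate utility; the invocation below is just an illustration:

```bash
# -i prevents idle sleep, -m prevents disk sleep, -s keeps the system awake on AC power
caffeinate -ims ./backup-to-s3.sh --bucket backuptos3-backupbucket-1a2b3c4d5e6f --name photos --path /Volumes/External/Photos
```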
Backblaze – an easy alternative
Please note: the links below are affiliate links.
While engineering your personal backup to S3 can be fun, there are simpler alternatives. One of them, and one of the best I found, is Backblaze.
For just $9/month (or $99/year), you can have automated, unlimited cloud data backup. You can try it with a 15-day Free Trial: Backblaze Personal Backup.
Conclusion
A 250-line Bash script is not a foolproof backup system. The backup is not incremental. It does not handle file modifications during the archiving process.
Nonetheless, this is just what I need to back up my external drive from time to time and sleep without worrying about losing data if the drive does not work the next time I attach it to the computer.
The whole script with usage instructions is available on GitHub: