You can read more about Consul on the HashiCorp website.
The aim of this write-up is to show how you can back up your Consul data to S3 if you do not have the Enterprise version, which normally ships with the Consul snapshot agent.
Here is a sample of my script:
#!/bin/bash
BAK_DEST=/tmp/consul/backup

# Polling associated AWS variables
REGION=$(/usr/bin/curl --silent http://169.254.169.254/latest/dynamic/instance-identity/document | jq -r .region)
INSTANCE_ID=$(/usr/bin/curl --silent http://169.254.169.254/latest/meta-data/instance-id)

# S3 bucket is published in AWS Parameter Store. You might decide to hardcode this.
S3_BUCKET=$(/usr/local/bin/aws ssm get-parameter --name "/keystore/$REGION/consul_s3_destination" --region $REGION | jq -r .Parameter.Value)

# Hostname is a tag on the EC2 instance, so the server can easily poll that information.
HOSTNAME=$(/usr/local/bin/aws ec2 describe-tags --region=$REGION --filter "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=Name" --output=text | cut -f5)

# Number of days to keep archives
KEEP_DAYS=2

# Script variables
BAK_DATE=$(date +%F)
BAK_DATETIME=$(date +%F-%H%M)
BAK_FOLDER=${BAK_DEST}
BAK_DB=${BAK_DEST}/${HOSTNAME}-${BAK_DATETIME}

# CREATE folder where the backup is to be placed
echo "Creating consul backup folder ${BAK_FOLDER}"
mkdir -p ${BAK_FOLDER}

# PERFORM Consul backup
echo "Creating archive file ${BAK_DB}.tar.gz. Please wait ..."
/usr/local/bin/consul snapshot save ${BAK_DB}.snap
tar czPf ${BAK_DB}.tar.gz ${BAK_DB}.snap

# Moving backups to AWS. This uses the AWS CLI to copy snapshots to S3.
echo 'Copying consul backups to S3'
/usr/local/bin/aws s3 cp ${BAK_DB}.snap s3://${S3_BUCKET}/dailybackup/${HOSTNAME}-${BAK_DATETIME}.snap

# DELETE FILES OLDER THAN 2 days
echo "Deleting backups older than ${KEEP_DAYS} days"
find ${BAK_FOLDER} -type f -mtime +${KEEP_DAYS} -name '*.gz' -execdir rm -- {} \;
find ${BAK_FOLDER} -type f -mtime +${KEEP_DAYS} -name '*.snap' -execdir rm -- {} \;
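To run the backup on a schedule, you could drop the script into cron. A minimal sketch, assuming the script is saved at /opt/scripts/consul_backup.sh (a hypothetical path) and you want a daily run at 02:00:

# Hypothetical crontab entry: run the Consul backup daily at 02:00 and append output to a log
0 2 * * * /opt/scripts/consul_backup.sh >> /var/log/consul_backup.log 2>&1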
A few items of interest: the script works on the premise that your Consul server has permission to read from AWS Parameter Store and to write to S3. The servers should also have the AWS CLI installed.
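Those permissions boil down to three IAM actions: ssm:GetParameter, s3:PutObject, and ec2:DescribeTags. A sketch of how you might attach an inline policy to the instance role (the role name and bucket name below are placeholders mirroring the script above):

# Attach a minimal inline policy to the instance role (names are assumptions)
aws iam put-role-policy \
    --role-name consul-server-role \
    --policy-name consul-backup \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {"Effect": "Allow", "Action": "ssm:GetParameter", "Resource": "arn:aws:ssm:*:*:parameter/keystore/*"},
        {"Effect": "Allow", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::your-consul-bucket/dailybackup/*"},
        {"Effect": "Allow", "Action": "ec2:DescribeTags", "Resource": "*"}
      ]
    }'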
I also created a local destination for the Consul backups on the host, /tmp/consul/backup.
The hostname is a tag in AWS, so the server must also be able to describe EC2 instances to pull its tag information.
The script uses the command "consul snapshot save" to take a snapshot and save it to the local destination, then uses the AWS CLI to copy the snapshot to a predefined S3 destination. The S3 destination is published in Parameter Store, which the EC2 instance grabs at run time.
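Publishing the bucket name is a one-time step. A hedged example of how that parameter might be written, assuming the region is us-east-1 and the bucket is named my-consul-backups (both placeholders):

# Publish the backup bucket name so instances can look it up (names are assumptions)
aws ssm put-parameter \
    --name "/keystore/us-east-1/consul_s3_destination" \
    --value "my-consul-backups" \
    --type String \
    --region us-east-1

Restoring is the inverse: pull a snapshot down from S3 and feed it to "consul snapshot restore". A sketch, where the object key is a made-up example following the naming scheme in the script:

# Pull a snapshot back from S3 and restore it into the cluster
aws s3 cp s3://my-consul-backups/dailybackup/consul01-2020-01-01-0200.snap /tmp/restore.snap
consul snapshot restore /tmp/restore.snap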