I see no option to export a backup of the settings for a domain.
Maybe I should save the results of public DNS queries with dig, but I wonder whether someone knows a better way.
Yes, there is a friendlier way. I suggest using the cli53 tool: https://github.com/barnybug/cli53
After you set it up, just run
cli53 export --full sciworth.com
and you get the zone exported in BIND format.
No additional software installation is needed; awscli alone is enough. Here is what I just wrote. It is simple and works like a charm.
#!/bin/bash -e
#
# Author: Peycho Dimitrov
#
# DESCRIPTION
#
# Create full backup of all hosted Route53 zones / domains in your account.
#
# REQUIREMENTS
#
# Available s3 bucket (where your json files will be saved)
# awscli (with configured credentials or an IAM role)
# gzip
# awk
#
####################################
# CONFIGURATION
region="us-east-1" # Your aws region
b_route53_tmp="/tmp/r53_backup" # Your temp directory
b_route53_bucket="s3://my-backups/route53" # Your backup folder in s3.
# END OF CONFIGURATION
# Do not edit below unless you know what you're doing! #
mkdir -p "$b_route53_tmp"
echo "$(date) Backup all Route53 zones and resource records."
p_aws="$(which aws) --region $region"
r53_zones=$($p_aws route53 list-hosted-zones --query '[HostedZones[*].[Id, Name]]' --output text | awk -F'/' '{print $3}')
if [ ! -z "$r53_zones" ]; then
while read route; do
zone=$(echo "$route" | awk '{print $1}')
domain=$(echo "$route" | awk '{print $2}')
echo "Processing $zone / $domain"
$p_aws route53 list-resource-record-sets --hosted-zone-id "$zone" --output json > "$b_route53_tmp"/$(date +%Y%m%d%H%M%S)-"$zone"-"$domain"backup.json
done <<<"$r53_zones"
echo "Archive json files."
gzip "$b_route53_tmp"/*backup.json
echo "Backup data to $b_route53_bucket/$(date +%Y)/$(date +%m)/$(date +%d)/"
$p_aws s3 cp "$b_route53_tmp"/ "$b_route53_bucket"/$(date +%Y)/$(date +%m)/$(date +%d)/ --exclude "*" --include "*.gz" --recursive
fi
echo "$(date) Done!"
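The JSON files saved above can also drive a restore: aws route53 change-resource-record-sets takes a ChangeBatch document, which is straightforward to build from the saved list-resource-record-sets output. A minimal Python sketch, assuming the backup files have exactly that shape (the function name and sample data are my own; the zone's apex NS and SOA records are skipped, since Route 53 manages those itself):

```python
import json

def to_change_batch(backup_json, zone_name):
    """Turn saved `list-resource-record-sets` output into a ChangeBatch for
    `aws route53 change-resource-record-sets --change-batch file://restore.json`.
    The zone apex NS/SOA records are skipped; Route 53 creates those itself."""
    data = json.loads(backup_json)
    changes = []
    for rrset in data["ResourceRecordSets"]:
        if rrset["Type"] in ("NS", "SOA") and rrset["Name"] == zone_name:
            continue
        changes.append({"Action": "UPSERT", "ResourceRecordSet": rrset})
    return {"Changes": changes}

# Hypothetical sample in the shape the backup script saves.
SAMPLE_BACKUP = json.dumps({"ResourceRecordSets": [
    {"Name": "example.com.", "Type": "SOA", "TTL": 900,
     "ResourceRecords": [{"Value": "ns-1.example.net. hostmaster.example.com. 1 7200 900 1209600 86400"}]},
    {"Name": "www.example.com.", "Type": "A", "TTL": 300,
     "ResourceRecords": [{"Value": "192.0.2.10"}]},
]})

if __name__ == "__main__":
    print(json.dumps(to_change_batch(SAMPLE_BACKUP, "example.com."), indent=2))
```

UPSERT is used rather than CREATE so the same batch can be replayed into a zone that already has some of the records.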
If you want to export to BIND format, you can use this script:
#!/bin/bash
zonename=$1
hostedzoneid=$(aws route53 list-hosted-zones | jq -r ".HostedZones[] | select(.Name == \"$zonename.\") | .Id" | cut -d'/' -f3)
aws route53 list-resource-record-sets --hosted-zone-id "$hostedzoneid" --output json | jq -jr '.ResourceRecordSets[] | "\(.Name) \t\(.TTL) \t\(.Type) \t\(.ResourceRecords[].Value)\n"'
Based on @sztibu's answer above, except that it shows usage and supports either zone_id or zone_name. This is my favorite because the output is standard old-school BIND format, so other tools can do stuff with it.
#!/bin/bash
# r53_export
usage() {
local cmd=$(basename "$0")
echo -e >&2 "\nUsage: $cmd {--id ZONE_ID|--domain ZONE_NAME}\n"
exit 1
}
while [[ $1 ]]; do
if [[ $1 == --id ]]; then shift; zone_id="$1"
elif [[ $1 == --domain ]]; then shift; zone_name="$1"
else usage
fi
shift
done
if [[ $zone_name ]]; then
zone_id=$(
aws route53 list-hosted-zones --output json \
| jq -r ".HostedZones[] | select(.Name == \"$zone_name.\") | .Id" \
| head -n1 \
| cut -d/ -f3
)
echo >&2 "+ Found zone id: '$zone_id'"
fi
[[ $zone_id ]] || usage
aws route53 list-resource-record-sets --hosted-zone-id "$zone_id" --output json \
| jq -jr '.ResourceRecordSets[] | "\(.Name) \t\(.TTL) \t\(.Type) \t\(.ResourceRecords[]?.Value)\n"'
AWS provides all the tools you need to do this natively; there is no need for non-AWS software or code. All you need is a Linux VM running as an EC2 instance with the AWS CLI installed and the proper IAM permissions configured. See the steps below, with links to AWS documentation, or take a look at the AWS zone migration guide (https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-migrating.html).
Instructions for Linux:
Install AWS CLI (https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
Configure an IAM role for ec2 (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#create-iam-role).
Attach policy "AmazonRoute53ReadOnlyAccess" to your role (https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#add-policies-console).
Run the aws route53 command to export your zone to a JSON file (https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-migrating.html#hosted-zones-migrating-create-file).
(Optional) Use Python to convert the JSON file to CSV (https://www.geeksforgeeks.org/convert-json-to-csv-in-python/).
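For the optional last step, here is a minimal stdlib-only sketch of the JSON-to-CSV conversion (the column choice, function name, and sample data are my own; record sets with multiple values get one CSV row per value, and alias records are emitted with their target as the value):

```python
import csv, io, json

def records_to_csv(backup_json):
    """Flatten `aws route53 list-resource-record-sets` JSON output into CSV.
    One row per record value; alias record sets carry no ResourceRecords,
    so their alias target DNS name is written as the value instead."""
    data = json.loads(backup_json)
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["Name", "Type", "TTL", "Value"])
    for rrset in data["ResourceRecordSets"]:
        ttl = rrset.get("TTL", "")  # alias records have no TTL of their own
        if "AliasTarget" in rrset:
            writer.writerow([rrset["Name"], rrset["Type"], ttl,
                             "ALIAS " + rrset["AliasTarget"]["DNSName"]])
        else:
            for rr in rrset.get("ResourceRecords", []):
                writer.writerow([rrset["Name"], rrset["Type"], ttl, rr["Value"]])
    return out.getvalue()

# Hypothetical sample matching the CLI's output shape.
SAMPLE = json.dumps({"ResourceRecordSets": [
    {"Name": "example.com.", "Type": "A", "TTL": 300,
     "ResourceRecords": [{"Value": "192.0.2.1"}, {"Value": "192.0.2.2"}]},
]})

if __name__ == "__main__":
    print(records_to_csv(SAMPLE))
```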
I wrote a quick, simple backup script:
#!/usr/bin/env bash
#set your vars
date=$(date +%j%m%Y)
backup_to="/change/this/to/your/destination/backup/dir"
backup_command_path="/usr/local/bin/cli53"
backup_command="cli53 export"
#check if cli53 exist
if ! [ -f "$backup_command_path" ]; then
echo -e " cli53 does not exist\n please install it from https://github.com/barnybug/cli53/releases/latest"
exit 1
fi
#get list of domains to backup
domains=$(cli53 l | awk 'NR>1 {print $2}' | sed 's/\.$//')
cd "$backup_to" || exit 1
for i in $domains; do
cli53 export "$i" > "$i"
done
#list backup directory
echo -e " the following zones were backed up:\n"
ls -l
To export a hosted zone in AWS Route 53, follow these steps (say you are using the example.com hosted zone):
Step 1: Installation: pip install route53-transfer
Step 2: Backup the zone to a CSV file:
route53-transfer dump example.com backup.csv
Use STDOUT instead of a file:
route53-transfer dump example.com -
Step 3: Restore a zone:
route53-transfer load example.com backup.csv
Use - to load from STDIN instead.
Migrate between accounts:
Use command line switches for overriding the access and secret keys:
route53-transfer --access-key-id=ACCOUNT1 --secret-key=SECRET dump example.com
route53-transfer --access-key-id=ACCOUNT2 --secret-key=SECRET load example.com
If you are working with private zones, use --private to distinguish private domains:
route53-transfer --private dump example.com example-private.csv
route53-transfer dump example.com example-public.csv
You can sign up for Cloudflare.com and add a free website.
Cloudflare will scan your DNS as part of its onboarding.
After the import (or perhaps during it), under "Advanced" below the DNS records there is an "Export DNS file" button.