I recently signed up with Rackspace to host some database servers. I've got two MySQL servers set up, and have a method to create backups (using the Percona XtraBackup and innobackupex tools). I've been trying to use duplicity to copy these backups to S3 and CloudFiles storage, and it is taking forever! I would expect the S3 backup to be somewhat slow, but the CloudFiles backup has taken 15 hours to back up 9GB. That's horrendously slow, and unacceptable for me.
I've looked through the duplicity source code, and by default it does not use the Rackspace ServiceNet to transfer to CloudFiles. I then looked at the source of the cloudfiles Python library that duplicity uses for the CF backend, and saw that there is an environment variable for enabling ServiceNet (RACKSPACE_SERVICENET). As long as that variable is set to something, the cloudfiles lib should connect to CloudFiles via the Rackspace ServiceNet, which SHOULD make for fast transfers. It does not.
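For what it's worth, here's the kind of sanity check I've been using to see which endpoint the library actually talks to. This is just a sketch assuming the python-cloudfiles API (get_connection, the servicenet kwarg, and the connection_args attribute); the credentials are placeholders, and my understanding is that the ServiceNet storage hostnames carry an "snet-" prefix:

    import os
    import cloudfiles

    # The lib reportedly only checks that the variable exists, not its value,
    # so any non-empty setting should do. Must be set before connecting.
    os.environ['RACKSPACE_SERVICENET'] = 'True'

    # username/api_key are placeholders
    conn = cloudfiles.get_connection('username', 'api_key', servicenet=True)

    # connection_args is (host, port, path, is_ssl) in the versions I've read;
    # if the host lacks the "snet-" prefix, we're still on the public network.
    host = conn.connection_args[0]
    print('storage host: %s' % host)
    print('using ServiceNet: %s' % host.startswith('snet-'))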
I'm not sure whether the speed limitation is due to some limitation of CloudFiles itself, or whether the cloudfiles Python library isn't actually connecting via the Rackspace ServiceNet.
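One way to separate the two possibilities is to time a raw upload through python-cloudfiles directly, bypassing duplicity entirely. Again a sketch under the same API assumptions (create_container, create_object, and Object.write, with placeholder credentials and container name):

    import os
    import time
    import cloudfiles

    os.environ['RACKSPACE_SERVICENET'] = 'True'
    conn = cloudfiles.get_connection('username', 'api_key', servicenet=True)

    # Create (or fetch) a throwaway container and push 100 MB of random data
    container = conn.create_container('speedtest')
    payload = os.urandom(100 * 1024 * 1024)

    start = time.time()
    obj = container.create_object('probe.bin')
    obj.write(payload)
    elapsed = time.time() - start

    print('%.1f MB/s' % (len(payload) / 1024.0 / 1024.0 / elapsed))

If that number is also dismal, the bottleneck is the library or the network path, not duplicity.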
Do any of y'all have any other suggestions for how I should/could go about getting these backups off the server and onto a third-party or remote backup service?