So, here is the script. It works, but it all depends on how long Scaleway takes to restore files from Glacier to Standard storage. I have noticed this can take longer than the advertised 6 hours. They claim to have had some technical problems the last couple of days, so I can only hope it will speed up a bit. However, time is not really a problem for me: if a verification takes more than a day, so be it.
- make sure Scaleway (or another S3 backend) is configured correctly in rclone, and that you also have a remote pointing at your local filesystem (a sample rclone.conf is sketched after the listing below):
```
$ rclone config
Current remotes:

Name                 Type
====                 ====
local                local
scaleway             s3
```
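For reference, the relevant part of `rclone.conf` could look something like this. This is a sketch only: the endpoint and region are the values for Scaleway's fr-par region, and the keys are of course placeholders.

```
[local]
type = local

[scaleway]
type = s3
provider = Scaleway
access_key_id = <your-access-key>
secret_access_key = <your-secret-key>
region = fr-par
endpoint = s3.fr-par.scw.cloud
```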
- configure the AWS CLI (see "How to use Object Storage with AWS-CLI" in the Scaleway docs). It is possible to rewrite the script so it does not use rclone at all, but I haven't spent time doing that; basically, it would need to translate the JSON output from aws into a format similar to what rclone produces. A rough sketch of that translation is shown below.
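As an illustration of what such a translation could look like, here is a rough, untested sketch that reshapes `aws s3api list-objects-v2` output into rclone's `lsjson` shape. It assumes `jq` is installed, and the field mapping is my guess, not something I have verified against Duplicati:

```bash
# hypothetical: list a bucket in an rclone-lsjson-like format (requires jq)
bucket="mybucket"   # placeholder name
aws s3api list-objects-v2 --bucket "$bucket" | jq '[ .Contents[]? |
  { Path: .Key, Name: .Key, Size: .Size, ModTime: .LastModified, IsDir: false } ]'
```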
- save the script below to a location that Duplicati has access to, and make it executable (see the one-liner after this item). I have named the script `rclone_alias.sh`. You might need to change some parameters.
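For example, assuming you keep the script next to the log files (adjust the path to your setup):

```bash
chmod +x /volume1/duplicati_temp/rclone_alias.sh
```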
```bash
#!/bin/bash
# rclone "alias" for duplicati: listing and deleting pass through to the real
# rclone, while copies to/from Scaleway Glacier go through the aws cli.

command=$1
source=$2
dest=$3

rclone_remote="scaleway"                        # rclone name of the s3 remote
rclone_local="local"                            # rclone name of the local remote
log_file="/volume1/duplicati_temp/rclone.log"
err_file="/volume1/duplicati_temp/rclone.err"
aws_exec="/volume1/@appstore/python3/bin/aws"
waitsec=60                                      # polling interval while restoring
keeprestore=2                                   # days to keep the restored copy in standard storage

# Poll head-object until the restore is no longer reported as ongoing.
check_restore_status() {
    status="2"
    while [[ "$status" != "0" ]]; do
        now=$(date +"%T")
        echo "restoring - $now" >> "$log_file"
        echo "$aws_exec s3api head-object --bucket $1 --key $2" >> "$log_file"
        output="$($aws_exec s3api head-object --bucket "$1" --key "$2")"
        echo "$output" >> "$log_file"
        # while the restore runs, head-object reports: Restore: ongoing-request="true"
        status="$(echo "$output" | grep -c true)"
        if [[ "$status" != "0" ]]; then sleep "$waitsec"; fi
    done
}

# Request a restore, wait for it to finish, then download the file.
restore_from_glacier() {
    bucket=${1%%/*}     # part before the first "/"
    file=${1#*/}        # rest of the key (may itself contain "/")
    echo "restoring from glacier $bucket $file" >> "$log_file"
    restore_request="{\"Days\":$keeprestore,\"GlacierJobParameters\":{\"Tier\":\"Standard\"}}"
    echo "$aws_exec s3api restore-object --bucket $bucket --key $file --restore-request $restore_request" >> "$log_file"
    $aws_exec s3api restore-object --bucket "$bucket" --key "$file" --restore-request "$restore_request" >> "$err_file" 2>&1
    check_restore_status "$bucket" "$file"
    echo "$aws_exec s3 cp s3://$bucket/$file $2" >> "$log_file"
    $aws_exec s3 cp "s3://$bucket/$file" "$2" >> "$err_file" 2>&1
}

# Upload, then re-copy the object onto itself so the GLACIER storage class is applied.
upload_to_glacier() {
    bucket=${2%%/*}
    file=${2#*/}
    echo "$aws_exec s3 cp $1 s3://$bucket/$file --storage-class GLACIER" >> "$log_file"
    $aws_exec s3 cp "$1" "s3://$bucket/$file" --storage-class GLACIER
    $aws_exec s3 cp "s3://$bucket/$file" "s3://$bucket/$file" --ignore-glacier-warnings --storage-class GLACIER
}

case $command in
    lsjson)
        # listing is delegated to the real rclone
        echo "rclone lsjson $source" >> "$log_file"
        rclone lsjson "$source";;
    copyto)
        arrSource=(${source/:/ })   # split "remote:path" into remote name and path
        arrDest=(${dest/:/ })
        case ${arrSource[0]} in
            $rclone_remote)
                echo "from glacier to local: ${arrSource[1]} --> ${arrDest[1]}" >> "$log_file"
                restore_from_glacier "${arrSource[1]}" "${arrDest[1]}";;
            $rclone_local)
                echo "from local to glacier: ${arrSource[1]} --> ${arrDest[1]}" >> "$log_file"
                upload_to_glacier "${arrSource[1]}" "${arrDest[1]}";;
        esac;;
    delete)
        echo "delete $source" >> "$log_file"
        rclone delete "$source";;
esac
```
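Before wiring the script into Duplicati, you can exercise it by hand with the same verbs rclone would be called with. The bucket and file names here are made up:

```bash
# list the remote (the one call that goes through the real rclone)
./rclone_alias.sh lsjson scaleway:mybucket
# upload to glacier
./rclone_alias.sh copyto local:/volume1/backup/file.zip.aes scaleway:mybucket/file.zip.aes
# restore from glacier and download (this is the slow one)
./rclone_alias.sh copyto scaleway:mybucket/file.zip.aes local:/volume1/restore/file.zip.aes
# delete
./rclone_alias.sh delete scaleway:mybucket/file.zip.aes
```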
- configure Duplicati with `rclone` as the backend, provide the same local and remote repositories as in the script, and make sure you point the rclone executable to the script above by setting the option `rclone-executable`. A sketch of the settings follows this list.
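In the Duplicati UI this comes down to something like the following. The bucket name is a placeholder, and the repository names must match `rclone_local` and `rclone_remote` in the script:

```
Storage type:      rclone
Local repository:  local
Remote repository: scaleway
Remote path:       mybucket
Advanced option:   rclone-executable = /volume1/duplicati_temp/rclone_alias.sh
```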
Backup should work perfectly; verification can take a while. The script will request the restore, check every minute whether the file has been restored yet, and download it once it has. It should go back to Glacier after `keeprestore` days. Some logging is written to the log file. The script only relies on rclone for listing and deleting files.
I’m confident others can make a way better script than the one above. It works for me, so I’m happy.
Happy backing up!
Wim