Backup Duplicati to scaleway.com

Thanks

Just to complete the topic here:

Use SSL: Yes
Storage type: S3 compatible
Server: custom URL (s3.fr-par.scw.cloud)
Bucket name: the name of your bucket
Region: custom region value (fr-par)
Storage class: Standard
Folder path: your folder path
AWS Access ID: (generated access key)
AWS Access Key: (generated secret key)
Client library to use: Amazon AWS SDK
Advanced option: s3-ext-authenticationregion = 'fr-par'

These are the parameters that work for me.
You can create subfolders and set the path in your backup settings.
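
For reference, the same settings expressed as a single Duplicati target URL would look something like this (bucket, folder, and credentials are placeholders; the option names are Duplicati's standard S3 backend options):

s3://my-bucket/my-folder?s3-server-name=s3.fr-par.scw.cloud&s3-location-constraint=fr-par&s3-storage-class=STANDARD&s3-client=aws&s3-ext-authenticationregion=fr-par&auth-username=ACCESS_KEY&auth-password=SECRET_KEY&use-ssl=true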

FYI - I have a solution that seems to work on Glacier. I’m now testing the setup.
Requirements:

  • a Linux / bash environment
  • the aws CLI configured in that environment, so you can access your buckets from bash (How to use Object Storage with AWS-CLI - Scaleway)
  • your backend configured in Duplicati as an Rclone backend, with the alternative executable option pointing to the bash script
  • rclone itself configured for the Glacier backend as well

The bash script will request a restore of the files in Glacier, check the status every xx minutes, and download them once restored. The downside is that a restore can take a while, probably several hours for larger files. I need to check how that will work on my production backups.

Regards,
Wim

Nov 19, 2020 11:16 AM: Downloaded and decrypted 256.12 MB in 00:12:24.0716900, 352.47 KB/s

==> 12 minutes to restore and download a 256 MB block from Glacier. Not that bad.

Is there some Windows solution for Glacier?

Should be possible. The script (which I will share once it works as it should) would need to be translated to a Windows script, or run in WSL. S3 command line clients are available for Windows, as is rclone, so it should work.
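
As an untested sketch of the WSL idea, assuming Duplicati's rclone-executable option can point at a .bat file and the script lives at /home/user/rclone_alias.sh inside WSL:

@echo off
rem Untested: forward Duplicati's rclone calls into WSL.
rem Windows paths in the arguments would still need converting
rem (e.g. with wslpath) before the bash script can use them.
wsl /home/user/rclone_alias.sh %*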

Last night's test failed. I still need to check what went wrong and whether I can fix it. It failed on a 1 GB file that took a lot of time to restore.


So, here is the script. It works - but it all depends on how long Scaleway takes to restore files from Glacier to standard storage. I have noticed this can take longer than the advertised 6 hours. They say they have had some technical problems over the last couple of days, so I can only hope it will speed up a bit. However, time is not really a problem for me. If a verification takes more than a day - so be it.

  • make sure scaleway (or another S3 backend) is configured correctly in rclone, and that you also have a backend for your local filesystem:
$ rclone config
Current remotes:

Name                 Type
====                 ====
local                local
scaleway             s3
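
For reference, those two remotes correspond to an rclone config file (typically ~/.config/rclone/rclone.conf) along these lines - endpoint and region as in the earlier post, keys are placeholders:

[local]
type = local

[scaleway]
type = s3
provider = Scaleway
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region = fr-par
endpoint = s3.fr-par.scw.cloud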

  • configure the aws CLI (How to use Object Storage with AWS-CLI - Scaleway). It is possible to rewrite the script so it does not use rclone at all, but I haven’t spent time doing that. Basically, it would need to translate the JSON output from aws into the format rclone produces; see the example below this bullet.
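
To illustrate that translation: Duplicati’s Rclone backend reads the output of rclone lsjson, which looks roughly like this (the file name and size here are made up):

[{"Path":"duplicati-b123.dblock.zip.aes","Name":"duplicati-b123.dblock.zip.aes","Size":268435456,"ModTime":"2020-11-19T10:03:12.000000000+01:00","IsDir":false}]

An aws-only rewrite would have to build this shape from the Key, Size, and LastModified fields that aws s3api list-objects-v2 returns.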
  • save the script below to a location that Duplicati has access to, and make it executable. I have named the script rclone_alias.sh. You might need to change some parameters (the paths, remote names, and aws executable at the top).
#!/bin/bash
# Rclone-compatible wrapper that Duplicati calls instead of the real rclone.
# It handles lsjson/copyto/delete; uploads go to Glacier, and downloads
# trigger a Glacier restore first and wait for it to finish.
command=$1
source=$2
dest=$3

# remote names as configured in rclone
rclone_remote="scaleway"
rclone_local="local"

log_file="/volume1/duplicati_temp/rclone.log"
err_file="/volume1/duplicati_temp/rclone.err"
aws_exec="/volume1/@appstore/python3/bin/aws"

waitsec=60      # seconds between restore-status checks
keeprestore=2   # days to keep a restored copy in standard storage

# Poll head-object until the Restore header no longer reports
# ongoing-request="true"; once the restore finishes, it switches to
# ongoing-request="false" plus an expiry date, and the loop exits.
check_restore_status() {

        status="2"
        while [[ "$status" != "0" ]]; do
                now=$(date +"%T")
                echo "restoring - $now" >> $log_file
                echo "$aws_exec s3api head-object --bucket $1 --key $2" >> $log_file
                output="$($aws_exec s3api head-object --bucket $1 --key $2)"
                echo "$output" >> $log_file
                # 1 while "true" (still restoring) appears in the output, 0 when done
                status="$(echo $output | grep true | wc -l)"
                if [[ "$status" == "1" ]]; then sleep $waitsec; fi
        done
}

# Restore one object from Glacier, wait until it is available, then download it.
# $1 = "bucket/key" on the remote, $2 = local destination path
restore_from_glacier() {
        bucket=${1%%/*}
        file=${1#*/}    # keep the full key, even if it contains slashes
        echo "restoring from glacier $bucket $file"  >> $log_file
        echo "$aws_exec s3api restore-object --bucket $bucket --key $file --restore-request '{\"Days\":$keeprestore,\"GlacierJobParameters\":{\"Tier\":\"Standard\"}}'" >> $log_file
        $aws_exec s3api restore-object --bucket $bucket --key $file --restore-request '{"Days":'$keeprestore',"GlacierJobParameters":{"Tier":"Standard"}}' >> $err_file
        check_restore_status $bucket $file
        echo "$aws_exec s3 cp s3://$bucket/$file $2" >> $log_file
        $aws_exec s3 cp s3://$bucket/$file $2 >> $err_file
}

# Upload a local file ($1) to the remote ($2 = "bucket/key") as GLACIER storage.
upload_to_glacier() {
        bucket=${2%%/*}
        file=${2#*/}    # keep the full key, even if it contains slashes
        echo "$aws_exec s3 cp $1 s3://$bucket/$file --storage-class GLACIER" >> $log_file
        $aws_exec s3 cp $1 s3://$bucket/$file --storage-class GLACIER
        # server-side copy onto itself to make sure the GLACIER class sticks
        $aws_exec s3 cp s3://$bucket/$file s3://$bucket/$file --ignore-glacier-warnings --storage-class GLACIER
}


# Dispatch on the rclone sub-command Duplicati invokes
case $command in
        lsjson)
                # file listing: pass straight through to the real rclone
                echo "rclone lsjson $source" >> $log_file
                rclone lsjson $source;;
        copyto)
                # split "remote:path" into remote name and path
                arrSource=(${source/:/ })
                arrDest=(${dest/:/ })
                source=${arrSource[0]}
                dest=${arrDest[0]}

                case ${arrSource[0]} in
                        $rclone_remote)
                                echo "from glacier to local: ${arrSource[1]} --> ${arrDest[1]}" >> $log_file
                                restore_from_glacier ${arrSource[1]} ${arrDest[1]};;
                        $rclone_local)
                                echo "from local to glacier: ${arrSource[1]} --> ${arrDest[1]}" >> $log_file
                                upload_to_glacier ${arrSource[1]} ${arrDest[1]};;
                esac;;
        delete)
                # deletes can also go straight through to the real rclone
                echo "delete $source" >> $log_file
                rclone delete $source;;
esac
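
After saving, make the script executable and give it a quick manual test before wiring it into Duplicati; the remote name is the one configured in rclone, the bucket is a placeholder:

$ chmod +x rclone_alias.sh
$ ./rclone_alias.sh lsjson scaleway:my-bucket

The lsjson command is passed straight through to the real rclone, so if this prints a JSON listing of the bucket, the wrapper is being found and executed correctly.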
  • configure Duplicati with Rclone as the storage type, provide the same local and remote repository names as in the script, and make sure you point Duplicati at the above script by setting the option rclone-executable; see the example settings below.
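
In the backup settings, that comes down to something like this (field names as Duplicati's Rclone backend shows them; the bucket is a placeholder):

Storage type: Rclone
Local repository: local
Remote repository: scaleway
Remote path: my-bucket
Advanced option: rclone-executable = /path/to/rclone_alias.sh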

Backups should work perfectly; verification can take a while. The script requests the restore, checks every minute whether the file has been restored, and downloads it once it has. The restored copy should go back to Glacier after keeprestore days. Some logging is written to the log file. The real rclone is only used to list (and delete) files.

I’m confident others can write a much better script than the one above. It works for me, so I’m happy.

Happy backing up!

Wim