Backup Duplicati to scaleway.com

@drwtsn32 In the Server field I have:
https://duplicati-test.s3.fr-par.scw.cloud

Try removing the “https://” part - only include the hostname portion.
Make sure Use SSL is checked.

@drwtsn32 Thanks. I tried that, and now I get:

Failed to connect: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'fr-par'

but I selected Paris; see here: https://nimb.ws/kL9cHY

Just leave the region at “default” and storage class at “default”. Does the bucket already exist?

Solved

Unfortunately, it is not solved. The backup started, and on Scaleway I can see that a file "duplicati-test" and a folder "duplicati-test" were created, but Duplicati shows an error. I tried to repair the database, but the repair fails:

Nov 5, 2020 11:29 PM: The operation Repair has failed with error: The backup storage destination is missing data files. You can either enable --rebuild-missing-dblock-files or run the purge command to remove these files. The following files are missing: duplicati-bed31e3041e964e5fad14ebc13db9be17.dblock.zip, duplicati-b191b14e62a8b49858f30edddfaa4ad8c.dblock.zip, duplicati-bd63343bf38664563a8333fe4135755d4.dblock.zip
{"ClassName":"Duplicati.Library.Interface.UserInformationException","Message":"The backup storage destination is missing data files. You can either enable --rebuild-missing-dblock-files or run the purge command to remove these files. The following files are missing: duplicati-bed31e3041e964e5fad14ebc13db9be17.dblock.zip, duplicati-b191b14e62a8b49858f30edddfaa4ad8c.dblock.zip, duplicati-bd63343bf38664563a8333fe4135755d4.dblock.zip","Data":null,"InnerException":null,"HelpURL":null,"StackTraceString":" at Duplicati.Library.Main.Operation.RepairHandler.RunRepairRemote()\r\n at Duplicati.Library.Main.Operation.RepairHandler.Run(IFilter filter)\r\n at Duplicati.Library.Main.Controller.RunAction[T](T result, String& paths, IFilter& filter, Action`1 method)\r\n at Duplicati.Library.Main.Controller.Repair(IFilter filter)\r\n at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":"8\nRunRepairRemote\nDuplicati.Library.Main, Version=2.0.5.1, Culture=neutral, PublicKeyToken=null\nDuplicati.Library.Main.Operation.RepairHandler\nVoid RunRepairRemote()","HResult":-2146233088,"Source":"Duplicati.Library.Main","WatsonBuckets":null}
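
For reference, the option named in the error can also be tried from the command line. This is only a sketch, not a tested invocation: the bucket, folder, and server name below are placeholders you would replace with your own values.

```shell
# try the repair again, letting Duplicati rebuild the missing dblock files
# (flag name taken from the error message; URL is a placeholder)
duplicati-cli repair "s3://my-bucket/folder?s3-server-name=s3.fr-par.scw.cloud" \
    --rebuild-missing-dblock-files=true
```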

What error did you see before you attempted a repair?

I’m surprised there is any data there, I thought this was your first attempt to use the back end storage?

Found the problem. This is the right configuration:


Here is the price list: Cloud, Compute, Storage and Network models and pricing - Scaleway
They also offer 75 GB of free space.

Is your data in Standard or in Glacier? If in Glacier, does verification work? As Stack (transip.be) will stop its free 1 TB storage, I might use Scaleway for storing my data.

It is Standard storage.

You do realize this is 5x as expensive as Glacier?

The price is €0.01/GB/month for Standard and €0.002/GB/month for Glacier (restores free), if I read correctly. It is cheaper than Microsoft, Google, or Amazon.
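
The 5x figure above follows directly from the two listed prices:

```shell
# ratio of Standard (€0.01/GB/month) to Glacier (€0.002/GB/month)
awk 'BEGIN { print 0.01 / 0.002 }'
# → 5
```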

Hi Dalilaj

I am very interested in knowing how you set this up.
Can you please tell me what the corresponding values in Scaleway are:

  • Bucket Name = ?
  • AWS Access ID = ?
  • AWS Access Key = ?

Where are these in the Scaleway interface?

Thanks in advance for your answer

Hi,
The bucket name is entered in the form:
"my-bucket-name" (without quotes)
The AWS Access ID and AWS Access Key are found inside the console:

  1. Select the project

  2. After selecting the project, select the credentials

Thanks

Just to complete the topic here:

Use SSL: Yes
Storage type: S3 compatible
Server: custom URL (s3.fr-par.scw.cloud)
Bucket name: The name of your bucket
Region: Custom region value (fr-par)
Storage class: Standard
Folder path: folder path
AWS Access ID: (generated access key)
AWS Access Key: (generated secret key)
Client library to use: Amazon AWS SDK
Advanced option [s3-ext-authenticationregion = 'fr-par']

These are the parameters that work for me.
You can create subfolders and set the path in your backup settings.
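
To rule out credential or endpoint problems before involving Duplicati, the generated keys can also be checked with the aws CLI (assuming it is installed and configured; the profile name "scaleway" is just an example):

```shell
# list your buckets through the Scaleway endpoint with the generated keys
aws s3 ls \
    --profile scaleway \
    --endpoint-url https://s3.fr-par.scw.cloud \
    --region fr-par
```

If this lists your bucket, the keys, endpoint, and region in the Duplicati settings above should work as well.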

FYI - I have a solution that seems to be working with Glacier. I'm now testing the setup.
Requirements:

  • a Linux / bash environment
  • configure aws in the bash environment so you can access your buckets from bash (How to use Object Storage with AWS-CLI - Scaleway)
  • configure your backend in Duplicati as an rclone backend, with the alternative executable set to the bash script
  • also configure rclone for the Glacier backend

The bash script will attempt to restore the files in Glacier, check the status every xx minutes, and download them once restored. The downside is that a restore can take a while, probably several hours for larger files. I need to check how that will work on my production backups.

Regards,
Wim

Nov 19, 2020 11:16 AM: Downloaded and decrypted 256.12 MB in 00:12:24.0716900, 352.47 KB/s

==> 12 minutes to restore and download a 256 MB block from Glacier. Not that bad.

Is there a Windows solution for Glacier?

Should be possible. The script (I will share it once it works as it should) will need to be translated into a Windows script, or run in WSL. S3 command-line clients are available for Windows, as well as rclone, so it should work.

Last night's test failed. I still need to check what went wrong and whether I can fix it. It failed on a 1 GB file that took a long time to restore.

So, here is the script. It works, but it all depends on how long Scaleway takes to restore files from Glacier to Standard storage. I have noticed this can take longer than the advertised 6 hours. They claim they have had some technical problems over the last couple of days, so I can only hope it will speed up a bit. However, time is not really a problem for me. If a verification takes more than a day, so be it.

  • make sure Scaleway (or another S3 backend) is configured correctly in rclone, and that you also have a backend for your local filesystem.
$ rclone config
Current remotes:

Name                 Type
====                 ====
local                local
scaleway             s3

  • configure the aws CLI (How to use Object Storage with AWS-CLI - Scaleway). It is possible to rewrite the script so it does not use rclone, but I haven't spent time doing that. Basically, it would need to translate the JSON output from aws to a format similar to rclone's.
  • save the script below to a location that Duplicati has access to, and make it executable. I have named the script rclone_alias.sh. You might need to change some parameters.
#!/bin/bash
command=$1
source=$2;
dest=$3;

rclone_remote="scaleway"
rclone_local="local"

log_file="/volume1/duplicati_temp/rclone.log"
err_file="/volume1/duplicati_temp/rclone.err"
aws_exec="/volume1/@appstore/python3/bin/aws"

waitsec=60
keeprestore=2

check_restore_status() {

        status="2";
        while [[ "$status" != "0" ]]; do
                now=$(date +"%T")
                echo "restoring - $now" >> $log_file
                echo "$aws_exec s3api head-object --bucket $1 --key $2" >> $log_file
                output="$($aws_exec s3api head-object --bucket $1 --key $2)"
                echo "$output" >> $log_file
                status="$(echo $output | grep true | wc -l)"
                # use the configured poll interval rather than a hardcoded value
                if [[ "$status" == "1" ]]; then sleep $waitsec; fi
        done
}

restore_from_glacier() {
        arrIn=($(echo $1 | sed -e 's/\// /g'));
        bucket=${arrIn[0]};
        file=${arrIn[1]};
        echo "restoring from glacier $bucket $file"  >> $log_file
        echo "$aws_exec s3api restore-object --bucket $bucket --key $file --restore-request '{"Days":'$keeprestore',"GlacierJobParameters":{"Tier":"Standard"}}'" >> $log_file
        $aws_exec s3api restore-object --bucket $bucket --key $file --restore-request '{"Days":'$keeprestore',"GlacierJobParameters":{"Tier":"Standard"}}' >> $err_file
        check_restore_status $bucket $file;
        echo "$aws_exec s3 cp s3://$bucket/$file $2" >> $log_file
        $aws_exec s3 cp s3://$bucket/$file $2 >> $err_file

}

upload_to_glacier() {
        arrIn=($(echo $2 | sed -e 's/\// /g'));
        bucket=${arrIn[0]};
        file=${arrIn[1]};
        echo "$aws_exec s3 cp $1 s3://$bucket/$file --storage-class GLACIER" >> $log_file
        $aws_exec s3 cp $1 s3://$bucket/$file --storage-class GLACIER
        $aws_exec s3 cp s3://$bucket/$file s3://$bucket/$file --ignore-glacier-warnings --storage-class GLACIER
}


case $command in
        lsjson)
                echo "rclone lsjson $source" >> $log_file;
                rclone lsjson $source;;
        copyto)
                arrSource=(${source/:/ });
                arrDest=(${dest/:/ });
                source=${arrSource[0]};
                dest=${arrDest[0]};

                case ${arrSource[0]} in
                        $rclone_remote)
                                echo "from glacier to local: ${arrSource[1]} --> ${arrDest[1]}" >> $log_file;
                                restore_from_glacier ${arrSource[1]} ${arrDest[1]};;
                        $rclone_local)
                                echo "from local to glacier: ${arrSource[1]} --> ${arrDest[1]}" >> $log_file;
                                upload_to_glacier ${arrSource[1]} ${arrDest[1]};;
                esac;;
        delete)
                echo "delete $source" >> $log_file
                rclone delete $source;;
esac
  • configure Duplicati with rclone as the backend, provide the same local and remote names as in the script, and make sure you point the rclone executable to the above script by setting the option rclone-executable

Backup should work perfectly; verification can take a while. The script will issue the restore, check every minute whether the file has been restored, and download it once it has. The object should go back to Glacier after keeprestore days. Some logging is written to the log file. It only relies on rclone to provide the list of files.
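
The core of the status check in the script is just inspecting the Restore header that head-object returns: it reads ongoing-request="true" while the restore is running and flips to "false" when it has completed. A small standalone sketch of that decision (with hypothetical sample responses, no AWS calls) looks like this:

```shell
#!/bin/bash
# Decide whether a Glacier restore has completed, based on the Restore
# header returned by "aws s3api head-object".
restore_done() {
    # mirrors the script's own heuristic (grep for the flag text); matches
    # both the raw header form and the JSON-escaped form from the aws CLI
    printf '%s' "$1" | grep -q 'false'
}

# hypothetical sample responses
in_progress='Restore: ongoing-request="true"'
finished='Restore: ongoing-request="false", expiry-date="Sat, 21 Nov 2020 00:00:00 GMT"'

restore_done "$in_progress" || echo "still restoring"       # → still restoring
restore_done "$finished"    && echo "ready to download"     # → ready to download
```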

I'm confident others can make a much better script than the above. It works for me, so I'm happy.

Happy backing up!

Wim