Duplicati restore is very slow

Duplicati backup works well.

I run two parallel backups, so I back up my data to two locations: Google Drive and a local disk.

The local backup target is a very fast 4 TB SATA disk, connected to an internal SATA connector, so no USB or similar. Every three months I swap the disk; I use two disks, one connected and one in a safe.

The data is about 1.5 TB.

Okay, but that was not the question, just the prologue.

Now I am testing a restore of the data. Not an emergency, just a test. I connected the backup disk to a Linux computer and started the restore. It should be a fast restore, because it goes from one internal disk to another internal disk.

But the restore is very slow. Now, after 20 hours: "Downloading file… checking remote backup… 1530286 files restored (1.47 TB)… checking existing target files… Scanning local files for needed data".

Now, after 20 hours, the folders have been restored. So it looks like all folders are created, but they are still empty.

Okay, I understand a restore from Google Drive would be very slow. That is understandable. But this is a pure, real local disk. The remote volume size is 50 MB.

The backup is scheduled and its speed is not interesting, but the restore speed is. So, is there any way to speed up restoring from the backup? If I create a new backup, what can I do? A remote volume size of 500 MB? 1 GB? Or some other trick? Right now I am restoring data: 20 hours so far, and by the time it is ready, many, many days. So the question is: what is a good way to make the local backup (remember the parallel backup to Google Drive), and what can I do to make the restore reasonably fast?

Edit: "Scanning local files for needed data", now about 27 hours spent. Tomorrow I will check the status again. This is very interesting, because I am restoring from one internal SATA disk to another internal SATA disk. For me this is OK, but imagine the case "I must restore 15 TB": if 1.5 TB takes 72 hours, 15 TB takes 720 hours. Quite slow, this Duplicati… :frowning:


I have the same experience…

I have a backup on a local network drive (gigabit network), stored on an internal SATA disk in that computer. The backup is just 42 gigabytes.

I deleted 4 or 5 files locally in a Java project, so I want to restore the complete project folder from the last Duplicati backup, from today at 09:20 (so I have a current backup). The folder to restore is about 1.79 GB with 8000 files.

I restore to a new folder. 99.99% of the files still exist unchanged at the original path.

After 6.5 hours it is still at "scan local blocks", and all I have in the target folder is the folder structure.

Last time I did that I had the result in the morning. So it is slow, but it works.


Nobody’s saying they have an SSD, so random access time on a hard drive may be one speed limiter.

no-local-blocks

Duplicati will attempt to use data from source files to minimize the amount of downloaded data. Use this option to skip this optimization and only use remote data.

The assumption is that your local disk is faster than your remote (which might also have egress fees).
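For example, a restore that skips the local-block scan could look roughly like this (an untested sketch; the paths and passphrase are placeholders, only --no-local-blocks is the point here):

duplicati-cli restore "/mnt/backupdisk/backupfolder" --passphrase="..." --restore-path="/mnt/restoretarget" --no-local-blocks=true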

Restore also fixes up small problems (such as a bad version) block by block instead of with the whole file.

The backup process explained in the manual tries to describe this in non-technical language involving bricks in different shapes and colors, stored in small bags. I think the restore equivalent is to grab one bag at a time, and distribute any brick that a file needs, wherever the file is (thus doing random writes).

The development focus has been on making things work. Backups got a design change to work faster. Restore hasn’t had that, due to too many priorities and too few volunteers, so it’s still rather sequential.

I’m talking mainly about the multi-terabyte original post. The small 42 gigabyte issue might be different.

Features

Incremental backups
Duplicati performs a full backup initially. Afterwards, Duplicati updates the initial backup by adding the changed data only. That means, if only tiny parts of a huge file have changed, only those tiny parts are added to the backup. This saves time and space and the backup size usually grows slowly.

So if you meant incremental backups work well, you might have forgotten that the initial backup took awhile. The description sounds like a full restore, but I’m not sure. Another slow-down to even starting a restore is if a database recreate is needed due to system loss. That takes awhile, and a damaged backup makes it worse.

Restoring files if your Duplicati installation is lost is worth testing with at least a small restore sometime, however a full database recreate (maybe save the old database in case the new one has issues) is also useful.

Choosing sizes in Duplicati has some advice. One frequent performance limiter with backups over 100 GB is that the block size is only 100 KB by default, which slows things down due to too many blocks to handle.

A larger default may happen someday to allow more graceful handling of at least 1 TB source backups. The loss from that is that block-level deduplication saves less space, but some people care more about speed. Going overboard (beyond simple blocksize scaling up to stay at around a million blocks) could be tried.
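As rough arithmetic (my numbers, not a benchmark): 1.5 TB at the default 100 KB blocksize is about 15 million blocks to track, while the same 1.5 TB at 2 MB is about 750,000 blocks and at 5 MB about 300,000, which is the kind of reduction that keeps the block tracking manageable.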

blocksize

The block size determines how files are fragmented. Choosing a large value will cause a larger overhead on file changes, choosing a small value will cause a large overhead on storage of file lists. Note that the value cannot be changed after remote files are created.

  1. My backup size is about 1.5 TB. Block size is 50 MB.
  2. Restore command is duplicati-cli restore "/mnt/backupharddisk" --passphrase="passphrase" --restore-path="/mnt/raidhanuri/ELKE_works" --overwrite=true
  3. Version: 2.0.7.1_beta_2023-05-25. The system is Debian; I built this computer on Monday and Debian is updated. Duplicati is also updated (sudo apt-get update, sudo apt-get upgrade duplicati), so this Duplicati is as new an upgrade as possible.
  4. The hard disk containing the backup is a Seagate 4 TB IronWolf Pro, so maybe not slow. Of course it is not an SSD, but in my opinion not the slowest disk.
  5. Now, over 3 days: still only the folder tree is restored. Zero bytes of data restored.
  6. Duplicati is still running, without a hitch.
  7. Local restore: the local backup disk is mounted at /mnt/duplicatilokaali1, the target is /mnt/raidhanuri.

Now three days have passed. The folder tree is restored, zero bytes of data. For now I am not worried, because 1.5 TB is a very big amount for Duplicati. After one week I will think about what I must do.

root@hanuri:/home/hanuristi1# ps aux | grep duplicati
root        1021  0.0  0.1  19500 10776 ?        Ss   maalis11   0:34 /sbin/mount.ntfs-3g /dev/sdd2 /mnt/duplicatilokaali1 -o rw
root        1036  0.0  0.0 145756  1568 pts/1    Sl+  maalis11   0:00 Duplicati.CommandLine /usr/lib/duplicati/Duplicati.CommandLine.exe restore /mnt/duplicatilokaali1/varmuuskopio20230819 --restore-path=/mnt/raidhanuri --passphrase=*****
root        1040  101  2.0 1806604 164252 pts/1  Rl+  maalis11 2802:14 /usr/bin/mono-sgen /usr/lib/duplicati/Duplicati.CommandLine.exe restore /mnt/duplicatilokaali1/varmuuskopio20230819 --restore-path=/mnt/raidhanuri --passphrase=*****
root        4300  0.0  0.0   6352  2200 pts/0    S+   12:41   0:00 grep duplicati
 
#TOP: --------------------------
 
root@hanuri:/home/hanuristi1# top
top - 12:43:05 up 1 day, 22:10,  1 user,  load average: 1,04, 1,06, 1,01
Tasks: 194 total,   2 running, 192 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0,0 us, 25,0 sy,  0,0 ni, 75,0 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
MiB Mem :   7936,4 total,   2151,5 free,   1063,6 used,   5029,2 buff/cache
MiB Swap:    976,0 total,    751,5 free,    224,5 used.   6872,9 avail Mem
 
    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
   1040 root      20   0 1806604 164252   6852 R 100,0   2,0     46,43 mono-sgen
   4303 root      20   0   11600   5148   3244 R   6,7   0,1   0:00.01 top
      1 root      20   0  168256   8300   4744 S   0,0   0,1   0:01.93 systemd
      2 root      20   0       0      0      0 S   0,0   0,0   0:00.05 kthreadd
      3 root       0 -20       0      0      0 I   0,0   0,0   0:00.00 rcu_gp
      4 root       0 -20       0      0      0 I   0,0   0,0   0:00.00 rcu_par_gp
      5 root       0 -20       0      0      0 I   0,0   0,0   0:00.00 slub_flushwq
      6 root       0 -20       0      0      0 I   0,0   0,0   0:00.00 netns
      8 root       0 -20       0      0      0 I   0,0   0,0   0:00.00 kworker/0:0H-events_highpri
     10 root       0 -20       0      0      0 I   0,0   0,0   0:00.00 mm_percpu_wq
     11 root      20   0       0      0      0 I   0,0   0,0   0:00.00 rcu_tasks_kthread
     12 root      20   0       0      0      0 I   0,0   0,0   0:00.00 rcu_tasks_rude_kthread
     13 root      20   0       0      0      0 I   0,0   0,0   0:00.00 rcu_tasks_trace_kthread
     14 root      20   0       0      0      0 S   0,0   0,0   0:01.60 ksoftirqd/0
     15 root      20   0       0      0      0 I   0,0   0,0   0:29.37 rcu_preempt
     16 root      rt   0       0      0      0 S   0,0   0,0   0:00.32 migration/0
     18 root      20   0       0      0      0 S   0,0   0,0   0:00.00 cpuhp/0
 
root@hanuri:/home/hanuristi1# iostat
Linux 6.1.0-18-amd64 (hanuri)   13.03.2024      _x86_64_        (8 CPU)
 
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          12,10    0,00    1,85    5,29    0,00   80,76
 
Device             tps    kB_read/s    kB_wrtn/s    kB_dscd/s    kB_read    kB_wrtn    kB_dscd
md0               1,28         1,16       198,59         0,00     192945   33118896          0
sda              69,28     33907,63       110,60         0,00 5654911124   18444775          0
sdb             322,81       323,25      2115,15         0,00   53909499  352752808          0
sdc              58,15        12,07     34005,95         0,00    2013060 5671309275          0
sdd               1,19       124,85         0,39         0,00   20821190      65552          0
sde              59,37     33907,63       110,57         0,00 5654912098   18439659          0
 
 

Thank you @ts678 for your reply. And yes, in my case you are correct. There is no SSD involved. All 3 locations are on normal SATA HDDs, all encrypted with VeraCrypt.

I tried a copy of all backup files from the "backup hosting machine" to my local machine with robocopy. It took a little over 7 minutes to copy those 42-43 GB (just to get a feeling for the I/O and network performance).

As expected, the restore worked and was finished the next morning. I don’t know exactly how long it took; something between 9 and 15 hours.

I will retry it from the command line with "no local blocks" (just to see if it is faster and to know whether I should use this next time).


One additional question: I started the restore around 12:00 on Tuesday. The restore has now run for 50 hours and still only the folder structure is restored. Because of this long time: when should I stop it and think about some other restore method? I understand that 1.5 TB is a lot of data, but 50 hours is also a lot of time.

So… now 50 hours. 24 hours more, so tomorrow? Or next week? By next Tuesday it will be many hundreds of hours. This is only interesting because the restore takes so much time. Too much, in my opinion.

You’re likely mixing up two values. Please read Choosing sizes in Duplicati.
50 MB is the default "Remote volume size" a.k.a. dblock-size, described as

--dblock-size = 50mb
This option can change the maximum size of dblock files. Changing the size can be useful if the backend has a limit on the size of each individual file.

The one I’m trying to get you to raise to boost the performance is blocksize

--blocksize = 100kb
The block size determines how files are fragmented. Choosing a large value will cause a larger overhead on file changes, choosing a small value will cause a large overhead on storage of file lists. Note that the value cannot be changed after remote files are created.

So the backup size is 15 times beyond the recommended limit of 100 GB for the default blocksize. Performance can drop extremely fast beyond a certain size. I’m suggesting a generous blocksize, because you’re seeking speed, and I assume you’re prioritizing that over smallness. It’s a tradeoff.
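To make the two knobs concrete, here is a rough sketch of a new command-line backup job (untested; the paths, passphrase and the 5 MB value are placeholders, and in the GUI the same options are set on screen 5):

duplicati-cli backup "file:///mnt/backupdisk/backupfolder" "/source/data" --passphrase="..." --dblock-size=50MB --blocksize=5MB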

The best speed and the worst smallness is probably a direct file-to-file copy, with a complete set of everything for any backup version. Duplicati goes to great lengths to not be so very space wasteful.

It gives a feel, but maybe not a relevant feel. I can’t talk about VeraCrypt internals, but a benchmark chart from a review of a similar hard drive makes the point: look at the 4K lines to see how hugely random access slows things down, compared to the high numbers posted by sequential access. That’s partly seek time, partly rotation.

There’s a performance issue being reported here, without any monitoring of things like drive activity, so I’m assuming the bottleneck is the random access on the drive, but it’s not a certainty without the stats.

Another potential bottleneck is the single-threaded SQLite CPU processing when facing lots of blocks. Peeking at About → Show log → Live → Profiling can find SQL queries which have gotten overloaded.

SQLite queries dealing with too much data also overflow to temporary files. Raising SQLite cache can reduce such a need, but a simpler way to help is to reduce the number of blocks by raising blocksize.

Use a big enough (maybe overly generous) blocksize as suggested, but it requires a fresh backup.

EDIT:

How to Monitor Disk IO in a Linux System is the kind of measurement I think might explain slowness.

In this tutorial, we’ll discuss how to monitor disk I/O activity in the Linux system. It’s an important task to perform while maintaining a system. Essentially, getting data back from the disk costs time. As a result, the disk I/O subsystem is considered the slowest part and can slow down the whole system.

but there’s a lot more. Windows has some nice tools. I’m not sure what Debian comes with, but more could be installed, if you want to confirm or refute my guess that the hard drive is part of the problem.
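For example, something like this (my suggestion, assuming the sysstat and iotop packages are installed on the Debian box):

iostat -dx 5     # extended per-device stats every 5 seconds; watch %util and the await columns on the backup and target disks
sudo iotop -o    # per-process I/O, showing only processes currently doing I/O

If the disk holding the restore target shows high utilization but low throughput, that points at random access as the limiter.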

OK, so I must ask more: "how to".

I can make a fresh backup. The reason is that I have two goals: 1) copy files from the old NAS to the new one, and 2) test whether it is possible to restore the backup. Goal 1 I can do in another way, but at the same time I want to test whether restore is possible or not.

The backup machine is a Windows machine. It makes the backup using the Windows GUI-based Duplicati.

OK. If I make a fresh, new backup, how? I check the settings and there is only "Remote volume size", 50 MB. The block size setting is not available. So in the Windows GUI-based Duplicati I can select only source, target, remote volume size and schedule. There are no other settings.

If I want to make "advanced backup settings", must I forget this Windows machine and use some Linux machine? Or is there a special way to adjust it in the GUI? Remember: it does not seem possible to adjust this "block size".

So:
1. If I want to make the most excellent, best-ever settings:

  • a list of the settings
  • whether it is possible in the Windows GUI or not
  • how to do it

2. What is an excellent restore command on Linux? (The backup machine is Windows, but the restore is on Linux, Debian.)

That’s one option, but another is to delete the destination files manually, use the Database screen Delete button, and use Editing an existing backup. This avoids some typing if the job is complex.

Look at the Advanced options manual page to see more. I gave a link to blocksize on that page.
Go to Options screen 5 and open the dropdown which is organized into various different sections.

(screenshot: the Advanced options dropdown on Options screen 5, Windows GUI)

I don’t understand this at all. Both Linux and Windows have a GUI, and both have a blocksize option.

Above screenshot is Windows. Linux should have similar. Options → Advanced options → pick → set.
The new value should probably be at least 2 MB per the general scaling rule. You can set it much bigger if you’re trying to raise performance at the expense of some space, but it can’t exceed the Remote volume size. Maybe you could try 5 MB, maybe 10. Speed is very system dependent, so it’s always a bit of a guess.

If that’s been covered enough, what’s this new twist? A NAS was not mentioned previously, and it sounds almost like you want to back it up using Duplicati on Windows over SMB, or something indirect like that.

Duplicati generally should be on the system containing the Source data. Remote might work, but slowly.

Ok. I understand.

Two cases:
Local backup. Because it is very fast, the volume size can be big. The blocksize can also be big; instead of 100 KB it is better to use 100 MB:
--blocksize=100MB
--dblock-size=100MB

Backup to the cloud, e.g. Google Drive:
blocksize 50 MB and dblock-size 50 MB. Maybe. First, my network is very stable, it is 1 Gbit/s and all of that bandwidth is mine. This 50 MB is maybe OK, because a 50 MB file on a "cloud disk" is big but not too big, and the transfer is very fast.

Is there any reason why the same values would not be suitable for both? Or is there no reason to use different values for the two backups?

You might see some very short files mixed in, because each source file gets a metadata block of a few hundred bytes. This means there’s no longer room to add a block of maximum size in the same dblock, resulting in the short dblock being uploaded because it is “full” from a block point of view. Other than it looking strange like that (compared to having dblock size allow multiple full size blocks), it should work.

One positive factor is that I think (I haven’t tested though) that compact will have an easier time, as the need to compact partially filled dblock files won’t exist, as there’s no such thing. Either full, or empty, so empty just gets deleted, which is faster than having to download to repackage blocks into new dblocks.

Compacting files at the backend

When a predefined percentage of a volume is used by obsolete backups, the volume is downloaded, old blocks are removed and blocks that are still in use are recompressed and re-encrypted.

Your setting will (I think) never have to do that. If you have a lot of files, it might drive up the tiny-dblock count, which might exceed Google Drive files-per-folder limit. Web search gives varying values for that.

Your gigabit network doesn’t mean Google Drive will run at that rate, especially for individual transfers. There’s not much you can do about that though, nor about their daily upload limit or any other throttling, or mere limits due to other users on the same equipment. Typically, parallel transfers can go faster: each one runs into some limit, but multiple together may move more data per unit of time. Backup does parallel transfers. Restore doesn’t.

As a side note, rclone specializes in parallel transfers, so if restore ever gets really time critical (e.g. for business downtime reasons in a disaster), you could perhaps rclone from Google Drive to a temporary local hard drive or SSD, and run the Duplicati restore from there. Or maybe a 1 Gbit link will be enough. The link just needs to outrun whatever its final destination is, but the larger blocksize should help the drive.
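A minimal sketch of that idea, assuming an rclone remote already configured under the name gdrive: and placeholder paths:

rclone copy gdrive:duplicati-backup /mnt/tempdisk/duplicati-backup --transfers=8 --progress

Then point the Duplicati restore at /mnt/tempdisk/duplicati-backup as if it were the local backup disk.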

In the GUI, I’d suggest setting the dblock size via Remote volume size which I think is the typical way, and how some future support person might ask. If you use the advanced option, make sure it works too, and if you set both but to different non-default values, I’m not sure which wins when the values conflict.

Sorry, this answer is very good, but I am not sure I understand it.

The important thing is that my internet connection does not limit the transfer. It is fast and stable. Whether the cloud service can receive at that speed is a different matter. Likewise, whether the cloud service has a daily transfer limit is a different matter. As well as the fact that there is a file size limit in the cloud service. That is also a different matter.

File size is important information in cloud services. But: I would assume that a 50 megabyte file is small after all. Not common, but not a rarity either.

I didn’t quite understand everything in the answer.

--dblock-size = remote volume size. This I can select from the GUI. The default is 50 MB.

--block-size is an advanced option. The default is 100 KB.

OK. The remote volume size is the size of the visible files in the target folder. For a CLOUD backup it is clever to use a small size; e.g. the 50 MB default is OK, in my opinion. A 50 MB file probably fits the cloud service limits, and the file count is not too big. For a LOCAL BACKUP there is no file size limit, so maybe 100 MB or more works.

Then: block size. This is not so clear. 100 KB is very little. This is what I cannot understand.

As Duplicati makes backups with blocks, aka “file chunks”, one option is to choose what size a “chunk” should be. The chunk size is set via the advanced option --block-size and is set to 100kb by default. If a file is smaller than the chunk size, or the size is not evenly divisible by the block size, it will generate a block that is smaller than the chunk size. Due to the way blocks are referenced (by hashes), it is not possible to change the chunk size after the first backup has been made. Duplicati will abort the operation with an error if you attempt to change the chunk size on an existing backup.

This is really not clear. "If a file is smaller than the chunk size": today a "little file" is 1 MB or more. So, in a nutshell, what is a good --block-size? As I wrote earlier, maybe 100 MB is OK?

(By the way, I started the restore of the 1.5 TB backup, with block and volume sizes at default. Still only the folder tree is visible, still no data. Next Tuesday the restore will have run one week. I really hope it will be ready. 1.5 TB. How many weeks does a restore take… I cannot love Duplicati, and the only reason is this. Not even God can know how long the restore will take.)

So, now I will make a new backup job. --block-size: 100 KB? 1 MB? 100 MB? 1 GB? 1 TB? I just need a direct answer, not a philosophical one :).

Many of these different matters (e.g. transfer rate) affect decisions on performance tuning, although there is no formula for it and it’s all educated guesswork based on what is known.

I disagree. Smaller files exist on many systems. I often have smaller .txt, .pdf, etc. Checking my system altogether, I found over 1 million files below 1 MB, and most of them aren’t things that I leave out of my backup.

If you’re saying the default is too small for some of today’s expanding backups, I agree with that, however I’m not a maintainer. Changing its default also makes it a bit unclear what’s really used.

What else is not clear? I made a test backup with a 5 byte file recently. Its block was 5 bytes not 100 KB, because its data ran out. A smaller block is typically left at the end of even very big files because these are fixed size blocks unless they can’t be, such as at the end of a file (generally).
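As a worked example (my numbers): with the default 100 KB blocksize, a 250 KB file becomes two full 100 KB blocks plus one 50 KB block at the end, and it also gets a small metadata block; the 5 byte file above becomes a single 5 byte block plus its metadata block.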

Cryptographic hash function is what hash means. It’s sort of a 256 bit unique identifier of a block, meaning you can’t, for example, pack 10 blocks into a bigger one, as its hash can’t be calculated easily. Originally, the hash of a block is calculated at the time when a block is read for its backup.

Such a tool exists, but it might be a bumpy ride for a motivated and technical person to get it going.

You want to over-simplify a complex problem.

I posted two suggestions, based on not much, as it has not been benchmarked on any system, especially not yours. I don’t know your file counts or your file sizes (sounds like most are large).

Unless you have a whole lot of files (seems less likely based on the comment on small files, but your log file will say exactly what you have, if you can get to it), your choice of making blocksize identical to Remote volume size will probably work. You have been warned you’ll see small files.

There might also be something else running strangely here, but fewer blocks should help things.

EDIT:

Second thought.

I used the Everything search to count non-empty, < 1 MB files. Any idea what you actually have? Drawback of files smaller than blocksize plus blocksize = Remote volume size is only 1 block fits, resulting in the small dblock issue (not hundreds of bytes, but whatever the file size actually was). Running at blocksize = Remote volume size is rare (I think). Default 50 MB / 100 KB is 500 times.
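If you want to count them on the Linux side, something like this should do it (the path is a placeholder; -size -1024k counts files under 1 MiB, because find rounds sizes up to the given unit):

find /path/to/source -type f ! -empty -size -1024k | wc -l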

My previous proposal was “5 MB, maybe 10”, but you could maybe push up to 25 if you like that, just so you can get several smaller files into a dblock. If all the files are big, the plan may change.

What’s on the status bar at top of screen? Say something you do see, not just what you don’t.

EDIT:

The status bar should show the restore moving through several phases. You looking at it beats me guessing where it is.

The status bar shows nothing, because I am using a CLI restore.

I’m less familiar with CLI view. Sometimes it gives useful clues. Post any you might have seen.

CLI is also tough because you can’t just peek at the live log to figure out what step you’re doing.

EDIT:

After looking through code a bit more, I just tried a --console-log-level=verbose restore, and got

The operation Restore has started
Checking remote backup ...
Backend event: List - Started:  ()
  Listing remote folder ...
Backend event: List - Completed:  (11 bytes)
Searching backup 0 (3/18/2024 2:53:30 PM) ...
Needs to restore 1 files (100 bytes)
Mapping restore path prefix to "C:\backup source\" to "C:\backup restore\"
Restore list contains 2 blocks with a total size of 237 bytes
Checking existing target files ...
  1 files need to be restored (100 bytes)
Target file does not exist: C:\backup restore\short.txt
Scanning local files for needed data ...
Target file is patched with some local data: C:\backup restore\short.txt
1 remote files are required to restore
Backend event: Get - Started: duplicati-be9e97ab97c58424b9f2b658a371cd748.dblock.zip (754 bytes)
  Downloading file duplicati-be9e97ab97c58424b9f2b658a371cd748.dblock.zip (754 bytes) ...
Backend event: Get - Completed: duplicati-be9e97ab97c58424b9f2b658a371cd748.dblock.zip (754 bytes)
Recording metadata from remote data: C:\backup restore\short.txt
Patching metadata with remote data: C:\backup restore\short.txt
  0 files need to be restored (0 bytes)
Verifying restored files ...
Testing restored file integrity: C:\backup restore\short.txt
Restored 1 (100 bytes) files to C:\backup restore
Duration of restore: 00:00:14

The default log only shows Warning and worse. Do you see anything like the above, even in scrollback?

Also check whether anything is showing up in the restore folder, e.g. right-click the folder to get its Properties.

EDIT 1:

Whenever it gets far enough to start restoring files, it will download dblock files to restore files as necessary blocks become available. You will likely have partial files, in the process of restoration.

EDIT 2:

At default log level, it looks like this:

Checking remote backup ...
  Listing remote folder ...
Checking existing target files ...
  1 files need to be restored (100 bytes)
Scanning local files for needed data ...
  Downloading file duplicati-be9e97ab97c58424b9f2b658a371cd748.dblock.zip (754 bytes) ...
  0 files need to be restored (0 bytes)
Verifying restored files ...
Restored 1 (100 bytes) files to C:\backup restore

How much of that are you seeing?

The original backup job is now restored. I started it one week ago: a local restore from a local hard disk to another hard disk, 1.5 TB, original blocksize (100 KB) and volume size (50 MB). So, a 1.5 TB restore needs one week. The only problem now is that during that week the original data inside the NAS was changed. So now the data is restored to the new NAS via the local disk restore, while the old NAS stays in continuous use and is fully working. Now I must find a way to refresh this new NAS to match the present NAS… :).

Maybe I can run a new restore. Maybe there is some way to "restore only changed data". But still, 1.5 TB needs one week to restore.

… for a backup that’s not configured quite right (there may be a default change coming to help), along with inadequate information on system resource usage or on the speed of the different phases.

If you have lots of files, setting the attributes on files can take a while and needs a lot of downloading. This would be towards the end of the restore, after all of the file blocks have already been made. You can look at the metadata lines from my verbose log example to see the file attribute setting.

Since there’s no info on location of the delay, and we’ve already been through the blocksize talk (however talk doesn’t fix things), I will pause performance work until some more results come in.

You might be able to use some sort of sync program (Duplicati isn’t one) such as rsync or rclone, however even they won’t be instant, so refresh timing relative to cutover to new NAS may matter.
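A minimal rsync sketch of such a refresh, assuming both NAS shares are mounted locally (paths are placeholders; --delete removes target files that no longer exist on the source, so preview with --dry-run first):

rsync -a --delete --dry-run /mnt/oldnas/ /mnt/newnas/   # preview what would change
rsync -a --delete /mnt/oldnas/ /mnt/newnas/             # then the real refresh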

Duplicati never writes restore data if the file is already perfect as it is. This confuses people who think of restore as a copy, and wonder why it didn’t happen. There’s even a confusing warning:

Restore completed without errors but no files were restored

I’m not sure Duplicati would do the refresh super-fast, but it might be able to beat original timing.

Overview

Duplicati is not:

  • A file synchronization program.
    Duplicati is a block based backup solution. Files are split up in small chunks of data (blocks), which are optionally encrypted and compressed before they are sent to the backup location. In backup location, Duplicati uploads not original files but files that contain blocks of original files and other necessary data that allows Duplicati to restore stored files to its original form by restoration process. This block based backup system allows features like file versioning and deduplication. If you need to be able to access your files directly from the backup location, you will need file synchronization software, not block based backup software like Duplicati.

There are many potential hazards that a refresh could trip over, even down to filesystem types having different time resolutions. However you do this, a small trial test in advance will be wise.

I made a new backup job with --blocksize=1MB. Error during backup: "You have attempted to change the parameter "blocksize" from "102400" to "1048576", which is not supported. Please configure a new clean backup if you want to change the parameter". OK, so 1 MB is not supported.

I know, Duplicati is not a sync tool. The original idea was, as we say here, to "hit two flies with one strike": first, test that restoring a backup is possible, and second, "sync". Because I am building a new NAS, I thought I could test the restore at the same time.

As I wrote, the original backup job is 1.5 TB of data, volume size at the default 50 MB, blocksize at the default 100 KB. As I wrote, this is a local backup: a 4 TB SATA disk connected to a Windows 10 computer, and this computer’s only job is Duplicati. It makes two parallel backups: one to Google Drive and one to the local disk. The data I back up is located inside the local network, on a NAS. The backup job runs day after day, week after week, and it is OK.

So: one backup in Google Drive, and one backup on the local hard disk inside this same backup/Duplicati computer.

I tested the restore. I connected this hard disk to a Linux computer, mounted it at "/mnt/backupdiskfromwindowscomputer" and tried to restore it to "/mnt/newlocation", inside the Linux computer, from local disk to local disk.

The restore is OK. It works. But with "1.5 TB, 50 MB volume size, 100 KB block size" the restore time is one week. Now it is tested: restore is possible, but it needs a lot of time. I have tried to read the answers, but I am not sure: "Is it possible to make this restore faster or not?"

This is my question in a nutshell. Backup is OK and possible. Restore is possible, tested, but it takes a lot of time. So, "is it possible to choose backup settings for a faster restore?" "And if so, how?"