'Array too small' error

Hi,

I’m running Duplicati on OpenMediaVault on a Raspberry Pi 3.

If I try to sync to OneDrive I get the error “array too small. numBytes/offset wrong. Parameter name: array”

This happens even if I do a dry run. When I test the connection to OneDrive, all is fine.

Full error log here…

~~~
Parameter name: array
  at (wrapper managed-to-native) System.IO.MonoIO:Write (intptr,byte[],int,int,System.IO.MonoIOError&)
  at System.IO.MonoIO.Write (System.Runtime.InteropServices.SafeHandle safeHandle, System.Byte[] src, System.Int32 src_offset, System.Int32 count, System.IO.MonoIOError& error) [0x00010] in <8f2c484307284b51944a1a13a14c0266>:0 
  at System.IO.FileStream.FlushBuffer () [0x0005d] in <8f2c484307284b51944a1a13a14c0266>:0 
  at System.IO.FileStream.WriteInternal (System.Byte[] src, System.Int32 offset, System.Int32 count) [0x000d5] in <8f2c484307284b51944a1a13a14c0266>:0 
  at System.IO.FileStream.Write (System.Byte[] array, System.Int32 offset, System.Int32 count) [0x000a5] in <8f2c484307284b51944a1a13a14c0266>:0 
  at SharpCompress.Writers.Zip.ZipCentralDirectoryEntry.Write (System.IO.Stream outputStream) [0x0020f] in <20afbe34b18d4bdda049db0f59cd5db0>:0 
  at SharpCompress.Writers.Zip.ZipWriter.Dispose (System.Boolean isDisposing) [0x00024] in <20afbe34b18d4bdda049db0f59cd5db0>:0 
  at SharpCompress.Writers.AbstractWriter.Dispose () [0x0000e] in <20afbe34b18d4bdda049db0f59cd5db0>:0 
  at Duplicati.Library.Compression.FileArchiveZip.Dispose () [0x00022] in <b049db475a4e429f87e7c9e1c0fbe31d>:0 
  at Duplicati.Library.Main.Volumes.VolumeWriterBase.Close () [0x00008] in <118ad25945a24a3991f7b65e7a45ea1e>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.FinalizeRemoteVolumes (Duplicati.Library.Main.BackendManager backend) [0x0009a] in <118ad25945a24a3991f7b65e7a45ea1e>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter) [0x00665] in <118ad25945a24a3991f7b65e7a45ea1e>:0
~~~

Welcome to the forum! I edited your post to improve the formatting. (I just added ~~~ before and after the output you pasted; please see here for details.)


Thanks! I couldn’t figure out how to do that!

To help make sure we’re looking in the right place for the issue: if you make a test job (or temporarily change your current one) to point to a local folder and do a dry run, do you still get the error?

If so, then it’s likely not a OneDrive issue…

I tried this - both source and destination are different folders on the same disk. The same ‘array’ error occurs.

Someone posted the same error message about a year ago in the old Duplicati Google group. He later figured out that his /tmp directory/partition was full. Maybe your problem is caused by this as well?
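A quick way to check is to look at /tmp while a backup runs, something like:

~~~
# Show free space on /tmp and whether it is RAM-backed (tmpfs)
df -h /tmp
mount | grep ' /tmp '

# Refresh every 5 seconds during a backup to see if /tmp fills up
watch -n 5 df -h /tmp
~~~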

Thanks for the check - since we know it’s not related to OneDrive I’m going to remove that from the topic title (so as not to drive others away - let me know if you disagree).

Please let us know if @Tekki is correct that it’s a temp folder issue so we can try and improve the error messages related to that (you know, something meaningful like “temp drive is full”).

Assuming it is temp space, in your particular case I’m guessing it filled up while compressing your dblock (archive) file - do you have a particularly large “Upload volume size” set in step 5 (Options) of your backup job? (Well, large compared to your temp disk size…)
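If you run from the command line, the same setting is exposed as --dblock-size. A rough sketch, with the destination URL and source path as placeholders:

~~~
# Sketch only: smaller volumes mean smaller temp files while compressing
# <destination-url> and /path/to/source are placeholders for your own values
duplicati-cli backup <destination-url> /path/to/source --dblock-size=25mb
~~~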

Sure, a more specific title will be more helpful/less misleading to others.

I will have a look at the ‘temp’ issue at the weekend and report back.

I suspect it has something to do with the ridiculously small boot partition on my Pi SD - I’ll check it out.

There is nothing wrong with the boot partition.
OpenMediaVault on a Raspberry Pi 3 hosts /tmp in RAM, and swap is in RAM as well, so if you run out of memory you have a problem :slight_smile:
The tmp and swap space is not allocated up front, so the available size can vary at runtime and the filesystem info can be misleading. In my case /tmp was too small because another job had polluted the directory.
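You can see this for yourself with something like (output will vary per system):

~~~
# tmpfs entries show /tmp (and swap) living in RAM
mount | grep tmpfs
free -h
swapon --show
~~~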

Advice here:
stay in RAM on the Pi, but reduce the volume size as mentioned before, or reduce the “asynchronous-upload-limit” setting.
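As a sketch of both knobs together (the option names are real Duplicati options; the values and placeholders are just examples):

~~~
# Smaller volumes plus fewer queued uploads keeps less data sitting in /tmp at once
duplicati-cli backup <destination-url> /path/to/source \
  --dblock-size=25mb \
  --asynchronous-upload-limit=2
~~~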