You can change the password used for authentication to the remote storage, but you cannot change the encryption password for an existing backup. (At least not easily; there is a script that supposedly helps.)
I read it. If I just delete all entries in my cloud folder, will it work again?
(I have more than one place where backups are stored, so deleting isn't a problem. It will take some time to upload again, but that's just how it is.)
What about this:
The easiest, if this passphrase is known, would be to change the remote password (and therefore the AuthID).
(Is that the cloud access password which you fill in at step 2?)
If you don’t care about starting over with your backups, then that’s the most straightforward. Delete the remote files, delete the local database (for that job), reconfigure the job to use the encryption passphrase you want, and then kick off a new backup.
I don’t really know what you mean in your second paragraph. AuthID sounds like authentication ID, which is separate and independent from the encryption passphrase.
Did you also delete the local database for that job? You do not need to make a NEW backup job.
To delete the job database, click the job name in the Web UI to expand the options. Click the blue “Database …” link, and then click the Delete button. If there is no local database and there are no files on the back end storage, Duplicati will “start over.”
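If you prefer the command line over the Web UI, the same start-over steps can be sketched roughly like this. This is a dry-run sketch that only prints the commands; the remote URL, source path, and database path are placeholders I made up, not values from this thread, so adjust them to your own job before removing the `echo`s:

```shell
# Dry-run sketch of "start over" via the Duplicati CLI (duplicati-cli).
# It only PRINTS the commands; remove the leading 'echo' to actually run them.
# REMOTE, SOURCE, and DBPATH below are hypothetical placeholders.

REMOTE="webdav://example.com/duplicati-folder"        # hypothetical back end URL
SOURCE="/home/user/data"                              # hypothetical source path
DBPATH="$HOME/.config/Duplicati/backup2020.sqlite"    # hypothetical local job database

# 1. Delete the remote files with your storage provider's own tools,
#    then delete the local job database:
echo rm -f "$DBPATH"

# 2. Kick off a fresh backup with the NEW encryption passphrase:
echo duplicati-cli backup "$REMOTE" "$SOURCE" \
    --dbpath="$DBPATH" --passphrase="new-encryption-passphrase"
```

Deleting the database (rather than using Recreate) is the key point: with no local database and no remote files, Duplicati treats the next run as a brand-new backup.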
The Recreate button (that’s the one you’re talking about, right?)
(It’s running at this moment.)
It gave an error, so I hit “Delete” and then “Repair”.
(It was the only one still blue and active.)
Last action: I deleted all entries in the backup cloud folder and will try again after the “Delete”.
It’s ok if you manually deleted the database. You do not want to use the recreate option if you’re trying to start over.
That password field you show in your screen shot is for encryption, NOT for authentication to the back end storage.
I hope that we are on the same page and you are really trying to change your ENCRYPTION password. If you were just trying to change your cloud account password, that could have been done much more easily.
I changed my Duplicati web page password, my cloud folder remote-entry password, and my cloud control web page password; they were too old and easy… The “hackers” have lots of free time due to corona, so I updated some security levels.
(I didn’t know that there was a third place for a password, the encryption password.)
I had an exported job configuration; I imported it, put the latest passwords in it, and renamed it “backup 2020”.
Duplicati is now deleting the remote files and the former job configuration, the one that was acting up. If I am right, my backup job (which I imported to create a new job) is a clone of the problem version except for the passwords. So I’ll just wait until the deletion of the old job is done and see if the new job runs without errors.
(update: before posting)
And yes, it is running a backup at this moment.
I checked the cloud folder and it’s filling up.
(Lucky I had backed up the backup job settings.)
So if I have this problem and I am content with a fresh backup, the easiest way is: create a new job from a job export (backup), rename it with the present year, and delete the other job including the remote files.
I did a fully new configuration, built from scratch.
I used Test connection and even let it make a folder in STACK (which proves it can write to STACK).
My local temp folder (on my PC) is filling with dup-files in chunks of 102,394 KB, and there it stops.
I’m starting to get frustrated.
I don’t understand why it’s erroring.
(Maybe the problem lies on the other side.)
Yes, and I think I “broke” the cloud’s file management. I manually deleted 660 GB over there, then realized I needed to let Duplicati do this, so I pulled it out of the trash can and let Duplicati do the deleting. (I emptied the trash can after that to free up space, since trash still takes up space.)
Second time deleting 660 GB (I used an old backed-up job config to redo the job configuration).
And it seemed to work, but then it stopped. (I took a look at the trash can and it was empty, so I thought there was enough room over there.) It had uploaded about 335 GB. Now I realize that’s the free space I had left: 1 TB − 660 GB = 340 GB. (So it didn’t really delete the 660 GB.) (What a night’s sleep can do to help find the trouble spot…)
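A quick sanity check of the numbers in the post above (the 1 TB plan and the 660 GB still counted by the provider) shows why the upload stalled right around 335 GB:

```python
# Quota arithmetic from the post above: the "deleted" files still counted
# against the 1 TB plan, so only the remainder was free for new uploads.
total_gb = 1000    # 1 TB plan (taking 1 TB = 1000 GB)
trashed_gb = 660   # deleted, but apparently still counted by the provider
free_gb = total_gb - trashed_gb
print(free_gb)     # 340 -- the upload stalled after about 335 GB
```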
So I rebuilt my job from scratch, thinking I had imported a flaw.
That’s why I am fairly sure it’s the TransIP side that is blocking the upload.
My control panel shows −24 GB of free space, while my STACK folder manager shows 302 MB of usage…
The trash is empty, and even my sync folder can’t upload.
Tomorrow I’ll know more, when they look at it (at why it’s showing as full).
This is not true… if you really do want to start over, you can manually delete the back end data and then delete the local database to start over. You do not have to delete the backup job itself. I do this often on my test machine.
Not exactly sure what your current issue is, but it could be that your deleted items aren’t freed up immediately in cloud storage. Maybe you can empty the trash can? Hopefully your control panel will then tell you that you have 1TB free.
At this moment it’s running; the STACK server was acting up, and the technicians over there repaired the problem. (Something about deleting too large an amount of GB in one go bringing the system down; 660 GB is too much for the trash can to deal with, I think.)
Fingers crossed to see if it gets a full backup run done.