Duplicati will recognize the files as new because the paths are different; however, the actual data in the files should not need to be uploaded again, because Duplicati de-duplicates on a file block basis. Because your seemingly new files haven't actually changed, Duplicati "should" just have its new file list reuse the old blocks.
Duplicate files are detected regardless of their names or locations, but relatively small dlist and dindex files are still needed in order to list the paths and locate the dblock files that contain the file blocks.
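To illustrate why the moved files shouldn't cost much upload: this is a hypothetical Python sketch of block-level dedup, not Duplicati's actual code (the 100 KB block size and SHA-256 hashing here are assumptions based on Duplicati's commonly cited defaults). The point is that identical content under two different paths produces identical block hashes, so every block is already in the backend.

```python
import hashlib

BLOCK_SIZE = 100 * 1024  # assumed: Duplicati's oft-cited default --blocksize

def block_hashes(data: bytes) -> list[str]:
    """Split data into fixed-size blocks and hash each block."""
    return [
        hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(data), BLOCK_SIZE)
    ]

# Same content, stored under two different paths after the move.
content = b"example file data" * 50_000
old_file = {"path": "C:\\old\\report.bin", "blocks": block_hashes(content)}
new_file = {"path": "D:\\new\\report.bin", "blocks": block_hashes(content)}

# The paths differ, so the file looks "new"...
assert old_file["path"] != new_file["path"]
# ...but every block hash already exists, so no new dblock data is needed;
# only the small dlist/dindex records for the new paths must be written.
assert old_file["blocks"] == new_file["blocks"]
```

Only the per-path bookkeeping (the dlist and dindex entries) changes; the bulk dblock data is found by hash and reused.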
If you want to confirm this, you can use the dry run option to see whether it wants to upload many dblock files. The manual file transfer might have changed something (file times? permissions?), but I hope Duplicati still deduplicates the data itself (which is generally the larger part). You could test with one large file to see whether it puts only a small dblock in OneDrive, or one of a size that would be reasonable for the large file after compression.
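A dry-run check might look something like the following. The storage URL and source path are placeholders, not your real values; the safest way to get the exact command for your job is the "Export as command-line" feature in the web UI, then add the dry-run option:

```shell
# Hypothetical example: run the backup with --dry-run so nothing is
# actually uploaded, then look in the verbose output for planned
# dblock uploads (many large dblocks = dedup is not kicking in).
Duplicati.CommandLine.exe backup \
  "onedrivev2://Backup/MyPC" \
  "D:\new\data" \
  --dry-run \
  --console-log-level=Verbose
```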
If you care about older versions, note that you'll need to use the old paths. If you limited version retention, the old paths (which look like deletions) will eventually age away, and you might even get a little OneDrive space back…
Edit: I should also add that I'm assuming you migrated Duplicati to the new computer in a way that keeps the old pathnames visible. Here is an example of that. If you just moved an exported job configuration, that's different. Doing a move that includes the actual database cache of remote data is probably faster than regenerating it. Somewhat surprisingly, I couldn't find a good article on the best ways to do common moves; I might have missed it.