Occasional "Failed Move" errors

AnonyMouse
Newbie
Posts: 27
Joined: December 9th, 2017, 10:45 am

Occasional "Failed Move" errors

Post by AnonyMouse »

So I occasionally, but not infrequently, get "Failed Move" errors, which actually look to come partly from the rename operation after an unpack. In most cases the unpack itself has been successful. I suspect what is happening here is that the backend file server is presenting a union file system that can sometimes take a few seconds to present refreshed state, which causes the failed move. I'm wondering if it's possible to either insert a pause of a few seconds before the file operation, or if maybe increasing the wait_ext_drive setting might help here?
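
For reference, this is the kind of knob I mean. I'm assuming the special options end up under [misc] in sabnzbd.ini (the setting itself is under Config > Special), and 15 is just an example value, not a recommendation:
[code]
# sabnzbd.ini (excerpt) -- assumed location of the special option
[misc]
# seconds SABnzbd waits for the (external) destination drive to show up;
# 15 is only an example value
wait_ext_drive = 15
[/code]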

Any other ideas to address this would be helpful (maybe a custom post-proc script? I'm not sure whether that would help me or not).
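
To make the post-proc idea concrete, this is roughly the retry-with-pause I had in mind. It's only a sketch: I'm assuming the finished job folder arrives as the first script argument (as I understand SAB's post-processing script convention), and the destination path is a made-up placeholder:
[code]
#!/usr/bin/env python3
# Sketch: retry a move with short pauses so a slow union FS / SMB mount
# has time to present the refreshed state before we give up.
import shutil
import sys
import time
from pathlib import Path

DEST = Path("/mnt/media/complete")  # placeholder destination folder


def move_with_retry(src: Path, dst: Path, attempts: int = 5, pause: float = 3.0) -> None:
    """Try the move a few times, sleeping between attempts."""
    for attempt in range(1, attempts + 1):
        try:
            shutil.move(str(src), str(dst / src.name))
            return
        except OSError as err:
            if attempt == attempts:
                raise
            print(f"Move failed ({err}); retrying in {pause}s", file=sys.stderr)
            time.sleep(pause)


if __name__ == "__main__":
    job_folder = Path(sys.argv[1])  # assumed: SAB passes the finished job folder first
    move_with_retry(job_folder, DEST)
    sys.exit(0)  # 0 signals success back to SABnzbd
[/code]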

Thanks!
safihre
Administrator
Posts: 5338
Joined: April 30th, 2015, 7:35 am

Re: Occasional "Failed Move" errors

Post by safihre »

This is indeed something we have seen before: the system takes too long to process the change.
What OS are you on?
If you like our support, check our special newsserver deal or donate at: https://sabnzbd.org/donate
AnonyMouse
Newbie
Posts: 27
Joined: December 9th, 2017, 10:45 am

Re: Occasional "Failed Move" errors

Post by AnonyMouse »

Sorry, I lost track of the thread!
The backend file system is Windows NTFS. It's a union FS presented by the software package DrivePool. I have the storage tiering feature in use, so that there's a 512GB SSD caching recent writes and near-term read activity. The share is a Windows CIFS (presumably v3, since it's Win10) share.

I have sabnzbd running in a docker container hosted on an Ubuntu VM. The docker container accesses the file system as bind mounts, but in practice they are bind mounts that are actually SMB mount points on the docker host (temp working folders and everything). The file server is also a VM, on the same virtual network (so I don't think any traffic should actually be moving over the wire). But I think the SMB mounts are somewhere in there, either the problem or a contributor to it. I was able to reduce the occurrence of the problem a bit by adjusting the SMB caching options in the mount, but I still run into these errors. I don't recall ever having had this problem when I was running sabnzbd under another Win10 VM, so it seems like something in my Linux setup causes it.
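
For context, the mount tweak I mean is along these lines (server, share, paths and credentials file are placeholders; cache= and actimeo= are the options I was experimenting with):
[code]
# /etc/fstab on the docker host -- all names/paths are placeholders
//fileserver/media  /mnt/media  cifs  credentials=/etc/smb-cred,vers=3.0,cache=none,actimeo=1,uid=1000,gid=1000  0  0
[/code]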

Possible solutions I've come up with:
- inserting the brief pause I asked about into the sabnzbd operations (or some other refresh before performing ops)
- changes in the host mount that fix it
- changes in the DrivePool config to improve response, but I don't think I'll get far with this
- adding more local storage to the docker host for use as a working cache for sabnzbd. I don't like this option a lot because I don't know if I can give it enough storage (the VMs are on SSDs for performance and may not have enough free space, especially if cleanup doesn't happen enough / I get orphaned junk, which seems to happen over time). I've normally depended on the DrivePool storage tier to transparently keep recent activity on SSDs, and historically it's worked well.
- It just occurred to me that using an NFS export from the Windows side, if that helps the Linux side of things, might not be a bad path to go down either. Any information on that?

The last one seems like it might be the most likely to work, but it would also add more disk load, and has the disk space challenge. But if you have any suggestions of configs known to help with this in a similar layout (the docker/remote-mounted storage), I'd be interested in them. Or even just hearing whether moving to more local disk space, and just having to move the resulting files to the file server after processing, seems to fix it.

Thanks!
safihre
Administrator
Posts: 5338
Joined: April 30th, 2015, 7:35 am

Re: Occasional "Failed Move" errors

Post by safihre »

I've never seen this kind of thing before ;)
All I know for sure is that SAB expects the files to be gone after it sends a move or delete-request!
If you like our support, check our special newsserver deal or donate at: https://sabnzbd.org/donate
AnonyMouse
Newbie
Posts: 27
Joined: December 9th, 2017, 10:45 am

Re: Occasional "Failed Move" errors

Post by AnonyMouse »

safihre wrote: September 5th, 2019, 8:32 am
I've never seen this kind of thing before ;)
All I know for sure is that SAB expects the files to be gone after it sends a move or delete-request!
Heh, yeah, I think the tiny latency between the union FS and SMB caching must be the culprit. I gave up and attached some more local storage as a working cache, and that seems to have made it go away. It's not my ideal solution, but it works. I do need to check the folder once in a while; for whatever reason, I still seem to get an occasional dangling SAB working folder that didn't get cleaned up. I thought I had everything set to delete after the transfer, but maybe not. I'll just live with it. :D
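
In case anyone else ends up doing the same thing, my occasional manual cleanup looks roughly like this. The incomplete path and the one-week age limit are just my own choices, nothing SAB-specific:
[code]
#!/usr/bin/env python3
# Rough sketch: delete leftover working folders that haven't been touched
# in a week. Path and age threshold are assumptions -- adjust before use.
import shutil
import time
from pathlib import Path

INCOMPLETE = Path("/mnt/cache/sabnzbd/incomplete")  # placeholder path
MAX_AGE = 7 * 24 * 3600  # one week, in seconds

now = time.time()
for folder in INCOMPLETE.iterdir():
    if folder.is_dir() and now - folder.stat().st_mtime > MAX_AGE:
        print(f"Removing stale folder: {folder}")
        shutil.rmtree(folder, ignore_errors=True)
[/code]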

Thanks!