Do you experience problems during downloading?
Check your connection in the Status and Interface settings window.
Use Test Server in Config > Servers.
We will probably ask you to do a test using only basic settings.
Do you experience problems during repair or unpacking?
Enable +Debug logging in the Status and Interface settings window and share the relevant parts of the log here using [ code ] sections.
I've been downloading close to 80-90 movies in the last 24 hours, then all of a sudden it started to crash multiple times. Not sure where to look for error logs, but this is the Windows event on the crash.
I'm having this issue right now. There is nothing in the log. It seems to be caused by an upload with a large (> 20GiB) file in it. RAM usage always goes up and then SABnzbd crashes with exception 0xc0000005 which is memory access violation.
My hypothesis is that _yenc.pyd, being part of a 32-bit SABnzbd, is limited to 4GiB of address space but doesn't check whether allocation of new memory succeeds. As more and more articles get downloaded, they are cached until a whole file is ready to be written to disk. With normal uploads, which have about 50MiB per file and a maximum of 2 files cached at once, that is at most about 100MiB, which won't cause any problems. However, when there is a large file, more and more memory will be allocated until the 4GiB limit is reached. At that point the allocation will fail (return a NULL pointer), SABnzbd (_yenc.pyd) will try to write through that NULL pointer, cause an exception, and be terminated immediately, with no time to write into its own log.
If this is the case, setting an article cache limit under Config → General → Article Cache Limit to something lower than 4G should fix this (depending on how badly _yenc.pyd is written, you might need to go under 2G).
I'm testing this hypothesis now. Will update the post with results.
EDIT: OK, I set Article Cache Limit to 1G (just to be on the safe side) and the download finished without any further problems. Either the authors of _yenc.pyd should fix it, or the SABnzbd developers should reject overly high values in the Article Cache Limit field.
Thanks for your detailed report.
SABnzbd caching was designed at a time when posters kept segment files below 100M.
So normally, SABnzbd will not use more cache than is needed for about 1.5 times the maximum file size in a post.
However there are circumstances when usage can go very high, like when one article of each file stalls.
Your example of a 20G file is equally problematic, assuming you're talking about a segment file and not the final result.
At the very least, SABnzbd should limit its memory usage to what it can handle.
Checking whether the cache size makes sense on a particular system is just about impossible to do.
The simple solution may be that we set an internal limit of 1G.
(An upgrade of both Python and _yenc will be done for 0.8.0, but it may not solve this problem.
Going 64bit on Windows is prohibited by some of the libraries that we need.)
I ended up changing the value for "Article Cache Limit" from -1 (unlimited) to 4G (4GB RAM), which fixed my issue. The setting is located under Config → General → Tuning. The system in question runs Win7 Pro x64 with 8GB RAM. The file that was giving SABnzbd problems was a single 2.4 GB .mkv file.