I'm fairly sure the problem is that I have 1.70 TB of 200-300 MB files in the queue.
I keep getting the error below whenever I try to view the queue. I need to remove a download because it's stuck, but I can't access the queue because of this error:
500 Internal Server Error
The server encountered an unexpected condition which prevented it from fulfilling the request.
Traceback (most recent call last):
File "cherrypy\_cprequest.pyo", line 618, in respond
File "cherrypy\_cpdispatch.pyo", line 25, in __call__
File "sabnzbd\interface.pyo", line 1133, in index
File "_Program_Files__x86__SABnzbd_interfaces_Classic_templates_queue_tmpl.py", line 834, in respond
File "Cheetah\DummyTransaction.pyo", line 32, in getvalue
MemoryError
Powered by CherryPy 3.2.0
Is there any way I can fix this without clearing the queue?
Here's an idea as well: you could offer an option to paginate the queue, i.e. page 1, page 2, page 3, up to page 50 if needed, because SABnzbd has huge problems with big queues.
Error - help!
Re: Error - help!
You can use another skin. The Plush and smpl skins have pagination.
Select another skin and restart SABnzbd.
Re: Error - help!
shypike wrote: You can use another skin. The Plush and smpl skins have pagination.
Select another skin and restart SABnzbd.
Thanks shypike, I love you and you're the man!
Re: Error - help!
1.7T with 300M-sized jobs means 5600 entries, yuck.
Having 5000+ jobs in the queue is not something we have ever tested
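For the record, that estimate checks out. A quick sketch of the arithmetic, using the 1.7 TB total and the ~300 MB upper bound on job size reported earlier in the thread:

```python
# Rough estimate of queue entries: total queued data divided by average job size.
TOTAL_QUEUED_BYTES = 1.7e12  # ~1.70 TB reported in the opening post
AVG_JOB_BYTES = 300e6        # jobs are roughly 200-300 MB; use the upper bound

entries = TOTAL_QUEUED_BYTES / AVG_JOB_BYTES
print(round(entries))  # about 5667 jobs, consistent with the "5600 entries" figure
```

Using the 200 MB lower bound instead would put the queue at around 8500 jobs, so 5000+ entries is a safe floor either way.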