
[0.5.x branch] Excessive Memory Usage (2GB+) (GC/memory freeing issue)

Posted: May 12th, 2010, 8:15 am
by Lukian
Versions: 0.5.0 betas to 0.5.2 final
OS: Ubuntu 9.10 -> 10.04
Install-type: Launchpad repository
Skin: smpl
Firewall Software: None
Are you using IPV6? No
Is the issue reproducible? Yes, locally. I'm happy to run debug/profile builds, debuggers, profilers, etc. under advisement.


htop: http://dl.dropbox.com/u/303361/scr/sabn ... -usage.png

pmap -d:

Code:

Address           Kbytes Mode  Offset           Device    Mapping
0000000001931000 1893696 rw--- 0000000000000000 000:00000   [ anon ]
1.9GB of (anonymous) memory mapped by sabnzbd while a partially downloaded job is paused in the queue.
Restarting sabnzbd 'frees' the memory. There's likely a memory leak or a massive issue with GC.

This is the highest value I've seen, resulting in crashes of other applications due to lack of memory (as I don't use a swap file) and amusing errors such as "Unpacking failed, [Errno 12] Cannot allocate memory" within sabnzbd itself (when trying to complete a different job without restarting).
This issue has been occurring since 0.5.0, I don't believe it was present in the 0.4 branch.
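To quantify the growth over time, one way is to sample the process's resident set from /proc at intervals. A minimal sketch (Linux-only; the helper name rss_kb is mine, and it defaults to the current process purely for demonstration — in practice you would point it at the sabnzbd PID):

```python
import time

def rss_kb(pid="self"):
    """Return resident set size in kB, read from /proc/<pid>/status."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # VmRSS value is reported in kB
    raise RuntimeError("VmRSS not found")

if __name__ == "__main__":
    # Log a few samples; in practice, run this every few seconds while
    # a job downloads, pauses, and completes, and watch for growth.
    for _ in range(3):
        print(rss_kb())
        time.sleep(0.1)
```

Logging these samples alongside queue events (job start, pause, completion) would show exactly which transition leaks.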

Re: [0.5.x branch] Excessive Memory Usage (2GB+)

Posted: May 12th, 2010, 9:56 am
by shypike
The only (non-reproducible) report we got related to this is when
someone had set an extreme refresh rate (0) on one of the skins.

Given that this "leak" does not occur under normal circumstances, we would have
to know more about your setup.

BTW: I don't see SABnzbd in your screenshot, but a lot of other Python programs.

Re: [0.5.x branch] Excessive Memory Usage (2GB+)

Posted: May 12th, 2010, 1:30 pm
by Lukian
sabnzbd+ is the highlighted task at the top (sorted by RESident memory usage).

Regarding the possibility of skin refresh: my browser window with the sabnzbd+ web interface is closed while the memory usage grows.
The refresh is set to 5 seconds.

This issue is reproducible locally, so I'm able to do any testing you would normally do.
If you advise me what needs to be done and/or provide a profile/debug/logging build, I'm more than happy to provide my results.

If you wish to know more information about "my setup", then please provide a list of the information you require. I've taken the initiative by attaching my sabnzbd.ini (with passwords redacted), by including the output from 'pmap', and by reporting this bug (on a medium other than IRC).

It's possible this issue is related to having an 'unlimited' cache set. After mentioning this issue again on IRC, it was suggested I change the cache value.

Setting a maximum cache value seems to correctly limit the memory usage. However, this bug report remains 'open', as there is still a bug in freeing/garbage-collecting the memory in the cases above. Additionally, even with an 'unlimited' cache, I don't believe the memory cache should ever exceed article size * number of parts currently downloading.
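The cap behaves roughly like a byte-budget cache: once the in-memory budget is exceeded, the oldest articles should spill to disk rather than grow RSS without bound. A minimal sketch of that idea (the class name, methods, and the dict standing in for on-disk spill files are all my assumptions, not SABnzbd's actual code):

```python
from collections import OrderedDict

class ArticleCache:
    """Sketch of a byte-limited article cache with oldest-first eviction."""

    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.used = 0
        self.mem = OrderedDict()   # article_id -> data held in RAM
        self.disk = {}             # stand-in for on-disk spill files

    def add(self, article_id, data):
        self.mem[article_id] = data
        self.used += len(data)
        # Evict oldest articles until we are back under the byte budget.
        while self.used > self.max_bytes and self.mem:
            old_id, old_data = self.mem.popitem(last=False)
            self.disk[old_id] = old_data
            self.used -= len(old_data)

cache = ArticleCache(max_bytes=1024)
for i in range(10):
    cache.add(i, b"x" * 300)   # 3000 bytes offered against a 1024-byte budget
```

With this scheme an 'unlimited' setting is the only way RSS can grow past the budget, which matches the behaviour I'm seeing.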

From my own downloads, I've observed that the memory is freed/GC'ed correctly *only* when the next job in the queue starts (and the previous job is completely downloaded). The memory is not freed/GC'ed if the job is paused and another job started, or if the job is completed with no further jobs to process.

Re: [0.5.x branch] Excessive Memory Usage (2GB+) (GC/memory freeing issue)

Posted: May 14th, 2010, 12:23 pm
by shypike
Each job in the queue uses memory.
0.5.0 had a problem where sometimes all queue data ended up in memory.
This should no longer occur in 0.5.2.
Memory allocation and garbage collection are black magic in Python, and
diagnosing them is poorly supported.
The memory cache only governs how many articles, awaiting assembly
into a file, are kept in memory.
As soon as a file is complete, all of its articles are removed from memory.
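That lifecycle (articles accumulate per file, then are dropped as a batch once the file is assembled) can be sketched as follows. All names here are illustrative, not SABnzbd internals:

```python
import gc

class FileAssembler:
    """Sketch: articles are held per file and released together on assembly."""

    def __init__(self, expected_parts):
        self.expected_parts = expected_parts
        self.articles = {}  # part number -> decoded article data

    def add_article(self, part, data):
        self.articles[part] = data
        if len(self.articles) == self.expected_parts:
            return self.assemble()
        return None

    def assemble(self):
        data = b"".join(self.articles[p] for p in sorted(self.articles))
        # Drop all article references so their memory can be reclaimed.
        self.articles.clear()
        gc.collect()  # usually redundant; refcounting frees them immediately
        return data

asm = FileAssembler(expected_parts=3)
asm.add_article(0, b"aa")
asm.add_article(2, b"cc")
result = asm.add_article(1, b"bb")
```

If some code path skips the release step (e.g. on pause, or when the queue empties), the articles would stay referenced and RSS would never shrink, which would match the report above.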

Yours is the only report we've received regarding this type of memory leak,
so we don't really have a clue where to start without a specific scenario to test.

In your last paragraph, only the final remark "or the job is completed with no further jobs to process" is unexpected.
As long as a job is not finished, it will use memory.
As soon as it starts, it will need considerably more memory and will never give it back until completed.
I will look into this last issue, because it is suspicious.