SABnzbd does a reasonable job handling NZB files with multiple par sets. However, it defers processing of all the par sets until the entire NZB has been downloaded. Wouldn't it be better to try to process each par2 set right away?
Yes, I know one can (should? though I'm not sure I agree with "should") create a separate NZB file for each parity set, but that can be annoying if you're creating an NZB to download multiple items from a search engine, for instance.
One way to attack this problem would be to preprocess the NZB file: if its contents can be fully assigned to individual parity sets, create an individual NZB file for each set and process those normally; otherwise, continue as normal. (By "fully assigned" I mean: collect the set of parity "header" files, then take the "data" files and "parity" block files and see whether every one of them can be matched to exactly one of those headers. If they all can, I'd think you are pretty safe in automatically dividing the large NZB file into multiple smaller ones.)
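To make the idea concrete, here's a rough sketch of that preprocessing step in Python. This is purely hypothetical, not SABnzbd's actual code: it assumes the common usenet naming convention (`release.par2` for headers, `release.vol00+01.par2` for parity blocks), and the function name `split_into_sets` is made up for illustration.

```python
import re

# Matches a par2 "header" file like "release.par2".
PAR2_HEADER = re.compile(r"^(?P<base>.+?)\.par2$", re.IGNORECASE)
# Matches a par2 block volume like "release.vol00+01.par2".
PAR2_VOLUME = re.compile(r"^(?P<base>.+?)\.vol\d+\+\d+\.par2$", re.IGNORECASE)

def split_into_sets(filenames):
    """Return {base_name: [files]} if every file in the nzb can be
    assigned to exactly one parity set, otherwise None (meaning:
    leave the nzb alone and process it as one unit, as today)."""
    # First pass: find the header files, which define the sets.
    bases = []
    for name in filenames:
        if PAR2_VOLUME.match(name):
            continue  # volumes also end in .par2, so rule them out first
        m = PAR2_HEADER.match(name)
        if m:
            bases.append(m.group("base"))
    if not bases:
        return None  # no headers at all, nothing to split on

    # Second pass: every file must belong to exactly one set.
    groups = {b: [] for b in bases}
    for name in filenames:
        owners = [b for b in bases if name.startswith(b)]
        if len(owners) != 1:
            return None  # unassignable or ambiguous: keep the nzb whole
        groups[owners[0]].append(name)
    return groups
```

Each returned group could then be written out as its own smaller NZB and queued normally; a `None` result falls back to today's behavior, so the change would be strictly opt-in on a per-NZB basis.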