Note: whenever I say "you" I don't mean you personally, I mean "whoever did that at your site".
Yes, well, basically it ran out of workspace. There is probably something in the informational messages that IBM Support can use. When you see lots of those messages, it should be easy to connect the dots and contact IBM Support; they may be able to tell from those exactly what you ran out of.
However, you have two messages which indicate the coming problem: "don't know how many records" and "I'm going to pause the sorting now and do a bit of merging, to see if that gets me to termination; sorry, it'll take a bit longer than otherwise".
There are a few ways you can specify an estimated number of records. Growth of 30% a quarter is quite rapid. An immediate fix is to hard-code an estimate of the number of records, probably most conveniently on an OPTION statement in a DFSPARM DD. Then you need to change it periodically. This can be a manual change of the hard-coded value, or you can output a formatted count from the COBOL program and feed that value into the next day's run (or, if you have somewhere earlier in the day that you've used the data set, you can even go for the exact number of records, provided there are no insertions/exclusions in between).
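As a sketch, something like this in the JCL (the record count is purely illustrative, substitute your own estimate; the E prefix marks it as an estimate):

//DFSPARM DD *
  OPTION FILSZ=E25000000
/*

DFSPARM overrides the same option coming from SYSIN or the EXEC PARM, so it's a convenient single place to keep the value that gets updated.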
This will probably remove the workspace problem, and the two informational messages alluded to will probably both disappear.
Expect improvement in elapsed time, CPU and I/O.
Best, most accurate advice, pointers, explanations and suggestions of things to look out for in the future, will come from DFSORT Support.
Dynamic allocation doesn't come into it, because you have SORTWKnn DDs in the JCL. So it is using those. From the use of Memory Objects and DASD work space, it looks to have dropped from using MO to DASD when the "intermediate merge" was selected, and the explanation of the message supports that.
Remember, the amount of "memory" required was greatly overestimated because the number of records is not known. It is not a case here of increasing the memory available, but of aiding a better estimate of the workspace needed.
I don't think you are getting as much secondary allocation on your work data sets as you may expect. Secondary allocation for work data sets must be within the same volume (i.e. they cannot be multi-volume).
I don't know why you specified SORTWKs. If the MO had worked, you'd not have used any DASD. Much simpler to allow dynamic allocation to cover the potential lack of MO (unless DFSORT Support say otherwise...). With the SORTWKs, why 1000 cylinders primary? 6000 cylinders per SORTWK is being used, so why not specify that as the primary (see the sketch below)? RLSE I don't see any point in (the data set is going to be deleted), and DELETE,DELETE is the default for NEW, so why type it out even once?
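If you do keep the SORTWKs, a trimmed version might look like this (the UNIT name and the secondary amount are assumptions, based on your 6000-cylinder figure):

//SORTWK01 DD UNIT=SYSDA,SPACE=(CYL,(6000,600))
//SORTWK02 DD UNIT=SYSDA,SPACE=(CYL,(6000,600))
//SORTWK03 DD UNIT=SYSDA,SPACE=(CYL,(6000,600))

No RLSE and no DISP: NEW,DELETE,DELETE is what you get by default, and releasing space on a data set that is about to be deleted achieves nothing.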
Put in the FILSZ=En, remove the SORTWKs, and look to see how much memory is actually used.
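So, as a sketch of the whole suggestion (again, the count is illustrative, and DYNALLOC is just belt-and-braces should MO not be available):

//DFSPARM DD *
  OPTION FILSZ=E25000000,DYNALLOC
/*

With the SORTWKnn DDs gone from the JCL, DFSORT can dynamically allocate DASD work space only if it actually turns out to need it.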
However, better still: consult DFSORT Support.