
SVC Dump

PostPosted: Mon Jun 14, 2010 10:17 am
by Antonyraj85
Dear Friends,

52 IEA793A NO DUMP DATA SETS AVAILABLE FOR DUMPID=039 BY JOB (DB21MSTR).
USE THE DUMPDS COMMAND OR REPLY D TO DELETE THE DUMP

The above is the console message we got. We have activated the automatic dump data set allocation command in the COMMNDxx member, but the dump is trying to allocate more than 9000 MB, so the system is not able to allocate it. Altering the ACS routines is my next option. Right now, is it possible to capture the dump content into a normal DASD data set (a manually created data set of 9000 MB) by replying to message 52?
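For reference, a minimal sketch of the kind of COMMNDxx entries that enable automatic dump data set allocation (the name pattern, the SMS class DUMPDATA, and the MAXSPACE value are illustrative placeholders, not our exact values):

COM='DUMPDS NAME=&SYSNAME..DUMP.D&DATE..T&TIME..S&SEQ.'
COM='DUMPDS ADD,SMS=(DUMPDATA)'
COM='DUMPDS ALLOC=ACTIVE'
COM='CD SET,SDUMP,MAXSPACE=9000M'

The CHNGDUMP MAXSPACE value caps how much virtual storage DUMPSRV may use to capture a dump, which in turn bounds the size of the data set it tries to allocate.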


Can anyone please reply?

Thanks in advance!

Re: SVC Dump

PostPosted: Mon Jun 14, 2010 3:43 pm
by Robert Sample
Research the MVS Initialization and Tuning Reference and Guide manuals for details about the dump data sets. This message normally shows up on our system when there have been repeated dumps and the dumps are not being cleared quickly enough. And as far as I know (the manuals will tell you for sure), it is not possible to use a manually created "normal" data set for a dump data set -- that's the whole point of the PARMLIB definitions for dump data sets.
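As a sketch, assuming pre-allocated SYS1.DUMPnn data sets are in the picture as well, the usual console commands to check and free dump resources are along these lines:

D D,STATUS
DUMPDS CLEAR,DSN=ALL

D D,STATUS shows which dump data sets are full or available; DUMPDS CLEAR marks the pre-allocated ones as reusable once their contents have been copied off (for example, with IPCS COPYDUMP).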

Re: SVC Dump

PostPosted: Mon Jun 14, 2010 4:52 pm
by Antonyraj85
Dear Robert,

Thanks a lot for your reply. One last question: is there any alternate method to get the dump? The application team needs it.

Thanks in advance.

Re: SVC Dump

PostPosted: Mon Jun 14, 2010 5:17 pm
by Robert Sample
I would have to do some research -- 9 GB for a dump is extremely large. I know it is possible to use more than one pack for dumps, but I'm not sure whether an individual dump is allowed to span packs.

Your best bet would be to open an issue with IBM's DB2 team about the dump size and how to capture it.
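Before you open it, it may be worth displaying the current SDUMP settings and asking the application team whether a truncated (partial) dump would still be useful. A rough sketch, where 5000M is just an example value:

D D,O
CD SET,SDUMP,MAXSPACE=5000M

D D,O shows the current dump options, including MAXSPACE; lowering MAXSPACE makes SDUMP truncate anything beyond it, so the resulting data set stays small enough to allocate.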

Re: SVC Dump

PostPosted: Mon Jun 14, 2010 6:00 pm
by Antonyraj85
Thanks a lot. I will raise this with IBM's DB2 team.

Re: SVC Dump

PostPosted: Wed Jun 16, 2010 3:34 pm
by Antonyraj85
Hi Robert,

I went through an IBM z/OS Techdoc regarding SVC dumps, and it mentions:

Do not specify multi-volume dump data sets, as they are not supported by z/OS:
- Applies to both SYS1.DUMPxx and dynamically allocated data sets
- Can cause SWA corruption in the DUMPSRV address space
- Would only support a dump of 65,535 TRKS; any dump requiring more than this would have a dynamic allocation failure

IGD17051I ALLOCATION FAILED FOR DATA SET LSE.SVC090.AMMLP1.D020321.T104951.S00002, PRIMARY SPACE EXCEEDS 65535 TRKS
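That track limit lines up with the size problem here. Assuming 3390 geometry at 56,664 bytes per track (the theoretical maximum; usable capacity is lower once blocking is counted):

65,535 TRKS x 56,664 bytes/TRK = ~3,713,475,240 bytes = ~3.7 GB

So a single dump data set tops out well short of the roughly 9,000 MB this dump needs.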

Thanks