
Loading huge data

PostPosted: Wed Mar 16, 2011 2:04 pm
by Panda15
Hi

I need to populate a huge volume of data in my test DB from prod. Currently the randomizer parameters suit only a small volume, say 1,000 records. If I now need to populate 500,000 records, can you please suggest what changes need to be made in DFSHDC40 for an HDAM DB?

Thanks.

Re: Loading huge data

PostPosted: Thu Mar 17, 2011 5:42 am
by DFSHDC40
No change is required in DC40

Re: Loading huge data

PostPosted: Thu Mar 17, 2011 2:52 pm
by Panda15
Hi

I have read somewhere that it is necessary to change the randomizer parameters whenever the data volume changes, since they alone decide where the data sits and where it is read from in the database, through things like root anchor points. Please clarify. Also, can you please tell me the significance of the different parameters that are passed to the DFSHDC40 randomizer?

Thanks
Panda15

Re: Loading huge data

PostPosted: Thu Mar 17, 2011 3:14 pm
by NicC
You read 'somewhere' - be more specific - look it up again or look at the manual.
Different parameters - read the manual yourself; why should we do it for you? If you are still unclear about something after reading the manual and trying some experiments (if you can), then post your questions.

Re: Loading huge data

PostPosted: Sat Mar 19, 2011 5:28 am
by DFSHDC40
Just to be clear, the OP asked "what changes need to be made in DFSHDC40 for an HDAM DB" .... and the answer is nothing.
The parms you pass to it are a different matter - but, as NicC says, those are documented.
They are very specific to the data profile and the access you need ... it's not a case of 10,000 records => parms 1,2,3 and 9,999,999,999 records => parms 11,222,3333.
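
For anyone following along: the randomizer module itself is never edited; what gets tuned is the RMNAME operand on the DBD statement. A minimal sketch of where those parameters live (the DBD name and the values here are made up for illustration, not a sizing recommendation):

         DBD   NAME=TESTDB,ACCESS=(HDAM,VSAM),                        X
               RMNAME=(DFSHDC40,5,200,4000)
*  RMNAME=(mod,anch,rbn,bytes)
*    mod   = randomizing module to call (here DFSHDC40)
*    anch  = root anchor points (RAPs) per CI/block of the RAA
*    rbn   = number of CIs/blocks in the root addressable area (RAA)
*    bytes = max bytes of one DB record inserted into the RAA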

Re: Loading huge data

PostPosted: Fri Mar 25, 2011 2:09 pm
by Panda15
Thanks. But what I was trying to say is this: if I use the values below,

RMNAME=(DFSHDC40,10,100) for 1,000 records, will the performance be good? I arrived at these figures from the following calculation:

10 RAPs per CI and 100 CIs in total, so I have directed DFSHDC40 to create 1,000 RAPs for 1,000 records. Will this be good from a performance point of view?
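
Spelling the arithmetic out (assuming the keys randomize evenly, which is what DFSHDC40 is designed for; the 500,000 figures are just the same ratio scaled up):

*  RMNAME=(DFSHDC40,10,100)   -> 10 RAPs/CI x    100 CIs =   1,000 RAPs
*  RMNAME=(DFSHDC40,10,50000) -> 10 RAPs/CI x 50,000 CIs = 500,000 RAPs
*  one RAP per expected root is only a rule of thumb; synonym
*  chains, CI size and record length still decide real performance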


Please let me know.

Regards
Panda15

Re: Loading huge data

PostPosted: Fri Mar 25, 2011 3:04 pm
by NicC
What are 'RAPS'?

Re: Loading huge data

PostPosted: Fri Mar 25, 2011 3:24 pm
by enrico-sorichetti
RAP ==> root anchor point (the first hop of the randomizer routine)

Re: Loading huge data

PostPosted: Sun Mar 27, 2011 9:54 pm
by DFSHDC40
"so will this be good from performance point of view"
depends if the records randomise uniquely to their own blk/rap
depends on the keystructure
depends on the size of the RAA
depends on the CI size
depends on the size of the db-record
depends on the insert/update profile
depends of freespace
depends on how you want to access the data

.... and I thought we had 500000 records

It's why you pay a DBA
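
To make those factors concrete, here is a minimal HDAM DBD that touches most of them; every name and number below is an assumption for illustration, and each one would have to come out of your own data profile:

         DBD   NAME=TESTDB,ACCESS=(HDAM,VSAM),                        X
               RMNAME=(DFSHDC40,10,50000,2000)
         DATASET DD1=TESTDD,DEVICE=3390,SIZE=4096,                    X
               FRSPC=(0,20)
*  SIZE = CI size; FRSPC=(fbff,fspf) reserves free space:
*  every fbff-th CI is left empty, fspf% of each CI is kept free
         SEGM  NAME=ROOT,PARENT=0,BYTES=200
         FIELD NAME=(ROOTKEY,SEQ,U),BYTES=20,START=1
         DBDGEN
         FINISH
         END

Getting the RAP count roughly in line with the root count (10 x 50,000 here) only avoids one class of problem; the key structure, record size and access pattern questions above still stand.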

Re: Loading huge data

PostPosted: Mon Mar 28, 2011 9:28 am
by Panda15
Hi

So does that mean that modifying the DFSHDC40 parameters to accommodate 500,000 records won't be enough? I assumed that if I just modified these, IMS would take care of generating unique RAPs for each root segment, given that HDC40 is the best and most preferred randomizer for most variations/types/natures of data. Please suggest.

Regards
Panda15