Loading huge data



IBM's hierarchical database management system with a Database Manager (IMS DB) and a Transaction Manager (IMS DC)

Loading huge data

Postby Panda15 » Wed Mar 16, 2011 2:04 pm

Hi

I need to populate a large volume of data into my test DB from production. Currently the randomizer parameters suit only a small volume, say 1,000 records. If I now need to populate 500,000 records, can you please suggest what changes need to be made to DFSHDC40 for an HDAM DB?

Thanks.
Panda15
 

Re: Loading huge data

Postby DFSHDC40 » Thu Mar 17, 2011 5:42 am

No change is required in DC40
If the code hasn't been changed, why is the LKED date today?

Re: Loading huge data

Postby Panda15 » Thu Mar 17, 2011 2:52 pm

Hi

I have read somewhere that the randomizer parameters need to be changed whenever the data volume changes, since they decide where the data sits in the database and where it is read from (things like root anchor points, etc.). Please clarify. Also, can you please tell me the significance of the different parameters that appear in the DFSHDC40 randomizer?

Thanks
Panda15

Re: Loading huge data

Postby NicC » Thu Mar 17, 2011 3:14 pm

You read 'somewhere' - be more specific - look it up again or look at the manual.
Different parameters - read the manual yourself - why should we do it for you? If you are unclear about something after reading the manual, and trying some experiments (if you can), then post your questions.
The problem I have is that people can explain things quickly but I can only comprehend slowly.
Regards
Nic

Re: Loading huge data

Postby DFSHDC40 » Sat Mar 19, 2011 5:28 am

Just to be clear, the OP asked "what changes need to be made in dfshdc40 for an hdam db" .... and the answer is: nothing
The parms you pass to it are a different matter - but as NicC says, those are documented
This is very specific to the data profile & the access you need ... it's not a case of 10000 recs => parms 1,2,3 and 9999999999 recs => parms 11 222 3333
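To illustrate why (the numbers and formula below are hypothetical, for illustration only, not the official IMS sizing method), even a first-cut estimate of the root addressable area depends on record size, CI size and free space, not just the record count:

```python
# Hypothetical sizing sketch for an HDAM root addressable area (RAA).
# These formulas are illustrative, not the official IMS sizing method.

def raa_blocks(num_roots, avg_record_bytes, ci_bytes, freespace_pct):
    """Estimate how many CIs the RAA needs so the roots fit
    with the requested free space left over."""
    usable = ci_bytes * (1 - freespace_pct / 100.0)
    records_per_ci = max(1, int(usable // avg_record_bytes))
    # Round up: a partially filled CI still occupies a full block.
    return -(-num_roots // records_per_ci)

# 500,000 roots of ~300 bytes in 4 KiB CIs with 20% free space:
print(raa_blocks(500_000, 300, 4096, 20))  # → 50000
```

Change any one of the inputs and the answer moves, which is the point: there is no parameter setting that follows mechanically from "500000 records".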

Re: Loading huge data

Postby Panda15 » Fri Mar 25, 2011 2:09 pm

Thanks. But what I was trying to say is: if I use the values below:

RMNAME=(DFSHDC40,10,100) for 1,000 records, will the performance be good? I arrived at these figures from the following calculation:

10 RAPs per CI and 100 CIs in total. So will this be good from a performance point of view, since I directed DFSHDC40 to create 1,000 RAPs for 1,000 records?
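The arithmetic above can be sketched as follows (a rough illustration; the 36.8% figure assumes a perfectly uniform hash, which a real key distribution will not match exactly):

```python
# Checking the arithmetic in the post: RMNAME=(DFSHDC40,10,100)
# means 10 RAPs per block and a 100-block root addressable area.
raps_per_block = 10
blocks = 100
total_raps = raps_per_block * blocks
print(total_raps)  # → 1000

# One RAP per record on average does NOT mean one record per RAP:
# if 1000 keys hash uniformly over 1000 RAPs, the expected fraction
# of RAPs left empty is (1 - 1/n)**n, about 36.8%, so a similar
# share of roots become synonyms chained off already-occupied RAPs.
n = total_raps
empty_fraction = (1 - 1 / n) ** n
print(round(empty_fraction, 3))  # → 0.368
```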


Please let me know.

Regards
Panda15

Re: Loading huge data

Postby NicC » Fri Mar 25, 2011 3:04 pm

What are 'RAPS'?
Regards
Nic

Re: Loading huge data

Postby enrico-sorichetti » Fri Mar 25, 2011 3:24 pm

RAP ==> root anchor point ( the first hop of the randomizer routine )
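Conceptually (a toy sketch using a stand-in CRC32 hash; the real DFSHDC40 algorithm is different), a randomizer turns a root key into a relative block number and a RAP within that block:

```python
# Toy sketch of what a randomizing module conceptually does
# (stand-in hash; NOT the actual DFSHDC40 algorithm).
import zlib

def randomize(root_key: bytes, blocks: int, raps_per_block: int):
    """Map a root key to (relative block, RAP within block)."""
    h = zlib.crc32(root_key)
    block = h % blocks
    rap = (h // blocks) % raps_per_block
    return block, rap

print(randomize(b"CUST000123", blocks=100, raps_per_block=10))
```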
cheers
enrico
When I tell somebody to RTFM or STFW I usually have the page open in another tab/window of my browser,
so that I am sure that the information requested can be reached with a very small effort

Re: Loading huge data

Postby DFSHDC40 » Sun Mar 27, 2011 9:54 pm

"so will this be good from performance point of view"
depends if the records randomise uniquely to their own blk/RAP
depends on the key structure
depends on the size of the RAA
depends on the CI size
depends on the size of the db record
depends on the insert/update profile
depends on freespace
depends on how you want to access the data
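The first point, whether the records randomise uniquely, can be estimated before the load by running sample keys through a stand-in hash (illustrative only; the `CUST`-style keys and the CRC32 hash are assumptions, not the real algorithm or your key format):

```python
import zlib
from collections import Counter

def synonym_report(keys, blocks, raps_per_block):
    """Count keys that share a (block, RAP) slot under a stand-in
    hash (illustrative only; not the DFSHDC40 algorithm)."""
    slots = Counter()
    for k in keys:
        h = zlib.crc32(k.encode())
        slots[(h % blocks, (h // blocks) % raps_per_block)] += 1
    synonyms = sum(c - 1 for c in slots.values() if c > 1)
    return synonyms, len(slots)

# 1000 hypothetical sequential keys over 100 blocks x 10 RAPs:
keys = [f"CUST{i:06d}" for i in range(1000)]
syns, used = synonym_report(keys, blocks=100, raps_per_block=10)
print(syns, used)  # synonyms vs distinct slots actually used
```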

.... and I thought we had 500000 records

It's why you pay a DBA

Re: Loading huge data

Postby Panda15 » Mon Mar 28, 2011 9:28 am

Hi

So does that mean modifying the DFSHDC40 parameters to accommodate 500,000 records won't be enough? I assumed that if I just modify these, IMS would take care of generating unique RAPs for each root segment, given that HDC40 is the best and most preferred randomizer, suiting most variations/types/natures of data. Please suggest.

Regards
Panda15
