Hi
I need to populate a huge volume of data in my test DB from prod. Currently the randomizer parameters suit only a small volume, say 1000 records. If I now need to populate 500000 records, can you please suggest what changes need to be made in DFSHDC40 for an HDAM DB?
Thanks.
Loading huge data
- DFSHDC40
- Posts: 41
- Joined: Sat Oct 16, 2010 4:16 pm
- Skillset: MVS IMS DB2 JCL ISPF REXX ... and all the glue that keeps them together
- Referer: internet
- Location: IMS.SDFSRESL
Re: Loading huge data
No change is required in DC40
If the code hasn't been changed, why is the LKED date today?
Re: Loading huge data
Hi
I have read somewhere that it is required to change the randomizer parameters whenever the data volume changes, since the randomizer decides where the data sits in the database and where it is read from, something like root anchor points etc. Please clarify. Also, can you tell me the significance of the different parameters that appear in the DFSHDC40 randomizer?
Thanks
Panda15
- Global moderator
- Posts: 3025
- Joined: Sun Jul 04, 2010 12:13 am
- Skillset: JCL, PL/1, Rexx, Utilities and to a lesser extent (i.e. I have programmed using them) COBOL,DB2,IMS
- Referer: Google
- Location: Pushing up the daisies (almost)
Re: Loading huge data
You read 'somewhere' - be more specific - look it up again or look at the manual.
Different parameters - read the manual yourself - why should we do it for you? If you are unclear about something after reading the manual, and trying some experiments (if you can), then post your questions.
The problem I have is that people can explain things quickly but I can only comprehend slowly.
Regards
Nic
- DFSHDC40
- Posts: 41
- Joined: Sat Oct 16, 2010 4:16 pm
- Skillset: MVS IMS DB2 JCL ISPF REXX ... and all the glue that keeps them together
- Referer: internet
- Location: IMS.SDFSRESL
Re: Loading huge data
Just to be clear, the OP asked "what changes need to be made in DFSHDC40 for an HDAM DB" .... and the answer is: nothing.
The parms you pass are a different matter - but as NicC says, this is documented.
This is very specific to the data profile and the access you need ... it's not a case of 10000 records => parms 1,2,3 and 9999999999 records => parms 11 222 3333.
If the code hasn't been changed, why is the LKED date today?
Re: Loading huge data
Thanks. But what I was trying to say is: if I put the values below:
RMNAME=(dfshdc40,10,100) for 1000 records, will the performance be good? I arrived at these figures from the following calculation:
10 RAPs per CI and 100 CIs in total, so 10 x 100 = 1000 RAPs. Will this be good from a performance point of view, since I have directed DFSHDC40 to create 1000 RAPs for 1000 records?
Please let me know.
Regards
Panda15
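For reference, values like those in the question above would appear in the RMNAME operand of the DBD macro. The sketch below shows where they go; the database, dataset, and segment names (and the fourth, optional bytes parameter) are hypothetical illustrations, not a recommendation:

```
*  Hypothetical HDAM DBD sketch - all names and figures are made up.
*  RMNAME=(mod,anch,rbn,bytes):
*    mod   = randomizing module (DFSHDC40 here)
*    anch  = root anchor points (RAPs) per CI/block (10 here)
*    rbn   = number of blocks/CIs in the root addressable area (100 here)
*    bytes = max bytes of a DB record inserted into the RAA per call
        DBD     NAME=TESTDB,ACCESS=(HDAM,OSAM),                        X
               RMNAME=(DFSHDC40,10,100,500)
        DATASET DD1=TESTDD,DEVICE=3390,SIZE=4096
        SEGM    NAME=ROOTSEG,PARENT=0,BYTES=200
        FIELD   NAME=(ROOTKEY,SEQ,U),BYTES=10,START=1
        DBDGEN
        FINISH
        END
```

Changing anch or rbn changes only the parms passed to DFSHDC40 at randomizing time; the module itself is untouched, which is the point being made in the replies below.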
- Global moderator
- Posts: 3025
- Joined: Sun Jul 04, 2010 12:13 am
- Skillset: JCL, PL/1, Rexx, Utilities and to a lesser extent (i.e. I have programmed using them) COBOL,DB2,IMS
- Referer: Google
- Location: Pushing up the daisies (almost)
Re: Loading huge data
What are 'RAPS'?
The problem I have is that people can explain things quickly but I can only comprehend slowly.
Regards
Nic
- Global moderator
- Posts: 3006
- Joined: Fri Apr 18, 2008 11:25 pm
- Skillset: tso,rexx,assembler,pl/i,storage,mvs,os/390,z/os,
- Referer: www.ibmmainframes.com
Re: Loading huge data
RAP ==> root anchor point ( the first hop of the randomizer routine )
cheers
enrico
When I tell somebody to RTFM or STFW I usually have the page open in another tab/window of my browser,
so that I am sure that the information requested can be reached with a very small effort
- DFSHDC40
- Posts: 41
- Joined: Sat Oct 16, 2010 4:16 pm
- Skillset: MVS IMS DB2 JCL ISPF REXX ... and all the glue that keeps them together
- Referer: internet
- Location: IMS.SDFSRESL
Re: Loading huge data
"so will this be good from performance point of view"
depends on whether the records randomise uniquely to their own block/RAP
depends on the key structure
depends on the size of the RAA
depends on the CI size
depends on the size of the DB record
depends on the insert/update profile
depends on free space
depends on how you want to access the data
.... and I thought we had 500000 records
It's why you pay a DBA
If the code hasn't been changed, why is the LKED date today?
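To illustrate why those factors matter, here is a back-of-envelope Python sketch of RAA sizing for 500000 records. Every figure in it (average record size, CI size, packing factor, RAPs per block) is an assumption for illustration only, not a sizing recommendation - the real values come from the data profile and access pattern listed above:

```python
# Back-of-envelope root addressable area (RAA) sizing sketch.
# All inputs below are illustrative assumptions, not site advice.
records = 500_000
avg_record_bytes = 200   # assumed average database record size
ci_size = 4096           # assumed CI/block size
packing = 0.75           # leave ~25% for free space and synonym chains

# blocks needed ~= total data bytes / usable bytes per CI
raa_blocks = int(records * avg_record_bytes / (ci_size * packing)) + 1

raps_per_block = 2       # modest RAP count; synonyms chain off each RAP
total_raps = raa_blocks * raps_per_block

print(raa_blocks, total_raps)
```

Change any one assumption (record size, CI size, free space) and the answer moves substantially, which is why there is no fixed mapping from record count to parms.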
Re: Loading huge data
Hi
So does it mean that modifying the DFSHDC40 parameters to accommodate 500000 records won't be enough? I assumed that if I just modify these, IMS would take care of generating unique RAPs for each root segment, given that DFSHDC40 is the best and most preferred randomizer for most variations/types/natures of data. Please suggest.
Regards
Panda15