
Re: SPLIT1R error

PostPosted: Tue Jul 02, 2013 12:34 pm
by BillyBoyo
OK, 400,000 records (if I got that right) is not a lot, but it is still best not to read them more often than needed.

First, look for somewhere else in the process where the file is already read. Change that step so it also produces a count, in addition to what it is already doing. Realistically the Analyst should do the "looking", but since they've likely missed this to start with, you never know :-)

If nothing else reads the file, or it is not possible to get something changed now, then make the first step an ICETOOL step using the COUNT operator. The output can easily be sent to a file; just tell COUNT to do so.
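To make that concrete, here is a minimal sketch of such a step, assuming DFSORT's ICETOOL and made-up dataset names (check whether your SYNCTOOL level supports WRITE on the COUNT operator):

  //COUNTEM  EXEC PGM=ICETOOL
  //TOOLMSG  DD SYSOUT=*
  //DFSMSG   DD SYSOUT=*
  //IN       DD DISP=SHR,DSN=your.input.file
  //CT       DD DSN=&&COUNT,DISP=(,PASS),
  //            UNIT=SYSDA,SPACE=(TRK,(1,1))
  //TOOLIN   DD *
  COUNT FROM(IN) WRITE(CT) DIGITS(8)
  /*

DIGITS(8) keeps the count to a known eight digits, which makes it easy for the next step to pick up.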

Have a step which reads whichever count you have ended up with, and generates the control card(s) for the final step. Make sure these are 80-byte, fixed-length records. If you finish every BUILD, or the first OVERLAY, with 80:X then that is what you'll get (assuming you've not got too much data on the card: more than 71 characters would be a problem).
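A sketch of that generation step, assuming DFSORT, a fixed-length count file from the previous step holding an eight-digit count (leading zeros included) starting in column 1, and that dividing by five and rounding up is the calculation you want:

  //GENCARD  EXEC PGM=SORT
  //SYSOUT   DD SYSOUT=*
  //SORTIN   DD DSN=&&COUNT,DISP=(OLD,PASS)
  //SORTOUT  DD DSN=&&SPLITCRD,DISP=(,PASS),
  //            UNIT=SYSDA,SPACE=(TRK,(1,1)),
  //            RECFM=FB,LRECL=80
  //SYSIN    DD *
  * Divide the count by five, add one to cover any remainder, and build
  * an 80-byte card that will continue the OUTFIL in the base deck
    OPTION COPY
    OUTFIL BUILD=(C'  SPLIT1R=',
                  (1,8,ZD,DIV,+5,ADD,+1),M11,LENGTH=8,80:X)
  /*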

Have the final step process the file, splitting it into five outputs, using the calculation in the generated cards, which is based on the count of the records.
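Those generated cards, combined with the base, end up giving the final step something like this (OUT1 to OUT5 are placeholder DD names, and 80001 is just 400,000 divided by five, plus one from the rounding, for illustration):

  * First 80001 records to OUT1, next 80001 to OUT2, and so on;
  * whatever is left over goes to the last output
    OPTION COPY
    OUTFIL FNAMES=(OUT1,OUT2,OUT3,OUT4,OUT5),SPLIT1R=80001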

If you had made all your data fixed-length (which the previous messages, and the complexity of mixing the different tasks in one step, were pushing you towards) then you'd have been wasting 35% of the DASD allocated to the files, both input and output.

Re: SPLIT1R error

PostPosted: Tue Jul 02, 2013 12:40 pm
by luckyboyroh
Thanks. But I need the extract in VB format only. Will try your suggestions. Thanks again for your help so far. I was also wondering: can't the current JCL be changed to handle VB?

Re: SPLIT1R error

PostPosted: Tue Jul 02, 2013 6:13 pm
by NicC
JCL has to reflect the dataset attributes. Do you mean the sort control cards, which are NOT JCL?

Re: SPLIT1R error

PostPosted: Tue Jul 02, 2013 6:17 pm
by luckyboyroh
Yes, if they can work for my case.

Re: SPLIT1R error

PostPosted: Tue Jul 02, 2013 11:04 pm
by BillyBoyo
Because your TRAILER1 was outputting data from your VB file, the output from that step (since you did nothing else to it) was also VB.

Because that output was VB, and you were reading it to generate a SORT Control Card, SORT was confused: it was, naturally, expecting its Control Cards to be fixed-length.

If you like, when you changed your data to FB, that solved the problem "by accident".

So, yes, absolutely no problem having your data as VB. There is no real connection between your data and your Control Card; you just created one by accident. Much better to have VB and save 35% of your DASD usage.

Re: SPLIT1R error

PostPosted: Wed Jul 03, 2013 1:20 am
by BillyBoyo
For the moment, let's pretend that you don't have anything else which reads your data and which could write a count of records to a file for you.

So, in the first step you use ICETOOL/SYNCTOOL's COUNT operator, with the output going to a file.

In the second step, you read that file with the count on it, and generate your Control Cards as you are already doing.

In the third step you concatenate your generated cards to the base cards for that step on SYSIN, with your file as SORTIN, and JCL for your five outputs.

  <BASE>
  OPTION COPY

  <GENERATED CARDS>
  SPLIT1R=...



You run step 1. It should produce a count. You run step 2. It should produce the SPLIT1R parameter. You run step 3, which reads your data file, uses the SORT Control Cards you have as a base, concatenates to that the "generated" dynamic value, and you get your five outputs.
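As JCL, that third step might be shaped like this (just a sketch: the dataset names, the member holding the base cards, and the output allocations are all made up, and the base deck's OUTFIL has to end with a comma so the generated SPLIT1R card continues it):

  //SPLIT    EXEC PGM=SORT
  //SYSOUT   DD SYSOUT=*
  //SORTIN   DD DISP=SHR,DSN=your.input.file
  //OUT1     DD DSN=your.output.file1,DISP=(,CATLG),
  //            UNIT=SYSDA,SPACE=(CYL,(50,10),RLSE)
  //*           ... OUT2 through OUT5 allocated the same way ...
  //SYSIN    DD DISP=SHR,DSN=your.control.cards(BASECOPY)
  //         DD DSN=&&SPLITCRD,DISP=(OLD,PASS)

The member BASECOPY holds the base cards (OPTION COPY and an OUTFIL FNAMES=(OUT1,...,OUT5) ending with a comma), and the passed generated card completes that OUTFIL with the SPLIT1R value.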

If you break it into three simple steps, you should find the whole thing straightforward.

In the end, you should replace step 1 to save your client costs, but someone will have to "prioritise" that; just make sure you raise the paperwork for it so you don't get blamed if it becomes a "political" issue.

Re: SPLIT1R error

PostPosted: Wed Jul 03, 2013 3:25 am
by dick scherrer
Hello,

If the process that creates the file were to put a record count in a separate new file, that could be used to get the count without passing all of the data.

Not my favorite, but would eliminate a complete pass of the data. For this particular volume, not a real big deal, but what happens when there are 400 million records . . .