Hi!
>Please show an example of the records in each input file (relevant fields only) and what you expect for output.
The record structure is fairly simple and the same for all input and output files.
Key: 10 Byte Char
Data: 367 Byte Char
In detail it consists of calendar data: one data character for each day of a year, e.g.
US 2008YNYNYNYN...
US 2009YNNNNNYN...
US/NYC2008NYNNYNNN...
US/NYC2009YYNNNNNY...
>Explain the "rules" for getting from input to output.
file1 is the base file with all valid calendar-records. Each key is unique.
file2 is an update-file for the calendar-data.
The output should hold the following records:
* all records which are only in file1
* all records which are only in file2
* records of file2 whose key is a duplicate of a key in file1
This means file2 can bring in new keys and should update existing key records.
Which comes down to the question: how can I handle duplicates and tell the utility (if there is one) that whenever a duplicate occurs I always want the record from file2?
Or is the other way round better: take all records from file2, add file1, and skip the duplicates?
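To make that second variant concrete, here is the kind of DFSORT step I had in mind (untested sketch; the dataset names are placeholders I made up). My understanding is that with OPTION EQUALS, SUM FIELDS=NONE keeps the first record of each set of duplicate keys in input order, so concatenating file2 ahead of file1 should make the file2 record win whenever both files hold the same key:

```jcl
//MERGE   EXEC PGM=SORT
//SYSOUT  DD SYSOUT=*
//SORTIN  DD DSN=MY.CAL.FILE2,DISP=SHR       update file first
//        DD DSN=MY.CAL.FILE1,DISP=SHR       base file second
//SORTOUT DD DSN=MY.CAL.OUTPUT,DISP=(NEW,CATLG),
//           SPACE=(CYL,(10,5)),RECFM=FB,LRECL=377
//SYSIN   DD *
  OPTION EQUALS
  SORT FIELDS=(1,10,CH,A)
  SUM FIELDS=NONE
/*
```

Is that the right way to do it, or is there a cleaner option?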
>Give the starting position, length and format of each
>relevant field.
start: position 1
length: 10 byte key
format: char
>Give the RECFM and LRECL of the input files.
Fixed blocked (RECFM=FB), LRECL 377 bytes.
Both files are sorted ascending by key.
>If file1 can have duplicates within it, show that in your example.
>If file2 can have duplicates within it, show that in your example.
No, neither file contains duplicates within itself.
>Also, indicate which Sort product you're using (DFSORT, Syncsort, CA-Sort).
I intended to use DFSORT, but really I am looking for any utility which can handle this. If there is a "standard" solution for this I don't need to write a new program and reinvent the wheel.
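For example, if ICETOOL counts as such a standard solution, would something like this do it? Again an untested sketch with made-up dataset names; I am assuming that SELECT with FIRST keeps the first record of each key in input order, so file2 has to be concatenated ahead of file1:

```jcl
//TOOL    EXEC PGM=ICETOOL
//TOOLMSG DD SYSOUT=*
//DFSMSG  DD SYSOUT=*
//IN      DD DSN=MY.CAL.FILE2,DISP=SHR       update file first
//        DD DSN=MY.CAL.FILE1,DISP=SHR       base file second
//OUT     DD DSN=MY.CAL.OUTPUT,DISP=(NEW,CATLG),
//           SPACE=(CYL,(10,5)),RECFM=FB,LRECL=377
//TOOLIN  DD *
  SELECT FROM(IN) TO(OUT) ON(1,10,CH) FIRST
/*
```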
I hope this was understandable.
Thanks in advance!
Geri