Query regarding READ WORK FILE RECORD?

Postby diptisaini » Tue Dec 21, 2010 12:24 pm

Why is READ WORK FILE RECORD used, and what is the difference between it and READ WORK FILE?

Re: Query regarding READ WORK FILE RECORD?

Postby fidelis » Tue Dec 21, 2010 6:42 pm

Hi diptisaini,

"READ WORK FILE" read data from a non-Adabas physical sequential work file. With the RECORD option, the data are read without checking with higher performance than with the SELECT option. With no option used,the SELECT option is default and all specified data fields are checked.

Re: Query regarding READ WORK FILE RECORD?

Postby RGZbrog » Thu Dec 23, 2010 1:59 am

I used to agree with most Adabas DBAs, Natural Administrators, and Natural Tech Leads. "Use the RECORD clause on all READ WORK statements," I would say. The reason was compelling - a massive saving of CPU. Here are the details.

When you code a READ WORK without the RECORD clause, you get the SELECT clause by default, as Fidelis said in his earlier post. The SELECT clause causes Natural to retrieve a set of fields from the WORK file. As data is moved from the record block to the individual target fields, Natural verifies the data content is compatible with the target field. For example, if the target field is packed decimal, the data must contain only appropriate hexadecimal values. If Natural finds conflicting data, your program terminates with an error code. It is up to you to find the bad data, correct or remove it, and restart or re-run the program. Note that alpha and binary fields don't need this verification.
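
As a sketch of what that verification protects against, assume a work file whose record is a single packed amount (layout invented for illustration):

DEFINE DATA LOCAL
1 #AMOUNT (P11.2)   /* 7 bytes of packed decimal on the file
END-DEFINE
*
* SELECT (the default): as the 7 bytes move into #AMOUNT,
* Natural checks that they hold only valid packed digits and
* a sign nibble. Blanks or other garbage in that position
* terminate the program with a runtime error.
READ WORK FILE 1 #AMOUNT
  DISPLAY #AMOUNT
END-WORK
END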

When you code the RECORD clause, the entire contents of the record are moved from the record buffer to the target structure; Natural does not verify data content, so the associated CPU overhead is eliminated. Of course, if you cannot trust the data, then you must verify the contents yourself, programmatically. (My rule of thumb is, trust data that I created - verify everything else.) If you find bad data, write it to a suspense file for subsequent reporting/processing, and let the program continue with the next record. Because the record is moved as a single string, the target fields must be defined contiguously, unlike SELECT which allows you to use any sequence of any fields defined in your module.
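
A sketch of that pattern, assuming hypothetical work files 1 (input) and 2 (suspense), and checking only the unpacked numeric field through a redefinition (a packed field would need its own check):

DEFINE DATA LOCAL
1 #REC
  2 #FIELD1 (N10)
  2 #FIELD2 (P11.2)
  2 #FIELD3 (A9)
1 REDEFINE #REC
  2 #FIELD1-A (A10)   /* #FIELD1 as raw characters
END-DEFINE
*
READ WORK FILE 1 RECORD #REC         /* no checking by Natural
  IF #FIELD1-A NE MASK(9999999999)   /* digits only (unsigned)
    WRITE WORK FILE 2 #REC           /* suspense file
    ESCAPE TOP                       /* go on with next record
  END-IF
  /* normal processing of the good record goes here
END-WORK
END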

Bottom line: the RECORD clause saves a lot of CPU and avoids program abends from bad data.

Now, here is why I don't use the RECORD clause. A documented but little-known feature of the RECORD clause is that it resets the data structure once a record's processing is complete:
1 #REC
  2 #FIELD1 (N10)
  2 #FIELD2 (P11.2)
  2 #FIELD3 (A9)
...
READ WORK 1 RECORD #REC
  ...
END-WORK

Think of this as having "RESET #REC" hidden under the END-WORK clause. At end of file, the last record's contents are not available, but in some situations, program logic requires that the record contents remain. Adding logic to save each record incurs too much overhead just to see the final one.
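
To see the effect, count records inside the loop and look at #REC afterwards (a sketch using the layout above plus a hypothetical #COUNT):

DEFINE DATA LOCAL
1 #REC
  2 #FIELD1 (N10)
  2 #FIELD2 (P11.2)
  2 #FIELD3 (A9)
1 #COUNT (N7)
END-DEFINE
*
READ WORK FILE 1 RECORD #REC
  ADD 1 TO #COUNT
END-WORK
*
* #COUNT shows that records were read, but #REC has been
* reset: #FIELD1 is zero and #FIELD3 is blank, so "last
* record" logic after the loop has nothing to work with.
WRITE #COUNT #FIELD1 #FIELD3
END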

The solution is to use the SELECT clause, but give Natural no (numeric) fields to verify:
1 #REC
  2 #FIELD1 (N10)
  2 #FIELD2 (P11.2)
  2 #FIELD3 (A9)
...
1 REDEFINE #REC
  2 #RECORD (A1000)
...
READ WORK 1 #RECORD
  ...
END-WORK

Here a single alpha field is read, with only a tiny bit of CPU spent determining that no fields require verification. For those of you not yet on Natural version 4 (where alpha fields are limited to 253 bytes, so one A1000 field is not possible):
2 #RECORD (A250/4)
...
READ WORK 1 #RECORD (*)
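
Putting the version 4 variant together as a self-contained sketch (the three fields shown total 26 bytes, so the redefinition here is sized to match; a real record would be larger):

DEFINE DATA LOCAL
1 #REC
  2 #FIELD1 (N10)     /* 10 bytes
  2 #FIELD2 (P11.2)   /*  7 bytes, packed
  2 #FIELD3 (A9)      /*  9 bytes
1 REDEFINE #REC
  2 #RECORD (A26)     /* the whole record as one alpha field
END-DEFINE
*
READ WORK FILE 1 #RECORD   /* SELECT by default, but a single
  IGNORE                   /* alpha field needs no verification
END-WORK
*
* No hidden RESET here: the last record is still in #REC.
WRITE #FIELD1 #FIELD3
END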

