
CICS SYNCPOINT Issue

PostPosted: Sat Aug 27, 2011 7:42 am
by Quasar
Hello everyone,

In our application, we have written a new CICS transaction. The transaction initially takes one SAVEPOINT, then performs database updates; if any of the updates fails midway, we roll back, so all the DB updates are undone. This transaction was working absolutely fine.

We then added code to take a backup/snapshot of the rows that were about to be updated, into a History table. Say, for example, 250 rows are to be updated; we would take an image of these 250 rows and INSERT it into the HISTORY_TB table. To accomplish this, we introduced a code snippet similar to the following -

Before
Perform until end-of-cursor
    MOVE TABLE-REC TO HISTORY-TABLE-REC
    EXEC SQL
        INSERT INTO HISTORY_TB VALUES (:HISTORY-TABLE-REC)
    END-EXEC
End-Perform


Now, this works fine in general. But in one very specific case (a particular Annuity Policy contract), the number of rows to be inserted runs to 12,000, and for this case the transaction was abending. So, as a work-around, I had to introduce syncpoints at a commit interval of every 500 rows. The code snippet now looks as follows -

After
Perform until end-of-cursor
    MOVE TABLE-REC TO HISTORY-TABLE-REC
    EXEC SQL
        INSERT INTO HISTORY_TB VALUES (:HISTORY-TABLE-REC)
    END-EXEC
    Evaluate SQLCODE
        WHEN 0
             ADD +1 TO WS-ISRT-ROWS-CNT
        WHEN OTHER
             EXEC CICS SYNCPOINT ROLLBACK END-EXEC
    End-Evaluate

    IF WS-ISRT-ROWS-CNT = 500
        EXEC CICS SYNCPOINT END-EXEC
        MOVE ZEROES TO WS-ISRT-ROWS-CNT
    END-IF
End-Perform


With this change, the transaction started working smoothly; the abend was taken care of. But with this quick fix I have landed in another problem: my requirement is that if there is a DB error at any point, I need to roll back everything. Merely rolling back to the last syncpoint does not suffice. How do I achieve this? Does anyone know of a way out? Dick, could you please help, or suggest an alternative approach?

Thank you very much.

Re: CICS SYNCPOINT Issue

PostPosted: Sat Aug 27, 2011 9:27 am
by dick scherrer
Hello,

You either run the entire transaction as a single unit-of-work, or it is multiple u-o-ws (each syncpoint commits the current u-o-w and starts a new one).
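To picture why a later rollback can no longer undo everything once syncpoints are taken inside the loop (a sketch; the row numbers are illustrative, not from the posts):

    ... inserts for rows 1-500 ...
    EXEC CICS SYNCPOINT END-EXEC             rows 1-500 are committed; a new u-o-w begins
    ... inserts for rows 501-999 ...
    ... INSERT fails on row 1000 ...
    EXEC CICS SYNCPOINT ROLLBACK END-EXEC    undoes only rows 501-999; rows 1-500 stay committed

The ROLLBACK can only back out the current u-o-w; anything committed by an earlier SYNCPOINT is permanent as far as CICS and DB2 are concerned.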

IMHO, 12,000 is too many updates for a single transaction anyway.

Re: CICS SYNCPOINT Issue

PostPosted: Sat Aug 27, 2011 3:39 pm
by Quasar
Hi Dick,

But the transaction abends in this case if I make it a single u-o-w. Any other ideas? There's gotta be some way.

Thank you very much.

Re: CICS SYNCPOINT Issue

PostPosted: Sun Aug 28, 2011 9:50 am
by dick scherrer
Hello,

There's gotta be some way.

Yes, there is, but it will take a considerable change in the process.

There is no good reason to run such large transactions. So, the process needs to be redesigned to break up the huge volume and process smaller sets.

Even in smaller sets, I still believe this is not a good candidate for a CICS transaction.

Why can this not be run in batch, with multiple checkpoints, after copying the data so you can get back to the beginning if necessary?

Other than being too big, why are there transaction abends? If they are due to bad code, these should be fixed immediately.

Re: CICS SYNCPOINT Issue

PostPosted: Sun Aug 28, 2011 10:28 am
by Quasar
Dick, I did give that a thought - why not put that part of the process in batch? But one quick question: if I were to do it in batch with multiple checkpoints, in smaller sets, wouldn't the code still look the same? Wouldn't I still have this loop with checkpoints taken inside it? I am at sixes and sevens about it.

Re: CICS SYNCPOINT Issue

PostPosted: Mon Aug 29, 2011 10:02 am
by dick scherrer
Hello,

if I were to do it in batch with multiple checkpoints in smaller sets, wouldn't the code still look the same? Wouldn't I still have this loop with checkpoints taken inside it?
Yes, the code might look VERY much the same. . .

Also, yes, there would be checkpoints inside the loop.

You would need to find a new way to "get back to the beginning" if it were necessary. This might include making the table unavailable for the duration of the batch process. If there were multiple "sets" of input data, you would need to consider how to manage this. And if the process successfully completes a "set", why would there be a reason to go back to the content before that set was processed?
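As a sketch of how much the batch version might resemble the CICS loop - EXEC SQL COMMIT replaces the CICS syncpoint, and a restart key is saved in the same u-o-w as each commit so the job can reposition after a failure. The names WS-COMMIT-CNT, CHKPT-KEY, CHKPT_TB and the error paragraph are illustrative only, not from the posts above:

Batch sketch
Perform until end-of-cursor
    MOVE TABLE-REC TO HISTORY-TABLE-REC
    EXEC SQL
        INSERT INTO HISTORY_TB VALUES (:HISTORY-TABLE-REC)
    END-EXEC
    IF SQLCODE NOT = 0
        EXEC SQL ROLLBACK END-EXEC
        PERFORM 9999-ERROR-EXIT
    END-IF
    ADD +1 TO WS-COMMIT-CNT
    IF WS-COMMIT-CNT = 500
        MOVE TABLE-KEY TO CHKPT-KEY
        EXEC SQL
            UPDATE CHKPT_TB SET LAST_KEY = :CHKPT-KEY
        END-EXEC
        EXEC SQL COMMIT END-EXEC
        MOVE ZEROES TO WS-COMMIT-CNT
    END-IF
End-Perform

On restart, the job would read LAST_KEY from CHKPT_TB and reopen the cursor positioned past it. "Getting back to the beginning" after a failure could then be a cleanup step that deletes the history rows this run had already committed before rerunning - something a long-running CICS transaction cannot easily do.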

You haven't provided any info about what kind of business process this supports. It is quite likely that someone here has dealt with something very similar, but without knowing exactly what you are trying to accomplish, it is rather difficult to suggest alternatives. . .