I don't think you understand what "deadlock" means -- a deadlock occurs when two tasks are EXECUTING and each holds (locks) resources required by the other to complete. From the Troubleshooting and Support
manual in the CICS Transaction Server for z/OS 5.3.0 knowledge center:
Maximum task condition waits
Tasks can fail to run if either of the following limits is reached:
MXT (maximum tasks in CICS® system)
MAXACTIVE (maximum tasks in transaction class)
If a task is waiting for entry into the MXT set of transactions, the resource type is MXT, and the resource name is XM_HELD. If a task is waiting for entry into the MAXACTIVE set of transactions for a TCLASS, the resource type is TCLASS, and the resource name is the name of the TCLASS that the task is waiting for.
If a task is shown to be waiting on resource type MXT, it is being held by the transaction manager because the CICS system is at the MXT limit. The task has not yet been attached to the dispatcher.
The limit that has been reached, MXT, is given explicitly as the resource name for the wait. If this type of wait occurs too often, consider changing the MXT limit for your CICS system.
In other words, if you hit MAXACTIVE (the TCLASS limit) or MXT (the region-wide task limit), the tasks waiting on MXT or the TCLASS are NOT executing -- and hence CANNOT be involved in a deadlock.
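To make the distinction concrete, here is a language-agnostic sketch (plain Python, not CICS code; the task and resource names are made up). A deadlock is a cycle in the wait-for graph, and only a task that actually HOLDS a resource can appear in such a cycle -- a task queued for an MXT slot holds nothing, so it cannot close one:

```python
# Hypothetical wait-for graph: T1 and T2 are executing and hold resources;
# T3 is merely queued at the MXT limit and holds nothing.
holds = {"T1": {"R1"}, "T2": {"R2"}, "T3": set()}
waits = {"T1": "R2", "T2": "R1", "T3": None}  # T3 waits only for a task slot

def in_deadlock(task):
    """Follow holder-of-awaited-resource links; a cycle back to task = deadlock."""
    seen = set()
    cur = task
    while True:
        res = waits.get(cur)
        if res is None:
            return False  # waiting on nothing (or only a slot): no cycle possible
        holder = next((t for t, rs in holds.items() if res in rs), None)
        if holder is None:
            return False  # awaited resource is free; the task can proceed
        if holder == task:
            return True   # cycle closed back to the starting task
        if holder in seen:
            return False
        seen.add(holder)
        cur = holder

print(in_deadlock("T1"))  # True: T1 waits on R2 (held by T2), T2 waits on R1 (held by T1)
print(in_deadlock("T3"))  # False: a queued task holds nothing, so no cycle through it
```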
I have designed a task ABCD which links a program E in another CICS region. The program E reads an online file in the same region as the originating task ABCD. This induces another task ISMI.
Let me make sure of what you are saying. Task ABCD is executed in CICS region C1 (from a terminal, I assume -- and C1 is not using MRO). The program (call it ABCDPROG) invoked by transaction ABCD in turn links to a program E in another CICS region C2. And program E in CICS region C2 reads one (or more) record(s) from a file defined to region C1. Program E then invokes transaction ISMI -- is it attempting to execute ISMI in region C1 or C2 or somewhere else?

If this accurately states what you have happening, then (a) the designer of this system needs to be either committed to a mental institution until sanity returns, or taken out back and shot before the insanity can spread any further, and (b) a complete redesign of the process should be started immediately.

While CICS allows resource sharing between regions, such sharing should only be done carefully after fully considering the potential performance impact; moving data between CICS regions can seriously impact the entire system (not just the affected CICS regions).
The CICS manual topic Getting started with intercommunication has this to say:
Performance problems can occur when function shipping requests that are waiting for free sessions are queued in the issuing region.
Requests that are to be function shipped to a resource-owning region might be queued if all bound contention winner sessions are busy, so that no sessions are immediately available. If the resource-owning region is unresponsive, the queue can become so long that the performance of the issuing region is severely impaired. Further, if the issuing region is an application-owning region, its impaired performance can spread back to the terminal-owning region.
Note: Contention winner is the terminology used for APPC connections. On MRO and LUTYPE6.1 connections, the SEND sessions (defined in the session definitions) are used for ALLOCATE requests; when all SEND sessions are in use, queuing starts.
On IPIC connections, queuing starts when there are no available send sessions. The number of send sessions is specified using the SENDCOUNT attribute on the IPCONN resource definition on the local system. The number of receive sessions is specified using the RECEIVECOUNT attribute on the IPCONN resource definition on the remote system. The number of send sessions that are used is the lower of the following two values:
SENDCOUNT on the local definition
RECEIVECOUNT on the remote definition
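As a sketch (all region, group, host, and port names here are hypothetical, not from your setup), the effective send-session count comes from the two IPCONN definitions taken together:

```
* In the issuing region C1 (hypothetical names throughout):
DEFINE IPCONN(TOC2) GROUP(DEMOGRP)
       APPLID(CICSC2) HOST(c2.example.com) PORT(12345)
       SENDCOUNT(20) RECEIVECOUNT(20)

* In the resource-owning region C2:
DEFINE IPCONN(TOC1) GROUP(DEMOGRP)
       APPLID(CICSC1) HOST(c1.example.com) PORT(12346)
       SENDCOUNT(20) RECEIVECOUNT(10)

* Send sessions actually usable from C1 to C2:
*   MIN(SENDCOUNT on C1's definition, RECEIVECOUNT on C2's definition)
*   = MIN(20, 10) = 10
```

So raising SENDCOUNT on the issuing side alone buys you nothing if the remote RECEIVECOUNT is the smaller number.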
The symptoms of this impaired performance are as follows:
The system reaches its maximum transactions (MXT) limit, because many tasks have requests queued.
The system becomes short on storage.
In either case, CICS cannot start any new work.
CICS provides two methods to prevent these problems:
The QUEUELIMIT and MAXQTIME options on both the IPCONN and CONNECTION definitions. You can use these options to limit the number of requests that can be queued against particular remote regions, and the time that requests must wait for sessions on unresponsive connections.
The global user exits XZIQUE, XISCONA, and XISQUE. The XZIQUE or XISCONA exit program is invoked if no contention winner session is immediately available; XISQUE plays the corresponding role for IPIC connections. The exit program can instruct CICS to queue the request, or to return SYSIDERR to the application program.
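If you go the QUEUELIMIT route, a minimal sketch might look like the following (hypothetical names and values; per the documentation quoted above, the same two attributes also exist on CONNECTION definitions for MRO and APPC links):

```
* Hypothetical IPCONN with queue limiting (DFHCSDUP / CEDA style):
DEFINE IPCONN(TOC2) GROUP(DEMOGRP)
       APPLID(CICSC2) HOST(c2.example.com) PORT(12345)
       SENDCOUNT(20) RECEIVECOUNT(20)
* Queue at most 50 allocate requests against this connection,
* and purge the queue rather than let requests wait more than
* 30 seconds on an unresponsive partner:
       QUEUELIMIT(50)
       MAXQTIME(30)
```

The point is to bound the damage: a stalled C2 then fails requests back to the application quickly instead of piling up queued tasks in C1 until it hits MXT or goes short on storage.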
If you decide, for whatever reason, that the design cannot be changed, then you may want to investigate the use of QUEUELIMIT, MAXQTIME, and the exits per the quoted documentation.