Best practices




Best practices

Postby jeroc » Fri Jun 20, 2008 6:55 pm

Hi,
I'm a student and I have a project on the quality of COBOL z/OS applications, so I'm looking for programming guidelines: how to avoid pitfalls that can cause abends in production, bad data manipulation, and so on.

I'm interested in any feedback, experiences, or documents that could be useful for my project.

Thanks in advance.

Have a good day.
jeroc
 
Posts: 22
Joined: Tue Sep 18, 2007 2:11 pm
Has thanked: 0 time
Been thanked: 0 time

Re: Best practices

Postby dick scherrer » Fri Jun 20, 2008 9:05 pm

Hello,

One of the biggest "pitfalls" that cause production problems (abends/incorrect data) is insufficient testing.

When new code is developed, it is critical to make sure that all of the code is tested. Simply running a test that does not abend is not sufficient. One way to test more thoroughly is to prepare the expected output of a test in advance and verify that the actual output matches it.

If the code is to process "new" data (i.e. user input), every part of every field entered must be thoroughly validated. Often this is accomplished by creating a series of transactions that include both good and bad entries. The testing directions describe everything that should happen for both accepted and rejected entries. This includes data entered from a terminal or generated external to the application to be processed as input "transactions".
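As a small illustration, the edit step for one numeric input field might look like the following COBOL sketch (all field and paragraph names here are invented for the example):

```cobol
       WORKING-STORAGE SECTION.
       01  IN-AMOUNT-X          PIC X(9).
       01  WS-AMOUNT            PIC 9(9).
       01  WS-EDIT-STATUS       PIC X VALUE 'A'.
           88  REC-ACCEPTED     VALUE 'A'.
           88  REC-REJECTED     VALUE 'R'.

       PROCEDURE DIVISION.
       2000-EDIT-AMOUNT.
      *    Accept the field only when every byte is a digit
           IF IN-AMOUNT-X IS NUMERIC
               MOVE IN-AMOUNT-X TO WS-AMOUNT
               SET REC-ACCEPTED TO TRUE
           ELSE
      *       Reject the transaction here instead of letting a
      *       later COMPUTE on bad data abend with an S0C7
               SET REC-REJECTED TO TRUE
           END-IF.
```

The same pattern repeats for every entered field, with the good and bad test transactions exercising both branches.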
Hope this helps,
d.sch.
dick scherrer
Global moderator
 
Posts: 6268
Joined: Sat Jun 09, 2007 8:58 am
Has thanked: 3 times
Been thanked: 93 times

Re: Best practices

Postby marun » Mon Jun 23, 2008 11:01 am

dick scherrer wrote:One of the biggest "pitfalls" that cause production problems (abends/incorrect data) is insufficient testing. [...]

You mean self-review, right? :roll:
marun
 
Posts: 8
Joined: Fri Jun 20, 2008 9:59 am
Has thanked: 0 time
Been thanked: 0 time

Re: Best practices

Postby dick scherrer » Mon Jun 23, 2008 11:15 am

Hello,

Initially, self review is quite important.

Once the developer completes their unit and/or component/system test, many organizations require that system testing or User Acceptance Testing (UAT) be done before promotion to production.

Final testing of a new system or a new "release" of an existing system should be done by someone other than the developer(s).
Hope this helps,
d.sch.
dick scherrer
Global moderator
 
Posts: 6268
Joined: Sat Jun 09, 2007 8:58 am
Has thanked: 3 times
Been thanked: 93 times

Re: Best practices

Postby jeroc » Tue Jun 24, 2008 9:48 pm

Hello,

I agree with you that it is important to run a testing campaign and do self-testing. However, it should also be possible to create routines that inspect the source code to detect bad data manipulation or error-prone statements, because people can sometimes miss problems or forget to test. No?
In this case, an automated routine can be helpful. What kinds of constructs should such routines look for? (In fact, the routines could automate what you do manually.)

Regards
jeroc
 
Posts: 22
Joined: Tue Sep 18, 2007 2:11 pm
Has thanked: 0 time
Been thanked: 0 time

Re: Best practices

Postby dick scherrer » Tue Jun 24, 2008 11:13 pm

Hello,

jeroc wrote:However, it could be also possible to create routines that inspect the source code in order to detect bad data manipulations or error-prone statements because, sometimes people can miss some problems or can forget to test. No ?
Not likely. Code to do this would need to be even "smarter" than the compiler. How would "bad data manipulation" be defined?

jeroc wrote:sometimes people can miss some problems or can forget to test.
Not on a well-managed project. People do not get the chance "to forget": the test plan is formalized, detailing every business rule and the ways each will be tested.

jeroc wrote:(in fact, routines could automate what you do manually)
Large-scale, sophisticated testing is often automated. This handles both volume and testing consistency.
Hope this helps,
d.sch.
dick scherrer
Global moderator
 
Posts: 6268
Joined: Sat Jun 09, 2007 8:58 am
Has thanked: 3 times
Been thanked: 93 times

Re: Best practices

Postby jeroc » Wed Jun 25, 2008 6:26 pm

Hello,

Point 1.
I would say that bad data manipulation is when:
- you move a field into another field that is shorter (silent truncation)
- you move alphanumeric data into a numeric field without an IF ... NUMERIC test
- you do not check a subscript against its table bounds
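For illustration, here are those three cases in a deliberately buggy COBOL sketch (all names are invented; compiling with the SSRANGE option would catch the third one at run time):

```cobol
       WORKING-STORAGE SECTION.
       01  WS-LONG-NAME    PIC X(30) VALUE 'EXAMPLE CUSTOMER NAME'.
       01  WS-SHORT-NAME   PIC X(10).
       01  WS-QTY-X        PIC X(5)  VALUE 'ABC12'.
       01  WS-QTY          PIC 9(5).
       01  WS-TAB.
           05  WS-ENTRY    PIC X(8) OCCURS 10 TIMES.
       01  WS-SUB          PIC S9(4) COMP.

       PROCEDURE DIVISION.
      *    1. Silent truncation: only the first 10 bytes survive
           MOVE WS-LONG-NAME TO WS-SHORT-NAME
      *    2. Non-numeric bytes moved into a numeric field; a
      *       later COMPUTE on WS-QTY can abend with S0C7
           MOVE WS-QTY-X TO WS-QTY
      *    3. Subscript never validated; with WS-SUB = 11 this
      *       stores past the end of the table (S0C4 or silent
      *       storage corruption unless SSRANGE is in effect)
           MOVE 11 TO WS-SUB
           MOVE SPACES TO WS-ENTRY (WS-SUB).
```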

Any other examples?

Point 2.
I agree with you that test plans are very important, but it could be interesting to provide an automated tool to check the code in addition to the test plan, no? Some problems are difficult to find and can occur in specific cases that are not covered by the test plan. In addition, if a problem can be detected before the test phase, that is useful for the developers.

Point 3.
I agree that both test plans and code-checker tools can be automated.

Regards
jeroc
 
Posts: 22
Joined: Tue Sep 18, 2007 2:11 pm
Has thanked: 0 time
Been thanked: 0 time

