
Best practices

PostPosted: Fri Jun 20, 2008 6:55 pm
by jeroc
Hi,
I'm a student and I have a project on the quality of COBOL z/OS applications, so I'm looking for programming guidelines: how to avoid pitfalls that can cause abends in production, bad data manipulation, and so on.

I'm interested in any feedback, experiences, or documents that could be useful for my project.

Thanks in advance.

Have a good day.

Re: Best practices

PostPosted: Fri Jun 20, 2008 9:05 pm
by dick scherrer
Hello,

One of the biggest "pitfalls" that cause production problems (abends/incorrect data) is insufficient testing.

When new code is developed, it is critical to make sure that all of the code is tested. Simply running a test that does not abend is not sufficient. One way to test more thoroughly is to prepare the expected output in advance and verify that the actual output matches it.
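The expected-vs-actual approach can be sketched as a small comparison harness. This is just an illustration in Python; the record contents and the "report every mismatch" policy are my own assumptions, not anything from a specific shop:

```python
# Compare an expected-results file with the actual test output,
# record by record, reporting every mismatch rather than stopping
# at the first one. Extra or missing records are also reported.
def compare_outputs(expected_lines, actual_lines):
    """Return a list of (record_number, expected, actual) mismatches."""
    mismatches = []
    length = max(len(expected_lines), len(actual_lines))
    for i in range(length):
        exp = expected_lines[i] if i < len(expected_lines) else "<missing>"
        act = actual_lines[i] if i < len(actual_lines) else "<missing>"
        if exp != act:
            mismatches.append((i + 1, exp, act))
    return mismatches

expected = ["CUST001 00100.00", "CUST002 00250.50"]
actual   = ["CUST001 00100.00", "CUST002 00250.05"]
for rec, exp, act in compare_outputs(expected, actual):
    print(f"record {rec}: expected {exp!r}, got {act!r}")
```

The point is that "the job ended with RC=0" proves nothing by itself; the test passes only when this comparison comes back empty.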

If the code is to process "new" data (e.g. user input), every part of every field entered must be thoroughly validated. Often this is accomplished by creating a series of transactions that includes both good and bad entries. The testing directions describe everything that should happen for both accepted and rejected entries. This includes data entered from a terminal as well as data generated outside the application and processed as input "transactions".
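The validate-every-field idea looks roughly like this (a Python sketch; the field names, lengths, and rules are invented for the example, and collecting all errors at once is one common design choice so a rejected transaction reports every problem in a single pass):

```python
# Validate each field of an input transaction and collect every error,
# so a rejected transaction reports all of its problems at once.
def validate_transaction(txn):
    """txn is a dict of raw string fields; returns a list of error messages."""
    errors = []
    account = txn.get("account", "")
    if not account.isdigit():
        errors.append("account must be numeric")
    if len(account) != 8:
        errors.append("account must be 8 digits")
    amount = txn.get("amount", "")
    if not amount.replace(".", "", 1).isdigit():
        errors.append("amount must be numeric")
    if txn.get("type") not in ("CR", "DB"):
        errors.append("type must be CR or DB")
    return errors

good = {"account": "12345678", "amount": "100.00", "type": "CR"}
bad  = {"account": "12AB5678", "amount": "100.00", "type": "XX"}
print(validate_transaction(good))  # []
print(validate_transaction(bad))
```

A test plan would then pair each deliberately bad transaction with the exact error messages it is expected to produce.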

Re: Best practices

PostPosted: Mon Jun 23, 2008 11:01 am
by marun
dick scherrer wrote:Hello,

One of the biggest "pitfalls" that cause production problems (abends/incorrect data) is insufficient testing.

When new code is developed, it is critical to make sure that all of the code is tested. Simply running a test that does not abend is not sufficient. One way to test more thoroughly is to prepare the expected output in advance and verify that the actual output matches it.

If the code is to process "new" data (e.g. user input), every part of every field entered must be thoroughly validated. Often this is accomplished by creating a series of transactions that includes both good and bad entries. The testing directions describe everything that should happen for both accepted and rejected entries. This includes data entered from a terminal as well as data generated outside the application and processed as input "transactions".

You mean self review... right? :roll:

Re: Best practices

PostPosted: Mon Jun 23, 2008 11:15 am
by dick scherrer
Hello,

Initially, self review is quite important.

Once the developer completes their unit and/or component/system test, many organizations require that system testing or User Acceptance Testing (UAT) be done before promotion to production.

Final testing of a new system, or of a new "release" of an existing system, should be done by someone other than the developer(s).

Re: Best practices

PostPosted: Tue Jun 24, 2008 9:48 pm
by jeroc
Hello,

I agree with you that it is important to run a testing campaign and to self-test. However, it could also be possible to create routines that inspect the source code to detect bad data manipulation or error-prone statements, because people can sometimes miss problems or forget to test, no?
In that case, an automated routine could be helpful. What kinds of constructs should it look for? (In fact, such routines could automate what you do manually.)

Regards

Re: Best practices

PostPosted: Tue Jun 24, 2008 11:13 pm
by dick scherrer
Hello,

jeroc wrote: However, it could also be possible to create routines that inspect the source code to detect bad data manipulation or error-prone statements, because people can sometimes miss problems or forget to test, no?
Not likely. Code to do this would need to be even "smarter" than the compiler. How would "bad data manipulation" be defined?

jeroc wrote: people can sometimes miss problems or forget to test.
Not on a well-managed project. People do not have the chance "to forget". The test plan is formalized detailing every business rule and the ways each will be tested.

jeroc wrote: (In fact, such routines could automate what you do manually.)
Large, sophisticated testing is often automated. This handles both volume and testing consistency.

Re: Best practices

PostPosted: Wed Jun 25, 2008 6:26 pm
by jeroc
Hello,

Point 1.
I would say that bad data manipulation is when:
- you MOVE a field into another one that is shorter (silent truncation)
- you MOVE an alphanumeric field into a numeric field without first checking IF NUMERIC
- you use a subscript without checking that it is in range

Any other examples?
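To make those three pitfalls concrete, here are rough Python analogies. This is only an illustration; the COBOL-side behaviour (truncation rules, S0C7 data exceptions, SSRANGE) is noted in the comments, and the helper names are invented:

```python
# 1. Moving a field into a shorter one: COBOL silently truncates
#    (for PIC X fields, the rightmost characters are dropped).
def cobol_move_alnum(value, target_length):
    """Rough analogy of MOVE into a shorter PIC X field."""
    return value[:target_length].ljust(target_length)

assert cobol_move_alnum("HELLO WORLD", 5) == "HELLO"  # "WORLD" is lost silently

# 2. Moving alphanumeric data into a numeric field without IF NUMERIC:
#    in COBOL the bad digits typically surface later as an S0C7 abend.
raw = "12A45"
if raw.isdigit():              # the analogue of IF RAW IS NUMERIC
    amount = int(raw)
else:
    amount = None              # reject now instead of abending downstream

# 3. Unchecked subscripts: in COBOL (compiled without SSRANGE) a bad
#    subscript silently reads or overwrites adjacent storage.
table = [0] * 10
index = 15
if 0 <= index < len(table):    # the range check the list above asks for
    value = table[index]
else:
    value = None
```

In Python the third case raises an exception; in COBOL it can corrupt neighbouring fields with no error at all, which is exactly why it belongs on the list.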

Point 2.
I agree with you that test plans are very important, but it could be interesting to provide an automated tool to check the code in addition to the test plan, no? Some problems are difficult to find and can occur in specific cases that are not covered by the test plan. In addition, if you can detect a problem before the test phase, that could be useful for developers.
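As a toy illustration of the kind of checker being discussed: the sketch below does purely lexical matching on COBOL statements, with patterns I chose as examples. A real tool would need to parse the DATA DIVISION to know field lengths and types before it could flag truncating MOVEs:

```python
import re

# Flag a few lexically detectable, error-prone COBOL constructs.
# Deliberately naive: it inspects one statement at a time, no full parse.
CHECKS = [
    (re.compile(r"\b(COMPUTE|DIVIDE)\b(?!.*ON\s+SIZE\s+ERROR)", re.S),
     "arithmetic without ON SIZE ERROR"),
    (re.compile(r"\bGO\s+TO\b"), "GO TO complicates control flow"),
]

def check_statement(stmt):
    """Return warnings for one COBOL statement (text up to the period)."""
    return [msg for pattern, msg in CHECKS if pattern.search(stmt.upper())]

source = [
    "COMPUTE WS-TOTAL = WS-A * WS-B.",
    "DIVIDE WS-A BY WS-B GIVING WS-C ON SIZE ERROR PERFORM 9000-ERR.",
    "GO TO 8000-EXIT.",
]
for stmt in source:
    for warning in check_statement(stmt):
        print(f"{stmt}  <-- {warning}")
```

This only automates style checks; it cannot prove the absence of the data problems dick describes, which is why it complements rather than replaces the test plan.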

Point 3.
I agree that test plans and code-checker tools can be automated.

Regards