Chapter 9

Completing and Deploying a Solution


Few situations in life reward us for good planning alone; instead, the successful execution of the plan is the result by which most endeavors are judged. So too with application development. In spite of your good research, planning, and coding efforts, a project can fail or fall short of its mark. I'll focus on five key areas in this chapter to help you prevent such situations:

Counting Critters


"To keep life interesting, we established various milestones in the Access 95 development process, with prizes to members of the team that achieved each milestone. One such benchmark was a designated prize for reporting the ten thousandth bug.

"One of the testers (I'll call him Bob to protect his dignity) decided he really wanted the prize for this bug count milestone, so he wrote a very clever program on his machine. Starting at bug 9,950, his program monitored bug reports as they were being submitted to the server, watching for that critical bug number 9,999. When his program saw the magic number, it immediately sent in a bug he had been holding back, in order to clinch him the coveted slot number 10,000.

"Bob's program ran perfectly, and late one Saturday night, it saw number 9,999 go by, launched on cue, and submitted the winning bug report. However, before Bob came back in on Monday, some of his fellow testers noticed that he had submitted the prize-winning report on a night when he was not even in the building, and they suspected foul play. For fun, they moved Bob's bug from 10,000 to 10,001 and placed one of their own in the winning slot.


"In the days that followed, people who were in on this nefarious plot would wander by Bob's office and peer in, chuckling to themselves as Bob groped through his code line by line to try to figure out what had gone wrong and how it could have fired one number too late!"

Tod Nielsen, General Manager, Microsoft Access Business Unit

All too often, developers and users with the best of intentions miss the mark with respect to deploying a successful and useful application. Even when all parties apply good intentions and genuine effort, they can miss a few critical opportunities, and the development effort ends in diminished success or outright failure.

In my experience, development efforts that produce poor applications or dissatisfied users do so as a result of some combination of the following factors:

What can be done to dodge these bullets and produce the best possible solution? In Chapter 3, "Preparing for Development," I described methodologies that system designers can use to minimize the occurrence of flawed design processes and communication problems. In this chapter, I'll assume that the design plan is solid, and show how to turn the plan into a shipped application.

Managing Application Development

Sign of the Times


I saw this sign at Microsoft: "All tasks that are due yesterday must be fully assigned by noon tomorrow."

As the sidebar suggests, there never seems to be enough time in an Access development project to do a perfect job. The low cost of Access and the machines that run it produces a mindset that Access development can be done quickly and inexpensively. With time and money constraints in place from the beginning of a project, the project has no chance of success unless it is directed by someone competent in budgeting, time management, motivation, problem solving, and team coordination.

Developers are an independent lot, and are often creative and liberal types with disdain for traditional business hierarchies. Nevertheless, application development must be structured and respected like any other business process.

As I noted in Chapter 3, "Preparing for Development," even a development effort involving only one programmer will be more successful if that lone coder is managed by someone else. When programmers are freed from user interruptions and deployment planning, they can focus exclusively on the creation of the solution.

Creating a Time Budget from the Specification

In Chapter 4, "Creating Design Specifications," and Appendix B, "Design Specifications-A Detailed Outline," I describe in minute detail an approach for creating design specifications. One element of a high-quality specification (as shown in Appendix B, Section 2, titled "Application Processes") is a detailed itemization of the component objects that comprise the solution, and the application's primary tasks.

Generally, the project time and monetary budgets are based upon the listings of objects and tasks in the design document. The manager's primary role at the beginning of the development process is to refine the time budget and object listings into a finite and detailed project work plan.

If the project tasks were assigned time estimates in the specification, the manager can extrapolate the maximum calendar time required for development. Adding all of the time elements and dividing by the average length of a workday would produce the number of person-days required in a one-person, linear development model.
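For example, a specification whose tasks total 320 estimated hours of work would, at 8 working hours per day, translate into 40 person-days of development under this model; the figures are purely illustrative, but the arithmetic is the same for any plan.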

Of course, most development projects involve more than a single developer. Consequently, the manager's initial mission is far more complex than simply defining a time budget in days. Rather, the manager must proceed according to the following plan:

  1. Itemize objects and processes to the lowest reasonable level of detail.
  2. Define the time budget for each itemized object and process.
  3. Assign a developer to each object and process.
  4. Define the order in which each developer will execute his or her assigned tasks.
  5. Balance the assignment of tasks and times to synchronize the completion of all tasks.
  6. Revise the schedule to manage dependencies within each developer's task list and between developers. Establish interim and final milestones for the project (this process is described in the following section).
  7. Launch the development process and manage the development effort against the plan. Revise the plan and reassign tasks as required.

In the "Preparing Project Timelines" section of Chapter 4, "Creating Design Specifications," I introduced the concept of placing time estimates by task into a project plan. At the highest level, a sample project plan looks like the one shown in figure 9.1.

Fig. 9.1

You can create development project timelines using tools like Microsoft Project.

When a detailed plan exists for a project, I use the plan as a time budget by breaking each task into what I call "SpecItems," which are detailed tasks (objects or processes) on a specification. Each SpecItem on a project is a measurable unit of work with distinct attributes and a prescribed time budget. On projects that have a finite number of hours (and/or dollars) associated with them, creating time budgets at this detailed subitem level greatly increases the ability to manage available resources against the budget.


The term SpecItem is my own. You can replace it with any term that provides the equivalent meaning to your development team, such as task, item, subitem, workitem, or element.

Figure 9.2 shows a portion of the project plan from Figure 9.1 that includes the SpecItem number and time budget for each task.

Fig. 9.2

A project plan that shows individual time budget items ("SpecItems").


If you want to create a time budget with SpecItems for your project and are not concerned with a visual or hardcopy timeline, you can use Excel instead of Project to create the grid portion of the information shown in the figure.

The SpecItem numbers can be used to track time against the project by requiring that each time-keeping entry made by a developer against the project includes the task (SpecItem) identifier. See the section "Performing Progress Reviews" later in this chapter.

Establishing Project Responsibilities and Milestones

The project manager's initial planning effort (as defined in the previous section) includes assigning tasks to developers and creating a plan for each developer to accomplish those tasks. A manager performing such a planning process must take many factors into account:


This process of matching skills to tasks may be more successful if the manager encourages the involvement of members of the development team.

Additionally, in Chapter 3, "Preparing for Development," I stated the objective that development managers should also possess a high level of development skill themselves. Nowhere in the management process is the need for these skills more obvious than when the manager is estimating the time for each development task and matching the tasks to the skills of the programming team members. A manager who is only a "business" person and not familiar with the nuances and details of the development process will not be able to create an accurate and detailed development plan.


When creating separate development efforts to run in parallel, I've found that the simplest organization is to have three development teams. First, all of the teams work together on the tables, code libraries, and application infrastructure. Then, one team works on forms, one on reports, and one on core processes.

Whichever team finishes its list first inherits the early testing work and the creation of user-education materials and deployment plans.

When creating a project development plan, interim milestones can be built around status points (a percentage complete, a specific number of objects created, or a demonstrable subset of functionality), or they can be built around the completion of named project components. When dependencies exist between objects or processes, it is useful to define functional subsets or components of an application, and to manage the completion of each component as its own miniature product.

Thus, the accounts receivable, accounts payable, and payroll components of an application may each be managed as if they were a full project. The development of these components can be run one after another, or can be staged in parallel if enough resources are available. The management of application components requires careful timing of interim deliveries, and may also require component-level review, documentation, and testing efforts.

Whether interim milestones are based on measurable progress or the completion of identifiable components, they provide several types of value to the users and developers:


Interim milestones are a good place to schedule team-building activities like project review meetings, training and other skills enhancement, or rewards and social events.


Does the previous point mean that the timely delivery of a project is related to the number of milestones in the project? Not precisely. A project with an overload of milestones can become bogged down in preparing for milestones, creating interim builds, and coordinating user reviews, all of which may slow the progress of development rather than help it.

In the "Prototyping" section of Chapter 3, "Preparing for Development," I noted that it can be useful to include at least two interim milestones in the development of every application:

Complex applications may benefit from a few more milestones. For example, a review of report prototypes can often prove useful in applications with extensive reporting needs. Also, applications that have highly involved processes may benefit from a user review of the process logic when it is finalized.


Users are not usually adept at reviewing the code or pseudocode for a process. Instead, the best way to enable a review of a process is to work with the users to dummy up a set of input data, run the process against the data, and then analyze the outputs from the process to determine whether it is working as desired.

For example, an inventory control routine includes code to balance the running inventory balance to stock-on-hand (called physical count) information entered into the system. To assist users in reviewing this process, the developer would need to help the users enter sample records for initial product quantities, then create stocking-level adjustment records, and then run the inventory balancing process and review the output records.

When a project development plan has been created, it is important to share it not only with the developers, but with the users as well. Users on the design team who are involved in the interim milestones must be made aware of the timing of their involvement and the amount of energy that will be required of them. When developers deliver the application to users for review at interim milestones, it is critical that the user feedback comes back on time, or the development timeline may slip.


The only thing more frustrating to a development manager than coders who are behind on a deadline is users who are behind on a deadline. The development manager rarely has the authority to force the users to complete their obligations to a project.


Users involved in the testing process must also block out the testing phase on their calendars and check their availability early in the development process.

Identifying Pitfalls and Priorities

When creating a project timeline, the manager must attempt to foresee events that can derail that timeline. The ideal development model is one where all of the coders have all required skills and knowledge, sit down at their workstations, and proceed undistracted through the coding effort. However, this virtually never happens.

Some missed deadlines are a result of poor coding effort or a skills deficiency on the part of the development team. But there are other roadblocks that can crop up in the course of development:


The addition of a coder to the team can actually slow a project. Over the long term, an extra person usually increases the performance of the team, but in the short term the new hire requires equipment, training, reviews, and motivation. Depending on the new person's skills, there may be a net drain on the project timeline rather than a net contribution. The best time to add a new person to a development team is in the short space between projects.


Never casually agree to finish a project that was redesigned during development on the same timeline and budget as the original project. Stop all development, go back to the drawing board, and recalculate all deadlines and costs. If possible, involve the members of the original design team in this process.

Of course, to release a solution on time, the development manager must be able to anticipate situations like the ones listed and to include in the project plan contingency tactics to apply when those situations occur.

In addition to contingency planning for situations that reduce productivity, managers should prioritize development efforts to ensure that the most important items are completed first, when possible. This approach ensures that a project that is late, runs out of money, or loses key staff is not abandoned because its most critical features are missing; instead, it can at least be delivered in a reduced form.

As you prioritize individual project elements, these three different attributes should cause an element's priority to be escalated:

Tasks with one or more of these listed attributes should be slotted for completion earlier in the project rather than later. At the point where the most difficult and important issues are completed, much of the project stress for both users and developers goes away, so it is important to reach this milestone as early as possible in a project's lifecycle.

After reading this topic, it should now be apparent why I emphasize the need for a project manager to provide support to the coders. The complexities of project planning, project management, and problem resolution in even small applications often exceed the career interests or job descriptions of technical personnel.

Measuring Development Progress

Having had the privilege to work with many bright coders over the years, I've observed that each has his or her own artistic gifts. Some coders are brilliant at doing what they know, but poor at solving new problems. Some programmers learn slowly but never forget, while others understand everything instantly but have no retention.

With so much variety, there is no single test you can give a programmer (or yourself) that measures development aptitude against a finite scale. The real proof of skills is in the quality and solidity of the solution components created.

If you must try to measure the abilities, and the growth in abilities, of programming team members, you must do it by analyzing the products they create. Evaluating skills based on criteria like the following can prove useful:

Of course, benchmarks such as those listed can also be used to judge the quality and status of a development project in addition to that of developers (see the sidebar). In order to determine the percentage of a project that is completed, the development manager must periodically review the completion status of each project element and the quality of that element.

Is It Done Yet?


I once heard a software company executive in the '80s say something to this effect: the first 90 percent of software development is easy; it's the last 90 percent that kills you.

He was expressing his frustration with a process that has been the same as long as I have been in the business-when you think you're done coding, you're really only about halfway done. There always seems to be one more change or one more bug on the list, and as you cross items off the list other people are adding to it.

In the end, there is always a philosophical mind game where developers go through the product after they've decided to ship it and go down the open issues list reclassifying every open item. A quirk that never got corrected becomes a feature; a feature that got dropped at the last minute is hyped as a compelling reason to anticipate next year's upgrade, and the unfixed bugs are cataloged blandly as known issues near the bottom of the README file and blamed on the operating system.

Mostly as a result of time pressures, every software product ships with bugs, and the important question becomes not "How many?" but rather "How severe?"


The previous statement in no way means that developers should capitulate and stop striving for zero-defect applications. I know that it is possible to write software that has no defects known to users, because we have done it. The bugs that were in the application were found later by developers doing future upgrades, but were never discovered by any users. This is as close to perfection as software ever gets.

Performing Progress Reviews

All projects should have either a time or cost budget, and most have both because time and money intertwine in business. As the manager performs interim progress reviews of a project, he or she must gauge progress by comparing the current status against the milestones and budgets established at the beginning of the development phase.

If a project plan is designed for what I call progress budgeting, it has a defined mixture of milestones, and lists the expected status of each application process and object at each milestone. Determining how far along a project is then becomes a matter of comparing the current status of each object against its expected status at the nearest milestone.

For example, assume a project plan with progress budgeting defines the 40-percent-complete milestone for a project in terms of the completion of coding against a defined list of objects and processes. The status of the project with respect to the 40 percent mark is determined by the status of each object or process as compared to its expected status at the 40 percent milestone. Table 9.1 shows an example of such progress. For the project in the table, the application is assumed to be 40 percent complete when the three listed forms are completed.

Table 9.1 Status of a Project As Measured by Completion of Specified Objects

Object      Actual Percent Complete
frmCust     80%
frmAddr     100%
frmOrder    90%
Average     90%
Project     90% x 40% = 36% actual project completion

A second type of budgeting is what I call time budgeting, which measures all progress on a project in terms of hours (or dollars derived from the cost of the hours). Time budgeting assumes that each element of the project has an associated time projection. If the projections are good, it is safe to say that when the time budget for an item is consumed, the item is complete (because the allocated time or money available for it has been used up).
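For example, if a SpecItem carries a 10-hour budget and the developer has logged 7.5 hours against it, the item is treated as 75 percent complete under this model; once the full 10 hours have been consumed, the item is assumed to be done. (The numbers here are illustrative only.)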

To measure project progress using SpecItems, developers must record their time at the SpecItem level. For example, Figure 9.3 shows how a developer might record his or her time against the SpecItems shown previously in Figure 9.2.

Fig. 9.3

An Access table for logging development time by project and SpecItem.

In this example, project status is measured by collecting time records in an Access table, then running a query with time grouped and summed by SpecItem in order to measure the hours accrued against each item. If the original time estimates were good, the number of hours expended against a SpecItem divided by the number of hours budgeted for it should indicate the percentage complete.
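The following VBA fragment sketches one way to produce such a status query. It assumes hypothetical tblTimeLog and tblSpecItem tables with SpecItem, Hours, and HoursBudgeted fields (not necessarily the exact structures shown in the figures), so adjust the names to match your own tracking tables:

Dim dbs As DAO.Database
Dim rst As DAO.Recordset
Dim strSQL As String

' Sum the logged hours by SpecItem and compare them to the budgeted hours
Set dbs = CurrentDb()
strSQL = "SELECT T.SpecItem, Sum(T.Hours) AS HoursLogged, " _
    & "S.HoursBudgeted, Sum(T.Hours) / S.HoursBudgeted AS PctComplete " _
    & "FROM tblTimeLog AS T INNER JOIN tblSpecItem AS S " _
    & "ON T.SpecItem = S.SpecItem " _
    & "GROUP BY T.SpecItem, S.HoursBudgeted"
Set rst = dbs.OpenRecordset(strSQL)
Do Until rst.EOF
    Debug.Print rst!SpecItem, rst!HoursLogged, Format(rst!PctComplete, "Percent")
    rst.MoveNext
Loop
rst.Close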

Performing Code Reviews

An Unfavorable Comment


The need for code reviews is indicated by the following quote, which we found in a code comment in an application inherited from a client:

' I'm writing this on my last night here.
' The code's not working and I'm sick and tired and giving up.
' This should have been thought out more clearly before I started.


Clearly, the person writing this code comment was disgruntled and was a lame duck. You cannot expect people in such a situation to do their best work. Discovering this comment during our code review indicated to us that we must carefully review the commented routine-and any other routines written by this same person-because the code quality may have been affected by the negative personal attitude.

As the sidebar indicates, if only one person writes a routine, reviews the routine, and tests the routine, the routine is completely a reflection of that person's skills and attitudes. If, however, someone else reviews the routine, the quality of the code can be brought up to a standard that better reflects the abilities of the entire team or the goals for the project.

A code review is simply the process of carefully reading code routines associated with a process or object, looking for areas that can be improved. Code procedures are often drafted, then coded to completion, then optimized, and a code review can occur at any or all of these milestones.


If a chunk of code can only be reviewed one time in a project, it should be reviewed when the coder thinks it's completely done and perfected. While an early code review (an architectural review given soon after a code routine is drafted) is very useful to see if the content is appropriate to the task at hand, too many code changes take place between this point and application completion for an early review to be useful as the only review.

It is often easiest to perform a code review at one of the project milestones, when an identifiable set of routines has been completed. (The strategy for defining project milestones and creating interim builds was discussed earlier in this chapter.) Any set of code delivered as part of an interim build is fair game for a review unless otherwise noted.


All code reviews should occur before the testing cycle. When testing begins, the code should be complete from the perspective of both the programmer and the manager.

Properly commented code can be useful during a thorough code review. It can sometimes be very difficult to determine exactly what a piece of code is doing without appropriate comments.

Conversely, managers must not get in the habit of reviewing code by simply reading the comments. A comment is not a valid indicator of the quality of the code that follows it, nor are comments even guaranteed to be accurate because coders often forget to revise comments as they revise code.

In a perfect world, all code in a project would be reviewed before shipment to the users. However, there is rarely enough time or budget in a project to allow for this. Routines should be prioritized for review using the same criteria described previously in this chapter for ordering the flow of development. Thus, routines that are mission critical, highly complex, or a top priority for the users should be reviewed first.

What should the reviewer look for when scrutinizing code routines? A good coder can often spot bugs or logic flaws while reading another person's code. The reviewer should also note the following:


You can imagine how difficult it would be for a manager to review the code of ten different developers who used ten different coding styles. Code reviews are made much more difficult when consistent coding styles are not enforced throughout the team.

It is tempting to perform a code review from printed hardcopy code listings or in Word documents with revision marks enabled. Either of these options enables the reviewer to easily make notes and annotations. However, because reviewing code involves jumping to related routines and sometimes even running the code to follow a logic trail, most code reviews are actually performed online.


When reviewing code online, you can use comment marks (an apostrophe) to make comments directly in the code. Create a standard notation that the original programmer can search for at the end of the review in order to read each review comment. For example, I place 'SL at the beginning of my code review comments inserted into code.
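For example, a routine that has been through a review might come back to its author looking something like the following; the procedure and the review remarks here are fabricated purely for illustration:

Private Sub cmdSave_Click()
'SL 11/12 Review: this routine saves the record without validating the
'SL date fields first; consider calling the shared validation routine
'SL before the save, as the order entry form does.
    DoCmd.RunCommand acCmdSaveRecord
End Sub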

Performing Architecture Reviews

A review of an application's architecture is different from an audit of its code. An architectural review is concerned with the layout of objects, the dependencies between objects, and the flow from one object to the next.

This type of review is usually done at some interim milestone near the midpoint of the project. When the application is half completed, the table structures, form and report shells, and primary code routines should have been at least outlined and located within the solution. The reviewer should be able to divine if these objects are structured properly and match the intended and optimal design and usage.


In Chapter 4, "Creating Design Specifications," I describe how to create a navigation map for an application. The architectural review of a solution should include comparing the application against the navigation map from the design document. The reviewer should make sure that the application flow envisioned by the users was actually implemented in the solution.

Testing Access Applications

A Tester's Vocabulary


Various incarnations of this list of definitions have crossed my desk over the years. I've edited the lists and added a few items of my own:

Some developers love to test applications; others do not. Some are better at it than others. Because there is no value in an untested routine, either to the rest of the development team or to the users, each developer must constantly strive to become a better tester of his or her own code.

Unfortunately, the testing process is often underbudgeted, because a common perception is that a good coder or team should not be creating bugs in the first place. This is a misconception. First, all humans make mistakes, and thus all coders have bugs in their completed work. Second, a software application is a complex collection of code and objects interacting with users and data. There are many different combinations of such interactions to test, and many different things that can go wrong in the interaction.

While random testing done by a good tester finds many of the problems in an application, it is wiser to proceed with testing according to a plan.

Creating a Test Plan

The quality of your product is directly affected by the quality of the testing performed on it. Notice that I said quality, not quantity. While a thousand robots or test scripts pounding on your application may eventually find every problem in it, one well-directed human generally finds as many issues in less time.

Consequently, in the same fashion as a specification was created to provide the developers with a map to use during development, a test plan can be created to provide a map for use by the testers. The biggest danger in random testing is that a feature or feature set will be overlooked and untested. Testers working from a comprehensive plan are unlikely to generate this problem.

The most basic test plan is simply a list of the elements of an application: objects, features, and processes. A tester working from a comprehensive list of application elements verifies that each item on the list works as designed and is free from errors. A broader test plan includes suggestions for how to test each object, feature, and process.


If you prefer a formal definition for test plan, I offer this:


A list of every feature in the application, and every logical feature grouping, with a strategy for triggering each feature/group from every possible user-interface approach and internal circumstance. By following a test plan, a user unfamiliar with an application should be directed to use every feature in it in conjunction with every logical combination of data state (such as empty record, full record, valid record, invalid record) and initiation method (including keyboard, mouse, cascading code events, thresholds/triggers, and timers).

To determine if a feature works as designed, the tester must have a copy of the design document to test against. If the design document is not detailed and specific, or one does not exist at all, the tester is forced to either guess as to the intended workings of a feature, or ask a user or designer of the system for input on the intended behavior. Neither of these options is efficient or accurate. Consequently, we've just bumped into another benefit of a written specification: it provides the framework for a testing plan.

To write a basic test plan, make an outline list of the high-level features in your application. For an Access application, the outline might be organized by menu bar or switchboard features. Next, fill in the outline with each feature that can be accessed in that area of the product. Remember to include features on menus, features behind buttons, and features triggered by events. The outline might look something like this:

Using the structure shown, here are some actual entries from a test plan:

You can see that even a simple feature list like this helps organize the testing process, provides the testers with a structure for recording issues discovered or objects certified, and enables managers to track testing progress.

With a basic test plan, we are relying on the abilities of the tester to try all possible variations of data and interaction with the system in order to properly test a feature. Skilled testers may function adequately in this scenario, because they have seen many applications fail and know what to look for. Lesser-skilled testers, however, may not be able to properly anticipate all the different ways a user might interact with a feature, and may not test well from a basic plan.

Writing a more comprehensive test plan than a simple outline takes additional time. Creating this test plan generally involves using the specification as an outline of features and objects, as well as adding details that are specific to the testing process. To create the details, browse the application and add to the plan the menu options, toolbar and form buttons, keystroke sequences, and events that could be triggered by a user at a given point. The objective is to come up with a document that lists each possible action that can be initiated by the user, either directly or through an event the user sets in motion. In other words, this outline should list everything that can be accomplished using the application.

It is also helpful to review the code in an application with an eye toward branch points, subroutine calls, automated features (like timers), or feature combinations (such as dependent features), and then to add information to the test plan about how to test these internal code processes. What you are looking for here are areas in the application where events can either collide with each other, or be spawned by other events, or fail in an ungraceful manner.

Continuing on, add text to the plan, in appropriate places, that describes in detail how to properly evaluate whether or not each listed feature or function is working correctly. Note that the objective here is to convey to a tester, who may be neither a developer nor a user of the application, an understanding of how events interact with each other and how users interact with the system. While it is assumed that testers of your applications are trained on the application's workings first, they still may not understand what a feature should do, only how to use it.

As a final step, read through your test plan and add notations about resource requirements for each individual test item. For example, if performance testing is an objective for the testing process, as it is in most database applications, someone needs to build a sample database (called a "test case") of sufficient size to allow the application to be tested under load. Someone also needs to configure the machines on which the performance tests will be run.


As a rule of thumb, you can assume that your users will upgrade their Access product, their client machine, or their database server machine, or all of the above, every 12 to 24 months. Thus you would want to simulate in testing the data load that will be placed on your application 24 months from now, so that you test tomorrow's load against the users' current hardware configuration.


For example, in an invoicing system that will be logging 70 orders per day, you should add 33,600 records (70 x 20 working days x 24 months) to the system as dummy records so that performance can be tested for queries and reports using the full data load.
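If you want to generate that volume of test data in code rather than by hand, a short routine along these lines can do it. The tblOrder table and its OrderDate and OrderAmount fields are assumptions for the sake of the sketch, so substitute your own table, field names, and any required fields your schema demands:

Sub CreateTestOrders()
    Dim dbs As DAO.Database
    Dim rst As DAO.Recordset
    Dim lngOrder As Long

    ' Load two years of simulated orders into the test database
    Set dbs = CurrentDb()
    Set rst = dbs.OpenRecordset("tblOrder", dbOpenDynaset)
    For lngOrder = 1 To 33600               ' 70 orders x 20 working days x 24 months
        rst.AddNew
        rst!OrderDate = DateAdd("d", lngOrder \ 70, Date)   ' spread the orders over time
        rst!OrderAmount = 100 + (lngOrder Mod 500)          ' vary the dollar amounts
        rst.Update
    Next lngOrder
    rst.Close
End Sub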

Another area involving additional resource requirements for your test plan might include calling APIs or other external programs from your application. For example, if the application is e-mail enabled, you need to test the mail functionality of the application in an environment that simulates that of the users. Thus, if your users include both Novell NetWare and Windows NT users, you need to provide resources to your testers that include networks of both these varieties with e-mail services on which to test the application. You may also have to test more than one variety of mail services, such as Microsoft Mail and Exchange servers.


Remember to include usage of notebook/laptop computers as an element of your testing. In an application that has local data, allows the replication of central data locally, or allows the users to make their own local copy of the data, a machine can be removed from the network and still use the application. In such a case, any dependencies your application has on the network (for example, the e-mail server or shared code libraries) must be tested in a non-network scenario.

As an example of a comprehensive test plan, presume that your application has a customer form. On the customer form is a combo box for the CustType field. The test plan for this single control would include the following suggestions:

  1. Enter a value not on the list; you should receive an alert. Did you receive an alert? Was the alert accurate? Was the value disallowed?
  2. Delete a value from the combo box; you should receive an alert. Did you receive an alert? Was the alert accurate? Was the delete disallowed?
  3. When you enter a new record on the form, the CustType value should default to the customer type value for that CustID in the Customer table. Was the default entered? Can you select a value other than the default? Does the default reset on a new record?
  4. When you change the CustID value on the form from one customer number to a different one, the value in the combo box should be replaced with the default value matching the new ID. Was the default entered? Can you select a value other than the default?
  5. Does the control provide quality status bar text and a What's This help message? Is the control described in the application's help file? Can you easily find the topic?
  6. Does the control behave the same whether entered and exited via a mouse click, Tab, or Shift+Tab?

You can see that the mere process of testing a single combo box control can have many steps. Compound this test action list by every control on every form, then add different data scenarios to the test, and also factor in the interaction between the various controls, and you will quickly understand why the testing process for a fairly standard Access application can take weeks.

Stalking the Wild Tester

If your team can afford dedicated testers, you have solved one of the biggest problems in software development: how to ensure that applications are properly and thoroughly tested. If, instead, your testers are also developers, then you face the significant burden of making the developers into good testers.


There is no correlation between a person's ability to write code and his or her ability to test. These skill sets are completely independent.


Historically, developers are not good testers for one single reason: they make too many assumptions. This is why developers are very poor at testing their own code and only marginally better at testing code written by others. Here are some of the traditional (and flawed) assumptions that a developer makes when testing an application:

You can do several things to learn how to be a better tester or to help your development team improve their testing skills:

A great tester is able to meticulously, almost instinctively, examine each feature of an application and find its flaws. He or she can also contrive situations that are quirky but logical and reflect the strangest interaction a user can have with an application (see the sidebar).


The best tester is not simply a thorough tester, but a crazed one. In my experience, great testers are slightly deviant when compared to normal people-they have a sadistic bent and enjoy breaking other people's code and confidence. With a different upbringing or personality, the best tester on your team would probably have turned out as one of those people who write software viruses or hack into secret military systems for fun!

Testing Time


One of my employees, Sid, fits the definition of a great (and slightly maniacal) tester. When he was helping test our retail ActiveX clock control product, I caught him placing 50 clocks on a single Access form, each set to a different time zone, size, or color combination.

I disingenuously asked him, "Do you really think a user would ever do that?" His reply was the kind that separates great testers from good testers: "It doesn't matter if the users will ever do it, our product should be good enough to handle this situation anyway."

Setting Up a Test Machine

We've explored what qualifies a person to be a tester, but what qualifies a machine to be a test machine? I see three configuration-related mistakes commonly made when testing Access applications:

The first maxim in configuring a test machine should be to reproduce the average user workstation as closely as possible.

After you've tested on the average user machine, test on machines that reproduce the most extreme user equipment scenarios. First, test on a machine that is very underpowered and has a simple configuration. A 486/33 box with 8MB of RAM, running in 16 colors with only the default Windows fonts, is a good candidate to test on.

Issues that can be discovered on a machine like this include:

Next, test on a state-of-the-art machine, one that runs at blinding speed, has a variety of popular software products installed on it, has hardware options like a CD-ROM and sound, and runs in the highest available screen resolution. This test machine helps you discover the following:

Finally, it is wise to always test an application on what I call a "clean machine." Such a computer reflects a new PC with a minimum of software on it. We create a clean machine by reformatting the disk drive and installing Windows 95 on the drive and nothing more.

Note that I don't install Office or Access on the clean machine. An Access application you ship with an ODE runtime setup should, in theory, be self-sufficient, and your users should be able to run the setup and the application on a machine containing nothing more than Windows. Of course, if your application is meant for users of the full Access product, you'll have to install Access (but not Office) on your clean machine in order to make it a valid test configuration.


When you must develop applications on a machine that will also be subject to non-standard software, beta test copies of products, and so forth, it is wise to create two development environments on your machine. This is done by making the machine dual boot, so that it can run two copies of Windows. Keep one copy of Windows unpolluted, as close as possible to the configuration of a clean machine, and do your development and testing in this environment.

It is also possible to create a dual boot machine that has both Windows NT and Windows 95 on it. You could use NT as your development environment and keep the Windows 95 environment unpolluted as a test environment.

Managing the Testing Process

Whether the testing process involves one tester or multiple testers, the objective is the same: do not allow any feature to be overlooked. Sometimes I see clients give an application to their staff and say something like this: "Genie, you test inventory; Mary, you test the general ledger; and Mike, you test accounts payable." In the end, we discover that the individual features were tested, but not their interaction, because that aspect of testing was overlooked by all parties. For example, an inventory receipt transaction generates a payable item; Genie thought that Mike would test this under payables testing, and Mike thought that Genie would test it via the inventory receipts.


The paradox of delegating the testing responsibility even afflicts Microsoft. When the company was creating Windows NT version 4.0, I witnessed confusion there over whether the application groups should test their products on the new operating system, or whether the operating system group itself was responsible for compatibility testing.


From these examples, you can see that it is important to make sure every feature is included on the test plan, and that every feature is delegated to at least one person for testing.

You may need to modularize the test process for the benefit of multiple testers, so that two testers do not overlap on the same feature and miss another one altogether. Most applications should be tested both empty (as they will be delivered to users) and full (with 24 months' worth of sample data loaded in). You can assign these tasks to two different people and have a parallel testing process.

As an alternate example, you might break your testing task delegation into modules based on the experience of the tester. For example, the old adage that developers make poor testers can be carried further to say that a tester who knows a lot about a feature's subject matter may also make a poor tester of it. Thus, if your tester is highly literate in accounting, do not delegate that person to test the invoicing system. Use an accounting neophyte to do the testing of this element instead. This tester brings no preconceptions about how things should work and more closely recreates the scenario of the least-skilled user who may use the application.


I don't mean to imply in the previous paragraph that only persons ignorant of an application's purpose should be testing it. The point I'm making is that the people with the fewest preconceptions are often the best testers. Thus, an accounting neophyte will utilize the application in ways different from someone well versed in accounting and accounting software. The neophyte's testing efforts will often uncover usability, data protection, and process flow shortcomings that the accounting expert may breeze right past.

On the other hand, an accounting neophyte may not fully understand the workflow being automated by the system, and thus an accounting expert may provide better testing of the finer features of an application.

The most favorable testing environment is one that solicits input from users at both ends of this skills spectrum.


Understanding the Types of Testing

Many developers and managers think that all testing happens at the end of application development. If the developers are also serving as testers, this model provides a good allocation of resources because coders are not distracted from the development effort while it is ongoing. They can shift gears mentally from coding to testing at the end of the project.

In a more optimal environment, where testers and developers are not the same people, it is better to have the testing process running parallel to the development process (to the extent that it can be).

There are several kinds of testing commonly defined in the software industry:


Because of the incomplete state of the application at the time of unit testing, it may be difficult to involve users in that process. However, by the time integration testing is in motion, parts of the application are stable enough that it may be possible to involve users in the testing process.

An added benefit of involving users at this stage is that they can compare the features to the specification (and to their expectations for the solution) well before the application components are combined for final testing.

Obviously, in a smaller team, there may be few or no testers designated to help with these efforts. In such a case, developers provide the testing resources, and their success is directly correlated with their ability to function as qualified testers, and with their understanding of the testing process as described here.

Involving Users in the Testing

The deployment mentality applied by developers in years past saw them saying to users: "You'll have the application when it's good and ready." Development teams feared they would lose their shroud of mystery if they exposed any elements of the development or testing processes (or an application's flaws) to the users.

The newer model I'm advocating says this to users instead: "Help us to make the application good and to determine when it's ready." Users that are realistic about the budgets and timelines they help to throw at developers must also be realistic in their approach to testing-they must not form a negative opinion of the development team simply because it asks for help with testing, or because a few bugs are found in the process.

Thus, the trend I've already established in this book of involving users in design and ongoing reviews carries logically to testing as well.

The benefits of involving users in the testing process outweigh the frustrations, which arise mainly from the communication gaps that occur when developers/testers attempt to talk about technical issues with users (or when users attempt to do the same with developers/testers).


I've found that a user's "bug" is sometimes a developer's "by design" behavior. During testing, it can sometimes be difficult to determine what is a serious flaw and what is simply a shortcoming, because different members of the project team may have different perceptions about an item's severity.

Before pulling users into a testing process, establish a mediation or management strategy that will help "triage" issues discovered and rate their priority, so as to minimize hostile debates between users and developers.

The primary benefits of user involvement in testing are:

Here are the primary pitfalls to avoid when adding users to the testing team:


I've found that it can be helpful to create a reward process for users when you are motivating them to test out a new system, because users are often too busy or intimidated to volunteer for such an effort.

For example, I might see if the client will include in the project budget money for printing a few T-shirts sporting a big roach and stating that "I killed Inventory System bugs." These shirts would be awarded to the users who provided the most usability or problem feedback during testing.

Bear in mind, however, that for some development teams the word "bug" is an abomination and is disallowed because it has negative connotations with respect to the development team's reputation. In such an environment, a more politically correct reward idea may be required. For example, instead of the roach shirt idea, consider a shirt with a cartoonish programmer-type figure on it and the words "Honorary Inventory System development team member."

Tracking Testing Discoveries

Once testing begins, the testers need a way to track problems discovered during the testing. Obviously, in a pinch, a Windows Notepad document is better than handwritten notes, and a spreadsheet or outlined Word document is better than a Notepad file. The best solution, however, is for testers to have access to an issues database into which they can log problems, suggestions, and shortcomings found during the testing.

Visual What?


With Microsoft's emphasis on the term "visual" (witness Visual Basic, Visual C++, and Visual Java), it was only a matter of time before the term became a running joke. With the advent of the Internet, so many people at Microsoft were using Notepad to edit their HTML Web pages that they recently began to jokingly refer to Notepad within the company as "Visual Notepad, the HTML Editor."

The power of a central issues database lies in the fact that it can be shared by testers and developers. Developers can query the database at any point to see the newly entered issue records, and testers can view previously entered issue records and determine the latest status information entered into them by the developers.


Notice my politically correct term for the tracking system-I prefer the term "issue" to "bug" for two reasons. First, there is a negative connotation (and subjective nature) to the classification "bug." Second, a good issue management system should not be used only at the end of development to track bugs. It can also be used throughout development to prioritize to-do items, record enhancement requests, and log technical discussions.


If you don't need to be politically correct within your team or your system actually is only used for bugs, you can use fun terms like "Bug Base" or "Ant Farm" to describe your tracking system.

Table 9.2 shows samples of the data fields that we use in our internal issue tracking system:

Table 9.2 Table Fields Useful For Managing Issues That Are Discovered During Development and Testing

Field                   Description
IssueID                 Unique record ID
IssueType               Type of issue (see Table 9.3)
Priority                Urgency rating, from 1 to 3
ProjectName             Name of the application
ProjectArea             Object, task, or process name
ProjectSubArea          Subsidiary object, task, or process name
ProjectVersion          Version or build number that the issue came up in
Platform                Hardware or software environment necessary to recreate the issue
ShortDescription        Short description of the issue
LongDescription         Long description of the issue and steps to reproduce it
Status                  Current status of the issue (see Table 9.4)
StatusSetBy             Person who set the current status
StatusSetAt             Date/time the current status was set
StatusVersion           Version number matching the current status
StatusDueAt             Target date/time for changing to the next status
IssueDueAt              Date/time the issue must be closed
WorkLog                 Detailed history of how the issue evolved, was resolved, and/or was retested
SupportingInformation   Names of files or documents that provide more information
ReportedBy              Person who originally reported the issue
ReportedAt              Date/time the issue was originally reported
CreatedBy               Person who created the record
CreatedAt               Date/time the record was created
ChangedBy               Person who changed the record last
ChangedAt               Date/time the record was last changed


A database with the issue management system tables found in Tables 9.2, 9.3, and 9.4 is on the CD-ROM as file AESISSUE.MDB.
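If you prefer to build a similar table from scratch in code, the following sketch creates a pared-down version using Jet SQL DDL. The field names follow Table 9.2, but the table name, data types, and sizes shown here are assumptions and do not necessarily match the structures in AESISSUE.MDB:

Sub CreateIssueTable()
    Dim dbs As DAO.Database

    ' Create a minimal issue tracking table using Jet SQL DDL
    Set dbs = CurrentDb()
    dbs.Execute "CREATE TABLE tblIssue (" _
        & "IssueID COUNTER CONSTRAINT pkIssueID PRIMARY KEY, " _
        & "IssueType TEXT(50), Priority BYTE, " _
        & "ProjectName TEXT(50), ProjectArea TEXT(50), " _
        & "ShortDescription TEXT(255), LongDescription MEMO, " _
        & "Status TEXT(20), ReportedBy TEXT(50), ReportedAt DATETIME)"
End Sub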

Table 9.3 suggests codes that can be used to designate a type categorization (IssueType) for each issue record.

Table 9.3 Lookup Values For Assigning IssueType Values to Issue Records

Code                              Description
Application Bug-Fatal             Data is lost or the application or system crashes. The user cannot use the application.
Application Bug-Major             A specific object or process is unusable or unstable, but the application does not crash.
Application Bug-Minor             There is a logic flaw in a process or an error alert is generated. The application is usable but requires fixing in the next release.
Assistance Request                The user requires information or training about a feature, or is unsure about how to use portions of the application.
Cosmetic Problem                  There is a display or printing problem related to colors, fonts, sizing, layout, or spelling.
Documentation/Help Problem        There is a flaw in the user education materials for the application.
Enhancement Order (Approved)      An enhancement request for the system that has already been discussed with management and approved for inclusion in the next release.
Enhancement Request (Wish)        A suggestion for improving the application that should be considered in the next design review.
Incomplete Feature                A feature in the application does not match the specification, documentation, or design documents.
Platform/Infrastructure Problem   The user's machine does not run the application well or behaves differently after installing the application.
Setup/Installation Problem        There were problems installing the application on a machine, or a user could not set up the application at all.

Each issue can go through many designated Status field values. Table 9.4 suggests status codes that you can use to track the current status of each issue record.

Table 9.4 Lookup Values For Assigning Issue Status

Status     Description
New        New issue, needs to have a status assigned
ToDo       Assigned to developer for action, management for review, or tester for more information
Test       Fixed or added, needs to be tested or retested
Failed     Failed test or retest, reassigned to developer for rework
Deferred   Deferred to another version
Denied     No action will ever be taken
Closed     Closed and deployed or awaiting deployment

All issues in this example begin life with a status of New, and proceed through multiple interim rankings until they are closed; a closed issue has a status of either Denied or Closed.


The previous issue tracking table and the forms and reports that work with it can be expanded slightly to serve as an issue management system for the development process as well. To use this system for management of development, add information to the issue records to tie the issue to the specification (SpecItem), to budget time for the issue, to measure coding progress, and so on.

This issue tracking system serves multiple purposes:

Keeping issue information in a database offers the pluses of sorting, reporting, querying, and so on, but requires that the testers load Access and a sometimes large issue management application on their system in addition to the application being tested.


In order to leave as many system resources available as possible for testing, we decided to invest in the development of our own Visual Basic/Jet issue tracking system. The compiled VB executable interface can be kept open on the testers' and developers' desktops at all times because it has a smaller memory footprint than an Access-based system.

With Access 97's support for ASP files, you can now actually build an issue management system that doesn't even require a Visual Basic entry system. Instead, testers can submit issues using Internet Explorer and a Jet-literate Web page.

Logging issues into the tracking system is more of an art than a science. In some sense, a tester describing a bug to a developer is much like a sighted person describing an object to an unsighted person-the tester must describe the problem in enough detail for the developer to recreate the exact incident that is being logged. In addition, the tester may be wise to include keystroke-by-keystroke reproductions of the issue, sample data used to generate the issue, sample files generated from the issue, or any other supporting materials that allow the developer to quickly recreate the situation being reported. There is basically no such thing as too much information coming from the tester to the developer in this respect.

Closing the Testing Phase

An application never really seems to be done. Users, testers, and developers are all quite adept at suggesting new features or defining modifications to existing processes.

The development manager has three significant challenges when trying to move the application from the testing stage to the deployment stage:

Regardless of their status, many applications ship to users simply because the project deadline has been reached or the allotted resources for the project have been consumed. At some point, if an application is reasonably stable and the primary bugs have been fixed, it may need to be released to the users simply to allow the involved parties to move on to other projects. While this situation is not ideal, it certainly does provide the impetus for bringing a project to closure.

Avoiding Common Problems

Here are some concepts to apply to the testing process that can help you catch more problems:


It is hard to overstate the importance of this concept. At a recent conference, a Microsoft speaker stated that the majority of serious bugs in one of the Microsoft Office products were introduced late in the development cycle by fixes for lesser bugs that were not properly retested.

Having tested scores of Access applications, I asked my staff to help me make a list of the most common deficiencies we encounter when testing applications or when helping clients test theirs. The following list provides a "cheat sheet" (also reproduced on the CD-ROM in the document PRE-SHIP.DOC) that you can add to your test plans to help you catch common problems (the order of the list items is inconsequential):


The default Font Size setting in the Windows Control Panel is Small Fonts. You may want to test your applications using the Large Fonts setting as well to determine whether your forms exhibit any display problems under that setting.


Review the Access Help topic "Set color properties to Microsoft Windows system colors" for information on how to pull attributes of the current desktop color scheme into the BackColor, BorderColor, or ForeColor properties of objects. To find this topic, look under "colors" in the Help Index.
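As a minimal illustration of that technique (using hypothetical section and control names), a form's Load event can adopt the user's desktop scheme by assigning the VBA system color constants to these properties:

Private Sub Form_Load()
    ' Minimal sketch: pull the desktop color scheme into the form.
    ' The control name txtNotes is hypothetical.
    Me.Section(acDetail).BackColor = vbButtonFace       ' 3D button face color
    Me!txtNotes.BackColor = vbWindowBackground          ' Window background color
    Me!txtNotes.ForeColor = vbWindowText                ' Window text color
End Sub

Because these constants store system color values rather than fixed RGB numbers, the form continues to follow whatever color scheme the user has selected for the desktop.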

All too often, brilliant application designs go astray during implementation for the simplest of reasons, and the reason is often that the developers did not test enough to ensure that users received a solid, quality application. Don't detract from the reputation of your team or its application by scrimping in this area.

Educating Your Users

Prior to deploying a tested application, the users must be taught how to work with it. However, small development teams often cannot spare a person to provide several days of hands-on training.

Two points made in this book help solve this problem. First, according to the principles listed in Chapter 17, "Bulletproofing Your Application Interface," your objective for an application should be that it educates and guides the users as they use it. This approach, coupled with an online help file and printed documentation, often serves as a surrogate for formal user training. (Additionally, users who have participated in testing are often partially trained by the time the application ships. These users can train other users.)

The second point regarding education derives from the structure of the design team as described in Chapter 3, "Preparing for Development." User representatives on the design team are intimately involved in the project research and design; they have participated in the ongoing reviews during development, and they helped with the testing. Thus, by ship time, these "super users" may have detailed knowledge of the application and can provide the training in place of developers.

Few development project plans these days budget for the creation of a training manual. Thus, the user documentation must additionally serve as a training document. When you create your user documentation, bear in mind how it might be used and strive to make it comprehensive.

Also, applications are frequently tested using sample data from a real or fictional scenario (called a test case). Supplying this sample data to the users for group or self-education can serve to shorten their learning curve.


If the application ships loaded with real data, the users should be trained on a copy of that data as the test case. This allows the training environment to closely mimic the actual work environment.


If the application is shipped without data, the test case does not provide a comprehensive educational example because it does not show users how to configure the system and seed it with initial values. Consider whether the users will have any trouble migrating from the test case to an empty database, and include extra information in the training to help them through this transition.

Shipping a Completed Application

When the testing is completed and the users have been trained, it is time to create and test a setup program for deploying the application. Many developers mistakenly assume that the Setup Wizard in the ODE provides the only option for creating an Access application setup. In this section, I'll describe your options for deploying a solution.

Here are the different techniques that you can use to deliver Access applications to your users:

Why are there so many options for building setups? In my opinion, it is because no single tool has really found the correct balance of features yet. Each of the options I've listed has strong and weak points, but none provide all of the features that I want in a single setup engine:

Table 9.5 provides a short summary of when you might choose each of the setup techniques I've listed previously.

Table 9.5 Comparing the Different Setup Approaches

This Option...

Is Best For...

Pull files from the server

Instantly creating the simplest form of setup

Push files from the server

Providing central control of distribution/updates

Acme (ODE) setup

Building setups quickly with a wizard and doing minor customization

Windows Setup Toolkit

Building very small setups with modifiable scripts and file lists

Visual Basic Setup Toolkit

Creating custom forms and code for the setup

InstallShield

Using a robust setup language with built-in dialogs

Regardless of the tool you use, creating a quality setup for your application adds technical and visual value to the application and improves its reception by users. A professional setup program shows the users that you are serious about a quality distribution mechanism for the solution. Here are the important points to consider when deploying your application with a custom setup:


The Change Folder... button is automatically added to the main setup dialog in an Acme setup created by the ODE Setup Wizard. You can remove this button and hard-code the destination for your application by changing one flag in the setup table (SETUP.STF) file. Find the line in the setup table file with a Type value of AppSearch, and change the second Boolean flag in that line from "yes" to "no", as shown in these before and after examples:

C:\Homer<C:\Program Files\Homer>,HOMER.MDB,,128,no,yes,


C:\Homer<C:\Program Files\Homer>,HOMER.MDB,,128,no,no,


Refer to Figure 9.7 for an example of the AppSearch value in a setup table file.
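If you rebuild setups often, you can automate this one-flag change instead of editing the file by hand. The following routine is a minimal sketch only; it assumes the comma-delimited layout shown in the example above, uses a hypothetical file path, and writes its output to a temporary file so you can review the change before replacing the original (back up SETUP.STF first in any case):

Public Sub RemoveChangeFolderButton()
    ' Minimal sketch: flip the second Boolean flag on the AppSearch line
    ' of SETUP.STF from "yes" to "no" so the destination is hard-coded.
    ' The path is hypothetical; the edited copy goes to a .tmp file.
    Const STF_PATH As String = "C:\Setup\SETUP.STF"

    Dim intIn As Integer
    Dim intOut As Integer
    Dim strLine As String
    Dim lngPos As Long

    intIn = FreeFile
    Open STF_PATH For Input As #intIn
    intOut = FreeFile
    Open STF_PATH & ".tmp" For Output As #intOut

    Do While Not EOF(intIn)
        Line Input #intIn, strLine
        If InStr(strLine, "AppSearch") > 0 Then
            lngPos = InStr(strLine, ",no,yes,")
            If lngPos > 0 Then
                strLine = Left$(strLine, lngPos - 1) & ",no,no," _
                    & Mid$(strLine, lngPos + Len(",no,yes,"))
            End If
        End If
        Print #intOut, strLine
    Loop

    Close #intIn
    Close #intOut
End Sub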

Don't cut corners when building a friendly, solid setup for your application. For users not involved in the design or testing processes, the first impression of your application is generated by the setup.

Customizing Acme Setups

When working with Acme-based setups created by the ODE Setup Wizard, you can modify the configuration settings stored in the setup table (STF) file that the wizard builds. Open the file using Excel, tell the Text Import Wizard that the file is Tab delimited, and click Finish. Once the file is open, change the required values and save the file over itself in the original Tab-delimited format.

The format of the STF file is complex and cryptic, and explaining it fully is beyond the scope of this chapter. Worse, Microsoft does not publicly document the Acme setup engine, so you have to experiment with the file format by changing values and observing the repercussions.


Each line in the STF file below the header section is indexed with a unique Object ID (labeled ObjID in the file). This identifier is used to reference a specific line in the file from other lines. Because of the intricate relationships utilizing these references, do not add or delete lines in the file until you fully understand how to find and modify any other lines that reference the affected ObjID values.

Figure 9.4 shows the heading section of a sample STF file created to install a single Access database file.

Fig. 9.4

The ODE Setup Wizard produces an STF setup control file with this structure in the top section.

Notice in the figure of the setup table file that the application name string you supply to the Setup Wizard ("Homer" in this example) is plugged into the file and used for dialog captions and dialog messages (as reflected by the values that were set to the string Homer: App Name, Frame Caption, Dialog Caption Base, and About Box String). These four values are examples of settings that you could change manually after running the wizard to further customize your setup.

For example, Figure 9.5 shows the welcome dialog from an Acme setup sample. The Frame Caption string from SETUP.STF is displayed in the title bar of the setup parent window, and the Dialog Caption Base string is displayed in the dialog message. The ODE Setup Wizard sets these two options to the same string (the one you type in as the application name). As an example of customizing this setup, you could manually change the Dialog Caption Base value in the STF file, re-save the file, and cause the welcome dialog to display as shown in Figure 9.6 instead.

Fig. 9.5

A setup created by the Setup Wizard produces a default welcome dialog.

Fig. 9.6

Manual modifications to the setup table (STF) file can provide a customized welcome dialog instead of the default dialog.

Figure 9.7 shows the detail section of a sample STF file created to install a single Access database file. Note that each element to be installed by the setup is described on a detail line in the STF file. You can reverse-engineer the structure of STF files and determine how to customize these detail lines.

Fig. 9.7

The ODE Setup Wizard produces an STF setup control file with this structure in the detail section.

Ensuring User Satisfaction

A user's satisfaction with your application is first affected by the smoothness (or lack thereof) of the setup process. Here are some suggestions to make this process easier:

Unless they have been involved in design or testing, individual users are naive about the application during the first few weeks they use it. It is important that the development manager provide resources to support users in this critical phase rather than leaving them abandoned.

During the development phase, the development manager can select a designated trainer or trainers for the application; these trainers should be recruited from the pool of "super users" on the design or testing teams. This approach provides good support for new users without encumbering developers. If there are no super users available for peer-to-peer handholding and support, the development manager must assign one or more developers to fill this role.

Users may have comments during the installation of the application, as well as after they have installed the application and are using it. The deployment process or the application itself should provide a mechanism for users to submit feedback about the application, its installation, and its usage. See the following topic for more information.

Reporting User Problems and Issues

For a developer, the job is only half done when the application ships. Ongoing maintenance and enhancement often consume more time and resources than creating the initial version of an application. The development manager is responsible for producing and implementing a plan to manage each successive release of the application. Information gathered from users once the application is in production is one of the most useful inputs to an upgrade plan.

All marketing hyperbole and developers' good intentions aside, one of the realities of software development is that every commercial product and custom application ships with at least one bug. Development tools are too powerful, development projects too complex, and development time frames too short to allow software developers to achieve that nirvana called zero defects.

In Chapter 17, "Bulletproofing Your Application Interface," I stress that your applications should provide a device for collecting ongoing user feedback, both about bugs and about any other issue relating to the use of the application. Feedback mechanisms for users include:


In Chapter 17, "Bulletproofing Your Application Interface," I note that some feedback devices are not exposed to users. For example, the automatic logging of system errors to a table can happen in the background.

Table 9.6 shows the structure of a sample table for logging user feedback directly into a database. You can provide a form where the user can create feedback records in the table and load the form from an option on the Help menu, a custom toolbar button, or a keyboard shortcut.

Table 9.6 Table Fields Useful For Collecting User Feedback About An Application

Field

Description

FeedbackID

Unique record ID

IssueType

Type of issue (see Table 9.3)

ProjectName

Name of the application

ProjectArea

Object, task, or process name

ProjectSubArea

Subsidiary object, task, or process name

ProjectVersion

Version or build number that the issue was discovered in

Platform

Hardware or software environment necessary to recreate the issue

ShortDescription

Description of the issue

LongDescription

Long description of the issue and steps to reproduce it

ResolutionStatus

Resolution code, for example Sent to Issues System, Solved, Discarded

ResolutionStatusAt

Date/time of ResolutionStatus

CreatedBy

Person that created the record

CreatedAt

Date and time the record was created

ChangedBy

Person that changed the record last

ChangedAt

Date and time the record was last changed


The fields in the table mirror the fields in the issue tracking system designed in the earlier section "Tracking Testing Discoveries," in order to allow items from the feedback table to be copied into the issue system's table for development action. The IssueType field values in this table should be the same as those used in the larger issue management system; refer to Table 9.3 for codification examples.


A database with the feedback table shown in Table 9.6 is on the CD-ROM as file AESISSUE.MDB.
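As a minimal sketch of wiring this together (the object names frmFeedback, tblFeedback, and tblIssue are assumptions and may not match the sample database), a Help menu item or toolbar button can call a function that opens the feedback form, and an append query can later move selected feedback into the issue system's table:

Public Function ShowFeedbackForm()
    ' Minimal sketch: open the feedback form in data entry mode.
    ' Call this from a menu item or toolbar button via =ShowFeedbackForm().
    DoCmd.OpenForm "frmFeedback", acNormal, , , acAdd
End Function

Public Sub SendFeedbackToIssueSystem()
    ' Minimal sketch: copy feedback items flagged for development into
    ' the issue tracking table. Table and field names are hypothetical.
    Dim dbs As Database

    Set dbs = CurrentDb()
    dbs.Execute "INSERT INTO tblIssue (IssueType, ProjectName, ProjectArea, ShortDescription) " _
        & "SELECT IssueType, ProjectName, ProjectArea, ShortDescription " _
        & "FROM tblFeedback WHERE ResolutionStatus = 'Sent to Issues System'", dbFailOnError
End Sub

ShowFeedbackForm is declared as a Function rather than a Sub because menu items and toolbar buttons invoke code through an OnAction expression such as =ShowFeedbackForm().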

Users need to be able to provide their feedback, but they also should be made to feel that the feedback has been received and has value. If a user submits a bug report or suggestion, it can be valuable to send an e-mail notification acknowledging that the input was received and thanking him or her for the feedback. (Replies to incoming e-mails or voice mails can often be generated automatically, sparing a developer the burden of managing this process.)
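A minimal sketch of such an acknowledgment follows, using SendObject from the feedback form's AfterInsert event. The control names are hypothetical, a MAPI-compliant mail client is assumed, and the CreatedBy field is assumed to hold the submitter's mail name:

Private Sub Form_AfterInsert()
    ' Minimal sketch: acknowledge a new feedback record by e-mail.
    ' Assumes a MAPI mail profile and that CreatedBy holds a mail name.
    On Error Resume Next   ' Don't interrupt the user if mail is unavailable

    DoCmd.SendObject acSendNoObject, , , Me!CreatedBy, , , _
        "Feedback received: " & Me!ShortDescription, _
        "Thank you for your feedback. It has been logged and will be reviewed.", False
End Sub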

Another key element of user satisfaction is providing users who submit input with disposition information about that input. If a user submission is tracked through its life cycle, it is not difficult to notify users about the resolution of the issues they generated. In the example provided earlier in this chapter, when an issue submitted by a user and stored in the issue tracking system receives a status code of Deferred, Denied, or Closed, the submitting user could be sent an automatic e-mail from the issue system describing the final status and the reason for it.
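The following routine is a minimal sketch of that notification (the table and field names tblIssue, Status, SubmittedBy, and Notified are assumptions); the issue system's administrator could run it periodically:

Public Sub NotifySubmitters()
    ' Minimal sketch: mail each submitter whose issue has reached a final
    ' status, then flag the record so the notice is sent only once.
    ' Table and field names are hypothetical; assumes a MAPI mail profile.
    Dim dbs As Database
    Dim rst As Recordset

    Set dbs = CurrentDb()
    Set rst = dbs.OpenRecordset( _
        "SELECT * FROM tblIssue WHERE Status IN ('Deferred', 'Denied', 'Closed') " _
        & "AND Notified = False", dbOpenDynaset)

    Do While Not rst.EOF
        DoCmd.SendObject acSendNoObject, , , rst!SubmittedBy, , , _
            "Your issue is now " & rst!Status, _
            "The issue you reported has been given a final status of " _
            & rst!Status & ". Contact the development team for details.", False

        rst.Edit
        rst!Notified = True
        rst.Update
        rst.MoveNext
    Loop

    rst.Close
End Sub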

From Here...

In this chapter, I explored techniques for solidifying an application before shipment through the use of rigorous testing. I also described issues relating to the management of project deadlines.


© 1996, QUE Corporation, an imprint of Macmillan Publishing USA, a Simon and Schuster Company.