If functional software testing is the practice of verifying and validating a software package (in other words, making sure the software works and does what it was designed to do), then non-functional software testing is everything else beyond that. One of the first of these “everything-else” categories is performance testing.
Testing the performance of a new or existing program is the process of determining how well an application performs under actual working conditions. Performance testing was not a great concern for software developers until the rise of client/server technology. In the earlier days of computing, most programs were standalone, independent creations restricted to local environments. Having thousands of users attempting to access the same information simultaneously was simply not a consideration.
All of that emphasis on standalone software went away with the advent of the Internet. Suddenly, a program was expected to concurrently service requests from any number of users. The need for performance testing became evident in a hurry.
Performance Testing Requirements
The first requirement for performance testing is that the program in question be completely functional: all functional testing should have been completed successfully, and all known bugs should have been corrected. Performance testing also requires automation, with the aid of a tool designed to simulate multiple users performing identical actions within a short (or, at least, defined) period of time.
Performance testing attempts to verify that an application can perform satisfactorily while serving a given number of users. Typically, the test plan determines the number of virtual users, which can easily exceed a thousand; hence the requirement for automation. Endurance testing, an element of performance testing sometimes referred to as “stability testing,” verifies that the application can continue to perform for an extended period under conditions that are as close to real-life parameters as possible. Load testing fits easily into this category, as it attempts to verify either the number of simultaneous users that can be serviced or the level of parallel requests for data manipulation (which can include transfer volume and/or data modification activity).
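To make the "many simulated users" idea concrete, here is a minimal load-test sketch in Python. It is not a real tool: `handle_request` is a hypothetical stand-in for the application under test (an actual load test would call the deployed service), and the user counts and thread pool size are illustrative assumptions.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the application under test. A real load test
# would send a request to the running service and time the round trip.
def handle_request(user_id: int) -> float:
    start = time.perf_counter()
    time.sleep(0.001)  # simulate roughly 1 ms of server-side work
    return time.perf_counter() - start

def run_load_test(num_users: int = 200) -> dict:
    """Fire one request per simulated user concurrently, then summarize latency."""
    with ThreadPoolExecutor(max_workers=50) as pool:
        latencies = sorted(pool.map(handle_request, range(num_users)))
    return {
        "requests": len(latencies),
        "avg_ms": 1000 * sum(latencies) / len(latencies),
        "p95_ms": 1000 * latencies[int(0.95 * len(latencies))],
    }

results = run_load_test(200)
print(results)
```

A dedicated load-testing tool adds ramp-up schedules, distributed workers, and reporting, but the core loop (spawn virtual users, record latency, summarize) looks much like this.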
Stress testing, another form of performance testing, attempts to find the point at which the application begins to break down under duress. One of the important elements of stress testing is determining how the program reacts to overload: does it simply quit, or does it shut down systematically, sending warning messages to users and management as appropriate?
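The "find the breaking point" process can be sketched as a ramp: increase the load step by step and stop at the first level where the system starts rejecting work. Everything here is hypothetical; `fake_service` simulates an application with a hard capacity of 500 requests, standing in for a real system under stress.

```python
# Hypothetical service with a hard capacity: beyond `capacity` requests per
# step it starts rejecting work, as an overloaded real server might.
def fake_service(load: int, capacity: int = 500) -> dict:
    served = min(load, capacity)
    return {"served": served, "rejected": load - served}

def find_breaking_point(start: int = 100, step: int = 100, max_load: int = 2000) -> int:
    """Ramp the offered load upward; return the first level where requests are rejected."""
    load = start
    while load <= max_load:
        if fake_service(load)["rejected"] > 0:
            return load  # the application began shedding load at this level
        load += step
    return -1  # no breakdown observed within the tested range

breaking_point = find_breaking_point()
print(breaking_point)  # 600 with the assumed capacity of 500
```

In a real stress test, the interesting output is not just the number but the behavior at that number: clean error responses and warnings versus a silent crash.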
Usability testing checks whether the program is easy to use and does what the user wants in an intuitive manner. A relatively new practice among software developers is to have the users perform these tests themselves. Results are solicited through surveys containing questions such as “Did you find the program easy to use?” and “Do you have any suggestions for improvements?” This form of testing is extremely useful to the developer because it often gets directly to the root of any existing usability problems.
The term “user-friendly” comes to mind under the umbrella of usability testing. “User-friendly” often refers to the user being able to progress through the program without having to make any abstract or unexpected decisions that could cause the program to abort inadvertently. This may take the form of yes/no questions, radio buttons, or a selection from a dropdown list of choices. The more user-friendly a program is, the less likely it is to fail the usability test. Of course, writing such user-friendly code places a much greater demand on the programmer.
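The constrained-choices idea above can be illustrated with a tiny helper. This `prompt_choice` function is a hypothetical example, not part of any real UI toolkit: instead of letting free-form input reach logic that might abort on an unexpected value, it accepts only the allowed options and signals the caller to re-prompt otherwise.

```python
def prompt_choice(options, raw_value):
    """Return the normalized choice if it is one of the allowed options,
    or None so the caller can re-prompt instead of aborting.

    Hypothetical helper illustrating the constrained-choices idea; a real
    UI would wire the same constraint to radio buttons or a dropdown."""
    normalized = raw_value.strip().lower()
    return normalized if normalized in options else None

print(prompt_choice(["yes", "no"], " YES "))  # forgiving match on a valid option
print(prompt_choice(["yes", "no"], "maybe"))  # rejected gracefully, not a crash
```

The extra code the programmer writes (normalization, the rejection path, the re-prompt) is exactly the “much greater demand” the paragraph refers to.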
The latest addition to the whole software testing scheme involves the testing of security. Again, the need for this type of testing became evident with the creation of the Internet. Hackers originally took pleasure in breaking into a system and modifying the behavior of an application. Their motivation was primarily self-promotion: showing the world just how smart they were. As the Internet became more commercial, though, it quickly became evident that there was money to be made in the practice of hacking.
There are numerous programs and services available to scan soon-to-be-released software for security vulnerabilities. They present a risk-based analysis of the program's exposure, indicating whether the defects found need immediate attention or can wait for the next release. At any level, they merely point out the security problems; it's still up to the developers to come up with the solutions.
The difficulty with testing security is that it involves the human element. Passwords, personal questions, and other means of authenticating the user must be safeguarded by the users themselves. If a user is not dutifully inclined to protect that security information, then all the security testing in the world is for naught. Writing passwords down, giving them to others, or simply leaving a computer open and unattended are invitations to a security disaster.
Wrapping It Up
I hope you have enjoyed and gotten something out of this exploration of non-functional software testing. Obviously, my approach was merely an overview of the whole concept. Entire books (emphasis on the plural) have been written on the subject. My intent was to present the major facets and, hopefully, spark some interest in the subject.
Do you have any experience with non-functional software testing? Do you have any stories of security disasters or seemingly unsquashable software bugs?