
Improving software testing skills and manual vs. automated testing

Q- I am working as a software tester in India and I want to become more efficient in this field. Could you please tell me what to do for that? Could you also give me some tips for manual testing and tell me which you think is best, manual or automated testing?

 

There are many ways to learn about software testing, including reading books and articles, attending training classes, listening to webcasts and hands-on work in the field. But there is no one answer book or one path to learn software testing. Consider, too, that there are many different types of software and what works well for one type of software may not work at all for another type. I'm using the word "type" to identify financial applications, e-commerce Web sites, medical devices, embedded software, manufacturing applications and so forth.

Tools and techniques that work in one environment might not work at all with a different type of application. I point this out not to confuse you or to be obscure, but as a reminder that as you learn what makes sense for what you are testing now, it would be best not to become too attached or rigid in your thinking about testing software. Even if you remain testing one type of software, technology changes and oftentimes our tools and techniques have to change as well.

These warnings aside, there are two books in the software testing field I often recommend:

  1. Testing Computer Software, 2nd Edition
    By Cem Kaner, Jack Falk, Hung Q. Nguyen
    ISBN-13: 978-0471358466
  2. Lessons Learned in Software Testing
    By Cem Kaner, James Bach, Bret Pettichord
    ISBN-13: 978-0471081128

Keep reading and keep learning. As James Bach suggested in his tutorial on Self-Education for Testers, build your own curriculum. Find the books, articles, Web sites and blogs that help you learn. Build your own educational plan. James does a great job of pointing out how to survey a book. I interpreted his idea this way -- sometimes we want to learn in detail and sometimes we just need to know a little bit about a topic. For instance, if you decide you want to learn more about relational databases, investigate books at your local bookstore or library. You can also go online for continued research. Surveying multiple topics will help you choose which topics you want to learn in depth and which you want to gain an awareness of.

Additionally, I recommend finding opportunities to work with the technology you want to know more about. You can read, research via the internet, attend training classes and read tech magazines, but nothing compares to live experience. Seek work opportunities that help you grow.

Look at your current position and determine if there are ways to increase your knowledge with the opportunities you already have access to. I recommend a mix of learning -- learn technical skills about current technology and learn about testing practices. If you vary your reading, your education will become more well-rounded. I wouldn't recommend trying to learn one area -- say, Java -- in detail without also learning more about requirements building or managing a defect process. So many topics are important in the field of software testing that it's nearly impossible to create one list and say, "Here, go and read these books." But I do believe it's essential to keep reading and keep learning. I generally plan several study times for myself each week.

Automated vs. manual testing
Regarding your question about which form of testing is best, manual or automated: there is no one right answer. Here are a few considerations to help you assess whether a task is worth automating.

  • Will the functionality need to be tested multiple times due to multiple internal builds before production release?
  • Will the functionality need to be tested on different operating systems and/or on different browsers?
  • Will the functionality need to be tested with many different types of data?
  • If automation is built, will it be used for one product release or for many releases in the future?
  • Does your company have sufficient staff to bring a tool in-house?
  • Do you have or can you gain sufficient knowledge of the tool and automation techniques in general to put a tool to good use?



Guidelines and Checklist for Website Testing

The World Wide Web is browsed by customers with varying levels of knowledge, so while testing websites (static or dynamic) the QA department should concentrate on several aspects to ensure the website is presented effectively on the Web.

 

Aspects to cover

 

  1. Functionality

  2. Usability

  3. User Interface

  4. Server-side Interface

  5. Compatibility

  6. Security

  7. Performance

 

Description

 

1. Functionality

 

1.1 Links

 

The objective is to check all the links in the website.

 

1.1.1 All hyperlinks

1.1.2 All internal links

1.1.3 All external links

1.1.4 All mail links

1.1.5 Check for orphan pages

1.1.6 Check for broken links (a sketch of an automated link check follows below)
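
Below is a minimal sketch of how such a link check could be automated in Python, assuming the site's home page URL and using the requests and BeautifulSoup libraries; only links on a single page are collected, and mail links are simply reported for manual verification.

  # Minimal sketch of an automated check for items 1.1.1-1.1.6.
  # The page URL is an assumption for illustration.
  from urllib.parse import urljoin

  import requests
  from bs4 import BeautifulSoup

  page_url = "https://example.com/"
  soup = BeautifulSoup(requests.get(page_url).text, "html.parser")

  for anchor in soup.find_all("a", href=True):
      href = anchor["href"]
      if href.startswith("mailto:"):
          print(f"mail link (verify address manually): {href}")
          continue
      link = urljoin(page_url, href)           # resolves relative/internal links
      status = requests.head(link, allow_redirects=True, timeout=10).status_code
      if status >= 400:
          print(f"BROKEN ({status}): {link}")

A fuller check would also crawl internal pages; comparing the crawled set against the full list of pages on the server is one way to find orphan pages.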

 

1.2 Forms

 

Check for the integrity of submission of all forms

 

1.2.1 All field-level checks

1.2.2 All field-level validations (a sketch follows after this list)

1.2.3 Functionality of create, modify, delete and view

1.2.4 Handling of wrong inputs

1.2.5 Default values, if any (standard)

1.2.6 Optional versus mandatory fields
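
As a minimal sketch of items 1.2.2, 1.2.4 and 1.2.6, the snippet below posts invalid input to a form and checks that the server rejects it. The URL, field names and the "error" marker in the response are assumptions for illustration.

  import requests

  FORM_URL = "https://example.com/register"  # hypothetical registration form

  # Mandatory field left empty: the submission should not be accepted.
  missing_name = {"name": "", "email": "user@example.com", "age": "30"}
  response = requests.post(FORM_URL, data=missing_name)
  assert "error" in response.text.lower(), "empty mandatory field was accepted"

  # Wrong input type: a non-numeric age should also be rejected.
  bad_age = {"name": "Test User", "email": "user@example.com", "age": "abc"}
  response = requests.post(FORM_URL, data=bad_age)
  assert "error" in response.text.lower(), "invalid age value was accepted"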

 

1.3 Cookies

 

1.3.1 Check which cookies need to be enabled and when they should expire
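
A minimal sketch of a cookie check, assuming a login page URL and a session cookie named "sessionid" (both are illustrative assumptions):

  import datetime
  import requests

  response = requests.get("https://example.com/login")
  for cookie in response.cookies:
      if cookie.expires is None:
          print(f"{cookie.name}: session cookie (expires when the browser closes)")
      else:
          expiry = datetime.datetime.fromtimestamp(cookie.expires)
          print(f"{cookie.name}: persistent cookie, expires {expiry}")

  # Example check: the session cookie should not persist after the browser closes.
  assert all(c.expires is None for c in response.cookies if c.name == "sessionid")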

 

1.4 Web Indexing

 

Depending on how the site is designed (meta tags, frames, HTML syntax, dynamically created pages, passwords or different languages), the site will be searchable in different ways.

 

1.4.1 Meta tags

1.4.2 Frames

1.4.3 HTML syntax

 

 

1.5 Database

Two types of errors may occur in a web application:

 

1.5.1 Data integrity: missing or wrong data in tables

1.5.2 Output errors: errors in write, edit or read operations on the tables

 

 

2. Usability

 

How easily a customer can browse the website

 

2.1 Navigation

Navigation describes the way a user moves within a webpage between different user interface controls (buttons, text boxes, combo boxes, dropdown lists, etc.).

 

   2.1.1 Application navigation works properly via the Tab key (keyboard)

   2.1.2 Application navigation via the mouse

   2.1.3 Main features are accessible from the main/home page (mother window)

   2.1.4 Any hotkeys or control keys to access menus

 

2.2 Content

Correctness is whether the information is truthful or contains misinformation. Accuracy is whether the information is free of grammatical or spelling errors. Remove irrelevant information from the site; it may otherwise cause misunderstanding or confusion.

 

  2.2.1 Spelling and grammar

  2.2.2 Up-to-date information (contact details, mail IDs, help, reports)

 

2.3 General appearance

 

  2.3.1 Page appearance

 2.3.2 Colour, font size

 2.3.3 Frames

 2.3.4 Consistent Designs

 2.3.5 Symbols and logos (localization)

 

 

       User Interface

 

This covers how exactly the website looks (the view).

Basic guidelines for the web user interface (based on Microsoft guidelines):

 

3.1 Verify that screen resolution has been taken into account in the browser: will the UI resize itself as you maximize/minimize the window? This must be tested with various screen resolutions.

3.2 Verify that the number of controls per page has been checked; generally 10 is a good number.

3.3 Where there are options to choose from, the user should be forced to choose from a set of radio buttons, and the default should point at a default message (None, Select one, etc.).

3.4 Every dropdown box should have NONE as its first choice (or another meaningful prompt such as "Choose one" or "Select").

3.5 Ensure persistence of all values: when you enter values in a form, move on to the next page and then return to change one of the previous entries, you must not be forced to re-enter all the values again. This must be checked and ensured.

3.6 Horizontal scrolling is generally not preferable; avoid using a horizontal scroll bar, and ensure that the vertical scroll bar is used judiciously.

3.7 Consider the use of pagination where appropriate.

3.8 Ensure that Shift-click and Ctrl-click work in list boxes, and confirm these features work in all supported browsers.

3.9 The OK and Cancel buttons should be located at the bottom right of the screen, and their placement should be consistent.

3.10 Verify that the password is encrypted from the login page all the way to the back end (the login page must not transmit the password in clear text).

3.11 Illegal operations should produce popup messages (the message should be simple and clear).

3.12 Verify whether there is a requirement to use image maps in the application (does Netscape support this well?).

3.13 Positive popup messages should be displayed (submitted, deleted, updated, done, cleared).

3.14 Ensure that check boxes are used when multiple selections are to be made.

3.15 Avoid long, scrolling dropdown lists; keep them short.

3.16 The website URL should be short and simple.

 

 

4. Server-side Interface

 

4.1 Server-side interface

 

4.1.1 Verify that communication is done correctly: web server to application server, application server to database server, and vice versa

4.1.2 Compatibility of server software, hardware, network connections

4.1.3 Database compatibility

4.1.4 External interface if any

 

5. Client-side Compatibility

 

5.1 Platforms

Check the website's compatibility with:

5.1.1 Windows (95, 98, 2000, NT)

5.1.2 Unix

5.1.3 Linux

5.1.4 Macintosh (if applicable)

5.1.5 Solaris (if applicable)

 

5.2 Browsers

5.2.1 Internet Explorer (3.x, 4.x, 5.x)

5.2.2 Netscape Navigator (3.x, 4.x, 6.x)

5.2.3 AOL

5.2.4 Browser settings (security settings, graphics, Java, etc.)

5.2.5 Frames and cascading style sheets

5.2.6 HTML specification

 

5.3 Graphics

Loading of images, graphics, etc.

 

6. Security

 

6.1 Valid and invalid login

6.2 Limits defined for the number of login attempts

6.3 Can login be bypassed by typing the URL of an internal page directly into the browser?

6.4 Verify that log files are maintained to store information for traceability

6.5 Verify that encryption is done correctly if SSL is used (if applicable; a sketch of one simple check follows below)

6.6 No access to edit scripts on the server without authorization
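
One simple, automatable check in this area is confirming that the login form submits over HTTPS, so the password is never sent in clear text (see item 3.10 as well). This is a minimal sketch; the login URL and the assumption of a single HTML form on the page are illustrative.

  from urllib.parse import urljoin

  import requests
  from bs4 import BeautifulSoup

  login_url = "https://example.com/login"
  page = requests.get(login_url)
  form = BeautifulSoup(page.text, "html.parser").find("form")

  # Resolve the form action against the page URL (an empty action posts back
  # to the same page).
  action = urljoin(login_url, form.get("action") or "")
  assert action.startswith("https://"), f"Login form posts to insecure URL: {action}"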

 

7. Performance

 

7.1 Connection speed

7.1.1 Try with different connection speeds (14.4, 28.8 and 56.6 Kbps modems, ISDN, cable, DSL, T1, T3)

 

7.2 Load

7.2.1 Perform load tests as per the SLA (Service Level Agreement)

7.2.2 What is the estimated number of users per time period, and how will it be distributed over that period?

7.2.3 Will there be peak loads, and how will the system react to them?

7.2.4 Can the site handle a large number of users requesting a certain page?

7.2.5 Is a large amount of data transferred from the user?

 

7.3 Stress

7.3.1 Stress testing is done in order to actually break a site or certain features to determine how the system reacts

7.3.2 Stress tests are designed to push and test system limitations and determine whether the system recovers gracefully from crashes

Note: Hackers often stress systems by feeding in large amounts of invalid data until the system crashes, then try to gain access to it during start-up

7.3.3 System abnormal conditions (stress testing)

          Reduced network bandwidth

          Low disk space

          Low processor speed

 

7.4 Soak test

7.4.1 Is the application (or certain features) going to be used only during certain periods of time, or will it be used continuously, 24x7?

7.4.2 Will downtime be allowed, or is that out of the question?

7.4.3 Verify that the application is able to meet the requirements and does not run out of memory or disk space




Tools, methods to test software more efficiently

Several classes of testing tools are available today that make the testing process easier, more effective and more productive. When properly implemented, these tools can provide a test organization with substantial gains in testing efficiency.

However, test tools need to fit into the overall testing architecture and should be viewed as process enablers -- not as the "answer." Test organizations will often look to tools such as reviews, test management, test design, test automation and defect tracking to facilitate the process. It is quite common for a testing tool or family of tools to address one or more of those needs, but for convenience they will be addressed from a functional perspective not a "package" perspective.

It is important to note that, as with any tool, improper implementation or ad-hoc implementation of a testing tool can negatively impact the testing organization. Ensure a rigorous selection process is adhered to when selecting any tool, and do such things as a needs analysis, an on-site evaluation and an assessment of return on investment (ROI).

Reviews
Reviews and technical inspections are the most cost-effective way to detect and eliminate defects in any project. This is also one of the most underutilized testing techniques. Consequently there are very few tools available to meet this need. Any test organization that is beginning to realize the benefits of reviews and inspections but is encountering scheduling issues between participants should look to an online collaboration tool. There are several tools available for the review and update of documents, but only one that I'm aware of actually addresses the science and discipline of reviews -- ReviewPro by Software Development Technologies. I normally do not "plug" a particular tool, but when the landscape is sparse I believe some recognition is in order.

Test management
Test management encompasses a broad range of activities and deliverables. The test management aid or set of aids selected by an organization should integrate smoothly with any communication (email, network, etc.) and automation tools that will be applied during the testing effort. Generic management tools will often address the management of resources, schedules and testing milestones but not the activities and deliverables specific to the testing effort -- test requirements, test cases, test results and analysis.

Test requirements: Requirements or test requirements often become the responsibility of the testing organization. For any set of requirements to be useful to the testing organization, it must be maintainable, testable, consistent and traceable to the appropriate test cases. The requirements management tool must be able to fulfill those needs within the context of the testing team's operational environment.

Test cases: The testing organization is responsible for authoring and maintaining test cases. A test case authoring and management tool should enable the test organization to catalogue, author, maintain and manually execute test cases, as well as execute automated tests. These test cases need to be traceable to the appropriate requirements, and the results recorded in such a manner as to support coverage analysis. Key integration questions when looking at test case management tools: Does it integrate with the test requirement aid? Does it integrate with the test automation tools being used? Will it support coverage analysis?

Test results and analysis: A test management suite of tools and aids needs to be able to report on several aspects of the testing effort. There is an immediate requirement for test case results -- which test case steps passed and which failed? There will be periodic status reports that will address several aspects of the testing effort: test cases executed/not executed, test cases passed/failed, requirements tested/not tested, requirements verified/not verified, and coverage analysis. The reporting mechanism should also support the creation of custom or ad-hoc reports that are required by the test organization.

Test automation
Several test automation frameworks have been implemented over the years by commercial vendors and testing organizations: record and playback, extended record and playback, and load/performance.

Record and playback: Record and playback frameworks were the first commercially successful testing solutions. The tool simply records a series of steps or actions against the application and allows a playback of the recording to verify that the behavior of the application has not changed.

Extended record and playback: It became quickly apparent that a simple record and playback paradigm was not very effective and did not make test automation available to non-technical users. Vendors extended the record and playback framework to make the solution more robust and transparent. These extensions included data-driven, keyword and component-based solutions.
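
As a minimal sketch of the data-driven idea (not tied to any particular vendor's tool), the same test logic below is executed once per row of data; the check_login function and the module name myapp are hypothetical.

  import pytest

  from myapp import check_login  # hypothetical function under test

  LOGIN_CASES = [
      # (username, password, expected_result)
      ("alice", "correct-password", True),
      ("alice", "wrong-password", False),
      ("", "any-password", False),
      ("alice", "", False),
  ]

  @pytest.mark.parametrize("username,password,expected", LOGIN_CASES)
  def test_login(username, password, expected):
      assert check_login(username, password) == expected

Keyword-driven frameworks take the same separation a step further: the data table also names the action to perform, so non-technical users can compose tests from a vocabulary of keywords.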

Load/performance: Load/performance test frameworks provide a mechanism to simulate transactions against the application being tested and to measure the behavior of the application while it is under this simulated load. The load/performance tool enables the tester to load, measure and control the application.

Defect tracking
The primary purpose of testing is to detect defects in the application before it is released into production; furthermore, defects are arguably the only product the testing team produces that is seen by the project team. The defect management tools must enable the test organization to author, track, maintain, trace to test cases, and trace to test requirements any defects found during the testing effort. The defect management tool also needs to support both scheduled and ad-hoc analysis and reporting on defects.

 

Web application testing: The difference between black, gray and white box testing


Black, white and gray box tests provide different approaches for assessing the security of Web applications. Each approach has specific advantages and disadvantages, and selecting a testing approach needs to be done based on the time and resources available, as well as the overall goals of the test being performed.

You can assume most real-world attackers will approach systems from a black-box perspective. But to better account for the advantage attackers have with regard to time and resources, and to avoid relying on security through obscurity, gray and white box tests can be appropriate approaches as well. Maximizing the security value of testing approaches when you have limited time and resources requires careful test planning and a thorough understanding of how testing constraints affect the completeness of testing results.

Let's take a look at the differences between the three tests.

Black box testing
Black box testing refers to testing a system without specific knowledge of its internal workings, with no access to the source code and no knowledge of the architecture.

In essence, this approach most closely mimics how an attacker typically approaches applications. However, due to the lack of internal application knowledge, the uncovering of bugs and/or vulnerabilities can take significantly longer. Black box tests must be attempted against running instances of applications, so black box testing is typically limited to dynamic analysis such as running automated scanning tools and manual penetration testing.

White box testing
White box testing, which is also known as clear box testing, refers to testing a system with full knowledge and access to all source code and architecture documents. Having full access to this information can reveal bugs and vulnerabilities more quickly than the "trial and error" method of black box testing. Additionally, you can be sure to get more complete testing coverage by knowing exactly what you have to test.

However, because of the sheer complexity of architectures and volume of source code, white box testing introduces challenges regarding how to best focus the testing and analysis efforts. Also, specialized knowledge and tools are typically required to assist with white box testing, such as debuggers and source code analyzers.

In addition, if white box testing is performed using only static analysis techniques using the application source code and without access to a running system, it can be impossible for security analysts to identify flaws in applications that are based on system misconfigurations or other issues that exist only in a deployment environment of the application in question.

Gray box testing
When we talk about gray box testing, we're talking about testing a system while having at least some knowledge of its internals. This knowledge is usually constrained to detailed design documents and architecture diagrams. Gray box testing sits between black box and white box testing, combining aspects of each.

Gray box testing allows security analysts to run automated and manual penetration tests against a target application. And it allows those analysts to focus and prioritize their efforts based on superior knowledge of the target system. This increased knowledge can result in more significant vulnerabilities being identified with a significantly lower degree of effort and can be a sensible way for analysts to better approximate certain advantages attackers have versus security professionals when assessing applications.

Selecting a testing methodology
The testing approach you use should depend on a number of factors, including time allocated to the assessment, access to internal application resources and goals of the test.

Tests intended to best approximate short-term efforts of attackers with limited resources can be conducted using black box methodologies.

If the test is intended to reflect longer-term efforts by attackers who have more significant resources, gray box tests can help to reflect knowledge that attackers might learn about application internals without requiring the assessment team to expend the full amount of resources that would be available to attackers.

Teams that need to make the most insightful and far-reaching recommendations about applications within a limited amount of time should use white or clear box testing.

Security testers should be flexible and be able to plan a test approach for any of these scenarios given the time and access to resources available for an application. By meshing the availability of assessment time and testing resources with the overall goals of the testing, analysts can select a testing methodology that will maximize the security benefits of the findings within the given constraints. Given an understanding that time and testing resources favor attackers in the wild, assessment teams should optimize their activities accordingly.

Designing test cases using Cause-Effect Graphing Technique

Q- Can you explain how you can design test cases using the Cause-Effect Graphing Technique?

A- The Cause-Effect Testing Technique is another of several efforts for mapping input to output/response. In the Cause-Effect Graphing Technique, input and output are modeled as simple text, such as this:

Cause --> intermediate node --> effect

See the Wikipedia article "Cause-effect graph" for additional information.

Much more information about the Cause-Effect Graphing Technique can be found in the "Cause-Effect Graphing User Guide," a full PDF manual for the Bender RBT testing tool. (Note that this manual was written in 2006, so it might be somewhat out of date.)

The goal of cause-effect graphing is to reduce the number of test cases run. Many input/modifiers/effect combinations have the same output and exercise the same code in the system under test. By reducing the combinations, fewer tests can be run while still achieving the same confidence in quality.

A somewhat related modeling concept is PICT -- Pairwise Independent Combinatorial Testing. This technique goes a step beyond cause-effect and actually applies combinatorial theory to reduce the number of combinations that need to be tested.
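
As a minimal sketch of the pairwise idea (not Microsoft's PICT tool itself), the snippet below covers every pair of parameter values with far fewer test cases than the full 3 x 3 x 2 = 18 combinations; the parameters and their values are illustrative.

  from itertools import combinations, product

  parameters = {
      "browser": ["IE", "Netscape", "Firefox"],
      "os": ["Windows", "Linux", "Mac"],
      "connection": ["dial-up", "broadband"],
  }

  names = list(parameters)

  # Every pair of values (from two different parameters) must appear together
  # in at least one test case.
  uncovered = set()
  for name1, name2 in combinations(names, 2):
      for value1, value2 in product(parameters[name1], parameters[name2]):
          uncovered.add(((name1, value1), (name2, value2)))

  tests = []
  while uncovered:
      # Greedily pick the full combination that covers the most uncovered pairs.
      best_case, best_covered = None, set()
      for values in product(*(parameters[n] for n in names)):
          case = dict(zip(names, values))
          covered = {pair for pair in uncovered
                     if all(case[n] == v for n, v in pair)}
          if len(covered) > len(best_covered):
              best_case, best_covered = case, covered
      tests.append(best_case)
      uncovered -= best_covered

  for test in tests:
      print(test)
  print(f"{len(tests)} pairwise cases instead of 18 exhaustive combinations")

For this example the greedy search typically lands on around nine cases, since the two three-valued parameters alone force at least nine browser/OS pairings.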

Each theory has its application. In a highly related system where cause and intermediate effect are tightly coupled, it might be a challenge to model the system in PICT (although some PICT tools, including Microsoft's free PICT modeling tool, do allow you to weight various factors as well as create combinations).

My concern about cause-effect is simple: How could you model cause-effect on a high-availability Web service with 30 methods, each containing 40 or 50 parameters? By manually identifying each cause and effect, you'll spend so much time detailing the test that I'm not convinced you could actually test the application. This is why tools like Bender RBT exist -- to help automate this process.

My biggest complaint about a methodology like cause-effect is its lack of real-world application. I doubt your manager would tolerate your spending five or six days modeling your application into a cause-effect graph. Your manager is more likely to expect you to apply good testing sense and develop cases quickly, while keeping an eye on critical combinations. Still, further research is warranted, and if you can efficiently apply this model to your efforts, you'll be rewarded with higher quality at a reduced input cost.

How to test Web services

In many ways testing a Web service is no different than testing anything else. You still do all the same steps of the test method, but some of them can take on a bit of a different flavor. Here are a couple of tidbits I've picked up over the last couple of years testing Web services.

Determining coverage and oracles
When testing Web services, I find that schema coverage becomes important. You still probably care about all the same coverage you cared about with non-Web service testing (scenario coverage, requirements coverage, code coverage and application data coverage), but now you get to add schema coverage to the list.

For me, schema coverage means not only performing a schema validation against all the response XML that comes back from the service you're testing; it also includes testing for the right number of maximum and minimum repeating elements, correct reference identification tag values and other subtle nuances of XML. This can sometimes create an oracle problem. I find that I'm often comparing against a schema along with a mapping document of some sort. A mapping document tells me what data should be stored in what element and when it should be there.
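
A minimal sketch of the schema-validation part, using Python's lxml library; the file names are placeholders that follow the request/response naming convention described in the next section.

  from lxml import etree

  schema = etree.XMLSchema(etree.parse("service.xsd"))
  response = etree.parse("create-order-rs.xml")

  if not schema.validate(response):
      for error in schema.error_log:
          print(f"line {error.line}: {error.message}")

Constraints such as the minimum and maximum occurrences of repeating elements can be expressed in the XSD itself (minOccurs/maxOccurs), so a thorough schema makes this one check do double duty.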

Determining the test procedures
One of the initial problems to be solved when performing Web service testing is management of all the data files (typically XML, but not always). There are tools that do this file management for you (I'll cover some of them briefly below), but you should have a game plan for files that need to be managed manually. When I'm testing I typically track three files: a request file, an expected response file, and an actual response file.

The naming convention I've picked up along the way is to put a "-rq.xml", "-rs.xml", or "-result.xml" on the end of all my files. That way for each file name you have a set of XMLs that paint the entire picture for that test case. Once you get a naming convention worked out, get the files into source control if you don't have a tool that keeps them all straight for you.

Once you have all these XMLs (sometimes hundreds, sometimes thousands), now you get the joy of keeping them up to date. I've found that schema changes can happen quite often on a project, as can mapping changes. Whenever one of those changes occurs, you get the pleasure of updating all those XML files. And remember, if you have 100 test cases, you have 200 files -- assuming you don't bother to update your actual result file, since you'll be re-running the test case.

The way I handle most schema changes is to have a Ruby script handy that I can use to make the updates for me programmatically. If you do it enough times, you eventually build a library of scripts for just about every type of change that comes your way. There are some exceptions to that where you may need to do some changes manually, but I find those changes don't normally affect all the test cases, just a subset.
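
The author's scripts are in Ruby; purely as an illustration of the same idea, here is a minimal Python sketch that renames an element tag across all request and expected-response files after a schema change. The old and new tag names are assumptions, and XML namespaces are ignored for brevity.

  import glob

  from lxml import etree

  OLD_TAG, NEW_TAG = "customerId", "customerRef"  # hypothetical schema change

  for path in glob.glob("testcases/*-rq.xml") + glob.glob("testcases/*-rs.xml"):
      tree = etree.parse(path)
      changed = False
      for element in tree.iter(OLD_TAG):
          element.tag = NEW_TAG
          changed = True
      if changed:
          tree.write(path, xml_declaration=True, encoding="UTF-8")
          print(f"updated {path}")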

Operating the test system
There are a lot of great tools available for testing web services. MindReef SoapScope is a fine commercial option, and SoapUI is a fine open source option. I've found them both to have all the basic features I've needed. SoapScope is a bit better about data trending and analysis. SoapUI has a few more technical features for test execution. I've also spent some time using IBM Rational for SOA Quality, which is an Eclipse-based tool focused on Web service testing. It's a latecomer, but has a nice feature set.

More often than not, I find that I use homegrown tools written in Ruby and Java to perform Web service testing. Homegrown tools give you more control over the interfaces and features, and a working knowledge of what's actually going on in the test tool you're using. There are some drawbacks, like support and documentation. By contrast, even the open source SoapUI team does a fantastic job in support -- they turned around a defect for me in 24 hours once. That's better than you'll see from MindReef or IBM, I'm sure.

Try a couple of tools before you settle. I've found that I switch between the different tools based on the team I'm working with and what we're trying to do. Once you've compared a couple of them, you'll figure out which features you really need and which are nice but optional.

Evaluating the test results
Evaluating the results for Web service tests can sometimes be really easy, and sometimes painful. There are a couple of things you'll want to practice getting good at (a sketch of the first two follows after this list):

  • writing XSLTs to transform your actual response to mask out values you don't care about (server dates, for example);
  • writing XPath queries to check for specific values in an XML document;
  • learning all the command-line options of your favorite diff tool;
  • and ensuring you have at least one person on the team who knows the schema inside and out and can see the entire mapping document in their head when they look at the response files.
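
A minimal sketch of the first two points, again using lxml and the file-naming convention above; the element names ("orderStatus", "serverDate") are illustrative assumptions, and a real comparison would also normalize whitespace before diffing.

  from lxml import etree

  actual = etree.parse("create-order-result.xml")

  # XPath-style check: assert one specific value instead of diffing whole files.
  status = actual.findtext(".//orderStatus")
  assert status == "CONFIRMED", f"unexpected order status: {status}"

  # Masking: blank out values we don't care about (e.g. server timestamps)
  # in both files before comparing actual and expected responses.
  expected = etree.parse("create-order-rs.xml")
  for tree in (actual, expected):
      for element in tree.iter("serverDate"):
          element.text = "MASKED"

  print("responses match:", etree.tostring(actual) == etree.tostring(expected))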

The better you are at the first three, the less important the last one is. I've also found that custom logging (both in your test execution tool and in the Web service under test) can also help add visibility to the results. Depending on what you're testing, sometimes you can cut out the need for file comparison entirely.

 

Once you get up to speed with the basics of manually testing the Web service, performance testing is normally trivial. Some of the tools have the ability to generate load built in. If you're using a homegrown option, all you need to do is thread your requests and you have the same ability. Getting the usage model right can sometimes be a challenge, but it's doable.
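
A minimal sketch of "thread your requests" for simple load generation; the endpoint URL, request file, headers and thread/request counts are all assumptions for illustration.

  import time
  from concurrent.futures import ThreadPoolExecutor

  import requests

  ENDPOINT = "https://example.com/ws/orders"
  BODY = open("create-order-rq.xml", "rb").read()
  HEADERS = {"Content-Type": "text/xml"}

  def one_call(_):
      start = time.time()
      response = requests.post(ENDPOINT, data=BODY, headers=HEADERS, timeout=30)
      return response.status_code, time.time() - start

  with ThreadPoolExecutor(max_workers=20) as pool:     # 20 concurrent virtual users
      results = list(pool.map(one_call, range(200)))   # 200 requests in total

  errors = [code for code, _ in results if code != 200]
  latencies = sorted(duration for _, duration in results)
  print(f"errors: {len(errors)}/{len(results)}")
  print(f"median latency: {latencies[len(latencies) // 2]:.2f}s")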

Other concerns that come into play can be authentication/authorization and encryption. I've not personally had to do a lot there, but I know that it's a problem for some. I imagine that if you're testing a payment gateway, as your question states, you'll need to do some investigation around what that means for your data and the tools you can use.



How to evaluate testing software and tools

Once a testing organization reaches a certain size, level of maturity or workload, the requirement to purchase or build testing software or aids becomes apparent. There are several classes of testing tools available today that make the testing process easier, more effective and more productive. Choosing the appropriate tool to meet the testing organization's long-term and short-term goals can be a challenging and frustrating process. But following a few simple guidelines and applying a common-sense approach to software acquisition and implementation will lead to a successful implementation of the appropriate tool and a real return on investment (ROI).

One of the simplest questions to ask when looking at testing software is "What is ROI?" The simplest answer is "Anything that reduces the resource (people) hours required to accomplish any given task." Testing tools should be brought into an organization to improve the efficiency of a proven testing process. The value of the actual process has already been established within the organization or within the industry.

Example: Test management
The organization has meticulously tracked test requirements and test cases using spreadsheets, but it finds this to be a cumbersome process as the test organization grows. It has been shown that this process has reduced the number of defects reaching the field, but the cost of maintaining the approach now impacts its effectiveness. Solution: Invest in a test management tool or suite of tools.

Example: Test automation
The organization has created a suite of manual test cases using a text editor, but it finds it difficult to maintain, use and execute these test cases efficiently as the test organization's role grows. The test cases have proven effective in detecting defects before they reach production, but the time required to manage and execute these test cases now impacts the ROI. Solution: Invest in a test automation tool or suite of tools.

Example: Defect Management
The test organization has implemented a defect tracking process using email and a relational database, but it now finds that defects are being duplicated and mishandled as the volume of defects grows. Solution: Upgrade the current in-house solution or invest in a defect management tool.

Needs analysis

The first thing an organization must accomplish is to catalogue what needs or requirements the testing software is expected to satisfy. For an organization that is new to the acquisition process, this can be an intimidating exercise. There are three categories or "points of view" that must be addressed: management/organization, test architecture and end user.

Needs analysis: Management/organization
Management or the test organization needs to clearly state what the objective is for purchasing testing software. It must state the mission or goal that will be met by acquiring the test software and the expected ROI in terms of person-hours once the tool has been fully implemented. That can be accomplished by creating a simple mission statement and a minimum acceptable ROI.

It should be noted that any ROI of fewer than three hours saved for every hour invested should be considered insufficient because of the impact of introducing a new business process into the testing organization. This should be a concise statement of the overall goal (one to three sentences), not a dissertation or catalogue of the product's capabilities.

Example: Test management
The selected Test Management system shall enable end users to author and maintain requirements and test cases in a Web-enabled, shareable environment. Furthermore, the test management tool shall support test management "best practices" as defined by the Test Organization. The minimum acceptable ROI is four hours saved for every hour currently invested.

Example: Test automation
The selected Test Automaton tool shall enable end users to author, maintain and execute automated test cases in a Web-enabled, shareable environment. Furthermore, the test automation tool shall support test case design, automation and execution "best practices" as defined by the Test Organization. The minimum acceptable ROI is five hours saved for every hour currently invested.

Example: Defect management
The selected Defect Management tool shall enable end users to author, maintain and track/search defects in a Web- and email-enabled, shareable environment. Furthermore, the defect management tool shall support authoring, reporting and tracking "best practices" as defined by the Test Organization. The minimum acceptable ROI is four hours saved for every hour currently invested.

Needs analysis: Test architecture
Management has defined the immediate organizational goal, but the long-term architectural necessities must be defined by the testing organization. When first approaching the acquisition of testing software, test organizations usually have not invested much effort in defining an overall test architecture. Lack of an overall test architecture can lead to product choices that may be effective in the short term but lead to additional long-term costs or even replacement of a previously selected toolset.

If an architectural framework has been defined, then the architectural needs should already be clearly understood and documented. If not, then a general set of architectural guidelines can be applied. Example:

The selected testing software and tool vendor shall do the following:

  1. Have a record of integrating successfully with all Tier 1 testing software vendors.
  2. Have a history of operational success in the appropriate environments.
  3. Have an established end-user community that is accessible to any end user.
  4. Support enterprisewide collaboration.
  5. Support customization.
  6. Support several (1 to n) simultaneous engagements/projects.
  7. Provide a well-designed, friendly and intuitive user interface.
  8. Provide a smooth migration/upgrade path from one version of the product to the next.
  9. Provide a rich online help facility and effective training mechanisms (tutorials, courseware, etc.).

The general architectural requirements for any tool will include more objectives than the nine listed above, but it is important to note that any objective should be applied across the entire toolset.

Needs analysis: End user
The end user needs analysis should be a detailed dissertation or catalogue of the envisioned product capabilities as they apply to the testing process. (It would probably be a page or more of requirements itemized or tabulated in such a way as to expedite the selection process.) This is where the specific and perhaps unique product capabilities are listed. The most effective approach is to start from a set of general requirements and then extend into a catalogue of more specific/related requirements.

Example: Test management
The Test Management solution shall do the following:

  1. Support the authoring of test requirements.
  2. Support the maintenance of test requirements.
  3. Support enterprise wide controlled access to test requirements (Web-enabled preferred).
  4. Support discrete grouping or partitioning of test requirements.
  5. Support traceability of requirements to test cases and defects.
  6. Support "canned" and "user-defined" queries against test requirements.
  7. Support "canned" and "user-defined" reports against test requirements.
  8. Support coverage analysis of test requirements against test cases.
  9. Support the integration of other toolsets via a published API or equivalent capacity.
  10. And so on…

The key here is to itemize the requirements to a sufficient level of detail and then apply these requirements against each candidate.

Example: Test automation
The Test Automation solution shall do the following:

  1. Support the creation, implementation and execution of automated test cases.
  2. Support enterprise wide, controlled access to test automation (Web-enabled preferred).
  3. Support data-driven automated test cases.
  4. Support keyword-enabled test automation.
  5. Integrate with all Tier 1 and 2 test management tools that support integration.
  6. Integrate with all Tier 1 and 2 defect management tools that support integration.
  7. Enable test case design within a non-technical framework.
  8. Enable test automation and verification of Web, GUI, .NET, and Java applications.
  9. Support the integration of other toolsets via a published API or equivalent capacity.
  10. And so on…

Once again, the key is to itemize the requirements to a sufficient level of detail. It is not necessary for all the requirements to be "realistic" in terms of what is available. Looking to the future can often lead to choosing a tool that eventually does provide the desired ability.

Example: Defect management
The Defect Management solution shall do the following:

  1. Support the creation of defects.
  2. Support the maintenance of defects.
  3. Support the tracking of defects.
  4. Support enterprise wide controlled access to defects (Web-enabled preferred).
  5. Support integration with all Tier 1 and 2 test management tools that support integration.
  6. Enable structured and ad-hoc searches for existing defects.
  7. Enable the categorization of defects.
  8. Enable customization of defect content.
  9. Support "canned" and customized reports.
  10. And so on…

In all cases, understanding of the basic needs will change as you proceed through the process of defining and selecting appropriate testing software.

In addition, for each case you must make sure that a particular vendor does not redefine the initial goal. Becoming an educated consumer in any given product space will lead to a redefinition of the basic requirements that should be recognized and documented.

Identify candidates

Identifying a list of potential software candidates can be accomplished by investigating several obvious sources: generic Web search, online quality assurance and testing forums, QA and testing e-magazines and co-workers. Once you create a list of potential software candidates, an assessment of currently available reviews can be done. Remember to keep an eye open for obvious marketing ploys.

It is also important to note which products command the largest portion of the existing market and which product has the fastest growth rate. This relates to the availability of skilled end users and end-user communities. Review the gathered materials against the needs analysis and create a short list of candidates (three to five) for assessment.

Assess candidates

If you have been very careful and lucky, your first encounter with the vendor's sales force will occur at this time. This can be a frustrating experience if you are purchasing a relatively small number of licenses, or it can be an intimidating one if you are going to be placing an order for a large number of licenses. Being vague as to the eventual number of licenses can put you in the comfortable middle ground.

Assessments of any testing software should be accomplished onsite with a full demonstration version of the software. When installing any new testing software, install it on a typical end-user system, check for "dll" file conflicts, check for registry entry issues, check for file conflicts, and ensure the software is operational. Record any issues discovered during the initial installation and seek clarification and resolution from the vendor.

Once the testing software has been installed, assess the software against the previous needs analysis -- first performing any available online tutorials and then applying the software to your real-world situation. Record any issues discovered during the assessment process and seek clarification and resolution from the vendor. Any additional needs discovered during an assessment should be recorded and applied to all candidates.

The assessment process will lead to the assessment team gaining skills in the product space. It is always wise to do one final pass of all candidates once the initial assessment is completed. Each software candidate can now be graded against the needs/requirements and a final selection can be made.

Implementation

Implementation obviously is not part of the selection process, but it is a common point of failure. Test organizations will often invest in testing software but not in the wherewithal to successfully use it. Investing hundreds of thousands of dollars in software but not investing capital in onsite training and consulting expertise is a recipe for disaster.

The software vendor should supply a minimum level of training for any large purchase and be able to supply or recommend onsite consultants/trainers who will ensure the test organization can take full advantage of the purchased software as quickly as possible. By bringing in the right mix of training, consulting and vendor expertise, the test organization can avoid much of the disruption any change in process brings and quickly gain the benefits that software can provide.