How to specialize in performance testing

Q: I have been learning performance testing for the past eight months. However, I have not been given any opportunities in performance testing at my present company, even though I performed well in the internal interviews. I really want to become a good performance tester -- it is my dream. Please guide me as to how I can make this happen.

Expert's response: I think this is a great question. It's specific, which makes it easy to start answering, but it's also general enough that anyone who is interested in specializing in something within software testing should be able to pull something from the answer. I think there are three overarching dynamics to your question:

· How can you best structure your future learning to support your goals?

· How can you best market your abilities to get the opportunities you want?

· How can you best structure the work you're currently doing to support your work and learning objectives?

Continue learning about performance testing
The activity that you have the most control over is your own learning. I've been studying and doing performance testing for eight years, and I honestly still learn something new about performance testing almost every week. It's a deep and rich specialization in software testing. There's a lot to performance testing that still needs to be formalized and written down. It's still a growing body of knowledge.

If your dream is performance testing, then you need to continue to learn. Reading articles, blogs, books and tool documentation is a good place to start. Attending conferences, training, workshops and local groups is a great place to meet others who have similar passions. If you don't have opportunities like those, then join one of the many online communities where performance testers have a presence. Depending on your learning style, dialog and debate can be as great a teacher as reading, if not greater.

Finally, no learning is complete without practice. I'm so passionate about the topic of practice that I wrote an entire article on it. Many of the materials you read will include exercises. Work through them. Many of the conferences, training, and workshops you attend will show examples. Repeat them. Going through the work on your own, even if you already know the outcome, provides a different kind of learning. Some people learn best when the experience is hands-on.

For performance testing, I think a great place to start practicing is in the open source community. Given the nature of performance testing, most tool knowledge is transferable to other performance testing tools. Learning multiple open source tools will also give you different ideas for how you can solve a performance testing problem. Many times, our available tools anchor our thinking about how to approach the problem. If you've practiced with multiple tools, you're more likely to have variety in your test approaches and solutions.
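If it helps to see how small a practice exercise can be, here is a minimal sketch of a load script for Locust, one open source tool you might practice with. The /login, /articles and /search endpoints, the credentials and the task weights are all hypothetical placeholders for whatever application you choose to practice against.

    from locust import HttpUser, task, between

    class BrowsingUser(HttpUser):
        # Think time between user actions, in seconds.
        wait_time = between(1, 3)

        def on_start(self):
            # Hypothetical login step; adjust to the application under test.
            self.client.post("/login", data={"user": "demo", "password": "demo"})

        @task(3)
        def browse_content(self):
            self.client.get("/articles")

        @task(1)
        def search(self):
            self.client.get("/search", params={"q": "performance"})

Repeating the same workload in a second tool (JMeter, for instance) is a quick way to see how differently two tools frame the same problem.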

Once you know how to use a couple of performance testing tools, if you can't seem to get the project work you need at your current employer, and you're unwilling or unable to leave for another opportunity, then I recommend volunteering your time. There are a lot of online communities that help connect people who want to volunteer their technical talents to nonprofits or other community-minded organizations. Finding project work outside of your day job can be just as valuable as formal project work.

Start marketing your skills and abilities
If you're serious about performance testing as a career, I recommend you start pulling together some marketing material. A resume is where most people focus their limited marketing effort, and that could be a good place for you to start as well. What story does your resume tell a potential employer? Is it that you're a performance tester? How has each of your past experiences helped you develop a specific aspect of performance testing? Remember, one of the great challenges performance testing presents to practitioners is its variety. That makes it easy to relate a variety of experiences to the skills a performance tester needs.

Don't forget to include your training on your resume. I've had to remind several people to list classes they've attended, workshops they've participated in, or online communities in which they've been active members for years, because none of it was on their resumes. If it helps you tell the story of your expertise, get it on there. Include anything that shows an employer that you're passionate about performance testing and you're continuously learning more about it.

Depending on the types of companies you want to work for, or the types of projects you might want, a certification might be appropriate. Certifications relevant to performance testing aren't just performance testing tool certifications. Appropriate certifications may also come in the form of programming languages (e.g., Java certification), networking (e.g., CCNA), application servers (e.g., WebSphere administrator certification), databases (e.g., Oracle certification), or even a certification in the context you want to work in (e.g., CPCU certification if you want to work in the Insurance industry). I'm not normally a big fan of certifications, but they are clear marketing products.

Finally, I think the best way to market yourself is to write. Start by being active in an online community. Answer questions on forums or debate ideas on mailing lists. As you learn, catalog your learning in a blog so others can benefit from your hard work. If you feel you're really starting to understand a specific aspect of performance testing, try writing an article or paper on it (for example, email your idea to an editor at SearchSoftwareQuality.com -- they'll point you in the right direction for help if you need it). Present your idea at a conference or workshop. The more of a public face you develop by writing, the more you learn. My experience has been that people are very vocal in their feedback on what you write, so you stand to learn a lot. Even if you don't become the next Scott Barber, when a potential employer Googles your name, they'll quickly see that you know something about performance testing and have a passion for it.

Align your project work with performance testing activities
Even if you can't get performance testing projects at your current employer, you can still get project work that relates to performance testing. Does your team test Web services? See if you can get involved; it will get you experience with XML, various protocols and, often, specialized tools. Does your team test databases? See if you can get involved; it will get you experience with SQL and managing large datasets. Does your team write automated tests? See if you can get involved; it will get you experience programming and dealing with the problems of scheduled and distributed tests. Does your team do risk-based testing? See if you can get involved; it will get you experience modeling the risk of an application or feature and teach you how to make difficult choices about which tests to run. I could go on with more examples. Take your current opportunities and make them relevant for learning more about performance testing.

If you can't get your own performance testing project, ask if you can work with someone else. What if you volunteer some of your time? What if you work under someone else's supervision for a while? Work with your current manager to understand what factors are preventing them from giving you the opportunity. Perhaps they can't give you the opportunity for a number of reasons outside their direct control. Perhaps they can, but they just haven't given it enough attention. After a conversation where you try to figure it out with them, you should have an idea of what opportunities are available at that company. Just recognize that sometimes you have to leave for different opportunities. If you do that, make sure you're clear with your new employer as to what your expectations are.

I hope that's helpful. Your question is a great one, and I feel like it covers a general concern software testers have. The general form of the answer is the same for people who might want to specialize in security testing, test automation, Web service testing, test management, or any other aspect of testing where there can be specialization. Stay focused on your learning and development, actively market your knowledge and abilities, and work to align your work with your goals -- even if that means taking projects outside of the specialization to help you develop a specific skill.



How to do integration testing

Q: How do testers do integration testing? What are the top-down and bottom-up approaches in integration testing?

Expert's response: Ironically, integration testing means completely different things to completely different companies. At Microsoft, we typically referred to integration testing as the testing that occurs at the end of a milestone and that "stabilizes" a product. Features from the new milestone are integration-tested with features from previous milestones. At Circuit City, however, we referred to integration testing as the testing done just after developers check in -- the stabilization testing that occurs when code from two developers is integrated. I would call this feature testing, frankly…

But to answer your question, top-down vs. bottom-up testing is simply the way you look at things. Bottom-up testing is the testing of code that could almost be considered an extension of unit testing. It's very much focused on the feature being implemented and that feature's outbound dependencies, meaning how that feature impacts other areas of the product/project.

Top-down, on the other hand, is testing from a more systemic point of view. It's testing an overall product after a new feature is introduced and verifying that the features it interacts with are stable and that it "plays well" with other features.

The key to testing here is that you are in the process of moving beyond the component level and testing as a system. Frankly, neither approach alone is sufficient. You need to test the parts with the perspective of the whole. One part of this testing is seeing how the system as a whole responds to the data (or states) generated by the new component. You want to verify that data being pushed out by the component are not only well-formatted (what you tested during component testing) but that other components are expecting and can handle that well-formatted data. You also need to validate that the data originating within the existing system are handled properly by the new component.

Real-world examples? Well, let's assume you are developing a large retail management system, and an inventory control component is ready for integration. Bottom-up testing would imply that you set up a fair amount of equivalence-classed data in the new component and introduced that new data into the system as a whole. How does the system respond? Are the inventory amounts updated correctly? If you have inventory-level triggers (e.g., if the total count of pink iPod Nanos falls below a certain threshold, generate an electronic order for more), does the order management system respond accordingly? This is bottom-up testing.

At the same time, you want to track how well the component consumes data from the rest of the system. Is it handling inventory changes coming in from the Web site? Does it integrate properly with the returns system? When an item's status is updated by the warehouse system, is it reflected in the new component?
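As a rough illustration of the two directions, here is a minimal sketch in Python. The InventoryComponent and OrderManagementSystem classes are hypothetical stand-ins for the real components; the point is only the shape of the two checks -- one test pushes data out of the new component and verifies the rest of the system reacts, the other verifies the new component handles data that originates elsewhere.

    # Hypothetical stand-ins for the inventory example above; in a real
    # project these would be the actual component interfaces.

    REORDER_THRESHOLD = 10  # assumed trigger level, for illustration only


    class OrderManagementSystem:
        """Existing component that the new one feeds."""
        def __init__(self):
            self.orders = []

        def place_order(self, sku):
            self.orders.append(sku)


    class InventoryComponent:
        """New component under integration."""
        def __init__(self, order_system):
            self.counts = {}
            self.order_system = order_system

        def record_sale(self, sku, qty):
            self.counts[sku] = self.counts.get(sku, 0) - qty
            if self.counts[sku] < REORDER_THRESHOLD:
                # Outbound dependency: push a reorder into the wider system.
                self.order_system.place_order(sku)

        def receive_warehouse_update(self, sku, qty):
            # Inbound dependency: consume data originating elsewhere.
            self.counts[sku] = self.counts.get(sku, 0) + qty


    def test_bottom_up_low_inventory_triggers_order():
        # Bottom-up: introduce data through the new component and verify
        # the rest of the system responds (an order is generated).
        oms = OrderManagementSystem()
        inventory = InventoryComponent(oms)
        inventory.receive_warehouse_update("pink-ipod-nano", 12)
        inventory.record_sale("pink-ipod-nano", 5)  # drops below threshold
        assert "pink-ipod-nano" in oms.orders


    def test_top_down_warehouse_update_reflected():
        # Top-down: drive the system-level flow and verify the new component
        # correctly consumes data originating in the existing system.
        oms = OrderManagementSystem()
        inventory = InventoryComponent(oms)
        inventory.receive_warehouse_update("pink-ipod-nano", 40)
        assert inventory.counts["pink-ipod-nano"] == 40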

We see constant change in the testing profession, with new methodologies being proposed all the time. This is good -- it's all part of moving from art to craft to science. But just as with anything else, we can't turn all of our testing to one methodology because one size doesn't fit all. Bottom-up and top-down testing are both critical components of an integration testing plan and both need considerable focus if the QA organization wants to maximize software quality.




Test coverage: Finding all the defects in your application

Q: If a trace matrix does not meet the requirement for test coverage, what would you suggest instead? And as a team leader, how can I assure that a team member has covered all functionality?

Expert's response: The trace matrix is a well-established test coverage tool. Let me offer a quick definition: the purpose of the trace matrix is to map one or more test cases to each system requirement, and it is usually formatted as a table. The fundamental premise is that if one or more test cases have been mapped to each requirement, then every requirement of the system must have been tested, and therefore the trace matrix proves testing is complete.
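To make the premise concrete, here is a tiny sketch of what a trace matrix boils down to; the requirement and test case IDs are invented for the example.

    # A trace matrix reduced to its essence: each requirement maps to
    # zero or more test cases, and anything unmapped is flagged.
    trace_matrix = {
        "REQ-001 User can log in":         ["TC-101", "TC-102"],
        "REQ-002 User can reset password": ["TC-110"],
        "REQ-003 Account lockout":         [],   # no test case mapped yet
    }

    uncovered = [req for req, cases in trace_matrix.items() if not cases]
    print("Requirements without test cases:", uncovered)

Notice that a check like this only shows that at least one test case is mapped to each requirement. It says nothing about whether the requirements or the test cases themselves are any good, which is where my reservations come in.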

I see flaws in this line of reasoning. Here are my primary reservations about over-reliance on the trace matrix:

  1. A completed trace matrix is only as valuable as its contents. If the requirements are not complete or clear, then the test cases designed and executed might fulfill the requirements, but the testing won't have provided what was needed. Conversely, if the requirements are clear but the test cases are insufficient, then a completed trace matrix still doesn't deliver the testing coverage and confidence that is being sought from a completed table.
  2. The trace matrix design relies too stringently on system requirements -- that is the primary design of the trace matrix -- to ensure all system requirements have been tested. But all sorts of defects can be found outside of the system requirements that are still relevant to the application providing a solution for the customer. By looking only at the system requirements and potentially not considering the customers' needs and real life product usage, essential testing could be overlooked. Testing only according to specified requirements may be too narrowly focused to be effective in real life usage -- unless the requirements are exceptionally robust.

Overall, I feel the trace matrix might provide a clean, high-level view of testing, but a checked-off list doesn't prove an application is ready to ship. The reason some people value the trace matrix is that it attempts to offer an orderly view of testing, but in my experience testing is rarely such a tidy task.

So how do you call the end of testing? And how can you assure test coverage?

  1. To be able to assure coverage at the end, I'd start by reviewing the beginning -- look at the test planning. Did your test planning include a risk analysis? A risk analysis at the start of a project can provide solid information for your test plan. Hold a risk analysis session, formally or informally, and gather ideas by talking with multiple people. Get different points of view: talk to your project stakeholders, your DBAs, your developers, your network staff and your business analysts. Plan testing based on your risk analysis.
  2. As a project continues, shift testing based on the defects found and on the product and project as they evolve. Focus on high-risk areas. Adapt testing based on your and your testing team's experience with the product. Be willing to adjust your test plan throughout the project.
  3. Throughout testing, watch the defects reported. Keep having conversations and debriefs with hands-on testers to understand not just what they've tested but how they feel about the application. Do they have defects they've seen but haven't been able to reproduce? What is their perception of the current state of the application?

In my view, there is no single tool, including the trace matrix, that signals testing is complete. But the combination of knowing how testing was planned and adapted throughout the project, a thorough review of the defects reported and remaining, and the current state of the application according to your and your team's experience should provide you with an objective assessment of the product and its test coverage.




Don't mistake user acceptance testing for acceptance testing

If you think software testing in general is badly misunderstood, acceptance testing (a subset of software testing) is even more wildly misunderstood. This misunderstanding is most common with commercially driven software as opposed to open source software and software being developed for academic or research and development reasons.

This misunderstanding baffles me because acceptance testing is one of the most consistently defined testing concepts I've encountered over my career both inside and outside of the software field.

First, let's look at what Wikipedia has to say about acceptance testing:

"In engineering and its various subdisciplines, acceptance testing is black-box testing performed on a system (e.g. software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery…

"In most environments, acceptance testing by the system provider is distinguished from acceptance testing by the customer (the user or client) prior to accepting transfer of ownership…

"A principal purpose of acceptance testing is that, once completed successfully, and provided certain additional (contractually agreed) acceptance criteria are met, the sponsors will then sign off on the system as satisfying the contract (previously agreed between sponsor and manufacturer), and deliver final payment."

This is consistent with the definition Cem Kaner uses throughout his books, courses, articles and talks, which collectively are some of the most highly referenced software testing material in the industry. The following definition is from his Black Box Software Testing course:

"Acceptance testing is done to check whether the customer should pay for the product. Usually acceptance testing is done as black box testing."

Another extremely well-referenced source of software testing terms is the International Software Testing Qualifications Board (ISTQB) Standard glossary of terms. Below is the definition from Version 1.3 (dated May 31, 2007):

"Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system."

I've chosen those three references because I've found that if Wikipedia, Cem Kaner and the ISTQB are on the same page related to a term or the definition of a concept, then the testing community at large will tend to use those terms in a manner that is consistent with these resources. Acceptance testing, however, is an exception.

There are several key points on which these definitions/descriptions agree:

  1. In each case, acceptance testing is done to determine whether the application or product is acceptable to someone or some organization and/or if the person or organization should pay for the application or product AS IS.
  2. "Finding defects" is not so much as mentioned in any of those definitions/descriptions, but each implies that defects jeopardize whether the application or product becomes "accepted."
  3. Pre-determined, explicitly stated, mutually agreed-upon criteria (between the creator of the application or product and the person or group that is paying for or otherwise commissioned the software) are the basis for non-acceptance.

    Wikipedia refers to this agreement as a contract and identifies non-compliance with the terms of the contract as a reason for non-payment. And Kaner references the contract through the question of whether the customer should pay for the product.

    The ISTQB does not directly refer to either contract or payment but certainly implies the existence of a contract (an agreement between two or more parties for the doing or not doing of something specified).

With that kind of consistency, you'd think there wouldn't be any confusion.

The only explanation I can come up with is that many people involved with software development have experience only with "user acceptance testing," and as a result they develop the mistaken impression that "user acceptance testing" is synonymous with "acceptance testing."

If my experiences with software user acceptance testing are common, I can understand where the confusion comes from. All of the software user acceptance testing that I have firsthand experience with involves representative users being given a script to follow and then being asked if they were able to successfully complete the task they were assigned by the person who wrote, or at least tested, the script.

Since the software had always been rather strenuously tested against the script, the virtually inevitable feedback that all of the "users" found the software "acceptable" is given to the person or group paying for the software. The person or group then accepts the software -- and pays for it.


There are several flaws with that practice, at least as it relates to the definitions above.

  1. The "users" doing the acceptance testing are not the people who are paying for the development of the software.
  2. The person or group paying for the development of the software is not present during the user acceptance testing.
  3. The "users" are provided with all of the information, support, instructions, guidance and assistance they need to ultimately provide the desired "yes" response. Frequently this assistance is provided by a senior tester who has been charged with the task of coordinating and conducting user acceptance testing and believes he is doing the right thing by going out of his way to provide a positive experience for the "users."
  4. By the time user acceptance testing is conducted, the developers of the software are ready to be done with the project and the person or group paying for the development of the software is anxious to ship the software.

If that is the only experience a person has with acceptance testing, I can see why one may not realize that the goal of user acceptance testing is to answer whether the end users will be satisfied enough with the software -- which obviously ships without the scripts and the well-meaning senior tester to help out -- to want to use and/or purchase it.

I have no idea how many dissatisfied end users, unhappy commissioners of software and unacceptable software products this flawed process is responsible for, but I suspect that it is no small number.

In an attempt to avoid dissatisfied end users, unhappy commissioners of software and unacceptable software products, whenever someone asks me to be a part of any kind of acceptance testing -- whether qualified by additional terms like "user," "build," "system," "automated," "agile," "security," "continuous," "package," "customer," "business process," "market," or something else -- I pause to ask the following:

"For what purpose is who supposed to be deciding whether or not to accept what, on behalf of whom?"

My question often confuses people at first, but so far it has always led to some kind of acceptance testing that enables decisions about the software and its related contracts. And that is what acceptance testing is all about.




How to define a test strategy

Q: I want to define one test strategy that is suitable for all the teams in my organization. What questions do I need to ask the developers to define a test strategy?

Expert's response: This is a broad question with several possible meanings. However, I'll take a stab at it. It sounds to me like the question is how to drive a system test strategy, i.e., once each component of a project has been component-tested, how do you test the system as a whole? What do you need to know from developers to build that strategy?

When it comes to system testing, to be frank I want less information from the developers than I do from the customers. I want to approach my system testing from a scenario basis. That having been said, there are some important things to know about the components -- specifically, how they interact with each other. What are the outbound and inbound dependencies (i.e., what data is transferred between components)? The key here is to ask questions that don't offload the burden of testing from you to the developers. You can't ask a developer, "How do I test this?" because, if he answers you, he might as well do it himself. What you need to ask are questions such as "How does this component interact with that one?" or (better yet) "I've been reading the technical specification for your component, and I have a couple of questions." Then ask your specific questions.

To put it into a real-world analogy, let's say you're testing a procurement and inventory control application for a small gas business. The application may consist of a procurement piece (code that automates ordering delivery of gas), a projection piece (code that projects short- and mid-term inventory needs), and a delivery tracking piece (code that verifies the gas ordered is delivered, even if it's split up among several deliveries). At this point in your test planning, your interview with developers will focus on the data being shared --which portions of the database are common and which are specific to a given component. You'll also ask how components modify shared data. While this data modification may have been tested to specification during initial testing, it's very possible that the original specification overlooked some element of interaction and the spec is deficient.

Another key step is to examine the test strategy for each individual component. Here you are looking for the overlap: Which cases do two or more components have in common? Often, the team developing the integration test strategy will have spent time identifying system-level test cases, and you can leverage them here.

In our real-world example, interviewing the test leads from each team should result in them sharing with you the cases they felt were "system test cases" by nature -- cases that cover interaction, cases that cover dependencies, and so on. The test lead for the procurement component, for instance, might have identified cases that cover order size and delivery date, and will want to be sure the delivery tracking piece handles split orders appropriately. Through these interviews, you should build a list of the cases the components have in common.

Finally, as I mentioned, you want to spend a lot of time in system-level testing thinking about scenarios. You want to define how a user will interact with your product and follow that interaction as it goes from component to component. You definitely want to speak with the customer, and in two phases. First, sit down with the customer and ask them to work with you to identify key customer scenarios. Document everything! Then, from that meeting, develop any other scenarios or obvious variations on scenarios. Prioritize these scenarios and write up the steps that comprise each scenario. Finally, return to the customer and validate your final scenarios and the steps that cover them.

In our real-world example, you'd walk through the lifetime of an order -- from the moment the projection component identifies a new order is needed, through order placement and then fulfillment.

You definitely want to script scenarios that cover full order delivery as well as split deliveries. You want to run a scenario that probes how the projection component deals with fluctuating demand, and so on. Once you've identified a set of scenarios, script high-level steps for them. Circle back, refine your scenarios and steps, and then sit with the customer and have them validate your planning -- taking feedback and modifying appropriately.
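As a sketch of what the scripted output of that exercise might look like, here is one way to capture the prioritized scenarios and their high-level steps before validating them with the customer. The structure and the step wording are illustrative only.

    # Hypothetical prioritized scenarios for the gas procurement example.
    scenarios = [
        {
            "name": "Full order lifecycle",
            "priority": 1,
            "steps": [
                "Projection component detects inventory below the short-term threshold",
                "Procurement component places a delivery order automatically",
                "Delivery tracking records a single full delivery against the order",
                "Inventory and projections reflect the received quantity",
            ],
        },
        {
            "name": "Split delivery",
            "priority": 2,
            "steps": [
                "Procurement places an order for a large quantity",
                "Delivery tracking records two partial deliveries",
                "Order is marked fulfilled only after the combined quantity matches",
            ],
        },
    ]

    # Print the scenarios in priority order for review with the customer.
    for s in sorted(scenarios, key=lambda s: s["priority"]):
        print(f"Scenario {s['priority']}: {s['name']}")
        for i, step in enumerate(s["steps"], 1):
            print(f"  {i}. {step}")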

Through this research, you can begin to identify a system-level test approach that gives you the best chance of verifying customer functionality. You'll minimize test case overlap with your integration-level testing as well. The key to good system-level work is focusing on higher-level testing (scenario-based testing) and minimizing your component-level testing (assuming component-level testing has been carried out successfully). If you involve developers in this stage, be sure to do so as a planning augmentation. Don't ask them to define your strategy for you. Bring them in as valued experts, but be sure to minimize the questions you ask them. You want them on your side when it comes time to advocate for fixes.




What software testers can learn from children

When I went back to consulting, I started my own company -- not because I wanted to run a company, but because I didn't want to have to answer to anyone else when I chose to not travel during baseball season so I could coach my son's team. In the same spirit, when I work from home, I frequently do so in a room with my boys, who are naturally curious about what I'm doing. Over the past few years of this, I've learned a lot of things about being a good tester from them. Some of the most significant are these:

Don't be afraid to ask "Why?" As testers, we know to embrace our natural curiosity about what applications or systems do and how they do them, but we have a tendency to forget to ask why someone would want an application or system to do it. We tend to assume that if the application or system does something, some stakeholder wants the application or system to do that thing.

My children make no such assumption. Every time I start explaining to them what the application or system I'm testing does, they ask why it does what it does. No matter how many times they ask, I'm always surprised when I find that I don't have a good answer. More important, I'm amazed by the number of times, after realizing that I don't have an answer and posing the question to a stakeholder, I find that they don't know either. In my experience, this has a strong positive correlation with rather significant changes being made to the design or functionality of the application or system.

Exploratory play is learning. Over the years, I have found that many testers seem to limit their testing to what they are instructed to test. My children have no such "testing filter." They are forever asking me what a particular button does or asking if they can type or use the mouse. Invariably, they find a behavior that is obviously a bug, but that I'd have been unlikely to uncover. The most intriguing part is they can almost always reproduce the bug, even days later. They may not have learned to do what the application was intended to do, but they have learned how to get the application to do something it wasn't intended to do -- which is exactly what we as testers ought to do.

Recording your testing is invaluable. When Taylor was younger, he couldn't reproduce the defects he found. All he knew was to call me when the things he saw on the screen stopped making sense. Recently, I found a solution to this (since we all have trouble reproducing bugs, at least sometimes). I now set up a screen and voice recorder so that after a test session I can play back and watch the narrated video of the session. I can even edit those videos and attach segments of them to defect reports. Besides, Taylor loves sitting with me to watch the video of him testing and to listen to his own voice, recorded while I took a phone call or did whatever else called me away and left him alone at the keyboard.

"Intuitive" means different things to different people. The more we know about what a system or application is supposed to do, the more intuitive we believe it is. My boys not only don't know what the applications and systems I test are supposed to do, but things like personnel management, retirement planning and remote portal administration are still a bit beyond them. That said, showing them a screen and asking, "What do you think Daddy is supposed to do now?" can point out some fascinating design flaws. For example, even Nicholas, who now reads well, will always tell me that he thinks I'm supposed to click the biggest or most brightly colored button or that he thinks I'm supposed to click on some eye-catching graphic that isn't a button or link at all. In pointing this out, he is demonstrating to me that the actions I am performing are unlikely to be intuitive for an untrained user.

Fast enough depends on the user. I talk about how users will judge the overall goodness of response time based on their expectations. My children expect everything to react at video game speed. They have absolutely no tolerance for delay.

You can never tell what a user may try to do with your software. When pointing out a bug that results from an unanticipated sequence of activity, we are often faced with the objection of "No user would ever do that." (Which James Bach interprets to mean, "No user we like would ever do that intentionally.") Interestingly enough, that objection melts away when I explain that I found the defect because one of my boys threw a ball that fell on the keyboard, or sat down and started playing with the keyboard when I got up to get a snack.

Sometimes the most valuable thing you can do is take a break. Granted, my boys didn't teach me this directly, but I have learned that when I am sitting in front of my computer, jealously listening to them play while I'm frustrated by my inability to make progress, taking a break to go play with them for a while almost always brings me back to the computer refreshed and with new ideas.

Speaking of taking a break, my boys are waking up from their nap, so I think I'm going to go play for a while.

Ten software testing traps

Everyone at some point in their careers faces difficulties. The problem could be not having enough resources or time to complete projects. It could be working with people who don't think your job is important. It could be lack of consideration and respect from managers and those who report to you.
Software testers aren't exempt from this. But as Jon Bach pointed out in "Top 10 tendencies that trap testers," a session he presented at StarEast a couple of weeks ago, software testers often do things that affect their work and how co-workers think about them.
Bach, manager for corporate intellect and technical solutions at Quardev Inc., reviewed 10 tendencies he's observed in software testers that often trap them and limit how well they do their job.
"If you want to avoid traps because you want to earn credibility, want others to be confident in you, and want respect, then you need to be cautious, be curious and think critically," he said.
Here's a look at what Bach considers the top 10 traps and how to remedy them:
10. Stakeholder trust: This is the tendency to search for or interpret information in a way that confirms your preconceptions. But what if a person's preconceptions are wrong? You can't automatically believe or trust people when they say, "Don't worry about it," "It's fixed," or "I'll take care of it."
Remedies include learning to trust but then verify that what the person says is correct. Testers should also think about the tradeoffs compared with opportunity costs, as well as consider what else might be broken.
9. Compartmental thinking: This means thinking only about what's in front of you. Remedies include thinking about opposite dimensions -- light vs. dark, small vs. big, fast vs. slow, etc. Testers can also exercise a brainstorm tactic called "brute cause analysis" in which one person thinks of an error and then another person thinks of a function.
8. Definition faith: Testers can't assume they know what is being asked of them. For example, if someone says, "Test this," what do you need to test for? The same goes for the term "state." There are many options.
What testers need to do is push back a little and make sure they understand what is expected of them. Is there another interpretation? What is their mission? What is the test meant to find?
7. Inattentional blindness: This is the inability to perceive features in a visual scene when the observer is not attending to them. An example of this is focusing on one thing or being distracted by something while other things go on around you, such as during a magic trick.
To remedy this, testers need to increase their situational awareness, manage the scope and depth of their attention, and look for different things -- and at the same things in different ways.
6. Dismissed confusion: If a tester is confused by what he's seeing, he may think, "It's probably working; it's just something I'm doing wrong." He needs to instead have confidence in his confusion. Fresh eyes find bugs, and a tester's confusion is more than likely picking up on something that's wrong.
5. Performance paralysis: This happens when testers are overwhelmed by the number of choices to begin testing. To help get over this, testers can look at the bug database, talk with other testers (paired testing), talk with programmers, look at the design documents, search the Web and review user documentation.
Bach also suggests trying a PIQ (Plunge In/Quit) cycle -- plunge in and just do anything. If it's too hard, then stop and go back to it. Do this several times -- plunge in, quit; plunge in, quit; plunge in, quit. Testers can also try using a test planning checklist and a test plan evaluation.
4. Function fanaticism: Don't get wrapped up in functional testing. Yes, those types of tests are important, but don't forget about structure tests, data tests, platform tests, operations tests and time tests. To get out of that trap, use or invest in your own heuristics.
3. Yourself, untested: Testers tend not to scrutinize their own work. They become complacent about their testing knowledge, stop learning more about testing, and end up with malformed tests and misleading bug titles. Testers need to take a step back and test their testing.
2. Bad oracles: An oracle is a principle or mechanism used to recognize a problem. You could be following a bad one. For example, how do you know a bug is a bug? Testers should file issues as well as bugs, and they should mention in passing to people involved that things might be bugs.
1. Premature celebration: You may think you've found the culprit -- the show-stopping bug. However, another bug may be one step away. To avoid this, testers should "jump to conjecture, not conclusions." They should find the fault, not just the failure.
Testers can also follow the "rumble strip" heuristic. The rumble strip runs along most highways. It's a warning that your car is heading into danger if it continues on its current path. Bach says, "The rumble strip heuristic in testing says that when you're testing and you see the product do strange things (especially when it wasn't doing those strange things just before) that could indicate a big disaster is about to happen."

What to include in a performance test plan

Before performance testing can be performed effectively, a detailed plan should be formulated that specifies how performance testing will proceed from a business perspective and technical perspective. At a minimum, a performance testing plan needs to address the following:

  • Overall approach
  • Dependencies and baseline assumptions
  • Pre-performance testing actions
  • Performance testing approach
  • Performance testing activities
  • In-scope business processes
  • Out-of-scope business processes
  • Performance testing scenarios
  • Performance test execution
  • Performance test metrics

As in any testing plan, try to keep the amount of text to a minimum. Use tables and lists to articulate the information. This will reduce the incidence of miscommunication.

Overall approach
This section of the performance plan lays out the overall approach for this performance testing engagement in non-technical terms. The target audience is the management and the business. Example:

"The performance testing approach will focus on the business processes supported by the new system implementation. Within the context of the performance testing engagement, we will:

· Focus on mitigating the performance risks for this new implementation.

· Make basic working assumptions on which parts of the implementation need to be performance-tested.

· Reach consensus on these working assumptions and determine the appropriate level of performance and stress testing that shall be completed within this compressed time schedule.



This is a living document; as more information is brought to light and as we reach consensus on the appropriate performance testing approach, this document will be updated."

Dependencies and baseline assumptions
This section of the performance test plan articulates the dependencies (tasks that must be completed) and baseline assumptions (conditions testing believes to be true) that must be met before effective performance testing can proceed. Example:

"To proceed with any performance testing engagement the following basic requirements should be met:

· Components to be performance tested shall be completely functional.

· Components to be performance tested shall be housed in hardware/firmware components that are representative of or scalable to the intended production systems.

· Data repositories shall be representative of or scalable to the intended production systems.

· Performance objectives shall be agreed upon, including working assumptions and testing scenarios.

· Performance testing tools and supporting technologies shall be installed and fully licensed."

Pre-performance testing actions
This section of the performance test plan articulates pre-testing activities that could be performed before formal performance testing begins to ensure the system is ready. It's the equivalent of smoke testing in the functional testing space. Example, with a sketch of one such stub after it:

"Several pre-performance testing actions could be taken to mitigate any risks during performance testing:

· Create a "stubs" or "utilities" to push transactions through the QA environment -– using projected peak loads.

· Create a "stubs" or "utilities" to replace business-to-business transactions that are not going to be tested or will undergo limited performance. This would remove any dependencies on B2B transactions.

· Create a "stubs" or "utilities" to replace internal components that will not be available during performance testing. This would remove any dependencies on these components.

· Implement appropriate performance monitors on all high-volume servers."
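Here is a minimal sketch of the kind of stub mentioned above, standing in for a business-to-business dependency (a third-party credit check, say) during performance testing. The port, the endpoint behavior and the canned response are all assumptions to adapt to the actual interface.

    # A tiny HTTP stub that replaces a third-party credit-check service
    # so performance tests have no dependency on the real partner system.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class CreditCheckStub(BaseHTTPRequestHandler):
        def do_POST(self):
            # Consume the request body, then return a canned approval.
            length = int(self.headers.get("Content-Length", 0))
            self.rfile.read(length)
            body = json.dumps({"status": "approved"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):
            pass  # keep the console quiet during load runs

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8099), CreditCheckStub).serve_forever()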

Performance testing approach
This section of the performance plan expands on the overall approach, but this time the focus is on both the business and the technical approach. As an example:

"The performance testing approach will focus on a logical view of the new system implementation. Within the context of the performance testing engagement, we will:

· Focus on mitigating the performance risks for this new implementation.

· Make basic working assumptions on which parts of the implementation need to be performance-tested.

· Reach consensus on these working assumptions and determine the appropriate level of performance testing that shall be completed.

· Use a tier 1 performance testing tool that can replicate the expected production volumes.

· Use an environment that replicates the components (as they will exist in production) that will be performance-tested -- noting all exceptions.

· Use both production and non-production (testing) monitors to measure the performance of the system during performance testing."

Performance testing activities
This section of the performance test plan specifies the activities that will occur during performance testing. Example:

"During performance testing the following activities shall occur:

· Performance tests shall create appropriate loads against the system following agreed-upon scenarios that include:

o User actions (workflow)

o Agreed-upon loads (transactions per minute)

o Agreed-upon metrics (response times)

· Manual testing and automated functional tests shall be conducted during performance testing to ensure that user activities are not impacted by the current load.

· System monitors shall be used to observe the performance of all servers involved in the test to ensure they meet predefined performance requirements.

· Post-implementation support teams shall be represented during performance testing to observe and support the performance testing efforts."

In-scope business processes
This section of the performance test plan speaks to which aspects of the system are deemed to be in-scope (measured). Example:

"The following business processes are considered in-scope for the purposes of performance testing:

· User registration

· Logon/access

· Users browsing content

· Article sales & fulfillment

· Billing

Business process list formed in consultation with: Business Analysts, Marketing Analyst, Infrastructure, and Business Owner."

Out-of-scope business processes
This section of the performance testing plan speaks to which aspects of the system are deemed to be out-of-scope (not measured). Example:

"Business processes that are considered out-of-scope for the purposes of testing are as follows:

· Credit check

o Assumption: Credit check link shall be hosted by a third party -- therefore no significant performance impact.

· All other business functionality not previously listed as in-scope or out-of-scope

o Assumption: Any business activity not mentioned in the in-scope or out-of-scope sections of this document does not present a significant performance risk to the business."

Performance testing scenarios
The existence of this section within the body of the performance testing plan depends on the maturity of the organization within the performance testing space. If the organization has little or no experience in this space, then include this section within the plan; otherwise, include it as an appendix. Example:

"Formulation of performance testing scenarios requires significant inputs from IT and the business:

· Business scenario

o The business scenario starts as a simple textual description of the business workflow being performance-tested.

o The business scenario expands to a sequence of specific steps with well-defined data requirements.

o The business scenario is complete once IT determines what (if any) additional data requirements are required because of the behavior of the application/servers (i.e. caching).

· Expected throughput (peak)

o The expected throughput begins with the business stating how many users are expected to be performing this activity during peak and non-peak hours.

o The expected throughput expands to a sequence of distinguishable transactions that may (or may not) be discernable to the end user.

o The expected throughput is completed once IT determines what (if any) additional factors could impact the load (i.e., load balancing).

· Acceptance performance criteria (acceptable response times under various loads)

o Acceptance performance criteria are stated by the business in terms of acceptable response times under light, normal and heavy system load, where system load means day-in-the-life activity. These loads could be simulated by other performance scenarios.

o The performance testing team then restates the acceptance criteria in terms of measurable system events. These criteria are then presented to the business for acceptance.

o The acceptance criteria are completed once IT determines how to monitor system performance during the performance test. This will include metrics from the performance testing team.

· Data requirements (scenario and implementation specific)

o The business specifies the critical data elements that would influence the end-user experience.

o IT expands these data requirements to include factors that might not be visible to the end user, such as caching.

o The performance testing team, working with IT and the business, creates the necessary data stores to support performance testing."

Performance test execution
Once again, the existence of this section of the performance test plan depends on the maturity of the organization within the performance testing space. If the organization has significant performance testing experience, then this section can become a supporting appendix. Example:

"Performance testing usually follows a linear path of events:

· Define performance-testing scenarios.

· Define day-in-the-life loads based on the defined scenarios.

· Execute performance tests as standalone tests to detect issues within a particular business workflow.

· Execute performance scenarios as a "package" to simulate day-in-the-life activities that are measured against performance success criteria.

· Report performance testing results.

· Tune the system.

· Repeat testing as required."

Performance test metrics
The performance test metrics need to track against acceptance performance criteria formulated as part of the performance testing scenarios. If the organization has the foresight to articulate these as performance requirements, then a performance requirements section should be published within the context of the performance test plan. The most basic performance test metrics consist of measuring response time and transaction failure rate against a given performance load -- as articulated in the performance test scenario. These metrics are then compared to the performance requirements to determine if the system is meeting the business need.
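To make that concrete, here is a sketch of checking the two most basic metrics against agreed criteria. The CSV columns ("elapsed_ms" and "success") and the threshold values are assumptions; substitute whatever the chosen performance testing tool actually reports.

    # Evaluate response time and transaction failure rate from a results
    # file and compare them to the agreed performance requirements.
    import csv

    MAX_90TH_PERCENTILE_MS = 2000   # assumed acceptable response time
    MAX_FAILURE_RATE = 0.01         # assumed acceptable failure rate (1%)

    def evaluate(results_path):
        times, failures, total = [], 0, 0
        with open(results_path, newline="") as f:
            for row in csv.DictReader(f):
                total += 1
                times.append(float(row["elapsed_ms"]))
                if row["success"].lower() != "true":
                    failures += 1
        if not times:
            return False
        times.sort()
        p90 = times[int(0.9 * (len(times) - 1))]
        failure_rate = failures / total
        print(f"90th percentile response time: {p90:.0f} ms")
        print(f"Failure rate: {failure_rate:.2%}")
        return p90 <= MAX_90TH_PERCENTILE_MS and failure_rate <= MAX_FAILURE_RATE

    # Example: evaluate("performance_results.csv")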