
Failover and Recovery Testing

Failover and recovery testing verifies a product's ability to withstand and successfully recover from possible failures caused by software defects, hardware faults, or communication problems (for example, a network outage). The objective of this testing is to verify the restoration (or duplication) of the system's main functions so that, in the event of a failure, the safety and integrity of the product's data are preserved.

Failover and recovery testing is especially important for systems that must operate 24×7. If you are building a product that runs continuously, for example on the Internet, you simply cannot do without this kind of testing: every minute of downtime, or any data loss caused by an equipment failure, costs money, customers, and market reputation.

The technique consists of simulating various failure conditions and then studying and evaluating how the protective mechanisms react. These checks establish whether the desired degree of recovery was achieved after the failure occurred.

For clarity, let us consider some variants of this testing and general methods for carrying them out. In most cases the objects of testing are highly probable operational problems, such as:

  • Power failure on the server
  • Power failure on the client computer
  • Interruption of an incomplete data-processing cycle (interrupted data filters, interrupted synchronization)
  • Introduction of unavailable or erroneous elements into data arrays
  • Failure of data storage media

These situations can be reproduced as soon as development reaches the point where all the recovery or duplication mechanisms are ready to perform their functions. Technically, the tests can be implemented in the following ways:

  • Simulate a sudden power failure on the computer (unplug the machine).
  • Simulate loss of network connectivity (unplug the network cable, disable the network device).
  • Simulate a storage-media failure (disconnect the external storage device).
  • Simulate a situation involving invalid data (using a special test kit or database); a sketch of an automated recovery check of this kind follows this list.
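
As a minimal illustration of the last point, the sketch below (a hypothetical example, not part of the original methodology) terminates a process in the middle of writing data and then checks that the data file contains at most one truncated record; the file name and record format are assumptions.

    import multiprocessing
    import os
    import time

    DATA_FILE = "recovery_test.dat"   # hypothetical data file name

    def writer():
        # Continuously append fixed-length records, flushing after each one.
        with open(DATA_FILE, "a", encoding="utf-8") as f:
            n = 0
            while True:
                f.write(f"record-{n:08d}\n")
                f.flush()
                n += 1

    def verify_after_crash():
        # Recovery criterion: every surviving line must be a complete record;
        # at most the final, partially written record may need to be discarded.
        with open(DATA_FILE, encoding="utf-8") as f:
            lines = f.readlines()
        complete = [ln for ln in lines if ln.endswith("\n")]
        assert len(lines) - len(complete) <= 1, "more than one truncated record found"

    if __name__ == "__main__":
        if os.path.exists(DATA_FILE):
            os.remove(DATA_FILE)
        proc = multiprocessing.Process(target=writer)
        proc.start()
        time.sleep(1)          # let it write some data
        proc.terminate()       # simulate a sudden failure (no cleanup handlers run)
        proc.join()
        verify_after_crash()
        print("recovery check passed")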

Once the failure conditions have been created and the recovery mechanisms have done their work, we can evaluate the product from the failover point of view. In all the cases listed above, completion of recovery should leave the product's data in a desired state:

  • Data loss or corruption stays within an acceptable range.
  • A report, or the reporting system, identifies the processes or transactions that were not completed because of the failure.

It is worth noting that failover and recovery testing is highly product-specific. Test scenarios must be developed with all the features of the system under test in mind. Given the rather harsh methods of influence involved, you should also weigh whether this type of testing is worthwhile for a particular software product.

Design-Based Test Case Design: an Effective Software Testing Technique

Software design errors and faults can be discovered, and software designs validated, by two techniques:

1) Requirements-based test case design, which is the primary technique

2) Design-based test case design, applied early in the design stage

In design-based test case design, the information for deriving test cases is taken from the software design documentation.

Design-based test cases focus on the data and process paths within the software structures. Internal interfaces, complex paths or processes, worst-case scenarios, design risks, weak areas, and so on are all explored by constructing specialized test cases and analyzing how the design should handle them and whether it deals with them properly. In the software testing effort, requirements-based and design-based test cases provide specific examples that can be used in design reviews or walkthroughs. Together they provide a comprehensive and rich resource for design-based software testing.

Design Testing Metrics:
Increasingly, formal design reviews are adopting metrics as a means of quantifying test results and clearly defining expected results.

The metrics (measures that are presumed to predict an aspect of software quality) vary greatly. Some are developed from scored questionnaires or checklists. For example, one group of questions may relate to design integrity and system security.

Typical integrity questions include the following:

Q.1: Are security features controlled from independent modules?

Q.2: Is an audit trail of accesses maintained for review or investigation?

Q.3: Are passwords and access keywords blanked out?

Q.4: Does it require changes in multiple programs to defeat the access security?

Each reviewer would answer these questions, and their answers would be graded or scored. Over time, minimum scores are established and used as pass/fail criteria for the integrity metric. Designs that score below the minimum are reworked and subjected to additional review testing before being accepted.

Another example of a metric-based design test that can be used effectively is a test for system maintainability. An important consideration in evaluating the quality of any proposed design is the ease with which it can be maintained or changed once the system becomes operational. Maintainability is largely a function of design. Problems or deficiencies that produce poor maintainability must be discovered during design reviews; it is usually too late to do anything to correct them further along in the cycle of software testing.

To test the maintainability we develop a list of likely or plausible requirements changes (perhaps in conjunction with the requirements review). Essentially, we want to describe in advance what about the system we perceive is most apt to be changed in the future. During the design review a sample of these likely changes is selected at random and the system alterations that would be required are walked through by the reviewers to establish estimates for how many programs and files or data elements would be affected and the number of program statements that would have to be added and changed. Metric values for these estimates are again set on the basis of past experience. Passing the test might require that 80 percent of the changes be accomplished by changes to single programs and that the average predicted effort for a change be less than one man-week. Designs that score below these criteria based on the simulated changes are returned, reworked, and re-subjected to the maintainability test before being accepted. This is just one example of an entire class of metrics that can be built around what-if questions and used to test any quality attribute of interest while the system is still being designed.
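
A rough sketch of how such a maintainability metric could be scored is shown below, assuming the two pass criteria just quoted (80 percent single-program changes, average effort under one man-week, taken here as five person-days); the change data is purely illustrative.

    # Each simulated change records how many programs it touches and the
    # estimated effort in person-days (5 person-days ~ one man-week).
    simulated_changes = [
        {"programs_affected": 1, "effort_days": 3},
        {"programs_affected": 1, "effort_days": 2},
        {"programs_affected": 3, "effort_days": 8},
        {"programs_affected": 1, "effort_days": 4},
        {"programs_affected": 2, "effort_days": 6},
    ]

    single_program_ratio = sum(
        1 for c in simulated_changes if c["programs_affected"] == 1
    ) / len(simulated_changes)
    average_effort_days = sum(c["effort_days"] for c in simulated_changes) / len(simulated_changes)

    passes = single_program_ratio >= 0.80 and average_effort_days < 5
    print(f"single-program changes: {single_program_ratio:.0%}, "
          f"average effort: {average_effort_days:.1f} days, "
          f"maintainability test {'passed' if passes else 'failed'}")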

Design for Testing:
In addition to the testing activities we perform to review and test the design, another important consideration is the features in the design that simplify or support testing. Part of good engineering is building something in a way that simplifies the task of verifying that it is built properly. Hardware engineers routinely provide test points or probes to permit electronic circuits to be tested at intermediate stages. In the same way, complex software must be designed with "windows" or hooks to permit the testers to "see" how it operates and verify correct behavior.

Providing such windows and reviewing designs to ensure their testability is part of the overall goal of designing for testability. With complex designs, testing is simply not effective unless the software has been designed for testing. Testers must consider how they are going to test the system and what they will require early enough in the design process so that the test requirements can be met.
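
As one hedged illustration of such a "window", the hypothetical class below exposes a read-only inspection method so that a test can observe intermediate state; the class and method names are assumptions, not part of any particular design.

    class OrderProcessor:
        """Processes orders in stages; designed with a test hook."""

        def __init__(self):
            self._queue = []
            self._processed = []

        def submit(self, order):
            self._queue.append(order)

        def process_next(self):
            order = self._queue.pop(0)
            # ... business logic would go here ...
            self._processed.append(order)

        def inspection_snapshot(self):
            # Test hook ("window"): lets a tester observe intermediate state
            # without modifying it or depending on the internal representation.
            return {"queued": len(self._queue), "processed": len(self._processed)}


    # A test can now assert on intermediate behaviour:
    p = OrderProcessor()
    p.submit({"id": 1})
    p.process_next()
    assert p.inspection_snapshot() == {"queued": 0, "processed": 1}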

Design Testing Tools and Aids:
Automated tools and aids to support design testing play an important role in a number of organizations. As in requirements testing, our major testing technique is the formal review; however, there is a greater opportunity to use automated aids in support of design reviews.

Software testing tools in common use include design simulators (such as database and response-time simulators); system charters that diagram or represent system logic; consistency checkers that analyze decision tables representing design logic and determine whether they are complete and consistent; and data dictionaries and analyzers that record data element definitions, analyze each use of the data, and report where each element is used and whether the routine inputs, uses, modifies, or outputs it.

None of these software testing tools performs direct testing. Instead, they serve to organize and index information about the system being designed so that it may be reviewed more thoroughly and effectively. In the case of the simulators, they permit simplified models to be represented and experimentation to take place, which may be especially helpful in answering the question of whether the design solution is the right choice. All the tools assist in determining that the design is complete and will fulfill the stated requirements.

Documenting Errors

The purpose of reporting an error is to get it fixed. This article explains how to describe an error, what an error description consists of, and what one might look like, with an example.

So you have found a bug. Do not put it aside: start writing the bug report right away (if you procrastinate, you may forget to write the report at all, forget where the error occurred, leave out details, or misstate the situation).

First of all, calm down and avoid any sudden moves: do not press extra buttons, and so on. Recall the sequence of actions you performed and try to reproduce the situation, preferably in a new browser window (if it is a web application). As you go, write down the data you entered and the commands you issued, which buttons you pressed, which menus you navigated to, how the system reacted to these actions, and what error message was displayed.

Now write down your own actions. The record should be brief but clear and understandable; find the middle ground. If you write a memoir, the programmer will not read it, or will assume the error is very complicated and defer it until later, while an ultra-short report will be understood by no one. As a consequence, the bug fix will hang in the air, and the report will be sent back to you marked "cannot reproduce" or with requests for clarification, wasting both your time and theirs. Also, do not put more than one error in a single report, for the same reason.

The report is written not only for yourself but for others, so it should be written in such a way that anyone can understand it without having to guess what you meant or ask again. Ask yourself: could a person who is seeing the product for the first time repeat your actions?

If possible, try different wordings until you express the problem precisely.

It is also advisable to avoid jargon or expressions that others may find hard to understand.

Never report bugs only verbally, by e-mail, by instant messenger, and the like. In most cases such reports are forgotten or not taken seriously, and if the bug is not fixed, you in particular will be blamed. Do you need that? Every error must be recorded, described, and given its own unique number; then the responsibility for an unfixed error rests with the programmer.

These records will also be needed by other testers working with you, by managers (so they can see that you work, and work productively), by the testers who come after you, and for writing reports.

Error Description

Let us now turn to the description of the error itself. Depending on which bug tracking system (error accounting system) the company uses, the input fields will differ.

Start by opening a new bug report. You may see many fields to fill in, but quite possibly not all of them are required; it is best to consult other testers, a manager, or the head of the testing group. Most likely, however, you will have to fill in the following fields:

  • Priority (how serious the error is and how quickly it must be fixed: immediately, or can it wait?)
  • Assignee (who will deal with the error)
  • Class (what kind of error it is: serious, minor, a typo, …)

Error Title

The title should describe the problem concisely and completely. We spend a lot of time leafing through the bug database and skimming error titles; much of that time can be saved if the titles are clear enough that you do not have to open the full description to understand what was meant.

Problem Description

The problem is best described using "arrows". They let you drop many unnecessary words from the report that would otherwise obscure the essence.

Example: opened www.aaa.ru -> entered the word bbb into field ccc -> clicked ddd -> got error: ddd

A real-life example

Title: Problem with the "Forgot password" menu

Problem description: go to the login page -> click "Forgot password" -> in the "Personal Account" field enter 2389 -> in the "e-mail" field enter test@test.com -> the system says: "Error sending message. (#1)"

If necessary, include data about the operating environment, configuration, and logs: is there a dependence on the configuration, installation, conditions, options, settings, version, and so on?

Attachments

To make the report more detailed and vivid, you can and should resort to:

  • links
  • screenshots
  • video recordings

Link

Everything here is clear: when an error pops up, take the link to that page and paste it into the report, preferably together with a screenshot. (Assuming the application under test is a web application. - Ed.)

Screenshots

A very useful way to visualize the problem: take a screenshot of the problematic area. (The simplest approach: press the Print Screen key, open Paint (if you are on a Windows operating system - Ed.), which comes installed with Windows, press Ctrl-V, crop out what is unnecessary, and save, preferably in JPG format.)

There are also more professional programs adapted to this kind of work, with many useful features, such as SnagIt, HyperSnap, HardCopy, RoboScreenCapture, FullShot 9, HyperSnap-DX 5, and TNT 2. Attach the screenshot to the bug report.
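
If you prefer to script the capture rather than use Print Screen, a minimal sketch with Python and the Pillow library might look like this; the output file name is an assumption.

    # Minimal screenshot capture sketch using Pillow (pip install pillow).
    # ImageGrab works on Windows and macOS; on Linux, a tool such as scrot
    # or the mss package can be used instead.
    from PIL import ImageGrab

    screenshot = ImageGrab.grab()                # capture the full screen
    screenshot = screenshot.convert("RGB")       # JPEG does not support alpha
    screenshot.save("bug_1234_screenshot.jpg", "JPEG", quality=85)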

Videos

If the error is difficult to describe, this is the most suitable method. Programs: SnagIt, CamStudio.

Top 10 negative test cases

Negative test cases check how the application behaves when "incorrect" data is supplied as input. Such test cases should always be used during testing. Below are the ten most popular negative test scenarios:

Embedded Single Quote - Most SQL databases have problems when a single quote appears in a query (e.g., Jones's car).
Use single quotes when checking every input field that works with the database.
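
A minimal sketch of such a check, assuming a hypothetical customer search backed by SQLite: a parameterized query handles the embedded quote correctly, whereas naive string concatenation would fail or invite injection.

    import sqlite3

    def search_customers(conn, name):
        # Parameterized query: the single quote in the input is handled safely.
        cur = conn.execute("SELECT id FROM customers WHERE name = ?", (name,))
        return cur.fetchall()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO customers (name) VALUES (?)", ("Jones's car",))

    # Negative test: input containing an embedded single quote must neither
    # raise a database error nor return wrong results.
    assert search_customers(conn, "Jones's car") == [(1,)]
    assert search_customers(conn, "Robert'); DROP TABLE customers;--") == []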

Required Data Entry - The specification of your application should clearly define which fields require mandatory data entry.
Check that forms containing fields defined as mandatory cannot be saved when those fields are left empty.

Field Type Test - The specification of your application should clearly define the data type of each field (date/time fields, numeric fields, fields for a telephone number or postal code, etc.).
Check that each field allows you to enter or store only data of the type defined in the specification (for example, the application should not allow letters or special characters to be entered or saved in numeric fields).

Field Size Test - The specification of your application should clearly define the maximum number of characters in each field (for example, the user name field should not accept more than 50 characters).

Check that your application cannot accept or store more characters than specified. Do not forget that these fields should not only behave correctly but also warn the user about the limitation, for example with explanatory hint text or error messages.

Numeric Bounds Test - The numeric fields of your application may have limits on the allowed values. These constraints may be stated in the specification or follow from the program's logic (for example, if you are testing functionality related to interest accrued on an account, it is logical to assume that the accrued interest cannot be negative).

Check that the application displays an error message when a value falls outside the acceptable range (for example, an error message should appear when you enter 9 or 51 in a field whose valid range is 10 to 50, or when you enter a negative value in a field whose values must be positive).
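
A minimal parameterized sketch of these boundary checks with pytest, assuming a hypothetical validate_range function and the 10-to-50 range mentioned above.

    import pytest

    def validate_range(value, low=10, high=50):
        # Hypothetical validator under test: rejects out-of-range values.
        if not (low <= value <= high):
            raise ValueError(f"value {value} outside {low}..{high}")
        return value

    # Values just inside the boundaries must be accepted.
    @pytest.mark.parametrize("value", [10, 11, 49, 50])
    def test_accepts_values_inside_bounds(value):
        assert validate_range(value) == value

    # Values just outside the boundaries (and negatives) must be rejected.
    @pytest.mark.parametrize("value", [9, 51, -1, 0])
    def test_rejects_values_outside_bounds(value):
        with pytest.raises(ValueError):
            validate_range(value)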

Numeric Limits Test - Most databases and programming languages define numeric variables with a particular type (e.g., integer or long integer), which in turn limits the allowed values (e.g., an integer must lie in the range -32768 to 32767, a long integer in the range -2147483648 to 2147483647).
Check the boundary values of the variables used for numeric fields whose boundaries are not explicitly defined in the specification.

Date Bounds Test - Applications very often impose logical limits on fields containing a date or time. For example, if you are checking a field containing the user's date of birth, it is logical to forbid dates that have not yet occurred (i.e., dates in the future), or dates that differ from today by more than 150 years.

Date Validity - Date fields should always be checked for the validity of the entered value (e.g., 31/11/2009 is not a valid date, since November has only 30 days). Also, do not forget to check dates in leap years (a year is a leap year if it is divisible by 4, unless it is divisible by 100 but not by 400).
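
A short sketch of both checks using only the Python standard library: datetime.date rejects impossible dates, and calendar.isleap encodes the leap-year rule just stated.

    import calendar
    from datetime import date

    def is_valid_date(year, month, day):
        # datetime.date raises ValueError for impossible dates such as 31 November.
        try:
            date(year, month, day)
            return True
        except ValueError:
            return False

    assert is_valid_date(2009, 11, 30)
    assert not is_valid_date(2009, 11, 31)      # November has only 30 days
    assert is_valid_date(2000, 2, 29)           # 2000 is divisible by 400 -> leap year
    assert not is_valid_date(1900, 2, 29)       # 1900 is divisible by 100 but not 400

    # calendar.isleap implements the same rule explicitly.
    assert calendar.isleap(2004) and calendar.isleap(2000) and not calendar.isleap(1900)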

Web Session Testing - Many web applications use a browser session to track whether the user is logged in, to store application settings specific to a particular user, and so on. At the same time, many functions of the system cannot or should not work without a login. Check that functionality and pages protected by a password cannot be reached by a user who is not authenticated.
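
A minimal sketch of such a check with the requests library; the base URL, the protected path, and the expected redirect to a login page are assumptions about a hypothetical application.

    import requests

    BASE_URL = "https://example.test"  # hypothetical application under test

    def test_protected_page_requires_login():
        # No session cookie is sent, so the request is unauthenticated.
        response = requests.get(f"{BASE_URL}/account/settings", allow_redirects=False)
        # Expect either an explicit denial or a redirect to the login page,
        # never the protected content itself.
        assert response.status_code in (301, 302, 401, 403)
        if response.status_code in (301, 302):
            assert "/login" in response.headers.get("Location", "")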

Performance Changes - Run a series of performance tests for each new product release (for example, measuring the speed of adding, deleting, or modifying various elements on a page). Compare the results with the performance of previous versions. This practice lets you identify in advance potential performance problems caused by code changes in new versions of the product.
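
One simple way to automate the comparison is to keep a baseline measurement from the previous version and fail the test when the new measurement exceeds it by more than a tolerance; the sketch below is illustrative, with an invented baseline file and a placeholder operation.

    import json
    import time

    BASELINE_FILE = "perf_baseline.json"   # e.g. {"add_item_seconds": 0.120}
    TOLERANCE = 1.20                       # fail if more than 20% slower than baseline

    def measure_add_item():
        # Placeholder for the real operation, e.g. adding an element to the page.
        start = time.perf_counter()
        sum(range(100_000))
        return time.perf_counter() - start

    def test_no_performance_regression():
        with open(BASELINE_FILE) as f:
            baseline = json.load(f)["add_item_seconds"]
        elapsed = measure_add_item()
        assert elapsed <= baseline * TOLERANCE, (
            f"add item took {elapsed:.3f}s, baseline {baseline:.3f}s"
        )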

Localization testing tips and tricks

If you have already encountered localization testing, one of the first questions you surely asked yourself when starting the work was something like: "What am I supposed to test? I don't even know the language(s)." Isn't that so?

In fact, correctness of the translation is not the only thing you should pay attention to when testing localizations. Yes, of course, it is very important that the text be grammatically, syntactically, and logically correct, but that is not enough for a good localization. That is exactly why testers are brought into this kind of work.

So, a few words about what a tester needs to know and should pay attention to when testing localizations.

1. Prepare a suitable test environment for testing applications

Depending on the implementation, the language of a web application may be selected manually, derived from the language and regional settings of the browser or operating system, or even inferred from your geographic location. Manual language selection is more or less straightforward; in the other cases you will have to show some ingenuity, and you will most likely need several test environments. The ideal option is a set of virtual machines with the relevant operating systems and localization-related software installed. When configuring these machines, try to keep most settings in their original state, because very few users run a configuration that differs from the default. When you create a virtual machine, be guided by a statistical portrait of your average end user, so you can imagine what software is likely to be installed on their PC. Why do this? Because some programs can seriously affect the final result of the testing and lead you to false conclusions. For example, a PC with MS Office 2003 and one with MS Office 2007 will behave differently with respect to a localized product, since MS Office 2007 installs the Arial Unicode font, which contains glyphs for the overwhelming majority of the world's languages (including Chinese and Japanese characters), while MS Office 2003 has no such font.
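
When the language is chosen from the browser settings, one inexpensive way to exercise different locales is to vary the Accept-Language header; the sketch below assumes a hypothetical application that honours this header, and the expected localized words are placeholders.

    import requests

    BASE_URL = "https://example.test"  # hypothetical application under test

    def fetch_home_page(language_tag):
        # Many applications pick the UI language from the Accept-Language header.
        headers = {"Accept-Language": language_tag}
        return requests.get(BASE_URL, headers=headers)

    for tag, expected_word in [("de-DE", "Anmelden"), ("fr-FR", "Connexion")]:
        page = fetch_home_page(tag)
        # The expected words are illustrative; real checks would use the
        # application's own translation resources.
        assert expected_word in page.text, f"{tag}: localized text not found"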

2. Follow the correct translation

In my opinion, validation of the translation should always be carried out by a native speaker, a professional translator, or at least someone familiar with the language; anything else is asking for trouble. Still, it is sometimes argued that a tester should perform such checks even without any knowledge of the language. In that case the usual advice is to use electronic translators and dictionaries, not one but several at once, so you can compare their results and draw reasonable conclusions about the correctness of the translation.

In general, even if you have decided on such an adventure, try not to get carried away: in all likelihood the interface was translated by a professional who, believe me, produces a more adequate translation than electronic translators do.

3. Know the application inside out

Before you start testing a web application in a language unknown to you, try to learn the application well enough to navigate it almost blindly. Study the basic functionality in a localized version whose language you do understand. This will save a lot of time, because you will not have to guess where a given link leads or what will happen when you press a particular button.

4. Begin testing with static elements

First of all, check the text on the static elements of the site: block headers, explanatory labels, and so on; these are what the user notices first.

When checking these items, remember that the length of the same text can differ substantially between languages. This is especially relevant if you are checking the localization of a product whose "native" language is English, because, as is well known, translating from English into almost any other language increases the text length by about 30%. Accordingly, make sure that all the required labels still fit into the layout of your site.

5. Pay attention to controls and error messages

Once the static elements are done, move on to the rest of the controls of your site: buttons, menus, and so on. Remember that, depending on the implementation, the localization of controls may be defined in the code or may depend on the browser or OS settings.

Do not forget about error messages. Plan the testing and compose test cases so that the maximum number of error messages is exercised. Programmers somehow pay very little attention to such things, so a significant portion of the error messages may never make it into the localized version, may not be translated into the appropriate language, or may be completely unreadable because of encoding problems.

6. Ensure that data can be entered in the target locale

If the web application under test accepts any data from the user, make sure that users can enter data in the target locale, and that all extended characters entered by the user are processed by the application correctly.
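
A minimal sketch of such an extended-character round trip, assuming a hypothetical profile form that echoes the saved value back; the URL, endpoint, and field name are illustrative.

    import requests

    BASE_URL = "https://example.test"  # hypothetical application under test

    # Sample inputs with extended characters from several scripts.
    samples = ["Müller-Lüdenscheidt", "Święta Bożego Narodzenia", "東京都渋谷区"]

    for text in samples:
        # Submit the value through the (hypothetical) profile form ...
        resp = requests.post(f"{BASE_URL}/profile", data={"display_name": text})
        assert resp.ok
        # ... then read it back and verify it survived storage unmangled.
        stored = requests.get(f"{BASE_URL}/profile").json()["display_name"]
        assert stored == text, f"round trip mangled {text!r} into {stored!r}"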

7. Do not forget the national and regional particularities

Another important point to pay attention to during testing is the regional and national characteristics of the country the localization is intended for. These include the text direction, the formats of dates and addresses, decimal separators, currency symbols, units of measurement, and so on. Always remember that a good localization is not only well-translated text but also an exact match with the cultural conventions of the people who speak the language.
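
A short sketch of how the same number, date, and currency value render differently per locale, using Python's standard locale module; the locale names are assumptions and must be installed on the test machine.

    import locale
    from datetime import date

    sample_number = 1234567.89
    sample_date = date(2009, 3, 31)

    # Locale names are platform-dependent and must be installed on the machine.
    for loc in ["en_US.UTF-8", "de_DE.UTF-8", "ru_RU.UTF-8"]:
        try:
            locale.setlocale(locale.LC_ALL, loc)
        except locale.Error:
            print(f"{loc}: not installed, skipping")
            continue
        formatted_number = locale.format_string("%.2f", sample_number, grouping=True)
        formatted_date = sample_date.strftime("%x")   # locale's date representation
        currency = locale.currency(sample_number, grouping=True)
        print(f"{loc}: number={formatted_number}  date={formatted_date}  currency={currency}")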

I hope these simple tips will help you make your application more accessible and understandable to users of the multilingual, multinational World Wide Web.

Requirements Testing

Testing software is an integral part of building a system. However, if the software is based on inaccurate requirements, then despite well written code, the software will be unsatisfactory. Most of the defects in a system can be traced back to wrong, missing, vague or incomplete requirements.

Requirements seem to be ephemeral. They flit in and out of projects, they are capricious, intractable, unpredictable and sometimes invisible. When gathering requirements we are searching for all of the criteria for a system's success. We throw out a net and try to capture all these criteria.

The Quality Gateway

As soon as we have a single requirement in our net we can start testing. The aim is to trap requirements-related defects as early as they can be identified. We prevent incorrect requirements from being incorporated in the design and implementation where they will be more difficult and expensive to find and correct.

To pass through the quality gateway and be included in the requirements specification, a requirement must pass a number of tests. These tests are concerned with ensuring that the requirements are accurate, and do not cause problems by being unsuitable for the design and implementation stages later in the project.

Make The Requirement Measurable

In his work on specifying the requirements for buildings, Christopher Alexander describes setting up a quality measure for each requirement.

"The idea is for each requirement to have a quality measure that makes it possible to divide all solutions to the requirement into two classes: those for which we agree that they fit the requirement and those for which we agree that they do not fit the requirement."

In other words, if we specify a quality measure for a requirement, we mean that any solution that meets this measure will be acceptable. Of course it is also true to say that any solution that does not meet the measure will not be acceptable.

The quality measures will be used to test the new system against the requirements. The remainder of this paper describes how to arrive at a quality measure that is acceptable to all the stakeholders.

Quantifiable Requirements

Consider a requirement that says "The system must respond quickly to customer enquiries". First we need to find a property of this requirement that provides us with a scale for measurement within the context. Let's say that we agree that we will measure the response using minutes. To find the quality measure we ask: "under what circumstances would the system fail to meet this requirement?" The stakeholders review the context of the system and decide that they would consider it a failure if a customer has to wait longer than three minutes for a response to his enquiry. Thus "three minutes" becomes the quality measure for this requirement.

Any solution to the requirement is tested against the quality measure. If the solution makes a customer wait for longer than three minutes then it does not fit the requirement. So far so good: we have defined a quantifiable quality measure. But specifying the quality measure is not always so straightforward. What about requirements that do not have an obvious scale?
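
As a tiny illustration of turning this quality measure into an automated check, the sketch below times a hypothetical enquiry-handling call against the three-minute limit; the function is a stand-in, not part of the paper.

    import time

    MAX_RESPONSE_SECONDS = 3 * 60   # quality measure: three minutes

    def handle_customer_enquiry(enquiry):
        # Hypothetical stand-in for the real enquiry-handling path.
        time.sleep(0.1)
        return f"answer to {enquiry}"

    def test_enquiry_meets_quality_measure():
        start = time.perf_counter()
        handle_customer_enquiry("order status for #42")
        elapsed = time.perf_counter() - start
        # Any solution that exceeds the measure does not fit the requirement.
        assert elapsed <= MAX_RESPONSE_SECONDS, f"response took {elapsed:.1f}s"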

Non-quantifiable Requirements

Suppose a requirement is "The automated interfaces of the system must be easy to learn". There is no obvious measurement scale for "easy to learn". However if we investigate the meaning of the requirement within the particular context, we can set communicable limits for measuring the requirement.

Again we can make use of the question: "What is considered a failure to meet this requirement?" Perhaps the stakeholders agree that there will often be novice users, and the stakeholders want novices to be productive within half an hour. We can define the quality measure to say "a novice user must be able to learn to successfully complete a customer order transaction within 30 minutes of first using the system". This becomes a quality measure provided a group of experts within this context is able to test whether the solution does or does not meet the requirement.

An attempt to define the quality measure for a requirement helps to rationalise fuzzy requirements. Something like "the system must provide good value" is an example of a requirement that everyone would agree with, but each person has his own meaning. By investigating the scale that must be used to measure "good value" we identify the diverse meanings.

Sometimes by causing the stakeholders to think about the requirement we can define an agreed quality measure. In other cases we discover that there is no agreement on a quality measure. Then we substitute this vague requirement with several requirements, each with its own quality measure.

Requirements Test 1

Does each requirement have a quality measure that can be used to test whether any solution meets the requirement?

By adding a quality measure to each requirement we have made the requirement visible. This is the first step to defining all the criteria for measuring the goodness of the solution. Now let's look at other aspects of the requirement that we can test before deciding to include it in the requirements specification.

Requirements Test 2

Does the specification contain a definition of the meaning of every essential subject matter term within the specification?

When the allowable values for each of the attributes are defined it provides data that can be used to test the implementation.

Requirements Test 3

Is every reference to a defined term consistent with its definition?

Requirements Test 4

Is the context of the requirements wide enough to cover everything we need to understand?

Requirements Test 5

Have we asked the stakeholders about conscious, unconscious and undreamed of requirements?

Requirements Test 5 (enlarged)

Have we asked the stakeholders about conscious, unconscious and undreamed of requirements? Can you show that a modelling effort has taken place to discover the unconscious requirements? Can you demonstrate that brainstorming or similar efforts taken place to find the undreamed of requirements?

Requirements Test 6

Is every requirement in the specification relevant to this system?

Requirements Test 7

Does the specification contain solutions posturing as requirements?

Requirements Test 8

Is the stakeholder value defined for each requirement?

Requirements Test 9

Is each requirement uniquely identifiable?

Requirements Test 10

Is each requirement tagged to all parts of the system where it is used? For any change to requirements, can you identify all parts of the system where this change has an effect?
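
One hedged way to make such tagging concrete is a simple mapping from requirement identifiers to the parts of the system where they are used; the identifiers and module names below are purely illustrative.

    # Minimal requirements-traceability sketch: map each requirement ID to the
    # parts of the system where it is used, so the impact of a change is visible.
    traceability = {
        "REQ-017 (response within 3 minutes)": ["enquiry_service", "queue_worker", "ops_dashboard"],
        "REQ-023 (audit trail of accesses)":   ["auth_module", "audit_log", "reporting"],
        "REQ-031 (password fields blanked)":   ["login_form", "admin_console"],
    }

    def impact_of_change(requirement_id):
        # Returns every part of the system affected by changing this requirement.
        return traceability.get(requirement_id, [])

    print(impact_of_change("REQ-023 (audit trail of accesses)"))
    # ['auth_module', 'audit_log', 'reporting']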

Conclusions

The requirements specification must contain all the requirements that are to be solved by our system. The specification should objectively specify everything our system must do and the conditions under which it must perform. Management of the number and complexity of the requirements is one part of the task.

The most challenging aspect of requirements gathering is communicating with the people who are supplying the requirements. If we have a consistent way of recording requirements we make it possible for the stakeholders to participate in the requirements process. As soon as we make a requirement visible we can start testing it and asking the stakeholders detailed questions. We can apply a variety of tests to ensure that each requirement is relevant, and that everyone has the same understanding of its meaning. We can ask the stakeholders to define the relative value of requirements. We can define a quality measure for each requirement, and we can use that quality measure to test the eventual solutions.

Testing starts at the beginning of the project, not at the end of the coding. We apply tests to assure the quality of the requirements. Then the later stages of the project can concentrate on testing for good design and good code. The advantages of this approach are that we minimise expensive rework by minimising requirements-related defects that could have been discovered, or prevented, early in the project's life.

References:

Suzanne Robertson, "An Early Start to Testing: How to Test Requirements"