
Internationalization Testing


Introduction

The world is flat. If you are reading this page, chances are that you are experiencing this as well. It is very difficult to survive in today's world if you are selling your product in only one country or geographical region. Even if you are selling all over the world, if your product is not available in the regional languages, you might not be in a comfortable position.

Products developed in one location are used all over the world, with different languages and regional standards. This gives rise to the need to test the product in different languages and with different regional standards. Multilingual and localization testing can increase your product's usability and acceptability worldwide.

Contents:


  • Definition

  • Internationalization

  • Pseudo-localization

Definition

Internationalization is the process of designing and coding a product so it can perform properly when it is modified for use in different languages and locales.

Localization (also known as L10N) refers to the process, on a properly internationalized base product, of translating messages and documentation as well as modifying other locale specific files.

Assuming that there is not a separate base product for the locale, the localized files are installed at their proper location in the base product. This product is then released as a localized version of the product.

Localizing a properly internationalized product in most cases should require no changes to the source code.

Internationalization testing is the process that ensures a product's functionality is not broken and all messages are properly externalized when the product is used in different languages and locales. Internationalization testing is also called I18N testing, because there are 18 characters between the I and the N in "internationalization".

Internationalization

In I18N testing, the first step is to identify all the textual information in the system. This includes all the text present on the application's GUI and any text or messages the application produces, including error messages, warnings, and help/documentation.

The main focus of I18N testing is not to find functional defects, but to make sure that the product is ready for the global market. As with other non-functional testing, it is assumed that functional testing has been completed and all functionality-related defects have been identified and removed.

I18N testing can be divided into two parts. The first is to make sure that the application's GUI and functionality will not be broken by the translated text. The second is to make sure that all the strings have been translated properly. This second activity is called Translation Verification Testing and is normally conducted by a person who knows the language very well.

To make sure that the application's functionality and GUI will not be broken after translation, a popular technique known as pseudo-translation is used. Instead of translating strings completely, they are translated in a pseudo manner. For example, an externalized string "Bad Command" can be pseudo-translated into Japanese as [JA XXXXX Bad Command XXXXXX JA]. If the product is then launched with the locale set to Japanese, it should show the externalized string given above instead of "Bad Command". There are utilities that do this job for you and pseudo-translate all the externalized strings of your application. During pseudo-translation you need to make sure that you roughly follow the expansion rules; for example, pseudo-translated strings are normally expanded in width by up to forty percent compared with the English originals.
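As a rough sketch of such a utility, the helper below wraps an externalized string in locale markers and pads it by roughly forty percent; the marker format and padding rule are illustrative assumptions, not a fixed standard.

```java
// Minimal pseudo-translation sketch (illustrative assumptions only).
// Wraps an externalized string in locale markers and pads it so that the
// pseudo-translated text is roughly 40% longer than the English original.
public class PseudoTranslator {

    public static String pseudoTranslate(String original, String localeTag) {
        int padLength = (int) Math.ceil(original.length() * 0.4);
        StringBuilder padding = new StringBuilder();
        for (int i = 0; i < padLength; i++) {
            padding.append('X');
        }
        return "[" + localeTag + " " + padding + " " + original + " " + padding + " " + localeTag + "]";
    }

    public static void main(String[] args) {
        // Prints something like: [JA XXXXX Bad Command XXXXX JA]
        System.out.println(pseudoTranslate("Bad Command", "JA"));
    }
}
```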

As stated above, in I18N testing the focus is not on functionality but on translation and locale-related issues. Once all the externalized strings are pseudo-translated, you need to make sure that you have a test case for every message or text element present in the system. Once that is done, the same set of test cases can be executed on the properly translated build to make sure that the translation is correct.


Pseudo-localization

A convenient approach to internationalization testing is to use the technique of pseudo-localization. This technique simulates the process of localizing products, involving many of the things a localization center does when localizing a product. To pseudo-localize a product:

  1. Pseudo-translate message files by inserting a specific prefix and suffix into every message. You can also modify localizable non-message resources, such as font names and colors; these resources should be modified rather than translated.

    Also, other files that may be localized should be modified in some way, such as help, text, html and graphics files.

  2. Install the pseudo-translated message files, as well as all other pseudo-translated or modified files, in the locale of your choice, at the proper location in the product. In certain cases, such as for Java resource bundles, you must name the files with a locale-specific suffix and install them in the same location as other locale-specific message files (see the resource-bundle sketch after this list).
  3. Run the product from this locale. The messages and GUI labels should display the prefixes and suffixes you added, and not the English default messages. You should also see the behavior of the modified localizable non-message resources, and other modified files, such as help, text, HTML and graphics files, should show their modified versions when run in this locale.
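For instance, in the Java resource-bundle case mentioned in step 2, a pseudo-localized run might look like the sketch below; the bundle base name Messages and the key badCommand are hypothetical, and the pseudo-translated properties file is assumed to be installed alongside the default one.

```java
// Sketch: loading a pseudo-translated Java resource bundle.
// Assumes two hypothetical property files on the classpath:
//   Messages.properties     ->  badCommand=Bad Command
//   Messages_ja.properties  ->  badCommand=[JA XXXXX Bad Command XXXXX JA]
import java.util.Locale;
import java.util.ResourceBundle;

public class PseudoLocaleDemo {
    public static void main(String[] args) {
        // Run the product "from this locale": Japanese in this example.
        ResourceBundle bundle = ResourceBundle.getBundle("Messages", Locale.JAPAN);

        // If internationalization is correct, the pseudo-translated string
        // (with its prefix and suffix) is shown instead of the English default.
        System.out.println(bundle.getString("badCommand"));
    }
}
```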

This approach allows you to use the product, including its menus and other GUI objects, without needing to know another language or fully translate the message files.

Many of the sections that follow take this approach.

Localization Testing

Definition of Localization Testing


Localization is the process of adapting a globalized application to a particular culture/locale. Localizing an application requires a basic understanding of the character sets typically used in modern software development and an understanding of the issues associated with them. Localization includes the translation of the application user interface and adapting graphics for a specific culture/locale. The localization process can also include translating any help content associated with the application.

Localization of business solutions requires that you implement the correct business processes and practices for a culture/locale. Differences in how cultures/locales conduct business are heavily shaped by governmental and regulatory requirements. Therefore, localization of business logic can be a massive task.

Localization testing checks how well the build has been translated into a particular target language. This test is based on the results of globalized testing where the functional support for that particular locale has already been verified. If the product is not globalized enough to support a given language, you probably will not try to localize it into that language in the first place!

What do we need to consider in Localization Testing?



  • Things that are often altered during localization, such as the user interface (UI) and content files.

  • Operating System

  • Keyboards

  • Text Filters

  • Hot keys

  • Spelling Rules

  • Sorting Rules

  • Upper and lower case conversions

  • Printers

  • Paper sizes

  • Mouse

  • Date formats

  • Rulers and Measurements

  • Memory Availability

  • Voice User Interface language/accent

  • Video Content

Localization Testing Features


  • Object-based Recording: Does not record based on coordinates, so localization testing is not affected by position changes caused by shorter or longer localized strings.

  • Centralized Object Repository: Localization testing is not affected by textual changes, as only logical names are placed in the scripts.

  • Unicode Support: Full Unicode support allows you to record in any language.

  • Automatic Resource Generator: Extracts all the string literals and stores them in a resource file.

  • I18N Editor: Allows you to assign localized terms for the contents of the resource file.

  • Powerful Library: Provides powerful libraries to get and set locales at runtime and to change the date format to suit the locale being tested (see the sketch after this list).
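As a sketch of what such runtime locale handling looks like using only the standard Java library (the tool-specific library referred to above is not shown), getting a locale and formatting a date for it might be done like this:

```java
// Sketch: getting and setting the locale at runtime and formatting a date
// for the locale under test, using only standard java.util/java.text classes.
import java.text.DateFormat;
import java.util.Date;
import java.util.Locale;

public class LocaleSwitchDemo {
    public static void main(String[] args) {
        Date now = new Date();

        // Format the same date for several locales under test.
        for (Locale locale : new Locale[] {Locale.US, Locale.GERMANY, Locale.JAPAN}) {
            DateFormat df = DateFormat.getDateInstance(DateFormat.LONG, locale);
            System.out.println(locale + ": " + df.format(now));
        }

        // Setting the default locale affects all subsequent locale-sensitive calls.
        Locale.setDefault(Locale.GERMANY);
        System.out.println("Default now: " + DateFormat.getDateInstance(DateFormat.LONG).format(now));
    }
}
```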

Purpose of Localization Testing


Products that are localized to international markets often face domestic competition, which makes it critical for the localized product to blend seamlessly into the native language and cultural landscape. The cost of a localization effort can be significant. Once you have the strings translated and the GUI updated, localization testing should be used to help ensure that the product is successfully migrated to the target market.

In addition to verifying successful translation, basic functional testing should be performed. Functional issues often arise as a result of localizing software. Don't risk the time and effort spent localizing by not performing adequate Quality Assurance.


General Areas of Focus in Localization Testing


Localization testing should focus on several general areas. The first involves things that are often altered during localization, such as the UI and content files. The second consists of culture-specific, language-specific, and country-specific areas. Examples include configurable components, such as region defaults and the default language, as well as language-specific and region-specific functionality, such as default spelling checkers, speech engines, and so on. You should also test the availability of drivers for local hardware and look for the encryption algorithms incorporated into the application. The rules and regulations for distribution of cryptographic software differ from country to country.

Pay specific attention to the customization that could not be automated through the globalization services infrastructure (Win32 NLS APIs and the .NET Framework). For example, check that formatting of mailing addresses is locale-specific and that parts of the user's name are ordered correctly. (The order in which surname and first name appear varies according to country. For instance, some Muslim countries and certain regions in India use a different name order than that used in the English language.) Functionality of this kind is often implemented by the application itself; testing must verify its correctness.
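A minimal sketch of how such a check might be driven, assuming hypothetical locale-specific name patterns (these patterns are illustrative and not taken from any real product's resources):

```java
// Sketch: verifying locale-specific name ordering via message patterns.
// The pattern strings below are illustrative assumptions:
// some locales put the surname first, others the given name first.
import java.text.MessageFormat;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public class NameOrderCheck {
    public static void main(String[] args) {
        Map<Locale, String> fullNamePatterns = new HashMap<>();
        fullNamePatterns.put(Locale.US, "{0} {1}");      // given name, then surname
        fullNamePatterns.put(Locale.JAPAN, "{1} {0}");   // surname, then given name

        String given = "Taro";
        String surname = "Yamada";

        for (Map.Entry<Locale, String> entry : fullNamePatterns.entrySet()) {
            String formatted = MessageFormat.format(entry.getValue(), given, surname);
            System.out.println(entry.getKey() + ": " + formatted);
        }
    }
}
```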

Other areas of localization testing should include basic functionality tests; setup, upgrade, and uninstall tests that are run in the localized environment; and, finally, application and hardware compatibility tests that are planned according to the product's target market.

Platform in Localization Testing


Any language version of Windows XP or Windows 2000 can be selected as a platform for the test if the product is properly globalized. Of course, in the case of localization testing, the localized version of the operating system can be a wise choice, since that is the most likely environment for your application in the real world. However, a globalized and localizable application, even after it undergoes localization, must be able to run on any language version of the operating system and with the Multilingual User Interface (MUI) installed.

You should run the application with MUI installed when your application implements an MUI behavior, through pluggable UI, satellite dynamic-link libraries (DLLs), or some other technique that adjusts the UI language to the user's preferences. MUI allows the user to switch the UI language of the operating system and thus you must make sure your application matches the operating-system settings. You should verify the behavior of the application when the user's default language of the UI differs from the other locale settings. By doing so, you'll immediately see any problems in the way resources are loaded and processed.
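As an analogue of this mixed-settings scenario, standard Java separates the display (UI) locale from the formatting locale; the sketch below uses that mechanism purely for illustration and is not the Windows MUI API itself.

```java
// Sketch: checking behavior when the UI (display) language differs from the
// formatting locale. This uses Java's Locale categories as an analogue of the
// MUI scenario described above.
import java.text.NumberFormat;
import java.util.Locale;

public class MixedLocaleCheck {
    public static void main(String[] args) {
        // UI language: Japanese; number and date formats: German.
        Locale.setDefault(Locale.Category.DISPLAY, Locale.JAPAN);
        Locale.setDefault(Locale.Category.FORMAT, Locale.GERMANY);

        // Resource loading should follow the DISPLAY locale...
        System.out.println("UI locale: " + Locale.getDefault(Locale.Category.DISPLAY));

        // ...while locale-sensitive formatting should follow the FORMAT locale.
        System.out.println("1234.56 -> " + NumberFormat.getInstance().format(1234.56));
    }
}
```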

Localization Testing of the UI


Also keep an eye on the behavior of applications that run processes in a system context, such as operating-system services, rather than in a user's context. When a system process queries its user default UI language settings, it might get a result different from what a user's process running at the same time will get. This can cause localization problems, inconsistency in the UI that the user sees (if parts of it are generated by the system services), or even problems in functionality. In order to avoid those problems, always check an application's behavior with different default user and system UI languages. The settings for UI languages should also be different from those used in the development environment.

For example, assume you have a machine with MUI installed and a user whose default UI language is different from that of the system. Suppose a fax service waiting for incoming calls is running continuously and that, when a fax arrives, the service displays a notification message to the currently logged user (if there is one). You must ensure that the message be in the user's language, which might not necessarily be the same as the one returned to the fax service when it queries its default UI language.

In particular, localization testing of the UI and linguistics should cover items such as:

  • Validation of all application resources.

  • Verification of linguistic accuracy and resource attributes.

  • Checking for typographical errors.

  • Checking that printed documentation, online Help, messages, interface resources, and command-key sequences are consistent with each other. If you have shipped localized versions of your product before, make sure that the translation is consistent with the earlier released versions.

  • Confirmation of adherence to system, input, and display environment standards.

  • Checking usability of the UI.

  • Assessment of cultural appropriateness.

  • Checking for politically sensitive content.

  • Making sure the market-specific information about your company, such as contact information or local product-support phone numbers, is updated.

Overview of Localization Testing



Although localization and, by extension, localization testing are not strictly a part of the development of world-ready software, localization becomes possible once you have developed world-ready software. If you do decide to localize, you should be familiar with the scope and purpose of localization testing. Localizers translate the product UI and sometimes change some initial settings to adapt the product to a particular local market.

This definitely reduces the "world-readiness" of the application. That is, a globalized application whose UI and documentation are translated into a language spoken in one country will retain its functionality. However, the application will become less usable in the countries where that language is not spoken.

Localization testing checks how well the build has been translated into a particular target language. This test is based on the results of globalized testing where the functional support for that particular locale has already been verified. If the product is not globalized enough to support a given language, you probably will not try to localize it into that language in the first place!

You should be aware that pseudo-localization, which was discussed earlier, does not completely eliminate the need for functionality testing of a localized application. When you test for localizability before you localize, the chances of having serious functional problems due to localization are slim. However, you still have to check that the application you're shipping to a particular market really works. Now you can do it in less time and with fewer resources.

Localization Testing


Introduction


Localization (L10N) is the process of customizing a software application that was originally designed for a domestic market so that it can be released in foreign markets. This process involves translating all native language strings to the target language and customizing the GUI so that it is appropriate for the target market. Depending on the size and complexity of the software, localization can range from a simple process involving a small team of translators, linguists, desktop publishers and engineers to a complex process requiring a Localization Project Manager directing a team of a hundred specialists. Localization is usually done using some combination of in-house resources, independent contractors and full-scope services of a localization company.

Contents:

  • Overview of Localization Testing

  • Definition of Localization Testing

  • Things to consider in Localization Testing

  • Localization Testing Features

  • Purpose of Localization Testing

  • General Areas of Focus in Localization Testing

  • Platform in Localization Testing

  • Localization Testing of the UI

Security Testing Tutorial


A video on Security Testing


RDBMS Testing Concepts


Database Testing Concepts

  1. Why test an RDBMS?
  2. What should we test?
  3. When should we test?
  4. How should we test?
  5. Who should test?
  6. Introducing database testing into your organization
  7. Database testing and data inspection
  8. Best practices



Testing Mobile Applications - Tutorial


Excellent Tutorial on Mobile Application Testing. Must read.

http://www.automatedqa.com/techpapers/testcomplete/testing-pda-applications-with-testcomplete/



Mobile and hand held usability testing - why it matters


Mobile phone and PDA usability testing is critical to your business. In fact, mobile and handheld usability testing could be even more important than computer-based usability testing. The main reasons for this are:

  • The number of people accessing the Internet from mobile and handheld devices is increasing at a massive rate - in 2008 alone there'll be an estimated 58 million PDAs sold worldwide
  • People generally have a lot less experience of using their mobile to go online than they do of using their computer. This means that computer-based users can be assumed to have a higher level of existing expertise than mobile and handheld users
  • The platform through which users access your site is far less predictable when using mobile phones. Computer-based site visitors generally only differ from one another in their browser and operating system (i.e. most people will use a screen, mouse and keyboard), whereas different types of mobile phones and PDAs can differ drastically.

Which mobile phones and PDAs do you need to consider?

Mobile phones and PDAs can differ from one another dramatically, and this will radically change how people experience and use websites. Some of the ways in which handheld devices can differ include:

  • Screen size (small vs. large)
  • Screen layout (portrait vs. landscape)
  • Input device (stylus, numeric keypad, dial-wheel, QWERTY keypad)

Because the mobile phone / PDA that someone is using will have such a profound effect on their experience of your site, you should try to test with as many mobile phones and PDAs as possible.

Of course, testing with every mobile phone and PDA is impossible. Here are some ideas to help narrow down the number of devices you'll need to test with:

  • Your mobile site visitors may belong to a specific audience. Certain audiences tend to prefer particular types of phones (e.g. phones with big screens that are designed to support online access vs. small-screen models that aren't).
  • There are 'phone families' that offer a very similar user experience (and will not need to be tested individually).
  • You may only want to test with the most popular mobile phones in Europe or the most popular models that are being used to access your site (you can check your site statistics to find out this information).

Where should mobile usability testing sessions be conducted?

Mobile phones and PDAs are used in the real world, so usability testing of handheld devices should not take place only in a usability laboratory.

Where, when and how a mobile phone is used is critical to a person's experience of the site they are accessing. Any of the following circumstances could influence this experience and therefore considerations of the site's usability:

  • Lighting
  • Background noise
  • Distractions
  • Concurrent tasks (i.e. anything the person is doing at the same time)
  • Physical movement

As such, we'd recommend, if possible, that any mobile phone and PDA usability testing is conducted both in a laboratory and also in the 'outside world'.

How you plan and run mobile phone usability sessions will be based on your business and its audience, but the most popular methods of mobile usability testing include:

  • Lab-based (using a camera to record the session)
  • Diary-studies (asking people to keep a diary of how they have used their mobile phone and any problems they encounter)
  • Paper prototypes (running usability testing on a paper-based version of the site, using mobile phone screen-sized pieces of paper)

What is Mobile Device Testing?

Mobile Device Testing is the process of assuring the quality of mobile devices such as mobile phones and PDAs. The testing is conducted on both hardware and software and, in terms of procedure, comprises R&D Testing, Factory Testing and Certification Testing.

R&D Testing

R&D testing is the main test phase for a mobile device and takes place during the device's development. It covers hardware testing, software testing and mechanical testing.

Factory Testing

Factory Testing is a kind of sanity check on mobile devices. It is conducted automatically to verify that no defects have been introduced by manufacturing or assembly.

Mobile testing includes:

  • Mobile application testing
  • Hardware testing
  • Battery (charging) testing
  • Signal reception testing
  • Network testing
  • Protocol testing
  • Mobile games testing
  • Mobile software compatibility testing

Certification Testing

Certification Testing is the check before a mobile device goes to market. Many institutes or governments require mobile devices to conform to their stated specifications and protocols, to make sure the mobile device will not harm users' health and is compatible with devices from other manufacturers. Once the mobile device passes all checks, a certification is issued for it.


Source: Wikipedia

Top 10 myths about job interviews

Here is Couper's list of the top 10 job interview myths, and how to deal with them:

Myth #10: The interviewer is prepared.

"The person you're meeting with is probably overworked and stressed about having to hire someone," Couper says. "So make it easy for him or her. Answer that catchall request, 'Tell me about yourself", by talking about why you're a great fit for this job. If it's obvious they haven't read your resume, recap it briefly, and then tie it to the job you want." Tell them what they really need to know, so they don't have to come up with more questions.

Myth #9: Most interviewers have been trained to conduct thorough job interviews.

While human resources professionals do get extensive training in job interviewing techniques, the average line manager is winging it. "To make up for vague questions, be specific even if they don't ask," Couper suggests. "Be ready with two or three examples of particular skills and experiences that highlight why they should hire you."

Myth #8: It's only polite to accept an interviewer's offer of refreshment.

"They usually try to be courteous and offer you a drink, but they don't really want to bother with it," says Couper. "Unless the beverage in question is right there and won't take more than a second to get, just say, no, thank you."

Couper once interviewed a job candidate who said she would love a cup of tea, which, he recalls, "meant I spent half the allotted interview time looking for a tea bag, heating water, and so on. It was irritating."

Another good reason, Couper says, to decline caffeine is that "if the interview is a lengthy one, you don't want to need a restroom halfway through the conversation."

Myth #7: Interviewers expect you to hand over references' contact information right away.

Hold off until you're specifically asked, Couper advises, and even then, you can delay a bit by offering to send the information in an email in a day or two. There are at least two good reasons for not rushing it, Couper says. First, "you sometimes don't know until the end of the interview who would be the best references for this particular job," he notes. "If you get a sense that the interviewer cares most about, for instance, teamwork, you want to choose someone who can attest to your skills in that area. A reference who can only talk about some other aspect of your work is not going to help."

Second, and no less important, "you want a little time to prep your references, by gently coaching them on what you'd like them to say, before the employer calls them."

Myth #6: There's a right answer to every question an interviewer asks.

"Sometimes how you approach your answer is far more important than the answer itself," Couper says. If you're presented with a hypothetical problem and asked how you would resolve it, try to think of a comparable situation from the past and tell what you did about it.

Talkback: Has anything surprised you during a job interview? Leave a comment at the bottom of this story.

Myth #5: You should always keep your answers short.

Here's where doing lots of research before an interview really pays off. "The more you've learned about the company and the job beforehand, the better able you are to tell why you are the right hire," Couper says.

Don't be afraid to talk at length about it, partly because it will spare the interviewer having to come up with another question for you (see Myth #10 above) and partly because "in a good interview, you should be talking about two-thirds of the time."

Myth #4: If you've got great qualifications, your appearance doesn't matter.

Reams of research on this topic have proven that physical attractiveness plays a big part in hiring decisions. "Anyone who says otherwise is lying," Couper says. "People care about your looks, so make the absolute most of what you've got." Even if you're not drop-dead gorgeous, it's impossible to overestimate the importance of looking "healthy, energetic, and confident."

Myth #3: When asked where you see yourself in five years, you should show tremendous ambition.

The five-year question is a common one, and it's uncommonly tricky. "Interviewers want you to be a go-getter, but they also worry that you'll get restless if you don't move up fast enough. So you want to say something that covers all bases, like, 'I'd be happy to stay in this job as long as I'm still learning things and making a valuable contribution,'" says Couper.

You might also consider turning the question around and asking, "Where do you see me in five years?" Says Couper, "Sometimes the answer to that -- like, 'Well, we'd expect you to keep doing the same thing we hired you to do' -- is a good way to spot a dead-end job."

Myth #2: If the company invites you to an interview, that means the job is still open.

Alas, no. In fact, the job may never have existed in the first place: "Some companies use 'interviews' to do market research on the cheap. They ask you about your current or recent duties, your pay scale, and so on, to get information for comparison purposes." Another possibility, Couper says, is that "they may already have a strong internal candidate in mind for the job but just want to see if they come across someone better."

If you get an interview through a networking contact, he adds, "an employer may interview you simply as a courtesy to the person who referred you, if that is someone they don't want to disappoint."

Even if the job opening is phony, it's still worth going, he says: "Sometimes they discover you're a good fit for a different opening that really does exist. You never know where an interview might lead."

And the #1 myth about job interviewing: The most qualified person gets the job.

In at least one crucial respect, a job interview is like a date: Chemistry counts.

"A candidate who is less qualified, but has the right personality for the organization and hits it off with the interviewer, will almost always get hired over a candidate who merely looks good on paper," Couper says.

What can you do if you suspect you're not knocking an interviewer's socks off?

"At the end of the discussion, you'll probably be asked if you have any questions," Couper says. "If you sense the person has reservations about your style, ask what the ideal candidate for this job would be like." Then think fast. Can you talk a bit about how you fit that profile? "Addressing any concerns the interviewer might have, beyond your formal qualifications, is your chance to seal the deal," Couper says.

Interviews – Stop thinking aloud. Start thinking, Channelize your thoughts and Reply…

Some of the questions that were asked in Testing interviews are:-


"How do you test a pen?", "How do you test a mobile?", "How do you test a remote controller?", "How do you test a random number generator?", "How do you test an application that generates the fibonacci series?" etc.

When I asked them "How do you plan your testing for a website selling mobile phones, interacting with 3 suppliers?", none of them paused to think. The answer came immediately, along the lines of the reply below.

"I'd plan for sanity testing. Will plan for testing the site against XYZ interfaces. L&P Testing needs to be a part of the test plan. I'll have daily stand-ups. I will talk about cost and variances to think about estimation. I'll have a risk plan for managing risks proactively. I will do a requirements-traceability-matrix..."... and he'd go on and on and on.

After talking to many candidates, it struck me that most of them, when answering the above questions, did not pause to think; or ask for time to think. Though it seemed that they were answering the question, they were only "thinking out their thoughts aloud".

Thinking a bit more, I guess the best way to answer such questions, in an interview would be in the following 4 steps: 

STEP 1:- Think
Ponder the question for a minute, think about the answer and the various possibilities for the next couple of minutes, and speak up when you are prepared to answer. If you want more time, please ask the interviewer for it.

STEP 2:- Channelize your thoughts.
Think about the solution and channelize your thoughts to ensure that your answer is structured correctly, or how you want it to be structured. A structured answer will definitely earn you a lot of brownie points with your future employer. If you want, write down short points on paper before you start talking through the answer.

STEP 3:- Prioritize the reply and speak it out accordingly
Go ahead and speak up and start answering the question. If required, refer to the short points while you answer. If you need more time to think, please ask for more time. 

STEP 4:- Invite the interviewer to discuss your reply
Ask if the interviewer has any questions, or invite them to discuss the finer points of your answer. Try to give logical reasons for the way you prioritized your answer.
 

--------------------------------------------------------
--------------------------------------------------------

Are You a Fresher or Freelancer ? Try Beta Testing !

Are You a Fresher or Freelancer ? Try Beta Testing !

What is beta testing?
A beta test is testing done on a software product prior to its commercial release in the market. Beta testing can be considered the last stage of testing and normally involves sending the product to beta test sites outside the company for real-world exposure. Often a free trial version of the product can be downloaded over the Internet. Beta versions of software are distributed to a wide range of users so that the software gets tested in different combinations of hardware and associated software.
 
What is in it for a fresher or freelancer?
So what should we do as a fresher or a freelance software tester? Beta testing provides a platform for us to test some real-world software. There are many companies that release beta versions of their software before they ship it to the market for commercial sale.
 
By signing up as a beta tester and downloading a test version of the software, you get a first-hand preview of the software. It is an exciting experience to get the software free of cost before the world gets to see the product. If you can find a few bugs or write some reviews (positive or negative) about the beta version of the software, most companies will give you a free licensed version of the product when they release it in the market.
 
Benefits of beta testing for a fresher
As a fresher, you can also show this experience of testing a beta product. You can list the defects or bugs you have submitted to the company. There are a few companies I know of that pay for finding bugs in their products. Your resume will shine if you can prove that you earned a few dollars by finding bugs for this or that company.

 
uTest.com

Sign Up with uTest.com and get paid for finding bugs. I am sure there are lots of other companies which pay for pointing out their defects.
 
 
Beta products from Microsoft

Microsoft also provides a wide range of beta software for testing. Sign up with them and start testing a free beta product today.
 
 
 
Here are some tips for running a beta test:
  • Take a backup of your PC before you install beta software. You never know when the software will crash.
  • Read the terms and conditions carefully. Find out which bugs have already been reported. You don't want to waste time finding bugs that someone has already reported.
  • Get the specification or requirement document first. How do you test a product if you don't know what it is supposed to do?
  • Create a network of beta testers. This will help you stay informed of the latest happenings around the world.


***Happy Testing***

 

Testing Glossary - 100 Most Popular Software Testing Terms

Are you confused by testing terms and acronyms? Here is a glossary that might help you: the 100 most popular testing terms, compiled from the International Software Testing Qualifications Board's website. A complete and exhaustive list of terms is available for download at that site.


100 Most Popular Software Testing Terms

Acceptance testing

Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.

Ad hoc testing

Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and arbitrariness guides the test execution activity.

Agile testing

Testing practice for a project using agile methodologies, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm. 

Alpha testing

Simulated or actual operational testing by potential users/customers or an independent test team at the developers' site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.

Back-to-back testing

Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies.

Beta testing

Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.

Big-bang testing

A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages.

Black-box testing

Testing, either functional or non-functional, without reference to the internal structure of the component or system.

Black-box test design technique

Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.

Blocked test case

A test case that cannot be executed because the preconditions for its execution are not fulfilled.

Bottom-up testing

An incremental approach to integration testing where the lowest level components are tested first, and then used to facilitate the testing of higher level components. This process is repeated until the component at the top of the hierarchy is tested.

Boundary value

An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.

Boundary value analysis

A black box test design technique in which test cases are designed based on boundary values.
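A small illustration of boundary value analysis, assuming a hypothetical input field that accepts quantities from 1 to 100 (the range and the method name are assumptions made for the example):

```java
// Sketch: boundary value analysis for a hypothetical input that accepts
// integers in the range 1..100. Test values sit on and just beyond each edge.
public class BoundaryValueDemo {

    // Hypothetical function under test.
    static boolean isValidQuantity(int quantity) {
        return quantity >= 1 && quantity <= 100;
    }

    public static void main(String[] args) {
        int[] boundaryValues = {0, 1, 2, 99, 100, 101};
        boolean[] expected   = {false, true, true, true, true, false};

        for (int i = 0; i < boundaryValues.length; i++) {
            boolean actual = isValidQuantity(boundaryValues[i]);
            System.out.printf("isValidQuantity(%d) = %b (expected %b)%n",
                    boundaryValues[i], actual, expected[i]);
        }
    }
}
```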

Branch testing

A white box test design technique in which test cases are designed to execute branches.

Business process-based testing

An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.

Capture/playback tool

A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.

Certification

The process of confirming that a component, system or person complies with its specified requirements, e.g. by passing an exam.

Code coverage

An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.

Compliance testing

The process of testing to determine the compliance of the component or system.

Component integration testing

Testing performed to expose defects in the interfaces and interaction between integrated components.

Condition testing

A white box test design technique in which test cases are designed to execute condition outcomes.

Conversion testing

Testing of software used to convert data from existing systems for use in replacement systems.

Data driven testing

A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools.
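A minimal sketch of data-driven testing, with a hard-coded table standing in for the spreadsheet and a hypothetical login function as the system under test:

```java
// Sketch: data-driven testing. Test inputs and expected results live in a
// table (here a hard-coded array standing in for a spreadsheet or CSV file),
// and one control loop runs the same check for every row.
public class DataDrivenLoginTest {

    // Hypothetical function under test.
    static boolean login(String user, String password) {
        return "admin".equals(user) && "secret".equals(password);
    }

    public static void main(String[] args) {
        // Columns: user, password, expected result.
        Object[][] testData = {
            {"admin", "secret", true},
            {"admin", "wrong",  false},
            {"guest", "secret", false},
        };

        for (Object[] row : testData) {
            boolean actual = login((String) row[0], (String) row[1]);
            boolean expected = (Boolean) row[2];
            System.out.printf("login(%s, %s): %s%n",
                    row[0], row[1], actual == expected ? "PASS" : "FAIL");
        }
    }
}
```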

Database integrity testing

Testing the methods and processes used to access and manage the data(base), to ensure access methods, processes and data rules function as expected and that during access to the database, data is not corrupted or unexpectedly deleted, updated or created.

Defect

A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

Defect masking

An occurrence in which one defect prevents the detection of another.

Defect report

A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function.

Development testing

Formal or informal testing conducted during the implementation of a component or system, usually in the development environment by developers.

Driver

A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.

Equivalence partitioning

A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
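A small illustration of equivalence partitioning, assuming a hypothetical age check with a valid partition of 18 to 65 (the range and the function are assumptions made for the example):

```java
// Sketch: equivalence partitioning for a hypothetical age field (valid range 18..65).
// One representative value is chosen from each partition rather than testing
// every possible input.
public class EquivalencePartitionDemo {

    // Hypothetical function under test.
    static boolean isEligible(int age) {
        return age >= 18 && age <= 65;
    }

    public static void main(String[] args) {
        // One representative per partition: below range, in range, above range.
        int[] representatives = {10, 40, 70};
        boolean[] expected    = {false, true, false};

        for (int i = 0; i < representatives.length; i++) {
            System.out.printf("isEligible(%d) = %b (expected %b)%n",
                    representatives[i], isEligible(representatives[i]), expected[i]);
        }
    }
}
```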

Error

A human action that produces an incorrect result.

Error guessing

A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.

Exhaustive testing

A test approach in which the test suite comprises all combinations of input values and preconditions.

Exploratory testing

An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.

Failure

Deviation of the component or system from its expected delivery, service or result.

Functional test design technique

Procedure to derive and/or select test cases based on an analysis of the specification of the functionality of a component or system without reference to its internal structure.

Functional testing

Testing based on an analysis of the specification of the functionality of a component or system.

Functionality testing

The process of testing to determine the functionality of a software product.

Heuristic evaluation

A static usability test technique to determine the compliance of a user interface with recognized usability principles (the so-called "heuristics").

High level test case

A test case without concrete (implementation level) values for input data and expected results. Logical operators are used; instances of the actual values are not yet defined and/or available.

ISTQB

International Software Testing Qualifications Board. See the ISTQB website for more details.

Incident management tool

A tool that facilitates the recording and status tracking of incidents. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of incidents and provide reporting facilities.

Installability testing

The process of testing the installability of a software product.

Integration testing

Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.

Isolation testing

Testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs and drivers, if needed.

Keyword driven testing

A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test.
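A minimal sketch of the keyword-driven idea, with a hard-coded step table standing in for the data file and print statements standing in for real supporting scripts (all names are illustrative):

```java
// Sketch: keyword-driven testing. Each row of the (hypothetical) data file
// holds a keyword plus its argument; a small interpreter maps keywords to
// supporting actions and is driven by one control loop.
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class KeywordDrivenDemo {
    public static void main(String[] args) {
        // Keyword -> supporting action (stand-ins for real test steps).
        Map<String, Consumer<String>> actions = new HashMap<>();
        actions.put("OPEN_URL", arg -> System.out.println("Opening " + arg));
        actions.put("TYPE",     arg -> System.out.println("Typing '" + arg + "'"));
        actions.put("CLICK",    arg -> System.out.println("Clicking " + arg));

        // Rows as they might appear in a keyword spreadsheet: keyword, argument.
        String[][] steps = {
            {"OPEN_URL", "http://example.com/login"},
            {"TYPE",     "admin"},
            {"CLICK",    "loginButton"},
        };

        for (String[] step : steps) {
            actions.get(step[0]).accept(step[1]);
        }
    }
}
```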

Load testing

A test type concerned with measuring the behavior of a component or system with increasing load, e.g. number of parallel users and/or numbers of transactions to determine what load can be handled by the component or system.

Low level test case

A test case with concrete (implementation level) values for input data and expected results. Logical operators from high level test cases are replaced by actual values that correspond to the objectives of the logical operators.

Maintenance testing

Testing the changes to an operational system or the impact of a changed environment to an operational system.

Monkey testing

Testing by means of a random selection from a large range of inputs and by randomly pushing buttons, ignorant of how the product is being used.

Negative testing

Tests aimed at showing that a component or system does not work. Negative testing is related to the testers' attitude rather than a specific test approach or test design technique, e.g. testing with invalid input values or exceptions.

Non-functional testing

Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.

Operational testing

Testing conducted to evaluate a component or system in its operational environment.

Pair testing

Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.

Peer review

A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.

Performance testing

The process of testing to determine the performance of a software product.

Portability testing

The process of testing to determine the portability of a software product.

Post-execution comparison

Comparison of actual and expected results, performed after the software has finished running.

Priority

The level of (business) importance assigned to an item, e.g. defect.

Quality assurance

Part of quality management focused on providing confidence that quality requirements will be fulfilled.

Random testing

A black box test design technique where test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as reliability and performance.

Recoverability testing

The process of testing to determine the recoverability of a software product.

Regression testing

Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.

Requirements-based testing

An approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements, e.g. tests that exercise specific functions or probe non-functional attributes such as reliability or usability.

Re-testing

Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.

Risk-based testing

An approach to testing to reduce the level of product risks and inform stakeholders on their status, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding the test process.

Severity

The degree of impact that a defect has on the development or operation of a component or system.

Site acceptance testing

Acceptance testing by users/customers at their site, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes, normally including hardware as well as software.

Smoke test

A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, but not bothering with finer details. A daily build and smoke test is among industry best practices.

Statistical testing

A test design technique in which a model of the statistical distribution of the input is used to construct representative test cases.

Stress testing

Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.

Stub

A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
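A small sketch of a stub replacing a called component, using hypothetical names (a payment gateway and an order service) purely for illustration:

```java
// Sketch: a stub replacing a called component. The component under test depends
// on a payment gateway; the stub returns a canned response so the component can
// be tested in isolation. Names are illustrative, not from any real product.
public class StubDemo {

    interface PaymentGateway {
        boolean charge(String account, double amount);
    }

    // Skeletal, special-purpose implementation used only for testing.
    static class PaymentGatewayStub implements PaymentGateway {
        @Override
        public boolean charge(String account, double amount) {
            return true;  // always succeed, no real network call
        }
    }

    // Component under test, dependent on the gateway.
    static class OrderService {
        private final PaymentGateway gateway;
        OrderService(PaymentGateway gateway) { this.gateway = gateway; }
        String placeOrder(String account, double amount) {
            return gateway.charge(account, amount) ? "CONFIRMED" : "REJECTED";
        }
    }

    public static void main(String[] args) {
        OrderService service = new OrderService(new PaymentGatewayStub());
        System.out.println(service.placeOrder("ACC-1", 49.99));  // prints CONFIRMED
    }
}
```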

Syntax testing

A black box test design technique in which test cases are designed based upon the definition of the input domain and/or output domain.

System integration testing

Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).

System testing

The process of testing an integrated system to verify that it meets specified requirements.

Test automation

The use of software to perform or support test activities, e.g. test management, test design, test execution and results checking.

Test case specification

A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item.

Test design specification

A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases.

Test environment

An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.

Test harness

A test environment comprised of stubs and drivers needed to execute a test.

Test log

A chronological record of relevant details about the execution of tests.

Test management tool

A tool that provides support to the test management and control part of a test process. It often has several capabilities, such as testware management, scheduling of tests, and the logging of results, progress tracking, incident management and test reporting.

Test oracle

A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual's specialized knowledge, but should not be the code.

Test plan

A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.

Test strategy

A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects).

Test suite

A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.

Testware

Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.

Thread testing

A version of component integration testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by levels of a hierarchy.

Top-down testing

An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

Traceability

The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability.

Usability testing

Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions.

Use case

A sequence of transactions in a dialogue between a user and the system with a tangible result.

Use case testing

A black box test design technique in which test cases are designed to execute user scenarios.

Unit test framework

A tool that provides an environment for unit or component testing in which a component can be tested in isolation or with suitable stubs and drivers. It also provides other support for the developer, such as debugging capabilities.

Validation

Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.

Verification

Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.

Vertical traceability

The tracing of requirements through the layers of development documentation to components.

Volume testing

Testing where the system is subjected to large volumes of data.

Walkthrough

A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content.

White-box testing

Testing based on an analysis of the internal structure of the component or system.