The Importance of “Hands-On” Mobile App Testing

On a simulator, you still use a mouse to 'touch' the screen and simulate gestures. You also have a full-sized keyboard for data entry. Of course, this is very different from using a mobile device, wouldn't you say?

First, a mobile device sits in your hand. Each of us likely holds and operates the device a little differently. For some, it's one hand, using a thumb or a finger. For others, it might be two hands using both thumbs.

Second, there's the act of touching various screen elements like buttons and controls. This is much easier to do with a mouse pointer than a pudgy finger.

In the experience of many mobile testers, this difference is the most critical one for testing application design and function. Using a mouse with the simulator, you don't get the full effect of having to scroll through a large list view of items, or of having to play 'whack-a-mole' on the screen with your thumb because button placement for navigating multiple screens is inconsistent.

Mobile developers are strongly encouraged to ensure that application testing begins early, and happens often, on the mobile device itself rather than on a simulator. The same holds true for tablet devices.

From Web Trends, Mobile Analytics: "Even on the same mobile platform, screen sizes and resolutions can vary based on device type. For instance, the screen size and resolution on the HTC Incredible is different than that on the HTC EVO 4G. Consequently, for an application to have a consistent look and feel across both devices and across a variety of other devices, user interface elements and graphics need to be scalable."

Top 10 Reasons to Become a Mobile App Tester

There are lots of reasons to become a mobile app tester, which you would know if you read our posts every day. Here are ten of those reasons, in no particular order:

1. High income potential

2. You want to work in the "wild west" of new technology

3. No fancy degrees or certifications needed to get started

4. You want to say "I tested that app!" to your friends and family

5. You're bored with testing the same old web and desktop apps

6. You want to see the latest, greatest apps before everyone else

7. You want to be one of the early experts in a fast-growing field

8. You're curious, with a knack for problem-solving

9. You want to get paid to play with the latest apps and devices

10. You want your wireless bill to be tax deductible

Mobile Functional Testing: Manual or Automated?

Okay, so you know what aspects of your mobile application are in need of functional testing. But before you start crafting test cases or user journeys, you must answer another important question: manual testing or automation?

For established companies, the answer to that question is a resounding "both". But for startups with limited testing budgets and rapidly evolving applications, manual testing – although slightly more costly – is the preferred option. There are several open-source automated solutions, but many of them are made exclusively for one operating system (iOS).

Other advantages of manual testing include:

  • Find real bugs: Automation suites will highlight some errors, but most bugs within mobile apps – especially usability and layout issues – are only discovered under true real-world scenarios.
  • Adaptability: Manual testing can be altered much more quickly and effectively than an elaborate automated test. If you're working within a startup environment, your testing requirements are likely to change as new features are added.

  • Real feedback: Unfortunately, automated tests can't give you an honest (human) opinion about your app's performance, usability and functionality. We'll let you know when this changes. In the meantime, you need to see results from real users with real devices.
  • Variable control: As we've alluded to earlier, there are simply too many outside variables to rely on automation for all of your testing objectives. Until you've isolated and addressed all of these variables, manual testing should be your preferred methodology.

Mobile testing for start-ups is all about discovering new areas of concern. So, to rehash an old quote from mobile testing expert Karen N. Johnson:

Software Cost Estimation

This article examines the process of Software Cost Estimation and its impact on the software development process. We also highlight the various challenges involved in Software Cost Estimation and common ways to navigate them.

Background:
Software Cost Estimation is widely considered to be a weak link in software project management, and performing it correctly requires significant effort. Errors in Software Cost Estimation can be attributed to a variety of factors; various studies in the last decade indicated that 3 out of 4 software projects are not finished on time, are over budget, or both.


Who is responsible for Software Cost Estimation?
The group of people responsible for creating a software cost estimate varies with each organization. However, the following holds in most scenarios:


- People who are directly involved with the implementation take part in the estimate.
- The Project Manager is responsible for producing realistic cost estimates.
- Project Managers may perform this task on their own or consult the programmers responsible.
- Various studies indicate that estimates are more accurate when the programmers responsible for the development are involved, and that programmers are more motivated to meet targets they helped estimate.


The following scenarios are also possible:


- An independent cost estimation team creates the estimate.
- Independent experts are given the software specification and each creates a software cost estimate; the estimation team reviews these and arrives at a final figure by group consensus.


Factors contributing to inaccurate estimation


· Scope creep and imprecise, drifting requirements
· New software projects pose new challenges, which may be very different from past projects
· Many teams fail to document metrics and lessons learned from past projects
· Estimates are often forced to match the available time and resources by aggressive leaders
· Unrealistic estimates may be created by various 'political undercurrents'


Impact of Under-estimating:
Under-estimating a project can be very damaging:


- It leads to improper project planning
- It can result in under-staffing, which may leave the team overworked and burnt out
- Above all, the quality of deliverables may suffer directly due to insufficient testing and QA
- Missed deadlines cause loss of credibility and goodwill


The Estimation Process:
Generally, the Software Cost Estimation process consists of 4 main steps:


1) Estimate the size of the development product.
This step consists of various sub-steps or sub-tasks. Some of them may already have been done during the Requirement Analysis phase; if not, they should be done as part of the estimation process. The important thing is that they are done, to ensure the success of the estimation process and of the software project as a whole.


a) Create a detailed Work Breakdown Structure. This is one of the most important steps, and it directly impacts the accuracy of the estimate. The Work Breakdown Structure should include all tasks that are within the scope of the project being estimated. The most serious handicap is the inability to clearly visualize the steps involved in the project; executing a software project is not just coding.


b) The Work Breakdown Structure should record the size and complexity of each software module, expressed as a number of Lines of Code, Function Points, or any other unit of measure.


c) The Work Breakdown Structure should include tasks other than coding, such as Software Configuration Management, various levels and types of testing, documentation, communication, user interaction, implementation, knowledge transition, support tasks (if any), and so on.


d) Clearly indicate or eliminate any gray areas (vague or unclear specifications, etc.).


e) Also take into account the various risk factors and downtimes. Many different risk factors are involved – technical aspects such as availability of the environment, server/machine uptime and third-party software or hardware failures, and human aspects such as employee attrition and sick time. Some of them may seem like 'overkill', but real-world experience shows that these factors affect project timelines; if ignored, they can adversely impact the estimates.
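
To make step 1 concrete, here is a minimal sketch in Python of a Work Breakdown Structure whose leaf modules carry size estimates that are rolled up into a single project-wide figure. All module names, task names and numbers are hypothetical examples, not recommendations.

# A minimal sketch of step 1: a Work Breakdown Structure (WBS) whose
# leaf modules carry a size estimate. All names and figures here are
# hypothetical examples.

wbs = {
    "Login module": 120,      # size in function points
    "Reporting module": 300,
    "Batch interface": 80,
}

# Non-coding tasks from the WBS, estimated separately from module size.
non_coding_tasks = [
    "Configuration management",
    "System testing",
    "Documentation",
    "Knowledge transition",
]

def total_size_fp(wbs):
    """Roll the per-module size estimates up into one project-wide figure."""
    return sum(wbs.values())

print(f"Total estimated size: {total_size_fp(wbs)} function points")
print(f"Non-coding tasks still to estimate: {len(non_coding_tasks)}")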


2) Estimate the effort in person-hours.
The results of the various tasks in step 1 feed into an effort estimate in person-hours. The effort for the various project tasks, expressed in person-hours, is also influenced by factors such as:


a) Experience/Capability of the Team members
b) Technical resources
c) Familiarity with the Development Tools and Technology Platform


3) Estimate the schedule in calendar months.
The project planners work closely with the technical leads, the Project Manager and other stakeholders to create a project schedule. Tight schedules may impact the cost of developing the application.


4) Estimate the project cost in dollars (or another currency).
Based on the above information, the project effort is expressed in dollars or another currency.
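
Read together, steps 2 through 4 reduce to simple arithmetic. The Python sketch below strings them together; every constant in it (the productivity rate, team size, working hours and hourly rate) is a hypothetical placeholder, since real values must come from an organization's own historical metrics.

# A hedged sketch of steps 2-4: size -> effort -> schedule -> cost.
# Every constant below is a hypothetical placeholder.

size_fp = 420                  # size estimate from step 1, in function points
hours_per_fp = 8.0             # assumed productivity: person-hours per function point
team_size = 5                  # full-time people on the project
working_hours_per_month = 160  # per person
hourly_rate = 75.0             # assumed cost per person-hour, in dollars

effort_hours = size_fp * hours_per_fp                                   # step 2
schedule_months = effort_hours / (team_size * working_hours_per_month)  # step 3
cost_dollars = effort_hours * hourly_rate                               # step 4

print(f"Effort:   {effort_hours:.0f} person-hours")
print(f"Schedule: {schedule_months:.1f} calendar months")
print(f"Cost:     ${cost_dollars:,.0f}")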

Measuring the Size/Complexity of the Software Program:
This is one of the most elusive aspects of the Software Cost Estimation process.
There are different methodologies for arriving at and expressing the size/complexity of a software program. Some of the popular ones are:


1) Function Points
2) Lines of Code
3) Feature Points
4) Mk II function points
5) 3D Function Points
6) Benchmarking

We briefly explain each of the above methods in the following sections.


Function Points
The Function Point methodology was developed by Allan Albrecht at IBM. This methodology is based on the belief that the size of a software project can be estimated during the requirements analysis. It takes into account the inputs and outputs of the system. Five classes of items are counted:


1. External Inputs
2. External Outputs
3. Logical Internal Files
4. External Interface Files
5. External Inquiries


The total Function Point count is then calculated from:


a) The counts for each of these items
b) The weighting factors and adjustment factors defined in the methodology
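
As a rough illustration, the Python sketch below applies the commonly published IFPUG average-complexity weights to a set of invented item counts, then applies the value adjustment factor (0.65 plus 0.01 times the sum of the 14 general system characteristic ratings). All counts and ratings are hypothetical.

# A sketch of a function point count using the commonly published IFPUG
# average-complexity weights. Item counts and ratings are hypothetical.

AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "logical_internal_files": 10,
    "external_interface_files": 7,
}

counts = {
    "external_inputs": 12,
    "external_outputs": 8,
    "external_inquiries": 5,
    "logical_internal_files": 6,
    "external_interface_files": 2,
}

# Unadjusted function points: weighted sum of the five item classes.
ufp = sum(AVERAGE_WEIGHTS[item] * n for item, n in counts.items())

# Value adjustment factor: 14 general system characteristics, each rated
# 0-5; VAF = 0.65 + 0.01 * (sum of the ratings).
gsc_ratings = [3] * 14  # hypothetical: everything rated "average"
vaf = 0.65 + 0.01 * sum(gsc_ratings)

adjusted_fp = ufp * vaf
print(f"UFP = {ufp}, VAF = {vaf:.2f}, adjusted FP = {adjusted_fp:.1f}")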


What are function points and why count them?
"Function points are a measure of the size of Software applications and the projects that build them. The size is measured from a functional, or user, point of view. It is independent of the computer language, development methodology, technology or capability of the project team used to develop the application."


Function points are not a perfect measure of effort to develop an application or of its business value, although the size in function points is typically an important factor in measuring each. Since the function point count for an application is independent of the technology used to develop the application it can be used for almost all types of applications such as GUI, OOP, Client Server, etc.
Since function points are based on screens, reports and other external objects, this measure takes the users' view. In these days of outsourcing and other confusion regarding the role of IT in an organization, understanding the users' view is of critical importance!


Lines of code:
Counting lines of code measures software from the developers' point of view. The number of lines of code is the traditional way of measuring application size, although many people now consider the method irrelevant. There are technical problems with the lines-of-code measure: it is difficult to compare lines of code when a mix of technologies is used, and there is no standard definition of what a line of code is, since a program may contain blank lines, comments, data declarations and multi-line statements.
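
The ambiguity is easy to demonstrate. The naive Python counter below gives three different answers for the same file depending on whether blank lines and comment lines are counted; the sample source is invented.

# A naive line-of-code counter showing why "a line of code" is ambiguous:
# the answer changes with how blank and comment lines are treated.

source = """\
# compute total price
def total(prices):

    return sum(prices)  # one logical statement
"""

physical = source.splitlines()
non_blank = [ln for ln in physical if ln.strip()]
non_comment = [ln for ln in non_blank if not ln.strip().startswith("#")]

print(len(physical), "physical lines")    # 4
print(len(non_blank), "non-blank lines")  # 3
print(len(non_comment), "code lines")     # 2 (trailing comments still counted)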


Feature Points Methodology:
It was developed by Software Productivity Research (SPR) in 1986. This technique takes into account the number of algorithms used in the application and is compatible with the Function Points methodology; the sizes calculated by the two methods for an ordinary transactional program would be the same. Feature Points is generally more useful for estimation in real-time process control, mathematical optimization and various embedded systems, where its estimates are higher and considered more accurate.


Mk II Function Points Methodology:
This was developed by Charles Symons in 1984 at Nolan, Norton & Co., part of KPMG Management Consulting. The original Function Point approach suffers from the following weaknesses:


· It is often difficult to identify the components of an application.
· The original Function Point methodology assigned weights to function point components based on "debate and trial."
· The original Function Point methodology did not provide a means of accounting for internal complexity (the 'Feature Points' technique addresses this issue).
· When small systems are combined into larger applications, the Function Point methodology makes the total function point count less than the sum of the components.


Mk II decomposes the application being counted into a collection of logical transactions, each consisting of an input, a process and an output. For each transaction, Unadjusted Function Points (UFP) are calculated as a function of the number of input data element types, entity types referenced and output data element types; the UFPs for the entire system are then summed. Mk II is widely used in the UK, India, Singapore, Hong Kong and Europe. Users include governmental organizations, finance, insurance, retail and manufacturing.
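
As an illustration, the Python sketch below sums Mk II unadjusted function points over a couple of invented transactions, using the commonly published Mk II industry-average weights (0.58 for input data element types, 1.66 for entity types referenced, 0.26 for output data element types).

# A sketch of a Mk II function point count over logical transactions.
# The weights are the commonly published industry averages; the
# transactions themselves are hypothetical.

W_INPUT, W_ENTITY, W_OUTPUT = 0.58, 1.66, 0.26

transactions = [
    # (name, input data element types, entity types referenced,
    #  output data element types)
    ("Create order", 9, 3, 2),
    ("Order enquiry", 2, 4, 14),
]

ufp = sum(W_INPUT * n_in + W_ENTITY * n_ent + W_OUTPUT * n_out
          for _, n_in, n_ent, n_out in transactions)

print(f"System UFP = {ufp:.2f}")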


3D function points:
This methodology was developed by the Boeing Company and published in 1992. The technique was designed to address two classic problems associated with the Albrecht approach (the original Function Point methodology):


a) The original Function Point methodology is not user friendly.
b) It is inaccurate when measuring complex scientific and real-time systems.


3D function points take into account three dimensions – data, function and control. The data dimension is similar to the original Function Point methodology; the function dimension accounts for transformations or algorithms; and the control dimension accounts for transitions or changes in application state.



Benchmarking:
Over the years, many organizations with significant development experience and mature processes have collected metrics on their software development projects, including the time and effort required to develop applications on various platforms and in various business domains. Benchmarks are created based on this data.


Each new software module to be developed can be categorized using:


a) Number of inputs
b) Number of outputs
c) Number of transactions
d) Algorithms
e) Features of the module


Based on the above factors, the module can be categorized, for example, as Simple, Medium or Complex; a module that is too complex can be expressed in multiples of these three categories. The baseline effort in person-hours for each category is predefined based on historical data/metrics for a similar platform, and the figure can be refined over time. The approach is comparable to an algorithm for calculating a car insurance premium, and it is used to estimate the size of, and effort needed for, software development. A minimal sketch follows.
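
In the Python sketch below, the scoring rule, thresholds and baseline hours are invented placeholders standing in for an organization's real historical metrics.

# A sketch of benchmark-based estimation: categorize each module, then
# apply a baseline effort figure. All thresholds and baseline hours are
# hypothetical; real figures come from historical project metrics.

BASELINE_HOURS = {"Simple": 40, "Medium": 120, "Complex": 320}

def categorize(inputs, outputs, transactions, algorithms):
    """Map a module's counted features onto a complexity category."""
    score = inputs + outputs + 2 * transactions + 3 * algorithms
    if score <= 10:
        return "Simple"
    if score <= 25:
        return "Medium"
    return "Complex"

modules = [
    # (name, inputs, outputs, transactions, algorithms)
    ("Customer lookup", 3, 2, 1, 0),
    ("Premium calculation", 6, 4, 5, 3),
]

total = 0
for name, *features in modules:
    category = categorize(*features)
    hours = BASELINE_HOURS[category]
    total += hours
    print(f"{name}: {category}, baseline {hours} person-hours")

print(f"Total estimated effort: {total} person-hours")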

Risk Analysis in Software Testing

Risk Analysis

In this tutorial you will learn about Risk Analysis: technical definitions, risk assessment, business impact analysis, and the major risk categories – product size risks, business impact risks, customer-related risks, process risks, technical issues, technology risk, development environment risks, and risks associated with staff size and experience.

Risk Analysis is one of the important concepts in the software product/project life cycle. Risk analysis is broadly defined to include risk assessment, risk characterization, risk communication, risk management, and policy relating to risk. Risk assessment is also called security risk analysis.


Technical Definitions:

Risk Analysis: A risk analysis involves identifying the most probable threats to an organization and analyzing the related vulnerabilities of the organization to these threats.


Risk Assessment: A risk assessment involves evaluating existing physical and environmental security and controls, and assessing their adequacy relative to the potential threats to the organization.


Business Impact Analysis: A business impact analysis involves identifying the critical business functions within the organization and determining the impact of not performing the business function beyond the maximum acceptable outage. Types of criteria that can be used to evaluate the impact include: customer service, internal operations, legal/statutory and financial.


Risks for a software product can be categorized into various types. Some of them are:


Product Size Risks:

The following risk item issues identify some generic risks associated with product size:


  • Estimated size of the product, and degree of confidence in that estimate? 
  • Size of database created or used by the product? 
  • Number of users of the product? 
  • Number of projected changes to the requirements for the product?

Risk will be high when a large deviation occurs between expected values and previous experience. All the expected information must be compared with previous experience to analyze the risk; a minimal sketch of this comparison follows.
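
The Python figures and the 50% deviation threshold below are invented placeholders.

# Flag a product-size risk when an estimate deviates sharply from
# historical experience. All figures and the threshold are hypothetical.

HISTORICAL = {"size_fp": 400, "users": 200, "requirement_changes": 15}
ESTIMATED = {"size_fp": 950, "users": 220, "requirement_changes": 60}

DEVIATION_THRESHOLD = 0.5  # flag anything more than 50% off the baseline

for item, past in HISTORICAL.items():
    deviation = abs(ESTIMATED[item] - past) / past
    if deviation > DEVIATION_THRESHOLD:
        print(f"HIGH RISK: {item} deviates {deviation:.0%} from past experience")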


Business Impact Risks:

The following risk item issues identify some generic risks associated with business impact:


  • Effect of this product on company revenue? 
  • Reasonableness of delivery deadline? 
  • Number of customers who will use this product and the consistency of their needs relative to the product? 
  • Number of other products/systems with which this product must be interoperable? 
  • Amount and quality of product documentation that must be produced and delivered to the customer? 
  • Costs associated with late delivery or a defective product?

Customer-Related Risks:

Different customers have different needs and different personalities. Some customers accept whatever is delivered; others complain about the quality of the product. Some customers may have a very good association with the product and its producer, while others may not. A bad customer represents a significant threat to the project plan and a substantial risk for the project manager.


The following risk item checklist identifies generic risks associated with different customers:


  • Have you worked with the customer in the past? 
  • Does the customer have a solid idea of what is required? 
  • Will the customer agree to spend time in formal requirements gathering meetings to identify project scope? 
  • Is the customer willing to participate in reviews? 
  • Is the customer technically sophisticated in the product area? 
  • Does the customer understand the software engineering process?

Process Risks:

If the software engineering process is ill-defined or if analysis, design and testing are not conducted in a planned fashion, then risks are high for the product.


  • Has your organization developed a written description of the software process to be used on this project? 
  • Are the team members following the software process as it is documented? 
  • Are the third-party coders following a specific software process, and is there any procedure for tracking their performance? 
  • Are formal technical reviews done regularly by both the development and testing teams? 
  • Are the results of each formal technical review documented, including defects found and resources used? 
  • Is configuration management used to maintain consistency among system/software requirements, design, code, and test cases? 
  • Is a mechanism used for controlling changes to customer requirements that impact the software?

Technical Issues:

  • Are specific methods used for software analysis? 
  • Are specific conventions for code documentation defined and used? 
  • Are any specific methods used for test case design? 
  • Are software tools used to support planning and tracking activities? 
  • Are configuration management software tools used to control and track change activity throughout the software process? 
  • Are tools used to create software prototypes? 
  • Are software tools used to support the testing process? 
  • Are software tools used to support the production and management of documentation? 
  • Are quality metrics collected for all software projects? 
  • Are productivity metrics collected for all software projects?

Technology Risk:

  • Is the technology to be built new to your organization? 
  • Does the software interface with new hardware configurations? 
  • Does the software to be built interface with a database system whose function and performance have not been proven in this application area? 
  • Is a specialized user interface demanded by product requirements? 
  • Do requirements demand the use of new analysis, design or testing methods? 
  • Do requirements put excessive performance constraints on the product?

Development Environment Risks:

  • Is a software project and process management tool available? 
  • Are tools for analysis and design available? 
  • Do analysis and design tools deliver methods that are appropriate for the product to be built? 
  • Are compilers or code generators available and appropriate for the product to be built? 
  • Are testing tools available and appropriate for the product to be built? 
  • Are software configuration management tools available? 
  • Does the environment make use of a database or repository? 
  • Are all software tools integrated with one another? 
  • Have members of the project team received training in each of the tools?

Risks Associated with Staff Size and Experience:

  • Are the best people available, and are there enough of them for the project? 
  • Do the people have the right combination of skills? 
  • Is the staff committed for the entire duration of the project? 

Metrics Used In Software Testing

Metrics Used In Testing

In this tutorial you will learn about the metrics used in testing: the product quality measures – 1. Customer satisfaction index, 2. Delivered defect quantities, 3. Responsiveness (turnaround time) to users, 4. Product volatility, 5. Defect ratios, 6. Defect removal efficiency, 7. Complexity of delivered product, 8. Test coverage, 9. Cost of defects, 10. Costs of quality activities, 11. Re-work, 12. Reliability – and metrics for evaluating application system testing.

The Product Quality Measures:

1. Customer satisfaction index


This index is surveyed before product delivery and after product delivery (and on an ongoing, periodic basis, using standard questionnaires). The following are analyzed:


  • Number of system enhancement requests per year
  • Number of maintenance fix requests per year
  • User friendliness: call volume to customer service hotline
  • User friendliness: training time per new user
  • Number of product recalls or fix releases (software vendors)
  • Number of production re-runs (in-house information systems groups)

2. Delivered defect quantities


These are normalized per function point (or per LOC) at product delivery (first 3 months or first year of operation) or on an ongoing basis (per year of operation), by level of severity and by category or cause, e.g. requirements defect, design defect, code defect, documentation/on-line help defect, defect introduced by fixes, etc.


3. Responsiveness (turnaround time) to users


  • Turnaround time for defect fixes, by level of severity
  • Time for minor vs. major enhancements; actual vs. planned elapsed time

4. Product volatility


  • Ratio of maintenance fixes (to repair the system & bring it into compliance with specifications), vs. enhancement requests (requests by users to enhance or change functionality)

5. Defect ratios


  • Defects found after product delivery per function point.
  • Defects found after product delivery per LOC
  • Ratio of pre-delivery defects to annual post-delivery defects
  • Defects per function point of the system modifications

6. Defect removal efficiency


  • Number of post-release defects (found by clients in field operation), categorized by level of severity
  • Ratio of defects found internally prior to release (via inspections and testing), as a percentage of all defects
  • Here 'all defects' includes defects found internally plus those found externally (by customers) in the first year after product delivery
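
As a worked example with invented figures: if inspections and testing find 90 defects before release and customers report 10 more in the first year, the defect removal efficiency is 90 / (90 + 10) = 90%.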

7. Complexity of delivered product


  • McCabe's cyclomatic complexity counts across the system
  • Halstead's measure
  • Card's design complexity measures
  • Predicted defects and maintenance costs, based on complexity measures

8. Test coverage


  • Breadth of functional coverage
  • Percentage of paths, branches or conditions that were actually tested
  • Percentage by criticality level: perceived level of risk of paths
  • The ratio of the number of detected faults to the number of predicted faults.

9. Cost of defects


  • Business losses per defect that occurs during operation
  • Business interruption costs; costs of work-arounds
  • Lost sales and lost goodwill
  • Litigation costs resulting from defects
  • Annual maintenance cost (per function point)
  • Annual operating cost (per function point)
  • Measurable damage to your boss's career

10. Costs of quality activities


  • Costs of reviews, inspections and preventive measures
  • Costs of test planning and preparation
  • Costs of test execution, defect tracking, version and change control
  • Costs of diagnostics, debugging and fixing
  • Costs of tools and tool support
  • Costs of test case library maintenance
  • Costs of testing & QA education associated with the product
  • Costs of monitoring and oversight by the QA organization (if separate from the development and test organizations)

11. Re-work


  • Re-work effort (hours, as a percentage of the original coding hours)
  • Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)
  • Re-worked software components (as a percentage of the total delivered components)

12. Reliability


  • Availability (percentage of time a system is available, versus the time the system is needed to be available)
  • Mean time between failure (MTBF).
  • Mean time to repair (MTTR)
  • Reliability ratio (MTBF / MTTR)
  • Number of product recalls or fix releases
  • Number of production re-runs as a ratio of production runs
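
As a worked example with invented figures: an MTBF of 400 hours and an MTTR of 4 hours give a reliability ratio of 400 / 4 = 100; availability, commonly computed as MTBF / (MTBF + MTTR), works out to 400 / 404 ≈ 99%.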

Metrics for Evaluating Application System Testing:

Metric = Formula


Test Coverage = Number of units (KLOC/FP) tested / total size of the system (KLOC = thousands of lines of code; FP = function points)


Number of tests per unit size = Number of test cases per KLOC/FP


Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria


Defects per size = Defects detected / system size


Test cost (in %) = Cost of testing / total cost *100


Cost to locate defect = Cost of testing / the number of defects located


Achieving Budget = Actual cost of testing / Budgeted cost of testing


Defects detected in testing = Defects detected in testing / total system defects


Defects detected in production = Defects detected in production/system size


Quality of Testing = No of defects found during Testing/(No of defects found during testing + No of acceptance defects found after delivery) *100


Effectiveness of testing to business = Loss due to problems / total resources processed by the system.


System complaints = Number of third party complaints / number of transactions processed


Scale of Ten = Assessment of testing by giving a rating on a scale of 1 to 10


Source Code Analysis = Number of source code statements changed / total number of tests.


Effort Productivity:
Test Planning Productivity = No. of test cases designed / Actual effort for design and documentation


Test Execution Productivity = No. of test cycles executed / Actual effort for testing
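
As a worked illustration, the Python sketch below evaluates a few of these formulas against one set of invented project figures.

# Evaluating a few of the metrics above from hypothetical project figures.

system_size_kloc = 50
units_tested_kloc = 42
defects_in_testing = 180
acceptance_defects = 20     # defects found after delivery
cost_of_testing = 60_000
total_cost = 400_000

test_coverage = units_tested_kloc / system_size_kloc
test_cost_pct = cost_of_testing / total_cost * 100
cost_to_locate = cost_of_testing / defects_in_testing
quality_of_testing = (defects_in_testing
                      / (defects_in_testing + acceptance_defects) * 100)

print(f"Test coverage:         {test_coverage:.0%}")
print(f"Test cost:             {test_cost_pct:.0f}% of total cost")
print(f"Cost to locate defect: ${cost_to_locate:,.2f}")
print(f"Quality of testing:    {quality_of_testing:.1f}%")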