Chapter 1 Introduction to Test Case Design  

 

1.1 How to identify errors and bugs in a given application.

1.2 Design entry and exit criteria for test cases; design test cases in Excel.

1.3 Describe the features of the testing methods used.

What is Software Testing? Definition

Software Testing

Software Testing is a method to check whether the actual software product matches expected requirements and to ensure that the software product is defect-free. It involves the execution of software/system components, using manual or automated tools, to evaluate one or more properties of interest. The purpose of software testing is to identify errors, gaps or missing requirements in contrast to the actual requirements.

Some define software testing in terms of White Box and Black Box Testing. In simple terms, Software Testing means the verification of the Application Under Test (AUT). This chapter introduces software testing and explains its importance.

Why Software Testing is Important?

Software testing is important because it allows any bugs or errors in the software to be identified early and fixed before the software product is delivered. A properly tested software product ensures reliability, security and high performance, which in turn results in time savings, cost effectiveness and customer satisfaction.

What is the need of Testing?

Testing is important because software bugs could be expensive or even dangerous. Software  bugs can potentially cause monetary and human loss, and history is full of such examples.

In April 2015, the Bloomberg terminal in London crashed due to a software glitch, affecting more than 300,000 traders on financial markets. It forced the government to postpone a £3bn debt sale.

Nissan recalled over 1 million cars from the market due to a software failure in the airbag sensor detectors. Two accidents were reported because of this software failure.

Starbucks was forced to close about 60 percent of its stores in the U.S. and Canada due to a software failure in its POS system. At one point, stores served coffee for free as they were unable to process transactions.

Some of Amazon’s third-party retailers saw their product prices reduced to 1p due to a software glitch, leaving them with heavy losses.

A vulnerability in Windows 10 enabled users to escape from security sandboxes through a flaw in the win32k system.

In 2015, the F-35 fighter plane fell victim to a software bug, making it unable to detect targets correctly.

A China Airlines Airbus A300 crashed due to a software bug on April 26, 1994, killing 264 people.

In 1985, Canada’s Therac-25 radiation therapy machine malfunctioned due to a software bug and delivered lethal radiation doses to patients, leaving three people dead and critically injuring three others.

In April of 1999, a software bug caused the failure of a $1.2 billion military satellite  launch, the costliest accident in history

In May of 1996, a software bug caused the bank accounts of 823 customers of a major  U.S. bank to be credited with 920 million US dollars.

What are the benefits of Software Testing?

Here are the benefits of using software testing:

Cost-Effective: This is one of the important advantages of software testing. Testing any IT project on time helps you save money in the long term. If bugs are caught in an earlier stage of software testing, they cost less to fix.

Security: This is the most sensitive benefit of software testing, as people look for trusted products. Testing helps remove security risks and problems early.

Product quality: It is an essential requirement of any software product. Testing ensures a  quality product is delivered to customers.

Customer Satisfaction: The main aim of any product is to give satisfaction to their customers.  UI/UX Testing ensures the best user experience.

V-Model in Software Testing

V Model

The V Model is a highly disciplined SDLC model in which a testing phase runs parallel to each development phase. It is an extension of the waterfall model, in which software development and testing are executed sequentially. It is also known as the Verification and Validation model.







 What is Software Testing Life Cycle (STLC)?

Software Testing Life Cycle (STLC) is a sequence of specific activities conducted during the testing process to ensure software quality goals are met. STLC involves both verification and validation activities. Contrary to popular belief, software testing is not a single, isolated activity; it consists of a series of activities carried out methodically to help certify your software product.

STLC Phases

The following six major phases make up every Software Testing Life Cycle model (STLC model):




STLC Model Phases

1. Requirement Analysis

2. Test Planning

3. Test case development

4. Test Environment setup

5. Test Execution

6. Test Cycle closure

Each of these stages has a definite Entry and Exit criteria, Activities & Deliverables associated  with it.

What is Entry and Exit Criteria in STLC?

Entry Criteria: Entry Criteria gives the prerequisite items that must be completed before  testing can begin.

Exit Criteria: Exit Criteria defines the items that must be completed before testing can be concluded.
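The entry-criteria gate described above can be sketched as a small data structure: a phase may begin only once every prerequisite item exists. The phase names and criteria strings below are illustrative, not taken from any real test-management tool.

```python
from dataclasses import dataclass, field

@dataclass
class StlcPhase:
    name: str
    entry_criteria: list   # prerequisite items that must exist before the phase starts
    exit_criteria: list    # items that must be completed before the phase ends
    deliverables: list = field(default_factory=list)

    def can_start(self, completed_items):
        # The phase may begin only when every entry criterion is satisfied.
        return all(item in completed_items for item in self.entry_criteria)

# Illustrative example: Test Planning cannot begin until the RTM exists.
planning = StlcPhase(
    name="Test Planning",
    entry_criteria=["requirements document", "RTM"],
    exit_criteria=["approved test plan", "signed-off effort estimation"],
    deliverables=["test plan/strategy document", "effort estimation document"],
)

print(planning.can_start({"requirements document"}))         # RTM is still missing
print(planning.can_start({"requirements document", "RTM"}))
```

The same gate check applies symmetrically to exit criteria: a phase concludes only when every exit item is done.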

Requirement Phase Testing

Requirement Phase Testing, also known as Requirement Analysis, is the phase in which the test team studies the requirements from a testing point of view to identify testable requirements. The QA team may interact with various stakeholders to understand the requirements in detail. Requirements can be either functional or non-functional. Automation feasibility analysis for the testing project is also done in this stage.

Activities in Requirement Phase Testing

Identify types of tests to be performed.

Gather details about testing priorities and focus.

Prepare Requirement Traceability Matrix (RTM).

Identify test environment details where testing is supposed to be carried out.

Automation feasibility analysis (if required).

Deliverables of Requirement Phase Testing

RTM

Automation feasibility report (if applicable).
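To make the RTM deliverable concrete, here is a minimal sketch in Python: each requirement maps to the test cases that verify it, and an empty list exposes a coverage gap. The requirement and test case IDs are hypothetical.

```python
# Minimal Requirement Traceability Matrix: requirement -> verifying test cases.
rtm = {
    "REQ-001 Login with valid credentials":  ["TC-01"],
    "REQ-002 Reject invalid credentials":    ["TC-02", "TC-03"],
    "REQ-003 Lock account after 5 failures": [],   # not yet covered by any test
}

# Any requirement with no linked test case is a coverage gap.
untested = [req for req, cases in rtm.items() if not cases]
print(untested)
```

This is the property the RTM exists to make visible: every requirement traceable to at least one test.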

Test Planning in STLC

Test Planning in STLC is a phase in which a Senior QA manager determines the test plan  strategy along with efforts and cost estimates for the project. Moreover, the resources, test  environment, test limitations and the testing schedule are also determined. The Test Plan gets  prepared and finalized in the same phase.

Test Planning Activities

Preparation of test plan/strategy document for various types of testing

Test tool selection

Test effort estimation

Resource planning and determining roles and responsibilities.

Training requirement

Deliverables of Test Planning

Test plan /strategy document.

Effort estimation document.

Test Case Development Phase

 

The Test Case Development Phase involves the creation, verification and rework of test cases and test scripts after the test plan is ready. Initially, the test data is identified, then created, reviewed and reworked based on the preconditions. The QA team then develops test cases for the individual units.

Test Case Development Activities

Create test cases, automation scripts (if applicable)

Review and baseline test cases and scripts

Create test data (If Test Environment is available)

Deliverables of Test Case Development

Test cases/scripts

Test data

Test Environment Setup

Test Environment Setup decides the software and hardware conditions under which a work product is tested. It is one of the critical aspects of the testing process and can be done in parallel with the Test Case Development Phase. The test team may not be involved in this activity if the development team provides the test environment; however, the test team is required to do a readiness check (smoke testing) of the given environment.

Test Environment Setup Activities

Understand the required architecture, environment set-up and prepare hardware and software  requirement list for the Test Environment.

Setup test Environment and test data

Perform smoke test on the build

Deliverables of Test Environment Setup

Environment ready with test data set up

Smoke Test Results.
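The readiness check (smoke test) of the environment can be sketched as a set of named boolean checks; the check names and lambdas below are hypothetical stand-ins for real probes such as an HTTP ping of the AUT or a database connection attempt.

```python
def smoke_test(checks):
    """Run every readiness check; return (passed, names of failed checks)."""
    failed = [name for name, check in checks.items() if not check()]
    return (not failed, failed)

# Hypothetical readiness checks for a test environment.
checks = {
    "application URL responds": lambda: True,   # e.g. HTTP 200 from the AUT
    "test database reachable":  lambda: True,
    "test data loaded":         lambda: False,  # simulate a missing precondition
}

passed, failures = smoke_test(checks)
print(passed, failures)   # build would be rejected: test data is not loaded
```

Accepting or rejecting the build then follows directly from the `passed` flag.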

Test Execution Phase

The Test Execution Phase is carried out by the testers, in which testing of the software build is done based on the test plans and test cases prepared. The process consists of test script execution, test script maintenance and bug reporting. If bugs are found, they are reported back to the development team for correction, and retesting is performed.

Test Execution Activities

Execute tests as per plan

Document test results, and log defects for failed cases

Map defects to test cases in RTM

Retest the Defect fixes

Track the defects to closure

Deliverables of Test Execution

Completed RTM with the execution status

Test cases updated with results

Defect reports
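These deliverables can be sketched as a simple results log: each test case records a status and any defect mapped to it in the RTM. The test case and bug IDs are hypothetical.

```python
# Execution results for two hypothetical test cases.
results = {
    "TC-01": {"status": "Pass", "defect": None},
    "TC-02": {"status": "Fail", "defect": "BUG-101"},  # failed case mapped to a logged defect
}

executed = len(results)
failed_cases = [tc for tc, r in results.items() if r["status"] == "Fail"]
open_defects = [r["defect"] for r in results.values() if r["defect"] is not None]
print(executed, failed_cases, open_defects)
```

Tracking the entries in `open_defects` to closure is exactly the "track the defects to closure" activity above.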

Test Cycle Closure

The Test Cycle Closure phase marks the completion of test execution. It involves activities such as test completion reporting and the collection of test completion metrics and test results. Testing team members meet to discuss and analyze the testing artifacts and identify strategies to implement in the future, taking lessons from the current test cycle. The idea is to remove process bottlenecks for future test cycles.

Test Cycle Closure Activities

Evaluate cycle completion criteria based on time, test coverage, cost, software quality and critical business objectives.

Prepare test metrics based on the above parameters.

Document the learning out of the project

Prepare Test closure report

Qualitative and quantitative reporting of quality of the work product to the customer.

Test result analysis to find out the defect distribution by type and severity.

Deliverables of Test Cycle Closure

Test Closure report

Test metrics

STLC Phases along with Entry and Exit Criteria

Requirement Analysis

Entry Criteria: Requirements document available (both functional and non-functional); acceptance criteria defined; application architectural document available.

Activities: Analyse business functionality to know the business modules and module-specific functionalities; identify all transactions in the modules; identify all the user profiles; gather user interface, authentication and geographic spread requirements; identify types of tests to be performed; gather details about testing priorities and focus; prepare the Requirement Traceability Matrix (RTM); identify test environment details where testing is supposed to be carried out; automation feasibility analysis (if required).

Exit Criteria: Signed-off RTM; test automation feasibility report signed off by the client.

Deliverables: RTM; automation feasibility report (if applicable).

Test Planning

Entry Criteria: Requirements documents; Requirement Traceability Matrix; test automation feasibility document.

Activities: Analyze the various testing approaches available; finalize the best-suited approach; prepare the test plan/strategy document for various types of testing; test tool selection; test effort estimation; resource planning and determining roles and responsibilities.

Exit Criteria: Approved test plan/strategy document; effort estimation document signed off.

Deliverables: Test plan/strategy document; effort estimation document.

Test Case Development

Entry Criteria: Requirements documents; RTM and test plan; automation analysis report.

Activities: Create test cases, test design and automation scripts (where applicable); review and baseline test cases and scripts; create test data.

Exit Criteria: Reviewed and signed test cases/scripts; reviewed and signed test data.

Deliverables: Test cases/scripts; test data.

Test Environment Setup

Entry Criteria: System design and architecture documents are available; environment set-up plan is available.

Activities: Understand the required architecture and environment set-up; prepare the hardware and software requirement list; finalize connectivity requirements; prepare the environment setup checklist; set up the test environment and test data; perform a smoke test on the build; accept/reject the build depending on the smoke test result.

Exit Criteria: Environment setup is working as per the plan and checklist; test data setup is complete; smoke test is successful.

Deliverables: Environment ready with test data set up; smoke test results.

Test Execution

Entry Criteria: Baselined RTM, test plan and test cases/scripts are available; test environment is ready; test data setup is done; unit/integration test report for the build to be tested is available.

Activities: Execute tests as per plan; document test results and log defects for failed cases; update test plans/test cases, if necessary; map defects to test cases in the RTM; retest the defect fixes; regression testing of the application; track the defects to closure.

Exit Criteria: All planned tests are executed; defects are logged and tracked to closure.

Deliverables: Completed RTM with execution status; test cases updated with results; defect reports.

Test Cycle Closure

Entry Criteria: Testing has been completed; test results are available; defect logs are available.

Activities: Evaluate cycle completion criteria based on time, test coverage, cost, software quality and critical business objectives; prepare test metrics based on these parameters; document the learning out of the project; prepare the test closure report; qualitative and quantitative reporting of the quality of the work product to the customer; test result analysis to find the defect distribution by type and severity.

Exit Criteria: Test closure report signed off by the client.

Deliverables: Test closure report; test metrics.

 

What is a Test Case?

A Test Case is a set of actions executed to verify a particular feature or functionality of your software application. A test case contains test steps, test data, a precondition and a postcondition, developed for a specific test scenario to verify a requirement. The test case includes specific variables or conditions with which a testing engineer can compare expected and actual results to determine whether a software product is functioning as per the requirements of the customer.

Test Scenario Vs Test Case

Test scenarios are rather vague and cover a wide range of possibilities. Testing is all about  being very specific.

For the Test Scenario "Check Login Functionality", many test cases are possible, for example:

Test Case 1: Check results on entering valid User Id & Password

Test Case 2: Check results on entering Invalid User ID & Password

Test Case 3: Check response when a User ID is Empty & Login Button is pressed, and many  more

Each of these is a Test Case.
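The three test cases above can be expressed directly as automated checks. The login() stub and its return messages below are hypothetical; they stand in for the real application under test.

```python
# Hypothetical valid credentials for the stub AUT.
VALID_USER, VALID_PASSWORD = "check", "pass99"

def login(user_id, password):
    """Stub login function standing in for the real AUT."""
    if not user_id:
        return "Please enter a User ID"
    if user_id == VALID_USER and password == VALID_PASSWORD:
        return "Welcome"
    return "Invalid credentials"

# Test Case 1: valid User Id & Password
assert login("check", "pass99") == "Welcome"
# Test Case 2: invalid User ID & Password
assert login("check", "wrong") == "Invalid credentials"
# Test Case 3: empty User ID & Login button pressed
assert login("", "pass99") == "Please enter a User ID"
print("all login test cases passed")
```

Each assertion corresponds to one test case derived from the single scenario.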

The format of Standard Test Cases

Below is an example of standard test cases for a login function.

Test Case ID: TU01
Test Case Description: Check Customer Login with valid Data
Test Steps: 1. Go to site http://demo.check.com; 2. Enter UserId; 3. Enter Password; 4. Click Submit
Test Data: Userid = check, Password = pass99
Expected Results: User should Login into the application
Actual Results: As Expected
Pass/Fail: Pass

Test Case ID: TU02
Test Case Description: Check Customer Login with invalid Data
Test Steps: 1. Go to site http://demo.check.com; 2. Enter UserId; 3. Enter Password; 4. Click Submit
Test Data: Userid = check, Password = glass99
Expected Results: User should not Login into the application
Actual Results: As Expected
Pass/Fail: Pass

 

 
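The columns of this format map naturally onto a structured record. Below is a sketch for TU01, with field names following the format above and values taken from the example.

```python
# Test case TU01 from the example above, as a structured record.
test_case_tu01 = {
    "id": "TU01",
    "description": "Check Customer Login with valid Data",
    "steps": [
        "Go to site http://demo.check.com",
        "Enter UserId",
        "Enter Password",
        "Click Submit",
    ],
    "data": {"UserId": "check", "Password": "pass99"},
    "expected": "User should Login into the application",
    "actual": "As Expected",
    "status": "Pass",
}
print(test_case_tu01["id"], test_case_tu01["status"])
```

Test case management tools store essentially this record, one per case.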

How to Write Test Cases in Manual Testing

Test Case for the scenario: Check Login Functionality

Step 1) A simple test case to explain the scenario would be:

Test Case #: 1
Test Case Description: Check response when valid email and password is entered

Step 2) Add the Test Data. In order to execute the test case, you would need test data. Adding it below:

Test Case #: 1
Test Case Description: Check response when valid email and password is entered
Test Data: Email: check@email.com, Password: lNf9^Oti7^2h

Identifying test data can be time-consuming and may sometimes require creating test data afresh, which is why it needs to be documented.

Step 3) Perform actions. In order to execute a test case, a tester needs to perform a specific set of actions on the AUT. This is documented below:

Test Case #: 1
Test Case Description: Check response when valid email and password is entered
Test Steps: 1) Enter Email Address; 2) Enter Password; 3) Click Sign in
Test Data: Email: check@email.com, Password: lNf9^Oti7^2h

Step 4) Check the behavior of the AUT. The goal of test cases in software testing is to check the behavior of the AUT against an expected result. This is documented below:

Test Case #: 1
Test Case Description: Check response when valid email and password is entered
Test Data: Email: guru99@email.com, Password: lNf9^Oti7^2h
Expected Result: Login should be successful


During test execution, the tester checks expected results against actual results and assigns a pass or fail status.

Test Case #: 1
Test Case Description: Check response when valid email and password is entered
Test Data: Email: guru99@email.com, Password: lNf9^Oti7^2h
Expected Result: Login should be successful
Actual Result: Login was successful
Pass/Fail: Pass

Step 5) Apart from these, your test case may have a field such as a Pre-Condition, which specifies things that must be in place before the test can run. For our test case, a pre-condition would be to have a browser installed with access to the site under test. A test case may also include Post-Conditions, which specify anything that applies after the test case completes. For our test case, a post-condition would be that the time and date of login are stored in the database.
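Putting steps 4 and 5 together as a sketch: a verdict is only meaningful when the pre-condition holds. The "Blocked" status for an unmet pre-condition is an assumption added here, not part of the format above.

```python
def run_test_case(precondition_met, expected, actual):
    # No verdict without the pre-condition: the case is blocked, not failed.
    if not precondition_met:
        return "Blocked"
    # Step 5's rule: compare the actual result against the expected result.
    return "Pass" if actual == expected else "Fail"

print(run_test_case(True, "Login successful", "Login successful"))  # Pass
print(run_test_case(True, "Login successful", "Error 500"))         # Fail
print(run_test_case(False, "Login successful", ""))                 # Blocked
```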

Best Practices for writing good Test Cases

1. Test Cases need to be simple and transparent:

Create test cases that are as simple as possible. They must be clear and concise as the author of  the test case may not execute them.

Use assertive language like "go to the home page", "enter data", "click on this" and so on. This makes the test steps easy to understand and test execution faster.

2. Create Test Case with End User in Mind

The ultimate goal of any software project is to create software that meets customer requirements and is easy to use and operate. A tester must create test cases keeping the end-user perspective in mind.

3. Avoid test case repetition.

Do not repeat test cases. If a test case is needed for executing some other test case, call the test  case by its test case id in the pre-condition column

4. Do not Assume

Do not assume functionality and features of your software application while preparing test case.  Stick to the Specification Documents.

5. Ensure 100% Coverage

Make sure you write test cases to check all software requirements mentioned in the specification document. Use a Traceability Matrix to ensure no function or condition is left untested.

6. Test Cases must be identifiable.

Name test case IDs such that they are easily identified while tracking defects or tracing a software requirement at a later stage.

7. Implement Testing Techniques

It’s not possible to check every possible condition in your software application. Software  Testing techniques help you select a few test cases with the maximum possibility of finding a  defect.

Boundary Value Analysis (BVA): As the name suggests it’s the technique that defines  the testing of boundaries for a specified range of values.

Equivalence Partition (EP): This technique partitions the range into equal  parts/groups that tend to have the same behavior.

State Transition Technique: This method is used when software behavior changes  from one state to another following particular action.

Error Guessing Technique: This is guessing/anticipating errors that may arise while doing manual testing. It is not a formal method and takes advantage of a tester’s experience with the application.
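The first two techniques above can be made concrete with a hypothetical input field that accepts ages 18 to 60 inclusive: BVA picks values at and around each boundary, while EP picks one representative per partition (below, inside and above the range).

```python
# Hypothetical valid range for an age input field.
LOW, HIGH = 18, 60

def is_valid_age(age):
    return LOW <= age <= HIGH

# Boundary Value Analysis: test at and just around each boundary.
bva_inputs = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

# Equivalence Partitioning: one representative value per partition.
ep_inputs = [10, 35, 70]   # below range / inside range / above range

print([is_valid_age(a) for a in bva_inputs])  # [False, True, True, True, True, False]
print([is_valid_age(a) for a in ep_inputs])   # [False, True, False]
```

Six BVA values plus three EP representatives cover the field far more economically than testing every possible age.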

8. Self-cleaning

The test case you create must return the Test Environment to the pre-test state and should not  render the test environment unusable. This is especially true for configuration testing.

9. Repeatable and self-standing

The test case should generate the same results every time, no matter who tests it.

10. Peer Review

After creating test cases, get them reviewed by your colleagues. Your peers can uncover defects  in your test case design, which you may easily miss.

Test Case Management Tools

Test management tools are automation tools that help to manage and maintain test cases. The main features of a test case management tool are:

1. For documenting Test Cases: With tools, you can expedite Test Case creation with use of  templates

2. Execute the Test Case and Record the results: Test Case can be executed through the tools  and results obtained can be easily recorded.

3. Automate the Defect Tracking: Failed tests are automatically linked to the bug tracker, which in turn can be assigned to developers and tracked via email notifications.

4. Traceability: Requirements, test cases and test case executions are all interlinked through the tools, and each case can be traced to the others to check test coverage.

5. Protecting Test Cases: Test cases should be reusable and protected from being lost or corrupted due to poor version control. Test case management tools offer features like:

Naming and numbering conventions

Versioning

Read-only storage

Controlled access

Off-site backup

Test cases types

Functionality test cases

Performance test cases.

Unit test cases.

User interface test cases.

Security test cases.

Integration test cases.

Database test cases.

Usability test cases.

Irrespective of the test case documentation method chosen, any good test case template must  have the following fields


 

Test Case Field

Description

Test case ID:

Each test case should be represented by a unique ID. To indicate test types follow some convention like “TC_UI_1” indicating “User Interface Test Case#1.”

Test Priority:

It is useful while executing the test.

Low

Medium

High

Name of the Module:

Determine the name of the main module or sub-module being tested

Test Designed by:

Tester’s Name

 


 

Date of test designed:

Date when test was designed

Test Executed by:

Name of the tester who executed the test

Date of the Test Execution:

Date when test needs to be executed

Name or Test Title:

Title of the test case

Description/Summary of Test:

Determine the summary or test purpose in brief

Pre-condition:

Any requirement that must be satisfied before execution of this test case. List all pre-conditions needed to execute the test case.

Dependencies:

Determine any dependencies on test requirements or other test cases

Test Steps:

Mention all the test steps in detail, in the order in which they are to be executed. While writing test steps, ensure that you provide as much detail as you can.

Test Data:

Use of test data as an input for the test case. Deliver different data sets with precise values to be used as an input

Expected Results:

Mention the expected result including error or message that should appear on screen

Post-Condition:

What would be the state of the system after running the test case?

Actual Result:

After test execution, actual test result should be filled

Status (Fail/Pass):

Mark this field as Failed if the actual result is not as per the expected result

Notes:

Any special conditions not covered by the fields above

 

 

 

 

 

 

 

 

The following fields are optional, depending on the project requirements:

Link / Defect ID: Include the link for the defect, or note the defect number if the test status is Fail.

Keywords / Test Type: This field can be used to classify tests by test type, e.g. usability, functional, business rules, etc.

Requirements: The requirements for which this test case is being written.

References / Attachments: Useful for complicated test scenarios; give the actual path of the document or diagram.

Automation (Yes/No): To track automation status when test cases are automated.

Custom Fields: Fields particular to the project being tested, per client/project requirements.


Types of Software Testing

1. Acceptance Testing: Formal testing conducted to determine whether or not a system  satisfies its acceptance criteria and to enable the customer to determine whether or not to accept  the system. It is usually performed by the customer.

2. Accessibility Testing: Type of testing which determines the usability of a product to the  people having disabilities (deaf, blind, mentally disabled etc). The evaluation process is  conducted by persons having disabilities.

3. Active Testing: Type of testing that consists of introducing test data and analyzing the execution results. It is usually conducted by the testing team.

4. Agile Testing: Software testing practice that follows the principles of the agile manifesto,  emphasizing testing from the perspective of customers who will utilize the system. It is usually  performed by the QA teams.

5. Age Testing: Type of testing which evaluates a system’s ability to perform in the future.  The evaluation process is conducted by testing teams.

6. Ad-hoc Testing: Testing performed without planning and documentation – the tester tries  to ‘break’ the system by randomly trying the system’s functionality. It is performed by the  testing team.

7. Alpha Testing: Type of testing a software product or system conducted at the developer’s  site. Usually it is performed by the end users.

8. Assertion Testing: Type of testing that consists of verifying whether the conditions conform to the product requirements. It is performed by the testing team.

9. API Testing: Testing technique similar to Unit Testing in that it targets the code level. Api  Testing differs from Unit Testing in that it is typically a QA task and not a developer task. Read  More on API Testing

10. All-pairs Testing: Combinatorial testing method that tests all possible discrete combinations of input parameters. It is performed by the testing teams.

Automated Testing: Testing technique that uses automation testing tools to control the environment set-up, test execution and results reporting. It is performed by a computer and is used inside the testing teams.

11. Basis Path Testing: A testing mechanism which derives a logical complexity measure  of a procedural design and use this as a guide for defining a basic set of execution paths. It is  used by testing teams when defining test cases. 

12. Backward Compatibility Testing: Testing method which verifies the behavior of the developed software with older versions of the test environment. It is performed by the testing team.

13. Beta Testing: Final testing before releasing the application for commercial purposes. It is typically done by end-users or others.

14. Benchmark Testing: Testing technique that uses representative sets of programs and  data designed to evaluate the performance of computer hardware and software in a given  configuration. It is performed by testing teams. 

15. Big Bang Integration Testing: Testing technique which integrates individual program  modules only when everything is ready. It is performed by the testing teams. 

16. Binary Portability Testing: Technique that tests an executable application for  portability across system platforms and environments, usually for conformation to an ABI  specification. It is performed by the testing teams.

17. Boundary Value Testing: Software testing technique in which tests are designed to include representatives of boundary values. It is performed by the QA testing teams.

Bottom-Up Integration Testing: In bottom-up integration testing, modules at the lowest level are developed first, and other modules which go towards the ‘main’ program are integrated and tested one at a time. It is usually performed by the testing teams.

18. Branch Testing: Testing technique in which all branches in the program source code  are tested at least once. This is done by the developer.

19. Breadth Testing: A test suite that exercises the full functionality of a product but does  not test features in detail. It is performed by testing teams.

20. Black box Testing: A method of software testing that verifies the functionality of an application without having specific knowledge of the application’s code/internal structure. Tests are based on requirements and functionality. It is performed by QA teams.

Code-driven Testing: Testing technique that uses testing frameworks (such as xUnit) that allow the execution of unit tests to determine whether various sections of the code are acting as expected under various circumstances. It is performed by the development teams.

21. Compatibility Testing: Testing technique that validates how well software performs in a particular hardware/software/operating system/network environment. It is performed by the testing teams.

22. Comparison Testing: Testing technique which compares the product strengths and  weaknesses with previous versions or other similar products. Can be performed by tester,  developers, product managers or product owners. 

23. Component Testing: Testing technique similar to unit testing but with a higher level  of integration – testing is done in the context of the application instead of just directly testing  a specific method. Can be performed by testing or development teams.

24. Configuration Testing: Testing technique which determines minimal and optimal  configuration of hardware and software, and the effect of adding or modifying resources such  as memory, disk drives and CPU. Usually it is performed by the Performance Testing  engineers.

25. Condition Coverage Testing: Type of software testing where each condition is  executed by making it true and false, in each of the ways at least once. It is typically made by  the Automation Testing teams.

26. Compliance Testing: Type of testing which checks whether the system was developed  in accordance with standards, procedures and guidelines. It is usually performed by external  companies which offer “Certified OGC Compliant” brand.

27. Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. It is usually done by performance engineers.

28. Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. It is usually performed by testing teams.

29. Context Driven Testing: An agile testing technique that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization at a specific moment. It is usually performed by agile testing teams.

30. Conversion Testing: Testing of programs or procedures used to convert data from  existing systems for use in replacement systems. It is usually performed by the QA teams. 

31. Decision Coverage Testing: Type of software testing where each condition/decision  is executed by setting it on true/false. It is typically made by the automation testing teams. 

32. Destructive Testing: Type of testing in which the tests are carried out to the specimen’s  failure, in order to understand a specimen’s structural performance or material behavior under  different loads. It is usually performed by QA teams.

33. Dependency Testing: Testing type which examines an application’s requirements for  pre-existing software, initial states and configuration in order to maintain proper functionality.  It is usually performed by testing teams.

34. Dynamic Testing: Term used in software engineering to describe the testing of the dynamic behavior of code. It is typically performed by testing teams.

Domain Testing: White box testing technique which checks that the program accepts only valid input. It is usually done by software development teams and occasionally by automation testing teams.

35. Error-Handling Testing: Software testing type which determines the ability of the system to properly process erroneous transactions. It is usually performed by the testing teams.

36. End-to-end Testing: Similar to system testing, involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. It is performed by QA teams.

37. Endurance Testing: Type of testing which checks for memory leaks or other problems that may occur with prolonged execution. It is usually performed by performance engineers.

38. Exploratory Testing: Black box testing technique performed without planning and documentation. It is usually performed by manual testers.

39. Equivalence Partitioning Testing: Software testing technique that divides the input data of a software unit into partitions of data from which test cases can be derived. It is usually performed by the QA teams.
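A minimal sketch, assuming a hypothetical age field that accepts values 18 to 60: the input domain splits into three partitions, and one representative value per partition stands in for the whole class:

```python
# Equivalence partitioning sketch: one representative per partition.
# The is_eligible rule (18-60 accepted) is an illustrative assumption.
def is_eligible(age):
    return 18 <= age <= 60

partitions = {
    "below_range": 10,   # invalid partition: age < 18
    "in_range": 35,      # valid partition: 18 <= age <= 60
    "above_range": 70,   # invalid partition: age > 60
}
results = {name: is_eligible(value) for name, value in partitions.items()}
assert results == {"below_range": False, "in_range": True, "above_range": False}
```

Three test cases cover the whole input domain because every other value in a partition is assumed to behave the same way as its representative.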

40. Fault injection Testing: Element of a comprehensive test strategy that enables the  tester to concentrate on the manner in which the application under test is able to handle  exceptions. It is performed by QA teams.

41. Formal verification Testing: The act of proving or disproving the correctness of  intended algorithms underlying a system with respect to a certain formal specification or  property, using formal methods of mathematics. It is usually performed by QA teams.

42. Functional Testing: Type of black box testing that bases its test cases on the specifications of the software component under test. It is performed by testing teams.

43. Fuzz Testing: Software testing technique that provides invalid, unexpected, or random data to the inputs of a program – a special area of mutation testing. Fuzz testing is performed by testing teams.
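The sketch below is a deliberately naive fuzz loop, with `parse_quantity` as a hypothetical function under test: random strings are thrown at it, and the only check is that it rejects bad input gracefully rather than crashing:

```python
import random
import string

# Naive fuzz sketch: the function may reject input, but must never raise
# an unexpected exception. parse_quantity is illustrative, not real API.
def parse_quantity(text):
    try:
        value = int(text)
    except ValueError:
        return None
    return value if value >= 0 else None

random.seed(42)  # deterministic so the example is repeatable
for _ in range(1000):
    fuzz_input = "".join(random.choices(string.printable, k=random.randint(0, 20)))
    result = parse_quantity(fuzz_input)  # must not crash
    assert result is None or isinstance(result, int)
```

Real fuzzers are far more sophisticated (coverage-guided mutation of seed inputs), but the principle is the same: feed the program data it was never designed for.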

44. Gorilla Testing: Software testing technique which focuses on heavily testing of one  particular module. It is performed by quality assurance teams, usually when running full  testing.

45. Gray Box Testing: A combination of Black Box and White Box testing methodologies:  testing a piece of software against its specification but using some knowledge of its internal  workings. It can be performed by either development or testing teams.

46. Glass box Testing: Similar to white box testing, based on knowledge of the internal  logic of an application’s code. It is performed by development teams.

47. GUI software Testing: The process of testing a product that uses a graphical user  interface, to ensure it meets its written specifications. This is normally done by the testing  teams. 

48. Globalization Testing: Testing method that checks proper functionality of the product  with any of the culture/locale settings using every type of international input possible. It is  performed by the testing team. 

49. Hybrid Integration Testing: Testing technique which combines top-down and bottom-up integration techniques in order to leverage the benefits of both kinds of testing. It is usually performed by the testing teams.

50. Integration Testing: The phase in software testing in which individual software modules are combined and tested as a group. It is usually conducted by testing teams.

51. Interface Testing: Testing conducted to evaluate whether systems or components pass data and control correctly to one another. It is usually performed by both testing and development teams.

52. Install/uninstall Testing: Quality assurance work that focuses on what customers will  need to do to install and set up the new software successfully. It may involve full, partial or  upgrades install/uninstall processes and is typically done by the software testing engineer in  conjunction with the configuration manager.

53. Internationalisation Testing: The process which ensures that product’s functionality  is not broken and all the messages are properly externalized when used in different languages  and locale. It is usually performed by the testing teams.

54. Inter-Systems Testing: Testing technique that focuses on testing the application to  ensure that interconnection between application functions correctly. It is usually done by the  testing teams.

55. Keyword-driven Testing: Also known as table-driven testing or action-word testing,  is a software testing methodology for automated testing that separates the test creation process  into two distinct stages: a Planning Stage and an Implementation Stage. It can be used by either  manual or automation testing teams. 
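A minimal sketch of the two-stage split, assuming a toy `Calculator` and made-up keywords (not from any framework): the planning stage produces a data table of keyword/argument rows, and the implementation stage is a small driver that interprets it:

```python
# Keyword-driven (table-driven) testing sketch.
class Calculator:
    def __init__(self):
        self.value = 0
    def add(self, n):
        self.value += n
    def multiply(self, n):
        self.value *= n

def run_table(table):
    """Implementation stage: map keywords to actions and replay the table."""
    calc = Calculator()
    keywords = {"ADD": calc.add, "MULTIPLY": calc.multiply}
    for keyword, arg in table:
        keywords[keyword](arg)
    return calc.value

# Planning stage: non-programmers can author this table.
test_table = [("ADD", 2), ("ADD", 3), ("MULTIPLY", 4)]
assert run_table(test_table) == 20
```

The separation lets test designers write tables without touching the driver code, which is why the technique suits both manual and automation teams.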

56. Load Testing: Testing technique that puts demand on a system or device and measures  its response. It is usually conducted by the performance engineers. 

57. Localization Testing: Part of software testing process focused on adapting a globalized  application to a particular culture/locale. It is normally done by the testing teams. 

58. Loop Testing: A white box testing technique that exercises program loops. It is  performed by the development teams. 
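As a small illustration (the `total` function is hypothetical), loop testing classically exercises a loop at zero, one, and many iterations:

```python
# Loop-testing sketch: exercise the loop's boundary iteration counts.
def total(prices):
    result = 0
    for p in prices:   # the loop under test
        result += p
    return result

assert total([]) == 0             # zero iterations
assert total([7]) == 7            # exactly one iteration
assert total([1, 2, 3, 4]) == 10  # many iterations
```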

59. Manual Scripted Testing: Testing method in which the test cases are designed and  reviewed by the team before executing it. It is done by Manual Testing teams.

60. Manual-Support Testing: Testing technique that covers all the functions people perform while preparing data for, and using data from, an automated system. It is conducted by testing teams.

61. Model-Based Testing: The application of Model based design for designing and  executing the necessary artifacts to perform software testing. It is usually performed by testing  teams. 

62. Mutation Testing: Method of software testing which involves modifying programs’  source code or byte code in small ways in order to test sections of the code that are seldom or  never accessed during normal tests execution. It is normally conducted by testers.  
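A toy sketch of the idea, with a hand-written mutant rather than an automated mutation tool: one operator is flipped, and a good test suite should "kill" the mutant (fail on it while passing on the original):

```python
# Mutation-testing sketch. max_of and its mutant are illustrative.
def max_of(a, b):          # original code
    return a if a > b else b

def max_of_mutant(a, b):   # mutant: ">" flipped to "<"
    return a if a < b else b

def suite(fn):
    """A tiny test suite run against either version."""
    return fn(2, 1) == 2 and fn(1, 2) == 2

assert suite(max_of) is True          # original passes the suite
assert suite(max_of_mutant) is False  # mutant is killed
```

In practice, mutation tools generate many such mutants automatically; surviving mutants point at code the tests never meaningfully exercise.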

63. Modularity-driven Testing: Software testing technique which requires the creation of  small, independent scripts that represent modules, sections, and functions of the application  under test. It is usually performed by the testing team.

64. Non-functional Testing: Testing technique which focuses on testing of a software  application for its non-functional requirements. Can be conducted by the performance  engineers or by manual testing teams. 

65. Negative Testing: Also known as “test to fail” – testing method where the tests aim to show that a component or system does not work. It is performed by manual or automation testers.
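A minimal sketch, using a hypothetical `withdraw` function: every case below expects a rejection, not a result, which is the defining shape of a negative test:

```python
# Negative-testing sketch: each case should be rejected by the component.
def withdraw(balance, amount):
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# "Test to fail": negative and out-of-range amounts must all raise.
for balance, amount in [(100, -5), (100, 0), (100, 500)]:
    try:
        withdraw(balance, amount)
        raised = False
    except ValueError:
        raised = True
    assert raised, f"expected rejection for amount={amount}"
```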

66. Operational Testing: Testing technique conducted to evaluate a system or component  in its operational environment. Usually it is performed by testing teams.  

67. Orthogonal array Testing: Systematic, statistical way of testing which can be applied  in user interface testing, system testing, Regression Testing, configuration testing and  Performance Testing. It is performed by the testing team. 

68. Pair Testing: Software development technique in which two team members work  together at one keyboard to test the software application. One does the testing and the other  analyzes or reviews the testing. This can be done between one Tester and Developer or  Business Analyst or between two testers with both participants taking turns at driving the  keyboard.

69. Passive Testing: Testing technique consisting in monitoring the results of a running  system without introducing any special test data. It is performed by the testing team. 

70. Parallel Testing: Testing technique whose purpose is to ensure that a new application which has replaced its older version has been installed and is running correctly. It is conducted by the testing team.

71. Path Testing: Typical white box testing which has the goal to satisfy coverage criteria  for each logical path through the program. It is usually performed by the development team.  

72. Penetration Testing: Testing method which evaluates the security of a computer  system or network by simulating an attack from a malicious source. Usually they are conducted  by specialized penetration testing companies. 

73. Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. It is usually conducted by the performance engineer.

74. Qualification Testing: Testing against the specifications of the previous release,  usually conducted by the developer for the consumer, to demonstrate that the software meets  its specified requirements.

75. Ramp Testing: Type of testing consisting in raising an input signal continuously until  the system breaks down. It may be conducted by the testing team or the performance engineer. 

76. Regression Testing: Type of software testing that seeks to uncover software errors  after changes to the program (e.g. bug fixes or new functionality) have been made, by retesting  the program. It is performed by the testing teams.

77. Recovery Testing: Testing technique which evaluates how well a system recovers from  crashes, hardware failures, or other catastrophic problems. It is performed by the testing  teams. 

78. Requirements Testing: Testing technique which validates that the requirements are  correct, complete, unambiguous, and logically consistent and allows designing a necessary and  sufficient set of test cases from those requirements. It is performed by QA teams. 

79. Security Testing: A process to determine that an information system protects data and  maintains functionality as intended. It can be performed by testing teams or by specialized  security-testing companies. 

80. Sanity Testing: Testing technique which determines if a new software version is  performing well enough to accept it for a major testing effort. It is performed by the testing  teams. 

81. Scenario Testing: Testing activity that uses scenarios based on a hypothetical story to  help a person think through a complex problem or system for a testing environment. It is  performed by the testing teams. 

82. Scalability Testing: Part of the battery of non-functional tests which tests a software  application for measuring its capability to scale up – be it the user load supported, the number  of transactions, the data volume etc. It is conducted by the performance engineer.  

83. Statement Testing: White box testing which satisfies the criterion that each statement  in a program is executed at least once during program testing. It is usually performed by the  development team.

84. Static Testing: A form of software testing where the software isn’t actually executed; it checks mainly the sanity of the code, algorithm, or document. It is typically performed by the developer who wrote the code.

85. Stability Testing: Testing technique which attempts to determine if an application will  crash. It is usually conducted by the performance engineer. 

86. Smoke Testing: Testing technique which examines all the basic components of a software system to ensure that they work properly. Typically, smoke testing is conducted by the testing team, immediately after a software build is made.

87. Storage Testing: Testing type that verifies the program under test stores data files in  the correct directories and that it reserves sufficient space to prevent unexpected termination  resulting from lack of space. It is usually performed by the testing team.  

88. Stress Testing: Testing technique which evaluates a system or component at or beyond the limits of its specified requirements. It is usually conducted by the performance engineer.

89. Structural Testing: White box testing technique which takes into account the internal  structure of a system or component and ensures that each program statement performs its  intended function. It is usually performed by the software developers.

90. System Testing: The process of testing an integrated hardware and software system to  verify that the system meets its specified requirements. It is conducted by the testing teams in  both development and target environment. 

91. System integration Testing: Testing process that exercises a software system’s coexistence with others. It is usually performed by the testing teams.

Top Down Integration Testing: Testing technique that involves starting at the top of a system hierarchy at the user interface and using stubs to test from the top down until the entire system has been implemented. It is conducted by the testing teams.

98. Thread Testing: A variation of top-down testing technique where the progressive  integration of components follows the implementation of subsets of the requirements. It is  usually performed by the testing teams.

99. Upgrade Testing: Testing technique that verifies if assets created with older versions  can be used properly and that user’s learning is not challenged. It is performed by the testing  teams.

100. Unit Testing: Software verification and validation method in which a programmer tests if individual units of source code are fit for use. It is usually conducted by the development team.

User Interface Testing: Type of testing which is performed to check how user-friendly the application is. It is performed by testing teams.
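A minimal unit-test sketch using Python's built-in `unittest` module; the `slugify` function under test is an illustrative example, not from the text:

```python
import unittest

def slugify(title):
    """Unit under test: lowercase a title and join words with hyphens."""
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    def test_lowercases_and_joins(self):
        self.assertEqual(slugify("Hello World"), "hello-world")
    def test_single_word(self):
        self.assertEqual(slugify("Testing"), "testing")

# Run the suite programmatically and check it passed.
suite_obj = unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTests)
result = unittest.TextTestRunner(verbosity=0).run(suite_obj)
assert result.wasSuccessful()
```

Each test exercises the unit in isolation, with no database, network, or UI involved; that isolation is what distinguishes unit testing from integration or system testing.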

What is Feature Testing?

A software feature can be defined as a change made in the system to add new functionality or modify existing functionality. Each feature is designed to be useful, intuitive and effective.

In practice, a new test set is created for the feature in the release cycle that introduces it. The most important and most heavily used new features ought to be tested thoroughly in each build of that release, and regression testing should be done on the areas they affect.

How to Effectively Test a Feature?

Understanding the Feature: One should read the requirement or specification corresponding to that feature thoroughly.

Build Test Scenarios: Testers should develop test cases exclusively for the feature, so that coverage and traceability can be maintained.

Prepare Positive and Negative Data Sets: Testers should have test data covering all possible positive, negative and boundary cases before testing starts.

How it is Implemented: Testers should know how the feature has been implemented in the application layer and any related changes to the back end. This gives clarity on the impacted areas.

Deploy the Build Early: Testers should start testing the feature early in the cycle, report defects, and repeat the same process throughout the release builds.
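The data-set preparation step above can be sketched concretely. Assuming a hypothetical "discount code" feature that accepts codes of 6 to 10 alphanumeric characters, one data set covers the positive, negative, and boundary cases:

```python
# Positive/negative/boundary data set for a hypothetical discount-code
# feature (6-10 alphanumeric characters accepted). Rule is an assumption.
def is_valid_code(code):
    return code.isalnum() and 6 <= len(code) <= 10

test_data = {
    # positive cases
    "ABC123": True, "PROMO2024X": True,
    # boundary cases: just inside and just outside the length limits
    "AB12": False, "ABCDE1": True, "ABCDEFGH12": True, "ABCDEFGH123": False,
    # negative cases
    "": False, "ABC 123": False, "ABC-12": False,
}
for code, expected in test_data.items():
    assert is_valid_code(code) == expected, code
```

Preparing such a table before testing starts makes the coverage of the feature visible at a glance and reusable across the release builds.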