CH01.L04. Fundamental Test Process

MaharaTech - ITI MOOCA
16 Nov 2017, 09:47

Summary

TL;DR: This script outlines the five core activities of software testing: Test planning and control, test analysis and design, test implementation and execution, evaluating exit criteria and reporting, and test closure activities. It emphasizes the iterative nature of testing, detailing how objectives are set, test conditions and cases are designed, and discrepancies are identified and resolved. The importance of monitoring progress, adjusting plans, and confirming fixes through regression testing is highlighted, ensuring a thorough and controlled testing process.

Takeaways

  • πŸ“ **Testing Process Overview**: The script outlines a five-step process for software testing, emphasizing the importance of each step and their interrelated nature.
  • 🎯 **Test Planning and Control**: The first step involves setting objectives and monitoring progress, with a focus on adjusting the plan as needed to meet testing goals.
  • πŸ” **Test Analysis and Design**: This step transforms test objectives into test conditions and cases, including the creation of a traceability matrix for tracking.
  • πŸ› οΈ **Test Implementation and Execution**: The longest phase, where test cases are developed, test data is created, and the environment is set up for testing.
  • πŸ”„ **Discrepancy Management**: Discrepancies found during testing are reported and recorded, with a follow-up to confirm fixes and conduct regression testing.
  • πŸ“ˆ **Evaluating Exit Criteria**: Testing stops when predefined exit criteria are met, which can be based on various factors like test case completion or bug detection rate.
  • πŸ“‹ **Reporting**: A summary report is prepared for stakeholders, detailing the testing process, outcomes, and any necessary follow-up actions.
  • πŸ”š **Test Closure Activities**: The final phase includes reviewing deliverables, documenting system acceptance, and archiving test materials for future reference.
  • πŸ”„ **Continuous Monitoring**: Test control is an ongoing activity that involves comparing actual progress with the plan and taking corrective actions when deviations occur.
  • πŸ”‘ **Traceability Matrix**: A key tool in the testing process, used to link test conditions to test cases, ensuring comprehensive coverage and tracking of testing stages.
  • πŸ›‘ **Regression Testing**: After fixing discrepancies, it's crucial to conduct regression testing to ensure that fixes haven't introduced new issues in other parts of the program.

Q & A

  • What are the five main activities included in software testing according to the transcript?

    -The five main activities in software testing are Test planning and control, Test analysis and design, Test implementation and execution, Evaluating exit criteria and reporting, and Test closure activities.

  • What is the primary objective of the Test planning and control activity?

    -The primary objective of Test planning and control is to set the objectives of the testing process and to monitor all test activities to ensure they align with the plan, making adjustments as necessary.

  • How does test control differ from the other activities in the testing process?

    -Test control is an ongoing activity that involves monitoring the actual progress against the plan, reporting deviations, and taking corrective actions to achieve the project's objectives.

  • What are the two main tasks involved in the Test analysis and design activity?

    -The two main tasks in Test analysis and design are collecting and reviewing test bases for writing test cases and evaluating or finding test conditions from the test bases that can be used for testing.

  • What is a 'traceability matrix' and how is it used in the testing process?

    -A traceability matrix is an Excel sheet created during the Test analysis and design stage that lists all test conditions and links them to the test cases, allowing for bi-directional traceability to understand which testing stages have been completed and which requirements are yet to be tested.

  • What is the purpose of creating test cases during the Test implementation and execution activity?

    -The purpose of creating test cases is to provide a structured approach to testing, ensuring that all aspects of the software are tested systematically and that the testing process is thorough and repeatable.

  • What is meant by 'test harnesses' in the context of the testing process?

    -Test harnesses refer to the stubs and drivers prepared during the implementation stage, which are used to support the execution of test cases in a controlled environment.

  • Why is regression testing important after fixing discrepancies found during test execution?

    -Regression testing is important to ensure that fixing one bug has not introduced new bugs or affected other parts of the program, maintaining the overall stability and correctness of the software.

  • What are 'exit criteria' in the context of testing, and why are they significant?

    -Exit criteria are the targets or conditions that, when met, indicate that testing should stop. They are significant because they help determine when enough testing has been done or when it's time to halt testing due to reaching a predefined limit such as cost or time constraints.

  • What is the purpose of the 'Evaluating exit criteria and reporting' activity in the testing process?

    -The purpose of this activity is to assess whether the predetermined exit criteria have been met and to write a test summary report for all stakeholders, informing them of the testing stage reached and any necessary next steps.

  • What tasks are involved in the Test closure activity of the testing process?

    -Test closure activities include checking the delivery of planned deliverables, closing any open modifications, documenting system acceptance, finalizing and archiving the testware, and handing over the testware to the maintenance organization.

Outlines

00:00

πŸ“ Test Planning and Control Overview

The first paragraph introduces the five main activities of software testing, emphasizing the importance of remembering the major tasks of each activity for certification purposes. The initial activity, Test Planning and Control, is broken down into setting objectives for the testing process, such as defect detection or informing decision-makers about software readiness, and test control, which involves ongoing monitoring of test activities against the plan. Deviations are reported, and corrective actions are taken to ensure the project meets its objectives. Feedback from monitoring is crucial for modifying the plan and implementing necessary changes.

05:00

πŸ” Test Analysis, Design, and Implementation

The second paragraph delves into the Test Analysis and Design activity, where test objectives are translated into test conditions and cases. It involves collecting test bases, evaluating existing material for testing, and designing test cases with prioritization. Test data preparation and environment setup are also part of this stage, along with the creation of a traceability matrix for tracking test progress and requirements coverage. The Implementation phase follows, where test cases are developed into procedures and suites, and test harnesses are prepared if needed. The environment setup is verified, and the traceability is confirmed. The Execution phase involves running test procedures, comparing expected results with actual outcomes, and recording discrepancies. It also includes confirmation testing to ensure fixes are effective and regression testing to verify that fixes do not introduce new issues.

Keywords

💡 Test Execution

Test Execution refers to the process of running test cases against the software to verify its behavior and identify defects. It is a fundamental part of the testing process, as it directly contributes to the discovery of bugs. In the script, test execution is mentioned as a clear part of testing where testers compare expected results with actual outcomes to identify discrepancies.
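To make the expected-versus-actual comparison concrete, here is a minimal Python sketch of an execution loop. The `add_vat` function, the 14% VAT specification, and the case ids are all invented for illustration; the deliberate bug makes one expected result (ER) differ from the actual result (AR):

```python
# Hypothetical unit under test: the (assumed) spec says VAT is 14%,
# but the implementation applies 15% -- a deliberate bug for the example.
def add_vat(price):
    return round(price * 1.15, 2)

# Each test case pairs input data with an expected result (ER).
test_cases = [
    {"id": "CASE-001", "input": 100.0, "expected": 114.0},  # 14% VAT per spec
    {"id": "CASE-002", "input": 0.0,   "expected": 0.0},
]

def execute(cases):
    """Run each case and record every ER != AR mismatch as a discrepancy."""
    discrepancies = []
    for case in cases:
        actual = add_vat(case["input"])      # actual result (AR)
        if actual != case["expected"]:       # ER != AR -> discrepancy
            discrepancies.append((case["id"], case["expected"], actual))
    return discrepancies
```

Running `execute(test_cases)` flags CASE-001, which would then be reported in the test log and re-run after the fix.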

💡 Test Planning and Control

Test Planning and Control is the activity that sets the objectives and monitors the progress of testing to ensure it meets the project goals. It is crucial for aligning testing efforts with the overall project strategy. The script describes setting objectives such as finding defects or informing decision-makers about software readiness and emphasizes ongoing monitoring and corrective actions based on deviations from the plan.

💡 Test Analysis and Design

Test Analysis and Design involves turning test objectives into tangible test conditions and test cases. It is a preparatory phase where testers review requirements and other documents to create a foundation for test cases. The script mentions collecting test bases and evaluating existing conditions for testing, leading to the design of test cases and the creation of a traceability matrix.

💡 Test Implementation and Execution

Test Implementation and Execution is the phase where the designed test cases are put into action. It includes developing test procedures, creating test suites, and preparing test data. The script describes this as a two-step process: first, implementing the test cases, and second, executing them to compare expected results with actual results, which may lead to identifying discrepancies.

💡 Evaluating Exit Criteria

Evaluating Exit Criteria is the activity of assessing whether the predetermined targets for stopping testing have been met. These criteria can be based on various factors such as the percentage of test cases executed or the number of bugs found. The script explains that testers should check if these criteria have been achieved by reviewing test logs and deciding if further testing is necessary.
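An exit criterion like the lesson's "stop after 80% of test cases are executed" can be checked mechanically. This is a sketch only; the function name and default threshold are illustrative, not part of any standard:

```python
def exit_criteria_met(executed, total, min_execution_pct=80):
    """True once the agreed share of planned test cases has been executed.

    The 80% default mirrors the example in the lesson; real projects define
    their own criteria (coverage, defect rate, cost, or time limits) in the
    test plan, and they differ from one project to another.
    """
    if total == 0:
        return False
    return 100 * executed / total >= min_execution_pct
```

With 80 of 100 planned cases executed the criterion is met; with 79 it is not, so testing continues (or the criteria themselves are revisited).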

💡 Reporting

Reporting in the context of software testing refers to documenting the results of testing activities for stakeholders. It includes summarizing test outcomes and providing insights into the software's quality. The script mentions writing a test summary report to communicate the testing stage reached and the results of the testing process.

💡 Test Closure Activities

Test Closure Activities mark the final stage of the testing process, where all testing artifacts are reviewed, documented, and handed over. It includes verifying deliverables, finalizing reports, and archiving test materials. The script describes this as an essential step to ensure that all testing efforts are concluded and lessons learned are documented for future projects.

💡 Discrepancies

Discrepancies are the differences found between the expected results and the actual results during test execution. They indicate potential defects or issues in the software. The script discusses the importance of reporting and recording discrepancies to ensure they are addressed and verified as fixed through confirmation testing.
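A test log entry can be sketched as a simple record. The field names below are assumptions modeled on the fields the lesson lists (tester name, execution date, actual result, pass/fail status, tested build/server):

```python
import datetime

def log_result(test_log, case_id, expected, actual, tester, build):
    """Append one execution record to the test log.

    Pass/fail is derived from comparing the expected result (ER)
    with the actual result (AR); a mismatch is a discrepancy.
    """
    test_log.append({
        "case_id":  case_id,
        "tester":   tester,
        "date":     datetime.date.today().isoformat(),
        "expected": expected,
        "actual":   actual,
        "status":   "pass" if expected == actual else "fail",
        "build":    build,
    })
    return test_log
```

Logging every execution, not only the failures, is what lets the team later evaluate exit criteria from the test log.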

💡 Regression Testing

Regression Testing is the process of re-testing the software to ensure that changes made to fix defects have not introduced new issues in other parts of the program. It is a safeguard to maintain the overall quality of the software. The script differentiates regression testing from confirmation testing, emphasizing its role in verifying the impact of bug fixes on the entire system.
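The difference between confirmation testing and regression testing can be sketched in a few lines. The suite contents are invented, and a real project would use a test runner rather than bare lambdas:

```python
def confirmation_test(suite, fixed_case_id):
    """Re-run only the case that failed before the fix: was the bug fixed?"""
    return suite[fixed_case_id]()

def regression_test(suite):
    """Re-run the whole suite and return the ids of any cases that now fail,
    i.e. places where the fix may have introduced a new problem."""
    return [case_id for case_id, check in suite.items() if not check()]

# Toy suite: each case is a callable returning True (pass) or False (fail).
suite = {
    "CASE-001": lambda: 2 + 2 == 4,          # the case that was failing, now fixed
    "CASE-002": lambda: "a".upper() == "A",  # other modules, re-checked by regression
    "CASE-003": lambda: max([1, 3, 2]) == 3,
}
```

`confirmation_test(suite, "CASE-001")` confirms the fix, and an empty list from `regression_test(suite)` shows the fix did not break the other cases.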

💡 Traceability Matrix

A Traceability Matrix is a tool used in test design to link test conditions to test cases, ensuring comprehensive coverage of requirements. It helps in tracking which test stages have been completed and which requirements are yet to be tested. The script mentions creating a traceability matrix during the Test Analysis and Design phase to facilitate bi-directional traceability.
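The lesson builds the matrix as an Excel sheet; the same linkage can be sketched as a mapping, with invented condition and case ids, to show what bi-directional traceability provides:

```python
# Test conditions (rows) mapped to the test cases that cover them (columns).
traceability = {
    "COND-01 valid login":     ["CASE-001", "CASE-002"],
    "COND-02 password reset":  ["CASE-003"],
    "COND-03 session timeout": [],  # requirement not yet tested
}

def cases_for_condition(matrix, condition):
    """Forward traceability: which test cases cover this condition?"""
    return matrix.get(condition, [])

def conditions_for_case(matrix, case_id):
    """Backward traceability: which conditions does this test case exercise?"""
    return [cond for cond, cases in matrix.items() if case_id in cases]

def untested_requirements(matrix):
    """Conditions with no covering case -- requirements not tested yet."""
    return [cond for cond, cases in matrix.items() if not cases]
```

`untested_requirements(traceability)` is exactly the "what requirements are not tested yet" question the lesson says the sheet answers.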

💡 Test Conditions

Test Conditions are the elements or scenarios identified during test analysis that can be tested. They form the basis for creating test cases. The script describes evaluating existing test bases to find what can be used for testing, which are referred to as test conditions, serving as the starting point for test case design.

Highlights

Testing for software includes 5 main activities that may overlap during the test process.

Test planning and control involves setting objectives and monitoring progress against the plan.

Test objectives may include finding the maximum number of defects or informing decision-makers about software readiness.

Test control is an ongoing activity that involves monitoring and taking corrective actions if deviations occur.

Test analysis and design turns test objectives into tangible test conditions and cases.

Analysis involves collecting and reviewing sources like requirements and risk analysis reports for writing test cases.

Test conditions are identified from test bases and used to design test cases and prioritize them.

Test data preparation and environment setup are part of the test design process.

A traceability matrix is created to link test conditions to test cases for bi-directional traceability.

Test implementation involves developing test procedures and creating test suites.

Test data is created during the implementation stage, and test harnesses are prepared if needed.
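As a sketch of what a stub and a driver look like in practice: the payment gateway, class names, and return values below are all invented for illustration.

```python
# Unit under test: depends on a payment gateway that is not available
# in the test environment (hypothetical example).
def checkout(cart_total, gateway):
    if cart_total <= 0:
        return "rejected"
    return "paid" if gateway.charge(cart_total) else "declined"

# Stub: stands in for the missing gateway and returns canned answers.
class GatewayStub:
    def __init__(self, succeed=True):
        self.succeed = succeed

    def charge(self, amount):
        return self.succeed  # no real network call, just the canned result

# Driver: feeds the unit under test prepared inputs and collects the results.
def run_driver():
    return [
        checkout(50, GatewayStub(succeed=True)),   # successful charge
        checkout(50, GatewayStub(succeed=False)),  # gateway declines
        checkout(0, GatewayStub()),                # invalid cart total
    ]
```

Together, the stub and driver form the test harness: they let the unit run in a controlled environment before its real collaborators exist.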

Test execution compares expected results with actual results and records discrepancies.

Discrepancies found during execution are reported and recorded in a test log.

Confirmation testing is conducted to ensure that identified discrepancies have been fixed.

Regression testing is performed to ensure that fixing one bug does not introduce new issues.

Evaluating exit criteria involves checking if predetermined targets for stopping testing have been met.

Reporting includes writing a test summary report for stakeholders to understand the testing stage reached.

Test closure activities include checking deliverables, finalizing reports, and documenting system acceptance.

Testware includes all outputs from the testing process, such as test cases, suites, and procedures.

Handing over testware to the maintenance organization helps in learning lessons for future projects.

Transcripts

play00:04

The most clear part of testing is

play00:08

the test execution process we do for the

play00:10

test cases. But actually

play00:12

the testing for any software

play00:14

includes 5 main activities.

play00:16

Please note that in this part

play00:18

you are required to concentrate and remember,

play00:20

in each activity we will mention,

play00:22

what are the major tasks which distinguish

play00:24

this activity from others? Because you will be

play00:26

asked about it in detail in the certification exam.

play00:28

They are as follows:

play00:30

Test planning and control,

play00:32

Test analysis and design,

play00:33

Test implementation and execution,

play00:35

Evaluating exit criteria and reporting,

play00:38

Test closure activities,

play00:40

Although their order is

play00:42

logical,

play00:44

these activities, during the test process, may

play00:46

overlap with another activity.

play00:48

or you may find 2 activities

play00:50

start simultaneously with each other. Each one of

play00:52

the previous activities is divided into 2 parts.

play00:54

The first activity in test process

play00:56

is Test Planning and Control.

play00:58

so what do we do in a test plan?

play01:00

and what are the major tasks of it?

play01:02

Here we set the objectives

play01:04

of the testing process in general.

play01:06

like what?

play01:08

For example, if we need to find the largest

play01:10

possible number of bugs, which we call

play01:12

"find as many defects as we can".

play01:14

or if our objective is

play01:16

to give information to those who

play01:18

work on decision making

play01:20

to decide whether this software is ready

play01:22

to be released to the market now or not,

play01:24

and so on.

play01:26

So you will find many similar objectives.

play01:28

and whatever the testing objectives, you'll

play01:32

always need a description for the test activities

play01:34

which will achieve your test

play01:36

objectives. The second part

play01:38

of the first activity is "test control".

play01:40

To conduct a correct test control

play01:42

for your project, you should

play01:44

keep monitoring all the test activities

play01:46

as long as you work on the project.

play01:48

That's why the test control is

play01:50

the ongoing activity. i.e. as long as

play01:52

we are testing, the test control

play01:54

is ongoing and we keep

play01:56

monitoring all the time. So,

play01:58

what are we doing in a test control in details?

play02:00

you observe whether your plan

play02:02

is the same as what happens in reality

play02:04

which is called the actual progress

play02:06

or not. Then if there is

play02:08

a deviation in the actual progress

play02:10

from the plan, we should report

play02:12

the status including the deviation

play02:14

from the plan. As long as

play02:16

we have deviations, you have to take

play02:18

a corrective action to be able to

play02:20

achieve the project's objectives,

play02:22

and to implement the plan as expected.

play02:24

For example, if we were planning

play02:26

to finish testing in one week,

play02:28

then while working we discovered that

play02:30

one week is not enough,

play02:32

In this time, we will apply the steps of test

play02:34

control, in which we compare

play02:36

the actual progress against the plan

play02:38

then we report the status

play02:40

and take action. Note that

play02:42

the feedback we get from monitoring

play02:44

and controlling the activities is

play02:46

what test control depends on

play02:48

to modify the plan and to

play02:50

implement the required action for modifications.

play02:52

The second activity in test process

play02:54

is Test analysis & design.

play02:56

in which the test objective

play02:58

is made tangible

play03:00

in the form of test conditions & test cases.

play03:02

This activity is also divided into

play03:04

2 parts. In analysis,

play03:06

we collect and review

play03:08

the sources we will depend on

play03:10

to write the test cases

play03:12

which we call "test bases".

play03:14

Requirements,

play03:15

risk analysis reports,

play03:16

software structure

play03:18

and interface specifications.

play03:20

There are a lot of resources that

play03:22

can help you in writing test cases.

play03:24

The second task

play03:26

is evaluating or finding

play03:28

what already exists in the test bases

play03:30

which can be used for testing

play03:32

this is what we call "test condition"

play03:34

which means that anything

play03:36

that can be tested is called a "test condition".

play03:38

From test conditions,

play03:40

the test design part will start.

play03:42

The first thing we will design

play03:44

is the test cases.

play03:46

Then, we will prioritize the test cases.

play03:48

which of these cases is more important

play03:50

which is less important

play03:52

in testing.

play03:54

In test cases,

play03:56

we always have input data,

play03:58

which we input into the program during the

play04:00

test case execution, then it results as an output.

play04:02

This is what we call

play04:04

the "test data", which

play04:06

we prepare and identify

play04:08

in this stage. In addition, we design

play04:10

the environment setup in this

play04:12

analysis and design process.

play04:14

Here we identify the infrastructure

play04:16

and the tools we will need during testing process.

play04:18

In this stage we create an excel sheet

play04:20

and name it "traceability matrix"

play04:22

where we add all the test conditions

play04:24

and link them to the test cases.

play04:26

So through this sheet we can

play04:28

have a bi-directional tractability

play04:30

which helps us to know what stages of testing

play04:32

have finished and what requirements

play04:34

are not tested yet. In the previous stage

play04:36

we did the design work ourselves.

play04:38

In this stage, the implementation starts.

play04:40

this can be considered the longest activity

play04:42

and it is also done

play04:44

through 2 steps: Implementation

play04:46

then Execution. we start

play04:48

the implementation of the test cases,

play04:50

then we develop the test procedures

play04:54

which is ordering the test cases

play04:56

that we will go through during the execution.

play04:58

then, from the test cases,

play05:00

we create test suites which gather

play05:02

the test cases.

play05:04

we should remember that

play05:06

the test data was identified

play05:08

during the previous stage,

play05:10

while in this stage it is being created.

play05:12

If our testing needs

play05:14

stubs and drivers,

play05:16

we prepare them during the implementation

play05:18

stage and call them "test harnesses".

play05:22

Finally, in the implementation partition

play05:24

we verify whether the

play05:26

test environment has been set up or not.

play05:28

we also verify that

play05:30

the bi-directional traceability sheet

play05:32

between the test bases and

play05:34

the test cases is correct.

play05:36

The second step for this stage

play05:38

is the execution of the test procedures

play05:40

whether manually or

play05:42

by using execution tools.

play05:44

as we've learned,

play05:46

in execution we compare the expected result

play05:48

with the actual result we got.

play05:50

so if we find that

play05:52

ER doesn't equal AR, we will call it

play05:54

a discrepancy.

play05:56

The discrepancies we found during the execution

play05:58

should be reported and recorded.

play06:00

the report is not only for the discrepancies,

play06:02

but for all the information resulting

play06:04

from the test execution we did.

play06:06

such as: the tester name

play06:08

the date of execution, the actual result,

play06:10

the test result (pass / fail)

play06:12

The tested Build/server, etc.

play06:16

This file we call the test log.

play06:18

the verb we will use for

play06:20

this writing process is logging

play06:22

and recording. For sure after

play06:24

finishing the report, we need to

play06:26

conduct a confirmation testing.

play06:28

which is making a re-execution or

play06:30

re-testing again to check whether

play06:32

the discrepancies found before were

play06:34

fixed or not. If they were fixed

play06:36

we mark it with a confirmation

play06:38

that they are fixed. Last note,

play06:40

finding a discrepancy in the software

play06:42

and making sure it was fixed

play06:44

is a good point, but it's not enough.

play06:46

that's why we should conduct a regression testing.

play06:50

which means to re-test again

play06:52

and re-execute the tests for the other modules

play06:54

in the program, to be sure that when

play06:56

those defects were fixed, they didn't cause

play06:58

problems in another part in the program,

play07:00

and the program keeps working correctly.

play07:02

we should notice the difference between

play07:04

the purpose of the retesting which is

play07:06

ensuring that the bug was fixed.

play07:08

while the purpose of the regression testing

play07:10

is ensuring that the fixed bug didn't

play07:12

affect the other parts of the program

play07:14

and didn't cause new bugs.

play07:18

For the fourth activity which has to be

play07:20

applied at each test level:

play07:22

Evaluating exit criteria and reporting.

play07:24

First of all, from the activity's name

play07:26

there is a new expression which is

play07:28

"Exit criteria"

play07:30

the exit criteria are the targets

play07:32

that when achieved, we will stop testing.

play07:34

such as: after 80 % of test

play07:36

case execution we will stop testing.

play07:38

or for example, we've reached

play07:40

a particular percentage of finding bugs

play07:42

in a software at which we will stop.

play07:44

or according to a particular cost or

play07:46

time we will stop testing.

play07:48

These criteria differ from

play07:50

one project to another.

play07:52

Here, we check whether the exit criteria,

play07:54

which we determined in the test plan,

play07:56

have been achieved or not.

play07:58

By getting back to the test logs

play08:00

that we wrote in the previous stage, where

play08:02

we recorded all the outcomes of

play08:04

the execution process,

play08:06

you can assess whether you need more

play08:08

testing or not. Do the exit criteria

play08:10

need to be changed?

play08:12

As for the reporting section,

play08:14

it is to write a test summary report

play08:16

for all the stakeholders to know the stage

play08:18

we've reached. Finally, the last activity

play08:20

of testing process.

play08:22

The Test closure

play08:24

is the last stage which we apply

play08:26

at each milestone of the project.

play08:28

Examples for the milestone

play08:30

in the project:

play08:32

a software system is released

play08:34

a test project is completed or cancelled

play08:36

a maintenance release has been completed

play08:39

the test closure tasks:

play08:40

here you'll check which

play08:42

planned deliverables were delivered

play08:44

and which ones were not.

play08:46

Closing: to close the reports

play08:48

and any open modifications to any

play08:50

part of the project which weren't finished yet.

play08:52

Documenting: To document that

play08:54

the system is acceptable and agreed upon.

play08:56

Finalizing and archiving the testware.

play08:58

What is meant by the testware is

play09:00

all we have got from the testing process

play09:02

like: test harnesses,

play09:04

the tools we used,

play09:06

the test cases we have,

play09:08

test suites and procedures with

play09:10

their resulting data. The output of

play09:12

all these test activities is called

play09:14

the testware.

play09:15

handing over:

play09:16

to hand over the testware

play09:18

to the maintenance organization.

play09:20

the information we've collected will help us

play09:22

to know the lessons learned, and

play09:24

to determine what needs to be changed.

play09:26

and what we can do better in testing

play09:28

the future projects.

play09:29

Now we have finished

play09:30

the test process part

play09:32

in chapter 1. The completion

play09:34

of this part with the rest of terms is

play09:36

in the first video of chapter 4.

play09:38

you can review the part we've finished

play09:40

in the file entitled "fundamental test process".
