L17 Integration Testing

Phil Koopman
16 Jul 2021 · 10:27

Summary

TL;DR: This tutorial by Phil Koopman delves into integration testing, emphasizing its importance after unit testing to ensure components work together as the high-level design intends. It focuses on testing component interfaces and interactions, avoiding common pitfalls like skipping straight to system testing or lacking traceability. The tutorial illustrates the process with a vending machine example, highlighting the need to check not just the final output but the entire sequence of actions. It also discusses the role of message dictionaries in automotive diagnostics, emphasizing the necessity of testing various message types and values to validate system behavior against design specifications.

Takeaways

  • 🔍 Integration testing is conducted after unit testing to see how various pieces of the system work together.
  • đŸ—ïž It traces back to the architectural and high-level design work, ensuring components not only function internally but also interact as intended.
  • 🔄 The primary focus of integration testing is on the interfaces between components, assessing if they handle input sequences and interactions correctly.
  • 📈 Integration testing aims to confirm that modules align with the high-level design, especially in terms of sequence diagrams.
  • 🔄 Avoid the anti-pattern of skipping straight to system testing after unit tests, as this might miss subtle interaction issues.
  • 🔗 Ensure traceability from integration tests to the high-level design to maintain the connection between testing and design intent.
  • đŸš« Integration tests should not be pass/fail based on system function alone but should focus on the correctness of interfaces and interactions.
  • đŸ› ïž A simple integration test example involves a vending machine sequence diagram, where the test checks if the system correctly processes a coin insertion and subsequent actions.
  • 📊 Integration testing should cover all sequence diagrams, testing nominal behaviors, missing inputs, false preconditions, and invalid sequencing.
  • 📝 High-level designs often include interface descriptions, such as message dictionaries, which should be tested for message structure, field values, and exception handling.
  • 🔑 Best practices in integration testing emphasize the importance of tracing tests to the high-level design and covering all interactions to ensure the system works as intended.

Q & A

  • What is the primary purpose of integration testing?

    -The primary purpose of integration testing is to ensure that the various pieces of a software system work together as intended after individual components have been unit tested. It focuses on the interactions between components and their adherence to the high-level design.

  • Why is it important to conduct integration testing after unit testing?

    -Integration testing is important because unit testing alone cannot identify interaction problems between components. It ensures that the components, which have been individually tested, function correctly when combined and that the overall system matches the high-level design.

  • What are the potential anti-patterns or pitfalls of integration testing mentioned in the script?

    -The script mentions three anti-patterns: skipping straight to system testing after unit testing without conducting integration tests, lacking traceability from integration tests to the high-level design, and basing integration test pass criteria on system functionality rather than on the proper functioning of interfaces.

  • How does integration testing relate to high-level design?

    -Integration testing is intended to verify that the modules of a system match the high-level design, particularly focusing on the sequence diagrams and ensuring that the system behaves as the design specifies.

  • What is a sequence diagram and how does it relate to integration testing?

    -A sequence diagram is a type of diagram in the high-level design that shows the interactions between objects or components in a system over time. Integration testing uses these diagrams to check that all inputs result in correct outputs and that every component interface is exercised as expected.

  • What should be the focus of an integration test for a vending machine as described in the script?

    -The focus of an integration test for a vending machine should be to exercise a specific sequence diagram, ensuring that all inputs lead to correct outputs, every component interface is tested with relevant values, and the sequence of actions matches the expected behavior as per the high-level design.

  • Why is it incorrect to base the pass/fail criteria of an integration test solely on the final output?

    -Basing the pass/fail criteria solely on the final output is incorrect because it overlooks the importance of the intermediate results and the sequence of actions. Integration testing should verify that all arcs appear in the expected sequence and that all side effects and timings happen as expected according to the high-level design.
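
    A minimal sketch of what such a test could look like in C follows. The component is mocked here, and the names (vm_init, vm_insert_coin, the EV_* event codes) are invented for illustration; the essential point is that the pass/fail check covers the whole cascade from the sequence diagram, not just the final coin count.

```c
/* Hypothetical integration-test sketch for the "third coin is refunded"
 * sequence diagram. Names (vm_*, EV_*) are illustrative, not from the tutorial. */
#include <assert.h>
#include <stdio.h>

typedef enum { EV_COIN_IN, EV_COUNT_INCREMENTED, EV_COIN_OUT_ACTUATED,
               EV_COUNT_RESTORED } event_t;

static event_t log_buf[16];
static int log_len = 0;
static void log_event(event_t e) { log_buf[log_len++] = e; }

/* Minimal stand-in for the coin-handling component under test. */
static int coin_count;
static void vm_init(void)            { coin_count = 0; log_len = 0; }
static void vm_set_coin_count(int n) { coin_count = n; }
static void vm_insert_coin(void)
{
    log_event(EV_COIN_IN);
    coin_count++;
    log_event(EV_COUNT_INCREMENTED);
    if (coin_count > 2) {                 /* price is two coins */
        log_event(EV_COIN_OUT_ACTUATED);  /* refund the extra coin */
        coin_count--;
        log_event(EV_COUNT_RESTORED);
    }
}

int main(void)
{
    /* Steps 1-2: initialize the modules and establish the precondition
     * (machine already holds two coins). */
    vm_init();
    vm_set_coin_count(2);

    /* Step 3: trigger the sequence diagram by inserting a third coin. */
    vm_insert_coin();

    /* Pass/fail is the whole cascade, not just the final count. */
    const event_t expected[] = { EV_COIN_IN, EV_COUNT_INCREMENTED,
                                 EV_COIN_OUT_ACTUATED, EV_COUNT_RESTORED };
    assert(log_len == 4);
    for (int i = 0; i < 4; i++)
        assert(log_buf[i] == expected[i]);
    assert(coin_count == 2);   /* final state is checked last, not alone */

    puts("refund sequence: PASS");
    return 0;
}
```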

  • What is an example of an interface description used in integration testing?

    -An example of an interface description is the OBD2 parameter ID message dictionary, which defines automotive operational parameters and their associated data structures or fields in network packets, allowing for the testing of message structures, values, and exception handling.
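
    To make the idea concrete, here is a small C sketch of decoding against such a dictionary and testing it. The scalings shown (engine RPM as (256*A+B)/4 for PID 0x0C, coolant temperature as A-40 for PID 0x05) follow the commonly published OBD-II definitions, but the struct layout and function names are assumptions made for this example.

```c
/* Illustrative sketch of testing against a message dictionary. Treat the
 * struct layout and helper names as assumptions for the example. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct { unsigned char pid; unsigned char a, b; } obd_msg_t;

/* Decode one message according to the dictionary; return false if the
 * PID is not in the dictionary (the interface must reject it cleanly). */
static bool obd_decode(const obd_msg_t *m, double *value)
{
    switch (m->pid) {
    case 0x0C: *value = (256.0 * m->a + m->b) / 4.0; return true; /* RPM   */
    case 0x05: *value = (double)m->a - 40.0;         return true; /* deg C */
    default:   return false;                                      /* unknown PID */
    }
}

int main(void)
{
    double v;

    obd_msg_t rpm = { 0x0C, 0x1A, 0xF8 };      /* 0x1AF8 / 4 = 1726 rpm */
    assert(obd_decode(&rpm, &v) && v == 1726.0);

    obd_msg_t coolant = { 0x05, 0x7B, 0x00 };  /* 0x7B - 40 = 83 deg C */
    assert(obd_decode(&coolant, &v) && v == 83.0);

    obd_msg_t bogus = { 0xEE, 0x00, 0x00 };    /* not in the dictionary */
    assert(!obd_decode(&bogus, &v));           /* must be rejected, not guessed */

    puts("message dictionary decode: PASS");
    return 0;
}
```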

  • How should integration testing be approached in terms of best practices?

    -Integration testing best practices involve focusing on the interaction of components, tracing tests to the high-level design, exercising all arcs on every sequence diagram, and covering all modules, network interfaces, message types, and data fields to ensure the system works as intended.

  • What are the two main pitfalls of relying solely on system testing instead of conducting integration testing?

    -The two main pitfalls are missing system integration edge cases where the system may appear to work but the internal logic is incorrect, and the difficulty of exercising off-nominal sequence diagrams at a system level, which can lead to overlooking specific situations that the system will not handle as intended.

  • Why is traceability from integration tests to the high-level design important?

    -Traceability is important because it ensures that the integration tests are based on a clear reference point, allowing testers to verify whether the system is behaving as the high-level design specifies. Without traceability, there is no way to know if the system is functioning as intended.
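
    One lightweight way to keep that traceability visible is to tag every integration test with the high-level design artifact it exercises, for example in a test table. The C sketch below is illustrative only: the test IDs and HLD reference strings (IT-003, HLD-SD-07, and so on) are made up, and the test bodies are stubs.

```c
/* A minimal way to keep traceability explicit: each integration test
 * carries a reference to the HLD artifact it exercises. All identifiers
 * here are invented for illustration. */
#include <stdio.h>

typedef struct {
    const char *test_id;
    const char *hld_reference;   /* sequence diagram or interface spec */
    int (*run)(void);            /* returns 0 on pass */
} integration_test_t;

static int test_coin_refund(void)  { return 0; /* body elided */ }
static int test_bad_checksum(void) { return 0; /* body elided */ }

static const integration_test_t tests[] = {
    { "IT-003", "HLD-SD-07 (third coin refunded)",               test_coin_refund },
    { "IT-011", "HLD-IF-02 (message dictionary, CRC handling)",  test_bad_checksum },
};

int main(void)
{
    int failures = 0;
    for (size_t i = 0; i < sizeof tests / sizeof tests[0]; i++) {
        int rc = tests[i].run();
        printf("%s -> %s [traces to %s]\n", tests[i].test_id,
               rc == 0 ? "PASS" : "FAIL", tests[i].hld_reference);
        failures += (rc != 0);
    }
    return failures;
}
```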

Outlines

00:00

🔍 Understanding Integration Testing

This paragraph introduces the concept of integration testing as a process that occurs after unit testing. It emphasizes the importance of ensuring that individual components, which have been verified through unit tests, work together as intended. The focus is on the interfaces between components and whether they can handle all types of data and interactions as specified in the high-level design. The paragraph also highlights common anti-patterns, such as skipping integration testing and basing pass criteria on system functions rather than interfaces. An example of a simple integration test for a vending machine is provided to illustrate the process of verifying sequence diagrams and ensuring that the software implementation matches the high-level design.

05:02

đŸ› ïž Integration Testing Best Practices and Pitfalls

The second paragraph delves into best practices for integration testing, such as tracing tests back to the high-level design and ensuring that all sequence diagrams are covered. It discusses the importance of testing message structures, including a range of valid and invalid values, and handling exceptions like bad checksums or sequence numbers. The paragraph also warns about the pitfalls of relying solely on system testing, which can miss critical system integration edge cases and fail to exercise off-nominal sequence diagrams. The goal is to stress the interactions among components and ensure that the system works not only at a functional level but also adheres to the high-level design.
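
As a hedged example of the exception-handling tests described above, the following C sketch drives a toy receiver with a corrupted checksum and an out-of-order sequence number and checks that both frames are dropped. The frame layout, the 8-bit additive checksum, and the function names are invented for the example rather than taken from the tutorial.

```c
/* Off-nominal message tests: a frame with a bad checksum must be dropped,
 * and an out-of-order sequence number must be rejected. Frame layout and
 * names are assumptions for this sketch. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    unsigned char seq;
    unsigned char payload[4];
    unsigned char checksum;
} frame_t;

static unsigned char sum8(const frame_t *f)
{
    unsigned s = f->seq;
    for (int i = 0; i < 4; i++) s += f->payload[i];
    return (unsigned char)s;
}

static unsigned char expected_seq = 0;

/* Receiver under test: accept a frame only if checksum and sequence match. */
static bool receive_frame(const frame_t *f)
{
    if (sum8(f) != f->checksum) return false;   /* drop on bad checksum */
    if (f->seq != expected_seq) return false;   /* drop on bad sequence */
    expected_seq++;
    return true;
}

int main(void)
{
    frame_t good = { 0, { 1, 2, 3, 4 }, 0 };
    good.checksum = sum8(&good);
    assert(receive_frame(&good));               /* nominal frame accepted */

    frame_t corrupt = { 1, { 1, 2, 3, 4 }, 0 };
    corrupt.checksum = sum8(&corrupt) ^ 0xFF;   /* inject a checksum error */
    assert(!receive_frame(&corrupt));           /* must be dropped */

    frame_t skipped = { 3, { 9, 9, 9, 9 }, 0 }; /* sequence jumps 1 -> 3 */
    skipped.checksum = sum8(&skipped);
    assert(!receive_frame(&skipped));           /* out-of-order rejected */

    puts("exception handling: PASS");
    return 0;
}
```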

10:03

📝 The Importance of Traceability in Integration Testing

The final paragraph underscores the significance of traceability in integration testing. It points out that without a high-level design to reference, there is no clear way to determine if the system is behaving as intended. Skipping integration testing can lead to a lack of understanding of whether the system's internal logic aligns with the high-level design. The paragraph stresses the necessity of integration testing to ensure that the system's components interact correctly and that the overall system functions as designed.

Keywords

💡Integration Testing

Integration Testing is a type of software testing that occurs after unit testing, where individual units or components of a program have been tested for their functionality. The purpose of integration testing, as explained in the video, is to verify that these components work together as expected. It is a crucial step in the development process to ensure that the modules match the high-level design and that the system behaves as intended. The video uses the analogy of a vending machine to illustrate how integration testing checks the interaction between components in response to a sequence of events, such as inserting a coin.

💡Unit Testing

Unit Testing is the process of testing individual components or units of a software application to determine if they function correctly in isolation. In the context of the video, unit testing is mentioned as a prerequisite to integration testing. It ensures that each part of the software is working as intended before they are combined and tested together for interactions and integration issues.

💡High-Level Design (HLD)

High-Level Design (HLD) refers to the architectural and design aspects of a system that are developed before the actual coding begins. It includes the sequence diagrams and interface descriptions that guide the development and testing process. The video emphasizes the importance of tracing integration tests back to the HLD to ensure that the software implementation matches the intended design.

💡Sequence Diagram

A Sequence Diagram is a type of UML diagram that illustrates the interactions between objects or components in a system over time. In the video, sequence diagrams are used to show the expected sequence of events in a system, such as the steps a vending machine takes when a coin is inserted. Integration testing uses these diagrams to check if the system behaves as expected in response to specific actions.

💡Component Interfaces

Component Interfaces are the points of interaction between different components of a system. The video explains that integration testing focuses on these interfaces to ensure that data and control flow as intended between components. This is essential for identifying issues that unit testing might not catch, such as incorrect data handling or sequence errors.

💡Anti-Patterns

Anti-Patterns in the context of software development refer to common responses to a problem that are typically ineffective and/or counterproductive. The video mentions several anti-patterns in integration testing, such as skipping to system testing without proper integration testing, which can lead to subtle interaction problems being overlooked.

💡Traceability

Traceability in software testing is the ability to track the relationship between the requirements, design, and code. The video points out that without traceability from integration tests to the high-level design, it is difficult to ensure that the tests are verifying the correct aspects of the system and that the system matches the intended design.

💡System Function

System Function refers to the overall functionality of a system. The video warns against using system function as the pass criteria for integration tests, as this overlooks the importance of testing the intermediate results and interactions between components. Instead, the focus should be on the correct functioning of interfaces and sequence diagrams.

💡Message Dictionary

A Message Dictionary is a collection of definitions for messages that are used in a system, often in the context of network communications or data structures. The video uses the example of an OBD2 parameter ID message dictionary to explain how integration testing should cover the message structure, including valid and invalid values, to ensure proper message handling.

💡Validation Test Suite

A Validation Test Suite is a set of tests designed to verify that a system meets certain criteria or behaves as expected under various conditions. The video mentions that components often come with a validation test suite to ensure that all types of messages and interactions are supported correctly, which is an essential part of integration testing.

💡Best Practices

Best Practices are the most effective methods recommended for a particular activity or process. In the context of integration testing, the video outlines best practices such as focusing on the interaction of components, tracing tests to the high-level design, and ensuring comprehensive coverage of all sequence diagrams and interfaces.

Highlights

Integration testing is a process that occurs after unit testing to ensure that various pieces of a software system work together as intended.

The purpose of integration testing is to exercise all component interfaces and verify that they handle input sequences and interactions correctly according to the high-level design.

Integration testing should not duplicate unit testing efforts but focus on interaction problems that unit testing cannot find.

Skipping integration testing and moving straight to system testing can lead to missing subtle interaction problems.

Traceability from integration tests to the high-level design is crucial for ensuring the implementation matches the intended design.

Integration testing pass criteria should be based on interfaces, not just system functionality.

A simple integration test example involves a vending machine sequence diagram, where the test checks if the machine handles a coin insertion correctly.

Integration tests should monitor the cascade of actions in a sequence diagram to ensure they occur as expected.

The pass or fail of an integration test is determined by whether the system behaves according to the sequence diagrams, not just the final output.

Integration testing coverage includes checking all arcs on sequence diagrams, testing nominal behaviors, and handling missing inputs or preconditions.
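
For instance, an off-nominal case such as a false precondition can be checked directly, as in this illustrative C sketch (the out_of_service flag and vm_* names are assumptions for the example): the coin-in arc occurs, but the vend sequence must not trigger.

```c
/* Off-nominal coverage sketch: with the precondition deliberately false,
 * the sequence diagram must not incorrectly trigger. Names are invented. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

static bool out_of_service;
static int  coin_in_events;
static int  vend_sequence_started;

static void vm_init(bool oos)
{
    out_of_service = oos;
    coin_in_events = 0;
    vend_sequence_started = 0;
}

static void vm_insert_coin(void)
{
    coin_in_events++;                 /* the input arc always occurs...   */
    if (!out_of_service)
        vend_sequence_started++;      /* ...but the cascade must not here */
}

int main(void)
{
    vm_init(true);                    /* precondition deliberately false */
    vm_insert_coin();

    assert(coin_in_events == 1);
    assert(vend_sequence_started == 0);  /* diagram did not incorrectly trigger */

    puts("off-nominal precondition: PASS");
    return 0;
}
```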

Integration tests should also verify the correct implementation of side effects and timing within the system.

High-level designs often include interface descriptions, such as message dictionaries, which should be tested in integration.

Integration testing should exercise message structures, testing a range of values, valid and invalid fields, and message types.
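
A sketch of that kind of field-value sweep, assuming a two-byte RPM field scaled by 1/4 (so the representable range is 0 to 16383.75 rpm) and invented encode/decode helper names:

```c
/* Sweep a message field across representative values: in-range values must
 * round-trip, out-of-range values must be rejected. Helper names are made up. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Encode an rpm value into the two data bytes of an RPM message; returns
 * false if the value cannot be represented by the field. */
static bool encode_rpm(double rpm, unsigned char *a, unsigned char *b)
{
    if (rpm < 0.0 || rpm > 16383.75) return false;   /* field range check */
    unsigned raw = (unsigned)(rpm * 4.0 + 0.5);
    *a = (unsigned char)(raw >> 8);
    *b = (unsigned char)(raw & 0xFF);
    return true;
}

static double decode_rpm(unsigned char a, unsigned char b)
{
    return (256.0 * a + b) / 4.0;
}

int main(void)
{
    /* Representative points: both ends of the range plus a typical value. */
    const double valid[] = { 0.0, 800.0, 16383.75 };
    for (int i = 0; i < 3; i++) {
        unsigned char a, b;
        assert(encode_rpm(valid[i], &a, &b));
        assert(decode_rpm(a, b) == valid[i]);   /* round-trips exactly */
    }

    /* Invalid values must be rejected rather than silently wrapped. */
    unsigned char a, b;
    assert(!encode_rpm(-1.0, &a, &b));
    assert(!encode_rpm(20000.0, &a, &b));

    puts("field value sweep: PASS");
    return 0;
}
```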

Components should be accompanied by a validation test suite to ensure proper support for different message types.

Best practices in integration testing involve concentrating on component interactions and ensuring they satisfy high-level design aspects.

Pitfalls of system testing alone include missing system integration edge cases and difficulty in exercising off-nominal sequence diagrams.

Skipping high-level design traceability in integration testing leads to guessing the system's intended behavior.

Integration testing is essential to confirm that the system not only works but also operates exactly as the high-level design specifies.

Transcripts

00:00

This is Phil Koopman with a tutorial on integration testing. Integration testing sits part way up the V, on the right-hand side. The idea of integration testing is that after you're done with unit test and the various pieces have been checked out, you then see how the various pieces work together. Integration test traces back to the left-hand side of the V, where the architectural and high-level design work takes place.

00:34

In integration testing, several components are tested as a set. Each component has an internal function, which has already been assessed by unit test, but also has communications or the ability to control other components. It's the arrows between components that are the primary emphasis of integration testing. The idea of integration testing is to exercise all the component interfaces and ask questions such as: did the input sequences lead to correct responses, and can all types of data and all types of interactions be handled by the interfaces as they're supposed to, according to the high-level design? Integration testing is intended to make sure that the modules match the high-level design, including especially the sequence diagrams.

01:23

When you're doing integration testing, the idea is not to redo things that have already been found in unit testing. You assume unit testing has already happened and instead concentrate on the types of things unit testing can't find, which primarily are interaction problems.

01:39

The anti-patterns for integration testing include skipping straight to system test. Once you're done with the unit test, you could just try and run the system, but you can easily miss subtle interaction problems that result in the system almost working and you not knowing that there's some sort of issue, that the implementation does not match the intended high-level design, because things mostly work and you don't notice the difference. Another anti-pattern is no traceability from integration test to the high-level design; the point of integration test is to compare against the high-level design. A third anti-pattern is that the integration test pass criteria is based on system function, not interfaces. That means if you run an integration test and you ignore all the interactions and just say, yup, sure enough, the system seems to work at the end, that's not an integration test, because you're not paying attention to the intermediate results. That's simply a system test, and you're not actually doing integration testing.

02:44

Let's take a look at the types of things that would happen in a very simple integration test. The sequence diagram on the right shows what happens when a coin is put into a vending machine. An integration test tracing to this sequence diagram would, in step one, initialize all the modules. In step two, it would make sure, as part of or just after the initialization, each precondition is satisfied so that the sequence diagram can be activated. In step three, the initial action that triggers the cascade of events in the sequence diagram is fed to the system; in this case, a coin is inserted. After that, the integration test consists of monitoring to make sure that the cascade of actions in the sequence diagram actually takes place. The coin-in signal is received; in four and five there's a side effect that happens, and there may not be a way to observe it, but in number six the result of that side effect is observable.

03:45

In the end, the point of the integration test is to exercise a specific sequence diagram. You check that all inputs result in correct outputs, and every component interface is exercised across the whole set of sequence diagrams, with all the relevant values and all the relevant timing and sequencing. If any of the sequence diagrams do not behave as expected, that means the software does not correctly implement the high-level design and you have some sort of integration testing failure. The pass/fail criteria is whether or not the system behaves as the sequence diagrams say the system should behave.

04:25

For integration testing coverage, the questions you tend to ask are: are all arcs on all sequence diagrams exercised? Are off-nominal behaviors tested? What happens if one of the inputs to a sequence diagram is missing? What happens if one of the preconditions is false? Does the sequence diagram incorrectly trigger? Do you have invalid sequencing? Do you have extraneous outputs? And so on.

04:50

It's important to realize that the point of an integration test based on a sequence diagram is not simply the end result. In this example, there's a sequence diagram that shows a vending machine that takes two coins for a purchase receiving a third coin and refunding it automatically, going back to two coins inside the machine, ready for purchase. The integration test for this sets up the machine, makes sure that the machine thinks it already has two coins, and then pops in another coin; that initiates the test. The remainder of the test is observing that the coin-in signal arrives, the coin count increases and it thinks it has a third coin, then it refunds the coin by exercising the coin-out actuator, and the coin count goes back to two. Simply looking that the final coin count is two is not how you determine pass/fail for this integration test. This integration test only succeeds if it notices it has an extra coin and it refunds the extra coin instead of just silently eating it, and then goes back to the right number of coins. Only observing the final test output would not tell you whether it actually refunded the coin, which is the whole point of this sequence diagram and this integration test. What this example illustrates is that integration test is not simply a pass on the final output, but rather: did all the arcs appear in the expected sequence, did all the timings happen as you expected, did all the side effects happen as you expected? In other words, it is not simply that the pieces manage to work together more or less; it's that they manage to work together exactly as they're supposed to, according to the high-level design.

06:38

In addition to sequence diagrams, it is common for high-level designs to also have some sort of interface description. Many interfaces look like messages one way or another. Here's an example of the OBD2 parameter ID message dictionary, which makes automotive operational parameters available via the diagnostic port. This is a typical sort of message dictionary in that each message, which might be an actual network message or it might be a data structure in memory that you can access, has a descriptor with a categorical value saying what kind of message it is: is this the engine speed, is it the engine coolant temperature, is it the accelerator pedal position, and so on. Once you know what the enum, the categorical value, is, you can then interpret the associated data in a data structure or fields in a network packet based on what that identifier is. As an example, if the enum says it's an engine speed, then there might be an integer afterwards that is in tens of rpms or what have you.

07:45

Integration testing should exercise the message structure. It should test all types of messages, a range of values inside fields, valid and invalid field values, and invalid message types. It should also test the timing and exception handling: what if there's a bad checksum indicating a message should be dropped? What if there's a bad sequence number on a sequence of messages? And so on. The HLD will have this message dictionary, which should define all the message types, formats, and so on, and give you a good basis for writing the integration tests based on the HLD. It's common to see components accompanied by a validation test suite so you can know that all the different types of messages are supported properly.

08:35

Integration testing best practices revolve around concentrating on the interaction of components. Integration tests should be traced to the high-level design, including exercising all the arcs on every sequence diagram, covering every sequence diagram in the high-level design, all the modules, all the network interfaces, all the message types, all the data fields. The idea is: assuming unit testing has found everything unit testing is likely to find, how can you additionally stress the interactions among components to make sure that, sure, the units each do what they want to do, but when you put them together they actually still satisfy all the aspects of the high-level design?

09:20

The two main integration testing pitfalls: first, system testing alone misses system integration edge cases. Sometimes a misbehaving system appears to work just fine, but the internal logic isn't quite right, and there's some specific edge-case situation that it will not handle as intended, because it was just getting lucky in the common case. Also, it can be difficult to exercise off-nominal sequence diagrams at a system level; there are some tests which are very difficult to reproduce with a physical system, or downright dangerous. Integration testing helps you cover all the fine-grained interactions to make sure not only is the system working, but it's working the way you thought it was supposed to be working.

10:06

From a traceability point of view, if you skip the high-level design there's nothing to trace your integration test to, so you're just sort of guessing what things are supposed to do. And if you don't do integration testing, there's no way to really know whether the system is behaving as intended or not.

Related Tags

Integration Testing · Component Interaction · High-Level Design · Unit Testing · Sequence Diagrams · Test Coverage · Software Validation · Architecture Analysis · Test Suites · Design Verification