Integration Testing


Integration testing focuses on the interactions between components or systems. During integration testing, we test the interfaces between components, the interactions with other parts of the system (such as the operating system, file system, and hardware), and the interfaces with other internal or external systems. In this context, "integration" refers to the interfaces, interactions, or data flow between modules; the terms are used interchangeably. Integration testing is sometimes called Integration and Testing, abbreviated as I&T.

Objectives of Integration Testing

Objectives of integration testing include:

  • Reducing risk by detecting defects early
  • Verifying whether the functional and non-functional behaviors of the interfaces are as designed and specified
  • Building confidence in the quality of the interfaces. In some cases, automated integration regression tests provide assurance that changes have not broken existing interfaces, components, or systems.
  • Finding defects (which may be in the interfaces themselves or within the components or systems)
  • Preventing defects from escaping to higher test levels, such as system testing and acceptance testing

Levels of Integration Testing

There are two integration testing levels, which may be carried out on test objects of varying sizes.

  1. Component integration testing: Component integration testing focuses on the interactions and interfaces between integrated components. It is performed after component testing and is generally automated. In iterative and incremental development, component integration tests are usually part of the continuous integration process (a small pytest sketch follows this list).
  2. System integration testing: System integration testing focuses on the interactions and interfaces between systems, packages, and microservices. It can also cover interactions with, and interfaces provided by, external organizations (e.g., web services provided by a payment gateway or courier service provider). In this case, the developing organization does not control the external interfaces, which can create various challenges for testing (e.g., ensuring that test-blocking defects in the external organization’s code are resolved, arranging for test environments or sandbox accounts, etc.). Regardless of the SDLC model used, system integration testing may be done after system testing or in parallel with ongoing system test activities.
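To make the component level concrete, here is a minimal sketch of an automated component integration test written in Python with pytest. The cart and pricing components, their class and function names, and the prices are all hypothetical, invented only to illustrate how such a test exercises the interface between two already unit-tested components rather than the logic inside either one.

```python
# component_integration_sketch.py -- run with: pytest component_integration_sketch.py
# Hypothetical example: two small components (a pricing catalog and a shopping
# cart) plus integration tests that exercise the interface between them.

from decimal import Decimal

import pytest


class UnknownSkuError(Exception):
    """Raised by the pricing component for SKUs it does not know."""


class PriceCatalog:
    """Pricing component: maps SKUs to unit prices."""

    def __init__(self):
        self._prices = {"ABC-1": Decimal("9.99")}

    def unit_price(self, sku):
        try:
            return self._prices[sku]
        except KeyError:
            raise UnknownSkuError(sku)


class Cart:
    """Cart component: holds items and delegates pricing to a price source."""

    def __init__(self):
        self._items = []  # list of (sku, quantity) tuples

    def add_item(self, sku, quantity):
        self._items.append((sku, quantity))

    def total(self, price_source):
        # The call to price_source.unit_price() crosses the component
        # boundary; the integration tests focus on this interaction.
        return sum(
            price_source.unit_price(sku) * quantity
            for sku, quantity in self._items
        )


def test_cart_total_uses_pricing_component():
    basket = Cart()
    basket.add_item(sku="ABC-1", quantity=2)
    # Data passed across the interface must be interpreted consistently
    # (type, currency units, rounding).
    assert basket.total(price_source=PriceCatalog()) == Decimal("19.98")


def test_unknown_sku_error_is_propagated():
    # Failures raised by the pricing component must not be silently
    # swallowed by the cart component.
    basket = Cart()
    basket.add_item(sku="DOES-NOT-EXIST", quantity=1)
    with pytest.raises(UnknownSkuError):
        basket.total(price_source=PriceCatalog())
```

Note that the tests target only the public interface between the two components; changes inside either component do not require rewriting the tests, only a change to the contract between them does.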

Test Basis

Examples of work products that can be used as a test basis for integration testing include:

  • Software and system design
  • Sequence diagrams
  • Interface and communication protocol specifications
  • Use cases
  • Architecture at the component or system level
  • Workflows
  • External interface definitions

Test Objects

Typical test objects for integration testing include:

  • Subsystems
  • Databases
  • Infrastructure
  • Interfaces
  • APIs
  • Microservices

Typical Defects and Failures

Typical defects and failures differ by integration testing level:

1. Component Integration Testing Examples

Examples of typical defects and failures for component integration testing include:

  • Incorrect data, missing data, or incorrect data encoding
  • Incorrect sequencing or timing of interface calls
  • Interface mismatch
  • Failures in communication between components
  • Unhandled or improperly handled communication failures between components
  • Incorrect assumptions about the meaning, units, or boundaries of the data being passed between components (illustrated in the sketch after this list)
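As a rough illustration of the last defect type, the sketch below shows a unit mismatch between two hypothetical components: one reports a timeout in seconds, the other expects milliseconds. All names and values are invented for the example; the point is that each component is individually correct, and only a test at the interface pins down the shared assumption.

```python
# unit_mismatch_sketch.py -- run with: pytest unit_mismatch_sketch.py
# Hypothetical illustration of the "incorrect assumptions about the meaning,
# units, or boundaries of data" defect class: component A reports a timeout
# in seconds, component B expects milliseconds. The glue code must convert;
# if it passed the value through unchanged, the test below would fail.

def read_timeout_setting():
    """Component A: returns the configured timeout, in seconds."""
    return 30


def build_retry_message(timeout_ms):
    """Component B: formats a message from a timeout given in milliseconds."""
    return f"retrying in {timeout_ms} ms"


def request_with_retry():
    """Integration point between A and B: converts seconds to milliseconds."""
    return build_retry_message(read_timeout_setting() * 1000)


def test_timeout_units_are_consistent_across_components():
    # Component integration test: pins down the shared unit assumption at
    # the interface rather than re-testing either component's internals.
    assert request_with_retry() == "retrying in 30000 ms"
```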

2. System Integration Testing Examples

Examples of typical defects and failures for system integration testing include:

  • Inconsistent message structures between systems (see the contract-check sketch after this list)
  • Incorrect data, missing data, or incorrect data encoding
  • Interface mismatch
  • Failures in communication between systems
  • Unhandled or improperly handled communication failures between systems
  • Inaccurate assumptions about the meaning, units, or boundaries of the data being passed between systems
  • Failure to comply with mandatory security regulations
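To illustrate the first defect type, the sketch below checks that the message one system emits still matches the structure the receiving system expects. The message builder, field names, and expected contract are all hypothetical; in practice such checks are often written against a formal schema or contract-testing tool rather than hand-rolled assertions.

```python
# message_contract_sketch.py -- run with: pytest message_contract_sketch.py
# Hypothetical system integration check: verify that the order message one
# system emits matches the structure the receiving system expects.

import json


def build_order_message(order_id, amount):
    """Stands in for the sending system's outbound message builder."""
    return json.dumps({"orderId": order_id, "amount": str(amount), "currency": "EUR"})


# The fields the receiving system requires (its side of the contract).
REQUIRED_FIELDS = {"orderId", "amount", "currency"}


def test_order_message_matches_receiver_contract():
    message = json.loads(build_order_message("A-1001", "19.98"))

    # Inconsistent message structures between systems (missing fields,
    # renamed keys, wrong types) are typical system integration defects.
    assert REQUIRED_FIELDS <= message.keys()
    assert isinstance(message["amount"], str)
```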

Specific Approaches and Responsibilities

  • Focus on integration, not functionality: Component and system integration tests should concentrate on the integration itself. For example, during component integration testing of module A with module B, tests should focus on the communication between the modules, not the functionality of the individual modules, as that should have been covered during component testing. Similarly, during system integration of system X with system Y, tests should focus on the communication between the systems, not the functionality of the individual systems, as that should have been covered during system testing. Functional, non-functional, and structural test types are applicable during integration testing.
  • Integration Testing Responsibility: Component integration testing is often the responsibility of developers. System integration testing is generally the responsibility of testers. Ideally, testers performing system integration testing should understand the system architecture and should have influenced integration planning.
  • Integration Test Strategy: There are three commonly used integration strategies: Big-Bang, Top-Down, and Bottom-Up. Top-Down and Bottom-Up are incremental approaches, and a combination of the two is called the Sandwich approach. In Big-Bang integration, all components or systems are integrated in a single step. In an incremental approach, integration is performed by combining two or more logically related modules at a time. Systematic integration strategies may be based on the system architecture (e.g., top-down and bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system or components. In order to simplify defect isolation and detect defects early, integration should usually be incremental rather than big-bang (a top-down sketch with a stub follows this list). If integration tests and the integration strategy are planned before components or systems are built, those components or systems can be built in the order required for the most efficient testing.
  • Implement Continuous Integration: A risk analysis of the most complex interfaces can help to focus integration testing. The greater the scope of integration, the more difficult it becomes to isolate defects to a specific component or system, which may lead to increased risk and additional time for troubleshooting. This is one reason that continuous integration, where software is integrated on a component-by-component basis (i.e., functional integration), has become common practice. Such continuous integration often includes automated regression testing, ideally at multiple test levels.
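As a rough illustration of an incremental (top-down) strategy, the sketch below integrates a hypothetical top-level report component with a real formatter component, while the not-yet-integrated data-access component is replaced by a stub. All module and function names are invented for the example.

```python
# top_down_sketch.py -- run with: pytest top_down_sketch.py
# Hypothetical top-down incremental integration: the top-level component is
# integrated with the real formatter first, while the lower-level repository
# component is replaced by a stub until it is integrated in a later increment.

def format_line(name, amount):
    """Lower-level component already integrated: formats one report line."""
    return f"{name}: {amount:.2f}"


def build_report(repository, formatter=format_line):
    """Top-level component: pulls rows from a repository and formats them."""
    return "\n".join(formatter(name, amount) for name, amount in repository.rows())


class StubRepository:
    """Stub standing in for the database component not yet integrated."""

    def rows(self):
        return [("alpha", 1.5), ("beta", 2.0)]


def test_report_builder_integrates_with_formatter():
    # Exercises the interface between build_report() and format_line(),
    # with the missing lower layer stubbed out.
    assert build_report(StubRepository()) == "alpha: 1.50\nbeta: 2.00"
```

When the real repository component becomes available, the stub is replaced and the same test shape is reused, which keeps defect isolation simple at each integration step.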