Verification methods for industry-specific software. Verification is the process of checking a software product against established requirements.


The two concepts, validation and verification, are often confused. In addition, validation of system requirements is often confused with validation of the system itself. I propose to look into this question.

In a previous article, I looked at two approaches to modeling an object: as a whole and as a structure. We will need this division in the current article.

Suppose we have a designed functional object. Consider this object as part of the design of another functional Object. Let there be a description of the construction of the Object that contains a description of the object. In such a description the object is described as a whole, that is, its interfaces for interacting with other objects within the Object's design are described. Let a description of the object as a structure also be given. Let there be an information object containing requirements for how the description of the object as a structure is to be designed. Finally, let there be a body of knowledge containing inference rules by which the description of the object as a structure is obtained from the description of the object as a whole. The body of knowledge is what designers are taught at universities: a great deal of knowledge that allows them, based on knowledge about an object, to design its structure.

So, we can begin. We can assert that if the object as a whole is described correctly, if the body of knowledge is correct, and if the inference rules have been followed, then the resulting description of the object's design will be correct. That is, a functional object built from this description will correspond to real operating conditions. The following risks may arise:

1. Using incorrect knowledge about the Object. The model of the Object in people's heads may not correspond to reality; for example, the real danger of earthquakes was not known. Accordingly, the requirements for the object may be formulated incorrectly.

2. Incomplete recording of knowledge about the Object: something was missed, mistakes were made. For example, they knew about the winds but forgot to mention them. This may lead to an incomplete set of requirements for the object.

3. Incorrect body of knowledge. We were taught to prioritize mass over other parameters, but it turned out that speed had to be increased instead.

4. Incorrect application of the inference rules to the description of the object: logical errors, something missing from the requirements for the object's design, broken requirements tracing.

5. Incomplete recording of conclusions about the system design. Everything was taken into account and calculated, but they forgot to write it down.

6. The created system does not correspond to the description.

It is clear that, as a rule, all project artifacts appear in their completed form only towards the end of the project, and even then not always. But if we assume waterfall development, then the risks are as described above. Checking each risk is a specific operation that can be given its own name. If anyone is interested, you can try to come up with and propose these terms.

What is verification? Put simply, verification is a check for compliance with rules. The rules are drawn up in the form of a document, so there must be a document containing requirements for the documentation. If the documentation meets the requirements of this document, it has passed verification.

What is validation? Put simply, validation is a check of the correctness of conclusions. That is, there must be a body of knowledge describing how to obtain a description of a design from data about the object. Checking that these conclusions were applied correctly is validation. Validation includes checking the description for consistency, completeness, and understandability.

Validation of requirements is often confused with validation of the product built from those requirements. You shouldn't do that.
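To make the distinction concrete, here is a minimal, purely illustrative sketch in Python; all identifiers and rules are invented for this example and are not taken from any standard. Verification checks the requirements document against formal documentation rules, while validation checks the conclusions drawn from our knowledge of the object for completeness.

```python
# Hypothetical illustration of verification vs. validation of requirements.
# The rules and data structures are invented for this sketch only.

requirements = [
    {"id": "REQ-1", "text": "The pump shall start within 2 s.", "source": "OBJ-STARTUP"},
    {"id": "REQ-2", "text": "The pump shall stop on overheat.", "source": "OBJ-SAFETY"},
]

def verify(reqs):
    """Verification: does the document follow the documentation rules?
    (every requirement has an id, a text, and a traced source)"""
    problems = []
    for r in reqs:
        for field in ("id", "text", "source"):
            if not r.get(field):
                problems.append(f"{r.get('id', '?')}: missing {field}")
    return problems

def validate(reqs, known_object_properties):
    """Validation: are the conclusions complete with respect to what we
    know about the object? (every known property is covered by a requirement)"""
    covered = {r["source"] for r in reqs}
    return [p for p in known_object_properties if p not in covered]

print(verify(requirements))                                   # -> []
print(validate(requirements, {"OBJ-STARTUP", "OBJ-SAFETY", "OBJ-WIND-LOAD"}))
# -> ['OBJ-WIND-LOAD']  (knowledge about wind loads never became a requirement)
```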

Let us give several definitions that determine the general structure of the software certification process:

Software certification – the process of establishing and formally recognizing that software has been developed in accordance with specified requirements. During the certification process, the Applicant, the Certifying Body, and the Supervisory Body interact.

Applicant – an organization submitting an application to the relevant Certifying Body to obtain a certificate (of conformity, quality, suitability, etc.) for a product.

Certifying Body – an organization that considers the Applicant's application for software certification and, either independently or by forming a special commission, carries out a set of procedures aimed at certifying the Applicant's software.

Supervisory Body – a commission of specialists that monitors the Applicant's processes for developing the information system being certified and gives an opinion on the compliance of these processes with certain requirements, which is submitted for consideration to the Certifying Body.

Certification can be aimed at obtaining a certificate of conformity or a certificate of quality.

In the first case, the result of certification is recognition that the development processes comply with certain criteria and that the functionality of the system complies with certain requirements. An example of such requirements is the guidance documents of the Federal Service for Technical and Export Control in the field of software system safety.

In the second case, the result is recognition that the development processes comply with certain criteria that guarantee an appropriate level of quality of the manufactured product and its suitability for use under certain conditions. Examples of such standards are the ISO 9000:2000 series of international quality standards (GOST R ISO 9000-2001) and the aviation standards DO-178B, AS9100, and AS9006.

Testing of certifiable software has two complementary purposes:

· The first goal is to demonstrate that the software satisfies the requirements for it.

· The second goal is to demonstrate with a high level of confidence that errors that could lead to unacceptable failure situations, as defined by the system failure safety assessment process, are identified during the testing process.

For example, DO-178B requires the following to satisfy software testing objectives:

· Tests, first of all, should be based on software requirements;

· Tests should be designed to verify correct functioning and to expose potential errors.


· Analysis of the completeness of tests based on software requirements should determine which requirements are not covered by tests (a minimal sketch of such an analysis follows this list).

· Analysis of the completeness of tests based on the structure of the program code should determine which structures were not executed during testing.
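DO-178B prescribes objectives, not tooling; the following is only a minimal sketch, with invented requirement and test identifiers, of how the requirements-based completeness analysis mentioned above could be automated. Structural coverage analysis would be performed analogously over the code structures executed during testing (for example, with a coverage tool).

```python
# Hypothetical requirements-based test completeness analysis.
# Requirement IDs and the trace mapping are invented for illustration.

requirements = ["SRS-001", "SRS-002", "SRS-003", "SRS-004"]

# Which tests claim to exercise which requirements (a simple trace matrix).
tests = {
    "TC-01": ["SRS-001"],
    "TC-02": ["SRS-001", "SRS-003"],
}

covered = {req for reqs in tests.values() for req in reqs}
untested = [req for req in requirements if req not in covered]

print("Requirements without tests:", untested)   # -> ['SRS-002', 'SRS-004']
```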

This standard also talks about requirements-based testing. This strategy has been found to be most effective in identifying errors. Guidelines for selecting test cases based on requirements include the following:

· To achieve the goals of software testing, two categories of tests must be carried out: tests for normal situations and tests for abnormal (robustness) situations, i.e. situations not reflected in the requirements.

· Specific test cases should be developed for software requirements and sources of errors inherent in the software development process.

The purpose of tests for normal situations is to demonstrate the software's ability to respond to normal inputs and conditions as required.

The purpose of tests for abnormal situations is to demonstrate that the software responds adequately to abnormal inputs and conditions; in other words, they must not cause the system to fail.
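As a purely illustrative sketch (the function, its limits, and the test values are invented, not taken from DO-178B), a normal-range test exercises inputs that the requirements describe, while a robustness test feeds abnormal inputs and checks that the software fails safely rather than crashing:

```python
# Hypothetical unit under test: commanded valve opening, required to be 0..100 %.
import math
import unittest

def set_valve_opening(percent: float) -> float:
    if not isinstance(percent, (int, float)) or percent != percent:  # reject wrong types / NaN
        raise ValueError("invalid command")
    # Clamp out-of-range commands instead of passing them to the actuator.
    return min(100.0, max(0.0, float(percent)))

class NormalRangeTests(unittest.TestCase):
    def test_nominal_command(self):
        self.assertEqual(set_valve_opening(42.5), 42.5)      # behavior stated in the requirements

class RobustnessTests(unittest.TestCase):
    def test_out_of_range_command_is_clamped(self):
        self.assertEqual(set_valve_opening(250.0), 100.0)    # abnormal input, safe response

    def test_nan_command_is_rejected(self):
        with self.assertRaises(ValueError):
            set_valve_opening(math.nan)                      # must not silently propagate

if __name__ == "__main__":
    unittest.main()
```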

Failure categories for a system are established by determining the severity of the failure situation for the aircraft and its occupants. Any error in the software can cause a failure that contributes to a failure situation. Thus, the level of software integrity required for safe operation is associated with the system's failure situations.

There are five levels of failure situations, from insignificant to catastrophic. Based on these levels, the concept of a software criticality level is introduced. The criticality level determines the composition of the documentation provided to the certification body and, therefore, the depth of the system development and verification processes. For example, the number of document types and the amount of development work required for certification at the lowest DO-178B criticality level may differ by one to two orders of magnitude from those required at the highest level. The specific requirements are determined by the standard against which certification is planned.

20. Software verification and certification

    Verification and certification (validation) are testing and review processes that check whether software conforms to its specification and to customer requirements. Verification and certification cover the full software life cycle: they begin at the requirements analysis stage and end with verification of the program code during testing of the finished software system.

    Verification and certification are not the same thing, although it is easy to confuse them. Briefly, the difference between them can be defined as follows:

    Verification answers the question of whether the system was created correctly;

    Certification answers the question of whether the system is working correctly.

    According to these definitions, verification checks the software's compliance with the system specification, in particular with the functional and non-functional requirements. Certification is a more general process: during certification it is necessary to ensure that the software product meets the customer's expectations. Certification follows verification and determines how well the system meets not only the specification but also the customer's expectations.

    As noted earlier, certification of system requirements is very important in the early stages of software development. Errors and omissions are common in requirements; in such cases the final product will probably not meet the customer's expectations. Of course, requirements certification cannot identify all problems in the requirements specification; sometimes shortcomings and errors in the requirements are discovered only after the implementation of the system is complete.

    The verification and certification processes use two main techniques for verifying and analyzing systems.

    1. Software inspection. Various system representations are analyzed and checked, such as the requirements specification documentation, architectural diagrams, or the program source code. Inspection is performed at all stages of the software system development process. In parallel with inspection, automatic analysis of the program source code and related documents can be performed. Inspection and automated analysis are static methods of verification and certification, because they do not require an executable system.

    2. Software testing. Executable code is run with test data, and the output data and performance characteristics of the software product are examined to check that the system operates correctly. Testing is a dynamic method of verification and certification, since it is applied to a running system.

    Figure 20.1 shows the place of inspection and testing in the software development process. The arrows indicate the stages of the development process at which these methods can be applied. According to this scheme, inspection can be performed at all stages of system development, while testing can be performed only once a prototype or executable program has been created.

    Inspection methods include program inspection, automatic source code analysis, and formal verification. However, static methods can only check the compliance of programs with their specifications; they cannot be used to check that the system functions correctly. In addition, non-functional characteristics such as performance and reliability cannot be checked with static methods. Therefore, system testing is carried out to evaluate non-functional characteristics.

    Fig. 20.1. Static and dynamic verification and certification
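    As a small, hedged illustration of automatic static analysis (not any particular commercial tool), the standard Python ast module can flag suspicious constructs, here handlers that silently swallow all exceptions, without ever running the program:

```python
# Minimal static analysis sketch: flag bare "except:" handlers without running the code.
import ast

source = """
def read_sensor(path):
    try:
        return open(path).read()
    except:            # swallows every error, including KeyboardInterrupt
        return None
"""

tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, ast.ExceptHandler) and node.type is None:
        print(f"line {node.lineno}: bare 'except:' hides failures")
```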

    Despite the wide application of software inspection, testing is still the predominant method of verification and certification. Testing is a check of program operation with data similar to the real data that will be processed during system operation. The presence of defects and non-conformities in the program is detected by examining the output data and identifying anomalies among them. Testing is performed during the system implementation phase (to check that the system meets the developers' expectations) and after implementation is completed.

    Different types of testing are used at different stages of the software development process.

    1. Defect testing is carried out to detect inconsistencies between a program and its specification that are caused by errors or defects in the programs. Such tests are designed to identify errors in the system, not to simulate its operation.

    2. Statistical testing evaluates the performance and reliability of programs, as well as system operation in various operating modes. Tests are designed to simulate real system operation with real input data. The reliability of the system is assessed by the number of failures noted in the operation of programs. Performance is assessed by measuring the total execution time of operations and the system response time when processing test data.
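    A minimal sketch (with an invented operation and a synthetic input profile) of how such a statistical test might estimate response time and failure rate:

```python
# Hypothetical statistical test: estimate response time and failure rate
# of an operation under a simulated operational input profile.
import random
import statistics
import time

def process_request(size: int) -> int:
    """Stand-in for the operation under test."""
    if size > 9_000:
        raise RuntimeError("overload")          # simulated rare failure
    return sum(range(size))

random.seed(1)
latencies, failures, runs = [], 0, 1_000
for _ in range(runs):
    size = random.randint(1, 10_000)            # simulated operational profile
    start = time.perf_counter()
    try:
        process_request(size)
    except RuntimeError:
        failures += 1
    latencies.append(time.perf_counter() - start)

print(f"mean response time: {statistics.mean(latencies) * 1e3:.3f} ms")
print(f"observed failure rate: {failures / runs:.1%}")
```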

    The main purpose of verification and certification is to ensure that the system is "fit for purpose". Conformance of a software system to its intended purpose does not imply that it must be completely error-free; rather, the system must serve reasonably well the purposes for which it was intended. The required level of confidence in conformance depends on the purpose of the system, user expectations, and conditions in the software market.

    1. Purpose of the software. The level of confidence in compliance depends on how critical the software being developed is according to certain criteria. For example, the level of confidence for safety-critical systems should be significantly higher than the level of confidence for prototype software systems developed to demonstrate some new ideas.

    2. User expectations. It is sad to note that most users currently have low expectations of software. Users are so accustomed to failures while programs are running that they are not surprised by them. They are willing to tolerate system failures if the benefits of using the system outweigh the disadvantages. However, since the early 1990s user tolerance for failures in software systems has gradually decreased. Recently, creating unreliable systems has become practically unacceptable, so companies developing software products must pay ever more attention to software verification and certification.

    3. Software market conditions. When evaluating a software system, the seller must know the competing systems, the price the buyer is willing to pay for the system, and the target date for bringing the system to market. If the development company has several competitors, it may be necessary to set the market release date before testing and debugging are fully completed, otherwise competitors may be first to market. If customers are not willing to pay a high price for software, they may be willing to tolerate more system failures. All of these factors must be taken into account when determining the costs of the verification and certification process.

    As a rule, errors are discovered in the system during verification and certification, and changes are made to the system to correct them. This debugging process is usually integrated with other verification and certification processes. However, testing (or, more generally, verification and certification) and debugging are different processes with different goals.

    1. Verification and certification is the process of detecting defects in a software system.

    2. Debugging is the process of localizing defects (errors) and correcting them (Fig. 20.2).

    Fig. 20.2. Debugging process

    There are no simple methods for debugging programs. Experienced debuggers detect errors by comparing test output patterns with the output of the systems under test. Locating an error requires knowledge of error types, output patterns, the programming language, and the programming process. Knowledge of the software development process is very important. Debuggers know the most common programmer errors (for example, those associated with incrementing a counter value). Errors typical of particular programming languages are also taken into account, for example errors associated with the use of pointers in C.

    Locating errors in program code is not always simple, since an error is not necessarily located near the place in the code where the failure occurred. To localize errors, the debugging programmer develops additional software tests that help identify the source of the error in the program. It may be necessary to manually trace the program's execution.

    Interactive debugging tools are part of the set of language support tools integrated with the code compilation system. They provide a special program execution environment through which the table of identifiers, and from it the values of variables, can be accessed. Users often control program execution step by step, moving from statement to statement; after each statement the values of the variables are checked and possible errors are identified.
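    As a small hedged illustration (the bug is contrived), a typical counter/off-by-one error of the kind mentioned above can be localized by stepping through the code with Python's built-in pdb debugger and inspecting variables after each statement:

```python
# Contrived off-by-one bug: the loop skips the last reading.
def average(readings):
    total = 0.0
    for i in range(len(readings) - 1):   # bug: should be range(len(readings))
        total += readings[i]
    return total / len(readings)

if __name__ == "__main__":
    import pdb
    # pdb.run() starts step-by-step execution: 's' steps into the function,
    # 'n' executes the next statement, 'p total, i' prints variable values.
    pdb.run("print(average([10.0, 20.0, 30.0]))")   # prints 10.0 instead of 20.0
```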

    An error detected in the program is corrected, after which the program must be checked again. To do this, the program can be inspected again or the previous testing repeated. Retesting is used to ensure that the changes made to the program have not introduced new errors into the system, since in practice a high percentage of "bug fixes" either fail completely or introduce new errors into the program.

    In principle, all tests should be run again after each fix, but in practice this approach is too expensive. Therefore, when planning the testing process, dependencies between parts of the system are determined and tests are assigned to each part. It is then possible to trace software elements to specific test cases (test data) tailored to those elements. If the trace results are documented, then only a subset of the full set of test data needs to be used to test a changed software element and the components that depend on it.
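    The following sketch (module names, dependencies, and the trace mapping are all invented) shows one way such documented traces could be used to select only the tests affected by a change:

```python
# Hypothetical selective retesting based on documented element-to-test traces.
dependencies = {          # which elements depend on which (reverse dependencies)
    "parser": ["report", "ui"],
    "report": ["ui"],
    "ui": [],
}
trace = {                 # which tests exercise which element
    "parser": ["TC-10", "TC-11"],
    "report": ["TC-20"],
    "ui": ["TC-30", "TC-31"],
}

def tests_to_rerun(changed_element):
    """Collect tests for the changed element and for everything that depends on it."""
    affected, queue = set(), [changed_element]
    while queue:
        element = queue.pop()
        if element not in affected:
            affected.add(element)
            queue.extend(dependencies.get(element, []))
    return sorted(t for e in affected for t in trace.get(e, []))

print(tests_to_rerun("parser"))   # -> ['TC-10', 'TC-11', 'TC-20', 'TC-30', 'TC-31']
```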

    Software verification. Model checking, one approach to verification, was developed in the early 1980s by Clarke and Emerson in the USA and, independently, by Queille and Sifakis in France. Software testing is the process of identifying errors in software, but current testing methods cannot unambiguously establish that the analyzed program functions correctly. Verification (from the Latin verus, true, and facere, to make) is checking, verifiability, a way of substantiating (confirming) theoretical propositions by comparing them with experimental data. According to GOST ISO, verification is confirmation, based on the provision of objective evidence, that specified requirements have been fulfilled.


    Formal verification. As a rule, most developers of software systems use simulation modeling and testing to check the correctness of a design. These methods are quite effective in the very early stages of debugging, when the system being designed is still riddled with errors, but their effectiveness declines quickly as the system becomes cleaner. Formal verification methods are a worthy alternative to simulation and testing. During simulation and testing only some of the possible behavior scenarios of the designed system are investigated, so the question of whether a fatal error lurks on the unexplored trajectories remains open. Formal verification provides an exhaustive analysis of all possible variants of the system's behavior.


    Methods of formal verification:

    · Automatic theorem proving – proof of theorems implemented in software. It is based on the apparatus of mathematical logic, also using ideas from the theory of artificial intelligence; the proof process relies on propositional and predicate logic.

    · Model checking – a method for the automatic verification of parallel systems with a finite number of states.

    · Symbolic execution (graphs).

    · Abstract interpretation.


    Stages of formal verification on a model:

    · Modeling. For the system being designed, an abstract model must be built (for example, a finite transition system) that is acceptable to the model-checking tools.

    · Specification. This task consists of formulating the properties that the designed system should have. It is impossible to determine whether a given specification covers all the properties the system should have. For hardware and software, dynamic logics, temporal logics, and their fixed-point variants are typically used.

    · Algorithm calculations. The result of a global model checking algorithm is the set of model states in which the specification is satisfied, while a local model checking algorithm constructs as a counterexample a computation (error trace) that shows why the formula does not hold (a toy sketch follows). Counterexamples are especially important for finding subtle errors in complex transition systems.
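    A toy, hedged sketch of explicit-state model checking (the transition system and the property are invented): breadth-first exploration of all reachable states either confirms an invariant or returns an error trace as a counterexample.

```python
# Toy explicit-state model checker: check an invariant over all reachable states
# and return a counterexample trace if it is violated.
from collections import deque

# Invented model: a counter that wraps; states are the integers 0..4.
initial = 0
def successors(state):
    return [(state + 1) % 5]

def invariant(state):
    return state != 3            # property to check: "state 3 is never reached"

def model_check(initial, successors, invariant):
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):                 # violation found: rebuild the trace
            trace, s = [], state
            while s is not None:
                trace.append(s)
                s = parent[s]
            return list(reversed(trace))
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None                                   # invariant holds in every reachable state

print(model_check(initial, successors, invariant))   # -> [0, 1, 2, 3]  (error trace)
```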


    Model checking method. Compared with other approaches to formal program verification, the model checking method has two remarkable advantages. First, it is completely automatic, and its application does not require the user to have special knowledge of mathematical disciplines such as logic or theorem proving. Anyone who can simulate the system being designed is fully capable of checking it. Second, if the designed system does not have the desired property, the result of model checking is a counterexample demonstrating behavior of the system that refutes this property; this error trace provides invaluable information for understanding the cause of the error, as well as an important clue for solving the problem. The main disadvantage of the model checking method is the "combinatorial explosion" of states that occurs when transitions in different components of the system are performed in parallel. In 1987, K. McMillan showed that very complex systems can be verified by using a symbolic representation of the transition graph; this symbolic representation was based on Bryant's ordered binary decision diagrams (OBDDs).


    The concept of verification of BKU software. At RSC Energia the concept of verification cannot be applied in full, since when creating very complex systems complete verification is impossible because of time and cost constraints. A quality indicator is defined for the development and testing of the spacecraft (KA) BKU software.


    Development and testing of BKU software. At RSC Energia, real-time ground development facilities (NKO) are used to test the BKU software. Comprehensive development and testing of the software is carried out by the integration and testing group using specially developed test method programs (TMP, i.e. test scenarios).

    NKO-1 is used for integration and subsequent debugging of the BKU software in the following scope: selective checks of the main paths for the most likely emergency situations; interface control, i.e. testing of the software with respect to the exchange of data arrays and words, transfer of command arrays, transfer of telemetry (TM) data, and checking of resource allocation (memory, CPU time, I/O channels).

    NKO-2 (with the real onboard computer, BCVS) is used for testing, or in other words verification, of the BKU software in the following scope: testing of the BKU software in accordance with the flight plan (FP) and spacecraft modes; checking the software for compliance with the specifications.






    Test method program. Test method programs (TMP) are developed to conduct software testing. The TMP for each scenario must contain information for establishing the correspondence between the actual test results and the planned test results, as well as tolerances for each controlled parameter.
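    A hedged sketch (parameter names, values, and tolerances are invented) of the kind of check a TMP implies: compare each controlled parameter with its planned value within the stated tolerance.

```python
# Hypothetical check of actual vs. planned test results with per-parameter tolerances.
planned = {"bus_voltage_V": (28.0, 0.5), "pump_rate_lps": (1.20, 0.05)}   # (expected, tolerance)
actual  = {"bus_voltage_V": 28.3, "pump_rate_lps": 1.31}

for name, (expected, tol) in planned.items():
    measured = actual[name]
    verdict = "PASS" if abs(measured - expected) <= tol else "FAIL"
    print(f"{name}: measured {measured}, expected {expected} ± {tol} -> {verdict}")
# bus_voltage_V passes; pump_rate_lps fails because it is outside its tolerance.
```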


    Test scenario. The test scenario for complex (integration) debugging is built on the basis of a logical diagram of the debugging processes. The scenario should reflect the occurrence of events and the relationships between them in time. The choice of the discrete moments in time at which evaluation is carried out and control actions are applied depends on the specifics of the software and the progress of the debugging process. Test scripts are written in languages developed at the enterprise. These include Dipole (used in the creation of the SM, TGK, and the spacecraft of the Yamal satellite communication system, and also used in the KIS control test bench), Lua (currently used for MRM1, the small research module), and internal test languages.


    Requirements traceability matrix (NKO-2). The requirements traceability matrix contains a list of all requirements, the program unit identifier, the name of the program unit, the number of the corresponding requirement in the higher-level technical specification, and the identifier of the test confirming the requirement.


    Test report. The test report is a text file containing, in chronological order, the system's responses to input actions during testing. The protocol contains the Moscow time of each event, the time relative to the start of the test, the values of the parameters being set, and notes with comments about the events.
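    A minimal sketch (the format and event names are invented, not the actual RSC Energia protocol layout) of writing such a chronological protocol:

```python
# Hypothetical chronological test protocol: wall-clock time, time from test start, parameter, note.
from datetime import datetime, timedelta, timezone

MSK = timezone(timedelta(hours=3))               # Moscow time
test_start = datetime.now(MSK)

def log_event(f, parameter, value, note=""):
    now = datetime.now(MSK)
    rel = (now - test_start).total_seconds()
    f.write(f"{now:%H:%M:%S} +{rel:8.3f}s  {parameter} = {value}  {note}\n")

with open("protocol.txt", "w", encoding="utf-8") as f:
    log_event(f, "MODE", "SUN_POINTING", "mode command issued")
    log_event(f, "BUS_VOLTAGE", 28.3, "within tolerance")
```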


    TM archive. The telemetry archive is a file containing, in encoded form, the set of telemetry messages received from the system during testing. The archive contains the onboard time of events and the values of the telemetry parameters. The Telemet2 program allows the telemetry archive to be presented as a text file with comments and parameter values in decimal and hexadecimal format.


    Acceptance criteria. The TMP (PMI) for each test must contain requirements defining the acceptance criterion. The scope and depth of the checks are considered sufficient provided that the following requirements for completeness of testing are met: the BKU software functions in all possible flight configurations; all functional alternatives have been tested in accordance with the external specification; the main emergency situations have been worked through; boundary values have been checked. The "Evaluation Criteria" section of the TMP for each test must contain information that makes it possible to establish the correspondence between the actual and planned test results, as well as tolerances for each controlled parameter.

    Software testing

    Testing is the process of executing a program (or part of a program) with the intention (or goal) of finding errors.

    There are several criteria by which types of testing are usually classified. Typically, the following are distinguished:

    I) By test object:

    1) Functional testing

    2) Performance testing

    a) Load testing (determining or collecting performance indicators and the response time of a software and hardware system or device to an external request, in order to establish compliance with the requirements for the given system)

    b) Stress testing (assesses the reliability and stability of the system when the limits of normal operation are exceeded)

    c) Stability testing

    3) Usability testing

    4) User interface testing

    5) Security testing

    6) Localization testing

    7) Compatibility testing

    II) By knowledge of the system (a small example follows this classification):

    1) Black box testing (the object is tested without knowledge of its internal organization)

    2) White box testing (the internal structure of the program is checked; test data are obtained by analyzing the program logic)

    III) By degree of automation:

    1) Manual testing

    2) Automated testing

    3) Semi-automated testing

    IV) According to the degree of isolation of components:

    1) Component (unit) testing

    2) Integration testing

    3) System testing

    V) By testing time:

    1) Alpha testing – a closed process of testing a program by full-time developers or testers. An alpha product is most often only 50% complete; the program code is present, but a significant part of the design is missing.

    2) Beta testing – intensive use of an almost finished version of the program in order to identify the maximum number of errors in its operation for their subsequent elimination before the final release to the market, to the mass consumer. Volunteers from among ordinary future users are recruited for testing.
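    To make the black-box/white-box distinction from category II concrete, here is a small hedged sketch (the function and the test values are invented): the black-box test is derived only from the stated requirement, while the white-box tests are chosen by looking at the branches in the code.

```python
# Function under test: the requirement says "return the absolute value of x".
def absolute(x: float) -> float:
    if x < 0:
        return -x
    return x

# Black-box test: chosen from the requirement alone (equivalence classes of input).
assert absolute(-7.0) == 7.0 and absolute(7.0) == 7.0

# White-box tests: chosen by inspecting the code so that both branches execute,
# including the boundary between them.
assert absolute(0.0) == 0.0        # exercises the "not negative" branch at its boundary
assert absolute(-0.5) == 0.5       # exercises the "negative" branch
print("all tests passed")
```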

    Software verification is a more general concept than testing. The purpose of verification is to ensure that the item being verified (requirements or program code) meets its requirements, is implemented without unintended functions, and satisfies design specifications and standards (ISO 9000-2000). The verification process includes inspections, code testing, analysis of test results, and the generation and analysis of problem reports. Thus, it is generally accepted that the testing process is an integral part of the verification process.
