

An overview of software testing

Test Models


There are many models used in software testing.

Definitions of these models will differ; however, the fundamental principles are agreed on by experts and practitioners alike.

This overview covers verification and validation (V&V) and the V-Model, and briefly discusses a Rapid Application Development (RAD) approach to testing.

V&V – Verification is defined by BS 7925 as the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase, i.e. have we built the product correctly?

Validation and verification: software testing can be defined as ‘a process of executing a computer program and comparing the actual behaviour with the expected behaviour’.

There are many models used to describe the sequence of activities that make up a Systems Development Life Cycle (SDLC), which covers both development and maintenance work.

Three models are worth mentioning.

  • Sequential – the traditional waterfall model
  • Incremental – the functional incremental model
  • Spiral – the incremental, iterative, evolutionary, Rapid Application Development, prototype model

I am concentrating on three test models:

  • Waterfall Model
  • V-Model
  • Agile Model

There are other methodologies, such as:

  • Cleanroom Model
  • Iterative
  • RAD
  • RUP
  • Spiral
  • TDD   (Test-driven development)
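Of these, test-driven development (TDD) is the easiest to show in code: the test is written first, then just enough code to make it pass. A minimal Python sketch, with an invented `add` function as the code under development:

```python
import unittest

# Step 1 (red): the test is written first. Before add() is defined,
# running this test fails -- that failure drives the implementation.
class TestAdd(unittest.TestCase):
    def test_add_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# Step 2 (green): write just enough code to make the test pass.
def add(a, b):
    return a + b

# Step 3 (refactor): clean up while keeping the test green.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
)
```

The cycle then repeats for the next requirement, so the tests grow alongside the code.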


Software testing


Software testing is the process of executing a program or system with the intent of finding errors. It is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test.

A Software Test Analyst is responsible for analysing product and project documentation to identify the most relevant and effective testing of the product, providing support to the Quality Assurance/Release Control team in software testing, and co-ordinating release activities for the business. Key responsibilities for the Software Test Analyst include:

  • Software Testing
  • Customer Service
  • Good interpersonal relationships with Business Analysts, Project Managers and Developers
  • Good teamwork (raising defects, bugs, errors and faults; fault fixing; new-feature, regression, manual or automated, and ad-hoc testing; and supporting all releases and patches to the live customer environment)
  • Knowledge of the environment under test

Testing can never completely identify all the errors and defects within software. Instead, it furnishes a criticism or comparison, comparing the state and behaviour of the product against oracles – principles or mechanisms by which someone might recognize a problem. Software testing can be stated as the process of validating and verifying that a software program/application/product:

  1. Meets the requirements that guided its design and development
  2. Works as expected (the system under test)
  3. Can be implemented with the same characteristics
  4. Satisfies the level of test conducted and the exit criteria set

What is Testing?

  • The process of executing a program with the intent of certifying its quality. Quality can be measured by testing for correctness, reliability, usability, maintainability, testability and reusability.
  • The process of executing a program with the intent of finding failures/faults
  • The process of exercising software to detect bugs, to verify that it satisfies specified functional and non-functional requirements
  • It ensures legal requirements are met
  • To help maintain the organization’s reputation
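The first bullet point above – executing a program and comparing actual against expected behaviour – can be sketched in a few lines of Python; the `square` function here is an invented stand-in for an arbitrary system under test:

```python
def square(n):
    """The 'system under test' -- a deliberately simple invented example."""
    return n * n

# Each case pairs an input with its expected result; comparing actual
# against expected behaviour is the core act of test execution.
cases = [(0, 0), (2, 4), (-3, 9)]
failures = [(n, square(n), want) for n, want in cases if square(n) != want]
print("PASS" if not failures else f"FAIL: {failures}")  # prints "PASS"
```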

Even the most carefully planned and designed software cannot possibly be free of defects.

Why Testing is Necessary

We can’t test everything, so what can we do?

  • Managing and reducing Risk
  • Carry out a Risk Analysis of the application under test
  • Prioritise tests to focus on the main areas of risk
  • Apportion time relative to the degree of risk involved
  • Understand the risk to the business of the software not functioning correctly
  • Continuing testing beyond the implementation date should be considered
  • It is better for us (Tester) to find errors than the users.
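The risk-analysis and prioritisation steps above can be sketched as a simple scoring exercise: score each area of the application by likelihood of failure and business impact, and test the highest-risk areas first. The feature names and 1–5 scores below are invented for illustration:

```python
# Risk score = likelihood of failure x business impact, each scored 1-5.
areas = [
    ("payment processing", 4, 5),
    ("report layout",      2, 2),
    ("user login",         3, 5),
]

# Prioritise tests (and apportion time) by descending risk score.
ranked = sorted(areas, key=lambda a: a[1] * a[2], reverse=True)
for name, likelihood, impact in ranked:
    print(f"{name}: risk score {likelihood * impact}")
```

A real risk analysis would weigh more factors, but the principle – focus effort where risk is highest – is the same.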

The Software Development Life Cycle (SDLC), sometimes referred to as the system development life cycle, is the process of creating or altering software systems, together with the models and methodologies that people use to develop these systems. In any such life cycle, people, process and technology all play a role in success.

Fundamental Test Process: this is detailed in what has become known as the fundamental test process, a key element of what testers do, and is applicable at all stages of testing. The most visible part of testing is running one or more tests: test execution. We also have to prepare for running tests, analyse the tests that have been run, and see whether testing is complete. Both planning and analysis are necessary activities that enhance and amplify the benefits of the test execution itself. It is no good testing without deciding how, when and what to test. Planning is also required for less formal test approaches such as exploratory testing.

The test process consists of five parts that encompass all areas of software testing:

  • Planning and Control
  • Analysis and Design
  • Implementation and Execution
  • Evaluating exit criteria and Reporting
  • Test closure activities

    Test Plan


A testing framework provides an object-oriented approach to programmer tests:
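As an illustrative sketch of this xUnit style, Python’s built-in `unittest` module organises each test as a method on a class, with shared set-up supplied by the framework; the `Stack` class here is an invented system under test:

```python
import unittest

class Stack:
    """An invented system under test."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()

class TestStack(unittest.TestCase):
    def setUp(self):
        # The framework builds a fresh fixture before every test method.
        self.stack = Stack()

    def test_push_then_pop_returns_item(self):
        self.stack.push(42)
        self.assertEqual(self.stack.pop(), 42)

    def test_pop_on_empty_stack_raises(self):
        with self.assertRaises(IndexError):
            self.stack.pop()

result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestStack)
)
```

Because each test method gets its own fixture, tests stay independent and can be run in any order.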

Quality Assurance, or QA, is another term for the evaluation of different portions of the software development life cycle; it is used to minimize downtime, bugs, and mistakes, while keeping the bottom line – profitability – at the forefront of any process.

V-Model


V Model



The ‘V’ diagram in its simple form has been around for a long time and is especially useful as it easily demonstrates how testing work done early in the SDLC can be used as input to assurance work later in development.

The V model of the SDLC offers considerable benefits over others, as it emphasizes the building of test data and test scenarios during development rather than as an afterthought. The V model also allows for the establishment of versions, incremental development and regression testing.

The V-Model is a systems development model designed to simplify the understanding of the complexity associated with developing systems.

The V-Model of software development is widely in use today, especially in the defence industry.

Agile


Agile Methodology

Agile testing is a method of software testing that follows the principles of agile software development methodology (Scrum, Extreme Programming, or other flavours of Agile).
It is also a group of software development methodologies based on iterative and incremental development.
Agile software development methods can deliver tremendous impact to software development. Studies show Agile can deliver higher quality, shorter development cycles, improved customer value and lower cost. The value is clear. However, Agile is not a magic bullet – there are key factors to consider when implementing Agile software development in an enterprise context to produce the results you’re looking for in terms of improved quality, more rapid cycle times and cost savings.

Agile Scrum Methodology

Testing Terminology


Definition: testing is a process in which defects are identified, isolated (separated) and submitted for rectification, and it is ensured that the product is defect-free, in order to produce a quality product in the end and hence customer satisfaction.

Reliability:  The probability that software will not cause the failure of a system for a specified period of time under specified conditions

Defect:  The departure of a quality characteristic from its specified value that results in a product or service not satisfying its normal usage requirements

Fault: a manifestation of an error in software. Faults are also known colloquially as defects or bugs. A fault, if encountered, may cause a failure, which is a deviation of the software from its intended purpose.

Error :   A human action that produces an incorrect result.

Quality: the totality of the characteristics of an entity that bear on its ability to satisfy stated or implied needs.

Testing and Risk: how much testing would you be willing to perform if the risk of failure were negligible? Alternatively, how much testing would you be willing to perform if a single defect could cost you your life’s savings, or, even more significantly, your life?

Testing and Quality:  Testing identifies faults whose removal increases the software quality by increasing the software’s potential reliability.

Testing  is the measurement of software quality.

We measure how closely we have achieved quality by testing the relevant factors such as correctness, reliability, usability, maintainability, reusability, testability etc.

Failure  – the act of a product not behaving as expected – the manifestation of a fault.
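The chain linking the definitions above – a human error introduces a fault, which may later cause a failure – can be illustrated in Python; the `average` function is an invented example:

```python
def average(numbers):
    # Error: a human mistake -- the programmer divides by a fixed 2
    # instead of len(numbers), leaving a fault (bug) in the code.
    return sum(numbers) / 2

# The fault causes a failure only when encountered with inputs
# that make the deviation from intended behaviour visible.
print(average([4, 6]))     # 5.0 -- coincidentally correct, no failure observed
print(average([1, 2, 3]))  # 3.0 instead of the intended 2.0 -- a failure
```

This is why testing needs varied inputs: a fault can lie dormant for many executions before it manifests as a failure.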

Validation – establishing the correspondence between the software and its specification – are we now building the correct product?  Also known as ‘black box’ testing.

Verification –  are  we now building the product  correctly? Also known as ‘white box testing’.

Test Case – the collection of inputs, predicted results and execution conditions for a single test

Pass/fail criteria – decision rules  used to determine whether  a product passes or fails a given test

Test suites – a collection of test cases necessary to “adequately” test a product
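Taken together, the test case, pass/fail criteria and test suite definitions map naturally onto a small data structure; this Python sketch (with invented names) exercises the built-in `abs` function:

```python
from dataclasses import dataclass

@dataclass
class Case:
    # A test case: inputs plus the predicted result
    # (execution conditions are trivial in this sketch).
    inputs: tuple
    expected: object

def run_suite(suite, fn):
    """Apply the pass/fail rule -- actual result equals predicted -- to each case."""
    return [fn(*case.inputs) == case.expected for case in suite]

# A small suite: the collection of cases needed to exercise abs().
suite = [Case((5,), 5), Case((-5,), 5), Case((0,), 0)]
results = run_suite(suite, abs)
print("all passed" if all(results) else "failures present")  # prints "all passed"
```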

Test Plan – a document describing the scope, approach, resources and schedule of intended testing activity; it identifies features to be tested, testing tasks, who will do each task, and any risks requiring contingency planning.


Software Testing Inside and Out

The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979, although his attention was on breakage testing: ‘a successful test is one that finds a bug.’

Historically there has not been a generally accepted set of testing definitions.