Archive for the ‘TESTING’ Category

Testing Process Part-3

February 20th, 2010 by

The testing process

Traditional CMMI or Waterfall development model

A common practice of software testing is that testing is performed by an independent group of testers after the functionality is developed, before it is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing.

A software Development Life Cycle (SDLC)

  • Requirement analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work with developers in determining what aspects of a design are testable and with what parameters those tests work.
  • Test planning: Test strategy, test plan, test bed creation. Since many activities will be carried out during testing, a plan is needed.
  • Test development: Test procedures, test scenarios, test cases, test datasets, test scripts to use in testing software.
  • Test execution: Testers execute the software based on the plans and test documents, then report any errors found to the development team.
  • Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
  • Test result analysis: Also called defect analysis, this is done by the development team, usually along with the client, to decide which defects should be fixed, rejected (i.e. the software is found to be working properly) or deferred to be dealt with later.
  • Defect Retesting: Once a defect has been dealt with by the development team, it is retested by the testing team. This is also known as resolution testing.
  • Regression testing: It is common to build a small test program from a subset of tests, for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not broken anything, and that the software product as a whole is still working correctly.
  • Test Closure: Once the test meets the exit criteria, the activities such as capturing the key outputs, lessons learned, results, logs, documents related to the project are archived and used as a reference for future projects.

Testing artifacts

Software testing process can produce several artifacts.

Test plan: A test plan consists of scope, approach, objectives, risk analysis, test design, roles and responsibilities, resources and environment.

Traceability matrix: A document that traces the links between the test plan and test cases, e.g. by showing the number of test cases prepared, executed, passed and failed.
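A traceability matrix can be sketched as a plain dictionary grouping test cases by the requirement they trace back to; the requirement and test-case IDs below are hypothetical examples, not from any real project.

```python
test_cases = [
    {"id": "TC-01", "requirement": "REQ-LOGIN",  "status": "Passed"},
    {"id": "TC-02", "requirement": "REQ-LOGIN",  "status": "Failed"},
    {"id": "TC-03", "requirement": "REQ-LOGOUT", "status": "Passed"},
]

def build_matrix(cases):
    """Group test cases by the requirement they trace back to."""
    matrix = {}
    for case in cases:
        matrix.setdefault(case["requirement"], []).append(
            (case["id"], case["status"]))
    return matrix

def coverage_counts(matrix):
    """Per requirement: how many cases were prepared, passed, failed."""
    return {req: {"prepared": len(rows),
                  "passed": sum(1 for _, s in rows if s == "Passed"),
                  "failed": sum(1 for _, s in rows if s == "Failed")}
            for req, rows in matrix.items()}

counts = coverage_counts(build_matrix(test_cases))
```

A requirement with zero prepared test cases would stand out immediately, which is the main value of the matrix.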

Test case: A test case normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and actual result.
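The fields listed above can be sketched as a Python dataclass; the field names and the example case are my own choices, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    identifier: str
    requirement_ref: str
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)
    input_data: dict = field(default_factory=dict)
    expected_result: str = ""
    actual_result: str = ""

    def passed(self) -> bool:
        """A case passes when the actual result matches the expected one."""
        return self.actual_result == self.expected_result != ""

tc = TestCase(
    identifier="TC-07",
    requirement_ref="REQ-SEARCH",
    preconditions=["user is logged in"],
    steps=["open search page", "enter query", "submit"],
    input_data={"query": "testing"},
    expected_result="results page shown",
)
tc.actual_result = "results page shown"   # filled in during execution
```

The expected result is written before execution; only the actual result is filled in afterwards, which is what makes the pass/fail comparison meaningful.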

Test script: The test script is the combination of a test case, test procedure, and test data.

Test suite: The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases.
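A test suite can be built directly with Python's standard unittest module: related test cases are collected into one suite and run together (the calculator checks are toy examples).

```python
import unittest

class CalculatorTests(unittest.TestCase):
    def test_add(self):
        self.assertEqual(2 + 3, 5)

    def test_subtract(self):
        self.assertEqual(5 - 3, 2)

# Collect individual cases into a suite and run it.
suite = unittest.TestSuite()
suite.addTest(CalculatorTests("test_add"))
suite.addTest(CalculatorTests("test_subtract"))

result = unittest.TestResult()
suite.run(result)
```

The suite object is also where shared goals or instructions for the collection would naturally be documented.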

Test data: In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to deliver this data to the client along with the product or project.
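Keeping test data separate from test logic, as described above, can be sketched as follows: the same check runs over several data sets. The `add()` function is a stand-in, and in practice the data rows would live in a separate file (CSV, JSON, etc.).

```python
def add(a, b):
    return a + b

# Multiple data sets exercising the same functionality.
test_data = [
    {"input": (2, 3),  "expected": 5},
    {"input": (-1, 1), "expected": 0},
    {"input": (0, 0),  "expected": 0},
]

failures = [row for row in test_data
            if add(*row["input"]) != row["expected"]]
```

Adding a new data set then extends coverage without touching the test logic at all.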


Testing Methods Part-2

February 20th, 2010 by

Testing methods:

  1. Black box testing: Black box testing methods include equivalence partitioning, boundary value analysis, all-pairs testing, fuzz testing, model-based testing, traceability matrix, exploratory testing and specification-based testing.
  2. White box testing: White box testing is when the tester has access to the internal data structures and algorithms, including the code that implements them.
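Two of the black box methods named above can be sketched concretely: equivalence partitioning picks one value per partition, while boundary value analysis probes the edges. The eligibility rule (ages 18 to 65 inclusive) is a hypothetical specification.

```python
def is_eligible(age):
    """Hypothetical spec: eligible when 18 <= age <= 65."""
    return 18 <= age <= 65

# Equivalence partitions: below range, in range, above range.
partition_samples = {"below": 10, "inside": 40, "above": 80}
partition_results = {k: is_eligible(v) for k, v in partition_samples.items()}

# Boundary values: on and around each edge of the valid range.
boundary_results = {v: is_eligible(v) for v in (17, 18, 19, 64, 65, 66)}
```

Neither method needs to see the implementation; both derive their inputs purely from the specification, which is what makes them black box techniques.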

Testing Levels:

Unit Testing

Unit testing refers to tests that verify the functionality of a specific section of code. Unit testing is also called component testing.
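A unit test targets one small section of code in isolation. The sketch below uses the standard unittest module; `word_count` is a hypothetical helper serving as the unit under test.

```python
import unittest

def word_count(text):
    """The unit under test: count whitespace-separated words."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_simple_sentence(self):
        self.assertEqual(word_count("unit testing is focused"), 4)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest).run(result)
```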

Integration Testing: Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together (“big bang”). Normally the former is considered a better practice since it allows interface issues to be localised more quickly and fixed.
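An integration test exercises the interface between components rather than each unit alone. The two components below are hypothetical stand-ins wired together iteratively, as described above.

```python
class UserStore:
    """Component A: stores registered user names."""
    def __init__(self):
        self._users = set()

    def save(self, name):
        self._users.add(name)

    def exists(self, name):
        return name in self._users

class RegistrationService:
    """Component B: depends on UserStore through its save/exists interface."""
    def __init__(self, store):
        self.store = store

    def register(self, name):
        if self.store.exists(name):
            return False          # duplicate registration rejected
        self.store.save(name)
        return True

# The integration check: does B drive A correctly across the interface?
service = RegistrationService(UserStore())
first_attempt = service.register("alice")
second_attempt = service.register("alice")
```

If the two components disagreed about the interface (say, B called a method A does not provide), this test would localise the problem immediately, which is the advantage of iterative integration over "big bang".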

System Testing: System testing tests a completely integrated system to verify that it meets its requirements.

Regression Testing: Regression testing focuses on finding defects after a major code change has occurred. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged.

Acceptance testing: Acceptance testing can mean one of two things:

  1. A smoke test is used as an acceptance test prior to introducing a new build to the main testing process, i.e. before integration and regression testing.
  2. Acceptance testing performed by the customer, often in their lab environment on their own HW, is known as user acceptance testing (UAT). Acceptance testing may be performed as part of the hand-off process between any two phases of development.

Alpha testing: Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site.

Beta testing: Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs.


February 20th, 2010 by

TESTING- It is the process of identifying and detecting defects in a product.

DEFECT- A defect is a variance from the desired product: the product does not meet the customer requirement, i.e. the expected result is not equal to the actual result.

Expected result- what the product is expected to give you.

Actual result- what the product actually gives you.

Software testing is of two types:

MANUAL TESTING- The tester tests the application; if any defects are found, they are reported to the developer. The developer fixes the defect and gives it back to the tester. The tester then checks whether the fix is correct. This process involves the tester end to end, and is called manual testing.

AUTOMATION TESTING- Test automation is the process of writing a computer program to do testing that would otherwise need to be done manually. Once tests have been automated, they can be run quickly and repeatedly. This is often the most cost-effective method for software products that have a long maintenance life, because even minor patches over the lifetime of the application can cause features to break which were working at an earlier point in time.
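Automating a manual check can be sketched as follows: "does login reject a wrong password?" is captured as a script so it can be re-run cheaply after every patch. The `login()` function and its credential store are hypothetical stand-ins for the real application code.

```python
def login(username, password):
    """Hypothetical unit under test with a toy credential store."""
    valid = {"alice": "s3cret"}
    return valid.get(username) == password

def automated_checks():
    """Return the names of any checks that failed."""
    checks = {
        "accepts correct password": login("alice", "s3cret") is True,
        "rejects wrong password":   login("alice", "guess") is False,
        "rejects unknown user":     login("bob", "s3cret") is False,
    }
    return [name for name, ok in checks.items() if not ok]

failed_checks = automated_checks()
```

A tester would run these checks by hand once; the script runs them after every build, which is where the cost savings over a long maintenance life come from.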

Software Testing can also be stated as the process of validating and verifying that a software program/application/product:

  1. Meets the business and technical requirements that guided its design and development;
  2. Works as expected; and
  3. Can be implemented with the same characteristics.

Functional vs non-functional testing

Functional testing refers to tests that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of “can the user do this” or “does this particular feature work”.

Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or security.
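The contrast can be sketched in a few lines: the functional check verifies *what* the feature returns, while the non-functional check verifies *how* it behaves, here via a crude timing bound whose one-second threshold is purely illustrative.

```python
import time

data = list(range(1000, 0, -1))          # worst-case (reversed) input

start = time.perf_counter()
result = sorted(data)
elapsed = time.perf_counter() - start

functional_ok = result == list(range(1, 1001))   # correct output?
non_functional_ok = elapsed < 1.0                # fast enough? (assumed bound)
```

Real non-functional tests for scalability or security are far more involved, but they share this shape: the feature may produce the right answer and still fail the test.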

Defects and failures

Not all software defects are caused by coding errors. One common source of expensive defects is requirement gaps, e.g., unrecognized requirements, which result in errors of omission by the program designer. A common source of requirements gaps is non-functional requirements such as testability, scalability, maintainability, usability, performance and security.

A programmer's error (mistake) results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure.
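The error, defect, failure chain above can be made concrete: a coding mistake (using `<` where the specification wants `<=`) plants a defect that only becomes a visible failure when a boundary input executes the faulty comparison. The function and its spec are hypothetical.

```python
def in_range(value, low, high):
    """Spec (hypothetical): True when low <= value <= high."""
    return low <= value < high      # defect: upper bound made exclusive

typical_case = in_range(5, 1, 10)     # defect stays hidden -> True
boundary_case = in_range(10, 1, 10)   # failure: spec says True, we get False
```

This is also why a defect can sit in shipped code for a long time: until some input drives execution through the faulty path, no failure is observed.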


A common cause of software failure (real or perceived) is a lack of compatibility with other application software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the original (such as a terminal or GUI application intended to be run on the desktop now being required to become a Web application, which must render in a Web browser).

Static vs. dynamic testing

There are many approaches to software testing. Reviews, walkthroughs or inspections are considered static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing.

Software verification and validation

Software testing is used in association with verification and validation.

  • Verification: Have we built the software right? (i.e., does it match the specification).
  • Validation: Have we built the right software? (i.e., is this what the customer wants).

QUALITY- Quality is defined as meeting the customer requirement the first time and every time.

Why quality is required:

  1. It gives a defect-free product.
  2. It gives a user-friendly product.
  3. Reusability.
  4. Security.
  5. Durability: long-term performance of the product.
  6. It improves productivity and competitiveness in any organization.

QUALITY ASSURANCE- It is a planned and systematic set of activities necessary to provide adequate confidence that products and services will conform to specified requirements and meet user needs.

QUALITY CONTROL- It is product oriented: after the whole product is finished, the tester identifies the defects, so it is defect detection. It does not rectify the defects; at that point they go to the quality assurer, i.e. the developer.