Saturday, 31 October 2015

Keyword-Driven Framework

Keyword-driven testing is a technique that separates much of the programming work from the actual test steps, so that the test steps can be developed earlier and maintained with only minor updates, even when the application or the testing needs change. The keyword-driven automation infrastructure usually includes one or more shared object repositories and one or more function libraries. Once this infrastructure is ready, application testers can begin designing their keyword-driven tests by selecting objects and operation keywords in the Keyword View.
The keyword-driven testing methodology divides test creation into two stages:
  • Preparing a set of testing resources (the test automation infrastructure): this includes a planning stage and an implementation stage.
  • Creating tests in the QTP Keyword View: selecting the keywords (objects and/or operations) that represent the application functionality we want to test.

Advantages of keyword-driven testing methodology:
  • It enables us to design our tests at a business level rather than at the object level. For example, QTP may recognize a single option selection in our application as several steps: a click on a button object, a mouse operation on a list object, and then a keyboard operation on a list sub-item. We can create an appropriately named function that represents all of these lower-level operations in a single, business-level keyword (see the sketch after this list).
  • By incorporating technical operations, such as a synchronization statement that waits for client-server communication to finish, into higher-level keywords, tests become easier to read and easier for less technical application testers to maintain when the application changes.
  • Keyword-driven testing leads to a more efficient separation between resource maintenance and test maintenance.  This enables the automation experts to focus completely on maintaining objects and functions while application testers focus on maintaining the test structure and design.
  • When recording tests, we may not notice that new objects are being added to the local object repository. This may result in many testers maintaining local object repositories with copies of the same objects. When using a keyword-driven methodology, we select the objects for our steps from the existing object repository. When we need a new object, we can add it to our local object repository temporarily, but we are also aware that we need to add it to the shared object repository for future use.
  • When we record a test, QTP enters the correct object methods and argument values for us, so it is possible to create a test with little preparation or planning.
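To illustrate the first advantage above, here is a minimal sketch of a business-level keyword defined in a function library. The window and object names ("Book Flight", "Fare Classes") and the keyword name are hypothetical examples; in a real project they would come from the shared object repository:

' Business-level keyword wrapping several low-level UI operations
Function SelectFareClass(fareClass)
    Window("Book Flight").WinButton("Options").Click                ' click on the button object
    Window("Book Flight").WinList("Fare Classes").Select fareClass  ' mouse operation on the list object
    Window("Book Flight").WinList("Fare Classes").Type micReturn    ' keyboard operation on the list sub-item
End Function

' In a test, the whole sequence now reads as one business-level step:
SelectFareClass "Business Class"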
Repercussions: Although recording lets us create tests quickly:
  • Such tests are harder to maintain when the application changes and often require re-recording large parts of the test.
  • While using the keyword-driven methodology, we select from existing objects and operation keywords, so we must be familiar with both the object repositories and the function libraries that are available.



We must also have a good idea of what we want our test to look like before we begin inserting steps.
This usually results in well-planned, better-structured tests, which in turn makes long-term maintenance easier. Automation experts can add objects and functions based on detailed product specifications even before a feature has been added to the product.

Friday, 30 October 2015

Overview of QTP Life Cycle


QTP is a Mercury Interactive product that was taken over by HP. It is a functional and regression testing tool used to check the functionality of an application effectively. Scripting in QTP is based entirely on VBScript, which is not case sensitive. QTP generally works on Windows platforms; it also works on other platforms, but the installation process and some features change according to the platform. It supports all the technologies that WinRunner supports, such as Java, .NET, Siebel, Web, VB, PHP, SAP, Oracle, and multimedia applications. QTP has a workflow, or life cycle, for functional testing that involves the following main stages:
  • Test planning/Analyze the Application
  • Generating the basic test
  • Enhancing the test
  • Debugging the test
  • Executing the test
  • Analyzing the results.

Test Planning/ Analyzing the application:
In test planning, the testers understand the requirements and analyze which environments and controls are used to develop the application, such as Java, SAP, or Oracle. They identify the areas to be automated and analyze both the positive and negative flows of those areas. An automation test plan document is prepared based on this analysis, and the pre-configuration settings for further operations are put in place. These are the activities carried out in the test planning / application analysis phase.
Generate the basic Test:
Automation test engineers generate the basic test for both the positive and the negative flow. The operations on a standalone or web application (or an application in its respective environment) are recorded to check the functionality of the application. We also use VBScript concepts to generate scripts in this phase, as in the minimal recorded-script sketch below.
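For instance, recording a login against QTP's sample Flight Reservation application produces statements along these lines (the object names follow the sample application; the agent name is an arbitrary example, and the SetSecure value stands in for the encrypted password QTP records):

Dialog("Login").WinEdit("Agent Name:").Set "tester"           ' type the agent name
Dialog("Login").WinEdit("Password:").SetSecure "<encrypted>"  ' QTP records passwords in encrypted form
Dialog("Login").WinButton("OK").Click                         ' submit the login dialog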

Enhancing the Test:
Tests can be enhanced through activities such as the following (a combined sketch appears after the list):
  • Inserting the checkpoints
  • Synchronizing the test
  • Parameterizing the test (Data Driven Testing)
  • Inserting the output values
  • Measuring transactions
  • Inserting programmatic statements
  • Inserting comments
  • Inserting the script statement manually
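As a minimal sketch, several of these enhancements look like this in a QTP script. The window, object, transaction, and data table column names are hypothetical examples:

Services.StartTransaction "InsertOrder"          ' measure the duration of a transaction
' Parameterization: read the customer name from the global data table
Window("Flight Reservation").WinEdit("Name:").Set DataTable("CustomerName", dtGlobalSheet)
Window("Flight Reservation").WinButton("Insert Order").Click
' Synchronization: wait up to 10 seconds for the status bar to report completion
Window("Flight Reservation").WinObject("Status Bar").WaitProperty "text", "Insert Done...", 10000
Services.EndTransaction "InsertOrder"
' Programmatic statement reporting a pass result to the test results
Reporter.ReportEvent micPass, "Insert Order", "Order inserted successfully"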

Debugging the tests:
It is the process of executing the script step by step, in a user-controlled fashion, with the objective of identifying errors in the script. For debugging, QTP offers features like:
  • Step commands:
    1) Step Into
    2) Step Out
    3) Step Over
  • Breakpoints: for pausing the execution temporarily
Executing Tests:
In this phase, testers execute the test scripts that have been enhanced and debugged, using different run modes.
Analyzing Test Results:
Here we analyze the test results obtained during test execution. These results are traceable, and testers can report them using the QC (Quality Center) explorer and send them to the test managers, who analyze them and, according to the priority and severity of each bug, report it to the developers.






Thursday, 29 October 2015

Evaluating Marketing Strategy

Marketing plans serve as blueprints for a company's sales strategy. They detail what is to come over the next year and what has to be altered and re-evaluated as the markets change. Marketing should not be set in motion and left alone; it has to be constantly reviewed, evaluated, and adjusted so that it suits the company and satisfies consumer requirements. If you understand how to judge whether your marketing plan is delivering, the results can save you time and money and help ensure the success of your business.
Here are some ways to evaluate your marketing strategy.
ROI – Return on Investment:

Return on investment is always a major concern when it comes to marketing or any other business expense. From a business point of view, you put money into things that return a profit. The idea is to check whether the money you put into your marketing plan has resulted in a profit. Measure and record how much money is spent on each marketing campaign and compare it with the sales each campaign specifically brought in; the difference between these two figures tells you where you gained a good profit and where you lost, so you can take measures accordingly (a worked example follows).
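As a simple worked example with hypothetical numbers: if a campaign costs $2,000 and brings in $5,000 of attributable gross profit, then ROI = ($5,000 - $2,000) / $2,000 = 1.5, i.e. 150%. A campaign whose attributable profit is below its cost has a negative ROI and is a candidate for rework or cancellation.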
Sales Numbers
Reading the numbers can be the fastest and most basic way to determine whether your plan is working. For example, if your sales from June 1 to September 1 last year totaled $100,000 and your sales for the same period this year totaled $150,000, you can deduce that your current marketing plan is having some positive effect. Take into consideration any rise in prices or expansion of the business, but when all is said and done in raw numbers, you are selling more than you did a year ago.
Customer Response
Marketing teams can gather varied forms of information from customer response in order to determine what is working and to develop the company's marketing strategy. Surveys conducted online and in person, general customer service feedback, and online commentary can all reveal what your customers think of your marketing and which campaigns have the greatest impact. Simple questions like "How did you find out about our seasonal sale?" can reveal which initiatives are reaching customers and which market segments are making the most purchases.

Expansion
If your marketing reach is expanding, that is a sign the plan is working effectively. Marketing that makes its way into new regions, whether by customer recommendation or natural growth, indicates both a successful, popular product or experience and an effective marketing message. The expansion of your marketing budget is another sign that your plan is working well and has gained support within the company and from customers.
Partner Response
Your marketing partners will offer feedback about whether your marketing plan is working effectively. Partner feedback reveals the effectiveness of your efforts in relation to associated brands, suppliers, and vendors that are already established in the market, and it also reveals how your offering competes with other brands. These outside members of the team might feel the effects of a successful campaign before you do, because they are often on the front lines and may have more direct customer interaction than you do. The same goes for a negative report: if your partners are asking when you will be releasing new marketing efforts, it might be time to change the marketing plan, or to patch up the old one with new strategies that increase profits.
Salespeople
Outside salespeople are a great barometer for measuring the effectiveness of a company's marketing strategy. Ask for feedback from these people in the field to determine whether the message you are delivering, and the ways you are delivering it, are effective. You can get feedback from anyone, but salespeople are the ones who interact directly with customers and get to know the product best. If their feedback is overwhelmingly negative, or customers are completely unaware of your latest marketing efforts, your plan should be revised to better address existing clients and to suit the needs of your sales team.
Competitor Response
The actions of your competitors can be very telling when it comes to the success or failure of your marketing plan. If competitors rush to copy what you've done or try their best to one-up your initiatives, the plan is working. If your campaigns go largely ignored, or there is an immediate negative response, there may be an issue, or at least a question, about what you've set in motion.


Non-Functional Testing

After completing functionality testing, testers concentrate on the non-functional features of a system. Non-functional features are considered extra features of the system, which are just as important to test before the system is released into the market. This testing examines the quality of the system in particular: we think in depth about the application, beyond its functional aspects. Some of the factors are:
  • How does the application perform under normal circumstances?
  • How does the application behave when many users log in concurrently?
  • Can the application handle stress?
  • How secure is the application?
  • Can the application recover from any disaster?
  • Does the application behave in the same way in different environments or operating systems?
  • How easy is it to port the application to a different system?
  • Are the documents / user manual provided with the application easy to understand?
These features are just as important as the functionality of the application. Imagine an application that meets all the user requirements perfectly, but a malicious user can easily crack the data entered into it, or the application dies when an unusually large file is uploaded. Would you say that the application is of good quality? Of course not. This is where non-functional testing comes in, helping to test the application in all these ways. Non-functional testing mostly covers the following:
  • Reliability testing
  • Compatibility Testing
  • Portability testing
  • System Integration testing
  • Performance Testing
  • Security Testing
  • Localization or Internationalization testing
  • Installation Or uninstallation testing
  • Recovery Testing
  • Compliance Testing

Reliability Testing: The probability that the software can work without failures or defects for a specified period of time under specified conditions. Put simply, it is like the warranty of the application or system.
Compatibility Testing: The process of testing the functionality of an application or software on different hardware and software environments. We mainly concentrate on the output the application gives in each environment and compare it across environments. For example, web apps can be tested in different web browsers such as Internet Explorer, Opera, Mozilla Firefox, Safari, and Netscape. Testing the IRCTC application in different web browsers is a good way to understand what compatibility testing is.
Portability Testing: The process of testing the functionality of an application on different operating systems; it refers to testing the ease with which a software component or application can be moved from one environment to another. For example, testing a web or standalone application on operating systems such as Windows 7, Windows 8.1, Linux, Unix, and Macintosh.
System Integration Testing: The process of testing an application's interactions with other applications' interfaces. For example, when testing the IRCTC application by booking a ticket, the payment process has to redirect to the net banking page of a bank's application; that is, the IRCTC application has to interact with the banking application to complete its transaction. System integration testing checks such interactions.
Performance Testing: Done to check the performance of the software or application under defined conditions, with a focus on responsiveness and scalability. For example, test a Flipkart application at peak business hours and check its performance, loading time, and so on. Performance testing mainly covers load testing, stress testing, volume testing, and soak testing (a small timing sketch follows the list):
  • Load Testing: To find out the stability and response time of the product, the application is tested against a fixed, expected load (for example, a set number of concurrent users).
  • Stress Testing: Done to evaluate the application when it is stretched to peak load, that is, beyond the limits of its specified requirements.
  • Volume Testing: Done to test the stability of the application by processing a huge amount of data, even exceeding the memory limit.
  • Soak Testing: Done by applying a significant load over an extended period of time to discover how the system behaves.
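As a minimal sketch of measuring a single operation's response time from a QTP script (Timer is a standard VBScript function; real load, stress, volume, and soak testing would use a dedicated load tool):

startTime = Timer                               ' seconds elapsed since midnight
' ... perform the operation under test here, e.g. submitting a search ...
elapsed = Timer - startTime
Print "Response time: " & elapsed & " seconds"  ' Print writes to QTP's Print Log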

Security Testing: The process of testing how well an application is protected from unauthorized users or entries. It is used to find all the potential loopholes and weaknesses of the system, and is also known as penetration testing. Through this testing we can protect the system's or application's highly sensitive information from theft and check that the system is secure and not exposed to any type of attack.
Localization or Internationalization Testing: Internationalization is the process of designing a software application so that it can be adapted to various languages and regions without code changes, whereas localization is the process of adapting internationalized software for a specific region or language by adding locale-specific components and translating text.
Installation and Uninstallation Testing: Done to check whether all the files related to the application are installed and uninstalled properly.
Recovery Testing: Done to check how fast and how well the application can recover after going through a crash, hardware failure, or similar event. Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is performed properly. For example, while an application is receiving data from a network, unplug the connecting cable; after some time, plug the cable back in and analyze the application's ability to continue receiving data from the point at which the network connection was lost. Or restart the system while a browser has a definite number of sessions open and check whether the browser is able to recover all of them.
Compliance Testing: Related to the IT standards followed by the company; it is the testing done to find deviations from the company's prescribed standards.

Wednesday, 28 October 2015

Descriptive Programming to Retrieve Data from an Excel Sheet

I have an Excel sheet consisting of test results. The columns of the Excel sheet are Test Case ID, Test Case Description, Status, and Comments; the Status column records the pass or fail result of each test case. I want to retrieve the number of failed test cases in the Excel sheet, which I can do with VBScript. First I have to create an object to retrieve the data from the Excel sheet; I create the objects dynamically with CreateObject and then retrieve the data in this way:

Step 1. Set Mycom = CreateObject("ADODB.Connection")
Step 2. Mycom.Open "Driver={Microsoft Excel Driver (*.xls)};DBQ=<path to the Excel file>;ReadOnly=True"
' First I create a connection between the tool and the Excel sheet, supplying the path of the Excel file in the DBQ attribute

Step 3. Set Myrs = CreateObject("ADODB.Recordset")
Step 4. Myrs.Open "select count(*) from [Test Result$] where status = 'Fail'", Mycom
' The workbook might have many tabs, so I specify the particular tab named Test Result; a worksheet is referenced as [SheetName$] in the query. Be careful to match the sheet and column names exactly, because any mismatch will make the script fail.

Step 5. MsgBox Myrs(0)
' Release the objects

Step 6. Set Myrs = Nothing
Step 7. Set Mycom = Nothing

When we run the script, the number of failed test cases in the Test Result tab is displayed in a message box. If we count manually and there are 8 failed test cases, the result displayed in the MsgBox will be 8.
Similarly, if I want to retrieve the Test Case ID, Test Case Description, and Comments columns for each failed test case in the Excel sheet, I can write the script this way:

Step 1. Set Mycom = CreateObject("ADODB.Connection")
Step 2. Mycom.Open "Driver={Microsoft Excel Driver (*.xls)};DBQ=<path to the Excel file>;ReadOnly=True"
Step 3. Set Myrs = CreateObject("ADODB.Recordset")
Step 4. Myrs.Open "select [Testcase_ID], [Test case Description], [Comments] from [Test Result$] where status = 'Fail'", Mycom
' I can refer to each column either by its name or by its ordinal position (0, 1, 2, and so on). I use a Do While loop here to read the data until the end of the recordset.

Step 5. Do While Not Myrs.EOF   ' loop until the end of the recordset (EOF)
Step 6. MsgBox Myrs("Testcase_ID")
Step 7. MsgBox Myrs("Test case Description")
Step 8. MsgBox Myrs("Comments")
Myrs.MoveNext
Loop
' Release the objects

Step 9. Set Myrs = Nothing
Step 10. Set Mycom = Nothing

If we want to concatenate the data into a single message instead of retrieving each column in its own MsgBox, we can use the ampersand operator and display them in one go:

Step 6 (replacing Steps 6-8). MsgBox Myrs("Test case Description") & " : " & Myrs("Testcase_ID") & " : " & Myrs("Comments")

When we run this version, the test case ID, description, and comments are displayed together in a message box, and we click OK to close it and see the next failed row.
If we are not comfortable clicking OK every time, we can use the Print statement in place of MsgBox, so that the data for all the failed rows in the Test Result sheet is displayed as a list in QTP's Print Log, as sketched below.
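A minimal sketch of the Print variant inside the same loop (same assumptions about sheet and column names as above):

Step 6. Print Myrs("Test case Description") & " : " & Myrs("Testcase_ID") & " : " & Myrs("Comments")
' Print writes each row to QTP's Print Log without pausing for a click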
So this is how you write a QTP script to extract data from an Excel sheet.

Tuesday, 27 October 2015

Evaluating Exit Criteria and Reporting, Test Closure Activities

Evaluating exit criteria 
It is an activity where test execution is assessed against the defined objectives. This should be done for each test level, since for each level we need to know whether enough testing has been done.
Based on risk assessment, exit criteria are set as the measure of "enough testing". That means if the assessed risk has been reduced and the testing done is sufficient, that test item meets the exit criteria. These criteria vary from project to project and tell us whether a given testing activity or level can be declared complete.

During this stage the following major tasks are identified:

  • Checking test logs against the exit criteria specified in test planning
  • Assessing if more tests are needed or if the exit criteria specified should be changed
  • Writing test summary reports for stakeholders.

Exit criteria may be a mix of:
  • Completion criteria (which tell us what must be included to consider testing complete; for example, "the driving test must include an emergency stop" or "the software test must include a response measurement")
  • Acceptance criteria (which tell us how we know whether the software has passed or failed overall; for example, "only pass the driver if they have completed the emergency stop correctly" or "only pass the software for release if it meets the priority 1 requirements list")
  • Process exit criteria (which tell us whether all the tasks that needed to be done have been completed; for example, "the examiner/tester has not finished until they have written and filed the end-of-test report").

Exit criteria should be set and evaluated for each test level:
  • Completion or exit criteria are used to determine whether testing at any stage is complete.
  • These criteria may be defined in terms of cost, time, faults found, coverage criteria, and so on.
  • Coverage criteria are defined in terms of items exercised by test suites, such as branches, user requirements, and the most frequently used transactions.

Exit Criteria Examples:
Exit criteria are considered satisfied when conditions such as the following are met (a small sketch of combining such thresholds follows the list):
  • 100%  of statement coverage
  • 100% of requirement coverage
  • If all screens / dialogue boxes / error messages are seen
  • 100% of test cases have been run
  • 100% of high severity faults are fixed
  • 80% of low & medium severity faults are fixed
  • maximum of 50 known faults remain
  • maximum of 10 high severity faults predicted
  • time has run out
  • testing budget is used up
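As a minimal sketch (with hypothetical numbers) of how such thresholds might be combined into a single go/no-go check:

' Hypothetical exit-criteria check
executedCases = 200 : totalCases = 200
openHighSeverity = 0
fixedLowMed = 42 : totalLowMed = 50
If executedCases = totalCases And openHighSeverity = 0 And (fixedLowMed / totalLowMed) >= 0.8 Then
    MsgBox "Exit criteria met - testing can be declared complete"
Else
    MsgBox "Exit criteria not met - more testing or triage is needed"
End If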

Test Summary Report:
The purpose of the test summary report is to summarize the results of the testing activities and to provide an evaluation based on those results. According to IEEE Std 829, the test summary report consists of the following contents:
  • Test summary report Identifier
  • Summary
  • Variances
  • Summary of results: 1. Resolved incidents  2. Unresolved incidents
  • Evaluation
  • Recommendations
  • Summary of Activities
  • Approvals

Test Closure Activity:
This is the last stage of the STLC. During this activity, we collect the data from the completed test activities and consolidate it into one format, including checking and filing the testware and analyzing the facts and figures. The major activities involved in test closure are as follows:
  • Finalizing and archiving the testware, the test environment, and the test infrastructure used in the test activity, for later reuse
  • Handing over the testware to the maintenance organization, which hands it over to the actual client
  • Analyzing lessons learned for future releases and projects and for improving test maturity.




Monday, 26 October 2015

Test Log Document and Test Incident reports

Test Log Document:
According to IEEE 829-1998, the test log document is a chronological record of relevant details about the execution of test cases. The purpose of the test log is to share information among testers, users, developers, and others to facilitate the replication of a situation encountered during testing. It is simply a document used by testers, developers, and users to share and refer to details about the execution of test cases.
The contents of the test log are:
  • Test log Identifier
  • Description
  • Activity and Event entries

Test Incident report: 
An incident report provides a formal mechanism for recording software incidents, defects, and enhancements, and tracking their status.
Now, what are these incidents?

Incident: While executing a test, we might observe that the actual results vary from the expected results. When the actual result is different from the expected result, it is called an incident, bug, defect, problem, or issue. To be specific, we sometimes draw a distinction between incidents and defects or bugs: an incident is basically any situation where the system exhibits questionable behavior, but very often we refer to an incident as a defect only when the root cause is some problem in the item we are testing. Other causes of incidents include misconfiguration or failure of the test environment, corrupted test data, bad tests, invalid expected results, and tester mistakes.
Test Incident Report Contents:
The test incident report has a template, and all incidents are reported systematically in this format:
  • Incident Summary report Identifier
  • Incident Summary
  • Incident Description
Incident Summary Report Identifier:
This is a unique, company-generated number that identifies the incident report, its level, and the level of software the incident report relates to. The number may also identify the level of testing at which the incident occurred. This assists in coordinating software and testware versions within configuration management, and in eliminating incidents through process improvement.
Incident Summary:
This is a summarization or description of the actual incident. Provide enough detail to enable others to understand how the incident was discovered, along with any relevant supporting information, such as references to:
  • The test procedure used to discover the incident
  • The test case specifications that provide the information needed to repeat the incident
  • Test logs showing the actual execution of the test cases and procedures
  • Any other supporting materials: trace logs, memory dumps/maps, etc.

Incident Description:
The incident description provides as much detail on the incident as possible, especially if there are no other references that describe it. It includes all the relevant information that has not been included in the incident summary, plus any additional supporting information, with details like:
  • Inputs
  • Expected results
  • Actual results
  • Date and Time
  • Procedure step
  • Environment
  • Observers
  • Impact
  • Investigation
  • Metrics

Impact: It describes the actual or potential damage caused by the incident. This can include the severity of the incident, the priority of fixing it, or both. Severity and priority need to be assessed separately:
Severity:  
The potential impact on the system; it indicates the impact of the defect on the business.
  • Blocker or Show Stopper: The application will not function or the system fails. Testers cannot proceed until this is rectified; there is no possible workaround.
  • Critical: A severe problem, but a workaround is possible. Testing can continue, but the defect has to be rectified and the workaround is quite difficult.
  • Major: It does not break the core functionality or usability of the product, but it has to be fixed before release to market; working around it is easier than for a critical defect.
  • Minor: The functionality is nice to have in production, but the defect does not need to be fixed prior to release.

Priority: The order in which incidents are to be addressed.
  • High: Must be fixed as soon as possible
  • Medium: The system is usable, but the incident must be fixed prior to the next level of test or shipment
  • Low: The defect can be left in if necessary due to time or cost constraints


Tuesday, 20 October 2015

Test Implementation

After test analysis and design, test engineers mainly concentrate on preparing test cases. Test case preparation is a deep thinking process and is done either manually or with automation. If we write test cases manually, we write them in Word (.doc) or Excel, whereas in automation we take the help of Quality Center.

Test cases:
According to the IEEE 829 standard, test cases are documentation specifying inputs, predicted results, and a set of execution conditions for a test item.
"A test case is defined as a set of test inputs, execution conditions, and expected results developed in order to compare the expected results with the actual results, such as to exercise a particular path or to verify compliance with a specific requirement."

Inputs taken for preparing Test cases:
There are certain inputs that testers have to concentrate on in order to prepare test cases effectively. They are:
Business Requirements / Functional Specifications: Unless we are aware of the requirements and basic functionality of the software, we cannot prepare an effective test case document.
Prototype: The prototype helps in knowing which items are used to build the application.
Test Scenarios: Scenarios simply summarize the functionality of the application and describe what to do.

Need of Test cases:
  • Test cases identify and communicate the conditions that will be implemented in tests.
  • It is necessary to verify expected results against actual results to know the completeness and correctness of the application, and test cases make this comparison easier.
  • Test cases help in finding problems in the requirements or design of an application.
  • Test cases help us determine whether we meet the client's expectations.

Imagine what would happen if we didn't write test cases: there would be no process adherence, the deliverables we submit to the client would be poor, execution time would increase, we would depend on the module owner for testing, and there would be no requirement coverage tracking; we couldn't track how many requirements are covered and how many are left. So test cases are of the utmost importance in overcoming these problems.

Test case Attributes: The quality of a test case is determined by the following attributes:
Accurate: The test case tests exactly what its description says.
Economical: The test case has only the steps or fields needed for its purpose.
Repeatable: A test case is a controlled experiment; it should produce the same results no matter who runs it. If only the writer can run it and get the result, or if the test gets different results on different runs, it needs more work on its setup or actions.
Appropriate: A test case has to be appropriate for the testers and the environment they use. If it is theoretically sound but requires skills that none of the testers have, it will sit on the shelf.
Traceable: A test case must be traceable to requirements. It may meet all the other standards, but if we cannot tell which requirement its pass or fail result applies to, the result doesn't matter.
Self-cleaning: It picks up after itself, returning the test environment to the pre-test state.

Guidelines to write Test cases:
Be careful with your wording
Make steps easy to follow and understand
Be descriptive where necessary
Have a clear set of expected results
Write your test cases so that any tester can understand them when reading through.

Test case types: Generally, testers follow two different designs to prepare test cases effectively:
User interface test case design
Functional and system-based test case design
User interface test case design: To prepare test cases for usability testing, testers depend on this method. In this method, testers identify the interests of customer-side people and the user interface conventions prevailing in the market.

Functional and system-based test case design: After preparing the user interface test case design, testers concentrate on functional and system-based test case design. In this method, testers prepare test cases to the IEEE 829 standard, which we are going to discuss in the next blog post.

Monday, 19 October 2015

Test Analysis and Design

Test analysis and design is the activity where general testing objectives are transformed into tangible test conditions and test designs. In this phase, the general test objectives identified during planning are used to build test designs and test procedures (also known as scripts). Test analysis and design follows a few steps that help in designing the test procedures:
  • Step 1: Reviewing the test basis
  • Step 2: Evaluating the testability
  • Step 3: Identifying the test conditions
  • Step 4: Designing the test cases
  • Step 5: Set up Test data
  • Step 6: Set up Test environment
  • Step 7: Create traceability matrix

Step 1- Reviewing the Test basis: 
The test basis is the criterion for verification of the requirements. A review meeting is conducted to assess the following factors:
  • Traceability
  • Unambiguous
  • Consistency
  • Correctness
  • Completeness
  • Necessity
  • Testability
  • Feasibility
  • Prioritization

Step 2- Evaluating the testability: 
Testability is the degree to which a software artifact facilitates testing in a given test context. Ensure that the "expected result" can be derived from the requirement and can be verified programmatically or visually. If the testability of the software artifact is high, then finding faults in the system (if it has any) by means of testing is easier.

Step 3- Identifying the Test Conditions: 
A test condition is defined as an item or event that could be verified by one or more test cases. Generally, test conditions are prepared in a template called the Test Condition Expected Result (TCER) template, for example:
Test Case Id | Test Condition Description | Expected Result | Requirement
Sc1 | Successful login | It should redirect to the flight reservation window | R1

In this way we create a test condition template. Before that, we derive test scenarios from the business requirement specifications, derive test conditions according to those scenarios, and then write multiple test cases for each and every test condition, as in the following example:
Requirement (B.R.S): Secured login
Test Scenario (test specification): Login functionality
Test Conditions (TCER):
  • Successful login with valid data
  • Unsuccessful login with invalid data
Test Cases (test case document):
  • Verify login with valid user id and valid password
  • Verify login with invalid user id and valid password
  • Verify login with valid user id and invalid password
  • Verify login without user id and only valid password
  • Verify login with valid user id and no password
  • Verify login with no user id and no password
  • Verify login with invalid user id and invalid password

Test Scenario: It summarizes the functionality of the application without providing the steps to follow; it is high-level documentation that describes what to do.

Step 4- Design Test cases: 
Test case design is a deep thinking process, and in order to think well we need to use a "test design technique".
Technique: A technique is a way of deriving good test cases and a way of objectively measuring a test effort. Good techniques embody best practices: they are successful at finding faults, are based on a structural or functional model of the software, save time because different people have a similar probability of finding the same defects, and make testing both effective (finding more faults) and efficient (finding faults with less effort).
Test cases: A test case is defined as a set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular path or to verify compliance with a specific requirement. In other words, a test case is documentation specifying inputs, predicted results, and a set of execution conditions for a test item.

Step 5- Set up Test data: 
Testers prepare test data with the help of test design techniques; this data is used when testing the application.

Step-6 Set up Test environment: 
In this phase, testers set up a test environment, identifying any required infrastructure and tools.
Note: The developers' and testers' environments should not be the same, but the testers' environment should correspond to the customer's environment.

Step 7- Creating a Traceability matrix:
Requirement tracing is the process of deriving the links between the user requirements for the system and the work products that have been developed to implement and verify those requirements.

Requirement Traceability: