HowToGetSoftwareJob


Tuesday, 20 March 2012

How to Answer Interview Questions for a Software Testing Job



You have an interview at a software company and you want to make sure you answer the questions properly. Plan ahead and prepare for the interview by knowing ahead of time what the interviewer might cover. Follow these tips to answer interview questions for a software testing job.

Instructions

    • 1 Know all about the company. The interviewer will ask you questions about the business as well as software questions, such as how your skills will help the business, so do your research and wow the interviewer with your knowledge of the company.
    • 2 Talk about your education in software testing. Give examples of software testing you have conducted in past jobs or in school. You want the employer to understand that you know how to test software.
    • 3 Use open-ended questions as opportunities to explain your qualifications and tell the interviewer about yourself. Give examples of each skill you have.
    • 4 Show your personal qualities, such as your willingness to work overtime. Explain that you are a team player who can also work independently, and that you are a problem solver with great communication skills.
    • 5 Identify which software testing procedures you are familiar with. Describe any experience you have with test plans or test cases.
    • 6 Explain your knowledge of and experience with automated and manual testing of software packages. Include the testing tools you used. List your knowledge of software development and programming languages such as Java, Visual Basic, or C++.
    • 7 Describe your willingness to learn new things if you don't know a software procedure, program, or term used by the interviewer. Remind the interviewer of all your current knowledge and skills, including your ability and willingness to learn more.

Good QA interview questions from Cem Kaner's article "Recruiting Testers," December 1999

On this page you can find more than 400 different QA interview questions from different resources. Some software tester interview questions are very simple; some are a little more difficult. If you would like to check your technical knowledge, or to see more interview questions and answers, you can try the online quiz on this website.
You can try our online QA interview questions test too. There are links from this page to SQL, Linux, web, QTP, QC, and programming interview questions and answers.
Please do not send me e-mails asking for the answers to these interview questions.
This page is updated on a quarterly basis.

1. What is software quality assurance?
2. What is the value of a testing group? How do you justify your work and budget?
3. What is the role of the test group vis-à-vis documentation, tech support, and so forth?
4. How much interaction with users should testers have, and why?
5. How should you learn about problems discovered in the field, and what should you learn from those problems?
6. What are the roles of glass-box and black-box testing tools?
7. What issues come up in test automation, and how do you manage them?
8. What development model should programmers and the test group use?
9. How do you get programmers to build testability support into their code?
10. What is the role of a bug tracking system?
11. What are the key challenges of software testing?
12. Have you ever completely tested any part of a product? How?
13. Have you done exploratory or specification-driven testing?
14. Should every business test its software the same way?
15. Discuss the economics of automation and the role of metrics in testing.
16. Describe components of a typical test plan, such as tools for interactive products and for database products, as well as cause-and-effect graphs and data-flow diagrams.
17. When have you had to focus on data integrity?
18. What are some of the typical bugs you encountered in your last assignment?
19. How do you prioritize testing tasks within a project?
20. How do you develop a test plan and schedule? Describe bottom-up and top-down approaches.
21. When should you begin test planning?
22. When should you begin testing?
23. Do you know of metrics that help you estimate the size of the testing effort?
24. How do you scope out the size of the testing effort?
25. How many hours a week should a tester work?
26. How should your staff be managed? How about your overtime?
27. How do you estimate staff requirements?
28. What do you do (with the project tasks) when the schedule fails?
29. How do you handle conflict with programmers?
30. How do you know when the product is tested well enough?
31. What characteristics would you seek in a candidate for test-group manager?
32. What do you think the role of test-group manager should be? Relative to senior management?
Relative to other technical groups in the company? Relative to your staff?
33. How do your characteristics compare to the profile of the ideal manager that you just described?
34. How does your preferred work style work with the ideal test-manager role that you just described? What is different between the way you work and the role you described?
35. Who should you hire in a testing group and why?
36. What is the role of metrics in comparing staff performance in human resources management?
37. How do you estimate staff requirements?
38. What do you do (with the project staff) when the schedule fails?
39. Describe some staff conflicts you’ve handled.
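Question 6 above contrasts glass-box and black-box testing tools. A minimal sketch of the distinction in Python, using the standard unittest module (the function under test, is_leap_year, is an invented example):

```python
import unittest

def is_leap_year(year: int) -> bool:
    # Function under test (a hypothetical example for illustration).
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class BlackBoxTests(unittest.TestCase):
    # Black-box: cases chosen from the specification alone,
    # without reading the implementation.
    def test_typical_years(self):
        self.assertTrue(is_leap_year(2024))
        self.assertFalse(is_leap_year(2023))

class GlassBoxTests(unittest.TestCase):
    # Glass-box: cases chosen after reading the code, to exercise
    # each branch (the century and 400-year conditions).
    def test_century_not_leap(self):
        self.assertFalse(is_leap_year(1900))  # year % 100 == 0 branch
    def test_quadricentennial_leap(self):
        self.assertTrue(is_leap_year(2000))   # year % 400 == 0 branch

if __name__ == "__main__":
    unittest.main(argv=["bvt"], exit=False)
```

Black-box cases are derived from what the software is supposed to do; glass-box cases are derived from how it actually does it.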

Here are some interview questions you might be asked on a job interview for a testing opening (sample from MU COSC 198 Software Testing by Dr. Corliss):
  1. Why did you ever become involved in QA/software testing?
  2. What is the software testing lifecycle and explain each of its phases?
  3. What is the difference between testing and QA (Quality Assurance)?
  4. What is Negative testing?
  5. What was a problem you had in your previous assignment (testing if possible)? How did you resolve it?
  6. What are two of your strengths that you will bring to our QA/testing team?
  7. How would you define (QA) Quality Assurance?
  8. What do you like most about Quality Assurance/Software Testing?
  9. What do you like least about Quality Assurance/Testing?
  10. What is the Waterfall Development Method and do you agree with all the steps?
  11. What is the V-Model Development Method and do you agree with this model?
  12. What is the Capability Maturity Model (CMM)? At what CMM level were the last few companies you worked?
  13. What is a "Good Software Tester"?
  14. Could you tell me two things you did in your previous assignment (QA/Testing related hopefully) that you are proud of?
  15. List 5 words that best describe your strengths.
  16. What are two of your weaknesses?
  17. What methodologies have you used to develop test cases?
  18. In an application currently in production, one module of code is being modified. Is it necessary to re-test the whole application or is it enough to just test functionality associated with that module?
  19. Define each of the following and explain how each relates to the other: Unit, System, and Integration testing.
  20. Define Verification and Validation. Explain the differences between the two.
  21. Explain the differences between White-box, Gray-box, and Black-box testing.
  22. How do you go about going into a new organization? How do you assimilate?
  23. Define the following and explain their usefulness: Change Management, Configuration Management, Version Control, and Defect Tracking.
  24. What is ISO 9000? Have you ever been in an ISO shop?
  25. When are you done testing?
  26. What is the difference between a test strategy and a test plan?
  27. What is ISO 9003? Why is it important
  28. What are ISO standards? Why are they important?
  29. What is IEEE 829? (This standard is important for Software Test Documentation-Why?)
  30. What is IEEE? Why is it important?
  31. Do you support automated testing? Why?
  32. We have a testing assignment that is time-driven. Do you think automated tests are the best solution?
  33. What is your experience with change control? Our development team has only 10 members. Do you think managing change is such a big deal for us?
  34. Are reusable test cases a big plus of automated testing? Explain why.
  35. Can you build a good audit trail using Compuware's QA Center products? Explain why.
  36. How important is Change Management in today's computing environments?
  37. Do you think tools are required for managing change? Explain, and please list some tools/practices which can help you manage change.
  38. We believe in ad-hoc software processes for projects. Do you agree with this? Please explain your answer.
  39. When is a good time for system testing?
  40. Are regression tests required or do you feel there is a better use for resources?
  41. Our software designers use UML for modeling applications. Based on their use cases, we would like to plan a test strategy. Do you agree with this approach, or would it mean more effort for the testers?
  42. Tell me about a difficult time you had at work and how you worked through it.
  43. Give me an example of something you tried at work but did not work out so you had to go at things another way.
  44. How can one file-compare future-dated output files from a program which has changed against the baseline run which used the current date for input? The client does not want to mask dates on the output files to allow compares. Answer: Rerun the baseline with input files future-dated by the same number of days as the future-dated run of the changed program. Then run a file compare of the baseline's future-dated output against the changed program's future-dated output.
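The answer to question 44 boils down to a byte-for-byte compare of the two future-dated outputs, which Python's standard filecmp module can do. A minimal sketch (the file names and contents are invented for illustration):

```python
import filecmp

# Stand-ins for the two runs: the re-run baseline and the changed program,
# both fed input files dated the same number of days into the future.
# File names and contents are hypothetical.
with open("baseline_futuredated.out", "w") as f:
    f.write("ACCT 1001 DUE 2012-04-20\n")
with open("changed_futuredated.out", "w") as f:
    f.write("ACCT 1001 DUE 2012-04-20\n")

# shallow=False forces a content comparison rather than a stat() check.
match = filecmp.cmp("baseline_futuredated.out",
                    "changed_futuredated.out", shallow=False)
print("PASS" if match else "FAIL")  # prints PASS
```

Because both runs shifted their input dates by the same offset, any difference in the outputs is attributable to the code change rather than to the dates.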

Questions to ask during an interview

  1. What is the structure of the company?
  2. Who is going to conduct the interview? (Get possible background information on the interviewer.)
  3. What kinds of assignments might I expect in the first six months of the job?
  4. What do you like best about your company?
  5. What is the employer's environment (platforms, tools, etc.)?
  6. What are the employer's methods and processes used in software arena?
  7. What is the employer's philosophy?
  8. What is the project I am interviewing for all about? (Get as much information as possible.)
  9. What terminology does the company use?
  10. What is the structure of the software testing team?
  11. What are the responsibilities of the software testing team members?
  12. How many computers are in the software testing lab?
  13. What kinds of software testing tools are installed in the software testing lab?
  14. What is the ratio between software developers and testers in the company?
  15. What development and QA methodologies is the company using?
  16. What are your growth projections for next year?
  17. Have you cut your staff in the last three years?
  18. Is this a new position or am I replacing someone?

71 basic SQA / testing interview questions

  1. What is the difference between project and product testing? What differences have you observed while testing client/server applications and web server applications?
  2. What are the differences between interface and integration testing? Are system specification and functional specification the same? What are the differences between system and functional testing?
  3. What is Multi Unit testing?
  4. What are the different types, methodologies, approaches, and methods in software testing?
  5. What is the difference between test techniques and test methodology?

  6. What is meant by test environment,… what is meant by DB installing and configuring and deploying skills?
  7. What is a logsheet? And what are the components in it?
  8. What is Red Box testing? What is Yellow Box testing? What is Grey Box testing?
  9. What is business process in software testing?
  10. What is the difference between Desktop application testing and Web testing?
  11. Find the value of each of the letters: N O O N + S O O N + M O O N = J U N E
  12. With multiple testers, how does one know which test cases are assigned to them? (Folder structure, test process.)
  13. What kind of things does one need to know before starting an automation project?
  14. What is the difference between a Test Plan, a Test Strategy, a Test Scenario, and a Test Case? What is their order of succession in the STLC?
  15. How many functional testing tools are available? What is the easiest scripting language used?
  16. Which phase is called the Blackout or Quiet Phase in the SDLC?
  17. When an application is given for testing, what initial testing is performed first, and when are the different types of testing done following it?
  18. What is the difference between a test plan and a use case?
  19. In an application, if I press the delete button it should give the error message "Are you sure you want to delete?" but the application gives the message "Are you sure". Is it a bug? And if it is, how would you rate its severity?
  20. Who are the three stakeholders in testing?
  21. What is meant by bucket testing?
  22. What is test case analysis?
  23. The recruiter asked if I have Experience in Pathways. What is this?
  24. What is the difference between GUI testing and black box testing
  25. What are the main things we have to keep in mind while writing test cases? Explain with a format, giving an example.
  26. How can we write functional and integration test cases? Explain with a format, giving examples.
  27. Explain the waterfall model and V-model of the software development life cycle with block diagrams.
  28. For a notepad application, can anyone write the functional and system test cases?
  29. Can you give me the exact answer for Test Bug?
  30. What is the difference between Use Case and test case?
  31. What is InstallShield in testing?
  32. What is one key element of the test case?
  33. What are the management tools we have in testing?
  34. Can we write Functional test case based on only BRD or only Use case?
  35. What is the main difference between smoke and sanity testing? When are these performed?
  36. What Technical Environments have you worked with?
  37. Have you ever converted Test Scenarios into Test Cases?
  38. What is the ONE key element of ‘test case’?
  39. What is the ONE key element of a Test Plan?
  40. What is SQA testing? Tell us the steps of SQA testing.
  41. How do you promote the concept of phase containment and defect prevention?
  42. Which methodology do you follow for your test cases?
  43. What are the test cases prepared by the testing team?
  44. During the start of a project, how will the company come to a conclusion about whether a tool is required for testing or not?
  45. Define the bug life cycle. What are metrics?
  46. What is a Test procedure?
  47. What is the difference between SYSTEM testing and END-TO-END testing?
  48. What is a Traceability Matrix? Is there any interchangeable term for it? Are a Traceability Matrix and a Test Matrix the same or different?
  49. What is the difference between an exception and an error?
  50. Correct bug tracking process - Reporting, Re-testing, Debugging, …..?
  51. What is the difference between bug and defect?
  52. How much time is/should be allocated for testing out of total Development time based on industry standards?
  53. What are test bugs?
  54. Define quality - bug-free, functionality working, or both?
  55. What is the purpose of software testing - bug removal, verifying the system's functionality works, quality, or all of these?
  56. What is the major difference between Web services & client server environment?
  57. Is there any tool to calculate how much time should be allocated for testing out of total development?
  58. What is Scalability testing? Which tool is used?
  59. Define Reliability?
  60. When is it best to solve defects - in the requirements, planning, design, or coding/testing phase?
  61. From the requirements phase to the testing phase, does the cost of solving a bug increase slowly, decrease, increase steeply, or remain constant?
  62. What is scalability testing? What are the phases of scalability testing?
  63. What is the difference between end-to-end testing and system testing?
  64. What kind of things does one need to know before starting an automation project?
  65. Have you worked with data pools and what is your opinion on them? Give me an example as to how a script would handle the data pool.
  66. What is the difference between a Test Plan, a Test Strategy, a Test Scenario, and a Test Case? What is their order of succession in the STLC?
  67. How many functional testing tools are available? What is the easiest scripting language used?
  68. If we find a bug in the SRS or FRS, how do we categorize that bug?
  69. What is the difference between end-to-end testing and system testing?
  70. What is the difference between a defect and an enhancement?
  71. The project is completed - completed meaning that UAT testing is underway. In that situation, what will you do as a tester?
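Question 48 asks about the traceability matrix. In its simplest form it is just a mapping from requirements to the test cases that cover them; a toy sketch (all requirement and test-case IDs are invented):

```python
# A traceability matrix maps each requirement to the test cases covering it.
# All IDs here are invented for illustration.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # no coverage yet
}

# Requirements with no linked test case are coverage gaps.
uncovered = [req for req, cases in traceability.items() if not cases]
print("Uncovered requirements:", uncovered)
# prints: Uncovered requirements: ['REQ-003']
```

Read one way, the matrix shows which requirements lack tests; read the other way (test case to requirements), it shows which requirements a failing test puts at risk.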

If you are a recruiter or employer, select interview questions to assess software testers or QA candidates in all of the following areas:


  • Motivation
  • Problem solving
  • Decision making
  • Technical knowledge
  • Time management
  • Multitasking
  • Leadership
  • Sincerity

    General interview questions for Software testers and QA professionals:
    1. What types of documents would you need for QA, QC, and Software Testing?
    2. What did you include in a test plan?
    3. Describe any bug you remember.
    4. What is the purpose of the software testing?
    5. What do you like (not like) in this job?
    6. What is QA (quality assurance)?
    7. What is the difference between QA and software testing?
    8. How do you scope, organize, and execute a test project?
    9. Why did you choose to be a software tester?
    10. What is the role of QA in a development project?
    11. What sort of manager would you like to work with?
    12. What is the role of QA in a company that produces software?
    13. Define quality for me as you understand it
    14. Describe to me the difference between validation and verification.
    15. Describe to me what you see as a process. Not a particular process, just the basics of having a process.
    16. Describe to me when you would consider employing a failure mode and effect analysis.
    17. Describe to me the Software Development Life Cycle as you would define it.
    18. What are the properties of a good requirement?
    19. What do you want to be doing in your career in five years?
    20. How do you differentiate the roles of Quality Assurance Manager and Project Manager?
    21. Tell me about any quality efforts you have overseen or implemented. Describe some of the challenges you faced and how you overcame them.
    22. How do you deal with environments that are hostile to quality change efforts?
    23. In general, how do you see automation fitting into the overall process of testing?
    24. How do you promote the concept of phase containment and defect prevention?
    25. If you come onboard, give me a general idea of what your first overall tasks will be as far as starting a quality effort.
    26. What kinds of software testing have you done?
    27. What is the best work you have performed?
    28. What metrics do you think determine a tester's progress in an organisation?
    29. Have you ever created a test plan?
    30. Describe an ideal software testing environment.
    31. Have you ever written test cases, or did you just execute those written by others?
    32. What did you base your test cases on?
    33. Suppose a software has three inputs, each having a defined valid range. How many test cases will you need to test all the boundary values?
    34. How do you determine what to test?
    35. How do you decide when you have 'tested enough?'
    36. How do you test if you have minimal or no documentation about the product?
    37. Describe the basic elements you put in a defect report.
    38. Why would you like to work on this project?
    39. How do you perform regression testing of software?
    40. What knowledge do you think is important to be successful in software testing?
    41. At what stage of the life cycle does testing begin in your opinion?
    42. How do you analyze your test results? What metrics do you try to provide?
    43. Realising you won't be able to test everything - how do you decide what to test first?
    44. Where do you get your expected results?
    45. If automating - what is your process for determining what to automate and in what order?
    46. In the past, I have been asked to verbally start mapping out a test plan for a common situation, such as an ATM. The interviewer might say, "Just thinking out loud, if you were tasked to test an ATM, what items might you test plan include?" These type questions are not meant to be answered conclusively, but it is a good way for the interviewer to see how you approach the task.
    47. If you're given a program that will average student grades, what kinds of inputs would you use?
    48. Tell me about the best bug you ever found.
    49. How should the testing of an Excel spreadsheet and an aircraft application differ?
    50. What made you pick software testing over another career?
    51. What is the exact difference between Integration & System testing, give me examples with your project.
    52. How did you go about software testing a project?
    53. When should software testing start in a project? Why?
    54. What are your most valuable lessons learned from the last software testing project you were involved in?
    55. How do you go about testing a web application?
    56. Difference between Black & White box software testing
    57. What is Configuration management? Tools used?
    58. How do you apply the configuration management (CM) process in software testing?
    59. What do you plan to become after, say, 2-5 years (e.g., QA Manager)? Why?
    60. Would you like to work in a team or alone, why?
    61. Tell me how you handled a difficult situation with a coworker.
    62. Give me 5 strong and weak points of yours.
    63. Why do you want to join our company?
    64. What was your background before testing?
    65. Why did you get involved in testing?
    66. When should software testing be stopped?
    67. How do you know when you've finished and can stop testing? How much is enough?
    68. Tell me about how you had to meet a very tight deadline.
    69. How to motivate a software tester?
    70. What sort of things would you put down in a bug report?
    71. Why is it important to assign both severity and priority levels to a defect?
    72. Who in the company is responsible for Quality?
    73. Who defines quality?
    74. What is an equivalence class?
    75. Is "a fast database retrieval rate" a testable requirement?
    76. Should we test every possible combination/scenario for a program?
    77. What criteria do you use when determining when to automate a test or leave it manual?
    78. When do you start developing your automation tests?
    79. Discuss what test metrics you feel are important to publish in an organization.
    80. Describe the role that QA plays in the software lifecycle.
    81. What should Development require of QA?
    82. What should QA require of Development?
    83. How would you define a "bug?"
    84. Give me an example of the best and worst experiences you've had with QA.
    85. How does unit testing play a role in the development / software lifecycle?
    86. Explain some techniques for developing software components with respect to testability.
    87. Describe a past experience with implementing a test harness in the development of software.
    88. Have you ever worked with QA in developing test tools? Explain the participation Development should have with QA in leveraging such test tools for QA use.
    89. Give me some examples of how you have participated in Integration Testing.
    90. How would you describe the involvement you have had with the bug-fix cycle between Development and QA?
    91. What is unit testing?
    92. Describe your personal software development process.
    93. Describe your solution to one of the most difficult testing problems you have faced.
    94. How do you know when your code has met specifications?
    95. How do you know your code has met specifications when there are no specifications?
    96. Describe your experiences with code analyzers.
    97. How do you feel about cyclomatic complexity?
    98. Who should test your code?
    99. How do you survive chaos?
    100. What processes/methodologies are you familiar with?
    101. What type of documents would you need for QA/QC/Software Testing?
    102. How can you use technology to solve a problem?
    103. What type of metrics would you use?
    104. How do you determine whether tools work well with your existing system?
    105. What automated tools are you familiar with?
    106. How well do you work with a team?
    107. How would you ensure 100% coverage during software testing?
    108. How would you build a test team?
    109. What problems have you had, now or in the past? How did you solve them?
    110. What will you do during your first day on the job?
    111. What would you like to do five years from now?
    112. Tell me about the worst boss you've ever had.
    113. Give some specific examples of creative solutions in your last testing project.
    114. What are your greatest weaknesses?
    115. What are your strengths?
    116. What is a successful product?
    117. What do you like about Windows?
    118. What is good code?
    119. How do you deal with criticism?
    120. Who is Kent Beck, Dr Grace Hopper, Dennis Ritchie?
    121. What are basic, core practices for a QA specialist?
    122. What do you like about QA?
    123. What has not worked well in your previous QA experience and what would you change?
    124. How does a programmer's testing differ from a QA department member's testing?
    125. How will you begin to improve the QA process?
    126. What is the difference between QA and QC?
    127. What is UML and how to use it for software testing?
    128. What is CMM and CMMI? What is the difference?
    129. What do you like about computers?
    130. Do you have a favourite QA book? More than one? Which ones? And why.
    131. Can you briefly explain the benefits you enjoyed as a result of reading this book?
    132. What is the responsibility of programmers vs QA?
    133. What are the properties of a good requirement?
    134. How do you test if you have minimal or no documentation about the product?
    135. What are all the basic elements in a defect report?
    136. Is "a fast database retrieval rate" a testable requirement?
    137. Why should you care about objects and object-oriented testing?
    138. What does 100% statement coverage mean?
    139. How do you perform configuration management with typical revision control systems?
    140. What is code coverage?
    141. What types of code coverage do you know?
    142. What tools can be used for code coverage analysis?
    143. Is any graph used for code coverage analysis?
    144. At what stage of the development cycle are software errors least costly to correct?
    145. What can you tell about the project if during testing you found 80 bugs in it?
    146. How to monitor test progress?
    147. Describe a few reasons that a bug might not be fixed.
    148. What are the possible states of a software bug's life cycle?
    149. What books about QA (software testing) did you read?
    150. What type of testing is based specifically on the program code?
    151. What type of testing is based on any document that describes the "structure of the software"?
    152. Please describe test design techniques like: state-transition diagrams, decision tables, activity diagrams.
    153. Describe business process testing and what test design technique would you use for it?
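Question 33 above (three inputs, each with a defined valid range) has a classic answer: boundary-value analysis takes min, min+1, nominal, max-1, and max for each variable while holding the others at nominal, giving 4n + 1 distinct cases for n variables - 13 for three inputs. A sketch that enumerates and counts them (the three ranges are invented):

```python
# Classic boundary-value analysis: for each variable take
# min, min+1, nominal, max-1, max while holding the others at nominal.
# The three ranges are invented for illustration.
ranges = {"a": (1, 100), "b": (0, 50), "c": (10, 20)}

def nominal(lo, hi):
    return (lo + hi) // 2

cases = set()
for var, (lo, hi) in ranges.items():
    for value in (lo, lo + 1, nominal(lo, hi), hi - 1, hi):
        case = {v: nominal(*ranges[v]) for v in ranges}
        case[var] = value
        cases.add(tuple(sorted(case.items())))

# The all-nominal case is generated once per variable but deduplicated,
# so 3 * 5 - 2 = 4n + 1 = 13 distinct cases remain.
print(len(cases))  # prints 13
```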
Software Testing Interview Questions - Part 4

    151. Can anybody explain equivalence class partitioning?

    152. Please explain the concept of regression testing with the help of a simple example.

    153. If there are 100 requirements, how can you verify that all of them are covered by your test cases?

    154. What do you mean by dynamic testing?

    155. What are the software testing deliverables from a software testing company to their client for in-house developed modules? Is there any specific format for white-box manual testing?

    156. What is the difference between a code walkthrough and a code review?

    157. What is TDLC?

    158. In which phase will the last test case for a project be written?


    159. What is the difference between SIT and IST?

    160. What is a Data Guideline?
    161. How is risk analysis done?

    162. What is the difference between retesting and regression testing?

    163. What do you do if the documentation given to you is unclear or not understandable?

    164. Explain the traceability matrix and the test strategy.

    165. What are the functional parameters?

    166. When do we know that testing is complete?

    167. What is a virtual object in WinRunner, and what is it used for?

    168. If there's a problem in the software but no one ever discovers it (not programmers, not testers, and not even a single customer), is it a bug?

    169. What is a show stopper in testing?

    170. What is Build Interval Period?
    171. What is Risk Analysis?

    172. What is meant by software engineering?

    173. How can we do database testing?

    174. How do you do the integration of modules before performing integration testing?

    175. How do you prepare a defect policy report?

    176. Who gives the documentation to the tester?

    177. Tell me about use cases. How do we start actual testing in an organization, and when does the role of the tester start?

    178. You are to run two test scripts. Test script 1: the new version of the software to be released after regression testing. Test script 2: the new version of the entire software. You found a lot of faults and defects with test script 1. What do you do, and why?

    179. What differences can we find while testing a Java application and a .NET application?

    180. Can anybody explain non-functional requirements, with a suitable example?
    181. What is a use case? What is pseudocode?

    182. Please send a model question paper for selection of software functions.

    183. Are test scripts for GUI/navigation testing written for each individual form separately?
    184. What is meant by a build in testing?

    185. What is Interoperability Testing?

    186. What is a test metric?
    187. Discuss real-time system testing.

    188. What is the difference between CMM and TMM?

    189. Who prepares the test policy document?

    190. What is Interoperability Testing?
    191. What are the fields you can see in a test report?

    192. What is the difference b/w Test Methodology and Test strategy?

    193. What is the difference between System Testing & End- to- end Testing?
    194. what is pairwise testing?

    195. If a bug is found that is not replicable all times, does that bug should be reported to developer?

    196. What is open issue?

    197. What is Test Scenario?

    198. Given requirement collection document, tester can prepare which type of test plan?

    199. I would like to know, when you are converting oracle9i reports to crystal reports, what all do you need to test?

    200. 3 module is there like a,b, c i have to inegrate these 3 module but module b is not ready how i ll integrate these module?
    201. WAS(web application stress) tool is used for which type of testing?

    202. What is the meaning of Functional Contract in the context of Requirment specs?
    203. What is the process of complete EDI Testing for both inbound and outbound transaction sets?

    204. What is the difference between a Master Test Plan and a Test Plan? Do we really require two test plans? Please explain in detail.

    205. When an application has a database, name a few checks that need to be carried out for testing the database.

    206. Name a few common bugs in localization testing.

    207. What is meant by a data flow diagram? What is the difference between a data flow diagram and a flow chart?

    208. In Test Director what is the difference between Favourite & Filter?

    209. How do you test for memory leakage manually?

    210. What is cost of time, and who said it?
    211. What is test decomposition?

    212. What do we do when extra functionality, which is not specified in the functional specification, is found?

    213. How do you set up and verify the testing environment?

    214. Why is a software project difficult to monitor and control? Substantiate your answer with specific examples.

    215. In software testing it often happens that when a tester finds a bug in a piece of software, the chances of finding another increase. Why is that so?

    216. How do you prepare a traceability matrix?

    217. Could somebody throw some light on the basic process and principles followed for SAP testing?

    218. What is meant by fault/bug density, and how do you calculate it?


    219. What are shared actions, per-action and shared object repositories in QTP? Please explain with an example.

    220. How will you select reasonable tests to be applied on a project?
    221. What is pilot testing?

    222. What is bi-directional traceability matrix? Give sample format?

    223. What is the difference between the 'V' shape and the 'U' shape?

    224. Where do we do acceptance testing? Is it done only during installation of the software, or can it also be done once all the customer requirements are collected?

    225. What is ERP Testing?

    226. What are the functional parameters of software testing? What are the principles of software testing?

    227. Define the searchability matrix used in application testing.

    228. What is the difference between a bug and a feature? How do you deal with it with R&D?

    229. I am in a project where the system generates various reports, and I have to test all of them. What has to be tested in the reports, i.e., what are the important things to be tested?

    230. What is non-compliance testing?
    231. How do you identify/find bugs in a test script?

    232. What are all the scenarios to test a report?

    233. How to test for Check 21 implementation on specific \

    234. What is the difference between Functional Testing and System Testing? Is there any major difference between these two concepts?

    235. what is latent bug?

    236. How do you test a web link which changes dynamically?

    237. What are the major differences between WinRunner 6.0 and 7.0 (with internal procedure)?

    238. Write test cases for a char * my_itoa(int n) method. What if this function is mission critical? How will you test it? How can you speed up the implementation?

    Software Testing Interview Questions - Part 3

    101. Which test do you perform mostly in your testing process: regression testing or retesting?


    102. What are the skills required for monkey testing?

    103. How can we map test cases to requirements? Please give some examples.

    104. What is Compatibility Testing? And what are the procedures to conduct Compatibility Testing?


    105. What tools are used for manual and automated testing of mainframe products?

    106. A system test that forces the software to fail in a variety of ways and verifies that the software is able to continue execution without interruption is known as: a) Recovery testing, b) Stress testing, or c) both.

    107. What is a baseline document, and how do you write good test cases using a baseline document? What are the contents of a baseline document?

    108. What is business acceptance testing, and what is the difference between UAT and BAT?

    109. What is the difference between test case and use case?

    110. How are load, stress, and performance testing done practically? What templates and measures are noted down in each of these tests?
    111. Explain the difference between Re-testing and Regression testing?

    112. Write the test case format and test cases for a login window where we have to test the username and password.

    113. What is meant by SRS? Explain briefly.

    114. What is the difference between load and stress testing?

    115. What kind of document do you need before going for functional testing?

    116. Is the V-model better than the waterfall model? If so, how?

    117. Explain statement coverage?

    118. What is rush or panic mode?

    119. Write a test case for a program that calculates P = R/I, where R and I are integer inputs and P is a floating-point output.

    120. What is the difference between a web-based application and a client-server application?
    121. What is a Test Readiness Review, and what are the criteria for conducting it?

    122. What is SQT, and why do we use it?

    123. What is the advantage of manual testing over automation testing?

    124. What is the Initial Stage of testing?

    125. What is the Outcome of Integration Testing?

    126. What is the difference between functional testing and system testing?

    127. Suppose the test lead or project manager asks you to execute 300 test cases within 3 days. Because of a serious family problem you have to travel home, and you have only a day and a half, which is not enough to execute all 300 test cases. You know this module well, and the project manager is not ready to give the work to anyone else. What steps would you take?

    128. What are the Minimum requirements to start testing?


    129. What is the relationship between Quality & Testing?

    130. Why do we prepare test conditions, test cases, and test scripts before starting testing?
    131. What are the things you prefer and prepare before starting testing?

    132. Overview of test plan – IEEE standards.

    133. How do software defects change as the software development process progresses? a) increase linearly b) decrease linearly c) increase exponentially d) decrease exponentially

    134. What is meant by defect submission? Give examples.

    135. How can you reduce/eliminate duplicate test cases?

    136. What's the difference between a Tester and a Test Analyst?

    137. How do you calculate the time required to execute test cases?

    138. What are KPAs? What are ISO, SEI CMM, PCMM, and Six Sigma?


    139. 1. How do you estimate test effort? 2. How do you test a server without a client? 3. If there is less time for testing, how do you plan testing of the application? 4. What is ad-hoc testing? 5. What is sysops testing? 6. How do you determine server stability through load/performance testing? 7. What attributes make a good test engineer? 8. If the code changes, how do you handle the dynamic changes? 9. Write a TSL script for addition of 2 numbers. 10. What is the name of the class in WinRunner where you declare an array in a function?

    140. Why is UNIX required for a test engineer? What is the role of a test engineer if he knows UNIX?
    141. What is a testing policy, and how do you decide which one is best?


    142. This question was asked in an interview; if anybody has the answers, please share. 1. Given the requirement collection document, which type of test plan can a tester prepare? 2. In which phase will the last test case for a project be written? 3. Name a few common bugs in localization testing. 4. What are localization and internationalization testing? What is the difference between them? 5. When an application has a database, name a few checks that need to be carried out for testing the database. 6. WAS(web applic

    143. Which testing phases are performed at each SDLC stage?

    144. What is the difference between RTM and TRTM?

    145. What features do you take care of in prototype testing?

    146. What is the concept of Top-Down and Bottom-Up in integration testing?

    147. If the developer is not ready to fix the bug, what do you do? (Question asked to me in an interview.)

    148. What Is Test Bed?

    149. What is meant by fuzz testing?

    150. What is ACID testing?

    Software Testing Interview Questions - Part 2

    51. What is a use case, and what is the difference between a use case and a test case?


    52. What is your process for determining what to automate and in what order?


    53. How will you test a stapler?


    54. There are 2 systems, A and B. A is sending data to B, and B in turn is sending an acknowledgement to A. What are the test scenarios for this? Apart from functional data validations, the interviewer is expecting something more.


    55. In the first step of software testing, i.e., requirements gathering, what are the requirements?

    56. What would be the strategy to fix bugs in an unknown piece of code?

    57. What happens when a software tester makes a wrong decision?


    58. List the contents of a traceability matrix with a sample template. Who is responsible for preparing the traceability matrix?

    59. Explain how you would do mainframe testing.

    60. Do you test your own code? How do you test your code?
    61. What are the different types of load Conditions?


    62. If a customer wants a new feature to be added, how would you go about adding that?

    63. What is the difference between regression testing and retesting?


    64. If a scaled-down environment is given instead of the production environment, how can we certify the product by performance testing?

    65. What is the entry criteria for Automation testing?

    66. What did you do as a team leader?


    67. What is defect leakage?

    68. Did you ever have to deal with someone who doesn't believe in testing? What did you do?

    69. Describe the last project scenario and generate test cases for it?

    70. Which tool can be used to test business component?

    71. What is WSWAS and Link checker?
    72. How will you test a keyboard?

    73. What is the role of the tester in the SDLC (in each phase)?

    74. What is an FPD (Functional Point Description)? When do we prepare an FPD?

    75. You are a tester for a large system. The data model is complex and very large, with many fields, attributes, and interdependent paths between them. What steps would you use to test the system?

    76. What is the difference between Return and treturn?

    77. What important test cases will need to be run to test an installable?

    78. What are case studies/historical data?

    79. What are the steps followed in a Test Strategy and a Test Plan?

    80. How would you test a fast laser printer?
    81. One person reviews a QA test plan prepared by his counterpart and gives him comments. This is a part of: a) QC b) QA c) Both d) None
    82. 1. Automation 2. Manual Testing 3. Database 4. Processes. Which one is best to select and present for an experienced tester?

    83. What is the diff between Volume & Load?

    84. Why do most companies go for manual testing rather than automation testing? When is automation testing useful?


    85. What are Pareto diagrams? What is meant by code checkers? What is static testing? What criteria are used for selecting a test tool? Which among the options below is not a quality factor for usability testing? a) Inconsistency b) Navigation c) Comprehensibility d) Responsiveness


    86. What are the different phases in the SDLC, and what is the test team's part in them?

    87. What is the difference between Product-based Company and Projects-based Company?

    88. What is meant by a test bed, and what is the business reason for using a test bed?

    89. Who actually performs alpha testing? Both developer and customer, or only the customer?
    90. What is Black Box Testing for a web-based application?
    91. What are the processes of game testing? How does game testing differ from other testing methods?
    92. What is the difference between Stress and Load Testing?


    93. From a software testing perspective, what is the difference between continuous data and discrete data?

    94. What are the disadvantages of black-box testing?


    95. Generate test cases for a replace-string method.


    96. What is the difference between web testing and Application testing?


    97. How do you test a telephone? Include possible test cases.

    98. What is diff between Volume & Stress?


    99. Explain white-box testing techniques related to mainframe testing.


    100. Q. A calculator performs these operations: 1*1=2, 2*2=4, 3*3=10, 4*4=16, 5*5=26. Identify the bug and write a bug title and description.

    Software Testing Interview Questions - Part 1

    1. Static testing requires...?

    2. What is a traceability matrix?

    3. In system testing itself we are validating the data, so why do we have to go for validation testing?

    4. How do you test a maintenance project, and what is the difference between a maintenance project and a regular project?

    5. How will you write test cases for a code currently under development?

    6. Priority and Severity Examples

    7. What type of QA testing addresses users with hearing or speech impairments?

    8. How can the efficiency of the tester be measured? Is there any process flow or metric for this?

    9. Patient X had her physical checkup in a doctor's office, and the nurse recorded her weight and height. How would you test an application to cover all aspects of height (cm to feet/inches) and weight (kg to lbs) conversions? What type of testing would you use, and why?

    10. Which of the following is not a coding defect?

    11. What is Optimal Testing? What happens if you exceed that testing?

    12. Statement and decision coverage

    13. Explain steps for doing integration testing? When does it come into picture?

    14. Quality Matrix

    15. Branch coverage

    16. What is an Inconsistent bug?

    17. What are the flaws in water fall model and how to overcome it?

    18. If there are a lot of bugs to be fixed, which one would you resolve first

    19. What are inspections and reviews?

    20. What criteria do you use when determining when to automate a test or to leave it manual?

    21. When do you start developing your automation tests?

    22. How do you write security and cookie test cases for a web application?

    23. What are the drawbacks of ad-hoc testing, and what are ways to overcome them?

    24. Which testing method is used to check the software in abnormal conditions?

    25. How do you deploy a build? What are the required inputs for deploying a build?

    26. What types of scripting techniques for test automation do you know?

    27. Can environmental changes cause errors? E.g., if a client using Oracle 8i for the present process migrates all his data into Oracle 9i, does it affect the functionality of the current screens?

    28. What are the drawbacks of ad-hoc testing? Suggest ways to overcome them. Suppose a tester believes a unit contains a specification defect. Which testing strategy would be best to uncover the defect, and why?

    29. What are the attributes of a good bug report?

    30. The testing technique that requires devising test cases to demonstrate that each program function is operational is called A. gray-box testing B. glass-box testing C. Black-box testing D. white-box testing

    31. What is the difference between V-Model and V-Shaped Model

    32. What is the difference between regression testing and retesting? What is the difference between the V-model and the waterfall model?

    33. How is a process-oriented approach different from a people-oriented approach? List pros and cons of each.

    34. What is SDLC and STLC and the different phases of both?What is the difference between system testing and functional testing?

    35. Realizing you won’t be able to test everything- how do you decide what to test first?

    36. _______contains a set of testing instructions to be run by a human tester

    37. What is meant by Fishbone Chart?

    38. How do you analyze your test results?

    39. What is game testing? How is game testing different from normal testing? What is the role of a game tester?

    40. What is the difference between Bug Reporting and Bug Tracking?

    41. Describe common problems of test automation?

    42. What is bug density?

    43. What method or technique is used to find the minimum number of test cases?

    44. What is the role of Closure phase in Software Development Life Cycle?

    45. What errors are encountered while testing an application manually or using an automated tool like TestDirector or WinRunner?

    46. What is the actual difference between re-testing and regression testing? Briefly explain.

    47. How do we know about the build we are going to test? Where do you see this?

    48. What is system testing and what are the different types of tests you perform in system testing?

    49. 1. What is the bug life cycle? 2. Write test cases for finding the biggest of three given numbers.

    50. How will you test a newly installed elevator? What will be the broad categories of your test cases?


    Interview Questions of Software Testing

    What’s the Software Testing?
    A set of activities conducted with the intent of finding errors in software.
    What’s the Test Plan?
    A high-level document that defines the software testing project.
    What’s the Test Case?
    A set of test inputs, execution conditions, and expected results developed for a particular objective.
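    As a minimal illustration (the add() function and its values are hypothetical, not part of the original text), a single test case pairs inputs with an expected result and a pass/fail check:

```python
# A single test case: inputs, execution, and expected result.
def add(a, b):  # hypothetical unit under test
    return a + b

test_input = (2, 3)        # test inputs
expected = 5               # expected result
actual = add(*test_input)  # execution
assert actual == expected, f"expected {expected}, got {actual}"
```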
    What’s the Test Log?
    A chronological record of all relevant details about the execution of a test.
    What’s the Test Data?
    The actual values used in the test or that are necessary to execute the test.
    What’s the Database testing?
    In database testing, we can check the integrity of database field values.
    What’s the Defect?
    The difference between the functional specification and the actual program behavior.
    What’s the Negative testing?
    Testing designed to try to break the system.
    What’s the Test Bed?
    An environment that contains the hardware, software, simulators, testing tools, and other support elements necessary to conduct a test.
    What’s the Test Condition?
    A set of circumstances that a test invokes.
    What’s the Usability testing?
    Usability testing is for user friendliness.
    What’s the Volume Testing?
    Testing in which the system is subjected to a large volume of data.
    What’s the Black Box testing?
    Black Box testing is not based on any knowledge of internal logic.
    What’s the White Box testing?
    White Box testing is based on knowledge of internal logic.
    What’s the Regression Testing?
    After every update, we should test the system and check the effects on all vulnerable points of the system.
    What’s the System Testing?
    After integration of all modules, checking the correctness of the functional flow of the entire system.
    What’s the performance Testing?
    In performance testing, we check whether the system meets its response-time and throughput requirements under expected load.
    What’s the Defect Tracking?
    Defect tracking is the process of recording defects found in software and following each one through to resolution.
    What’s the Unit Testing?
    Testing of individual components of the software.
    What’s the Test Tool?
    A computer program used in testing a system.
    What’s the Test Driver?
    A program or test tool used to execute tests.
    What’s the End-To-End Testing?
    Testing a complete application environment in a situation that mimics a real world use.
    What’s Coding?
    The generation of source code.
    What’s the Cause Effect Graph?
    A graphical representation of inputs and their output effects, used to design test cases.
    What’s the Test Life Cycle?
    A test life cycle contains seven steps:
    • Plan Test
    • Design Test Case
    • Run Test
    • Analyze Result
    • Document Test Results
    • Preparation of Validation report
    • Regression Testing
    What’s the Validation?
    Validation refers to a set of activities that ensure that the software that has been built is traceable to customer requirements.
    What’s the Verification?
    Verification refers to a set of activities that ensure that the software correctly implements a specific function.
    What’s Ad Hoc Testing?
    A testing where the tester tries to break the software by randomly trying functionality of software.
    What’s Compatibility Testing?
    In compatibility testing, we test that the software is compatible with the other elements of the system.
    What’s the Data Flow Diagram?
    A modeling notation that represents the flow of data through the functional components of a system.
    What’s the Debugging?
    Debugging is the process of finding and fixing the causes of software failures.
    What’s the Positive Testing?
    In positive testing, we test with valid inputs to confirm that the software works as expected.
    Who’s the good software engineer?
    A good software engineer has a “test to break” attitude, an ability to take the point of view of the customer, and a strong desire for quality.
    What’s the Security Testing?
    A testing which confirms that the software can restrict the access of unauthorized personnel.
    What’s the Software Requirement Specification?
    A deliverable that describes all data, functional, and behavioral requirements, and all validation requirements, for the software.
    What’s the Static Testing?
    In static testing, we analyze the source code to expose potential defects without executing it.
    What’s the Accessibility Testing?
    Testing that determines if software will be usable by people with disabilities.
    What’s the Bottom-up Testing?
    An approach to integration testing where the lowest-level components are tested first.
    What’s the Smoke Testing?
    Smoke testing is a cursory examination of all of the basic components of the software to ensure that they work correctly.
    What’s the Boundary Value Analysis?
    Boundary Value Analysis is a test data selection technique in which values are chosen at the maximum, the minimum, just inside and just outside the boundaries, plus typical values and error values. The hope is that if the software works correctly for these values, it will work for all values in between.
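    The technique can be sketched in code. This illustrative helper (not from the original text) picks the boundary-value inputs for a valid integer range:

```python
def boundary_values(minimum, maximum):
    """Boundary Value Analysis: choose inputs at, just inside, and just
    outside the boundaries, plus a typical mid-range value."""
    return [
        minimum - 1,               # just outside the lower boundary (error value)
        minimum,                   # minimum
        minimum + 1,               # just inside the lower boundary
        (minimum + maximum) // 2,  # a typical value
        maximum - 1,               # just inside the upper boundary
        maximum,                   # maximum
        maximum + 1,               # just outside the upper boundary (error value)
    ]

# For a field that accepts 1..100, the candidate test inputs would be:
print(boundary_values(1, 100))  # [0, 1, 2, 50, 99, 100, 101]
```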
    What’s the Top-Down testing?
    An approach to integration testing where the top-level components are tested first.
    What’s the Acceptance Testing?
    A testing conducted to enable a user or customer to determine whether to accept a software project.
    What’s the Functional Testing?
    In functional testing, we test the features or operational behavior of a product to ensure that they correspond to its requirements.
    What’s the Integration Testing?
    In integration testing, we test combined parts of an application to determine whether they work together correctly.
    When is Integration Testing performed?
    It is usually performed after unit and functional testing.
    What’s the Test Scenario?
    It defines a set of test cases or test scripts and the sequences in which they are to be executed.
    What’s the Traceability Matrix?
    A document showing the relationship between test requirements and test cases.
    What’s the Validation Strategies?
    The validation strategies are:
    Unit Testing
    Integration Testing
    System testing
    End to End Testing
    User Acceptance Testing
    Installation Testing
    Beta Testing
    What’s the Verification Strategies?
    The verification strategies are:
    Requirement Reviews
    Design Reviews
    Code Walkthrough
    Code Inspections
    What’s the Code Walkthrough?
    A code walkthrough helps in analyzing the coding techniques and whether the code meets the coding standards.
    What’s the Code Inspections?
    A code inspection is a formal analysis of the program source code to find defects, as defined by the system design specifications.
    What’s the Beta Testing?
    Testing the application after installation at the client site.
    What’s the Installation Testing?
    Testing the computer system during the installation.
    How many types of testing?
    There are two types of testing:
    • Functional – Black Box Testing
    • Structural – White Box Testing
    How many types of approaches are used in Integration Testing?
    There are two approaches used:
    • Bottom-Up
    • Top-Down
    What’s the Alpha Testing?
    Alpha testing is conducted at the developer's site, in a controlled environment, by the end user of the software.

    Software Testing Interview Questions – basic

    Here is a list of commonly asked basic-level software testing interview questions. You should prepare for the different types of testing and the commonly used terms in software testing before attending the interview.
    1. What is the importance of Software Testing?
    2. What are the main tools you have used for Software Testing?
    3. What are the different types of Software Testing?
    4. What is the difference between Black Box and White Box testing?
    5. What is the difference between Manual Testing and Automated Testing?
    6. What is Unit Testing?
    7. What is Integration Testing?
    8. What is acceptance testing?
    9. What is Static testing?
    10. What is System testing?
    11. What is Load Testing?
    12. What is Smoke Testing?
    13. What is Soak Testing?
    14. What is Scalability Testing?
    15. What is Sanity Testing?
    16. What is Ramp Testing?
    17. What is Monkey Testing?
    18. What is Gray Box Testing?
    19. What is Functional Testing?
    20. What is Glass Box Testing?
    21. What is Dynamic Testing?
    22. What is Compatibility Testing?
    23. What is Concurrency Testing?
    24. What is Component Testing?
    25. What is Ad Hoc Testing?
    26. What is Agile Testing?
    27. What are the different phases in Software Testing?
    28. How do you define defects and bugs?
    29. What are the roles of a QA specialist?
    30. What are a Test Case and a Test Plan?
    31. Tell me about the Top-Down and Bottom-Up approaches in testing.
    32. Tell me about the Software Testing Life Cycle.

    Software Testing Interview Questions - 1

    Q1. What is Software Testing?
    Ans. Operation of a system or application under controlled conditions and evaluating the results. The controlled conditions must include both normal and abnormal conditions. It is oriented to detection.

    Q2. What is Software Quality Assurance?
    Ans. Software QA involves monitoring and improving the entire software development process, making sure that any agreed-upon standards and procedures are followed. It is oriented to prevention.

    Q 3. What are the qualities of a good test engineer?
    Ans.

    A 'test to break' attitude
    An ability to take the point of view of the customer
    A strong desire for quality
    Tact and diplomacy
    Good communication skills
    Previous software development experience, which provides a deeper understanding of the software development process
    Good judgment skills

    Q4. What are the qualities of a good QA engineer?
    Ans.

    The same qualities as a good tester
    Additionally, they must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization.
    In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed.
    An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.

    Q5. What are the qualities of a good QA or Test manager?
    Ans.

    Must be familiar with the software development process
    Able to maintain the enthusiasm of their team and promote a positive atmosphere
    Always looking to prevent problems
    Able to promote teamwork to increase productivity
    Able to promote cooperation between software, test, and QA engineers
    Have the skills needed to promote improvements in QA processes
    Have the ability to say 'no' to other managers when quality is insufficient or QA processes are not being adhered to
    Have people judgment skills for hiring and keeping skilled personnel
    Be able to run meetings and keep them focused

    Q6. What is the 'software life cycle'?
    Ans. The life cycle begins when an application is first conceived and ends when it is no longer in use.

    Q7. Tell us about some world famous bugs
    Ans. 1. In December 2007, an error occurred in a new ERP payroll system for a large urban school system. More than one third of employees received incorrect paychecks, resulting in overpayments of $53 million. Inadequate testing reportedly contributed to the problems.

    2. A software error reportedly resulted in overbilling to 11,000 customers of a major telecommunications company in June of 2006. Making the corrections in the bills took a long time.

    3. In March of 2002 it was reported that software bugs in Britain's national tax system resulted in more than 100,000 erroneous tax overcharges.

    Q8. What are the common problems in the software development process?
    Ans.

    Poor requirements
    Unrealistic schedule
    Inadequate testing
    A request to pile on new features after development is underway
    Miscommunication

    Q9. What are the common solutions to software development problems?
    Ans.

    Solid requirements
    Realistic schedules
    Adequate testing
    Stick to initial requirements where feasible
    Require walkthroughs and inspections when appropriate

    Q10. What is Quality Software?
    Ans. Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and / or expectations, and is maintainable.

    Q11. What is good code?
    Ans. Good code is code that works, is reasonably bug free, and is readable and maintainable.

    Q12. What is good design?
    Ans. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable. It should also be robust with sufficient error-handling and status logging capability and work correctly when implemented. And, good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements.

    Q13. What's the role of documentation in QA?
    Ans. QA practices must be documented to enhance their repeatability. There should be a system for easily finding and obtaining information and determining what documentation will have a particular piece of information.

    Q14. Which projects may not need independent test staff?
    Ans. It depends on the size and nature of the project, as well as on business risks, the development methodology, and the skills and experience of the developers.

    Q15. Why does software have bugs?
    Ans.

    miscommunication or no communication
    software complexity
    programming errors
    changing requirements
    time pressures
    poorly documented code
    software development tools
    egos - people prefer to say things like:
    • 'no problem'
    • 'piece of cake'
    • 'I can whip that out in a few hours'

    Software Testing Interview Questions and Answers

    Here are some software testing interview questions and answers. You can add more to the comments field at the bottom to help the community out!
    Note that Pay4Bugs does not require interviews – You can get paid for testing software and finding bugs without any interviews, tests, or certifications. However, the software developer customers can also block users who do not post good bugs, so it is almost like a constant interview situation!
    Once your product, website, or application is nearing the beta phase, and it’s time to get hands-on feedback from real users at a low price, Pay4Bugs is the way to go.

    Software Testing Interview Questions and Answers

    Software Testing Interview Question 1 – What is a Traceability Matrix?

    A Traceability Matrix documents the relationship between two baseline documents to determine the completeness of the relationship. A requirements traceability matrix may be used to check to see if the current project requirements are being met. This matrix is usually in the form of a table.
    The identifiers for the items of one document are usually placed in the left column, and the identifiers for the other document are placed across the top row. When an item in the left column is related to an item across the top, a mark is placed in the intersecting cell. The number of relationships is added up for each row and each column, with a higher number indicating a stronger correlation between the two documents. Zero values indicate that no relationship exists; very large values suggest that the relationship is too complex.
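A minimal sketch of such a matrix, assuming hypothetical requirement identifiers (REQ-*) in the left column and test-case identifiers (TC-*) across the top; the row and column totals flag uncovered or overly complex items:

```python
# Build a traceability matrix from a set of (requirement, test-case) links.
# All identifiers below are hypothetical examples.
def build_matrix(row_ids, col_ids, links):
    """Return the matrix plus per-row and per-column relationship counts."""
    matrix = {r: {c: (r, c) in links for c in col_ids} for r in row_ids}
    row_totals = {r: sum(matrix[r].values()) for r in row_ids}
    col_totals = {c: sum(matrix[r][c] for r in row_ids) for c in col_ids}
    return matrix, row_totals, col_totals

reqs = ["REQ-1", "REQ-2", "REQ-3"]   # left column: requirements
tests = ["TC-1", "TC-2"]             # top row: test cases
links = {("REQ-1", "TC-1"), ("REQ-2", "TC-1"), ("REQ-2", "TC-2")}

matrix, row_totals, col_totals = build_matrix(reqs, tests, links)
# A zero row total means no test covers that requirement yet.
uncovered = [r for r, n in row_totals.items() if n == 0]
print(uncovered)  # ['REQ-3']
```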

    Software Testing Interview Question 2 – If there are many bugs to be fixed, which should you resolve first?

    Fix the highest-priority bugs first. The severity of a software defect may not correlate directly with the priority placed on fixing it. Severity and priority should be tracked separately, although in a small organization or on a small project, there may not be a large number of defects and you will not need to track both.
    It may also be helpful to track the “urgency” of a bug fix (as determined by the client).
    In larger projects, you may have a Triage team.
    Triage is a medical term; it is the assessment of which patients need to be dealt with first. Some patients will die regardless of what you do; some patients will heal by themselves. The third group, the patients that will only heal with your help, are the highest-priority patients. You can assign software defects to a similar type of “triage” list, based on the defects’ priority and severity.
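The priority-first ordering described above can be sketched as a simple sort key, with severity tracked separately and used only as a tie-breaker; the bug records and severity ranking here are hypothetical:

```python
# Hypothetical bug records; severity and priority are tracked separately.
bugs = [
    {"id": 101, "severity": "low",    "priority": 3},
    {"id": 102, "severity": "high",   "priority": 1},
    {"id": 103, "severity": "medium", "priority": 1},
    {"id": 104, "severity": "high",   "priority": 2},
]

SEVERITY_RANK = {"high": 0, "medium": 1, "low": 2}

# Order by priority first (1 = most urgent), then severity as a tie-breaker.
queue = sorted(bugs, key=lambda b: (b["priority"], SEVERITY_RANK[b["severity"]]))
print([b["id"] for b in queue])  # [102, 103, 104, 101]
```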

    Software Testing Interview Question 3 – What’s the difference between re-testing and regression testing?

    Regression testing is the process of re-running a suite of tests after new bug fixes to ensure that the fixes do not break functionality that worked before, including defects that were fixed earlier. This process involves running a suite of tests.
    Re-testing is the process of testing a single defect that was just fixed. Only one test is performed, and the goal is to make sure that the defect that was just fixed was, in fact, fixed properly.
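A minimal sketch of the distinction using Python's unittest and a hypothetical `discount` function whose defect was just fixed: the re-test is one focused check on that defect, while the regression suite re-runs the wider set to catch side effects:

```python
import unittest

# Hypothetical module under test: a rounding defect in discount() was just fixed.
def discount(price, percent):
    return round(price * (100 - percent) / 100, 2)

class ReTestBug4711(unittest.TestCase):
    """Re-test: a single focused check that the specific fixed defect stays fixed."""
    def test_discount_rounding(self):
        self.assertEqual(discount(19.99, 10), 17.99)

class RegressionSuite(unittest.TestCase):
    """Regression: the wider suite, re-run after the fix to catch side effects."""
    def test_zero_discount(self):
        self.assertEqual(discount(50, 0), 50.0)
    def test_full_discount(self):
        self.assertEqual(discount(50, 100), 0.0)
```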

    Software Testing Interview Question 4 – What are the flaws in waterfall model and how to overcome it?

    The major drawback is that you do not test the application logic until very late in the development cycle. Although a very detailed system specification should result in a less error-prone application, a single serious error spotted late in the development cycle may be very expensive to fix. The waterfall model also does not adapt well to rapidly changing technology.

    Software Testing Interview Question 5 – What is the difference between Functional Testing and System Testing?

    Functionality testing is based on functional requirements of the application. By contrast, system testing is end-to-end testing that covers all of an application’s functionality including usability, security and performance.
    Functional testing is a subset of system testing.

    Software Testing Interview Question 6 – What is the V-Model Development Method?

    [Figure: The V-Model]
    The V-model is a software development process which may be considered an extension of the waterfall model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-Model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing.
    To see a graphical representation of the V-Model, see the image to the right.

    Software Testing Interview Question 7 – What are the pre-requisites for white-box testing?

    They are the same as for black-box testing, with one major exception: during white-box testing, the testers have access to the application logic. The tester should ask for access to detailed functional specs and requirements, design documents (both high-level and detailed), and source code. The tester analyzes the source code and prepares functional tests to ensure that the application behaves in compliance with both the requirements and the specs.

    Software Testing Interview Question 8 – What is the agile manifesto?

    The Agile Manifesto is a statement of the principles that underpin agile software development:
    • Individuals and interactions take priority over processes and tools
    • Working software takes priority over comprehensive documentation
    • Customer collaboration takes priority over contract negotiation
    • Response to change takes priority over following a plan
    The QA team may want to add one more principle:
    • Craftsmanship takes priority over execution
    The idea is to prioritize the creation of good code over the creation of code that barely works.
    Have more Software Testing Interview Questions and Answers? Add them to the comments field below!

    Software Testing

    1. Software testing is oriented to detecting defects and is often equated with finding bugs. Testing is the process of executing a software system to determine whether it matches its specification and executes in its intended environment under controlled conditions. The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't or don't happen when they should.

    SQA: - Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.  

    Stop Testing: - Testing is potentially endless. We cannot test until all the defects are unearthed and removed -- that is simply impossible. At some point, we have to stop testing and ship the software. The question is when.

    Realistically, testing is a trade-off between budget, time and quality. It is driven by profit models. The pessimistic, and unfortunately most often used approach is to stop testing whenever some, or any of the allocated resources -- time, budget, or test cases -- are exhausted. The optimistic stopping rule is to stop testing when either reliability meets the requirement, or the benefit from continuing testing cannot justify the testing cost. [Yang95] This will usually require the use of reliability models to evaluate and predict reliability of the software under test. Each evaluation requires repeated running of the following cycle: failure data gathering -- modeling -- prediction. This method does not fit well for ultra-dependable systems, however, because the real field failure data will take too long to accumulate.
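The optimistic stopping rule above can be sketched as a simple cost-benefit comparison; all of the figures here are hypothetical, and a real decision would rest on a reliability model rather than two numbers:

```python
# Hedged sketch of the "optimistic" stopping rule: stop testing when the
# expected benefit of continued testing no longer justifies its cost.
def should_stop(defects_found_per_day, cost_per_day, value_per_defect):
    expected_benefit = defects_found_per_day * value_per_defect
    return expected_benefit < cost_per_day

# Early in testing: 4 defects/day, each worth $500 more to fix now than in the field.
print(should_stop(4.0, 1000.0, 500.0))   # False - keep testing
# Late in testing: defect discovery has dropped to one every two days.
print(should_stop(0.5, 1000.0, 500.0))   # True - stop and ship
```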
     
    For Verification & Validation (V&V)
    As the topic Verification and Validation indicates, another important purpose of testing is verification and validation (V&V). Testing can serve as a metric and is heavily used as a tool in the V&V process. Testers can make claims based on interpretations of the testing results: either the product works under certain conditions, or it does not. We can also compare the quality of different products built to the same specification, based on results from the same tests.
     
    2. What is the purpose of software testing?
    The purpose of software testing is
    a. To demonstrate that the product performs each function intended;
    b. To demonstrate that the internal operation of the product performs according to specification and all internal components have been adequately exercised;
    c. To increase our confidence in the proper functioning of the software.
    d. To show the product is free from defects.
    e. All of the above.
     
    3. Types of Levels: -
    COMPATIBILITY TESTING. Testing to ensure compatibility of an application or Web site with different browsers, OSs, and hardware platforms. Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite.
     
    CONFORMANCE TESTING. Verifying implementation conformance to industry standards. Producing tests for the behavior of an implementation to be sure it provides the portability, interoperability, and/or compatibility a standard defines.
     
    FUNCTIONAL TESTING. Validating that an application or Web site conforms to its specifications and correctly performs all its required functions. This entails a series of tests which perform a feature-by-feature validation of behavior, using a wide range of normal and erroneous input data. This can involve testing of the product's user interface, APIs, database management, security, installation, networking, etc. Functional testing can be performed on an automated or manual basis using black box or white box methodologies.
     
    LOAD TESTING. Load testing is a generic term covering Performance Testing and Stress Testing.
     
    PERFORMANCE TESTING. Performance testing can be applied to understand your application or WWW site's scalability, or to benchmark the performance in an environment of third party products such as servers and middleware for potential purchase. This sort of testing is particularly useful to identify performance bottlenecks in high use applications. Performance testing generally involves an automated test suite as this allows easy simulation of a variety of normal, peak, and exceptional load conditions.
     
    REGRESSION TESTING. Similar in scope to a functional test, a regression test allows a consistent, repeatable validation of each new release of a product or Web site. Such testing ensures reported product defects have been corrected for each new release and that no new quality problems were introduced in the maintenance process. Though regression testing can be performed manually an automated test suite is often used to reduce the time and resources needed to perform the required testing.
     
    SMOKE TESTING. A quick-and-dirty test that the major functions of a piece of software work without bothering with finer details. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
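A smoke test of this kind can be sketched as a short script of coarse checks, one per major function, stopping at the first failure; the application state and the checks below are hypothetical:

```python
# Quick-and-dirty smoke test: verify that the major functions of a
# (hypothetical) application respond at all, without finer details.
app_state = {"db": True, "config": {"port": 8080}}

def check_db():
    if not app_state["db"]:
        raise RuntimeError("database down")

def check_config():
    app_state["config"]["port"]  # raises KeyError if the config is missing

def smoke_test(checks):
    """Run each named check; report the first failure, if any."""
    for name, check in checks:
        try:
            check()
        except Exception as exc:
            return False, f"{name}: {exc}"
    return True, "all major functions responded"

ok, message = smoke_test([("database reachable", check_db),
                          ("config loaded", check_config)])
print(ok)  # True
```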
     
    STRESS TESTING. Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. A graceful degradation under load leading to non-catastrophic failure is the desired result. Often Stress Testing is performed using the same process as Performance Testing but employing a very high level of simulated load.
     
    UNIT TESTING. Functional and reliability testing in an Engineering environment. Producing tests for the behavior of components of a product to ensure their correct behavior prior to system integration.
     
    Black Box Testing
    Black box testing methods focus on the functional requirements of the software. Test sets are derived that fully exercise all functional requirements. This strategy tends to be applied during the latter part of the lifecycle.
    Tests are designed to answer questions such as:
     
    1) How is functional validity tested?
    2) What classes of input make good test cases?
    3) Is the system particularly sensitive to certain input values?
    4) How are the boundaries of data classes isolated?
    5) What data rates or volumes can the system tolerate?
    6) What effect will specific combinations of data have on system operation?
     
    Equivalence Partitioning: -
    This method divides the input of a program into classes of data. Test case design is based on defining an equivalence class for a particular input. An equivalence class represents a set of valid or invalid input values.
    Guidelines for equivalence partitioning -
     
    1) If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
    2) If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
    3) If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
    4) If an input condition is boolean, one valid and one invalid class are defined.
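The guidelines above can be illustrated with a hypothetical input condition, an integer age that must lie in the range 18 to 65; one representative value per equivalence class is enough (guideline 1: one valid, two invalid classes for a range):

```python
# Hypothetical requirement: age must be an integer in the range 18..65.
def classify(age):
    if not isinstance(age, int):
        return "invalid: not an integer"
    if age < 18:
        return "invalid: below range"
    if age > 65:
        return "invalid: above range"
    return "valid"

# One representative per equivalence class: one valid, two invalid.
representatives = {
    30: "valid",
    10: "invalid: below range",
    70: "invalid: above range",
}
for value, expected in representatives.items():
    assert classify(value) == expected
print("all partitions covered")
```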
     
    Boundary Value Analysis: -
    Boundary value analysis is complementary to equivalence partitioning. Rather than selecting arbitrary input values to partition the equivalence class, the test case designer chooses values at the extremes of the class. Furthermore, boundary value analysis also encourages test case designers to look at output conditions and design test cases for the extreme conditions in output.
    Guidelines for boundary value analysis -
     
    1) If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b, and with values just above and just below a and b.
    2) If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers. Values above and below the minimum and maximum are also tested.
    3) Apply the above guidelines to output conditions. For example, if the requirement specifies the production of a table as output, then you want to choose input conditions that produce the largest and smallest possible tables.
    4) For internal data structures be certain to design test cases to exercise the data structure at its boundary. For example, if the software includes the maintenance of a personnel list, then you should ensure the software is tested with conditions where the list size is 0, 1 and maximum (if constrained).
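For the same hypothetical range of 18 to 65, guideline 1 yields test values at the bounds and just outside them:

```python
# Boundary value analysis for a range bounded by a and b: test a, b,
# and the values just below a and just above b. The range is hypothetical.
def boundary_values(a, b):
    return [a - 1, a, b, b + 1]

def in_range(x, a=18, b=65):
    return a <= x <= b

cases = boundary_values(18, 65)           # [17, 18, 65, 66]
results = [in_range(x) for x in cases]
print(list(zip(cases, results)))  # [(17, False), (18, True), (65, True), (66, False)]
```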
     
    Cause-Effect Graphs
    A weakness of the two methods above is that they do not consider potential combinations of input/output conditions. Cause-effect graphs connect input classes (causes) to output classes (effects), yielding a directed graph.
    Guidelines for cause-effect graphs -
     
    1) Causes and effects are listed for a module and an identifier is assigned to each.
    2) A cause-effect graph is developed (special symbols are required).
    3) The graph is converted to a decision table.
    4) Decision table rules are converted to test cases.
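Steps 3 and 4 can be sketched with a hypothetical login module: the decision table maps each combination of causes to its expected effect, and each rule becomes one test case:

```python
# Hypothetical decision table for a login module.
# Causes: valid_user, valid_password. Effect: the access decision.
decision_table = [
    # (valid_user, valid_password) -> expected effect
    ((True,  True),  "grant access"),
    ((True,  False), "reject: bad password"),
    ((False, True),  "reject: unknown user"),
    ((False, False), "reject: unknown user"),
]

def login(valid_user, valid_password):
    if not valid_user:
        return "reject: unknown user"
    if not valid_password:
        return "reject: bad password"
    return "grant access"

# Step 4: each decision-table rule is converted into one test case.
for (user_ok, pw_ok), expected in decision_table:
    assert login(user_ok, pw_ok) == expected
print(f"{len(decision_table)} rules verified")
```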
     
    We cannot test quality directly, but we can test related factors to make quality visible. Quality has three sets of factors -- functionality, engineering, and adaptability. These three sets of factors can be thought of as dimensions in the software quality space. Each dimension may be broken down into its component factors and considerations at successively lower levels of detail.
     
    Performance testing
    Not all software systems have explicit performance specifications, but every system has implicit performance requirements: the software should not take infinite time or infinite resources to execute. The term "performance bugs" is sometimes used to refer to design problems that cause system performance to degrade.
     
    Reliability testing
    Software reliability refers to the probability of failure-free operation of a system. It is related to many aspects of software, including the testing process. Directly estimating software reliability by quantifying its related factors can be difficult. Testing is an effective sampling method to measure software reliability. Guided by the operational profile, software testing (usually black-box testing) can be used to obtain failure data, and an estimation model can be further used to analyze the data to estimate the present reliability and predict future reliability. Therefore, based on the estimation, the developers can decide whether to release the software, and the users can decide whether to adopt and use the software. Risk of using software can also be assessed based on reliability information. [Hamlet94] advocates that the primary goal of testing should be to measure the dependability of tested software.
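A minimal sketch of estimating reliability from failure data, using the simple success-rate estimate over operational-profile test runs (real estimation models, such as those referenced above, are considerably more elaborate; the counts here are hypothetical):

```python
# Estimate the probability of failure-free operation as the observed
# success rate over sampled test runs. Counts are hypothetical.
def estimate_reliability(runs, failures):
    if runs <= 0:
        raise ValueError("need at least one test run")
    return 1 - failures / runs

r = estimate_reliability(runs=1000, failures=3)
print(round(r, 3))  # 0.997
```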
     
    Security testing
    Software quality, reliability and security are tightly coupled. Flaws in software can be exploited by intruders to open security holes. With the development of the Internet, software security problems are becoming even more severe.
    Many critical software applications and services have integrated security measures against malicious attacks. The purpose of security testing of these systems includes identifying and removing software flaws that may potentially lead to security violations, and validating the effectiveness of security measures. Simulated security attacks can be performed to find vulnerabilities.
     
    TESTING means "quality control"
           * QUALITY CONTROL measures the quality of a product
           * QUALITY ASSURANCE measures the quality of processes used to create a quality product.
    Beta testing is typically conducted by end users of a software product who are not paid a salary for their efforts.
     
    Acceptance Testing
    Testing the system with the intent of confirming readiness of the product and customer acceptance.
     
    Ad Hoc Testing
    Testing without a formal test plan or outside of a test plan. With some projects this type of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed. Sometimes ad hoc testing is referred to as exploratory testing.
     
    Alpha Testing
    Testing after code is mostly complete or contains most of the functionality and prior to users being involved. Sometimes a select group of users are involved. More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.
     
    Automated Testing
    Software testing that utilizes a variety of tools to automate the testing process, reducing the need for a person to test manually. Automated testing still requires a skilled quality assurance professional with knowledge of the automation tool and the software being tested to set up the tests.
     
    Beta Testing
    Testing after the product is code complete. Betas are often widely distributed or even distributed to the public at large in hopes that they will buy the final product when it is released.
     
    Black Box Testing
    Testing software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document.
     
    Compatibility Testing
    Testing used to determine whether other system software components such as browsers, utilities, and competing software will conflict with the software being tested.
     
    Configuration Testing
    Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.
     
    Functional Testing
    Testing two or more modules together with the intent of finding defects, demonstrating that defects are not present, verifying that the module performs its intended functions as stated in the specification and establishing confidence that a program does what it is supposed to do.
     
    Independent Verification and Validation (IV&V)
    The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The individual or group doing this work is not part of the group or organization that developed the software. A term often applied to government work or where the government regulates the products, as in medical devices.
     
    Installation Testing
    Testing with the intent of determining if the product will install on a variety of platforms and how easily it installs.
     
    Integration Testing
    Testing two or more modules or functions together with the intent of finding interface defects between the modules or functions. This testing is completed as a part of unit or functional testing and sometimes becomes its own standalone test phase. On a larger level, integration testing can involve putting together groups of modules and functions with the goal of completing and verifying that the system meets the system requirements. (see system testing)
     
    Load Testing
    Testing with the intent of determining how well the product handles competition for system resources. The competition may come in the form of network traffic, CPU utilization or memory allocation.
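A rough sketch of generating such competition, here with threads contending for a shared lock; a real load test would drive network traffic or an actual application rather than an in-process counter:

```python
import threading

# Many threads hitting one shared (hypothetical) resource at once,
# to observe how the system copes with contention.
lock = threading.Lock()
counter = {"handled": 0}

def simulated_request():
    with lock:                      # the contention point under test
        counter["handled"] += 1

threads = [threading.Thread(target=simulated_request) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter["handled"], "requests handled")  # 50 requests handled
```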
     
    Performance Testing
    Testing with the intent of determining how quickly a product handles a variety of events. Automated test tools geared specifically to test and fine-tune performance are used most often for this type of testing.
     
    Pilot Testing
    Testing that involves the users just before actual release to ensure that users become familiar with the release contents and ultimately accept it. It is often considered a Move-to-Production activity for ERP releases or a beta test for commercial products. It typically involves many users, is conducted over a short period of time and is tightly controlled. (see beta testing)
     
    Regression Testing
    Testing with the intent of determining if bug fixes have been successful and have not created any new problems. Also, this type of testing is done to ensure that no degradation of baseline functionality has occurred.
     
    Security Testing
    Testing of database and network software in order to keep company data and resources secure from mistaken/accidental users, hackers, and other malevolent attackers.
     
    Software Testing
    The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The organization and management of individuals or groups doing this work is not relevant. This term is often applied to commercial products such as internet applications. (contrast with independent verification and validation)
     
    Stress Testing
    Testing with the intent of determining how well a product performs when a load is placed on the system resources that nears and then exceeds capacity.
     
    System Integration Testing
    Testing a specific hardware/software installation. This is typically performed on a COTS (commercial off-the-shelf) system or any other system comprised of disparate parts, where custom configurations and/or unique installations are the norm.
     
    White Box Testing
    Testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose.
     
    Difference Between Verification & Validation: -
    - Verification is about answering the question "Does the system function properly?" or "Have we built the system right?"
    - Validation is about answering the question "Is the product what the customer wanted?" or "Have we built the right system?"
     
    This definition indicates that Validation could be the same thing as Acceptance Test (or at least very similar).
     
    I have often described the Verification and Validation processes in the same way, i.e.:
     
    1. Plan the test (output Test Plan)
    2. Specify the test (output Test Specification)
    3. Perform the test (output Test Log and/or Test Report)
     
    Verification & Validation
    Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.