50+ Interview Questions on SDLC (Software Development Life Cycle)




1) What is SDLC?

The Software Development Life Cycle (SDLC) is a systematic, phased approach to developing software.

2) What are the phases in Software Development Life Cycle?

Requirements Gathering
Analysis & Planning
Software Design
Coding or Implementation
Testing
Release & Maintenance
Note: Phase names may vary from one company to another 

3) What is STLC?

The Software Testing Life Cycle (STLC), also called the software test process, is a formal, phased way of performing testing.

4) What are the phases in Software Testing Life Cycle?

Test Planning
Test Design
Test Execution
Test Closure

5) What is Bug Life Cycle?


The Bug Life Cycle (also called the Defect Life Cycle) is the process a defect or bug goes through: it starts when the defect is found and ends when the defect is closed. (A small state-machine sketch follows the list of flows below.)

Different Flows in Bug Life Cycle

New: Tester provides new status while Reporting (for the first time)

Open: Developer / Dev lead /DTT opens the Defect

Rejected: Developer / Dev lead / DTT rejects the defect if it is invalid

Fixed: Developer provides fixed status after fixing the defect

Deferred: Developer provides this status when the fix is postponed to a later release, typically due to time constraints or low priority

Closed: Tester provides closed status after performing confirmation Testing

Re-open: Tester re-opens the defect, with valid reasons and proof, if the defect still exists after the fix
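
As an illustration only, the flow above can be sketched as a small state machine. The states come from the statuses listed; the allowed transitions are assumptions and will differ between bug-tracking tools.

```python
from enum import Enum

class BugStatus(Enum):
    NEW = "New"
    OPEN = "Open"
    REJECTED = "Rejected"
    FIXED = "Fixed"
    DEFERRED = "Deferred"
    CLOSED = "Closed"
    REOPENED = "Re-open"

# Allowed transitions, based on the flows listed above (assumed; real tools vary).
TRANSITIONS = {
    BugStatus.NEW: {BugStatus.OPEN, BugStatus.REJECTED},
    BugStatus.OPEN: {BugStatus.FIXED, BugStatus.DEFERRED, BugStatus.REJECTED},
    BugStatus.FIXED: {BugStatus.CLOSED, BugStatus.REOPENED},
    BugStatus.DEFERRED: {BugStatus.OPEN},
    BugStatus.REJECTED: {BugStatus.REOPENED},
    BugStatus.REOPENED: {BugStatus.FIXED},
    BugStatus.CLOSED: set(),
}

def move(current: BugStatus, target: BugStatus) -> BugStatus:
    """Return the new status if the transition is allowed, otherwise raise."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current.value} -> {target.value}")
    return target

# Example: tester reports a bug, developer opens and fixes it, tester closes it.
status = BugStatus.NEW
for nxt in (BugStatus.OPEN, BugStatus.FIXED, BugStatus.CLOSED):
    status = move(status, nxt)
print(status.value)  # Closed
```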

6) What is ALM (Application Lifecycle Management)?

Application Lifecycle Management (ALM) is the continuous process of managing the life of a software application through development, testing, and maintenance.

7) What are the phases in Software Application Life Cycle?

Development phase
Testing phase
Production phase

8) What is Error?

An error is a mistake made by a person during computer programming or design; another name for it is a mistake.

9) What is Defect or Bug?

A defect (or bug) is any mismatch between actual and expected behaviour found during the testing phase.

10) What is Failure?

A failure is any mismatch between actual and expected behaviour found in the production phase (the customer's environment).

11) What is Waterfall Model?

The waterfall model is a sequential software development approach in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of Requirements Gathering, Analysis & Planning, Software Design, Coding or Implementation, Testing, and Release & Maintenance.

12) What is V Model?

The V-model is a framework that describes the software development lifecycle activities from requirements specification to release and maintenance.

The V-model illustrates how testing activities can be integrated into each phase of the software development lifecycle.

13) What is Prototype Model?

14) What is Spiral Model?

15) What is Agile Model?

16) What is Quality?

The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. 

17) What is Software Quality?

18) What is SQA (Software Quality Assurance)?

19) What is Quality Control?

20) What is Verification?

21) What is Validation?

22) What is Testing?

23) What is the relation between Quality and Testing?

24) What are the advantages of Waterfall Model?

25) What are the disadvantages of Waterfall Model?

26) What are the advantages of V Model?

27) What are the disadvantages of V Model?

28) What are the advantages of Prototype model?

29) What are the disadvantages of Prototype model?

30) What are the advantages of Agile model?

31) What are the disadvantages of Agile model?

32) What is Requirement?

33) What is Requirement Specification?

34) What is BRS?

35) What is SRS?

36) What is High Level Design or Global Design?

37) What is Detailed Design or Low Level Design?

38) What are the important types of Requirements?

39) What is Business Requirement?

40) What is Software Requirement?

41) What is Test Requirement?

42) What is Software Release?

43) What is Software Maintenance?

44) What is Change Request?

45) What is PIN Document and who prepares this document?

46) What are the teams involved in Software Development?

47) What is Static Testing?

48) What is Dynamic Testing?

49) Why do we use Test Design Techniques?

Exhaustive testing is impractical, so we use test design techniques to reduce the size of the input and output domains while still achieving good coverage.
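
As a rough sketch of what such techniques look like in practice, the example below applies equivalence partitioning and boundary value analysis to a hypothetical `is_adult` function; the function and its boundaries are assumptions, not part of the original answer.

```python
def is_adult(age: int) -> bool:
    """Hypothetical function under test: adults are 18 or older, valid ages 0..150."""
    if age < 0 or age > 150:
        raise ValueError("age out of range")
    return age >= 18

# Instead of testing all ~151 valid ages (plus every invalid value), pick
# representatives from each equivalence class and the boundary values.
cases = [
    (-1, "error"),    # invalid partition (below range)
    (0, False),       # lower boundary of "minor"
    (17, False),      # upper boundary of "minor"
    (18, True),       # lower boundary of "adult"
    (150, True),      # upper boundary of valid range
    (151, "error"),   # invalid partition (above range)
]

for age, expected in cases:
    try:
        result = is_adult(age)
    except ValueError:
        result = "error"
    assert result == expected, f"age={age}: got {result}, expected {expected}"
print("All representative cases passed")
```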

50) What is Exhaustive Testing?

A test approach in which the test suite comprises all combinations of
input values and preconditions.
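
To see why exhaustive testing is usually impractical, the back-of-the-envelope calculation below uses assumed domain sizes for a hypothetical three-field form; the numbers are illustrative only.

```python
from math import prod

# Hypothetical input domains for three independent form fields.
domain_sizes = {
    "country": 195,   # country drop-down
    "age": 151,       # ages 0..150
    "title": 5,       # Mr / Mrs / Ms / Dr / Prof
}

total = prod(domain_sizes.values())
print(f"{total:,} combinations for just three simple fields")  # 147,225

# Add one free-text field of 10 lowercase letters and the suite explodes:
print(f"{total * 26**10:,} combinations")  # far beyond what anyone can run
```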

51) What is Quality Management?

Coordinated activities to direct and control an organization with regard
to quality. 


Direction and control with regard to quality generally includes the establishment of the quality policy and quality objectives, quality planning, quality control, quality assurance and quality improvement.

Difference Between Verification And Validation With Example


An example of verification and validation is given just below the table.

Verification vs. Validation:

1. Verification is a static practice of verifying documents, design, code and program; validation is a dynamic mechanism of validating and testing the actual product.
2. Verification does not involve executing the code; validation always involves executing the code.
3. Verification is human-based checking of documents and files; validation is computer-based execution of the program.
4. Verification uses methods like inspections, reviews, walkthroughs, and desk-checking; validation uses methods like black-box (functional) testing, gray-box testing, and white-box (structural) testing.
5. Verification checks whether the software conforms to its specifications; validation checks whether the software meets customer expectations and requirements.
6. Verification can catch errors that validation cannot catch and is a low-level exercise; validation can catch errors that verification cannot catch and is a high-level exercise.
7. The targets of verification are the requirements specification, application and software architecture, high-level and detailed design, and database design; the target of validation is the actual product: a unit, a module, a set of integrated modules, and the final product.
8. Verification is done by the QA team to ensure that the software meets the specifications in the SRS document; validation is carried out with the involvement of the testing team.
9. Verification generally comes first and is done before validation; validation generally follows after verification.

An example of verification and validation is explained below:


Suppose we have the specifications for a project. Checking those specifications, without executing anything, to see whether they are up to the mark is verification.

Validation of the software, by contrast, is done by executing the product to make sure it actually meets the requirements of the customer.

Note that the customer and the end users are the ones concerned in validation of the software.

It is also important to differentiate between end users and customers. For example, if you are developing a library monitoring system, the librarian is the customer, and the people who issue the books, collect fines and so on fall under the category of end users.
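
As a rough sketch of the same distinction in code (the `apply_discount` function and its review checklist are hypothetical): verification reviews the artefacts without running anything, while validation executes the product against expectations.

```python
def apply_discount(price: float, percent: float) -> float:
    """Spec (assumed): return price reduced by `percent`, never negative."""
    return max(price - price * percent / 100, 0.0)

# --- Verification (static): review the artefact without executing it. ---
# A reviewer walks through the spec and code, e.g. with a checklist:
checklist = [
    "Does the docstring state what happens for percent > 100?",
    "Is the rounding behaviour specified?",
    "Does the signature match the design document?",
]
for item in checklist:
    print("[review]", item)

# --- Validation (dynamic): execute the product against expectations. ---
assert apply_discount(100.0, 20.0) == 80.0   # normal case
assert apply_discount(50.0, 100.0) == 0.0    # full discount
assert apply_discount(50.0, 150.0) == 0.0    # result is never negative
print("validation checks passed")
```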

Techniques or Methods of Verification and Validation


Methods of Verification

1. Walkthrough
2. Inspection
3. Review

Methods of Validation

1. Testing
2. End Users

Conclusion:

1) Verification and validation are both necessary and complementary.
2) Each of them provides its own set of error filters.
3) Each has its own way of detecting the errors left in the software.

Lots of people use verification and validation interchangeably, but the two terms have different meanings.

The verification process checks whether the outputs conform to the given inputs, while the validation process checks whether the software is accepted by the user.

Manual Testing Interview Questions with answers by www.jobinterviewquestionswithanswers.blogspot.com


1. What are the components of an SRS?
An SRS contains the following basic components:
Introduction
Overall Description
External Interface Requirements
System Requirements
System Features


   2. What is the difference between a test plan and a QA plan?
A test plan lays out what is to be done to test the product and includes how quality control will work to identify errors and defects.  A QA plan on the other hand is more concerned with prevention of errors and defects rather than testing and fixing them.
   3. How do you test an application if the requirements are not available?
If requirements documentation is not available for an application, a test plan can be written based on assumptions made about the application.  Assumptions that are made should be well documented in the test plan.
   4. What is a peer review?
Peer reviews are reviews conducted among people that work on the same team.  For example, a test case that was written by one QA engineer may be reviewed by a developer and/or another QA engineer.
    5. How can you tell when enough test cases have been created to adequately test a system or module?
You can tell that enough test cases have been created when there is at least one test case to cover every requirement.  This ensures that all designed features of the application are being tested.
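One lightweight way to check this is a requirements-to-test-case traceability mapping; the requirement IDs and test case names below are hypothetical, just to illustrate the idea.

```python
# Hypothetical traceability matrix: requirement -> test cases covering it.
coverage = {
    "REQ-001 login with valid credentials": ["TC-01", "TC-02"],
    "REQ-002 lock account after 3 failures": ["TC-03"],
    "REQ-003 password reset email": [],          # not yet covered
}

uncovered = [req for req, cases in coverage.items() if not cases]
if uncovered:
    print("Requirements still missing test cases:")
    for req in uncovered:
        print(" -", req)
else:
    print("Every requirement has at least one test case")
```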
   6. Who approves test cases?
The approver of test cases varies from one organization to the next. In some organizations the QA lead approves the test cases, while in others they are approved as part of peer reviews.
   7. Give an example of what can be done when a bug is found.
When a bug is found, it is a good idea to run more tests so that the problem can be clearly described. For example, suppose a test case fails when Animal=Cat. A tester should run more tests to be sure that the same problem does not also exist with Animal=Dog. Once the tester is sure of the full scope of the bug, it can be documented and adequately reported. This is one of the most frequently asked manual testing interview questions.
   8. Who writes test plans and test cases?
Test plans are typically written by the quality assurance lead while testers usually write test cases.
   9. Is quality assurance and testing the same?
Quality assurance and testing are not the same. Testing is considered to be a subset of QA. QA should be incorporated throughout the software development life cycle, while testing is the phase that occurs after the coding phase.

Typical Manual Testing Interview Questions:

  10. What is a negative test case?
Negative test cases are created based on the idea of testing in a destructive manner.  For example, testing what will happen if inappropriate inputs are entered into the application.
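For instance, a negative test deliberately feeds invalid input and expects the application to reject it. The sketch below assumes a hypothetical `set_age` function and uses pytest; it is illustrative only.

```python
import pytest

def set_age(age):
    """Hypothetical function under test: rejects invalid ages."""
    if not isinstance(age, int) or age < 0 or age > 150:
        raise ValueError("invalid age")
    return age

def test_set_age_rejects_negative_value():
    # Negative test case: inappropriate input must be rejected, not accepted.
    with pytest.raises(ValueError):
        set_age(-5)

def test_set_age_rejects_non_numeric_input():
    with pytest.raises(ValueError):
        set_age("twenty")
```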
  11. If an application is in production, and one module of code is modified, is it necessary to retest just that module or should all of the other modules be tested as well?
It is a good idea to perform regression testing and to check all of the other modules as well.  At the least, system testing should be performed.
  12. What should be included in a test strategy? 
The test strategy includes a plan for how to test the application and exactly what will be tested (user interface, modules, processes, etc.).  It establishes limits for testing and indicates whether manual or automated testing will be used.
  13. What can be done to develop a test for a system if there are no functional specifications or any system and development documents?
When there are no functional specifications or system development documents, the tester should familiarize themselves with the product and the code.  It may also be helpful to perform research to find similar products on the market.
  14. What are the functional testing types?
The following are the types of functional testing:
  • Compatibility
  • Configuration
  • Error handling
  • Functionality
  • Input domain
  • Installation
  • Inter-systems
  • Recovery
  15. What is the difference between sanity testing and smoke testing?
Smoke testing is a preliminary, shallow round of testing of a new build's most crucial functionality (for example, that the application launches and basic button functionality works) to decide whether the build is stable enough for further testing; it is often performed by developers or as part of the build process. Sanity testing, on the other hand, is a narrow, focused check, usually done by the test group, to confirm that a particular function or bug fix works as intended before deeper testing continues.
  16. Explain random testing.
Random testing involves checking how the application handles input data that is generated at random. Data types are typically ignored, and a random sequence of letters, numbers, and other characters is entered into the data field.
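A minimal sketch of random testing, assuming a hypothetical `search` function: random strings of letters, numbers, and punctuation are thrown at the input just to see whether anything crashes.

```python
import random
import string

def search(query: str) -> list:
    """Hypothetical function under test; should never crash on odd input."""
    return [item for item in ["apple", "banana"] if query.lower() in item]

alphabet = string.ascii_letters + string.digits + string.punctuation + " "

random.seed(42)  # fixed seed so failing runs can be reproduced
for _ in range(1000):
    length = random.randint(0, 50)
    data = "".join(random.choice(alphabet) for _ in range(length))
    try:
        search(data)               # we only check that it does not blow up
    except Exception as exc:
        print(f"Input {data!r} caused {exc!r}")
        raise
print("1000 random inputs handled without crashing")
```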
  17. Define smoke testing.
Smoke testing is a form of software testing that is not exhaustive and checks only the most crucial components of the software but does not check in more detail.

  Advanced Manual Testing Interview Questions

   18. What steps are involved in sanity testing?
Sanity testing is very similar to smoke testing. It is the initial testing of a component or application that is done to make sure that it is functioning at the most basic level and it is stable enough to continue more detailed testing.
   19. What is the difference between WinRunner and Rational Robot?
WinRunner is a functional test tool but Rational Robot is capable of both functional and performance testing. Also, WinRunner has 4 verification points and Rational Robot has 13 verification points.
   20. What is the purpose of the testing process?
The purpose of the testing process is to verify that input data produces the anticipated output.
   21. What is the difference between QA and testing?
The goals of QA are very different from the goals of testing. The purpose of QA is to prevent errors in the application, while the purpose of testing is to find errors.
   22. What is the difference between Quality Control and Quality Assurance?
Quality control (QC) and quality assurance (QA) are closely linked but are very different concepts. While QC evaluates a developed product, the purpose of QA is to ensure that the development process is at a level that makes certain that the system or application will meet the requirements.
   23. What is the difference between regression testing and retesting?
Regression testing is performing tests to ensure that modifications to a module or system do not have a negative effect on existing functionality or previous releases. Retesting is merely running the same test again, for example to confirm a fix. Regression testing is a widely asked manual testing interview topic, so it is worth researching further.
   24. Explain the difference between bug severity and bug priority.
Bug severity refers to the level of impact that the bug has on the application or system while bug priority refers to the level of urgency in the need for a fix.
   25. What is the difference between system testing and integration testing?
For system testing, the entire system is checked as a whole, whereas for integration testing, the interaction between the individual modules is tested.
   26. Explain the term bug.
A bug is an error found while running a program. Bugs fall into two categories: logical and syntax.

    Senior Tester Interview Questions

   27. Explain the difference between functional and structural testing.
Functional testing is considered to be behavioral or black box testing in which the tester verifies that the system or application functions according to specification.  Structural testing on the other hand is based on the code or algorithms and is considered to be white box testing.
   28. Define defect density.
Defect density is the total number of defects found divided by the size of the software, typically expressed as defects per thousand lines of code (KLOC).
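As a worked example with assumed numbers, defect density is often normalised per thousand lines of code (KLOC):

```python
defects_found = 45
lines_of_code = 30_000

defect_density = defects_found / (lines_of_code / 1000)  # defects per KLOC
print(f"{defect_density:.1f} defects per KLOC")          # 1.5 defects per KLOC
```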
   29. When is a test considered to be successful?
The purpose of testing is to ensure that the application operates according to the requirements and to discover as many errors and bugs as possible.  This means that tests that cover more functionality and expose more errors are considered to be the most successful.
   30. What good bug tracking systems have you used?
This is a simple interview question about your experience with bug tracking. Name the system or systems you are most familiar with, if any. It is also good to compare the pros and cons of several if you have experience with them. Bug tracking is central to the testing process and is almost always asked about in manual testing interviews, so do not forget this.
   31. In which phase should testing begin – requirements, planning, design, or coding?
Testing should begin as early as the requirements phase.
   32. Can you test a program and find 100% of the errors?
It is impossible to find all errors in an application, mostly because there is no way to calculate how many errors exist. Many factors are involved in such a calculation, such as the complexity of the program, the experience of the programmer, and so on. Testers consider this one of the trickier manual testing interview questions.
   33. What is the difference between debugging and testing?
The main difference between debugging and testing is that debugging is typically conducted by a developer who also fixes errors during the debugging phase.  Testing on the other hand, finds errors rather than fixes them.  When a tester finds a bug, they usually report it so that a developer can fix it.
   34. How should testing be conducted?
Testing should be conducted based on the technical requirements of the application.
   35. What is considered to be a good test?
Testing that covers most of the functionality of an object or system is considered to be a good test.
   36. What is the difference between top-down and bottom-up testing?
Top-down testing begins with the overall system and works its way down to the unit level. Bottom-up testing works in the opposite direction, from the unit level up through the interfaces to the overall system. Both have value, but bottom-up testing usually aids in discovering defects earlier in the development cycle, when the cost to fix errors is lower.
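To make the two directions concrete, the sketch below (all module names hypothetical) stubs out a lower-level module for top-down testing and drives the same module directly for bottom-up testing.

```python
# Hypothetical lower-level module (might not be ready yet during top-down testing).
def tax_module(amount):
    return amount * 0.2

# Higher-level module under test.
def checkout_total(amount, tax_fn=tax_module):
    return amount + tax_fn(amount)

# --- Top-down: test checkout_total first, stubbing out the tax module. ---
def tax_stub(amount):
    return 0.0  # stub returns a canned value until the real module exists

assert checkout_total(100.0, tax_fn=tax_stub) == 100.0

# --- Bottom-up: test the lower-level tax module first, via a simple driver. ---
assert tax_module(100.0) == 20.0
print("top-down and bottom-up sketches passed")
```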
   37. Explain how to develop a test plan and a test case.
A test plan consists of a set of test cases. Test cases are developed based on requirement and design documents for the application or system. Once these documents are thoroughly reviewed, the test cases that will make up the test plan can be created.
   38. What is the role of quality assurance in a product development lifecycle?
Quality assurance should be involved very early on in the development life cycle so that they can have a better understanding of the system and create sufficient test cases. However, QA should be separated from the development team so that the team is not able to build influence on the QA engineers.
   39. What is the average size of executables that you have created?
This is a simple interview question about your experience with executables. If you know the size of any that you’ve created, simply provide this info.
   40. What versions of Oracle are you familiar with?
This is an interview question about experience.  Simply provide the versions of the software that you have experience with.
   41. How is an SQL query executed in Oracle 8?
This is an interview question to check your experience with Oracle; you can simply answer “from the command prompt.” If you do not have Oracle experience, do not pretend; simply state that you have not worked on an Oracle database. Although this is a common manual testing interview question, the answers can differ: if you have experience with other tools such as TOAD or SQL Server, you can answer according to your own experience.
  42. Have you performed tests on the front-end and the back-end?
This is an interview question in which you should explain whether you performed testing on the GUI or the server portion of previous applications.
  43. What is the most difficult problem you’ve found during testing?
This is a simple interview question in which you should provide an example. It is also one of the trickier manual testing interview questions, as your answer can decide your chances. Answer in a way that shows your problem-solving skills, your eagerness to learn new things, and your dedication to the job.
  44. What were your testing responsibilities at your previous employer?
This interview question is very likely being asked to verify your knowledge of your resume. Make sure that you know what is on your resume and that it is the truth.
 

Important Manual Testing Interview Questions That Every Tester Should Know


Interested In A Manual Testing Career?

Many companies are in need of manual testers to work towards improving their software/application development. A manual tester must have multiple skills to be an effective part of the development team. The key skills all manual testers should have are communication, analytical ability, technical ability, leadership skills, and a strong knowledge of testing. Whether you are an experienced tester or new to the industry be prepared to answer multiple questions on your knowledge of testing during the interview process. There are several key questions that you can familiarize yourself with before your next big interview. If you can answer these questions well then you have a great chance to be the next manual tester for a company!

What Makes A Good Test Engineer?

A good test engineer should have the “test to break mentality.” This can be described as the ability to take the point of view of the user and carefully test the application with the intention to find any issues. A test engineer should also have good communication skills and be able to know when to speak in technical terms and when not to. Knowledge of the software development process is very important so that there is an understanding of the developer’s mindset. The ability to manage time on the essential features to be tested requires not only time management skills but also strong judgement skills. A tester must be able to focus on what needs to be tested during the current lifecycle of the project.

How Important Is Documentation In Quality Assurance?

Documentation in quality assurance is critical, and it does not have to be physical; it can be digital. Everything from the client's requirements to your test cases should be documented so that it can be referred to during the development phases. Examples of essential quality assurance documents are the requirement analysis, functional specification, test plan, and defect reports.

Why Are Requirements Important?

The requirements are one of the most important aspects of the software development life cycle. Requirements give the team the details of how the features of the product should function. It is important that the requirements for the project are clear, detailed, and testable. Requirements are critical and, in non-agile development, need to be documented, but this does not mean that agile development needs no documentation at all. Agile development prioritizes working software over documentation; the goal in agile is to meet the requirements without letting documentation take too much focus away from the objective of completing the software.

What Steps Are Needed To Develop And Run Software Tests?

  • Acquire all necessary documents. This includes the requirements, functional design, and internal design specifications.
  • Acquire the budget and schedule needs
  • Determine who will be working on the project and what their responsibilities, reporting requirements, and required processes are
  • Identify which aspects of the application are high risk, set testing priorities accordingly, and determine the scope and possible limitations of the testing
  • Assess what types of testing and approaches will be needed (unit, system, load, usability, integration, functional, etc.)
  • Determine the testing environment (software, hardware, etc.)
  • Determine the tools you will be using when testing, also known as testware
  • Determine the test input data requirements
  • Assess the required tasks and who will be completing them.
  • Set a schedule for testing. Determine the timeline and the milestones
  • Determine the type of data you will use for testing (equivalence classes, boundary value analysis, and error classes)
  • Prepare the test plan to have it reviewed and approved
  • Create test cases
  • Have test cases reviewed and inspected
  • Prepare the test environment and the testing software to be used. Obtain any manuals/documentation/installation guides, test data, etc.
  • Install new software releases if any
  • Perform tests
  • Evaluate the results of the tests and report them
  • Track any problems or bugs
  • Retest if needed
  • Update and maintain all documentation as you continue through the development life cycle. This includes test plans and test cases

What Is A Test Plan?

A test plan is a document that describes the objective, scope, approach, and schedule of the testing process being done for the software. A test plan also contains who will be doing the testing, the tasks that need to be accomplished, the test environment, and the potential risks that will come with testing. The test plan should contain the following items:
  • Title
  • Identification of software including version/release #
  • Revision history of a document including authors, dates, approvals
  • Table of contents
  • Purpose of the document, intended audience
  • Objective of the testing effort
  • Software product overview
  • List of relevant documents such as the requirements, design documents, etc.
  • Legal requirements and standards
  • Traceability requirements
  • Important identifier conventions and naming conventions
  • Team contact information and responsibilities
  • Dependencies and assumptions
  • Project risk analysis
  • Testing priorities
  • Limitations and scope of testing
  • Test outline
  • Input data details, including equivalence classes, boundary value analysis, error classes, etc.
  • Testing Environment - operating system, hardware, etc.
  • Testing Environment Validity analysis - this outlines the difference between the test and production systems and their impact on test validity
  • Setup of testing environment
  • Software migration process
  • Software configuration management (CM) process
  • Requirements of test data setup
  • Database setup requirements
  • Test tools to be used
  • Tools to track bugs/defects
  • Test metrics of the project
  • Requirements for reporting and testing deliverables
  • Sanity testing period and criteria

What Is A Test Case?

A test case is a document that describes the set of conditions and inputs a tester will use to check whether the requirements of an application have been met. Creating test cases is an important part of the development process because it helps to find issues in the requirements of the software. Creating test cases in the early phases of development can mean finding defects sooner, or even finishing the application ahead of schedule once the requirements are met. (A small sketch of a test case record follows the field list below.)

A test case should have the following information:

  • Test Case ID
  • Unit to test
  • Assumption
  • Test Data
  • Steps to be executed
  • Expected Result
  • Actual Result
  • Pass/Fail
  • Comments
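
For illustration, the fields above can be captured in a small record type; this is only a sketch with hypothetical values, not a standard template.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    test_case_id: str
    unit_to_test: str
    assumption: str
    test_data: str
    steps: List[str] = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""
    status: str = "Not Run"   # Pass / Fail / Not Run
    comments: str = ""

# Hypothetical example of a filled-in test case.
tc = TestCase(
    test_case_id="TC-01",
    unit_to_test="Login page",
    assumption="Test user account already exists",
    test_data="user=demo, password=Demo@123",
    steps=["Open login page", "Enter credentials", "Click Login"],
    expected_result="User lands on the dashboard",
)
tc.actual_result = "User lands on the dashboard"
tc.status = "Pass"
print(tc.test_case_id, tc.status)
```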
