DT8021 Ed 2015
- 1 Contact
- 2 Objectives
- 3 Assessment
- 4 Slides and Study Material
- 5 Project Description
- 6 Academic Papers
- 7 Acknowledgment
- 8 Back to Home
- Office: E 305
- Telephone: 035 16 71 22
- Email: firstname.lastname@example.org
Lectures: 2 hours of lectures per week (typically on Mondays from 10:15 to 12:00 in R3144)
Labs: 3 hours per week
- Knowledge and understanding
- Explain various classifications of test techniques
- Explain behavioral modeling techniques, model-based testing, and test case generation from behavioral models
- Explain the latest research trends in the area of testing and alternatives to testing, particularly model checking
- Skills and abilities
- Apply the traditional test techniques to realistic examples
- Write abstract behavioral models for embedded systems
- Use behavioral models to perform model-based testing
- Judgement and approach
- Analyse the suitability of various test techniques given the test assumptions and test goals
- Analyse research results in the field of testing embedded systems
Assessment is based on three deliverables: the paper presentation (oral presentation + report), the practical project (software + report), and a written examination. Each contributes one-third of the final mark.
- Final Examination at Chalmers/GU, March 2015 (also available with solutions)
- Final Examination at Chalmers/GU, April 2015 (also available with solutions)
Slides and Study Material
|Lecture|Handouts / Slides|Other Material|
|---|---|---|
|Lecture 1: Terminology and Functional Testing (March 23, 2015)|Handouts, Slides|Chapters 1 and 4 of Ammann and Offutt|
|Lecture 2: Functional Testing (March 30, 2015)|Handouts, Slides|Chapters 6 and 7 of Jorgensen|
|Lecture 3: Coverage Criteria (April 1, 2015)||Chapter 2 of Ammann and Offutt; Chapters 9 and 10 of Jorgensen|
|Lecture 4: Guest Lecture (April 9, 2015)|||
|Lecture 5: Data Flow Testing (April 13, 2015)|||
|Lecture 6: Model Checking (April 20, 2015)|||
|Lecture 7: Slicing and Debugging (April 27, 2015)||Chapters 5, 6, and 13 of Zeller; M. Weiser, Program Slicing|
|Guest Lecture: UI Testing (May 11, 2015)||VGT Cheat Sheet (Examples, Exercises)|
|Lecture 8: Reviewing Model Examination (May 18, 2015)||Model Examination Solutions|
- P. Ammann and J. Offutt. Introduction to Software Testing. Cambridge, 2008.
Recommended Reading Material
- P.C. Jorgensen. Software Testing: A Craftsman’s Approach. Auerbach Publications, 3rd edition, 2008.
- L. Aceto, A. Ingolfsdottir, K.G. Larsen, and J. Srba. Reactive Systems: Modelling, Specification and Verification, Cambridge University Press, 2010.
- A. Zeller. Why Programs Fail: A Guide to Systematic Debugging. Morgan Kaufmann, 2nd edition, 2009.
- S. Acharya. Test-Driven Development with Mockito. Packt Publishing, 2013.
- M. Grzejszczak. Instant Mockito. Packt Publishing, 2013.
- F. Vaandrager. A First Introduction to Uppaal. In J. Tretmans, editor, Quasimodo Handbook. To appear.
- M. Weiser. Program Slicing. In Proc. of ICSE'81, pp. 439–449, ACM, 1981.
The project is about Test-Driven Development of a WhatsApp-like client-side application, which we call WhatsUpHH. The final implementation will comprise an Android TCP/IP-based client implemented in Java, allowing for communication with the server through an XML interface for adding, editing, and fetching messages.
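The course page does not fix the details of the XML interface; purely as an illustration, a client-to-server request for adding a message could look like the fragment below. All element and attribute names here are hypothetical assumptions, not the project specification.

```xml
<!-- Hypothetical sketch only: the actual message schema is defined by the project server -->
<request type="add">
  <message sender="alice">
    <body>Hello from WhatsUpHH!</body>
  </message>
</request>
```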
From the testing and verification perspective, the project comprises the application of the following techniques:
- test-driven development,
- unit testing using jUnit,
- gathering coverage metrics using EclEmma (or similar tools),
- integration testing, including developing stubs using Mockito (or similar tools), and
- UI testing using the Sikuli tool.
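To make the test-driven style concrete, the sketch below shows one red-green cycle for a hypothetical unit of the client. The class and method names (`MessageStore`, `add`, `fetchAll`) are illustrative assumptions, not the project API; in the actual project the checks in `main` would be jUnit test methods, and collaborators would be stubbed with Mockito rather than by hand.

```java
// Hypothetical TDD sketch: the tests in main() were (conceptually) written
// first and fail until MessageStore is implemented. Plain main() stands in
// for jUnit so the example is self-contained.
import java.util.ArrayList;
import java.util.List;

public class MessageStoreDemo {

    // The unit under test: a minimal in-memory message store.
    static class MessageStore {
        private final List<String> messages = new ArrayList<>();

        void add(String text) {
            // Reject blank input; trim surrounding whitespace before storing.
            if (text == null || text.trim().isEmpty()) {
                throw new IllegalArgumentException("empty message");
            }
            messages.add(text.trim());
        }

        List<String> fetchAll() {
            return new ArrayList<>(messages); // defensive copy
        }
    }

    public static void main(String[] args) {
        MessageStore store = new MessageStore();

        // Test 1: a stored message is trimmed and retrievable.
        store.add("hello ");
        if (!store.fetchAll().equals(List.of("hello"))) {
            throw new AssertionError("add should trim and store the message");
        }

        // Test 2: blank messages are rejected.
        boolean rejected = false;
        try {
            store.add("   ");
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        if (!rejected) {
            throw new AssertionError("blank messages must be rejected");
        }

        System.out.println("all tests passed");
    }
}
```

Running the "tests" before `MessageStore` exists (red), implementing just enough to pass (green), and then cleaning up (refactor) is the cycle the project phases expect; coverage of the resulting tests can then be measured with EclEmma or a similar tool.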
This is a group project that is to be carried out in groups of three. You need to have formed your groups and emailed your group composition to the lecturer by Thursday March 30 at 17:00; please put '[DT8021] Group Registration' in the subject line of your email.
The deliverables comprise a report and the implementation code. The report should document the major steps in each phase, with code snippets (a few concise examples) of how they are implemented and possibly screenshots of the results; extensive pieces of program code should not be included. The most important factors in judging the reports are their logical structure and a sufficiently clear explanation of the steps (all figures and code snippets should be accompanied by clear descriptions).
The deadlines are to be respected; each phase is to be delivered by email and discussed before its deadline. You may get a one-week extension for at most two phases; to obtain one, email the instructors before the deadline. Sending the email is sufficient for receiving the extension, and you need not wait for a response.
As a general principle, when you find an ambiguity in the requirements, make a reasonable assumption and document it clearly in your report.
|Phase|Deadline|
|---|---|
|Phase 1: TDD of a Unit|April 17, 2015, 23:59|
|Phase 2: Integration (Testing) of the Client|May 1, 2015, 23:59|
|Phase 3: UI Testing|May 22, 2015, 23:59|
Each group is supposed to write a short report on an academic paper and present it in the allocated time slot. The following are examples of papers that may be selected for this exercise:
- P. Godefroid. Compositional dynamic test generation. In Proc. of POPL 2007, ACM, 2007.
- J.H. Siddiqui and S. Khurshid. Scaling symbolic execution using staged analysis. ISSE 9(2): 119–131, 2013.
- C. Cadar, P. Godefroid, S. Khurshid, C.S. Pasareanu, K. Sen, N. Tillmann, and W. Visser. Symbolic execution for software testing in practice: preliminary assessment. In Proc. of ICSE 2011, ACM Press, 2011.
The report is to be submitted by May 15; the presentations will be held on May 18 from 09:00 to 10:00 in lecture hall R3144.
This course is based on the material produced for several earlier editions of the course given at TU Eindhoven, the Netherlands. Part of the material was produced by (or is based on material produced by) Judi Romijn and Tim Willemse.