Speakers

Dr Antoine ROLLET, University of Bordeaux, France

Antoine Rollet graduated in Computer Engineering from the University of Nice Sophia Antipolis (France) in 2002. He received his PhD degree in Computer Science from the University of Reims Champagne-Ardenne (France) in 2004. He is currently Associate Professor at the University of Bordeaux / Bordeaux INP (France) and has been a member of the LaBRI CNRS Lab for more than ten years. His research interests include the verification and testing of complex systems, in particular formal testing and runtime verification applied to real-time systems. He has participated in several research projects involving industrial partners such as EDF and Dassault Systèmes. He represents France in the Management Committee of the European COST Action "Runtime Verification beyond Monitoring" (ARVI).


Title of presentation: Tutorial on testing techniques

The objective of this tutorial is to provide the necessary foundations in software testing techniques and research so that the audience (and particularly students) is prepared for the following talks of the summer school. We will address the problem of test case generation and verdicts, and discuss the usual classifications of and approaches to software testing. We will focus on Code-Based Testing and Model-Based Testing and their particularities. For Code-Based Testing, we will recall some standard coverage criteria and present the research problems and some advances in this area. Concerning Model-Based Testing, we will present several formal models (e.g. FSM, LTS) and some extensions, and recall a (non-exhaustive) selection of past results on them.
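One of the standard coverage criteria the tutorial recalls, branch coverage, can be illustrated with a minimal sketch (the function under test and the inputs are invented for this example, not taken from the tutorial):

```python
# Toy illustration of branch coverage: we instrument a small function by hand
# and observe which branches a given test suite exercises.

covered = set()

def classify(x):
    """Function under test: two branches, 'neg' and 'non-neg'."""
    if x < 0:
        covered.add("neg")
        return "negative"
    else:
        covered.add("non-neg")
        return "non-negative"

# A test suite with a single input covers only one branch...
classify(5)
assert covered == {"non-neg"}

# ...adding a second input achieves full branch coverage.
classify(-3)
assert covered == {"neg", "non-neg"}
```

Real coverage tools instrument the code automatically, but the criterion they measure is exactly this notion of which branches the tests reach.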

 
   

Pr Rob HIERONS, Brunel University, UK

Rob Hierons received a BA in Mathematics (Trinity College, Cambridge), and a Ph.D. in Computer Science (Brunel University). He then joined the Department of Mathematical and Computing Sciences at Goldsmiths College, University of London, before returning to Brunel University in 2000. He was promoted to full Professor in 2003. His research largely concerns the automated generation of efficient, systematic test suites on the basis of program code, models or specifications. He also has a significant interest in program analysis and automated transformation techniques such as program slicing. He is joint Editor-in-Chief of the Journal of Software Testing, Verification, and Reliability (STVR) and is on the editorial board of both The Computer Journal and Formal Aspects of Computing. He has organised or been on the steering committee of several international conferences and workshops.


Title of presentation: Complete Controllable Test Suites for Distributed Testing

Some systems interact with their environment at several physically distributed interfaces/ports and when testing such a system it is normal to place a local tester at each port. This talk will explore problems that occur when the local testers cannot interact with one another during testing and there is no global clock. In this context, each local tester observes only the sequence of inputs and outputs at its interface and this is called a local trace. Typically, we cannot reconstruct the global trace that occurred based on the local traces. It has been known for many years that this situation can introduce additional controllability and observability problems in testing and while there has been interest in test generation algorithms that overcome such problems, these algorithms lack generality.
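The loss of information described above can be made concrete with a small sketch (port numbers and event names are invented for the example): two different global traces project onto identical local traces, so the distributed testers cannot reconstruct which global order actually occurred.

```python
def project(global_trace, port):
    """Local trace observed at a port: that port's events only, in order."""
    return [ev for p, ev in global_trace if p == port]

# Two different global traces, as (port, event) pairs; the interleaving
# across ports differs, but each port's own order is the same.
t1 = [(1, "?a"), (2, "!x"), (1, "!y")]
t2 = [(2, "!x"), (1, "?a"), (1, "!y")]

# Each local tester sees exactly the same thing in both runs, so the
# global trace cannot be recovered from the local observations alone.
assert project(t1, 1) == project(t2, 1) == ["?a", "!y"]
assert project(t1, 2) == project(t2, 2) == ["!x"]
```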

 
   

Dr Sylvain HALLE, Université du Québec à Chicoutimi, Canada

Sylvain Hallé, Ph.D. is an Associate Professor in the Department of Computer Science and Mathematics at UQAC, where he has been working since 2010. He obtained a Ph.D. in Computer Science from Université du Québec à Montréal (UQAM) and has worked as a postdoctoral research fellow at University of California, Santa Barbara, from 2008 to 2010. Pr. Hallé has received numerous international awards for his research on software testing and verification, and earned two Governor General of Canada’s Academic Medals, in 1997 and 2009.


Title of presentation: Chasing Bugs with the BeepBeep Event Stream Processor

Runtime verification is the process of observing a sequence of events generated by a running system and comparing it to some formal specification for potential violations. We show how the use of the BeepBeep event stream processor can greatly speed up the testing phase of a video game under development, by automating the detection of bugs when the game is being played. This process generalizes to a wide number of other use cases, including web application debugging and network intrusion detection.
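The core idea of runtime verification can be sketched in a few lines of plain Python (this is NOT the BeepBeep API, and the game events are made up for the example): a monitor consumes the event stream and flags the first event that violates the specification.

```python
# Minimal event-stream monitor: feed events one by one and report the
# position of the first violation of a property, or None if none occurs.

def monitor(events, prop):
    for i, e in enumerate(events):
        if not prop(e):
            return i        # verdict: violation detected at this event
    return None

# Hypothetical specification for a game under test: a player's health
# must stay within [0, 100] at every tick of the event stream.
trace = [{"health": 100}, {"health": 60}, {"health": -5}, {"health": 20}]
violation = monitor(trace, lambda e: 0 <= e["health"] <= 100)
assert violation == 2       # the bug is flagged as soon as it happens
```

A stream processor like BeepBeep generalizes this pattern by letting such monitors be composed from reusable processing units, rather than written ad hoc.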

 
   

Pr Roland GROZ, Grenoble INP, France

Roland Groz graduated from Ecole Polytechnique and Telecom Paris Tech (1982) in Telecommunication Engineering. He received a PhD in Computer Science from Université de Rennes I in 1989, and an habilitation (HDR) in 2000 from Université de Bordeaux I. He worked for 20 years for France Telecom in the research lab at Lannion, covering protocol engineering, verification techniques and software engineering for telecommunication systems. Since 2002 he has been full professor at Grenoble Institute of Technology (Grenoble INP) in the LIG lab. His research addresses testing techniques and model inference, with a particular focus on security testing and the detection of vulnerabilities.

Title of presentation: Testing and Machine Learning

Model-based testing, and more generally model-driven engineering, assumes that models are available for the system under test and that they have been maintained to correspond to the current state of development. In most cases, however, we cannot rely on any model. Yet it is possible to retrieve models through machine learning techniques: testing a system collects samples of its behaviour, so that with well-designed sets of tests it is possible to retrieve accurate models. This talk will present methods that can be used to retrieve models through active testing techniques, under various assumptions and for different types of systems and models. We shall also see various applications of these algorithms in testing activities.
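The kernel of active model inference can be hinted at with a toy sketch (this illustrates the underlying idea of query-based learning in the style of Angluin's L*, not any specific algorithm from the talk; the black box here is an invented toggle switch): tests are output queries, and two access sequences are treated as reaching the same state only if no distinguishing suffix separates them.

```python
# Hypothetical black box: a toggle switch starting in state OFF;
# the input symbol 't' flips it, anything else leaves it unchanged.
def sut(word):
    state = "OFF"
    for symbol in word:
        if symbol == "t":
            state = "ON" if state == "OFF" else "OFF"
    return state            # observable output after the input word

def distinguished(u, v, suffixes):
    """Do access sequences u and v lead to observably different states?"""
    return any(sut(u + s) != sut(v + s) for s in suffixes)

# The empty word and "t" reach different states of the hidden machine,
# while "tt" returns to the initial state: the learner merges "" and "tt".
assert distinguished("", "t", [""])
assert not distinguished("", "tt", ["", "t"])
```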

 
   

Dr Jurgen GROSSMANN, Fraunhofer FOKUS, Germany

As a member of the Competence Center "System Quality Center" (SQC), Jürgen Großmann is involved in and responsible for validation, verification and testing projects on next-generation networks and software technologies for embedded systems. He is an expert on model-based development and model-driven testing, as well as on security engineering and security testing. He has experience in numerous standardization activities for various standardization bodies, including OMG, ETSI, ASAM and AUTOSAR. He is a technical expert for test specification in ETSI working groups and rapporteur for the ETSI MTS Security SIG.

Jürgen Großmann received his PhD from the Technical University of Berlin and his degree as a computer scientist from the University of Applied Sciences in Berlin. Before joining the Fraunhofer Institute FOKUS, he was a researcher at DaimlerChrysler AG (Department of Research and Technology - Software, Methods and Tools). He has been involved in many national and international research projects and has contributed to national and international conferences. His research interests include the development of methods, tools and tool infrastructure for model-based development, model-based testing, and security engineering and testing.


Title of presentation: Integrating security testing, risk assessment and compliance assessment

Assessing and managing cyber security has become increasingly important due to the growing interconnectivity of computerized systems and their widespread use in society. A comprehensive assessment of cyber security is often challenging, as it spans different domains of knowledge and expertise. Identifying cyber security vulnerabilities requires detailed technical expertise and knowledge, while the assessment of the organizational impact and legal implications of cyber security incidents may require expertise and knowledge related to risk and compliance.

Risk assessment, security testing, and legal compliance are basic and well-studied areas of activity, supported by standards such as ISO 31000 and ISO 29119. However, they are normally treated as distinct areas that are isolated from one another. While industry demands integrative approaches that cope with security as a whole, no standardized approach currently exists that sufficiently emphasizes the systematic integration of security risk assessment, security testing, and legal compliance.

In my talk I will present a method that provides a comprehensive approach to cyber security assessment by integrating the notions of security risk, security testing, and legal compliance. I will show how risk assessment can be used to guide security testing and compliance assessment, as well as how security testing provides valuable technical feedback to optimize risk assessment results.

 

 
   

Dr Jorge López, Télécom SudParis, France

Jorge is currently a postdoctoral researcher at Télécom SudParis / Université Paris-Saclay in Paris, France. He obtained his PhD degree in the area of informatics from Université Paris-Saclay, and his master's and bachelor's degrees in the area of systems, informatics and computer science from Galileo University in Guatemala City. Jorge has experience working in European and international research projects such as the ITEA3 Measuring Software Engineering (MEASURE) project and the STIC AmSud Mobile Crowd Sensing and Data Offloading in Collaborative Networks (MOSAIC) project. He has worked in academic/research positions at Tomsk State University in Russia and Galileo University in Guatemala. He also has industry experience with international companies such as Huawei Telecommunications, Telus International and eApps Hosting.

Currently his research interests include: software testing, quality estimation, formal languages and algorithms, network monitoring and security, and distributed systems.

 

Title of presentation: “Non-intrusive” Testing Techniques for Communication Protocols

This lecture focuses on the analysis of techniques for testing communication systems whose operation cannot be interrupted. It is divided into two main parts. The first part concerns a white-box testing approach; it assumes that the live system cannot be interrupted and no intrusion can be performed, but that access to the code of the communication system is possible. Basic notions of static analysis are introduced, mainly the foundations of code analysis. The second part presents a black-box testing approach; it assumes that the system cannot be interrupted, nor is access to its application code allowed. However, access to the implementation's input/output messages is assumed. Basic notions of monitoring techniques are given, mainly notions of invariants and trace analysis.
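A trace-analysis invariant of the kind used in the black-box part can be sketched as follows (the message format and the invariant are invented for the example): the captured input/output messages are checked passively, without ever touching the running system.

```python
# Passive check of a protocol invariant over a captured message trace:
# every response must match the id of a previously observed request.

def holds(trace):
    pending = set()
    for kind, msg_id in trace:
        if kind == "request":
            pending.add(msg_id)
        elif kind == "response" and msg_id not in pending:
            return False    # response with no matching request: violation
    return True

assert holds([("request", 1), ("response", 1)])
assert not holds([("response", 2), ("request", 2)])   # out of order
```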

 
   

Pr Luigi Logrippo, Université du Québec en Outaouais, Canada

Luigi Logrippo received a degree in law from the University of Rome La Sapienza (Italy), followed by Master's and PhD degrees in Computer Science, respectively from the Universities of Manitoba and Waterloo (Canada). After working in industry for some years, he was with the University of Ottawa (Canada) for almost thirty years, where he was Chair of the Computer Science Department for seven years. In 2002 he moved to the Université du Québec en Outaouais, Département d'informatique et d'ingénierie, while remaining associated with the University of Ottawa as Emeritus Professor. His interest areas are algebraic and logic methods with their applications to the specification of software requirements for complex systems, such as distributed and telecom systems, or organizational systems. For a number of years he worked on the development of tools and methods for LOTOS, a formal specification language for distributed systems. Past research dealt with the formal analysis of the feature-rich communications services that are made possible by internet telephony and the web, of the policies that govern them, and of their interactions. Currently he is doing research on access control systems, with their applications to data security in organizations. Related interest areas include web services, e-commerce, privacy, data protection in the Cloud, and legal aspects. He participates or has participated in the work of several standardization groups in the area of telecommunications (ITU, ISO, IETF), as well as in IFIP WG 6.1. He is a lifetime member of the ACM.

 

Title of presentation: Access control to data based on concepts of data flow history and risk (joint work with Sofiene Boulares and Kamel Adi)

Access control systems allow or deny access of subjects to data objects according to policies. In some organizations, policies are expressed by using fixed clearance levels of subjects and classification levels of objects (e.g. from TopSecret to Unclassified). In others, they are expressed in consideration of roles, and the role structure is supposed to be fairly stable (e.g. role Doctor has different permissions than role Nurse). The talk will start by reviewing these concepts. The study of some examples will lead us to the conclusion that more flexible access control systems are needed, allowing permissions to change in time, according to risk considerations determined by data flow histories. The use of such systems is important in dynamic environments such as the Web and the Cloud, where subject-object relationships and data flows vary rapidly over time.

We will then illustrate access control mechanisms where the security levels of subjects and data objects, as well as the risk of allowing subjects to access data objects, and thus access authorizations, are determined dynamically by information flow history. In other words, the security levels of subjects and objects in these systems are not fixed, but are determined by the importance of the data that has flowed to them over time. A subject can be authorized to access an object if the access is not considered risky, in view of what data the subject already knows and what data the object already contains. A formalism capable of capturing these concepts will be illustrated, together with some of its properties.
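The flavour of history-dependent levels can be sketched with a toy model (this is an illustration invented for this page, not the authors' formalism): reading raises a subject's level to what it now knows, and a write is denied when the flow history makes it risky.

```python
# Toy flow-history model: levels float up with the data that flows.
LEVELS = {"Unclassified": 0, "Secret": 1, "TopSecret": 2}

class Entity:
    def __init__(self, level=0):
        self.level = level          # current level, set by flow history

def read(subject, obj):
    """Reading raises the subject to the object's level: it now knows it."""
    subject.level = max(subject.level, obj.level)

def write(subject, obj, max_risk=0):
    """Deny a write whose risk (potential downward flow) is too high."""
    risk = subject.level - obj.level
    if risk > max_risk:
        return False                # denied: history makes this risky
    obj.level = max(obj.level, subject.level)
    return True

alice = Entity()
report = Entity(LEVELS["Secret"])
wiki = Entity()                     # public, Unclassified object
read(alice, report)                 # Alice's level rises to Secret
assert not write(alice, wiki)       # writing Secret data down is denied
```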

 
   

Pr Sébastien Salva, University of Auvergne, France

Sébastien Salva is a professor at the University of Auvergne (France) and a fellow of the LIMOS laboratory, team Sic. He received the HDR (Habilitation à diriger des recherches) in 2012 from the University of Auvergne and a Ph.D. degree in computer science from Reims University (France) in 2001. His research interests concern Model-Based Testing (active and passive testing) and automatic testing applied to different kinds of systems. His research also addresses model inference for testing purposes (mobile apps and industrial systems).

 

Title of presentation: Testing in the clouds

« The market seems to have come to the conclusion that cloud computing has a lot in common with obscenity: you may not be able to define it, but you'll know it when you see it » (James Urquhart, The Wisdom of Clouds). Beyond this intriguing definition, the presentation will introduce the general architecture and characteristics of Clouds and how Web applications are executed in Clouds. The talk will then focus on the difficulties encountered when testing applications deployed in Clouds, and will present some methods which attempt to overcome these issues, under various assumptions.
 
   

Dr Safouan Taha, CentraleSupelec, France

Safouan Taha is an associate professor at CentraleSupélec and a member of the LRI CNRS Lab. He got his engineering diploma from ENSIMAG in 2004 and a PhD in Computer Science from CEA Saclay in 2008, and joined the Department of Computer Science of CentraleSupélec right after. His research focuses on the design and verification of embedded systems. He is the author of TemporalOCL, a temporal extension of OCL used to formalize and then verify dynamic properties of object-oriented systems.


Title of presentation: Testing Temporal Property Patterns

Dynamic properties are commonly described by temporal logics such as Linear Temporal Logic (LTL). These formalisms are difficult for system designers and validation engineers to master. To ease their understanding and writing, predefined patterns and scopes exist, together with their translations into the usual temporal logics (LTL, CTL, etc.). This translational semantics is not compositional and thus not easily extensible to other patterns/scopes. We will present a compositional automata-based semantics that is well adapted to verifying properties, and to generating and evaluating tests, because it comes with many of the usual structural coverage criteria.
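One classical pattern/scope pair, "Absence of P after Q", can be monitored by a two-state automaton, sketched here in Python (the event names P and Q are placeholders; this illustrates the general pattern/scope idea, not the talk's semantics):

```python
# Two-state monitor automaton for the pattern "Absence of P after Q":
# accept a finite trace iff no P occurs after the first Q.

def absence_after(trace, P, Q):
    state = "before_Q"
    for e in trace:
        if state == "before_Q" and e == Q:
            state = "after_Q"               # scope is now active
        elif state == "after_Q" and e == P:
            return False                    # violation: P after Q
    return True

assert absence_after(["P", "Q", "x"], "P", "Q")      # P before Q: fine
assert not absence_after(["Q", "x", "P"], "P", "Q")  # P after Q: violation
```

Because such monitors are plain automata, they compose, and structural criteria (e.g. covering every state and transition of the monitor) can guide test generation.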

 

Pr Adenilso Simao, University of Sao Paulo, Brazil

Adenilso Simao received the BS degree in computer science from the State University of Maringa (UEM), Brazil, in 1998, and the MS and PhD degrees in computer science from the University of Sao Paulo (USP), Brazil, in 2000 and 2004, respectively. Since 2004, he has been a professor of computer science at the Computer Systems Department of USP. From August 2008 to July 2010, he was on sabbatical leave at the Centre de Recherche Informatique de Montréal (CRIM), Canada. He has received best paper awards at several important conferences, as well as distinguished teacher awards on many occasions. His research interests include software testing and formal methods.
 
 

Title of presentation: Generating Checking Sequences: When Resetting is not an Option

Model-based testing is an attempt to cope with the demand for highly dependable software in a timely way. The system under test (SUT) is modelled using a suitable notation, which can then be employed in the generation of test cases. The system is then executed with the test cases. When several test cases are generated, the SUT should be reset, i.e., brought to its initial state, before applying each of them. However, there are scenarios where resetting the SUT is not desirable (e.g., it is too costly) or not possible (e.g., when executing the system changes it in an unrecoverable way). In this case, a single test should be generated and executed. Such a test is usually called a checking sequence. In this talk, we will discuss techniques which can be used for testing systems which cannot be reset. We will introduce the classical methods for checking sequence generation, as well as some recent advancements. Finite state machines will be the notation used for modelling the system.
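The setting can be sketched with two toy Mealy machines (invented for this example; this shows only how one reset-free sequence is executed and compared, not how a real checking sequence is constructed):

```python
# Executing a single input sequence, with no reset, against an FSM and
# comparing the produced outputs with those of the specification.

def run(fsm, state, inputs):
    """Apply an input sequence from 'state'; return the output sequence."""
    outputs = []
    for i in inputs:
        state, o = fsm[(state, i)]
        outputs.append(o)
    return outputs

# Specification: a two-state Mealy machine over the single input 'a'.
spec = {("s0", "a"): ("s1", 0), ("s1", "a"): ("s0", 1)}
# Faulty implementation: a transfer fault, s1 loops instead of returning.
impl = {("s0", "a"): ("s1", 0), ("s1", "a"): ("s1", 1)}

seq = ["a", "a", "a"]               # one sequence, applied once, no reset
assert run(spec, "s0", seq) != run(impl, "s0", seq)   # fault exposed
```

A true checking sequence is constructed so that, under stated assumptions on the fault domain, any faulty implementation produces a different output somewhere along the single run.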

 

Pr Alexander K. Petrenko, Russian Academy of Sciences, Russia

Alexander K. Petrenko was born in Khimki, Moscow Region, on June 11, 1951. He graduated from the Moscow Electrical Machinery Institute (MIEM) in 1974 with the diploma thesis "Translator from LISP language" (under the supervision of A.M. Rodionov and V.M. Mihelev). In March 1974, he got a position at the Keldysh Institute of Applied Mathematics, Soviet Academy of Sciences. In 1983, he got his PhD degree for the work "Instrumental means for development of interactive tool sets" (under the supervision of Vs.S. Shtarkman). In 2003, he obtained his Doctor of Sciences degree at the Institute for System Programming RAS for the work "Specification-based testing in software development processes". His current academic status is senior researcher (1994). Since 1994, he has been working at the Institute for System Programming RAS, where he is currently Head of the Software Engineering Department. Meanwhile, he also works at Moscow State University as a professor in the System Programming department of the Faculty of Computational Mathematics and Cybernetics.

Title of presentation: Testing and verification of operating systems and information security issues

Reliability and fault tolerance, along with resistance to attacks by malicious software, are the most important requirements for operating systems (OS), as the OS is the foundation of the software stack as a whole. In the past 10-15 years, interest in testing and verification of operating systems has grown, for the following main reasons: i) the active penetration of computer systems into all spheres of life; ii) the increasing diversity of operating systems; iii) the development of testing and verification technologies, as well as the introduction of new hardware capabilities for both dynamic and static analysis. However, no single verification method nowadays ensures a complete check of an operating system. Moreover, the concept of "full analysis" in the case of an operating system requires significant clarification, because the quality of the operating system and the correctness criteria are determined in the context of the system requirements and the hardware platform's behaviour specification. Therefore, to introduce advanced testing and verification methods for OS verification, it is necessary to develop methods and approaches for deriving environment specifications, as well as methods for integrating a variety of verification techniques for the "environment – OS" model, where the OS is represented by its implementation together with the specification of properties covering integrity, confidentiality, etc. The talk presents a short state-of-the-art review and the experience of ISP RAS in the development of testing and verification tools and their use in industrial and research projects for testing and verification of the Linux operating system and an ARINC-653 compatible RTOS.

 

Pr Nina Yevtushenko, Tomsk State University, Russia

Nina Yevtushenko received her Ph.D. degree in Computer Science from Saratov State University in 1983. She received the "Doctor of Technical Sciences" degree and a professorship title from the Supreme Attestation Committee in Moscow. Until 1991 she worked as a researcher at the Siberian Scientific Institute of Physics and Technology. In 1991 she joined Tomsk State University as a professor, and she presently leads a research team working on the synthesis and analysis of discrete event systems. She has stayed as a visiting researcher/professor at Moscow State University, Université de Montréal (Canada), the University of Ottawa (Canada) and the Institut National des Télécommunications d'Evry (France). She has published around 100 research papers. She currently serves as a program committee member for a number of international workshops and conferences. Her research interests include formal methods, automata theory, distributed systems, and protocol and software testing.

 

Title of presentation: FSM Based Testing: From TAROT-1 to TAROT-12

A number of methods are known for deriving test suites based on the model of a Finite State Machine (FSM). FSMs naturally capture reactivity, and that is the reason why they are widely used to derive tests with guaranteed fault coverage for checking functional requirements of various types of digital systems, including communication protocols. Given a specification machine and a fault domain, a complete test suite can be derived which detects any faulty implementation in the fault domain. Nowadays, FSM-based test derivation has been extended with powerful novel methods for different FSM types. At TAROT'2005, the main deterministic FSM-based methods were presented for deriving complete test suites when the state number of an implementation FSM is limited, and in this presentation we show corresponding experimental results for such methods. We also present how FSM-based methods have been extended to deal with nondeterministic, possibly partial FSMs, which are usually derived as RFC-based specifications for communication protocols, and show how adaptivity can help in deriving shorter test suites. Moreover, despite the fact that almost all problems in FSM-based test derivation are rather complex (PSPACE-complete w.r.t. the number of states), we consider some proper classes of machines where the complexity is much lower, and encourage researchers to look for novel such classes corresponding to real-life interactive systems. In conclusion, we briefly discuss how FSM extensions can be useful for checking not only functional but also non-functional requirements of a system under test.