The two symposium objectives were orthogonal:

  1. Collaboration and exchange between Test and Verification research threads.
  2. Collaboration and exchange between Academia and Industry.

This was the first FaceTAV workshop, and next year's FaceTAV 2018 may be a bigger event.

The idea is great. There is an old conflict, even a rivalry, between verification (as formal analysis) and testing (as a practical activity, often considered more an art than a science). It is hardly a caricature to say that "verifiers" have always seen "testers" as unprincipled tinkerers, while "testers" have always thought that formal analysis is useful for writing scientific papers but has no practical application in the real world. The good news is that we had several talks about effective formal analysis of large-scale, real-world software projects, as well as on the combination of verification and testing.

We are slightly on the testing side: our platform (simplyTestify) uses model checking and the Temporal Logic of Actions to verify the behavioral specifications of the components of a distributed architecture and to generate valid interaction scenarios for test cases. See:

Hillah, L. M., Maesano, A.-P., De Rosa, F., Kordon, F., Wuillemin, P.-H., Fontanelli, R., Bona, S. D., Guerri, D., & Maesano, L. (2017). Automation and Intelligent Scheduling of Distributed System Functional Testing. International Journal on Software Tools for Technology Transfer, 19 (3), 281-308.
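To give a flavor of the idea (this is an illustrative sketch, not the simplyTestify implementation), one can view a component's behavioral specification as a labelled transition system and exhaustively explore its bounded state space, the way a model checker does, collecting every trace that reaches a terminal state as a candidate interaction scenario for a test case. All names (the protocol states and actions, `valid_scenarios`) are invented for the example:

```python
# Hypothetical example: derive valid interaction scenarios for test cases
# by exploring the state space of a tiny request/response protocol.
from collections import deque

# Behavioral specification as a labelled transition system:
# state -> list of (action, next_state). All names are invented.
SPEC = {
    "idle":    [("client.request", "pending")],
    "pending": [("server.accept", "serving"), ("server.reject", "idle")],
    "serving": [("server.respond", "done")],
    "done":    [],
}

def valid_scenarios(spec, init="idle", final="done", max_len=6):
    """Breadth-first exploration of the transition system; every path
    that reaches the final state is a valid interaction scenario."""
    scenarios = []
    queue = deque([(init, [])])
    while queue:
        state, trace = queue.popleft()
        if state == final:
            scenarios.append(trace)
            continue
        if len(trace) >= max_len:  # bound the search depth, as a model checker does
            continue
        for action, next_state in spec[state]:
            queue.append((next_state, trace + [action]))
    return scenarios

for scenario in valid_scenarios(SPEC):
    print(" -> ".join(scenario))
```

With the bound of 6, this yields two scenarios: the direct path (request, accept, respond) and the path with one rejection and retry. In a real platform the properties would be expressed in a temporal logic and checked by a dedicated model checker rather than a hand-rolled search.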

As an introductory talk, Prof. Bertolino gave an insightful survey of advances in software testing. The topics ranged from old to new:

  • Test generation
  • Test prioritization
  • Mutation
  • Cost of testing
  • Test of concurrent systems
  • MBT - Model-based testing
  • Test in production
  • Machine learning for test
  • Automated test oracles

Concluding her talk, she raised the question of applying these research advances to real-world use cases. In our humble opinion, the problem is not that industry makes insufficient use of research results, but rather that industry is still only partially able to feed the research agenda with real-world use cases, which are two orders of magnitude more complex than the toy problems found in too many research papers. Notably, there were no toy problems presented at FaceTAV, which is an excellent start, and the industry talks were given by technically sophisticated participants from the big platforms (Google, Facebook, Amazon, Microsoft) and some advanced T&V companies like Sapienz and DiffBlue.

Anyway, inspiring talks and great hospitality, though perhaps too much white-box testing and not enough black-box and gray-box testing. For the next edition, we have suggested giving more room to test automation, black-box and gray-box testing of distributed systems, formal verification of models in MBT, and self-healing systems on multi-cloud infrastructures.