ESR 10: Cloud-Based Testing Workbench for Low-Code Engineering
Faezeh Khorram, IMT Atlantique (IMT), France
Objectives
The benefits brought by low-code development in terms of simplicity and maintainability could be annihilated if the developed software is not correctly verified. A trap would be to assume that software with less code requires less testing. This may indeed hold for unit tests, since the quality of the code is closely tied to the quality of the code generators.
However, the code generators themselves must be verified, which is a duty of the Lowcomote experts. Moreover, functional tests remain mandatory, and LCEPs should provide methods and tools to manage their heterogeneity and distribution at scale. Lowcomote will provide a quality workbench for LCEPs. The first objective is to support test configuration.
To follow the LCDP principles, the tests should be written in the same language as the software, meaning that users only provide their expert knowledge while the test implementation is left to the test workbench. Here, MDE techniques will be useful to transform the low-code tests into a test model that is merged with the system and infrastructure models.
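As a minimal illustration of this weaving idea (not the workbench's actual API; class and function names such as TestModel and weave_models are invented for the example), a Python sketch of merging a test model with the system and infrastructure models could look as follows:

# Illustrative sketch only: TestModel, SystemModel, InfrastructureModel
# and weave_models are assumptions, not the Lowcomote workbench API.
from dataclasses import dataclass

@dataclass
class SystemModel:            # the low-code application model
    name: str
    operations: list

@dataclass
class InfrastructureModel:    # where the application is deployed
    nodes: list

@dataclass
class TestModel:              # a test expressed at the same level as the app
    target_operation: str
    expected: object

def weave_models(test: TestModel, system: SystemModel,
                 infra: InfrastructureModel) -> dict:
    """Merge the test, system and infrastructure models into one
    platform-independent test specification, i.e. the input of the
    later code-generating transformations."""
    if test.target_operation not in system.operations:
        raise ValueError("test targets an unknown operation")
    return {
        "system": system.name,
        "operation": test.target_operation,
        "expected": test.expected,
        "deploy_on": infra.nodes,   # every node the test must reach
    }

spec = weave_models(TestModel("createOrder", expected="201 Created"),
                    SystemModel("Shop", ["createOrder", "listOrders"]),
                    InfrastructureModel(["eu-node-1", "us-node-2"]))
print(spec)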
Therefore, since model transformations will be used to generate executable, platform-dependent tests, their heterogeneity and distribution are the main issues for this task. While Cloud computing techniques may help to manage distributed tests, they also raise quality issues: distributed test data must be collected in different formats and used to run dependent code that may itself be distributed and written in different languages. The second objective is to run the tests and obtain results that can be analysed for diagnosis, which requires taking into account the heterogeneity of the deployment platforms over the Cloud. Finally, dynamic modelling remains an open problem that faces scalability: each test execution generates a trace that must be reified and linked to the global model, producing a large amount of data that must be stored and queried efficiently.
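A small Python sketch of this trace reification, under the same caveat that the result formats, the TraceElement structure and the node names are hypothetical examples rather than the project's design, might look like this:

# Sketch under assumptions: normalise heterogeneous, platform-specific
# test results into trace elements that feed a dynamic model.
import json
from dataclasses import dataclass

@dataclass
class TraceElement:
    test_id: str
    node: str
    verdict: str
    duration_ms: float

def reify(raw: str, fmt: str, node: str) -> TraceElement:
    """Turn a platform-specific result into a trace element that can be
    linked back to the corresponding test element in the global model."""
    if fmt == "json":                        # e.g. a REST-based test runner
        d = json.loads(raw)
        return TraceElement(d["id"], node, d["verdict"], d["ms"])
    if fmt == "line":                        # e.g. a plain command-line runner
        test_id, verdict, ms = raw.split(";")
        return TraceElement(test_id, node, verdict, float(ms))
    raise ValueError(f"unknown result format: {fmt}")

dynamic_model = [
    reify('{"id": "t1", "verdict": "pass", "ms": 12.5}', "json", "eu-node-1"),
    reify("t1;fail;48.0", "line", "us-node-2"),
]
# the dynamic model can then be queried for diagnosis:
failures = [t for t in dynamic_model if t.verdict == "fail"]
print(failures)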
Expected Results
The test workbench will first provide a set of distributed model transformations. To be effective, they should consider two facets of distribution: where they run, and where the models they manipulate are stored. The result will be transformations divided into parts that run as close as possible to their models and test data. The test workbench will secondly provide execution facilities, through virtualisation, to run the tests under heterogeneous infrastructure constraints. Finally, the test workbench loads the execution results into a dynamic model, adding a dynamic dimension to the system and infrastructure models. The scalability of the dynamic models is a major issue, since the number of tests will greatly increase the size of this model. A result will be to distribute the dynamic models to where they have to be used. An extension of this subject would be to consider non-functional testing, and in particular the performance of the low-code software depending on the deployment infrastructure.
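As a rough illustration of placing transformation parts close to their data (the fragment names, node names and schedule function below are assumptions made for the example, not the project's actual mechanism), consider:

# Hypothetical sketch: assign each transformation part to the node that
# already stores the model fragment it reads, keeping computation close
# to the data.
fragment_location = {            # where each model fragment is stored
    "orders-submodel": "eu-node-1",
    "billing-submodel": "us-node-2",
}

transformation_parts = [         # each part reads one fragment
    {"rule": "Order2TestCase", "reads": "orders-submodel"},
    {"rule": "Invoice2TestCase", "reads": "billing-submodel"},
]

def schedule(parts, locations):
    """Place each transformation part on the node holding its input."""
    plan = {}
    for part in parts:
        node = locations[part["reads"]]
        plan.setdefault(node, []).append(part["rule"])
    return plan

print(schedule(transformation_parts, fragment_location))
# {'eu-node-1': ['Order2TestCase'], 'us-node-2': ['Invoice2TestCase']}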
Publications
- A testing framework for executable domain-specific languages. Faezeh Khorram, Dec. 2022. IMT Atlantique.
- From Coverage Computation to Fault Localization: A Generic Framework for Domain-Specific Languages. Faezeh Khorram, Erwan Bousse, Antonio Garmendia, Jean-Marie Mottu, Gerson Sunyé, Manuel Wimmer, Nov. 2022. ACM 15th SIGPLAN International Conference on Software Language Engineering (SLE'22).
- Automatic Test Amplification for Executable Models. Faezeh Khorram, Erwan Bousse, Jean-Marie Mottu, Gerson Sunyé, Pablo Gómez-Abajo, Pablo C. Cañizares, Esther Guerra, Juan de Lara, Oct. 2022. ACM/IEEE 25th International Conference on Model Driven Engineering Languages and Systems (MODELS '22), Montreal, Canada.
- Advanced Testing and Debugging Support for Reactive Executable DSLs. Faezeh Khorram, Erwan Bousse, Jean-Marie Mottu, Gerson Sunyé, Jul. 2022. Software and Systems Modeling, Springer Verlag, in press.
- Adapting TDL to Provide Testing Support for Executable DSLs. Faezeh Khorram, Erwan Bousse, Jean-Marie Mottu, Gerson Sunyé, Jun. 2021. The Journal of Object Technology, Chair of Software Engineering.
- Challenges & Opportunities in Low-Code Testing. Faezeh Khorram, Jean-Marie Mottu, Gerson Sunyé, Oct. 2020. ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS 2020), virtual conference.
Supervisors
- Gerson Sunyé, Supervision (IMT)
- Jean-Marie Mottu, Supervision (IMT)
Secondments
Secondment 1: Collaboration with ESR8 on discovering reusable test models.
- Ilirian Ibrahimi, ESR 8 (CLMS)
- Yannis Zorgios, Supervision (CLMS)
Secondment 2: Collaboration with ESR1 on designing and configuring low-code tests with chatbot help.
- Lissette Almonte Garcia, ESR 1 (UAM)
- Iván Cantador, Supervision (UAM)
- Esther Guerra, Supervision (UAM)
Will be visited by
- Panagiotis Kourouklidis, ESR 3 (BT)
- Jean Felicien Ihirwe, ESR 4 (INT)