This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement n° 813884.

ESR 10: Cloud-Based Testing Workbench for Low-Code Engineering

Faezeh Khorram
IMT Atlantique (IMT) (France)

Objectives

The benefits brought by low-code development in terms of simplicity and maintainability could be lost if the developed software is not correctly verified. A trap would be to consider that software with less code requires less testing. This is indeed the case for unit tests, since the quality of the generated code is mainly determined by the quality of the code generators.

However, the code generators themselves must be verified, which is a duty of Lowcomote experts. Moreover, functional tests remain mandatory, and LCEPs should provide methods and tools to manage their heterogeneity and distribution at scale. Lowcomote will provide a quality workbench for LCEPs. The first objective is to support test configuration.

To follow LCDP principles, the tests should be written in the same language as the software: users should only provide their expert knowledge, while the test implementation is left to the test workbench. Here, MDE techniques will be used to transform the low-code tests into a test model that is merged with the system and infrastructure models. A minimal sketch of such a transformation follows.
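As an illustration only, the Python sketch below shows what such a transformation could look like; the names (LowCodeTest, TestCase, TestModel, to_test_model) and the structure of the models are assumptions made for this example, not part of the Lowcomote tooling.

# Hypothetical sketch: transforming low-code test specifications into a test
# model bound to the system and infrastructure models. All names are
# illustrative assumptions, not actual Lowcomote APIs.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class LowCodeTest:          # what the low-code user writes
    name: str
    target_component: str   # element of the system model under test
    scenario: List[str]     # high-level steps expressed in the LCDP language
    expected: str           # expected observable outcome

@dataclass
class TestCase:             # element of the generated test model
    name: str
    system_element: str
    deployment_node: str    # resolved from the infrastructure model
    steps: List[str]
    oracle: str

@dataclass
class TestModel:
    cases: List[TestCase] = field(default_factory=list)

def to_test_model(tests: List[LowCodeTest],
                  deployment: Dict[str, str]) -> TestModel:
    """Model-to-model transformation: each low-code test becomes a test case
    bound to the node where its target component is deployed."""
    model = TestModel()
    for t in tests:
        model.cases.append(TestCase(
            name=t.name,
            system_element=t.target_component,
            deployment_node=deployment.get(t.target_component, "default-node"),
            steps=t.scenario,
            oracle=t.expected,
        ))
    return model

# Toy usage: the user supplies only the scenario; the binding to system
# elements and deployment nodes is resolved by the workbench.
if __name__ == "__main__":
    deployment = {"OrderService": "cloud-node-1"}
    tests = [LowCodeTest("checkout_ok", "OrderService",
                         ["create order", "pay"], "order confirmed")]
    print(to_test_model(tests, deployment))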

Therefore, since model transformations will be used to generate executable, platform-dependent tests, their heterogeneity and distribution are the main issues for this task. While Cloud computing techniques can help manage distributed tests, they raise quality issues of their own: distributed test data arrive in different formats, and the tests exercise code that may itself be distributed and written in different languages. The second objective is to run the tests and collect their results for diagnostic analysis, which requires handling the heterogeneity of deployment platforms across the Cloud. Finally, dynamic modelling remains an open problem that faces scalability issues: each test execution generates a trace that must be reified and linked to the global model, producing a large amount of data that must be stored and queried efficiently. The sketch below illustrates this trace reification.
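As a purely illustrative sketch of trace reification (ExecutionTrace, GlobalModel and their methods are assumed names, not Lowcomote APIs), each run could be recorded as a model element linked back to its test case and later queried for diagnosis:

# Hypothetical sketch: reifying a test execution as a trace element linked to
# the global model. All names are assumptions made for illustration only.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class ExecutionTrace:
    test_case: str          # link to the test model element that was run
    node: str               # deployment node that executed the test
    verdict: str            # "pass" / "fail" / "error"
    events: List[str]       # ordered observations collected during the run
    timestamp: str

@dataclass
class GlobalModel:
    traces: List[ExecutionTrace] = field(default_factory=list)

    def record_run(self, test_case: str, node: str,
                   verdict: str, events: List[str]) -> ExecutionTrace:
        """Reify one execution and attach it to the global model."""
        trace = ExecutionTrace(test_case, node, verdict, events,
                               datetime.now(timezone.utc).isoformat())
        self.traces.append(trace)
        return trace

    def failures(self) -> List[ExecutionTrace]:
        """Simple diagnostic query over the accumulated traces."""
        return [t for t in self.traces if t.verdict != "pass"]

# Toy usage: record one failing run and query it back for diagnosis.
model = GlobalModel()
model.record_run("checkout_ok", "cloud-node-1", "fail",
                 ["create order", "payment timeout"])
print(model.failures())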

Expected Results

The test workbench will first provide a set of distributed model transformations. To be effective, these must consider two facets of distribution: where the transformations run and where the models they manipulate are stored. The result will be transformations split into parts that run as close as possible to their models and test data. The test workbench will second provide execution facilities, based on virtualisation, to run the tests under heterogeneous infrastructure constraints. Finally, the test workbench will load the execution results into a dynamic model, adding a dynamic dimension to the system and infrastructure models. The scalability of these dynamic models is a major issue, since the number of tests greatly increases the size of the model; a key result will be to distribute the dynamic models to the locations where they are used, as sketched below. An extension of this topic would be to consider non-functional testing, and in particular the performance of low-code software depending on the deployment infrastructure.
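A minimal sketch of this distribution idea, assuming traces are keyed by the deployment node that produced them (the dictionary layout and the function name partition_traces are illustrative assumptions):

# Hypothetical sketch: partitioning the dynamic model so that traces stay
# close to the node where they are produced and consumed.
from collections import defaultdict
from typing import Dict, List

def partition_traces(traces: List[dict]) -> Dict[str, List[dict]]:
    """Group reified traces by deployment node so each partition of the
    dynamic model can be stored and queried where it is used."""
    partitions: Dict[str, List[dict]] = defaultdict(list)
    for trace in traces:
        partitions[trace["node"]].append(trace)
    return dict(partitions)

# Toy usage: two nodes, three runs; each node only stores its own slice.
runs = [
    {"test_case": "checkout_ok", "node": "cloud-node-1", "verdict": "pass"},
    {"test_case": "checkout_ok", "node": "edge-node-2", "verdict": "fail"},
    {"test_case": "refund_ok",   "node": "cloud-node-1", "verdict": "pass"},
]
print(partition_traces(runs))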

Publications

Supervisors

Secondments

Secondment 1: Collaboration with ESR 8 on discovering reusable test models.

Secondment 2: Collaboration with ESR 1 on designing and configuring low-code tests with chatbot help.

Will be visited by

