Detailed description of process and methodology

Every new release of a Generic Enabler is included in the QA plan. This plan is a shared space where the FIWARE QA team keeps track of the components to test and of the work done. Each kind of test runs asynchronously; once all tests are finalized for a given GE version, an updated value for the overall label of that GE can be computed.

The FIWARE QA Team is continuously assessing all Generic Enablers.


  1. The designed tests check the completeness, consistency, soundness and usability of GEs. Some checks are subjective because they depend on the evaluator's profile; therefore two evaluation levels (decision makers and developers) are identified.
  2. Checking completeness means verifying that each released artefact is complete in all its parts. A detailed checklist of all the verifications to execute is linked to these tests. This information is mostly useful to high-level user profiles that need an overview to determine whether an application fits their needs. A further completeness verification is to check whether the Programmer’s Guide properly covers the Open Specification.
  3. The consistency check verifies that the release contains all the expected artefacts and that they are consistent with one another.
  4. The soundness verification ensures that each artefact's content fits its purpose and suits the profile of those who use it. For example, the content of a document might be sufficient for a manager who must decide whether to adopt a FIWARE-based solution, yet insufficient for the developer who must implement it, and vice versa.
  5. Finally, the usability check verifies that a document or a package is easily usable, for example that an installation manual actually allows a released package to be installed properly.
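The completeness check above can be sketched as a small script that compares the artefacts shipped in a release against a required checklist. The artefact names below are illustrative assumptions, not the actual FIWARE checklist:

```python
# Hypothetical completeness check: verify that a GE release ships every
# artefact required by the QA checklist. Artefact names are illustrative.
REQUIRED_ARTEFACTS = {
    "open_specification",
    "programmers_guide",
    "installation_guide",
    "release_package",
}

def missing_artefacts(released):
    """Return the checklist entries absent from the released artefacts."""
    return REQUIRED_ARTEFACTS - set(released)

release = {"open_specification", "release_package", "programmers_guide"}
print(sorted(missing_artefacts(release)))  # -> ['installation_guide']
```

A real checklist would also cover per-artefact criteria (e.g. that the Programmer's Guide covers the Open Specification), which a simple set difference cannot express.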


  1. Select the GE to be tested.
  2. Define the test cases in accordance with the API specifications provided by the documentation.
  3. Prepare a running instance of the selected GE. The following sub-steps, listed from fastest to slowest expected setup, are alternatives to one another for obtaining, in the shortest possible time, an up-and-running instance of the enabler to be tested.
    1. On the FIWARE LAB (“Compute->Images” menu), find the GE image to instantiate whose version matches the one to be tested. If none is available, move to the next option.
    2. In the “Instances” section of the selected GE on the FIWARE Catalog website, identify the service endpoint corresponding to an up-and-running instance of the version to be tested. If no such endpoint is available, or if this solution is not viable (e.g. because there is no certainty about the GE instance version at the endpoint), move to the next option.
    3. Download the Dockerfile from the link provided on the FIWARE Catalog website for the version to be tested. This requires Docker to be already installed in the test environment. If no Dockerfile is available, move to the next option.
    4. Install the GE from scratch, following the links and documentation provided in the FIWARE Catalog for the selected version of the product.
    5. If none of these ways of obtaining an up-and-running instance of the GE is viable, the test cannot start and a high-priority issue is opened with the GE owner.
  4. Develop the test script (JMeter is used to automate the testing process) with the assertions needed to verify, at runtime and in a single step, that the obtained results correspond to those expected by the API specifications.
  5. Run the test script (during this step the compliance of each API invocation's actual results with the expected ones is also verified).
  6. Collect the results and publish the TF/TE rate.
  7. Analyse the results and open an issue for each failed API.
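Steps 4–6 above can be illustrated with a minimal Python sketch. In the real process JMeter assertions play this role; the test-case data and the interpretation of "TF/TE" as tests failed over tests executed are simplifying assumptions:

```python
# Minimal sketch of steps 4-6: run assertion-based checks against API
# results and compute the TF/TE (tests failed / tests executed) rate.
# The expected/actual values below are illustrative, not real GE output.

def run_case(expected_status, expected_body, actual_status, actual_body):
    """A test case passes only if status code and body match the spec."""
    return expected_status == actual_status and expected_body == actual_body

def tf_te_rate(results):
    """Fraction of failed test cases over executed ones."""
    failed = sum(1 for passed in results if not passed)
    return failed / len(results)

# Illustrative outcomes of three API invocations.
results = [
    run_case(200, {"id": 1}, 200, {"id": 1}),  # passes
    run_case(201, {"ok": True}, 500, {}),      # fails -> open an issue
    run_case(204, None, 204, None),            # passes
]
print(f"TF/TE rate: {tf_te_rate(results):.2f}")  # -> TF/TE rate: 0.33
```

In JMeter the same idea is expressed with Response Assertions attached to each HTTP sampler, so a run reports pass/fail per invocation in one step.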


  1. Select the GEs to test in the current phase. This selection is aligned with the functional testing, since only GEs that are ready to be tested from both aspects are considered; otherwise it is not possible to produce a quality label. All criteria of the labelling process must be evaluated in order to set a global label. The candidate list is shared with and approved by the FIWARE Technical Steering Committee.
  2. The following steps are the same for each tested GE. Analyse the GE and define the metrics to test. Each FIWARE GE is a different type of application and behaves completely differently, so different metrics are needed to measure its behaviour in terms of performance, stability and scalability. In addition, reference values must be established for each identified metric; these values are used to define different degrees of quality depending on the results obtained during test execution. This step is done the first time a GE is tested, but is reviewed and updated if needed every time the GE is re-tested.
  3. Definition of test cases and testing scripts. With the support of the corresponding GE owner, the potential overload scenarios are identified. Test cases associated with these scenarios are defined, and the scripts needed to launch the required tests are developed.
  4. Testing environment setup. Each GE is installed and deployed in a different way, so the testing team needs to set up a specific environment with the required infrastructure to install, configure and run the GE. The GE must be deployed in a clean and isolated environment to ensure that nothing external to the GE affects the performance, stability or scalability of the component. Even network latency is minimized by installing the GE and the testing script on different machines that are directly connected.
  5. Test execution and collection of results. Once everything is set up, the testing script is launched to exercise the GE with the defined test cases, collecting its behaviour in three different aspects: stressing the component with extreme load (performance); running the component for a long time at a constant load level (stability); and executing the GE on a larger infrastructure with more nodes (scalability). All obtained results are stored raw for further analysis.
  6. Analysis of testing results. The results collected from the testing tools are raw numbers, meaningful only to experts in testing techniques. This task therefore analyses the results, extracts conclusions and consolidates them into a human-readable report. Professional tools are used to show graphically the behaviour of the GE with respect to the mentioned aspects and identified metrics. Every GE test run is documented in a dedicated report available in the FIWARE repository.
  7. Assignment of quality labels. Based on the obtained results, the non-functional team assigns the corresponding labels for the different criteria. These values are then merged with the labels resulting from functional testing to obtain the global label that is published in the FIWARE Catalogue.
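As an illustration of the final step, merging per-criterion labels into a global label might look like the following sketch. The A/B/C label scale and the "worst label wins" policy are assumptions made for the example, not the published FIWARE labelling criteria:

```python
# Hypothetical merge of per-criterion labels into a global quality label.
# The A/B/C ordering and the "worst label wins" policy are assumed here.
LABEL_ORDER = {"A": 0, "B": 1, "C": 2}  # A is best, C is worst

def global_label(functional, non_functional):
    """Combine all criterion labels; the global label is the worst one."""
    labels = list(functional.values()) + list(non_functional.values())
    return max(labels, key=LABEL_ORDER.__getitem__)

functional = {"documentation": "A", "api_compliance": "B"}
non_functional = {"performance": "A", "stability": "C", "scalability": "B"}
print(global_label(functional, non_functional))  # -> C
```

A "worst label wins" rule is a conservative choice: a GE cannot hide a weak criterion (e.g. poor stability) behind strong results elsewhere. Other aggregation policies (averaging, weighted criteria) would be equally easy to plug in.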