
Embedded Software Testing with Mbed OS

Introduction

Mbed OS Testing

In this codelab, we will first cover the basic principles of software testing, and of embedded software testing in particular. We will then experiment with testing by running existing Mbed OS tests with the provided Greentea testing framework. Finally, we will write a few simple test applications and integrate testing into our “BikeComputer” program.

What you’ll build

  • You will run existing tests and collect test results in various formats.
  • You will understand how to build your own test programs with the Greentea test framework and you will write a few tests, including tests with host integration.

What you’ll learn

  • The basic principles of embedded software testing.
  • How to run tests and check test results with Mbed OS.
  • How to write your own test programs with Mbed OS.

What you’ll need

  • Mbed Studio for developing and debugging your program in C++.
  • Mbed CLI 1 for compiling and running the tests.
  • A good understanding of the Mbed OS library architecture and principles.

Embedded Software Testing

Software has become an essential part of many systems, including embedded systems. Each of us interacts with an ever-increasing amount of software every day and solutions are evolving faster and faster. This means including new features, as well as improving product stability and security.

For embedded systems, solutions for deploying updates are often difficult to implement and the pace of software updates is much slower than on other systems like computers or smartphones. However, given that embedded systems often operate over long periods of time, software quality is crucial and ways to deliver software without sacrificing its quality must be developed. Automated testing is an important part of solutions targeting higher quality software. Manually building, testing and deploying software made of many components quickly becomes impossible. In addition, manual testing involves repetitive work and costs time that is not invested in software development. Automating everything — from build to tests, deployment and infrastructure — is the only way forward. This is also true for embedded software.

In this context, CI/CD, standing for Continuous Integration/Continuous Delivery, is a method that helps developers deliver applications frequently by introducing automation into the development process. When working in teams with many developers, continuous integration helps developers merge their code changes back to a shared branch on a regular basis (often daily). Once a developer’s changes are merged, those changes need to be validated. This is done by automatically building the application and running different levels of automated tests, typically unit and integration tests. If automated testing discovers failures, reports are sent to the developers so that they can fix those bugs before a new version of the application is delivered.

When automated builds and tests succeed in CI, continuous delivery allows the developers to automate the release of that validated code to a repository. From this perspective, CD can only happen in a CI/CD environment, where CI is already built into the development pipeline. At the end of the CI/CD process, the application is ready for deployment to production.

In this codelab, we will demonstrate how to run and develop automated tests for your Mbed OS components and application. We will demonstrate how single components of a library or of the application can be tested. Tests integrating several components will also be developed.

Motivations for Building an Automated Test Environment

Like any other product used by customers, software needs to be tested before it is delivered to users. Of course, the simplest and most obvious way to test a product is to use it for a while and to make sure it behaves as expected. Since the developer knows the application and thus knows how to quickly test the changes made to it, one may think that this is a reliable way of testing software. Of course, this assumption is mostly wrong:

  • Developers are biased towards the parts of the software that they know best and towards the changes that they made. One might easily forget to test some parts or might not realize the ramifications of a change.
  • The environments in which the software runs may differ from the machine on which it was developed and tested. Very often, the environment influences the way a specific piece of software runs.
  • Last but not least, testing is a boring, time-consuming task. Very often, developers will minimize the time that they spend on testing. In some situations, people are hired to run test activities, but the problem remains.

Even in situations where developers and testers are two different groups of people, the drawback is that developers are not involved anymore in testing and may lose the overall picture. On the other hand, testers have little knowledge of changes made and have to bother developers whenever they find something they don’t understand.

Also, in modern software development, it is necessary to build rapid and safe ways to modify code. From this perspective, automated testing plays a key role, together with a clearly defined test strategy including different types of tests. Generally, tests range from unit tests, which are focused on the technical details, to acceptance tests, which show that the application objectives are being met. More details on the different types of tests are given in the next section. Tests can thus be different, but good tests mostly share the same characteristics:

  • A good test is deterministic: it doesn’t matter where and when it runs, it should always produce the same outputs given the same inputs. It must also be verifiable. This of course sometimes makes the task of writing tests difficult.
  • A good test is fully automated: since it must be endlessly repeated, and since machines are good at repetitive tasks, tests have to be run as automated tasks by a machine.
  • A good test is responsive: it must provide quick and direct feedback. Integrating testing feedback into the development process is essential and it must be done quickly and efficiently.

Also be aware that:

  • Testing does not slow down development: in the long term, the time spent writing tests is an investment that allows changes in software in an efficient and robust way.
  • Testing is not only for finding bugs: finding bugs is an important purpose, but making bugs easily detectable and fixable after every single change is even more important. This gives developers a safety net that makes changes in software easier. Without a safety net, developers would only make very conservative changes, while some less conservative changes may be required.

The Test Pyramid

The concept of the Test Pyramid was introduced by Mike Cohn in his book “Succeeding with Agile”. The concept applies in particular to applications with a UI, but it can easily be adapted to embedded software.

In Mike Cohn’s original concept, the test pyramid consists of three layers that any test suite should comprise (picture taken from Test Pyramid in Spring Boot Microservice):

  1. User Interface or System Tests
  2. Service or Integration Tests
  3. Unit Tests

Test Pyramid

When applied to embedded systems, it is not possible to strictly adopt this concept, neither in the number of layers nor in the naming of the layers. Still, due to its simplicity, the essence of the test pyramid serves as a good basis for establishing an embedded software test suite. In particular, one should try to:

  1. Write tests with different granularities
  2. Adapt the number of tests to their level: write many low-level tests and fewer high-level ones.

In other words, one should develop a test suite with a lot of small and fast unit tests, a reasonable number of integration tests, and very few system tests that exercise the application from end to end.

Start Testing

The first thing that you need to do for establishing a testing environment and strategy for your application is “Start Testing”. What does this mean on the Mbed OS platform?

The Mbed OS platform offers several tools that help in the development of an appropriate test suite:

  • Unit tests can be developed and built using a separate build system. Unit tests are built and run on the host/development machine and do not require any hardware or software dependencies.
  • Greentea, htrun and mbed-ls are testing tools written in Python provided as part of the Mbed OS platform. With the Greentea framework, developers can program functional unit tests in C++, but also integration tests that can implement complex use cases that are executed on microcontrollers.

Mbed CLI 1 is required for running Greentea tests and building a test suite. With Mbed CLI and Greentea, developers get access to an automated testing framework for Mbed OS development. This testing framework automates the process of flashing Mbed OS boards, driving the tests and accumulating test results into test reports. The system is used for developing the Mbed OS library itself and it may also be used for automation in a Continuous Integration environment.

Using Greentea to Build and Run Tests

This section explains how to build and run existing tests in the Mbed OS library using the Greentea framework. Greentea tests run on the embedded devices themselves, but Greentea also supports host tests. Host tests are Python scripts that run on a host computer and can communicate with the embedded device. This is particularly useful, for example, to verify that data written by the embedded system to the cloud was actually stored correctly.

The first steps for building and running Greentea tests are the following:

  • Install Mbed CLI 1 if not already done. Tests are not well supported with Mbed CLI 2, so you need to install Mbed CLI 1 for the time being.
  • Install the GNU Arm Embedded Toolchain. The version used while developing this codelab is “9 2019-q4-major”. Installing the GNU toolchain for Arm is required unless you own a license for the Arm C compiler that you can use outside of Mbed Studio. If you run all commands from the Mbed Studio terminal, you may also use the ARMC6 toolchain; in this case, you must replace -t GCC_ARM with -t ARMC6 in all commands.

Discovering Existing Tests

Once this software is installed, you may start a new Mbed project and import the blinky example. You may then run the following command in a command prompt or shell of your choice, at the root directory of your project: “mbed test -t GCC_ARM -m DISCO_H747I --compile-list --greentea”. This command lists all Greentea tests available for the DISCO_H747I device. You should get a long list of available tests, as shown below:

> mbed test -t GCC_ARM -m DISCO_H747I --compile-list --greentea
[mbed] Working path "D:\Mbed Programs\bike-computer" (library)
[mbed] Program path "D:\Mbed Programs\bike-computer"
Test Case:
        Name: mbed-os-connectivity-mbedtls-tests-tests-mbedtls-multi
        Path: .\mbed-os\connectivity\mbedtls\tests\TESTS\mbedtls\multi
Test Case:
        Name: mbed-os-connectivity-mbedtls-tests-tests-mbedtls-sanity
        Path: .\mbed-os\connectivity\mbedtls\tests\TESTS\mbedtls\sanity
Test Case:
        Name: mbed-os-connectivity-mbedtls-tests-tests-mbedtls-selftest
        Path: .\mbed-os\connectivity\mbedtls\tests\TESTS\mbedtls\selftest
Test Case:
        Name: mbed-os-drivers-tests-tests-mbed_drivers-buffered_serial
        Path: .\mbed-os\drivers\tests\TESTS\mbed_drivers\buffered_serial
Test Case:
        Name: mbed-os-drivers-tests-tests-mbed_drivers-c_strings
        Path: .\mbed-os\drivers\tests\TESTS\mbed_drivers\c_strings
...

For each test, the test name is displayed together with its location in the Mbed OS library tree structure. If you filter the output for all tests containing “timer” in their name, you should get the following output:

> mbed test -t GCC_ARM -m DISCO_H747I --compile-list --greentea | grep timer
        Name: mbed-os-drivers-tests-tests-mbed_drivers-lp_timer
        Path: .\mbed-os\drivers\tests\TESTS\mbed_drivers\lp_timer
        Name: mbed-os-drivers-tests-tests-mbed_drivers-timer
        Path: .\mbed-os\drivers\tests\TESTS\mbed_drivers\timer
        Name: mbed-os-drivers-tests-tests-mbed_drivers-timerevent
        Path: .\mbed-os\drivers\tests\TESTS\mbed_drivers\timerevent
        Name: mbed-os-rtos-tests-tests-mbed_rtos-systimer
        Path: .\mbed-os\rtos\tests\TESTS\mbed_rtos\systimer

Building and Running a Test

For building and running tests, you may run the same “mbed test” command with other parameters. Before doing this, it is important to understand what actions are executed when building and running tests:

  1. A new folder “.\BUILD\tests\<target_name>\<toolchain_name>” is created, if necessary. This directory is used for copying all required files and for the build process. Its name is specific to the target device and to the toolchain used for testing.
  2. The non-test source code (all code not under a TESTS folder) is copied to the BUILD directory and compiled. The resulting object files are placed in the BUILD directory.
  3. The test code is copied to the BUILD directory and compiled, including the main.cpp file containing the main function to be called for executing the test. Both the test and non-test object files are linked to create the executable file used for running the test. This step is performed for each test discovered (based on the command parameters).
  4. Based on the test specification file “test_spec.json” that is also copied to the BUILD directory, the tests are run one at a time.

It is important to note the following points regarding the build process:

  • The “mbed_app.json” file defined at the root of your application is also used for building the test program. This configuration file is used for building both the non-test code and each test case.
  • The “.mbedignore” file defined at the root of your application is also used. Make sure that all required Mbed OS files are not excluded from the build, in particular all features related to tests located in the “mbed-os/features/frameworks” folder (like “mbed-os/features/frameworks/greentea-client” or “mbed-os/features/frameworks/unity”).
  • Since your application probably also contains a “main.cpp” file with a main() function, it is required that you modify your “main.cpp” application file as shown below. When building a test program, the MBED_TEST_MODE compilation symbol will be defined and this will prevent the presence of two main() symbols at link time.
main.cpp
...
#if !MBED_TEST_MODE
int main() {
    // Application code
}
#endif
...

If you read the information above carefully and made the required changes in your application, you should be ready to build and run your first test. If you enter the following command: “mbed test -t GCC_ARM -m DISCO_H747I -n mbed-os-drivers-tests-tests-mbed_drivers-timer --compile --run”, the test named “mbed-os-drivers-tests-tests-mbed_drivers-timer” defined in the “.\mbed-os\drivers\tests\TESTS\mbed_drivers\timer” folder will be built and run. The test should succeed for all cases defined in the test program and you should get test results similar to the following:

> mbed test -t GCC_ARM -m DISCO_H747I -n mbed-os-drivers-tests-tests-mbed_drivers-timer --compile --run
...

mbedgt: test suite report:
| target              | platform_name | test suite                                     | result | elapsed_time (sec) | copy_method |
|---------------------|---------------|------------------------------------------------|--------|--------------------|-------------|
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-drivers-tests-tests-mbed_drivers-timer | OK     | 18.62              | default     |
mbedgt: test suite results: 1 OK
mbedgt: test case report:
| target              | platform_name | test suite                                     | test case                                                      | passed | failed | result | elapsed_time (sec) |
|---------------------|---------------|------------------------------------------------|----------------------------------------------------------------|--------|--------|--------|--------------------|
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-drivers-tests-tests-mbed_drivers-timer | Test: Timer (based on os ticker) - measured time accumulation. | 1      | 0      | OK     | 1.14               |
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-drivers-tests-tests-mbed_drivers-timer | Test: Timer (based on os ticker) - reset.                      | 1      | 0      | OK     | 0.01               |
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-drivers-tests-tests-mbed_drivers-timer | Test: Timer (based on os ticker) - start started timer.        | 1      | 0      | OK     | 0.02               |
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-drivers-tests-tests-mbed_drivers-timer | Test: Timer (based on os ticker) is stopped after creation.    | 1      | 0      | OK     | 0.02               |
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-drivers-tests-tests-mbed_drivers-timer | Test: Timer (based on user ticker) - reset.                    | 1      | 0      | OK     | 0.02               |
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-drivers-tests-tests-mbed_drivers-timer | Test: Timer (based on user ticker) - start started timer.      | 1      | 0      | OK     | 0.02               |
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-drivers-tests-tests-mbed_drivers-timer | Test: Timer (based on user ticker) is stopped after creation.  | 1      | 0      | OK     | 0.01               |
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-drivers-tests-tests-mbed_drivers-timer | Test: Timer (based on user ticker) measured time accumulation. | 1      | 0      | OK     | 0.0                |
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-drivers-tests-tests-mbed_drivers-timer | Test: Timer - copying 5 ms.                                    | 1      | 0      | OK     | 0.0                |
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-drivers-tests-tests-mbed_drivers-timer | Test: Timer - moving 5 ms.                                     | 1      | 0      | OK     | 0.0                |
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-drivers-tests-tests-mbed_drivers-timer | Test: Timer - time measurement 1 ms.                           | 1      | 0      | OK     | 0.0                |
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-drivers-tests-tests-mbed_drivers-timer | Test: Timer - time measurement 1 s.                            | 1      | 0      | OK     | 1.01               |
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-drivers-tests-tests-mbed_drivers-timer | Test: Timer - time measurement 10 ms.                          | 1      | 0      | OK     | 0.02               |
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-drivers-tests-tests-mbed_drivers-timer | Test: Timer - time measurement 100 ms.                         | 1      | 0      | OK     | 0.08               |
mbedgt: test case results: 14 OK
mbedgt: completed in 19.25 sec

As you can observe, all test cases have succeeded for this particular test. You may also generate the test results in a more readable format by using the command: “mbed test -t GCC_ARM -m DISCO_H747I -n mbed-os-drivers-tests-tests-mbed_drivers-timer --run --report-html test_result.html”. If you then open the “test_result.html” file, your browser should display a window like the one below, in which you can open a detailed view of the test results.

Note that for exporting the test results in a specific format, you first need to compile the tests with the “--compile” option and then to run them with the “--run” option, in two separate steps. The command combining compilation and result exports does not work properly, for an undocumented reason.

HTML test report

Demonstrating a Test Failure Case

To illustrate a scenario where not all test cases succeed, we may choose the systimer test by running the “mbed test -t GCC_ARM -m DISCO_H747I -n mbed-os-rtos-tests-tests-mbed_rtos-systimer --compile --run” command. If you run this test on your DISCO_H747I target device, you should observe that one of the test cases fails:

> mbed test -t GCC_ARM -m DISCO_H747I -n mbed-os-rtos-tests-tests-mbed_rtos-systimer --compile --run
...
mbedgt: :340::FAIL: Deep sleep should be allowed
mbedgt: retry mbedhtrun 1/1
mbedgt: ['mbedhtrun', '-m', 'DISCO_H747I', '-p', 'COM17:115200', '-f', '"BUILD/tests/DISCO_H747I/GCC_ARM/mbed-os/rtos/tests/TESTS/mbed_rtos/systimer/systimer.bin"', '-e', '"mbed-os\\rtos\\tests\\TESTS\\host_tests"', '-d', 'G:', '-c', 'default', '-t', '08140221013F69703E7BF059', '-r', 'default', '-C', '4', '--sync', '5', '-P', '60'] failed after 1 count
mbedgt: checking for GCOV data...
mbedgt: test on hardware with target id: 08140221013F69703E7BF059
mbedgt: test suite 'mbed-os-rtos-tests-tests-mbed_rtos-systimer' ..................................... FAIL in 13.28 sec
        test case: 'Handler called twice' ............................................................ OK in 0.08 sec
        test case: 'Tick can be cancelled' ........................................................... OK in 0.03 sec
        test case: 'Tick count is updated correctly' ................................................. OK in 0.00 sec
        test case: 'Tick count is zero upon creation' ................................................ OK in 0.01 sec
        test case: 'Time is updated correctly' ....................................................... OK in 0.00 sec
        test case: 'Wake up from deep sleep' ......................................................... FAIL in 0.02 sec
        test case: 'Wake up from sleep' .............................................................. OK in 0.03 sec
mbedgt: test case summary: 6 passes, 1 failure
mbedgt: all tests finished!
mbedgt: shuffle seed: 0.8403045412
mbedgt: test suite report:
| target              | platform_name | test suite                                  | result | elapsed_time (sec) | copy_method |
|---------------------|---------------|---------------------------------------------|--------|--------------------|-------------|
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-rtos-tests-tests-mbed_rtos-systimer | FAIL   | 13.28              | default     |
mbedgt: test suite results: 1 FAIL
mbedgt: test case report:
| target              | platform_name | test suite                                  | test case                        | passed | failed | result | elapsed_time (sec) |
|---------------------|---------------|---------------------------------------------|----------------------------------|--------|--------|--------|--------------------|
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-rtos-tests-tests-mbed_rtos-systimer | Handler called twice             | 1      | 0      | OK     | 0.08               |
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-rtos-tests-tests-mbed_rtos-systimer | Tick can be cancelled            | 1      | 0      | OK     | 0.03               |
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-rtos-tests-tests-mbed_rtos-systimer | Tick count is updated correctly  | 1      | 0      | OK     | 0.0                |
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-rtos-tests-tests-mbed_rtos-systimer | Tick count is zero upon creation | 1      | 0      | OK     | 0.01               |
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-rtos-tests-tests-mbed_rtos-systimer | Time is updated correctly        | 1      | 0      | OK     | 0.0                |
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-rtos-tests-tests-mbed_rtos-systimer | Wake up from deep sleep          | 0      | 1      | FAIL   | 0.02               |
| DISCO_H747I-GCC_ARM | DISCO_H747I   | mbed-os-rtos-tests-tests-mbed_rtos-systimer | Wake up from sleep               | 1      | 0      | OK     | 0.03               |
mbedgt: test case results: 6 OK / 1 FAIL
mbedgt: completed in 13.94 sec
mbedgt: exited with code 1

In this test, the “Wake up from deep sleep” case fails. As you can read from the test result log, the error :340::FAIL: Deep sleep should be allowed is reported. By analyzing the source code of the test program at line 340, one can find the exact failure cause: the sleep manager does not allow our target device to enter the deep sleep state. Understanding this error and possibly correcting the failure cause is beyond the scope of this codelab.

Writing Your Own Test Programs

The framework that we experimented with in the previous section allows us to write our own test programs. Developers can write test programs using the Greentea client and the unity and utest frameworks, which are all located in the “mbed-os/features/frameworks” folder of the Mbed OS library.

By convention, all tests are placed under a directory named “TESTS” and under two further directories: a test group directory and a test case directory, as illustrated below.

Test directory structure

This convention allows the Mbed OS test tools to discover all tests available from the root directory of your application. The “TESTS” folder can be located anywhere in the tree structure of your application. For simplicity, we will create a “TESTS” folder at the root directory of the application.

Write A Simple Test Program That Always Succeeds

Our first test program will be very simple and it will allow us to discover the most important features of the test frameworks. For developing this test program, you must first create the folder structure with a “simple-test” test group and an “always-succeed” test case, and add a “main.cpp” file. You may verify that your test is correctly discovered by the mbed tools:

> mbed test -t GCC_ARM -m DISCO_H747I --compile-list | grep always
        Name: tests-simple-test-always-succeed
        Path: .\TESTS\simple-test\always-succeed

The always-succeed test can be implemented as follows in the main.cpp file:

TESTS/simple-test/always-succeed/main.cpp
// Copyright 2022 Haute école d'ingénierie et d'architecture de Fribourg
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

/****************************************************************************
 * @file main.cpp
 * @author Serge Ayer <serge.ayer@hefr.ch>
 *
 * @brief Simple example of test program that always succeeds
 *
 * @date 2022-09-01
 * @version 0.1.0
 ***************************************************************************/

#include "greentea-client/test_env.h"
#include "mbed.h"
#include "unity/unity.h"
#include "utest/utest.h"

using namespace utest::v1;

// test handler function
static control_t always_succeed(const size_t call_count) {
    // this is the always succeed test
    TEST_ASSERT_EQUAL(4, 2 * 2);

    // execute the test only once and move to the next one, without waiting
    return CaseNext;
}

static utest::v1::status_t greentea_setup(const size_t number_of_cases) {
    // Here, we specify the timeout (60s) and the host test (a built-in host test or the
    // name of our Python file)
    GREENTEA_SETUP(60, "default_auto");

    return greentea_test_setup_handler(number_of_cases);
}

// List of test cases in this file
static Case cases[] = {Case("always succeed test", always_succeed)};

static Specification specification(greentea_setup, cases);

int main() { return !Harness::run(specification); }

This test program makes use of the following libraries:

  • utest: the Harness::run() call runs a series of C++ test cases given a specification object. The specification object can be defined in many different ways. In our simple example, it is made of a pointer to a function that is called for setting up the Greentea test framework and of an array of Case objects. Note that the types of the template arguments are automatically deduced by the Specification constructor.
  • The array of Case objects defines all test cases that must be run, a single simple case in the example above. There are again many different ways of constructing Case objects. In this example, the case is constructed with a description and with a pointer to a function that must be called at test execution time. Note that this function returns a control_t value, which defines the behavior of the test case execution. In this example, the test handler function returns CaseNext, which means that this test case must be executed only once and that the next test case must be executed without delay.
  • Note that all mechanisms for building tests exist in different forms and offer a high level of flexibility. For example, all handler functions are available with different possible signatures for allowing different test scenarios.

If you run this test, you should observe a successful result:

> mbed test -t GCC_ARM -m DISCO_H747I -n tests-simple-test-always-succeed --compile --run
...
mbedgt: test suite report:
| target              | platform_name | test suite                       | result | elapsed_time (sec) | copy_method |
|---------------------|---------------|----------------------------------|--------|--------------------|-------------|
| DISCO_H747I-GCC_ARM | DISCO_H747I   | tests-simple-test-always-succeed | OK     | 13.42              | default     |
mbedgt: test suite results: 1 OK
mbedgt: test case report:
| target              | platform_name | test suite                       | test case           | passed | failed | result | elapsed_time (sec) |
|---------------------|---------------|----------------------------------|---------------------|--------|--------|--------|--------------------|
| DISCO_H747I-GCC_ARM | DISCO_H747I   | tests-simple-test-always-succeed | always succeed test | 1      | 0      | OK     | 0.0                |
mbedgt: test case results: 1 OK
mbedgt: completed in 14.22 sec

From the simple “always-succeed” example, although many configuration steps are omitted, it is important to understand how a test is specified. In general, a test specification contains:

  • a setup handler (mandatory)
  • several test cases (mandatory) and
  • a teardown handler (omitted for always-succeed).

Each test case contains:

  • a textual description,
  • a setup handler (omitted for always-succeed),
  • a teardown handler (omitted for always-succeed),
  • a failure handler (omitted for always-succeed), as well as
  • the actual test handler (mandatory).

The order of handler execution is:

  1. Test setup handler,
  2. For each test case:

    1. Test case setup handler.
    2. Test case execution handler.
    3. Test case teardown handler.
  3. Test teardown handler.

In addition to the specification above, it is worth mentioning that test cases can be run asynchronously and can be repeated several times.

Test a basic C++ component

One important feature of the C++ language is the concept of smart pointers. Compared to raw pointers, smart pointers are intended to help ensure that programs free memory and resources whenever they are no longer referenced. This mechanism thus prevents memory and resource leaks. It is also exception safe, meaning that the expected behavior is also guaranteed in the case of an exception.

In this section, we demonstrate how the behavior of raw and smart pointers can be tested. Raw pointers simply behave as addresses of memory locations, and it is the responsibility of the programmer to deallocate any memory allocated with a new or a malloc. Smart pointers, on the other hand, encapsulate memory allocation within classes, so that the memory is released whenever the object encapsulating the memory buffer is destroyed. Smart pointers are now part of the C++ standard library as the std::unique_ptr and std::shared_ptr classes.

std::unique_ptr and std::shared_ptr both implement mechanisms for a proper deallocation of the encapsulated resource. However, they differ in the way they handle the ownership of the resource:

  • With std::unique_ptr, only one variable can refer to the object, and the allocated resource is reclaimed when that variable is destroyed. Transferring the ownership of the resource cannot be done with the assignment operator (operator=()); instead, one must use std::move() semantics.
  • With std::shared_ptr, the ownership of the resource can be shared among several variables. A reference counting mechanism is implemented and, when the last reference to the resource is destroyed, the resource is reclaimed. Be aware that circular references are possible with std::shared_ptr.

For testing the behavior of both raw and smart pointers, we use a simple Test structure, with the following definition:

struct Test {
    Test() {
        _instanceCount++;
        _value = 33;
    }

    ~Test() {
        _instanceCount--;
        _value = 0;
    }

    int _value;
    static uint32_t _instanceCount;
};

This structure contains a static attribute that counts the number of instances that are still alive, by incrementing/decrementing upon construction/destruction. With this mechanism, we may test the following behaviors:

  • Test that when creating a shared pointer in a given scope, it is destroyed when leaving the scope.
  • Test that multiple instances of shared pointers correctly manage the reference count and that the object is released correctly.
  • Test that when creating a raw pointer and deallocating it correctly, the destructor is called.

The tests described above can be integrated into a test program under “TESTS/simple-test/ptr-test”, as shown below. If you run the test, you should observe that all test cases run successfully.

TESTS/simple-test/ptr-test/main.cpp
// Copyright 2022 Haute école d'ingénierie et d'architecture de Fribourg
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

/****************************************************************************
 * @file main.cpp
 * @author Serge Ayer <serge.ayer@hefr.ch>
 *
 * @brief Simple example of test program for raw and shared pointers
 *
 * @date 2022-09-01
 * @version 0.1.0
 ***************************************************************************/

#include "greentea-client/test_env.h"
#include "mbed.h"
#include "unity/unity.h"
#include "utest/utest.h"

using namespace utest::v1;
struct Test {
    Test() {
        _instanceCount++;
        _value = kMagicNumber;
    }

    ~Test() {
        _instanceCount--;
        _value = 0;
    }

    int _value;
    static constexpr uint32_t kMagicNumber = 33;
    static uint32_t _instanceCount;
};
uint32_t Test::_instanceCount = 0;

/**
 * Test that a shared pointer correctly manages the lifetime of the underlying raw pointer
 */
void test_single_sharedptr_lifetime() {
    // Sanity-check value of counter
    TEST_ASSERT_EQUAL(0, Test::_instanceCount);

    // Create and destroy shared pointer in given scope
    {
        std::shared_ptr<Test> shared_ptr(new Test);
        TEST_ASSERT_EQUAL(1, Test::_instanceCount);
        TEST_ASSERT_EQUAL(Test::kMagicNumber, shared_ptr->_value);
    }

    // Destroy shared pointer
    TEST_ASSERT_EQUAL(0, Test::_instanceCount);
}

/**
 * Test that multiple instances of shared pointers correctly manage the reference count
 * to release the object at the correct point
 */
void test_instance_sharing() {
    std::shared_ptr<Test> shared_ptr1(nullptr);

    // Sanity-check value of counter
    TEST_ASSERT_EQUAL(0, Test::_instanceCount);

    // Create and destroy shared pointer in given scope
    {
        std::shared_ptr<Test> shared_ptr2(new Test);
        TEST_ASSERT_EQUAL(1, Test::_instanceCount);
        // share share_ptr2 with shared_ptr1
        shared_ptr1 = shared_ptr2;
        // still one instance only
        TEST_ASSERT_EQUAL(1, Test::_instanceCount);
        TEST_ASSERT_EQUAL(Test::kMagicNumber, shared_ptr1->_value);
        TEST_ASSERT(shared_ptr1.get() == shared_ptr2.get());
    }

    // shared_ptr1 still owns a raw pointer
    TEST_ASSERT_EQUAL(1, Test::_instanceCount);

    shared_ptr1 = nullptr;

    // Shared pointer has been destroyed
    TEST_ASSERT_EQUAL(0, Test::_instanceCount);
}

static utest::v1::status_t greentea_setup(const size_t number_of_cases) {
    // Here, we specify the timeout (60s) and the host test (a built-in host test or the
    // name of our Python file)
    GREENTEA_SETUP(60, "default_auto");
    return greentea_test_setup_handler(number_of_cases);
}

// List of test cases in this file
static Case cases[] = {
    Case("Test single shared pointer instance", test_single_sharedptr_lifetime),
    Case("Test instance sharing across multiple shared pointers", test_instance_sharing)};

static Specification specification(greentea_setup, cases);

int main() { return !Harness::run(specification); }

Exercise: Write a test program for unique_ptr

Exercise: Write a test program for raw pointers

Continuous Integration

Now that we have developed some automated test programs for Mbed OS, we can benefit from them by automating both the build process and the test process. To do so, we will use the facilities offered by GitHub, in particular GitHub Actions. Since you will be using GitHub for delivering your project, this makes the integration of actions even easier. We will also use a Docker image provided by Mbed OS for running builds within GitHub.

Automating the test process would require running the Greentea tests against USB-attached devices from within the Docker container. This is beyond the scope of this codelab, since it would require a custom setup for running automated builds and tests. The goal here is only to demonstrate how you can automatically build your applications and how this could be extended to automate tests.

CI/CD Workflow

In the picture below (taken from GitLab CI/CD | GitLab), the typical development workflow is depicted. Our workflow will be simplified, but it is useful to have an overview of the global picture:

  • Once changes have been made to software under development, these changes can be pushed to a specific branch in a remote GitLab repository. As we will experience later, this push triggers the CI/CD pipeline for your project.
  • The GitLab CI/CD pipeline usually runs automated scripts to build and test your application, and then deploys the changes to a review application (different from the production application).
  • If all tests and the deployment are successful, the code changes are reviewed and approved, the specific branch is merged into the production branch, and all changes are deployed to the production environment.
  • If something goes wrong, changes are rolled back or further changes are made to correct the detected problems.

In our case, we will simplify the process and skip the branch/merge steps.

CI/CD process

CI/CD process

Integrate the Test Stage

In the previous step, we have developed a number of test programs. For automating test builds, we will now integrate those builds into the GitHub Actions. With GitHub Actions, one can automatically build, test, and deploy any application.

Using GitHub Actions is straightforward, and this capability is already integrated into your GitHub account. To integrate your build into your repository, it is enough to create a new workflow in the Actions tab of your repository.

Github actions

Github actions

Once you create the workflow, you may edit it to contain the actions described below. Note that GitHub Actions expressions are written between double curly braces, e.g. ${{ matrix.target }}.

.github/workflows/build-test.yml
name: Build test application
on:
  pull_request:
  push:

jobs:
  build-cli-v1:
    container:
      image: ghcr.io/armmbed/mbed-os-env:master-2022.05.21t04.23.55

    runs-on: ubuntu-20.04

    strategy:
      matrix:
        target: [K64F, DISCO_H747I]
        profile: [release, debug, develop]


    steps:
      -
        name: checkout
        uses: actions/checkout@v2

      -
        name: build-test
        run: |
          set -e
          mbed deploy
          mbed test -t GCC_ARM -m ${{ matrix.target }} --profile ${{ matrix.profile }} --compile -n tests-simple-test-always-succeed,tests-simple-test-ptr-test
          mbed compile -t GCC_ARM -m ${{ matrix.target }} --profile ${{ matrix.profile }}

A quick start to understand the basics of actions is given under Actions quickstart. For an understanding of this actions description, you need to understand the following:

  • The description follows the YAML syntax. YAML is essentially a superset of JSON designed for better human readability.
  • A GitHub Actions workflow is triggered when an event occurs in the repository (specified with on). In the example above, the workflow is triggered when a push is made or when a pull request is opened.
  • Each workflow contains one or more jobs, which can run in sequential order or in parallel. Each job will run inside its own virtual machine runner or inside a container, and has one or more steps that either run a script that you define or run an action.

In the example above, we do the following:

  • One job, named build-cli-v1, is run upon push or pull request.
  • The job is run inside a Docker container based on the Mbed OS Docker image (pinned to a specific version).
  • The strategy section creates a build matrix for the job, which lets you define different variations of the job to run. In our case, the variations target different devices and different build profiles; they are defined in the strategy matrix.
  • The job is made of two steps. The first step checks out your repository, so your workflow can access it. The second step builds the application.
  • The build step does the following:

    • run: | specifies a multi-line command.
    • set -e instructs the shell to stop immediately if any command fails.
    • The remaining commands are standard mbed CLI commands for installing library dependencies and compiling the test programs.
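The effect of set -e can be illustrated with a small shell sketch (the failing commands are run in a subshell so the failure does not abort the enclosing shell):

```shell
# Sketch: 'set -e' aborts a script at the first failing command.
output=$( (set -e; echo "before"; false; echo "after") )
status=$?
# Only "before" was printed: 'false' failed, so 'echo "after"' never ran,
# and the subshell exited with a nonzero status.
echo "output=$output"
echo "status=$status"
```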

As you can read from the file above, you must push the test programs developed in the previous sections to your GitHub repository for a successful execution of the build-test script.

The first time the “build-test.yml” file is committed to your repository, the runner runs your jobs. The same applies to any further push to the repository. Whenever the runner runs a job, the job results are displayed in the Actions tab.

Job progress

Job progress

The picture above shows the details of the Build test workflow. You can follow the workflow progress by choosing a specific run:

Job progress details

Job progress details

If you select a particular job, you can see all the details. If a job fails, you can also read the log for understanding the problem and fixing it.

Job details

Job details