OpenBeken Automatic Testing guide: Windows Simulator and Per Platform tests tutorial

p.kaczmarek2

TL;DR

  • OpenBeken uses automatic self-tests for firmware commits, covering simulator-only Windows tests and per-platform tests on physical devices.
  • Simulator tests run the SDL-based Windows OBK Simulator and start with `openBeken_win32.exe -runUnitTests 2`, checking use-case flows like CW light and `$CH1` expansion.
  • Per-platform tests require `ENABLE_TEST_COMMANDS` and the `Test` driver, letting `backlog startDriver Test; StartTest 100;` verify platform-specific code such as `str_to_ip`.
  • The system has caught real regressions at build time, including broken channel handling and a real `sscanf`/IP-parsing issue on W600/LN882H/Realtek platforms.
    OpenBeken features an automatic self-testing system that checks the firmware for potential bugs and errors with each new commit. Each test simulates a practical use-case scenario, simulates certain inputs and verifies if the outputs are within the expected range. Thanks to this, we are able to quickly identify and fix issues before releasing new firmware versions. This system enhances stability and reliability by catching regressions early, ensuring that new features do not introduce unintended side effects, for example, don't break existing integrations and configs.
    Here I will present the self-testing implementation details and explain how you can use them while contributing to our firmware.

    Two types of automatic testing
    Currently there are two types of automatic tests available in OBK.
    - simulator-only self tests - they are run in the OBK Simulator, which is currently compiled on Windows. They verify the main logic flow of OBK itself (the app code). You don't need any WiFi module hardware to run them, just a Windows machine
    - per-platform tests - they are run on a physical OBK device and should be run that way on every supported platform. They check platform-specific things, like memory allocation or basic string processing. This can't be done in the OBK Simulator, because we currently can't compile some of the per-platform, SDK-specific code on Windows/Linux; it has to run on the target device.

    Simulator tests
    Let's start with Simulator tests. Simulator tests are run in the OBK Simulator, which, more precisely, is a generic SDL platform port of OpenBeken with a simulated HAL and MQTT support. The tests are currently run on the Windows platform, although compiling for Linux should also be easily possible.
    The Simulator tests are available in the selftest directory:
    https://github.com/openshwprojects/OpenBK7231T_App/tree/main/src/selftest
    Currently they are run on GitHub on each build:
    Screenshot showing results of a software build simulation.
    More precisely, the OBK Simulator is built for the Windows platform on each commit and is then used to run the tests:
    
      build2:
        name: Build Simulator
        needs: refs
        runs-on: windows-latest
    
        steps:
        - name: Checkout repository
          uses: actions/checkout@v4
    
        - name: Setup MSBuild
          uses: microsoft/setup-msbuild@v2
    
        - name: Checkout simulator repository
          run: |
            git clone https://github.com/openshwprojects/obkSimulator
            mkdir -p ./libs_for_simulator
            cp -r ./obkSimulator/simulator/libs_for_simulator/* ./libs_for_simulator
    
        - name: Build project
          run: msbuild openBeken_win32_mvsc2017.vcxproj /p:Configuration=Release /p:PlatformToolset=v143
        - name: Flatten build assets
          run: |
            mkdir -p flat
            cp ./Release/openBeken_win32.exe flat/
            cp ./obkSimulator/simulator/*.dll flat/
            cp ./run_*.bat flat/
            mkdir -p flat/examples
            cp -r ./obkSimulator/examples/* flat/examples/
        - name: Run unit tests
          run: |
            ./flat/openBeken_win32.exe -runUnitTests 2
        - name: Compress build assets
          run: |
            Compress-Archive -Path flat/* -DestinationPath obkSimulator_win32_${{ needs.refs.outputs.version }}.zip
        - name: Copy build assets
          run: |
            mkdir -Force output/${{ needs.refs.outputs.version }}
            cp obkSimulator_win32_${{ needs.refs.outputs.version }}.zip output/${{ needs.refs.outputs.version }}/obkSimulator_${{ needs.refs.outputs.version }}.zip
        - name: Upload build assets
          uses: actions/upload-artifact@v4
          with:
            name: ${{ env.APP_NAME }}_${{ needs.refs.outputs.version }}_sim
            path: output/${{ needs.refs.outputs.version }}/obkSimulator_${{ needs.refs.outputs.version }}.zip
    

    To be more specific, the ./flat/openBeken_win32.exe -runUnitTests 2 line is responsible for starting them on the GitHub machine. The return value is then checked to see whether all tests have succeeded.
    Those self tests basically simulate a use-case scenario, for example setting up a LED, and then check externally whether the use-case result is as expected.
    For instance, if we set two PWM pins, we expect CW control to show up and we expect the Dimmer command to work. We also expect certain things to be published when the light state changes, and we can test that as well. So, we simulate setting two PWM pins, then run some commands, and check whether the result is as expected (for example, whether the output PWM values are correct for CW control).
    Let's see two practical examples of such a mechanism.
    Example 1 - if we set channel 1 to 123, does the $CH1 constant work in a script and expand to 123?
    Code: C / C++ (snippet omitted from this excerpt; viewing it requires login on the original page)

    If "buffer" is not "123", then SELFTEST_ASSERT_STRING will show an error on the GitHub build (a red cross instead of a green checkmark).
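    Since the snippet itself is login-gated on the original page, here is a rough, self-contained sketch of what such a check does. Channel_Set and Expand_ChannelConstant are illustrative names, not the actual OBK API; only the idea (set a channel, expand $CH1, assert on the string) comes from the post.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative stand-ins for the OBK channel API; the real names and
   expansion logic in OpenBeken differ, this only mirrors the test idea. */
static int g_channels[64];

static void Channel_Set(int idx, int value) {
    g_channels[idx] = value;
}

/* Expand a "$CHn" constant to its channel value, the way the script
   engine substitutes constants before a command runs. */
static void Expand_ChannelConstant(const char *in, char *out, size_t outLen) {
    int idx;
    if (sscanf(in, "$CH%d", &idx) == 1 && idx >= 0 && idx < 64) {
        snprintf(out, outLen, "%d", g_channels[idx]);
    } else {
        /* not a channel constant - pass the text through unchanged */
        snprintf(out, outLen, "%s", in);
    }
}
```

    In the real self test, the assertion on the expanded buffer is what turns a broken expansion into a red cross on the build page.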

    Example 2 - if we set two PWM pins, are they correctly detected as a CW light? Does the light respond to commands and set the PWMs correctly?
    Code: C / C++ (snippet omitted from this excerpt; viewing it requires login on the original page)

    This way the full behaviour of a CW light is checked. First a virtual device is set up, along with pins and channels:
    Code: C / C++ (snippet omitted from this excerpt; viewing it requires login on the original page)

    And then many behaviours are simulated and checked.
    Let's look deeper at one fragment:
    Code: C / C++ (snippet omitted from this excerpt; viewing it requires login on the original page)

    This basically says:
    - if you have a CW light set up and OBK receives a POWER OFF Tasmota command, the led_enableAll variable should be 0 (false) and both PWMs should be 0
    - if you later receive a POWER ON Tasmota command, led_enableAll is expected to be 1 (true) and, for the current configuration (set earlier in the code), the first PWM should be 100% and the second 0% (because we earlier set the temperature to 100% cold)
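    The two bullets above can be sketched as a tiny state model with assertions. This is a hypothetical miniature, not the actual OBK LED driver; only the variable names (led_enableAll, the two PWM duties) and the expected values come from the post.

```c
#include <assert.h>

/* Hypothetical miniature of the CW-light state the self test asserts on.
   led_enableAll and the two PWM duties mirror the variables named in the
   post, but this model is illustrative, not the actual OBK LED driver. */
typedef struct {
    int led_enableAll;   /* 0 = off, 1 = on */
    int pwm[2];          /* duty cycle in percent for cold/warm channels */
    int temperatureCold; /* percent of "cold" mix configured earlier */
} cw_light_t;

/* Apply a Tasmota-style POWER ON/OFF command to the light model. */
static void CW_ApplyPower(cw_light_t *l, int on) {
    l->led_enableAll = on;
    if (on) {
        /* 100% cold temperature -> first PWM full, second PWM zero */
        l->pwm[0] = l->temperatureCold;
        l->pwm[1] = 100 - l->temperatureCold;
    } else {
        l->pwm[0] = 0;
        l->pwm[1] = 0;
    }
}
```

    The real self test drives the actual LED driver through simulated HTTP/MQTT commands and asserts on exactly these outputs.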

    Thanks to this mechanism, as soon as somebody breaks the expected behaviour (for example, commits a change that breaks CW lights), we will know at build time, because the self test will catch it.

    How to add a new test?
    Just follow the samples from the selftest directory. Add your own file, call it, say, selftest_sample.c, copy the required headers from a file you want to base it on (say, selftest_cmd_generic.c), and create your function there. Don't forget to declare it in selftest_local.h and call it from Win_DoUnitTests (currently; this is subject to change).
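    A new test file can be as small as the following skeleton. This is a sketch: the macro body here is a simplified stand-in so the example compiles on its own, not the exact definition from selftest_local.h.

```c
#include <assert.h>
#include <string.h>

/* Minimal stand-in for the SELFTEST_ASSERT_STRING macro, so this skeleton
   compiles on its own; a real test instead includes the headers copied
   from an existing file such as selftest_cmd_generic.c. */
#define SELFTEST_ASSERT_STRING(got, expected) \
    assert(strcmp((got), (expected)) == 0)

/* Skeleton of a new test file, e.g. selftest_sample.c. Declare this
   function in selftest_local.h and call it from Win_DoUnitTests. */
void Test_Sample(void) {
    /* In a real test you would run commands here and then assert on
       the observable results, e.g. a channel value or an MQTT publish. */
    const char *buffer = "123";
    SELFTEST_ASSERT_STRING(buffer, "123");
}
```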

    Per-platform tests
    Per-platform tests are run, well, per platform. They can be compiled into the OBK binary just like any feature or driver, but they are disabled by default.
    To enable them, enable the required define in obk_config.h:
    
    #define ENABLE_TEST_COMMANDS    1
    

    For more information about obk_config.h and online builds, refer to our guides:
    - OpenBeken online building system - compiling firmware for all platforms (BK7231, BL602, W800, etc)
    - How to create a custom driver for OpenBeken with online builds (no toolchain required)
    The test runner is created as a separate driver, so it resides in the drv_main.c drivers array; its implementation is here:
    https://github.com/openshwprojects/OpenBK7231T_App/blob/main/src/driver/drv_test.c
    The test commands are available globally and can be invoked via the console. Thanks to this, you can either run them in batch via the test runner or run a single specific command manually:
    https://github.com/openshwprojects/OpenBK7231T_App/blob/main/src/cmnds/cmd_test.c
    Per-platform tests should be run on each platform, because the platforms' SDKs are separate, they have separate memory management code, and so on.
    Those tests were introduced because not everything can be tested in the Windows simulator.
    Some things are per-platform, for example the sscanf function, or realloc, etc.
    So we have a "different" sscanf or sprintf on each platform - different on BK7231, W800, W600...

    That's why I added test commands like this one:
    Code: C / C++ (snippet omitted from this excerpt; viewing it requires login on the original page)
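    As a rough illustration of the idea (the real command and its registration live in cmd_test.c; the function name, signature, and result-code enum here are simplified stand-ins):

```c
#include <assert.h>
#include <stdio.h>

/* Result codes mirroring the ones named in the post; the real enum and
   the real command registration live in cmd_test.c, so treat this as a
   sketch of the technique rather than the actual implementation. */
typedef enum { CMD_RES_OK = 0, CMD_RES_ERROR = 1 } commandResult_t;

/* Hypothetical shape of a TestParseIP-style command: parse a known
   address and report CMD_RES_ERROR if any byte comes out wrong. */
static commandResult_t CMD_TestParseIP(void) {
    unsigned int b[4];
    if (sscanf("192.168.0.123", "%u.%u.%u.%u",
               &b[0], &b[1], &b[2], &b[3]) != 4) {
        return CMD_RES_ERROR; /* parsing failed entirely */
    }
    if (b[0] != 192 || b[1] != 168 || b[2] != 0 || b[3] != 123) {
        return CMD_RES_ERROR; /* a byte was mangled by the platform's sscanf */
    }
    return CMD_RES_OK;
}
```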

    Of course, this command has to be registered before use, just like I did in cmd_test.c:
    Code: C / C++
    Log in, to see the code

    This checks IP parsing, which is necessary because parsing has proven problematic; see the implementation:
    Code: C / C++ (snippet omitted from this excerpt; viewing it requires login on the original page)
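    A portable implementation along the lines the post describes might look like this. Assumptions are flagged in the comments: the signature and the exact workaround in OBK may differ; the key point from the thread is that some platform SDKs mishandle sscanf's %hhu, so parsing goes through full-width unsigned temporaries before narrowing to bytes.

```c
#include <assert.h>
#include <stdio.h>

/* Sketch of a portable str_to_ip: some platform SDKs (W600, LN882H,
   Realtek) mishandle sscanf's %hhu, so we parse into full-width unsigned
   temporaries and only then narrow to bytes. Illustrative, not the exact
   OBK implementation. Returns 1 on success, 0 on failure. */
static int str_to_ip(const char *s, unsigned char out[4]) {
    unsigned int a, b, c, d;
    if (sscanf(s, "%u.%u.%u.%u", &a, &b, &c, &d) != 4) {
        return 0; /* malformed address */
    }
    if (a > 255 || b > 255 || c > 255 || d > 255) {
        return 0; /* component out of byte range */
    }
    out[0] = (unsigned char)a;
    out[1] = (unsigned char)b;
    out[2] = (unsigned char)c;
    out[3] = (unsigned char)d;
    return 1;
}
```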

    This is not just a theoretical sample - we really had this issue:
    Screenshot of a comment discussing software update issues.
    As you can see above, we had an sscanf problem (its %hhu handling) that requires special handling on W600/LN882H/Realtek, and we didn't catch it early.
    If we had had per-platform device tests covering str_to_ip back then, we would have caught it sooner.

    It is impossible to catch this problem in the Windows self tests, because it is present only on some platforms with their specific implementations.

    Thanks to the per-platform tests, it's now possible to catch it.
    If you compile OBK with test commands enabled, and if your platform has str_to_ip broken, then this code:
    Code: C / C++ (snippet omitted from this excerpt; viewing it requires login on the original page)

    will attempt to parse "192.168.0.123"; if parsing fails, the "if" will detect it and the command will return CMD_RES_ERROR.
    Later, drv_test.c will catch that result and show an error here:
    Test panel displaying switch states and test results.

    Self tests are very useful, because they can quickly check whether all tested features work as expected. You don't need to set up a CW light to check if the PWMs are set correctly - this is done by the Windows self tests in the Simulator on each GitHub build. You also no longer need to manually check each page, like the local IP config, on every platform, because the per-platform tests cover that as well.

    How to run self tests?
    - to run the Windows simulator self tests, just trigger an online GitHub build; they are run automatically. Alternatively, if you compile the OBK simulator on your machine with MSVC, you can run it with the required argument: openBeken_win32.exe -runUnitTests 2
    - to run the per-platform tests, compile OBK with ENABLE_TEST_COMMANDS, flash it to your device, and run "backlog startDriver Test; StartTest 100;". Alternatively, just execute the desired test command in the console. That may be a better approach when one of the tests is crashing the device and you're trying to narrow down the crash cause.

    Practical sample where self tests are useful
    So now let's do a demonstration. We'll consider a hypothetical scenario where someone breaks a function by accident, for example the channel set.
    I modified CHANNEL_Set_Ex to always set the value to 0:
    Screenshot shows a GitHub pull request for merging a branch in the openshwprojects project.
    Then I committed the changes to GitHub.
    Let's see what happens.
    It's building, so we wait a moment...
    Task list in progress on a version control platform with status information.
    And then we get:
    Dashboard showing the status of task checks in a build system.
    As you can see, many self tests have failed. They expect channels to work, so they recognized that something is wrong:
    Computer screen from a CI/CD panel showing various build and test job statuses, with one error.

    Summary
    Now you've learned the basics of automatic testing in OBK. As you can see, this test system is really useful, especially thanks to the ability to compile and run OBK on Windows. You don't even need an IoT device to test and develop OBK; you can write most of the functionality on Windows, maybe except the platform-specific stuff.
    It should also be possible to run the self tests on Linux, as there are no required Windows-only dependencies; I just haven't attempted it yet.
    Let me know if you've found the self tests useful and if you have any suggestions on how to improve them! There may still be some OBK functionality that is not covered by tests, so any help is welcome...

  • #2 21467583
    p.kaczmarek2
    Moderator Smart Home
    The testing beautifully caught a new bug in a Pull Request... someone added TLS support for MQTT for us, but accidentally included code like this:
    Code fragment in a Pull Request showing a fix in WiFi connection handling, with a different implementation for the BK7231N platform.
    I guess he forgot to set g_bHasWiFiConnected on platforms other than BK7231N, with the effect that, after accepting this change, all other platforms would think they have no WiFi connection.

    Automatic testing has detected this, however!
    Screenshot of a code management discussion about issues with TLS implementation in MQTT.
    Screenshot of continuous integration test results showing errors in a Pull Request.
    Neither the author, myself, nor anyone reading saw the error, but the tests nevertheless caught it and allowed me to correct it.
    Screenshot of a Pull Request interface on a version control platform showing all checks passed with no conflicts.
  • #3 21491873
    p.kaczmarek2
    Moderator Smart Home
    I'm porting the OBK self tests to Linux for @niterian. @divadiow, can you check whether the binaries here are still working?
    https://github.com/openshwprojects/OpenBK7231T_App/pull/1577
    I'm just asking to make sure that I haven't accidentally broken anything on Beken or other platforms.
  • #4 21492839
    divadiow
    Level 38  
    p.kaczmarek2 wrote:
    are they still working


    well, it boots...

    Screenshot of the OpenBK7231N user interface with configuration options and device information.
  • #5 21568262
    _johnny_
    Level 10  
    Why don't the embedded tests include a hardware check, for example to see whether the LED was actually switched on? Nowadays, switching the state of a GPIO sits behind many layers of code, and the error may be hiding in any of them. How is this resolved?