pushpeshkarki
10 hours ago
Can we have more complex examples in the examples section, like actual gameplay automation rather than just basic UI navigation? This would help readers understand the capabilities of the tool/framework better. Also, I would like to know how the results are displayed to end users once the automation test suite finishes executing.
jspinak
4 hours ago
Thanks for your questions! The mobile game demo (https://jspinak.github.io/brobot/docs/tutorials/tutorial-bas...) shows game automation and automated image collection and labeling to build a dataset for model training.
Here's the Qontinui Runner's action log during live automation: https://i.imgur.com/8R4d2Uf.png. Note the GO_TO_STATE action – that’s unique to model-based GUI automation. Instead of writing explicit navigation steps, you tell the framework "go to this state" and it handles pathfinding automatically.
You can see that some actions failed (red X), like "select to process corn". A traditional script would crash here. The model-based approach handles this differently: the next GO_TO_STATE call finds a path from wherever the GUI actually is (the current active states) to the desired state. So even when individual actions fail, the automation self-corrects on the next navigation.
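To make the idea concrete, here's a minimal sketch of state-based navigation with self-correction. This is not Brobot's or Qontinui's actual API; the `StateGraph`, `find_path`, and `go_to_state` names are hypothetical, just illustrating pathfinding from whatever states are currently active:

```python
# Hypothetical sketch of model-based GUI navigation (not the framework's real API).
# States form a graph; go_to_state pathfinds from the currently active states,
# so a failed action earlier in the run doesn't strand the automation.
from collections import deque

class StateGraph:
    def __init__(self):
        self.transitions = {}  # state -> {neighbor: action to execute}

    def add_transition(self, src, dst, action):
        self.transitions.setdefault(src, {})[dst] = action

    def find_path(self, active_states, target):
        """BFS from any currently active state to the target state."""
        queue = deque((s, [s]) for s in active_states)
        seen = set(active_states)
        while queue:
            state, path = queue.popleft()
            if state == target:
                return path
            for nxt in self.transitions.get(state, {}):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [nxt]))
        return None  # target unreachable from current states

    def go_to_state(self, active_states, target):
        """Execute the transition actions along a found path."""
        path = self.find_path(active_states, target)
        if path is None:
            return False
        for src, dst in zip(path, path[1:]):
            self.transitions[src][dst]()  # click/type/etc. in a real run
        return True
```

The key point is that navigation is re-planned from the *observed* current states each time, rather than assumed from a fixed script position.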
Important clarification: This isn't test automation (using bots to test applications). The breakthrough is making the AUTOMATION ITSELF testable, enabling standard software engineering practices in a domain where they were previously infeasible. You can write integration tests that verify your bot works correctly before running it live. Section 11 of the paper covers this (Appendix 3 has an example from Brobot; qontinui.io provides visual test output).
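As a rough illustration of what "testable automation" means in practice (again a hypothetical sketch, not the framework's API): if the bot's logic depends on an injected GUI layer, you can verify it against a fake screen before ever running it live:

```python
# Hypothetical sketch: verifying bot logic offline by injecting a fake GUI.
def harvest_if_ready(gui):
    """Bot logic: click the crop only if the 'ready' icon is on screen."""
    loc = gui.find("ready_icon.png")
    if loc is None:
        return "skipped"
    gui.click(loc)
    return "harvested"

class FakeGui:
    """Stand-in for the real screen: scripted matches, recorded clicks."""
    def __init__(self, matches):
        self.matches = matches  # image name -> location, or absent
        self.clicks = []
    def find(self, image):
        return self.matches.get(image)
    def click(self, loc):
        self.clicks.append(loc)

# Integration-style test, no live GUI needed:
gui = FakeGui({"ready_icon.png": (120, 45)})
assert harvest_if_ready(gui) == "harvested"
assert gui.clicks == [(120, 45)]
assert harvest_if_ready(FakeGui({})) == "skipped"
```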
The approach works for any GUI automation: gaming, visual APIs for RL agents, data collection, business automation, and yes, also software testing. I started with games (Brobot, 2018) because brittleness was most painful there.
Does that help clarify?