Here at Eficode, we have a prime directive that has already been mentioned several times on this blog: if you can write it down, we can automate it.
When it comes to automating desktop client software with graphical user interfaces (GUIs), things are not always so rosy. This is especially true for GUIs implemented with, how shall I put it nicely, older technologies, where the only interface available is the GUI itself. What if you could automate exactly what is done manually: seeing a thing that looks like a button, then clicking it?
As Mark Twain tells us, "there is no such thing as a new idea". Automating GUI testing by recognizing UI elements has been done before, Sikuli Script being a notable example. Taking inspiration from these predecessors, and continuing our theme from last week, our next open source release is an effort to provide ever-increasing automation through image recognition. Let me introduce you to ImageHorizonLibrary for Robot Framework!
The main goal of this project has been to provide the capabilities of Sikuli with a lot less hassle: ImageHorizonLibrary brings together top open source solutions, all in pure Python, to provide an easy-to-install, intuitive collection of keywords to be used in your tests. It is built on top of pyautogui, which in turn stands on the shoulders of pymsgbox, pytweening, pillow, and pyscreeze, making getting started with image-recognition-based test automation as simple as a `pip install`.
With ImageHorizonLibrary, as shown in the accompanying animation, we have a set of reference images taken with a screenshot tool from different elements of the UI: buttons, menus, and so on. We are then able to automate the use of the application as recommended by industry best practices: we test software by having the test automation replicate exactly what a user would do, in an automatic, repeatable way. ImageHorizonLibrary also offers keyboard and mouse interaction, automatic screenshots when tests fail, and more. Come and check us out on GitHub!
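To give a feel for what this looks like in practice, here is a minimal sketch of a Robot Framework test using the library. The folder name and the image names (`file_menu`, `save_button`) are hypothetical; they would refer to screenshots you have saved in your reference image folder, and the exact keywords and arguments should be checked against the library's keyword documentation:

```robotframework
*** Settings ***
# reference_folder points to the directory holding your screenshot images
Library    ImageHorizonLibrary    reference_folder=${CURDIR}${/}reference_images

*** Test Cases ***
Save A File Through The Menu
    # Wait until the menu is visible on screen, then click it
    Wait For       file_menu    timeout=10
    Click Image    file_menu
    Click Image    save_button
```

Because the keywords operate purely on what is visible on screen, the same test works regardless of which technology the application under test was built with.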
The march does not end here, though. In the future, we are looking to implement even better image recognition, as well as to iron out a few quirks in keyboard event handling.