Testing Alfred 3 Using Alfred's Own Workflows
We have another little sneak peek at Alfred 3's workflows for you today!
For version 3, numerous aspects of Alfred's workflows have been updated and improved, and tons of new features have been added. As a result, every facet of workflows needs to be tested thoroughly to ensure the features work exactly as expected.
Anyone with programming experience will appreciate that this can be a laborious process. Automation can help speed things up, as well as avoid human error (and human distraction - oh look, new tweets!).
So how are we doing this? Workflows have become so powerful that we're now using workflows to test workflows! (You wouldn't believe the number of Inception jokes this has led to...)
Creating a "Pass/Fail Test" workflow
Without any scripting, we've created some great unit tests for the new "Transform" utility object, which takes the input and performs one of the following transforms: Trim Whitespace, Upper Case, Lower Case, Camel Case, Reverse String, Strip Diacritics and Strip Non-Alphanumeric.
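To give you a feel for what these transforms do, here are a few before-and-after illustrations of our own (the sample strings here aren't the ones used in the actual test workflow):

    Trim Whitespace:   "  Hello Alfred  "  →  "Hello Alfred"
    Upper Case:        "Hello Alfred"      →  "HELLO ALFRED"
    Reverse String:    "Hello Alfred"      →  "derflA olleH"
    Strip Diacritics:  "Café déjà vu"      →  "Cafe deja vu"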
Let's take a look at the workflow in more detail. The keyword "utest" launches the workflow; a new unit-tests.txt file is created in a specified location (overwriting any previous one), so that the results of each test can be appended to it.
The JSON utility object (in yellow) allows you to modify the workflow stream dynamically: the argument, the configuration and the variables. The string we want to test is set as the JSON's output argument, while the test name and the expected result are added as variables. In this workflow, each yellow object sets up the test for its connected Transform object:
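For example, the JSON for a Trim Whitespace test could look roughly like this; the variable names test_name and expected_result are simply the ones we're using for this illustration:

    {
      "alfredworkflow": {
        "arg": "   Hello Alfred   ",
        "variables": {
          "test_name": "Trim Whitespace",
          "expected_result": "Hello Alfred"
        }
      }
    }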
Has the test passed? Each Transform object is connected to two Filter objects, one for pass (green) and one for fail (red). The green filter is configured to only continue if the input is equal to the expected value, while the red filter only continues if the input is not equal to it.
We have used the variables we set earlier to configure the Filters:
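In text form, the two Filters for each test boil down to something along these lines (written with Alfred's {var:...} placeholder syntax, and using the illustrative expected_result variable from the JSON example above):

    Pass Filter (green):  only continue if the input is equal to      {var:expected_result}
    Fail Filter (red):    only continue if the input is not equal to  {var:expected_result}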
To tidy up the results, we use an Argument utility, where we output whether the test has passed or failed (depending on which Filter it's connected to), the name of the test (from the variable), and the processed text from the Transform utility being tested:
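As a rough sketch, the Argument utility on each branch could be filled in with a template along these lines, where {query} stands for the processed text arriving from the Transform object and {var:test_name} is the illustrative variable from earlier (the exact wording in the results file doesn't matter):

    {var:test_name}: PASSED - {query}
    {var:test_name}: FAILED - {query}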
Each result is appended to the file created earlier.
When the tests finish, a Notification pops up to let us know, and the unit-tests.txt file opens so we can look at the results. Wonderful, every test has passed!
Another advantage of creating these tests now is that we can re-run them when testing future releases, as part of regression testing, making them a huge time saver!
In a future post, we'll show you how we could simplify this workflow even further by dynamically configuring the Transform object from the yellow JSON utility's config, so that every test can connect to a single Transform object instead of multiple ones.
With every day that passes - and every test that passes with flying colours - we get a step closer to Alfred 3 being ready. We can't wait to see what amazing things you'll create with the infinitely more flexible workflow objects in Alfred 3!
Stay tuned for more news and sneak peeks. Follow us on Twitter (@alfredapp).