PARALLELISM – A DIFFERENT YET SIMPLE TESTING APPROACH
In this digital age, every bit of information has to be created, transferred, and used with zero delay. Software products are an inherent part of this huge data stream, so the same rule applies to them as well. The importance of on-schedule deliveries is exceptional, and for that reason software vendors try to optimize their product delivery lifecycle. One of the processes that is often optimized is testing. Among the classic automation approaches, parallel testing aims to claim its rightful place among the methods used in testing process optimization. Parallel testing is usually referred to as a purely automated approach. However, imagine a situation where test-script updates lag behind the actual UI changes, or where there is no automation team available. In the first scenario, more effort might be spent on preparing the scripts than is gained from executing them. In the second case, one may want to grab a crowbar (the red one) and "optimize" the test team so that it works faster. This would inevitably result in lots of missed bugs and is therefore not an acceptable enhancement. So instead of reaching for a crowbar, one may want to turn to a more innovative approach: manual parallel testing.
Products are delivered according to different models as they move through globalization. There are currently three major routes: linear, clustered, and simultaneous. The linear delivery model implies delivery of a single language at a time and is most common for products that evolve at a slow pace. The clustered model, on the other hand, is used when culture adaptation has to be provided for a relatively small batch of languages (3-6). The simultaneous model (also known as SimShip) is quite popular nowadays and implies delivery of every supported language at once. Since parallel testing involves several instances of the product, the approach is most useful in the SimShip model, and it can also be useful in the clustered model if correctly applied. An overview of the possible implementation methods throughout the localization (L10n) and internationalization (i18n) workflow is provided below.
Localization Quality Assurance
Different process templates are applied during localization (L10n), but they all share the same core goal: adapting the application to a certain culture. Let’s take a look at the key points of L10n where parallelism can assist.
Since UI-related bugs have to be dealt with in any case, SyncTest can help here: it assists in detecting truncations, overlapped controls, misalignments, and general untidiness of the UI. The analysis can be performed either on captured screenshots or through on-the-fly product validation. Both methods require attentiveness and may consume a bit more time than the analysis of a single language pair in conventional testing (e.g., English and German). However, since several additional languages are covered simultaneously, an overall time reduction is still achieved.
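To make the UI checks above concrete, here is a minimal sketch of how truncations and overlapped controls could be flagged automatically from layout metadata. The `Control` structure and its fields are illustrative assumptions, not part of SyncTest's actual API; real control bounds would come from whatever the tool exports alongside its screenshots.

```python
# Hypothetical sketch: flagging overlapped or truncated controls from
# per-control layout data. Field names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Control:
    name: str
    x: int
    y: int
    width: int
    height: int
    text_width: int  # rendered width of the control's label, in pixels


def overlaps(a: Control, b: Control) -> bool:
    """True when the two controls' bounding boxes intersect."""
    return not (a.x + a.width <= b.x or b.x + b.width <= a.x or
                a.y + a.height <= b.y or b.y + b.height <= a.y)


def is_truncated(c: Control) -> bool:
    """True when the rendered label is wider than the control itself."""
    return c.text_width > c.width


def ui_issues(controls):
    """Collect truncation and pairwise-overlap findings for a dialog."""
    issues = [f"truncation: {c.name}" for c in controls if is_truncated(c)]
    for i, a in enumerate(controls):
        for b in controls[i + 1:]:
            if overlaps(a, b):
                issues.append(f"overlap: {a.name}/{b.name}")
    return issues
```

A longer German or Russian label would typically trip the truncation check first, which is exactly the class of bug the screenshot review is hunting for.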
From the linguists’ perspective, SyncTest cuts down the resources allocated for screenshot capturing and helps provide the relevant application context for translation. Translators might not have access to the product itself, or the product might be too complex to deploy on their side. Providing sets of screens and/or remote access to clients reduces the re-work costs that out-of-context translation causes for translators, engineers, and testers.
Internationalization Quality Assurance
In internationalization (i18n) the checks differ somewhat from those applied in L10n, but the concept of parallel testing remains the same. Here one may deal with a single-language layout (a mockup, for example) that is to be tested against several OS locales.
The tests that the parallel approach can cover in the i18n sense range from functional to UI-related ones. These include tests targeting product functionality, character corruption, date/time validity, font consistency, bi-directional (BiDi) layout acceptability, and others. In fact, every manual i18n-related test can be run using SyncTest, simply because the tester remains in control of the process.
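Two of the checks listed above lend themselves to small helper scripts a tester might keep at hand. The sketch below is an illustrative assumption, not a SyncTest feature: it spots common character-corruption markers (the Unicode replacement character and typical UTF-8-decoded-as-Latin-1 damage) in extracted strings, and verifies that a date string matches the format expected for a locale.

```python
# Hypothetical i18n spot checks; marker list and formats are assumptions.
from datetime import datetime

# U+FFFD replacement char, literal "??", and the "â€" prefix that appears
# when UTF-8 punctuation is mis-decoded as Latin-1.
MOJIBAKE_MARKERS = ("\ufffd", "??", "â€")


def looks_corrupted(s: str) -> bool:
    """Heuristic: does the string contain a known mojibake marker?"""
    return any(marker in s for marker in MOJIBAKE_MARKERS)


def valid_date(s: str, fmt: str) -> bool:
    """Does the date string parse under the locale's expected format?"""
    try:
        datetime.strptime(s, fmt)
        return True
    except ValueError:
        return False
```

Such heuristics only pre-filter candidates for the tester's eyes; they do not replace looking at the running product under each locale.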
The output of testing can be arranged as sets of screenshots or as the relevant video streams, and it can be gathered during the test run for further bug analysis.
Verifying product functionality is one of the key factors taken into account during quality control. While such checks are often automated, there are still situations when automated tests do not return the investment put into them. For instance, when test-script updates lag behind the actual UI changes, the benefit of automation is severely reduced: more effort might be spent on script preparation than is gained from script execution.
Should the automation team be absent, finding ways to optimize the testing process without quality degradation becomes an even harder challenge. Still, if there is a need to perform compatibility tests or build-vs-build smoke checks, SyncTest can be of assistance.
Given that compatibility testing deals with different environment configurations, it makes sense to see a sort of parallelism between compatibility testing and, say, internationalization testing from a purely configurational point of view. Although the checks applied differ to a certain extent, both have to be performed against a variety of setups.
Thus it is possible to use the parallel solution for compatibility checks on different OS versions (XP, Vista, Seven, etc.) and OS flavors (Home, Pro, Enterprise, etc.), as well as for checks involving different third-party software and hardware. With SyncTest, running tests on real hardware can be optimized even further: since the clients are accessed remotely, all the hardware can sit in a server room instead of occupying the tester’s desk.
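The configuration space described above is a simple cross product, which is easy to enumerate up front when planning which remote client gets which setup. The sketch below uses the OS versions and flavors named in the text; the third-party entries are purely hypothetical placeholders.

```python
# Enumerating the compatibility matrix; concrete values are examples only.
from itertools import product

os_versions = ["XP", "Vista", "Seven"]
flavors = ["Home", "Pro", "Enterprise"]
third_party = ["none", "antivirus X"]  # hypothetical co-installed software

matrix = [
    {"os": o, "flavor": f, "extra": t}
    for o, f, t in product(os_versions, flavors, third_party)
]
# 3 x 3 x 2 = 18 configurations to distribute across remote clients
```

Even a modest matrix like this grows quickly, which is precisely why running several configurations in parallel pays off.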
The test results would be similar to those of manually performed tests, but the overall time reduction would be immediate.
Usually we need a lot of hardware, a bunch of testers, and a few test cases to run, and we are good to go: all the testers do the same thing, each on their own computer. There is, however, another approach, in which a single tester performs the same tests in different environments simultaneously. This greatly reduces the number of man-hours spent.
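The one-tester-many-environments idea can be pictured as fanning the same checklist out to every configured client at once. The following is an illustrative sketch, not SyncTest's implementation: `run_step` is a stand-in for whatever mechanism actually drives the remote clients, and the environment names are invented examples.

```python
# Illustrative fan-out of one smoke checklist across several environments.
# `run_step` is a placeholder for the real remote-control mechanism.
from concurrent.futures import ThreadPoolExecutor

ENVIRONMENTS = ["Win7-Pro-de", "Win7-Home-ja", "Vista-Ent-ar", "XP-Pro-ru"]

STEPS = ["launch app", "open settings dialog", "save and exit"]


def run_step(env: str, step: str) -> str:
    # Placeholder: a real implementation would drive the remote client
    # and report the observed outcome instead of a canned "OK".
    return f"{env}: {step} -> OK"


def smoke_check(env: str) -> list:
    """Run every step of the checklist against one environment."""
    return [run_step(env, step) for step in STEPS]


# One tester kicks off the same checklist everywhere at once.
with ThreadPoolExecutor(max_workers=len(ENVIRONMENTS)) as pool:
    results = dict(zip(ENVIRONMENTS, pool.map(smoke_check, ENVIRONMENTS)))
```

The tester then reviews one consolidated `results` map instead of repeating the checklist once per machine, which is where the man-hour saving comes from.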
SyncTest is our internal tool, developed on the basis of experience gained over many years of software localization and QA. It is also worth mentioning that the test results include videos and screenshots, so they can be used for linguistic review or other types of analysis, and real-time performance information of the tested gaming systems can be viewed in one place.