


Offline UI testing on iOS with stubs

Here at JUST EAT, while we have always used stubs in our unit tests, our functional and UI tests ran against production public APIs. This caused recurring problems, with APIs returning different data depending on external factors such as the time of day. We have recently adopted the UI testing framework that Apple introduced at WWDC 2015 to run functional/automation tests on the iOS UK app, together with stubs for our APIs. This has enabled us to eliminate the test failures caused by network requests failing or returning unexpected results.


For our UI testing we used to rely on KIF, but we were never completely satisfied with it, for reasons such as:

  • The difficulty of reading KIF output, because it was mixed in with the app logs
  • The cumbersome process of taking screenshots of the app upon a test failure
  • General issues also reported by the community on the project's GitHub page

We believe that Apple is committed to providing developers with a full set of development tools and, even though some of them are far from reliable in their initial releases, we trust they will become more and more stable over time.

Another pitfall for us was that our APIs return different values based on the time of day, because restaurants might be closed and/or their menus might change. As a consequence, running automation tests against our public APIs caused some tests to fail intermittently.

Proposed Solution

Rethinking our functional tests from scratch allowed us to raise the bar and solve outstanding issues with a fresh mind.

We realised we could use the same technology used in our unit tests to add support for offline testing in the automation tests, and therefore designed a solution around OHHTTPStubs to stub the API calls from the app. Doing this was not as trivial as it might seem at first: OHHTTPStubs works nicely when writing unit tests, as stubs can be created and removed during the test, but when it comes to automation tests it simply doesn't work out of the box.

The tests and the application run as separate processes, meaning that there is no way to inject data directly from the test code. The solution is to launch the application with launch arguments that enable a "testing mode" and therefore generate a different data flow.

We pass parameters to the app either in the setUp method (once per test suite):
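In a suite-wide setUp this looks roughly like the following sketch; the STUB_API_CALLS_stubsTemplate_addresses argument is the one discussed below, while the name of the second argument is illustrative:

```objc
// UI test suite setup (sketch; FAKE_REACHABILITY is an illustrative name)
- (void)setUp {
    [super setUp];
    self.continueAfterFailure = NO;

    XCUIApplication *app = [[XCUIApplication alloc] init];
    // These arguments are read back inside the app via NSProcessInfo
    app.launchArguments = @[@"STUB_API_CALLS_stubsTemplate_addresses",
                            @"FAKE_REACHABILITY"];
    [app launch];
}
```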

or per single test:
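A per-test launch is the same idea, just scoped to one test method (test name and second argument are illustrative):

```objc
// Launching with test-specific arguments inside a single test
- (void)testAddressSearch {
    XCUIApplication *app = [[XCUIApplication alloc] init];
    app.launchArguments = @[@"STUB_API_CALLS_stubsTemplate_addresses",
                            @"FAKE_REACHABILITY"];
    [app launch];

    // ... interact with the UI and assert on the stubbed data ...
}
```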

In our example we pass two arguments to signal to the app that the automation tests are running. The first is used to stub a particular set of API calls (we'll come back to the naming later), while the second is particularly useful for faking the reachability check or the network layer, preventing any outgoing connections. This helps to make sure the app is fully stubbed; if it isn't, tests could break in the future due to missing connectivity on the CI machine, API issues or time-sensitive events (restaurants being closed, etc.).

We enable the global stubbing at the end of the application:didFinishLaunchingWithOptions: method:
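A sketch of what that looks like; the JEHTTPStubManager method name is illustrative:

```objc
// AppDelegate.m (sketch)
- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // ... regular app setup ...

    NSArray *arguments = [NSProcessInfo processInfo].arguments;
    for (NSString *argument in arguments) {
        if ([argument hasPrefix:@"STUB_API_CALLS_"]) {
            // The suffix identifies the stub bundle to load,
            // e.g. stubsTemplate_addresses
            NSString *bundleName =
                [argument substringFromIndex:@"STUB_API_CALLS_".length];
            [JEHTTPStubManager applyStubsInBundleWithName:bundleName];
        }
    }
    return YES;
}
```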

The launch arguments are retrieved in the application via the NSProcessInfo class. It should now be clearer why we used the STUB_API_CALLS_stubsTemplate_addresses argument: the suffix stubsTemplate_addresses identifies a special bundle folder in the app containing the information needed to stub the API calls involved in the test.

This way the Test Automation Engineers can prepare the bundle and drop it into the project without the hassle of writing code to stub the calls. In our design, each bundle folder contains a stubsRules.plist file with the relevant information to stub an API call with a given status code, HTTP method and, of course, the response body (provided in a file in the bundle).


This is how the stubs rules are structured:
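For illustration, a stubsRules.plist along these lines could encode one rule per call to stub (the key names are illustrative; the response body lives in a separate file in the same bundle):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<array>
    <dict>
        <key>routeToStub</key>
        <string>/addresses</string>
        <key>httpMethod</key>
        <string>GET</string>
        <key>statusCode</key>
        <integer>200</integer>
        <key>responseFile</key>
        <string>addresses_response.json</string>
    </dict>
</array>
</plist>
```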


At this point, all that remains is to show some of the code responsible for doing the hard work of stubbing. Here is the JEHTTPStubManager class previously mentioned in the AppDelegate.
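The following is a simplified sketch of how such a manager could read the rules from the bundle and register each one with OHHTTPStubs (method and plist key names are illustrative):

```objc
// JEHTTPStubManager.m (simplified sketch)
@implementation JEHTTPStubManager

+ (void)applyStubsInBundleWithName:(NSString *)bundleName {
    NSString *bundlePath = [[NSBundle mainBundle] pathForResource:bundleName
                                                           ofType:@"bundle"];
    NSBundle *stubsBundle = [NSBundle bundleWithPath:bundlePath];
    NSString *rulesPath = [stubsBundle pathForResource:@"stubsRules"
                                                ofType:@"plist"];
    NSArray *rules = [NSArray arrayWithContentsOfFile:rulesPath];

    for (NSDictionary *rule in rules) {
        [OHHTTPStubs stubRequestsPassingTest:^BOOL(NSURLRequest *request) {
            // Match on URL path and HTTP method as described by the rule
            return [request.URL.path isEqualToString:rule[@"routeToStub"]] &&
                   [request.HTTPMethod isEqualToString:rule[@"httpMethod"]];
        } withStubResponse:^OHHTTPStubsResponse *(NSURLRequest *request) {
            NSString *responsePath =
                [stubsBundle pathForResource:rule[@"responseFile"] ofType:nil];
            return [OHHTTPStubsResponse responseWithFileAtPath:responsePath
                                                    statusCode:[rule[@"statusCode"] intValue]
                                                       headers:@{@"Content-Type": @"application/json"}];
        }];
    }
}

@end
```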

We created a utility category around OHHTTPStubs:
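For example, a category along these lines (the method name is illustrative) centralises common patterns such as installing a catch-all stub, so no request can ever reach the network:

```objc
// OHHTTPStubs+JustEat.h / .m (sketch)
@interface OHHTTPStubs (JustEat)
+ (void)je_stubAllRequestsWithStatusCode:(int)statusCode;
@end

@implementation OHHTTPStubs (JustEat)

+ (void)je_stubAllRequestsWithStatusCode:(int)statusCode {
    // Catch-all stub: guarantees that no request ever hits the network
    [OHHTTPStubs stubRequestsPassingTest:^BOOL(NSURLRequest *request) {
        return YES;
    } withStubResponse:^OHHTTPStubsResponse *(NSURLRequest *request) {
        NSData *body = [@"{}" dataUsingEncoding:NSUTF8StringEncoding];
        return [OHHTTPStubsResponse responseWithData:body
                                          statusCode:statusCode
                                             headers:@{@"Content-Type": @"application/json"}];
    }];
}

@end
```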

Having our automation tests run offline eliminated the majority of red test reports we were seeing with our previous setup. For any non-trivial application, running all the test suites takes several minutes, and the last thing you want to see is a red mark in CI due to a network request gone wrong. The combination of OHHTTPStubs and Apple's test framework has enabled us to run the automation tests at any time of day and to completely remove the class of errors that arise from network requests going wrong.


Acceptance Testing in Appium

Why use acceptance testing?

Acceptance testing measures the quality of the application and helps ensure good compatibility across users and devices (iOS and Android devices in our case). To get the most out of our testing effort and coverage, a robust, cross-platform tool is required to automate the application; Appium is a good choice that fits the bill.

Acceptance testing gives us confidence in two main areas: that new features have been built to the correct specification, and that existing features continue to function after new integrations.
It's notoriously difficult to prove that developers have built the feature they were asked for. One reason for this is that using natural language to describe feature specifications can result in ambiguities and misunderstandings between developers and stakeholders.


One approach being undertaken by a number of teams in International Engineering at JUST EAT is to produce executable specifications.

This has numerous benefits in terms of understanding the requirements of a project through user stories but, in particular, it gives us specific steps that can be tested to verify a feature has been implemented correctly. These sets of steps are called scenarios, and they capture the flow of a specific task that a user wants to perform.

The following are some examples from our own specifications…

Scenario Outline: Make a successful order – Login to JUST EAT through Facebook
  Given I am on the home screen
  And I search for a restaurant
  And I add items to basket
  And I login through <social> account <email> and <password>
  When I checkout with cash payment
  Then the order should be placed successfully

  Examples:
    | social   | email         | password |
    | facebook | test@test.com | test     |

Scenario: View order details on Order History screen
  Given I am on the home screen
  And I have previously ordered from this device
  And I navigate to order history
  When I select an order
    | Test Restaurant |
  Then I am shown the details of that order

These specifications are written in Gherkin syntax, which is supported by the Cucumber suite of test tools. The specific tools we chose to use are outlined in the next section.

Summary of technology


The iOS WebKit Debug Proxy allows access to webviews and their content. It provides an interface to the various components of a web page within a webview using the same contract as that used for the native application. For the JUST EAT app we use it to facilitate automation of the checkout flow, so that we can enter delivery details, place an order and read the details of the order confirmation page.

As part of its initialisation, the Appium process launches the iOS Simulator. Appium communicates with the simulator using Apple's UI Automation layer, which provides JavaScript APIs to access the UI of the application at runtime. Appium then wraps this interface in another API that conforms to the JSON Wire Protocol. The benefit of this abstraction is that it standardises UI access across platforms, meaning the same tests can be run on a variety of platforms.

While the Appium server provides the interface to access the UI of the running application, Cucumber JS is used to execute our scenarios defined in Gherkin syntax. The code that backs these steps contains the procedures to send commands to Appium.

NodeJS underlies most of the technologies listed above; it implements a non-blocking I/O model with an event loop that uses a single thread and performs I/O asynchronously. Mocha, Chai and Chai-as-Promised were among the other modules used to provide additional testing support.

Page object model


Since we have apps on both iOS and Android, we created a single test suite that can run the same set of tests on both platforms, avoiding duplicated test code and saving time; the Page Object Model helped us achieve this.

Each page object models a screen or a group of controls in the user interface. For example, the home screen of the iOS app can be represented by a page object that exposes a text field and a search button as its interactive elements. These elements can be located by ID, XPath, class name or name, and stored as properties on the model to be used during the test.
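A minimal page object for that home screen might look like the following sketch; the locator values and the driver methods behind it are illustrative:

```javascript
// Home-screen page object (sketch; locator values are illustrative)
function HomePage(driver) {
  this.driver = driver;
}

// Element locators stored on the model, keyed by a user-level name
HomePage.prototype.locators = {
  postcodeField: { strategy: 'accessibility id', value: 'postcode-field' },
  searchButton:  { strategy: 'accessibility id', value: 'search-button' }
};

// Tests call intent-revealing methods instead of handling raw locators
HomePage.prototype.searchForRestaurants = function (postcode) {
  var locators = this.locators;
  return this.driver
    .element(locators.postcodeField.strategy, locators.postcodeField.value)
    .sendKeys(postcode)
    .element(locators.searchButton.strategy, locators.searchButton.value)
    .click();
};
```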

This adds a level of abstraction: the tests use an instance of the page object to interact with the UI elements rather than worrying about element locators themselves. The abstraction is useful because the model is closely aligned with the user's perception of the interface, and so is more appropriate for acceptance testing.

Another benefit is that we can add elements to a given model over time, as needed by the tests, instead of including all the elements on the screen that the tests do not interact with.

Below is a diagram of the test project structure, indicating where the page objects are located.



We direct the tests to use the page objects from the right folder, based on the platform we are running the tests on.

A potential issue with a cross-platform test suite is that you may have non-uniform UI designs across platforms. Luckily, our UX designers and developers make sure the flow is uniform across iOS and Android, meaning that we don’t have to add lots of “if(platform)” statements in the test code.

Continuous integration

A major use of automated acceptance tests is verifying the validity of new and existing features during automated build processes. To this end, we created a new script that runs the tests on TeamCity.

The script itself takes a number of arguments to allow configuration of the test environment for different build jobs:


  • Platform Name specifies which platform to run on, i.e. iOS or Android.
  • Device Name specifies the type of device to run on, e.g. iPhone 6 Plus, Samsung Galaxy S4.
  • Platform Version allows a particular SDK to be targeted, e.g. iOS 7.1, Android 4.0.4.
  • App Path specifies the path to the app executable under test.
  • Environment is a custom option introduced to allow the selection of a particular QA environment, e.g. QA19, QA99, Staging.
  • Country Code lets the tests know which flavour of the app is under test.
  • Cucumber Flags allows additional configuration to be passed to the grunt task runner.
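An invocation of the script might look like this (the script name and flag spellings are illustrative; only the arguments themselves come from the list above):

```shell
./run_acceptance_tests.sh \
  --platform-name iOS \
  --device-name "iPhone 6 Plus" \
  --platform-version 8.4 \
  --app-path ./build/JustEat.app \
  --environment QA19 \
  --country-code uk \
  --cucumber-flags "--tags @checkout"
```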

To integrate it with TeamCity we took the following steps…

  1. We created a new build template and added a number of configurable parameters for the script arguments.
  2. We added build features to allow consumption of the JUnit report and archiving of the HTML report.

  3. We added an artifact dependency to ensure that the most recent valid build would always be under test.



Reporting test results is an important step in continuous integration as it allows tracking of test results over time, as well as providing a quick insight for interested parties without requiring them to read test metadata files such as XML and JSON.

We use a selection of JavaScript tools to produce and convert the output of Cucumber JS. Grunt provides the framework in which to execute Cucumber JS and consume the test output, through a simple task runner with various configurable settings.

The JSON output produced is simple and readable but not necessarily compatible with continuous integration reporting tools. To this end we use protractor-cucumber-junit which converts the JSON into two formats:

  • HTML provides a simple and readable page that any non-technical user can access for a quick overview of test results.
  • JUnit XML is almost universally consumable by CI tools allowing native presentation of results in your CI front-end of choice, as well as enabling CI tools to track trends in testing over time, code coverage, and so on.


Simulation across platforms

We use the iOS simulators included as part of Xcode, which means the tests can only be run from a Mac. Xcode can run only one instance of Instruments at a time, and therefore we cannot parallelise the iOS test run.

We use Genymotion for Android emulation, as it is more promising than the emulator that comes with the Android SDK. Read Andrew Barnett's blog post on how to use Android emulators on CI here [//tech.just-eat.com/2015/02/26/using-android-emulators-on-ci/].

Problems and challenges

We're using simulators and emulators to run our acceptance tests, as they give us more coverage in terms of devices and an easy way to test small changes. Nevertheless, not all types of user interaction can be tested on them, and the iOS simulators in particular have problems with certain Appium gestures.

Furthermore, during test execution there are no real-world events such as battery drain or interrupts, so we do not get accurate results regarding the performance of the app. Since we are in the initial stages of automating the apps, we find simulators and emulators a good stop-gap while experimenting with continuous integration.

In the future it would be desirable to use a cloud-hosted service such as SauceLabs. This would allow tests to be run across many devices and OS versions in parallel.