


Unit testing front-end JavaScript with AVA and jsdom

Writing tests for JavaScript code that interacts with the DOM can be tricky. Luckily, using a combination of AVA and jsdom, writing those tests becomes a lot easier.

This article will walk you through how to set everything up so you can get started writing your tests today.

What is AVA?

AVA is described as a “Futuristic JavaScript test runner”. Sounds fancy, huh?! So, what is it exactly that makes it “futuristic”?!

Tests run quickly

AVA runs test files in parallel, each in its own separate process, with the tests inside those files running concurrently. This offers better performance than other test runners that run tests serially, such as Mocha. This also means that each test file is run in an isolated environment — great for writing atomic tests.

Simple API

AVA’s API is very small because, in AVA’s own words, it is “highly opinionated”. You won’t find any assertion aliases here! This reduces the cognitive load required when writing tests.

Write tests in ES2015

You don’t need to do anything to be able to write tests in ES2015, AVA supports this out of the box! Under the covers it’s using Babel to transpile with the es2015 and stage-2 presets.

No implicit globals

AVA has no implicit globals, simply import it into your test file and you have everything you need.

Other benefits

There are a whole host of other benefits which AVA offers such as:

  • Promise support
  • Generator function support
  • Async function support
  • Observable support
  • Enhanced assertion messages
  • Clean stack traces

All of this combined sounds very “futuristic” to me!

Getting off the launchpad with AVA

Now that we know more about AVA, let’s create a new project and start writing some tests.

Start by running npm init inside a new project folder. This will create a package.json file, which will contain various pieces of information about the project such as its name, authors, and dependencies, among others. Hitting enter for each question will fill in a default value.

Installing AVA

Add AVA to the project by typing npm install ava --save-dev, then update the scripts section in package.json:
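The exact snippet hasn't survived in this copy, but a minimal scripts section along these lines would do the job:

```json
"scripts": {
  "test": "ava --verbose"
}
```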

The --verbose flag enables the verbose reporter, which means more information is displayed when the tests are run.

When using npm scripts, the path to AVA in the node_modules folder will be resolved for us, so all we need to do is type npm test on the command line. Doing so right now will give us an exception:

Let’s fix that by adding a test.

Writing a test

Create a test directory, with a file named demo.test.js inside, then add a test:
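The original listing isn't preserved here; based on the description that follows, it would look something like this (the test title is illustrative):

```js
import test from 'ava';

test('one plus one equals two', t => {
    t.is(1 + 1, 2);
});
```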

First, AVA is imported into the module, then the test function is called, passing a string as the first parameter to describe what the test is doing. The second parameter is the test implementation function, which contains the body of the test; it provides us with an object, t, from which we can call the assertion functions.

The is assertion is used here, which takes two values and checks that they are both equal (using === so there is no type conversion).

Note: You can choose any name you like for the t parameter, such as assert. However, using the t convention in AVA will wrap the assertions with power-assert which provides more descriptive messages.

Run npm test and the test result will be printed out.

Success! Our test passed as expected. To see an example of what a failing test would look like, change the test assertion to t.is(1 + 1, 1). Run the test now and you’ll see an error.

As you can see, there is a lot of useful information provided in order to help us track down the issue.

Testing modules

To demonstrate how to test a module, create a new folder called src in the root of the project with a file inside called demo-module.js with the contents:
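The original module isn't preserved in this copy; any small ES2015 module will do, for example this hypothetical double function:

```js
// src/demo-module.js (hypothetical stand-in for the original module)
export default function double(n) {
    return n * 2;
}
```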

Update demo.test.js by first importing the module, then adding a new test:
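Assuming the hypothetical double module above, the updated test file might look like this:

```js
import test from 'ava';
import double from '../src/demo-module';

// ...existing test from earlier...

test('double returns twice the value passed in', t => {
    t.is(double(5), 10);
});
```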

Running npm test now will give you an exception.

Uh oh, what happened?

AVA will transpile ES2015 code in your tests; however, it won’t transpile code in modules imported from outside those tests. This is so that AVA has zero impact on your production environment.

If our source modules are written in ES2015, how do we tell AVA that we’d like them to be transpiled too?

Transpiling source files

To transpile source files, the quick and dirty option is to tell AVA to load babel-register which will automatically transpile the source files on the fly. This is ok if you have a small number of test files, but there is a performance cost which comes with loading babel-register in every forked process.

The other option is to transpile your sources before running the tests in order to improve performance.

The next two sections look at how each technique can be achieved.

Transpile with babel-register

Add babel-register by running npm install babel-register --save-dev, then add a "babel" config to package.json
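The config itself isn't shown in this copy; a minimal sketch, assuming the es2015 preset is also installed (npm install babel-preset-es2015 --save-dev), would be:

```json
"babel": {
  "presets": ["es2015"]
}
```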

Next, add "babel-register" to the AVA "require" section

Run npm test and the tests will once again pass, great!

The recommendation from the AVA team is to use babel-register “until the performance penalty becomes too great”. As your test suite grows you’ll need to look into setting up a precompilation step.

Setting up a precompilation step

A precompilation step will transpile your source modules before the tests are run in order to improve performance. Let’s look at one way to set this up.

Note: If you were following along with the last section you’ll need to remove the references to babel-register. First run npm uninstall babel-register --save-dev, then remove "babel-register" from the AVA "require" section in package.json.

Start by adding the babel-cli and babel-preset-es2015 packages to the project: npm install babel-cli babel-preset-es2015 --save-dev.

Next, add a "babel" config to package.json

In order to run the tests, we need to update the npm scripts. Add a new npm script called precompile
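A sketch of what that could look like, with the directory names matching the description in the next paragraph:

```json
"scripts": {
  "precompile": "babel src --out-dir dist",
  "test": "ava --verbose"
}
```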

The precompile npm script will tell Babel to take the files in the src directory, transpile them, then output the results to the dist directory.

Next, the test npm script needs to be updated so that it runs the precompile step before running the tests
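Something along these lines:

```json
"scripts": {
  "precompile": "babel src --out-dir dist",
  "test": "npm run precompile && ava --verbose"
}
```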

The double ampersand (&&) tells npm to first run the precompile script and then the AVA tests.

The final task is to update the reference to demo-module inside demo.test.js to point at the compiled code, we do this by replacing ../src with ../dist:
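With the hypothetical double module from earlier, the import line becomes:

```js
import double from '../dist/demo-module';
```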

Run npm test and we’re presented with all green tests!

Testing the DOM using Node

So far we have the ability to test JavaScript code, but what if we’d like to test a function which makes use of the DOM? Node doesn’t have a DOM tree, so how do we get around this?

One option is to use a combination of a test runner and a browser — a popular combination is Karma and PhantomJS. These offer a lot of benefits like being able to test against real browsers, run UI tests, take screenshots, and the ability to be run as part of a CI process.

However, they typically come with a fairly large overhead, so running lots of small tests can take minutes at a time. Wouldn’t it be great if there was a JavaScript implementation of the DOM?

Welcome to the stage: jsdom!

jsdom

jsdom is described as “A JavaScript implementation of the WHATWG DOM and HTML standards, for use with Node.js”.

It supports the DOM, HTML, canvas, and many other web platform APIs, making it ideal for our requirements.

Because it’s purely JavaScript, jsdom has very little overhead when creating a new document instance which means that tests run quickly.

There is a downside to using a JavaScript implementation over an actual browser – you are putting your trust in the standards being implemented and tested correctly, and any inconsistencies between browsers will not be detected. This is a deal breaker for some, but for the purposes of unit testing I think it is a reasonable risk to take; jsdom has been around since early 2010, is actively maintained, and thoroughly tested. If you are looking to write UI tests then a combination of something like Karma and PhantomJS may be a better fit for you.

Integrating jsdom

Setting up jsdom can be a daunting task; the documentation is great, but it is lengthy and goes into a lot of detail (you should still read it!). Luckily, a package called browser-env can help us out.

Add browser-env to the project: npm install browser-env --save-dev.

Create a helpers directory (which is ignored by convention when using AVA) inside test, then add a file named setup-browser-env.js.
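The file simply requires browser-env and calls it, which injects the browser globals (window, document, and so on) into the Node environment:

```js
// test/helpers/setup-browser-env.js
var browserEnv = require('browser-env');
browserEnv();
```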

We need to tell AVA to require this module before any of the tests are run so that browser-env can create the full browser environment before any DOM references are encountered. Inside your package.json add
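Something along these lines:

```json
"ava": {
  "require": ["./test/helpers/setup-browser-env.js"]
}
```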

Note: You may have noticed that this file is written in ES5. This is because AVA will transpile ES2015 code in the tests, yet it won’t transpile any modules imported or, in this case, required from outside the tests — see the transpiling source files section.

Testing the DOM

Let’s write a test which makes use of the document global which has been provided thanks to jsdom. Add a new test to the end of demo.test.js:
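A sketch of such a test, matching the description that follows:

```js
test('paragraph contains the expected text', t => {
    // Create a paragraph element and add it to the document body
    const paragraph = document.createElement('p');
    paragraph.innerHTML = 'Hello, world';
    document.body.appendChild(paragraph);

    // Query for the element and verify its contents
    t.is(document.querySelector('p').innerHTML, 'Hello, world');
});
```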

First, we add a paragraph element with some text to the document body, then query for that element using document.querySelector, and finally, we verify that the selected paragraph tag has an innerHTML value equal to 'Hello, world'.

Run the tests with npm test.

Congratulations, you’ve just unit-tested the (virtual) DOM!

Test coverage with nyc

As a bonus let’s quickly set up some test coverage. Because AVA runs each test file in a separate Node.js process, we need a code coverage tool which supports this. nyc ticks the box — it’s basically istanbul with support for subprocesses.

Add it to the project with npm install nyc --save-dev, then update the test npm script by adding nyc before the call to ava:
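The test script then becomes something like:

```json
"scripts": {
  "precompile": "babel src --out-dir dist",
  "test": "npm run precompile && nyc ava --verbose"
}
```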

You’ll also need to update the Babel config to tell it to include source maps when developing so that the reporter can output the correct lines for the transpiled code:
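One way to do this with Babel 6 (a sketch, assuming BABEL_ENV/NODE_ENV are unset locally so Babel falls back to its development env) is to enable inline source maps under the env section:

```json
"babel": {
  "presets": ["es2015"],
  "env": {
    "development": {
      "sourceMaps": "inline"
    }
  }
}
```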

Run the tests and witness the awesome code coverage table!

What next?

If you’re interested in what else you can do with AVA, have a look through the AVA readme, check out the AVA recipe docs, read about common pitfalls, and listen to this JavaScript Air podcast episode. I’d also recommend looking into setting up linting for your code.

You can browse the source code for this blog post on GitHub.

So, now you have no excuse for not testing your front-end JavaScript!


Damian Mullins is a UI Engineer at Just Eat. Progressive enhancement advocate, web standards supporter, JavaScript enthusiast.


Calabash Page Objects

Faster development of Calabash tests

While creating the page object classes in our Calabash mobile test suites at JUST EAT, we found ourselves repeating a lot of actions when waiting for, scrolling to, and interacting with elements on the screen. We abstracted these actions into a library to avoid this unnecessary duplication of code and made them agnostic to screen size. This library has now been published as a Ruby gem called calabash-page-objects.

Why use this?

Dealing with small screens

Sometimes you have to scroll to elements on small screens but not on larger screens. Initially we handled this with if-statements around a ‘small screen’ environment variable inside our test code – not good!

We wrote a method to scroll to an element if it wasn’t immediately visible. This method was then included in many of the methods available to our elements: touching them, inputting text, asserting presence, and so on.

Multiple scrollable views

When attempting to use Calabash’s default scroll method, we noticed that sometimes it didn’t appear to scroll the view we wanted if there were multiple scrollable views on the screen.

After looking into the Calabash methods, we noticed that you could perform scroll actions on locators. We wrapped this method up in the gem too, so that we could pass both the element we’re searching for and the view it belongs to into all the helper methods. This became the ‘parent’ parameter that the gem methods can optionally take.

How to use?

The calabash-page-objects gem exposes two element classes, one for iOS and the other for Android. They are used in the same way regardless of the platform under test. These element classes have methods for waiting for an element to be visible, waiting for it to disappear, touching it, and so on. These methods all take parameters in a consistent format.


More information

See the project on GitHub for more information and a more detailed description of the methods and parameters. Feel free to fork it and contribute too.


Acceptance Testing in Appium

Why use acceptance testing?

A well-tested application is a mark of quality and helps ensure the app works well across our users’ devices (iOS and Android devices in our case). To get the most out of our testing efforts and test coverage, a robust, cross-platform tool is required to automate the application. Appium is a pretty good choice which fits the bill.

Acceptance testing gives us confidence in two main areas: that new features have been built to the correct specification, and that existing features continue to function after new integrations.
It’s notoriously difficult to prove that developers have built the feature they were asked for. One reason for this is that the use of natural language to describe feature specifications can result in ambiguities and misunderstandings between developers and stakeholders.


One approach being undertaken by a number of teams in International Engineering at JUST EAT is to produce executable specifications.

This has numerous benefits in terms of understanding the requirements of a project through user stories but, in particular, it gives us specific steps that can be tested to verify a feature has been implemented correctly. These sets of steps are called scenarios, and capture the flow of a specific task that a user wants to perform.

The following are some examples from our own specifications…

Scenario Outline: Make a successful order – Login to JUST EAT through Facebook
  Given I am on the home screen
  And I search for a restaurant
  And I add items to basket
  And I login through <social> account <email> and <password>
  When I checkout with cash payment
  Then the order should be placed successfully

  Examples:
    | social   | email         | password |
    | facebook | test@test.com | test     |

Scenario: View order details on Order History screen
  Given I am on the home screen
  And I have previously ordered from this device
  And I navigate to order history
  When I select an order
    | Test Restaurant |
  Then I am shown the details of that order

These specifications are written in Gherkin syntax, which is supported by the Cucumber suite of test tools. The specific tools we chose to use are outlined in the next section.

Summary of technology


The iOS Webkit Debug Proxy allows access to webviews and their content. It provides an interface to the various components of a web page within a webview using the same contract as that used for the native application. For the JUST EAT app we use it to facilitate automation of the checkout flow so that we can enter delivery details, make an order and read the details of the order confirmation page.

As part of its initialisation, the Appium process launches the iOS Simulator. Appium communicates with the simulator using Apple’s UI Automation layer, which provides JavaScript APIs to access the UI of the application at runtime. Appium then wraps this interface in another API that conforms to the JSON Wire Protocol. The benefit of this abstraction is that it standardises UI access across platforms, meaning the same tests can be run on a variety of platforms.

While the Appium server provides the interface to access the UI of the running application, Cucumber JS is used to execute our scenarios defined in Gherkin syntax. The code that backs these steps contains the procedures to send commands to Appium.

Node.js underlies most of the technologies listed above. It implements a non-blocking I/O model and an event loop that uses a single thread and performs I/O asynchronously. Mocha, Chai, and Chai-as-Promised were among the other modules used to provide additional testing support.

Page object model

 

Since we ship the app on both iOS and Android, we created a single test suite that runs the same set of tests on both platforms, avoiding duplicated test code and saving time. The page object model helped us achieve this.

Page object models each represent a screen or group of controls in the user interface. For example, the home screen of the iOS app can be represented by a page object that provides a text field and a search button as its interactive elements. These elements can be located by ID, XPath, class name, or name, and stored as properties on the model to be used during the test.

This adds a level of abstraction and the tests use an instance of the page object to interact with the UI elements rather than worrying about the element locators in the test itself. The abstraction is useful because the model is tightly coupled to the user’s perception of the user interface, and so is more appropriate for acceptance testing.

Another benefit is that we can add elements to a given model over time, as needed by the tests, instead of including all the elements on the screen that the tests do not interact with.
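To make this concrete, here is a minimal, hypothetical sketch of what an iOS home screen page object could look like in a Cucumber JS suite. The locator values and the driver helper methods are illustrative assumptions, not the actual project code:

```js
// PageObjects/iOS/HomeScreen.js (hypothetical illustration, not the real project file)
'use strict';

function HomeScreen(driver) {
    this.driver = driver;

    // Interactive elements stored as locator properties (strategies/values are illustrative)
    this.elements = {
        searchField:  { using: 'accessibility id', value: 'postcode-search' },
        searchButton: { using: 'accessibility id', value: 'search-button' }
    };
}

// Tests call these methods instead of dealing with raw locators themselves
HomeScreen.prototype.searchForRestaurants = function (postcode) {
    var self = this;
    return self.driver.typeInto(self.elements.searchField, postcode) // assumed driver helper
        .then(function () {
            return self.driver.tap(self.elements.searchButton);      // assumed driver helper
        });
};

module.exports = HomeScreen;
```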

The page objects are organised into platform-specific folders within the test project, for example:

PageObjects/Android/HomeScreen.js

PageObjects/iOS/HomeScreen.js

We direct the tests to use the page objects from the right folder, based on the platform we are running the tests on.

A potential issue with a cross-platform test suite is that you may have non-uniform UI designs across platforms. Luckily, our UX designers and developers make sure the flow is uniform across iOS and Android, meaning that we don’t have to add lots of “if(platform)” statements in the test code.

Continuous integration

A major use of automated acceptance tests is verifying the validity of new and existing features during automated build processes. To this end, we created a new script that runs the tests on TeamCity.

The script itself takes a number of arguments to allow configuration of the test environment for different build jobs:

 

  • Platform Name specifies which platform it runs on, i.e. iOS or Android.
  • Device Name specifies the type of device to run on, e.g. iPhone 6 Plus, Samsung Galaxy S4.
  • Platform Version allows a particular SDK to be targeted, e.g. iOS 7.1, Android 4.0.4.
  • App Path specifies a path to the app executable under test.
  • Environment is a custom option we introduced to allow the selection of a particular QA environment, e.g. QA19, QA99, Staging.
  • Country Code lets the tests know which flavour of the app is under test.
  • Cucumber Flags allows some additional configuration to be passed to the grunt task runner.

To integrate it with TeamCity we took the following steps…

  1. We created a new build template and added a number of configurable parameters for the script arguments.
  2. We added build features to allow consumption of the JUnit report and archiving of the HTML report.

  3. We added an artifact dependency to ensure that the most recent valid build would always be under test.


Reporting

Reporting test results is an important step in continuous integration as it allows tracking of test results over time, as well as providing a quick insight for interested parties without requiring them to read test metadata files such as XML and JSON.

We use a selection of JavaScript tools to produce and convert the output of Cucumber JS. Grunt provides the framework in which to execute Cucumber JS and consume the test output, through a simple task runner with various configurable settings.

The JSON output produced is simple and readable but not necessarily compatible with continuous integration reporting tools. To this end we use protractor-cucumber-junit which converts the JSON into two formats:

  • HTML provides a simple and readable page that any non-technical user can access for a quick overview of test results.
  • JUnit XML is almost universally consumable by CI tools allowing native presentation of results in your CI front-end of choice, as well as enabling CI tools to track trends in testing over time, code coverage, and so on.


Simulation across platforms

We use the iOS simulators included as part of Xcode, which means the tests can only be run from a Mac. Xcode can run only one instance of Instruments at a time, so we cannot parallelise the iOS test run.

Genymotion is used for the Android emulators, as it is more promising than the emulator that comes with the Android SDK. Andrew Barnett has written an excellent post on how to use Android emulators on CI: //tech.just-eat.com/2015/02/26/using-android-emulators-on-ci/

Problems and challenges

We’re using simulators and emulators to run our acceptance tests, as they give us more coverage in terms of devices and an easy way to test small changes. Nevertheless, not all types of user interaction can be tested on emulators, and the iOS simulators in particular have problems with certain Appium gestures.

Furthermore, during test execution there are no real events such as battery drain or interrupts, so we do not get accurate results regarding the performance of the app. Since we are in the initial stages of automating the apps, we consider simulators and emulators a good stop-gap while experimenting with continuous integration.

In the future it would be desirable to use a cloud-hosted service such as SauceLabs. This would allow tests to be run across many devices and OS versions in parallel.


How to take screenshots for failing tests with KIF

In the Consumer iOS app team we use KIF to test our application. KIF (Keep It Functional) is an open-source tool for iOS automation from Square. More details here: //github.com/kif-framework/KIF

Taking screenshots for failing tests is a good way to understand why the test is failing.

Most test automation tools come with some way of taking screenshots. To enable screenshots for failing tests in KIF, we need to set an environment variable KIF_SCREENSHOTS='location', where location is the path which tells KIF where to save the screenshots.

To make sure that screenshots are saved for all failing tests, irrespective of the machine they run on, ‘location’ should be relative. I used the location ‘$PROJECT_DIR/Failed_Screenshots’.

$PROJECT_DIR is an Xcode build setting which stores the value of the Xcode project directory. This will already be set to where the project is on your system. To see the value of $PROJECT_DIR, go to the terminal, navigate to where the Xcode project is, and type ‘xcodebuild -showBuildSettings | grep PROJECT_DIR’.

KIF creates the Failed_Screenshots folder automatically whenever a test fails, and saves screenshots for all failing tests.

Below are the steps to enable screenshots from Xcode:

How to enable screenshots using Xcode

  • Select the test scheme in Xcode
  • Select Edit Scheme
  • Select Run and set an environment variable KIF_SCREENSHOTS='location_to_save_screenshots'
  • Select Close
  • Write a failing test

The sample test I created to demonstrate how KIF can take screenshots looks for an accessibility label which does not exist, so it always fails.

  • Run the test using Xcode or xcodebuild on the command line
  • Check the screenshots in the failed screenshots folder, which you set as the value of the KIF_SCREENSHOTS variable. Each screenshot is saved with the same name as the test class and includes the line number of the failing step. For example, in ‘JETESTScreenshots.m, line 17.png’, ‘JETESTScreenshots.m’ is the test class and line 17 is the line number of the failing step.
  • Give the path to the ‘Failed_Screenshots’ folder in the artifact paths field of the project in your CI system, to save the screenshots as artifacts for every failing build job.

Thanks for reading.

Preet


Tests that rely on Data

With our day-to-day test automation, we try to avoid dependencies as much as possible. The majority of tests are executed against a set of data that already exists in the QA environment database. However, there are situations when we need to add or edit data in order to carry out end-to-end testing. When we do this, we need to make sure we don’t affect existing data which is used by other tests.

Cucumber with ActiveRecord

Since we have a limited number of tests that depend on a unique set of data, we use a standalone Ruby script to deal with the database without creating a model. We create a connection to the QA database using ActiveRecord and use this connection to perform queries and validate data. Here is an example test scenario where we altered data values via ActiveRecord…

Here is how we make the DB connection with ActiveRecord in the helper file. It also notifies us whether the DB connection succeeded or failed.

The scenario validates certain error messages shown when a product’s price has changed, or a product has been removed from the menu, after the user orders something they have ordered before. Since we need to delete an item or edit an item’s price for a particular menu, we had to add a new item to the DB every time we ran this test, to ensure we were not affecting existing data used by other tests. The following example describes creating a new product and adding it to certain menus. The first task is to create the new item. We had to make sure every new item was created with a unique name. When the item is created, it is given a generated product ID, which we then needed to extract in order to add it to a menu.

First, we assign the SQL insert query that needs to be executed to a variable.

Afterwards, we pass the assigned variable along with the DB connection. In this scenario we use the execute method, since we don’t expect any rows to be returned.

Then we use the select_value method to return a single value (the product ID) from a record, in order to add the new item to the respective menu.

When updating a certain value related to a product (e.g. price of the product), we pass the update SQL query to the execute method, like we did in the insertion step.

In each method we close the DB connection we created at the beginning, because we don’t want to leave it open.

The changes that we make within the test need to be reset at the end of the test. Here’s an example where we delete the item and all references to it…

We add an exit tag to the tests that require data to be reset at the end of execution. This tag ensures that the item is deleted from the DB even if the test fails in the middle of execution. This is how we defined the exit tag in the env.rb file.

DB changes via API

Since the components of the JUST EAT site communicate with various APIs, any data updated in the DB needs an API refresh in order to be visible in the front-end.

For example, when we are updating an existing item price via the DB, we need to clear the cache of the menu API in order to see the price change in the menu (in the front-end) during the test execution.

Summary

This shows how data-intensive testing an application can be. In this instance in particular, the tests are completely data-dependent, and getting the data right first is an absolute must. As the example shows, both the data model approach and direct SQL query execution fit the purpose of retrieving or altering data.

 

Thanks for reading.

~ Deepthi