
Category: Testing


Xamarin 101, S01E01 – UI Tests

Xamarin 101 is a new series that we hope will prepare you and your team to use Xamarin in production. Each episode will focus on one particular topic in Xamarin development.

Subscribe to our meetup page to receive notifications about all our future events, including the next Xamarin 101 episodes. We’ll also give you free pizza and drinks whilst you learn, so it’s a win-win: //www.meetup.com/London-Mobile-Dev.

Our first episode was about UI Tests; the full presentation is now available on our YouTube channel!

Two of our JUST EAT Xamarin Engineers also spoke at the event.

Xamarin UITest and Xamarin Test Cloud by Gavin Bryan

This talk covered Xamarin’s Automation Test library, which allows you to create, deploy and run automation tests on mobile devices, simulators and emulators. The library is based on Calabash and allows you to write automation tests in C# using NUnit in a cross-platform manner so that tests can be shared across different platforms if required.
The library is very rich in functionality, allowing quite involved and complex automation tests to be written. The talk gave an overview of basic test automation and the tools available for creating and running tests. The automation tests can be run locally, in CI environments, and in Xamarin Test Cloud (XTC). XTC is an on-demand, cloud-based service with over 2,000 real mobile devices available to run your automation tests on.
We showed the options available for running automation tests on a variety of devices in XTC, along with the analysis and reporting it provides.

 

Presentation Assets
Slides – goo.gl/TXTVzI
Demo – goo.gl/QiKprk

____

BDD in Xamarin with Specflow and Xamarin UITest by Emanuel Amiguinho

Following Gavin’s presentation, it was time to bring BDD to Xamarin development, using SpecFlow to bridge the gap between Gherkin feature/step definitions and the Xamarin.UITest framework. The goal is the best UI test coverage possible, along with documentation that everyone on your team, technical and non-technical, can understand.

Presentation Assets
Slides – goo.gl/ITWen8
Demo – goo.gl/7BDpfp

____

Our next topic is databases, and we are currently looking for speakers who have experience with any type of local database in their development (SQLite, DocumentDB, Realm, etc.). If you are interested, please send an email outlining which database you want to talk about and your availability to:
emanuel.amiguinho@just-eat.com or nathan.lecoanet@just-eat.com


Unit testing front-end JavaScript with AVA and jsdom

Writing tests for JavaScript code that interacts with the DOM can be tricky. Luckily, using a combination of AVA and jsdom, writing those tests becomes a lot easier.

This article will walk you through how to set everything up so you can get started writing your tests today.

What is AVA?

AVA is described as a “Futuristic JavaScript test runner”. Sounds fancy, huh?! So, what is it exactly that makes it “futuristic”?!

Tests run quickly

AVA runs test files in parallel, each in its own separate process, with the tests inside those files running concurrently. This offers better performance than other test runners that run tests serially, such as Mocha. This also means that each test file is run in an isolated environment — great for writing atomic tests.

Simple API

AVA’s API is very small because, in AVA’s own words, it is “highly opinionated”. You won’t find any assertion aliases here! This reduces the cognitive load required when writing tests.

Write tests in ES2015

You don’t need to do anything special to write tests in ES2015; AVA supports this out of the box! Under the covers it’s using Babel to transpile with the es2015 and stage-2 presets.

No implicit globals

AVA has no implicit globals; simply import it into your test file and you have everything you need.

Other benefits

There are a whole host of other benefits which AVA offers such as:

  • Promise support
  • Generator function support
  • Async function support
  • Observable support
  • Enhanced assertion messages
  • Clean stack traces

All of this combined sounds very “futuristic” to me!

Getting off the launchpad with AVA

Now that we know more about AVA, let’s create a new project and start writing some tests.

Start by running npm init inside a new project folder. This will create a package.json file, which will contain various pieces of information about the project such as its name, authors, and dependencies, among others. Hitting enter for each question will fill in a default value.

Installing AVA

Add AVA to the project by typing npm install ava --save-dev, then update the scripts section in package.json:
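As a sketch, the scripts section might end up looking something like this (only the relevant part of package.json is shown):

```json
{
  "scripts": {
    "test": "ava --verbose"
  }
}
```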

The --verbose flag enables the verbose reporter, which means more information is displayed when the tests are run.

When using npm scripts, the path to AVA in the node_modules folder will be resolved for us, so all we need to do is type npm test on the command line. Doing so at the moment will give us an exception.

Let’s fix that by adding a test.

Writing a test

Create a test directory, with a file named demo.test.js inside, then add a test:
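A minimal first test, matching the assertion described below, might look like this (the test title is illustrative):

```js
import test from 'ava';

// A trivial assertion to check the setup works
test('one plus one equals two', t => {
  t.is(1 + 1, 2);
});
```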

First, AVA is imported into the module, then the test function is called, passing a string as the first parameter to describe what the test is doing. The second parameter is the test implementation function, which contains the body of the test; it provides us with an object, t, from which we can call the assertion functions.

The is assertion is used here, which takes two values and checks that they are both equal (using === so there is no type conversion).

Note: You can choose any name you like for the t parameter, such as assert. However, using the t convention in AVA will wrap the assertions with power-assert which provides more descriptive messages.

Run npm test and the test result will be printed out.

Success! Our test passed as expected. To see an example of what a failing test would look like, change the test assertion to t.is(1 + 1, 1). Run the test now and you’ll see an error.

As you can see, there is a lot of useful information provided in order to help us track down the issue.

Testing modules

To demonstrate how to test a module, create a new folder called src in the root of the project, containing a file called demo-module.js.
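The original module isn’t reproduced here, but any small ES2015 module will do; the double function below is purely illustrative:

```js
// src/demo-module.js — uses ES2015 export syntax,
// which Node can't run without transpilation
export default function double(n) {
  return n * 2;
}
```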

Update demo.test.js by first importing the module, then adding a new test:
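Continuing with the hypothetical double module from above, the updated test file might look like this:

```js
import test from 'ava';
import double from '../src/demo-module';

test('one plus one equals two', t => {
  t.is(1 + 1, 2);
});

// Exercises the imported ES2015 module
test('double returns twice its input', t => {
  t.is(double(5), 10);
});
```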

Running npm test now will give you an exception.

Uh oh, what happened?

AVA will transpile ES2015 code in your tests; however, it won’t transpile code in modules imported from outside those tests. This is so that AVA has zero impact on your production environment.

If our source modules are written in ES2015, how do we tell AVA that we’d like them to be transpiled too?

Transpiling source files

To transpile source files, the quick and dirty option is to tell AVA to load babel-register, which will automatically transpile the source files on the fly. This is OK if you have a small number of test files, but there is a performance cost that comes with loading babel-register in every forked process.

The other option is to transpile your sources before running the tests in order to improve performance.

The next two sections look at how each technique can be achieved.

Transpile with babel-register

Add babel-register by running npm install babel-register --save-dev, then add a "babel" config to package.json.
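A sketch of that config, assuming the es2015 preset is installed (npm install babel-preset-es2015 --save-dev if it isn’t already):

```json
{
  "babel": {
    "presets": ["es2015"]
  }
}
```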

Next, add "babel-register" to the AVA "require" section.
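The relevant part of package.json would then look something like this:

```json
{
  "ava": {
    "require": ["babel-register"]
  }
}
```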

Run npm test and the tests will once again pass, great!

The recommendation from the AVA team is to use babel-register “until the performance penalty becomes too great”. As your test base grows you’ll need to look into setting up a precompilation step.

Setting up a precompilation step

A precompilation step will transpile your source modules before the tests are run in order to improve performance. Let’s look at one way to set this up.

Note: If you were following along with the last section you’ll need to remove the references to babel-register. First run npm uninstall babel-register --save-dev, then remove "babel-register" from the AVA "require" section in package.json.

Start by adding the babel-cli and babel-preset-es2015 packages to the project: npm install babel-cli babel-preset-es2015 --save-dev.

Next, add a "babel" config to package.json.
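This is the same shape of config as in the babel-register section; only the relevant part of package.json is shown:

```json
{
  "babel": {
    "presets": ["es2015"]
  }
}
```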

In order to run the tests, we need to update the npm scripts. Add a new npm script called precompile.
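Something along these lines should work; the dist output directory is assumed here to match the rest of the article:

```json
{
  "scripts": {
    "precompile": "babel src --out-dir dist"
  }
}
```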

The precompile npm script will tell Babel to take the files in the src directory, transpile them, then output the results to the dist directory.

Next, the test npm script needs to be updated so that it runs the precompile step before running the tests.
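A sketch of the updated scripts section:

```json
{
  "scripts": {
    "precompile": "babel src --out-dir dist",
    "test": "npm run precompile && ava --verbose"
  }
}
```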

The double ampersand (&&) tells npm to first run the precompile script and then the AVA tests.

The final task is to update the reference to demo-module inside demo.test.js to point at the compiled code. We do this by replacing ../src with ../dist:
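Sticking with the hypothetical double module, the import becomes:

```js
// demo.test.js — import the transpiled output rather than the ES2015 source
import double from '../dist/demo-module';
```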

Run npm test and we’re presented with all green tests!

Testing the DOM using Node

So far we have the ability to test JavaScript code, but what if we’d like to test a function which makes use of the DOM? Node doesn’t have a DOM tree, so how do we get around this?

One option is to use a combination of a test runner and a browser — a popular combination is Karma and PhantomJS. These offer a lot of benefits, such as testing against real browsers, running UI tests, taking screenshots, and being able to run as part of a CI process.

However, they typically come with a fairly large overhead, so running lots of small tests can take minutes at a time. Wouldn’t it be great if there was a JavaScript implementation of the DOM?

Welcome to the stage: jsdom!

jsdom

jsdom is described as “A JavaScript implementation of the WHATWG DOM and HTML standards, for use with Node.js”.

It supports the DOM, HTML, canvas, and many other web platform APIs, making it ideal for our requirements.

Because it’s purely JavaScript, jsdom has very little overhead when creating a new document instance which means that tests run quickly.

There is a downside to using a JavaScript implementation over an actual browser – you are putting your trust in the standards being implemented and tested correctly, and any inconsistencies between browsers will not be detected. This is a deal breaker for some, but for the purposes of unit testing I think it is a reasonable risk to take; jsdom has been around since early 2010, is actively maintained, and thoroughly tested. If you are looking to write UI tests then a combination of something like Karma and PhantomJS may be a better fit for you.

Integrating jsdom

Setting up jsdom can be a daunting task; the documentation is great, but it is very lengthy and goes into a lot of detail (you should still read it!). Luckily, a package called browser-env can help us out.

Add browser-env to the project with npm install browser-env --save-dev.

Create a helpers directory (which is ignored by convention when using AVA) inside test, then add setup-browser-env.js.
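browser-env exposes a single function that creates the browser globals; calling it with no arguments is enough here:

```js
// test/helpers/setup-browser-env.js
// Written in ES5, because AVA won't transpile required helper modules
require('browser-env')();
```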

We need to tell AVA to require this module before any of the tests are run so that browser-env can create the full browser environment before any DOM references are encountered. Inside your package.json, add it to the AVA "require" section.
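Something like this, alongside any existing entries:

```json
{
  "ava": {
    "require": ["./test/helpers/setup-browser-env.js"]
  }
}
```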

Note: You may have noticed that this file is written in ES5. This is because AVA will transpile ES2015 code in the tests, yet it won’t transpile any modules imported or, in this case, required from outside the tests — see the transpiling source files section.

Testing the DOM

Let’s write a test which makes use of the document global which has been provided thanks to jsdom. Add a new test to the end of demo.test.js:
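Here is a sketch of such a test; the element and text are illustrative, following the steps described in the next paragraph:

```js
test('paragraph contains expected text', t => {
  // Add a paragraph element with some text to the document body
  const paragraph = document.createElement('p');
  paragraph.innerHTML = 'Hello, world';
  document.body.appendChild(paragraph);

  // Query for the element, then verify its contents
  const selected = document.querySelector('p');
  t.is(selected.innerHTML, 'Hello, world');
});
```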

First, we add a paragraph element with some text to the document body, then query for that element using document.querySelector, and finally, we verify that the selected paragraph tag has an innerHTML value equal to 'Hello, world'.

Run the tests with npm test.

Congratulations, you’ve just unit-tested the (virtual) DOM!

Test coverage with nyc

As a bonus let’s quickly set up some test coverage. Because AVA runs each test file in a separate Node.js process, we need a code coverage tool which supports this. nyc ticks the box — it’s basically istanbul with support for subprocesses.

Add it to the project with npm install nyc --save-dev, then update the test npm script by adding nyc before the call to ava:
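A sketch of the updated scripts section, with nyc in front of the ava call:

```json
{
  "scripts": {
    "precompile": "babel src --out-dir dist",
    "test": "npm run precompile && nyc ava --verbose"
  }
}
```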

You’ll also need to update the Babel config to tell it to include source maps when developing so that the reporter can output the correct lines for the transpiled code:
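One way to do this is with inline source maps in the Babel config; the exact configuration may differ from the original post:

```json
{
  "babel": {
    "presets": ["es2015"],
    "sourceMaps": "inline"
  }
}
```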

Run the tests and witness the awesome code coverage table!

What next?

If you’re interested in what else you can do with AVA, have a look through the AVA readme, check out the AVA recipe docs, read about common pitfalls, and listen to this JavaScript Air podcast episode. I’d also recommend looking into setting up linting for your code.

You can browse the source code for this blog post on GitHub.

So, now you have no excuse for not testing your front-end JavaScript!


Damian Mullins is a UI Engineer at Just Eat. Progressive enhancement advocate, web standards supporter, JavaScript enthusiast.


Why fewer End-to-End Tests?

At Just Eat, we run a lot of end-to-end (e2e) tests that execute continuously every time we change something on our website. It goes without saying that these tests give you more confidence in the product’s quality, since they cover more use cases from the user’s perspective. However, time is crucial in the software development life cycle: the longer you wait for feedback, the slower the development process becomes.

The issues we face when having more e2e website tests are…
  • They require a lot of knowledge, skill and time to write well.
  • They can take a long time to execute, which means feedback is slow.
  • They can be very brittle, because they rely on a UI that changes often.
  • Testing negative paths can be complex when integrating with APIs or databases.
  • They can be fragile, since they normally rely on external dependencies and the environment.
  • And when an e2e test fails… it can be like finding a needle in a haystack.

 

So how do we overcome these issues? Well, we need to make sure we are writing the right amount of tests at the right level… and by level I mean unit, acceptance and end-to-end.

Unit tests exercise a small unit/component in isolation; they are fast, often very reliable and normally make it easy to find the bug when they fail. The main disadvantage of a unit test is that even if the unit/component works well in isolation, we do not know whether it works well with the rest of the system. In other words, we don’t test the communication between components. For that we need integration tests, and these should focus on the contracts between components and how they behave when the contract is met and when it is not. Despite the issues listed above, e2e tests are still the only tests that simulate real user scenarios (e.g. placing an order or leaving a review), so it’s important to have the right balance of all these test types.

The best visual indication of this is the Agile Testing Pyramid (see below).

 

[Image: the Agile Testing Pyramid]

According to the pyramid, the best combination is roughly 70% unit tests, 20% integration/acceptance tests and only 10% end-to-end tests.

In-memory acceptance tests

We allow each developer to run all of the integration/acceptance-level tests locally. To achieve this we’ve developed a framework to minimise the issues normally encountered. The framework supports real tests via a browser by hosting the website in memory and mocking out all of the endpoint calls, so it’s not dependent on the actual QA environment.

[Diagram: the in-memory test framework]

FiddlerCore – the base component used by Fiddler; it allows you to capture and modify HTTP and HTTPS traffic, and we use it to inject a proxy.

Coypu/Selenium – Coypu is a wrapper for browser automation tools on .NET, such as Selenium WebDriver (similar to Capybara in the Ruby world).

IIS Express – the built-in IIS server that ships with Visual Studio, which we use here to host the website.

When you trigger an in-memory test, the Selenium driver, via the Coypu wrapper, communicates with the browser, whilst FiddlerCore injects the proxy into the request via a header.

The browser accesses the website hosted in IIS Express, exercising the server-side code, which attempts to communicate with various endpoints (APIs). Through FiddlerCore we can listen for the API calls being made, inject a proxy and mock the API responses, so we can test the presentation layer in isolation. You can mock scenarios where the API fails or returns unexpected data and test how this affects the user journey – in most cases you can show the user a minimal set of functionality instead of an error page.

In some cases, e.g. authentication, you can inject an in-memory implementation of the identity server that hijacks authentication requests and issues tokens your application trusts. Ideally, a developer should be able to run most of the tests, including all unit tests and a large subset of the acceptance test suite, without being connected to a network.

The benefits of this framework are…
  • Faster and more reliable than e2e tests.
  • Can also be used to write tests that simulate real user scenarios.
  • Can also be used to write integration/API tests.
  • Provides the ability to write scenarios where functionality degrades gracefully.
  • No dependency on the environment or QA.
  • Earlier feedback in the development cycle.
  • They can be run locally.
  • Works well with continuous integration.
  • Motivates developers to write their own e2e and integration tests, and helps them think about and simplify the architecture with simpler dependencies.

Summary

Having more e2e tests doesn’t automatically translate into faster delivery; what matters is having the right balance of tests and a good test automation strategy. It goes without saying that fewer e2e tests will save you time, and you can spend that time on exploratory testing instead. The right set of tests also allows you to evolve your architecture and refactor your code in order to continuously improve.

As a footnote, I would like to mention Rajpal Wilkhu, who architected this framework and helped us to develop it.

 

Thanks for reading …

Deepthi Lansakara


Calabash Page Objects

Faster development of Calabash tests

While creating the page object classes in our Calabash mobile test suites at JUST EAT, we found ourselves repeating a lot of actions when waiting for, scrolling to and interacting with elements on the screen. We abstracted these actions into a library to avoid this unnecessary duplication of code and made them agnostic to screen size. This library has now been published as a Ruby gem called calabash-page-objects.

Why use this?

Dealing with small screens

Sometimes you have to scroll to elements on small screens but not on larger screens. We initially used if-statements dealing with an environment variable for ‘small screen’ inside our test code – not good!

We wrote a method to scroll to an element if it wasn’t immediately visible. This method was then included in many of the methods available to our elements: touching, inputting text, asserting presence, etc.

Multiple scrollable views

When attempting to use Calabash’s default scroll method, we noticed that sometimes it didn’t appear to scroll the view we wanted if there were multiple scrollable views on the screen.

After looking into the Calabash methods, we noticed that you could perform scroll actions on locators. We wrapped this method up in the gem too, so that we could pass both the element we’re searching for and the view it belongs to into all the helper methods. This became the optional ‘parent’ parameter that the gem methods can take.

How to use?

The calabash-page-objects gem exposes two element classes, one for iOS and one for Android. They are used in the same way regardless of the platform under test. These element classes have methods for waiting for an element to be visible, waiting for it to disappear, touching it, and so on. These methods all take parameters in a consistent format.


More information

See the project on GitHub for more information and a more detailed description of the methods and parameters. Feel free to fork it and contribute too.


In Memory DynamoDb

Problem

Here at JUST EAT we use DynamoDb in a lot of our components. The DynamoDb API can be awkward and slow to work with at times, and this has sometimes led to a choice between having complicated tests or sacrificing coverage.

Usually our integration tests are run against a QA AWS account that mirrors production. This gives us confidence that our changes are good to push live, but can be slow to iterate on when designing new features. To speed up our cycle time, we wanted a way to decouple our integration tests from AWS without losing confidence in our code.

Solution

Amazon provides a Java app that can be used to run a local version of DynamoDb. We used this to develop a new feature, and it was the fastest we’ve ever worked with DynamoDb. We decided to wrap it up in a NuGet package for use in other projects, and when that succeeded just as well we decided to open source it.

The NuGet package contains a wrapper for this jar file to start and stop it, and provides a DynamoDb client that is configured to use the local instance of Dynamo.

This eliminated the need to mock out the complicated DynamoDb API in our tests without sacrificing speed or coverage. It also removed a lot of lines from our tests.

Local Dynamo Test Example

Caveats

There are a couple of potential pitfalls with our approach.

AWS has licensed the DynamoDb jar file in such a way that we couldn’t include it in the NuGet package. This means that, as part of setup, you need to download the file and its dependencies from Amazon and include them in your solution. This has the downside that the version you are testing against can get out of sync with the version that is running in AWS.

Our QA environments mirror our production security groups, therefore any tests passing in QA give us confidence that our feature will work as intended in production. The local version of DynamoDb provided by Amazon has no concept of security groups, meaning that this aspect of our testing would no longer be covered. To ensure security groups continued to be tested adequately, we implemented a dependency check endpoint in our API that executes a read and write against Dynamo and returns an HTTP response giving success or failure information. This check is run post-deploy and has to pass before a deployment is considered successful.

Dependency Check Example

 

The project is available on NuGet at nuget.org/packages/LocalDynamoDb and on GitHub at github.com/justeat/LocalDynamoDb.

Alan Nichols, Steve Brazier