

Bringing Apple Pay to the web

Using Apple Pay on the web from just-eat.co.uk



Back in June at WWDC, Apple announced that Apple Pay was expanding its reach. No longer just for apps and Wallet on Touch ID-compatible iOS devices and the Apple Watch, it would also be coming to Safari in iOS 10 and macOS Sierra in September 2016.

Just Eat was a launch partner when Apple Pay arrived in the UK in 2015, supporting it in our iOS app. We wanted to again be one of the first websites to support Apple Pay on the web by making it available within just-eat.co.uk. Our mission is to make food discovery exciting for everyone – and supporting Apple Pay for payment will make your experience even more dynamic and friction-free.

Alberto from our iOS team wrote a post about how we introduced Apple Pay into our iOS app last year, and this post follows on from that journey with a write-up of how we went about making Apple Pay available on our website to iOS and macOS users with the new Apple Pay JS SDK.

Getting set up

In the iOS world, thanks to the App Store review process and signed entitlements, once your app is in users’ hands you can just use PassKit and get coding to accept payments. For the web, things are a little different.

Due to the more loosely-coupled nature of the integration, trust between the merchant (Just Eat in our case) and Apple is instead established through some additional means:

  • A valid SSL/TLS certificate
  • A validated domain name to prove a merchant owns a given domain
  • A Merchant Identity Certificate


As we already use Apple Pay here at Just Eat, the first few steps for getting up and running have already been achieved. We already have an Apple developer account and a merchant identifier via our iOS app development, and the Just Eat website is already served over HTTPS, so we have a valid SSL/TLS certificate.

We also do not need to worry about decrypting Apple Pay payment tokens ourselves. We use a third-party payment provider to offload our payment processing, and internal APIs for passing an Apple Pay token to our payment provider for processing already exist to handle iOS payments, so the website can integrate with those as well.

To get up and running and coding end-to-end, we just need a Merchant Identity Certificate. This is used to perform two-way TLS authentication between our servers and the Apple Pay servers to validate the merchant session when the Apple Pay sheet is first displayed on a device.

The first step in getting a Merchant Identity Certificate is to validate a domain. This involves entering a domain name into the Apple Pay Developer Portal for the merchant identifier you want to set up Apple Pay on the web for, after which you get a file to download. This is just a text file that verifies the association between your domain and your merchant ID. You just need to deploy this file to the web server(s) hosting your domain so Apple can perform a one-time request to verify that the file can be found at your domain.

You need to do this for every domain you wish to use Apple Pay on, including internal ones for testing, so you may have to whitelist the Apple IP addresses so that the validation succeeds.

Once you have validated at least one domain, you can generate the Merchant Identity Certificate for your merchant identifier. This requires providing a Certificate Signing Request (CSR).

Uploading the CSR file in the Apple Developer Portal will generate a certificate file (merchant_id.cer) for you to download. This contains the public key of your Merchant Identity Certificate; the matching private key is the one you generated locally when creating the CSR. In order to create a valid TLS connection to the Apple Pay merchant validation server, you will need to combine the CER file with that private key, such as by using a tool like OpenSSL. In our case we generated a .pfx file for use with .NET. Make sure you keep this file secure on your server and don’t expose it to your client-side code.
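As a sketch of the OpenSSL steps (file names assumed), the conversion looks something like this. A throwaway key and certificate are generated here so the commands can be run end-to-end; in practice merchant_id.pem would come from converting the DER-encoded merchant_id.cer downloaded from Apple (openssl x509 -inform der -in merchant_id.cer -out merchant_id.pem), and the key would be the one created alongside your CSR:

```shell
# Generate a stand-in key and certificate (in practice the certificate is
# Apple's merchant_id.cer converted from DER to PEM, and the key is the
# one created alongside your CSR):
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=merchant.example" \
  -keyout merchant_id.key -out merchant_id.pem

# Combine the certificate and private key into a .pfx for use with .NET:
openssl pkcs12 -export -in merchant_id.pem -inkey merchant_id.key \
  -out merchant_id.pfx -passout pass:example
```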

Separating concerns

So now we’ve got a validated domain and a Merchant Identity Certificate, we can start thinking about implementing the JavaScript SDK. At a high level, the components needed to create a working Apple Pay implementation in Safari are:

  1. JavaScript to test for the presence of Apple Pay, display the Apple Pay sheet, respond to user interactions and receive the payment token
  2. CSS to render the Apple Pay button on a page
  3. An HTTPS resource to perform merchant validation

From the user’s point of view though, it’s just a button. So rather than add all the code for handling Apple Pay transactions directly into the codebase of our website, we decided instead to contain as much of the implementation as
possible in a separate service. This service presents its own API surface to our website, abstracting the detail of the Apple Pay JavaScript SDK itself away.

The high-level implementation from the website’s point of view is therefore like this:

  1. Render a hidden div on the appropriate page in the checkout flow to represent the Apple Pay button as well as some meta and link tags to drive our JavaScript API
  2. Reference a JavaScript file from the Apple Pay service via a script tag
  3. Provide some minimal CSS to make the Apple Pay button size and colour appropriate to the current page
  4. Call a function on our JavaScript API to test for whether Apple Pay is available
  5. If it is, call a second function passing in some parameters related to the current checkout page, such as the user’s basket, the DOM element for the div representing the Apple Pay button and some callback functions for when the payment is authorised, fails or an error occurs.

The rest of the Apple Pay implementation is handled by our JavaScript abstraction so that the Just Eat website itself never directly calls the Apple Pay JavaScript functions.

Our new Apple Pay service itself should have the following responsibilities:

  • Serve the JavaScript file for the abstraction for the website
  • Serve a file containing the base CSS for styling the Apple Pay button
  • Provide HTTP resources that support Cross Origin Resource Sharing (CORS) to:
    1. Provide the payment request properties to set up an Apple Pay sheet
    2. Validate merchant sessions
    3. Verify that a restaurant partner delivers to the selected delivery address
    4. Receive the Apple Pay payment token to capture funds from the user and place their order

Separating the CSS, JavaScript and back-end implementation allows us to decouple the implementation from our website itself allowing for more discrete changes. For example, the current Apple Pay version is 1. By abstracting things away we could make changes to support a future version 2 transparently from the website’s point-of-view.

Delving into the implementation

As mentioned in the high-level design above, integrating Apple Pay into a website requires a mix of client-side and server-side implementation. We need to implement some JavaScript, make some CSS available and provide some server-side HTTP resources to handle merchant validation and payment processing. There are also some HTML meta and link tags you can add to enhance your integration.

Let’s delve into the different layers and things we need to add…


The Apple Pay button

First we need an Apple Pay button. You can add one with some HTML like this:
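A minimal sketch of that markup (a single div carrying the CSS hooks discussed below):

```html
<div class="apple-pay-button hide"></div>
```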

Ignore the apple-pay-* CSS classes for now as I’ll come back to them, but the hide class (or some other similar approach) ensures that the div for the button is not visible when the page first loads. This allows us to display it as appropriate once we have detected that Apple Pay is available in the browser using JavaScript.

HTML metadata

Apple Pay supports a number of different HTML meta and link tags that you can use to improve the user experience for your integration.

First, there’s some link tags you can add to provide an icon for use on an iPhone or iPad when a confirmation message is shown to the user initiating a payment from macOS:
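These are standard apple-touch-icon links; the sizes and URLs below are illustrative:

```html
<link rel="apple-touch-icon" sizes="120x120" href="https://www.just-eat.co.uk/apple-touch-icon-120x120.png" />
<link rel="apple-touch-icon" sizes="152x152" href="https://www.just-eat.co.uk/apple-touch-icon-152x152.png" />
```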

These link elements can even be added dynamically by scripts when you detect that Apple Pay is available, provided that they are in the DOM before you create an ApplePaySession object.

There’s also some meta tags you can add so that crawlers (such as Googlebot) can identify your website as supporting payment through Apple Pay:
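One way of expressing this is with the schema.org paymentAccepted property via a microdata meta tag – shown purely as an illustration, not our production markup:

```html
<meta itemprop="paymentAccepted" content="Apple Pay" />
```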

Integrating the Apple Pay JavaScript SDK

So now we’ve got the HTML for the Apple Pay button and some metadata tags, we need some JavaScript to drive the integration.

In our case we have placed all of our Apple Pay-related JavaScript into a single file. This allows us to use server-side feature flags to decide to render the script tag for it (or not), so that the relevant file is only fetched when the feature is enabled.

Within this JavaScript file, there are functions for dealing with the Apple Pay workflow and calling the Safari functions in the browser.

The pseudo-code for an implementation within a consuming website would be:
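In outline (the je.applePay function names are those of our abstraction described below; the init function and its parameter names are illustrative):

```
if (je.applePay.isSupportedByDevice() && je.applePay.isSupportedForCheckout()) {
    je.applePay.controller.init({
        basket: checkout.basket,                             // illustrative
        button: document.querySelector(".apple-pay-button"),
        onSuccess: function (order) { /* redirect to confirmation */ },
        onError: function (error) { /* show an error message */ }
    });
}
```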

First, je.applePay contains simple functions for feature detection. For example, the isSupportedByDevice() function tests whether the current browser supports Apple Pay at all, whereas the isSupportedForCheckout() function additionally tests whether the Just Eat-specific information (such as the ID of the basket to pay for) is available to the current page.

The controller is the top-level object in our abstraction that the containing page uses to handle the Apple Pay payment flow. This handles things so that when the user clicks the Apple Pay button, we create an Apple Pay session with the appropriate payment information, do callbacks to the server to validate the merchant session and capture payment – and invoke the website-supplied callback functions when the payment process ends.

Within our abstraction, we use the ApplePaySession object to drive our integration. For example, to test for Apple Pay support, we use code similar to this (logging removed for brevity):
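A sketch of that check (the function name is our own here, and the global object is passed in as a parameter purely so the function can be exercised outside Safari – in the browser it would be window):

```javascript
// Feature detection: ApplePaySession only exists in Safari on supported
// devices, and canMakePayments() confirms the device itself can pay.
function isApplePaySupported(global) {
  return Boolean(global.ApplePaySession && global.ApplePaySession.canMakePayments());
}
```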

Assuming that the device supports Apple Pay then we’ll want to display the Apple Pay button. However before we do that we’ll need to wire-up an onclick event handler to invoke the JavaScript to handle the payment process itself when it is clicked or pressed. For example with jQuery:
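A jQuery version is a one-liner with .on("click", …); here is the equivalent plain-DOM sketch (the function name is hypothetical, and beginApplePay stands for our function that creates the ApplePaySession – it must run synchronously inside the gesture handler, for reasons explained shortly):

```javascript
// Attach the click handler, then reveal the button by removing the
// "hide" class added in the markup.
function wireApplePayButton(button, beginApplePay) {
  button.addEventListener("click", function (event) {
    event.preventDefault();   // the button lives inside the checkout form
    beginApplePay();          // creates the ApplePaySession synchronously
  });
  button.classList.remove("hide");
}
```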

Now the Apple Pay button will be displayed. The rendering of the button itself is handled by the CSS provided by Apple. There are four possible variants: a choice between a black or a white button, combined with a choice of either the Apple Pay logo on its own or the logo prefixed with the text “Buy with”.

The logo itself is provided by resources built into Safari, such as shown in this snippet:
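A sketch of the base button styles; -webkit-named-image() pulls the Apple Pay artwork that ships with Safari:

```css
.apple-pay-button {
  background-color: black;
  background-image: -webkit-named-image(apple-pay-logo-white);
  background-origin: content-box;
  background-position: 50% 50%;
  background-repeat: no-repeat;
  background-size: 100% 60%;
  border-radius: 5px;
}
```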

The CSS file for this is loaded dynamically by our JavaScript abstraction so users with devices that do not support Apple Pay do not pay the penalty of a network request to get the CSS file. This also removes the need for the consuming website to explicitly load the CSS itself with a link tag and allows the location of the CSS file itself to be modified at any time in our Apple Pay service.

So when the user either taps or clicks the button, that’s when the work to start the Apple Pay session begins. First you need to create a properly set-up payment request object, which is passed to the ApplePaySession constructor along with the Apple Pay version (currently 1).
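A sketch of such a request (the values here are illustrative; in our implementation they are copied from the JSON returned by the basket resource described later):

```javascript
// Build the payment request for the ApplePaySession constructor.
// Amounts are strings, per the Apple Pay JS API.
function buildPaymentRequest(basket) {
  return {
    countryCode: "GB",
    currencyCode: "GBP",
    merchantCapabilities: ["supports3DS"],
    supportedNetworks: ["amex", "masterCard", "visa"],
    total: {
      label: "Just Eat",
      amount: basket.total // e.g. "12.99"
    }
  };
}
```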

Be careful here – Apple Pay only allows an ApplePaySession object to be created when invoked as part of a user gesture. So, if you want to do any interaction with your server-side implementation here, ensure you do not make use of asynchronous code such as with a Promise object. Otherwise creating the ApplePaySession may occur outside the scope of the gesture handler, which will cause a JavaScript exception to be thrown and the session creation to fail.

We haven’t done enough to show the sheet yet though. Next we need to register handlers for the events we want to receive. At a minimum you will need two of these:

onvalidatemerchant is called after the sheet is displayed to the user. It provides you with a URL to pass to the server-side of your implementation to validate the merchant session.

An example of how you could do this in jQuery is shown in the snippet below:
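Our snippet used jQuery’s $.post; in this sketch a generic postJson helper (hypothetical; any function that POSTs JSON and returns a Promise) stands in, and the endpoint is the validate resource described later:

```javascript
// Returns an onvalidatemerchant handler bound to the given session.
function createOnValidateMerchant(session, postJson) {
  return function (event) {
    // Pass Apple's validation URL to our back-end, which calls it using
    // the Merchant Identity Certificate and returns the merchant session.
    return postJson("/applepay/validate", { validationUrl: event.validationURL })
      .then(function (merchantSession) {
        session.completeMerchantValidation(merchantSession);
      })
      .catch(function () {
        session.abort(); // validation failed, so dismiss the sheet
      });
  };
}
```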

onpaymentauthorized is called after payment is authorised by the user either with a fingerprint from an iPhone or iPad or by pressing a button on their Apple Watch. This provides the payment token for capturing the funds from the user.

An example of how you could do this in jQuery is shown in the snippet below:
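Again with a generic postJson helper standing in for $.post (the endpoint is the payment resource described later, and the ApplePaySession constructor is passed in so the handler can reference its status constants outside Safari):

```javascript
// Returns an onpaymentauthorized handler bound to the given session.
function createOnPaymentAuthorized(ApplePaySessionType, session, postJson, onSuccess) {
  return function (event) {
    // event.payment carries the encrypted token plus the billing and
    // shipping contact details the user confirmed on the sheet.
    return postJson("/applepay/payment", event.payment)
      .then(function (result) {
        session.completePayment(ApplePaySessionType.STATUS_SUCCESS);
        onSuccess(result.orderId);
      })
      .catch(function () {
        session.completePayment(ApplePaySessionType.STATUS_FAILURE);
      });
  };
}
```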

The functionality to actually capture funds from the user is outside the scope of this blog post – information about decrypting Apple Pay payment tokens can be found here.

There are also events for payment selection, shipping method selection, shipping contact selection and cancellation. These allow you to do things such as:

  • Dynamically adjust pricing based on payment method or shipping address
  • Validate that the shipping address is valid, for example whether a restaurant delivers to the specified shipping address

Note that before the payment is authorised by the user, not all of the shipping and billing contact information is available to you via the parameters passed to the event handlers – only, for example, the country, locality (e.g. a city or town), administrative area (e.g. a county or state) and the first part of the postal code (e.g. the outward code in the UK, such as the EC4M of EC4M 7RF). This is for privacy reasons: before the user authorises the payment it is still only a request for payment, and as such the full information is only revealed to you, the integrator, by the onpaymentauthorized event.

Once you’ve registered all your event handlers, you just need to call the begin function to display the Apple Pay sheet.
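Putting the pieces together, a sketch of starting a session (the constructor is passed in so this can be exercised outside Safari; only the two required event handlers are shown):

```javascript
// Create the session for Apple Pay version 1, attach the handlers,
// and call begin() to display the Apple Pay sheet.
function startApplePaySession(ApplePaySessionType, paymentRequest, handlers) {
  var session = new ApplePaySessionType(1, paymentRequest);
  session.onvalidatemerchant = handlers.onvalidatemerchant;
  session.onpaymentauthorized = handlers.onpaymentauthorized;
  session.begin();
  return session;
}
```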

HTTP resources

Our server-side implementation has four main resources that we consume from our JavaScript code for all flows:

  1. GET /applepay/metadata
  2. GET /applepay/basket/{id}
  3. POST /applepay/validate
  4. POST /applepay/payment

The metadata resource is used to test whether Apple Pay is available on the current domain (for example www.just-eat.co.uk). The JSON response returned indicates whether the Apple Pay feature is enabled for the referring domain, the merchant capabilities, the supported payment networks, the country and currency code and the available Apple Pay touch icons and their URIs. This allows our JavaScript to build up the link tags for the touch icons dynamically, deferring their creation until needed.
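For illustration, a response might look something like this (the field names are illustrative, not our exact contract):

```json
{
  "enabled": true,
  "countryCode": "GB",
  "currencyCode": "GBP",
  "merchantCapabilities": ["supports3DS"],
  "supportedNetworks": ["amex", "masterCard", "visa"],
  "touchIcons": [
    { "sizes": "120x120", "href": "https://www.just-eat.co.uk/apple-touch-icon-120x120.png" }
  ]
}
```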

The basket resource is used to fetch details about the user’s current basket so that we can render the Apple Pay sheet to show the items for their order, the total, the shipping method and the required shipping contact fields. For example, we require the user’s postal address for delivery orders but that isn’t required for collection orders. This removes the need for the JavaScript to determine any of this information itself, as it can just copy the fields into the payment request object for the ApplePaySession constructor directly from the JSON response.

The validate resource is used to implement the merchant session validation with the Apple Pay servers. This posts the Apple validation URL to our back-end, which then calls the specified URL using the Merchant Identity Certificate associated with the requesting domain to validate the merchant session. The JSON response then returns a MerchantSession dictionary for consumption by the JavaScript to pass to the completeMerchantValidation function.

The payment resource is used to POST the encrypted payment token, as well as the basket ID and billing and shipping contact details, to our server to place the order. This resource then returns either an order ID (and optionally a token if a guest user account was created) if the payment was authorised successfully, or an error code otherwise.

For delivery orders we also have a POST /applepay/basket/{id}/validatepostalcode resource to check that the user’s chosen shipping address can be delivered to.

Merchant Validation

Initiating the POST to Apple’s servers to validate the session is relatively simple in ASP.NET Core (more about that later), provided you’ve already performed the steps to create a .pfx file for your Merchant Identity Certificate.

First we need to load the certificate, whether that’s from the certificate store or from a file on disk. In our service we store the certificate as an embedded resource, as we have multiple certificates for different environments, but the simplest form is loading it from disk using the X509Certificate2 constructor that takes a file path and a password.

This was the approach I was using in some initial local testing, but when I deployed the code to a Microsoft Azure App Service to leverage the free SSL certificate, it stopped working. After some digging around I found that this was because, on Windows, the user profile must be loaded to access the private keys in certificates, and by default IIS does not load it. This is easy enough to fix when you have full control of the infrastructure (such as our Amazon Web Services (AWS) Elastic Compute Cloud (EC2) instances), but there’s no option available to enable this in Azure.

Luckily there is a way around this. First, you upload the certificate containing the private key you wish to use to the App Service using the “SSL certificates” tab in the Azure Portal. Next, you add the WEBSITE_LOAD_CERTIFICATES app setting in the “Application settings” tab and set its value to the thumbprint of the certificate you want to use. This causes the App Service to make the specified certificate available in the “My” store in the “Current User” location so it can be read by the identity associated with the IIS App Pool. Note that when looking the certificate up in the store (for example with X509Store and Certificates.Find), the validOnly parameter must be set to false; otherwise the Merchant Identity Certificate will not be loaded, as it is not considered valid for use by Windows even though it is valid from Apple’s perspective.

The next step in the merchant validation process is to construct the payload to POST to the Apple server. For this we need our domain name, the store display name (in our case “Just Eat”) and the merchant identifier. While we could configure the merchant identifier per domain, we can be smart about it and read it from the Merchant Identity Certificate instead. Thanks to Tom Dale’s node.js example implementation, we discovered that it can be found in the 1.2.840.113635.100.6.32 X.509 extension field, so we can read it out of our X509Certificate2 by searching its Extensions collection for that OID.
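The payload itself is small. Shown as JavaScript for brevity (our service does this in C#), and with the merchant identifier assumed to have been read from the certificate as just described:

```javascript
// Build the JSON body POSTed to Apple's validation URL. These three
// field names are what the version 1 merchant validation API expects.
function buildMerchantValidationPayload(merchantIdentifier, domainName, displayName) {
  return {
    merchantIdentifier: merchantIdentifier, // from the certificate's 1.2.840.113635.100.6.32 extension
    domainName: domainName,                 // e.g. "www.just-eat.co.uk"
    displayName: displayName                // e.g. "Just Eat"
  };
}
```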

Now we can POST to the validation URL we received from the JavaScript. As mentioned previously, we need to provide the Merchant Identity Certificate with the request for two-way TLS authentication. This is achieved by using the HttpClientHandler class, which provides a ClientCertificates property to which we can add our certificate, and then passing the handler into the constructor of HttpClient so that the certificate is used to authenticate when we POST the data.

Assuming we get a valid response from the Apple server, we then just need to deserialise the JSON containing the merchant session and return it to the client from our API controller method.

Now our JavaScript needs to consume the response body as mentioned earlier in the JavaScript implementation to pass it to the ApplePaySession.completeMerchantValidation function to allow the user to authorise the payment.

New Tricks with ASP.NET Core

When we started implementing Apple Pay for our website, ASP.NET Core 1.0.0 had just been released, and as such we were running all our C#-based code on the full .NET Framework. We decided that given the relatively small size and self-contained nature of the service for Apple Pay (plus there being no legacy code to worry about) that we’d dip our toes into the new world of ASP.NET Core for implementing the service for Apple Pay.

There are a number of capabilities and enhancements in ASP.NET Core that made it attractive for the implementation, but the main one was the improved integration with client-side focused technologies, such as Bower, Gulp and npm. Given that the bulk of the implementation is in JavaScript, this made it easier to use best-practice tools for JavaScript (and CSS) that provide features such as concatenation, minification, linting and testing – much easier than the equivalent workflow in an ASP.NET MVC project in Visual Studio.

Getting cut at the bleeding edge

Of course, going with a new version of a well-established technology isn’t all plain sailing. There have been a few trade-offs moving to ASP.NET Core that have made us go back a few steps in some areas. These are gaps we hope to address in the near future to reach feature parity with our existing ASP.NET applications. Some of these trade-offs are detailed below.


Shared libraries

Here at Just Eat we have a variety of shared libraries that we add as dependencies to our .NET applications to share common best practice and allow services to focus on their primary purpose, rather than also having to worry about boilerplate code, such as for logging, monitoring and communicating with other Just Eat services over HTTP.

Unfortunately a number of these dependencies are not quite in the position to support consumption from .NET Core-based applications. In most cases this is due to dependencies we consume ourselves not supporting .NET Core (such as Autofixture used in tests), or using .NET APIs that are not present in .NET Core’s surface area (such as changes to the UdpClient class).

We’re planning to move such libraries over to support .NET Core in due course (example), but the structure of the dependencies makes this a non-trivial task. The plan is to move our Apple Pay service over to versions of our libraries supporting .NET Core as they become available; for now it uses its own .NET Core forks of these libraries.


Monitoring

At Just Eat we have a very mature monitoring and logging solution using Kibana and Grafana, amongst other tools. Part of our monitoring solution involves a custom service installed on our AWS EC2 Amazon Machine Images (AMIs) which collects performance counter data to publish to StatsD.

Unfortunately ASP.NET Core does not currently implement performance counters on Windows. In ASP.NET, there are various performance counters available that we collect as part of our monitoring, such as the number of current IIS connections, request execution times, etc. Even though ASP.NET Core can be hosted via IIS, because the .NET Framework is not used, these performance counters are of no use when it comes to monitoring an ASP.NET Core application.

Testing the implementation

So once we’ve got our server-side implementation in place to provide the details for rendering the Apple Pay sheet, validate merchant sessions and process payments, as well as our JavaScript abstraction and base CSS, we can start testing it all out.

But how do we test Apple Pay without using our own personal credit/debit card?

Luckily with iOS 10, watchOS 3 and macOS Sierra, Apple have provided us with a way to do this. It’s called the Apple Pay Sandbox. This provides us with a way to set up users with “real” payment cards that allow us to test transaction processing (at least up to the point of trying to capture funds). You can find more details on the website, but the main steps are:

  1. Set up a sandbox tester account in iTunes Connect
  2. Sign in to iCloud on your test device(s) using your sandbox tester
  3. Add one or more test card(s) to Wallet on your test device(s)


Using the Apple Pay sandbox then allows you to test as many transactions as you like on your test devices without worrying about spending a fortune or misplacing your personal payment card details.

Stubbing Out the SDK

With the majority of Just Eat’s back-end services (and our website) being written in ASP.NET, this posed a bit of a challenge for testing. Of course the interactions with the sheet and its rendering need to be tested on a real Apple Pay-supporting device, but how could we run the full back-end stack on our local Windows 10 machines and use Apple Pay for local testing of changes without setting up lots of proxying to macOS and iOS test devices?

Well, luckily in JavaScript it’s quite simple to add a polyfill to a browser to provide a native API where one would otherwise not be available. So that’s what we did.

You can find it here on GitHub.

Effectively, the polyfill provides the ApplePaySession object if it does not already exist, implemented so that its functions behave as if Apple Pay is available on the current device, chaining the events and their handlers together to make it appear that a user is interacting with the Apple Pay sheet.

Of course it is no substitute for testing with a real device, but the polyfill provides enough of an implementation to test feature detection (i.e. only adding the button if Apple Pay is supported) and the server-side implementation for fetching and rendering the basket, performing merchant validation, and passing on a valid sandbox payment token.
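A minimal sketch of the idea (the real polyfill on GitHub is more complete; the validation URL here is a placeholder):

```javascript
// Install a fake ApplePaySession that claims support and, once begin()
// is called, drives the event handlers as if a user were interacting
// with the sheet.
function installApplePayPolyfill(global) {
  if (global.ApplePaySession) {
    return; // a real implementation exists; do nothing
  }
  function ApplePaySession(version, paymentRequest) {
    var self = this;
    this.paymentRequest = paymentRequest;
    this.begin = function () {
      // Simulate the sheet appearing, then the merchant validation event.
      if (self.onvalidatemerchant) {
        self.onvalidatemerchant({ validationURL: "https://apple.example/validate" });
      }
    };
    this.completeMerchantValidation = function (merchantSession) {
      // Simulate the user authorising payment with a canned token.
      if (self.onpaymentauthorized) {
        self.onpaymentauthorized({ payment: { token: {} } });
      }
    };
    this.completePayment = function (status) {};
    this.abort = function () {};
  }
  ApplePaySession.canMakePayments = function () { return true; };
  ApplePaySession.STATUS_SUCCESS = 0;
  ApplePaySession.STATUS_FAILURE = 1;
  global.ApplePaySession = ApplePaySession;
}
```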

You can get a valid payment token for a sandbox transaction to embed within your own copy of the polyfill by adding some JavaScript logging to print out the text representation of the object passed as the event parameter to the onpaymentauthorized function, as well as populating it with some appropriate billing and payment contact details.

We use the polyfill for testing in our QA environments by loading it into the browser via a script tag in our checkout-related pages where the Apple Pay button would appear.


Deployment

So we’ve got our new service, we’ve integrated it into our website, and it’s all working locally. Now it just needs deploying to our QA environments for testing, and then eventually on to our production environment.

We have our own deployment pipeline here at Just Eat that sets up deploying IIS applications from ZIP packages and we also build our own custom AWS AMIs to deploy our services onto, so that’s all taken care of by our Platform Engineering team.

Our AMIs do not yet have .NET Core installed on them though, so if we tried to run the deployed application in IIS it would return an HTTP 502. That’s easy enough to resolve though: we just need to make a new AMI with .NET Core on it.

This is nice and easy as Chocolatey provides packages for both the .NET Core runtime and the Windows Server Hosting installer for IIS hosting.

Now there’s just a few more things we need to do to get our feature ready to run:

  1. We need to set the ASPNETCORE_ENVIRONMENT environment variable so that the application runs with the right configuration
  2. We need to set up the registry hives required for the ASP.NET Core data protection system (used for things like antiforgery tokens)
  3. We need to adjust the App Pool configuration


Our deployment process already provides us with hooks to run PowerShell scripts post-deployment, so we just need to write some small scripts to do the steps.

Setting the environment name

We can set the environment name machine-wide because we deploy each service on its own EC2 instance. There are other approaches available, like setting environment variables in the ASP.NET Core Module, but this was simpler:

Configuring the App Pool

We also need to amend the IIS App Pool for the website to disable the .NET Framework (because we don’t need it) and to load the user profile so we can load the private keys in our Merchant Identity Certificates.

Setting Up Data Protection

The process for setting up Data Protection for IIS can be found here; it in turn provides a link to a PowerShell script.

After these three steps are done, then IIS just needs to be restarted (such as with iisreset) to pick up the configuration changes.

The (Apple) pay off

So now with Apple Pay integrated into our website, it’s possible for the user to pay using the cards loaded into Wallet on either their iPhone running iOS 10 or their Apple Watch running watchOS 3 when paired with a MacBook running macOS Sierra.

iPhone payment flow

At the start of the checkout flow the user is prompted to select what time they would like their food delivered for (or be ready for collection) and an optional note for the restaurant.

At first the user is shown the Apple Pay button in addition to the usual button to continue through checkout to provide their delivery and payment details.

The user taps the Apple Pay button and the Apple sheet is displayed. Then the user selects their payment card as well as their delivery address. While this happens we asynchronously validate the merchant session to enable TouchID to authorize payment as well as validate that the restaurant selected delivers to the postcode provided by the user in the case of a delivery order.

Once the user authorizes payment with their finger or thumb, the sheet is dismissed, they are logged in to a guest account if not already logged in, and redirected to the order confirmation page.

The Apple Pay button displayed during checkout in Safari on iOS 10.


The Apple Pay payment sheet in iOS.


macOS payment flow

At the start of the checkout flow the user is prompted to select what time they would like their food delivered for (or be ready for collection) and an optional note for the restaurant.

Here the user is shown the Apple Pay button in addition to the usual button to continue through checkout to provide their delivery and payment details.

The Apple Pay button displayed during checkout in Safari on macOS Sierra.


The user clicks the Apple Pay button and the Apple sheet is displayed. The user selects their payment card as well as their delivery address. While this happens we asynchronously validate the merchant session to enable the ability to authorize payment using an iPhone, iPad or Apple Watch paired with the signed in iCloud account, as well as validate that the restaurant selected delivers to the postcode provided by the user in the case of a delivery order.

The Apple Pay payment sheet in macOS Sierra.


Once the merchant session is validated, the user is then prompted to authorize the payment on their paired device, for example using either an iPhone with TouchID or an Apple Watch.

Payment confirmation for a purchase from macOS using Touch ID on an iPhone.


Payment confirmation for a purchase from macOS using Apple Watch.


Once the user authorizes payment with their finger or thumb with TouchID or by pressing a button on their Apple Watch, the sheet is dismissed, they are logged in to a guest account if not already logged in, and redirected to the order confirmation page.

Now the user just needs to wait for the food matching their inner food mood to be prepared.

Example integration

An example integration of Apple Pay JS adapted from our own implementation is available on GitHub. You should be able to use it as a guide to implementing Apple Pay into your website by viewing the JavaScript for creating an ApplePaySession and the C# for validating a merchant session. Also, provided you have an Apple Developer account so that you can generate your own merchant identifier and the associated certificates, you should also be able to run it yourself and see Apple Pay in action.


We hope you’ve found this post about how we brought Apple Pay to the Just Eat website informative and interesting, and that the example integration is a useful resource if you’re thinking about implementing Apple Pay in your own e-commerce solution.

It’s been an interesting SDK to integrate with a number of challenges along the way, but we’ve also learned a lot in the process, particularly about Apple Pay itself, as well as the differences between ASP.NET and ASP.NET Core (the good and the not so good).

Just Eat is here to help you find your flavour, and with Apple Pay now a payment option on our website, we hope you’ll be able to find it even more easily!


Why fewer End-to-End Tests?

At Just Eat, we run a lot of end-to-end (e2e) tests that execute continuously every time we change something on our website. It goes without saying that tests give you more confidence in the product’s quality, since they cover more use cases from the user’s perspective. However, time is crucial in the software development life cycle: the longer you wait for feedback, the slower the development process becomes.

The issues we face with having more e2e website tests are…
  • They require a lot of knowledge, skill and time to write well.
  • They can take a long time to execute, which means feedback is slow.
  • They can be very brittle, due to relying on a UI which changes often.
  • Testing negative paths can be complex if integrating with APIs or databases.
  • They can be fragile, since they normally rely on external dependencies and the environment.
  • And when an e2e test fails… it can be like finding a needle in a haystack.


So how do we overcome these issues? Well, we need to make sure we are writing the right number of tests at the right level… and by level I mean unit, acceptance and end-to-end.

Unit tests exercise a small unit or component in isolation. They are fast, often very reliable, and normally make it easy to find the bug when they fail. The main disadvantage of a unit test is that even if the unit works well in isolation, we do not know whether it works well with the rest of the system; in other words, we don’t test the communication between components. For that we need integration tests, which should focus on the contracts between components and on how they behave when those contracts are met and not met. But despite the issues above, e2e tests are the only tests which simulate real user scenarios, e.g. placing an order or leaving a review – so it’s important to have the right balance of all these test types.

The best visual indication of this is the Agile Testing Pyramid (see below).



According to the pyramid, the best combination is roughly 70% unit tests, 20% integration/acceptance tests and only 10% end-to-end tests.

In-memory acceptance tests

We allow each developer to run all of the integration/acceptance-level tests locally. To achieve this we’ve developed a framework that minimises the issues normally encountered. The framework supports real browser-driven tests by hosting the website in-memory and mocking out all of the endpoint calls, so it isn’t dependent on the actual QA environment.

In-memory test framework diagram.

FiddlerCore – the core component that Fiddler is built on. It allows you to capture and modify HTTP and HTTPS traffic, and we use it to inject a proxy.

Coypu/Selenium – a .NET wrapper for browser automation with Selenium WebDriver (similar to Capybara in the Ruby world).

IIS Express – the lightweight web server built into Visual Studio, which we use here to host the website.

When you trigger an in-memory test, the Selenium driver, via the Coypu wrapper, communicates with the browser, whilst FiddlerCore injects a proxy into the request via a header.

The browser accesses the website, hosted in IIS Express, exercising the server-side code, which attempts to communicate with various endpoints (APIs). Through FiddlerCore, we can listen for the API call being made, inject a proxy and mock the API response, so we can test the presentation layer in isolation. You can mock scenarios where an API fails or returns unexpected data and verify how this affects the user journey – in most cases, you can show the user a reduced set of functionality instead of an error page.
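Although our framework is .NET-based, the underlying idea – stubbing out the API response so the presentation layer can be tested in isolation and shown to degrade gracefully – is language-agnostic. The JavaScript sketch below, with purely hypothetical names, illustrates the same pattern a FiddlerCore-intercepted test exercises:

```javascript
// A hypothetical view-model builder that degrades gracefully when
// the restaurants API fails: the page still renders, with an empty
// list instead of an error page.
function buildHomePage(apiClient) {
  try {
    return { restaurants: apiClient.getRestaurants(), degraded: false };
  } catch (e) {
    return { restaurants: [], degraded: true };
  }
}

// In a test, the real API client is replaced by a stub – the same
// role FiddlerCore plays in our framework by intercepting the HTTP
// traffic and returning a canned response.
var failingApi = {
  getRestaurants: function () { throw new Error('503 from downstream'); }
};
var workingApi = {
  getRestaurants: function () { return ['Pizza Milano', 'Curry House']; }
};
```

The test can then assert that a failing dependency produces the degraded experience rather than an error page, without any network or QA environment involved.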

In some cases, e.g. authentication, you can inject an in-memory implementation of the identity server that hijacks authentication requests and issues tokens that your application trusts. Ideally, a developer should be able to run most of the tests, including all unit tests and a large subset of the acceptance test suite, without being connected to a network.

The benefits of this framework are…
  • Faster and more reliable than e2e tests.
  • Can also be used to write tests that simulate real user scenarios.
  • Can also be used to write integration/API tests.
  • Makes it possible to write scenarios where functionality degrades gracefully.
  • No dependency on the environment or QA.
  • Earlier feedback in the development cycle.
  • Tests can be run locally.
  • Works well with continuous integration.
  • Motivates developers to write their own e2e and integration tests, and helps them think about and simplify the architecture with simpler dependencies.

Summary

Having more e2e tests doesn’t automatically translate into faster delivery; what matters is having the right balance of tests and a good test automation strategy. It goes without saying that fewer e2e tests will save you time, and you can then spend more of it on exploratory testing. The right set of tests also allows you to evolve your architecture and refactor your code in order to continuously improve.

As a footnote, I would like to thank Rajpal Wilkhu, who architected this amazing framework and helped us develop it.


Thanks for reading …

Deepthi Lansakara


Beautiful Rooms & Why Smartphones Are Too Dumb

Some time in the future, the age of the smartphone will draw to a close and experiences will become more in tune with the way humans actually live. We need to be thinking about this new wave of interactions at a time when our customers’ attention is at a premium. We need to be augmenting their worlds, not trying to replace them…

I’m Craig Pugsley – a Principal UX Designer in Product Research. Our team’s job is to bring JUST EAT’s world-leading food ordering experience to the places our consumers will be spending their future, using technology that won’t be mainstream for twelve to eighteen months.

It’s a great job – I get to scratch my tech-geek itch every day. Exploring this future-facing tech makes me realise how old the systems and platforms we’re using right now actually are. Sometimes it feels like we’ve become their slaves, contorting the way we want to get something done to match the limitations of their platforms and the narrow worldview of the experiences we’ve designed for them. I think it’s time for change. I think smartphones are dumb.

We’ve been led to believe that ever more capable cameras or better-than-the-eye-can-tell displays make our phones more useful. For the most part, this is marketing nonsense. For the last few years, major smartphone hardware has stagnated – the occasional speed bump here, the odd fingerprint sensor there… but nothing that genuinely makes our phones any smarter. It’s probably fair to say that we’ve reached peak phone hardware.


What we need is a sea-change. Something that gives us real value. Something that recognises we’re probably done with pushing hardware towards ever-more incremental improvements and focuses on something else. Now is the time to get radical with the software.

I was watching some old Steve Jobs presentation videos recently (best not to ask) and came across the seminal launch of the first iPhone. At tech presentation school, this Keynote will be shown in class 101. Apart from general ambient levels of epicness, the one thing that struck me was how Steve referred to the iPhone’s screen as being infinitely malleable to the need – we’re entirely oblivious to it now, but at that time phones came with hardware keyboards. Rows of little buttons with fixed locations and fixed functions. If you shipped the phone but thought of an amazing idea six months down the line, you were screwed.

In his unveiling of the second generation of iPhone, Jobs sells it as being the most malleable phone ever made. “Look!” (he says), “We’ve got all the room on this screen to put whatever buttons you want! Every app can show the buttons that make sense to what you want to do!”. Steve describes a world where we can essentially morph the functionality of a device purely through software.


But we’ve not been doing that. Our software platforms have stagnated just as our hardware has. Arguably, Android still struggles with basic usability issues; only recently have the worst bloatware offenders stopped totally crippling devices out of the box. iOS’s icon-based interface hasn’t changed since it came out. Sure, more stuff has been added, but we’re tinkering at the edges – just like we’ve been doing with the hardware. We need something radically different.

One of the biggest problems I find with our current mobile operating systems is that they’re ignorant of the ecosystem they live within. With our apps, we’ve created these odd little spaces, completely oblivious to each other. We force you to come out of one and go in the front door of the next. We force you to think first not about what you want to do, but about the tool you want to use to do it. We’ve created beautiful rooms.

Turning on a smartphone forces you to confront the rows and rows of shiny front doors. “Isn’t our little room lovely” (they cry!) “Look, we’ve decorated everything to look like our brand. Our tables and chairs are lovely and soft. Please come this way, take a seat and press these buttons. Behold our content! I think you’ll find you can’t get this anywhere else… Hey! Don’t leave! Come back!”

“Hello madame. It’s great to see you, come right this way. Banking, you say? You’re in safe hands with us. Please take a seat and use this little pen on a string…”

With a recent iOS update, you’re now able to take a piece of content from one room and push it through a little tube into the room next door.

Paralysed by the fear of alienating their existing customers, Android and iOS have stagnated. Interestingly, other vendors have made tantalizing movements away from this beautiful-room paradigm into something far more interesting. One of my favorite operating systems of all time, WebOS, shipped with the first Palm Pre.


There was so much to love about both the hardware and software for this phone. It’s one of the tragedies of modern mobile computing that Palm weren’t able to make more of this platform. At the core, the operating system did one central thing really, really well – your services were integrated at a system level. Email, Facebook, Twitter, Flickr, Skype, contacts – all managed by the system in one place. This meant you could use Facebook photos in an email. Make a phone call using Skype to one of your contacts on Yahoo. You still had to think about what beautiful room you needed to go into to find the tools you needed, but now the rooms were more like department stores – clusters of functionality that essentially lived in the same space.

Microsoft took this idea even further with Windows Phone. The start screen on a Windows Phone is a thing of beauty – entirely personal to you, surfacing relevant information, aware of both context and utility. Email not as important to you as Snapchat? No worries, just make the email tile smaller and it’ll report just the number of emails you haven’t seen. Live and die by Twitter? Make the tile huge and it’ll surface messages or retweets directly in the tile itself. Ambient. Aware. Useful.



Sadly, both these operating systems have tiny market shares.

But the one concept they both share is a unification of content. A deliberate, systematic and well executed breaking down of the beautiful room syndrome. They didn’t, however, go quite far enough. For example, in the case of Windows Phone, if I want to contact someone I still need to think about how I’m going to do it. Going into the ‘People Hub’ shows me people (rather than the tools to contact them), but is integrated only with the phone, SMS and email. What happens when the next trendy new communication app comes along and the People Hub isn’t updated to support the new app? Tantalizingly close, but still no cigar.

What we need is a truly open platform. Agnostic of vendors and representing services by their fundamentally useful components. We need a way to easily swap out service providers at any time. In fact, the user shouldn’t know or care. Expose them to the things they want to do (be reminded of an event, send a picture to mum, look up a country’s flag, order tonight’s dinner) and figure out how that’s done automatically. That’s the way around it should be. That’s the way we should be thinking when designing the experiences of the future.


Consider Microsoft’s Hololens, which was recently released to developers outside of Microsoft. We can anticipate an explosion of inventiveness in the experiences created: the Hololens is a unique device that leapfrogs the problem of beautiful rooms by augmenting your existing real-world rooms with the virtual.


Holographic interface creators will be forced to take into account the ergonomics of your physical world and work harmoniously, contextually, thoughtfully and sparingly within it. Many digital experience designers working today should admit that they rarely consider what their users were doing just before or just after using their app. This forces users to break their flow and adapt their behavior to match the expectations of the app. As users, we’ve become pretty good at rapid task switching, but doing so takes attention and energy away from what’s really important – the real world and the problems we want to solve.

Microsoft may be one of the first to market with Hololens, but VR and AR hardware is coming fast from the likes of HTC, Steam, Facebook and Sony. Two-dimensional interfaces are on the path to extinction, a singular event that can’t come quickly enough.


Solving Italian address input with Google Places

Postcodes in Italy

JUST EAT’s UK website uses postcodes to determine whether or not a takeaway restaurant delivers to an address. This is the case for a lot of our international websites, and it often proves to be an effective way of accurately specifying a location. When JUST EAT started operating in Italy, we observed that postcodes are not popular with our customers as a way of defining a delivery address. One possible reason for this is that postcodes in Italy are not as accurate as we are used to in the UK, even in built-up areas.


This was an issue, as our systems use postcodes. There were already projects in place to move from postcodes to latitude and longitude, which would allow us to define our own custom delivery areas. This would remove our dependency on postcodes, but from a customer’s point of view it would not help them define their location any more easily. We needed a user interface that would allow the customer to enter their delivery address in a way that suited them.

What did we try?

We produced three experiments that were A/B tested in parallel. The experiments were made available to a limited percentage of customers over the course of a month to ensure the results were statistically significant. The three experiments are described below.

Postcodes Anywhere

Postcodes Anywhere is now known as PCAPredict. They provide an address lookup service called Capture+, which autocompletes the user’s input and forces them to make a selection from the address options given. A trial version was implemented using PCAPredict’s prototyping tool, which allowed us to insert an instance of the Capture+ interface that would capture the address and pass the appropriate data to our server on search. This was the easiest of the three experiments to implement.

Google Places

Google Places is a Google service for retrieving location data for residential areas, business areas and tourist attractions. An autocomplete service is provided, as with PCAPredict, with a slight difference: Google Places suggests locations at different levels of accuracy instead of forcing residential-address-level accuracy. The autocomplete widget provided by Google can be attached to an existing HTML input element and reacts to selection events. When experimenting with Google Places, we needed to specify two options: ‘componentRestrictions’ and ‘types’.


The ‘componentRestrictions’ option allows us to filter by country using a two-character, ISO 3166-1 alpha-2 country code, so this was set to ‘IT’. The ‘address’ type instructs the Places service to return only geocoding results with a precise address. Google’s data is always improving; however, it does not always offer street-level accuracy. This was an issue we needed to rectify in order to use the widget with our existing system, and it is discussed in a later section. Once the widget was configured, the data needed to be processed and passed to our servers, in much the same way as with the PCAPredict Capture+ tool.
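A minimal sketch of this configuration is shown below. It assumes the Google Maps JavaScript API with the Places library is loaded in the page; the `attachAutocomplete` helper and callback name are our own illustrative choices:

```javascript
// Options for the Google Places Autocomplete widget: restrict
// suggestions to Italy and to precise street addresses.
function buildAutocompleteOptions() {
  return {
    componentRestrictions: { country: 'IT' }, // ISO 3166-1 alpha-2
    types: ['address']
  };
}

// Attach the widget to an existing <input> element. Browser-only:
// requires the Google Maps JavaScript API with the Places library.
function attachAutocomplete(inputElement, onPlaceSelected) {
  var autocomplete = new google.maps.places.Autocomplete(
    inputElement, buildAutocompleteOptions());
  autocomplete.addListener('place_changed', function () {
    onPlaceSelected(autocomplete.getPlace());
  });
  return autocomplete;
}
```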

Google Geocoder

Google Geocoder allows a string description of an address to be submitted to the service, with the closest matching location data sent back to the client. This service is not designed for autocomplete suggestions, but during initial investigations its suggestions were more consistent with the behaviour expected from Google Maps searches than the suggestions given by Google Places. We also found that the quality of the data seemed more mature than that of Google Places, although during the course of development this no longer seems as apparent. This is why we decided it was worth testing a solution based on Google Geocoder in addition to Google Places. We constructed a widget that was similar to the Google Places widget but served suggestions from the Google Geocoder.

What was the outcome of our A/B testing?

The experiments were carried out against the existing home page search, which was made up of four input boxes that allowed the user to specify the street, street number, city and postcode. This was used as a control for the experiments to be compared against. The experiments ran for approximately four weeks, with 10% of Italian users being sent to them. The metric we used to determine success was conversion, defined as the percentage of visitors to the website who complete an order with JUST EAT.

Postcodes Anywhere

The PCAPredict Capture+ experiment didn’t see an increase in conversion during A/B testing.

Google Geocoder

The Google Geocoder experiment showed a small improvement, although the interface was awkward and not designed for this purpose. The overall increase in conversion was minor.

Google Places

Google Places showed a substantial increase in conversion. This was the stand-out winner from our testing, but there were still areas we thought we could improve. The suggestions could not be filtered to only those that provided the accuracy we required, which meant users would have to keep trying options until one met the criteria for a successful search.

How did we resolve the issues?

Based on the A/B testing results, we decided to develop the Google Places experiment further. The accuracy of suggestions was still an issue, and testing revealed that this was mostly a matter of getting from street-level to street-number-level accuracy. The solution we decided upon was to ask the user for the street number explicitly when this situation occurred, in the form of an additional input revealed to the user to prompt them for this information. To achieve this, we took the interface that had been built for the Google Geocoder and replaced the Google Geocoder service with the Google Places autocomplete service. As we had complete control of the logic within the widget, it was trivial to detect the missing data and react by displaying the additional input.
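One way to detect the missing street number is to inspect the address_components of the selected place, which is how a sketch of this check might look; the helper names below are illustrative, not our production code:

```javascript
// Returns true when a Google Places result includes a street
// number, i.e. it is accurate enough for our delivery search.
// A place's address_components is an array of objects like
// { long_name: '12', short_name: '12', types: ['street_number'] }.
function hasStreetNumber(place) {
  return Boolean(place.address_components) &&
    place.address_components.some(function (component) {
      return component.types.indexOf('street_number') !== -1;
    });
}

// When the selected place lacks a street number, reveal the extra
// input instead of submitting the search (hypothetical callbacks).
function handlePlaceSelection(place, showStreetNumberInput, submitSearch) {
  if (hasStreetNumber(place)) {
    submitSearch(place);
  } else {
    showStreetNumberInput();
  }
}
```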


The second issue encountered with Google Places was that sometimes the addresses users were requesting could not be found. This was not an issue we had encountered with the Google Geocoder service, so we built a fallback into the custom Google Places widget that would geocode the given address if a street number was provided but the address was not found.
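A sketch of that fallback is shown below. It assumes the Google Maps JavaScript API is loaded; the way the query is composed and the helper names are illustrative assumptions:

```javascript
// Compose the query string for the geocoder fallback from the
// user's typed address and the explicitly captured street number.
function buildFallbackQuery(typedAddress, streetNumber) {
  return typedAddress + ' ' + streetNumber;
}

// Browser-only: geocode the composed address when Places could not
// resolve it. Requires the Google Maps JavaScript API.
function geocodeFallback(typedAddress, streetNumber, onResult) {
  var geocoder = new google.maps.Geocoder();
  geocoder.geocode(
    { address: buildFallbackQuery(typedAddress, streetNumber),
      componentRestrictions: { country: 'IT' } },
    function (results, status) {
      // Hand back the best match, or null if nothing was found.
      if (status === 'OK' && results.length > 0) {
        onResult(results[0]);
      } else {
        onResult(null);
      }
    });
}
```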

What was the final outcome?

The final outcome of implementing the custom Google Places search on the Italian homepage was a significant increase in conversion. This implementation is now being used for 100% of users in Italy.

What next?

There are still many ways we can improve on this feature. Google Places allows us to alter the type of suggestions made to the user. It also returns more data than we currently make use of, and allows us to upload our own data that can be returned with selected suggestions. Google Places also integrates seamlessly with Google Maps, which opens up more possibilities for specifying location and returning location-based results. For these reasons, JUST EAT will continue to experiment with Google Places during the second quarter of 2016, with an aim to roll this feature out internationally.

Stay tuned for more updates.


Tech-talk: David Clarke of Wonga on Scaling Agile Planning

Yesterday, David Clarke of Wonga came and talked to us about how they plan the work they take on in engineering – regularly, across a unit of 150 people, in a company of 600.


David Clarke, Head of Tech Delivery.


Planning @ Wonga

Every six weeks all our Scrum teams (approximately eight of them), together with Tech Ops and Commercial colleagues, go off-site to plan. We have done this at Wonga for a long time. The evolution of what we do and how we do it reflects Wonga’s evolution as an organisation, from early start-up days to being a formally regulated body. To people coming along for the first time it can seem like organised chaos.

  • Why we do it (and what happens when we don’t)
  • Who is involved (and what happens when they are not)
  • How we prepare for planning
  • How we do it (and lots of ways we don’t do it anymore, epic failures included)
  • What metrics we collect
  • Biscuit awards (and other ways to keep it fun)

The talk will be based on real planning event artefacts, data and plenty of photos from the events.