

Top 5 Tips for Building Just Eat on Amazon’s Echo Show

Hi, I’m Andy May – Senior Engineer in Just Eat’s Product Research team. I’m going to take you through some top tips for porting your existing Alexa voice-only skill to Amazon’s new Echo Show device, pointing out some of the main challenges we encountered and solved.


Since we started work on the Just Eat Alexa skill back in 2016, we’ve seen the adoption of voice interfaces explode in popularity. Amazon’s relentless release schedule for Alexa-based devices has fuelled this, but the improvements in the foundational tech (AI, deep learning, speech models, cloud computing), coupled with the vibrant third-party skill community, look set to establish Alexa as arguably the leader in voice apps.

From an engineering perspective, adapting our existing code base to support the new Echo Show was incredibly easy. But, as with any new platform, simply porting an existing experience across doesn’t do the capabilities of the new platform justice. I worked incredibly closely with my partner-in-crime, Principal Designer Craig Pugsley, to take advantage of what became possible with a screen and touch input. In fact, Craig’s written some top tips about exactly that just over here.

To add a Show screen to your voice response, you simply extend the JSON response to include markup that describes the template you want to render on the device. The new template object (Display.RenderTemplate) is added to a directives array in the response.
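As a rough sketch (the directive type and directives array follow Amazon's documented response format; the helper function and template content here are illustrative, not our production code):

```javascript
// Illustrative sketch: extending a standard Alexa response with a
// Display.RenderTemplate directive. The template content is example data.
function buildResponse(speechText, template) {
  const response = {
    version: '1.0',
    response: {
      outputSpeech: { type: 'PlainText', text: speechText },
      shouldEndSession: false,
    },
  };
  if (template) {
    // Templates are delivered via the directives array in the response.
    response.response.directives = [
      { type: 'Display.RenderTemplate', template: template },
    ];
  }
  return response;
}

const rendered = buildResponse('What would you like to eat?', {
  type: 'BodyTemplate1',
  token: 'cuisine-prompt', // echoed back to you on touch selection
  title: 'Just Eat',
  textContent: {
    primaryText: { type: 'PlainText', text: 'What would you like to eat?' },
  },
});
```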

For more details on the Alexa response object visit //developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-interface-reference#response-body-syntax

Sounds simple, doesn’t it? Well, it’s not rocket science, but it does have a few significant challenges that I wished someone had told me about before I started on this adventure. Here are five tips to help you successfully port your voice skill to voice-and-screen.

1. You need to handle device-targeting logic

The first and main gotcha we found was that you cannot send a response that includes a template to a standard Echo or Dot device. We had incorrectly assumed that a device without a screen would simply ignore the additional objects in the response.

Our own Conversation class, which all Alexa requests and responses go through, is built on top of the Alexa Node SDK (the SDK did not exist when we first launched our skill). We added a quick helper method from the Alexa Cookbook (//github.com/alexa/alexa-cookbook/blob/master/display-directive/listTemplate/index.js#L589) to check whether we are dealing with an Echo Show or a voice-only device.
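The helper boils down to something like this (a sketch based on the cookbook version; the request shape follows Amazon's documented context.System.device object):

```javascript
// Detect whether the requesting device supports the Display interface
// (i.e. is an Echo Show). Voice-only devices have no Display entry in
// supportedInterfaces, so this returns false for them.
function supportsDisplay(event) {
  return Boolean(
    event.context &&
    event.context.System &&
    event.context.System.device &&
    event.context.System.device.supportedInterfaces &&
    event.context.System.device.supportedInterfaces.Display
  );
}
```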

This method is called before we return our response to ensure we only send RenderTemplates to devices that support them.

Finally, we extended our Response class to accept the new template objects and include them in the response sent to Alexa. The resulting visual screens are displayed on the Echo Show alongside the spoken voice response.

2. Don’t fight the display templates

There are currently six templates available for displaying information on the Echo Show. We decided to create a single template file, meaning the markup and structure is declared only once; we then pass in the data we need to populate the template. Object destructuring and template literals, alongside Array.map and Array.reduce, make generating templates easy. We use Node’s crypto module to generate a unique token for every template we return.


Image of list – mapping basket to template listItems.

Image of basket list – reducing basket to single string.

Markup is limited to basic HTML tags, including line breaks, bold, italic, font size, inline images, and action links. Action links are really interesting, but their default blue styling means we have so far had to avoid using them.

Many of the templates that support images take an array of image objects; however, only the first image object is used. We experimented with providing more than one image, hoping to provide a fallback image or randomise the image displayed. The lack of fallback images means we need to make a request to our S3 bucket to validate that an image exists before including it in the template.

Don’t try to hack these templates to get them to do things they weren’t designed for. Each template’s capabilities have been consciously limited by Amazon to give users a consistent user experience. Spend your time gently stroking your friendly designer and telling them they’re in a new world now. Set their expectations around the layouts, markup and list objects that are available. Encourage them to read Craig’s post.

3. Take advantage of touch input alongside voice

The Echo Show offers some great new functionality to improve the user experience and make some interactions easier. Users can now make selections and trigger intents by touching the screen or by saying the list item number: “select number 2”.

It is your job to capture both touch and voice selection. When a user selects a list item, your code will receive a new request object of type Display.ElementSelected.

The token attribute you specify when creating the list is passed back in this new request object:
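A trimmed sketch of what that request looks like (the shape follows Amazon's Display interface reference; the requestId and token values are illustrative):

```javascript
// Sketch of the request body received after a touch selection. The token
// is the value we set on the list item when rendering the template.
const request = {
  type: 'Display.ElementSelected',
  requestId: 'amzn1.echo-api.request.example', // illustrative value
  token: 'Indian',
};
```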

In the above example we receive the value ‘Indian’ and can treat it in the same way we would the cuisine slot value. Our state management code knows to wait for either the cuisine intent with a slot value or a Display.ElementSelected request.

Finally, we create a new intent, utterances and a slot to handle number selection. If our new intent is triggered with a valid number, we simply match the cuisine value from the cuisine array in state using an index offset.
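A minimal sketch of that mapping (the intent and utterance wiring is omitted; the function and array names are illustrative):

```javascript
// Map a spoken number onto the cuisine list held in state. Spoken numbers
// are one-based ("select number 2"), the array is zero-based, hence the
// offset. Out-of-range numbers return null so we can re-prompt.
function cuisineForNumber(cuisines, spokenNumber) {
  const index = spokenNumber - 1;
  return index >= 0 && index < cuisines.length ? cuisines[index] : null;
}
```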

Find out more about touch and voice selection – //developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/display-interface-reference#touch-selection-events

4. Adapt your response based on device

The Echo Show provides lots of opportunities and features. In one part of our Skill we decided to change the flow and responses based on the device type.

When we offer users the opportunity to add popular dishes, it made sense for us to shorten the flow, as we can use the screen in addition to the voice response.

We use the same supportsDisplay method to change the flow of our skill.

We use the same logic when displaying the list of popular dishes: following Amazon’s recommendations, if the device supports a display we don’t read out all the dishes.
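In sketch form (the wording and function name are illustrative; the branch would be driven by the same supportsDisplay check described earlier):

```javascript
// Shorten the spoken list when a screen is available: the screen shows
// the full list, so the voice response just directs the user to it.
function dishesSpeech(dishes, hasDisplay) {
  if (hasDisplay) {
    return `Here are ${dishes.length} popular dishes. Take a look and tell me which you'd like.`;
  }
  // Voice-only devices get the full list read out.
  return `The popular dishes are ${dishes.join(', ')}. Which would you like?`;
}
```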

You can find out more about our thoughts designing user experience for the Echo Show here.

5. The back button doesn’t work

The back button caused us some problems. When a user touches the back button, the Echo Show displays the previous template. Unfortunately, no callback is sent to your code, which created a huge state management problem for us.

For example, a user can get to the checkout stage, at which point our state engine expects only two intents: Pay Now or Change Something (excluding back, cancel and stop). If an Echo Show user touched back, the template would now show our allergy prompt. The state engine does not know this change has taken place, so we could not process the user’s Yes/No intents to move on from the allergy prompt, as it thinks the user is still at the checkout stage.

Just to add to this problem, the user can actually tap back through multiple templates. Thankfully, you can disable the back button in the template response object:
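For example (the backButton field and its values follow the Display interface reference; the rest of the template is illustrative):

```javascript
// Hiding the back button on a template. 'VISIBLE' is the default;
// setting 'HIDDEN' stops users tapping back through previous templates.
const template = {
  type: 'BodyTemplate1',
  token: 'checkout',
  backButton: 'HIDDEN',
  title: 'Confirm your order',
};
```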

To find out more about the Just Eat Alexa Skill visit //www.just-eat.co.uk/alexa

For more information on developing with the Alexa Display Interface visit //developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/display-interface-reference


Top 10 Voice Design Tips for the Amazon Echo Show

When we started work on the Amazon Echo Show design, our first feeling was of recognisable comfort. We’ve been designing voice interactions for over a year and a half, but this new device brings a touch screen into the mix and, with it, a whole new set of design challenges and opportunities.


In this article I’ll take you through some of the lessons we learnt adapting our voice-first Alexa experience to voice-first-with-screen, and give you the head-start you need to make the most out of your own voice-enabled apps.

I’m Craig Pugsley, Principal Designer in Just Eat’s Product Research team. I’ve been designing touch-screen experiences for 10 years. Last year, we made a little journey into the world of voice-based apps with our original Amazon Echo skill, and it mangled my mind. Having just about got my head around that paradigm shift, Amazon came along with their new Echo Show device, with its 1024px x 600px touch screen, and everything changed again. I started getting flashes of adapting our iOS or Android apps to a landscape-aspect screen, designing nice big Fitts’s-Law-observing buttons that could be mashed from across the room. But it very soon became apparent that Amazon have been making some carefully orchestrated decisions about how experiences should be designed for their new ‘voice-first’ devices, and trying to adapt an existing visual experience just wouldn’t cut the mustard.

A Bit of Background

But I’m getting ahead of myself here. Let’s jump back to 2014, when Amazon brought to the US market the world’s first voice-enabled speaker. You could play music, manage calendars, order from the Amazon store, set kitchen timers, check the weather and more – all with your voice, naturally, as though you were having a conversation with another human. Fast forward to 2017 and you can now pick from hundreds of third-party apps to extend the speaker’s functionality. Many of the big tech names have ‘skills’ for the Echo, including Uber, Sky, The Trainline, Jamie Oliver, Philips Hue and Just Eat.

Since 2014, Amazon have brought a range of Alexa-enabled devices to market, at a multitude of wallet-friendly prices – starting with the £50 Echo Dot (like its big brother, but without the nice speaker) up to the new Echo Show at £199 (essentially a standard Echo, but with a touch screen and camera), with devices of all shapes and sizes in-between.


Why did we get into voice? Our job is to hedge the company’s bets. Just Eat’s mission is to create the world’s greatest food community, and that community is incredibly diverse – from the individual who orders their weekly treat, all the way through to repeat customers using our upwards of thirty thousand restaurants to try something new every night. To be this inclusive, and let our restaurant partners reach the widest possible audience, we need to be available on every platform, everywhere our users are. Just Eat’s core teams are hard at work on the traditional platforms of iOS, Android and Web, so we take longer-shot calculated risks with new technologies, methodologies, business models and platforms. Being a small, rapidly-iterative, user-centred team, our goal is to fail more often than we succeed – and scout a route to interesting new platforms and interactions, without needing to send the whole army off in a new direction.

So, we made a bet on voice. To be honest, it was a fairly low-risk gamble: the smartphone market has stagnated for years, becoming ripe for new innovation to make the next evolutionary step, and we’ve reached peak iPhone. We have projects looking at VR, AR, big screens, one-button devices and distributed ordering (so many, in fact, that we had to showcase them all at a swanky Shoreditch event last year).


It was only natural that voice (or, more specifically, conversational user interfaces) would be in that mix. When we were handed an Amazon Echo device under a table in a cafe in London (sometime in early 2016 – several months before the Echo’s UK release) that gave us the route to market we were looking for.

The Next Frontier

From a design perspective, conversational UIs are clearly the next interaction frontier. They’re the perfect fit for busy people, they don’t suffer from the cognitive load and friction of moving between inconsistently-designed apps’ walled gardens (something I’ve called Beautiful Room Syndrome), and they have a slew of tangential benefits that might not be obvious at first thought. For example, our data suggests that users interacting with our skill skew older. I find this fascinating! And entirely obvious, when you think about it.

There’s a whole generation of people for whom technology is alien and removed from the kinds of interactions they’re used to. Now, almost out of nowhere, the technologies of deep learning, natural language processing, neural networks, speech recognition and cloud computing have matured to enable a kind of interaction at once startlingly new and compelling, whilst being so obvious, inevitable and natural. At last, these people, who would otherwise have been forced to learn the complexities and vagaries of touchscreen interfaces to engage with the digital world, will be given access using an interface they’ve been using since childhood.

Amazon clearly recognised the new market they were unlocking. After the Amazon Echo speaker (around £150), they quickly followed up with a range of new devices and price points. Possibly most compelling is the £50 Echo Dot – a device barely larger than a paperback on its side, but packing all the same far-field microphone technology, allowing it to hear you across the room, and all the same Alexa-enabled smarts as its more expensive cousins. With the launch of the Echo Show, Amazon have addressed one of the more significant constraints of a voice-only interface: we live in an information age, and sometimes it’s just better to show what the user’s asked for, rather than describe it.

Designing For Alexa

Amazon’s design guidance on their screen-based devices is strong, and shows their obvious strategic push towards voice experiences that are augmented by simple information displays. Designing for the Show will give you all you need to translate your skill to Alexa on Fire tablets and Fire TVs, if and when Amazon enable these devices. It’s an inevitable natural progression of the voice interface, and Amazon have made some strategic design decisions to help make your skill as portable as possible.

For example, you don’t have control over all of those 1024×600 pixels. Instead, you have (at the moment) 6 customisable templates that you can insert content into. Ostensibly, there are two types: lists and blocks of text. Into that, you have four font sizes and a range of basic markup you can specify (bold, italic, etc.). You can also insert inline images (although not animated GIFs – we tried!) and ‘action buttons’, which are controls that fire the same action as if the user had said the command. Each template also contains a logo in the top right, a page title and a background image. It’s fair to say the slots you get to fill are fairly limited, but this is a deliberate and positive step for the Alexa user experience.


[For a more detailed breakdown of how to build an app for Echo Show, take a look at my colleague Andy May’s in-depth article]

One key element is the background image you can display on each screen. You can make your background work really hard, so definitely spend some time exploring concepts. Amazon’s guidance is to use a photo with a 70% black fill, but I found that too muddy and too dark for our brand. Instead, we used our brand’s signature colours for the background to denote each key stage of our flow. I like how this subliminally suggests where the user is in the flow (e.g. while you’re editing your basket, the background remains blue) and gives a sense of progression.

Top 10 Tips for Designing Voice Interactions

Be Voice First

You have to remember you’re designing an experience that is augmented with a visual display. This one’s probably the hardest to train yourself to think about – we’ve been designing UI-first visual interfaces for so long, that thinking in this voice-first way is going to feel really unnatural for a while. Start by nailing your voice-only flows first, then tactically augment that flow with information using the screen.

The 7ft Test

Amazon provide four font sizes for you to use: small, medium, large and extra large. You have to make sure crucial information is large enough to be read from 7ft away. Remember: users will almost certainly be interacting only with their voice, probably from across the room.


Be Context Aware

Your users have chosen to use your Alexa skill over your iOS app. Be mindful of that reason and context. Maybe their hands are busy making something? Maybe they’re dashing through the kitchen on their way out, and just remembered something? Maybe they’re multi-tasking? Maybe they’re an older user who is engaging with your brand for the first time? Use research to figure out how and why your users use your voice skill, and use that insight to design to that context.

Don’t Just Show What’s Said

An obvious one, but worth mentioning in this new world. Your engineers will need to build a view to be shown on the screen for each state of your flow – the Show platform will not show a ‘default’ screen automatically (which, we admit, is kinda weird), and you’ll end up in a situation where you’re showing old content while talking about something at an entirely different stage of the flow. Super confusing for the user. So, we found it useful to start by building screens that displayed roughly what was being spoken, for every state.


This will let you, the designer, make sure you’ve nailed your voice experience first, before then cherry-picking what you want to display at each state. You can use the display to show more than you’re saying, and even give additional actions to the user. Remember, like all good UX, less is most definitely more. Use the screen only when you think it would significantly add to the experience. If not, just display a shortened version of what you’re asking the user. Typically, this could be one or two verb-based words, displayed in a large font size.

Be careful with lists

In fact, be careful with how much information you’re saying, period. It’s a good design tip to chunk lists when reading them out (e.g. ‘this, this, this and this. Want to hear five more?’), but when you’ve got a screen, you can subtly adjust what you say to cue the user to look at the screen. You could, for example say ‘this, this, this and these five more’ while showing all eight on the screen.


If you’re building a VUI with multiple steps in the flow, make sure you’re consistent in what you’re showing on screen. This is one of the few tips you can carry over from the world of visual UI design. Make sure you have consistent page titles, your background images follow some kind of semantically-relevant pattern (images related to the current task, colours that change based on state, etc…) and that you refer to objects in your system (verbs, nouns) repeatedly in the same way. You can (and should) vary what you say to users – humans expect questions to be asked and information to be presented in slightly different ways each time, so it feels more natural to be asked if they want to continue using synonymous verbs (‘continue’, ‘carry on’, ‘move on’, etc…). This is more engineering and voice design work, but it will make your experience feel incredibly endearing and natural.

Be Wary of Sessions

Remember what your user was doing, and decide whether you want to pick up that flow again next time they interact. If you’re building an e-comm flow, maybe you persist the basket between sessions. If you’re getting directions, remember where the user said they wanted to go from. This advice applies equally to non-screen Alexa devices, but it’s critical on the Show due to the way skills time out if not interacted with. Users can tap the screen at any time in your flow. Alexa will stop speaking, and the user has to say “Alexa” to restart the conversation. If they don’t, your skill will remain on screen for 30 seconds before returning to the Show’s home screen. When your user interacts with your skill again, you should handle picking up that state from where they were, in whatever way makes sense for your skill. You could ask if they want to resume where they were, or you could work out how long it has been since they last interacted and decide that, after a couple of days, they probably want to start again.

Show the prompt to continue on screen

This one is super-critical on the Echo Show. Best practice suggests that you should have your prompt question (the thing that Alexa will be listening for an answer to) at the end of her speech. But, if the user starts interacting with the screen, Alexa will immediately stop talking, and the user won’t hear the question and won’t know what to say to proceed. You need to decide what’s best for your skill, but we found that putting the prompt question in the page title (and doing it consistently on every page) meant users could safely interrupt to interact with the screen, while still having a clear indication of how to proceed.


Worship your copywriter

Another tip relevant to non-screen voice interfaces too, but it really takes the nuanced skills of a professional wordsmith to target the same message to be spoken, written in the companion-app card, and displayed on the limited real estate of the Echo Show screen. Make sure you’re good friends with your team’s copywriter. Buy them beer regularly and keep them close to the development of your voice interface. Encourage them to develop personality and tone-of-voice style guides specifically for VUIs. They’re as much a core part of your design team as UX or User Researchers. Treat them well.

In terms of user testing, we weren’t able to work with actual customers to test and iterate the designs for the Echo Show, as we routinely do with all our other products, due to the commercial sensitivity around the Echo Show UK release. So, we had to make the best judgements we could, based on the analytics we had and some expert reviewing within the team 😉 That said, we did plenty of internal testing with unsuspecting reception staff and people from other teams – Nielsen’s guidance still stands: five users can surface around 80% of usability issues, and we definitely found UX improvements, even testing with internal users. Aside from the Show, we test future concepts in a wizard-of-oz style, with one of us dialing in to the test lab and pretending to be Alexa. We get a huge amount of insight without writing a single line of code using this method, but that’s a whole other blog post for another day 😉

So there we go. Armed with these words of wisdom, and your existing voice-first skill, you should be fully equipped to create the next big app for the next big platform. Remember: think differently, this market is very new, look for users outside your traditional demographics and be prepared to keep your skills updated regularly as tech and consumer adoption changes. Good luck!

Craig Pugsley
Bristol, UK – Sept 2017

To find out more about the Just Eat Alexa Skill visit: //www.just-eat.co.uk/alexa

For more information on designing for Alexa visit: //developer.amazon.com/designing-for-voice/


Reliably Testing HTTP Integrations in a .NET Application


Testing HTTP dependencies in modern web applications is a common problem, but it’s also something that can make authoring reliable tests difficult.

Today, we’re open-sourcing a library to help reduce the friction many developers have with this common requirement: JustEat.HttpClientInterception.

You can find the repository in our GitHub organisation and can find the package available to download at JustEat.HttpClientInterception on NuGet.org.

The Problem

Many modern software applications integrate with external Application Programming Interfaces (APIs) to provide solutions for problems within their domain of responsibility, whether it be delivering your night in, booking a flight, trading financial instruments, or monitoring transport infrastructure in real-time.

These APIs are very often HTTP-based, with RESTful APIs that consume and produce JSON often being the implementation of choice. Of course, integrations might not be with full-blown APIs, but just with external resources available over HTTP, such as a website exposing HTML or a file-download portal.

In .NET applications, whether these are console-based, rich GUIs, background services, or ASP.NET web apps, a common go-to way of consuming such HTTP-based services from code is using the HttpClient class.

HttpClient provides a simple API surface for GET-ing and POST-ing resources over HTTP(S) to external services in many different data formats, as well as functionality for reading and writing HTTP headers and more advanced extensibility capabilities.

It is also commonly exposed as a dependency that can be injected into other services, such as third-party dependencies for tasks such as implementing OAuth-based authentication.

Overall this makes HttpClient a common and appropriate choice for writing HTTP-based integrations for .NET applications.

An important part of software development is not just implementing a solution to a given problem, but also writing tests for your applications. A good test suite helps ensure that delivered software is of a high quality, is functionally correct, is resilient in the face of failure, and provides a safety net against regression for future work.

When your application depends on external resources though, then testing becomes a bit more involved. You don’t want to have the code under test making network calls to these external services for a myriad of reasons. They make your tests brittle and hard to maintain, require a network connection to be able to run successfully, might cost you money, and slow down your test suite, to name but a few examples.

These issues lead to approaches using things like mocks and stubs. However, HttpClient, and its lower-level counterpart HttpMessageHandler, are not simple to mock. While not impossible, their lack of an interface means tests must implement classes that derive from HttpMessageHandler in order to override protected members to drive test scenarios, and must build non-primitive types, such as HttpResponseMessage, by hand.
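To illustrate the ceremony involved, a hand-rolled stub might look something like this (a sketch, not library code; the class name and response body are made up for the example):

```csharp
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Because HttpMessageHandler exposes no interface, a test stub must derive
// from it and override the protected SendAsync member, building the
// HttpResponseMessage by hand.
public class StubHttpMessageHandler : HttpMessageHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var response = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent("{\"orderId\":123}"),
        };
        return Task.FromResult(response);
    }
}

// Usage in a test: var client = new HttpClient(new StubHttpMessageHandler());
```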

Another approach that can be used to simplify the ability to use mocks is to create your own custom IHttpClient interface and wrap your usage of HttpClient within an implementation of this interface. This creates its own problems in non-trivial integrations though, with the interface often swelling to the point of being a one-to-one representation of HttpClient itself to expose enough functionality for your use-cases.

While this mocking and wrapping is feasible, once your application does more than one or two simple interactions with an HTTP-based service, the amount of test code required to drive your test scenarios can balloon quite quickly and become a burden to maintain.

It is also an approach that only works for typical unit tests. As usage of HttpClient typically sits fairly low down in your application’s stack, it is not a viable solution for other test types, such as functional and integration tests.

A Solution

Today we’re publishing a way of solving some of these problems by releasing our JustEat.HttpClientInterception .NET library as open-source to our organisation in GitHub.com under the Apache 2.0 licence.

A compiled version of the .NET assembly is also available from JustEat.HttpClientInterception on NuGet.org that supports .NET Standard 1.3 (and later) and .NET Framework 4.6.1 (and later).

JustEat.HttpClientInterception provides a number of types that allow HTTP requests and their corresponding responses to be declared using the builder pattern, registering interceptions for HTTP requests in your code so that they bypass the network and return responses that drive your test scenarios.

Below is a simple example that shows registering an interception for an HTTP GET request to the Just Eat Public API:
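(A sketch based on the library's documented builder API; the URL and response content shown are illustrative.)

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using JustEat.HttpClientInterception;

public static class Example
{
    public static async Task InterceptTermsRequest()
    {
        // Declare the request to intercept and the response to return.
        var builder = new HttpRequestInterceptionBuilder()
            .Requests()
            .ForGet()
            .ForHttps()
            .ForHost("public.je-apis.com")
            .ForPath("terms")
            .Responds()
            .WithJsonContent(new { Id = 1, Link = "https://www.just-eat.co.uk/privacy-policy" });

        var options = new HttpClientInterceptorOptions()
            .Register(builder);

        // An HttpClient created from the options bypasses the network for
        // matching requests and returns the registered response instead.
        using (var client = options.CreateHttpClient())
        {
            string json = await client.GetStringAsync("https://public.je-apis.com/terms");
            // json now contains the registered object serialized as JSON.
        }
    }
}
```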

The library provides a strongly-typed API that supports easily setting up interceptions for arbitrary HTTP requests to any URL and for any HTTP verb, returning responses that consist of either raw bytes, strings or objects that are serialized into a response as-and-when they are required.

Fault injection is also supported by allowing arbitrary HTTP codes to be set for intercepted responses, as well as latency injection via an ability to specify a custom asynchronous call-back that is invoked before the intercepted response is made available to the code under test.

With ASP.NET Core adding Dependency Injection as a first-class feature and being easy to self-host for use within test projects, a small number of changes to your production code allows HttpClientInterceptorOptions to be injected into your application’s dependencies for use with integration tests without your application needing to take a dependency on JustEat.HttpClientInterception for itself.

With the library injected into the application, HTTP requests using HttpClient and/or HttpMessageHandler that are resolved by your IoC container of choice can be inspected and intercepted as-required before any network connections are made. You can also opt-in to behaviour that throws an exception for any un-intercepted requests, allowing you to flush out all HTTP requests made by your application from your tests.

Further examples of using the library can be found at these links:

The Benefits

We’ve used this library successfully with two internal applications we’re developing with ASP.NET Core (one an API, the other an MVC website) to really simplify our tests and provide good code coverage, using a primarily black-box test approach.

The applications’ test suites self-host the application using Kestrel, with the service registration set-up to create a chain of DelegatingHandler implementations when resolving instances of HttpClient and HttpMessageHandler. With HttpClientInterceptorOptions registered to provide instances of DelegatingHandler by the test start-up code when the application is self-hosted, this allows all HTTP calls within the self-hosted application in the tests to be intercepted to drive the tests.

The tests themselves then either initiate HTTP calls to the public surface of the self-hosted server with a vanilla HttpClient in the case of the API, or use Selenium to test the rendered pages using browser automation in the case of the website.

This approach provides many benefits, such as:

  • Simple setup for testing positive and negative code paths for HTTP responses, such as for error handling.
  • Exercises serialization and deserialization code for HTTP request and response bodies.
  • Testing behaviour in degraded scenarios, such as network latency, for handling of timeouts.
  • Removes dependencies on external services for the tests to pass, and removes the need for an active network connection for services that may only be resolvable on an internal/private network.
  • No administrative permissions required to set-up port bindings.
  • Speeds up test execution by removing IO-bound network operations.
  • Allows you to skip set-up steps to create test data for CRUD operations, such as having to create resources to test their deletion.
  • Can be integrated in a way that other delegating handlers your application may use are still exercised and tested implicitly.
  • Allows us to intercept calls to IdentityServer for our user authentication and issue valid self-signed JSON Web Tokens (JWTs) in the tests to authenticate browser calls in Selenium tests.

In the case of the ASP.NET Core API using this test approach, at the time of writing, we’ve been able to achieve over 90% statement coverage of a several thousand line application with just over 200 unit, integration and end-to-end tests. Using our TeamCity server, the build installs the .NET Core runtime, restores its dependencies from NuGet, compiles all the code and runs all the tests in just over three-and-a-half minutes.

Some Caveats

Of course, such a solution is not a silver bullet: intercepting all of your HTTP dependencies also isolates your tests from real changes in those dependencies.

If an external service changes its interface, for example by adding a new API version or deprecating the one you use, adding new fields to its responses, or starting to require HTTPS instead of HTTP, your integration tests will not detect those changes. Nor does this approach validate that your application integrates correctly with APIs that require authentication or apply rate limits.

Similarly, the black-box approach is relatively heavyweight compared to a simple unit test, so it may not be suited to testing every edge case in your code or to making low-level assertions on your responses.

Finally, your intercepted responses only cover the behaviour you’ve observed and coded for in your tests. A real external dependency may change its behaviour over time in ways that your static simulated responses will not necessarily emulate.

A mixture of unit tests, interception-based integration tests, and end-to-end tests against your real dependencies is needed to give you a robust test suite that runs quickly and gives you confidence in your changes as you develop your application over time. Shipping little and often is a key tenet of Continuous Delivery.

In Conclusion

We hope that you’ve found this blog post interesting and that you find JustEat.HttpClientInterception useful in your own test suites for simplifying things and making your applications even more awesome.

You can find the project in our organisation on GitHub and you can download the library to use in your .NET projects from the JustEat.HttpClientInterception package page on NuGet.org.

Contributions to the library are welcome – check out the contributing guide if you’d like to get involved!

If you like solving problems at scale and don’t think testing is just for QAs, why not check out open positions in Technology at Just Eat?

About the Author

This blog post was written by Martin Costello, a Senior Engineer at Just Eat, who works on consumer-facing websites and the back-end services that power them.


OWASP meetup in Just Eat

We are looking forward to tonight’s OWASP London Chapter event at the Just Eat offices in London. We will start broadcasting live at 6:30 PM UK time today.

Watch live now on YouTube



Troy Hunt – Hack Yourself First workshop at Just Eat

2016 was a year full of internet security issues, from the Yahoo breach and the TalkTalk hack to US election rigging, the massive Tesco Bank breach, and an internet-crippling DDoS attack. Internet security is no longer just the domain of techies and security experts, but the responsibility of all of us.


A close-up of Troy Hunt’s demo site with hacked videos inserted

I remember my first computer. It was a ZX Spectrum, running with 48K of RAM on a Z80 processor running at 3.5 MHz. It was on this rubber-keyed machine that I learnt about for loops, if clauses and how much fun it was getting a computer to do your bidding, even if it was only to print “HELLO” all the way down the screen.


Today, many years later, I spend most days getting Just Eat’s computers to do what I want them to do. And it’s still as satisfying as it always was.

A few weeks ago, Troy Hunt came and visited Just Eat for the second year running, to lead a fresh group of our engineers through his two-day ‘Hack Yourself First’ security workshop. And I learned something new and interesting – how to get other people’s computers to do what I wanted them to…

(For those of you who don’t know, Troy is one of the world’s best known web security experts.)


Troy Hunt discovers his test site has been hacked by Rick Astley

Twenty Just Eat engineers participated in the workshop, which opened with an overview of some of the most common security flaws out in the wild, taking us gently (and sometimes not-so-gently) through, among other things, SQL injection attacks, badly configured applications and poorly thought-out password policies. Not only did Troy show us the implications when these things happen, but he also had us get our hands dirty and hack a demonstration website he had built specifically to be hacked.

Now my interest was piqued. Of course, as a seasoned developer, I’d heard about most of the security flaws Troy was talking about, but actually being able to hack a site and see what information gets compromised was another matter. Getting someone else’s computer to do what I wanted was even more satisfying than getting my own one to behave: having my trusty laptop break a website (albeit one written purposely to have these security holes), spam its reviews, and enumerate all the registered users’ details in less than ten minutes was an eye-opener.


Troy Hunt discovers his test site has been hacked

Troy’s workshop helped all of us understand, through our own hands-on experience, that security is something we must all take responsibility for, and showed us practical ways to do so.

Troy continues to be instrumental in highlighting security issues, and showing how to prevent or combat them (through his blog, his database of data leaks and his online courses). Our thanks to Troy for spending a couple of days giving us a fairly broad yet deep dive into some of these issues.

I for one was inspired to look deeper into this fascinating part of our industry, and the feedback suggests it wasn’t just me!