19 May 2016
Comments 0

Beautiful Rooms & Why Smartphones Are Too Dumb


Some time in the future, the age of the smartphone will draw to a close and experiences will become more in tune with the way humans actually live. We need to be thinking about this new wave of interactions at a time when our customers’ attention is at a premium. We need to be augmenting their worlds, not trying to replace them…

I’m Craig Pugsley – a Principal UX Designer in Product Research. Our team’s job is to bring JUST EAT’s world-leading food ordering experience to the places our consumers will be spending their future, using technology that won’t be mainstream for twelve to eighteen months.

It’s a great job – I get to scratch my tech-geek itch every day. Exploring this future-facing tech makes me realise how old the systems and platforms we’re using right now actually are. Sometimes it feels like we’ve become their slaves, contorting the way we want to get something done to match the limitations of their platforms and the narrow worldview of the experiences we’ve designed for them. I think it’s time for change. I think smartphones are dumb…

I feel like we’ve been led to believe that ever more capable cameras or better-than-the-eye-can-tell displays make our phones more useful. For the most part, this is marketing nonsense. For the last few years, major smartphone hardware has stagnated – the occasional speed bump here, the odd fingerprint sensor there… but nothing that genuinely makes our phones any smarter. It’s probably fair to say that we’ve reached peak phone hardware.


What we need is a sea-change. Something that gives us real value. Something that recognises we’re probably done with pushing hardware towards ever-more incremental improvements and focuses on something else. Now is the time to get radical with the software.

I was watching some old Steve Jobs presentation videos recently (best not to ask) and came across the seminal launch of the first iPhone. At tech presentation school, this Keynote will be shown in class 101. Apart from general ambient levels of epicness, the one thing that struck me was how Steve referred to the iPhone’s screen as being infinitely malleable to the need – we’re entirely oblivious to it now, but at that time phones came with hardware keyboards. Rows of little buttons with fixed locations and fixed functions. If you shipped the phone but thought of an amazing idea six months down the line, you were screwed.

In that same unveiling, Jobs sells the iPhone as the most malleable phone ever made. “Look!” (he says), “We’ve got all the room on this screen to put whatever buttons you want! Every app can show the buttons that make sense for what you want to do!”. Steve describes a world where we can essentially morph the functionality of a device purely through software.


But we’ve not been doing that. Our software platforms have stagnated just like our hardware has. Arguably, Android has basic usability issues that it’s still struggling with; only recently have the worst bloatware offenders stopped totally crippling devices out of the box. iOS’s icon-based interface hasn’t changed since it came out. Sure, more stuff has been added, but we’re tinkering at the edges – just like we’ve been doing with the hardware. We need something radically different.

One of the biggest problems I find with our current mobile operating systems is that they’re ignorant of the ecosystem they live within. With our apps, we’ve created these odd little spaces, completely oblivious to each other. We force you to come out of one and go in the front door of the next. We force you to think first not about what you want to do, but about the tool you want to use to do it. We’ve created beautiful rooms.

Turning on a smartphone forces you to confront the rows and rows of shiny front doors. “Isn’t our little room lovely” (they cry!) “Look, we’ve decorated everything to look like our brand. Our tables and chairs are lovely and soft. Please come this way, take a seat and press these buttons. Behold our content! I think you’ll find you can’t get this anywhere else… Hey! Don’t leave! Come back!”

“Hello madame. It’s great to see you, come right this way. Banking, you say? You’re in safe hands with us. Please take a seat and use this little pen on a string…”

With a recent iOS update, you’re now allowed to take a piece of content from one room and push it through a little tube into the room next door.

Paralysed by the fear of alienating their existing customers, Android and iOS have stagnated. Interestingly, other vendors have made tantalizing movements away from this beautiful-room paradigm into something far more interesting. One of my favorite operating systems of all time, WebOS, shipped with the first Palm Pre.


There was so much to love about both the hardware and software for this phone. It’s one of the tragedies of modern mobile computing that Palm weren’t able to make more of this platform. At the core, the operating system did one central thing really, really well – your services were integrated at a system level. Email, Facebook, Twitter, Flickr, Skype, contacts – all managed by the system in one place. This meant you could use Facebook photos in an email. Make a phone call using Skype to one of your contacts on Yahoo. You still had to think about what beautiful room you needed to go into to find the tools you needed, but now the rooms were more like department stores – clusters of functionality that essentially lived in the same space.

Microsoft took this idea even further with Windows Phone. The start screen on a Windows Phone is a thing of beauty – entirely personal to you, surfacing relevant information, aware of both context and utility. Email not as important to you as Snapchat? No worries, just make the email tile smaller and it’ll report just the number of emails you haven’t seen. Live and die by Twitter? Make the tile huge and it’ll surface messages or retweets directly in the tile itself. Ambient. Aware. Useful.



Sadly, both these operating systems have tiny market shares.

But the one concept they both share is a unification of content. A deliberate, systematic and well executed breaking down of the beautiful room syndrome. They didn’t, however, go quite far enough. For example, in the case of Windows Phone, if I want to contact someone I still need to think about how I’m going to do it. Going into the ‘People Hub’ shows me people (rather than the tools to contact them), but is integrated only with the phone, SMS and email. What happens when the next trendy new communication app comes along and the People Hub isn’t updated to support the new app? Tantalizingly close, but still no cigar.

What we need is a truly open platform. Agnostic of vendors and representing services by their fundamentally useful components. We need a way to easily swap out service providers at any time. In fact, the user shouldn’t know or care. Expose them to the things they want to do (be reminded of an event, send a picture to mum, look up a country’s flag, order tonight’s dinner) and figure out how that’s done automatically. That’s the way around it should be. That’s the way we should be thinking when designing the experiences of the future.


Consider Microsoft’s Hololens, which was recently released to developers outside of Microsoft. We can anticipate an explosion of inventiveness in the experiences created – the Hololens is a unique device that leapfrogs the problem of beautiful rooms by augmenting your existing, real-world rooms with the virtual.


Holographic interface creators will be forced to take into account the ergonomics of your physical world and work harmoniously, contextually, thoughtfully and sparingly within it. Many digital experience designers working today should admit that they rarely take into account what their users were doing just before or just after using their app. This forces users to break their flow and adapt their behavior to match the expectations of the app. As users, we’ve become pretty good at rapid task switching, but doing so takes attention and energy away from what’s really important – the real world and the problems we want to solve.

Microsoft may be one of the first to market with Hololens, but VR and AR hardware is coming fast from the likes of HTC, Steam, Facebook and Sony. Two-dimensional interfaces are on the path to extinction, a singular event that can’t come quickly enough.

4 May 2016
Comments 0

Solving Italian address input with Google Places


Postcodes in Italy

JUST EAT’s UK website uses postcodes to determine whether or not a takeaway restaurant delivers to an address. This is the case for a lot of our international websites and it often proves to be an effective way of accurately specifying a location. When JUST EAT started operating in Italy, we observed that postcodes are not as popular with our customers as a way of defining a delivery address. One possible reason for this is that postcodes in Italy are not as accurate as we are used to in the UK, even in built-up areas.


This was an issue because our systems use postcodes. There were already projects in place to move from postcodes to latitude and longitude, which would allow us to define our own custom delivery areas. This would remove our dependency on postcodes but, from a customer’s point of view, would not help them define their location any more easily. We needed a user interface that would allow the customer to enter their delivery address in a way that suited them.

What did we try?

We produced three experiments that were A/B tested in parallel. The experiments were made available to a limited percentage of customers over the course of a month to ensure the results were statistically significant. The three experiments are described below.

Postcodes Anywhere

Postcodes Anywhere is now known as PCAPredict. They provide an address lookup service called Capture+. This service autocompletes the user’s input and forces them to make a selection from the address options given. A trial version was implemented using PCAPredict’s prototyping tool. This allowed us to insert an instance of the Capture+ interface that would capture the address and pass the appropriate data to our server upon search. This was the easiest of the three experiments to implement.

Google Places

Google Places is a Google service for retrieving location data for residential areas, business areas and tourist attractions. An autocomplete service is provided, as with PCAPredict, with a slight difference: Google Places suggests locations at different levels of accuracy instead of forcing residential address-level accuracy. The autocomplete widget provided by Google attaches to an existing HTML input element and reacts to selection events. When experimenting with Google Places, we needed to specify two options: ‘componentRestrictions’ and ‘types’.


The ‘componentRestrictions’ option allows us to filter by country using a two-character, ISO 3166-1 Alpha-2 compatible country code, so this was set to ‘IT’. The ‘address’ type instructs the Places service to return only geocoding results with a precise address. Google’s data is always improving, but it does not always suggest street-level accuracy. This was an issue we needed to rectify in order to use the widget with our existing system, and it is discussed in a later section. Once the widget was configured, the data needed to be processed and passed to our servers, in much the same way as with the PCAPredict Capture+ tool.
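As a rough sketch of the setup just described (the element wiring, callback and field names here are ours, not from the production code), the widget configuration and the processing of a selected place might look like this:

```javascript
// Sketch of the Google Places Autocomplete configuration described above.
// Assumes the Google Maps JavaScript API (with the Places library) is loaded.
function createAddressAutocomplete(inputElement, onAddressSelected) {
  const autocomplete = new google.maps.places.Autocomplete(inputElement, {
    componentRestrictions: { country: 'IT' }, // ISO 3166-1 Alpha-2 country code
    types: ['address']                        // precise addresses only
  });
  autocomplete.addListener('place_changed', function () {
    onAddressSelected(extractAddress(autocomplete.getPlace()));
  });
  return autocomplete;
}

// Pull out the fields our servers need from a Places result.
function extractAddress(place) {
  const components = {};
  (place.address_components || []).forEach(function (component) {
    component.types.forEach(function (type) {
      components[type] = component.long_name;
    });
  });
  return {
    street: components['route'] || null,
    streetNumber: components['street_number'] || null,
    city: components['locality'] || null,
    postcode: components['postal_code'] || null
  };
}
```

Keeping the extraction in its own function means the address parsing can be exercised without a browser or a live Places session.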

Google Geocoder

Google Geocoder allows a string description of an address to be submitted to the service, with the closest matching location data sent back to the client. This service is not designed for autocomplete suggestions, but during initial investigations its suggestions were more consistent with the behaviour expected from Google Maps searches than those given by Google Places. We also found that the quality of the data seemed more mature than that of Google Places, although over the course of development this difference became less apparent. This was why we decided it was worth testing a solution based on Google Geocoder in addition to Google Places. We constructed a widget that was similar to the Google Places widget but served suggestions from the Google Geocoder.

What was the outcome of our A/B testing?

The experiments were carried out against the existing home page search, which was made up of four input boxes that allowed the user to specify the street, street number, city and postcode. This was used as the control against which the experiments were compared. The experiments were run for approximately four weeks, with 10% of Italian users being sent to them. The metric we used to determine success was conversion, defined as the percentage of visitors to the website who complete an order with JUST EAT.

Postcodes Anywhere

The PCAPredict Capture+ experiment didn’t see an increase in conversion during A/B testing.

Google Geocoder

The Google Geocoder showed a small improvement, although the interface was awkward and not designed for this purpose. The overall increase in conversion was minor.

Google Places

The Google Places experiment showed a substantial increase in conversion. It was the stand-out winner from our testing, but there were still areas we thought we could improve. The suggestions could not be filtered to only those that provided the accuracy we required, which meant users would have to keep trying options until one met the criteria for a successful search.

How did we resolve the issues?

Based on the A/B testing results, we made the decision to develop the Google Places experiment further. The accuracy of suggestions was still an issue, and testing revealed that the problem was mostly in getting from street-level to street-number accuracy. The solution we decided upon was to ask the user for the street number explicitly when this situation occurred, in the form of an additional input that would be revealed to prompt them for this information. To achieve this, we took the interface that had been built for the Google Geocoder and replaced the Google Geocoder service with the Google Places autocomplete service. As we had complete control of the logic within the widget, it was trivial to detect the missing-data events and react to them by displaying the additional input.
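A minimal sketch of that missing-data check (the function names and element id are hypothetical, not from the production widget):

```javascript
// A suggestion is only precise enough for our search once it has both a
// street ('route') component and a street number.
function missingStreetNumber(place) {
  const types = [];
  (place.address_components || []).forEach(function (component) {
    component.types.forEach(function (type) { types.push(type); });
  });
  return types.indexOf('route') !== -1 &&
         types.indexOf('street_number') === -1;
}

// In the widget, react to a selection by revealing the extra input when
// the street number is missing (element id is hypothetical):
function onPlaceSelected(place) {
  const extraInput = document.getElementById('street-number-input');
  extraInput.style.display = missingStreetNumber(place) ? 'block' : 'none';
}
```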


The second issue encountered with Google Places was that sometimes the addresses users were requesting could not be found. This was not an issue we encountered with the Google Geocoder service. For this reason we built a fallback into the Google Places custom widget that would geocode the given address if the street number was provided but the address was not found.
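A sketch of that fallback logic, with the geocoder passed in so the behaviour can be exercised without the live Google service (the function and parameter names are our own):

```javascript
// Fall back to geocoding a free-text address when Places could not resolve
// it but the user has supplied a street number. In production, `geocoder`
// is a google.maps.Geocoder instance.
function geocodeFallback(geocoder, street, streetNumber, city, callback) {
  geocoder.geocode(
    {
      address: street + ' ' + streetNumber + ', ' + city,
      componentRestrictions: { country: 'IT' }
    },
    function (results, status) {
      if (status === 'OK' && results && results.length > 0) {
        // Hand back the lat/lng that our delivery-area search needs.
        callback(null, results[0].geometry.location);
      } else {
        callback(new Error('Geocoding failed: ' + status));
      }
    }
  );
}
```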

What was the final outcome?

The final outcome of implementing the custom Google Places search on the Italian homepage was a significant increase in conversion. This implementation is now being used for 100% of users in Italy.

What next?

There are still many ways we can improve on this feature. Google Places allows us to alter the type of suggestions made to the user. It also returns more data than we currently make use of, and allows us to upload our own data that can be returned with selected suggestions. Google Places also integrates seamlessly with Google Maps, which opens up more possibilities for specifying location and returning location-based results. For these reasons, JUST EAT will be continuing to experiment with Google Places during the second quarter of 2016, with an aim to roll this feature out internationally.

Stay tuned for more updates.

29 April 2016
Comments 3

Tech-talk: David Clarke of Wonga on Scaling Agile Planning


Yesterday, David Clarke of Wonga came and talked to us about how they plan the work that they take on in engineering – regularly, across a unit of 150 people, in a company of 600.


David Clarke, Head of Tech Delivery.


Planning @ Wonga

Every six weeks, all our Scrum teams (approximately eight), together with Tech Ops and Commercial people, go off site to plan. We have done this at Wonga for a long time. What we do and how we do it has evolved in step with Wonga’s evolution as an organisation, from early start-up days to being a formally regulated body. To people coming along for the first time it can seem like organised chaos.

  • Why we do it (and what happens when we don’t)
  • Who is involved (and what happens when they are not)
  • How we prepare for planning
  • How we do it (and lots of ways we don’t do it anymore, epic failures included)
  • What metrics we collect
  • Biscuit awards (and other ways to keep it fun)

The talk will be based on real planning event artefacts, data and plenty of photos from the events.


4 April 2016
Comments 0

Customising Salesforce Marketing Cloud


Personalised marketing has evolved quickly in recent years and customising digital communications based on customer behaviour has become commonplace. With the widespread consumer adoption of mobile devices and social media, the number of channels across which marketing operations must be carried out has only increased – meaning data plays a crucial role in creating tailored, cross-channel customer interactions. Fortunately, a number of marketing management CRM solutions are available to greatly streamline this process. At JUST EAT, we have chosen Salesforce Marketing Cloud.

Marketing Cloud offers a comprehensive suite of services, including data and analytics, email editing, management of social media advertising, interactive website creation, and cross-channel marketing automation. We currently only use a subset of Marketing Cloud’s functionality but it is already proving to be a powerful enabling platform for the marketing team – helping them to build automated campaigns without the need to write code. However, we have found that there are many business requirements that still cannot be fulfilled by the vanilla Marketing Cloud product. Fortunately, we can customise the experience and provide the marketing team with even more automation tools.

A (very) brief Marketing Cloud 101

We needed to make unique voucher codes available to campaigns from our main e-commerce platform. A worked example will best illustrate the problem we encountered when introducing this custom behaviour.

We have built a Windows service that sits inside our Virtual Private Cloud (VPC) in AWS and subscribes to a number of important messages published to our internal message bus. In turn, these messages are mapped to a structure that enables each one to be sent in a POST request to Marketing Cloud’s REST API – the information will be written to a new row in a Marketing Cloud Data Extension. Just think of Data Extensions as tables in a relational database. The following image shows a simple Data Extension with some test customer entries pushed to Marketing Cloud by our service.
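To illustrate the shape of that integration (in JavaScript rather than the Windows service’s own code, and with a hypothetical message and Data Extension schema), the mapping from an internal message to the row structure expected by Marketing Cloud might look like:

```javascript
// Map an internal bus message to the rowset structure POSTed to
// Marketing Cloud's REST API. CustomerId acts as the Data Extension's
// primary key; the field names here are hypothetical.
function toDataExtensionRowset(message) {
  return [{
    keys: { CustomerId: message.customerId },
    values: {
      Email: message.email,
      FirstName: message.firstName,
      LastOrder: message.lastOrderDate
    }
  }];
}

// The rowset is then sent in a POST request to the Data Extension's
// data-events endpoint, e.g. (path shown for illustration):
// POST /hub/v1/dataevents/key:{dataExtensionKey}/rowset
```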


Marketing Cloud uses a contact model to provide a single view of a customer’s information across Data Extensions, so let’s assume that this Data Extension is correctly set up with our contact model; otherwise we wouldn’t be able to use this data in our Marketing Cloud campaigns.

The first step to building a campaign will be to build a simple automation in Marketing Cloud’s Automation Studio. Automations can be used for a number of purposes but we’ve found them particularly useful for running a series of activities to firstly query the data in our Data Extensions, in order to establish an audience for a campaign based on some criteria, and then trigger the running of the campaign for this audience. For example, we may want to run a campaign that sends out vouchers to an audience which only contains customers who haven’t recently placed an order. The image shows a simple automation with just two activities – a query and a trigger.


The query activity will write the audience to another Data Extension which we define and the trigger will fire an event which will run our campaign for any contacts written to this Data Extension.

The campaign will be defined as a customer journey in Marketing Cloud. We can use Marketing Cloud’s Journey Builder to drag and drop the different activities that make up a customer journey from a palette onto a canvas. Example activities include sending an email or SMS, updating rows in Data Extensions, waiting for a period of time, or making decisions to send our contacts on different paths through the journey. We can define a simple journey that just sends an email. Note that Journey Builder also requires a wait activity before a contact exits a journey.


Our entry event shows the event data source as our Data Extension that contains our audience. Each contact in this Data Extension will pass through this journey and should eventually receive an email based on a template that we define for the email activity.

Now we want to add an additional activity before sending the email that requests a voucher from our internal Voucher API to include in the email. This is the exact problem that we encountered in our recent work and, by default, there’s no way to do that from a customer journey. However, we can create a custom activity that will be available from the activity palette and allow us to do just that.

Building custom behaviour

A custom activity is simply a web application that is hosted on a web server. The structure that these applications must follow in order to be used as Journey Builder activities is well defined but there is still a great deal of flexibility with regards to the technology chosen to build the application. All of the basic examples provided by Salesforce are built using the Express web framework for Node.js so we decided to do the same as it seemed the path of least resistance. However, knowing what we know now, we could have just as easily built it using other web frameworks or technologies.

When a contact reaches our custom activity in a customer journey we want the following chain of events to occur…

  1. A voucher request is made from the journey to our web application back-end and the contact moves to a wait activity in the journey.
  2. The web application makes a request to our internal Voucher API and receives a voucher code in the response.
  3. The web application sends the voucher code to the Marketing Cloud REST API so that it can be written to a column in our campaign audience Data Extension against the contact record.
  4. The contact moves to the email activity in the journey where some server-side JavaScript inside the email template fetches the voucher code for that contact from the Data Extension and writes it to the email.

We need to write the voucher codes to a Data Extension in order to make them accessible to a Marketing Cloud email template.


The back-end for the web application is a fairly standard Express REST API that includes a number of endpoints required by Journey Builder. During a running journey, the voucher request is sent to an endpoint in order to execute the functionality required to complete steps two and three, listed previously. There are a few other endpoints that are only required by Journey Builder when the journey is being edited.
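A sketch of what the execute endpoint’s handler might look like, with the voucher and Marketing Cloud clients injected so the handler stays testable; the route path and argument shapes here are assumptions, not the production code:

```javascript
// Handler factory for Journey Builder's execute call: request a voucher
// (step 2), then write it to the audience Data Extension (step 3).
function makeExecuteHandler(requestVoucher, writeVoucherToDataExtension) {
  return function (req, res) {
    // Journey Builder posts activity arguments in the request body.
    const args = req.body.inArguments && req.body.inArguments[0];
    requestVoucher(args)
      .then(function (voucherCode) {
        return writeVoucherToDataExtension(args.contactKey, voucherCode);
      })
      .then(function () { res.status(200).json({ success: true }); })
      .catch(function (err) { res.status(500).json({ error: err.message }); });
  };
}

// In the Express app this would be wired as something like:
// app.post('/journeybuilder/execute', makeExecuteHandler(voucherClient, mcClient));
```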

During the editing process, both standard and custom activities in Journey Builder display a configuration wizard in an HTML iframe, used to configure the activity after it is placed on the canvas. For example, for our voucher custom activity it makes sense for us to be able to define the voucher amount, validity period and other related parameters for that particular campaign. We also need to choose the Data Extension and column to which the voucher codes will be written. This wizard is provided by the front-end code of our web application.


Salesforce even provides FuelUX, a front-end framework which extends Bootstrap and provides some additional JavaScript controls. This enabled us to match the look and feel of the Marketing Cloud UI and include a picker for choosing the Data Extension and column for the voucher codes.

There are a couple of requirements for the front-end code to function correctly in Journey Builder. Firstly, Postmonger must be used in our code. It is a lightweight JavaScript utility for cross-domain messaging and is required as a mediator between our configuration wizard and Journey Builder. Secondly, the root of the front-end code must include a configuration file that contains, amongst other things, the URLs for our back-end endpoints, the inputs and outputs of the custom activity, and a unique application key.
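For illustration, the payload the wizard hands back to Journey Builder via Postmonger might be assembled like this (field names such as voucherAmount are hypothetical, not the production schema):

```javascript
// Build the activity payload the configuration wizard sends back to
// Journey Builder once the user has filled in the form.
function buildActivityPayload(activity, settings) {
  activity.arguments = activity.arguments || {};
  activity.arguments.execute = activity.arguments.execute || {};
  activity.arguments.execute.inArguments = [{
    contactKey: '{{Contact.Key}}', // resolved by Journey Builder at run time
    voucherAmount: settings.voucherAmount,
    validityDays: settings.validityDays,
    targetDataExtension: settings.dataExtension,
    targetColumn: settings.column
  }];
  activity.metaData = activity.metaData || {};
  activity.metaData.isConfigured = true;
  return activity;
}

// In the browser, Postmonger mediates between the iframe and Journey
// Builder, along the lines of:
// var connection = new Postmonger.Session();
// connection.on('initActivity', function (activity) { /* populate the form */ });
// connection.trigger('updateActivity', buildActivityPayload(activity, readForm()));
```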

We define the unique application key when we create a new application in the Salesforce App Center as an Application Extension and add our custom activity to this. We also need to provide the endpoint of our custom activity at this point. This step is required for connecting applications to the Marketing Cloud platform and will provide us with a generated Client ID and Client Secret to authenticate with Marketing Cloud and allow our custom activity to interact with the Marketing Cloud API.

Salesforce recommends using Heroku for hosting custom activities. Heroku is a great option for this type of lightweight Node.js application, but it wasn’t ideal for us as we needed to interact with our Voucher API, which sits inside our VPC. As a result, our custom activity is also hosted inside our VPC, so communication with any internal resources is not an issue. This means we only have to manage the security between our custom activity and Marketing Cloud, without publicly exposing the endpoint to our Voucher API. Hosting within the VPC also allows us to take advantage of our internal stacks set up for logging and recording stats.

Following these steps we are now able to drag and drop our custom activity from the activity palette onto the canvas for use in the customer journey.



Not only did we deliver a critical component that will be used across a number of our marketing campaigns, but we also opened up the possibilities of what can be done within the confines of a customer journey. Marketing Cloud offers some great automation tools for marketers but pairing it with the flexibility of our own platform in AWS should open up some interesting opportunities for coordination between the two. We will surely be exploring what other custom activities we can add to the marketing team’s toolset in order to further enable them to react quickly without the need to make amendments to our codebase.

28 March 2016
Comments 0

The minimal form (part one – the explanation)


Before we start, you’ll notice I’ve named this post part one – I plan to deliver a series over the coming weeks, primarily so you can see my progression and how my learning deepens as I dig into the topic of the minimal form. At this point you’re probably wondering what the minimal form is. So without further ado…

The minimal form (or single input field) is a new way to display and interact with form fields. It is essentially a single form field that switches and changes to suit the desired input once a user submits their information. Its primary purpose is to simplify the form-filling process, whilst keeping it engaging and less tiresome.
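To make the idea concrete, here’s a minimal sketch (not from any production implementation – the field definitions are hypothetical) of a single input that morphs through a sequence of fields as answers are submitted:

```javascript
// A single visible input whose prompt switches to the next field each
// time the user submits an answer.
function createMinimalForm(fields) {
  let index = 0;
  const answers = {};
  return {
    // The field the single input is currently showing (null when done).
    current: function () {
      return index < fields.length ? fields[index] : null;
    },
    // Record an answer and morph the input into the next field.
    submit: function (value) {
      if (index >= fields.length) throw new Error('Form complete');
      answers[fields[index].name] = value;
      index += 1;
      return this.current();
    },
    isComplete: function () { return index >= fields.length; },
    values: function () { return answers; }
  };
}
```

In a real page, `current().label` would drive the input’s placeholder or prompt text, with the same single element re-used for every field.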

Here is a great example I’ve grabbed from the web, which outlines the concept in full.

These types of forms aren’t limited to capturing basic information like name, address or telephone number. They can be implemented to enhance the form filling process for far more complex information, such as credit card or payment information.

Furthermore, there are a few companies who are currently taking full-advantage of the minimal form to create engaging and easy form filling questionnaires, such as Typeform.

What’s the benefit for your users?

Sure, the majority of these forms dotted around the web are simply seen as nice-to-haves or delightful elements. However, look beyond first glances and there are certainly some real wins from a user experience perspective when it comes to the minimal form.

For example, the most tangible benefit of a minimal form would be for mobile devices. As you can see, I’ve included a screen that outlines the skeleton of a common mobile web form.

You can see the canvas space that designers have to play with, when they’re specifically considering form fields in the design process.

In 2016 it is considered common practice to simply collapse information to fit onto mobile. However, when you do this, spatial problems arise, and designers and engineers can minimise the resulting clutter in multiple ways – for example, by stacking content below the fold, or hiding it in tabs, accordions or drawers. Ultimately, these solutions are functional in most cases, which is great for mobile users across the web.

However, at JUST EAT, the UX team don’t aspire to build experiences which are merely functional. As a collective team, we aim to build, explore and innovate with these kinds of experiences to help empower our users to love their takeaway experience.

Ultimately, this means it’s our job to ensure we’re making experiences which are functional, reliable, usable, as well as pleasurable.

Subsequently, we’ve identified that the minimal form is certainly an interesting area which could push the usable and pleasurable aspects of our product, whilst retaining that functional and reliable foundation.

There’s no denying that minimal form (for mobile, at least) has multiple benefits from a user perspective…

  • It allows users to concentrate their effort on one form field at a time.
  • Ensures the user is not overwhelmed with the sheer amount of information he/she may or may not be required to submit.
  • No/minimal scrolling required.
  • Less tapping/frustration.
  • We assume it will be a quicker experience for our users – although it would be interesting to see via user testing whether this is the case. Perhaps it’s actually a longer process, but is perceived to feel quicker?


An assumption might be that, if users are only focusing on one form field at a time, the information they submit would be more accurate. We’ll dive deeper into the kind of metrics a minimal form could improve in part two…

Some disadvantages…

There are certainly some disadvantages to the minimal form concept. For example, how would you display an error state? Would a user have to go back if their password confirmation was wrong? Surely that would go against the progressive motion of the minimal form?

Furthermore, it’s great not having to see all of the forms you have to fill, but how would you check to ensure all of the information was correct before submitting?

Also, what place does the minimal form have on desktop? What tangible benefits does implementing a minimal form have for desktop users? And finally, how many people using this form for the first time will know how it works?

I’m confident that these are problems that can be solved when implemented in a user flow with a bit of intuitive design thinking.

Ok, so where would you start?

Well, you’ll have to stay tuned for part two where we look at implementing something similar to the minimal form as we continue to enhance our product, and help implement a more usable and pleasurable experience for all things mobile.
