
Tag : Android


Beautiful Rooms & Why Smartphones Are Too Dumb

Some time in the future, the age of the smartphone will draw to a close and experiences will become more in-tune with the way humans actually live. We need to be thinking about this new wave of interactions at a time when our customer’s attention is a premium. We need to be augmenting their worlds, not trying to replace them…

I’m Craig Pugsley – a Principal UX Designer in Product Research. Our team’s job is to bring JUST EAT’s world-leading food ordering experience to the places our consumers will be spending their future, using technology that won’t be mainstream for twelve to eighteen months.

It’s a great job – I get to scratch my tech-geek itch every day. Exploring this future-facing tech makes me realise how old the systems and platforms we’re using right now actually are. Sometimes it feels like we’ve become their slaves, contorting the way we want to get something done to match the limitations of their platforms and the narrow worldview of the experiences we’ve designed for them. I think it’s time for change. I think smartphones are dumb.

I feel like we’ve been led to believe that ever more capable cameras or better-than-the-eye-can-tell displays make our phones more useful. For the most part, this is marketing nonsense. For the last few years, major smartphone hardware has stagnated – the occasional speed bump here, the odd fingerprint sensor there… But nothing that genuinely makes our phones any smarter. It’s probably fair to say that we’ve reached peak phone hardware.


What we need is a sea-change. Something that gives us real value. Something that recognises we’re probably done with pushing hardware towards ever-more incremental improvements and focuses on something else. Now is the time to get radical with the software.

I was watching some old Steve Jobs presentation videos recently (best not to ask) and came across the seminal launch of the first iPhone. At tech presentation school, this Keynote will be shown in class 101. Apart from general ambient levels of epicness, the one thing that struck me was how Steve referred to the iPhone’s screen as being infinitely malleable to the task at hand – we’re entirely oblivious to it now, but at that time phones came with hardware keyboards. Rows of little buttons with fixed locations and fixed functions. If you shipped the phone but thought of an amazing idea six months down the line, you were screwed.

In that unveiling, Jobs sells the iPhone as the most malleable phone ever made. “Look!” (he says), “We’ve got all the room on this screen to put whatever buttons you want! Every app can show the buttons that make sense for what you want to do!”. Steve describes a world where we can essentially morph the functionality of a device purely through software.


But we’ve not been doing that. Our software platforms have stagnated like our hardware has. Arguably, Android has basic usability issues that it’s still struggling with; only recently have the worst bloatware offenders stopped totally crippling devices out of the box. iOS’s icon-based interface hasn’t changed since it came out. Sure, more stuff has been added, but we’re tinkering at the edges – just like we’ve been doing with the hardware. We need something radically different.

One of the biggest problems I find with our current mobile operating systems is that they’re ignorant of the ecosystem they live within. With our apps, we’ve created these odd little spaces, completely oblivious to each other. We force you to come out of one and go in the front door of the next. We force you to think first not about what you want to do, but about the tool you want to use to do it. We’ve created beautiful rooms.

Turning on a smartphone forces you to confront the rows and rows of shiny front doors. “Isn’t our little room lovely” (they cry!) “Look, we’ve decorated everything to look like our brand. Our tables and chairs are lovely and soft. Please come this way, take a seat and press these buttons. Behold our content! I think you’ll find you can’t get this anywhere else… Hey! Don’t leave! Come back!”

“Hello madame. It’s great to see you, come right this way. Banking, you say? You’re in safe hands with us. Please take a seat and use this little pen on a string…”

With a recent iOS update, you’re now allowed to take a piece of content from one room and push it through a little tube into the room next door.

Crippled by the paralysis of not alienating their existing customers, Android and iOS have stagnated. Interestingly, other vendors have made tantalizing movements away from this beautiful-room paradigm into something far more interesting. One of my favorite operating systems of all time, WebOS, was shipped with the first Palm Pre.


There was so much to love about both the hardware and software for this phone. It’s one of the tragedies of modern mobile computing that Palm weren’t able to make more of this platform. At the core, the operating system did one central thing really, really well – your services were integrated at a system level. Email, Facebook, Twitter, Flickr, Skype, contacts – all managed by the system in one place. This meant you could use Facebook photos in an email. Make a phone call using Skype to one of your contacts on Yahoo. You still had to think about what beautiful room you needed to go into to find the tools you needed, but now the rooms were more like department stores – clusters of functionality that essentially lived in the same space.

Microsoft took this idea even further with Windows Phone. The start screen on a Windows Phone is a thing of beauty – entirely personal to you, surfacing relevant information, aware of both context and utility. Email not as important to you as Snapchat? No worries, just make the email tile smaller and it’ll report just the number of emails you haven’t seen. Live and die by Twitter? Make the tile huge and it’ll surface messages or retweets directly in the tile itself. Ambient. Aware. Useful.



Sadly, both these operating systems have tiny market shares.

But the one concept they both share is a unification of content: a deliberate, systematic and well-executed breaking down of the beautiful-room syndrome. They didn’t, however, go quite far enough. For example, in the case of Windows Phone, if I want to contact someone I still need to think about how I’m going to do it. Going into the ‘People Hub’ shows me people (rather than the tools to contact them), but it is integrated only with the phone, SMS and email. What happens when the next trendy communication app comes along and the People Hub isn’t updated to support it? Tantalizingly close, but still no cigar.

What we need is a truly open platform. Agnostic of vendors and representing services by their fundamentally useful components. We need a way to easily swap out service providers at any time. In fact, the user shouldn’t know or care. Expose them to the things they want to do (be reminded of an event, send a picture to mum, look up a country’s flag, order tonight’s dinner) and figure out how that’s done automatically. That’s the way around it should be. That’s the way we should be thinking when designing the experiences of the future.


Consider Microsoft’s HoloLens, which was recently released to developers outside of Microsoft. We can anticipate an explosion of inventiveness in the experiences created – the HoloLens is a unique device that leapfrogs the problem of beautiful rooms by augmenting your existing real-world rooms with the virtual.


Holographic interface creators will be forced to take into account the ergonomics of your physical world and work harmoniously, contextually, thoughtfully and sparingly within it. Many digital experience designers working today would admit that they rarely consider what their users were doing just before or just after using their app. This forces users to break their flow and adapt their behavior to match the expectations of the app. As users, we’ve become pretty good at rapid task switching, but doing so takes attention and energy away from what’s really important – the real world and the problems we want to solve.

Microsoft may be one of the first to market with HoloLens, but VR and AR hardware is coming fast from the likes of HTC, Valve, Facebook and Sony. Two-dimensional interfaces are on the path to extinction, an event that can’t come quickly enough.


Calabash Page Objects

Faster development of Calabash tests

While creating the page object classes in our Calabash mobile test suites at JUST EAT, we found ourselves repeating a lot of actions when waiting for, scrolling to and interacting with elements on the screen. We abstracted these actions into a library to avoid this unnecessary duplication of code and made these actions agnostic to screen size. This library has now been published as a ruby gem called calabash-page-objects.

Why use this?

Dealing with small screens

Sometimes you have to scroll to elements on small screens but not on larger ones. We initially used if-statements keyed off a ‘small screen’ environment variable inside our test code – not good!

We wrote a method to scroll to an element if it wasn’t immediately there. This method was then included in many of the methods available to our elements: touching, inputting text, asserting presence and so on.
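The pattern can be sketched like this – this isn’t the gem’s actual source, and the Calabash query/scroll/touch calls are stubbed behind a screen object so the retry logic stands alone:

```ruby
# Sketch of the "scroll to an element if it isn't immediately there" pattern.
# The screen object stands in for Calabash's query/scroll/touch API.
class Element
  MAX_SCROLLS = 10

  def initialize(locator, screen)
    @locator = locator
    @screen  = screen
  end

  # Return true once the element is on screen, scrolling down between checks.
  def scroll_to
    MAX_SCROLLS.times do
      return true if @screen.visible?(@locator)
      @screen.scroll_down
    end
    false
  end

  # Every interaction (touching, inputting text, asserting presence...)
  # first makes sure the element is reachable.
  def touch
    raise "Element not found: #{@locator}" unless scroll_to
    @screen.touch(@locator)
  end
end
```

On a large screen the element is visible straight away and no scrolling happens; on a small screen the same test code scrolls as far as it needs to – no ‘small screen’ if-statements required.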

Multiple scrollable views

When attempting to use Calabash’s default scroll method, we noticed that sometimes it didn’t appear to scroll the view we wanted if there were multiple scrollable views on the screen.

After looking into the Calabash methods, we noticed that you could perform scroll actions on locators. We wrapped up this method in the gem too so that we could pass both the element we’re searching for and the view it belongs in into all the helper methods. This became the ‘parent’ parameter that the gem methods can optionally take.

How to use?

The calabash-page-objects gem exposes two element classes: one for iOS, the other for Android. They are used in the same way regardless of the platform under test. These element classes have methods for waiting for elements to become visible, waiting for them to disappear, touching them, and so on. These methods all take parameters in a consistent format.

More information

See the project on Github for more information, and a more detailed description of the methods and parameters. Feel free to fork it and contribute too.


Dependency Injection on Android

Back when I started writing Android apps in 2009 things were a little different. Apps were a whole new world of software development and everything was evolving, no one took apps too seriously and they were just a bit of fun.

Fast forward to the present day and the mobile app landscape has totally changed. Apps are now big business and are becoming cornerstones of companies’ strategies. At JUST EAT this is very much the case. We know our customers love using our apps, so we see it as really important that we create apps that are robust and give a fantastic user experience.

So as you may be able to tell, we’re serious about our apps, and serious developers use serious software tools and techniques. One of the techniques we use at JUST EAT is Dependency Injection, along with the frameworks that go with it.

It’s just a design pattern

Dependency Injection (DI) is a design pattern which has been around for a while, but recently it has become more commonly used in the development of Android applications, due mainly to the arrival of some rather nifty DI frameworks. DI allows developers to write code that has low coupling and which can therefore be easily tested. The more complex and longer-lived your Android software, the more important it becomes to be able to test it effectively. At JUST EAT we see DI as key to making our code configurable and therefore testable, creating a codebase we can have confidence in. Even though we have a fairly large and complicated codebase, we can make releases regularly and quickly because we have robust testing, achieved in part by using DI. With that in mind, hopefully I’ve convinced you that DI is worth a look – but first let’s have a quick recap of what DI is.

Basics of Dependency Injection

When we write code we will often find that our classes have dependencies on other classes. So class A might need to have a reference to, or dependency on, class B. To make things a little clearer let’s look at the case where we have a Car class that needs to use an Engine class.
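A minimal sketch of that arrangement (the class and method names here are illustrative):

```java
class Engine {
    String start() { return "engine started"; }
}

class PetrolEngine extends Engine { }

class Car {
    private final Engine engine;

    Car() {
        // The Car builds its own Engine, so it must know the concrete type.
        this.engine = new PetrolEngine();
    }

    String start() { return engine.start(); }
}
```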

This code works fine, but the downside is that the coupling between the Car and the Engine is high. The Car class creates the new Engine object itself and so it has to know exactly what Engine it needs, in this case a PetrolEngine. Maybe we could do a little better and reduce the coupling, so let’s look at a different way of creating this Car class.
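A sketch of the same Car with the Engine handed in from outside (again, the names are illustrative):

```java
class Engine {
    String start() { return "engine started"; }
}

class PetrolEngine extends Engine { }

class DieselEngine extends Engine { }

class Car {
    private final Engine engine;

    // Constructor injection: the caller decides which Engine the Car gets.
    Car(Engine engine) {
        this.engine = engine;
    }

    String start() { return engine.start(); }
}
```

Now `new Car(new PetrolEngine())` and `new Car(new DieselEngine())` both work without touching the Car class.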

Here we have passed the Engine into the Car via the car’s constructor method. This means that the coupling between the two objects is now lower. The car doesn’t need to know what concrete class the Engine is; it could be any type of Engine as long as it extends the original Engine class. In this example, since we have passed, or injected, the dependency via the Car class’s constructor, we have performed a type of injection known as constructor injection. We can also perform injection via methods and, with the use of DI frameworks, directly into fields. So that’s really all there is to DI: at its most basic, it is just passing dependencies into a class rather than instantiating them directly in the class.

If DI is simple, why do we need Frameworks?

Now that we understand what DI is, it’s quite straightforward to start using it in our code. We simply look at what dependencies are needed and pass them via a constructor or a method  call. This is fine for simple dependencies, but you’ll soon find that for more complex dependencies things can start getting a little messy.

Let’s return to our example of a Car that has a dependency on an Engine. Now imagine that the Engine also has its own set of dependencies. Let’s say it needs a crankshaft, pistons, a block and a head. If we follow DI principles we will pass these dependencies into the Engine class. That’s not so bad: we just need to create these objects first and pass them into the Engine object when we create it. Finally we pass the Engine to the Car.
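Wired up by hand, that looks something like the sketch below – the part classes are empty placeholders:

```java
class Crankshaft { }
class Pistons { }
class Block { }
class Head { }

class Engine {
    Engine(Crankshaft crankshaft, Pistons pistons, Block block, Head head) { }
}

class Car {
    Car(Engine engine) { }
}

class Garage {
    static Car buildCar() {
        // Leaf dependencies first, then the Engine, and finally the Car.
        Crankshaft crankshaft = new Crankshaft();
        Pistons pistons = new Pistons();
        Block block = new Block();
        Head head = new Head();
        Engine engine = new Engine(crankshaft, pistons, block, head);
        return new Car(engine);
    }
}
```

Even in this toy example the construction order matters; with hundreds of classes, this wiring code grows quickly.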

Next let’s make our example a little more complicated. If we imagine trying to create classes for each part of an engine we can see that we would soon end up with possibly hundreds of classes with a complicated tree (more accurately it is a graph) structure of dependencies.


A simplified graph of dependencies for our example. Here the leaf dependencies have to be created first, then passed to the objects that depend on them. All objects have to be created in the correct order.

To create our dependencies we would then have to carefully create all our objects in the correct order, starting with the leaf node dependencies and passing those in turn to each of their parent dependencies and so on until we reach the top most or root dependency.

Things are starting to get quite complicated. If we also used factories and builders to create our classes, we can see that we would soon have to write quite a lot of code just to create and pass our dependencies. This type of code is commonly known as boilerplate code, and generally it’s something we want to avoid writing and maintaining.

From our example we can see that implementing DI on our own can lead to a lot of boilerplate code, and the more complex your dependencies, the more boilerplate you will have to write. DI has been around for a while and so has this problem, so frameworks for using DI have been created to solve it. These frameworks make it simple to configure dependencies and in some cases generate factory and builder classes for creating objects, making it very straightforward to create complex dependencies that are easily managed.

Which DI framework should I use for Android?

Since DI has been around for a while there are unsurprisingly quite a few DI frameworks that we can choose from. In the Java world we have Spring, Guice and more recently Dagger. So which framework should we use and why?

Spring is a DI framework that’s been around for some time. Its aim was to solve the problem of declaring dependencies and instantiating objects, and its approach was to use XML to do this. The downside was that the XML was almost as verbose as writing the code by hand, and validation of it was done only at runtime. Spring introduced a number of problems while trying to solve the initial problems of using DI.

In the history of Java DI frameworks, Guice was really the next evolution after Spring. Guice got rid of the XML configuration files and did all of its configuration in Java using annotations such as @Inject and @Provides. Things were starting to look a whole lot better, but there were still some problems. Debugging and tracking down errors with applications built using Guice could be somewhat difficult. Additionally it still did runtime validation of the dependency graphs and also made heavy use of reflection, both of which are fine for server side applications, but can be quite expensive for mobile applications that are launched much more often on devices with much lower performance.

While Guice was a big step forward it really didn’t solve all the problems and its design was also not ideally suited for use on mobile devices. With that in mind a team of developers at a company called Square developed Dagger.

Dagger takes its name from our tree structure of dependencies. Remember that, more accurately, it is a graph of dependencies – and in this case the graph is actually a Directed Acyclic Graph, or DAG, hence the name Dagger. Dagger’s aim was to address some of the concerns of using Guice, especially using Guice on mobile devices.

Dagger took the approach of moving a lot of its workload to compile time rather than runtime and also tried to remove as much reflection from the process as possible, both of which really helped performance when running on mobile applications.  This was done at the slight expense of reducing the feature set offered by the likes of Guice, but for Android apps Dagger was still a step in the right direction. With Dagger we are nearly at a good solution for a DI framework that is suitable for mobile devices, but a team at Google decided that things could still be done a little better and so they created Dagger 2.

Dagger 2 does even more of its work at compile time, does a better job of removing reflection, and generates code that is even easier to debug than the original version of Dagger. In my opinion there really isn’t a better solution for DI on Android, so if you’re going to use a DI framework, I believe that Dagger 2 really is the easiest to use and debug with, while also having the best performance.

Getting started with DI on Android.

So where do you go from here? Luckily Dagger and Dagger 2 already have a strong following, so there are plenty of tutorials and presentations to help you get up to speed.

The main Dagger 2 website can be found here, and to get a good overview of Dagger 2 and its features there’s a great presentation by Jake Wharton that you can find here. It covers the basics and then goes on to discuss how Modules and Components work, while also covering the subject of scopes in Dagger 2. Finally, to get you firmly on the road to using DI in Android, here’s a list of handy tutorials:

Good overview of Dagger 2: //fernandocejas.com/2015/04/11/tasting-dagger-2-on-android

What is Dagger 2 and how to use it:


Scopes in Dagger 2 :


Using Dagger 2 with Espresso and Mockito for testing



Using Android Emulators on CI


In the JUST EAT Android team, we use a continuous integration system called TeamCity, which compiles and packages the app, installs it on our test devices, runs the tests on each one and then reports the result back to TeamCity. The team uses Git for version control of the code, and our build server is linked to activity on the repository and will automatically run jobs when certain events occur. The main problem I found myself solving with this setup was that the emulators would eventually crash if they were kept running.

The build server’s functional tests job

The team’s TeamCity build agent kicks off functional tests on a variety of devices each time there is a merge into the develop branch of the Android repository. We have a separate build job for each device to give us visibility of test successes/failures on a per-device basis. Some of the devices are real ones plugged into the build machine, while some are emulated using an Android emulator called Genymotion. We decided to test more on emulated devices than real ones due to problems with the physical devices losing wifi intermittently, running out of battery due to only being trickle-charged by the machine, and occasionally just losing connection to the machine (cables just add another point at which to fail!).

The Genymotion emulator, running VirtualBox underneath

The first problem

Unfortunately, all Android emulators are prone to crashing if left running for a while. However, Genymotion is still viewed by the Android community (and us!) as the best emulator program for Android, especially in terms of speed, so giving up Genymotion wouldn’t have been the right solution here. The emulators were left running constantly for days, reinstalling the app and running test suite after test suite, and would inevitably crash and require some manual rebooting. I decided to find a way to launch each device every time a suite was due to run on it, and close it again when the tests were complete.

Genymotion comes with its own shell as a separate program, which executes commands against the emulators, including starting devices (though at first glance I couldn’t find a command to shut them down). You can start an emulator with the ‘player’ command, passing the device as a parameter, e.g. `player --vm-name "<device>"`.

I shut the emulator down with a ruby script just using the build machine’s process list. This means I can also kill the emulator task if it has frozen:

(This last number is 1, not 0, because the act of searching with grep creates a new process, and that process contains the string I’m grepping for! Grepception.)

Genymotion uses VirtualBox behind the scenes. When specifying the device parameter, you can either use the device’s name as displayed in Genymotion, or you can use its associated VirtualBox ID. I used the IDs because they would always be constant for the installation of that emulator, while one could easily change the title of the device in Genymotion’s main window at any time.

So I needed to find out the Virtual Machine IDs of each of my Genymotion devices. I did this with VirtualBox’s own VBoxManage executable, which lives in the VirtualBox installation directory: running `VBoxManage list vms` prints each VM’s name alongside its ID.


So now I can launch the Galaxy S4 emulator with one command, e.g. `player --vm-name "<galaxy-s4-vm-id>"`.

I can now execute the launching of each emulator as a build step inside their respective build jobs.

The second problem

The Android SDK has a program called ‘Android Debug Bridge’ (adb), which is used for interaction between a system and a connected Android device. Each Android device has its own serial number, which can be viewed with the command ‘adb devices’, an optional ‘-l’ parameter also printing extra useful information such as the model. Unfortunately, the device serials for all the emulators were dynamically-generated IP addresses and would be different every time an emulator was booted up. I haven’t found a way to set static device serials on emulators, and I couldn’t set this in the VM settings either; you can alter the network configuration of a device, but not the serial ID as it appears in adb.

The output for ‘adb devices -l’ looks like this:
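Something like the following – the serials and models here are illustrative:

```
List of devices attached
192.168.56.101:5555    device product:vbox86p model:Galaxy_S4 device:vbox86p
0a3b5c7d               device usb:14100000X product:hammerhead model:Nexus_5 device:hammerhead
```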

The value on the left is the serial, and there are several pieces of information on the right of the line.

I collaborated with Beccy to write a script which runs after an emulator is launched. As the device boots, the script loops once a second for up to 60 seconds, parsing the output of an ‘adb devices -l’ command. It reads each line, splits up the chunks of information and maps them together. The script then takes a device_name parameter, sent in by TeamCity, and searches for it in the parsed results. If found, it returns the matching serial; if not, it throws a ‘Device not found’ error.
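A sketch of that script, with the parsing pulled into its own method so it can be exercised without adb (the file and method names are illustrative):

```ruby
# Map each connected device's model name to its adb serial, then poll until
# the device TeamCity asked for shows up.

# Parse `adb devices -l` output into { model => serial } pairs.
def device_map(adb_output)
  adb_output.each_line.each_with_object({}) do |line, map|
    chunks = line.split
    model = chunks.find { |c| c.start_with?('model:') }
    next unless model
    map[model.sub('model:', '')] = chunks.first # serial is the first column
  end
end

# Loop once a second for up to `timeout` seconds looking for the device.
def wait_for_serial(device_name, timeout: 60)
  timeout.times do
    serial = device_map(`adb devices -l`)[device_name]
    return serial if serial
    sleep 1
  end
  raise "Device not found: #{device_name}"
end

if ARGV.any?
  # Write the serial to a file so later build steps can read it back.
  File.write('device_serial.txt', wait_for_serial(ARGV.first))
end
```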

If the device was found, the script will have written the device serial to a file, which I can then read in a later build step and use to tell adb to launch the tests only on that device. You can target a single device by passing its serial to adb with the ‘-s’ parameter, e.g. `adb -s 192.168.56.101:5555 install app.apk`.

The third problem

Once a Genymotion emulator has opened, it appears in ‘adb devices’ while it is still booting up. This means the next build steps would fail to run the tests because the device wasn’t ready to receive commands like installing apps.

I got round this by using adb again. With it, you can access the device’s own shell and therefore get extra information from, and send more commands to, the device. I used the following command to check whether the device had finished its boot cycle: `adb -s <serial> shell getprop init.svc.bootanim`.

This returns ‘running’ if the device is still booting and ‘stopped’ if it has booted. Now all I had to do was write a script that ran this command in a loop for up to 60 seconds and wait until the output of this shell command equalled ‘stopped’:
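A sketch of that loop – the adb call is passed in as a block so the polling logic stands alone:

```ruby
# Poll the device's boot animation state until it reports 'stopped'.
# Returns true if the device finished booting within the timeout.
def wait_until_booted(timeout: 60)
  timeout.times do
    return true if yield.strip == 'stopped'
    sleep 1
  end
  false
end

if ARGV.any?
  serial = ARGV.first
  booted = wait_until_booted do
    `adb -s #{serial} shell getprop init.svc.bootanim`
  end
  exit(booted ? 0 : 1) # 0 = booted, 1 = still booting after 60 seconds
end
```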

If the bootanim query returned ‘stopped’ within 60 seconds, the script would exit with success code 0; otherwise, once the 60 seconds were up, it would exit with failure code 1.

The fourth problem

When you start a Genymotion emulator using the ‘player’ command, the terminal you executed the command in stays stuck running it until the emulator is closed again. This was a problem for each of our build jobs, which run in one shell from start to finish. For this reason, I put the emulator launch command (the one that uses ‘player’) in a ‘.sh’ script for each device, and executed them in the specific job’s terminal with the ‘open’ command. This spawned a new terminal, freeing the main one up immediately.

However, this meant that when the tests had run and the job had finished, this left a tower of dead terminals on the screen.


You can change Terminal’s preferences to close the window once a command is complete. But don’t worry – this only affects the terminals spawned with the ‘open’ command; it doesn’t close your terminal every time you run something normally.


Thanks for reading! =D

-Andy Barnett, Test Automation Engineer