


Top 5 Tips for Building Just Eat on Amazon’s Echo Show

Hi, I’m Andy May – Senior Engineer in Just Eat’s Product Research team. I’m going to take you through some top tips for porting your existing Alexa voice-only skill to Amazon’s new Echo Show device, pointing out some of the main challenges we encountered and solved.


Since we started work on the Just Eat Alexa skill back in 2016, we’ve seen the adoption of voice interfaces explode. Amazon’s relentless release schedule for Alexa-based devices has fuelled this, but the improvements in the foundational tech (AI, deep learning, speech models, cloud computing), coupled with the vibrant third-party skill community, look set to establish Alexa as arguably the leader in voice apps.

From an engineering perspective, adapting our existing code base to support the new Echo Show was incredibly easy. But, as with any new platform, simply porting an existing experience across doesn’t do the capabilities of the new platform justice. I worked incredibly closely with my partner-in-crime, Principal Designer Craig Pugsley, to take advantage of what becomes possible with a screen and touch input. In fact, Craig’s written some top tips about exactly that just over here.

To add a Show screen to your voice response you simply extend the JSON response to include markup that describes the template you want to render on the device. The new template object (Display.RenderTemplate) is added to a directives array in the response.
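As a rough illustration (the values here are placeholders, not our production payload), the response body ends up shaped something like this:

    {
      "version": "1.0",
      "response": {
        "outputSpeech": {
          "type": "PlainText",
          "text": "Here are some popular cuisines"
        },
        "directives": [
          {
            "type": "Display.RenderTemplate",
            "template": {
              "type": "ListTemplate1",
              "token": "cuisine-list",
              "title": "Popular cuisines",
              "listItems": [
                {
                  "token": "Indian",
                  "textContent": {
                    "primaryText": { "type": "PlainText", "text": "Indian" }
                  }
                }
              ]
            }
          }
        ],
        "shouldEndSession": false
      }
    }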

For more details on the Alexa response object visit //developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-interface-reference#response-body-syntax

Sounds simple, doesn’t it? Well, it’s not rocket science, but it does have a few significant challenges that I wished someone had told me about before I started on this adventure. Here are five tips to help you successfully port your voice skill to voice-and-screen.

1. You need to handle device-targeting logic

The first and main gotcha we found was that you cannot send a response including a template to a standard Echo or Dot device. We incorrectly assumed a device that does not support screens would simply ignore the additional objects in the response.

Our own Conversation Class, which all Alexa requests and responses go through, is built on top of the Alexa Node SDK (the SDK did not exist when we first launched our skill). We added a quick helper method from the Alexa Cookbook (//github.com/alexa/alexa-cookbook/blob/master/display-directive/listTemplate/index.js#L589) to check whether we are dealing with an Echo Show or a voice-only device.
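The helper boils down to a defensive walk of the request context. This sketch follows the cookbook version (written against the alexa-sdk v1 style, where the incoming event is available as this.event inside a handler):

    function supportsDisplay() {
      // True only when the device has declared the Display interface,
      // i.e. it is an Echo Show rather than a voice-only Echo or Dot.
      const hasDisplay = this.event.context &&
        this.event.context.System &&
        this.event.context.System.device &&
        this.event.context.System.device.supportedInterfaces &&
        this.event.context.System.device.supportedInterfaces.Display;
      return !!hasDisplay;
    }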

This method is called before we return our response to ensure we only send RenderTemplates to devices that support them.

Finally, we extended our Response Class to accept the new template objects and include them in the response sent to Alexa. The result: visual screens are displayed on the Echo Show alongside the spoken voice response.
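In outline it looks something like this. This is a sketch of the final step inside a request handler, not our actual Response Class; buildListTemplate is a hypothetical helper that returns the template for the current basket:

    // Only attach the directive when the device supports the Display interface.
    if (supportsDisplay.call(this)) {
      response.directives = (response.directives || []).concat({
        type: 'Display.RenderTemplate',
        template: buildListTemplate(basket) // hypothetical template builder
      });
    }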

2. Don’t fight the display templates

There are currently six templates provided to display information on the Echo Show. We decided to create one file, which means the markup and structure are only declared once; we then pass in the data we need to populate the template. Object destructuring and template literals, alongside array.map and array.reduce, make generating templates easy (sketched after the images below). We use Crypto to generate a unique token for every template we return.


Image of list – mapping basket to template listItems.

Image of basket list  – reducing basket to single string.
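To give a flavour of what those images show, here is a simplified sketch rather than our production code (the dish names, prices and builder name are hypothetical): the builder maps the basket straight onto listItems, and a reduce collapses it to a single summary string.

    const crypto = require('crypto');

    // Map the basket onto ListTemplate listItems; every template we return
    // gets a unique token generated with crypto.
    function buildBasketListTemplate(basket) {
      return {
        type: 'ListTemplate1',
        token: crypto.randomBytes(16).toString('hex'),
        title: 'Your order',
        listItems: basket.map(({ name, price }, index) => ({
          token: `basket-item-${index}`,
          textContent: {
            primaryText: { type: 'PlainText', text: name },
            secondaryText: { type: 'PlainText', text: `£${price.toFixed(2)}` }
          }
        }))
      };
    }

    // Reduce the basket to a single string for a body template or the voice response.
    const basketSummary = basket =>
      basket.reduce((text, item) => (text ? `${text}, ${item.name}` : item.name), '');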

Markup is limited to basic HTML tags including line breaks, bold, italic, font size, inline images, and action links. Action links are really interesting, but the default blue styling means we have so far had to avoid using them.

Many of the templates that support images take an array of image objects; however, only the first image object is used. We experimented with providing more than one image, hoping to supply a fallback image or randomise the image displayed. The lack of fallback images means we need to make a request to our S3 bucket to validate that the image exists before including it in the template.
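The validation is just an existence check before the template is built. Here is a minimal sketch of the idea, assuming the image URL points at a publicly readable S3 object and using a plain HTTPS HEAD request rather than the AWS SDK:

    const https = require('https');
    const { URL } = require('url');

    // Resolve true only if the image URL answers a HEAD request with a 200.
    function imageExists(imageUrl) {
      const { hostname, pathname } = new URL(imageUrl);
      return new Promise(resolve => {
        const request = https.request({ method: 'HEAD', hostname, path: pathname }, res => {
          resolve(res.statusCode === 200);
        });
        request.on('error', () => resolve(false));
        request.end();
      });
    }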

Don’t try to hack these templates to get them to do things they weren’t designed for. Each template’s capabilities have been consciously limited by Amazon to give users a consistent experience. Spend your time gently stroking your friendly designer and telling them they’re in a new world now. Set their expectations around the layouts, markup and list objects that are available. Encourage them to read Craig’s post.

3. Take advantage of touch input alongside voice

The Echo Show offers some great new functionality to improve user experience and make some interactions easier. Users can now make selections and trigger intents by touching the screen or saying the list item number, for example “select number 2”.

It is your job to capture both touch and voice selection. When a user selects a list item, your code will receive a new request object of type Display.ElementSelected.

The token attribute you specify when creating the list is passed back in this new request object:
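Stripped down to the relevant fields, the incoming request looks something like this (the values are illustrative):

    {
      "request": {
        "type": "Display.ElementSelected",
        "requestId": "amzn1.echo-api.request.example",
        "locale": "en-GB",
        "token": "Indian"
      }
    }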

In the above example we receive the value ‘Indian’ and can treat this in the same way we would the cuisine slot value. Our state management code knows to wait for either the cuisine intent with a slot value or a Display.ElementSelected request.

Finally, we create a new intent, utterances and a slot to handle number selection. If our new intent is triggered with a valid number, we simply match it against the cuisine array held in state, using an index offset.
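As a sketch (the intent and slot names here are hypothetical, and the cuisine array is whatever we stored in state when the list was rendered), the handler just offsets the spoken 1-based number into the array:

    // Hypothetical handler for a SelectByNumberIntent with a 'number' slot.
    function handleSelectByNumber(event, state) {
      const spoken = parseInt(event.request.intent.slots.number.value, 10);
      const cuisines = state.cuisines; // e.g. ['Indian', 'Chinese', 'Thai']
      if (Number.isInteger(spoken) && spoken >= 1 && spoken <= cuisines.length) {
        return cuisines[spoken - 1]; // "select number 2" -> cuisines[1]
      }
      return null; // out of range – reprompt the user
    }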

Find out more about touch and voice selection – //developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/display-interface-reference#touch-selection-events

4. Adapt your response based on device

The Echo Show provides lots of opportunities and features. In one part of our Skill we decided to change the flow and responses based on the device type.

When we offer users the opportunity to add popular dishes, it made sense for us to shorten the flow, as we can use the screen in addition to the voice response.

We use the same supportsDisplay method to change the flow of our skill.

We use the same logic when displaying the list of popular dishes. Based on Amazon’s recommendations, if the device supports a display we don’t read out all the dishes.
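In outline the branch looks like this (illustrative only: buildDishListTemplate and the dishes data are placeholders, not our production code):

    const directives = [];
    let speech;

    if (supportsDisplay.call(this)) {
      // Screen available: keep the speech short and put the detail on screen.
      speech = 'Here are some popular dishes. Touch one, or say its number.';
      directives.push({
        type: 'Display.RenderTemplate',
        template: buildDishListTemplate(dishes) // hypothetical builder
      });
    } else {
      // Voice only: read the dishes out instead.
      speech = `Popular dishes are ${dishes.map(dish => dish.name).join(', ')}. Which would you like?`;
    }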

You can find out more about our thoughts designing user experience for the Echo Show here.

5. The back button doesn’t work

The back button caused us some problems. When a user touches the back button, the Echo Show will display the previous template. Unfortunately, no callback is sent back to your code. This created a huge state management problem for us.

For example, a user can get to the checkout stage; at this point our state engine expects only two intents, Pay Now or Change Something (excluding back, cancel and stop). If an Echo Show user touched back, the template would now show our allergy prompt. The state engine does not know this change has taken place, so we could not process the user’s Yes/No intents to move on from the allergy prompt, as it thinks the user is still at the checkout stage.

Just to add to this problem, the user can actually tap back through multiple templates. Thankfully, you can disable the back button in the template response object:
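Setting backButton to ‘HIDDEN’ on the template does the trick. A minimal example (the token, title and text are placeholders):

    const template = {
      type: 'BodyTemplate1',
      token: 'checkout',
      backButton: 'HIDDEN', // the default is 'VISIBLE'
      title: 'Checkout',
      textContent: {
        primaryText: { type: 'PlainText', text: 'Ready to pay?' }
      }
    };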

To find out more about the Just Eat Alexa Skill visit //www.just-eat.co.uk/alexa

For more information on developing with the Alexa Display Interface visit //developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/display-interface-reference


Troy Hunt – Hack Yourself First workshop at Just Eat

2016 was a year full of internet security issues: from the Yahoo breach, to the TalkTalk hack, to US election rigging, the massive Tesco Bank breach and an internet-crippling DDoS attack. Today, internet security is no longer just the domain of techies and security experts, but the responsibility of all of us.


A close-up of Troy Hunt’s demo site with hacked videos inserted

I remember my first computer. It was a ZX Spectrum, with 48K of RAM and a Z80 processor running at 3.5 MHz. It was on this rubber-keyed machine that I learnt about for loops, if clauses and how much fun it was getting a computer to do your bidding, even if it was only to print “HELLO” all the way down the screen.


Today, many years later, I spend most days getting Just Eat’s computers to do what I want them to do. And it’s still as satisfying as it always was.

A few weeks ago, Troy Hunt came and visited Just Eat for the second year running, to lead a fresh group of our engineers through his two-day ‘Hack Yourself First’ security workshop. And I learned something new and interesting – how to get other people’s computers to do what I wanted them to…

(For those of you who don’t know, Troy is one of the world’s best known web security experts.)


Troy Hunt discovers his test site has been hacked by Rick Astley

Twenty Just Eat engineers participated in the workshop, which combined an overview of some of the most common security flaws out in the wild with hands-on exercises, taking us gently (and sometimes not-so-gently) through (among other things) SQL injection attacks, badly configured applications and poorly thought-out password policies. Not only did he show us the implications when these things happen, he also showed us how to get our hands dirty and hack a demonstration website that he had made specifically to be hacked.

Now my interest was piqued. Of course, as a seasoned developer, I’d heard about most of the security flaws that Troy was talking about, but actually being able to hack a site and see what information gets compromised was something else. Getting someone else’s computer to do what I wanted it to do was even more satisfying than getting my own one to behave: having my trusty laptop break a website (albeit one written purposely to have these security holes), spam its reviews, and enumerate all the registered users’ details in less than ten minutes was an eye-opener.


Troy Hunt discovers his test site has been hacked

Troy’s workshop helped all of us to understand, through our own hands-on experience, that security is something we must all take responsibility for, and how to do this in a practical way.

Troy continues to be instrumental in highlighting security issues, and showing how to prevent or combat them (through his blog, his database of data leaks and his online courses). Our thanks to Troy for spending a couple of days giving us a fairly broad yet deep dive into some of these issues.

I for one was inspired to look deeper into this fascinating part of our industry, and the feedback suggests it wasn’t just me!



Xamarin 101, S01E01 – UI Tests

Xamarin 101 is a new series that we hope will prepare you and your team to use Xamarin in production. Each episode will focus on one particular topic in Xamarin development.

Subscribe to our meetup page to receive notifications about all our future events, including the next Xamarin 101 episodes. We’ll also give you free pizza and drinks whilst you learn – win-win.
//www.meetup.com/London-Mobile-Dev.

Our first episode was about UI tests; the full presentation is now available on our YouTube channel!

Two of our JUST EAT Xamarin Engineers also spoke at the event.

Xamarin UITest and Xamarin Test Cloud by Gavin Bryan

This talk covered Xamarin’s Automation Test library, which allows you to create, deploy and run automation tests on mobile devices, simulators and emulators. The library is based on Calabash and allows you to write automation tests in C# using NUnit in a cross-platform manner so that tests can be shared across different platforms if required.
The library is very rich in functionality, allowing quite involved and complex automation tests to be written. The talk gave an overview of basic test automation and the tools available for creating and running tests. The automation tests can be run locally, in CI environments and in Xamarin Test Cloud (XTC). XTC is an on-demand cloud-based service with over 2,000 real mobile devices available to run your automation tests on.
We showed the options available for running automation tests on a variety of devices in XTC and showed the analysis and reporting that was available in XTC.


Presentation Assets
Slides – goo.gl/TXTVzI
Demo – goo.gl/QiKprk

____

BDD in Xamarin with Specflow and Xamarin UITest by Emanuel Amiguinho

Following Gavin’s presentation, it was time to bring BDD to Xamarin development, using Specflow to fill the gap between Gherkin feature/step definitions and the Xamarin.UITest framework, giving the best UI test coverage possible and documentation that everyone in your team – technical and non-technical – can understand.

Presentation Assets
Slides – goo.gl/ITWen8
Demo – goo.gl/7BDpfp

____

Our next topic is databases and we are currently looking for speakers who have experience with any type of local database in their development (SQLite, DocumentDB, Realm, etc.). If you are interested, please send an email outlining which database you want to talk about and your availability to:
emanuel.amiguinho@just-eat.com or nathan.lecoanet@just-eat.com


Tech talk: Towards a Docker and containerised future

Last week, Ben Hall came and talked to us about how Docker can be used, even within a Windows-hosted platform. This was really interesting, and has opened up a few lines of experimentation; thanks Ben!

Abstract

Container-based deployments are rapidly becoming the de facto standard for system deployments, ranging from small WordPress sites to how Google deploys its clusters.

During this talk, Ben will discuss how you can architect your applications for use with Docker and a container-based deployment approach. Ben will introduce the current container patterns and approaches that are moving us towards a containerised future.

At the end, developers, testers and system administrators will understand the issues associated with this new way of thinking, how production environments need to change to support containers and the advantages they bring for maintainability across multiple environments and clusters.

Ben Hall

Ben is the founder of Ocelot Uproar, a company focused on building products loved by users. Ben has worked as a systems administrator, tester and software developer, and has launched several companies. He still finds the time to publish a book and speak at conferences. Ben enjoys looking for the next challenge to solve, usually over an occasional beer.

Ben recently launched Scrapbook (joinscrapbook.com), a hosted online environment for developers. Scrapbook helps break down the barriers to learning new technologies such as Docker & containers.

Recording

Tech talks at JUST EAT

This is one of the reciprocal tech talks that we arrange at JUST EAT. See full details here.