


Customising Salesforce Marketing Cloud

Personalised marketing has evolved quickly in recent years and customising digital communications based on customer behaviour has become commonplace. With the widespread consumer adoption of mobile devices and social media, the number of channels across which marketing operations must be carried out has only increased – meaning data plays a crucial role in creating tailored, cross-channel customer interactions. Fortunately, a number of marketing management CRM solutions are available to greatly streamline this process. At JUST EAT, we have chosen Salesforce Marketing Cloud.

Marketing Cloud offers a comprehensive suite of services, including data and analytics, email editing, management of social media advertising, interactive website creation, and cross-channel marketing automation. We currently use only a subset of Marketing Cloud's functionality, but it is already proving to be a powerful enabling platform for the marketing team – helping them to build automated campaigns without the need to write code. However, we have found that there are many business requirements that still cannot be fulfilled by the vanilla Marketing Cloud product. Fortunately, we can customise the experience and provide the marketing team with even more automation tools.

A (very) brief Marketing Cloud 101

We needed to make unique voucher codes available to campaigns from our main e-commerce platform. A worked example will best illustrate the problem we encountered when introducing this custom behaviour.

We have built a Windows service that sits inside our Virtual Private Cloud (VPC) in AWS and subscribes to a number of important messages published to our internal message bus. In turn, these messages are mapped to a structure that enables each one to be sent in a POST request to Marketing Cloud’s REST API – the information will be written to a new row in a Marketing Cloud Data Extension. Just think of Data Extensions as tables in a relational database. The following image shows a simple Data Extension with some test customer entries pushed to Marketing Cloud by our service.

[Image: a simple Data Extension containing test customer entries]
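
For illustration, the service writes each row with a call along these lines (the Data Extension key, columns and values here are invented for the example):

```http
POST https://www.exacttargetapis.com/hub/v1/dataevents/key:Customer_Updates/rowset
Content-Type: application/json
Authorization: Bearer <access token>

[
  {
    "keys":   { "Email": "test.person@example.com" },
    "values": { "FirstName": "Test", "LastName": "Person", "City": "London" }
  }
]
```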

Marketing Cloud uses a contact model to provide a single view of a customer's information across Data Extensions, so let's assume that this Data Extension is correctly set up with our contact model; otherwise we wouldn't be able to use this data in our Marketing Cloud campaigns.

The first step in building a campaign is to build a simple automation in Marketing Cloud's Automation Studio. Automations can be used for a number of purposes, but we've found them particularly useful for running a series of activities that first query the data in our Data Extensions to establish an audience for a campaign based on some criteria, and then trigger the running of the campaign for that audience. For example, we may want to run a campaign that sends out vouchers to an audience containing only customers who haven't recently placed an order. The image below shows a simple automation with just two activities – a query and a trigger.

[Image: a simple automation with a query activity followed by a trigger activity]

The query activity will write the audience to another Data Extension which we define and the trigger will fire an event which will run our campaign for any contacts written to this Data Extension.

The campaign will be defined as a customer journey in Marketing Cloud. We can use Marketing Cloud’s Journey Builder to drag and drop the different activities that make up a customer journey from a palette onto a canvas. Example activities include sending an email or SMS, updating rows in Data Extensions, waiting for a period of time, or making decisions to send our contacts on different paths through the journey. We can define a simple journey that just sends an email. Note that Journey Builder also requires a wait activity before a contact exits a journey.

[Image: a simple journey with an email activity followed by a wait activity]

Our entry event shows the event data source as our Data Extension that contains our audience. Each contact in this Data Extension will pass through this journey and should eventually receive an email based on a template that we define for the email activity.

Now we want to add an additional activity before sending the email that requests a voucher from our internal Voucher API to include in the email. This is the exact problem that we encountered in our recent work and, by default, there’s no way to do that from a customer journey. However, we can create a custom activity that will be available from the activity palette and allow us to do just that.

Building custom behaviour

A custom activity is simply a web application that is hosted on a web server. The structure that these applications must follow in order to be used as Journey Builder activities is well defined but there is still a great deal of flexibility with regards to the technology chosen to build the application. All of the basic examples provided by Salesforce are built using the Express web framework for Node.js so we decided to do the same as it seemed the path of least resistance. However, knowing what we know now, we could have just as easily built it using other web frameworks or technologies.

When a contact reaches our custom activity in a customer journey we want the following chain of events to occur…

  1. A voucher request is made from the journey to our web application back-end and the contact moves to a wait activity in the journey.
  2. The web application makes a request to our internal Voucher API and receives a voucher code in the response.
  3. The web application sends the voucher code to the Marketing Cloud REST API so that it can be written to a column in our campaign audience Data Extension against the contact record.
  4. The contact moves to the email activity in the journey where some server-side JavaScript inside the email template fetches the voucher code for that contact from the Data Extension and writes it to the email.

We need to write the voucher codes to a Data Extension in order to make them accessible to a Marketing Cloud email template.

[Diagram: the sequence of requests between the journey, our web application, the Voucher API and the Marketing Cloud REST API]

The back-end for the web application is a fairly standard Express REST API that includes a number of endpoints required by Journey Builder. During a running journey, the voucher request is sent to an endpoint in order to execute the functionality required to complete steps two and three, listed previously. There are a few other endpoints that are only required by Journey Builder when the journey is being edited.
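
Our real endpoints contain the voucher logic, but a minimal skeleton of the back-end – following the shape of the Salesforce example activities – looks something like this (the route names match those examples; everything else is elided):

```javascript
// app.js – a minimal sketch, not our production code
const express = require('express');
const app = express();

app.use(express.json());

// Called by Journey Builder for every contact that reaches the activity.
app.post('/execute', (req, res) => {
  // 1. read the wizard configuration from req.body.inArguments
  // 2. request a voucher code from our internal Voucher API
  // 3. write the code to the audience Data Extension via the REST API
  res.status(200).json({ success: true });
});

// Editing-time endpoints, only called while the journey is being configured.
app.post('/save', (req, res) => res.status(200).json({ success: true }));
app.post('/publish', (req, res) => res.status(200).json({ success: true }));
app.post('/validate', (req, res) => res.status(200).json({ success: true }));

app.listen(process.env.PORT || 3000);
```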

During the editing process, both standard and custom activities in Journey Builder have a configuration wizard that displays in an HTML iframe to configure the activity after it is placed on the canvas. For example, for our voucher custom activity it makes sense to be able to define the voucher amount, validity period and other related parameters for that particular campaign. We also need to choose the Data Extension and column to which the voucher codes will be written. This wizard is provided by the front-end code of our web application.

[Image: the configuration wizard for our voucher custom activity]

Salesforce even provides FuelUX, a front-end framework which extends Bootstrap and provides some additional JavaScript controls. This enabled us to match the look and feel of the Marketing Cloud UI and include a picker for choosing the Data Extension and column for the voucher codes.

There are a couple of requirements for the front-end code to function correctly in Journey Builder. Firstly, Postmonger must be used in our code. It is a lightweight JavaScript utility for cross-domain messaging and is required as a mediator between our configuration wizard and Journey Builder. Secondly, the root of the front-end code must include a configuration file that contains, amongst other things, the URLs for our back-end endpoints, the inputs and outputs of the custom activity, and a unique application key.
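
A trimmed-down sketch of such a configuration file, based on the Salesforce example activities of the time (the URLs and key are placeholders, and the exact fields may vary by version):

```json
{
  "workflowApiVersion": "1.1",
  "metaData": { "icon": "images/voucherIcon.png", "category": "custom" },
  "type": "REST",
  "lang": {
    "en-US": { "name": "Voucher", "description": "Requests a voucher code for the contact" }
  },
  "arguments": {
    "execute": {
      "inArguments": [{ "contactKey": "{{Contact.Key}}" }],
      "outArguments": [],
      "url": "https://our-activity-host.example.com/execute",
      "verb": "POST",
      "format": "json"
    }
  },
  "configurationArguments": {
    "applicationExtensionKey": "<unique application key>",
    "save":     { "url": "https://our-activity-host.example.com/save" },
    "publish":  { "url": "https://our-activity-host.example.com/publish" },
    "validate": { "url": "https://our-activity-host.example.com/validate" }
  },
  "userInterfaces": {
    "configModal": { "url": "https://our-activity-host.example.com/index.html" }
  }
}
```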

We define the unique application key when we create a new application in the Salesforce App Center as an Application Extension and add our custom activity to this. We also need to provide the endpoint of our custom activity at this point. This step is required for connecting applications to the Marketing Cloud platform and will provide us with a generated Client ID and Client Secret to authenticate with Marketing Cloud and allow our custom activity to interact with the Marketing Cloud API.

Salesforce recommends using Heroku for hosting custom activities. Heroku is a great option for this type of lightweight Node.js application, but it wasn't ideal for us as we needed to interact with our Voucher API, which sits inside our VPC. As a result, our custom activity is also hosted inside our VPC, so communication with any internal resources is not an issue. This means that we only have to manage the security between our custom activity and Marketing Cloud, without publicly exposing the endpoint of our Voucher API. Hosting within the VPC also allows us to take advantage of our internal stacks for logging and recording stats.

Following these steps we are now able to drag and drop our custom activity from the activity palette onto the canvas for use in the customer journey.

[Image: the journey with our voucher custom activity added before the email activity]

Conclusion

Not only did we deliver a critical component that will be used across a number of our marketing campaigns, but we also opened up the possibilities of what can be done within the confines of a customer journey. Marketing Cloud offers some great automation tools for marketers but pairing it with the flexibility of our own platform in AWS should open up some interesting opportunities for coordination between the two. We will surely be exploring what other custom activities we can add to the marketing team’s toolset in order to further enable them to react quickly without the need to make amendments to our codebase.


Offline UI testing on iOS with stubs

Here at JUST EAT, while we have always used stubs in unit tests, we used to test against production public APIs for our functional and UI testing. This always caused us problems, with APIs returning different data depending on external factors such as the time of day. We have recently adopted the UI testing framework that Apple introduced at WWDC 2015 to run functional/automation tests on the iOS UK app, along with stubs for our APIs. This has enabled us to eliminate the test failures caused by network requests going wrong or returning unexpected results.

Problem

For our UI testing we used to rely on KIF, but we were never completely satisfied with it, for reasons such as:

  • The difficulty of reading KIF output because it was mixed in the app logs
  • The cumbersome process of taking screenshots of the app upon a test failure
  • General issues also reported by the community on the GitHub page

 
We believe that Apple is committed to providing developers with a full set of development tools, and even though some of them are far from reliable in their initial releases, we trust they will become more and more stable over time.

Another pitfall for us was that our APIs return different values based on the time of day, because restaurants might be closed and/or their menus might change. As a consequence, running automation tests against our public APIs was causing some tests to fail.

Proposed Solution

Rethinking our functional tests from scratch allowed us to raise the bar and solve outstanding issues with a fresh mind.

We realised we could use the same technology used in our unit tests to add support for offline testing in the automation tests, and therefore we designed around OHHTTPStubs to stub the API calls from the app. Doing this was not as trivial as it might seem at first. OHHTTPStubs works nicely when writing unit tests, as stubs can be created and removed during the test, but when it comes to automation tests it simply doesn't work.

The tests and application run as different instances, meaning that there is no way to inject data directly from the test code. The solution here is to launch the application instance with some launch arguments for enabling a “testing mode” and therefore generating a different data flow.

We pass parameters to the app either in the setup method (per test suite):
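
In a suite's setUp this looks something like the following sketch (STUB_API_CALLS_stubsTemplate_addresses is the real argument discussed below; FAKE_REACHABILITY is an illustrative name for the second one):

```objc
// Per test suite – a sketch of the setup method
- (void)setUp {
    [super setUp];
    self.continueAfterFailure = NO;
    XCUIApplication *app = [[XCUIApplication alloc] init];
    app.launchArguments = @[@"STUB_API_CALLS_stubsTemplate_addresses",
                            @"FAKE_REACHABILITY"];
    [app launch];
}
```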

or per single test:
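
Along these lines (the test name is invented for the example):

```objc
- (void)testAddressSuggestionsAreShown {
    XCUIApplication *app = [[XCUIApplication alloc] init];
    app.launchArguments = @[@"STUB_API_CALLS_stubsTemplate_addresses",
                            @"FAKE_REACHABILITY"];
    [app launch];
    // ...test steps and assertions...
}
```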

In our example we pass two parameters to signal to the app that the automation tests are running. The first parameter is used to stub a particular set of API calls (we’ll come back to the naming later) while the second one is particularly useful to fake the reachability check or the network layer to avoid any kind of outgoing connections. This helps to make sure that the app is fully stubbed, because if not, tests could break in the future due to missing connectivity on the CI machine, API issues or time sensitive events (restaurants are closed etc).

We enable the global stubbing at the end of the application:didFinishLaunchingWithOptions: method:
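
A sketch of the shape of it (the JEHTTPStubManager method name is our illustration here; the class itself appears later in this post):

```objc
// AppDelegate.m – a sketch, not our exact implementation
- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // ...normal app setup...

    for (NSString *argument in [NSProcessInfo processInfo].arguments) {
        if ([argument hasPrefix:@"STUB_API_CALLS_"]) {
            // the argument suffix names the bundle of stubs to apply
            NSString *bundleName =
                [argument substringFromIndex:@"STUB_API_CALLS_".length];
            [JEHTTPStubManager applyStubsInBundleNamed:bundleName];
        }
    }
    return YES;
}
```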

The launch arguments are retrieved from the application thanks to the NSProcessInfo class. It should now be clearer why we used the STUB_API_CALLS_stubsTemplate_addresses argument: the suffix stubsTemplate_addresses is used to identify a special bundle folder in the app containing the necessary information to stub the API calls involved in the test.

This way the Test Automation Engineers can prepare the bundle and drop it into the project without the hassle of writing code to stub the calls. In our design, each bundle folder contains a stubsRules.plist file with the relevant information to stub an API call with a given status code, HTTP method and, of course, the response body (provided in a file in the bundle).

[Image: the stub bundle folders dropped into the Xcode project]

This is how the stubs rules are structured:

[Image: the structure of stubsRules.plist]
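
In XML form, a rule might look something like this (the key names are our illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<array>
    <dict>
        <key>path</key>
        <string>/addresses</string>
        <key>httpMethod</key>
        <string>GET</string>
        <key>statusCode</key>
        <integer>200</integer>
        <key>responseFile</key>
        <string>addresses_response.json</string>
    </dict>
</array>
</plist>
```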

All that's left is to show some of the code responsible for doing the hard work of stubbing. Here is the JEHTTPStubManager class mentioned above in the AppDelegate.
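
The real class is a little richer, but a minimal sketch of the idea is this (it delegates the actual stubbing to the OHHTTPStubs category shown next):

```objc
// JEHTTPStubManager.m – a sketch, not our exact implementation
#import "JEHTTPStubManager.h"
#import "OHHTTPStubs+JustEat.h"

@implementation JEHTTPStubManager

+ (void)applyStubsInBundleNamed:(NSString *)bundleName
{
    // locate the bundle folder dropped into the project by the test engineers
    NSString *bundlePath = [[NSBundle mainBundle] pathForResource:bundleName
                                                           ofType:@"bundle"];
    NSBundle *stubsBundle = [NSBundle bundleWithPath:bundlePath];

    // each bundle carries a stubsRules.plist describing the calls to stub
    NSString *rulesPath = [stubsBundle pathForResource:@"stubsRules"
                                                ofType:@"plist"];
    NSArray *rules = [NSArray arrayWithContentsOfFile:rulesPath];

    for (NSDictionary *rule in rules) {
        [OHHTTPStubs je_applyRule:rule inBundle:stubsBundle];
    }
}

@end
```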

We created a utility category around OHHTTPStubs:
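
Again a sketch – a convenience method that registers one stub per plist rule, using OHHTTPStubs' public stubRequestsPassingTest:withStubResponse: API:

```objc
// OHHTTPStubs+JustEat.m – reconstructed for illustration
#import "OHHTTPStubs+JustEat.h"

@implementation OHHTTPStubs (JustEat)

+ (void)je_applyRule:(NSDictionary *)rule inBundle:(NSBundle *)bundle
{
    [self stubRequestsPassingTest:^BOOL(NSURLRequest *request) {
        // match on the path and HTTP method described by the rule
        return [request.URL.path isEqualToString:rule[@"path"]] &&
               [request.HTTPMethod isEqualToString:rule[@"httpMethod"]];
    } withStubResponse:^OHHTTPStubsResponse *(NSURLRequest *request) {
        // serve the canned body from the bundle with the configured status code
        NSString *bodyPath = [bundle pathForResource:rule[@"responseFile"]
                                              ofType:nil];
        return [OHHTTPStubsResponse responseWithFileAtPath:bodyPath
                statusCode:(int)[rule[@"statusCode"] integerValue]
                   headers:@{@"Content-Type": @"application/json"}];
    }];
}

@end
```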

Having our automation tests run offline removed the majority of red test reports we were seeing with our previous setup. For every non-trivial application, running all the test suites takes several minutes, and the last thing you want to see is a red mark in CI due to a network request gone wrong. The combination of OHHTTPStubs and Apple's test framework has enabled us to run the automation tests at any time of day and to completely remove the errors that arise from network requests going wrong.


Everyone hates slow native application tests and so do we

Parallelising tests on real devices with calabash-android

When testing one of our products, we have to run our automation suite on multiple types of device, sometimes with added permutations when trialling components. We looked into using a third-party cloud service with real devices, like Xamarin Test Cloud, but some of the devices weren't available. Simulated devices weren't an option as we are often testing push notifications. With the added advantages of being able to watch tests running and interact with them manually if necessary, we decided to look into on-site strategies.

Running tests on one device at a time would be very time consuming, slowing feedback loops and release cycles so we looked into a way to run our tests in parallel. This would speed things up, be more scalable and as a bonus, help to highlight any concurrency issues.

The approach we chose was to run all of our tests on each device simultaneously using a rake multitask.

This multitask would invoke other tasks that would run against each device. The information about the devices to run on would be passed in via an environment variable, used to generate the subtasks, which were collected into a task list and passed into the multitask, as sketched below.
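
A sketch of the idea (the DEVICES variable format and the shelled-out command are illustrative; our real setup invoked cucumber rake tasks):

```ruby
# Rakefile – a sketch; usage: DEVICES=device_a,device_b rake test_all_devices
device_tasks = ENV['DEVICES'].to_s.split(',').map do |device_id|
  task_name = "test_#{device_id}"
  task task_name do
    # run the calabash-android suite against this device
    sh "calabash-android run app.apk ADB_DEVICE_ARG=#{device_id}"
  end
  task_name
end

# multitask runs all of its prerequisite tasks in parallel threads
multitask test_all_devices: device_tasks
```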

The next step was to make all of the variables in our code thread-safe to prevent conflicts. This meant replacing environment variables with thread variables.
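
For example (the variable name is illustrative):

```ruby
# Before: process-wide state, shared between all threads
ENV['DEVICE_ID'] = device_id

# After: state scoped to the current thread only
Thread.current[:device_id] = device_id
```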

One place this was particularly inconvenient was when passing the id of the target device into the cucumber task, as using the standard method of passing in environment variables was no longer an option. Using a thread variable wouldn’t work either, as the cucumber task runs on a separate thread to the one where it was created. Our solution was to use the id from the report title for that thread and pull it out as part of the setup for each test run.

The final step was to make our tests run against multiple user accounts, so that we could avoid clashes between the threads. We started by making our step definitions agnostic to the account they were using, and then created an account per device, selected from a YAML mapping file during initialisation.

If these kind of challenges interest you, we are hiring.

Thanks for reading!
Alan & Adrian


Pairing with Developers

Working on the same branch

When I first started out in testing, I used to work in teams where we would work for months on features, and then have them held up by weeks of testing and bug finding. These were the bad old days. I don't know if anyone else has noticed, but I'm suddenly in an awesome world where I'm now playing catch-up with teams of developers who are able to move forward so fast with new features that it's almost a blur to me!

I’ve been looking for a way to keep the tests and code in sync with each other, and to ensure that we don’t end up doing releases to production with untested code.

The best example I have of this is recently working with one of our developers – ladies and gents – may I introduce Adam (Right)!

[Photo: Beccy & Adam]

Adam is one of our amazing front end developers, and recently he embarked on some work to add a cookie banner onto one of our newly redesigned responsive pages. The brief was as follows:

[Image: the CWA-2354 Jira brief]

Branching

The first thing we did was set up a branch that we could both work on. The reason being – we wanted the tests and the implementation to go in as one Pull Request into our repository. So we made our branch cwa_2354 (the Jira number!) – and I committed the feature file we had discussed beforehand into the branch:
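
The original feature file isn't reproduced here, but it ran along these lines:

```gherkin
Feature: Cookie banner
  As a visitor I want to be warned about cookies
  so that I can make an informed choice

  Scenario: Banner shown on first visit
    Given I am visiting the site for the first time
    When the search page loads
    Then I should see the cookie banner

  Scenario: Hiding the banner
    Given I can see the cookie banner
    When I select 'Hide'
    Then I should not see the cookie banner
```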

My Guesses

For the step definitions, I took some educated guesses as to how Adam would structure the HTML. I guessed that the cookie banner would be developed with a class of 'cookieBanner', that when the page first loaded the node would exist, and that on selecting 'Hide' the node with that class would no longer be there.

Adam started out by checking in his code to solve the above problem. He then took a look at my best guesses for the code, and unfortunately they were not 100% right :(. This was how I originally wrote the step definitions:
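
Something along these lines (reconstructed for illustration, Capybara-style):

```ruby
# My original guesses – the banner identified by a class,
# and assumed to be removed entirely on 'Hide'
Then(/^I should see the cookie banner$/) do
  expect(page).to have_css('.cookieBanner')
end

Then(/^I should not see the cookie banner$/) do
  expect(page).to have_no_css('.cookieBanner')
end
```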

When Adam was ready we took a look at my guessed step definitions together, and Adam immediately saw that they wouldn’t work. He explained to me that the first time the search page was loaded, there would be a node with an id rather than a class of cookieBanner, and that when the ‘Hide’ button was selected, the node would still be present, but it would have a class of ‘hide’ applied to it. However, on subsequent page loads (i.e. once the user has seen the warning) the node would not be present at all. Adam and I had a conversation about this meaning we effectively had two different ‘hidden’ states:

  1. The case when the node is not present
  2. The case when the node is present and hidden

We discussed whose code should deal with this – either change Adam’s code to only have one hidden state, or change my test code to deal with the complexity – we didn’t see a problem either way, so we changed the test code. The final step definitions ended up like this:
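
Roughly like this (again reconstructed): the banner has an id rather than a class, and "hidden" can mean either absent or present with the 'hide' class applied.

```ruby
Then(/^I should see the cookie banner$/) do
  expect(page).to have_css('#cookieBanner')
  expect(page).to have_no_css('#cookieBanner.hide')
end

Then(/^I should not see the cookie banner$/) do
  # two hidden states: node not present, or present but carrying the 'hide' class
  hidden = page.has_no_css?('#cookieBanner') ||
           page.has_css?('#cookieBanner.hide', visible: :all)
  expect(hidden).to be true
end
```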

Our Pull Request

Our joint pull request looked like this:

[Image: our joint pull request]

Thanks for reading!
~ Beccy


Using Android Emulators on CI

Introduction

In the JUST EAT Android team, we use a continuous integration system called TeamCity: our build agent compiles and packages the app, installs it on our test devices, runs the tests on each one, and reports the results back to TeamCity. The team uses Git for version control of the code, and our build server is linked to activity on the repository and will automatically run jobs when certain events occur. The main problem I found myself solving with this setup was that the emulators would eventually crash if they were kept running.

The build server’s functional tests job

The team's TeamCity build agent kicks off functional tests on a variety of devices each time there is a merge into the develop branch of the Android repository. We have a separate build job for each device to give us visibility of test successes/failures on a per-device basis. Some of the devices are real ones plugged into the build machine, while some are emulated using an Android emulator called Genymotion. We decided to test more on emulated devices than real ones due to problems with the physical devices losing wifi intermittently, running out of battery because they were only trickle-charged by the machine, and occasionally just losing connection to it (cables add another point of failure!).

[Image: a Genymotion emulator, running VirtualBox underneath]

The first problem

Unfortunately, all Android emulators are prone to crashing if left running for a while. However, Genymotion is still viewed by the Android community (and us!) as the best emulator program for Android, especially in terms of speed, so giving up Genymotion wouldn't have been the right solution here. The emulators were left running constantly for days, reinstalling the app and running test suite after test suite, and would inevitably crash and require some manual rebooting. I decided to find a way to launch each device every time a suite was due to run on it, and close it again when the tests were complete.

Genymotion comes with its own shell as a separate program, which executes commands against the emulators, including starting devices (though at first glance I couldn't find a command to shut them down). You can start an emulator with the 'player' command:
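
(The OS X path below is where the player binary lives; the device name is an example.)

```shell
/Applications/Genymotion.app/Contents/MacOS/player --vm-name "Samsung Galaxy S4"
```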

I shut the emulator down with a ruby script just using the build machine’s process list. This means I can also kill the emulator task if it has frozen:
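
(Reconstructed for illustration – the real script differed in detail.)

```ruby
# kill_emulator.rb – kill any running (or frozen) 'player' process
player_count = `ps aux | grep player | wc -l`.to_i

`killall -9 player` if player_count > 1
```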

(This last number is 1, not 0, because the act of searching with grep creates a new process, and that process contains the string I’m grepping for! Grepception.)

Genymotion uses VirtualBox behind the scenes. When specifying the device parameter, you can either use the device’s name as displayed in Genymotion, or you can use its associated VirtualBox ID. I used the IDs because they would always be constant for the installation of that emulator, while one could easily change the title of the device in Genymotion’s main window at any time.

So I needed to find out the virtual machine IDs of each of my Genymotion devices. I did this with VirtualBox's own VBoxManage executable, which lives in the VirtualBox installation directory:
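
```shell
VBoxManage list vms
```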

Output:
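
(The device names and UUIDs below are invented for illustration.)

```
"Samsung Galaxy S4 - 4.4.4 - API 19 - 1080x1920" {bf2a3b47-0d1c-4a8e-9f3a-2f6b7c1d5e90}
"Google Nexus 7 - 5.1.0 - API 22 - 800x1280" {6c7d8e9f-1a2b-3c4d-5e6f-7a8b9c0d1e2f}
```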

So now I can launch the Galaxy S4 emulator with one command:
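
(Using the illustrative UUID from the listing above.)

```shell
/Applications/Genymotion.app/Contents/MacOS/player --vm-name "bf2a3b47-0d1c-4a8e-9f3a-2f6b7c1d5e90"
```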

I can now execute the launching of each emulator as a build step inside their respective build jobs.

The second problem

The Android SDK has a program called 'Android Debug Bridge' (adb), which is used for interaction between a system and a connected Android device. Each Android device has its own serial number, which can be viewed with the command 'adb devices', with an optional '-l' parameter also printing extra useful information such as the model. Unfortunately, the device serials for all the emulators were dynamically-generated IP addresses and would be different every time an emulator was booted up. I haven't found a way to set static device serials on emulators. I couldn't set this in the VM settings either; you can alter the network configuration of a device, but not the serial ID of a device as it appears in adb.

The output for ‘adb devices -l’ looks like this:
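
(Serials and models here are illustrative of a Genymotion setup.)

```
$ adb devices -l
List of devices attached
192.168.56.101:5555    device product:vbox86p model:Samsung_Galaxy_S4 device:vbox86p
192.168.56.102:5555    device product:vbox86p model:Google_Nexus_7 device:vbox86p
```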

The number on the left is the serial and there are several bits of information on the right of the line.

I collaborated with Beccy on a script which runs after an emulator is launched. As the device boots, the script loops once a second for up to 60 seconds, parsing the output of an 'adb devices -l' command. It reads each line, splits up the chunks of information and maps them together. Then the script takes a device_name parameter, sent in by TeamCity, and searches for it in the parsed output. If found, it returns the matching serial. If not, it throws a 'Device not found' error.
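
A sketch of that script (TeamCity passes device_name as the first argument):

```ruby
# find_device.rb – polls adb for up to 60 seconds, looking for the device
# whose model matches the name passed in by TeamCity
device_name = ARGV[0].to_s  # e.g. "Samsung_Galaxy_S4"

60.times do
  `adb devices -l`.each_line do |line|
    fields = line.split
    # device lines look like: <serial> device product:... model:... device:...
    next unless fields[1] == 'device'
    info = Hash[fields[2..-1].map { |chunk| chunk.split(':', 2) }]
    if info['model'] == device_name
      File.write('device_serial.txt', fields[0])
      exit 0
    end
  end
  sleep 1
end

abort 'Device not found'
```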

If the device was found, the script will have written the device serial to a file, which I can then read in a later build step and use to tell adb to launch the tests only on that device. You can target a single device with adb by passing its serial via the '-s' parameter:
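
For example (the apk name is illustrative):

```shell
adb -s 192.168.56.101:5555 install -r JustEat.apk
```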

The third problem

Once a Genymotion emulator has opened, it appears in ‘adb devices’ while it is still booting up. This means the next build steps would fail to run the tests because the device wasn’t ready to receive commands like installing apps.

I got round this by using ADB again. With it, you can access the device’s own shell and therefore get extra info from and send more commands to the device. I used the following useful command to check if the device had finished its boot cycle or not:
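
(Using the serial retrieved earlier.)

```shell
adb -s 192.168.56.101:5555 shell getprop init.svc.bootanim
```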

This returns ‘running’ if the device is still booting and ‘stopped’ if it has booted. Now all I had to do was write a script that ran this command in a loop for up to 60 seconds and wait until the output of this shell command equalled ‘stopped’:
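
A sketch of that script:

```ruby
# wait_for_boot.rb – exits 0 once the boot animation has stopped,
# or 1 if the device is still booting after 60 seconds
serial = ARGV[0]

60.times do
  state = `adb -s #{serial} shell getprop init.svc.bootanim`.strip
  exit 0 if state == 'stopped'
  sleep 1
end

exit 1
```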

If the bootanim query returned 'stopped' within 60 seconds, the script would exit with success code 0; otherwise, once the 60 seconds were up and the command still hadn't returned 'stopped', it would exit with failure code 1.

The fourth problem

When you start a Genymotion emulator using the 'player' command, the terminal you executed the command in stays stuck running it until the emulator is closed again. This was a problem for each of our build jobs, which run in one shell from start to finish. For this reason, I put the emulator launch command (the one that uses 'player') in a '.sh' script for each device, and executed them from the job's terminal with the 'open' command. This spawned a new terminal, freeing the main one up immediately.
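
Something like this (the script name is illustrative):

```shell
# launch_galaxy_s4.sh wraps the 'player' command; 'open' runs it in a new
# Terminal window so the build job's shell isn't blocked
open -a Terminal ./launch_galaxy_s4.sh
```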

However, this meant that when the tests had run and the job had finished, this left a tower of dead terminals on the screen.

[Screenshot: a stack of dead terminal windows left on the build machine]
You can change Terminal's preferences to close the window once a command is complete. But don't worry – this only affects the terminals spawned with the 'open' command; it doesn't exit the terminal every time you do something normally.

[Screenshot: the relevant Terminal preference]

Thanks for reading! =D

-Andy Barnett, Test Automation Engineer