Retailers may have gained a few more secret weapons to compete in today’s tough environment, thanks to many new features announced at this year’s Google I/O.
The Google team announced a multitude of new features for Google Home and Google Assistant, all of which have direct application to a retail environment. We were lucky enough to be a partner leading the charge, creating early experiences before the public had access.
Working with top retail brands such as The Home Depot, Staples, GameStop, and others, we are constantly evaluating ways to take advantage of new technologies to elevate their customer experience. With the new announcements at I/O, the door is opening for forward-thinking retailers to use Google Assistant as an AI platform to connect with customers and even sell more products.
Here are five ways to make that happen.
1) Penetration Across Devices
Potentially the most important announcement coming out of Google I/O was the Assistant's broad platform reach. With the addition of the iPhone and IoT devices, Google Assistant is now available on more than 100 million devices. The impact of this is huge, not because of the sheer number of devices, but because of the ability to create an ecosystem effect with your brand.
Imagine your customers being able to simply say "Order me the new Call of Duty from GameStop" on their iPhone. Then, a couple of hours later, their watch gets a notification that their order from GameStop has shipped. Two days later, while they are driving home from work, their car announces, "Your Call of Duty game has arrived at your doorstep." They get home and start playing. This is the type of future Google hopes to create.
This kind of experience focuses on the thing customers care about most in retail: convenience. Connected platforms make it more convenient than ever for users to connect with brands at any point throughout their day.
2) Transactions
The main announcement at Google I/O that made this possible was that Google will now support transactions on Google Assistant. Transactions can happen in a few different ways and create a seamless experience for your customers. Google will support both facilitated and delegated payments.
With facilitated payments, Google handles the heavy lifting for your team: Google stores the payment credentials for the user and charges the user on your behalf. Google provides order-confirmation and payment UI/UX out of the box, in a way users are comfortable with across all Google Assistant apps, so you won't have to focus on optimizing the conversion flow for this process.
This also means that getting up and running on Google Assistant can be done with less effort.
With delegated payments, if you already have stored payment credentials for a user and want to leverage them, Google will support your current charge method and facilitate a simple API handoff to your system, all behind the scenes for the user.
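The two payment modes above can be sketched as a single branch in a retailer's order webhook. This is a minimal illustration, assuming a hypothetical handler and order shape; the names here are not the actual Actions on Google API.

```python
# Hypothetical sketch of branching on the two payment modes described above.
# Function and field names are illustrative assumptions, not a real API.

def handle_transaction(order, payment_mode, charge_with_stored_card=None):
    """Return a status string describing how the order was charged."""
    if payment_mode == "facilitated":
        # Google stores the card and charges the user on our behalf;
        # we only receive a confirmation to fulfill against.
        return f"order {order['id']}: charged by Google, fulfill items"
    elif payment_mode == "delegated":
        # We charge the card we already have on file, then confirm
        # back to Google behind the scenes.
        charge_with_stored_card(order["user_id"], order["total"])
        return f"order {order['id']}: charged via retailer account"
    raise ValueError(f"unknown payment mode: {payment_mode}")

charges = []
status = handle_transaction(
    {"id": "A-100", "user_id": "u42", "total": 59.99},
    "delegated",
    charge_with_stored_card=lambda user, amount: charges.append((user, amount)),
)
```

The key design point is that only the delegated path touches your own charging system; the facilitated path is fulfillment-only.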
3) Universal Carts
With the future moving towards an ecosystem environment, it is important for retailers to invest now in a universal cart system. Google does not provide any cart storage, so the retailer's backend has to store which items are being added to the cart.
If you are a retailer and your website’s cart is different from your mobile app’s cart, this should be a top priority to fix. Consumers are expecting a universal experience across devices, and this expectation will continue to grow with the emergence of platforms like Google Assistant.
Carts, wishlists, and preferences are central to providing a unified customer experience across platforms.
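A universal cart boils down to keying cart state by customer rather than by device or session. Here is a minimal sketch of that idea, with illustrative class and method names assumed for a retailer's backend:

```python
# A minimal sketch of a channel-agnostic cart, keyed by customer ID rather
# than by device or session. Names are illustrative assumptions about a
# retailer's backend, not any specific Google API.

class UniversalCart:
    """One cart per customer, shared by web, mobile app, and Assistant."""

    def __init__(self):
        self._carts = {}  # customer_id -> {sku: quantity}

    def add_item(self, customer_id, sku, quantity=1):
        cart = self._carts.setdefault(customer_id, {})
        cart[sku] = cart.get(sku, 0) + quantity

    def contents(self, customer_id):
        return dict(self._carts.get(customer_id, {}))

cart = UniversalCart()
cart.add_item("cust-7", "SKU-123")   # added from the website
cart.add_item("cust-7", "SKU-123")   # added again from the mobile app
cart.add_item("cust-7", "SKU-999")   # added by voice via the Assistant
```

Because every channel writes to the same record, the customer sees one consistent cart no matter where they pick up the conversation.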
4) Visual Responses
With the Google Assistant app coming to the iPhone, Android phones, and the OS of the Pixel, a new interaction paradigm is being brought to the forefront, one focused on both audio and visual responses. For example, when you speak to the Google Assistant app on an iPhone, you will receive both visual and audio responses. Deciding which response types to use for which types of questions should be a primary focus of your conversational interface design efforts. In the example given on stage at I/O, Google showed ordering from Panera Bread through the Google Assistant app.
In this app, the Assistant displayed images in a card-style view, with details below, where users can scroll through and select which food or drinks they want. To make your app powerful, you need to understand which items are spoken and which are only displayed.
For instance, does the Assistant respond with “Got it. How about one of these cold drinks?” and not say the names of these drinks? What if a user is instead placing the same order on Google Home where they have access to the Google Home app to potentially see the visual elements, but it isn’t guaranteed? And if the app forces the user to pull out their phone to view the drinks and place the order, then doesn’t that defeat the whole value proposition of the Google Home in the first place?
This type of design is very important and the only way to get the right answers for your brand is through testing. In a subsequent article, I will follow up with unique tests and design learnings to help design teams find the best way for users to interact with these multimodal responses.
Content-heavy tasks such as searching and browsing are still difficult even with the visual cues, but if your search and recommendation algorithms are strong enough to overcome the visual shortcomings of the conversational interface, you can still provide an excellent user experience.
5) Machine Learning
Google is investing big in machine learning, and showcased new TPU chips that allow machine learning algorithms to be computed on large datasets faster than ever before. Although retailers won’t be directly using these chips themselves, it is important to realize that machine learning is a way to gain insights into your customers like never before and if you aren’t exploring it yet, you should be.
Using machine learning algorithms, retailers can provide highly accurate and constantly updated product recommendations, foot-traffic forecasts, demand estimations, customer retention and churn evaluations, and marketing campaign optimizations. In addition, the combination of machine learning and computer vision has enabled highly accurate image recognition. Google demonstrated this usage with the Google Lens application. Using Google Search and location context, users can find locations or events simply by pointing their phone at signs on buildings.
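To make the recommendation use case above concrete, here is a toy sketch of one classic approach: item-to-item collaborative filtering over purchase histories. Real systems use far richer signals and models; this simply counts co-purchases as an illustration.

```python
# Toy item-to-item collaborative filtering: recommend products that are
# most often bought together with a given SKU. Purely illustrative.

from collections import Counter
from itertools import combinations

def co_purchase_counts(orders):
    """orders: list of sets of SKUs bought together in one basket."""
    counts = Counter()
    for basket in orders:
        for a, b in combinations(sorted(basket), 2):
            counts[(a, b)] += 1
            counts[(b, a)] += 1
    return counts

def recommend(sku, counts, top_n=2):
    """Rank other SKUs by how often they co-occur with `sku`."""
    related = Counter({b: n for (a, b), n in counts.items() if a == sku})
    return [item for item, _ in related.most_common(top_n)]

orders = [
    {"console", "game", "controller"},
    {"console", "controller"},
    {"game", "headset"},
]
counts = co_purchase_counts(orders)
```

Even this crude signal updates automatically as new orders arrive, which is the "constantly updated" property the paragraph above describes.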
Image recognition within the retail sector can enable customers to find product details or related products just by pointing their camera at an item they like. For example, a customer could go into a store, take a picture of a shirt they like and instantly see images of people wearing the same shirt in different settings, sort of like a virtual mannequin.
In addition, the user could be told things such as the current price, whether the store has their size in stock somewhere on the racks, and how many loyalty points they would earn by purchasing the shirt today. As brick-and-mortar stores continue to be disrupted by online retailers, these types of experiences are badly needed to drive in-store traffic. Retailers with this technology can give today's customer the same convenience as online shopping, coupled with the benefits of shopping in store — trying on products and being in a fun and social environment.
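Under the hood, this kind of lookup typically reduces to matching the photo's feature vector against the catalog. The sketch below uses tiny hand-made vectors as stand-ins for the embeddings a real vision model would produce; the catalog and SKUs are invented for illustration.

```python
# A sketch of image-based product lookup: match an in-store photo to the
# catalog entry with the most similar feature vector. The tiny hand-made
# vectors stand in for embeddings from a real vision model.

import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

catalog = {
    "striped-shirt": [0.9, 0.1, 0.2],
    "plain-shirt":   [0.2, 0.8, 0.1],
    "denim-jacket":  [0.1, 0.2, 0.9],
}

def match_product(photo_embedding, catalog):
    """Return the SKU whose embedding is closest to the photo's."""
    return max(catalog, key=lambda sku: cosine(photo_embedding, catalog[sku]))

photo = [0.85, 0.15, 0.25]  # pretend embedding of the customer's photo
```

Once the photo resolves to a SKU, the price, size availability, and loyalty-point details described above are an ordinary inventory lookup.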
Google announced a lot of exciting new technology at I/O which can enable customers to interact with your brand like never before. Implementing many of these solutions into your customer experience has the chance to breathe new life into your omnichannel footprint.