August 20, 2018

Service Design: A glimpse into a better Customer Experience

Over the years, the conversation within the creative realm, especially around design, has blurred as the industry struggles to explain the differences between the capabilities, processes, and expectations of design. Our work has transformed further with the growth of digital technology. Today, we can do anything we dream up. Fantasy is now reality. With this in mind, companies are looking for inventive ways to differentiate themselves from equally digitally savvy competition.

 

The latest trend is an emphasis on Customer Experience -- which we define as the relationship between an organization and its customers across the full lifecycle, delivering on the individual’s expectations in each moment of the journey.

Moments can be classified as an interaction with a product, the look of the application, or even a conversation with a call center representative. Basically, any direct or indirect communication with an organization will define how the customer experience is delivered.

Now, how do you design for a better Customer Experience? The design industry has aesthetics, interactions, experiences, and services -- typically conflated to align with job postings, client requests, and the like. However, as a matter of craft, it is critical first to recognize their functional differences.

  • Interaction Design is the detailed design of how users interact with a single touchpoint composed of features.
  • Experience Design is the combination of interactions across multiple touchpoints within a user’s journey.
  • Visual Design is the balance between aesthetic elements, aimed towards improving/enhancing the brand, and guiding users through the experience.
  • Service Design is the strategic connection of experiences across user journeys to create seamless user transitions.

Each design practice has its own set of research activities and methods to achieve its stated goal. Each holds a valuable and necessary place in the design process to be successful. One practice cannot replace the other. However, when stacked together they become an unbreakable offering for the Customer Experience.

Still with me? Hopefully, we’ve clarified some of the structure for success.

Service Design is so much more than a buzzword though. Lately, it’s been defined as a method of design-thinking, an activity to sell-in a better Customer Experience, or a process to showcase the connection between an experience and backend technologies. Designers might say it’s the combination of these things plus so much more.

In our view, Service Design looks at the entire ecosystem of an organization, both front and backstage interaction points, across the lifecycle of the Customer Experience. Having a clear view of the entire operation that makes up the organization and everyone involved will allow a design team to ideate against opportunity spaces and create a one-of-a-kind service.

Service Design isn’t exclusively digital either. Most services will have an element of physical or human interaction. Digital can be the connection between the customer and these experiences. Below are some reasons companies should leverage service design and the methods to support it:

  • Bridge the gap between silos. Often, organizations aren't considering how an experience fits into the current-state journey and affects others who deliver on the service. Other times, service design can showcase what’s currently being worked on, its successes and failures, and even possible obstacles.
  • Design together by being together. When running workshops, bringing people together from across the organization allows them not only to learn from each other, but more importantly to meet for the first time, put a face to a voice, and form relationships IRL. Additionally, working together increases the speed of delivery since everyone is on the same wavelength (and timezone).
  • A helpful tool for building buy-in. Being able to view how future experiences work in harmony with both the current and future state of a service showcases the impact and projected results -- arming clients with the information to demonstrate the potential of the service.

Now that we have a shared understanding of what Service Design is and why to use it, let's talk about what it takes to execute.

McDonald’s Big Mac has its special sauce. Coca-Cola Classic has its secret recipe. Service Design has blueprints. To illustrate, designers use the method of service blueprinting to document findings and propose suggestions, as well as concepts to support those conclusions. Service blueprinting is just one method of many in a designer’s toolbox. However, when combined with the right design research activities, ongoing collaboration, and sound methodologies, I’d argue it’s the most useful artifact a design team can produce.

A service blueprint is the combination of experiences that explores the relationships between business goals, emotions, mindsets, pain points, touchpoints, and technology, ultimately creating a holistic view of the current system and a shared vision of the future. This future vision aims to showcase every experience needed to deliver a service that meets, and exceeds, the demands of the users.

Think of it as professional sports. Consider the relationship of fans watching a game and all that goes into making it happen. The players, coaches, field, uniforms, announcer, and Jumbotron are all considered the front stage. This is the first-hand experience of the fan.

The professional league, team’s owner and front-office, athletic trainers and team personnel, venue staff and vendors, camera guy for the kiss cam, etc. could be considered backstage in that they all are critical to the experience of that fan but might not be a primary interaction.

However, there’s a lot more that goes into making the event unique, and some of it might be considered even more important to the fan’s experience than the game itself. Service Design requires investigation and consideration from the moment this person became a fan of the team. Explore the implications of the fan’s decision to purchase a ticket to this particular game and who else is attending. Suggest how the fan will get to and from the game and all the activities done before kickoff. Allow the fan to have quicker entry into the venue. Help the fan make the right choice on what to eat. This doesn’t stop at the end of the game either. By delivering a better customer experience, the fan will have a reason to keep coming back and will tell all their friends about the experience.

This comprehensive view of the future is critical for organizations to align across leadership, business functions, and technology stakeholders, setting a solid foundation to work towards collectively.

With all this said, service design and the method of blueprinting are not required for every client. If the client is expecting a defined solution from a blueprint, they may be sadly disappointed. What the client will get is a series of validated concepts that their organization can deliver against for the foreseeable future -- each with moments that deliver against user demands and expectations. When the client starts to implement a blueprint, remind them of the importance of experience design and the research methods used. It isn’t just another round of research; it means going deep into that particular experience to understand the specifics.

Clearly defining the client request will direct you as to whether service design and blueprinting are the right practices to leverage. Service design is built around the value of research and the knowledge gained. Trusting the findings and insights is hard, however. It can lead to some pretty tough conversations with organizations around misalignments, conflicts of interest, and weak links on a team. If not everyone is on board, it’s not going to be a fun time.

Everything in design has its place and purpose. You’re not going to eat McDonald’s for an anniversary dinner, nor will you mix Coca-Cola with a nice glass of bourbon.

One thing to remember: A service blueprint is just a glimpse into the future and needs to be treated as a living document that can be revised, changed, and expanded on. Technology changes every day in ways that can help deliver more unexpected and delightful moments to users. The need to adapt accordingly must be baked into the service blueprint.

With the foundation set, it’s much easier to make decisions on how to approach new initiatives. If done correctly, the service blueprint will showcase gaps, both large and small, in the current service and, beyond the proposed solutions, produce a long-term roadmap outlining the opportunities and timeframe needed for success.

July 25, 2018

7 Uses of Augmented Reality That Will Matter to You, Your Business, and the World

Forget Pokemon Go.  That’s kids’ stuff.  Think instead about cardiology and the future of cities.  Think about what it is to reinvent the shopping experience.  Think about better bridges and more effective surgeries.  Think about better surgeons.  Think about more effective food distribution to nations in crisis, and more accurate strikes against terrorists.  Think about less collateral damage and improved economic policies.  Think about a trillion dollar industry.  Think about fixing your engine in minutes after a roadside breakdown. This is the promised land of Augmented Reality (AR) app development.

At YML, we are at the forefront of AR app development. From innovative mobile augmented reality designs to transformative user experiences, we have the talent and tools needed for your next AR design project.  

The future of Augmented Reality solutions

AR is a future within our grasp.  Moreover, it’s a future tech investors have sunk billions into and is already transforming how we buy, design, perceive, and think. AR is not just a flag planted in the future of commerce, it is the future of how we’ll see.  

With help from thought-leaders across marketing, tech, art, design, and medicine, we ask you to consider these seven predictions for the top future uses of augmented reality solutions.

 

Ok, AR is coming, but what does it mean for me?

AR’s best weapon, iOS ARKit, will create a new, mobile Augmented Reality world

“The first thing to understand about AR is that it will change handheld computing,” says Charlie Fink, a writer at Forbes, “first by making things we are already doing much better and more social. The camera will become the primary mobile interface for many augmented reality app developers as they design their product. FB, Google Maps, and Snapchat will certainly take advantage of it for their AR designs, as will Apple itself (here’s another chance for them to revive Apple Maps).”

In simplest terms, Apple’s iOS ARKit places a virtual world on top of the real one that is seen by your smartphone’s camera.  For the best example of how this will alter your smartphone behavior, check out this Twitter user’s application in Maps.  Watch the video and try not to think, “Wow, I will never get lost again.”  But that’s just the beginning; that’s what we can do today.  
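For the technically curious, the basic mechanics are straightforward. Below is a minimal, illustrative Swift sketch (not taken from any app mentioned here) that starts an ARKit world-tracking session and pins a simple virtual marker onto a surface the camera detects:

```swift
import UIKit
import ARKit
import SceneKit

// Illustrative only: ARKit tracks the real world through the camera and lets
// the app anchor virtual content to it. Class and node names are assumptions.
final class ARDemoViewController: UIViewController, ARSCNViewDelegate {
    private let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking maps the device's position against the real environment.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        sceneView.session.run(configuration)
    }

    // When ARKit detects a horizontal plane, pin a simple virtual marker to it.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }
        let marker = SCNNode(geometry: SCNSphere(radius: 0.02))
        marker.geometry?.firstMaterial?.diffuse.contents = UIColor.systemBlue
        node.addChildNode(marker)
    }
}
```

Everything an app draws on top of that tracked world, from navigation arrows to virtual furniture, builds on this same loop of detecting the real environment and anchoring content to it.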

Anyone who has ever assembled furniture, tried to figure out what is wrong with their car engine, or wondered more about a painting in front of them can easily understand the practical implications of having a virtual tutorial. Every sci-fi training scenario you’ve seen—from the X-Men’s Danger Room at the mansion, to the woman in the red dress from The Matrix—is one step closer with iOS ARKit and AR app development. Whether we’re training doctors, mechanics, or CIA agents, immersing them in virtual surroundings has never been easier, more thorough, or more practical than with mobile augmented reality.

Wonder what’s over the horizon? Your smartphone and iOS ARKit can literally show you.

Furthermore, iOS ARKit’s ease of use will encourage more augmented reality app development, compounding scalability. The integration of AR design into every app, on every phone, will “Make the magic happen,” says Glen Gilmore, named by Forbes as a top 20 digital media influencer. “AR will move from fun games we sometimes play to rich content and capabilities we always use.” The key here is “always,” as in part of our every day, as a part of any app.  This is how AR will scale massively and irrevocably.

That sounds like a lot of tech and not a lot of fun, except….

The world, including work, will become a game through AR design

Everyone loves games! It’s ingrained into our culture and brains from a young age. Gamification will be key to the adoption of augmented reality solutions: whether it’s sales or customer-service tasks, introducing concepts like points, rewards, and scoreboards can make learning AR extremely fun and addictive.

Jobs — everything from waiters up-selling wine to retail employees restocking shelves — can be influenced by AR design. “Brands have already discovered the benefits of gamifying their mobile apps or products, increasing user engagement and brand loyalty. Now, businesses can use the same technique to help employees feel more invested in their work, more motivated to complete daily tasks, and happier in their jobs,” says Daniel Newman, Futurum’s founding partner.  He adds:

“Create a mobile AR system with scoreboards or game-like elements that react to objects or actions in the employee’s real-world environment, and let top performers earn immediate rewards or accrue points that can be turned in for later rewards, such as gift cards or paid time off. Gamify sales and customer service training by using AR to place employees in realistic situations, and then reward correct answers and behaviors.”

How does this translate into dollars and cents for brands and businesses?

Augmented Reality in e-commerce redefines its borders

Of course, search marketing is just the first step in a redefinition of what e-commerce can be.  Investors haven’t sunk $1.7 Billion into the VR and AR design and development market for no reason. The potential is enormous - a 2015 study by Walker Sands concluded that 35 percent of consumers said they would shop more online if they could interact with products virtually, and that was three years ago! As a recent Entrepreneur article explained, augmented reality solutions will make shopping more efficient, novel, and enjoyable in the following ways:

  •  Usefulness – Sephora’s mobile augmented reality app uses ModiFace tech to let users take selfies and then virtually apply makeup to their faces before making a purchase decision.
  •  Original ideas for the shopping experience – Apply Warby Parker’s “try-on” functionality, virtually, to a myriad of products.
  •  Customizability – Stores famous for customer service, like Nordstrom, could enlist AR app developers to design products that make their sales reps available while browsing, at point of sale, in the dressing “room,” or even when matching a recently purchased item with one’s wardrobe.

“Consumers are on the edge of widespread AR adoption,” says Brad Waid, international speaker named as the #14 top influencer in Augmented Reality.  “The world where Minority Report meets Joe Consumer is just around the corner.”

There is no end to the options: see your pizza cooking in the oven; have the chef take you through the tasty meal they designed; test to see whether the Barbie dreamhouse will fit in your child’s room; try the turning radius of a new car for your driveway…these all add up to a better, more thorough shopping experience. By leveraging augmented reality solutions, the result is more satisfied customers, more delight in the experience, more goodwill towards brands. With augmented reality in e-commerce, shopping online goes from passive to active. Discover the impact that augmented reality app development can have on your success by enlisting our agency’s help in the design, development, and deployment. YML can assist you in becoming a leader in your industry with our award-winning AR mobile designs.

AR will let us see, try on, experiment with, and visualize the items we are looking to buy in a way that will render our current use of the word “search” redundant.  You will be able to see inside the mall or the aisle, know the ripeness of the fruit, assess the chaos of the checkout line, before you ever leave your home.  Life, especially consumerist life, is about decisions and they just got easier and better informed thanks to augmented reality in e-commerce.

“Augmented reality will contextualize our reality,” says AR expert Cathy Hackl. “This is the key. It will change not only the way the consumer experiences a brand but also change their behavior. You’ll start to see a shift in the way people shop for clothes with AR mirrors and AR apps that facilitate shopping for them.”

Partnering with a top augmented reality developer is the best way to ensure you provide customers with the ultimate e-commerce experience, and leverage this technology to yield maximum engagement and profits.

Ultimately, it all comes down to the product, which gets better, too.

Industrial Design gets a new engine thanks to Augmented Reality developers

Product design is one area where AR’s involvement is a virtual no-brainer. It will be a revolution.

Augmented reality solutions will empower designers to ditch 3D models and actually sit inside the cars that they’re creating. The cost of testing and experimenting drops dramatically. Designers can try more things; cars get better and cheaper (less R&D equals a lower price sticker).

And this is just cars. Imagine what can be done for shipping, nuclear facilities, electrical plants, large-scale farms and factories, or machines that make other machines.

“When you look at, for instance, a 3D model inside a computer screen, you can’t truly understand its size in relation to the objects around it or the space that it’s supposed to be used in,” says Steven McMurray, senior product strategist at YML. “AR will have an immediate impact in solving this problem.”

AR will allow designers to stop imagining their product, and to see it and its applications, shortcomings, and potential--long before they begin to build it.

So, this is just about selling people better things?  No, we’re talking about…

AR design ensures safer and better working conditions

Economies, local or global, that depend on natural resources and manufacturing look to factories, mines, plants, and assembly lines as the vital arteries that carry their lifeblood. Augmented reality solutions will make these environments not only safer, but more productive and better equipped to deal with accidents.

Companies that are world leaders in professional-grade augmented reality app development, like DAQRI, have already produced a smart helmet that empowers workers and operators to become aware of unseen anomalies in their highly active, high-stress environments.  Thanks to the helmet’s Intel processor, workers can collect environmental data to spot dangers well in advance of any potential  breakdown, leak, or catastrophe.

Everywhere from the robotics assisted assembly lines in Detroit, to mega-factories in China, to potash mines in rural Canada, workers will be safer, more protected, and ultimately more productive thanks to the work of augmented reality developers.

 

That’s great for workers, but what about the rest of us?

Cities and their communities will benefit from AR app development

Every day, cities collect and maintain huge amounts of data, from how many people cross a street in a given day to traffic patterns to criminal records. Now think about how AR apps can make the best use of all of this data. Municipalities spend huge amounts of money planning for eventualities, from the next big snowstorm to something more caustic, like a terrorist attack, riots, the outbreak of an illness, or even a nuclear meltdown. They do this by simulating hypothetical situations and training their first responders accordingly. This is, simply put, the whole ballgame. The people in charge respond to emergencies the way they’ve been trained to.

Augmented reality app development will change and enhance this process. Police, fire departments, and health practitioners — as well as those who direct them — could be made to “see” how such scenarios play out through mobile augmented reality solutions. Naturally, this will help them respond more efficiently.

But there are larger implications for cities that leverage AR app development. After all, no matter how dire, emergencies are rare. Architects and city planners will gain an entirely new world of transportation grids and cityscape models, built with AR design, to demonstrate their plans to clients, city councils, and other officials.

Are there any benefits for the world at large?

Higher-level healthcare - mobile Augmented Reality provides access for millions

“As a former Registered Nurse, I am bullish on Augmented Reality and its future uses,” says Tamara McCleary, CEO of a sought-after tech and health marketing agency.  She points to the fact that 40 percent of nurses fail on the simple process of IV insertion on the first try. “Now we have at our fingertips AR devices on the market right now that externally visualize the vein of a patient and show the healthcare provider a clear 3D outline of the exact location of the patient’s veins and their precise anatomic structure. Your healthcare provider can see where the valves of the vein are located along the entire blood vessel, enabling a near perfect placement of the needle in just one perfect stick.”

We’re talking about IVs, the starting point of medical procedures. Imagine the possibilities with stents, brain surgery, ablations, and ligament repair once the potential of augmented reality solutions in healthcare is harnessed. “The incredible consumer relevance for anyone being able to harness the power of AR for surgery is limitless,” says McCleary.

An injured ligament is a perfect, layered example: AR is employed by the doctor to enhance the success of the surgery, and then by the patient during physical therapy. Mobile augmented reality could be used to create guides for exercises, helping to hasten repair time and protect against re-injury—all the while gamifying the process.

It’s not just about the market either.  She points to situations where a doctor is not available, where citizens or soldiers are forced to perform procedures.  “Augmented reality actually shows you on a 3D image what to do, where to cut, how deep, what it should look like.”  Moreover, for the 5 billion people worldwide who do not have access to safe and affordable surgery, “The lifesaving capacity and true hope that AR brings is mind-blowing to say the least.”

Conclusion

Augmented Reality is coming to every phone, app, and quite possibly, every surface in our lives.  It will redirect us, guide us, and help us make better choices, not only in a consumer context, but in our quotidian lives.  

New AR designs will change healthcare, production, design, marketing, advertising, and the entire shopping experience. It will change how we learn and how we communicate. It will change everything. AR will become inseparable from our apps and phone functions. As augmented reality app development continues on its path towards even greater innovation, this technology will evolve beyond mobile devices. Then, its true potential as a universal technology can be more fully realized by augmented reality developers, businesses, and individuals alike.

These changes are coming. It’s time to prepare. And then innovate the next change. Contact YML today about how we can help develop and launch your AR design.

April 30, 2018

How we hire people

Building a world class design team at YML

I have hired over 100 people in my career.

One of the best was a cartographer, fresh out of college — a cartographer is a map maker, if you don’t know. He was a Frenchman, lovely guy, and I remember his interview well. He said there are not a lot of opportunities in the map making world, but it was his passion. He was a talented designer, his maps were beautiful, and he knew how to code. A project he showed me was an interactive map of Afghanistan and Pakistan, showing drone strikes and the estimated number of casualties at each location. He had sourced the live data from public records, and turned it into a human story. It was very moving. I was blown away. Very humbly he asked, “What could a map maker do here, at a digital agency?” I had to think for a minute, but my answer was “We make maps of the internet.” Sure, it was glib, but it sparked his imagination and the conversation turned to mapping the abstract realm of the worldwide web. He became one of the best UX and systems thinkers I have ever met. He could visualize the tangled mess of connections, user journeys, data points, etc. and redesign them with a simple precision that made me want to cry.

Over the years I have hired many folks with different strokes: architects, fashion designers, industrial designers, even one guy—an embedded programmer—who made parking meters. And they all taught me a valuable lesson: amazing talent can come from anywhere, all they need is a compelling portfolio and a chance to tell their story.

Cool right? Here’s how.

The portfolio

At YML, before we consider interviewing anyone, we look at their portfolio—comparing it to all the other candidates’. A portfolio is your calling card—it should not just show what you have done, but what you can do, what you want to do. We have all seen plenty of portfolios and have a pretty quick read on good vs. bad ones. A good portfolio shows work that’s ambitious and inspiring, and very well executed. Thoughtful, beautiful designs, process breakthroughs, clever ideas, and slick interactions, all jump out of the screen. As do glaring errors, typos, thoughtless designs, awkward process decisions, unworkable interactions, etc.—these will all get a candidate blacklisted, struck off the list of potential hires. Great work is important, but an exceptional portfolio site should be a good user experience too. Consider the audience: busy executives. Trust me, we don’t read much, so don’t write much. Let the work do the talking, focus your words on big, significant ideas, compelling points, quotes and callouts. Curate only your best work, because one bad project gets an instant rejection. If in doubt, don’t show it, or better still, dig deeper and make it great.

Additionally, we prefer real portfolio sites. Dribbble is okay, Behance too, but if you’re shooting for a senior position, you will need a bit more vision, process and/or storytelling to support your work. At best, Dribbble can be a very good place to show your interaction and visual design—but at its worst, it’s superficial eye candy. For more on this, read this fantastic article, The Dribbblisation Of Design.

The interview

Okay. So that’s how to get a foot in the door. What’s next? The interview, of course. Here’s a mental checklist we apply to interviewees when we meet them:

1. Energy: Do you bring it? Do you take it?
For me, this is the number one criterion. I can feel it when I meet someone. Are they inspired? Do they inspire? Is this a job or a lifestyle? We work in small teams, oftentimes in small rooms, with big clients. People who bring energy, who inspire others to do great work, they are the magic ingredient for this model.

2. Empathy: Do you have feeling? Can you connect?
We create products and experiences for people from all walks of life. We must understand them first, so we can design something they want. Empathy, listening, and responding is key to the design process. And it’s important in how we work together as well—we, of course, don’t tolerate jerks—even if they are talented.

3. Culture fit: Do you fit in, but add something as well?
We have a fantastic, inspiring, collaborative, nurturing culture of talented grownups, and we want to preserve it and enhance it. However, we aren’t seeking uniformity. Diverse backgrounds, approaches and opinions are welcome, and help make our work and our culture better.

4. Presentation: How well do you communicate your work?
We look for excellent communicators—verbal, written, and visual—ultimately entrusting them to present our work to clients and internal stakeholders. For entry and mid-level positions, just going through some portfolio projects will do just fine — but for senior hires, a presentation is required. A good presentation is a clear articulation of the problem, and the path from strategy to design.

5. Experience: Do you know how to get things done?
This is definitely not a question of length of experience, which is irrelevant. Instead, it’s an assessment of the kind and quality of experience—a candidate’s understanding of the tools and processes, pitfalls and opportunities, common in the job. Inexperienced people won’t hit the ground running, or worse, they can misdirect the process, waste time and resources and negatively affect the quality of our work.

6. Attitude: Are you all in? Do you want it?
Skills can be taught. Attitude can’t. In an industry that’s always changing, someone with a good attitude looks for challenges and is constantly thinking of ways to improve and progress. We want people with positive attitudes who are upbeat, eager, and solutions-focused. We find they thrive on feedback, embrace change, and own it with a smile.

7. Impact: Will you make a difference?
Last, but certainly not least, we want people that we know will have an immediate, positive, lasting impact—on the work, on our clients, on YML. We’re building a world class design team, looking for complementary skillsets, backgrounds and approaches. We don’t want to hire the same kind of designer over and over again. We look for folks who will make our team greater than the sum of its parts.

One more thing

We definitely do not look for an Ivy League education—or any education for that matter. We simply don’t care if you went to Harvard, or never went to school, never studied, come from an underprivileged background, were homeschooled, or are completely self taught. So long as you do great work, have the right attitude, and know how to get the job done, you’re in.

And that’s it. If this sounds like you, or someone you know, get in touch. Also, any interview goes two ways. If you have thoughts on what you look for in an interview, we’d love to hear them.

Good luck!

December 19, 2017

Two Kolkatans Meet At Indiana’s Purdue University; Launch Y Media Labs In Silicon Valley

This article was originally published at Forbes. Check it out here.

Sumit Mehra and Ashish Toshniwal enjoy a morning coffee at Café Un Deux Trois, just off Times Square in New York City. They are the co-founders of Y Media Labs, a 270-person company that specializes in creating consumer apps for clients like Home Depot, PayPal, Salesforce and Staples; 27 of their clients are Fortune 500 companies. Y Media Labs is headquartered outside San Francisco in Redwood City. This morning in New York City, the founders are preparing for various meetings with clients, investors and their local Y Media Labs team.

Sitting in one of the bistro’s red booths with a street view, Mehra and Toshniwal, both in their mid-30s and dressed in business casual wear, explain the evolution of Y Media Labs. Their story includes many serendipitous twists and turns, most notably growing up two miles apart in Kolkata, India, but not meeting until they were students at Purdue University in 2001.

Sumit Mehra and Ashish Toshniwal, co-founders of Y Media Labs, enjoy a morning coffee in New York City.

The two had been tinkering away with startup ideas since 2004, but 2009, at the height of the economic recession, became a pivotal year. Both had been approved for H-1B work visas after a long, arduous process, even though the usual quotas had not been filled.

[Read more about obtaining work visas and their suggestions to foreign-born entrepreneurs who want to launch in the U.S. here.]

In 2009 they both quit their jobs, Mehra at Yahoo, Toshniwal at a startup sold to Google. And most significant to their future, Apple opened their app store to outside developers. “That became really interesting,” says Mehra, “that’s when we started.”

Toshniwal puts the 2009 mobile landscape in perspective: “iPhone was a new technology.” The two get extra animated as they joke about the typical Wall Street guy with his Blackberry, the dominant mobile device at the time, who thought iPhone apps were just for high school kids downloading games for 99 cents, not enabling billions of dollars of transactions.

One of Y Media Labs’ first corporate clients was Safeway, headquartered in a multiple-building complex in the Bay Area. Toshniwal gleefully remembers standing in front of the VP, using the term “in our company” in their pitch. “And the company was right inside that room!” exclaims Toshniwal, prompting hysterical laughter from both. “Fortunately, we got the business,” says Toshniwal. They began to get additional clients like Sesame Street, Foot Locker and Symantec.

“We engineered the SEO really well,” explains Toshniwal. “Someone would type in ‘iPhone developer,’ and our name sometimes would show up even before Apple,” he continues. “It was shocking!” They also got clients through word of mouth.

In 2010, they helped create the Montessorium app, teaching kids ABCs and 123s via touch on the iPhone and iPad. Forty-eight hours after the app became available for download, which happened to be Apple’s 54th app, Toshniwal says they received an email: “Thank you, let me know how I can help. I love what you are doing. Best, Steve.”

After a moment of giddy silence, they clarify in unison, “Steve Jobs!” Mehra whips out his phone to find the old email from Apple’s co-founder. “It came at 3 am in the morning,” Mehra says with glee.

They recall talking each other down, choosing to believe it was someone playing a joke on them. Until the founder of Montessorium, who had received considerable pushback from Montessori groups, as their method of teaching and learning isn’t about touching a screen, also received an email from Steve Jobs. “Keep doing what you are doing, you’ll prove the critics wrong,” Toshniwal and Mehra recall.

The only funding Y Media Labs received in the early days was loans from family and friends, helping to bootstrap the business. “I think we paid the loans in the first 12 months,” says Mehra. MDC Partners became investors in 2015, allowing Y Media Labs to scale, opening offices in Indianapolis, Atlanta and Bangalore, India, in addition to the New York and Silicon Valley locations.

Being immigrant co-founders has been an asset to the business, both Mehra and Toshniwal believe. “Entrepreneurs need to be scrappy, frugal,” says Mehra, “that’s a really important skill set. To be able to do a great job when you have nothing. Right? How do you make everything out of nothing?” asks Mehra rhetorically. “When you grow up in different parts of the world, the opportunities and challenges are very different,” he adds.

“In the Bay area,” says Toshniwal, “you order this frappuccino…” Toshniwal pauses, noting that many get really upset if the wrong type of creamer has been used. “Come on!” he exclaims, “Get perspective here. I mean, where we come from… “ Mehra quickly interjects, “There is only one option!” They laugh, riffing off each other, understanding both worlds and the absurdity.

The conversation takes a serious turn. “The advantage we have,” says Toshniwal, “is not having this sense of entitlement.” Mehra agrees with a quick affirmative. “And being grateful for what we have,” Toshniwal continues, “I think that’s a huge advantage.”

Both their fathers were local business owners in Kolkata; Mehra’s worked with cotton yarn; Toshniwal’s with lighting. “They didn’t have huge VC funds to build their businesses. They built their businesses organically every single year,” says Mehra, “I think there was a lot of learning just looking at them.”

Y Media Labs employees also benefit from Mehra and Toshniwal’s sense of gratitude. Mehra proudly shows a slick video of a lush Hawaii property set to a cool jazz soundtrack, which they rented for their employees, free of charge. Mehra guesses 85 to 90% of the company has vacationed there since January. “This comes back to being grateful,” says Toshniwal, calling 2016 a “phenomenal” year.

Y Media Labs' Hawaii House

“U.S. society makes it so easy for anyone to start a business,” says Mehra, noting that the bureaucracy and corruption found in many other countries often mars the intent of building a business. “That’s what I love about the U.S.,” adds Mehra, “if you are somebody with ambition, you can do wonders. In the process, create so many jobs, build things that take us as a society forward.”

But Y Media Labs, or any other startup in the U.S. for that matter, didn’t simply arrive on a silver platter. “You’ve got to dig for opportunities, find them and work towards it,” Mehra says. Toshniwal adds, “It took us literally four years before we were a legitimate business; four years is a lot of time.” Mehra says, “So you are not always going to get it right the first time.”

November 9, 2017

What YML is Doing to Help Reshape Early Education Learning in a Screen Obsessed Society

At some point in our lives, each one of us has experienced a feeling of complete awe while watching a small child expertly swipe to unlock a smart device or open mobile apps to enter their own digital universe.

Over here in the YML labs, we have long been curious about the role that emerging technologies play in learning and development for young brains.

Our curiosity started a few years back when we teamed up with education startup Montessorium to develop a full app suite aimed at empowering kids to learn at their own pace. That project turned out to be especially rewarding because we provided kids a fun way to learn everything from the alphabet to international geography. We also got some digital recognition from Steve Jobs once the app launched.

Since then, our curiosity in learning and development in early education has only grown, especially as AI development continues to advance in great strides. We decided to further explore how we could bring the power of artificial intelligence tools to education.

What research is saying:

Research shows that during the preschool years, expansive psychological growth takes place and the brain is particularly sensitive. We know that screen time can greatly affect the forming of neural pathways and the way brains develop.

But because these technological changes are still so new, their effects are relatively unknown and often debated. Decades ago, researchers learned that young brains need tons of stimulation to develop normally.

As a result, parents were encouraged to expose their children to as many sensory stimulations as possible. Later, digital designs and technology became more integrated into our everyday lives. We started seeing studies suggest that children who had too much screen time were more likely to develop ADHD.

For instance, in one particular study, young mice were exposed to six hours of a light and sound show on a daily basis. Results showed that there were "dramatic changes everywhere in the brain,” Jan-Marino Ramirez, director of the Center for Integrative Brain Research at Seattle Children's Hospital, told NPR.

Results like this lead some researchers to believe that our brains being wired up all the time can’t be a good thing. We weren’t built for this kind of over-stimulation. On the other hand, some researchers believe that our brains have to evolve in the way they process information because our world is becoming increasingly fast-paced.

In the mice study mentioned, mice that were exposed to stimulation were able to stay calm in environments that typically stressed out those who didn’t experience as much excitement.

Leah Krubitzer, a neurobiologist at the University of California, Davis, thinks studies like this show that the benefits of an overstimulated brain may outweigh the negatives. During last year’s Society for Neuroscience meeting in San Diego, Krubitzer explained that we already live in a world where overstimulation is the reality. This means our brains have, whether we like it or not, already changed. Using technology correctly, in a useful, healthy way, is just the kind of stimulation that will prepare children for an always-on, fast-moving world.

Because what other option do we have? We can’t turn back the clock. We can’t teach our kids in the archaic ways our grandparents were taught. Those good ol’ days don’t exist anymore.

"Less than 300 years ago we had an industrial revolution and today we're using mobile phones and we interact on a regular basis with machines," Krubitzer said during the meeting. "So the brain must have changed."

The truth of the matter is, the data on how screen time affects the brain isn’t extensive enough to draw sweeping conclusions. Just consider how last October the American Academy of Pediatrics lifted its longstanding rule against any screen time for kids under two, a standard that had been in place since 1999.

The current recommendation is the result of new research. It states that young children should get screen time that helps them develop the ability to transfer knowledge from screens to the real world.

Daniel Simmonds, a resident pediatrician at the University of Maryland in Baltimore who has a PhD in neuroscience, says the key is to stick to the middle ground somewhere between the past and the future. So let your little ones interact with technology, but don’t let AI replace social human interactions.

“So much of our brain is dedicated to sensing things and making movements around [those physical things],” said Simmonds, pointing out that a show like Sesame Street “helps kids learn but it’s not going to help them learn if you just sit them in front of a TV without any human interactions.”

Further proving Simmonds’ point, a 2015 study found that when iPads were given to kindergarten students to share, those students outperformed students who had their own iPads.

The study’s researchers suspected that those results have to do with the fact that sharing an iPad boosted social interactions. This type of camaraderie is crucial for development in young children. Perhaps even more telling, students who weren’t provided iPads at all scored much lower on their end-of-year achievement test compared to students who had access to an iPad.

Breaking through archaic ways of learning

Armed with the knowledge that back-and-forth interactions between children and a caregiver are critical to language and brain development, we built an educational app that uses machine learning and image recognition to help create engaging, interactive moments. In this particular project, when our custom-built iOS app asks, "Can you show me the flag of Canada?" image recognition is used to identify whether the child is holding up the correct index-sized flag or not. This recognition happens in real time.

Teaching children about the flags of the world’s countries requires a bit of focus on machine learning. Getting image classification and detection to work offline in real time required several important decisions, starting with whether we wanted to take an image classification or an object detection approach. In the end, we decided object detection would give our users the flexibility of showing multiple types of flags at once.

Other important decisions we had to make included selecting the right framework for mobile devices, the network for object detection, and feeding the right data into the network during training. All of these decisions are crucial to the performance and accuracy of the end product.
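To make that concrete, here is a minimal Swift sketch of how on-device detection can be wired up with Apple's Vision and Core ML frameworks. It is illustrative only: the FlagDetector model class is a hypothetical stand-in for whichever network the team actually trained, and the real app involves far more than this.

```swift
import Vision
import CoreML

// Hypothetical illustration: "FlagDetector" stands in for the trained
// object-detection model; it is not the actual model name.
final class FlagRecognizer {
    private let request: VNCoreMLRequest

    init(onDetection: @escaping (String, Float) -> Void) throws {
        let coreMLModel = try FlagDetector(configuration: MLModelConfiguration()).model
        let visionModel = try VNCoreMLModel(for: coreMLModel)
        request = VNCoreMLRequest(model: visionModel) { request, _ in
            // An object-detection model returns labeled bounding boxes,
            // so several flags held up at once can all be recognized.
            guard let observations = request.results as? [VNRecognizedObjectObservation] else { return }
            for observation in observations {
                if let top = observation.labels.first {
                    onDetection(top.identifier, top.confidence)
                }
            }
        }
    }

    // Called for each camera frame; inference runs on-device, so it works offline.
    func process(pixelBuffer: CVPixelBuffer) {
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
        try? handler.perform([request])
    }
}
```

Because the model runs entirely on the device, the detection loop keeps working with no network connection, which is what makes the offline, real-time requirement achievable.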

Ultimately, we are excited about the beginning of our exploration and the possibilities ahead!

As humans, our history stretches back hundreds of millions of years and like all biological traits, our brains have changed just like the world around us. We can’t expect to get by on outdated ways of learning. However, the key here is to not use AI tools in a way that is meant to replace humans. The true power and purpose of technology is not to substitute for human interactions but to enhance that experience and bring to life what hasn't been imagined yet. Much like Simmonds said, the key is to take advantage of what AI development can offer, and that’s a lot when it comes to sharpening young minds through learning, interacting, and communicating.

“The integration of technology and physical learning is not new, and there’s a lot of potential for it,” said Simmonds.

July 25, 2017

The Bartender of the Future?

We're big fans of giving our teams open space and time to breathe life into creative innovation. There's a lot of focus on the day-to-day, but it's important to step back and think even more future-forward.

The YML Hackathon is one of our favorite ways to do just that. Every year we encourage employees around the world to break away from client work and dedicate 24 hours to creating with cross-functional teams.

Last year, our teams designed Kontrol, a new app for Tesla. This year? The bartender of the future. 

Niq is an intelligent bartender who lives inside of a motorized bartending station. He can recognize people and greet them by name, tell corny bartender jokes, give drink recommendations, and take orders for your favorite cocktails without a click of a button. Oh, and did we mention he has a British accent?

“Tapping buttons is just so old school,” says Sr. Product Manager Steven McMurray. Steven worked alongside a team of two engineers, two designers, and our head of recruiting to build and design Niq. “We built this as a proof of concept to show how the marriage of technologies such as Machine Learning, Computer Vision, Artificial Intelligence, IoT, etc. will come together to create amazing user experiences in the near future.”

How it works

When you walk up to the device, Microsoft’s facial verification APIs allow Niq to recognize faces and greet people by name. Siri translates speech to text to help Niq process what the user is saying. API.ai was used to create the interaction model that takes the text from Siri and makes sense of it, allowing Niq to respond appropriately.

To make Niq feel more human, AWS Polly helped turn text into lifelike, customizable speech using deep learning. Once Niq gets a command back from API.ai that a user wants a drink, an Arduino board takes commands via Bluetooth from the iPad to automatically start pouring the proper combination of alcohol and mixers into your glass. See the prototype in action:

https://vimeo.com/226803603
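For a sense of how the pieces fit together, here is a rough Swift sketch of that pipeline. The service wrappers (SpeechToText, IntentService, SpeechSynthesizer, PourStation) are hypothetical stand-ins for the speech, API.ai, Polly, and Arduino integrations, not the actual hackathon code.

```swift
import Foundation

// Rough sketch of the flow described above; every type here is an
// assumption for illustration, not the YML implementation.
struct DrinkOrder { let drinkName: String }

protocol SpeechToText      { func transcribe(_ audio: Data) -> String }
protocol IntentService     { func parse(_ utterance: String) -> DrinkOrder? }
protocol SpeechSynthesizer { func say(_ text: String) }
protocol PourStation       { func pour(_ order: DrinkOrder) } // Bluetooth command to the Arduino

final class Bartender {
    let speech: SpeechToText
    let intents: IntentService
    let voice: SpeechSynthesizer
    let station: PourStation

    init(speech: SpeechToText, intents: IntentService, voice: SpeechSynthesizer, station: PourStation) {
        self.speech = speech
        self.intents = intents
        self.voice = voice
        self.station = station
    }

    // One trip through the pipeline: audio in, drink (and banter) out.
    func handle(audio: Data) {
        let utterance = speech.transcribe(audio)          // speech -> text
        guard let order = intents.parse(utterance) else { // text -> structured intent
            voice.say("Sorry, I didn't catch that. What can I get you?")
            return
        }
        voice.say("One \(order.drinkName), coming right up.")
        station.pour(order)                               // iPad -> Arduino over Bluetooth
    }
}
```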

Each time users order a drink, Niq saves their preferences and can be programmed to cut people off when they've had too many. While Niq is just a 24-hour prototype as of now, Niq 2.0 (see mock-ups below) will use machine learning to power a recommendation engine that can suggest drinks that people with similar tastes might enjoy.

             

Cheers!

March 16, 2017

Innovations in Healthcare: Leveraging Alexa for Patients on the Box Platform

We’ve been working with our friends at Box to imagine and execute a future scenario in healthcare, applying Amazon Alexa and Box Platform to the challenge of tracking medications and compliance with a medication regimen for patients in the home.

As Ross McKegney, Director of Platform at Box, recently announced:

At Box we spend a lot of time thinking about the future of work and building the cloud content management platform that will make this future a reality. Today we’re delighted to highlight one of our partners on this journey, Y Media Labs, who is working with Box to develop a series of visionary demo applications for regulated industries.

Take a look at how our Alexa skill can help patients and hospitals manage drug intake, help insurance agencies ensure regulatory compliance, enable physicians to provide personalized care, and track drug efficacy during drug trials.

At Y Media Labs, we build what others don’t dare to.

Are you interested in taking your personalization strategy to the next level, getting more sales and driving up your customer engagement metrics?

We can help you get there, but don’t take our word for it – our work speaks for itself.

February 10, 2017

Kontrol for Tesla App: New Ways to Optimize Your Driving Experience, in Style

We at Y Media Labs love Teslas.

We don’t just think Teslas are good for the environment, or that they’re the future of transportation. We also think Tesla cars are redefining drivers’ holistic user experience, comfort, safety and habits.

Tesla’s motto is “not to let the perfect be the enemy of the better.” That may be the case with the company’s new venture into the autonomous car market, but when it comes down to the user experience inside the car, we think we can aim for perfection . . . or at least simplicity.

Or maybe both!

Many developers struggle to find the perfect balance between simplicity and functionality. In general, most end up sacrificing one for the other, and for good reason: allowing users to complete a wide variety of tasks while keeping the overall experience simple and intuitive is no small feat.

With Kontrol for Tesla, we really wanted to achieve both – and we are confident that we did.

The Innovation Labs program at YML is responsible for identifying new and exciting ways to leverage existing technologies to make people’s lives better, easier and, yes, more fun! Every month we turn our attention toward inventing or improving a mobile user experience and releasing the resulting apps for free. This month’s subject is Tesla.

Other Projects: Kontrol for Nest Thermostat | Kontrol for Tesla

Concept

Kontrol for Tesla: New app, endless opportunities


Our creative and development teams joined forces to build and launch Kontrol for Tesla, a new mobile application available today in the iOS App Store that Tesla owners all over the world can use to make their driving experience easier, better, more convenient and more efficient.

Kontrol for Tesla delivers all the features already available in the current Tesla app, plus a few more.

And because we’re such huge fans of Tesla, the app is completely FREE of charge for all those Tesla lovers out there to use!

Design

A mobile experience to match the grace of the vehicle


We wanted to design an app that the Tesla community would love. Tesla is an amazing brand with cutting-edge products, so it wasn’t difficult to find a creative direction in line with what the company offers to their customers.

The design was inspired by simply sitting in a Model S and feeling that the app should represent the vehicle it is controlling: it should be sleek, intuitive, and follow clean lines.

We also wanted to give the app a new, modern and uplifting makeover.

When iterating on the UI, we selected a dark color palette. Color was stripped away from the surface to make important information like battery charge levels and temperature controls pop out. The app was designed to be highly functional, yet feel upscale at the same time, just like a Tesla. We wanted to create an app with an easy balance between functionality and modern aesthetics.

Function

What’s new with Kontrol for Tesla: Simple interface, Touch ID car start, smart venting, 3D touch car unlock, honk and battery status


First and foremost, the Kontrol for Tesla app keeps all the current functionalities of the Tesla app it got its inspiration from. If you start using our app, you will not lose access to any of the current benefits of the original version.
Kontrol for Tesla allows you to do all these amazingly cool things that you are already familiar with from Tesla’s own signature app:

  • Check charging progress in real time
  • Change the temperature in your Tesla before driving
  • Locate your car and track its movement
  • Flash lights and/or honk to find your car in a parking lot
  • Vent or close the panoramic roof
  • Lock or unlock car

But why create just a clone of the app? Instead, we wanted to wow you with additional functionalities as well, which is why Kontrol for Tesla gives you access to these new and exciting features only available on our application:

  1. Start your car with Touch ID – you won’t need to type in your password every time.
  2. Smart Climate - remotely heat up your car to your desired temperature before you even leave the house.
  3. Unlock, Honk and Check Battery Status with 3D touch or through the widget. No need to ever log into the app.
  4. Smart vent the car when it gets too hot! We’ll detect the temperature differential between the internal cabin and the external environment to adjust the sunroof for you (this feature will only be enabled for Tesla models that have a sunroof). A simplified sketch of this decision logic follows below.
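To illustrate the kind of check involved, here is a tiny Swift sketch of smart-vent decision logic. The thresholds, types, and names are assumptions for illustration, not the shipping Kontrol for Tesla implementation.

```swift
import Foundation

// Illustrative only: threshold and types are assumptions.
struct CabinReading {
    let insideTempC: Double
    let outsideTempC: Double
    let hasSunroof: Bool
}

enum VentCommand { case open, close, leaveAsIs }

func smartVentDecision(for reading: CabinReading,
                       maxDifferentialC: Double = 8.0) -> VentCommand {
    guard reading.hasSunroof else { return .leaveAsIs }   // only sunroof-equipped models
    let differential = reading.insideTempC - reading.outsideTempC
    if differential > maxDifferentialC { return .open }   // cabin much hotter than outside: vent
    if differential <= 0 { return .close }                // cabin no warmer than outside: close up
    return .leaveAsIs
}
```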

Security

Kontrol for Tesla: Our process for handling best-in-class mobile app security protocols

We take privacy and security very seriously, and we understand that the data we are handling is very sensitive and NOT owned by Y Media Labs.

So here’s what we did to ensure that our beautiful application is completely in line with the best mobile app security practices, giving customers complete peace of mind while using our product:

User Credentials

Our app does not store the user’s credentials (username and password) itself. Instead, this information is stored in Apple’s iOS Secure Keychain. Data stored in the iOS Keychain cannot be accessed by other applications installed on the phone.

Additionally, user credentials are never sent to Kontrol for Tesla’s own server; they are only used when communicating directly with Tesla’s API server. Even while the app is running, we do not save or store the user credentials. Lastly, the app uses Touch ID authentication before accessing our users’ credentials from the iOS Secure Keychain.
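For readers curious what this pattern looks like in practice, here is a minimal Swift sketch: write the secret to the Keychain, and gate reads behind a biometric check. The service and account identifiers are illustrative, not the actual Kontrol for Tesla values.

```swift
import Foundation
import Security
import LocalAuthentication

// Minimal sketch of Keychain storage plus a Touch ID / Face ID gate.
// Identifiers below are hypothetical.
enum CredentialStore {
    private static let service = "com.example.kontrol.credentials"

    // Save the password in the iOS Keychain; other apps cannot read this item.
    static func save(password: Data, account: String) -> Bool {
        let query: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrAccount as String: account,
            kSecValueData as String: password
        ]
        _ = SecItemDelete(query as CFDictionary)   // replace any existing item
        return SecItemAdd(query as CFDictionary, nil) == errSecSuccess
    }

    // Require biometric authentication before reading the stored credentials back.
    static func loadAfterBiometricCheck(account: String,
                                        completion: @escaping (Data?) -> Void) {
        let context = LAContext()
        context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                               localizedReason: "Unlock your Tesla credentials") { success, _ in
            guard success else { completion(nil); return }
            let query: [String: Any] = [
                kSecClass as String: kSecClassGenericPassword,
                kSecAttrService as String: service,
                kSecAttrAccount as String: account,
                kSecReturnData as String: true,
                kSecMatchLimit as String: kSecMatchLimitOne
            ]
            var result: AnyObject?
            let status = SecItemCopyMatching(query as CFDictionary, &result)
            completion(status == errSecSuccess ? result as? Data : nil)
        }
    }
}
```

Because Keychain items are scoped to the app, other apps installed on the device cannot read them, which is the property described above.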

HTTPS and App Transport Security (ATS)

From iOS 9.0 onwards, Apple requires apps to use a technology called App Transport Security, which enforces secure, HTTPS-based client-server communication. Our app adheres to this practice.

SSL Pinning

Anytime our application communicates directly with Tesla’s API server, the server will provide a certificate to the app. To evaluate the legitimacy of the communication, we adhere to the following steps (a simplified code sketch follows the list):

  • The app first evaluates the certificate provided by the Tesla server to check whether it is signed by a trusted Certificate Authority (CA).
  • We check that the certificate provided by the server contains the Tesla API domain in the response.
  • Finally, if these two checks pass, we match the certificate provided by the server against the certificate bundled with our app.
  • If the user's device is jailbroken, we prevent the user from using the app and clear their credentials, session token and local preferences.
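Here is a simplified Swift sketch of how this kind of certificate pinning is commonly done with a URLSession delegate. The bundled certificate filename is a placeholder, and production pinning code needs more thorough error handling and a certificate-rotation strategy than shown here.

```swift
import Foundation
import Security

// Simplified pinning sketch; not the actual Kontrol for Tesla code.
final class PinnedSessionDelegate: NSObject, URLSessionDelegate {
    // DER-encoded certificate shipped inside the app bundle; name is hypothetical.
    private lazy var pinnedCertData: Data? = {
        guard let url = Bundle.main.url(forResource: "tesla-api", withExtension: "cer") else { return nil }
        return try? Data(contentsOf: url)
    }()

    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
              let trust = challenge.protectionSpace.serverTrust,
              let serverCert = SecTrustGetCertificateAtIndex(trust, 0),
              let pinned = pinnedCertData else {
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }

        // Steps 1 and 2: standard trust evaluation covers the CA signature and the hostname check.
        let hostPolicy = SecPolicyCreateSSL(true, challenge.protectionSpace.host as CFString)
        _ = SecTrustSetPolicies(trust, hostPolicy)
        var trustError: CFError?
        let trusted = SecTrustEvaluateWithError(trust, &trustError)

        // Step 3: compare the server's leaf certificate with the one bundled in the app.
        let serverCertData = SecCertificateCopyData(serverCert) as Data
        if trusted && serverCertData == pinned {
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            completionHandler(.cancelAuthenticationChallenge, nil)
        }
    }
}

// Usage: URLSession(configuration: .default, delegate: PinnedSessionDelegate(), delegateQueue: nil)
```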

The Road Ahead

More awesome features coming your way on Kontrol for Tesla


Did we mention we love Tesla???

We mean it.

We love it so much that we will not stop with the first iteration of our application. In fact, we are already thinking about another helpful feature that we want to build for this app:

  • Tracking personal versus business miles while driving your Tesla

We fully understand that many drivers use their car for both personal and business reasons. For now, there are no integrated features allowing Tesla drivers to easily track miles that are used for business purposes. Through this feature we will remove the hassle of noting down the personal and the business miles. Instead, the app will do it for you.

Summary

We are confident that Tesla is the future of driving in the US and abroad, and we are happy to contribute in any way we can to these exciting developments in personal transportation.

Download the Kontrol for Tesla app today to enjoy the free benefits of our apps. You’ve earned it by investing in the automotive innovation of the future.

December 14, 2016

In-Store Analytics with Ad Tracker – Do People Really Like Your Ads and Marketing Displays?

It’s easy to get analytics from videos that are posted online.

However, until today, there was never an easy way to get analytics reporting from an in-store video.

Imagine if you could get actionable data on who watched your video while shopping in your store.

We’re talking about who watched the video and for how long, along with the gender and age of the people watching. All of these data points are delivered to you, in real time, without collecting any information from the customer.

cover-images-b-1

This is a question I know many decision makers in retail have asked themselves again and again: how can I measure my ad engagement in a way that is meaningful and actionable? Am I showing shoppers what they are looking for? Am I effectively communicating with my customers as they stroll around from one aisle to another, looking for products or inspiration?

What if there was a simple way to get an answer to these questions in real time?

This is why we built the In-Store AdTracker prototype.

To the average consumer, it’s a video they watch as they shop at their favorite store. To the retailer, it’s powerful information that helps them make better decisions in a way they couldn’t before.

Here’s how it works.

In-Store AdTracker – A Powerful, Simple Tool to Measure Ad Engagement

ad-tracker-1

We built a simple proof of concept using the Google Mobile Vision API. Basically, we created tracker software that can be installed on any Android device with a built-in camera. Smart TVs, monitors, all-in-one computers, tablets, phablets – you name it.

Our proof of concept is simple. You install the In-Store AdTracker on any compatible device and then play any video in fullscreen mode on it. Then the AdTracker does what it is supposed to do: it measures the level of engagement people have with the ad.

The tracker reports the average time a viewer spends with an ad; it also tracks whether people smile while watching, as well as the demographics of the audience – like age bracket and gender.

Here’s a video of how this works.
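The Mobile Vision face detection runs on the Android device itself, so the reporting side boils down to aggregating simple viewing events. Purely as a hypothetical sketch (the event fields and types below are our own illustration, not a shipped API), the aggregation could look something like this in Go:

  package adtracker

  import "time"

  // ViewEvent is a hypothetical record emitted by the on-device tracker each
  // time someone finishes watching (or looking at) an ad.
  type ViewEvent struct {
      Video     string        // which ad was on screen
      Gender    string        // estimated on-device, e.g. "male" or "female"
      AgeMin    int           // lower bound of the estimated age bracket
      AgeMax    int           // upper bound of the estimated age bracket
      Smiled    bool          // whether a smile was detected
      Duration  time.Duration // how long the viewer watched
      Timestamp time.Time     // when the view ended
  }

  // Report aggregates events into the numbers discussed in this post.
  type Report struct {
      Views      map[string]int           // views per video
      ByGender   map[string]int           // views per gender
      ByHour     map[int]int              // views per hour of day
      TotalWatch map[string]time.Duration // total watch time per video
  }

  // Aggregate folds raw viewing events into a Report.
  func Aggregate(events []ViewEvent) Report {
      r := Report{
          Views:      map[string]int{},
          ByGender:   map[string]int{},
          ByHour:     map[int]int{},
          TotalWatch: map[string]time.Duration{},
      }
      for _, e := range events {
          r.Views[e.Video]++
          r.ByGender[e.Gender]++
          r.ByHour[e.Timestamp.Hour()]++
          r.TotalWatch[e.Video] += e.Duration
      }
      return r
  }

Average watch time per video is then just TotalWatch divided by Views, which is exactly the kind of engagement number you’ll see in the charts below.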

Let’s go through a specific example and see what type of information you could automatically have access to.

Let’s say you’re a store that sells celebrity merchandise: T-shirts, mugs, posters, original autographs, etc. In this business, as you know, stars rise and fall overnight. Hit songs dictate who is in the spotlight and what people are talking about at any given time. Of course, you can always look at the Billboard Hot 100 and figure out who is at the top, but the question remains: in your city, among your customers, who is the most popular star? What type of merchandise should you stock up on?

Our AdTracker can give you this answer without you lifting a finger.

Let’s suppose that the top five songs on the Billboard Hot 100 this week are the following:

  1. Rihanna - Needed Me
  2. Ariana Grande - Into You
  3. Adele - Hello
  4. Taylor Swift - Blank Space
  5. Sia - Cheap Thrills

If you want to only stock merchandise for two of these five stars and get the biggest bang for the buck, how would you do it?

Let’s say you play each of these videos on the same in-store screen, on a loop, and you want to see how engaged your shoppers are.

 

1. The AdTracker can aggregate the number of people who watched each video

The type of report you could get in your inbox looks something like this:

y-media-infographics-graph2-1

If you looked at this graph based on the number of people who watched the videos in the last hour, you could conclude that Sia and Taylor Swift are probably the best stars you should get merchandise for.

But if you wanted to know a little more about the people who stopped and watched the videos, you can get that, too. Are they males or females?

 

2. The gender of the people watching an ad

y-media-infographics-graph1-1

In this example, you can see that more males than females stopped and watched the videos. So make sure you stock your inventory appropriately!

As we all know, the age of a consumer impacts what they buy, the price tag they’re able to afford, and how often they return to a retail store. Which brings us to the next question you may have...

 

3. The age brackets of users watching an ad

y-media-infographics-graph3-1

Not surprisingly for a celebrity merchandise store, in this example we see that the most engaged viewers were in the 16-20 bracket. They also have the lowest budget of all age brackets, so you’d better stock up on lower-priced merchandise!

A big indication of user engagement is the time they spend interacting with a digital product, whether that’s a site, a video, or an ad. And this brings us to the next thing we can automatically determine with our AdTracker...

 

4. The average time a user spends watching an ad

y-media-infographics-graph4-1

How cool is this? Now you know that the largest number of people watch Cheap Thrills and Blank Space and that the same users spend the largest amount of time on these videos. That is a recipe for success: number of people + engagement level.

Lastly, you may be interested in learning at what time of day people are most engaged with your in-store videos. That allows you to prioritize what videos are broadcast when and whether you want to run any time-sensitive promotions.

The AdTracker can capture that information as well!

 

5. Hourly views breakdown

y-media-infographics-graph5-1

By now, you’ve seen what this simple AdTracker is capable of. In summary, it can track any of the following:

  • How many people are watching your ads
  • The gender of the people watching your ads
  • Age brackets
  • Level of engagement by time spent on ad
  • Hourly views breakdown

But if you’re not in the celebrity merchandise business, you may still be a little skeptical, wondering how this can benefit your business.

How can retail companies leverage the AdTracker?

ad-tracker-2

Traditionally, the primary markets for video ads are TV and online sites where users must watch the ads before interacting with the content on the page.

But these are not the only places where video ads can be consumed. In fact, companies big and small have found alternative channels and contexts in which ads can be served, like:

  • Inside a store
  • Airport lounges
  • Waiting rooms for doctors’ appointments
  • While waiting in line at a store or food chain
  • Some bars have even installed monitors above urinals

No matter what situation customers may find themselves in, quite often there is “dead time” when distractions of any kind – including video ads – are more than welcome.

With the AdTracker, marketers can begin leveraging video ads and start collecting critical data about who watches these ads, for how long, and how the audience reacts to these video ads.

No more walking in the dark.

No more guessing.

No more uncertainty.

With the In-Store AdTracker, marketers can tell for sure if their efforts are working or not. You can easily determine how effective your strategies are and if your customers like what you are showing them.

And the coolest part? With our in-store AdTracker, all data is updated in real time and available to you on your phone, laptop or desktop.

November 22, 2016

Uber vs Lyft – Who is loved more? A deep dive analysis using Google’s Sentiment Analysis API

Have you taken a rideshare in America in the last 3 years?

If so, chances are good that it was with either Lyft or Uber. The two companies, both launched in the San Francisco Bay Area, dominate the ridesharing industry across most U.S. markets and are constantly competing with each other for customers’ attention, retention and loyalty.

What if I told you there’s a (fairly) simple way to see how Lyft and Uber’s customers feel about them? That we can track loyalty and user satisfaction with each of these brands, can do so with a high degree of confidence, and that we're not talking about spending hundreds of hours collecting and analyzing every single opinion that's out there on the internet?

We’re also not talking about physically stopping people on the street and asking for their feedback. We’re talking about using actual data that can be easily extracted and analyzed to see how customers rate pretty much any company out there.

yml_uberlyft_large

Are you intrigued?

We certainly were when we decided to embark on this quest!

Instead of looking at anecdotal evidence about Uber and Lyft, we decided to use the power of Google’s recently released Sentiment Analysis API. We cannot overemphasize how powerful this API really is. Without it, this analysis would have taken us tens of hours (or more!), required an enormous amount of resources, and cost a fortune!

Google’s Sentiment Analysis API allows us to extract and analyze people’s views on Lyft and Uber through a single API call. If there was ever a “the future is here” moment, this is it.

I don’t like to keep people waiting, so let’s dive right into the results. After the charts, we’ll dive deeper into how it was done (read: technical).

I also need to state the obvious: just because one company is more loved than the other doesn’t mean the less-loved company’s business is inferior, or that it isn’t doing as well.

Don’t shoot the messenger!

The results. Here's who's loved more.

We began our analysis of the Lyft vs Uber sentiment by looking at the latest reviews that customers left for the respective mobile applications on iTunes. Since both companies operate primarily through their mobile apps, it seemed like the logical place to start. So what exactly did we do? We extracted the 500 most recent reviews from iTunes and assigned a sentiment value to each review. The cool part about the Google API is that it assigns a sentiment value based on the actual content of the review, not the number of stars a user gives the app.

This is how the sentiment towards Uber looks based on these parameters:

01-uber-reviews-graph1-1

What do we learn from this? First, the overall ratings for Uber have been on a downward trajectory. At its best, Uber’s customers are “OK” with the service, giving it an average of 2.7 out of 5 stars. Second, we can see that the overall trend is not going in the right direction and that, at least based on the small sample we collected, Uber’s users are becoming more and more frustrated with the service, rating it lower and lower.

Now, how do things look for the Lyft application, using the same parameters?

04-lyft-graph2

As we can see, Lyft users have a much better opinion of the app than Uber users. We also notice two other critical things. First, Lyft’s ratings across the last five hundred reviews have been getting better over time. Second, Lyft’s ratings are more stable and show far less variation in overall sentiment than Uber’s. As a side note, it’s interesting that Lyft’s lowest average score across the 500 most recent reviews corresponds to Uber’s highest score during the same period.

After we saw what people thought about Lyft and Uber in the app store, we thought, "Hey, why not look at the sentiment people exhibit towards the two companies on Twitter?" We had two reasons for choosing Twitter as a platform from which to extract information via the Google Sentiment Analysis API.

First, Twitter allows a larger number of data points to be extracted than iTunes, which provides more accuracy to the overall analysis and statistical model.

Second, customers often use Twitter to communicate with businesses when they have issues with them. Twitter serves as a public “naming and shaming” platform, where customers often expect to get some sort of reaction from the business they’re interacting with. How companies respond to the public naming and shaming shapes how often other people will engage with the brands through social channels.

Here’s what Lyft and Uber’s customer sentiment looks like on Twitter, based on the Google API analysis of the last 8,000 tweets mentioning the @uber and @lyft handles.

03-uber-tweets-graph3-1

02-lyft-reviews-graph2

What we see from these charts is that both Lyft and Uber are struggling on Twitter. Both companies’ overall scores have been decreasing steadily over time. There are various factors that could explain this trend:

  • Recent app releases may have inadvertently impacted users’ perception of the apps. This is often correlated with production bugs or sluggish app performance.
  • Neither company may be allocating enough resources to monitor its Twitter feed and get in touch with unsatisfied customers to resolve the issues they report there.

If we take a bird’s-eye view of both Lyft and Uber across the last 1000 tweets and the last 500 reviews, a clear pattern starts to emerge. Let’s take a look:

05-avg-star-rating-1

The conclusion is pretty straightforward: Lyft gets significantly better reviews and sentiment ratings across platforms than Uber does.

While it’s true that Uber is more profitable and popular across most markets where it directly competes with Lyft, the latter’s ability to keep its customers more satisfied could pay off in the long term. It’s certainly something Lyft’s management tries to promote: the idea that when customers join Lyft, they’re not simply joining another ridesharing company, they’re joining a community. So far, from what we can tell, this strategy is translating into significantly better sentiment ratings for Lyft.

One of the other things we noticed about the Google Sentiment API is that businesses that operate internationally can watch trends happening across the world, and use country-specific breakdowns for sentiment analysis.

Let’s look at Uber’s and Lyft’s international presence and their respective ratings:

06-avg-starratings-country-1

Uber operates in multiple countries, so extracting regional data for it was fairly simple (more details on the technical implementation below!)

As we can see, Uber’s average sentiment hovers around 2 out of 5 points on the sentiment scale, with India standing out as its most critical market and Singapore as its most favorable.

For Lyft, we could only pull data from the U.S. and Singapore, where Lyft operates through a partnership with a local ridesharing company.

Comparing how customers view Uber and Lyft in the countries where Lyft operates shows that, in both cases, Lyft has the upper hand in terms of users’ perceptions and reviews of its ridesharing service.

To sum up, even when you look at data points for specific countries where both companies operate, Lyft still has the overall upper hand in terms of users’ perceptions, attitudes and sentiments.

The technical analysis behind it: how we arrived at the results

Now that we've looked at data points about people’s perceptions of Lyft and Uber, we're sure you're interested in figuring out exactly how we got to the datasets we showed.

Let’s dive right in and learn how to use Google Sentiment API.

The API currently supports three kinds of text analysis:

  1. Entities
  2. Syntax
  3. Sentiment

Entities

The Entities API documentation gives this description:

Finds named entities (currently finds proper names) in the text, entity types, salience, mentions for each entity, and other properties.

To understand its capabilities, let’s try passing a sample tweet through this API.

It clearly identifies the entities in the statement and even links them to Wikipedia articles where available:

iPhone 7 : CONSUMER_GOOD

Apple, Y Media Labs : ORGANIZATION

CODRIN ARSENE : PERSON

This can be applied to some really good use cases. Let’s say we want to create a trending topics list. We can pass text through entity API to generate topics of interest and create trending categories. We can group related content and present suggestions.
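If you want to try this yourself, the call is a single HTTPS request. Here is a minimal sketch in Go, assuming you have an API key for the Cloud Natural Language API; the v1 REST endpoint shown is the current one, and the sample sentence simply stands in for the tweet we used:

  package main

  import (
      "bytes"
      "encoding/json"
      "fmt"
      "net/http"
      "os"
  )

  // analyzeEntities sends plain text to the documents:analyzeEntities method
  // and returns the decoded JSON response.
  func analyzeEntities(apiKey, text string) (map[string]interface{}, error) {
      body, _ := json.Marshal(map[string]interface{}{
          "document":     map[string]string{"type": "PLAIN_TEXT", "content": text},
          "encodingType": "UTF8",
      })
      url := "https://language.googleapis.com/v1/documents:analyzeEntities?key=" + apiKey
      resp, err := http.Post(url, "application/json", bytes.NewReader(body))
      if err != nil {
          return nil, err
      }
      defer resp.Body.Close()
      var out map[string]interface{}
      if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
          return nil, err
      }
      return out, nil
  }

  func main() {
      text := "Codrin Arsene of Y Media Labs wrote about Apple's iPhone 7."
      out, err := analyzeEntities(os.Getenv("GOOGLE_API_KEY"), text)
      if err != nil {
          fmt.Println("request failed:", err)
          return
      }
      // Each entity comes back with a name, a type (PERSON, ORGANIZATION,
      // CONSUMER_GOOD, ...), a salience score, and optional Wikipedia metadata.
      fmt.Println(out["entities"])
  }

Swapping documents:analyzeEntities for documents:analyzeSyntax or documents:analyzeSentiment in the URL gives you the other two kinds of analysis described below.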

Syntax

This API analyzes the structure of the text: it splits the document into sentences and tokens and tags each token with part-of-speech and dependency information.

Contextual understanding of language has come a long way. After more than 50 years of research, we have finally arrived at a point where software can identify contextual information with a much higher degree of accuracy. To give you an example of how advanced this is, let’s feed in a grammatically correct but ambiguous sentence and see how the API breaks it down.

Time flies like an arrow; fruit flies like a banana

"Flies" was correctly identified based on context: a verb in the first clause and a noun in the second.

This API can be used to identify verbs and nouns and to run specific analyses on individual words. If you want to generate statistics on how those elements affect an article, it’s useful. For our use case, though, it isn’t as relevant beyond checking that the API correctly infers the context of a sentence.

Sentiment Analysis

This API analyzes the text and determines the overall sentiment of the document, returning a score (how positive or negative the text is) and a magnitude (how strongly that sentiment is expressed).

Sentiment analysis is quite powerful: the API can deduce sentiment from arbitrary text, and it is straightforward to use. Let’s take this ambiguous review for Uber.

It’s clear that the person loves Uber but rated it 1 star. That’s painful for Uber. Let’s run it through Google’s sentiment analysis.


Sure enough, it gives a great rating. Here's the rating chart.[2]

Apple iTunes provides an RSS feed of customer reviews for apps in JSON format.

For example, for the Lyft iOS app, whose app id is 529379082, the customer reviews JSON can be found at: https://itunes.apple.com/rss/customerreviews/id=529379082/json

Similarly, we got the customer reviews for the Uber app, whose app id is 368677368, through: https://itunes.apple.com/rss/customerreviews/id=368677368/json

 

We wrote a small Go program to parse the JSON body. For each review, we called the Google sentiment analysis API to get its polarity and magnitude.
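A stripped-down sketch of that kind of program looks roughly like this. It assumes an API key for the Cloud Natural Language API, and note that the current v1 API reports the polarity as a field called score:

  package main

  import (
      "bytes"
      "encoding/json"
      "fmt"
      "net/http"
      "os"
  )

  // sentimentScore runs one review through documents:analyzeSentiment and
  // returns the document-level score (the beta API called this "polarity")
  // and magnitude.
  func sentimentScore(apiKey, text string) (score, magnitude float64, err error) {
      body, _ := json.Marshal(map[string]interface{}{
          "document": map[string]string{"type": "PLAIN_TEXT", "content": text},
      })
      url := "https://language.googleapis.com/v1/documents:analyzeSentiment?key=" + apiKey
      resp, err := http.Post(url, "application/json", bytes.NewReader(body))
      if err != nil {
          return 0, 0, err
      }
      defer resp.Body.Close()
      var out struct {
          DocumentSentiment struct {
              Score     float64 `json:"score"`
              Magnitude float64 `json:"magnitude"`
          } `json:"documentSentiment"`
      }
      if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
          return 0, 0, err
      }
      return out.DocumentSentiment.Score, out.DocumentSentiment.Magnitude, nil
  }

  // feed mirrors the parts of the iTunes customer-reviews JSON we care about.
  type feed struct {
      Feed struct {
          Entry []struct {
              Content struct {
                  Label string `json:"label"`
              } `json:"content"`
              Rating struct {
                  Label string `json:"label"`
              } `json:"im:rating"`
          } `json:"entry"`
      } `json:"feed"`
  }

  func main() {
      // Lyft's app id; swap in 368677368 for Uber.
      resp, err := http.Get("https://itunes.apple.com/rss/customerreviews/id=529379082/json")
      if err != nil {
          panic(err)
      }
      defer resp.Body.Close()

      var f feed
      if err := json.NewDecoder(resp.Body).Decode(&f); err != nil {
          panic(err)
      }

      apiKey := os.Getenv("GOOGLE_API_KEY")
      for _, entry := range f.Feed.Entry {
          if entry.Content.Label == "" {
              continue // skip feed entries that aren't reviews
          }
          score, magnitude, err := sentimentScore(apiKey, entry.Content.Label)
          if err != nil {
              continue
          }
          fmt.Printf("stars=%s score=%.2f magnitude=%.2f\n", entry.Rating.Label, score, magnitude)
      }
  }

Averaging those scores across the most recent reviews is essentially what produces the trend lines in the charts above.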

In our analysis, we were able to compare Lyft vs Uber by looking at the breakdown of reviews for specific countries where both companies operate. To fetch the customer reviews for Uber in a different country, replace the ISO country code ("sg" in the URL below) with the code for the country you’re interested in:

https://itunes.apple.com/sg/rss/customerreviews/id=368677368/json

For example, to get United States based reviews, the country code is "US" and the URL becomes:

https://itunes.apple.com/US/rss/customerreviews/id=368677368/json
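In code, that substitution is a simple string format. A tiny helper (the function name is just for illustration):

  package main

  import "fmt"

  // reviewsURL builds the per-country customer-reviews feed URL for an app id.
  // country is an ISO country code such as "us" or "sg".
  func reviewsURL(country, appID string) string {
      return fmt.Sprintf("https://itunes.apple.com/%s/rss/customerreviews/id=%s/json", country, appID)
  }

  func main() {
      fmt.Println(reviewsURL("us", "368677368")) // United States reviews for the Uber app
  }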

So how is Google sentiment analysis different from plain App Store ratings? It overcomes the user’s bias in giving star ratings and considers only the actual text of the review. We can also combine it with sentiment analysis of Twitter feeds, forums, and other internet sources to get an overall picture from everyone. Then, instead of a rating for just an app, we can obtain a rating for a brand!

Twitter Stream ---> Google NL API ---> Google BigQuery ---> Google Data Studio [3]

If we set up the architecture shown above, we can easily generate sentiment analysis for entire brands, which is much more valuable.
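As a sketch of the middle two stages of that pipeline, assume the sentiment values come from the Natural Language call shown in the review example, and that the dataset and table names below ("brand_sentiment", "tweets") are ones you have created yourself; the Twitter stream and the Data Studio dashboard are left out:

  package pipeline

  import (
      "context"
      "time"

      "cloud.google.com/go/bigquery"
  )

  // TweetSentiment is one row in our hypothetical BigQuery table.
  type TweetSentiment struct {
      Brand     string    // "uber" or "lyft"
      Text      string    // raw tweet text
      Score     float64   // documentSentiment.score from the NL API
      Magnitude float64   // documentSentiment.magnitude from the NL API
      SeenAt    time.Time // when we processed the tweet
  }

  // storeTweetSentiment streams one scored tweet into BigQuery, where
  // Data Studio can pick it up for dashboards.
  func storeTweetSentiment(ctx context.Context, bq *bigquery.Client, brand, text string, score, magnitude float64) error {
      row := &TweetSentiment{
          Brand:     brand,
          Text:      text,
          Score:     score,
          Magnitude: magnitude,
          SeenAt:    time.Now(),
      }
      return bq.Dataset("brand_sentiment").Table("tweets").Inserter().Put(ctx, []*TweetSentiment{row})
  }

From there, pointing Google Data Studio at the BigQuery table gives you a live brand-sentiment dashboard.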


Summary

In this article we took a deep dive into the Google Sentiment Analysis API by leveraging its capabilities to compare two popular American ridesharing companies.

As we saw, this amazing API can provide lots of interesting and useful information for company executives. Knowing your brand engagement across markets and geographical regions, as well as your users’ and customers’ overall perception towards the brand, is critical to the overall success of any digital business.

The overall opportunities for language processing and machine learning platforms are endless. Across the board, companies receive a tremendous amount of feedback through various channels. Google’s Sentiment Analysis API is paving the way for developers and business executives to understand the overall sentiment their current or prospective users have towards their brand, products and services.


Make a Lasting Impact.

All rights reserved.
Y Media Labs
©2019