How we hire people

Building a world class design team at YML

I have hired over 100 people in my career.

One of the best was a cartographer, fresh out of college — a cartographer is a map maker, if you don’t know. He was a Frenchman, lovely guy, and I remember his interview well. He said there are not a lot of opportunities in the map making world, but it was his passion. He was a talented designer, his maps were beautiful, and he knew how to code. A project he showed me was an interactive map of Afghanistan and Pakistan, showing drone strikes and the estimated number of casualties at each location. He had sourced the live data from public records, and turned it into a human story. It was very moving. I was blown away. Very humbly he asked, “What could a map maker do here, at a digital agency?” I had to think for a minute, but my answer was “We make maps of the internet.” Sure, it was glib, but it sparked his imagination and the conversation turned to mapping the abstract realm of the worldwide web. He became one of the best UX and systems thinkers I have ever met. He could visualize the tangled mess of connections, user journeys, data points, etc. and redesign them with a simple precision that made me want to cry.

Over the years I have hired many folks with different strokes: architects, fashion designers, industrial designers, even one guy—an embedded programmer—who made parking meters. And they all taught me a valuable lesson: amazing talent can come from anywhere. All they need is a compelling portfolio and a chance to tell their story.

Cool, right? Here’s how we do it.

The portfolio

At YML, before we consider interviewing anyone, we look at their portfolio and compare it to every other candidate’s. A portfolio is your calling card: it should show not just what you have done, but what you can do and what you want to do. We have all seen plenty of portfolios and have a pretty quick read on good versus bad ones. A good portfolio shows work that is ambitious, inspiring, and very well executed. Thoughtful, beautiful designs, process breakthroughs, clever ideas, and slick interactions all jump out of the screen. So do glaring errors: typos, thoughtless designs, awkward process decisions, unworkable interactions. Any of these will strike a candidate off the list of potential hires.

Great work is important, but an exceptional portfolio site should be a good user experience too. Consider the audience: busy executives. Trust me, we don’t read much, so don’t write much. Let the work do the talking; focus your words on big, significant ideas, compelling points, quotes, and callouts. Curate only your best work, because one bad project gets an instant rejection. If in doubt, don’t show it, or better still, dig deeper and make it great.

Additionally, we prefer real portfolio sites. Dribbble is okay, Behance too, but if you’re shooting for a senior position, you will need a bit more vision, process and/or storytelling to support your work. At best, Dribbble can be a very good place to show your interaction and visual design—but at its worst, it’s superficial eye candy. For more on this, read this fantastic article, The Dribbblisation Of Design.

The interview

Okay. So that’s how to get a foot in the door. What’s next? The interview, of course. Here’s a mental checklist we apply to interviewees, when we meet them:

1. Energy: Do you bring it? Do you take it?
For me, this is the number one criterion. I can feel it when I meet someone. Are they inspired? Do they inspire? Is this a job or a lifestyle? We work in small teams, oftentimes in small rooms, with big clients. People who bring energy, who inspire others to do great work, they are the magic ingredient for this model.

2. Empathy: Do you have feeling? Can you connect?
We create products and experiences for people from all walks of life. We must understand them first, so we can design something they want. Empathy, listening, and responding are key to the design process. And it’s important in how we work together as well—we, of course, don’t tolerate jerks—even if they are talented.

3. Culture fit: Do you fit in, but add something as well?
We have a fantastic, inspiring, collaborative, nurturing culture of talented grownups, and we want to preserve it and enhance it. However, we aren’t seeking uniformity. Diverse backgrounds, approaches and opinions are welcome, and help make our work and our culture better.

4. Presentation: How well do you communicate your work?
We look for excellent communicators—verbal, written, and visual—ultimately entrusting them to present our work to clients and internal stakeholders. For entry- and mid-level positions, just going through some portfolio projects will do just fine — but for senior hires, a presentation is required. A good presentation is a clear articulation of the problem, and the path from strategy to design.

5. Experience: Do you know how to get things done?
This is definitely not a question of length of experience, which is irrelevant. Instead, it’s an assessment of the kind and quality of experience—a candidate’s understanding of the tools and processes, pitfalls and opportunities, common in the job. Inexperienced people won’t hit the ground running, or worse, they can misdirect the process, waste time and resources and negatively affect the quality of our work.

6. Attitude: Are you all in? Do you want it?
Skills can be taught. Attitude can’t. In an industry that’s always changing, someone with a good attitude looks for challenges and is constantly thinking of ways to improve and progress. We want people with positive attitudes: upbeat, eager, and solutions-focused. We find they thrive on feedback, embrace change, and own it with a smile.

7. Impact: Will you make a difference?
Last, but certainly not least, we want people that we know will have an immediate, positive, lasting impact—on the work, on our clients, on YML. We’re building a world class design team, looking for complementary skillsets, backgrounds and approaches. We don’t want to hire the same kind of designer over and over again. We look for folks who will make our team greater than the sum of its parts.

One more thing

We definitely do not look for an Ivy League education—or any education for that matter. We simply don’t care if you went to Harvard, or never went to school, never studied, come from an underprivileged background, were homeschooled, or are completely self taught. So long as you do great work, have the right attitude, and know how to get the job done, you’re in.

And that’s it. If this sounds like you, or someone you know, get in touch. Also, any interview goes two ways. If you have thoughts on what you look for in an interview, we’d love to hear them.

Good luck!

Two Kolkatans Meet At Indiana’s Purdue University; Launch Y Media Labs In Silicon Valley

This article was originally published at Forbes. Check it out here.

Sumit Mehra and Ashish Toshniwal enjoy a morning coffee at Café Un Deux Trois, just off Times Square in New York City. They are the co-founders of Y Media Labs, a 270-person company that specializes in creating consumer apps for clients like Home Depot, PayPal, Salesforce and Staples; 27 of their clients are Fortune 500 companies. Y Media Labs is headquartered outside San Francisco in Redwood City. This morning in New York City, the founders are preparing for various meetings with clients, investors and their local Y Media Labs team.

Sitting in one of the bistro’s red booths with a street view, Mehra and Toshniwal, both in their mid-30s and dressed in business casual wear, explain the evolution of Y Media Labs. Their story includes many serendipitous twists and turns, most notably growing up two miles apart in Kolkata, India, but not meeting until they were students at Purdue University in 2001.

Sumit Mehra and Ashish Toshniwal, co-founders of Y Media Labs, enjoy a morning coffee in New York City.

The two had been tinkering away with startup ideas since 2004, but 2009, at the height of the economic recession, became a pivotal year. Both had been approved for H-1B work visas after a long, arduous process, despite the fact that the usual quotas had not been filled.

[Read more about obtaining work visas and their suggestions to foreign-born entrepreneurs who want to launch in the U.S. here.]

In 2009 they both quit their jobs, Mehra at Yahoo, Toshniwal at a startup sold to Google. And most significant to their future, Apple opened their app store to outside developers. “That became really interesting,” says Mehra, “that’s when we started.”

Toshniwal puts the 2009 mobile landscape in perspective: “iPhone was a new technology.” The two get extra animated as they joke about the typical Wall Street guy with his Blackberry, the dominant mobile device at the time, who thought iPhone apps were just for high school kids downloading games for 99 cents, not enabling billions of dollars of transactions.

One of Y Media Labs’ first corporate clients was Safeway, headquartered in a multiple-building complex in the Bay Area. Toshniwal gleefully remembers standing in front of the VP, using the term “In our company” in their pitch. “And the company was right inside that room!” exclaims Toshniwal, prompting hysterical laughter from both. “Fortunately, we got the business,” says Toshniwal. They began to get additional clients like Sesame Street, Foot Locker and Symantec.

“We engineered the SEO, really well,” explains Toshniwal. “Someone would type in ‘iPhone developer,’ our name sometimes would show up even before Apple,” he continues. “It was shocking!” They also got clients through word of mouth.

In 2010, they helped create the Montessorium app, teaching kids ABC’s and 123’s via touch on the iPhone and iPad. 48 hours after the app, which happened to be Apple’s 54th, became available for download, Toshniwal says they received an email: “Thank you, let me know how I can help. I love what you are doing. Best, Steve.”

After a moment of giddy silence, they clarify in unison, “Steve Jobs!” Mehra whips out his phone to find the old email from Apple’s co-founder. “It came at 3 am in the morning,” Mehra says with glee.

They recall talking each other down, choosing to believe that someone was playing a joke on them. That changed when the founder of Montessorium, who had received considerable pushback from Montessori groups (their method of teaching and learning isn’t about touching a screen), also received an email from Steve Jobs: “Keep doing what you are doing, you’ll prove the critics wrong,” Toshniwal and Mehra recall.

The only funding Y Media Labs received in the early days was loans from family and friends, helping to bootstrap the business. “I think we paid the loans in the first 12 months,” says Mehra. MDC Partners became investors in 2015, allowing Y Media Labs to scale, opening offices in Indianapolis, Atlanta and Bangalore, India in addition to the New York and Silicon Valley locations.

Being immigrant co-founders has been an asset to the business, both Mehra and Toshniwal believe. “Entrepreneurs need to be scrappy, frugal,” says Mehra, “that’s a really important skill set. To be able to do a great job when you have nothing. Right? How do you make everything out of nothing?” asks Mehra rhetorically. “When you grow up in different parts of the world, the opportunities and challenges are very different,” he adds.

“In the Bay area,” says Toshniwal, “you order this frappuccino…” Toshniwal pauses, noting that many get really upset if the wrong type of creamer has been used. “Come on!” he exclaims, “Get perspective here. I mean, where we come from… “ Mehra quickly interjects, “There is only one option!” They laugh, riffing off each other, understanding both worlds and the absurdity.

The conversation takes a serious turn. “The advantage we have,” says Toshniwal, “is not having this sense of entitlement.” Mehra agrees with a quick affirmative. “And being grateful for what we have,” Toshniwal continues, “I think that’s a huge advantage.”

Both their fathers were local business owners in Kolkata; Mehra’s worked with cotton yarn; Toshniwal’s with lighting. “They didn’t have huge VC funds to build their businesses. They built their businesses organically every single year,” says Mehra, “I think there was a lot of learning just looking at them.”

Y Media Labs employees also benefit from Mehra and Toshniwal’s sense of gratitude. Mehra proudly shows a slick video of a lush Hawaii property set to a cool jazz soundtrack, which they rented for their employees, free of charge. Mehra guesses 85 to 90% of the company has vacationed there since January. “This comes back to being grateful,” says Toshniwal, calling 2016 a “phenomenal” year.

Y Media Labs' Hawaii House

“U.S. society makes it so easy for anyone to start a business,” says Mehra, noting that the bureaucracy and corruption found in many other countries often mar the intent of building a business. “That’s what I love about the U.S.,” adds Mehra, “if you are somebody with ambition, you can do wonders. In the process, create so many jobs, build things that take us as a society forward.”

But Y Media Labs, or any other startup in the U.S. for that matter, didn’t simply arrive on a silver platter. “You’ve got to dig for opportunities, find them and work towards it,” Mehra says. Toshniwal adds, “It took us literally four years before we were a legitimate business; four years is a lot of time.” Mehra says, “So you are not always going to get it right the first time.”

What YML is Doing to Help Reshape Early Education Learning in a Screen Obsessed Society

At some point in our lives, each one of us has experienced a feeling of complete awe while watching a small child expertly swipe to unlock a smart device or open mobile apps to enter their own digital universe.

Over here in the YML labs, we have long been curious about the role that emerging technologies play in learning and development for young brains.

Our curiosity started a few years back when we teamed up with education startup Montessorium to develop a full app suite aimed at empowering kids to learn at their own pace. That project turned out to be especially rewarding because we provided kids a fun way to learn everything from the alphabet to international geography. We also got some digital recognition from Steve Jobs once the app launched.

Since then, our curiosity in learning and development in early education has only grown, especially as AI development continues to advance in great strides. We decided to further explore how we could bring the power of artificial intelligence tools to education.

What research is saying:

Research shows that during the preschool years, expansive psychological growth takes place and the brain is particularly sensitive. We know that screen time can greatly affect the formation of neural pathways and the way brains develop.

But because technology changes in our society are nascent, the effects of those changes are still relatively unknown and often debated. Decades ago, researchers learned that young brains need tons of stimulation to develop normally.

As a result, parents were encouraged to expose their children to as much sensory stimulation as possible. Later, digital designs and technology became more integrated into our everyday lives. We started seeing studies suggest that children who had too much screen time were more likely to develop ADHD.

For instance, in one particular study, young mice were exposed to six hours of a light and sound show on a daily basis. Results showed that there were "dramatic changes everywhere in the brain,” Jan-Marino Ramirez, director of the Center for Integrative Brain Research at Seattle Children's Hospital, told NPR.

Results like this lead some researchers to believe that our brains being wired up all the time can’t be a good thing. We weren’t built for this kind of over-stimulation. On the other hand, some researchers believe that our brains have to evolve in the way they process information because our world is becoming increasingly fast-paced.

In the mice study mentioned, mice that were exposed to stimulation were able to stay calm in environments that typically stressed out those who didn’t experience as much excitement.

Leah Krubitzer, a neurobiologist at the University of California, Davis, thinks studies like this show that benefits of an overstimulated brain may outweigh its negatives. During last year’s Society for Neuroscience meeting in San Diego, Krubitzer explained that we already live in a world where overstimulation is the reality. This means our brains have, whether we like it or not, already changed. Using technology correctly, in a useful, healthy way, is just the kind of stimulation that will prepare children for an always on, fast-moving world.

Because what other option do we have? We can’t turn back the clock. We can’t teach our kids in the archaic ways our grandparents were taught. Those good ol’ days don’t exist anymore.

"Less than 300 years ago we had an industrial revolution and today we're using mobile phones and we interact on a regular basis with machines," Krubitzer said during the meeting. "So the brain must have changed."

The truth of the matter is, the data on how screen time affects the brain isn’t extensive enough to draw sweeping conclusions. Just consider how, last October, the American Academy of Pediatrics lifted its longstanding rule against any screen time for kids under two, a standard that had been in place since 1999.

The current recommendation is the result of new research. It states that young children should get screen time that helps them develop the ability to transfer knowledge from screens to the real world.

Daniel Simmonds, a resident pediatrician at the University of Maryland in Baltimore who has a PhD in neuroscience, says the key is to stick to the middle ground somewhere between the past and the future. So let your little ones interact with technology, but don’t let AI replace social human interactions.

“So much of our brain is dedicated to sensing things and making movements around [those physical things],” said Simmonds, pointing out that a show like “Sesame Street” helps kids learn, “but it’s not going to help them learn if you just sit them in front of a TV without any human interactions.”

Further proving Simmonds’ point, a 2015 study found that when iPads were given to kindergarten students to share, those students outperformed students who had their own iPads.

The study’s researchers suspected that those results have to do with the fact that sharing an iPad boosted social interactions. This type of camaraderie is crucial for development in young children. Perhaps even more telling, students who weren’t provided iPads at all scored much lower on their end-of-year achievement test compared to students who had access to an iPad.

Breaking through archaic ways of learning

Armed with the knowledge that back-and-forth interaction between children and a caregiver is critical to language and brain development, we built an educational app that uses machine learning and image recognition to help create engaging, interactive moments. In this particular project, when our custom-built iOS app asks, "Can you show me the flag of Canada?", image recognition is used to identify whether the child is holding up the correct index-sized flag or not. The recognition happens in real time, right on the device.

Teaching children about the flags of the world’s countries required a bit of focus on machine learning. Getting image classification and detection to work offline, in real time, meant making several important decisions, starting with whether to take an image classification or an object detection approach. In the end, we decided object detection would give our users the flexibility to show multiple types of flags at once.

Other important decisions included selecting the right framework for mobile devices, the network architecture for object detection, and the data fed into the network during training. All of these decisions are crucial to the performance and accuracy of the end product.
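To make that concrete, here is a rough sketch of how an on-device check like this can be wired up on iOS with Vision and a Core ML object-detection model. The FlagDetector model class, its label names, and the confidence threshold are placeholders for illustration, not the model we actually trained.

import CoreML
import Vision

// Minimal sketch: run an (assumed) Core ML object-detection model called
// FlagDetector over a camera frame and check whether the requested flag is
// among the detected objects. The model class and label names are illustrative.
final class FlagChecker {
    private let visionModel: VNCoreMLModel

    init() throws {
        let coreMLModel = try FlagDetector(configuration: MLModelConfiguration()).model
        visionModel = try VNCoreMLModel(for: coreMLModel)
    }

    // Returns true if `expectedLabel` (e.g. "flag_canada") is detected in the
    // frame with reasonable confidence.
    func frame(_ pixelBuffer: CVPixelBuffer,
               contains expectedLabel: String,
               minimumConfidence: VNConfidence = 0.6) throws -> Bool {
        var found = false
        let request = VNCoreMLRequest(model: visionModel) { request, _ in
            guard let observations = request.results as? [VNRecognizedObjectObservation] else { return }
            // Each observation carries a bounding box plus a ranked list of labels.
            found = observations.contains { observation in
                observation.labels.contains {
                    $0.identifier == expectedLabel && $0.confidence >= minimumConfidence
                }
            }
        }
        // The request runs synchronously and entirely on-device, which is what
        // lets the check work offline and in real time.
        try VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
        return found
    }
}

Feeding each camera frame through a checker like this is what lets the app respond the moment the right flag enters the frame.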

Ultimately, we are excited about the beginning of our exploration and the possibilities ahead!

As humans, our history stretches back hundreds of millions of years and like all biological traits, our brains have changed just like the world around us. We can’t expect to get by on outdated ways of learning. However, the key here is to not use AI tools in a way that is meant to replace humans. The true power and purpose of technology is not to substitute for human interactions but to enhance that experience and bring to life what hasn't been imagined yet. Much like Simmonds said, the key is to take advantage of what AI development can offer, and that’s a lot when it comes to sharpening young minds through learning, interacting, and communicating.

“The integration of technology and physical learning is not new, and there’s a lot of potential for it,” said Simmonds.

The Bartender of the Future?

We're big fans of giving people the open space and time to breathe life into creative innovation. There's a lot of focus on the day-to-day, but it's important to step back and think even more future-forward.

The YML Hackathon is one of our favorite ways to do just that. Every year we encourage employees around the world to break away from client work and dedicate 24 hours to creating with cross-functional teams.

Last year, our teams designed Kontrol, a new app for Tesla. This year? The bartender of the future. 

Niq is an intelligent bartender who lives inside of a motorized bartending station. He can recognize people and greet them by name, tell corny bartender jokes, give drink recommendations, and take orders for your favorite cocktails without a click of a button. Oh, and did we mention he has a British accent?

“Tapping buttons is just so old school,” says Sr. Product Manager Steven McMurray. Steven worked alongside a team of two engineers, two designers, and our head of recruiting to design and build Niq. “We built this as a proof of concept to show how the marriage of technologies such as Machine Learning, Computer Vision, Artificial Intelligence, IoT etc. will come together to create amazing user experiences in the near future.”

How it works

When you walk up to the device, Microsoft’s vision facial verification APIs allow Niq to recognize faces and greet people by name. Siri translates speech to text to help Niq process what the user is saying. Api.ai was used to create the interaction model that takes the text from Siri and makes sense of it, allowing Niq to respond appropriately.

To make Niq feel more human, AWS Polly helped to turn text into lifelike, customizable speech using deep learning. Once Niq gets a command back from Api.ai that a user wants a drink, an Arduino board takes commands via Bluetooth from the iPad to automatically start pouring the proper combination of alcohol and mixers into your glass. See the prototype in action:

https://vimeo.com/226803603
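To give a rough sense of the last hop in that chain, here is an illustrative sketch of turning an Api.ai intent into a pour command for the Arduino. The intent name, the parameters, the recipes, the command format, and the BluetoothLink wrapper around CoreBluetooth are all assumptions made for the example, not Niq's actual code.

import Foundation

// Hypothetical wrapper around CoreBluetooth that writes bytes to the Arduino's
// command characteristic; shown as a protocol so the sketch stays self-contained.
protocol BluetoothLink {
    func send(_ data: Data)
}

// Simplified shape of an Api.ai query result: an intent name plus parameters.
struct IntentResult: Decodable {
    let intentName: String            // e.g. "order.drink" (assumed)
    let parameters: [String: String]  // e.g. ["drink": "negroni"]
}

// Assumed recipes: for each drink Niq knows, the pumps to run and for how long.
let recipes: [String: [(ingredient: String, seconds: Int)]] = [
    "negroni": [("gin", 3), ("campari", 3), ("vermouth", 3)],
    "screwdriver": [("vodka", 3), ("orange_juice", 6)],
]

// Turns a parsed intent into a simple line-based command such as
// "POUR gin 3\nPOUR campari 3\n" and hands it to the Arduino over Bluetooth.
func handle(_ result: IntentResult, over link: BluetoothLink) {
    guard result.intentName == "order.drink",
          let drink = result.parameters["drink"],
          let steps = recipes[drink] else { return }

    let command = steps
        .map { "POUR \($0.ingredient) \($0.seconds)" }
        .joined(separator: "\n") + "\n"
    link.send(Data(command.utf8))
}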

Each time users order a drink, Niq saves their preferences and can be programmed to cut people off when they've had too many. While Niq is just a 24-hour prototype as of now, Niq 2.0 (see mock-ups below) will use machine learning to generate a recommendation engine that can suggest drinks that people with similar tastes might enjoy.

             

Cheers!

Innovations in Healthcare: Leveraging Alexa for Patients on the Box Platform

We’ve been working with our friends at Box to imagine and execute a future scenario in healthcare, applying Amazon Alexa and Box Platform to the challenge of tracking medications and compliance with a medication regimen for patients at home.

As Ross McKegney, Director of Platform at Box, recently announced:

At Box we spend a lot of time thinking about the future of work and building the cloud content management platform that will make this future a reality. Today we’re delighted to highlight one of our partners on this journey, Y Media Labs, who is working with Box to develop a series of visionary demo applications for regulated industries.

Take a look at how our Alexa skill can help patients and hospitals manage drug intake, help insurance agencies ensure regulatory compliance, help physicians provide personalized care, and help manage drug efficacy during drug trials.

At Y Media Labs, we build what others don’t dare to.

Are you interested in taking your personalization strategy to the next level, getting more sales and driving up your customer engagement metrics?

We can help you get there, but don’t take our word for it – our work speaks for itself.

Kontrol for Tesla App: New Ways to Optimize Your Driving Experience, in Style

We at Y Media Labs love Teslas.

We don’t just think Teslas are good for the environment, or that they’re the future of transportation. We also think Tesla cars are redefining drivers’ holistic user experience, comfort, safety and habits.

Tesla’s motto is “not to let the perfect be the enemy of the better.” That may be the case with the company’s new venture into the autonomous car market, but when it comes down to the user experience inside the car, we think we can aim for perfection . . . or at least simplicity.

Or maybe both!

Many developers struggle with finding the perfect balance between simplicity and functionality; most end up sacrificing one for the other, and for good reason. Allowing users to complete a wide variety of tasks while keeping the overall experience simple and intuitive is no small feat.

With Kontrol for Tesla, we really wanted to achieve both – and we are confident that we did.

The Innovation Labs program at YML is responsible for identifying new and exciting ways to leverage existing technologies to make people’s lives better, easier and, yes, more fun! Every month we turn our attention toward inventing or improving a mobile user experience, and releasing the resulting apps for free. This month’s subject is Tesla.

Other Projects: Kontrol for Nest Thermostat | Kontrol for Tesla

Concept

Kontrol for Tesla: New app, endless opportunities


Our creative and development teams joined forces to build and launch Kontrol for Tesla, a new mobile application available today in the iOS App Store that Tesla owners all over the world can use to make their driving experience easier, better, more convenient and more efficient.

Kontrol for Tesla delivers all the features already available in the current Tesla app, plus a few more.

And because we’re such huge fans of Tesla, the app is completely FREE of charge for all those Tesla lovers out there to use!

Design

A mobile experience to match the grace of the vehicle


We wanted to design an app that the Tesla community would love. Tesla is an amazing brand with cutting-edge products, so it wasn’t difficult to find a creative direction in line with what the company offers to their customers.

The design was inspired by simply sitting in a Model S and feeling that the app should represent the vehicle it is controlling: it should be sleek, intuitive, and follow clean lines.

We also wanted to give the app a new, modern, and uplifting makeover.

When iterating on the UI, we selected a dark color palette. Color was stripped away from the surface so that important information like battery charge levels and temperature controls pops out. The app was designed to be highly functional yet feel upscale at the same time, just like a Tesla. We wanted to strike an easy balance between functionality and modern aesthetics.

Function

What’s new with Kontrol for Tesla: Simple interface, Touch ID car start, smart venting, 3D Touch car unlock, honk and battery status


First and foremost, the Kontrol for Tesla app keeps all the current functionality of the Tesla app that inspired it. If you start using our app, you will not lose access to any of the benefits of the original version.

Kontrol for Tesla allows you to do all the amazingly cool things you are already familiar with from Tesla’s own signature app:

  • Check charging progress in real time
  • Change the temperature in your Tesla before driving
  • Locate your car and track its movement
  • Flash lights and/or honk to find your car in a parking lot
  • Vent or close the panoramic roof
  • Lock or unlock car

But why create just a clone of the app? Instead, we wanted to wow you with additional functionalities as well, which is why Kontrol for Tesla gives you access to these new and exciting features only available on our application:

  1. Start your car with Touch ID – you won’t need to type in your password every time.
  2. Smart Climate - remotely heat up your car to your desired temperature before you even leave the house.
  3. Unlock, honk, and check battery status with 3D Touch or through the widget. No need to ever log into the app.
  4. Smart vent the car when it gets too hot! We’ll detect the temperature differential between the internal cabin and the external environment and adjust the sunroof for you (this feature will only be enabled for Tesla models that have a sunroof). A sketch of this logic follows the list.
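The decision behind smart venting is simple enough to sketch in a few lines. The state and command types below are hypothetical, and the 8°C threshold is an illustrative value rather than the one used in the shipping app.

import Foundation

// Hypothetical readings pulled from the Tesla API for a vehicle.
struct CabinState {
    let insideTemperatureC: Double
    let outsideTemperatureC: Double
    let hasSunroof: Bool
    let sunroofIsOpen: Bool
}

enum SunroofCommand {
    case vent, close, leaveAsIs
}

// Smart-vent rule: if the cabin is meaningfully hotter than the outside air,
// crack the sunroof open; once the gap closes, shut it again.
func smartVentDecision(for state: CabinState, thresholdC: Double = 8) -> SunroofCommand {
    guard state.hasSunroof else { return .leaveAsIs }
    let differential = state.insideTemperatureC - state.outsideTemperatureC
    if differential > thresholdC && !state.sunroofIsOpen { return .vent }
    if differential <= 0 && state.sunroofIsOpen { return .close }
    return .leaveAsIs
}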

Security

Kontrol for Tesla: Our process for handling best-in-class mobile app security

We take privacy and security very seriously, and we understand that the data we are handling is very sensitive and NOT owned by Y Media Labs.

So here’s what we did to ensure that our beautiful application is completely in line with the best mobile app security practices, giving customers complete peace of mind while using our product:

User Credentials

Our app does not store the user’s credentials (username and password) itself. Instead, this information is stored in Apple’s iOS secure keychain. Data stored in the iOS keychain cannot be accessed by other applications installed on the phone.

Additionally, user credentials are never sent to Kontrol for Tesla’s servers; they are only used when communicating directly with Tesla’s API server. Even while the app is running, we do not save or cache the user’s credentials. Lastly, the app requires Touch ID authentication before accessing the credentials stored in the iOS secure keychain.
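For the curious, here is a minimal sketch of what storing a credential behind Touch ID looks like with the Security framework. The service and account names are placeholders and error handling is pared down; this illustrates the approach rather than reproducing the app's exact code.

import Foundation
import Security

enum KeychainError: Error {
    case unexpectedStatus(OSStatus)
}

// Stores the password so it can only be read back after the user authenticates
// (Touch ID or device passcode), via the .userPresence access-control flag.
func storeCredential(password: String, account: String,
                     service: String = "com.example.kontrol") throws {
    guard let accessControl = SecAccessControlCreateWithFlags(
        nil, kSecAttrAccessibleWhenUnlockedThisDeviceOnly, .userPresence, nil) else {
        throw KeychainError.unexpectedStatus(errSecParam)
    }
    let baseQuery: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: service,
        kSecAttrAccount as String: account,
    ]
    SecItemDelete(baseQuery as CFDictionary) // replace any previously stored value

    var addQuery = baseQuery
    addQuery[kSecAttrAccessControl as String] = accessControl
    addQuery[kSecValueData as String] = Data(password.utf8)
    let status = SecItemAdd(addQuery as CFDictionary, nil)
    guard status == errSecSuccess else { throw KeychainError.unexpectedStatus(status) }
}

// Reading the item back triggers the Touch ID / passcode prompt.
func readCredential(account: String,
                    service: String = "com.example.kontrol") throws -> String? {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: service,
        kSecAttrAccount as String: account,
        kSecReturnData as String: true,
        kSecUseOperationPrompt as String: "Unlock your Tesla credentials",
    ]
    var result: AnyObject?
    let status = SecItemCopyMatching(query as CFDictionary, &result)
    if status == errSecItemNotFound { return nil }
    guard status == errSecSuccess, let data = result as? Data else {
        throw KeychainError.unexpectedStatus(status)
    }
    return String(data: data, encoding: .utf8)
}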

HTTPS and App Transport Security (ATS)

From iOS 9.0 onwards, Apple requires apps to use a technology called App Transport Security, which requires all client-server communication to take place securely over HTTPS. Our app adheres to this practice.

SSL Pinning

Any time our application communicates directly with Tesla’s API server, the server provides a certificate to the app. To evaluate the legitimacy of the communication, we adhere to the following steps (a code sketch follows the list):

  • The app first evaluates the certificate provided by the Tesla server to check if the certificate is signed by a Certificate Authority (CA).
  • We check that the certificate provided by the server contains the Tesla API domain in the response.
  • Finally, if these two steps are validated, we match the certificate provided by the server and the certificate shared with our app.
  • If the user's device is jailbroken, we prevent the user from using the app while clearing their credentials, session token and local preferences.
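Here is a stripped-down sketch of that check as a URLSession delegate. The bundled certificate name and the pinned domain are placeholders, and a production version would report errors more carefully.

import Foundation
import Security

// Only trusts the Tesla API server when the certificate it presents matches
// the copy bundled with the app.
final class PinnedSessionDelegate: NSObject, URLSessionDelegate {
    // Placeholder name for the certificate file shipped inside the app bundle.
    private lazy var pinnedCertificateData: Data? = {
        guard let url = Bundle.main.url(forResource: "tesla-api", withExtension: "cer") else { return nil }
        return try? Data(contentsOf: url)
    }()

    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
              challenge.protectionSpace.host.hasSuffix("teslamotors.com"), // expected API domain (placeholder)
              let serverTrust = challenge.protectionSpace.serverTrust else {
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }

        // Step 1: let the system verify the chain back to a trusted CA.
        var error: CFError?
        guard SecTrustEvaluateWithError(serverTrust, &error) else {
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }

        // Step 3: compare the leaf certificate against the one bundled with the app.
        guard let serverCertificate = SecTrustGetCertificateAtIndex(serverTrust, 0),
              let pinned = pinnedCertificateData,
              SecCertificateCopyData(serverCertificate) as Data == pinned else {
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }

        completionHandler(.useCredential, URLCredential(trust: serverTrust))
    }
}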

The Road Ahead

More awesome features coming your way on Kontrol for Tesla


Did we mention we love Tesla???

We mean it.

We love it so much that we will not stop with the first iteration of our application. In fact, we are already thinking about another helpful feature that we want to build for this app:

  • Tracking personal versus business miles while driving your Tesla

We fully understand that many drivers use their car for both personal and business reasons. For now, there are no integrated features allowing Tesla drivers to easily track miles that are used for business purposes. Through this feature we will remove the hassle of noting down the personal and the business miles. Instead, the app will do it for you.

Summary

We are confident that Tesla is the future of driving in the US and abroad, and we are happy to contribute in any way we can to these exciting developments in personal transportation.

Download the Kontrol for Tesla app today to enjoy the free benefits of our apps. You’ve earned it by investing in the automotive innovation of the future.

In-Store Analytics with Ad Tracker – Do People Really Like Your Ads and Marketing Displays?

It’s easy to get analytics from videos that are posted online.

However, until today, there was never an easy way to get analytics reporting from an in-store video.

Imagine if you could get actionable data on who watched your video while shopping in your store.

We’re talking about who watched the video, for how long, along with the gender and age of the people watching your video. All of these data points delivered to you, in real time, without collecting any information from the customer.


These are questions I know many decision makers in retail have asked themselves again and again: how can I measure my ad engagement in a way that is meaningful and actionable? Am I showing shoppers what they are looking for? Am I effectively communicating with my customers as they stroll from one aisle to another, looking for products or inspiration?

What if there was a simple way to get an answer to these questions in real time?

This is why we built the In-Store AdTracker prototype.

To the average consumer, it’s a video that they watch as they shop at their favorite store. To the retailer, it’s powerful information that helps you make better decisions in a way that you couldn’t before.

Here’s how it works.

In-Store AdTracker – A Powerful, Simple Tool to Measure Ad Engagement


We built a simple proof of concept using the Google Mobile Vision API. Basically, we created tracker software that can be installed on any Android device with a built-in camera. Smart TVs, monitors, all-in-one computers, tablets, phablets – you name it.

Our proof of concept is simple. You install the In-Store AdTracker on any compatible device and then you play any video in fullscreen mode on it. Then the AdTracker does what it is supposed to do: it measures the level of engagement people have with the ad.

The tracker reports on the average time spent by a user with an ad; it tracks whether people smile while watching the ad, as well as the demographics of the people watching the ad – like age bracket or gender.
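The reporting side boils down to aggregating per-viewer detections. The actual tracker is an Android app built on the Google Mobile Vision API; the sketch below only illustrates the shape of the data and the roll-up, with field names chosen for the example.

import Foundation

// One detected viewer session in front of the screen. Fields mirror what a
// face-detection pipeline can estimate.
struct ViewerSession {
    let secondsWatched: TimeInterval
    let smiled: Bool
    let estimatedAge: Int
    let gender: String   // "male" / "female" as reported by the detector
    let hourOfDay: Int   // 0-23, for the hourly breakdown
}

struct EngagementReport {
    let totalViewers: Int
    let averageSecondsWatched: TimeInterval
    let smileRate: Double
    let viewersByGender: [String: Int]
    let viewersByAgeBracket: [String: Int]
    let viewersByHour: [Int: Int]
}

func bracket(for age: Int) -> String {
    switch age {
    case ..<16: return "under 16"
    case 16...20: return "16-20"
    case 21...30: return "21-30"
    case 31...45: return "31-45"
    default: return "45+"
    }
}

// Rolls individual detections up into the kind of report described above.
func makeReport(from sessions: [ViewerSession]) -> EngagementReport {
    let count = sessions.count
    let totalSeconds = sessions.reduce(0.0) { $0 + $1.secondsWatched }
    let smiles = sessions.filter { $0.smiled }.count
    return EngagementReport(
        totalViewers: count,
        averageSecondsWatched: count == 0 ? 0 : totalSeconds / Double(count),
        smileRate: count == 0 ? 0 : Double(smiles) / Double(count),
        viewersByGender: Dictionary(grouping: sessions, by: { $0.gender }).mapValues { $0.count },
        viewersByAgeBracket: Dictionary(grouping: sessions, by: { bracket(for: $0.estimatedAge) }).mapValues { $0.count },
        viewersByHour: Dictionary(grouping: sessions, by: { $0.hourOfDay }).mapValues { $0.count }
    )
}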

Here’s a video of how this works.

Let’s go through a specific example and see what type of information you could automatically have access to.

Let’s say you’re a store that sells celebrity merchandise: T-shirts, mugs, posters, original autographs, and so on. In this business, as you know, stars rise and fall overnight. Hit songs dictate who is in the spotlight and what people are talking about at any given time. Of course, you can always look at the Billboard 100 and figure out who is at the top, but the question remains: in your city, among your customers, who is the most popular star? What type of merchandise should you stock up on?

Our AdTracker can give you this answer without you lifting a finger.

Let’s suppose that the top five songs on Billboard 100 this week are the following:

  1. Rihanna - Needed Me
  2. Ariana Grande - Into You
  3. Adele - Hello
  4. Taylor Swift - Blank Space
  5. Sia - Cheap Thrills

If you want to only stock merchandise for two of these five stars and get the biggest bang for the buck, how would you do it?

Let’s say you will play each of these videos in your store on the same screen on a loop and you want to see how engaged your in-store shoppers are.

 

1. The AdTracker can aggregate the number of people who watched each video

The type of report you could get in your inbox looks something like this:

[Chart: number of people who watched each video in the last hour]

If you looked at this graph based on the number of people who watched the videos in the last hour, you could conclude that Sia and Taylor Swift are probably the best stars you should get merchandise for.

But if you wanted to know a little more about the people who stopped and watched the videos, you can get that, too. Are they males or females?

 

2. The gender of the people watching an ad

[Chart: gender breakdown of viewers for each video]

In this example, you can see that more males than females stopped and watched the videos. So make sure you stock your inventory appropriately!

As we all know, the age of a consumer impacts what they buy, the price tag they’re able to afford, and how often they return to a retail store. Which brings us to the next question you may have...

 

3. The age brackets of users watching an ad

[Chart: age brackets of viewers]

Not surprisingly for a celebrity merchandise store, in this example we see that the most engaged users were in the 16-20 bracket. They also have the lowest budget of all the age brackets, so you’d better stock up on lower-priced merchandise!

A big indication of user engagement is the time they spend interacting with a digital product, whether that’s a site, a video, or an ad. And this brings us to the next thing we can automatically determine with our AdTracker...

 

4. The average time a user spends watching an ad

[Chart: average time spent watching each video]

How cool is this? Now you know that the largest number of people watch Cheap Thrills and Blank Space and that the same users spend the largest amount of time on these videos. That is a recipe for success: number of people + engagement level.

Lastly, you may be interested in learning at what time of day people are most engaged with your in-store videos. That allows you to prioritize what videos are broadcast when and whether you want to run any time-sensitive promotions.

The AdTracker can capture that information as well!

 

5. Hourly views breakdown

[Chart: hourly breakdown of views]

By now, you’ve seen what this simple AdTracker is capable of. In summary, it can track any of the following:

  • How many people are watching your ads
  • The gender of the people watching your ads
  • Age brackets
  • Level of engagement by time spent on ad
  • Hourly views breakdown

But if you’re not in the celebrity merchandise business, you may still be a little skeptical, wondering how this can benefit your business.

How can retail companies leverage the AdTracker?


Traditionally, the primary markets for video ads are TV and online sites where users must watch the ads before interacting with the content on the page.

But these are not the only places where video ads can be consumed. In fact, various companies small and big have found alternative channels and social contexts in which ads are being served, like:

  • Inside a store
  • Airport lounges
  • Waiting room for doctor’s appointments
  • While waiting in line at a store or food chain
  • Some bars have even installed monitors above urinals

No matter what situation customers may find themselves in, quite often there is “dead time” when distractions of any kind – including video ads – are more than welcome.

With the AdTracker, marketers can begin leveraging video ads and start collecting critical data about who watches these ads, for how long, and how the audience reacts to these video ads.

No more walking in the dark.

No more guessing.

No more uncertainty.

With the In-Store AdTracker, marketers can tell for sure if their efforts are working or not. You can easily determine how effective your strategies are and if your customers like what you are showing them.

And the coolest part? With our in-store AdTracker, all data is updated in real time and available to you on your phone, laptop or desktop.

Uber vs Lyft – Who is loved more? A deep dive analysis using Google’s Sentiment Analysis API

Have you taken a rideshare in America in the last 3 years?

If so, chances are good that it was with either Lyft or Uber. The two companies — both launched in the San Francisco Bay Area — dominate the ridesharing industry across most U.S. markets, and are constantly competing with each other for customers’ attention, retention and loyalty.

What if I told you there’s a (fairly) simple way to see how Lyft and Uber’s customers feel about them? That we can track loyalty and user satisfaction with each of these brands, can do so with a high degree of confidence, and that we're not talking about spending hundreds of hours collecting and analyzing every single opinion that's out there on the internet?

We’re also not talking about physically stopping people on the street and asking for their feedback. We’re talking about using actual data that can be easily extracted and analyzed to see how customers rate pretty much any company out there.


Are you intrigued?

We certainly were when we decided to embark on this quest!

Instead of looking at anecdotal evidence about Uber and Lyft, we decided to use the power of Google’s recently released Sentiment Analysis API. We cannot overemphasize how powerful this API really is. Without it, this analysis would have taken us tens of hours (or more!), consumed an enormous amount of resources, and cost a fortune!

Google’s Sentiment Analysis API allows us to extract and analyze people’s views on Lyft and Uber through a single API call. If there was ever a “the future is here” moment, this is it.

I don’t like to keep people waiting, so let’s dive right into the results. After the charts, we’ll dive deeper into how it was done (read: it gets technical).

I also need to state the obvious. Just because one company is more loved than the other doesn’t mean the other’s business is inferior, or that it’s not doing as well.

Don’t shoot the messenger!

The results. Here's who's loved more.

We began our analysis of the Lyft vs Uber sentiment by looking at the latest reviews that customers left for the respective mobile applications on iTunes. Since both companies primarily operate through their mobile apps, it sounded like the logical place to start. So what exactly did we do? We extracted the 500 most recent reviews from iTunes and assigned a sentiment value to each review. Note that the cool part about the Google API is that it assigns a sentiment value based on the actual content of the review, not the number of stars a user gives to an app.

This is how the sentiment towards Uber looks based on these parameters:

[Chart: sentiment scores across Uber’s 500 most recent App Store reviews]

What do we learn from this? First, the overall ratings for Uber have been on a downward trajectory. At its best, Uber’s customers are “OK” with the service, giving it an average of 2.7 out of 5 stars. Second, we can see that the overall trend is not going in the right direction and that — at least based on the small sample we collected — Uber’s users are becoming more and more frustrated with the service, rating it lower and lower.

Now, how do things look for the Lyft application, using the same parameters?

[Chart: sentiment scores across Lyft’s 500 most recent App Store reviews]

As we can see, Lyft users have a much better opinion of the app than Uber users. We also notice two other critical things. First, Lyft’s ratings over the last five hundred reviews have been getting better over time. Second, Lyft’s ratings are more stable and show a lot less variation in overall sentiment than Uber’s. As a side note, it’s interesting that Lyft’s lowest average score across the 500 most recent reviews corresponds to Uber’s highest score during the same time period.

After we saw what people thought about Lyft and Uber in the app store we thought, "Hey, why not look at the sentiment people exhibit towards the two companies on Twitter?" We had two reasons for choosing Twitter as a platform from which to extract information via the Google Sentiment Analysis API.

First, Twitter allows a larger number of data points to be extracted than iTunes, which provides more accuracy to the overall analysis and statistical model.

Second, customers often use Twitter to communicate with businesses when they have issues with them. Twitter serves as a public “naming and shaming” platform, where customers often expect to get some sort of reaction from the business they’re interacting with. How companies respond to the public naming and shaming shapes how often other people will engage with the brands through social channels.

Here’s what Lyft and Uber’s customer sentiment looks like on Twitter, based on the Google API’s analysis of the last 8,000 tweets published on the platform mentioning the @uber and @lyft handles.

[Chart: Twitter sentiment toward Uber over time]

[Chart: Twitter sentiment toward Lyft over time]

What we see from these charts is that both Lyft and Uber are struggling on Twitter. Both companies’ overall scores have been decreasing steadily over time. There are various factors that could explain this trend:

  • Recent app releases have inadvertently impacted users’ perception of the app. This is often correlated with production bugs or a sluggish app performance.
  • Neither company allocates enough resources to support their Twitter feeds and get in touch with unsatisfied customers in order to solve whatever issues they’re reporting on Twitter.

If we take a bird's eye view of both Lyft and Uber across the last 1000 tweets and the last 500 reviews, a clear pattern starts to emerge. Let’s look at them:

[Chart: average sentiment ratings, app reviews vs. tweets]

The conclusion is pretty straightforward: Lyft gets significantly better reviews and sentiment ratings across platforms than Uber does.

While it’s true that Uber is more profitable and popular across most markets where it directly competes with Lyft, the latter’s ability to keep its customers more satisfied could pay off in the long term. It’s certainly something Lyft’s management tries to promote: the idea that when customers join Lyft, they’re not simply joining another ridesharing company, they’re joining a community. So far, from what we can tell, this strategy is translating into significantly better sentiment ratings for Lyft.

One of the other things we noticed about the Google Sentiment API is that businesses that operate internationally can watch trends happening across the world, and use country-specific breakdowns for sentiment analysis.

Let’s look at Uber’s and Lyft’s international presence and their respective ratings:

[Chart: average sentiment ratings by country]

Uber operates in multiple countries, so extracting regional data for it was fairly simple (more details on the technical implementation below!)

As we can see, Uber’s average sentiment hovers around 2 out of 5 points on the sentiment scale, with India and Singapore constituting Uber’s biggest detractor and enabler markets.

For Lyft, we could only pull data from the U.S. and Singapore, where Lyft operates through a partnership with the local ridesharing agency.

Comparing how customers look at Uber and Lyft in the countries where Lyft operates shows that in both cases, Lyft has the upper hand in terms of users’ perceptions and reviews towards its ride-sharing services.

To sum up, even when you look at data points for specific countries where both companies operate, Lyft still has the overall upper hand in terms of users’ perceptions, attitudes and sentiments.

The technical analysis behind: How we arrived at the results

Now that we've looked at data points about people’s perceptions of Lyft and Uber, we're sure you're interested in how exactly we got to the datasets we showed.

Let’s dive right in and learn how to use the Google Sentiment Analysis API.

The API currently supports three kinds of analysis of text.

  1. Entities
  2. Syntax
  3. Sentiment

Entities

The Entities API documentation gives this description:

Finds named entities (currently finds proper names) in the text, entity types, salience, mentions for each entity, and other properties.

To understand its capabilities, let’s try passing in a sample tweet to this API.

It clearly identifies many entities in the statement. It even links to Wikipedia articles.

iPhone 7 : CONSUMER_GOOD

Apple, Y Media Labs : ORGANIZATION

CODRIN ARSENE : PERSON

This can be applied to some really good use cases. Let’s say we want to create a trending-topics list. We can pass text through the Entities API to generate topics of interest and create trending categories, then group related content and present suggestions.
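Calling the entities endpoint is a single HTTPS POST. The Swift sketch below sends a piece of text to documents:analyzeEntities and prints each entity's name, type and salience; the sample text stands in for the tweet we used, and the API key is a placeholder you would supply yourself.

import Foundation

// Trimmed model for the parts of the analyzeEntities response we print.
struct EntitiesResponse: Decodable {
    struct Entity: Decodable {
        let name: String
        let type: String
        let salience: Double
    }
    let entities: [Entity]
}

func analyzeEntities(in text: String, apiKey: String,
                     completion: @escaping ([EntitiesResponse.Entity]) -> Void) {
    let url = URL(string: "https://language.googleapis.com/v1/documents:analyzeEntities?key=\(apiKey)")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    let body: [String: Any] = [
        "document": ["type": "PLAIN_TEXT", "content": text],
        "encodingType": "UTF8",
    ]
    request.httpBody = try? JSONSerialization.data(withJSONObject: body)

    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data,
              let response = try? JSONDecoder().decode(EntitiesResponse.self, from: data) else {
            completion([])
            return
        }
        completion(response.entities)
    }.resume()
}

// Example call; the text is a stand-in for the sample tweet.
analyzeEntities(in: "CODRIN ARSENE of Y Media Labs reviews the new Apple iPhone 7",
                apiKey: "YOUR_API_KEY") { entities in
    for entity in entities {
        print("\(entity.name) : \(entity.type) (salience \(entity.salience))")
    }
}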

Syntax

Advanced API that analyzes the document and provides a full set of text annotations, including semantic, syntactic, and sentiment information.

We have come a long way in the contextual understanding of a sentence. Work in this area has been going on for over 50 years, and we have finally arrived at a point where we can identify contextual information with a much higher degree of accuracy. To give you an example of how advanced this is, let’s feed in a grammatically correct sentence and see how the API breaks it down.

Time flies like an arrow; fruit flies like a banana

"Flies" was correctly identified in both cases based on context: a verb in the first clause and a noun in the second.

This API can be used to identify verbs and nouns and to run specific analyses on words. If we're looking to generate stats on how those affect an article, this is useful. From a use-case perspective, though, it's not quite as strong as the other two for our purpose of analyzing a sentence and seeing whether the context is correctly inferred.

Sentiment Analysis

Advanced API that analyzes the document and provides a full set of text annotations, including semantic, syntactic, and sentiment information.

Sentiment analysis is quite powerful: the API can deduce sentiment from arbitrary text, and the API itself is straightforward to use. Let’s take this ambiguous review for Uber.

It’s clear that the person loves Uber, but rated it 1 star. That’s painful for Uber. Let’s try and fit this through Google sentiment analysis.


Sure enough, it gives a great rating. Here's the rating chart.[2]

Apple iTunes provides an RSS feed of customer reviews for apps in JSON format.

For example, for the Lyft iOS app, whose app id is 529379082, the customer reviews JSON can be found at: https://itunes.apple.com/rss/customerreviews/id=529379082/json

Similarly, we got the customer reviews for the Uber app, whose app id is 368677368, through: https://itunes.apple.com/rss/customerreviews/id=368677368/json

 

We wrote a small Go program to parse the JSON body. For each review, we called the Google sentiment analysis API to get its polarity and magnitude.

In our analysis, we were able to compare Lyft vs Uber by looking at the breakdown of reviews for specific countries where both companies operate. To fetch the customer reviews for Uber in different countries, replace the ISO country code (shown here as “sg”) in the URL below:

https://itunes.apple.com/sg/rss/customerreviews/id=368677368/json

For example, to get United States-based reviews, the country code is “US” and the URL will be:

https://itunes.apple.com/US/rss/customerreviews/id=368677368/json
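Our parser was written in Go, but purely as an illustration, here is the same flow sketched in Swift: fetch the country-specific reviews feed, pull out each review's text, and send it to the analyzeSentiment endpoint. The response models are trimmed to the fields used here, and the API key is a placeholder.

import Foundation

// Trimmed models for the iTunes customer-reviews feed; the review text lives
// at feed.entry[].content.label.
struct ReviewsFeed: Decodable {
    struct Content: Decodable { let label: String }
    struct Entry: Decodable { let content: Content? }
    struct Feed: Decodable { let entry: [Entry]? }
    let feed: Feed
}

// Trimmed model for the analyzeSentiment response. In v1 of the API the
// polarity is reported as a "score" between -1.0 and +1.0.
struct SentimentResponse: Decodable {
    struct Sentiment: Decodable { let score: Double; let magnitude: Double }
    let documentSentiment: Sentiment
}

// Builds the country-specific reviews URL, e.g. countryCode "sg" or "US".
func reviewsURL(appID: Int, countryCode: String) -> URL {
    URL(string: "https://itunes.apple.com/\(countryCode)/rss/customerreviews/id=\(appID)/json")!
}

// Sends one review body to the analyzeSentiment endpoint.
func sentiment(of text: String, apiKey: String,
               completion: @escaping (SentimentResponse.Sentiment?) -> Void) {
    let url = URL(string: "https://language.googleapis.com/v1/documents:analyzeSentiment?key=\(apiKey)")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONSerialization.data(withJSONObject: [
        "document": ["type": "PLAIN_TEXT", "content": text],
    ])
    URLSession.shared.dataTask(with: request) { data, _, _ in
        completion(data.flatMap { try? JSONDecoder().decode(SentimentResponse.self, from: $0).documentSentiment })
    }.resume()
}

// Example: score the Uber reviews posted in Singapore (app id 368677368).
URLSession.shared.dataTask(with: reviewsURL(appID: 368677368, countryCode: "sg")) { data, _, _ in
    guard let data = data, let feed = try? JSONDecoder().decode(ReviewsFeed.self, from: data) else { return }
    for entry in feed.feed.entry ?? [] {
        guard let text = entry.content?.label else { continue }
        sentiment(of: text, apiKey: "YOUR_API_KEY") { result in
            if let result = result {
                print("score \(result.score), magnitude \(result.magnitude)")
            }
        }
    }
}.resume()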

So how is Google sentiment analysis different from plain App Store ratings? Sentiment analysis overcomes a user’s bias in giving star ratings and only considers the actual text of the review. We can also combine this with sentiment analysis of Twitter feeds, along with other forums and internet feeds, to gauge overall sentiment from everyone. Then, instead of a rating for just an app, we can obtain a rating for a brand!

Twitter Stream ---> Google NL API ---> Google BigQuery ---> Google Data Studio [3]

If we set up architecture as shown above, we can easily generate sentiment analysis on brands, which is much more valuable.

Links to get you started with the Google API:

Summary

In this article we took a deep dive into the Google Sentiment Analysis API by leveraging its capabilities to compare two popular American ridesharing companies.

As we saw, this amazing API can provide lots of interesting and useful information for company executives. Knowing your brand engagement across markets and geographical regions, as well as your users’ and customers’ overall perception towards the brand, is critical to the overall success of any digital business.

The overall opportunities for Language Processing and Machine Learning platforms are endless. Across the board, companies receive a tremendous amount of feedback through various channels. Google Sentiment’s API is paving the way for developers and business executives to become aware of the overall sentiments their current or prospective users have towards their brand, products and services.

Introducing DISKOURSE: The New Social Platform Connecting Trump and Clinton Supporters

REDWOOD CITY, Calif., Sept. 23, 2016 /PRNewswire/ -- Y Media Labs today announces the launch of Diskourse, a new social media platform that instantly connects Hillary Clinton and Donald Trump supporters for civilized, one-on-one discussions on the issues that matter most in the volatile 2016 U.S. presidential election.

Now that the live presidential debates are here, technology can help the supporters of the two major party candidates to finally listen, and hopefully learn, from one another.

The Diskourse app can also alleviate what has become an all too common occurrence on Facebook: a post goes up about a hot-button issue like immigration or gun control and suddenly friends, co-workers and complete strangers are involved in nasty back-and-forth debates. Names are called. Insults are traded. No one wins. No one learns.

Diskourse wants to change that.

Don't stay trapped in your social media echo chamber; with Diskourse you'll meet people who want to hear the other side of the story.

You can engage with others, ask questions, and write thoughtful responses. The idea is to find consensus -- or politely agree to disagree.

Millennials represent one-third of the electorate. For them, personal opinions are more valued than news reports; 86% of Millennials report seeing diverse opinions on their social media feeds, according to the Media Insight Project.

"With all the toxic political rhetoric in this presidential campaign, Y Media Labs has leveled the playing field by creating an innovative platform where Donald Trump and Hillary Clinton supporters can safely connect. The Diskourse app encourages honest, one-on-one debate for all: Millennials, GenXers, Baby Boomers and the Greatest generations. After all, we could all use a little civility," said Robbie Abed, Director of Product Strategy, Y Media Labs.

The Diskourse app is available on iOS for FREE on the App Store. Log in via Facebook or email; only your first name appears, and no other personal information is revealed or shared.

Simple. Safe. Civil. It's Diskourse.
