By Prasad Pai, Technical Lead at YML | Jan 30

Presently, there is major concern about the likelihood of a global economic crisis.

In spite of the supposedly cautious mood adopted by a few countries, nobody is willing to say clearly whether the next recession is a week away or a year away. And then there are a few others who insist there is no economic slowdown at all.

Above all, these financial gurus’ opinions swing daily from a positive outlook to a negative one and back again. Every opinion is defensible: predicting the future is not easy, and each expert draws on a wealth of experience.

Hence, we wanted to establish a quantifiable way to gauge what the world’s well-known financial investors are thinking about the future of the economy.

We wanted to solve this problem with machine learning, using the least possible data and resources, in the quickest possible way.

Data collection

To collect data for our problem, we cannot hold recurring (potentially daily) one-on-one discussions with these investors, but we can scrape their interviews, discussions, and speeches from YouTube and their messages from Twitter.

To start, we short-listed a few financial behemoths and scraped YouTube video transcripts with YouTube-Transcript-Api and Twitter feeds with Tweepy. We split each YouTube transcript into segments of at least five seconds and ordered them serially to preserve the time-series nature of the data.
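As a rough sketch of this step (the library calls are commented out because they need network access and credentials, and the video ID, Twitter handle, and key names are hypothetical placeholders), the transcript splitting can look like:

```python
# from youtube_transcript_api import YouTubeTranscriptApi
# import tweepy

def chunk_transcript(entries, min_duration=5.0):
    """Merge raw caption entries into serial chunks of at least
    `min_duration` seconds, preserving chronological order."""
    chunks, texts, start, elapsed = [], [], None, 0.0
    for entry in entries:
        if start is None:
            start = entry["start"]
        texts.append(entry["text"])
        elapsed += entry["duration"]
        if elapsed >= min_duration:
            chunks.append({"start": start, "text": " ".join(texts)})
            texts, start, elapsed = [], None, 0.0
    if texts:  # keep any trailing remainder as its own chunk
        chunks.append({"start": start, "text": " ".join(texts)})
    return chunks

# transcript = YouTubeTranscriptApi.get_transcript("SOME_VIDEO_ID")
# segments = chunk_transcript(transcript)
#
# auth = tweepy.OAuthHandler(API_KEY, API_SECRET)
# auth.set_access_token(TOKEN, TOKEN_SECRET)
# tweets = tweepy.API(auth).user_timeline(screen_name="some_investor")
```

Each returned chunk keeps its start time, so the chunks can be ordered serially downstream.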

This is the summary of the data collected in our experiment:

Data summary

For the rest of this article, let us focus the discussion on Warren Buffett’s point of view.

Data Validation

As our data has been collected from YouTube and Twitter, we have to verify that the text is authentic and genuinely reflects thinking about the financial world. This is necessary because we are going to train models to predict the future of the economy, so our transcripts have to relate to finance and economics.

While collecting the data, we assumed that these financial investors are dedicated to their field and will mostly speak publicly about finance and economics. Still, we have to validate this heuristic.

We don’t wish to painstakingly filter individual text statements by hand, as is usually recommended. Instead, we create a small set of sentences that we believe typify finance and economics talk and should represent the state of our dataset. Here is one such sample set.

Custom_1: How is the economy doing in the United States of America?
Custom_2: The current state of affairs is not doing good.
Custom_3: Life will get difficult when inflation kicks in.
Custom_4: We are in a bull market.

a) Known language model embeddings

We generate sentence embeddings for all the text transcripts in the dataset, along with our artificially generated samples, using TensorFlow Hub’s Universal Sentence Encoder.

You can experiment with other language model embeddings as well, but we chose the Universal Sentence Encoder because it has been trained on a wide variety of topics. We plot these embeddings using TensorFlow’s Embedding Projector website. Upon performing t-SNE, we observe that most of the sentence embeddings quickly converge into one cluster along with our custom-generated examples.

This is an indication that most of our text samples belong to the domain of finance and economics. Here is one example cluster we observed in our experiments.

t-SNE convergence of the dataset using Universal Sentence Encoder embeddings

b) Using custom-built language model embeddings

Another thing we have to validate is the coverage of our dataset. The dataset should cover as many finance and economics concepts as possible. To check this, we need a language model built from general finance and economics text.

We weren’t able to find any publicly available language model in this domain, so we ended up training our own on freely available finance and economics textbooks.

We generated sentence embeddings for our dataset from this newly created, finance-specialized language model. We plotted the PCA components of these embeddings in the Embedding Projector and were happy to observe that they spread widely across all three dimensions.

This indicates that our dataset covers a wide range of subjects within our domain and is not restricted to one particular topic. Here is an example of the PCA projections we observed in our experiment.

PCA projections of the dataset using custom trained language model embeddings
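The Embedding Projector computes the PCA reduction for you; as an illustration of what that projection does, here is the equivalent in plain NumPy (a sketch, not the projector's exact implementation):

```python
import numpy as np

def pca_components(embeddings, n_components=3):
    """Project sentence embeddings onto their top principal components
    (the same reduction the Embedding Projector applies before plotting)."""
    X = np.asarray(embeddings, dtype=float)
    X = X - X.mean(axis=0)                    # center the data
    # Right singular vectors of the centered matrix are the principal axes.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T
```

Wide spread of these components across all three plotted dimensions is what suggests broad topical coverage.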

We also performed t-SNE on these sentence embeddings and found that they converged into multiple dense clusters. Each cluster indicates a specific concept within our specialized domain of finance and economics, which demonstrates the extensive topical coverage of our dataset.

On the whole, we were able to validate our heuristic that our financial gurus speak almost exclusively about their area of expertise. Here is an example of the cluster projections using t-SNE.

t-SNE convergence of the dataset using custom-trained language model embeddings

Data Filtering

Though this particular dataset proved good enough for our experiment, we will not always be so lucky. A dataset may contain text samples of general discussion unrelated to our desired subjects of finance and economics.

In such cases, we have to filter out the samples whose sentence embeddings, under a standard language model, lie far from all of our artificially generated typical examples.

To achieve this, we make use of the NMSLIB library. We weed out every text sample whose cosine similarity to each of our custom-generated samples falls below a threshold.

To attain a proper dataset in this crude but simple way, we may have to repeat the validation and filtering cycle described above several times, with different sets of custom-generated samples.
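The filtering rule can be sketched as follows. A real run would serve the nearest-neighbor queries through an NMSLIB index (`method="hnsw"`, `space="cosinesimil"`), shown commented out; the plain-NumPy version implements the same logic, and the threshold value is an assumption to tune:

```python
import numpy as np

def keep_near_domain(sample_vecs, anchor_vecs, threshold=0.5):
    """Return indices of samples whose best cosine similarity to any
    anchor (custom-generated finance sentence) exceeds `threshold`."""
    S = np.asarray(sample_vecs, dtype=float)
    A = np.asarray(anchor_vecs, dtype=float)
    S /= np.linalg.norm(S, axis=1, keepdims=True)
    A /= np.linalg.norm(A, axis=1, keepdims=True)
    best = (S @ A.T).max(axis=1)   # best similarity per sample
    return [i for i, sim in enumerate(best) if sim > threshold]

# The NMSLIB equivalent builds an approximate index over the anchors:
# import nmslib
# index = nmslib.init(method="hnsw", space="cosinesimil")
# index.addDataPointBatch(anchor_vecs)
# index.createIndex()
# neighbours = index.knnQueryBatch(sample_vecs, k=1)
```

Samples whose indices are not returned are the ones weeded out of the dataset.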

Sentiment analysis

Once we have gathered a good dataset of text samples, it is time to process them. Our problem is to arrive at a quantifiable measure that forecasts the economic outlook from the public statements made by financial investors.

Since our dataset comprises only finance and economics subjects, a simple sentiment analysis over these samples gives us a quantified metric of the underlying sentiment in the investors’ statements.

We make use of Google Cloud’s Natural Language API to perform sentiment analysis on each sample in our dataset. We get sentiment scores ranging from -1.0 (negative) to 1.0 (positive), giving a sense of the speaker’s underlying feelings.
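A sketch of the scoring step (the client call is commented out because it requires the google-cloud-language package and credentials; the small helper that orders the scores into a time series is our own addition):

```python
# from google.cloud import language_v1
#
# client = language_v1.LanguageServiceClient()
#
# def score(text):
#     doc = language_v1.Document(
#         content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
#     resp = client.analyze_sentiment(request={"document": doc})
#     return resp.document_sentiment.score   # in [-1.0, 1.0]

def to_series(scored):
    """Sort (timestamp, score) pairs chronologically and return the
    univariate series of sentiment values used for model training."""
    return [score for _, score in sorted(scored)]
```

The resulting ordered list of scores is the univariate series fed to the models below.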

Training models

Now it is time to train the models. We have univariate time-series data comprising sentiment values. Let’s train different types of models on our problem and compare them against each other. For each model, we use the first 95 percent of the data for training and the trailing 5 percent for testing.

a) LSTM model

We will start with a deep learning solution, using LSTMs in TensorFlow. After training, we forecast the output one time step at a time. The predicted values versus the ground truth are shown below. We do not plot confidence intervals because, after each time step, we feed in all the true previous values before predicting the next one.

Here are the graphs obtained in our experiments after training for 10 and 25 epochs, respectively.

LSTM test predictions at end of 10th and 25th epochs of training
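A minimal sketch of the setup (window size, layer width, and optimizer are assumptions, not our tuned values; the Keras part is commented out because it needs TensorFlow installed):

```python
import numpy as np

def make_windows(series, window=10):
    """Slice a univariate series into (past-window, next-value) pairs."""
    X = [series[i:i + window] for i in range(len(series) - window)]
    y = [series[i + window] for i in range(len(series) - window)]
    return np.asarray(X)[..., None], np.asarray(y)   # LSTM expects 3-D input

# X, y = make_windows(sentiments)
# split = int(0.95 * len(X))            # first 95% train, trailing 5% test
#
# import tensorflow as tf
# model = tf.keras.Sequential([
#     tf.keras.layers.LSTM(32, input_shape=(X.shape[1], 1)),
#     tf.keras.layers.Dense(1),
# ])
# model.compile(optimizer="adam", loss="mse")
# model.fit(X[:split], y[:split], epochs=25)
# preds = model.predict(X[split:])      # one step at a time, with true history
```

Because each test window contains the true previous values, this matches the one-step-ahead evaluation described above.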

b) ARIMA model

A deep learning solution does not work well when you have little data, particularly when forecasting from a univariate dataset. So we next attempt a statistical approach: ARIMA.

ARIMA captures the trends inside the dataset internally, but to do so we first have to transform the data into a stationary time series. This method gives us a better result, with a much smaller test loss.

ARIMA test predictions
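The stationarity transform is first-order differencing; the statsmodels fit is sketched below with an assumed (p, d, q) order, not the one we actually tuned:

```python
import numpy as np

def difference(series, lag=1):
    """Differencing to make a series stationary: y'[t] = y[t] - y[t-lag]."""
    s = np.asarray(series, dtype=float)
    return s[lag:] - s[:-lag]

# With statsmodels, d=1 in the order applies this differencing internally:
# from statsmodels.tsa.arima.model import ARIMA
# fit = ARIMA(train, order=(2, 1, 1)).fit()
# forecast = fit.forecast(steps=len(test))
```

If one round of differencing does not yield stationarity (e.g. by an ADF test), the transform can be applied again with d=2.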

c) TensorFlow Probability model

TensorFlow has launched a new framework, TensorFlow Probability, which lets us combine the domain knowledge of probabilistic models with deep learning. As with the earlier models, we create an elementary TensorFlow Probability model and fit our univariate dataset to it.

TensorFlow Probability can be trained to capture local, seasonal, and many other trends in the dataset, something the earlier models either could not do or were difficult to instruct explicitly.

TensorFlow Probability test predictions
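A hedged sketch of the structural time-series setup: the component choices below are assumptions, and the TFP calls are commented out because they require tensorflow_probability. The `mse` helper is the test-loss metric used to compare all three models:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: the test loss reported for each model."""
    t = np.asarray(y_true, dtype=float)
    p = np.asarray(y_pred, dtype=float)
    return float(np.mean((t - p) ** 2))

# import tensorflow_probability as tfp
# trend = tfp.sts.LocalLinearTrend(observed_time_series=train)
# seasonal = tfp.sts.Seasonal(num_seasons=7, observed_time_series=train)
# model = tfp.sts.Sum([trend, seasonal], observed_time_series=train)
# surrogate = tfp.sts.build_factored_surrogate_posterior(model=model)
# (fit the surrogate with variational inference, then call
#  tfp.sts.forecast(model, train, parameter_samples, num_steps_forecast))
```

Comparing `mse` on the trailing 5 percent test split is how the table below was produced.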

Comparison of different models

This is the average test loss we obtained in our experiments. Note, however, that these results are specific to our dataset and do not necessarily generalize.

Loss summary

Understandably, the ARIMA model gives the lowest test loss, since our dataset was small and univariate in nature.

Forecasting economic outlook

Finally, we feed in the entire dataset and use our best model to predict the future economic outlook. This is the result we obtained in our experiment.

Forecasted Output: 0.100

We will not put much weight on this result, however, as our experiment had several shortcomings, listed next; the quality of the result should improve once they are addressed.

Drawbacks in our experiment

  1. First and foremost is the data. We need data that is as recent as possible; with limited data available, we had to scrape quite old videos and tweets from YouTube and Twitter.
  2. Data has to be collected periodically. We ignored this aspect in our experiment; if regularly spaced data cannot be obtained, the missing values have to be interpolated.
  3. We evaluated sentiment with a general-purpose sentiment analysis tool. It would have been better to build our own, trained specifically on finance and economics statements.
  4. We used only the sentiment of the investor's statements as the training attribute. Though sentiment is a major factor, other minor factors are worth exploring, such as the mood in which a statement was made, or whether it came from an interview or a discussion.
  5. We did not concentrate much on hyperparameter tuning, as the motivation was simply to prove the concept, and we employed only simple models.

Future work

Apart from the problems listed above, there are a few other directions worth exploring.

  1. Investors make public statements every day, so the dataset evolves continuously. Online learning methods have to be integrated into our work, and the best way to do this is to fit the entire pipeline into TensorFlow Extended (TFX).
  2. Each of the three models may do well in certain cases, so it is worth applying ensembling techniques such as boosting to improve the results.
  3. Combine the individual investors' economic outlook forecasts into a single score.

If you would like to look at the code used in this experiment, see my GitHub repository.

Y Media Labs is working closely with Google to improve the TensorFlow experience for users across the world, and this experiment is part of one of our case studies.


About the author

Prasad is a Machine Learning Engineer at Y Media Labs. He is currently responsible for developing prototypes that showcase machine learning capabilities to prospective clients, and for full-fledged projects that involve experimentation with neural network architectures.

By James MacAvoy, October 29, 2019

Data. It’s a word that strikes fear and excitement in the hearts of all project managers, scrum masters, and project teams alike. 

We know we want it, but we’re not 100% sure what to do once we get it. 

“Now what.”

We request, remind, chase down, test for, and eventually receive this precious data - only to have these familiar questions raised:

  • Where do we fit this into our project life-cycle?
  • How do I make this data actionable?
  • Who ate my clearly labeled chicken salad sandwich in the office refrigerator?  (I know it was you, Jeff)

Although answering these questions is an important step, at the core we have to dig into why we have to ask these questions in the first place. 

1 / Fear of Data

The primary issue we have to deal with when it pertains to data is fear. 

At its root the inherent nature of data can force us to rethink our direction, disprove our hypothesis, or cause us to realize that we’re trying to solve the wrong problem. 

Any of these results can force a major shift in your project direction. For project managers in particular, who typically hate seeing their project plans flushed down the toilet, at first glance data can feel like a problem.

Data also presents a problem familiar to project management: what are the implications of the data for the project, and how do we mitigate potential issues? The reality is, those questions are much easier to answer than the alternative of developing a product that is completely useless to the user.

As Shayna Stewart asks in her article, “Does the consumer find value in my product?”, data, no matter how scary it might be, allows us to answer that question before our product potentially falls flat with the consumer.

2 / Project Management Life-Cycle

The standard project management life-cycle typically consists of:

Initiation, Planning, Execution, Performance Monitoring, and Closure. 

In a typical digital project, if we incorporate data at all, it is usually within the planning phase; then, to a lesser extent, in the performance monitoring phase, and even then usually with a brand-new team that has no historical knowledge.

To effectively deliver a consumer-centric product that adds value to our users we need to incorporate the use of data throughout the project life cycle.

This means we need to continuously review our direction against any learned insights, and keep testing to validate our hypotheses and the decisions we make throughout the project.

Additionally, the considerations we make while running a project will need to be reconsidered. 

As Project Managers, it is ingrained in us to deliver a project that meets all scope requirements, on-time, and at/under budget. 

We’ve all seen the project management triangle of constraints - and likely seen the illustrations of how when one of those constraints is affected the overall quality of that project is in jeopardy. 

3 / Value Delivered

What the triangle of constraints typically leaves out makes for an incomplete picture of project quality: in addition to those constraints, we should be considering value.

We have all delivered a project over budget or later than planned. None of those situations is fun, but a far worse one is delivering a product in which the consumer finds no value. If we do that, it hardly matters whether the project was over budget or late, because it is already a failure.

A reasonable argument is that value is already factored into quality, which in a sense is true. But all too often the project lead’s notion of quality is based on requirements, or at best a project brief. Without the necessary data, those requirements could be wrong.

In this scenario how we calculate quality is just one part of what we need to factor. When we consider the overarching value to the customer, our definition of quality could actively change, as it should.

But There Is Hope…

Much of what we have discussed above revolves around being comfortable with fear and uncertainty.

We have to understand that the more information data provides, the more it can change our best-laid plans.

Additionally, the more we incorporate data into the traditional project management methodology and process the more likely we are to see those fears come to fruition.  

However, as project leads there are ways that we can avoid the potential pitfalls described above.  If we incorporate data into every phase of the project management life-cycle, and plan for the potential disruption that this new information may cause, we are far less likely to be surprised when this disruption happens.  

“What do you mean we need to revisit the problem statement?”

We know there will always be changes to a project, but as long as we do not ignore all the information we could have, no matter how scary, we can get in front of that risk and minimize what causes this fear in the first place. 

Training ourselves to understand that change is good, disruption is good, and ultimately adding value to our consumer’s lives is best.

By Shawn Murphy-Hockett | June 20th 2019

June. It’s a month filled with color. Nature is inviting, nurturing, and allowing of all things different. It’s as if Mother Earth is begging us to celebrate her multicolored world. Vibrant flowers bloom. Warm sunshine clears the air.

And, in most places, rainbow flags sway in the breeze.

Throughout the month of June, it is becoming more and more common for people across the US, and the world, to recognize Pride: a social movement rooted in commemorating the 1969 Stonewall Riots, a turning point in our country's LGBTQ+ history, and dedicated to promoting the self-affirmation, dignity, and equal rights of a historically marginalized group.

This colorful flag has gone through many revisions over the years but is now internationally recognized as the common symbol of the Pride movement.

It’s a significant, powerful moment each year.

But the way I see it, why should we limit ourselves to recognizing Pride only one month out of the year?

It’s incredible that this singular event has garnered so much support, attention, and recognition for what PRIDE aimed to do — promoting equal rights, building an alliance for the LGBTQ+ community, and celebrating sexual diversity.  And it continues to evolve, now a giant festival in cities across the world. Pride has become a party — maybe the best one of the year (If you haven’t been out in the Castro yet, I 100% guarantee you will have one of the best nights of your life).

I’m grateful to have always felt safe, included, and wanted whenever I’ve walked into a gay bar. And the gay community doesn’t have to be welcoming to outsiders like me. If you think about it, I’m the one that is ‘invading’ their safe space.

Now - imagine that the entire world doesn’t feel safe for you to be your true authentic self.

What does this basic cis-gender straight white girl know about inclusivity when it comes to the LGBTQ+ community? Not enough.

I grew up in a very liberal, albeit, hippie household where my brother and I were always told that it didn’t matter the sex, race, socio-economic status, nationality, etc. of the HUMAN we loved. That doesn’t mean my parents still didn’t ask me all of the naive questions about my brother marrying a man. They meant no harm, they were honestly just curious about how this whole “gay-marriage” thing works. I am so grateful to my parents for being open-minded and loving their children no matter what, as I know that a lot of my friends were not so lucky.

Which is why as an adult, I’ve made it a personal goal that wherever life takes me, both personally and professionally, my environment must feel like home.

This is exactly why I chose to grow my career with Y Media Labs.

I knew it was right for me as soon as I finished my interview. Y Media Labs has an unconventional hiring philosophy. While hiring for fit is the norm, YML insists on hiring people who don’t fit, but rather come in and add to the culture through their differences.

This is all rooted in the idea that our culture is constantly evolving, and that while change can be hard, it’s vital to growth. The same culture of inclusivity and celebrating diversity that reverberates through the Pride movement lies at the foundation of YML.

There are three new projects specifically in place to help pursue, identify, and celebrate our differences more so than ever before.

Our newly established ‘Women’s initiative’ meets once a month to listen and learn from a fellow member’s presentation with topics such as confidence, culture, and women in the tech industry.

‘Passion Projects’ invites employees into the more unknown talents, backgrounds, and hobbies of their fellow co-workers. This can be anything from holding a wine tasting to learning about someone’s past life in a traveling circus (hasn’t happened yet, but a girl can dream).

Finally, we will be reigniting ‘Listening Sessions’ to have a safe place for those deep conversations about topics such as LGBTQ+ rights and safety in the workplace. The goal is to gather the team and create allies around how we can ALL do better to achieve foreseeable outcomes.

So, I’m challenging you to make a lasting impact. Make it a point to care. Openly. Colorfully. Lovingly. And not just in June. By observing our differences, we form a rainbow each day. For each other. For ourselves. For our values. Let’s hold ourselves accountable.

Together we can, and we will find a way to make the world a little more proud.

By Poojan Jhaveri

Every year, product teams are eager to see what’s new at WWDC, hoping to increase retention or launch products and features that could reach users across 1.4 billion active Apple devices around the world.

This year definitely lived up to expectations around Apple’s ecosystem and its vision of privacy and productivity.

Here’s a quick guide for navigating all of the uber-cool announcements from WWDC 2019 from a product manager perspective.

Product teams should start preparing for:

  • Privacy first
  • Customize around utility
  • Anywhere, anytime with watchOS6
  • Build for an ecosystem
  • Next-gen AR & ML

By the way, did you know that...

It will not be long before iOS 13 takes over this fall.

Privacy first

Apple has always been a strong proponent of users’ privacy, and at this WWDC it took the next step, extending that paradigm to apps and services by acting as a layer between the user and anyone who tries to harvest data.

Single Sign-On - “Sign in with Apple” will allow users to authenticate or register within apps using a tokenized or direct Apple ID. For users, this means not having to compromise their data just to log in quickly. It will be required for any app that supports third-party login (Google, Facebook).

From a product strategy perspective, this means you’ll have to make explicit effort to get additional user data such as age, location, etc. On the flip side, reduced friction will lead to faster activations.

This combined with Apple Pay will be the UX of challenger brands on mobile.

Location privacy - Starting with iOS 13, apps can request users’ location each time it is needed, in addition to “always” and “while using.” Apple is also blocking apps from inferring location via Bluetooth and Wi-Fi. This means your location strategy, and the UX around it, will have to be revisited, especially for users who have been more hesitant to share location data.

Kids’ Data privacy - Apps in the kids’ category will no longer be able to include third-party advertising or analytics software that sends data out. Product managers will have to rely more on user research to understand app usage.

All in all, audit your apps for data privacy before it is too late.

Customize around utility

Users build habits around features that Apple provides natively. When a third-party app does not support one of them, it is seen as outdated or, worse, as not delivering on users’ expectations.

Dark Mode - There are two kinds of people: those who love dark mode and those who don’t. Whichever team you belong to, its two biggest benefits are reduced eye strain at night and better contrast.

And with the option of automatic switching at specific times, most apps and users will activate it. As a product manager, you should plan to take advantage of this and support dark mode in your app.

Core Haptics - Apps can now leverage a new customizable haptic engine to generate vibrations and audio. This can be used to reinforce action or get attention upon completion of core actions (eg. payment) or validations. Games can also use this framework to create advanced tactile games.

SwiftUI - With Xcode11, Apple introduced a new way of designing user interfaces for the app known as SwiftUI. SwiftUI will allow developers to preview their screen and write code at the same time.

As a product manager, you can take advantage of built-in functionality such as localization, accessibility support in addition to a productivity boost. On the other hand, this might call for some initial refactoring!

Begin by bringing your design and engineering teams together and planning for new design experience.

Anywhere, anytime with watchOS 6

Apple Watch is the best-selling smartwatch ever, but when it comes to utility it has yet to justify a real need. With watchOS 6, Apple has taken a huge leap toward decoupling watchOS from iOS and expanding its utility. Here’s how:

Independent Apps - Developers can take advantage of the frameworks and hardware to build watchOS apps without needing an iOS companion app. watchOS 6 also brings the App Store to the watch, so users can download apps anytime without their phone.

Audio Streaming - With the streaming API, apps can now stream audio directly to the watch over cellular or Wi-Fi, something previously limited to Apple’s own apps. Music, meditation, and fitness apps will be able to include a live audio feed that stays with the user wherever they go.

Extended Runtime - Currently, all watch apps (except workout ones) become inactive after a specific period of time. With watchOS 6, apps can be designed for session-based use cases such as alarms, self-care, physical therapy, mindfulness, and health monitoring.

It also opens up access to heart rate, motion, and location. As a PM for this category, you can now drive guided sessions or alert users on the watch.

Build for an ecosystem

Users don’t think in terms of multi-device journeys.

For them, it is about solving the use case as easily and quickly as possible.

With that in mind, here’s how we can take advantage of some of the new features to boost productivity:

iPadOS - Let’s face it: with the increase in iPhone sizes, iPad usage has shifted from being an iPhone with extra screen space to being used as a functioning computer.

This year, Apple separated its OS for iPad as iPadOS directed for more multitasking use. iPadOS supports external drives, multiple windows, split view, and drag and drop gestures.

With this change, the more important question to ask is - Do you still need an iPad App and if so - how will it piece into the ecosystem along with iPhone & Desktop?

Bring your iPad apps to Mac - With Project Catalyst, Apple will now allow iPad apps to be ported over as macOS apps, meaning the same code can be reused and adapted across iOS and macOS. Not everyone needs a macOS app, though.

Here’s how you can decide on it:

  • Does your app serve its core use without requiring mobile hardware capabilities?
  • Does your app need to notify users, and does it see heavy daily engagement on the web?

If the answer to both questions is yes, then you should talk to your team about extending your iPad app to the Mac.

NextGen: AR + ML

While consumer usage hasn’t changed much here, Apple continues to advance AR capabilities for games and enterprise use in preparation for an eventual Apple wearable glasses product.

Here are the features that you might want to take advantage of:

ARKit 3 - ARKit now supports real-time occlusion of people and objects in the environment. With this newly gained knowledge of people and their position in the virtual world, it will also be able to support Motion Capture, which is tracking and using body movement as input for AR. This can be leveraged in health or fitness apps to guide users through a session.

On the machine learning front:

Core ML 3 - For the first time, app developers can take advantage of on-device machine learning and NLP across the Watch, iPad, and iPhone. Models can be updated on the device with user data to become more personalized, for example to surface features or content based on a user’s usage over time.

VisionKit - Apple has opened up its document scanning framework to all third-party apps. In addition, it can detect text in images and produce a heatmap of the areas of an image where users are likely to focus their attention.

Brainstorm with your team to take advantage of Machine Learning for your app.

Bring this to life.

When should you start planning? Now.

iOS, iPadOS, and watchOS developer betas will be available in July, with public release in the fall (around September).

Let’s connect on how we can help you bring your ideas to life and get first-mover advantage before September.

Learn how we helped Staples become the world’s first Apple Pay retail integration.

June 3, 2019 — By Will Leivenberg

After months of working together informally, Y Media Labs is proud to announce its partnership with Google focused on TensorFlow Lite, the core open source library that helps developers of all levels understand and train machine learning models. YML is helping the worldwide developer community learn and adopt the technology.

Google worked specifically with YML’s Innovation Lab team. The TensorFlow Lite team challenged YML to translate a complex topic like machine learning into something approachable and easy to navigate. Since then, YML has become a leader in the application of TensorFlow for developers worldwide.

"The YML team produced thoughtful and thorough documentation concentrating on how best to on-board developers from various backgrounds, ranging from machine learning to mobile and web," says Sumit Mehra, YML Founder and President. "The work ultimately laid the foundation for how developers across the spectrum can create world changing applications, not to mention a collaboration yielding four applications that make the framework as seamless as possible," Mehra added.

The partnership is ongoing.

May 29, 2019 — By Will Leivenberg

Listening to the dialogue between Stephen Clements, YML's CCO, and Damjanski is like overhearing two mischievous kids hatching their next scheme. It's whimsical, suspenseful, and kind of terrifying.

The only difference is that it’s happening over Zoom, and each of these guys is sporting a bit of gray hair.

The two have a storied past of creative endeavors and found their way back together about a year ago around an unexpected project. Damjanski had become fascinated with AI, to the point that he’d begun considering whether there was a way to actually collaborate with a program.

"Was there a way to integrate it into my thinking process?" said Damjanski about his original idea. "Could I create original work that demonstrated this concept, was ultimately the idea."

Stephen saw Damjanski's vision, and knew just the team to bring it to fruition — YML's Innovation Lab, the scrappy team of six based in YML's Bangalore office. What ensued was a plethora of emails, Zoom calls, sketches, math equations and countless cups of coffee over more than six months. Led by Innovation Labs Director Darshan Sonde and Kinar Ravishankar, YML helped not just conceptualize, but ultimately build the AI for Damjanski's project.

The final work would come to life as "Damjanski: Natural Selection," a collaboration between Damjanski, his longtime collaborator, Vasco, and YML that debuted May 1st at ONCANAL in New York City.

From left to right: Damjanski, Darshan Sonde, Kinar Ravishankar, Vasco.

As the exhibition website reads, Natural Selection "investigates ideas of collaboration with an AI and its integration into the artist’s practice." Damjanski specifically wanted the AI to build around all the archived exhibition statements of the MoMA in New York.

"The challenge was to create more lifelike speech for AI," said Darshan Sonde. "We had created lots of models, but in the end the model released by OpenAI worked the best. This wasn't the work we were used to doing, and it was especially different because this is for an art show, but that also gave us more liberty in what the text could generate."

The exhibition comprises a headset where people can interact with the AI to create new exhibition statements that will be delivered by a printer. Each statement is a new source of information that will inform the artist’s thinking process. Damjanski compares this process to the biological evolution of genes, which is driven by reproduction and survival in order to procreate or grow.

"The unique challenge was training the AI on data," Sonde said. "Data has to be large, so we had to write custom scripts to scrape the data from the MoMA website and cleanup and tweak it to generate good results."
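The custom scripts Sonde mentions aren't published with this piece. As a rough illustration of the cleanup half of that pipeline only (the HTML fragment and function names here are assumptions, not MoMA's actual markup or YML's actual code), standard-library Python is enough to turn scraped exhibition statements into clean training text:

```python
# Hypothetical sketch: clean scraped HTML statements into training text.
# The markup below is an invented example, not MoMA's real page structure.
import re
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collects the text content of an HTML fragment, ignoring tags."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def strip_tags(html: str) -> str:
    """Return only the visible text of an HTML fragment."""
    parser = _TextExtractor()
    parser.feed(html)
    return "".join(parser.parts)

def clean_statement(html: str) -> str:
    """Turn one scraped statement into a single clean line of text."""
    text = strip_tags(html)
    text = text.replace("\u00a0", " ")           # non-breaking spaces
    text = re.sub(r"\s+", " ", text).strip()     # collapse all whitespace
    return text

# One scraped fragment becomes one clean line of training text.
raw = "<p>The  exhibition&nbsp;explores <em>machine</em>\n vision.</p>"
print(clean_statement(raw))  # prints "The exhibition explores machine vision."
```

In practice, statements cleaned this way would then be concatenated into a corpus for fine-tuning the language model.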

"The YML team was an outstanding partner and I'm proud of the work," said Damjanski, whose work at the exhibition is live through the end of May, open 7 days a week (11am–7pm) at 322A Canal Street, New York City.


Damjanski is an artist living in a browser. He is a co-founder and member of the incubation collective Do Something Good, and also the co-founder of the MoMAR gallery within New York’s Museum of Modern Art.


@d.a.m.j.a.n.s.k.i                  #oncanal

YML announced Wednesday its partnership with Earnin, the Fintech payday advance app.

YML will work closely with Earnin to create a category defining digital experience in the coming months.

Earnin helps workers track and cash out wages in real time. The YML work will focus on the development of Earnin's mobile app, which is built to help workers get paid as soon as they leave work, with no loans, fees, or hidden costs.

YML is committed to advancing the cause of the gig economy, and working with Earnin exemplifies that effort.

We were out of place, no doubt: a Silicon Valley-based, technology-driven design and innovation agency surrounded by some of the most elite marketers and advertisers in the industry, if not the world, in midtown New York City at The Times Center.

But then we started talking. The room quieted, and the audience was suddenly captivated.

Check out our presentation from CCO Stephen Clements and product strategist Shayna Stewart. Together they illuminated something that related to all parties in the room, whether representing an agency or brand — design is a tool for making businesses better. We're not creating art for art's sake. We're creating to make businesses better.

That idea is rooted in our DNA at YML, and it's how we make lasting impact.

And the impact was strong! We even got featured in Adweek.

Until next year.

Reach out with any questions.

By Hsio Ling Hee

We are all scared that machines will take our jobs.

I was at an exhibition at the De Young Museum last year - the Cult of the Machine.  

A little iPad in the corner asked me, “What do you do for a living?” I typed, “partnerships.” It did not compute. I retyped, “sales.”

Bip, bip. “There is a 14% chance you will be replaced by a machine.”
Software engineers, as it turns out, are more prone: 42%.

It made me feel less bad about myself (suck it, computer scientists!), but it also made me realize how this little iPad made me feel vulnerable, less valuable, replaceable.

Source: Cult of the Machine, de Young Museum

We are ultimately scared that AI will replace us, that it will replace our humanity.

By understanding what AI can and cannot do - I no longer feel threatened by AI. AI is a friend, not foe. AI is there, so that we can enhance our humanity. Do things that make us more human.

Caring for your family. Exploring other parts of the world and understanding how other cultures live. Leaving a legacy behind for the next generation.

Imagine ensuring the safety of your family with a lock that opens only for people you know. Imagine sharing stories with a new local friend you just met on your backpacking trip through Southeast Asia, with your phone as a translator. Imagine if disasters could be detected earlier by watching for warning signs, so that no one ever has to lose their home again.

This is magic - made possible by AI.

At YML, our Innovation team saw the light too. Actively experimenting and publishing findings since 2016, we have built models that make expressing your ideas and thoughts easier by predicting the next word as you write.
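The next-word models YML built aren't shown here, and a production version would be neural. Purely as an illustration of the idea, though, a toy frequency-based predictor captures the core mechanic of suggesting the most likely next word:

```python
# Toy illustration only: a bigram frequency model for next-word prediction.
# This is not YML's actual model, just the simplest form of the idea.
from collections import Counter, defaultdict

def build_bigram_model(corpus: str):
    """Count which word follows which in the corpus."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(model, word: str):
    """Return the most frequent follower of `word`, or None if unseen."""
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = build_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

A real writing assistant would replace the bigram counts with a trained language model, but the interface (type a word, get a likely next word) is the same.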

We have even proven that patience does pay off.

So when the Google TensorFlow Lite team reached out and wanted to partner with YML to make machine learning more accessible - there was only one answer.

With the Google TFLite team, we built examples and documentation so other developers may benefit from our experimentation and, as a result, reduce the time they need to deploy a solution to a human problem.

Source: AI in motion: designing a simple system to see, understand, and react in the real world

Our own experimentation in machine learning paid off.

Machines give us room to expand our human minds. They give the mind much needed oxygen to birth a creative solution to a human problem.

Because if a machine with no consciousness (topic for another day) can do your job, wouldn’t you want to work on something more impactful and fundamentally worthy of your humanity?

Improving and evolving (albeit with help from our AI friends) - what is more human than that?


