4 New Apple APIs You Didn’t Hear About in the WWDC Keynote

June 7, 2017

With so much hype around Apple’s keynote announcements, it’s easy to miss the real innovations that come out of the APIs released during WWDC’s developer sessions.

In this article, we’ll highlight the top 4 APIs you didn’t hear about in the WWDC keynote that could have a huge impact on your overall business.

#1 – NFC can now be used with apps

While Near Field Communication, or NFC, has been around on iPhones for some time, developers haven’t had access to the APIs needed to create NFC experiences for those devices… until now.

Apple released the beta version of its CoreNFC framework on Monday, enabling apps on the iPhone 7, 7 Plus and beyond to detect NFC tags and read messages that contain NDEF data.

With iOS 11, users will be able to interact with the real world more seamlessly than ever before. Imagine simply tapping your device against an object to get more information. Good riddance to the infamous QR code scanning. Users can hold their phones to a museum exhibit to learn more about the artist, or tap them against a display in a store to get price and location details for the products on show.

But the use cases don’t stop there – and that’s what’s exciting about NFC tags. Think of them as the natural next step in the Internet of Things world. With NFC support, virtually every product out there can become smart.

Imagine you’re walking down the street and you see a real estate display showing houses nearby. You tap your phone to the window and now you have a 3D view of a house you’re interested in buying. Eventually, with a simple NFC integration, you could transfer a model of the house to your phone and explore it later with a VR headset.

NFC is also a “Eureka” moment for retailers everywhere. With NFC support, you can add smart tags to the products you most want to sell. With a simple phone tap, shoppers could pull up consumer electronics reviews of the product standing in front of them, or access detailed specs, ratings or even videos on demand. Most importantly, they would do this in a convenient and seamless fashion that ultimately helps them decide whether or not to buy that product. No more having to go home and search for product information on Google. With NFC support, companies can structure that information and make it available to users directly on the phone, right in the store, without forcing them to download an application beforehand.
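As a rough illustration, here is a minimal sketch of reading an NDEF tag with the beta CoreNFC framework. The class name and the assumption that the payload is UTF-8 text are ours; real tags may carry URIs or other record types.

```swift
import CoreNFC

// Minimal sketch: start an NFC reading session and print NDEF payloads.
class TagReader: NSObject, NFCNDEFReaderSessionDelegate {

    func beginScanning() {
        // invalidateAfterFirstRead ends the session once a tag is read.
        let session = NFCNDEFReaderSession(delegate: self,
                                           queue: nil,
                                           invalidateAfterFirstRead: true)
        session.alertMessage = "Hold your iPhone near the product tag."
        session.begin()
    }

    func readerSession(_ session: NFCNDEFReaderSession,
                       didDetectNDEFs messages: [NFCNDEFMessage]) {
        for message in messages {
            for record in message.records {
                // Interpretation depends on the record type; here we
                // simply assume a UTF-8 text payload.
                if let text = String(data: record.payload, encoding: .utf8) {
                    print("Tag payload: \(text)")
                }
            }
        }
    }

    func readerSession(_ session: NFCNDEFReaderSession,
                       didInvalidateWithError error: Error) {
        print("Session ended: \(error.localizedDescription)")
    }
}
```

Note that reading requires a physical iPhone 7 or later and the NFC entitlement; it cannot be exercised in the simulator.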

#2 – Image Analysis thanks to Vision API

With a combination of Core Image, Core Machine Learning (we’ll address this later on), and the new Vision APIs, the possibilities for what you can do with the camera and photos have blown wide open.

Apple also announced it is opening up capabilities for developers to create incredible new experiences with the iPhone camera. The Vision API, implemented as classes in Swift, handles very specific tasks out of the box: face detection, horizon detection, text detection and barcode recognition.

You can easily detect facial landmarks, such as a user’s mouth and eyes, so if your app wants to put a fancy top hat emoji onto everybody’s head, it can do so in real time, straight from the camera. Previously you needed expertise in computer vision and a lot of time to build something that sophisticated; now any iOS engineer can do it.
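A minimal sketch of that landmark detection with the Vision framework might look like the following; the function name and the choice to print the left-eye points are ours.

```swift
import UIKit
import Vision

// Sketch: detect faces and their landmarks in a still image.
func detectFaceLandmarks(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectFaceLandmarksRequest { request, error in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // boundingBox uses normalized coordinates (0–1, origin bottom-left).
            print("Face at \(face.boundingBox)")
            if let leftEye = face.landmarks?.leftEye {
                print("Left eye has \(leftEye.pointCount) landmark points")
            }
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

For a live top-hat effect you would run the same request against camera frames (`CVPixelBuffer`s) instead of a still image.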

Remember that gorgeous picture you took two years ago in front of the Statue of Liberty in New York, and you want to look at it again? With the new iOS version, you’ll be able to find it quickly in your photos app, because the Vision API will automatically recognize and categorize pictures and objects based on faces, monuments, locations or groups of friends.

According to Apple, this API will perform more than 11 billion computations to sort, classify and categorize your pics. This is great news for messaging apps in particular, where users currently have no easy way to search for and retrieve a specific picture (WhatsApp, Instagram, Skype – we’re thinking of you – please use this API to make our lives easier!).

Another cool thing some of the big players (Facebook, Instagram, etc.) could explore is event-driven picture notifications. In other words, if Facebook were to integrate with the Vision API and you’re at Pier 39 in San Francisco, Facebook could surface a picture you took there with some of your friends 5 years ago and encourage you to take another one and post it directly to Facebook. Through machine learning, the Vision API can act as a smart assistant for taking pictures and creating memories.

#3 – Machine Learning on Device

Google has put a lot of effort behind its TensorFlow platform, enabling engineers to create and run machine learning models on a variety of devices. It recently announced TensorFlow Lite, an Android deployment of TensorFlow models that enables ultra-fast processing of data without having to make any network requests.

Apple is hot on its tail, today releasing the beta version of its Core Machine Learning library.

Although lower-level abstractions such as Accelerate have been around, Apple has now provided a top-level layer for developers to integrate and run trained machine learning models directly on the device. To take it a step further, they have even created a way to import trained TensorFlow models directly into your own app. Core Machine Learning automatically handles the hardware optimizations, detecting whether to use the CPU or GPU.

This means you can train your own custom image classification model in TensorFlow in Python that detects whether a food is a “Hot Dog” or “Not Hot Dog”, import that model straight into your app, and get results from an image instantly, even with no network connection. The number of things that can be built using these technologies is expanding rapidly, and the future of machine learning is clearly very bright, which is why Apple is making it a priority.
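Once a converted model is dropped into an Xcode project, using it on device is only a few lines. In this sketch, `HotDogClassifier` is a hypothetical model class that Xcode would generate from an imported `.mlmodel` file; we run it through Vision’s Core ML integration.

```swift
import UIKit
import CoreML
import Vision

// Sketch: classify an image with an on-device Core ML model.
// "HotDogClassifier" is a placeholder for an Xcode-generated model class.
func classify(_ image: UIImage) {
    guard let cgImage = image.cgImage,
          let model = try? VNCoreMLModel(for: HotDogClassifier().model)
    else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let best = (request.results as? [VNClassificationObservation])?.first
        else { return }
        // Runs entirely on device — no network request involved.
        print("\(best.identifier): confidence \(best.confidence)")
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

The same pattern works for any classifier: swap in the generated class for your own model and the rest of the code is unchanged.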

#4 – ARKit, a powerful framework to create AR experiences

I know this one was announced in the keynote, but it deserves some special mention. During the keynote, Apple showed some basic controls for creating AR experiences by combining motion-sensing hardware information with the camera. They even provided plane detection and hit tests to find real-world surfaces and enable your app to interact with them. ARKit has the potential to be a very powerful framework for developers.

Here’s what’s really cool about ARKit.

It can literally visually transform the physical space you’re taking a picture of into what it could become.

Say you want to buy a new couch from Crate & Barrel and you’re not sure how it would look in your office. With ARKit, you can project the couch onto the space where you want to put it and get a “real life” sense of how it would look if you made the purchase.

Another use case is remodeling your house. If you’re like me, you’ve probably used websites like Havenly, Decorist or Laurel & Wolf to get a virtual designer to redesign your living room or office. In today’s world, you need to send pictures from different angles for a designer to get a basic understanding of your space. With ARKit, you will be able to easily and conveniently create a 3D model of your room and send it over. It won’t be perfect, but it will be good enough for anyone to get an idea of your space.

Another less exciting but noteworthy aspect of ARKit is the fact that, in theory, every app with a video feature could now add filters and “record” with augmented reality objects embedded in the short clip.

Unlike the more common “AR” experiences where objects sit in an arbitrary space in front of the user, ARKit enables developers to create highly interactive experiences where objects can detect and sit flat on tables, counters and other surfaces to make them feel truly life-like.
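The plane detection that makes this possible takes very little code. Here is a minimal sketch using an `ARSCNView`; the view controller name is ours, and in a real app you would attach a 3D model (the couch, say) to the detected plane instead of just logging it.

```swift
import UIKit
import ARKit
import SceneKit

// Sketch: run a world-tracking AR session and detect horizontal surfaces.
class ARViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)

        // Track the device's motion and look for flat, horizontal surfaces
        // such as tables, counters and floors.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    // Called when ARKit detects a new real-world surface.
    func renderer(_ renderer: SCNSceneRenderer,
                  didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        print("Detected plane with extent \(plane.extent)")
        // Here you would add SceneKit geometry (e.g. a couch model)
        // as a child of `node` so it sits on the surface.
    }
}
```

Hit testing works the same way in reverse: given a tap on screen, the session returns the real-world surface under it, so placed objects land exactly where the user points.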
