iOS 11 was unveiled at WWDC 2017 with a bunch of cool new features and plenty of new opportunities for developers. In this blog we list a few important iOS 11 features and the next-level opportunities they open up.
Apple has provided a beta version of iOS 11 to developers, and we are excited to try out the cool new features and get our customers' apps ready before its formal release in fall 2017.
The big buzz today is artificial intelligence, and the big players like Google and Apple are getting ready with SDKs for Android and iOS developers respectively. At the recent Google I/O, Google showcased its open-source TensorFlow library, which provides machine learning APIs for Android. In a similar move, Apple has released a beta version of its machine learning framework, Core ML. It provides new Swift classes, a top-level layer that lets developers integrate and run trained machine learning models directly on the device. With this, iOS 11 opens up next-level opportunities for app developers in the big AI space.
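As a minimal sketch of how this looks in code: assume a hypothetical trained model file, FlowerClassifier.mlmodel, has been added to the Xcode project. Xcode then generates a Swift class of the same name, and the input/output names used below (`image`, `flowerName`) are assumptions that depend on the model's own interface.

```swift
import CoreML
import CoreVideo

// Sketch: run an on-device prediction with a Core ML model.
// `FlowerClassifier` is a hypothetical Xcode-generated model class;
// the `image` input and `flowerName` output names are assumptions.
func classify(_ pixelBuffer: CVPixelBuffer) {
    let model = FlowerClassifier()
    if let result = try? model.prediction(image: pixelBuffer) {
        print("Predicted: \(result.flowerName)")
    }
}
```

The key point is that the model runs entirely on the device, with no network round trip, so predictions work offline and user data never leaves the phone.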
ARKit, the new framework for Augmented Reality apps:
Augmented reality opens up next-level opportunities for realistic buying experiences. For example, you can see how a piece of furniture you plan to buy fits in your drawing room, by augmenting the furniture into a live view of the room. Another example would be previewing which paint color works best for a brand-new house you are building. These are the kinds of cool apps we can now build for iOS 11.
ARKit, released in iOS 11, provides a cutting-edge platform for developing augmented reality (AR) apps for iPhone and iPad. The high-performance Apple A9 and A10 processors enable fast scene understanding, letting you build detailed and compelling virtual content on top of real-world scenes. We can also take advantage of ARKit optimizations in Metal, SceneKit, and third-party tools such as Unity and Unreal Engine.
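Getting a basic AR session running takes only a few lines. A minimal sketch, assuming a view controller with an `ARSCNView` outlet named `sceneView` (API names as of the iOS 11 SDK):

```swift
import ARKit

class ARViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking follows the device's position and orientation
        // so virtual content stays anchored in the real scene.
        let configuration = ARWorldTrackingConfiguration()
        // Detect horizontal surfaces (floors, tables) to place
        // virtual furniture on.
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }
}
```

From there, SceneKit nodes added to `sceneView.scene` appear anchored in the real-world view.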
Vision Framework, an image analysis framework:
This new framework lets you apply high-performance image analysis and computer vision techniques to identify faces, detect features, and classify scenes in images and video.
With this, it is now possible to detect objects in real time, making your iPhone much more intelligent. For example, point the iPhone camera at a flower of a rare species and an app can search for additional information about it. Or, if you're traveling, point the camera at a historic landmark and an app can give you information about the landmark and its history.
This is not only applicable to real-time objects: imagine you write a short note on a piece of paper; it is now possible to use the camera to convert the note to digital text. Importantly, the Vision framework provides APIs for face detection, and it integrates with Core ML so we can run custom models on images.
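The face detection API follows a simple request/handler pattern. A minimal sketch, assuming `image` is a `UIImage` with a valid `cgImage` backing:

```swift
import UIKit
import Vision

// Sketch: detect face rectangles in a still image with the
// Vision framework's request/handler pattern.
func detectFaces(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    // The request's completion handler receives the detected faces
    // as VNFaceObservation results.
    let request = VNDetectFaceRectanglesRequest { request, error in
        let faces = request.results as? [VNFaceObservation] ?? []
        print("Found \(faces.count) face(s)")
    }

    // The handler performs one or more Vision requests on the image.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

The same handler can run a `VNCoreMLRequest` wrapping a custom Core ML model, which is how the two frameworks plug together.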
SiriKit extended support:
SiriKit was introduced in iOS 10 with domains such as ride booking and payments, and in iOS 11 Apple has added more flexibility for developers.
Apple has added the Lists and Notes domain to SiriKit, supporting the use of Siri for creating notes and interacting with to-do lists and reminders.
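In practice this means an Intents app extension can handle phrases like "Create a note saying…". A minimal sketch of a handler for the new Notes domain (the hypothetical `saveToStore` step stands in for the app's own persistence):

```swift
import Intents

// Sketch: handle the SiriKit "create note" intent in an
// Intents extension (iOS 11 Lists and Notes domain).
class CreateNoteHandler: NSObject, INCreateNoteIntentHandling {

    func handle(intent: INCreateNoteIntent,
                completion: @escaping (INCreateNoteIntentResponse) -> Void) {
        // intent.title and intent.content carry what the user dictated;
        // saving them to the app's own store is left as an assumption here.
        // saveToStore(title: intent.title, content: intent.content)
        completion(INCreateNoteIntentResponse(code: .success,
                                              userActivity: nil))
    }
}
```

The extension declares the supported intents in its Info.plist, and Siri routes matching requests to this handler.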
The power of drag and drop, yes finally:
It is now possible to drag and drop content within an app and between two apps, though on iPhone drag and drop is available only within a single app.
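Adopting drag and drop means attaching an interaction to a view and vending drag items. A minimal sketch that makes an image view a drag source, assuming `imageView` is a `UIImageView` outlet in the view controller:

```swift
import UIKit

// Sketch: make a UIImageView draggable with the iOS 11 drag APIs.
class GalleryViewController: UIViewController, UIDragInteractionDelegate {
    @IBOutlet var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Image views ignore touches by default; enable them so the
        // drag interaction can begin.
        imageView.isUserInteractionEnabled = true
        imageView.addInteraction(UIDragInteraction(delegate: self))
    }

    // Vend the items to drag when the user starts a drag on the view.
    func dragInteraction(_ interaction: UIDragInteraction,
                         itemsForBeginning session: UIDragSession) -> [UIDragItem] {
        guard let image = imageView.image else { return [] }
        return [UIDragItem(itemProvider: NSItemProvider(object: image))]
    }
}
```

A drop target implements the matching `UIDropInteractionDelegate` to receive the item, and on iPad the same provider-based items can cross app boundaries.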
The list of new iOS 11 features goes on. We are excited to work with these new capabilities and will cover them further, with sample projects, in our upcoming blogs.