Artificial Intelligence continues to gain traction now that companies such as Google and Microsoft have released suites of easy-to-use tools. It enables us to create smarter apps and opens up a new range of applications. Xamarin doesn’t have any AI or Machine Learning capabilities itself, but it does play a part in gathering data and displaying information from AI systems. We will look at the current state of AI and Machine Learning, and how it currently applies to mobile applications.
Artificial Intelligence, Machine Learning and Deep Learning
Artificial Intelligence (AI) is a term used to classify a service that displays some level of intelligence. These systems can be based on complex machine learning algorithms, or even just a bunch of if-else statements. Services such as Speech to Text or Face Recognition are examples of AI services.
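As a toy illustration of the if-else end of that spectrum, here is a sketch of a purely rule-based "sentiment" service (the word lists and rules are invented for the example; no learning is involved):

```python
# A deliberately simple rule-based "AI": no model, no training,
# just hand-written rules that mimic a sentiment service.
POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"bad", "hate", "terrible"}

def classify_sentiment(text: str) -> str:
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    elif score < 0:
        return "negative"
    return "neutral"

print(classify_sentiment("I love this great app"))  # positive
```

The point is only that "intelligence" is judged by the behavior a service exposes, not by how sophisticated its internals are.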
Machine Learning (ML) is the next step, and the results from machine learning are often used to feed information into an AI service. Azure Machine Learning and TensorFlow are examples of machine learning offerings. Machine learning uses different algorithms to build models that can extract predictive or category-based information. Azure’s Machine Learning Algorithm Cheat Sheet shows some great examples. You can continue to train a model (retrain) to produce more accurate information over time.
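As a minimal, library-free sketch of what "building a model" means, here is a nearest-centroid classifier with made-up shopping data (real services use far richer algorithms, but the train/predict/retrain shape is the same):

```python
# Nearest-centroid classifier: "training" computes one centroid per
# category, "prediction" returns the label of the closest centroid.
def train(samples):
    """samples: list of (features, label) tuples."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(model, features):
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(features, model[label]))
    return min(model, key=dist)

# Made-up data: (page_views, minutes_on_site) -> did the user buy?
model = train([((1, 2), "no-buy"), ((2, 1), "no-buy"),
               ((8, 9), "buy"), ((9, 8), "buy")])
print(predict(model, (7, 7)))  # buy
```

Retraining is simply rebuilding the model with newly collected observations appended, which is how accuracy can improve over time.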
Deep Learning (DL)
Deep learning goes one step further and uses a neural network to gather information and create predictions or categorizations. The incredible part is the ability to perform unsupervised training, learning something without being explicitly taught it first.
AI systems can’t work without data being fed to them. Sometimes this information is acquired from other sources, but now might be the time to ensure that you are collecting data on your users’ actions to a degree that could be used for prediction. For example, if you have a mobile app that deals with shopping, you will want to track each user’s visit to each page, even if they do not buy.
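A sketch of the kind of lightweight event tracking that makes later prediction possible (the field names are invented for the example; a real app would persist these records or send them to a server):

```python
import time

events = []  # in a real app this would be persisted or sent to a server

def track_page_view(user_id, page, purchased=False):
    """Record every visit, including ones that end without a purchase."""
    events.append({
        "user": user_id,
        "page": page,
        "purchased": purchased,
        "timestamp": time.time(),
    })

track_page_view("u1", "/shoes/red-sneakers")
track_page_view("u1", "/checkout", purchased=True)
print(len(events))  # 2
```

The non-purchases are just as valuable as the purchases: a model that predicts buying behavior needs both.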
In other cases a video stream or camera may be used to detect faces or certain objects. These images will be processed on a server, with data sent back to highlight relevant information, e.g. via the Face API.
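The round trip typically looks like this: the app posts image bytes to the service and gets back JSON describing what was found. A hedged sketch of parsing a Face API-style detection response (the endpoint and key below are placeholders, and the response body is a simplified example):

```python
import json

# Placeholders; a real call would POST image bytes to your endpoint
# with an Ocp-Apim-Subscription-Key header.
ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com/face/v1.0/detect"
API_KEY = "<your-subscription-key>"

# Simplified example of the JSON a face-detection service returns.
sample_response = '''[
  {"faceId": "abc123",
   "faceRectangle": {"top": 50, "left": 60, "width": 100, "height": 100}}
]'''

def face_rectangles(response_body: str):
    """Extract the bounding boxes the app would highlight in its UI."""
    faces = json.loads(response_body)
    return [f["faceRectangle"] for f in faces]

print(face_rectangles(sample_response))
```

The app’s job is the two ends of that pipeline: capturing and uploading the image, then rendering the rectangles the service sends back.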
These are some of the data-gathering scenarios you might need to consider in future app builds if you want to leverage AI.
AI & Machine Learning Services
There are many AI and Machine Learning services; to highlight the common ones:
- Azure Cognitive Services
- Google Cloud Service (Speech, Translation, Vision, etc)
- Amazon Web Services (Lex, Polly, Rekognition)
- Azure Machine Learning
- Google Cloud Service (Machine Learning)
- Amazon Web Services (Machine Learning)
- TensorFlow (OSS Library not Cloud service)
Expanding upon gathering data, you may want to open up your app to receive and process requests from AI services such as Siri or Cortana. Platform or third-party APIs can decipher intents and pass these on to your app for processing. You may not need to use any AI services to complete the request, but you need to be ready to accept them.
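Conceptually, the assistant hands your app an already-decoded intent plus its parameters, and the app simply routes it to a handler. A hypothetical sketch (the intent names and payload shape are invented for the example):

```python
# Hypothetical decoded-intent payload handed to the app by an assistant.
def handle_intent(payload):
    handlers = {
        "order_status": lambda p: f"Order {p['order_id']} is shipped",
        "open_page": lambda p: f"Navigating to {p['page']}",
    }
    handler = handlers.get(payload["intent"])
    if handler is None:
        return "Sorry, I can't do that yet"
    return handler(payload["parameters"])

print(handle_intent({"intent": "order_status",
                     "parameters": {"order_id": "42"}}))
# Order 42 is shipped
```

Note that no AI runs inside the app here: the intelligence (speech recognition, intent detection) happened upstream, and the app just fulfills the request.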
TensorFlow Lite, and even to some degree Azure’s IoT Edge, are part of the push of intelligence out to edge devices. These types of libraries enable business logic to be performed locally on the edge device, without needing a connection to a server.
The prime example here is TensorFlow Lite. It allows a trained model, such as one that recognizes objects in a video stream, to be deployed locally on a mobile device. You can then use the results of machine learning without a connection to a server. Another upcoming library is Microsoft’s Embedded Learning Library, which I believe is designed to run on a Raspberry Pi, though I suspect it will be transferable to a mobile device.
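Whatever the on-device runtime, the app-side pattern is the same: run the model locally, get back a raw score per label, and pick the best one. A minimal sketch of that last step (the labels and scores are made up; a real classifier would produce the scores):

```python
def top_label(labels, scores):
    """Pick the label with the highest model score."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return labels[best], scores[best]

labels = ["cat", "dog", "car"]   # made-up label file contents
scores = [0.05, 0.90, 0.05]      # made-up model output
print(top_label(labels, scores)) # ('dog', 0.9)
```

Because all of this runs on the device, the app can keep classifying frames with no network connection at all.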
You will find that a lot of examples are server-side. Xamarin is unable to make predictions or train models itself, but it can use an API to display information already generated by these services.
General examples of AI you can use in your app include chat bots, recommendation engines, purchase prediction, and automated reasoning. Automated reasoning can take many variables and produce a predictive result for your app’s user. Take Google Maps directions, for example: it takes your current location, speed limits, traffic congestion and more, and feeds them into a model to predict the best route to your destination.
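A toy sketch of that kind of automated reasoning: score candidate routes by combining several variables into a single cost, then pick the lowest (the weights and route data are invented for the example; real routing models are vastly more sophisticated):

```python
def route_cost(route, congestion_weight=2.0):
    """Combine distance, speed and congestion into one score (lower wins)."""
    travel_time = route["distance_km"] / route["avg_speed_kmh"]
    return travel_time + congestion_weight * route["congestion"]

routes = [
    {"name": "highway", "distance_km": 30, "avg_speed_kmh": 100, "congestion": 0.8},
    {"name": "back roads", "distance_km": 22, "avg_speed_kmh": 60, "congestion": 0.1},
]
best = min(routes, key=route_cost)
print(best["name"])  # back roads
```

The shortest or fastest-on-paper route is not always chosen: once congestion is weighted in, the prediction can flip, which is exactly what a good directions service does.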
More specifically, I can see many examples of where AI can enhance your app further:
- Bill cost prediction (though this would be done server side)
- Optimizing a news feed based upon what users interact with or want to see the most
- A chat bot for instant customer support
- Alerts based upon anomalies in data, e.g. patient information
- Facial recognition and GPS location to validate a timesheet entry for hospitality workers
There are plenty of services to help enable AI in your app. For easy-to-use AI services, you can look at Azure Cognitive Services. If you want to go deeper and produce results from trained models, you can have a look at Azure Machine Learning. A C# binding for TensorFlow has even been created, called TensorFlowSharp, and there is also a Xamarin-based library for the Face API.
While AI on the device is not quite here, services can help add some intelligence to your app.