Introduction To Machine Learning in iOS
Machine learning is a subset of artificial intelligence (AI): the study of computer algorithms that improve automatically through experience. It gives systems the ability to learn and improve from data without being explicitly programmed.
Machine Learning Ecosystem in iOS:-
We can use four different input sources for machine learning on iOS devices: images, speech, natural language text, and sound.
By using the Vision framework & custom Core ML models we can detect:
1. Faces in videos and images
2. Motions inside images
3. Text inside images
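As a rough sketch of how the first item (face detection with Vision) might look in code; the synchronous error handling and the completion callback shape are illustrative assumptions, not from the article:

```swift
import Vision
import CoreGraphics

// A minimal sketch of face detection with the Vision framework.
// Real apps should run this off the main thread and handle errors properly.
func detectFaces(in cgImage: CGImage, completion: @escaping ([VNFaceObservation]) -> Void) {
    let request = VNDetectFaceRectanglesRequest { request, error in
        // Each VNFaceObservation carries a normalized bounding box for one face.
        let faces = (request.results as? [VNFaceObservation]) ?? []
        completion(faces)
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

The same handler can run several requests at once (for example, face and text detection together), which is cheaper than creating a handler per request.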
Readily available Core ML model examples for analyzing images:
From Shaadi's perspective, we can try this feature for verifying/approving photos uploaded by users on the front end only.
ML can be used to perform speech recognition in many languages, and also to convert speech into text for dictation.
From Shaadi's perspective, we can use this capability in our “Free Text Search” feature and let users search profiles by speaking in their mother tongue.
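A minimal sketch of how such speech-driven search input could work with Apple's Speech framework; the Hindi ("hi-IN") locale and the audio-file input are illustrative assumptions:

```swift
import Speech

// Transcribes a recorded search query. A real app must first call
// SFSpeechRecognizer.requestAuthorization and declare the usage in Info.plist.
func transcribeSearchQuery(audioURL: URL, completion: @escaping (String?) -> Void) {
    // The initializer returns nil when the locale is unsupported on the device.
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "hi-IN")) else {
        completion(nil)
        return
    }
    let request = SFSpeechURLRecognitionRequest(url: audioURL)
    recognizer.recognitionTask(with: request) { result, error in
        // Only the final result carries the complete transcription.
        if let result = result, result.isFinal {
            completion(result.bestTranscription.formattedString)
        } else if error != nil {
            completion(nil)
        }
    }
}
```

For live dictation the same recognizer takes an `SFSpeechAudioBufferRecognitionRequest` fed from the microphone instead of a file URL.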
We can use the Natural Language framework to segment natural language text into paragraphs, sentences, or words, and to tag information about those segments.
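For example, segmenting a sentence into word tokens with `NLTokenizer`:

```swift
import NaturalLanguage

// Splits text into word-level tokens; punctuation and whitespace are skipped.
func wordTokens(in text: String) -> [String] {
    let tokenizer = NLTokenizer(unit: .word)
    tokenizer.string = text
    return tokenizer.tokens(for: text.startIndex..<text.endIndex)
        .map { String(text[$0]) }
}

print(wordTokens(in: "Machine learning is fun"))
// prints ["Machine", "learning", "is", "fun"]
```

Switching the unit to `.sentence` or `.paragraph` gives the other segmentations the framework supports, and `NLTagger` adds part-of-speech or language tags on top of the same ranges.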
The Sound Analysis framework can be used to analyze audio and recognize it as a particular type of sound, such as laughter or applause.
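A sketch of how that could look with `SNClassifySoundRequest` and Apple's built-in classifier; the audio file URL and the print-based result handling are illustrative:

```swift
import SoundAnalysis

// Receives classification results as the analyzer walks through the file.
class ClassificationObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        // e.g. "laughter: 0.93" or "applause: 0.88"
        print("\(top.identifier): \(top.confidence)")
    }
}

func classifySounds(in fileURL: URL) throws {
    // .version1 is Apple's built-in sound classifier (iOS 15+).
    let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
    let analyzer = try SNAudioFileAnalyzer(url: fileURL)
    let observer = ClassificationObserver()
    try analyzer.add(request, withObserver: observer)
    analyzer.analyze() // processes the whole file synchronously
}
```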
The first and foremost building block of machine learning is the model. A machine learning model consists of an algorithm and the data used to train that algorithm.
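To make the "algorithm plus data" idea concrete, here is a toy, framework-free sketch: a nearest-neighbour "model" whose algorithm is a distance comparison and whose training data is a handful of labelled points. All the labels and coordinates are invented for illustration.

```swift
import Foundation

// A toy model: the algorithm (nearest-neighbour lookup) plus its training data.
struct NearestNeighbourModel {
    let trainingData: [(point: (Double, Double), label: String)]

    // The "algorithm": predict the label of the closest training point.
    func predict(_ p: (Double, Double)) -> String? {
        trainingData.min { distance($0.point, p) < distance($1.point, p) }?.label
    }

    private func distance(_ a: (Double, Double), _ b: (Double, Double)) -> Double {
        let dx = a.0 - b.0, dy = a.1 - b.1
        return (dx * dx + dy * dy).squareRoot()
    }
}

// The "training data": two labelled clusters.
let model = NearestNeighbourModel(trainingData: [
    ((0.0, 0.0), "bridge"),
    ((10.0, 10.0), "waterfall"),
])

print(model.predict((1.0, 1.0)) ?? "unknown") // prints "bridge"
```

Real Core ML models work the same way in spirit: the training data shapes the parameters, and prediction applies the learned algorithm to new input.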
For example, when a user uploads a profile photo and we want to analyse the objects and landmarks in it, the problem domain is digital images of the various objects and landmarks to be identified.
What is a Model?
Apple defines a model as “the result of applying a machine-learning algorithm to a set of training data”.
The model accepts an input, applies the algorithm and its learnings to it, and then predicts a suitable output. Training a model requires lots of data representing most of the expected use cases.
For example, if we want the model to recognise a waterfall and a bridge and differentiate between them correctly, lots of images of each should be fed to the model to train it.
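A hedged sketch of how such a trained image classifier might be invoked through Vision and Core ML; "ImageClassifier" is a hypothetical model name standing in for whatever compiled model ships with the app:

```swift
import Foundation
import CoreML
import Vision
import CoreGraphics

// Runs a bundled Core ML image classifier and returns the top predicted label.
// "ImageClassifier.mlmodelc" is a placeholder name, not one from the article.
func classify(cgImage: CGImage, completion: @escaping (String?) -> Void) {
    guard let url = Bundle.main.url(forResource: "ImageClassifier", withExtension: "mlmodelc"),
          let coreMLModel = try? MLModel(contentsOf: url),
          let vnModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion(nil) // no model bundled, or it failed to load
        return
    }
    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        let top = (request.results as? [VNClassificationObservation])?.first
        completion(top?.identifier) // e.g. "waterfall" or "bridge"
    }
    try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```

Vision takes care of scaling and cropping the image to the model's expected input size, which is why it is usually preferred over calling the model's `prediction` method directly for image inputs.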
We will get to actual models in upcoming quarters; till then, keep training models.