How Is Machine Learning Applied to Google Photos? [Solved]
Google has acquired many machine-vision and machine-learning companies over the years to ensure it has the best people on the planet in this field. I personally think that is where the trap lies: cognitive image analysis and automatic image description are not as simple as they seem, and there are dozens of ways to solve the problem. Most of the companies and scientists Google has acquired solve it in their own unique ways, which creates the problem of combining these algorithms into one coherent system. My guess is that they run multiple algorithms for different kinds of analysis (coming from different people and teams) and then use machine learning to automatically understand the content of pictures. For feedback, they use a form of reinforcement learning, via CAPTCHAs and various online image-matching games, to keep results relevant. This approach is somewhat distributed and unorganized at this point in time, but let's see how Google tackles it in the long run.
Another Answer: How Is Machine Learning Applied to Google Photos?
Google Photos makes use of a convolutional neural network architecture similar to the one used by Geoffrey Hinton's team in the ImageNet Large Scale Visual Recognition Challenge. The differences lie only in the number of classes and the training data.
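To make the idea concrete, here is a minimal sketch of the core operation such a network is built from: sliding a learned kernel over an image and applying a ReLU nonlinearity. This is an illustration of one convolutional layer in isolation, not Google's actual implementation; the edge-detector kernel and toy image are made up for the example.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation), the core op of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product of the kernel with one image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit, the nonlinearity used in AlexNet-style networks."""
    return np.maximum(x, 0)

# A toy 5x5 image: bright on the left, dark on the right.
image = np.array([
    [10, 10, 0, 0, 0],
    [10, 10, 0, 0, 0],
    [10, 10, 0, 0, 0],
    [10, 10, 0, 0, 0],
    [10, 10, 0, 0, 0],
])

# A 3x3 vertical-edge kernel; real networks learn these weights from data.
edge_kernel = np.array([
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
])

# One convolution + ReLU: the feature map responds strongly at the edge.
feature_map = relu(conv2d(image, edge_kernel))
print(feature_map)
```

A full network stacks many such layers (with pooling and fully connected layers on top), and the final layer outputs one score per visual class.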
- In contrast to the 1,000 visual classes used in the competition, Google used about 2,000 visual classes based on the most popular labels on Google+ Photos. They chose labels with a clear visual meaning, i.e. things humans could recognize in a photo by sight.
- They make use of Freebase entities, which are the basis for the Knowledge Graph in Google Search. These entities identify concepts in a language-independent way. In English, when we encounter the word "jaguar", it is hard to tell whether it refers to the animal or the car manufacturer; entities assign a unique ID to each, removing that ambiguity.
- Google trained on more images than were used in the ImageNet competition. They also refined the class set from 2,000 down to about 1,100 classes to improve precision.
- Using this approach, Google Photos achieved roughly double the precision normally achieved with other methods.
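The entity idea from the list above can be sketched in a few lines: the classifier predicts entity IDs rather than English words, and display labels are looked up per language afterwards. The IDs and lookup table below are made-up placeholders for illustration, not real Freebase machine IDs.

```python
# Hypothetical entity table. Real Freebase machine IDs look like "/m/0449p";
# the keys and descriptions here are invented for the example.
ENTITIES = {
    "/m/animal_jaguar": {"en": "jaguar (large cat)", "de": "Jaguar (Raubkatze)"},
    "/m/car_jaguar":    {"en": "Jaguar (car maker)", "de": "Jaguar (Autohersteller)"},
}

def label_photo(predicted_entity_ids, language="en"):
    """Map classifier outputs (entity IDs) to unambiguous, localized labels."""
    return [ENTITIES[eid][language] for eid in predicted_entity_ids]

# Both photos would be tagged "jaguar" in plain English, but the entity IDs
# keep the animal and the car distinct, independent of language.
print(label_photo(["/m/animal_jaguar"]))
print(label_photo(["/m/car_jaguar"], language="de"))
```

The design point is that disambiguation happens once, at the ID level, so every downstream consumer (search, albums, any UI language) agrees on what the photo contains.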