Artificial Intelligence (AI) and Machine Learning (ML) were among the major announcements at I/O 2017. Google CEO Sundar Pichai said the company will be using AI, ML, Deep Learning (DL) and computer vision across its products, including Google Cloud, Google Assistant, Google Lens, Google Home and more.
He also revealed that the camera is turning into a search box with the company's latest application, Google Lens. With the help of machine learning, Google Lens will understand the world around you through the smartphone camera and help you take action based on that understanding.
It is a means of providing you with the right information in a meaningful way. All you need to do is point your camera at an object and tap the Google Lens icon, and the device will automatically identify it. If you point your camera at a building, for example, Lens will tell you the building's name. You can also connect to a Wi-Fi network simply by pointing the camera at the router's sticker. This is an important step in bringing artificial intelligence into people's lives on a wider scale.
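Google has not published how Lens works internally, but the core technique described here, recognizing what the camera sees with a trained vision model, can be sketched in a few lines. The example below is only an illustration of general-purpose image classification using the open-source, pretrained MobileNetV2 model from Keras; the image path is a placeholder and none of this reflects Lens's actual pipeline.

```python
# Illustrative sketch: classify a photo with a small pretrained vision model.
# This demonstrates the general idea of on-device image recognition,
# not Google Lens itself. "building.jpg" is a placeholder file path.
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = MobileNetV2(weights="imagenet")  # compact model suited to phones

img = image.load_img("building.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2f}")  # e.g. "palace: 0.62"
```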
Suggested Sharing in Google Photos:
When a photo is being taken, we often insist that it be taken on our own phone so that we do not have to wait for friends to send it to us. Now, with machine learning in Google Photos, the app will remind you to share photos and even suggest the people to share them with. A separate sharing tab gathers all of your sharing activity. One tap and you are done.
Along with this, Google introduced a new shared libraries feature. You can now share your photo library with anyone, choosing to share either your whole library or just a subset of it, and you retain full control over what is shared.
Google Assistant gets smarter, more capable than ever:
With the help of AI and ML, Google Assistant is smarter and more responsive than ever. It can now respond through both text and voice, so you can simply type a query if you do not want to speak out loud in public.
Google Lens is integrated into Google Assistant, bringing AI-powered, built-in image recognition to the conversation. Tapping a new button in the Assistant app launches Lens so you can insert a photo into the conversation with the Assistant; Google Lens processes the data the photo contains and the Assistant responds accordingly.
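The internals of this pipeline are not public, but one concrete way to picture "processing the data the photo contains" is extracting text from the image and answering a question from it. The sketch below uses the open-source pytesseract OCR library purely as a stand-in; the file name, question and matching logic are assumptions for illustration, not the Assistant's actual implementation.

```python
# Illustrative sketch: pull text out of a photo and answer a question about it.
# pytesseract (an open-source OCR wrapper) stands in for whatever Google uses;
# the photo path and question below are placeholders.
import re

import pytesseract
from PIL import Image

def answer_from_photo(photo_path: str, question: str) -> str:
    text = pytesseract.image_to_string(Image.open(photo_path))
    if "password" in question.lower():
        # Toy "understanding": look for a password-like field on, say, a router sticker.
        match = re.search(r"(?:password|pass|key)\s*[:=]?\s*(\S+)", text, re.IGNORECASE)
        if match:
            return f"The password appears to be {match.group(1)}"
        return "I couldn't find a password in that photo."
    return text.strip() or "I couldn't read any text in that photo."

print(answer_from_photo("router_sticker.jpg", "What is the Wi-Fi password?"))
```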
Google also announced that the Assistant is coming to iOS devices; users will be able to download the Assistant app and talk to it there.
Intelligence in Cars:
Google also wants to bring intelligence and a rich app ecosystem to cars. Despite challenges such as driver distraction, varying screen sizes and differing input mechanisms, Google is using Android Auto, now available as a standalone phone app, to provide a seamless experience to drivers.
Some other areas where Machine Learning will be used are:
Hands-free calling on Google Home: Google Home will act like a new landline phone. Just say “Hey Google, call Dad” and the device will recognize your voice and dial your dad's number from your personal contacts.
Smart reply in Gmail: Gmail is getting smarter with new machine learning abilities that read messages and suggest replies. For instance, if a question has been asked in the mail, the system will recommend a response (a toy sketch of the idea appears after this list). The smart reply feature will be available in the Gmail apps for iOS and Android.
Google for Jobs: Google for Jobs is a new initiative announced at the annual developer conference. It will use Google’s search power and machine learning abilities to collect millions of job postings from different places, and it has been embedded in Google Search for faster, more accurate results (a minimal aggregation sketch also appears below).
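To make the smart reply item above concrete: the basic idea is to read the incoming message and rank a handful of short candidate replies. Gmail does this with learned models; the rule-based toy below, with its made-up heuristics and canned replies, is only meant to illustrate the shape of the feature, not how Google implements it.

```python
# Toy sketch of the smart-reply idea: inspect an incoming message and
# suggest a few short candidate replies. Gmail uses trained models;
# these hand-written rules exist only to make the concept concrete.
def suggest_replies(message: str) -> list[str]:
    text = message.lower()
    if "?" in text:  # the message asks a question
        if any(word in text for word in ("meet", "available", "free")):
            return ["Yes, that works for me.",
                    "Sorry, I can't make it.",
                    "Let me check and get back to you."]
        return ["Yes.", "No.", "Let me get back to you on that."]
    if "thank" in text:
        return ["You're welcome!", "Happy to help.", "Anytime!"]
    return ["Got it, thanks!", "Sounds good.", "Thanks for the update."]

print(suggest_replies("Are you free to meet on Friday?"))
# ["Yes, that works for me.", "Sorry, I can't make it.", ...]
```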
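Likewise, the heart of the Google for Jobs item is aggregation: pulling postings from many job boards and collapsing duplicates so that a search returns each job once. Google has not published its pipeline, so the feeds, fields and de-duplication key below are assumptions chosen purely for illustration.

```python
# Illustrative sketch of aggregating job postings from several hypothetical
# feeds and collapsing duplicates. Google for Jobs' real pipeline is not public.
from dataclasses import dataclass

@dataclass(frozen=True)
class Posting:
    title: str
    company: str
    location: str
    source: str

def aggregate(feeds: list[list[Posting]]) -> list[Posting]:
    seen, merged = set(), []
    for feed in feeds:
        for post in feed:
            # Treat postings with the same title, company and location as
            # duplicates, regardless of which job board they came from.
            key = (post.title.lower(), post.company.lower(), post.location.lower())
            if key not in seen:
                seen.add(key)
                merged.append(post)
    return merged

feeds = [
    [Posting("Data Analyst", "Acme", "Austin, TX", "board_a")],
    [Posting("Data Analyst", "Acme", "Austin, TX", "board_b"),  # duplicate
     Posting("ML Engineer", "Acme", "Remote", "board_b")],
]
print(len(aggregate(feeds)))  # 2 unique postings
```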