Google I/O 2018 was an impressive display of Google’s commitment to innovation across its product lineup, with announcements ranging from subtle software updates to an entirely new hardware category in the Google Smart Display. Consumers and developers alike will be impressed as Google continues refreshing its products with cosmetic and functional changes designed to give end-users a seamless technological experience. Google also set the record straight that its commitment to AI is not merely lofty platitudes but a genuine drive to infuse its products with features that use predictive analytics and machine learning to cater more specifically to individuals’ needs.
Our team at Productive Edge is particularly interested in Google’s foray into smart displays with the Google Smart Display, the latest Android OS, “Android P,” and Google’s integration of AR into Google Maps. Given our wide variety of related services and projects, we are excited to see how Google approaches these and many other areas. Here are our team’s top takeaways from the event:
The Focus on AI
The buzzword at the conference was Artificial Intelligence, with Google affirming a renewed commitment to investing in computer vision, natural language processing, and neural networks. Google is leveraging machine learning and big data to improve its products, provide users with connected experiences, and serve up relevant content. Prior to the event, Google renamed its research unit from Google Research to Google AI, stressing the importance of AI for future R&D. As such, AI has permeated most of the Google suite, from Gmail to Google Maps and everything in between.
Many of the distinguishing features Google announced were micro-updates that increase ease of use and overall productivity for consumers. Across the majority of its product suite, Google aims to use AI as a powerful resource for assisting with routine tasks. For example, Google enhanced its Google Assistant technology with “Continued Conversations”: instead of constantly having to say “Ok Google” to activate the Assistant, you can now have a more casual, natural discussion with your smart device once the conversation has been initiated.
Google is also adding enhancements to Google Photos that put AI at the forefront of the user experience. Google Photos already offered innovative auto-suggestions for photo editing, but they were buried in the editing section of the app. Now, when users view a picture, Google will make intelligent edit suggestions based on saturation, gradation, and background. For example, Google’s robust facial recognition software can determine that the same friend of yours appears in three pictures and make a micro-suggestion to share the photos with them. Google also announced the new ability to transform black-and-white photos into color using artificial intelligence and machine learning.
By making photo editing easy and seamless, Google is subtly encouraging people to store more pictures with Google, adding to its vast trove of data and improving its data mining abilities. Google continues to use its AI capabilities to serve up content that is highly relevant and contextualized to the end-user. Google News will surface stories relevant to the user based on location and interests, collected in a “For You” section of the revamped Google News app. Google is also leveraging its web-crawling ability with a new “Full Coverage” feature that shows how a news story is being covered by different sources in real time.
New Google Smart Display
Amazon’s Echo Show now has a Google counterpart: the Google Smart Display. Launching in July, the Smart Display pairs Google Assistant with robust video and display capabilities. Google has a clear leg up on Amazon thanks to YouTube, which is currently blocked on the Echo Show. With its robust suggestion algorithms, YouTube can be navigated completely hands-free on the Smart Display. One promising use case for this hands-free experience is cooking: the Smart Display’s Assistant, paired with YouTube, lets budding chefs watch how-to videos and set timers without having to fumble around with messy hands. Additionally, with the popularity of Google Hangouts, users can now video chat with friends and family without ever lifting a finger.
Google Maps – Making AR hyper-relevant
Google aims to be a source not only of information but also of discovery, and is venturing into the world of AR with new Google Maps features that merge the physical and virtual worlds. Thanks to these enhancements, real-time navigational directions will be overlaid on the user’s camera view on their mobile device, reducing a common frustration when road signs are unclear or not readily apparent. This is a powerful use case for AR and a great application of the tremendous amount of mapping imagery Google has collected from Street View and Google Earth. Google will tap this data through its Visual Positioning System (VPS), which uses imagery from Street View and Google Earth to understand more accurately where the user is located. Additionally, Google Maps will personalize suggestions and add a “For You” tab featuring locations and events based on your preferences.
Android P – Giving Users More Control
During the conference, Google stressed that it wants to give users more control over how they use their phones. Android’s latest software update, Android P, provides more granular controls over app usage and notifications.
In Android P, users gain the ability to see a breakdown of how they use various apps and can then set timers for individual apps. Once a timer expires, the OS will discourage further use by greying out the app’s icon and blocking access to it.
Android P also makes “Do Not Disturb” mode more robust and tranquility-inducing: it will silence not only sounds and vibrations but all notifications as well. Users can also trigger “Do Not Disturb” by simply turning the phone over with the screen facing down. Additionally, the OS will recognize when users persistently swipe away an app’s notifications without interacting with them and will ask whether they’d like to disable notifications for that particular app.
Android P will also make the first significant change to the OS’ navigation scheme in over six years. Instead of the omnipresent button bar with back, home, and recents buttons, navigation controls will appear depending on context and relevance. For example, a back button will only appear when it makes sense for the app you are currently using.
Android P likely won’t be finished until sometime this fall, but it is currently available as a beta for developers and daring users.
Want to Know More?
Find out about the latest trends in tech and business by subscribing to the Productive Edge blog. Fill out the form, and we’ll be in touch.