
The most important new releases from Google

Date: 30th January 2020
Read: 6 min
Creator: Mark Forster

At its latest product launch event, Google announced a bunch of new devices, including the Pixel 2, Google Home Mini, Google Pixelbook, Google Home Max, Google Pixel Buds and Google Clips. The company also discussed its decision to take all hardware development in-house as part of its #madebygoogle campaign. Undoubtedly, though, the most interesting aspect of the latest line of devices is that at the core of all of them are the AI and machine learning capabilities Google has been diligently enhancing.

With this in mind, we will now look at the most important new releases announced by the tech giant.

Google Home Mini

Google Home Mini is a donut-sized Google Home device that Google envisages will become our point of contact with the world of Artificial Intelligence in our homes. We have already seen in previous presentations from the company how Google Home uses AI to provide information to users.

During this presentation, it was revealed that Google Home now works not only with other Google products like Chromecast, but also with over 1,000 home automation products already on the market. Google is also working with Nest to improve integration with its Learning Thermostat and related products.

Tools like Google Home and Amazon Echo Dot have been marketed as productive devices that enable users to improve their homes. But we all know only too well that for children, they can also act as toys to which silly, as well as important, questions can be posed. Google has now acknowledged this and is equipping Google Home to understand and provide appropriate answers for all such questions. This is made possible through AI technology that uses machine learning to understand the context of the questions. Only then can correct answers be given in a way that is fun and informative for children.

Google Pixelbook

Google also announced the new Pixelbook, a device that can be used as either a laptop or a tablet. The sleek design of this new offering was influenced by that of the Pixel phones. The Pixelbook also gives users access to the same apps they have downloaded onto their Android devices.

Along with the Pixelbook, Google launched the Pixelbook Pen. Although this looks like every other stylus on the market, there is one key differentiator: a little button on the top that lets users interact with content on the screen, as well as with the pre-installed Google Assistant on the Pixelbook.

To show what is possible with this combination, Google ran a demonstration in which a user pressed the button and circled an image of a person on the screen. This instantly activated Google Assistant on the device and brought up information about that person. This implementation of AI opens up amazing possibilities for instant access to information.

Google is also going to use machine learning to recognise and understand users’ handwriting when they use the Pixelbook Pen.

Google Pixel 2

Along with the above, Google also announced the new Pixel 2 devices that will be available for Android lovers everywhere. Aside from the obvious hardware upgrades over previous Pixel models (including the removal of the 3.5mm headphone jack), the focus this time was on embedding machine learning and AI into the device.

To start with, Google simplified the process for accessing Google Assistant; now all it takes is a gentle squeeze of the smartphone. The Assistant can also be used to set up ‘routines’, which function like workflows.

Similarly, just as ‘Google it’ made its way into the popular lexicon, ‘Lens it’ is set to become the next phrase on every techie’s lips. Google Lens, which, for now at least, is being made available exclusively on the Pixel 2 devices, will use machine learning to recognise – and provide information about – whatever objects users point their phones at.

Another big focus for Google is AR. Previously, the company had announced it had been working on an AR platform for Android called ARCore. Now, it has confirmed that this will be available for users of the Pixel 2. Most notably, ARCore brings AI and machine learning into the domain of Augmented Reality as well. And, as shown by a small demonstration of AR stickers interacting with each other based on relationships set between them, this can really enhance the experience.

Image: Google
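
For developers curious about ARCore, the sketch below shows roughly how an Android app might check whether a device supports the platform before creating an AR session. It is a minimal Kotlin illustration based on the publicly documented ARCore Android SDK, not a description of Google’s demo; the function name is our own, and the install prompt and error handling a real app would need are omitted.

import android.app.Activity
import com.google.ar.core.ArCoreApk
import com.google.ar.core.Session

// Minimal sketch (assumes the ARCore Android SDK is on the classpath):
// check whether this device supports ARCore before creating a Session,
// the object an app builds motion tracking and rendering on top of.
fun createArSessionIfSupported(activity: Activity): Session? {
    val availability = ArCoreApk.getInstance().checkAvailability(activity)
    return if (availability.isSupported) {
        Session(activity) // a real app would also handle install/update prompts here
    } else {
        null // AR not supported, or support could not yet be determined
    }
}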

Google Pixel Buds

Google’s first venture into the earphones market with Google Pixel Buds was welcomed with huge applause. Admittedly, most of the company’s competitors had already released some form of smart hearable device. However, Google has found some interesting ways to differentiate itself in the market, backed by years of work in AI and machine learning. Thanks to Google Assistant integration, the earphones do a lot of smart things. But the best example of machine learning on this device has to be its ability to integrate with Google Translate, providing almost instant live translation of conversations conducted in two different languages.

This feature alone could blow the competition away and turn Google Pixel Buds into the go-to smart hearable device. In particular, it could work wonders for tourists and delegates whose meetings can be complicated by language barriers.

Google Clips

The final announcement from Google related to the release of Google Clips, a small camera that can be placed anywhere in the house or clipped onto household objects. Using machine learning, this device identifies moments of potential importance to the user and documents them through photographs and videos. A companion app can then be used to select the most significant of these, which can be saved to Google Photos.

Unleashing the AI and Machine Learning Kraken

For years, Google has been quietly collating information on user behaviour across its wide range of services, including search, maps, translation and photos. All this data has now been fed into machine learning algorithms that have come to learn a lot about what users want and how they interact with particular apps and services. This is what forms the basis for all the latest developments outlined above.

Indeed, Google is perfecting machine learning and incorporating it into the devices with which we interact every day for activities ranging from taking pictures to listening to music. And while some might argue such a high level of machine learning and AI could have its downsides, only time will tell whether users end up loving or hating these developments.

At hedgehog lab, we absolutely love the fact that technology is set to enhance our lives by being proactive and providing us with the information we need (as long as all that information is processed on the devices themselves rather than shared via servers). We are looking forward to using and learning from all the AI and machine learning features that Google has embedded into its latest set of devices.