What is the future according to Google? It’s a pretty exciting place.
Artificial intelligence rules supreme. All of the world’s information is available right at our fingertips, and, of course, Google is the company to provide it.
Machine learning is at the heart of everything Google does
Google may seem to have diversified in recent years, exploring everything from self-driving cars to smartphones, but machine learning is at the heart of everything it does. Google started off as a search engine and naturally expanded into machine learning and AI: these let it actually understand the questions you ask and present relevant answers, rather than just listing search results with matching phrases. Search engine optimizers will be familiar with the ‘RankBrain’ algorithm that powers this smarter search. Google Assistant evolved from that same natural language processing, combined with voice recognition made possible through machine learning.
Likewise, initiatives like Google Lens show us how machine learning can be used with computer vision in order to help us search for the things we encounter in the real world. Essentially, “AI first” is not a step away from search, but a natural progression of it.
But it goes much further.
Why Google needs hardware for its vision to work
Where does something like the Google Pixel fit into all of this? The answer is simple: in order to make the most of AI – which is ultimately a form of software – Google needs the right hardware to run it on. Google wants to become the go-to solution for AI, just as it is the go-to solution for search. That means it wants to put Google Assistant in your pocket.
Google Assistant faces competition from Apple, Microsoft, Amazon, and even Samsung. Seeing as AI is very likely to dominate the industry in the coming years, Google will have to fight to get ahead of that pack.
When you have Google Assistant in your pocket, why would you want to ask your Echo Dot to set a timer or keep a reminder?
As our own Bogdan Petrovan suggested in his recent article, Google may not actually care how many smartphones it sells. The key is to demonstrate to other OEMs how close integration with its services can help them satisfy customers and pressure companies to place the feature front and center. Because the Pixel and Pixel 2 exist as viable alternatives for consumers, OEMs need to ensure their devices also offer Google Assistant to stay competitive.
Google wants Assistant front and center on every smartphone – even iPhones! That means it needs to have some influence over the direction of both the hardware and software.
This symbiotic relationship works both ways. The hardware supports Google’s vision of conquering AI, but AI also births new hardware opportunities. Google CEO Sundar Pichai said that he doesn’t just want Google’s hardware to use AI going forward; he wants AI to inspire future products that couldn’t have existed otherwise – Google Clips being the perfect example.
Even Google’s self-driving cars are an example of a machine learning application, reliant as they are on computer vision in order to identify hazards and react accordingly.
The role of the cloud
By now, Google’s plan for the future is plain to see, and it revolves around machine learning and AI. The aim is the same as it ever was: “Organize the world’s information and make it universally accessible and useful.”
It has become apparent that AI and machine learning offer the best tools to accomplish that aim. Hardware serves as a conduit between the user and machine learning, and encourages other OEMs to get on board by showing what’s possible.
Just to be clear, AI and machine learning are not one and the same: machine learning is just one aspect of AI that handles pattern recognition. As ever, Gary explains the differences best.
Right now, virtual assistants like Alexa, Siri, and Google Assistant work in the cloud. Your voice commands are saved, processed to some degree, and sent to a server for additional processing so that a response can be generated. This is necessary because most smartphones don’t have the requisite power for the intensive algorithms that machine learning relies on, like the pattern recognition necessary for understanding voice commands or recognizing distinctive patterns in images.
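That round trip can be sketched in a few lines of Python. Everything here is purely illustrative – the function names are hypothetical stand-ins, not any real Assistant API, and the “server” is simulated locally:

```python
# Illustrative sketch of the cloud round trip a voice command takes.
# None of these names correspond to a real Google API.

def capture_audio() -> bytes:
    # Stand-in for the microphone: the phone records audio locally.
    return b"set a timer for ten minutes"

def send_to_cloud(audio: bytes) -> str:
    # Stand-in for the server: the heavy pattern recognition
    # (speech-to-text, intent parsing) happens in the data center,
    # not on the phone, then a response travels back.
    text = audio.decode("utf-8")
    if "timer" in text:
        return "OK, timer set."
    return "Sorry, I didn't catch that."

response = send_to_cloud(capture_audio())
print(response)  # → OK, timer set.
```

The key point the sketch captures is the bottleneck discussed below: every request has to make that network round trip before the user hears anything.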
The hard work is done in the cloud instead. For this, Google relies on TensorFlow – its open-source library of machine learning tools – running on the Cloud Tensor Processing Units (Cloud TPUs) that power its servers. The exciting part is that developers are free to make use of these offerings through Google’s Cloud Platform. Have an idea that requires machine learning to work? Now you can make it a reality. This is another example of Google venturing into hardware in order to lead the way in AI, but it also shows why the cloud is such a necessary part of its vision.
The problem is that AI applications are somewhat limited by being offloaded this way. Not only does it create an obvious speed bottleneck, it also introduces new security issues and requires an always-on internet connection.
Luckily, we’re right on the verge of hardware that can offer on-board AI, thanks to new neural processing units (NPUs). Google’s Pixel 2 includes the Pixel Visual Core – the company’s first custom mobile chip, and one unsurprisingly dedicated to machine learning. The chip is designed to support the HDR+ feature of the Pixel’s camera, itself a machine learning feature. This is the benefit Google gains from taking control of its own hardware, and in the future it could lead to more imaging and machine learning applications. Other companies are likewise coming out with their own NPUs to better handle on-device AI.
Phones don’t strictly need these kinds of specialized chips to handle machine learning. A GPU can do the same thing, just much more slowly, and Android Oreo even includes built-in support for TensorFlow Lite, a lightweight version of the framework for mobile devices. But specialist hardware will make these services significantly faster and more powerful, while introducing entirely new applications and benefits – particularly in areas like security.
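The trade-off here boils down to a simple dispatch decision: run inference locally when dedicated hardware is available, fall back to the cloud when there’s a connection, and fail gracefully otherwise. The sketch below is a hypothetical illustration of that logic – it is not how Android actually routes machine learning work:

```python
# Hypothetical dispatch between on-device and cloud inference.
# Illustrative only; real routing (e.g. via Android's NNAPI) is
# far more involved.

def run_inference(task: str, has_npu: bool, online: bool) -> str:
    if has_npu:
        # Dedicated silicon: fast, private, and works offline.
        return f"{task}: handled on-device"
    if online:
        # Fallback: slower network round trip, needs a connection.
        return f"{task}: offloaded to the cloud"
    return f"{task}: unavailable"

print(run_inference("HDR+ frame merge", has_npu=True, online=False))
# → HDR+ frame merge: handled on-device
```

Note that the on-device branch succeeds even with no connection – which is exactly the speed, security, and always-on advantage NPUs promise.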
Google’s vision for the future
Let’s reassess that opening question: what does the future look like to Google?
We can still only guess, but based on everything we know, we can safely say that Google hopes you’ll be using Google Assistant to handle a whole range of tasks. Whether you want to set a reminder, find out where to buy a product by pointing at it, or hear a joke, you’ll ask your phone. Likewise, if you want to find a recipe, send a text, or check how long it will take to drive to work, you’ll choose Google Assistant. This might soon be handled on-board by your smartphone – whether that’s a Pixel, a Galaxy, or an iPhone. To that end we can speculate that Google will be experimenting with the Pixel Visual Core, bringing new AI functionality to Android and potentially prepping its next wave of hardware with more powerful NPUs.
Your phone will know you intimately, and this will allow it to pre-empt your requests. It will send you reminders, keep your data safe and, of course, provide you with personalized shopping recommendations.
But the same technology will also likely be powering a whole host of other tools and gadgets: from augmented reality offerings like Google Glass, to self-driving cars and smarter cameras. Third-party developers will leverage this technology in ways we can’t yet imagine, some of which could change our lives. Maybe we’ll have fridges that order our food for us because they know what we like to eat, or maybe we’ll be able to dictate articles and have a word processor improve our writing style as we go. But whenever we use an application like this, it will be powered by Google, and Google will be getting a cut.
Everything Google has done since it first started indexing the web for search has been preparing it for this future – even if the company didn’t realize it at the time.
Will Google succeed?
So, will Google succeed in becoming the de facto virtual assistant in a world where AI rules supreme?
Thanks to all the work that Google has done with search, it is in perhaps the strongest position to become the ubiquitous AI of choice. Through search, Google has been leaning on publishers to make their content more ‘AI friendly’. Initiatives like structured data markup help bots pull out the key details from a piece of content, such as the ingredients needed for a recipe or the dates and times of a concert. This allows Google to actually answer questions, rather than just directing users to a web page.
This is extra code added to websites by developers to pinpoint crucial details. Google made this happen by leveraging its position as the number one search provider. Publishers had to play ball if they wanted to keep their sites at the top of the search engine results pages (SERPs). As a result, Google Search and Google Assistant get smarter.
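As a concrete example, structured data markup for a recipe page typically takes the form of a schema.org JSON-LD block. The snippet below generates one; the recipe itself is invented for illustration, though the property names (`recipeIngredient`, `recipeInstructions`) are real schema.org vocabulary:

```python
import json

# Build a minimal schema.org Recipe markup block (JSON-LD).
# The recipe data is made up; the property names come from schema.org.
recipe = {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Simple Pancakes",
    "recipeIngredient": ["flour", "milk", "eggs"],
    "recipeInstructions": "Mix everything and fry until golden.",
}

# A publisher would embed this string in the page inside a
# <script type="application/ld+json"> tag for bots to read.
markup = json.dumps(recipe, indent=2)
print(markup)
```

Because the ingredients are labeled explicitly rather than buried in prose, a bot can answer “what do I need for pancakes?” without parsing the whole article.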
Of course, anyone can choose to use rich snippets in this way, but no other company has the huge index of links needed to make full use of the feature, nor the leverage with publishers to make such fundamental changes to the way information is shared. Thanks to Android, Google has huge sway in the hardware space too.
Google is a formidable player, to say the least, with considerable resources and the focus needed to come out on top. The Google.ai initiative is focused on research as well as the development of tools like TensorFlow, Cloud TPUs, and applied AI. Numerous strategic acquisitions have only served to fortify its position and increase those resources.
But there are pockets of resistance. Shots have been fired, and it seems companies like Apple, Huawei, and Samsung won’t go down without a fight. By creating a dedicated button for Bixby, Samsung is taking a clear stand in a bid to take ownership of its own AI offerings. Likewise, the A11 Bionic chip in the new iPhone and the Kirin 970 in the Mate 10 both include dedicated neural processing hardware designed specifically to handle on-board AI, which could be concerning for Google. Microsoft’s Cortana has the benefit of Bing and tight Windows integration. Amazon might not have the same search power, but it can offer some clever shopping-related features that won’t be possible anywhere else.
In short, we might well see a battle for AI supremacy in the coming years. The smart money is on Google, but who knows what the future holds.
Don’t you love it when the news sounds like the plot of a science fiction film?