Google is fully invested in advancing artificial intelligence. The company has released numerous tools, documentation, tutorials, and platforms to help developers bring machine learning to their applications. TensorFlow, an open-source platform for building and training machine-learning models, is one of its most important projects in this field. At the third annual TensorFlow Developer Summit, Google announced the first alpha release of TensorFlow 2.0, along with several other announcements, which we'll summarize below.
TensorFlow 2.0 Alpha
The first public alpha of TensorFlow 2.0 is now available. While older versions of the platform catered mostly to developers already experienced with ML, TensorFlow 2.0 takes a big step toward simplifying the workflow for everyone. TensorFlow 2.0 standardizes on tf.keras as its high-level API to make the environment easier to use. The developers behind the platform are also planning to remove a number of redundant APIs: as they explain, several existing tools do almost exactly the same thing, and since the team has settled on tf.keras, many of the duplicates can safely go. TensorFlow 2.0 is also eager-first, running ops as soon as they're called rather than building a graph to execute later. The full release of TensorFlow 2.0 is planned for Q2 of 2019. You can find migration, installation, and reference instructions in the TensorFlow team's blog post.
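To illustrate the two headline changes, here is a minimal sketch assuming TensorFlow 2.x is installed: ops execute eagerly with no session setup, and tf.keras serves as the central high-level API.

```python
import tensorflow as tf  # assumes a TensorFlow 2.x install

# Eager-first: the op runs immediately and the result is a concrete value
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)
print(y.numpy())  # no tf.Session required

# tf.keras as the standard high-level API for building models
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```

The same code pattern covers both prototyping and training; the toy model here is only a placeholder to show the API shape.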
TensorFlow Federated
Big data, for all its benefits, has drawbacks too. Billions of devices generate new data every day, but in most cases that data is funneled through a centralized environment (data centers, clouds, and so on). For the end user the approach changes little functionally, but it can raise the eyebrows of privacy-conscious users. The TensorFlow team has acknowledged this concern and now lets developers take a more decentralized approach. TensorFlow Federated is a new open-source framework that gives developers all the ML-training features of TensorFlow while keeping the training data on the device that produced it. As an example, the TensorFlow blog post describes how Google trains its keyboard's text predictions on-device. You can find all the technical details in the TensorFlow team's blog post.
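The real TensorFlow Federated API is much richer than this, but the core idea it builds on, federated averaging, can be sketched in plain NumPy. All names below (`local_step`, `federated_round`) are made up for the illustration: each client takes a training step on its own private data, and the server only ever sees the averaged model weights, never the raw data.

```python
import numpy as np

def local_step(weights, features, labels, lr=0.1):
    """One gradient step of linear regression on a client's private data."""
    preds = features @ weights
    grad = features.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def federated_round(weights, client_datasets):
    """Average the locally updated weights; raw data never leaves a client."""
    updates = [local_step(weights, X, y) for X, y in client_datasets]
    return np.mean(updates, axis=0)

# Three clients, each holding its own private dataset
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w))

# The server coordinates rounds but only ever touches averaged weights
w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, clients)
```

After enough rounds the averaged model recovers the shared pattern even though no client's data was ever pooled centrally.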
TensorFlow Privacy
ML specialists think not only about data but about privacy, too. I'm sure every now and then you stumble upon an advertisement suspiciously related to an email you recently sent or a message you received. To make sure sensitive data is processed and trained on securely, experts have come up with privacy-friendly ways of handling it, and Google is investing considerable effort in making sure users are comfortable with how their data is processed. The blog post announcing TensorFlow Privacy includes one very important sentence: "Ideally, the parameters of trained machine-learning models should encode general patterns rather than facts about specific training examples." That is the goal: processing data without storing its sensitive parts, leaving users less vulnerable.
TensorFlow Privacy applies the theory of differential privacy during training. In practice, that means the model learns a pattern only if it appears across enough examples to count as general; rare, identifying details are discarded rather than memorized. So if you fed this article to a model trained with TensorFlow Privacy, it would probably learn a thing or two about TensorFlow and my love of privacy while blurring out the poorly used metaphors. This way, you can be confident that a TensorFlow Privacy-powered ML platform doesn't simply take everything you give it and turn it into an ad. Here are more in-depth examples and references.
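The trick TensorFlow Privacy relies on is differentially private training, whose essence can be sketched in a few lines of NumPy (a toy illustration, not the library's actual API; `clip_norm` and `noise_multiplier` are illustrative names): clip each per-example gradient so no single example can dominate, then add calibrated noise before averaging.

```python
import numpy as np

def dp_average_gradients(per_example_grads, clip_norm=1.0,
                         noise_multiplier=1.1, rng=None):
    """Toy differentially-private gradient aggregation."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the clip threshold,
        # bounding the influence of any single training example
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Gaussian noise proportional to the clip norm masks individual examples
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

# One outlier example produces a huge gradient; clipping tames it
grads = [np.array([10.0, 0.0]), np.array([0.1, 0.2]), np.array([0.0, -0.1])]
avg = dp_average_gradients(grads)
```

Because the outlier's gradient is clipped and noise is added, the aggregate reveals the general trend of the batch while hiding what any one example contributed.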
Activation Atlases
Neural networks are currently the most efficient and accurate way of handling graphical content, but like many new technologies, how they actually work is often misunderstood. That's why Google decided to publish "Exploring Neural Networks with Activation Atlases." Activation Atlases visualize and explain how neural networks process images.
As you can see in the video above, the neurons in a network are densely interconnected. The network processes all of its input data to recognize patterns in anything from facial features to calligraphy, and those connections are what let the neurons make accurate predictions collectively. In short, Activation Atlases visualize how the AI arrives at its decisions. You can learn more about it here.
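Conceptually, an activation atlas averages a hidden layer's activations over many inputs, grouped by where those activations fall in a 2D embedding. The sketch below imitates that idea with random data and a random projection standing in for the embedding and feature-visualization machinery used in the real work.

```python
import numpy as np

rng = np.random.default_rng(1)
activations = rng.normal(size=(500, 16))       # 500 inputs, 16-unit layer
# Crude 2D embedding of the activation vectors (stand-in for a learned layout)
projection = activations @ rng.normal(size=(16, 2))

grid = 3
def to_cell(coords):
    """Bucket 1D embedding coordinates into grid-cell indices."""
    scaled = (coords - coords.min()) / np.ptp(coords) * grid
    return np.clip(scaled.astype(int), 0, grid - 1)

cell_x = to_cell(projection[:, 0])
cell_y = to_cell(projection[:, 1])

# Average the activation vectors within each cell; visualizing these
# per-cell averages is what produces the atlas tiles
atlas = np.zeros((grid, grid, 16))
for i in range(grid):
    for j in range(grid):
        mask = (cell_x == i) & (cell_y == j)
        if mask.any():
            atlas[i, j] = activations[mask].mean(axis=0)
```

Each tile of `atlas` summarizes what the layer responds to in that region of the embedding, which is the intuition the published visualizations build on at much larger scale.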
Coral Development Platform
As I've already mentioned, Google is heavily invested in tools and services that make AI development easier. But some people like to go beyond online services, so Google introduced Coral: a development platform for building local, on-device AI hardware. It brings together hardware components, supporting software, and neural-network tooling.
The first Coral device, the Coral Dev Board, comes with an Edge TPU (Tensor Processing Unit) that delivers high-performance on-device ML inference. The board also includes Wi-Fi, Bluetooth, RAM, and eMMC storage, and you can attach additional hardware yourself. You can find more information about Coral and its devices at this link.