Introduction to TensorFlow

TensorFlow is an open-source software library for data-flow programming across a range of tasks. It is a symbolic math library that is also used for machine learning applications such as neural networks. It is used for both research and production at Google, often replacing its closed-source predecessor, DistBelief.

Starting in 2011, Google Brain built DistBelief as a proprietary machine learning system based on deep learning neural networks. Its use grew rapidly across diverse Alphabet companies in both research and commercial applications. Google assigned multiple computer scientists, including Jeff Dean, to simplify and refactor the codebase of DistBelief into a faster, more robust application-grade library, which became TensorFlow. Earlier, in 2009, the team led by Geoffrey Hinton had implemented generalized back-propagation and other improvements that allowed the generation of neural networks with substantially higher accuracy, for instance a 25% reduction in errors in speech recognition.

TensorFlow is Google Brain's second-generation system. Version 1.0.0 was released on February 11, 2017. While the reference implementation runs on single devices, TensorFlow can run on multiple CPUs and GPUs (with optional CUDA and SYCL extensions for general-purpose computing on graphics processing units). TensorFlow is available on 64-bit Linux, macOS, Windows, and mobile computing platforms including Android and iOS.

TensorFlow computations are expressed as stateful data-flow graphs. The name TensorFlow derives from the operations that such neural networks perform on multidimensional data arrays. These arrays are referred to as “tensors”. In June 2016, Dean stated that 1,500 repositories on GitHub mentioned TensorFlow, of which only 5 were from Google.
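To make the graph model concrete, here is a minimal sketch using the TensorFlow 1.x Python API (the version discussed above). The tensor values, shapes, and names are purely illustrative, not taken from any particular application.

```python
# Minimal sketch of a stateful dataflow graph using the TensorFlow 1.x API.
# Operations are first recorded as graph nodes, then executed in a session.
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    # Tensors: multidimensional arrays flowing through the graph.
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name="a")   # 2x2 tensor
    b = tf.constant([[1.0], [1.0]], name="b")              # 2x1 tensor
    product = tf.matmul(a, b, name="product")              # graph node, not yet computed

    # A variable makes the graph stateful: its value persists across runs.
    counter = tf.Variable(0, name="counter")
    increment = tf.assign_add(counter, 1)

# Nothing has executed yet; the graph is only a description of the computation.
with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(product))    # [[3.], [7.]]
    print(sess.run(increment))  # 1
    print(sess.run(increment))  # 2 -- state persists between runs
```

The key point is the separation between building the graph and running it: the same graph can be executed repeatedly, distributed across devices, or serialized for deployment.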

In May 2016, Google announced its Tensor Processing Unit (TPU), an ASIC built specifically for machine learning and tailored for TensorFlow. The TPU is a programmable AI accelerator designed to provide high throughput of low-precision arithmetic (e.g., 8-bit), and oriented toward using or running models rather than training them. Google announced they had been running TPUs inside their data centers for more than a year, and had found them to deliver an order of magnitude better-optimized performance per watt for machine learning.
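To illustrate what "low-precision arithmetic (e.g., 8-bit)" means in practice, here is a small NumPy sketch of mapping float32 values onto 8-bit integers. This is a common textbook linear-quantization scheme for illustration only; it is not a description of the TPU's internals or of TensorFlow's quantization tooling.

```python
# Illustrative only: represent float32 values as int8 using a single linear scale.
import numpy as np

def quantize_to_int8(x):
    """Map a float array onto int8 with a symmetric linear scale (assumed scheme)."""
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the 8-bit representation."""
    return q.astype(np.float32) * scale

weights = np.array([0.02, -1.3, 0.75, 0.0], dtype=np.float32)
q, scale = quantize_to_int8(weights)
print(q)                      # e.g. [   2 -127   73    0]
print(dequantize(q, scale))   # close to the original floats, with small rounding error
```

Working in 8-bit integers trades a little accuracy for much higher throughput and lower energy per operation, which is why it suits inference (running trained models) better than training.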

In May 2017, Google announced the second-generation TPU, as well as the availability of TPUs in Google Compute Engine. The second-generation TPUs deliver up to 180 teraflops of performance each; organized into clusters of 64 TPUs, they provide up to 11.5 petaflops (64 × 180 teraflops ≈ 11,500 teraflops).

In February 2018, Google announced that they were making TPUs available in beta on the Google Cloud Platform.

In May 2017, Google announced a software stack specifically for Android development, TensorFlow Lite, beginning with Android Oreo.

On March 1, 2018, Google released its Machine Learning Crash Course (MLCC). Originally designed to equip Google employees with practical artificial intelligence and machine learning fundamentals, the course was first rolled out as free TensorFlow workshops in several cities around the world before being released to the public.

TensorFlow provides official Python and C APIs, as well as C++, Go, and Java APIs without an API stability guarantee. Third-party packages are available for C#, Haskell, Julia, R, Scala, Rust, and OCaml.

On March 30, 2018, TensorFlow.org released TensorFlow.js, a "WebGL accelerated, browser based JavaScript library for training and deploying ML models". According to the announcement, "for inference, TensorFlow.js with WebGL is 1.5-2x slower than TensorFlow Python with AVX. For training, we have seen small models train faster in the browser and large models train up to 10-15x slower in the browser."

The above is a brief overview of TensorFlow compiled from various sites. Watch this space for more updates on the latest trends in technology.
