It is now easier than ever to build quantum machine-learning applications, thanks to Google. The tech giant this week released free, open-source software that makes quantum machine learning more accessible.
Dubbed TensorFlow Quantum, the tool is an add-on to the TensorFlow toolkit, a machine-learning platform launched by Google in 2015. TensorFlow is designed to simplify building deep neural networks and to provide reusable code, so that developers do not have to write machine-learning code from scratch.
With TensorFlow Quantum, developers can code quantum apps without worrying about the hardware they run on. The toolkit lets you switch between a real quantum computer and a simulation of one on a classical computer. The simulation environment also includes a debugger, making it easier to debug quantum apps before deploying them on real quantum hardware.
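To give a sense of what "simulating a quantum computer on a classical machine" means at the smallest scale, here is a toy single-qubit statevector simulator in pure Python. This is a hypothetical illustration only, not the TensorFlow Quantum API (which builds circuits with Cirq and wraps them in Keras layers); all names below are made up for the sketch.

```python
import math

def apply_gate(gate, state):
    """Multiply a 2x2 gate matrix by a 2-element complex state vector."""
    return [
        gate[0][0] * state[0] + gate[0][1] * state[1],
        gate[1][0] * state[0] + gate[1][1] * state[1],
    ]

# Hadamard gate: puts |0> into an equal superposition of |0> and |1>.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

state = [1 + 0j, 0 + 0j]      # start in the |0> state
state = apply_gate(H, state)  # apply the Hadamard gate

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = [abs(amplitude) ** 2 for amplitude in state]
print(probs)  # both outcomes roughly equally likely
```

A real simulator tracks 2^n amplitudes for n qubits, which is why classical simulation becomes intractable as circuits grow, and why a toolkit that can swap the simulator for real hardware behind the same interface is useful.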
According to Masoud Mohseni, TensorFlow Quantum project lead at Google, the toolkit will allow coders to discover and experiment with new and reusable AI algorithms.
Nathan Killoran, a researcher at quantum computing startup Xanadu, told MIT Technology Review that Google is making the TensorFlow Quantum code public in order to build a community around the project. The idea is to let developers share code and ideas seamlessly, fostering innovation in machine learning.
Quantum computing potential
Researchers have been working on quantum computers for decades because they promise much faster speeds. However, progress has been slow—researchers have struggled to build working devices with enough qubits to make them competitive with conventional computers.
But with the TensorFlow Quantum toolkit, researchers are now able to work with quantum data and build their machine-learning models much faster.
Last October Google claimed its advanced computer Sycamore had achieved “quantum supremacy” for the first time, surpassing the performance of conventional devices. According to Google’s researchers, Sycamore completed a random-sampling task in 3 minutes and 20 seconds—a task that the world’s best supercomputer would take 10,000 years to complete.
However, those figures were disputed by IBM, which is developing quantum computers of its own.
“We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity,” IBM researchers Edwin Pednault, John Gunnels, and Jay Gambetta said in a blog post.
Follow Ledger Africa on Twitter.