
Researchers Use Photon-based Units to Enable Complex Machine Learning

Machine learning and artificial intelligence based on neural networks offer a way to replicate aspects of how the brain processes information for numerous applications, according to researchers.

According to a paper recently published in Applied Physics Reviews, researchers have proposed a new approach to performing the computations required by a neural network using light instead of electricity. In this approach, a photonic tensor core performs matrix multiplications in parallel, improving the speed and efficiency of current machine learning algorithms.

In machine learning, neural networks are trained to learn and to perform unsupervised decisions and classifications on unseen data. Once a system is trained, it can perform inference: recognizing and classifying objects and patterns and finding signatures in the data. The photonic tensor processing unit (TPU) stores and processes data side by side. It features an electro-optical interconnect, which allows the optical memory to be read and processed efficiently and lets the photonic TPU interface with other architectures.

Mario Miscuglio, one of the paper's authors, explained that the team found that integrated photonic platforms incorporating efficient optical memory can carry out the same operations as a tensor processing unit while consuming a fraction of the power and delivering higher throughput. When opportunely trained, these platforms can be used to perform inference at the speed of light.

Most neural networks are built from multiple layers of interconnected neurons, aiming to mimic the human brain. An efficient way to represent such a network is as a composite function that multiplies matrices and vectors, as the sketch below illustrates. This representation makes it possible to perform operations in parallel on architectures specialized for vectorized operations such as matrix multiplication.
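
To make this concrete, here is a minimal illustrative sketch (not taken from the paper) showing how a small network reduces to repeated matrix-vector products, the operation that tensor cores, electronic or photonic, are built to accelerate. The layer sizes, weights, and the ReLU nonlinearity are arbitrary choices for illustration.

    import numpy as np

    def dense_layer(W, x, b):
        # One layer: a matrix-vector multiplication plus a bias,
        # followed by a ReLU nonlinearity.
        return np.maximum(W @ x + b, 0.0)

    # A toy two-layer network is just a composition of such products.
    rng = np.random.default_rng(0)
    x  = rng.standard_normal(8)          # input vector
    W1 = rng.standard_normal((16, 8))    # weights, layer 1
    W2 = rng.standard_normal((4, 16))    # weights, layer 2
    h = dense_layer(W1, x, np.zeros(16))
    y = dense_layer(W2, h, np.zeros(4))
    print(y.shape)                       # (4,) -- the network output

Each call to dense_layer is dominated by the matrix-vector product, which is why hardware that parallelizes matrix multiplication speeds up both training and inference.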

However, networks become more complex as the intelligence and accuracy of their predictions increase. Such systems demand larger amounts of data for computation and more power to process that data. Current digital processors suited to deep learning, such as graphics processing units and tensor processing units, are limited in performing more complex operations with greater accuracy by the power required to do so and by the slow transfer of electronic data between the processor and the memory.

The team showed that their photonic TPU could perform 2 to 3 orders of magnitude better than an electrical TPU. Photons may also be an ideal match for node-distributed networks and engines performing intelligent tasks with high throughput at the network edge, such as in 5G systems. Data signals at the network edge may already exist in the form of photons from surveillance cameras, optical sensors, and other sources.

The team explained that specialized photonic processors could save a significant amount of energy, improve response time, and reduce data center traffic. For end users, data is processed much faster, because a large portion of it is pre-processed at the edge, meaning only a fraction of the data needs to be sent to the cloud or data center.
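
As a rough illustration of that idea, the following sketch shows edge pre-processing in the abstract: a local model scores incoming data and only the items it flags are forwarded upstream. The function names, the threshold, and the stand-in "model" are hypothetical placeholders, not part of the published work.

    def process_at_edge(frames, model, threshold=0.9, send_to_cloud=print):
        # Run inference locally and forward only flagged frames,
        # rather than streaming everything to the data center.
        forwarded = 0
        for frame in frames:
            score = model(frame)       # local inference on the edge device
            if score >= threshold:     # only interesting frames leave the edge
                send_to_cloud(frame)
                forwarded += 1
        return forwarded

    # Example with a stand-in "model" that simply returns the frame value.
    frames = [0.2, 0.95, 0.4, 0.99]
    print(process_at_edge(frames, model=lambda f: f))  # forwards 2 of 4 frames

In this toy setup, only half of the data ever leaves the edge device, which is the kind of traffic reduction the researchers describe.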
