Physics-aware analytic-gradient training of photonic neural networks
Main Authors:
Format: Article
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/177940
Institution: Nanyang Technological University
Summary: Photonic neural networks (PNNs) have emerged as promising alternatives to traditional electronic neural networks. However, the training of PNNs, especially the on-chip implementation of analytic gradient descent algorithms that are recognized as highly efficient in traditional practice, remains a major challenge because physical systems are not differentiable. Although training methods such as gradient-free and numerical-gradient methods have been proposed, they suffer from excessive measurements and limited scalability. The state-of-the-art in situ training method is also costly, requiring expensive in-line monitors and frequent optical I/O switching. Here, a physics-aware analytic-gradient training (PAGT) method is proposed that calculates the analytic gradient with a divide-and-conquer strategy, overcoming the difficulty that chip non-differentiability poses for the training of PNNs. Multiple training cases, most notably a generative adversarial network, are implemented on-chip, achieving a significant reduction in time consumption (from 31 h to 62 min) and a fourfold reduction in energy consumption compared to the in situ method. The results provide low-cost, practical, and accelerated solutions for training hybrid photonic-digital electronic neural networks.
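The general idea behind physics-aware training can be illustrated with a toy sketch (this is a hypothetical simplification for intuition, not the paper's PAGT implementation): the forward pass is treated as a measurement on the physical chip, while gradients are computed analytically from a differentiable digital model of the chip's physics, so no differentiation of the hardware itself is required. Here the "chip" is simulated as a linear interference layer followed by intensity detection.

```python
# Conceptual sketch of physics-aware analytic-gradient training.
# Assumptions (illustrative, not from the paper): the chip implements
# y = |W x|^2 (interference + photodetection), and a digital twin with
# the same physics supplies the analytic gradient.
import numpy as np

rng = np.random.default_rng(0)

def chip_forward(W, x):
    """Stand-in for a measurement on the physical photonic chip:
    a linear layer followed by intensity (|.|^2) detection."""
    return np.abs(W @ x) ** 2

def model_gradient(W, x, y_target):
    """Analytic gradient from the digital physics model.
    For real-valued z = W x, y = z^2 (elementwise), and
    L = 0.5 * ||y - y_target||^2, the chain rule gives
    dL/dW_ij = (y_i - t_i) * 2 * z_i * x_j."""
    z = W @ x
    err = z ** 2 - y_target
    return np.outer(2.0 * err * z, x)

# Toy task: tune W so the chip maps x to a target intensity pattern.
W = 0.1 * rng.normal(size=(3, 3))
x = rng.normal(size=3)
y_target = np.array([0.5, 0.1, 0.2])

lr = 0.05
for _ in range(2000):
    _ = chip_forward(W, x)                     # "physical" forward measurement
    W -= lr * model_gradient(W, x, y_target)   # analytic gradient from the model

print(np.round(chip_forward(W, x), 3))
```

In a real hybrid setup the measured chip output would replace the model's forward prediction, so the analytic gradient stays cheap to compute while the loss is evaluated on actual hardware behavior.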