Automatic heterogeneous quantization of deep neural networks for low-latency inference on the edge for particle detectors

dc.citation.issue: 8
dc.citation.rank: M21a
dc.citation.spage: 675
dc.citation.volume: 3
dc.contributor.author: Coelho, Claudionor
dc.contributor.author: Kuusela, Aki
dc.contributor.author: Li, Shan
dc.contributor.author: Zhuang, Hao
dc.contributor.author: Ngadiuba, Jennifer
dc.contributor.author: Aarrestad, Thea Klaeboe
dc.contributor.author: Lončar, Vladimir
dc.contributor.author: Pierini, Maurizio
dc.contributor.author: Pol, Adrian Alan
dc.contributor.author: Summers, Sioni
dc.date.accessioned: 2024-06-14T08:22:36Z
dc.date.available: 2024-06-14T08:22:36Z
dc.date.issued: 2021-06-21
dc.description.abstract: Although the quest for more accurate solutions is pushing deep learning research towards larger and more complex algorithms, edge devices demand efficient inference and therefore reduction in model size, latency and energy consumption. One technique to limit model size is quantization, which implies using fewer bits to represent weights and biases. Such an approach usually results in a decline in performance. Here, we introduce a method for designing optimally heterogeneously quantized versions of deep neural network models for minimum-energy, high-accuracy, nanosecond inference and fully automated deployment on chip. With a per-layer, per-parameter type automatic quantization procedure, sampling from a wide range of quantizers, model energy consumption and size are minimized while high accuracy is maintained. This is crucial for the event selection procedure in proton–proton collisions at the CERN Large Hadron Collider, where resources are strictly limited and a latency of O(1) μs is required. Nanosecond inference and a resource consumption reduced by a factor of 50 when implemented on field-programmable gate array hardware are achieved.
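The per-layer quantization idea the abstract describes can be sketched in a few lines. This is a minimal illustration only, assuming a simple symmetric fixed-point scheme; the layer names and bit-widths below are made up for the example, and the paper's actual method additionally searches over quantizer choices automatically.

```python
# Minimal sketch of heterogeneous (per-layer) quantization.
# Assumption: symmetric fixed-point rounding; layer names and
# bit-widths are illustrative, not taken from the paper.

def quantize(weights, bits):
    """Quantize a list of floats to `bits`-bit symmetric fixed point."""
    levels = 2 ** (bits - 1) - 1              # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / levels or 1.0
    return [round(w / scale) * scale for w in weights]

# A toy "model": each layer gets its own bit-width, chosen
# heterogeneously rather than one global precision.
model = {
    "dense_1": ([0.52, -0.13, 0.88, -0.47], 6),
    "dense_2": ([0.05, -0.91, 0.33, 0.27], 4),
    "output":  ([0.71, -0.62], 8),
}

for name, (w, bits) in model.items():
    q = quantize(w, bits)
    err = max(abs(a - b) for a, b in zip(w, q))
    print(f"{name}: {bits}-bit, max rounding error {err:.4f}")
```

A real search, as the abstract outlines, would treat each layer's bit-width (and quantizer type) as a tunable hyperparameter and pick the combination that minimizes size and energy subject to an accuracy constraint.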
dc.identifier.doi: 10.1038/s42256-021-00356-5
dc.identifier.issn: 2522-5839
dc.identifier.scopus: 2-s2.0-8510841804
dc.identifier.uri: https://pub.ipb.ac.rs/handle/123456789/92
dc.identifier.wos: 000664332400001
dc.language.iso: en
dc.publisher: Nature Portfolio
dc.relation.ispartof: Nature Machine Intelligence
dc.relation.ispartofabbr: Nat. Mach. Intell.
dc.rights: restrictedAccess
dc.title: Automatic heterogeneous quantization of deep neural networks for low-latency inference on the edge for particle detectors
dc.type: Article
dc.type.version: publishedVersion
Files
Name: Loncar2021_nature_machine_intelligence.pdf
Size: 2.08 MB
Format: Adobe Portable Document Format