Neuromorphic Computing Across the Stack

Devices, Circuits and Architectures

Aayush Ankit, Abhronil Sengupta, Kaushik Roy

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Current machine learning workloads are constrained by their large power and energy requirements. To address this, recent years have witnessed growing interest in exploiting static sparsity (in synaptic memory storage) and dynamic sparsity (in neural activations, using spikes) in neural networks to reduce the required computational resources and enable low-power, event-driven network operation. In parallel, there have been efforts to realize in-memory computing circuit primitives using emerging device technologies to alleviate the memory-bandwidth limitations of CMOS-based neuromorphic computing platforms. In this paper, we discuss these two parallel research thrusts and explore how synergistic hardware-algorithm co-design across the neuromorphic computing stack (from devices and circuits to architectural frameworks) can yield orders-of-magnitude efficiency improvements over state-of-the-art CMOS implementations.
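The abstract contrasts dense network evaluation with event-driven operation that exploits dynamic sparsity. The following minimal sketch (illustrative only, not from the paper; all array names, sizes, and values are hypothetical) shows the core idea: an event-driven update reads only the synaptic rows of neurons that actually spike, rather than performing a dense matrix-vector multiply over all synapses.

```python
# Illustrative sketch of dynamic sparsity via spikes: only neurons that
# fire contribute, so an event-driven evaluation reads only their synaptic
# rows instead of touching every weight. Sizes/values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 8, 4
weights = rng.normal(size=(n_pre, n_post))   # static synaptic storage

spikes = np.zeros(n_pre)
spikes[[1, 5]] = 1.0                         # only 2 of 8 neurons fire

# Dense evaluation touches every synapse (n_pre * n_post weight reads):
dense_out = spikes @ weights

# Event-driven evaluation touches only the rows of spiking neurons:
active = np.flatnonzero(spikes)
event_out = weights[active].sum(axis=0)

assert np.allclose(dense_out, event_out)     # same result, 2/8 of the reads
```

With binary spikes the accumulation reduces to summing the active rows, which is what makes spike-driven hardware attractive: memory traffic scales with activity, not with network size.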

Original language: English (US)
Title of host publication: Proceedings of the IEEE Workshop on Signal Processing Systems, SiPS 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1-6
Number of pages: 6
ISBN (Electronic): 9781538663189
DOI: 10.1109/SiPS.2018.8598419
State: Published - Dec 31 2018
Event: 2018 IEEE Workshop on Signal Processing Systems, SiPS 2018 - Cape Town, South Africa
Duration: Oct 21 2018 - Oct 24 2018

Publication series

Name: IEEE Workshop on Signal Processing Systems, SiPS: Design and Implementation
Volume: 2018-October
ISSN (Print): 1520-6130

Conference

Conference: 2018 IEEE Workshop on Signal Processing Systems, SiPS 2018
Country: South Africa
City: Cape Town
Period: 10/21/18 - 10/24/18

Fingerprint

  • Sparsity
  • Data storage
  • Networks (circuits)
  • Computing
  • Co-design
  • Event-driven
  • Spikes
  • Workloads
  • Learning systems
  • Activation
  • Machine learning
  • Bandwidth
  • Hardware
  • Neural networks
  • Resources
  • Energy

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering
  • Signal Processing
  • Applied Mathematics
  • Hardware and Architecture

Cite this

Ankit, A., Sengupta, A., & Roy, K. (2018). Neuromorphic Computing Across the Stack: Devices, Circuits and Architectures. In Proceedings of the IEEE Workshop on Signal Processing Systems, SiPS 2018 (pp. 1-6). [8598419] (IEEE Workshop on Signal Processing Systems, SiPS: Design and Implementation; Vol. 2018-October). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/SiPS.2018.8598419