Invited - Cross-layer approximations for neuromorphic computing: From devices to circuits and systems

Priyadarshini Panda, Abhronil Sengupta, Syed Shakib Sarwar, Gopalakrishnan Srinivasan, Swagath Venkataramani, Anand Raghunathan, Kaushik Roy

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

14 Citations (Scopus)

Abstract

Neuromorphic algorithms are being increasingly deployed across the entire computing spectrum, from data centers to mobile and wearable devices, to solve problems involving recognition, analytics, search and inference. For example, large-scale artificial neural networks (popularly called deep learning) now represent the state of the art in a wide and ever-increasing range of video/image/audio/text recognition problems. However, the growth in data-set sizes and network complexity has made deep learning one of the most challenging workloads across the computing spectrum. We posit that approximate computing can play a key role in the quest for energy-efficient neuromorphic systems. We show how the principles of approximate computing can be applied to the design of neuromorphic systems at various layers of the computing stack. At the algorithm level, we present techniques to significantly scale down the computational requirements of a neural network with minimal impact on its accuracy. At the circuit level, we show how approximate logic and memory can be used to implement neurons and synapses in an energy-efficient manner, while still meeting accuracy requirements. A fundamental limitation to the efficiency of neuromorphic computing in traditional implementations (software and custom hardware alike) is the mismatch between neuromorphic algorithms and the underlying computing models, such as the von Neumann architecture and Boolean logic. To overcome this limitation, we describe how emerging spintronic devices can offer highly efficient, approximate realizations of the building blocks of neuromorphic computing systems.

Original language: English (US)
Title of host publication: Proceedings of the 53rd Annual Design Automation Conference, DAC 2016
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781450342360
DOI: 10.1145/2897937.2905009
State: Published - Jun 5 2016
Event: 53rd Annual ACM IEEE Design Automation Conference, DAC 2016 - Austin, United States
Duration: Jun 5 2016 - Jun 9 2016

Publication series

Name: Proceedings - Design Automation Conference
Volume: 05-09-June-2016
ISSN (Print): 0738-100X

Other

Other: 53rd Annual ACM IEEE Design Automation Conference, DAC 2016
Country: United States
City: Austin
Period: 6/5/16 - 6/9/16

Fingerprint

Cross-layer
Networks (circuits)
Computing
Approximation
Neural networks
Magnetoelectronics
Neurons
Energy Efficient
Hardware
Data storage equipment
Logic
Spintronics
Alike
Synapse
Data Center
Requirements
Building Blocks
Workload
Artificial Neural Network
Neuron

All Science Journal Classification (ASJC) codes

  • Computer Science Applications
  • Control and Systems Engineering
  • Electrical and Electronic Engineering
  • Modeling and Simulation

Cite this

Panda, P., Sengupta, A., Sarwar, S. S., Srinivasan, G., Venkataramani, S., Raghunathan, A., & Roy, K. (2016). Invited - Cross-layer approximations for neuromorphic computing: From devices to circuits and systems. In Proceedings of the 53rd Annual Design Automation Conference, DAC 2016 [a98] (Proceedings - Design Automation Conference; Vol. 05-09-June-2016). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1145/2897937.2905009