Gradient Estimation for Ultra Low Precision POT and Additive POT Quantization

    Research output: Contribution to journal › Article › peer-review


    Abstract

    Deep learning networks achieve high accuracy on many classification tasks in computer vision and natural language processing. As these models are usually over-parameterized, the computation and memory they require are unsuitable for power-constrained devices. One effective technique to reduce this burden is low-bit quantization. However, the introduced quantization error causes a drop in classification accuracy and requires design rethinking. To benefit from hardware-friendly power-of-two (POT) and additive POT quantization, we explore various gradient estimation methods and propose quantization error-aware gradient estimation that manoeuvres weight updates to stay as close to the projection steps as possible. The clipping or scaling coefficients of the quantization scheme are learned jointly with the model parameters to minimize quantization error. We also apply per-channel quantization to POT and additive POT quantized models to mitigate the accuracy degradation caused by the rigid resolution property of POT quantization. We show that comparable accuracy can be achieved when using the proposed gradient estimation for POT quantization, even at ultra-low bit precision.
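    To make the quantization schemes named in the abstract concrete, the sketch below projects weights onto power-of-two levels, with an additive POT variant built greedily from a residual. The level construction, the `alpha` clipping coefficient (learnable in the paper, fixed here), and the greedy residual step are common formulations and our assumptions, not the paper's exact method.

    ```python
    import numpy as np

    def pot_quantize(w, bits=4, alpha=1.0):
        """Project weights onto POT levels {0, +/- alpha * 2^-i}.

        alpha plays the role of the learnable clipping/scaling
        coefficient; here it is a fixed hyperparameter (assumption).
        """
        n_levels = 2 ** (bits - 1) - 1  # magnitudes per sign, excl. zero
        levels = np.array([0.0] + [2.0 ** (-i) for i in range(n_levels)])
        # Normalize magnitudes into [0, 1], then snap to nearest level.
        x = np.clip(np.abs(w) / alpha, 0.0, 1.0)
        idx = np.argmin(np.abs(x[..., None] - levels), axis=-1)
        return np.sign(w) * alpha * levels[idx]

    def additive_pot_quantize(w, bits=4, alpha=1.0, terms=2):
        """Additive POT: represent each weight as a sum of POT terms.

        Greedy residual fitting is one simple construction; the paper's
        scheme may differ (assumption).
        """
        q = np.zeros_like(w)
        residual = w.astype(float)
        for _ in range(terms):
            step = pot_quantize(residual, bits=bits, alpha=alpha)
            q += step
            residual -= step
        return q
    ```

    For example, with 4 bits and `alpha=1.0`, plain POT snaps 0.3 to 0.25 (the nearest power of two), while the additive variant adds a second POT term for the 0.05 residual, illustrating why additive POT relaxes the rigid resolution of pure POT levels.
    
    
    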

    Original language: British English
    Pages (from-to): 61264-61272
    Number of pages: 9
    Journal: IEEE Access
    Volume: 11
    DOIs
    State: Published - 2023

    Keywords

    • Deep neural network
    • gradient estimation
    • non-uniform quantization

