Dynamically generated compact neural networks for task progressive learning

Rupesh Raj Karn, Prabhakar Kudva, Ibrahim M. Elfadel

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Task progressive learning is often required when the training data become available in batches over time. Such learning has the characteristic of using an existing model trained over a set of tasks to learn a new task while maintaining the accuracy on older tasks. Artificial Neural Networks (ANNs) have a higher capacity for progressive learning than other traditional machine learning models due to their large number of parameters. A progressive model that uses a fully connected ANN suffers from long training time, overfitting, and excessive resource usage. It is therefore necessary to generate the ANN incrementally as new tasks arrive and new training is needed. In this paper, an incremental algorithm is presented to dynamically generate a compact neural network by pruning and expanding the synaptic weights based on the learning requirements of the new tasks. The algorithm is implemented, analyzed, and validated using the cloud network security datasets, UNSW and AWID, as well as the image dataset, MNIST.
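The core prune-and-expand idea from the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' algorithm: the magnitude-based pruning threshold, the random initialization of new neurons, and the function names (`prune_weights`, `expand_layer`) are all illustrative assumptions, showing only how a layer's weight matrix might shrink (pruning small synapses for compactness) and grow (adding neurons for a new task while leaving old-task weights untouched).

```python
import numpy as np

def prune_weights(W, threshold=0.5):
    """Zero out synaptic weights whose magnitude falls below the threshold.

    Returns the pruned matrix and the boolean mask of surviving weights.
    (Magnitude-based pruning is an assumed criterion, not the paper's.)
    """
    mask = np.abs(W) >= threshold
    return W * mask, mask

def expand_layer(W, n_new, rng=None):
    """Append n_new output neurons (columns) to a layer's weight matrix.

    Existing columns, which encode older tasks, are left unchanged so their
    accuracy is preserved; the new columns are free capacity for the new task.
    """
    rng = np.random.default_rng(rng)
    new_cols = rng.normal(scale=0.01, size=(W.shape[0], n_new))
    return np.concatenate([W, new_cols], axis=1)

# Example: a small layer trained on earlier tasks.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))          # 4 inputs, 3 output neurons
W_pruned, mask = prune_weights(W)    # compact the network
W_grown = expand_layer(W_pruned, 2)  # add 2 neurons for a new task
print(W_grown.shape)                 # (4, 5)
```

In a progressive-learning loop, the pruning step would run after each task converges and the expansion step would run only when the new task's validation accuracy stalls with the existing capacity.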

Original language: British English
Title of host publication: 2020 IEEE International Symposium on Circuits and Systems, ISCAS 2020 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728133201
State: Published - 2020
Event: 52nd IEEE International Symposium on Circuits and Systems, ISCAS 2020 - Virtual, Online
Duration: 10 Oct 2020 → 21 Oct 2020

Publication series

Name: Proceedings - IEEE International Symposium on Circuits and Systems
Volume: 2020-October
ISSN (Print): 0271-4310

Conference

Conference: 52nd IEEE International Symposium on Circuits and Systems, ISCAS 2020
City: Virtual, Online
Period: 10/10/20 → 21/10/20
