Transfer learning: a friendly introduction

Asmaul Hosna, Ethel Merry, Jigmey Gyalmo, Zulfikar Alom, Zeyar Aung, Mohammad Abdul Azim

Research output: Contribution to journal › Article › peer-review

53 Scopus citations


Countless real-world applications use Machine Learning (ML) techniques to extract the most value from the data available to users. Transfer learning (TL), one of the categories under ML, has received much attention from the research community in the past few years. Traditional ML algorithms operate under the assumption that a model's training and test samples are drawn from the same, limited data distribution. These conventional methods therefore predict target tasks easily only when applied to that small data distribution. TL can resolve this issue: it is recognized for exploiting the connection between additional training and testing samples, yielding faster output with efficient results. This paper covers the domain and scope of TL, citing situational uses organized by period, along with several of its applications. The paper provides an in-depth focus on the main techniques: inductive TL, transductive TL, and unsupervised TL, including sample selection and domain adaptation, followed by contributions and future directions.
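The core idea the abstract describes, reusing knowledge from a data-rich source task to speed up learning on a related, data-poor target task, can be illustrated with a minimal sketch. The example below is hypothetical and not from the paper: it trains a linear model on a plentiful source task, then uses parameter transfer (warm-starting from the source weights) to fine-tune on a related target task with only a few samples, comparing against training from scratch.

```python
# Hypothetical toy illustration of parameter-transfer TL (not from the paper).
# Source task: y = 2x + 1 with many samples. Target task: y = 2x + 1.5 with
# only three samples. Warm-starting from source weights converges faster.

def train_linear(data, w=0.0, b=0.0, lr=0.01, epochs=200):
    """Fit y ~ w*x + b by stochastic gradient descent, starting from (w, b)."""
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y   # prediction error on one sample
            w -= lr * err * x       # gradient step for the weight
            b -= lr * err           # gradient step for the bias
    return w, b

def mse(data, w, b):
    """Mean squared error of the model (w, b) on a dataset."""
    return sum(((w * x + b) - y) ** 2 for x, y in data) / len(data)

# Source task: plentiful data drawn from y = 2x + 1.
source = [(x / 10, 2 * (x / 10) + 1) for x in range(100)]
w_src, b_src = train_linear(source)

# Target task: only three samples from the related function y = 2x + 1.5.
target = [(0.0, 1.5), (0.5, 2.5), (1.0, 3.5)]

# Transfer: initialize from the source weights, fine-tune briefly.
w_tl, b_tl = train_linear(target, w=w_src, b=b_src, epochs=20)

# Baseline: train from scratch on the same few samples for the same budget.
w_scratch, b_scratch = train_linear(target, epochs=20)

print("transfer MSE:", mse(target, w_tl, b_tl))
print("scratch  MSE:", mse(target, w_scratch, b_scratch))
```

With an identical training budget on the target task, the warm-started model sits much closer to the target function than the model trained from scratch, which is the intuition behind the inductive TL setting the paper surveys.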

Original language: British English
Article number: 102
Journal: Journal of Big Data
Issue number: 1
State: Published - Dec 2022


  • Domain adaptation
  • Image classification
  • Machine learning
  • Multi-task learning
  • Sample selection
  • Sentiment classification
  • Transfer learning
  • Zero shot translation


