Abstract: To achieve satisfactory classification performance, traditional machine learning (TML) usually assumes that abundant labeled data with an i.i.d. distribution are available to train a good model. However, labeled data are often scarce and expensive to annotate, so TML may fail. Fortunately, large amounts of labeled data may exist in related source domains, albeit with different distributions. Along this line, Transfer Learning (TL) has been proposed to adapt knowledge from related source domains to improve performance in target domains. In this talk, I will first give an overview of the concepts of transfer learning and recent studies in the field. Then, I will introduce our efforts in the research on transfer learning algorithms. Finally, some possible future research directions will be pointed out.
Fuzhen Zhuang is an associate professor at the Institute of Computing Technology, Chinese Academy of Sciences. His research interests include transfer learning, multi-task learning, and recommendation systems. He has published more than 70 papers in prestigious refereed journals and conference proceedings, such as IEEE TKDE, IEEE TNNLS, IEEE Trans. on Cybernetics, ACM TIST, Information Sciences, Neural Networks, KDD, IJCAI, AAAI, WWW, ICDE, ACM CIKM, ACM WSDM, SIAM SDM, and IEEE ICDM.