Yiduo Guo (郭一铎)


Ph.D. candidate,
Wangxuan Institute, Peking University
Beijing, China
E-mail: 2001111363@stu.pku.edu.cn

About me

I am a fourth-year Ph.D. candidate at the Wangxuan Institute, Peking University. I am advised by Prof. Dongyan Zhao and work closely with Prof. Bing Liu. I am interested in designing agents that can continually learn new knowledge and in enabling foundation models to evolve over time. I was a research intern at Microsoft Research Asia, under the supervision of Nan Duan and Yaobo Liang.

Research

My research interests include:

  • LLM-based agents that improve themselves with environmental feedback

  • Continual pretraining and finetuning of foundation models

  • Online class-incremental learning

Preprint

  1. Learning to Plan with Natural Language. Large language models can learn and update natural-language task plans based on environmental feedback, and then use the learned plans to solve reasoning tasks.

  2. AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models. A benchmark built from human-level examinations for evaluating artificial general intelligence.

  3. PPTC Benchmark: Evaluating Large Language Models for PowerPoint Task Completion. A multi-turn and multi-modal PPT task completion benchmark.

Recent publications

  1. Analyzing and Reducing the Performance Gap in Cross-Lingual Transfer with Fine-tuning Slow and Fast, ACL 2023
    Yiduo Guo, Yaobo Liang, Bing Liu, Dongyan Zhao, Nan Duan
    We propose a phrase-related and location-related parameter protection mechanism to reduce cross-lingual knowledge forgetting.

  2. Class-Incremental Learning based on Label Generation, ACL 2023
    Yijia Shao, Yiduo Guo, Dongyan Zhao, Bing Liu
    We find that a generation loss is better than a cross-entropy loss for class-incremental learning in pre-trained language models.

  3. Dealing with Cross-Task Class Discrimination in Online Continual Learning, CVPR 2023
    Yiduo Guo, Bing Liu, Dongyan Zhao
    We find that establishing cross-task class boundaries is a vital problem for class-incremental learning.

  4. Online Continual Learning through Mutual Information Maximization, ICML 2022
    Yiduo Guo, Bing Liu, Dongyan Zhao
    We find that learning holistic representations is necessary for class-incremental learning, and we achieve this through mutual information maximization.

  5. Adaptive Orthogonal Projection for Batch and Online Continual Learning, AAAI 2022
    Yiduo Guo, Wenpeng Hu, Dongyan Zhao, Bing Liu
    We propose an improved gradient orthogonal projection algorithm to avoid the forgetting problem.

A full list of publications is available on Semantic Scholar.

Academic service

Reviewer over the past two years for:

  • NeurIPS

  • ICML

  • ICLR

  • ACL

  • CVPR

  • AAAI

More details on Google Scholar.