Research
I am interested in natural language processing and artificial intelligence, especially in the following directions:
(1) To make NLP models capable of continually learning multiple tasks and transferring knowledge.
(2) To make NLP models more robust, interpretable and efficient.
(3) To enable NLP models to benefit from, and be beneficial to, other modalities and humans.
|
|
Distilling an End-to-End Voice Assistant from Speech Recognition Data
Will Held,
Ella Li,
Michael Ryan,
Weiyan Shi,
Yanzhe Zhang,
Diyi Yang
Model and Demo Release, 2024
website / training code / eval code / bibtex
An end-to-end voice assistant.
|
|
Design2Code: How Far Are We From Automating Front-End Engineering?
Chenglei Si*,
Yanzhe Zhang*,
Zhengyuan Yang,
Ruibo Liu,
Diyi Yang
Preprint, 2024
website / code / data / bibtex
A benchmark for screenshot-to-HTML/CSS transformation.
|
|
Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with Agent Evaluation
Zijun Liu,
Yanzhe Zhang,
Peng Li,
Yang Liu,
Diyi Yang
COLM, 2024
code / bibtex
A dynamic framework for multi-LLM-agent collaboration with automatic agent evaluation.
|
|
Auditing Gender Presentation Differences in Text-to-Image Models
Yanzhe Zhang,
Lu Jiang,
Greg Turk,
Diyi Yang
EAAMO, 2024
website / code / data / bibtex
A metric to evaluate attribute-wise differences between genders in text-to-image models.
|
|
Enhanced Visual Instruction Tuning for Text-rich Image Understanding
Yanzhe Zhang,
Ruiyi Zhang,
Jiuxiang Gu,
Yufan Zhou,
Nedim Lipka,
Diyi Yang,
Tong Sun
NeurIPS Workshop on Instruction Tuning and Instruction Following, 2023
website / code / data / bibtex / improved version (TRINS, CVPR 2024)
A multimodal (well, vision-language) large language model that can read text.
|
|
Robustness of Demonstration-based Learning Under Limited Data Scenario
Hongxin Zhang,
Yanzhe Zhang,
Ruiyi Zhang,
Diyi Yang
EMNLP, 2022
code / bibtex
Astonishingly, random token strings work well as demonstrations.
|
|
Continual Sequence Generation with Adaptive Compositional Modules
Yanzhe Zhang,
Xuezhi Wang,
Diyi Yang
ACL, 2022
code / bibtex
Add and reuse adapters strategically in continual sequence generation.
|
|
Continual Learning for Text Classification with Information Disentanglement Based Regularization
Yufan Huang*,
Yanzhe Zhang*,
Jiaao Chen,
Xuezhi Wang,
Diyi Yang
NAACL, 2021
code / bibtex
Augment regularization in continual text classification with two simple auxiliary tasks.
|
Service
Volunteer: NAACL 2021.
Reviewer: EMNLP 2022, ICLR 2023, EACL 2023, ACL 2023, EMNLP 2023, CoLLAs 2024, ARR (Oct 2023, Dec 2023, Feb 2024, Apr 2024).