Finetuned Language Models Are Zero-Shot Learners

"This paper explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning -- finetuning language models on a collection of tasks described via instructions -- substantially improves zero-shot performance..." (arxiv.org)

Proposed model: FLAN

Background

Problems with prior models

Proposed model: prior problems..
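To make "tasks described via instructions" concrete, below is a minimal sketch of how a single labeled example might be rephrased into an instruction-style training pair before finetuning. The template wording and the function name `to_instruction_example` are illustrative assumptions, not FLAN's actual templates.

```python
# Minimal sketch: rephrasing a labeled NLI example as a natural-language
# instruction, the core idea behind instruction tuning. The template text
# here is a hypothetical example, not one of FLAN's published templates.

def to_instruction_example(premise: str, hypothesis: str, label: str) -> dict:
    """Convert an NLI example into an instruction-style (input, target) pair."""
    prompt = (
        f"Premise: {premise}\n"
        f"Hypothesis: {hypothesis}\n"
        "Does the premise entail the hypothesis? "
        "Answer yes, no, or maybe."
    )
    return {"input": prompt, "target": label}

example = to_instruction_example(
    premise="A dog is running in the park.",
    hypothesis="An animal is outdoors.",
    label="yes",
)
print(example["input"])
print("->", example["target"])
```

In instruction tuning, many supervised datasets are converted this way (each with several phrasings of the instruction) and the model is finetuned on the mixture, so that at inference time it can follow instructions for tasks it was never trained on.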