

Improving Language Understanding by Generative Pre-Training (GPT1)

orthanc 2021. 4. 12. 20:09

Natural language understanding spans a wide range of tasks, such as textual entailment, question answering, semantic similarity assessment, and document classification.
While unlabeled text is abundant, labeled data for these specific tasks is scarce.

We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task.
In other words, the authors pre-train a generative language model on a diverse corpus of unlabeled text,
then fine-tune it for each specific task, demonstrating both the feasibility of this approach and the gains it yields.
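To make the two-stage recipe concrete, here is a minimal PyTorch sketch, not the paper's actual code: the `ToyLM` module, vocabulary size, hyperparameters, and the random stand-in tensors are all illustrative assumptions. Stage 1 trains a causally masked Transformer with a next-token language-modeling loss on unlabeled tokens; Stage 2 reuses the same weights and trains a small classification head on labeled examples.

```python
# Minimal sketch of the GPT-1 recipe (illustrative, not the authors' code):
# (1) generative pre-training of a language model on unlabeled text,
# (2) discriminative fine-tuning of the same network on a labeled task.
import torch
import torch.nn as nn

VOCAB, D_MODEL, N_CLASSES = 1000, 64, 2  # assumed toy sizes

class ToyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(D_MODEL, VOCAB)       # used during pre-training
        self.cls_head = nn.Linear(D_MODEL, N_CLASSES)  # used during fine-tuning

    def forward(self, tokens):
        # Causal mask so each position attends only to the past (decoder-style LM).
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        return self.blocks(self.embed(tokens), mask=mask)

model = ToyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

# --- Stage 1: generative pre-training (predict the next token) ---
unlabeled = torch.randint(0, VOCAB, (8, 32))           # stand-in for raw text
hidden = model(unlabeled[:, :-1])
lm_loss = ce(model.lm_head(hidden).reshape(-1, VOCAB),
             unlabeled[:, 1:].reshape(-1))
opt.zero_grad(); lm_loss.backward(); opt.step()

# --- Stage 2: discriminative fine-tuning (labeled task data) ---
labeled_x = torch.randint(0, VOCAB, (8, 32))           # stand-in task inputs
labeled_y = torch.randint(0, N_CLASSES, (8,))          # stand-in task labels
hidden = model(labeled_x)
cls_loss = ce(model.cls_head(hidden[:, -1]), labeled_y)  # last position as summary
opt.zero_grad(); cls_loss.backward(); opt.step()
```

In the paper itself, fine-tuning additionally keeps the language-modeling loss as an auxiliary objective alongside the classification loss; the sketch omits this for brevity.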
