The paper proposes a self-supervised method for training language models, called "Self-supervised Learning with Contrastive Predictive Coding", or "CPC" for short. CPC trains the language model to predict the next word in a sequence given the preceding words. It does this with a contrastive loss: the model's prediction is scored for similarity against the actual next word and against sampled negative words, and the model is trained to minimize this loss (a sketch of such a loss follows below). The authors show that CPC achieves state-of-the-art results on a variety of downstream NLP tasks, including natural language inference, question answering, and text summarization. They also show that CPC can be used to pre-train language models, which can then be fine-tuned for these downstream tasks.
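To make the training objective concrete, here is a minimal sketch of an InfoNCE-style contrastive loss of the kind the paragraph describes, not the paper's actual implementation. It assumes the model produces a context vector predicting the next step, an embedding of the true next word, and embeddings of sampled negative words; all names and tensor shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(context: torch.Tensor,    # (batch, dim) model's prediction
                     positive: torch.Tensor,   # (batch, dim) true next-word embedding
                     negatives: torch.Tensor   # (batch, n_neg, dim) negative samples
                     ) -> torch.Tensor:
    # Similarity between the prediction and the true next word: (batch, 1)
    pos_score = torch.sum(context * positive, dim=-1, keepdim=True)
    # Similarity between the prediction and each negative: (batch, n_neg)
    neg_score = torch.einsum("bd,bnd->bn", context, negatives)
    # Stack scores so the true word sits at index 0 of every row.
    logits = torch.cat([pos_score, neg_score], dim=-1)
    # Cross-entropy with the positive as the target class is the InfoNCE loss:
    # minimizing it pushes the prediction toward the true word and away
    # from the negatives.
    targets = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, targets)
```

Minimizing this loss is equivalent to classifying the true next word correctly among the negatives, which is what makes the objective "contrastive" rather than a plain similarity score.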