M6: A Chinese Multimodal Pretrainer

Junyang Lin1∗, Rui Men1∗, An Yang1∗, Chang Zhou1, Ming Ding2, Yichang Zhang1, Peng Wang1, Ang Wang1, Le Jiang1, Xianyan Jia1, Jie Zhang1, Jianwei Zhang1, Xu Zou2, Zhikang Li1, Xiaodong Deng1, Jie Liu1, Jinbao Xue1, Huiling Zhou1, Jianxin Ma1, Jin Yu1, Yong Li1, Wei Lin1, Jingren Zhou1, Jie Tang2†, Hongxia Yang1†
1Alibaba Group, China  2Tsinghua University, China
∗Equal contribution. †Corresponding author.
{junyang.ljy,menrui.mr,ya235025,ericzhou.zc,yichang.zyc,zheluo.wp}@alibaba-inc.com
{wangang.wa,jiangle.jl,xianyan.xianyanjia,wanglin.zj,zhangjianwei.zjw}@alibaba-inc.com
{zhikang.lzk,xiaodongdeng.dxd,sanshuai.lj,zhiji.xjb,zhule.zhl,jason.mjx,kola.yu}@alibaba-inc.com
{jiufeng.ly,weilin.lw,jingren.zhou,yang.yhx}@alibaba-inc.com
{dm18,zoux18}@mails.tsinghua.edu.cn, [email protected]

ABSTRACT

In this work, we construct the largest dataset for multimodal pretraining in Chinese, which consists of over 1.9TB of images and 292GB of texts that cover a wide range of domains. We propose a cross-modal pretraining method called M6, referring to Multi-Modality to Multi-Modality Multitask Mega-transformer, for unified pretraining on data of a single modality and of multiple modalities. We scale the model size up to 10 billion and 100 billion parameters and build the largest pretrained model in Chinese. We apply the model to a series of downstream applications and demonstrate its outstanding performance in comparison with strong baselines. Furthermore, we specifically design a downstream task of text-guided image generation and show that the finetuned M6 can create high-quality images with high resolution and abundant details.

KEYWORDS

Multimodal Pretraining; Multitask; Text-to-Image Generation

1 INTRODUCTION

Pretraining has become a focus of research in natural language processing (NLP) [1, 2, 7, 16, 18, 19, 27, 31, 37, 44, 49]. The recent GPT-3, with over 175 billion parameters, demonstrates that large models trained on big data have extremely large capacity and can outperform the state of the art on downstream tasks, especially in the zero-shot setting. The rapid development of pretraining in NLP has also sparked cross-modal pretraining. A number of studies [4, 11, 17, 22, 24, 25, 28, 29, 38, 51] have created new state-of-the-art performances for various cross-modal downstream tasks.

It is a pity that most recent studies focus on pretraining on English data. There is a lack of both large-scale datasets in Chinese and large-scale models pretrained on Chinese data. Therefore, in this work, we develop a large-scale dataset, M6-Corpus, which consists of over 1.9TB of images and 292GB of texts. To the best of our knowledge, this is the largest dataset in Chinese for pretraining in both multimodality and natural language. The dataset, collected from webpages, consists of different types of data and covers a wide range of domains, including encyclopedia, question answering, forum discussion, product description, etc. We also design sophisticated cleaning procedures to ensure that the data are of high quality.

Furthermore, in order to sufficiently leverage such a large amount of high-quality data, we propose to build an extremely large model that can process data of multiple modalities and adapt to different types of downstream tasks. Thus we propose a novel model called M6, referring to MultiModality-to-MultiModality Multitask Mega-transformer. The model is based on the transformer and is pretrained with multiple tasks. Pretraining endows the model with the capability of single-modality and multimodality understanding and generation. Based on the architecture of M6, we build M6-10B and M6-100B, which are scaled up to 10 billion and 100 billion parameters respectively. In particular, M6-100B is the recent largest model pretrained on Chinese data. We apply the model to a series of downstream applications, including product description generation, visual question answering, community question answering, Chinese poem generation, etc., and our experimental results show that M6 outperforms a series of strong baselines.

Another contribution of this work is that we are the first to incorporate pretraining with text-to-image generation. Following Ramesh et al. [32], we leverage a two-stage framework for image generation. To be more specific, we apply a trained vector-quantized generative adversarial network to represent images with discrete image codes, and we then use the pretrained M6 to learn the relations between texts and codes. Such learning bridges the two modalities and enables controllable text-to-image generation.
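As a concrete illustration of this two-stage recipe, the sketch below is a minimal, self-contained PyTorch toy rather than M6's actual implementation: a frozen nearest-neighbour codebook stands in for the trained vector-quantized GAN and maps image patch features to discrete codes, and a small decoder-only transformer is trained with next-token cross-entropy over the concatenation of text tokens and image codes. All module names, vocabulary sizes, and dimensions are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Stage 1 stand-in: a frozen codebook that maps patch features to discrete
# code indices by nearest-neighbour lookup. The paper uses a trained
# vector-quantized GAN for this step; the random codebook is only illustrative.
class ToyImageQuantizer(nn.Module):
    def __init__(self, num_codes=1024, dim=64):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(num_codes, dim), requires_grad=False)

    def encode(self, patch_feats):  # (B, N, dim) -> (B, N) integer code ids
        dist = ((patch_feats.unsqueeze(2) - self.codebook) ** 2).sum(-1)
        return dist.argmin(dim=-1)

# Stage 2: a decoder-only transformer over the sequence [text tokens ; image codes],
# trained with ordinary next-token prediction so that image codes are generated
# conditioned on the text prefix.
class TextToCodeLM(nn.Module):
    def __init__(self, text_vocab=21128, num_codes=1024, dim=256, layers=4, heads=8, max_len=512):
        super().__init__()
        self.text_vocab = text_vocab
        self.vocab = text_vocab + num_codes  # shared vocabulary of words and image codes
        self.embed = nn.Embedding(self.vocab, dim)
        self.pos = nn.Embedding(max_len, dim)
        block = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, layers)
        self.head = nn.Linear(dim, self.vocab)

    def forward(self, seq):  # (B, L) ids drawn from the shared vocabulary
        L = seq.size(1)
        causal = torch.triu(torch.full((L, L), float("-inf"), device=seq.device), diagonal=1)
        h = self.embed(seq) + self.pos(torch.arange(L, device=seq.device))
        return self.head(self.blocks(h, mask=causal))

def train_step(lm, text_ids, image_codes):
    # Shift image code ids into the shared vocabulary, append them to the text,
    # and apply next-token cross-entropy over the whole sequence.
    seq = torch.cat([text_ids, image_codes + lm.text_vocab], dim=1)
    logits = lm(seq[:, :-1])
    return F.cross_entropy(logits.reshape(-1, lm.vocab), seq[:, 1:].reshape(-1))

if __name__ == "__main__":
    quantizer, lm = ToyImageQuantizer(), TextToCodeLM()
    patch_feats = torch.randn(2, 16, 64)        # fake 4x4 grids of patch features
    codes = quantizer.encode(patch_feats)       # (2, 16) discrete image codes
    text_ids = torch.randint(0, 21128, (2, 8))  # fake Chinese text token ids
    print(train_step(lm, text_ids, codes))      # scalar training loss

At inference time, the model would be given only the text prefix and asked to sample image codes autoregressively, which the decoder of the image quantizer would then map back to pixels; that decoding step is omitted from the sketch.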
To summarize, the contributions of M6 are as follows:

• We collect and build the largest Chinese multi-modal pretraining dataset in industry, which includes 300GB of texts and 2TB of images.
• We propose M6 for multimodal pretraining in Chinese, and we scale the model size up to 10 billion and 100 billion parameters. Both M6-10B and M6-100B are currently the largest multimodal pretrained models.
• M6 is versatile and exceeds strong baselines by 11.8% in VQA, 18.4 in image captioning, and 10.3% in image-text matching. Furthermore, M6 is able to generate high-quality images.
• With carefully designed large-scale distributed training optimizations, M6 has obvious advantages in training speed and greatly reduces training costs, creating the possibility of more widespread use of multi-modal pretraining.

2 DATASET

We collect and develop the largest multi-modality and text dataset in Chinese to date, which is one of the key contributions of this paper. In this section, we first identify the limitations of existing datasets and then describe the construction and preprocessing procedure of our proposed dataset.

2.1 Existing Datasets

The construction of a large-scale corpus with high quality and broad domain coverage is crucial to Chinese pretraining. In early works, the Chinese Wikipedia is one of the most frequently used datasets to train Chinese language models. It contains 1.6GB of texts (around 0.4B tokens) covering around 1M encyclopedia entries. Another corpus with a comparable size is the THUCTC [39] dataset, which includes 740K news articles. However, with the rapidly increasing capacity of recent language models, the scale of these existing datasets is clearly insufficient. Recently, Cui et al. [5] employed unreleased extended data that are 10 times larger than CN-Wikipedia to pretrain their Chinese language model. Xu et al. [47] released a 100GB corpus named CLUECorpus2020, which is retrieved from the multilingual Common Crawl dataset. However, the scale of these datasets is still insufficient to facilitate super large-scale pretraining compared with existing English pretrained models. For example, GPT-3 contains 175B parameters and is trained on 570GB of texts. Meanwhile, for multi-modal pretraining the dataset should contain image-text pairs rather than plain texts.

2.2 Standards for a High-quality Dataset

To perform large-scale multi-modal pretraining and learn complex world knowledge in Chinese, the dataset must meet demanding requirements. Though there is a tremendous amount of text resources and images on the World Wide Web, a corpus for multimodal pretraining is assumed to be better when it satisfies the following properties: (1) the sentences should be fluent natural language of normal length and should not contain meaningless tokens, such as markups, duplicate punctuation marks, random combinations of characters, etc.; (2) the images should be natural and realistic, and the resolutions of the images need to be identifiable by humans; (3) both the texts and images should not contain illegal content, such as pornography, violence, etc.; (4) the images and texts should be semantically relevant; (5) the dataset should cover a wide range of fields, say sports, politics, science, etc., and can therefore endow the model with sufficient world knowledge.
2.3 Dataset Construction

Based on the requirements above, we collect data consisting of both plain texts and image-text pairs. There are different types of data, including encyclopedia, crawled webpages, community question answering, forums, product descriptions, etc. We present the details in Table 3. The collected corpus consists of both plain texts and image-text pairs, which is compatible with the designed text-only and multimodal pretraining tasks. The data also has a large coverage over domains, such as science, entertainment, sports, politics, commonsense of daily life, etc. We have also compared some characteristics of our corpus with existing datasets used for Chinese pretraining in Table 2. The size of our dataset is much larger than that of the previous ones. To our knowledge, this is the first large-scale, multimodal and multidomain corpus for Chinese pretraining.

We implement sophisticated preprocessing to obtain clean data. For text data, we first remove HTML markups and duplicate punctuation marks, and we only retain characters and punctuation marks that are in Chinese and English. We remove topics that are shorter than 5 characters and contents that are shorter than 15 characters. We further apply in-house spam detection to remove sentences that contain words related to certain political issues, pornography, or words in the list of dirty, naughty, and other bad words. In order to preserve the linguistic acceptability of the texts, we use a language model to evaluate their perplexities, and sentences with high perplexities are discarded.
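As a rough sketch of this cleaning pipeline, the following Python snippet strings together the steps described above: HTML-markup removal, duplicate-punctuation collapsing, a Chinese/English character whitelist, the 5-character topic and 15-character content thresholds, a bad-word filter, and a perplexity gate. The bad-word list and the perplexity scorer are placeholders, since the in-house spam detector and the scoring language model are not publicly described.

import re

# Thresholds follow the paper; the bad-word list and the perplexity scorer are
# placeholders for the in-house components.
MIN_TOPIC_CHARS, MIN_CONTENT_CHARS, MAX_PERPLEXITY = 5, 15, 1000.0
BAD_WORDS = {"placeholder-bad-word-1", "placeholder-bad-word-2"}

HTML_TAG = re.compile(r"<[^>]+>")
DUP_PUNCT = re.compile(r"([，。！？；：、,.!?;:])\1+")
# Keep CJK characters, ASCII letters/digits, whitespace, and common punctuation.
NOT_ALLOWED = re.compile(r"[^\u4e00-\u9fffA-Za-z0-9\s，。！？；：、（）,.!?;:()]")

def clean_text(text: str) -> str:
    text = HTML_TAG.sub("", text)             # strip HTML markups
    text = DUP_PUNCT.sub(r"\1", text)         # collapse duplicate punctuation marks
    return NOT_ALLOWED.sub("", text).strip()  # drop characters outside Chinese/English

def toy_perplexity(text: str) -> float:
    # Placeholder: the paper scores sentences with a trained language model.
    return 50.0 if len(set(text)) > 3 else 2000.0

def keep_example(topic: str, content: str) -> bool:
    topic, content = clean_text(topic), clean_text(content)
    if len(topic) < MIN_TOPIC_CHARS or len(content) < MIN_CONTENT_CHARS:
        return False                                   # too short to be useful
    if any(word in content for word in BAD_WORDS):
        return False                                   # spam / unsafe content
    return toy_perplexity(content) <= MAX_PERPLEXITY   # keep only fluent text

if __name__ == "__main__":
    topic = "双十一促销"
    content = "<p>这款连衣裙采用纯棉面料，，，舒适透气！！</p>"
    print(keep_example(topic, content))  # True after cleaning and filtering

A real pipeline would replace toy_perplexity with scores from a trained language model and BAD_WORDS with the full blocklist the paper refers to.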
