DOBF: A Deobfuscation Pre-Training Objective for Programming Languages

Baptiste Roziere*1,2, Marie-Anne Lachaux*1, Marc Szafraniec1, Guillaume Lample1

*Equal contribution. 1Facebook AI Research, 2Paris Dauphine University. Correspondence to: Baptiste Roziere <[email protected]>, Marie-Anne Lachaux <[email protected]>.

arXiv:2102.07492v2 [cs.CL] 16 Feb 2021

Abstract

Recent advances in self-supervised learning have dramatically improved the state of the art on a wide variety of tasks. However, research in language model pre-training has mostly focused on natural languages, and it is unclear whether models like BERT and its variants provide the best pre-training when applied to other modalities, such as source code. In this paper, we introduce a new pre-training objective, DOBF, that leverages the structural aspect of programming languages and pre-trains a model to recover the original version of obfuscated source code. We show that models pre-trained with DOBF significantly outperform existing approaches on multiple downstream tasks, providing relative improvements of up to 13% in unsupervised code translation, and 24% in natural language code search. Incidentally, we found that our pre-trained model is able to deobfuscate fully obfuscated source files, and to suggest descriptive variable names.

1. Introduction

Model pre-training with self-supervised methods such as BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019), XLM (Lample & Conneau, 2019) or XLNet (Yang et al., 2019), has become ubiquitous in Natural Language Processing (NLP) and has led to significant improvements on many tasks. These approaches are based on the Masked Language Modeling (MLM) objective, which consists in randomly masking words from an input text and training a model to recover the original input. In the original approach proposed by Devlin et al. (2018), a fraction of the selected words is replaced by mask tokens, another fraction is replaced by random words, and the rest is left unchanged. Since then, a myriad of studies have proposed to modify the MLM objective, for instance by masking contiguous spans of text (Song et al., 2019; Joshi et al., 2020), masking named entities and phrases (Sun et al., 2019), sampling masked words according to their frequencies (Lample & Conneau, 2019), or replacing words with plausible alternatives (Clark et al., 2020). Overall, most of these pre-training objectives boil down to denoising auto-encoding tasks that differ only in the arbitrary noise functions used to corrupt the input. In our case, we are interested in pre-training deep learning models for programming languages. As in natural language, pre-training has been shown to be effective for source code (Feng et al., 2020; Roziere et al., 2020). However, these studies both rely on the original MLM objective proposed by Devlin et al. (2018), which was initially designed for natural languages and does not leverage the particular structure of source code. We argue that this objective is actually suboptimal in the context of programming languages, and propose a new objective based on code obfuscation.

Code obfuscation consists in modifying source code to make it harder for humans to understand, or smaller, while keeping its behaviour unchanged. In some early interpreted languages, name minimization could also reduce the memory usage of the program. Today, obfuscation is used to protect intellectual property by preventing people from understanding and modifying the code, to evade malware detection, and to compress programs (e.g. JavaScript code) to reduce network payload sizes. Moreover, C compilers discard variable names, and current rule-based and neural decompilers generate obfuscated C code with uninformative variable names (Fu et al., 2019). Obfuscators typically apply several transformations to the code. While some operations can be reversed (e.g. dead code injection), the obfuscation of identifier names (renaming every variable, method and class with uninformative names) is irreversible and has a substantial impact on code comprehension (Gellenbeck & Cook, 1991; Takang et al., 1996; Lawrie et al., 2006).
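As a small illustration of this last point (the example is ours, not taken from the paper), the two functions below are behaviourally identical, but after identifier obfuscation the intent is no longer visible in the names:

```python
# Readable version: the identifier names alone convey the intent.
def count_words(text):
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# The same function after identifier obfuscation (the placeholder names are
# illustrative): the behaviour is unchanged, but understanding the code now
# requires reading every line.
def f0(a0):
    d0 = {}
    for t0 in a0.split():
        d0[t0] = d0.get(t0, 0) + 1
    return d0

assert count_words("to be or not to be") == f0("to be or not to be")
```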
By analyzing the overall structure of an obfuscated file, an experienced programmer can always, with time, understand the meaning of the obfuscated code. For instance, in the obfuscated example in Figure 1, one can recognize the function and guess that it implements a breadth-first search algorithm. We also expect neural networks, which excel at pattern recognition, to perform well on this task. We propose to pre-train a model to revert the obfuscation function, by training a sequence-to-sequence (seq2seq) model to convert obfuscated functions, where the names of functions and variables have been replaced by uninformative names, back to their original forms. Suggesting proper variable and function names is a difficult task that requires understanding what the program does. In the context of source code, it is a more sensible, but also a more difficult, task than MLM. Indeed, we observe (cf. Figure 1) that predicting the content of randomly masked tokens is usually quite simple, as it often boils down to making syntax-related predictions (e.g. predicting that what has been masked out is a parenthesis, a semicolon, etc.). These simple predictions provide little training signal to the model. In practice, MLM also masks out variable names, but if a given variable appears multiple times in a function, it is easy for the model to simply copy its name from one of the other occurrences. Our model does not have this issue, as all occurrences of a masked variable are replaced by the same VAR_i special token.
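To make this setup concrete, the sketch below builds the kind of obfuscated input described above: it parses a Python function and renames the function and its variables to uninformative placeholders, recording the mapping a deobfuscation model would have to recover. This is a minimal illustration under our own assumptions, not the paper's actual obfuscation tooling: the FUNC_i naming scheme and the use of Python's ast module (ast.unparse requires Python 3.9+) are ours, while the VAR_i placeholders follow the text above.

```python
import ast

class IdentifierObfuscator(ast.NodeTransformer):
    """Rename a function and its local identifiers to uninformative placeholders."""

    def __init__(self):
        self.mapping = {}                    # original name -> placeholder
        self.counts = {"FUNC": 0, "VAR": 0}  # counters for FUNC_i / VAR_i

    def _placeholder(self, name, kind):
        if name not in self.mapping:
            self.mapping[name] = f"{kind}_{self.counts[kind]}"
            self.counts[kind] += 1
        return self.mapping[name]

    def visit_FunctionDef(self, node):
        node.name = self._placeholder(node.name, "FUNC")
        for arg in node.args.args:
            arg.arg = self._placeholder(arg.arg, "VAR")
        self.generic_visit(node)
        return node

    def visit_Name(self, node):
        # Rename identifiers defined inside the function; leave builtins and
        # externally defined names untouched.
        if isinstance(node.ctx, ast.Store):
            node.id = self._placeholder(node.id, "VAR")
        elif node.id in self.mapping:
            node.id = self.mapping[node.id]
        return node

source = """
def bfs(graph, start):
    visited, queue = {start}, [start]
    while queue:
        node = queue.pop(0)
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return visited
"""

obfuscator = IdentifierObfuscator()
obfuscated = ast.unparse(obfuscator.visit(ast.parse(source)))
print(obfuscated)           # model input: every identifier is an uninformative placeholder
print(obfuscator.mapping)   # ground truth names the seq2seq model is trained to recover
```

A model pre-trained with DOBF receives the obfuscated function and must produce the original identifiers, which it can only do by analysing what the function actually computes.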
In this paper, we make the following contributions:

• We present DOBF, a new pre-training objective based on deobfuscation, and show its effectiveness on multiple programming languages.
• We show that DOBF significantly outperforms MLM (e.g. BERT) on multiple tasks such as code search, code summarization and unsupervised code translation.
• We show that, by design, models pre-trained with DOBF have interesting applications and can be used to understand functions with uninformative identifier names. Moreover, the model is able to successfully deobfuscate fully obfuscated source files.

In the next section, we discuss the related work. Then, we present our objective and the downstream tasks we consider for fine-tuning. Finally, we present our results and the potential applications of our model.

2. Related work

Masked Language Modeling pre-training. Large pre-trained transformers such as BERT (Devlin et al., 2018) or RoBERTa (Liu et al., 2019) led to significant improvements in the majority of natural language processing tasks. The quality of pre-training mainly comes from the MLM objective (i.e. the cloze task), which allows the model to make predictions by leveraging both left and right contexts, unlike causal language modeling (CLM), where the model predictions are conditioned only on previous words. In MLM, the model takes a sentence as input and uniformly selects 15% of its tokens. Of the selected tokens, 80% are replaced by a special symbol [MASK], 10% are left unchanged, and the remaining 10% are replaced by random tokens. The model is then trained to recover the initial sentence given the corrupted one. Lample & Conneau (2019) noticed that the masked words are often easy to predict, and proposed to sample the 15% of masked words according to their frequencies instead of uniformly. This way, rare words are sampled more often, making the pre-training task more difficult for the model, which results in a better learning signal and faster training. Sun et al. (2019) also noticed that recovering the tokens masked by MLM is too simple in some contexts (e.g. predicting the two tokens "Harry Potter" is much harder than predicting only "Harry" if you know the next word is "Potter"). To address this issue, they proposed to mask phrases and named entities instead of individual tokens. Joshi et al. (2020) and Song et al. (2019) made a similar observation and proposed to mask random spans of text. They showed that this simple modification improves performance on many downstream NLP tasks.

Alternative objectives. Other pre-training objectives have been proposed in addition to MLM. For instance, Devlin et al. (2018) also use the next sentence prediction (NSP) objective, a binary classification task that consists in predicting whether two input sentences follow each other in the original corpus. The NSP objective was originally designed to improve performance on downstream NLP tasks, but recent studies (Lample & Conneau, 2019; Liu et al., 2019) showed that training MLM on streams of sentences to leverage longer context, and removing the NSP objective, improves the quality of pre-training. To improve the sample efficiency of MLM (where only 15% of tokens are predicted), Electra (Clark et al., 2020) proposed to replace (rather than mask) some tokens with plausible alternatives, and to train a network to detect the tokens that have been replaced. They showed that this new Replaced Token Detection (RTD) objective matches the performance of RoBERTa while using four times less computational resources. Dong et al. (2019) proposed a model that combines multiple pre-training tasks, including bidirectional, but also left-to-right and right-to-left, language modeling objectives. Lewis et al. (2019) also proposed several pre-training objectives, e.g. detecting whether input sentences have been permuted and whether tokens have been deleted or inserted.

Code Generation Pre-training. Recent studies showed that pre-training methods developed for natural language processing are also effective for programming languages. For instance, Feng et al. (2020) proposed CodeBERT, a RoBERTa-based model trained on source code using the MLM and RTD objectives. They showed that their model performs well on downstream code generation tasks and outperforms previous pre-training approaches. Kanade et al. (2020) applied MLM and the next sentence prediction objectives to pre-train models on Python code.
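For reference, the sketch below implements the MLM corruption scheme described in the Masked Language Modeling paragraph above: 15% of the tokens are selected uniformly, and of those, 80% are replaced by [MASK], 10% by a random token, and 10% are left unchanged. The whitespace tokenization and toy vocabulary are placeholders of ours, not the setup used in the cited papers.

```python
import random

MASK = "[MASK]"
VOCAB = ["def", "return", "(", ")", ":", "+", "x", "y", "foo"]  # toy vocabulary

def mlm_corrupt(tokens, mask_ratio=0.15, seed=0):
    """Return a corrupted copy of `tokens` and the positions the model must predict."""
    rng = random.Random(seed)
    corrupted = list(tokens)
    n_selected = max(1, round(mask_ratio * len(tokens)))
    positions = rng.sample(range(len(tokens)), n_selected)
    for i in positions:
        p = rng.random()
        if p < 0.8:
            corrupted[i] = MASK               # 80%: mask the token
        elif p < 0.9:
            corrupted[i] = rng.choice(VOCAB)  # 10%: replace with a random token
        # remaining 10%: keep the original token unchanged
    return corrupted, positions

tokens = "def foo ( x ) : return x + y".split()
corrupted, positions = mlm_corrupt(tokens)
print(corrupted, positions)
```

Applied to source code, a selected occurrence of a variable name can often be recovered simply by copying another occurrence in the same function, which is precisely the weakness that DOBF addresses.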
