Publications of Torsten Hoefler
Copyright Notice:

The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Tal Ben-Nun, Alice Shoshana Jakobovits, Torsten Hoefler:

Neural Code Comprehension: A Learnable Representation of Code Semantics

(In Advances in Neural Information Processing Systems 31, presented in Montreal, Canada, Curran Associates, Inc., Dec. 2018)

Abstract

With the recent success of embeddings in natural language processing, research has been conducted into applying similar methods to code analysis. Most works attempt to process the code directly or use a syntactic tree representation, treating it like sentences written in a natural language. However, none of the existing methods are sufficient to comprehend program semantics robustly, due to structural features such as function calls, branching, and interchangeable order of statements. In this paper, we propose a novel processing technique to learn code semantics, and apply it to a variety of program analysis tasks. In particular, we stipulate that a robust distributional hypothesis of code applies to both human- and machine-generated programs. Following this hypothesis, we define an embedding space, inst2vec, based on an Intermediate Representation (IR) of the code that is independent of the source programming language. We provide a novel definition of contextual flow for this IR, leveraging both the underlying data- and control-flow of the program. We then analyze the embeddings qualitatively using analogies and clustering, and evaluate the learned representation on three different high-level tasks. We show that with a single RNN architecture and pre-trained fixed embeddings, inst2vec outperforms specialized approaches for performance prediction (compute device mapping, optimal thread coarsening); and algorithm classification from raw code (104 classes), where we set a new state-of-the-art.
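The abstract's key idea is a "contextual flow" over the IR: two statements are neighbors (and thus training contexts for the embedding, as in word2vec-style methods) when they are linked by data or control flow rather than by textual adjacency. A minimal sketch of the data-flow half of that idea, using a toy LLVM-IR-like listing — the function name, tuple format, and example IR are illustrative assumptions, not the paper's actual pipeline:

```python
# Hedged sketch: extracting data-flow context pairs for inst2vec-style
# embedding training. Each statement is an assumed simplification of an
# LLVM IR instruction: (defined_var, [used_vars], text).

def contextual_pairs(statements):
    """Return index pairs (i, j) where statement j uses a value defined by i.

    These definition-to-use edges approximate one component of the paper's
    contextual flow; the real method also incorporates control flow.
    """
    def_site = {}   # variable name -> index of the statement defining it
    pairs = []
    for j, (defined, uses, _text) in enumerate(statements):
        for var in uses:
            if var in def_site:              # data-flow edge: def -> use
                pairs.append((def_site[var], j))
        if defined is not None:
            def_site[defined] = j
    return pairs

toy_ir = [
    ("%1", [],           "%1 = alloca i32"),
    ("%2", ["%1"],       "%2 = load i32, i32* %1"),
    ("%3", ["%2"],       "%3 = add i32 %2, 1"),
    (None, ["%3", "%1"], "store i32 %3, i32* %1"),
]

print(contextual_pairs(toy_ir))  # → [(0, 1), (1, 2), (2, 3), (0, 3)]
```

The resulting pairs would then feed a skip-gram-style objective, so that statements appearing in similar data- and control-flow contexts receive nearby embeddings — independent of the order in which they happen to appear in the source text.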

BibTeX

@incollection{ncc,
  author={Tal Ben-Nun and Alice Shoshana Jakobovits and Torsten Hoefler},
  title={{Neural Code Comprehension: A Learnable Representation of Code Semantics}},
  year={2018},
  month={Dec.},
  booktitle={Advances in Neural Information Processing Systems 31},
  location={Montreal, Canada},
  publisher={Curran Associates, Inc.},
  source={http://www.unixer.de/~htor/publications/},
}
