
John Hewitt

johnhew [at] stanford.edu

Hi! I’m a third-year PhD student in computer science, conducting research in natural language processing at Stanford University. I am grateful to be co-advised by Chris Manning and Percy Liang, and to be supported by an NSF Graduate Research Fellowship.

For Fall 2020, I’m a Research Intern at Google AI!

I aim to design systems that robustly and efficiently learn to understand human languages, toward the end of advancing human communication and education, and to teach others to do the same.

Feel free to look me up on Google Scholar or Twitter, or grab my CV.


Current Research Interests

  • Analyzing neural representations of language
  • Theoretical expressiveness of neural networks under memory constraints
  • Memory and symbolic structure in sequence modeling
  • Out-of-domain robustness in NLP
  • Multilinguality and low-resource language processing

News

  • [October 2020] New work with Ben Newman, Percy Liang, and Chris Manning on how EOS tokens affect length extrapolation has won an Outstanding Paper Award at BlackboxNLP! Now on ArXiv!
  • [September 2020] I’ll be presenting a poster on the expressivity of RNNs for generating bounded hierarchical languages at the Conference on the Mathematical Theory of Deep Learning!
  • [September 2020] We’re publishing a theory paper at EMNLP that proves its title: Recurrent neural networks can generate bounded hierarchical languages with optimal memory. We’ll have a preprint soon!
  • [Fall 2020] I’m joining Google AI for an internship this fall, hosted by Vincent Zhao and Kelvin Guu. Looking forward to some great research and learning.
  • [September 2020] I gave a talk at NLP with Friends! Thanks to all for a lively discussion.
  • [June 2020] Summary paper written for a general science audience, describing our recent work in emergent linguistic structure in self-supervised neural networks, out now in PNAS!
  • [May 2020] New work with Ethan Chi and Chris Manning on cross-linguistic syntactic structure in multilingual BERT accepted to ACL and available on ArXiv!
  • [April 2020] I’ve been awarded a National Science Foundation Graduate Research Fellowship!
  • [Winter 2020] I’m TAing Stanford CS 224n: Natural Language Processing with Deep Learning!
  • [November 2019] I’m giving a talk at Berkeley on Probing Neural NLP: Ideas and Problems!
  • [November 2019] Designing and Interpreting Probes with Control Tasks was named runner-up for best paper at EMNLP 2019! Slides from the talk are available here.
  • [August 2019] My work with Percy Liang on designing and understanding neural probes with random control tasks has been accepted to EMNLP! The paper and blog post are both available.
  • [July 2019] I’m giving a talk at Amazon AI on finding and understanding emergent linguistic structure in neural NLP!
  • [June 2019] I gave a talk at NAACL’19 on structural probes! A PDF of the slides is now available.
  • [April 2019] I had a great time being interviewed by Waleed Ammar and Matt Gardner on structural probes! You can find the new NLP Highlights podcast episode on the topic here.
  • [March 2019] I’m presenting a poster on syntax in unsupervised representations of language at the Stanford Human-Centered Artificial Intelligence Institute Symposium!
  • [Feb 2019] My work with Chris Manning on methods for finding syntax trees embedded in contextual representations of language has been accepted to NAACL 2019!
  • [Oct 2018] I’ve started offering office hours for research-interested Stanford undergraduates!

Research Office Hours for Undergraduates (ROHU)

An open time for undergraduates looking for advice and discussion on natural language processing research. Learn more.

Due to COVID-19, we’re not meeting in person. However, I’m happy to set up a meeting with any undergraduate interested in NLP or ML research! No experience required.


Tidbits

This talk by Rajiv Gandhi, to whom I am grateful. It’s for you if you think, like I used to, that research, or any success in STEM, is out of your reach.

Scott Aaronson’s old note on frameworks for reasoning about large numbers, for your enjoyment.

Kevin Knight’s note on Unix commands, to help you with your bash skills.

The Fundamental Whiteboard Difficulty (Scott Aaronson):

    I figured that chalk has its problems—it breaks, the dust gets all over—but I could live with them, much more than I could live with the Fundamental Whiteboard Difficulty, of all the available markers always being dry whenever you want to explain anything.

I highly recommend Arch Linux for its configurability and the educational experience it provides…

Contact

Take my school email johnhew@stanford, and predict the TLD using your internal knowledge base.
