A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.



Comparing Pre-trained Language Models with Semantic Parsing

21 minute read


In my last post, I showed how adding ELMo features to a seq2seq model improved performance on semantic parsing tasks. Recently, I have been experimenting with adding OpenAI GPT and BERT to the model in order to compare their performance against ELMo’s. All the data, configuration files, and scripts needed to reproduce my experiments have been pushed to the GitHub repository. I’m excited to share my results!

Applying Unsupervised Pretraining to Language Generation: Semantic Parsing + ELMo

8 minute read


For those who haven’t heard yet, NLP’s ImageNet moment has arrived: over the last year, approaches such as ULMFiT, ELMo, OpenAI GPT, and BERT have gained significant traction in the community by using unsupervised pretraining of language models to achieve significant improvements over prior state-of-the-art results on a diverse set of language understanding tasks (including classification, commonsense reasoning, and coreference resolution, among others) and datasets. (For more on unsupervised pretraining and the motivations behind it, read the blog post about NLP’s ImageNet moment linked above.)

Rick and Morty & Metamodernism: Always “both-neither,” Never “either-or”

24 minute read


In examining any piece of science fiction, considering the context of the work, whether historical, cultural, philosophical, etc., is of the utmost importance. “Literature & the Future” is missing a text that accurately reflects the context of today; that is, a text should be included that is representative of the way that our society and culture presently thinks of futurity. The TV show Rick and Morty, specifically the episode “Rixty Minutes,” is the best candidate for a text of this nature. Humanity is now living in “the future” that the thinkers discussed in class speculated about in the past, so it is desirable to consider what the concept of futurity means in an age where humans are simultaneously more connected and isolated than ever before. In essence, “Rixty Minutes” should be included as a “missing text” for the class syllabus because it self-reflexively offers a metamodern, integrative worldview as a solution for the crisis of human existence as it presently exists in the age of technology.

Reproducing SOTA Commonsense Reasoning Results with OpenAI’s Pretrained Transformer Language Model

7 minute read


I wanted to write this blog post to share a bit of interesting code I’ve been working on recently. Earlier this year, OpenAI achieved SOTA results on a diverse set of NLP tasks and datasets utilizing unsupervised pretraining, an approach nearly identical to the one ULMFiT used to achieve SOTA on several text classification datasets. However, OpenAI used the new Transformer architecture instead of the AWD LSTM used by ULMFiT, and trained on a billion-token corpus instead of ULMFiT’s Wikitext-103.

The Need for ML Safety Researchers

4 minute read


I recently came across this article in the New York Times, entitled “Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots.” I found it quite thought-provoking, even though the mainstream media’s accounts of these topics and debates always leave much to be desired (note: if you mention The Terminator, The Matrix, and 2001: A Space Odyssey in a discussion about AGI and superintelligence, you’ve already lost me).


less than 1 minute read


Welcome to my blog! I’ll be writing about my various academic interests here, including machine learning, deep learning, natural language processing, and AI alignment. I hope you enjoy!





Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.