Michael Chang

I am an MIT student studying computer science, currently doing research at CSAIL and BCS with Professors Josh Tenenbaum and Antonio Torralba. Previously I worked at Google, at the University of Michigan with Professor Honglak Lee, at the MIT Media Lab with Professor Pattie Maes, and as Strategy Lead of the MIT Solar Electric Vehicle Team. Beginning fall 2017, I will be pursuing a Ph.D. at Berkeley AI Research at U.C. Berkeley.

Google Scholar  /  LinkedIn  /  Github  /  Twitter  /  Swimming

[ News | Talks | Research | Readings ]

News
  • February 2017: "A Compositional Object-Based Approach to Learning Physical Dynamics" has been accepted to ICLR 2017.
  • March 2016: "Understanding Visual Concepts with Continuation Learning" has been accepted to ICLR 2016 Workshop.
  • March 2015: Press Article - Finger-Mounted Reading Device for the Blind, with Roy Shilkrot and Marcelo Polanco.
Talks
  • March 2017: OpenAI, San Francisco. "A Compositional Object-Based Approach to Learning Physical Dynamics"
  • February 2017: Harvard NLP, Cambridge. "Learning Visual and Physical Models of the Environment"
  • January 2017: Google, Cambridge. "Learning Visual and Physical Models of the Environment"
  • April 2016: MIT EECScon, Cambridge. "Understanding Visual Concepts with Continuation Learning"
Research

I am interested in recursive self-improvement, learning compositional programs, curiosity, and theory of mind.

A Compositional Object-Based Approach to Learning Physical Dynamics
Michael B. Chang, Tomer D. Ullman, Antonio Torralba, Joshua B. Tenenbaum
Proceedings of the International Conference on Learning Representations (ICLR), 2017
project webpage / code / poster / spotlight talk (NIPS Intuitive Physics Workshop)

The Neural Physics Engine (NPE) frames learning a simulator of intuitive physics as learning a compositional program over objects and interactions. This allows the NPE to naturally generalize across variable object count and different scene configurations.
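The compositional idea can be sketched in a few lines: the effect of a scene on a focus object is a sum of independent pairwise terms, so the same model applies regardless of object count. This is a minimal toy illustration, assuming simple 2-D state vectors and a hypothetical hand-written `pairwise_effect` in place of the NPE's learned neural networks.

```python
# Toy sketch of pairwise factorization: the scene's effect on a focus object
# is the sum of independent pairwise terms, so the same functions handle any
# number of objects. (Illustrative only; the NPE learns these as networks.)

def pairwise_effect(focus, other):
    # Hypothetical stand-in for a learned pairwise encoder: a spring-like
    # pull of one context object on the focus object.
    return [0.1 * (o - f) for f, o in zip(focus, other)]

def predict_next_state(focus, context):
    # Sum pairwise effects over all context objects (the compositional step),
    # then apply the aggregate effect to the focus object's state.
    total = [0.0] * len(focus)
    for other in context:
        total = [t + e for t, e in zip(total, pairwise_effect(focus, other))]
    return [f + t for f, t in zip(focus, total)]

objects = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]]
print(predict_next_state(objects[0], objects[1:]))  # aggregate pull toward both context objects
```

Because `predict_next_state` only ever sees one focus object and a bag of context objects, adding or removing objects changes nothing about the model itself, which is the source of the generalization noted above.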

Understanding Visual Concepts with Continuation Learning
William F. Whitney, Michael B. Chang, Tejas D. Kulkarni, Joshua B. Tenenbaum
International Conference on Learning Representations (ICLR) Workshop, 2016
project webpage / code

This paper presents an unsupervised approach to learning factorized symbolic representations of high-level visual concepts by exploiting temporal continuity in the scene.
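The temporal-continuity idea can be illustrated with a toy gating step: between consecutive frames, only a small part of the latent code is allowed to change, which encourages each latent slot to capture an independent factor of the scene. This is a hedged sketch with a hypothetical `which_changed` selector standing in for the paper's learned gating.

```python
# Toy sketch of exploiting temporal continuity: between consecutive frames,
# only one latent slot is allowed to change, so each slot is pushed to
# represent an independent visual factor. (Illustrative only; the real model
# learns both the encoder and the gating.)

def which_changed(prev_code, new_code):
    # Hypothetical selector: index of the latent slot that changed the most.
    deltas = [abs(n - p) for p, n in zip(prev_code, new_code)]
    return max(range(len(deltas)), key=deltas.__getitem__)

def continuation_update(prev_code, new_code):
    # Carry over the previous latent code, updating only the single slot
    # that changed most; all other factors persist unchanged.
    i = which_changed(prev_code, new_code)
    out = list(prev_code)
    out[i] = new_code[i]
    return out

prev = [0.5, 1.0, -0.2]   # latent code for frame t
new  = [0.5, 1.7, -0.1]   # raw encoding of frame t+1
print(continuation_update(prev, new))  # only the second slot updates
```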

Readings

Here are some of my past and current readings that have changed the way I think. Also take a look at Lucas Morales' reading list, the MIT Probabilistic Computing Project's reading list, and Jürgen Schmidhuber's recommended readings.

Longer Works

The Society of Mind - Marvin Minsky

Three Kingdoms - Luo Guanzhong

The Beginning of Infinity - David Deutsch

The Little Prince - Antoine de Saint-Exupéry

Zhuangzi - Zhuangzi

The Structure of Scientific Revolutions - Thomas Kuhn

Hegemony or Survival - Noam Chomsky

Reinforcement Learning: An Introduction - Richard S. Sutton and Andrew G. Barto

Gödel, Escher, Bach: An Eternal Golden Braid - Douglas Hofstadter

Structure and Interpretation of Computer Programs - Harold Abelson and Gerald Sussman with Julie Sussman

The Feynman Lectures on Physics - Richard P. Feynman, Robert B. Leighton, Matthew Sands

Shorter Works

A Psalm of Life - Henry Wadsworth Longfellow

You and Your Research - Richard Hamming

Building Machines that Think and Learn Like People - Brendan M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman

Steps Toward Artificial Intelligence - Marvin Minsky

As We May Think - Vannevar Bush

The Philosophy of Composition - Edgar Allan Poe


website template credits