Michael Chang

I am a Ph.D. student advised by Professors Sergey Levine and Tom Griffiths in the computer science department at U.C. Berkeley. I am a member of Berkeley AI Research.

Prior to coming to Berkeley, I spent time at Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA) under the supervision of Professor Jürgen Schmidhuber.

I graduated in 2017 with a B.S. in Computer Science from MIT, where I researched in CSAIL and BCS under the supervision of Professors Josh Tenenbaum and Antonio Torralba.

Previously I worked at Google, at the University of Michigan Ann Arbor with Professor Honglak Lee, at the MIT Media Lab with Professor Pattie Maes, and as Strategy Lead on the MIT Solar Electric Vehicle Team.

Google Scholar  /  LinkedIn  /  Github  /  Twitter  /  Goodreads  /  Swimming

[ News | Talks | Teaching | Research | Readings | Heroes ]


I am interested in the inductive biases and algorithmic constraints that guide learning agents to develop their own languages for representing problems and modeling their world.

My current hypothesis is that a learner needs to exploit compositionality in both its internal representations and its internal computations to continuously accumulate and organize knowledge in a way that is useful for extrapolation and transfer beyond the data it has seen before.

I have pursued this research vision from several perspectives: unsupervised learning of disentangled representations, neural architectures that capture regularities in environment dynamics, bridging perception and symbolic reasoning, hierarchical approaches to problem solving, and compositional neural program induction.

Automatically Composing Representation Transformations as a Means for Generalization
Michael Chang, Abhishek Gupta, Sergey Levine, Thomas Griffiths
ICML Workshop on Neural Abstract Machines & Program Induction v2, 2018

This paper connects and synthesizes ideas from reformulation, metareasoning, program induction, hierarchical reinforcement learning, and self-organizing neural networks. The key perspective of this paper is to recast the problem of generalization as a problem of learning algorithmic procedures over representation transformations: discovering the structure of a family of problems amounts to learning a set of reusable primitive transformations and their means of composition. Our formulation enables the learner to learn the structure and parameters of its own computation graph with sparse supervision, make analogies between problems by transforming one problem representation into another, and exploit modularity and reuse to scale to problems of varying complexity.

Representational Efficiency Outweighs Action Efficiency in Human Program Induction
Sophia Sanborn, David Bourgin, Michael Chang, Thomas Griffiths
Proceedings of the 40th Annual Conference of the Cognitive Science Society, 2018

This paper introduces Lightbot, a problem-solving domain that explores the link between problem solving and program induction. This paper departs from work in hierarchical learning that hypothesizes that hierarchies accelerate the discovery of shortest-path solutions to a problem by segmenting the solution into subgoals. Instead, we investigate a setting in which the hierarchical solutions that humans discover minimize the complexity of the underlying program that generated the solution rather than the length of the solution itself.

Relational Neural Expectation Maximization: Unsupervised Discovery of Objects and their Interactions
Sjoerd van Steenkiste, Michael Chang, Klaus Greff, Jürgen Schmidhuber
Proceedings of the International Conference on Learning Representations (ICLR), 2018
Press: NVIDIA article
project webpage / code

We present a novel method that learns to discover objects and model their physical interactions from raw visual images in a purely unsupervised fashion. It incorporates prior knowledge about the compositional nature of human perception to factor interactions between object-pairs and learn efficiently. On videos of bouncing balls we show the superior modeling capabilities of our method compared to other unsupervised neural approaches that do not incorporate such prior knowledge.

Relational Neural Expectation Maximization
Sjoerd van Steenkiste, Michael Chang, Klaus Greff, Jürgen Schmidhuber
NIPS Workshop on Cognitively Informed Artificial Intelligence, 2017
Oral Presentation, Oculus Outstanding Paper Award

We propose a novel approach to common-sense physical reasoning that learns physical interactions between objects from raw visual images in a purely unsupervised fashion. Our method incorporates prior knowledge about the compositional nature of human perception, enabling it to discover objects, factor interactions between object-pairs to learn efficiently, and generalize to new environments without re-training.

A Compositional Object-Based Approach to Learning Physical Dynamics
Michael B. Chang, Tomer D. Ullman, Antonio Torralba, Joshua B. Tenenbaum
Proceedings of the International Conference on Learning Representations (ICLR), 2017
Press: Science Magazine article (accompanying video | featured segment)
project webpage / code / poster / spotlight talk (NIPS Intuitive Physics Workshop)

The Neural Physics Engine (NPE) frames learning a simulator of intuitive physics as learning a compositional program over objects and interactions. This allows the NPE to naturally generalize across variable object count and different scene configurations.

Understanding Visual Concepts with Continuation Learning
William F. Whitney, Michael B. Chang, Tejas D. Kulkarni, Joshua B. Tenenbaum
International Conference on Learning Representations (ICLR) workshop, 2016
project webpage / code

This paper presents an unsupervised approach to learning factorized symbolic representations of high-level visual concepts by exploiting temporal continuity in the scene.


Here are some of my past and current readings that have changed the way I think.

Longer Works

Holes - Louis Sachar

Harry Potter - J. K. Rowling

The Society of Mind - Marvin Minsky

三國演義 Romance of the Three Kingdoms - 羅貫中 Luo Guanzhong

The Beginning of Infinity - David Deutsch

The Little Prince - Antoine de Saint-Exupéry

莊子 Zhuangzi - 莊子 Zhuangzi

The Structure of Scientific Revolutions - Thomas Kuhn

Hegemony or Survival - Noam Chomsky

Gödel, Escher, Bach: an Eternal Golden Braid - Douglas Hofstadter

Structure and Interpretation of Computer Programs - Harold Abelson and Gerald Sussman with Julie Sussman

The Feynman Lectures on Physics - Richard P. Feynman, Robert B. Leighton, Matthew Sands

Republic - Plato

道德經 Tao Te Ching - 老子 Laozi

Discussions on Youth - Daisaku Ikeda

Shorter Works

A Psalm of Life - Henry Wadsworth Longfellow

You and Your Research - Richard Hamming

Building Machines that Think and Learn Like People - Brendan M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman

Steps Toward Artificial Intelligence - Marvin Minsky

As We May Think - Vannevar Bush

The Philosophy of Composition - Edgar Allan Poe

The Differences Between Tinkering and Research - Julian Togelius

Others' Reading Lists

Lucas Morales' reading list

MIT Probabilistic Computing Project's reading list

Jürgen Schmidhuber's recommended readings

Gerry Sussman's reading list

Marcus Hutter's reading list

Patrick Winston's reading list

Tom Griffiths' reading list

Scott Aaronson's reading list


Scott Aaronson's blog


Here are some of my heroes who have shaped my worldview.

證嚴法師 Master Cheng Yen

Happiest is the person whose heart is filled with love.

We cannot love when filled with suspicion; we cannot forgive when unwilling to believe; we cannot trust when filled with doubts.

Blessed are those who have the ability to love and be loved by others.

Clear conscience brings peace of mind; the greatest happiness comes from the pleasure of giving and helping others.

Having the ability to help others is a blessing.

To forgive others is, in fact, being kind to ourselves.

Being filial is not making our parents unduly worry about us.

Do whatever it takes to do what is right. Do whatever it takes to not do what is wrong.

Being angry is a form of torturing ourselves with the mistakes of others.

Only when we light up our heart can we inspire others to do the same.

If our thoughts are upright and wholesome, we can always be at ease and evil cannot come near.

To a beautiful heart, everything appears beautiful.

Do not fear making mistakes in life, fear only not correcting them.

李小龍 Bruce Lee

Knowledge will give you power, but character respect.

As you think, so you shall become.

Mistakes are always forgivable, if one has the courage to admit them.

If you love life, don't waste time, for time is what life is made up of.

The key to immortality is first living a life worth remembering.

Do not pray for an easy life, pray for the strength to endure a difficult one.

The self-sufficient stand alone - most people follow the crowd and imitate.

Notice that the stiffest tree is easily cracked, while the bamboo or willow survives by bending with the wind.

Patience is not passive; on the contrary, it is concentrated strength.

What is defeat? Nothing but education. Nothing but the first step to something better.

Success means doing something sincerely and wholeheartedly.

It is compassion rather than the principle of justice which can guard us against being unjust to our fellow man.

Absorb what is useful. Discard what is not. Add what is uniquely your own.

Defeat is a state of mind. No one is ever defeated until defeat has been accepted as reality.

Empty your cup so that it may be filled.

website template credits