In this workshop we consider the question of how to learn meaningful and useful representations of data.  There has been a great deal of recent work on this topic, much of it emerging from researchers interested in training deep architectures.  Deep learning methods such as deep belief networks, sparse coding-based methods, convolutional networks, and deep Boltzmann machines have shown promise as a means of learning invariant representations of data and have already been successfully applied to a variety of tasks in computer vision, audio processing, natural language processing, information retrieval, and robotics. Bayesian nonparametric methods and other hierarchical graphical model-based approaches have also recently been shown to learn rich representations of data.