Probabilistic Graphical Models

EE 639 Advanced Topics in Signal Processing and Communication

Fall 2014 



For the final project, you are asked to prepare a survey study of one of the following topics in machine learning. For each topic, I have selected two papers for you, and you are required to add at least three more to the list. The goals of the final project are for you:

a.      To summarize key ideas of an advanced or emerging topic in machine learning

b.      To provide a critical analysis of each paper based on its strengths and weaknesses

c.      To evaluate available software or demonstrate the methods with your own toy examples

d.      To speculate on important future research directions and open questions for your chosen topic

e.      To educate the rest of the class on this topic in an easy-to-understand manner

Your grade will be based on a 30-minute in-class presentation (12/10 & 12/12) and a final report (due 12/15) in double-column format of at least 6 pages (MS Word template or LaTeX template). The presentation will be judged by me, Nikky, and your peers. The quality of the report and presentation will be graded according to the criteria above.

So that we can learn as much as possible from each other, I would like each of you to cover a different topic. Please send me your order of preference over all the topics by 11/11, and I will make the final topic assignments in class on 11/12.

The topics and presenters are as follows:

1.      Deep Learning  (Presenter: Qingguo Xu)

a.      Bengio, Y. (2009). Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1), 1-127.

b.      Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 1097-1105).

2.      Variational Approximate Inference  (Presenter: Zach Bessinger)

a.      Fox, C. W., & Roberts, S. J. (2012). A tutorial on variational Bayesian inference. Artificial Intelligence Review, 38(2), 85-95.

b.      Hoffman, M., Bach, F. R., & Blei, D. M. (2010). Online learning for latent Dirichlet allocation. In Advances in Neural Information Processing Systems (pp. 856-864).

3.      Ensemble Learning (Presenter: Yajie Zhao)

a.      Criminisi, A., Shotton, J., & Konukoglu, E. (2012). Decision forests: A unified framework for classification, regression, density estimation, manifold learning and semi-supervised learning. Foundations and Trends® in Computer Graphics and Vision, 7(2–3), 81-227.

b.      Murphy, K., Torralba, A., & Freeman, W. (2003). Using the forest to see the trees: A graphical model relating features, objects and scenes. Advances in Neural Information Processing Systems, 16, 1499-1506.

4.      Semi-supervised Learning (Presenter: Jayanti Andhale)

a.      Gibson, B. R., Rogers, T. T., & Zhu, X. (2013). Human semi-supervised learning. Topics in Cognitive Science, 5(1), 132-172.

b.      Zhou, Z. H., & Li, M. (2010). Semi-supervised learning by disagreement. Knowledge and Information Systems, 24(3), 415-439.

5.      Transfer Learning

a.      Pan, S. J., & Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345-1359.

b.      Long, M., Wang, J., Ding, G., Sun, J., & Yu, P. S. (2013). Transfer feature learning with joint distribution adaptation. In 2013 IEEE International Conference on Computer Vision (ICCV) (pp. 2200-2207). IEEE.

6.      Gaussian Processes

a.      Seeger, M. (2004). Gaussian processes for machine learning. International Journal of Neural Systems, 14(2), 69-106.

b.      Wang, J. M., Fleet, D. J., & Hertzmann, A. (2008). Gaussian process dynamical models for human motion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2), 283-298.