Seminar: Pattern Analysis and Machine Intelligence WS 16/17

Reviewing the latest research in machine learning, intelligent systems, and systems and software engineering. Your lecturers are Prof. Dr. Nils Bertschinger, Prof. Dr. Matthias Kaschube, and Prof. Dr. Visvanathan Ramesh. (QIS/LSF)

For any questions, please contact us.


Bachelor students are only required to give a presentation; Master students must additionally hand in a report on their topic (~5-10 pages), with presentation and report each counting 50% of the grade. Presentations will be around 45 minutes plus discussion with the class. The course language is English. We will meet every week, and attendance is mandatory. Either choose one of the topics below and search the literature yourself (papers, book chapters, etc.), choose a paper from the list below, or request one from any of the professors above. Registration is mandatory and will be passed on to the examination office.

Presentation Dates

  • 27.10.2016 – Decisions on topics and dates
  • 03.11.2016 – no talk
  • 10.11.2016 – Lars P.: SVMs
  • 17.11.2016 – Andres F.: Music classification
  • 24.11.2016 – Philipp T.: Goal-driven deep learning – sensory cortex
  • 01.12.2016 – Margarita M.: Twitter opinion mining/text processing
  • 08.12.2016 – Hans-Joachim H.: Probabilistic topic modeling
  • 15.12.2016 – Michael W.: Deep learning (Nature)
  • 12.01.2017 – Jiawei H.: Bayesian analysis of GARCH and stochastic volatility
  • 19.01.2017 – Merali P.: Deep face recognition
  • 26.01.2017 – Julia: Variational inference
  • 02.02.2017 – Tobias K.: Avoiding pathologies of deep architectures
  • 09.02.2017 –
  • 16.02.2017 –
  • 23.02.2017 –


Topics
  • General introduction: Review papers of ML
  • Deep Learning: Modern neural networks
  • Classics:
    • Neural networks
    • Standard algorithms of ML, e.g. EM
  • Applications
    • Neuroscience
    • Finance/Economics
    • Medicine
    • Computer vision (e.g. face recognition)
    • Robotics (e.g. autonomous cars)
    • Recommendation systems (network models)
    • Natural language processing
  • Systems/Architectures
    • Integration of subsystems
    • Platforms/Probabilistic programming (e.g. TensorFlow)
    • Big Data
  • Theoretical background:
    • Probability theory: Bayesian decision theory
    • Neural networks: Universal approximation

List of papers

* General philosophy
  Build, Compute, Critique, Repeat: Data Analysis with Latent Variable Models
  Model-based machine learning

* Classics:
  - Neural networks
    A Sociological Study of the Official History of the Perceptrons Controversy
  - Standard algorithms
    Maximum Likelihood from Incomplete Data via the EM Algorithm
    Linear Dimensionality Reduction: Survey, Insights, and Generalizations
    Variational Inference: A Review for Statisticians
    - Sampling algorithms
      Sampling Methods (Bishop, Pattern Recognition and Machine Learning, Springer 2006, chapter 11)
      MCMC Using Hamiltonian Dynamics
      Elliptical slice sampling
* Applications
  - Ethics
    The social dilemma of autonomous vehicles
  - Finance/Economics
    - Volatility modeling:
      Bayesian analysis of GARCH and stochastic volatility: modeling leverage, jumps and heavy-tails for financial time series
      Generalized Wishart processes
    - Macroeconomics/Econometrics:
      Large Bayesian Vector Autoregressions
  - Natural language processing
    Probabilistic topic models
  - Robotics
    Particle Filters in Robotics
  - Biology
    Spiking neurons can discover predictive features by aggregate-label learning
    Using goal-driven deep learning models to understand sensory cortex
  - Computer Vision
    DeepFace: Closing the Gap to Human-Level Performance in Face Verification
    Deep Face Recognition

* Systems/Architectures
  - Probabilistic programming
    - Stan
      Stan: A probabilistic programming language for Bayesian inference and optimization
    Black-Box Stochastic Variational Inference in Five Lines of Python
    Probabilistic machine learning and artificial intelligence
  - Deep learning
    Marginal Space Deep Learning: Efficient Architecture for Detection in Volumetric Image Data
    Deep Boltzmann Machines
    Recurrent Models of Visual Attention

* Theoretical background
  - Deep learning
    Avoiding pathologies of deep architectures
    On Random Weights and Unsupervised Feature Learning
    Dropout as a Bayesian Approximation: Insights and Applications
    A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction

* Tutorials and reviews
  - Deep learning
    Deep Learning in Neural Networks: An Overview
    Deep Learning Algorithms with Applications to Video Analytics for A Smart City: A Survey
    Deep Learning