Research

Systems Engineering for Safe and Certifiable Cognitive Systems:

Safe and certifiable autonomous cognitive systems require combining the best theory and practice of model-based systems engineering with the latest advances in statistics, AI, machine learning, and the cognitive sciences. Our initiative in Frankfurt, seeded by BMBF funding for the Bernstein Focus in NeuroTechnology (2009-2016), pursues trans-disciplinary systems research linking insights from systems engineering, neuroscience, and cognitive science/psychology; our emphasis is on methodologies, platforms, and tools leading to safe, certifiable AI systems.

We view the human brain as an evolved system: a flexible learning architecture shaped by nature to solve a range of specific tasks, in a class of environments, in ways that enhance human survival. Model-driven systems engineering is a discipline that formalizes application-domain specifications, i.e. task performance requirements and contextual models, and translates them into system designs. Systems engineering in the context of computer vision has its origins in the early 1990s and has been refined over the years through practice [1, 2, 4].

At a high level, architectures inspired by systems engineering principles have parallels to models of brain function. Such a system is massively parallel: it performs a feed-forward decomposition of the input visual signal into constituent modalities (e.g. color, motion, texture, shadow, reflection, contours), allowing for efficient indexing into a rich memory structure; generated hypotheses are then refined via a dynamic, recurrent process that converges to an interpretation. While the engineering and brain-science views ([13]) of these architectures agree at this high level, ongoing work targets engineering platforms that facilitate rapid design and validation of real-world applications. Our framework allows for parallel execution, exploration of tradeoffs, and systematic fusion of model-based and modern deep machine learning approaches to address context-sensitivity, explainability, and varying degrees of safety.
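
The feed-forward/recurrent pattern described above can be made concrete in a few lines. The following is a minimal, hypothetical Python sketch, not our actual system: all functions and the three modality statistics are illustrative stand-ins. The input is decomposed in parallel into crude modality statistics, the decomposition indexes a memory of prototypes, and a recurrent loop sharpens the hypothesis distribution.

```python
import numpy as np

def decompose(image):
    """Feed-forward decomposition into crude modality statistics."""
    gray = image.mean(axis=2)
    gy, gx = np.gradient(gray)
    return np.array([
        image.mean(),             # crude "color" statistic
        np.hypot(gx, gy).mean(),  # crude "contour" statistic
        gray.std(),               # crude "texture" statistic
    ])

def interpret(image, prototypes, steps=5, temperature=1.0):
    """Index memory with the decomposition, then recurrently sharpen beliefs."""
    features = decompose(image)
    dists = np.array([np.linalg.norm(features - p) for p in prototypes])
    belief = np.exp(-dists)
    belief /= belief.sum()
    for _ in range(steps):
        # Toy stand-in for top-down, recurrent feedback: hypotheses consistent
        # with the evidence reinforce themselves, the rest decay.
        belief = belief * np.exp(-dists / temperature)
        belief /= belief.sum()
    return belief

# Toy usage: a flat-image prototype and one extracted from the input itself.
rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))
prototypes = [decompose(np.zeros((32, 32, 3))), decompose(image)]
print(interpret(image, prototypes))  # belief mass concentrates on prototype 2
```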

Recent Research Results:

Computer simulations can play a dominant role in evaluating the behavior of alternative implementations and in systematic performance evaluation and validation. Moreover, they can synthesize data for data-hungry deep learning methods. Our recent work on 'simulation for cognitive vision' addresses questions such as: a) What is the impact of computer graphics rendering fidelity on machine learning performance? b) How much can the amount of real data required for machine learning be reduced through the use of simulated data? c) How can we bridge the gap between simulated-data statistics and real-world statistics?
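
As one concrete illustration of question (c), the deviation between simulated and real data statistics can be quantified with a Fréchet distance between Gaussian fits to feature distributions from each domain (the measure underlying the widely used FID score). The sketch below is a generic example under that assumption; the feature vectors are synthetic placeholders for features extracted from simulated and real images.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussians fitted to two feature sets."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b).real  # matrix square root
    return float(np.sum((mu_a - mu_b) ** 2)
                 + np.trace(cov_a + cov_b - 2.0 * covmean))

# Toy usage with synthetic "simulated" and "real" feature vectors; a larger
# value indicates a larger sim-to-real statistics gap.
rng = np.random.default_rng(0)
sim_feats = rng.normal(0.0, 1.0, size=(500, 16))
real_feats = rng.normal(0.3, 1.2, size=(500, 16))
print(frechet_distance(sim_feats, real_feats))
```
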
In our applied systems work, we have combined model-based and data-driven machine learning principles to demonstrate cognitive architecture designs in which contextual expectation models are used to estimate world state, monitor behaviors, and identify anomalies. Application examples include video surveillance/security, brake-light on/off detection in automotive settings, fine crack/defect classification for bridge infrastructure, and behavior monitoring in scientific applications.
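
A minimal sketch of the expectation-model pattern behind these applications follows, assuming a simple per-context Gaussian model. The class, contexts, and observations are hypothetical; real systems would use far richer contextual models.

```python
import numpy as np

class ExpectationModel:
    """Per-context Gaussian expectation model over observation vectors."""

    def __init__(self):
        self.stats = {}  # context -> (mean, std)

    def fit(self, context, observations):
        obs = np.asarray(observations, dtype=float)
        self.stats[context] = (obs.mean(axis=0), obs.std(axis=0) + 1e-8)

    def anomaly_score(self, context, observation):
        mean, std = self.stats[context]
        # Largest per-dimension z-score against the contextual expectation.
        return float(np.max(np.abs((np.asarray(observation) - mean) / std)))

# Toy usage: learn "normal" daytime observations, then score new ones.
rng = np.random.default_rng(1)
model = ExpectationModel()
model.fit("daytime", rng.normal([10.0, 0.5], [1.0, 0.05], size=(200, 2)))
print(model.anomaly_score("daytime", [10.5, 0.55]))  # small -> normal
print(model.anomaly_score("daytime", [25.0, 0.90]))  # large -> anomaly
```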

Research Themes:

Our focus will be on methods, systems, and frameworks that enable the engineering of safe-AI systems that are context-sensitive, provide explainability of achieved results, and whose architectural complexity scales with the complexity of the context, task, and accuracy requirements. We believe this requires a synthesis of best practices in modern machine learning with classical systems engineering methods. Synthesizing the next generation of platforms and tools to realize safe-AI products and solutions will necessitate organizing and formalizing the space of contexts, tasks, and performance requirements, i.e. (C, T, P), and how they map to programs and parameters. We seek answers to specific questions such as: What is the space of intelligent tasks? In what specific contexts are these tasks to be performed, and with what performance requirements in terms of accuracy, robustness, and computational efficiency? What are the appropriate representations for contextual models? Is there a design theory for how these model representations, together with task and performance requirements, map to programs; that is, what are the design patterns for cognitive solutions (programs) that address given (C, T, P) choices? While our application focus over the years has been largely computer vision, we envision that our methodology and framework will scale to other domains, and we will therefore collaborate with other professors on multi-modal systems involving vision, acoustics/speech, language, and other modalities.
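
To make the (C, T, P) formalization tangible, the following entirely hypothetical Python sketch encodes a specification of context, task, and performance requirements and maps it, via explicit design rules, to a candidate program and its parameters. The rules and all names are illustrative placeholders for what a genuine design theory would have to provide.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Spec:
    """A point in the (C, T, P) space."""
    context: str          # C, e.g. "outdoor, variable illumination"
    task: str             # T, e.g. "pedestrian detection"
    min_accuracy: float   # P: required accuracy on the target distribution
    max_latency_ms: float # P: computational-efficiency requirement

def design(spec: Spec) -> dict:
    """Toy design rules mapping a (C, T, P) spec to (program, parameters)."""
    # Performance requirements drive the capacity/latency tradeoff.
    if spec.max_latency_ms < 20:
        program, params = "lightweight_cnn", {"depth": 18}
    elif spec.min_accuracy > 0.9:
        program, params = "deep_cnn", {"depth": 50}
    else:
        program, params = "midsize_cnn", {"depth": 34}
    # The contextual model drives preprocessing choices.
    if "variable illumination" in spec.context:
        params["photometric_normalization"] = True
    return {"task": spec.task, "program": program, "parameters": params}

print(design(Spec("outdoor, variable illumination",
                  "pedestrian detection", 0.95, 15.0)))
```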

Participation in Collaborative Research Networks:

Drawing on the chair's broad background in science, technology, and management, we have been focusing over the last few years on the organic development of a network structure, including academic labs, startups, and large companies, to address the open challenge of developing design automation tools for the systematic engineering of safe-AI systems. We envision the need to participate in (and develop) an open innovation network addressing four pillars:

1) an academic education and interdisciplinary research network linking neuroscience, psychology, applied mathematics and statistics, systems engineering and allied disciplines,

2) a modern training network for system scientists and engineers that addresses gaps in traditional university setups by emphasizing a blend of theory and practice through large projects,

3) a business ecosystem providing specific real-world problems and the critical domain expertise required for applying AI to society, and finally,

4) an open innovation network involving software and systems engineering platforms that enable rapid prototyping and accelerated evaluation of alternative AI system products and solutions.

We envision expanding our extensive worldwide academic and industrial research network to enable our team to participate in, and make an impact on, this dynamic emerging field.

References:

  1. V. Ramesh, Performance Characterization of Image Understanding Algorithms, PhD dissertation (supervisor: R. Haralick), University of Washington, March 1995.
  2. T. Binford et al., Bayesian Inference in Model-Based Machine Vision, in Uncertainty in AI (3), Levitt, Kanal and Lemmer (Eds.), North Holland, 1989.
  3. W. Mann, 3D Object Interpretation from Monocular Images, PhD dissertation, Stanford University, 1996.
  4. M. Greiffenhagen et al., Design, Analysis and Engineering of Video Monitoring Systems: A Case Study, Proceedings of the IEEE, Special Issue on Video Surveillance, Nov. 2001.
  5. M. Greiffenhagen et al., The Systematic Design and Analysis Cycle of a Vision System: A Case Study in Video Surveillance, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2001), Vol. 2, 2001.
  6. V. D. Shet et al., Predicate Logic Based Image Grammars for Complex Pattern Recognition, International Journal of Computer Vision (IJCV), Special Issue on Stochastic Image Grammars, 2011.
  7. S. C. Zhu and D. Mumford, A Stochastic Grammar of Images, Foundations and Trends in Computer Graphics and Vision, Vol. 2, No. 4 (2006), 259–362.
  8. U. Grenander and M. Miller, Pattern Theory: From Representation to Inference, Oxford University Press, 2007.
  9. Y. Bengio, Learning Deep Architectures for AI, Foundations and Trends in Machine Learning, Vol. 2, No. 1 (2009), 1–127.
  10. Y. Bengio, Deep Learning of Representations: Looking Forward, in Statistical Language and Speech Processing, Lecture Notes in Computer Science, Vol. 7978, 2013, pp. 1-37.
  11. D. Marr, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, MIT Press, 1982.
  12. G. A. Carpenter and S. Grossberg, Adaptive Resonance Theory, in M. A. Arbib (Ed.), The Handbook of Brain Theory and Neural Networks, Second Edition, pp. 87-90, MIT Press, Cambridge, MA, 2003.
  13. C. von der Malsburg, A Vision Architecture Based on Fiber Bundles, Front. Comput. Neurosci., Conference Abstract: Bernstein Conference 2012.
  14. T. Poggio et al., Models of Visual Cortex, Scholarpedia, 2013, 8(4):3516.
  15. D. Kahneman, Thinking, Fast and Slow, Farrar, Straus and Giroux, 2011.
  16. A. Sloman, Virtual Machines in Philosophy, Engineering & Biology, in Proceedings of the Workshop on Philosophy & Engineering (WPE-2008), 2008.
  17. B. J. Baars, Global Workspace Theory of Consciousness: Toward a Cognitive Neuroscience of Human Experience?, Progress in Brain Research, Vol. 150, 2005.

Publications (since 2011):

1. Siefert, P., Hota, R., Ramesh, V., & Gruenewald, B.: Chronic within-hive video recordings detect altered nursing behaviour and retarded larval development of neonicotinoid treated honey bees. Scientific Reports 10, Article 8727 (2020)

2. Kunfeng Wang, Fei-Yue Wang, Visvanathan Ramesh, Ashish Shrivastava, David Vázquez, Fuxin Li: Generating virtual images for promoting visual artificial intelligence. Neurocomputing 394: 112-113 (2020)

3. Neil A. Thacker, Carole J. Twining, Paul D. Tar, Scott Notley, Visvanathan Ramesh: Fundamental Issues Regarding Uncertainties in Artificial Neural Networks. CoRR abs/2002.11152 (2020)

4. Mundt, M., Majumder, S., Murali, S., Panetsos, P., Ramesh, V.: Meta-learning Convolutional Neural Architectures for Multi-target Concrete Defect Classification with the COncrete DEfect BRidge IMage Dataset. IEEE CVPR 2019

5. Martin Mundt, Iuliia Pliushch, Sagnik Majumder, Visvanathan Ramesh: Open Set Recognition Through Deep Neural Network Uncertainty: Does Out-of-Distribution Detection Require Generative Classifiers? ICCV Workshops 2019: 753-757

6. Mundt, M., Majumder, S., Weis, T., Ramesh, V.: Rethinking Layer-wise Feature Amounts in CNNs. NIPS Workshop, 2018

7. Hess, T., Mundt, M., Weis, T., Ramesh, V.: Large-scale Stochastic Scene Generation and Semantic Annotation for Deep Convolutional Neural Network Training in the RoboCup SPL. International RoboCup Soccer Workshop, Japan, 2017 (nominated for Best Paper)

8. Weis, T., Mundt, M., Harding, P., Ramesh, V.: Anomaly Detection for Automotive Visual Signal Transition Estimation. IEEE ITSC 2017

9. V. S. R. Veeravasarapu, Constantin A. Rothkopf, Visvanathan Ramesh: Adversarially Tuned Scene Generation. CVPR 2017: 6441-6449

10. V. S. R. Veeravasarapu, Constantin A. Rothkopf, Visvanathan Ramesh: Model-Driven Simulations for Computer Vision. WACV 2017: 1063-1071

11. Ernst, J., Singh, M., Ramesh, V.: Discrete Texture Traces: Topological Representation of Geometric Context. IEEE CVPR 2012: 422-429

12. Vasu Parameswaran, Vinay D. Shet, Visvanathan Ramesh: Design and Validation of a System for People Queue Statistics Estimation. Video Analytics for Business Intelligence 2012: 355-373

13. Vinay D. Shet, Maneesh Singh, Claus Bahlmann, Visvanathan Ramesh, Jan Neumann, Larry S. Davis: Predicate Logic Based Image Grammars for Complex Pattern Recognition. Int. J. Comput. Vis. 93(2): 141-161 (2011)

DBLP Page:

https://dblp.uni-trier.de/pers/r/Ramesh:Visvanathan.html