News

Dr. Dan Xu of the University of Oxford Visits the Laboratory and Gives a Talk

Published: 2019-06-18

Brief Bio:

Dan Xu is currently a Postdoc in the Visual Geometry Group (VGG) in the Department of Engineering Science at the University of Oxford, under the supervision of Prof. Andrea Vedaldi and Prof. Andrew Zisserman. He received his Ph.D. in Computer Science from the University of Trento in 2018, under the supervision of Prof. Nicu Sebe in the Multimedia and Human Understanding Group (MHUG). He was a research assistant in the Department of Electronic Engineering at the Chinese University of Hong Kong under the supervision of Prof. Xiaogang Wang. His research mainly lies in computer vision, multimedia, and machine learning. Specifically, he is interested in deep learning and its applications in 2D and 3D scene understanding, involving topics such as scene depth prediction, object contour detection, pedestrian detection, scene parsing, and visual SLAM. He has published more than 10 papers in CCF-A conferences and journals, including NIPS, CVPR, ACM MM, and TPAMI. He received the Intel Best Scientific Paper Award at ICPR 2016 and was a Best Paper Award Nominee at ACM MM 2018.

At 4:30 p.m. on Tuesday, June 18, 2019, Dr. Dan Xu of the University of Oxford visited the laboratory and gave a talk entitled "Learning Structured and Multi-Modal Deep Representations for Regression, Detection and Segmentation" in Room C126 of Peking University Shenzhen Graduate School.

Introduction of the talk:

In this talk, Dan presented three works directed at autonomous driving, covering scene depth estimation, pedestrian detection, and the joint learning of scene depth and scene parsing. For scene depth estimation, he introduced a new continuous multi-scale CRF model and its implementation as a convolutional neural network for end-to-end optimization. For pedestrian detection, he first illustrated a model for learning cross-modal deep representations between the RGB and thermal domains, and then elaborated on how the learned representations can be transferred to improve pedestrian detection under hard environmental conditions. Finally, he presented a joint deep learning framework that handles multiple tasks, such as object boundary detection, surface normal estimation, depth estimation, and scene parsing, within a unified model.
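To illustrate the joint multi-task idea in the last part of the talk, the sketch below shows a minimal network with one shared encoder and a separate lightweight head per task. It is a hypothetical PyTorch illustration only: all layer sizes, head names, and the toy encoder are our assumptions, not the architecture presented in the talk.

```python
# Minimal sketch (assumed, not the presented model): a shared encoder
# feeds four task-specific heads, so boundary detection, surface normal
# estimation, depth estimation, and scene parsing are learned jointly.
import torch
import torch.nn as nn


class MultiTaskNet(nn.Module):
    def __init__(self, num_classes=19):
        super().__init__()
        # Shared convolutional encoder producing a common feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # One 1x1-conv head per task, all consuming the shared features.
        self.boundary_head = nn.Conv2d(64, 1, 1)           # object boundaries
        self.normal_head = nn.Conv2d(64, 3, 1)             # surface normals
        self.depth_head = nn.Conv2d(64, 1, 1)              # monocular depth
        self.parsing_head = nn.Conv2d(64, num_classes, 1)  # scene parsing

    def forward(self, x):
        feats = self.encoder(x)  # features shared by every task
        return {
            "boundary": self.boundary_head(feats),
            "normal": self.normal_head(feats),
            "depth": self.depth_head(feats),
            "parsing": self.parsing_head(feats),
        }


if __name__ == "__main__":
    net = MultiTaskNet()
    out = net(torch.randn(1, 3, 64, 64))
    for task, pred in out.items():
        print(task, tuple(pred.shape))
```

In practice, each head gets its own loss (e.g. cross-entropy for parsing, a regression loss for depth and normals), and the losses are summed so gradients from every task update the shared encoder.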