Two Auton Lab thesis defenses next week! Mark your calendars please...

Artur Dubrawski awd at cs.cmu.edu
Tue Apr 28 11:28:50 EDT 2020


Dear Autonians,

Please join me in attending 2 (yes, two) excellent virtual presentations by
our own Maria De-Arteaga and Chao Liu, both of which are scheduled for
next week.

(btw, I do not remember the last time we had more than one
doctoral thesis defense scheduled in a single week at the Auton Lab...)

Maria's defense will be on Monday May 4th at 11am.
The official announcement will be shared soon.

Chao's defense is scheduled for Thursday May 7th at noon.
The official announcement with the zoom link is included below.

Please help these outstanding colleagues move to the next stage of
their professional lives by attending their presentations and cheering for
them :)

Cheers,
Artur

-----

Date:  07 May 2020

Time:  12:00 p.m.

Place: *Virtual Presentation* https://cmu.zoom.us/j/2623852919

Type:  Ph.D. Thesis Defense

Who:  Chao Liu

Title:  Vision with Small Baselines


Abstract:
3D sensing with portable imaging systems is becoming increasingly popular
in computer vision applications such as autonomous driving, virtual
reality, robotic manipulation, and surveillance, due to the decreasing cost
and size of RGB cameras. Despite the compactness and portability of small
baseline vision systems, it is well known that the uncertainty in range
finding from multiple views is inversely related to the sensor baseline.
On the other hand, besides compactness, small baseline vision systems have
unique advantages such as easier correspondence and large overlapping
regions across views.
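
(For intuition, this inverse relation follows from the standard
rectified-stereo triangulation relation, not from anything specific to the
thesis: with focal length f, baseline b, and disparity d, depth is
z = f b / d, so a fixed disparity error sigma_d propagates to a depth error

    \sigma_z \approx \left| \frac{\partial z}{\partial d} \right| \sigma_d
            = \frac{f\,b}{d^{2}}\,\sigma_d
            = \frac{z^{2}}{f\,b}\,\sigma_d ,

which grows as the baseline b shrinks.)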

The goal of this thesis is to develop computational methods and small
baseline imaging systems for 3D sensing of complex scenes in real world
conditions. Our design principle is to physically model the scene
complexities and specifically infer the uncertainties for the images
captured with small baseline setups.

With this design principle, we make four contributions. In the first
contribution, we propose a two-stage near-light photometric stereo method
using a small (6 cm diameter) LED ring. The imaging system is compact
compared to traditional photometric stereo systems. In the second
contribution, we develop an algorithm to simultaneously estimate the
occlusion pattern and depth for thin structures from a focal image stack,
which is either obtained by varying the focus/aperture of the lens or
computed from a one-shot light field image. As the third contribution, we
propose a learning-based method to estimate per-pixel depth and its
uncertainty continuously from a monocular video stream, with small camera
baselines across adjacent frames. The resulting depth probability volumes
are accumulated over time as more incoming frames are processed
sequentially, which effectively reduces depth uncertainty and improves
accuracy, robustness, and temporal stability (a sketch of this accumulation
step follows the abstract). Finally, using a high resolution camera and a
laser projector, we develop a high spatial resolution Diffuse Optical
Tomography (DOT) system that can detect accurate boundaries and the
relative depth of heterogeneous structures up to 8 mm below a highly
scattering medium such as whole milk.

We showcase the application of a small baseline vision system for in-vivo
micro-scale 3D reconstruction of capillary veins and develop a system for
real-time analysis of microvascular blood flow for critical care. We
believe that the computational methods developed in this thesis will find
further applications in compact 3D sensing under challenging conditions.
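
For readers curious what "accumulating depth probability volumes over time"
can look like mechanically, below is a minimal, hypothetical sketch in Python
of sequentially fusing per-pixel depth distributions. It only illustrates the
general idea; it is not the method from the thesis, which also handles camera
motion, warping between views, and learned uncertainty.

import numpy as np

def normalize(volume):
    # Normalize an (H, W, D) volume so each pixel's depth distribution sums to 1.
    return volume / volume.sum(axis=-1, keepdims=True)

def fuse(accumulated, incoming, uniform_mix=0.05):
    # Naive per-pixel Bayesian update: multiply the accumulated and incoming
    # depth distributions (assumed already aligned to a common reference view),
    # then mix in a small uniform component so the belief can still adapt.
    fused = normalize(accumulated * incoming)
    depth_bins = fused.shape[-1]
    return normalize((1.0 - uniform_mix) * fused + uniform_mix / depth_bins)

# Toy usage: 4 frames, 8x8 pixels, 32 depth hypotheses per pixel (all made up).
rng = np.random.default_rng(0)
frames = [normalize(rng.random((8, 8, 32))) for _ in range(4)]

belief = frames[0]
for volume in frames[1:]:
    belief = fuse(belief, volume)

depth_index = belief.argmax(axis=-1)                            # per-pixel MAP depth bin
uncertainty = -(belief * np.log(belief + 1e-12)).sum(axis=-1)   # per-pixel entropy
print(depth_index.shape, uncertainty.shape)

Each fusion step sharpens the per-pixel distributions, which is one way the
accumulation described above can reduce depth uncertainty over time.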



Thesis Committee Members:

Srinivasa G. Narasimhan, Co-chair
Artur W. Dubrawski, Co-chair
Aswin C. Sankaranarayanan
Manmohan Chandraker, University of California, San Diego


A copy of the thesis document is available at:

https://www.dropbox.com/s/cz75koh96ragy4x/thesis-small-baseline.pdf?dl=0