
Warren Grundfest Lectures in Computational Imaging - First Friday of Each Month

The Grundfest Lecture series highlights rising stars in computational imaging. The series is co-organized by UCLA and Caltech, and is a named lecture in honor of the late SPIE Fellow Prof. Warren Grundfest (UCLA). The lecture includes an honorarium for junior speakers: inspired by the hardships junior researchers have faced during COVID-19, Akasha Imaging provides a small monetary award to every speaker in a non-permanent academic position (e.g., PhD student or postdoc). Lectures take place on the first Friday of each month at 12 noon Pacific time. The lead organizer is Pradyumna Chari.


Organizers




Prof. Achuta Kadambi

UCLA

Prof. Katie Bouman

Caltech

Pradyumna Chari

UCLA




Speakers



Ewa Nowara

PhD Student at Rice University

Can cameras really measure vital signs? Algorithms and systems for camera-based health monitoring in unconstrained settings.

Imagine that when you looked at someone, you could see their heartbeat. A suite of techniques called imaging photoplethysmography has recently enabled contactless measurement of vital signs with cameras by leveraging the small intensity changes in the skin caused by cardiac activity. Measuring vital signs remotely is advantageous in several applications, including virtual doctor appointments (especially relevant during a pandemic), more comfortable sleep monitoring, and monitoring of prematurely born infants. However, camera-based physiological signals are very weak and easily corrupted by varying illumination, video compression artifacts, and head motion, so most existing methods work only in controlled settings and fail in realistic applications. We developed a deep learning denoising algorithm based on convolutional attention networks that faithfully recovers physiological signals even from heavily corrupted videos. Moreover, our denoising algorithm recovers subtle waveform dynamics that were previously impossible to measure with cameras. We also discuss how to improve the performance of deep learning methods and avoid overfitting when training on small, non-diverse datasets.

March 5, 2021, 12 noon PT
[Sign-up]
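For readers curious how the tiny skin-intensity changes in the abstract become a heart-rate estimate, below is a minimal sketch of the classical iPPG baseline (spatial averaging plus band-pass filtering). This is not the speaker's denoising network; the inputs `frames` and `skin_mask` and the 0.7-4 Hz band are illustrative assumptions.

```python
# Minimal imaging-photoplethysmography (iPPG) sketch: recover a pulse
# signal from the tiny skin-intensity fluctuations described above.
# This is the classical spatial-averaging baseline, NOT the speaker's
# denoising network; `frames` and `skin_mask` are hypothetical inputs.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(frames, skin_mask, fps):
    """frames: (T, H, W, 3) video; skin_mask: (H, W) bool; fps: float."""
    # Spatially average the green channel over skin pixels
    # (green carries the strongest PPG signal).
    trace = frames[:, :, :, 1][:, skin_mask].mean(axis=1)
    trace = trace - trace.mean()  # remove DC component

    # Band-pass to the plausible human heart-rate range (0.7-4 Hz = 42-240 bpm).
    b, a = butter(3, [0.7, 4.0], btype="bandpass", fs=fps)
    pulse = filtfilt(b, a, trace)

    # Heart rate = dominant frequency of the filtered trace.
    spectrum = np.abs(np.fft.rfft(pulse))
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    hr_hz = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * hr_hz  # beats per minute
```

The raw averaged trace is exactly the weak, easily corrupted signal the abstract describes; the talk's learned denoiser sits between the averaging and the frequency analysis.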




Dr. Emma Alexander

Postdoc, UC Berkeley

Differential Defocus in Cameras and Microscopes

Image defocus provides a useful depth cue in computer vision, and can also be used to recover phase information in coherent microscopy. In a differential setting, both problems can be addressed by solving simple equations, known as Depth from Differential Defocus and the Transport of Intensity Equation. Relating these governing equations requires putting them on equal footing, so we'll look at the assumptions common to photography and microscopy applications, and go through a gentle introduction to coherence, light fields and Wigner Distribution Functions, and generalized phase. We'll show that depth from defocus can be seen as a special case of phase recovery, with a new interpretation of phase for incoherent settings.

April 2, 2021, 12 noon PT
[Sign-up]
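As a rough sketch of how the talk's two governing equations line up (standard textbook forms; the speaker's notation may differ): the Transport of Intensity Equation relates the axial intensity derivative to phase, while under an assumed Gaussian blur model a differential change of the focus setting \(\mu\) obeys a diffusion-type equation whose Laplacian coefficient encodes the depth-dependent blur.

```latex
% Standard forms, for orientation only; the talk's notation may differ.
% Transport of Intensity Equation: axial intensity change <-> phase phi.
\[
  \frac{\partial I}{\partial z}
    = -\frac{\lambda}{2\pi}\,\nabla \cdot \bigl( I\,\nabla \phi \bigr)
\]
% Differential defocus under a Gaussian kernel of width sigma(Z):
% Gaussian blurring is diffusion, so the focus derivative is a scaled
% Laplacian, and the depth-dependent coefficient is a simple ratio.
\[
  \frac{\partial I}{\partial \mu}
    = \frac{1}{2}\,\frac{\partial \sigma^{2}}{\partial \mu}\,\nabla^{2} I
  \qquad\Longrightarrow\qquad
  \frac{\partial \sigma^{2}}{\partial \mu} = \frac{2\,I_{\mu}}{\nabla^{2} I}
\]
```

In both cases the unknown (phase curvature or depth-dependent blur) is read off as a ratio of an axial intensity derivative to spatial derivatives, which is the sense in which depth from defocus can be viewed as a special case of phase recovery.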




Prof. Akane Sano

Assistant Professor at Rice University

Digital Health and Wellbeing: Data-Driven and Human-Centered Personalized and Adaptive Assistant

Imagine that rich 24/7 multimodal human data could identify changes in physiology and behavior and provide personalized early warnings to help you, patients, or clinicians make better decisions or behavioral changes that support health and wellbeing. I will introduce a series of studies, algorithms, and systems we have developed for measuring, predicting, and supporting personalized health and wellbeing in clinical populations as well as in people at increased risk of adverse events, including ongoing COVID-19-related projects. I will also discuss challenges, lessons learned, and potential future directions in digital health and wellbeing research.

May 7, 2021, 12 noon PT
[Sign-up]




Pratul Srinivasan

Google Research

Enabling an Image-Based Graphics Pipeline with Neural Radiance Fields

Neural Radiance Fields (NeRFs) have recently emerged as an effective and simple solution for recovering 3D representations of complex objects and scenes from captured images. However, there is still a long way to go before we can use NeRF-like representations of real-world content in graphics pipelines as easily as standard computer graphics representations of artist-designed content, such as textured triangle meshes. I will review NeRF and then discuss some of our recent work toward extending NeRF to enable more of the functionality we expect from the 3D representations used in computer graphics.

June 4, 2021, 12 noon PT
[Sign-up]
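For context on what a NeRF actually computes at render time, here is a minimal sketch of the volume-rendering quadrature from the original NeRF paper (Mildenhall et al., ECCV 2020), with `field` standing in for the trained MLP and with hierarchical sampling and positional encoding omitted:

```python
# Minimal sketch of NeRF's volume-rendering quadrature: densities and
# colors sampled along a ray are alpha-composited into a pixel.
# `field` is a stand-in for the trained MLP mapping
# (position, view direction) -> (density, RGB).
import numpy as np

def render_ray(field, origin, direction, near, far, n_samples=64):
    # Sample points along the ray between the near and far planes.
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction            # (N, 3)
    sigma, rgb = field(points, direction)               # (N,), (N, 3)

    # Opacity of each segment from its density and length.
    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))  # (N,)
    alpha = 1.0 - np.exp(-sigma * delta)

    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))

    weights = trans * alpha                             # (N,)
    return (weights[:, None] * rgb).sum(axis=0)         # composited RGB
```

The same `weights` also yield an expected ray-termination depth, `(weights * t).sum()`, which is how NeRF-style methods render depth maps alongside color.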




Joshua Rapp

Research Scientist at Mitsubishi Electric Research Laboratories

One Photon at a Time: Compensating for Nonideal Electronics in LiDAR Imaging

Forming a digital image, whether with a conventional camera or a computational imaging system, requires converting properties of light into a sequence of bits. The mechanisms that transform optical energy into digital signals can often be idealized or ignored, but problems such as quantization and saturation become noticeable when imaging at the limits of a sensor. One such example arises in single-photon lidar, which aims to form 3D images from individual photon detections. In this talk, we consider two factors that prevent recording each photon arrival time with infinite precision: finite temporal quantization and missed detections during detector dead times. We show that incorporating these nonidealities into our acquisition models significantly mitigates their effect on our ability to form accurate depth images.

July 2, 2021, 12 noon PT
[Sign-up]
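To make the two nonidealities concrete, below is a toy simulation (my own construction for illustration, not the speaker's model or estimator): photon arrival times are quantized into finite time bins, photons arriving within the dead time after a detection are lost, and a naive histogram-mode estimator recovers the time of flight.

```python
# Toy single-photon-lidar simulation of the two nonidealities above:
# (1) arrival times quantized into finite bins, (2) photons arriving
# within the detector dead time after a detection are missed. This is
# an illustrative sketch, not the speaker's estimation method.
import numpy as np

rng = np.random.default_rng(0)
BIN = 50e-12        # 50 ps time bins (quantization)
DEAD = 50e-9        # detector dead time after each detection
REP = 100e-9        # laser repetition period
depth_time = 20e-9  # true photon time of flight
jitter = 100e-12    # pulse width / timing jitter (std)

detections = []
last = -np.inf  # absolute time of the previous detection
for cycle in range(10000):
    start = cycle * REP
    times = []
    # Signal photon detected with 5% probability; background is Poisson.
    if rng.random() < 0.05:
        times.append(depth_time + rng.normal(0.0, jitter))
    times += list(rng.uniform(0.0, REP, rng.poisson(0.02)))
    for t in sorted(times):
        if start + t - last >= DEAD:                    # survives dead time
            detections.append(np.floor(t / BIN) * BIN)  # quantize to a bin
            last = start + t

# Naive depth estimate: mode of the histogram of detection times.
bins = np.arange(0.0, REP + BIN, BIN)
counts, edges = np.histogram(detections, bins=bins)
est = edges[np.argmax(counts)] + BIN / 2
print(f"estimated time of flight: {est*1e9:.2f} ns (true {depth_time*1e9:.2f} ns)")
```

With these gentle settings the histogram mode still lands near the true 20 ns time of flight; the talk addresses the harder regimes where quantization and dead times bias such naive estimates unless they are modeled explicitly in the acquisition model.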





Similar Lecture Series


TUM Visual Computing Group: AI Lecture Series

Seebelowtheskin Webinar Series

SPACE Lecture Series