Warren Grundfest Lectures in Computational Imaging - First Friday of Each Month

The Grundfest Lecture series highlights rising stars in computational imaging. The series is co-organized by UCLA and Caltech, and is a named lecture in honor of the late SPIE Fellow Prof. Warren Grundfest (UCLA). The lecture includes an honorarium for junior speakers: inspired by the hardships junior researchers have faced during COVID-19, Akasha Imaging provides a small monetary award to all speakers in non-permanent academic positions (e.g., PhD students, postdocs). Lectures take place on the first Friday of each month at 12 noon Pacific time. The lead organizer is Pradyumna Chari.


Prof. Achuta Kadambi


Prof. Katie Bouman


Pradyumna Chari




Ewa Nowara

PhD Student at Rice University

Can cameras really measure vital signs? Algorithms and systems for camera-based health monitoring in unconstrained settings.

Imagine that when you looked at someone, you could see their heartbeat. A suite of techniques called imaging photoplethysmography has recently enabled contactless measurement of vital signs with cameras by leveraging the small intensity changes in the skin caused by cardiac activity. Measuring vital signs remotely is advantageous in several applications, including virtual doctor appointments (especially relevant during a pandemic), more comfortable sleep monitoring, and monitoring of prematurely born infants. However, camera-based physiological signals are very weak and easily corrupted by varying illumination, video compression artifacts, and head motion, so most existing methods only work in controlled settings and fail in realistic applications. We developed a denoising deep learning algorithm based on convolutional attention networks that can faithfully recover physiological signals even from heavily corrupted videos. Moreover, our denoising algorithm can recover subtle waveform dynamics that were previously impossible to measure with cameras. We also discuss how to improve the performance of deep learning methods and avoid overfitting when training on very small, non-diverse datasets.


March, 2021
12 noon PT


Dr. Emma Alexander

Postdoc, UC Berkeley

Differential Defocus in Cameras and Microscopes

Image defocus provides a useful depth cue in computer vision, and can also be used to recover phase information in coherent microscopy. In a differential setting, both problems can be addressed by solving simple equations, known as Depth from Differential Defocus and the Transport of Intensity Equation. Relating these governing equations requires putting them on equal footing, so we'll look at the assumptions common to photography and microscopy applications, and go through a gentle introduction to coherence, light fields and Wigner Distribution Functions, and generalized phase. We'll show that depth from defocus can be seen as a special case of phase recovery, with a new interpretation of phase for incoherent settings.
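For reference, the Transport of Intensity Equation mentioned above is commonly written as follows (standard form from the phase-imaging literature; the notation here is illustrative and not necessarily the speaker's):

```latex
% Transport of Intensity Equation: the change of intensity I along the
% optical axis z is balanced by the lateral divergence of the
% phase-weighted intensity flow (phase \phi, wavenumber k).
-k \,\frac{\partial I}{\partial z}
  = \nabla_{\!\perp} \cdot \bigl( I \,\nabla_{\!\perp} \phi \bigr),
\qquad k = \frac{2\pi}{\lambda}
```

The talk relates equations of this form to the depth-from-differential-defocus setting, where intensity derivatives play the analogous role for depth recovery.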


April, 2021
12 noon PT


Prof. Akane Sano

Assistant Professor at Rice University

Digital Health and Wellbeing: Data-Driven and Human-Centered Personalized and Adaptive Assistant

Imagine that rich, 24/7 multimodal human data could identify changes in physiology and behavior, and provide personalized early warnings to help you, patients, or clinicians make better decisions or behavioral changes that support health and wellbeing. I will introduce a series of studies, algorithms, and systems we have developed for measuring, predicting, and supporting personalized health and wellbeing, both for clinical populations and for people at increased risk of adverse events, including ongoing COVID-19 related projects. I will also discuss challenges, lessons learned, and potential future directions in digital health and wellbeing research.


May, 2021
12 noon PT

Similar Lecture Series

TUM Visual Computing Group: AI Lecture Series

Seebelowtheskin Webinar Series

SPACE Lecture Series