My website has moved here! This site will no longer be updated as of June 2018.
Take time to stop and smell the flowers :)
I completed my Ph.D. in Applied Mathematics at UC Davis in Spring 2018.
My advisor was Naoki Saito.
My research interests include audio signal processing, music information retrieval, and applied harmonic analysis.
I received a B.A. in Mathematics from Tufts University in May 2010, and an M.S. in Mathematics with a concentration in Applied Mathematics from the University of Iowa in May 2012.
In the summers of 2015 and 2016, I was a research intern at Smule, working on audio signal processing. In 2015, I worked on acoustic feedback detection and automatic classification of audio recordings based on their vocal and instrumental content. In 2016, I developed user listening tasks to obtain labeled data for algorithm training, built a Python framework for audio feature extraction and machine learning tasks, and applied machine learning to gender and age classification for a large dataset of vocal recordings.
Some of my hobbies include music, hiking, ultimate frisbee, languages, and traveling. I play piano, I sing, and I strum chords on a ukulele.
Can you spot me in this picture?
My research is within the field of time-frequency analysis, the study of
signals which have time-varying oscillatory properties. Examples of such signals
include medical signals (like ECG and EEG readings), mechanical signals
(such as vibration measurements), and my favorite application, audio signals
(including speech, music, and sonar recordings).
Currently, most of my work involves the Synchrosqueezing transform, a tool that enables the separation and reconstruction of the amplitude-phase components of a certain class of signals containing multiple time-varying oscillations at different frequencies.
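To give a feel for the frequency-sharpening step, here is a minimal sketch in Python with NumPy/SciPy. The test signal, window lengths, and threshold are illustrative choices of mine, not taken from my actual work: a phase-vocoder estimate of instantaneous frequency is used to reassign ("squeeze") each STFT coefficient's magnitude along the frequency axis.

```python
import numpy as np
from scipy.signal import stft

# Illustrative test signal: a chirp with instantaneous frequency 50 + 40*t Hz
fs = 1000
t = np.arange(0, 2, 1 / fs)
x = np.cos(2 * np.pi * (50 * t + 20 * t**2))

nperseg, noverlap = 256, 192
hop = (nperseg - noverlap) / fs  # hop size in seconds
f, tau, V = stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)

# Phase-vocoder estimate of instantaneous frequency: the phase advance
# between consecutive frames, minus the advance a pure tone at the bin
# frequency would produce, wrapped to (-pi, pi].
dphi = np.angle(V[:, 1:]) - np.angle(V[:, :-1])
dev = np.angle(np.exp(1j * (dphi - 2 * np.pi * f[:, None] * hop)))
inst_f = f[:, None] + dev / (2 * np.pi * hop)

# "Squeeze": reassign each coefficient's magnitude to the frequency bin
# nearest its estimated instantaneous frequency.
mag = np.abs(V[:, :-1])
T = np.zeros_like(mag)
thresh = 0.05 * mag.max()  # ignore low-energy coefficients
for j in range(mag.shape[1]):
    for i in range(len(f)):
        if mag[i, j] > thresh:
            k = np.argmin(np.abs(f - inst_f[i, j]))
            T[k, j] += mag[i, j]
```

In the squeezed representation `T`, the chirp's energy concentrates near its instantaneous frequency (about 90 Hz at t = 1 s) rather than being smeared across the window's main lobe.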
My first project focuses on enhancing the Synchrosqueezing transform using adaptive time-frequency representations. The Synchrosqueezing transform sharpens information derived from the short-time Fourier transform (STFT) or the continuous wavelet transform (CWT) along the frequency axis, but does nothing to sharpen it along the time axis. Our goal is to replace the STFT and CWT with adaptive time-frequency representations, which allow more precise resolution in time (or in frequency) when desired. In particular, we have explored the use of a "quilted" STFT, which permits the choice of different analysis functions in different regions of the time-frequency plane.
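As a rough illustration of the quilting idea (the helper name and the simple split-at-time-boundaries scheme are mine for illustration; the actual quilted STFT patches the time-frequency plane more carefully), different time regions can be analyzed with different window lengths:

```python
import numpy as np
from scipy.signal import stft

def quilted_stft(x, fs, boundaries, npersegs):
    """Toy quilt: split x at the given times (in seconds) and analyze each
    piece with its own window length.  Returns one patch per region."""
    edges = [0] + [int(b * fs) for b in boundaries] + [len(x)]
    patches = []
    for (a, b), n in zip(zip(edges[:-1], edges[1:]), npersegs):
        f, tseg, V = stft(x[a:b], fs=fs, nperseg=n)
        patches.append({"f": f, "t": tseg + a / fs, "V": V})
    return patches

# Illustrative signal: two close tones (wants a long window for frequency
# resolution), then a click train (wants a short window for time resolution).
fs = 1000
t = np.arange(0, 2, 1 / fs)
x = np.where(t < 1.0,
             np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 110 * t),
             0.0)
x[(t >= 1.0) & (np.arange(len(t)) % 100 == 0)] = 1.0  # clicks every 0.1 s

patches = quilted_stft(x, fs, boundaries=[1.0], npersegs=[512, 64])
```

The first patch has a fine frequency grid that separates the 100 Hz and 110 Hz tones; the second has a coarse frequency grid but frames dense enough in time to localize the clicks.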
My second project focuses on data sonification. Here, the goal is to detect certain data trends by associating sounds with the data. We seek to detect environmental changes hidden in large sets of temperature and fluid motion data. For this project, we have used the Synchrosqueezing transform to extract instantaneous frequency information from the data, and we have mapped this frequency information to pitch curves in an audible range. This process yields a musical piece that is essentially "created by nature," with direct correspondence to oscillations in the data.
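A minimal sketch of the final mapping step (the function name, pitch range, and log-frequency interpolation are my illustrative choices, not necessarily the project's): data-derived instantaneous frequencies are mapped into an audible pitch range and rendered as a sine tone with continuous phase.

```python
import numpy as np

def sonify(inst_freq, data_range, pitch_range=(220.0, 880.0), fs=8000, dur=4.0):
    """Map a curve of instantaneous frequencies extracted from data into an
    audible pitch range and render it as a sine tone (illustrative sketch)."""
    lo, hi = data_range
    plo, phi = pitch_range
    # Normalize the data frequencies to [0, 1], then interpolate on a
    # log-frequency (musical) scale between the pitch-range endpoints.
    u = (np.asarray(inst_freq, dtype=float) - lo) / (hi - lo)
    pitch = plo * (phi / plo) ** np.clip(u, 0.0, 1.0)
    # Resample the pitch curve to audio rate and integrate it to get phase,
    # so the pitch glides smoothly with no clicks at curve breakpoints.
    n = int(fs * dur)
    p = np.interp(np.linspace(0, 1, n), np.linspace(0, 1, len(pitch)), pitch)
    phase = 2 * np.pi * np.cumsum(p) / fs
    return np.sin(phase)
```

For example, `sonify([0.1, 0.3, 0.5], (0.1, 0.5))` renders a tone gliding from 220 Hz up to 880 Hz, tracing the shape of the data's oscillation frequencies.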
Fall 2012: MAT 21D (Vector Analysis)
Winter 2013: MAT 21B (Integral Calculus)
Spring 2013: MAT 21C (Partial Derivatives and Series)
Winter 2015: MAT 207B (Methods of Applied Mathematics)
Spring 2015: MAT 25 (Advanced Calculus)
Fall 2015: MAT 17A (Calculus for Biology and Medicine)