Thursday, December 13, 2018

URSP Student Gabriel Earle Studies Acoustic Monitoring of Manufacturing Infrastructure

My name is Gabriel Earle and I am a Civil Engineering student here at Mason. I became interested in doing research this semester in order to further my academic pursuits and gain some different perspectives on Civil Engineering. Last Spring, I met with Dr. David Lattanzi, a structural engineering professor here at Mason, and he shared with me details about an upcoming project that involved audio data. I was thrilled to hear this, because I had a lot of experience with audio from my hobbies as a music producer and drummer, and I never expected to be able to use this knowledge in an academic setting.

This semester I worked on the project under the guidance of Dr. Lattanzi and one of his graduate students, Jeff Bynum. The primary focus of our work this semester was to evaluate the feasibility of using acoustic (audio) signals for inspecting and analyzing the movements of manufacturing infrastructure. We explored using machine learning to detect what kind of movements the machines were making. Specifically, we used a convolutional neural network (CNN), which attempts to learn the characteristic audio features of each kind of movement the machine could make. CNNs are a technique primarily used in image processing, so in order to apply them to acoustic data, we created spectrograms of our data set. Spectrograms are visual representations of audio that display frequency intensity over time across the audible frequency spectrum. We used the spectrograms' frequency-intensity data much like color-intensity data might be used in a traditional image-processing approach.
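As a rough illustration of this idea (not the project's actual pipeline), the sketch below converts an audio signal into a spectrogram image that a CNN could consume. It uses SciPy's `spectrogram` function on a synthetic signal; the sample rate, window sizes, and test tones here are all stand-in assumptions for illustration.

```python
import numpy as np
from scipy.signal import spectrogram

# Assumed sample rate (Hz) and a one-second synthetic signal standing in
# for a machine recording: two steady tones plus a little noise.
fs = 16000
t = np.linspace(0, 1.0, fs, endpoint=False)
audio = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
audio += 0.1 * np.random.default_rng(0).standard_normal(fs)

# Short-time Fourier analysis: rows are frequency bins, columns are time frames.
freqs, times, Sxx = spectrogram(audio, fs=fs, nperseg=512, noverlap=256)

# Log-scale the power values, which compresses the dynamic range the way
# perceived loudness does; the small offset avoids log(0).
log_Sxx = 10 * np.log10(Sxx + 1e-10)

# Normalize to [0, 1] so each cell acts like the brightness of a grayscale
# pixel, letting the 2-D array be fed to a CNN like an ordinary image.
img = (log_Sxx - log_Sxx.min()) / (log_Sxx.max() - log_Sxx.min())
print(img.shape)  # (frequency bins, time frames)
```

Each labeled machine movement would yield such an image, and the CNN would be trained to classify them, treating frequency-intensity patterns the way it would treat color patterns in photographs.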

So far, our methods are increasingly able to segment the audio data into machine movements that can be verified as correct against video data. We hope to continue refining our data models and to explore the damage-detection side of the research question in the coming weeks and months.