This semester I worked on the project under the guidance of Dr. Lattanzi and one of his graduate students, Jeff Bynum. The primary focus of our work this semester was to evaluate the feasibility of using acoustic (audio) signals to inspect and analyze the movements of manufacturing infrastructure. We explored using machine learning to detect what kinds of movements the machines were making: specifically, a convolutional neural network (CNN) that attempts to learn the characteristic audio features of each kind of movement a machine can make. CNNs are primarily used in image processing, so to apply them to acoustic data we converted our data set into spectrograms. A spectrogram is a visual representation of audio that displays signal intensity across frequency and time over the audible spectrum. We used the spectrograms' frequency intensity values much as color intensity values would be used in a traditional image processing approach.
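To make the audio-to-spectrogram step concrete, here is a minimal sketch of computing a magnitude spectrogram with NumPy via a short-time Fourier transform. The window length, hop size, and the synthetic test tone are illustrative assumptions, not the project's actual settings; the resulting frequency-by-time matrix is the kind of image-like array a CNN would consume.

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Magnitude spectrogram (frequency x time) via a Hann-windowed STFT."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    # rfft returns the one-sided spectrum for real-valued input
    return np.abs(np.fft.rfft(frames, axis=1)).T  # shape: (n_fft//2 + 1, n_frames)

# Illustrative check: a pure 1 kHz tone sampled at 8 kHz should produce
# a spectrogram whose energy concentrates in the bin nearest 1 kHz.
fs = 8000
t = np.arange(fs) / fs                     # one second of samples
tone = np.sin(2 * np.pi * 1000 * t)
spec = spectrogram(tone)
peak_bin = spec.mean(axis=1).argmax()
peak_freq = peak_bin * fs / 256            # Hz per bin = fs / n_fft
print(peak_freq)                           # peak lands at 1000.0 Hz
```

In the project's setting, each movement class would contribute many such spectrogram slices, which are then treated exactly like labeled images during CNN training.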
So far, our methods are increasingly able to segment the audio data into machine movements that can be verified as correct against video data. In the coming weeks and months, we hope to continue refining our models and to pursue the damage detection side of the research question.