Spring 2013 URSP Participant Bradley 'Luc' Hylton:
In the first of the core Game
Design audio composition and editing classes [GAME 250], Dr. Martin made the
point that many of the exact sounds we mix when creating digital copies of
music and sound effects for games would not necessarily be heard by the player,
because we were working on high-end audio monitors while most people would be listening
on a wide variety of far cheaper audio equipment. Later in the semester we
discussed how microphones and speakers from the mid-1960s were often
better than modern devices. This created a dissonance in my head: how is it
that home visuals keep getting better and better, and people are willing
to spend more money for the best graphics, while missing out on so much
at the core of the sound? Around the same time, I read a news article about
highly skilled blind people who were able to ride bicycles through
the use of rapid human echolocation. The idea has stuck in my brain ever since:
with or without visuals, realistic echo could add a lot to a game in
immersion, audio quality, and accessibility for the blind and visually impaired.
This project is my exploration
into making these realistic echoes, live-rendered at play time, a reality,
testing specifically in the Unity engine. As a Game Design major, any creative
game project, especially one as programming-focused and ambitious as this, is a
great boost to my employability and takes me a few steps closer to my real
goals and dreams: being an indie developer with enough skill and means to make
any project I desire. It is also largely a tech demo for a larger game design idea
I'd like to pursue in the future. Honestly, advancement in consumer audio
quality is intrinsically among my personal goals as well, because I feel that
there is so much that people miss out on in music, film, and games through
degraded quality.
This semester, most of my free
time is split between my two major projects: this one and my Senior project. I
still take time for personal fitness, enough sleep, and a social life, while
devoting a cumulative average of 20-25 hours to the two projects, often
spending the entirety of the weekend coding. As for my work on audio
rendering in Unity specifically, progress has been slow; as of this writing, three attempted
methods have failed, and my time on this project is now going into coding
a fourth method, which, because it is built essentially from scratch rather than
relying on calculations the engine intends for other purposes, I feel and hope
will be successful. The possibility of this method is something I
stumbled upon while reading the library reference for the Unity engine: rather
than relying on the default physics, light-based rendering, or modifying the
default 3D audio to allow for realistic echo, I could directly access the mesh
data and parse the models manually, as in the sketch below. I hope to have a working prototype based on
this fourth method within a week of reading acoustics books and the Unity
references, and of coding. The final goal is a usable audio system that game
developers can use to bring realistic echo to their 3D environments.
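
As a rough illustration of the approach (not the finished system), here is a minimal sketch of what manually parsing scene meshes might look like in a Unity C# script. The class name EchoGeometryProbe and the way distances are turned into delay times are hypothetical choices made only for this example; the sketch simply walks every MeshFilter in the scene, takes each triangle's world-space centroid, and converts the listener-to-surface distance into a round-trip echo delay.

```csharp
// Hypothetical sketch: gather per-triangle echo delays from the raw mesh data
// of every MeshFilter in the scene, measured from this object's position.
using System.Collections.Generic;
using UnityEngine;

public class EchoGeometryProbe : MonoBehaviour
{
    const float SpeedOfSound = 343f; // metres per second in air

    // One delay (in seconds) per triangle, for sound reflecting off that surface.
    public List<float> EstimateEchoDelays()
    {
        var delays = new List<float>();
        Vector3 listener = transform.position;

        foreach (MeshFilter filter in FindObjectsOfType<MeshFilter>())
        {
            Mesh mesh = filter.sharedMesh;
            if (mesh == null) continue;

            Vector3[] verts = mesh.vertices;
            int[] tris = mesh.triangles;

            for (int i = 0; i < tris.Length; i += 3)
            {
                // Average the three vertices, then move the centroid into world space.
                Vector3 centroid = (verts[tris[i]] + verts[tris[i + 1]] + verts[tris[i + 2]]) / 3f;
                centroid = filter.transform.TransformPoint(centroid);

                // Round trip: sound travels out to the surface and back to the listener.
                float roundTrip = Vector3.Distance(listener, centroid) * 2f;
                delays.Add(roundTrip / SpeedOfSound);
            }
        }
        return delays;
    }
}
```

A real system would still need occlusion checks and culling before feeding those delays into an audio stage such as Unity's OnAudioFilterRead callback, but the sketch shows the core point: the mesh data is directly readable from script, without going through the physics or lighting systems.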