The vocabulary we use to describe music can be tough enough for a human to grok (really, what does it mean when a guitar riff is “crunchy”?), but a team of tinkerers from Birmingham City University isn’t interested in helping people understand that language. Nope, instead, they’ve cooked up a way to teach your computer what you mean when you throw around words like “bright” or “fuzzy” or, yes, “crunchy” with a program they call the SAFE Project.
New software launched today by researchers at Birmingham City University aims to cut the long periods of training and expensive equipment required to make music, whilst also giving musicians more intuitive control over the sounds they produce. Showcased today at the British Science Festival, the software (the SAFE Project) trains computers to understand the language musicians use when applying effects to their music, using artificial intelligence to let a computer perceive sounds the way a human being does.

The development of the software was motivated by the lack of statistically defined, transferable semantic terms (meaningful words) in music production. Rather than adjusting technical parameters, users process sounds with keywords such as ‘warm’, ‘crunchy’ or ‘dreamy’. Users can also label the sounds they create with those keywords, so that over time whole series of sounds are grouped together, strengthening future searches for specific types of sound.
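To make the idea concrete, here is a minimal sketch (not the SAFE Project’s actual code, and the descriptor-to-parameter mapping is invented for illustration) of how semantic keywords might be tied to effect settings, and how user labeling could accumulate examples that sharpen a descriptor over time:

```python
# Hedged sketch: map semantic descriptors to concrete effect parameters,
# and let users label new settings so each descriptor gathers examples.
from statistics import mean

# Hypothetical parameter space for a simple EQ: (low_gain_db, high_gain_db).
descriptor_examples = {
    "warm":   [(4.0, -3.0), (3.0, -2.0)],
    "bright": [(-2.0, 5.0), (-1.0, 4.0)],
}

def settings_for(descriptor):
    """Average the stored examples for a descriptor into one setting."""
    examples = descriptor_examples[descriptor]
    return tuple(mean(values) for values in zip(*examples))

def label(descriptor, setting):
    """A user labels a newly created setting, strengthening future lookups."""
    descriptor_examples.setdefault(descriptor, []).append(setting)

# A user labels a new "warm" setting; lookups now average three examples.
label("warm", (5.0, -4.0))
print(settings_for("warm"))  # -> (4.0, -3.0)
```

The real system would learn these mappings statistically from audio features rather than from a hand-written table, but the lookup-and-label loop above is the core interaction the article describes.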