Are Guitar Pedals the Key to A.I.?

Guitar pedals are more popular than ever. If you’re not sure what a “guitar pedal” is, picture the small stompboxes arrayed at a guitarist’s feet.

Basically, a guitar pedal is a small box of analog or digital electronics designed to add a “special effect” to the signal of your electric guitar. The effect is applied in real time, as you play, and can be any one (or any combination) of many different effects. Pedals have been around for decades and have been fairly common since the 1960s.

Currently, guitar effects come in every flavor imaginable. From specialty units to multi-effect units to digital-modeling units, the market gets bigger and more varied every year. Most modern effect pedals are predominantly digital (and have been since the 1980s), and they rely on a type of technology known as DSP, or “digital signal processing.” This is, in short, a field of engineering and computer science that focuses on altering a stream of information in real time in order to improve, utilize, or re-purpose it. It usually works by taking an analog signal of some sort (like the output of an electric guitar) and converting it into digital data (a step known as A/D conversion) so the signal can be processed in the digital realm. In the case of guitar, an additional conversion stage is needed to turn the signal back into analog audio that we can listen to (this is known as D/A conversion).
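
To make that pipeline concrete, here is a minimal sketch in Python (using NumPy) of what the “processing” step might look like for a classic overdrive-style effect. The sample rate, drive amount, and test tone are arbitrary choices for illustration; the A/D and D/A stages happen in hardware, so they appear only as comments:

```python
import numpy as np

# A real pedal's A/D converter would fill this buffer from the guitar's
# analog output; here we synthesize a short test tone instead.
SAMPLE_RATE = 44100  # samples per second (CD quality)
t = np.linspace(0.0, 0.01, int(SAMPLE_RATE * 0.01), endpoint=False)
guitar_signal = 0.5 * np.sin(2 * np.pi * 110.0 * t)  # a clean low-A string tone

def soft_clip(samples: np.ndarray, drive: float = 5.0) -> np.ndarray:
    """Boost the signal, then squash the peaks with tanh,
    a common digital approximation of analog overdrive."""
    return np.tanh(drive * samples)

processed = soft_clip(guitar_signal)
# From here, a D/A converter would turn `processed` back into
# analog audio for the amplifier.
```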

This ties into Artificial Intelligence because it’s also a lot like how our natural intelligence functions. Our brains are constantly “signal processing” the sensory input we experience: we subconsciously filter out non-essential information while processing visual data, audio data, and so on.

One of the major issues facing A.I. at the moment is the problem of determining context. A great example is the A.I. used in self-driving cars. According to Lex Fridman in an appearance on “The Joe Rogan Experience,” auto manufacturers ran into trouble when programming a car to stop if it “sees” the side profile of a bicycle in front of it. Obviously, you don’t want a car to sideswipe a bicyclist who, for example, might run a stop sign in front of a self-driving car. The issue was that, during real-world test runs, the software would bring the car to a screeching halt whenever the vehicle in front of it had a bicycle stowed sideways across the tailgate. The ability to discern context, and to react instantly within those contextual boundaries, is essential.
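
To illustrate the difference (and nothing more: this is a grossly simplified, hypothetical sketch, not how any real autonomy stack is built), compare a naive rule that reacts to a detected label alone with one that also weighs a little context:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str             # what the vision system thinks it sees, e.g. "bicycle"
    relative_speed: float  # m/s relative to our car; ~0 means it moves with traffic

def should_brake_naive(d: Detection) -> bool:
    # The failure mode described above: ANY bicycle profile triggers
    # a stop, even one strapped across the tailgate of the car ahead.
    return d.label == "bicycle"

def should_brake_contextual(d: Detection) -> bool:
    # Only brake for a bicycle that is NOT moving with the flow of
    # traffic, i.e. one actually crossing or stopped in our path.
    return d.label == "bicycle" and abs(d.relative_speed) > 1.0

# The bike on the tailgate moves along with the car carrying it:
print(should_brake_naive(Detection("bicycle", 0.0)))       # True  (false alarm)
print(should_brake_contextual(Detection("bicycle", 0.0)))  # False (correctly ignored)
```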

How does DSP address this? Because DSP allows for the nearly instantaneous qualification of incoming data, a system built on it can react to that data in a contextually appropriate way. The A.I. community seems to be acknowledging this as well, although for them it may be a fairly recent realization; meanwhile, the audio world has several decades’ worth of research and development in this department. It makes you wonder how many of our future technological breakthroughs have already been “invented,” but not yet properly engineered.
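
As a small, concrete example of “qualifying” data on the fly, here is how a digital noise gate (a staple of the pedal world) might decide, sample by sample, whether incoming audio is real playing or just background hiss. The threshold and decay values below are arbitrary, chosen only for the sketch:

```python
import numpy as np

def envelope_follower(samples: np.ndarray, decay: float = 0.999) -> np.ndarray:
    """Track loudness sample-by-sample: jump up instantly on peaks,
    fall off slowly afterward (fast attack, slow release)."""
    env = np.empty_like(samples)
    level = 0.0
    for i, s in enumerate(samples):
        level = max(abs(s), level * decay)
        env[i] = level
    return env

def noise_gate(samples: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    # Pass audio only where the envelope says "this is real playing,
    # not hiss or hum": a contextual decision made for every sample.
    env = envelope_follower(samples)
    return np.where(env > threshold, samples, 0.0)
```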

https://www.saama.com/applications-signal-processing-machine-learning/