Once again, I am on the topic of my Bodacious Acoustic Diagnostic Astoundingly Superior Spectromatic (BADASS) display. In my previous blog on this, I focused on the physical implementation of the presentation cabinet and its display and control panels. Today's musings are on the electronics side of things.
Have you ever noticed how each design decision has a ripple effect that influences other decisions downstream? For example, my decision to use Adafruit's NeoPixel Strips -- the type with 30 NeoPixels per meter -- means I have to use my Arduino Mega microcontroller development board to drive the display. This is because I'm also using Adafruit's NeoPixel libraries, which only run on Arduino Unos and Megas.
Of course, I know that there are other possibilities. For example, I could use an FPGA to perform the DSP spectrum analysis and drive the display. However, this would actually end up slowing me down. Why? Well, I'm currently thinking of this project as having the following major elements: the physical display itself; the device driving the NeoPixel strips and generating the display effects; the device sampling the audio stream and performing the DSP spectrum analysis; and the interface connecting these two devices together.
I'm currently working on building the display itself. As soon as that's done, I want to start playing with it. If I were to use an FPGA to drive the display, I'd have to spend some time and effort getting everything up and running. By comparison, using the Arduino Mega means I can dive right in with gusto and abandon -- I can be literally flashing my LEDs in a couple of seconds.
This also explains why I'm not looking at using one of Cypress Semiconductor's PSoC 5LP (Programmable SoC) devices. I know these little rascals contain a 32-bit ARM Cortex-M3 microcontroller along with programmable analog and programmable digital fabric. Also, I recently learned that there is a library available that can be used to manage the same controllers that are used in Adafruit's NeoPixels. Sad to relate, however, I've never actually played with a PSoC, so once again I would have to spend time learning "stuff" -- and this is "quality time" I could be using to play with my LEDs.
The bottom line is that, initially, all I will have will be my Arduino Mega driving the LEDs on my BADASS display as illustrated below. In reality, I could use a single digital output to control all 256 LEDs, but the Arduino Mega has 54 digital input/output pins to play with, so I think it will be easier to use a separate pin to drive each column in my display.
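Just to give you an idea as to what I'm thinking, the following is a rough first-pass sketch using Adafruit's NeoPixel library, assuming 16 columns of 16 pixels apiece; the pin assignments (22 through 37) are hypothetical values I've plucked out of the air purely for illustration:

```cpp
#include <Adafruit_NeoPixel.h>

const uint8_t NUM_COLUMNS       = 16;
const uint8_t PIXELS_PER_COLUMN = 16;

// Hypothetical pin choices -- one Mega digital pin per display column.
const uint8_t columnPins[NUM_COLUMNS] = {
  22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37
};

Adafruit_NeoPixel columns[NUM_COLUMNS];  // One strip object per column

void setup() {
  for (uint8_t col = 0; col < NUM_COLUMNS; col++) {
    columns[col].updateLength(PIXELS_PER_COLUMN);
    columns[col].updateType(NEO_GRB + NEO_KHZ800);
    columns[col].setPin(columnPins[col]);
    columns[col].begin();  // Configure the pin as an output
    columns[col].show();   // Start with all of the pixels off
  }
}

void loop() {
  // The cunning display routines will live here.
}
```

Each column gets its own Adafruit_NeoPixel object, which means I can update and "show" each column independently of the others.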
In the fullness of time, I hope to end up with a lot of cunning display routines that I can use to present my spectrum data, but therein lies the rub...
I want my Arduino Mega to focus all its attention on driving the display and providing me with cool effects like peak-hold and fading pixels and falling pixels and suchlike. This means that I will eventually be looking at using another device to read the analog audio data stream and extract the spectral components using appropriate DSP algorithms (the nitty-gritty details of which will form a discussion for another day).
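Just to give a flavor of what I mean by a peak-hold effect, here's a back-of-the-envelope fragment showing one way it might work; the names and the decay rate are placeholders I've conjured up for illustration only:

```cpp
// Rough fragment of a peak-hold effect: each column remembers the highest
// amplitude it has seen recently, and that held peak slowly decays back
// down over time. DECAY_MS is a made-up value to be tuned by eye.
const uint8_t NUM_COLUMNS = 16;

uint8_t peakLevel[NUM_COLUMNS];        // Currently held peak per column
unsigned long lastDecay[NUM_COLUMNS];  // Time of each peak's last decay step
const unsigned long DECAY_MS = 100;    // Hypothetical decay interval

void updatePeaks(const uint8_t newLevels[NUM_COLUMNS]) {
  unsigned long now = millis();
  for (uint8_t col = 0; col < NUM_COLUMNS; col++) {
    if (newLevels[col] >= peakLevel[col]) {
      peakLevel[col] = newLevels[col];  // New peak -- grab it and hold it
      lastDecay[col] = now;
    } else if (peakLevel[col] > 0 && (now - lastDecay[col]) > DECAY_MS) {
      peakLevel[col]--;                 // Let the held peak fall one pixel
      lastDecay[col] = now;
    }
  }
}
```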
This other device -- the one sampling the audio and performing the DSP -- could be an FPGA, or it could be another microcontroller development platform. In the case of the latter, one platform I'm definitely considering is the chipKIT Max32, which boasts a 32-bit PIC32 microcontroller from Microchip. In addition to having exactly the same physical footprint and connector map as the Arduino Mega (making it easy for me to wrap my brain around), this little beauty runs at 80 MHz and has 512 KB of flash program memory and 128 KB of SRAM data memory.
Now, you might be wondering about the Arduino Due, which is also a 32-bit machine. At 84 MHz, the Due runs a tad faster than the Max32; the Due has 512 KB of flash just like the Max32; and it has 96 KB of SRAM, which is 32 KB (25%) less than the Max32. If I've learned anything in this life, it's that more SRAM is generally a good thing (LOL).
Another point against the Arduino Due is that it runs at 3.3 V, as compared to the Arduino Mega, which runs at 5 V. This could be problematical when it comes to connecting the two together, because the Arduino website warns that providing higher voltages like 5 V to an I/O pin on the Arduino Due could damage the board. The chipKIT Max32 also runs at 3.3 V, but the folks at Microchip assure me that its digital I/O pins will tolerate 5 V signal levels.
But we digress... Whatever device I choose to implement the DSP algorithms to perform the spectral analysis, I'm going to need some way to connect it to my Arduino Mega as illustrated in the following diagram:
Two potential interface schemes that immediately spring to mind are SPI (Serial Peripheral Interface) and I2C (Inter-Integrated Circuit). But should I use one of these, or should I develop my own?
SPI, which dates as far back as 1979, is a single-master protocol, which means that one central (master) device is in charge of initiating all communication with one or more slaves. A clock signal called SCLK is sent from the master to all of the slaves; a common data signal called MOSI (master-out, slave-in) is used to communicate data from the master to all of the slaves; a second common data signal called MISO (master-in, slave-out) is used to communicate data from all of the slaves to the master; and then there are separate SS (slave-select) signals for each of the slaves, thereby allowing the master to specify with which slave it wishes to communicate.
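For what it's worth, the master's side of an SPI exchange is pretty painless on an Arduino using the standard SPI library; the following is a minimal sketch in which the slave-select pin and clock speed are example values only:

```cpp
#include <SPI.h>

const int SLAVE_SELECT_PIN = 53;  // Hardware SS pin on the Arduino Mega

void setup() {
  pinMode(SLAVE_SELECT_PIN, OUTPUT);
  digitalWrite(SLAVE_SELECT_PIN, HIGH);  // Deselect the slave to start with
  SPI.begin();
}

void loop() {
  SPI.beginTransaction(SPISettings(1000000, MSBFIRST, SPI_MODE0));
  digitalWrite(SLAVE_SELECT_PIN, LOW);     // Select the slave
  uint8_t amplitude = SPI.transfer(0x00);  // Clock a dummy byte out, data in
  digitalWrite(SLAVE_SELECT_PIN, HIGH);    // Deselect the slave
  SPI.endTransaction();
  // ... do something with 'amplitude' ...
}
```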
I2C, which was developed in 1982, is a multi-master protocol that requires only two signal lines -- SCL (serial clock) and SDA (serial data) -- both of which are bidirectional. In this case, each device connected to the I2C bus has its own unique address. Whichever device initiates a data transfer on the bus is considered to be the master at that time, while all of the others become slaves. The master signals that a communication is about to begin, then it transmits the address of the slave with which it wishes to communicate. Following an acknowledgement from the slave, the master starts to transmit or receive data. The underlying mechanism is quite "interesting," but -- as users -- we don’t have to worry about it, because everything is handled in hardware and/or software.
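Once again, in Arduino-land the Wire library takes care of all the gory details; here's a minimal master-side sketch, where the slave address is just a placeholder I've invented for illustration:

```cpp
#include <Wire.h>

const uint8_t SLAVE_ADDRESS = 0x42;  // Hypothetical 7-bit slave address

void setup() {
  Wire.begin();  // Join the I2C bus as a master
}

void loop() {
  Wire.requestFrom(SLAVE_ADDRESS, (uint8_t)1);  // Ask the slave for one byte
  if (Wire.available()) {
    uint8_t amplitude = Wire.read();
    // ... do something with 'amplitude' ...
  }
  delay(10);
}
```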
I've been using I2C quite a lot recently in my Inamorata Prognostication Engine project. I use it to communicate among my Arduino and my RTC (real time clock), two motor controller shields, and my RGB LCD shield, all of which came from those little scamps at Adafruit.com.
Having said this, all of my usage thus far has involved an intelligent master (my Arduino) communicating with relatively dumb slaves. Things are a little different in the case of my BADASS display, in which both of the devices are intelligent. On the one hand, we have the Arduino working on its cunning display effects and driving the NeoPixel strips; on the other hand, we have the other device sampling the audio stream and extracting the spectrum data. These two activities proceed asynchronously with respect to each other.
One thing I could do is to make the other device the master. Every time it completes a cycle of taking a sample and performing its DSP magic, it could transmit this data to the Arduino. The problem would be if this communication interrupts the Arduino when it's in the middle of writing to the NeoPixel strips. The timing of these strips is a tad temperamental, so any interruption could result in undesirable artifacts on the display.
Alternatively, I could make the Arduino the master. Every time it completes an update of the display, it could send a request for new data to the other device, but then I run the risk of interrupting that little rascal in the middle of its cogitations and calculations.
Yet another option is to create a custom interface as illustrated below. The downside to this is that it consumes 13 pins, but -- as I mentioned earlier -- in the case of this project I have "pins to burn." The upsides to creating my own protocol are that it's incredibly simple, it's computationally lightweight, and it does exactly what I want it to do.
Let's walk through this step-by-step as illustrated in the waveform diagram below. When my Arduino finishes updating the display from the current cycle, it will place its "yo" output signal in its active (low) state (1), at which point it will sit there waiting for something to happen.
Meanwhile, let's assume that the other device has taken a sample from the audio stream and is currently performing its DSP magic. The result of this will be to store the spectrum data into 16 "buckets" (numbered from 0000 to 1111 in binary) each of which will contain a value representing the current peak amplitude for that "bucket." These amplitude values map onto the columns in the display, each of which contains 16 pixels. Thus, the amplitude values will range from 00000 in binary, meaning no frequency component, to 10000 in binary, representing the maximum amplitude or top-most LED.
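As an aside, mapping one of these amplitude values onto a column of the display should be a piece of cake; here's a rough fragment (building on the per-column strips we discussed earlier, with the color purely illustrative and assuming pixel 0 is at the bottom of each strip):

```cpp
#include <Adafruit_NeoPixel.h>

// Light the bottom 'amplitude' pixels of a column (0 = none, 16 = all lit).
// Assumes pixel 0 is at the bottom of the strip; if a strip were wired
// top-down, the pixel index would need to be flipped.
void drawColumn(Adafruit_NeoPixel &column, uint8_t amplitude) {
  for (uint8_t pixel = 0; pixel < 16; pixel++) {
    if (pixel < amplitude) {
      column.setPixelColor(pixel, Adafruit_NeoPixel::Color(0, 0, 255));  // On
    } else {
      column.setPixelColor(pixel, 0);  // Off
    }
  }
  column.show();
}
```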
As soon as the other device has completed its current calculations, it takes a look at the "yo" signal coming from the Arduino. If this signal is in its inactive (high) state, then the other device will simply take a new sample from the audio stream and start a new round of DSP calculations. Alternatively, if the "yo" signal is in its active state, the other device will respond by placing its "what" signal in its active (low) state (2).
Next, the Arduino sets up the address (0000 to 1111) of the bucket in which it is interested on its 4-bit "that[3:0]" bus (3), after which it places its "gimmie" signal in its active (low) state (4). When the other device sees the "gimmie" signal go active, it responds by taking the data from the specified "bucket" and presenting it on its 5-bit "this[4:0]" bus (5), after which it places its "take" signal in its active (low) state (6).
When the Arduino sees the "take" signal go active, it knows it can read the data value from the "this[4:0]" bus. Once it's read this data, it places its "gimmie" signal in its inactive state (7). When the other device sees the "gimmie" signal go inactive, it returns its "take" signal into its inactive state (8), after which we don't care what's on the "this[4:0]" bus (9).
The Arduino then sets the address of the next "bucket" of interest on its "that[3:0]" bus (10), returns its "gimmie" signal to its active state (11), and off we go again. Once the Arduino has gained access to the spectrum data associated with all 16 "buckets," it places its "yo" signal in its inactive state (12), after which it starts to update the main display. Meanwhile, as soon as the other device sees the "yo" signal go inactive, it responds by setting its "what" signal into its inactive state (13), after which it goes off to take a new sample from the audio stream and start a new round of DSP calculations.
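To pull all of this together, the following is a first-pass sketch of the Arduino's (master's) side of the handshake; every pin number here is a placeholder, and the busy-wait loops are the simplest thing that could possibly work (in practice I might want timeouts so a hung device can't lock up the display):

```cpp
// First-pass sketch of the Arduino's (master's) side of the handshake.
// All pin numbers are placeholders; all control signals are active-low.
const int PIN_YO     = 40;  // Output: "I'm ready for new spectrum data"
const int PIN_WHAT   = 41;  // Input:  "I'm ready to serve it up"
const int PIN_GIMMIE = 42;  // Output: "give me the addressed bucket"
const int PIN_TAKE   = 43;  // Input:  "the data on this[4:0] is valid"
const int PIN_THAT[4] = {44, 45, 46, 47};      // Output: bucket address
const int PIN_THIS[5] = {48, 49, 50, 51, 52};  // Input:  amplitude data

void setup() {
  pinMode(PIN_YO, OUTPUT);     digitalWrite(PIN_YO, HIGH);      // Inactive
  pinMode(PIN_GIMMIE, OUTPUT); digitalWrite(PIN_GIMMIE, HIGH);  // Inactive
  pinMode(PIN_WHAT, INPUT);
  pinMode(PIN_TAKE, INPUT);
  for (int i = 0; i < 4; i++) pinMode(PIN_THAT[i], OUTPUT);
  for (int i = 0; i < 5; i++) pinMode(PIN_THIS[i], INPUT);
}

void readSpectrum(uint8_t buckets[16]) {
  digitalWrite(PIN_YO, LOW);                  // (1) "yo" goes active
  while (digitalRead(PIN_WHAT) == HIGH) {}    // (2) wait for "what"

  for (uint8_t addr = 0; addr < 16; addr++) {
    for (int i = 0; i < 4; i++)               // (3)/(10) set bucket address
      digitalWrite(PIN_THAT[i], (addr >> i) & 1);
    digitalWrite(PIN_GIMMIE, LOW);            // (4)/(11) "gimmie" goes active
    while (digitalRead(PIN_TAKE) == HIGH) {}  // (6) wait for "take"

    uint8_t value = 0;                        // Read the data presented
    for (int i = 0; i < 5; i++)               // on "this[4:0]" at (5)
      value |= digitalRead(PIN_THIS[i]) << i;
    buckets[addr] = value;

    digitalWrite(PIN_GIMMIE, HIGH);           // (7) "gimmie" goes inactive
    while (digitalRead(PIN_TAKE) == LOW) {}   // (8) wait for "take" release
  }

  digitalWrite(PIN_YO, HIGH);  // (12) "yo" goes inactive; the other device
}                              // responds by releasing "what" (13)

void loop() {
  uint8_t spectrum[16];
  readSpectrum(spectrum);
  // ... update the display using the spectrum[] values ...
}
```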
After chatting to a number of other engineers, my impression is that creating one's own custom interface for this sort of thing is a lot more common than one might think. The advantage with regard to something as simple as my interface is that it's easy to understand, it has a small memory footprint, it can run at a really high speed, and I absolutely know what's happening.
So, that's what I'm thinking at the moment. What do you think? Is a custom interface the way to go -- (have you created one yourself?) -- or would you always try to stick with a standard protocol?