Hi @BitBob,
I’m glad you’re interested! I’ve emailed you a copy of your device’s factory test data. You can try plotting that on a log/log graph to see that frequency response effect I was talking about, which is why we include a calibration filter in the calibration file.
After reviewing your posts, I think I can cover all of your questions with the following background information.
Why we don’t plan to bring back down-sampling AA filters in future analog products
To clarify, future analog inputs will still have AA filters, but these will be fixed, and tuned for the bandwidth of the device. Anti-aliasing filters are quite important for many signal recording applications, especially those where frequency content matters. However, something we’ve learned after launching these products is that for oscilloscopes, and many other time-domain signal analysis applications, aliasing can actually be a good thing. Anti-aliasing filters can hide real signal components, filtering out quite a lot of information. In the time domain, a heavily aliased signal shows up as an unmissable pattern, which should quickly inform the user that there is higher-frequency content in the signal they can’t see properly. At least in the scopes we tested, no attempt was made to filter the data before down sampling (when zoomed out in time), and for the use cases we’ve looked at, seeing the aliased signal was greatly preferable to seeing a nicely filtered signal that would suggest no higher-frequency content is present. I don’t know if all scopes lack down-sampling AA filters, or just the ones we tested. (Note: I did these evaluations a very long time ago, and I don’t remember the specifics.)
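Here’s a rough Python sketch of what I mean (the frequencies are made up, and this is obviously not scope firmware): a 9 kHz tone naively decimated to 10 kS/s with no AA filter doesn’t vanish, it folds down to a clearly visible 1 kHz alias.

```python
import numpy as np

# Illustrative only: a 9 kHz sine captured at 100 kS/s, then naively
# decimated by 10x (to 10 kS/s) with no anti-aliasing filter.
fs = 100_000
t = np.arange(0, 0.01, 1 / fs)
x = np.sin(2 * np.pi * 9_000 * t)

decimated = x[::10]  # naive downsample, no AA filter: now 10 kS/s
# 9 kHz is above the new 5 kHz Nyquist limit, so it folds down to
# |9 kHz - 10 kHz| = 1 kHz: a visible (wrong-frequency) waveform
# instead of silence.
spectrum = np.abs(np.fft.rfft(decimated))
alias_bin = np.argmax(spectrum[1:]) + 1
freqs = np.fft.rfftfreq(len(decimated), d=10 / fs)
print(freqs[alias_bin])  # ~1000 Hz
```

A proper AA filter would have removed the 9 kHz tone entirely before decimating, and you’d see a flat line, with no hint that anything was there.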
XML vs. JSON calibration files
The XML file is the original, official calibration file format, which we still generate at the factory for new units. This is the native format used by the Logic 1 software.
For Logic 2, the software internally converts that XML file to a more structured JSON format.
Every time the software detects that a device is connected, it checks whether a newer calibration file is available, even if it has already downloaded one. Our calibration server provides the last-updated date of each calibration file, and the software checks that against the local file. If the local file is older than the date returned by the server, the software downloads the new file, overwriting the original. In practice, the only time we updated calibration files was shortly after we started shipping, when we needed to release DC calibration without AC support, as mentioned in that blog post you found.
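The check itself is nothing fancy; a minimal sketch (function and variable names made up, not our actual code) looks something like this:

```python
# Sketch of the update check described above: keep the local copy
# unless the server reports a later last-updated timestamp.
from datetime import datetime, timezone

def needs_download(local_updated, server_updated):
    """True when no local file exists yet, or the server copy is newer."""
    if local_updated is None:  # never downloaded before
        return True
    return server_updated > local_updated

local = datetime(2014, 3, 1, tzinfo=timezone.utc)
server = datetime(2014, 6, 1, tzinfo=timezone.utc)
print(needs_download(local, server))   # True: server copy is newer
print(needs_download(server, server))  # False: already up to date
```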
The filename format you mentioned is as follows:
${deviceId}-${lastUpdated}-${PARSING_VERSION}.cal
The -1 at the end of the filename is the PARSING_VERSION; it’s there in case we need to change the JSON format, which would require all existing JSON files to be ignored and regenerated.
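As a quick illustration of that naming scheme (the device ID value below is made up, and these helpers aren’t from our codebase):

```python
# Hypothetical helpers for the ${deviceId}-${lastUpdated}-${PARSING_VERSION}.cal
# scheme: build a cached filename and split it back apart.
PARSING_VERSION = 1

def cal_filename(device_id, last_updated):
    return f"{device_id}-{last_updated}-{PARSING_VERSION}.cal"

def parse_cal_filename(name):
    stem = name[: -len(".cal")]
    # rsplit keeps any dashes inside the device id intact
    device_id, last_updated, version = stem.rsplit("-", 2)
    return device_id, last_updated, int(version)

name = cal_filename("0xFFFFFFFF", "20140601")  # made-up device id/date
print(name)                   # 0xFFFFFFFF-20140601-1.cal
print(parse_cal_filename(name))
```

Bumping PARSING_VERSION changes every filename, so all previously cached JSON files simply stop matching and get regenerated.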
If you modify the JSON calibration file and restart the software, your modifications should be loaded into the application.
If you delete the JSON file, a new XML file will be downloaded (replacing any existing file), then converted to JSON and saved.
Your analysis of the filter is correct, and yes, changing b0 to 1.0 and the rest to 0.0 should result in an identity filter.
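You can convince yourself of this with a toy direct-form filter (a minimal sketch, not our optimized implementation): with b = [1, 0, 0] and a = [1, 0, 0], the output is the input, untouched.

```python
import numpy as np

# Minimal direct-form IIR filter sketch:
# y[n] = (1/a0) * (sum_k b[k]*x[n-k] - sum_{k>=1} a[k]*y[n-k])
def iir_filter(b, a, x):
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc / a[0]
    return y

x = np.array([0.0, 1.0, 0.5, -0.25, 0.0])
identity = iir_filter([1.0, 0.0, 0.0], [1.0, 0.0, 0.0], x)
print(np.allclose(identity, x))  # True: pass-through, no calibration applied
```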
There is one filter definition for every sample rate of every channel. Note that at really low sample rates, 625 kS/s and lower, the hardware actually only decimates to ~3 MS/s. We run the calibration filter at that rate, then perform additional decimation in software down to the final rate.
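The factor below is illustrative (I’m assuming a clean integer ratio for the sketch; the real numbers may differ), but the two-stage flow looks like this:

```python
import numpy as np

# Two-stage rate reduction sketch: hardware decimates to roughly 3 MS/s,
# the calibration filter runs at that rate, then software decimates the
# rest of the way down. Rates/ratios here are made up for illustration.
hw_rate = 3_125_000          # assumed hardware output rate
final_rate = 625_000         # requested capture rate
sw_factor = hw_rate // final_rate  # remaining software decimation

samples_at_hw_rate = np.arange(1_000_000, dtype=float)
# ... calibration filter would run here, at hw_rate ...
samples_at_final_rate = samples_at_hw_rate[::sw_factor]
print(sw_factor, len(samples_at_final_rate))
```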
Where the filter is run
In your Xilinx-based unit, there are two filters in the signal chain.
First is the CIC filter in the FPGA. This combines filtering and down sampling into one operation. Like you mentioned, we down sample in the FPGA to reduce USB bandwidth needs. This allows you to record more signals at once, record digital signals faster, and it generally makes the USB stream more reliable for long-duration captures. CIC filters do not have any taps; instead, we just specify the down-sample ratio. That data is streamed to the PC, where we use the PC’s CPU to run the calibration filter on the data stream. This implementation is heavily optimized, and will automatically use SSE2, AVX, or AVX2 instructions depending on what CPU you have.
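If you’re curious about the tap-free structure, here’s a single-stage CIC decimator sketch (the FPGA version almost certainly has more stages; this just shows the integrator/downsample/comb idea):

```python
import numpy as np

# Single-stage CIC decimator: integrator (running sum), keep every
# r-th sample, then comb (first difference). A first-order CIC by r is
# equivalent to summing blocks of r consecutive samples -- no taps,
# just the decimation ratio r.
def cic_decimate(x, r):
    integ = np.cumsum(x)               # integrator
    down = integ[r - 1::r]             # decimate by r
    comb = np.diff(down, prepend=0.0)  # comb
    return comb

x = np.ones(16)
print(cic_decimate(x, 4))  # [4. 4. 4. 4.]: each output is a 4-sample sum
```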
Group Delay
It’s been a really long time since I’ve had to work on this, but one tricky part of building a mixed-signal device (analog and digital recording) is getting the digital and analog signals properly synchronized. There are considerably different delays between the voltage at the input pin and when the data is captured by the FPGA, and further delays are inserted by the various filters.
In short, we have a basic fixed model for the delay from the fixed components. In the analog path, relative to the digital path, the main contributing factors are the AFE group delay (including the analog filter), the ADC pipeline latency, and the latency between the ADC and the FPGA. The CIC filter adds its own delay, which is fixed, and then the calibration filter adds more delay, which varies from filter to filter, which is why it’s stored in the calibration file.
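The bookkeeping is just a sum of terms; every number below is made up purely to show the shape of the model:

```python
# Toy delay-budget sketch (all values hypothetical): the fixed
# analog-path contributions are summed once, and the per-filter group
# delay from the calibration file is added on top.
afe_group_delay_ns = 25.0   # AFE + analog filter group delay (made up)
adc_pipeline_ns = 140.0     # ADC pipeline latency (made up)
adc_to_fpga_ns = 10.0       # ADC-to-FPGA link latency (made up)
cic_delay_ns = 80.0         # fixed CIC filter delay (made up)

def analog_delay_ns(calibration_filter_delay_ns):
    fixed = (afe_group_delay_ns + adc_pipeline_ns
             + adc_to_fpga_ns + cic_delay_ns)
    return fixed + calibration_filter_delay_ns

print(analog_delay_ns(120.0))  # 375.0 ns relative to the digital path
```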
Also stored in the calibration file is the filter’s impulse response length. This is important because it is effectively how long the filter takes to “warm up”. If you simply start up an IIR filter using zeros for the input and output history you don’t have, you will often get garbage data up until the impulse response length. Our software discards these “warm-up” output samples, to make sure that the first data you see is correct.
Note that the group delay and the impulse response length partially cancel each other out, as the way to compensate for delays in the signal path is to discard samples from the start.
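As a rough illustration of that bookkeeping (field names and numbers made up, and the real alignment logic is certainly more involved):

```python
import numpy as np

# Sketch: drop the warm-up outputs, then note how much group-delay
# compensation is still outstanding. Since dropping warm-up samples
# already advances the stream, only the remainder of the group delay
# (possibly negative, i.e. overshoot) is left to compensate.
def trim_filtered_output(y, impulse_response_len, group_delay_samples):
    valid = y[impulse_response_len:]  # discard warm-up garbage
    remaining_shift = group_delay_samples - impulse_response_len
    return valid, remaining_shift

y = np.arange(100.0)
valid, shift = trim_filtered_output(y, impulse_response_len=32,
                                    group_delay_samples=16)
print(len(valid), shift)  # 68 -16
```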
Interpolated lines in the UI
When you zoom into the signal to the point where the data points are more than one horizontal pixel apart, we draw smooth lines connecting the points. We use cubic spline interpolation for this. Fun fact: we tried several more traditional up-sampling filters, and none were as good at actually connecting the points.
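That last point is the key property: a cubic spline passes exactly through every sample, while an up-sampling filter only approximates them. A self-contained natural-cubic-spline sketch (not the shipping renderer) demonstrates it:

```python
import numpy as np

# Natural cubic spline: solve a tridiagonal system for the second
# derivatives at the knots, then evaluate piecewise cubics. The curve
# is smooth between samples and hits every sample exactly.
def natural_cubic_spline(xs, ys, xq):
    n = len(xs)
    h = np.diff(xs)
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0  # natural ends: second derivative = 0
    for i in range(1, n - 1):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        rhs[i] = 6 * ((ys[i + 1] - ys[i]) / h[i]
                      - (ys[i] - ys[i - 1]) / h[i - 1])
    m = np.linalg.solve(A, rhs)  # second derivatives at the knots
    out = np.empty(len(xq))
    for j, x in enumerate(xq):
        i = max(min(np.searchsorted(xs, x) - 1, n - 2), 0)
        t = x - xs[i]
        out[j] = (ys[i]
                  + t * ((ys[i + 1] - ys[i]) / h[i]
                         - h[i] * (2 * m[i] + m[i + 1]) / 6)
                  + t * t * m[i] / 2
                  + t ** 3 * (m[i + 1] - m[i]) / (6 * h[i]))
    return out

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([0.0, 1.0, 0.0, 1.0])
# evaluating at the knots reproduces the samples exactly
print(np.allclose(natural_cubic_spline(xs, ys, xs), ys))  # True
```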
Lastly, to see why the calibration filter is really needed, I recommend recording a 1 kHz square wave. Zoom out to look at the whole wave, with and without filtering. I recommend looking at this at 50 MS/s, but lower rates should still capture the effect. I can see the effect in some of the screenshots you’ve shared, but it’s much more pronounced at lower frequencies.