I’m seeing an issue where the first few analog samples of a well-known voltage source are “ramping up” before the samples become reliable. The number of unreliable samples depends on the sampling rate. I see this at every analog sampling rate except 50 MS/s.
There’s also a general pattern that the first analog samples arrive before the first digital sample, and all samples after the first digital sample are “good”.
Using this for automated testing means we have to filter out the noisy data manually, which is frustrating - especially since we don’t know the root cause and are merely guessing at the right way to do it. We see this in the data exported through the automation API, straight in the GUI, and in CSV dumps. (A sketch of the filtering we currently do is below.)
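For reference, this is roughly how we trim the leading samples today, based on the observation that everything after the first digital sample is good. It’s a minimal sketch against our own CSV exports; the file names and the “Time [s]” column name are assumptions about our export layout, not a documented format:

```python
# Sketch: drop analog samples recorded before the first digital sample,
# since in our captures everything after that point looks reliable.
# File names and the "Time [s]" column are assumptions about our exports.
import pandas as pd

analog = pd.read_csv("analog.csv")    # assumed: "Time [s]" column + channel columns
digital = pd.read_csv("digital.csv")  # assumed: same timestamp column name

first_digital_t = digital["Time [s]"].iloc[0]

# Keep only analog rows at or after the first digital sample.
trimmed = analog[analog["Time [s]"] >= first_digital_t]
trimmed.to_csv("analog_trimmed.csv", index=False)
```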
… looks like you don’t have a calibration file installed for your specific Logic hardware.
You can manually download and install the calibration file by following:
If you want a deeper dive into the analog filtering & calibration settings, see this post:
(You can optionally ‘hack’ the calibration file to increase the analog bandwidth or reduce the filtering applied.)
However, without any calibration file, you can get strange behavior like what you described above. In particular, you likely don’t have an mGroupDelay setting that aligns the analog/digital timing for your specific device and filter settings, nor an mImpulseLength setting that should ‘trim’ the leading analog samples and avoid the behavior you’ve described.
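To illustrate what those two settings do conceptually (the actual calibration file format and processing pipeline are Saleae-internal; only the field names above come from the real settings, everything else here is assumed):

```python
# Conceptual sketch only: shows what mGroupDelay and mImpulseLength would
# do when applied to raw analog samples. This is not Saleae's actual code.

def apply_calibration_timing(analog_samples, m_group_delay_s, m_impulse_length):
    """Align analog timing to digital and trim filter start-up samples.

    analog_samples:   list of (timestamp_s, value) pairs
    m_group_delay_s:  filter group delay to subtract from analog timestamps,
                      so analog events line up with the digital channels
    m_impulse_length: number of leading samples still inside the filter's
                      impulse response, which should be discarded
    """
    # Shift analog timestamps back by the filter's group delay.
    aligned = [(t - m_group_delay_s, v) for t, v in analog_samples]
    # Drop the leading samples taken while the filter is still settling,
    # i.e. the "ramping up" region described above.
    return aligned[m_impulse_length:]
```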
Thank you - we’ll look into it. We haven’t used calibration so far because the units are deliberately not connected to the internet, and we don’t need absolute accuracy, only relative accuracy.