The expected byte is 0x41, but as you can see, the falling edge at the end (the MSB) is interpreted as a rising edge (note the up-arrow on the falling edge), so the byte is incorrectly decoded as 0xC1.
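For intuition, here is a minimal sketch of the failure mode (not Saleae code; the sample arrays and the one-sample glitch are made up for illustration). A short glitch riding on a clock edge produces an extra rising edge, so the analyzer gets one more sampling point than the byte actually has:

```python
def rising_edges(samples):
    """Return the indices where the signal transitions 0 -> 1."""
    return [i for i in range(1, len(samples))
            if samples[i - 1] == 0 and samples[i] == 1]

# Clean clock: 8 rising edges, one sampling point per bit of the byte.
clean_clk = [0, 1] * 8 + [0]

# Same clock, but a one-sample glitch rides on the final falling edge,
# so the edge detector sees a ninth, spurious rising edge.
glitchy_clk = clean_clk[:-1] + [0, 1, 0]

print(len(rising_edges(clean_clk)))    # 8 -> byte decodes as expected
print(len(rising_edges(glitchy_clk)))  # 9 -> the extra sample corrupts the byte
```

Which wrong value you end up with depends on where the spurious edge lands in the frame; in the capture above it flips the MSB, so 0x41 turns into 0xC1.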
Turn on the glitch filter just for the clock channel.
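The filter itself is conceptually just a minimum pulse width: any run of samples shorter than a chosen width never produces an edge. Here is a rough sketch of the idea (not the actual Logic implementation; the signal format and threshold are assumptions):

```python
from itertools import groupby

def glitch_filter(samples, min_width):
    """Drop pulses narrower than min_width samples.

    Runs of identical samples shorter than min_width are flattened to
    the last accepted level, so brief excursions never create edges.
    """
    out = []
    level = samples[0]
    for value, group in groupby(samples):
        run = len(list(group))
        if value != level and run >= min_width:
            level = value             # new level persisted long enough: accept it
        out.extend([level] * run)     # short runs keep the old level
    return out

noisy = [0, 0, 1, 1, 0, 1, 0, 0]      # a falling edge with a one-sample bounce
print(glitch_filter(noisy, min_width=2))   # [0, 0, 1, 1, 1, 1, 0, 0]
```

Applied to the clock channel only, this removes the spurious edge without touching the data line, where a short pulse might be legitimate.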
@saleae: there is a fairly good argument here for turning the glitch filter on by default on the clock channel at high sample rates for synchronous protocols such as SPI and I2C. Could that be offered as a default option in the analyzer's configuration dialog, so the user at least has a chance of being aware of it?
I really like the idea of moving the glitch filter closer to the analyzers. It actually sidesteps some pretty challenging technical hurdles too.
Running all of the data through the glitch filter while recording (which is what we do now) is slow, and the result can't be edited afterwards. Recording the raw data and then applying the glitch filter on the fly for every use (rendering, triggering, export, analyzers, measurements, etc.) would be a massive technical challenge and would hurt the performance of all of those features.
However, glitch filtering on the fly only for the channels an analyzer uses keeps the performance impact small, while still letting us record the original, unmodified data so that the glitch filter settings can be edited afterwards.
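As a rough illustration of that middle ground (purely a sketch, not Logic 2 internals; the transition-stream format, names, and threshold are assumptions), the raw capture stays untouched and the filter is applied lazily only to the stream handed to an analyzer:

```python
from typing import Iterable, Iterator, Tuple

Transition = Tuple[float, int]           # (timestamp in seconds, new logic level)

def filtered_transitions(raw: Iterable[Transition],
                         min_pulse: float) -> Iterator[Transition]:
    """Lazily drop pulses shorter than min_pulse from a transition stream.

    The recorded capture is never modified; only the view consumed by
    the analyzer is filtered, so the filter settings remain editable.
    """
    pending = None                       # most recent transition, not yet confirmed
    for t, level in raw:
        if pending is None:
            pending = (t, level)
            continue
        if t - pending[0] >= min_pulse:
            yield pending                # previous level persisted long enough: keep it
            pending = (t, level)
        else:
            pending = None               # glitch: drop the pulse and its return edge
    if pending is not None:
        yield pending

raw = [(0.0, 1), (1.0e-3, 0), (1.0e-3 + 4e-9, 1), (2.0e-3, 0)]   # 4 ns glitch at 1 ms
print(list(filtered_transitions(raw, min_pulse=1e-6)))
# [(0.0, 1), (0.002, 0)]
```

Because the filtering happens per consumer rather than at capture time, re-running the analyzer with a different pulse-width threshold doesn't require recapturing anything.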
We could move the glitch filter settings into the analyzer settings, add sane defaults, or at a minimum explain why you might want to use it, and persist your settings for the next time you use that analyzer type (just like we do with the other analyzers).