Triggering and scope view - share your ideas with us!

I believe the reason for that is that it takes MUCH longer to “train” the analyzer on sophisticated signals. These have good use on manufacturing lines, but nowhere near as much on the lab bench.

1 Like

Analyzers are incredibly domain-specific. What’s perfectly OK on a serial port would be a critical failure in other logic. Generally they’re packaged (as analyzer configs) to meet one domain’s needs.

Of course, it means that we need to build tools that you can quickly configure for your use case.

Well, I’d prioritize it after a fast-refresh triggering display. IMO that’s part of the 20%, and it would only get used if it had a bundle of useful configs I could use without “training”.

The reality is, especially on mixed-signal stuff, that you “notice” things that don’t look right the first time you see them. The concern with an analyzer is that it would either (a) be silent when you know there’s a problem, or (b) be screaming FIRE when the signal is fine.

1 Like

I agree with @Pat: “Analyzers are incredibly domain-specific”. Humans are pretty good at spotting patterns and deviations from patterns so letting the mark 1 eyeball do the initial heavy lifting makes good sense.

Providing the tools to allow users to then build their own analysis systems when and as they are needed is where I think Saleae should focus. A great example of that is the HLA API. That has been fantastically useful to me with my current project. But the HLA I’ve developed is too specific to our hardware to be directly useful to anyone else.
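For anyone who hasn’t tried the HLA API yet, the entry point is tiny. A minimal sketch, assuming the standard Logic 2 extension layout (the grouping rule and the setting are placeholders, nothing from my actual HLA):

```python
# Minimal HLA skeleton for Logic 2. The "group N frames into a packet" rule
# and the setting below are placeholders for illustration only.
from saleae.analyzers import HighLevelAnalyzer, AnalyzerFrame, NumberSetting


class PacketGrouper(HighLevelAnalyzer):
    # Hypothetical setting, shown in the Logic 2 analyzer settings dialog
    frames_per_packet = NumberSetting(min_value=1, max_value=1024)

    result_types = {
        'packet': {'format': 'packet, {{data.count}} frames'}
    }

    def __init__(self):
        self._pending = []

    def decode(self, frame: AnalyzerFrame):
        # Collect frames from the low-level analyzer and emit one combined
        # frame once enough have arrived.
        self._pending.append(frame)
        if len(self._pending) >= int(self.frames_per_packet):
            result = AnalyzerFrame(
                'packet',
                self._pending[0].start_time,
                self._pending[-1].end_time,
                {'count': len(self._pending)},
            )
            self._pending = []
            return result
```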

Some of the suggestions in the pipeline such as providing analog data to HLAs and generating analog trace data from HLAs are likely to be much more useful to users than trying to provide domain specific answers.

2 Likes

Replicating the basic features of an oscilloscope would be tremendous. My use case for oscilloscopes is mostly leaving them running while I swap parts in and out on a breadboard. This sort of interactive use is infeasible without live display. For this, I don’t need any measurements, only the visual display.

Since you asked about aspects where Saleae could go beyond traditional scopes, I’d first like to say that 8-channel scopes cost as much as a car, and 16-channel scopes might not even exist (if they do, they’d cost as much as a house).

And then there is memory depth. This is where Saleae really pulls ahead of scopes. Two of the items Pat mentioned, 3c (decay) and 3e (step through history), could be combined. That is, you have a slider that chooses the n-th capture to be brightest, and the captures before and after it are progressively dimmer. I think this would be visually striking and could end up being useful. For instance, you could easily see whether the captures before and after a glitch were abnormal, and you could see how a signal has drifted over time. These captures should be timestamped.
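To make the slider idea concrete, here is a toy mock-up with synthetic data (nothing Saleae-specific, just matplotlib; the decay factor and data are arbitrary):

```python
# Toy mock-up of "decay + step through history": the selected capture is
# drawn at full brightness, neighbouring captures fade out with distance.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
t = np.linspace(0, 1e-3, 500)                      # 1 ms of time
captures = [np.sin(2 * np.pi * 5e3 * t) + 0.05 * rng.standard_normal(t.size)
            for _ in range(10)]                    # 10 stored captures

selected = 6                                       # "slider" position
for i, y in enumerate(captures):
    # brightness decays with distance from the selected capture
    alpha = max(0.08, 1.0 - 0.25 * abs(i - selected))
    plt.plot(t * 1e3, y, color='tab:blue', alpha=alpha)

plt.xlabel('time (ms)')
plt.ylabel('voltage (V)')
plt.title(f'capture {selected} highlighted, neighbours dimmed')
plt.show()
```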

The XY display on my entry-level Rigol is terrible. I think there are two reasons: low memory depth and a gap between captures. I could see making a very nice XY plot on the computer. In fact, it could even be an XYZ plot you could rotate in 3D. I don’t think anybody else does this. I can’t name a use case, but I’m sure it’d get used. XY plots are for things like voltage vs. current, or voltage into a filter vs. voltage out. A third channel here is useful for the same reasons it’s useful elsewhere. Or, if not 3D, it could plot XY, YZ, and XZ. Of course, this is gravy. The biggest benefit will be from the traditional basic scope features.
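As a rough sketch of what I mean, assuming three analog channels exported to CSV (the file name and column names are made up):

```python
# Rotatable XYZ plot from an exported analog capture.
# 'analog_export.csv' and the column names are assumptions.
import numpy as np
import matplotlib.pyplot as plt

data = np.genfromtxt('analog_export.csv', delimiter=',', names=True)
x, y, z = data['Channel_0'], data['Channel_1'], data['Channel_2']

fig = plt.figure()
ax = fig.add_subplot(projection='3d')   # drag to rotate the view
ax.plot(x, y, z, linewidth=0.5)
ax.set_xlabel('ch0 (V)')
ax.set_ylabel('ch1 (V)')
ax.set_zlabel('ch2 (V)')
plt.show()
```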

2 Likes

Yes! I think you’ve hit the nail on the head - to me, in my normal usage, a scope is for exploring/visualization. Then, if I need more detailed analysis, I turn to the Logic. That’s why the scope display needs to be responsive and quick, without needing to set up too many things first. I just want a ‘real time’ view of what’s going on wherever I am probing.

1 Like

XYZ plots would be insanity. I think I could come up with some uses:

Not an incredibly practical use for XY plots, but check out Jerobeam’s YouTube channel:

I kept my ancient raster oscilloscope around just for this music.

1 Like

Providing the tools to allow users to then build their own analysis systems when and as they are needed is where I think Saleae should focus.

I completely agree. That’s why we built the marketplace and that’s why we’re having this discussion :slight_smile:

The more we understand your workflows (and your bugs), the better we can build that infrastructure. We would also like to organize a meetup soon to share our plans and hear your feedback directly.

One of the killer features of Saleae is that I can leave it running for hours and capture all the data. However, sometimes when you’re looking for something very specific, that can be cumbersome. I would love to see some kind of “fast segmentation”: based on a trigger, capture x seconds after it, and keep capturing into the same window for x seconds every time that trigger happens. Of course, this is most useful when the trigger can have advanced features or be coupled to decoders.
For example: trigger every time a signal remains high or low for x time, or when the UART sends a particular string, etc.
I agree that Saleae shouldn’t strive to be a scope, but it can certainly benefit from scope features.
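As a stopgap, something like that segmentation can be approximated offline. A sketch, assuming the trigger channel was exported to CSV as timestamped transitions (the file name, column names, and window length are assumptions):

```python
# Offline approximation of trigger-based segmentation: find rising edges on
# the trigger channel and keep a window of WINDOW_S seconds after each one.
import csv

WINDOW_S = 0.5                    # seconds to keep after each trigger
transitions = []                  # (time, value) pairs for the trigger channel

with open('digital_export.csv', newline='') as f:
    for row in csv.DictReader(f):
        transitions.append((float(row['Time [s]']), int(row['Channel 0'])))

segments = []
for (t0, v0), (t1, v1) in zip(transitions, transitions[1:]):
    if v0 == 0 and v1 == 1:       # rising edge = trigger
        segments.append((t1, t1 + WINDOW_S))

print(f'{len(segments)} trigger windows found')
for start, end in segments[:10]:
    print(f'  {start:.6f} s .. {end:.6f} s')
```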

1 Like

First of all, I really enjoy your product, and I think the way you are going with the HLA API and openness with respect to the API is perfect. I have not written an HLA yet, but I might start in the near future.

When it comes to features that I would find extremely useful (but have not found), it would be the ability to load/import/compare/merge all (or specific) channels from multiple captures. Right now, I am trying to make sense of SPI captures of multiple devices (at multiple “strategic” times), where I am not sure whether the devices are working properly or not. It would be awesome to be able to correlate, say, the MOSI and MISO channels of multiple captures. I am working around this by running multiple instances of Logic 2, but actually finding where the differences are is a bit clumsy (scrolling has to be performed in both instances, …).

It might already be enough to overlay one capture with another, “reference” capture (think reference waveforms from digital storage 'scopes). I must admit that I haven’t fully thought this through, but in the digital domain there may be even more opportunities (diff calculations, …), although I have to admit that I wouldn’t know what to do with it (and we could always export the data and feed it to external diff tools).

1 Like

Snap. Comparing captures is something I’ve been thinking about too, and I’m running two instances of Logic 2 right now for exactly that purpose. It seems to me to be a really hard problem to solve - even harder than a good text diff tool, but with many of the same problems.

The first major problem is being able to time align different parts of the captures because most real signals will have subtle or significant variations in the timing of interesting events. Maybe that means being able to slice one of the traces into chunks so different chunks can be aligned with different portions of the reference trace?

Comparing SPI (or whatever) analyzer text output for the two traces could benefit from standard text diff processing and depends less on time. That may be a quick way in a lot of cases to identify areas of interest. Being able to highlight trace areas that map to text differences and align them in time could be quite powerful.
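A rough sketch of that text-diff idea, assuming both captures’ SPI analyzer results were exported to CSV (the file names and the 'data' column are assumptions):

```python
# Diff the decoded analyzer output of two captures. Only the data column is
# compared so that timing jitter doesn't create spurious differences.
import csv
import difflib

def analyzer_lines(path):
    with open(path, newline='') as f:
        return [row['data'] for row in csv.DictReader(f)]

ref = analyzer_lines('reference_capture.csv')
dut = analyzer_lines('current_capture.csv')

# Each hunk marks a region worth inspecting in the waveform view.
for line in difflib.unified_diff(ref, dut, 'reference', 'current', lineterm=''):
    print(line)
```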

1 Like

Now that I looked a bit deeper through some of the captures, I noticed a few “spikes” where I’m not sure if it’s something that the DUT does or if it’s a glitch-response of the Logic 8 Pro.

For this, I propose two things:

  1. Find/mark outlier events (e.g., the clock, I think, would be mostly stable with respect to frequency/duty cycle within one capture). The cases where the clock signal “glitches” might be interesting to look at, especially if they’d change the way the protocol analyzer treats the data. From a certain zoom level, the little triangle markers can be used to find mismatches/anomalies, but for a longer-running capture (a few seconds is already enough), it’s tedious to scan through the entire thing just to spot those. It would be great to pin the occasions where the frequency deviates by more than a certain multiple of its standard deviation, or something like that (a rough offline sketch follows this list). If that’s possible with an HLA, I might whip something up for it. I don’t think any of my 'scopes could do this.
  2. Display the analog channels as dots instead of “just” an interpolated wave. I reckon I could have used that to tell whether the glitch “was real” (i.e., caused by the DUT) or whether it stems from the way the Logic 8 Pro reacts. Knowing exactly when which values were sampled would help me, or at least that’s my hope :slightly_smiling_face: An option to switch the view on the analog channels would be great.
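Here is a rough offline version of point 1, assuming only the clock channel was exported to CSV (the file name, column names, and the 5-sigma threshold are all assumptions):

```python
# Flag clock periods that are far from the mean period.
import csv
import statistics

SIGMAS = 5.0
rising = []

with open('digital_export.csv', newline='') as f:
    for row in csv.DictReader(f):
        t, clk = float(row['Time [s]']), int(row['Channel 0'])
        if clk == 1:              # with only the clock exported, 1 = rising edge
            rising.append(t)

periods = [b - a for a, b in zip(rising, rising[1:])]
mean = statistics.fmean(periods)
sigma = statistics.stdev(periods)

for t, p in zip(rising[1:], periods):
    if abs(p - mean) > SIGMAS * sigma:
        print(f'outlier at {t:.9f} s: period {p * 1e9:.1f} ns '
              f'(mean {mean * 1e9:.1f} ns)')
```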
1 Like

Find/mark outlier events

Features like this could really help debugging if they’re done well, IMO: auto-detection of analyzers and their settings, diffing two channels, and more.
I don’t think you can implement it with HLAs, but you could do it with a measurement extension.

See (and add your vote) https://ideas.saleae.com/b/feature-requests/display-analog-samples-with-dots-as-well/

Some of us are quietly hanging out for that :grin:

2 Likes

It would be really nice to have basic analog math functions: add, sub, mult, inv.

Also, an FFT would be really cool, at least for the audio spectrum if not higher.
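Until something like that exists in the app, both are straightforward offline on exported analog data. A sketch, assuming two analog channels in a CSV export (the file name and column names are assumptions):

```python
# Offline stand-in for analog math + FFT on exported analog channels.
import numpy as np
import matplotlib.pyplot as plt

data = np.genfromtxt('analog_export.csv', delimiter=',', names=True)
t = data['Time_s']
ch0, ch1 = data['Channel_0'], data['Channel_1']

diff = ch0 - ch1                          # basic math: sub (add/mult/inv similar)

fs = 1.0 / np.mean(np.diff(t))            # sample rate from the timestamps
spectrum = np.abs(np.fft.rfft(diff * np.hanning(diff.size)))
freqs = np.fft.rfftfreq(diff.size, d=1.0 / fs)

plt.semilogy(freqs, spectrum)
plt.xlabel('frequency (Hz)')
plt.ylabel('|FFT| (arbitrary units)')
plt.show()
```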

Not sure why I wasn’t notified of a reply. Regardless:

What does that mean? Are you interested in a continuous trigger with a holdoff time?

The holdoff part doesn’t really matter, just the continuous part. The update rate will of course be limited by the hardware/software processing time to something like 30 Hz, or whatever. Or even 1 Hz. That’s not the important part.

Of course, our number one rule is: Never change the view if the user hasn’t asked for it :slight_smile:

Then why do you do it? When I re-activate a capture, the view changes and the view isn’t restored when the trigger happens again. I have to re-zoom to the portion I want to view, generally close to the trigger.

Do you need analog or protocol triggers or only digital triggers?

Just the basic triggers already available would be sufficient. Protocol would be nice, but it’s been years and we’ve yet to have a continuous trigger feature added. I’d rather have that than wait another 5 years for a protocol trigger.

This sort of feature is used when debugging hardware and you want to see what happens after a given trigger, such as a particular line going high. Maybe the issue is not repeatable and you need to repeat it multiple times. Maybe you’re trying to get a sense of the jitter in the timing between signals. These are basic debugging tools.
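For the jitter case specifically, the data is already there in a long capture; a post-processing sketch, assuming two digital channels exported to CSV (the file name and column names are assumptions):

```python
# Trigger-to-response jitter: time from each rising edge on channel 0 to the
# next rising edge on channel 1, measured over a whole exported capture.
import csv
import statistics

ch0_rising, ch1_rising = [], []
prev = {}

with open('digital_export.csv', newline='') as f:
    for row in csv.DictReader(f):
        t = float(row['Time [s]'])
        for name, edges in (('Channel 0', ch0_rising), ('Channel 1', ch1_rising)):
            v = int(row[name])
            if prev.get(name) == 0 and v == 1:
                edges.append(t)
            prev[name] = v

delays = []
for t0 in ch0_rising:
    nxt = next((t1 for t1 in ch1_rising if t1 > t0), None)
    if nxt is not None:
        delays.append(nxt - t0)

if delays:
    print(f'n={len(delays)}  mean={statistics.fmean(delays) * 1e9:.1f} ns  '
          f'p-p jitter={(max(delays) - min(delays)) * 1e9:.1f} ns')
```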

Then why do you do it? When I re-activate a capture, the view changes and the view isn’t restored when the trigger happens again.

Not on purpose, I can assure you that. Sounds like we have a bug. Does it happen every time?

We’ll have a basic version of continuous protocol triggers in about a week. Sorry for the (very) long wait…

View persistence can also be useful in some circumstances.

Minimum features should be:

  • Normal and auto trigger
  • Trigger on conditions (including protocol events) and multiple conditions, e.g. ch1 high and ch2 both edges
  • View persistence
  • Cursors to measure levels/time
  • Some buffer to record traces before and after the trigger event