AFE gain with fixed full scale range setting

Hi!

I’m getting acquainted with the Logic MSO – exciting kit! I’ve used many bench top ‘scopes; this is my first PC-based oscilloscope!

Looking through the datasheet, I wanted to understand more clearly how the analog front end's programmable gain (0dB to 38dB) relates to the fixed full-scale range settings.

The datasheet says:

The attenuation and amplification settings are automatically calculated by the Logic 2 software based on either

  1. The zoom and pan of the graphical interactive viewable window, or
  2. The fixed full scale range set in per-channel settings.

The second point applies in my case. In Logic 2 I see the “Input Vertical Range” slider allows fixed values in increments: 10, 15, 20, 30, 40, 50, 75, 100mV, etc.

What I’d like to clarify is how the programmable gain is set based on these range settings.

Example 1: If I set the vertical range to 15mV, will the gain be selected such that the input is amplified to the ADC’s max input of 800mV? In this case 800/15 = 53.333… The closest gain (less than this) would be 34dB = 50.11…

Example 2: If I set the vertical range to 750mV, the PGA gain would be 0dB = 1 as the input is already close to the ADC’s max input.
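To make my expectation concrete, here's a tiny Python sketch of the selection logic I'm imagining. The 800mV ADC full scale comes from my examples above; the 2dB step list is purely a guess on my part, since the actual AFE gain steps aren't published:

```python
ADC_FULL_SCALE_MV = 800.0             # assumed ADC max input, from the examples above
PGA_STEPS_DB = list(range(0, 40, 2))  # hypothetical 2 dB steps, 0..38 dB

def pick_gain_db(vertical_range_mv: float) -> int:
    """Largest PGA step whose linear gain still fits the range into the ADC."""
    ideal = ADC_FULL_SCALE_MV / vertical_range_mv        # e.g. 800/15 = 53.33x
    fitting = [g for g in PGA_STEPS_DB if 10 ** (g / 20) <= ideal]
    return max(fitting) if fitting else 0

print(pick_gain_db(15))   # 34 dB -> ~50.1x, as in Example 1
print(pick_gain_db(750))  # 0 dB, as in Example 2
```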

If this is documented somewhere please let me know :slight_smile:

Thank you!

Hi @bryce.wilkins,

I’m glad you’re excited about the device! This is a great question, and something we put quite a bit of effort into.

This is one of the differences between our oscilloscopes and traditional bench oscilloscopes. As far as I know, most (if not all) bench oscilloscopes have a 1:1 relationship between their volts/div setting and their analog front end (AFE) gain stages. Each click of the knob corresponds to a specific AFE setting, and the full scale range of the input lines up with the bounds of the screen.

Our hardware AFE is very similar to traditional bench scopes, but the way our software uses it is where things diverge. Like you mentioned, our software allows you to set an arbitrary vertical range, and in the default mode (where the view state controls the AFE settings) the software searches all possible AFE configurations for a full scale range that covers the visible area with the minimum amount of overshoot.

Your assumption is approximately correct. However, the block diagram in our datasheet is simplified; what's not shown there is a fixed gain coefficient. That coefficient is measured for every AFE configuration on every channel at the factory and stored in the device's on-board calibration data, so there is no single fixed value for all devices. (It varies device to device, channel to channel, and AFE config to AFE config. There are 3 calibrated coefficients per AFE setting per channel.)

We don’t directly expose this in the software at the moment, mainly because no one has asked and we haven’t worked on any applications where it would be useful yet. That said, this is important if you want to visually see every bit of that 12 bit ADC resolution.

However, you can indirectly see the true full scale range for any given user-selected voltage range.

First, you will need to turn on fixed voltage mode in our software, so you can zoom out on a signal and see where the rails are. Fixed voltage mode allows you to set the full scale range and offset of an input independently of the view’s zoom and pan:

Then, you will need to feed in a signal that hits the rails. In this example, I’m using the square wave generator that comes with the MSO.

Here I have added cursors showing the range:

So with the range set to 1.5V, I am getting a real range of 1.82V, about 21% more full scale range than selected.

If I change the fixed range to 1.8V, I see that the true input range is still 1.82V. However when I set the fixed range to 1.9V, I get a wider range of 2.31V.
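For reference, the overshoot can be computed directly from the cursor measurements; a trivial sketch:

```python
def overshoot_pct(set_range_v: float, true_range_v: float) -> float:
    """Percent by which the hardware full-scale range exceeds the requested range."""
    return (true_range_v / set_range_v - 1.0) * 100.0

print(round(overshoot_pct(1.5, 1.82), 1))  # ~21.3 -> the "about 21%" above
print(round(overshoot_pct(1.9, 2.31), 1))  # ~21.6
```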

I did also want to add that there are other factors involved in selecting the AFE settings besides just gain. There are limits to how much offset voltage can be applied for each AFE setting. For example, when you are looking at a small range but with a large offset, the software will first use an AFE setting that attenuates the signal significantly, then gain it back up again. This results in a higher-noise signal. With even larger offsets and smaller full scale ranges, it’s not possible to gain the signal back up enough, so the software must select a significantly higher full scale range (lower gain) than desired. This problem affects traditional bench scopes in the same way; you will see error messages on your scope when the offset is out of range for a given full scale range.

Generally speaking though, AC input coupling mode should be used instead to address these cases by removing the DC offset, allowing you to see the AC components of the signal in high fidelity.

We haven’t spent a lot of time yet exposing these details in the software or the documentation. Please let us know if you have a specific application where this information would be helpful! Or, if this is mainly just curiosity, feel free to ask follow up questions.

Let us know what you think about using your Logic MSO too!

Thanks,
Mark


Thanks @markgarrison ! Super appreciate your insightful reply and demonstration. That was very helpful and I could reproduce it on my end :grinning_face: I wanted to provide a few comments.

This is one of the differences between our oscilloscopes and traditional bench oscilloscopes. As far as I know, most (if not all) bench oscilloscopes have a 1:1 relationship between their volts/div setting and their analog front end (AFE) gain stages. Each click of the knob corresponds to a specific AFE setting, and the full scale range of the input lines up with the bounds of the screen.

The fixed range option is novel and I was quite excited that it would mean the AFE gain would be fixed, and then I could zoom and pan the view without causing some hardware switching under the hood.

Thanks for the insight on the fixed gain coefficient – that detail is good to know! Are these fixed gain coefficients typically small (~0dB), or could they be like 1.2x? (Mostly just curious here…)

… That said, this is important if you want to visually see every bit of that 12 bit ADC resolution.

This gets to my motivation. I hope to put all those 12-bits to use for the highest resolution view of my signal by making the input fixed range just a little bigger than my signal. That’s the normal thing to do with any scope, of course, but knowing how the gain is changing behind the scenes is potentially helpful.

If I change the fixed range to 1.8V, I see that the true input range is still 1.82V. However when I set the fixed range to 1.9V, I get a wider range of 2.31V.

Excellent example. I did some similar tests and I will experiment more to further understand it. This definitely helps when considering how the input signal will span the full range of ADC codes.

Please let us know if you have a specific application where this information would be helpful! Or, if this is mainly just curiosity, feel free to ask follow up questions.

Part of it is curiosity, in that to use our tools effectively we often need some insight into how they are working. But also, I have a use case for measuring small signals at high (vertical) resolution and wondered what impact the AFE could have on the small signal.

Not specific to Logic MSO, but what is good practice for signal amplitude input to a scope? A signal that is 800mVp-p before the scope input (with AFE gain = 0dB) or 10mVp-p at the input and an AFE gain of 80x both have 800mVp-p signal going into the ADC. Will the difference be the susceptibility to noise over the probe “transmission line”? How does one make the trade-off? (Apologies this has diverged a bit from the Logic MSO…)

Let us know what you think about using your Logic MSO too!

I’m liking the interface in Logic 2 – there’s a lot of information on the display but it’s clearly laid out to see exactly the operating settings at a glance. Even that little padlock for “fixed range” :star_struck: I’m getting familiar with the new icons to control the instrument, different recording modes etc, and probing around all the settings :slight_smile:

Definitely looking forward to working with the MSO API in Python!


For information on improving oscilloscope precision, see Tools to Boost Oscilloscope Measurement Resolution to More than 11 Bits | Tektronix

… specifically: Minimize attenuation to maximize signal to noise ratio.

Your intuition is right: it is better to have more ‘signal amplitude’ (within the scope’s analog hardware limits) than to use the AFE to compensate (gain it up), as you will amplify the noise along with the signal and introduce some error from the gain factor/circuit itself.

Finally, some tips for better analog signal captures:

  • Probe Compensation: Compensate your probes at the specific fixed setting used (the signal path capacitance may vary for different AFE settings)
  • Proper Grounding:
    • Connect the scope probes ground to good DUT ground reference
    • Minimize the grounding path (e.g., use the ‘spring ground’ vs. alligator clip)
  • Noise Isolation:
    • Keep the DUT/probes away from any noise sources (laptop/monitor screens, AC power lines or other power cables, other electronics, etc.)
    • Use a lower noise USB power supply (the one provided by Saleae)
  • Noise Filtering: For lower frequency captures (< 20 MHz), turn on bandwidth limit filter to reduce the high frequency noise
  • Wait for Scope Warm-Up: For best results you can wait until the scope is fully ‘warmed up’ to its normal/ideal operating temperature (~30 minutes)

Hi @BitBob, thank you for this additional insight!

I read through the Tektronix document. Great read! I have been testing with the Logic MSO input set as “1x” so there is no attenuation of the input signal. Excellent to see that was mentioned.

Your additional tips are great. :+1: I can especially do better at noise isolation. I think I will also look into shielding of my DUT.

The analog probes that come with the Logic MSO are 10X fixed passive probes, and the attenuation is built into the probe. Therefore, the setting on the GUI needs to match the probe so the displayed voltage is accurate.

In general, a 10X passive probe is a good choice for typical oscilloscope use. It has higher impedance (less influence on the DUT circuit) and supports higher frequency content than a 1X passive probe. Some probes have a switch to go between 1X and 10X attenuation, while others are fixed at one setting (Saleae’s included Logic MSO probes, for example, are fixed at 10X).

Therefore, if you set the GUI to 1X but use Saleae’s probes, expect a 10X error in your measured voltages (i.e., a 9V source would display as only 0.9V due to the mismatch between probe attenuation and GUI setting). 10X probes reduce the voltage at the scope’s input to 1/10th (a 10:1 voltage divider), giving a higher input range.
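That mismatch arithmetic is easy to sketch; this is just the divider math described above, not anything from the Saleae software:

```python
def displayed_voltage(source_v: float, probe_atten: float, gui_atten: float) -> float:
    """Voltage the software shows: the probe divides the source by probe_atten,
    then the software multiplies the input back up by the configured gui_atten."""
    at_input = source_v / probe_atten   # voltage actually reaching the scope input
    return at_input * gui_atten

print(displayed_voltage(9.0, probe_atten=10, gui_atten=1))   # 0.9 -> the 10x error above
print(displayed_voltage(9.0, probe_atten=10, gui_atten=10))  # 9.0 -> matched settings
```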

Note: some higher-end scopes can auto-detect & set probe settings based on additional circuitry built into the probe connector, but the Saleae just has a simple SMA (or BNC-adapted) connector w/o any extra ‘smarts’ for auto-detecting the probe type or scale/attenuation.


Hi @BitBob,

Yup! That lines up with my experience in this. As I’m measuring small voltages I’m using a piece of coax to the scope input from my DUT, and the 1X setting on the GUI so that the display correctly shows the voltage.

I’ll experiment with the Logic MSO AFE gain more, and think about how to condition my DUT signal appropriately before the scope input, so that I could have the AFE configured with a gain of 0dB…

And also thinking about grounding and shielding at my DUT…

I’m not an expert on this, but I think it is important to match impedances on each end of a coax cable to maximize signal fidelity. I think coax typically has 50 Ohm impedance, while a 1X probe tries to match the 1 MOhm input impedance of the oscilloscope.

Here’s another article, which includes schematics for different probe types: https://www.allaboutcircuits.com/technical-articles/an-introduction-to-oscilloscope-probes/

… and an even more in-depth paper about scope probes, if interested:

As per the Logic MSO specs, it has only the typical 1 MOhm input impedance, with no option to switch to a 50 Ohm mode. I think a 50 Ohm termination is for matching the typical output impedances of waveform generators, RF transmitters, or ‘active’ probes (with a special circuit that drives the output from probe to scope). If needed, I think you can add a 50 Ohm load BNC terminator externally, though purists would argue that isn’t quite the same.

I suppose you could experiment with different probing options and see which method provides the best results (assuming you have a known reference to use, or other ‘standard’ scope to compare):

  • Coax wire directly into scope input (or BNC adapter, if needed)
  • 10X Saleae probe directly into scope input
  • 1X probe into BNC adapter, into scope input

… but, depending on the signal characteristics and other settings, you might not actually notice much of a difference between these options anyway :wink:

Finally, here’s my understanding:

  • 10X probe: best for general use and higher frequency signals (i.e., 200+ MHz, but the 10:1 divider might affect the ability to view very low amplitude signals)
  • 1X probe: low amplitude, low frequency signals (less bandwidth than 10X probe, ~10 MHz max, or so)
  • Coax wire: may have other issues, depending on the specific signal/cable characteristics
    • May have more signal distortion and/or ringing at higher frequencies (without proper load/termination)?
      • Signal reflections might disturb the DUT source w/o matching termination?
      • Matching termination might also overload the DUT, if not actively driving the signal/load?
    • May have a higher capacitive load vs. 1X probe?

… so I’d recommend sticking with a real scope probe and using the built-in AFE that is designed to handle low amplitude signals, unless I knew another input setup would work (and I would consult EE/HW experts for advice on any ‘non-standard’ setup). Otherwise, a potential ‘fix’ in probing may make things worse, not better.

What type of signal are you probing? What are the offset voltage levels, drive strength and signal amplitude/frequencies?

Edit: one other good article about passive scope probes and probing methods:


@markgarrison

I wanted to cycle back with some of my results to compare.

Fixed range = 469mV yielded real range = ~585mV

Fixed range = 589mV yielded real range = ~740mV

Fixed range = 741mV yielded real range = ~1187mV

So, in my case, I could set “Fixed range = 600mV” and the 12-bit ADC would span input signals up to 740mV without clipping. I could zoom in and look at individual samples and see the vertical resolution looked about 740mV/2^12 = 180uV.

There isn’t a way to know what the overall AFE gain is based on the fixed range setting – it’s a bit more complicated than I originally thought – but that seems OK for now. I have to dig into my application more and see if this becomes more important later on.

You are absolutely right, @BitBob . I have cobbled things together while I’m getting familiar with Logic MSO. I have not thought through the impedance matching well…

Thank you for all the references and reading material – very insightful and great reading!

Nice summary of the 10x, 1x, coax options. That highlights the core differences and aspects I need to check further, especially for my coax solution.

I do need to experiment a bit, and have a Keysight N2870A passive probe, 1:1, 35 MHz. I should try it and compare with the coax, and see what difference exists. I would like to get the impedance matching “right” even if there’s no visible difference with the specific signals I’m looking at.

And I should compare to the Saleae 10x probe…though intuitively I would still opt not to use it because it attenuates my signal…but it is the only officially supported Saleae probe.

Perhaps Saleae can make a 1x probe? :slight_smile:

I’m experimenting with a high side current shunt for measuring power consumption of hearing aids. The hearing aid battery voltage is 1.4V, and the battery current is in the mA and lower range. I need to keep the burden voltage low, so the shunt resistance is going to be low…but big enough to have some “signal”. I’ll have a couple of gain stages to amplify the shunt voltage and drive the signal into the probe. So low voltages, and frequencies < 1MHz.
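As an aside, the shunt-sizing trade-off I'm describing can be sketched in a few lines. All of the numbers below are hypothetical placeholders to show the arithmetic, not my actual design values:

```python
def shunt_resistance(max_current_a: float, max_burden_v: float) -> float:
    """Largest shunt value that keeps the burden voltage under the budget."""
    return max_burden_v / max_current_a

def shunt_signal_v(current_a: float, r_shunt: float, amp_gain: float) -> float:
    """Amplified shunt voltage presented to the scope input."""
    return current_a * r_shunt * amp_gain

# Hypothetical budget: 10 mA peak draw, 10 mV max burden -> 1 ohm shunt
r = shunt_resistance(10e-3, 10e-3)
print(r)                               # 1.0 ohm
print(shunt_signal_v(1e-3, r, 100.0))  # 1 mA through 1 ohm with 100x gain -> 0.1 V
```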

Here are a couple of preliminary results using an arbitrary waveform generator to feed 1MHz square wave, 2mVp-p, 0mV offset, High-Z load into Logic MSO. Identical signal to both channels.

  • I’m using the smallest signal from the waveform generator so the Logic MSO AFE will be gaining this quite a bit (80x ?) before the ADC.

Channel 1 (orange): BNC > coax > SMB > Logic MSO

Channel 2 (green): BNC to probe-tip adapter > Keysight N2870 1:1 35MHz probe > BNC-to-SMB adapter > Logic MSO

Channel 1 is showing a little DC offset…that could be related to the probes or the signal generator output. Needs further looking into.

Channel 2 does not have the offset.

The rise time is a little slower on Channel 2; maybe the probe bandwidth is a factor there.

I swapped the coax on Channel 1 for the Saleae 10x probe, and still used the minimum vertical range, which is now 100mV for Channel 1 (keeping 10mV vertical range on Channel 2).

The 1x probe is the clear choice for this small signal.

I’m showing the limits of what I can zoom in vertically. @markgarrison Could the “View Vertical Range” be updated to allow even more zooming in? :smiley:

If I change the input signal from 2mVp-p to 80mVp-p, and set the vertical input and view ranges to 100mV for both channels, this is the result:

Channel 1 (Saleae 10x probe) is a bit more noisy than Channel 2 (Keysight 1x probe), which I think is expected due to the signal attenuation of 10x probe.


Nice write-up on the probe comparisons. I haven’t tried to see +/- 2 mV inputs, but it looks like the minimum Logic MSO full scale range is +/- 10 mV (at SMA input, or 1X), or +/- 100 mV (for 10X probe) making a 1X probe the better option for this fine of a vertical scale and signal frequency. Depending on the device-specific calibration and ADC capability, the vertical resolution will (roughly) be:

  • 9-bit ADC: ~20 mV/2^9 (512) = ~40 uV/bit (1X) or ~400 uV/bit (10X)
  • 12-bit ADC: ~20 mV/2^12 (4096) = ~5 uV/bit (1X) or ~50 uV/bit (10X)

You’ll need to update (de-rate further) for the vertical overshooting mentioned earlier, but above numbers should be a rough gauge of the Logic MSO’s capability.
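The bullet math above can be reproduced with a small helper. The 20 mV full scale and bit depths are taken from the bullets; this deliberately ignores the device-specific calibration and overshoot just mentioned:

```python
def lsb_uv(full_scale_mv: float, adc_bits: int, probe_atten: float = 1.0) -> float:
    """Approximate vertical resolution in microvolts per ADC code,
    referred back to the probe tip (10X probe scales the LSB up by 10)."""
    return full_scale_mv * probe_atten / 2 ** adc_bits * 1000.0

# +/-10 mV range -> 20 mV full scale, as in the bullets above
print(round(lsb_uv(20, 9)))       # ~39 uV/bit (1X)
print(round(lsb_uv(20, 9, 10)))   # ~391 uV/bit (10X)
print(round(lsb_uv(20, 12), 1))   # ~4.9 uV/bit (1X)
print(round(lsb_uv(20, 12, 10)))  # ~49 uV/bit (10X)
```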

Depending on the frequency you want, you can consider oversampling (averaging) the ADC to improve the resolution at the expense of maximum sample rate. In this case, noise actually helps, as long as it is ‘random’ and more than ~1 ADC bit in magnitude (which shouldn’t be a problem at this scale). So far, Saleae doesn’t provide this feature directly, but you should be able to emulate it with the Logic MSO API, or post-process the exported analog capture data. Not the same as having a real higher-resolution ADC, but it might be good enough for your use case and could take advantage of Saleae’s higher MSO sample rate to boost the vertical resolution further.
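As a rough sketch of the post-processing route (this is not a Saleae feature, just plain NumPy applied to exported samples):

```python
import numpy as np

def oversample_average(samples: np.ndarray, factor: int) -> np.ndarray:
    """Boxcar-average non-overlapping blocks of `factor` samples.
    Each 4x of averaging adds roughly one effective bit, assuming the
    noise is random and spans at least ~1 LSB."""
    n = len(samples) // factor * factor            # drop the ragged tail
    return samples[:n].reshape(-1, factor).mean(axis=1)

# Synthetic example: a DC level buried in Gaussian noise
rng = np.random.default_rng(0)
raw = 0.5 + rng.normal(0, 0.01, 1_000_000)
avg = oversample_average(raw, 256)
print(raw.std(), avg.std())  # noise drops by roughly sqrt(256) = 16x
```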

Note: there are also other current/power monitoring options, such as:

… and probably many others. I’ve passively looked out of curiosity, but haven’t bought any for my own toolbox yet, nor had any demand for them at work. Thus, I have no personal experience (or affiliation) with any of these suggestions beyond checking out various online reviews with positive results (i.e., future toy/toolbox R&D).

Further optimization of your setup will depend on how much data you want to collect (for how long of a test), and how high of a sample frequency. Any need for NIST-traceable calibrations for your power measurement equipment? Are you doing long-term power usage (discharge curves over the full battery life), or more detailed analysis of shorter-term sleep/wake cycles or other power mode switching transient tests that might need a higher resolution and/or more dynamic (auto scaling) range?

If considering any of these suggestions, make sure to read the documentation carefully, as the lower cost tools may have limited capabilities or lack hardware protections. I think you would want to align with your design and not smoke a brand new toy^M^M^M tool.

PS: I hope I’m not spoiling your (re)inventing fun :wink:


Thanks for reading my note, @BitBob

Note: a typo there: 200 should be 20 I think.

Yes, I was thinking the same – the Logic MSO sample rate of 1GSa/sec is high enough that oversampling and averaging could reduce noise without reducing the bandwidth to the point it impacts my DUT signal of interest.

I noticed Logic MSO has a “High Resolution” mode for Single Shot acquisitions so I was going to test with that to start. Is the kind of averaging you’re mentioning different to that?

Yeah, the API would also allow more fun to be had with filtering.

Yup, I have seen several of those current monitors. I have the Nordic nRF PPK2. It’s quite amazing for its cost and the GUI tool. I do want to compare my results against that tool.

I am reinventing the wheel – just wanting to have some fun learning Logic MSO, specifics of the AFE, appreciate probe design complexity, etc. and put this to some purpose.

Whether I need some way to dynamically switch current ranges I’m not entirely sure yet…

No, not spoiling the fun at all! I’ve really appreciated your comments. They highlighted aspects I’d overlooked (cable impedance mismatch, noise isolation, …). The links and reading material have been super helpful. Thank you!