Calibration Data: How to Use What You Know

  How well do you know what you know, and how well do you need to know it anyway?

We choose, purchase and maintain test equipment because we want answers: how big, how pure, how fast, how wide, and so on. The answers are essential to our success in design and manufacturing, but they come at a cost. Therefore, we want to make the most of them, and I have written previously about improving measurements by adding information.

There are many ways to add information, including time averaging of repetitive signals and subtracting known noise power from a measurement. I’ve recently discussed using the information from periodic calibration of individual instruments as a way to get insight into the likely—as opposed to the specified—accuracy for actual measurements. If you’re paying for calibration and the information gathered during the process, it’s wise to make the most of it. Here’s an example, from calibration, of the measured frequency response of an individual PXA signal analyzer:

Frequency response of one PXA signal analyzer as measured during periodic calibration. The measured performance and measurement uncertainty are shown in comparison to the warranted specification value.

In the cal lab, this analyzer is performing much better than its hard specs, even after accounting for measurement uncertainty. That’s not surprising, given that the specs must account for environmental conditions, unit-to-unit variation, drift, and our own measurement uncertainty.

Of course, if you’re using this particular instrument for a similar measurement in similar conditions, it’s logical to expect that flatness will be closer to the measured ±0.1 dB than to the specified ±0.35 dB. How can we take advantage of this extra performance?

Not surprisingly, the answer depends on a number of factors, many specific to your situation. I’ll offer a few thoughts and guidelines here, gathered from experts at Keysight.

Begin by understanding your measurement goals and responsibilities. You may be looking for a best estimate rather than a traceable result to use in the design phase, knowing the ultimate performance will be verified later by other equipment or methods. In this situation, the minimum and maximum uncertainty values shown above (dotted red lines) might lead you to comfortably expect ±0.15 dB flatness.
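Here's a minimal numerical sketch of that reasoning. The ±0.10 dB measured flatness and the ±0.05 dB cal-lab uncertainty are illustrative values read off the figure discussion, not data from an actual calibration report:

```python
# Design-phase "best estimate" of flatness from calibration data.
# All numbers are illustrative assumptions, not real report values.
measured_flatness_db = 0.10   # worst-case deviation seen during calibration (dB)
cal_uncertainty_db   = 0.05   # cal-lab measurement uncertainty (dB), assumed
spec_flatness_db     = 0.35   # warranted specification (dB)

# Bound the true flatness by the measured value plus the uncertainty of
# the calibration measurement itself.
expected_flatness_db = measured_flatness_db + cal_uncertainty_db

print(f"Expected flatness: +/-{expected_flatness_db:.2f} dB "
      f"(vs. +/-{spec_flatness_db:.2f} dB warranted)")
```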

On the other hand, you may be dealing with the requirements and guidelines in standards documents such as ISO/IEC 17025, ANSI/NCSL Z540.3 and ILAC-G8. While calibration results are relevant, relying on them is more complicated than using the warranted specs. The calibration results apply only to a specific instrument and its measurement conditions, so equivalent instruments can't be freely swapped. In addition, you must explicitly account for measurement conditions rather than relying on the estimates of stability and other factors that are embedded in Keysight's spec margins.

These factors don’t rule out using calibration results in calculating total measurement uncertainty and, in some cases, it may be the most practical way to achieve the lowest levels of measurement uncertainty—but using them can complicate how you verify and maintain test systems. You’ll want to identify the assumptions inherent in your methods and have a process to verify them, to avoid insidious problems.

Measurement uncertainty is not the only element of test plan design, and calibration results can help in other ways. Consider the measured and specified values for displayed average noise level (DANL) in the following graph.

The actual and specified average noise levels of a PXA signal analyzer are shown over a range of 3.6 to 50 GHz. Where measurement speed is a consideration, the actual DANL may be a better guide than the specifications in optimizing settings such as resolution bandwidth.

In this example the actual DANL is 5 to 10 dB better than specified, and this has implications for the test engineer. When measuring low-level signals or noise, it’s necessary to select an RBW narrow enough to reduce the noise contributed by the signal analyzer. Narrow RBWs can lead to slow measurements, so there’s a real benefit to understanding the actual noise level as a way to use RBWs that are as wide—and therefore as fast—as possible.
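To see the speed payoff in numbers, here is a sketch of choosing the widest usable RBW from measured DANL rather than the spec. The signal level, margin, and DANL values are assumed for illustration:

```python
# Pick the widest RBW that keeps analyzer noise a set margin below the signal.
# All numbers are illustrative assumptions.
signal_dbm    = -110.0
margin_db     = 10.0      # keep analyzer noise this far below the signal
danl_spec_dbm = -150.0    # specified DANL, normalized to 1 Hz RBW
danl_meas_dbm = -158.0    # measured DANL from calibration data, 8 dB better

def widest_rbw_hz(danl_1hz_dbm):
    # Displayed noise rises 10*log10(RBW/1 Hz) above the 1 Hz DANL; solve
    # for the widest RBW that still leaves the desired margin.
    allowed_noise_dbm = signal_dbm - margin_db
    return 10 ** ((allowed_noise_dbm - danl_1hz_dbm) / 10)

rbw_spec = widest_rbw_hz(danl_spec_dbm)
rbw_meas = widest_rbw_hz(danl_meas_dbm)
print(f"Widest RBW from spec: {rbw_spec:.0f} Hz; from measured DANL: {rbw_meas:.0f} Hz")

# Swept measurement time scales roughly as 1/RBW^2, so the wider RBW is faster:
print(f"Approximate sweep-time reduction: {(rbw_meas / rbw_spec) ** 2:.0f}x")
```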

When your measurements and test plans are especially demanding, it makes sense to use all the information available. Guardbanding is part of a Keysight calibration service that includes the most complete set of calibration results such as those above. For easy access to calibration results without tracking paper through your organization, you can use the free Infoline service that comes with all calibrations.


Error Vectors, Steganography, and Hiding in Plain Sight

  Fascinating connections between very different phenomena

In engineering, one of the most interesting experiences is to encounter an analog of something familiar, but in an entirely different field. I bet we’ve all had this recognition of similarity and felt the intellectual thrill of discovering parallels and symmetry. It’s also the source of theoretical breakthroughs, as described in Thomas Kuhn’s classic The Structure of Scientific Revolutions.

I can claim nothing so grand, but thought it might be interesting to summarize the journey that began with my efforts to understand and explain digital demodulation and the resulting error vector signals 20 years ago.

In a previous post, I explained the meaning of the error vector signal and how it represented the residual after the intended digital modulation was removed from a signal. The magnitude of the error vector signal (EVM) is well known and frequently used as an overall quality metric; however, the full, complex signal is more powerful in terms of diagnostics and insight.

The error vector is calculated as the complex difference between a measured signal and one with the same symbol sequence and perfect modulation. In performing demodulation with a vector signal analyzer, I figured it should be possible to hide a small modulated signal inside a larger one, making it almost impossible to detect unless one already knew of its presence. The error vector residual after demodulation should then be due mostly to the hidden signal and one should be able to demodulate it. Here’s an example of my signal spectra and results.

One signal—about 30 dB smaller—is hidden inside another in the spectrum at left. After the larger signal is demodulated and removed, the resulting error vector signal is successfully demodulated at right.

I was surprised at how well this process worked, even with differences in signal power of 30 dB or more. Noise didn’t seem to be a major problem unless it caused a significant number of symbol errors at the physical layer. When those occurred, they fouled up the calculation of the perfect signal that is subtracted from the received one, preventing accurate calculation of the residual.
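A toy symbol-rate model captures the essence of the experiment. This sketch hides one QPSK stream 30 dB below another, demodulates the larger one with hard decisions, and then demodulates the residual; pulse shaping, carrier offsets, and noise are all ignored for clarity, and none of this reflects the actual VSA processing:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_qpsk(n, rng):
    # n random unit-power QPSK symbols.
    bits = rng.integers(0, 2, 2 * n)
    return ((1 - 2.0 * bits[0::2]) + 1j * (1 - 2.0 * bits[1::2])) / np.sqrt(2)

def hard_decide(symbols):
    # Nearest-QPSK-point decisions.
    return (np.sign(symbols.real) + 1j * np.sign(symbols.imag)) / np.sqrt(2)

n = 4096
cover  = random_qpsk(n, rng)
hidden = random_qpsk(n, rng) * 10 ** (-30 / 20)   # 30 dB below the cover
rx = cover + hidden

# Error vector: measured signal minus the reconstructed "perfect" signal.
error_vector = rx - hard_decide(rx)

evm = np.sqrt(np.mean(np.abs(error_vector) ** 2) / np.mean(np.abs(cover) ** 2))
print(f"EVM of cover signal: {100 * evm:.1f}%")   # ~3.2% for a -30 dB hidden signal

# With no symbol errors, the residual is exactly the hidden signal:
sym_errs = np.count_nonzero(hard_decide(error_vector) != hard_decide(hidden))
print(f"Symbol errors in recovered hidden signal: {sym_errs}")
```

As the next paragraph notes, the scheme falls apart once noise causes symbol errors in the cover signal, because the reconstructed "perfect" signal is then wrong.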

I explained these experiments to a VSA-savvy R&D project manager: he said it looked like I’d “created an oddball version of CDMA.” It took me a while to appreciate the significance of what he’d said, but it did indeed seem to be an analog to CDMA.

When I ran across a paper about steganography, however, I recognized the similarity immediately. Though steganography comes in many forms and has a long history, I find the most instructive and satisfying examples to be graphic ones such as this pair.

The image of the cat at right is hidden in the image of the tree at left. The hidden image is recovered from the cover (carrier?) image by removing everything but the two least significant bits of the color channels and re-scaling the result. (Images from Wikimedia Commons)

A critical element in any version of this process is how the respective signals from the cover and hidden image are separated. Orthogonal codes and image intensity are just two of many approaches; you can see others at the Wikipedia link above.
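The two-LSB embedding described in the caption is simple enough to sketch in a few lines. Here random arrays stand in for the actual image files:

```python
import numpy as np

rng = np.random.default_rng(1)
cover  = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in cover image
secret = rng.integers(0, 4,   (64, 64, 3), dtype=np.uint8)  # 2-bit-per-channel payload

# Embed: overwrite the two least significant bits of each color channel.
stego = (cover & 0b11111100) | secret

# Recover: keep only the two LSBs and rescale to full brightness.
recovered = (stego & 0b00000011) * 85   # 85 = 255 // 3

assert np.array_equal(stego & 0b11, secret)
```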

The examples of different steganography types and signal-separation techniques are nearly endless, and in wireless communications I suspect MIMO is another one. In wireless it also seems that processing tasks such as separating signals from noise or dealing with low—or even negative—signal-to-noise ratios can be viewed through the lens of steganography.


Time-Qualified Trigger: A Powerful Addition to the RF Toolbox

  When you’re playing cat-and-mouse with tricky signals

Hertz-minded RF engineers are becoming more and more comfortable with the time domain and, in particular, with simultaneous operations and signal analysis in the time and frequency domains. Part of the reason is that modern systems—from radar to communications—must be optimized in both domains. Of course, many systems are also frightfully complex in both domains, but that’s a post for another day.

The other part of the reason for this dual-domain focus is defensive: things can go wrong in both domains and engineers will need to find and fix the problems—or at least convincingly point to the guilty party.

Fixing a problem often begins with a clear, unambiguous look at the signal in question. That’s not much of a challenge for a CW signal, and even pulsed or TDMA signals can be handled with proven techniques that have been around for years.

Unfortunately, getting a clear look at the contents of today’s crowded frequency bands is difficult, and getting more so. You’re often looking for one signal among many, and it may be present for only a tiny fraction of your measurement time. To compound the elusiveness, the signal may also be hopping or switching channels.

The challenge is obvious in unlicensed spectrum like the ISM bands, where there are lots of users, minimal supervision, and many different signal types. Even in the licensed bands you may need to track down brief, unintended emissions from users on other bands, including harmonics, spurious or transient switching effects.

As is so often the case, the solution is to take advantage of powerful DSP and borrow something from oscilloscopes, our friendly experts in the time domain: the time-qualified trigger (TQT).

As the name implies, this trigger adds one or more time-qualification parameters to other trigger types such as frequency mask or IF magnitude. Here’s the TQT applied to a magnitude trigger in the 89600 VSA software:

A time-qualification parameter T1 is applied to an IF magnitude trigger on an RF pulse. A data-acquisition trigger is generated only if the pulse stays above the IF magnitude level (dashed horizontal line) for an interval longer than T1. A pre-trigger (negative) delay is used to move the data acquisition earlier and capture the entire pulse.

The simplest time qualification is to trigger when an event lasts longer than a selected time T1. Using the two available time parameters, T1 and T2, provides three additional qualifications:

  • Duration less than T1
  • Duration between T1 and T2
  • Duration less than T1 or greater than T2

Of course, the point in time when the trigger conditions are all satisfied is unlikely to be the point at which you want to begin sampling and measurement. The VSA solutions include adjustable trigger delay to solve this problem, and the negative (pre-trigger) delay is frequently the most useful. It allows you to wait until you’ve found the exact signal you want and then go back in time to see the beginning of it or even earlier events.
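Here is a sketch of the simplest qualification, "duration greater than T1," applied to a sampled IF magnitude record, including a negative delay to back up the acquisition. The function and test data are hypothetical, not the VSA's actual processing:

```python
import numpy as np

def time_qualified_trigger(mag, fs, threshold, t1, pre_trigger):
    """Return the acquisition start index, or None if no event qualifies."""
    above = np.concatenate(([False], mag > threshold, [False]))
    edges = np.flatnonzero(np.diff(above.astype(np.int8)))
    rises, falls = edges[0::2], edges[1::2]   # paired region boundaries
    for rise, fall in zip(rises, falls):
        if (fall - rise) / fs > t1:           # qualification: duration > T1
            # The trigger fires once the signal has stayed above the level
            # for T1; the negative delay moves acquisition earlier.
            return max(0, rise + int(t1 * fs) - int(pre_trigger * fs))
    return None

fs = 1e6                                      # sample rate (Hz)
t = np.arange(int(500e-6 * fs)) / fs
mag = np.zeros_like(t)
mag[(t > 100e-6) & (t < 120e-6)] = 1.0        # 20 us pulse: too short, ignored
mag[(t > 300e-6) & (t < 400e-6)] = 1.0        # 100 us pulse: qualifies

start = time_qualified_trigger(mag, fs, threshold=0.5, t1=50e-6, pre_trigger=60e-6)
print(f"Acquisition starts at t = {start / fs * 1e6:.0f} us")  # just before 300 us
```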

Speaking of time travel, triggers such as TQT and IF magnitude can also be used in the VSA software to initiate signal recordings or time captures. Complete, gap-free blocks of frequency and time data can be streamed to memory for any—or multiple—types of post processing. Both center frequency and span can be changed after capture, to examine different signals and to track down cause and effect.

For RF engineers, the frequency domain is also a critical qualifier, and the frequency mask trigger (FMT) in real-time spectrum analysis is a powerful complement to TQT. FMT and TQT can be used together to qualify measurement data blocks in both domains simultaneously, trapping fleeting signals or capturing dynamic behavior with the speed and thoroughness of a hungry barn cat chasing lazy mice.


Phase Noise, Frequency Multiplication, and Intuition

  This is why we can’t have nice things at microwave and millimeter frequencies

Well, of course, we can have nice things at very high frequencies, but it’s more difficult and gets progressively harder as frequencies increase. I just couldn’t resist invoking the “can’t have nice things” meme, and to parents everywhere it has a certain resonance.

In many applications, the operating frequencies of the systems we design and test are increasing as part of our endless quest for available bandwidth. From direct improvements in data throughput to increasing resolution in radar-based synthetic vision, the requisite tradeoffs apply equally to test equipment and DUTs.

An intuitive understanding of life at higher frequencies was on my mind recently after reading an email that mentioned a classic rule of thumb: A perfect frequency doubler increases phase noise by 6 dB. Here’s an example of a synthesizer output at successively doubled frequencies.

Successive doubling of the output of a frequency synthesizer increases phase noise by 6 dB for each step.

Of course, if the doubler is not a perfect device, then the increase will be larger than 6 dB because the doubler adds noise or undesirable phase deviation.

Why 6 dB? Perhaps that's where different intuitive approaches can help. Years ago, when I first heard it, the 6 dB figure made sense from a time-domain perspective. If a timing deviation Δt is constant, the corresponding phase deviation Δφ = 2πfΔt doubles when the frequency doubles. Doubling the phase deviation—a linear term—doubles the sideband amplitude and therefore increases sideband power by 6 dB.

Heuristically, this intuitive approach feels correct, but I’ve learned to be cautious about relying too much on my intuition. Fortunately, more rigorous calculations—albeit based on approximations and simplifications—yield the same answer. Until I wrote this post, I didn’t realize that my approach also involved a version of the small-angle approximation.

A more general expression of this relationship applies to multipliers other than two:

20 log10 (N) dB where N is the multiplier constant
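For readers who like to see the small-angle reasoning written out, here is a compact version (a sketch using the usual narrowband approximation, with ℒ the single-sideband phase noise and f_m the offset frequency):

```latex
% An ideal xN frequency multiplier scales the instantaneous phase,
% and therefore the phase deviation:
\phi_{\mathrm{out}}(t) = N\,\phi_{\mathrm{in}}(t)
  \quad\Rightarrow\quad
  \Delta\phi_{\mathrm{out}}(t) = N\,\Delta\phi_{\mathrm{in}}(t)

% For small deviations (the small-angle approximation), sideband power is
% proportional to (\Delta\phi)^2, so the phase noise changes by
\mathcal{L}_{\mathrm{out}}(f_m) = \mathcal{L}_{\mathrm{in}}(f_m) + 20\log_{10} N~\mathrm{dB}
% N = 2 gives the doubler's 6 dB; N = 1/2 gives a divider's -6 dB.
```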

In practical microwave and millimeter systems, multipliers greater than two are common, placing a real premium on the phase noise performance of the fundamental oscillators. This applies equally to microwave and millimeter test equipment, in which the choice of local oscillator frequencies is a balance between performance at fundamental frequencies and required range of multipliers or harmonic numbers.

That balance can indeed yield nice things at very high frequencies. Here’s an example of the phase noise of a signal analyzer at 67 GHz using external mixing.

This measurement of a low-noise millimeter source reveals the phase noise of a Keysight PXA X-series signal analyzer using a V-band smart external mixer at 67 GHz. The DUT, a PXG signal generator with a low-noise option, has even lower phase noise.

Frequency dividers are another example of this relationship, and can be treated as multipliers with a constant less than one. For example, a divide-by-two circuit (N = 0.5) yields an improvement of 6 dB, making it a practical and effective way to reduce phase noise.

Where do you get your insight into relationships such as this? Do you lean on visual approaches, mathematical calculations or something else altogether? Feel free to add a comment and share your perspective.


Cringe-Worthy Wireless: Industry and Personal History

  Admirable explanations and embarrassing memories

On a recent long-distance drive across western states, we encountered lightning and its usual transient interference across the AM radio band. That distinctive sound crackling from the receiver links two things in my mind: the first spark-gap wireless transmitters and my own unintentional transmissions from a spark gap driving a seven-foot Tesla coil.

Like many RF engineers, I’m fascinated by the history of radio, including the first steps on the path to where we are today. Unfortunately, I didn’t get much of this in school because our lectures only went as far back as the invention of negative feedback in the late 1920s. Practical spark-gap transmitters predated this by several decades.

That early history is enlightening, and I wanted to share an excellent—and underappreciated—explanation of it: a 1991 episode of the British Channel 4 series The Secret Life of Machines. It's an understatement to call the series quirky and low-budget, but it's also brilliant and entertaining. Here on this blog I do my best to create effective explanations of technical topics, but the hosts of this series have talent that I can only envy.

To see what I mean and get a glimpse of the earliest history of wireless, take a look at the series episode The Secret Life of the Radio Set. This YouTube link is one of several places where you can see the episode and others. You might want to look at the episode before reading the rest of this post. Go ahead. I’ll wait.

Welcome back. In the video, I was particularly struck by the sparks in both the transmitters and receivers. By the time I saw it, I was aware of the growing problems with spectral crowding and interference, and was working with the narrowband TDMA technologies that were being introduced to enable second-generation wireless phones. Videos of the spark-gap transmitters were an effective attention-getter in all kinds of talks about new and more spectrally efficient systems.

Early in my life as a practicing engineer my extracurricular activities included spark gaps and circuits that were the very opposite of spectrally efficient. In my defense, I didn’t come up with the design and, anyway, it was for a good cause. Here are a couple of pictures of the building process of that seven-foot Tesla coil.

Winding a mile of fine insulated wire on a Plexiglas tube to form the final stage of a Tesla coil. The completed winding is shown at right and, yes, that is a plumber's toilet flange serving as a base anchor at the far end. Also yes, on the left that is your humble author as a younger, darker-haired practicing engineer.

The completed Tesla coil was inefficient by every measure. It was large and used high-voltage capacitors made from three-foot square panes of glass with heavy aluminum foil glued to each side. It was power hungry, driven by three neon-sign transformers that each produced 15 kV and 200 mA. I didn’t realize it at the time but it was a spectral monster, radiating power over a bandwidth that makes me cringe when I think about it now. It even made all 12 fluorescent tubes in the garage ceiling glow every time we switched it on.

Fortunately, we operated it for only a few seconds at a time, as part of a charity Halloween show. It was the centerpiece of our “Frankenstein laboratory,” sending bolts of lightning as the monster came to life and broke free to terrorize the crowd. Kids would run from the lab in a panic, only to get right back in line for the next show.

As with the radio industry of the last century, I quickly moved on to much more narrowband and civilized electromagnetic radiators. But every time I hear lightning crackle on the AM radio or the clattery, ringing buzz of a spark gap, I think of the true meaning of broadband and hope there is some sort of statute of limitations on my spectrum transgressions.


Improving Measurement Accuracy with Information You Already Have

  Apply information about the individual instruments on your bench

I suppose design and measurement challenges can be a valuable contribution to job security. After all, if a clever and creative person like you has to struggle to hit the targets and balance the tradeoffs, you can’t be replaced with someone less talented—or by a mere set of algorithms.

However, this general promise of increased job security is scant comfort when you’re dealing with a need to improve yield, reduce the cost of test, increase margins, or otherwise engineer your way out of a jam. From time to time, you need a new tactic or insight that will inspire a novel solution to a problem.

This is ground we have walked before, looking for ways to transcend the "designer's holy triangle," and previous posts have explained how adding information to the measurement process can be a powerful problem solver. One approach is to take advantage of published measurements of typical performance in test equipment to more accurately estimate measurement uncertainty.

In a comment on the post that described that approach, Joe Gorin explained it clearly: “What good is this accuracy if it is not a warranted specification? How can it be used in my measurement uncertainty computations? This accuracy is of great value even when not warranted. Most of us who deal with uncertainty must conform with ISO standards which call for using the statistical methods of the GUM (Guide to the Expression of Uncertainty in Measurement). The GUM, in an appendix, explains that the measurement uncertainty should be the best possible estimate, not a conservative estimate.”

To arrive at the best possible estimate, another—often overlooked—source of information is available to many of us: calibration reports for individual instruments.

The power level accuracy of an individual microwave signal generator is shown in a report generated during periodic calibration. The guaranteed specification is shown as green dashed lines (high and low limits) while blue dots represent specific measurements and the pink brackets indicate the associated uncertainty.

It may not surprise you that the measured performance of this signal generator is much better than the guaranteed specifications. After all, generic specifications must apply to every one of that model and account for environmental conditions and other factors that apply to only a minority of use cases. In this example, instrument-specific information can be added to the process of determining total measurement uncertainty, yielding a substantial improvement.
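To make "substantial improvement" concrete, here is a hedged sketch of folding instrument-specific calibration data into a GUM-style uncertainty budget. All numbers, and the choice of contributors, are illustrative assumptions rather than values from a real report:

```python
import math

# Using warranted specs: treat a +/-0.7 dB spec limit as a uniform
# (rectangular) distribution -> standard uncertainty = limit / sqrt(3).
spec_limit_db = 0.7
u_spec = spec_limit_db / math.sqrt(3)

# Using the cal report: the cal lab states its expanded uncertainty (k=2),
# so the standard uncertainty is half of that.
cal_expanded_db = 0.2
u_cal = cal_expanded_db / 2

# Other contributions (mismatch, drift since calibration, ...), assumed here.
u_other = 0.05

def expanded(components, k=2):
    # Root-sum-square combination, then apply the coverage factor.
    return k * math.sqrt(sum(u * u for u in components))

print(f"Expanded uncertainty from specs:      {expanded([u_spec, u_other]):.2f} dB")
print(f"Expanded uncertainty from cal report: {expanded([u_cal,  u_other]):.2f} dB")
```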

Keysight calibration services test all warranted specifications for all product configurations. The resulting calibration data is available online in graphic and tabular form at Infoline for Keysight-calibrated instruments, a process that’s much easier than tracking down paper certificates inside your organization. This testing regime and data access is not universal in the industry, so if you’re not using Keysight calibration services you’ll need to check with your vendor.

The optimal use of this additional information will depend on your needs and the specifics of your measurement situation. So far I’ve only described the availability of the data, but I’m looking deeper into the practicalities of using it and will share more in my next post on this topic.

In addition, a discussion and an excellent set of references are available in a paper on methods for standards-compliant pass/fail conformance testing.

I didn’t learn about calibration in school and my exposure to it in practice has been sporadic. However, I’ve been learning more about it in the past few months and have been impressed with the measures taken in factory and field calibration to ensure accuracy and determine its parameters. You should take advantage of all that effort—and the calibrations you pay for—whenever it will help.


Pulse Analysis: From Many, One*

  Lots of measurements of a stochastic process may provide the deterministic number you seek

For much of my career, measurement has often been a search for The One True Number, or at least the closest approximation I could manage. I have complained about measurements that are more stochastic than deterministic and how noise makes my work life difficult in multiple ways, including excess consumption of my remaining days on this mortal coil.

To be fair, I have also had to recognize the occasional usefulness of noise, and generally accept that it’s an inescapable part of our universe. It’s similar to my views on insects: I don’t like most of them, but I’m pretty sure there would be big problems if they weren’t here.

Recently, I’ve been looking at tools and techniques for measuring RF pulses in radar applications, and it seemed that I had entered a kind of alternate measurement domain. In the past, I’ve made many measurements of individual radar pulses, usually with the 89600 VSA software. Using a wide range of RF front ends, this software quantifies anything you might want to know about a pulse: all kinds of frequency, amplitude (average power, power vs. time, on/off ratio), timing, and modulation parameters such as chirp linearity or modulation quality. With the VSA’s time capture and repeated playback capabilities, you can make most of these measurements on a single pulse (from one, many).

No matter how accurate or comprehensive those measurements may be, they are inadequate in one important respect for applications such as radar: They do not account for the consistency of the pulses in question. The VSA software has taken a pulse-by-pulse approach and generally does not indicate repeatability, stochastic characteristics, or trends in the pulse trains or sequences.

Understanding some aspects of radar performance requires a kind of meta-analysis, quantifying the trends or repeatability limits of various parameters of the signals in question. The recent addition of option BHQ to the 89600 VSA software adds this large-scale statistical view in the form of a measurement application for pulse analysis. One typical measurement, aggregating the behavior of a multitude of pulses, is the histogram.

This histogram of best-fit FM results summarizes the behavior of thousands of pulses, automatically identifying and quantifying outliers.
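The essence of this kind of meta-analysis is easy to sketch: aggregate one parameter per pulse over thousands of pulses, histogram it, and flag the outliers. The data here is simulated, not measured, and the thresholds are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)
fm_error_hz = rng.normal(0.0, 100.0, 10_000)        # well-behaved pulses
fm_error_hz[rng.integers(0, 10_000, 12)] += 1500.0  # a few anomalous pulses

# counts/bin_edges would feed a histogram display like the figure above.
counts, bin_edges = np.histogram(fm_error_hz, bins=100)

mu, sigma = fm_error_hz.mean(), fm_error_hz.std()
outliers = np.flatnonzero(np.abs(fm_error_hz - mu) > 4 * sigma)
print(f"{outliers.size} outlier pulses beyond 4 sigma, e.g. indices {outliers[:5]}")
```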

Radar is a prime example of a system in which repeatability is of critical importance, and where trend behavior can be invaluable in design optimization.

The inevitable question, however, is which parameter to analyze for trends or other statistical measures. This is where the experience, insight and intuition of the radar engineer come into play. As is true in wireless, this is another example of measurement software, powerful DSP and large multi-trace displays working together to leverage the talents of the design engineer.

The radar measurement application automatically identifies and measures large numbers of pulses. Multi-trace displays with both graphical and tabular data take advantage of an engineer's pattern recognition to spot anomalous behavior or identify connections and causes.

Clever software and processing power are no substitute for engineering skill, but they help distill the magnitude and complexity of pulse trains filled with complex signals. While this may not yield a single value as The One True Number, it can mitigate the risks of measuring too few pulses or analyzing too few parameters together.

If you’re interested in this sort of data reduction and analysis, please visit www.keysight.com/find/radar.

* “From many, one” is a common translation of “E pluribus unum” from the Great Seal of the United States.


Dynamic Range and a Different Kind of Analyzer

  Signals and noise in the optical realm

It looks like I'm not the only one wrestling with noise quite a bit, and recent developments in digital photography spurred me to briefly depart from my usual focus (pun intended) on the RF world.

I’m not departing very much, though, because digital photography can be seen as a two-dimensional type of signal analysis. Not surprisingly, many of the electrical engineers I know have at least a hobbyist interest in photography, and for quite a few it’s more than that. Our engineering knowledge helps a lot in understanding the technical aspects of making a good photograph, and I’d like to explain one recent development here.

The megapixel race in digital imaging is abating, perhaps because sensor resolution now exceeds the performance of some lenses and autofocus systems. I see this as a positive development, shifting attention to other important factors such as sensitivity or low-light performance. Sensitivity is as critical as resolution in those all-too-common situations when light is scarce and camera shake or subject movement render long exposures impractical.

Camera sensitivity goes back to the days of film, and the parameter called ISO quantifies it. In film, this sensitivity is related to grain size, but in digital imaging it’s more closely related to gain applied to the signal coming from the sensor. In an interesting correspondence, high ISO settings in a digital camera will produce noisier images that echo the coarser grain of high-ISO film.

This dance of gain and noise is awfully familiar to all of us, and I wonder if we should be suggesting to the digital imaging folks some sort of measure based on noise figure.

Today's best digital cameras offer impressive sensitivity, driving new emphasis on a parameter near and dear to all of us: dynamic range. In the last several years, dramatic improvements in dynamic range have produced cameras that are almost ISO-invariant, and this provides a big benefit for photographers.

Here’s my crude attempt at a graphical representation of the situation.

This digital image “tone flow” diagram shows how a scene with wide dynamic range may be clipped and compressed in the process of capture and conversion to JPEG format. If you rotate this diagram 90 degrees to the left, it corresponds well with the amplitude levels of an RF signal measurement.

For RF engineers, this is familiar territory. Wider dynamic range in a measurement tool is always a good thing, and sometimes there is no substitute.

Taking advantage of this ISO-invariance is simple, though perhaps not intuitive. Instead of exposing normally for a challenging scene, the metering is set to capture the desired highlights, and the image is saved in raw sensor format rather than JPEG. This may leave parts of the scene apparently underexposed, but the raw format preserves the full dynamic range of the sensor, allowing all the tones to be brought into the desired relationship for the end result. In an ISO-invariant camera, deep shadows may be brought up several stops or more without significant noise problems.
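A toy noise model shows why this works. The camera is nearly ISO-invariant when the read noise added after the gain stage is small; all noise figures below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
photons = 20.0        # mean photons per pixel in a deep shadow
n = 1_000_000

def capture(gain, post_gain, read_noise=2.0):
    # Shot noise arises before the gain stage; read noise is added after it
    # (at the ADC); post_gain models brightening in software afterward.
    shot = rng.poisson(photons, n).astype(float)
    return (shot * gain + rng.normal(0, read_noise, n)) * post_gain

high_iso = capture(gain=16, post_gain=1)    # expose at high ISO in camera
pushed   = capture(gain=1,  post_gain=16)   # low ISO, pushed 4 stops in post

for name, img in (("high ISO", high_iso), ("pushed  ", pushed)):
    print(f"{name}: SNR = {img.mean() / img.std():.2f}")
# Near-equal SNRs indicate ISO-invariance; a larger read_noise widens the gap.
```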

The result is more easily demonstrated than described, and an article at dpreview.com discusses the theory with examples. The folks at DPReview even consulted with Professor Eric Fossum, the inventor of the modern CMOS camera sensors that make this possible.

In a related article they also discuss the sources of noise in digital imaging, and once again there are parallels to our common vexations. I’m sure Boltzmann is in there somewhere.


Faster-Sweeping Signal Analyzers: An Invisible Technology that Just Works

  With a benefit or two that should not remain invisible

Though we don’t always think of them in quite this way, signal measurements such as low-level spurious involve the collection of a great deal of information, and thus can be frustratingly slow. I’ve described how the laws of physics sometimes help us, but this bit of good fortune confers only a modest benefit.

Some years ago, the advent of digital RBW filters in signal analyzers brought gains in speed and performance. The improved shape factor and consistent bandwidth yielded better accuracy, and the predictable dynamic response allowed sweep speeds to be increased by a factor of two to four. The effects of a faster sweep were correctable in real time as long as the speed wasn’t increased too much.

The idea of correcting for even faster sweep speeds was promising, and the benefits have gotten more attractive as spurious, harmonics and other performance specifications get ever tighter. To meet these requirements, the principal technique for reducing noise level in a spectrum or signal analyzer is to reduce RBW, with noise floor dropping 10 dB for each 10x reduction in RBW.

Unfortunately, sweep time lengthens with the square of the RBW reduction. A 100x increase in measurement time for a 10 dB improvement in signal-to-noise is a painful tradeoff.
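The tradeoff is easy to put in numbers. This sketch assumes the common swept-analysis scaling of sweep time proportional to span/RBW², with an illustrative proportionality constant:

```python
import math

span_hz = 1e9
k = 2.5   # sweep-time proportionality constant (filter-dependent, assumed)

for rbw_hz in (100e3, 10e3, 1e3):
    sweep_s = k * span_hz / rbw_hz ** 2
    noise_delta_db = 10 * math.log10(rbw_hz / 100e3)
    print(f"RBW {rbw_hz / 1e3:6.1f} kHz: sweep ~{sweep_s:8.2f} s, "
          f"noise floor {noise_delta_db:+5.0f} dB vs. 100 kHz RBW")
# Each 10x RBW reduction buys 10 dB of noise floor at a 100x cost in time.
```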

As has occurred in the past, clever algorithms and faster DSP have combined to improve measurements and relieve the tedium for the RF engineer:

These two measurements cover the same frequency span with the same resolution bandwidth. Option FS1 in the Keysight X-Series signal analyzers (bottom) improves measurement speed by about 50 times.

Fast ASIC processing in the signal analyzer corrects for the frequency, amplitude and bandwidth effects of sweeping the RBW filters at speeds up to about 50 times faster than the traditional minimal-error speed. This improvement applies to swept—not FFT—measurements and is most beneficial when RBW is approximately 10 kHz or greater.

While the speed benefits are obvious, another may be nearly invisible: narrower RBWs also [update: see note below] improve repeatability.

This graph compares the repeatability (vertical axis) of fast sweep and traditional sweep. The lower level and shallower slope of the blue line show both improved repeatability and less dependence on sweep time.

The magnitude of the speed improvement depends on measurement specifics and analyzer configuration, but it's achieved automatically and with no tradeoff in specifications. If slow measurements are increasing your ambient level of tedium, find more information about this technique in our fast sweep application note.

Note: Improved measurement speed and repeatability are alternative benefits in this case, contrary to the implication of my original wording. You can use the same measurement time and get improved repeatability, or you can improve measurement time without improving repeatability. I apologize for the confusion.

Measurement Statistics: Comparing Standard Deviation and Mean Deviation

  A nagging little question finally gets my attention

In a recent post on measurement accuracy and the use of supplemental measurement data, the measured accuracy in the figure was given in terms of the mean and standard deviations. Error bounds or statistics are often provided in terms of standard deviation, but why that measure? Why not the mean or average deviation, something that is conceptually similar and measures approximately the same thing?

I’ve wondered about standard and average deviation since my college days, but my curiosity was never quite strong enough to compel me to find the differences, and I don’t recall my books or my teachers ever explaining the practicalities of the choice. Because I’m working on a post on variance reduction in measurements, this blog is the spur I need to learn a little more about how statistics meets the needs of real-world measurements.

First, a quick summary: Standard deviation and mean absolute deviation (also called average deviation) are both ways to express the spread of sampled data. If you average the absolute values of the sample deviations from the mean, you get the mean absolute deviation. If you instead square the deviations, the average of the squares is the variance, and the square root of the variance is the standard deviation.
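The two measures are easy to compute side by side, exactly as defined above:

```python
import numpy as np

rng = np.random.default_rng(42)
samples = rng.normal(loc=0.0, scale=1.0, size=100_000)

mean = samples.mean()
mean_abs_dev = np.mean(np.abs(samples - mean))     # mean absolute deviation
variance = np.mean((samples - mean) ** 2)
std_dev = np.sqrt(variance)                        # standard deviation

# For a Gaussian, mean absolute deviation = sqrt(2/pi) * sigma ~ 0.798 sigma.
print(f"standard deviation:      {std_dev:.4f}")
print(f"mean absolute deviation: {mean_abs_dev:.4f}")
```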

For the normal or Gaussian distributions that we see so often, expressing sample spread in terms of standard deviations neatly represents how often certain deviations from the mean can be expected to occur.

This plot of a normal or Gaussian distribution is labeled with bands that are one standard deviation in width. The percentage of samples expected to fall within that band is shown numerically. (Image from Wikimedia Commons)

Totaling up the percentages in each standard deviation band provides some convenient rules of thumb for expected sample spread (checked numerically in the short sketch after the list):

  • About one in three samples will fall outside one standard deviation
  • About one in twenty samples will fall outside two standard deviations
  • About one in 300 samples will fall outside three standard deviations
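For the curious, the rules of thumb follow directly from the Gaussian tail probability, where the fraction outside ±k standard deviations is erfc(k/√2):

```python
import math

# Fraction of samples expected outside +/- k standard deviations.
for k in (1, 2, 3):
    frac_outside = math.erfc(k / math.sqrt(2))
    print(f"outside {k} sigma: {frac_outside:.4f} (about 1 in {1 / frac_outside:.0f})")
```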

Compared to mean deviation, the squaring operation makes standard deviation more sensitive to samples with larger deviation. This sensitivity to outliers is often appropriate in engineering, due to their rarity and potentially larger effects.

Standard deviation is also friendlier to mathematical operations because squares and roots are generally easier to handle than absolute values in operations such as differentiation and integration.

Engineering use of standard deviation and the Gaussian distribution is not limited to one dimension. For example, in new calculations of mismatch error, the real and imaginary components of the reflection coefficient each have a Gaussian distribution. Standard deviation measures—such as the 95% or two standard deviation limit—provide a practical representation of the expected error distribution.
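A quick Monte Carlo sketch makes this two-dimensional case concrete. The reflection-coefficient magnitudes and the form of the mismatch term are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
sigma = 0.1 / np.sqrt(2)   # per-component sigma for ~0.1 rms |Gamma|, assumed

def gamma(n):
    # Complex reflection coefficient with Gaussian real and imaginary parts.
    return rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n)

# Mismatch error between a source and load, in dB.
mismatch_db = 20 * np.log10(np.abs(1 + gamma(n) * gamma(n)))

lo, hi = np.percentile(mismatch_db, [2.5, 97.5])
print(f"95% of mismatch errors fall between {lo:+.3f} and {hi:+.3f} dB")
```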

I’ve written previously about how different views of data can each be useful, depending on your focus. Standard and mean deviation measures are no exception, and it turns out there’s a pretty lively debate in some quarters. Some contend, for example, that mean deviation is a better basis on which to make conclusions if the samples include any significant amount of error.

I have no particular affection for statistics, but I have lots of respect for the insight it can provide and its power in making better and more efficient measurements in our noisy world.


About

My name is Ben Zarlingo and I'm an applications specialist for Keysight Technologies.  I've been an electrical engineer working in test & measurement for several decades now, mostly in signal analysis.  For the past 20 years I've been involved primarily in wireless and other RF testing.

RF engineers know that making good measurements is a challenge, and I hope this blog will contribute something to our common efforts to find the best solutions.  I work at the interface between Keysight’s R&D engineers and those who make real-world measurements, so I encounter lots of the issues that RF engineers face. Fortunately I also encounter lots of information, equipment, and measurement techniques that improve accuracy, measurement speed, dynamic range, sensitivity, repeatability, etc.

In this blog I’ll share what I know and learn, and I invite you to do the same in the comments.  Together we’ll find ways to make better RF measurements no matter what “better” means to you.
