Wednesday, November 17, 2010

Real-world Radar Detector Tests

A recent caller had a question: "You test in the desert with no traffic around, no trees, nothing. But I don't live in the desert and there are cars and trees and hills where I drive. So why don't you do a real-world detector test?"

A fair question and one easy to answer: there's no way to conduct a meaningful test of radar detectors in the real world. It would be worse than useless: misleading and, more to the point, almost certainly unfair to some of the products. Here's why.

A comparison test generally pits one model against another; sometimes it's a before-and-after look at the same model's performance following a significant modification. But usually it's a shootout, and like any shootout, there's only one winner.

Editors love shootouts because they sell magazines, and from the late eighties until 2002 I conducted dozens of tests for magazines both here and abroad. Testing was laborious, but writing the stories was much tougher. While it's easy to savage an under-performing detector and garner some laughs in the process, the tester has to keep in mind that even the failures are usually collaborative efforts that consumed thousands of man-hours and a lot of money. Nobody sets out to create a loser; with rare exceptions, the people behind the product did their best. Sometimes that just isn't good enough. Regardless, every participant in a shootout deserves a fair shot.

Unlike some gadgets, radar detectors can't be tested in parallel; with only a handful of exceptions, their local oscillators interfere with one another. (Even when one doesn't cause another to alert, sophisticated DSP-controlled detectors often dial back sensitivity to eliminate the nuisance signal, chopping sensitivity, and with it warning range, to almost nothing in the process.) So they have to be tested one at a time, sometimes hours apart if the field is unusually large or conditions dictate.

During those hours, none of the variables in the testing environment can change. This includes radar antenna alignment, radar detector mounting position (we use an elaborate test fixture to maintain alignment), target vehicle, traffic, even the weather. (Blowing dust, for example, significantly influences microwave propagation and detection range.)

The terrain at the test site has a huge influence on detector performance. Placing detector and radar on level ground facing each other makes the detector's job easy, particularly when there's little foliage and no structures or terrain to block the signal. One operator, a guy who sells product endorsements, tests exactly like this; not surprisingly, every detector he's ever tested has won his seal of approval.

Our Straightaway Test site is composed of two parallel, 2.5-mile-long straight stretches offset by a half-mile-long lateral section of road that spirals down to hop across a low-water crossing. It's not a down-the-throat shot but to a detector, it's reasonably close.

Here, the Rocky Mountain Radar RMR-D210, one of the most inept detectors I've ever tested and an electronic twin of the fabled RMR-D312, could spot most of our radars from 5,100 feet away. In these conditions that's plenty of range: although most of the radars could easily reach out to nearly 3,500 feet here, target-capture range in practice is usually more like 700 to 800 feet.

But at our Curve/Hill site, everything is stacked in favor of the radar, exactly like you'll find in the real world. Here the radar vehicle sits hidden in the middle of a plunging S-curve, picking off targets as they pop into view only 650-odd feet away. The police vehicle can't be seen until nearly the moment the radar locks in a speed. And many radar detectors won't help here.

The radar beam points at a sharp angle uphill, across the target's direction of travel and skyward, not at the detector. This off-axis signal is vastly more difficult to spot, which is the very reason we test at this site. The Rocky Mountain Radar RMR-D210 belatedly squawked an alert at 600 feet, a few car lengths after the radar had already locked in a target speed.
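
To put those numbers in perspective, here's a quick back-of-the-envelope calculation. The alert and capture ranges come from the two sites described above; the 70-mph closing speed is my own assumption for illustration, not a figure from our tests.

    # How much warning does an alert actually buy? Ranges are from the two
    # test sites described above; the 70 mph speed is an assumed figure.
    MPH_TO_FPS = 5280 / 3600  # 1 mph = ~1.47 ft/s

    def warning_seconds(alert_ft, capture_ft, speed_mph=70):
        """Seconds between the detector's alert and the radar's capture range."""
        return (alert_ft - capture_ft) / (speed_mph * MPH_TO_FPS)

    # Straightaway site: alert at 5,100 ft, typical target capture at ~750 ft
    print(round(warning_seconds(5100, 750), 1))  # ~42.4 seconds of warning
    # Curve/Hill site: RMR-D210 alert at 600 ft, radar locked in at ~650 ft
    print(round(warning_seconds(600, 650), 1))   # ~-0.5: alert after the lock

Forty-odd seconds of warning on the straightaway; at the curve, the alert arrives after the speed is already locked in.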

Even the Escort RedLine, proven the world's best radar detector in sheer sensitivity (warning range) in our recent test, alerts here at only about 3,300 feet. (I say "about" because the number varies slightly every time we test at this location, despite elaborate procedures to replicate conditions.) Yet in that other test, run on flat, featureless desert, it spotted the same radar from 14.25 miles away.

Clearly, the test site and test procedures exert a huge influence on the results. So do the radar and its operation. At our Long-range Straightaway test site, the radar vehicle driver's door is aligned with our paint mark on the pavement, and each radar antenna is aligned with a bubble level, then aimed at a reflector stake we hammered into the ground years ago. The stake sits about 1,000 feet down the road, on a line that runs directly to the terminus of the test site.

With the radar transmitting, the target car carrying the detectors takes its position at the far edge of the test site, 5.4 miles away. Using one of the control detectors we've run at this site continuously since 2003, we check whether it detects each of the four to six radars being used for the test. If it doesn't, the driver radios the radar operator to make tiny changes in antenna alignment until the detector sounds a continuous alert. Then the test begins.
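
For the curious, that sequence reads roughly like this as a procedure. This is only a schematic sketch in Python; every class and function is a hypothetical stand-in for a manual step (radio calls, antenna tweaks, listening for the alert), not actual test software.

    # Schematic of the calibration sequence described above. Everything here
    # is a stand-in for manual steps, not real test software.
    class Radar:
        """Stand-in radar: each unit needs a few alignment tweaks to be heard."""
        def __init__(self, name, tweaks_needed):
            self.name = name
            self.tweaks_needed = tweaks_needed

        def tweak_antenna(self):       # radar operator nudges the alignment
            self.tweaks_needed -= 1

        def detected(self):            # does the control detector hear it?
            return self.tweaks_needed <= 0

    def calibrate(radars):
        # the target car sits at the far edge of the site, 5.4 miles out
        for radar in radars:           # four to six units per test
            while not radar.detected():
                radar.tweak_antenna()  # driver radios: "a hair to the left"
            print(radar.name, "- continuous alert, aligned")
        print("All radars detected; the test can begin.")

    calibrate([Radar("Unit A", 2), Radar("Unit B", 0)])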

But even with these safeguards, sometimes there are surprises. We'd used the same Stalker radar for several years of testing, yet once, during this calibration sequence, the control detector inexplicably couldn't hear it. No amount of antenna realignment could induce the detector to pick up the previously detectable radar. We tried another control detector, with identical results.

At wits' end, I finally asked the radar operator to replace the antenna with a spare unit. In response, the detector barked an alert at maximum range. After some experimentation, switching between the two, we found that the recently repaired antenna we'd started with could still be detected, but detection range was now 20 percent less than in prior tests. In a later chat with Stalker's chief engineer I learned that a running change had been made to the antenna components and that, thinking they were doing me a favor, Stalker had fitted our antenna with the new innards during the repair. It looked the same, but detectors somehow found it far more challenging to spot.

This is why, to be accurate, the tester has to control every variable. That includes traffic. Recently we tested detection range with the radar behind the detector. The Valentine One has a rear-facing radar antenna and detects radar from behind just as well as it does from the front. That feature has sold a lot of radar detectors for Valentine, but the truth is, any competent detector will detect radar coming from behind. The signal shoots past the target, reflects off a road sign or nearby structure, sometimes off the back of an 18-wheeler's trailer, and lands straight in the detector's antenna, setting it off.

From experience I knew that conventional detectors' range against rear radar is heavily influenced by the presence of 18-wheelers. Follow one with a polished-aluminum trailer door at 50 feet and it'll bounce back a signal strong enough, from a radar four miles back, to drive a high-end detector into a frenzy. Drive 50 feet in front of one and it will effectively block the signal, cutting even the V1's range by 90 percent or more.
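
The arithmetic behind that 90 percent figure is worth sketching. Under a simple one-way, inverse-square propagation assumption (my simplification; real-world multipath is messier), received power falls with the square of distance, so detection range scales with the square root of the surviving signal:

    # Fraction of detection range left after the path loses some number of
    # decibels to blockage, assuming one-way inverse-square propagation.
    def range_factor(attenuation_db):
        """Fraction of detection range left after losing X dB to blockage."""
        return 10 ** (-attenuation_db / 20)

    for db in (6, 10, 20):
        print(f"{db} dB blocked -> {range_factor(db):.0%} of normal range")
    # 20 dB blocked -> 10% of normal range, i.e. the ~90% cut noted above

On that yardstick, a trailer soaking up around 20 dB of the signal is enough to account for the entire 90 percent cut.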

And sure enough, with the radar car sitting next to I-10, detection range varied from 0.3 mile to over 2.0 miles. The difference was caused by truck traffic and the moving target car's proximity to road signs. It took endless hours to make a few good sets of runs not influenced by passing trucks, an illustration of why testing on public roads is generally a lousy idea.

These examples show why "real-world radar tests" prove nothing. Most of those who earnestly offer them as evidence of detector performance, usually on social media and YouTube, aren't even using their own radar. They're depending on an anonymous signal, maybe a state trooper spotted earlier, parked at roadside.

But they're not operating the radar; the trooper is. And if he's like most, he'll be switching between the front antenna and the rear, and often placing the unit on RF Hold before taking another snapshot of a likely violator. On RF Hold there's no signal present to set off a detector. Or he may adjust the antenna alignment or reposition his car, completely altering the beam strength as the detector sees it.

In the mid-nineties we were engaged by a radar manufacturer to conduct the first comparison test of all the front-line moving radar units. For the test I hired the Colorado State Patrol's chief radar instructor as an assistant. I didn't really need his help but felt that a veteran CSP sergeant as an observer would help defuse the inevitable claims by the losers that the test was rigged.

Leaving the sergeant in the radar vehicle to mind the hardware, we began testing. After an hour of repeated passes, the numbers weren't making sense. Maximum target-capture range of the first radar model, an MPH Python II, was varying wildly, jumping from barely 1,200 feet on some passes to nearly 3,500 feet on subsequent runs. In the total absence of other traffic, all other variables being equal, there is no way on Earth radar should behave this way.

Exasperated and bewildered, finally we drove back to the radar vehicle. After chatting for a bit, I casually asked the sergeant: "Mike, by chance did the radar antenna get bumped or something while we were out there?"

"Didn't bump it but I did adjust it a bit," he said. "Figured you might get better range."

Now things made sense. Every time he tweaked the antenna alignment by a few millimeters, everything changed.

After admonishing the radar instructor not even to breathe on the antenna for the duration of the test, we started over. And this time the results were consistent. Moral: don't assume that someone with a Radar Instructor title necessarily knows much about radar, an observation I've since had occasion to confirm more than once.

On some YouTube flicks the videographer is using a radar speed trailer parked on the shoulder. Employing one of these as a radar source is equally fraught with peril since the results are heavily skewed by passing traffic. Just as that rear radar test illustrates the influence of large vehicles on detection range, even pickup and SUV-sized vehicles passing between detector and radar trailer will cause sudden drops in detection range as they block the beam. And it's entirely possible that a vehicle may pull onto the shoulder, disrupting the beam for the duration of its visit. But the videographer probably won't notice; he's half a mile down the road, driving the car, filming the action and staring intently at the detector, waiting for an alert.

Under these conditions there's no testing being performed; it's just some enthusiastic guys driving around aimlessly and watching the detector. When one performs differently than another, they have no idea why; they just report it as fact. This makes for entertaining YouTube footage but as an accurate comparison test, forget about it. There's no control of the variables.

A complex product like a radar detector doesn't lend itself to real-world tests. Controlling the variables is the whole point of product testing: it's what lets a competent tester consistently replicate results within a window of confidence. The process isn't perfect, but anything else purporting to be a test is merely theater, and rarely a reflection of a radar detector's true performance.