Get more detail on J.D. Power studies
The writer is an economics professor at Washington and Lee University and an Automotive News PACE Awards judge.
To the Editor:
Regarding "Lexus again tops J.D. Power dependability study; Detroit 3 narrow gap," autonews.com, Feb. 13: The averages reported in quality surveys show the gradual (but, over time, substantial) improvement that is confirmed by casual observation of new and used cars.
However, I am skeptical of the brand-level data. How large are the sample sizes for different makes and hence the statistical margin of error?
Then there are the biases inherent in survey methodology, including selective reporting and confirmation bias.
The quality gap between Lexus and Toyota is too large to be credible. First, luxury car buyers are pampered: A casual mention at a regular service checkup gets issues addressed without the need to make a special trip to the dealership. So problems are surely underreported.
Then there's confirmation bias: Owners of such vehicles want to believe they spent their money well and adjust their perception of reality accordingly.
I have to assume that those undertaking such surveys at J.D. Power and Associates understand those issues. They may have a sense of the extent of confirmation bias, though that's a hard nut to crack. But surely they estimate their margins of error and compare respondents with nonrespondents to gauge selection bias.
The rank-order method of reporting is part of their business model, lending itself to the spate of news stories and (for some brands) taglines in their advertising.
Given that, journalists ought to ask the people at J.D. Power for more detail rather than serve as passive marketers for them.