For matters of credibility, we overwhelmingly avoid reporting on studies that are really vendor marketing efforts, studies whose results were likely determined before the first question was asked. This gets us into a thorny editorial debate: if the idea or the results are valid, should the source matter? The answer is that it probably shouldn't, as long as the methodology is valid and fair.
This particular report got a lot of attention after another publication, Internet Retailer (for which we have a lot of hard-earned respect), ran a story about the abandoned shopping cart report. The study was done by Listrak, which sells software that, you guessed it, chases online consumers after they've abandoned a shopping cart. The headline on the Listrak statement was "E-mail Service Provider Listrak Conducts 'Shop and Abandon' Cart Study Using Internet Retailer 500 List." So we have a vendor that sells this stuff saying retailers should buy more of it, and a publication reporting on the study while publicizing its own brand.
None of this disproves the results, but I'd be a lot more comfortable if it were The Wall Street Journal covering a Forrester report. Still, if the methodology is solid, little else matters.
This is Listrak's description of its methodology: "Between June 15 and June 30, 2009, two Listrak employees visited each of the Internet Retail 500. They shopped and abandoned carts on 398 sites. The remaining 102 sites, or just more than 20 percent, could not be shopped or included in the study, either because they required a credit card number to put items in a shopping cart, or they did not require an E-Mail sign-in, meaning abandoning a cart could not trigger an abandonment E-Mail."
Starting with the Internet Retailer 500 list was a sound choice, although if a vendor is going to drop a publication's brand into a news release, it should at least get the name right. No matter. It's the next line that's the problem. The significance of this announcement rests heavily on these being the largest retailers, and the statement says 102 sites were excluded. If almost all of them came from the bottom of the list, that's not a big problem. If they came mostly from the top, it's a huge problem.
So what triggered the exclusions? Exclusion Number One: "They required a credit card number to put items in a shopping cart." Setting aside the nitpick that they probably meant a credit or debit card number, this was grounds for exclusion? Granted, requiring a card number up front is not the shrewdest thing an e-tailer can do, as it sharply discourages the browsing and experimentation that often lead to sales. But why not simply enter a credit card number to test the system? It doesn't seem a valid reason to exclude those sites.
Unless, that is, someone thought it through and concluded that such a requirement would likely eliminate the tire-kickers with no intention to buy, and that including such sites would reduce the apparent need for abandoned-cart chasing. Exclusion Number Two: "Did not require an E-mail sign-in." Many sites, wisely I would argue, give shoppers the option to shop as a guest or to sign in. Guest mode is far more anonymous, a good approach for shoppers who don't want to be chased down if they opt to buy elsewhere. As a research methodology, it's a completely legitimate exclusion. But given that many of the largest sites offer anonymous shopping, would excluding them eliminate some of the largest brands?
We debated whether we should even note this study, but concluded that we should present the data, along with our concerns. The premise of the report, that retailers should use the data they have to push for more sales, is a valid one. I just wish the messenger didn't have such an obvious incentive to reach that conclusion.