Our initial reaction was that the word "personal" was meant in a social media sense, such as LinkedIn considering every connection of every one of your connections as part of your network, or friends of friends on Facebook, or anyone who follows you on Twitter. But apparently this isn't the issue, as one spokesperson for the report said it meant true "friends and family."
First off, this runs into a common problem we have with surveys. The survey never defined its terms, such as "personal recommendation" (in particular, whether it meant that the shopper personally knew, and knew well, the person giving the recommendation), and it never asked participants what they understood the term to mean.
And secondly, "personal recommendations" came in dead last, in seventh place. This raises another concern about such surveys. Did the sequencing of the options play a major role? In a Web survey, people often choose the first couple of answers that resonate with them and move to the next question. Did personal recommendations fare so poorly because most people had moved on before they even saw that option? That's important to consider before reassessing how important personal recommendations are.
That report spokesperson, John McKenna, had a different take. He guessed—and he stressed it was just a guess—that it was more of a statistical issue. "I think the answer here is that personal recommendations on retail products are not offered as frequently as how often people check out online reviews for retail goods," he said. "For example, out of 10 purchases, you may have only gotten two recommendations from friends. Although you may have listened to them 100 percent of the time (2/2), it was only factored into approximately 18 percent of the purchases."
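McKenna's arithmetic can be made concrete. Here is a minimal Python sketch of his frequency-versus-influence distinction; the function and variable names are ours, and the figures (10 purchases, 2 recommendations) are just the hypothetical numbers from his example, not survey data:

```python
def influence_share(total_purchases: int, recs_followed: int) -> float:
    """Share of all purchases actually swayed by personal recommendations."""
    return recs_followed / total_purchases

total_purchases = 10   # purchases in McKenna's hypothetical
recs_received = 2      # personal recommendations received
recs_followed = 2      # he assumes every recommendation was followed (2/2)

# Trust in recommendations, measured only when one was offered:
follow_rate = recs_followed / recs_received            # 1.0 = 100 percent

# The same behavior, measured across all purchases:
overall_share = influence_share(total_purchases, recs_followed)

print(f"Follow rate when offered: {follow_rate:.0%}")
print(f"Share of all purchases influenced: {overall_share:.0%}")
```

With these numbers the influenced share works out to 20 percent, in the neighborhood of the "approximately 18 percent" in his quote. The point of the sketch is the gap between the two metrics: recommendations can be trusted 100 percent of the time they are offered and still touch only a small fraction of purchases.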
That's an interesting theory, but it seems an unlikely thought process for someone filling out a survey who is asked how influential personal recommendations are. Such a respondent would probably just report how much weight he or she gives those recommendations.
From a psychological perspective, this gets even trickier. The real question should be: "How much purchase-decision weight do you give to a personal recommendation of someone whose opinion you trust?"
The assumption for years, and I would suggest it's a very valid assumption, is that when a shopper gets a strong recommendation from a friend whose opinion on that topic he or she trusts, that recommendation will be extremely influential, more influential than anything else. (By "on that topic," I simply mean that people value opinions differently. If a shopper is trying to decide which high-end photo lens to purchase, a recommendation from a friend who is a well-respected professional photographer would likely be given extremely strong weight, whereas a music recommendation from that same friend might get zero credibility.)
This is the essence of our objection to many surveys put out these days. They can ask very good questions, but by not following up with clarification questions, they can't know exactly what respondents meant and certainly can't know why they said what they said and believe what they believe. Without that agreement on terminology and without asking, and getting concrete answers to, those why questions, execs can't draw any reliable conclusions. And if no reliable conclusions can be drawn, what's the point of the survey?