Drilling Down Newsletter # 14 -
November 2001 - Customer Surveys
Drilling Down - Turning Customer
Data into Profits with a Spreadsheet
*************************
Customer Valuation, Retention,
Loyalty, Defection
Get the Drilling Down Book!
http://www.booklocker.com/jimnovo
Now also available online through
Amazon and Barnes & Noble
Prior Newsletters:
http://www.jimnovo.com/newsletters.htm
-------------------------------
Drilling Down Newsletter # 14 -
November 2001
In this issue:
# Call for Case Studies
# Best of the Best
Customer Marketing Articles
# Tracking the Customer LifeCycle:
Longer Term Effects
# Questions from Fellow Drillers
----------------------------------
Hi again folks, Jim Novo here. This month we have a call for
case studies from a new magazine, a few very interesting customer
marketing article links (they could all be subtitled, "Hey, let's
try to do it right this time"), and a fellow Driller with a
question on survey bias. We also come to a fork in the road in
the series on Customer Latency, with you making the call on where we
go next on our exploration of High ROI Customer Marketing
techniques. FYI: the last newsletter was longer than usual due
to the ROI "proof" on the Latency promotion. I've made
this one 2/3 the usual length to try and "make it up" to
you, as things tend to get busy this time of year for many.
OK? Let's do some Drillin'!
Call for Case Studies
=============
There's a new magazine coming out in January called Optimize, from
CMP. The idea behind the mag is to present practical real world
implementations of technology, and tell readers how it was
done. "Theories" have to be backed up with case
studies from the people who did the work. I've spoken to
editor-in-chief Brian Gillooly about some articles, since I think the
Drilling Down method is a classic example of what this magazine is
about. But I need some of you to come forward with actual
implementation stories you are willing to have published.
Here's more on the magazine:
http://www.optimizemagazine.com
I have a ton of "soft" testimonials:
http://www.jimnovo.com/testimonials.htm
And I have a bunch of stuff I can't say anything about, because folks
just do not want anybody to know what they're doing. But there have
to be concrete implementation details in these articles for Optimize
magazine. Most of the above folks can't be bothered, or don't
want to reveal the outcome of their work for various reasons - like
their boss will blow a gasket. Too bad. So...do I have any
takers out there? All you have to do is supply the raw data; I
will write up the article for you, and you get final approval of the
contents. By the way, we can mask the actual results - but the
material has to come from a real company where a real human is willing
to go on record saying, "I did this, it works, and (required) here
is how." The case doesn't have to be complex,
earth-shattering, or from a Fortune 500 company. Just everyday
people making it work. Great opportunity for a smaller business to put
themselves on the map with the press.
Can you help me out? E-mail any idea for a case study based
on the techniques in the Drilling Down book here.
Thanks for your help!
Best of the Best Customer Retention Articles
====================
After a boatload of great DM News articles in the last article
update, this time I find no "must read" articles about to be locked
away in the paid archives. So here are a few other must read
articles (which don't expire) you may have missed. The first 2 URLs
are too long for the newsletter, so those links take you to a page
with more info on what is in each article, plus a direct link. The
last is a direct link to an article on my site.
Note to web site visitors: These links may have expired by the time
you read this. You can get these "must read" links e-mailed to you
every 2 weeks, before they expire, by subscribing to the newsletter.
SAS chief: 'Analytics' an overused term
October 17, 2001 searchCRM.com
Oh man, is this guy a classic. Kicked butt and took no prisoners.
And he's right. Analytics should be about prediction; otherwise
they're just reports. Too many of the current analytical apps are
just reporting tools, and if you don't know what to report on,
they're not much help.
Next-Wave Business Analytics
October 12, 2001 Information Week
And there it is, folks. The next wave is here, and it's what
I've been saying right here for a year now. "Change is the major
concern in business... What's needed are business analytics tied to
sensors and thresholds that can alert managers to the slightest nuance
of change..." In other words, not absolute measurement, but
relative measurement. Hmmm...
Should You Build A Data Warehouse?
October 29, 2001 Drilling Down Site
Article by a data architect friend of mine outlining the less
expensive alternatives to a data warehouse - what they are, what
they can do - and a unique approach to proving the case for or against
the selection of any alternative. Definitely in the tradition of
High ROI Customer Marketing - spend only when you have to, and when
you do spend, do it at the point of maximum impact.
Tracking the Customer LifeCycle:
Longer Term Effects
=====================
If you are new to our group, you might want to read the first
four parts of this series.
Last month, we looked at how to execute a Latency-based promotion and
use the two core rules of High ROI Customer Marketing:
1. Don't spend until you have to
2. When you spend, spend at the point of
maximum impact
By focusing your resources squarely on the problem, each dollar you
spend works much harder. By waiting for the trip wire you
narrowed the population you were promoting to, weeding out people you
would normally waste money on. And by acting when the wire was
tripped, you spent at the point of maximum impact. This approach led
to a 114% return on the promotion, as a result of the "found
profits" you generate when you surgically prevent customer
defections with ultra-targeted promotions.
We were left with a question, though. This promotion was not
designed to extend the customer LifeCycle, but to add value to the
LifeCycle. Did we actually extend the LifeCycle, and how would
you measure this effect?
That's the topic for this month.
Recall you made $8000 in 90 days after paying back an
investment of $7000 with this promotion. You generated a bottom
line profit of $1.60 per customer - without even looking at what
happens to the customer after the 90 day promotional period is
over. But it is very likely you did something else with your promotion
- you extended the LifeCycle of the customer - and this is how you
track these "LifeCycle extension" effects.
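Before we get into the tracking itself, here's the first-90-day
arithmetic spelled out as a minimal sketch in Python. The customer
count below is not stated here; it's an assumption backed into from
the $8000 profit and $1.60 per customer figures, so treat it as
illustration only.

# Minimal sketch of the first-90-day promotion math.
# ROI here means net profit divided by promotion cost; n_customers is
# an assumed figure, backed into from $8000 profit at $1.60 each.

promotion_cost = 7000.0    # dollars invested in the Latency promotion
net_profit = 8000.0        # profit after paying back that investment
n_customers = 5000         # assumption: implied by $8000 / $1.60

roi = net_profit / promotion_cost             # 8000 / 7000 -> 114%
profit_per_customer = net_profit / n_customers

print(f"ROI: {roi:.0%}")                                     # ROI: 114%
print(f"Profit per customer: ${profit_per_customer:.2f}")    # $1.60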
All the customers in both the test (received promotion) and control
(did not receive promotion) groups were 3x buyers who failed to make a
4th purchase by 180 days after their first purchase. This was
the Latency "trip wire" selected to trigger the sending of
the promotion. So let's look at tracking these two groups for
another 90 days, and look at continuing purchase activity using what I
call the Hurdle Rate method.
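If you're wondering what pulling that trip wire out of raw purchase
data might look like, here's a minimal sketch in Python. The data
layout (a dict of purchase dates per customer) is a made-up structure
for illustration; the 3-purchase, 180-day rule is the one from the
example.

from datetime import date, timedelta

# Hypothetical purchase history: customer id -> list of purchase dates.
purchase_history = {
    "C001": [date(2001, 1, 5), date(2001, 2, 10), date(2001, 4, 2)],
    "C002": [date(2001, 1, 8), date(2001, 1, 20), date(2001, 3, 1),
             date(2001, 5, 15)],
    "C003": [date(2001, 2, 1), date(2001, 3, 3), date(2001, 4, 20)],
}

def latency_trip_wire(history, as_of, window_days=180):
    """Customers with exactly 3 purchases and no 4th purchase within
    window_days of their first purchase, as of the given date."""
    tripped = []
    for customer, dates in history.items():
        dates = sorted(dates)
        deadline = dates[0] + timedelta(days=window_days)
        if len(dates) == 3 and as_of >= deadline:
            tripped.append(customer)
    return tripped

print(latency_trip_wire(purchase_history, as_of=date(2001, 8, 1)))
# ['C001', 'C003'] -- these 3x buyers trip the Latency wire

From there you would randomly split the tripped customers into the
test group (gets the promotion) and the control group (does not).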
A Hurdle Rate is simply the percentage of customers in a group who
have "at least" a certain amount of activity. You
define the behavior hurdle they have to reach, and measure the
percentage of customers who have achieved this "threshold"
(rate). If you track these percentages over time, you can use
them to compare the actual and potential value of customer groups as a
whole.
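As a concrete illustration of that definition, the calculation itself
is trivial - here's a minimal sketch, with a made-up group of
purchase counts standing in for real data:

# A Hurdle Rate is just the share of a group with at least a given
# level of activity - here, at least `hurdle` purchases per customer.

def hurdle_rate(purchase_counts, hurdle):
    """Percent of customers with at least `hurdle` purchases."""
    if not purchase_counts:
        return 0.0
    over = sum(1 for count in purchase_counts if count >= hurdle)
    return 100.0 * over / len(purchase_counts)

# Hypothetical purchase counts for a group of 10 customers
group = [3, 4, 3, 3, 5, 3, 3, 4, 3, 3]

print(hurdle_rate(group, hurdle=4))   # 30.0 -> 30% cleared "4 or more"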
At the point of the promotion, 0% of both groups had made a 4th
purchase. Recall we measured the profitability of the promotion
over a 90-day period after we sent it to customers. To
track the Hurdle Rates for each group, we ask, "What percent had
made at least 1 more purchase at 30 days, at 60 days, and at 90
days after the 90-day promotion was over, in both the test and control
groups?"
We know some percentage of both groups made a purchase during the
promotion, because there were revenues generated in both groups.
We made a profit in the first 90 days because the revenues were much
higher for the test than control group. So at the beginning of
this "post promotion" tracking, we see 1% of test and 3% of
control have made 4 or more purchases. For the following 90
days, the data might look like this:
% 4 or more purchases.........Control.......Test
End of 90-day Promotion............1%.........3%
30 Days After Promotion End........1%.........5%
60 Days After Promotion End........2%.........8%
90 Days After Promotion End........2%........10%
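To make the bookkeeping behind a table like that concrete, here's one
way you might tabulate it. This is a sketch only - the snapshot
numbers below are tiny made-up groups, not the promotion data above.

# Sketch: track the "4 or more purchases" Hurdle Rate for test and
# control at each checkpoint. Snapshots are hypothetical purchase
# counts per customer as of each date.

def hurdle_rate(counts, hurdle):
    return 100.0 * sum(c >= hurdle for c in counts) / len(counts)

checkpoints = ["End of Promotion", "+30 Days", "+60 Days", "+90 Days"]

snapshots = {
    "Control": [[3, 3, 4, 3], [3, 3, 4, 3], [3, 4, 4, 3], [3, 4, 4, 3]],
    "Test":    [[3, 4, 3, 3], [4, 4, 3, 3], [4, 4, 4, 3], [4, 4, 4, 4]],
}

print(f"{'Checkpoint':<20}{'Control':>10}{'Test':>8}")
for i, label in enumerate(checkpoints):
    control = hurdle_rate(snapshots["Control"][i], hurdle=4)
    test = hurdle_rate(snapshots["Test"][i], hurdle=4)
    print(f"{label:<20}{control:>9.0f}%{test:>7.0f}%")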
Realize this: we have already made money on this promotion, a 114%
ROI. We have already added value to the LifeCycle, increasing
LifeTime Value - no matter how long a "LifeTime" is (does it
really matter, as long as you are making profits?)
But as you can see from the chart above, we also extended the
LifeCycle itself, because the percentage of customers exceeding the
"4 or greater Hurdle" in the test group is far higher than
the percentage of customers over the same Hurdle in control, and it
appears to be growing over time.
There is a group of customers in the test group who just keep on
keeping on - and this percentage (10% at 90 days after Promotion End)
is much higher than both the initial share of the test group who
responded to the promotion and made a 4th purchase (3%) and the
control group (2% at the same point).
What's going on with that?
It's called the Halo Effect. It represents customer activity
stimulated by the promotion which did not occur within the promotional
period. Now we don't know exactly where it's coming from, and we
can't show any measure of profit from it (we defined our
promotion period as 90 days), but it is clearly there, plain as the
nose on your face.
Recall when describing the original promotion, I stated,
"Response doesn't matter; what matters is actual buying behavior.
When you use control groups, you pick up buying behavior you
never could have measured by just looking at response
rates."
This "buying behavior you never could have measured" is
the Halo Effect, working its magic during the promotion.
People you have no way to track will respond to the promotion.
They want to make a purchase but forget the coupon, for example. So
they go ahead and make the purchase anyway - because the promotion
"woke them up" to a need.
After the promotion is over, the same thing continues. It's
the Halo Effect again, working after the promotion. For
example, people think about participating in the promotion but wait
too long. They've missed it. But they're now in a new
state of awareness about your company because of the promotion,
and so are more likely to make a purchase given any random positive
stimulus. Perhaps some product appears on a TV show. Maybe
a competitor promoted a product to them, the customer remembers you
sell it also, and prefers your store.
It doesn't really matter. Fact is fact, and because of your
promotion, you extended the customer LifeCycle. You created a
situation where people became more likely to purchase from your
company in the future, as demonstrated by the chart
above.
Not bad for a beginner. In the first 90 days, your promotion
created present value - real bottom line, measurable ROI - which adds
Value to the customer LifeCycle (LifeTime Value). In the
2nd 90 days, your promotion created future value - accelerated repeat
purchase rates - by extending the length of the LifeCycle of the
customer.
CFO sings your praises! At last, somebody who can prove
they are making more money than they are spending with
marketing!
-----------------------
Well folks, I think I've fulfilled the objective of this
series. In the first article, I stated:
"I'm going to back up a second and explain in a more general
sense how metrics like Latency are used, and in particular, address
some of the misconceptions people have regarding customer value-based
and relationship marketing techniques. Much of CRM is based on
these fundamental ideas. You do not need to live on the bleeding
edge of technology to take advantage of a customer-based management
philosophy."
That's how we started down this current path. Now that you
know where we end up (for now), you might want to skim the whole
series again when you have time. I'm sure it will make more
sense to you as "one article" as opposed to five articles
over five months. Here's
where you start.
So I think I've accomplished the goal from the first article stated
above. But I could go on. And on. So, you tell me.
You just read Part Five in the series. Do you want a Part Six?
If so, send me any e-mail (blank OK) here.
If you think we're starting to beat a dead horse here, and want me to
go in a different direction, send an email here
and tell me what you would like to read about in the world of Customer
Valuation, Retention, Defection, or Loyalty. Helpful: Also tell
me if this series was too easy, just right, or too difficult for your
needs. I'll take the feedback and we'll push ahead next month
with the business of Drilling Down into customer data.
-----------------------------
I can teach you and your staff the basics of high ROI customer
marketing using your business model and customer data, and without
using a lot of fancy software. Not ready for the expense and
resource drain of CRM? Get CRM benefits using existing resources
by scheduling a roundtable or workshop. Details
here.
-------------------------------
Questions from Fellow Drillers
==================
Q: I have really enjoyed reading your newsletters. Keep up
the outstanding work!!
A: Well, thank you for the kind words.
Q: Typically, most marketers offer an incentive for their
customers' valuable feedback on a survey. I am interested in
knowing if studies have proven or disproven that this offering skews
the results of the survey. Any insight you can provide on this
topic would be greatly appreciated.
A: I don't know of any "tight" studies on this I can link
to. In general, rewards of any kind skew results, so I'm not
sure anybody would bother studying it. How much skew occurs
depends on the objective of the survey, the product, and the offer, so
any study would have limited application in different situations.
The whole question of bias in surveys keeps the academic community
alive with perpetual white papers. The key is to be consistent
with your approach and look at trends. The first survey doesn't
mean much as a stand-alone effort; the real question is, are things
getting better or worse? The important number to look at is not
the "absolute" level of any parameter, but the relative
change in a parameter - the change over time.
With or without a reward, you will introduce bias - it's the nature of
this work. Some types of people answer, others don't; you always
get bias! What you want to do is *control* the bias, and one of the
easiest ways to do this is to use the same survey method (and
incentive, if you use one) each time you survey. The real issue
is to set up something your company feels comfortable being consistent
with, and look at the trends. If you ask if people are
"satisfied" and 20% are, this is meaningless. What matters
is the next time you ask, is it 25% or 15%?
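If it helps to see that "relative change" idea as numbers, here's a
tiny sketch comparing consistent survey waves on trend rather than
level - the wave percentages are made up for illustration:

# Sketch: what matters is the trend in a survey metric across waves
# run the same way, not the absolute level of any single wave.
# These satisfaction shares are hypothetical.

waves = {"Wave 1": 0.20, "Wave 2": 0.25, "Wave 3": 0.22}

labels = list(waves)
for prev, curr in zip(labels, labels[1:]):
    change = waves[curr] - waves[prev]
    direction = "better" if change > 0 else "worse" if change < 0 else "flat"
    print(f"{prev} -> {curr}: {waves[prev]:.0%} -> {waves[curr]:.0%} "
          f"({change * 100:+.0f} pts, {direction})")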
Hope this answers your question.
Jim
===================
That's it for this month's edition of the Drilling Down newsletter.
If you like the newsletter, please forward it to a friend!
Subscription instructions are at the top and bottom of this page.
---------------------------
If you're in a tight spot on a customer marketing program or CRM
initiative (it just doesn't pay out / can't prove it makes money) and
need some help making it profitable, check out my project-oriented
services:
------------------------------
Any comments on the newsletter (it's too long, too short, topic
suggestions, etc.) please send them right along to me, along with any
other questions on customer Valuation, Retention, Loyalty, and
Defection here.
'Til next time, keep Drilling Down!
- Jim Novo
Copyright 2001, The Drilling Down Project by Jim Novo. All
rights reserved. You are free to use material from this
newsletter in whole or in part as long as you include complete
attribution, including live web site link and/or e-mail link. Please
tell me where & when the material will appear.