Personalization: Coming Full Circle, Part I

Arthur O'Connor

Updated · Apr 18, 2001

We’ve come a long way in personalization. So far, in fact, that in many ways we’re right back where we started!

Once a carefully defined (as well as manually intensive and very expensive) niche strategy, personalization has recently become a grab bag of uncoordinated, incompatible, and overlapping tactics targeted at general consumers.

Lured by the bold promises of one-to-one marketing and “mass customization,” many businesses embraced personalization without a strategic framework for setting the appropriate level of investment, or an understanding of customer lifetime value or channels of influence. In many cases (as with so many doomed CRM implementations), companies have undertaken personalization initiatives without even a management commitment to customer supremacy or the enabling organizational incentives and infrastructure.

After so many failed — or at least disappointing — experiments with clickstream analytics, rules-based personalization, and data mining solutions, businesses are realizing that the complexities of our individual customers aren’t so readily captured and gleaned from Web server logs or huge data warehouses — no matter how fancy or expensive the analytics.

Given the tremendous investment required to get a meaningful look at customers, businesses are now focusing on the concept of “service to value.” In essence, they are returning to the belief that not all customers are created equal and that some deserve and need higher levels of profiling and service delivery than others.

In this, the first part of a two-part column, I will set the historical backdrop of personalization and discuss the first of three categories of personalization technologies and how they’ve impacted business practices.

The Good/Bad Old Days
Once upon a time, before corporate managers believed that technology was the answer to everything, personalization was a niche strategy. This strategy involved paying very close and careful attention to what used to be called “regular customers” — the biggest and best repeat buyers who generated the bulk of the revenue (the proverbial 20 percent that generates 80 percent of the business).

So businesses devoted a lot of time, energy and attention to these regular customers. Sales managers took it upon themselves to know and teach other employees about the unique needs and interests of these individuals. On any given day, ordinary customers would come and go, but when Ms. Regular Customer showed up at the store, everybody jumped and focused on meeting her interests, needs and concerns.

In providing these customers special treatment, businesses incurred more costs, but the investment was worth it. Nobody had to perform a return on investment (ROI) or cost-benefit analysis to justify the added expense and effort. They did it because it made good business sense.

The Advent of Database Marketing
Then technology came along, with the promise of “proactively managing consumer relationships through developing customer intimacy, anticipating their needs, and delivering unique shopping and service experiences.” Or some such drivel.

It all started in the 1980s, and it was called database marketing. This involved the practice of dividing consumers into discrete segments based on analysis of their purchases (usually credit card transactions), credit history, and other financial data, as well as standard demographic and new “lifestyle” or “psychographic” information.

Based on this analysis, customers (people like you and me, and for that matter, everyone) were labeled, categorized, typecast, and pigeonholed into groups called market segments. These groups were then solicited relentlessly, based on some statistician’s assumption of what “we” liked.

The success of such efforts? Not good. The targeting criteria were simplistic and primitive, and many of the assumptions were just plain wrong. It’s no accident that these efforts coincided with a consumer backlash in the form of consumer protection legislation against telemarketing and direct response activities.

Then came the Internet in the last decade, which, along with advances in data storage, analysis, and communications technologies, brought new advances in personalization. I’ve grouped these into the following three categories, ranked in increasing complexity and cost:

  1. Simple Web-based recommendation engines and clickstream analytics
  2. Business rules-based systems (which can be cross channel)
  3. Advanced data analytics and data warehousing

Simple Web-Based Analytics
One of the great beauties of the Web is that every move your prospect makes — every link or banner clicked on and every search conducted — can be meticulously recorded in an extremely cost-effective way — by merely configuring the Web server’s log file to record such data. Perhaps better still, individual customers can be readily “recognized” when they return to your site, through client-side (cookies) or server-side (user agent identification) techniques.
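To show just how cheaply this data can be captured and analyzed, here is a minimal sketch in Python that groups page requests into per-visitor clickstreams, assuming access logs in the standard Common Log Format. The pattern, function name, and sample entries are hypothetical, and a production site would typically key on a session cookie rather than the client host used here.

```python
import re
from collections import defaultdict

# Common Log Format: host ident authuser [date] "request" status bytes
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) \S+'
)

def build_clickstreams(log_lines):
    """Group requested paths by visiting host to approximate a clickstream."""
    streams = defaultdict(list)
    for line in log_lines:
        match = LOG_PATTERN.match(line)
        if match:
            streams[match.group("host")].append(match.group("path"))
    return streams

# Two hypothetical log entries from the same visitor.
sample_log = [
    '192.0.2.1 - - [18/Apr/2001:10:00:00 -0500] "GET /products/cd-123 HTTP/1.0" 200 5120',
    '192.0.2.1 - - [18/Apr/2001:10:01:30 -0500] "GET /search?q=jazz HTTP/1.0" 200 2048',
]
print(dict(build_clickstreams(sample_log)))
# {'192.0.2.1': ['/products/cd-123', '/search?q=jazz']}
```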

The good news is that businesses can accumulate a wealth of data about their online prospects. The bad news is that — as with most offline customer data — most organizations really don’t know what to do with all this information. Aside from producing fancy reports of how many pages were served, who looked at them, and what content they viewed, the online reports provide a very limited view of who these people are and what their unique interests and needs may be, and thus have unclear economic value as actionable marketing or sales information.

One of the most promising Web-based personalization technologies has been collaborative filtering. This relatively inexpensive technology compares information and identifies behavioral patterns through a simple (some would argue, shallow) analysis of data relationships. This personalization technology operates on the assumption that groups of users share similar tastes — so that if you like product A, you’ll probably like product B, which many product A buyers have also bought. While this rule doesn’t hold up for many product and service categories, it has been successfully used for books and movies.
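To illustrate the “people who bought product A also bought product B” assumption, here is a minimal sketch of a co-occurrence-based recommender in Python; the function name and purchase histories are hypothetical, and commercial collaborative filtering engines use considerably more sophisticated similarity measures.

```python
from collections import defaultdict
from itertools import combinations

def also_bought(purchase_histories, target_item, top_n=3):
    """Recommend the items most often bought alongside target_item."""
    pair_counts = defaultdict(int)
    for items in purchase_histories:
        # Count every pair of distinct items bought by the same customer.
        for a, b in combinations(sorted(set(items)), 2):
            pair_counts[(a, b)] += 1
            pair_counts[(b, a)] += 1
    related = {b: n for (a, b), n in pair_counts.items() if a == target_item}
    return sorted(related, key=related.get, reverse=True)[:top_n]

# Hypothetical purchase histories, one list per customer.
histories = [
    ["Product A", "Product B", "Product C"],
    ["Product A", "Product B"],
    ["Product A", "Product C"],
    ["Product B", "Product D"],
]
print(also_bought(histories, "Product A"))
# e.g. ['Product B', 'Product C'] -- each co-purchased with Product A by two customers
```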

By collecting expressed preferences from groups of users, collaborative filtering can be an effective recommendation engine (although, in most applications, it is used to serve targeted content). The advantage is that it’s easy and cheap, and, because it is based on expressed preferences — gathered either explicitly, through online forms, responses to inquiries, and category searches, or implicitly, by recording the pages a user accesses and/or the products they buy — collaborative filtering provides richer and more adaptive personalization than simple market demographic analysis.

The downside is that it results in only a rough categorization of customers, from an extremely product-centric view (it’s only about their preferences for certain products — not about why they prefer them). And if context is not factored into the approach (for example, whether the purchase is made as a gift for someone else), the personalization results can be horrendous. Just imagine the recommendations a collaborative filtering engine will generate for a classical music buff after he buys an Eminem CD for his nephew’s birthday.

In the next column, I’ll discuss the advantages and disadvantages of rules-based personalization and advanced analytical and data mining solutions, and discuss where personalization as a strategy is headed.

Arthur O’Connor is a senior manager in the financial services practice of KPMG Consulting, specializing in customer-facing strategy as well as related architectural and organizational issues. An accomplished author, speaker, and consultant, Arthur is one of the country’s leading experts in customer relationship management (CRM) and eCRM.
