Strategy in Advertising, by Leo Bogart

Matching Media and Messages to Markets and Motivations

 

What one little ad can do

 

The job hunter reads the classified ads with a purpose and with attention strongly focused. Perhaps at not quite the same level of intensity, the woman who is in the market for a new fall suit will study ads to get ideas on the specifics of style and pricing before she goes on her shopping trip. Most retail advertising is designed to have this kind of immediate effect. Retailers can measure the direct pull of a specific ad from mail and phone orders or from the increased volume of traffic at the sales counter. An ad can pull traffic but not sales if the merchandise does not live up to the promises.

Unlike the retailer, the national advertiser rarely expects that any individual advertisement will produce a visible sales response unless it announces a radically new product, or represents a special offer of limited duration, or has a strong fashion or fad appeal. Rather, he expects that the individual ad, along with others in the campaign, will produce a cumulative effect when they are exposed repeatedly to the same people. If the consumer's disposition toward a brand is apt to be built more out of actual use than out of casual, unwanted advertising exposures, an advertisement's duty may not merely be to add its trivial mite to the pile of previous trivial exposures, but also to produce an immediate effect on the very few people who were already (whether they knew it or not) "ready to buy" and predisposed to attend to the message with more than casual interest.

Because national advertisers generally acknowledge that the impressions created by a single product advertisement are at a low level of intensity, field studies of advertising effects normally try to measure the impact of a whole campaign, rather than that of any one individual message.55 The more directly one focuses the attention of experimental subjects on the communications being studied, the easier it is to measure responses that disappear in the confusion of the real marketing world. But can traceable effects be measured for a single national ad under normal conditions of exposure?

This question was addressed in a large-scale field experiment run by the Newspaper Advertising Bureau in 1968. In six cities of different sizes, in different parts of the country, matched samples of home delivery routes were selected. The subscribers in each sample received copies of their morning newspapers with six specially prepared and inserted pages that included a selection of ads, each averaging a quarter page in size. (The fake pages were undetectable, as it turned out.) For 18 packaged goods brands, half the papers distributed contained an ad for one brand, while the comparable sample got a competing ad. For six other packaged goods and for seven durable items (like cars and refrigerators), the substitutions were randomized. Thus each set of ads represented a control for the other. About 30 hours after the papers were delivered, personal home interviews were conducted with 2,438 housewives. They were asked about all their shopping "today" and "yesterday," with the aid of a series of questions that reminded them of different kinds of stores and products. They were asked also about their purchase plans for "today" and "tomorrow" and then questioned about the product categories and brands covered in the experiment.
Only at the end of the interview were they asked specifically about their reading of the newspaper and their recollection of the ad. So the key comparisons were between women whose papers carried each ad and those whose papers didn't. In the day and a half between delivery of the test paper and the interview, respondents were, of course, exposed to a variety of advertisements in all the test product categories and in all media, but these could be assumed to carry equal weight in the test and control groups. We were interested in measuring the effects of a single ad over and above the normal flow of advertising messages. How advertising relates to sales and attitudes can be studied by first considering only those people who had the opportunity to receive each test ad in their home-delivered newspaper. If they paid attention and absorbed the copy points of an ad, were they also more likely to buy the advertised product?
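
The split-run logic of this design can be pictured in a few lines. The sketch below is only an illustration, with invented route identifiers and brand labels rather than the Bureau's actual assignment procedure: within each matched pair of delivery routes, one route's papers carry the ad for one brand and the other route's papers carry the competing ad, so that each half of the sample serves as the control for the other.

```python
import random

# Hypothetical home-delivery routes, matched in pairs (not the Bureau's
# actual sample). Each pair is split at random so that one route receives
# the page carrying brand A's ad and the other the page carrying brand B's,
# making each half of the sample the control group for the other half.
route_pairs = [("route_01", "route_02"),
               ("route_03", "route_04"),
               ("route_05", "route_06")]

assignments = {}
for first, second in route_pairs:
    ads = ["brand_A_ad", "brand_B_ad"]
    random.shuffle(ads)                      # randomize which route gets which ad
    assignments[first], assignments[second] = ads

for route, ad in sorted(assignments.items()):
    print(f"{route}: insert page carrying {ad}")
```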

To answer this question we compared purchases of the advertised brand by people with varying degrees of exposure and memory of the advertising. The group who best remembered the ad were those who could prove recall by playing back a spontaneous description. Of these, the ones who "connected" with it were the ones to whom it communicated best*; others, when the ad was actually shown to them, remembered noting or reading it; still others who had stopped to read something on the page did not remember the test ad. Then there were those who remembered opening the spread but had not read the ad or any other ad or article on the page. Finally, there were the "unexposed," who said that they had not opened the page on which the ad appeared in the paper. (*As I described on page 118, personal "connections" between ad content and the reader's own life measure an ad's ability to arouse involvement with the product. The concept was originally defined by Herbert Krugman as ". . . conscious bridging experiences or personal references . . . that the subject makes between the content of the persuasive stimulus and the content of his own life.")

Among those who could prove recall of an ad, 9 in 1,000 bought the advertised brand in the next day and a half. Among those who did not open the page at all, or who opened it but read nothing on it, only 2 in 1,000 bought the brand. Those who could prove recall of the ad, and especially those who played back personal connections with it, were much more likely to say the brand was one they preferred. These findings would at first glance seem to demonstrate conclusively that advertising communication is strongly linked to sales effects. However, a skeptic might still legitimately ask, "Do they buy the brand because they remember the ad message, or do they remember the message because they are (perhaps for totally extraneous reasons) customers for the product?" Had the study rested its case solely on people's memory of the advertising they had read, there would still be strong doubt on the subject of cause and effect.

Fortunately this question had been anticipated in the experimental research design. Taking the aggregate of all the ads and brands under study, we found that in comparison with the control group, the test group showed 14 percent more purchases of the advertised brand (a difference that could occur by chance only once in eight times), a 10 percent greater brand share (a difference that could occur by chance only once in six times), 15 percent more sales of any brand of the advertised product for the six cases where this comparison could be made (a difference that could occur by chance only once in twelve times), and 4 percent more first choices of the advertised brand for purchase "next time" (a difference that could occur by chance only once in eight times), about the same for the 24 packaged goods and the seven durables and for those who were in the immediate market for a product and those who were not. The research also found parallel 30-hour results for a sample of television commercials that were measured as a by-product of the basic experimental design. There was one finding that ran counter to expectations: fewer women said they planned to purchase the test brand. Perhaps a partial explanation is that the ads worked to trigger faster buying action on the part of women who were otherwise vaguely "in the market." By converting purchase intentions into actual buying, they may have temporarily reduced the pool of potential purchasers of the test brands.
The overall consistency of the experimental findings corroborates the earlier conclusion that advertising communications cause sales, quite apart from whether the reverse is also true.
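
For readers who want to see how a test-versus-control comparison of this kind is computed, here is a minimal sketch with invented purchase counts. The sample sizes and buyer counts below are assumptions chosen only to produce a lift of roughly the size reported above, not the study's raw data; the sketch computes the percentage lift and the chance probability of a difference at least that large under a pooled two-proportion z-test.

```python
from math import sqrt, erf

# Hypothetical counts, not the Newspaper Advertising Bureau's raw data:
# buyers of the advertised brand among women whose papers carried the test
# ad versus women whose papers carried the competing (control) ad.
test_buyers, test_n = 120, 1219
control_buyers, control_n = 105, 1219

p_test = test_buyers / test_n
p_control = control_buyers / control_n
lift = (p_test - p_control) / p_control
print(f"Purchase lift over control: {lift:.1%}")

# Pooled two-proportion z-test, one-sided: how often would a difference at
# least this large arise by chance if the ad had no effect at all?
p_pooled = (test_buyers + control_buyers) / (test_n + control_n)
se = sqrt(p_pooled * (1 - p_pooled) * (1 / test_n + 1 / control_n))
z = (p_test - p_control) / se
p_value = 0.5 * (1 - erf(z / sqrt(2)))   # upper tail of the normal curve
print(f"z = {z:.2f}; a difference this large arises by chance about "
      f"once in {1 / p_value:.0f} times")
```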

What one little ad can't do

Apart from the overall results, it was also possible to look at the variability of performance among the advertisements measured, in terms of all the available measurements of readership and sales effects. The first conclusion that emerges from this analysis is that while on balance and over the long haul advertising promotes sales and improves reputation, individual ads may not merely fail to produce results; they may produce negative results, as was found in the case of three ads in this test. As I have pointed out, there is no truth in the common idea that advertising pressure always works in a favorable direction, that mere exposure of consumers to the advertiser's message is bound to attract rather than to repel them. An ad may convey unintended communications that arouse irrelevant fantasies. Copy, visuals, models, background, all convey symbolic meanings that may or may not enhance the product message. One might even infer that a bad ad or commercial that might simply disappear down the "memory hole" of the average reader or viewer would arouse more visible negative effects on the part of a real, live prospect whose mind was already on the product and who was really paying attention to the message. On the other hand, a strong advertisement heightens awareness not only of the advertised brand but of the generic product category, so it may add strength to the competition in an inelastic market.

There is always great resistance on the part of advertising practitioners to the evidence that a good deal of advertising fails to do the job or is actually harmful. Null or negative results are commonly encountered in field surveys and are as commonly explained away on the grounds that the evidence is statistically inconclusive. Since single ads may have a reverse effect, the risks are less when the advertising researcher measures a campaign in which, on balance, the results are more likely to show movement in the desired direction. If some ads work against the advertiser's interests, this should only heighten interest in techniques that might help to distinguish good ads from bad ones. For this purpose, all the ads in our study were ranked in terms of each performance variable on which there were data. The findings indicated that in a pair of competitive ads, one may be superior by some yardsticks of communication, the other by different yardsticks. Recognition and recall rank orders were closely (but not perfectly) correlated, just as they were shown to be (at the +.92 level between recognition and aided recall) in a major comparison of print advertising rating methods conducted years earlier by the Advertising Research Foundation.57

Brand preference, not unexpectedly, showed a moderate relationship both with sales on the one hand and with proven recall on the other. But do measures of ad readership tell us whether an ad is serving the advertiser's sales objectives? Our data showed almost no relationship between an ad's sales performance when compared with other ads and its comparative readership performance, as measured either by recognition or recall. It is apparent that an ad may arouse widespread attention and high readership without persuading the few people in the immediate market who are ready to buy. Conversely, an ad may rank low in its appeal to the general reader and still have a strong sales effect upon the very few prospective customers in the immediate market. Obviously, it is in the interest of any advertiser to win maximum attention, but his task does not end at that point. At a time when advertising communications proliferate rapidly, there is a strong temptation for advertising craftsmen to resort to the gimmickry and technical virtuosity that arouse attention through their ingenuity, startle effect, or entertainment value; in the process the brand's identity and the basic persuasive story may be lost.
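
The rank-order comparisons described above can be illustrated with a short sketch. The scores below are invented for six hypothetical ads, not taken from the study; the point is only to show how a Spearman rank correlation distinguishes measures that order ads similarly (recognition and recall) from measures that order them quite differently (readership and sales).

```python
# A minimal sketch with invented scores for six hypothetical ads.
def spearman(xs, ys):
    """Spearman rank correlation for two lists of scores without ties."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

recognition = [48, 35, 52, 29, 41, 38]   # invented noting scores
recall      = [22, 15, 25, 11, 19, 14]   # invented proved-recall scores
sales_lift  = [ 7, 14,  5,  3,  9,  1]   # invented sales differences

print("recognition vs recall rank order:", round(spearman(recognition, recall), 2))
print("recognition vs sales rank order: ", round(spearman(recognition, sales_lift), 2))
```

With these invented figures the first correlation comes out near +.9 and the second near zero, which is the pattern the text describes.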

Evidence in support of this assertion comes from a secondary study we conducted among 83 decision-makers (company brand and advertising managers, agency account executives, creative, media, and research people) in major advertising centers. The experts did very well in predicting readership performance; their record in predicting attitude change was mixed, and they could not predict which ads would sell more of the brand. The last and critically important finding follows logically from our earlier discovery that sales results had only a chance relationship to the ads' ability to win attention. But it is precisely the prediction of attention value that is the expert's stock in trade and that in effect makes him an expert. Only the pretesting of an ad's persuasive power (as opposed to its attention value) and pretesting among consumers actively in the market can be expected to reduce the level of random error that even the most talented of advertising professionals introduces into his predictive judgments of advertising performance. Advertisers should be concerned with the cumulative effect of all their advertising, in a mix of media, and in context, rather than with the effects of a single message. Advertisements resonating with each other in our vast marketing system have cumulative effects that are quite different both from their individual effects and even from the marketer's original intention.

Experiments and creativity

There is an imbalance between the amount of laboratory experimentation on advertising effectiveness and the amount done in the field. Most studies of advertising effects are done in the field, and too often they suffer from inadequate design and inadequate controls. For every dollar of research money expended, advertising efficiency can often be increased more by pretesting creative approaches than by studies of completed campaigns in actual operation. Laboratory experiments are more likely to come up with significant differences that lend themselves to meaningful interpretation and that can be translated into realistic action. The cost of truly scientific investigation of the problems that most advertisers want to research under the heading of effectiveness is very often out of all proportion to the cost of the advertising itself. If such research is to be done, it should be honestly done in the name of science and not justified in terms of its practical utility to the decisions of advertising managements. However, there is an aspect of advertising whose effectiveness lends itself to extremely profitable research, with a pay-out that is much more immediate than any comparison of media. This is the creative aspect. Carl Hendrikson, a pioneer market researcher, once told me of an experimental study he made on the comparative effectiveness of print and radio advertising just before the TV era began. An advertising message was prepared for a brand of toothpaste in two forms: a print ad and a recorded commercial. There were two make-believe brands of toothpaste used in this comparison. Each person interviewed saw the print ad for one brand and heard a recording of the commercial for the other brand. He was then offered a tube of toothpaste and given his choice of the two brands in question. Offhand this sounds a good deal like many of the experimental intermedia comparisons that have been made over the years. But the results were tabulated as soon as they came in, in groups of 25. In the first few groups, the recorded message enjoyed an advantage over the print ad. In subsequent groups, the two were about equal.

Later on, the print message did much better. In other words, as the study progressed, the print message did progressively better and the recorded message did progressively poorer. The explanation for this became clear when the recording as it sounded after many playings was compared with the quality of a fresh pressing. It still sounded pretty good. But something of the resonance, the tone values, the subtle qualitative inflections of the announcer's voice had deteriorated with the repeated playing of the record. This new variable completely reversed the position of the two media that were being compared. Minor variations in the creative handling of a message may have more to do with the nature of what is communicated than does the difference between media.
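
The batch-by-batch tabulation Hendrikson described can be pictured with a brief sketch using invented counts; running subtotals for each successive group of 25 interviews make visible a drift that the final totals alone would have concealed.

```python
# Invented counts (not Hendrikson's data): in each successive group of 25
# respondents, how many chose the brand they had heard on the recording.
choices_for_recorded_brand = [16, 15, 13, 12, 12, 10, 9, 8]

for batch, wins in enumerate(choices_for_recorded_brand, start=1):
    share = wins / 25
    print(f"group {batch}: {share:.0%} chose the brand heard on the record")
```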

The knowing practitioner of advertising is under no illusion that copy tests, whatever ingenious or devious devices they may use, can substitute for talent, taste, or insight. Herbert Krugman points out that ads are tested and compared once, but that in real life we encounter advertisements many times, and our responses change in the process. Direct response advertisers are constantly evaluating media vehicles and comparing their productivity in the most immediate terms. Coupon returns make it possible to compare the direct power and cost efficiency of different publications with an identical message. Using a coupon for a grocery item involves a small purchase decision, but the U.S. Army used an inquiry coupon in recruitment advertising bound into 44 magazines. (Naturally, there was no way to ascertain the effect of the advertising on persons who enlisted without returning the coupon.) The cost per inquiry ranged from $4.68 for Teen to $393.28 for Motorcyclist, and the cost per enlistment ranged from $165.42 for Jet to $84,050 for Newsweek. This may reflect differences in the audience profiles of the publications, but it also reflects the fit between their editorial environments and the creative approach in the particular ad. The variations in performance within a medium are far greater than those among different media. Any time that several ads or commercials are tested, one is apt to come away with the lion's share of the favorable response while the others creep in with only a small percentage.
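
The coupon-return arithmetic is simple enough to sketch. The space costs and response counts below are invented (the Army figures quoted above are reported results, not inputs reproduced here); the sketch only shows how cost per inquiry and cost per enlistment follow from a placement's cost and its traced responses.

```python
# Invented space costs and traced responses for two hypothetical placements.
placements = {
    # magazine: (space cost in $, inquiries returned, enlistments traced)
    "Magazine A": (25_000, 5_300, 150),
    "Magazine B": (40_000, 1_100, 12),
}

for title, (cost, inquiries, enlistments) in placements.items():
    cost_per_inquiry = cost / inquiries
    cost_per_enlistment = cost / enlistments if enlistments else float("inf")
    print(f"{title}: ${cost_per_inquiry:,.2f} per inquiry, "
          f"${cost_per_enlistment:,.2f} per enlistment")
```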

Just as copy tests show how ads and commercials differ widely in their power to persuade and convince, so standard readership and commercial recall services document the variations in the power of ads or commercials to register conscious, remembered impressions upon the mind of the reader or viewer. For instance, the median four-color magazine food ad gets a Starch noting score of 52 percent among women, but it can get as low as 30 percent, or as high as 59 percent. The median liquor newspaper ad, about a quarter-page in size, is noted by one out of five men readers, but some of the ads get only a few percent noting and others get over 50 percent. Consider two outdoor campaigns with an identical number and quality of billboard displays in the same community: one for Standard Oil showed an 82 percent higher recall than one for Shell. With a recall measure that demands more of the audience's memory, the gap between the strongest and weakest advertising looks even greater. Gallup and Robinson report that the 15 best-remembered TV food commercials scored 39 times better than the lowest 15. Sponsor identification studies show similar variations in the ability of viewers to associate commercials with the programs they view. An Art Carney TV show and a Bob Hope show some years ago had almost the same talent and network time costs. The rating of the Carney show was 17; the rating of the Hope show was 41. There is no report of a national rating service in which one cannot find fantastic variations in the size of audiences delivered by programs with identical production and time costs. In a series of studies done by Benton & Bowles, Arthur Wilkins found that in some cases 50 percent of the TV commercial audience was actually watching while the program was on, while in other cases fewer than 30 percent were. This type of variation would not be revealed by any rating figures, yet the level of commercial recall in the attentive segment averaged 60 percent higher than in the less attentive segment.

Studies on the creative side of advertising lend themselves far better to exact experimentation of the laboratory type than do studies that compare media. When we make media comparisons in the laboratory, we measure something different from the normal kind of media exposure that takes place in real life. The medium's strong and weak points, relative to other media, are not necessarily in proper proportion. When we confine our comparisons to alternative creative approaches within a medium, the conditions of exposure can be held constant and the planner can concentrate on the variations in the message itself. This is the area in which research investments to improve the effectiveness of advertising can have the greatest leverage on the final results.

Among advertisers there seems to be a widespread assumption that a dollar spent to advertise is always more productive than a dollar spent in figuring out how to advertise. There is no standard, accurate measurement of the total investment in advertising research. In fact, it is difficult to judge how much of what companies spend to study consumer behavior and market trends ends up with applications to advertising strategy. In 1983, perhaps half a billion dollars was being spent on advertising research of one kind or another, ranging from syndicated media measurements down to ad concept tests. This figure is modest in relation to the size of the job to be done and the number of questions that remain unanswered. It is also inadequate when considered in relation to the potential opportunity for increasing the efficient use of the vast sums actually invested in advertising. By any yardstick, as we have seen, there is tremendous variability in the performance of advertising that uses identical space or time budgets in a given medium.

Suppose that through research the effectiveness of a given advertisement can be increased by 100 percent; that the sales it generates can be doubled. How much would the research be worth? Marginal utility theory tells us that a firm should be willing to spend additional money on advertising up to the point where the extra sales it produces yield an additional dollar of profit above and beyond the cost of manufacturing, distributing, and advertising the product, plus the cost of researching the advertising. Is it worth half a million dollars of research funds to double the return from a $1 million advertising expenditure? Maybe the value added in sales efficiency is worth only a quarter of a million dollars. In any case, it would appear to be worth more than $7,500, but that is roughly what the average advertiser would spend on advertising research: three-quarters of one percent of advertising expenditures! In industry, the R&D function represents a far greater percentage of manufacturing output. In the aerospace industry, R&D budgets are 26.5 percent of sales; in the hardware end of the communications business (equipment and electrical machinery), R&D is 9.5 percent. Even in the automotive and transportation equipment field, it is 3.5 percent. Yet in making material goods, the unknowns are perhaps even less formidable than they are in the field of communication and persuasion.

The businessman's riposte to this argument, Lester Frankel has pointed out, is that his investment in advertising research might be greater if he were convinced that it would actually pay off in increased efficiency. But obviously the value of the research investment in turn requires further research. Zero Mostel phrased a somewhat similar problem at the time of the Army-McCarthy hearings in a song: "Who will investigate the man who investigates the man who investigates me?" Marketing and advertising are characterized by a constant search for cheap research answers to expensive business questions, most of which cannot be answered within the framework of any individual firm's advertising research budget. Perhaps this reflects a subtle pressure on the researcher within a large corporation to emulate his corporate associates and peers by producing assembly-line statistics on a schedule of his own.
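
The budget arithmetic above can be restated in a few lines, applying the percentages quoted in the text to the illustrative $1 million campaign; the dollar amounts are simply those ratios multiplied out.

```python
# The percentages quoted in the text, applied to an illustrative $1 million
# advertising budget to show how small the typical research allocation is.
ad_budget = 1_000_000

ratios = {
    "aerospace R&D (share of sales)": 0.265,
    "communications hardware R&D": 0.095,
    "automotive / transport equipment R&D": 0.035,
    "typical advertising research": 0.0075,   # three-quarters of one percent
}

for label, share in ratios.items():
    print(f"{label:38s} {share:6.2%} of $1,000,000 = ${ad_budget * share:>9,.0f}")
```

The last line works out to $7,500, the figure the passage contrasts with the potential value of doubling an ad's effectiveness.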

Marketers are so obsessed with inventorying the latest figures on circulation and audience that they are prone to forget the far more important and fundamental question of what happens to their messages in the minds of the media consumers. Since the structure of advertising research puts the great burden of financing on the media, it is not surprising to find that a disproportionately large amount of the research on advertising represents the dull and repetitive measurement of media audiences, while a disproportionately small amount is concerned with actual content. Yet the leverage on successful advertising performance is vastly greater for creative research than for media research. What goes into an advertising message is always more idiosyncratic in content and form than the choice of the medium in which the message is to appear. The capabilities of the medium remain constant over a broad array of messages and campaigns. The content and form of a message are highly specific to the product, brand, and creative approach, hence are less likely to be researched. In some sense or other, any advertising message, or for that matter any communication, may be said to have an effect. Can the effect be measured? I have tried to show that it can be; sensitive instruments in the psychological laboratory can track the changes it sets off in the pattern of brain waves, the pulse rate, the dilation of the pupil of the eye, the electrical conductivity of the skin. People often answer questions after exposure differently than they did before. Their actual purchase records look different. And yet, a true description of the effects of communication is not to be found in such numbers.

This, the second edition of Strategy in Advertising, was published by NTC Business Books in 1986 and was hailed as "the new bible of the agency business" by Barry Loughran, then President and CEO of Doyle Dane Bernbach International. It is still considered a "must read" for most account and media people, and it won't harm creatives to have a look through it, even though it is sometimes devastatingly boring and out of date; then again, some facts never die or change.

You may find similar books, such as Creative Strategy in Advertising, at Barnes & Noble today.
