
March 7, 2012

#7379

To media agency executives, media directors and all media committees.

POV: Making Sense of Current Local TV Market Measurement

This document is intended to raise awareness around the available Local Nielsen data streams and the playback audiences each stream captures, as well as demonstrate how the inconsistent industry use of these streams is impacting and potentially undermining standard practices and comparable accountability in the marketplace. The 4A’s thanks MediaVest’s Maribeth Papuga, 4A’s Local Video/Audio Committee member, for drafting this report.

Donna G. Campbell
Senior Vice President, Media Services

National Snapshot

There has been much discussion in recent years about the rapid shifts in television viewing patterns and the impact time-shifted viewing may have on commercial audiences. Nationally, Nielsen addressed this issue by providing a data stream that averages all commercial pods across three days of program viewing (both live and time-shifted) to produce an average commercial rating for a program, termed a "C3" rating. This stream is available only for national ratings, as it combines the program ratings delivered across all Nielsen metered households in the U.S. It is also an average rating across multiple days of viewing, designed to capture the potential audience measured against a commercial minute during playback. Although this effort still draws critics, it has been generally accepted by the industry as the currency by which national buyers and sellers negotiate.
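At its core, the C3 calculation described above is an average of commercial-minute ratings. The sketch below illustrates the arithmetic only; the per-minute ratings are hypothetical, and Nielsen's actual weighting and crediting rules are considerably more involved.

```python
# Minimal sketch of the arithmetic behind a C3-style figure: average the
# ratings of each commercial minute in a program, where each minute's rating
# already reflects live viewing plus playback within three days.
# The per-minute ratings below are hypothetical, for illustration only.

commercial_minute_ratings = [2.1, 2.0, 1.9, 2.2, 2.0]  # live + 3-day playback

c3_rating = sum(commercial_minute_ratings) / len(commercial_minute_ratings)
print(f"C3 rating: {c3_rating:.2f}")  # -> C3 rating: 2.04
```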

Local Differences

Local television markets have no equivalent data stream to match this currency, due to the large number of local markets (210 DMAs) and the unique and varied measurement collection standards in each market. There are 25 Local People Meter (LPM) markets that measure audiences daily, but because the data are collected on quarter-hour sampling bases within individual meter market audience segments, they cannot be captured or viewed on a minute-by-minute basis, which precludes determining viewership of specific program or scheduled commercial minutes. As a working surrogate, the local markets receive four streams of data from Nielsen: Live Only (real time, without playback), Live plus Same Day (playback within the same day, until 3 a.m. the following day), Live plus 3 Day (playback across three days, a 75-hour interval), and Live plus 7 Day (playback across a seven-day period, a 168-hour interval). At the time the national C3 currency was agreed upon by the national buyers, the local market buyers generally utilized Live Only data as a way to distinguish the viewers least likely to skip commercials.
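To make the playback windows concrete, a minimal sketch follows. The function `streams_credited` is a hypothetical helper, not a Nielsen tool, and the cutoffs are simplified renderings of the stream definitions above.

```python
from datetime import datetime, timedelta

# Hypothetical helper illustrating which local data streams a single viewing
# event would be credited to, based on its delay from the live telecast.
# Cutoffs are simplified versions of the stream definitions described above.

def streams_credited(telecast: datetime, viewed: datetime) -> list[str]:
    delay = viewed - telecast
    credited = []
    if delay == timedelta(0):
        credited.append("Live Only")
    # Live+SD: playback until 3 a.m. the day after the telecast
    same_day_cutoff = (telecast + timedelta(days=1)).replace(hour=3, minute=0, second=0)
    if viewed < same_day_cutoff:
        credited.append("Live+SD")
    if delay <= timedelta(hours=75):   # Live+3: 75-hour interval
        credited.append("Live+3")
    if delay <= timedelta(hours=168):  # Live+7: 168-hour interval
        credited.append("Live+7")
    return credited

telecast = datetime(2011, 11, 7, 21, 0)  # a Monday 9 p.m. airing
print(streams_credited(telecast, telecast))                      # all four streams
print(streams_credited(telecast, telecast + timedelta(days=2)))  # ['Live+3', 'Live+7']
```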


This practice enabled advertisers who bought into the national C3 rating stream to feel confident that local market negotiations were focused on the audiences considered most likely to see the commercial. However, as time-shifted viewing becomes better understood, there is evidence that a percentage of audiences do not skip through commercials during playback. Additionally, Live Only audiences continue to become more volatile and unpredictable as DVR penetration increases across more households and the playback potential grows. This creates more fragmentation and erodes the stability of ratings based on the smaller local market audience samples. For this reason, local market buyers and those investing dollars in local markets must understand the differences among the available data streams.

More Consumer Viewing Choices = More Fragmented Viewing Patterns

As technological advances offer consumers more viewing choices, both in program content and in viewing flexibility, they create new challenges for advertisers in reaching audiences across a growing number of media channels and time-shifted options. While television still captures larger audiences than other media channels, the segments associated with specific program viewing are becoming significantly more fragmented. This is evident in national ratings, where network programs show smaller audiences during the live telecast and varying increments of time-shifted viewers over the course of several days.

Locally, this fragmentation is even more pronounced because individual markets are not all measured with the same technology. The top 25 markets in the country are measured via LPMs, which capture demographic information every day. An additional 31 markets capture household ratings via set meters every day, but the demographic measurement for these markets is still collected through paper diaries four to six times per year. Finally, the remaining 154 markets are measured solely through paper diaries, normally four times per year. With such varied data collection methods and the continued introduction of technologies that enable consumers to time-shift program content, it has become increasingly difficult to develop one standard measurement value for currency across local markets.

Local Market Samples and Data Collection ≠ National Samples and Data Collection

National television programs are distributed in local markets via local television station affiliates. Additional programs are produced locally or purchased under syndication agreements to fill the remaining hours of the day. The majority of U.S. markets have core channel spectrum dedicated to the four top network affiliates (i.e., ABC, CBS, NBC, FOX), plus additional channels that deliver national network programs distributed by CW and MyNet, as well as the affiliates associated with the larger Spanish-language networks, Univision and Telemundo.

As cable systems gained traction in the 1990s, more programming options were created and distributed through subscription-based cable systems. With the growth of home entertainment choices in local markets, program viewership began to flatten across channels even though total hours of television viewing increased. Fast forward to the present: entertainment distribution has expanded to online delivery and paid downloads contracted with new service providers. As consumers in local markets face wider entertainment delivery choices and broader options specific to their cable or ADS (alternate delivery systems) providers, it has become more difficult to measure these fragmented audiences.

Nationally, the doubling of video sample households in the 2000s offered some additional stabilization in measurement across an increasing wave of options, but it too depends on aggregating audiences that continue to splinter. This pushed the national sample toward a more granular, minute-by-minute delivery of audiences, which was possible because of the consistent collection methods surrounding the national sample. In turn, this has enabled buyers transacting with broadcast and cable network sales teams to more closely identify audience viewership during program commercial breaks. Since networks control their own programming and selectively assign local breaks to station and cable system affiliates, they are in a strong position to agree to average these minutes when forming the C3 rating stream. For reference, the split of commercial time allocated to network sales versus local station affiliates is generally 75% network/25% local in broadcast, and local cable systems are limited to an average of 1.5-2 minutes per hour. This average varies by cable network, as some smaller networks may not offer local ad integration at all.

As U.S. DVR penetration has risen sharply over the last three years (from 26% of households in 2008 to 41% in 2011), the national effort to more effectively identify and isolate commercial minutes within an hour has made it possible to review viewing patterns across first-run programs, sports, news and entertainment in combination with commercial minutes. However, C3 ratings are still an average of the commercial minutes across a three-day period (live plus 3-day playback of the program), so they credit audiences who delay viewership of a program and watch it at a later time. And because of the high number of commercial minutes dedicated to national advertisers in each program, this average can span well over 15-20 minutes of commercials.

In contrast, each of the 210 locally measured Nielsen markets is measured via one of multiple methods (people meter, set meter/diary, or diary only), which may not sufficiently capture the independent nuances and changing viewing habits that exist within each market.

This can in turn create more audience instability when reviewed on an individual market basis. Nielsen's local measurement service has endeavored to keep pace by refining its samples and creating additional data streams that capture and isolate local market audiences to satisfy the business needs of both vendors and advertisers. However, as viewership patterns shift and the marketplace maintains inconsistent use of the multiple data streams, it is much more difficult to provide accurate and comparable local market currency values. This applies to the currency used between buyers and sellers, as well as to the data that fuels the industry cost aggregator, SQAD. Because local buyers, sellers and agencies use a combination of Live Only, Live+SD, Live+3 Day and Live+7 Day to report program delivery, the industry needs more prominent identification of the trading currency behind each figure when comparing local program inventory, purchased and posted schedules, third-party reporting and, importantly, any local television market audience cost aggregation.

The differences among these data streams affect Prime program ratings the most. Since consumers generally watch news and other live programming in real time, there is little delayed viewing of local early morning or local news programming. The area of most concern is therefore Prime, which is also one of the dayparts that offers local market stations limited inventory control compared with programs that originate or are produced at the station.

As technology improves and consumers gain more opportunities to time-shift programs, program ratings have varied significantly across the available data streams. The following example of Prime programs across six LPM markets for November 2011 demonstrates the point: the audiences captured through Live plus 7 Day playback contribute a rating that is generally 50% or more above the Live telecast.

Additional program examples can be found in the appendix. All ratings represent a three-week program average for two demographics (A18-49 and A25-54) across each available data stream.

Hawaii Five-0 (CBS)

                                       ----- Adults 18-49 -----     ----- Adults 25-54 -----
Market        Day   Time     Station   Live  Live SD  Live+3  Live+7   Live  Live SD  Live+3  Live+7
Chicago       Mon   9-10P    WBBM       1.8    2.5     3.6     3.7      2.5    3.5     4.8     5.0
Denver        Mon   9-10P    KCNC       2.3    3.0     3.3     3.6      3.5    4.4     4.9     5.4
Los Angeles   Mon   10-11P   KCBS       1.5    1.6     3.0     3.1      1.9    2.0     3.6     3.9
Minneapolis   Mon   9-10P    WCCO       4.0    4.3     5.5     5.9      5.0    5.1     6.7     7.1
New York      Mon   10-11P   WCBS       1.5    1.7     2.9     3.2      2.4    2.6     4.0     4.4
Seattle       Mon   10-11P   KIRO       3.0    3.7     4.9     5.4      3.8    4.7     6.2     7.1

Source: Nielsen LPM Overnights, November 2011; three-week program average, Oct 31-Nov 20, 2011.
Source: Who’s Watching TV, LLC

Once Upon a Time (ABC)

                                       ----- Adults 18-49 -----     ----- Adults 25-54 -----
Market        Day   Time     Station   Live  Live SD  Live+3  Live+7   Live  Live SD  Live+3  Live+7
Chicago       Sun   7-8P     WLS        2.3    3.9     5.0     5.3      3.0    4.8     5.4     6.2
Denver        Sun   7-8P     KMGH       3.4    4.9     6.1     6.3      4.5    6.0     7.1     7.4
Los Angeles   Sun   8-9P     KABC       1.9    3.6     5.3     5.6      2.4    4.2     6.2     6.7
Minneapolis   Sun   7-8P     KSTP+      3.2    3.9     4.4     4.6      4.0    4.8     5.7     6.0
New York      Sun   8-9P     WABC       3.0    3.8     4.5     4.7      3.6    4.5     5.3     5.4
Seattle       Sun   8-9P     KOMO       4.4    6.2     7.2     8.0      4.7    6.6     8.0     8.8

Source: Nielsen LPM Overnights, November 2011; three-week program average, Oct 31-Nov 20, 2011.
Source: Who’s Watching TV, LLC

Parenthood (NBC)

                                       ----- Adults 18-49 -----     ----- Adults 25-54 -----
Market        Day   Time     Station   Live  Live SD  Live+3  Live+7   Live  Live SD  Live+3  Live+7
Chicago       Tue   9-10P    WMAQ       0.9    1.8     2.6     2.8      1.2    2.1     3.0     3.3
Denver        Tue   9-10P    KUSA       3.0    3.9     4.8     5.5      3.5    4.3     5.1     5.9
Los Angeles   Tue   10-11P   KNBC       0.7    1.1     2.2     2.4      1.1    1.5     2.9     3.2
Minneapolis   Tue   9-10P    KARE       3.3    4.1     5.4     5.7      3.9    4.9     6.4     6.7
New York      Tue   10-11P   WNBC       1.4    1.9     2.5     2.7      1.9    2.4     3.4     3.6
Seattle       Tue   10-11P   KING       2.3    2.8     4.9     5.5      2.9    3.5     5.9     6.3

Source: Nielsen LPM Overnights, November 2011; three-week program average, Oct 31-Nov 20, 2011.
Source: Who’s Watching TV, LLC
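A back-of-the-envelope check of a few A18-49 rows from the tables above makes the Live-to-Live+7 spread concrete; only figures already shown in the tables are used here.

```python
# Percentage gain of the Live+7 rating over the Live telecast, computed for
# three A18-49 rows taken from the tables above.

def uplift(live: float, live_plus_7: float) -> float:
    return (live_plus_7 - live) / live * 100

examples = {
    "Chicago / Hawaii Five-0":    (1.8, 3.7),
    "Seattle / Once Upon a Time": (4.4, 8.0),
    "New York / Parenthood":      (1.4, 2.7),
}
for label, (live, live7) in examples.items():
    print(f"{label}: +{uplift(live, live7):.0f}%")

# Chicago / Hawaii Five-0: +106%
# Seattle / Once Upon a Time: +82%
# New York / Parenthood: +93%
```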

Upon review, one may not detect a problem here, as these streams are clearly comparable and identified. However, this transparency is not available across the millions of spot schedules negotiated in the local markets. When different buyers and sellers transact across various data streams, the audience cost of a particular program can price out very differently, which makes it difficult to ascertain a comparable price point across stations and programs, as well as the overall price stability of the market. For example, if an individual program delivers a Live Only A25-54 rating of 2.0 and a Live+7 A25-54 rating of 2.9 in a market, it will deliver very different audience costs when each rating is divided into the unit cost. To demonstrate, assume two individual market buyers each paid $500 for the above-mentioned spot on the same station, but each used a different data stream to report the audience cost. The A25-54 cost per point (audience cost) comes to $250 against the Live Only rating and $172 against the Live+7 rating. Without any unit price or other comparative reference point, one might believe that the buyer reporting the $172 audience cost is providing much better value than the one reporting $250. In fact, both paid the same unit rate to the vendor but chose to link audience delivery to a different data stream.
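In code, the comparison reduces to a single division. The sketch below reproduces the $500 example just described; per the note that follows, the figures are illustrative only.

```python
# Cost per point (CPP): the unit cost divided by the rating used to report
# delivery. The same $500 unit, reported against two different data streams,
# yields two very different audience costs.

def cost_per_point(unit_cost: float, rating: float) -> float:
    return unit_cost / rating

unit_cost = 500.00  # same spot, same station, same price paid
print(f"Live Only CPP: ${cost_per_point(unit_cost, 2.0):.0f}")  # -> $250
print(f"Live+7 CPP:    ${cost_per_point(unit_cost, 2.9):.0f}")  # -> $172
```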


Note: For demonstration purposes only; the rating and cost are not applicable to a specific market.

This cost example demonstrates, in the simplest terms, the mounting confusion within the local television marketplace and how difficult it is to accurately identify audience market costs, both for comparable advertising schedules that may be used by third-party measurement services and for any aggregate industry comparisons that determine the competitive cost of dayparts and markets.

The dominant source for local audience cost aggregation and comparison, SQAD, continues to issue reports based on its long-standing practice of collecting contributor-reported purchase costs, but it currently does not segregate its projected cost data by individual stream. This is due in part to its dependence on industry contributors' ability to properly isolate and reference purchase data against the data streams used during negotiations. Without proper referencing, SQAD can only offer guidance on the percentage of buys its contributing sources made against each stream, gathered through a periodic poll and posted in its monthly newsletter. Given the universal use of SQAD across the local marketplace, the service loses some value without the ability to build in additional transparency on data stream sourcing and the audience cost relationship to each stream. At this time, anyone using aggregate cost data or comparable resources is cautioned to fully identify the details behind the ratings and cost data that fuel the comparative reports.

Given the various opinions surrounding usage and preference of the available local television data streams, the 4A’s Local Video/Audio Committee is neither promoting nor discounting the use of any stream through the issuance of this document. However, the Committee strongly urges the industry to practice more vigilance when audience costs are used and compared in the local marketplace, and to make partners aware that the data are not readily comparable unless all inputs (i.e., cost and audience ratings) are available for an equitable comparison. The Committee continues to promote greater understanding of these audience streams and the impact they have in creating disproportionate views of the cost of television advertising in local markets.

Appendix

Source: Who’s Watching TV, LLC; Oct 31-Nov 20, 2011 (three-week program averages by data stream).