Discovery Channel Acquires Revision3, Top Digital Video Provider

Purchase of Revision3 Will Bolster Discovery’s Leadership in Creating & Delivering Content Across All Screens


Discovery Communications announced today that it has entered into an agreement to acquire San Francisco-based digital video provider Revision3. Leveraging Revision3’s vast experience in creating engaging online video content in a cost-effective manner, the transaction helps fuel Discovery’s strategy of being the number one nonfiction media company on all screens.

Revision3 was founded in 2005 by Kevin Rose (I call him a genius), a Digg.com founder and TechTV veteran, along with Jay Adelson (Digg, Equinix) and David Prager (TechTV). Jim Louderback, former editor-in-chief of PC Magazine and a veteran of TechTV, joined Revision3 as CEO in 2007. Louderback and his team will continue to lead Revision3 under the new ownership structure.

*For those of you who don’t know the history of Digg.com and Kevin Rose, we will be sharing a post later this week on how Digg.com and Revision3, under Kevin and Jay’s direction, helped change the way the media delivers your news online today!

“Discovery’s mission to ignite viewers’ curiosity and its history of pioneering new platforms – from cable to HD to 3D – make it the logical leader in this explosive new wave of digital video growth,” said JB Perrette, Chief Digital Officer, Discovery Communications, who made today’s announcement. “With Revision3’s industry-leading management team and roster of great talent, we look forward to cultivating more original content and fresh personalities that resonate with passionate communities online and across all platforms, while enhancing our innovative marketing solutions for advertising partners.”

The leading independent Internet digital video production company, with more than 23 million monthly unique viewers across 27 digital channels, Revision3 has created a technology and distribution platform that powers the scalable production, monetization and distribution of video content for passionate online communities. With programs hosted by authentic online celebrities, including top bloggers, Twitter stars and YouTube sensations, the company’s content aligns with many of Discovery’s top linear program genres, such as tech, cooking and popular science. Revision3 boasts one of the 10 largest networks on YouTube and distribution with more than 40 other partners including iTunes, Google, AOL, Yahoo!, TiVo, Roku, Boxee, CNET and Zune.

“Revision3 has always focused on creating compelling programs featuring authentic hosts that sit at the center of engaged and targeted communities,” said Jim Louderback, CEO of Revision3. “We’re huge fans of Discovery’s networks, and couldn’t imagine a more appropriate company to team up with to develop the future of original web-based video.”

For this transaction, Discovery Communications was advised by Paul, Weiss, Rifkind, Wharton & Garrison LLP, and Revision3 was advised by RBC Capital Markets and Gunderson Dettmer Stough Villeneuve Franklin & Hachigian, LLP.

The acquisition is subject to customary closing conditions, and the parties expect the closing to occur on or before June 1.

 

Matt Cutts Explains How Google Search Works

Users visit Google and enter their search terms, or “keywords,” and in half a second Google displays the results. Sounds super easy, doesn’t it? Behind the scenes, a whole lot more is happening to give you the best results possible. On Monday, Google launched a video to help explain how the massive search engine actually works.

Matt Cutts, an all-around great guy, software engineer, and head of Google’s web spam team, details in the YouTube video shown below how the search engine giant scours the web on a daily basis to provide the most accurate and up-to-date results to users.

Google does a great job in my opinion, better than most. I know Google has its haters, but I’m a true long-time fan! Take a moment someday and search Bing, Yahoo and Google for the same keywords and see which search engine returns the best results.

Here is a transcript of the Matt Cutts video shown above.

Hi, everybody. We got a really interesting and very expansive question from RobertvH in Munich. RobertvH wants to know–

Hi Matt, could you please explain how Google’s ranking and website evaluation process works, starting with the crawling and analysis of a site, crawling timelines, frequencies, priorities, indexing and filtering processes within the databases, et cetera?

OK.

So that’s basically just like, tell me everything about Google. Right?

That’s a really expansive question. It covers a lot of different ground. And in fact, I have given orientation lectures to engineers when they come in. And I can talk for an hour about all those different topics, and even talk for an hour about a very small subset of those topics. So let me talk for a while and see how much of a feel I can give you for how the Google infrastructure works, how it all fits together, how our crawling and indexing and serving pipeline works. Let’s dive right in.

So there’s three things that you really want to do well if you want to be the world’s best search engine. You want to crawl the web comprehensively and deeply. You want to index those pages. And then you want to rank or serve those pages and return the most relevant ones first. Crawling is actually more difficult than you might think.

When Google started, when I joined back in 2000, we didn’t manage to crawl the web for something like three or four months. And we had to have a war room. But a good way to think about the mental model is we basically take page rank as the primary determinant. And the more page rank you have (that is, the more people who link to you and the more reputable those people are), the more likely it is we’re going to discover your page relatively early in the crawl.

In fact, you could imagine crawling in strict page rank order, and you’d get the CNNs of the world and The New York Times of the world and really very high page rank sites. And if you think about how things used to be, we used to crawl for 30 days. So we’d crawl for several weeks. And then we would index for about a week. And then we would push that data out. And that would take about a week. And so that was what the Google dance was.

Sometimes you’d hit one data center that had old data. And sometimes you’d hit a data center that had new data. Now there’s various interesting tricks that you can do. For example, after you’ve crawled for 30 days, you can imagine re-crawling the high page rank guys so you can see if there’s anything new or important that’s hit on the CNN home page.
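
Purely as an illustration of that “crawl in page rank order” idea, you can model the crawl frontier as a priority queue keyed by a PageRank-like score. This is a minimal sketch, not Google’s actual crawler, and the URLs and scores are invented:

```python
import heapq

# Toy crawl frontier: always pop the highest-scored URL next, so well-linked,
# reputable pages (the CNNs and New York Times of the world) get crawled early.
class CrawlFrontier:
    def __init__(self):
        self._heap = []      # max-heap emulated by negating scores
        self._queued = set()

    def add(self, url, pagerank):
        if url not in self._queued:
            self._queued.add(url)
            heapq.heappush(self._heap, (-pagerank, url))

    def pop(self):
        neg_score, url = heapq.heappop(self._heap)
        return url, -neg_score

    def __len__(self):
        return len(self._heap)

frontier = CrawlFrontier()
frontier.add("https://www.cnn.com/", pagerank=0.95)           # made-up scores
frontier.add("https://tiny-blog.example.com/", pagerank=0.02)
frontier.add("https://www.nytimes.com/", pagerank=0.93)

while frontier:
    url, score = frontier.pop()
    print(f"crawl {url} (score {score})")   # highest-scored URLs come out first
```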

But for the most part, this 30-day cycle is not fantastic. Right? Because if you’re trying to crawl the web and it takes you 30 days, you’re going to be out of date. So eventually, in 2003, I believe, we switched, as part of an update called Update Fritz, to crawling a fairly significant chunk of the web every day.

And so if you imagine breaking the web into a certain number of segments, you could imagine crawling that part of the web and refreshing it every night. And so at any given point, your main base index would only be so out of date. Because then you’d loop back around and you’d refresh that. And that works very, very well.
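
A rough sketch of that segmenting idea, assuming (purely for illustration) 30 segments assigned by hashing the URL, with a different segment refreshed each night:

```python
import hashlib
from datetime import date

NUM_SEGMENTS = 30  # illustrative choice, not Google's real partitioning

def segment_of(url: str) -> int:
    """Stable assignment of a URL to one of NUM_SEGMENTS index segments."""
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SEGMENTS

def segment_to_refresh(today: date) -> int:
    """Round-robin: each night a different segment gets re-crawled."""
    return today.toordinal() % NUM_SEGMENTS

urls = ["https://example.com/a", "https://example.org/b", "https://example.net/c"]
tonight = segment_to_refresh(date.today())
for url in urls:
    if segment_of(url) == tonight:
        print("re-crawl tonight:", url)
# Any given page is at most NUM_SEGMENTS days stale before its segment comes around again.
```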

Instead of waiting for everything to finish, you’re incrementally updating your index. And we’ve gotten even better over time. So at this point, we can get very, very fresh. Any time we see updates, we can usually find them very quickly. And in the old days, you would have not just a main or a base index, but you could have what were called supplemental results, or the supplemental index. And that was something that we wouldn’t crawl and refresh quite as often. But it was a lot more documents.

And so you could almost imagine having really fresh content, a layer of our main index, and then more documents that are not refreshed quite as often, but there’s a lot more of them. So that’s just a little bit about the crawl and how to crawl comprehensively. What you do then is you pass things around. And you basically say, OK, I have crawled a large fraction of the web. And within that web you have, for example, one document. And indexing is basically taking things in word order.

Well, let’s just work through an example.

Suppose you say Katy Perry. In a document, Katy Perry appears right next to each other. But what you want in an index is which documents does the word Katy appear in, and which documents does the word Perry appear in? So you might say Katy appears in documents 1, and 2, and 89, and 555, and 789.

And Perry might appear in documents number 2, and 8, and 73, and 555, and 1,000. And so the whole process of doing the index is reversing, so that instead of having the documents in word order, you have the words, and the documents they appear in, in document order. So it’s, OK, these are all the documents that a word appears in.
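
That “reversing” step is what search engineers call building an inverted index. A minimal sketch, reusing the made-up document numbers from the example:

```python
from collections import defaultdict

# Forward view: document ID -> text (IDs chosen to match Matt's example).
documents = {
    1: "katy", 2: "katy perry", 8: "perry", 73: "perry",
    89: "katy", 555: "katy perry", 789: "katy", 1000: "perry",
}

# Inverted view: word -> sorted list of document IDs (a "posting list").
inverted_index = defaultdict(list)
for doc_id in sorted(documents):
    for word in set(documents[doc_id].split()):
        inverted_index[word].append(doc_id)

print(inverted_index["katy"])   # [1, 2, 89, 555, 789]
print(inverted_index["perry"])  # [2, 8, 73, 555, 1000]
```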

Now when someone comes to Google and they type in Katy Perry, you want to say, OK, what documents might match Katy Perry? Well, document one has Katy, but it doesn’t have Perry. So it’s out. Document number two has both Katy and Perry, so that’s a possibility. Document eight has Perry but not Katy. 89 and 73 are out because they don’t have the right combination of words. 555 has both Katy and Perry. And then 789 and 1,000 are also out, since each has only one of the two words.
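
In code, that document selection step is just an intersection of the two sorted posting lists; a quick sketch:

```python
def intersect(postings_a, postings_b):
    """Merge-style intersection of two sorted posting lists."""
    i, j, matches = 0, 0, []
    while i < len(postings_a) and j < len(postings_b):
        if postings_a[i] == postings_b[j]:
            matches.append(postings_a[i])
            i += 1
            j += 1
        elif postings_a[i] < postings_b[j]:
            i += 1
        else:
            j += 1
    return matches

katy = [1, 2, 89, 555, 789]
perry = [2, 8, 73, 555, 1000]
print(intersect(katy, perry))  # [2, 555] -- the only documents containing both words
```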

And so when someone comes to Google and they type in Chicken Little, Britney Spears, Matt Cutts, Katy Perry, whatever it is, we find the documents that we believe have those words, either on the page or maybe in back links, in anchor text pointing to that document. Once you’ve done what’s called document selection, you try to figure out, how should you rank those? And that’s really tricky.

We use page rank as well as over 200 other factors in our rankings to try to say, OK, maybe this document is really authoritative. It has a lot of reputation because it has a lot of page rank. But it only has the word Perry once. And it just happens to have the word Katy somewhere else on the page. Whereas here is a document that has the word Katy and Perry right next to each other, so there’s proximity. And it’s got a lot of reputation. It’s got a lot of links pointing to it.
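
That trade-off (a high-reputation page with scattered terms versus a page with the exact phrase) can be sketched as a toy scoring function. The fields, weights, and numbers below are invented for illustration; the real system combines over 200 signals:

```python
def score(doc, query_terms):
    proximity = 1.0 if doc["has_phrase"] else 0.2            # terms adjacent vs scattered
    topicality = sum(doc["term_counts"].get(t, 0) for t in query_terms)
    reputation = doc["pagerank"]                              # link-based reputation
    return 0.5 * reputation + 0.3 * proximity + 0.2 * min(topicality, 5) / 5

docs = [
    {"id": 555, "pagerank": 0.4, "has_phrase": True,
     "term_counts": {"katy": 3, "perry": 3}},
    {"id": 2, "pagerank": 0.9, "has_phrase": False,
     "term_counts": {"katy": 1, "perry": 1}},
]
ranked = sorted(docs, key=lambda d: score(d, ["katy", "perry"]), reverse=True)
print([d["id"] for d in ranked])  # [555, 2]: the phrase match outranks the higher-reputation page
```

With these made-up weights, the document containing the exact phrase edges out the more reputable one, which is exactly the balancing act described next.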

So we try to balance that off. You want to find reputable documents that are also about what the user typed in. And that’s kind of the secret sauce, trying to figure out a way to combine those 200 different ranking signals in order to find the most relevant document. So at any given time, hundreds of millions of times a day, someone comes to Google. We try to find the closest data center to them. They type in something like Katy Perry.

We send that query out to hundreds of different machines all at once, which look through their little tiny fraction of the web that we’ve indexed. And we find, OK, these are the documents that we think best match. All those machines return their matches. And we say, OK, what’s the creme de la creme? What’s the needle in the haystack? What’s the best page that matches this query across our entire index? And then we take that page and we try to show it with a useful snippet. So you show the key words in the context of the document. And you get it all back in under half a second. So that’s probably about as long as we can go on without straining YouTube.
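
The fan-out-and-merge pattern described here is often called scatter-gather. A minimal sketch, with invented shard contents and scores standing in for the real index machines:

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

# Each "shard" holds (and scores) a tiny slice of the index.
shards = [
    {"doc-a": 0.31, "doc-b": 0.74},
    {"doc-c": 0.66},
    {"doc-d": 0.12, "doc-e": 0.91},
]

def search_shard(shard, query):
    # In reality each machine searches its own fraction of the index for `query`;
    # here we simply return its precomputed (score, doc) pairs.
    return [(score, doc) for doc, score in shard.items()]

def search(query, top_k=2):
    with ThreadPoolExecutor() as pool:                         # send the query to every shard at once
        per_shard = pool.map(lambda s: search_shard(s, query), shards)
    merged = [hit for hits in per_shard for hit in hits]
    return heapq.nlargest(top_k, merged)                       # keep the best across the whole index

print(search("katy perry"))  # [(0.91, 'doc-e'), (0.74, 'doc-b')]
```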

But that just gives you a little bit of a feel about how the crawling system works, how we index documents, how things get returned in under half a second through that massive parallelization.

I hope that helps. And if you want to know more, there’s a whole bunch of articles and academic papers about Google, and page rank, and how Google works. But you can also apply: there’s jobs@google.com, I think, or google.com/jobs, if you’re interested in learning a lot more about how search engines work. OK. Thanks very much.
“The Anatomy of a Large-Scale Hypertextual Web Search Engine”: http://research.google.com/pubs/archive/334.pdf

Get hired by Google and learn even more: http://www.google.com/intl/en/jobs/index.html

Want your question to be answered on a video like this? Follow Google on Twitter and look for an announcement when we take new questions: http://twitter.com/googlewmc

More videos: http://www.youtube.com/GoogleWebmasterHelp
Webmaster Central Blog: http://googlewebmastercentral.blogspot.com/
Webmaster Central: http://www.google.com/webmasters

January 2012 U.S. Online Video Rankings

comScore, Inc. (NASDAQ: SCOR), a leader in measuring the digital world, today released data from the comScore Video Metrix service showing that 181 million U.S. Internet users watched nearly 40 billion online content videos in January.

Top 10 Video Content Properties by Unique Viewers

Google Sites, driven primarily by video viewing at YouTube.com, ranked as the top online video content property in January with 152 million unique viewers, followed by VEVO with 51.5 million, Yahoo! Sites with 49.2 million, Viacom Digital with 48.1 million and Facebook.com with 45.1 million. Nearly 40 billion video views occurred during the month, with Google Sites generating the highest number at 18.6 billion, followed by Hulu with 877 million and VEVO with 717 million. The average viewer watched 22.6 hours of online video content, with Google Sites (7.5 hours) and Hulu (3.2 hours) demonstrating the highest average engagement among the top ten properties.

Top U.S. Online Video Content Properties Ranked by Unique Video Viewers
January 2012
Total U.S. – Home and Work Locations
Content Videos Only (Ad Videos Not Included)
Source: comScore Video Metrix
Property | Total Unique Viewers (000) | Videos (000)* | Minutes per Viewer
Total Internet : Total Audience | 181,115 | 39,995,849 | 1,354.7
Google Sites | 151,989 | 18,633,743 | 448.7
VEVO | 51,499 | 716,608 | 62.2
Yahoo! Sites | 49,215 | 538,260 | 57.4
Viacom Digital | 48,104 | 507,046 | 58.0
Facebook.com | 45,135 | 248,941 | 22.0
Microsoft Sites | 41,491 | 558,017 | 51.3
AOL, Inc. | 40,991 | 419,783 | 51.4
Hulu | 31,383 | 877,388 | 189.0
Amazon Sites | 27,906 | 86,705 | 19.7
NBC Universal | 27,096 | 95,034 | 17.2

*A video is defined as any streamed segment of audiovisual content, including both progressive downloads and live streams. For long-form, segmented content (e.g., television episodes with ad pods in the middle), each segment of the content is counted as a distinct video stream.

Top 10 Video Ad Properties by Video Ads Viewed

Americans viewed 5.6 billion video ads in January, with Hulu delivering the highest number of video ad impressions at 1.4 billion. Adap.tv ranked second overall (and highest among video ad exchanges/networks) with 652 million ad views, followed by BrightRoll Video Network with 598 million, Tremor Video with 580 million and Specific Media with 398 million. Time spent watching video ads totaled more than 2.3 billion minutes during the month, with Hulu delivering the highest duration of video ads at 540 million minutes. Video ads reached 47 percent of the total U.S. population an average of 38 times during the month. Hulu delivered the highest frequency of video ads to its viewers with an average of 43, while ESPN delivered an average of 20 ads per viewer.

Top U.S. Online Video Ad Properties Ranked by Video Ads* Viewed
January 2012
Total U.S. – Home and Work Locations
Ad Videos Only (Content Videos Not Included)
Source: comScore Video Metrix
Property | Video Ads (000) | Total Ad Minutes (MM) | Frequency (Ads per Viewer) | % Reach Total U.S. Population
Total Internet : Total Audience | 5,558,261 | 2,329 | 38.4 | 47.3
Hulu | 1,446,618 | 540 | 43.1 | 11.0
Adap.tv | 651,531 | 395 | 10.8 | 19.8
BrightRoll Video Network** | 598,353 | 370 | 6.1 | 32.3
Tremor Video** | 580,302 | 314 | 12.6 | 15.0
Specific Media** | 397,941 | 187 | 5.6 | 23.2
Auditude, Inc.** | 386,702 | 151 | 9.7 | 13.1
Microsoft Sites | 385,581 | 149 | 11.2 | 11.2
SpotXchange Video Ad Marketplace** | 356,755 | 207 | 10.3 | 11.3
ESPN | 343,801 | 131 | 20.0 | 5.6
Viacom Digital | 286,024 | 123 | 12.8 | 7.3

*Video ads include streaming-video advertising only and do not include other types of video monetization, such as overlays, branded players, matching banner ads, homepage ads, etc.
**Indicates video ad network
†Indicates video ad exchange

Top 10 YouTube Partner Channels by Unique Viewers

The January 2012 YouTube partner data revealed that video music channels VEVO (50.6 million viewers) and Warner Music (29.7 million viewers) maintained the top two positions. Gaming channel Machinima ranked third with 23.8 million viewers, followed by Maker Studios Inc. with 12.5 million, FullScreen with 11.6 million and Big Frame with 8.2 million. Among the top 10 YouTube partners, VEVO demonstrated the highest engagement (62 minutes per viewer) and highest number of videos viewed (696 million), while Machinima exhibited the second highest engagement (60 minutes per viewer) and number of videos viewed (347 million).

Top YouTube Partner Channels* Ranked by Unique Video Viewers
January 2012
Total U.S. – Home and Work Locations
Content Videos Only (Ad Videos Not Included)
Source: comScore Video Metrix
Property | Total Unique Viewers (000) | Videos (000) | Minutes per Viewer
VEVO @ YouTube | 50,563 | 695,947 | 61.8
Warner Music @ YouTube | 29,718 | 187,672 | 27.5
Machinima @ YouTube | 23,799 | 347,380 | 60.4
Maker Studios Inc. @ YouTube | 12,505 | 135,301 | 47.4
FullScreen @ YouTube | 11,579 | 50,292 | 17.6
Big Frame @ YouTube | 8,167 | 42,106 | 18.8
BroadbandTV @ YouTube | 8,016 | 29,695 | 15.8
Bigpoint @ YouTube | 7,864 | 43,146 | 21.1
Blizzard @ YouTube | 7,572 | 13,021 | 4.1
Demand Media @ YouTube | 7,296 | 19,804 | 9.4

*YouTube Partner Reporting is based on online video content viewing and does not include claimed user-generated content

Other notable findings from January 2012 include:

  • 84.4 percent of the U.S. Internet audience viewed online video.
  • The duration of the average online content video was 6.1 minutes, while the average online video ad was 0.4 minutes.
  • Video ads accounted for 12.2 percent of all videos viewed and 0.9 percent of all minutes spent viewing video online.
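
As a quick sanity check, those bullet-point figures can be reproduced from the headline numbers already shown in the two tables above (counts in thousands, ad minutes in millions); a short Python sketch:

```python
content_videos_k = 39_995_849      # content videos (000), from the content table
ad_videos_k = 5_558_261            # video ads (000), from the ad table
viewers_k = 181_115                # unique viewers (000)
content_min_per_viewer = 1_354.7   # minutes per viewer
ad_minutes_mm = 2_329              # total ad minutes (MM)

total_videos_k = content_videos_k + ad_videos_k
content_minutes = viewers_k * 1_000 * content_min_per_viewer
ad_minutes = ad_minutes_mm * 1_000_000

print(f"ads as % of all videos:  {100 * ad_videos_k / total_videos_k:.1f}%")                 # ~12.2%
print(f"ads as % of all minutes: {100 * ad_minutes / (content_minutes + ad_minutes):.1f}%")  # ~0.9%
print(f"hours of content per viewer: {content_min_per_viewer / 60:.1f}")                     # ~22.6
```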

About comScore
comScore, Inc. (NASDAQ: SCOR) is a global leader in measuring the digital world and the preferred source of digital business analytics. For more information, please visit www.comscore.com/companyinfo.