
Building livestream video chat in HTML 5


Hi, my name is Kane, and over the last couple of weeks I’ve been working on a random video chat in raw JavaScript. The result can be seen at mitchat.com, and in this post I’m going to describe my experience and the technology used to achieve it.

I apologize in advance if my English is incorrect at times — it’s not my native language.

Server

I’m a long-time Linode customer, so that’s what I’m using to host MitChat: a basic VPS for $20/month with Ubuntu 12.04 on top. Thanks to Linode and their awesome 2 TB of outbound traffic, I don’t have to think about a server change for a while.

Stack

Laravel 4.1 (PHP framework) sitting behind nginx on port 443 (SSL). Alongside it we’ve got node.js running on the same server and the same domain, but on port 444. Node, because it’s in JS, it’s fast, and more importantly it has a great WebSocket module (socket.io) — the main pipeline for our video stream. More details on that below.

For version control I’m using git and python’s awesome Fabric library for deployment to the production server.

Here’s a picture of the structure and its workflow:

Back end stack and basic workflow

Ok, that was back end part. Now to the front end.

This one is pretty slim: require.js to handle structure and module loading. On top of that we’ve got jQuery, Backbone, Underscore, glfx.js, and some private modules to handle dates, modal window and some other things.

Now to the meat.

PHP back end

The back end is very thin and there’s really not that much to explain. The index page is generated by the Laravel PHP framework — this allows me to handle assets and environment values. It also generates unique user IDs (and will enable Twitter sign-up in the future). And that’s all that PHP does.

Node.js back end

This one is also a very thin layer. Node handles WebSocket connections from users, accepts text messages and video frames, sends those messages and frames to the correct users, and handles user events such as disconnects and requests to find a person to talk to. And no more: we want to keep the server as light and as fast as possible in order to deal with a constant stream of video data.
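To make that concrete, here is a minimal sketch of what such a relay layer might look like. This is not MitChat’s actual code: the event names (`chat:start`, `chat:closed`), the single-slot waiting queue, and the `createMatchmaker` helper are all my own assumptions. A "socket" here is anything with an `emit(event, data)` method, so the same logic would sit behind socket.io handlers.

```javascript
// Sketch of a thin relay/matchmaking layer (names and queue behaviour
// are assumptions, not MitChat's actual implementation).
function createMatchmaker() {
  let waiting = null;            // at most one user waiting for a partner
  const partners = new Map();    // socket -> its current chat partner

  return {
    // "Find a person to talk to": pair with the waiting user, or wait.
    findPartner(socket) {
      if (waiting && waiting !== socket) {
        const other = waiting;
        waiting = null;
        partners.set(socket, other);
        partners.set(other, socket);
        socket.emit('chat:start');
        other.emit('chat:start');
      } else {
        waiting = socket;
      }
    },

    // Text messages and video frames are forwarded untouched.
    relay(socket, event, payload) {
      const partner = partners.get(socket);
      if (partner) partner.emit(event, payload);
    },

    // On disconnect, notify the partner and clean up.
    disconnect(socket) {
      if (waiting === socket) waiting = null;
      const partner = partners.get(socket);
      if (partner) {
        partner.emit('chat:closed');
        partners.delete(partner);
      }
      partners.delete(socket);
    },
  };
}
```

The point of keeping the server this dumb is exactly what the post describes: no decoding or processing of frames, just routing, so the node process stays fast under a constant stream of video data.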

JS front end

Here comes the largest part of the app. Backbone provides the structure for MitChat — I’ve been using Backbone for over a year and feel very comfortable with it; plus, for a simple app, Ember.js or Angular.js would be overkill, and MitChat is certainly not a large project. Backbone handles just two views and a handful of events. One view is Global, which deals with some general stuff, and the other is ChatView, where most of the work is concentrated.

Let’s look at what ChatView does:

  • Initialize the WebSocket connection
  • Detect the browser (via Bowser) and check whether it supports the WebP image format
  • Handle WebSocket events — text feed, video feed, administrative feed
  • Handle DOM events:
    — Start a new chat
    — Close the current chat
    — Input field typing (emit a socket message telling the Stranger that you’re typing)
  • Initialize the Media module
  • Handle Media module events:
    — The redraw event that deals with the video feed

I’m not going to talk about stuff like DOM events or WebSocket events — those are pretty trivial. I think the Media module will be much more interesting.

Media

This is the key element of MitChat. First, we capture webcam video via the awesome WebRTC function getUserMedia(). This method captures the video stream from your web camera and sends it into a <video> element (not attached to the DOM). After that, video data from the <video> is transferred to a normal canvas at 320×240px (cropped from the center if needed). This has to be done because getUserMedia() won’t return a stream in the specific size you want — you’ll get something close to it. Chrome might return one size while Firefox returns another, and we can’t have that — the resulting feed would be distorted. This is the reason we pass video frames through the canvas.
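The center-crop step boils down to picking the largest source rectangle with the target aspect ratio. Here is a small sketch; the `centerCrop` helper name is mine, not MitChat’s, and the browser-side `drawImage` usage at the bottom is just an illustration of how it would be applied.

```javascript
// Compute the largest centred source rectangle matching the target
// aspect ratio (e.g. 320x240), so frames are cropped, not distorted.
// Helper name is my own, not from MitChat's code.
function centerCrop(srcW, srcH, dstW, dstH) {
  const srcAspect = srcW / srcH;
  const dstAspect = dstW / dstH;
  let sw = srcW, sh = srcH;
  if (srcAspect > dstAspect) {
    sw = Math.round(srcH * dstAspect);  // source too wide: trim the sides
  } else if (srcAspect < dstAspect) {
    sh = Math.round(srcW / dstAspect);  // source too tall: trim top/bottom
  }
  const sx = Math.round((srcW - sw) / 2);
  const sy = Math.round((srcH - sh) / 2);
  return { sx, sy, sw, sh };
}

// In the browser, each frame would then be copied to the 320x240 canvas:
//   const { sx, sy, sw, sh } =
//     centerCrop(video.videoWidth, video.videoHeight, 320, 240);
//   ctx.drawImage(video, sx, sy, sw, sh, 0, 0, 320, 240);
```

For example, a 640×360 (16:9) camera stream would be cropped to a centred 480×360 region before being scaled down to 320×240, while a 640×480 (4:3) stream passes through uncropped.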

Now that we have a proper image at 320×240px, we need to deal with blur. For this we’ll be using a WebGL shader via the awesome library glfx.js, written by Evan Wallace. Things aren’t so simple here either. Since glfx.js uses WebGL, we can’t apply filters directly to our first canvas. Instead, we have to use another canvas with the ‘experimental-webgl’ context. This “effects” canvas accepts a texture from the normal canvas, and then glfx.js applies a triangle blur shader.

Media workflow

And only now have we got our blurred video feed at a blazing 10fps. Every frame generates the “redraw” event described in the ChatView part.

You might ask why MitChat uses WebGL instead of creating the blur on canvas via JS. The answer is speed — a JS-generated blur is slow and CPU-intensive, while WebGL is very fast and efficient. Unfortunately, this does affect the user base — you won’t be able to use MitChat on a device or browser that doesn’t support WebGL.

So, we’ve got our feed and we’re firing redraw events. Each event has a callback that receives a video frame. Each frame is then sent via WebSocket to the server, and from there to the Stranger you’re talking to. On the Stranger’s side of the chat, your frames are received, dumped into an Image (to decode the base64), and then drawn onto the <canvas> that serves as the background for the chat.

FPS

While we’re at frames, let’s talk about fps. Since we’re not using actual data streams to send compressed video like dedicated software would (Skype, Flash-based apps), we have to watch our traffic — frames have to be sent and received consecutively so there aren’t any jumps or lags. That’s why frames have to be as lightweight as possible. For that, we’ve got two formats: JPEG and WebP. At the moment the WebP format is only available in WebKit-based browsers, so it’s used only if both chatting users’ browsers support it; otherwise MitChat falls back to JPEG with fluctuating quality and a lower fps. Lower because of the size of the compressed frames — a JPEG frame weighs about 3-4x as much as a WebP frame.
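That format negotiation can be sketched as a tiny pure function. The concrete quality and fps numbers below are illustrative guesses, not MitChat’s actual tuning, and `negotiateFrames` is a hypothetical helper name.

```javascript
// Pick the frame format for a chat pair. WebP is used only when BOTH
// browsers support it; otherwise fall back to JPEG at a lower fps,
// since JPEG frames are roughly 3-4x heavier than WebP ones.
// The specific quality/fps values are illustrative assumptions.
function negotiateFrames(mySupportsWebP, theirSupportsWebP) {
  if (mySupportsWebP && theirSupportsWebP) {
    return { mime: 'image/webp', quality: 0.6, fps: 10 };
  }
  return { mime: 'image/jpeg', quality: 0.4, fps: 6 };
}

// In the browser, each frame would then be serialized from the canvas
// and pushed over the WebSocket:
//   const s = negotiateFrames(iSupportWebP, strangerSupportsWebP);
//   const frame = canvas.toDataURL(s.mime, s.quality);
//   socket.emit('video:frame', frame);
```

Note that `canvas.toDataURL('image/webp', …)` silently falls back to PNG in browsers without WebP support, which is exactly why both sides have to agree on the format up front.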

Seems like we’re done here. Obviously, this is a very rough and generalized description of how MitChat works, but I’m assuming that if you’re interested in this stuff, you’ll be able to fill in the gaps yourself.


Technology is ruining my reputation


So when you’re hired to make a video or film, whether it be music video, commercial, corporate promo, short film, feature… whatever your field… there’s a certain expectation that comes from the client.

In their head they’ve paid a lot of money for a professional to come in and do something that they couldn’t do themselves. They’ve paid a premium for a very special skillset and (usually) they are expecting someone to turn up and make them feel like Hollywood movie stars.

SO HOW ON EARTH DO I EXPLAIN THIS?

This is the Blackmagic Design Pocket Cinema Camera and it costs £700!

This camera looks like something my sister got for Christmas, something all the hipsters are using for their lomo photos. You know, the ones they just take on a normal compact camera with a 35mm lens and then add an “emerald” filter to on Facebook?

The price isn’t much different either. There are plenty of cameras that look like this on sale in PC World/Curry’s right now (Zeiss lens not included). You can probably buy one from Argos, for god’s sake!

We’ve just bought one of these little pieces of crap, and although it looks just like the camera your auntie sold because she can take better photos on her Samsung Galaxy, it’s a whole lot more than that.

This little baby can shoot 12-bit RAW video onto an SD card, as well as 10-bit ProRes 422 video with a ‘log’ picture style that is just stunning.

In case that doesn’t mean much to you: you don’t get RAW video on cameras under £15,000 (correct me if I’m wrong on that; Magic Lantern on the 5D Mk III excluded), and you don’t get 10-bit ProRes on anything below the C300 at £11,000-ish. The power of this camera is staggering!

Yes the menus are crap.

Yes you have to record sound to an external recorder.

Yes the battery life is rubbish.

Yes, it’s an M43 sensor, so you’ll need a lens adapter for most of your lenses.

HOWEVER! I WAS BROUGHT INTO FILMMAKING WITH 35MM ADAPTERS FOR XL-1s and JVC H100s.

I REMEMBER EVERYONE SAYING THE CANON 5D mkii WOULD NEVER WORK AS A PRODUCTION CAMERA.

Now where are we? DSLR production is an accepted format for a lot of productions, and video cameras have had to respond, with the likes of the C100 and C300 becoming smaller and closer to the DSLR form factor.

YET!

I HAVE A DIFFERENT ISSUE!

HOW THE HELL DO I TURN UP TO A PROFESSIONAL SHOOT WITH THIS TINY LITTLE THING?

Granted I can add a rig to it like this…

But to be honest I often like to go hand held or steadicam and rails just aren’t practical.

So I’d like to make a strange request.

CAN TECH STOP GETTING SO SMALL PLEASE!

It’s ruining my rep turning up with something that looks like a kids’ camera!

It looks ridiculous on a rig too!

Written by

Owner and Senior Creative Director of @BanterMediauk | I have more pet #lizards than I have pairs of shoes.

Video Syndication Startup Vidible Raises $3.35M Round Led By Greycroft



Vidible, a startup connecting buyers and sellers of video content, is confirming that it has raised a $3.35 million Series A led by Greycroft Partners.

The round was first revealed in a regulatory filing in late December, but the company is only confirming the news and sharing details now. In addition to Greycroft, IDG also participated in the new funding, according to Vidible co-founder and President Tim Mahlman (pictured).

Mahlman gave me a quick demo of the product. He said the current methods of syndicating videos are “archaic,” with very little control or transparency. For example, he said publishers looking for videos usually have to go through an unsorted Media RSS feed.

With Vidible, on the other hand, content buyers can search for different kinds of videos, or they can just include the Vidible tag on their site and relevant videos will be played automatically. The content creators, meanwhile, have control over where their videos get played, and both sides have access to analytics.

Greycroft’s John Elton argued that Vidible is taking advantage of three broad trends — the growth in video consumption, the “increasing demand from content sites for video,” and the “increasing demand from advertisers for video impressions.” When asked if he thinks we’ll see a growing number of sites choosing to syndicate videos created by others, rather than create the videos themselves, he noted that most newspaper companies (for example) didn’t create TV channels either, “So why do we think they’re going to be able to do that for online video?”

“I think it’s a new medium,” Elton added. “There are people that do it very well, that are looking for more distribution, and there are publishers looking for content that’s appropriate for their site.”

He also said that he’s impressed by Vidible’s focus on monetization. The content buyer pays a set rate based on impressions, then they can either run their own ads with the videos or run ads from one of Vidible’s network partners.

Mahlman and his co-founder/CEO Michael Hyman both have ad tech experience (Hyman’s company Oggifinogi was acquired by Collective, while Mahlman has held positions at companies like Turn and BlueLithium), and apparently they’ve been working on Vidible for the past year. Mahlman said the beta version of the product launched over the summer, with 100 video providers now signed up and more than 1 billion impressions served each month.

“We’ve been focused on R&D until now,” Mahlman said. “Now it’s a matter of building out the business arm.”

He added that Vidible is also looking to expand internationally.

http://techcrunch.com/2014/01/03/vidible-series-a/


Discovering User Generated Video Content. An Email Interview With Ryan Parent Of VidRack.


What is it? How does it work? I put these questions to Ryan over email.

VidRack has created software that allows people to add a video camera to their website. This lets website visitors record and submit video content right on the website. A webmaster can place a record button anywhere on their site and can customize the text around the button to offer incentives or instructions on the specific types of videos they want submitted. When a website visitor clicks the record button, it activates their webcam or cell phone camera and allows them to record a video right on the website. Videos are collected privately. The website owner can then download the original video files that were created on their website and filter through the submissions to determine which they would like to use for marketing or feedback purposes. The typical uses of our product are to easily gather video testimonials and reviews, video interviews and auditions, and news submissions, and as a way for customers, fans, or members of a network to engage with each other. It even has uses in online discussion forums or dating and networking websites.
How many people use it?
We released our first product at the end of October 2013 and we’ve had just over 500 people sign up for our plugin. We are currently building account management software that will allow us to determine exactly how many people are actively using the product and how they’re using it. We’re currently getting 10-15 sign ups a day.
 What do you do?
I have a background in Internet marketing and mainly focus on driving targeted organic traffic through SEO, social media and email marketing. Outside of this I help with the vision for our company along with product testing.
How did you come up with the name? 
We wanted a name where we could get the .com for a reasonable price. It had to be two syllables or less, under eight characters, easy to say and spell, and somehow related to what we do. Since a video rack is where people find new videos, store their videos, and get their videos from, we thought VidRack was perfect for us!
Where is the company headquartered?
We’re located in Ottawa, Ontario, Canada.
Yearly revenue?  Any funding?
We are currently self-funded, as our company is still quite new. We only began operations in June 2013 and have not started generating revenue at this point. Our main focus is developing a loyal user base and then adding premium features to our service.
How many people do you have working it?  Who are they and what are their duties?
I currently have one business partner who is also from a marketing and sales background. He also has technical experience, so he works closely with our developers to create and refine the product. We also have a designer who is working closely with us to give a refined experience for our users. Our development is outsourced overseas, and we’re currently looking for an experienced technical partner to lead all development.
How did you come up with this idea for the company?  
I was managing a church website and thought it would be great to have members of our network connect with each other through video. I started asking people to submit videos with their stories and words of encouragement so that I could share them with others and create a sense of community. I had no shortage of people willing to record and submit videos; the problem was that when I presented them with the various options to create and submit video, they all started flaking out on me. There was no easy way for our website visitors to record and submit video content right on our website.
Why did you create this company?  
I created the company because I wanted to solve the original problem I had encountered. Through doing that I have also realized I have a huge passion for marketing and video marketing.
What and/or who was your inspiration?
The main inspiration for me is the idea of tribal marketing. We all want to feel connected and part of something greater. We want to belong to a group, and we also want to be heard within that group. One of the greatest forces pushing humans to take action is our desire to escape loneliness. User-generated video content allows people to connect with each other within a given network. That connection is in direct opposition to loneliness.

How the Video of Saddam Hussein’s Execution Went Viral Before You Owned a Smartphone


By William Youmans, December 30, 2013

Image Credit: AP

One of the earliest leaked, newsworthy cell phone recordings to go viral was of the December 30, 2006 execution of former Iraqi dictator Saddam Hussein. It revealed the potential of mobile phone recordings to undermine the official telling of news events.

Today, citizen-produced media content is integral to the news we consume, whether used in regular coverage or posted directly to a social media platform.  The photos and videos taken by ordinary people at scenes of violence, from the Boston Marathon bombing to chemical weapons attacks in Syria, shape how those events will be remembered by those of us who only saw the destruction from afar.

Rarely does one video totally alter the tenor of news coverage, however. The leaked cell phone video of Saddam Hussein’s hanging did just that, making it a notable moment in citizen journalism history.

The execution was a tale of two recordings: the controlled, stage-managed official video and the viral, leaked cell phone clip. They depicted the same event but with important differences that expose the promise and peril of mobile recording devices for changing how news is produced and consumed.

The Backstory

Iraq’s new government rushed the deposed tyrant’s hanging after a lengthy, controversial trial. If they thought it would help bring a cessation to the sectarian and factional fighting that wracked an Iraq under American military occupation, they were wrong.

Many saw the trial itself as corrupt, political theater. The conclusion of a guilty verdict seemed preordained. The fallen tyrant tried to use it as a podium for grandiose rhetoric. Critics of the war questioned the legality of the invasion that brought about the deposed leader’s trial.

Image Credit: AP

However, the continued fighting was not clearly tied to Hussein’s fate. There was historical score-settling, there were longstanding tribal and regional rivalries and shifting alliances with foreign actors (including the United States and Iran), and groups motivated by religion or ideology conducted campaigns of violence. They all made Iraq bleed.

Hussein’s execution proved to be but a hiccup in the complex civil war that embroiled Iraq in 2006-2007 and is still felt today. The country is being rocked by its highest levels of violence since 2007.

The Two Videos

On the day of the hanging seven years ago, US helicopters flew Iraqi officials and witnesses who testified against Hussein to the site of the execution, an old military intelligence facility. It was the Hussein regime’s execution chamber.

The first video was the official one, taken on a professional grade camera. It was broadcast on networks around the world. The audio track, however, was not provided by the Iraqi government.

Less than 48 hours afterwards, someone leaked a cell phone video of the execution online. Whoever shot it was below, in front of the gallows, while the official cameraman was at the top. Since the guests were made to give up their cell phones before entering, the phone had to be smuggled in.

This video circulated online after it was posted on the video website, www.anwarweb.net.

Warning (Graphic video):

The poor-quality recording is a reminder of how bad cell phone cameras were. Yet the grainy, jerky two-and-a-half-minute video proved to be the more potent and memorable of the two.

The sound caught in the mobile phone recording told a very different story. The audio hinted at the deep sectarianism and lack of procedural care that made the hanging seem more like vulgar retribution than the outcome of a just and sound legal proceeding.

Some of those in attendance chanted “Moqtada” repeatedly.  This referred to a Shia political leader and cleric whose Grand Ayatollah father was gunned down in 1999, probably on Hussein’s orders.

When Hussein was hanged, 1:40 into the video, the snap of his neck was audible, betraying a gruesomeness absent from the official clip. The onlooking officials and guards hovered over his body, shouting in celebration. Someone yelled, “The tyrant has fallen!”

The Fallout

The leaked video made for alluring TV. It was off-script and unmanicured, outside the desires of professional image managers.  It seemed real. The video went viral and news media had to follow.


Al-Jazeera showed an edited version. Soon, it was on the American networks that previously aired the official version.  American news anchors comforted viewers that it would stop short of the actual moment of the trap door’s release. Hordes went online to see the video unfiltered.

The stark differences in the two videos were quite apparent to ABC News’s senior vice president at the time, Bob Murphy:

“It’s a different angle on the same event. It has much more audio and ambient sound. They’re clearly taunting him. It’s a much more hostile environment than you get from watching the video this morning. The earlier video makes it seem much more passive and serene than it actually was.”

It was seen as lacking the composed, disciplined ceremonial style of an official execution.

Some argued this leaked version benefitted Hussein. Michael Newton and Michael Scharf felt that the “taunting made Saddam appear somewhat stoic and dignified as his evil life drew to an end.”  They argued that it let him appear more convincingly as the Arab nationalist hero of Iraq, the primary message of his defense during the trial.

Hisham Melham, of Al-Arabiya, said that the cell footage undercut the government’s narrative: “He was not trembling or in a state of panic as some Iraqi officials claimed him to be before the videos were released.”  It made Hussein “a sort of victim or martyr” and appear “more dignified than his executioner.”

After the execution video leaked, it generated new discussion about the hanging. Vivian Salama wrote, “[t]he role of citizen journalists had never been so prominent as in the coverage of Saddam Hussein’s demise.” The clip was discussed extensively in the blogosphere.

The leaked video was also compelling because of what it said about Iraq.  The country still suffers from broken legal and political institutions. It sees fighting along the same factional lines that came to the fore in the leaked video.

As John Burns of the New York Times said about the video:

“This whole event had the most terrible, ghastly — I’m sorry to use the phrase — beauty about it, in the sense that it told us so much, almost in a Shakespearean way, about all else that is happening in Iraq.”

The Lesson for Consuming News

Traditional news media from CNN to Al Jazeera seek such user-generated content. They encourage viewers to record events and upload the files that let them report more fully. They work to verify the content they receive, of course, to make sure the videos accurately depict what they claim.

The differential impact of these two videos shows that verification of videos does not erase subtle bias. Both quite accurately represented the same event, but the angles, lighting and audio were different enough to completely alter the meaning and reception of the video.

Big news stories, such as Mitt Romney’s presidential campaign rhetoric behind closed doors and the extracurricular activities of Toronto’s infamous mayor Rob Ford were spawned by these kinds of videos.

On the anniversary of Hussein’s hanging, it is worth remembering how a smuggled cell phone camera shattered the official story of the dictator’s final moments, and what it means for consuming leaked videos, or first-hand, amateur recordings today.


William Youmans

Assistant Professor, School of Media and Public Affairs, The George Washington University.

 

The Streaming Media Way Back Machine – My Strategy for Broadcast.com from 1999


Found this as I was cleaning up some backups from almost 15 years ago. Thought it would be interesting to let people see what my goals were for our merger with Yahoo back then.

Strategic Issues for Yahoo Broadcasting Group

June 18th 1999

General Strategic Issues:

Historically we have built value by adding content, expanding our network, and building new business products, whether biz svcs or advertising, before anyone else. Because we were first for all of these, we had a huge advantage, and there was not a significant cost beyond hardware, people, and bandwidth to accomplish them.

I think we are getting past this stage as an industry. I think we can expect that people will look at throwing money at content providers in order to try to play catch-up. In order to still be attractive to all of our partners (biz svcs, advertising, and content), we must find new ways of adding value. That new way is via proprietary software.

We have always said that software would be the last thing we added. We have taken this tack because there were so many other variables still evolving: software being re-genned every 3 months by Microsoft and Real Networks, the network itself, and learning what we can sell and what customers want to buy. We have reached the point now where our key value adds and differentiation have to come from software we produce in house.

We have to be able to demonstrate to our partners that our 4 years of experience have given us the base upon which to build applications that create unique opportunities. Our competition is falling into a trap in thinking that internet-based radio is a key offering. The real key offering is monetizing all digital offerings, regardless of whether it’s audio/video/flash or on the outside, and pushing down costs so that everything we do can happen in a lights-out, no-touch environment driven by software.

My feeling is that going forward, our biggest challenges will be

  1. Hiring Quality SW developers and Project Managers
  2. Developing and Supporting Leading Edge products that give us a sales and productivity advantage
  3. Having the balls to be willing to take chances and sell products and services that people don’t know they need yet.
  4. User Generated Content
    1. Ability for users to deliver live or on-demand content to a bcst server in native streamed protocols (non-HTTP)
      1. Build or Buy Software
      2. Subsidized hopefully by Microsoft
      3. License of Real Producer or modify encoder from Real Networks
      4. Possible license of WebKapture.com Software
      5. ability to provide low cost easy to use devices for digital encoding (dazzle)
      6. ability to provide streaming server plugins that automatically find an available server and host the content on that server

5. Ability to monitor usage and enforce limits in real time

  1. Ability to bill based on usage, length, subscription basis
  2. Ability to report on usage and users
  3. Music detection for copyright protection via comparisonics or getmedia

6. Lights out complete automation of encoding and serving systems

  1. Automation of Investment to provide a single port on Piso Audio/Video Matrix Switch for EVERY source of live content
  2. Ability to control any port to any port (or multiport) automation in realtime with realtime reporting
      1. With time based event triggers
      2. With tone (commercials) event triggers
      3. With Scene Transition event triggers (Islip)
      4. With Music to Voice to Silence Detection event triggers
      5. With Quality Control via noise tolerance detection
      6. Report within a programming matrix what programming is playing from which port to which encoder to which server, in text and graphical model with double click drill downs

7. Quality Control reporting

One thing we don’t do, that would be a marketing goldmine, is track the quality of our user connections. It would be very simple. All of our servers report the number of packets delivered/lost and buffering. We should be using this information in real time to show our users and our customers the confidence we have in our systems and how they are actually better.

  1. With complete database integration of all programming for programming guide and personalization purposes.
    1. A user will be able to select from the programming guide of live and on-demand content. The guide will know the source of the content and create, using ASP or CGI programming, a personal station
    2. This will be a drag-and-drop system where a user will be able to choose from thousands of programs, or content items, and drop them into their personal schedule/calendar at a specific time, or choose a Network of preselected programs and modify that.

a. User will have the option to download to their choice of devices if the content is eligible and they have paid for the right

  1. There will a database of user selections for each user, and a user history of activity.
  2. Advertisers will be able to select the demo or psychographics of users and have their commercials inserted in to the user stations
  3. For pre-programmed stations, or over-the-air stations, the advertiser will be able to insert commercials using the same user profiles through the use of Windows Media trigger-driven switching; for Real media, a plugin will have to be written that recognizes the trigger and inserts the media feed into the user-specific stream
  4. Realtime reporting of usage with user name/email, by content type, by geography, by psychographic demographics, for the purpose of providing advertisers the ability to monitor and SWITCH their ads in realtime. This would mean an advertiser, or even a network programmer, could program an ad in realtime based upon the number of viewers/listeners and their response to an ad.
  5. All clicks and movement throughout the site would be tracked and maintained in a user movement database for data mining, not for sale

Bottom line is any content available to any user, with any other content interwoven inside of it, with complete user selectability and every-click tracking and user identification

  1. Corporate Self Reservation and Broadcast Systems

The key to 99% margins in this business is the ability to allow corporate users to schedule, produce, and broadcast their own audio/video-based events, and to allow them to create and manage their own programming guides, quickly and easily. This requires a very easy-to-use system, comparable to audio teleconferencing systems, with additional integrated portal-style programming guides and complete backend reporting and billing.

1. To do this we basically need to reinvent how we produce events to make them a single hardware and software package. This package must be something that any idiot can use on their own. From a complete camera production kit, to audio couplers, to switches, we need to package a completely integrated system that is a leave-behind hardware system

2. We need the same solution from a network perspective. We must be able to go into a client, and just as an audio teleconferencing company installs an audio bridge and a T1 with X ports, we must install the hardware and the T1 with X capacity, along with encoders and servers, all prewired, and a control server that acts as a host, either locally or at bcst, that manages everything and communicates back to us

3. We can start this, as phase one, by offering it audio-only. A customer sets up an audio conference call using traditional means, and we integrate it into a webserving environment by dialing in a coupler connected to a Pisa port, or by having a 24×7 hardwired connection to the port, and just having it “join the call”

There are some packages that do this: Vstream, some TelSoft apps. I would prefer to see us buy before build if the price is reasonable, and focus on adding video to the application rather than trying to start from scratch.

First step is to hire a project manager who is an experienced programmer, preferably from the video teleconferencing industry.

We need this person to spec out the project, define roles, and manage it to completion.

We also need programmers dedicated to this application and its maintenance.

  1. We of course must get to video as quickly as possible, offering a turnkey solution for companies to put in conference rooms, AV rooms, or on their desktops, and even on laptops.
  2. Part of the solution must include in-depth reporting in real time. Companies must know who is using it, the cost, who is attending, and in-depth information on the quality of service of the broadcast. Was there any buffering, for whom, where, and how can it be fixed?
  3. All information must be able to be distributed to third-party applications. Companies must be able to let their HR systems or marketing systems know who attended, how long, whether they interacted, where they watched, at what bit rate, etc.
  1. Media Management – Indexing
  1. A core competency for BCST Group is to manage and index large quantities of audio and video in a search, choose, and download manner.
    1. Video content can be searched by indexing the closed captioning that comes with TV content, or, with content such as business content where it is financially worthwhile, by adding closed captioning.
    2. Video and audio content that is talking-head with no background noise can be indexed through speech recognition trained for a specific show where the voices are consistent.
    3. Video and audio content that doesn’t have a transcript of any sort can only be indexed by setting metadata at the time of encoding. We have an opportunity to set standards for how this data is indexed and encoded if we move quickly. This allows users to search on an unlimited number of inserted keywords.
    4. The challenge is in creating a low-cost system that can scale to thousands of petabytes of data. Just as BCST has created space between us and the rest by scaling our streaming infrastructure, a system that uses off-the-shelf hard disk storage to reduce costs, and that scales, will create a competitive advantage in the cost of hosting and the ability to add content.
      1. We need at least one person developing and implementing this hardware, plus programmers who use access information to develop a content distribution architecture that integrates the hardware and storage management software with reports of usage. This keeps the megabytes of content delivered in balance with the distribution variables that act as constraints: throughput of the serving app, the network segment, the disk storage system, and the file serving mechanism.
  1. Indexed content grows in value much like network usage: the more nodes in a network, the greater the value of the network; the more indexed, searchable video, the greater the value of the catalog of video.
  2. Content volume, traffic volume, and scalability of infrastructure are the keys to success, and the first mover creates the magnet for new content and users.
  3. An additional marketing need will be to productize this so that business services can sell hosted solutions or self-service solutions to corporations, and so that we can leverage the value of all the content we have to offer content on a subscription basis for viewing and/or downloading.
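The keyword-metadata indexing described above amounts to an inverted index mapping keywords to clips. A minimal sketch in JavaScript, where the clip objects and their fields are hypothetical illustrations, not anything from the memo:

```javascript
// Minimal inverted index over clip metadata: keyword -> set of clip ids.
function buildIndex(clips) {
  const index = new Map();
  for (const clip of clips) {
    for (const word of clip.keywords) {
      const key = word.toLowerCase();
      if (!index.has(key)) index.set(key, new Set());
      index.get(key).add(clip.id);
    }
  }
  return index;
}

// Return ids of clips tagged with every keyword in the query.
function search(index, query) {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  let result = null;
  for (const term of terms) {
    const ids = index.get(term) || new Set();
    result = result === null
      ? new Set(ids)
      : new Set([...result].filter(id => ids.has(id)));
  }
  return [...(result || [])].sort();
}
```

A production system would persist this in a database and add ranking, but the data model — keywords attached at encode time, intersected at query time — is the same.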

MetaData Search Tool

Internet TV Stations

User programmed

Pre-programmed

Download with copyright protection

Subscription Service

Custom Player

User Created Broadcast Network

Getting Wide via Content acquisition

Broadband video acquisition centers on the network

Digital TV Broadband bandwidth

CNBC Killer

CNN killer

Political TV/Pres Debates

Movies with Trimark

Quality Control of user streams – integration to routers

Active Directory Integration of files and users into 2 directories

 

Photo: Aidon / Getty Images

Originally published on blogmaverick.com

Posted by: Mark Cuban

 


The History Channel Is Running Out Of Material


Source


Marketers, Don’t Just Publish; Be a Publisher

If you want to be a successful marketer today, you need to be part data analyst and part publisher. We hear a lot about the former, but the latter is critical too. We’re all publishers (that includes me and our own marketing and sales site, of course) but many companies are only starting to come to terms with what that actually means in terms of talent and processes.

Anyone who’s a marketing leader in a large organization should watch this video. Robert Tas, managing director and head of digital marketing at JP Morgan Chase, explains what’s important when it comes to becoming a sophisticated publishing company.

As he says, one of the areas JP Morgan Chase sees as a big opportunity is content experiences across all sorts of devices 24 hours a day. It requires a fundamentally different way to look at information, and how to present it to people. It’s part of the content avalanche I’ve discussed earlier. Part of the great complexity around content is not just generating it, but the pipes you need to have in place to vet it and place it wherever (and whenever) your customers are. So this isn’t about managing writers and designers; it’s about publishing, which means the ability to manage a sophisticated content supply chain with processes, checks, oversight, metrics, and operations.

For a regulated business, content creation can be disconcerting. And one of the most important relationships to foster is the one with your legal department. But publishing has to become a core competency of any business today if it wants to stay connected to its customers and future customers.

Who is your brand’s editor-in-chief? Are you publishing like a publisher?

(Via David Edelman, LinkedIn; McKinsey partner leading the Digital Marketing Strategy Practice)


Google’s VP9 video codec nearly done; YouTube will use it


One of the biggest video sites on the Net will use Google’s next-generation video compression technology after it’s fully defined on June 17.


Google plans to finish defining its VP9 video codec on June 17, providing a date on which the company will be able to start using the next-generation compression technology in Chrome and on YouTube.

“Last week, we hosted over 100 guests at a summit meeting for VP9, the WebM Project’s next-generation open video codec. We were particularly happy to welcome our friends from YouTube, who spoke about their plans to support VP9 once support lands in Chrome,” Matt Frost, senior business product manager for the WebM Project, said in a blog post Friday.

WebM is Google’s project for freeing Web video from royalty constraints; the WebM technology at present combines VP8 with the Vorbis audio codec. Google unveiled WebM three years ago at the Google I/O show, but VP8 remains a relative rarity compared to today’s dominant video codec, H.264.

Because VP9 transmits video more efficiently than the current VP8 codec, the move will be a major milestone for Google and potential Web-video allies such as Mozilla that hope to see royalty-free video compression technology spread across the Web. However, even VP8 is still dogged by a patent-infringement concern from Nokia, and VP9 hasn’t yet run the intellectual property gauntlet.

Those using H.264 must pay patent royalties, and its successor, HEVC aka H.265, follows the same model.

H.265 is more efficient than H.264, offering comparable video quality at half the number of bits per second, and Google and its allies hope to bring a similar performance boost going from the current VP8 codec to VP9. That could help with mobile devices with bad network connections and could cut network costs for those with streaming-video expenses.

The VP9 bitstream definition, which describes how video is compressed into a stream of data so it can be transmitted efficiently over a network, has been in beta testing for a week, Frost said.

Paul Wilkins, a Google codec engineer, detailed the final schedule for the VP9 bitstream definition Thursday in a mailing list post.

WebM will be updated to accommodate the new video codec and a new audio codec called Opus, too, said another Google employee, Lou Quillio. “The existing WebM container will be extended to allow VP9 and Opus streams,” Quillio said on the mailing list.
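Because VP9 ships inside the same WebM container, pages will be able to feature-detect it with the standard HTMLMediaElement.canPlayType API and fall back to VP8 or H.264. A minimal sketch; the file names and source list are illustrative, and `canPlay` stands in for `canPlayType`, which returns "probably", "maybe", or "":

```javascript
// Choose the best video source the browser can decode, preferring VP9/WebM
// and falling back to VP8 and then H.264.
function pickSource(canPlay, sources) {
  for (const src of sources) {
    const answer = canPlay(src.type);
    if (answer === 'probably' || answer === 'maybe') return src.url;
  }
  return sources[sources.length - 1].url; // last resort: final listed source
}

// Illustrative source list, ordered from most to least preferred.
const sources = [
  { type: 'video/webm; codecs="vp9"',         url: 'clip.vp9.webm'  },
  { type: 'video/webm; codecs="vp8, vorbis"', url: 'clip.vp8.webm'  },
  { type: 'video/mp4; codecs="avc1.42E01E"',  url: 'clip.h264.mp4'  },
];
```

In a browser this would be called as `pickSource(t => document.createElement('video').canPlayType(t), sources)`.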

(Via Stephen Shankland, CNET)