HTML5 Native Video Streaming With WebRTC

I did a tongue-in-cheek video response to Wes Bos this week, who submitted this epic video for FluentConf showcasing his all-around HTML5 video awesomeness.  In this video he streams directly to his browser and applies effects on the fly from the console.  If this doesn’t impress you, you may be heavily medicated.

Uh - yeah.  Like A Boss.  Or rather like a Bos.

Now I saw this and had two thoughts…

1. That’s awesome. My video now sucks.

2. How the heck did he do that?

In the name of education and also to increase my own chances of getting picked up for FluentConf, I made an HTML5 video video of my own.  Video video.

My video wasn’t nearly as good, but I really enjoyed working with the new methods for capturing native video with nothing but a browser.  No plugins, just Chrome and JavaScript.


First, let’s do a little primer on WebRTC.  This is a project that was open sourced by Google last year in an attempt to enable Real Time Communication within a browser, handled natively by the browser.  Hence the RTC name.

Apparently, WebRTC has been around for a while and, according to the WebRTC site, is:

"Already integrated with best-of-breed voice and video engines that have been deployed on millions of end points over the last 8+ years."

It appears that Google is now attempting to make this the de facto standard for real time communication in the browser.

WebRTC provides a layer of abstraction which allows developers to use whichever supported “signaling” protocol they wish.  Additionally, WebRTC has a broad range of codecs for audio and video, as well as networking support for buffering and guarding against packet loss.

There’s a whole site dedicated just to WebRTC and I encourage you to read up on it there.  Especially if you are having trouble sleeping.

Browser Support

WebRTC is currently supported in the dev branch of Chrome along with Canary - which you can think of as the bleeding edge of Chrome.  I use the stable branch of Chrome along with Canary for one main reason: you can run Canary side-by-side with your existing stable Chrome install without any adverse effects.  They use different profiles.

Once you have downloaded either the dev or Canary builds, you have to enable WebRTC.  There are two ways to do this.  The documentation says to enable it by launching the browser with the “--enable-media-stream” switch.  This is too complicated for me, so instead just launch Chrome and go to chrome://flags.  That will take you to a GUI page where you can scroll down and enable WebRTC support.
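If you do want the switch route, the launch looks something like this on a Mac (the install path here is an assumption; adjust for your platform):

```shell
# Launch Canary with the WebRTC media-stream switch enabled
/Applications/Google\ Chrome\ Canary.app/Contents/MacOS/Google\ Chrome\ Canary --enable-media-stream
```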

Who gives you amazing screenshots with unnecessary arrows?  It’s just what I believe in.

Streaming The Video

Streaming the video to your browser is dead simple.  First you have to have a webcam for obvious reasons.

Start with a simple HTML5 video tag in your page.  Remember, if the video tag is not supported by the browser, you will see the text in between the open and close video tags.
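A minimal sketch of that tag (the id is my own choice, not from the original post):

```html
<!-- The fallback text renders only in browsers without <video> support -->
<video id="camera" autoplay>
  Your browser does not support the HTML5 video tag.
</video>
```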

It’s time to drop down into the JavaScript and fire up the webcam.  

The method we are going to be looking at is found off the navigator object which comes along for the ride with a supported browser.  The function we will call is webkitGetUserMedia().  This is incredibly Chrome specific at the moment, but it’s the bleeding edge of HTML5.

This method takes three arguments.  The first is a string denoting the type of media we are going to get - in this case “video”.  The second is the callback for when the stream is acquired, and the third is a function that executes if there is an error while getting the stream.
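As a rough sketch of that call, using the Chrome-prefixed, string-based signature of the time (the `requestWebcam` helper name and the explicit feature check are my own additions, not from the original post):

```javascript
// Hypothetical helper wrapping the prefixed API described above.
// nav is passed in (normally window.navigator) so the check is explicit.
function requestWebcam(nav, onStream, onError) {
  if (typeof nav.webkitGetUserMedia !== "function") {
    // Unsupported browsers otherwise fail silently,
    // so report the problem ourselves.
    onError(new Error("webkitGetUserMedia is not supported"));
    return;
  }
  // "video" asks for the webcam; the two callbacks handle
  // success (stream acquired) and failure respectively.
  nav.webkitGetUserMedia("video", onStream, onError);
}
```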

Note that the error callback does not fire if WebRTC itself is not supported.  If the browser doesn’t support it, the call dies silently, though you can see the error under the covers in your dev tools.

Assuming that your browser supports WebRTC and you got the stream, you enter the callback for success.  The success callback is where you need to select the video tag on your page and assign it the stream from the navigator.

The complete code for a page that displays your beautiful face via the webcam looks like this…
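Here is a sketch of what that complete page might look like.  The element id and the `window.webkitURL.createObjectURL` call reflect the Chrome builds of the time; treat this as illustrative rather than authoritative:

```html
<!DOCTYPE html>
<html>
<head>
  <title>WebRTC Webcam Demo</title>
</head>
<body>
  <!-- Fallback text shows only when <video> itself is unsupported -->
  <video id="camera" autoplay>
    Your browser does not support the HTML5 video tag.
  </video>

  <script>
    var video = document.getElementById("camera");

    function onSuccess(stream) {
      // Turn the raw media stream into a URL the video tag can play
      video.src = window.webkitURL.createObjectURL(stream);
    }

    function onError(error) {
      console.log("Failed to get the stream: ", error);
    }

    // "video" requests the webcam stream from the browser
    navigator.webkitGetUserMedia("video", onSuccess, onError);
  </script>
</body>
</html>
```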

That’s pretty darn simple code for what you get.  Now the cool thing is that this video is an HTML DOM object that you can manipulate however you want!  In my video, I just shrunk it down to 0 by 0, delayed, and then brought it back to size using jQuery animate functions.

If you are viewing this blog on a supported browser, you should see yourself below.  If you see nothing, it’s because your browser isn’t supported.  I could wrap it in a try/catch, but it’s late and I’m lazy.

In Wes’s video, he does some really cool stuff by adding effects to the stream.  I would write a blog post on that, but you should come to FluentConf and check out Wes’s session instead.

A special thanks to Wes Bos for reviewing this post and pointing out that Opera does in fact have support for getUserMedia as well.
