The easiest way to livestream to millions of users is to use social media platforms like Facebook, Instagram, YouTube or Twitch. These platforms are great for brand awareness, but give you limited options when it comes to content ownership, audience access and monetization.
The best way to livestream is to build your own platform where you can invite viewers to watch. This strategy protects you from censorship, gives you access to better analytics, and offers more options for monetizing your streams. You just need to be prepared to invest time and money in order to reap the benefits of having full control over your content.
In this article, we’ll look at livestream platforms, their terminology and protocols. Then we’ll examine five livestream providers that enable content creators to maintain ownership of both their content and audience.
Different Kinds of Livestream Platforms
To get started, at the very least you’ll need a livestream media server and your own website. There are three types of livestream platforms you can consider working with:
With the self-hosted option, you’ll find a number of open-source and enterprise livestream media servers you can download and install yourself. You can host the software on a local server, or on a compute platform such as AWS, Azure, DigitalOcean or Linode. You’ll also need to set up storage space for your livestream recordings and your on-demand video content. For enterprise servers, you’ll need to purchase a license.
The cloud option is where the platform provider has already installed the software on their multi-tenancy infrastructure. All you have to do is create an account and you’re good to go. Scaling and technical issues are handled by the provider. You’ll be billed a flat monthly rate for the license to use the software, plus the infrastructure costs you incur while running livestream sessions.
The API-driven option is similar to the cloud option, except that you only pay for the usage. This is a far more cost-effective pricing strategy, since you only pay when a livestream is running or when you’ve stored video content on their platform. These platforms also have far better documentation that’s suited to developers.
While it may appear that the most affordable solution is the way to go, you may need to choose one of the other options due to requirements such as:
- extra low latency
- 24×7 streaming
- local network only streaming
- access to technical expertise, e.g. a web developer
- limited time to market
Features offered by platforms vary, and you may find a commercial provider has already implemented most of the back-end logic you need for your app. You should also note that, when it comes to pricing, long-term contracts are often priced lower per month than short-term contracts.
Next, let’s get familiar with the main terminology:
A livestream is technology that segments a video stream or file into small chunks, allowing viewers to watch without downloading the entire file. The term live video refers to footage that is captured and broadcast in real time.
Video-on-demand, or VOD, is simply a service for streaming a pre-recorded show, film or event. When setting up a livestream session, you need to enable the record feature so that viewers who arrive late to your livestream can still watch the entire show from the start.
An encoder is either a hardware device or some software that takes a high-quality, uncompressed video source from a camera and encodes the video stream into a compressed format that’s optimized for transmission over limited internet bandwidth. Hardware encoders are more expensive but tend to be more reliable. Software encoders are more affordable, but they don’t run in dedicated environments, which makes them prone to interruptions from other applications.
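To make the encoder step concrete, here’s a minimal Python sketch that assembles the kind of command a software encoder such as ffmpeg would run. The input device, ingest URL and stream key are placeholder assumptions, and the flags shown are one reasonable configuration rather than a recommended setup:

```python
# Sketch: building a software-encoder command line in the style of ffmpeg.
# Assumes ffmpeg is installed; the source and RTMP URL are placeholders.

def encoder_command(source: str, rtmp_url: str, bitrate_kbps: int = 4500) -> list[str]:
    """Build a command that compresses a raw camera feed into H.264
    and pushes it to an RTMP ingest URL."""
    return [
        "ffmpeg",
        "-i", source,                   # uncompressed/camera input
        "-c:v", "libx264",              # H.264 video encoder
        "-preset", "veryfast",          # favour encoding speed for live use
        "-b:v", f"{bitrate_kbps}k",     # target video bitrate
        "-c:a", "aac", "-b:a", "160k",  # compressed audio
        "-f", "flv",                    # container format RTMP expects
        rtmp_url,
    ]

cmd = encoder_command("/dev/video0", "rtmp://example.com/live/STREAM_KEY")
print(" ".join(cmd))
```

In a real setup you’d run this command directly (or via a tool like OBS, which builds an equivalent pipeline for you) with the ingest URL your media server provides.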
A livestream media server is software that accepts the transmission from an encoder and re-transmits it as multiple streams at different quality levels for delivery to audiences.
Transcoding is the process of converting a compressed video stream into one or more further-compressed formats that can stream with minimal buffering at the highest possible quality. This is done by the livestream media server.
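As a rough sketch of what the media server does when transcoding, the Python below expands one incoming stream into a “ladder” of renditions at decreasing quality. The resolutions and bitrates are illustrative assumptions, not a standard:

```python
# Illustrative rendition ladder: name, resolution, video bitrate (kbps).
RENDITIONS = [
    ("1080p", "1920x1080", 5000),
    ("720p",  "1280x720",  2800),
    ("480p",  "854x480",   1400),
    ("360p",  "640x360",    800),
]

def ladder_outputs(renditions):
    """Describe one scaled, re-encoded output per rendition,
    in the style of a transcoder's output settings."""
    outputs = []
    for name, resolution, kbps in renditions:
        outputs.append({
            "scale": resolution,          # downscale the source
            "video_bitrate": f"{kbps}k",  # re-encode at a lower bitrate
            "playlist": f"{name}.m3u8",   # per-rendition HLS playlist
        })
    return outputs

outputs = ladder_outputs(RENDITIONS)
```

Each entry corresponds to one output the server produces from the single high-quality stream it received from your encoder.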
An edge server is simply a relay for streams originating from the livestream media server. Edge servers are typically located near viewers’ locations and are used to offload work from the livestream server. They also help reduce latency.
Latency is the delay measured from the time a camera captures a frame to the time a viewer sees that frame on their device. With standard protocols, latency can vary between 10 and 40 seconds. Low-latency protocols can achieve delays of less than three seconds.
Streaming an event from your location to your audience is essentially a four-step process:
- Create a stream on your server. This will provide you with an RTMP URL which you’ll use to upload your stream.
- Configure your encoder and hit the stream button. This will upload your video source to your server.
- Your server will receive your stream and transcode it into multiple streams with varying quality.
- A member of your audience will connect to your livestream server via your website. Depending on your viewer’s network speed, an appropriate stream will be delivered.
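The last step above can be sketched in Python: given a viewer’s measured bandwidth, the server (or player) picks the best rendition that connection can sustain. The bitrate figures are illustrative:

```python
# Sketch of step 4: choose the highest-bitrate rendition that fits
# the viewer's bandwidth. Bitrates (kbps) are illustrative.
RENDITIONS = {"1080p": 5000, "720p": 2800, "480p": 1400, "360p": 800}

def pick_rendition(viewer_kbps: int) -> str:
    """Return the best rendition the viewer's connection can sustain,
    falling back to the lowest quality if none fit."""
    fitting = [(kbps, name) for name, kbps in RENDITIONS.items()
               if kbps <= viewer_kbps]
    if not fitting:
        return min(RENDITIONS, key=RENDITIONS.get)  # last resort: lowest quality
    return max(fitting)[1]

print(pick_rendition(3000))  # a ~3 Mbps connection gets the 720p stream
```

In practice this decision is re-evaluated continuously as the viewer’s network conditions change, which is what makes adaptive streaming “adaptive”.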
While a number of protocols have been developed for streaming content, there are only three you need to be concerned with.
The Real-Time Messaging Protocol (RTMP) is a streaming protocol used to transmit optimized streams from your encoder to your livestream server. If you’re concerned about security, you can use RTMPS, which wraps RTMP in a TLS/SSL connection. There’s also RTMPE, which encrypts streams using Adobe’s own security mechanism.
HTTP Live Streaming (HLS) is the most widely used delivery protocol, supported by virtually every platform. It uses a technique known as adaptive bitrate (ABR) streaming to break videos down into smaller chunks (ten seconds or less). It then encodes the chunks at different quality levels, which allows viewers to switch to a different quality stream in the middle of the video.
HLS latency can vary between 10 and 40 seconds. There’s a newer variant called Low-Latency HLS, released in late 2020, which allows for reduced latencies of three seconds or less.
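A minimal sketch of how HLS works: the player first downloads a master playlist listing every quality level, then fetches chunks from whichever variant fits its bandwidth. The Python below generates such a playlist; the renditions are illustrative:

```python
# Sketch: generating an HLS master playlist. The player reads this file
# and can switch between the listed variants mid-stream.

def master_playlist(renditions):
    """renditions: list of (bandwidth_bps, resolution, uri) tuples."""
    lines = ["#EXTM3U"]
    for bandwidth, resolution, uri in renditions:
        lines.append(
            f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={resolution}"
        )
        lines.append(uri)  # per-rendition playlist of chunk URLs
    return "\n".join(lines)

playlist = master_playlist([
    (5000000, "1920x1080", "1080p.m3u8"),
    (2800000, "1280x720",  "720p.m3u8"),
])
print(playlist)
```

Each variant URI points at a media playlist that in turn lists the short video chunks, which is what allows playback to start without downloading the whole file.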
Web Real-Time Communication (WebRTC) is an open project originally developed for peer-to-peer communication. However, providers have figured out how to use the technology to deliver livestreams to viewers across vast geographic distances with previously unheard-of latencies of 500ms or less.
They have also figured out a way to scale up WebRTC to serve thousands of viewers, since standard WebRTC is limited to 60 participants in one session.
Livestream Interface Development
Once you’ve set up your livestream media server, you’ll need to build an interface where viewers will watch your livestream. The interface can be either a website or a mobile app. You’ll need to install a video player to connect to your server and decode its streams. As your interface gathers more traffic, your livestream infrastructure will need to scale up to keep pace with demand and ensure a smooth streaming experience for everyone.
Your platform provider will often supply a custom video player. Alternatively, they may recommend one of several options, including:
Video.js, which is open source and supports HLS and DASH by default. A number of community skins and plugins are available that can be installed to extend its features.
JWPlayer, which is a commercial player that supports HLS out of the box. It supports video galleries, 360° video and ad integration. Pricing starts at $10 per month.
THEOPlayer, which is a commercial player with affordable pricing based on impressions. It comes with rich SDKs, and supports ad integration, analytics and digital rights management.
To build a mobile app, you’ll need to use a mobile SDK — Android or iOS. There are multiple ways you can build a website:
- as a single page HTML
- using a content management system
- with server-based development — such as Django or…