Video Compression FAQ


What is Video Compression?

Video footage is composed of a series of still images, or frames, shown in sequence at a certain speed (typically between 23.98 and 60 frames per second for NTSC, and 25 or 50 fps for PAL). In digital video, each of these frames is made of millions of pixels, and every pixel represents data that must be stored or transmitted. When raw video footage is sent to an encoder, the amount of data fed into it is often far too large for a conventional internet connection to send or receive. Live video recording and distribution would be impossible without video compression technology, a process that reduces redundancy within your broadcast and makes it easier to transmit and view.
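To see why compression is necessary, consider a quick back-of-the-envelope estimate of how much data raw video carries. The sketch below assumes 8-bit 4:2:0 chroma subsampling (about 1.5 bytes per pixel on average), which is common for consumer video; other pixel formats will give different numbers.

```python
# Back-of-the-envelope estimate of raw (uncompressed) video bandwidth.
# Assumes 8-bit 4:2:0 chroma subsampling: ~1.5 bytes per pixel on average.

def raw_bitrate_mbps(width, height, fps, bytes_per_pixel=1.5):
    """Return the raw data rate in megabits per second."""
    bytes_per_frame = width * height * bytes_per_pixel
    bits_per_second = bytes_per_frame * 8 * fps
    return bits_per_second / 1_000_000

# 1080p at 30 fps works out to roughly 746 Mbps of raw data --
# far beyond what a typical home upload connection can sustain.
print(round(raw_bitrate_mbps(1920, 1080, 30)))  # 746
```

Compare that with the single-digit Mbps bitrates typical of a compressed stream, and the need for compression becomes obvious.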


Does Compression Lower My Video’s Quality?

While video compression does reduce the overall file size of your video by changing certain aspects of its quality, the process is designed so that any changes are minimal to unnoticeable.


How Does Compression Work?

Resi’s encoders take raw video footage and remove as much redundant data as possible without compromising the overall quality or watchability of the video. When a video is compressed, it is typically run through a process referred to as motion-compensated DCT (discrete cosine transform) video coding. During this process, the encoder scans your frames and sorts them into one of three categories:

  • Keyframes (also known as I-frames) - Fully rendered images that recur at a set frequency (the keyframe interval).
  • P-frames (also known as predicted frames) - Partial frames that store only what has changed since the previous I-frame or P-frame.
  • B-frames (also known as bidirectional frames) - Partial frames that predict their content from both the previous and the following I-frame or P-frame.

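The pattern of frame types above can be pictured as a repeating sequence within each group of pictures. The sketch below is an illustrative simplification, not Resi’s actual implementation: it labels frames using a fixed keyframe interval and a simple alternating B/P pattern, whereas real encoders choose frame types adaptively based on the content.

```python
# Illustrative sketch (not an actual encoder's logic): label frames in a
# group of pictures (GOP) given a keyframe interval. Every interval-th
# frame becomes an I-frame; in between, we alternate B- and P-frames.

def label_frames(n_frames, keyframe_interval=8):
    """Assign 'I', 'P', or 'B' to each frame in display order."""
    labels = []
    for i in range(n_frames):
        position = i % keyframe_interval
        if position == 0:
            labels.append("I")          # fully rendered keyframe
        elif position % 2 == 0:
            labels.append("P")          # predicted from earlier frames
        else:
            labels.append("B")          # predicted from both directions
    return labels

print("".join(label_frames(16)))  # IBPBPBPBIBPBPBPB
```

A shorter keyframe interval makes seeking and stream recovery faster but reduces compression efficiency, since I-frames are by far the largest frame type.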

Once these frames are sorted based on the keyframe interval determined by your codec, the pixels within each frame are grouped into macroblocks (blocks of pixels usually ranging from 4×4 to 64×64, depending on the codec used). These macroblocks are used to predict how pixels will change between frames through the use of powerful computational hardware and an algorithmic technique known as motion compensation.
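At its core, motion compensation asks: for each block in the current frame, where did that block come from in a previous frame? The toy sketch below (an assumed simplification, using names chosen here for illustration) searches a small window in the previous frame for the best match using the sum of absolute differences (SAD); real encoders use far more sophisticated searches, but the principle is the same.

```python
# Toy block-matching motion estimation using sum of absolute differences.
# Pure Python, no external libraries; real encoders are far more advanced.

def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized 2-D blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def get_block(frame, y, x, size):
    return [row[x:x + size] for row in frame[y:y + size]]

def find_motion_vector(prev_frame, cur_frame, y, x, size=4, search=2):
    """Find the (dy, dx) offset in prev_frame that best matches the
    block at (y, x) in cur_frame."""
    target = get_block(cur_frame, y, x, size)
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            py, px = y + dy, x + dx
            if (py < 0 or px < 0 or py + size > len(prev_frame)
                    or px + size > len(prev_frame[0])):
                continue  # candidate block falls outside the frame
            cost = sad(target, get_block(prev_frame, py, px, size))
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best

# A bright 4x4 patch moves one pixel to the right between frames:
prev = [[0] * 8 for _ in range(8)]
cur = [[0] * 8 for _ in range(8)]
for r in range(2, 6):
    for c in range(2, 6):
        prev[r][c] = 255
        cur[r][c + 1] = 255
print(find_motion_vector(prev, cur, 2, 3))  # (0, -1): came from one pixel left
```

Instead of re-sending the whole block, the encoder can transmit just the motion vector plus a small residual, which is where most of the data savings come from.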


What is the Difference Between Video Compression, Encoding, and Transcoding?

While compression and encoding are parts of the same process, they are not the same thing. When your signal is run through an encoder, the video is compressed to reduce its overall file size and then packaged into a specific format or container file; together, these steps are what encoding comprises. Transcoding is a separate step: converting already-encoded video from one format to another, which is often required when distributing to third-party platforms that have their own standards for how content is delivered.


What is a Video Codec?

You can think of a codec as a set of instructions for an encoder to follow when it compresses your video and encodes it at a specific resolution and framerate. Resi’s encoders utilize two of the most common video codecs for web distribution: H.264 and HEVC (also referred to as H.265). While there are some differences between how these two codecs operate, HEVC is widely considered to be more efficient in terms of bandwidth consumption. However, it may not be suitable for all streaming situations.


What is the Difference Between H.264 and HEVC?

Before getting into the difference between the H.264 and HEVC (sometimes referred to as H.265) encoding standards, we should understand how they are similar. For starters, both support 4K streaming. They both use fundamentally similar, if technically different, processes for video compression: motion-compensated DCT video coding. However, the specific ways these two standards apply those techniques make it difficult to say which is correct for a given situation. That said, there are a few things you should know about each codec that might make it easier to determine which will be a better choice for your streaming needs.

HEVC is often considered a more advanced version of the H.264 codec because it utilizes, among other things, a processing unit known as a Coding Tree Unit (CTU). Rather than segmenting an image into regularly sized macroblocks, as is the case with H.264 encoding, HEVC blocks can range in size. By starting with a larger CTU (a block of 64×64 pixels), larger groups of pixels can be carried from one frame to the next without changing them, thus requiring less data. This essentially means that more pixels can be transferred on a lower bandwidth. Pixels that do change can then be further partitioned into even smaller blocks (or coding units) based on lighting, colorspace, keyframe distance, and a number of other factors analyzed by your encoder. This means that smaller segments of the image change overall, which can result in a higher quality broadcast (especially in environments with low light, high contrast, or lots of motion) while using far less data than H.264.
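The idea behind the coding tree can be sketched as a recursive quadtree split. The example below is an assumed simplification, not the actual HEVC algorithm: it splits a 64×64 CTU into quadrants whenever the pixel values inside a block vary more than a threshold, so flat regions stay as one large unit while detailed regions get smaller coding units.

```python
# Illustrative quadtree partitioning of a 64x64 coding tree unit (CTU).
# Simplified stand-in for HEVC's coding-unit decisions: split a block
# into four quadrants whenever its pixel range exceeds a threshold.

def partition(block, y=0, x=0, size=64, min_size=8, threshold=40):
    """Return a list of (y, x, size) coding units covering the block."""
    vals = [block[r][c] for r in range(y, y + size)
            for c in range(x, x + size)]
    if size <= min_size or max(vals) - min(vals) <= threshold:
        return [(y, x, size)]  # flat enough (or smallest size): keep whole
    half = size // 2
    units = []
    for dy in (0, half):
        for dx in (0, half):
            units.extend(partition(block, y + dy, x + dx, half,
                                   min_size, threshold))
    return units

# A mostly flat CTU with a small bright patch in one corner splits
# unevenly: fine blocks around the detail, coarse blocks elsewhere.
ctu = [[10] * 64 for _ in range(64)]
for r in range(4):
    for c in range(4):
        ctu[r][c] = 200
units = partition(ctu)
print(len(units))  # 10 coding units instead of a uniform grid
```

Flat areas of the image cost almost nothing to describe, while the encoder spends its bits where the detail actually is; this is the intuition behind HEVC's efficiency advantage over H.264's fixed 16×16 macroblocks.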

However, it is important to note that H.264 is a slightly older codec and may be more suitable for broadcasting to third-party sites and social media. Otherwise, additional transcoding may need to occur if you use HEVC, which can degrade the quality of your broadcast. This is why we generally recommend using HEVC for multi-site distribution and H.264 for broadcasting to web or social media. The right choice is less clear when broadcasting to both, or when using a dual-channel encoder. In addition, HEVC requires much more powerful hardware to run, which makes it less than ideal if you are using a ProPresenter plugin on a computer with limited processing capabilities.


How Can I Prevent Artifacts or Blocky Video?

There are a number of reasons why you may notice blocks or other artifacts in your video broadcast. Before anything else, you should make sure your recording environment and equipment are properly configured. This means adjusting the overall light levels in the room, including any stage lights or LED walls that may be creating higher amounts of contrast, which can cause artifacts to appear. If you still notice artifacts or blocks even after adjusting your equipment, lighting, etc., you can try increasing the bitrate on your encoder preset. However, this is not always an option, especially when streaming to the web or social media.

If none of these adjustments can be made to your broadcast, and you are using the H.264 codec, you can try disabling hardware acceleration in the encoder preset. Deactivating hardware acceleration moves the work of encoding H.264 from the encoder's GPU to its CPU; this is sometimes called "libx264" or "libx" encoding. libx264 can, in some cases, rival the quality of HEVC encoding. However, this comes at the cost of greater power consumption and significant heat generation, which most Resi encoders aren't equipped to handle on a regular basis. At this time, the only Resi encoder rated for libx264 is the E4300.
