Copyright (c) Hyperion Entertainment and contributors.

Description

Modern graphics cards like the Radeon HD don't generally have overlay hardware. Hardware overlay is an old and rather obsolete method to accelerate video playback by allowing video frames to be displayed directly in their native YUV format.

Rather than try to emulate overlay using the GPU, AmigaOS supports YUV formats directly via the CompositeTags() function. This is called composited video, which combines the advantages of textured video (the overlay replacement on other OSes) with all the power of compositing.

Some key features/advantages of composited video are:

  • Accelerates video playback by enabling planar YUV video frames to be rendered directly to a bitmap.
  • No limit to how many videos can be displayed simultaneously (overlay is restricted to 1 or 2 hardware surfaces).
  • Can be rendered anywhere from full-screen to multiple videos in a webpage (on AmigaOS, overlay is restricted to a window).
  • Supports both SD and HD YUV video standards and even custom YUV to RGB transformation matrices (NOTE: you can exploit this to get brightness, contrast & saturation adjustment for free).
  • Can use alpha blending for combining video with other graphics; this allows subtle anti-aliased blending, unlike overlay's on/off colour-keying.
  • Can use video frames directly with the full power of CompositeTags(); this includes alpha blending, rotation, warping, cross-fades, vertex-arrays, etc.

The last item in the list above opens up a whole set of possibilities that simply weren't available before. Composited video can be used for much more than faster video display; it can drive real-time video effects, from cross-fades to 3D transitions and more.

Performance

Composited video (like overlay) improves performance in two ways:

  1. It reduces the bandwidth required to copy video frames to the graphics card (a YUV420p bitmap is 37.5% the size of an equivalent 32-bit RGBA bitmap).
  2. It shifts the task of converting from YUV to RGB (the graphics card's native format) from the CPU to the GPU. The GPU is better suited to this task and it frees the CPU up to work on other things like decoding the next frame.

The net result is that it takes less processing power and bus bandwidth to display the same video.
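
For example, a single 1920x1080 frame takes roughly 8.3 MB as 32-bit RGBA but only about 3.1 MB as YUV420p, so each frame needs far less bus bandwidth to transfer and no CPU time to convert.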

What is YUV?

Basically, the Y channel stores a pixel's brightness/luminance, while the U & V channels store the colour information. Video files use this format instead of RGB because humans perceive fine detail in brightness at a much higher resolution than fine detail in colour. So, we can get away with storing the colour channels at a lower resolution than the brightness channel without people noticing. Effectively, we've compressed the video frame while maintaining high visual quality.
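
To make the subsampling concrete, here is a small sketch in plain C (the struct and variable names are hypothetical, not taken from any AmigaOS header) showing how the three planes of a YUV420p frame relate to one another. In real code the plane pointers and strides must come from the PlanarYUVInfo data returned when the bitmap is locked, as noted under Key Points below.

#include <stdint.h>

/* Hypothetical description of one YUV420p frame. The plane pointers and
 * per-row strides should always come from locking the bitmap (PlanarYUVInfo);
 * never assume how the planes are packed in memory. */
struct YUV420Frame
{
    uint8_t *y; uint32_t yStride;   /* luminance, full resolution: width x height    */
    uint8_t *u; uint32_t uStride;   /* chroma, quarter size: (width/2) x (height/2)  */
    uint8_t *v; uint32_t vStride;
};

/* Fetch the three samples describing the pixel at (x, y): every 2x2 block of
 * pixels shares a single U and a single V sample, which is where the space
 * saving comes from. */
static void getSamples(const struct YUV420Frame *f, uint32_t x, uint32_t y,
                       uint8_t *outY, uint8_t *outU, uint8_t *outV)
{
    *outY = f->y[y * f->yStride + x];
    *outU = f->u[(y / 2) * f->uStride + (x / 2)];
    *outV = f->v[(y / 2) * f->vStride + (x / 2)];
}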

Much more information about YUV is available on Wikipedia.

Key Points

  • You can detect if composited video is available by performing a test render with the COMPFLAG_HardwareOnly flag set, and checking the return code (a minimal sketch follows after this list)
  • Use the PlanarYUVInfo structure when locking PIXF_YUV4x0P bitmaps, so that you have pointers to each plane
  • Make no assumptions about the layout of the Y, U, & V planes; use the pointers and strides as provided. In the past, some people have run into trouble by assuming that the U & V planes are stored one after the other in memory
  • Standard Definition (SD) and High Definition (HD) video have slightly different YUV specifications, which means that they have slightly different YUV => RGB conversion matrices (the standards being BT.601 and BT.709, respectively). These matrices can be chosen with the COMPTAG_SrcYUVStandard tag; use COMPYUV_BT601 for SD video, and COMPYUV_BT709 for HD video
  • It's also possible to use a custom YUV => RGB matrix via the COMPTAG_SrcYUVMatrix tag. In fact, this feature allows you to roll in other transformations, like hue, saturation, brightness and contrast adjustments, for free (see CompositeYUVExt.c and the first sketch after this list)
  • When streaming video, it's highly recommended that you write the frames to a BMF_USERPRIVATE bitmap in main memory, and blit that across to a matching bitmap in VRAM. On platforms that support it, the blit will use DMA to copy the frames. See CompositeYUVBlitStream.c and the second sketch below
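
The detection and YUV-standard points above can be sketched roughly as follows. This is a minimal, hedged sketch: it assumes AmigaOS 4 style interface calls (IGraphics->CompositeTags()), the COMPOSITE_Src_Over_Dest operator, the COMPTAG_Flags tag and the COMPERR_Success return value from graphics/composite.h, plus already-allocated srcYUV and dstRGB bitmaps; the SDK examples remain the authoritative reference.

#include <exec/types.h>
#include <utility/tagitem.h>
#include <proto/graphics.h>
#include <graphics/composite.h>     /* compositing operators, tags and flags */

/* Test whether the hardware can composite a YUV source at all. The render is
 * forced onto the GPU with COMPFLAG_HardwareOnly, so if the hardware cannot
 * handle it, CompositeTags() returns an error instead of silently falling
 * back to software. srcYUV and dstRGB are small, already-allocated test
 * bitmaps. */
BOOL compositedVideoAvailable(struct BitMap *srcYUV, struct BitMap *dstRGB)
{
    uint32 error = IGraphics->CompositeTags(COMPOSITE_Src_Over_Dest,
        srcYUV, dstRGB,
        COMPTAG_Flags, COMPFLAG_HardwareOnly,
        TAG_END);

    return (error == COMPERR_Success) ? TRUE : FALSE;
}

/* Render one HD frame, telling the compositor which YUV => RGB conversion to
 * apply; SD material would use COMPYUV_BT601 instead. A fully custom matrix
 * could be passed with COMPTAG_SrcYUVMatrix (see CompositeYUVExt.c for the
 * expected matrix layout, which is not shown here). */
void renderHDFrame(struct BitMap *srcYUV, struct BitMap *dstRGB)
{
    IGraphics->CompositeTags(COMPOSITE_Src_Over_Dest,
        srcYUV, dstRGB,
        COMPTAG_SrcYUVStandard, COMPYUV_BT709,
        COMPTAG_Flags,          COMPFLAG_HardwareOnly,
        TAG_END);
}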
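
The streaming point can be sketched in the same hedged way. How the two bitmaps are allocated (a BMF_USERPRIVATE bitmap in main memory plus a matching YUV bitmap in VRAM), which locking tags hand back the PlanarYUVInfo plane pointers, and whether a plain BltBitMap() or a tag-based blit call is preferred should all be taken from CompositeYUVBlitStream.c; decodeNextFrameInto() is a hypothetical decoder hook used only for illustration.

#include <exec/types.h>
#include <utility/tagitem.h>
#include <proto/graphics.h>
#include <graphics/composite.h>

/* Hypothetical decoder hook: locks the bitmap, takes the Y/U/V plane pointers
 * and strides from the PlanarYUVInfo data returned by the lock, writes the
 * decoded frame into the planes, then unlocks the bitmap. */
extern void decodeNextFrameInto(struct BitMap *mainMemYUV);

/* Per-frame flow for streamed video, assuming both bitmaps were allocated
 * beforehand as in CompositeYUVBlitStream.c. */
void streamOneFrame(struct BitMap *mainMemYUV, struct BitMap *vramYUV,
                    struct BitMap *screenBM, uint32 width, uint32 height)
{
    /* 1. Decode into the BMF_USERPRIVATE bitmap, which lives in main memory. */
    decodeNextFrameInto(mainMemYUV);

    /* 2. Copy main memory -> VRAM. On platforms that support it the copy is
     *    done by DMA, leaving the CPU free to start decoding the next frame. */
    IGraphics->BltBitMap(mainMemYUV, 0, 0, vramYUV, 0, 0,
                         width, height, 0xC0, 0xFF, NULL);

    /* 3. Composite the VRAM copy to the visible bitmap; the GPU performs the
     *    YUV => RGB conversion (plus any scaling or blending) in this call. */
    IGraphics->CompositeTags(COMPOSITE_Src_Over_Dest, vramYUV, screenBM,
        COMPTAG_Flags, COMPFLAG_HardwareOnly,
        TAG_END);
}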

Example

Some notes about the example code:

  • CompositeYUV.c - demonstrates basic YUV=>RGB compositing. It also demonstrates how to use CompositeTags() to render to a window's rastport. Developers should pay close attention to fillWindowWithBitMap(): it's critical to lock the rastport's layer while rendering, or occasional graphics corruption can occur while the user moves/resizes the window (a condensed sketch of this pattern follows after this list).
  • CompositeYUVExt.c - demonstrates using compositing effects with YUV bitmaps (e.g., alpha blending). Also demonstrates setting the YUV standard (COMPTAG_SrcYUVStandard), and using a custom YUV=>RGB matrix (COMPTAG_SrcYUVMatrix).
  • CompositeYUVBlitStream.c - shows how to use DMA to stream video frames into VRAM for display.
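
The locking pattern the first bullet refers to can be condensed into the sketch below. It assumes layers.library's LockLayer()/UnlockLayer() pair and composites straight onto the window's rastport bitmap; positioning/scaling tags, the offset of the window inside its layer's bitmap and all error handling are omitted, so fillWindowWithBitMap() in CompositeYUV.c remains the reference implementation.

#include <exec/types.h>
#include <utility/tagitem.h>
#include <intuition/intuition.h>
#include <proto/graphics.h>
#include <proto/layers.h>
#include <graphics/composite.h>

/* Composite a YUV frame into a window. The layer must stay locked for the
 * whole render: otherwise Intuition can move or resize the window's layer
 * mid-render and the output may land in the wrong place, which is the
 * occasional corruption mentioned above. */
void renderFrameToWindow(struct Window *window, struct BitMap *srcYUV)
{
    struct Layer *layer = window->RPort->Layer;

    ILayers->LockLayer(0, layer);

    IGraphics->CompositeTags(COMPOSITE_Src_Over_Dest,
        srcYUV, window->RPort->BitMap,
        COMPTAG_Flags, COMPFLAG_HardwareOnly,
        TAG_END);

    ILayers->UnlockLayer(layer);
}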

More coming soon...