Post by tugouxp
Currently, the HTML5 <video> tag player renders pictures with "appsink" or
"webkit_video_sink"; both need to convert YUV frames to RGB pictures in
order to composite them with the UI plane into one image. Although the
conversion is done with GPU acceleration, it still does not fully use the
hardware capacity on platforms whose display device supports an "overlay"
feature. So, is there any way to improve this situation? For example, use a
separate "overlay buffer" besides the "main surface", and then let the
display device do the merge operation on the two buffers.
Does WebKitGTK only support one layer for all resources (YUV frames, GUI,
subtitles, ...)? And how could WebKitGTK achieve what is mentioned above?
While the idea of using a dedicated video overlay for playback of <video>
elements is tempting, there are a few reasons why it is a bad idea in the
context of a Web engine. The two main problems with using hardware video
overlays are:
- Other Web content may need to be painted on top of the video. Most
hardware-based video overlays can only be shown on top of the rest of the
graphics.
- Web content, including <video>, is subject to CSS styling and arbitrary
transformations (scaling, rotation, skewing, and even 3D!). Again, most
graphics hardware cannot do this to a hardware overlay.
For the requirements of a Web engine, the best course of action is decoding
the video (potentially using hardware-assisted codecs) into textures which
can then be composited by the GPU. If there is one thing GPUs are very good
at, it is moving pixels around with all kinds of transformations applied to
them.
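To make the cost of that YUV-to-RGB step concrete, here is a minimal sketch
in Python of the per-pixel conversion a compositing shader performs for
every video texel, assuming limited-range BT.601 coefficients (the exact
matrix depends on the stream's colorimetry):

```python
def yuv_to_rgb_bt601(y, u, v):
    """Convert one limited-range BT.601 YUV sample to 8-bit RGB.

    Limited range means luma spans 16..235 and chroma is centered
    on 128; the 1.164 factor rescales luma to full range.
    """
    c = y - 16
    d = u - 128
    e = v - 128
    clamp = lambda x: max(0, min(255, int(round(x))))
    r = clamp(1.164 * c + 1.596 * e)
    g = clamp(1.164 * c - 0.392 * d - 0.813 * e)
    b = clamp(1.164 * c + 2.017 * d)
    return r, g, b

print(yuv_to_rgb_bt601(16, 128, 128))   # limited-range black -> (0, 0, 0)
print(yuv_to_rgb_bt601(235, 128, 128))  # limited-range white -> (255, 255, 255)
```

The GPU does this multiply-add for every pixel of every frame, which is why
keeping the data in YUV until the last possible moment is attractive.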
Many GPUs support formats other than RGB (for example, with the
EXT_YUV_target extension), so it should be possible to decode video to
e.g. YUV into an image texture that the GPU can then handle directly. This
being said, I have no idea what our level of support for this in WebKit
is, if there is any at all. Maybe others can comment on this.
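For reference, a fragment shader using EXT_YUV_target could look roughly
like the sketch below. It samples the external YUV texture without implicit
conversion and uses the extension's built-in yuv_2_rgb() helper; the
uniform and varying names are placeholders, not anything from WebKit:

```glsl
#version 300 es
#extension GL_EXT_YUV_target : require
precision mediump float;

// External texture whose YUV samples are returned as-is (no implicit
// colorspace conversion), as defined by EXT_YUV_target.
uniform __samplerExternal2DY2YEXT u_videoTex;

in vec2 v_texCoord;
out vec4 fragColor;

void main() {
    vec3 yuv = texture(u_videoTex, v_texCoord).xyz;
    // itu_601 is one of the conversion standards the extension
    // provides (itu_601, itu_601_full_range, itu_709).
    vec3 rgb = yuv_2_rgb(yuv, itu_601);
    fragColor = vec4(rgb, 1.0);
}
```

The extension also allows rendering *to* YUV targets, so in principle the
conversion could be skipped entirely on hardware that scans out YUV.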