GSoC 2017: Preparations before project submission

This week we sent the decoder for review on the mesa-dev mailing list. Julien helped me fix the more advanced edge cases / bugs in the H.264 encoder while I made the changes requested by the reviewers. I’ll talk about all of that briefly below.

Code review

This week we had two rounds of review for the decoder part only.

First review

The first set of patches consisted of two changes:

  1. st/omx_tizonia: add --enable-omx-tizonia flag and build files : Makes changes to enable the build flag for tizonia. Also adds targets/omx-tizonia.
  2. st/omx_tizonia: Add AVC decoder : Adds everything under st/omx including the decoder.

These patches were not committed since there were a few issues, which I tried to address in the second round of review.

Second review

The patches that were sent after addressing the issues were the following:

  1. st/omx_bellagio: Rename state tracker and option : Renames the old bellagio state tracker and option. I had trouble sending this patch because it was too large, so I ended up sending all the patches again by mistake.
  2. st/omx_tizonia: Add --enable-omx-tizonia flag and build files : Same patch as #1 from the earlier set.
  3. st/omx_tizonia: Add entrypoint : Split from earlier patch #2 to add only st/omx_tizonia/entrypoint.* and related files.
  4. st/omx_tizonia: Add H.264 decoder : Split from patch #2 to add only the decoder.

There were generally no issues with patch #1, and it was acked by Christian König.

About the other patches: after some discussion it was decided that merging should be postponed until both omx_bellagio and omx_tizonia have their shared code refactored out to avoid duplication. The changes to general functions like “put_screen” and “get_screen” were sent for review here to check whether this was the right approach. For the other changes we tried some experimental modifications in this commit. This didn’t quite work, since we need to pass the private types to “slice_header” and there doesn’t seem to be any elegant way to do so without adding OMX IL bits to gallium/auxiliary/vl, which would be undesirable. So Julien decided to postpone this task until after Monday.

Bugs and fixes

While the patches were being reviewed, Julien helped fix some issues / bugs in the project.

EGLImage wrong colours fixed

The wrong colours seen when using EGLImage were finally fixed. The fix is to select a matrix that does the YUV to RGB conversion.
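As a self-contained illustration of the idea (this is not the Mesa code; the coefficients are the standard BT.601 limited-range ones), the conversion is just a matrix applied per pixel, so picking the right matrix is what turns decoded YUV into correct RGB:

```c
#include <assert.h>

/* BT.601 limited-range YUV -> RGB matrix (illustration, not Mesa code).
 * Rows are R, G, B; columns multiply (Y-16), (U-128), (V-128). */
static const double csc[3][3] = {
    { 1.164,  0.000,  1.596 },
    { 1.164, -0.392, -0.813 },
    { 1.164,  2.017,  0.000 },
};

static int clamp255(double v) { return v < 0 ? 0 : v > 255 ? 255 : (int)(v + 0.5); }

/* Apply the conversion matrix to one pixel. */
static void csc_apply(int y, int u, int v, int rgb[3])
{
    const double in[3] = { y - 16.0, u - 128.0, v - 128.0 };
    for (int row = 0; row < 3; row++)
        rgb[row] = clamp255(csc[row][0] * in[0] +
                            csc[row][1] * in[1] +
                            csc[row][2] * in[2]);
}
```

With the wrong (or identity) matrix the same YUV samples land on wrong RGB values, which is exactly the symptom that was seen.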

EOS buffer clear error fix

Also related to EGLImage was the failure to clear the video buffers at the end of the decoding process. The fix was to increment the reference counter of the resource texture.
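The idea behind the fix is plain reference counting; here is a minimal self-contained sketch (hypothetical types, the real fix uses pipe_resource’s own reference counter): a texture stays alive as long as someone still holds a reference, so clearing the video buffers no longer destroys a resource that is still in use.

```c
#include <assert.h>

/* Minimal reference-counting sketch (hypothetical struct, not Mesa's
 * pipe_resource). The extra reference taken by the fix keeps the
 * texture alive across the buffer clear at end of stream. */
struct resource {
    int refcount;
    int destroyed;
};

static void resource_ref(struct resource *res) { res->refcount++; }

static void resource_unref(struct resource *res)
{
    if (--res->refcount == 0)
        res->destroyed = 1;  /* stand-in for actually freeing the texture */
}
```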

EGLImage hook issue false negative

The EGLImage hook failing issue was in fact not an issue: I had used the wrong pipeline to run the decoder. The right pipeline is

MESA_ENABLE_OMX_EGLIMAGE=1 GST_GL_API=gles2 GST_GL_PLATFORM=egl gst-launch-1.0 filesrc location=vid.mp4 ! qtdemux ! h264parse ! omxh264dec ! glimagesink

Artefacts at start of video fixed

The issue involving artefacts was indeed related to overriding the SetParameter. The whole discussion can be found here. A similar change was made to tizonia’s vp8dec.

Deadline next week

With EGLImage working, the project has been a success. The work left is to address the remaining edge cases, if possible, before submitting the project. Getting the commits merged would also be good to have. The next post will be the whole project summary, which will also serve as the final submission for this GSoC project.

GSoC 2017: Delays in code cleanup

This week some of the issues from last time were fixed, some are still being worked on, and some new ones surfaced. Julien has been busy this week and unable to provide much input, which slowed down some of the work. I’ll discuss this week’s developments briefly below.

Seeking issue (mostly) fixed

The most important issue from last week was the seeking failure. After some digging I found that it happened simply because the output buffer was reset before being cleared in “reset_streams_parameters” when the output port was disabled. Simply removing the line that resets the buffer fixed the crash that happened while seeking. The only small problem left is that the decoder now prints lots of errors like the following in the terminal when trying to seek.


This, however, does not affect video playback. The same behaviour is also seen with the existing bellagio-based OMX IL state tracker. Being a joint problem with gst, it shouldn’t block the project.
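The bug class behind the seeking crash is easy to demonstrate in isolation (hypothetical names, not the actual decoder code): once the handle is reset too early, the later release path has nothing left to work with.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the seek bug (hypothetical names): resetting the output
 * buffer pointer before the buffer is actually released loses the only
 * handle to it. The fix was simply dropping the early reset. */
struct stream {
    int *out_buffer;   /* stand-in for the OMX output header */
    int leaked;
};

static void release(struct stream *s)
{
    if (s->out_buffer == NULL)
        s->leaked = 1;          /* buffer can never be returned now */
    s->out_buffer = NULL;
}

static void reset_params(struct stream *s, int buggy)
{
    if (buggy)
        s->out_buffer = NULL;   /* the line that was removed by the fix */
    release(s);
}
```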

Overriding input port SetParameter

Our first go was at issue #2. Though the exact reason is not yet known, the issue could be related to dropped buffers, so we decided to override the component’s SetParameter. In bellagio you can simply replace the component’s pointer to the SetParameter function, as in this example from h264enc.c:

comp->SetParameter = vid_enc_SetParameter;

In tizonia, you instead need to add a separate port with its SetParameter function overridden by the custom version. The full commit can be found here.

Though this change didn’t fix the issue, it is useful in general, for example to avoid unnecessary reconfiguration when there is no resolution change. Since this is an advanced feature it shouldn’t block the decoder patches from getting reviewed.
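The override pattern itself is just swapping a function pointer; here is a self-contained sketch with hypothetical types (the real code replaces the OMX component’s SetParameter, as in the h264enc.c line above):

```c
#include <assert.h>

/* Hypothetical component with an overridable SetParameter hook,
 * mimicking how bellagio's comp->SetParameter is swapped out. */
struct component {
    int (*SetParameter)(struct component *comp, int index, int value);
    int stride;
};

static int base_SetParameter(struct component *comp, int index, int value)
{
    (void)index;
    comp->stride = value;
    return 0;
}

/* Custom version: fix up a bogus parameter, then delegate to the base. */
static int custom_SetParameter(struct component *comp, int index, int value)
{
    if (value == 0)
        value = 64;   /* hypothetical fix-up */
    return base_SetParameter(comp, index, value);
}
```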

Encoder crash

Previously I assumed that the crash in the encoder was related to the changes I made recently. After some digging and checking I found that commits that used to work earlier have stopped working with the same error, so this error might be due to external factors like GStreamer. We will look for a fix once the decoder is done.

Preparing commits

Originally the plan was to get the decoder reviewed soon. The first task was to rebase the branch on the latest mesa master and resolve the conflicts. The result can be found on the gsoc-rebase branch, which is kept separate from gsoc-dev to validate the rebase. It configures successfully but fails to compile; the related issue can be found here. Once that is fixed the decoder should be ready for review.

Up next

Since the end of the project period is looming near, the priority is to get the decoder and the encoder reviewed as soon as possible. The issues still present, and the new ones that surfaced, are slowing this process down. The next few weeks will surely require some hard work.

GSoC 2017: Third phase starts

The majority of last week was spent cleaning up the decoder for review. The numerous commits were merged into 3 commits here. The focus has been on getting the H.264 decoder working without EGLImage, which will be added later in a single commit once both the H.264 decoder and encoder are finalised. I’ll talk about the developments briefly below.

Writing to EGLImage

One of the big problems from last time was writing to the EGLImage. Although everything looked fine, the output was still blank. As I said in the last post, the reason was that the code responsible for decoding was never being executed.

The fix that I had added to avoid the earlier crash wasn’t actually the right approach. Thanks to help from Julien we fixed the crash with this simple change:

- if ((!next_is_eos) && ((p_prc->p_outhdr_->nFilledLen > 0) || p_prc->eos_)) {
+ if ((!next_is_eos) && ((p_prc->p_outhdr_->nFilledLen > 0) || p_prc->use_eglimage || p_prc->eos_)) {

The reason is that nFilledLen always remains 0 for EGLImage, so the decoder failed to free the output buffer, which ultimately led to a crash.
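The fixed condition from the diff above can be expressed as a small predicate (a self-contained sketch, not the component’s actual code): with EGLImage output, nFilledLen stays 0, so use_eglimage must count as “buffer ready” too, otherwise the header is never released.

```c
#include <assert.h>
#include <stdbool.h>

/* Predicate form of the release condition from the diff above
 * (standalone sketch; parameter names mirror the decoder's fields). */
static bool should_release(bool next_is_eos, unsigned nFilledLen,
                           bool use_eglimage, bool eos)
{
    return !next_is_eos && (nFilledLen > 0 || use_eglimage || eos);
}
```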

After this fix the decoder was finally able to show output, but a new problem arose: the colours were wrong. The issue is being tracked in this thread. The output was almost black and white, with no colours. We asked Christian (author of the bellagio-based state tracker) for help. The fix we tried was using “vl_compositor_set_buffer_layer” instead of looping over the layers. The commit with the complete changes is available here.

After applying these changes the output changed, but not as expected. It still has wrong colours, though not the same as before, as shown below.


Performance comparison

Even though the output is still not perfect, Julien suggested that it would be good to have some data comparing CPU usage with and without EGLImage.

I tested the decoder using three videos of different types, measuring CPU usage with the “top” command.

I took the min, max and average of the CPU usage data I got. Following are the tabulated results:


CPU usage percentage

Let’s visualise the average CPU usage as it is more significant:


Average CPU usage when using EGLImage vs when not

As can be seen, using EGLImage is much more efficient. CPU usage also increases dramatically with video quality in non-EGLImage mode, whereas with EGLImage there is only about a 1% increase when rendering a 4k movie compared to a 720p one. The peak usage is 6.7% when using EGLImage, whereas it is as high as 91.3% in the other case.

Commits cleanup

Other than the work involving EGLImage, we spent a significant amount of time cleaning up the commits for final review. The focus as of now is the decoder; after that comes the encoder, and finally the changes involving EGLImage support. The current progress can be checked on the gsoc branch.

Several individual changes were made, including some other minor ones.

Moving forward

The top-priority issue at the moment is the seeking failure in the H.264 decoder. Other than that, there are some artefacts at the start of the video, and the decoder fails to play with gst-play; both also need attention. Ultimately the goal is to get the decoder reviewed.

Other than that, a new bug popped up in the H.264 encoder with the introduction of the FreeBuffer change, which also needs to be fixed soon.

GSoC 2017: H.264 encoder improvements and EGLImage

With the H.264 encoder component working, we moved on to the big goal of the project, i.e. adding EGLImage support to the H.264 decoder. In the following sections I’ll talk briefly about the work done this week and future goals.

H.264 encoder improvements

We earlier had a problem with the input port: it tried to free a buffer which was already NULL. The only check it had was for the “is_owned” property (from tizport.c):

if (OMX_TRUE == is_owned)
  {
    OMX_PTR p_port_priv = OMX_DirInput == p_obj->portdef_.eDir
                            ? ap_hdr->pInputPortPrivate
                            : ap_hdr->pOutputPortPrivate;
    free_buffer (p_obj, ap_hdr->pBuffer, p_port_priv);
  }

The is_owned value can’t be set from outside since the struct tiz_port_buf_props_t is defined in tizport.c and not exposed through any header.

The workaround, as pointed out by Julien, was to use super_UseBuffer instead of super_AllocateBuffer inside h264e_inport_AllocateBuffer. In fact super_UseBuffer and super_AllocateBuffer do the same thing, except that the latter allocates some memory and sets the “owned” flag that is later checked while clearing the buffer. This change also has the advantage of avoiding having Tizonia allocate buffers and free them, which is inefficient.
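The “owned” semantics described above can be sketched as follows (hypothetical API modelled on the behaviour described, not tizonia’s actual code): AllocateBuffer mallocs and marks the buffer as owned, UseBuffer only records a caller-provided pointer, and FreeBuffer frees the memory only when the port owns it.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical buffer header mimicking the owned-flag behaviour. */
struct buf_hdr {
    unsigned char *data;
    int owned;
};

static void port_allocate_buffer(struct buf_hdr *hdr, size_t size)
{
    hdr->data = malloc(size);   /* port allocates: it owns the memory */
    hdr->owned = 1;
}

static void port_use_buffer(struct buf_hdr *hdr, unsigned char *data)
{
    hdr->data = data;           /* caller's memory: port does not own it */
    hdr->owned = 0;
}

static void port_free_buffer(struct buf_hdr *hdr)
{
    if (hdr->owned)
        free(hdr->data);        /* only free what the port allocated */
    hdr->data = NULL;
}
```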

Along with that there were some minor improvements too.

EGLImage support

Most of this week was spent figuring out how to add EGLImage support to the H.264 decoder component. We went through various approaches and it still isn’t finalised how it will be done, but we have made some progress in getting the video buffers.

Of the 2 approaches to get the “struct pipe_resource *” from the EGLImage, I used the first one since, according to Julien, it is safer: it calls lookup_egl_image, which seems to go all the way down and internally calls _eglCheckResource, so it does some validity checks on the input pointers.

static struct pipe_resource *
st_omx_pipe_texture_from_eglimage_v1(EGLDisplay egldisplay, EGLImage eglimage)
{
   _EGLDisplay *disp = egldisplay;
   struct dri2_egl_display *dri2_egl_dpy = disp->DriverData;
   __DRIscreen *_dri_screen = dri2_egl_dpy->dri_screen;
   struct dri_screen *st_dri_screen = dri_screen(_dri_screen);
   __DRIimage *_dri_image = st_dri_screen->lookup_egl_image(st_dri_screen, eglimage);

   return _dri_image->texture;
}

The first commit can be found here.

This change only served to get the pipe_resource. It also wasn’t working, since the decoder was trying to make a target from the pipe_resource, which wasn’t available because the output port was disabled. To prevent that case I added the changes here. This made the error disappear, but the decoder still failed later when trying to end the frame.

Next we dropped the old way targets were made. After some discussion it was decided to store the video buffers made from the pipe resource and use them as needed.

video_buffer = vl_video_buffer_create_ex2 (p_prc->pipe, &templat, resources);


util_hash_table_set(p_prc->video_buffer_map, p_prc->p_outhdr_, video_buffer);

Another problem I ran into was that the decoder was getting an RGBA EGLImage (1 target), whereas the decoder renders to NV12, which uses 2 pipe_textures per frame. The following was added as a fix for that:

assert (p_res->format == PIPE_FORMAT_R8G8B8A8_UNORM);

templat.buffer_format = PIPE_FORMAT_R8G8B8A8_UNORM; // RGBA
templat.chroma_format = PIPE_VIDEO_CHROMA_FORMAT_NONE;
templat.width = p_res->width0;
templat.height = p_res->height0;
templat.interlaced = 0;

memset(resources, 0, sizeof resources);
resources[0] = p_res;

video_buffer = vl_video_buffer_create_ex2 (p_prc->pipe, &templat, resources);

The video_buffer is then retrieved later, when h264d_fill_output needs to write the output.

dst_buf = util_hash_table_get(p_prc->video_buffer_map, output);

The full commit is here.

Next week’s work

After these changes the decoder compiles and runs without error but fails to write the output to the screen. I found that the decoder makes only one video_buffer instead of several. The pipeline runs normally until the video starts playing, at which point the main function “h264d_prc_buffers_ready” is never called by the IL core. This is the reason behind the black screen / no output from the decoder. Many parts of the decoder are still to be done/finalised, like clearing the video_buffers. Next week we expect to get decoding working normally and to make some more progress.

GSoC 2017: Vid enc working

This week we got the encoder working, though not without some workarounds. I’ll talk briefly below about the fixes for last week’s problems, the improvements still to be made, and the future plan. The commits can be seen here.

Registering multiple components

This blocking issue was fixed this week thanks to Juan’s support. The root of the problem was that Tizonia does not have specialised loaders to load multiple components. You can check out Juan’s more in-depth answer here. So instead, we had to give the component a generic name and add the other “components” as roles of that component. The following code from entrypoint.c registers the generic component name:

/* Initialize the component infrastructure */
tiz_comp_init (ap_hdl, OMX_VID_COMP_NAME);

And you set the roles for the components as usual. The difference here is that the roles do not have individual component names like they do in bellagio. For example, in vid_dec.c the three video decoder roles have different names:

strcpy(comp->name, OMX_VID_DEC_BASE_NAME);
strcpy(comp->name_specific[0], OMX_VID_DEC_MPEG2_NAME);
strcpy(comp->name_specific[1], OMX_VID_DEC_AVC_NAME);
strcpy(comp->name_specific[2], OMX_VID_DEC_HEVC_NAME);

strcpy(comp->role_specific[0], OMX_VID_DEC_MPEG2_ROLE);
strcpy(comp->role_specific[1], OMX_VID_DEC_AVC_ROLE);
strcpy(comp->role_specific[2], OMX_VID_DEC_HEVC_ROLE);

So in gst-omx we can directly use the role name to select the component role as


which is the value of OMX_VID_DEC_MPEG2_NAME.

For the tizonia based component we have to provide both component name and the role, which is also more accurate:


So with these changes the problem with loading multiple components was almost solved. Almost, because we still haven’t tested the per-role hook registration API, which I’ll talk about below.

Working H.264 encoder

The encoder is now able to encode video. There was a need to add new ports with some changes in functionality: h264einport and h264eoutport are derived from tizavcport and tizvideoport respectively. The main reason for them is that h264einport replaces the pBuffer of the buffer header with a custom pointer. The replacement occurs here:

r = enc_AllocateBackTexture(ap_hdl, idx, &inp->resource, &inp->transfer, &(*buf)->pBuffer);

The other changes are regarding management of this custom pointer.


The encoder still uses some workarounds which will need to be addressed later on.

Freeing the buffer pointer

The new pBuffer pointer can’t simply be cleared with free(); trying to do so causes an error. In the bellagio-based encoder, simply setting the buffer to NULL solves the problem:

buf->pBuffer = NULL;

return super_FreeBuffer(typeOf (ap_obj, "h264einport"), ap_obj, ap_hdl, idx, buf);

Inside super_FreeBuffer, a check makes sure that it doesn’t try to clear an already empty buffer:


But in tizonia no such check exists. From tizport.c:

if (OMX_TRUE == is_owned)
  {
    OMX_PTR p_port_priv = OMX_DirInput == p_obj->portdef_.eDir
                            ? ap_hdr->pInputPortPrivate
                            : ap_hdr->pOutputPortPrivate;
    free_buffer (p_obj, ap_hdr->pBuffer, p_port_priv);
  }

The only property it checks is is_owned, which is internal to tizport.c. To work around that I used this patch in the dev environment:

commit f3e3f40611129c9d3f942b05b7eb66e37d198ace
Author: Gurkirpal Singh <gurkirpal204@gmail.com>
Date: Mon Jul 17 00:12:06 2017 +0530

tizport: check ap_hdr->pBuffer exists before clearing

diff --git a/libtizonia/src/tizport.c b/libtizonia/src/tizport.c
index 58bac53..8d48b3a 100644
--- a/libtizonia/src/tizport.c
+++ b/libtizonia/src/tizport.c
@@ -1187,7 +1187,9 @@ port_FreeBuffer (const void * ap_obj, OMX_HANDLETYPE ap_hdl, OMX_U32 a_pid,
OMX_PTR p_port_priv = OMX_DirInput == p_obj->portdef_.eDir
? ap_hdr->pInputPortPrivate
: ap_hdr->pOutputPortPrivate;
- free_buffer (p_obj, ap_hdr->pBuffer, p_port_priv);
+ if (ap_hdr->pBuffer) {
+ free_buffer (p_obj, ap_hdr->pBuffer, p_port_priv);
+ }

p_unreg_hdr = unregister_header (p_obj, hdr_pos);

Another way to avoid the crash could be to provide a fake buffer by allocating memory to pBuffer with malloc() / calloc() just before releasing. We’re still deciding which approach to use.

Releasing the input header

Another problem that arose while adding the encoder was clearing the input header. With bellagio, as in vid_enc.c, you can set

priv->BufferMgmtCallback = vid_enc_BufferEncoded;

where vid_enc_BufferEncoded writes to the output buffer. The omx_base_filter_BufferMgmtFunction takes care of providing the buffers and calling the BufferMgmtCallback at the right time.

The following call sends the input buffer to the management function

return base_port_SendBufferFunction(port, buf);

Tracing the program, I found that vid_enc_BufferEncoded is called twice before the buffer is cleared. The first time, the whole function runs. The second time it hits

if (!inp || LIST_IS_EMPTY(&inp->tasks)) {
   input->nFilledLen = 0; /* mark buffer as empty */
   enc_MoveTasks(&priv->used_tasks, &inp->tasks);
}

which marks the buffer to be cleared.

With tizonia, the component has to implement its own buffer management. Currently the component doesn’t use a queue for holding buffers like bellagio does internally, so it has to clear the buffer before requesting a new one. To provide similar functionality, in this patch I made changes to call h264e_manage_buffers until the condition “if (!inp || LIST_IS_EMPTY(&inp->tasks))” is satisfied. Unfortunately this didn’t quite work as expected when running it with gst-launch using this pipeline:

gst-launch-1.0 filesrc location=~/mp4_tests/10\ Second\ countdown.mp4.mp4 ! qtdemux ! h264parse ! avdec_h264 ! videoconvert ! omxh264enctiz ! h264parse ! avdec_h264 ! videoconvert ! ximagesink

The video runs for a bit, then stops and hangs. I tried debugging it with gdb, setting breakpoints on all functions in h264eprc.c and adding the command “continue” to all of them so that I could just watch how control flows. Strangely, while debugging in this manner the video never hung; instead it reached the end and gave some port flush errors. Trying it again without gdb, I got the same results. This made it really hard to debug, and the crashes it caused didn’t help either, so in the end I reverted the commit to try other things.

The second approach was to just move the block that clears the buffer to the end. The commit can be found here. This assumes that inp->tasks is empty, which has been the case in the bellagio st/omx/h264enc traces. Still, this could be considered a workaround that might need more work later. With this change the video plays.

What’s next?

Since the encoder is almost in working state, next week Julien will review it while I focus on adding EGLImage support. Juan is working on a per-role EGLImage registration API that would allow different roles to have different hooks. Julien and Christian provided input on how the task could be accomplished.

GSoC 2017: Video stuck issue fixed

After last week’s work the decoder is now able to play videos normally. The video getting stuck at the end was fixed, along with other improvements that I talk about below.

Signalling EOS

Earlier I had left signalling end of stream as a to-do. This only needed propagating the flag on the output buffer:

p_prc->p_outhdr_->nFlags |= OMX_BUFFERFLAG_EOS;

But it was a little tricky because of the sliding window: when EOS is reached, the decoder should not release the output buffer before all the buffers in the window have been decoded. https://github.com/gpalsingh/mesa/commit/69985535972593a4ccd9630efcb3106f8737a8d2 fixed that issue.

That along with the error correction in https://github.com/gpalsingh/mesa/commit/97995b031149ad7d709aca908fd1d263579c1560 made the video close smoothly on ending.
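The sliding-window rule above can be sketched as follows (hypothetical state, not the decoder’s actual fields): when EOS arrives on input, frames may still be pending in the reference window, so the EOS flag goes on the output buffer only once the window has drained.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical decoder state for the sliding-window EOS rule. */
struct dec_state {
    int pending_frames;   /* frames still held in the sliding window */
    bool input_eos;       /* EOS seen on the input side */
};

/* Returns true when the current output buffer should carry
 * OMX_BUFFERFLAG_EOS: only after the window is fully drained. */
static bool flag_output_eos(struct dec_state *s)
{
    if (s->pending_frames > 0)
        s->pending_frames--;   /* drain one frame from the window */
    return s->input_eos && s->pending_frames == 0;
}
```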

No more dependence on gst-omx for port configuration

Earlier in development we were using workarounds for nStride and other parameters, and the component depended on the client to update the corresponding ports. With the patch in https://bugzilla.gnome.org/show_bug.cgi?id=783976 that behaviour was changed. After removing the workaround and using this patch, the component stopped working: the out port needed to be enabled first by the component before sending any output. https://github.com/gpalsingh/mesa/commit/32f535834fb7c11095830e49380a3612dc6e8563 made it able to start the decoding process, but later I faced another problem.

The decoder assumed that the input buffer is cleared right after it’s read. But now, in case the out port is disabled, the input buffer can’t be cleared; the decoder needs to wait until the right time to release the claimed buffer. https://github.com/gpalsingh/mesa/commit/868577f36ebb6435615938ccae19e90abee237bb fixed that by adding checks in decode_frame and get_input_buffer and separating the buffer-shifting logic from decode_frame.
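The deferred release can be sketched like this (hypothetical state, not the actual decoder code): the claimed input buffer is held while the out port is down and released only once the port comes up.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical state for deferring the input-buffer release. */
struct dec {
    bool out_port_enabled;
    int held_in_buffer;    /* 1 while we still hold the claimed buffer */
    int released;          /* count of buffers returned to the client */
};

/* Release the claimed input buffer only when it is safe to do so. */
static void try_release_input(struct dec *d)
{
    if (d->held_in_buffer && d->out_port_enabled) {
        d->held_in_buffer = 0;
        d->released++;
    }
}
```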

Reading stream parameters from headers

In https://github.com/gpalsingh/mesa/commit/32f535834fb7c11095830e49380a3612dc6e8563 the decoder sends a “fake” event just to enable the out port, as

    tiz_srv_issue_event ((OMX_PTR) p_prc,
                         OMX_EventPortSettingsChanged,
                         OMX_VID_DEC_AVC_OUTPUT_PORT_INDEX,
                         OMX_IndexParamPortDefinition, /* the index of the struct that has been modified */
                         NULL);

while reading the header. https://github.com/gpalsingh/mesa/commit/0d79ca5b760ee4b9c3655c94a56ef75065a329fa changes this by actually reading the values and updating them. This also removed the dependency on the h264d_SetParameter workaround, which was dropped.

Moving on

The component works fine with normal streams but fails with streams that change parameters. Also, while seeking there is a possibility that the out port will fail to flush and a timeout will occur. The major goal now is to add support for OMX_UseEGLImage, which allows a faster decoding process.

GSoC 2017: Working H.264 decoder

Last week we had the decoder reading the stream, but the output was all wrong: just a green screen, as below. The reason was the output buffer being filled with 0s.


After this week’s work the decoder is able to play the video.



Now let’s discuss briefly what I’ve been working on this week.

Buffer management

The first thing I did was add proper buffer management to the decoder. At first, all the work related to clearing buffers was done either in h264d_prc_buffers_ready or in decode_frame; now it is done in one place, h264d_manage_buffers. It currently doesn’t handle the case when the stream reaches the end.

Port disable and enable

Another little addition was the ability to enable and disable ports. This change was done in parallel to tizonia’s vp8dec, which is being used as a guide for the new mesa/gallium component. https://github.com/gpalsingh/mesa/commit/6b33081cb3bfdd90b2d2944276a51386e06472a2

Correcting the video output

The main task was to show the correct output instead of the green screen. This took the majority of my time and involved two steps.

Debugging green screen

First we needed to find out why it was happening. After checking the inputs, I found that the decoder was doing the decoding work right, so the error most probably lay in the output process. As Julien pointed out, in YUV planar output a buffer with all 0s shows as green. Following this clue, I found that one of the stride values was 0.


for (i = 0; i < height; i++) {
   memcpy(dst, src, width);
   dst += dst_stride;
   src += src_stride;
}
This meant that it was failing to write anything to the buffer.
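Julien’s observation is easy to verify numerically (a self-contained check using standard BT.601 limited-range coefficients, not Mesa code): an all-zero YUV buffer converts to a strongly green pixel, because red and blue clamp to 0 while the chroma offsets leave a large green component.

```c
#include <assert.h>

/* Why a zero-filled YUV buffer shows as green (illustration only). */
static int clamp255(double v) { return v < 0 ? 0 : v > 255 ? 255 : (int)(v + 0.5); }

static void yuv_to_rgb(int y, int u, int v, int *r, int *g, int *b)
{
    double yd = 1.164 * (y - 16);
    *r = clamp255(yd + 1.596 * (v - 128));
    *g = clamp255(yd - 0.392 * (u - 128) - 0.813 * (v - 128));
    *b = clamp255(yd + 2.017 * (u - 128));
}
```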

Tizonia’s vp8 decoder (vp8dec) and mesa/gallium’s bellagio omxh264dec use different methods to update the value.

Tizonia’s vp8dec reads the stream information from the first buffer in frame, updates the nStride (along with other parameters) and notifies that the output port settings have been changed.

omxh264dec replaces the component handle’s SetParameter function to set the nStride whenever the index is OMX_IndexParamPortDefinition.

Adding the ability to set nStride and nFrameWidth to correct values

The ideal fix was to do it like vp8dec does (read the info from the stream and update the port parameters). This required inspecting the gst-omx/gstreamer code to dig out how the information is parsed, since omxh264dec currently does not do so. That turned out to be pretty time consuming, so I came up with a quick fix: replace the component handle’s SetParameter with a different version, like omxh264dec does. The update needs to be done on the input port, since the output port parameters get overridden due to slaving behaviour.

What still doesn’t work

Even though it works, there are still some improvements to be made. The video gets stuck at the end, and the output sometimes has unexpected thin black strips at the top and bottom. In addition, the way the decoder currently sets the output port parameters is not ideal, which I’ll discuss in detail later on. The decoder also shows wrong output for streams where the resolution changes on the fly.

Moving forward

This week our focus will first be on fixing the video getting stuck at the end. Next comes finding a better way to set the output port parameters.

Setting up development environment for Gallium state tracker development

In this post we will be looking at the process to set up the development environment that I used for working with state trackers in Mesa/Gallium. We will be using gst-uninstalled as the uninstalled environment.

Get the main packages

These are the packages we’ll be testing: Mesa and libva. Mesa has the state trackers, and we need libva because Mesa uses it. We won’t be installing them yet, as the testing environment still needs to be set up.

First make a folder where all the stuff goes.

$ mkdir ~/tutorial
$ cd ~/tutorial

Then clone mesa into it.

$ git clone git://anongit.freedesktop.org/git/mesa/mesa

Next is libva.

$ git clone https://anongit.freedesktop.org/git/vaapi/libva.git

Install bellagio

$ sudo apt install libomxil-bellagio*

Install tizonia (originally from https://github.com/tizonia/tizonia-openmax-il#installation)

$ curl -kL https://github.com/tizonia/tizonia-openmax-il/raw/master/tools/install.sh | bash
# Or its shortened version:
$ curl -kL https://goo.gl/Vu8qGR | bash
$ sudo apt-get update && sudo apt-get upgrade

Setting up testing environment

Get Gstreamer

(From the official homepage) GStreamer is a library for constructing graphs of media-handling components. Applications can take advantage of advances in codec and filter technology transparently. Developers can add new codecs and filters by writing a simple plugin with a clean, generic interface. Here we will be using gst-omx and some other plugins for testing.

First build the dependencies for Gstreamer.

$ sudo apt-get build-dep gstreamer1.0-plugins-{base,good,bad,ugly}

Get the installer script

$ curl -O https://cgit.freedesktop.org/gstreamer/gstreamer/plain/scripts/create-uninstalled-setup.sh

Edit the script and add gst-omx and gstreamer-vaapi to “MODULES”

MODULES="gstreamer gst-plugins-base gst-plugins-good gst-plugins-ugly gst-plugins-bad gst-libav gstreamer-vaapi gst-omx"

or you can clone them later manually. Other options like BRANCH and UNINSTALLED_ROOT can also be changed to your liking.

Run the script and wait for it to finish.

$ sh create-uninstalled-setup.sh

This will install the plugins in ~/gst directory.

Using the environment

The script will give you additional information before exiting which you should take note of. The instructions here are general.

Enter the environment with

$ ~/gst/gst-master

This will take you to ~/gst/gst-master in the uninstalled environment (a bit similar to python’s venv).


Use

$ exit

to exit the uninstalled environment at any time.

To make it easier to use you can link it to bin in your home.

$ mkdir ~/bin; ln -s ~/gst/gst-master ~/bin/gst-master

And then add this line in your bashrc / bash_profile

export PATH=$PATH:~/bin

Load the changes

$ source ~/.bashrc

Now you can simply use it like a command

$ gst-master

More details can be found here: https://arunraghavan.net/2014/07/quick-start-guide-to-gst-uninstalled-1-x/

Firmware installation (Optional)

You only need to do this if you are using an NVIDIA graphics card like I did.

Use the following commands to install the firmware for your NVIDIA graphics card:

$ mkdir /tmp/nouveau
$ cd /tmp/nouveau
$ wget https://raw.github.com/imirkin/re-vp2/master/extract_firmware.py
$ wget http://us.download.nvidia.com/XFree86/Linux-x86/325.15/NVIDIA-Linux-x86-325.15.run
$ sh NVIDIA-Linux-x86-325.15.run --extract-only
$ python2 extract_firmware.py  # this script is for python 2 only
# mkdir /lib/firmware/nouveau
# cp -d nv* vuc-* /lib/firmware/nouveau/

More details can be found on the official page: https://nouveau.freedesktop.org/wiki/VideoAcceleration/

Installing modules

Now that we have the base set up, we can proceed to installing the packages to finish the setup process. Note: you should be inside the uninstalled environment before proceeding.

Install Mesa

First move to the mesa directory

$ cd ~/tutorial/mesa

Then use the autogen.sh script to configure everything

./autogen.sh \
 --prefix=$HOME/gst/master/prefix \
 --enable-texture-float \
 --enable-gles1 \
 --enable-gles2 \
 --enable-glx \
 --enable-egl \
 --enable-gallium-llvm \
 --enable-shared-glapi \
 --enable-gbm \
 --enable-glx-tls \
 --enable-dri \
 --enable-osmesa \
 --with-egl-platforms=x11,drm \
 --with-gallium-drivers=nouveau,swrast \
 --with-dri-drivers=nouveau,swrast \
 --enable-vdpau \
 --enable-omx \
 --enable-va
Note that the prefix is ~/gst/master/prefix. Passing it is necessary, otherwise Mesa will be installed in the default location. If you used a different UNINSTALLED_ROOT and/or BRANCH, change the path accordingly. The nouveau entries select the driver for NVIDIA graphics cards.

Finally, run make and make install.

$ make -j8
$ make install

This will install Mesa in ~/gst/master/prefix, where it can be used from within the environment. This means that even if you break something during testing, your system-wide installation will be unharmed.

Install libva

Similar to Mesa, we just need to pass the prefix when configuring:

$ cd ~/tutorial/libva
$ ./autogen.sh --prefix=$HOME/gst/master/prefix/
$ make -j8
$ make install

Install GStreamer and modules

The modules downloaded by the gst-uninstalled script still need to be installed before using them.

Move to the ~/gst/master directory and run the automated script ~/gst/master/gstreamer/scripts/git-update.sh, which will do all the work for you.

You will have to edit the script to add extra modules like gstreamer-vaapi (only those which don’t need extra configure arguments):

 gstreamer-vaapi \
 gst-editing-services \
 gst-rtsp-server \

or you can choose to do it manually, starting with gstreamer, then gst-plugins-base, then any others in any order

$ ./autogen.sh
$ make -j8

We still need to build gst-omx manually because we will be using different OMX targets

$ cd gst-omx
$ ./autogen.sh --with-omx-target=bellagio
$ make -j8

Note that we don’t need to run “make install” for the GStreamer modules; the uninstalled environment uses them in place.

Checking the environment

Here are some tips to check if you have set up the environment correctly.

Checking mesa install


$ glxinfo

should give different info from when invoked outside the environment.

For example, outside the env:

$ glxinfo | fgrep "OpenGL core profile version string"
OpenGL core profile version string: 4.3 (Core Profile) Mesa 17.2.0-devel - padoka PPA

In the uninstalled env:

$ glxinfo | fgrep "OpenGL core profile version string"
OpenGL core profile version string: 4.3 (Core Profile) Mesa 17.2.0-devel (git-5ff4858)

Checking firmware install

If you installed the NVIDIA firmware, you should now have lots of files in /lib/firmware/nouveau/. You can count them with:

$ ls -l /lib/firmware/nouveau/ | wc -l

The number of files you have need not be exactly the same as mine.

Checking Gstreamer install

Just check that you are using the uninstalled versions of the gst-* tools

$ which gst-inspect-1.0
$ which gst-launch-1.0

Checking gst-omx install

$ gst-inspect-1.0 omxh264dec

should show information about the omxh264dec plugin. Otherwise you’ll get an error message like this:

$ gst-inspect-1.0 omxh264dec
No such element or plugin 'omxh264dec'

Checking VA-API installation

Running vainfo should give output similar to that shown below

$ LIBVA_DRIVER_NAME=nouveau vainfo
libva info: VA-API version 0.40.0
libva info: va_getDriverName() returns 0
libva info: User requested driver 'nouveau'
libva info: Trying to open /home/gpalsingh/gst/master/prefix/lib/dri/nouveau_drv_video.so
libva info: Found init function __vaDriverInit_0_40
libva info: va_openDriver() returns 0
vainfo: VA-API version: 0.40 (libva 1.8.0.pre1)
vainfo: Driver version: mesa gallium vaapi
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            :    VAEntrypointVLD
      VAProfileMPEG2Main              :    VAEntrypointVLD
      VAProfileVC1Simple              :    VAEntrypointVLD
      VAProfileVC1Main                :    VAEntrypointVLD
      VAProfileVC1Advanced            :    VAEntrypointVLD
      VAProfileH264ConstrainedBaseline:    VAEntrypointVLD
      VAProfileH264Main               :    VAEntrypointVLD
      VAProfileH264High               :    VAEntrypointVLD
      VAProfileNone                   :    VAEntrypointVideoProc

You can use the following pipeline to test if gstreamer-vaapi is working fine:

$ gst-launch-1.0 filesrc location=any_video.mp4 ! qtdemux ! h264parse ! vaapidecodebin ! videoconvert ! ximagesink

This should bring up a video window without any audio.

To quit, switch to the terminal and press Ctrl+C.

You can find clips for testing at http://www.h264info.com/clips.html

Understanding recursion visually with fractals

Recursion is one of the more difficult concepts that beginner programmers have to face. Simply writing a Fibonacci function doesn’t really guarantee complete understanding. Usually one has to gain some experience before getting a good feel for the concept. But there’s an easier way: if you can see what happens under the hood, you can build a better understanding more quickly. I found that fractals are perfect for this purpose, besides being beautiful to look at.
A fractal is basically a recurring pattern. If you zoomed into an ideal fractal, you couldn’t tell how far you had zoomed in. One famous example, the Mandelbrot set, is shown below.

Photo by Wolfgangbeyer

Setting up

Fractals can be drawn with ease if we have a turtle-like object to draw with. I chose to do it in C to make it a little challenging, but you could use any other language you like. If you don’t understand or like the code, don’t worry and stick to the bigger picture. I used a very simple turtle structure with configurable angle and side variables, along with some accompanying functions that serve as an abstraction layer for controlling turtle movement. All the code used in this post is available here in my github repo.

Our first fractal

Armed with this basic knowledge, we are ready to draw our first fractal: the Koch snowflake. It basically involves dividing a line into three (equal) parts, converting the middle one into a baseless triangle, and repeating over and over again. In steps:

  1. Split the line into three parts.
  2. Convert the middle part into an equilateral triangle without the base.
  3. Repeat the same procedure for all the new individual lines.

You don’t really need to repeat the procedure infinitely many times; we usually stop after a finite number of repetitions, called iterations. Here is the result of increasing the number of iterations.

  1. One iteration only


If at this point you don’t get how this happened, I suggest you reread the previous paragraph.

2. Two iterations

This is where the fun starts. With one iteration we stopped after making a single triangle, but now we further divide each of the four lines:

But where does the base line go? If you are asking this question, look at a few more iterations before I explain it.

3. Three iterations


4. Four iterations


Now we can see the pattern. Instead of drawing the line first and then splitting it, the program first splits the work down until it reaches the base case. Then it starts doing the work for all the smaller parts one by one. So the above design is drawn from left to right as-is, without erasing any lines, even though we may like to think about it that way. I know it still isn’t very obvious if you are new. One thing you can do is run the code in a debugger to actually see how the control flow works.

But this isn’t the Koch snowflake I was talking about; it’s just one part of it. The snowflake is formed by first making an equilateral triangle and then applying the algorithm to each of its sides. The result is this:


In different colors:


The Sierpinski Arrowhead

OK, that was great for our first fractal, but it’s not the only thing we can do! Another fractal we can make is the Sierpinski arrowhead. This one adds a little extra detail to the algorithm, but the main idea is the same as before. In steps:

  1. Draw a line (not literally, in the program’s case). Think of the line as the base of an equilateral triangle.
  2. Move up the left side of the triangle up to its middle.
  3. Move parallel to the base until you meet the next side.
  4. Move down that side until you touch the base again.

Now let’s get to drawing:

  1. One iteration

Again, read the above paragraph if you don’t really get it. Good. Now let’s move on.

2. Two iterations

3. Three iterations

Fourth, fifth, …

6. Sixth iteration

Wow! Did you see that coming? We started with a simple rule and got an unexpected result. But if you look closely, you will see that all the smaller parts of the curve are the same as in iteration 2.

In different colors:


This is the power of recursion. The basic idea is that you can apply the same algorithm to the larger thing and to its individual parts with ease. Recursion is used frequently in computing: quick-sort, merge-sort, graph searching, the Towers of Hanoi… and making beautiful fractals. This is just the tip of the iceberg. You can find fractals everywhere in nature, even in your body! (the blood vessels). You can find out more about them on the Internet if you like (there are also tutorials teaching how to make the Mandelbrot set mentioned at the start).


I got the idea for this post from an exercise in the book Think Python. I picked the name “pica” for the turtle from the book Squeak: Learn Programming with Robots.