
Video Live Wallpaper, Part 3


In the last post, we saw how to use FFmpeg to decode a video: we loaded a video, decoded a frame, and saved it into pFrameConverted->data[0]. All that remains is to display this frame on the phone. For that we’re going to use GLWallpaperService. It works just like a regular WallpaperService, except that you can run OpenGL code.

For the remainder, I’m going to assume that you know how to create a live wallpaper—nothing fancy, just the basic “hello world” live wallpaper. Another prerequisite is that you’ve read Dranger’s FFmpeg tutorials, namely Tutorials 1 and 2.

The idea is to grab a video frame, draw the frame on the texture, then draw the texture on the screen using the correct aspect ratio.

  1. Download and install GLWallpaperService. There’s really nothing to it; it works just like a regular WallpaperService. The “tricky” part is hooking it into your project. From the project root directory, create a folder called lib and put GLWallpaperService.jar inside it. If you use Eclipse, follow the install instructions in the GLWallpaperService README file. Otherwise, make sure build.properties contains the line jar.libs.dir=lib. That’s it! You’re ready to create a GLWallpaper. Feel free to check out the sample code that comes with it.
  2. Load the video via FFmpeg in GLWallpaperService’s onCreate(). You may be tempted to load the video in the Renderer’s or Engine’s onCreate(), but I found it better to load it in GLWallpaperService’s onCreate(), as the other two get called more frequently. Make sure to load the video into a 2D array that is smaller than the texture (see the next step). This function loads the video from the NDK:
    void Java_com_videolivewallpaper_NativeCalls_loadVideo
    (JNIEnv * env, jobject this, jstring fileName)  {
      szFileName = (*env)->GetStringUTFChars(env, fileName, &isCopy);
      // Register all formats and codecs 
      av_register_all();
      // Open video file   
      if(av_open_input_file(&pFormatCtx, szFileName, NULL, 0, NULL)!=0) {
        __android_log_print(ANDROID_LOG_DEBUG,
                            "video.c",
                            "NDK: Couldn't open file");
        return;
      }
      // Retrieve stream information 
      if(av_find_stream_info(pFormatCtx)<0) {
        __android_log_print(ANDROID_LOG_DEBUG,
                            "video.c",
                            "NDK: Couldn't find stream information");
        return;
      }
      // Find the first video stream 
      videoStream=-1;
      int i;
      for(i=0; i<pFormatCtx->nb_streams; i++)
        if(pFormatCtx->streams[i]->codec->codec_type==CODEC_TYPE_VIDEO) {
          videoStream=i;
          break;
        }
      if(videoStream==-1) {
        __android_log_print(ANDROID_LOG_DEBUG,
                            "video.c",
                            "NDK: Didn't find a video stream");
        return;
      }
      // Get a pointer to the codec context for the video stream 
      pCodecCtx=pFormatCtx->streams[videoStream]->codec;
      // Find the decoder for the video stream 
      pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
      if(pCodec==NULL) {
        __android_log_print(ANDROID_LOG_DEBUG,
                            "video.c",
                            "NDK: Unsupported codec");
        return;
      }
      // Open codec 
      if(avcodec_open(pCodecCtx, pCodec)<0) {
        __android_log_print(ANDROID_LOG_DEBUG,
                            "video.c",
                            "NDK: Could not open codec");
        return;
      }
      // Allocate video frame (decoded pre-conversion frame) 
      pFrame=avcodec_alloc_frame();
    }

    The variables have been declared as per Dranger’s tutorials, and if you want to know what’s going on, that’s also the best place to go. However, I can make two comments here: (1) fileName is a string of the form file:/sdcard/filename. (Yup, that’s to load from the SD card.) (2) This code is really dumb: it should return different codes depending on which step failed, but it just bails out silently no matter what happens. (Hint: fix this once you get it to work.)
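    As a sketch of that fix, you could return a distinct status code from each failure point instead of void. The jint return type and the error constants here are my own invention for illustration; they are not in the original code:

    #define VIDEO_OK          0
    #define VIDEO_ERR_OPEN   -1
    #define VIDEO_ERR_STREAM -2

    /* Same prologue as loadVideo() above, but each failure path
       returns a different code that the Java side can check. */
    jint Java_com_videolivewallpaper_NativeCalls_loadVideo
    (JNIEnv * env, jobject this, jstring fileName)  {
      szFileName = (*env)->GetStringUTFChars(env, fileName, &isCopy);
      av_register_all();
      if(av_open_input_file(&pFormatCtx, szFileName, NULL, 0, NULL)!=0)
        return VIDEO_ERR_OPEN;
      if(av_find_stream_info(pFormatCtx)<0)
        return VIDEO_ERR_STREAM;
      /* ... find the video stream and open the codec as before,
         returning more codes on failure ... */
      return VIDEO_OK;
    }

    You’d also change the corresponding native method declaration in NativeCalls.java to return an int.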

  3. Create the texture to display the video. So what should we use for the texture dimensions? They must be powers of 2. After trial and error, I found that a texture one power of 2 smaller than the screen dimensions works pretty well. Ex: if the screen is 320×480, the texture is 256×256. This works because OpenGL does a good enough job of scaling the texture smoothly. Assuming you’ve set the texture dimensions, initialize the texture in the Renderer’s onSurfaceChanged():
    glDeleteTextures(1, &texture);
    //setup textures 
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &texture);
    //...and bind it to our array 
    glBindTexture(GL_TEXTURE_2D, texture);
    //Create Nearest Filtered Texture 
    glTexParameterf(GL_TEXTURE_2D,
                    GL_TEXTURE_MIN_FILTER,
                    GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D,
                    GL_TEXTURE_MAG_FILTER,
                    GL_NEAREST); // Use GL_LINEAR for better quality 
    glTexParameterf(GL_TEXTURE_2D,
                    GL_TEXTURE_WRAP_S,
                    GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D,
                    GL_TEXTURE_WRAP_T,
                    GL_CLAMP_TO_EDGE);
    //setup simple shading 
    glShadeModel(GL_FLAT);
    glColor4x(0x10000, 0x10000, 0x10000, 0x10000);
    int rect[4] = {0, TEXTURE_HEIGHT, TEXTURE_WIDTH, -1*TEXTURE_HEIGHT};
    glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, rect);
    //Create blank texture 
    glTexImage2D(GL_TEXTURE_2D,           /* target */
                 0,                       /* level */
                 GL_RGBA,                 /* internal format */
                 TEXTURE_WIDTH,           /* width */
                 TEXTURE_HEIGHT,          /* height */
                 0,                       /* border */
                 GL_RGBA,                 /* format */
                 GL_UNSIGNED_BYTE,        /* type */
                 NULL);

    The best place to put the OpenGL initialization is in the Renderer’s onSurfaceChanged(). If for some reason the dimensions of the wallpaper screen change (for example, because the user slides out the keyboard), you’ll probably need to recreate the texture and redo other work that depends on the screen dimensions. This makes onSurfaceChanged() the ideal place for initialization code.

    I’ll explain glTexParameteriv below.
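    If you’d rather compute the texture size at runtime than hard-code it, a helper like this (my own addition, not from the original project) rounds a screen dimension down to the nearest power of 2:

    /* Round n down to the nearest power of 2 (sketch).
       Ex: powerOfTwoBelow(320) == 256 and powerOfTwoBelow(480) == 256,
       so a 320x480 screen gets a 256x256 texture. */
    static int powerOfTwoBelow(int n) {
      int p = 1;
      while(p*2 <= n)
        p *= 2;
      return p;
    }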

  4. Grab a video frame using FFmpeg. This gets the next video frame each time you call it from the NDK:
    void Java_com_videolivewallpaper_NativeCalls_getFrame
    (JNIEnv * env, jobject this)  {
      // keep reading packets until we hit the end or find a video packet 
      while(av_read_frame(pFormatCtx, &packet)>=0) {
        static struct SwsContext *img_convert_ctx;
        // Is this a packet from the video stream? 
        if(packet.stream_index==videoStream) {
          avcodec_decode_video(pCodecCtx,
                               pFrame,
                               &frameFinished,
                               packet.data,
                               packet.size);
          // Did we get a video frame? 
          if(frameFinished) {
            if(img_convert_ctx == NULL) {
              /* get/set the scaling context */
              int w = pCodecCtx->width;
              int h = pCodecCtx->height;
              img_convert_ctx =
                sws_getContext(
                               w, h, //source 
                               pCodecCtx->pix_fmt,
                               TEXTURE_WIDTH,TEXTURE_HEIGHT,
                               TEXTURE_FORMAT,
                               SWS_FAST_BILINEAR,
                               NULL, NULL, NULL
                               );
              if(img_convert_ctx == NULL) {
                return;
              }
            } /* if img convert null */
            /* finally scale the image */
            sws_scale(img_convert_ctx,
                      pFrame->data,
                      pFrame->linesize,
                      0, pCodecCtx->height,
                      pFrameConverted->data,
                      pFrameConverted->linesize);
            /* do something with pFrameConverted */
            /* ... see drawFrame() */
            /* Free packet since we no longer need it */
            av_free_packet(&packet);
            return;
          } /* if frame finished */
        } /* if packet video stream */
        // Free the packet that was allocated by av_read_frame 
        av_free_packet(&packet);
      } /* while */
      //reload video when you get to the end 
      if (loopVideo == JNI_TRUE) av_seek_frame(pFormatCtx,videoStream,0,AVSEEK_FLAG_ANY);
    }

    Again, if you want to know what’s going on, read the Dranger tutorials. The only major modification is that this code uses sws_scale instead of an older, obsolete function (see Part 2). Generally speaking, sws_getContext sets up the scaling and pixel-format conversion, and sws_scale actually performs it, writing the frame into a 2D array (pFrameConverted->data).

    One last interesting tidbit: the last line rewinds the video when it reaches the end. (The comparison against JNI_TRUE is there because C has no built-in booleans, so you have to use the NDK’s JNI boolean constants.)
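    One detail glossed over above is where pFrameConverted comes from. Here is a minimal sketch of its allocation using the same old FFmpeg API as the rest of this post; I’m assuming TEXTURE_FORMAT is PIX_FMT_RGBA so that the buffer matches the GL_RGBA texture:

    /* Sketch: allocate pFrameConverted and its pixel buffer once,
       after the codec has been opened. */
    pFrameConverted = avcodec_alloc_frame();
    int numBytes = avpicture_get_size(TEXTURE_FORMAT,
                                      TEXTURE_WIDTH, TEXTURE_HEIGHT);
    uint8_t *buffer = (uint8_t *) av_malloc(numBytes);
    avpicture_fill((AVPicture *) pFrameConverted, buffer,
                   TEXTURE_FORMAT, TEXTURE_WIDTH, TEXTURE_HEIGHT);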

  5. Display the video in the Renderer’s onDrawFrame(). This is the million-dollar question, but we’ve actually already done all the hard work. It’s now just a matter of using OpenGL to render the frame as a texture. Render the frame using glTexSubImage2D in onDrawFrame():

    void Java_com_videolivewallpaper_NativeCalls_drawFrame
    (JNIEnv * env, jobject this)  {
      glClear(GL_COLOR_BUFFER_BIT);
      glTexSubImage2D(GL_TEXTURE_2D, /* target */
                      0,          /* level */
                      0,  /* xoffset */
                      0,  /* yoffset */
                      TEXTURE_WIDTH,
                      TEXTURE_HEIGHT,
                      GL_RGBA,    /* format */
                      GL_UNSIGNED_BYTE, /* type */
                      pFrameConverted->data[0]);
      glDrawTexiOES(0, 0, 0, s_w, s_h);     /* s_w,s_h=screen dimensions */
    }

    So what the heck is going on here? In an earlier step, we created a blank texture using glTexImage2D. However, to actually draw each frame, we use glTexSubImage2D. The reason is that glTexImage2D allocates a new texture each time, while glTexSubImage2D updates an existing one. So we get better performance (as in 24 fps vs 10 fps) by using glTexSubImage2D.

    Also, you may have been expecting code that draws the texture over a quad, but the helper methods glTexParameteriv and glDrawTexiOES take care of that for you. (Thanks to Richq’s GLbuffer for this tip.) You pick what part of the texture to display with glTexParameteriv via a cropping rectangle; the parameters of rect are {xOffset, yOffset, width, height}. A negative height flips the texture into the correct orientation. glDrawTexiOES draws the texture to the screen. Its parameters are (xOffset, yOffset, z, width, height). You can play with these settings to display the video with the correct aspect ratio, as in the sketch below.
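    For example, here is one way to compute a letterboxed draw rectangle (a sketch; the drawWidth/drawHeight names are mine, and the video dimensions come from pCodecCtx):

    /* Sketch: fit the video inside the screen while preserving
       its aspect ratio, centered with letterboxing. */
    float videoAspect = (float) pCodecCtx->width / pCodecCtx->height;
    int drawWidth  = s_w;
    int drawHeight = (int) (s_w / videoAspect);
    if(drawHeight > s_h) {  /* too tall: fit to the height instead */
      drawHeight = s_h;
      drawWidth  = (int) (s_h * videoAspect);
    }
    glDrawTexiOES((s_w - drawWidth) / 2,   /* center horizontally */
                  (s_h - drawHeight) / 2,  /* center vertically */
                  0,                       /* z */
                  drawWidth, drawHeight);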

That’s it! There are some details missing (like how to initialize the video), but I’ll make the code available on GitHub once I’m done; I’m actually still working on the video wallpaper. (Or you could just check out Dranger’s tutorials.) So far I’m getting 15–24 fps; videos recorded with the Android camera run at 24 fps, and higher-resolution videos run slower. Not bad.

Oh, and one more important thing: I’m actually not sure of FFmpeg’s licensing requirements. From its website, it seems that all you have to do is make the source code available.

Troubleshooting

For crisper video, use GL_LINEAR instead of GL_NEAREST with glTexParameterf.

You are initializing and calling OpenGL from inside the Renderer, right? You need a current GL context for any OpenGL call to work.
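A quick, if crude, sanity check from native code (my own suggestion, not from the original project): on most implementations glGetString() returns NULL when no GL context is current, so you can log it from your draw call.

    /* Sketch: crude check for a current GL context. */
    const GLubyte *version = glGetString(GL_VERSION);
    if(version == NULL) {
      __android_log_print(ANDROID_LOG_DEBUG, "video.c",
                          "NDK: no current GL context?");
    }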

See also: Part 1, Part 2


10 thoughts on “Video Live Wallpaper, Part 3”

  1. Hello,

    I have managed to build the FFmpeg library and include it in a project, but I can’t decode any video because avcodec_decode_video always returns a negative value. I have tried different video formats and still no results. I enabled almost everything that can be enabled in the build script.

    The codec context is initialized properly, the codec name and video resolution are correct, but avcodec_decode_video fails. The code is almost identical to Dranger’s (with the sws_scale part being different).

    Any ideas what I might be doing wrong? Any help would be very, very much appreciated :).

  2. Sorry for the really late reply.

    You can double check that you are enabling the right switches. Part 1 of this series gives some sane switches.

    Are you sure the codec is supported? Chances are that it is, unless it’s something obscure. You can test that it’s supported with avcodec_open(). Are you enabling the non-free codecs?

    Other than that, I don’t know. Check out the code here—it pretty much also follows the Dranger tutorial.

  3. Hi,

    Do you have any sample code for this type of project? I am having trouble with some of the code… specifically, the “jobject this” and the “GetStringUTFChars” parts give me errors. The first gives an error saying it needs a ‘,’ or ‘…’ before ‘this’; the second says “base operand of ‘->’ has non-pointer type ‘_JNIEnv’”.
    Thank you!


  4. Hi, this is really great. I did everything you said, but could you write a README on how to set up your project? When I install it on the emulator and run it, I get an error.

  5. Yes, I got it done. It’s awesome! Can you tell me if it’s possible to draw 2D images here too, using canvas drawing or texture mapping? I tried to implement it but couldn’t. Is there any way to do it?

    What I need now is several views on the same screen: one video plus two or more images. Kindly guide me on how to do this.

    Thank you so much!
