Software:Camera Module V4L2 Usage
The bug 2.0 camera module (see Hardware:Camera_2.0_Specs) consists of a 3MP camera connected to the Image Signal Processing (ISP) pipeline embedded in the OMAP processor. The block diagram provides an overview of the system and the Linux kernel drivers.
The camera module contains a feature-rich 3MP image sensor. The sensor can be configured to export various resolutions, framerates, and image formats. The sensor contains an ISP of its own that can produce color-processed images. The image from the camera module is sent to the ISP embedded in the OMAP processor on the bug base. The OMAP ISP block is shown in blue above. It consists of two hardware modules:
- CCDC: The CCDC captures the raw images from the sensor, formats them appropriately in system memory, and performs basic image processing routines.
- RESIZER: The RESIZER can resize the image to make it larger or smaller. The RESIZER does its work in hardware, so it requires no CPU usage to change an image size.
The ISP also contains a PREVIEW engine, not depicted in the block diagram above, that provides image processing functions which are unnecessary with the bug 2.0 camera module because the image sensor itself integrates the same functionality.
The ISP can output either RAW data or RESIZER data through Video4Linux2 (V4L2) device nodes as shown in the block diagram.
The complexity of the OMAP ISP has spawned the need for features beyond the scope of the V4L2 (Video4Linux2) API. As such, a kernel media framework that supports the OMAP ISP is in active development as part of the V4L2 project (http://meego.gitorious.org/maemo-multimedia/omap3isp-rx51). The bug camera kernel module utilizes this new media framework. Because most V4L2 applications (such as gstreamer, mplayer, vlc, ffmpeg, etc.) do not support this new media framework, to use the bug 2.0 camera module you must first run a setup application that configures the OMAP ISP pipeline as desired. After initializing the ISP pipeline, you can use traditional V4L2 applications. Alternatively, you can write your own V4L2 application that integrates the media pipeline setup to avoid the need for a precursor application. In other words, to use V4L2 applications, you must first configure the media kernel framework (blue box in the block diagram).
There are two different ways to set up the ISP media pipeline from the command line prior to running traditional V4L2 applications. One is a low-level command line application called media-ctl; the other is the more convenient bug2v4l2 application.
For complete, fine-grained control over the ISP pipeline, use the media-ctl application. You will need to compile this yourself after cloning it from the media framework repository mentioned above. Run media-ctl -h to see how to use the program to set up the media pipeline. This is not the recommended method for configuring the media pipeline on the bug; it is only mentioned for advanced users that need control beyond that provided by bug2v4l2.
The bug2v4l2 application is a command line tool to set up the ISP via the Linux kernel media framework. It provides an interface to set most of the options supported by the bug and the camera module hardware configuration. Unless you need finer-grained control, it is recommended that you use the bug2v4l2 application for setting up the OMAP ISP and camera module, as it is simpler and more convenient.
Make sure the bug2v4l2 application is installed on your bug by performing the following commands at the command line:
opkg update
opkg install bug2v4l2 bug2v4l2-examples
There are three basic configuration options that you must select to configure the ISP media pipeline. They include:
- The device output (e.g. /dev/video2) that you will capture from.
- The image format (e.g. YUV, Bayer GRBG, RGB555, etc.)
- The image resolution (e.g. 2048x1536, 640x480, etc.)
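These three choices map directly onto the bug2v4l2 flags described in the sections below (-d, -f, -g, and -r); a minimal Python sketch that assembles a command line from them (the helper function itself is hypothetical, only the flags come from this document):

```python
def bug2v4l2_cmd(output="RESIZER", fmt="YUV", geometry=(640, 480), resize=None):
    """Assemble a bug2v4l2 command line from the three configuration choices."""
    w, h = geometry
    cmd = ["bug2v4l2", "-d", output, "-f", fmt, "-g", f"{w}x{h}"]
    if resize is not None:  # -r is only meaningful with the RESIZER output
        rw, rh = resize
        cmd += ["-r", f"{rw}x{rh}"]
    return " ".join(cmd)

print(bug2v4l2_cmd(resize=(320, 240)))
# -> bug2v4l2 -d RESIZER -f YUV -g 640x480 -r 320x240
```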
Device Output Selection
The three device outputs available on the ISP from which you can capture images are RAW, PREVIEW, and RESIZER. The block diagram above shows these options. If no other V4L2 devices are plugged in, each output is assigned a fixed device node (for example, the RESIZER output is typically /dev/video6). The RAW node corresponds to the CCDC output as specified in the OMAP3530 TRM; it is the raw data as captured from the image sensor. The PREVIEW node corresponds to the output of the image processing pipeline. The RESIZER node corresponds to the output of the hardware image resizer block.
The -d option is used to select the desired output destination node of the ISP media pipeline:
bug2v4l2 -d <RAW|PREVIEW|RESIZER>
N.B. Given the bug 2.0 camera module contains a 3MP camera that also integrates an internal ISP, it is typically not necessary to use the PREVIEW output, as the image sensor itself can produce color-processed images. Furthermore, it is not possible using bug2v4l2 to pipe the PREVIEW data through the RESIZER. You will need to use the media-ctl application to set up the ISP media pipeline if this advanced feature is required.
Image Format Selection
The -f option lets you specify the format of the image stream. For example:
bug2v4l2 -f <YUV|UYVY|BGGR|GRBG|RGB565|RGB555>
YUV and UYVY are YUV422 formats with different endianness. BGGR and GRBG are Bayer formats with different color phases. If you are not familiar with these options, you will typically want to capture in YUV mode, as it is the color-processed image and is supported by most V4L2 applications.
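The difference between YUV (YUYV) and UYVY is only the byte order of the luma and chroma samples within each two-pixel group; a minimal Python sketch of the reordering (the function name and sample bytes are made up for illustration):

```python
def yuyv_to_uyvy(data: bytes) -> bytes:
    """Swap byte order of a packed YUV422 stream: Y0 U Y1 V -> U Y0 V Y1.

    Each 4-byte group holds two pixels; swapping bytes 0<->1 and 2<->3
    converts between the YUYV and UYVY packings (and back again).
    """
    if len(data) % 4:
        raise ValueError("YUV422 data length must be a multiple of 4")
    out = bytearray(len(data))
    out[0::2] = data[1::2]  # each odd byte moves into the even slot
    out[1::2] = data[0::2]  # each even byte moves into the odd slot
    return bytes(out)

# Two pixels: Y0=0x10, U=0x80, Y1=0x20, V=0x90
print(yuyv_to_uyvy(bytes([0x10, 0x80, 0x20, 0x90])).hex())  # -> 80109020
```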
Image Resolution Selection
The -g option selects the resolution of the image captured from the image sensor:
bug2v4l2 -g WxH
W and H are the width and height, respectively. For example, to capture a VGA image:
bug2v4l2 -g 640x480
The -g option specifies the size of the RAW image. If you are using the RESIZER output, the resizer hardware will resize the image to the size specified by the -r option. For example, suppose you want to capture a 1024x768 image and downsample it to 320x240 using the resizer:
bug2v4l2 -d RESIZER -g 1024x768 -r 320x240
The resizer makes it convenient to change the resolution of the stream without restarting it, which would force the automatic exposure control on the image sensor to restart. Note the resizer can only downsample by 4x in each dimension. The resizer can also upsample your image if the -r geometry is larger than the -g geometry.
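The 4x-per-dimension downsample limit can be checked before invoking the resizer; a minimal Python sketch (the function name is hypothetical, only the 4x rule comes from the text above):

```python
MAX_DOWNSAMPLE = 4  # the resizer can only downsample by 4x in each dimension

def resize_is_valid(raw, out):
    """Check whether (raw_w, raw_h) -> (out_w, out_h) respects the 4x limit.

    Upsampling is allowed, so only the downsample ratio is checked.
    """
    return all(r <= MAX_DOWNSAMPLE * o for r, o in zip(raw, out))

print(resize_is_valid((1024, 768), (320, 240)))   # 3.2x each way -> True
print(resize_is_valid((2048, 1536), (480, 360)))  # ~4.3x wide -> False
```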
Selecting An Appropriate Configuration
The various hardware and software options available may seem daunting to set up, so below are some simple rules to follow to set up the video stream with your desired resolution and format.
- Select the desired resolution of the output stream. If you are doing video, you will typically want a smaller image resolution such as 320x240 (QVGA), 640x480 (VGA), or 1280x720 (HD 720p). Still captures are typically done at full resolution: 2048x1536 (3MP). There is a tradeoff between frame rate and resolution, as discussed next.
- Select the raw image format. The image sensor driver supports three full field-of-view resolutions: 640x480, 1024x768, and 2048x1536. You can select raw image formats of any dimension lower than these three geometries, but the driver will automatically center-crop to achieve the resolution. This means, for example, that if you selected a raw image resolution of 320x240, you would get the center crop of the 640x480 mode. The 640x480, 1024x768, and 2048x1536 images themselves are achieved by scaling the sensor image via binning where appropriate. The 2048x1536 resolution is the maximum supported and correspondingly runs at the slowest framerate, 3fps. The 1024x768 is a downsample of 2x in each dimension and runs at 10fps. The 640x480 is accomplished by a downsample of approximately 3x and runs at 14fps. For these reasons, it is recommended that you select a raw image format of exactly 640x480, 1024x768, or 2048x1536. Select the size that is just larger than or equal to the final image resolution you picked in step #1. For example, if you want a 320x240 image stream, pick a raw resolution of 640x480. If you want to be able to switch between all available image resolutions, set your raw image format to the full resolution of 2048x1536. You set the raw image format via the -g option.
- Select your output node. If your raw image format matches your desired image format and will never change, use the RAW output (e.g. -d RAW). If you need to fine-tune your image resolution or want to change it quickly, select the RESIZER as your output (e.g. -d RESIZER). If you select the RESIZER, use -r WxH to specify the final output size.
For example, suppose you want a 320x240 image stream, then set the raw format to 640x480 and use the resizer to drop it to 320x240:
bug2v4l2 -d RESIZER -g 640x480 -r 320x240
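The rule in step #2 above, pick the smallest of the three supported raw resolutions that covers the desired output, can be sketched in Python (the helper name is hypothetical; the resolutions and framerates come from the text):

```python
# Full field-of-view modes supported by the sensor driver, smallest
# first, with the approximate framerates quoted in the text above.
RAW_MODES = [(640, 480, 14), (1024, 768, 10), (2048, 1536, 3)]

def pick_raw_mode(want_w, want_h):
    """Return the smallest supported raw (w, h, fps) covering the request."""
    for w, h, fps in RAW_MODES:
        if w >= want_w and h >= want_h:
            return (w, h, fps)
    raise ValueError("requested size exceeds the 3MP sensor maximum")

print(pick_raw_mode(320, 240))   # -> (640, 480, 14)
print(pick_raw_mode(1280, 720))  # 1024x768 is too narrow -> (2048, 1536, 3)
```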
To set up a full resolution image stream that you will never need to change the size of:
bug2v4l2 -d RAW -g 2048x1536
To set up an image stream that you can switch between full resolution and a lower resolution:
bug2v4l2 -d RESIZER -g 2048x1536 -r 2048x1536  # capture a full resolution image
# now switch it back to low res for an LCD preview
bug2v4l2 -d RESIZER -g 2048x1536 -r 320x240
All the above examples default to YUV data. Suppose you want to test out some color interpolation algorithms and you want the raw Bayer data from the sensor. In that case, you would use the -f option to select GRBG data as follows:
bug2v4l2 -f GRBG -d RAW -g 2048x1536
Or suppose you have an application that uses UYVY data, which is the same YUV data with the byte order of the Y and U/V samples swapped:
bug2v4l2 -f UYVY -d RAW -g 1024x768
You can use ffmpeg to directly transcode a video stream from the camera module.
First make sure you have the required packages installed:
opkg update
opkg install libtheora libvorbis liboil ffmpeg
To use ffmpeg, you will need to specify the framerate of the input stream. The framerate is printed to STDERR when running the bug2v4l2 program. Suppose you want to capture a QVGA (320x240) MPEG-4 stream. First set up the ISP media pipeline:
root@bug20:~# bug2v4l2 -g 640x480 -f YUV -d RESIZER -r 320x240
This sets up the image sensor to output 640x480 YUV data. The resizer then resizes the image to 320x240. The output of this command is:
mt9t111_detect: Read MT9T111 CHIP ID = 0x2680
mt9t111_set_format applying 2048x1536 patch
mt9t111_set_format applying YUV mode
mt9t111_set_format applying 640x480 patch
mt9t111_set_format applying YUV mode
Subdev format set: YUYV 640x480 on pad bug_camera_subdev 3-0038/0
Subdev format set: YUYV 640x480 on pad OMAP3 ISP CCDC/0
Subdev format set: YUYV 640x479 on pad OMAP3 ISP CCDC/1
Subdev format set: YUYV 640x479 on pad OMAP3 ISP resizer/0
Subdev format set: YUYV 320x240 on pad OMAP3 ISP resizer/1
Framerate: 14/1 fps
The last line says the frame rate is 14 fps (frames per second). Now we can run the bug2v4l2 command to get the device node of the RESIZER output:
root@bug20:~# bug2v4l2 -p
/dev/video6
So the device node of the RESIZER is /dev/video6. With the device node and the framerate, we can now run ffmpeg:
ffmpeg -r 14/1 -s 320x240 -f video4linux2 -i /dev/video6 -f mp4 test1.mp4
The -r option is where you specify the framerate and the -i option is where you specify the device node. Press 'q' to stop recording. Now you have an MPEG-4 video file called test1.mp4 saved to your disk.
Check out the man page for ffmpeg to see your many transcoding options. For example, you can increase the bitrate using the -b option to get a better looking (but larger) video.
The above example can all be done in a single line:
ffmpeg -r 14/1 -s 320x240 -f video4linux2 -i $(bug2v4l2 -g 640x480 -f YUV -d RESIZER -r 320x240) -f mp4 test1.mp4
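When scripting this, the framerate can be pulled out of the bug2v4l2 output with a small amount of parsing; a Python sketch, assuming the `Framerate: 14/1 fps` line format shown earlier (the function name is hypothetical):

```python
import re
from fractions import Fraction

def parse_framerate(output: str) -> Fraction:
    """Extract the 'Framerate: N/D fps' line printed by bug2v4l2."""
    m = re.search(r"Framerate:\s*(\d+)/(\d+)\s*fps", output)
    if not m:
        raise ValueError("no framerate line found in bug2v4l2 output")
    return Fraction(int(m.group(1)), int(m.group(2)))

# Last two lines of the sample output shown above
sample = ("Subdev format set: YUYV 320x240 on pad OMAP3 ISP resizer/1\n"
          "Framerate: 14/1 fps\n")
print(parse_framerate(sample))  # -> 14
```

The resulting value can then be passed to ffmpeg's -r option.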
Gstreamer is a library for constructing graphs of media-handling components. It contains a plugin to source data from Video4Linux2 devices and can be used from either the command line (using gst-launch) or as a dynamically linked shared library (including lots of language bindings).
There are lots of gstreamer plugins, but here are some you will likely want to install:
opkg update
opkg install gstreamer gst-plugin-video4linux2 gst-plugins-base gst-plugins-good gst-plugins-bad gst-plugins-ugly gst-plugin-ffmpegcolorspace gst-plugin-ximagesink gst-plugin-xvimagesink gst-plugin-autodetect
Setup the video stream:
bug2v4l2 -g 640x480 -f YUV -d RESIZER -r 320x240
Examples of using gst-launch to do various things:
If you have the bug video module installed, you can open a shell in the X terminal. Then you can stream live video to the display with the following commands:
export DISPLAY=:0.0
gst-launch v4l2src device=$(bug2v4l2 -p) ! ffmpegcolorspace ! autovideosink
Then you will see the video stream on your monitor.
Now we can get fancy and add some text overlay (install the gst-plugin-cairo opkg package):
gst-launch v4l2src device=$(bug2v4l2 -p) ! ffmpegcolorspace ! cairotextoverlay text="Hello" ! ffmpegcolorspace ! autovideosink
Suppose you don't want to display the video but just want to transcode it and capture it to file for play back on another device. Here is how to capture a raw YUV video stream and wrap it in an avi file format:
gst-launch v4l2src num-buffers=100 device=$(bug2v4l2 -p) ! avimux ! filesink location=/var/volatile/test2.avi
gst-launch filesrc location=/root/test2.avi ! avidemux ! autovideosink
The second line is how you would use gstreamer (potentially on a different machine) to play back the video.
vlc or any other video player will also play it back.
Here is a motion JPEG video (make sure you have the JPEG plugin installed):
gst-launch v4l2src num-buffers=100 device=$(bug2v4l2 -p) ! jpegenc ! avimux ! filesink location=/var/volatile/test1.avi
gst-launch filesrc location=/var/volatile/test1.avi ! avidemux ! jpegdec ! autovideosink
Once again, the second line is what you use to play it back.
The processor doesn't seem to be able to keep up with theora/ogg vorbis encoding in real time:
gst-launch v4l2src num-buffers=500 device=$(bug2v4l2 -p) ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location=test1.ogg
So you can instead capture it as YUV or motion jpeg and then encode it in theora format.
Now imagine that you want to stream video over Wi-Fi. This example uses udpsink to stream MJPEG images to a remote computer (192.168.1.178):
bug2v4l2 -g 640x480
gst-launch v4l2src device=$(bug2v4l2 -p) ! ffmpegcolorspace ! jpegenc ! udpsink host=192.168.1.178 port=5000
The remote computer will then run a command to receive the video:
gst-launch-0.10 udpsrc port=5000 ! jpegdec ! autovideosink
Or use the following commands instead (faster):
bug2v4l2 -g 640x480
gst-launch v4l2src device=$(bug2v4l2 -p) ! ffmpegcolorspace ! ffenc_mjpeg ! rtpjpegpay pt=96 ! udpsink host=192.168.2.112 port=5000
And to receive:
gst-launch-0.10 -v udpsrc port=5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)JPEG, payload=(int)96, ssrc=(uint)389011018, clock-base=(uint)3589870568, seqnum-base=(uint)64941" ! rtpjpegdepay ! ffdec_mjpeg ! xvimagesink
Run opkg list gst-plugin-* to see all the various gstreamer plugins available on the bug.