Controlling which capture frames are included in the saved video

Warning: This feature does not exist. If it ever exists, it will be because the current software design made it easy (and a customer asked for it). I anticipate a future version of the camera software will not be able to support frame dropping, due to a planned change in how we do video encoding. This means if you use this feature, you will be locked into using an older software release.

The first edgertronic camera shipped with software that allowed one video to be captured at a single frame rate; the captured video was then automatically saved. The software was improved to allow multiple videos to be captured and then saved in the order they were captured. The camera was later enhanced with background save, allowing captured videos to be saved (LIFO or FIFO) while new videos were being captured. All of the above was done at a single capture frame rate. Next came capturing a video at multiple frame rates (pre-trigger rate, post1-trigger rate, and post2-trigger rate). In all these cases, every captured frame was saved (unless the user chose to trim the video while it was being saved).

All of these new capabilities were made available to all customers, including the customer who bought the first camera we sold, at no additional cost by simply downloading the latest camera software and letting the camera do an update.

The feature described below, which I call frame drop, allows customer-provided code, installed into the camera via the user added URL feature, to control whether a frame is included in the saved video or dropped. Frame drop cannot be used by software running outside the camera.

Radar use case

I will start by describing a use case where, due to the latency between when the event occurs and when the trigger occurs, the multi-rate capture feature is not an optimal solution. This latency is typical when using a radar to trigger the camera. To keep the example simple, single-rate capture is shown; you can use frame drop with multi-rate capture as well.

To make the example fun, I included a simple linear ramp from real-time playback speed (30 fps capture played at 30 fps), ramping up to slow motion playback speed (600 fps capture played at 30 fps), then ramping back down to real-time playback speed (30 fps capture played at 30 fps).

Desired saved video

In the diagram below you can see the 5 phases of the video we want to save. The camera is configured to capture at 600 fps. By selectively dropping frames out of the saved video, we can simulate any frame rate that is slower than the 600 fps capture rate. Since this is a quantized process, the playback may appear jerky (such as capturing at XXX fps and playing back at YYY fps).

Figure: Selective-frame-drop.svg

Notice that the horizontal time axis represents the capture time scale, not the playback time scale. The vertical axis represents the desired rate at which captured video frames are included in the saved video. The saved video is assumed to play back at 30 fps.

Frame dropping

To analyze the frame dropping algorithm needed, we examine each of the five regions shown in the diagram above.

The lead in and follow through regions have a capture frame rate of 600 fps and a playback frame rate of 30 fps. This means each group of 600 / 30 = 20 captured frames needs to be reduced to one saved frame. So we simply allow a frame to be saved, starting with the first captured frame, and then drop the next 19 frames.

The slow motion region has a capture frame rate of 600 fps and a playback frame rate of 600 fps. This means we don't drop any frames.

Then, to make it interesting, we need a linear frame dropping algorithm to create the ramp up and ramp down visual effect.

The implementation is straightforward: a case/switch style software design lets us choose the right frame drop algorithm based on which of the five regions we are in. The trick is determining when to switch from one region to the next.
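
As a rough sketch of that region switch (not camera code; the 3,000 frame capture, the five equal regions, and the crude quantization of the ramp are assumptions carried over from the example above), the keep/drop decision for a given frame number could look like this in Python:

# Sketch of the per-region keep/drop decision described above.  Assumes a
# 3,000 frame capture at 600 fps split into five equal 600 frame regions;
# none of these names are part of CAMAPI.

CAPTURE_FPS  = 600
PLAYBACK_FPS = 30
DECIMATION   = CAPTURE_FPS // PLAYBACK_FPS   # 20: keep 1 frame in 20

def keep_frame(frame_number, total_frames=3000):
    """Return True if this captured frame should be included in the saved video."""
    region_len = total_frames // 5
    region = frame_number // region_len        # 0..4
    offset = frame_number % region_len         # position within the region

    if region in (0, 4):
        # Lead in / follow through: real-time playback, keep 1 frame in 20.
        return offset % DECIMATION == 0
    if region == 2:
        # Slow motion: keep every frame.
        return True

    # Ramp up (region 1) or ramp down (region 3): linearly vary the gap
    # between kept frames from 20 down to 1, or from 1 up to 20.
    fraction = offset / float(region_len)      # 0.0 .. 1.0 through the region
    if region == 1:
        gap = DECIMATION - (DECIMATION - 1) * fraction
    else:
        gap = 1 + (DECIMATION - 1) * fraction
    # Crude quantization; real code might accumulate a fractional step instead.
    return offset % max(int(round(gap)), 1) == 0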

Camera support for frame dropping

At the camera software implementation level, frame dropping is supported using the GStreamer pad probe mechanism. Stated simply, this means before each video frame is encoded, GStreamer invokes a callback which returns True if the frame should be encoded into the saved video file or False if the frame should be dropped. I refer to the user-supplied method as frame_drop_callback().
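
For reference, this is roughly what a GStreamer buffer pad probe looks like in Python. The pipeline below is a stand-in (videotestsrc feeding a fakesink) and the keep-1-in-20 rule is just an example; on the camera the probe sits on the pad feeding the video encoder.

# Minimal GStreamer pad probe illustration: drop buffers before they reach
# the downstream element.  Stand-alone example, not the camera pipeline.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

frame_number = 0

def buffer_probe(pad, info):
    """Called once per buffer; return DROP to exclude the frame."""
    global frame_number
    keep = (frame_number % 20 == 0)            # example rule: keep 1 frame in 20
    frame_number += 1
    return Gst.PadProbeReturn.OK if keep else Gst.PadProbeReturn.DROP

pipeline = Gst.parse_launch('videotestsrc num-buffers=100 ! fakesink name=sink')
sink_pad = pipeline.get_by_name('sink').get_static_pad('sink')
sink_pad.add_probe(Gst.PadProbeType.BUFFER, buffer_probe)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)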

The CAMAPI register_frame_drop_callback() method allows you to provide a method that gets invoked each time the camera is about to save a video, a video frame buffer is passing through, or the last frame of the video being saved has been processed (plus once when the callback is installed). A frame drop state enumeration is used to indicate these events:


CAMAPI_FRAME_DROP_UNKNOWN      = 0
CAMAPI_FRAME_DROP_INSTALLING   = 1
CAMAPI_FRAME_DROP_SAVE_START   = 2
CAMAPI_FRAME_DROP_VIDEO_FRAME  = 3
CAMAPI_FRAME_DROP_SAVE_END     = 4

The frame_drop_callback() method you provide has the following signature:

frame_drop_callback(frame_drop_state, frame_number=None, filename=None, frame_count=None)
    """
    If frame_drop_state is INSTALLED, then True needs to be returned to handshake that a
    valid callback was registered.
    If frame_drop_state is SAVE_START, then a video is about to be saved.  The filename
    and total frames captured are included. If frame_drop_state is VIDEO_FRAME, then a
    video frame can be included or dropped depending on this method's return value.
    The first captured frame is frame 0 (which is a different convention than elsewhere
    we the trigger frame is frame 0). If frame_drop_state is SAVE_END, then all the video
    frames to be saved have been processed.  The number of frames in the
    saved video are included.
    Restriction: must save at least 7 frames.  If the filename is already in use, then the
    camera will add a unique identifier to the end.
    More restrictions: The probe is in before the overlay GStreamer element, so frame number
    overlay will not work as you would expect.  This might get fixed before the real code release.
    Returns: True if video frame should be saved, False otherwise (when frame_drop_state = CAMAPI_FRAME_DROP_VIDEO_FRAME)
    """

Tracking frame number during save

In the diagram above, the horizontal axis is capture time relative to the trigger event. We need to translate the region boundaries to the corresponding frame numbers, where for this feature we say the first captured frame is frame 0. To use real values, let's assume each of the five regions is 1 second long, meaning 5 seconds of action was captured at 600 fps, for a total of 5 sec * 600 frames/sec = 3,000 frames. The captured frames are in the range frame 0 through frame 2,999.

For each 1 second region, there are 600 frames captured.

Region           First frame in region   Last frame in region
Lead in          0                       599
Ramp up          600                     1,199
Slow motion      1,200                   1,799
Ramp down        1,800                   2,399
Follow through   2,400                   2,999
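
The same boundaries can be computed rather than hard coded. A short sketch (none of these names are camera API) that splits a capture into five equal regions and reproduces the table above:

# Split a capture into equal regions and print the frame ranges;
# for a 3,000 frame capture this reproduces the table above.

def region_boundaries(total_frames, regions=5):
    """Return a list of (first_frame, last_frame) tuples, one per region."""
    region_len = total_frames // regions
    return [(i * region_len, (i + 1) * region_len - 1) for i in range(regions)]

names = ['Lead in', 'Ramp up', 'Slow motion', 'Ramp down', 'Follow through']
for name, (first, last) in zip(names, region_boundaries(3000)):
    print('%-15s %5d %5d' % (name, first, last))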

Working implementation

There are three approaches I dreamed up to support frame dropping. In the first approach, all camera control is done using URLs that are added as part of your frame dropping user added URL code. In the second approach, you configure and control the camera normally, and the only logic in the user added URL code is there to perform frame dropping. In either of these two approaches, you can include a new tab that will appear in the webUI settings modal, giving you a UI to configure any frame dropping settings you expose. In the third approach, a list of frames to save is provided with the selective_save() call.

For the working example, I used the second approach: it divides the number of frames in a capture into 5 equally sized regions and applies the logic described above to those regions. The working implementation ships with the camera (http://10.11.12.13/http://sc1f.home/static/sdk/app_ext) and can also be found in the software release directory. You will also find the very poorly named foo extension, which shows how to extend the webUI.

I ran out of time before I could add webUI code, so in the hard-coded example the first video saved has all its frames saved, and the follow-on videos have the frame dropping algorithm applied.

To install your user added URL code, simply follow the file naming convention, put the file on the SD card, and power on the camera.

Specifying frames to be saved

The frames to save could be specified as a list of parameters, where each parameter takes one of several forms:

  • Single frame number
  • Starting and ending frame numbers along with keep ratio
  • Starting and ending frame numbers along with a ramp value

For the example above, which has 5 regions, we can specify the frames to save parameters as follows:

Region           Frame save parameter
Lead in          (0, 599, (1, 20))
Ramp up          (600, 1199, (1, 20), (1, 1))
Slow motion      (1200, 1799, (1, 1))
Ramp down        (1800, 2399, (1, 1), (1, 20))
Follow through   (2400, 2999, (1, 20))

The save_parms JSON dictionary passed into selective_save(), including the above save_frames information, could be:

{
  "buffer_number": 1,
  "filename": "variable_speed_video",
  "save_frames": [
                   [0, 599, [1, 20]],
                   [600, 1199, [1, 20], [1, 1]],
                   [1200, 1799, [1, 1]],
                   [1800, 2399, [1, 1], [1, 20]],
                   [2400, 2999, [1, 20]]
                 ]
}

If you specify the save_frames parameter, then the start_frame and end_frame parameters are ignored.
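
Purely as a sketch of how such save_frames entries could be interpreted (this is proposed behavior, and the (keep, out_of) reading of the ratio pairs is my assumption), the following expands each entry into the set of frame numbers to keep:

# Sketch: expand the proposed save_frames entries into frame numbers to keep.
# Entry forms (from the list above), with (keep, out_of) ratio pairs assumed:
#   frame                                   - single frame number
#   (first, last, (keep, out_of))           - fixed keep ratio
#   (first, last, start_ratio, end_ratio)   - linear ramp between two ratios

def frames_to_keep(save_frames):
    keep = set()
    for entry in save_frames:
        if isinstance(entry, int):             # single frame number
            keep.add(entry)
            continue
        first, last = entry[0], entry[1]
        span = last - first
        start_gap = entry[2][1] / float(entry[2][0])
        end_gap = entry[3][1] / float(entry[3][0]) if len(entry) > 3 else start_gap
        pos = float(first)
        while pos <= last:
            keep.add(int(round(pos)))
            fraction = (pos - first) / span if span else 1.0
            pos += start_gap + (end_gap - start_gap) * fraction
    return keep

# The lead in region (0, 599, (1, 20)) keeps frames 0, 20, 40, ...
print(sorted(frames_to_keep([(0, 599, (1, 20))]))[:5])   # [0, 20, 40, 60, 80]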

Example videos

Since I used the second approach, I could configure the camera for review before save, then save the same video multiple times to create these videos.