Packet containing different content than expected when using DV container

I'm having problems with a movie player I wrote using the FFmpeg libraries.
The problem appears with a test DV clip created by Apple's Compressor. It's a generated clip, so I can
upload it if that helps.

Let me quickly outline what I’m doing:

- I read video frames into a frame queue which is used as a source for playback
- I read audio samples into a ring buffer which is used as a source for playback

To avoid the risk of audio dropouts, I read about 200 ms more audio than is needed to cover the video
frames currently being displayed.
While reading audio samples, I store any video packets that come along in a video packet queue.
When the next video frame is requested, I search the queue first and read from the file only when the
queue is empty.
This procedure worked fine for all other formats I’ve been working with so far (mostly MOV and MXF).

I've been analyzing the issue for a couple of hours now, and it seems that the packets I put into the
queue contain different content when I decode them later on.
In fact, all packets in the queue seem to decode to the frame that would be next in line if I were reading
from the file.

Is there anything I could do to prevent this from happening?
As I said, I would be happy to upload the file. It's 2 minutes of DV PAL, about 430 MB in size; I could make it smaller.

Thanks in advance!



Massimo Battistel | 21 Oct 18:34 2014

Reading exif data from jpeg programmatically

I would like to know if it is possible to read JPEG EXIF data using libavformat/libavcodec.

I can see there is an EXIF tag list in the FFmpeg sources, but I don't know how to access it.

Using the "metadata.c" example from the doxygen documentation, I can't read anything: AVFormatContext->metadata is NULL.
Note that the JPEG file does have EXIF data, because I can read it with exiftool.

Can someone please point me to some doc/examples?


Libav-user mailing list
Min Wu | 17 Oct 10:55 2014

APIs for sending encoded video to ffserver

Hello. I am new to ffmpeg.

What I need is to use the FFmpeg 2.4.2 APIs to encode frames generated by OpenCV and send the result to a streaming server.

Following the FFmpeg examples, I can do H.264 encoding and save the result to a file. But I was not able to find any example or API for sending the encoded stream to a streaming server such as ffserver; I only found examples that use the ffmpeg + ffserver command-line tools for online streaming.

So I wonder if it is possible to do this kind of work with the FFmpeg API.

Thank you.

Sergio Basurco | 16 Oct 08:52 2014

Rate control for mpeg4 encoding

Hi all,

I have an application that encodes MPEG-4 video. The encoder is an adaptation of the decoding_encoding.c example. It works well, but the bitrate fluctuates without much control. Currently, these are my AVCodecContext settings:

    c->bit_rate = 5000000;
    c->width = this->m_width;
    c->height = this->m_height;
    c->time_base.den = 25;
    c->time_base.num = 1;
    c->gop_size = 0;
    c->pix_fmt = this->pix_fmt;

    av_opt_set(c->priv_data, "preset", "ultrafast", 0);
    av_opt_set(c->priv_data, "tune", "zerolatency", 0);

And I get up to 70 Mb/s depending on the image being encoded. I have also tried to set these parameters:

    c->bit_rate = 12e06;
    c->rc_min_rate = c->bit_rate;
    c->rc_max_rate = c->bit_rate;
    c->rc_buffer_size = c->bit_rate * 30;

And it does limit bandwidth, but after a couple minutes of encoding I get the error:

    [error] ffmpeg error evaluating rc_eq (null)

My guess is that I'm missing some parameters. I don't quite understand what the rc_buffer_size value should be either.

-- Sergio Basurco, Coherent Synchro
Sethuraman V | 14 Oct 21:50 2014

Editing a video file

Hi All,

I want to read a frame from a video file, edit it, and write the edited frame back in place of the original. However, I am finding this difficult to accomplish.

I can get a frame from the video file using the FFmpeg examples, and editing the frame/image is also done separately; the problem arises when I want to write the image back to the video file.

I went through the remuxing example provided with FFmpeg to write a whole new video file with the edited content. The problem is that the remuxing example does not work across different videos of the same format.

While fixing the reported error for one input video file, a different error is thrown for another set of video files. I am using MPEG-1/2 files as my video source.

I want to accomplish this for my academic work, so please provide me with some input; material on video editing through the C interface seems really rare.

I can't understand why an 'I' frame read from a video file can't be written/encoded back to it. For 'P' and 'B' frames there are dependencies, so writing the complete frame back can't be done, but why does this apply to an 'I' frame too, which stands on its own?

Please provide me some clarity on how to handle this, any help is appreciated, Thanks!


Ben Morris | 14 Oct 15:59 2014

RTSP stream only received on port 554 via UDP



I am running an RTSP server using Live555 and receiving the stream using ffmpeg. The exact ffmpeg version is the Zeranoe build from September 13th 2014.


Everything runs as expected on localhost, but when connecting to the server from a remote machine, the connection fails when using UDP. The only port it works on is 554, the default RTSP port; any other port times out. I have also tested this in ffplay, where it falls back to a TCP connection and successfully connects. However, I do not wish to use TCP for my video stream, as it is a live feed.


The necessary ports (554-564) have been forwarded on my router, and UPnP is enabled. Furthermore, when using the Live555 test RTSP clients, the connection is established instantly on any port, although I cannot tell whether the connection is over UDP or TCP.


Is there a known issue with only certain ports being usable for RTSP? The timeout happens in avformat_open_input(), where I am passing in a pre-allocated context set up with a callback to handle the timeout; the format information is guessed from the URL. The call looks like this:


    AVDictionary* opts = NULL;

    if (avformat_open_input(&m_formatCtx, url, NULL, &opts) != 0)
        return false;


The RTSP stream URL is in the form:

Any information or advice would be greatly appreciated.



Ben Morris

Jet Stone Studios


Fabrice Alcindor | 14 Oct 15:32 2014

avformat_open_input - remaining data


For a streaming application I would like to use data chunks from a buffer via custom I/O. My understanding is that when opening the file with avformat_open_input(), the stream is split into packets. However, a data chunk might not contain a complete packet. I'm looking for a way to manage the remaining data of an incomplete packet so as to ensure continuity of the stream. Is there something related to this in AVFormatContext or AVIOContext, maybe AVIOContext::buf_ptr?

ALCINDOR Fabrice (MM | 14 Oct 08:50 2014

[LIBAVFORMAT] Build of the library



What is the straightforward way to build only the libavformat library? Running "make" in the source directory is not the right way.

My purpose is to reuse the library's original makefile so I can integrate it into another project.
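For what it's worth, a sketch of the usual approach with FFmpeg's own build system (the exact --disable set depends on which features you need; note that libavformat still requires libavcodec and libavutil, so those cannot be disabled):

```shell
# Configure a minimal tree, then build only the libavformat target
# instead of running a bare "make".
./configure --disable-programs --disable-doc \
            --disable-avdevice --disable-avfilter \
            --disable-swscale --disable-postproc
make libavformat/libavformat.a
```

The makefiles are generated per-library under each subdirectory, so pointing make at the single archive target builds just that library and its dependencies.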


Thanks in advance,




Ben Mesander | 13 Oct 17:41 2014

bug report: avcodec_decode_video2 & pthreads


  Recently I wrote some code using libavcodec & friends to extract the first frame from an MPEG-TS file containing H.264 and encode it as a PNG still image with user-specified dimensions.

  Later, when I wanted to display a default PNG image in case the TS file was missing or corrupted, I realized I could reuse the same code: give it a PNG file as input and have it "transcode" to a PNG with the correct dimensions. While I could have written separate code for this, it was convenient to use the same FFmpeg-calling code for both tasks.

  However, when I did this PNG-to-PNG transcode, my code worked on some machines and failed on others. I tracked this down through my code and into FFmpeg. The machines (both real and virtual) had 2-4 cores, and I noticed that multiple threads were spun up to decode the PNG image; something went wrong on some machines while the code worked fine on others, so this is a hard bug to reproduce. I have trimmed my code down to something that just reads in a PNG and tries to decode it; hopefully it is small enough to be useful.

  For what it's worth, whatever the race is, on some machines the attached code always works and on others it never works. I am using CentOS 6.4 amd64 and had the same RPM manifest on machines that worked and machines that didn't. I spent a lot of time trying to track down environmental differences and could not find any.

  If the underlying FFmpeg library is compiled with --disable-pthreads, the code always works correctly. Otherwise, avcodec_decode_video2() sometimes returns the number of bytes in the PNG but does not set the frame-complete flag to true. When this happens, the output is corrupt and subsequent calls to avcodec_decode_video2() fail.

  The exact flags used to compile FFmpeg and the command line used to compile the C test program are in comments in the C code.


Ben Mesander
Attachment (bugreport.c): text/x-csrc, 5511 bytes
Hung Nguyen | 13 Oct 13:51 2014

[resize video] Lost duration


I am taking the transcoding example from the FFmpeg source code and trying to modify it to fit our situation.
The example works well, even with a custom I/O callback or when writing into an output file.
The problem I am facing now is that every time I change the video size when initialising the encoder, the
output video has no duration:

    enc_ctx->height = 360;
    enc_ctx->width = 640;

But if I give it the original w/h, it's fine.
    enc_ctx->height = dec_ctx->height;
    enc_ctx->width = dec_ctx->width;

The problem is that I have to change the size in order to encode at a different resolution, resize with the
scale filter and/or change the bitrate…
With the duration lost, the video does not play well in some applications (especially Google Chrome).
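If it helps narrow things down, my hedged reading of transcoding.c: the filter graph it builds for video is the pass-through "null" filter, so if the encoder is opened at 640x360 while the decoder delivers frames at the original size, the frames reaching the encoder do not match it, and timing and duration can end up wrong in the output. The direction to try would be making the filter actually rescale, i.e. handing init_filter() a spec like "scale=640:360" instead of "null". A trivial hypothetical helper for building that spec:

```c
#include <stdio.h>

/* Build the filter spec string to pass as the last argument of
 * transcoding.c's init_filter(), so decoded frames are rescaled to
 * the size the encoder was opened with. */
static const char *video_filter_spec(char *buf, size_t size, int w, int h)
{
    snprintf(buf, size, "scale=%d:%d", w, h);
    return buf;
}
```

With the frames actually rescaled before encoding, the encoder's timebase and the stream timestamps should line up the same way they do in the unmodified example.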

Thanks in advance if someone could give me a hint about what the problem might be, where to look, and what
could be going wrong with transcoding.c (when changing w/h before opening the encoder).


Tilak Varisetty | 9 Oct 16:47 2014

Function definition for reading packets

Hello all, 

I would like to know how the function int (*AVInputFormat::read_packet)(struct AVFormatContext *, AVPacket *pkt) works. I cannot find the definition of the function that read_packet is mapped to in the AVInputFormat structure.

I want to know what the function does at run time.
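The short answer, as far as I can tell: there is no single definition. Each demuxer fills in the pointer in its own AVInputFormat initializer (for example, mov.c sets .read_packet = mov_read_packet in ff_mov_demuxer), and av_read_frame() ends up calling it through s->iformat->read_packet in libavformat/utils.c. A self-contained model of that dispatch, with made-up names:

```c
#include <stddef.h>

/* Simplified model of AVInputFormat dispatch: the struct carries a
 * function pointer, each "demuxer" initializes it at compile time,
 * and the generic layer only ever calls through the pointer. */
typedef struct DemoPacket {
    int size;
} DemoPacket;

typedef struct DemoInputFormat {
    const char *name;
    int (*read_packet)(void *opaque, DemoPacket *pkt);
} DemoInputFormat;

static int demo_read_packet(void *opaque, DemoPacket *pkt)
{
    (void)opaque;
    pkt->size = 188;             /* pretend one packet was demuxed */
    return 0;
}

static const DemoInputFormat demo_demuxer = {
    .name        = "demo",
    .read_packet = demo_read_packet,
};

/* Analogue of the generic layer (cf. ff_read_packet() in
 * libavformat/utils.c): it never knows which demuxer it is calling. */
static int generic_read(const DemoInputFormat *f, void *opaque, DemoPacket *pkt)
{
    return f->read_packet(opaque, pkt);
}
```

So to see what runs for your file, find which demuxer probes it (e.g. with av_find_input_format or the log output) and read that demuxer's read_packet implementation in libavformat.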

I would really appreciate any further details on this. 

