Philip Schneider | 21 Nov 18:14 2014

Programmatically setting bit_rate ?

Greetings -

I have some code that encodes video (only); this code was basically cloned from the
“video_encode_example()” function provided on the FFmpeg web site. I’m using codec
AV_CODEC_ID_MPEG2VIDEO, and pixel format AV_PIX_FMT_YUV420P.

The sample code has c->bit_rate = 400000. For a moderate-sized image (1280 x 720) at 30 fps, the
resulting movie shows terrible visual artifacts that look like typical significant undersampling. I
increased the bit_rate by a factor of 10, that is, to 4000000. The resulting movie output looks fine at this
value, with no significant visual artifacts. Poking around the web, I see folks using values quite a lot
higher than this, but without any explanation of how their bit_rate was determined...

I have no expertise in video technology or software (the code in question is simply used as a utility to dump
movies out of a 3D graphics application), and only enough experience with the FFmpeg libraries to get this
simple task done. 

Given those caveats, can someone explain how I can programmatically set the bit_rate? Of course at the time
it’s set, I know the size of the frames and the frame rate. Note that I’m presuming that a
“sufficient” bit_rate might be dependent on frame size and frame rate… :-)


— Philip
Ankush Wadke | 21 Nov 11:37 2014

avcodec 56 crash

I have been working with FFmpeg for some time now. Previously I was using a build from 16-4-2014, and due to some bugs in it I recently switched to the latest build, i.e. 17-11-2014. What I notice is that avcodec-56.dll crashes many times in avcodec_decode_video2(), and after an av_seek_frame() an MTS file starts to play jerkily. All of this was working fine in the 16-4-2014 build.
A snapshot of the call stack is attached below.

Thank you
Libav-user mailing list
Md Aslam | 20 Nov 07:04 2014

Sequence of images to video


I want to make a video from a sequence of images in Android, using NDK r10c with FFmpeg 2.4.3. I have compiled FFmpeg, but I do not know which functions to call in my JNI C file to accomplish this task. If you can guide me, I will be very thankful to all of you.


Andrey Shvyrkin | 19 Nov 12:58 2014

H264 multithreading

Hi, I am developing a player to decode the stream from a camera. My 
program runs on an ARM device, and to improve performance I use 
multi-threading.

avCodecContext->thread_count = 0;
avCodecContext->thread_type = FF_THREAD_SLICE | FF_THREAD_FRAME;

But with some cameras decoding works normally, while with others no 
frames come out.
If I add a flag

avCodecContext->flags2 |= CODEC_FLAG2_CHUNKS;

, then decoding works for all cameras, but multithreading is turned off. 
Without the flag, I can use multi-threading, but not for all streams. 
What is the best way to enable/disable multithreading?
YIRAN LI | 19 Nov 07:54 2014

How to stream via HTTP to some destination receiver?

Hi guys,

VLC has functionality to stream to an HTTP address so that any machine can open that network address and view the stream.

Now I'm looking for similar functionality in ffmpeg to serve an Ogg stream, but I couldn't find it.

The command line I've used was:

$ ffmpeg.exe  -rtbufsize 1500M -f dshow -i video="Microsoft LifeCam VX-1000" -r 20 -bufsize 1024k -vcodec libtheora -qscale:v 1 -f ogg

ffmpeg version N-67742-g3f07dd6 Copyright (c) 2000-2014 the FFmpeg developers
  built on Nov 16 2014 22:01:52 with gcc 4.9.2 (GCC)

Input #0, dshow, from 'video=Microsoft LifeCam VX-1000':
  Duration: N/A, start: 4822.365000, bitrate: N/A
    Stream #0:0: Video: rawvideo, bgr24, 320x240, 30 tbr, 10000k tbn, 30 tbc

What I want to do is capture from the webcam and encode it to an Ogg stream so that it can be viewed anywhere (just include http://ip:port/a.ogg in an HTML5 video tag).

Could anyone tell me why that command failed? :(
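
[One likely problem, assuming the command was pasted in full, is that it has no output URL at all: ffmpeg always needs an explicit output after the output options. Note also that ffmpeg is not a built-in server the way VLC is; in FFmpeg builds newer than the one shown above, the http protocol gained a listen option that can serve a single HTTP client. A sketch, with a made-up port:]

```shell
ffmpeg -rtbufsize 1500M -f dshow -i video="Microsoft LifeCam VX-1000" \
       -r 20 -bufsize 1024k -vcodec libtheora -qscale:v 1 \
       -f ogg -listen 1 http://0.0.0.0:8080/a.ogg
```

For multiple simultaneous viewers, the more common setup is to push the stream to a dedicated HTTP streaming server such as Icecast and let that serve the clients.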

Great thanks!
Ankush | 19 Nov 05:27 2014

Help with MTS seeking

Hi guys, it's been a long time that I've been trying to figure this out.
I have an MPEG-TS (.MTS) coded file and I want to perform frame-accurate
seeking on it. By frame-accurate I mean I want to play the file backward,
but av_seek_frame(FormatCtx, videoStreamIndex, seekTS, AVSEEK_FLAG_ANY)
doesn't seem to help here. I tried with AVSEEK_FLAG_BACKWARD as well, but
it doesn't seem to land on an I-frame. There are I-frames at a distance
of 13 frames and key frames at a distance of 260 (checked using a utility
program). Can anyone help me with this, or at least point me in the right
direction so that I can work from there?

Thank you.
user | 18 Nov 04:49 2014

MPEGTS video stream help

I have a raw .mpg file I want to stream over UDP. I have other PES streams I want to send alongside the video
stream; do I need to use an encoder?

need help in ffmpeg

This is code for decoding an H.264 stream from a camera using FFmpeg, but I want to use hardware acceleration for decoding the video. I cannot find a good sample; there are a lot of questions about this topic, but no answers.

string url = @"rtsp://admin:123456 <at>";
FFmpegInvoke.av_register_all();
FFmpegInvoke.avcodec_register_all();
FFmpegInvoke.avformat_network_init();

AVFormatContext* pFormatContext = FFmpegInvoke.avformat_alloc_context();
if (FFmpegInvoke.avformat_open_input(&pFormatContext, url, null, null) != 0)
    throw new Exception("Could not open file");
if (FFmpegInvoke.avformat_find_stream_info(pFormatContext, null) != 0)
    throw new Exception("Could not find stream info");

AVStream* pStream = null;
for (int i = 0; i < pFormatContext->nb_streams; i++)
{
    if (pFormatContext->streams[i]->codec->codec_type == AVMediaType.AVMEDIA_TYPE_VIDEO)
    {
        pStream = pFormatContext->streams[i];
        break;
    }
}
if (pStream == null)
    throw new Exception("Could not find video stream");

AVCodecContext codecContext = *(pStream->codec);
int width = codecContext.width;
int height = codecContext.height;
AVPixelFormat sourcePixFmt = codecContext.pix_fmt;
AVCodecID codecId = codecContext.codec_id;
var convertToPixFmt = AVPixelFormat.PIX_FMT_BGR24;
SwsContext* pConvertContext = FFmpegInvoke.sws_getContext(width, height, sourcePixFmt,
                                                          width, height, convertToPixFmt,
                                                          FFmpegInvoke.SWS_FAST_BILINEAR,
                                                          null, null, null);
if (pConvertContext == null)
    throw new Exception("Could not initialize the conversion context");

var pConvertedFrame = (AVPicture*)FFmpegInvoke.avcodec_alloc_frame();
int convertedFrameBufferSize = FFmpegInvoke.avpicture_get_size(convertToPixFmt, width, height);
var pConvertedFrameBuffer = (byte*)FFmpegInvoke.av_malloc((uint)convertedFrameBufferSize);
FFmpegInvoke.avpicture_fill(pConvertedFrame, pConvertedFrameBuffer, convertToPixFmt, width, height);

AVCodec* pCodec = FFmpegInvoke.avcodec_find_decoder(codecId);
if (pCodec == null)
    throw new Exception("Unsupported codec");

// Reusing the codec context from stream info.
// As an alternative it could look like this (but it does not work for all kinds of codecs):
// AVCodecContext* pCodecContext = FFmpegInvoke.avcodec_alloc_context3(pCodec);
AVCodecContext* pCodecContext = &codecContext;

if ((pCodec->capabilities & FFmpegInvoke.CODEC_CAP_TRUNCATED) == FFmpegInvoke.CODEC_CAP_TRUNCATED)
    pCodecContext->flags |= FFmpegInvoke.CODEC_FLAG_TRUNCATED;

if (FFmpegInvoke.avcodec_open2(pCodecContext, pCodec, null) < 0)
    throw new Exception("Could not open codec");

AVFrame* pDecodedFrame = FFmpegInvoke.avcodec_alloc_frame();
var packet = new AVPacket();
AVPacket* pPacket = &packet;
FFmpegInvoke.av_init_packet(pPacket);

int frameNumber = 0;
while (true)
{
    Console.WriteLine("frame: {0}", frameNumber);
    if (FFmpegInvoke.av_read_frame(pFormatContext, pPacket) < 0)
        throw new Exception("Could not read frame");
    if (pPacket->stream_index != pStream->index)
        continue;

    int gotPicture = 0;
    int size = FFmpegInvoke.avcodec_decode_video2(pCodecContext, pDecodedFrame, &gotPicture, pPacket);
    if (size < 0)
        throw new Exception(string.Format("Error while decoding frame {0}", frameNumber));

    if (gotPicture == 1)
    {
        byte** src = &pDecodedFrame->data_0;
        byte** dst = &pConvertedFrame->data_0;
        FFmpegInvoke.sws_scale(pConvertContext, src, pDecodedFrame->linesize,
                               0, height, dst, pConvertedFrame->linesize);

        byte* convertedFrameAddress = pConvertedFrame->data_0;
        var imageBufferPtr = new IntPtr(convertedFrameAddress);
        using (var bitmap = new Bitmap(width, height, pConvertedFrame->linesize[0],
                                       PixelFormat.Format24bppRgb, imageBufferPtr))
        {
            bitmap.Save(@"frame.buffer.jpg", ImageFormat.Jpeg);
        }
    }
    //frameNumber++;
}

FFmpegInvoke.av_free(pConvertedFrame);
FFmpegInvoke.av_free(pConvertedFrameBuffer);
FFmpegInvoke.sws_freeContext(pConvertContext);
FFmpegInvoke.av_free(pDecodedFrame);
FFmpegInvoke.avcodec_close(pCodecContext);
FFmpegInvoke.avformat_close_input(&pFormatContext);

How do I use hardware acceleration (hwaccel) for decoding the stream?

Sebastien Bonopera | 14 Nov 16:32 2014

AVStream and multiple clients


I successfully implemented a server, using the FFmpeg libraries, that sends a stream to a client. This stream is fed by capturing screenshots of my desktop. Now, I'm wondering how to send this same stream to multiple clients...

I tried several things, such as simply using one copy of AVStream, AVFormatContext, AVCodec, AVCodecContext and AVOutputContext per client. However, whenever I call av_write_frame(), I get a "Floating point exception".

Here is gdb's backtrace:
Program received signal SIGFPE, Arithmetic exception.
0x000000000051f470 in compute_pkt_fields2 ()
(gdb) bt
#0  0x000000000051f470 in compute_pkt_fields2 ()
#1  0x0000000000520538 in av_write_frame ()
#2  0x00000000004922fa in myFunction (this=0x7fffe40008f0,  ...

How am I supposed to send the same stream to different clients? Is there any tutorial/example? I tried to read ffserver.c, but it seems that it launches one application (ffmpeg) per client.


Zach Swena | 14 Nov 17:00 2014

Re: How to get TS packets directly in to my program from FFmpeg


I am developing software for managing an ATSC TV broadcast and would like to use FFmpeg as the encoding engine. The structure of my program requires that it be fed 188-byte MPEG-2 TS packets. Could someone point me toward the best way to use FFmpeg such that I can control start/stop and acquire the TS packets in my program directly, without writing to the hard drive or broadcasting them over the network?

I need to be ready to encode at the close of a relay so that I do not miss the first few seconds of the emergency alerts, and also to add black padding on either end. The solution I think I am going to need is to continuously encode with FFmpeg and implement a circular buffer in my program to capture the moments just prior to a relay closure. My program can then process and send the packets.

I haven't linked to FFmpeg before, so any pointers would be appreciated. I am using Qt Creator, C++ and mingw32 for my development environment. I need to be able to process the packets in my program to add splice flags and other things, so outputting directly from FFmpeg isn't desirable. Is FFmpeg stable enough to run in this manner 24/7, assuming my code is also?

 This project will be hosted at:


PS: I may have some more questions or comments once I finish analyzing the FFmpeg stream for PCR compliance. I currently have to re-stamp PCR using libvlc, but FFmpeg has DTS, so I can't test that until I update the re-stamping code or finish writing a PCR analyzer. Coarse PCR causes stutter when decoding on TV sets.
coordz | 14 Nov 12:41 2014

AV_CODEC_ID_TIFF packet size problems

I'm using AV_CODEC_ID_TIFF to work with a 5184x3456 pixel image with
R16G16B16, i.e. AV_PIX_FMT_RGB48. When I read the image in, I get a packet
of 107665156 bytes, which tallies nicely with the 107495424 bytes
of uncompressed pixel data that should be in the packet. I then use
sws_scale() to add an alpha channel, i.e. go to AV_PIX_FMT_RGBA64.

When I write the image data out as AV_PIX_FMT_RGB48 (using sws_scale() to
remove the previously added alpha channel), AV_CODEC_ID_TIFF generates a
packet of 108363208 bytes which, again, seems fine.

Now when I try to write the file out as AV_PIX_FMT_RGBA64, I get an error:

"[tiff @ 000000000F49A0A0] Invalid minimum required packet size 2293265920
(max allowed is 2147483631)"

My first worry is that for RGBA64 I should get a packet size of about
143327232 bytes (the pixel data size: 5184 x 3456 x 4 channels x 2 bytes),
not 2293265920 bytes, which is roughly 16 times larger for no reason I can
see. What's going on here? I don't believe this is correct, so is this a
bug with the TIFF encoder?

Also, looking at the code that generates this error (ff_alloc_packet2() in
utils.c), it looks like I'm restricted to images that fit in a packet of a
bit less than INT_MAX bytes, which is about a 30 MPixel image. I'm assuming
this is a very difficult restriction to get around in the avcodec code?

To work around these issues I'm thinking of using TIFF's lossless
compression to try to reduce the packet size. How do I instruct the
AV_CODEC_ID_TIFF encoder to do this compression?