Kristensen, Odd-Arild | 31 Jan 22:22 2015

How is cropping and scaling implemented

I have been looking at the implementation details for cropping and scaling. I found filter_frame() in vf_crop.c, which seems to implement cropping, but I cannot trace where this function is being called. If filter_frame() does perform cropping, how does


frame->data[0] += s->y * frame->linesize[0];
frame->data[0] += s->x * s->max_step[0];

in any way change the pixels of the video?



Scaling, on the other hand, seems to be done by sws_scale() if I am not mistaken. But the wording in the documentation confuses me. Does it change/convert the pixel format of the video, or does it actually scale the video/frame to a different resolution?
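As far as the libswscale API goes, sws_scale() can do both: the SwsContext encodes the source and destination geometry and pixel formats, so one call can rescale, convert the pixel format, or do both in a single pass. A minimal sketch (the 1280x720 YUV420P to 640x360 RGB24 numbers are made up for illustration; most error handling omitted):

```c
#include <libswscale/swscale.h>
#include <libavutil/imgutils.h>

/* Sketch: convert a 1280x720 YUV420P frame to 640x360 RGB24.
 * sws_scale() performs whatever the SwsContext describes: if only the
 * pixel formats differ it converts, if only the dimensions differ it
 * rescales, and if both differ it does both at once. */
int convert_frame(const uint8_t *const src_data[4], const int src_linesize[4])
{
    uint8_t *dst_data[4];
    int dst_linesize[4];

    if (av_image_alloc(dst_data, dst_linesize, 640, 360,
                       AV_PIX_FMT_RGB24, 16) < 0)
        return -1;

    struct SwsContext *sws = sws_getContext(
        1280, 720, AV_PIX_FMT_YUV420P,   /* source geometry/format */
         640, 360, AV_PIX_FMT_RGB24,     /* dest geometry/format   */
        SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws)
        return -1;

    sws_scale(sws, src_data, src_linesize, 0, 720, dst_data, dst_linesize);

    sws_freeContext(sws);
    av_freep(&dst_data[0]);
    return 0;
}
```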


Thanks for any clarification.

_______________________________________________
Libav-user mailing list
Libav-user@...
http://ffmpeg.org/mailman/listinfo/libav-user
Zhang Rui | 30 Jan 10:00 2015

The proper approach to pass options to nested format

In http.c, a field named chained_options is used to pass options to the
underlying tcp protocol. In hls.c, options (only for http) are copied explicitly
by name and passed down to the nested URL context.

But when opening an m3u8 file through the file/pipe protocol, where the
playlist contains segments with http URLs, the "user-agent" option is not
accepted by the outer file/pipe protocol, nor is it passed down by any callback.

The same problem exists in concat.c/concatdec.c.

I'd like to know: is it OK to introduce something like
int (*read_header2)(struct AVFormatContext *, AVDictionary **options);
in AVInputFormat?
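For reference, this is how options reach the demuxer and protocol at the top level today; the point above is that no equivalent hook exists when a demuxer opens nested inputs itself (hls or concat opening segment URLs). A sketch (the "MyPlayer/1.0" value is a made-up example):

```c
#include <libavformat/avformat.h>

/* Status quo sketch: at the top level, caller options reach the
 * demuxer/protocol through avformat_open_input(). The problem described
 * above is that there is no equivalent per-input hook when a demuxer
 * opens nested inputs internally. */
int open_with_ua(AVFormatContext **fmt, const char *url)
{
    AVDictionary *opts = NULL;
    av_dict_set(&opts, "user_agent", "MyPlayer/1.0", 0);

    int ret = avformat_open_input(fmt, url, NULL, &opts);
    av_dict_free(&opts);   /* leftover entries = unrecognized options */
    return ret;
}
```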

Turning off decoder delay

I have an MXF file with AVC Intra material in it.
Is there a chance to turn off the codec delay (7 frames in our case)?
The material is intra-only, so my understanding was that no delay should be necessary.
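One hedged guess, not a confirmed fix for AVC-Intra in MXF: with intra-only material, a multi-frame delay often comes from frame threading, where the decoder pipelines roughly one frame per thread. A sketch of restricting the decoder accordingly:

```c
#include <libavcodec/avcodec.h>

/* Sketch: disable frame threading, which introduces roughly one frame
 * of delay per decoding thread even for intra-only streams. Slice
 * threading keeps some parallelism without the pipelining delay.
 * This is an assumed cause, not a verified fix for this MXF case. */
static void configure_low_delay(AVCodecContext *c)
{
    c->thread_type  = FF_THREAD_SLICE;   /* avoid FF_THREAD_FRAME delay */
    c->flags       |= CODEC_FLAG_LOW_DELAY;
}
```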

Any hints appreciated.

Thanks,

Flo
Max Vlasov | 28 Jan 12:45 2015

Is it possible to detect unused/invalid packets (without decoding)

Hi.

As many before me, I have tried to implement a stricter approach to FrameNum/FrameCount functionality with libav.

The idea was to read all packets, saving the pts and keyframe flag of each (without decoding), and build a list of them ordered by pts. After this we have a ready FrameCount, and when one needs to jump to an exact frame number, the way to go is: search this array for the closest preceding keyframe packet, seek, read packets (still without decoding) up to that keyframe, and then read AND decode until the desired frame.

This approach has some limitations (the indexing time, for example, is usually proportional to the I/O read speed), but apart from that, many videos behave well. Some do not, however, and those probably contain unused/invalid packets. I think so because the logic above assumes every pts is visited by the decoder as it produces frames. If some packets are never used, then we expect frame number 123 to have timestamp time1, but in reality, since some packets are skipped, frame 123 (as the decoder produces them) has timestamp time1 + something.

When I looked at real examples of problematic videos, I noticed packets with very small sizes, like 7 bytes or so. I suspect these are invalid, but I cannot rely on size alone to make such an assumption. Is there a more reliable way to tell that a packet is unused/invalid?
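For what it's worth, the index-building pass described above can be sketched as below; the PacketIndexEntry struct is invented for illustration. One hedged lead for the invalid-packet question: some demuxers set AV_PKT_FLAG_CORRUPT on damaged packets, which can be checked without decoding, though not every bad packet is flagged.

```c
#include <libavformat/avformat.h>

/* Sketch of the indexing pass: read every packet without decoding,
 * recording pts and the keyframe flag. PacketIndexEntry is our own
 * illustrative struct; the libavformat calls are the real API. */
typedef struct {
    int64_t pts;
    int     keyframe;
} PacketIndexEntry;

static int build_index(AVFormatContext *fmt, int stream_index,
                       PacketIndexEntry *entries, int max_entries)
{
    AVPacket pkt;
    int n = 0;

    while (n < max_entries && av_read_frame(fmt, &pkt) >= 0) {
        if (pkt.stream_index == stream_index) {
            entries[n].pts      = pkt.pts != AV_NOPTS_VALUE ? pkt.pts : pkt.dts;
            entries[n].keyframe = (pkt.flags & AV_PKT_FLAG_KEY) != 0;
            n++;
        }
        av_free_packet(&pkt);   /* av_packet_unref() in newer FFmpeg */
    }
    /* entries[] would then be sorted by pts to get display order. */
    return n;
}
```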

Thanks

Max
Philip Schneider | 27 Jan 17:13 2015

Encoding with H264 help/advice needed...

Greetings -

Video n00b here…

I’ve been using FFmpeg to encode a sequence of raw RGB frames to a movie file, in a C/C++ app. I basically took the code from here:


The video_encode_example() function synthesizes each frame’s data; in my case I have a data pointer, width, height, and rowbytes, so I just use sws_scale() to convert to YUV and I’m good to go. So far I’ve been using AV_CODEC_ID_MPEG2VIDEO, and it plays in every movie player I have on my Mac (VLC, QuickTime Player, etc.).

For obvious (?) reasons I want H.264. Swapping in that codec (using x264 or openh264) gives me a movie. I can play this on a Mac with VLC, but QuickTime Player doesn’t like the format (it tries to “convert” and then says “QuickTime Player can’t open <filename>”).

The “file” command gives me back this:

    JVT NAL sequence, H.264 video @ L 13

It does not surprise me much that simply dropping in a different codec yields a movie that can’t be opened by some particular player, given all the possible format and codec properties/settings.

But it seems that I should be able to configure the necessary codec and/or container properties such that I get a movie file that can be opened by a wider variety of apps (particularly QuickTime Player, given it’s supposed to support a variety of formats and codecs).

I’m guessing I’m just missing a few settings, but I really don’t know where to start looking. Any help/advice/pointers/documentation would be greatly appreciated…
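A few hedged starting points: the “JVT NAL sequence” output from `file` suggests the result is a raw Annex B elementary stream, not an MP4/MOV container, which alone would explain QuickTime refusing it; QuickTime is also picky about pixel format and profile. A sketch of encoder settings that commonly matter (an educated guess, not a guaranteed fix):

```c
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>

/* Sketch: settings that commonly matter for QuickTime compatibility
 * when encoding H.264 with libx264. */
static void configure_h264(AVCodecContext *c)
{
    c->pix_fmt = AV_PIX_FMT_YUV420P;     /* QuickTime dislikes 4:2:2/4:4:4 */
    c->profile = FF_PROFILE_H264_MAIN;   /* or _BASELINE for older devices */

    /* MP4/MOV muxers want SPS/PPS as global extradata, not in-band: */
    c->flags |= CODEC_FLAG_GLOBAL_HEADER;

    /* libx264 private options go through the AVOption API: */
    av_opt_set(c->priv_data, "preset", "medium", 0);
}
```

The other half is the container: the encode example writes the raw bitstream with fwrite(), which for H.264 yields exactly the raw-NAL file type reported above. Muxing through libavformat into .mov/.mp4 (the muxing.c example in the FFmpeg source shows this) is what QuickTime expects.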

- Philip

Bradley O'Hearne | 26 Jan 06:45 2015

Re: Would anyone find an OS X / Cocoa / Swift wrapper for Libav useful?

On Jan 25, 2015, at 10:28 PM, Thomas Worth <dev@rarevision.com> wrote:

> Have you given AVFoundation a try? Is portability critical? Why stress over ffmpeg when you've got AVFoundation?

Hey Thomas, thanks for the reply. To answer your question: I’m dealing with several use cases, and across all of them there are a number of reasons:

- In one use-case, I have to support QTKit, not AVFoundation.

- In another use-case, I have to support Snow Leopard, in which AVFoundation does not exist. 

- I do not believe that AVFoundation supports per-frame encoding; it supports encoding for whole assets or
inputs. I believe per-frame encoding is only provided in the VideoToolbox framework as of OS X
Mavericks. 

- In the other use-cases, which have the luxury of using any OS X version (and thus AVFoundation can be used),
there are codecs and video formats required which AVFoundation does not support. 

- Presently Apple does not have any public framework that provides real-time network video streaming
capabilities with certain protocols (such as RTSP or RTMP). 

If an app only has to be compatible with Yosemite (or possibly Mavericks), with a supported encoder and video
format (which my present requirements don’t yet allow), I believe AVFoundation / VideoToolbox /
AudioToolbox could handle the capture and encoding needs in my use case. But I’d still have to have the
stream-publishing needs handled by a third-party library like FFmpeg or LIVE555. 

Brad

Zach Swena | 25 Jan 21:31 2015

Stream mapping and multi program muxing with FFmpeg mpegts

Hi,

Can anyone either explain, or point me to a good explanation of, how FFmpeg handles muxing with the mpegts muxer?  I have split a multi-program stream into separate files, but I am having a hard time wrapping my mind around how the FFmpeg stream-routing API works.  What is a good resource for learning about that, short of reading the source code?  What I need to do is output a multi-program MPEG transport stream.

Zach
Hoang Bao Ngo | 20 Jan 16:15 2015

ffmpeg with NVENC, can't find encoder

Hi, I tried ffmpeg with --enable-nvenc and I could encode video streams through ffmpeg.

Ex: ffmpeg -i something -c:v nvenc Test.yuv

But when I try to find the encoder through

 avcodec_find_encoder_by_name("nvenc");

it doesn't seem to exist.


I tried it with Brainiarc7's version/patch of NVENC, but it was the same issue.

Linking problem?
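Two common causes worth ruling out before suspecting the build: avcodec_register_all() was never called before the lookup, or the app links against a different libavcodec (e.g. a system copy built without --enable-nvenc) than the one the ffmpeg binary uses. A diagnostic sketch:

```c
#include <stdio.h>
#include <libavcodec/avcodec.h>

/* Diagnostic sketch: make sure codecs are registered before lookup,
 * then list every registered encoder to see whether nvenc exists at
 * all in the libavcodec this program actually linked against. */
int main(void)
{
    avcodec_register_all();   /* without this, _by_name finds nothing */

    if (avcodec_find_encoder_by_name("nvenc"))
        printf("nvenc encoder found\n");

    for (AVCodec *c = av_codec_next(NULL); c; c = av_codec_next(c))
        if (av_codec_is_encoder(c))
            printf("%s\n", c->name);
    return 0;
}
```

If nvenc shows up in `ffmpeg -encoders` but not in this program's output, the app is almost certainly linking a different libavcodec build.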


Best regards, Hoang Bao Ngo

Sylvain Fabre | 23 Jan 18:51 2015

Getting error concealment stats

My application is decoding an H.264 stream coming from an MPTS. From time to time the stream has errors, and I see messages like these in the logs:

...
[h264 @ 0x7ff187e8ea20] Cannot use next picture in error concealment
[h264 @ 0x7ff187e8ea20] concealing 625 DC, 625 AC, 625 MV errors in P frame
[h264 @ 0x7ff187e8ea20] Cannot use next picture in error concealment
[h264 @ 0x7ff187e8ea20] concealing 802 DC, 802 AC, 802 MV errors in P frame
[h264 @ 0x7ff187e8ea20] Cannot use next picture in error concealment
[h264 @ 0x7ff187e8ea20] concealing 968 DC, 968 AC, 968 MV errors in P frame
[h264 @ 0x7ff187e8ea20] Cannot use next picture in error concealment
[h264 @ 0x7ff187e8ea20] concealing 682 DC, 682 AC, 682 MV errors in P frame
...

The avcodec_decode_video2() call always returns a positive value, hence I am not able to detect such errors in the stream from the client application.

What is the way to get the error concealment statistics at the API level?
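As far as I know the per-frame concealment counts from the log are not exposed through any public field, but the fact that a frame was damaged is flagged on the decoded AVFrame itself; a positive return from avcodec_decode_video2() only means a frame was produced. A hedged sketch using decode_error_flags:

```c
#include <libavcodec/avcodec.h>

/* Sketch: after decoding, check the frame rather than the return
 * value. decode_error_flags is set by the decoder when concealment
 * was applied; the exact error counts from the log are not exposed. */
static int frame_is_damaged(const AVFrame *frame)
{
    return frame->decode_error_flags &
           (FF_DECODE_ERROR_INVALID_BITSTREAM |
            FF_DECODE_ERROR_MISSING_REFERENCE);
}
```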

 

Thanks, SFA

 

Bradley O'Hearne | 23 Jan 03:02 2015

Would anyone find an OS X / Cocoa / Swift wrapper for Libav useful?

All, 

I’ve been on the Libav-user mailing list for 2-3 years now, having worked on FFmpeg integration for various
clients. While I develop on Windows and Linux platforms, my primary development over the past 6 years has
been on OS X on the desktop (my core business is mobile development: iOS, Android,
and newly Windows Phone). Over the past two years, my work with FFmpeg has been primarily on the OS X
platform. 

During that time, I have run into a number of issues about which I’ve appealed to this mailing list, and one
of the things which various list members (some of whom were, as I understand it, Libav devs) raised was
that most (if not all) of the Libav devs neither used nor had access to OS X, and so could not speak to or
support problems which manifested on OS X; I strongly suspect platform issues could have influenced
some of the problems experienced. In one case, someone even encouraged me to simply “change
platforms”, which of course cannot always be done, especially if you are serving clients who have hired
you to make things work on that specific platform. 

This is not the best situation, and it is a dubious label for Libav to be declared working and/or
supported on OS X if there are no devs using OS X or supporting it. I have generally found answers to most of
the issues I have had to work through, but finding those answers has been a slow and tedious
process. My purpose here is to poll the list members: would anyone find any value at all in an
OS X / Cocoa / Swift (and possibly iOS) wrapper for Libav? 

Perhaps I’m the only one on the planet using Libav on Apple platforms, though I’m betting I’m not
(actually I know I’m not, as a few have contacted me off-list). It might also take some of the
support-question headache away from the Libav devs who don’t use Apple platforms. For those of us who like
apples, a nice, clean Swift API might be very pleasant, and save a lot of time and headaches. I might be able to
produce such a thing, first somewhat limited and rudimentary, and then mature it over time. 

Would anyone be interested in such an API? 

Thank you in advance for your replies. 

Cheers, 

Brad

Brad O'Hearne
Founder/Lead Developer
Big Hill Software LLC

Max Vlasov | 21 Jan 12:31 2015

Audio sample rate conversion and fixed frame size

Hi,

When sample rate conversion is needed, one can face another problem. Some codecs report that they don't have the CODEC_CAP_VARIABLE_FRAME_SIZE capability and so accept only buffers of one fixed frame size, with an exception for the last block. I tried to find a working example, but the ones I found often lack full support for such cases. Obviously one may cache the converter's results and output them in blocks of the necessary size, but this involves maintaining several buffers for planar data and knowing the exact format.

In my own experiment I tried the following:
1) For the required dest_frames, use the infamous formula
  av_rescale_rnd(swr_get_delay(...) + inp_frames,...
trying to detect the number of input frames (inp_frames here, giving src_frames) necessary to produce at least dest_frames, either with binary search or just with an incrementing loop.
2) Call swr_convert with src_frames as the number of input frames, but pass exactly dest_frames as the output size. The converter then has to cache some input samples, since I limited the output buffer.

The approach worked at least for some cases, but I faced these problems:
- I have to use a much larger dest_frames buffer (currently twice as large); otherwise swr_convert sometimes reports producing several samples fewer than I requested (1535 instead of 1536). I suspect this is because swr_get_delay is approximate in most cases. The question is whether I should keep this multiplier (x2) fixed or use some other approach for the detection.
- I cannot figure out how to correctly get the last cached frames from the converter without violating the rule for frame_size (it should be the same size at every step except the last one). When I feed the last input bytes, I probably already get non-standard output, so another call with NULL and 0 as input will produce an extra non-standard block.

Any help will be appreciated

Thanks,

Max