icecast protocol

Hi to all!
The new version of FFmpeg (2.4.1) adds a new feature: the Icecast protocol, with an output stream URL of the form:
icecast://[<username>[:<password>]@]<server>:<port>/<mountpoint>

Does anyone have a working example with this protocol (without oggfwd)?
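For reference, from the protocol documentation I would expect an invocation along these lines (untested; the credentials, host, and mountpoint are placeholders):

```shell
# Re-encode an input file to Ogg/Vorbis and push it to an Icecast
# mountpoint; "source", "hackme", and the host are placeholder values.
ffmpeg -re -i input.mp3 -c:a libvorbis -f ogg \
  icecast://source:hackme@example.com:8000/stream.ogg
```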

Sincerely, 
Strekalovskiy Alex.
_______________________________________________
Libav-user mailing list
Libav-user@...
http://ffmpeg.org/mailman/listinfo/libav-user
Krishna | 1 Oct 22:09 2014

Return frames in decode order

Hi, 

I wanted to know: is it possible for libavcodec to return frames in decode order instead of display order? Is there any flag that enables this?

I am using av_read_frame()  and avcodec_decode_video2() to read a video frame-by-frame. 

Thank you! 

--
Regards,
Krishna
Alberto Martín Fernandez | 1 Oct 17:48 2014

Overlay filter (add Watermark to video)

Hi;

I am using ffmpeg 1.0.9 & SDL 1.2, and I am trying to add a watermark.
I started implementing my media player based on the ffplay example, adding functionality, buttons...

I have to implement an overlay filter, so I do:

FILTER 1:(Video base) INPUT
   avfilter_graph_create_filter(&filt_src, avfilter_get_by_name("buffer"), "in", buffersrc_args,NULL,graph);

FILTER 2: (overlay filter, watermark(png)) INPUT
   avfilter_graph_create_filter(&bufferovrlay_ctx,avfilter_get_by_name("overlay"), "overlay",  argsStrOvrlay, NULL, graph);

FILTER 3: (buffersink filter) OUTPUT
    avfilter_graph_create_filter(&filt_out, avfilter_get_by_name("ffbuffersink"), "out", NULL, buffersink_params, graph);

Then I call:
avfilter_graph_parse(graph, "buffer=video_size=640x360:pix_fmt=0:time_base=1001/3000:pixel_aspect=0/1 [in_1]; buffer=video_size=512x128:pix_fmt=2:time_base=1/1:pixel_aspect=0 [in_2]; [in_1] [in_2] overlay=0:0 [out]; [out] buffersink", inputs, &outputs, NULL);

Then I call avfilter_graph_config(graph, NULL);
and it returns -22 (AVERROR(EINVAL)).

I have some questions:

What are the correct parameters to set on an overlay filter (argsStrOvrlay)?
I set sprintf_s(argsStrOvrlay, sizeof(argsStrOvrlay), "video_size=%dx%d:pix_fmt=%d:time_base=1/1:pixel_aspect=0", pCodecCtxImage->width, pCodecCtxImage->height, pCodecCtxImage->pix_fmt);

I am not sure if I need to link the filters using avfilter_link().

Does anyone have an example of adding a watermark image to a video using the overlay filter?
Selmeci, Tamás | 1 Oct 14:26 2014

MPEG-TS continuity counter

Hello all!

My program creates multiple small MPEG-TS files. The output file
contains three AVStreams:
- one for the video;
- one for the audio;
- one for user-specific data;

User-specific data:
I've found a way to put this data into a custom PID, and my MPEG-TS
analyzer shows that the continuity counter of the PID increases as it
should. The problem is that when a new MPEG-TS file is created, the
counter resets.

Can I somehow tell the MPEG-TS multiplexer the desired starting continuity
counter value for my specific PID? (It can obviously be done by hacking,
but I'd prefer the clean path first.)

Thanks, regards,
--
Selmeci, Tamás
http://www.open-st.eu/

Wolfgang Lorenz | 30 Sep 11:06 2014

Private streams in MPEG-TS

Hello list readers,

I'm trying to interleave some continuous private data into an MPEG-TS
container alongside some audio data. As far as I know, MPEG-TS
supports private streams, but I cannot get it to work cleanly.

I've attached a little working example to show what I've got so far
and what problems I've encountered.

The example file works like this:
* Muxing:
  * Open a format context for the MPEG-TS container.
  * Add a stream with codec-type AVMEDIA_TYPE_DATA.
  * Open file, write header.
  * Fill packets with 1024 bytes of random data (either only digits '0'
    to '9', or full bytes 0 - 255), and write them to the container.
  * Write trailer, close file, free format context.
* Demuxing:
  * Open format context and file.
  * Find and write out stream info.
  * Read packets and write out some info.
  * Free format context.

Results, when running the example:
* With digit-only data ('0' - '9'):
  * Stream's codec-type is AVMEDIA_TYPE_UNKNOWN
  * Packet sizes are the correct 1024 bytes, as generated while muxing.
* Using full bytes:
  * Stream's codec-type is AVMEDIA_TYPE_AUDIO
  * I guess FFmpeg tries to decode the non-audio data using either mp3
    (FFmpeg 2.2.3) or aac (FFmpeg 2.4.1).
  * Packet sizes are not preserved. :-(

I can work around these problems by ignoring all warning and error
messages, as well as the codec types. I can just use a dedicated stream
ID to identify the data stream. I can put the size of each packet at the
beginning of each packet to demangle the packet data. But I'd rather
have a cleaner solution, where the codec type is set correctly and
FFmpeg does not try to decode my data.

So... What have I done wrong? What is the correct way to achieve this?

Thanks,
  Wolfgang
Attachment (private_stream.c): text/x-c++src, 3670 bytes
Peter Pan | 30 Sep 09:54 2014

how to convert NV21 to YUV420P with swscale

Hello

I'm trying to convert an NV21 image to YUV420P
so the x264 encoder can encode it.

my code is:

static struct SwsContext *convertCtx = NULL;

static int scale(char *lin, x264_picture_t *pic,
                 int in_width, int in_height, int out_width, int out_height)
{
  if (convertCtx == NULL) {
    convertCtx = sws_getContext(in_width, in_height, PIX_FMT_NV21,
                                out_width, out_height, PIX_FMT_YUV420P,
                                SWS_FAST_BILINEAR, NULL, NULL, NULL);
  }

  AVPicture src;
  avpicture_fill(&src, (uint8_t *)lin, PIX_FMT_NV21, in_width, in_height);
  int h = sws_scale(convertCtx,
                    (const uint8_t **)(&lin), src.linesize,
                    0, in_height,
                    (uint8_t *const *)(pic->img.plane), pic->img.i_stride);
  return h;
}

The images are recorded from the Android camera with code like this:
params = mCamera.getParameters();
params.setPreviewFormat(ImageFormat.NV21);
mCamera.setParameters(params);

And I get the video buffer from the onPreviewFrame callback.

The code works well if the source image is PIX_FMT_YUYV422,
with the Android camera set to
params.setPreviewFormat(ImageFormat.YUY2);

But when I try to convert NV21 images on some other devices,
the image breaks: it looks as if the purple and green parts
of the picture are separated.

Please help me with this conversion code. Thank you.

Thanks,
Peter
Marcus Johnson | 30 Sep 02:31 2014

(no subject)

I'm trying to decode audio using libavformat and libavcodec. It's working fine so far, but I'm having trouble getting the decoded buffers into a single array that holds all the samples. I read that there's a function called avpicture_layout(), and that's basically exactly what I need, except it's for AVPicture, and I need it for AVFrame. What do I do? I see nothing like that in the docs.
Gregory King | 29 Sep 13:53 2014

Achieving lowest possible latency RTMP stream - Client configuration

I have node-rtsp-rtmp-server (https://github.com/iizukanao/node-rtsp-rtmp-server) serving up an
RTMP stream nicely, and I can configure Strobe Media Playback to play with very little latency (a fraction of a
second) using a setup like this (https://gist.github.com/iizukanao/de7f3c1200c1513f159e).

However, whenever I use ffmpeg I run into more buffering than I need for my application. I happen to be
using kxmovie on iOS as the client, and I've recompiled ffmpeg with the default RTMP buffer set to 0 instead of
3000, but this appears to make no difference (though my server now acknowledges that the buffer is being set
to 0).

So I'm really looking for some pointers as to which parameters I need to tweak in ffmpeg to achieve low
latency similar to what Strobe Media Playback achieves. Any hints?

Thanks,
G
Mert Gedik | 28 Sep 23:18 2014

VideoToolbox.framework in iOS8

Hello everyone,

FFmpeg supports hardware decoding/encoding on OS X via VideoToolbox.framework. Has anyone tried to compile and test this with the iOS 8 SDK?

Thanks,

- MG
Pradeep Karosiya | 26 Sep 13:54 2014

Repeating frame while encoding using ffmpeg api

Hi,

I'm trying to create a small screen-capture application using the ffmpeg
encoding API.
The application mostly works fine when I encode each frame at the
specified frame rate.
However, I would like to change the above approach. If there is no
change on screen (the frames are exactly the same), I don't want to encode the
duplicate frames; I want ffmpeg to keep the last displayed frame until
something changes on screen. I'm doing the following:
1. Check the difference between the previous and current frame; if the difference == 0,
skip the frame, else pass the frame to the encoder.
2. In add_frame_encoder, encode the previous screen with the previous
timestamp and set the extra delay between the current and previous
timestamps in the repeat_pict field of the encoded frame:
frame->repeat_pict = 2 * fps * extra_delay
where extra_delay is the duration for which ffmpeg should display the frame.

However, the above approach is not working. Even though the video duration
is correct, when I play the video the frame rate is reduced, and I don't
see the repeat_pict option working.
For example, I encode a 30-second video at 10 fps containing many
duplicate frames. The final number of frames matches the frames that were
added for encoding, but I don't see any frame repeated by ffmpeg, and the
final fps is reduced to match the video duration.
Please suggest what could be going wrong.

Regards,
Pradeep

sbwn | 24 Sep 10:42 2014

[libavcodec][C] convert audio files

Hello,
I'm trying to make a C++ program that converts music files, but it doesn't work.

main.cpp : http://pastebin.com/kTN45Fn1
functions.cpp : http://pastebin.com/aRwdHxy7
Makefile : http://pastebin.com/BzBEBicH
Program output :
[mp3 @ 0x21eb200] max_analyze_duration reached
Input #0, mp3, from 'test.mp3':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: isommp42
    creation_time   : 2013-11-29 03:30:21
    encoder         : Lavf53.21.1
  Duration: 00:03:58.15, start: 0.000000, bitrate: 192 kb/s
    Stream #0.0: Audio: mp3, 44100 Hz, stereo, s16, 192 kb/s
Output #0, mp4, to 'out.mp4':
    Stream #0.0: Audio: libvorbis, 44100 Hz, stereo, s16, 192 kb/s
[mp4 @ 0x21ee5c0] Codec for stream 0 does not use global headers but container format requires global headers
audio encode error-22 Invalid argument
 
avcodec_encode_audio2() returns -22, and the error string is "Invalid argument"; I don't know any more about the error.

Thank you for helping me.