k.savkov | 5 Feb 15:53 2016

Zero PTS in decoded subtitles

Hi.
I'm currently working on subtitle decoding and facing a problem with 
timestamps. When I use the AVCodecContext obtained while demuxing from 
AVFormatContext->streams->codec, everything works fine, but when I try 
to decode subtitles with an AVCodecContext allocated by 
avcodec_alloc_context3, I get subtitles with zeroes in PTS, start_time 
and end_time. Here is an example:

AVCodecContext from AVFormatContext:
PTS: 4700000
ASS: Dialogue: 0,0:00:04.70,0:00:06.74,Default,,0,0,0,,where are you going so early?

AVCodecContext allocated by me:
PTS: 0
ASS: Dialogue: 0,0:00:00.00,0:00:00.00,Default,,0,0,0,,where are you going so early?

In the project I'm working on, it's preferred for packets to be decoded 
somewhere other than in the demuxer, so what is the right way to open a 
codec context for subtitles?
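
For reference, a minimal sketch of opening a separate decoder context while carrying the stream's parameters and time base over (assumptions: a libavcodec recent enough to have AVStream.codecpar and avcodec_parameters_to_context(); on older releases avcodec_copy_context() and av_codec_set_pkt_timebase() play the same roles; names below are mine):

static AVCodecContext *open_subtitle_decoder(AVFormatContext *fmt_ctx, int idx)
{
    AVStream *st = fmt_ctx->streams[idx];
    const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3(dec);
    if (!ctx)
        return NULL;

    /* Copy extradata, codec id, etc. from the demuxed stream. */
    if (avcodec_parameters_to_context(ctx, st->codecpar) < 0)
        return NULL;

    /* A freshly allocated context knows nothing about the stream time base,
       so decoded subtitles come back with zero pts/start/end times. */
    ctx->pkt_timebase = st->time_base;
    ctx->time_base    = st->time_base;

    if (avcodec_open2(ctx, dec, NULL) < 0)
        return NULL;
    return ctx;
}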

With best regards, Kirill.
Ratin | 2 Feb 23:30 2016

Questionable libav code

libavcodec has code like this:

static AVPacket *add_to_pktbuf(AVPacketList **packet_buffer, AVPacket *pkt,
                               AVPacketList **plast_pktl)
{
    AVPacketList *pktl = av_mallocz(sizeof(AVPacketList));
    if (!pktl)
        return NULL;

    if (*packet_buffer)
        (*plast_pktl)->next = pktl;
    else
        *packet_buffer = pktl;

    /* Add the packet in the buffered packet list. */
    *plast_pktl = pktl;
    pktl->pkt   = *pkt; <===========================
    return &pktl->pkt;
}


Here a struct is copied over by assignment. Is that guaranteed to always work the way it was intended, given that AVPacket is a large struct whose data buffers are malloc'd? I was always taught to avoid such struct assignment operations. What do people think?
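
For illustration, a tiny standalone example (not FFmpeg code) of what such an assignment actually does: every member is copied, including the pointers, so both copies reference the same malloc'd buffer, which is well-defined C but leaves ownership to be tracked by the caller:

#include <stdint.h>
#include <stdlib.h>

struct buf {
    uint8_t *data;
    int      size;
};

int main(void)
{
    struct buf a = { malloc(16), 16 };
    struct buf b = a;   /* member-wise copy: b.data == a.data, b.size == a.size */

    /* Both structs now refer to the same 16 bytes; freeing through both
       would be a double free, so exactly one owner must call free(). */
    free(a.data);
    return 0;
}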

Ratin
Shupeng Lai | 1 Feb 19:43 2016

How can I pick the correct format when storing an incoming stream?

I want to re-mux an incoming H.264 stream, but how do I pick the correct AVOutputFormat for the AVFormatContext?

Currently I use:

AVOutputFormat *fmt = av_guess_format(NULL, "xxx.avi", NULL);

// Open the context
outFormatCtx = ffmpeg::avformat_alloc_context();

// Set the output format
outFormatCtx->oformat = fmt;

And everything works fine.
However, if I change the first line to
av_guess_format("h264", NULL, NULL);
the recorded stream cannot be played because of a bad header/trailer.

Is there a smarter way of picking the correct AVOutputFormat?
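
For what it's worth, a sketch of one alternative: avformat_alloc_output_context2() guesses the muxer from the file name (or takes an explicit short name) and sets oformat in one step. Note that, as far as I know, the short name "h264" selects the raw bitstream muxer, which writes no container header or trailer, which would explain the unplayable file:

AVFormatContext *outFormatCtx = NULL;

/* Guess the container from the output file name ... */
int ret = avformat_alloc_output_context2(&outFormatCtx, NULL, NULL, "xxx.avi");

/* ... or force a specific container by its short name. */
if (ret < 0 || !outFormatCtx)
    ret = avformat_alloc_output_context2(&outFormatCtx, NULL, "avi", NULL);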
Liam E-P | 1 Feb 07:12 2016

More complex watermarking filters using DCTs etc.

Hi all,

There has been lots of research on using the discrete cosine transform (DCT) values of blocks [1] for embedding information.

I'm wondering if there's any source code out there for these slightly more complex and powerful watermarking filters?
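
For illustration only (this is not an FFmpeg filter, and every name in it is made up for the example), a sketch of the core operation such filters perform: take the 2D DCT of an 8x8 block and force one mid-frequency coefficient onto an even or odd quantization level depending on the bit being embedded:

#include <math.h>
#include <stdint.h>

#define N 8

/* Plain (slow) 2D DCT-II of one 8x8 block, JPEG-style scaling. */
static void dct8x8(const uint8_t in[N][N], double out[N][N])
{
    for (int u = 0; u < N; u++)
        for (int v = 0; v < N; v++) {
            double s = 0.0;
            for (int x = 0; x < N; x++)
                for (int y = 0; y < N; y++)
                    s += (in[x][y] - 128) *
                         cos((2 * x + 1) * u * M_PI / (2.0 * N)) *
                         cos((2 * y + 1) * v * M_PI / (2.0 * N));
            double cu = u ? 1.0 : 0.70710678118654752;
            double cv = v ? 1.0 : 0.70710678118654752;
            out[u][v] = 0.25 * cu * cv * s;
        }
}

/* Quantize coefficient (4,3) to an even or odd multiple of `step`
   according to the bit; the detector re-quantizes and reads the parity. */
static void embed_bit(double coeffs[N][N], int bit, double step)
{
    long q = lround(coeffs[4][3] / step);
    if ((q & 1) != (bit & 1))
        q += 1;
    coeffs[4][3] = q * step;
}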

Cheers,
Liam Edwards-Playne.

Kiran | 1 Feb 11:46 2016

How to create two output streams

Hi,

I am using libav in my app to show an RTSP feed, but I would like to 
"restream" this same input stream to the cloud. Is there a good example 
of using the same input stream for two output streams?
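
A minimal sketch of the usual fan-out pattern, assuming two hypothetical output contexts out_local and out_cloud whose streams mirror the input and whose headers have already been written (error handling trimmed):

AVPacket pkt, copy;

while (av_read_frame(in_ctx, &pkt) >= 0) {
    AVRational in_tb = in_ctx->streams[pkt.stream_index]->time_base;

    /* Take a second reference to the same payload for the second muxer. */
    if (av_packet_ref(&copy, &pkt) < 0) {
        av_packet_unref(&pkt);
        break;
    }

    av_packet_rescale_ts(&pkt, in_tb,
                         out_local->streams[pkt.stream_index]->time_base);
    av_interleaved_write_frame(out_local, &pkt);   /* consumes this reference */

    av_packet_rescale_ts(&copy, in_tb,
                         out_cloud->streams[copy.stream_index]->time_base);
    av_interleaved_write_frame(out_cloud, &copy);  /* consumes this reference */
}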

Kiran G
Arthur Muller | 29 Jan 21:14 2016

Help with sws_scale

Hello,

I'm trying to convert an RGB image to YUV to generate a video. My image is sized 640 x 480. These are my steps:

unsigned char rgbdata[3*640*380]; // contains my rgb data
uint8_t *inData[1] = { rgbdata };
int inLinesize[1]  = { 3*640 };
AVFrame *frame;

frame = av_frame_alloc();
frame->format = AV_PIX_FMT_YUV420P;
frame->width  = 640;
frame->height = 480;
av_frame_get_buffer(frame, 32);

sws_ctx = sws_getContext(640, 480, AV_PIX_FMT_RGB24, 640, 480, AV_PIX_FMT_YUV420P, SWS_FAST_BILINEAR, 0, 0, 0);
sws_scale(sws_ctx, inData, inLinesize, 0, 480, frame->data, frame->linesize);

I do have checks after allocating the frame and getting the context, but I removed them here for clarity. The program crashes in sws_scale.

Can anybody point me in the right direction?

Thanks.

-Arthur
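
One detail that stands out in the snippet above: the RGB buffer is declared with 3*640*380 bytes, while the conversion reads 480 rows of a 640-wide RGB24 image (3*640*480 bytes), so sws_scale would read past the end of the buffer, which is consistent with the crash. A sketch of the same conversion with the buffer sized for the full frame (error checks trimmed):

#include <libavutil/frame.h>
#include <libswscale/swscale.h>

static AVFrame *rgb_to_yuv420(const uint8_t *rgbdata)  /* 3*640*480 bytes of RGB24 */
{
    const int w = 640, h = 480;
    const uint8_t *const in_data[1] = { rgbdata };
    const int in_linesize[1]        = { 3 * w };       /* bytes per RGB24 row */

    AVFrame *frame = av_frame_alloc();
    frame->format = AV_PIX_FMT_YUV420P;
    frame->width  = w;
    frame->height = h;
    av_frame_get_buffer(frame, 32);                    /* allocates frame->data */

    struct SwsContext *sws_ctx =
        sws_getContext(w, h, AV_PIX_FMT_RGB24,
                       w, h, AV_PIX_FMT_YUV420P,
                       SWS_FAST_BILINEAR, NULL, NULL, NULL);
    sws_scale(sws_ctx, in_data, in_linesize, 0, h,
              frame->data, frame->linesize);
    sws_freeContext(sws_ctx);
    return frame;
}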

Arthur Muller | 28 Jan 22:45 2016

How to convert "-q:v 1" to API form

Hello,

I am using the API version of ffmpeg. I have a sequence of 30 PNG files and want to generate an MP4 slide show with no loss (if possible).

On the command line, I use

ffmpeg_g -framerate 10 -start_number 1 -i image%02d.png -c:v mpeg4 -q:v 1 file.mp4

and the resulting file has good quality. Using the API version of ffmpeg I've managed to generate the MP4 file by first converting the PNG files to YUV format. That was not a problem. But I haven't figured out what I have to do in my code to mimic the behavior of "-q:v 1" and get the better quality.

Can anybody help?

Thanks.

-Arthur
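
For reference, my understanding is that "-q:v 1" asks for a fixed quantizer, which on the API side maps to the QSCALE flag plus global_quality on the encoder context, roughly as in the fragment below (enc_ctx stands for the AVCodecContext opened for AV_CODEC_ID_MPEG4; older headers spell the flag CODEC_FLAG_QSCALE):

enc_ctx->flags         |= AV_CODEC_FLAG_QSCALE;   /* constant-quality mode */
enc_ctx->global_quality = FF_QP2LAMBDA * 1;       /* qscale 1, i.e. -q:v 1 */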


Build errors with OpenH264

Hi guys,

I’ve been using the FFmpeg libs with OpenH264 without issues.
I just wanted to rebuild the FFmpeg libs with the latest stable version (2.8.5) and I’m now seeing build errors.
I also tried with the latest snapshot.

It appears that the FFmpeg wrapper doesn’t work properly with the current version of the OpenH264 library.
Is there a place where I can file a bug and what info will be needed?

Attached is the log (the essential part, the whole build log is very long).

Thanks and best regards.


CC libavcodec/latm_parser.o
CC libavcodec/lcldec.o
CC libavcodec/lclenc.o
CC libavcodec/libopenh264enc.o
CC libavcodec/ljpegenc.o
CC libavcodec/loco.o
libavcodec/libopenh264enc.c:49:81: error: use of undeclared identifier 'SM_AUTO_SLICE'; did you mean 'SM_RASTER_SLICE'?
    { "slice_mode", "Slice mode", OFFSET(slice_mode), AV_OPT_TYPE_INT, { .i64 = SM_AUTO_SLICE }, SM_SINGLE_SLICE, SM_RESERVED, VE, "slice_mode" },
                                                                                ^~~~~~~~~~~~~
                                                                                SM_RASTER_SLICE
/usr/local/include/wels/codec_app_def.h:331:3: note: 'SM_RASTER_SLICE' declared here
  SM_RASTER_SLICE         = 2, ///< | according to SlicesAssign    | need input of MB numbers each slice. In addition, if other constraint in SSliceArgument is presented, need to f...
  ^
libavcodec/libopenh264enc.c:51:83: error: use of undeclared identifier 'SM_ROWMB_SLICE'; did you mean 'SM_RASTER_SLICE'?
    { "rowmb", "One slice per row of macroblocks", 0, AV_OPT_TYPE_CONST, { .i64 = SM_ROWMB_SLICE }, 0, 0, VE, "slice_mode" },
                                                                                  ^~~~~~~~~~~~~~
                                                                                  SM_RASTER_SLICE
/usr/local/include/wels/codec_app_def.h:331:3: note: 'SM_RASTER_SLICE' declared here
  SM_RASTER_SLICE         = 2, ///< | according to SlicesAssign    | need input of MB numbers each slice. In addition, if other constraint in SSliceArgument is presented, need to f...
  ^
libavcodec/libopenh264enc.c:52:107: error: use of undeclared identifier 'SM_AUTO_SLICE'; did you mean 'SM_RASTER_SLICE'?
    { "auto", "Automatic number of slices according to number of threads", 0, AV_OPT_TYPE_CONST, { .i64 = SM_AUTO_SLICE }, 0, 0, VE, "slice_mode" },
                                                                                                          ^~~~~~~~~~~~~
                                                                                                          SM_RASTER_SLICE
/usr/local/include/wels/codec_app_def.h:331:3: note: 'SM_RASTER_SLICE' declared here
  SM_RASTER_SLICE         = 2, ///< | according to SlicesAssign    | need input of MB numbers each slice. In addition, if other constraint in SSliceArgument is presented, need to f...
  ^
libavcodec/libopenh264enc.c:132:29: error: no member named 'sSliceCfg' in 'SSpatialLayerConfig'
    param.sSpatialLayers[0].sSliceCfg.uiSliceMode               = s->slice_mode;
    ~~~~~~~~~~~~~~~~~~~~~~~ ^
libavcodec/libopenh264enc.c:133:29: error: no member named 'sSliceCfg' in 'SSpatialLayerConfig'
    param.sSpatialLayers[0].sSliceCfg.sSliceArgument.uiSliceNum = avctx->slices;
    ~~~~~~~~~~~~~~~~~~~~~~~ ^
5 errors generated.
make: *** [libavcodec/libopenh264enc.o] Error 1
make: *** Waiting for unfinished jobs....

Sunny Shukla | 27 Jan 16:40 2016

Not able to compile the same code when used with gstreamer

Hi guys,

I made an encoder application using the ffmpeg libs and it worked.

I then plugged the same application into my gstreamer encoder plugin.

I linked the required libs in my gstreamer source under src/Makefile.am as below:

libgstmyencoder_la_CFLAGS = $(GST_CFLAGS) -I/usr/local/include
libgstmyencoder_la_LIBADD = $(GST_LIBS) -lm -L/usr/local/lib/ -lavdevice -lavformat -lavfilter -lavcodec -lswresample -lswscale -lavutil

I'm getting an error from the code below inside my gstreamer plugin:

codec = avcodec_find_encoder(AV_CODEC_ID_H264);
if (!codec) {
    g_print("H264 Codec not found\n");
    return 1;
}

Error:

$ gst-launch-1.0 -e filesrc location=/home/sunny/Videos/rawvideo_yuv420.yuv blocksize=460800 ! video/x-raw,format=I420,width=640,height=480,framerate=30/1 ! myh264encoder silent=true  ! videoparse width=640 height=480 framerate=30/1 !  autovideoconvert ! autovideosink
gst_myencoder_class_init....................
gst_myencoder_init....................
gst_myencoder_set_property....................
Setting pipeline to PAUSED ...
libva info: VA-API version 0.35.0
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: va_openDriver() returns -1
libva info: VA-API version 0.35.0
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: va_openDriver() returns -1
libva info: VA-API version 0.35.0
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: va_openDriver() returns -1
gst_myencoder_open....................
H264 Codec not found
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstMyEncoder:myencoder0: Could not initialize supporting library.
Additional debug info:
gstvideoencoder.c(1428): gst_video_encoder_change_state (): /GstPipeline:pipeline0/GstMyEncoder:myencoder0:
Failed to open encoder
Setting pipeline to NULL ...
Freeing pipeline ...

------------------------------------------------------------------------------------------------------------------

I found that avcodec_find_encoder is a function and is available in the following shared objects:
$ grep -nrw "AV_CODEC_ID_H264" . --include=*.so.*
Binary file ./libavcodec/libavcodec.so.57 matches
Binary file ./libavdevice/libavdevice.so.57 matches
Binary file ./libavformat/libavformat.so.57 matches
Binary file ./libavfilter/libavfilter.so.6 matches

Can anyone please suggest what I am missing? I can't figure out what I've missed.
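
In case it helps, one common cause with FFmpeg releases of this era is that the codec registry is never initialized inside the plugin (the standalone app presumably called it), or that the plugin ends up linked against a libavcodec build without an H.264 encoder such as libx264. A minimal sketch of the registration step, with my_encoder_plugin_init as a hypothetical name for the plugin's init hook:

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

static void my_encoder_plugin_init(void)
{
    av_register_all();        /* registers muxers/demuxers and pulls in codec registration */
    avcodec_register_all();   /* explicit codec registration; a no-op if already done */
}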


--
Sunny Shukla
pratik | 27 Jan 10:57 2016

ffmpeg for creating video from images

Hi,

I am facing a problem creating a video from images with audio, with each image shown for 5 seconds.

The first/last image is skipped and does not stay for 5 seconds.

Can you please let me know how you can help me and what you need?

Regards,

Pratik