Xiang, Haihao | 1 Apr 02:54 2015

Re: vaapi h.264 decoding with libav


You should use a player with VAAPI enabled.

e.g.  MPV + ffmpeg/libav

$> mpv --hwdec=vaapi --vo=vaapi <video file>

You will see the message 'Trying to use hardware decoding' in the log,
and CPU usage will drop a lot when VAAPI is in use.
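
As a minimal sketch of checking the same thing from your own libav code
(an illustration, not part of the original reply): install a get_format
callback and log which pixel formats the decoder offers. AV_PIX_FMT_VAAPI_VLD
is assumed to be the VAAPI surface format in this ffmpeg generation; the
vaapi_context/hwaccel setup that a real VAAPI decode path needs is omitted.

#include <libavcodec/avcodec.h>

static enum AVPixelFormat check_hw_format(AVCodecContext *ctx,
                                          const enum AVPixelFormat *fmts)
{
    const enum AVPixelFormat *p;
    for (p = fmts; *p != AV_PIX_FMT_NONE; p++) {
        if (*p == AV_PIX_FMT_VAAPI_VLD) {
            av_log(ctx, AV_LOG_INFO, "VAAPI hardware decoding is available\n");
            /* return *p here only once ctx->hwaccel_context has been set up */
        }
    }
    /* otherwise fall back to the default (software) format selection */
    return avcodec_default_get_format(ctx, fmts);
}

/* usage, before avcodec_open2():  codec_ctx->get_format = check_hw_format; */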

Thanks
Haihao

> Hello!
> 
> I am using ffmpeg/libav 2.5.3 on Haswell 4770 (HD 4600).
> I am using libav to decode an h.264 video stream.
> My ffmpeg build supports vaapi hardware decoder.
> 
> ...
> AVCodec *pCodec = FindDecoder(AV_CODEC_ID_H264);
> ...
> 
> How can I check whether I am using the hardware decoder or not?
> 
> P.S. 
> I have several ffmpeg builds with and without vaapi, but I don't see any difference in performance. Maybe the CPU is just very powerful?
> I am decoding two separate 1080p yuv420 streams in real time.
> 

Tanim Islam | 31 Mar 18:59 2015

Re: ffpreset

On Tuesday, March 31, 2015, Clément Champetier <cnt-NLaYl5Zsib1jbfGpGd42Mg@public.gmane.org> wrote:

Hello,

I work on an open-source project, AvTranscoder (https://github.com/mikrosimage/avTranscoder), which aims to provide a high-level API on top of ffmpeg in C++, Java and Python.

To launch encodes easily, we created preset files with the same idea as ffpreset: text files with a list of key/value pairs that set several encoder options before processing.
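
As a minimal sketch of what such a loader can look like on top of the public
libav API (an illustration only, not AvTranscoder code and not an existing
ffmpeg function): read key=value lines and apply them to the encoder context
with av_opt_set().

#include <stdio.h>
#include <string.h>
#include <libavutil/opt.h>
#include <libavcodec/avcodec.h>

static int apply_preset(AVCodecContext *enc, const char *path)
{
    char line[1024];
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    while (fgets(line, sizeof(line), f)) {
        char *eq;
        line[strcspn(line, "\r\n")] = '\0';       /* strip the newline      */
        if (line[0] == '#' || line[0] == '\0')    /* skip comments / blanks */
            continue;
        eq = strchr(line, '=');
        if (!eq)
            continue;
        *eq = '\0';
        /* search child objects too, so private encoder options are found */
        if (av_opt_set(enc, line, eq + 1, AV_OPT_SEARCH_CHILDREN) < 0)
            fprintf(stderr, "unknown or invalid option: %s\n", line);
    }
    fclose(f);
    return 0;
}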

I saw some useful commands in the ffmpeg project to manage preset files, and we would like to use the same kind of mechanism to load presets.

Unfortunately, no functions are exposed in your library to manage the presets.
What is your position on this? Is it on your roadmap to give developers the ability to manipulate ffpreset files from outside?

Best regards,

Clement

How do you guys differ from the Handbrake project? 


--
Sent from Gmail Mobile
Georgi Rosenov Stefanov | 31 Mar 09:02 2015

SDP from memory buffer

Hello all,

I am wondering what the best way (or just "THE WAY") is to open a stream from an SDP description stored in a memory buffer?
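
One approach the public API allows (a sketch under the assumption that the
"sdp" demuxer reads the session description from the AVFormatContext's pb,
with error handling reduced to the essentials): wrap the in-memory SDP text
in a custom AVIOContext and open it with the sdp input format; the demuxer
then opens the RTP sockets described in it as usual.

#include <string.h>
#include <libavformat/avformat.h>

struct sdp_buf { const char *data; size_t size, pos; };

static int sdp_read(void *opaque, uint8_t *buf, int buf_size)
{
    struct sdp_buf *s = opaque;
    size_t remaining = s->size - s->pos;
    int n = (size_t)buf_size < remaining ? buf_size : (int)remaining;
    if (n <= 0)
        return AVERROR_EOF;
    memcpy(buf, s->data + s->pos, n);
    s->pos += n;
    return n;
}

AVFormatContext *open_sdp_from_memory(const char *sdp_text)
{
    /* the SDP text is consumed entirely inside avformat_open_input(),
       so a stack-local source structure is enough for this sketch */
    struct sdp_buf src = { sdp_text, strlen(sdp_text), 0 };
    unsigned char *iobuf = av_malloc(4096);
    AVFormatContext *ctx = avformat_alloc_context();
    AVIOContext *pb = avio_alloc_context(iobuf, 4096, 0, &src,
                                         sdp_read, NULL, NULL);

    ctx->pb = pb;
    if (avformat_open_input(&ctx, "memory.sdp",
                            av_find_input_format("sdp"), NULL) < 0)
        return NULL;   /* real code should also free pb, iobuf and ctx here */
    return ctx;
}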
Kamlesh Mishra | 30 Mar 15:00 2015

Error on AVFORMAT_OPEN_INPUT with TS files

Hello All
Using the ffmpeg libraries, I am trying to read a "ts" file with the avformat_open_input() API.
The call fails with the error "cannot find stream information".

Please find attached the command-line output of ffprobe on the TS file (file 2.log).
I would appreciate any support on using the ffmpeg API with TS files.
The sample TS files are taken from "http://www.w6rz.net/".
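
A common thing to try with transport streams (a suggestion, not a confirmed
fix for this particular file): raise the probing limits before opening the
file, since MPEG-TS sometimes needs more data than the defaults to detect all
streams; "cannot find stream information" normally corresponds to
avformat_find_stream_info() failing rather than avformat_open_input() itself.

#include <libavformat/avformat.h>
#include <libavutil/dict.h>

int open_ts(AVFormatContext **fmt_ctx, const char *path)
{
    AVDictionary *opts = NULL;
    int ret;

    av_dict_set(&opts, "probesize",       "5000000", 0);  /* bytes to probe */
    av_dict_set(&opts, "analyzeduration", "5000000", 0);  /* microseconds   */

    ret = avformat_open_input(fmt_ctx, path, NULL, &opts);
    av_dict_free(&opts);
    if (ret < 0)
        return ret;

    /* this is the call that usually fails with "stream information" errors */
    return avformat_find_stream_info(*fmt_ctx, NULL);
}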


Thanks
Kamlesh Mishra




Disclaimer:
This e-mail may contain confidential information and is intended only for the person to whom it is addressed. If you are not the intended recipient you may not disclose, distribute or copy this document in any manner whatsoever.
Prime Focus does not accept any liability for damage, loss or expense arising from this e-mail or from accessing its attachments.

Attachment (2.log): text/x-log, 122 KiB
Georgi Rosenov Stefanov | 30 Mar 09:44 2015

RTP packets with payload different to 96

Thanks for sharing your knowledge, Thomas.
Tim1961 | 29 Mar 17:45 2015

ld: warning: PIE disabled.

Can anyone shed light on this warning?

ld: warning: PIE disabled. Absolute addressing (perhaps -mdynamic-no-pic)
not allowed in code signed PIE, but used in _ff_h264_decode_mb_cabac from
libavcodec.a(h264_cabac.o). To fix this warning, don't compile with
-mdynamic-no-pic or link with -Wl,-no_pie

I'm building universal libs to support armv7, arm64, i386 and x86_64
(ordinary iOS development) and I *think* this is coming from one of the x86
targets. The details: Mac OS X 10.10, Xcode 6.2, FFmpeg 2.6.1 (as well as
2.5.x). I have the latest versions of gas-preprocessor.pl and yasm 1.3 from
MacPorts. My configure flags definitely include --enable-pic, and I'm
definitely not using -mdynamic-no-pic or -no_pie.

Google is turning up nothing but I can't be the only one experiencing
this... right?

Many thanks,
Tim

Georgi Rosenov Stefanov | 25 Mar 16:39 2015

RTP packets with payload different to 96

Hello All

I am trying to stream over the local network and to watch the stream using VLC.

I am sending only video, without audio.
I am using the format name "rtp", and as the file name I pass IP:PORT.
It works: VLC shows the stream, but ffmpeg forces me to use an SDP file.

If I do not use an SDP file, VLC says:

  "SDP required:
  A description in SDP format is required to receive the RTP stream. Note
  that rtp:// URIs cannot work with dynamic RTP payload format (96)."


Using an SDP file is not a problem, but I want to know how to send packets with a payload type different from 96 (dynamic payload).


I have tried to study the code, and I think the payload type can be changed according to the codec I use. The possible payload types could be one of:


"  {25, "CelB",       AVMEDIA_TYPE_VIDEO,   AV_CODEC_ID_NONE, 90000, -1},
  {26, "JPEG",       AVMEDIA_TYPE_VIDEO,   AV_CODEC_ID_MJPEG, 90000, -1},
  {28, "nv",         AVMEDIA_TYPE_VIDEO,   AV_CODEC_ID_NONE, 90000, -1},
  {31, "H261",       AVMEDIA_TYPE_VIDEO,   AV_CODEC_ID_H261, 90000, -1},
  {32, "MPV",        AVMEDIA_TYPE_VIDEO,   AV_CODEC_ID_MPEG1VIDEO, 90000, -1},
  {32, "MPV",        AVMEDIA_TYPE_VIDEO,   AV_CODEC_ID_MPEG2VIDEO, 90000, -1},
  {33, "MP2T",       AVMEDIA_TYPE_DATA,    AV_CODEC_ID_MPEG2TS, 90000, -1},
  {34, "H263",       AVMEDIA_TYPE_VIDEO,   AV_CODEC_ID_H263, 90000, -1},"


Am I right?


My file is H.264 and I do not transcode anything.

Is there any way to send packets with a payload type different from 96 without transcoding?
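
For what it's worth, a sketch of the one knob I know of in the RTP muxer (an
assumption to check against your ffmpeg version, not a guaranteed answer): the
muxer exposes a private "payload_type" option, so a specific payload number
can be forced without transcoding. H.264 has no static entry in the table
quoted above, which is why a dynamic type (96+) plus an SDP is the usual
arrangement.

#include <libavformat/avformat.h>
#include <libavutil/opt.h>

/* 'ofmt_ctx' is assumed to be the AVFormatContext of the "rtp" output;
 * call this before avformat_write_header() */
static int force_payload_type(AVFormatContext *ofmt_ctx, int pt)
{
    /* AV_OPT_SEARCH_CHILDREN makes the lookup reach the muxer's private options */
    return av_opt_set_int(ofmt_ctx, "payload_type", pt, AV_OPT_SEARCH_CHILDREN);
}

/* or equivalently, pass it as an option when writing the header:
 *   AVDictionary *opts = NULL;
 *   av_dict_set(&opts, "payload_type", "33", 0);
 *   avformat_write_header(ofmt_ctx, &opts);
 */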

Brendan Jones | 27 Mar 04:26 2015

av_parser_parse2 crash

Hi,
 
I am trying to decode my network DVR's H.264 socket stream directly using av_parser_parse2, but I get a crash when I try to parse the last portion of the buffer I am passing to the parser.

For example, if I read 80000 bytes from the socket, I can parse and then decode most of the data, but the last piece passed to av_parser_parse2 causes a crash.

I have tried different ways to make sure I am not passing more data to the parser than there is in the buffer.

I am not very knowledgeable about H.264, but I suspect the parser cannot handle the last portion of data because it wants more from the socket.

Is there something I can do to avoid this crash?

Here is my loop, just in case it is something I am doing wrong:
 

int retval;

uint8_t *inbuf = new uint8_t [20000000];

int inbuf_start = 0;
int inbuf_len = 0;

inbuf_start = 0;
inbuf_len = 0;

while (socket->waitForReadyRead())
{
    if (socket->bytesAvailable() > BUFFER_SIZE*2)
    {
        QByteArray ba = socket->readAll();
        memcpy(inbuf + inbuf_len, ba.data(), ba.size());
        inbuf_len += ba.size();

        qDebug() << "read bytes in buffer " << ba.size();

        if (ba.size() == 0)
        {
            cerr << "read 0 bytes data." << endl;
            continue;
        }

        while (inbuf_len)
        {
            qDebug() << "before parse total to parse = " << ba.size();
            qDebug() << "parsed already " << inbuf_start;
            qDebug() << "left to parse " << inbuf_len;

            av_init_packet(&packet2);
            packet2.data = 0;
            packet2.size = 0;

            uint8_t *pout;
            int pout_len;

            len = av_parser_parse2(parser, c, &pout, &pout_len,
                                   inbuf + inbuf_start, inbuf_len,
                                   AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);

            inbuf_start += len;
            inbuf_len -= len;

            packet2.data = pout;
            packet2.size = pout_len;

            if (len)
            {
                retval = avcodec_decode_video2(c, picture, &got_picture, &packet2);

                if (got_picture && retval > 0)
                {
                    display_frame(c->pix_fmt, picture, screen_pixz, screen_contextz, wind, stride);
                }
            }

            av_free_packet(&packet2);
        }
    }
}
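
For comparison, a sketch patterned after ffmpeg's decoding examples, reusing
the variable names above (so treat it as an illustration rather than a
drop-in fix): decode only when the parser has actually emitted a packet
(pout_len > 0), and keep any unconsumed tail for the next socket read instead
of looping on it.

while (inbuf_len > 0)
{
    uint8_t *pout = NULL;
    int pout_len = 0;

    len = av_parser_parse2(parser, c, &pout, &pout_len,
                           inbuf + inbuf_start, inbuf_len,
                           AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
    if (len < 0)
        break;                        /* parse error */

    inbuf_start += len;
    inbuf_len   -= len;

    if (pout_len > 0)                 /* a complete unit is ready to decode */
    {
        av_init_packet(&packet2);
        packet2.data = pout;
        packet2.size = pout_len;
        retval = avcodec_decode_video2(c, picture, &got_picture, &packet2);
        if (retval > 0 && got_picture)
            display_frame(c->pix_fmt, picture, screen_pixz, screen_contextz, wind, stride);
        av_free_packet(&packet2);
    }
    else if (len == 0)
    {
        break;                        /* parser wants more input: wait for the next read */
    }
}
/* compact the leftover bytes to the front of the buffer before the next read */
memmove(inbuf, inbuf + inbuf_start, inbuf_len);
inbuf_start = 0;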
 
 
Thank you.
 
Brendan
 
Reddi, Praveen | 26 Mar 16:59 2015

Ffmpeg unable to initialize swsContext for H264 frame data received through RTP Payload

Hello All,

 

I am trying to decode H264 video frames received through RTSP streaming.

I followed this post: How to process raw UDP packets so that they can be decoded by a decoder filter in a directshow source filter

I was able to identify the start and the end of a frame in the RTP packets and reconstructed my video frame.

But I didn't receive any SPS/PPS data from my RTSP session. I looked for the string "sprop-parameter-sets" in my SDP (Session Description Protocol) and there was none.

 

Reconstructing Video Frame from RTP Packets:

The payload in the first RTP packet goes like this: "1c 80 00 00 01 61 9a 03 03 6a 59 ff 97 e0 a9 f6"

This says that it is fragmented data ("1C") and the start of the frame ("80"). I copied the rest of the payload data (except the first 2 bytes "1C 80").

The following RTP packets have a payload starting with "1C 00", which is a continuation of the frame data. I kept adding the payload data (except the first 2 bytes "1C 00") to the byte buffer for all the following RTP packets.

When I get the RTP packet whose payload starts with "1C 40", which is the end of the frame, I copied the rest of that packet's payload data (except the first 2 bytes "1C 40") into the byte buffer.

Thus I reconstructed the video frame in the byte buffer.

Then I prepended 4 bytes [0x00, 0x00, 0x00, 0x01] to the byte buffer before sending it to the decoder, because I didn't receive any SPS/PPS NAL bytes.
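
For reference, a sketch of FU-A reassembly as RFC 6184 describes it (my
illustration, not the code from this mail): the first two payload bytes are
the FU indicator and the FU header, and a NAL header byte has to be rebuilt
from them before the Annex B start code is prepended; dropping both bytes
loses the original NAL type.

#include <stdint.h>
#include <string.h>

/* append one FU-A fragment (payload, len) to 'out'; returns the new output length */
static size_t append_fu_a(uint8_t *out, size_t out_len,
                          const uint8_t *payload, size_t len)
{
    uint8_t fu_indicator = payload[0];   /* F + NRI bits, type 28 (FU-A)       */
    uint8_t fu_header    = payload[1];   /* S/E/R bits + the original NAL type */

    if (fu_header & 0x80) {              /* S bit: first fragment of the NAL   */
        static const uint8_t start_code[4] = { 0, 0, 0, 1 };
        uint8_t nal_header = (fu_indicator & 0xE0) | (fu_header & 0x1F);
        memcpy(out + out_len, start_code, 4);
        out_len += 4;
        out[out_len++] = nal_header;     /* reconstructed NAL header byte      */
    }
    memcpy(out + out_len, payload + 2, len - 2);
    return out_len + len - 2;
}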

When I send this byte buffer to the decoder, it fails when it tries to initialize the swsContext.

Am I sending the NAL bytes and video frame data correctly?

 

Appreciate any help on this.

 

Thanks,

Praveen

Alessio Volpe | 24 Mar 17:31 2015

RTSP Audio/Video Synchronization

Hi, this is my program:

-------------------------------------------------------------

#include <stdio.h>
#include <stdlib.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavformat/avio.h>
#include <sys/time.h>

time_t get_time()
{
  struct timeval tv;

  gettimeofday( &tv, NULL );

  return tv.tv_sec;
}

int main( int argc, char* argv[] )
{
  AVFormatContext *ifcx = NULL;
  AVInputFormat *ifmt;
  AVCodecContext *iccx_video, *iccx_audio;
  AVCodec *icodec;
  AVStream *ist_video, *ist_audio;
  int i_index_video, i_index_audio;
  time_t timenow, timestart;
  int got_key_frame = 0;

  AVFormatContext *ofcx;
  AVOutputFormat *ofmt;
  AVCodecContext *occx;
  AVCodec *ocodec;
  AVStream *ost_video, *ost_audio;
  int o_index_video, o_index_audio;

  AVPacket pkt;

  int ix, ix_video, ix_audio;

  const char *sFileInput;
  const char *sFileOutput;
  int bRunTime;

  // RTSP address
  sFileInput = "rtsp://10.4.1.175/media/video1";

  // Output file
  sFileOutput = "camera.avi";

  // Capture duration
  bRunTime = 15; // Record 15 seconds

  // Initialize library
  av_log_set_level( AV_LOG_DEBUG );
  av_register_all();
  avcodec_register_all();
  avformat_network_init();

  //
  // Input
  //

  //open rtsp
  if ( avformat_open_input( &ifcx, sFileInput, NULL, NULL) != 0 ) {
    printf( "ERROR: Cannot open input file\n" );
    return EXIT_FAILURE;
  }

  if ( avformat_find_stream_info( ifcx, NULL ) < 0 ) {
    printf( "ERROR: Cannot find stream info\n" );
    avformat_close_input( &ifcx );
    return EXIT_FAILURE;
  }

  snprintf( ifcx->filename, sizeof( ifcx->filename ), "%s", sFileInput );

  //search video stream
  i_index_video = -1;
  for ( ix = 0; ix < ifcx->nb_streams; ix++ ) {
    iccx_video = ifcx->streams[ ix ]->codec;
    if ( iccx_video->codec_type == AVMEDIA_TYPE_VIDEO ) {
      ist_video = ifcx->streams[ ix ];
      i_index_video = ix;
      break;
    }
  }
  if ( i_index_video < 0 ) {
    printf( "ERROR: Cannot find input video stream\n" );
    avformat_close_input( &ifcx );
    return EXIT_FAILURE;
  }


  //search audio stream
  i_index_audio = -1;
  for ( ix = 0; ix < ifcx->nb_streams; ix++ ) {
    iccx_audio = ifcx->streams[ ix ]->codec;
    if ( iccx_audio->codec_type == AVMEDIA_TYPE_AUDIO ) {
      ist_audio = ifcx->streams[ ix ];
      i_index_audio = ix;
      break;
    }
  }
  if ( i_index_audio < 0 ) {
    printf( "ERROR: Cannot find input audio stream\n" );
    avformat_close_input( &ifcx );
    return EXIT_FAILURE;
  }

  //
  // Output
  //

  //open output file
  ofmt = av_guess_format( NULL, sFileOutput, NULL ); //Return the output format
  ofcx = avformat_alloc_context();
  ofcx->oformat = ofmt;
  avio_open2( &ofcx->pb, sFileOutput, AVIO_FLAG_WRITE, NULL, NULL );

  // Create Video output stream
  ost_video = avformat_new_stream( ofcx, NULL );
  ost_audio = avformat_new_stream( ofcx, NULL );

  avcodec_copy_context( ost_video->codec, iccx_video ); // Copy the input stream's codec context
  avcodec_copy_context( ost_audio->codec, iccx_audio );


  ost_video->sample_aspect_ratio.num = iccx_video->sample_aspect_ratio.num;
  ost_video->sample_aspect_ratio.den = iccx_video->sample_aspect_ratio.den;

  // Assume r_frame_rate is accurate
  ost_video->r_frame_rate = ist_video->r_frame_rate;
  ost_video->avg_frame_rate = ost_video->r_frame_rate;
  ost_video->time_base = (AVRational){ost_video->r_frame_rate.den, ost_video->r_frame_rate.num}; //ost->time_base = av_inv_q( ost->r_frame_rate ); //error
  ost_video->codec->time_base = ost_video->time_base;

  // Create Audio output stream
  ost_audio->sample_aspect_ratio.num = iccx_audio->sample_aspect_ratio.num;
  ost_audio->sample_aspect_ratio.den = iccx_audio->sample_aspect_ratio.den;


  ost_audio->r_frame_rate = ist_audio->r_frame_rate;
  ost_audio->avg_frame_rate = ost_audio->r_frame_rate;
  ost_audio->time_base = (AVRational){ost_audio->r_frame_rate.den, ost_audio->r_frame_rate.num}; //ost->time_base = av_inv_q( ost->r_frame_rate ); //error
  ost_audio->codec->time_base = ost_audio->time_base;

  avformat_write_header( ofcx, NULL );

  snprintf( ofcx->filename, sizeof( ofcx->filename ), "%s", sFileOutput );

  //start reading packets from stream and write them to file

  av_dump_format( ifcx, 0, ifcx->filename, 0 ); //INFO INPUT
  av_dump_format( ofcx, 0, ofcx->filename, 1 ); //INFO OUTPUT

  timestart = timenow = get_time();

  ix_video = 0;
  ix_audio = 0;

  double video_pts, audio_pts;

  av_init_packet( &pkt );

  double audio_time, video_time;

  while ( av_read_frame( ifcx, &pkt ) >= 0 && timenow - timestart <= bRunTime ) { //&& (getchar() != 'q')){
    av_packet_rescale_ts(&pkt, ofcx->streams[i_index_video]->codec->time_base, ifcx->streams[i_index_video]->time_base);
      if ( pkt.stream_index == i_index_video ) { //packet is video
       // Make sure we start on a key frame (an I-frame)
      if ( timestart == timenow && ! ( pkt.flags & AV_PKT_FLAG_KEY ) ) {
        timestart = timenow = get_time();
        continue;
      }
      got_key_frame = 1;

//      video_pts = (double)ost_video->pts.val * ost_video->time_base.num / ost_video->time_base.den;
//      audio_pts = (double)ost_audio->pts.val * ost_audio->time_base.num / ost_audio->time_base.den;

      pkt.stream_index = ost_video->id;
//      /* prepare packet for muxing */
//     pkt.dts = av_rescale_q_rnd(pkt.dts, ofcx->streams[i_index_video]->codec->time_base, ofcx->streams[i_index_video]->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
//     pkt.pts = av_rescale_q_rnd(pkt.pts, ofcx->streams[i_index_video]->codec->time_base, ofcx->streams[i_index_video]->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
//     pkt.duration = av_rescale_q(pkt.duration, ofcx->streams[i_index_video]->codec->time_base, ofcx->streams[i_index_video]->time_base);


      pkt.pts = ix_video++;
      pkt.dts = pkt.pts;

//      /*Also, some streams have multiple ticks-per-frame, so if the video runs at double speed you might need to this right below the above line:

//        pkt.pts *= ifcx->streams[0]->codec->ticks_per_frame;
//        pkt.dts *= ifcx->streams[0]->codec->ticks_per_frame;

      //av_write_frame( ofcx, &pkt );
      av_interleaved_write_frame( ofcx, &pkt );
    }
    else{ //packet is audio

        pkt.pts = ix_video++;
        pkt.dts = pkt.pts;

    //av_write_frame( ofcx, &pkt );
    av_interleaved_write_frame( ofcx, &pkt );

    }

    // Loop to synchronize and write to disk

//    printf("vpcopy[%d].pts = %d", i, vpcopy[i].pts);
//    printf("\n");

//    if(i == 30) {
//        for(j=0; j<30-1; j++)
//        {
//            min = j;

//        for(k=j+1; k<30; k++)
//        if(vpcopy[j].pts < vpcopy[min].pts) // change this condition to reverse the order
//            min = k;

//        temp=vpcopy[min];
//        vpcopy[min]=vpcopy[j];
//        vpcopy[j]=temp;

//        printf("vpcopy[%d].pts = %d", i, vpcopy[i].pts);
//        printf("\n");

//        av_interleaved_write_frame( ofcx, &vpcopy[j] );
//        }
//        i = 0;
//    }


    av_free_packet( &pkt );
    av_init_packet( &pkt );

    timenow = get_time();
  }
  av_read_pause( ifcx );
  av_write_trailer( ofcx );
  avio_close( ofcx->pb );
  avformat_free_context( ofcx );

  avformat_network_deinit();

  return EXIT_SUCCESS;
}

-------------------------------------------------------------

I would like to synchronize the video and audio.

How should I use the pts and dts?
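
A sketch of the usual answer, following the pattern of ffmpeg's remuxing
example and reusing the variable names from the program above (so an
illustration to adapt, not a tested patch): carry the input packet's
pts/dts/duration over by rescaling them from the input stream's time_base to
the output stream's time_base, instead of assigning your own frame counters.

    AVStream *ist = ifcx->streams[pkt.stream_index];
    AVStream *ost = (pkt.stream_index == i_index_video) ? ost_video : ost_audio;

    pkt.pts = av_rescale_q_rnd(pkt.pts, ist->time_base, ost->time_base,
                               AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
    pkt.dts = av_rescale_q_rnd(pkt.dts, ist->time_base, ost->time_base,
                               AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
    pkt.duration = av_rescale_q(pkt.duration, ist->time_base, ost->time_base);
    pkt.pos = -1;
    pkt.stream_index = ost->index;

    av_interleaved_write_frame(ofcx, &pkt);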



Melitonas | 22 Mar 11:36 2015

"cannot find -lavcodec" when compiling opencv-2.4.11 with ffmpeg version git-2015-03-21 and python2.7.6

Hello everyone,

I am on Centos 6.5 64bit

ffmpeg compiles successfully following the instructions here: http://trac.ffmpeg.org/wiki/CompilationGuide/Centos

I get an error when compiling opencv-2.4.11 with python2.7.6 and ffmpeg version git-2015-03-21.

After make I get the error below. Did I compile ffmpeg wrong, or is the version of ffmpeg too new for opencv?

bob@localhost ~/tmp/opencv-2.4.11/build $ make
[  1%] Built target opencv_core_pch_dephelp
[  1%] Built target pch_Generate_opencv_core
[  6%] Built target opencv_core
[---sniped---]
[ 77%] Built target pch_Generate_opencv_contrib
[ 83%] Built target opencv_contrib
Linking CXX shared library ../../lib/cv2.so
/usr/bin/ld: cannot find -lavcodec
collect2: ld returned 1 exit status
make[2]: *** [lib/cv2.so] Error 1
make[1]: *** [modules/python/CMakeFiles/opencv_python.dir/all] Error 2
make: *** [all] Error 2


