yinlijie2011 | 1 Aug 14:19 2011

problem about create file

Dear,
    I use the live555 library in a program that receives a media stream from an RTSP server and saves it to a new MP4 file every minute. As time goes by, more and more MP4 files are created to save the media.
   I know that QuickTimeFileSink::createNew() creates the MP4 file object, but I don't know which function frees that object once it has received enough of the stream. openRTSP's source uses Medium::close() to free the MP4 file object. I tried using this function, but afterwards my program can't receive the media stream any more. Why?
  My steps are: first, send a PAUSE command; second, use Medium::close() to free the MP4 file object and create a new one; finally, send a PLAY command.
  Thank you! 
Yin Lijie


_______________________________________________
live-devel mailing list
live-devel@...
http://lists.live555.com/mailman/listinfo/live-devel
Ross Finlayson | 2 Aug 16:59 2011

Re: problem about create file


On Aug 1, 2011, at 8:19 AM, yinlijie2011 wrote:

    I use the live555 library in a program that receives a media stream from an RTSP server and saves it to a new MP4 file every minute. As time goes by, more and more MP4 files are created to save the media.
   I know that QuickTimeFileSink::createNew() creates the MP4 file object, but I don't know which function frees that object once it has received enough of the stream. openRTSP's source uses Medium::close() to free the MP4 file object. I tried using this function, but afterwards my program can't receive the media stream any more. Why?
  My steps are: first, send a PAUSE command; second, use Medium::close() to free the MP4 file object and create a new one; finally, send a PLAY command.

This should work (I think):

1/ Send the RTSP "PAUSE" command (using "RTSPClient::sendPauseCommand()")
2/ Call Medium::close(pointer-to-your-MP4-file-object)
3/ Create a new MP4-file-object
4/ Call "startPlaying()" on your new MP4-file-object
5/ Send the RTSP "PLAY" command (using "RTSPClient::sendPlayCommand()")
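In client code, that sequence might look roughly like the sketch below. This is illustrative only: "session", "rtspClient", the buffer size, and the response/after-playing callbacks are assumed to be defined elsewhere in your client, and steps 2-5 should really run once the PAUSE response has arrived.

```cpp
// Sketch of the pause / close / reopen / play sequence described above.
// "session", "rtspClient", "env", and the callback functions are assumed
// to exist elsewhere; the names here are illustrative only.
void rotateOutputFile(UsageEnvironment& env, RTSPClient* rtspClient,
                      MediaSession* session, QuickTimeFileSink*& qtOut,
                      char const* newFileName, unsigned fileBufferSize) {
  // 1/ Pause the stream:
  rtspClient->sendPauseCommand(*session, onPauseResponse);

  // (Ideally in the PAUSE response handler:)
  // 2/ Close the old MP4 file object:
  Medium::close(qtOut);

  // 3/ Create a new MP4 file object:
  qtOut = QuickTimeFileSink::createNew(env, *session, newFileName,
                                       fileBufferSize);

  // 4/ Start playing into the new sink:
  qtOut->startPlaying(onFileDone, NULL);

  // 5/ Resume the stream:
  rtspClient->sendPlayCommand(*session, onPlayResponse);
}
```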


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

Matt Schuckmannn | 3 Aug 00:45 2011

Confused about how to generate fmtp line for H.264 source for SDP

I'm working on upgrading our use of the Live555 RTSP server code to the 
latest version of the library; our old version was at least a couple of 
years old.

In the new code it appears that the default behavior is to obtain the 
SPS, PPS, etc. from the H.264 fragmenter (see auxSDPLine() for 
H264VideoRTPSink). However, the fragmenter is not created until 
continuePlaying() is called, which is way too late to generate the fmtp 
line for the SDP. So I'm confused about how this is all supposed to work.

I'm not sure whether I should override auxSDPLine() in my class derived 
from H264VideoRTPSink and format the fmtp line by hand, or whether I 
should try creating the fragmenter when I construct my H.264 RTPSink 
class, or something else.

Please advise.

Thanks,
Matt S.
Ross Finlayson | 3 Aug 15:14 2011

Re: Confused about how to generate fmtp line for H.264 source for SDP


On Aug 2, 2011, at 6:45 PM, Matt Schuckmannn wrote:

I'm working on upgrading our use of the Live555 RTSP server code to the latest version of the library; our old version was at least a couple of years old.

Good heavens; there have been *many* improvements and bug fixes since then!


In the new code it appears that the default behavior is to obtain the SPS, PPS, etc. from the H.264 fragmenter

Yes.  Now, the SPS and PPS NAL units are assumed to be in the input NAL unit stream (and are extracted from there).

That means that if we're streaming a H.264 stream 'on demand' (e.g., from a unicast RTSP server), we have to do a little trick (hack) to get this information for use in the stream's SDP description, before we start delivering to the first client.  Basically, we have to 'stream' the input source to a dummy sink, until we see the data that we need.

The place to do this is in your subclass of "ServerMediaSubsession" for H.264 video.  Specifically, you reimplement the "getAuxSDPLine()" virtual function.

For a model of how to do this, see our implementation of "H264VideoFileServerMediaSubsession".  You will presumably do something similar, except with your own subclass.  (Of course, as always, you will also implement the "createNewStreamSource()" and "createNewRTPSink()" virtual functions.)
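For reference, the shape of that pattern is sketched below, heavily condensed from "H264VideoFileServerMediaSubsession". The member variables and helper callbacks here ("fDoneFlag", "afterPlayingDummy", "checkForAuxSDPLine") are illustrative, not exact library code:

```cpp
// Condensed sketch of the 'dummy sink' trick: stream the input into the
// RTPSink until it has seen the SPS/PPS NAL units, at which point its
// auxSDPLine() becomes available for the SDP description.
class MyH264Subsession: public OnDemandServerMediaSubsession {
protected:
  char* fAuxSDPLine;      // the "a=fmtp:" line, once known
  char fDoneFlag;         // set (non-zero) when fAuxSDPLine is available
  RTPSink* fDummyRTPSink; // sink used only to discover the SDP line

  virtual char const* getAuxSDPLine(RTPSink* rtpSink,
                                    FramedSource* inputSource) {
    if (fAuxSDPLine != NULL) return fAuxSDPLine; // already known

    if (fDummyRTPSink == NULL) {
      // 'Stream' the input into the sink so that it sees the SPS/PPS:
      fDummyRTPSink = rtpSink;
      fDummyRTPSink->startPlaying(*inputSource, afterPlayingDummy, this);
      checkForAuxSDPLine(this); // polls auxSDPLine(); sets fDoneFlag
    }
    envir().taskScheduler().doEventLoop(&fDoneFlag); // wait for the line
    return fAuxSDPLine;
  }
  // ...plus createNewStreamSource() and createNewRTPSink(), as usual.
};
```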


I'm not sure if I should over ride the auxSDPLine() in my class derived from H264VideoRTPSink

No, you shouldn't need to change (or reimplement) that code.


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

Matt Schuckmannn | 3 Aug 19:04 2011

Re: Confused about how to generate fmtp line for H.264 source for SDP

On Wednesday, August 03, 2011 6:14:13 AM, Ross Finlayson wrote:
> On Aug 2, 2011, at 6:45 PM, Matt Schuckmannn wrote:
> 
>> I'm working on up grading our use of Live555 RTSP server code to the 
>> latest version of the library, our old version was at least a couple 
>> of years old.
> 
> Good heavens; there have been *many* improvements and bug fixes since then!

Yes, well, when we first adopted live555 we needed RTSP over TCP to work 
with sending commands like SET_PARAMETER during the stream, so we did 
our own fix for that, which made it difficult to keep up with your 
changes, and until recently we didn't have time to convert over to your 
new code.

> Yes. Now, the SPS and PPS NAL units are assumed to be in the input NAL 
> unit stream (and are extracted from there).

Is that a safe assumption? Isn't it optional to include the SPS and PPS 
NAL units in the stream, or at the very least make them very infrequent?

> 
> That means that if we're streaming a H.264 stream 'on demand' (e.g., 
> from a unicast RTSP server), we have to do a little trick (hack) to get 
> this information for use in the stream's SDP description, before we 
> start delivering to the first client. Basically, we have to 'stream' the 
> input source to a dummy sink, until we see the data that we need.
> 
> The place to do this is in your subclass of "ServerMediaSubsession" for 
> H.264 video. Specifically, you reimplement the "getAuxSDPLine()" virtual 
> function.
>
Hmm, I'll look into it, but my encoder gives me the SPS and PPS when I 
initialize it. It seems like it would be easier to just hand the SPS 
and PPS to the RTP sink and re-implement auxSDPLine() in my class, 
pretty much like I used to do. Is there a reason you're recommending 
this approach?

> For a model of how to do this, see our implementation of 
> "H264VideoFileServerMediaSubsession". You will presumably do something 
> similar, except with your own subclass. (Of course, as always, you will 
> also implement the "createNewStreamSource()" and "createNewRTPSink()" 
> virtual functions.)

Thanks,
Matt S. 
Warren Young | 4 Aug 01:28 2011

Trick play based pause interacts badly with Enseo STBs

If you use live555MediaServer to stream MPEG-2 in a TS to an Enseo 
HD2000 STB, pause and resume works fine as long as you haven't used the 
indexer to build .tsx files, or you disable trick play handling in 
MPEG2TransportFileServerMediaSubsession::pauseStream():

> void MPEG2TransportFileServerMediaSubsession
> ::pauseStream(unsigned clientSessionId, void* streamToken) {
>   if (fIndexFile != NULL) { // we support 'trick play'
>     ClientTrickPlayState* client = lookupClient(clientSessionId);
>     if (client != NULL) {
>       client->updateStateOnPlayChange(False);
>     }
>   }
>
>   // Call the original, default version of this routine:
>   OnDemandServerMediaSubsession::pauseStream(clientSessionId, streamToken);
> }

That is to say, the STB will resume a paused RTSP stream for an indexed 
MPEG-2 TS correctly if you remove everything but the last line, causing 
the server to behave as if the TS file doesn't have an index purely for 
the purposes of pause handling.
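
Concretely, our workaround reduces the function above to just its last statement:

```cpp
// Workaround: pauseStream() with the trick-play state update removed,
// so pause behaves as if the TS file had no index.
void MPEG2TransportFileServerMediaSubsession
::pauseStream(unsigned clientSessionId, void* streamToken) {
  // Call the original, default version of this routine:
  OnDemandServerMediaSubsession::pauseStream(clientSessionId, streamToken);
}
```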

With the .tsx file and problem code present, resuming causes the 
displayed video to look like digital satellite TV when there's snow on 
the dish.  Jerky playback, occasional pauses, macroblocks decoding in 
the wrong place/time...ugly stuff.  Additionally, it looks like the STB 
is giving up and trying to restart the stream at some point.

While debugging this, we noticed that VLC doesn't send RTSP PAUSE when 
you press its Pause button.  We assume it's working like a DVR in this 
instance, just buffering the streamed data to use when you unpause.  I 
mention this because VLC is the only other RTSP client we have on hand. 
We lack another client that does send RTSP PAUSE, so as far as we can 
see, there's nothing wrong with disabling trick play support for pause.

We assume there *is* some bad consequence, since the code probably 
wasn't written for no reason.  Which client does this hack break?

We have a solution to our problem, hacky though it is, but we'd be 
happier if the server just did the right thing with Enseo STBs out of 
the box, so we're willing to help you look into this, Ross.  We can send 
you one of these STBs, plus tools and information that will help you 
debug the issue.

I've attached a transcript of the RTSP conversation between an Enseo 
HD2000 and live555MediaServer.  You see it start the stream, then pause 
and attempt to resume it.

One of the things you'll notice in the transcript is that the STB sends 
a GET_PARAMETER request for "position" on pause.  I couldn't find any 
documentation online saying what the client is supposed to get in 
response, only the RTSP RFC saying this is an open extension mechanism. 
We don't know which RTSP server implements this parameter (Kasenna or 
SeaChange, probably), but we assume the purpose is to let the STB tell 
the server where to resume from.

We considered trying to modify the server to send a real reply for this 
parameter by looking at the trick play index, but then stumbled across 
our current hacky fix.
OPTIONS rtsp://mongo:8554/4609.ts RTSP/1.0
CSeq: 1
User-Agent: Enseo HD-2000

RTSP/1.0 200 OK
CSeq: 1
Date: Wed, Aug 03 2011 23:05:17 GMT
Public: OPTIONS, DESCRIBE, SETUP, TEARDOWN, PLAY, PAUSE, GET_PARAMETER, SET_PARAMETER

DESCRIBE rtsp://mongo:8554/4609.ts RTSP/1.0
CSeq: 2
Accept: application/sdp
User-Agent: Enseo HD-2000

RTSP/1.0 200 OK
CSeq: 2
Date: Wed, Aug 03 2011 23:05:17 GMT
Content-Base: rtsp://172.20.0.242:8554/4609.ts/
Content-Type: application/sdp
Content-Length: 400

v=0
o=- 1312412717518667 1 IN IP4 172.20.0.242
s=MPEG Transport Stream, streamed by the LIVE555 Media Server
i=4609.ts
t=0 0
a=tool:LIVE555 Streaming Media v2011.07.21
a=type:broadcast
a=control:*
a=range:npt=0-368.008
a=x-qt-text-nam:MPEG Transport Stream, streamed by the LIVE555 Media Server
a=x-qt-text-inf:4609.ts
m=video 0 RTP/AVP 33
c=IN IP4 0.0.0.0
b=AS:19395
a=control:track1
SETUP rtsp://mongo:8554/4609.ts/track1 RTSP/1.0
CSeq: 3
Transport: RTP/AVP;unicast;client_port=1234-1235
User-Agent: Enseo HD-2000

RTSP/1.0 200 OK
CSeq: 3
Date: Wed, Aug 03 2011 23:05:17 GMT
Transport: RTP/AVP;unicast;destination=172.20.200.144;source=172.20.0.242;client_port=1234-1235;server_port=6970-6971
Session: 7F56B9A0

PLAY rtsp://mongo:8554/4609.ts RTSP/1.0
CSeq: 4
Session: 7F56B9A0
Scale: 1.000000
Range: npt=0.000000-
User-Agent: Enseo HD-2000

RTSP/1.0 200 OK
CSeq: 4
Date: Wed, Aug 03 2011 23:05:17 GMT
Scale: 1.000000
Range: npt=0.000-
Session: 7F56B9A0
RTP-Info: url=rtsp://172.20.0.242:8554/4609.ts/track1;seq=55654;rtptime=321112806

GET_PARAMETER rtsp://mongo:8554/4609.ts RTSP/1.0
CSeq: 5
Session: 7F56B9A0
User-Agent: Enseo HD-2000
Content-Type: text/parameter
Content-Length: 12

position

RTSP/1.0 200 OK
CSeq: 5
Date: Wed, Aug 03 2011 23:05:17 GMT
Session: 7F56B9A0

PAUSE rtsp://mongo:8554/4609.ts RTSP/1.0
CSeq: 6
Session: 7F56B9A0
User-Agent: Enseo HD-2000

RTSP/1.0 200 OK
CSeq: 6
Date: Wed, Aug 03 2011 23:05:24 GMT
Session: 7F56B9A0

GET_PARAMETER rtsp://mongo:8554/4609.ts RTSP/1.0
CSeq: 7
Session: 7F56B9A0
User-Agent: Enseo HD-2000
Content-Type: text/parameter
Content-Length: 12

position

RTSP/1.0 200 OK
CSeq: 7
Date: Wed, Aug 03 2011 23:05:24 GMT
Session: 7F56B9A0

PLAY rtsp://mongo:8554/4609.ts RTSP/1.0
CSeq: 8
Session: 7F56B9A0
Scale: 1.000000
Range: npt=0.00-
User-Agent: Enseo HD-2000

RTSP/1.0 200 OK
CSeq: 8
Date: Wed, Aug 03 2011 23:05:27 GMT
Scale: 1.000000
Range: npt=0.000-
Session: 7F56B9A0
RTP-Info: url=rtsp://172.20.0.242:8554/4609.ts/track1;seq=2269;rtptime=321987898

GET_PARAMETER rtsp://mongo:8554/4609.ts RTSP/1.0
CSeq: 9
Session: 7F56B9A0
User-Agent: Enseo HD-2000
Content-Type: text/parameter
Content-Length: 12

position

RTSP/1.0 200 OK
CSeq: 9
Date: Wed, Aug 03 2011 23:05:27 GMT
Session: 7F56B9A0

Ross Finlayson | 4 Aug 03:51 2011

Re: Confused about how to generate fmtp line for H.264 source for SDP

Yes. Now, the SPS and PPS NAL units are assumed to be in the input NAL unit stream (and are extracted from there).

Is that a safe assumption? Isn't it optional to include the SPS and PPS NAL units in the stream, or at the very least make them very infrequent?

Yes, it's uncommon for a supplier of a H.264 stream to know the SPS and PPS NAL units, but for those NAL units not to appear in the stream.  Most commonly, these NAL units appear in the stream, but you don't know what they are - and can find them out only by scanning the stream.  We now do that for you.


Hmm, I'll look into it, but my encoder gives me the SPS and PPS when I initialize it. It seems like it would be easier to just hand the SPS and PPS to the RTP sink and re-implement auxSDPLine() in my class, pretty much like I used to do. Is there a reason you're recommending this approach?

Yes, the reason was that it would probably save you work.  If the SPS and PPS NAL units also appear in the stream (which, for most encoders, they will), then you don't have to do any extra work (except perhaps duplicate some code that we already have in "H264VideoFileServerMediaSubsession").


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

Ross Finlayson | 4 Aug 06:12 2011

Re: Trick play based pause interacts badly with Enseo STBs

We assume there *is* some bad consequence, since the code probably wasn't written for no reason.  Which client does this hack break?

The intention of the code is to make sure that the server's state (specifically, its record of where in the stream it is) is accurate, so that a subsequent RTSP "PLAY" command will cause the stream to get resumed from the correct place.  This code was written specifically for the Amino STB (which was the first client we used that did 'trick play' operations).


we're willing to help you look into this, Ross.  We can send you one of these STBs, plus tools and information that will help you debug the issue.

Sure.  You can find the mailing address on the front page of our web site: http://www.live555.com/


One of the things you'll notice in the transcript is that the STB sends a GET_PARAMETER request for "position" on pause.  I couldn't find any documentation online saying what the client is supposed to get in response

This hasn't been standardized at all.  If your STB is relying on this, then it's broken.


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

Aamer Sattar | 4 Aug 12:28 2011

Current playing position in live555MediaServer

Hi,
I am working with the "live555MediaServer" program. To make our own media server, we can reuse MPEG1or2DemuxedServerMediaSubsession, which is a subclass of OnDemandServerMediaSubsession.

Now my question is this: the function

float MPEG1or2DemuxedServerMediaSubsession::duration() const {
  return fOurDemux.fileDuration();
}

gives us the duration, in seconds, of any media file. How can we get the current
playing (seeking) position, inside the MPEG1or2DemuxedServerMediaSubsession subclass,
while streaming a media file (e.g. an .mpg) with the live555MediaServer program?

Regards,
AAMER
Matt Schuckmannn | 4 Aug 18:19 2011

Re: Confused about how to generate fmtp line for H.264 source for SDP

Ok, thanks for the input. 

Matt S. 

On Wednesday, August 03, 2011 6:51:37 PM, Ross Finlayson wrote:
>>> Yes. Now, the SPS and PPS NAL units are assumed to be in the input 
>>> NAL unit stream (and are extracted from there).
>>
>> Is that a safe assumption? Isn't it optional to include the SPS and PPS 
>> NAL units in the stream, or at the very least make them very infrequent?
> 
> Yes, it's uncommon for a supplier of a H.264 stream to know the SPS and 
> PPS NAL units, but for those NAL units not to appear in the stream. Most 
> commonly, these NAL units appear in the stream, but you don't know what 
> they are - and can find them out only by scanning the stream. We now do 
> that for you.
> 
> 
>> Hmm, I'll look into it, but my encoder gives me the sps and pps when I 
>> initialize it, it seems like it would be easier to just hand the sps 
>> and pps to the rtp sink and just re-implement auxSDPLine in my class, 
>> pretty much like I used to do. Is there a reason you're recommending 
>> this approach?
> 
> Yes, the reason was that it would probably save you work. If the SPS and 
> PPS NAL units also appear in the stream (which, for most encoders, they 
> will), then you don't have to do any extra work (except perhaps 
> duplicate some code that we already have in 
> "H264VideoFileServerMediaSubsession").
> 
> 
> Ross Finlayson
> Live Networks, Inc.
> http://www.live555.com/
> 
> 
> 
