[Ffmpeg-devel-irc] ffmpeg.log.20141007
burek
burek021 at gmail.com
Wed Oct 8 02:05:01 CEST 2014
[00:02] <sacarasc> Redirecting stderr as well as stdout?
[00:07] <voip_> sacarasc, we do 2>ffmpeg.log, it works well
[00:08] <sacarasc> Try &> ffmpeg.log
[00:09] Action: sacarasc goes to bed.
[00:21] <voip_> sacarasc, thank you i will try
[00:31] <voip_> sacarasc, i did &> ffmpeg.log, the process doesn't go to the background, but it works if i do: &> ffmpeg.log &
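A small bash sketch of the difference being discussed (using a stand-in function instead of ffmpeg): `2>file` captures only stderr, `&>file` captures both streams, and a trailing `&` backgrounds the job:

```shell
#!/bin/bash
# Stand-in for ffmpeg: writes one line to each stream.
emit() { echo "out"; echo "err" >&2; }

emit > /dev/null 2> only_stderr.log   # stderr only; stdout discarded here
emit &> both.log                      # bash shorthand for '>both.log 2>&1'
emit > both_posix.log 2>&1 &          # portable equivalent, backgrounded
wait                                  # let the background job finish
```

The last form is what makes the job go to the background: the `&` must come after the redirections.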
[00:33] <nikolala> hi
[00:44] <nikolala> was reading https://trac.ffmpeg.org/wiki/Encode/HighQualityAudio, if I understand correctly, with the native aac encoder for LQ stereo I can use 192kbps but for HQ stereo I should use 240kbps (recommended), and if I transpose this to 5.1 I have 192 * 6/2 = 576kbps for LQ 5.1 and 720kbps for HQ 5.1 ?
[00:46] <nikolala> I wonder if it doesn't sound a bit overkill to reach 576kbps and 720kbps for 5.1
[02:42] <Akagi201> How can I tailor libavformat to only contain rtmp and rtsp?
[02:49] <c_14> --disable-everything or --disable-all and then start enabling the things you want.
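A sketch of that approach; the component names below are illustrative, since the exact list of protocols, demuxers, and muxers needed depends on the ffmpeg version (check `./configure --help` and fix up from the build errors):

```shell
# Start from nothing and re-enable only RTMP/RTSP-related pieces.
./configure --disable-everything \
    --enable-protocol=rtmp --enable-protocol=rtp \
    --enable-protocol=tcp  --enable-protocol=udp \
    --enable-demuxer=rtsp  --enable-demuxer=flv \
    --enable-muxer=rtsp    --enable-muxer=flv
```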
[03:39] <Dogild> hi guys, is there an easy way to make a screencast on Mac OS x ?
[05:46] Action: hendry wonders if I can use ffmpeg to playback a few seconds of my video to ensure i have the rotation correct
[05:47] <c_14> ffplay ?
[05:47] <hendry> c_14: oh yes, that should do it
[05:47] Action: hendry is trying to rotate iphone videos before uploading them to S3
[05:52] <hendry> but ffplay doesn't quit after playing back (2sec) duration
[05:52] <c_14> ffplay -t 2 ?
[05:52] <c_14> oh
[05:53] <c_14> ffplay doesn't exit by default on eof
[05:53] <c_14> -autoexit
[05:54] <hendry> hhmmm, ffplay plays the videos the right way up
[05:55] <c_14> use -noautorotate
[05:56] <c_14> If the video is taken by an iphone it'll have rotation metadata embedded in the container. ffplay reads that and rotates automatically.
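Putting the suggested flags together (filename taken from later in the discussion): play two seconds, exit at end of stream, and skip the metadata-driven rotation to preview the raw orientation:

```shell
ffplay -t 2 -autoexit -noautorotate IMG_4155.MOV
```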
[05:57] <hendry> c_14: pity other players can't seem to be able to do that :/
[05:57] <c_14> Which players did you try?
[05:59] <hendry> c_14: mplayer, google-chrome-unstable (once I turned the MOV into mp4)
[05:59] <hendry> ideally ffmpeg could autorotate when converting to mp4
[06:00] <hendry> https://twitter.com/kaihendry/status/513566680308408320
[06:01] <hendry> ffmpeg's API to rotate is kinda crazy https://github.com/wacrea/ffmpeg-mediainfo-autorotate/blob/master/index.php#L20
[06:04] <c_14> Well, the ticket was last updated a few days ago, so something might be happening on that front. Also, tbh I'd probably use the rotate filter instead of the transpose filter. It's easier to understand.
[06:09] <hendry> c_14: -vfilters "rotate={90,180,270}" ?
[06:09] <c_14> eh, it's in radians
[06:09] <c_14> so PI/4,PI,3*PI/4
[06:09] <c_14> If I get my radians right.
[06:10] <c_14> You should also delete the rotation metadata or set it to 0
[06:12] <hendry> c_14: oh... how do I do that?
[06:12] <c_14> -metadata:s:v rotate="" or -metadata:s:v rotate=0
[06:13] <hendry> ffprobe -v 0 -of flat=s=_ -show_format IMG_4155.MOV <-- doesn't seem to show rotation info for some reason. A plain `ffprobe` does show it however
[06:13] <c_14> I think it's stream metadata.
[06:13] <c_14> should be in -show_streams somewhere
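A hedged way to query just the stream metadata, as suggested (filename from the discussion):

```shell
# The rotate tag lives on the video stream, not in the format section.
ffprobe -v 0 -show_streams -select_streams v IMG_4155.MOV | grep -i rotate
```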
[06:13] <hendry> c_14: on the subject of metadata, if I want to type a description into the video so I can maybe find it again one day in the future, how would i do that?
[06:14] <c_14> -metadata description="foobar"
[06:14] <hendry> i wonder if it's possible to get that metadata out from the MP4 from the browser
[06:15] <c_14> Depends on the browser.
[06:16] <hendry> c_14: oh, which browser does it? i am between Chrome 39.0.2171.7 and FF
[06:16] <c_14> I don't know of one that does, but it could be implemented in a browser is what I meant.
[06:19] <hendry> hmm, there doesn't seem to be a desc API to the video object in HTML https://html.spec.whatwg.org/multipage/embedded-content.html#the-video-element
[06:27] <hendry> PI/2 is allegedly 90 degrees... arg
[06:27] <c_14> eh, right
[06:27] <c_14> replace all the 4s with 2s in what I said above.
[06:28] <hendry> argh, i think i will stick to repeated transpose calls
[06:28] <hendry> PI will confuse people
[06:28] <c_14> And transpose won't?
[06:28] <hendry> transpose=1 is one turn clockwise
[06:29] <hendry> -vf transpose=1,transpose=1 two turns clockwise
[06:29] Action: hendry works around insane API
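The workaround in one hedged command (filenames hypothetical): rotate 90 degrees clockwise with transpose, and zero the container's rotation tag so players don't rotate it a second time:

```shell
ffmpeg -i IMG_4155.MOV -vf transpose=1 -metadata:s:v rotate=0 -c:a copy out.mp4
```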
[07:42] <hendry> interesting how ffmpeg responds to my iphone 240fps videos! http://s.natalian.org/2014-10-07/1412660525_1364x742.png
[09:00] <hungnv> what does `[h264 @ 0x155ef20] Current profile doesn't provide more RBSP data in PPS, skipping` mean?
[09:25] <bencc> I'm trying to convert RTP vp8 video and opus audio to mp4
[09:25] <bencc> http://dpaste.com/2HXBT0M
[09:25] <bencc> I'm getting 160 fps so obviously doing something wrong
[09:26] <bencc> can I tell ffmpeg to automatically get the fps from the rtp stream or do I need to manually set it?
[09:36] <bencc> with "-vsync 2" video is ok
[09:36] <bencc> now I'm not getting audio
[09:38] <bencc> ok. audio is fine
[09:38] <bencc> but it seems that audio and video are out of sync
[09:44] <sacarasc> bencc: Is it converting at 160fps, or is the video output 160fps?
[09:45] <sacarasc> Yeah, that's just converting at 160fps.
[09:45] <bencc> sacarasc: the rtp is from chrome over webrtc
[09:45] <bencc> I'm decrypting it and sending as is to ffmpeg
[09:46] <bencc> after using "-vsync 2" ffmpeg show 30 fps
[09:46] <bencc> do I need to set it manually or is there a way to get it from the sdp or rtp stream?
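A sketch based on what reportedly worked above (session.sdp is a hypothetical filename): read the SDP describing the RTP session and let `-vsync 2` derive variable-frame-rate timing from the RTP timestamps instead of assuming a rate:

```shell
ffmpeg -i session.sdp -vsync 2 -c:v libx264 -c:a aac -strict -2 out.mp4
```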
[10:12] <Wader8> Hello
[10:12] <Wader8> how do I stop ffmpeg from continuing when TIMECODES are wrong
[10:12] <Wader8> for example, to create a "break" when the error "ignoring -ss -to" appears
[10:14] <Wader8> It happens from time to time that i mess up the hour .... i have like -ss 02:18:43 -to 01:19:25 - I don't want FFMPEG to continue decoding as it's totally useless, i have to monitor for this, otherwise i could leave it to run unattended .... why is this important? because i have a BATCH of 20 operations
[10:15] <Wader8> I just don't know the ffmpeg syntax enough, but I want the batch to continue, so it should output a basic error, when it does that, batch will detect and go to the next entry
[10:17] <Wader8> i don't know why -ss -to is even a "warning" at all
[10:17] <Wader8> if i wasn't setting them in the first place for the sole purpose of getting rid of the rest, why would it be just a warning
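Since ffmpeg itself doesn't abort on a reversed range, one workaround is to validate the range in the batch script before invoking ffmpeg. A minimal bash sketch, assuming HH:MM:SS timestamps:

```shell
#!/bin/bash
# Convert HH:MM:SS to seconds (10# guards against leading-zero octal).
to_secs() {
    IFS=: read -r h m s <<< "$1"
    echo $((10#$h * 3600 + 10#$m * 60 + 10#$s))
}

# Succeed only if -ss is strictly earlier than -to.
check_range() {
    local ss to
    ss=$(to_secs "$1")
    to=$(to_secs "$2")
    [ "$ss" -lt "$to" ]
}

# The broken example from the discussion: skip instead of decoding.
check_range 02:18:43 01:19:25 || echo "bad range, skipping entry"
```

In a batch loop, `check_range "$ss" "$to" || continue` before the ffmpeg call keeps the remaining entries running.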
[16:16] <stack> hi, I'm converting a bunch of images back to an mp4 file, ffmpeg -framerate 1/5 -i ./frames/frame%03d.png -c:v libx264 -pix_fmt yuv420p out.mp4 , the frames dir contains 10 files named like frames/frame0001.png , but I get an error [image2 @ 0xf3b1c0] Could find no file with path './frames/frame%03d.png' and index in the range 0-4 . ./frames/frame%03d.png: No such file or directory .. why?
[16:17] <c_14> %04d
[16:17] <stack> dumb me :)
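The corrected command from the exchange: frame0001.png has four digits, so the pattern must be %04d:

```shell
ffmpeg -framerate 1/5 -i ./frames/frame%04d.png -c:v libx264 -pix_fmt yuv420p out.mp4
```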
[16:23] <stack> ok now I should find out how to control the output video speed that was the problem
[16:24] <c_14> adjust the framerate?
[16:24] <c_14> https://trac.ffmpeg.org/wiki/Create%20a%20video%20slideshow%20from%20images
[16:24] <stack> yes I found info on the wiki
[16:24] <stack> that one exactly
[16:45] <Wader8> guys i'll just paste what i asked from many hours before, i'll be back later to respond if necessary
[16:45] <Wader8> how do I stop ffmpeg from continuing when TIMECODES are wrong
[16:45] <Wader8> for example, to create a "break" when the error "ignoring -ss -to" appears
[16:45] <Wader8> It happens from time to time that i mess up the hour .... i have like -ss 02:18:43 -to 01:19:25 - I don't want FFMPEG to continue decoding as it's totally useless, i have to monitor for this, otherwise i could leave it to run unattended .... why is this important? because i have a BATCH of 20 operations
[16:45] <Wader8> I just don't know the ffmpeg syntax enough, but I want the batch to continue, so it should output a basic error, when it does that, batch will detect and go to the next entry
[16:45] <Wader8> i don't know why -ss -to is even a "warning" at all
[16:45] <Wader8> if i wasn't setting them in the first place for the sole purpose of getting rid of the rest, why would it be just a warning
[16:46] <Wader8> that's it, brb 1h
[16:48] <c_14> Wader8: Hmm, just checked the code. Doesn't seem to be an option for it yet. Open a feature request on trac and I'll see if I can get around to it later.
[16:51] <c_14> Oh, and if you can think of a name for the option, I would much appreciate it. I'm terrible at naming things.
[17:17] <Keshl> c_14: What's the option do, o.O?
[17:30] <t_p> how to convert tiff to jpg?
[17:30] <t_p> http://www.mediafire.com/view/n31gdk5a7a067uc/a.tif
[17:30] <t_p> i can't do it with ffmpeg
[17:31] <t_p> the result is like http://www.mediafire.com/view/d72d31inavcnje9/test.jpg
[17:31] <t_p> I can accomplish it with imageJ
[17:32] <JodaZ> seems to be just a level problem
[17:32] <t_p> JodaZ: can you give the detail?
[17:32] <JodaZ> t_p, what kinda cells are those?
[17:33] <kepstin-laptop> t_p: I get https://www.kepstin.ca/dump/a.jpeg for that image, seems to be as expected...
[17:33] <t_p> no, please see my test.jpg
[17:33] <t_p> this is not all black
[17:34] <t_p> you can open the a.tif with imageJ
[17:34] <JodaZ> t_p, the one you and kepstin-laptop made aren't all black either, just darker
[17:34] <kepstin-laptop> ... oh, that's a 16bit tif image
[17:34] <kepstin-laptop> none of the apps I have can display it correctly.
[17:34] <t_p> yes, 16bit tif image
[17:34] <t_p> kepstin-laptop: http://imagej.nih.gov/ij/
[17:35] <t_p> you can open the tif with imageJ
[17:35] <JodaZ> t_p, note that for science you never ever want jpeg's anyways
[17:35] <t_p> JodaZ: yes, but i just want to start with jpeg with simple
[17:36] <JodaZ> ?
[17:36] <kepstin-laptop> hmm. I'd assume that most of the apps are rescaling the 16bit values to 8bit values linearly.
[17:36] <t_p> JodaZ: my final purpose is to convert many tiff to mp4 video
[17:37] <JodaZ> t_p, just filter them to be brighter
[17:37] <t_p> JodaZ: can you give me the command line text?
[17:38] <kepstin-laptop> yeah, my guess is that the tiff image isn't actually using the full 16bit range, but rather only has fairly low values; or alternatively, it requires a gamma correction that isn't being done.
[17:38] <t_p> kepstin-laptop: good
[17:39] <t_p> i think the mechanism of converting to jpg and to mp4 video is the same
[17:39] <t_p> kepstin-laptop: any more tips?
[17:40] <kepstin-laptop> hmm. the standard way of doing gamma correction in ffmpeg is lutrgb, but I suspect that it only takes 8-bit input, which means you'd get nasty banding in the output.
[17:41] <t_p> what's -pix_fmt gray16be
[17:41] <t_p> "-pix_fmt gray16be"
[17:41] <JodaZ> t_p, just convert the tiff to a 8bit png before handing it to ffmpeg
[17:41] <JodaZ> use imagemagick
[17:42] <t_p> convert to 8bit is lossless?
[17:43] <kepstin-laptop> t_p: obviously not; when you convert 16bit to 8bit you have to throw away half of the data.
[17:44] <kepstin-laptop> apparently your imageJ tool is doing a better job of picking which half of the data to throw away than ffmpeg is.
[17:44] <t_p> why ffmpeg can't convert on the 16bit format?
[17:44] <kepstin-laptop> t_p: it is, in a way similar to what most other tools I've tested do by default
[17:44] <JodaZ> t_p, doesn't matter, since you are only doing it to make a video, and videos are lossy and 8bit if you want it to play on anything
[17:45] <kepstin-laptop> e.g. you get the same result by opening that image in the gimp
[17:45] <t_p> the gimp is also all black?
[17:46] <kepstin-laptop> not all black, just very dark
[17:47] <JodaZ> kepstin-laptop, tiff might have a level specifier on top of the raw 16bit data so it would display it brighter
[17:47] Action: kepstin-laptop notes that imagemagick also gives a very dark result on that image
[17:48] <JodaZ> so what doesn't give a dark result for that image?
[17:48] <JodaZ> is it an 16bit image?
[17:49] <c_14> Keshl: The option will make FFmpeg err out when -ss > -to . I'm thinking something like -strict_seeking or so, but I'm not sure.
[17:49] <kepstin-laptop> JodaZ: the only known tool that gives a bright result for the image is the 'imageJ' tool that t_p is using to compare
[17:50] <Keshl> c_14: o.O... Odd feature, but I don't know what to name it, sorry. x.x
[17:51] <kepstin-laptop> hmm. if you run "convert a.tif -auto-level a.jpeg" you get the same result as t_p's desired result
[17:51] <kepstin-laptop> which means that the tool t_p is using is probably just automatically doing some levels stretching
[17:51] <kepstin-laptop> and the image is actually dark
[17:52] <JodaZ> kepstin-laptop, imageJ probably does auto level correction for displaying it
[17:52] <JodaZ> eh, yeah, you said that
[17:52] <kepstin-laptop> you wouldn't want to separately auto-level each frame of an animation, tho, that would look bad
[17:53] <kepstin-laptop> so it would probably be best to find some suitable value for -level in imagemagick, and run all of the images through it with the same -level setting
[17:53] <t_p> you say imageJ can make result brighter?
[17:53] <JodaZ> kepstin-laptop, would a ffmpeg brightness filter operate on the 16bit data?
[17:53] <kepstin-laptop> JodaZ: I'm not sure. I suspect most of the filters are probably 8-bit only
[17:57] <JodaZ> imageJ says "Display range: 66 - 4433"
[17:57] <JodaZ> so i assume that it only found 16bit numbers in that range and is using it to display
[17:58] <kepstin-laptop> and what do you know, imagemagick "convert a.tif -level 66,4433 a.png" looks pretty nice :)
[18:00] <t_p> kepstin-laptop: it is awesome
[18:01] <t_p> does ffmpeg have such options?
[18:01] <Wader8> okay c_14 , just got back but have some stuff to do around here ... oh i'll think of something for the name, I just happen to be the naming guy, and don't like when programmers make it so context-specific sometimes hehe (generally, not talking about ffmpeg)
[18:07] <kepstin-laptop> t_p: looks like the 'geq' filter in ffmpeg can operate on 16-bit data
[18:08] <kepstin-laptop> ... er, maybe not
[18:08] <kepstin-laptop> no, it's being dithered to 8bit before the filter, it looks like.
[18:08] <t_p> how to use it?
[18:11] <kepstin-laptop> you don't want to use it, it's doing the 8bit conversion before the geq filter.
[18:11] <JodaZ> kepstin-laptop, can you force it with -pix_fmt ?
[18:12] <t_p> what's the value?
[18:13] <kepstin-laptop> the only filters that accept gray16 as input are decimate, extractplanes, idet, psnr, and yadif.
[18:13] <JodaZ> kepstin-laptop, rgb48 ?
[18:13] <kepstin-laptop> I wonder if some take a higher-bit-depth yuv or rgb format that could be used
[18:16] <kepstin-laptop> let's see... colorchannelmixer can do RGB48. Could probably do some horrible hack with that to scale the image brighter :/
[18:21] <t_p> ok
[18:22] <kepstin-laptop> t_p: your best option right now is probably to batch-convert the images with imagemagick or some other tool before using them in ffmpeg
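A sketch of that pre-pass, assuming bash and ImageMagick, using the 66-4433 display range found above (output filenames are illustrative):

```shell
i=0
for f in *.tif; do
    # Stretch the useful 16-bit range to full scale, emitting 8-bit PNGs.
    convert "$f" -level 66,4433 "$(printf 'frame%04d.png' "$i")"
    i=$((i+1))
done
# Then encode the 8-bit frames.
ffmpeg -framerate 24 -i frame%04d.png -c:v libx264 -pix_fmt yuv420p out.mp4
```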
[18:22] <kepstin-laptop> but if you feel like doing some coding, it's a great opportunity to go and add high-bit-depth support to some ffmpeg filters :)
[18:23] <t_p> it is better to do lossless conversion
[18:23] <t_p> but it seems hard to achieve?
[18:23] <JodaZ> t_p, what video format do you think supports 16bit anyways?
[18:23] <t_p> any, mp4 for example
[18:24] <kepstin-laptop> which codec were you planning to use in your mp4 file?
[18:24] <t_p> libx264?
[18:25] <kepstin-laptop> x264 only goes up to 10bit, and the resulting files have limited playback compatibility
[18:25] <t_p> so do you have any suggestion?
[18:26] <kepstin-laptop> if you're making a video that's just for entertainment rather than scientific analysis, do an 8bit encode.
[18:26] <t_p> H.264 = libx264?
[18:26] <kepstin-laptop> libx264 is a library containing an encoder for the H.264 format
[18:26] <t_p> i want to do scientific analysis
[18:27] <kepstin-laptop> then don't use a lossy video format, just look at the tiff files directly.
[18:28] <t_p> ok
[18:29] <t_p> imageJ can convert to avi
[18:31] <kepstin-laptop> yeah, but that would also be converting to 8bit and using a (probably) lossy codec.
[18:32] <JodaZ> lossless 16bit mjpeg
[18:32] <JodaZ> obviously
[18:32] <JodaZ> xD
[18:32] <t_p> JodaZ: what do you mean?
[18:32] <JodaZ> nvm
[18:33] <t_p> impossible?
[18:34] <JodaZ> yes
[18:34] <JodaZ> lossless video is not practical, lossless 16bit video is even more impractical
[18:40] <kepstin-laptop> I mean, you could throw it in a file with e.g. ffv1 codec, but I dunno what you'd do with it after that.
[18:45] <kepstin-laptop> (huh, that's actually kind of amusing, saving that image as ffv1 in an mkv file is somewhat better compression than the original tiff)
[18:46] <kepstin-laptop> of course, you run into the problem that there's nothing really useful to do with the file except maybe extract tiff frames from it
[18:48] <JodaZ> kepstin-laptop, that's because the tiff is uncompressed
[20:21] <voip_> Hello guys
[20:21] <voip_> We are transcoding live streams with ffmpeg. A special program checks the linux process list and restarts the ffmpeg processes.
[20:21] <voip_> Sometimes after several hours an ffmpeg pid is in state Sl, but in fact ffmpeg has stopped transcoding
[20:21] <voip_> How to solve this issue ?
[20:57] <Marcin_PL> Hello. Can I stream somehow transcode's output through ffmpeg on the fly?
[21:00] <BtbN> what?
[21:05] <kepstin-laptop> Marcin_PL: like, pipe the output from some process into ffmpeg? sure.
[21:07] <Marcin_PL> kepstin-laptop: Yeah, I tried, but ffmpeg somehow can't see the input. Maybe transcode cannot produce output to a pipe?
[21:08] <kepstin-laptop> oh, you're talking about the 'transcode' tool specifically, not transcoding in general
[21:09] <Marcin_PL> Yes, I do :) But the point is deshaking the video (stabilization of hand recording)
[21:09] <kepstin-laptop> what do you need the 'transcode' tool for specifically? I'd assume most things it can do can probably be done in ffmpeg by itself nowadays
[21:11] <Marcin_PL> I'm updating now the ffmpeg, but the point is default deshake in transcode looks perfect for me, but '-vf deshake' in ffmpeg doesn't. And vidstabdetect/vidstabtransform aren't working for me.
[21:12] <Marcin_PL> I'm using MJPEG+PCM16le inside AVI as input and I'd like to get the same at the output
[21:12] <Marcin_PL> transcode can do the trick, but produces HUGE raw files (from 95 MB pumped up to 1.4 GB)
[21:13] <kepstin-laptop> eh, 1.4gb isn't that big of a raw file. Fits in ram on my machines even.
[21:13] <kepstin-laptop> :)
[21:14] <Marcin_PL> OK, but it's a 2 minute video at 640x480… I'd rather have MJPEG. :)
[21:15] <Marcin_PL> What kind of machine would I need to stabilize a 1.5 hour video. :D
[21:15] <kepstin-laptop> but yeah, sounds like your problems are convincing transcode to use an alternate output format or pipe, not with ffmpeg
[21:15] <Marcin_PL> I was trying to pipe it as raw, and catch it in ffmpeg, producing mjpeg
[21:16] <Marcin_PL> OK, and what's wrong with the mjpeg and -vf vidstabdetect/transform?
[21:17] <Marcin_PL> Wait a moment, I'll paste the output
[21:19] <Marcin_PL> ffmpeg -i $1 -acodec copy -vcodec mjpeg -q 10 -vf deshake stab_$1 works perfect (but the output is not nice, without zoom, with mirroring edges instead, or other mess around)
[21:19] <Marcin_PL> $1 stands for filename after script name of course
[21:21] <Marcin_PL> Another bad thing is over-blurring of some moving colors, e.g. black letters painted on a yellow jacket.
[21:22] <Marcin_PL> It has green shadows moving after it.
[21:22] <Marcin_PL> And it's not fault of mjpeg, I checked.
[21:23] <Marcin_PL> So I tried pair like that: http://pastebin.com/CUA3KktF
[21:28] <Marcin_PL> Output of 1st pass (filename replaced): http://pastebin.com/VFk1qxNk
[21:59] <F3r> Hello, i wanted to know why the 'silencedetect' filter does not get any results on a video file that is fully silent. ffmpeg -i INPUT_FILE -af silencedetect=n=-40dB:d=1 -f null -
[21:59] <F3r> this is the command line i use,
[21:59] <F3r> thanks!!
[22:11] <Marcin_PL> F3r: I'm just guessing… Has it got any audio stream?
[22:16] <F3r> this is the output i get
[22:16] <F3r> [silencedetect @ 0xadbe780] silence_start: 0.0448526 frame=27456 fps=14178 q=0.0 Lsize=N/A time=00:17:36.27 bitrate=N/A video:1716kB audio:181952kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
[22:26] <Marcin_PL> Gahhh, now I can't update Mint/Debian… :/
[22:26] <Marcin_PL> Nor ffmpeg from its repo.
[22:38] <luancardoso> Hi everyone. I'm trying to decode a rtsp stream and randomly getting a RTCP_BYE in the packet. In the method rtcp_parse_packet, inside rtpdec.c, it's reading a payload with length 24 but the payload has 48. Changing this method to read a 48 payload works fine. Is this correct? Am I missing something?
[23:16] <bencc> ffmpeg can receive RTMP, but what exactly does that mean?
[23:16] <bencc> I'm using RTMP media server
[23:16] <bencc> Flash clients connect to it with a NetConnection, create NetStreams, and send and receive webcam and mic streams
[23:17] <bencc> can I just forward all RTMP messages from a client to ffmpeg or do I need to create a custom RTMP connection with custom messages?
[23:20] <rjp421> bencc, you would need to find the correct stream URI to use as the input in ffmpeg
[23:21] <rjp421> which media server
[23:23] <bencc> rjp421: https://github.com/processone/oms
[23:23] <bencc> what do you mean by stream URI?
[23:24] <bencc> rjp421: the server can handle RTMP connection
[23:24] <bencc> what does RTMP stream means?
[23:25] <rjp421> i.e. rtmp://media.whohacked.me/live/livestream
[23:25] <rjp421> you want to play or publish with ffmpeg?
[23:25] <bencc> I want to receive RTMP stream and transcode it to hls
[23:26] <bencc> maybe not transcode but segment
[23:27] <bencc> rjp421: in a flash app you first create a NetConnection and connects to the server
[23:27] <bencc> then you send a createStream message
[23:27] <bencc> so I'm not sure what rtmp://media.whohacked.me/live/livestream does
[23:28] <rjp421> for adobe media server, i use: ffmpeg -loglevel info -i 'rtmp://media.whohacked.me/live/livestream' -c:v copy -g 48 -acodec copy -f tee -map 0:v -map 0:a '[f=flv]rtmp://media.whohacked.me/livepkgr/livestream?adbe-live-event=liveevent'
[23:29] <rjp421> but i could play and republish (but not transcode) from the server-side actionscript..
[23:30] <bencc> when requesting a URI there is no nc.connect() and nc.createStream() ?
[23:30] <rjp421> to transcode you might want " -c:v libx264 -profile:v main -level 31 -vf "fps=24,scale=854:480,format=yuv420p" -g 48 -acodec aac -strict -2 -ar 44100 -ac 2 -ab 96k -af aresample=async=1:min_hard_comp=0.100000:first_pts=0 -sn" after the input
[23:31] <rjp421> the URI is made up of the path to the application, and the name of the netstream... i think it depends on the server
[23:31] <rjp421> but to play with ffmpeg you need to enable the videoSampleAccess on the NetStream
[23:32] <bencc> what is videoSampleAccess?
[23:32] <rjp421> allows access to the raw video/audio data
[23:32] <bencc> so maybe ffmpeg automatically connects and sends a createStream message?
[23:33] <bencc> in Flash?
[23:33] <rjp421> ffmpeg uses librtmp to play and publish, watch the output using -loglevel verbose
[23:34] <bencc> so it emulates a Flash session?
[23:34] <rjp421> yes, videoSampleAccess on the flash NetStream.. and yes it will create its own version of a netconnection etc
[23:35] <bencc> cool
[23:35] <bencc> simpler than I thought
[23:36] <bencc> I thought that maybe I should wrap the video packets in RTMP without a session
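Putting the thread together, a hedged sketch of the RTMP-to-HLS step bencc described (URI from rjp421's example; segment options are illustrative, and `-c copy` assumes the incoming stream is already H.264/AAC):

```shell
ffmpeg -i 'rtmp://media.whohacked.me/live/livestream' \
    -c copy -f hls -hls_time 10 -hls_list_size 6 stream.m3u8
```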
[00:00] --- Wed Oct 8 2014