[Ffmpeg-devel-irc] ffmpeg.log.20140528
burek
burek021 at gmail.com
Thu May 29 02:05:01 CEST 2014
[00:00] <b_jonas> LukeMaxwell: you can use the -report option to make ffmpeg write all the log output to a regular file
[00:01] <b_jonas> LukeMaxwell: and you might want to increase the log level to -v 99 as well
[00:03] <llogan> increased verbosity is often more annoying than useful
[00:57] <LukeMaxwell> sorry llogan I went to the store real fast
[01:05] <LukeMaxwell> okay so here's the new command that I ran
[01:06] <LukeMaxwell> ffmpeg -v 99 -report -f v4l2 -i /dev/video0 -an -r 20 -s 640x480 /tmp/testnow.mkv
[01:22] <LukeMaxwell> here's the output: http://lukemaxwell.info/ffmpeglog.txt
[01:43] <LukeMaxwell> Can anyone help me? llogan b_jonas
[01:58] <LukeMaxwell> It only recorded video for one second
[02:16] <llogan> LukeMaxwell: did you tell it to stop?
[02:17] <LukeMaxwell> Yes
[02:17] <LukeMaxwell> I hit ctrl C
[02:17] <llogan> what's wrong with the output?
[02:17] <LukeMaxwell> it was running for a good 30 seconds before I told it to stop
[02:22] <En-ett_> hi
[02:22] <En-ett_> Small question. Does having a certain resolution indicate what aspect ratio you have?
[02:22] <En-ett_> I would have thought so
[02:22] <c_14> depends on the sar
[02:22] <En-ett_> Like, if I have 1920x1080p video
[02:22] <En-ett_> it means the aspect ratio is inevitably 16:9
[02:22] <En-ett_> right?
[02:23] <c_14> Only if the pixels are square, if they aren't it doesn't.
[02:23] <En-ett_> and yet, I'm using this software that converts videos from mts <-> avi, and it gives me the option to choose both res and aspect ratio
[02:23] <En-ett_> c_14: hmm ok
[02:23] <En-ett_> c_14: any idea how may I easily determine the aspect ratio of a video?
[02:23] <En-ett_> (I know that res is 1440 x 1080)
[02:24] <En-ett_> by going to vlc -> tools -> codec info
[02:24] <c_14> ffprobe will tell you
[02:24] <c_14> it's the dar
[02:24] <c_14> dar -> display aspect ratio
[02:27] <c_14> In case you're wondering (width/height)*sar = dar
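A minimal sketch of reading those values with ffprobe, along the lines c_14 suggests (the filename is a placeholder):
    # print width, height, SAR and DAR of the first video stream ("input.mp4" is a placeholder)
    ffprobe -v error -select_streams v:0 -show_entries stream=width,height,sample_aspect_ratio,display_aspect_ratio -of default=noprint_wrappers=1 input.mp4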
[02:29] <quattro_> is it possible to transcode and save a .avi -> mp4 and while transcoding streaming the content?
[02:29] <llogan> http://ffmpeg.org/ffmpeg-formats.html#tee
[02:29] <llogan> https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs#Teepseudo-muxer
[02:30] <sacarasc> Can you do that at all with ffmpeg and mp4?
[02:30] <sacarasc> MOOVs and all that.
[02:31] <llogan> i was assuming that the streaming was not mp4, but the local is.
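A hedged sketch of the tee pseudo-muxer llogan links to, under his assumption that the local copy is mp4 and the stream is not (filenames and the rtmp URL are placeholders):
    # encode once, save local.mp4 and push an flv stream at the same time (names/URL are placeholders)
    ffmpeg -i input.avi -map 0 -c:v libx264 -c:a aac -f tee "local.mp4|[f=flv]rtmp://example.com/live/stream"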
[02:36] <LukeMaxwell> hey llogan can you help me?
[02:37] <llogan> LukeMaxwell: try ffmpeg -report -f v4l2 -i /dev/video0 -t 30 -f null -
[02:37] <LukeMaxwell> okay one second
[02:39] <LukeMaxwell> it's running but let's see if it freezes up at all
[02:40] <llogan> can you provide the output?
[02:40] <LukeMaxwell> llogan: Okay so ffmpeg stopped itself and said "Encoder did not produce proper pts, making some up."
[02:43] <llogan> LukeMaxwell: can you provide a link to the complete output?
[02:44] <llogan> also the output of "v4l2-ctl --list-formats-ext"
[02:45] <LukeMaxwell> yes one second
[02:45] <LukeMaxwell> llogan: can I run v4l2-ctl -report --list-formats-ext ?
[02:48] <llogan> no
[02:54] <LukeMaxwell> how would i capture the output?
[02:55] <sacarasc> The text output.
[02:55] <llogan> probably by copy and pasting from your console
[02:58] <LukeMaxwell> okay
[03:13] <littlebat> LukeMaxwell: ffmpeg ... 2>&1 | tee -a ffmpeg.log
[03:15] <LukeMaxwell> thanks littlebat
[03:49] <LukeMaxwell> well now my desktop is having magical issues with wifi
[04:08] <benlieb> I was recently recommended to use this command "ffmpeg -y -f lavfi -i color=c=black:s=720x362:sar=8/9:r=ntsc:d=4 -t 4 -vf..." to create an "empty" video. Where can I find documentation for the key values after the -i ?
[04:08] <benlieb> they don't seem to be part of a filtergraph and the docs on the -i only mention a file.
[04:11] <benlieb> I was just disconnected from the server. So if anyone answered my last question could you please repeat the answer?
[04:14] <c_14> benlieb: That's a lavfi input virtual device
[04:14] <c_14> https://ffmpeg.org/ffmpeg-devices.html#lavfi
[04:15] <benlieb> c_14: tnx yet again :)
[04:22] <benlieb> c_14: that's interesting, but still not thorough docs. Are there more? I just don't understand it. And none of their examples use ffmpeg or -i
[04:26] <benlieb> I guess I'm to understand that everything after the -i is its own filtergraph
[04:27] <c_14> pretty much
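A tiny sketch of that idea: with -f lavfi, the string after -i is parsed as its own filtergraph source rather than a file (the output name is a placeholder):
    # generate 4 seconds of 720x362 black video from the lavfi device ("empty.mp4" is a placeholder)
    ffmpeg -y -f lavfi -i "color=c=black:s=720x362:r=25:d=4" empty.mp4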
[04:46] <benlieb> c_14: Would you be able to help me understand this command better:
[04:46] <benlieb> `ffmpeg -y -i #{@dir}/intros/#{@lesson.id}.mp4 -i #{@dir}/raw_lessons/#{@lesson.id}.mp4 -filter_complex 'aevalsrc=0:d=4[a1];[0:0][a1][1:0][1:1]concat=n=2:v=1:a=1:unsafe=1[v][a]' -map '[v]' -map '[a]' #{@dir}/mp4_big/#{@lesson.id}.mp4`
[04:46] <benlieb> I can't put the filter_complex graph into "english"
[04:48] <benlieb> aevalsrc=0 is generate silence, yes?
[04:48] <c_14> 'create 4 seconds of silence and store it in a1; take the first stream from the first input file and a1 and concatenate it with the first stream of the second input file and the second stream of the second input file and don't fail if the segments have different formats'
[04:48] <c_14> 'and save the video in v and the audio in a'
[04:49] <benlieb> what does the ; do?
[04:49] <benlieb> I thought filters were separated by ,
[04:50] <c_14> filters are separated with commas, filterchains are separated with semicolons and the filtergraph is all the filterchains combined
[04:53] <benlieb> so aevalsrc=0:d=4[a1] is a filter...
[04:53] <c_14> no, aevalsrc=0:d=4 is a filter
[04:53] <c_14> [a1] is a variable if you will
[04:54] <benlieb> the docs for aevalsrc look nothing like that https://ffmpeg.org/ffmpeg-filters.html#aevalsrc
[04:55] <c_14> yeah they do?
[04:55] <c_14> It's just >examples > generate silence with a duration tacked on
[04:56] <benlieb> I mean the concept of "where" to put that silence, i.e. the 'variable'
[04:58] <c_14> The docs for aevalsrc will only show you the doc for a single filter, look here: https://ffmpeg.org/ffmpeg-filters.html#Filtergraph-syntax-1
[04:59] <c_14> Or read this: https://trac.ffmpeg.org/wiki/FilteringGuide
[05:03] <benlieb> c_14 ok so each filter has "pads" on either side... except when it's a source or a sink
[05:03] <c_14> yep
[05:03] <benlieb> so aevalsrc=0:d=4[a1] has nothing on the left because it's a source
[05:03] <c_14> yep
[05:04] <benlieb> in this the pads are in [ ]?
[05:04] <benlieb> [0:0][a1][1:0][1:1]concat=n=2:v=1:a=1:unsafe=1[v][a]
[05:04] <benlieb> so [v][a] are the output pads
[05:05] <benlieb> and it needs 4 input pads...
[05:05] <c_14> yep, 2 video input pads, 2 audio input pads 2*1 + 2*1 = 4
[05:06] <benlieb> In this [0:0][a1][1:0][1:1]
[05:06] <benlieb> did [a1] exist before aevalsrc "defined" it?
[05:06] <c_14> nope
[05:07] <c_14> you can try leaving out the aevalsrc filterchain and ffmpeg should complain that a1 doesn't exist
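A small sketch of the syntax c_14 describes: commas join filters into a chain, semicolons separate chains, and [labels] name the pads between them (filenames are placeholders):
    # chain 1 scales and flips the video into [v]; chain 2 halves the audio volume into [a]
    ffmpeg -i in.mp4 -filter_complex "[0:v]scale=640:-2,hflip[v];[0:a]volume=0.5[a]" -map "[v]" -map "[a]" out.mp4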
[05:07] <benlieb> the concat docs don't seem to explicitly state the order of the input pads
[05:08] <benlieb> also, how do you know if [0:0] is audio or video?
[05:08] <c_14> By looking at the source file.
[05:08] <c_14> you can also use [0:v]
[05:08] <c_14> That will grab all the video streams
[05:08] <c_14> or [0:v:0]
[05:08] <c_14> that will grab the first video stream
[05:09] <c_14> ditto for audio
[05:09] <c_14> ditto for subtitles
[05:09] <benlieb> how do you "look" at the source file?
[05:09] <c_14> ffprobe
[05:09] <c_14> ffprobe $file
[05:10] <c_14> the normal ffmpeg output will list it as well
[05:11] <benlieb> what is the normal ffmpeg output?
[05:12] <benlieb> just ffmpeg file?
[05:12] <c_14> anytime you do anything with ffmpeg and don't throw away stderr
[05:13] <benlieb> exiftool shows metadata but doesn't show the streams
[05:14] <c_14> Just ffprobe $file and it'll have all the streams in the output.
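Two illustrative one-liners along those lines (filenames are placeholders):
    # list each stream's index, type and codec
    ffprobe -v error -show_entries stream=index,codec_type,codec_name -of compact input.mp4
    # pick streams by type instead of raw index: first video and first audio
    ffmpeg -i input.mp4 -map 0:v:0 -map 0:a:0 -c copy out.mp4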
[05:15] <benlieb> exiftool is showing me something confusing: https://gist.github.com/pixelterra/246947bfe27315cfab21
[05:16] <benlieb> it says Image Width : 640
[05:16] <benlieb> but Source Image Width : 720
[05:16] <benlieb> Image Height & Source Image Height are both : 362
[05:18] <benlieb> Also says Image Size : 640x362
[05:18] <benlieb> What is the difference between these things?
[05:20] <c_14> I'm guessing image width has the pixel aspect ratio applied to it.
[05:27] <benlieb> does the [v][a] at the end of concat also "create" those "variables"
[05:27] <benlieb> or are these predefined.
[05:27] <benlieb> c_14: ^
[05:28] <c_14> it creates them as well
[05:31] <benlieb> c_14 what's your day job? You work with video for work?
[05:31] <c_14> nah, systems administration
[05:31] <c_14> I play around with videos for fun.
[05:33] <benlieb> is there a book on ffmpeg?
[05:34] <c_14> There probably is somewhere, I wouldn't know of one though.
[05:35] <benlieb> I wonder if I put the silent audio in the "intro" creation command, then it would make the concat command much simpler, no? Probably could just concat "implicitly"
[05:35] <benlieb> `ffmpeg -y -f lavfi -i color=c=black:s=720x362:sar=8/9:r=ntsc:d=4 -t 4 -vf #{filters.join(',')} #{@dir}/intros/#{@lesson.id}.mp4`
[05:36] <c_14> you could use the aevalsrc as another input for that command, yes
[05:39] <benlieb> so `ffmpeg -y -i aevalsrc=0:d=4 -f lavfi -i color=c=black:s=720x362:sar=8/9:r=ntsc:d=4 -t 4 -vf #{filters.join(',')} #{@dir}/intros/#{@lesson.id}.mp4`
[05:39] <c_14> should work
[05:39] <c_14> might need the -f lavfi, not sure
[05:42] <benlieb> would that simplify the concat process at all?
[05:43] <c_14> It would allow you to use -vf concat=n=2
[05:45] <benlieb> hm. didn't work so good.
[05:45] <benlieb> `ffmpeg -y -f lavfi -i aevalsrc=0:d=4 -i color=c=black:s=720x362:sar=8/9:r=ntsc:d=4 -t 4 -vf #{filters.join(',')} #{@dir}/intros/#{@lesson.id}.mp4`
[05:45] <benlieb> produces
[05:46] <benlieb> color=c=black:s=720x362:sar=8/9:r=ntsc:d=4: No such file or directory
[05:46] <benlieb> two -f lavfi ?
[05:46] <benlieb> that worked
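For reference, a sketch of the corrected form with a -f lavfi in front of each generated input; the -vf part and output name from the Ruby template are omitted/placeholders:
    # silence from aevalsrc plus black video from color, each fed through the lavfi device
    ffmpeg -y -f lavfi -i "aevalsrc=0:d=4" -f lavfi -i "color=c=black:s=720x362:sar=8/9:r=ntsc:d=4" -t 4 intro.mp4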
[05:47] <benlieb> now `ffmpeg -y -i #{@dir}/intros/#{@lesson.id}.mp4 -i #{@dir}/raw_lessons/#{@lesson.id}.mp4 -vf concat=n=2 #{@dir}/mp4_big/#{@lesson.id}.mp4`
[05:47] <benlieb> produces
[05:47] <benlieb> Simple filtergraph 'concat=n=2' does not have exactly one input and output
[05:49] <c_14> hmm, maybe you can only use the concat filter in a complex filtergraph, try -filter_complex '[0][1]concat=n=2:a=1:v=1[a][v]'
[05:51] <benlieb> `ffmpeg -y -i #{@dir}/intros/#{@lesson.id}.mp4 -i #{@dir}/raw_lessons/#{@lesson.id}.mp4 -filter_complex '[0][1]concat=n=2:a=1:v=1[a][v]' #{@dir}/mp4_big/#{@lesson.id}.mp4`
[05:51] <benlieb> Output pad "out:v0" with type video of the filter instance "Parsed_concat_0" of concat not connected to any destination
[05:54] <c_14> '[0:0][0:1][1:0][1:1]concat=n=2:v=1:a=1[v][a]' maybe?
[05:55] <benlieb> Output pad "out:v0" with type video of the filter instance "Parsed_concat_0" of concat not connected to any destination
[05:55] <benlieb> `ffmpeg -y -i #{@dir}/intros/#{@lesson.id}.mp4 -i #{@dir}/raw_lessons/#{@lesson.id}.mp4 -filter_complex '[0:0][0:1][1:0][1:1]concat=n=2:v=1:a=1[v][a]' #{@dir}/mp4_big/#{@lesson.id}.mp4`
[05:56] <benlieb> need to use -map ?
[05:56] <c_14> yes, -map '[v]' -map '[a]'
[05:57] <c_14> did you use a map in the previous one? if not add the map and try again.
[05:58] <benlieb> ok that worked
[05:58] <benlieb> so it looks like -filter_complex '[0:0][0:1][1:0][1:1]concat=n=2:v=1:a=1[v][a]' -map '[v]' -map '[a]' is the simplest form of a concat. Crazy.
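Spelled out with placeholder filenames, that minimal concat looks roughly like this:
    # join intro + lesson, video and audio, into one output
    ffmpeg -i intro.mp4 -i lesson.mp4 -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" -map "[v]" -map "[a]" joined.mp4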
[05:59] <benlieb> More like a concathouse
[05:59] <benlieb> that's a pun
[06:01] <benlieb> technically the default for n is 2 and the default for v is 1, so those can be left off
[06:02] <benlieb> c_14 6
[06:02] <benlieb> ^
[06:04] <c_14> I'm wondering if you might be able to leave off everything except for 'concat[v][a]'
[06:04] <c_14> ffmpeg might automatically throw all streams into the filter
[06:06] <benlieb> this worked: `ffmpeg -y -i #{@dir}/intros/#{@lesson.id}.mp4 -i #{@dir}/raw_lessons/#{@lesson.id}.mp4 -filter_complex 'concat=a=1[v][a]' -map '[v]' -map '[a]' #{@dir}/mp4_big/#{@lesson.id}.mp4`
[06:06] <benlieb> I'll try to take off the maps
[06:06] <benlieb> or should I leave them on?
[06:06] <c_14> I don't think ffmpeg'll like that.
[06:08] <benlieb> seems the above is as short as i can get it. The a=1 has to stay or it gags
[06:08] <benlieb> since I'm adding audio (silence) to the intro, how hard would it be to add actual audio, like a song snippet there?
[06:09] <c_14> you just need the song or the song snippet as an input file
[06:10] <c_14> just take your command, replace the -f lavfi -i aeval.* with -i song.format and then add a -c:a copy as an output option so you don't reencode the audio
[06:17] <benlieb> will try
[06:17] <benlieb> does the audio crop?
[06:18] <benlieb> c_14 ^
[06:19] <c_14> you're using -t 4 as an output option, so yes
[06:20] <benlieb> c_14 that didn't get the audio...
[06:20] <DaSpawn> I am trying to do segments of a live mp4 video stream. They work ok, but the subsequent video segments, when played, think they are streaming and are not seekable; only the first segment is. How can I make the subsequent segments seekable/correct?
[06:21] <DaSpawn> my current command: ~/bin/ffmpeg -i rtsp://10.0.1.102:555 -map 0 -c:v copy -an -f segment -segment_time 600 -segment_wrap 144 out%03d.mp4
[06:21] <c_14> benlieb: hmm?
[06:21] <benlieb> trying again c_14
[06:23] <benlieb> c_14: strangely, the audio is only audible in the "intro" once it's joined to the lesson...
[06:24] <c_14> benlieb: that's strange
[06:24] <benlieb> playing the "intro" file that's created separately there is no audio
[06:24] <benlieb> hm...
[06:25] <benlieb> c_14: `ffmpeg -y -i "#{audio}" -f lavfi -i color=c=black:s=720x362:sar=8/9:r=ntsc:d=4 -t 4 -c:a copy -vf #{filters.join(',')} #{@dir}/intros/#{@lesson.id}.mp4`
[06:27] <c_14> benlieb: that should work...
[06:28] <benlieb> c_14 there's no volume slider on the quicktime player for the intro, but there is on the lesson once it's added...
[06:30] <c_14> can you pastebin the ffmpeg output for that command or the ffprobe output for the output file?
[06:34] <benlieb> c_14: https://gist.github.com/pixelterra/8c76027af998cdf3824b
[06:37] <c_14> hmm, the audio stream is there
[06:38] <benlieb> c_14 the stream identifiers are weird.
[06:39] <benlieb> shouldn't it be [0:0] and [1:0]
[06:39] <benlieb> ?
[06:39] <c_14> Nah, then there would be two files
[06:39] <c_14> [file:stream]
[06:40] <benlieb> c_14: i tried to specify [0:0] and [0:1] in -map, but it said no [0:1] found
[06:44] <c_14> right, because in the command you have two input files, the audio and the color, the ffprobe, however, is showing you the output file which is one file with two streams. Hence the [0:0] [0:1]
[06:44] <benlieb> c_14: ok, right. So why doesn't it play in the intro then though?
[06:45] <c_14> I'd probably blame the player for that...
[06:45] <c_14> tried it with ffplay?
[06:46] <benlieb> c_14: but after I concat it to another video it plays over the intro section...
[06:47] <c_14> with 'plays over the intro section', do you mean it works correctly?
[06:47] <c_14> ie has 4 seconds of music and then the regular audio for the rest?
[06:47] <benlieb> yes
[06:48] <c_14> I honestly couldn't tell you.
[06:49] <benlieb> yeah, it doesn't break the bank, it's just weird.
[06:49] <benlieb> The other thing is that it sets the metadata of the lesson to the metadata of the audio clip...
[06:49] <benlieb> interesting
[06:50] <c_14> you can change that with -map_metadata/-metadata look them up in the docu.
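A hedged example of those two options (the title and filenames are made up):
    # drop the global metadata inherited from the inputs and set a fresh title, without re-encoding
    ffmpeg -i lesson.mp4 -map_metadata -1 -metadata title="Lesson 1" -c copy lesson_clean.mp4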
[06:53] <benlieb> c_14: thanks so much for your expertise. This would have been much harder without you!
[06:54] <c_14> np
[11:11] <gouessej> Hi. The picture still freezes but the audio continues when playing this video with ffplay even with ffmpeg 2.0.3: http://tuer.sourceforge.net/videos/Video20140524_152907_remuxed.mp4
[11:14] <gouessej> I switched to Mageia Linux 4 yesterday, mostly to benefit from a more recent version of ffmpeg, but it doesn't solve my problem
[11:15] <ubitux> works fine here
[11:16] <ubitux> but ffmpeg 2.0 is pretty old, i'm using a more recent one
[11:19] <gouessej> ok thanks, I'll try with ffmpeg 2.2.2
[11:19] <gouessej> but if the problem comes from a codec, it won't solve my problem
[11:22] <gouessej> ubitux, which version of ffmpeg do you use?
[11:23] <ubitux> tried git/master and 2.2.2
[11:26] <gouessej> Thanks. I've just tried with a static build (27 May 2014) under Windows and it works.
[11:35] <gouessej> When I use ffmpeg 2.0.3 under 64-bit Windows, I can't reproduce my bug.
[11:36] <shunya_chakra> hi, can anyone please tell me how I can install ffmpeg on ubuntu?
[11:37] <gouessej> Which version of Ubuntu do you use?
[11:37] <gouessej> It should be in the official repository except if you use Ubuntu 14.04 as far as I know
[11:38] <shunya_chakra> ubuntu 14.04
[11:39] <Keestu> I have built ffmpeg on Android and written a small application based on it. I am giving the source code of the entire application to the client. Is it OK if I give the built ffmpeg libraries along with this?
[11:43] <gouessej> shunya_chakra, Ubuntu 14.04 is shipped with LibAV instead of ffmpeg. This ppa provides only ffmpeg 1.2 for Trusty: https://launchpad.net/~jon-severinsson/+archive/ffmpeg
[11:46] <shunya_chakra> Thanks gouessej
[11:47] <gouessej> shunya_chakra, you're welcome.
[12:01] <gouessej> ubitux, which distro do you use?
[12:01] <ubitux> archlinux
[12:01] <ubitux> why would it matter?
[12:02] <gouessej> maybe. Thanks
[12:02] <gouessej> I don't know why, I'm investigating
[14:02] <naxa> there is a mediawiki-based wiki, I forget which, that has a recipe book or faq for many ffmpeg operations. anyone recall a similar page?
[14:03] <naxa> I think I got the hint from this channel years ago
[14:07] <ubitux> multimedia.cx?
[14:07] <ubitux> but that's probably not up-to-date
[14:08] <ubitux> you should refer to trac.ffmpeg.org
[14:08] <ubitux> and the official documentation
[14:54] <sgtpepper> hey guys, I've been checking the documentation but haven't found any reference to this. Is there any way, when you use "Launch" on a feed, to actually launch the process when you receive the first GET from the client
[14:55] <sgtpepper> as opposed to when you actually launch ffserver
[15:32] <naxa> ubitux: probably not multimedia.cx - it was a user thing, at least that page, and it was somehow a huge list
[15:32] <naxa> checked the ffmpeg category on m....cx, doesn't look like it
[15:34] <naxa> it was a single page of many tasks put on one page in a faq manner, in the true 'frequently' sense I suppose
[16:08] <Keestu> does ffmpeg have its own decoder for h264 or do i need to build it from libx264?
[16:14] <ubitux> ffmpeg has a builtin decoder for h264
[16:14] <ubitux> libx264 is an encoder so it won't make any difference
[16:30] <Keestu> Ah. ubitux thanks for the info.. :)
[18:12] <bencc> can I record desktop on a server?
[18:12] <bencc> in that case, what will be the screen resolution if there is no screen?
[18:16] <JodaZ> bencc, well, you need to be running a desktop
[18:17] <bencc> JodaZ: do I need something like ubuntu-desktop on Linode or DigitalOcean VPS?
[18:19] <JodaZ> you can install a desktop accessible by vnc, that's possible on any dedicated server or vps
[18:19] <JodaZ> but that's not an ffmpeg question, ask in your distribution's channel or a general linux help forum or something
[18:21] <bencc> JodaZ: I want to record the desktop with ffmpeg
[18:22] <JodaZ> bencc, yeah, for that you first need a desktop; until you have one that is not an ffmpeg question
[18:33] <bryancp> what option should I use for 8-bit mulaw? seems to be expecting 16-bit atm
[18:33] <bryancp> this is for ffplay, btw
[18:33] <bryancp> currently doing: ffplay -f mulaw -ar 8000 ...
[18:49] <thebombzen> bencc: if you're running an X display you can use ffmpeg -f x11grab -i :0.0
[18:50] <thebombzen> but if you're not running a desktop then you can't grab something that isn't there
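A sketch of such a grab, assuming display :0.0 as thebombzen uses and a made-up capture size and output name:
    # capture a 1280x720 region of X display :0.0 at 25 fps (size/output are assumptions)
    ffmpeg -f x11grab -video_size 1280x720 -framerate 25 -i :0.0 -c:v libx264 -preset ultrafast screen.mkv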
[18:51] <bencc> thebombzen: I'm installing a virtualbox guest and trying with xvfb
[18:51] <bencc> it should work, right?
[18:52] <thebombzen> bencc: I don't know what xvfb is and I have never used virtualbox, so I couldn't tell you
[18:53] <bencc> thebombzen: xvfb is X virtual framebuffer
[18:53] <thebombzen> I can't help you there. I've only tried x11 grabbing when I'm using a standard everyday desktop. sorry
[18:54] <bencc> thebombzen: so you take a server, install a desktop and grab with ffmpeg as usual?
[18:54] <the_f0ster> anyone know of any video "Footprinting" software ?
[18:54] <thebombzen> bencc: well I have never dealt with a computer that didn't have a monitor attached. so I don't know
[18:55] <bencc> ok. thanks
[18:56] <thebombzen> the_f0ster: do you mean like adding a watermark? you could do that with FFmpeg filters. look at http://trac.ffmpeg.org/wiki/FilteringGuide
[18:56] <the_f0ster> thebombzen: no.. like identifying if a video is a copy of another, or if some segment in the video is contained in another etc
[18:56] <the_f0ster> http://disp.ee.ntu.edu.tw/~pujols/Introduction%20to%20Video%20Fingerprinting.pdf
[18:57] <thebombzen> I don't really know anything about that, sorry
[19:12] <the_f0ster> thebombzen: cool, just figured this might be a good place to ask
[19:57] <llogan> bencc: maybe fbdev is a better option for you. i've never tried it though. http://www.ffmpeg.org/ffmpeg-devices.html#fbdev
[19:57] <llogan> or simply remotely connect from a computer with x11 and use x11grab.
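An untested sketch of the fbdev idea llogan points at (the device path and output name are assumptions):
    # grab 10 seconds from the Linux framebuffer device ("/dev/fb0" assumed)
    ffmpeg -f fbdev -framerate 25 -i /dev/fb0 -t 10 fbgrab.mkv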
[20:02] <Bombo> hmm how do i need to encode a h264 vid that can be played by internet explorer 11? "-vcodec libx264 -preset superfast -crf 23 -r 25" seems not to work
[20:02] <Bombo> works just in ff
[20:02] <Bombo> (html5 video tag)
[20:07] <Bombo> llogan: http://pastebin.com/9NN2N64k
[20:09] <llogan> Bombo: add -pix_fmt yuv420p as an output option
[20:09] <llogan> remove -r 25. that's the input default anyway
[20:09] <llogan> remove -threads 0. that's also default for this encoder.
[20:10] <llogan> and add "-movflags +faststart" as an output option
[20:10] <llogan> that is all.
[20:21] <Bombo> llogan: ok seems to work now, thx ;)
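Putting llogan's pointers together, the command presumably ends up something like this (input/output names are placeholders; audio options left at defaults):
    # yuv420p + faststart for browser playback (filenames are placeholders)
    ffmpeg -i input.avi -c:v libx264 -preset superfast -crf 23 -pix_fmt yuv420p -movflags +faststart output.mp4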
[21:51] <DelphiWorld> hi FFMpegsters
[21:52] <DelphiWorld> Guys, please, i need help with HLS... could someone help me get the audio tracks merged into the ts
[21:52] <bencc> llogan: what's the difference between using fbdev and xvfb with x11grab?
[21:53] <DelphiWorld> yo yo c_14
[21:54] <DelphiWorld> c_14, this is a garbage script i am using: http://paste.debian.net/102295/
[21:54] <DelphiWorld> c_14: hold on, i'll post you videos of what i want
[21:56] <DelphiWorld> c_14: this is another cmd given by iive, to add metadata to audio streams:
[21:56] <DelphiWorld> ffmpeg -re -i udp://@239.100.1.8:1234 -map 0:0 -map 0:8 -map 0:2 -c:v libx264 -vb 512k -c:a libfdk_aac -profile:a aac_he_v2 -b:a 32k -ac 2 -ar 44100 -f hls -metadata:s:a:1 language=ara -metadata:s:a:2 language=eng ./test.m3u8
[21:56] <DelphiWorld> but not working, and wait:P
[21:58] <llogan> bencc: i don't know since i've never used xvfb
[21:59] <DelphiWorld> c_14: see PM for URL
[22:00] <c_14> So you want to take a source such as that video and turn it into an hls stream?
[22:00] <DelphiWorld> c_14: no. but try to turn it ;)
[22:00] <DelphiWorld> c_14: the issue is:
[22:00] <DelphiWorld> 1. i have a MPEG2-TS DVB stream
[22:01] <DelphiWorld> it has several audio streams & one video stream
[22:01] <DelphiWorld> if i save it into a .mp4 with the option -map to choose 2 or more audio streams
[22:01] <DelphiWorld> it takes them perfectly
[22:01] <DelphiWorld> iOS plays it happily, no complaints, and i see languages to select, all working perfectly
[22:01] <DelphiWorld> but
[22:01] <DelphiWorld> if i want to turn it into HLS with same audio streams
[22:02] <DelphiWorld> 1. if i ffprobe it i don't see the language metadata in the streams
[22:02] <DelphiWorld> 2. iOS sees only one audio stream
[22:02] <DelphiWorld> i hope you got it c_14
[22:02] <c_14> 1: I think that's a bug in ffmpeg, see ticket: https://trac.ffmpeg.org/ticket/3655
[22:02] <c_14> As for 2, let me test a bit.
[22:05] <DelphiWorld> i really wish to have HLS with multi audio
[22:07] <DelphiWorld> this ticket doesn't have a patch to try ;)
[22:07] <c_14> Hmm, in my tests the hls stream has both audio tracks, it just doesn't have the metadata.
[22:07] <DelphiWorld> c_14: yes, me too, that's what i said
[22:07] <DelphiWorld> but iOS wont see 2
[22:08] <DelphiWorld> iOS will get only the first available stream
[22:08] <c_14> That's probably a limitation in iOS then.
[22:08] <c_14> Or do you know of an hls stream with which it works on iOS?
[22:08] <DelphiWorld> c_14: mmmmmmmmmmmm
[22:08] <DelphiWorld> c_14: no, didn't test another
[22:09] <DelphiWorld> c_14: when do you think this metadata issue will be fixed?
[22:09] <bencc> llogan: ok. I'll try both. thanks
[22:10] <another> you didn't test me? i'm disappointed!
[22:11] <c_14> DelphiWorld: Don't know. probably at least a week. Depends on when someone has the free time to look into and fix it.
[22:26] <DelphiWorld> c_14, this bug is not only for hls
[22:26] <DelphiWorld> c_14: it's for mpeg2ts i believe
[22:27] <c_14> I'm pretty sure I've gotten language codes working in m2ts files.
[22:27] <DelphiWorld> c_14: ok i should do:
[22:27] <DelphiWorld> -f mpegts /tmp/c14.ts?
[22:27] <DelphiWorld> right?
[22:27] <c_14> ye
[22:27] <DelphiWorld> wait
[22:31] <DelphiWorld> c_14: sorry.
[22:31] <DelphiWorld> true, i got audio this time
[22:52] <DelphiWorld> c_14: do you have any recommended 264 encoding for iGears?
[22:52] <DelphiWorld> iOS's
[22:52] <c_14> encoding settings?
[22:52] <DelphiWorld> any good H.264 parameters
[22:52] <c_14> -profile baseline is usually a good bet
[22:53] <DelphiWorld> c_14: what about fps... any example please?
[22:53] <DelphiWorld> c_14: i am asking about that cause i'm BLIND
[22:53] <DelphiWorld> i dont know anything about VIDEO:P
[22:54] <c_14> you usually don't have to worry about fps, 25 is a pretty normal setting though.
[22:54] <DelphiWorld> c_14: cause apple recommends 30
[22:54] <DelphiWorld> fps is specified by -r, right?
[22:54] <c_14> ye
[22:55] <DelphiWorld> cool
[22:55] <DelphiWorld> c_14: if i give you a stream, will you help me get similar settings?
[22:57] <c_14> It's hard pulling settings from a stream, the only x264 options you really need to be worrying about though is -crf, -preset, -profile and maybe -tune
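A rough baseline-profile sketch along those lines (filenames, audio codec and bitrate are assumptions, not from the log):
    # H.264 baseline profile at 30 fps with CRF rate control, AAC audio (names/bitrate assumed)
    ffmpeg -i input.ts -c:v libx264 -profile:v baseline -preset veryfast -crf 23 -r 30 -c:a aac -b:a 128k output.mp4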
[22:57] <DelphiWorld> hold on c_14, i will tell you the issue :P
[23:00] <DelphiWorld> c_14: there's a flag i see in most broadcasters' streams, yuv420p tv
[23:00] <DelphiWorld> how to get it?
[23:01] <DelphiWorld> some streamers ask for it
[23:01] <c_14> yuv420p is the pixel format. you can force it with -pix_fmt yuv420p
[23:02] <DelphiWorld> yes i see yuv420p but not the TV flag
[23:02] <c_14> Hmm, I've never seen that flag.
[23:03] <DelphiWorld> hold on;)
[23:09] <DelphiWorld> c_14: see pm
[23:10] <DelphiWorld> in that stream
[23:10] <DelphiWorld> it's mpeg2
[23:10] <DelphiWorld> but it has yuv420p tv
[23:10] <DelphiWorld> some other streams have it too, but with H.264
[23:16] <c_14> Hmm, I don't know what that (tv) part does or how to get it.
[23:16] <c_14> Are you sure you need it?
[23:17] <DelphiWorld> c_14: one of my stream partners, who i'm exchanging streams with, asked for it
[23:17] <DelphiWorld> myself i never needed it:)
[23:24] <c_14> I think it's the colormatrix
[23:25] <c_14> Setting the colormatrix to bt601 seems to add the (tv) part
[23:25] <c_14> -vf colormatrix=src=bt709:dst=bt601
[23:25] <DelphiWorld> c_14: how?
[23:25] <c_14> Assuming the source colormatrix is bt709
[23:26] <DelphiWorld> hold on, let me try
[23:29] <DelphiWorld> lol
[23:29] <DelphiWorld> not added
[23:29] <DelphiWorld> ffmpeg -re -i udp://@239.100.1.8:1234 -map 0:0 -map 0:8 -map 0:2 -c:v libx264 -vb 512k -vf colormatrix=src=bt709:dst=bt601 -c:a libfdk_aac -profile:a aac_he_v2 -b:a 32k -ac 2 -ar 44100 -f mp4 euronews.mp4
[23:39] <c_14> Oh, hmm. the (tv) part was getting added automatically because I was using the mpeg2video encoder.
[23:42] <DelphiWorld> lol
[23:43] <c_14> -x264opts colorprim=bt709:transfer=bt709:colormatrix=bt709:fullrange=off <- this seems to add the (tv) part though
[23:44] <DelphiWorld> let's try;)
[23:46] <DelphiWorld> c_14: lol, works:P
[23:46] <DelphiWorld> c_14: now let's see if we get any screaming from him :)
[23:46] <massdos> hello
[23:47] <massdos> anyone know why i receive this message: [abuffer @ 02d78b80] Unable to parse option value "(null)" as sample format
[23:48] <llogan> not without additional information
[23:48] <massdos> well it's simply "ffplay file.flv" :D
[23:48] Action: DelphiWorld rename FFMpeg to MediaOcean
[23:49] <DelphiWorld> massdos: output the log of ffplay
[23:49] <massdos> ah ok
[23:49] <llogan> and provide link to file if possible if it isn't huge
[23:51] <massdos> http://pastebin.com/ctcsB4j8
[23:51] <massdos> well it's about 300 mb
[23:51] <massdos> but the problem, i found, is in the zeranoe builds
[23:52] <massdos> i tried with version ffmpeg-20140414-git-5e379cd-win32-static.7z
[23:52] <massdos> and it worked correctly. audio is working
[23:52] <massdos> but then i tried with version ffmpeg-20140415-git-ef818d8-win32-static.7z
[23:52] <massdos> and i got that message
[23:52] <massdos> so
[23:53] <massdos> between these versions, something happened :D
[23:53] <llogan> could be a regression. the sample file would be needed to verify though
[23:53] <DelphiWorld> anyone have any idea how to inser a clock in video?
[23:53] <DelphiWorld> like in top right i want to display time
[23:54] <llogan> https://trac.ffmpeg.org/wiki/FilteringGuide#BurntinTimecode
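A drawtext sketch in that spirit, placing the local wall-clock time top right (filenames are placeholders; some builds may also need an explicit fontfile= option):
    # burn the current local time into the top-right corner ("input.mp4"/"clocked.mp4" are placeholders)
    ffmpeg -i input.mp4 -vf "drawtext=text='%{localtime}':x=w-tw-10:y=10:fontsize=24:fontcolor=white:box=1:boxcolor=black@0.5" -c:a copy clocked.mp4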
[23:56] <Dave77> how do I get something added to the FFmpeg FAQ?
[23:58] <c_14> Dave77: Submit a patch.
[23:59] <llogan> Dave77: if you need additional help or instructions you can ask in #ffmpeg-devel
[00:00] --- Thu May 29 2014