[Ffmpeg-devel-irc] ffmpeg.log.20150425
burek
burek021 at gmail.com
Sun Apr 26 02:05:01 CEST 2015
[01:09:29 CEST] <Prelude2004c> hey guys.. anyone know how to copy a transport stream from one place to another without touching it? every time i do -c copy it shows up on the other side as just 1 program and it says "service01" etc etc
[01:09:42 CEST] <Prelude2004c> i want to just copy it and that's it
[02:42:01 CEST] <rex____> hi
[02:42:06 CEST] <rex____> i have big videos
[02:42:20 CEST] <rex____> i have big videos and want to convert many to hq sound.
[02:42:28 CEST] <rex____> Where do i start from here or google
[07:13:57 CEST] <techtopia> hello all
[07:14:41 CEST] <techtopia> i'm trying to encode some video that is 29.97 fps, and throughout it has four clean frames followed by two blended frames
[07:15:11 CEST] <techtopia> how can i remove those blended frames
[07:15:20 CEST] <techtopia> or fix them so they are not blended
[07:15:32 CEST] <techtopia> im not used to ntsc video
[07:28:45 CEST] <shevy> guys... I have from a university lecture
[07:28:53 CEST] <shevy> several "podcast" downloads; these files are stored in .m4v
[07:29:04 CEST] <shevy> is this the same as .mp4? let me run ffmpeg to analyze...
[07:29:34 CEST] <shevy> ffmpeg says that the video stream is `h264` and the audio stream is `aac`.
[07:29:44 CEST] <shevy> I wonder if I could just rename .m4v to .mp4
[07:31:44 CEST] <techtopia> is it a fragment
[07:31:48 CEST] <techtopia> or a complete video
[07:32:20 CEST] <techtopia> it's basically mp4 but with protection
[07:32:33 CEST] <techtopia> normally they are fragmented and served in fragments
[07:32:41 CEST] <shevy> aha
[07:33:04 CEST] <shevy> they seem to work fine as standalones; each .m4v here was a lecture from another day
[07:33:04 CEST] <techtopia> mp4 is just a container btw
[07:33:08 CEST] <techtopia> the video is h264
[07:33:16 CEST] <shevy> I kind of want to just merge them all together into one huge long 10 hours lecture
[07:33:22 CEST] <shevy> ok techtopia
[07:33:26 CEST] <techtopia> if they play just keep them as m4v no point to rename them
[07:33:37 CEST] <techtopia> unless you want to re-encode them with x264 or something
[07:33:54 CEST] <techtopia> oh
[07:34:00 CEST] <techtopia> then you would concat them
[07:34:09 CEST] <shevy> nah, the video quality does not matter anyway, I am mostly interested in the audio... though the teacher also shows some .pdf files on the projector, so it's nice to retain some quality to decipher those
[07:34:12 CEST] <techtopia> it's kinda awkward with ffmpeg
[07:34:35 CEST] <shevy> yeah, it's the "concat:" thing right? that's ok too, I trust ffmpeg :)
[07:34:35 CEST] <techtopia> what i do is use ffmpeg to encode video to either mp4 or mkv depending on hd or sd
[07:34:48 CEST] <techtopia> then i use mkvmerge gui to merge them into a solid file for hd
[07:34:58 CEST] <techtopia> or mp4box to merge them into a solid file for mp4
[07:35:33 CEST] <techtopia> so use ffmpeg to encode them all out to x264 in an mp4 container
[07:35:59 CEST] <techtopia> at the end, use mp4box to demux all the mp4s, then use mp4box to join all the video files
[07:36:56 CEST] <techtopia> for the audio just do copy /b from a command line like "copy /b audio01.mp2 + audio02.mp2 + audio03.mp2 audio_out.mp2"
[07:37:14 CEST] <techtopia> then use mp4box to mux the merged audio and video files
[07:37:18 CEST] <techtopia> it's the best way
[07:38:22 CEST] <techtopia> might be easier to you know just create a playlist for your media player :p
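The simpler route for shevy's merge is probably ffmpeg's concat demuxer rather than the "concat:" protocol (the protocol only works for formats that can be byte-concatenated, like MPEG-TS, not MP4/M4V). A minimal sketch, assuming the lectures share the same codecs; the list and output file names are placeholders:
    # list.txt, one line per file, in playback order:
    #   file 'lecture01.m4v'
    #   file 'lecture02.m4v'
    ffmpeg -f concat -i list.txt -c copy merged_lectures.mp4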
[12:30:43 CEST] <micechal> Has anyone ever used ffmpeg to create captchas?
[12:31:07 CEST] <micechal> How good of an idea is that?
[15:24:55 CEST] <Prelude2004c> hey guys.. anyone know how to copy a transport stream from one place to another without touching it? every time i do -c copy it shows up on the other side as just 1 program and it says "service01" etc etc.. i just want to take the multicast from one place to another
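For what it's worth, a hedged sketch of what is usually tried for this: map every stream explicitly and override the default service metadata when remuxing to MPEG-TS. The multicast addresses here are placeholders, and whether the original multi-program layout survives the remux depends on the mpegts muxer:
    ffmpeg -i udp://239.1.1.1:5000 -map 0 -c copy \
        -metadata service_provider="SomeProvider" -metadata service_name="SomeChannel" \
        -f mpegts "udp://239.1.1.2:5000?pkt_size=1316"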
[17:06:35 CEST] <vladmariang> any guy here?
[17:06:38 CEST] <vladmariang> hi
[17:07:32 CEST] <vladmariang> I want to put a background behind the video, and I use ffmpeg -i 1.mp4 -i background.jpg -filter_complex "[0:v][1:v] overlay=0:0:enable='between(t,0,20)'" -pix_fmt yuv420p -c:a copy output.mp4
[17:07:50 CEST] <vladmariang> but there is a problem with this command
[17:08:08 CEST] <vladmariang> the video ends up under the image
[17:08:45 CEST] <c_14> swap 0:v with 1:v
[17:12:53 CEST] <vladmariang> [1:v][0:v] ?
[17:13:38 CEST] <c_14> yep
[17:14:45 CEST] <vladmariang> hmm, same output
[17:15:52 CEST] <c_14> [1:v][0:v]overlay should put the picture on the bottom
[17:32:55 CEST] <vladmariang> c_14: Do you know a command to do that?
[17:33:08 CEST] <c_14> hmm?
[17:36:49 CEST] <vladmariang> to put a background behind the video and center it
[17:38:46 CEST] <c_14> ffmpeg -i video -i background -filter_complex '[1:v][0:v]overlay=x=(W-w)/2:y=(H-h)/2[v]' -map '[v]' out.mkv
[17:41:29 CEST] <vladmariang> Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
[17:42:44 CEST] <vladmariang> http://pastebin.com/kbg0JuCE
[17:43:47 CEST] <c_14> The complete console output is usually helpful. Not just the error message.
[17:45:28 CEST] <vladmariang> http://pastebin.com/8zQcE5ND
[17:47:51 CEST] <c_14> ffmpeg -i video -i background -filter_complex '[1:v]scale=trunc(iw/2)*2:-2[tmp];[tmp][0:v]overlay=x=(W-w)/2:y=(H-h)/2[v]' -map '[v]' out.mkv
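The scale=trunc(iw/2)*2:-2 step here just rounds the background down to even width and height, which yuv420p encoders such as libx264 require; roughly the same command with that intent spelled out:
    # [1:v] = background image, [0:v] = video; force even dimensions on the
    # background, then centre the video on top of it
    ffmpeg -i video -i background \
        -filter_complex "[1:v]scale=trunc(iw/2)*2:-2[bg];[bg][0:v]overlay=x=(W-w)/2:y=(H-h)/2[v]" \
        -map '[v]' out.mkv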
[17:51:43 CEST] <trodis> i'm trying to stream opus from my pc over udp to ffserver; the streaming looks ok, but i can't play it
[17:52:04 CEST] <trodis> this is how i stream to the ffserver: ffmpeg -re -f pulse -i default -c:a libvorbis -b:a 64k -f mpegts udp://192.168.1.138:554
[17:52:34 CEST] <trodis> this is my ffserver.conf: http://ix.io/ew8
[17:52:52 CEST] <sfan5> ummm...
[17:52:52 CEST] <trodis> and this is how i try to play it: mplayer udp://192.168.1.138:554
[17:53:17 CEST] <trodis> i mean im trying to stream with the opus codec
[17:53:35 CEST] <sfan5> trodis: ffmpeg will ignore -c:a libvorbis and -b:a 64k
[17:53:53 CEST] <trodis> because it takes the config from the server?
[17:53:56 CEST] <sfan5> yes
[17:54:09 CEST] <trodis> ok, i guessed that because that's the "output"
[17:54:19 CEST] <trodis> so it will encode to whatever the output should be
[17:54:38 CEST] <trodis> but anyway i have no clue how to configure my ffserver so i can stream using udp
[17:54:42 CEST] <sfan5> also I don't think ffserver takes any input on udp
[17:55:16 CEST] <trodis> what i want to achieve is streaming audio with latency as low as possible
[17:55:41 CEST] <sfan5> trodis: try ffmpeg -re -f pulse -i default http://192.168.1.138/feed1.ffm
[17:55:59 CEST] <vladmariang> @c_14 It works with a png background, but the output video is less than 1 second long
[17:56:16 CEST] <sfan5> trodis: http://192.168.1.138:8080/feed1.ffm i mean
[17:56:20 CEST] <vladmariang> with mp4 output format
[17:56:50 CEST] <c_14> vladmariang: add -loop 1 before the -i for the background
[17:57:22 CEST] <c_14> and add eof_action=endall to the overlay
[17:58:12 CEST] <c_14> that or shortest=1
[17:58:18 CEST] <sfan5> trodis: i just googled something and it seems that RTSPPort is for outputting streams but not for taking input
[17:58:24 CEST] <trodis> sfan5: now i get connection refused
[17:59:06 CEST] <trodis> [tcp @ 0x7fe0696fb720] Connection to tcp://192.168.1.138:80 failed: Connection refused
[17:59:25 CEST] <sfan5> :8080 not :80
[17:59:29 CEST] <trodis> [tcp @ 0x7f75e4c89760] Connection to tcp://192.168.1.138:8080 failed: Connection refused
[17:59:55 CEST] <sfan5> which ip does ffserver have?
[18:00:14 CEST] <trodis> 192.168.1.138
[18:00:37 CEST] <sfan5> should've read your config..
[18:00:39 CEST] <sfan5> try :8090
[18:00:53 CEST] <trodis> ah shit sry my bad
[18:01:00 CEST] <sfan5> nah, my fault
[18:01:12 CEST] <trodis> i have so many raspberry pis here i was confused with the ports
[18:01:51 CEST] <trodis> ok, now i'm streaming, but which protocol am i using? i guess it's tcp?
[18:01:54 CEST] <sfan5> yes
[18:02:09 CEST] <sfan5> i don't think you can stream to ffserver using anything other than tcp
[18:02:23 CEST] <trodis> ok that answered my question
[18:02:27 CEST] <c_14> Well, technically it's http over tcp
[18:02:28 CEST] <trodis> so what's the deal with the rtp protocol
[18:02:56 CEST] <sfan5> mplayer http://192.168.1.138:8090/test.ogg should work
[18:03:16 CEST] <sfan5> i don't know how rtsp works but i'd guess it's rtsp://192.168.1.138:554/test.ogg
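For context, a bare-bones ffserver.conf for this kind of audio relay might look roughly like this; the port and stream name match the conversation, while the codec settings are assumptions:
    HTTPPort 8090
    <Feed feed1.ffm>
        File /tmp/feed1.ffm
        FileMaxSize 5M
    </Feed>
    <Stream test.ogg>
        Feed feed1.ffm
        Format ogg
        AudioCodec libvorbis
        AudioBitRate 64
        AudioSampleRate 48000
        NoVideo
    </Stream>
    # fed from the capture box with something like:
    #   ffmpeg -re -f pulse -i default http://192.168.1.138:8090/feed1.ffm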
[18:04:30 CEST] <trodis> then what's the deal with all the udp stuff
[18:04:36 CEST] <trodis> https://trac.ffmpeg.org/wiki/StreamingGuide#Pointtopointstreaming
[18:05:07 CEST] <vladmariang> c_14: It works with -loop 1, but the console keeps running forever and the video has no sound
[18:05:30 CEST] <c_14> like I said, add :shortest=1 to the options for the overlay filter
[18:05:40 CEST] <c_14> And add -map 0:a as an output option
[18:05:40 CEST] <vladmariang> I need to press control+c to stop
[18:05:41 CEST] <sfan5> trodis: rtsp/udp is when you want to stream from one computer to another
[18:05:50 CEST] <trodis> so without server?
[18:05:58 CEST] <c_14> vladmariang: you can also press q to stop, but that should fix itself once you add the :shortest=1
[18:06:10 CEST] <sfan5> trodis: the use case for ffserver is: you have one video and want to deliver it (in possibly multiple sizes & formats) to many computers
[18:06:18 CEST] <sfan5> nope, no ffserver needed for point-to-point streaming
[18:06:46 CEST] <trodis> and my usecase is, i have a microphone and want to broadcast in LAN to 20 raspberry pis
[18:06:59 CEST] <trodis> with latency as low as possible, like Mumble or TS
[18:07:41 CEST] <sfan5> sounds like you need multicast
[18:07:46 CEST] <trodis> so i thought i could use ffmpeg to capture mic input and send it to the ffserver, and the 20 raspberry pis would connect to the ffserver and listen to what im saying
[18:07:55 CEST] <sfan5> that would work too
[18:08:52 CEST] <trodis> yeah, but multicast is hard to maintain; the speaker should not be bothered during the broadcast when something goes wrong, and i guess having a server gives me more control
[18:13:19 CEST] <vladmariang> c_14: '[v]' is an argument?
[18:13:39 CEST] <c_14> It's a filterpad
[18:13:41 CEST] <__jack__> multicast is just a less-stupid broadcast
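A point-to-multipoint sketch without ffserver, along the lines sfan5 suggests: push one MPEG-TS stream to a multicast group and have every Pi listen to it. The group address, port and codec are assumptions, and Opus-in-TS needs a reasonably recent build (mp2 or aac are safer fallbacks):
    # sender: capture the mic and multicast it as MPEG-TS
    ffmpeg -re -f pulse -i default -c:a libopus -b:a 64k \
        -f mpegts "udp://239.255.0.1:1234?ttl=1&pkt_size=1316"
    # each Pi: ffplay -fflags nobuffer "udp://239.255.0.1:1234"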
[18:14:03 CEST] <vladmariang> c_14: -map 0:a '[v]' out.mp4
[18:14:16 CEST] <c_14> -map 0:a -map '[v]'
[18:16:47 CEST] <vladmariang> c_14: For 1080p, do I need to create the image at that resolution?
[18:17:06 CEST] <c_14> that or scale it first
[18:18:34 CEST] <vladmariang> c_14: because I want to make it full hd and remove the black borders from the video http://i.imgur.com/LuwmiuB.png
[18:19:00 CEST] <vladmariang> c_14: to make a video like https://www.youtube.com/watch?v=mPrwsHrsodA
[18:19:24 CEST] <trodis> sfan5: how can i reduce the latency when im using ffmpeg to stream to the ffserver, on the client side im using ffplay -fflags nobuffer or mplayer -nocache
[18:20:23 CEST] <c_14> vladmariang: if the video already has black borders, you'll need to crop. Otherwise just overlaying the video onto a larger background should work.
[18:25:48 CEST] <sfan5> trodis: no idea, maybe a lower FileMaxSize in ffserver.conf
[18:28:33 CEST] <vladmariang> c_14: the video is 720x720
[18:30:08 CEST] <vladmariang> c_14: I think I need to increase height
[18:30:45 CEST] <vladmariang> c_14: https://www.youtube.com/watch?v=AFmRxUO4W5w
[18:32:06 CEST] <c_14> First, you'll want to crop off those black bars. Then you'll want to scale the height of the video up to the height of the overlay, or scale the overlay down to the height of the video.
[18:35:52 CEST] <vladmariang> c_14: the problem is that the black borders are part of the video itself.
[18:38:19 CEST] <c_14> Which is why I suggested cropping it.
[18:39:03 CEST] <vladmariang> c_14: Now the video is ok, but it is very small
[18:39:14 CEST] <c_14> Which is why I suggested scaling.
[18:40:21 CEST] <vladmariang> Haha, how
[18:40:34 CEST] <c_14> What's your current command?
[18:41:04 CEST] <vladmariang> http://pastebin.com/kMrhy2sz
[18:41:38 CEST] <c_14> Did you already crop 1.mp4
[18:42:08 CEST] <vladmariang> Yes, https://www.youtube.com/watch?v=GBjiXFNvjvs
[18:43:17 CEST] <jaggz> how can I scale an audio rate to get its final length to match that of a video?
[18:43:34 CEST] <c_14> ffmpeg -i 1.mp4 -loop 1 -i background.jpg -filter_complex '[1:v]scale=trunc(iw/2)*2:-2[tmp];[0:v]scale=-1:1080[tmp0];[tmp][tmp0]overlay=shortest=1:x=(W-w)/2:y=(H-h)/2[v]' -strict -2 -map 0:a -map '[v]' out.mp4
[18:43:42 CEST] <c_14> ^ vladmariang
[18:44:03 CEST] <jaggz> I used a screen recorder that, for some reason, recorded the audio 4 seconds shorter (faster) over 2 hours
[18:44:23 CEST] <jaggz> it saves the video in a separate file from the audio
[18:44:29 CEST] <vladmariang> c_14: guy, you are awesome
[18:44:40 CEST] <c_14> jaggz: -af atempo=video_length/audio_length (should do it)
[18:45:05 CEST] <jaggz> beautiful, thank you :)
[18:45:25 CEST] <jaggz> put that after the -i audio and before the output filename?
[18:45:44 CEST] <c_14> yeah, you'll have to actually probe the input files to get the durations though
[18:46:31 CEST] <vladmariang> c_14: Are you the creator of ffmpeg?
[18:47:04 CEST] <c_14> vladmariang: Nah
[18:47:19 CEST] <jaggz> ffmpeg shows it nicely I believe? Video 01:54:52.94 audio 01:54:48.86
[18:47:47 CEST] <c_14> yeah. you'll probably want to convert that to seconds though so you can divide easily
[18:49:37 CEST] Action: jaggz changes script from bash to perl to parse the output easier
[18:50:25 CEST] Action: c_14 actually wrote a bash script once to parse HH:MM:SS.MS.
[18:50:29 CEST] <c_14> It's great (not really).
[18:51:17 CEST] <vladmariang> c_14: Is there another method to add text to a video, other than subtitles?
[18:51:27 CEST] <c_14> drawtext ?
[18:51:38 CEST] <c_14> https://ffmpeg.org/ffmpeg-filters.html#drawtext-1
[18:51:41 CEST] <vladmariang> c_14: Something like https://www.youtube.com/watch?v=55uBuHQzSA0
[18:52:03 CEST] <c_14> drawtext should be able to do everything except for that spectrum display
[18:52:28 CEST] <jaggz> c_14, :) bash's expansion capabilities makes things easier nowadays, but still...
[19:02:17 CEST] <jaggz> c_14: in testing, I'm skipping some intro material in the video where I can't tell if it's in sync.. is that too complicated to handle along with the rate issue?
[19:02:19 CEST] <jaggz> ffmpeg.exe -y -ss $start -i "$1" -ss $start -i "$2" -c:v libxvid -b:v 100k -t 00:00:30 "$3"
[19:03:04 CEST] <c_14> Just subtract $start from both durations?
[19:03:04 CEST] <jaggz> $1 is the video, $2 audio.. figured I'd skip 30 minutes in (-ss "00:30:00") and run 30 seconds .. it's a lecture
[19:03:18 CEST] <c_14> though
[19:03:30 CEST] <c_14> hmm
[19:04:20 CEST] <jaggz> it was just to save time while figuring out the other stuff.. I could just process from the beginning.. skipping that amount of time in both files might be an issue with the different rates...
[19:04:36 CEST] <c_14> You'd probably have to reencode the audio with the atempo filter, then test with that file
[19:04:48 CEST] <c_14> Reencoding only the audio shouldn't take long.
[19:05:19 CEST] <c_14> Skipping in the video shouldn't be an issue, but if you skip in the audio and then try applying the rate filter on that, it'll be stretched incorrectly.
[19:05:24 CEST] <jaggz> hey that's an interesting idea
[19:06:31 CEST] <jaggz> and fast
[19:06:46 CEST] <jaggz> oh you said that :)
[19:07:02 CEST] <vladmariang> c_14: Is there a way to draw text like a subtitle, for a specified time?
[19:07:03 CEST] <jaggz> are you carbon_14?
[19:07:11 CEST] <jaggz> ignore me.. off topic..
[19:07:35 CEST] <vladmariang> c_14: And can it use a file like text.txt?
[19:07:49 CEST] <vladmariang> Because I want to generate the text dynamically
[19:08:09 CEST] <vladmariang> and save it to a file and load it
[19:08:24 CEST] <c_14> Use the textfile option of the drawtext filter
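Putting c_14's two pointers together, drawtext can take its text from a file and be limited to a time window with the enable expression; a rough sketch where the font path, text file and timings are placeholders:
    ffmpeg -i in.mp4 -vf "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:textfile=text.txt:x=(w-text_w)/2:y=h-60:fontsize=36:fontcolor=white:enable='between(t,5,15)'" -c:a copy out.mp4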
[19:09:56 CEST] <randomdue> hello
[19:10:52 CEST] <jaggz> Oops.. it's shorter than it used to be. Original audio duration: 01:54:48.86, new: 01:54:44.81
[19:11:04 CEST] <randomdue> an ntsc video at 29.97fps has clean and blended frames in the pattern 3:2:3:2:2
[19:11:27 CEST] <randomdue> which is what happens when they convert 25 fps pal for broadcast in ntsc areas
[19:11:40 CEST] <randomdue> how can i correctly restore it to 25fps
[19:11:41 CEST] <c_14> jaggz: eh, right. invert the video_duration and audio_duration. A positive atempo speeds up the audio
[19:11:55 CEST] <jaggz> not that I like experimenting without knowing.. but I'll make it atempo = alen / vlen instead
[19:13:02 CEST] <jaggz> randomdue: what's 3:2:3:2:2 indicate? (just for my own knowledge -- I'm unfamiliar with this)
[19:15:31 CEST] <jaggz> c_14, perfect. lengths match to 00.01s
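A small shell sketch of this duration-matching step (jaggz used perl; file names here are placeholders): probe both lengths in seconds and apply the corrected audio/video ratio. atempo accepts factors between 0.5 and 2.0, which easily covers a 4-second drift over two hours:
    # durations in seconds
    vlen=$(ffprobe -v error -show_entries format=duration -of default=nw=1:nk=1 video.mkv)
    alen=$(ffprobe -v error -show_entries format=duration -of default=nw=1:nk=1 audio.wav)
    # factor < 1 slows the audio down so it stretches to the video's length
    factor=$(echo "$alen / $vlen" | bc -l)
    ffmpeg -i audio.wav -af "atempo=$factor" audio_fixed.wav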
[19:15:42 CEST] <vladmariang> c_14: Error opening filters! My ffmpeg build seems very bare, without that filter
[19:16:48 CEST] <c_14> randomdue: if the pattern is exact, you can probably use the new detelecine filter. Otherwise you'll need to use one of the other ivtc filters like pullup or fieldmatch/decimate
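A very rough sketch of the decimate route c_14 mentions, assuming the extra (blended) frames repeat once per 6-frame cycle and that decimate actually picks them; pullup/fieldmatch are really aimed at interlaced telecine rather than blends, so results on blended material vary:
    # drop the most duplicate-looking frame out of every six (~29.97 fps -> ~25 fps)
    ffmpeg -i in.mp4 -vf "decimate=cycle=6" -r 25 -c:v libx264 -crf 18 -c:a copy out.mp4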
[19:17:07 CEST] <vladmariang> c_14: http://pastebin.com/kFR4T9iK
[19:17:27 CEST] <jaggz> c_14, thanks for helping; me and others.
[19:18:25 CEST] <c_14> vladmariang: you need an ffmpeg compiled with --enable-libfreetype
[19:24:46 CEST] <vladmariang> c_14: Is there an ffmpeg build already compiled with those things?
[19:24:54 CEST] <c_14> http://johnvansickle.com/ffmpeg/
[19:30:49 CEST] <randomdue> [18:13:02] <jaggz>
[19:31:09 CEST] <randomdue> pal 25fps video, broadcast at 29.97 fps
[19:31:27 CEST] <randomdue> causes every 5th and 6th frame to be blended
[19:31:39 CEST] <randomdue> im trying to remove the blends and restore it to 25fps
[20:16:08 CEST] <vladmariang> c_14: Thanks :X
[20:16:11 CEST] <vladmariang> for all
[20:49:53 CEST] <mariangabriel> c_14: Is there a way to reduce the time it takes when I add a background?
[20:50:05 CEST] <mariangabriel> frame= 447 fps=6.0 q=28.0 size= 8164kB time=00:00:17.83 bitrate=3748.8kbits/
[20:50:18 CEST] <mariangabriel> because I wait for every frame
[20:50:34 CEST] <c_14> hmm?
[20:51:14 CEST] <mariangabriel> http://pastebin.com/e53BNkcP
[20:52:29 CEST] <mariangabriel> how can I reduce the compilation time
[20:53:51 CEST] <mariangabriel> I think it's impossible
[20:56:03 CEST] <__jack__> mariangabriel: compilation ? you mean: encoding ?
[20:56:09 CEST] <mariangabriel> yes
[20:56:18 CEST] <mariangabriel> the process for adding background
[20:57:43 CEST] <vlops> kierank, i can't find in the source code what ffmpeg does when it's called with -f framecrc, i.e. which library functions (API functions?) it calls to compute and write the Adler-32 CRCs
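For what it's worth, the framecrc muxer lives in libavformat/framecrcenc.c and, as far as I can tell, computes the per-packet checksum with av_adler32_update() from libavutil; one way to confirm in a source checkout:
    # run from the top of an ffmpeg source tree
    grep -n adler32 libavformat/framecrcenc.c libavformat/crcenc.c libavutil/adler32.h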
[21:09:20 CEST] <c_14> mariangabriel: you can add -preset fast/veryfast/ultrafast
[21:11:38 CEST] <mariangabriel> does this preset decrease the quality?
[21:12:10 CEST] <c_14> In your case it should mainly just make the output larger.
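Concretely, the preset goes next to the other output options; taking c_14's earlier overlay command as the base, something like:
    ffmpeg -i 1.mp4 -loop 1 -i background.jpg \
        -filter_complex '[1:v]scale=trunc(iw/2)*2:-2[tmp];[0:v]scale=-1:1080[tmp0];[tmp][tmp0]overlay=shortest=1:x=(W-w)/2:y=(H-h)/2[v]' \
        -c:v libx264 -preset veryfast -strict -2 -map 0:a -map '[v]' out.mp4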
[21:18:00 CEST] <Sneakyghost> Gday everyone. I need help converting a raw file containing an image into a Windows-viewable output. Can this be asked here?
[21:20:02 CEST] <c_14> sure
[21:21:31 CEST] <Sneakyghost> the splash screen partition from an HTC One M9 doesn't convert using a command line that used to work on the predecessor phone, the M8, and I cannot figure out the error. The two phones are very similar and the raw image actually shouldn't have changed
[21:21:51 CEST] <Sneakyghost> ffmpeg -f rawvideo -pix_fmt rgb565 -s 1080x1920 -i mmcblk0p15 -f image2 splash1.png is what worked on the M8
[21:22:20 CEST] <Sneakyghost> but on M9, it returns: [image2 @ 02fa82d0] Could not get frame filename from pattern av_interleaved_write_frame(): Input/output error
[21:23:09 CEST] <c_14> try with -frames:v 1
[21:23:33 CEST] <Sneakyghost> before input or output?
[21:24:51 CEST] <c_14> output
[21:26:33 CEST] <Sneakyghost> unrecognized option frames:v
[21:28:52 CEST] <c_14> try -vframes 1
[21:31:07 CEST] <Sneakyghost> man that did it
[21:31:26 CEST] <Sneakyghost> what does the vframes option do?
[21:31:45 CEST] <Sneakyghost> is it in the documentation page?
[21:32:17 CEST] <Sneakyghost> Oh i see it
[21:32:23 CEST] <Sneakyghost> it specifies the number of frames
[21:32:26 CEST] <Sneakyghost> just that?
[21:32:35 CEST] <Sneakyghost> I wonder why it worked on the previous phone's raw image
[21:32:44 CEST] <Sneakyghost> without specifying the number of frames
[21:33:23 CEST] <c_14> Probably because the new one looked like it had more frames or something. It might also just be a different version of ffmpeg.
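Pulling the thread together, the working invocation for the M9 splash partition is the original command plus the single-frame limit; rgb565 and the 1080x1920 geometry come from the conversation, and if the colours look wrong, rgb565le or bgr565le would be the pixel-format variants to try:
    ffmpeg -f rawvideo -pix_fmt rgb565 -s 1080x1920 -i mmcblk0p15 -vframes 1 splash1.png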
[21:34:18 CEST] <Sneakyghost> totally overwhelmed with ffmpeg. Thank you so much for helping me. I shall creep back to my cave now.
[00:00:00 CEST] --- Sun Apr 26 2015