[Ffmpeg-devel-irc] ffmpeg.log.20170410
burek
burek021 at gmail.com
Tue Apr 11 03:05:02 EEST 2017
[00:02:02 CEST] <sonion_> furq are you in belgium?
[00:05:12 CEST] <sonion_> <hint>i have no clue</hint> but wouldn't you need a user channel on youtube to send stuff to? too many urls to paste so here is a google search https://encrypted.google.com/search?hl=en&q=uploading+live+stream+to+youtube
[00:06:50 CEST] <djk> right you use your youtube account that you may have already used for uploading videos to
[00:08:03 CEST] <djk> I have that and have long ago done a live stream using windows and one their recommended encoders. Now I am wanting to do it on the Raspberry Pi and need a linux solution
[00:08:41 CEST] <sonion_> i can't wait - to get a rpi
[00:09:07 CEST] <sonion_> you didn't need to use cli on windows?
[00:11:46 CEST] <djk> no windows was all gui fill in the blank
[00:11:58 CEST] <sonion_> do they have an enigma simulator for the rpi? i have an apollo spacecraft computer simulator for linux (agc) so only a matter of time to go rpi
[00:13:38 CEST] <sonion_> if you want to trap what you send youtube - you can set up a server on the window box and then use the hosts file to trap it all
[00:13:51 CEST] <djk> not sure. There are a lot of packages built for it and if you have the source you could compile on it
[00:14:15 CEST] <djk> it isn't a powerhouse but is a capable small computer
[00:16:49 CEST] <sonion_> i guess them eu-rope-ans are either asleep in the hoosegow or out carousing
[00:17:57 CEST] <sonion_> they all got multiple mistresses and free champagne
[00:18:20 CEST] <djk> I'll keep digging. Though it looks like rtmp://a.rtmp.youtube.com/live2/[KEY] is the right url
[00:19:56 CEST] <sonion_> what does the key look like?
[00:20:36 CEST] <sonion_> is the uploading of a live stream different from uploading static videos?
[00:20:37 CEST] <djk> it is what is unique to your 'channel', not something you share
[00:20:48 CEST] <djk> yes
[00:23:09 CEST] <sonion_> how many times have you run that ffmpeg line and 'failed' uploading?
[00:23:25 CEST] <djk> never worked
[00:24:04 CEST] <djk> I don't think it is a lock issue just not formatted correctly or something
[00:25:00 CEST] <sonion_> you read my mind ;)
[00:25:34 CEST] <sonion_> what is the channel to get the live stream from?
[00:26:12 CEST] <sonion_> have you checked it ? there could be remnants of something on it .. cause when a live stream ends the full video is there
[00:27:06 CEST] <djk> I'm going to try it on windows again with the OBS encoder just to confirm I have the simple stuff right
[00:27:06 CEST] Action: sonion_ couldn't figure out how to watch the trump inauguration without a browser and youtube-dl didn't work until the live stream actually ended
[00:28:05 CEST] <sonion_> okay
[00:28:26 CEST] <sonion_> gute luckski
[00:36:02 CEST] <lalalalala> i'm using ffmpeg to ffconcat a bunch of images and mp3s into a sort of video slide show
[00:36:13 CEST] <lalalalala> i'm running ffmpeg head freshly compiled (and i've also tried stable 3.2.4)
[00:36:24 CEST] <lalalalala> and i'm getting this error message a zillion times: Non-monotonous DTS in output stream 0:1; previous: 24220238, current: 23567718; changing to 24220239. This may result in incorrect timestamps in the output file.
[00:36:33 CEST] <lalalalala> and the audio in the output is broken
[00:36:42 CEST] <lalalalala> i also specified the audio output codec as 128k lame encoded mp3
[00:37:41 CEST] <lalalalala> i've done pretty much everything i can find on the forums,... can i get some help please?
[00:44:14 CEST] <sonion_> i used swftools was pretty easy
[00:47:14 CEST] <sonion_> you might have to start with wav files and not mp3 - not sure
[00:48:58 CEST] <lalalalala> are you talking to me?
[00:49:39 CEST] <lalalalala> i just tried concatenating the mp3 files (audio only output) without images and the result worked, but i did get this error message a bunch of times: [mp3 @ 0x7ff844018000] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 24196608 >= 23569126
[00:51:22 CEST] <sonion_> what movie is that from ... 'are you talking to me'?
[00:52:06 CEST] <lalalalala> some italian mob thing i think
[00:52:25 CEST] <lalalalala> anyway,... i need help! i've been working on this for hours and am not having success :/
[00:52:40 CEST] <dystopia_> so catting the mp3s works?
[00:52:51 CEST] <lalalalala> strangely, yes
[00:53:03 CEST] <dystopia_> then why not cat, and then at the images to the outputted mp3?
[00:53:09 CEST] <dystopia_> add*
[00:54:35 CEST] <lalalalala> i could,...
[00:55:10 CEST] <lalalalala> looks like i'm going to have to,...
[00:57:04 CEST] <sonion_> swftools was the only good thing that came out of flash
[00:57:30 CEST] <lalalalala> i'm trying the mp3 concat separately now will let you know how it goes... this should be a bug though
[00:57:53 CEST] <dystopia_> what is your line lalalalala
[00:59:05 CEST] <sonion_> the movie was taxi driver
[01:00:07 CEST] <sonion_> geez staring at your nick lalalalala brings on flash backs - speaking of flash :)
[01:03:06 CEST] <lalalalala> my line?
[01:03:17 CEST] <lalalalala> command line?
[01:03:43 CEST] <lalalalala> slideshow_create_command = "ffmpeg -f concat -shortest -safe 0 -i " + ffconcat_image_file + " -safe 0 -i " + ffconcat_audio_file + " -vf fps=25 -vf scale=720:540 -c:v libx264 -acodec copy -profile:v baseline -pix_fmt yuv420p -y " + output_video_filename
[01:04:00 CEST] <lalalalala> that's being run from python but i've also run it straight in the console (the problem is not the fact that i'm doing a system call from python)
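As an aside on the command pasted above: passing `-vf` twice for the same output does not chain the filters, only the last `-vf` takes effect, so `fps=25` was being silently dropped. A dry-run sketch with the two filters combined into one chain (the ffconcat filenames are placeholders; by default this only prints the command, set `FFMPEG=ffmpeg` to actually run it):

```shell
#!/bin/sh
# Dry-run sketch: the two -vf options in the pasted command override each
# other (only the last one applies), so fps and scale belong in ONE chain.
FFMPEG="${FFMPEG:-echo ffmpeg}"
VF="fps=25,scale=720:540"
$FFMPEG -f concat -safe 0 -i images.ffconcat \
        -f concat -safe 0 -i audio.ffconcat \
        -vf "$VF" -c:v libx264 -acodec copy \
        -profile:v baseline -pix_fmt yuv420p -shortest -y out.mp4
```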
[01:04:08 CEST] <sonion_> cya - thanks for the help earlier
[01:05:33 CEST] <lalalalala> ok, concatting the audio separately appears to work
[01:06:32 CEST] <lalalalala> consider this a bug, ...
[01:06:43 CEST] <lalalalala> running this works:
[01:06:44 CEST] <lalalalala> tmpmp3_create_command = "ffmpeg -y -safe 0 -i " + ffconcat_audio_file + " -acodec copy " + tmpmp3_filename
[01:06:53 CEST] <lalalalala> followed by this:
[01:06:53 CEST] <lalalalala> slideshow_create_command = "ffmpeg -safe 0 -i " + ffconcat_image_file + " -i " + tmpmp3_filename + " -vf fps=25 -vf scale=720:540 -c:v libx264 -acodec copy -profile:v baseline -pix_fmt yuv420p -y " + output_video_filename
[01:10:21 CEST] <lalalalala> consider this a bug,... and no i'm not filing one
[01:12:27 CEST] <BtbN> concating at the muxer level is a bit wonky if the files don't match up
[01:12:41 CEST] <BtbN> just remuxing them first is often enough to silence some warnings
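BtbN's "remux first" workaround can be sketched as a two-pass script: stream-copy the concatenated audio into one file (which rewrites the container timestamps and gets rid of the non-monotonic DTS), then mux that with the images. Track and list names are placeholders; the ffmpeg invocations default to a dry run (set `FFMPEG=ffmpeg` to execute):

```shell
#!/bin/sh
# Sketch of the remux-first workaround for non-monotonous DTS on concat.
set -eu
FFMPEG="${FFMPEG:-echo ffmpeg}"

# 1. Build an ffconcat list for the audio tracks (placeholder names).
printf 'ffconcat version 1.0\n' > audio.ffconcat
for f in track1.mp3 track2.mp3 track3.mp3; do
    printf "file '%s'\n" "$f" >> audio.ffconcat
done

# 2. Remux the audio on its own (stream copy, no re-encode). This rewrites
#    the timestamps so the final mux no longer sees non-monotonic DTS.
$FFMPEG -f concat -safe 0 -i audio.ffconcat -acodec copy tmp.mp3

# 3. Mux the remuxed audio with the image list in a second pass.
$FFMPEG -f concat -safe 0 -i images.ffconcat -i tmp.mp3 \
    -vf "fps=25,scale=720:540" -c:v libx264 -pix_fmt yuv420p \
    -acodec copy -shortest out.mp4
```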
[01:24:20 CEST] <Journey_> Hi all, new person here having trouble with a Samsung .sec file from a dvr system (and with getting ffmpeg to pull out the avc stream and to pack it into something else) Would anyone be willing to take a crack at it?
[01:56:10 CEST] <lalalalala> ok so i'm getting the length of the mp3 files,
[01:56:15 CEST] <lalalalala> adding together those lengths
[01:56:24 CEST] <lalalalala> and my final file ends up quite a bit longer than that
[05:41:37 CEST] <khris> anyone having problems rendering on mac os?
[10:46:01 CEST] <hoarse> FFmpeg -f lavfi -i "sine=frequency=55:sample_rate=88:duration=30" -acodec speex -ac 1 -strict -2 -f nut - | ffplay -acodec sipr -ar 4000 -i -
[10:50:56 CEST] <thebombzen> um
[10:51:00 CEST] <thebombzen> okay
[11:14:43 CEST] <hoarse> FFmpeg -i https://a.yiff.moe/xnqogf.mp4 -vcodec avrp -acodec pcm_u8 -vf format=bgra -ar 6000 -s 1280x720 -strict -2 -f avi - | ffplay -vcodec r210 -acodec real_144 -vf format=nv12 -ar 4000 -i -
[11:14:55 CEST] <hoarse> :^)
[12:44:36 CEST] <Azarus> Hi
[12:46:00 CEST] <Azarus> Hey i am really desperate, trying to figure this out for a week now
[12:46:11 CEST] <Azarus> Is there a way to concat video files
[12:46:19 CEST] <Azarus> Bad.
[12:46:44 CEST] <furq> there are lots of bad ways to concat video files
[12:47:05 CEST] <Azarus> Sorry, i'm on mobile, pressed enter accidentally, 1 sec
[12:47:53 CEST] <Azarus> Is there a way to concat audio files from a folder with fade transition between each track? Also apply an image overlay and fade each image after 5 mins?
[12:48:05 CEST] <Azarus> And stream it to an rtmp server?
[12:48:21 CEST] <Azarus> And once it finished just restart and loop it endless?
[12:48:36 CEST] <Azarus> Using the cli?
[12:49:17 CEST] <Azarus> And on top of that i would like to apply a transparent image overlay
[12:49:35 CEST] <Azarus> I know its a lot to ask but couldnt get done any parts of it
[12:50:37 CEST] <Azarus> So i am desperate and accept any help
[12:53:38 CEST] <Azarus_> So what i tried first is that i have a list of images in a folder images/ and a test song test.mp3
[12:53:39 CEST] <Azarus_> ffmpeg -i test.mp3 -framerate 60 -pattern_type glob -i images/*.jpg \ -filter_complex "[0:v]scale=1280x720,setdar=16:9[base];[1:v]scale=1280x720,setdar=16:9[ovr];\ [ovr][base]blend=all_expr='A*(if(gte(T,3),1,T/3))+B*(1-(if(gte(T,3),1,T/3)))'" \ -map [v] result.mp4 \
[12:54:22 CEST] <Azarus_> Tried this to cross fade the images, but can't get anything working and i am not even sure ffmpeg is suitable for such procedural video generation
[12:56:04 CEST] <st-gourichon-fid> Azarus_, "can't get anything working" -> can you elaborate? Not that I can help you directly, but the more details you provide, the better the outcome.
[12:56:50 CEST] <Azarus_> Well thats the elaboration. Nothing is working from the things mentioned above
[12:57:32 CEST] <Azarus_> I would like to procedurally generate video files given a set of images and audio tracks, and mix them together with a nice fade effect.
[12:57:57 CEST] <Azarus_> i have like 60 images and 10 audio tracks
[13:00:42 CEST] <Azarus_> Not sure what other information do you need?
[13:16:00 CEST] <st-gourichon-fid> What usually works is progress little by little. First a trivial command line that gets no error, then another that produces a trivial result, then add necessary complexity little by little.
[13:16:21 CEST] <st-gourichon-fid> Then you'll have a hint about what's wrong.
[13:17:14 CEST] <st-gourichon-fid> You'll hit the problems one by one and fix them, rather than facing them all at once and never getting a clue.
[13:23:09 CEST] <st-gourichon-fid> Azarus_, "Not sure what other information do you need?" => Consider section #2 of https://workaround.org/getting-help-on-irc/ (all of it is interesting, e.g. section #14 too).
[13:24:00 CEST] <Azarus_> is st-gourichon-fid a bod?
[13:24:01 CEST] <Azarus_> bot*
[13:25:50 CEST] <st-gourichon-fid> Me, a bot? :-) Read http://fidergo.fr/ and judge by yourself.
[13:26:24 CEST] <Azarus_> @st-gourichon-fid i wrote a lot about what i have now and what i would like to get done, and by simply saying that nothing is working means nothing is working. If thats not clear for you what it means i can't help it.
[13:28:41 CEST] <Azarus_> Sorry i dont speak french. And i only came here for help. I guess i know how to ask for help. If something is not clear about my question please ask.
[13:29:12 CEST] <Azarus_> Actually just a Yes or No would be a great answer. Is this even possible?
[13:30:36 CEST] <durandal_1707> Azarus_: do you have 2 video streams at all?
[13:30:52 CEST] <Azarus_> Because im not sure if ffmpeg is suitable for doing anything like this.
[13:31:02 CEST] <Azarus_> 2 streams? what do you mean?
[13:31:58 CEST] <durandal_1707> Azarus_: exactly what i said: blend filter needs 2 video input streams
[13:35:40 CEST] <Azarus_> i have 60 images that i would like to use as input
[13:36:11 CEST] <durandal_1707> thats just one stream
[13:36:59 CEST] <Azarus_> So how would i go about making 5minute long videos from each image and fade them one by one?
[13:38:52 CEST] <durandal_1707> fade to black?
[13:48:03 CEST] <Threads> durandal_1707 i think he means he wants them to fade into each other
[13:48:22 CEST] <Threads> or better fade in and out of each other like 1>2>1 in a way
[13:48:43 CEST] <Threads> orr 1>2>3>4>5>6>7>8>9
[13:48:47 CEST] <Threads> what ever way
[13:50:37 CEST] <Azarus_> fade from the current picture to the next picture
[13:50:47 CEST] <durandal_1707> it should be possible with some tricks
[15:45:54 CEST] <Azarus_> ok, i finally got something working
[15:46:06 CEST] <Azarus_> is there a way to extract meta data from a currently running ffmpeg stream?
[15:46:31 CEST] <Azarus_> so lets say i ran ffmpeg -i myvideo.mp4 and stream it to a rtmp://myrtmpserver/
[15:46:48 CEST] <Azarus_> Can i extract the meta data from the current file being played?
[16:56:28 CEST] <Azarus_> ffmpeg -re -y -f concat -safe 0 -i list.txt -framerate 1 -loop 1 -i overlay.png -bufsize 512k -vcodec libx264 -pix_fmt yuv420p -preset ultrafast -g 30 -b:v 4500k -r 15 -s 1280x720 -filter:v yadif -ac 1 -ar 44100 -f flv "rtmp://"
[16:56:33 CEST] <Azarus_> im not sure if this is the best way to do it
[16:56:43 CEST] <Azarus_> list.txt contains a list of audio tracks
[16:56:56 CEST] <Azarus_> but it works
[16:57:27 CEST] <Azarus_> How do i make it so that ffmpeg automatically restarts the stream?
[16:57:58 CEST] <Azarus_> I mean i can write a bash script to restart ffmpeg but the stream stops
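The usual shell answer to "restart ffmpeg when it exits" is to wrap the encode in a loop, though the RTMP feed will still drop for a moment between runs (whether viewers survive that depends on the server). A dry-run sketch built from the command above; the URL is a placeholder and the loop is bounded here only so the demo terminates (use `while :` in production, and set `FFMPEG=ffmpeg` to actually stream):

```shell
#!/bin/sh
# Sketch: restart the encoder whenever it exits so the RTMP feed resumes.
FFMPEG="${FFMPEG:-echo ffmpeg}"
RTMP_URL="${RTMP_URL:-rtmp://example.invalid/live/streamkey}"
max_restarts="${MAX_RESTARTS:-3}"   # bounded for this demo; 'while :' in production
restarts=0
while [ "$restarts" -lt "$max_restarts" ]; do
    # When ffmpeg exits (end of list.txt, network drop, ...), loop around
    # and start a fresh encode.
    $FFMPEG -re -f concat -safe 0 -i list.txt \
        -framerate 1 -loop 1 -i overlay.png \
        -c:v libx264 -preset ultrafast -pix_fmt yuv420p \
        -g 30 -b:v 4500k -s 1280x720 -f flv "$RTMP_URL"
    restarts=$((restarts + 1))
    # sleep 2   # small back-off so a hard failure doesn't busy-spin
done
echo "encoder exited $restarts times"
```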
[16:57:58 CEST] <DHE> as an aside, you don't need yadif for a png
[16:58:30 CEST] <Azarus_> i do update the png every 30 seconds or 60 seconds
[16:58:41 CEST] <Azarus_> that shouldn't cause any issues right?
[16:59:16 CEST] <Azarus_> [png @ 0x4422200] Invalid PNG signature 0xA0635AF29C165581. bitrate= 183.6kbits/s speed= 1x
[16:59:21 CEST] <Azarus_> i got something like this
[17:02:43 CEST] <kepstin> Azarus_: if you're writing a png while ffmpeg is reading it, that could cause issues, yes. Have your stuff write to a new tmp file, then rename it over the old file instead.
[17:04:27 CEST] <Azarus_> okay thanks. In nodejs, is fs.renameSync the proper way?
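kepstin's write-then-rename suggestion in plain shell (the file contents here are a stand-in for real PNG encoder output): rename(2) within one filesystem is atomic, so a reader like ffmpeg sees either the complete old file or the complete new file, never a half-written one.

```shell
#!/bin/sh
# Sketch of the atomic-replace pattern for updating overlay.png.
set -eu
printf 'fake-png-bytes-v2' > overlay.png.tmp   # stand-in for the real PNG write
mv overlay.png.tmp overlay.png                 # atomic replace (same filesystem)
```

And yes, Node's `fs.renameSync(tmpPath, 'overlay.png')` is the same rename(2) call underneath, so it gives the same atomicity as long as both paths are on the same filesystem.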
[17:07:54 CEST] <Azarus_> Also now i have this overlay.png, is there a way to make it fade when it's updated?
[17:08:21 CEST] <Azarus_> slowly transition from the old one to the new one
[17:11:50 CEST] <kepstin> Azarus_: easiest way I can think of is to read the image with a fairly low input -framerate option (like 1 or 2), then use the "framerate" filter to bring it up to the full video framerate - that filter will do a blur/fade between frames.
[17:12:42 CEST] <Azarus_> can u please show me actually how to use that filter for what i need? :3 :)
[17:12:43 CEST] <kepstin> (might have to play with some of the interpolation or scene detection options to get it to look good)
[17:14:11 CEST] <kepstin> ... why do you have the yadif filter in there? is your overlay image interlaced?
[17:15:25 CEST] <kepstin> but anyways, a good start would be simply using -vf framerate=fps=15:scene=0
[17:15:38 CEST] <kepstin> (where -vf is the short name of the option -filter:v)
[17:17:11 CEST] <Azarus_> I have that filter there because i grabbed it from google
[17:17:23 CEST] <Azarus_> and i didn't see it and had no idea what it actually does
[17:32:37 CEST] <Mista_D> How would one setup a libx264 encoding with IDR frames at start of GOP, and have scene_change insert non-IDR frames as needed please?
[17:58:13 CEST] <kepstin> Mista_D: I think you'd just want to disable scene cut detection completely for that case; it'll still automatically use I macroblocks where appropriate
[18:00:59 CEST] <kepstin> Mista_D: but this is really a question more appropriate to asking the x264 folks directly, my brief look around indicates there's no current way to do exactly what you say you want to do
[18:03:01 CEST] <kepstin> (are you sure it's even automatically adding IDR frames at all? it's not clear, and the docs for scenecut detection just say I)
[18:03:15 CEST] <Azarus_> kepstin is there a way to get the current meta information from ffmpeg? Like progress?
[18:03:35 CEST] <Azarus_> currently streaming filename? or track information?
[18:03:48 CEST] <Azarus_> or anything useful? :x
[18:04:04 CEST] <kepstin> Mista_D: might also be worth looking into what the min_keyint option does in combination with scenecut, too.
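kepstin's suggestion of a fixed GOP with scene-cut detection disabled can be sketched like this; the GOP length and filenames are illustrative, not from the discussion, and by default the command is only printed (set `FFMPEG=ffmpeg` to run it):

```shell
#!/bin/sh
# Dry-run sketch: pin the GOP and disable x264 scene-cut detection, so
# keyframes land only at the -g interval. Values are illustrative.
FFMPEG="${FFMPEG:-echo ffmpeg}"
GOP_OPTS="-g 48 -keyint_min 48 -sc_threshold 0"
$FFMPEG -i in.mp4 -c:v libx264 $GOP_OPTS -c:a copy out.mp4
```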
[18:04:15 CEST] <Mista_D> kepstin: scenechange inserts IDR frames
[18:06:09 CEST] <kepstin> Azarus_: the ffmpeg cli isn't really designed to be used programmatically for that sort of thing. You'd pretty much just have to parse the UI output, and that's not guaranteed to be stable across versions.
[18:07:34 CEST] <Azarus_> that actually would be fine too
[18:07:54 CEST] <Azarus_> how can i get the ui to display any useful information?
[18:20:48 CEST] <Azarus_> kepstin the -vf framerate filter doesnt seem to do anything
[18:21:01 CEST] <Azarus_> i got a still image thats not changing anymore
[19:08:36 CEST] <zypho> Anyone able to help me out debug a HLS encode w/ subtitles issue?
[19:08:41 CEST] <zypho> Here is my cmd: https://pastebin.com/c3Vm0XGD
[19:10:17 CEST] <zypho> From everything I have been attempting to debug, I cannot seem to map 2 subtitle streams on my HLS output.
[19:31:44 CEST] <zypho> so far this is the only way I have to get output for both subtitle streams in HLS: https://pastebin.com/LRQZUB1k
[19:32:18 CEST] <c_14> afair the hls muxer only supports a single subtitle stream
[19:32:21 CEST] <zypho> the problem with it is I am actually outputting my video stream three times. Because the webvtt output needs a video stream to mux with
[19:34:49 CEST] <zypho> c_14: it would appear that way, there must be a faster alternative to reusing the video stream 3 times for each rendition..
[19:47:24 CEST] <furq> zypho: if you just want to avoid encoding three times then you can pipe two ffmpeg commands together
[19:48:34 CEST] <furq> ffmpeg -i in.mp4 -c:v libx264 -c:a aac -map 0 -f mpegts - | ffmpeg -f mpegts -i - [...]
[19:49:19 CEST] <furq> maybe the tee muxer can do it as well
[19:50:21 CEST] <Azarus_> -vf framerate=fps=60:scene=100
[19:50:24 CEST] <Azarus_> this doesnt seem to do anything :x
[19:56:26 CEST] <durandal_1707> Azarus_: what you need ?
[19:56:43 CEST] <llogan> zypho: ffmpeg -i input -map 0 -f tee "[select=\'v:0,a:0,s:0\':f=hls]output0.hls|[select=\'v:1,a:1,s:1\':f=hls]output1.hls|[select=\'v:0,a:2,s:2\':f=hls]output2.hls"
[19:57:29 CEST] <Azarus_> I am stuck, i have an image input
[19:57:29 CEST] <Azarus_> -framerate 1 -loop 1 -i overlay.png
[19:57:34 CEST] <Azarus_> whenever its changed i would like to fade it
[19:57:36 CEST] <llogan> zypho: you'll need to choose the proper select and your desired encoders.
[19:58:32 CEST] <Azarus_> Durandal_1707: Actually i have a list of images that i would like to fade overtime, they are procedurally drawn using nodejs.
[19:59:01 CEST] <durandal_1707> Azarus_: there is fade filter
[19:59:18 CEST] <durandal_1707> have you tried it?
[20:07:56 CEST] <Azarus_> yea
[20:08:42 CEST] <Azarus_> but i dont really understand the parameters and how to use it to my case
[20:08:46 CEST] <Azarus_> i only have one image
[20:09:10 CEST] <Azarus_> durandal_1707: ffmpeg -re -y -f concat -safe 0 -i list.txt -framerate 1 -loop 1 -i overlay.png -bufsize 512k -vcodec libx264 -pix_fmt yuv420p -preset ultrafast -g 24 -b:v 4500k -r 12 -s 1280x720 -ac 1 -ar 44100 -f flv "rtmp://"
[20:09:13 CEST] <Azarus_> this is my command
[20:14:13 CEST] <zypho> @llogan I still run into the good ol' "Exactly one WebVTT stream is needed." error trying to write output file #0
[20:17:02 CEST] <zypho> @llogan if I update the first map to be specific streams and only contain a single subtitle, then it works
[20:19:43 CEST] <Azarus_> The tblend (time blend) filter takes two consecutive frames from one single stream, and outputs the result obtained by blending the new frame on top of the old frame.
[20:19:48 CEST] <Azarus_> possible this is the filter i am looking for?
[20:20:15 CEST] <Azarus_> but how do i go about setting the time it takes to fade?
[20:20:16 CEST] <Azarus_> Time of the current frame, expressed in seconds.
[20:20:22 CEST] <Azarus_> i dont understand the documentation
[20:35:48 CEST] <Mandevil> Unrecognized option 'framerate=25' ... uh what?
[20:36:08 CEST] <Mandevil> -framerate is not a thing?
[20:36:26 CEST] <Azarus_> where did you put it?
[20:36:40 CEST] <Mandevil> As the very first argument.
[20:36:46 CEST] <Azarus_> -framerate 25
[20:36:47 CEST] <Azarus_> maybe?
[20:37:24 CEST] <Mandevil> Uh... I'm stupid.
[20:37:29 CEST] <zypho> @llogan https://pastebin.com/LJQ9rrYE
[20:37:32 CEST] <zypho> sorry about that
[20:37:39 CEST] <Mandevil> A dozen args and I didn't notice it.
[20:38:01 CEST] <zypho> forgot the output... 1 sec
[20:39:04 CEST] <zypho> @llogan https://pastebin.com/LcrATdW5
[20:40:20 CEST] <dystopia_> Mandevil use -r 25
[20:42:11 CEST] <Mandevil> dystopia_: -r is shortcut for -framerate?
[20:42:26 CEST] <dystopia_> basically
[20:43:00 CEST] <Mandevil> Heh, so there's more to it...
[20:43:09 CEST] <lindylex> How do I modify this so the images in my slide show remain on the screen for 15 seconds? ffmpeg -framerate 0.25 -i r%d.png -vf "framerate=fps=30:interp_start=64:interp_end=192:scene=100" concat.mp4
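One answer to lindylex's question (untested against his files): the input `-framerate` is frames per second, so `1/15` fps means each source image lasts 15 seconds, and the `framerate` filter then interpolates that up to the 30 fps output. Dry-run sketch, set `FFMPEG=ffmpeg` to execute:

```shell
#!/bin/sh
# Dry-run sketch: hold each image for 15 s by reading at 1/15 fps.
FFMPEG="${FFMPEG:-echo ffmpeg}"
frames_per_image=$((30 * 15))   # each image spans 450 frames at 30 fps output
$FFMPEG -framerate 1/15 -i r%d.png \
    -vf "framerate=fps=30:interp_start=64:interp_end=192:scene=100" \
    -pix_fmt yuv420p concat.mp4
```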
[20:43:21 CEST] <dystopia_> i use -vf yadif=0:0 and -r 25, for interlaced pal stuff, and if it can be bobbed i use -vf yadif=1:0 -r 50
[20:47:45 CEST] <Mandevil> BTW, does ffmpeg support DNxHR?
[20:47:48 CEST] <Mandevil> Or is it planned?
[20:50:11 CEST] <durandal_1707> Mandevil: yes it supports it both for encoding and decoding
[20:50:54 CEST] <Mandevil> Hm, but my binary doesn't seem to list that.
[20:51:21 CEST] <Azarus_> -vf "framerate=scene=100,tblend=average,framestep=2" \
[20:51:22 CEST] <Azarus_> finally
[20:51:25 CEST] <Azarus_> got something working damnit!
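Assembled into one command, Azarus_'s working filter chain might look like the sketch below. The `framerate=scene=100,tblend=average,framestep=2` chain is exactly what he reported; the glob input pattern and the 1/5 fps input rate are guesses, and by default the command is only printed (set `FFMPEG=ffmpeg` to run it):

```shell
#!/bin/sh
# Dry-run sketch of the crossfading slideshow; input pattern and rate are
# assumptions, the -vf chain is the one reported working in the channel.
FFMPEG="${FFMPEG:-echo ffmpeg}"
VF="framerate=scene=100,tblend=average,framestep=2"
$FFMPEG -framerate 1/5 -pattern_type glob -i 'images/*.jpg' \
    -vf "$VF" -c:v libx264 -pix_fmt yuv420p slideshow.mp4
```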
[20:51:40 CEST] <durandal_1707> Mandevil: what version?
[20:52:00 CEST] <dystopia_> are the " required Azarus_?
[20:52:07 CEST] <Mandevil> ffmpeg version N-81338-g6612d04
[20:52:43 CEST] <durandal_1707> Mandevil: too old
[20:52:48 CEST] <Mandevil> Oh.
[20:52:55 CEST] <Mandevil> durandal_1707: Thanks!
[20:53:02 CEST] <Mandevil> DNxHR support is very handy.
[20:53:33 CEST] <durandal_1707> Mandevil: how you tested it?
[20:53:38 CEST] <Mandevil> No.
[20:53:49 CEST] <Mandevil> I mean it would be handy if it works :)
[20:54:09 CEST] <durandal_1707> i mean how you found it doesnt work?
[20:54:19 CEST] <Mandevil> durandal_1707: I listed the supported codecs.
[20:54:38 CEST] <Mandevil> durandal_1707: It lists dnxhd, but not dnxhr.
[20:54:39 CEST] <durandal_1707> Mandevil: dnxhr is part of dnxhd
[20:54:50 CEST] <Mandevil> durandal_1707: ?
[20:54:55 CEST] <durandal_1707> its just another profile
[20:55:16 CEST] <Mandevil> durandal_1707: I understood dnxhr as newer, more flexible dnxhd.
[20:55:21 CEST] <durandal_1707> dnxhd codec contains dnxhr one
[20:55:50 CEST] <durandal_1707> Mandevil: yes, but its very similar to dnxhd
[20:56:17 CEST] <durandal_1707> that's why it's part of dnxhd and not separate
[20:56:24 CEST] <Mandevil> Hm.
[20:56:31 CEST] <Mandevil> It's a bit confusing.
[20:56:40 CEST] <Mandevil> Does DNxHD support UltraHD?
[20:56:47 CEST] <Mandevil> Because in Resolve it doesn't.
[20:57:21 CEST] <zypho> at this point
[20:57:28 CEST] <zypho> I am going to write a webvtt segmenter in go
[20:57:44 CEST] <durandal_1707> Dnxhd doesnt support it but ffmpeg dnxhr does
[20:58:20 CEST] <Mandevil> That's why I'm even asking.
[20:58:39 CEST] <Mandevil> Exporting videos as TIFF frames is not that bad, but maybe dnxhr is better.
[20:59:01 CEST] <durandal_1707> dnxhr is lossy
[20:59:45 CEST] <Mandevil> Well, it's an intermediate codec.
[20:59:50 CEST] <Mandevil> So the loss should be negligible.
[20:59:50 CEST] <durandal_1707> Mandevil: what you actually need? hqx, 444 or something else?
[21:00:09 CEST] <Azarus_> Can somebody recommend a service or software that i can use to stream audio input into ffmpeg?
[21:00:14 CEST] <durandal_1707> do you need alpha?
[21:00:28 CEST] <Mandevil> durandal_1707: I need to get UltraHD videos from Resolve into H.264.
[21:00:30 CEST] <Azarus_> Since i can't seem to be able to get the metainformation of each song
[21:00:33 CEST] <Mandevil> No, I don't.
[21:00:43 CEST] <Mandevil> I just want to encode with x264.
[21:00:55 CEST] <Mandevil> Since built-in encoder in Resolve sucks.
[21:01:13 CEST] <Mandevil> (Resolve is a NLE)
[21:01:24 CEST] <durandal_1707> what bitdepth?
[21:01:42 CEST] <Mandevil> My source is 4:2:0, so that.
[21:02:05 CEST] <durandal_1707> Mandevil: 420 dnxhr?
[21:02:25 CEST] <Mandevil> It's a vehicle to get video from Resolve to x264.
[21:02:29 CEST] <Mandevil> Nothing else.
[21:02:41 CEST] <Mandevil> At UltraHD resolution, that is.
[21:02:56 CEST] <Mandevil> For 1080p I was using DNxHD, but that does not support UltraHD.
[21:03:18 CEST] <durandal_1707> Mandevil: can you upload short sample of it ?
[21:03:27 CEST] <Mandevil> Of what?
[21:03:34 CEST] <durandal_1707> i dont have 420 dnxhr
[21:03:49 CEST] <Mandevil> durandal_1707: I haven't tried so far!
[21:04:07 CEST] <Mandevil> durandal_1707: The only two times I needed to do that I had to render as TIFFs.
[21:04:17 CEST] <Mandevil> durandal_1707: And I am exploring if there's a better way.
[21:05:00 CEST] <durandal_1707> well ffmpeg supports 4:2:2 and 4:4:4 dnxhr but not 4:2:0
[21:05:14 CEST] <durandal_1707> so i need 4:2:0 sample
[21:05:29 CEST] <Mandevil> Well, I guess there's no harm to render as 4:2:2 and then let ffmpeg downconvert?
[21:05:47 CEST] <Mandevil> But as I said, for the use case I have TIFFs are not that atrocious anyway.
[21:05:51 CEST] <Azarus_> How can i define multiple output sources in ffmpeg?
[21:05:52 CEST] <durandal_1707> yes, but i would like to add support for it
[21:06:07 CEST] <Mandevil> I see.
[21:07:46 CEST] <Mandevil> durandal_1707: Resolve lists its resolve options as DNxHR 444, HQ, HQX, SQ and LB.
[21:07:53 CEST] <Mandevil> durandal_1707: No idea what the bit depth is.
[21:08:08 CEST] <Mandevil> its render options
[21:10:43 CEST] <Mandevil> Oh, crap the video I just encoded with ffmpeg... there's something wrong with it.
[21:10:55 CEST] <Mandevil> As it progresses, it's getting redder.
[21:11:01 CEST] <Mandevil> Then I-frame clears it to normal.
[21:11:05 CEST] <Mandevil> Then it gets redder again.
[21:11:45 CEST] <Mandevil> It's awful.
[21:12:03 CEST] <durandal_1707> Mandevil: what encoder?
[21:12:10 CEST] <Mandevil> durandal_1707: libx264
[21:12:24 CEST] <Mandevil> durandal_1707: Encode directly through x264 is perfectly fine.
[21:13:05 CEST] <durandal_1707> how you get your build of ffmpeg?
[21:13:17 CEST] <Mandevil> This is the batch I use: https://pastebin.com/3Un6QA9z
[21:13:37 CEST] <Mandevil> durandal_1707: I download it from ffmpeg.org (or site linked therein).
[21:14:32 CEST] <lindylex> I am trying to create a crossfade affect and am getting following error. https://pastebin.com/BWV7yJze
[21:16:44 CEST] <Mandevil> There's something fishy with the file.
[21:17:06 CEST] <Mandevil> Because only VLC plays it, QuickTime and Windows Media Player don't play it at all (black frame).
[21:17:31 CEST] <durandal_1707> Mandevil: ffmpeg.org does not provide binaries
[21:17:59 CEST] <Mandevil> durandal_1707: Yeah, but it links to a site with windows binaries.
[21:18:11 CEST] <Mandevil> Also, Media Player Classic plays the video fine.
[21:18:18 CEST] <Mandevil> I am utterly confused.
[21:18:42 CEST] <Mandevil> Ugh!
[21:18:48 CEST] <durandal_1707> Mandevil: use different preset
[21:18:52 CEST] <Mandevil> VLC says it's YUV 4:4:4 format.
[21:19:07 CEST] <Mandevil> Why should I be using different preset?
[21:19:10 CEST] <Mandevil> What will that change?
[21:19:29 CEST] <Mandevil> I used veryslow for the 4k version without any trouble.
[21:20:05 CEST] <Mandevil> The 444 format is the problem, it should be 420p, I'm explicitly requesting that.
[21:22:49 CEST] <durandal_1707> Mandevil: move format request at end
[21:23:13 CEST] <Mandevil> durandal_1707: I did that and I am now encoding again.
[21:23:23 CEST] <llogan> lindylex: might be easier to use fade with alpha=1 with something like setpts=PTS-STARTPTS+5/TB and overlay
[21:23:35 CEST] <Mandevil> durandal_1707: But still, the 444 format should work... why does it only work properly in MPC?
[21:23:51 CEST] <durandal_1707> crappy players
[21:23:57 CEST] <lindylex> llogan: I have no idea to construct what you are saying.
[21:23:59 CEST] <Mandevil> Does VLC count as crappy player?
[21:24:11 CEST] <llogan> lindylex: something like https://superuser.com/a/778967/110524
[21:24:41 CEST] <durandal_1707> Mandevil: it should work, dunno why not
[21:24:49 CEST] <llogan> lindylex: or http://stackoverflow.com/a/37812010/1109017
[21:24:55 CEST] <Mandevil> I tend to have odd problems with VLC.
[21:25:16 CEST] <Mandevil> I kinda don't trust it as a reference player anymore.
[21:25:20 CEST] <durandal_1707> Mandevil: i use mpv
[21:25:25 CEST] <Mandevil> But I don't have a true reference player.
[21:25:27 CEST] <Mandevil> mpv?
[21:25:33 CEST] <durandal_1707> mpv.io
[21:26:03 CEST] <Mandevil> durandal_1707: Thanks for the tip.
[21:26:38 CEST] <durandal_1707> do you still get red artifacts?
[21:26:50 CEST] <Mandevil> durandal_1707: Encoding in progress :)
[21:28:54 CEST] <Mandevil> durandal_1707: mpv plays the file correctly.
[21:29:10 CEST] <Mandevil> durandal_1707: Sad thing, VLC is buggy.
[21:29:42 CEST] <llogan> update your VLC
[21:30:44 CEST] <The8flux> Question: i need to play udp mpeg2 and h.264 rtp streams. Can ffmpeg facilitate this via c# or perhaps python?
[21:31:37 CEST] <The8flux> Examples i am seeing is from 2008 even with vp
[21:31:47 CEST] <The8flux> Vlc lib wrappers
[21:31:53 CEST] <Mandevil> llogan: Latest VLC, the same problem.
[21:33:34 CEST] <durandal_1707> Mandevil: it probably uses hardware decoder of some sort
[21:33:37 CEST] <Mandevil> durandal_1707: It would be nice if ffmpeg actually showed the x264 output so that you knew what colorspace is in use.
[21:34:51 CEST] <durandal_1707> hmm, yes
[21:36:26 CEST] <Mandevil> durandal_1707: .. hm, tried to disable decoding acceleration and colorspace conversion... still bad.
[21:39:14 CEST] <Mandevil> durandal_1707: Yes, moving the -pix_fmt to the rear helped produce 420 video.
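The fix Mandevil describes comes down to option placement: ffmpeg options apply to the file that follows them on the command line, so `-pix_fmt yuv420p` must sit after `-i` and before the output name to affect the encode. A dry-run sketch with placeholder filenames and an illustrative CRF (set `FFMPEG=ffmpeg` to run):

```shell
#!/bin/sh
# Dry-run sketch: -pix_fmt placed among the OUTPUT options (after -i,
# before the output name) so the x264 encode is forced to 4:2:0.
FFMPEG="${FFMPEG:-echo ffmpeg}"
OUT_OPTS="-c:v libx264 -preset veryslow -crf 18 -pix_fmt yuv420p"
$FFMPEG -i master_dnxhr.mov $OUT_OPTS out_2160p.mp4
```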
[21:44:34 CEST] <Azarus_> I got everything working except 2 things, i dont know how to get the currently streaming audio's meta information
[21:44:58 CEST] <Azarus_> tried to read the -f ffmetadata meta.txt but it outputs an empty file
[21:45:26 CEST] <Azarus_> Not sure but is it possible to get information about the input somehow?
[22:00:49 CEST] <Mandevil> durandal_1707: mpv seems to do proper adjustment of black/white levels...
[22:01:01 CEST] <Mandevil> durandal_1707: Looks like my new favorite player :)
[22:01:48 CEST] <Mandevil> How can I adjust volume in it?
[22:01:52 CEST] <durandal_1707> most things are configured via the command line
[22:02:10 CEST] <durandal_1707> default bindings
[22:02:54 CEST] <durandal_1707> Mandevil: press 9 or 0
[22:03:29 CEST] <Mandevil> durandal_1707: Thanks!
[22:03:56 CEST] <durandal_1707> for more info read manual: man mpv
[22:04:08 CEST] <Mandevil> durandal_1707: In windows? ;-)
[22:04:39 CEST] <durandal_1707> there is pdf with default key bindings
[22:04:45 CEST] <c_14> Mandevil: https://mpv.io/manual/master/
[22:04:54 CEST] <Mandevil> durandal_1707: You're right.
[22:05:00 CEST] <Mandevil> c_14: Yeah, I can see.
[22:05:15 CEST] <Mandevil> It looks really nice even for basic usage.
[22:06:09 CEST] <Mandevil> Hey, it's related to mplayer....
[22:06:18 CEST] <Mandevil> ...that used to be my favorite some time ago.
[22:06:41 CEST] <Mandevil> mpv doesn't seem to support avisynth?
[22:07:09 CEST] <durandal_1707> it supports vapoursynth
[22:07:13 CEST] <durandal_1707> instead
[22:07:20 CEST] <Mandevil> Hm.
[23:24:08 CEST] <alexpigment> if i try to create a DVD-formatted VOB file with PCM audio, it gives all these "buffer underflow" messages and also "packet too large, ignoring buffer limits to mux it"
[23:25:04 CEST] <alexpigment> i narrowed it down, and you don't have to specify any specific parameters. just -i [input] -c:v mpeg2video -c:a pcm_s16le output.mpg
[23:25:25 CEST] <alexpigment> has anyone heard of this issue before? is there a workaround?
[23:28:24 CEST] <alexpigment> another important note is that when i look at the resultant video in MediaInfo, it shows the audio codec as "MPEG (LATM)" - which is very confusing
[23:28:42 CEST] <kepstin> alexpigment: you probably just have to reduce your video maxrate; pcm audio is big
[23:28:52 CEST] <alexpigment> i already did that in preparation
[23:29:11 CEST] <alexpigment> i even put it down to 3mbps in one test
[23:29:22 CEST] <kepstin> and you've set -muxrate correctly?
[23:29:43 CEST] <alexpigment> i did, but i eventually just removed all the parameters until i realized that this is a very basic problem
[23:29:45 CEST] <relaxed> alexpigment: you want -f vob
[23:30:19 CEST] <kepstin> and yes, yes, -f vob or -f dvd (they're aliases for the same thing, iirc)
[23:30:29 CEST] <kepstin> the 'mpg' muxer has different limits from vob
[23:30:43 CEST] <alexpigment> i was using -f dvd
[23:30:49 CEST] <kepstin> I think you do want 'dvd' specifically, actually, vob is different... hmm.
[23:30:53 CEST] <alexpigment> but i can try -f vob if you think it'll make a diff
[23:31:06 CEST] <kepstin> (i'm not actually sure now, have to check)
[23:31:33 CEST] <kepstin> but yes, if you're targeting dvd, make sure you use -f dvd
[23:31:41 CEST] <alexpigment> same thing with -f vob
[23:32:34 CEST] <alexpigment> i have a fairly complex and completely by-the-book longer command line, but after i narrowed it down to just having mpeg2video and pcm audio, it seems pretty straightforward
[23:32:51 CEST] <alexpigment> there's some sort of bad assumption somewhere in the muxer
[23:33:21 CEST] <relaxed> alexpigment: try, ffmpeg -i input -target ntsc-dvd -c:a pcm_s16le -f dvd output
[23:33:34 CEST] <relaxed> or pal-dvd
[23:33:53 CEST] <alexpigment> relaxed: that's a fair call. i didn't try that (although i have everything explicitly defined to match that)
[23:33:53 CEST] <kepstin> can you paste the full output of that command, please? I don't have any test videos handy right now
[23:33:59 CEST] <alexpigment> yeah 1 sec
[23:35:47 CEST] <alexpigment> yeah, same deal with the target parameter
[23:36:26 CEST] <alexpigment> setting it to make a lot and re-running the test
[23:36:31 CEST] <alexpigment> *make a log
[23:36:49 CEST] <kepstin> note that you'll want to be using -c:a pcm_s16be I think
[23:37:48 CEST] <alexpigment> does that apply for DVD?
[23:37:53 CEST] <alexpigment> i thought big endian was just blu-ray
[23:38:01 CEST] <alexpigment> i can try that too though
[23:38:04 CEST] <kepstin> no, dvd is big-endian too
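Combining relaxed's `-target` suggestion with kepstin's big-endian point gives a minimal sketch like the one below; lavfi test sources stand in for a real input, and `output.mpg` is a made-up name:

```shell
# -target ntsc-dvd presets the DVD constraints (720x480, 29.97 fps, GOP,
# maxrate/muxrate/bufsize); -c:a pcm_s16be then overrides the default AC-3
# audio with big-endian PCM. This is a repro sketch, not a verified fix:
# the muxer may still emit buffer underflow warnings as described above.
ffmpeg -y -f lavfi -i testsrc=duration=1:size=720x480:rate=30000/1001 \
       -f lavfi -i sine=r=48000:duration=1 \
       -target ntsc-dvd -c:a pcm_s16be -f dvd output.mpg 2>err.log
```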
[23:38:37 CEST] <alexpigment> yeah, big endian does the same thing
[23:38:41 CEST] <alexpigment> hold on while i prepare this log
[23:42:36 CEST] <kepstin> hmm. I have to head home to look at what authored dvds with pcm audio are like, don't have any backed up to online storage at the moment.
[23:42:54 CEST] <alexpigment> what do you mean by "are like" exactly?
[23:43:07 CEST] <kepstin> I'm curious about whether using -af asetnsamples to change the number of samples per frame would make a difference
[23:43:28 CEST] <kepstin> just wanted to poke at the muxing to see what ffmpeg does differently
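kepstin's asetnsamples idea can be sketched as below; `n=1536` is an arbitrary illustrative value, and NUT is used only because it preserves packet boundaries, so the frame size can be read back with ffprobe:

```shell
# Repacketize PCM into fixed-size frames with asetnsamples, store the result
# in NUT (which keeps packet boundaries intact), then inspect a frame.
ffmpeg -y -f lavfi -i sine=r=48000:duration=1 \
       -af asetnsamples=n=1536 -c:a pcm_s16be out.nut 2>/dev/null
ffprobe -show_frames out.nut 2>/dev/null | grep '^nb_samples' | head -n1
```

For the DVD case, the same `-af asetnsamples=n=#` would go in front of `-f dvd` with whatever sample count the ffprobe of an authored disc reports.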
[23:43:52 CEST] <alexpigment> oh gotcha
[23:44:03 CEST] <alexpigment> anyway, what's the size you guys usually use for pasting?
[23:44:05 CEST] <alexpigment> pastebin?
[23:44:18 CEST] <kepstin> has a few
[23:44:24 CEST] <kepstin> i usually use github gists
[23:45:05 CEST] <alexpigment> https://pastebin.com/SrP8LPQK
[23:45:56 CEST] <alexpigment> kepstin, i have a lot of DVDs with PCM audio. let me know if a sample would help
[23:46:21 CEST] <alexpigment> about half of music-related DVDs opt for PCM rather than AC-3
[23:46:38 CEST] <kepstin> yeah, I have a few, they're just not online so I can't access them over ssh :)
[23:46:44 CEST] <alexpigment> oh right
[23:47:47 CEST] <kepstin> I'm curious - if you do an ffprobe -show_frames on them, and look at the number of samples in an audio frame - then use "-af asetnsamples=n=#" with that number, does it work better
[23:48:16 CEST] <alexpigment> will do
[23:48:48 CEST] <kepstin> i'm thinking the dvd muxer might not break up pcm audio frames to get a nice constant rate, so it gets big frames (overflow) then no frame (underflow) kind of thing
[23:49:07 CEST] Action: kepstin is off!
[23:50:10 CEST] <alexpigment> i'm pretty green on ffprobe tbh
[23:50:26 CEST] <alexpigment> do i need to specify anything but -show_frames?
[23:51:05 CEST] <alexpigment> it's just constantly updating when i try that
[23:51:13 CEST] <kepstin> nope, that should be sufficient. it'll pump out a lot of output, so feed it to less or something
[23:51:28 CEST] <kepstin> then just look for a frame that's marked as audio, and see if there's info about the number of samples in it
[23:51:35 CEST] <alexpigment> oh nm, i was using -i inputfile
[23:51:36 CEST] <alexpigment> duh
[23:54:40 CEST] <kepstin> something like `ffprobe -show_frames VTS_01_1.VOB | grep -E '^nb_samples' | head -n1` will just print the value, too :)
[23:55:03 CEST] <alexpigment> https://pastebin.com/vwgsxskZ
[23:55:10 CEST] <alexpigment> nothing in there about nb_samples
[23:55:31 CEST] <kepstin> I said "show_frames", you used show_streams and show_format :)
[23:56:06 CEST] <alexpigment> oh that's what i did first
[23:56:15 CEST] <alexpigment> and it just never stopped outputting
[23:56:43 CEST] <kepstin> well, it would have stopped eventually, but it prints out ~20 lines per frame, so there's a ton of output
[23:56:57 CEST] <kepstin> use my command with grep and head and it'll print out one line and exit quick :)
[23:56:59 CEST] <alexpigment> is there a way to give it like 10 frames or something
[23:57:05 CEST] <alexpigment> oh gotcha
[23:57:19 CEST] <alexpigment> sorry, not trying to be an idiot here ;)
[23:57:35 CEST] <kepstin> I also like the "ffprobe [...] | less" option for looking through the output
[23:58:28 CEST] <alexpigment> grep doesn't work on windows command line :(
[23:59:00 CEST] <alexpigment> nor does | less
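On a stock Windows cmd.exe, `findstr` and `more` are the rough stand-ins for `grep` and `less` (an assumption; no msys/WSL installed). A sketch, with the Windows forms shown as comments and a POSIX form that generates its own sample so it runs without a real VOB:

```shell
# Windows cmd.exe equivalents of kepstin's pipeline (VTS_01_1.VOB as before):
#   ffprobe -show_frames VTS_01_1.VOB 2>NUL | findstr /B nb_samples
#   ffprobe -show_frames VTS_01_1.VOB 2>&1 | more
# POSIX form, using a lavfi-generated file so the command is self-contained:
ffmpeg -y -f lavfi -i sine=duration=1 -c:a pcm_s16le sample.wav 2>/dev/null
ffprobe -show_frames sample.wav 2>/dev/null | grep '^nb_samples' | head -n1
```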
[23:59:52 CEST] <alexpigment> there are some advanced things about ffmpeg etc that i know very well. and then there are some basic things that i'm just completely green on. i feel like i play lead guitar in a band but don't know how to make a G chord
[23:59:59 CEST] <kepstin> ... well, I can't really help you there. if all else fails, just ctrl-c ffprobe after it prints some output and go through the scrollback a bit :/
[00:00:00 CEST] --- Tue Apr 11 2017