[Ffmpeg-devel-irc] ffmpeg.log.20170505
burek
burek021 at gmail.com
Sat May 6 03:05:02 EEST 2017
[00:01:13 CEST] <furq> shrug
[00:01:23 CEST] <furq> if nobody disagreed on the mailing list then there's no harm in pressing the issue
[00:01:43 CEST] <furq> it looks better than encode_audio.c to me, but i've barely touched the api
[00:02:27 CEST] <furq> maybe rename it "encode_mux_audio.c" or something so it's more clear what the difference is
[00:02:59 CEST] <faLUCE> furq: iive and others, at this point, I ask you to look at/test the code and push a request to reconsider it, if you think it's worthwhile
[00:03:11 CEST] <furq> i would if i was a developer
[00:03:16 CEST] <faLUCE> furq: yes, encode_mux_audio.c would be better
[00:03:56 CEST] <iive> faLUCE: i won't do it tonight, I'm way too tired.
[00:04:02 CEST] <faLUCE> iive: np
[00:04:10 CEST] <iive> i'll try not to forget tomorrow ;)
[00:04:14 CEST] <faLUCE> iive: do it when/if you want
[00:04:34 CEST] <furq> but yeah i'd be more likely to use the api if the examples were better
[00:05:08 CEST] <faLUCE> furq: for the examples, the point of view of the user is more important than that of the developer
[00:05:14 CEST] <furq> right
[00:05:33 CEST] <faLUCE> I'm more a user of the API than a developer, so I wrote something that is useful for the user
[00:06:13 CEST] <furq> well yeah by all accounts being a developer is not very much fun for this precise reason
[00:06:24 CEST] <furq> it's probably the same for any large open-source project
[00:06:37 CEST] <furq> especially ones with high concentrations of germans
[00:06:46 CEST] <faLUCE> but people were very dogmatic, in that channel... for example, they argued that I should have fed the encoder with dummy audio frames (sine tones) instead of reading a file (which is absurd, in my opinion, for a user of the API)
[00:07:08 CEST] <faLUCE> [00:06] <furq> especially ones with high concentrations of germans <----- LOOOL
[00:08:21 CEST] <faLUCE> furq: you know? I also prepared an example with libevent and libav (you suggested to use libevent for HTTP, and it worked perfectly)
[00:31:39 CEST] <alexpigment> furq: regarding our earlier conversation, I went and did some PSNR/SSIM tests on x264 (medium preset) vs nvenc hevc (slow preset)
[00:32:12 CEST] <alexpigment> at the same bitrate, libx264 is better quality (not by a huge amount in my 10mbps, but still)
[00:33:08 CEST] <alexpigment> BUT NVENC's HEVC is 6.5x faster than x264 on my system. for context, i have a Core i7 3770 and an Nvidia 1060
[00:34:08 CEST] <alexpigment> more importantly, if you can afford a bitrate that's 1.4x of x264, you end up with the same quality (same PSNR/SSIM, at least) at 6.5x the speed
[00:34:21 CEST] <alexpigment> so i think that's a strong argument for nvenc
[00:34:37 CEST] <alexpigment> especially if for whatever reason HEVC is a required aspect
[00:39:32 CEST] <furq> psnr isn't a hugely useful metric with x264 because of psy and stuff
[00:40:36 CEST] <furq> but yeah if you need an hevc bitstream for whatever reason then i'd definitely consider it
[03:11:21 CEST] <thebombzen_> apparently, using "ffmpeg -h encoder=h264_vaapi" says that the only supported pixel format is vaapi_vld
[03:11:38 CEST] <thebombzen_> which is causing me issues because the autoscaler can't do that. explicitly setting -pix_fmt vaapi_vld doesn't help either
[03:12:31 CEST] <thebombzen_> how am I actually supposed to use the h264_vaapi encoder
[06:03:10 CEST] <thebombzen_> are there any extra things required to get vaapi working on linux?
[06:04:32 CEST] <thebombzen_> if I try to run ffmpeg -f lavfi -i testsrc2 -c h264_vaapi -f null -, then it'll complain that the autoscaler can't scale to null
[06:04:59 CEST] <thebombzen_> upon running ffmpeg -h encoder=h264_vaapi, it says that it only accepts vaapi_vld as a pixel format
[06:06:07 CEST] <thebombzen_> if I try to force -pix_fmt yuv420p, it'll say this: Incompatible pixel format 'yuv420p' for codec 'h264_vaapi', auto-selecting format '(null)'
[06:06:18 CEST] <thebombzen_> am I missing something here?
[06:08:25 CEST] <furq> don't you need -vf format=nv12,hwupload
[06:08:43 CEST] <thebombzen_> hwupload?
[06:10:21 CEST] <thebombzen_> hm. tried that, didn't work
[06:10:22 CEST] <thebombzen_> http://sprunge.us/SJFE
[06:10:51 CEST] <furq> -vaapi_device
[06:11:00 CEST] <thebombzen_> is there way to list devices
[06:11:08 CEST] <furq> iirc it's always /dev/dri/renderD128
[06:11:12 CEST] <furq> at least for intel
[06:11:51 CEST] <furq> you don't need the filters if you're hardware decoding
[06:13:52 CEST] <thebombzen_> hardware encoding
[06:13:57 CEST] <thebombzen_> and now it's an unknown libva error
[06:14:02 CEST] <thebombzen_> http://sprunge.us/ZEUh
[06:14:13 CEST] <furq> well yeah you'd normally be hardware decoding as well
[06:14:21 CEST] <furq> obviously not if you're using lavfi for testing
[06:15:14 CEST] <thebombzen_> do you know how to fix the "unknown libva error"
[06:15:19 CEST] <furq> i do not
[06:15:29 CEST] <thebombzen_> hm. I did install libva-intel-driver since last reboot
[06:15:35 CEST] <thebombzen_> I might have to reboot to refresh some module
[06:18:31 CEST] <thebombzen> it did not fix it
[07:13:48 CEST] <james999> alexpigment: what's that about, benchmarking hevc vs x264?
[11:49:52 CEST] <cryptopsy> how can i get total play time for mplayer? like 8300/15000 , where 15000 is the total play time but 8300 is the current position
[11:52:32 CEST] <faLUCE> hello, iive, if you want to bump the thread we talked about yesterday, this is the correct link: http://ffmpeg.org/pipermail/ffmpeg-devel/2017-March/209494.html . Thanks
[12:07:36 CEST] <thebombzen> well
[12:07:57 CEST] <thebombzen> cryptopsy: perhaps you should ask the mplayer people, not the ffmpeg people
[12:08:03 CEST] <thebombzen> or even better, use mpv
[14:02:32 CEST] <thunfisch> Hey. I've got a problem with concat, using this ffconcat file & command: https://paste.xinu.at/m-LUyZb/ total duration should be 48:37, but somehow ffmpeg generates a file that's only 43:27 long with half of the last file duration missing. sadly cannot share the files because of copyright issues. any idea what I'm missing?
[14:03:27 CEST] <thunfisch> oh, -preset is slow
[14:05:51 CEST] <thunfisch> in the process of doing a run with higher loglevel, gonna paste the output in a minute..
[14:09:08 CEST] <thunfisch> https://paste.xinu.at/m-iAE/
[15:21:51 CEST] <thunfisch> what the hell. okay, i added 'file 0.png\nduration 0.001' to the end, now it's giving me 79183 frames, which is 00:52:47.28. something's really broken there.
[15:21:51 CEST] <thunfisch> but the previously last slide (14.png) now has the proper end time of 46:37, just the new last slide is way too long.
[16:20:34 CEST] <thunfisch> okay, another topic: is it possible to output (to a file preferably) the raw rtsp headers when recording from a rtsp server? I'm specifically interested in the Range and RTP-Info headers.
[16:20:43 CEST] <thunfisch> can't find anything in the documentation that would allow that..
[16:50:53 CEST] <Croolman> Hi everyone. Does anyone know what av_frame_copy() does when the dst frame is bigger than the source? Can I expect the dest frame to be the input with black padding on the right and bottom side? I have gone through the function, but can't figure it out myself with confidence.
[16:52:46 CEST] <BtbN> "This function does not allocate anything, dst must be already initialized and allocated with the same parameters as src."
[16:52:47 CEST] <thebombzen> Croolman: you probably have to scale it yourself
[16:52:53 CEST] <TheWild> hello
[16:53:03 CEST] <BtbN> If the frame does not match 100%, you'll most likely crash
[16:53:06 CEST] <thebombzen> I don't know how it'll fail, but it won't work as expected
[16:53:33 CEST] <thebombzen> Croolman: you can use libswscale to scale it, or you could manually pad it with the pad filter in libavfilter
[16:53:48 CEST] <TheWild> I have a video file, but it is damaged in a way that it can be still watched from the beginning, but not seeked. Can ffmpeg be used to fix it without doing any conversion?
[16:54:04 CEST] <thebombzen> TheWild: what format is it in?
[16:54:05 CEST] <furq> maybe
[16:54:13 CEST] <furq> ffmpeg -i foo.mp4 -map 0 -c copy bar.mp4
[16:54:24 CEST] <furq> replace mp4 with the extension you want
[16:54:40 CEST] <furq> if that doesn't work then you're probably out of luck
[16:54:52 CEST] <thebombzen> because furq's command should work
[16:55:04 CEST] <thebombzen> but keep in mind that if the input format is something like mpeg-ts (.ts) then it's not damaged
[16:55:13 CEST] <thebombzen> certain containers like .ts just don't support random access seeking
[16:55:19 CEST] <furq> what
[16:55:35 CEST] <thebombzen> yea, mpegts doesn't support seeking
[16:55:38 CEST] <furq> i don't think you're thinking of mpegts
[16:55:45 CEST] <thebombzen> hm?
[16:56:11 CEST] <thebombzen> mpeg-ts doesn't support random access seeking by design
[16:56:13 CEST] <dystopia_> .ts can also be h264 or hevc
[16:56:17 CEST] <Croolman> BtbN: I did read the documentation, but I do not understand the phrase "same parameters". The function inside just checks whether the dst size is smaller than the src, nothing else. I do pass an already allocated frame.
[16:56:17 CEST] <dystopia_> it's just a container
[16:56:22 CEST] <TheWild> here are details: https://kopy.io/AHZGO. Original file extension was lost.
[16:56:24 CEST] <dystopia_> and you can seek it fine
[16:57:06 CEST] <thebombzen> no, seeking with mpegts doesn't work well
[16:57:15 CEST] <thebombzen> you can try but it won't be accurate
[16:57:18 CEST] <dystopia_> unless you're on about the actual transport stream capture, then you would have to demux the audio and video streams
[16:57:21 CEST] <furq> it does work though
[16:57:22 CEST] <BtbN> Croolman, well, it means exactly that. It has to have the same properties, otherwise the results are undefined
[16:57:27 CEST] <BtbN> same size, pix fmt, everything
[16:57:40 CEST] <thebombzen> you can't reliably seek mpegts, it doesn't work I'm saying
[16:57:54 CEST] <thebombzen> you can try but there's no guarantee you'll be able to do anything
[16:58:03 CEST] <thebombzen> and there's no guarantee it'll be accurate
[16:58:45 CEST] <furq> every player i've ever used can seek ts just fine, even if it's not designed for it
[16:58:50 CEST] <thebombzen> mpv doesn't
[16:58:53 CEST] <furq> it's not like m2v or something where you actually can't seek
[16:58:57 CEST] <Croolman> BtbN: alright, I have the allocated frame for the destination; wouldn't it then be easier to just memcpy the memory from one to the other line by line?
[16:59:13 CEST] <thebombzen> I frequently will try to seek mpegts files in mpv and it doesn't work well
[16:59:18 CEST] <thebombzen> whereas if I remux to mp4 it works fine
[16:59:23 CEST] <dystopia_> seek fine in media player classic
[16:59:26 CEST] <BtbN> that's exactly what the function does, but the parameters have to match for either of those methods
[16:59:26 CEST] <furq> i just tried it and it works
[16:59:34 CEST] <Croolman> thebombzen: cannot scale it, I need the image inside to be the same, just with black paddings on right and bottom side
[16:59:38 CEST] <dystopia_> i often watch the .ts while i wait for ffmpeg to finish the encode
[16:59:38 CEST] <furq> "works well" is subjective, but it does work
[16:59:42 CEST] <dystopia_> i can seek fine
[16:59:48 CEST] <thebombzen> furq: it doesn't work perfectly
[16:59:56 CEST] <furq> probably
[16:59:58 CEST] <BtbN> Croolman, the content of a bigger frame will probably be random garbage, and not black
[17:00:08 CEST] <BtbN> if it doesn't just plain crash
[17:00:10 CEST] <furq> but there are formats that can't be seeked at all
[17:00:18 CEST] <furq> it's a pretty big distinction
[17:00:33 CEST] <thebombzen> I mean that with mkv and mp4 and nut and etc. you can seek to an exact timecode
[17:00:42 CEST] <thebombzen> if you try that with mpegts there's no guarantee it'll work
[17:01:03 CEST] <thebombzen> it sometimes can but it's generally not accurate or reliable
[17:01:17 CEST] <thebombzen> Croolman: try using the pad filter from libavfilter
[17:01:21 CEST] <thebombzen> that does what you need
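[Editor's note: the line-by-line copy Croolman describes — dropping a smaller image into a larger black-filled buffer — can be sketched in plain C. This is a hypothetical single-plane, 8-bit example with tightly packed rows; real AVFrame planes use per-plane strides (frame->linesize[i]), and "black" for chroma planes is 128, not 0.]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Copy a src_w x src_h plane into a larger dst_w x dst_h plane,
 * filling the right and bottom borders with black (0 for luma).
 * Both planes are assumed tightly packed (stride == width). */
static void pad_plane(uint8_t *dst, int dst_w, int dst_h,
                      const uint8_t *src, int src_w, int src_h)
{
    memset(dst, 0, (size_t)dst_w * dst_h);      /* black background */
    for (int y = 0; y < src_h; y++)             /* line-by-line memcpy */
        memcpy(dst + (size_t)y * dst_w,
               src + (size_t)y * src_w,
               (size_t)src_w);
}
```

This is essentially what the pad filter does internally, generalized to all planes and arbitrary x/y offsets.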
[17:01:56 CEST] <TheWild> I dropped the -map 0 parameter and ffmpeg did the job well. Thank you very much.
[17:02:08 CEST] <dystopia_> it should seek to nearest i-frame which should be present throughout the file iirc
[17:02:21 CEST] <Croolman> thebombzen: I am writing a filter to be a part of ffmpeg; I cannot really use static functions from inside the library... if I understand it correctly
[17:02:35 CEST] <thebombzen> Croolman: what?
[17:02:53 CEST] <furq> TheWild: you'll need -map 0 or else it'll drop the extra audio streams
[17:03:00 CEST] <Croolman> BtbN, I guess you're right, then I am out of options
[17:03:07 CEST] <furq> at least use -map 0:v -map 0:a
[17:03:11 CEST] <thebombzen> Croolman: what are you actually trying to do?
[17:03:16 CEST] <furq> otherwise you'll need to use a container that supports those subtitle streams
[17:03:18 CEST] <furq> i.e. mkv
[17:03:20 CEST] <BtbN> use an appropiate filter for your job
[17:03:49 CEST] <Croolman> thebombzen, I am aware of the pad filter, but I am not using the API of ffmpeg
[17:03:58 CEST] <thebombzen> Croolman: yes you are?
[17:04:12 CEST] <thebombzen> you are using av_frame_copy
[17:04:17 CEST] <thebombzen> how are you not using ffmpeg's API?
[17:04:30 CEST] <Croolman> thebombzen, alright, part of it yes
[17:04:49 CEST] <thebombzen> you should also link to libavfilter, and use the pad filter
[17:05:29 CEST] <thebombzen> it's what you need, and every ffmpeg installation will come with it in addition to libavformat and libavcodec
[17:05:53 CEST] <thebombzen> if you're trying to be minimalist, then you can build a version of libavfilter with nothing but the pad filter (and a few necessary others like buffer)
[17:05:53 CEST] <TheWild> the other streams are probably damaged, because when I add -map 0, it doesn't work even with -copy_unknown/-ignore_unknown.
[17:06:07 CEST] <thebombzen> TheWild: try muxing to .mkv instead of .mp4
[17:06:09 CEST] <thebombzen> and see what happens
[17:07:25 CEST] <furq> TheWild: the source has got streams which probably aren't supported by the output container
[17:07:34 CEST] <furq> so yeah, use mkv
[17:07:41 CEST] <TheWild> nope, same problems. ffmpeg complains very quickly and ends.
[17:07:46 CEST] <furq> what's the error message
[17:08:05 CEST] <Croolman> thebombzen: ok, let's say I would use the padding filter: which of its functions would I call from my filter?
[17:08:45 CEST] <Croolman> dunno if we are on the same page
[17:09:12 CEST] <thebombzen> well you should read the libavfilter documentation
[17:09:14 CEST] <thebombzen> and the examples
[17:09:35 CEST] <thebombzen> you're trying to increase the size of the video by padding with black. ffmpeg's API has that functionality, namely in the pad filter of libavfilter
[17:09:49 CEST] <thebombzen> so you should read the libavfilter documentation and basic examples
[17:09:55 CEST] <thebombzen> and then work from there
[17:10:02 CEST] <TheWild> https://kopy.io/jK0ed
[17:10:10 CEST] <TheWild> ^ furq
[17:11:41 CEST] <thebombzen> fyi you're using an old version of ffmpeg
[17:12:02 CEST] <thebombzen> more than a year old
[17:12:08 CEST] <thebombzen> it's possible this is a bug that's been fixed
[17:12:26 CEST] <thebombzen> try grabbing a recent build and try again
[17:12:42 CEST] <furq> er
[17:12:45 CEST] <furq> yeah that's all fucked
[17:12:51 CEST] <furq> you shouldn't be getting decoder errors with -c copy
[17:13:01 CEST] <furq> it shouldn't be decoding anything
[17:13:20 CEST] <dystopia_> try just map 0:0 and 0:1
[17:13:30 CEST] <furq> 16:03:07 ( furq) at least use -map 0:v -map 0:a
[17:14:03 CEST] <furq> 0:0 and 0:1 would be the automatic selections anyway, and it'd drop the other audio stream
[17:14:07 CEST] <Croolman> thebombzen, thank you. I was under the impression that filters inside libavfilter should not use other filters' functionality from within the libav* libraries. I'll look into it then
[17:14:21 CEST] <thebombzen> are you trying to write a filter for libavfilter?
[17:15:02 CEST] <thebombzen> what filter are you writing? why do you need to pad the video manually?
[17:15:21 CEST] <Croolman> thebombzen, yes
[17:15:35 CEST] <Croolman> I am trying to write the wavelet denoise filter
[17:16:15 CEST] <thebombzen> why do you need to pad for that
[17:16:20 CEST] <thebombzen> also there are two wavelet denoisers already
[17:16:28 CEST] <Croolman> I need every frame to be of 2^n width and height, so internally I need to create a copy of the input frame, process it, and put the ROI on the output
[17:17:50 CEST] <Croolman> do you mean the - vf_waveform.c?
[17:18:01 CEST] <furq> !filter owdenoise
[17:18:01 CEST] <nfobot> furq: http://ffmpeg.org/ffmpeg-filters.html#owdenoise
[17:18:08 CEST] <furq> !filter vaguedenoiser
[17:18:09 CEST] <nfobot> furq: http://ffmpeg.org/ffmpeg-filters.html#vaguedenoiser
[17:19:06 CEST] <Croolman> Ahh, ok.
[17:19:36 CEST] <Croolman> It is a school project and I am using a different wavelet; also the threshold is computed internally
[17:22:49 CEST] <thebombzen> if it's a school project then it might be best not to frame it as a patch to libavfilter
[17:22:52 CEST] <thebombzen> but rather its own thing
[17:23:10 CEST] <thebombzen> also 2^n x 2^n is really nice in theory but in the real world you have annoying prime factors like 3 and 5
[17:23:27 CEST] <thebombzen> you should figure out a way to deal with those
[17:23:50 CEST] <thebombzen> how do you actually do that? Idk, I'm a mathematician, not a software developer.
[17:23:52 CEST] <furq> i don't think he said anything about patching
[17:24:07 CEST] <thebombzen> [11:14:07] <Croolman> thebombzen, thank you. I was under the impression that filters inside libavfilter should not use other filters' functionality from within the libav* libraries. I'll look into it then
[17:24:15 CEST] <thebombzen> definitely did
[17:24:22 CEST] <Croolman> it is not a patch. I did not know there are two already; initially I was planning to put it into ffmpeg if it went well.
[17:24:33 CEST] <thebombzen> putting it in ffmpeg is a patch
[17:24:41 CEST] <furq> "i'm writing a filter" doesn't imply contribution
[17:25:04 CEST] <thebombzen> it does when he says that he's trying to avoid referencing libavfilter filters from another filter he's writing within libavfilter
[17:25:06 CEST] <furq> and it's not a patch unless he contributes it
[17:25:17 CEST] <furq> unless he has some really messed up way of writing code
[17:25:52 CEST] <thebombzen> Croolman: either way, given that this functionality already exists, if you're writing it as a school project, I think it's better to write libcroolmanwavelet and link to libavfilter
[17:25:53 CEST] <Croolman> It is code written from the ground up
[17:25:55 CEST] <thebombzen> rather than try to integrate it
[17:26:22 CEST] <thebombzen> because the point of the project is to use wavelets, right, not to learn how to play with pads
[17:26:31 CEST] <Croolman> The integration itself is mandatory
[17:26:38 CEST] <thebombzen> ...oh. okay.
[17:27:07 CEST] <Croolman> This is the only issue I am facing. The closest 2^n width/height is easy to compute
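[Editor's note: the "closest 2^n" computation Croolman mentions — rounding each dimension up to the next power of two — is a one-liner loop in plain C:]

```c
#include <assert.h>

/* Round n up to the nearest power of two (assumes n >= 1). */
static int next_pow2(int n)
{
    int p = 1;
    while (p < n)
        p *= 2;
    return p;
}
```

For example, a 1920x1080 frame would be padded to 2048x2048 before the wavelet transform, and the 1920x1080 region of interest cropped back out afterwards.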
[17:27:07 CEST] <thebombzen> why does your professor want you to put it inside libavfilter? that's an odd assignment
[17:27:33 CEST] <thebombzen> yea. 2^n by 2^n is super nice but in the real world you won't get that. I have no idea how to fix that, because I'm a mathematician, not a software developer
[17:27:42 CEST] <Croolman> The way he put it seems like he does not know there are some already either
[17:27:43 CEST] <thebombzen> I just look at 2^n by 2^n and go "it's done!"
[17:27:58 CEST] <thebombzen> there's about eight different denoise filters in libavfilter
[17:28:25 CEST] <Croolman> I do know that
[17:28:47 CEST] <hamdjan> hi, with the android app soundhound you can find out the title of a playing song. i wonder if it is also possible to say what movie a trailer belongs to. basically i got the movie title and just want to make sure the trailer really belongs to that movie. would it suffice to grab several unique colors from different frames of the trailer and compare them with e.g. 50 pictures of the google image result? https://www.google.de/search?q=jack+reacher&source=lnms&tbm=isch
[17:29:38 CEST] <Croolman> thebombzen: The issue here is really how to copy data from smaller frame to bigger frame, thats it.
[17:29:47 CEST] <furq> Croolman: you might want to look at how -vf pad does it
[17:29:47 CEST] <kepstin> hamdjan: that's a little out of scope for ffmpeg user support. If you figure out how to do that, do let us know about the paper you publish for your PHD ;)
[17:29:52 CEST] <furq> https://ffmpeg.org/doxygen/trunk/vf__pad_8c_source.html
[17:31:24 CEST] <thebombzen> hamdjan: that sounds like a hard problem
[17:31:35 CEST] <Croolman> furq: I did, but it is a little too complicated to decompose. Also I just don't want to copy-paste code from another filter.
[17:31:50 CEST] <kepstin> hamdjan: basically, "it's not that simple". Existing online screenshot identification tools, like https://whatanime.ga/ for example, rely on massive databases of video data.
[17:32:07 CEST] <thebombzen> hamdjan: things like soundhound or shazam compare a clip to a large database. if you're trying to parse the title of a video and a few screenshots you're in for a lot of work
[17:32:12 CEST] <hamdjan> kepstin, hehe, i think it's possible, but not that easy. trailers are mostly transcodes and multiple different filters are added, so the original colors change; same for the screenshot pictures in the google image result list. so i would have to apply multiple filters to both the screenshot and the trailer frames and see how similar they are and if there are recurring colors/forms
[17:32:43 CEST] <thebombzen> it won't appear in google images necessarily with "search by image" unless that shot or similar shot appears prominently
[17:32:48 CEST] <thebombzen> otherwise it'll just find visually similar images
[17:33:15 CEST] <hamdjan> yes, it would only see if the visual forms of the e.g. the main actor reappears, or the color of his jacket
[17:33:29 CEST] <thebombzen> this doesn't have an easy solution. the problem you're trying to solve is hard
[17:33:53 CEST] <thebombzen> you can't just bust in and say "I'm going to solve it with algorithms!" and expect it to work easily
[17:34:00 CEST] <hamdjan> right, thanks for clarifying that, won't go for it then. thought i could make a quick solution to this
[17:34:18 CEST] <thebombzen> It might be easier to parse the title of the youtube video
[17:34:25 CEST] <thebombzen> than to parse the video itself
[17:34:31 CEST] <thebombzen> usually the name of the movie is in the title
[17:34:46 CEST] <hamdjan> i do that already, but sometimes there are often movies with the same or similar name or trailers with typos in the movie title etc
[17:35:27 CEST] <kepstin> keep in mind that google itself probably already knows what movie a trailer is from, because of their giant analysis system to detect copyright owners for videos
[17:36:15 CEST] <hamdjan> hm, i took a quick look into google's youtube api, but they don't reveal the trailer movie unfortunately to the public
[18:36:02 CEST] <james999> rnndom question, but what % of URLs that people type in these channels are preceded by http:?
[18:36:20 CEST] <james999> i.e. how often do people just say "go to ffmpeg.org/doc/examples"?
[18:54:33 CEST] <agrathwohl> I am wondering how 2-pass mode works for h264_nvenc and hevc_nvenc. The 'slow' preset is reported to enable '2-pass mode' but does one also need to specify the '-pass 1' and '-pass 2' flag in order to explicitly perform a 2-pass encode?
[19:06:43 CEST] <jkqxz> It's internal two-pass, not external. (The hardware encoder is probably run on each frame twice, the first encode being used the inform the parameters to the second. That is all inside the opaque proprietary drivers, though, so you've no idea what it actually does.)
[19:07:51 CEST] <DHE> as I understand it this is correct. per-block decisions are made better in 2-pass mode but it is otherwise still frame-by-frame
[19:08:02 CEST] <kepstin> agrathwohl: jkqxz: I looked this up earlier. The nvenc encoder normally operates on a single macroblock at a time - when in the misnamed "2-pass mode", it analyzes an entire frame before encoding that frame
[19:08:07 CEST] <DHE> nvenc was mainly targeted at real-time jobs like video conferencing
[19:08:45 CEST] <agrathwohl> Thanks this is very useful information for us.
[19:08:52 CEST] <furq> that really is misnamed
[19:09:04 CEST] <agrathwohl> How come nVidia can't get their terminology straight? Not the first time in recent memory that they've done something like that.
[19:09:13 CEST] <agrathwohl> Appreciate all the insights, cheers!
[19:09:22 CEST] <DHE> from their perspective it's 2-pass. just on a frame-level rather than a whole video level
[19:09:30 CEST] <furq> https://www.youtube.com/watch?v=WTIe867A0CQ
[19:09:36 CEST] <furq> this is probably why
[19:09:43 CEST] <alexpigment> fwiw, sometimes developers are just out of touch with reality, and yet they name things ;)
[19:09:44 CEST] <agrathwohl> So would a more preferable approach be to decode with the GPU and encode with x264/x265?
[19:10:10 CEST] <furq> depends what you're doing
[19:10:10 CEST] <agrathwohl> If the target is high-resolution VOD 360 video? :D
[19:10:14 CEST] <furq> for archival, certainly
[19:10:23 CEST] <DHE> x264 easily produces more consistent and better quality than nvenc, at the obvious cost of performance. again, nvenc is best for real-time work like video conferencing, live game streaming, etc
[19:10:26 CEST] <furq> nvenc is good for realtime but not really great for anything else
[19:10:48 CEST] <alexpigment> nvenc is good for realtime + not using up CPU
[19:11:00 CEST] <furq> well that's normally a constraint for realtime stuff
[19:11:13 CEST] <kepstin> if you need to do realtime capture then re-encode for vod, it might make sense to capture with really high bitrate nvenc, then re-encode with x264/x265 to get it smaller.
[19:11:17 CEST] <agrathwohl> i've noticed quite a decrease in quality, indeed. Even with objective tests like PSNR, x264 performs much better with not a significant decrease in performance.
[19:11:18 CEST] <alexpigment> true, i don't know why i clarified there ;)
[19:12:07 CEST] <agrathwohl> I've been considering just pre-processing all our source masters to yuv444p10le so that CUDA cards can decode it, then use CPU to squeeze out all the quality possible.
[19:12:30 CEST] <agrathwohl> Seems like this approach is being validated. Thanks so much to all for your help
[19:12:31 CEST] <alexpigment> agrathwohl: i actually did a few tests yesterday. the extra bitrate needed to achieve the same PSNR is not horrible. it's like 1.4x in my quick tests
[19:12:44 CEST] <agrathwohl> alexpigment what kind of resolutions and frame rates were you dealing with?
[19:12:50 CEST] <furq> you should probably clarify that that was nvenc_hevc vs x264
[19:13:13 CEST] <furq> granted i doubt nvenc_hevc is much better than nvenc_h264
[19:13:14 CEST] <agrathwohl> furq actually it was both hevc_nvenc/x265 and h264_nvenc/x264
[19:13:20 CEST] <furq> oh ok
[19:13:35 CEST] <furq> s/much better/different/ then
[19:14:17 CEST] <alexpigment> true again. it's probably more like 1.5x the bitrate needed to achieve the same PSNR
[19:14:39 CEST] <furq> i figured they weren't much different but i didn't know it was that close
[19:15:22 CEST] <alexpigment> i'd have to do a few more tests to prove that off-the-cuff statement
[19:15:55 CEST] <alexpigment> but it's safe to assume that the quality of x264 is still better than hevc_nvenc but not by a huge amount
[19:16:12 CEST] <alexpigment> and that h264_nvenc trails behind the hevc quality by some amount
[19:16:15 CEST] <furq> 40% is pretty significant for modern video
[19:16:28 CEST] <alexpigment> signficant for certain applications
[19:16:32 CEST] <furq> sure
[19:17:01 CEST] <kepstin> i wouldn't have been surprised if x264 can beat h265_nvenc, at least in the slower encoding modes
[19:17:33 CEST] <alexpigment> sometimes I just need a quick intermediate file, in which case file size is not a big deal at all
[19:17:41 CEST] <furq> nvenc still doesn't support bframes in hevc does it
[19:17:54 CEST] <alexpigment> i don't think so but i can check real quick
[19:18:09 CEST] <furq> wikipedia says pascal doesn't
[19:18:37 CEST] <alexpigment> yeah, no b-frames in this video
[19:19:59 CEST] <furq> out of interest, how does lossless nvenc compare to lossless h264/ffv1 etc
[19:20:06 CEST] <furq> lossless x264, rather
[19:20:09 CEST] <alexpigment> you mean in terms of file size?
[19:20:14 CEST] <furq> yeah
[19:20:18 CEST] <alexpigment> lemme test
[19:20:34 CEST] <furq> can you compare against -preset ultrafast and -preset veryslow
[19:21:55 CEST] <alexpigment> yeah
[19:22:01 CEST] <alexpigment> crf 1 or crf 0?
[19:22:12 CEST] <alexpigment> crf 0 does high 444 i think
[19:22:45 CEST] <furq> -qp 0
[19:22:49 CEST] <alexpigment> k
[19:23:03 CEST] <furq> crf/qp don't affect the pixel format
[19:23:42 CEST] <alexpigment> in my experience crf 0 does
[19:23:45 CEST] <alexpigment> i'd have to test again
[19:23:59 CEST] <alexpigment> but i was pretty sure it forces it to the High 4:4:4 Predictive profile
[19:24:09 CEST] <alexpigment> but then it's probably still doing 4:2:0
[19:24:17 CEST] <alexpigment> (i'm just mumbling at this point, ignore me)
[19:24:18 CEST] <kepstin> it forces that profile, yes, because that profile is needed for the lossless predictive mode
[19:24:46 CEST] <hamdjan> what is the video codec of this video? http://sprunge.us/AhCi
[19:24:51 CEST] <kepstin> note that "-crf 0" is only lossless in 8-bit x264, it is not lossless in 10-bit; that's why furq said to use '-qp 0' instead
[19:24:59 CEST] <hamdjan> mp42 or avc1?
[19:25:24 CEST] <hamdjan> i think avc1 and mp42 is only the container?
[19:25:46 CEST] <alexpigment> furq: for my 1.5 minute test video, the losslesshp NVENC profile was 2.34GB
[19:26:14 CEST] <alexpigment> libx264 with the ultrafast preset was 2.5GB
[19:26:30 CEST] <alexpigment> still waiting on veryslow - will probably be another 5 minutes
[19:26:34 CEST] <furq> that's not bad
[19:27:05 CEST] <furq> hamdjan: it's h.264
[19:27:14 CEST] <hamdjan> furq, thanks for assuring!
[19:27:21 CEST] <furq> Format/Info : Advanced Video Codec
[19:27:23 CEST] <hamdjan> furq, so avc1 is a synonym for h264
[19:27:30 CEST] <furq> avc is
[19:27:56 CEST] <furq> actually i guess avc1 is as well
[19:28:19 CEST] <alexpigment> is avc1 just the fourcc name for avc?
[19:28:33 CEST] <alexpigment> (i.e. they needed four characters and just added a '1')?
[19:29:16 CEST] <furq> https://www.fourcc.org/avc1/
[19:29:26 CEST] <furq> h264 is the usual fourcc
[19:30:45 CEST] <alexpigment> interesting. not sure why it shows up in MediaInfo over h264
[19:31:05 CEST] <furq> that just reads whatever's in the container
[19:31:19 CEST] <alexpigment> then again, fourcc doesn't list H264
[19:31:28 CEST] <alexpigment> the website i mean
[19:31:39 CEST] <alexpigment> nm
[19:31:40 CEST] <furq> https://www.fourcc.org/h260-through-h269/
[19:31:46 CEST] <alexpigment> it's not searchable because of the way they wrote it
[19:31:49 CEST] <furq> yeah
[19:32:17 CEST] <alexpigment> maybe it only gets used if cabac is off or something
[19:32:25 CEST] <alexpigment> baseline profile, etc
[19:32:49 CEST] <furq> according to ms avc1 is "h.264 bitstream without start codes"
[19:32:58 CEST] <furq> i'm not sure how reliable that is though
[19:34:46 CEST] <alexpigment> i'm not even sure i know what that means - would movflags faststart add those start codes?
[19:35:20 CEST] <furq> start codes == annex b
[19:35:32 CEST] <furq> faststart is specific to mp4
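[Editor's note: the avc1 vs Annex B distinction above — length-prefixed NAL units vs 00 00 00 01 start codes — can be sketched in plain C. This is a hypothetical, simplified example assuming 4-byte length prefixes (the real length-field size is recorded in the avcC box and may be 1, 2, or 4 bytes); ffmpeg itself performs this conversion with the h264_mp4toannexb bitstream filter.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Convert an "avc1"-style (length-prefixed) H.264 stream to Annex B
 * in place: each NAL unit's 4-byte big-endian length prefix is
 * overwritten with the 00 00 00 01 start code (conveniently the same
 * size). Returns 0 on success, -1 on a truncated/invalid stream. */
static int avcc_to_annexb(uint8_t *buf, size_t size)
{
    size_t pos = 0;
    while (pos + 4 <= size) {
        uint32_t nal_len = ((uint32_t)buf[pos]     << 24) |
                           ((uint32_t)buf[pos + 1] << 16) |
                           ((uint32_t)buf[pos + 2] <<  8) |
                            (uint32_t)buf[pos + 3];
        if (nal_len > size - pos - 4)
            return -1;                          /* length runs past buffer */
        buf[pos]     = 0;                       /* write start code */
        buf[pos + 1] = 0;
        buf[pos + 2] = 0;
        buf[pos + 3] = 1;
        pos += 4 + nal_len;                     /* skip to next NAL unit */
    }
    return pos == size ? 0 : -1;
}
```

This also shows why the two forms are interchangeable at remux time: only the framing bytes differ, not the NAL unit payloads.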
[19:36:19 CEST] <alexpigment> i see. well i'll just accept that this particular designation is outside of my area of knowledge ;)
[19:36:37 CEST] <furq> i'm sure i've seen mp4s with h264 fourccs, which shouldn't be possible according to ms
[19:36:43 CEST] <furq> i wouldn't put money on that though
[19:36:56 CEST] <furq> but then i don't think fourccs are really relevant any more
[19:37:29 CEST] <alexpigment> yeah, the last time i even paid attention to fourcc names was when I was trying to force an AVI to be either Divx or Xvid
[19:37:40 CEST] <furq> yeah i don't think it matters for anything but avi
[19:37:48 CEST] <furq> and it's 2017, so there's no good reason to use avi
[19:38:05 CEST] <alexpigment> it has its place
[19:38:13 CEST] <alexpigment> but i don't usually use it
[19:38:14 CEST] <furq> is the place "the past"
[19:38:17 CEST] <furq> or is it "the bin"
[19:38:47 CEST] <alexpigment> what's that program that everyone uses with avisynth?
[19:38:54 CEST] <alexpigment> not avidemux, but the other one
[19:39:09 CEST] <alexpigment> it's a very barebones NLE
[19:40:24 CEST] <alexpigment> VirtualDub
[19:40:28 CEST] <alexpigment> it took me a minute
[19:40:45 CEST] <dystopia_> :O
[19:40:45 CEST] <alexpigment> anyway, if you use VirtualDub still in 2017, I suppose you would need to work with AVI a lot :)
[19:40:52 CEST] <dystopia_> virtualdub is way out of date too
[19:41:05 CEST] <dystopia_> doesn't support many modern codecs and stuff
[19:41:08 CEST] <alexpigment> exactly
[19:41:17 CEST] <alexpigment> which is part of why people use AviSynth with it
[19:41:25 CEST] <dystopia_> i used to love it back in the day though
[19:41:30 CEST] <furq> it supports any codecs that ffdshow supports, which is most of them
[19:41:38 CEST] <furq> but yeah the container support is terrible
[19:41:44 CEST] <alexpigment> furq: i figured it was only VFW codecs
[19:42:08 CEST] <furq> ffdshow is vfw
[19:42:09 CEST] <alexpigment> but then again i haven't used ffdshow in many years. it wrecked my system codecs so much
[19:42:28 CEST] <furq> you can use avisynth directly with ffmpeg anyway
[19:42:37 CEST] <furq> although i don't see why you would when vapoursynth exists now
[19:43:08 CEST] <furq> and you don't have to rely on awful crash-prone-after-8-hours-of-encoding hacks to get multithreading
[19:43:11 CEST] <alexpigment> anyway, this is a long convoluted way to say that some people still use VirtualDub
[19:43:16 CEST] <furq> they sure do
[19:43:31 CEST] <furq> lord have mercy on their souls
[19:43:38 CEST] <alexpigment> haha
[19:44:23 CEST] <alexpigment> it's at least a more legit program compared to AviDemux. that program only seems to work for a small handful of workflows
[19:47:17 CEST] <alexpigment> ok furq, here are the final numbers on that test from earlier
[19:47:38 CEST] <alexpigment> NVENC, preset losslesshp: 2.34GB
[19:47:47 CEST] <alexpigment> libx264, preset ultrafast: 2.50GB
[19:48:02 CEST] <alexpigment> libx264, preset veryslow: 1.62GB
[19:48:25 CEST] <alexpigment> but damn if veryslow didn't take over 10 minutes :)
[19:53:12 CEST] <furq> that's not bad at all
[19:56:21 CEST] <hamdjan> how can i check if my ffmpeg installation can convert h264?
[19:56:41 CEST] <hamdjan> ffmpeg codecs shows this: http://sprunge.us/RGhd
[19:57:05 CEST] <hamdjan> so listing "DEV.LS h264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (decoders: h264 h264_crystalhd h264_vdpau ) (encoders: libx264 libx264rgb )" it should also be able to convert h264?
[19:57:05 CEST] <c_14> > DEV.LS h264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (decoders: h264 h264_crystalhd h264_vdpau ) (encoders: libx264 libx264rgb )
[19:57:14 CEST] <c_14> it can decode and encode h264, yes
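A quick way to read that `-codecs` line from a script: the leading "D" flag means decode support and "E" means encode support, and the encoder names in parentheses (e.g. libx264) are what `-c:v` can actually use. A small sketch:

```shell
# Check whether the local ffmpeg build can decode AND encode H.264.
STATUS=skipped
if command -v ffmpeg >/dev/null 2>&1; then
  # the relevant line looks like: " DEV.LS h264 ... (encoders: libx264 ...)"
  if ffmpeg -hide_banner -codecs 2>/dev/null | grep -w h264 | grep -q '^ DE'; then
    STATUS="h264 decode+encode supported"
  else
    STATUS="h264 not fully supported in this build"
  fi
fi
echo "$STATUS"
```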
[20:02:17 CEST] <faLUCE> hello, is there any code for a simple audio-video player with libav?
[20:02:50 CEST] <faLUCE> something like this (which doesn't work) http://dranger.com/ffmpeg/tutorial05.html
[20:03:27 CEST] <faLUCE> I tried that too: https://github.com/wang-bin/QtAV but it seems in a bad state
[20:10:48 CEST] <hamdjan> c_14, thanks for assuring!
[21:30:47 CEST] <bwe> Hi, how do I add `scale=-1:1080` to -filter_complex correctly? `ffmpeg -i Slide01.png -vf scale=-1:1080 output.png` works, however my more complex command doesn't: https://bpaste.net/show/0569ae5dea03
[21:59:48 CEST] <teratorn> bwe: how does it fail?
[22:03:23 CEST] <bwe> Negative values are not acceptable. Failed to configure input pad on Parsed_pad_4 Log: https://bpaste.net/show/1e9cd4c80cd6
[22:05:05 CEST] <bwe> Ah, I need to consider a different order...
[22:06:00 CEST] <bwe> `scale` must precede `pad`. Why so?
[22:09:23 CEST] <teratorn> bwe: I dunno, sorry
[22:10:27 CEST] <teratorn> bwe: oh
[22:10:34 CEST] <bwe> teratorn: working log here: https://bpaste.net/show/b3c9df1949b8
[22:11:21 CEST] <teratorn> bwe: does specifying scale=w=-1:h=1080 work?
[22:12:41 CEST] <bwe> teratorn: Do you mean adopting the working version (the last paste) to scale=w=-1:h=1080?
[22:13:01 CEST] <teratorn> no the version that gave the error about negative values
[22:15:09 CEST] <bwe> teratorn: ... -filter_complex '[0:v]setsar=sar=1/1,fps=30,loop=loop=-1:start=0:size=1,trim=duration=5.7,pad=width=1920:height=1080:x=(out_w-in_w)/2:y=(out_h-in_h)/2:color=white,scale=w=-1:h=1080[0]; does not work
[22:16:12 CEST] <teratorn> hmmmm
[22:16:24 CEST] <teratorn> so it has nothing to do with the value -1
[22:16:42 CEST] <teratorn> perhaps it does not know what the requested pix format is since it is the last filter in the chain
[22:16:49 CEST] <bwe> teratorn: No, as this is even recommended: https://trac.ffmpeg.org/wiki/Scaling%20(resizing)%20with%20ffmpeg
[22:17:03 CEST] <relaxed> bwe: your input image is 1920x1440, but you're trying to pad it to pad=width=1920:height=1080 ?
[22:17:19 CEST] <relaxed> how's that going to work?
[22:17:20 CEST] <bwe> teratorn: the pad fails because it receives 1440 px yet it expects 1080.
[22:17:24 CEST] <bwe> relaxed: It can't.
[22:17:54 CEST] <relaxed> scale, then pad
[22:17:55 CEST] <bwe> relaxed: So scale ensures that pad is fed with 1080 height.
[22:18:22 CEST] <bwe> relaxed: Thanks for that clarification. The problem was not syntax, as I expected.
[22:18:51 CEST] <relaxed> you're welcome
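The fix above in one runnable sketch: filters run left to right, so `pad` must receive a frame that already fits the target canvas. Padding a 1920x1440 input straight to 1920x1080 makes `y=(out_h-in_h)/2` negative, hence "Negative values are not acceptable"; scaling to `-1:1080` first yields 1440x1080, which pads cleanly. The gray test input stands in for the original image:

```shell
# scale first, then pad: the order that made bwe's command work
SKIPPED=1
if command -v ffmpeg >/dev/null 2>&1; then
  SKIPPED=0
  ffmpeg -v error -y -f lavfi -i color=c=gray:size=1920x1440:rate=10:duration=0.1 \
    -vf "scale=-1:1080,pad=width=1920:height=1080:x=(out_w-in_w)/2:y=(out_h-in_h)/2:color=white" \
    -frames:v 1 padded.png
fi
```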
[22:23:05 CEST] <bwe> relaxed: Where would I have learned that from the docs? Yes, it's complex, and believing I could get something to work as a newbie in a few hours seems rather naïve.
[22:25:06 CEST] <relaxed> "Negative values are not acceptable" was a pretty good hint, but it just takes experience to look at ffmpeg's output
[22:25:45 CEST] <relaxed> you'll bang your head against the desk less and less as you use it =)
[22:27:49 CEST] <relaxed> when you run into an error with filters, simplifying the command as much as possible also helps to see what's wrong
[22:29:40 CEST] <bwe> Isn't there some list on the wiki outlining the process to isolate the issue faster / common mistakes...?
[22:31:29 CEST] <faLUCE> hello, is there any code for a simple audio-video player with libav?
[22:36:53 CEST] <relaxed> faLUCE: the top of ffplay.c says "simple media player based on the FFmpeg libraries"
[22:36:54 CEST] <teratorn> faLUCE: ffplay.c ?
[22:37:25 CEST] <faLUCE> too long. In addition, it uses threads and I don't understand why
[22:38:03 CEST] <teratorn> faLUCE: any real player will be a lot longer
[22:38:30 CEST] <teratorn> "audio-video player" and "simple" don't go together
[22:38:53 CEST] <faLUCE> I tried this, for video only, http://dranger.com/ffmpeg/tutorial02.c and it's ok. I need something which adds the audio part: http://dranger.com/ffmpeg/tutorial05.html <--- this doesn't work and it uses threads
[22:38:57 CEST] <relaxed> People ask for this here quite a bit. Too bad the dranger tutorial is out of date.
[22:39:10 CEST] <faLUCE> relaxed: exactly
[22:39:35 CEST] <faLUCE> about the dranger tutorial I don't understand why it uses threads
[22:40:54 CEST] <faLUCE> I saw that ffplay uses threads too. I don't understand that too
[22:41:37 CEST] <relaxed> What would be cool is if someone took the dranger tutorial, added it to ffmpeg/tools/tutorial.c with explanations by the code. Then tutorial.c could be tested with fate
[22:42:28 CEST] <faLUCE> relaxed: as said before, I tried to send examples to the mailing list. They have been rejected without any good reason, even if they were well coded and useful.
[22:43:31 CEST] <relaxed> it would be a lot of work and they probably want someone to maintain it if it goes in
[22:43:42 CEST] <relaxed> do you have a link to the email thread?
[22:44:13 CEST] <faLUCE> relaxed: really not. they just had a dogmatic behaviour. anyway, iive told me that he wanted to bump and propose my thread again
[22:44:32 CEST] Action: relaxed pets iive
[22:44:41 CEST] <faLUCE> relaxed: http://ffmpeg.org/pipermail/ffmpeg-devel/2017-March/209494.html
[22:45:20 CEST] <faLUCE> I have a good experience with libav, and I could send many useful things
[22:45:48 CEST] <faLUCE> I could re-work the dranger tutorial too, but this behaviour really discouraged me from contributing
[22:47:15 CEST] <relaxed> You might have better luck with an RFC email explaining your goal and the best way to go about it.
[22:48:22 CEST] <faLUCE> relaxed: really not. My time is up. I don't want to waste more time with that. If some people, here, find my examples useful, then it's up to them to propose them again to the mailing list
[22:49:38 CEST] <faLUCE> relaxed: you can see for yourself how useful the example I proposed is. But they preferred to keep horrible code like "muxing.c", "demuxing_decoding.c" or "transcode_aac.c"
[22:54:33 CEST] <faLUCE> does anyone want to make this simple avplayer with me? :-)
[22:54:50 CEST] <faLUCE> I have some ideas...
[22:55:30 CEST] <JEEB> I would rather just make a thing on top of libvlc or libmpv
[22:55:41 CEST] <JEEB> we don't need more NIH in base player things
[22:55:52 CEST] <faLUCE> JEEB: I thought about that, but in this way I can't control the latency
[22:55:55 CEST] <durandal_1707> kodi
[22:56:30 CEST] <JEEB> faLUCE: never came up with the idea of contributing to those things instead of doing NIH? esp. since they already can use lavf/lavc
[22:56:56 CEST] <faLUCE> JEEB: what is NIH?
[22:57:08 CEST] <JEEB> Not Invented Here (syndrome)
[22:58:28 CEST] <faLUCE> JEEB: I have to make a low latency live http mpegts player. I succeeded in making a video-only player, and if I make an audio-video player it would be really useful for anyone
[22:59:25 CEST] <faLUCE> but I need some help
[22:59:40 CEST] <JEEB> I still don't see why libvlc (or libmpv) wouldn't be something you could work with
[23:00:04 CEST] <JEEB> since you would still be able to call it faLUCE's delicious player, but at least you'd be contributing to something that would get generally used
[23:01:01 CEST] <faLUCE> JEEB: I don't have clear ideas about that. But I suspect that wrapping libav (as libmpv or libvlc do) doesn't give me control over the latency. In fact, both vlc and mpv have bad latency for mpegts http streams
[23:01:54 CEST] <JEEB> let me guess you didn't even try to minimize that. and even if it wouldn't be perfect what stops you from trying to improve those two
[23:02:09 CEST] <faLUCE> JEEB: I'm not interested in making "faluce's player". I would be glad if I can use external libs for the player (I already built my lib)
[23:02:23 CEST] <JEEB> only if you definitely see that due to how those two are done you can't architecturally make it good enough
[23:02:44 CEST] <JEEB> I recommend you go poke #videolan for example regarding minimizing of latency as I know VLC is often used in UDP environments
[23:04:00 CEST] <faLUCE> JEEB: I already know how to minimize the latency with libav (as said before, I did that for the video part, and for the audio part separately)
[23:04:16 CEST] <faLUCE> and I did that for muxing audio+video
[23:04:18 CEST] <JEEB> I didn't say anything about that
[23:04:30 CEST] <JEEB> actually read what I said, please
[23:05:06 CEST] <JEEB> I know it's often simpler to hack something up yourself for a POC, but generally these things end up more useful if you don't end up NIH'ing
[23:05:18 CEST] <faLUCE> JEEB: let me finish:
[23:05:25 CEST] <faLUCE> the main problem in vlc are THREADS
[23:05:42 CEST] <faLUCE> I don't like libvlc stuff for this reason
[23:06:32 CEST] <JEEB> you should look into the input modules and such that are in both libvlc and libmpv
[23:06:33 CEST] <faLUCE> in addition, vlc has options for minimizing the latency (I know all of them) but they don't give good results
[23:06:49 CEST] <JEEB> look into meaning the code
[23:07:11 CEST] <JEEB> anyways, at this point it seems like you have made your mind so I'm just going to stop
[23:07:21 CEST] <JEEB> have fun doing NIH
[23:08:06 CEST] <faLUCE> JEEB: vlc code is a mess and it uses threads without a good reason. In addition, the options for minimizing the latency don't work well. If I sum all these things, it could be easier to re-write the dranger example... This is what I have to decide
[23:09:34 CEST] <faLUCE> now, the first question is: is it necessary to use threads for audio+video playing? is it possible to put them into only one main loop? why does ffplay use them?
[23:10:35 CEST] <JEEB> maybe you should have this discussion on #videolan or #mpv , seeing where those threads actually help (I would guess the renderers at least should be in their own threads)
[23:11:21 CEST] <faLUCE> JEEB: right. I'll try to ask that in #mpv
[23:11:37 CEST] <JEEB> -34
[23:12:06 CEST] <faLUCE> ?
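For reference on the latency discussion above: these are the ffplay flags usually tried first on a live HTTP/MPEG-TS source. `-fflags nobuffer` disables input buffering, `-flags low_delay` requests low-delay decoding, and a tiny `-probesize`/`-analyzeduration` cut the stream-analysis delay at startup. The URL is a placeholder, and the command is only echoed here since ffplay needs a display and a live stream to actually run:

```shell
# low-latency ffplay invocation (sketch; URL is a placeholder)
CMD='ffplay -fflags nobuffer -flags low_delay -probesize 32 -analyzeduration 0 http://example.com/live.ts'
echo "$CMD"
```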
[23:32:28 CEST] <zerodefect> Is this channel appropriate to ask dev-oriented questions using libavformat/libavutil, or is it more for cli-based questions?
[23:36:55 CEST] <teratorn> zerodefect: both
[23:37:23 CEST] <DHE> zerodefect: response times to API questions are longer, but yeah this is the channel.
[23:37:34 CEST] <zerodefect> Thanks.
[23:38:50 CEST] <zerodefect> Would there be any reasons or limitations that would prevent me from being able to dynamically load libavformat and/or libavutil and call corresponding functions?
[23:39:50 CEST] <faLUCE> zerodefect: same as for other libs :-)
[23:41:24 CEST] <zerodefect> Ok. I figured as much. I ask because I did have issues linking at compile time where libavformat/libavutil had to be given in a specific order (Ubuntu 17.04, GCC 6.3)
[23:41:39 CEST] <zerodefect> I could never quite get to the bottom of that :/
[23:42:25 CEST] <DHE> yeah, gcc's linker is single-pass by default. though it can be made multi-pass
[23:43:27 CEST] <zerodefect> Yeah, I got around it eventually by tweaking the order (can't remember which one came first)
[23:43:43 CEST] <DHE> gcc ..... -Wl,--start-group -lavcodec -lavformat .... -Wl,--end-group
[23:44:01 CEST] <DHE> will run multiple passes through everything in the group until it gets it all (assuming you didn't miss anything)
[23:44:13 CEST] <zerodefect> Ah, good to know. I'll make a note of that :)
[23:44:14 CEST] <DHE> I think that syntax is right
[23:44:26 CEST] <zerodefect> I can look up the specifics
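DHE's syntax is right, and the single-pass behaviour is easy to demonstrate without libav at all. The linker scans static archives once, in order; a symbol that only becomes undefined after an archive has already been scanned stays unresolved unless the group is rescanned. A self-contained sketch with two tiny mutually dependent archives (names are made up for the demo):

```shell
# Demonstrate -Wl,--start-group/--end-group with circular static libs.
SKIPPED=1
if command -v gcc >/dev/null 2>&1 && command -v ar >/dev/null 2>&1; then
  SKIPPED=0
  printf 'void b1(void);\nvoid a1(void){ b1(); }\n' > a1.c
  printf 'void a2(void){ }\n'                       > a2.c
  printf 'void a2(void);\nvoid b1(void){ a2(); }\n' > b.c
  printf 'void a1(void);\nint main(void){ a1(); return 0; }\n' > main.c
  gcc -c a1.c a2.c b.c main.c
  ar rcs liba.a a1.o a2.o   # liba's a1 needs b1 from libb
  ar rcs libb.a b.o         # libb's b1 needs a2 from liba -> circular
  # plain "-la -lb" can fail: a2 only becomes undefined after liba was
  # already scanned; the group is rescanned until nothing new resolves
  gcc main.o -L. -Wl,--start-group -la -lb -Wl,--end-group -o demo
  ./demo && echo "linked and ran"
fi
```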
[23:47:23 CEST] <arog> hey
[23:48:36 CEST] <arog> I have an app that is recording images to an mp4 file; the issue is that sometimes people forget to stop the application (which calls av_write_trailer, avcodec_close and avio_close) and eject the hard disk
[23:48:54 CEST] <arog> is it possible to keep the file in a "safe" state every so often?
[23:48:54 CEST] <zerodefect> I'm using Boost.DLL (v1.62) to dynamically load libavutil. I resolve the symbols for 'av_frame_alloc' - the return address looks right. I call the function pointer and bang...segmentation fault.
[23:49:00 CEST] <arog> so it won't be completely unusable
[23:49:23 CEST] <arog> maybe every 10-15 seconds it saves?
[23:50:33 CEST] <arog> can I just do av_write_trailer then open it again to add more data?
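Calling av_write_trailer and then reopening to append is not supported by the MP4 muxer: without the trailer a plain MP4 has no moov index and is unreadable. The usual mitigation is fragmented MP4, which writes self-contained fragments as it goes, so a crash or yanked disk loses at most the last fragment. When using the API, the same `movflags` string can be passed in the AVDictionary given to avformat_write_header(). CLI sketch of the equivalent:

```shell
# Write a fragmented MP4 that stays playable if the writer dies early.
SKIPPED=1
if command -v ffmpeg >/dev/null 2>&1; then
  SKIPPED=0
  ffmpeg -v error -y -f lavfi -i testsrc=duration=1:size=128x72:rate=10 \
    -c:v mpeg4 -movflags +frag_keyframe+empty_moov fragmented.mp4
fi
```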
[23:59:16 CEST] <draean> Quick question: if I'm trying to pull in a stream source, and mix it into the file "ffmpeg -i input.mp4 -i "http://stream/source.mp3" -map 0:v -map 1:a output.mp4" but I want to be able to handle that audio stream not being there, and play black audio instead while it's missing. what would I need to look at?
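One approach, offered as a sketch: ffmpeg itself will not fail over to silence if an input is unreachable, but a wrapper script can probe the stream first and substitute `anullsrc` (the silent-audio lavfi source) when it is down. `STREAM_URL` is the placeholder from the question, and the testsrc clip stands in for `input.mp4`:

```shell
# Fall back to silent audio when the stream source is unreachable.
SKIPPED=1
if command -v ffmpeg >/dev/null 2>&1 && command -v ffprobe >/dev/null 2>&1; then
  SKIPPED=0
  # stand-in for the real video input
  ffmpeg -v error -y -f lavfi -i testsrc=duration=1:size=128x72:rate=10 \
    -c:v mpeg4 input.mp4
  STREAM_URL="http://stream/source.mp3"   # placeholder from the question
  if ffprobe -v error "$STREAM_URL" >/dev/null 2>&1; then
    AUDIO_IN="-i $STREAM_URL"
  else
    # stream unreachable: generate silence instead
    AUDIO_IN="-f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100"
  fi
  # -shortest matters: anullsrc is infinite, so end with the video
  ffmpeg -v error -y -i input.mp4 $AUDIO_IN -map 0:v -map 1:a \
    -c:v copy -c:a aac -shortest output.mp4
fi
```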
[00:00:00 CEST] --- Sat May 6 2017