[Ffmpeg-devel-irc] ffmpeg.log.20180518
burek
burek021 at gmail.com
Sat May 19 03:05:02 EEST 2018
[00:00:09 CEST] <atomnuker> device derivation for vaapi is messed up and doesn't work
[00:00:20 CEST] <atomnuker> so you need to do it manually like I described
[00:01:14 CEST] <atomnuker> so the command line would be ffmpeg -init_hw_device "vaapi=vap:/dev/dri/renderD128" -f kmsgrab -i /dev/null -filter_hw_device vap -vf hwmap,hwdownload,format=nv12 -f null -
[00:01:31 CEST] <atomnuker> this will map, convert and download to nv12
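The same pipeline can write to a real file instead of the null muxer; a minimal sketch, assuming libx264 is present in the build (the output name capture.mkv is just an example):

    ffmpeg -init_hw_device "vaapi=vap:/dev/dri/renderD128" -f kmsgrab -i /dev/null \
           -filter_hw_device vap -vf hwmap,hwdownload,format=nv12 \
           -c:v libx264 capture.mkv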
[00:03:27 CEST] <f-safinaskar> atomnuker: wow, this works!
[00:06:47 CEST] <f-safinaskar> atomnuker: thanks a lot
[00:06:58 CEST] <f-safinaskar> atomnuker: even switching into tty1 works!!
[00:07:10 CEST] <f-safinaskar> atomnuker: please describe every option. i want to understand how this works
[00:08:34 CEST] <f-safinaskar> atomnuker: also, is it possible to generate frames as fast as the screen actually changes?
[00:08:50 CEST] <f-safinaskar> atomnuker: and to attach timestamp info into every frame?
[00:08:52 CEST] <f-safinaskar> :)
[00:09:28 CEST] <atomnuker> as fast as the screen changes?
[00:09:34 CEST] <f-safinaskar> atomnuker: yes
[00:09:52 CEST] <f-safinaskar> atomnuker: i. e. every time the screen changes, e. g. some new letter appears, i want to write a new frame
[00:10:41 CEST] <atomnuker> add a decimate filter for that to remove identical frames
[00:10:56 CEST] <atomnuker> for timestamp, look at the drawtext filter
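A sketch combining both suggestions. This assumes mpdecimate (which drops frames that barely differ from the previous one) is the filter meant by "decimate" here, converts to yuv420p first since not every filter accepts nv12, and stamps frames via drawtext's %{localtime} expansion (a usable font setup is assumed); -vsync vfr keeps the variable frame rate that mpdecimate produces:

    ffmpeg -init_hw_device "vaapi=vap:/dev/dri/renderD128" -f kmsgrab -i /dev/null \
           -filter_hw_device vap \
           -vf "hwmap,hwdownload,format=nv12,format=yuv420p,mpdecimate,drawtext=text=%{localtime}" \
           -vsync vfr -c:v libx264 capture.mkv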
[00:11:26 CEST] <f-safinaskar> atomnuker: i don't mean actually draw a text on a frame
[00:11:36 CEST] <f-safinaskar> atomnuker: i mean I want to store metainfo somehow
[00:11:48 CEST] <f-safinaskar> atomnuker: for example as a subtitle or something like that
[00:13:54 CEST] <atomnuker> its already in the metadata
[00:14:18 CEST] <atomnuker> you can't not have a timestamp
[00:15:17 CEST] <kepstin> hmm, but depending how you use ffmpeg, you'll probably get the timestamps reset so the first frame is 0. There's a way to avoid that tho (assuming your container's ok with it)
[00:15:29 CEST] <kepstin> but all you really need to know is the time that the video starts.
[00:15:36 CEST] <kepstin> and then relative times in the video
[00:16:24 CEST] <atomnuker> pretty much all containers except a few will have timestamp rollover times of around the age of the universe
[00:16:50 CEST] <f-safinaskar> first of all, i often use suspending
[00:16:53 CEST] <f-safinaskar> and hibernation
[00:17:06 CEST] <f-safinaskar> so, if I write a video using 4 frames per second
[00:17:19 CEST] <f-safinaskar> then actual time between two frames may be 3 hours
[00:17:29 CEST] <f-safinaskar> because my computer was suspended during this time
[00:17:50 CEST] <kepstin> whatever you do, this is definitely going to cut severely into your battery life, if this is a mobile system :)
[00:18:08 CEST] <f-safinaskar> and I want to store real timestamp info. So, timestamps for these two frames should differ by 3 hours
[00:18:12 CEST] <kepstin> (it would be interesting to see if kmsgrab actually does survive suspend properly)
[00:18:34 CEST] <f-safinaskar> kepstin: wow, thanks for idea!
[00:18:51 CEST] <f-safinaskar> kepstin: i thought kmsgrab would survive it without problems
[00:19:04 CEST] <f-safinaskar> kepstin: now i understand i should test it first
[00:19:27 CEST] <f-safinaskar> afk
[00:19:58 CEST] <kepstin> f-safinaskar: kmsgrab does timestamps using the realtime clock, so if there's a 3h gap, it'll put 3h between frames
[00:20:09 CEST] <kepstin> that will work as you expect.
[00:20:44 CEST] <f-safinaskar_> i am here again
[00:21:20 CEST] <kepstin> f-safinaskar: i guess you might have missed my last message, since it doesn't seem like you're using an irc bouncer :)
[00:21:28 CEST] <kepstin> <kepstin> f-safinaskar: kmsgrab does timestamps using the realtime clock, so if there's a 3h gap, it'll put 3h between frames
[00:22:00 CEST] <f-safinaskar_> kepstin: haha, i just tested suspend with kmsgrab and it survived!
[00:24:15 CEST] <f-safinaskar_> atomnuker: please describe all the options in that huge command
[00:28:07 CEST] <atomnuker> -init_hw_device inits the vaapi device (with the label vap; it can be anything, it's just an id)
[00:28:29 CEST] <atomnuker> -filter_hw_device looks up the label and uses the hw device specified
[00:28:37 CEST] <atomnuker> hwmap maps to vaapi
[00:28:50 CEST] <atomnuker> hwdownload downloads the hwframe to a software one
[00:29:11 CEST] <atomnuker> -crtc_id is only needed if you have more than 1 display
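An annotated recap of the whole command as a sketch (the -crtc_id value 42 is hypothetical, and as a kmsgrab input option it goes before -i):

    # -init_hw_device vaapi=vap:...    create a vaapi device; "vap" is an arbitrary label
    # -f kmsgrab -i /dev/null          grab frames from the KMS/DRM display pipeline
    # -crtc_id 42                      pick a display, only needed with more than one
    # -filter_hw_device vap            hand the labelled device to the filter chain
    # -vf hwmap,hwdownload,format=nv12 map the DRM frame to vaapi, convert, download to RAM
    ffmpeg -init_hw_device "vaapi=vap:/dev/dri/renderD128" \
           -f kmsgrab -crtc_id 42 -i /dev/null -filter_hw_device vap \
           -vf hwmap,hwdownload,format=nv12 -f null -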
[00:32:11 CEST] <gnarface> this doesn't quite do what i thought it would: ffmpeg -f video4linux2 -pix_fmt yuyv422 -s 1280x960 -i /dev/video0 -c:v rawvideo -f rawvideo -vf fps=fps=1/60 timelapse_test%d.raw
[00:32:31 CEST] <gnarface> i intended them to be separate files
[00:32:48 CEST] <gnarface> would i use a different value for "-f" then?
[00:33:45 CEST] <gnarface> they'll all be encoded into a video later but the machine this is running on isn't fast enough for that
[00:36:16 CEST] <gnarface> (and i can't just run ffmpeg every 60 seconds using "-vframes 1" either, because there's about a 0.3% chance the camera will freeze due to insufficient power to the USB ports. lol raspberry pi... you so craaaazy)
[00:36:23 CEST] <f-safinaskar_> atomnuker: thanks a lot!
[00:36:58 CEST] <f-safinaskar_> atomnuker: also i want to store this info with every frame: which vt is active
[00:37:13 CEST] <gnarface> (actually it's somewhere between 0.3% and 0.4%, maybe closer to 0.4%)
[00:37:39 CEST] <f-safinaskar_> atomnuker: i want to draw this text on a frame or to store it to subtitle or to some metainfo into container
[00:37:54 CEST] <f-safinaskar_> atomnuker: let's assume container is powerful enough, say, it is .mkv
[00:38:14 CEST] <atomnuker> you can't, this taps into the gpu directly, it has no idea about VTs
[00:39:11 CEST] <f-safinaskar> atomnuker: is there an option to run a specified shell command for every frame and then store the output of this command written on the frame or in some metainfo?
[00:41:10 CEST] <atomnuker> no idea, you'd need to write a script yourself
[00:42:54 CEST] <f-safinaskar> atomnuker: it seems you don't understand me. Let's assume I already have a shell script which, say, determines the current VT and writes its number to stdout. How do I call ffmpeg so that it will run my (already written and working!) script for every frame and write the output of this script on the frame or to metainfo?
[00:43:56 CEST] <f-safinaskar> Alina-malina: nice nick :)
[01:00:27 CEST] <gnarface> nevermind, i figured it out.
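gnarface doesn't say what the fix was; one plausible route, if the goal is one raw file per frame, is the image2 muxer, which writes each video packet to its own numbered file (a sketch, untested):

    ffmpeg -f video4linux2 -pix_fmt yuyv422 -s 1280x960 -i /dev/video0 \
           -vf fps=1/60 -c:v rawvideo -f image2 timelapse_test%d.raw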
[02:58:11 CEST] <aszlig> good morning
[02:59:54 CEST] <aszlig> i'm trying to find the right input codec to use with ffmpeg for encoding uncompressed image deltas with timing information and different frame sizes, and i'm quite confused about what the right choice is here...
[03:03:15 CEST] <aszlig> basically what i have is the current monotonic timestamp of the current frame, the changed region and the overall size of the frame, the application that i'm patching is QEMU (if that information is relevant)
[03:05:08 CEST] <aszlig> so the idea is to write uncompressed frames to a file as quickly as possible during runtime of the VM and use ffmpeg later to encode it into a proper video
[03:09:49 CEST] <aszlig> the patched VM is used for automated testing so multiple reboots can occur with different screen sizes and in the end the resulting video should scale every frame to the biggest screen size that has occurred
[03:11:34 CEST] <aszlig> the scaling part is not the problem here, because i can simply use two ints to pass the maximum size back to the parent process (or maybe even write it to the same file afterwards)
[03:13:19 CEST] <aszlig> however what i'm looking for is a format that i can use for that, because as far as i could see from the ffmpeg source, a certain width and height seem to be expected for all the frames
[03:14:23 CEST] <furq> h264 can do that but i don't think ffmpeg supports it
[03:14:38 CEST] <furq> ffmpeg the cli tool, not the libs
[03:15:29 CEST] <aszlig> oh, forgot to mention: it should be an open format
[03:19:04 CEST] <aszlig> furq: from looking into the libavcodec source it looks like h264 is way too complicated even to parse
[03:19:47 CEST] <aszlig> so ideally the frames would be encoded like this:
[03:21:05 CEST] <aszlig> <timedelta> <cx> <cy> <cw> <ch> <width> <height> <pixformat> <size> <frame_data>
[03:21:50 CEST] <aszlig> where cx and cy are the start offset of the changed region and cw and ch is the width and height of the changed region
[03:24:11 CEST] <aszlig> (forget about the <size>, because that can be implied by <width>, <height> and <pixformat>)
[03:27:19 CEST] <aszlig> i've also stumbled on FITS, which seems to roughly do what i want but without timing information as far as i can see
[03:28:10 CEST] <aszlig> (also it looks more like an image format)
[03:28:49 CEST] <furq> apparently vp9 supports on-the-fly resolution changing
[03:28:55 CEST] <furq> i have no idea how you'd do it with lavc though
[03:30:58 CEST] <furq> i don't really know what you'd use for an intermediate format though
[03:36:10 CEST] <furq> you could just store raw video frames in nut, but idk if there's any value to that
[03:36:59 CEST] <furq> i don't think ffmpeg would actually keep the resolution changes intact so there's probably not much value in using a real container for this
[03:38:59 CEST] <aszlig> hm, didn't know about nut yet, that sounds promising
[03:41:45 CEST] <furq> i guess ffv1 in nut will work if you want to compress the intermediate files a bit
[03:42:35 CEST] <furq> rawvideo in nut should definitely work
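Both intermediate variants as sketches, assuming some existing input; rawvideo costs no encode CPU, while ffv1 buys lossless compression at some encode cost:

    ffmpeg -i input -c:v rawvideo -f nut intermediate_raw.nut
    ffmpeg -i input -c:v ffv1 -f nut intermediate_ffv1.nut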
[03:55:47 CEST] <aszlig> hm... i just realized i made things more complicated than they are...
[03:56:46 CEST] <aszlig> i could just write the data out in the format specified before and use the ffmpeg libraries directly
[03:57:05 CEST] <furq> yeah that's what i was clumsily hinting at
[03:57:23 CEST] <furq> i don't know of any tool that encodes vp9 and handles input resolution changes
[03:57:32 CEST] <furq> although vp9 definitely has to support it for vp9
[03:57:34 CEST] <furq> er
[03:57:35 CEST] <furq> for webrtc
[03:57:55 CEST] <furq> so you'd need to write your own tool anyway, in which case the input format is irrelevant
[04:00:25 CEST] <aszlig> for write_frame there is even time_base, so it should be even easier
[04:03:46 CEST] <aszlig> furq: thanks a lot :-) even though nut is not what i wanted it might definitely come in handy someday[TM] :-)
[04:04:58 CEST] <aszlig> at least while looking at the code/format spec of nut i came to the realization that what i was attempting was a bad idea, so thanks again :-)
[05:08:26 CEST] <tomoko> Guys, could someone tell me how to capture audio from an application (like a running video), instead of mic?
[05:13:00 CEST] Last message repeated 1 time(s).
[05:46:22 CEST] <aszlig> tomoko: depends on your operating system, but have a look at https://trac.ffmpeg.org/wiki/Capture/Desktop
[05:49:18 CEST] <tomoko> Oh, right, I'm on Arch. Thanks, I'll take a look.
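On Arch that typically means PulseAudio: record the output device's monitor source rather than the mic. A sketch, where the source name is hypothetical and pactl shows the real one:

    pactl list short sources    # look for the "*.monitor" entry of your output device
    ffmpeg -f pulse -i alsa_output.pci-0000_00_1b.0.analog-stereo.monitor out.wav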
[06:07:06 CEST] <tomoko> clear
[06:15:09 CEST] <Etrigan63> when will ffmpeg 4 get loaded into the ubuntu repos? Is there a repo I can use now?
[06:20:21 CEST] <kepstin> ubuntu doesn't typically switch to newer ffmpeg releases within the same ubuntu version, for binary compat. reasons.
[06:20:35 CEST] <kepstin> probably easiest to build it yourself, really.
[06:20:43 CEST] <aszlig> Etrigan63: no idea (not using ubuntu), but a quick web search resulted in https://launchpad.net/~jonathonf/+archive/ubuntu/ffmpeg-4
[06:21:22 CEST] <furq> i would normally recommend relaxed's builds but they're offline right now
[06:21:36 CEST] <Etrigan63> I searched and that did not turn up. Your Google Fu is better than mine, obviously.
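For the record, adding that PPA would look roughly like this (untested here, and PPA contents change over time):

    sudo add-apt-repository ppa:jonathonf/ffmpeg-4
    sudo apt-get update
    sudo apt-get install ffmpeg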
[07:33:05 CEST] <nshire> looks like I have to reencode some of my flac library to mp3(probably v0). Are there any good scripts that will preserve the metadata?
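ffmpeg already maps flac tags to ID3 when writing mp3, so a plain shell loop may be enough; a sketch assuming libmp3lame, where -q:a 0 corresponds to the V0 preset (-map_metadata 0 is the default, spelled out here for clarity):

    for f in *.flac; do
        ffmpeg -i "$f" -c:a libmp3lame -q:a 0 -map_metadata 0 "${f%.flac}.mp3"
    done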
[09:09:09 CEST] <akaizen> hello - how do I get the number of frames and the number of keyframes for a video?
[09:09:52 CEST] <akaizen> I've tried this but it takes forever : "ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_frames -of default=nokey=1:noprint_wrappers=1 input.mp4"
[09:09:59 CEST] <akaizen> from: https://stackoverflow.com/questions/2017843/fetch-frame-count-with-ffmpeg
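-count_frames is slow because it decodes the whole file. Two cheaper sketches: count packets instead of decoded frames (usually the same number for a video stream), and count keyframe-flagged packets:

    # frame count via packet count, no decoding
    ffprobe -v error -select_streams v:0 -count_packets \
            -show_entries stream=nb_read_packets -of csv=p=0 input.mp4
    # keyframe count: keyframe packets carry a K flag
    ffprobe -v error -select_streams v:0 -show_entries packet=flags \
            -of csv=p=0 input.mp4 | grep -c K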
[09:56:27 CEST] <d-safinaskar> hi. i am here again
[09:56:42 CEST] <d-safinaskar> and i'm still trying to capture video using "ffmpeg -f kmsgrab"
[09:57:28 CEST] <d-safinaskar> how do i store absolute timestamps? i. e. i want to store a timestamp, say, "2018-05-18 7:57:00.00 UTC" instead of just "0:00:05.00"
[09:57:48 CEST] <d-safinaskar> of course, i mean timestamp metadata. i don't mean "drawtext"
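One possible approach, sketched only: keep kmsgrab's wall-clock timestamps with -copyts and store the absolute start time as container metadata, so any frame's absolute time is creation_time plus its pts:

    ffmpeg -copyts -init_hw_device "vaapi=vap:/dev/dri/renderD128" \
           -f kmsgrab -i /dev/null -filter_hw_device vap \
           -vf hwmap,hwdownload,format=nv12 \
           -metadata creation_time="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
           -c:v libx264 capture.mkv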
[15:06:11 CEST] <zerodefect> I have a .mov file (which I don't have the rights to share :( ). It contains a Dolby-E track. ffprobe seems to believe it's pcm_s24le (that's a separate issue for now). When I put the stream/track through the pcm decoder, and view the hex data, I can see the header bytes for Dolby-E: https://imgur.com/a/ShziuQA
[15:07:23 CEST] <zerodefect> So image is before and after the data goes through pcm decoder. Going by the Dolby E header bytes, I know it's 24-bit.
[15:08:59 CEST] <zerodefect> Now, I think I'm having an issue with endianness, because when I instantiate a Dolby E decoder instance and push the raw pkt through the Dolby E decoder, it says 'Invalid Frame Header' when I point it to the start header of the pkt.
[15:10:19 CEST] <zerodefect> On my LE system, I read '0xe08807' but I believe the decoder is looking for '0x788e0'. Am I missing something?
[15:18:33 CEST] <blaze> you have to swap starting and ending byte, right?
[15:20:18 CEST] <zerodefect> @blaze, are you suggesting I'm getting my endianness confused?
[15:20:26 CEST] <zerodefect> ...which could be the case :)
[15:21:05 CEST] <blaze> looks very much so
[15:27:50 CEST] <furq> idk about this decoder but normally the decoder takes care of that for you
[15:30:29 CEST] <zerodefect> Do you think I have found the header byte blaze? Just working my way through this step by step
[15:31:05 CEST] <JEEB> furq: yea but he doesn't have a demuxer in front to take care of all that for him. and it's marketed as PCM and not Dolby-E ;)
[15:31:16 CEST] <JEEB> there might be a need for a parser, not really sure about Dolby-E
[15:31:21 CEST] <zerodefect> ...as in found it meaning that I've highlighted the correct bytes in the img.
[15:54:26 CEST] <kepstin> zerodefect: if you know the name of the decoder for that codec, it might be sufficient to override the decoder by using -c:a as an input option (before -i)
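On the command line that override would look like the sketch below (libavcodec's decoder is named dolby_e); whether the pcm-tagged packets are framed in a way the decoder accepts is a separate question, which is where the s337m discussion further down comes in:

    ffmpeg -c:a dolby_e -i input.mov -c:a pcm_s16le decoded.wav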
[16:00:14 CEST] <zerodefect> @kepstin, I should have mentioned that I'm using C-API. I ask so many question these days, that I now forget to mention it :)
[16:00:50 CEST] <zerodefect> I could do that in code, but there is nothing in the file to indicate that the file contains Dolby
[16:01:05 CEST] <zerodefect> *Dolby-E
[16:01:52 CEST] <kepstin> well, if the file's broken, there's not much you can do except provide the user the ability to override stuff :/
[16:03:07 CEST] <zerodefect> Yeah, I can't quite see anything online if there is a special tag for Dolby E in MOVs. The atom for the track just says 'soun' which is generic. MediaInfo seems to be aware that the file contains Dolby-E, but I'm not sure if it's physically sniffing the first few bytes?
[16:35:56 CEST] <user01> hi, is there a way to repair a video copied with mtp that shows "moov atom not found"?
[16:36:42 CEST] <JEEB> not automatically, no
[16:36:55 CEST] <JEEB> it lacks the index with the decoding initialization data
[16:39:31 CEST] <kepstin> in particular, it's missing some initialization data that tells the decoder information needed to decode the stream.
[16:40:02 CEST] <JEEB> I thought what I said meant that ^^; but OK, at least it's now more specific
[16:40:08 CEST] <kepstin> there are tools out there that can repair it in some cases; for example, if you have another working video from the same device or encoder with the same settings, this data can often be copied from that video
[16:40:20 CEST] <JEEB> yea, that's possible
[16:49:42 CEST] <user01> what if i have a lower quality video of the same video? Google backup
[16:50:29 CEST] <kepstin> user01: that would be a re-encode with different settings, so probably not helpful.
[16:51:27 CEST] <user01> i guess i don't see the point of repairing a file with the exact same video... seems like you would just copy the good one :p
[16:52:29 CEST] <kepstin> doesn't have to be exact same video, a different video made using the same device/encoder could work.
[16:53:00 CEST] <user01> oh i see
[16:53:36 CEST] <kepstin> like, if it was done on a phone, just make a new video with the same settings - quality, resolution, and you might be able to use that to repair your other file.
[16:56:19 CEST] <kepstin> user01: https://github.com/ponchio/untrunc is a tool I've seen people use to fix this.
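untrunc's usage, roughly: the known-good reference file first, then the broken one; it typically writes a *_fixed file next to the input:

    untrunc working-reference.mp4 broken.mp4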
[16:57:33 CEST] <user01> ok fix.me seemed to have repaired it . . . but now they ask for my paypal account :P
[16:57:44 CEST] <user01> fix.video i mean
[16:58:18 CEST] <user01> when i gave it a reference video
[16:58:19 CEST] <JEEB> they most likely have a database of devices and what sort of parameters they're using
[16:58:22 CEST] <JEEB> ah yes
[16:58:32 CEST] <JEEB> then something like untrunc might be able to help
[16:59:22 CEST] <durandal_1707> while at it, send me some money too
[17:55:08 CEST] <jonan> i have this weird flickering issue when i record with ffmpeg, i suspect it's my compositor. are there any special parameters i could pass in so that i wouldn't need to reboot without the compositor to record video?
[18:29:58 CEST] <zerodefect> Got to the bottom of my DolbyE decode problem. If I feed the undecoded AVPacket out of the file into the s337m demuxer, it will then decode via the dolbye decoder. The s337m demuxer internally does a byte swap to rectify differences in byte order before it is handed off to the DolbyE decoder. In an ideal world, the mov file would indicate a dolbye track, but I don't think it supports that.
[19:01:23 CEST] <JEEB> zerodefect: if you need s337m it feels like it's PCM-with-DolbyE
[19:01:34 CEST] <JEEB> and yes, it's a mess how you signal that
[19:01:45 CEST] <JEEB> I've been thinking about it as well since I have some capture files with AAC in PCM
[19:01:47 CEST] <JEEB> (yes, really)
[19:02:50 CEST] <JEEB> so we'd need some common compressed-in-PCM header look-up for formats where you might find it
[19:02:57 CEST] <JEEB> and/or per-stream flags or something
[19:03:25 CEST] <JEEB> but then you open your path to misdetection if you start automating it
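A CLI sketch of the path zerodefect describes above, with example filenames: dump the mis-tagged pcm track to a raw file, then let the s337m demuxer deframe it for the dolby_e decoder:

    ffmpeg -i input.mov -map 0:a:0 -c:a copy -f s24le dolbye.raw
    ffmpeg -f s337m -i dolbye.raw -c:a pcm_s16le decoded.wav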
[19:39:08 CEST] <Cracki> I just tested dxva2/cuvid decode speed of some fullhd ~24 Mbit/s footage. got ~80-120 fps on some nvidia GTX5xx/6xx thingies... it's a fraction of what ffmpeg's h264 decoder can do cpu-only (300+ fps on my 2012 intel E3). that's disappointing. does anyone have some data points for me on more recent hardware? I'd really like to see ~300 fps hw-accelerated
[19:48:58 CEST] <atomnuker> for low res video it's more common than not for hardware decoding to be slower (but it'll always have less cpu load)
[19:55:52 CEST] <Cracki> found some numbers... 400-700 fps for less cruddy hardware https://www.reddit.com/r/nvidia/comments/6w9gua/how_many_h265_streams_could_a_1080_ti_decode/
[19:56:01 CEST] <Cracki> (and that's hevc)
[19:59:01 CEST] <JEEB> if you look at the VP9 decoding speed with AVX2+64bit, you can bet your ass HEVC could also be fast in libavcodec if anyone just cared
[19:59:12 CEST] <JEEB> unfortunately, nobody cared or cares
[19:59:15 CEST] <JEEB> so we have what we have
[19:59:33 CEST] <Cracki> heh
[19:59:44 CEST] <Cracki> good to know there's room for improvement
[20:00:42 CEST] <JEEB> yea I think VP9 might actually be faster than AVC at this point?
[20:00:45 CEST] <JEEB> haven't benchmarked
[20:12:37 CEST] <Mavrik> interesting
[20:12:46 CEST] <Mavrik> I'd expect more efforts to be put into HEVC
[20:15:12 CEST] <Cracki> industry puts their effort into encoding, and hw decoding
[20:17:13 CEST] <atomnuker> for very high res video vp9 will be faster (8k and up, maybe 4k)
[20:17:37 CEST] <atomnuker> 4x4 or 8x8 blocks can't measure up to 32x32 blocks
[20:17:46 CEST] <Cracki> that's for sure
[20:20:34 CEST] <JEEB> Mavrik: back when the decoder was made there was barely any content, and the encoders really weren't there to make mass adoption a thing. and now I'm still not sure about the encoders (although since 10bit is only supported by devices in VP9/HEVC, I see encoding as more or less having its use cases separate from decoding), most content is DRM'd requiring a full-hw pipeline, and the few ultra HD
[20:20:40 CEST] <JEEB> blu-rays and possibly not-so-well-done encodes aren't enough for people to want to work on it for fun
[20:24:46 CEST] <Mavrik> Ah, I didn't consider the DRM angle, makes sense.
[20:46:39 CEST] <Cracki> I suppose this isn't the time to talk about digitally restricted culture and the people who liberate it
[20:50:51 CEST] <BtbN> 120fps for 1080p decoding seems awfully low
[20:51:00 CEST] <BtbN> I remember getting easily 1k fps out of my Pascal chip
[20:53:42 CEST] <lomancer> does ffmpeg have support for hardware accelerated yuv => rgb conversion or just encoding/decoding?
[20:55:38 CEST] <BtbN> If someone were to write a filter
[20:58:25 CEST] <JEEB> lomancer: you either use a proper player, or something like libplacebo
[20:59:20 CEST] <BtbN> Cracki, I get ~700fps for 1080p here on a 1060
[20:59:37 CEST] <Cracki> that's what I figured ;_;
[20:59:59 CEST] <jkqxz> Some of the hardware scale filters support YUV -> RGB conversion, but it's generally inadvisable to use it because you end up at the mercy of whatever interpretation the hardware decides for "RGB" and "YUV". (E.g. expect something like the BT.(601 + 108 * (frame.height < 600)) colourspace and parameters.)
[21:00:04 CEST] <BtbN> swdecode is only like 50fps faster, while using 16 threads at 100%
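For comparable numbers, the usual measurement is a decode-only run into the null muxer, e.g. (input.mp4 is a placeholder, and the cuvid naming matches this ffmpeg era):

    ffmpeg -benchmark -i input.mp4 -f null -                    # software decode
    ffmpeg -benchmark -c:v h264_cuvid -i input.mp4 -f null -    # nvidia hw decode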
[21:00:23 CEST] <Cracki> funny expression there ;)
[21:00:51 CEST] <Cracki> yeah I could definitely use 300-900 fps hwdec *and* have the cpu free for important stuff
[21:01:53 CEST] <jkqxz> Consider that most hardware decoders are implemented for playback only, so they may be targeted to achieve 60fps with some headroom but no more. It's only if the implementer wants it to be used for offline uses (like transcoding) that they bother making it more capable than that.
[21:02:39 CEST] <BtbN> nvidia definitely makes them with transcoding in mind
[21:02:39 CEST] <jkqxz> So I don't think it's surprising that you get 60-100fps decode speed on older desktop implementations, or mobile implementations today.
[21:03:39 CEST] <jkqxz> BtbN: They didn't until relatively recently, though. A GTX5xx was stated earlier, which is before NVENC existed.
[21:04:16 CEST] <BtbN> yeah, a gtx500 won't be much faster than realtime playback
[21:26:44 CEST] <Cracki> indeed my cruddy card has no nvenc :)
[22:05:44 CEST] <kerio> what about accelerated yuv->y conversion
[22:24:18 CEST] Action: Cracki is very interested in that too
[22:24:48 CEST] <Cracki> tho i'd expect that to be bandwidth-bound no matter what
[22:41:52 CEST] <havoc> hi
[22:42:31 CEST] <havoc> I have a debian apache perl CGI which calls a bash script to start ffmpeg reading an RTSP stream
[22:43:01 CEST] <havoc> I save/have the PID, but it will not respond to anything but SIGKILL (which it can't block/ignore)
[22:43:19 CEST] <havoc> it does not respond to SIGINT or SIGTERM
[22:43:55 CEST] <havoc> the bash script starts it with: > /dev/null 2>&1 &
[22:44:13 CEST] <havoc> googling is not revealing anything useful yet :(
[22:44:54 CEST] <c_14> the ffmpeg?
[22:45:05 CEST] <c_14> It should respond to INT and TERM
[22:45:08 CEST] <JEEB> generally if you have a use-case-specific thing you want to make your own API client for the APIs that FFmpeg provides (which ffmpeg.c also utilizes); if you do want to go through with ffmpeg.c, though, then looking at its code is the simplest :P
[22:45:33 CEST] <JEEB> and yes, it will of course respond to 9 and then normal ctrl+c
[22:45:44 CEST] <havoc> c_14: it does not respond to INT or TERM, as either root or OWNER
[22:45:51 CEST] <JEEB> although the latter will not necessarily be ASAP
[22:46:00 CEST] <JEEB> it will finish its buffers and then finish
[22:46:12 CEST] <havoc> JEEB: that would be ideal
[22:46:12 CEST] <JEEB> but yes, look at ffmpeg.c's code instead of trusting any of us
[22:46:14 CEST] <JEEB> :)
[22:46:27 CEST] <c_14> also try logging to a file instead of /dev/null
[22:46:29 CEST] <JEEB> it's under fftools/ in the FFmpeg repo
[22:46:33 CEST] <havoc> JEEB: I've seen much mention of the same thing in my googling
[22:46:38 CEST] <havoc> I still got nothing :(
[22:47:06 CEST] <JEEB> just look at fftools/*.c
[22:47:10 CEST] <havoc> I've even tried echoing ctrl-c to /proc/PID/fd/0
[22:55:44 CEST] <havoc> took out the redirection, left the backgrounding, still nothing :(
[22:56:29 CEST] <havoc> so the /dev/null has no effect, as without redirection it goes to apache's error log
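Two likely culprits worth ruling out here, as a sketch (the URL is a placeholder): ffmpeg's SIGINT/SIGTERM handler only sets a flag that the transcode loop checks, so a read blocked on a dead RTSP socket can delay exit indefinitely (-stimeout, in microseconds, bounds that), and a CGI gives ffmpeg no usable stdin, which -nostdin avoids:

    ffmpeg -nostdin -stimeout 5000000 -i rtsp://camera/stream \
           -c copy out.mp4 > /dev/null 2>&1 &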
[23:48:52 CEST] <alec500oo> Hello, I am looking to add some video decoding to a project I am working on. Are there any resources out there for getting started with the libav* libraries on Windows?
[23:50:33 CEST] <JEEB> same as elsewhere. use pkg-config if at all possible. test out the examples under doc/examples
[23:50:57 CEST] <JEEB> note, various pesky things those examples do have simpler ways of being done in the framework
[23:51:21 CEST] <JEEB> so double-check with "site:ffmpeg.org doxygen KEYWORD" first
[23:51:27 CEST] <JEEB> together with git grep -i "keyword"
[23:51:50 CEST] <alec500oo> Okay, Thank You.
[23:51:54 CEST] <JEEB> and for that google thing, also add "trunk" as a keyword, since that is the latest generated documentation
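With pkg-config set up (e.g. under MSYS2 on Windows), a minimal compile line for one of the bundled examples looks like this sketch:

    cc doc/examples/decode_video.c \
       $(pkg-config --cflags --libs libavformat libavcodec libavutil) \
       -o decode_video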
[00:00:00 CEST] --- Sat May 19 2018