[Ffmpeg-devel-irc] ffmpeg.log.20180517
burek
burek021 at gmail.com
Fri May 18 03:05:01 EEST 2018
[03:58:29 CEST] <Mrx> why does this windows build not contain ffserver?
[04:00:11 CEST] <Mrx> this ain't zeranoe channel?
[04:01:20 CEST] <Mrx> ffmpeg for windows is provided by zeranoe and there is no ffserver.exe in the bin folder!
[04:02:22 CEST] <Mrx> is this the end of ffmpeg for windows?
[04:02:39 CEST] <Mrx> should i look for another software?
[04:04:25 CEST] <furq> bye
[04:07:28 CEST] <cutvideo> Hello, I need to cut a video without re-encoding while preserving all the audio tracks. The input file has 3 audio tracks but the output keeps only the 1st one. I'm using this: ffmpeg -ss 00:54:28 -i "input.ts" -vcodec copy -acodec copy output.ts, thank you
[04:08:27 CEST] <cutvideo> I mean when using ffmpeg the output video has only the 1st audio track and I need all 3 of them
[04:09:53 CEST] <furq> cutvideo: add -map 0
[04:12:50 CEST] <cutvideo> oh, how could i forget it, thank you furq
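A sketch of the full command with furq's fix folded in; -map 0 selects every stream from the first input instead of ffmpeg's default one-stream-per-type selection:

    ffmpeg -ss 00:54:28 -i "input.ts" -map 0 -c copy output.ts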
[05:17:45 CEST] <memo1> hi, now that ffserver is gone, what options do I have to set up a remote video server?
[13:15:50 CEST] <jk^> hi all
[13:15:55 CEST] <jk^> i have a problem
[13:16:06 CEST] <jk^> pls help me
[13:16:30 CEST] <jk^> "FFmpeg cannot find audio codec 0x12000"
[13:16:37 CEST] <jk^> in audacity, how do I resolve this?
[13:26:44 CEST] <iive> this seems to be the AMR-NB codec
[13:30:13 CEST] <iive> it seems that there is a native decoder and one that can be used through libopencore
[13:30:28 CEST] <iive> do you compile your own ffmpeg, or are you using a distro-supplied one?
[14:05:13 CEST] <Kam_> Hi, is there a lossless audio codec that I can put into a .mp4 container?
[14:14:22 CEST] <c_14> Kam_: ALS, though I don't think ffmpeg has an encoder for it
[14:14:26 CEST] <c_14> you can put alac in mov though
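A minimal sketch of the ALAC-in-MOV route c_14 mentions (file names are placeholders; an .m4a output typically works the same way):

    ffmpeg -i input.wav -c:a alac output.mov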
[14:23:38 CEST] <brm> Hi :)
[14:24:50 CEST] <brm> I am working on an animation video using ffmpeg, does someone know how I can avoid the staircase effect?
[14:25:40 CEST] <brm> the problem is that a moving image overlay animation doesn't look smooth
[14:29:40 CEST] <durandal_1707> brm: use yuv444 ?
[14:29:41 CEST] <brm> here is my cli command: ffmpeg -i bg.mp4 -i fg.mkv -filter_complex "[0:v][1:v]overlay=enable='between(t,10,20)':x=720+t*28:y=t*10[out]" -map "[out]" output.mkv
[14:29:55 CEST] <brm> thanks for response :)
[14:30:31 CEST] <brm> so, using yuv444 will help?
[14:41:10 CEST] <brm> I've modified my command to ffmpeg -f lavfi -i testsrc=duration=10:size=1280x720:rate=30 -t 10 -vf "movie=1.png[inner];[in][inner]overlay=x='t*70':y='t*20'" -pix_fmt yuv444p -r 30 -s 1280x720 testsrc1.mp4; it sets pix_fmt to yuv444p, but the video doesn't work after this fix
[15:02:47 CEST] <durandal_1707> brm: you need to look at overlay documentation
[15:03:00 CEST] <durandal_1707> and set overlay format to yuv444
[15:03:12 CEST] <brm> thanks, I will try
[15:26:44 CEST] <brm> @durandal_1707 thanks! it's much better than before
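What durandal_1707 is pointing at is the overlay filter's own format option, which controls the pixel format the blending is done in (distinct from the output -pix_fmt); roughly:

    overlay=x='t*70':y='t*20':format=yuv444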
[16:00:52 CEST] <tuna_> Is it not legal to call av_buffer_unref() with an AVHWFramesContext? ...I get a segfault when I do.
[16:04:02 CEST] <BtbN> "with an AVHWFramesContext"?
[16:08:03 CEST] <tuna_> Oh, I figured it out... since I was avcodec_free_context-ing the AVCodecContext and then unref-ing my frame and device contexts, the avcodec_free was mucking up the frame and device... so now I do it in reverse order and it works
[16:08:25 CEST] <tuna_> frame, then device, then remove references from the avcodec_context, then free it.
[16:12:33 CEST] <BtbN> What case exactly crashes? If it crashes, you didn't take a ref somewhere you were supposed to take one.
[16:13:47 CEST] <tuna_> The case where I free the codec THEN unref the frame/device... the freeing of the codec causes the frame and device ptr to take on a non-zero, odd, debugging-filler-like address, so it segfaults trying to free something that doesn't exist.
[16:14:31 CEST] <BtbN> that sounds wrong
[16:16:47 CEST] <tuna_> Well, that's what I saw
[16:26:31 CEST] <Guest94244> Hello, is it possible to concatenate two videos with a fade-to-black transition between them with no re-encoding? Thank you
[16:27:10 CEST] <BtbN> no
[16:27:26 CEST] <BtbN> unless the fade-to-black is a whole other third video you put in the middle
[16:28:24 CEST] <Guest94244> Can I create that 3rd video with ffmpeg?
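A sketch of generating such a third clip with the lavfi color source (parameters here are hypothetical; for concatenation without re-encoding, the size, frame rate, codec, and encoding parameters must match the other two clips exactly):

    ffmpeg -f lavfi -i color=c=black:s=1280x720:r=30:d=1 -c:v libx264 black.mp4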
[16:56:30 CEST] <Kam_> how can I specify two inputs (-i) with -ss and -to parameters for each input? ffmpeg -i v1.avi -ss 10 -to 20 -i v2.avi -ss 50 -to 60 ... out.mp4 seems to hang after the duration of the first input. also: how would ffmpeg distinguish between -ss being a parameter before the second -i vs. the first -i ?
[16:57:47 CEST] <kepstin> Kam_: "-ss" is an input option, which means it goes before -i and applies to the next input specified with -i
[16:58:06 CEST] <kepstin> Kam_: "-to" is an output option, it applies to the next output (it is not input-specific)
[16:58:57 CEST] <kepstin> Kam_: in particular, the way -to works is it looks at the timestamps on the *output* video, and cuts the video when those reach the desired time.
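A small illustration of the distinction (file name and times are placeholders): -ss before -i seeks in the input before decoding, and since input seeking resets timestamps to zero, the output-side -to then counts from the seek point:

    ffmpeg -ss 10 -i v1.avi -to 20 out.mp4    # seek to 10s, then write 20s of output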
[16:59:15 CEST] <Kam_> huh, I want to use it like described on this page: https://trac.ffmpeg.org/wiki/Seeking (cut the beginning and end of my input video)
[17:00:14 CEST] <kepstin> Kam_: on that page, they have only one video in the command :)
[17:00:46 CEST] <kepstin> Kam_: what are you doing with the two input videos? If you're using filters on them, you should probably use the "trim" filter to cut them rather than ffmpeg options.
[17:00:47 CEST] <Kam_> yes.. I meant to write: that page describes that I can use -ss and -to after -i
[17:01:04 CEST] <kepstin> ah, -ss after -i is weird, and not needed with modern ffmpeg in most cases.
[17:01:24 CEST] <furq> if you use them after -i then they're output options
[17:01:41 CEST] <kepstin> in that case it's treated as an output option - it has to be after *all* the -i options, and before an output filename
[17:01:45 CEST] <furq> right
[17:02:04 CEST] <furq> they have subtly different behaviour as input or output options
[17:02:22 CEST] <furq> but yeah chances are you want to use trim and setpts
[17:02:57 CEST] <kepstin> in most videos, you'll notice that -ss after -i is significantly slower, and the way it handles timestamps is quite different (which means that -to and -t do different things).
[17:03:08 CEST] <Kam_> I tried to use two inputs because I want to cut a certain section out of my video
[17:03:18 CEST] <Kam_> and then proceed with the concat filter
[17:03:27 CEST] <furq> yeah you can just do that with trim and setpts
[17:03:45 CEST] <kepstin> Kam_: ok, for that use case you should use -ss before -i, and a trim filter in your filter chain to cut the end of each clip.
[17:04:10 CEST] <kepstin> (no need for setpts in this simple case)
[17:04:22 CEST] <furq> if you want to join two clips together then you need setpts
[17:04:44 CEST] <kepstin> hmm? No, in this case both clips will start with pts=0, and concat filter handles that properly
[17:04:50 CEST] <furq> oh
[17:04:54 CEST] <furq> yeah i meant without using the concat filter
[17:05:38 CEST] <furq> but it's two separate inputs so nvm
[17:07:16 CEST] <Kam_> thank you, I'll try that!
[17:08:11 CEST] <kepstin> Kam_: this filter invocation might get a bit complicated if you have video+audio, unfortunately.
[17:08:25 CEST] <Kam_> I do have video+audio
[17:08:52 CEST] <kepstin> because in the filter chain, you'll have to separately trim the video and atrim the audio before handing them to the concat filter.
[17:14:05 CEST] <Kam_> I'm trying 'ffmpeg -i v1.avi -vf trim=00:00:30.000:00:00:40.000 -af atrim=00:00:30.000:00:00:40.000 -i v1.avi -vf trim=00:00:30.000:00:00:40.000 -af atrim=00:00:30.000:00:00:40.000 -filter_complex ....' and it complains: "you are trying to apply an input option to an output file or vice versa."
[17:32:31 CEST] <kepstin> Kam_: you need to put all the filters into a single -filter_complex invocation
[17:32:51 CEST] <kepstin> (and -vf/-af are output options, so have to go after *all* the -i options)
[17:32:53 CEST] <Kam_> ah, thanks, didn't know that!
[17:34:03 CEST] <kepstin> the arguments to your trim functions look wrong too. : is the option separator. If you use "-ss 30" before the -i, you'll want something like "trim=end=10" in the filter chain
[17:34:23 CEST] <kepstin> that'll grab 10 seconds starting at the seek point
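Assembled from kepstin's advice, a sketch of the whole command for two pieces of the same file (seek points and durations are illustrative):

    ffmpeg -ss 30 -i v1.avi -ss 50 -i v1.avi -filter_complex \
      "[0:v]trim=end=10[v0];[0:a]atrim=end=10[a0]; \
       [1:v]trim=end=10[v1];[1:a]atrim=end=10[a1]; \
       [v0][a0][v1][a1]concat=n=2:v=1:a=1[v][a]" \
      -map "[v]" -map "[a]" out.mp4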
[17:35:03 CEST] <kepstin> i'm not really sure what you're doing there, why do you have the same input listed twice?
[17:40:40 CEST] <shfil> hi, has ffmpeg 4.0 changed C's api for decoding mp3?
[17:41:56 CEST] <kepstin> shfil: there have been some api changes, but I don't think there's anything that should affect a simple decoder application. You should get a compile error if something you use changed, of course.
[17:42:17 CEST] <kepstin> (it's not abi compatible - you will have to recompile your stuff)
[17:42:31 CEST] <Kam_> kepstin, I'm using the time syntax like described here https://ffmpeg.org/ffmpeg-utils.html#time-duration-syntax HH:MM:SS.ms
[17:43:01 CEST] <kepstin> Kam_: don't use that in the trim filter, because : is a special character there. (I mean, you technically can, but it's hard because you have to escape stuff)
[17:43:50 CEST] <Kam_> kepstin, the same input twice because I want to glue two parts of the same video together (and cut off the beginning and end of the whole source video). think of a recorded tv programme with one advertisement in between
[17:45:06 CEST] <kepstin> Kam_: ah, right. It might actually be easier to avoid the concat filter here, and do two separate ffmpeg commands to cut out each video piece, then a third command to combine them together
[17:45:19 CEST] <shfil> there's no compile error and no error while decoding, but with 4.0 we (in openrw) lost music in cutscenes. https://www.youtube.com/watch?v=m87bJxE9hnU (music at the beginning of the video)
[17:47:22 CEST] <shfil> snippet of code https://github.com/rwengine/openrw/blob/master/rwengine/src/engine/GameWorld.cpp#L677
[17:47:23 CEST] <Kam_> kepstin, this is what I'm trying now (although probably not very elegant): cutting both pieces and saving them to a lossless format (ffv1), then concatenating the two new videos together (the source video is lossless, too)
[17:47:25 CEST] <kepstin> this is the relevant code? https://github.com/rwengine/openrw/blob/master/rwengine/src/audio/SoundManager.cpp
[17:48:01 CEST] <shfil> yeah
[17:48:28 CEST] <kepstin> Kam_: after you generate the two separate videos, you probably want to use the concat demuxer to combine them, see https://trac.ffmpeg.org/wiki/Concatenate#demuxer
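The concat demuxer route kepstin links looks roughly like this (file names are placeholders; -safe 0 is only needed for paths the demuxer considers unsafe):

    # list.txt contains:
    #   file 'part1.mkv'
    #   file 'part2.mkv'
    ffmpeg -f concat -safe 0 -i list.txt -c copy out.mkv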
[17:49:08 CEST] <Kam_> kepstin, I'm using the concat filter, that worked in a test run
[17:49:34 CEST] <shfil> There are some deprecated methods, but the other branch behaves the same.
[17:51:35 CEST] <shfil> I've been trying to debug; the container with raw data isn't empty.
[17:52:01 CEST] <kepstin> shfil: check the sample format you're getting back - I think the default mp3 decoder nowadays is the floating point one.
[17:52:21 CEST] <shfil> thx, I'll check
[17:54:19 CEST] <kepstin> (you might consider using libswresample to take "whatever the codec gives you" and turning that into your desired format/layout, to be a bit more future-proof)
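A rough sketch of that libswresample suggestion, converting whatever the decoder emits into packed s16 (this uses the 4.0-era channel-layout API; error handling is omitted and 'frame' is assumed to be a decoded AVFrame):

    #include <libswresample/swresample.h>
    #include <libavutil/channel_layout.h>
    #include <libavutil/samplefmt.h>

    /* Convert frame's samples (e.g. AV_SAMPLE_FMT_FLTP from the mp3
     * decoder) into interleaved signed 16-bit at the same rate/layout. */
    SwrContext *swr = swr_alloc_set_opts(NULL,
            av_get_default_channel_layout(frame->channels),  /* out layout */
            AV_SAMPLE_FMT_S16,                               /* out format */
            frame->sample_rate,                              /* out rate */
            av_get_default_channel_layout(frame->channels),  /* in layout */
            frame->format,                                   /* in format */
            frame->sample_rate, 0, NULL);
    swr_init(swr);

    uint8_t *out = NULL;
    av_samples_alloc(&out, NULL, frame->channels,
                     frame->nb_samples, AV_SAMPLE_FMT_S16, 0);
    int got = swr_convert(swr, &out, frame->nb_samples,
                          (const uint8_t **)frame->extended_data,
                          frame->nb_samples);
    /* 'got' samples per channel of packed s16 audio are now in 'out'. */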
[17:56:12 CEST] <shfil> something like that? https://github.com/rwengine/openrw/pull/420/files#diff-e9cc6b7c0e9b40ae49ea18ed92518bfcR85
[17:58:56 CEST] <shfil> (or am I missing point?)
[17:59:07 CEST] <kepstin> shfil: that's a switch to the new style (rather than deprecated) decoding api, which is probably a good thing, but it wouldn't help if your problem is sample format.
[17:59:26 CEST] <kepstin> also that code isn't checking the sample format at all, it's just assuming it's s16p :/
[17:59:43 CEST] <shfil> ah, I understand.
[17:59:45 CEST] <kepstin> (unlike the code that i linked to which at least gives an error if it's not the expected format)
[18:00:18 CEST] <kepstin> that said, i'd expect noise rather than silence if that was wrong.
[18:02:03 CEST] <shfil> when nothing is played in parallel - silence; when anything else plays at the same time (like in this pull) - noise.
[18:16:01 CEST] <shfil> sample_fmt contains AV_SAMPLE_FMT_FLTP (0x0008)
[18:16:34 CEST] <shfil> or do I need to check different var?
[18:30:24 CEST] <shfil> dumped everything: https://gist.github.com/ShFil119/6d4d1ec228491910676fd2bb3c0eb6b2
[19:18:54 CEST] <pies_> I imagine it's been asked and asked again, but does anyone have a replacement for https://johnvansickle.com/ffmpeg/ ?
[19:19:46 CEST] <pies_> or maybe just the latest ffmpeg-release-64bit-static.tar.xz ?
[19:19:53 CEST] <pies_> pretty please? :)
[20:12:35 CEST] <f-safinaskar> hi
[20:12:51 CEST] <f-safinaskar> i want to capture my screen for purposes of backup
[20:13:00 CEST] <f-safinaskar> linux
[20:13:18 CEST] <f-safinaskar> but i want to capture both x server and virtual terminals (VT), i. e. tty1, tty2 etc
[20:13:41 CEST] <f-safinaskar> more precisely: i want to automatically switch to whichever input is currently active
[20:14:20 CEST] <f-safinaskar> i. e. i want some capturing program. this program should generate frames taken from the X server if the X server is active, and frames from the VT (tty1, tty2, etc.) if a VT is active
[20:14:49 CEST] <f-safinaskar> how to do this?
[20:14:56 CEST] <f-safinaskar> i think about writing my own program
[20:15:04 CEST] <BtbN> I'm not even sure if you can capture a VT
[20:15:10 CEST] <BtbN> and definitely can't switch automatically
[20:15:18 CEST] <f-safinaskar> BtbN: i can :)
[20:15:34 CEST] <f-safinaskar> BtbN: type "man ffmpeg-all" and search for "framebuffer"
[20:15:43 CEST] <f-safinaskar> BtbN: i CAN capture VT
[20:15:53 CEST] <f-safinaskar> BtbN: the question is how to switch automatically
[20:15:56 CEST] <BtbN> I guess your best bet is kmsgrab
[20:16:12 CEST] <BtbN> either it follows along with what's on screen, or you're out of luck
[20:17:06 CEST] <f-safinaskar> my own program will read X screenshots directly using, say, libxcb, and read the VT (i. e. the framebuffer) directly by reading /dev/fb0. And it will determine which is active using... well, i don't know, but i hope some /dev/tty1 ioctls will do
[20:17:15 CEST] <f-safinaskar> the question is: is there some simpler way?
[20:17:36 CEST] <f-safinaskar> i don't want to write another pile of code without searching for ready-to-use solutions
[20:18:03 CEST] <BtbN> Just use X terminals instead? Makes the whole recording thing trivial.
[20:18:22 CEST] <f-safinaskar> and then my program will send these frames to ffmpeg via a pipe, and ffmpeg will convert the frames to some format
[20:52:14 CEST] <kerio> i'm gonna go cosmic brain on this and suggest recording the remote kvm via intel ME
[20:53:00 CEST] <kerio> f-safinaskar: i mean, how are you going to handle resolution changes?
[20:53:20 CEST] <kerio> actually even more woke: solder a capture card to the lcd panel
[20:58:39 CEST] <BtbN> Or just get an actual capture card and put a splitter
[21:01:44 CEST] <f-safinaskar> BtbN: "Just use X terminals instead?" - I capture video for backup. I should capture everything. Yes, I use X terminals nearly always. Unfortunately, one day I happened to log into a VT, typed some important data, and then my computer crashed and the data was lost. So, I need to capture the VT too
[21:02:24 CEST] <f-safinaskar> kerio: i don't change resolution :)
[21:02:30 CEST] <BtbN> uhm, so if your PC dies, what you video-captured would be equally lost
[21:03:23 CEST] <BtbN> also, if your PC crashing is a regular thing, I'd fix that PC
[21:03:29 CEST] <f-safinaskar> BtbN: It seems kmsgrab is what i need. Thank you
[21:05:42 CEST] <f-safinaskar> BtbN: "what you video-capture would be equally lost" - no. I have already been capturing video for several years. And during this time my computer has crashed often. But the captured files always persist, and after a crash I can successfully read them
[21:06:06 CEST] <f-safinaskar> BtbN: PC crashes for various reasons
[21:06:48 CEST] <BtbN> so your shell history file does not survive though?
[21:07:33 CEST] <f-safinaskar> BtbN: the last time, I accidentally turned off the lamp in the toilet and then discovered that my power plug takes its power from the lamp circuit. So the battery of my laptop drained and the PC turned off
[21:09:59 CEST] <kerio> lmao
[21:11:04 CEST] <f-safinaskar> BtbN: shell history for _open_ shells does not survive. Yes, I can set special environment variables to store each command right after entering it. But that is not the point. In fact I need not only the commands, but their output, too. And I want not-yet-entered commands, too, i. e. commands which are typed but not entered ("Enter" not pressed)
[21:12:02 CEST] <BtbN> If that's such a frequent problem you run into, record a script...
[21:12:13 CEST] <BtbN> Or get a UPS or anything
[21:16:02 CEST] <kerio> solder a capture card to the lcd panel :3
[22:07:32 CEST] <f-safinaskar> BtbN: I capture X anyway. So I should capture both X and VT anyway. So, KMS seems to be the perfect solution
[22:07:55 CEST] <f-safinaskar> my "ffmpeg -devices" doesn't contain kms
[23:50:54 CEST] <f-safinaskar> I installed a recent version of ffmpeg. It has "kmsgrab" support
[23:51:07 CEST] <f-safinaskar> But it records garbage
[23:51:18 CEST] <f-safinaskar> And when I switch into tty1, ffmpeg fails
[23:51:33 CEST] <f-safinaskar> So, I have to write my own C program using libdrm or libxcb
[23:51:41 CEST] <f-safinaskar> Or are there other options?!
[23:52:01 CEST] <durandal_1707> f-safinaskar: reading the manual is always an option
[23:52:33 CEST] <f-safinaskar> durandal_1707: i have read it
[23:52:50 CEST] <f-safinaskar> durandal_1707: I tried the options from the manual
[23:53:08 CEST] <f-safinaskar> durandal_1707: x11grab is not for me, because I need virtual terminals, too
[23:53:26 CEST] <atomnuker> f-safinaskar: you need to map drm frames from kmsgrab to either opencl or vaapi in order to not get garbage
[23:56:30 CEST] <f-safinaskar> atomnuker: wow, thanks! but i tried vaapi example from manual
[23:56:36 CEST] <f-safinaskar> atomnuker: ffmpeg -crtc_id 42 -framerate 60 -f kmsgrab -i - -vf 'hwmap=derive_device=vaapi,scale_vaapi=w=1920:h=1080:format=nv12' -c:v h264_vaapi output.mp4
[23:56:59 CEST] <f-safinaskar> atomnuker: and I got "No usable planes found on CRTC 42."
[23:57:08 CEST] <f-safinaskar> atomnuker: then i removed "-crtc_id 42"
[23:57:09 CEST] <kerio> atomnuker: but what if i just wanted an array of pixels instead
[23:58:06 CEST] <f-safinaskar> atomnuker: and then i got "libva: va_getDriverName() failed with operation failed,driver_name=i965"
[23:58:18 CEST] <atomnuker> map to opencl and hwdownload,format=bgra/bgr0
[23:58:26 CEST] <f-safinaskar> kerio: atomnuker: yes, I want just array of pixels
[23:58:33 CEST] <atomnuker> or map to vaapi and hwdownload,format=yuv420p
[23:58:50 CEST] <f-safinaskar> kerio: atomnuker: but some compressed form will be OK, because I can decompress the video later
[23:59:47 CEST] <f-safinaskar> atomnuker: how to do this?
[23:59:52 CEST] <atomnuker> f-safinaskar: you need to do -init_hw_device "vaapi=vap:/dev/dri/renderD128" before the -i and you need to add -filter_hw_device vap before -vf and to map you need to remove the derive device part, so its just hwmap
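Assembled from atomnuker's pieces, the full command would look roughly like this (the DRM render node path and the h264_vaapi encoder are machine-dependent assumptions):

    ffmpeg -init_hw_device "vaapi=vap:/dev/dri/renderD128" -framerate 60 -f kmsgrab -i - \
      -filter_hw_device vap \
      -vf 'hwmap,scale_vaapi=w=1920:h=1080:format=nv12' \
      -c:v h264_vaapi output.mp4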
[00:00:00 CEST] --- Fri May 18 2018