[Ffmpeg-devel-irc] ffmpeg.log.20180122

burek burek021 at gmail.com
Tue Jan 23 03:05:01 EET 2018


[00:03:43 CET] <ddubya>  mooyoul, audio has i-frames too
[00:04:13 CET] <ddubya> but if you use PCM audio you can cut anywhere, compressed formats have limitations
[00:05:20 CET] <mooyoul> ah yeah
[00:05:40 CET] <mooyoul> but i use wav audio for input and it has the same problem
[00:06:03 CET] <ddubya> hmm
[00:06:05 CET] <mooyoul> say there is a wav file named "long-audio.wav"
[00:06:37 CET] <mooyoul> if i segment that audio via "ffmpeg -i long-audio.wav -f segment -segment_time 5 segment-%03d.wav"
[00:07:06 CET] <mooyoul> and re-encode them via "ffmpeg -i segment-000.wav -c:a libfdk_aac -b:a 96k -cutoff 18000 encoded-000.mka"
[00:07:37 CET] <mooyoul> and concat them via "printf "file '%s'\n" ./*.mka > mylist.txt" and "ffmpeg -f concat -i mylist.txt -c copy merged.mka"
[00:07:41 CET] <mooyoul> i can hear gaps too
[00:09:48 CET] <ddubya> but if you concat the wav files no gaps right. Generate a sine wave in your source file to make it super obvious
[00:11:02 CET] <mooyoul> Yeah, concat wav files is okay. they don't have any gaps ;)
[00:12:38 CET] <ddubya> so back to my last idea. What if you cut the files so they're a multiple of 1024 samples (this is the frame size of aac)
[00:14:58 CET] <illuminated> can somebody find in the manual what the -re option does?
[00:15:19 CET] <illuminated> I've seen it one time in some of the documentation, but I forgot what it does, and I can't find it in the manual anymore
[00:15:25 CET] <furq> it forces reading at the input framerate
[00:15:40 CET] <ddubya> mooyoul, if your audio is 44.1khz then the cut time would be 4.992290249 s
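The arithmetic behind ddubya's cut time can be sketched in Python; the 1024-sample AAC frame size and the 44.1 kHz rate come from the discussion above, the rest is just integer division:

```python
# Largest cut time near 5 s that is an exact multiple of the
# 1024-sample AAC frame size at a 44.1 kHz sample rate.
FRAME_SIZE = 1024
SAMPLE_RATE = 44100
TARGET_SECONDS = 5.0

frames = int(TARGET_SECONDS * SAMPLE_RATE) // FRAME_SIZE  # whole AAC frames
cut_samples = frames * FRAME_SIZE
cut_seconds = cut_samples / SAMPLE_RATE
print(cut_samples, round(cut_seconds, 9))  # 220160 4.992290249
```

This reproduces the 4.992290249 s figure ddubya quotes above.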
[00:15:45 CET] <illuminated> furq thx
[00:15:55 CET] <furq> you use it when you want to pretend a file is actually a stream
[00:16:00 CET] <furq> usually for sending a file to a streaming server
[00:16:20 CET] <illuminated> awesome thanks furq
[00:16:31 CET] <mooyoul> okay, i'm trying
[00:17:22 CET] <furq> 23:03:43 ( ddubya)  mooyoul, audio has i-frames too
[00:17:23 CET] <furq> ??
[00:17:54 CET] <mooyoul> hmm that also has gaps
[00:17:56 CET] <furq> mooyoul: you might want to try using m4a or something instead of mka
[00:18:09 CET] <ddubya> furq, sure it does, you can't cut it on any packet and expect it to work
[00:18:43 CET] <mooyoul> i tried other containers like m4a or ts
[00:18:47 CET] <ddubya> maybe i-frames is the wrong term, key-frames is better
[00:18:52 CET] <kepstin> ddubya: for most audio codecs, you can cut on any packet (but you might need to discard the first N decoded samples)
[00:19:23 CET] <mooyoul> but other containers also have same problem
[00:19:38 CET] <mooyoul> so i'm assuming it is not a container-related problem
[00:19:39 CET] <kepstin> in particular, there's generally no packets in audio codecs which are special such that they're easier to cut on
[00:19:45 CET] <illuminated> I love ffmpeg
[00:20:31 CET] <furq> mooyoul: i take it you need to encode before the split and then concat the encoded files
[00:20:56 CET] <furq> i've never really dug into it but i've had issues with gaps appearing with -f concat
[00:23:29 CET] <mooyoul> ah yeah and it is not related to the input source, because i have the same problem even if i use aac or mp3 encoded input
[00:23:52 CET] <mooyoul> weird thing is when i re-encode the concat-ed output, the gaps are gone!
[00:24:31 CET] <kepstin> concatenating encoded audio files in general won't work, because the last packet of the previous stream will usually be padded, and the first packet (at least) of the next stream will have encoder delay
[00:25:01 CET] <kepstin> but if you just concatenate the packets, then the player won't skip the samples that were added, so there'll be a discontinuity/gap
[00:26:50 CET] <kepstin> end result: if you want to seamlessly/gaplessly concatenate encoded audio files, you'll need to decode then re-encode them
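kepstin's explanation can be put into numbers with a rough Python sketch. The 2048-sample encoder delay here is an assumed value for illustration only (the real priming length depends on the encoder); the 1024-sample frame size is from the discussion above:

```python
# Rough sketch of why naive packet concatenation leaves gaps: each
# encoded segment decodes to priming samples + input, zero-padded up
# to a whole number of AAC frames.
import math

FRAME_SIZE = 1024   # AAC frame size in samples
ENC_DELAY = 2048    # assumed encoder delay (priming samples)

def decoded_length(input_samples):
    # Samples the decoder emits for a segment of input_samples:
    # delay at the start, padding at the end to a frame boundary.
    return math.ceil((input_samples + ENC_DELAY) / FRAME_SIZE) * FRAME_SIZE

segment = 5 * 44100  # one 5-second segment at 44.1 kHz
extra = decoded_length(segment) - segment
print(extra)  # 2732 extra samples per joint if the player doesn't skip them
```

Unless the player knows to skip those inserted samples, every joint adds an audible stretch of silence, which is the gap mooyoul hears.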
[00:28:40 CET] <mooyoul> ohhhh... thank you kepstin.
[00:29:46 CET] <mooyoul> hmmmm.. should i give up on encoding a huge audio file across parallel servers?
[00:30:53 CET] <kepstin> honestly, the audio usually encodes so many times faster than video that it's probably not worth the effort
[00:31:06 CET] <kepstin> encode the audio on one box, encode the segmented video on a bunch of other boxes
[00:32:20 CET] <mooyoul> yeah audio encoding is much faster than video encoding
[00:32:45 CET] <mooyoul> but i need the audio encoding job to finish within 5 minutes
[00:33:25 CET] <mooyoul> because there are some execution time limit (i'm running ffmpeg in aws lambda)
[00:33:45 CET] <kepstin> I mean, in theory if you know the encoder delay of your specific encoder and cut the audio on exact frame boundaries, then you can encode the audio such that you can discard the preroll/delay frames from the next stream during concatenation
[00:33:51 CET] <kepstin> a lot of custom work required there
[00:34:16 CET] Action: kepstin just has an autoscaling aws spot instance fleet to run encoder jobs on.
[00:50:02 CET] <s``> How can I find out whether an AVPacket is a b-frame or a p-frame? I get AVPackets with avcodec_receive_packet from a h264_vaapi codec.
[00:53:42 CET] <kepstin> s``: I don't think there's any way to tell without parsing/decoding the result. The packet flags can only indicate whether it's a keyframe or not.
[00:59:20 CET] Action: sis say hola
[00:59:50 CET] <sis> anyone alive?
[01:00:14 CET] <sis> back in my day IRC channels were a bit more interactive
[01:00:33 CET] <sis> and that was at a connection speed of 14,400 bps
[01:03:11 CET] <therage3> well, thing is, this is mostly a support channel
[01:03:20 CET] <therage3> so unless someone asks something, conversation isn't that frequent
[06:31:24 CET] <rogerschicken> is it possible to change the ffmpeg profile without reencoding? if i use -vcodec copy, it seems to override and not change the profile at all
[06:31:51 CET] <rogerschicken> sorry, h264 profile, not ffmpeg
[08:50:34 CET] <Taemojitsu> I think recent versions of ffplay and ffmpeg were changed to not update faster than screen refresh rate, can someone test something for me?
[08:52:30 CET] <Taemojitsu> Create a file with fps faster than your refresh rate with 'ffmpeg -filter_complex color=black:r=120 -t 10 black.mp4', then play it with ffplay. (video output from ffmpeg is also limited for me, but I'm more confident ffplay worked differently in the past.)
[08:52:56 CET] <Taemojitsu> That is, play it with 'ffplay -noframedrop black.mp4'.
[08:56:55 CET] <Taemojitsu> Basically, I used to be able to play 1080p 60fps video with ffplay and -noframedrop, with occasional slowdowns before ffplay could catch back up. Now it's consistently too slow and less than 100% CPU, because ffplay never catches back up and my refresh rate is around 59.7, so even a low-resolution 60fps video falls behind with -noframedrop. I'd like to isolate the problem to a change in ffplay/ffmpeg.
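The slowdown Taemojitsu describes is easy to quantify with a small Python sketch, assuming the ~59.7 Hz refresh figure mentioned above:

```python
# With vsync on, a display refreshing at ~59.7 Hz can show at most
# 59.7 frames per second, so a 60 fps video played with -noframedrop
# (every frame must be shown) falls steadily behind.
video_fps = 60.0
refresh_hz = 59.7  # approximate refresh rate from the report above

lag_per_minute = (video_fps - refresh_hz) * 60  # frames of lag per minute
print(round(lag_per_minute, 3))  # ~18 frames behind after one minute
```

Since the lag only accumulates, playback never catches up, which matches the consistent slowdown described and why disabling vsync (vblank_mode=0) fixes it.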
[10:10:49 CET] <Taemojitsu> For anyone who saw my question, the solution is to pass the environment variable vblank_mode=0, as shown here: https://stackoverflow.com/questions/17196117/disable-vertical-sync-for-glxgears
[10:52:51 CET] <thunfisch> Hey there! Does anyone know if using vaapi on intel with the h264_vaapi encoder supports baseline profile? I've not been able to set it.
[10:53:10 CET] <thunfisch> it defaults to high
[10:53:43 CET] <jkqxz> Nothing supports baseline profile.  You probably want constrained baseline.
[10:55:31 CET] <thunfisch> yes.
[10:55:37 CET] <thunfisch> still, any idea?
[10:56:31 CET] <thunfisch> https://paste.xinu.at/1ch1pb/ that's what i'm getting now.
[10:58:07 CET] <jkqxz> "constrained_baseline" (or 578).
[11:00:28 CET] <thunfisch> same error :(
[11:00:46 CET] <thunfisch> ah, 578 seems to work
[11:00:48 CET] <thunfisch> thank you!
[11:01:04 CET] <thunfisch> any documentation about that? couldn't find it via google :/
[11:32:04 CET] <ravi_> Hi, I am muxing vp8 video and opus audio streams (RTP streams) into a webm file. After i mux, when i play the video in vlc, it says "VLC does not support the audio and video format 'undf'. Unfortunately there is no way for you to fix this". And, the "duration" field in the Segment info of the file is too high (79064652.240s). Do i have to set the duration for the audio/video tracks ?
[11:32:52 CET] <ravi_> Is there anything else I need to set to get an appropriate duration for the file and make it play ? Also, when I check the Codec details in the Media information of VLC it says for the audio track the Codec is undf.
[11:33:18 CET] <sfan5> you usually don't need to set the duration of tracks
[11:33:45 CET] <sfan5> seems like vlc is not recognizing opus, can it play opus in other files?
[11:47:33 CET] <ravi_> nope it is not playing opus
[11:48:27 CET] <sfan5> sounds like you need to update VLC then
[11:48:57 CET] <ravi_> but the "duration" and "Default duration" of the video track are not valid. Like i said, the duration in segment info is too high.
[11:51:13 CET] <Daglimioux> Hello there. I'm having trouble compiling some webm videos into a mosaic mp4 output and I cannot find the error that is causing the command to run forever. First of all, while it is compiling I get an error that says "Invalid profile 6", which I couldn't find any information about on the internet. Secondly, I get an error that says "Invalid data found when processing input"
[11:51:55 CET] <Daglimioux> The second error is mentioned in many forums, but their workarounds are to try enabling the missing demuxers. I have all the demuxers for webm and for mp4, as mentioned in the docs
[11:51:56 CET] <jkqxz> thunfisch:  Oh right, the names got added more recently than 3.4.
[11:52:41 CET] <ravi_> i am setting pts and dts values of each packet to respective rtp timestamps . Is that correct ?
[11:52:49 CET] <jkqxz> thunfisch:  The numbers are just profile_idc + constraint_set flags, <http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/avcodec.h;h=8fbbc798a2e65e27a2aa0a33220989c83fd81cbc;hb=HEAD#l2863>.
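The 578 that worked can be reproduced from jkqxz's formula; a quick Python check, mirroring the constants from the avcodec.h link above:

```python
# Numeric h264 profile values are profile_idc OR'd with constraint_set
# flags, as jkqxz explains above; these mirror the avcodec.h constants.
FF_PROFILE_H264_BASELINE = 66         # profile_idc for baseline
FF_PROFILE_H264_CONSTRAINED = 1 << 9  # constraint flag bit

constrained_baseline = FF_PROFILE_H264_BASELINE | FF_PROFILE_H264_CONSTRAINED
print(constrained_baseline)  # 578, the value that worked for thunfisch
```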
[11:53:41 CET] <Daglimioux> And the main problem is that the video compilation keeps running forever... I don't know if those errors are what causes this behaviour
[12:07:13 CET] <OneSploit> Hi
[12:18:17 CET] <s``> when i use h264_vaapi from the cli everything is working fine, i can play it in chromium, and the pts and dts values are the same. when i use it in my program which is based on the vaapi_encode.c and muxing.c examples, I get a video where the pts values are lower than the dts values in the p frames, so some players (chromium, ios) can't play it well. Do you have an idea how I can fix that, or where I can find the packet reorder/magic code?
[12:18:55 CET] <OneSploit> do you guys know if http://deb-multimedia.org/ contains ffmpeg that is build with nvenc ?
[12:19:43 CET] <furq> ffmpeg from apt should have nvenc
[12:19:48 CET] <furq> unless you're on some ancient debian
[12:19:57 CET] <OneSploit> oh
[12:20:06 CET] <OneSploit> that's weird, let me check
[12:20:13 CET] <OneSploit> furq: from official one?
[12:20:39 CET] <furq> yeah
[12:20:57 CET] <OneSploit> hmm so I don't need to rebuild
[12:28:50 CET] <OneSploit> furq: I wonder what I should use to get fast screencasting, it seems my CPU usage jumps a lot when I try to cast something with the standard encoder
[12:29:16 CET] <furq> probably obs and nvenc
[12:30:35 CET] <OneSploit> obs is not packaged for Debian and crashes for me
[12:31:19 CET] <furq> it's in debian stable now
[12:31:21 CET] <furq> obs-studio
[12:31:48 CET] <thunfisch> jkqxz: nice, thank you! will bookmark that.
[12:32:15 CET] <OneSploit> furq: oh.. somehow I missed that, thanks
[12:32:19 CET] <furq> you can probably just use ffmpeg and xcbgrab for simple stuff
[12:32:34 CET] <OneSploit> furq: I used simplescreenrecorder
[12:32:45 CET] <OneSploit> but I wonder about that obs what it gives
[12:32:50 CET] <furq> if you want to capture sdl/opengl etc then you'll want something like obs
[12:33:00 CET] <furq> it's generally just better at screen capturing lol
[12:33:23 CET] <furq> it uses the ffmpeg libs internally for more or less everything else it does
[12:33:34 CET] <furq> but all the screen capturing stuff is custom afaik
[12:38:12 CET] <s``> bye
[12:38:13 CET] <s``> wow, if I set the hwframe's pts, everything just works, thanks guys
[12:39:34 CET] <OneSploit> furq: i can only select Software (h264) as the encoder in OBS...
[12:39:35 CET] <OneSploit> weird
[12:39:52 CET] <OneSploit> obs-studio version 19.0.3
[12:39:58 CET] <Daglimioux> I have a command that throws some "minimal" errors, but keeps executing. The problem is that it keeps running forever and I don't know if those errors are the cause. Anyone who could help me with that? I can link the command if needed
[12:40:08 CET] <furq> are you using ffmpeg from the debian repos
[12:40:21 CET] <OneSploit> furq: yes
[12:40:22 CET] <furq> if you have obs 19 then i guess ffmpeg -version should return 3.4.1
[12:40:38 CET] <OneSploit> ffmpeg version 3.4.1-1+b2
[12:40:40 CET] <furq> weird
[12:41:11 CET] <furq> does h264_nvenc show up in ffmpeg -codecs | grep nvenc
[12:42:44 CET] <OneSploit> yes
[12:43:12 CET] <furq> shrug
[12:43:21 CET] <furq> i've never used obs on *nix so i wouldn't know where to start
[12:43:27 CET] <furq> maybe try their irc
[12:45:19 CET] <OneSploit> ok solved
[12:45:27 CET] <OneSploit> it does not depend on libnvidia-encode1 but needs it
[12:55:13 CET] <OneSploit> furq: do you know what input source I should use in obs to record only an opengl app window and not the whole desktop :)
[12:55:20 CET] <OneSploit> works on simple screen recorder
[12:55:26 CET] <OneSploit> I guess I should ask at some obs channel
[13:03:56 CET] <Daglimioux> Okay, I've found that one of those "minimal" errors is "Error while decoding stream #0:0: Invalid data found when processing input". Any suggestion why is this happening? I can see the video properly on VLC, without errors, with the right time duration, etc.
[13:44:03 CET] <Daglimioux> I got an error when trying to build x264 library: AVComponentDescriptor has no member named depth. Any suggestion?
[15:00:03 CET] <kikobyte> Hi guys, I'm having problems with scale_npp filter on ffmpeg 3.3.5 (w/ hwaccel cuvid: h264_cuvid -> scale_npp=-1:1520 -> h264_nvenc). My input is an mpeg-ts file with resolution changing over time, and during initialization the scale_npp filter gets properly inserted, I see "[Parsed_scale_npp_0 @ 0x3b12840] w:768 h:432 -> w:2702 h:1520", while on resolution change I see scaler_out_0_1 and auto_scaler_0 trying to make it into the gra
[15:01:01 CET] <kikobyte> ffmpeg -loglevel 50 -y -f mpegts -hwaccel cuvid -c:v h264_cuvid -i dump_multires.ts -vf scale_npp=-1:1520 -map 0:a -map 0:v -c:a libopus -b:a 256k -c:v h264_nvenc -b:v 24M -maxrate 28M -bufsize 28M -g 60 -flags:v +global_header -profile:v baseline -preset:v fast -zerolatency 1 -forced-idr 1 -aud 1 -bsf:v dump_extra -strict -2 -f mpegts -muxdelay 0.1 out1.ts
[15:02:41 CET] <kikobyte> Works as expected when the input file has fixed video resolution, but once it changes, I get output like https://pastebin.com/5gvLapeu
[15:11:25 CET] <BtbN> I don't think random resolution changes are supported by it.
[15:15:52 CET] <kikobyte> BtbN: How can one tell whether an input resolution change is supported by the filter? I looked into the AVFilter::process_command field, which is defined by the scale filter but missing in scale_npp, but that seems to change the output resolution. Any hints on what should be done/investigated/implemented to make this work? I could do it myself if I got some clues.
[15:16:34 CET] <BtbN> scale_npp is basically abandoned, you should look into scale_cuda if you want to add that feature.
[15:18:30 CET] <kikobyte> https://ffmpeg.org/ffmpeg-filters.html doesn't have any mention of scale_cuda, but it does for scale_npp (without any deprecation warnings), and there are reasons why I cannot switch to ffmpeg 3.4, which has the scale_cuda filter.
[15:19:16 CET] <kepstin> well, you can't really expect that this'll get fixed in 3.3 at this point...
[15:24:05 CET] <BtbN> it won't get backported to any current release branch either way
[15:50:14 CET] <aruns> Hi, how do you guys tend to compress JPEGs in FFMPEG?
[15:50:36 CET] <aruns> Atm, I am doing ffmpeg -i /path/to/image.jpg /path/to/output.jpg
[15:54:23 CET] <kikobyte> kepstin: BtbN: I don't expect backports or fixes, a hint would be enough for me to make the necessary changes in my fork. At least something I could start looking from
[16:38:00 CET] <Daglimioux> Is it usual for a mosaic compilation to keep the process running forever if one of the videos is 18 minutes long and another is only 1 minute long?
[16:38:37 CET] <Daglimioux> I mean, do all videos have to be the same length to do a compilation?
[16:39:29 CET] <durandal_1707> full output missing
[16:40:55 CET] <Daglimioux> Do you mean that I must add something like a "black screen" for the other missing 17 minutes?
[16:41:19 CET] <durandal_1707> Daglimioux: no
[16:47:18 CET] <Daglimioux> durandal_1707: then?
[16:47:50 CET] <durandal_1707> Daglimioux: no enough data provided
[16:53:02 CET] <Daglimioux> durandal_1707: the problem is that I have a compilation with 3 videos and a "black screen" in a 2x2 mosaic. When I do it with other videos it works perfectly, but there is one compilation that keeps running forever. The only difference I found so far is that 2 of those videos are 18 minutes long and the third one is only 1 minute long or so. Could that be the problem?
[16:54:04 CET] <durandal_1707> no, provide full command line
[16:54:08 CET] <Daglimioux> There are no errors at all, only an "Invalid data found when processing input", but I don't think that is the problem (?) as it is converted from webm to mp4
[16:54:13 CET] <Daglimioux> ah okay, one sec
[17:03:11 CET] <Daglimioux> ffmpeg -i video1.webm -i video2.webm -f lavfi -i color=s=640x360:color=black:d=1 -i video3.webm -i audio1.opus -i audio2.opus -i audio3.opus -i audio4.opus -filter_complex "amix=inputs=4:dropout_transition=0; [0:v] scale=640x360, setdar=0 [r1c1]; [1:v] scale=640x360, setdar=0 [r1c2]; [3:v] scale=640x360, setdar=0 [r2c2]; [r1c1][r1c2] hstack=2 [r1]; [2][r2c2] hstack=2 [r2]; [r1][r2] vstack=2" -c:a libfdk_aac -b:a 96k -c:v libx264 
[17:03:14 CET] <Daglimioux>  -preset ultrafast -y output.mp4
[17:03:20 CET] <Daglimioux> thats the full command
[17:07:54 CET] <adgtl-> Folks, I am getting this error "Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument" for https://gist.github.com/8b428c4e2d4d4e48b41ffb022494794d
[17:08:57 CET] <durandal_1707> Daglimioux: so it hangs? which version?
[17:09:39 CET] <Daglimioux> ffmpeg version N-86400-ga3b5b60 Copyright (c) 2000-2017 the FFmpeg developers
[17:11:02 CET] <adgtl-> I am just trying to run "ffmpeg -i 4485.mp4 -i 4485.opus -strict -2 -codec copy  output.mp4"
[17:11:11 CET] <adgtl-> Do you folks need these files?
[17:11:27 CET] <adgtl-> Can someone confirm that -codec copy really works for opus too?
[17:12:27 CET] <OneSploit> furq: question nvidia 960M does not have hevc_nvenc ?
[17:13:15 CET] <ZeroWalker> when cutting with -ss and -t, is it possible to have it lossless but re-encode partially in order to make the cut precise?
[17:13:45 CET] <c_14> not with commandline ffmpeg
[17:13:58 CET] <c_14> I mean
[17:14:22 CET] <c_14> I guess you could make the cut with -c copy, then find where it actually cut, do another encode of the part it missed and then combine them
[17:14:24 CET] <c_14> but
[17:14:36 CET] <adgtl-> c_14: to me?
[17:14:45 CET] <c_14> to ZeroWalker
[17:14:58 CET] <adgtl-> can someone help me?
[17:15:11 CET] <adgtl-> Getting error as said above
[17:15:12 CET] <ZeroWalker> ah,
[17:15:53 CET] <durandal_1707> adgtl-: what are you doing?
[17:17:24 CET] <adgtl-> durandal_1707: I have .mp4 without audio and .opus with audio.. I want to create single .mp4 with audio and video
[17:17:32 CET] <adgtl-> and without re-encoding.. just by codec copy
[17:17:41 CET] <adgtl-> I am not able to successfully process that
[17:19:02 CET] <durandal_1707> adgtl-: hmm, try -aco
[17:19:19 CET] <durandal_1707> -acodec copy
[17:20:09 CET] <durandal_1707> otherwise pastebin full uncut ffmpeg output
[17:21:13 CET] <kepstin> adgtl-: putting opus in mp4 is not supported by many programs, so you'll probably want to re-encode to aac.
[17:21:52 CET] <therage3> or if the end player can handle mkv container, use that
[17:22:00 CET] <therage3> since mkv can have just about anything
[17:22:01 CET] <kepstin> (i forget whether an opus mapping for mp4 even exists)
[17:23:02 CET] <c_14> kepstin: there is one, but I'm not sure it's finalized yet
[17:23:34 CET] <c_14> was still a draft last I checked
[17:24:48 CET] <kepstin> adgtl-: anyways, the answer is "yes, you can use -codec copy on opus - but you can't put opus into mp4 container"
[17:24:54 CET] <adgtl-> durandal_1707: updated gist with -acodec copy https://gist.github.com/8b428c4e2d4d4e48b41ffb022494794d
[17:25:14 CET] <adgtl-> kepstin: so error is legit?
[17:25:24 CET] <kepstin> adgtl-: "[mp4 @ 0x170fb80] Could not find tag for codec opus in stream #1, codec not currently supported in container"
[17:25:44 CET] <Daglimioux> durandal_1707: Yep, it hangs when reaches the longest video duration (at 18 mins or so). The version I get from ffmpeg -version is: ffmpeg version N-86400-ga3b5b60
[17:25:50 CET] <adgtl-> kepstin: so what's best solution to avoid re-encoding and do quickest conversion to mp4?
[17:26:02 CET] <kepstin> adgtl-: you can't put opus in mp4 so you have to re-encode.
[17:26:11 CET] <kepstin> adgtl-: if you don't need mp4, then use mkv instead.
[17:27:11 CET] <therage3> adgtl-: i'd just use mkv if whatever place you want to play it on supports that
[17:27:45 CET] <durandal_1707> Daglimioux: well, it should not happen, so try latest version
[17:28:44 CET] <adgtl-> kepstin: what's command to create mkv?
[17:28:48 CET] <Daglimioux> durandal_1707: I tried but I get an error when I try to configure the x264 library: AVComponentDescriptor has no member named depth but I couldn't find anything related on internet :S
[17:29:07 CET] <kepstin> adgtl-: change the file extension on the output filename from .mp4 to .mkv
[17:29:09 CET] <adgtl-> therage3: it's web browsers which will be playing these video files?
[17:29:10 CET] <therage3> adgtl-: just specify the output as ending in mkv extension, that should probably be ebough
[17:29:23 CET] <durandal_1707> Daglimioux: update your libx264
[17:29:26 CET] <kepstin> adgtl-: web browsers generally don't support mkv, so you're out of luck.
[17:29:30 CET] <adgtl-> therage3: and chrome, firefox, internet explorer etc support mkv?
[17:29:39 CET] <adgtl-> kepstin: lol.. what should I do now
[17:29:39 CET] <therage3> adgtl-: ah, there you go, then that's a bit unfortunate
[17:29:53 CET] <adgtl-> anyways.. issue is that re-encoding takes more time.. I want something quick
[17:29:56 CET] <Daglimioux> durandal_1707: I did. I followed the ffmpeg installation instructions, but I keep having that error
[17:29:58 CET] <adgtl-> without re-encodding
[17:30:13 CET] <kepstin> adgtl-: re-encode the audio to aac is the fastest/easiest way to turn that into a file that web browsers can play.
[17:30:27 CET] <Freex> Hi all
[17:30:27 CET] <kepstin> adgtl-: no need to re-encode the video tho.
[17:30:53 CET] <Freex> Can someone provide me with the dev and shared Windows files for the FFmpeg 2.8.13 release?
[17:31:11 CET] <kepstin> adgtl-: so something like "ffmpeg -i 4485.mp4 -i 4485.opus -c:v copy -c:a aac -b:a 128K output_yes.mp4
[17:31:30 CET] <adgtl-> kepstin: what's that 128K there?
[17:31:31 CET] <durandal_1707> Daglimioux: new version of libx264 or ffmpeg have issues compilling?
[17:31:37 CET] <adgtl-> kepstin: I need highest quality possible
[17:31:40 CET] <kepstin> adgtl-: the bitrate for the audio encode.
[17:31:45 CET] <adgtl-> kepstin: okay
[17:32:32 CET] <Daglimioux> durandal_1707: Okay nevermind, something should be in conflict with x264. Now it is compiling, let me check with new version. Sorry about that
[17:33:03 CET] <adgtl-> kepstin: "[aac @ 0x161b020] The encoder 'aac' is experimental but experimental codecs are not enabled, add '-strict -2' if you want to use it."
[17:33:10 CET] <kepstin> adgtl-: upgrade to a newer ffmpeg
[17:33:24 CET] <therage3> adgtl-: what?
[17:33:28 CET] <therage3> wait, what version are you on?
[17:33:44 CET] <kepstin> probably 2.8 or something
[17:33:54 CET] <kepstin> iirc the aac encoder experimental flag was removed in 3.0
[17:34:04 CET] <therage3> yeah, because that's an old version that gave that message
[17:36:38 CET] <adgtl-> kepstin: It's "ffmpeg version 3.4.1"
[17:36:46 CET] <adgtl-> kepstin: ?
[17:36:54 CET] <adgtl-> it's more than 3.0
[17:36:59 CET] Action: therage3 raises eyebrow
[17:37:02 CET] <NapoleonWils0n> hi all
[17:37:28 CET] <NapoleonWils0n> im just trying to help someone record a video which has north american closed captions
[17:37:30 CET] <kepstin> adgtl-: according to that error message it's not. You would not see that message with ffmpeg 3.4.1.
[17:37:37 CET] <kepstin> adgtl-: from your pastebin: "ffmpeg version 2.8.11-0ubuntu0.16.04.1"
[17:38:10 CET] <NapoleonWils0n> if i do an ffprobe -i url, i can see the closed captions are in the h264 video stream
[17:38:15 CET] <adgtl-> kepstin: yeah.. on server "ffmpeg version 2.8.11-0ubuntu0.16.04.1"
[17:38:20 CET] <therage3> adgtl-: if you have two ffmpeg versions, the one you call may be the older one. the new one has to be in your PATH
[17:38:30 CET] <therage3> I see
[17:38:34 CET] <therage3> that's why
[17:38:53 CET] <adgtl-> not sure how to upgrade to latest ffmpeg on ubuntu
[17:39:00 CET] <therage3> adgtl-: https://launchpad.net/~jonathonf/+archive/ubuntu/ffmpeg-3
[17:39:01 CET] <Daglimioux> durandal_1707: Is it normal when executing ./configure to show "Unknown option --enable-libopus", "Unknown option --enable-libx264", etc.? I don't remember that message when issued the same command with my current version of FFMpeg
[17:39:14 CET] <therage3> adgtl-: ... did you click that link I just showed you?
[17:39:30 CET] <adgtl-> sudo apt-get install ffmpeg says ffmpeg is already the newest version
[17:39:35 CET] <therage3> it's a PPA for newer versions of ffmpeg
[17:39:36 CET] <NapoleonWils0n> i did see this cmd on stackoverflow for extract closed captions: ffmpeg -f lavfi -i "movie=test.ts[out0+subcc]" -map s output.srt
[17:39:40 CET] Action: therage3 blinksblinks
[17:39:40 CET] <kazuma_> NapoleonWils0n "ffmpeg -i url -c copy out.mp4" would just copy all streams including the subtitles to mp4 or whatever
[17:40:14 CET] <NapoleonWils0n> hi all the closed captions arent in a seperate stream but in the vide stream
[17:40:33 CET] <kazuma_> they have their own stream
[17:40:33 CET] <NapoleonWils0n> im already doing a -c:a copy -c:v copy
[17:40:42 CET] <kazuma_> the video audio and subs are all individual streams
[17:41:33 CET] <NapoleonWils0n> when i do an ffprobe i get this
[17:41:35 CET] <NapoleonWils0n> Stream #0:18: Video: h264 (Constrained Baseline) ([27][0][0][0] / 0x001B), yuv420p, 416x240, Closed Captions
[17:42:19 CET] <NapoleonWils0n> i thought closed captions were different from subs, which are in their own stream
[17:42:25 CET] <kazuma_> try changing your -map s to -map 0:18
[17:42:26 CET] <kazuma_> then
[17:42:50 CET] <NapoleonWils0n> right cheers mate, just trying to do someone a favour
[17:47:31 CET] <adgtl-> what's the next step up from -b:a 128K ?
[17:47:40 CET] <adgtl-> 320?
[17:47:57 CET] <adgtl-> I need something with better quality
[17:50:00 CET] <c_14> usually 192 then 256 then 320
[17:50:16 CET] <ddubya> is it possible to decode with cuvid/nvdec and also do hardware scaling
[17:51:34 CET] <ddubya> nvm I see scale_npp
[18:07:12 CET] <NapoleonWils0n> answering my own question on how to extract closed captions from a video
[18:07:17 CET] <NapoleonWils0n> use http://ccextractor.sourceforge.net
[19:07:54 CET] <kikobyte> BtbN: The issue with dynamic resolution adjustment when NPP filter is in use seems to originate from ffmpeg_filter.c:configure_output_video_filter, which inserts "scale" filter whenever ofilter->width || ofilter->height is true. Disabling this logic when there is a non-null hw_device_ctx seems to fix the problem. Note: that might be needed for current master as well
[19:22:59 CET] <junglistric> anyone have success building ffmpeg .so with Android NDK r16b and OSX or Linux?
[19:31:43 CET] <BtbN> kikobyte, that way you won't get scaling anymore at all tho?
[19:34:52 CET] <sfan5> junglistric: yes works fine
[19:35:31 CET] <junglistric> @sfan5 test
[19:35:43 CET] <sfan5> no need to use @ on irc
[19:36:00 CET] <junglistric> sfan i get various errors with various scripts
[19:36:17 CET] <sfan5> put them on a pastebin site
[19:39:09 CET] <junglistric> https://pastebin.com/e23VqtcP
[19:40:19 CET] <kikobyte> BtbN, in HW pipeline I create NPP scale filter myself, I just don't need an additional CPU scale filter which is otherwise inserted automatically by this logic any time the output dimensions are specified. I see messages [Parsed_scale_npp_0 @ 0x2e1c8a0] w:1920 h:1080 -> w:2702 h:1520 once the resolution changes to FullHD, and everything works fine with the command line recommended on the NVIDIA-ffmpeg website
[19:41:13 CET] <sfan5> junglistric: something is wrong with your compiling setup, __DARWIN_NULL doesn't exist anywhere in the Android headers
[19:41:14 CET] <kikobyte> BtbN, I might have missed something important though... But this logic just tries to create a scale filter right after my NPP scale filter.
[19:41:39 CET] <BtbN> ffmpeg.c always inserts a classic scaler. Which often just does nothing
[19:42:37 CET] <thebombzen> kikobyte: if it autoinserts a scale filter that doesn't do anything, there's not really any performance issue as it just basically does memcpy
[19:42:58 CET] <kepstin> it shouldn't even be doing a memcpy, if it does that it's a performance issue :)
[19:43:05 CET] <BtbN> it does literally nothing if there's nothing to scale
[19:43:09 CET] <BtbN> just passes on the frame
[19:43:24 CET] <BtbN> it has no idea about hw frames anyway
[19:43:26 CET] <thebombzen> ah, so it's even faster. however it's worth mentioning that npp_scale doesn't take planar formats (IIRC)
[19:43:31 CET] <junglistric> yeah it's actually just from /usr/include/sys/_types.h, not sure why the compiler is picking that up
[19:44:10 CET] <thebombzen> since npp_scale doesn't use planar formats, only packed formats like nv12, often you need swscale to convert between yuv420p and nv12
[19:45:04 CET] <sfan5> junglistric: your ./configure line should have --enable-cross-compile --target-os=android --cross-prefix=arm-linux-androideabi --arch=armv7-a or similar
[19:45:19 CET] <thebombzen> scalers like zimg only take planar formats, so if you use zimg to scale to bgr0, actually it scales to gbrp and swscale converts it from gbrp to bgr0
[19:46:28 CET] <junglistric> okay but i wanted to compile .so for a virtual device in android studio
[19:47:13 CET] <sfan5> so you want to compile ffmpeg *inside* android studio?
[19:47:28 CET] <Daglimioux> durandal_1707: It worked with the update. Sorry for the late reply, thanks for everything
[19:47:45 CET] <junglistric> no not that route, but compile .so for a virtual device, not a physical device
[19:48:13 CET] <junglistric> i imagine the cross-compile parameters would be the same?
[19:48:28 CET] <sfan5> should be, yes
[19:49:57 CET] <kikobyte> BtbN, probably it does nothing, but in the HW case it prevents the graph from being built, because it cannot negotiate the `cuda` format
[19:50:38 CET] <BtbN> So you mean the code that re-negotiates on resolution change is not aware of hw frames?
[19:54:19 CET] <kikobyte> BtbN, not sure about saying it in this particular way, because from that code you still can access some hw-specific stuff, but that code always creates "scale" filter, regardless of whether the pipeline is GPU or not. And "scale" filter, as I understand, can't agree on "cuda" format for the buffers
[19:54:41 CET] <BtbN> well, if it always did that, even the simple case wouldn't work
[19:54:54 CET] <junglistric> thanks sfan5
[19:57:58 CET] <kikobyte> BtbN, maybe I'm messing things up with this one, but the solution (disable insertion of the "scale" filter when there is an active hw_device_ctx) works for my particular purpose. Found the fix here: https://github.com/fuchsia-mirror/third_party-ffmpeg/blob/master/ffmpeg_filter.c#L463 after I nailed things down to the place where the unwanted filter is being inserted. If you compare it to the 3.3 branch, that's one of the only few changes
[20:00:45 CET] <BtbN> This code is always called, not just on format-changes
[20:01:02 CET] <BtbN> so something must be up why it works on the initial setup, but not on re-negotiation
[20:01:41 CET] <junglistric> sfan5 i've gotten this far but with the some other errors popping up https://pastebin.com/DCnJhrC1
[20:02:04 CET] <junglistric> seems like the linker is not finding these object files, but this happens even if I put them in the linker path
[20:03:57 CET] <kikobyte> BtbN, maybe, but, unfortunately, at this point I have no other ideas - I'm not that familiar with ffmpeg code base, so I'm doing what I can at the moment - highlighting the issue and something which I consider an acceptable - oh, let's put it this way - workaround. ^_^
[20:04:26 CET] <sfan5> junglistric: I usually use android-ndk-*/build/tools/make_standalone_toolchain.py to create a toolchain, no idea about using the prebuilt ones directly
[20:04:36 CET] <BtbN> the workaround seems like a sensible addition to me, as the vf_scale.c filter is entirely unaware of hw frames
[20:04:43 CET] <BtbN> I'm more wondering why it works in the first place
[20:05:02 CET] <BtbN> Do you have a small sample that triggers the issue?
[20:09:42 CET] <kikobyte> BtbN, do you need the source media file itself? I have pasted the command line arguments, can paste again
[20:10:03 CET] <BtbN> Just interested in the file
[20:10:11 CET] <BtbN> unless it's something huge
[20:10:41 CET] <kikobyte> 100MiB of an internal testing footage, let me double check if I can share it
[20:11:09 CET] <BtbN> any video file that cuvid can decode and that has format changes will work I guess
[20:12:22 CET] <kikobyte> BtbN, that's my understanding as well
[20:13:55 CET] <kikobyte> BtbN, sorry, I cannot share that particular video, would you like me to find another one tomorrow?
[20:14:09 CET] <BtbN> I'll find something
[20:14:42 CET] <kikobyte> BtbN, thanks for your help, again. I'm off for today, see you!
[20:33:40 CET] <OneSploit> Hi, can someone tell me if GeForce 960M has h245 (hvenc) capability ?
[20:37:05 CET] <OneSploit> hevc_nvenc I mean
[20:37:50 CET] <relaxed> OneSploit: No, https://www.geforce.com/hardware/notebook-gpus/geforce-gtx-960m/specifications
[20:40:08 CET] <DHE> relaxed: really? it says shadow play is supported
[20:44:47 CET] <BtbN> I have trouble deciphering if "h245 (hvenc)" means h264 or hevc
[20:45:16 CET] <BtbN> But keep in mind, with mobile cards in Optimus setups, there is a good chance the video de/encoding unit is entirely disabled
[20:45:31 CET] <DHE> yeah that's confusing...
[20:46:24 CET] <DHE> assuming he means h265 HEVC, I would assume not. but not enough information here to make a call
[20:49:41 CET] <kepstin> OneSploit: 960M should be a GM107, which as far as I can tell *does* have an encoder block
[20:50:43 CET] <DHE> question is whether is supports h265... first gen maxwell?
[20:51:00 CET] <kepstin> yeah, first gen maxwell, which is apparently the same as kepler
[20:51:02 CET] <kepstin> so h264 only
[20:51:26 CET] <kepstin> oh, huh, gm107 is listed as second gen maxwell
[20:51:35 CET] <kepstin> so still h264 only, but it can do HP444
[20:53:50 CET] <kepstin> good luck to anyone with a 940M or 945M tho, you'll get (at random?) either a GM107 or GM108, and the GM108 doesn't have nvenc at all.
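[editor's note] Rather than guessing from spec pages, the capabilities of a particular card/driver/build can be probed directly. A hedged sketch (assumes an ffmpeg compiled with nvenc support; the test encode fails fast if the chip or driver lacks the encoder block):

```shell
# List the nvenc encoders this ffmpeg build knows about
ffmpeg -hide_banner -encoders | grep nvenc

# Try a 1-second synthetic encode; an error here means no usable NVENC
ffmpeg -f lavfi -i testsrc2=d=1 -c:v h264_nvenc -f null -
ffmpeg -f lavfi -i testsrc2=d=1 -c:v hevc_nvenc -f null -
```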
[21:17:24 CET] <lyncher> hi all. I'm trying to add "user data" (608) to a MPEG2 video (libavcodec)... how can I get the current time? can I just count frames?
[21:18:14 CET] <lyncher> and where should I inject the user data packets? after a MPEG2_START_PICTURE??
[21:18:18 CET] <ddubya> lyncher, current time ~ pts
[21:19:58 CET] <ddubya> I wonder if you can make a "user data" stream type and mux it like anything else.
[21:21:53 CET] <lyncher> I'm looking at mpeg2_metadata.bsf and pts is not available at that level
[21:23:12 CET] <lyncher> the only reference to a "time_code" is made at MPEG2RawGroupOfPicturesHeader
[21:28:36 CET] <ddubya> when you demux the stream you can read pts values
[21:29:54 CET] <ddubya> but I don't know what you mean by (608) data. Is that another mpeg stream type or something else
[21:30:20 CET] <lyncher> CEA-608: closed captioning data
[21:31:07 CET] <lyncher> I suppose it has to be added at the end of frame data
[21:32:58 CET] <ddubya> i'm on the wikipedia page, it says there are two different ways it's done
[21:33:35 CET] <lyncher> what is the link that you're reading?
[21:33:46 CET] <ddubya> https://en.wikipedia.org/wiki/EIA-608
[21:34:21 CET] <ddubya> actually they list 3 ways
[21:35:23 CET] <lyncher> it seems that I have to add that info after MPEG2_START_SEQUENCE_HEADER
[21:35:35 CET] <lyncher> (defined in cbs_mpeg2.h)
[21:42:52 CET] <ddubya> lyncher, I don't see anywhere user_data is going into the bitstream. It appears to be decoded but not encoded
[21:43:14 CET] <lyncher> I'm trying to add that code in h264_metadata.c
[21:43:41 CET] <lyncher> to receive some CEA-608 data and add it after MPEG2 GOP header
[21:43:49 CET] <ddubya> I did find this in mpeg4 encoder https://github.com/FFmpeg/FFmpeg/blob/8e950c9b4235bb66f3bf53608417c7cbc8148740/libavcodec/mpeg4videoenc.c#L1053
[21:44:34 CET] <lyncher> I'm trying to add that without reencoding the video stream
[21:45:21 CET] <ddubya> so just remuxing then?
[21:45:22 CET] <lyncher> assuming that the received video is already MPEG2, I'm trying just to add that user data and have the remuxer to write them to the final stream
[21:46:44 CET] <lyncher> there's a git repo which has an approach to encode user data to the stream: https://github.com/jpoet/ffmpeg
[21:46:56 CET] <ddubya> If you simply copy packets from av_read_frame into av_interleaved_write_frame for example, I don't think anything is going to look at the packet payload itself
[21:46:57 CET] <lyncher> (not merged with ffmpeg)
[21:47:52 CET] <lyncher> but I'm forcing the packets to go by a patched bsf
[21:48:01 CET] <ddubya> oic
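[editor's note] For reference, the "user data" being discussed is usually packed per ATSC A/53 Part 4: an MPEG-2 user_data start code, the "GA94" identifier, then a cc_data() block carrying CEA-608 byte pairs. A hypothetical sketch of building such a payload (function name and exact flag packing are my reading of the spec; verify against A/53 before relying on it, and the splicing after the picture/GOP header is not shown):

```python
USER_DATA_START_CODE = b"\x00\x00\x01\xb2"  # MPEG-2 user_data start code
ATSC_IDENTIFIER = b"GA94"                   # ATSC registered identifier
USER_DATA_TYPE_CC = 0x03                    # payload is cc_data()

def build_cc_user_data(cc_pairs):
    """cc_pairs: list of (cc_valid, cc_type, byte1, byte2) tuples."""
    out = bytearray()
    out += USER_DATA_START_CODE
    out += ATSC_IDENTIFIER
    out.append(USER_DATA_TYPE_CC)
    # process_em_data / process_cc_data flags plus 5-bit cc_count
    out.append(0x40 | (len(cc_pairs) & 0x1F))
    out.append(0xFF)                        # em_data (unused, all ones)
    for cc_valid, cc_type, b1, b2 in cc_pairs:
        # '11111' marker bits, cc_valid, 2-bit cc_type
        out.append(0xF8 | ((cc_valid & 1) << 2) | (cc_type & 0x03))
        out.append(b1)
        out.append(b2)
    out.append(0xFF)                        # trailing marker_bits
    return bytes(out)
```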
[21:49:02 CET] <sdds> how can I record live video with ffmpeg and save it to a file, but take only 3 frames per second?
[21:49:41 CET] Last message repeated 1 time(s).
[21:49:41 CET] <lyncher> I'm still missing the question about the timecodes.... how can I have access to the video timecodes inside the BSF
[21:49:56 CET] <ddubya> sdds, I don't know how to record, but you can always change the frame rate into the encoder to whatever you want
[21:50:33 CET] <sdds> what do you mean?
[21:50:46 CET] <alexpigment> sdds: just put -r 3 after your input
[21:50:49 CET] <sdds> right now i use something like ffmpeg -i http://..... 1.avi
[21:51:11 CET] <sdds> but then i get faster video
[21:51:18 CET] <alexpigment> that's if you put it before the input
[21:51:19 CET] <ddubya> ffmpeg -i input -r frame-rate output
[21:52:03 CET] <sdds> but then the output is like 25 frames per second so i get slower video
[21:52:20 CET] <alexpigment> sdds: you're missing up input and output frame rates
[21:52:22 CET] <ddubya> not if you set it to 3 fps
[21:52:30 CET] <alexpigment> *mixing
[21:52:40 CET] <ddubya> yeah, put the "-r" after the "-i"
[21:53:08 CET] <ddubya> if you put it *before* it means change the *input* frame rate
[21:53:19 CET] <ddubya> after, change the output
[21:53:34 CET] <ddubya> but I think that syntax was deprecated?
[21:54:23 CET] <alexpigment> ddubya: you mean specifying the term -framerate vs -r before the input?
[21:54:32 CET] <alexpigment> i haven't messed with it in a bit, but i think that's right
[21:54:58 CET] <alexpigment> at least, i don't see anyone using -r before the input anymore, so it makes sense that's it's deprecated
[21:58:32 CET] <sdds> if i put -r before the input the output remains 25 frames per second
[21:58:47 CET] <ddubya> sdds, so put it after....
[21:58:50 CET] <sdds> so i need to put -r before the input and -r after the input?
[21:59:06 CET] <ddubya> no, only -r after the input
[21:59:14 CET] <sdds> putting it after takes 25 frames per second from the input and drops some ...
[21:59:27 CET] <ddubya> isn't that what you wanted
[21:59:40 CET] <sdds> I want to save dbandwith
[21:59:47 CET] <sdds> bandwidth*
[22:00:23 CET] <alexpigment> sdds: either there are words that you're thinking but not saying, or you are very confused
[22:00:23 CET] <klaxa> that won't work the way you hope for
[22:00:49 CET] <sdds> i will try to explain again
[22:01:40 CET] <ddubya> if you want it to play at 25 fps and only keep every 5th frame for example in the output, is that the idea?
[22:01:55 CET] <ddubya> so it would appear to play back 5x faster
[22:01:58 CET] <sdds> I have a live stream that I want to save to a file. but i don't need to save the real video at 25-30 frames per second, I need only 3 frames per second.   I want to take those frames from the live stream
[22:02:21 CET] <alexpigment> sdds: presumably you're fine with / want a 3fps video, right?
[22:02:25 CET] <sdds> I don't want to take 25 frames and save only 4, i want to take only 4
[22:02:26 CET] <klaxa> you will need to download the whole stream regardless though, you will need to discard the frames locally
[22:02:36 CET] <klaxa> you cannot tell the remote server to only send every nth frame
[22:02:51 CET] <alexpigment> sdds: it's kinda the same thing
[22:03:00 CET] <klaxa> well he said he wanted to save bandwidth
[22:03:07 CET] <klaxa> in that regard the idea is not wrong
[22:03:15 CET] <klaxa> but in practice it doesn't really work like that
[22:03:24 CET] <klaxa> you could put a proxy with ffmpeg inbetween
[22:03:30 CET] <alexpigment> i guess i didn't assume he meant the bandwidth coming to *him*
[22:03:54 CET] <sdds> so i must download the whole stream? i can't ask to get only a low frame rate?
[22:03:57 CET] <ddubya> yeah, if it's the input frame rate you're after, it's a capture setting.
[22:04:16 CET] <ddubya> You can reduce the rate of the recording device if it's an option
[22:04:31 CET] <ddubya> but you haven't said what that is
[22:04:38 CET] <sdds> sorry, it is not to save bandwidth. the problem is that i have a slow cpu, and i don't want to use stream copy,
[22:04:59 CET] <alexpigment> <-- even more confused now
[22:05:00 CET] <alexpigment>  :)
[22:05:01 CET] <sdds> so if i get a lower frame rate the cpu will be okay
[22:05:22 CET] <sdds> haha
[22:05:29 CET] <alexpigment> stream copying is always less CPU than re-encoding fwiw
[22:05:53 CET] <sdds> but i want to re-encode because i want to overlay something or something like that
[22:06:11 CET] <sdds> so i want to re-encode only fewer frames per second
[22:06:25 CET] <alexpigment> sdds: have you tried just adding -r 3 after the input? what is the problem with that process? that'll help explain the problem
[22:06:58 CET] <ddubya> yeah if you put -r 3 after input, your output will be 3 frames per second. Basically a slide show. But it will also be 10x easier to encode
[22:07:59 CET] <sdds> ddubya: but i download all the stream and ffmpeg will drop 27 frames per second?
[22:08:16 CET] <alexpigment> exactly
[22:12:59 CET] <sdds> alexpigment: so what does -r before the input mean?
[22:13:20 CET] <alexpigment> it means you re-interpret the original frame rate
[22:13:29 CET] <alexpigment> and therefore change the speed of the video
[22:13:47 CET] <alexpigment> which, by your explanation above, you don't want to do
[22:13:55 CET] <sdds> what is it good for? if i want slower/faster video?
[22:14:02 CET] <alexpigment> sure, among other things
[22:14:04 CET] <sdds> that doesn't save bandwidth or cpu?
[22:14:08 CET] <alexpigment> no
[22:14:24 CET] <alexpigment> if your video's frame rate is being interpreted incorrectly, for example
[22:14:43 CET] <sdds> and -r after the input means how many frames to re-encode?
[22:14:47 CET] <alexpigment> like if you know your video is 60fps but ffmpeg thinks it's 30fps, you would specify the frame rate 60 before the input
[22:14:48 CET] <sdds> and the rest of the frames are dropped?
[22:14:53 CET] <alexpigment> yes
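[editor's note] The two placements of -r being contrasted above, as a hedged sketch (URLs and filenames are made up):

```shell
# -r AFTER the input: decode the stream normally, then drop frames so the
# output is 3 fps. The whole stream is still downloaded and decoded.
ffmpeg -i http://example.com/live -r 3 -c:v libx264 out.mp4

# -r BEFORE the input: reinterpret the source frame rate, which changes
# playback speed -- for when ffmpeg guesses the rate wrong (e.g. raw input)
ffmpeg -r 60 -i input.h264 -c:v libx264 out.mp4
```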
[22:15:17 CET] <sdds> alexpigment: do you know about streaming with ffserver or another way?
[22:15:24 CET] <alexpigment> nope
[22:15:31 CET] <alexpigment> that's for the more intelligent people around here ;)
[22:15:50 CET] <sdds> ok , i want to task 1 more please
[22:15:53 CET] <sdds> can you please?
[22:16:26 CET] <alexpigment> task or ask?
[22:16:30 CET] <sdds> ask
[22:16:31 CET] <alexpigment> sure
[22:16:33 CET] <sdds> I have an mjpeg stream that i want to save to a file, what is the difference if i save to mjpeg / avi / mp4?
[22:17:07 CET] <alexpigment> mjpeg is a compression scheme that is pretty simple; it's like a series of jpegs right after each other
[22:17:11 CET] <alexpigment> because of this, it's not very efficient
[22:17:24 CET] <klaxa> ffserver has also been dropped from git master :P
[22:17:54 CET] <alexpigment> modern codecs like H.264 (which you commonly find in MP4 or MKV files) use compression between multiple frames, so the file sizes are smaller for the same level of compression
[22:18:05 CET] <klaxa> if your devices support matroska you can give this a try: https://github.com/klaxa/mkvserver_mk2
[22:18:17 CET] <sdds> klaxa: i didn't find another integrated way to stream with ffmpeg
[22:19:41 CET] <sdds> alexpigment: so what is the best format to save an mjpeg stream to, to keep quality without much cpu
[22:21:57 CET] <alexpigment> sdds: mjpeg should be fairly low on CPU i think
[22:22:21 CET] <alexpigment> but you could also use -c:v libx264 -preset ultrafast
[22:23:18 CET] <alexpigment> the presets for x264 are usually named after speeds (ultrafast, veryfast, faster, medium, slow, etc), but those are also a reflection of the CPU usage
[22:24:09 CET] <alexpigment> given that everything supports h.264 natively, if not at a hardware level, it's good to use it if you can
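[editor's note] The two options discussed above, sketched as commands (camera URL is made up):

```shell
# 1) Stream copy: near-zero CPU, larger files; the container just wraps
#    the incoming jpegs as-is
ffmpeg -i http://camera.local/stream.mjpg -c copy out.avi

# 2) Re-encode to H.264 with the lowest-CPU preset: smaller files, but
#    the encode still costs CPU
ffmpeg -i http://camera.local/stream.mjpg -c:v libx264 -preset ultrafast out.mp4
```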
[22:24:27 CET] <sdds> thank you
[22:25:17 CET] <sdds> my problem is when i record a 1920x1080 stream, i see that because i have a weak cpu i can re-encode only 2 frames per second out of 25
[22:28:39 CET] <sdds> klaxa: can you explain mkvserver_mk2 to me please?
[22:28:47 CET] <alexpigment> sdds: add -preset veryfast
[22:28:55 CET] <alexpigment> sorry
[22:29:00 CET] <alexpigment> -preset ultrafast
[22:29:02 CET] <alexpigment> is what i meant
[22:29:18 CET] <sdds> which protocols does that support? rtsp over http? rtsp multicast?
[22:29:53 CET] <klaxa> mkvserver_mk2 only supports http
[22:30:25 CET] <sdds> so how can I stream rtsp multicast with ffmpeg?
[22:30:36 CET] <ayum> Hi, I am using ffmpeg to read streams from v4l2 and alsa devices, but the v4l2 driver seems to have a bug: sometimes the captured frame has no timestamp. to work around this, I use the "setpts=N/FRAME_RATE/TB" and "asetpts=N/SR/TB" filters to drop all timestamps, is that okay?
[22:34:54 CET] <b0bby__> what does num:den syntax is deprecated, please use num/den or named options instead?
[22:35:00 CET] <b0bby__> mean?
[22:35:13 CET] <alexpigment> numerator, denominator
[22:35:45 CET] <alexpigment> so if you have some fraction somewhere, use 8/5 instead of 8:5
[22:36:02 CET] <alexpigment> (presumably. it seems pretty obvious that's what it means)
[22:39:22 CET] <kepstin> ayum: doing that will probably cause desync between your audio and video
[22:40:52 CET] <b0bby__> alexpigment: Thanks!
[22:40:54 CET] <ayum> @kepstin, I am using the fps filter to change the frame rate, but once ffmpeg captures a frame without a timestamp it will print "more than 10000 buffers cached" and run out of memory soon
[22:41:42 CET] <ayum> @kepstin, and once I use the setpts filter to drop the timestamps and let ffmpeg create new ones, the problem is gone.
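[editor's note] A sketch of ayum's workaround as a full command line (device names are examples). As kepstin warns, regenerating pts from frame/sample counts can desync audio and video if the capture ever drops frames:

```shell
# Regenerate video pts from the frame count (N) and audio pts from the
# sample count, discarding whatever the v4l2 driver reported
ffmpeg -f v4l2 -i /dev/video0 -f alsa -i hw:0 \
       -vf "setpts=N/FRAME_RATE/TB" -af "asetpts=N/SR/TB" \
       -c:v libx264 -preset veryfast -c:a aac out.mkv
```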
[22:42:28 CET] <kepstin> well, the fps filter using too much memory on large timestamp gaps is a separate bug :/
[22:42:36 CET] Action: kepstin hits that all the time
[22:43:11 CET] <alexpigment> i'm not really aware of this issue, and i'm not trying to derail the conversation, but how much memory are we talking about?
[22:43:56 CET] <kepstin> depends on the length of the timestamp gap
[22:44:06 CET] <kepstin> i've seen it use multiple gigs and trigger the oom killer
[22:44:32 CET] <alexpigment> ah, that is a lot
[22:44:42 CET] <kepstin> the issue is that the fps filter queues up all of the avframes to fill the timestamp gap at the same time
[22:44:48 CET] <alexpigment> is it just storing the intermediate frames in memory as raw data?
[22:44:57 CET] <alexpigment> gotcha
[22:45:08 CET] <kepstin> the frame data itself is shared, i think, it's just the avframe structures
[22:45:20 CET] <kepstin> it needs to be rewritten using the activate callback to generate single frames at a time on request.
[22:45:30 CET] <ayum> @kepstin, I've seen there is a "framerate" filter, what's the difference between the framerate and fps filters? I tested both before, and both will use a lot of memory once timestamps are not set
[22:45:52 CET] <kepstin> ayum: 'framerate' filter blends frames together to "smoothly" fill in gaps
[22:46:04 CET] <kepstin> (it generally doesn't give good results, imo)
[22:46:08 CET] <ayum> @kepstin, oh, okay
[22:46:59 CET] <ayum> @kepstin, do you have a solution for when some frames' timestamps are not set, other than using the setpts filter?
[22:47:25 CET] <kepstin> ayum: the issue you're having probably isn't a not-set timestamp, it's probably a gap in timestamps
[22:48:39 CET] <alexpigment> i wonder if genpts before the input would address the problem
[22:48:39 CET] <ayum>  @kepstin, I added a few av_log() statements to libavdevice/v4l2.c, and they print that the kernel's v4l2 driver returned 0 in the timestamp structure.
[22:49:12 CET] <ayum> genpts?
[22:49:41 CET] <alexpigment> ffmpeg -fflags +genpts -i input ...
[22:50:01 CET] <alexpigment> honestly, i know very little about when this is needed, but it sounds related to all of this
[22:50:12 CET] <ayum> @alexpigment, thanks, I will try it later
[22:50:50 CET] <alexpigment> yeah, it's worth trying. i make no guarantees about the validity of the recommendation though ;)
[22:51:31 CET] <kepstin> I think using the genpts flag should give basically the same result as the setpts filter
[22:51:50 CET] <kepstin> which means loss of a/v sync.
[22:52:54 CET] <kepstin> but really, you should be complaining to the kernel devs about v4l giving you bad timestamps, i guess.
[22:54:24 CET] <ayum> @kepstin, yes, I will try report this bug to the HDMI capture card's company.
[22:54:57 CET] <kepstin> oh, is it not an upstream (open-source kernel) driver?
[22:58:52 CET] <ayum> I just tried -fflags +genpts, it doesn't seem to work.
[22:59:19 CET] <alexpigment> i figured it wouldn't - just wanted to throw that out there in case it was different than what you tried before
[23:00:44 CET] <ayum> @kepstin, yes, it's not an open source driver, I have to download the driver source code from the website, and compile and install it manually.
[23:56:31 CET] <ChocolateArmpits> More of a technical question. Is there any suggested approach when a source doesn't have enough audio samples to fully fit the ntsc cadence pattern?
[23:57:30 CET] <therage3> interpolation comes to mind
[23:57:44 CET] <alexpigment> hmmm
[23:57:52 CET] <therage3> that's used by CD players when they have read errors they can't fix
[23:58:08 CET] <alexpigment> few enough samples that it's noticeable but enough that you're mentioning NTSC specifically?
[23:58:44 CET] <ChocolateArmpits> alexpigment, well the pattern is 1602,1601,1602,1601,1602 to fit 48kHz audio if you've heard of it
[23:59:31 CET] <alexpigment> ah, deeply technical :)
[23:59:37 CET] <therage3> is this an Opus file or something?
[23:59:45 CET] <therage3> like, why do you have 48kHz?
[23:59:48 CET] <ChocolateArmpits> nah, mxf
[23:59:57 CET] <alexpigment> 48 is standard for broadcast video
[23:59:58 CET] <therage3> ah
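[editor's note] The cadence ChocolateArmpits mentions comes from NTSC video running at 30000/1001 fps, so 48 kHz audio yields a non-integer 1601.6 samples per video frame; alternating frame sizes of 1602/1601 makes a 5-frame cycle come out exact. A quick check of the arithmetic:

```python
from fractions import Fraction

fps = Fraction(30000, 1001)                 # NTSC 29.97 fps, exactly
samples_per_frame = Fraction(48000) / fps   # 1601.6 samples per frame
cadence = [1602, 1601, 1602, 1601, 1602]    # the 5-frame pattern

assert samples_per_frame == Fraction(8008, 5)
assert sum(cadence) == 8008 == samples_per_frame * 5
```

A source whose sample count isn't a multiple of 8008 per 5 frames can't fit the pattern exactly, which is the situation being asked about.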
[00:00:00 CET] --- Tue Jan 23 2018


More information about the Ffmpeg-devel-irc mailing list