[Ffmpeg-devel-irc] ffmpeg.log.20181204

burek burek021 at gmail.com
Wed Dec 5 03:05:02 EET 2018


[00:02:35 CET] <DHE> if it's valid then I'll take care of it...
[00:58:42 CET] <DHE> well, I tried it anyway and it's working.. I guess I'll just chalk it up to forced-IDR weirdness and live with it
[02:36:52 CET] <tytyt> I'm trying to convert a png to a raw .y4m, could I get some assistance on how to do that?
[02:49:11 CET] <tytyt> This is my current process: ffmpeg -i back.png -pix_fmt yuv420p back.y4m
[02:49:14 CET] <tytyt> Is this correct?
[04:00:29 CET] <Hello71> seems like you are missing some parameters
[04:01:14 CET] <Hello71> huh, it works
[04:02:04 CET] <Hello71> not sure why to use yuv420p with rawvideo though
[09:11:40 CET] <dv_> iive: I summarized the question here: https://stackoverflow.com/questions/53599205/vc-1-specification-annex-l-what-does-the-0xc5-byte-mean
[10:43:54 CET] <relaxed> tytyt: ffmpeg -i input -f yuv4mpegpipe out.y4m
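
A minimal sketch of the conversion under discussion, combining tytyt's command with relaxed's explicit muxer flag (filenames taken from the chat):

    ffmpeg -i back.png -pix_fmt yuv420p -f yuv4mpegpipe back.y4m

The .y4m extension alone already selects the yuv4mpegpipe muxer, so -f is mainly needed when writing to a pipe or to a differently named file.
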
[11:08:40 CET] <th3_v0ice> Is av_read_frame() a blocking call? Mainly while reading UDP or RTMP streams.
[11:09:02 CET] <th3_v0ice> If so how can I make it non-blocking?
[11:38:24 CET] <c_14> by default I think so, you can set AVIO_FLAG_NONBLOCK on the format and it will return AVERROR(EAGAIN) instead of blocking
[11:38:46 CET] <c_14> may not be supported on all protocols/formats/things though
[11:41:10 CET] <c_14> should work for udp and rtmp though
[11:41:13 CET] <th3_v0ice> For UDP I set a timeout; that solved the problem, actually. I am just wondering if that will also solve the problem for RTMP, but I will need to test.
[11:56:04 CET] <vasilica> hello! nice community here
[11:57:29 CET] <th3_v0ice> c_14, thanks for the input. I will try to set AVIO_FLAG_NONBLOCK if the timeout fails on RTMP.
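
A small C sketch of the timeout approach th3_v0ice mentions for UDP, assuming the udp protocol's "timeout" option (microseconds, read mode); the URL and value below are placeholders:

    #include <libavformat/avformat.h>

    /* open a UDP input with a read timeout instead of blocking forever */
    static int open_udp_with_timeout(AVFormatContext **fmt, const char *url)
    {
        AVDictionary *opts = NULL;
        int ret;

        av_dict_set(&opts, "timeout", "5000000", 0); /* 5 s, in microseconds (placeholder) */
        ret = avformat_open_input(fmt, url, NULL, &opts);
        av_dict_free(&opts);
        return ret;
    }

For RTMP the equivalent is less clear-cut; the generic "rw_timeout" option (also in microseconds) is one thing to try, and the chat leaves that part untested.
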
[11:58:29 CET] <vasilica> i have a quick question, maybe you guys can help me. i have a bunch of audio files (different lengths) and one video 5 seconds long. could i use the ffmpeg library to batch convert all the audio files to video files, using the 5 second video playing in a loop?
[11:59:27 CET] <vasilica> the output video should be the length of the audio file, but with the 5 second video playing as a loop
[12:04:19 CET] <c_14> you can try using the -stream_loop option (it's documented in the man pages)
[12:04:49 CET] <c_14> that may or may not work depending on the video file, otherwise you can always script it with the concat filter or the concat demuxer and if you're using the C libraries it'll definitely be possible
[12:07:18 CET] <vasilica> i will try with the -stream_loop -1
[12:08:40 CET] <pink_mist> you'll also want -shortest
[12:08:41 CET] <pink_mist> probably
[12:09:39 CET] <c_14> yeah
[12:13:27 CET] <vasilica> thanks guys. i'll get on it as soon as i can and come back with feedback, thank you again for the support
[13:22:24 CET] <unsymbol> is it possible to specify a video input by name, like you can do with audio interfaces e.g front:CARD=C920,DEV=0, rather than specifying the mount point?
[13:33:43 CET] <th3_v0ice> c_14: One more question. If I want to set AVIO_FLAG_NONBLOCK for the input, when should I do that, right after the avformat_open_input()?
[13:36:46 CET] <c_14> before the open I'd assume
[13:37:24 CET] <c_14> in the open call pass it as one of the flags
[13:37:37 CET] <c_14> for avio that is
[13:41:13 CET] <th3_v0ice> avformat_open_input() doesn't take any flags; before that call the AVFormatContext is NULL
[13:41:29 CET] <c_14> avio_open and avio_open2 do
[13:42:04 CET] <c_14> you'd need those for udp/rtmp anyway (unless you do custom IO) afaik
[13:42:55 CET] <th3_v0ice> avformat_open_input() is handling the udp and rtmp input just fine, I am however using avio_open for output.
[13:43:12 CET] <th3_v0ice> Should I change avformat_open_input() to avio_open() for input also?
[13:45:10 CET] <c_14> you can probably set it as an AVDictionary option somewhere
[13:47:59 CET] <c_14> th3_v0ice: the AVFormatContext has an avio_flags member
[13:48:11 CET] <c_14> you can set that before calling avformat_open_input and it'll get passed to avio_open
[13:52:18 CET] <c_14> (you can just allocate one with avformat_alloc_context, set the option and then pass a pointer to it to avformat_open_input if you haven't been manually allocating it already)
[13:53:54 CET] <th3_v0ice> I haven't, cool! Will try it in a moment. Thanks!
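
A minimal C sketch of what c_14 is describing, i.e. allocating the context up front and setting avio_flags before the open; as the next lines show, whether the flag actually takes effect for a given protocol is another matter:

    #include <libavformat/avformat.h>

    /* request non-blocking I/O before opening the input (c_14's suggestion) */
    static int open_nonblocking(AVFormatContext **out, const char *url)
    {
        AVFormatContext *fmt = avformat_alloc_context();
        int ret;

        if (!fmt)
            return AVERROR(ENOMEM);
        fmt->avio_flags = AVIO_FLAG_NONBLOCK; /* forwarded when the AVIOContext is opened */

        ret = avformat_open_input(&fmt, url, NULL, NULL);
        if (ret < 0)
            return ret; /* avformat_open_input frees the context on failure */

        *out = fmt;
        return 0;
    }

In this mode av_read_frame() can return AVERROR(EAGAIN) instead of blocking, so the read loop has to retry rather than treat that as a fatal error.
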
[13:57:32 CET] <th3_v0ice> For UDP it still blocks
[14:05:18 CET] <c_14> hmm, it should work
[14:09:40 CET] <c_14> sounds like a possible bug
[14:16:09 CET] <tombb> hi all, I have 3 MTS files, ffprobe says they all have a different "start" time, when I try to merge them to 1 file I lose audio sync, is there any way to RESET the start time to 0?
[14:16:48 CET] <th3_v0ice> Let me see if it actually is passed to the avio_open
[14:24:42 CET] <th3_v0ice> It seems to be allocated at the start of the function without checking whether it already exists.
[14:33:41 CET] <dv_> and now I have questions about realmedia / rv30 format specs
[14:34:25 CET] <dv_> https://dpaste.de/j34U <- I am trying to make sense of this code snippet from NXP's imx-vpuwrap library
[14:35:15 CET] <dv_> I understand that the part in the " if (bIsRV8)" block corresponds to extra data from the realvideo stream
[14:35:28 CET] <dv_> but, the "slice info" stuff I cannot find any information about
[14:38:52 CET] <c_14> th3_v0ice: that's what the !s && is for
[14:39:03 CET] <c_14> if it isn't null it doesn't get allocated again
[14:46:36 CET] <vasilica> so, i'm back. i've tried mapping the audio to the video and used -stream_loop -1, but it keeps producing a video around 69 hours long :)
[14:47:05 CET] <vasilica> can you please tell me what i'm doing wrong? i'm using this command
[14:47:50 CET] <vasilica> ffmpeg -i vid.mp4 -stream_loop -1 -i audio.mp3 -map 0:v -map 1:a -c copy output.mp4
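
For reference, -stream_loop is an input option and applies to the -i that follows it, so placed like that it loops the audio rather than the 5-second video. A sketch of what the channel was steering toward (loop the video, leave the audio alone, stop at the shorter stream); whether -c copy survives the looping depends on the file, as c_14 noted:

    ffmpeg -stream_loop -1 -i vid.mp4 -i audio.mp3 -map 0:v -map 1:a -c copy -shortest output.mp4
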
[15:07:27 CET] <th3_v0ice> c_14: I am not 100% sure that the second part of the condition isn't evaluated when the first operand of the logical AND is false.
[15:08:06 CET] <DHE> in C, the && operator is short-circuiting and will not evaluate further operations if a 'false' comes up on the left side
[15:09:13 CET] <DHE> if (X != NULL && *X > 0) // This will not crash whether or not X is NULL
[15:09:59 CET] <th3_v0ice> Then I am not sure why the solution c_14 suggested doesn't work.
[16:23:52 CET] <archibalddd> hello again
[17:58:26 CET] <ParkerR> Is there a way to use a video file as the encoding parameters for another video? Say I have fileA that's H.264, 720x480, 1500k. I want to transcode fileB to those same specs (albeit more fine-grained than just those 3)
[18:00:16 CET] <BtbN> no
[18:00:34 CET] <BtbN> for the dimensions it's easily scriptable, but not for any encoder settings
[18:00:55 CET] <ParkerR> Ahh ok thanks
[18:21:49 CET] <kepstin> if the video was encoded with x264, x264 saves the encoder options used in the file metadata, and you can potentially copy those to encode another video with the same settings
[18:22:08 CET] <kepstin> (note that "with the same settings" isn't the same thing as "to the same specs")
[18:23:44 CET] <ParkerR> kepstin, Aye yeah different encoder versions/etc producing slightly different results
[18:24:10 CET] <furq> are you concatenating these videos
[18:24:25 CET] <furq> it doesn't really matter if you're not
[18:24:40 CET] <ParkerR> I'll poke around ffprobe some more. I'm trying to replace a video in an archive, but it may be fruitless anyway since I think the application I'm messing with does checksums
[18:29:59 CET] <ParkerR> So the settings I want to use are     Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, smpte170m/bt470bg/bt709), 720x484 [SAR 1:1 DAR 180:121], 1937 kb/s, 25 fps, 25 tbr, 25k tbn, 50k tbc (default)
[18:30:27 CET] <ParkerR> And audio     Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 125 kb/s (default)
[18:39:04 CET] <furq> if you need to match checksums then you've pretty much got no chance
[18:40:17 CET] <ParkerR> I know I'll never match the checksum
[18:40:32 CET] <ParkerR> I just want to get it matching spec-wise and I'll see if it does anything afterwards
[18:44:38 CET] <kepstin> that's a really weird video size/framerate combination, huh.
[18:46:31 CET] <kepstin> but yeah, you'll probably get close enough as to not matter by matching the video resolution/sar/framerate, setting profile to main, and encoding at 2000kbps abr video / 128kbps abr audio (builtin aac encoder will do well enough here)
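
One way to approximate what kepstin describes, with placeholder filenames (x264's ABR mode is just -b:v with no CRF):

    ffmpeg -i fileB.mp4 -vf "scale=720:484,setsar=1" -r 25 \
           -c:v libx264 -profile:v main -b:v 2000k \
           -c:a aac -b:a 128k output.mp4
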
[19:00:32 CET] <tytyt> @Hello71 "not sure why to use yuv420p with rawvideo though" What should I do?
[19:04:30 CET] <Hello71> 420 means you lose data
[19:05:46 CET] <Hello71> So unless your PNG is 420 (not sure if that's possible) or the input to it was, why not compress
[19:06:01 CET] <ParkerR> kepstin, Thanks. Matched all that but the application just doesn't play it. Probably doesn't like me messing with the files heh
[19:06:51 CET] <ParkerR> It's this little AR video overlay thing. Has an obb file on Android that it validates on launch. You can bypass the launch validation but when it's in the viewfinder it just doesn't play when aimed at the target
[19:07:11 CET] <ParkerR> Was worth a shot.
[19:09:19 CET] <tytyt> Okay that makes sense, what would you suggest? I am trying to compress, 'Keep the best perceived values'
[19:10:33 CET] <kepstin> tytyt: what's your final goal? y4m is a container for raw/uncompressed video which is incompatible with 'wanting to compress'
[19:11:38 CET] <tytyt> I need to convert it to a format that AOMedia (AV1) can use, and as far as I'm aware y4m is the best format that I can go with to convert.
[19:12:01 CET] <furq> that works then
[19:12:02 CET] <tytyt> So essentially, I would prefer to convert without compression,
[19:12:12 CET] <furq> although you could also build ffmpeg with libaom support
[19:12:24 CET] <tytyt> Then handle the quality on AV1 encoding.
[19:12:38 CET] <tytyt> Yes, I am just A - B testing right now.
[19:14:50 CET] <kepstin> tytyt: you should convert to a pixel format that matches what you want the final encoded av1 video to be in
[19:15:30 CET] <kepstin> (and if you're doing any comparisons, you should compare the av1 with the y4m, not with the original png)
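
If the end goal is an AV1 encode anyway, a hedged sketch of the direct libaom route furq mentioned, assuming an ffmpeg build with --enable-libaom (the CRF value is just a placeholder; -strict experimental was still required for libaom-av1 at the time):

    ffmpeg -i back.png -pix_fmt yuv420p -c:v libaom-av1 -crf 30 -b:v 0 -strict experimental back.mkv
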
[19:16:14 CET] <Hello71> although for videos, huffyuv is usually better because most people don't have hundreds of gigs of ram
[19:16:41 CET] <furq> aomenc presumably wants y4m input
[19:16:44 CET] <furq> otherwise it would be ffmpeg
[19:17:27 CET] <Hello71> maybe. how does mkvtoolnix do it? do they have their own demuxers?
[19:17:52 CET] <furq> yes
[19:17:56 CET] <kepstin> mkvtoolnix contains a set of demuxers and codec parsers, yes
[19:18:07 CET] <kepstin> does av1 / aomenc have rgb support at all? or is it yuv only?
[19:18:07 CET] <furq> huffyuv would need a decoder as well obviously
[19:18:23 CET] <furq> if you're testing av1 then you probably want to do it with yuv420p
[19:18:40 CET] <Hello71> also, is there any way to reduce mkv overhead?
[19:18:50 CET] <furq> uh
[19:18:55 CET] <furq> does it have much overhead
[19:19:03 CET] <Hello71> not really
[19:19:20 CET] <Hello71> but 1% is 1%
[19:19:33 CET] <furq> i've never seen it be 1%
[19:19:51 CET] <furq> you can probably mess around with mkvmerge --cluster-length and such
[19:22:17 CET] <tytyt> "aomenc presumably wants y4m input" This is correct
[19:22:24 CET] <tytyt> It won't convert a png
[19:22:55 CET] <tytyt> Technically I want to compare encoding / converting from webp and av1
[21:34:01 CET] <tytyt> Last question, I'm trying to decode (av1) with ffmpeg to webp
[21:59:36 CET] <rmbeer> hello
[21:59:54 CET] <rmbeer> i have this command: ffmpeg -i /opt/Torrent/3D/3Drender/pre.mkv -i out3.mp4 -lavfi "[0:v]scale=1276:400,setsar=1:1[v1];[1:v]scale=1276:400,setsar=1:1[v2];[v1][0:a:0][v2][1:a:0]concat=n=2:v=2:a=2[outv][outa]" -map '[outv]' -map '[outa]' out4.mp4
[22:00:09 CET] <rmbeer> and get this error: Media type mismatch between the 'Parsed_setsar_3' filter output pad 0 (video) and the 'Parsed_concat_4' filter input pad 2 (audio)
[22:00:17 CET] <rmbeer> Cannot create the link setsar:0 -> concat:2
[22:00:28 CET] <rmbeer> i don't understand how to fix this...
[22:01:24 CET] <pink_mist> you probably missed something before the concat in the argument
[22:01:27 CET] <rmbeer> i want to concatenate two videos, each with audio+video
[22:01:35 CET] <pink_mist> whether that's a , or a ; or a : or something else I don't know
[22:01:41 CET] <pink_mist> but I'd suspect you need one of those
[22:01:58 CET] <DHE> 2 video inputs and 2 audio inputs?
[22:03:15 CET] <rmbeer> DHE, each -i input file has video and audio; i want to concatenate the two videos without losing audio or video
[22:03:55 CET] <DHE> right, but you're feeding the concatenation 2 inputs, not that each input has 2 video and 2 audio streams
[22:05:05 CET] <rmbeer> DHE, i don't understand...
[22:05:45 CET] <rmbeer> each video has two streams, video and audio; i need to work with 4 streams, two video and two audio...
[22:05:55 CET] <DHE> https://ffmpeg.org/ffmpeg-filters.html#concat   Yes, but v= and a= are setting the number of OUTPUT streams
[22:06:32 CET] <rmbeer> ah, i see, thanks
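
Concretely, with two segments each contributing one video and one audio stream, v and a should be 1; rmbeer's filtergraph otherwise unchanged:

    ffmpeg -i /opt/Torrent/3D/3Drender/pre.mkv -i out3.mp4 -lavfi "[0:v]scale=1276:400,setsar=1:1[v1];[1:v]scale=1276:400,setsar=1:1[v2];[v1][0:a:0][v2][1:a:0]concat=n=2:v=1:a=1[outv][outa]" -map '[outv]' -map '[outa]' out4.mp4
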
[23:04:21 CET] <tytyt> I'm sorry for posting the question twice, but I'm trying to decode an AV1-format .mp4 [made with: ffmpeg -i test.webp -c:v libaom-av1 -crf 43 -b:v 0 -cpu-used 4 -threads 16 -tile-columns 2 -stats -strict experimental image.mp4] and I just want to decode it back (to webp); could I ask for a suggestion on how to achieve this?
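
A hedged sketch of one way to go from the AV1 .mp4 back to a webp, assuming the ffmpeg build can decode AV1 (libaom) and has the libwebp encoder; the output name is a placeholder:

    ffmpeg -i image.mp4 -frames:v 1 -c:v libwebp decoded.webp
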
[00:00:00 CET] --- Wed Dec  5 2018

