[Ffmpeg-devel-irc] ffmpeg.log.20190307

burek burek021 at gmail.com
Fri Mar 8 03:05:01 EET 2019


[00:24:00 CET] <systemd0wn>  Question: I'm having issues using the HLS muxer to take an mp4 as input, and output a single file. I've tried single TS and single fmp4 (which outputs as m4s). Does anyone know if there are issues with using the -hls_flags "single_file" option in 4.1.1? (looking in bug tracker too)
[00:29:01 CET] <systemd0wn> The issue being that it appears to work, I get my output files, and I can play them back. However, if I host those files on a web server and use that m3u8 as the input to convert it back to MP4 I see a lot of 'Non-monotonous DTS in output stream' messages and the output file is only ~2 of the expected 26 seconds.
[00:30:18 CET] <systemd0wn> This is only ever an issue when I use the single_file flag. If I generate chunked TS files (remove the single_file flag), I don't see that issue when converting it back to mp4.
[12:19:40 CET] <sazawal> Hello all. I want to change the brightness of a video in a specified time interval. I used mkvtoolnix to split the original file, and applied the brightness change to the video segment. But now I cannot merge the segments back into one video using mkvtoolnix.
[12:20:25 CET] <sazawal> Is there a way to change the brightness of a video for a specified interval?
[12:21:04 CET] <BtbN> You have to 100% match the encoding parameters to be able to merge it back together.
[12:21:14 CET] <BtbN> Otherwise you have to re-encode the whole thing.
[12:22:14 CET] <sazawal> BtbN, When I used "-c:v copy" with the ffmpeg command it said, "Filtering and streamcopy cannot be used together."
[12:23:10 CET] <BtbN> Well, it's right about that.
[12:23:17 CET] <BtbN> Can't change the video without re-encoding
[12:23:44 CET] <sazawal> BtbN, is there some way to do it?
[12:24:13 CET] <BtbN> Just don't copy, but re-encode
[12:24:32 CET] <BtbN> And make sure to match closely what the original video was parameter wise, then you can concat the 3 segments
[12:25:10 CET] <sazawal> BtbN, Sorry, but how do I re-encode it? I guess I need to specify the codec which is the codec of the original file
[12:25:16 CET] <BtbN> yes
[12:26:02 CET] <sazawal> BtbN, When I checked the properties of the original video on nautilus, it says MPEG-4 Video (Advanced Simple Profile). How do I specify this codec in ffmpeg command?
[12:26:31 CET] <BtbN> not sure actually, mpeg4 is a mess and it can be _a lot_ of different things, which might be outright impossible to reproduce
[12:26:58 CET] <BtbN> It's also not exactly good. I'd say just re-encode the whole thing with x264 while you're at it.
[12:28:27 CET] <sazawal> BtbN, Please correct me if I am wrong. First I will encode the whole thing with x264. Then split the file. Then apply the brightness to the video segment. Then merge it.
[12:28:51 CET] <BtbN> no point in that extra encode step, pointless loss of quality.
[12:29:22 CET] <BtbN> Just split it, re-encode all 3 segments, while filtering the middle one, and then concat them all together without re-encoding.
[12:30:50 CET] <sazawal> I see. Yes, this should work. Could you please also give me a command for re-encoding with x264 with ffmpeg? Or I can use Handbrake, but there are so many parameters there that I may lose the quality.
[12:31:23 CET] <BtbN> I'm at work, so can't. There's plenty of examples on the wiki though.
[12:31:46 CET] <sazawal> BtbN, Let me check. But, thanks a lot.
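The workflow BtbN describes can be sketched like this. The split times, CRF value, and brightness amount are illustrative; the eq filter adjusts brightness, and the concat demuxer joins the identically-encoded parts without another re-encode:

```shell
# Part 1: first 10 s, re-encoded with x264.
ffmpeg -y -i input.mp4 -t 10 -c:v libx264 -crf 18 -preset medium part1.mp4
# Part 2: 10-20 s, same encode settings plus a brightness boost.
ffmpeg -y -ss 10 -i input.mp4 -t 10 -vf eq=brightness=0.2 \
       -c:v libx264 -crf 18 -preset medium part2.mp4
# Part 3: the remainder, same settings again.
ffmpeg -y -ss 20 -i input.mp4 -c:v libx264 -crf 18 -preset medium part3.mp4

# Join the three parts with the concat demuxer, no further re-encode.
printf "file '%s'\n" part1.mp4 part2.mp4 part3.mp4 > list.txt
ffmpeg -y -f concat -safe 0 -i list.txt -c copy output.mp4
```

Keeping the encoder and its parameters identical across all three parts is what makes the final `-c copy` concat safe.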
[16:52:23 CET] <ikonia> Hi guys, I'm trying to convert a flac audio file to an alac audio file, I'm using the command "ffmpeg -i file.flac -acodec alac file.m4a"
[16:52:53 CET] <ikonia> I get the error "Could not find tag for codec h264 in stream #0, codec not currently supported in container" which from reading the docs suggests the codec isn't supported in the m4a container
[16:53:08 CET] <ikonia> (alac is fine in an m4a container) - what am I missing on this ?
[16:59:49 CET] <pzy> same command works fine in my version of ffmpeg
[17:00:39 CET] <ikonia> I'm on 4.0.3 from the fedora approved repos
[17:00:41 CET] <ikonia> rpmfusion
[17:01:25 CET] <pzy> you could try the -vn flag
[17:01:32 CET] <pzy> since it seems to be some kind of weird video thing
[17:01:39 CET] <ikonia> what does vn do
[17:01:45 CET] <pzy> ignores any video in the file
[17:01:46 CET] <pink_mist> I'd start with using ffprobe on the original file and checking if it doesn't have anything wonky inside
[17:01:57 CET] <ikonia> interesting that worked
[17:02:14 CET] <pink_mist> right. you have a video in the flac file
[17:02:16 CET] <pzy> sounds like your source flac has some video data in it
[17:02:20 CET] <ikonia> I wonder if the flac files have the album art embedded
[17:02:37 CET] <pink_mist> check with ffprobe
[17:02:39 CET] <ikonia> that would explain it thinking it needed x264 and video
[17:02:40 CET] <ikonia> yup
[17:03:04 CET] <ikonia> Stream #0:1: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 360x374 [SAR 59:59 DAR 180:187], 90k tbr, 90k tbn, 90k tbc
[17:03:07 CET] <ikonia> interesting
[17:03:19 CET] <pzy> sheesh, metadata :P
[17:03:19 CET] <ikonia> cover is listed as metadata
[17:04:26 CET] <ikonia> thanks for the input, appreciated
[17:04:29 CET] <pzy> no problem
[17:04:39 CET] <ikonia> hadn't considered that it may have seen the cover as video
[17:18:09 CET] <kepstin> ikonia: alac isn't supported in all profiles of the mp4/m4a container. It should work if you use "-f ipod", for example.
[17:20:07 CET] <kepstin> oh, right, embedded cover art, nvm
[17:20:36 CET] <kepstin> 'm4a' extension does map to a muxer that supports alac by default, yeah
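Putting the thread together, a way to check for and drop embedded cover art (file.flac is a placeholder name):

```shell
# Inspect the streams first: an embedded cover shows up as a second
# stream, e.g. "Stream #0:1: Video: mjpeg ... (attached pic)".
ffprobe -hide_banner file.flac

# Drop the cover with -vn so only the ALAC audio goes into the m4a.
ffmpeg -i file.flac -vn -acodec alac file.m4a
```

Without -vn, ffmpeg tries to encode the cover-art stream as video for the m4a muxer, which is what produced the "Could not find tag for codec h264" error above.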
[17:34:47 CET] <brimestone>  Good morning everyone..
[17:35:03 CET] <brimestone> Anyone know how to use VideoToolBox here?
[17:56:27 CET] <brimestone> Hey guys, what does "Direct Rendering Method 1" support mean?
[18:00:47 CET] <^Neo> Hi, I'm trying to add support to alsa_dec.c to do spdif decoding and running into some problems. I'm including the spdif code from libavformat into libavdevice. My problem is that as I receive packets within the alsa_dec audio_read_packet and try to probe for the spdif start codes, the temporary buffer that's created for the probing seems to become corrupted: a number of 0s get inserted in the
[18:00:53 CET] <^Neo> middle of the temporary buffer between the start codes, which messes up the sync code calculations.
[18:03:24 CET] <kepstin> brimestone: i'm not sure about any specifics regarding "method 1", but in ffmpeg in general, iirc direct rendering refers to codecs/renderers/devices that work directly on the image data in avframes without copying.
[18:04:28 CET] <brimestone> kepstin: ahhh, this would eliminate the memcpy, which would save RAM footprint and potentially time. Is it on by default?
[18:04:46 CET] <^Neo> here's a pastebin with some snippets
[18:04:48 CET] <^Neo> https://pastebin.com/F5nB1gNw
[18:05:28 CET] <kepstin> brimestone: unless you're implementing an ffmpeg codec - or in some special cases a player - you should just ignore references to direct rendering.
[18:06:30 CET] <^Neo> weird thing is that if I print the AVIO buffer contents then the code continues on until I hit a segfault when generating the spdif packet
[18:06:32 CET] <brimestone> kepstin:  how about for -vf lut3d or scaling?
[18:08:20 CET] <kepstin> brimestone: very few filters implement dr, and scaling pretty much requires copying in many cases, especially when upscaling.
[18:08:39 CET] <brimestone> And downscaling?
[18:09:56 CET] <kepstin> i dunno. libswscale is complex and I haven't looked deeply into it
[18:10:15 CET] <brimestone> Got it.. thanks.
[18:10:17 CET] <kepstin> a multithreaded scaler could be nice for working with large images.
[18:10:33 CET] <kepstin> but ffmpeg doesn't have that yet :)
[18:10:39 CET] <brimestone> yes.. I'm faced with this problem
[18:11:17 CET] <brimestone> Is there a good book on how to use the library ? Say Swift or C?
[18:23:59 CET] <deterenkelt> In the ffmpeg -version output, down below where the library versions are, what does the slash mean?
[18:32:08 CET] <kepstin> deterenkelt: one of the numbers is the version ffmpeg cli tool was compiled against, the other is the version it loaded at runtime
[18:32:19 CET] <kepstin> they should always match - if they don't, you have an installation problem
[18:32:38 CET] <deterenkelt> kepstin oh, thank you.
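For example, the per-library lines can be pulled out like this; on a healthy install the number before the slash (compiled against) matches the one after it (loaded at runtime):

```shell
# Each line reads like "libavutil  56. 22.100 / 56. 22.100"
# (built-against version / runtime version).
ffmpeg -version | grep '^lib'
```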
[21:57:24 CET] <harber> i'm on win10, doing something simple like => ffmpeg -hide_banner -f dshow -list_devices true -i null    , and i get an exit code 1 when reading the response from golang, because i think the -i null or dummy creates an error code 1
[21:58:30 CET] <kepstin> harber: the "-list_devices" option causes the dshow input to error out, in order to stop ffmpeg from trying to read from the not-setup input
[21:58:39 CET] <kepstin> harber: so that's working as expected.
[21:58:44 CET] <harber> interesting
[21:58:52 CET] <harber> so a perfectly working command errors, haha
[21:59:10 CET] <kepstin> it's just because of where it's implemented in the layering
[21:59:13 CET] <harber> i get it, it's just... interesting ;)
[22:01:52 CET] <kepstin> if you use libavformat/libavdevice directly and pass the "list_devices" option when creating a dshow input, it'll do the exact same thing -  log some output and fail to initialize.
[22:02:32 CET] <kepstin> i don't actually know if there's a way to programmatically enumerate the input devices :)
[22:03:07 CET] <harber> so i guess i need to let it error and read the stderr and if it looks like a result, regardless of error, that's what i need to parse
[22:03:29 CET] <harber> more than anything, this points out that the previous languages i've been using didn't properly read that exit code, hahaha
[22:05:12 CET] <harber> basically i wanted to make sure there wasn't an option like -ignore_input_error or something i could be flagging
[00:00:00 CET] --- Fri Mar  8 2019


More information about the Ffmpeg-devel-irc mailing list