[Ffmpeg-devel-irc] ffmpeg.log.20191219
burek
burek at teamnet.rs
Fri Dec 20 03:05:02 EET 2019
[00:52:59 CET] <ncouloute> Can anyone explain to me why you get a different frame of video depending on how you seek to a particular frame on most modern video players? Frame-by-frame forward to a particular frame, jumping to a particular frame, and frame-by-frame backwards to a particular frame each seem to give me a different frame of video depending on the file. Some files don't do that but others do. (Tighter keyframes perhaps.) FFMPEG seems to give
[00:53:00 CET] <ncouloute> me the frame I would get if I went frame by frame forward from the beginning of the file.
[00:55:08 CET] <DHE> seeking must go to a keyframe or decoding cannot proceed
[00:55:14 CET] <jokoon> ffprobe on webm doesn't yield a duration?
[00:55:32 CET] <jokoon> only something in 'tags' ?
[00:55:38 CET] <kepstin> some players seek to the first keyframe after the desired timestamp. others seek to the keyframe before the desired timestamp, decode and discard the extra frames, then start playing
[01:03:09 CET] <ncouloute> Even within the same player I get different behavior. For example PotPlayer using LAV decoder... Jump to a frame and Rew to a frame I get the "future frame" (Frame taken from a later point in time). If I step frame by frame from the beginning I get the "older frame". Older frame seems to use older keyframe and future frame uses newer keyframe. If that is true then how can I know what frame I'm actually seeing....
[01:06:45 CET] <kepstin> also remember that most containers don't store frame numbers or count frames, instead each frame has a timestamp
[01:06:59 CET] <kepstin> frames are (generally) uniquely identified by their timestamp
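kepstin's point about timestamps can be sketched numerically for constant-frame-rate video; the time base and frame rate below are illustrative NTSC-ish values, not taken from any file in this discussion:

```python
from fractions import Fraction

def frame_to_pts(n, time_base, frame_rate):
    """Timestamp of frame n, in time_base ticks, assuming constant frame rate."""
    return round(n / frame_rate / time_base)

tb = Fraction(1, 90000)          # common MPEG time base
fps = Fraction(30000, 1001)      # NTSC-ish 29.97 fps
frame_to_pts(1, tb, fps)         # 3003 ticks per frame
```

This is why two players can agree on "the frame at 8.633s" while disagreeing on "frame 207": the container only stores the timestamps.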
[01:11:26 CET] <shibboleth> BtbN, DHE: the issue is def the resolution. looks like the vaapi can only handle "common" resolutions (720 and 1080)
[01:11:34 CET] <shibboleth> re our convo y-day
[01:12:19 CET] <furq> jokoon: duration is normally in -show_format
[01:23:38 CET] <ncouloute> Even if I seek using time the issue still remains. It seems to be a matter of whether I approach the time or frame going forward or backwards. I would imagine the accurate answer is when I go forward. When I go backwards the players seem to be guessing at what is supposed to be there. Same for when I jump to a frame. I would imagine the proper way would be to go to the last keyframe before the point in time in question then
[01:23:38 CET] <ncouloute> start decoding from there... Problem is I need to find out where the "guessed frame" actually is. I suppose I can just go to the nearest keyframe instead of trying to get accurate marks, but that's giving up.
[01:32:06 CET] <jokoon> [NULL @ 000001ebe993aec0] Unable to find a suitable output format for '[1:v]scale=320:180:force_original_aspect_ratio=decrease,pad=320:180:(ow-iw)/2:(oh-ih)/2,setsar=1[rescaled1]'
[01:32:19 CET] <jokoon> is there something wrong in my filter?
[01:32:31 CET] <jokoon> I should share the whole command
[01:38:06 CET] <nicolas17> yes you should
[01:38:41 CET] <jokoon> https://bpaste.net/show/RDEVW note those are lines passed as an array to python's subprocess.run() it takes care of spaces etc
[01:38:59 CET] <jokoon> for now I get a [AVFilterGraph @ 000001f7a41edb80] No such filter: ''
[01:40:42 CET] <jokoon> I'm doing both a concat and a xstack
[01:40:55 CET] <jokoon> first scaling, then concat, then xstack
[01:41:15 CET] <jokoon> not sure if -filter_complex is similar to -vf
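For what it's worth, the "Unable to find a suitable output format for '[1:v]scale=...'" error at 01:32 usually means the graph string landed where ffmpeg expected an output filename, i.e. it wasn't preceded by `-filter_complex` in the argument list. A hedged sketch of how the argv might be laid out for `subprocess.run` (file names and labels are illustrative):

```python
import subprocess

# Illustrative input/output names; the point is the argv layout: the whole
# filtergraph is a single list element that directly follows -filter_complex.
graph = (
    "[0:v]scale=320:180:force_original_aspect_ratio=decrease,"
    "pad=320:180:(ow-iw)/2:(oh-ih)/2,setsar=1[v0]"
)
cmd = [
    "ffmpeg",
    "-i", "input0.mp4",
    "-filter_complex", graph,   # no shell quoting needed with a list argv
    "-map", "[v0]",
    "out.mp4",
]
# subprocess.run(cmd, check=True)  # left commented: ffmpeg/input not assumed here
```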
[01:58:35 CET] <jokoon> go to sleep, sorry
[01:58:36 CET] <jokoon> https://stackoverflow.com/questions/59402005/invalid-command-with-ffmpeg-filter-syntax-with-scaling-concat-and-xstack
[04:44:48 CET] <lavaflow> I have a load of mkv's with ass/ssa subtitles and I'd like to grep the subtitles to find files that contain particular words or phrases, is there a good way to do this?
[04:45:09 CET] <lavaflow> right now I'm extracting all the subs using ffmpeg so I can grep them, but I'm wondering if there is a better way
[04:45:25 CET] <TheAMM> There isn't
[04:45:39 CET] <TheAMM> Not that I'm a bearer of absolute knowledge, but I've done did that as well
[04:46:11 CET] <TheAMM> It's simply easiest to demux the subs and cache/store them for later use
[04:46:24 CET] <TheAMM> Or, well, depends on what exactly you want
[04:46:53 CET] <TheAMM> One thing to certainly keep in mind is to handle all the subtitles in one pass
[04:46:53 CET] <lavaflow> bummer. this works well enough though I suppose
[04:47:03 CET] <lavaflow> oh?
[04:47:13 CET] <TheAMM> Multiple outputs
[04:47:30 CET] <lavaflow> I'm running `ls *.mkv | xargs -I _ ffmpeg -i _ _.ass`
[04:47:39 CET] <lavaflow> oh, right I see
[04:47:40 CET] <TheAMM> Just one read pass to demux them, use named pipes as outputs
[04:47:57 CET] <lavaflow> I think all of these only have a single subtitle track, but I'll have to keep that in mind in the future.
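The xargs loop above can also be driven from Python; a minimal sketch that builds the per-file ffmpeg argv (the `0:s:0` specifier assumes the first subtitle track is text-based, and the names are illustrative):

```python
from pathlib import Path

def extract_sub_cmd(mkv, out_dir=Path(".")):
    """Build an ffmpeg argv dumping the first subtitle stream to .ass.
    Assumes a single text-subtitle track; with several tracks, add more
    -map/output pairs so everything is demuxed in one read pass."""
    out = out_dir / (Path(mkv).stem + ".ass")
    return ["ffmpeg", "-nostdin", "-i", str(mkv), "-map", "0:s:0", str(out)], out

cmd, out = extract_sub_cmd("episode01.mkv")
# subprocess.run(cmd, check=True), then grep the resulting .ass files
```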
[04:48:15 CET] <TheAMM> Requires something to handle the pipes but I've done my thing with 190TB of anime, so I have a bit of experience
[04:48:26 CET] <TheAMM> Not authority but "done did that" experience
[04:48:42 CET] <lavaflow> wew 190 TB, I'm pretty jealous
[04:48:55 CET] <TheAMM> Not stored or anything, just processed
[04:49:20 CET] <TheAMM> Got 13 million screenshots and ffprobe/mediainfo/subs etc etc etc extracted out of them, fun juicy data
[04:49:32 CET] <TheAMM> But that's a different thing
[04:49:39 CET] <lavaflow> dang, impressive
[04:49:53 CET] <TheAMM> You can also stop demuxing early if you know you don't want to extract it all
[04:50:05 CET] <TheAMM> (I'm figuring Python or something, here)
[04:50:16 CET] <lavaflow> I've been thinking about an ffprobe dump for all the files on my nas but haven't done it yet
[04:50:29 CET] <TheAMM> Spawn ffmpeg, read the output as it comes, see if lines have what you want
[04:50:31 CET] <lavaflow> I've not used ffmpeg as a library before, only from the command line
[04:50:59 CET] <TheAMM> You can also leverage ffmpeg being able to convert text subtitles to a different format, like write just an ASS parser and have ffmpeg format SRTs as ASS
[04:51:18 CET] <lavaflow> yeah, I've done that before.
[04:51:21 CET] <TheAMM> Nah, I've never used ffmpeg as a library either
[04:52:18 CET] <lavaflow> a while ago I was playing around with creating a DSL in racket or elisp to construct complex ffmpeg commands, with an emphasis on simplifying the creation of complex filter graphs
[04:52:41 CET] <lavaflow> then I got to thinking that I should probably be using ffmpeg as a library for that sort of thing, then I kind of stalled out and didn't finish it -_-
[04:52:52 CET] <TheAMM> I had a prototype of a thing that'd scan for all video files and extract subtitles to a local cache and index them for searching, but never finished that
[04:53:16 CET] <TheAMM> Used the knowledge for the anime thing though, and then improved the tricks
[04:53:32 CET] <TheAMM> Should one day get at it again but finishing things is haaaard
[04:54:27 CET] <lavaflow> I think it would be cool to have a sqlite db filled with ffprobe dumps for all my media files
[04:54:44 CET] <lavaflow> I think that might be a reasonable way to do it.
[04:55:36 CET] <lavaflow> maybe subtitles could be thrown in there as well with sqlite's full text search extension, though I've not used that before so I'm not sure if it would be a good fit.
[04:56:44 CET] <TheAMM> I had a LIKE search done, plan was to use ES for real text searching
[04:57:46 CET] <lavaflow> elastisearch? that'd be neat
[04:57:57 CET] <TheAMM> You'll probably want to uniquely identify your files, so my suggestion is to hash the first and last N bytes of them (like, 10+10 is more than enough)
[04:58:30 CET] <TheAMM> Alongside a full hash too, but hashing the start and end (and maybe filesize to be paranoid) will allow you to quickly hash the files and check if they exist in your db
[04:59:10 CET] <TheAMM> add mtime and path checking and you should have a thing that's fine to run over and over again quickly, without reading literally all the files every time
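TheAMM's partial-hash idea, sketched in Python (the chunk size `n` and the digest choice are arbitrary):

```python
import hashlib
import os

def quick_id(path, n=10 * 1024):
    """Cheap identity for a large file: (size, hash of first + last n bytes).
    Fast enough to re-run over a whole collection without reading every file
    in full; keep a separate full hash if you want certainty against collisions."""
    size = os.path.getsize(path)
    h = hashlib.sha256()
    with open(path, "rb") as f:
        h.update(f.read(n))              # first n bytes
        f.seek(max(size - n, 0))
        h.update(f.read(n))              # last n bytes (may overlap on small files)
    return size, h.hexdigest()
```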
[04:59:32 CET] <lavaflow> probably, though I think that ship has sailed for me. I already have like 90% of the files on my nas catalogued/tagged, using their full paths only as identifiers.
[05:00:32 CET] <lavaflow> I could probably retrofit file hashes onto this system though
[05:00:53 CET] <lavaflow> I'll have to give that some thought. there are definite advantages
[05:01:25 CET] <TheAMM> Yeah, it's just something that'll withstand duplicates or moving stuff around without needing a full processing on the changed stuff
[05:02:34 CET] <lavaflow> right. as it stands I do all my file management (moving, copying, etc) through my file tagging system, otherwise it would lose track of files
[05:02:41 CET] <lavaflow> which is far from ideal
[08:24:16 CET] <lain98> i called av_rescale_q with the parameters (0, 1/10000000, 1/25) and (170667, 1/10000000, 1/25) but i got zero in both cases.
[08:24:33 CET] <lain98> not sure why
[08:50:51 CET] <JEEB> lain98: the value seems to be less than 0.5 so highly likely the integer math ends up zero
[08:51:25 CET] <JEEB> 0.0170667 seconds vs 0.04 seconds, the latter of which is the smallest tick of 1/25
[10:06:55 CET] <jokoon> my xstack job is finally running
[10:06:57 CET] <jokoon> frame=80000 fps=431 q=33.0 size= 9728kB time=00:00:00.07 bitrate=996775.0kbits/s dup=79993 drop=0 speed=0.000431x
[10:07:18 CET] <jokoon> "More than 100000 frames duplicated"
[10:07:31 CET] <jokoon> Seems something's wrong
[10:07:56 CET] <lain98> ok i know why my math didnt work.
[10:14:11 CET] <jokoon> wow encoding is super slow...
[10:14:34 CET] <jokoon> I know my CPU is slow but it's only 35s of video
[10:14:48 CET] <jokoon> at speed=0.000389x
[10:15:13 CET] <jokoon> Guess I might need to rescale the video first
[10:19:10 CET] <JEEB> lain98: basically `(170667 / 10000000) * 25` is how I checked - I think that's possibly correct :P
[10:19:15 CET] <JEEB> and that gives me a value less than 0.5
[10:19:28 CET] <JEEB> so that even if you had double/float math and then were rounding, that would end up 0
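JEEB's arithmetic can be checked with a small sketch; the helper below is a rough model of `av_rescale_q`'s default rounding (nearest, halfway away from zero), not the real C implementation:

```python
from fractions import Fraction

def rescale_q(v, src_tb, dst_tb):
    """Rough model of av_rescale_q: convert a tick count from src_tb to
    dst_tb, rounding to nearest with halfway cases away from zero
    (AV_ROUND_NEAR_INF, the default)."""
    exact = v * src_tb / dst_tb               # exact rational result
    half = Fraction(1, 2)
    return int(exact + half) if exact >= 0 else -int(-exact + half)

src = Fraction(1, 10_000_000)   # 100 ns ticks
dst = Fraction(1, 25)           # one tick per PAL frame
rescale_q(170_667, src, dst)    # 0.4266675 frames, rounds down to 0
```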
[10:19:45 CET] <lain98> yes but also lack of sleep
[10:19:50 CET] <lain98> :)
[10:20:01 CET] <JEEB> :)
[11:28:18 CET] <ufk> how can I send an image file to a v4l2 video device ? using 'ffmpeg -re -i binary-file.png -f v4l2 /dev/video0' returns Requested output format 'v4l2' is not a suitable output format
[11:38:46 CET] <jokoon> Warning: data is not aligned! This can lead to a speed loss
[11:38:54 CET] <jokoon> Frame rate very high for a muxer not efficiently supporting it.
[11:39:22 CET] <jokoon> https://bpaste.net/show/H6272
[11:44:29 CET] <jokoon> MB rate (920000000) > level limit (16711680)
[11:45:01 CET] <jokoon> trying to use filters: scale then concatenate then xstack
[12:21:02 CET] <jokoon> Anybody using python? I'm sending this with subprocess.run() https://bpaste.net/show/32T2E and I'm not sure ffmpeg likes it
[12:27:27 CET] <jokoon> also curious if mixing vp8 and other formats would lead to those issues
[12:28:18 CET] <DHE> macroblock rate is through the roof. you sure you got the framerates aligned?
[12:29:03 CET] <jokoon> I was told to not care about framerate when using xstack
[12:29:15 CET] <jokoon> maybe when doing concat?
[12:29:25 CET] <jokoon> those are diverse formats
[12:30:10 CET] <jokoon> https://i.imgur.com/vhpmoL1.png
[12:30:32 CET] <jokoon> wait framerates are not shown
[12:31:37 CET] <DHE> I think the PATH column is superfluous and all the other columns are shifted. 'rate' makes sense as a framerate
[12:31:47 CET] <jokoon> yes I know
[12:32:00 CET] <jokoon> this script is a clusterfungus
[12:33:03 CET] <DHE> still, some of these are 100fps videos
[12:42:55 CET] <jokoon> https://i.imgur.com/YoA2NVN.png
[12:48:51 CET] <jokoon> https://i.imgur.com/QkxEzEU.png DHE
[12:48:52 CET] <squ> what is this tool which tabularizes video information?
[12:48:56 CET] <jokoon> some are 12fps
[12:49:28 CET] <jokoon> should I normalize those fps then?
[12:53:25 CET] <furq> 11:29:03 ( jokoon) I was told to not care about framerate when using xstack
[12:53:26 CET] <furq> uhh
[12:53:47 CET] <jokoon> not you, someone else :p
[12:54:00 CET] <jokoon> rather, I was told to test, and come back for question
[12:54:13 CET] <jokoon> and now I don't really remember how to normalize fps :x
[12:55:46 CET] <jokoon> I don't understand why this doesn't work, because I did a xstack and it worked, I added concat to my script and testing it on very short videos, and now framerate is an issue where it was not an issue on other longer videos
[12:56:13 CET] <jokoon> I suspect it might be caused by a few of those short videos I'm testing the script with
[12:56:43 CET] <jokoon> I'm using those short videos to test it, so ffmpeg can do it faster
[13:02:41 CET] <jokoon> I'm converting a gif to a vid and it results in a 100fps vid
[13:02:53 CET] <jokoon> and it's definitely not a 100 fps gif
[13:03:13 CET] <jokoon> this one
[13:03:14 CET] <jokoon> https://giphy.com/gifs/reaction-boy-5DQvlC5vk5ihy
[13:07:30 CET] <jokoon> -r 25 fixed this gif, still it's weird that ffmpeg chose 100
[13:07:39 CET] <jokoon> I converted many others and it was not 100
[13:15:01 CET] <b1nny> Hello, does anyone happen to know if there is a way to get CPU statistics per video filter?
[13:15:44 CET] <ubitux> jokoon: sanest highest recommended framerate for gif is 50
[13:16:03 CET] <ubitux> b1nny: bench and abench
[13:16:36 CET] <b1nny> ubitux: cheers! That seems like it will help me out :)
[13:31:52 CET] <jokoon> squ, made this thing myself, it's just a python script generating some html
[13:32:25 CET] <jokoon> also found a tiny bit of JS that sorts an HTML table directly
[13:32:30 CET] <squ> okey
[13:33:30 CET] <squ> maybe you are interested https://twitter.com/alexlindsay/status/1199089615056982016
[14:16:56 CET] <jokoon> can '-r 30' work as a filter?
[14:24:40 CET] <jokoon> just trying to normalize fps
[14:29:02 CET] <DHE> there is an fps filter
[14:29:14 CET] <DHE> maybe you just want to put it at the end of the filter chain
[14:29:43 CET] <kepstin> jokoon: if your ffmpeg is old, you want to normalize fps before the concat filter
[14:30:20 CET] <kepstin> if your ffmpeg is anything but latest git, you should definitely normalize fps before using any of the *stack filters.
[14:31:01 CET] <jokoon> ffmpeg version git-2019-10-23-ac0f5f4
[14:31:02 CET] <kepstin> (actually, i'm not sure the concat fix is even in a release yet)
[14:31:28 CET] <kepstin> hmm, that should have the concat fix but not the *stack fix.
[14:31:42 CET] <jokoon> does that concat bug affect what I do?
[14:31:53 CET] <jokoon> oh there's a xstack bug?
[14:32:20 CET] <jokoon> Im still struggling to apply a fps or framerate filter
[14:32:25 CET] <DHE> is it trying to select a framerate that is the LCM of all the inputs?
[14:32:44 CET] <kepstin> xstack previously set the output framerate to the framerate of the first input
[14:32:58 CET] <kepstin> now it sets it to 1/0 (vfr) if inputs have different framerates, i think
[14:33:24 CET] <jokoon> DHE, well as you told me, it seems that ffmpeg calculated a fps that was too high, which might be caused by having too many different FPS in my inputs
[14:33:35 CET] <jokoon> I have many video inputs, which are very diverse
[14:33:51 CET] <jokoon> You used "through the roof"
[14:34:02 CET] <DHE> I said the macroblock rate was through the roof
[14:34:04 CET] <jokoon> I don't know how to avoid that LCM
[14:34:14 CET] <jokoon> oh
[14:34:15 CET] <DHE> which is explained either by a very high FPS, or an extremely large image. (or both)
[14:34:22 CET] <kepstin> jokoon: you should stick an fps filter for each video in your filter chain, in the same place as where you're doing scaling.
[14:34:26 CET] <jokoon> is 100 a very high fps?
[14:34:33 CET] <jokoon> highest I saw was 100
[14:35:03 CET] <DHE> yes, considering hardware video decoders that are not GPUs tend to be limited to ~60fps at 1080p h264 decoding (level 4.1 or so)
[14:35:18 CET] <jokoon> I found some doc but Im not sure it's the fps filter
[14:35:33 CET] <jokoon> oh ok
[14:35:49 CET] <kepstin> don't find "some doc", find the official ffmpeg filters documentation
[14:35:53 CET] <jokoon> I was trying to remove those few 100fps vid
[14:35:56 CET] <kepstin> conveniently linked off the ffmpeg site
[14:36:52 CET] <kepstin> anyways, usage is simple for this. if you want to convert all your videos, to, say, 30fps, add the filter "fps=30"
[14:51:02 CET] <jokoon> woooow it worked
[14:51:16 CET] <jokoon> I put the fps filter after the rescale
[14:51:39 CET] <jokoon> so [vidX]fps=30[vidfX]
[14:52:01 CET] <jokoon> I've seen what fps=fps=30 meant but that was an issue too
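jokoon's working setup can be sketched as a small helper that appends the fps filter after the scale/pad step of each input chain, then stacks; the sizes, labels, and 2x2 layout are illustrative:

```python
def chain(i, w=320, h=180, fps=30):
    """Per-input chain: scale/pad to a common size, then normalize the
    frame rate before feeding concat/xstack (labels are illustrative)."""
    return (
        f"[{i}:v]scale={w}:{h}:force_original_aspect_ratio=decrease,"
        f"pad={w}:{h}:(ow-iw)/2:(oh-ih)/2,setsar=1,fps={fps}[v{i}]"
    )

# four inputs stacked 2x2; pass `graph` as the -filter_complex argument
graph = ";".join(chain(i) for i in range(4)) + (
    ";[v0][v1][v2][v3]xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0[out]"
)
```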
[14:53:30 CET] <jokoon> so happy
[14:53:46 CET] <jokoon> 420 lines of python haha
[14:54:15 CET] <jokoon> I guess mixing the sound won't be so hard
[14:54:39 CET] <jokoon> https://gfycat.com/pastelfarflunggentoopenguin
[17:16:33 CET] <TanaPanda> when I use ffmpeg to stream a video out to an IP address, what command tells ffmpeg specifically which interface to use? I want it to stream out of my eth0 specifically
[17:32:58 CET] <ncouloute> So this "player" behavior I was talking about before. Seems to be the same as input seeking. When I transcode a file I get the output seeking result. I need to figure out where the Input seek frame ends up. How does input seeking work? Is it just giving me the nearest key frame?
[17:39:17 CET] <kepstin> TanaPanda: i don't think there's a way to bind to a particular interface, but you can set the local ip address to use (e.g. localaddr udp option), which should make it use the interface with that ip address assigned, i think.
[17:39:56 CET] <kepstin> other than that, fix your routing priorities so the interface you prefer is the preferred interface.
[17:39:58 CET] <DHE> depends on the OS. linus will use the routing table at all times to the best of my knowledge...
[17:40:05 CET] <DHE> *linux
[17:40:56 CET] <DHE> even if you're outputting to multicast, add the multicast addresses to your routing table as /32 entries (or shorter prefixes if you prefer and are doing many outputs)
[17:41:23 CET] <TanaPanda> I am using raspbian
[17:42:14 CET] <DHE> pretty sure that's still linux
[17:43:01 CET] <furq> the udp output lets you set the interface
[17:43:04 CET] <furq> so if it is multicast then that's easy
[17:43:45 CET] <TanaPanda> well the command i use with ffmpeg is this: ffmpeg -loglevel debug -max_interleave_delta 15000000 -rtbufsize 128000k -threads 0 -stream_loop -1 -re -i /home/pi/Desktop/ActiveChannel/Property_LCI.mp4 -codec copy -shortest -f mpegts udp://192.168.6.2:20?pkt_size=1316?buffer_size=65536
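A side note on the URL in that command: ffmpeg chains URL options with `&` after the first `?`, so `?pkt_size=1316?buffer_size=65536` is likely not parsed as two options. A corrected form, with the `localaddr` value hypothetical (it is the Pi-side address suggested later in the discussion), might look like:

```
udp://192.168.6.2:20?pkt_size=1316&buffer_size=65536&localaddr=192.168.6.5
```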
[17:43:53 CET] <kepstin> does it? if so it's not documented :/
[17:44:07 CET] <furq> https://ffmpeg.org/ffmpeg-protocols.html#udp
[17:44:08 CET] <DHE> yeah I'm looking at -h full
[17:44:16 CET] <furq> localaddr=addr
[17:44:16 CET] <furq> Local IP address of a network interface used for sending packets or joining multicast groups.
[17:44:32 CET] <DHE> yeah that sets the source IP address. but linux doesn't care that the IP it uses is on the interface
[17:44:35 CET] <kepstin> that sets an ip, not an interface.
[17:44:41 CET] <TanaPanda> can localaddr=eth0?
[17:46:41 CET] <furq> i'm assuming if you have two different interfaces with the same ip then you're competent enough with iptables to figure this out
[17:46:53 CET] <DHE> no this is definitely routing table stuff
[17:47:18 CET] <DHE> ip route add 192.0.2.123/32 [via 192.168.1.1] dev eth0
[17:47:18 CET] <TanaPanda> lol assuming i am competant
[17:47:20 CET] <TanaPanda> thats precious
[17:47:30 CET] <TanaPanda> hmmm
[17:47:34 CET] <TanaPanda> i like where this is going
[17:47:35 CET] <DHE> where 'via' is the gateway IP if need be, and 192.0.2.123 is the intended recipient
[17:47:54 CET] <TanaPanda> 192.168.6.2 is the destination of the stream
[17:48:11 CET] <DHE> and 'ip route get 192.168.6.1' indicates the wrong NIC?
[17:48:22 CET] <DHE> (how many NICs does a rasp pi even have?)
[17:48:33 CET] <furq> presumably eth0 and wlan0
[17:48:35 CET] <DHE> (fix my IP typo)
[17:48:44 CET] <DHE> ah, wifi.. that makes sense
[17:49:32 CET] <TanaPanda> so your suggesting i add
[17:50:00 CET] <TanaPanda> sudo ip route add 192.168.6.2/32 via 192.168.6.1 dev eth0
[17:50:04 CET] <TanaPanda> or something like that
[17:50:21 CET] <DHE> the via is clearly wrong
[17:50:36 CET] <kepstin> if it's on the same subnet as the local ip, it doesn't need the via
[17:50:36 CET] <DHE> you specify the Pi's router, not the destination's router
[17:50:51 CET] <DHE> and if it's local, you don't specify `via` at all
[17:51:01 CET] <TanaPanda> well its wired to the pi
[17:51:09 CET] <DHE> then again if it's local, are you sure you need to be specifying it at all? are both LAN and Wifi on the same network?
[17:51:16 CET] <TanaPanda> no
[17:51:28 CET] <TanaPanda> lan is going to a non-dhcp device with no internet connectivity
[17:51:39 CET] <TanaPanda> wifi is connecting to dhcp router with internet
[17:51:49 CET] <another> so... what's your problem?
[17:51:54 CET] <kepstin> ok, so they have ip addresses in *different* subnets, right?
[17:51:57 CET] <TanaPanda> i cant get both to run at the same time
[17:52:00 CET] <TanaPanda> yes
[17:52:10 CET] <DHE> then chances are this doesn't matter. if the lan port has 192.168.6.x/24 assigned, that will always be the default for anything in that block
[17:52:15 CET] <TanaPanda> one sec
[17:52:16 CET] <kepstin> if so, there's no issue, linux should do the right thing assuming addresses and routes are configured right
[17:52:18 CET] <TanaPanda> for the sake of it
[17:52:29 CET] <TanaPanda> currently i am trying to stream to 192.168.6.2
[17:52:34 CET] <furq> yeah if they're in different subnets then eth0 shouldn't have a route to 192.168.6.2
[17:52:39 CET] <TanaPanda> and my wifi is on a 10.100.13.xxx address
[17:52:43 CET] <furq> oh
[17:52:45 CET] <furq> what
[17:53:03 CET] <DHE> and the lan is presumably 192.168.6.5 or something ?
[17:53:13 CET] <TanaPanda> well i have just left it as dhcp
[17:53:20 CET] <TanaPanda> I am not terribly good at networking
[17:53:25 CET] <kepstin> i thought you said there is no dhcp on the lan interface
[17:53:26 CET] <TanaPanda> i know what i want just not how to get there
[17:53:31 CET] <TanaPanda> there isnt
[17:53:33 CET] <kepstin> if so, you need to manually configure an ip
[17:53:40 CET] <TanaPanda> so the ip address of the lan is 169.254.x.x
[17:53:46 CET] <kepstin> that's not a real ip
[17:53:49 CET] <TanaPanda> which is the default no dhcp detected address
[17:53:52 CET] <TanaPanda> yeah
[17:54:07 CET] <kepstin> you need to configure the eth0 interface manually with an ip on the same subnet as the device you want to communicate with
[17:54:33 CET] <TanaPanda> what's the best way to do this? i have tried using the GUI to set it manually but it doesn't seem to change
[17:54:34 CET] <another> kepstin: it *is* a real ip :D
[17:54:55 CET] <furq> what gui
[17:55:14 CET] <TanaPanda> right click and click on wired and wireless netowrk settings
[17:55:29 CET] <kepstin> networkmanager, i'd assume? after making a config change with that you normally have to stop and start the interface to get the change applied
[17:55:35 CET] <furq> yeah
[17:55:51 CET] <furq> normally it's just in /etc/network/interfaces but i'm guessing gnome overrides that somehow
[17:55:58 CET] Action: DHE throws NetworkManager into the same fire systemd is burning in
[17:56:20 CET] <kepstin> networkmanager on debian is kinda weird since i think they setup some alternate config plugin instead of the default one :/
[17:56:53 CET] <TanaPanda> and why they name things differently why cant it just say ip,gateway,subnet,dns1,dns2
[17:57:15 CET] <TanaPanda> no it has to be router dns search
[17:58:53 CET] <kepstin> i think you're looking in the wrong place or have the wrong mode selected? for this interface you want to leave all routing and dns configuration blank.
[18:00:56 CET] <kepstin> You want to set IPv4 method to "Manual", fill in the address and netmask field, and leave everything else either blank or auto.
[18:01:10 CET] <furq> you should just be able to set it in /etc/network/interfaces
[18:01:17 CET] <furq> apparently networkmanager ignores anything set there by default
[18:02:01 CET] <kepstin> or even as a one-off, you could just run "ip addr add 192.168.6.5/24 dev eth0" (that will get stuff working right now, but won't be saved on reboot)
[18:23:48 CET] <TanaPanda> is there like a generic command to stream a video to an IP address that I can use that would always work regardless of the video?
[18:25:34 CET] <jokoon> does concat work the same for audio and video?
[18:29:42 CET] <DHE> TanaPanda: possibly, but the reliability may be a concern if using udp
[18:36:16 CET] <TanaPanda> unfortunately UDP is my only option
[18:36:43 CET] <TanaPanda> I have a line that works on a small video, but then I ran it on a video file that was 2.8GB large
[18:36:51 CET] <TanaPanda> it took a hot min to get up to 1x speed
[18:37:09 CET] <TanaPanda> I have since then compressed the file to a smaller version but havent gotten around to testing it yet
[18:37:44 CET] <TanaPanda> the compressed file is only 450MB now
[18:37:56 CET] <TanaPanda> so hopefully compressing will resolve that issue
[18:40:25 CET] <ncouloute> So what I think is happening is with input seeking I seek to where Frame 207 is but I'm actually getting Frame 216. which is the next keyframe. Which explains why when I go frame by frame from "207". it goes 208 then jumps to 217 then 218,219. Any idea what would cause a file to do that? Maybe the way the frames were encoded.
[18:59:47 CET] <kepstin> ncouloute: what player are you using?
[19:00:14 CET] <ncouloute> PotPlayer.. but I get the same behavior using ffplay/ffmpeg
[19:01:10 CET] <kepstin> ffplay with -ss should display the frame at the desired seek time, then continue frame by frame from there
[19:01:29 CET] <kepstin> anything else either indicates a broken file or a bug, really.
[19:02:18 CET] <ncouloute> ffmpeg -ss 00:00:08.63363 -i "C:\Users\Me\Desktop\QB School Profile.mp4" -frames:v 1 out1.jpg vs ffmpeg -i "C:\Users\Me\Desktop\QB School Profile.mp4" -frames:v 1 -ss 00:00:08.63363 out1.jpg When I use a video player I get the first commands output
[19:03:01 CET] <kepstin> oh, you're comparing -ss before and after -i with ffmpeg cli
[19:03:19 CET] <kepstin> that will actually give different results, due to ffmpeg's normalization of file timestamps during input processing
[19:03:29 CET] <ncouloute> well that is the same behavior. Ffplay/potplayer behave like input seeking
[19:03:59 CET] <kepstin> -ss before -i uses the timestamps in the file itself, using -ss after -i runs ffmpeg's timestamp processing, then discards frames up to the requested (post-processing) start time.
[19:04:12 CET] <kepstin> so if the timestamps in the file don't start at 0, the result could be different
[19:06:42 CET] <ncouloute> If I add -seek_timestamp 1 -ss 0 before the input, that should account for that, no? It's still the wrong frame however
[19:07:38 CET] <kepstin> you should just always be using -ss before -i unless you have a particular reason to do otherwise
[19:07:45 CET] <ncouloute> So this: ffmpeg -seek_timestamp 1 -ss 0 -i "C:\Users\Me\Desktop\QB School Profile.mp4" -frames:v 1 -ss 00:00:08.63363 out1.jpg
[19:08:23 CET] <ncouloute> well i dont actually use -ss after input.. but that just simulates what the transcoded file would be.
[19:09:56 CET] <kepstin> for what it's worth, the -ss output option runs after the filters and before encoding
[19:10:48 CET] <kepstin> i'm not sure what you're trying to accomplish here? :/
[19:15:20 CET] <ncouloute> So basically I have these points in time in the video; in this example let's say 08.63363. The 08.63363 in the original file is not the 08.63363 in the transcoded file. I want the 08.63363 input-option frame to be the same frame of video in the transcoded file. The 08.63363 in the transcoded file is the same 08.63363 as if I seeked as an output option
[19:16:13 CET] <ncouloute> So I think at some point during the decoding process the frames are being moved
[19:20:55 CET] <kepstin> maybe you just want to use the -copyts option when doing the transcode to disable ffmpeg's timestamp processing?
[19:21:10 CET] <kepstin> assuming there's no negative timestamps, then the output timestamps and input timestamps should match.
[19:21:38 CET] <kepstin> might need -vsync passthrough or additional muxer options depending on the format.
[19:22:40 CET] <kepstin> the -ss output option runs before muxing, so if any additional timestamp transformation is done during muxing, it won't account for that.
[19:26:20 CET] <ncouloute> -copyts and -vsync passthrough. I still get the wrong frame. =/
[19:39:44 CET] <jokoon> Wrote a script that takes any number of videos and makes an xstack with it
[19:39:52 CET] <jokoon> no idea if somebody would be interested
[19:42:18 CET] <TanaPanda> what is an xstack?
[19:42:37 CET] <jokoon> having several videos playing at the same time
[19:42:43 CET] <jokoon> in a layout of your choice
[19:42:57 CET] <jokoon> although for now all I managed to do is 2x2
[19:43:15 CET] <jokoon> but the params are already available on the wiki
[19:43:17 CET] <TanaPanda> for like a video wall?
[19:43:26 CET] <jokoon> for 4x4 and even beyond
[19:43:28 CET] <jokoon> yes
[19:43:32 CET] <TanaPanda> hmmm
[19:43:40 CET] <TanaPanda> think your script would work on the raspberry pi?
[19:43:52 CET] <jokoon> it's an encode script
[19:43:56 CET] <BtbN> If you don't need more than 1fps
[19:43:59 CET] <jokoon> not real time
[19:44:12 CET] <TanaPanda> ah
[19:44:18 CET] <jokoon> although I don't know if you can adapt it for real time
[19:44:19 CET] <TanaPanda> then I dont think i would have a use
[20:24:13 CET] <ncouloute> So if I convert the mp4 to a ts file. The timestamps are correct. If I try to remux that ts file back to mp4. Marks are off again. Something with the mp4 container that is screwing things up. =/
[21:38:11 CET] <ncouloute> I've had this problem before. Why does the ts container seem to be the only container that is frame accurate? Is it because most of my video files are either 30000/1001 or 60000/1001 or some variation over 1001. I need it to be mp4.
[21:47:45 CET] <kepstin> I'd expect doing a transcode from mp4 source to mp4 destination would preserve timestamps, I dunno what's up :/
[21:49:08 CET] <ritsuka> maybe you need to manually set the correct timescale
[22:09:59 CET] <ncouloute> I'm assuming you meant video_track_timescale. I tried setting it to same as input and 90k. I guess I have to go back to using a frameserver. Which is weird because they are all based on ffmpeg. (l-smash or LAV + avisynth + ffmpeg). It is super slow though.
[22:17:08 CET] <ncouloute> brb
[22:19:40 CET] <jokoon> mmmh I got this problem now [WinError 206] The filename or extension is too long
[22:19:53 CET] <jokoon> I gave too many input files and the command line is saturated
[22:20:08 CET] <jokoon> I guess it's possible to give ffmpeg a file list?
[22:25:14 CET] <jokoon> so apparently a .txt file with file 'file.mp3' on several lines
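For reference, the concat demuxer's list-file format (filenames illustrative):

```
# list.txt - one `file` directive per line; single-quote paths with spaces
file 'clip1.mp4'
file 'clip2.mp4'
```

used with something like `ffmpeg -f concat -safe 0 -i list.txt -c copy out.mp4`. Note this plays the files back to back; it doesn't reduce the input count for a single xstack invocation.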
[22:47:23 CET] <jokoon> with 500 inputs, I guess I might be abusing ffmpeg?
[23:00:44 CET] <BtbN> Is there a way to silence _just_ that annoying "Opening 'https://...." spam when reading a HLS playlist?
[23:07:15 CET] <BtbN> Where does that message even come from? oO
[23:07:18 CET] <BtbN> grep does not find it anywhere
[23:08:24 CET] <BtbN> It's an unconditional log in http.c...
[23:08:25 CET] <BtbN> Why
[23:08:57 CET] <BtbN> I want to get rid of that message, but keep the speed and status output.
[23:09:57 CET] <BtbN> The message is so spammy that it being printed for every segment causes the processing to slow down a ton
[23:11:09 CET] <pink_mist> edit the source and recompile ffmpeg :P
[23:15:23 CET] <furq> BtbN: -v error -stats
[23:17:15 CET] <BtbN> hm, better than nothing, but I'm still kinda annoyed by that message existing.
[23:17:26 CET] <BtbN> It seems pointless
[23:17:55 CET] <TheSashmo> Has anyone had a similar problem like this: ffplay video size is 1920x1080, the source video is 320x240, when I try to overlay some text on top of the video, I need to scale the text to the 320x240 video resolution because the text will look huge in the player, so for some reason the overlay is taking the real video resolution instead of the player resolution.... any ideas?
[23:19:30 CET] <furq> there's no way for filters to know what size your ffplay window is
[23:20:01 CET] <furq> if you want the text to look good then -vf scale=1920:1080,drawtext...
[23:20:26 CET] <furq> or i guess 1440:1080 if it's 320x240 now
[23:21:07 CET] <TheSashmo> hmm let me try that again, last time I did that, it didnt help.... thanks @furq
[23:24:59 CET] <TheSashmo> furq: I'm using the lavfi-complex is that still supported with that style of argument?
[23:51:31 CET] <BtbN> Hm, we are getting very very tiny audio glitches when segmenting a video into a bunch of mpegts segments, and then concat demux them back together.
[23:52:13 CET] <BtbN> Is this because an audio frame did not end on a video frame boundary? Why did the segment muxer not fix this?
[00:00:00 CET] --- Fri Dec 20 2019