[Ffmpeg-devel-irc] ffmpeg.log.20191031

burek burek at teamnet.rs
Fri Nov 1 03:05:02 EET 2019


[00:27:50 CET] <AlexApps> Hello, as I have discussed in previous messages, I am creating a script to generate slideshows for me. I want to have a unique caption drawn to the screen for each image using drawtext, but I cannot find any way to pass a unique caption to drawtext every frame. Is there any way to refer to the line of a text file correlating to the frame number?
[00:29:54 CET] <furq> AlexApps: drawtext supports https://ffmpeg.org/ffmpeg-filters.html#Timeline-editing
[00:30:14 CET] <furq> with that said you should probably just write a subtitle file and burn it in with the subtitles filter
[00:35:46 CET] <AlexApps> furq: Thanks, will look into them both. :)
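A minimal sketch of the subtitle-burn approach furq suggests, assuming the script writes one SRT cue per slide; the file names, the five-seconds-per-image rate, and a libass-enabled build of ffmpeg are assumptions, not details from the discussion above:

    ffmpeg -framerate 1/5 -i slide%03d.png \
           -vf "subtitles=captions.srt,format=yuv420p" \
           -c:v libx264 -r 25 slideshow.mp4

Each image is shown for 5 seconds and captions.srt carries one cue per 5-second window, so every slide gets its own caption without per-frame drawtext expressions.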
[01:05:13 CET] <KombuchaKip> To decode an audio file and read its metadata, is it necessary to call read_header()? When is it necessary? https://ffmpeg.org/doxygen/trunk/structAVInputFormat.html#a286d65d159570516e5ed38fcbb842d5a
[01:05:57 CET] <DHE> generally you should not call any function unless its name begins with av* (avpriv_* being the exception, those are internal)
[01:13:31 CET] <KombuchaKip> DHE: Makes sense which is why I was confused at seeing it used here in the OP's question. https://stackoverflow.com/questions/13592709/retrieve-album-art-using-ffmpeg
[01:15:00 CET] <JEEB> that is weird
[01:15:48 CET] <JEEB> usually if you want to do some pre-reading, you call avformat_find_stream_info
[01:16:17 CET] <KombuchaKip> JEEB: Yeah, hence my confusion. But I noticed none of the streams in an audio file contain any disposition bits set, including AV_DISPOSITION_ATTACHED_PIC. When I use ffprobe I see them set. So I was wondering if there was something I had to call first.
[01:20:51 CET] <JEEB> KombuchaKip: you can literally see what it does in open_input_file() in fftools/ffprobe.c
[01:21:59 CET] <KombuchaKip> JEEB: Yeah, I'm aware of that. It calls avformat_open_input() which in turn can call read_header(). So I'm not sure why my disposition bits are always all zero when ffprobe shows they're not.
[01:24:03 CET] <JEEB> it does open input, by default it also does avformat_find_stream_info since "find_stream_info" is 1 by default
[01:24:25 CET] <KombuchaKip> JEEB: So then I'm stumped.
[01:24:43 CET] <JEEB> it does set some options for the stream info opening and the lavf context opening,
[01:24:52 CET] <JEEB> but that's as far as it seems to go
[01:25:22 CET] <JEEB> well, it does seem to also just in case open decoders for all streams too
[01:25:32 CET] <JEEB> but that shouldn't affect the AVFormatContext
[01:25:52 CET] <JEEB> since a new lavc context is opened and just parameters taken from the AVStream's codecpar
[01:26:56 CET] <KombuchaKip> JEEB: ffprobe(1) shows DISPOSITION:attached_pic=1 is set on the input file, but in my own code the bit is never set.
[01:28:10 CET] <JEEB> same file and open+find stream info?
[01:29:31 CET] <JEEB> and you're looking at the place where ffprobe.c's show_stream is?
[01:29:42 CET] <KombuchaKip> JEEB: I call avformat_open_input(), avcodec_alloc_context3(nullptr), then loop over each stream looking for the disposition bit.
[01:30:16 CET] <KombuchaKip> JEEB: Yeah, I looked at that last night. It's just checking to see if the disposition bit is set, same as my code.
[01:30:20 CET] <JEEB> try adding avformat_find_stream_info
[01:30:31 CET] <JEEB> since that definitely is getting done by default in ffprobe
[01:30:44 CET] <JEEB> (you can disable it in ffprobe by giving it an option it seems)
[01:30:48 CET] <KombuchaKip> JEEB: Yeah, I see that. Let me give it a go.
[01:32:01 CET] <KombuchaKip> JEEB: I'm looking through the man page now.
[01:33:15 CET] <KombuchaKip> JEEB: I'm guessing it's -find_stream_info=false, based on --help, but it's not documented in the man page.
[01:34:09 CET] <JEEB> probably just -find_stream_info 0
[01:34:53 CET] <KombuchaKip> JEEB: Tried that. Chokes on it: 'Argument 'foo.mp3' provided as input filename, but '0' was already specified.'
[01:38:21 CET] <JEEB> that is weird
[01:38:39 CET] <JEEB> I mean, I'm looking at cmdutils to see how OPT_BOOL is supposed to work
[01:38:42 CET] <KombuchaKip> JEEB: Its argument is declared as an OPT_BOOL. I'm trying to figure it out.
[01:38:44 CET] Action: KombuchaKip nods
[01:38:46 CET] <KombuchaKip> haha
[01:40:45 CET] <JEEB> right
[01:40:48 CET] <JEEB> -no-blah
[01:40:50 CET] <KombuchaKip> JEEB: I can't find anywhere online of anyone actually using it.
[01:41:02 CET] <JEEB> if (!po->name && opt[0] == 'n' && opt[1] == 'o') {
[01:41:28 CET] <KombuchaKip> JEEB: lol, yeah there we go.
[01:42:19 CET] <KombuchaKip> JEEB: Ok, so even with find_stream_info disabled, ffprobe still shows DISPOSITION:attached_pic=1.
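For reference, the negated form that ended up working is the plain option name prefixed with "no" (foo.mp3 as in the discussion above):

    ffprobe -nofind_stream_info -show_streams foo.mp3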
[01:43:26 CET] <JEEB> need to get some sleep, but API-wise you're doing everything right with open and read_info
[01:43:37 CET] <JEEB> only things that could be affecting it are some probing options
[01:43:45 CET] <JEEB> like probesize and analyzeduration
[01:44:00 CET] <JEEB> but if you're not setting any of those and ffprobe isn't either
[01:44:02 CET] <KombuchaKip> JEEB: Yeah, but possibly read_info isn't being called on my end. I'll build ffmpeg with debugging symbols, link and debug.
[01:44:03 CET] <JEEB> then nfi
[01:44:24 CET] <KombuchaKip> JEEB: Yeah I don't think I'm fiddling with those.
[01:55:31 CET] <KombuchaKip> JEEB: Thanks for your help. Have a good night. I'll figure it out.
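A minimal C sketch of the call sequence discussed above: open the input (which runs the demuxer's read_header), probe with avformat_find_stream_info as ffprobe does by default, then check each stream's disposition flags. Error handling is trimmed and the file name is a placeholder.

    #include <stdio.h>
    #include <libavformat/avformat.h>

    int main(void)
    {
        AVFormatContext *fmt = NULL;

        /* Opens the container and reads its header. */
        if (avformat_open_input(&fmt, "foo.mp3", NULL, NULL) < 0)
            return 1;

        /* Reads a few packets so codec parameters and dispositions get filled in;
           ffprobe does this by default unless -nofind_stream_info is given. */
        if (avformat_find_stream_info(fmt, NULL) < 0) {
            avformat_close_input(&fmt);
            return 1;
        }

        for (unsigned i = 0; i < fmt->nb_streams; i++) {
            const AVStream *st = fmt->streams[i];
            if (st->disposition & AV_DISPOSITION_ATTACHED_PIC)
                printf("stream %u carries the attached picture (album art)\n", i);
        }

        avformat_close_input(&fmt);
        return 0;
    }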
[02:13:41 CET] <TechnicalMonkey> so can any one help me to figure out what I did wrong in my ffmpeg live stream attempt?
[02:14:03 CET] <TechnicalMonkey> https://pastebin.com/kLBnHhzd
[02:15:16 CET] <TechnicalMonkey> I guess I came at a bad time
[02:15:22 CET] <furq> you definitely don't want -pix_fmt bgr24
[02:16:04 CET] <TechnicalMonkey> oh...? why not....?
[02:16:27 CET] <TechnicalMonkey> I don't mind that the video gets encoded in 420
[02:16:42 CET] <TechnicalMonkey> but I need the first gen input to be RGB
[02:16:56 CET] <furq> then you want -pixel_format bgr24 before the input
[02:17:05 CET] <TechnicalMonkey> and that is the colorspace that it lists
[02:17:07 CET] <furq> you've got it as an output option
[02:17:16 CET] <TechnicalMonkey> oooohh
[02:17:57 CET] <TechnicalMonkey> does it default to 420 if I delete that switch?
[02:18:17 CET] <furq> it'll be yuv444p i think
[02:18:21 CET] <furq> so yeah explicitly set yuv420p
[02:20:08 CET] <furq> also if -s and -r are supposed to apply to the input then move those as well
[02:20:41 CET] <furq> actually nvm you have -video_size already
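Roughly the shape furq is describing: device/input options (pixel format, size, frame rate) go before -i, and the 4:2:0 conversion becomes an output option. A DirectShow capture device is assumed here, and the device name, bitrate and RTMP URL are placeholders since the actual pastebin command is not reproduced in the log:

    ffmpeg -f dshow -pixel_format bgr24 -video_size 1920x1080 -framerate 60 \
           -i video="Capture Device" \
           -c:v libx264 -preset veryfast -b:v 6000k -pix_fmt yuv420p \
           -f flv rtmp://live.twitch.tv/app/STREAM_KEY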
[02:23:34 CET] <TechnicalMonkey> cool
[02:23:48 CET] <TechnicalMonkey> this is the farthest I've ever gotten
[02:24:09 CET] <TechnicalMonkey> but now it just keeps giving me these red messages
[02:24:49 CET] <TechnicalMonkey> rtbufsize too full or near too full
[02:27:10 CET] <furq> yeah that means what you think it does
[02:27:36 CET] <furq> if you can't bump rtbufsize then set a lower video_size or framerate on the input
[02:27:59 CET] <furq> if your capture source supports that
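If the input is a DirectShow device, rtbufsize can be raised as an input option along these lines (the buffer size is an arbitrary example):

    ffmpeg -f dshow -rtbufsize 512M -pixel_format bgr24 -video_size 1280x720 -framerate 30 \
           -i video="Capture Device" -c:v libx264 -pix_fmt yuv420p -f flv rtmp://live.twitch.tv/app/STREAM_KEY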
[02:33:42 CET] <TechnicalMonkey> so I lowered my input to 720p
[02:35:01 CET] <TechnicalMonkey> it lasted longer but still came up with that message about the rtbufsize
[02:35:08 CET] <furq> what cpu is that
[02:35:23 CET] <TechnicalMonkey> i7-3770
[02:35:45 CET] <furq> that should be able to keep up with encoding then
[02:35:56 CET] <furq> obviously if the status line shows the speed dropping below 1x then that's the issue
[02:36:01 CET] <TechnicalMonkey> overall CPU usage didn't go past 40%
[02:36:58 CET] <furq> i take it your internet can keep up with 8mbit upload as well
[02:38:48 CET] <TechnicalMonkey> I have gigabit FiOS
[02:41:05 CET] <TechnicalMonkey> I'm only uploading to the most reliable of ingest servers
[02:41:34 CET] <TechnicalMonkey> every time I try to use NYC servers
[02:41:38 CET] <TechnicalMonkey> I have issues
[02:42:05 CET] <TechnicalMonkey> I can try it
[02:52:05 CET] <TechnicalMonkey> so I no longer get those messages about rtbufsize in red
[02:52:13 CET] <TechnicalMonkey> but now I get yellow messages
[02:53:16 CET] <TechnicalMonkey> wait... if I touch the scroll bar in CMD while streaming it can pause?!?
[02:54:37 CET] <TechnicalMonkey> the yellow message is about something being too large
[02:57:12 CET] <ponyrider> CFS-MP3: that command works for me
[03:02:36 CET] <TechnicalMonkey> furq: thanks for the help
[03:02:53 CET] <TechnicalMonkey> this really opens things up now
[03:03:16 CET] <TechnicalMonkey> both the audio and the video worked without any flaws
[03:03:44 CET] <TechnicalMonkey> I got a whole bunch of messages from ffmpeg screaming at me
[03:03:55 CET] <TechnicalMonkey> but the audio and the video worked
[03:05:51 CET] <TechnicalMonkey> I also tested in 1080p as well
[03:05:57 CET] <TechnicalMonkey> both at 60fps
[03:06:04 CET] <TechnicalMonkey> and it worked really well
[06:16:05 CET] <derlg> Hey all, I was hoping someone here might be able to point me in the direction of where I might find some answers as to ffmpeg usage of a specific command...
[06:16:39 CET] <derlg> I was wondering how (if at all possible) to save a video made using libcaca?
[06:17:13 CET] <derlg> secondly, if i wanted to add audio to that video, would i re-run ffmpeg, appending audio to it?
[06:18:03 CET] <derlg> my initial command ive been using is "ffmpeg -i input.mp4 -pix-fmt rgb24 -f caca out.mp4"
[06:18:33 CET] <derlg> i was hoping to get something similar to "mpv video.mp4 -vo caca" but be able to save it as a file
[06:25:36 CET] <furq> derlg: you can't do that in a single command
[06:25:44 CET] <furq> caca is an output device that renders directly to a window or to stdout
[06:25:56 CET] <furq> so you'd have to capture that window or something
[06:32:21 CET] <derlg> furq: thanks :) would you know how I could go about using ffmpeg to capture the output from that window? i vaguely recall seeing something about capturing streams
[06:33:06 CET] <furq> https://www.ffmpeg.org/ffmpeg-devices.html#x11grab
[06:35:39 CET] <derlg> thanks furq :)
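A minimal x11grab sketch along the lines of that link, assuming display :0.0 and a 1280x720 region at the top-left corner while the caca output plays inside it:

    ffmpeg -f x11grab -framerate 30 -video_size 1280x720 -i :0.0+0,0 \
           -c:v libx264 -pix_fmt yuv420p captured.mp4

The original audio could then be muxed back in afterwards, e.g. ffmpeg -i captured.mp4 -i input.mp4 -map 0:v -map 1:a -c copy withaudio.mp4.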
[07:07:41 CET] <KombuchaKip> JEEB: So it turns out that the disposition bits are being set, but something is clobbering them between avformat_open_input() and my reading them later. I'll experiment with a hardware breakpoint tomorrow.
[08:52:18 CET] <BeerLover> How can I pipe the output of ffmpeg ?
[08:53:43 CET] <BeerLover> This creates multiple segment files and an index.m3u8 file "ffmpeg -re -y -i song.mp4 -profile:v baseline -b:a 320k -hls_time 10 -hls_allow_cache 1 -level 4.0 -hls_segment_filename segment%d.ts -f hls index.m3u8". I want to pipe those files to "aws s3 cp <file> s3://<bucket>/<file>"
[09:04:57 CET] <furq> BeerLover: that doesn't make sense for a few reasons with the hls muxer
[09:05:16 CET] <furq> probably just use inotifywait or something
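One possible shape of the inotifywait approach: upload each segment (and the refreshed playlist) to S3 whenever ffmpeg finishes writing it. The watch directory and bucket name are placeholders:

    #!/bin/sh
    # run alongside the ffmpeg hls command, watching its output directory
    inotifywait -m -e close_write --format '%w%f' /var/hls |
    while read -r f; do
        case "$f" in
            *.ts|*.m3u8) aws s3 cp "$f" "s3://my-bucket/$(basename "$f")" ;;
        esac
    done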
[09:24:55 CET] <BeerLover> What exactly does -re do? I don't understand from the man pages
[09:25:04 CET] <BeerLover> it slows the transcoding
[09:27:43 CET] <JEEB> BeerLover: basically looks at the input timestamps and sleep()s the difference
[09:27:50 CET] <JEEB> that way attempting to simulate a live input
[09:28:02 CET] <JEEB> that can also fail in hilarious ways so unless you really need it, don't use it
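For illustration, -re is mainly useful when feeding a file to a live sink at its natural speed, for example pushing to an RTMP ingest (URL and key below are placeholders); for a plain file-to-HLS transcode written to disk it can simply be dropped:

    ffmpeg -re -i song.mp4 -c:v libx264 -c:a aac -f flv rtmp://example.com/live/STREAM_KEY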
[10:10:42 CET] <Rhada> Hello, I'm using ffmpeg to create a DASH stream from an HLS live source. Unfortunately, if the source stream is restarted, the media sequence is reset and ffmpeg gets stuck complaining about it ([hls @ 0x5588820eaf40] Media sequence changed unexpectedly: 21 -> 0). I can't manage to make my command either exit on this input error or adopt the new media sequence.
[10:13:16 CET] <JEEB> yea it warns about that but then does nothing :P
[10:13:31 CET] <JEEB> it's technically invalid I think according to the HLS RFC
[10:13:41 CET] <JEEB> but still what the HLS reader does is not really optimal
[10:13:47 CET] <JEEB> since it will start waiting for the next time it gets 22
[10:13:52 CET] <JEEB> if that ever comes, that is
[10:13:57 CET] <Rhada> yes but after this it keeps looping over the same playlist without doing any encoding
[10:13:57 CET] <JEEB> it gets even more fun with larger values
[10:14:05 CET] <JEEB> yes
[10:14:15 CET] <JEEB> since it's waiting until it gets back to "sync"
[10:14:19 CET] <Rhada> yes you get the point
[10:14:46 CET] <JEEB> basically the hls module needs a rework but I think it's unlikely to happen unless someone sponsors it (or someone cares enough)
[10:14:49 CET] <Rhada> if needed, the command i use : ffmpeg -re -i http://mydomain.source/playlist.m3u8 -c:a copy -c:v copy -vtag avc1 -atag mp4a -bsf:a aac_adtstoasc -map 0:p:5 -map 0:p:4 -map 0:p:3 -map 0:p:2 -b:v:0 2500k -b:v:1 1500k -b:v:2 1000k -b:v:3 500k -streaming 1 -use_template 1 -index_correction 1 -window_size 30 -extra_window_size 30 -seg_duration 2 -remove_at_exit 1 -adaptation_sets "id=0,streams=v id=1,streams=a" -f dash /tmp/index.mpd
[10:15:09 CET] <JEEB> this specific bug could be improved by just making the HLS part reset sync
[10:15:22 CET] <JEEB> as in, re-init playback and buffer those three segments again
[10:15:30 CET] <JEEB> but in general there's various issues with the HLS reader
[10:16:53 CET] <Rhada> yes, i was searching for a magic option doing this :(
[11:06:01 CET] <Spring> Shot in the dark but thought perhaps someone here may know. I used OBS with x264 as the encoder and it captured the contents of a white (#ffffff) desktop screen as a darker #fdfdfd instead. Anyone know if that's something on x264's side?
[11:07:00 CET] <Spring> it's an extremely subtle difference but will mean that I won't be able to use a blend mode in a video editor as-is and will require curve adjustment to compensate, unfortunately.
[11:08:33 CET] <Mavrik> Spring: side effect of colorspace conversion perhaps?
[11:08:44 CET] <Mavrik> Did you try capturing into RGB instead of YUV if OBS supports that?
[11:09:37 CET] <TheAMM> (It does)
[11:09:49 CET] <Spring> Mavrik, I also just tested an ffmpeg conversion to VP9 and the same thing occurs
[11:10:06 CET] <Mavrik> Yes, pretty much all video formats will do YUV420 color conversion.
[11:10:24 CET] <Mavrik> So changing codecs up and down won't make a difference :P
[11:11:09 CET] <Spring> TheAMM, is this an option of a later version of OBS?
[11:11:37 CET] <TheAMM> It is in whatever version I have installed, which is months old by now
[11:12:00 CET] <TheAMM> It's in the advanced tab iirc, next to the renderer
[11:12:32 CET] <Mavrik> Apparently, https://4.bp.blogspot.com/-YEE1aCJRvxE/W2rceTE8EWI/AAAAAAAAAJ4/EoDEli6O9H43qvgQkOT9cbIq0IhKgMbVgCLcBGAs/s1600/obs_settings_1.png
[11:13:03 CET] <Mavrik> Note that a lot of software won't read RGB encoded video, but you can use it to check if that's an issue for you.
[11:13:41 CET] <Spring> so YUV420 can't encode to pure white or is it the conversion incorrectly mapping the color/shade?
[11:14:19 CET] <Spring> *doesn't support (rather than 'can't encode')
[11:16:00 CET] <Mavrik> Might be it's just the mapping
[11:16:12 CET] <Mavrik> There's an SO article where someone has the same issue with BMPs: https://video.stackexchange.com/questions/19944/ffmpeg-bmp-to-yuv-x264-color-shift
[11:16:19 CET] <Mavrik> And it seems forcing the format before conversion fixed it
[11:17:22 CET] <Mavrik> Although in my experience no tweaking will fix the red color shift
[11:17:27 CET] <Mavrik> Except using RGB or YUV444
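If compatibility can be sacrificed, the RGB/4:4:4 route Mavrik mentions might look like the following when capturing a desktop directly with ffmpeg (gdigrab is assumed here only because this is Windows; OBS has its own capture setting, as discussed above):

    ffmpeg -f gdigrab -framerate 30 -i desktop -c:v libx264 -pix_fmt yuv444p -crf 16 out_444.mkv
    ffmpeg -f gdigrab -framerate 30 -i desktop -c:v libx264rgb -crf 16 out_rgb.mkv

Many editors and players will refuse or mishandle both, so this is mostly a diagnostic to see whether the white shift disappears once chroma subsampling is out of the picture.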
[11:23:15 CET] <Spring> hmm, OBS states I'm up to date. Apparently OBS Studio is the successor that TheAMM must be referring to.
[11:23:32 CET] <TheAMM> Oh, yes
[11:23:36 CET] <TheAMM> Studio happened years ago?
[11:24:13 CET] <BeerLover> JEEB I want to transcode mp4 to HLS, so do I need it?
[11:24:33 CET] <BeerLover> It's just for simulation, won't affect transcoding in any way right?
[11:24:57 CET] <Spring> TheAMM, quick question, do you know if Studio installs on Windows to the same location?
[11:25:11 CET] <TheAMM> I don't
[11:26:42 CET] <Spring> it's a pity I spent hours capturing this only to discover in the end the mapping issue. RIP.
[11:27:26 CET] <Spring> actually, is there any way after the fact I could use ffmpeg to map that particular color to pure white?
[11:28:59 CET] <TheAMM> You can adjust the ranges with different filters
[11:29:38 CET] <TheAMM> Abusing format to convert between the yuv ranges, using some other filter to do the same
[11:30:19 CET] <TheAMM> Can't remember the invocations now but manual has the parameters
[11:31:29 CET] <durandal_1707> lutyuv
[11:33:17 CET] <Spring> would this require conversion to YUV444 btw?
[11:33:29 CET] <Spring> or does YUV420 support pure white?
[11:33:29 CET] <durandal_1707> no
[11:34:06 CET] <durandal_1707> or lutrgb if your capture is RGB
[11:36:46 CET] <Spring> Oh boy, the methodology for the lut filter is a bit over my head. I'm only familiar with some file-based LUTs. Any tips on what syntax I'd need for this purpose?
[11:41:20 CET] <TheAMM> I was supposed to calibrate my capture card and make the output proper with a LUT but never bothered
[11:41:43 CET] <TheAMM> I have some format filter in there to make the result look "eh, ok"
[11:42:00 CET] <durandal_1707> lutrgb=r='if(eq(val,0xfd),0xff,val)':g='if(eq(val,0xfd),0xff,val)':b='if(eq(val,0xfd),0xff,val)'
[11:42:57 CET] <Spring> durandal_1707, thank you. If the input was YUV420 would I be able to just switch the 'lutrgb' to 'lutyuv'?
[11:43:04 CET] <durandal_1707> nope
[11:43:19 CET] <TheAMM> Doesn't that just map FD to FF?
[11:43:33 CET] <durandal_1707> yes
[11:43:51 CET] <TheAMM> Well it's going to make gradients look weird
[11:44:54 CET] <durandal_1707> it could map also FE to FF
[11:45:10 CET] <Spring> I think it might be alright in my case since I don't mind so much about the transition but rather surrounding white which I need to use a Multiply blend mode with.
[11:45:39 CET] <Spring> (hence it needs to be #fff for that, which is just a solid fill)
[11:45:52 CET] <TheAMM> I meant that there's going to be a sharp jump to pure white instead of adjusting the entire (limited) range of colors to the full scale
[11:46:07 CET] <TheAMM> If it's good enough it's good enough
[11:47:02 CET] <Spring> so for that above command the process would be conversion to RGB w/ lut filter then transcoding again to YUV I suppose.
[11:50:28 CET] <durandal_1707> Spring: if your capture input is YUV you will need to use a different lutyuv incarnation; no need to do another colorspace conversion, which is not lossless
[11:51:32 CET] <durandal_1707> but if you need to process blending in RGB while your input is YUV, then the above command should be applied before blending
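Putting durandal_1707's filter into a complete command might look like this, following the RGB round trip Spring describes above (convert to RGB, apply the LUT, go back to 4:2:0 for the editor); the file names and CRF value are placeholders:

    ffmpeg -i capture.mp4 \
           -vf "format=rgb24,lutrgb=r='if(eq(val,0xfd),0xff,val)':g='if(eq(val,0xfd),0xff,val)':b='if(eq(val,0xfd),0xff,val)',format=yuv420p" \
           -c:v libx264 -crf 10 fixed.mp4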
[12:01:20 CET] <Spring> it's more that I'm not familiar with what the lutyuv values should be :p Having a bit of trouble though with converting to RGB using -pix_fmt, as it comes out looking all distorted. Is there something I'm missing?
[12:18:05 CET] <Spring> Hmm, it seems I need some script for the conversion? Wish I could find a Python script to run or some result that isn't ambiguous code. It's actually for a video for a funeral in just over a day so weighing whether I should re-capture it all in the next few hours or try to get this post-processing working.
[12:19:39 CET] <Spring> *result online, that is. Maybe I should try Google rather than DDG.
[12:24:30 CET] <mifritscher> JEEB: my wild guess was right - it crashes in mux.c at st->internal->priv_pts->val = pkt->dts; (st being an AVStream)
[12:24:47 CET] <mifritscher> st->internal is ok, so priv_pts seems to be NULL
[12:40:50 CET] <Spring> Oh my goodness... Turns out that after all that the video editor displays the white as #fff... I actually found an easier YUV->RGB conversion in the meantime though: converting to PNG frames. But unnecessary now.
[12:41:01 CET] <Spring> sorry for all the wall of texts btw
[14:01:41 CET] <ocx32> hi all, is it possible to  use ffmpeg to extract part of a live stream as an mp4 file? for example i am watching now a livestream and want to extract the last 30secs of this stream?
[14:06:19 CET] <DHE> most likely, as long as you can play the stream with ffmpeg
[14:06:29 CET] <DHE> or ffplay I guess
[14:10:17 CET] <ocx32> DHE i was thinking of increasing the hls segment length to say 1 minute, and then when i want to extract, i extract the current segment and the previous segment and convert those hls segments to an mp4?
[14:10:22 CET] <ocx32> is this the way to do it ?
[14:11:15 CET] <DHE> umm.. if you're your own livestream source, I guess that would work. I was assuming ffmpeg would join the livestream as an ordinary client and do what you want
[14:11:40 CET] <DHE> most HLS players intentionally start back ~3 segments before playing, so if you have 1 minute segments that's 210 seconds of lag (on average) to players
[14:11:42 CET] <ocx32> i am using my own livestream source which is mainly writing to disk hls segments
[14:12:06 CET] <ocx32> but should i add keyframes or so to join the hls segments into an mp4?
[14:12:09 CET] <ocx32> technically speaking
[14:12:31 CET] <DHE> the first frame of any segment is a keyframe. that's a requirement and enforced by the hls muxer
[14:12:51 CET] Action: DHE uses 6 second segments
[14:13:42 CET] <ocx32> ok so for this last 30 seconds extraction of a livestream should i just take the last 2 segments to be on the safe side, convert into mp4 and cut from the beginning till i have the last 30 sec? is this optimal?
[14:21:25 CET] <ocx32> DHE can i do something like ffmpeg -i playlsit.m3u8 and pick the last 2 segments of the playlist ?
[14:21:34 CET] <ocx32> any options for that?
[14:24:17 CET] <DHE> ffmpeg -h demuxer=hls
[14:25:44 CET] <ocx32> DHE  -live_start_index  <int>        .D...... segment index to start live streams at (negative values are from the end) (from INT_MIN to INT_MAX) (default -3)
[14:25:53 CET] <ocx32> so i should use -2 in my case? on that playlist ?
[14:27:19 CET] <DHE> sounds about right
[16:02:16 CET] <ocx32> DHE ffmpeg never stops when -2 is supplied on a LIVE stream as new segments are being generated always
[16:13:33 CET] <DHE> ocx32: then specify -t 30
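Put together, the extraction DHE describes could look like this; -live_start_index is a demuxer (input) option, so it goes before -i, and the result is only roughly the last 30 seconds because segments start on keyframes. The playlist URL is a placeholder:

    ffmpeg -live_start_index -2 -i http://example.com/playlist.m3u8 -t 30 -c copy clip.mp4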
[16:19:08 CET] <ddubya> I have an HDV source rendered from Final Cut, interlaced 29.97. When I deinterlace some frames go backwards, what could cause it?
[16:19:59 CET] <ddubya> it seems to be when a time dilation was applied in Final Cut, otherwise its ok
[16:22:32 CET] <ocx32> DHE ok now it stops after 30 seconds, i am using -i playlist.m3u8 -live_start_index -2 -t 30 -acodec copy -vcodec copy out.mp4 , for some reason the video out.mp4 doesn't play, i only see the first frame of the video
[16:36:50 CET] <dont-panic> maybe this is a dumb question, but I use ffmpeg to stream to twitch.tv sometimes and I figured out how to use it to send a stream through my rtmp server.  I'm curious if there's a way I can stream some low latency audio and video in a way where I can host it on my linux computer.  I'm trying to build a webcam baby monitor that I can put on a raspberry pi and make it portable and accessible from my phone
[16:36:56 CET] <dont-panic> using the browser and stuff I already have
[16:37:45 CET] <dont-panic> I'm currently using a program called motion in ubuntu 19.04 I think and that's very good for realtime video, but it doesn't include audio as far as I can tell
[16:44:48 CET] <ddubya> dont-panic, I would try point-to-point streaming, https://trac.ffmpeg.org/wiki/StreamingGuide has examples
[16:48:35 CET] <dont-panic> ddubya: I saw some stuff about that, but they all mention mplayer as the client for viewing the stream... is there a way to use the browser to view the stream like a live youtube/facebook video or something?
[16:48:49 CET] <dont-panic> or is that a javascript/webdev question?
[17:00:25 CET] <ddubya> dont-panic for the browser to view the stream I'm guessing none of those protocols will work. I've never tried though
[17:55:37 CET] <fling> Can't I losslessly rotate mjpeg?
[17:56:07 CET] <fling> Or should I just rotate via container metadata?
[18:56:48 CET] <ocx32> DHE still on it... any idea?
[19:28:07 CET] <kepstin> fling: in theory it should be possible to losslessly rotate mjpeg like tools can do with jpeg, in practice... patches to add a bsf to do that in ffmpeg might be accepted?
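A sketch of the container-metadata route, remuxing without touching the JPEG data and tagging a 90-degree rotation; the file names are placeholders, and how faithfully players honour the rotate tag varies:

    ffmpeg -i input.avi -c copy -metadata:s:v:0 rotate=90 rotated.mov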
[19:46:43 CET] <phobosoph> hi
[19:46:58 CET] <phobosoph> anyone here? :)
[19:46:59 CET] <phobosoph> :)
[19:50:41 CET] <another> no
[19:53:48 CET] <phobosoph> :)
[19:53:57 CET] <phobosoph> so ffmpeg streams to Youtube Live and it works nicely for some hours
[19:54:00 CET] <phobosoph> speed=1.x, also good
[19:54:18 CET] <phobosoph> but then suddenly Youtube Live stops recognizing the stream and nothing can be seen on YouTube
[19:54:24 CET] <phobosoph> in the meanwhile ffmpeg is still happily streaming on
[19:54:32 CET] <phobosoph> but the speed dropped down from 1.x to much lower
[19:54:34 CET] <phobosoph> and keeps dropping
[19:54:42 CET] <phobosoph> and ffmpeg doesn't notice that something is wrong and exit with an error
[19:54:49 CET] <phobosoph> it just streams on and on and youtube finished the stream
[19:54:55 CET] <phobosoph> restarting ffmpeg fixes the issue
[19:55:09 CET] <phobosoph> so either ffmpeg should terminate itself when it detects an issue with the connection to the youtube rtmp server
[19:55:35 CET] <phobosoph> or better, the underlying issue behind the dropping speed stat and youtube no longer recognizing the stream gets found and fixed
[19:55:37 CET] <phobosoph> any ideas? :/
[19:56:16 CET] <fling> paste your commandline
[19:56:23 CET] <fling> and full output
[19:56:30 CET] <fling> and wait for long time :>
[20:05:36 CET] <phobosoph> pastie.org seems to be not working currently, I get a nginx welcome page
[20:07:59 CET] <another> there are dozens of pastebin sites. just choose one
[20:19:37 CET] <phobosoph> sure
[20:20:29 CET] <phobosoph> here is the command: https://pastebin.com/u0z0mmZ9
[20:20:46 CET] <phobosoph> fling, another: ^
[20:21:10 CET] <phobosoph> when I check the ffmpeg logs after a few hours, once youtube has suddenly stopped showing the stream feed, I see:
[20:21:14 CET] <phobosoph> frame=20619 fps=1.9 q=-1.0 size=  516529kB time=00:11:27.10 bitrate=6158.3kbits/s speed=0.0649x
[20:21:23 CET] <phobosoph> before that it was fps=30 and speed=1.x the whole time
[20:21:41 CET] <phobosoph> when I restart ffmpeg (kill + start again), youtube picks the stream up again instantly
[20:21:57 CET] <phobosoph> sure, I could make some contraption that checks for the fps + speed and auto-kills ffmpeg
[20:22:10 CET] <phobosoph> but it would be much better to find the reason - or let ffmpeg terminate itself with error when this anomaly happens
[20:24:44 CET] <filpAM> http://filpam.com/pub/out.mp4
[20:25:12 CET] <filpAM> this plays with ffplay but not on my android phone or discord
[20:25:48 CET] <another> yuv444
[20:25:56 CET] <filpAM> ffmpeg -f rawvideo -s 128x128 -r 60 -pix_fmt rgb24 -i ./tmp.rgb8 -c:v libx264 -vf scale=600:400:flags=neighbor out.mp4
[20:25:57 CET] <another> probably not supported
[20:26:27 CET] <filpAM> this is the command line I used to encode the video from rgb24 raw frames
[20:27:47 CET] <another> add '-pix_fmt yuv420p' as an output option
[20:28:14 CET] <filpAM> ok
[20:30:54 CET] <filpAM> yep it works
[20:31:07 CET] <filpAM> http://filpam.com/pub/out.mp4 updated
[20:32:16 CET] <filpAM> thanks
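For completeness, the corrected command with the -pix_fmt yuv420p output option suggested above would be roughly:

    ffmpeg -f rawvideo -s 128x128 -r 60 -pix_fmt rgb24 -i ./tmp.rgb8 \
           -c:v libx264 -vf scale=600:400:flags=neighbor -pix_fmt yuv420p out.mp4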
[20:45:31 CET] <jemius> There are so many video denoise filters in ffmpeg... how do I know which one to use for my scenario?
[20:45:57 CET] <JEEB> see their age, and then make a comparison or so?
[20:46:14 CET] <jemius> JEEB, newer = better ?
[20:47:35 CET] <fling> is hqdn3d the best?
[20:47:36 CET] <JEEB> not necessarily, but you get an idea of in which order things were added
[21:01:25 CET] <jemius> Additionally, they all have a ton of parameters :(
[21:47:04 CET] <TechnicalMonkey> is q controlled by crf?
[21:47:39 CET] <TechnicalMonkey> I keep getting yellow messages when I stream
[21:48:43 CET] <TechnicalMonkey> Past Duration 0.xxxxx Too Large
[21:49:30 CET] <saml> where do you stream?
[21:49:47 CET] <TechnicalMonkey> to....? Twitch
[21:51:57 CET] <saml> what's full command you use to stream?
[21:53:36 CET] <furq> TechnicalMonkey: you can usually ignore that warning
[21:53:59 CET] <furq> unless you're getting dropped frames or desyncs or something
[22:02:09 CET] <kepstin> that message does often indicate dropped frames - the usual cause is that there's a filter which has set an output framerate (indicating CFR) but actually outputs VFR or at a framerate higher than indicated.
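If that diagnosis applies, a common mitigation is to pin the filter-chain output to a fixed rate with the fps filter (or an explicit output -r); this is a generic sketch, not the questioner's actual command:

    ffmpeg -i input.mkv -vf "fps=30,format=yuv420p" -c:v libx264 -c:a copy output.mkv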
[22:16:42 CET] <phobosoph> :(  nobody
[22:16:43 CET] <phobosoph> damn
[22:31:16 CET] <CountPillow> how can I tell whether my ffmpeg has been built with ligma?
[22:49:59 CET] <kepstin> CountPillow: what's ligma?
[22:50:16 CET] Action: kepstin doesn't see anything by that name in the ffmpeg source tree
[22:50:17 CET] <FooNess> (Don't fall for it.)
[22:50:40 CET] <CountPillow> LIGMA NUTS LMAO GOT EM
[22:50:44 CET] <FooNess> ...
[22:50:48 CET] <FooNess> What are you, 12?
[00:00:00 CET] --- Fri Nov  1 2019

