[Ffmpeg-devel-irc] ffmpeg.log.20180622

burek burek021 at gmail.com
Sat Jun 23 03:05:01 EEST 2018


[00:28:29 CEST] <boberz> Hello. I'm trying to connect an rtsp camera source to youtube live. ffmpeg seems to be running, but I'm not getting any video in youtube. In the console it's giving me what I think might be an error "Invalid UE golomb code". Any suggestions?
[04:41:45 CEST] <teratorn> trim filter is pretty fast right? like, way faster than anything that munges pixel data?
[04:41:57 CEST] <teratorn> is there any way to profile an ffmpeg filter-graph execution?
[07:05:00 CEST] <Hackerpcs> while trying to do a simple trim, I want to cut from a key frame; using -ss before -i does start at a key frame, but then I can't specify an end time (as opposed to an output duration) with -t or -to
[07:06:12 CEST] <Hackerpcs> so cutting from 00:27 to 00:54 with -ss 00:27 before -i and -to/-t 00:54 after -i results in a 54-second output file, not the desired 27-second one
[07:07:16 CEST] <Hackerpcs> putting -ss 00:27 after -i does result in a 27-second file, but it cuts without regard to key frames, so it may produce non-playable video at the start
[07:09:55 CEST] <Hackerpcs> i mean not completely non-playable, just the stretch before the first key frame, which may be a few seconds
[07:10:07 CEST] <Hackerpcs> during which only the audio plays
[07:14:51 CEST] <Hackerpcs> also the -copyts option, as suggested at https://trac.ffmpeg.org/wiki/Seeking#Cuttingsmallsections, results in wrong timestamps
[07:29:10 CEST] <Hackerpcs> so, summarizing: I want to cut with the faster option (-ss before the input) but specify the end time relative to the input, rather than have -to act as a duration
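A minimal sketch of the usual workaround (filenames hypothetical, stream copy assumed): with -ss placed before -i, output timestamps restart at zero, so the end point has to be expressed as a duration for -t, here 00:54 - 00:27 = 27 seconds:

  ffmpeg -ss 00:00:27 -i input.mp4 -t 27 -c copy cut.mp4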
[09:59:46 CEST] <dragmore88> Hi, I'm looking at encoding long GOPs (4 sec) for a little project here using ffmpeg/x264, and I also want to insert I-frames when needed during hard scene changes. "rc-lookahead=48:keyint=96:min-keyint=48" gives me what I need at 24fps, but when I double the values I get wild results.. any tips?
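For reference, a hedged sketch of the same 24fps setup using ffmpeg's wrapper options instead of a raw x264 string (x264's scene-cut detection, on by default, is what inserts the extra I-frames; input/output names are hypothetical):

  ffmpeg -i input.mov -c:v libx264 -g 96 -keyint_min 48 \
         -x264-params rc-lookahead=48 output.mp4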
[11:36:43 CEST] <sk_tandt> Greetings! I'm trying to build a video out of some images, but for some reason I see only a black screen for one frame interval (e.g. 1.27s) and then what I assume is the first pic of the glob for the rest of the video
[11:36:56 CEST] <sk_tandt> (Which does, however, have the correct duration)
[11:56:31 CEST] <zerodefect> As part of the build step, is there any way to export the ff_draw family of functions from libavfilter?
[13:20:10 CEST] <barhom> Is "-s" a shortcut to "-vf scale=" ?
[13:20:17 CEST] <barhom> or what is the difference between the two?
[13:20:52 CEST] <barhom> Because I currently use "-s 720x576" to scale something. I don't use "-vf scale=720:576"
[13:26:52 CEST] <ariyasu_> you can use either
[13:53:25 CEST] <paulk-gagarine> hi there
[13:53:48 CEST] <paulk-gagarine> jkqxz, I'm wondering, what is the purpose of "av_opt_set_int(decoder_ctx, "refcounted_frames", 1, 0);" exactly?
[13:53:55 CEST] <paulk-gagarine> it's in the hw_decode example
[13:56:16 CEST] <barhom> ariyasu_: sure, but what is the difference?
[14:11:07 CEST] <kepstin> barhom: the -s option basically just ends up sticking a scale filter on the end of the filter chain
[14:11:21 CEST] <barhom> so its a shortcut
[14:11:33 CEST] <barhom> kinda
[14:11:58 CEST] <kepstin> pretty much, yeah.
[14:12:37 CEST] <kepstin> if you want the scale to happen at a particular spot in your filter chain, use the filter.
[14:18:52 CEST] <barhom> kepstin: I'm switching to the filter now because "-s" doesn't support keeping the aspect ratio
[14:19:03 CEST] <barhom> -1:720 is an invalid frame size
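The filter form barhom is switching to might look like this (a sketch; -2 instead of -1 keeps the computed width divisible by 2, which yuv420p-based encoders require):

  ffmpeg -i input.mpg -vf scale=-2:720 output.mp4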
[15:05:47 CEST] <sk_tandt> Mh, I don't get why ffmpeg is generating a file which only shows the first of the images
[15:06:00 CEST] <sk_tandt> I've run the command with -loglevel debug, and it is opening every file
[17:34:29 CEST] <relaxed> sk_tandt: what's your command?
[17:36:06 CEST] <sk_tandt> The glob one was ffmpeg -framerate $(echo "$(find . | wc -l)/180" | bc -l) -pattern_type glob -i '*.jpg' -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p ../video.mp4
[17:36:07 CEST] <sk_tandt> but I tried a concat version too
[17:37:25 CEST] <kepstin> sk_tandt: and the output from the encode command? (probably with verbose loglevel is best, debug's kinda noisy)
[17:41:19 CEST] <sk_tandt> Just a second, I'll run it at verbose
[17:42:02 CEST] <sk_tandt> Where should I paste it?
[17:42:43 CEST] <kepstin> pastebin site of your choice, github gist, whatever.
[17:48:16 CEST] <sk_tandt> https://ghostbin.com/paste/f3g8q
[17:51:40 CEST] <kepstin> that's not from the command you said - this log is from something using concat input
[17:51:49 CEST] <kepstin> but that said, it looks like it's all working fine...
[17:57:27 CEST] <sk_tandt> Oops, sorry, I ran the wrong version! Here's the glob one https://ghostbin.com/paste/7a5rh
[17:59:07 CEST] <kepstin> right, and that also seems fine. are you sure you're playing back the correct video, and that all of the input .jpg files are actually distinct? :)
[17:59:23 CEST] <sk_tandt> Yep, unfortunately
[17:59:44 CEST] <sk_tandt> I'll try playing the video on another pc
[18:00:08 CEST] <kepstin> it is actually inputting and encoding all the input frames, otherwise it would have stopped after encoding 1 frame
[18:01:18 CEST] <kepstin> 1.36fps is kinda low, but i wouldn't expect it to be an issue with most players
[18:44:31 CEST] <sk_tandt> And in the end, it was the playing PC
[18:44:36 CEST] <sk_tandt> Thanks for the rubber ducking!
[18:54:36 CEST] <saml> is there example of how to read two videos and feed them to filter graph that starts with overlay or scale2ref that takes two inputs?
[18:54:50 CEST] <saml> using C programming interface
[18:56:24 CEST] <JEEB> you have two lavf and lavc contexts and try to somehow sync their decoded sample PTS (if their clocks are already sync'd, great - otherwise you need to think of something)
[18:56:49 CEST] <JEEB> then for lavfi you can either use the examples that parse a string, or do it the long way where you init all filters separately by their IDs etc
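A condensed C sketch of that string-parsing route, assuming both inputs are already decoded and share a pixel format and time base (all names are illustrative, error/NULL checks abbreviated):

  #include <stdio.h>
  #include <libavfilter/avfilter.h>
  #include <libavfilter/buffersrc.h>
  #include <libavfilter/buffersink.h>
  #include <libavutil/mem.h>

  /* Builds [vid1][vid2]scale2ref[main][ref];[ref][main]overlay[vo]. */
  static int build_two_input_graph(AVFilterGraph **graph_out,
                                   AVFilterContext **src1, AVFilterContext **src2,
                                   AVFilterContext **sink,
                                   int w1, int h1, int w2, int h2,
                                   enum AVPixelFormat fmt, AVRational tb)
  {
      AVFilterGraph *graph = avfilter_graph_alloc();
      AVFilterInOut *out1 = avfilter_inout_alloc();
      AVFilterInOut *out2 = avfilter_inout_alloc();
      AVFilterInOut *in   = avfilter_inout_alloc();
      char args[128];
      int ret;

      out1->next = out2; /* chain early so one free releases both */

      /* One "buffer" source per input; parameters must match the decoder output. */
      snprintf(args, sizeof(args),
               "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=1/1",
               w1, h1, fmt, tb.num, tb.den);
      ret = avfilter_graph_create_filter(src1, avfilter_get_by_name("buffer"),
                                         "src1", args, NULL, graph);
      if (ret < 0) goto fail;
      snprintf(args, sizeof(args),
               "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=1/1",
               w2, h2, fmt, tb.num, tb.den);
      ret = avfilter_graph_create_filter(src2, avfilter_get_by_name("buffer"),
                                         "src2", args, NULL, graph);
      if (ret < 0) goto fail;
      ret = avfilter_graph_create_filter(sink, avfilter_get_by_name("buffersink"),
                                         "sink", NULL, NULL, graph);
      if (ret < 0) goto fail;

      /* The parsed string has two open inputs (vid1, vid2) fed by our sources
       * and one open output (vo) consumed by the sink. */
      out1->name = av_strdup("vid1"); out1->filter_ctx = *src1; out1->pad_idx = 0;
      out2->name = av_strdup("vid2"); out2->filter_ctx = *src2; out2->pad_idx = 0;
      in->name   = av_strdup("vo");   in->filter_ctx   = *sink; in->pad_idx   = 0;

      ret = avfilter_graph_parse_ptr(graph,
                "[vid1][vid2]scale2ref[main][ref];[ref][main]overlay[vo]",
                &in, &out1, NULL);
      if (ret < 0) goto fail;
      ret = avfilter_graph_config(graph, NULL);
      if (ret < 0) goto fail;
      avfilter_inout_free(&in);
      avfilter_inout_free(&out1);
      *graph_out = graph;
      return 0;
  fail:
      avfilter_inout_free(&in);
      avfilter_inout_free(&out1);
      avfilter_graph_free(&graph);
      return ret;
  }

Frames then go in with av_buffersrc_add_frame() on each source and come out via av_buffersink_get_frame(), with the PTS syncing JEEB mentions left to the caller.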
[18:57:11 CEST] <JEEB> site:ffmpeg.org doxygen trunk KEYWORD is also recommended
[18:57:28 CEST] <saml> JEEB, oh you're in mpv as well
[18:57:47 CEST] <saml> thanks
[18:58:20 CEST] <JEEB> although if you just want to have two videos that are assumed to be synchronized,  you might just want to look into vapoursynth and ffms2
[18:58:24 CEST] <JEEB> or just ffms2
[18:58:35 CEST] <JEEB> ffms2 is a frame-exact API on top of FFmpeg APIs
[18:58:43 CEST] <JEEB> as in, it creates an index
[18:59:01 CEST] <JEEB> and then it's often used as a source filter in things like avisynth or vapoursynth
[18:59:45 CEST] <saml> hrm, my goal is to send a command to a filter (crop) inside the filter graph
[19:01:18 CEST] <JEEB> so this is in the context of what exactly? since you usually then want to render that thing etc
[19:01:35 CEST] <saml> https://people.xiph.org/~xiphmont/demo/daala/update1-tool2b.shtml  i'm trying to build this for videos
[19:02:58 CEST] <saml> i started with mpv scripting, but found no way to send a filter command to a filter inside the graph, or to translate the filter graph into plain --vf lists (the problem is that the first filter is scale2ref, and mpv does not accept a filter that takes two inputs in --vf)
[19:03:10 CEST] <saml> so, i'm headed to write my own video player
[19:03:40 CEST] <saml> i'm referencing https://github.com/pockethook/player  and ffmpeg doc/example
[19:04:08 CEST] <saml> and ffplay code. but they are all geared towards playing a single video
[19:04:31 CEST] <JEEB> you really don't want to create a Yet Another Player
[19:04:44 CEST] <JEEB> also pretty sure mpv had stuff to create a "complex" filter chain with lavfi
[19:05:12 CEST] <saml> JEEB, yeah,  there is --lavfi-complex.  But I cannot send filter command to a filter within the graph
[19:05:33 CEST] <saml> and re-setting filter graph is too laggy
[19:05:50 CEST] <JEEB> I'd say that probably is less bad of a problem to solve if you can at all send commands to lavfi filter chains
[19:06:12 CEST] <JEEB> and sometimes lavfi filter chains just have to be reset, yes. not sure if you are hitting that case, but sometimes it just needs that
[19:06:31 CEST] <saml> --lavfi-complex '[vid1][vid2]scale2ref[main][ref]; [main]crop@xcrop=w=200:x=0:y=0:keep_aspect=1[cropped]; [ref][cropped]overlay[vo]'
[19:06:56 CEST] <saml> that's the graph, and I tried to send a filter command to the crop filter (labeled xcrop)
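For reference, libavfilter's targeted command API looks like this, a sketch assuming graph is the configured AVFilterGraph and that the build's crop filter implements runtime commands (older releases may not; the instance name to target is whatever avfilter_graph_dump() shows, here assumed to be "xcrop"):

  char resp[256] = {0};
  /* Ask the instance named "xcrop" to move its left edge to x=100. */
  int ret = avfilter_graph_send_command(graph, "xcrop", "x", "100",
                                        resp, sizeof(resp), 0);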
[19:08:05 CEST] <JEEB> since it most likely wasn't implemented in the mpv filtering messaging stuff, I guess  you found the required APIs to be able to pass the message through lavfi?
[19:08:14 CEST] <JEEB> because otherwise no matter what you're doing you're going to have that problem :P
[19:09:21 CEST] <saml> reading the mpv source, I have a hard time finding where I get an entry point to the lavfi graph. I do get an mp_filter*, but I'm not sure if it's lavfi
[19:10:42 CEST] <saml> anyways, maybe sending a different crop filter command every time the mouse moves could be too slow
[19:10:46 CEST] <JEEB> you need to find the lavfi-bridging filter entry, and then somehow make it possible to message a filter in the chain of that filter in the mpv filter chain :P
[19:11:40 CEST] <saml> ah lavfi-bridging. that tells me something. thanks
[19:12:54 CEST] <JEEB> also I'm pretty sure at least partially the lavfi filter chain will have to be reset if the output frame size will change. although I'm not 100% sure. I know ffmpeg.c resets/re-inits the filter chain if resolution/pix_fmt etc change
[19:15:45 CEST] <saml> ah, the resolution won't change, i think. I'm cropping video A and overlaying it on video B. A and B have the same dimensions
[19:16:03 CEST] <JEEB> yea but if your crop width/height/position changes :P
[19:16:09 CEST] <JEEB> then the output of the thing changes, no?
[19:18:52 CEST] <saml> oh maybe, i'll have to see. Maybe I'll make sure A and B have the same dimensions, crop both A and B so that A.width + B.width stays constant, then apply hstack
[19:29:34 CEST] <BovineOne> does ffmpeg support injecting "Spatial Media Metadata" into videos, or is using Google's python tool still the only way? https://github.com/google/spatial-media
[19:52:33 CEST] <boberz> Good day. I'm having a bit of a problem trying to use ffmpeg, getting "Too many packets buffered for output stream 0:0.", all search results indicate using -max_muxing_queue_size to work around this, but I've tried various values up to the maximum 2147483647 and I'm still receiving the error. Any suggestions? I'm using ffmpeg N-46347-g649d7ca47-static from yesterday's git build.
[19:53:41 CEST] <boberz> on CentOS 7 with x264-0.148-11.20160614gita5e06b9.el7.x86_64 in case that matters
[20:09:10 CEST] <relaxed> boberz: command?
[20:22:24 CEST] <boberz> ffmpeg -i "$SOURCE" -deinterlace -vcodec libx264 -pix_fmt yuv420p -preset $QUAL -r $FPS -g $(($FPS * 2)) -b:v $VBR -acodec libmp3lame -ar 8000 -threads 6 -crf 23 -b:a 128k -bufsize 512k -max_muxing_queue_size 1024 -f flv "$YOUTUBE_URL/$KEY"
[20:23:15 CEST] <boberz> $VBR="1000k", $FPS ="30", $QUAL="medium"
[20:27:44 CEST] <kepstin> boberz: what's "$SOURCE"? is it a local file?
[20:28:00 CEST] <boberz> it's an rtsp URL
[20:29:00 CEST] <^Neo> hello friends, I see there's an EIA-608 codec, so given an H.264 stream with EIA-608-in-EIA-708 carried in ITU-T user data, is there a way to extract the captions using the FFmpeg command line?
[20:29:42 CEST] <kepstin> boberz: can you pastebin the console output from running the command please? (remove your youtube key, of course)
[20:30:35 CEST] <boberz> yes, https://pastebin.com/cXf42XmE
[20:31:08 CEST] <JEEB> ^Neo: you mean the ATSC stuff? they're handled as side data, you get them either in the AVFrames out of the decoder or AVPackets in the demuxer
[20:31:37 CEST] <JEEB> so you then have to initialize the EIA-608/708 decoder and feed that side data to that decoder
[20:32:25 CEST] <kepstin> boberz: hmm, i wonder if the issue is that the framerate from the source camera is too low
[20:32:37 CEST] <^Neo> JEEB: Yes, ATSC A35 stuff. I see that for H.264 that's where it gets parsed, but there is a standalone EIA-608 decoder that I thought would have been usable.
[20:32:40 CEST] <boberz> kepstin, I'll try increasing it, that's easy enough to do :D
[20:32:57 CEST] <JEEB> ^Neo: the decoder is 100% stand-alone in libavcodec
[20:33:04 CEST] <boberz> oh. or not, 30 is the maximum value. at 720p
[20:33:05 CEST] <JEEB> so if you have valid data in the format that decoder takes in
[20:33:19 CEST] <JEEB> you can initialize the decoder and just feed those packets in
[20:33:19 CEST] <^Neo> JEEB: ok, but is it invokable from the CLI FFmpeg?
[20:33:30 CEST] <JEEB> ffmpeg.c you mean
[20:33:34 CEST] <JEEB> as opposed to the APIs
[20:33:35 CEST] <^Neo> yes
[20:33:41 CEST] <JEEB> probably not :P
[20:33:44 CEST] <^Neo> :D
[20:33:44 CEST] <boberz> Doesn't look like any option allows for more than 30 fps.
[20:33:59 CEST] <JEEB> ^Neo: ffmpeg.c is /really/ static
[20:34:23 CEST] <JEEB> also I'm not sure which input format raw ATSC captions would be in :P
[20:34:33 CEST] <JEEB> usually you have to extract them either from text files or scc or whatever
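One command-line route that may work for this case is the lavfi movie source's +subcc output, which exposes the A53 caption side data as a separate subtitle stream for the closed-caption decoder (input name hypothetical; whether the result is usable depends on how the captions are carried):

  ffmpeg -f lavfi -i "movie=input.ts[out0+subcc]" -map 0:s captions.srt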
[20:35:37 CEST] <kepstin> boberz: hmm, 30fps should have been fine
[20:36:00 CEST] <boberz> Sometimes the youtube dashboard will show "starting" before ffmpeg dumps out, if that matters.
[20:36:04 CEST] <kepstin> i was thinking it might be really low (like <1fps) or something
[20:36:46 CEST] <boberz> before it drops out there's a status indicator with a frame count: frame=  103 fps= 61 q=0.0 size=       0kB time=-577014:32:22.77 bitrate=  -0.0kbits/s dup=0 drop=54 speed=N/A
[20:37:00 CEST] <boberz> drops increments until it fails
[20:38:03 CEST] <kepstin> interesting. I thought you said the camera was 30fps, so why is it running at 60 there...
[20:38:55 CEST] <^Neo> JEEB: Roger that
[20:40:16 CEST] <boberz> I can say that the camera UI doesn't let me choose more than 30, and watching that status line, it starts at a high number and falls with each update; when it hits approximately 30, ffmpeg quits with 'conversion failed'
[20:41:05 CEST] <boberz> weird, that time the fps went *up* to ~160 before stopping
[20:41:15 CEST] <boberz> I don't know what that status indicator is really saying, though.
[20:41:26 CEST] <boberz> Would my key frame interval matter?
[20:41:30 CEST] <boberz> It's 30 right now.
[20:41:45 CEST] <boberz> It can be as low as 10 or as high as 100.
[20:43:07 CEST] <kepstin> boberz: hmm. that's the other thing, x264 can buffer a lot of frames internally. Maybe try it with "-tune zerolatency" and see if that helps
[20:45:23 CEST] <boberz> It appears to be running without stopping at the moment
[20:45:39 CEST] <boberz> now it stopped
[20:45:55 CEST] <boberz> it went a good 30-40 seconds instead of ~5
[20:46:35 CEST] <kepstin> hmm.
[20:47:02 CEST] <kepstin> "Too many packets buffered for output stream 0" implies that it's not getting packets fast enough from the other stream
[20:47:05 CEST] <boberz> I had tried the sub stream (VGA/15fps) as well, trying again with the main stream (720p/30)
[20:47:15 CEST] <kepstin> where 0 appears to be the video, so the audio might be the problem
[20:47:32 CEST] <boberz> There's no mic so the audio would just be dead air
[20:47:54 CEST] <boberz> but it does actually transmit an audio stream of dead air
[20:48:00 CEST] <kepstin> you should try using a generated silence stream (made with a filter) instead of the camera audio then
[20:49:41 CEST] <boberz> I turned the audio sample rate up to 44100 from 8000, and got some different errors. "mb_type 52 in P slice too large at 51 12" and "error while decoding MB 51 12"
[20:50:27 CEST] <kepstin> hmm. those should just be warnings, i'd expect ffmpeg to error-conceal that and keep going.
[20:50:35 CEST] <boberz> it did seem to do so
[20:50:52 CEST] <boberz> Can I just drop the audio completely instead of generating silence?
[20:51:01 CEST] <kepstin> youtube live requires an audio stream
[20:53:10 CEST] <boberz> http://ffmpeg.org/ffmpeg-filters.html#anullsrc is this what i should be referencing?
[20:55:53 CEST] <saml> https://gist.github.com/saml/d32bb239825723ff83e76e9cd75164c6  how do I read this dump from avfilter_graph_dump()? I'm programmatically creating the filter graph "crop@xcrop=w=100:x=0:y=0" (plus buffer and buffersink)
[20:56:50 CEST] <JEEB> there's a buffer connected to an automatically inserted scaler
[20:57:02 CEST] <kepstin> boberz: the way I normally do it is to add an extra input to the ffmpeg command: "-f lavfi -i aevalsrc=0" will add an input that's just a silent audio stream
[20:57:04 CEST] <JEEB> then that is connected to crop
[20:57:19 CEST] <kepstin> boberz: then you can use the -map option to select the video from input 0 and audio from input 1
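Put together, boberz's command from earlier might become something like this sketch (same variables as before; the silent lavfi input is mapped in place of the camera audio):

  ffmpeg -i "$SOURCE" -f lavfi -i aevalsrc=0 -map 0:v -map 1:a \
         -c:v libx264 -pix_fmt yuv420p -preset $QUAL -r $FPS -g $(($FPS * 2)) \
         -b:v $VBR -bufsize 512k -c:a libmp3lame -b:a 128k -ar 44100 \
         -f flv "$YOUTUBE_URL/$KEY"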
[20:57:42 CEST] <JEEB> crop is then connected to output
[20:58:22 CEST] <JEEB> saml: that function is useful, but it can indeed be hard to read. it lists each filter in the chain (as a box), and then the previous/next thing in the chain
[20:59:42 CEST] <JEEB> [in:buffer]-[auto_scaler_0:scale]-[xcrop:crop]-[out:buffersink]
[20:59:52 CEST] <JEEB> that's the full four filters in a single line :P
[21:02:06 CEST] <saml> ah so i'm doing it right.  just crop doesn't seem to be working as I expected
[21:02:55 CEST] <boberz> trying that now kepstin, Option b:v (video bitrate (please use -b:v)) cannot be applied to input url aevalsrc=0
[21:03:46 CEST] <kepstin> boberz: you have some options out of order. general format is ffmpeg [input options] -i [input] [input options] -i [input] [output options] [output]
[21:05:18 CEST] <boberz> oh, I didn't understand that the order of options mattered.
[21:06:09 CEST] <kepstin> how else would it know which options belong to which input? (or which output, you can have multiple outputs too)
[21:06:34 CEST] <kepstin> and some options have the same name when  used on an input or output, but do different things in each place.
[21:06:37 CEST] <boberz> I would assume by not duplicating parameters for input and output, but i don't know much about this software.
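The positional rule matters because some options change meaning by placement; a small sketch with -r (filenames hypothetical):

  ffmpeg -r 25 -i input.mp4 output.mp4   # input option: reinterpret the input as 25 fps
  ffmpeg -i input.mp4 -r 25 output.mp4   # output option: convert the output to 25 fps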
[21:06:47 CEST] <boberz> OK, I have video on youtube now!
[21:06:54 CEST] <boberz> huzzah!
[21:07:09 CEST] <boberz> So, it was the audio stream causing the problem.
[21:07:21 CEST] <kepstin> yeah, seems to be the case
[21:09:05 CEST] <boberz> The question now is whether I need to optimize this command any. For example, perhaps I don't need "-tune zerolatency -max_muxing_queue_size 1024"
[21:10:13 CEST] <kepstin> boberz: the -tune zerolatency may be helpful in reducing stream latency and avoiding buffering issues. The muxing queue size is probably unneeded.
[21:11:24 CEST] <boberz> https://pastebin.com/af5tqDuw that's my command as it stands currently
[21:12:07 CEST] <furq> don't use zerolatency for youtube
[21:12:31 CEST] <furq> also use aac, not mp3
[21:12:36 CEST] <^Neo> JEEB: riddle me this, do you know how to extract both fields of CC data from A53 SEI metadata for interlaced content?
[21:12:46 CEST] <^Neo> I see that I only get one field's worth of data
[21:12:50 CEST] <boberz> furq that'd be "libaac"?
[21:12:54 CEST] <kepstin> hmm, is it not helpful? I would have thought that x264's default buffering makes the stream kind of bursty which could cause issues
[21:12:59 CEST] <furq> boberz: just aac
[21:13:33 CEST] <kepstin> I suppose if you're using a sufficiently low gop size, zerolatency isn't that useful
[21:13:45 CEST] <furq> kepstin: youtube is turning it into hls anyway, so with bufsize set i don't see how it'd make any difference
[21:14:07 CEST] <boberz> is "medium" a sane -preset?
[21:14:08 CEST] <furq> other than the obvious difference that your rate-limited stream now has no cabac or bframes
[21:14:31 CEST] <furq> boberz: generally you should use the slowest thing you can tolerate
[21:14:34 CEST] <furq> but medium is ok
[21:14:37 CEST] <kepstin> boberz: it's the default, and it's pretty reasonable. If you have lots of idle cpu you can consider using a slower preset
[21:14:48 CEST] <boberz> I don't know enough to know what "you should use the slowest thing you can tolerate" means.
[21:15:02 CEST] <furq> -preset slow/slower/veryslow
[21:15:17 CEST] <boberz> "slow" as in the delay from camera to youtube?
[21:15:20 CEST] <furq> no
[21:15:21 CEST] <boberz> slow as in framerate?
[21:15:23 CEST] <furq> no
[21:15:27 CEST] <kepstin> boberz: for this case, that means "use the slowest preset where your computer can still encode at realtime speed"
[21:15:30 CEST] <furq> preset is the encoding complexity
[21:15:49 CEST] <furq> slower presets will spend more cpu time trying to compress the stream better
[21:16:02 CEST] <boberz> So slower would be higher compression ratios
[21:16:06 CEST] <furq> basically yeah
[21:17:05 CEST] <boberz> The machine is a VM guest, 8 vCPUs on E5620 2.4GHz processors, and I plan on running four streams.
[21:17:27 CEST] <furq> what resolution is the input
[21:17:27 CEST] <kepstin> well, you're using a target bitrate, so in theory the video might look a bit better while at the same compression ratio if you use a slower preset.
[21:17:48 CEST] <boberz> Should I not use a target bitrate?
[21:17:49 CEST] <kepstin> furq: 720p
[21:17:49 CEST] <saml> hrm, so the crop filter outputs 100x270, but when I receive from buffersink, the frame dimension is 480x270
[21:18:15 CEST] <furq> also i take it you can't just send the input directly to youtube without reencoding
[21:18:22 CEST] <boberz> 720p30
[21:18:28 CEST] <saml> I wrote C. I'm leet https://github.com/saml/vabslider/blob/master/main.c
[21:18:28 CEST] <kepstin> furq: input is rtsp ip camera :/
[21:18:46 CEST] <kepstin> it might work with -c:v copy, could be worth a try actually
[21:19:00 CEST] <furq> i mean you wouldn't be able to deinterlace it then, but i'd still try that
[21:19:26 CEST] <kepstin> i'm curious about that, i've never seen any interlaced 720 line stuff
[21:19:44 CEST] <boberz> furq, no I think I have to have a layer to pull from the camera then push to youtube. The camera can't push, and youtube can't pull.
[21:19:53 CEST] <kepstin> boberz: are you sure the camera's actually giving you interlaced video?
[21:19:57 CEST] <furq> boberz: that doesn't mean you have to reencode
[21:20:07 CEST] <furq> if your camera outputs h264 then you can just have ffmpeg push that stream to youtube
[21:20:09 CEST] <boberz> I would assume it's not
[21:20:19 CEST] <boberz> that probably means i should remove -deinterlace
[21:20:37 CEST] <kepstin> boberz: right, then you don't need the "-deinterlace" option, and you should try replacing all the video codec options with "-c:v copy"
[21:21:05 CEST] <furq> ffmpeg -i $SOURCE -f lavfi -i anullsrc -c:v copy -c:a aac -f flv $YOUTUBE_URL/$KEY
[21:21:13 CEST] <furq> it's worth a try
[21:21:47 CEST] <boberz> giving it a stab
[21:22:24 CEST] <boberz> haven't seen this one before [rtsp @ 0x613cc40] Thread message queue blocking; consider raising the thread_queue_size option
[21:22:37 CEST] <boberz> but it's pushing to youtube, quality is quite poor though.
[21:22:39 CEST] <furq> you can usually ignore that message
[21:22:53 CEST] <boberz> oh, my player went to 144
[21:22:54 CEST] <boberz> lol
[21:23:05 CEST] <boberz> putting it back to 720p it looks good
[21:23:30 CEST] <furq> well yeah you'll notice that's hardly using any cpu
[21:23:32 CEST] <boberz> 4-8% cpu load
[21:23:38 CEST] <boberz> yeah very little, that's excellent
[21:23:39 CEST] <kepstin> this stream is probably somewhat higher bitrate than you were using before, but as long as your connection's fast enough that's not an issue.
[21:23:56 CEST] <boberz> Yes, the server is in my datacenter... we're an ISP
[21:24:34 CEST] <boberz> so, 1G AT&T MIS fiber ought to do it. The only choke point is the wireless link down to where the cameras are, but it's still pretty fast
[21:24:50 CEST] <boberz> (as in around 100Mbps)
[21:25:16 CEST] <boberz> A yellow dot on stream health is probably because there's no audio, i'd guess?
[21:25:26 CEST] <boberz> or rather, silent audio.
[21:26:35 CEST] <furq> youtube is a mystery
[21:26:49 CEST] <kepstin> https://support.google.com/youtube/answer/3006768?hl=en has some discussion of possible causes
[21:27:06 CEST] <kepstin> it looks like it should have an actual error message somewhere on the management panel
[21:27:22 CEST] <boberz> [flv @ 0x619ed00] Non-monotonous DTS in output stream 0:0; previous: 280566, current: 280546; changing to 280566. This may result in incorrect timestamps in the output file.
[21:27:35 CEST] <boberz> i'm seeing this occasionally and it seems to be accompanied by a video glitch in the player
[21:31:20 CEST] <kepstin> boberz: that's likely a problem with out-of-order frames received from the camera. You could try setting the -record_queue_size option in the input.
[21:31:41 CEST] <kepstin> (takes a number of packets to buffer)
[21:34:09 CEST] <boberz> that option appears to not be valid
[21:34:48 CEST] <boberz> "Unrecognized option 'record_queue_size'."
[21:35:46 CEST] <kepstin> er, typo
[21:35:52 CEST] <kepstin> "-reorder_queue_size" :)
[21:35:59 CEST] <kepstin> see https://www.ffmpeg.org/ffmpeg-protocols.html#rtsp
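As a demuxer option it goes before the input it applies to; a sketch with an arbitrary 500-packet buffer:

  ffmpeg -reorder_queue_size 500 -i "$SOURCE" -c copy -f flv "$YOUTUBE_URL/$KEY"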
[21:42:18 CEST] <^Neo> what's the preferred way of programming AVCodecContext flags?
[21:42:26 CEST] <^Neo> just |'ing them?
[21:42:29 CEST] <^Neo> or is there a set?
[21:43:35 CEST] <JEEB> you mean setting AVOptions?
[21:43:51 CEST] <JEEB> I haven't really had to set too many things at least in decoding
[21:47:51 CEST] <boberz> I think the video hitches are either the cameras or the network, because i see it in the camera direct admin player
[21:48:48 CEST] <JEEB> sounds like you need something dynamic that tries to figure out how insane your input is and tries to normalize it
[21:49:08 CEST] <JEEB> and also tries to just output something pre-generated if the input signal goes wo-hoops
[21:49:28 CEST] Action: JEEB has been looking into upipe for some use cases
[21:49:32 CEST] <boberz> i wonder if it's unhappy to be sending multiple streams, i should try stopping wowza
[21:52:40 CEST] <boberz> There was mention of multiple outputs earlier. Could I also output to a file in order to archive?
[21:53:37 CEST] <furq> -i foo -c copy output1 -c copy output2 ...
[21:53:55 CEST] <furq> bear in mind both outputs will drop if one of them does
[21:54:29 CEST] <boberz> so a full disk would make the livestream stop, and a drop in connectivity to the googleplex would cause the disk output to stop
[21:54:34 CEST] <furq> right
[21:54:54 CEST] <boberz> Not unreasonable for my use case
[21:55:59 CEST] <boberz> I might need to put some more research in to that though, such as chunking the output to filesystem by hour, file rotation, etc, etc
[21:56:41 CEST] <boberz> that'd just be handy, though; it's not a use case requirement.
[22:02:50 CEST] <kepstin> ffmpeg's segment muxer can be used for chunked output
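A sketch of that combination, hourly archive chunks via the segment muxer next to the live push (the strftime filename pattern is just one option):

  ffmpeg -i "$SOURCE" \
         -c copy -f flv "$YOUTUBE_URL/$KEY" \
         -c copy -f segment -segment_time 3600 -strftime 1 \
         "archive-%Y%m%d-%H%M%S.mkv"

The tee muxer's onfail=ignore option is one possible way around the both-outputs-drop caveat furq mentioned earlier.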
[22:19:56 CEST] <clovisw> Hi, does any cookbook exist with ready-made commands to test transcoding? Like taking one source and converting it to 1080p, 720p, 480p and 360p at 16:9?
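A minimal multi-rendition sketch along the lines clovisw asks about, one source fanned out to four scaled outputs (all names hypothetical; each -vf applies only to the output file that follows it):

  ffmpeg -i source.mp4 \
         -vf scale=-2:1080 -c:v libx264 -c:a aac out_1080.mp4 \
         -vf scale=-2:720  -c:v libx264 -c:a aac out_720.mp4 \
         -vf scale=-2:480  -c:v libx264 -c:a aac out_480.mp4 \
         -vf scale=-2:360  -c:v libx264 -c:a aac out_360.mp4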
[22:28:14 CEST] <boberz> furq, JEEB, kepstin thank you for your help in getting ffmpeg camera->youtube live up and running
[22:28:32 CEST] <boberz> It appears to be working as well as the cameras are (they're not exactly high end)
[22:44:19 CEST] <boberz> Should I be able to use this same mechanism to go to facebook live as well?
[22:44:45 CEST] <kepstin> it's probably fairly similar, but you'd have to look up if they have any specific requirements
[23:24:35 CEST] <saml> https://github.com/saml/vabslider/blob/38f70f8cd65e8349d227ad9f5d5800b88dd2469b/main.c#L117-L118   what's difference between av_buffersrc_add_frame  and  *_flags  with  AV_BUFFERSRC_FLAG_KEEP_REF  ?
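In short: av_buffersrc_add_frame() takes ownership of the frame's buffer references and resets the frame, while the _flags variant with AV_BUFFERSRC_FLAG_KEEP_REF makes lavfi take its own reference so the caller's frame stays valid. A sketch, with ctx and frame assumed set up as in the linked main.c:

  /* Ownership transfer: after this call, frame is emptied (but still allocated). */
  av_buffersrc_add_frame(ctx, frame);

  /* Keep our reference: lavfi duplicates the ref; frame remains usable afterwards. */
  av_buffersrc_add_frame_flags(ctx, frame, AV_BUFFERSRC_FLAG_KEEP_REF);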
[23:54:35 CEST] <SoItBegins> Has anyone made a FFMPEG codec module for Mobiclip videos?
[23:54:53 CEST] <SoItBegins> (Note: Mobiclip is a video format used by various Nintendo video games.)
[23:56:10 CEST] <furq> doesn't look like it
[23:56:33 CEST] <SoItBegins> That's a shame. Thank you.
[00:00:00 CEST] --- Sat Jun 23 2018

