[Ffmpeg-devel-irc] ffmpeg.log.20190206
burek
burek021 at gmail.com
Thu Feb 7 03:05:02 EET 2019
[01:12:25 CET] <mfwitten> For example, given a video stream in an `input.mp4', can you resize each frame but maintain *exactly* the same presentation time for each frame? I've found it to be impossible; and, I get different results with different containers, too.
[01:13:41 CET] <mfwitten> As you might guess, this is mainly a problem for streams with a variable frame rate (vfr).
[01:14:15 CET] <furq> did you try -vf settb
[01:18:24 CET] <mfwitten> furq: I have not, but I have tried `-copyts', `-enc_time_base', `-time_base', and `-vsync passthrough'.
[01:18:31 CET] <mfwitten> furq: I'll try settb now
[01:19:14 CET] <furq> you'll get odd results with mpegts and mkv because those have fixed timebases (1/90000 and 1/1000)
[01:19:39 CET] <JEEB> matroska isn't fixed actually, but it's in increments of *10
[01:19:48 CET] <furq> isn't it always 1/1000 in ffmpeg
[01:19:53 CET] <JEEB> most matroska files and the default is 1/1000 though
[01:19:55 CET] <furq> or lavf rather
[01:19:58 CET] <JEEB> lessee matroskaenc
[01:20:40 CET] <furq> i forget if it's the default or if it's hardcoded
[01:20:51 CET] <furq> but either way if your source is 1/25 or something then mkv won't do that
[01:21:12 CET] <JEEB> it does use AV_TIME_BASE_Q an awful lot it seems
[01:21:22 CET] <JEEB> av_rescale_q(pkt->dts, s->streams[pkt->stream_index]->time_base, AV_TIME_BASE_Q)
[01:22:16 CET] <JEEB> chapters are set to AVRational scale = {1, 1E9}; funny enough
[01:23:38 CET] <mfwitten> furq: It doesn't seem to have an effect. I think the encoder and/or muxer is doing its own thing
[01:26:16 CET] <furq> are you getting the same timebase and different pts values
[01:26:20 CET] <furq> or is the timebase different
[01:26:31 CET] <mfwitten> furq: both
[01:27:07 CET] <mfwitten> furq: The calculated duration changes, too. And if I swap clips in a filtergraph, there are minute but cumulative discrepancies in timing
[01:28:46 CET] <mfwitten> furq: The results are very close to what they should be, but basically there are probably rounding errors and the like through the ffmpeg pipeline, and the result is not exact
[01:29:17 CET] <furq> it's weird that the timebase would change if you're not changing container
[01:29:50 CET] <furq> maybe this is one of those codec/container timebase things that i never bothered to learn about
[01:30:15 CET] <mfwitten> furq: Yes. Indeed. I suspect it's different hard-coded choices, heuristics based on constant-frame-rate concepts, and rounding errors
[01:31:22 CET] <furq> well i can see where rounding errors would creep into the actual pts values
[01:31:27 CET] <furq> but not the timebase itself
[01:31:51 CET] <furq> afaik it should default to keeping the source timebase
[01:32:02 CET] <mfwitten> furq: Why though? Surely ffmpeg has been written to handle mp4 values *exactly*.
[01:32:27 CET] <mfwitten> furq: It's even wrong with `nut' (on output), which is designed to be very flexible
[01:32:55 CET] <furq> i'm not saying it should happen with timestamps, just that i have no idea how it would happen with the timebase
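One way to pin down whether the drift lives in the timestamps or in the timebase is to dump per-packet values from both files and diff them; a sketch, assuming the `input.mp4'/`output.mp4' names from the discussion:

    ffprobe -v error -select_streams v:0 -show_entries packet=pts,pts_time -of csv=p=0 input.mp4 > in.csv
    ffprobe -v error -select_streams v:0 -show_entries packet=pts,pts_time -of csv=p=0 output.mp4 > out.csv
    diff in.csv out.csv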
[01:47:44 CET] <ariyasu> [swscaler @ 0000000003727340] deprecated pixel format used, make sure you did set range correctly
[01:48:21 CET] <ariyasu> any idea what causes this? im not setting -pix_fmt to anything
[01:48:32 CET] <furq> is your source jpeg or mjpeg
[01:48:48 CET] <furq> for some reason that warning gets thrown if the source pixel format is yuvj*
[01:48:55 CET] <furq> you can just ignore it in that case
[01:49:26 CET] <ariyasu> color space is yuv
[01:49:27 CET] <ariyasu> ok
[01:49:34 CET] <ariyasu> thanks furq
[01:49:43 CET] <furq> you can ignore it anyway, it doesn't change anything
[01:49:50 CET] <furq> it's just to let you know it might stop working in future
[01:51:03 CET] <mfwitten> furq: That's confusing to me, too. yuvj* has a larger dynamic range, doesn't it?
[01:51:25 CET] <furq> yuvj is deprecated because it's a stupid way of signalling full range
[01:51:38 CET] <furq> there's a separate flag for that now
[01:51:47 CET] <mfwitten> furq: Chromium didn't support playing files with that pixel format, but Google's own Pixel phone creates files with that format
[01:52:22 CET] <mfwitten> furq: How should people convert away from it?
[01:52:26 CET] <furq> you don't
[01:52:29 CET] <mfwitten> oh
[01:52:33 CET] <furq> yuvj isn't a real pixel format, it's an ffmpeg internal thing
[01:52:40 CET] <ariyasu> Stream #0:0: Video: h264 (High 4:4:4 Predictive), yuvj420p(pc, progressive), 720x480, 59.94 fps, 59.94 tbr, 1k tbn, 119.88 tbc (default)
[01:52:50 CET] <ariyasu> this was source
[01:52:52 CET] <furq> in practice what it does right now is just set the respective yuv pixel format and then sets the range flag
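If you do want to rewrite such a file, a hedged sketch of signalling full range explicitly instead of relying on yuvj (filenames are placeholders; whether a re-encode is warranted at all is another question):

    ffmpeg -i input.mp4 -vf "scale=in_range=pc:out_range=pc" -pix_fmt yuv420p -color_range pc -c:v libx264 output.mp4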
[01:53:04 CET] <mfwitten> ariyasu: Just curious: Is it a video from an Android phone?
[01:53:19 CET] <ariyasu> no, im trying to do lossless video capture from a gamecube
[01:53:30 CET] <ariyasu> and thats what obs gave me when i set it to crf 0
[01:53:48 CET] <furq> probably because it's an rgb source
[02:03:53 CET] <mfwitten> furq: Oh! Thank goodness! It looks like I can get the right timing by setting `-time_base' appropriately; I can get the value by running `ffprobe' on the input. However, I have to say that this is a major pain in the arse.
[02:28:53 CET] <mfwitten> furq: Fortunately, `-time_base' seems to do the trick, but it requires looking up the right value ahead of time. Would you consider this a bug? Shouldn't the encoder, by default, just use the input stream's timing information if it's valid?
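A sketch of that lookup-then-encode dance (the scale filter is just a stand-in for whatever processing is wanted):

    tb=$(ffprobe -v error -select_streams v:0 -show_entries stream=time_base -of default=nw=1:nk=1 input.mp4)
    ffmpeg -i input.mp4 -vf scale=1280:720 -time_base "$tb" output.mp4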
[02:30:02 CET] <mfwitten> furq: Also, `ffmpeg' shows both `codec_time_base' and `time_base'. I can't seem to affect `codec_time_base', despite `-enc_time_base' being an option; what is `codec_time_base'?
[02:30:40 CET] <furq> neither of those are listed in ffmpeg-all so i assume they're old aliases for time_base
[02:31:09 CET] <furq> or just specific to certain codecs
[02:32:28 CET] <mfwitten> furq: I've got ffmpeg `n4.1' installed, and `-enc_time_base' is in `ffmpeg-all'
[02:33:02 CET] <mfwitten> furq: Also `codec_time_base' is different in my `input.mp4' from `output.mp4'. So, it must have some meaning
[02:33:28 CET] <mfwitten> furq: Does the encoder/decoder have its own timing information separate from the decoded stream's timing?
[02:35:52 CET] <mfwitten> I found an old email in the mailing list stating that `-time_base' applies only to native/internal encoders, and so it wouldn't work with `libx264', but it does *now* seem to work with `libx264'. I wonder whether this "bug" (ignoring input timing choices by default) is in `libx264', and not ffmpeg's stuff.
[02:54:56 CET] <mfwitten> furq: Sorry to keep bothering you, but I thought you might be interested. `-time_base' is older than `-enc_time_base'; the former may be per-output while the latter is per-encoder or per-stream; the new one supports special arguments, namely `-1', which means to use the input's time base, which is *exactly* what I want, and it seems to work. (For further details, perhaps see: `git log -1 3ac46a0a62386a52e38c066379ff36b5038dd4d0; git log -1 2b06f2d2e24ccc4098f3ab40efd68e8f3f02b273')
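Putting the thread's conclusion together, a minimal sketch of a resize that tries to keep the input timing intact (needs a build with `-enc_time_base', i.e. n4.1 or later per the above; the scale filter is a placeholder):

    ffmpeg -copyts -i input.mp4 -vf scale=1280:720 -c:v libx264 -enc_time_base -1 -vsync passthrough output.mp4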
[07:13:00 CET] <mfwitten> What is going on here?
[07:13:04 CET] <mfwitten> Input stream #0:0 (video): 291 packets read (1096238 bytes); 290 frames decoded;
[07:13:07 CET] <mfwitten> ...
[07:13:10 CET] <mfwitten> 290 frames successfully decoded, 0 decoding errors
[07:14:16 CET] <mfwitten> There are supposed to be 291 frames; in the original file, 291 frames are decoded, yet in this re-encoded (but otherwise the same) file, there are 290 frames decoded
[07:14:31 CET] <mfwitten> 291 packets are indeed acknowledged as having been read, but only 290 frames decoded
[07:14:50 CET] <mfwitten> There are no errors of any kind
[07:16:09 CET] <mfwitten> In other words, the lines should be:
[07:16:27 CET] <mfwitten> Input stream #0:0 (video): 291 packets read (????? bytes); 291 frames decoded;
[07:16:38 CET] <mfwitten> 291 frames successfully decoded, 0 decoding errors
[08:55:11 CET] <th3_v0ice> One packet will not always contain one frame. In my testing if you output to h264 raw bitstream there will always be one frame missing at the end. Not sure if bug or not.
[09:02:12 CET] <JEEB> th3_v0ice: you need to flush
[09:02:33 CET] <JEEB> so far all of my tests have decoded all frames with H.264 (and you can see the tests in FATE run over samples)
[11:12:43 CET] <Foloex> Hello world, I'm trying to make the waveform appear on top of the video. Creating the waveform, positioning it and blending it with the video I managed. I can't figure out how to set the size of the showwaves output to be that of the input video. The size is expected in the form widthxheight; I don't get how I can generate this string using W and H. Can someone give me some pointers on that ?
[11:14:17 CET] <durandal_1707> 400x300
[11:15:14 CET] <Foloex> Just to be clear, the size of the input varies
[11:15:39 CET] <Foloex> I don't want to hard code those values in my filter
[11:15:50 CET] <durandal_1707> use shell scripting
[11:16:14 CET] <Foloex> You mean use ffprobe then ffmpeg in a script file ?
[11:17:09 CET] <durandal_1707> no
[11:18:46 CET] <th3_v0ice> JEEB: I did actually, but decoding from h264 to yuv left one frame less, or something like that. Thanks for your help yesterday.
[11:19:32 CET] <Foloex> durandal_1707: what do you mean by shell scripting then ?
[11:20:23 CET] <durandal_1707> google it
[11:20:59 CET] <Foloex> I did and I found things about making a bash file (or equivalent)
[11:21:22 CET] <Foloex> nothing specific to ffmpeg though
[11:26:11 CET] <th3_v0ice> Use ffprobe to get WxH (copy-pasted from Stack Overflow): ffprobe -v error -select_streams v:0 -show_entries stream=width,height -of csv=s=x:p=0 input.mp4, then use the string in your filter.
[11:26:42 CET] <furq> Foloex: generate showwaves oversized and use scale2ref
[11:28:41 CET] <Foloex> I was hoping to do everything within ffmpeg, thanks furq I'll try that
[11:30:10 CET] <Foloex> I'm kind of surprised that the H and W ffmpeg constants cannot be used to create a size string
[11:31:03 CET] <furq> those constants are specific to each filter, and showwaves doesn't take a video as input
[11:31:09 CET] <furq> so it has nowhere to get them from
[11:32:21 CET] <Foloex> furq: makes sense, thanks
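A minimal sketch of furq's scale2ref suggestion, assuming 1920x1080 is a safe oversize for the waveform (sizes and filenames are placeholders):

    ffmpeg -i input.mp4 -filter_complex \
      "[0:a]showwaves=s=1920x1080:mode=line[w]; \
       [w][0:v]scale2ref[ws][v]; \
       [v][ws]overlay=format=auto" \
      -c:a copy output.mp4

scale2ref resizes its first input to the dimensions of its second, so the waveform ends up matching the video whatever the input resolution.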
[12:34:45 CET] <JEEB> well we're testing that stuff with the FATE tests so you might really want to verify. if you're 100% sure you're missing a picture (frame or field), then make sure you make a test sample and a test case that can be tested/verified
[12:34:52 CET] <JEEB> and post an issue on the trac issue tracker
[15:38:17 CET] <egrouse> is it possible to, say, continuously write the timestamp of currently playing vid to a file? so if it fails to convert at any point i can resume from the previous place it reached
[15:38:43 CET] <egrouse> or is there some other way to achieve this resuming of previous file
[15:38:52 CET] <mfwitten> th3_v0ice, JEEB: Interestingly, allowing ffmpeg to choose the default encoding settings for h.264 results in all 291 frames being present, but specifying `-c:v libx264 -preset ultrafast -crf 23' somehow keeps the last frame from being decodable (despite there being a frame, and despite there being no decoding errors; maybe it requires an even number of frames?).
[15:39:05 CET] <mfwitten> egrouse: Yes
[15:39:22 CET] <mfwitten> egrouse: Here's a quick hack...
[15:40:43 CET] <mfwitten> egrouse: do you understand filtergraphs? If not, perhaps you could give me your command line, so that I can edit it
[15:48:17 CET] <mfwitten> egrouse: ffmpeg -v 0 -i input.mp4 -vf 'setpts=PTS+0*print(T\,0)' output.mp4 2>&1 | tee /tmp/timestamps.txt
[15:50:11 CET] <mfwitten> egrouse: Now, it might be a little more complicated than that; you might want to ensure that the time stamps in `output.mp4' really are what is being gotten from `input.mp4'. In that case, you currently have to be more precise:
[15:50:56 CET] <mfwitten> egrouse: ffmpeg -v 0 -copyts -i input.mp4 -vf 'setpts=PTS+0*print(T\,0)' -enc_time_base -1 output.mp4 2>&1 | tee /tmp/timestamps.txt
[15:52:05 CET] <mfwitten> egrouse: Maybe you should be using PTS values instead (i.e., `print(PTS\,0)'), or even explicit frame numbers (i.e., `print(N\,0)').
[15:52:51 CET] <mfwitten> egrouse: Lastly, you'll probably want to split the output into discrete chunks that can be finished separately and then concatenated together at the end if necessary
[15:53:28 CET] <mfwitten> egrouse: If you do that, then you might not even need to write out timestamps or whatever, which won't necessarily be too useful anyway
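For the resume itself, one hedged sketch, assuming the last line of `/tmp/timestamps.txt' is a bare seconds value as printed by `print(T\,0)':

    last=$(tail -n 1 /tmp/timestamps.txt)
    ffmpeg -ss "$last" -i input.mp4 -c:v libx264 resumed.mp4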
[16:20:14 CET] <egrouse> mfwitten, sorry i had to go away - thanks a lot for that
[16:21:19 CET] <egrouse> looks like it could be very useful and definitely appreciate the comments regards splitting chunks etc
[16:21:25 CET] <egrouse> gonna save all that and have a play later - thank you ;)
[16:24:10 CET] <mfwitten> egrouse: d=$(ffprobe -v quiet -select_streams v:0 -show_streams input.mp4 | awk -F= '/^duration=/ {print $2}')
[16:24:16 CET] <mfwitten> egrouse: half_d=$(echo "scale=6; $d / 2" | bc)
[16:24:22 CET] <mfwitten> egrouse: ffmpeg -i input.mp4 -filter_complex '
[16:24:22 CET] <mfwitten> split=3 [v0][v1][v_output];
[16:24:22 CET] <mfwitten> [v0] trim=duration='"$half_d"' [v_output0];
[16:24:22 CET] <mfwitten> [v1] trim=start='"$half_d"':duration='"$half_d"' [v_output1]
[16:24:22 CET] <mfwitten> ' -map '[v_output0]' -an /tmp/output0.mp4 \
[16:24:25 CET] <mfwitten> -map '[v_output1]' -an /tmp/output1.mp4 \
[16:24:27 CET] <mfwitten> -map '[v_output]' -an /tmp/output.mp4
[16:25:07 CET] <mfwitten> egrouse: Something like that. Of course, there are issues with timing and duplicated frames and the like which may be difficult to deal with
[16:25:43 CET] <mfwitten> egrouse: Also see here: https://trac.ffmpeg.org/wiki/Concatenate
[16:26:26 CET] <furq> wow don't do that
[16:26:34 CET] <furq> use the segment muxer to split into chunks
[16:26:56 CET] <furq> also use mpegts or something that's more concat-friendly for the intermediate format
[16:27:24 CET] <mfwitten> furq: Sure. However, sometimes the chunks you want aren't just linear
[16:27:56 CET] <egrouse> thanks to you both
[16:28:04 CET] <egrouse> really come to appreciate this channel over the last few weeks :p
[16:28:42 CET] <mfwitten> egrouse: Good luck!
[16:29:09 CET] <egrouse> thank you :)
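A sketch of furq's segment-muxer suggestion, with mpegts as the concat-friendly intermediate (segment length and names are placeholders):

    ffmpeg -i input.mp4 -c copy -f segment -segment_time 60 -reset_timestamps 1 chunk_%03d.ts
    printf "file '%s'\n" chunk_*.ts > list.txt
    ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4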
[16:55:59 CET] <^Neo> hello friends, I'm trying to add spdif support to alsa dec and leverage the spdifdec code in libavformat. In libavformat its expecting an AVIOContext to be present, but in libavdevice it looks like alsa doesn't have one populated. I figure I have two options, add/populate an AVIOContext in the AVFormatContext used in alsa_dec.c, or modify the spdifdec.c (or make a copy for libavdevice) where I just operate over a buffer that the alsa subs
[17:06:42 CET] <ariyasu> what's the correct way to end a job when using a direct show device?
[17:06:55 CET] <ariyasu> as the input is endless
[17:07:39 CET] <c_14> press q
[17:07:48 CET] <c_14> or ctrl+c as long as you only press it once
[17:08:16 CET] <ariyasu> thanks
[17:44:20 CET] <ariyasu> how come when i do "-vf setdar=4:3" it sets the dar to 3:1 ?
[17:44:38 CET] <ariyasu> Stream #0:0: Video: h264 (libx264), yuv420p(progressive), 720x480 [SAR 2:1 DAR 3:1], q=-1--1, 59.94 fps, 90k tbn, 59.94 tbc
[17:48:13 CET] <furq> ariyasu: setdar=4/3
[17:48:22 CET] <furq> or 4\:3
[17:48:31 CET] <furq> otherwise the : is interpreted as an option separator
[17:50:23 CET] <ariyasu> i see, thanks
[17:53:34 CET] <Sesse> hi. I have a multi-track mkv file (MJPEG, if it matters) where some of the tracks change video resolution a few times. since it's huge (~2.5 TB), I'd like to get it down to h264 for archival, but after transcoding (which takes a few days...), it seems that the resolution information is lost; the tracks are all at the resolution they started at. is there some way I can keep the original data without
[17:53:40 CET] <Sesse> splitting?
[17:54:10 CET] <Sesse> fwiw, the command line is: ffmpeg -i input.mkv -map 0:0 -map 0:1 -map 0:2 -map 0:3 -c:v libx264 -crf 24 -preset slower -sn -tune film -movflags faststart output.mp4
[18:04:02 CET] <mfwitten> Sesse: I do not know for sure, but I suspect that neither the mp4 container nor h264 can handle changing dimensions midstream
[18:04:21 CET] <mfwitten> Sesse: You could encode each track at the largest dimensions
[18:04:54 CET] <mfwitten> Sesse: Also, if you're archiving, you might want to look into a newer codec like AV1 (though the encoder is apparently horrifically slow right now)
[18:05:26 CET] <DHE> oh boy is it.. I spent a week on a Ryzen encoding a 1-minute video (25fps)
[18:05:31 CET] <DHE> or was it 2 weeks...
[18:05:42 CET] <mfwitten> Yikes
[18:05:52 CET] <Sesse> mfwitten: I could use something else than mp4 if it helps, I'm positive mkv can handle it
[18:05:56 CET] <Sesse> h264, on the other hand...
[18:06:01 CET] <Sesse> you'd almost certainly need to insert an IDR frame, of course
[18:06:05 CET] <mfwitten> Well, there are some Big Guns lined up behind AV1, so I suspect we'll have hardware acceleration soon enough
[18:06:19 CET] <Sesse> AV1 is out of the question for the reasons DHE mention :-)
[18:06:28 CET] <Sesse> rav1e is almost realtime now... aka 480p in 10 fps
[18:06:31 CET] <Sesse> on a fast machine
[18:06:37 CET] <DHE> x264, at least as far as ffmpeg offers it, does not appear to support resolution changes. you'd have to close and re-open the encoder at a minimum. if you can just stream the new packets from the reloaded encoder, MAYBE...
[18:06:48 CET] <DHE> but I don't know if that's a spec violation or whatever...
[18:07:46 CET] <Sesse> well, it happens in transport streams all the time
[18:07:50 CET] <Sesse> so surely it must be allowed
[18:08:14 CET] <Sesse> well, ok, switching audio codecs is probably more common than switching resolutions
[18:09:17 CET] <Sesse> but I'm fairly certain I've seen blu-rays where the menus are 1080p and the content is 720p
[18:09:26 CET] <mfwitten> Sesse: I'm not sure that's the same situation. Those are separate streams of the same content at different resolutions, whereas yours is changing explicitly mid-stream
[18:09:38 CET] <Sesse> fair enough
[18:10:19 CET] <Mavrik> Huh, is there even a single codec that supports resolution switching?
[18:10:27 CET] <Sesse> Mavrik: any intraframe codec
[18:10:30 CET] <Mavrik> I mean, there's H.264 SVC, but even that one can't change res
[18:10:39 CET] <mfwitten> If you want h264, and you want to be certain you remain within the bounds of practical tech, then I suggest figuring out the largest dimensions for a frame, and then filtering the other frames so that they are overlaid correctly at the center of these larger frames
[18:10:53 CET] <Sesse> mfwitten: I'd assume I just scale down everything to the same resolution
[18:10:59 CET] <Sesse> for this purpose, it's good enough-ish
[18:11:01 CET] <mfwitten> No need
[18:11:17 CET] <mfwitten> Sesse: You could overlay
[18:11:26 CET] <Sesse> as in have black borders around?
[18:11:29 CET] <mfwitten> Sesse: Your smaller images get overlaid perfectly within the bigger frames
[18:11:34 CET] <mfwitten> Sesse: Yeah
[18:11:44 CET] <Sesse> given that it's supposed to simulate camera input, that doesn't sound so good :-)
[18:11:58 CET] <mfwitten> Well, I figured you were just trying to preserve the images
[18:12:15 CET] <mfwitten> Sesse: Otherwise, you'd want to upscale the smaller images in order to preserve the detail in the larger images
[18:12:18 CET] <Sesse> it's multi-angle test data
[18:12:29 CET] <Sesse> which I'm publishing since it's pretty much impossible to find online
[18:12:48 CET] <Sesse> but the cameras changed in the middle of the stream, since one of them had power problems :-)
[18:14:01 CET] <mfwitten> So, what did ffmpeg do? Scale smaller images up automatically?
[18:14:13 CET] <Sesse> it seems it scaled, yes
[18:14:15 CET] <Sesse> up and down
[18:14:50 CET] <Sesse> actually, in one of the transcodes, when I started in the middle to test something (with -ss), it chose to convert to 1280x1440
[18:14:53 CET] <Sesse> which is... bizarre
[18:14:56 CET] <mfwitten> I see. Well, if I were you, I'd just use the largest dimensions for the whole thing. Maybe explicitly set up scaling so that you can control the quality
[18:14:59 CET] <Sesse> (the streams are either 720p or 1080p)
[18:15:45 CET] <mfwitten> Sesse: The aspect ratios of 720p and 1080p are different
[18:15:56 CET] <mfwitten> Sesse: Maybe it was trying to prevent bad scaling
[18:16:11 CET] <Sesse> wait, are they?
[18:16:16 CET] <mfwitten> I think...
[18:16:22 CET] <Sesse> both should be 16:9
[18:17:41 CET] <mfwitten> Sesse: Sorry. My mistake
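If everything gets scaled to one resolution as Sesse intends, a sketch with the scaler made explicit so the quality is controllable (size and flags are placeholders; with several -map'ed video tracks the same -vf should be instantiated per stream):

    ffmpeg -i input.mkv -map 0:v -vf "scale=1920:1080:flags=lanczos,setsar=1" \
      -c:v libx264 -crf 24 -preset slower -tune film output.mkv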
[18:22:32 CET] <mfwitten> Sesse: What is the output of the following: ffprobe -v quiet -select_streams v:0 -show_streams input.mkv | grep -e ^width= -e ^height=
[18:23:01 CET] <mfwitten> Sesse: I'm curious what `ffprobe' says about the dimensions.
[18:23:21 CET] <darkdrgn2k> im trying to read a video stream from a camera that uses "Advanced ip-Camera Stream (ACS)". i can get FFMPEG to decode the stream by adding -f h264 in front of the source. However getting this into zoneminder is a bit tricky since it does not use the command line. there is an "OPTIONS_FFMPEG" parameter that takes for example "reorder_queue_size=nnn" but f=h264 does not seem to work.
[18:23:32 CET] <darkdrgn2k> 1) anyone have any idea if there is an FFMPEG option that could force the demuxer
[18:23:52 CET] <darkdrgn2k> 2) if not, any idea how i can use an ffmpeg process to send out an RTSP stream to another process on the same computer
[18:24:23 CET] <Sesse> mfwitten: 1280x720
[18:24:39 CET] <mfwitten> Sesse: That's it? Just one?
[18:24:45 CET] <Sesse> yes
[18:24:48 CET] <Sesse> mfwitten: but I guess the container doesn't really matter in this case
[18:24:55 CET] <Sesse> the jpeg frame inside is what matters
[18:26:51 CET] <Sesse> it obviously didn't scan anything except the header, unless it read terabytes of data in a few milliseconds
[18:26:55 CET] <mfwitten> Sesse: Well, I would have hoped `-show_streams' would be written to account for that fact, but I guess not. There is probably an assumption that the dimensions do not change
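One way to surface the mid-stream changes that `-show_streams' misses is to decode per-frame metadata instead; painfully slow on a file this size, but it does report every distinct geometry:

    ffprobe -v error -select_streams v:0 -show_entries frame=width,height -of csv=p=0 input.mkv | sort -u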
[18:30:55 CET] <mfwitten> DHE: Was the AV1 encoding worth it?
[18:36:31 CET] <mfwitten> darkdrgn2k: Maybe this helps: https://wiki.zoneminder.com/Ffmpeg
[18:36:38 CET] <mfwitten> darkdrgn2k: FFMPEG_INPUT_OPTIONS
[18:36:57 CET] <darkdrgn2k> yes lots of help :( "usually leave this empty " is all it says
[18:37:16 CET] <mfwitten> Well, set it to `-f h264'
[18:37:25 CET] <mfwitten> darkdrgn2k: ^^
[18:37:27 CET] <darkdrgn2k> also docs are out of date, there is a module that saves h264 passthrough
[18:37:36 CET] <darkdrgn2k> also the format is x=y not -x y
[18:37:38 CET] <darkdrgn2k> and didnt seem to work
[18:38:24 CET] <mfwitten> darkdrgn2k: That page says to use zmvideo.pl
[18:38:50 CET] <darkdrgn2k> that page is also about generating a video AFTER the capture is completed (ie export a video)
[18:39:07 CET] <darkdrgn2k> the new module lets you read the video coming off the cam in raw h264
[18:39:12 CET] <darkdrgn2k> (like i said, very out of date)
[18:40:01 CET] <mfwitten> darkdrgn2k: Well, perhaps you can send me to documentation that I can read myself. This is the first time I've heard of zoneminder
[18:40:08 CET] <mfwitten> darkdrgn2k: This isn't #zoneminder
[18:40:34 CET] <darkdrgn2k> mfwitten, yes i know. im trying to see if there was some series of parameters for the ffmpeg libraries that were different from the console ones
[18:41:51 CET] <darkdrgn2k> so assuming that's not something that is easy to poke at, im looking at plan B
[18:42:10 CET] <darkdrgn2k> can i feed an RTSP stream via the ffmpeg process to another process on the same computer?
[18:47:05 CET] <mfwitten> darkdrgn2k: So, look at this picture: https://zoneminder.readthedocs.io/en/latest/userguide/components.html#system-overview
[18:47:16 CET] <mfwitten> darkdrgn2k: You're trying to feed a zmc instance?
[18:47:33 CET] <darkdrgn2k> correct
[18:48:15 CET] <darkdrgn2k> https://zoneminder.readthedocs.io/en/stable/userguide/definemonitor.html#source-tab
[18:48:43 CET] <darkdrgn2k> source is "ffmpeg" which allows zoneminder to record data withouth transcoding it
[18:52:48 CET] <mfwitten> darkdrgn2k: Well, the question is what does zmc (or whatever) expect? They're trying to be too cute, hiding what's going on.
[18:53:20 CET] <mfwitten> darkdrgn2k: Like you say, you probably need to use RTSP, as that's the only thing mentioned
[18:53:34 CET] <darkdrgn2k> it expects a h264 stream
[18:53:45 CET] <mfwitten> darkdrgn2k: Does it? Where do you get that information?
[18:53:52 CET] <darkdrgn2k> i have one but it includes "Advanced ip-Camera Stream (ACS)" that ffmpeg (im guessing) does not like
[18:54:11 CET] <darkdrgn2k> the other camera that produces a rtsp 264 stream works
[18:54:42 CET] <darkdrgn2k> the real problem is that this camera is a pos
[18:55:08 CET] <darkdrgn2k> and i been poking at it trying to make it workable. I got it to spit out a h264 stream (which is more than anyone has ever done) but im stuck at the next step
[18:55:37 CET] <darkdrgn2k> the issue may be that it's getting the h264 stream over http instead of rtsp but it's not clear
[18:55:37 CET] <mfwitten> darkdrgn2k: Listen, you're not being precise. It's hard to know what's failing in your pipeline.
[18:55:51 CET] <darkdrgn2k> because im not sure what fails
[18:56:10 CET] <darkdrgn2k> if i pull a feed of the camera, raw ffmpeg does not recognize it
[18:56:26 CET] <darkdrgn2k> forcing it with -f h264 produces a viewable video
[18:56:41 CET] <darkdrgn2k> that is basically as far as i got
[18:56:50 CET] <mfwitten> What is your ffmpeg command line?
[18:57:24 CET] <darkdrgn2k> ffmpeg http://admin:@192.168.95.102/video/ACVS-H264.cgi?profileid=1 <- fails
[18:57:30 CET] <darkdrgn2k> ffmpeg -f h264 http://admin:@192.168.95.102/video/ACVS-H264.cgi?profileid=1 <- succeeds
[18:58:50 CET] <mfwitten> darkdrgn2k: Does this not seem more appropriate: https://zoneminder.readthedocs.io/en/stable/userguide/definemonitor.html#remote
[19:00:03 CET] <darkdrgn2k> ffmpeg is the only mode that supports h264 passthrough (meaning zoneminder will not de-code then re-encode the data)
[19:00:27 CET] <mfwitten> darkdrgn2k: Just for completeness, what exactly is ffmpeg receiving from that URL?
[19:00:34 CET] <darkdrgn2k> i have managed to get it working with jpg stills but the quality of the image is much lower than 1080p
[19:01:02 CET] <DHE> <mfwitten> DHE: Was the AV1 encoding worth it? # source video was 1080p at 25 and I gave it 1 megabit of video. It actually looks REALLY good
[19:01:18 CET] <darkdrgn2k> i have reason to believe that it is described here
[19:01:18 CET] <darkdrgn2k> http://gurau-audibert.hd.free.fr/josdblog/wp-content/uploads/2013/09/CGI_2121.pdf
[19:01:36 CET] <mfwitten> DHE: Awesome. Thanks.
[19:01:40 CET] <darkdrgn2k> page 32
[19:01:47 CET] <darkdrgn2k> "4.1.9.get H.264 video stream"
[19:02:05 CET] <DHE> mfwitten: again, it took a week or so in order to encode a 1 minute video
[19:02:10 CET] <furq> does aom have any multithreading yet
[19:02:24 CET] <furq> i'm guessing no
[19:02:30 CET] <Sesse> DHE: 1mbit at x264 in placebo is pretty good, too =)
[19:02:42 CET] <darkdrgn2k> i evaluated the data coming out and it seemed to conform to this model
[19:02:56 CET] <mfwitten> DHE: Indeed.
[19:03:09 CET] <darkdrgn2k> ACS header on page 42 seemed to confirm as well
[19:03:15 CET] <mfwitten> darkdrgn2k: looking now
[19:03:34 CET] <furq> oh apparently it has row-mt
[19:03:45 CET] <furq> i should give it a try
[19:04:00 CET] <darkdrgn2k> (i have been trying to get the camera to respond to a real rtsp stream but it does not seem to work. And i get port 554 as "filtered" during an nmap scan)
[19:07:14 CET] <mfwitten> darkdrgn2k: OK, so the IP camera is producing an H.264 stream that ffmpeg is capable of processing; you were able to specify an output file or something and then watch the results?
[19:07:32 CET] <darkdrgn2k> yes
[19:07:37 CET] <darkdrgn2k> only when i add -f h264 in front
[19:07:40 CET] <mfwitten> darkdrgn2k: So, then you want zoneminder to use ffmpeg just like that
[19:07:50 CET] <darkdrgn2k> idealy
[19:08:08 CET] <darkdrgn2k> alternatively be able to have ffmpeg feed the "cleaned" stream back into zoneminder -vcopy
[19:08:44 CET] <mfwitten> darkdrgn2k: Can you make this the source path? -f h264 http://admin:@192.168.95.102/video/ACVS-H264.cgi?profileid=1
[19:08:50 CET] <DHE> Sesse: while I don't think I used placebo (maybe veryslow) I did do the same test with x264 and x265 for the sake of comparing the video. x264 looks pretty good but you can still see the artifacts...
[19:08:54 CET] <darkdrgn2k> i tried, it did not work
[19:08:55 CET] <mfwitten> (Include the "-f h264" in there.)
[19:09:06 CET] <furq> well row-mt is working great
[19:09:13 CET] <furq> using two whole threads for a 720p source
[19:09:20 CET] <darkdrgn2k> ` Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 1920x1080, q=-1--1, 30 fps, 15360 tbn, 30 tbc `
[19:09:22 CET] <mfwitten> darkdrgn2k: Well, this zoneminder thing is open source, so let's have a look
[19:09:44 CET] <darkdrgn2k> my cpp foo is not that strong
[19:09:45 CET] <darkdrgn2k> https://github.com/ZoneMinder/zoneminder/blob/51f0e7e5c8320d7b855ae3bff8ed442539233c2f/src/zm_ffmpeg_camera.cpp
[19:10:17 CET] <furq> wait never mind it's using one thread
[19:10:18 CET] <darkdrgn2k> or rather https://github.com/ZoneMinder/zoneminder/blob/master/src/zm_ffmpeg_camera.cpp
[19:13:24 CET] <furq> Codec AVOption row-mt (Row based multi-threading) specified for output file #0 (pipe:) has not been used for any stream.
[19:13:27 CET] <furq> cool
[19:13:56 CET] <furq> -tiles also makes no difference so this is going well so far
[19:16:09 CET] <mfwitten> darkdrgn2k: What version of `libavformat' do you have?
[19:16:24 CET] <mfwitten> darkdrgn2k: ls /usr/lib/libavformat.so*
[19:16:31 CET] <mfwitten> darkdrgn2k: Or the like
[19:17:03 CET] <darkdrgn2k> searching 1 sec
[19:17:19 CET] <darkdrgn2k> /usr/lib/x86_64-linux-gnu/libavformat.so.57.56.101
[19:19:56 CET] <mfwitten> darkdrgn2k: From a quick glance, it looks like the Source Path field probably gets picked up here: https://github.com/ZoneMinder/zoneminder/blob/master/src/zm_ffmpeg_camera.cpp#L87
[19:20:15 CET] <darkdrgn2k> yes and options just below it at mOptions( p_options )
[19:20:26 CET] <darkdrgn2k> which i think is defined in .h
[19:20:54 CET] <mfwitten> darkdrgn2k: https://github.com/ZoneMinder/zoneminder/blob/master/src/zm_ffmpeg_camera.cpp#L317
[19:21:04 CET] <mfwitten> See that line?
[19:21:13 CET] <mfwitten> I bet that's doing the opening
[19:21:43 CET] <mfwitten> If you look below, for older/other versions, it does options processing
[19:22:21 CET] <darkdrgn2k> hmm
[19:22:44 CET] <mfwitten> darkdrgn2k: Or if you look at the options processing, it seems to only do RTSP
[19:22:59 CET] <darkdrgn2k> hmm yeh thats what im looking at here too :/
[19:23:07 CET] <darkdrgn2k> there is a "http tunnel" option but not sure what that is
[19:23:08 CET] <darkdrgn2k> *sigh*
[19:23:29 CET] <mfwitten> darkdrgn2k: Well, ffmpeg can do that, right?
[19:24:12 CET] <darkdrgn2k> so the only solution is to figure out how to convince the camera to do rtsp (which is not going well right now) or have an ffmpeg process pull the http stream and send it back out as rtsp
[19:26:00 CET] <mfwitten> Camera ---rtsp---> zoneminder; or, camera ----H.264----> ffmpeg ----RTSP----> zoneminder
[19:26:05 CET] <mfwitten> darkdrgn2k: ^^
[19:26:11 CET] <darkdrgn2k> pretty much
[19:26:49 CET] <darkdrgn2k> or change the zoneminder code somehow :P but the compile would be a pain lol
[19:27:04 CET] <darkdrgn2k> id love to post all this reverse engineering on the dlink forum
[19:27:22 CET] <darkdrgn2k> where people are complaining how krappy this camera is and support is like "uhh use the android app" lol
[19:27:37 CET] <mfwitten> darkdrgn2k: I'm still looking at that code though, because I didn't take a hard look
[19:29:02 CET] <darkdrgn2k> and the damn camera doesnt respond to syn packets on 554 (rtsp :( )
[19:35:32 CET] <mfwitten> darkdrgn2k: Yeah. I'm pretty sure the later stuff, the option-processing stuff, is what's being used, and it's hard-coded to handle only RTSP
[19:35:47 CET] <mfwitten> darkdrgn2k: As you say, you could just slip in the right code there directly and re-compile if you want
[19:36:41 CET] <mfwitten> darkdrgn2k: However, it would perhaps make more sense just to tell your ffmpeg program to send out RTSP, accessible from localhost:port
[19:37:11 CET] <darkdrgn2k> may be a fast way to a solution for now
[19:37:16 CET] <mfwitten> darkdrgn2k: You could use udp, too, to cut down on overhead
[19:37:24 CET] <mfwitten> darkdrgn2k: Well, what's your qualm? Unnecessary overhead?
[19:37:25 CET] <darkdrgn2k> so any idea how to do that? seems all the attempts i tried have failed so far
[19:37:53 CET] <darkdrgn2k> my only concern is to reduce cpu overhead as much as possible.
[19:38:04 CET] <mfwitten> darkdrgn2k: Actually.
[19:38:08 CET] <mfwitten> darkdrgn2k: There's another solution
[19:38:11 CET] <darkdrgn2k> o.O
[19:38:27 CET] <mfwitten> darkdrgn2k: Maybe you could just have ffmpeg pipe its data to a unix socket, and then open that unix socket as the source path
[19:39:20 CET] <mfwitten> darkdrgn2k: ffmpeg -f h264 .... -f h264 unix://<filepath>
[19:39:32 CET] <mfwitten> darkdrgn2k: Or something like that
[19:39:36 CET] <darkdrgn2k> hmm maybe
[19:39:41 CET] <darkdrgn2k> can we try udp or tcp first?
[19:39:59 CET] <mfwitten> darkdrgn2k: Really?
[19:40:02 CET] <darkdrgn2k> ( was thinking of sockets too but i think going udp/tcp may be easier for now )
[19:40:04 CET] <mfwitten> darkdrgn2k: This seems much better
[19:40:15 CET] <darkdrgn2k> i dont know how zoneminder will react to a non rtsp:// source
[19:40:49 CET] <mfwitten> darkdrgn2k: Well, the old code just opens a file, and the new stuff just warns if it doesn't know the rtsp method being used
[19:41:11 CET] <darkdrgn2k> well let me try
[19:41:18 CET] <darkdrgn2k> will only take a second to set it up
[19:41:21 CET] <mfwitten> darkdrgn2k: You wouldn't even need ffmpeg, potentially, though I'd suggest scaling the input image to make zone* run faster
[19:45:05 CET] <mfwitten> darkdrgn2k: ffmpeg -f h264 -i http://sfsadfsfsdkjhsfkjsdhf -f h264 -listen 1 unix:/tmp/socket
[19:45:11 CET] <mfwitten> darkdrgn2k: Then use /tmp/socket as the source path
[19:46:17 CET] <darkdrgn2k> Stream #0:0: Video: h264 (High), yuv420p(progressive), 1920x1080, 30 fps, 30 tbr, 1200k tbn, 60 tbc
[19:46:42 CET] <darkdrgn2k> zoneminder keeps dying
[19:46:42 CET] <darkdrgn2k> Got signal 15 (Terminated), exiting
[19:47:08 CET] <darkdrgn2k> hmm
[19:47:17 CET] <darkdrgn2k> ffprobe yielded /tmp/test1: No such device or address
[19:48:27 CET] <mfwitten> darkdrgn2k: Sorry, I think I gave you bad advice
[19:48:31 CET] <mfwitten> darkdrgn2k: Try this...
[19:49:38 CET] <mfwitten> darkdrgn2k: mkfifo /tmp/fifo
[19:49:47 CET] <mfwitten> darkdrgn2k: ffmpeg -f h264 -i http://sfsadfsfsdkjhsfkjsdhf -f h264 /tmp/fifo
[19:50:00 CET] <mfwitten> darkdrgn2k: add -y to that command
[19:50:11 CET] <mfwitten> darkdrgn2k: ffmpeg -y -f h264 -i http://sfsadfsfsdkjhsfkjsdhf -f h264 /tmp/fifo
[19:50:18 CET] <mfwitten> darkdrgn2k: Then use /tmp/fifo as the source path
[19:50:34 CET] <darkdrgn2k> unix:/tmp/fifo: Address already in use
[19:50:52 CET] <darkdrgn2k> wait me dumb
[19:52:02 CET] <darkdrgn2k> Priming capture from /tmp/fifo2 then Got signal 15 (Terminated), exiting
[19:52:06 CET] <darkdrgn2k> doesnt like sockets
[19:52:14 CET] <mfwitten> hmmm
[19:52:23 CET] <mfwitten> You didn't use unix:, right?
[19:52:27 CET] <darkdrgn2k> correct
[19:52:41 CET] <darkdrgn2k> there is only a drop down for tcp udp http or multicast
[19:52:42 CET] <mfwitten> darkdrgn2k: Ah, man! I thought we had it with that
[19:52:46 CET] <mfwitten> ok
[19:52:52 CET] <mfwitten> Then we know what to do
[19:53:24 CET] <darkdrgn2k> tcp or udp rtsp
[19:53:30 CET] <mfwitten> udp
[19:53:51 CET] <mfwitten> darkdrgn2k: There's no worry about bad transmission conditions
[19:54:03 CET] <darkdrgn2k> correct
[19:56:45 CET] <darkdrgn2k> i never had any luck getting ffmpeg to push out rtsp
[20:03:02 CET] <mfwitten> darkdrgn2k: The documentation is a little wonky.
[20:03:19 CET] <darkdrgn2k> not as wonky as zoneminders :P
[20:04:38 CET] <mfwitten> darkdrgn2k: It seems like either you need to run a separate RTSP server to which you can send data and which can then transmit it to zoneminder, or somehow set up something (like the fifo we tried) to act like a source for zoneminder (although this time delivering rtsp data).
[20:08:57 CET] <mfwitten> darkdrgn2k: Hey. When we tried the FIFO, you never got the message "Unknown method", right?
[20:10:05 CET] <mfwitten> darkdrgn2k: Maybe try the fifo again, but set the method to something unknown (e.g., "pleasejustwork")
[20:10:46 CET] <mfwitten> darkdrgn2k: Then the code won't try to set rtsp_transport, and maybe it will just open it as a file
[20:18:06 CET] <mfwitten> darkdrgn2k: https://github.com/revmischa/rtsp-server
[20:18:20 CET] <mfwitten> darkdrgn2k: See the README there
[20:20:02 CET] <mfwitten> darkdrgn2k: I would use `--clientport' to avoid running as root
[20:22:22 CET] <mfwitten> darkdrgn2k: So, run `rtsp-server --clientport 1234 --sourceport 4321' (or the like), and then `ffmpeg -f h264 -i http://sdfasfsdf -f rtsp -muxdelay 0.1 rtsp://127.0.0.1:4321/video'
[20:23:38 CET] <mfwitten> darkdrgn2k: And then tell zoneminder the source path is `rtsp://127.0.0.1:1234/video'
[20:24:02 CET] <darkdrgn2k> shouldnt setting zoneminder for UDP accept a udp stream?
[20:26:46 CET] <mfwitten> darkdrgn2k: And, I guess, zoneminder should be set up to use tcp
[20:26:59 CET] <darkdrgn2k> but if i do udp
[20:27:03 CET] <darkdrgn2k> cant i bypass the server?
[20:29:02 CET] <mfwitten> darkdrgn2k: I don't think ffmpeg acts as an RTSP server; it can act as a source for a server
[20:29:09 CET] <mfwitten> darkdrgn2k: or it can act as a client to a server
[20:30:37 CET] <mfwitten> camera ----H.264 over HTTP----> ffmpeg ----RTSP source----> rtsp-server ----RTSP service---> zoneminder
[20:32:53 CET] <darkdrgn2k> im wondering if ffmpeg can act as a server..
[20:33:43 CET] <mfwitten> darkdrgn2k: I'm pretty sure it cannot. It's a muxer that can send data to a server
[20:36:31 CET] <mfwitten> darkdrgn2k: Alternatively, did you try the fifo method again while setting the RTSP method to gibberish?
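For the record, a way to bypass an RTSP server entirely, assuming zoneminder's UDP source option will accept a raw MPEG-TS stream (untested; address, port, and camera URL are placeholders):

    ffmpeg -f h264 -i 'http://admin:@192.168.95.102/video/ACVS-H264.cgi?profileid=1' \
      -c:v copy -f mpegts 'udp://127.0.0.1:5000?pkt_size=1316'

The source path would then be udp://127.0.0.1:5000.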
[20:55:15 CET] <darkdrgn2k> i found another url that actually works!
[20:58:15 CET] <mfwitten> darkdrgn2k: Good
[20:58:31 CET] <darkdrgn2k> undocumented
[20:58:33 CET] <darkdrgn2k> as everything
[20:58:37 CET] <darkdrgn2k> kinda jittery but
[20:58:37 CET] <mfwitten> indeed
[20:58:47 CET] <mfwitten> What does it produce?
[20:58:54 CET] <mfwitten> What does that URL yield?
[20:59:35 CET] <darkdrgn2k> mpegts
[20:59:35 CET] <darkdrgn2k> Stream #0:0[0x100]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 16000 Hz, mono, fltp, 20 kb/s
[20:59:36 CET] <darkdrgn2k> Stream #0:1[0x101]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(progressive), 1920x1080, 30 fps, 30 tbr, 90k tbn, 60 tbc
[21:00:24 CET] <mfwitten> And so what are you doing with that?
[21:00:41 CET] <darkdrgn2k> feed it into zoneminder
[21:00:43 CET] <darkdrgn2k> and it picked it up
[21:00:54 CET] <darkdrgn2k> but it plays a few frames, then stalls for a second, plays a few frames, stalls.
[21:01:43 CET] <^Neo> can anyone point me to an example of AVIOContext usage where I'm writing data into it from an AVPacket's data field?
[21:01:46 CET] <mfwitten> darkdrgn2k: Basically, it just thinks it's a ".ts" file on the server?
[21:02:03 CET] <darkdrgn2k> i guess so
[21:02:56 CET] <darkdrgn2k> lol announces the length as Length: 99999999999 (93G) [video/mpeg]
[21:02:57 CET] <darkdrgn2k> lol
[21:03:13 CET] <mfwitten> :-)
[21:03:40 CET] <mfwitten> darkdrgn2k: Have you tried the fifo method again, albeit with a gibberish "RTSP" transport method?
[21:04:31 CET] <darkdrgn2k> not yet i will though
[21:04:32 CET] <darkdrgn2k> report back
[21:08:00 CET] <grayhatter> I'm using swresample, and when playing back the audio, it seems to play at the wrong sample rate. When I use swr_alloc_set_opts(..., in_rate / 4, ...); it sounds closer, but still distorted... I'm out of ideas, anyone have any thoughts?
[21:08:44 CET] <durandal_1707> grayhatter: make sure you use right sample format
[21:09:09 CET] <wfbarksdale> Hey folks, I am working on a c++ wrapper around muxing / demuxing using ffmpeg (2.8 blech). I am trying to initialize an output context for an mp4 container to mux to, but I am seeing the output "Codec for stream 0 does not use global headers but container format requires global headers"
[21:09:22 CET] <grayhatter> when I use the correct sample rate it plays bake super distorted, and what seems like REALLY sped up
[21:09:25 CET] <wfbarksdale> does anyone know what i need to set when initializing my streams to make this work?
[21:09:39 CET] <grayhatter> as I said, when I use the wrong rate, it sounds closer to correct, but still distorted
[21:10:16 CET] <grayhatter> plays back*
[21:11:30 CET] <mfwitten> darkdrgn2k: I don't know whether the GUI allows for changing things, but the code suggests it loads options (like your desired `-f h264') from a MySQL database: https://github.com/ZoneMinder/zoneminder/blob/9f588d5758b57c187f71a348370f9bf077a5848f/src/zm_monitor.cpp#L2077
[21:13:04 CET] <mfwitten> darkdrgn2k: If you can manipulate the MySQL database entry for this monitor you're setting up, then you can pass whatever you want
[21:14:06 CET] <mfwitten> darkdrgn2k: https://github.com/ZoneMinder/zoneminder/blob/9f588d5758b57c187f71a348370f9bf077a5848f/src/zm_monitor.cpp#L67
[21:14:20 CET] <darkdrgn2k> ill have to check thank you
[21:14:22 CET] <mfwitten> darkdrgn2k: That's the table layout and SQL needed
[21:18:42 CET] <grayhatter> figured it out, I needed to convert the number of samples * NUM_CHANNELS to bytes, then to frames for alsa
[21:44:56 CET] <mfwitten> darkdrgn2k: I take it back about being able to add `-f h264' somewhere; that appears to be an `ffmpeg' option. Look at the 3rd argument here: https://github.com/ZoneMinder/zoneminder/blob/51f0e7e5c8320d7b855ae3bff8ed442539233c2f/src/zm_ffmpeg_camera.cpp#L354
[21:45:15 CET] <mfwitten> darkdrgn2k: It's NULL. That's the format you want to use (i.e., the `h264' demuxer)
[21:46:07 CET] <mfwitten> darkdrgn2k: So, either a gibberish RTSP transport method combined with a FIFO will work, or you'll have to set up the RTSP server
[21:46:17 CET] <mfwitten> darkdrgn2k: (or edit the source code)
[21:46:57 CET] <mfwitten> darkdrgn2k: (To clarify, the `NULL' means that no format can be specified explicitly)
[22:44:36 CET] <mfwitten> darkdrgn2k: I don't know whether you're still working on this problem, but I've come up with another idea that seems obvious in retrospect.
[22:45:53 CET] <mfwitten> darkdrgn2k: You could read your H.264 stream into ffmpeg, and write an mpegts stream out to the fifo; that way, when you open the fifo as a source path, it will have enough container information to supply the missing format info
[22:46:33 CET] <darkdrgn2k> mfwitten, hmm interesting i wanna play with those just to see what happens :)
[22:46:43 CET] <darkdrgn2k> it's a great solution to other issues i had before where ffmpeg would come in
[22:46:55 CET] <darkdrgn2k> for now im trying to solve another zoneminder related issue :/
[22:47:10 CET] <darkdrgn2k> seems in "pass through" mode it jitters, in encode mode it's fine ~*shrug*~
[22:47:45 CET] <mfwitten> darkdrgn2k: mkfifo /tmp/fifo
[22:48:07 CET] <mfwitten> darkdrgn2k: ffmpeg -y -f h264 -i http://asfasdfsdafsd -vsync passthrough -enc_time_base -1 -c:v copy -bsf:v h264_mp4toannexb -f mpegts /tmp/fifo
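Since MPEG-TS is self-describing, the reader side should no longer need a forced format; a quick sanity check of what comes out of the fifo:

    ffprobe /tmp/fifo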
[23:10:34 CET] <friendofafriend> If you had a video clip and you wanted to duplicate its encoding settings, what tool would you use? ffprobe?
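As a starting point, ffprobe dumps the container and per-stream parameters (codec, profile, pixel format, rates); note the original encoder's exact settings survive only if the encoder wrote them into the file:

    ffprobe -v error -show_format -show_streams input.mp4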
[00:00:00 CET] --- Thu Feb 7 2019