[Ffmpeg-devel-irc] ffmpeg.log.20160421
burek
burek021 at gmail.com
Fri Apr 22 02:05:01 CEST 2016
[00:02:38 CEST] <johnnny22-afk> that's a bummer :P
[00:04:58 CEST] <johnnny22-afk> And if I recombine the head after ?
[00:05:21 CEST] <johnnny22-afk> The advantage of mpegts is that I can just cut the head of the file even while it's playing.
[00:05:39 CEST] <johnnny22-afk> as long as I don't remove the part where the playhead is at.
[00:05:49 CEST] <kepstin> yeah, as far as I know that's the only format which supports that
[00:06:29 CEST] <johnnny22-afk> only format that supports audio & video with sync'ing right ? obviously I could probably go with some rawvideo and rawaudio !? hehe
[00:06:33 CEST] <kepstin> since it is after all designed as a streaming format for broadcasts, where someone can turn on a TV at any time and start at the first bit their receiver picks up
[00:06:37 CEST] <kepstin> heh
[00:06:50 CEST] <johnnny22-afk> what about that 'nut' format ?
[00:07:07 CEST] <kepstin> i'm pretty sure that also has some required global headers at the start
[00:07:08 CEST] <johnnny22-afk> nvm, it seems to say nonseekable
[00:07:37 CEST] <johnnny22-afk> what's that other format that ffserver uses hmm
[00:08:05 CEST] <johnnny22-afk> FFM2
[00:08:47 CEST] <kepstin> i dunno exactly what you're trying to do, but using a custom player that connects to a server which handles all the seeking, segmentation/etc and provides a continuous mpeg-ts (or whatever) stream might be something to look at.
[00:09:53 CEST] <johnnny22-afk> very true, I did ponder about that too. I might revisit that idea.
[00:10:38 CEST] <johnnny22-afk> But I'd probably have to go with rtsp to handle seeking etc.. right ?
[00:11:04 CEST] <kepstin> looks like ffm/ffm2 is just an internal circular buffer format for ffserver, it's not supported by any other tools and isn't even compatible across ffmpeg versions :/
[00:11:19 CEST] <kepstin> or have out-of-band signalling in some other way, yeah
[00:12:01 CEST] <johnnny22-afk> And the best bet is probably some UDP stream straight to the player.
[00:12:47 CEST] <kepstin> if you're able to, sure :) Use rtp for that unless you have a good reason to do otherwise.
[00:18:28 CEST] <johnnny22-afk> I'll think of that
[01:08:32 CEST] <jacobwg> Does anyone know why transcoding this mkv into an mp4 with an additional audio track is causing that additional audio track to be in a different alternate group? And do you know how I might go about making both tracks be in the same alt group? https://gist.github.com/jacobwgillespie/baf9223ac2c58c5bd73e373dda137395
[01:13:18 CEST] <pzich> jacobwg: I haven't dealt with multiple audio tracks, so I can't really, but I recommend checking out these: https://trac.ffmpeg.org/wiki/AudioChannelManipulation and https://trac.ffmpeg.org/wiki/Map
[01:14:58 CEST] <jacobwg> pzich: I'll take a look - this "alternate group" thing is specific to the container I believe - I'm digging into the source now to see how it's computed: https://github.com/FFmpeg/FFmpeg/blob/master/libavformat/movenc.c#L2419
[01:17:03 CEST] <pzich> ah yeah, that's far beyond my area of knowledge. good luck :-/
[01:27:05 CEST] <johnnny22-afk> kepstin: if i was to go with a streaming server solution, wouldn't that server have to deal with those same issues ?
[01:28:26 CEST] <kepstin> Sure, but you can use formats that the clients wouldn't understand, you can get cross segment seeking correct, etc.
[01:32:05 CEST] <johnnny22-afk> Seems like still a hassle to make sure the server can support that. But you do have a point that it makes the player side simpler. Though I'm still not sure of a server that would nicely support seeking within multi-segment mpegts's or other formats. I guess I could use mpeg-dash too in such a case on the server-side.
[01:32:52 CEST] <johnnny22-afk> But, out of the box, what server supports this nicely, while supporting the possibility to remove segments or pieces at the head of the file to recreate a sliding window system.
[01:33:54 CEST] <johnnny22-afk> Seems like I'd have to implement quite a bit.
[01:51:39 CEST] <pfelt1> say i've got three ffmpeg commands. two of them create streams of data and then the third pulls that data to do something with it. is there any better way than to use a fifo between them?
[04:55:01 CEST] <fling> Downgrading to 4.1.21 solved the issue. Is something wrong with v4l2, or with something else?
[06:42:14 CEST] <Prelude2004c> hey guys.. good evening to everyone
[06:42:41 CEST] <Prelude2004c> question.. silencedetect.. anyone familiar with it ? silencedetect=n=-50dB:d=5 for example.. i want to say if it detects silence for longer than say 5 seconds.. write to a log file or something.. how does one do that
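A rough sketch of the usual approach: silencedetect only reports its detections in ffmpeg's log output on stderr, so discarding the decoded output and redirecting stderr gives a file containing the silence_start/silence_end lines (d=5 already means "at least 5 seconds"; file names here are placeholders).

    ffmpeg -i input.wav -af silencedetect=n=-50dB:d=5 -f null - 2> silence.log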
[09:45:16 CEST] <termos> is there a way to avoid avformat_open_input reading from the input before I am ready? My init phase takes some time, and by the time I'm finished I get a huuuuge burst of frames because somehow they are buffered up when I try to av_read_frame. Ideas?
[10:09:23 CEST] <t4nk407> Hey :) I got the same problem this guy has : http://ffmpeg-users.933282.n4.nabble.com/wall-clock-time-s-in-setpts-filter-RTCSTART-deprecated-td4657378.html
[10:09:29 CEST] <t4nk407> any suggestions ?
[10:10:02 CEST] <ggugi> hi everyone
[10:45:49 CEST] <Xen0_> is there a reason i always get this error?
[10:45:51 CEST] <Xen0_> Stream #0:3 -> #0:0 (dvd_subtitle (dvdsub) -> ssa (native))
[10:45:51 CEST] <Xen0_> Press [q] to stop, [?] for help
[10:45:51 CEST] <Xen0_> [ssa @ 0xfe4180] Only SUBTITLE_ASS type supported.
[10:45:51 CEST] <Xen0_> Subtitle encoding failed
[10:46:13 CEST] <Xen0_> everything i read says it should work fine but never does
[11:03:52 CEST] <furq> Xen0_: i don't think you can automatically convert dvd subtitles to a text subtitle format
[11:05:38 CEST] <Xen0_> oh
[11:07:13 CEST] <Xen0_> hmm
[11:09:12 CEST] <furq> there are a million different windows tools that will do it
[11:09:28 CEST] <furq> i think subextractor is the cool one at the moment but i've not had to do it in years
[11:09:46 CEST] <furq> as far as *nix goes i don't know of anything other than avidemux
[11:10:09 CEST] <Xen0_> yea, i'm on linux
[11:10:25 CEST] <Xen0_> ffmpeg -i input.mkv -filter_complex "[0:v][0:s]overlay[v]" -map "[v]" -map 0:a <output options> output.mkv
[11:10:33 CEST] <Xen0_> if i use this
[11:10:49 CEST] <Xen0_> and say video is 0:1
[11:11:06 CEST] <Xen0_> do i replace the other [v] with 1
[11:11:20 CEST] <t4nk407> Hey :) I got the same problem this guy has : http://ffmpeg-users.933282.n4.nabble.com/wall-clock-time-s-in-setpts-filter-RTCSTART-deprecated-td4657378.html
[11:11:21 CEST] <furq> no
[11:12:26 CEST] <furq> Xen0_: i'm pretty sure mkv supports dvd subtitles so you should be able to use -c:s copy and avoid burning them in
[11:13:21 CEST] <Xen0_> i need them burned in for my end usage is the issue
[11:13:44 CEST] <Xen0_> my final output is webm encodes
[11:14:24 CEST] <furq> well using video filters will force a transcode, so you probably want to encode it straight to webm
[11:14:43 CEST] <furq> unless one of your output options is -q:v 0
[11:16:01 CEST] <Xen0_> im ok with reencoding after i get them burned in
[11:16:30 CEST] <Xen0_> but when i try it with -vf subtitle=foo.mkv i still get an error
[11:20:45 CEST] <furq> that only works with text subtitles
[11:22:18 CEST] <Xen0_> ahh
[11:22:36 CEST] <Xen0_> explains it
[11:24:39 CEST] <Xen0_> i think i just got the overlay filter to work
[11:25:18 CEST] <Xen0_> ill have to reencode to webm still but im ok with that
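A sketch of what the above amounts to in one pass: burning the image-based subtitle stream in with the overlay filter from the earlier command and encoding straight to webm, as furq suggests (file names, stream indices and rate-control settings are placeholders to adjust).

    ffmpeg -i input.mkv -filter_complex "[0:v][0:s:0]overlay[v]" \
        -map "[v]" -map 0:a -c:v libvpx -crf 10 -b:v 1M -c:a libvorbis output.webm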
[11:38:35 CEST] <t4nk407> is there a way, to start 4 streams with one command, of which one is the upper left, one the upper right, one the lower left and one the lower right of the original video ?
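One reading of that question, as a sketch: split the decoded video four ways, crop each copy to one quadrant, and map each to its own output. The outputs here are files, but they could equally be four network streams; names are placeholders.

    ffmpeg -i input.mp4 -filter_complex \
        "split=4[a][b][c][d]; \
         [a]crop=iw/2:ih/2:0:0[tl]; [b]crop=iw/2:ih/2:iw/2:0[tr]; \
         [c]crop=iw/2:ih/2:0:ih/2[bl]; [d]crop=iw/2:ih/2:iw/2:ih/2[br]" \
        -map "[tl]" tl.mp4 -map "[tr]" tr.mp4 -map "[bl]" bl.mp4 -map "[br]" br.mp4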
[12:38:24 CEST] <Godspeed990> Hi, I am trying to push a video to a video decoder. I have an mp4 file. I need a PES file. Which ffmpeg option should I use? I am currently extracting an h264 stream, which I can't play.
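For what it's worth, H.264 extracted from an mp4 is normally in AVCC form without start codes, which is usually why the bare stream won't play; the h264_mp4toannexb bitstream filter adds the start codes, and remuxing into MPEG-TS wraps the video in PES packets. A sketch, with placeholder file names:

    ffmpeg -i input.mp4 -an -c:v copy -bsf:v h264_mp4toannexb -f h264 video.264
    ffmpeg -i input.mp4 -an -c:v copy -bsf:v h264_mp4toannexb -f mpegts video.ts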
[12:44:28 CEST] <Eiken_> anyone know if its possible to use CDL values with ffmpeg somehow?
[14:14:48 CEST] <t4nk421> hey :) any Idea, why this won't work : ffmpeg -f dshow -video_size 1280x720 -rtbufsize 999999k -i video="screen-capture-recorder" -r 20 -c copy -threads 4 -f mpegts udp://239.255.1.1:1234
[14:15:28 CEST] <t4nk421> I don't want to transcode the stream, because I think that would cause too high latency
[14:23:10 CEST] <furq> t4nk421: mpegts can't contain rawvideo or pcm audio
[14:23:14 CEST] <furq> which i assume is what you get out of dshow
[14:27:54 CEST] <t4nk421> ah that is it , thank you ;)
[14:44:58 CEST] <t4nk421> are there other muxers similar to mpegts which can carry raw streams ?
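One candidate is NUT, discussed earlier: it was designed to be writable to a non-seekable output and can carry rawvideo and PCM, although whether it is practical over a network depends on the raw bitrate. A sketch reusing the dshow capture from above, piped into a local player:

    ffmpeg -f dshow -video_size 1280x720 -i video="screen-capture-recorder" \
        -c:v rawvideo -f nut pipe:1 | ffplay -f nut -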
[15:32:19 CEST] <ek_> Hello, a colleague is trying to install ffmpeg using homebrew on his Mac, but the ffmpeg.org site seems to be refusing connections on port 443; even the mailing list links on that site are broken since they use HTTPS. Does anyone know if there is a way around this problem?
[15:33:13 CEST] <furq> https is working fine for me
[15:34:44 CEST] <ek_> Hmm. Maybe the problem is on my end, then. Thank you.
[15:36:05 CEST] <maziar> i want to create a manifest like this http://pastebin.com/U5Q92qUq , i can create *.m3u8 but i don't know how to create the master.m3u8 that refers to the different qualities, please kindly help me
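For reference, the hls muxer of that vintage only writes the per-quality media playlists, so as far as I know the master playlist is normally a small hand-written text file pointing at them. A sketch, where the paths and BANDWIDTH/RESOLUTION values are only illustrative:

    #EXTM3U
    #EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
    360p/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
    720p/index.m3u8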
[15:39:41 CEST] <ek_> quit
[15:58:49 CEST] <neuro_sys> I have a set of PNGs with a black background and only a textured polygon moving on top. I'd like to overlay this on top of a video, how would I go about doing that?
[15:59:32 CEST] <furq> neuro_sys: https://ffmpeg.org/ffmpeg-filters.html#colorkey
[16:01:01 CEST] <neuro_sys> furq: thanks!
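A sketch of the colorkey route furq points at: key out the black background of the PNG sequence, then overlay the result on the video. File names, the frame rate, and the similarity/blend values are guesses to be tuned.

    ffmpeg -i background.mp4 -framerate 25 -i frame_%04d.png -filter_complex \
        "[1:v]colorkey=black:0.1:0.1[fg];[0:v][fg]overlay=shortest=1[out]" \
        -map "[out]" -map 0:a? out.mp4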
[16:09:04 CEST] <neuro_sys> Could someone explain the terms used in here? I couldn't make sense of the manual description: ffmpeg -i video.mkv -i image.png -filter_complex '[0:v][1:v]overlay[out]' -map '[out]' out.mkv
[16:09:18 CEST] <neuro_sys> [0:v] means the first input's video stream
[16:09:27 CEST] <neuro_sys> [1:v] means the second input's video stream
[16:09:58 CEST] <c_14> technically, streams. But yes
[16:10:01 CEST] <neuro_sys> overlay is the overlay filter. So how do filters get their parameters?
[16:10:06 CEST] <neuro_sys> by being prefixed with streams?
[16:10:21 CEST] <c_14> filters take inputs and have outputs
[16:10:29 CEST] <c_14> they also have options which have defaults if not explicitly set
[16:10:55 CEST] <neuro_sys> the inputs are the ones inside brackets to the left of filter names, and the output is to the right?
[16:11:04 CEST] <c_14> yep
[16:11:50 CEST] <neuro_sys> so parameters are indexed? because we didn't give those streams any names for the overlay filter to make sense.
[16:12:09 CEST] <neuro_sys> https://ffmpeg.org/ffmpeg-filters.html#overlay-1
[16:12:14 CEST] <neuro_sys> It doesn't speak about ordering here
[16:12:28 CEST] <c_14> it does
[16:12:33 CEST] <c_14> >It takes two inputs and has one output. The first input is the "main" video on which the second input is overlaid.
[16:12:39 CEST] <neuro_sys> oh yes silly me
[16:15:01 CEST] <neuro_sys> In this example: -filter_complex "[0:v][1:v] overlay=25:25:enable='between(t,0,20)'" I see that it's possible to assign values to filters using equals sign?
[16:15:14 CEST] Action: neuro_sys scans through the manual to find the meaning of that
[16:15:27 CEST] <c_14> those are filter options
[16:15:59 CEST] <c_14> 4.1 Filtergraph Syntax
[16:16:04 CEST] <c_14> https://ffmpeg.org/ffmpeg-filters.html#Filtergraph-syntax-1
[16:18:37 CEST] <neuro_sys> beautiful
[16:24:00 CEST] <neuro_sys> http://i.imgur.com/Sn2PyKS.gif
[16:24:02 CEST] <neuro_sys> xD
[16:24:21 CEST] <neuro_sys> it's the best tool
[16:27:07 CEST] <maziar> i want to create a manifest like this http://pastebin.com/U5Q92qUq , i can create *.m3u8 but i don't know how to create the master.m3u8 that refers to the different qualities, please kindly help me
[16:35:16 CEST] <m3gab0y> hey all :) anyone up for a command line challenge? I want to setup complex filter overlays but can't get it right
[17:00:45 CEST] <anadon_> I'm trying to use a FILE* as the input for decoding an H264-encoded video (so stdin and files can both be used), but all the examples I'm pulling up rely on a function call which takes a filepath. Where should I be looking to do this?
[17:07:02 CEST] <neuro_sys> what is a filter that does the inverse of colorkey?
[17:08:14 CEST] <c_14> presumably overlay over a color?
[17:08:50 CEST] <neuro_sys> c_14: indeed
[17:09:21 CEST] <c_14> then there's your answer
[17:09:50 CEST] <neuro_sys> yeah thanks, I'm checking
[17:15:53 CEST] <Mavrik> anadon_, you're probably gonna have to create your own avio_context and implement read() calls
[17:16:27 CEST] <anadon_> Mavrik: That sounds hard. Is it?
[17:17:05 CEST] <Mavrik> Well, you call https://ffmpeg.org/doxygen/3.0/avio_8h.html#a853f5149136a27ffba3207d8520172a5 to allocate the avio_context
[17:17:20 CEST] <Mavrik> And you'll need a read_packet function that will read data.
[17:17:31 CEST] <Mavrik> And then set that aviocontext to "pb" variable in avformat_context.
[17:19:11 CEST] <anadon_> Mavrik: Do you think named pipes might be a better way for me to go about this?
[17:19:53 CEST] <Mavrik> How would named pipes help?
[17:20:03 CEST] <Mavrik> Are you calling the ffmpeg binary or are you using libav API?
[17:20:38 CEST] <kepstin> on linux, you might be able to hack it by opening /dev/fd/# as a file
[17:21:01 CEST] <anadon_> Mavrik: API -- it looks like I'd need to know many details of the h264 format, which really isn't worth it.
[17:21:12 CEST] <Mavrik> Huh, what?
[17:21:34 CEST] <Mavrik> Named pipes seem like a convoluted solution when you have the AVIO API right there for that exact purpose.
[17:21:42 CEST] <Mavrik> No idea why you'd have to know the details.
[17:21:50 CEST] <Mavrik> You'll just get a read (or seek) call and you react on it.
[17:22:27 CEST] <anadon_> kepstin: Doesn't that have files that behave a little differently when it comes to stuff like seeking? I'd have to know if any file operations on the fd/# would be adversely affected.
[17:23:39 CEST] <anadon_> Mavrik: Could the read function work as a pull of 1 byte at a time from a FILE*?
[17:24:24 CEST] <kepstin> anadon_: re-opening an fd via the /dev/fd/# interface is equivalent to duplicating the file handle, so stuff like seeking, etc. will be independent.
[17:25:43 CEST] <Mavrik> anadon_, not sure I understand the question
[17:25:46 CEST] <Mavrik> look at the signature of the method
[17:25:49 CEST] <Mavrik> you get a buffer and size
[17:25:53 CEST] <Mavrik> and you fill the buffer with file data
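A minimal sketch of what Mavrik is describing, assuming the input is a plain FILE* such as stdin; error handling is stripped out and the function names (read_packet, open_from_file) are made up here.

    #include <stdio.h>
    #include <libavformat/avformat.h>
    #include <libavutil/mem.h>

    /* read callback: libavformat hands us a buffer and a size, we fill it */
    static int read_packet(void *opaque, uint8_t *buf, int buf_size)
    {
        FILE *f = opaque;
        size_t n = fread(buf, 1, buf_size, f);
        return n > 0 ? (int)n : AVERROR_EOF;
    }

    static AVFormatContext *open_from_file(FILE *f)
    {
        const int bufsz = 4096;
        uint8_t *iobuf = av_malloc(bufsz);
        /* 0 = read-only context; f is passed back to read_packet as opaque */
        AVIOContext *avio = avio_alloc_context(iobuf, bufsz, 0, f,
                                               read_packet, NULL, NULL);
        AVFormatContext *fmt = avformat_alloc_context();
        fmt->pb = avio;                               /* the "pb" field mentioned above */
        avformat_open_input(&fmt, NULL, NULL, NULL);  /* no filename needed */
        return fmt;
    }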
[17:26:03 CEST] <anadon_> kepstin: yikes. So that may or may not break. I wonder why there's no support for using a FILE* instead of a file path, unless the URL case is really that common.
[17:26:24 CEST] <kepstin> anadon_: I really can't think of any reason why you'd want to have ffmpeg reading from a file *and* read from it yourself
[17:26:28 CEST] <kepstin> that doesn't even make sense
[17:27:39 CEST] <anadon_> Mavrik: intuitively, the function suggests some weird parsing magics.
[17:27:48 CEST] <Mavrik> ?
[17:28:23 CEST] <anadon_> kepstin: What do you mean? I'm used to FILE* as a generic handle.
[17:29:04 CEST] <anadon_> Mavrik: I'm working through the documentation -- this is my first project in libav and I'm running into roadbumps.
[17:29:24 CEST] <kepstin> FILE* isn't really a generic handle, that's just a C library interface around fds, which are the actual generic handle :)
[17:31:55 CEST] <anadon_> kepstin: 'kay
[17:31:58 CEST] <t4nk281> hi trying to use ffmpeg3 on windows 7 ¿ where is the .exe file ????
[17:33:01 CEST] <t4nk281> eoeoeo not a difficult question ...
[17:33:14 CEST] <kepstin> but yeah, if you're gonna be using libav* to read media files, you should be letting the library handle all of the io in most cases. Reading from stdin is a bit of a special case - having libav* open /dev/fd/0 aka /dev/stdin and then *not touching stdin from your code* might be best? I'm not totally sure.
[17:33:22 CEST] <anadon_> kepstin: http://images.wikia.com/hunterxhunter/images/8/80/Alluka_nanika.png
[17:33:56 CEST] Action: kepstin hasn't seen HxH, so the reference goes over his head ;)
[17:34:00 CEST] <t4nk281> please ... I am in a hurry
[17:34:19 CEST] <anadon_> kepstin: The character only says " 'kay "
[17:34:42 CEST] <kepstin> t4nk281: I assume you probably downloaded the source code. If you want a build, try https://ffmpeg.zeranoe.com/builds/
[17:34:58 CEST] <kepstin> t4nk281: and random people in this chat channel are not obligated to provide you support.
[17:35:03 CEST] <furq> kepstin: you're way too nice
[17:36:24 CEST] <t4nk281> Thank you : "ffmpeg-3.0.tar.bz2" seems not to be a build ?
[17:36:44 CEST] <c_14> no, that's the source
[17:37:05 CEST] <t4nk281> ok thanks again, I'll go to your link
[17:38:46 CEST] <t4nk281> bye !
[18:08:56 CEST] <anadon_> kepstin: in sws_getContext, arguments 3 and 6 take something like "AV_PIX_FMT_YUV420P" -- hardcoding this is probably a bad idea. What's the pointer path in AVCodecContext or AVFormatContext?
[18:10:29 CEST] <JEEB> http://ffmpeg.org/doxygen/trunk/structAVCodecContext.html
[18:10:41 CEST] <JEEB> ctrl+f AV_PIX_FMT
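In other words, the source format comes from whatever the decoder context reports. A sketch, where dec_ctx stands for whichever AVCodecContext was opened for the video stream:

    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>

    static struct SwsContext *make_scaler(const AVCodecContext *dec_ctx)
    {
        /* source geometry/format taken from the decoder instead of hardcoding */
        return sws_getContext(dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
                              dec_ctx->width, dec_ctx->height, AV_PIX_FMT_RGB32,
                              SWS_BILINEAR, NULL, NULL, NULL);
    }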
[18:28:34 CEST] <anadon_> What is the correct include path for avformat.h? It doesn't seem to be under avformat/ or avutil/ .
[18:29:20 CEST] <thebombzen> anadon_: try libavformat/avformat.h
[18:29:21 CEST] <furq> it should be in libavformat
[18:29:26 CEST] <thebombzen> oh sniped
[18:29:32 CEST] <furq> ;_;
[18:29:56 CEST] <thebombzen> thx for reminding me to rebuild ffmpeg
[18:30:08 CEST] <furq> i wonder if freebsd have updated yet
[18:30:17 CEST] <anadon_> Thanks!
[18:33:17 CEST] <anadon_> "avcodec_alloc_frame()" is in the libavutil/format.h include, isn't it?
[18:33:52 CEST] <anadon_> nvm, got it
[18:42:01 CEST] <anadon_> I know I'm doing something dumb here....
[18:42:36 CEST] <anadon_> does sws_getContext() or sws_scale() throw "[swscaler @ 0xafb940] bad dst image pointers" ?
[19:06:19 CEST] <anadon_> Can someone tell me what I'm doing dumb here? It's causing a segfault later in the program, likely because I'm using something incorrectly. http://pastebin.com/Czgw58QG
[19:07:23 CEST] <DHE> outdata[1] on line 19 isn't valid
[19:08:26 CEST] <anadon_> Howso?
[19:09:26 CEST] <anadon_> yes, and doesn't the RGB32 codec only write out to channel 0?
[19:10:07 CEST] <anadon_> and then...
[19:10:08 CEST] <DHE> what's on line 19?
[19:10:09 CEST] <anadon_> yup
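For context, a sketch of how the destination side of sws_scale() is usually set up for a packed format like AV_PIX_FMT_RGB32: the image lives in a single plane, so only index 0 of the data/linesize arrays is meaningful, which is why writing through outdata[1] corrupts memory. The helper and its names are ours, not from the pasted code.

    #include <libavutil/frame.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/mem.h>
    #include <libswscale/swscale.h>

    /* Convert one decoded frame to packed RGB32; caller frees the result
     * with av_freep().  sws, dst_w and dst_h come from the caller. */
    static uint8_t *frame_to_rgb32(struct SwsContext *sws, const AVFrame *frame,
                                   int dst_w, int dst_h, int *out_linesize)
    {
        uint8_t *dst_data[4] = { NULL };
        int dst_linesize[4] = { 0 };

        /* allocates a correctly aligned buffer and fills data/linesize for us */
        if (av_image_alloc(dst_data, dst_linesize, dst_w, dst_h,
                           AV_PIX_FMT_RGB32, 1) < 0)
            return NULL;

        sws_scale(sws, (const uint8_t * const *)frame->data, frame->linesize,
                  0, frame->height, dst_data, dst_linesize);

        *out_linesize = dst_linesize[0];
        return dst_data[0];   /* single packed plane: only [0] is valid */
    }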
[19:10:53 CEST] <anadon_> Let's play 'how much sleep did I really get last night'
[19:11:26 CEST] <DHE> oh geez, I'll be playing that game tomorrow...
[19:18:49 CEST] <anadon_> Any idea if sws_getContext() has any issue converting to strange resolutions? (32x18)
[19:19:43 CEST] <DHE> if this is linux, compile with -g and run under valgrind
[19:45:15 CEST] <anadon_> DHE: I'm still getting corrupted frames back.
[19:51:00 CEST] <anadon_> I need a second set of eyes on this. It should be easy: http://pastebin.com/3GwKxwqZ
[19:59:15 CEST] <anadon_> Relocated -- did I miss anything?
[20:04:09 CEST] <jack54> Can anyone help me with capturing all unique frames from an OS X display to a QuickTime compatible H.264 file?
[20:04:28 CEST] <jack54> I already have: ffmpeg -framerate 120 -f avfoundation -i "1" -c:v libx264 -qp 18 -preset ultrafast out.mkv
[20:05:14 CEST] <jack54> I think I need to add a decimate filter that cuts out duplicate frames and tweak the output so it's a 60 FPS movie in 420/NV12 format that QuickTime understands
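One possible starting point for that, untested here: let mpdecimate drop the repeated frames, keep the output variable frame rate, and force a 4:2:0 pixel format QuickTime accepts; the qp and preset are the values jack54 already uses, and the output name is a placeholder.

    ffmpeg -f avfoundation -framerate 120 -pixel_format nv12 -i "1" \
        -vf mpdecimate -vsync vfr \
        -c:v libx264 -preset ultrafast -qp 18 -pix_fmt yuv420p \
        -movflags +faststart out.mov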
[20:32:18 CEST] <anadon_> jack54: mind taking a look at my code? http://pastebin.com/qPFq6eW1
[20:32:28 CEST] <anadon_> I'm getting corrupted frames
[20:33:18 CEST] <jack54> anadon_: I can't help you -- I am just stuck with a question myself :)
[21:04:24 CEST] <utack> i guess this is pretty stale and will not be addressed? https://trac.ffmpeg.org/ticket/1885
[21:04:33 CEST] <utack> not that it is high priority or anything, just wondering
[21:24:16 CEST] <andrey_utkin> is concatdec with "-auto_convert 0" supposed to fail on mp4 files? it does: test script: https://gist.github.com/andrey-utkin/53c33e11b68eaa54d67aa25aaa40dd58 output: https://gist.github.com/andrey-utkin/86ca4c63e8340b015edf4f538ba08456
[21:24:32 CEST] <prelude2004cZzzz> hey guys.. looking for help.. can someone tell me if this looks right ? http://pastebin.com/y34iadDv >> i keep getting random audio sync issues and things come out of sync for some reason
[21:25:37 CEST] <prelude2004cZzzz> ? that is what i did
[21:27:20 CEST] <sfan5> um
[21:27:24 CEST] <relaxed> is there a reason you're not using nvenc through ffmpeg?
[21:27:30 CEST] <sfan5> ffmpeg can support nvenc as an encoder
[21:27:35 CEST] <sfan5> without any piping workaround
[21:29:34 CEST] <andrey_utkin> prelude2004cZzzz, no exact command line and no command output at all
[21:31:11 CEST] <prelude2004cZzzz> yes, i found the quality works better with the library directly
[21:31:19 CEST] <prelude2004cZzzz> shows great improvement over the ffmpeg version
[21:31:29 CEST] <prelude2004cZzzz> but i have tried with both and still the same issue
[21:31:39 CEST] <prelude2004cZzzz> something is throwing off the sound sync
[21:32:02 CEST] <prelude2004cZzzz> am i not preserving timestamps correctly?
[21:32:04 CEST] <relaxed> did you look at the options listed from "ffmpeg -h encoder=nvenc" ?
[21:32:07 CEST] <sfan5> that's most likely caused by that piping workaround
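For reference, a bare-bones form of what relaxed and sfan5 are suggesting: encoding with nvenc inside a single ffmpeg process, so audio and video share one set of timestamps instead of going through a pipe. Input/output names and rate control are placeholders.

    ffmpeg -i input.ts -c:v nvenc -preset hq -b:v 4M -c:a copy -f mpegts output.ts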
[23:11:29 CEST] <Guster_> hi there
[23:11:47 CEST] <Guster_> i have little problem with ffmpeg on ubuntu
[23:12:19 CEST] <Guster_> does someone have some time for me please? :D
[23:13:03 CEST] <sfan5> just ask your question
[23:13:06 CEST] <sfan5> someone will help if they can
[23:13:14 CEST] <Guster_> okey
[23:13:44 CEST] <Guster_> when i try to feed my test stream from a web cam to the server in my local network
[23:14:10 CEST] <Guster_> i get an error message and the feed breaks
[23:15:00 CEST] <Guster_> Thu Apr 21 23:12:57 2016 127.0.0.1 - - [POST] "/feed1.avi HTTP/1.1" 200 1282
[23:15:00 CEST] <Guster_> this is message on ffserver
[23:15:00 CEST] <Guster_> av_interleaved_write_frame(): Broken pipe
[23:15:00 CEST] <sfan5> damn bot
[23:15:00 CEST] <sfan5> some more console output would be nice
[23:15:08 CEST] <sfan5> yeah there it is
[23:15:14 CEST] <Guster_> ok i will paste
[23:15:56 CEST] <Guster_> there
[23:15:57 CEST] <Guster_> http://pastebin.com/fXw9qw6V
[23:16:00 CEST] <Guster_> ^^
[23:17:03 CEST] <sfan5> does ffserver output an error?
[23:17:26 CEST] <Guster_> nope
[23:17:31 CEST] <Guster_> Thu Apr 21 23:12:57 2016 127.0.0.1 - - [POST] "/feed1.avi HTTP/1.1" 200 1282
[23:17:35 CEST] <Guster_> header 200
[23:17:48 CEST] <Guster_> and some number after
[23:19:23 CEST] <Guster_> i have to go afk for 15minutes
[23:19:27 CEST] <Guster_> brb
[23:20:13 CEST] <sfan5> hm
[23:21:14 CEST] <sfan5> ah
[23:21:14 CEST] <sfan5> Guster_: [tcp @ 0x29d4fc0] Connection to tcp://localhost:8090 failed (Connection refused)
[23:29:21 CEST] <juls> hi all
[23:36:58 CEST] <Guster_> sfan5: here i am
[23:41:40 CEST] <Guster_> can anyone else help me please?
[00:00:00 CEST] --- Fri Apr 22 2016