[Ffmpeg-devel-irc] ffmpeg.log.20160512
burek
burek021 at gmail.com
Fri May 13 02:05:01 CEST 2016
[00:34:00 CEST] <pfelt> are there currently any output formats that run multithreaded?
[00:34:18 CEST] Action: pfelt is playing with the decklink output format and it appears to be single threaded
[00:34:43 CEST] <JEEB> muxers aren't usually threaded
[00:34:48 CEST] <JEEB> encoders are
[00:35:42 CEST] <pfelt> hmm. so if i have the following command line it's going to be single threaded regardless of what format i use?
[00:36:32 CEST] <pfelt> -map '[ulf]' -f decklink 'DeckLink Quad (3)' -map '[urf]' -f decklink 'DeckLink Quad (7)'
[00:36:58 CEST] <JEEB> no idea about decklink stuff or what encoder gets used with that if any
[00:37:16 CEST] <JEEB> post your terminal output
[00:37:22 CEST] <pfelt> its output is from rawvideo
[00:37:24 CEST] <JEEB> in pastebin-like :P
[00:37:28 CEST] <pfelt> heh. yeah
[00:37:37 CEST] <c_14> That should probably generate at least 2 threads. One per output afaik
[00:38:27 CEST] <JEEB> yes, I would expect a single muxer to be a single thread there, while decoders and encoders and filters have their own limitations and capabilities
[00:39:04 CEST] <pfelt> http://pastebin.com/PzZeNkbm
[00:39:06 CEST] <pfelt> that enough ?
[00:39:41 CEST] <pfelt> so i'm willing to sweat out making decklink multithreaded if there is already an example somewhere
[00:40:09 CEST] <pfelt> and i found that it's single threaded because i can't really run more than one output without it dying unless i use named pipes and lots of ffmpeg processes (one per output)
[00:40:25 CEST] <pfelt> gdb break on the output packet write seems to always be in the same thread
[00:40:33 CEST] <JEEB> multithreading sounds really weird in a muxer
[00:40:47 CEST] <JEEB> since usually a muxer isn't something you thread, it's just IO work
[00:41:21 CEST] <pfelt> really i just need a way to output to multiple outputs at near realtime and synchronized pts
[00:41:24 CEST] <JEEB> having it in a separate thread and constantly feeding it stuff... sure. also you have two /dev/null outputs there but otherwise looks OK
[00:41:31 CEST] <pfelt> it's quite difficult to keep the named pipes in sync
[00:41:52 CEST] <pfelt> yeah. if i change those nulls to two more outputs it's unreasonably slow
[00:42:19 CEST] <pfelt> (even with just the two it eventually slows down so much where the decklink card is waiting for frames because ffmpeg can't get them there fast enough)
[00:42:30 CEST] <JEEB> also you might want to try a filter chain where you convert to the output format before splitting
[00:42:32 CEST] <pfelt> pts gets all hosed up and the video slows down to something like 1/3 actual speed
[00:42:35 CEST] <JEEB> unless you're already doing it
[00:43:18 CEST] <JEEB> otherwise I'm not sure if threading is the thing to do in the muxer, but you might want to look into optimizing the handling of the decklink stuff, I guess?
[00:43:20 CEST] <pfelt> idea is to cut the video stream into 4 quarters and put one quarter out each output
[00:43:47 CEST] <pfelt> any ideas on how to best find where the bottleneck is in ffmpeg?
[00:44:30 CEST] <JEEB> start with decoding, then add one part of your filter chain (the common one), then add the cutting and then add the outputs
[00:45:19 CEST] <JEEB> also as I noted, you might want to do it so that you do source->pix_fmt conversion to the required format (seems to be uyvy422?)->cutting
[00:45:36 CEST] <JEEB> that way all those outputs don't separately convert their parts into that pix_fmt
[00:47:39 CEST] <pfelt> i had thought about doing that, but i think crop requires rgb or at least a different uyvy format
[00:47:43 CEST] <pfelt> i can retry it
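JEEB's suggestion above (convert to the output pixel format once, then split and crop into quadrants) can be sketched as a single filter graph. A hedged dry run follows; assumptions: a 1920x1080 source, uyvy422 as the DeckLink format, and `-f null -` sinks standing in for the real `-f decklink <device>` outputs so the graph can be tested without the hardware.

```shell
# Dry-run sketch: one format conversion, one split, four cropped outputs.
# In the real command each '-f null -' would be '-f decklink <device>'.
ffmpeg -v error -f lavfi -i testsrc2=size=1920x1080:rate=30:duration=0.2 \
  -filter_complex "[0:v]format=uyvy422,split=4[a][b][c][d];\
[a]crop=960:540:0:0[ulf];[b]crop=960:540:960:0[urf];\
[c]crop=960:540:0:540[llf];[d]crop=960:540:960:540[lrf]" \
  -map '[ulf]' -f null - -map '[urf]' -f null - \
  -map '[llf]' -f null - -map '[lrf]' -f null -
```

Because the `format` conversion happens before `split`, each output no longer converts its quarter separately.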
[04:21:37 CEST] <prelude2004c_Zzz> hey guys... i am having some trouble.. i was suggested to use fifo to do the job of a playout list and then a listener to segment.. ( see code here : http://pastebin.com/raw/aMF0uM5j ) .. but it's acting up. sometimes the sources just stop going and i am having trouble with stability
[04:21:39 CEST] <prelude2004c_Zzz> can anyone help ?
[04:21:43 CEST] <prelude2004c_Zzz> is there something here i did wrong ?
[04:46:01 CEST] <prelude2004c_Zzz> anyone?
[04:47:33 CEST] <mundus2018> ?
[05:39:20 CEST] <thebombzen> question about hardsubbing - so I want to hardsub the video but only after the first 10 minutes
[05:39:35 CEST] <thebombzen> so I use ffmpeg -ss -i input.mkv -vf subtitles=input.mkv
[05:40:04 CEST] <thebombzen> sorry I mean I use ffmpeg -ss 10:00 -i input.mkv -vf subtitles=input.mkv options out.mkv
[05:40:19 CEST] <thebombzen> but that causes the subtitles to be misaligned
[05:40:46 CEST] <thebombzen> i.e. the first 10 minutes of the subtitles overlay on where the video starts. how do I start the subtitles filter 10 minutes into the video
[05:41:18 CEST] <rsully> not sure, but depending on subtitle format you may be able to edit them so that there aren't any subs until the 10 min mark
[05:41:59 CEST] <rsully> but I generally recommend not hardsubbing - add as a soft track and set forced flag or not
[05:42:09 CEST] <thebombzen> the goal is to upload to YouTube
[05:42:12 CEST] <thebombzen> which doesn't support softsubs
[05:42:19 CEST] <rsully> youtube supports captions
[05:42:25 CEST] <thebombzen> I mean I could always do ffmpeg -ss 10:00 -i input.mkv -map 0:s -c ass out-cropped.ass but that's not what I'm looking for
[05:42:27 CEST] <rsully> you can upload separate from video
[05:42:31 CEST] <thebombzen> also youtube closed captioning is terrible
[05:42:39 CEST] <thebombzen> I have pretty ASS subs that I want to be pretty
[05:42:41 CEST] <rsully> not when it is supplied by you
[05:42:51 CEST] <rsully> oh i thought you meant their speech to text
[05:43:13 CEST] <rsully> well, as a viewer I hate hard subs
[05:43:36 CEST] <rsully> I'd rather have ugly than encoded into the video stream
[05:43:51 CEST] <rsully> not that helvetica is ugly
[05:43:53 CEST] <thebombzen> so in general. if someone asks how to do something. and you don't know why they're asking
[05:44:01 CEST] <thebombzen> your answer should not be "don't do it"
[05:44:27 CEST] <thebombzen> if you don't have any actual advice to give other than "don't do it" then please don't tell me not to
[05:44:35 CEST] <rsully> this is community support man. you have any idea how many people here ask things and they don't realize there are other ways to do it? you have to prove your knowledge if you don't want my canned responses
[05:45:05 CEST] <thebombzen> idk it just pushes my buttons when I ask questions on the internet and people are like "why would you want to do that?" well it doesn't actually matter why. clearly I'm asking for a reason.
[05:45:06 CEST] <rsully> if anyone else knew they'd chime in
[05:45:28 CEST] <thebombzen> suppose I wanted to give it to a friend who uses iMovie (which doesn't support ASS afaik).
[05:45:39 CEST] <thebombzen> suppose I want it to run on my grandmother's computer and she uses Windows Media Player
[05:45:48 CEST] <rsully> most people don't have good reasons, they just have a path they think they need to follow
[05:45:58 CEST] <rsully> does WMP even support mkv?
[05:46:07 CEST] <thebombzen> probably not. but that's not the point.
[05:46:29 CEST] <rsully> hm I wonder if final cut supports ASS
[05:46:40 CEST] <rsully> I'd imagine imovie would support the same
[05:46:56 CEST] <thebombzen> that's also not the point
[05:47:27 CEST] <thebombzen> the point is if someone asks how to do something on the internet and you don't know why they want to do it, then don't say "don't do that". just keep silent
[05:47:35 CEST] <thebombzen> because it's really annoying
[05:47:37 CEST] <rsully> my point is just that I didn't know your motives, but I know a lot of people ask for what you want when it isn't actually what they want
[05:48:02 CEST] <thebombzen> another pet peeve - answering the question they think you mean to ask rather than the question you asked.
[05:48:06 CEST] <rsully> too bad youtube doesn't support PGS or ASS
[05:48:08 CEST] <thebombzen> don't do that.
[05:48:18 CEST] <furq> i will carry on doing both of those things
[05:48:21 CEST] <rsully> again, community support. tough luck.
[05:48:36 CEST] <furq> more often than not people don't actually know what they want
[05:49:03 CEST] <furq> anyway i would expect the easiest way would just be to remove the first ten minutes from the ass
[05:49:05 CEST] <thebombzen> anyway, the original question still stands. is there some kind of hidden option in the subtitles filter that simulates -ss or do I have to manually remux.
[05:50:22 CEST] <furq> you could try -vf subtitles=foo.ass:enable=gt(t\,600)
[05:50:29 CEST] <furq> i don't know if the subtitles filter supports timeline editing though
[05:51:01 CEST] <furq> if that doesn't work then you'll have to do it manually
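One approach that is sometimes suggested for this seek-plus-subtitles problem (hedged: behaviour can vary between versions) is to keep the file's original timestamps across the input seek with `-copyts`, so the subtitles filter sees the real times, then trim again on the output side. File names below are the ones from the question.

```shell
# -ss 600 -copyts seeks but preserves original timestamps, so subtitle
# cues still line up; the output-side -ss 600 drops the first ten
# minutes again before muxing.
ffmpeg -ss 600 -copyts -i input.mkv -ss 600 -vf subtitles=input.mkv out.mkv
```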
[10:26:51 CEST] <ferdna> i need to open 16 streams from the same feed..
[10:27:01 CEST] <ferdna> if i do that it freezes
[13:01:41 CEST] <theeboat> Does anybody know if it is possible to extract the timecode from a h264 stream being sent over udp. I am told that the timecode is stored in the SEI pic_struct and I would like to view it in the following format HH:MM:SS:FF. thanks
[13:02:14 CEST] <c0rnw19> Hello guys!
[13:32:56 CEST] <hanDerPeder> is it possible to 'pause' a stream opened with avformat_open_input? or should I close the stream and reopen later? reading from an internet radio url
[14:07:16 CEST] <theeboat> Does anybody know if it is possible to extract the timecode from a h264 stream being sent over udp. I am told that the timecode is stored in the SEI pic_struct and I would like to view it in the following format HH:MM:SS:FF. thanks
[14:08:52 CEST] <DHE> usually UDP h264 is in mpegts format (UDP packet payload size of 1316 bytes)
[14:11:57 CEST] <jkqxz> The full time information in the SEI is just discarded; see <http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/h264_sei.c;hb=HEAD#l77>.
[14:41:17 CEST] <Aerroon> hello, i have a problem with what i presume to be a recording issue but i was hoping that someone here could potentially push me towards the right direction to seek an answer (i'm recording with obs using quicksync)
[14:41:49 CEST] <Aerroon> sometimes the recordings have the start of the recording be artifacted heavily, when i play it back in VLC the first few seconds look like this: https://dl.dropboxusercontent.com/u/89115692/wows4/weirdness.png
[14:42:26 CEST] <Aerroon> if i demux said recording with ffmpeg those first seconds are completely cut from the video
[14:42:40 CEST] <Aerroon> only error thrown is this: [h264 @ 0000000002293b00] co located POCs unavailable
[14:43:01 CEST] <Aerroon> my mpc-hc setup playing it back just shows the first frame for those seconds and then plays on normally: what could this be?
[14:46:01 CEST] <Aerroon> i know that this isn't exactly an ffmpeg issue but perhaps someone here knows why it could be happening or if i could somehow make ffmpeg preserve the first few seconds when demuxing in however messed up way they stay in so it'd be easier to sync stuff
[14:49:54 CEST] <Aerroon> also i'll be around so if you have an answer just highlight me even if it's hours later
[14:51:01 CEST] <kbo> Hi, I have a weird issue after cross-compiling ffmpeg, when I try to execute ffmpeg binaries I have a "No such file or directory", I've already checked the binary format which correspond to what I expected (ARM EABI), any help is welcome !
[14:52:19 CEST] <kbo> here the configure command: ./configure --cross-prefix=arm-poky-linux-gnueabi- --enable-cross-compile --target-os=linux --arch=arm --cpu=cortex-a8 --sysroot=/home/kbo/sdk/sysroots/cortexa9hf-vfp-neon-poky-linux-gnueabi --enable-libx264 --enable-gpl
[14:53:02 CEST] <kbo> --extra-cflags=' -march=armv7-a -mfpu=neon -mfloat-abi=hard'
[14:54:48 CEST] <jkqxz> kbo: That's coming out immediately with no other output? Probably missing something the dynamic linker wants to find on the target - try running it with LD_DEBUG set to see what it's not finding. ('LD_DEBUG=all ./ffmpeg')
[14:56:33 CEST] <kbo> jkqxz: I got the same result: root@imx6:~# LD_DEBUG=all ffmpeg -> -sh: /usr/bin/ffmpeg: No such file or directory ; root@imx6:~# ldd /usr/bin/ffmpeg -> /usr/bin/ldd: line 116: /usr/bin/ffmpeg: No such file or directory
[14:57:57 CEST] <kbo> jkqxz: it's like ffmpeg isn't linked against any libs
[15:03:05 CEST] <jkqxz> kbo: Do you even have the dynamic linker the executable asks for?
[15:04:25 CEST] <kbo> jkqxz: how can I know if i have it ?
[15:05:34 CEST] <jkqxz> Look at 'readelf -l ./ffmpeg'. It will mention a program interpreter '/libsomething/ldsomething'. Does that actually exist on your target?
[15:07:11 CEST] <kbo> result : [Requesting program interpreter: /lib/ld-linux.so.3]
[15:07:46 CEST] <jkqxz> And can you run '/lib/ld-linux.so.3' on the target?
[15:09:11 CEST] <kbo> jkqxz: ok, it actually works with the command /lib/ld-linux-armhf.so.3 /usr/bin/ffmpeg
[15:09:41 CEST] <kbo> jkqxz: I got: ffmpeg version 2.2.3 Copyright (c) 2000-2014 the FFmpeg developers built on Mar 23 2016 13:48:27 with gcc 4.9.1 (GCC) ...
[15:10:47 CEST] <jkqxz> Sounds like you have a soft-float toolchain and a hard-float target. It might work a bit, but will probably barf horribly (or just give wrong results for everything) if you ever try to use floating point.
[15:14:18 CEST] <jkqxz> So, use a toolchain which actually matches the target machine.
[15:14:58 CEST] <kbo> jkqxz: ok so it's related to the toolchain configuration, thanks for your help
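The checks jkqxz walked through can be scripted. A hedged sketch (assumptions: a Linux host with binutils installed; `BIN` here is just some inspectable binary — substitute the cross-built `./ffmpeg` on the target):

```shell
# Pick a binary to inspect; on the target this would be ./ffmpeg.
BIN=$(command -v sh)
# 1. Which dynamic linker does the ELF request? A hard-float ARM build
#    asks for /lib/ld-linux-armhf.so.3, soft-float for /lib/ld-linux.so.3.
readelf -l "$BIN" | grep -i 'program interpreter'
# 2. On ARM, hard-float binaries also carry the Tag_ABI_VFP_args attribute:
readelf -A "$BIN" | grep Tag_ABI_VFP_args || echo "no hard-float ABI tag"
```

If the requested interpreter does not exist on the target, the shell reports the misleading "No such file or directory" seen above.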
[17:05:30 CEST] <ferdna> good morning
[17:05:48 CEST] <ferdna> i need help with ffserver
[17:06:00 CEST] <ferdna> i am trying to stream one source to many clients
[17:06:39 CEST] <ferdna> the problem is that i run out of bandwidth....
[17:06:45 CEST] <ferdna> or something
[17:06:47 CEST] <ferdna> not really sure
[17:18:15 CEST] <EightBitSloth> I'm running a local nginx-rtmp server and I'd like to take its output, which is at rtmp://localhost/live/test, and redirect it to v4l2loopback at /dev/video1. My goal is to be able to stream my desktop as a webcam when a user does not have access to my local nginx server. Can ffmpeg do this?
[17:19:51 CEST] <thebombzen> you can always stream your desktop with ffmpeg's x11grab input device
[17:20:14 CEST] <DHE> yes, but it sounds like a choice between "use my own RTMP server" and "stream with some kind of webcam software"
[17:20:18 CEST] <DHE> trying to cheat a bit and do both
[17:20:31 CEST] <thebombzen> ffmpeg -f x11grab -framerate 60 -video_size 1920x1080 -i :0.0 is an example
[17:20:34 CEST] <EightBitSloth> Yep, but I need to be able to transition between desktop and applications.
[17:21:03 CEST] <thebombzen> EightBitSloth: is that what you're looking for?
[17:22:19 CEST] <EightBitSloth> @thebombzen, Not exactly. I need to do transitions and that doesn't allow me to. I forgot to add I need to use OBS.
[17:22:20 CEST] <theeboat> jkqxz: thanks for that link, that makes more sense to me now. Do you know of any other software which writes the SEI time information or if it is possible to do this is ffmpeg? It seems a bit strange that this information would be dropped as it is quite a useful bit of information.
[17:22:53 CEST] <EightBitSloth> I've streamed video to loopback and my desktop to loopback, so I figured sending an rtmp stream to loopback would work too.
[17:27:59 CEST] <EightBitSloth> Oh hey! Found it myself. Apparently I just need to add rawvideo and change localhost to my local ip.
[17:28:16 CEST] <EightBitSloth> ffmpeg -i "rtmp://streamurl" -vcodec rawvideo -y -f v4l2 /dev/video1
[17:34:19 CEST] <jkqxz> theeboat: Have you got a stream which actually contains it? The encoder certainly isn't obliged to supply it at all, and I don't know of any which do. (x264 never writes it, for example.)
[17:37:16 CEST] <jkqxz> (Here is x264 never writing it in pic_timing SEI: <http://git.videolan.org/?p=x264.git;a=blob;f=encoder/set.c;hb=HEAD#l618>.)
[17:38:47 CEST] <theeboat> jkqxz: I am using a hardware encoder, i have spoken to an engineer for the company of the product and he said the following. The time code goes into the SEI pic_struct as defined in the H.264 specification.
[17:38:57 CEST] <vade> where does AudioFrameQueue store the actual samples it enqueues from the AVFrame you add? Im trying to understand how to use the ff_af_queue_* API to ensure my encoder gets AVFrames of the right frame size
[17:39:20 CEST] <vade> but I cant understand how to pull frames off of the AudioFrameQueue to make a new AVFrame* with the right number of samples
[17:39:46 CEST] <ddmd> Anyone know how I can change a video's color space from YUV to RGB?
[17:40:20 CEST] <kepstin> ddmd: libswscale? (if you want a better answer, you need more detail in your question)
[17:40:31 CEST] <vade> ddmd: have to use a SwsContext via libswscale
[17:41:05 CEST] <kbarry> http://stackoverflow.com/questions/27519056/warning-in-converting-yuv-color-format-to-rgb-in-ffmpeg
[17:41:38 CEST] <kbarry> Looks like that might offer some help with your problem ddmd
[17:42:40 CEST] <jkqxz> theeboat: You probably just want to write the code in ffmpeg to store it somewhere you can read, then. Adding it (replacing the code in the previous link) would not be difficult, I think.
[17:44:47 CEST] <ddmd> Thanks guys. I've got an mp4 playing in a html video tag and internet explorer displays white as gray. I've read that it could be the colorspace and that I should use BT.709. Does that make any sense?
[17:44:49 CEST] <theeboat> jkqxz: OK, thanks for the information. I was hoping to not have to change any code but if that is the only option. The comment in the x264 commit makes more sense into why it would be dropped.
[17:45:35 CEST] <kepstin> ddmd: you don't want to use rgb, most web browsers can only decode yuv. The colorspace is something separate from that.
[17:46:48 CEST] <kepstin> ddmd: are you encoding a video using the ffmpeg command-line tool?
[17:47:08 CEST] <ddmd> So if not the color space, what could throw the white balance off like that? Yes, I'm using ffmpeg for everything
[17:47:42 CEST] <kepstin> it might be the colorspace. but you're getting that confused with the pixel format, something rather different
[17:48:42 CEST] <kepstin> if you're getting white showing up as grey, it could actually be the sample range rather than color space. What's the original video source?
[17:49:39 CEST] <ddmd> Its originally an mp4. I don't have unprocessed video.
[17:50:42 CEST] <vade> post the ffprobe output perhaps?
[17:50:56 CEST] <kepstin> anyways, you can use the "scale" video filter in ffmpeg to convert between colorspaces and sample ranges. Try "-vf scale=in_color_matrix=bt601,out_color_matrix=bt709" to convert the video color space, maybe?
[17:52:20 CEST] <ddmd> here is some more info: http://pastebin.com/vrTLLFkP
[17:52:58 CEST] <kepstin> that said, the white point should be the same for bt601 and bt709, iirc?
[17:53:19 CEST] <ddmd> Thanks. I'll try out scale/
[17:55:45 CEST] <vade> how to deal with more samples than frame size ? I get that my AVFrame that ive converted with libswresample has more samples in it than the encoder is wanting. Whats the ffmpeg API correct way to handle that?
[18:06:09 CEST] <ddmd> I get "[AVFilterGraph @ 0473e560] No such filter: 'out_color_matrix' " is this not installed by default?
[18:06:09 CEST] <vade> im also flagging AV_CODEC_CAP_VARIABLE_FRAME_SIZE, but my audioStreams codec context lists 1024
[18:06:27 CEST] <vade> even though I set it to 0 and mark that flag prior to opening the codec
[18:26:33 CEST] <vade> :\
[18:31:08 CEST] <durandal_1707> ddmd: colorspace filter
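The `No such filter: 'out_color_matrix'` error above comes from the comma: inside `-vf`, a comma ends one filter and begins the next, so ffmpeg looked for a filter literally named `out_color_matrix`. Options of a single filter are separated by colons. A hedged dry run of the colon form (synthetic input standing in for the real file):

```shell
# Colon, not comma, between the scale filter's options.
ffmpeg -v error -f lavfi -i testsrc2=size=64x64:rate=5:duration=0.4 \
  -vf "scale=in_color_matrix=bt601:out_color_matrix=bt709" -f null -
```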
[18:31:51 CEST] <spoon> Hi, I am trying to decode an RTP stream with the C API but getting pretty much nowhere
[18:32:36 CEST] <spoon> My main issue is I have my own RTSP layer, and this makes things difficult merge with FFMPEG
[18:34:30 CEST] <spoon> I require a lot of help unfortunately
[18:37:58 CEST] <marsupial> can someone help me? i've been trying for 2 days to do something
[18:38:15 CEST] <__jack__> marsupial: tell us
[18:39:38 CEST] <marsupial> i am trying to record an HLS stream. i followed various guides , have located the m3u8 file with firebug. if i just try ffmpeg -i "stream.m3u8" , it says it cannot open the key
[18:40:03 CEST] <__jack__> marsupial: you will need ffmpeg -i http://blabla/stream.m3u8, probably
[18:40:05 CEST] <marsupial> so i downloaded the m3u8 file, and opened it in text.. then i downloaded the chunk file that was identified there.. and i found some URI IV key
[18:40:13 CEST] <marsupial> yes i did that
[18:40:23 CEST] <marsupial> i have the full url of the stream
[18:40:51 CEST] <marsupial> this is the key information i was able to find
[18:40:53 CEST] <marsupial> #EXT-X-KEY:METHOD=AES-128,URI="faxs://faxs.adobe.com",IV=0xc3585344ac1ecb2649fb4aa246614791
[18:41:14 CEST] <marsupial> but i have no idea how to make ffmpeg use that
[18:41:23 CEST] <spoon> well if anyone thinks they can help with my RTP/RTSP decoding problem, let me know. I am willing to pay for the help.
[18:42:07 CEST] <__jack__> marsupial: that is an crypted stream, with DRM, it will not work out of the box (because drm are made for that)
[18:42:55 CEST] <marsupial> yes but i am a showtime subscriber so i should be able to do it
[18:43:00 CEST] <marsupial> i know smarter people than me ARE doing it
[18:43:08 CEST] <jkqxz> spoon: Perhaps if you describe in more detail what you are trying to do?
[18:43:54 CEST] <spoon> I have written my own RTSP and RTP layers, now I need to decode incoming the video/audio
[18:44:19 CEST] <spoon> I have the RTP pointer and size, and RTP payload pointer and size
[18:44:26 CEST] <spoon> I have all the details that have been negotiated in RTSP
[18:45:06 CEST] <spoon> I feel like I have all the information required, but actually trying to decode the stream using FFMPEG is causing me headaches
[18:45:32 CEST] <jkqxz> What is the format here, H.264?
[18:45:34 CEST] <spoon> I have attempted to copy what FFMPEG is doing internally when it is in control of RTSP/RTP and I still get errors
[18:45:41 CEST] <spoon> it's either H264, MJPEG or MPEG4
[18:47:06 CEST] <spoon> I am trying to get just H264 working at the moment, but the others must be supported too
[18:47:09 CEST] <jkqxz> When you say you've decoded the RTP, you have already removed headers and dismantled STAPs and reassembled FUs into NAL units etc., or you want ffmpeg to do that bit for you?
[18:47:27 CEST] <spoon> I have just read the RTP header
[18:47:39 CEST] <spoon> I can pass the full RTP packet to FFMPEG if that's easier
[18:48:25 CEST] <spoon> I have not done any of the things you have mentioned, so I guess yes, I would like FFMPEG to do that
[18:48:49 CEST] <spoon> If it's easier/better to do it another way, that's fine too
[18:55:08 CEST] <jkqxz> I don't think libavformat is built to handle that sort of intermediate, but there may be some way I'm not aware of. If you dismantle the RTP packets yourself, then you can just feed the NAL units to libavcodec.
[18:56:01 CEST] <spoon> ok thanks
[18:57:00 CEST] <spoon> so I will need to dismantle the RTP payload further if it is H264
[18:57:59 CEST] <spoon> and I can pass the NAL units to avcodec_decode_video2?
[18:59:04 CEST] <jkqxz> Yes.
[18:59:52 CEST] <jkqxz> (Mashing frames together if they are more than one slice; the marker bit tells you whether a packet ends a frame.)
[19:00:06 CEST] <spoon> wow that easy
[19:00:15 CEST] <spoon> I can just literally join them together?
[19:01:42 CEST] <jkqxz> The NAL units, yes. (After extracting them from the RTP stream.)
[19:01:53 CEST] <spoon> ok awesome, I'm looking into now
[19:02:03 CEST] <spoon> I'll probably have more questions once I've figured this out
[19:02:12 CEST] <spoon> RTP packets can come out of order too, always fun!
[19:02:29 CEST] <spoon> thanks for the help
[19:04:54 CEST] <spoon> another rfc! https://tools.ietf.org/html/rfc6184#section-5.1
[19:07:00 CEST] <jkqxz> That's the one. Look at your stream first to see what types it actually has in it - I expect you need to be able to handle STAP-A and FU-A and can ignore the others, but do check.
[19:08:32 CEST] <spoon> goodness I am really hoping FFMPEG can do this for me, I'll hunt around
[19:08:52 CEST] <spoon> parsing this is a fairly big chunk of work
[19:10:41 CEST] <jkqxz> The libavformat code to do it is in <http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavformat/rtpdec_h264.c;hb=HEAD>.
[19:10:58 CEST] <spoon> thanks
[19:11:05 CEST] <jkqxz> I don't see a way to access that externally, though there may be one I'm not aware of.
[19:13:41 CEST] <jkqxz> (Copying the relevant bits of code out and hacking them would also work; it's all LGPL.)
[19:14:09 CEST] <spoon> heh, possibly, all the params are very different
[19:14:30 CEST] <spoon> At least I have something to work on, for too long I've just been confused
[20:05:25 CEST] <dbugger> Hello guys
[20:05:30 CEST] <dbugger> Guys and girls, of course
[20:06:13 CEST] <dbugger> I have a small question: If I have 2 videos (300x100 pixels), how could I make a video that runs both of them at the same time, one on top of the other? (300x200)
[20:17:24 CEST] <DHE> there's a vstack filter for pretty much this exact thing.
[20:18:37 CEST] <DHE> $ ffmpeg -i inputfile1.mp4 -i inputfile2.mp4 -filter_complex '[0:v][1:v]vstack[videoout]' -map '[videoout]' -map '[0:a]' <codec options> output.mp4
[20:19:10 CEST] <dbugger> oh, let me try...
[20:19:50 CEST] <DHE> so that would stack the videos and choose file1's audio
[20:20:04 CEST] <dbugger> I actually want to make it mute
[20:20:48 CEST] <DHE> oh, then replace -map [0:a] with -an
[20:20:59 CEST] <DHE> or actually just removing that entirely would be fine
[20:21:54 CEST] <dbugger> im trying
[20:23:31 CEST] <dbugger> Mmmm
[20:23:32 CEST] <dbugger> it fails
[20:23:41 CEST] <dbugger> but probably because I didnt give you the real dimensions of the video
[20:23:56 CEST] <DHE> it shouldn't matter as long as they are identical
[20:24:06 CEST] <dbugger> Im afraid they are not
[20:24:15 CEST] <dbugger> Some cropping could be possible?
[20:25:44 CEST] <DHE> To crop the first video, -filter_complex '[0:v]crop=w=1920:h=1080[cropout];[cropout][1:v]vstack[videoout]'
[20:25:59 CEST] <dbugger> One single command?
[20:26:02 CEST] <dbugger> Or one after the other?
[20:26:27 CEST] <DHE> it's a processing pipeline. the first video is cropped, then the cropped image is submitted as the first image to the vstack
[20:26:42 CEST] <DHE> you can do quite a lot with one ffmpeg command if you're willing to write it out
[20:26:49 CEST] <dbugger> so the final command is...
[20:27:23 CEST] <dbugger> mpeg -i inputfile1.mp4 -i inputfile2.mp4 -filter_complex '[0:v]crop=w=1920:h=1080[cropout];[cropout][1:v]vstack[videoout]' -filter_complex '[0:v][1:v]vstack[videoout]' -map '[videoout]' -map '[0:a]' output.mp4
[20:27:27 CEST] <dbugger> sorry
[20:27:35 CEST] <dbugger> ffmpeg -i inputfile1.mp4 -i inputfile2.mp4 -filter_complex '[0:v]crop=w=1920:h=1080[cropout];[cropout][1:v]vstack[videoout]' -filter_complex '[0:v][1:v]vstack[videoout]' -map '[videoout]' -map '[0:a]' output.mp4
[20:27:38 CEST] <dbugger> Is that right?
[20:28:23 CEST] <DHE> no, you've got two filter_complex commands, you still have the audio mapped in, and you should specify some codec options like "-c:v libx264 -b:v 3000k" or something
[20:29:18 CEST] <dbugger> Then this? ffmpeg -i inputfile1.mp4 -i inputfile2.mp4 -filter_complex '[0:v]crop=w=1920:h=1080[cropout];[cropout][1:v]vstack[videoout]' '[0:v][1:v]vstack[videoout]' -map '[videoout]' -map '[0:a]' "-c:v libx264 -b:v 3000k" output.mp4
[20:29:39 CEST] <dbugger> I still forgot to take the audio out... but yeah
[20:29:41 CEST] <dbugger> you know
[20:30:39 CEST] <dbugger> is that one right?
[20:33:13 CEST] <dbugger> I tried it and it gave me this error: [NULL @ 0x21a5640] Unable to find a suitable output format for '[0:v][1:v]vstack[videoout]'
[20:33:13 CEST] <dbugger> [0:v][1:v]vstack[videoout]: Invalid argument
[20:34:28 CEST] <DHE> I feel like you've never used the command-line very much...
[20:35:26 CEST] <dbugger> I have used command line, just not much ffmpeg :P
[20:35:39 CEST] <dbugger> oh wow, I added the quotes?
[20:36:14 CEST] <DHE> and took out the "-filter_complex" but left in the actual filter that went with it
[20:37:06 CEST] <dbugger> I corrected it:
[20:37:10 CEST] <dbugger> ffmpeg -i v1.webm -i v2.webm -filter_complex '[1:v]crop=w=320:h=240[cropout];[cropout][1:v]vstack[videoout];[0:v][1:v]vstack[videoout]' -map '[videoout]' -c:v libx264 -b:v 3000k output.mp4
[20:37:13 CEST] <dbugger> That is now right, no?
[20:38:06 CEST] <dbugger> oh damm
[20:38:07 CEST] <dbugger> silly me
[20:38:15 CEST] <dbugger> you were actually giving me before both filters together
[20:38:28 CEST] <dbugger> Sorry, I thought they were separate, one for cropping and one for merging :P
[20:39:22 CEST] <dbugger> Now it seems to be working
[20:39:44 CEST] <dbugger> Oh, yeah! Excelent :)
[20:39:50 CEST] <dbugger> Thanks mate! You are a saver!
[20:44:14 CEST] <dbugger> DHE, what if I wanted to make more advanced layouts, instead of "one on top of the other"? Sort like "2 colums. 1 with 3 rows, and 1 with just 1 row"?
[20:44:23 CEST] <dbugger> How could that be written?
[20:51:08 CEST] <DHE> there's hstack and vstack so for full tiles you could assemble that. otherwise you'll have to get creative with the overlay filter
[20:51:45 CEST] <dbugger> I see....
[20:51:52 CEST] <dbugger> So, lets see if I can come up with something..
[20:53:51 CEST] <dbugger> Interesting... so I have to make combinations then
[20:55:46 CEST] <DHE> just off the top of my head, -filter_complex '[0:v][1:v][2:v]hstack=inputs=3[row1];[3:v][4:v][5:v]hstack=inputs=3[row2];[row1][row2]vstack[videoout]' # just made this up for a 3-wide, 2 high stack
[20:56:07 CEST] <dbugger> ok... "inputs" seems useful
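DHE's grid sketch can be dry-run with synthetic inputs. Hedged notes: `inputs=` must match the number of labelled pads fed to each `hstack`, and both rows must end up the same width before `vstack`, so this sketch uses three equal-sized inputs per row.

```shell
# 3x2 grid from six synthetic sources; swap the lavfi inputs for real files.
ffmpeg -v error \
  -f lavfi -i testsrc2=size=160x120:rate=5:duration=0.4 \
  -f lavfi -i testsrc2=size=160x120:rate=5:duration=0.4 \
  -f lavfi -i testsrc2=size=160x120:rate=5:duration=0.4 \
  -f lavfi -i testsrc2=size=160x120:rate=5:duration=0.4 \
  -f lavfi -i testsrc2=size=160x120:rate=5:duration=0.4 \
  -f lavfi -i testsrc2=size=160x120:rate=5:duration=0.4 \
  -filter_complex "[0:v][1:v][2:v]hstack=inputs=3[row1];\
[3:v][4:v][5:v]hstack=inputs=3[row2];[row1][row2]vstack[out]" \
  -map '[out]' -f null -
```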
[20:58:24 CEST] <l1l> is this the proper way to rip an mp3 ffmpeg -ss 19:40 -i Alok3.mp3 -t 5:00 aloktres.mp3
[20:58:39 CEST] <l1l> or should i use a different app
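A hedged note on l1l's command: `-t 5:00` is a five-minute duration, and without `-c copy` ffmpeg will decode and re-encode the MP3 (a generation loss). Adding `-c copy` stream-copies the audio instead. A self-contained sketch of the same shape (a generated WAV stands in for the MP3; the cut mechanics are the same):

```shell
# Generate 3 s of tone, then cut 1 s starting at 0:01 without re-encoding.
# For the question above the analogous command would be:
#   ffmpeg -ss 19:40 -i Alok3.mp3 -t 5:00 -c copy aloktres.mp3
ffmpeg -v error -f lavfi -i sine=frequency=440:duration=3 -c:a pcm_s16le tone.wav
ffmpeg -v error -ss 1 -i tone.wav -t 1 -c copy cut.wav
```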
[20:58:54 CEST] <dbugger> DHE, works great!
[20:58:55 CEST] <dbugger> Thanks!
[20:59:02 CEST] <dbugger> Now I just need to learn how to better crop images
[20:59:08 CEST] <dbugger> But that can wait for later
[21:06:32 CEST] <ddmd> Could anyone please look at the difference between these two encodings? http://pastebin.com/Pgr3fjZQ One video displays whites as white and the other displays whites as grays in some browser's html video player.
[21:07:57 CEST] <JEEB> try it in mpv and those are the actual colors of the video
[21:08:14 CEST] <JEEB> on windows, http://mpv.srsfckn.biz/
[21:08:41 CEST] <JEEB> sometimes clips just don't have actual white in there and they are supposed to be grey. in other cases browsers just fail.
[21:08:59 CEST] <JEEB> mpv's opengl renderer generally is the best bet at something getting it right
[21:09:24 CEST] <JEEB> so if it looks grey in it, then either the content is supposed to be like that, or someone eff'd up
[21:13:16 CEST] <ddmd> That's the thing, it displays white on every native player I've tried and in every browser except for IE. So if it truly was off-white at source, why wouldn't the other players show it? Thanks for the suggestion, I will try it in mpv.
[21:14:23 CEST] <JEEB> many players also have various issues. both clips should be taken in as limited range according to that, although I recommend looking at the output of either `ffmpeg -i filename` or `ffprobe filename`
[21:15:02 CEST] <JEEB> also mpv doesn't have a file-opening GUI so you basically can just drag and drop files on its binary
[21:17:48 CEST] <Gues> Hey
[21:18:22 CEST] <Gues> Does ffmpeg have an MP3 decoder? I don't see it here https://ffmpeg.org/ffmpeg-codecs.html#Audio-Decoders but ffmpeg tools seem to decode MP3s.
[21:18:30 CEST] <JEEB> yes, it does
[21:18:58 CEST] <Gues> Okay, ty
[21:18:59 CEST] <JEEB> the manpages have a very small amount of things
[21:19:57 CEST] <Gues> manpages? Are the contents related to web pages?
[21:20:01 CEST] <JEEB> yes
[21:20:07 CEST] <Gues> Ah
[21:20:13 CEST] <JEEB> `ffmpeg -decoders` lists all of the decoders
[21:20:20 CEST] <JEEB> in your FFmpeg
[21:20:48 CEST] <Gues> ty
[21:21:17 CEST] <Gues> I read on the website that ffmpeg uses lame to encode MP3. Does it implement its own decoder, or rely on a library?
[21:21:29 CEST] <JEEB> the decoder is internal to libavcodec
[21:21:47 CEST] <Gues> Okay. I was trying to determine if I should borther using ffmpeg or just its dependency
[21:21:52 CEST] <Gues> *bother
[21:22:17 CEST] <ddmd> The problematic video displays white correctly in mpv. I ran ffprobe on both files also. http://pastebin.com/x20qUUeC
[21:22:53 CEST] <JEEB> ddmd: ok, then it's just a bug somewhere in the browser. enjoy reporting the issue
[21:23:03 CEST] <c_14> Gues: https://ffmpeg.org/general.html#Supported-File-Formats_002c-Codecs-or-Features
[21:23:32 CEST] <JEEB> the only weird part I could notice is that it has the BT.601 matrix and BT.709 primaries, but that shouldn't cause greyening
[21:24:03 CEST] <JEEB> I guess IE might be misinterpreting the explicit color range tag, but I'd be surprised if that was the case :P
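A quick sketch of the range issue JEEB describes, assuming 8-bit BT.601/709 limited-range video: if a player misinterprets the explicit range tag and treats limited-range luma (16–235) as full range (0–255), the brightest sample renders at roughly 92% brightness, which is exactly the "white shown as gray" symptom.

```python
# Limited-range ("TV") 8-bit luma spans 16..235; full-range ("PC") spans 0..255.
# A player that skips the expansion shows Y=235 as 235/255 brightness: light gray.

def limited_to_full(y: int) -> int:
    """Expand a limited-range 8-bit luma sample to full range."""
    return round((y - 16) * 255 / 219)

white_limited = 235
print(limited_to_full(white_limited))  # correct expansion -> 255 (true white)
print(white_limited)                   # misinterpreted as full range -> 235 (gray)
```

The same math explains why black (Y=16) becomes washed-out dark gray under the opposite mistake.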
[21:26:29 CEST] <ddmd> Its looking that way. I also cannot rule out it being an issue with a specific video card or hardware acceleration though. Thanks for your help.
[21:36:50 CEST] <fp> When calculating PSNR with my own tools (python/numpy) I get identical results compared to ffmpeg and tiny_ssim if I look at the values per frame.
[21:36:50 CEST] <fp> However, if I run ffmpeg or tiny_ssim and look at the calculated average, I get something different. This I don't get!
[21:36:58 CEST] <fp> see http://pastebin.com/AqUdkttH
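A likely source of fp's discrepancy (hedged, since the pastebin output isn't reproduced here): ffmpeg's psnr filter reports its "average" as the PSNR of the mean MSE over all frames, which is not the same number as the arithmetic mean of the per-frame PSNR values. The hypothetical MSE values below illustrate the gap:

```python
import math

def psnr(mse: float, peak: float = 255.0) -> float:
    """PSNR in dB for 8-bit samples given a mean squared error."""
    return 10 * math.log10(peak * peak / mse)

# Hypothetical per-frame MSE values for a short clip
frame_mse = [4.0, 16.0, 64.0]

# Arithmetic mean of per-frame PSNR values (what a naive script averages)
mean_of_psnr = sum(psnr(m) for m in frame_mse) / len(frame_mse)

# PSNR of the mean MSE (global pooling over all frames)
psnr_of_mean_mse = psnr(sum(frame_mse) / len(frame_mse))

print(round(mean_of_psnr, 2), round(psnr_of_mean_mse, 2))
```

Because log10 is concave, averaging in the dB domain always yields a value at least as large as pooling the MSE first, so the two "averages" only agree when every frame has identical MSE.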
[21:56:55 CEST] <ATField> Can someone please help with a file-cutting problem?
[21:56:56 CEST] <ATField> I am trying to cut a fragment from a movie file without re-encoding so that the result will be of the same quality as the original. IIUC, this will be the case if you start cutting from an I-frame so that ffmpeg won't have to decode anything to recover missing information.
[21:56:58 CEST] <ATField> I am trying to find the I-frame's timestamp by first generating a tile of all the nearby frames with their timestamps and their frame type written on them (http://imgur.com/VvAefce, 3840×1800), and then using the needed I-frame's timestamp as the start point and the TS of the frame right before the next I-frame as the endpoint.
[21:56:59 CEST] <ATField> But the timestamp has to be wrong because when you play the file it starts off as a black screen until the player reaches an actual I-frame. Additionally, the timestamps for the needed start and end points are different in Avidemux (09:56.346 -to 10:00.558 vs. 09:56.012 -to 10:00.224). And since Avidemux seems to be able to cut the fragment out properly without having to re-encode (both video and audio outputs set to Copy), there should be a proper way of doing it that I'm just missing.
[21:57:04 CEST] <ATField> I've also tried cutting by pkt_dts_time (http://stackoverflow.com/questions/14005110/) but again to no avail.
[22:26:40 CEST] <SpeakerToMeat> What's the right way to reinterpret a 23.976 film to 24fps? copying video and audio tracks?
[22:28:53 CEST] <JEEB> why would you reinterpret it?
[22:29:06 CEST] <JEEB> the content is what it is, don't touch it without a reason
[22:30:07 CEST] <JEEB> basically if you just change the timestamps on video you have to re-encode audio, if you actually start doing frame rate conversion then you'll be re-encoding video
[22:30:17 CEST] <JEEB> neither of those alternatives sound especially good
[22:30:48 CEST] <SpeakerToMeat> I don't want to reencode video, thus I need to tell it it's really 24 and not 23.976
[22:30:59 CEST] <JEEB> which means you will have to muck with the audio?
[22:31:06 CEST] <JEEB> what are you thinking of achieving with this!?
[22:31:09 CEST] <SpeakerToMeat> yuppers. yes.
[22:31:26 CEST] <SpeakerToMeat> I'm thinking on achieving what I need to achieve. Are you saying it's not possible to do with ffmpeg?
[22:31:59 CEST] <JEEB> well there's a resampler and a few other things so I'm pretty sure you can achieve what you want
[22:32:04 CEST] <JEEB> but it just makes no sense to me
[22:32:17 CEST] <JEEB> you're losing either video or audio quality for no perceivable benefit
[22:32:43 CEST] <SpeakerToMeat> Well the benefit is being able to use it. not being able to use the video at all would be bad.
[22:33:00 CEST] <SpeakerToMeat> Since I have a 23.976 video that needs to play in a device that will not do 23.976
[22:33:08 CEST] <JEEB> but it will do 24?
[22:33:44 CEST] <SpeakerToMeat> Yes
[22:34:14 CEST] <JEEB> given that 24000/1001 is the "broadcast" 24p that surprises me deeply. but hey, I have no idea what weird hardware you have there :)
[22:35:12 CEST] <SpeakerToMeat> It's called a Digital Cinema media server, media block and projector
[22:36:02 CEST] <JEEB> also, for the record I haven't found the way to redo the video timestamps with ffmpeg cli. I know that the API is capable of it but the thing doesn't seem to be available outside of the setpts video filter which requires re-encoding (which is usually something you want not to do)
[22:36:06 CEST] <kepstin> ah, so an actual cinema projector. yeah, those are really 24p :)
[22:36:21 CEST] <JEEB> I use L-SMASH's muxer generally to do the frame rate re-assumptions
[22:36:27 CEST] <JEEB> and then ffmpeg for the rest
[22:37:12 CEST] <JEEB> for audio there's the atempo filter
[22:37:17 CEST] <SpeakerToMeat> kepstin: Yes. I can do the reinterpretation in premiere in another machine, since it'll have to go to dpx and jpeg2000 frames anyhow, but I'm subtitling it, and it's an easier workflow for me if I can subtitle in this machine first
[22:38:14 CEST] <JEEB> http://ffmpeg.org/ffmpeg-all.html#atempo
[22:38:35 CEST] <SpeakerToMeat> But my subtitle editor can't read dpx/jpeg2000 discrete files (afaik) to work with. And copying to the windows machine, reinterpreting it to 24, copying it back, doing the subtitles, copying it to premiere, burning the subtitles to image and then exporting is a pita
[22:38:49 CEST] <SpeakerToMeat> thanks, I'll check... though I can sox the audio tracks if I have to.
[22:38:51 CEST] <JEEB> I guess you feed 24 / (24000/1001) into that
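The factor JEEB suggests feeding into atempo works out to exactly 1.001: retiming 24000/1001 fps video to 24 fps makes it play 0.1% faster, so the audio must be sped up by the same ratio. A quick check of the arithmetic:

```python
from fractions import Fraction

src_fps = Fraction(24000, 1001)  # "NTSC film", ~23.976 fps
dst_fps = Fraction(24, 1)        # true 24p for digital cinema

# Ratio to feed ffmpeg's atempo filter so audio stays in sync
tempo = dst_fps / src_fps

print(tempo)         # 1001/1000
print(float(tempo))  # 1.001
```

The runtime shrinks by the same factor, e.g. a 100-minute feature loses about 6 seconds; the roughly 0.1% pitch shift is usually considered inaudible, which is why this "speed-up" conversion is common for cinema and PAL-style workflows.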
[22:39:15 CEST] <JEEB> btw, regarding subtitling - while jpeg2000 will be slow as hell to decode in any case, a recent enough aegisub should be able to read it
[22:39:32 CEST] <JEEB> http://www.aegisub.org/
[22:39:51 CEST] <SpeakerToMeat> I work with subtitle edit, it's the best editor in the world for me.
[22:39:57 CEST] <SpeakerToMeat> Exceptionally well done
[22:40:00 CEST] <c_14> You can probably also use asetrate for a conversion of that scale
[22:40:18 CEST] <JEEB> subtitle edit is something I'd mostly use for basics like OCR and editing
[22:40:28 CEST] <JEEB> anything more requiring I'd probably pop aegisub for
[22:40:38 CEST] <JEEB> also I think subtitle edit still bases on DirectShow, no?
[22:40:46 CEST] <JEEB> (for the video input)
[22:41:33 CEST] <kbarry_> What is the trick to seeing the "aliases" for commands in ffmpeg?
[22:41:35 CEST] <JEEB> also man, aegisub hasn't gotten a release in a while :V
[22:41:37 CEST] <kbarry_> Often I find help online, in the form of commands, but have the hardest time picking the solution apart, so I can learn/better understand what is actually happening
[22:42:01 CEST] <c_14> That was fast
[22:42:01 CEST] <JEEB> (it is under active development still, but man)
[22:42:27 CEST] <SpeakerToMeat> JEEB: I use it for editing. Conversions, and retouching subtitle times, fixing sync, etc
[22:42:34 CEST] <kbarry> for instance, I just found a link in the documentation that says "Control quality with -qscale:a (or the alias -q:a). "
[22:43:29 CEST] <JEEB> SpeakerToMeat: whatever floats your boat, basically :) for me subtitle edit is something to do the very basics of ripping blu-ray or DVD subpictures, while further synch or styling or whatever would go into aegisub
[22:44:51 CEST] <c_14> kbarry: besides just getting a feel for it?
[22:45:18 CEST] <c_14> You could go through the source and create such a list
[22:45:31 CEST] <c_14> I wouldn't recommend it though
[22:45:55 CEST] <kbarry> So,
[22:45:58 CEST] <JEEB> kbarry: and the documentation also wouldn't tell you that with some newer encoders q shouldn't be used
[22:46:24 CEST] <kbarry> So, I am very new to ffmpeg, and I don't really have a more experiences mentor to rely on.
[22:46:28 CEST] <c0rnw19> good evening
[22:46:33 CEST] <kbarry> My main sources of info, are the docs, and here, and forums.
[22:46:47 CEST] <kbarry> Just trying to get some basic tips on improving my resource access
[22:46:59 CEST] <kbarry> IF the code is the best source of information,
[22:47:27 CEST] <kbarry> How would I best "reverse engineer" the a command I find online?
[22:47:42 CEST] <kbarry> IE, I am assuming there is some kind of built in help,
[22:48:02 CEST] <kbarry> but I am not entirely sure I know how to use it.
[22:48:05 CEST] <JEEB> ffmpeg-all.html which is all the manpages put together on ffmpeg.org is probably the best general docs you will find, although you shouldn't think from the explanations that something is the best way to do something
[22:48:17 CEST] <JEEB> https://www.ffmpeg.org/ffmpeg-all.html
[22:48:25 CEST] <JEEB> and when you find a parameter you can ctrl+F it
[22:48:28 CEST] <c_14> If you want options for a specific encoder/muxer there's `ffmpeg -h encoder=' and `ffmpeg -h muxer='
[22:48:42 CEST] <c_14> Because not all options are in the manpages
[22:48:47 CEST] <JEEB> that is true
[22:49:08 CEST] <JEEB> also I love it how there's literally two video decoder entries in the manpage
[22:49:11 CEST] <JEEB> hevc and rawvideo
[22:49:25 CEST] <JEEB> both are recent enough that someone cared to write docs, I guess :)
[22:49:40 CEST] <JEEB> although rawvideo as such is quite old already
[22:57:14 CEST] <ATField> (Can my messages be seen or the registration issue makes them invisible?)
[22:57:45 CEST] <durandal_1707> off topic
[23:00:05 CEST] <kbarry> I still find the documentation hard to grasp.
[23:00:59 CEST] <kbarry> I think it's a problem that the syntax can vary so widely (ffmpeg is so versatile) that maybe it's just hard to find an example that succinctly covers a topic.
[23:01:45 CEST] <durandal_1707> what's your goal?
[23:02:45 CEST] <iive> ATField: we see you. ask your question
[23:03:40 CEST] <kbarry> durandal_1707: is your last directed at me?
[23:04:04 CEST] <durandal_1707> yes
[23:04:08 CEST] <ATField> iive: Thanks, I thought I was being rendered invisible by the network. I've asked it higher up, will post a superuser question link here soon instead.
[23:05:47 CEST] <kbarry> My goal is to get better at helping myself. I recognize a lack of ability to help myself to the documentation, or even how to find help while using the CLI.
[23:05:56 CEST] <iive> ATField: your question above is also visible.
[23:06:07 CEST] <kbarry> I want to be able to do more, myself, before I come in here asking for help.
[23:06:19 CEST] <ATField> iive: Thanks again.
[23:06:31 CEST] <kbarry> Trying to level up to "Level II Newb"
[23:08:41 CEST] <iive> ATField: one small hint. you can put -ss at the -i input demuxer set of options and it would be seeking to a keyframe first...
[23:09:04 CEST] <iive> if i remember correctly it's -ss first then -i ... i don't use it...
[23:16:12 CEST] <ATField> iive: Is this syntax correct? "ffmpeg -ss 00:09:56.012 -i "INPUT.mkv" -map 0:0 -map 0:4 -map 0:5 -map 0:6 -t 4.212 -vcodec copy -acodec copy test09.mkv" Still doesn't produce a viable result (the 4.212 is 10:00.224 − 09:56.012).
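A sketch of the timestamp arithmetic behind ATField's command, assuming ffmpeg's sexagesimal HH:MM:SS.mmm notation (fractional part in seconds, not a frame number, which is the distinction iive raises below):

```python
def parse_ts(ts: str) -> float:
    """Parse ffmpeg's sexagesimal HH:MM:SS.mmm notation into seconds."""
    h, m, s = ts.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

start = parse_ts("00:09:56.012")  # -ss value
end = parse_ts("00:10:00.224")    # desired endpoint

# Duration to pass as -t
print(round(end - start, 3))  # 4.212
```

If a tool instead uses SMPTE-style timecode where the last field counts frames (e.g. ".346" meaning frame 346 is impossible, but ".12" could mean frame 12 at 24 fps), the same string maps to a different instant, which would explain the mismatch with Avidemux's numbers.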
[23:23:17 CEST] <SpeakerToMeat> Ok what am I effing up? ffmpeg -i file.mov -vc copy setpts="N/(24*TB)" -ac copy atempo="24/(24000/1001)" file-24.mov
[23:24:01 CEST] <furq> SpeakerToMeat: -c:v not -vc
[23:24:15 CEST] <furq> and -vf setpts
[23:24:23 CEST] <c_14> and -af atempo
[23:24:27 CEST] <SpeakerToMeat> Other than not knowing if -ac copy will work with atempo
[23:24:29 CEST] <SpeakerToMeat> thanks
[23:24:32 CEST] <c_14> It won't
[23:24:32 CEST] <SpeakerToMeat> of course, filters
[23:24:36 CEST] <furq> and yeah -vf and -af require reencoding
[23:24:57 CEST] <SpeakerToMeat> furq: So there's no way to apply setpts without reencoding?
[23:25:34 CEST] <furq> no
[23:25:50 CEST] <SpeakerToMeat> Then I'll end up doing this in premiere
[23:26:11 CEST] <furq> there are convoluted ways to change the framerate of a video without reencoding but i've never had cause to use them
[23:26:21 CEST] <furq> someone else will probably remember better than i do
[23:27:03 CEST] <furq> you'll definitely need to reencode for atempo though
[23:35:58 CEST] <iive> ATField: i don't know. you seem to use timecode notation, but I don't know if ffmpeg actually supports it.
[23:36:24 CEST] <iive> aka afair .012 should mean the 12th frame, but ffmpeg might read it as 0.012 seconds
[23:37:55 CEST] <ATField> iive: The manual says two different time unit formats can be used, sexagesimal (HOURS:MM:SS.MICROSECONDS, as in 01:23:45.678), or in seconds. And the format works with everything else, or even for this very task if I don't use direct-copy instead of re-encoding.
[23:40:10 CEST] <iive> timecode is a thing, like a standard. not just any time code.
[23:40:17 CEST] <iive> just wanted to make sure...
[23:41:07 CEST] <iive> btw, don't forget to take a look of the manual `man ffmpeg`
[23:42:56 CEST] <ATField> The question on SU with some more code, if anyone's interested to take a look: http://superuser.com/questions/1076283/how-to-cut-starting-precisely-from-a-keyframe-while-codec-copying
[23:43:40 CEST] <ATField> iive: Yeah, it usually helps to check and make sure, thanks.
[00:00:00 CEST] --- Fri May 13 2016