[Ffmpeg-devel-irc] ffmpeg.log.20180212
burek
burek021 at gmail.com
Tue Feb 13 03:05:01 EET 2018
[00:03:42 CET] <notashark> that was out of habit. though seemingly it works with either one. I'm sure it's not good form but for what I'm doing, it's fine. (unless it screwed up because of that, obviously)
[00:48:24 CET] <xrandr> Thank you all for your help! I was able to fix the problem based on your advice :)
[01:03:54 CET] <paule32> hello
[01:03:57 CET] <paule32> need help
[01:04:01 CET] <paule32> Mon Feb 12 01:02:34 2018 127.0.0.1:48822 - - "PLAY test1.mp4/streamid=0 RTP/UDP"
[01:04:01 CET] <paule32> Mon Feb 12 01:02:46 2018 127.0.0.1 - - [TEARDOWN] "rtsp://localhost:8182/test1.mp4/ RTSP/1.0" 200 814
[01:04:31 CET] <paule32> the stream tries to start, but then crashes?
[01:05:11 CET] <paule32> this is my cmd: ./ffmpeg -f x11grab -r 5 -s 1000x600 -i :0.0 -preset ultrafast -f rtp feed1.ffm
[01:20:18 CET] <notashark> do the ffmpeg docs not have an example of CLI flags for streaming
[01:27:11 CET] <paule32> i am under linux
[01:27:18 CET] <paule32> running firefox 58
[01:27:28 CET] <paule32> how can i add a mime type?
[01:28:26 CET] <paule32> the situation i have, is, when i click on mp4 links, i get "no plugin/no mime type"
[01:28:56 CET] <paule32> so i was thinking of loading the mp4 file with vlc
[01:29:04 CET] <paule32> but i can't find such an option
[01:34:48 CET] <klaxa> ffserver is dead
[01:35:32 CET] <klaxa> how come, now that ffserver is dead, someone shows up with a problem with it every day?
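Since ffserver is gone, a hedged sketch of publishing paule32's x11grab capture without it; the UDP target, tune option, and codec settings here are assumptions, not something given in the channel:

```shell
# Publish an x11grab capture as MPEG-TS over UDP instead of feeding an
# ffserver .ffm feed. The address and sizes are placeholder values.
CMD='ffmpeg -f x11grab -framerate 5 -video_size 1000x600 -i :0.0 \
  -c:v libx264 -preset ultrafast -tune zerolatency \
  -f mpegts udp://127.0.0.1:1234'
printf '%s\n' "$CMD"   # run with: eval "$CMD"
```

Any player that can read MPEG-TS over UDP (e.g. ffplay or mpv pointed at the same URL) could then pick the stream up.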
[01:37:58 CET] <notashark> paule32: https://stackoverflow.com/questions/28713665/
[03:06:44 CET] <Lunchbox> hey i'm trying to lower cpu usage by specifying -threads 2 but i'm still staying at 95%-100% usage
[03:06:53 CET] <Lunchbox> argument doesn't seem to affect anything
[03:13:15 CET] <Lunchbox> https://pastebin.com/qEug62Gf
[04:58:20 CET] <Lunchbox> hi
[05:23:41 CET] <Lunchbox> looks like the correct way to limit cpu usage when using libx265 is to specify the number of thread pools to use via the -x265-params parameter
[05:23:46 CET] <Lunchbox> https://trac.ffmpeg.org/ticket/3730
[07:52:15 CET] <Lunchbox> cool channel
[08:32:20 CET] <ans_ashkan> Hey, any idea with this(https://superuser.com/q/1294175/474942)?
[09:23:05 CET] <Leu789> Hi, I am trying to find a way to take a video and generate a looping preview gif that has several snippets lasting for a few seconds from different parts of the source video. "ffmpeg -t 3 -ss 00:05:00 -i "SOURCE VIDEO" -vf scale=480:-1 clip.gif" - I worked out that command which makes a 3 second clip starting from 5 minutes in, but I am having trouble coming up with a way to make a single gif that has 3 or 4 of those clips from different parts.
[09:25:21 CET] <Leu789> So instead of a gif looping one small scene, it would be looping 4 scenes from different sections of the video, eg a 3 second scene starting 20% in, 40% in, 60% in and 80% in for an 18 second gif
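A sketch of one way to do what Leu789 describes: open the same file four times with different seek points, concat the snippets, then scale. The ffprobe duration lookup is left commented out and a made-up duration is used instead; input.mp4 and the 480-pixel width are placeholders:

```shell
# Build (and print) a command that takes 3-second snippets at 20/40/60/80%
# of the source and joins them into one gif via the concat filter.
# DUR=$(ffprobe -v error -show_entries format=duration -of csv=p=0 input.mp4)
DUR=300   # example: a 5-minute source
INPUTS=""
FILTER=""
i=0
for pct in 20 40 60 80; do
  off=$(awk "BEGIN{printf \"%.2f\", $DUR*$pct/100}")
  INPUTS="$INPUTS -ss $off -t 3 -i input.mp4"
  FILTER="$FILTER[$i:v]"
  i=$((i+1))
done
FILTER="${FILTER}concat=n=$i:v=1:a=0,scale=480:-1"
printf 'ffmpeg%s -filter_complex "%s" preview.gif\n' "$INPUTS" "$FILTER"
```

Putting -ss before each -i keeps the seeks fast, since each input is seeked independently before decoding starts.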
[13:53:05 CET] <kartrel> I have a question about extracting metadata from a video using the net.bramp.ffmpeg library: https://pastebin.com/MHCaPj0n
[13:53:28 CET] <kartrel> for a tldr read lines 1,2,40,41 and 42
[13:59:21 CET] <sfan5> I'd say drop that library and just build the command line parameters yourself
[13:59:57 CET] <BtbN> Or ask the author of the library
[14:00:10 CET] <furq> this is the most java thing i've ever seen
[14:02:06 CET] <kartrel> ok, thanks for the help!
[17:08:02 CET] <thmzz> Hello, I was hoping someone could help me with a little problem. I'm trying to get a smooth stream to twitch from two rtmp sources, but it shows a lot of error messages saying it cannot allocate enough RAM, although the machine has plenty. Also the stream lags a bit. Here is the code; https://pastebin.com/PwHhv449
[17:11:43 CET] <furq> thmzz: you probably want hstack instead of pad/overlay, and amix instead of amerge/pan
[17:11:59 CET] <furq> also if you still need to do -strict -2 then your ffmpeg is very old and you should upgrade it
[17:12:56 CET] <thmzz> furq: It was being tested on centos 7; obviously they have not upgraded yet. But I have the newest as well. What if I wanted e.g. 4 screens?
[17:13:15 CET] <furq> hstack=inputs=4 or hstack,vstack
[17:13:22 CET] <furq> depending on whether you want them in a line or 2x2
[17:13:29 CET] <thmzz> 2x2 yes
[17:14:44 CET] <furq> you will also presumably want to use asetpts before merging or mixing the audio streams
[17:15:07 CET] <furq> at a glance i would guess that's what's causing you to run out of memory
[17:15:23 CET] <thmzz> asetpts?
[17:15:49 CET] <thmzz> you mean setpts ?
[17:15:49 CET] <furq> a = audio
[17:15:52 CET] <thmzz> aah
[17:15:53 CET] <furq> setpts is for video
[17:23:16 CET] <thmzz> furq: is there a special place I put the asetpts option?
[17:23:33 CET] <furq> you do the same thing you're doing with setpts
[17:23:38 CET] <furq> except with [0:a] and [1:a]
[17:25:08 CET] <furq> http://vpaste.net/xTsPX
[17:25:09 CET] <furq> something like that
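For the 2x2 case, such a filtergraph might be assembled like this. This is a guess in the spirit of furq's suggestions, not the contents of the paste (which is no longer retrievable); the input names, codecs, and the RTMP target are placeholders:

```shell
# Build a 2x2 hstack/vstack + amix graph with per-input (a)setpts to
# realign timestamps before stacking/mixing, then print the full command.
FG='[0:v]setpts=PTS-STARTPTS[v0];[1:v]setpts=PTS-STARTPTS[v1];'
FG="$FG"'[2:v]setpts=PTS-STARTPTS[v2];[3:v]setpts=PTS-STARTPTS[v3];'
FG="$FG"'[v0][v1]hstack[top];[v2][v3]hstack[bot];[top][bot]vstack[v];'
FG="$FG"'[0:a]asetpts=PTS-STARTPTS[a0];[1:a]asetpts=PTS-STARTPTS[a1];'
FG="$FG"'[2:a]asetpts=PTS-STARTPTS[a2];[3:a]asetpts=PTS-STARTPTS[a3];'
FG="$FG"'[a0][a1][a2][a3]amix=inputs=4[a]'
printf 'ffmpeg -i IN1 -i IN2 -i IN3 -i IN4 -filter_complex "%s" -map "[v]" -map "[a]" -c:v libx264 -c:a aac -f flv rtmp://live.twitch.tv/app/STREAM_KEY\n' "$FG"
```

hstack/vstack assume the four inputs share dimensions; mismatched sources would need a scale step per input first.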
[17:28:24 CET] <thmzz> thanks, tested now:) It spams "Past duration 0.9xxx too large" tho. No idea what this means
[17:50:45 CET] <thmzz> furq: quick q: if I had e.g. 4 streams, could I just add two more [vX] and [aX], or would that cause a problem?
[17:51:35 CET] <BtbN> are you sure you don't want to use OBS instead?
[17:52:29 CET] <thmzz> BtbN: I'm using OBS to send to a nginx server with rtmp module. Of course open for better solutions :)
[17:52:45 CET] <BtbN> Why do you still want to do that with ffmpeg then?
[17:52:59 CET] <BtbN> obs is usually the better software for live stream composition
[17:53:40 CET] <thmzz> Combining remote streams into one and then sending to twitch (multistreaming a game with 3 other friends)
[17:54:48 CET] <BtbN> I doubt ffmpeg will run very stable for that, due to its non-parallel nature
[17:55:04 CET] <BtbN> one of the streams lagging will bring the whole thing to a grinding halt
[17:57:14 CET] <thmzz> hmm.. do you have a better suggestion? hehe. I guess ffmpeg would be good if you had all machines internally on gbit e.g.
[17:57:33 CET] <BtbN> even then it would be unreliable
[17:58:59 CET] <BtbN> The most reliable way to do it would be to use OBS and capture one independent video player per stream
[18:00:23 CET] <thmzz> Yeah, might be, what about skype?:)
[18:00:57 CET] <BtbN> ?
[18:02:43 CET] <thmzz> Using skype and making a group call and catching all the video sources maybe.
[18:02:57 CET] <BtbN> sounds horrible
[19:08:11 CET] <isemenov> good evening!
[19:08:31 CET] <isemenov> my goal is to use nvidia hardware encoding in OBS
[19:08:38 CET] <isemenov> Studio on Fedora 27 (Linux)
[19:08:55 CET] <isemenov> to achieve this, I follow the tutorial
[19:08:56 CET] <isemenov> https://scottlinux.com/2016/09/12/how-to-enable-nvidia-nvenc-for-obs-in-linux/
[19:09:20 CET] <isemenov> I've downloaded ffmpeg source code as tar.bz2 version 3.4.2
[19:09:23 CET] <isemenov> and built with
[19:09:41 CET] <isemenov> ./configure --prefix=/opt --enable-nonfree --enable-nvenc --enable-pic --disable-avx2 --disable-xop --disable-fma3 --disable-fma4 --disable-static --enable-shared
[19:09:53 CET] <isemenov> I can see that nvenc is being built in the build log
[19:09:57 CET] <isemenov> in console
[19:10:36 CET] <isemenov> but, once I build and then run OBS, there is no nvenc option available. Do you think this is a ffmpeg or an OBS Studio problem?
[19:10:59 CET] <isemenov> thank you!
[19:11:06 CET] <Fenrirthviti> isemenov: We (obs) use FFmpeg for NVENC currently.
[19:11:33 CET] <sfan5> did you make sure to install the ffmpeg you compiled?
[19:11:35 CET] <Fenrirthviti> So you'll need to check which instance of ffmpeg is being used by OBS, and ensure that it has proper nvenc support.
[19:12:22 CET] <isemenov> sfan5: yes, sure. the libs are in /opt/
[19:12:36 CET] <isemenov> Fenrirthviti: how do I achieve both?
[19:12:42 CET] <sfan5> obs won't pick up your ffmpeg libraries if they are in /opt
[19:12:43 CET] <isemenov> Fenrirthviti: how do I achieve either of those?
[19:13:05 CET] <isemenov> sfan5: I have the paths configured already. which env vars are required for obs?
[19:13:22 CET] <isemenov> FFMPEG_avcodec_INCLUDE_DIR /opt/include
[19:13:42 CET] <isemenov> FFMPEG_avcodec_LIBRARY /opt/bin/../lib/libavcodec.so
[19:13:48 CET] <sfan5> should work
[19:13:49 CET] <isemenov> ^ picked up by cmake at build time.
[19:14:05 CET] <sfan5> does obs actually use those libraries at runtime?
[19:14:12 CET] <sfan5> run e.g. ldd $(which obs)
[19:15:00 CET] <isemenov> libavcodec.so.57 => /opt/lib/libavcodec.so.57 (0x00007fdd7b055000) libavutil.so.55 => /opt/lib/libavutil.so.55 (0x00007fdd7add6000)
[19:15:26 CET] <isemenov> sfan5: ^
[19:15:57 CET] <sfan5> should be fine then
[19:16:41 CET] <isemenov> sfan5: but it's not
[19:17:18 CET] <isemenov> sfan5: (use stream encoder) | x264
[19:17:23 CET] <isemenov> no mention of nvidia
[19:19:00 CET] <Fenrirthviti> try invoking that ffmpeg directly and use nvenc_h264 encoder to run a quick test
[19:19:06 CET] <Fenrirthviti> make sure it's actually working
[19:19:27 CET] <isemenov> Fenrirthviti:
[19:19:39 CET] <Fenrirthviti> uh, what
[19:19:50 CET] <Fenrirthviti> inb4 flood gline
[19:21:06 CET] <isemenov_> durandal_1707: that was a mistake. wrong buffer.
[19:21:27 CET] <isemenov_> Fenrirthviti: the point is that libnvidia-encode.so.1 isn't found.
[19:26:42 CET] <sfan5> isemenov_: that's part of the nvidia binary driver
[19:29:09 CET] <isemenov_> sfan5: yep. I've installed it from rpmfusion and then the entry appeared.
[19:29:20 CET] <isemenov_> so, thanks for your help!
[19:31:45 CET] <hypothete> Hi all! I'm wondering if you can point me in the right direction when it comes to pipes and video duration. Here's my scenario: I have transparent PNG buffers that I'm piping into ffmpeg. I also have a background image that I want to combine with the PNGs. If I use filter_complex to overlay the 2, only a frame or 2 gets rendered, instead of matching the length of the stream.
[19:32:08 CET] <hypothete> Here's the command I'm using: ffmpeg -y -i bg.png -f image2pipe -i pipe:0 -filter_complex scale2ref[0:v][1:v];[0:v][1:v]overlay[out] -map [out] -s 640x480 -framerate 30 -pix_fmt yuv420p -an -b:v 4000k -vcodec libx264 ./tmp/output.mp4
[19:33:41 CET] <hypothete> I've tried messing around with overlay=shortest=[some number], but I get strange errors. It seems like if I could figure out the length of the stream and hardcode it as the duration for both inputs, I would get the full video.
[19:34:10 CET] <hypothete> I do know the number of frames and the framerate ahead of time, so I can pass duration in as a value.
[19:35:10 CET] <furq> hypothete: i suspect you're missing -loop 1 before -i bg.png
[19:35:15 CET] <furq> otherwise shortest will be one frame
[19:38:53 CET] <hypothete> thanks furq. I gave it a go and it appears to be running without stopping. :/ Guessing that since the pipe comes in with a length of 'N/A' that might be the problem?
[19:39:14 CET] <hypothete> at least it did something different, though.
[19:41:13 CET] <furq> with shortest=1?
[19:41:21 CET] <furq> i would have expected it to stop when the pipe closes
[19:41:26 CET] <hypothete> ah, did not set that up. Let me give it another shot
[19:41:38 CET] <furq> also you can just do -filter_complex scale2ref,overlay=shortest=1
[19:41:42 CET] <furq> and remove -map [out]
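Folding those fixes into the original command gives roughly the following sketch (same filenames and rates as in the command above):

```shell
# -loop 1 keeps the background image feeding frames; shortest=1 ends the
# overlay when the piped PNG stream closes; -map [out] is no longer needed.
CMD='ffmpeg -y -loop 1 -i bg.png -f image2pipe -i pipe:0 \
  -filter_complex "scale2ref,overlay=shortest=1" \
  -s 640x480 -framerate 30 -pix_fmt yuv420p -an \
  -b:v 4000k -vcodec libx264 ./tmp/output.mp4'
printf '%s\n' "$CMD"
```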
[19:43:19 CET] <hypothete> It worked! Can't thank you enough. :D
[19:47:38 CET] <hypothete> take care y'all!
[20:08:52 CET] <Cracki> hey: is there a way to have a nonlinear time compression audio filter that works in *conjunction* with a PTS-changing video filter? I'm thinking about re-timing voice audio...
[20:09:09 CET] <Cracki> just asking if ffmpeg filters allow this
[20:09:18 CET] <Cracki> (yes, these filters would need to be written)
[20:12:33 CET] <durandal_1707> rubberband, atempo?
[20:17:54 CET] <furq> Cracki: aresample=async=1 will sync to the audio timestamps
[20:18:08 CET] <furq> so you might be able to invoke asetpts with the same params as setpts and use that
[20:18:09 CET] <Cracki> hmmm
[20:18:17 CET] <furq> no idea what the quality will be like
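As a concrete, purely illustrative version of that suggestion: a fixed linear factor stands in for the nonlinear warp Cracki has in mind, and the filenames are placeholders:

```shell
# Stretch video and audio timestamps by the same factor, then let
# aresample=async=1 pull the audio samples toward the new timestamps.
CMD='ffmpeg -i in.mkv -vf "setpts=PTS*1.25" \
  -af "asetpts=PTS*1.25,aresample=async=1" out.mkv'
printf '%s\n' "$CMD"
```

As furq says, the quality of the audio correction is an open question; async=1 compensates by stretching or squeezing around the timestamps rather than doing pitch-preserving time scaling.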
[20:18:49 CET] <Cracki> so the audio filter generates however many samples it wants, and those have timestamps...
[20:19:09 CET] <Cracki> how is that done? does a buffer of audio samples have a starting timestamp? end timestamp? timestamp per sample?
[20:20:21 CET] <Cracki> for my idea, it's mostly important that the video is approximately in sync to the audio. I would even go so far as to impose video frame rate limits (upper+lower) different from the audio
[20:20:45 CET] <Cracki> because the audio could be squashed considerably sometimes (silence)
[20:22:31 CET] <furq> it's one timestamp per avpacket
[20:22:37 CET] <furq> how many samples that is depends on the audio codec
[20:22:41 CET] <Cracki> hm right
[20:23:23 CET] <furq> if this is cfr video then i would probably just work out the correct value for atempo
[20:23:43 CET] <Cracki> the output will definitely not be cfr
[20:24:00 CET] <Cracki> non-linear time warping
[20:24:05 CET] <furq> fun
[20:24:44 CET] <furq> well yeah i don't see why you couldn't write a filter that would do that
[20:24:49 CET] <Cracki> was hoping to work something into vlc or ffplay or whatever... something that's a bit smarter than just pitch-constant time scaling
[20:25:16 CET] <furq> you can certainly have a filter that inputs and outputs video and audio streams
[20:25:25 CET] <furq> and you would obviously have access to the video stream timestamps
[20:25:32 CET] <Cracki> one filter that has both audio and video inputs? hm ok
[20:25:45 CET] <Cracki> that's certainly nice to have
[20:25:46 CET] <furq> yeah
[20:25:49 CET] <furq> e.g. concat
[20:26:01 CET] <Cracki> right! well, that answers that then :P
[20:27:30 CET] <Cracki> are ffmpeg filters meant to be used in realtime/interactive setups, in particular in video players, and with UI-changeable parameters?
[20:27:47 CET] <furq> idk about meant to be, but there are definitely video players that do that
[20:27:49 CET] <furq> mpv, for one
[20:27:53 CET] <Cracki> ah good
[20:28:33 CET] <Cracki> I'd rather write a filter for ffmpeg than vlc. vlc is an overgrown player with lots of library support to me, not a video manipulation tool
[20:29:00 CET] <furq> well yeah if you write an ffmpeg filter then mpv will more or less automatically support it
[20:29:06 CET] <durandal_1707> who mentioned vlc?
[20:29:10 CET] <Cracki> lol
[20:29:15 CET] <Cracki> I shall never do that again
[20:30:41 CET] <Cracki> then I'll just have to figure out how to get some filter parameters exposed to mpv gui
[20:31:03 CET] <furq> you'd generally do it in a script and bind keys to it
[20:31:03 CET] <Cracki> nvm, mpv is a lib... I'm lagging in understanding
[20:31:06 CET] <Cracki> ah
[20:31:15 CET] <furq> you can edit the ui but it's a pain in the arse
[20:31:25 CET] <furq> also mpv is a player
[20:31:32 CET] <furq> there is a libmpv but you wouldn't need that
[20:31:38 CET] <Cracki> key bindings are just fine. [/] as in vlc would be totally fine.
[20:31:44 CET] <Cracki> understood.
[20:32:01 CET] <furq> if you write it as an ffmpeg filter and then build mpv against that libavfilter then you should have access to it
[20:32:10 CET] <furq> should work in ffplay as well
[20:32:35 CET] <Cracki> I'm hesitant to use ffplay. heard it's deprecated and about to be axed.
[20:32:44 CET] <furq> you might be thinking of ffserver
[20:32:50 CET] <Cracki> right, that
[20:32:54 CET] <furq> ffplay is pretty safe
[20:32:56 CET] <furq> it's just not very good
[20:33:03 CET] <furq> it's there for debugging more than anything else
[20:38:02 CET] <Cracki> I like the filter howto. let's hope I can get this idea shaped into a master's thesis...
[21:17:38 CET] <alexpigment> hey guys. I'm doing some VHS to digital conversion, and normally I go easy on video filters because a) it's really hard to easily preview each change, and b) I don't have a good comparison
[21:17:51 CET] <alexpigment> but now I've got one part of a VHS tape where there's a DVD source as well
[21:18:09 CET] <alexpigment> so I was hoping to match the colors of the VHS to the DVD via ffmpeg filters
[21:18:31 CET] <alexpigment> is there any program that allows you to easily see the changes to a single frame as you change filter settings?
[21:18:51 CET] <alexpigment> rather, an FFMPEG-based program
[21:19:18 CET] <JEEB> I would have probably looked at vapoursynth and tried out vapoursynth editor
[21:19:32 CET] <JEEB> since the preview in that is rather nice, and you script your filter chain with python
[21:19:38 CET] <alexpigment> this shroud of lacking vapoursynth knowledge is looming above me :)
[21:19:49 CET] <alexpigment> well, i may go that route
[21:19:59 CET] <alexpigment> in the meantime, do you know if avidemux is just using ffmpeg filters?
[21:20:07 CET] <alexpigment> i think i have that installed
[21:20:08 CET] <JEEB> no idea, unfortunately
[21:20:12 CET] <alexpigment> k
[21:26:17 CET] <furq> you can output y4m and pipe into mpv
[21:26:21 CET] <furq> maybe with hstack or something
[21:26:46 CET] <JEEB> just use NUT in that case ;)
[21:26:54 CET] <JEEB> since it lets you have the video *and* the audio
[21:26:56 CET] <furq> nut defaults to mpeg4 for some reason
[21:27:05 CET] <furq> so y4m is less typing
[21:27:29 CET] <alexpigment> yeah, i mean i could use NUT
[21:27:34 CET] <furq> either's fine
[21:27:38 CET] <alexpigment> i just was hoping to sit there on one frame
[21:27:47 CET] <alexpigment> anyway, i'll try these ideas out
[21:27:49 CET] <alexpigment> thanks guys
[21:28:43 CET] <furq> probably just take a screenshot and work on that
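The y4m-into-mpv preview described above might look like the following sketch; the filenames and eq values are made up stand-ins for whatever is being tuned, and hstack assumes both inputs share a height:

```shell
# Tweak the VHS capture, put it side by side with the DVD frame, and
# preview the result as yuv4mpeg piped straight into mpv.
CMD='ffmpeg -i vhs.mkv -i dvd.mkv \
  -filter_complex "[0:v]eq=saturation=1.1:gamma=0.95[l];[l][1:v]hstack" \
  -f yuv4mpegpipe - | mpv -'
printf '%s\n' "$CMD"
```

For alexpigment's single-frame use case, extracting one matching frame from each source first (as furq suggests) and running the same pipeline over the two stills would avoid seeking.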
[22:08:03 CET] <colekas> hello friends
[22:08:24 CET] <colekas> I'm trying to enable some of the debug flags within aacdec_template.c
[22:08:41 CET] <colekas> the manual says I should be able to set this as part of the libavcodec debug flags
[22:08:54 CET] <colekas> but I can't seem to get the command to work if I'm just decoding a stream to -f null /dev/null
[22:09:21 CET] <JEEB> which debug flags are ye talking about? av_log log level or something else?
[22:09:33 CET] <JEEB> av_log stuff is just -v verbose or -v debug
[22:09:44 CET] <colekas> avctx->debug
[22:09:45 CET] <JEEB> if it's an actual ifdef thing then you have to forcibly enable it while compiling
[22:09:53 CET] Action: JEEB double-blinks
[22:09:56 CET] <JEEB> can you link the file?
[22:10:16 CET] <colekas> ffmpeg -loglevel verbose -i blah.ts -map 0:4 -debug:a:0 'pict' -debug:a:0 'bitstream' -debug:a:0 'startcode' -f null /dev/null
[22:10:23 CET] <colekas> one sec
[22:10:32 CET] <colekas> https://github.com/FFmpeg/FFmpeg/blob/master/libavcodec/aacdec_template.c
[22:11:08 CET] <JEEB> wow
[22:11:16 CET] <JEEB> I have not seen that stuff
[22:11:23 CET] <colekas> or do I need to do something like -c:a:0 debug 'bitstream'?
[22:11:26 CET] <JEEB> usually it's enough to put logging under AV_LOG_DEBUG
[22:11:41 CET] <JEEB> but here you have some FF_DEBUG_PICT_INFO
[22:11:46 CET] <colekas> https://www.ffmpeg.org/ffmpeg-all.html#Codec-Options
[22:11:56 CET] <colekas> would seem to indicate that this is setable via the command line
[22:12:08 CET] <colekas> lol
[22:12:08 CET] <JEEB> yea, it's & 'd so it could be
[22:12:53 CET] <JEEB> ok
[22:13:05 CET] <JEEB> seems like -debug pict ? before -i ?
[22:13:35 CET] <colekas> oh?
[22:13:36 CET] <colekas> hmm
[22:14:44 CET] <JEEB> before input = decoding
[22:14:50 CET] <colekas> as far as I can tell it was not successful
[22:15:06 CET] <colekas> I would think that these flags would make it bursty af
[22:15:35 CET] <colekas> ffmpeg -loglevel verbose -debug "startcode" -i blah.ts -map 0:4 -f null /dev/null
[22:15:52 CET] <colekas> ah
[22:15:59 CET] <colekas> loglevel overwrites it
[22:16:04 CET] <JEEB> :D
[22:16:04 CET] <colekas> thank you!
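For the record, the command this exchange converges on would be something like the following (same placeholder input name as above):

```shell
# -debug is a decoder option, so it goes before -i; the log level has to
# be raised to debug as well, or the extra messages are filtered out.
CMD='ffmpeg -loglevel debug -debug startcode -i blah.ts -map 0:4 -f null /dev/null'
printf '%s\n' "$CMD"
```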
[22:44:40 CET] <mkid> Hi. I discovered that ffmpeg versions newer than 2.8.11 convert length-prefixed mode to Annex B during "ffmpeg -i some.mp4 -vcodec copy some.h264". Older versions required the explicit h264_mp4toannexb option. I would like to ask how to preserve length-prefixed mode.
[22:46:00 CET] <JEEB> it's not supposed to happen with raw streams
[22:54:07 CET] <furq> Please note that this filter is auto-inserted for MPEG-TS (muxer mpegts) and raw H.264 (muxer h264) output formats.
[22:54:10 CET] <furq> yeah it is
[22:54:38 CET] <furq> but i assume it'll be reversed if you mux back to mp4 so i don't see why you'd need to keep it length prefixed
[22:58:14 CET] <mkid> furq: One of the tools I use requires length-prefixed H.264. Is there any method to get the length-prefixed form in the current version of ffmpeg, or should I find another tool?
[22:59:56 CET] <JEEB> and it doesn't want that in mp4 or mkv?
[23:00:00 CET] <JEEB> that's... weird
[23:00:12 CET] <JEEB> since the whole length-prefixed format is defined *for* 14496-15
[23:00:36 CET] <furq> i don't know of a way without disabling that bsf at compile time
[23:00:53 CET] <furq> i assume mp4box will do it but i've never tried
[23:29:59 CET] <mkid> furq: Thanks for the help. GStreamer's h264parse works as I expect.
[00:00:00 CET] --- Tue Feb 13 2018