[Ffmpeg-devel-irc] ffmpeg.log.20160607
burek
burek021 at gmail.com
Wed Jun 8 02:05:01 CEST 2016
[00:14:29 CEST] <wallbroken> what is "muxing overhead"?
[00:15:30 CEST] <haole> does ffmpeg support gavc geovision files?
[00:18:47 CEST] <JEEB> the simplest way to try is to grab either ffmpeg or ffprobe and do ffmpeg -i file or ffprobe file
[00:18:59 CEST] <JEEB> because that most probably is what a company calls something else
[00:19:10 CEST] <JEEB> if not, then that will be obvious as well
[00:21:10 CEST] <haole> JEEB: guess not...? https://pastebin.mozilla.org/8875435
[00:22:26 CEST] <JEEB> well it did find the AVC stream
[00:23:05 CEST] <JEEB> possibly some weird hack to keep it there but it did find the parameter sets at least
[00:23:49 CEST] <JEEB> you could try posting a sample on the trac issue tracker, but holy crap why do you use software that creates such abominations :<
[00:26:00 CEST] <jackp10> Can someone explain to me why the following conversion does not generate a file that is streamable via Safari? Chrome and Firefox stream it just fine, but Safari does not and I don't know why: $ffmpeg_bin -y -i '$input' -qscale:v 7 -vcodec libtheora -codec:a libvorbis -preset ultrafast -vf scale=640:-1 '$ogvFile'
[00:26:25 CEST] <jackp10> unfortunately I am using an ancient version of ffmpeg (0.10) that I cannot update
[00:26:38 CEST] <llogan> ancient and unsupported.
[00:26:46 CEST] <llogan> why can't you just get a new binary?
[00:29:08 CEST] <jackp10> because this version was built inside another set of functionality and IT (being IT) does not want to update it yet, not knowing whether that could cause something else to malfunction
[00:29:46 CEST] <jackp10> with the knowledge that right now I cannot update it, is there a way I could convert a video file to play on Safari too ?
[00:30:00 CEST] <jackp10> I don't know why the generated file does not play on that browser
[00:30:40 CEST] <haole> JEEB: it's the only output format for some Geovision devices that my customer owns :~(
[00:35:26 CEST] <drv> Safari doesn't support Theora or Vorbis, as far as I know
[00:38:15 CEST] <jackp10> I also tried the conversion using the following: $ffmpeg_bin -y -i $INPUT -ac 2 -ab 96k -ar 44100 -vcodec libx264 -level 41 -preset ultrafast -vf scale=640:-1 $OUTPUT which should result in an MP4, but still no luck
[00:39:45 CEST] <llogan> we can't help you with such an old version
[00:42:05 CEST] <jackp10> I understand. I wonder if there is a way to have a newer version of ffmpeg as a standalone binary, perhaps with all libraries and any other files within one single directory
[00:44:18 CEST] <jackp10> my fear is that if I ./configure and build a new version on that server, it might overwrite existing files across the system (something that, if I can convince them to let me have a new version, I can say that everything is within one directory)
[00:45:25 CEST] <haole> jackp10: compile a static binary and don't do make install :)
[00:45:32 CEST] <haole> that might hurt the licensing, though
[00:45:38 CEST] <haole> (not an expert nor ffmpeg dev)
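A rough sketch of what haole describes: configure into a prefix under $HOME so nothing system-wide is touched, and run the binary straight out of the source tree. The prefix path and the enabled components are assumptions, and fully static linking may or may not succeed depending on which -dev libraries are on the box; the prebuilt static binaries linked below are usually the easier route.

  # untested sketch; --prefix keeps any install local, libx264 assumed available
  ./configure --prefix="$HOME/ffmpeg_local" --enable-gpl --enable-libx264 \
              --pkg-config-flags="--static" --extra-ldflags="-static"
  make
  # no "make install" needed; the freshly built binary can be run in place
  ./ffmpeg -version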
[00:46:25 CEST] <llogan> jackp10: http://ffmpeg.org/download.html
[00:46:26 CEST] <jackp10> and I am not an expert in compiling from source. Did that a long time ago, but got used to yum so much nowadays
[00:46:39 CEST] <llogan> see links to already compiled binaries
[00:47:06 CEST] <jackp10> I downloaded an ffmpeg from the git repository right now. I have a directory called ffmpeg with lots of files in it
[00:47:26 CEST] <llogan> you downloaded the source code
[00:51:53 CEST] <furq> jackp10:
[00:51:58 CEST] <furq> http://johnvansickle.com/ffmpeg/
[00:53:15 CEST] <jackp10> great.. let me download it right away
[00:58:41 CEST] <jackp10> :D !! it works.. I can convert videos just fine
[00:59:06 CEST] <jackp10> now I need to re-test the conversion I tried in the last few days to see if I can make those conversions work with Safari
[00:59:58 CEST] <jackp10> just a suggestion. Would you make 3 conversions for 3 different file types (ogv, mp4, webm) or one mp4 that can work with all major browsers ?
[01:00:13 CEST] <jackp10> I can see that right now, ogv works for both Chrome and Firefox
[01:00:30 CEST] <furq> mp4 should work with all major browsers
[01:02:09 CEST] <jackp10> one thing I struggled with for quite a while (and gave up on in the end) was creating a progress bar on the front end. It ended up being quite a challenge
[01:02:46 CEST] <wallbroken> what is "muxing overhead"?
[01:22:25 CEST] <jackp10> what does "Output file #0 does not contain any stream" mean? I am trying to convert an mov file using the binary I just downloaded, but get that error. I googled it but couldn't find out why
[01:23:06 CEST] <pfelt1> jackp10: we'd need your command and its output in a pastebin to be able to help
[01:23:26 CEST] <pfelt1> but basically you're trying to output to a file and ffmpeg doesn't know what streams to put in that file.
[01:23:39 CEST] <iive> jackp10: video packets form one stream, audio packets form another.
[01:25:19 CEST] <jackp10> oh right, I see. I added additional options after reading your messages and it did work.
[01:28:04 CEST] <c_14> wallbroken: the amount of CPU time spent not on encoding or filtering the video/audio but on writing headers and other format things, and/or the number of bits spent not on video/audio but on the containing metastructure
[01:28:53 CEST] <wallbroken> ok thank you
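As a worked illustration with made-up numbers: if an encode writes about 100 MB of compressed audio and video packets and the finished file weighs 102 MB, the extra ~2 MB of headers, indexes and packet framing is what ffmpeg reports as roughly 2% muxing overhead.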
[01:33:34 CEST] <jackp10> I still have no luck with the streaming of an mp4 on Safari. here is what I am using: http://pastebin.com/TT08Ywdu
[01:33:40 CEST] <jackp10> do you see anything wrong with that ?
[01:33:55 CEST] <jackp10> still, the streaming works in Chrome and Firefox but does not in Safari
[01:55:13 CEST] <furq> jackp10: paste the full output
[01:55:30 CEST] <furq> also -level 41 is redundant with those settings, the actual level will be lower
[01:58:08 CEST] <jackp10> here is the output of it
[01:58:09 CEST] <jackp10> http://pastebin.com/MGJAi0MR
[02:25:32 CEST] <jackp10> I don't know what else to do. I cannot find a conversion that gives me an mp4 streamable via all major browsers!!
[02:48:31 CEST] <_delta_> hi, I'm getting version conflict warnings when linking against the ffmpeg packages from debian-backports. when I build, I get this warning: /usr/bin/ld: warning: libavcodec.so.56, needed by //usr/lib/x86_64-linux-gnu/libchromaprint.so.0, may conflict with libavcodec.so.57
[02:49:16 CEST] <_delta_> I posted more information in this Stack Overflow post, but if anyone could help me figure this out I would really appreciate it. https://stackoverflow.com/questions/37668842/version-conflict-warnings-when-linking-against-ffmpegs-libraries-from-debian-ba
[02:52:30 CEST] <kepstin> _delta_: the basic problem is really simple. you're linking directly to ffmpeg, which pulls in libavcodec.so.57, and you're linking to libchromaprint, which pulls in libavcodec.so.56. Then you have two different libavcodecs loaded, and bad things happen.
[02:53:28 CEST] <kepstin> you either need to get an updated version of libchromaprint that uses libavcodec.so.57, or build your app such that you link against libavcodec.so.56 (this will probably require poking around with different -dev package versions on debian)
[02:56:11 CEST] <kepstin> basically, if you want to link to other system libraries that also use libavcodec, you can't use the backports libavcodec for your stuff :/
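A quick way to see the duplicate that kepstin describes is to list what the finished binary actually pulls in; the binary name here is a placeholder:

  # both libavcodec.so.56 and libavcodec.so.57 showing up is the symptom
  ldd ./myapp | grep libavcodec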
[02:56:47 CEST] <furq> libchromaprint in stretch is new enough to use libavcodec57
[02:56:54 CEST] <furq> you could install that, although i don't usually recommend mixing repos
[02:57:08 CEST] <furq> or you could request that it be added to backports
[02:57:25 CEST] <furq> and then experience the lightning-fast pace of the debian package maintainers
[02:57:31 CEST] <_delta_> thanks guys, the only two libraries I'm using are SDL2 and ffmpeg, so I guess SDL is pulling in libchromaprint?
[02:58:41 CEST] <furq> _delta_: apt-cache rdepends packagename
[02:59:58 CEST] <_delta_> furq: apt-cache rdepends libchromaprint-dev gives nothing
[03:00:21 CEST] <furq> do it on libchromaprint0
[03:01:00 CEST] <_delta_> ah ok, it says libavformat57 depends on it? but I don't see how I'm pulling that in to my project
[03:01:28 CEST] <furq> https://packages.debian.org/jessie-backports/libavformat57
[03:01:29 CEST] <furq> huh
[03:01:30 CEST] <furq> so it does
[03:02:20 CEST] <furq> if you're not actually using libchromaprint then you can probably just ignore it
[03:03:56 CEST] <_delta_> furq: lol ok, thanks.
[03:05:57 CEST] <furq> that does make me a bit suspicious of backports though
[03:06:20 CEST] <furq> not that i run stable anywhere
[03:13:06 CEST] <_delta_> furq: do you think I should report this to the backports maintainers? this seems broken
[03:24:09 CEST] <furq> yeah it looks like libavcodec57 indirectly depends on libavcodec56
[03:24:14 CEST] <furq> which is pretty stupid
[03:24:48 CEST] <furq> although i assume it's not broken anything yet or else someone would've complained
[03:25:14 CEST] <furq> it's an obvious fix though, just backport libchromaprint 1.3.1
[04:18:36 CEST] <_delta_> furq: ok, I just sent an email to the maintainers, hopefully they'll fix it soon
[06:30:18 CEST] <clownpriest> hello, is there any way to pass a list of onset/offset timestamps to slice an audio file into new audio files? i know you can do this when concatenating audio files (passing in a txt file with paths to audio files)
[06:30:53 CEST] <clownpriest> i'm currently just running the ffmpeg command for each segment that needs to be spliced, was hoping there was a more efficient way
[06:31:18 CEST] <clownpriest> any help would be greatly appreciated
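If the slices are back-to-back, the segment muxer can do this in a single run by taking all the cut points in one option; the file names and timestamps below are placeholders, and for disjoint onset/offset pairs a loop of per-slice ffmpeg -ss/-to runs is still the straightforward option.

  # untested sketch: split at the listed timestamps (in seconds) without re-encoding
  ffmpeg -i input.wav -f segment -segment_times 12.5,34.0,58.25 -c copy slice_%03d.wav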
[10:29:21 CEST] <shayla> Hi guys. I'm trying to combine one image (jpg) with an audio file (mp3).
[10:29:33 CEST] <shayla> this is what i'm trying to do :
[10:29:39 CEST] <shayla> ffmpeg -loop 1 -i photo_2016-05-11_14-38-53.jpg -i yourewrong.mp3 -c:v libx264 -tune stillimage -c:a aac -strict experimental -b:a 192k -pix_fmt yuv420p -shortest yourewrong.mp4
[10:30:26 CEST] <shayla> But i get an error. This is the output -> http://pastebin.com/7qxaZKJV
[10:30:28 CEST] <shayla> What am I doing wrong?
[10:34:00 CEST] <furq> [libx264 @ 0xc824a0] height not divisible by 2 (1280x719)
[10:34:24 CEST] <furq> shayla: -s 1280x720
[10:37:39 CEST] <shayla> Oh well, thank you furq
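A variant that keeps the source aspect ratio instead of hard-coding 1280x720 is to let the scale filter round both dimensions down to even values; this is shayla's command with only a -vf added (untested sketch):

  ffmpeg -loop 1 -i photo_2016-05-11_14-38-53.jpg -i yourewrong.mp3 \
         -c:v libx264 -tune stillimage -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" \
         -c:a aac -strict experimental -b:a 192k -pix_fmt yuv420p -shortest yourewrong.mp4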
[11:08:37 CEST] <ricmik> Hi! O
[11:09:29 CEST] <ricmik> I'm trying to figure out why an ffmpeg process that is started from a script does not work, but if I run the exact same command (taken from ps output) on the command line it works fine.
[11:09:51 CEST] <ricmik> It looks like it's the drawtext filter that breaks something
[11:10:34 CEST] <ricmik> the command that works from command line is: ffmpeg -v info -y -i rtsp://10.17.223.5/HighResolutionVideo -vf "drawtext=fontsize=48: text='%{localtime}': x=(w-tw)/2: y=h-(2*lh): fontcolor=white: box=1: boxcolor=0x00000000@1" -qscale:v 2 -f image2 -updatefirst 1 /var/www/html/test.jpg
[11:11:02 CEST] <ricmik> but when this is started from a shell script, it does not work
[11:13:08 CEST] <ricmik> the script (which is started from init.d) http://pastebin.com/CbiUj2PR
[11:13:35 CEST] <f00bar80> asking how to use ffprobe to check the integrity of a live stream's encoded output?
[11:15:13 CEST] <ricmik> The error message I get in the log when starting it from the script is: [NULL @ 0x22e5de0] Unable to find a suitable output format for 'text='%{localtime}':' text='%{localtime}':: Invalid argument
[11:16:02 CEST] <ricmik> but when I run the exact same command that I get from ps -fwwp <PID>, it works perfectly
[11:16:15 CEST] <ricmik> any ideas why?
[11:19:18 CEST] <furq> that error message suggests the quotes are being ignored
[11:19:24 CEST] <furq> they look ok to me though
[11:19:46 CEST] <ricmik> hm
[11:23:09 CEST] <furq> http://vpaste.net/4SJ3w
[11:23:15 CEST] <furq> that's probably the path of least resistance
[11:23:31 CEST] <furq> except with the single quotes escaped
[11:43:20 CEST] <f00bar80> ppl any comment ?
[11:57:28 CEST] <ricmik> furq: doesn't seem to work :/
[12:00:06 CEST] <ricmik> I escaped the single quotes, and got the same result
[12:06:36 CEST] <PlanC> I'm playing around with a few variable bitrate files
[12:06:59 CEST] <PlanC> when I get their info in ffprobe it gives me a specific bitrate which I think is the average
[12:07:53 CEST] <PlanC> when I open the audio file in MPC then there are moments where the bitrate is over that average
[12:08:06 CEST] <PlanC> is there any way to get the maximum bitrate of the audio file?
[12:08:14 CEST] <PlanC> with ffprobe
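ffprobe does not report a peak bitrate directly, but it can dump per-packet sizes and timestamps, from which a windowed maximum can be computed in a spreadsheet or script; a sketch, assuming the first audio stream is the one of interest:

  ffprobe -v error -select_streams a:0 -show_entries packet=pts_time,size \
          -of csv=p=0 input.mp3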
[12:37:37 CEST] <ricmik> AH! Finally
[12:38:14 CEST] <ricmik> When I removed all the spaces and the quotes for the -vf switch it's finally working
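Without seeing more of the script this is a guess, but the usual cause of that error is the command being assembled in an unquoted shell variable, so the spaces inside the drawtext argument get split into separate arguments and the quotes are passed through literally. A sketch of a script that survives word splitting (the @ in boxcolor and the paths are taken from the command above):

  #!/bin/sh
  # keep the whole filtergraph in one variable and quote the expansion
  VF="drawtext=fontsize=48:text='%{localtime}':x=(w-tw)/2:y=h-(2*lh):fontcolor=white:box=1:boxcolor=0x00000000@1"
  ffmpeg -v info -y -i rtsp://10.17.223.5/HighResolutionVideo \
         -vf "$VF" -qscale:v 2 -f image2 -updatefirst 1 /var/www/html/test.jpg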
[12:57:11 CEST] <f00bar80> ppl any comment ?
[13:01:32 CEST] <jackp10> Can someone tell me why the following conversion does not let me stream on Safari (it works on Chrome and Firefox) - ./ffmpeg -y -i '/tmp/file1.mov' -ac 2 -ab 96k -ar 44100 -vcodec libx264 -preset ultrafast -vf scale=640:-1 -movflags +faststart '/tmp/file1.mp4'
[13:01:53 CEST] <__jack__> f00bar80: what is your input, say ?
[13:04:02 CEST] <__jack__> f00bar80: if you can replay it predictably, you may just want to play it, with something like ffmpeg -i input -map 0 -c copy -f null /dev/null
[13:07:03 CEST] <jackp10> __jack__: this is the output -> http://pastebin.com/K5jvP0qB
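A hedged guess at a more Safari-friendly variant of the same command (untested): Safari of that era was pickier than Chrome/Firefox about the H.264 profile and pixel format, even frame dimensions, and AAC audio, so this sketch pins all of those explicitly.

  ./ffmpeg -y -i '/tmp/file1.mov' -vf scale=640:-2 -c:v libx264 -preset ultrafast \
           -profile:v baseline -pix_fmt yuv420p -c:a aac -strict experimental \
           -b:a 96k -ar 44100 -ac 2 -movflags +faststart '/tmp/file1.mp4'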
[13:07:31 CEST] <f00bar80> __jack__: I want to test the output integrity, so in "ffmpeg -i input", "input" is the encoded output in this case? If so, how does that test the integrity of the audio/video streams?
[13:08:41 CEST] <__jack__> f00bar80: if it can decode it, then the output is probably good
[13:09:37 CEST] <f00bar80> __jack__: again, "ffmpeg -i input" where "input" is the encoded output in this case?
[13:09:48 CEST] <__jack__> f00bar80: yes
[13:13:11 CEST] <f00bar80> __jack__: here's what I got: frame= 2000 fps= 33 q=-1.0 size=N/A time=00:01:20.00 bitrate=N/A speed=1.33x - is this decoding? why is there no bitrate?
[13:17:27 CEST] <__jack__> f00bar80: does it die with some error ?
[13:18:31 CEST] <f00bar80> __jack__: no , but when i added the -loglevel warning switch .. I'm getting no output ...
[13:18:59 CEST] <__jack__> f00bar80: then the input is playable, so it's fine
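A stricter check than the -c copy run is to actually decode every stream and let ffmpeg print any decode errors; the URL below is a placeholder for the HLS output being tested:

  ffmpeg -v error -i http://example.com/live/stream.m3u8 -map 0 -f null -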
[13:20:27 CEST] <f00bar80> __jack__: when I try to play the encoded output, the picture and audio are out of sync; is there a way to test what the reason for that might be?
[13:21:59 CEST] <__jack__> f00bar80: what's your encoded output format ? which player do you use ? did you try another player ?
[13:23:15 CEST] <f00bar80> __jack__: output format m3u8 , tried vlc and mplayer
[13:23:29 CEST] <f00bar80> __jack__: cvlc as well ?
[13:25:48 CEST] <__jack__> f00bar80: owh, live HLS, so it is not reproducible
[13:26:19 CEST] <__jack__> f00bar80: what is the input format, before encoding ?
[13:26:27 CEST] <__jack__> f00bar80: what's your command line ?
[13:26:56 CEST] <__jack__> f00bar80: your players don't show errors, but the output is not sync, right ?
[13:30:27 CEST] <f00bar80> __jack__: yes
[13:36:55 CEST] <f00bar80> __jack__: ??
[13:40:30 CEST] <__jack__> f00bar80: yes is a small answer for all these questions :)
[13:49:12 CEST] <f00bar80> __jack__: so ?
[14:01:31 CEST] <f00bar80> __jack__: now when i try the same ffmpeg command , I'm getting http://vpaste.net/oUVjg any idea why is that ?
[14:03:35 CEST] <f00bar80> __jack__: then ffmpeg exit
[14:04:27 CEST] <f00bar80> any idea what's wrong ?
[14:06:12 CEST] <fordfrog> hi, is it possible to do two-pass bitrate limiting on mkv files and to have all audio streams included? i run this command but in the final file i always get just the first audio stream (and video): ffmpeg -y -i S01E01.mkv -c:v libx264 -preset medium -b:v 572k -pass 1 -c:a libfdk_aac -b:a 128k -f matroska /dev/null && ffmpeg -i S01E01.mkv -c:v libx264 -preset medium -b:v 572k -pass 2 -c:a libfdk_aac -b:a 128k S01E01_700.mkv
[14:07:11 CEST] <f00bar80> ppl any comment /
[14:12:21 CEST] <Kaspat> Hello. How can I use ffmpeg to merge m4s fragments into one mp4? I'm on windows. Thanks
[14:21:44 CEST] <__jack__> fordfrog: use -map
[14:21:56 CEST] <__jack__> fordfrog: -map 0:a, if you want all audio from the first input
[14:30:33 CEST] <fordfrog> __jack__, so if i have two audio streams, i have to add -map 0:a -map 1:a, right?
[14:31:05 CEST] <fordfrog> one is the original language and the other one is the local language
[14:34:28 CEST] <fordfrog> ah, got it :-)
[14:34:35 CEST] <__jack__> fordfrog: -map 0:a means "all audio from input #1"; -map 0:0 means "first stream from input #1"; -map 0:a:0 means "first audio stream from input #1"
[14:35:25 CEST] <fordfrog> __jack__, thanks, will read more about the mapping to see how it works exactly :-)
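Applied to fordfrog's command, the mapped two-pass run would look roughly like this (a sketch that assumes only the video and audio streams are wanted; subtitles and attachments would need extra -map/-c options):

  ffmpeg -y -i S01E01.mkv -map 0:v -map 0:a -c:v libx264 -preset medium -b:v 572k -pass 1 \
         -c:a libfdk_aac -b:a 128k -f matroska /dev/null && \
  ffmpeg -i S01E01.mkv -map 0:v -map 0:a -c:v libx264 -preset medium -b:v 572k -pass 2 \
         -c:a libfdk_aac -b:a 128k S01E01_700.mkv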
[14:45:13 CEST] <Kaspat> I'm trying to use the concat demuxer but I have several errors like:
[14:45:36 CEST] <Kaspat> Could not find codec parameters for stream - could not find corresponding track id 1 - could not find corresponding track id 2
[14:45:57 CEST] <Kaspat> i'm using -f concat -i list.txt -c copy output.mp4
[14:46:10 CEST] <Kaspat> what do I need to add?
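If those .m4s files are DASH-style fragments, they usually cannot be opened one by one, because each fragment references track IDs that are only defined in the stream's initialization segment; that would explain the missing-track-id errors. A common workaround, sketched with placeholder file names, is to binary-concatenate the init segment plus the fragments and then remux:

  copy /b init.mp4+segment-001.m4s+segment-002.m4s combined.mp4
  ffmpeg -i combined.mp4 -c copy output.mp4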
[16:36:59 CEST] <P4Titan> Hello all.
[16:37:32 CEST] <P4Titan> I wish to load pcm audio data into a fifo buffer
[16:37:49 CEST] <P4Titan> do I need an input_format_context and if so, how would I generate one?
[17:19:39 CEST] <mifritscher12312> hi
[17:20:02 CEST] <mifritscher12312> why is the fifo_size of the udp protocol expressed in 188-byte blocks?
[17:20:36 CEST] <mifritscher12312> and is the default fifo size really 7*4096*188=5390336 bytes?
[17:21:18 CEST] <Sokolio> MPEG2 TS packet is 188B in size
[17:21:24 CEST] <Sokolio> perhaps that's why
[17:21:46 CEST] <mifritscher12312> ah, ok. that could be an explanation
[17:22:20 CEST] <mifritscher12312> and the fifo is almost 6 MB big by default?
[17:26:00 CEST] <mifritscher12312> another question: What should I use to live stream mpeg2/4 streams over a vpn tunnel? using udp I suffer from lots of artifacts. could rtp be better than raw udp (what we are using now)? we are using mpegts as transport for now
[17:26:07 CEST] <mifritscher12312> both ends are using ffmpeg
[17:32:18 CEST] <Mavrik> mifritscher12312, won't be much difference really.
[17:32:32 CEST] <Mavrik> Perhaps try RTSP/RTMP ?
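Before switching protocols it can also be worth enlarging the UDP receive FIFO and making overruns non-fatal on the receiving side; fifo_size and overrun_nonfatal are existing options of the udp protocol, and the address, port and output name below are placeholders:

  # fifo_size is counted in 188-byte packets
  ffmpeg -i "udp://0.0.0.0:1234?fifo_size=278876&overrun_nonfatal=1" -c copy -f mpegts received.ts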
[17:35:58 CEST] <JoshuaTheOne> Hi guys. Is it possible to split a concat:file1|file2 operation into several separate ones?
[17:37:14 CEST] <JoshuaTheOne> Because I need to join 1071 files and I can't do it in one command line because of the Windows "Input file too long" limitation
[17:39:11 CEST] <Mavrik> JoshuaTheOne, isn't there an option to pass a txt file with list of files to concat?
[17:39:48 CEST] <kepstin> JoshuaTheOne: not with the concat protocol. You can try the concat demuxer instead, which can take a text file list.
[17:40:10 CEST] <kepstin> or have some other tool concatenate the files and just feed them to ffmpeg on stdin
[17:40:37 CEST] <JoshuaTheOne> Mavrik yes but it's different; with the demuxer I have several errors and I don't know how to resolve them, with the protocol it works fine but there is that limitation
[17:40:48 CEST] <JoshuaTheOne> kepstin some other tool like?
[17:42:09 CEST] <kepstin> JoshuaTheOne: i don't know anything offhand that could do that on windows. could probably script something.
[17:43:19 CEST] <JoshuaTheOne> on Windows you can merge files with COPY /B, but after that the file is unplayable; or do I need to pass it to ffmpeg after that?
[17:45:07 CEST] <JoshuaTheOne> From the help "This is analogous to using cat on UNIX-like systems or copy on Windows" well absolutely not
[17:47:21 CEST] <JoshuaTheOne> The command is easy so you can't make a mistake: copy /b *.mp4 video.mp4
[17:56:58 CEST] <kepstin> anything that works with the concat: protocol with ffmpeg should also work with 'cat' or 'copy /b' to combine them. This notably does *not* include mp4 files.
[17:59:03 CEST] <JoshuaTheOne> kepstin so I need to use ffmpeg, but how do I resolve this? Is there no way to pass a file list to the protocol? Or maybe "split" the operation?
[17:59:57 CEST] <kepstin> concat protocol doesn't work on mp4 files either...
[18:00:20 CEST] <JoshuaTheOne> no, it works lol, I'm sure of it
[18:00:20 CEST] <kepstin> you'll want to use the concat demuxer, which conveniently can use an external file list.
[18:00:34 CEST] <mifritscher12312> Mavrik, I don't want to use TCP now, but is my last resort
[18:01:08 CEST] <JoshuaTheOne> kepstin I have tried with the demuxer but I get a lot of errors; if I tell you them, do you know how to resolve them?
[18:04:41 CEST] <JoshuaTheOne> Could not find codec parameters for stream 0 - could not find corresponding track id 1 - could not find corresponding track id 2 - could not find corresponding trex - error reading header
[18:05:09 CEST] <JoshuaTheOne> None of that appears with the protocol
[18:08:39 CEST] <JoshuaTheOne> Ok, http://pastie.org/private/u67lx5afwtncwd6k1wqxw
[18:11:36 CEST] <kepstin> JoshuaTheOne: does 'ffmpeg -i segment-001.mp4' on its own work?
[18:12:21 CEST] <JoshuaTheOne> Same errors plus invalid data found etc.
[18:13:12 CEST] <kepstin> well, that's not gonna work then. are the files not actually individual standalone mp4 files?
[18:14:17 CEST] <JoshuaTheOne> They aren't playable if you don't merge them, which is why I'm trying to do that. It works with concat:file1|file2| and I see the video, but I have too many segments for one single line
[18:16:10 CEST] <kepstin> hmm. so they're not individual mp4 files, but rather pieces of a single longer mp4 file? weird.
[18:16:40 CEST] <kepstin> yeah, if it doesn't fit on the ffmpeg command line for the concat protocol, you'll have to use a different tool to combine them back into a single file.
[18:16:49 CEST] <kepstin> (which you can then use as input to ffmpeg later)
[18:22:01 CEST] <JoshuaTheOne> kepstin wait! If I merge them and pass the final file to ffmpeg I don't get any errors! But the file is actually unplayable; what do I need to do?
[18:22:19 CEST] <JoshuaTheOne> *merging with COPY /B
[18:22:50 CEST] <JoshuaTheOne> *ffmpeg -i final.mp4
[18:25:16 CEST] <JoshuaTheOne> kepstin is something like this correct? ffmpeg -i final.mp4 -c:v copy -c:a copy output.mp4
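Assuming the COPY /B output really begins with the initialization segment (with a *.mp4 wildcard the pieces are taken in name order, so the init piece has to sort first), a plain remux of that shape is the usual next step, and -c copy covers every stream at once:

  ffmpeg -i final.mp4 -c copy output.mp4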
[18:27:03 CEST] <ATField> Writing unicode symbols like "×" in -metadata title gives messed-up text if done through a .bat file and normal text if done through cmd directly. Putting chcp 65001 at the start of the .bat file didn't solve the problem. What do?
[18:27:20 CEST] <P4Titan> Hi all
[18:27:33 CEST] <ATField> Hello, Titan.
[18:27:57 CEST] <P4Titan> How can I get a format context on audio data in a memory buffer rather than a file?
[18:28:29 CEST] <P4Titan> I wish to get transcode_aac.c working where the input data is raw pcm in a memory buffer
[18:28:38 CEST] <P4Titan> and I manually have the header information set
[18:32:08 CEST] <P4Titan> Any thoughts?
[19:46:49 CEST] <mao> hello, i have a question regarding rtmp push using ffmpeg, after calling avformat_write_header(), a video packet with 4-byte 0x00 body and audio packet with 1-byte 0x00 body are written to the wire, has anyone ever encountered this issue? am i setting up my ffmpeg contexts wrong? thanks.
[19:51:49 CEST] <kanzure> does libavcodec implement an http server?
[19:52:22 CEST] <c_14> no
[19:52:47 CEST] <c_14> libavformat has something that can be used like an http server, but it's not an http server and it's probably not what you want
[19:55:01 CEST] <kanzure> i'm not looking for anything in particular, just double checking something i overheard somewhere
[20:02:10 CEST] <ATField> Does -metadata title="" have encoding option (e.g. unicode)?
[20:46:49 CEST] <neuro_sys> is it possible to output 1-bit raw (or bmp) image sequence with ffmpeg?
[20:48:29 CEST] <ATField> ok, nvm
[20:49:31 CEST] <drv> neuro_sys: you should be able to use bmp with -pix_fmt monob
[20:50:25 CEST] <neuro_sys> drv: thanks, it works
[20:51:07 CEST] <neuro_sys> hmm, by default it applied a dithering algorithm to the colored frames (which is great).
[20:51:25 CEST] <neuro_sys> but now I wonder if we can change the dithering algorithm to something else (like floyd-steinberg)
[20:53:05 CEST] <ATField> -paletteuse dither=floyd_steinberg, maybe?
[20:54:04 CEST] <neuro_sys> haha wow yes
[21:02:08 CEST] <neuro_sys> oh well, it works, except pix_fmt monob seems to apply its own dithering on it
[21:08:09 CEST] <c_14> neuro_sys: -sws_dither=none ?
[21:08:48 CEST] <neuro_sys> it was my bad with the palette I think it works now after I generated the palette properly
[21:09:11 CEST] <neuro_sys> but I'll check that option
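The usual shape of that two-step palette approach, with placeholder file names (untested); for a genuinely 1-bit result the palette image itself has to be limited to black and white, which is presumably what "generated the palette properly" refers to:

  ffmpeg -i input.mp4 -vf palettegen palette.png
  ffmpeg -i input.mp4 -i palette.png -lavfi "paletteuse=dither=floyd_steinberg" frame_%04d.bmp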
[21:20:59 CEST] <P4Titan> Hello all
[21:22:08 CEST] <P4Titan> I have pcm audio data loaded into a memory buffer. How can I generate an input format context from that so to decode and store it in a fifo?
[21:41:20 CEST] <eazor> hi !
[22:02:29 CEST] <eazor> guys, do you know how to download a rtmp video with ffmpeg please ? thanks
[22:03:24 CEST] <llogan> ffmpeg -i rtmp://input -c copy output
[23:48:07 CEST] <wallbroken> what is ffmpeg called in apt-get?
[23:48:24 CEST] <furq> it's called ffmpeg
[23:48:58 CEST] <furq> i'm guessing you're on a distro version which doesn't have it in the repos because of the libav bullshit
[23:49:00 CEST] <wallbroken> no
[23:49:11 CEST] <wallbroken> debian
[23:49:33 CEST] <furq> if you're on stable then you need to use backports
[23:52:03 CEST] <wallbroken> furq, libav is good?
[23:52:29 CEST] <furq> ask in #libav
[23:52:40 CEST] <wallbroken> that's not ffmpeg?
[23:52:43 CEST] <furq> no
[23:52:49 CEST] <wallbroken> it's similar software?
[23:53:05 CEST] <furq> it's a fork of ffmpeg which managed to inveigle itself into a bunch of distros by claiming ffmpeg was dead
[23:53:41 CEST] <furq> you'd be better off just using ffmpeg from jessie-backports
[23:53:53 CEST] <wallbroken> ok
[23:54:02 CEST] <wallbroken> thanks
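On jessie that amounts to enabling the backports repository and installing ffmpeg from it, roughly as follows (run as root or via sudo; the mirror URL is the stock Debian one):

  echo "deb http://ftp.debian.org/debian jessie-backports main" > /etc/apt/sources.list.d/backports.list
  apt-get update
  apt-get -t jessie-backports install ffmpeg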
[23:54:34 CEST] <wallbroken> should ffmpeg on windows produce exactly the same output as ffmpeg on linux?
[23:55:11 CEST] <furq> i imagine that would depend on the codecs used
[23:55:37 CEST] <wallbroken> do they depend on the ffmpeg version?
[23:59:04 CEST] <Plorkyeran_> barring bugs, the output from decoding a file should be identical on all platforms for most formats (some formats do not have bitexact decoding and may have different dithering)
[23:59:44 CEST] <Plorkyeran_> for encoding to lossy formats the output will differ significantly between versions
[23:59:54 CEST] <Plorkyeran_> (hopefully for the better in newer versions)
[00:00:00 CEST] --- Wed Jun 8 2016