[Ffmpeg-devel-irc] ffmpeg.log.20140408
burek
burek021 at gmail.com
Wed Apr 9 02:05:01 CEST 2014
[00:12] <llogan> ac_slater: you can make an MJPEG from individual jpg images without re-encoding
[00:14] <Stalkr_> I am sorry, I have already pasted it now though. Do you want me to use pastebin anyway?
[00:15] <llogan> sure
[00:16] <llogan> ac_slater: and you can do the opposite: extract images from the movie without re-encoding
[00:16] <llogan> https://www.ffmpeg.org/ffmpeg-bitstream-filters.html#mjpeg2jpeg
[00:16] <Stalkr_> Playing with FFmpeg and webm, but it won't burn the subtitles, any idea why? http://pastie.org/pastes/9001844/text -- I followed https://trac.ffmpeg.org/wiki/How%20to%20burn%20subtitles%20into%20the%20video, which looks easy enough. What am I missing?
[00:16] <llogan> where is the rest of the info?
[00:22] <Stalkr_> Oh, sorry... I forgot that. I was able to find out what's causing it: it's `-vf scale=-1:480`
[00:23] <Stalkr_> The subtitles are there without that, but they are gone when scaling
[00:25] <Stalkr_> llogan: This is without scale http://pastie.org/9001871, this is with scale http://pastie.org/9001873
[00:26] <llogan> Stalkr_: you should only need one filtergraph. do you want to scale first and then place subtitles, or burn the subtitles and then scale everything?
[00:26] <Stalkr_> I think scale first, then subtitles would be prettiest
[00:27] <llogan> -filter_complex "scale=-1:480,subtitles=psych.srt"
[00:29] <Stalkr_> Looks like that's working, thank you very much. It does print `Neither PlayResX nor PlayResY defined. Assuming 384x288` but the output is 480p
[00:29] <llogan> does the output look ok? then you can probably ignore it.
[00:30] <Stalkr_> It's red, but that could just be my iTerm. It looks good
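Putting that exchange together, a full command along these lines should scale first and then burn the subtitles (input and output names are placeholders; any encoder settings are whatever you were already using):

    ffmpeg -i in.mkv -filter_complex "scale=-1:480,subtitles=psych.srt" out.webm

Because the subtitles filter runs after scale in the chain, the text is rendered at the output resolution instead of being shrunk along with the video.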
[03:48] <ScottSteiner-> How can I turn softsubs that are included as a stream in the video file into hardcoded subs? The stream is Stream #0.2(eng): Subtitle: [0][0][0][0] / 0x0000
[05:47] <klaxa> ScottSteiner-: http://trac.ffmpeg.org/wiki/How%20to%20burn%20subtitles%20into%20the%20video
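That page covers external subtitle files; for a subtitle stream embedded in the input itself, the subtitles filter can be pointed back at the same file, roughly like this (filenames are placeholders, and the si option, which picks the Nth subtitle stream counting from 0, may need adjusting; this only works for text-based subtitle formats):

    ffmpeg -i in.mkv -vf "subtitles=in.mkv:si=0" out.mp4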
[09:45] <stevenm> Hey do I go someplace else for avconv support?
[09:45] <sacarasc> #libav
[09:45] <stevenm> aha #libav - that'll be it
[09:46] <stevenm> lol
[12:43] <bainsja> Could anyone help with a question I have? I'm muxing an RTSP stream into an mp4 container using ffmpeg with the -t option to limit it to a fixed duration. When the duration is reached, ffmpeg doesn't exit; it just sits there until I kill it. Is this expected behaviour? The command line is ffmpeg -i "rtsp://x.x.x.x:554/" -y -t 60 -vcodec copy test2.mp4
[13:35] <eristisk> Why is ProRes 4444 not called ProRes 444?
[13:36] <eristisk> What does the 4th "4" refer to?
[13:49] <iive> probably alpha (transparency) channel.
[14:15] <jkli> hi all
[14:16] <jkli> i hope you are all well
[14:16] <jkli> I'm not sure if this is related to my encoding settings with ffmpeg or something else
[14:17] <jkli> When I try to stream a video via nginx-rtmp I get a green screen for the first frames
[14:17] <jkli> and then after a while sound and video appears
[14:17] <jkli> it varies from video to video but overall I get green screen pretty much for every video while streaming
[14:18] <jkli> any idea if this is related to encoding settings in ffmpeg?
[14:18] <jkli> I stream h264 mp4 files
[16:59] <nano-> Is it possible to generate a raw ALAC file from a flac file? I've tried "ffmpeg -i 09\ Lateralus.flac -f data -acodec alac lateralus.alac" but I get the error message "Output file #0 does not contain any stream". What I'm trying to do is generate raw ALAC data that I can use to test my AirTunes framework before doing the full ffmpeg integration. It's easier to just read a file than to start interacting with a whole set of new APIs.
[17:10] <c_14> nano-: Try adding -map 0
[17:51] <nano-> c_14: that produced an output file in a similar size range, hopefully correct :) thanks
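For the record, the working command would look something like this (the -f data muxer just dumps the encoded packets back to back; -map 0:a instead of -map 0 might be the safer pick if the flac carries embedded cover art, though that part is untested here):

    ffmpeg -i "09 Lateralus.flac" -map 0 -acodec alac -f data lateralus.alac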
[18:31] <srcreigh> Can I ask a question about using ffmpeg's av libraries here?
[18:32] <srcreigh> avpicture_free segfaults and I can't figure out what the cause of the problem is.
[19:01] <srcreigh> Sorry, I'm reading in the libav archives now. There seems to be some good discussion about this issue.
[19:35] <synth_> where can i find a build of ffmpeg that works on ubuntu that has the --enable-libfreetype feature?
[19:36] <synth_> i'd like to enable text overlays
[19:41] <synth_> i believe i actually compiled from the 3.x+ static build as of march 20th, is libfreetype included with it or is it something i have to compile separately? i compiled using this guide: https://trac.ffmpeg.org/wiki/UbuntuCompilationGuide
[19:43] <c_14> If you download the static build, it should just work. If you compiled it yourself, you will need to recompile with --enable-libfreetype and make sure that libfreetype is installed somewhere where you can include it.
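Roughly, the extra steps on top of the Ubuntu guide would look like this (paths follow the guide's ~/ffmpeg_sources and ~/ffmpeg_build convention; the other --enable flags are whatever your existing build already used):

    sudo apt-get install libfreetype6-dev
    cd ~/ffmpeg_sources/ffmpeg
    ./configure --prefix="$HOME/ffmpeg_build" --enable-gpl --enable-libfreetype
    make && make install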
[19:44] <synth_> hm okay
[20:29] <synth_> perfect, drawtext worked
[20:29] <synth_> just gotta tweak the settings
[20:31] <synth_> klaxa, i work in vfx, so we often generate mov files from our frames for review by our clients. i've set up an intranet website that submits a bunch of information about the frames to a database, which is read by a python script that runs ffmpeg and updates the database entry with where the final output video lives, and i wanted to integrate information onto the overlay
[20:32] <synth_> that shows what project, sequence, shot, artist, version, and any comments were regarding that particular render.
[20:33] <synth_> klaxa this information would be helpful for our design department since they are rebranding the NFL and have hundreds of submissions to send to the client, so we need to track which helmet styles or teams are in the renders, which version of the render, etc.
[20:33] <klaxa> hmm yeah i guess drawtext is suited for that
[20:33] <klaxa> i myself find it a lot easier to just fire up aegisub and compile a subtitle file and render that on top
[20:34] <synth_> i'm not familiar with aegisub
[20:34] <synth_> that would also be an option for us, but i'm not sure how to go about doing that
[20:35] <klaxa> aegisub is a rather sophisticated WYSIWYG subtitle editor
[20:35] <synth_> hm
[20:35] <klaxa> drawtext might be easier if it is for informational purpose only
[20:38] <llogan> drawtext can use a file as input too in case that is useful/easier for you.
[20:39] <llogan> or at least line breaks are easier that way
[20:39] <synth_> i think it might be easier for me to just compile the command in python for drawtext
[20:39] <synth_> is there a special character for line break or just \n ?
[20:40] <llogan> i'm not sure, but escaping might be annoying
[20:40] <llogan> http://ffmpeg.org/ffmpeg-utils.html#Quoting-and-escaping
[20:57] <jkli> hi all
[20:57] <jkli> hope you are doing well
[20:57] <jkli> can anybody tell me why i keep getting green screens when i stream via rtmp?
[20:57] <jkli> while http is just fine?
[20:58] <jkli> i read that green screens are a sign of frames being dropped or too low bandwidth
[21:03] <synth_> http://pastebin.com/XLCzfJFL any idea why i'm getting the error shown at the bottom? I've got the text= portion quoted
[21:08] <synth_> wait i think i got it now
[21:08] <DonGnom> Silver my problem
[21:08] <DonGnom> s/Silver/solved/
[22:31] <salohcin> is there a way to *indefinitely* pipe images to ffmpeg and encode and stream those images into video to be broadcast? I am generating images and would like to stream them on the fly
[22:31] <salohcin> right now I'm opening a pipe to ffmpeg in python and am trying to write raw rgb frames to the piep
[22:32] <salohcin> *pipes stdin
[22:37] <klaxa> you can use image2pipe
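image2pipe is for a stream of encoded images (jpeg/png); for raw rgb frames the rawvideo demuxer is the closer fit. A sketch of the receiving side, assuming 640x480 rgb24 frames at 25 fps (the generator script name and the rtmp URL are made up):

    python generate_frames.py | ffmpeg -f rawvideo -pixel_format rgb24 -video_size 640x480 -framerate 25 -i - -c:v libx264 -pix_fmt yuv420p -f flv rtmp://example.com/live/stream

ffmpeg keeps reading stdin until the writing end closes the pipe, so this runs indefinitely.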
[22:57] <voip> hello guys
[22:58] <voip> what's wrong? I have this error: av_interleaved_write_frame(): Invalid argument
[22:58] <voip> http://pastebin.com/YxwMTmBR
[23:14] <llogan> voip: didn't you encounter that issue a few days ago?
[23:14] <llogan> in this case, "-loglevel debug" may output something informative
[23:16] <voip> llogan, with debug: http://pastebin.com/PQbx0TDK
[23:22] <llogan> voip: does it occur with a local file output?
[23:27] <voip> llogan, with a local file it's ok
[23:28] <llogan> i don't know. try the ffmpeg-user mailing list and mention that it works normally with a local file output
[23:30] <voip> ok, just confused by the av_interleaved error
[23:31] <salohcin> what is the difference between *image2pipe* and simply using *-* as input
[23:48] <Sander_> Can someone help me with DVB-S2 input? I want to transcode it to h264
[00:00] --- Wed Apr 9 2014