[Ffmpeg-devel-irc] ffmpeg.log.20140819

burek burek021 at gmail.com
Wed Aug 20 02:05:01 CEST 2014


[00:25] <sirnfs> I have a video that someone sent to me which was created with Quicktime.  The first three seconds of the video are essentially frozen and then the video and audio abruptly kick in.  When transcoding the video, this causes the audio and video to become out of sync.  Any suggestions for how to prevent the audio/video from becoming out of sync during transcoding?
[00:26] <sirnfs> You can see the problem here:  https://www.dropbox.com/s/ldsva3713f06ax1/6bd7b15e-1bc6-4edb-8663-9aed34269551.mov
[00:27] <sirnfs> This seems to happen quite frequently when users clip/trim video with Quicktime
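(The question goes unanswered in the log; one commonly suggested knob for audio drifting against video during a transcode is timestamp-compensated resampling, e.g.

    ffmpeg -i input.mov -af aresample=async=1 -c:v libx264 output.mp4

A sketch only; whether it helps depends on how the Quicktime edit broke the timestamps.)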
[01:06] <jotterbot1234> Hey guys, i am having trouble encoding wmv video
[01:06] <jotterbot1234> i understand that ffmpeg does not use the latest wmv3 codec
[01:07] <jotterbot1234> but even using the msmpeg4 or wmv2 video codec, on "general imagery" the performance is acceptable...
[01:07] <jotterbot1234> i am just getting issues on text elements. it "flickers" between a good rendition of the text and blocky artefacts
[01:07] <jotterbot1234> would something like 2 pass encoding help in those circumstances
[01:18] <iive> jotterbot1234: no, 2-pass would only make a better bitrate tradeoff for the target bitrate.
[01:19] <iive> try increasing the quality; using mv0 might be a good idea too.
[01:19] <iive> there shouldn't really be artefacts. i remember xvid having problems when using a max quantizer of 1 (aka perfect quality).
[01:23] <jotterbot1234> iive: thanks mate. I have had some success switching back to the wmv2 codec from msmpeg4
[01:24] <jotterbot1234> still much larger file sizes than ideal, but the text does not flicker between blocky and normal
[01:24] <iive> use the mbd=2 and trellis options. they should help a lot.
[01:25] <jotterbot1234> hmm, as an argument to which parameter? i haven't used that before
[01:25] <jotterbot1234> i can search docs though , thank you for helping :)
[01:27] <iive> i'm not quite sure where they go, i've used them from another program.
[01:28] <jotterbot1234> no worries, I'm just googling those terms to determine what they actually "do"
[01:30] <iive> also, if you have cpu cycles to burn, you can try different cmp functions. e.g. precmp, cmp, subcmp. these control the motion estimation algorithm, using 2 or 3 might make a difference. 5 should be slower but optimal.
[01:36] <jotterbot1234> iive: could i trouble you for an example command with those options
[01:36] <jotterbot1234> i'm looking here: http://smorgasbork.com/component/content/article/97-real-time-mpeg-2-encoding-with-ffmpeg
[01:36] <jotterbot1234> so something like "-trellis 2 -cmp 2 -subcmp 2" might help
[01:50] <iive> jotterbot1234: i think trellis takes other values than 0/1. So: -mbd 2 -trellis 1 -cmp 2 -flags mv0
[01:52] <jotterbot1234> iive: thank you :)
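(Putting iive's suggestions together, a sketch; the file names and the 2M bitrate are placeholders, and depending on the ffmpeg version the mv0 flag may live under -mpv_flags rather than -flags:

    ffmpeg -i input.avi -c:v wmv2 -b:v 2M -mbd 2 -trellis 1 -cmp 2 -subcmp 2 -flags mv0 output.wmv
)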
[02:06] <DoctorTrombone> How would I use ffmpeg to take several audio streams (save one) in my input file (some mono, some stereo), and mix them into one stereo output file?
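(The question goes unanswered; a sketch of one approach using the amix filter, where the stream indices are hypothetical: list the streams you want and skip the one you don't. libavfilter auto-inserts conversions, so mixing mono and stereo inputs generally works, and -ac 2 forces a stereo output:

    ffmpeg -i input.mov -filter_complex "[0:a:0][0:a:1][0:a:3]amix=inputs=3[a]" -map "[a]" -ac 2 output.wav
)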
[03:33] <rudi_s> mencoder supported -vobsubout to dump the subtitles as .idx and .sub files. Is this still possible with ffmpeg?
[03:34] <jrgill> Does the concat demuxer produce a container with streams for each segment?
[03:52] <rudi_s> I tried to convert the .vob to .mkv, but it seems the idx-"part" is not correctly transferred, as mkvextract doesn't let me extract the .idx/.sub from the resulting mkv file.
[03:53] <rudi_s> (codecprivate seems to be missing.)
[05:01] <jrgill> Is it possible to apply pan to the output of concat, both in the same filter_complex?  I have each filter working; just not sure how to apply the both of them (-af and -filter_complex mutually exclusive).
[05:02] <jrgill> I have [out] from concat and normally map this with -map.  I need my pan to happen in between.
[05:13] <jrgill> Ah, think I've got it.
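(Presumably something along these lines: pan chained after concat inside one -filter_complex; the pan spec and file names here are placeholders:

    ffmpeg -i a.wav -i b.wav -filter_complex "[0:a][1:a]concat=n=2:v=0:a=1,pan=stereo|c0=c0|c1=c1[out]" -map "[out]" out.wav
)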
[05:58] <Eiam_> after completing a `make install` for yasm 1.3, I try to ./configure for ffmpeg 2.3.3 and im told that yasm isn't found.. but its there..
[05:58] <Eiam_> > which yasm
[05:59] <Eiam_> ./configure --prefix=$OPENSHIFT_DATA_DIR/bin --bindir=$OPENSHIFT_DATA_DIR/bin  -> fail
[06:00] <Eiam_> whoops IRC ate the which yasm, but it shows it there in data/bin/yasm
[07:21] <jrgill> Any tips for trimming with full timestamp?  Before I had atrim=35:40 now trying something like atrim=00\:00\:35.00:00\:00\:40.00 which isn't working.
[07:26] <jrgill> (time duration format rather)
[07:27] <Eiam_> ah figured it out
[07:27] <Eiam_> had to force the env to point the right place on PATH
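(i.e. roughly, before re-running configure:

    export PATH="$OPENSHIFT_DATA_DIR/bin:$PATH"
    ./configure --prefix=$OPENSHIFT_DATA_DIR/bin --bindir=$OPENSHIFT_DATA_DIR/bin
)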
[07:51] <jrgill> Well, finally got it even though it's pretty nasty.  atrim='00\:00\:35.00:00\:00\:40.00'
[10:21] <Meriipu> so I am trying `ffmpeg -s 1x1 -f rawvideo -i /dev/urandom out.mp4` but libx264 complains that width is not divisible by 2, is there anything I can do?
[10:23] <ubitux> video codecs are not designed to support arbitrary sizes, for performance and simplicity
[10:24] <Meriipu> I see
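(If the goal is just a valid x264 stream, the simplest workaround is an even frame size, e.g. 2x2 instead of 1x1:

    ffmpeg -s 2x2 -f rawvideo -i /dev/urandom out.mp4

or keep the 1x1 input and insert -vf scale=2:2 before the encoder.)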
[11:24] <svvitch> I compiled ffmpeg with these options https://dpaste.de/06cV , now I'm trying to open an input stream over the http protocol and it fails, which configure options did I miss?
[11:25] <cdecl_xyz> Hello I have a question regarding color space conversions. in sws_getContext, sws_setColorSpaceDetails gets called with ff_yuv2rgb_coeffs[SWS_CS_DEFAULT] for input and output color space range. my question is, if I want the OUTPUT color space output to be SWS_CS_ITU709 or FULL_RANGE, do i also have to set the input color space? I assume yes. But then I would have to know which space the source is in right? This poses a problem. And does
[11:25] <cdecl_xyz> SWS_CS_ITU709 also apply if I convert to RGB?
[11:26] <relaxed> svvitch: does "ffmpeg -protocols" list http? maybe you need https support?
[11:26] <Mavrik> cdecl_xyz, you always have to know what your input color space is
[11:27] <Mavrik> there's no way to guess it from the image itself, you know
[11:27] <cdecl_xyz> thanks
[11:27] <cdecl_xyz> i thought so
[11:27] <Mavrik> but I'm not sure about more exact details of what you have :/
[11:27] <Mavrik> I've never tried doing such fine grained conversion
[11:27] <cdecl_xyz> i cant assume anything about the input file
[11:27] <cdecl_xyz> mostly mpeg2 or prores
[11:27] <cdecl_xyz> but can also be h264 in mp4/mkv
[11:27] <cdecl_xyz> so yes, I thought so already. I will probably ask the user to define the input space
[11:28] <cdecl_xyz> do you know if this also applies when converting from yuv to rgb?
[11:28] <cdecl_xyz> i cant find a function yuv2rgb in swscale, only rgb2yuv
[11:29] <svvitch> fflogger: ok, just a minute
[11:30] <relaxed> svvitch: fflogger is a bot, fyi
[11:31] <svvitch> relaxed: ;)
[11:32] <Mavrik> cdecl_xyz, if what applies really?
[11:32] <Mavrik> swscale can do YUV to RGB conversion
[11:32] <cdecl_xyz> yes i Know i do it already
[11:32] <drakeguan> Hi, just wondering if this is the right channel to ask about the result of ffprobe?
[11:32] <cdecl_xyz> but if i specify output color space to be ITU709
[11:32] <cdecl_xyz> and do a yuv2rgb conversion or a rgb2rgb conversion
[11:33] <cdecl_xyz> will the rgb values then be in the range of ITU709?
[11:33] <cdecl_xyz> with rgb2rgb i mean (rgb full range to rgb 709)
[11:33] <Mavrik> isn't ITU709 a YUV definition? which is basically just a flag on a stream?
[11:34] <Mavrik> and makes no sense in RGB context?
[11:34] <svvitch> relaxed: http://pastie.org/9485222 I think problem is not in protocol
[11:34] <Mavrik> drakeguan, probably :)
[11:34] <cdecl_xyz> http://en.wikipedia.org/wiki/Rec._709
[11:34] <cdecl_xyz> and no its not a flag
[11:34] <drakeguan> Mavrik: thx cause it looks like http://ffmpeg.gusari.org/viewforum.php?f=27&sid=8fc1f2eaf1d6fa515dd8f6d0663b43aa is not that active.
[11:34] <cdecl_xyz> it restricts the values of rgb to be within i think 16 and 235 or so
[11:34] <svvitch> relaxed: -protocols: does this list the active protocols in ffmpeg?
[11:35] <cdecl_xyz> so if you have an rgb image full range, then black is 0, but with itu709 its 16
[11:35] <Mavrik> ah, yeah, I know what you mean
[11:35] <relaxed> svvitch: is that all of the console output, or did it get cut off?
[11:35] <Mavrik> I'm not sure really, RGB support is not as developed due to practically no video codecs supporting it
[11:36] <svvitch> relaxed: it's all
[11:36] <cdecl_xyz> i am putting the decoded frames into my jpeg2000 encoder
[11:36] <cdecl_xyz> i dont use avcodec to do the encoding as I need 100% broadcast conform jpeg2000 files
[11:36] <drakeguan> For ffprobe, I'm confused about the relationship between the probed `codec_time_base` and `avg_frame_rate` (assuming the stream's frame rate is fixed). Given one MPEG-PS file, I got "codec_time_base": "1001/48000" and "avg_frame_rate": "24000/1001". Does that mean I can conclude this video stream is interlaced?
[11:37] <cdecl_xyz> but the decoding must be well defined, i need more info about how swscale does color space stuff
[11:38] <relaxed> svvitch: does it work without using tee?
[11:38] <Mavrik> I'm afraid you'll just have to read the source :/
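(For experimenting from the command line, the same swscale machinery is reachable through the scale filter's colorimetry options, assuming a reasonably recent build; e.g. forcing the input matrix while converting to RGB:

    ffmpeg -i input.mov -vf "scale=in_color_matrix=bt709" -pix_fmt rgb24 out%03d.png
)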
[11:38] <Mavrik> drakeguan, no, not really
[11:38] <Mavrik> drakeguan, for a lot of codecs you can set the timebase to anything and it does not imply anything
[11:39] <Mavrik> so you can't really rely on that :/
[11:39] <Mavrik> also, for example for H.264, it's not even required that all frames are interlaced or not - that's set on a per-frame basis
[11:39] <Mavrik> which makes detection of interlaced video a serious problem
[11:40] <svvitch> relaxed: yes this works http://pastebin.com/EYEhaqD6
[11:40] <svvitch> relaxed: problem is not in protocol
[11:40] <Mavrik> drakeguan, if you know that your stream will reliably be one thing or another, use -show_frames for ffprobe and grab information from first decoded frame
[11:46] <drakeguan> Mavrik: Thanks a lot. I'm quite surprised to know that timebase doesn't imply anything!
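(Mavrik's suggestion, spelled out; the -read_intervals expression reads only the first frame, and interlaced_frame is the per-frame field to check:

    ffprobe -v error -select_streams v:0 -show_frames -read_intervals "%+#1" input.mpg | grep interlaced_frame
)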
[12:21] <bigzed> hello
[12:22] <coalado> hi. I know that ffmpeg guesses the output format from the file extension. I want to use a different extension, and thus have to use the -f format switch.
[12:23] <coalado> but while ffmpeg test.m4a  works, ffmpeg -f m4a test.tmp does not work
[12:23] <coalado> is there any way to access the internal extension->format map ?
[12:24] <Mavrik> coalado, m4a is a different name for mp4.
[12:24] <Mavrik> so that's what's causing your issues, the name of the format is "mp4" :)
[12:24] <Mavrik> the map is just written in source code I'm afraid
[12:28] <coalado> Mavrik:  I only have the file extension.  I need some kind of mapping.
[12:29] <coalado> If I execute ffmpeg dummy.m4a I could parse the output "Output #0, ipod, to 'dummy.m4a':", delete the file, and use -f ipod for the actual execution
[12:30] <coalado> But that's kind of dirty
[12:31] <Mavrik> use ffprobe and its JSON output.
[12:31] <Mavrik> that's what it's there for.
[12:33] <coalado> ffprobe can be used to analyse actual files/streams. How should I use it to get the output file mapping?
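(One possible answer: ffmpeg itself can dump a muxer's registered extensions, which avoids the dummy-file hack, e.g.

    ffmpeg -h muxer=ipod

should print the muxer's common extensions and default codecs; combined with ffmpeg -formats that gives a scriptable extension-to-format map.)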
[12:35] <svvitch> I found that ffprobe does not return video bitrate http://pastie.org/9485629 , is this a common issue ?
[12:36] <Mavrik> svvitch, your pasties don't work
[12:37] <svvitch> Mavrik: http://pastebin.com/37BQMqpw
[12:40] <Mavrik> svvitch, doh
[12:40] <Mavrik> you're just asking for stream information
[12:40] <Mavrik> not container (format) information :)
[12:40] <coalado> svvitch: -show_format
[12:41] <coalado> but this fails for your stream..
[12:42] <svvitch> Mavrik: hmm you are right
[12:42] <Mavrik> also it seems you don't get enough data for video bitrate to be probed
[12:42] <svvitch> but this is a stream, that's why I'm looking at the stream
[12:44] <svvitch> Mavrik: coalado, thanks
[12:46] <Diogo> hi anyone ubitux..do you know where i can find more information about this article https://tech.dropbox.com/2014/02/video-processing-at-dropbox/?
[12:46] <coalado> svvitch:  this does not help. it returns the audio bitrate only.
[12:46] <coalado> try -probesize
[12:46] <ubitux> Diogo: i'm not working at dropbox
[12:47] <svvitch> coalado: hm yes, it's equal to audio :(
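(For reference, the container-level query looks something like this; for network streams the bit_rate field may stay empty unless ffprobe reads enough data, hence coalado's -probesize suggestion above. The input path is a placeholder:

    ffprobe -v error -show_format -print_format json -probesize 10M input.ts
)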
[12:48] <Diogo> ok thanks for your answer, but can you give me some advice on generating the m3u8 from an avi file very quickly?
[12:48] <Diogo> only the structure?
[12:48] <Diogo> is that possible, or can i generate the mpegts files in parallel?
[12:50] <drakeguan> Diogo: check this, http://www.bogotobogo.com/VideoStreaming/ffmpeg_http_live_streaming_hls.php
[12:50] <drakeguan> https://www.irccloud.com/pastebin/LjOP6QC0
[12:52] <Diogo> yes thanks for your help and advice...
[12:52] <Diogo> i already did that procedure...but my question is how to generate the out%03d.ts...in parallel
[12:52] <Diogo> i need to generate the mpegts files very quickly
[12:54] <Diogo> imagine that you have an avi file that is 2 hours long...i need to generate the first ts chunks during the user's upload, and after that generate the mpegts files on the fly, while the user is watching the video...
[12:55] <Diogo> i don't know if the -ss and -t options work for that solution..
[13:11] <ubitux> Diogo: see http://ffmpeg.org/ffmpeg-formats.html#segment_002c-stream_005fsegment_002c-ssegment or http://ffmpeg.org/ffmpeg-formats.html#hls-1
[13:14] <Diogo> what is a nut file?
[13:15] <ubitux> an ffmpeg format, it's just an example
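(A minimal hls-muxer sketch following those docs; note that 2014-era builds need -strict experimental for the native aac encoder:

    ffmpeg -i input.avi -c:v libx264 -c:a aac -strict experimental -f hls -hls_time 10 -hls_list_size 0 out.m3u8
)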
[13:27] <Nopik> hi, can ffserver read feed directly from http? or I absolutely need to have extra ffmpeg reading from http and pushing to ffserver?
[14:08] <baidoc2> Does anyone have any idea why after 8 minutes:   ffmpeg -analyzeduration 0 -threads 1 -i "rtmp://RTMP/cam1 live=1" -r 10 -sws_flags lanczos -f image2pipe -q 3 -
[14:08] <baidoc2> stops sending data into pipe?
[14:08] <baidoc2> it has nothing to do with the stream, the stream still works
[14:29] <rudi_s> mencoder supported -vobsubout to dump the subtitles as .idx and .sub files. Is this still possible with ffmpeg?
[14:29] <rudi_s> I tried to convert the .vob to .mkv, but it seems the idx-"part" is not correctly transferred, as mkvextract doesn't let me extract the .idx/.sub from the resulting mkv file.
[14:29] <rudi_s> (codecprivate seems to be missing.)
[14:43] <Harzilein> oh, regarding "used to be possible"...
[14:45] <Harzilein> i recently was given an "s1 mp3" device and want to use amv containers now. the amv demuxer apparently got rolled into the regular avi demuxer, but there's no muxer. if someone were to adapt the muxer from the s1mp3 project, would it be awkward to have only muxing support? (i.e. it would look asymmetric, demuxing would use -f avi and muxing would use -f amv)
[14:51] <sacarasc> Harzilein: What's an AMV?
[15:03] <Diogo> ubitux: -reference_stream - can you explain in more detail how i can use this option?
[15:05] <ubitux> i don't know
[15:06] <Diogo> -reference_stream <last ts file>
[15:06] <Diogo> ?
[15:07] <ubitux> 15:05:33 < ubitux> i don't know
[15:07] <ubitux> i don't use that, last time i created an hls stream was 2 years ago
[15:07] <Diogo> :) ok thanks
[15:49] <dietlev> hey all. For a small app, I need to extract the maximum decibel level from a video's audio. Not really an audio expert, so could someone point me in the right direction or to an example for this?
[15:50] <c_14> the volumedetect filter should do just that (among other things)
[15:55] <dietlev> @c_14 so the max_volume ( -4db in the example ) is the highest peak?
[15:57] <c_14> I think so.
[15:57] <dietlev> cool. thx
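(For reference:

    ffmpeg -i input.mp4 -vn -af volumedetect -f null /dev/null

volumedetect prints mean_volume and max_volume (in dB) to stderr at the end of the run; max_volume is the highest peak.)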
[16:57] <sine0> serious question. in windows I used aae and mbl and various other plugins to get effects on my videos. now im using linux gentoo and i really like gimp image manipulation. would it be feasible to batch edit the frames if i were to extract them, and do some ffmpeg and gimp cl magic?
[16:58] <ubitux> you can extract all the frames but you're going to lose the timing
[16:58] <sine0> timing ?
[16:58] <ubitux> which automatic gimp filters do you miss in ffmpeg, btw?
[16:59] <ubitux> sine0: yes, for videos with a non-constant framerate, you'll get surprises
[16:59] <ubitux> anyway, your question is vague
[16:59] <sine0> ubitux: i havent delved into what ffmpeg can do in that area, im going to do color correction and video overlay
[16:59] <ubitux> we really have everything you need in that area in ffmpeg
[16:59] <sine0> oh goodie
[16:59] <ubitux> curves, colorbalance and whatever filters
[17:00] <ubitux> you also have 2D and 3D LUT for advanced color management
[17:00] <sine0> wow
[17:00] <ubitux> and we have an overlay filter as well
[17:00] <sine0> i used that in aae with technicolor picture styles
[17:00] <sine0> got some real nice effects
[17:01] <ubitux> http://ffmpeg.org/ffmpeg-filters.html#overlay-1
[17:01] <ubitux> http://ffmpeg.org/ffmpeg-filters.html#lut_002c-lutrgb_002c-lutyuv
[17:01] <ubitux> http://ffmpeg.org/ffmpeg-filters.html#lut3d-1
[17:01] <sine0> im curious about the timing though, im using one camera only and i shoot in a constant fps
[17:01] <ubitux> http://ffmpeg.org/ffmpeg-filters.html#colorchannelmixer
[17:01] <ubitux> http://ffmpeg.org/ffmpeg-filters.html#colorbalance
[17:01] <ubitux> http://ffmpeg.org/ffmpeg-filters.html#curves-1
[17:01] <ubitux> anyway...
[17:01] <ubitux> :)
[17:02] <sine0> did you catch what i wrote
[17:02] <ubitux> sine0: ffmpeg -i ... -f image2pipe ... or simply out%03d.png as output if you want
[17:02] <ubitux> you'll re-encode the pngs at a constant rate
[17:03] <sine0> I dont understand: are you saying that if i do that it will be a bad thing or wrong, OR that that's what i should do?
[17:03] <ubitux> it will be fine
[17:03] <ubitux> assuming you have a constant frame rate as source
[17:04] <ubitux> even with variable fps actually, ffmpeg should dup the frames when necessary
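(So, roughly, assuming a constant 25 fps source (adjust -framerate to match); the audio has to be mapped back in separately:

    ffmpeg -i input.mov out%03d.png
    # ... batch-process the pngs with gimp ...
    ffmpeg -framerate 25 -i out%03d.png -c:v libx264 -pix_fmt yuv420p output.mp4
)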
[17:13] <Technicus> Hi, I am recording screen casts in a Linux environment with Simple Screen Recorder: < http://www.maartenbaert.be/simplescreenrecorder/ >.  My coworker then needs to be able to open the files in Adobe Premiere.  How can I convert them to a format Adobe will play nice with, via FFMPEG?
[17:13] <c_14> What format is SSR creating?
[17:14] <c_14> format + codecs
[17:14] <c_14> But pretty much anything these days will eat H.264 in an mp4 container.
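(e.g., assuming SSR wrote something like recording.mkv:

    ffmpeg -i recording.mkv -c:v libx264 -pix_fmt yuv420p -c:a aac -strict experimental converted.mp4

where -pix_fmt yuv420p keeps the file importable by picky consumers like Premiere.)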
[17:18] <Harzilein> sacarasc: riff-compatible "obfuscated" avi, basically
[17:18] <Harzilein> sacarasc: ffmpeg already supports it through case statements in the avi demuxer
[17:19] <Harzilein> sacarasc: but the muxer never got mainlined, and now it's awkward for people to get amv support into ffmpeg because you can't select an avi variant that way
[17:22] <sacarasc> Hmm, okay.
[17:23] <jkli> hi guys
[17:23] <jkli> is there any command to merge a batch of video files?
[17:24] <c_14> merge as in concat?
[17:25] <jkli> yeah
[17:25] <c_14> https://trac.ffmpeg.org/wiki/How%20to%20concatenate%20(join,%20merge)%20media%20files
[17:25] <jkli> got 30 vids all 1-2 min
[17:25] <c_14> look at the "with a bash for loop" section.
[17:25] <c_14> also the process substitution section
[17:26] <jkli> already there
[17:27] <jkli> for f in ./*.wav - this finds all wav files in the CURRENT dir?
[17:27] <c_14> yep
[17:27] <jkli> or do i have to use absolute folder path
[17:27] <c_14> both work
[17:33] <jkli> works
[17:38] <jkli> but i dont really understand the bash loop
[17:38] <jkli> so does it echo "file filename.mp4" everytime?
[17:38] <c_14> pmuch
[17:38] <jkli> i mean, why is the "file" even needed in the echo loop
[17:39] <c_14> that has to do with the concat demuxer
[17:39] <jkli> so it is required
[17:39] <c_14> yes
[17:39] <jkli> i see
[17:39] <jkli> kk then makes sense
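(The wiki's loop, for reference; each echo emits one file directive because that is the list-file syntax the concat demuxer expects:

    for f in ./*.wav; do echo "file '$f'" >> list.txt; done
    ffmpeg -f concat -i list.txt -c copy output.wav
)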
[17:39] <jkli> just downloading the newly muxed video file
[17:39] <jkli> thank god, i dont have to watch 30 separate videos
[17:59] <sine0> ubitux: how can i work out the settings to use on a particular scene and apply them, seeing as there is no gui? is it a matter of trying it on a frame, looking, and repeating, or can i extract the information from gimp and then apply it on the command line?
[18:00] <ubitux> you can use the timeline option
[18:00] <ubitux> like, -vf "curves=xxxx:enable='between(t,10,25)'"
[18:01] <ubitux> you can test with ffplay first, and frame step with 's'
[18:02] <ubitux> sine0: try ffplay -f lavfi -i testsrc -vf "curves=vintage:enable='between(t,2,5)'"
[18:02] <ubitux> for example
[18:02] <ubitux> note that not all filters support timeline
[18:02] <ubitux> if you see a filter that doesn't but think it should, feel free to name it and i'll have a look
[18:07] <ubitux> sine0: see http://ffmpeg.org/ffmpeg-filters.html#Timeline-editing for more info
[18:08] <sine0> is that cl correct or does -i need to be before -f
[18:08] <sine0> as you posted
[18:08] <sine0> [lavfi @ 0x7f0f440008f0] No such filter: 'MVI_3799.MOV'  0B f=0/0
[18:08] <sine0> MVI_3799.MOV: Invalid argument
[18:08] <sine0> i replaced testsrc with my movie filename
[18:09] <sine0> but i will read the page
[18:10] <ubitux> sine0: -f lavfi is to say that the next specified input (-i testsrc) is of type "lavfi"
[18:10] <ubitux> sine0: replace "-f lavfi -i testsrc" with "-i MVI_3799.MOV"
[18:10] <ubitux> (you can drop the -i with ffplay btw, but not ffmpeg)
[18:11] <ubitux> gtg, bbl
[18:12] <sine0> tybb
[18:15] <tlhiv_work> i have a AVI file that has a single audio channel (on the LEFT side) and I would like to make the video have a MONO audio channel with this one channel duplicated into both LEFT and RIGHT channels
[18:19] <c_14> https://trac.ffmpeg.org/wiki/AudioChannelManipulation#stereo2monofiles
[18:19] <c_14> just leave out the part for the right channel
[18:20] <c_14> If I'm understanding your question correctly and you want mono output.
[18:20] <Harzilein> tlhiv_work: i think you mean "stream" on two occasions where you write channel. mono streams only have one channel, it's customary for players/operating systems/soundcards to play that on both sides of their analog stereo outputs
[18:24] <tlhiv_work> Harzilein: probably ... i have a video that i just recorded that is only playing out of the left speaker ... i would like it to play (duplicate) out of both speakers
[18:28] <tlhiv_work> nevermind ... -ac 1 does it
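(i.e.:

    ffmpeg -i input.avi -c:v copy -ac 1 output.avi

downmixes to a single mono channel, which players render on both speakers; an alternative that keeps two channels is -af "pan=stereo|c0=c0|c1=c0", duplicating the left channel into both.)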
[21:10] <A__> I was just going to join here and ask why .mp4 H.264 videos exported from Premiere Pro (CC 2014) are so god damn big because ffmpeg can convert them without any loss of quality to a fraction of their size.
[21:10] <A__> But then ffmpeg produced a bigger file than the source.
[21:10] <A__> So it's not always the case.
[21:10] <A__> But one has to wonder how bad the Adobe encoder is.
[21:10] <A__> Because it's usually grossly oversized.
[21:14] <JEEB> it's not like the encoder knows what size your source is, and it /shouldn't/ know
[21:14] <JEEB> set CRF to the highest value that still looks good, and encode with the slowest preset that is still fast enough for you, and you should get as much compression as you can handle :P
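(i.e. something like the following, where -crf 18 is often cited as x264's "visually lossless" neighborhood; raise the value for smaller files:

    ffmpeg -i premiere_export.mp4 -c:v libx264 -crf 18 -preset slow -c:a copy smaller.mp4
)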
[21:27] <llogan> A__: might be useful to you https://trac.ffmpeg.org/wiki/Encode/PremierePro
[21:28] <llogan> haven't tried it with CC7+ though
[21:29] <A__> JEEB: CRF = ?
[21:29] <A__> Whenever I set the "target bitrate" to maximum, etc., the video output is completely broken.
[21:30] <A__> Hehe. Their lossless output is so big that my hard drives just look at me with sad eyes and go: "Why, Master? Why?"
[21:31] <A__> Hmm. Both of those solutions, llogan, are quite annoying.
[21:31] <llogan> so is AME
[21:32] <A__> Never used that crap.
[21:32] <A__> I just wanna export in a sane way.
[21:33] <A__> I always use H.264. But I tend to have to run ffmpeg on that output file each time too.
[21:33] <A__> Or else it takes a million megabytes.
[21:33] <llogan> how can you use Premiere but not Adobe Media Encoder?
[21:42] <A__> llogan: I'm not sure I understand the question.
[21:43] <A__> Maybe you are going to say that it's built in or something.
[21:43] <A__> But AFAIK, it's a separate application.
[21:43] <A__> And one that sucks, at that.
[22:01] <A__> "Install Ut Video. It is a fast, lossless video encoder (and decoder), and is natively supported by ffmpeg. "
[22:01] <A__> What?
[22:01] <A__> This also makes no sense.
[22:12] <line0> how does it not make sense?
[22:14] <line0> utvideo is a vfw-based encoder, which is why you can use it in AME
[00:00] --- Wed Aug 20 2014

