burek021 at gmail.com
Fri Jun 3 02:05:01 CEST 2016
[00:44:35 CEST] <Guest35> so when i build from master, I am able to see the h264_videotoolbox encoder, but using it still uses up 100% of the CPU -- am I missing something? Command line I'm using is: ffmpeg -v verbose -i ~/Downloads/stuff/bbb_sunflower_1080p_60fps_normal.mp4 -acodec copy -vcodec h264_videotoolbox test.mp4
[00:47:51 CEST] <rkern> That looks right. The software decoder is probably the bottleneck, but it should still be faster than a software encoder.
[00:48:22 CEST] <rkern> Have you tried the VideoToolbox hwaccel for decoding?
[00:56:51 CEST] <prelude2004c> what is --enable-libzvbi ?
[00:59:13 CEST] <furq> prelude2004c: http://zapping.sourceforge.net/ZVBI/
[00:59:45 CEST] <prelude2004c> i don't think that's it though.. ffmpeg-3.0.1 (transport stream input) sees the closed caption data... and the git version does not
[00:59:56 CEST] <prelude2004c> i have compiled them both exactly the same
[01:00:08 CEST] <prelude2004c> not sure how to check what the difference is
[01:03:51 CEST] <llogan> you can attempt a git bisect. how can the issue be duplicated?
[01:06:23 CEST] <prelude2004c> git bisect ? not sure what that means
[01:06:31 CEST] <prelude2004c> yes duplicated every time
[01:06:49 CEST] <prelude2004c> even ffprobe gives me the results different.. it basically ignores the closed caption data on the input side of things
[01:06:52 CEST] <prelude2004c> yet on 3.0.1 i see it
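(For reference, the `git bisect` llogan suggests is a binary search between a known-good and known-bad commit; a rough sketch for this regression, where the udp URL and configure flags stand in for prelude2004c's actual setup:)

```shell
# Binary-search the regression between release 3.0.1 (good) and master (bad).
cd ffmpeg
git bisect start
git bisect bad master
git bisect good n3.0.1
# git checks out a midpoint commit; rebuild and run the same test each time:
./configure <same flags as both builds> && make -j4
./ffprobe -i udp://... 2>&1 | grep 'Closed Captions'
# mark the result and repeat until git names the first bad commit:
git bisect good   # or: git bisect bad
git bisect reset  # return to the original checkout when done
```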
[01:14:14 CEST] <rainabba> I have a build with x264 and opencl support (and I'm using placebo). Do I need to provide a flag to tell ffmpeg to use that opencl support, or will it be used when available?
[01:15:44 CEST] <furq> iirc it's -x264opts opencl=1
[01:15:50 CEST] <furq> don't expect a big speedup from it though
[01:15:56 CEST] <furq> and also just generally don't use placebo at all
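(If furq's recollection is right, enabling it looks like the sketch below; filenames and preset are placeholders, and only x264's lookahead runs on the GPU, hence the modest gains:)

```shell
ffmpeg -i input.mp4 -c:v libx264 -preset slower -x264opts opencl=1 output.mp4
```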
[01:16:12 CEST] <llogan> prelude2004c: i meant for you to tell *us* how to duplicate the issue.
[01:16:17 CEST] <furq> even on a 32-core gpu instance it's probably a waste of time
[01:16:24 CEST] <llogan> by providing sample input file & command
[01:17:07 CEST] <rainabba> furq: Mind if I PM you briefly?
[01:17:12 CEST] <furq> sure
[01:18:43 CEST] <Guest35> rkern: somehow I assumed the VT decoder is used automatically -- does it take a special CLI parameter?
[01:26:35 CEST] <rkern> Guest35: try ffmpeg -hwaccel videotoolbox -i ...
[01:29:26 CEST] <Guest35> rkern: That produces some errors
[01:29:28 CEST] <Guest35> Error creating Videotoolbox decoder.
[01:29:28 CEST] <Guest35> videotoolbox hwaccel requested for input stream #0:0, but cannot be initialized.
[01:29:28 CEST] <Guest35> [h264 @ 0x7fb28b807c00] decode_slice_header error
[01:29:29 CEST] <Guest35> [h264 @ 0x7fb28b807c00] no frame!
[01:30:25 CEST] <Guest35> rkern: is it supposed to support simultaneous encoding and decoding using VT? perhaps that's my problem
[01:31:28 CEST] <hyponic> I want to transcode several live streams on a 4th gen intel i7 using the gpu. i have tried vaapi but the result is not good enough. what other alternatives are there for me to do this?
[01:33:50 CEST] <rkern> Guest35: It should support both encoding and decoding at the same time. You may want to file a bug for that issue.
[01:34:08 CEST] <rkern> I can repro - I'll start digging into it.
[01:39:03 CEST] <jkqxz> hyponic: There is only the one set of video codec hardware there (i.e. Quick Sync). You can also access it via the proprietary Intel Media SDK and libmfx, but the result will be pretty much the same. What is "not good enough" about the result from vaapi? Really the other option is software encode with libx264.
[01:41:11 CEST] <hyponic> jkqxz it glitches every few seconds.. like a freeze or something
[01:42:53 CEST] <rainabba> hyponic: NVENC is another option, but my experience so far is that it's not worth it if you're not doing UHD. Dunno why/how it matters, but it seems like it encodes worse at lower resolutions (not just that the results look better because they're higher).
[01:44:57 CEST] <hyponic> rainabba hmm... i have a few HD and some SD streams that i need transcoded. the vaapi is working but not good enough. i have some samples i can send private if you want.
[01:46:52 CEST] <jkqxz> hyponic: Can you give a bit more detail about what is glitching there?
[01:47:38 CEST] <hyponic> jkqxz priv? i can show you.
[01:47:47 CEST] <jkqxz> (Command line at least. The default options are not exactly ideal.)
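(Since jkqxz notes the defaults are not ideal, a more fully specified VAAPI command might look like the sketch below; the device path, bitrate, and codec are assumptions, and this is not guaranteed to fix the glitches:)

```shell
# Full-hardware decode + encode on Intel via VAAPI; frames stay on the GPU.
ffmpeg -vaapi_device /dev/dri/renderD128 \
       -hwaccel vaapi -hwaccel_output_format vaapi \
       -i udp://... \
       -c:v h264_vaapi -b:v 4M -c:a copy output.ts
```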
[01:48:16 CEST] <jkqxz> Sure.
[02:01:50 CEST] <rkern> Guest35: I got it working with a hack (had to modify the code and recompile). It's actually faster using the software decoder. 8.5x with a software decoder vs 7.5x with the hardware decoder (at 1280x720).
[02:02:59 CEST] <Guest35> rkern: that's great -- could you please share the modification with me so I can build and try it out?
[02:05:30 CEST] <prelude2004c> llogan , sorry for the delay..
[02:05:41 CEST] <prelude2004c> the input is simple.. ffprobe -i udp://xxxxxxxxxx
[02:05:48 CEST] <prelude2004c> it's a live MPEG-TS stream
[02:05:59 CEST] <prelude2004c> 3.0.1 sees the closed caption and git does not
[02:05:59 CEST] <prelude2004c> eg.
[02:06:24 CEST] <prelude2004c> eg..( 3.0.1 ) > Stream #0:4[0x1511]: Video: h264 (High) ( / 0x001B), yuv420p(tv, bt709), 1280x720 [SAR 1:1 DAR 16:9], Closed Captions, 59.94 fps, 59.94 tbr, 90k tbn ... & ( GIT version ) > Stream #0:4[0x1511]: Video: h264 (High) ( / 0x001B), yuv420p(tv, bt709), 1280x720 [SAR 1:1 DAR 16:9], 59.94 fps, 59.94 tbr, 90k tbn
[02:06:51 CEST] <prelude2004c> same exact input command
[02:09:05 CEST] <rkern> Guest35: sure - I have some other changes in that file. I'll clean it up some and send you a patch in a little bit.
[02:15:29 CEST] <rainabba> Do I need to do anything to ensure I'm using opencl support for unsharp or deshake when I use those filters?
[02:17:10 CEST] <llogan> your ffmpeg needs to be compiled with --enable-opencl and the opencl option in unsharp needs to be set to 1
[02:17:22 CEST] <llogan> http://ffmpeg.org/ffmpeg-filters.html#unsharp
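(Putting llogan's two requirements together, the invocation would be roughly as below; input/output names are placeholders, and without `--enable-opencl` at build time the option does nothing:)

```shell
# unsharp with its opencl option enabled (requires an OpenCL-enabled build).
ffmpeg -i input.mp4 -vf "unsharp=opencl=1" output.mp4
```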
[02:17:55 CEST] <rainabba> Ahh. Thank you. Looked at that dozens of times (when I didn't have a build with opencl).
[02:18:21 CEST] <llogan> although vidstab is probably better than deshake
[02:19:32 CEST] <rainabba> I'm actually trying to deal with small, fast jolts, like someone bumping into the camera (or the stage shaking our entire c-stand). Does that matter for what you'd suggest?
[02:23:06 CEST] <llogan> you'll have to compare
[02:23:15 CEST] Action: rainabba nods
[02:26:02 CEST] <rkern> Guest35: https://s3.amazonaws.com/ffmpeg-kern/vthwaccel.patch
[03:09:51 CEST] <Guest35> rkern: thanks, this works! On my system that actually speeds it up (I'm testing with a 1080p file) from about 90fps to around 130, and with about half the CPU usage
[03:13:03 CEST] <hyponic> rkern u got a sec?
[03:14:24 CEST] <rkern> hyponic: what's up?
[03:15:16 CEST] <rkern> Guest35: I'll submit a patch to the ml in a bit - need to make sure that works on iOS too.
[03:15:41 CEST] <hyponic> rkern need a bit of help/advice trying to encode mpegts streams using vaapi on a haswell, but i keep getting blocks/glitches that i can't figure out.
[03:16:21 CEST] <Guest35> rkern: Just as an FYI before you submit, I am getting these warnings: [h264 @ 0x7fd653005400] Hardware accelerated decoding with frame threading is known to be unstable and its use is discouraged.
[03:17:55 CEST] <rkern> Guest35: try adding -threads 1 before the -hwaccel arg
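(Combined with the earlier suggestions, and with rkern's patch applied, the full command would look something like this; filenames are placeholders:)

```shell
# Input options (-threads, -hwaccel) must precede the -i they apply to.
ffmpeg -threads 1 -hwaccel videotoolbox -i input.mp4 \
       -c:a copy -c:v h264_videotoolbox output.mp4
```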
[03:18:27 CEST] <hyponic> is there a way to control bitrate with vaapi better than 1 and 2 ?
[03:19:47 CEST] <rkern> hyponic: very sorry - I'm not familiar with vaapi
[03:20:49 CEST] <hyponic> rkern what can i use to encode mpegts streams using gpu other than vaapi?
[03:21:09 CEST] <hyponic> on an 4th gen i7 haswell?
[03:25:18 CEST] <rkern> hyponic: are you using Windows or OS X?
[03:25:40 CEST] <hyponic> ubuntu x64 14.04
[03:28:11 CEST] <hyponic> rkern
[03:32:06 CEST] <rkern> hyponic: not sure to be honest. You could look around for a Quick Sync API for Linux.
[03:32:35 CEST] <furq> libmfx?
[03:32:44 CEST] <furq> or did that get deprecated
[03:35:01 CEST] <c_14> I think it still exists, but it's presumably still a pain.
[05:15:29 CEST] <hispeed67> i have an .mkv file that shows stream0:0= h264
[05:15:46 CEST] <hispeed67> stream0:1 is opus. i would like to strip audio out as an ogg file.
[05:16:13 CEST] <hispeed67> isn't opus same as ogg? shouldn't i just be able to separate the stream somehow?
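(Opus is the codec and Ogg is a container it can live in, so the stream can be copied out without re-encoding; a sketch with placeholder filenames:)

```shell
# Copy the audio stream (0:1 here, i.e. the first audio stream) into Ogg.
ffmpeg -i input.mkv -map 0:a:0 -c:a copy output.ogg
# Many players expect the .opus extension for Opus-in-Ogg:
ffmpeg -i input.mkv -map 0:a:0 -c:a copy output.opus
```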
[08:59:26 CEST] <hrishi> hi i am trying to pipe opencv's mat into ffmpeg to be written to another virtual video device.... the command i use is
[08:59:27 CEST] <hrishi> ./t | ffmpeg -f rawvideo -pixel_format bgr24 -s 640x480 -r 30 -i - -an -f v4l2 /dev/video1
[08:59:44 CEST] <hrishi> i am getting an output as in the pic
[08:59:46 CEST] <hrishi> http://s33.postimg.org/3x2b2vizz/snapshot2.png
[09:00:42 CEST] <hrishi> ./t returns the webcam image captured
[09:55:58 CEST] <f00bar80> http://vpaste.net/b1t7G I wrote this script to check if ffmpeg stopped on a specific source and restart it , i appreciate your comments.
[10:03:09 CEST] <f00bar80> is there anybody there?
[11:13:55 CEST] <ATField> Can ffmpeg be used to create a video from the input in a single go that will have several different segments stitched end to end? E.g. -ss 00:01 -to 00:03 AND -ss 00:05 -to 00:07 AND -ss 10 -to 12.
[11:21:07 CEST] <ATField> Or to rephrase the question, is there an easier way than this: http://superuser.com/questions/681885/how-can-i-remove-multiple-segments-from-a-video-using-ffmpeg ?
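(A single-command approach is the trim/atrim + concat filter chain, essentially what that superuser answer does; a sketch keeping the three example ranges, with placeholder filenames:)

```shell
ffmpeg -i input.mp4 -filter_complex "\
[0:v]trim=1:3,setpts=PTS-STARTPTS[v0]; \
[0:a]atrim=1:3,asetpts=PTS-STARTPTS[a0]; \
[0:v]trim=5:7,setpts=PTS-STARTPTS[v1]; \
[0:a]atrim=5:7,asetpts=PTS-STARTPTS[a1]; \
[0:v]trim=10:12,setpts=PTS-STARTPTS[v2]; \
[0:a]atrim=10:12,asetpts=PTS-STARTPTS[a2]; \
[v0][a0][v1][a1][v2][a2]concat=n=3:v=1:a=1[v][a]" \
-map "[v]" -map "[a]" output.mp4
```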
[12:36:42 CEST] <xx_pk_xx> Hello guys. I'm trying to decode DVB subtitles using avcodec_decode_subtitle2 function. Is it possible to save those subtitles as png images?
[12:42:08 CEST] <xx_pk_xx> My current code is here: http://pastebin.com/G2aThmvR I'm not even able to get start/end times of the subtitles. Does anyone know what am I doing wrong? Thanks.
[12:45:45 CEST] <ATField> Can someone please help with a syntax issue? How can I make trim work with sexagesimal timecodes (HH:MM:SS.MS) if the following syntax [ trim=start="10:22.548":end="19:46.923" ] confuses ffmpeg?
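(The colons collide with the filtergraph option separator, so they must be escaped inside the filter string, or converted to plain seconds; for example:)

```shell
# Escape the colons inside the quoted values...
ffmpeg -i in.mp4 -vf "trim=start='10\:22.548':end='19\:46.923'" out.mp4
# ...or convert to seconds: 10*60+22.548 = 622.548, 19*60+46.923 = 1186.923
ffmpeg -i in.mp4 -vf "trim=start=622.548:end=1186.923" out.mp4
```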
[12:50:39 CEST] <gaining> i need to tell ffplay to print the byte position when i'm pausing it
[12:50:57 CEST] <gaining> is there some way i can get that info already?
[12:51:19 CEST] <gaining> or maybe some other player can do it?
[16:32:44 CEST] <chama> I need to print a text on a video based on time. For example I need to print "HOME" on a video. So first it prints "H", then after 1 second it prints "HO", then after 1 second it prints "HOM", then finally after 1 second it prints "HOME". Is there any way to achieve this in FFmpeg?
[16:41:40 CEST] <kepstin> chama: maybe write a subtitle file?
[16:42:20 CEST] <kepstin> you can then burn the text into the video using ffmpeg's 'subtitles' filter.
[16:44:31 CEST] <chama> @kepstin true. But I need to apply several effects to the text, e.g. change the color, add a shadow, change the position, add a background box, change the font type, etc
[16:44:43 CEST] <chama> is that possible with subtitles?
[16:44:59 CEST] <kepstin> chama: then it sounds like you want to use the 'ass' subtitle format, and probably aegisub to do all the timing and formatting.
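(For the letter-by-letter reveal specifically, ASS karaoke tags do this natively; a hypothetical dialogue line is shown below. Durations are in centiseconds, so `{\k100}` is one second per letter, and the style's secondary colour would be set fully transparent so unrevealed letters are invisible:)

```
Dialogue: 0,0:00:00.00,0:00:05.00,Default,,0,0,0,,{\k100}H{\k100}O{\k100}M{\k100}E
```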
[16:45:47 CEST] <furq> i mean you could probably do this with drawtext and drawbox, but you'll end up with a command that's visible from space
[16:46:35 CEST] <chama> @kepstin do you have any reference or tutorial that i can followup for aegisub?
[16:47:12 CEST] <chama> @furq how can i do the timing in drawtext?
[16:47:40 CEST] <furq> it will be much easier to just do the thing kepstin said
[16:48:37 CEST] <kepstin> chama: just start at the aegisub web page, they have a manual that goes over timing, formatting, etc.
[16:48:37 CEST] <furq> but you would do it by calling drawtext multiple times with different enable params
[16:49:00 CEST] <chama> @furq thanks.
[16:49:08 CEST] <furq> every time the text or formatting changes means another call to drawtext
[16:49:21 CEST] <furq> so it's going to get bigger than something tim westwood is excited about
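(As a sketch of that drawtext route, with one drawtext per state switched via enable; positions, font size, and timings are made up:)

```shell
ffmpeg -i input.mp4 -vf "\
drawtext=text='H':x=100:y=100:fontsize=48:enable='between(t,0,1)',\
drawtext=text='HO':x=100:y=100:fontsize=48:enable='between(t,1,2)',\
drawtext=text='HOM':x=100:y=100:fontsize=48:enable='between(t,2,3)',\
drawtext=text='HOME':x=100:y=100:fontsize=48:enable='gte(t,3)'" output.mp4
```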
[16:49:31 CEST] <chama> @kepstin thanks :)
[16:49:57 CEST] <chama> :D
[16:50:47 CEST] <xx_pk_xx> Hello guys. I'm trying to decode DVB subtitles using avcodec_decode_subtitle2 function. Is it possible to save those subtitles as png images? My current code is here: http://pastebin.com/G2aThmvR I'm not even able to get start/end times of the subtitles. Does anyone know what am I doing wrong? Thanks.
[16:57:04 CEST] <nick0> Hi, I'm trying to use swresample, with output from a filter, so I get the same amount of frames as the input, I flush the resampler and I get the rest of the frames (and everything sounds as expected). However, it also gives me *extra* frames, which are just a few seconds of the file, played backwards and sped up. What could be causing this?
[16:57:44 CEST] <OmegaVVeapon> @xx_pk_xx Use ccextractor. It does what you need perfectly
[16:59:30 CEST] <xx_pk_xx> @omegaVVeapon, thanks for the tip, however, I would like to encode those DVB subtitles as well, made out of e.g. PNG images
[17:05:34 CEST] <OmegaVVeapon> Mmm, I've seen example of how to encode to DVB sub from SRT files, not a bunch of PNG images...
[17:05:44 CEST] <OmegaVVeapon> how would ffmpeg even know where to place them?
[17:06:16 CEST] <kepstin> my impressions is that most broadcast closed-caption formats were actually text/command based, not images.
[17:06:34 CEST] <OmegaVVeapon> DVB sub are graphic-based (weird af, I know)
[17:06:43 CEST] <OmegaVVeapon> DVB text are normal text-based
[17:07:07 CEST] <kepstin> ah, there's actually a separate thing for subtitles rather than captions?
[17:07:43 CEST] <OmegaVVeapon> yeah, the DVB transport scheme is mostly used in Europe from what I've read
[17:07:56 CEST] <OmegaVVeapon> in the west we use CC's (608 and 708)
[17:08:32 CEST] <OmegaVVeapon> which are embedded in the actual video stream whereas the DVB subtitles have their own separate stream
[17:09:32 CEST] <xx_pk_xx> Well, I need dvb sub, so basically graphics based. However, I was unsuccessful when I tried e.g. srt. My best result was not bad, but I was not able to fully control them, e.g. use more colours in subs
[17:11:30 CEST] <furq> kepstin: dvb text is just teletext, i imagine dvb subtitles are a newer standard
[17:11:50 CEST] <xx_pk_xx> yep, it's a newer standard
[17:13:10 CEST] <OmegaVVeapon> I'm still confused on how you expect the encoding to work. If I give you a PNG with the word "banana" and ask you to encode it into a 1-hour TS. Where in the hell would you place it?
[17:13:35 CEST] <furq> timecodes in the filename?
[17:14:03 CEST] <OmegaVVeapon> is that a thing that ffmpeg understands? Didn't know...
[17:14:07 CEST] <xx_pk_xx> of course I would use some file with times input, however I'm not even sure if I can use a png image (or any other image) as input for DVB subtitles
[17:14:17 CEST] <furq> he's using the api, not ffmpeg
[17:14:54 CEST] <xx_pk_xx> exactly
[17:15:15 CEST] <OmegaVVeapon> ah... ok, I misunderstood that part
[17:17:14 CEST] <furq> i found a tool that will decode dvbsub to png, but it doesn't look like it handles encoding them again
[17:17:59 CEST] <kepstin> ffmpeg itself appears to have a dvbsub decoder and encoder
[17:18:06 CEST] <kepstin> and ffmpeg can also decode pngs
[17:18:16 CEST] <kepstin> so you should be able to do it all with ffmpeg via the api...
[17:18:29 CEST] <xx_pk_xx> umm, which tool is it? Well, I wanted to decode them first, but to be honest I would like to have encoding done mostly... I wanted to decode it with ffmpeg api's to understand how could encoder work
[17:18:41 CEST] <furq> https://github.com/nxmirrors/dvbtools/tree/master/dvbsubs
[17:18:49 CEST] <furq> but yeah it seems like something you should be able to do with the api
[17:19:24 CEST] <xx_pk_xx> ye, that's the thing... it seems like ffmpeg has everything needed, but I'm doing something wrong probably, because I can't even get correct times of subtitles to console...
[17:21:57 CEST] <xx_pk_xx> http://pastebin.com/G2aThmvR - that's the code I have so far, and I can't move from that
[17:24:53 CEST] <kepstin> xx_pk_xx: not sure what you mean by "correct times". What are you getting, and what do you expect?
[17:39:51 CEST] <xx_pk_xx> kepstin, well, for example I'm getting the same start/end times for all subtitles, e.g. 0:00:00.56. It seems like I'm not declaring AVSubtitle correctly, or I'm not freeing memory in the right place using avsubtitle_free
[18:40:21 CEST] <Pepito_> Hello! I was trying to convert a .mp4 to .ts and generate a .m3u8. I've got it working; my question is ... is it possible to generate only the .m3u8? Is it possible to say which .ts segments I want to generate and prevent it from generating all of them?
[18:50:23 CEST] <kepstin> Pepito_: probably not, since the exact timings might vary a bit based on keyframe positioning etc. depending on encoding settings, you could probably make something up that would be close - but you wouldn't even need ffmpeg for that, just write it out directly from a script.
[18:53:03 CEST] <Pepito_> kepstin: thx, I wanted to generate the m3u8 first so I have it as a reference, and when a segment is requested I know exactly what it should contain. The problem is that I have videos of 8 hours, and usually only the beginning is watched. I wanted to pre-generate only some parts of the video and generate the others on demand (considering that a 1-minute segment takes very little time).
[18:55:43 CEST] <kepstin> Pepito_: you wouldn't be able to do that with ffmpeg's built-in HLS/segment muxer, but by manually writing the m3u8 file and generating each segment individually (using e.g. -ss and -t on ffmpeg to exactly select the video contained in the segment) you should be able to do it.
[18:56:52 CEST] <Pepito_> I tried to make the cuts manually, but .ts files have references, right? and the player asks for the first .ts
[18:56:56 CEST] <kepstin> getting timestamps right in the output ts files would probably be a bit of a pain tho, as would getting the audio to play seamlessly. If you put the work in, it should be possible.
[18:58:41 CEST] <Pepito_> and how could I make the .ts files work well? I think they have some reference to the following segment, and it does not work if I create them 1 by 1
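(A sketch of the on-demand approach kepstin describes; segment length, encoder settings, and naming are assumptions:)

```shell
# Segment length in seconds; segment N covers [N*DUR, (N+1)*DUR).
DUR=60
N=3
START=$((N * DUR))
# -output_ts_offset keeps the MPEG-TS timestamps continuous across
# independently generated segments, so a player can play them in sequence.
ffmpeg -ss "$START" -i input.mp4 -t "$DUR" \
       -c:v libx264 -c:a aac \
       -output_ts_offset "$START" \
       -f mpegts "seg$N.ts"
```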
[19:04:14 CEST] <ikonia> I'm trying to concat 2 m4v video files into one, I've been following the docs on this page https://trac.ffmpeg.org/wiki/Concatenate the process is erroring with "Unable to find a suitable output format for 'concat'" the two files are encoded identically
[19:04:50 CEST] <ikonia> from what I've read this should have worked with the demuxer without issue with them being the same
[19:05:25 CEST] <Pepito_> kepstin: thx, I will work out now. Then I go and comment further. Thank you very much
[19:12:46 CEST] <kepstin> ikonia: sounds like you probably have some command line options in the wrong place (many are sensitive to position)
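(For reference, the concat-demuxer form from that wiki page is sketched below; the quoted error usually means `-f concat` ended up after the `-i`, or `concat` was used as an output name. `-safe 0` is only needed when the list uses absolute or otherwise "unsafe" paths:)

```shell
# mylist.txt contains:
#   file 'part1.m4v'
#   file 'part2.m4v'
ffmpeg -f concat -safe 0 -i mylist.txt -c copy output.m4v
```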
[22:20:15 CEST] <OmegaVVeapon> has anyone encoded DVB teletext into a TS from an SRT?
[22:48:22 CEST] <hyponic> any vaapi experts here?
[23:37:43 CEST] <speedcube> what is the gpg key ID for signed ffmpeg static builds?
[23:38:39 CEST] <JEEB> there are no official binaries anyways
[23:38:43 CEST] <speedcube> is it D67658D8
[23:38:54 CEST] <speedcube> there are none?
[23:39:01 CEST] <JEEB> yes, no official ones
[23:39:14 CEST] <JEEB> only source code is official
[23:39:15 CEST] <speedcube> https://ffmpeg.org/download.html#releases
[23:39:22 CEST] <speedcube> so what is this?
[23:39:23 CEST] <JEEB> yes, I know the unofficial ones are linked there
[23:39:27 CEST] <speedcube> ah
[23:39:47 CEST] <JEEB> so if you want to be sure of something, build locally or trust your distribution
[23:41:06 CEST] <speedcube> I use Debian stable but don't want to clutter the system with a bunch of stuff. ofc Debian stable is old as heck though
[23:41:34 CEST] <speedcube> You know who built the stuff over on that page?
[23:42:06 CEST] <speedcube> static builds are nice ... sigh.
[23:42:46 CEST] <llogan> speedcube: which page?
[23:42:58 CEST] <speedcube> https://ffmpeg.org/download.html#releases
[23:43:36 CEST] <llogan> that is a link to the release branches.
[23:43:40 CEST] <llogan> source code
[23:44:07 CEST] <speedcube> lol, did not even unpack it. Thought it was a static build
[23:44:44 CEST] <speedcube> just did a gpg --verify ffmpeg-3.0.2.tar.xz.asc ffmpeg-3.0.2.tar.xz but wanted to make sure the public key was ok
[23:45:19 CEST] <speedcube> RSA key ID D67658D8
[23:45:31 CEST] <c_14> speedcube: the release key is FCF9 86EA 15E6 E293 A564 4F10 B432 2F04 D676 58D8
[23:45:36 CEST] <c_14> as listed in ffmpeg/MAINTAINERS
[23:45:40 CEST] <speedcube> thx
[23:46:38 CEST] <furq> speedcube: 3.0.2 is in jessie-backports
[23:49:15 CEST] <speedcube> prolly the best way to go. Gonna use a backport. Did grab the public key for 2048R/D67658D8 2011-04-26 and it has the same fingerprint as c_14 showed Key fingerprint = FCF9 86EA 15E6 E293 A564 4F10 B432 2F04 D676 58D8
[23:53:53 CEST] <speedcube> Just wondering: why are there no official static builds? Principle, or just too much work to compile?
[00:00:00 CEST] --- Fri Jun 3 2016