[Ffmpeg-devel-irc] ffmpeg.log.20160425
burek
burek021 at gmail.com
Tue Apr 26 02:05:01 CEST 2016
[02:23:32 CEST] <SnakesAndStuff> what is the format when using the -ss option for 100+ hours into a video?
[02:23:42 CEST] <SnakesAndStuff> I get an error when I use -ss 229:00:15
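For reference, -ss also accepts a plain number of seconds, so an offset of 229 hours and 15 seconds can be written as 229 * 3600 + 15 = 824415, i.e. -ss 824415, which sidesteps any ambiguity in how a three-digit hour field is parsed.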
[06:12:54 CEST] <blubee> hi guys I have a question about using ffmpeg to record my desktop
[06:13:06 CEST] <blubee> but I feel like this is a little verbose and could be cleaned up a bit
[06:13:19 CEST] <blubee> can I get some help with this ffmpeg command
[06:13:21 CEST] <blubee> http://pastebin.com/QF9bqyRH
[09:05:53 CEST] <t4nk200> Hey, is there a way to change the Presentation Timestamp (PTS) into the Systemtimestamp ?
[09:35:00 CEST] Last message repeated 1 time(s).
[10:35:43 CEST] <t4nk200> how to manage, that every Stream I start has a synchronised Timestamp?
[12:02:49 CEST] <A_> LOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOLLLLLLLLLLL
[12:02:51 CEST] <A_> so much
[12:03:46 CEST] <A_> uhm :/
[12:04:05 CEST] <mad_ady> Hello everyone!
[12:04:11 CEST] <A_> hi
[12:04:13 CEST] <A_> :D
[12:04:19 CEST] <mad_ady> I have a question about ffserver...
[12:04:48 CEST] <mad_ady> I managed to get it to stream rtsp from a UVC webcam, but I'd like for the ffmpeg stream to be started only when the client connects via rtsp
[12:05:04 CEST] <A_> :/
[12:05:12 CEST] <A_> hmmm
[12:05:13 CEST] <mad_ady> right now ffmpeg transcodes 24/7 and the client might connect for 5 minutes in one day
[12:05:16 CEST] <mad_ady> and it's not efficient
[12:05:31 CEST] <mad_ady> any ideas if it could be done via ffserver?
[12:05:36 CEST] <A_> so it like event/trigger for user
[12:05:59 CEST] <A_> no idea about ffserver yet, because never use that before ._.
[12:06:06 CEST] <mad_ady> yes - user connection starts ffm feed
[12:06:26 CEST] <mad_ady> no worries, I'll idle around - maybe somebody know about it
[12:06:34 CEST] <mad_ady> will google some more, maybe I missed something
[12:07:43 CEST] <mad_ady> I have a possible workaround - listen to ffserver's log file and if I see a client connected, fire up the ffmpeg stream
[12:07:50 CEST] <mad_ady> but... it seems rudimentary
[12:09:51 CEST] <mad_ady> it's a good thing that ffserver and ffmpeg run on the same system so I can do this
[12:13:50 CEST] <JEEB> mad_ady: if you are depending on ffserver be ready to become a maintainer since nobody in the main development team wants to keep it around
[12:13:59 CEST] <Prelude2004c> hey everyone good morning
[12:14:23 CEST] <JEEB> it's a hack that will break with the next major API bump
[12:14:29 CEST] <JEEB> in its current form that is
[12:14:49 CEST] <Prelude2004c> having a very strange issue... this is what is happening.. i have a channel being encoded.. and i set PVR to record a show... for some reason after i do the playback on the show... the segments are 6 seconds long.. basically the playback is -6 so it only plays for 6 seconds from the end... odd.. its like end of file is at 0 and beginning i have no idea.. :(.. i dont even know how to explain it
[12:40:46 CEST] <Prelude2004c> anyone, any hints?
[13:16:36 CEST] <mad_ady> @JEEB: thanks for the warning. I am prepared for ffmpeg api changes since the avconv fork a while ago...
[13:31:24 CEST] <mad_ady> @JEEB: can you recommend an alternative to ffserver that can do rtsp?
[13:31:56 CEST] <mad_ady> I only need it for signalling, streaming is already done through ffmpeg
[14:15:26 CEST] <Carlrobertoh> Hi! I'm writing c++ code for screen casting. I'm able to get the video but it has no timeline? What can cause this?
[14:31:44 CEST] <t4nk016> how to add fix_teletext_pts as mpegts option ?
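If memory serves, fix_teletext_pts is a private option of the mpegts demuxer, so it would go before the input it applies to, roughly:

    ffmpeg -fix_teletext_pts 1 -i input.ts -c copy output.ts

Treat the option name and placement as an assumption to check against ffmpeg -h demuxer=mpegts.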
[14:44:02 CEST] <Ximon> Hi, How would I go about having a single encoder write to 2 different outputs asynchronously? I'd like record to disk and also transmit an rtsp stream.
[14:44:40 CEST] <Ximon> But when the network is interrupted, the recording task is also interrupted.
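One thing worth checking for this record-plus-stream case is the tee muxer, which lets a single encode feed several outputs and can be told to ignore a failing slave; a rough sketch (the filenames and RTSP URL are placeholders, not a tested recipe):

    ffmpeg -i input -c:v libx264 -c:a aac -map 0 -f tee \
        "[f=matroska]recording.mkv|[onfail=ignore:f=rtsp]rtsp://server/stream"

With onfail=ignore on the network slave, a dropped connection should leave the file recording running.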
[15:10:15 CEST] <Carlrobertoh> Hi! I'm writing c++ code for screen casting. I'm able to get the video but it has no timeline? What can cause this?
[15:10:37 CEST] <cq1> kevin_newbie: Neat accessibility project. I presume you aren't trying to do this in real time?
[15:12:39 CEST] <Ximon> Carlrobertoh, What do you mean by "no timeline"?
[15:13:09 CEST] <Carlrobertoh> when i open my video in vlc then the video duration is 00:00
[15:13:26 CEST] <Ximon> But it still plays? What codec are you using?
[15:13:31 CEST] <Carlrobertoh> H.264
[15:13:34 CEST] <Carlrobertoh> yes it plays
[15:13:39 CEST] <Ximon> And what container?
[15:14:11 CEST] <kevin_newbie> cq1 Nope! I got the *.avi or related, and the proper srt. What I can do is read the srt, and voice them with pyvona or another library. but how do I say "hey ffmpeg put this thing at these timestamps"
[15:14:18 CEST] <Carlrobertoh> .h264
[15:14:37 CEST] <kevin_newbie> I can use cat for simple concatenation, not more than that :S
[15:16:46 CEST] <Ximon> Carlrobertoh, Sounds like you're writing a raw h264 stream to a file without a container, metadata like video duration is normally written to a container (e.g MPEG4, Matroska),
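For anyone doing this through the API, the usual fix is to open a real muxer with libavformat and push the encoded packets through it instead of writing the raw Annex-B stream straight to a file. A minimal sketch, with illustrative names (not Carlrobertoh's code), error handling omitted, and an already-configured H.264 encoder context assumed:

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* open a Matroska muxer for an existing encoder context `enc` so the
     * output gets a proper container (and therefore a duration) */
    static AVFormatContext *open_muxer(const char *filename, AVCodecContext *enc,
                                       AVStream **out_st)
    {
        AVFormatContext *oc = NULL;
        avformat_alloc_output_context2(&oc, NULL, NULL, filename);
        AVStream *st = avformat_new_stream(oc, NULL);
        avcodec_parameters_from_context(st->codecpar, enc); /* copy codec params */
        st->time_base = enc->time_base;
        if (!(oc->oformat->flags & AVFMT_NOFILE))
            avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
        avformat_write_header(oc, NULL);
        *out_st = st;
        return oc;
    }

    /* per encoded packet: rescale timestamps into the stream time base and mux */
    static void write_packet(AVFormatContext *oc, AVStream *st,
                             AVCodecContext *enc, AVPacket *pkt)
    {
        av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
        pkt->stream_index = st->index;
        av_interleaved_write_frame(oc, pkt);
    }

    /* at the end: av_write_trailer(oc); avio_closep(&oc->pb); avformat_free_context(oc); */

An already-recorded raw stream can also be wrapped after the fact without re-encoding, e.g. ffmpeg -framerate 30 -i capture.h264 -c copy capture.mp4 (the frame rate has to be supplied since raw H.264 usually carries no reliable timing).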
[15:17:39 CEST] <Ximon> Carlrobertoh, What command are you using?
[15:17:53 CEST] <Carlrobertoh> i'm writing a code
[15:17:57 CEST] <Carlrobertoh> i dont have a command
[15:18:27 CEST] <Carlrobertoh> but i could send my output
[15:20:22 CEST] <Carlrobertoh> http://www.upload.ee/image/5757630/ffmpeg.png
[15:20:31 CEST] <Carlrobertoh> i dont know if that helps
[15:21:34 CEST] <Ximon> Can you pastebin the entire output?
[15:25:40 CEST] <kevin_newbie> how can i set the ffmepg to sync hundreds of audio_clips in one file?
[15:32:58 CEST] <Prelude2004c> hey.. anyone here familiar with segmentation on ffmpeg ?
[15:33:02 CEST] <Prelude2004c> i need help with that.
[15:33:06 CEST] <Prelude2004c> having a very strange issue... this is what is happening.. i have a channel being encoded.. and i set PVR to record a show... for some reason after i do the playback on the show... the segments are 6 seconds long.. basically the playback is -6 so it only plays for 6 seconds from the end... odd.. its like end of file is at 0 and beginning i have no idea.. :(.. i dont even know how to explain it
[15:34:58 CEST] <BurnerGR> kevin_newbie, do you mean concatenate hundreds of audio_clips in one file?
[15:39:17 CEST] <cq1> kevin_newbie: This is my first day in #ffmpeg in about four years. Four years ago when people would come in and ask to concatenate clips they'd just get laughed at -- it was so broken. Maybe concatenation and mixing works now, but back then the solution would pretty much be to decode all the audio yourself, mix it yourself as a big raw file, and re-encode. You should see what others say.
[15:40:18 CEST] <Carlrobertoh> Ximon, it practically is entire output
[15:45:41 CEST] <kevin_newbie> BurnerGR the only way I can think of is to "convert each speech line in the srt to voice" and then bind them together, but I want them in sync lol
[15:48:44 CEST] <kevin_newbie> cq1 This is my 1st day ever here lol and I've used the internet since last millennium :p I'm new to python (almost 4 months). but got stuck with ffmpeg...
[15:50:36 CEST] <kevin_newbie> what I want to know is if there's a way to put that audio clip at the same timestamp as in the srt, and multiply that by hundreds of them ...
[15:51:55 CEST] <kevin_newbie> in the same script to bind in one no-video-but-full-audio-sync-generated-by-my-native-language-subtitles
[15:52:26 CEST] <kevin_newbie> I think is this the best explanation lol
[16:20:46 CEST] <cq1> kevin_newbie: Can you do all the audio processing yourself? That is, pull all the audio out into one big WAV file, then mix in your clips yourself with python/c++, then re-encode everything?
[16:21:01 CEST] <cq1> This solution sort of sucks, but 1) it would work, and 2) I don't know if ffmpeg is good at stuff like this yet.
[16:21:59 CEST] <cq1> Then you can also have a lot of manual control over how it's mixed. You could do ducking, and quiet the main track when your audio comes in.
[16:26:03 CEST] <kevin_newbie> cq1 I can't follow all your steps, because I've never done that before. what I can do is pick the lines from the srt, use an API to convert that text to voice, and name the files with the timestamps I find in the srt I think lol :) then your steps/instructions are implemented? lol
[16:32:56 CEST] <kevin_newbie> I don't want to make another job obsolete for humankind lol, nor to replace the original audio-track of the movie, just generate a synchronized audio-file in my native language, so my grandfather can watch the movie and listen to the translation with some earphones....
[16:33:43 CEST] <kevin_newbie> cq1 do you totally understand this ? lol
[16:34:29 CEST] <furq> kevin_newbie: https://docs.python.org/2/library/wave.html
[16:34:52 CEST] <furq> i imagine you can just seek to the right point and write
[16:35:07 CEST] <furq> the only issue would be timestretching any of the wavs which are too long
[16:35:20 CEST] <furq> or delaying the next one
[16:36:38 CEST] <cq1> kevin_newbie: What I'm suggesting is a rather brute-force crappy solution. You could load WAVs with the module furq linked, and do all the mixing yourself in one big scratch WAV file. Alternatively, you could transcode to WAVs, then merge in each clip with the ffmpeg filters amerge and apad. I suspect you might only be able to merge in one clip at a time though, so it might be O(n^2) time to apply ffmpeg n times to merge in your n clips
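For the seek-and-write idea, the arithmetic is just sample offset = cue start time in seconds * sample rate: a cue at 00:01:30,500 in a 48 kHz, 16-bit mono WAV starts 90.5 * 48000 = 4344000 frames in, i.e. 8688000 bytes past the start of the data chunk (2 bytes per frame); the exact byte offset depends on the channel count and sample width of the target file.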
[16:36:58 CEST] <furq> doing this with ffmpeg sounds absolutely awful
[16:37:17 CEST] <furq> aside from the muxing bit at the end
[16:38:11 CEST] <kevin_newbie> lol ffmpeg is, until now, the only magic tool I'm aware of here...
[16:38:43 CEST] <kevin_newbie> I'll try to understand the wave module
[16:39:35 CEST] <kevin_newbie> furq, and cq1 thanks, but probably I'll pitch you later lol
[19:26:19 CEST] <DanielMan> Greetings, I'm banging my head against a problem that, by all the documentation I've read, *seems* like it should work: create a side by side mosaic of two webcam inputs. One consistently lags behind the other by about 1 second no matter what options I try.
[19:26:21 CEST] <DanielMan> http://pastebin.com/VDRwci5S
[19:26:46 CEST] <DanielMan> Any help or point in the right direction on where to learn what I'm doing wrong would be grad. Thanks! :)
[19:27:26 CEST] <DanielMan> Grand, rather.
[19:29:34 CEST] <DanielMan> Incidentally ... in the paste, I'm feeding a v4l2loopback at /dev/video3
[19:30:46 CEST] <DanielMan> I have experienced the same delay if I route the output to a file.
[19:49:34 CEST] <prelude2004c_zzz> hey guys.. sorry to be a bother.. anyone know why..#EXTINF:6.286700, wsbkhd2M26640.ts #EXTINF:6.006000, wsbkhd2M26641.ts #EXTINF:5.989322, wsbkhd2M26642.ts #EXTINF:6.006000, wsbkhd2M26643.ts .. anyone know why the length keeps changing ?
[19:49:44 CEST] <prelude2004c_zzz> any way to force it to always be 6 for example instead of .xxxx
[19:57:00 CEST] <DHE> your video must have constant keyframe intervals. perhaps set -x264opts no-scenecut
[19:57:16 CEST] <DHE> and constant FPS, obviously
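A hedged example of what DHE describes, for a 6-second target (numbers and the output format are placeholders to adapt): disable scene-cut keyframes and force one exactly every 6 seconds so the segmenter can always cut on schedule:

    ffmpeg -i input -c:v libx264 -x264opts no-scenecut \
        -force_key_frames "expr:gte(t,n_forced*6)" \
        -c:a copy -f hls -hls_time 6 out.m3u8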
[20:02:28 CEST] <DHE> using the new codecpar, how do I get the pixel format of a video? eg: AV_PIX_FMT_YUV420P I can't find it after doing avformat_find_stream_info
[20:03:35 CEST] <DHE> ran a "git pull" and my custom application just broke into pieces
[20:04:16 CEST] <JEEB> then there's a high chance it was using something internal
[20:04:31 CEST] <JEEB> you should still have avcodec context available to you from the external APIs
[20:04:51 CEST] <JEEB> as far as I can see the demux/decoding example wasn't changed at all f.ex.
[20:04:52 CEST] <JEEB> https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/demuxing_decoding.c
[20:06:07 CEST] <DHE> all I did was the register calls, avformat_open_input, avformat_find_stream_info, then examine the output in the debugger
[20:06:33 CEST] <DHE> avfctx->streams[0]->codec->pix_fmt was AV_PIX_FMT_YUV420P before pulling, AV_PIX_FMT_NONE afterwards
[20:07:04 CEST] <DHE> I've used the examples as a main reference, yes
[20:08:54 CEST] <DHE> I should bisect this..
[20:10:08 CEST] <JEEB> for more details can you note your container and video format? I'm trying to check how ffmpeg.c is outputting the input pix_fmt
[20:11:37 CEST] <DHE> container input is mpegts (UDP source), codecs are MPEG2video and AC3 audio (5.1 surround)
[20:12:10 CEST] <DHE> I'll still try bisecting
[20:12:41 CEST] <JEEB> ok, seems like ffmpeg.c checks input_stream->decoding_needed
[20:12:52 CEST] <JEEB> and avcodec_open2's in case it is needed
[20:13:47 CEST] <DHE> so I'm supposed to open the decoder myself and feed it until it's ready/happy?
[20:14:00 CEST] <DHE> and avformat_find_stream_info doesn't do that anymore
[20:14:31 CEST] <JEEB> I'm trying to make sure of that, but accessing the avctx from avformat could have been a happy accident that worked before
[20:18:44 CEST] <JEEB> ffmpeg.c is full of random hacks that you definitely shouldn't take example from but this decoding_needed thing doesn't even seem too new
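For anyone hitting the same thing, a minimal illustration (not DHE's program) of reading the pixel format through the codecpar API:

    #include <stdio.h>
    #include <libavformat/avformat.h>
    #include <libavutil/pixdesc.h>

    int main(int argc, char **argv)
    {
        AVFormatContext *fmt = NULL;
        av_register_all();                        /* still needed in 2016-era builds */
        if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
            return 1;
        if (avformat_find_stream_info(fmt, NULL) < 0)
            return 1;
        for (unsigned i = 0; i < fmt->nb_streams; i++) {
            AVCodecParameters *par = fmt->streams[i]->codecpar;
            if (par->codec_type == AVMEDIA_TYPE_VIDEO) {
                /* for video streams, codecpar->format is the AVPixelFormat */
                const char *name = av_get_pix_fmt_name((enum AVPixelFormat)par->format);
                printf("stream %u: %s\n", i, name ? name : "none");
            }
        }
        avformat_close_input(&fmt);
        return 0;
    }

If codecpar->format is still AV_PIX_FMT_NONE after avformat_find_stream_info(), probing didn't determine it; the other route is to open a decoder yourself via avcodec_parameters_to_context() + avcodec_open2() and decode a few packets.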
[20:20:28 CEST] <john32> Hello guys. I have a small issue with ffplay which I hope someone may be able to help with! I've been trying stuff for almost a full day without success! I have a 4k monitor and am trying to play a video feed in 1080p. Problem is, as soon as I go full screen, the image is zoomed hugely. How would I force the video to be the correct size??
[20:26:41 CEST] <JEEB> DHE: related to that thing
[20:26:42 CEST] <JEEB> 21:20 <@Daemon404> codec->pix_fmt was supposed to keep working after codecpar (it is, in fact, synced with codecpar)
[20:26:45 CEST] <JEEB> some shit
[20:26:47 CEST] <JEEB> 21:20 <@Daemon404> it depends if he's expecting to update it during decode or
[20:27:07 CEST] <JEEB> tl;dr please provide sample and either result from ffmpeg.c or a small C sample
[20:29:33 CEST] Action: DHE is finishing up a bisect...
[20:33:12 CEST] <JEEB> anyways, I recommend dumping the mpeg-ts with something like tcpdump or vlc's dump mode
[20:33:16 CEST] <JEEB> and testing with that
[20:33:22 CEST] <JEEB> so that you're testing with the same sample all the time
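ffmpeg itself can also grab such a sample if tcpdump or VLC aren't handy, along the lines of (the address is a placeholder):

    ffmpeg -i udp://239.0.0.1:1234 -map 0 -c copy -t 60 sample.ts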
[20:34:04 CEST] <DHE> ... that's a good point
[20:34:14 CEST] <JEEB> and I would guess ffmpeg.c finds the pix_fmt just fine?
[20:39:44 CEST] <DHE> ffprobe is reading the file just fine..
[20:40:27 CEST] <JEEB> what about just ffmpeg -i hurr.ts
[20:42:06 CEST] <DHE> yes that works. everything looks fine in the output
[20:42:18 CEST] <JEEB> ok, so it's up to your API usage then
[20:42:49 CEST] <JEEB> basically at this point the best way to go forward is to provide a sample and a piece of code and that can then be checked
[20:43:05 CEST] <JEEB> either you're doing something wrong, or there's something broken in FFmpeg
[20:56:06 CEST] <ferdna> why is my video feed not live...
[20:56:19 CEST] <ferdna> i mean its like real time
[20:59:34 CEST] <DanielMan> What are you trying to accomplish ferdna and how is your output not as expected?
[21:00:06 CEST] <ferdna> DanielMan, i move my ipcam to record another source like a tv
[21:00:16 CEST] <ferdna> but it is not realtime
[21:00:31 CEST] <ferdna> it takes a few minutes to get to where the tv is at
[21:02:00 CEST] <DanielMan> sounds like you are washing everything through a buffer before outputting it. does your command specify a buffer or have you tried -fflags nobuffer?
[21:02:13 CEST] <furq> ferdna: pastebin your ffmpeg command
[21:02:18 CEST] <ferdna> one sec
[21:03:11 CEST] <DHE> .... great, I can't reproduce in anything except this program.... how is that happening?
[21:03:40 CEST] <ferdna> DanielMan, furq: http://pastebin.com/hMw8TbgR
[21:09:15 CEST] <ferdna> any ideas?
[21:09:17 CEST] <DanielMan> ferdna: I'm not familiar with ffserver, though it seems like you need to give ffserver at the very least a smaller buffer if you want more real-time performance.
[21:09:29 CEST] <ferdna> ohhh
[21:11:36 CEST] <DanielMan> that 250M file may be where this is happening. it seems like by default it wants to preserve every frame, so it uses a buffer to accumulate enough to accommodate interruptions.
[21:12:54 CEST] <DanielMan> try lowering the FileSizeMax? It might not give you real time performance, but at least a shorter delay to start off with.
[21:13:40 CEST] <ferdna> what would be a good value?
[21:14:32 CEST] <DanielMan> try something like 0 and see if you can trick it into just passing everything along, otherwise start at say 10M and fiddle with it.
[21:14:47 CEST] <Wader8> hello
[21:14:47 CEST] <DanielMan> what is the desired performance that you are trying to achieve?
[21:14:52 CEST] <Wader8> i had a realization today
[21:14:52 CEST] <DanielMan> hello there.
[21:15:12 CEST] <DHE> JEEB: I tried making a minimal program using the same code as I use for the master program and it works, master doesn't... well, off to the races I go
[21:15:38 CEST] <JEEB> enjoy the ride
[21:16:26 CEST] <Wader8> so the codec authors only write the documentation, they don't actually make the damn codec. i was extremely surprised by this. then it's opened up for implementation by whoever does it, so a new codec like HEVC is labeled as "released" but it's "under heavy development" in ffmpeg and others, which is so weird for how this industry works
[21:16:57 CEST] <ferdna> DanielMan, thank you for the advice
[21:17:04 CEST] <JEEB> usually with a standardized thing you get a) the spec and b) the reference implementation
[21:17:05 CEST] <ferdna> my goal is real-time
[21:17:08 CEST] <Wader8> so all the x264 videos out there aren't even the same codec, 1000s of different flavours of it from around the years
[21:17:49 CEST] <Wader8> so they only do a reference, which nobody uses in the end right ?
[21:18:05 CEST] <JEEB> many people base upon it, like in the case of HEVC and x265
[21:18:26 CEST] <JEEB> multicoreware just grabbed HM (the reference encoder/decoder) and started trying to optimize it
[21:18:58 CEST] <JEEB> they licensed the name from x264 LLC so they also had the right to use x264's code as long as they made a GPL version of what was to become x265
[21:19:03 CEST] <DanielMan> ferdna: no problem. have you seen this? https://ffmpeg.org/ffserver.html#toc-Tips
[21:19:06 CEST] <Wader8> so what happens if nobody else does a better implementation, the world stops, all the web streaming would cease to exist, i mean, some people are working so much probably on their own on this, but I can guess companies pay people to contribute, same with linux i heard
[21:19:24 CEST] <ferdna> DanielMan, yes been there...
[21:20:24 CEST] <DanielMan> the player starting in the past is an interesting option. Not sure if that would have helped you
[21:20:27 CEST] <JEEB> anyways, reference implementations are to prove the decisions and make sure things are being improved in the spec. and generally, the spec and the reference implementation should agree. if they disagree, generally the spec wins. sometimes of course mistakes or uncertainties are found in the spec so it's edited and updated with time like with any documentation
[21:20:28 CEST] <Wader8> i never understood the H264 and X264 monikers, i usually delete these from filenames, but anyway I think X264 is then an implementation of H264 right ?
[21:20:56 CEST] <JEEB> yes, in MPEG terms MPEG-4 Part 10 aka AVC or in ITU-T terms recommendation H.264
[21:21:19 CEST] <JEEB> and x264 is just one of many implementations, although currently clearly one of the best if not the best
[21:22:11 CEST] <Wader8> so these international UN-type organizations make these things like that: the codec is released without really being done, and then silently updated
[21:22:22 CEST] <JEEB> eh
[21:23:05 CEST] <Wader8> well, what's the point of HEVC if it's going to take 5 years to reach some okay speeds and maturity
[21:23:31 CEST] <JEEB> same was done with AVC
[21:23:35 CEST] <Wader8> I have here some stuff, im just going to mux old DVDs into MKV with MPEG2 intact, since I'm tired of spending 15 hours on HEVC
[21:23:38 CEST] <JEEB> it took years for the implementations to improve
[21:23:52 CEST] <JEEB> and yes, encoding with x264 used to be slow in '06 or so
[21:24:02 CEST] <JEEB> it's nowadays that we really tend to forget how time goes by
[21:25:34 CEST] <Wader8> I believe this whole stagnating development non-GPU type of transcoding "taking 10 years" to reach multicore CPUs even is kind of a really broken type of system imo, but im not the expert, I just don't think these UN-type international European Union-type organizations should be given this much credit
[21:25:47 CEST] <JEEB> ahahahaha
[21:25:51 CEST] <JEEB> excuse me
[21:26:23 CEST] <JEEB> but you have no idea of how video compression works if you think that for nice compression formats GPU encoding with the actual GPU thing will be of any help speed-wise
[21:26:56 CEST] <JEEB> GPU encoding with the actual GPU is something that you need to specifically design formats for. I'm pretty sure ATi/AMD did that at one point. But the point is, that's not geared for compression
[21:27:04 CEST] <Wader8> well what, come on, it's 2016 and there's no proper codec with GPU support, only some god knows where nvidia encoder which is probably outdated ..etc
[21:27:12 CEST] <JEEB> because GPUs require *threading* . *massive* threading
[21:27:31 CEST] <Wader8> you know, there's no native GPU support in x264 or 5, and I don't have NV card
[21:27:54 CEST] <DHE> nvenc actually uses a dedicated encoder chip. running nvenc has no impact on CUDA, opencl or 3d games
[21:27:55 CEST] <JEEB> and that is something you just can't get when you are designing a format around actual compression due to having references all over the place :P
[21:28:19 CEST] <Wader8> it's been 15 years since x264 came to use so how can't they even have basic GPU support
[21:28:21 CEST] <JEEB> aka you have dependencies all over the place and thus you can't freely multithread on the level that GPGPU requires
[21:28:44 CEST] <DHE> Wader8: there is opencl support. whether it provides any benefit is workload dependent. I've seen 20% speedups, I've seen it run SLOWER
[21:28:45 CEST] <JEEB> tl;dr the *format* has to be made from the ground up for such usage, and *compression* is not something you get out of that
[21:29:05 CEST] <JEEB> DHE: also the opencl stuff is a proof of concept as well as only ME lookahead I think?
[21:29:24 CEST] <JEEB> because it's one of the few things that could be run separately from the encoding itself
[21:29:32 CEST] <Wader8> i read the ffmpeg docs that -hwaccel only works in conjunction with the nv_enc codec, if that is old or still ok info
[21:29:55 CEST] <Wader8> and opencl would only affect some filters, not really the core of the encoding process
[21:30:02 CEST] <Wader8> is what I heard
[21:30:12 CEST] <kepstin> Wader8: nvenc is modern and maintained, but it doesn't use gpu compute - it uses a separate dedicated hardware encoder/decoder implemented on the gpu die
[21:30:31 CEST] <JEEB> not really a filter but lookahead for the motion estimation, yes
[21:30:32 CEST] <DHE> JEEB: sounds about right. offloading some jobs. CPU still keeps plenty busy
[21:30:50 CEST] <JEEB> because the lookahead can be run separately from the main encoding processing
[21:31:00 CEST] <Wader8> kepstin, i get it, the one NV added specifically for this, but, does it use the whole GPU to assist, or just that area on the die ?
[21:31:15 CEST] <JEEB> they implemented an AVC encoder on the die in an ASIC
[21:31:29 CEST] <JEEB> you get exactly what they did in hardware, nothing more nothing less
[21:31:32 CEST] <DHE> Wader8: there is a distinct encoder chip on some GPUs. nvenc will fail if that chip is not present on your graphics card
[21:31:36 CEST] <Wader8> JEEB, but there's much more than just lookahead in the encoding process ?
[21:32:11 CEST] <jkqxz> So did everyone else. Pretty much every SoC you can buy for putting in phones or whatever has H.264 encode and decode hardware on, as does every modern CPU.
[21:32:14 CEST] <DHE> JEEB: a GPU consists of a 500+ core CPU with almost no inter-CPU communication
[21:32:43 CEST] <DHE> (this is a good way to imagine it)
[21:32:56 CEST] <JEEB> Wader8: yes - it was picked for GPGPU because it could be run separately and wouldn't negatively affect the encoding process as long as you could upload images there and back to/from the GPU fast enough (and thus as long as it was running in front of the main process)
[21:33:10 CEST] <JEEB> that is because of the limitations of an actual GPU's architecture :P
[21:33:29 CEST] <JEEB> as I said, if you want to have something optimal to be done on the GPU, you have to design the format from the ground up for GPUs
[21:33:35 CEST] <JEEB> and that means removing compression
[21:33:42 CEST] <DHE> and possibly a limitation of implementing encoding in x264 which is already a mature product. conversion to GPU-style encoding paradigms probably means a major rewrite
[21:34:06 CEST] <JEEB> because you implicitly gain dependencies on data like other reference frames when trying to do a high compression format
[21:34:24 CEST] <Wader8> jkqxz, it has the hardware support, whatever that means, but in practice does it really use the whole chip to assist, or only a dedicated area on the chip, is the big question, because if it's not the whole chip, it means that another chip is plastered on like an addon which is a total difference, it's not going to use the total horsepower of the main chip that is branded and has a name
[21:34:45 CEST] <JEEB> yes, it's a completely separate ASIC part of the GPU or CPU die :P
[21:35:01 CEST] <JEEB> usually just plastered together with the media segment of it
[21:35:12 CEST] <Wader8> okay, so that's more like "half-baked GPU-Assisted"
[21:35:30 CEST] <jkqxz> It ends up attached to the GPU bits because it wants all of the same memory management, but it won't actually use the GPU cores.
[21:35:38 CEST] <DHE> it's still "offload", just to a video encoder chip rather than the GPGPU chip
[21:35:55 CEST] <DHE> remember, nvidia wanted people to stream their games. making the encoder separate means no gaming performance loss
[21:36:04 CEST] <JEEB> as I've been trying to say for quite a while now, unless GPUs suddenly change in how they work they're just not the tool for the trade for high compression aimed formats
[21:36:14 CEST] <iive> see, you can consider the vga and 3d computations a completely separate parts of the GPU
[21:36:15 CEST] <JEEB> so trying to expect that from GPUs is just dumb
[21:36:42 CEST] <JEEB> please try to read what I've noted and do some research if my words aren't convincing enough
[21:36:45 CEST] <iive> the video de/encoding is just another separate part.
[21:37:44 CEST] <Wader8> DHE, but then, less area for GPU stream processors, it's really a point of view, and they take the one that has a more customer appealing type of marketing view
[21:38:32 CEST] <DHE> umm.. they can build the chip however they want. they can just make it bigger for the additional space needed
[21:39:16 CEST] <jkqxz> The thing is mostly bounded by power consumption anyway. Using dedicated hardware to do the video is a massive gain there, you would use at least an order of magnitude more power using the GPU cores to do it.
[21:39:18 CEST] <Wader8> but see, they don't, every time there's a new nanometer, there's all talk about shrink and "saving power, saving lives, saving CO2, saving private ryan, saving money, saving saving saving"
[21:39:30 CEST] <iive> well, game consoles nowadays record the gameplay in real time all the time. So you can save your last 5 minutes if something interesting happens
[21:39:48 CEST] <DHE> yeah, probably with a similar chip involved
[21:39:51 CEST] <Wader8> oh i forgot saving wales
[21:39:56 CEST] <Wader8> whales*
[21:40:10 CEST] <DHE> but in your PC/laptop you can have an HD video conference and let the GPU chip encode the camera video
[21:40:45 CEST] <Wader8> I guess GPGPU support is such a big deal then, that these ITU people don't tackle it or what
[21:41:30 CEST] <Wader8> JEEB, oh you mean GPUs aren't the type of chips suited for these caluclations, i guess that's fair
[21:42:18 CEST] <JEEB> calculations are one thing, but then you have to see how good the architecture of thing you're trying to use is suited in the overall picture, including how threadable all parts of your thing are
[21:42:26 CEST] <Wader8> But I don't get why on the CPU it's already such a problem, this SSE/MMX stuff has been there for ages, haven't they figured it all out already
[21:42:31 CEST] <jkqxz> They tackle it perfectly - they make a precisely-specified standard which everyone else can implement hardware for, such that they all interoperate. That's exactly what they're there for. The reference implementation is mostly irrelevant, though it is convenient for testing.
[21:43:02 CEST] <JEEB> because it's a new format and it's a new thing to optimize!?
[21:43:06 CEST] <iive> Wader8: see, intel also have decoder/encoder on the cpu,
[21:43:23 CEST] <iive> it is just that they also have a full vga on it too.
[21:43:25 CEST] <JEEB> both psychovisually and speed-wise
[21:43:50 CEST] <JEEB> if I took you 10 years into the past you'd be saying the same about x264
[21:44:06 CEST] <JEEB> '06 and athlon XP running hot encoding with slow x264 parameters
[21:44:07 CEST] <Wader8> iive, so all these years, CPUs and GPUs were never the type of chips suited for codec calculations ?
[21:44:18 CEST] <JEEB> or maybe even a k8
[21:44:31 CEST] <Wader8> Can somebody make a Codec Chip PCI-E card then ?
[21:44:35 CEST] <iive> hey, you don't need a new format. MPEG2 is quite easy to decode in parallel, because each row is a separate slice.
[21:44:38 CEST] <kepstin> well, cpus are by definition general-purpose, which means they don't particularly excel at anything
[21:44:56 CEST] <kepstin> Wader8: sure, nvidia sells those, and you get a gpu too!
[21:44:58 CEST] <jkqxz> Wader8: Yes. Separate DSPs for video coding have existed for years.
[21:45:25 CEST] <Wader8> jkqxz, but not as big as a GPU 300mm2 core ?
[21:45:40 CEST] <iive> yeh, i've read that radeon 9500 have both decoder and encoder.
[21:46:23 CEST] <JEEB> also the separate chips aren't much better until the people who make them learn how that format can be efficiently implemented. the format doesn't change, but research gets done in how the quality per bit part can be improved :P
[21:46:27 CEST] <kepstin> given that the encoder chip is a relatively small part of the gpu core - and similar encoder blocks are used on e.g. cell phone chips, it should be possible to stuff quite a few on a dedicated chip... if you could sell enough of them to make it worthwhile.
[21:46:28 CEST] <Wader8> what does Radeon R7 370 have ?
[21:46:36 CEST] <Wader8> if someone knows off the head, i can look up later
[21:47:24 CEST] <Wader8> kepstin, nobody making them so obviously nobody's buying them, imo
[21:48:01 CEST] <iive> there were some dedicated decoders on pcmpci cards
[21:48:09 CEST] <iive> ffmpeg even still supports one of them.
[21:48:11 CEST] <kepstin> i'm kind of curious about how well throwing an h264 encoder on an fpga would work
[21:48:13 CEST] <Wader8> and I think it would be sold, compared to all the crap that gets done and still sells
[21:48:13 CEST] <jkqxz> Yes. I mean things like embedded fixed-point DSP chips (TI C6000 and the like).
[21:49:15 CEST] <Wader8> kepstin, isn't it a bit disturbing that nobody tried this yet, i mean, from the community of all the conversion-heads out there, I'm like the lowest guy on the totem pole and I'm the first one with this idea ?
[21:49:19 CEST] <anadon> I can't seem to get non-corrupted frames in decode.cpp:decodeToFrames()
[21:49:19 CEST] <anadon> https://github.com/anadon/youtubeFS/blob/master/decode.cpp
[21:50:34 CEST] <Wader8> not saying im the one, just saying how I'm thinking out of the box but i'm not even converting stuff for very long or knew how
[21:52:53 CEST] <jkqxz> Wader8: By dedicated chips doing a lot of video encode, perhaps you are thinking of something like <http://www.ambarella.com/products/security-ip-cameras#S5>?
[21:53:57 CEST] <Wader8> Why don't the GPU manufacturers throw this on the GPUs, is it really such a niche and a cost ?
[21:54:00 CEST] <iive> Wader8: fpga have been done. after all you need to do it before going to asic
[21:54:11 CEST] <carli> Hey, sorry for this quick question! What could be wrong with converting an online GIF to MP4? I get error: Input/output error. If I save this gif before converting, then it works just fine.
[21:55:01 CEST] <Wader8> what is this AMD Drag and Drop Transcoding thing then ?
[21:55:17 CEST] <jkqxz> No one actually needs more than one stream of 1080p60 or so, except for people who are prepared to pay through the nose for bigger chips. It's like why Intel don't bother making consumer parts with more cores, which they could trivially do.
[21:55:56 CEST] <Wader8> if there are so many versions of x264, how does the HW support then account for the future versions, just throws out an error ?
[21:56:37 CEST] <furq> what?
[21:56:51 CEST] <DHE> JEEB: I've found out what's wrong. avcodec_open2 is failing on the decoder because... and I don't know how, the codec has some fields out of order and it looks like an encoder instead...
[21:56:52 CEST] <Wader8> jkqxz, im more talking about making a card specifically for encoders and the TV/Internet industry, what do they use, of course, they just throw a server farm at it
[21:56:58 CEST] <DHE> so clearly something's gone catastrophically wrong somewhere
[21:57:06 CEST] <jkqxz> H.264 has been fixed completely since 2005. There have been a few extensions (SVC, MVC), but the base stuff is static.
[21:57:29 CEST] <furq> the tv industry is largely still using mpeg-2
[21:57:49 CEST] <furq> they have no use for a pci-e card full of h.265 encoder asics because practically nothing can decode it yet
[21:57:52 CEST] <jkqxz> Wader8: Yes, those things all exist. You pay through the nose for them. They are not consumer products. (See link above, say.)
[21:58:18 CEST] <furq> and you can encode a lot of concurrent h.264 streams on a halfdecent server cpu
[21:58:32 CEST] <Wader8> okay ...
[21:59:36 CEST] <Wader8> furq, well, for my case, im archiving, so I look at quality and size, not for streaming, so I use "slower" speed
[21:59:55 CEST] <furq> if you're looking at quality then you don't want to use a hardware encoder anyway because they're all optimised for speed
[21:59:57 CEST] <Wader8> which is around 5 FPS per second
[22:00:28 CEST] <sfan5> 5 frames per second per second
[22:00:39 CEST] <Wader8> furq, optimized for speed, hmm, so this is getting really complex now,
[22:00:41 CEST] <furq> i'm sure it's possible to make a hardware encoder which is as good as x264 veryslow but i'm not aware of such a thing existing
[22:01:01 CEST] <furq> nvenc/qsv/whatever the amd one is called that nobody supports certainly aren't anywhere near that good
[22:01:09 CEST] <furq> vce?
[22:01:17 CEST] <Wader8> not only is the software config defining how my video is going to look, but then the HW will also affect the quality, i see this as a weird design
[22:01:45 CEST] <furq> it's no different from using two different h.264 encoders
[22:01:45 CEST] <Wader8> the HW should just calculate what the software tells it, not have some fixed stuff in HW that overrides the software,
[22:02:04 CEST] <furq> so you want a software hardware encoder?
[22:02:07 CEST] <Wader8> isn't that simpler
[22:02:19 CEST] <furq> i've got one of those on my motherboard
[22:02:51 CEST] <Wader8> the HW should just do the calculations and let the quality/speed be decided by the software; this "optimized for speed" in HW just seems like a shortcut to me, doesn't please both camps
[22:03:13 CEST] <Wader8> with the term software, im referring to the codec
[22:03:40 CEST] <ferdna> why do i get this warning:
[22:03:41 CEST] <ferdna> Setting default value for video bit rate tolerance = 128000. Use NoDefaults to disable it.
[22:05:25 CEST] <furq> i expect the majority of the reason for nvenc's existence is people streaming themselves playing bad indie games on twitch
[22:05:44 CEST] <furq> all those people care about is that it doesn't make their framerate drop from 98 to 97
[22:05:58 CEST] <furq> or from 15 to 14 if it's a unity game
[22:06:04 CEST] <DHE> JEEB: issue identified. a major miscompilation...
[22:14:14 CEST] <DanielMan> ferdna are you using ffserver with mpegvideo format?
[22:14:32 CEST] <ferdna> no.
[22:14:35 CEST] <ferdna> using ffserver yes
[22:14:47 CEST] <ferdna> DanielMan, this is what i want to do
[22:14:58 CEST] <ferdna> get video from a ipcam... and stream it to webm format
[22:15:04 CEST] <ferdna> which i am able to do it
[22:15:09 CEST] <ferdna> but not in real time
[22:15:13 CEST] <ferdna> i am doing something wrong
[22:16:06 CEST] <DanielMan> have you seen this bug report? does it fit your scenario?
[22:16:20 CEST] <ferdna> no i havent seen it...
[22:16:25 CEST] <ferdna> i wouldnt know if it fits or not
[22:16:31 CEST] <DanielMan> https://trac.ffmpeg.org/ticket/4794
[22:18:18 CEST] <ferdna> DanielMan, i am not using mpegvideo
[22:19:01 CEST] <DanielMan> ok, well, at least we eliminated some low hanging fruit
[22:19:33 CEST] <ferdna> lol
[22:19:40 CEST] <DanielMan> What was the result of lowering the FileSizeMax?
[22:23:33 CEST] <DanielMan> will a shorter delay fit your needs? I'm not sure how "realtime" you can possibly get with this many layers of translation in the mix.
[22:25:16 CEST] <DanielMan> raw->encode->UDP->recode to webm->*shrug*->decode
[22:27:16 CEST] <ferdna> DanielMan, do you have other solution for this?
[22:27:20 CEST] <DanielMan> my point being that if you point your cam at a stopwatch and view the output of this system next to the stopwatch, I'm not sure how low you can really get that.
[22:27:34 CEST] <ferdna> i dont have to recode
[22:27:39 CEST] <TD-Linux> if you need very low delay the usual solution is RTP (or things that use it like WebRTC)
[22:27:44 CEST] <ferdna> i just need to stream incoming video from theipcam
[22:28:46 CEST] <prelude2004c_zzz> hello boys and girls.. question... i am doing ffmpeg -i $stream&overrun_nonfatal=1.... -f mpegts - | ffmpeg -i - something else..... i have found a problem where sometimes when the first input loses data it stays on because i am running the &overrun_nonfatal=1 but the second one seems to end.. is there any way to make sure the ffmpeg -i - stays alive and waiting for new data ? something like " ffmpeg -i -&overrun_nonfatal=1 .. would
[22:28:47 CEST] <prelude2004c_zzz> that even work ?
[22:35:19 CEST] <Isaac> just gonna throw this out here: anyone got a resource that explains -rc-lookahead or -lag-in-frames? I'm having trouble really getting it.
[22:35:39 CEST] <ferdna> DanielMan, do you have any config for h264?
[22:43:11 CEST] <DanielMan> in what context?
[22:43:48 CEST] <prelude2004c_zzz> anyone have any input ?
[22:47:36 CEST] <DanielMan> I have not had experience with the issue you are having prelude2004c_zzz
[22:47:40 CEST] <ferdna> DanielMan,
[22:47:46 CEST] <DanielMan> perhaps you can help me with this ... http://pastebin.com/VDRwci5S
[22:47:54 CEST] <ferdna> i need to take h264 from a ipcam... to the same format
[22:48:37 CEST] <DanielMan> one camera consistently lags a few seconds behind the other where I need them to be within a few milliseconds of each other for a machine vision application
[22:50:40 CEST] <DanielMan> did you try modifying the FileSizeMax or switching your stream protocol to RTP?
[22:50:49 CEST] <furq> ferdna: does it need to be over rtsp
[22:50:57 CEST] <ferdna> not really furq
[22:51:06 CEST] <ferdna> any method is ok
[22:51:22 CEST] <furq> i'd advise against using ffserver because it's unmaintained and likely to be removed in a future version
[22:51:31 CEST] <furq> and also it's generally a bit crap, and iirc it forces you to reencode
[22:51:32 CEST] <ferdna> furq, what do i use then?
[22:51:48 CEST] <furq> rtsp would be fine but i don't have a recommendation for another rtsp server
[22:51:57 CEST] <furq> i do have a recommendation for a good rtmp server, if that works for you
[22:52:10 CEST] <ferdna> i dont know what that is or the difference
[22:52:11 CEST] <furq> you could just send the h.264 stream directly to that
[22:52:23 CEST] <furq> rtmp is what flash media server uses
[22:52:35 CEST] <furq> https://github.com/arut/nginx-rtmp-module
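For context, the nginx-rtmp setup being recommended is small; a minimal sketch of the rtmp block plus a push command (names and addresses are placeholders, and -c:v copy assumes the camera already sends H.264):

    rtmp {
        server {
            listen 1935;
            application live {
                live on;
                record off;
            }
        }
    }

    ffmpeg -i rtsp://camera/stream -c:v copy -an -f flv rtmp://localhost/live/cam1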
[22:52:37 CEST] <DanielMan> Have you looked at VLC?
[22:52:39 CEST] <furq> a lot of people in here use that for streaming
[22:52:48 CEST] <ferdna> ohh cool
[22:53:01 CEST] <furq> it does hls as well but if you need low latency then that's not much good
[22:53:06 CEST] <ferdna> i guess i'll stick to ffserver for now
[22:53:09 CEST] <furq> otherwise you need flash or a desktop player
[22:53:25 CEST] <furq> s/desktop/standalone/
[22:54:01 CEST] <furq> i feel a bit like a broken record recommending nginx-rtmp so much, but i'm yet to see anyone recommend anything better
[22:54:22 CEST] <ferdna> i see
[22:54:24 CEST] <ferdna> no worries
[22:55:38 CEST] <DanielMan> Hope that helps you, sorry I couldn't be of more help to you.
[22:57:30 CEST] <ferdna> no that is okay
[22:57:31 CEST] <ferdna> thanks
[22:57:52 CEST] <ferdna> furq, you are right:
[22:57:52 CEST] <ferdna> That's just because FFMPEG will be dropped in a future Debian release, they're just warning you now not to get too hooked on it and start using AVCONV instead, which is a fork of FFMPEG.
[22:58:02 CEST] <furq> er
[22:58:02 CEST] <furq> no
[22:58:10 CEST] <furq> that's a totally different thing which is thankfully in the past now
[22:58:56 CEST] <furq> ffmpeg was dropped from debian a few years ago in favour of libav, but it's back as of the last release
[22:59:01 CEST] <furq> and nobody ever spoke of libav again
[23:00:32 CEST] <ferdna> ohhh
[23:03:16 CEST] <prelude2004c_zzz> thank you daniel
[23:03:18 CEST] <pandb> how do I control the speed of a video that i'm encoding?
[23:03:56 CEST] <prelude2004c_zzz> anyone else?
[23:04:08 CEST] <prelude2004c_zzz> i need to keep the ffmpeg open to listen for stdin without closing
[23:04:13 CEST] <pandb> i'm starting out with raw picture data that i run through sws_scale, then avcodec_encode_video2, then finally av_write_frame
[23:04:29 CEST] <pandb> the video that's output runs really fast
[23:05:36 CEST] <pandb> i've set the time_base fields of the AVStream and AVCodecContext based on my desired frame rate
[23:05:36 CEST] <Wader8> hello again
[23:05:53 CEST] <Wader8> how do I properly downscale NTSC 720x420
[23:05:59 CEST] <Wader8> a DVD
[23:06:10 CEST] <furq> downscale it to what
[23:06:17 CEST] <Wader8> and to retain 16:9 DAR
[23:06:31 CEST] <furq> don't downscale it
[23:06:44 CEST] <furq> leave it at the native res and set the ar in the container with -aspect 16:9
[23:06:53 CEST] <Wader8> don't worry it's for something else, not meant as a main thing
[23:06:54 CEST] <pandb> and after avcodec_encode_video I increment the pts of the packet
[23:07:20 CEST] <pandb> i'm not sure what else I need to do to control the speed of playback
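For reference, the usual timestamp plumbing looks roughly like the sketch below (illustrative names, not pandb's code): the encoder time base is set to 1/fps before avcodec_open2(), each frame's pts counts up in those ticks, and the packet timestamps are rescaled into the stream's time base, which the muxer may have changed inside avformat_write_header(). Writing packets whose pts is still in the wrong time base is exactly the kind of thing that makes playback run fast.

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* before avcodec_open2(): enc->time_base = (AVRational){1, fps};
     *                         st->time_base  = enc->time_base;          */

    static void encode_and_write(AVFormatContext *oc, AVStream *st,
                                 AVCodecContext *enc, AVFrame *frame,
                                 int64_t frame_index)
    {
        AVPacket pkt;
        int got = 0;
        av_init_packet(&pkt);
        pkt.data = NULL;
        pkt.size = 0;

        frame->pts = frame_index;              /* counted in enc->time_base ticks */
        avcodec_encode_video2(enc, &pkt, frame, &got);
        if (got) {
            /* convert from encoder ticks to whatever the muxer chose for the stream */
            av_packet_rescale_ts(&pkt, enc->time_base, st->time_base);
            pkt.stream_index = st->index;
            av_interleaved_write_frame(oc, &pkt);
        }
    }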
[23:07:21 CEST] <Wader8> i'd like it half down
[23:07:42 CEST] <furq> i'm not sure what you're asking for then
[23:07:48 CEST] <Wader8> it's for some other thing, a sample
[23:08:01 CEST] <furq> -s 123x456
[23:08:02 CEST] <Wader8> downscaling the weird 720x420
[23:08:13 CEST] <Wader8> do you understand
[23:08:22 CEST] <Wader8> it has nonsquare pixels, i can't just do that
[23:09:04 CEST] <DanielMan> prelude2004c_zzz I'm assuming you've tried -timeout? I know that works for some input types
[23:09:06 CEST] <Wader8> I can't use the table for this
[23:09:10 CEST] <furq> table?
[23:09:37 CEST] <Wader8> doesn't even exist on the table anyway https://pacoup.com/2011/06/12/list-of-true-169-resolutions/
[23:10:09 CEST] <furq> you can downscale to whatever resolution you want if you set -aspect 16:9
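Putting furq's advice into one hedged command line (filenames are placeholders): halve the stored resolution and just declare the display aspect ratio, leaving the pixels anamorphic:

    ffmpeg -i dvd_title.mkv -vf scale=iw/2:ih/2 -aspect 16:9 -c:a copy half.mkv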
[23:10:14 CEST] <Sirisian|Work> Anyone ever experience a http://pastebin.com/bPsnAgcw "PES packet size mismatch"? I have an h264 stream that I'm reencoding to an rtmp stream and noticed that it runs fine for around 10 seconds to 2 minutes then it starts outputting errors and never recovers.
[23:10:28 CEST] <prelude2004c_zzz> hum..... i have not
[23:10:30 CEST] <Wader8> NTSC and PAL are such a pile of crap man
[23:10:35 CEST] <prelude2004c_zzz> can i set timeout 0 as in never ?
[23:10:57 CEST] <Wader8> oh i just got a few more DVDs and I get rid of this nonsense
[23:11:31 CEST] <prelude2004c_zzz> -timeout -1
[23:11:32 CEST] <prelude2004c_zzz> going to try
[23:11:33 CEST] <furq> don't remind me about all these dvds i need to rip
[23:11:52 CEST] <furq> all of these hybrid film/video dvds that i need to deinterlace without ruining
[23:11:55 CEST] <DanielMan> put it immediately preceding the input you wish to set it for
[23:12:02 CEST] <furq> who thought that was a good idea
[23:12:47 CEST] <furq> or everyone's favourite, the entirely-shot-on-film dvd with interlaced credits
[23:14:01 CEST] <prelude2004c_zzz> does not like -1
[23:14:06 CEST] <prelude2004c_zzz> timeout option not found
[23:14:11 CEST] <Wader8> i'm on a run with a big PC cleanup super archival sorting, so I took a shortcut to just mux the DVDs into MKVs, to get rid of the vob segment nonsense, i'll put a low quality failsafe x264 preview version beside it if the DVD MKV won't run 20 years later, ... it's like they designed it to be as annoying as possible
[23:15:04 CEST] <furq> that seems pointless
[23:15:07 CEST] <Wader8> i'll maybe redo those MPEG2 interlaced stuff
[23:15:11 CEST] <DHE> furq: I once heard someone tell me about a video that had a PIP where the main picture was TFF and the PIP was BFF
[23:15:14 CEST] <DanielMan> try a positive number like 2000
[23:15:23 CEST] <furq> there is literally no chance of there not being an mpeg-2 decoder in 20 years
[23:15:33 CEST] <Wader8> we'll I have other priorities now, need to finish before summer, other stuff then
[23:15:34 CEST] <furq> unless there's been a nuclear war
[23:15:36 CEST] <DanielMan> anything longer than your longes dropout before you seriously consider there is a total failure in connection
[23:15:47 CEST] <furq> DHE: that does not surprise me at all any more
[23:15:57 CEST] <DanielMan> your longest dropouts, rather
[23:16:10 CEST] <Wader8> like fukushima isn't already half of it
[23:16:35 CEST] <furq> you mean that thing where nobody died
[23:16:43 CEST] <Wader8> all those californian coast trendies surfing all day, god help them
[23:16:44 CEST] <DHE> garbage in, garbage out. that's my motto. I can't fix it if you give me this garbage.
[23:16:56 CEST] <prelude2004c_zzz> anyone else have any other input ? i can't get ffmpeg -timeout -1 -i ..... to work
[23:17:02 CEST] <prelude2004c_zzz> complains saying timeout is not an option
[23:17:04 CEST] <furq> bbc dvds are terrible for wonky interlacing
[23:17:15 CEST] <furq> on the plus side it's all pal so i don't ever need to deal with pulldown
[23:17:35 CEST] <Wader8> furq, nobody died at fukushima, you kidding me ?
[23:17:42 CEST] <furq> no?
[23:17:57 CEST] <Wader8> you watch mainstream media only or what, yeez
[23:18:04 CEST] <furq> a lot of people died because of the earthquake and the tsunami and the oil refinery that blew up
[23:18:15 CEST] <furq> nobody died because of the nuclear plant
[23:18:37 CEST] <furq> also that's the opposite of the picture you'd get from the mainstream media which did nothing but go INVISIBLE DANGER for two months
[23:18:45 CEST] <Wader8> furq, it's not the topic I personally dug deepest, but others I know did, but I know precisely that the Japanese govt was telling people to drink milk and smile that will make radiation go away
[23:19:11 CEST] <Wader8> on national TV
[23:19:13 CEST] <q3cpma> Hello, is it normal if ffmpeg spouts errors and produces a different output than mpcdec when decoding a freshly encoded musepack file?
[23:19:35 CEST] <furq> q3cpma: pastebin the command and output
[23:20:47 CEST] <Wader8> furq, i guess we could settle with "not yet" but, if you get cancer and die 30 years early, are you going to say it wasn't from that ... so that's what I meant
[23:20:52 CEST] <q3cpma> furq: http://pastebin.com/XiRJrgLH
[23:22:00 CEST] <furq> if mpcdec works and ffmpeg doesn't then it could be a decoder bug
[23:22:04 CEST] <q3cpma> Also, the result is different in length compared to mpcdec. (ffmpeg is 00:03:27.104 and mpcdec is 00:03:28.806)
[23:22:38 CEST] <Wader8> well okay, but nobody can know the number for sure later, it's just affecting people health it's already taking half your life away i mean, but I hope MPEG2 survives
[23:22:40 CEST] <relaxed> q3cpma: see if it happens with ffmpeg git master
[23:24:04 CEST] <DanielMan> hey folks, anyone have any inkling as to why this is delaying one webcam by a full second? http://pastebin.com/VDRwci5S
[23:24:26 CEST] <q3cpma> relaxed: Happens too.
[23:25:03 CEST] <q3cpma> The file plays fine except those worrying (BIG RED) errors
[23:26:35 CEST] <Wader8> Channel Wide Question: the FFMPEG docs say that the preset changes the speed, but it clearly affects the quality too, it's night and day
[23:26:37 CEST] <relaxed> DanielMan: see if using the same demuxer options for both inputs makes any difference
[23:26:53 CEST] <DanielMan> I get that it's an obscure one but thank you to those who have been looking at it. :)
[23:26:57 CEST] <Wader8> I mean, it explicitly says it doesn't affect quality, i think
[23:27:13 CEST] <Wader8> but that's not true, i mean, at least for x265 and 4
[23:27:14 CEST] <DanielMan> ok relaxed, I'll give that a shot.
[23:27:21 CEST] <furq> where does it say it doesn't change the quality
[23:29:13 CEST] <Wader8> i would have to find it
[23:30:39 CEST] <DanielMan> relaxed: same options repeated for both inputs with no effect on behaviour.
[23:31:20 CEST] <Wader8> the web says it's "slightly" but i've seen a massive difference between ultrafast and slower
[23:31:35 CEST] <Sirisian|Work> Interesting. My streaming device has something called "Package A" and "Package B" and only Package B works with the h264 stream. The other one seems to be a corrupted stream.
[23:31:51 CEST] <Wader8> im forced to use the slow method just for the quality, also lower size of course
[23:31:53 CEST] <Sirisian|Work> Can't reproduce my previous error now so that's good.
[23:32:23 CEST] <DanielMan> Just to put it out there, I've experienced the same issue with different camera models in pairs.
[23:32:42 CEST] <DanielMan> two ps3 eye's and two ELP's
[23:33:21 CEST] <q3cpma> I tested with latest musepack-tools too, and nothing has changed. Should I file a ticket?
[23:33:55 CEST] <furq> Wader8: the quality will be different at the same crf value
[23:34:00 CEST] <furq> if you don't want to use slow then lower the crf
[23:35:13 CEST] <Wader8> oh
[23:35:25 CEST] <Wader8> well i can't find it right now, it was somewhere in the ffmpeg docs
[23:35:36 CEST] <Wader8> but not on the x264 encoding page
[23:51:58 CEST] <Wader8> well
[23:53:59 CEST] <Wader8> the HEVC recode from the DVD is quite good at 1.5 GB less size, and i didn't use MKV but MP4, so I put it into AAC 448kb/s. is it worth remuxing the original AC-3 from the DVD and putting the whole thing into MKV? another problem, the MP4 HEVC has 4 minutes at the beginning cut off, how would I do that when copying the audio ?
[23:54:15 CEST] <Wader8> if you're still here furq
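One hedged way to do the remux being described (filenames are placeholders and the 4-minute figure is taken at face value): take the video from the HEVC MP4, take the DVD's AC-3 starting 240 seconds in, and stream-copy both into MKV:

    ffmpeg -i movie_hevc.mp4 -ss 240 -i VTS_01_1.VOB -map 0:v -map 1:a -c copy movie.mkv

Sync may still need a small adjustment to that -ss value, since the MP4's cut point won't necessarily land exactly on the 4-minute mark.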
[00:00:00 CEST] --- Tue Apr 26 2016