[Ffmpeg-devel-irc] ffmpeg.log.20180419

burek burek021 at gmail.com
Fri Apr 20 03:05:01 EEST 2018


[01:53:41 CEST] <wfbarksdale> anyone know if it's possible to get VideoToolbox to fall back to software if hardware is not available?
[01:54:05 CEST] <wfbarksdale> passing some options in avcodec_open2() ?
[01:55:06 CEST] <wfbarksdale> seems hardware is not available in our CI environment, and VideoToolbox would like to see "allow_sw" set to 1
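(A minimal C sketch of what wfbarksdale describes, passing "allow_sw" as a private codec option; the surrounding context/codec setup is assumed, not from the log:)

    #include <libavcodec/avcodec.h>
    #include <libavutil/dict.h>

    /* hedged sketch: ask VideoToolbox to fall back to a software
     * implementation when no hardware is available (CI machines, VMs) */
    static int open_vt_codec(AVCodecContext *ctx, const AVCodec *codec)
    {
        AVDictionary *opts = NULL;
        av_dict_set(&opts, "allow_sw", "1", 0);  /* private codec option */
        int ret = avcodec_open2(ctx, codec, &opts);
        av_dict_free(&opts);  /* frees entries the codec didn't consume */
        return ret;
    }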
[04:13:52 CEST] <Anonrate> Why is there an option to disable iconv, but not one to enable it?  I mean I think there is an option to enable it, but why isn't it shown?  I noticed when cross-compiling with both shared and static enabled, along with iconv, once I copy over the folders (prefix was ../ffbuild) so I can access them on Windows (because I can't find the folders where WSL is located anymore..), if I run ./ffmpeg.exe in
[04:13:53 CEST] <Anonrate> PowerShell, literally nothing comes up..  But!  If I run ffmpeg.exe in the Command Prompt..  I get an error saying that iconv can not be found.  Which is right.  It's not there.  But if I copy over iconv.dll into the ffbuild/bin folder (the source of the iconv is the package win-iconv-mingw32-dev..  Something along those lines), then run ffmpeg.exe in the Command Prompt and ./ffmpeg.exe in
[04:13:55 CEST] <Anonrate> PowerShell..  There is finally output.
[04:14:11 CEST] <Anonrate> I don't remember where I was going with this..  I just had a brain fart..
[04:33:13 CEST] <Anonrate> Okay, so there isn't even an option for --enable-static, just --disable-static.  I'm just going to go ahead and ask my questions (there are a lot, and for some of them I will include what I think the answer may be, so hopefully I don't bother you guys with this).  Why am I not able to get ffplay to be included in the compilation?  What is this ff*_g.exe that gets output?  What if I enable and compile with gcrypt,
[04:33:15 CEST] <Anonrate> gmp, gnutls, libtls and openssl?  Will that break the use of librtmp?
[04:42:08 CEST] <furq> Anonrate: ffplay requires sdl2, and the _g binaries contain debugging symbols
[04:42:39 CEST] <furq> i have no idea if you can build with gnutls and openssl at the same time but one or the other won't break librtmp
[04:43:14 CEST] <furq> the native rtmp support seems to be just as good now though so you probably shouldn't need it
[04:48:33 CEST] <Anonrate> Thank you for that information, furq.  gnutls is for https support if openssl or libtls is not used.  I'm building for Windows; is there a preferable library I should use?
[04:49:25 CEST] <Anonrate> I see that gcrypt is needed for rtmp(t) support if openssl, librtmp or gmp is not used and so forth with the others.
[04:51:02 CEST] <Anonrate> What I'm trying to do is compile a fully featured ffmpeg that can be used on Windows.  So I'm trying to compile every feature that ffmpeg has to offer, making sure it still works on Windows.
[04:52:35 CEST] <Anonrate> Then I am going to make an interactive install script to share with the community.
[05:15:21 CEST] <kepstin> i thought under windows ffmpeg could use native (os provided) apis for ssl?
[05:15:26 CEST] <kepstin> could be wrong.
[05:21:12 CEST] <furq> schannel
[05:22:27 CEST] <furq> it's enabled by default and apparently mingw supports it
[12:46:26 CEST] <alone-y> hello, can i do several crops but output them to one file?
[12:52:22 CEST] <DHE> description is a bit vague, so I'm going to do my best vague suggestion: complex filters with the `concat` filter
[12:54:49 CEST] <alone-y> DHE, sorry for the vague description, i mean i want to have (for example) the first 1/3 of 1600*900 and the last 1/3 of 1600*900 but without the middle part. i need crop, but two at the same time. crop=135:900:500:0 , crop=135:900:1000:0
[12:55:08 CEST] <alone-y> this one for example, but the second crop is not working. only the first
[12:56:31 CEST] <alone-y> i have only one file - but big. for example it's 3*1600*900. But i need to CUT the middle 1600*900 and have only 2*1600*900 without the middle monitor.
[12:56:37 CEST] <alone-y> (it's screen capturing)
[12:58:11 CEST] <alone-y> of course i can do it as two files and after that maybe i can merge them.
[12:58:35 CEST] <alone-y> but maybe there is a better way to cut some part of the image?
[13:06:46 CEST] <alone-y> like vf "delogo=x1:y1:w1:h1, delogo=x2:y2:w2:h2, delogo=x3:y3:w3:h3", but i need to take some parts and merge them together.
[13:13:44 CEST] <alone-y> ffmpeg -i 18-04-2018_4.mkv -vf "crop='135:900:500:0', crop='135:900:1000:0'" 123.mkv
[13:13:59 CEST] <alone-y> not working - only 1 crop.
[13:20:24 CEST] <klaxa> you are cropping a cropped image
[13:20:29 CEST] <klaxa> with a larger area
[13:21:33 CEST] <alone-y> well, my size is 4 times 1600*900
[13:21:56 CEST] <alone-y> how to crop one 1600*900 i know - just crop=1600:900:0:0
[13:21:59 CEST] <alone-y> it's ok
[13:22:34 CEST] <alone-y> but how can i do the crop two times - from 0,0 (left screen) and from, for example, 3200,0
[13:22:38 CEST] <alone-y> ?
[13:22:47 CEST] <alone-y> and put it to ONE file.
[13:26:01 CEST] <zerodefect> Anybody here experienced with subtitles? I'm reading/decoding an SCC subtitle file here using ffmpeg api. My eventual aim is to encode into DVB subs.  Before I get that far, I have one concern - in the source for sccdec.c it sets the codec parameter's codec id to 'st->codecpar->codec_id = AV_CODEC_ID_EIA_608;'  Do I need to perform a conversion of sorts from 608 to DVB sub? :(
[13:26:08 CEST] <alone-y> ffmpeg -i 18-04-2018_4.mkv -vf "crop=135:900:500:0',crop=135:900:1000:0'" 123.mkv
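(For the two-crops-into-one-file question above, a hedged sketch: chained -vf crops operate on the already-cropped image, as klaxa notes, so the input has to be split and the pieces rejoined, here with hstack; the geometry is taken from the discussion:)

    ffmpeg -i 18-04-2018_4.mkv \
      -filter_complex "[0:v]split[a][b];[a]crop=135:900:500:0[left];[b]crop=135:900:1000:0[right];[left][right]hstack" \
      123.mkv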
[13:26:16 CEST] <zerodefect> I was under the impression it would use the libass as an intermediate format
[15:02:53 CEST] <alone-y> can i change some color to another?
[15:03:59 CEST] <durandal_1707> alone-y: be more descriptive?
[15:04:04 CEST] <durandal_1707> there is lut filter
[15:04:15 CEST] <alone-y> hello durandal_1707
[15:04:37 CEST] <alone-y> well for example, i want to change all RED (255,0,0) to Green (0,255,0)
[15:06:14 CEST] <durandal_1707> see lut filters
[15:07:26 CEST] <alone-y> https://ffmpeg.org/ffmpeg-filters.html#lut_002c-lutrgb_002c-lutyuv
[15:07:30 CEST] <alone-y> lutrgb?
[15:07:52 CEST] <alone-y> thank u
[15:09:42 CEST] <alone-y> durandal_1707, cool!
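(As an aside, a hedged sketch of one way to do the red-to-green change alone-y asked about, using colorchannelmixer rather than the lut family durandal_1707 pointed at; filenames are placeholders:)

    # red output = 1*green input, green output = 1*red input; blue untouched,
    # so pure red and pure green swap places
    ffmpeg -i input.mkv -vf "colorchannelmixer=rr=0:rg=1:gr=1:gg=0" output.mkv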
[15:10:22 CEST] <alone-y> offtopic, but can i use it with mpv as an ffmpeg filter?
[15:13:48 CEST] <durandal_1707> alone-y: yes
[15:14:42 CEST] <alone-y> just trying to have some picture with conversion of red and green to something neutral - for example white.
[15:15:19 CEST] <alone-y> i dont want to know where green is and where red is. just one color (white)
[15:19:07 CEST] <alone-y> durandal_1707, sorry for my stupidity, but it seems to me lut is about the r-g-b channels
[15:20:28 CEST] <alone-y> lutyuv="u=128:v=128" - this is for grayscale, but for black and white?
[16:28:10 CEST] <zerodefect> Any folks familiar with getting a subtitle file encoded into DVB subs using the API? I've written a sample app but there are a few things I don't quite understand. Docs are a bit sparse.
[17:48:55 CEST] <will__> hi I have a question about encoding aacs. I have a script which at the moment encodes wavs to aac using afconvert (osx). I'd like to move this script to use ffmpeg so that it can run on linux. The problem is that it seems like ffmpeg is dropping a few frames from the audio file. I guess normally this wouldn't be a problem, but these files are going to be looped, so need to be exactly the right length. Does anyone have any idea on what
[17:58:13 CEST] <alexpigment> will__ i'm not sure what's going on there, but to help you get an idea of where the problem is, try encoding to pcm_s16le (.wav) and also libmp3lame (.mp3) and see if the problem happens there
[18:13:33 CEST] <nemesit> anyone know how to make a mosaic from rtsp streams and play them with ffplay?
[18:18:23 CEST] <furq> nemesit: ffmpeg -i rtsp://foo -i rtsp://bar -i rtsp://baz -i rtsp://qux -lavfi "[0:v][1:v]hstack[t];[2:v][3:v]hstack[b];[t][b]vstack" -f nut -c:v rawvideo - | ffplay -
[18:18:28 CEST] <furq> that's how it works in theory
[18:18:36 CEST] <furq> in practice you're probably going to end up with horrible desync issues
[18:18:47 CEST] <nemesit> desync issues?
[18:19:34 CEST] <furq> hstack/vstack expect the timestamps of the inputs to be in sync
[18:19:45 CEST] <nemesit> hm
[18:20:01 CEST] <furq> so if one of them starts dropping packets or gets out of sync then the whole thing will potentially go to hell
[18:20:03 CEST] <nemesit> I thought about using filter_complex
[18:20:12 CEST] <furq> -lavfi and -filter_complex are the same thing
[18:20:15 CEST] <nemesit> that would be annoying xD
[18:20:18 CEST] <nemesit> ah
[18:20:58 CEST] <furq> dealing with multiple network inputs in ffmpeg generally doesn't work well
[18:21:04 CEST] <nemesit> could I just start multiple ffplay thingies in different areas of the screen?
[18:21:11 CEST] <furq> yeah that'd be much more reliable
[18:21:13 CEST] <nemesit> I did that before with omxplayer
[18:21:38 CEST] <furq> you probably want to use mpv since you can pass the window position and dimensions on the command line
[18:21:45 CEST] <nemesit> but it seems a bit difficult to build for this picore linux
[18:22:11 CEST] <nemesit> furq would have to look whether mpv is available
[18:22:48 CEST] <nemesit> otherwise know some way to do that with ffplay/ffmpeg?
[18:23:16 CEST] <furq> i don't think ffplay lets you set the window position
[18:23:30 CEST] <furq> you'd probably want to build mpv yourself anyway if it's on a pi because you'll need the hardware decode
[18:23:55 CEST] <furq> unless your distro was smart enough to include that
[18:24:01 CEST] <nemesit> doesn't ffplay/ffmpeg do that too?
[18:24:07 CEST] <nemesit> nah it's very very small
[18:24:18 CEST] <nemesit> like 35MB or so xD
[18:24:30 CEST] <furq> you need ffmpeg with --enable-mmal for hwdec on a pi
[18:25:28 CEST] <nemesit> seems to be missing -.-
[18:26:16 CEST] <furq> you can try it without but you'll be lucky if the pi can decode four h264 streams at once on the cpu
[18:26:30 CEST] <nemesit> yeah
[18:26:44 CEST] <nemesit> I'll recompile it
[18:26:51 CEST] <nemesit> thanks for the help :-)
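(A sketch of the multiple-player layout furq suggests, using mpv's --geometry=WxH+X+Y; the stream URLs and the 1280x720 screen are assumptions, not from the log:)

    # four mpv windows tiled 2x2 on a 1280x720 screen (hedged example)
    mpv --geometry=640x360+0+0     rtsp://cam1 &
    mpv --geometry=640x360+640+0   rtsp://cam2 &
    mpv --geometry=640x360+0+360   rtsp://cam3 &
    mpv --geometry=640x360+640+360 rtsp://cam4 &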
[18:31:53 CEST] <exastiken__> if  there's anyone that works on the ffmpeg wiki
[18:32:01 CEST] <exastiken__> there's a typo in the autoremove script
[18:32:08 CEST] <exastiken__> one of the libs is missing the 'l'
[18:32:56 CEST] <spicypixel> quick question on the concat demuxer for file lists, I've been setting a duration on the files, but can I set a global duration for the files outside of the file list? they're all fixed, instead of having "file 'file.mp4'" and "duration 8" on alternating lines
[18:33:09 CEST] <exastiken__> sudo apt-get autoremove autoconf automake build-essential cmake git libass-dev libfreetype6-dev libmp3lame-dev libopus-dev libsdl2-dev libtheora-dev libtool libva-dev libvdpau-dev libvorbis-dev libvpx-dev libx264-dev libx265-dev libxcb1-dev libxcb-shm0-dev ibxcb-xfixes0-dev mercurial texinfo wget zlib1g-dev
[18:33:18 CEST] <exastiken__> 'ibxcb-xfixes0-dev'
[18:40:11 CEST] <ChocolateArmpits> spicypixel, nope
[18:42:16 CEST] <spicypixel> ah
[18:42:23 CEST] <spicypixel> that's a shame
[18:43:18 CEST] <spicypixel> would be nice to be able to load a file list with a -f concat -concat_duration 8 -safe 0 -i mylist.txt
[18:44:14 CEST] <ChocolateArmpits> those file lists are more comfy to use when scripted
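(A sketch of the scripting ChocolateArmpits means, assuming fixed 8-second clips with placeholder filenames:)

    # generate the concat demuxer list, then concatenate without re-encoding
    for f in clip1.mp4 clip2.mp4 clip3.mp4; do
        printf "file '%s'\nduration 8\n" "$f"
    done > mylist.txt
    ffmpeg -f concat -safe 0 -i mylist.txt -c copy out.mp4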
[18:46:13 CEST] <spicypixel> I agree, now to see if I can work out how to fork https://github.com/PHP-FFMpeg/PHP-FFMpeg to accept a filelist for concat rather than just an array of files
[18:46:18 CEST] <spicypixel> extra work woo
[18:49:41 CEST] <spicypixel> oh that's good, they generate the filelist to a temp file
[19:22:26 CEST] <matwey> Hi, could somebody advise a working command for playing h264 using the RockChip MPP hardware?
[19:27:00 CEST] <BtbN> I doubt that hardware is supported? Unless it exposes a standard api.
[19:32:22 CEST] <matwey> There is support in master ffmpeg for mpp
[19:33:24 CEST] <furq> matwey: presumably -c:v h264_rkmpp -i foo.mp4
[19:33:39 CEST] <matwey> right, but I cannot output it to the screen
[19:34:17 CEST] <matwey> [xv @ 0x5587d10f30] Unsupported pixel format 'drm_prime', only yuv420p, uyvy422, yuyv422 are currently supported
[20:08:05 CEST] <kerio> is that a transformer or a decepticon
[20:19:40 CEST] <alexpigment> matwey: did you try adding -pix_fmt yuv420p prior to the input?
[20:20:31 CEST] <alexpigment> generally speaking, most consumer video is 4:2:0, so i would assume that would be the common case, unless something is making it 4:2:2 or 4:4:4 during capture or conversion
[20:21:30 CEST] <alexpigment> admittedly, i don't know what "drm_prime" is, but the term "drm" kinda worries me regarding your ability to play it
[20:21:52 CEST] <sfan5> drm = direct rendering manager, a component of the linux kernel
[20:21:58 CEST] <alexpigment> ah
[20:22:10 CEST] <alexpigment> well, i would assume -pix_fmt yuv420p should address that
[20:22:56 CEST] <sfan5> you might need -vf hwdownload (or similar) because hwdec
[20:23:13 CEST] <sfan5> you can also directly put that buffer onto the screen if you render via drm
[20:23:14 CEST] <sfan5> (https://github.com/mpv-player/mpv/blob/master/video/out/drm_prime.c)
[20:24:32 CEST] <matwey> alexpigment: no, I didn't. drm_prime seems to be the result of hardware h264 decoding. AFAIU it is a kind of video memory buffer handle
[20:25:04 CEST] <alexpigment> sounds good. it looks like sfan5 knows much more about it, so i'd trust his recommendations
[20:25:58 CEST] <kepstin> I wonder if mpv has a renderer that can display these directly.
[20:26:27 CEST] <matwey> mpv can render in full-screen
[20:26:51 CEST] <matwey> but I would like to try to use ffmpeg to render in window
[20:27:50 CEST] <kepstin> ffmpeg isn't a video player (ffplay is one, but it's more of a demo app, and doesn't have a great set of video renderers)
[20:27:50 CEST] <sfan5> these kinds of hwdecs don't play nice with windowing, but there should be a way to copy the decoded video from the gpu back to system ram
[20:28:31 CEST] <kepstin> i dunno much about how the gpu on that device is set up, but you might be able to turn the drm prime buffer into a texture and render it with opengl to a window.
[20:30:10 CEST] <matwey> ok, thanks.
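(A hedged sketch combining furq's decoder suggestion and sfan5's hwdownload idea: copy the drm_prime frames back to system memory so a normal window can show them; the filename and the nv12 software format are assumptions:)

    ffmpeg -c:v h264_rkmpp -i input.mp4 \
           -vf "hwdownload,format=nv12" \
           -f nut -c:v rawvideo - | ffplay -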
[20:38:48 CEST] <alone-y> hello, anyone know how to convert a picture to black & white (not grayscale!) with the lut filter?
[20:44:09 CEST] <kerio> it'll suck if you use a LUT
[20:44:13 CEST] <furq> alone-y: do you have to use a lut
[20:44:18 CEST] <furq> you could just use -pix_fmt monow
[20:44:44 CEST] <alone-y> well, i have a video with 3 colors - black, red, green
[20:44:58 CEST] <alone-y> i need to replace red and green to white.
[20:45:02 CEST] <alone-y> is it possible?
[20:45:19 CEST] <alone-y> and have black and white picture....
[20:45:34 CEST] <furq> extractplanes?
[20:46:31 CEST] <alone-y> i dont know something. durandal_1707 had advised me to use lut.
[20:46:32 CEST] <alone-y> ;)
[20:46:47 CEST] <alone-y> what is extractplanes?
[20:46:52 CEST] <kerio> furq: you misread "black" as "blue" i think
[20:46:58 CEST] <furq> i did
[20:47:03 CEST] <alone-y> no.
[20:47:03 CEST] <kerio> me too :<
[20:47:17 CEST] <alone-y> i have black as background
[20:47:21 CEST] <kerio> alone-y: when you say that you "only have black, red and green"
[20:47:24 CEST] <kerio> what does it mean
[20:47:24 CEST] <alone-y> with only two colors - red and green,
[20:47:49 CEST] <alone-y> well, imagine red and green letters on a black background?
[20:47:57 CEST] <kerio> how's your video encoded?
[20:48:00 CEST] <alone-y> i need to have only WHITE letters.
[20:48:04 CEST] <alone-y> it's zmbv
[20:48:15 CEST] <alone-y> it's screen capturing to zmbv codec.
[20:48:16 CEST] <kerio> oh ok so actually just 3 colors huh
[20:48:25 CEST] <sfan5> you might be able to piece something together using http://ffmpeg.org/ffmpeg-filters.html#geq
[20:48:28 CEST] <kerio> no antialias or anything
[20:48:36 CEST] <alone-y> well, almost yes.
[20:48:37 CEST] <alone-y> no no no!
[20:48:43 CEST] <alone-y> no antialias!
[20:48:53 CEST] <alone-y> red = 255 0 0
[20:49:01 CEST] <alone-y> green 0,255,0
[20:49:02 CEST] <furq> is it rgb or yuv
[20:49:24 CEST] <alone-y> how can i know it?
[20:49:33 CEST] <furq> what format is the picture
[20:49:46 CEST] <alone-y> i just know exactly that my screen is showing red and green.
[20:49:53 CEST] <furq> actually nvm if it's zmbv then it's rgb
[20:50:05 CEST] <alone-y> nvm = nevermind?
[20:50:08 CEST] <furq> yes
[20:50:15 CEST] <alone-y> thank you for explanation.
[20:51:18 CEST] <alone-y> well, i need lut or geq?
[20:51:43 CEST] <furq> there's probably an easier way, but lutrgb=if(eq(val\,0)\,0\,255):if(eq(val\,0)\,0\,255):if(eq(val\,0)\,0\,255)
[20:52:00 CEST] <alone-y> near this one
[20:52:01 CEST] <alone-y> https://video.stackexchange.com/questions/19696/how-to-change-every-black-pixel-to-a-special-color
[20:52:01 CEST] <furq> or er
[20:52:05 CEST] <furq> replace val with maxval there
[20:53:52 CEST] <Anonrate> Is bzip cross platform?
[20:54:03 CEST] <alone-y> furq, sorry for my stupidity, will your "code" replace red and green with white?
[20:54:34 CEST] <kerio> alone-y: it'll replace anything that's not black to white
[20:54:35 CEST] <alone-y> i need to replace val in your code with maxval, but what is maxval?
[20:54:58 CEST] <kerio> (if i understood it correctly)
[20:55:08 CEST] <alone-y> kerio, thank you
[20:55:27 CEST] <alone-y> can i put it into mpv for real-time conversion?
[20:55:44 CEST] <alone-y> some filters are possible to put in mpv..
[20:55:45 CEST] <alone-y> ...
[20:55:52 CEST] <furq> nvm that doesn't work
[20:56:01 CEST] <furq> maxval doesn't do what i thought it did
[20:56:29 CEST] <alone-y> i c ;-(
[20:56:29 CEST] <kerio> furq: what's that monow thing
[20:56:34 CEST] <furq> 1-bit rgb
[20:56:50 CEST] <kerio> there's... not enough bits for 3 channels
[20:57:00 CEST] <alone-y> well, nevermind WHICH color will replace green and red
[20:57:08 CEST] <furq> 1-bit mono, then
[20:57:10 CEST] <alone-y> orange, blue... gray.. nevermind!
[20:57:18 CEST] <furq> i meant "not yuv" and said rgb
[20:57:37 CEST] <alone-y> i just dont wish to separate where green is and where red is.
[20:58:05 CEST] <alone-y> i need to see a MONO color (nevermind which color it will be)
[20:58:14 CEST] <alone-y> white.. gray..
[20:58:59 CEST] <alone-y> the zmbv file is 24 or 32 bit.
[20:59:09 CEST] <alone-y> it's normal screen capturing file.
[20:59:12 CEST] <alone-y> regular.
[20:59:41 CEST] <alone-y> just zmbv is better than h264 when grabbing my "specific" 3-color screen ;)))
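(A hedged sketch of sfan5's geq pointer for this goal: every non-black pixel becomes white. It assumes exact 0/255 values with no antialiasing, as stated above, and placeholder filenames:)

    # any pixel with a nonzero channel becomes white; pure black stays black
    ffmpeg -i input.mkv \
      -vf "geq=r='255*gt(r(X\,Y)+g(X\,Y)+b(X\,Y)\,0)':g='255*gt(r(X\,Y)+g(X\,Y)+b(X\,Y)\,0)':b='255*gt(r(X\,Y)+g(X\,Y)+b(X\,Y)\,0)'" \
      output.mkv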
[21:14:19 CEST] <alone-y> ffmpeg -i "17-04-2018  4-30-46.mkv" -vf "lutrgb=r='if(eq(val,0),3,val)':g='if(eq(val,0),3,val)':b='if(eq(val,0),3,val)'" -vcodec zmbv 123.mkv
[21:17:11 CEST] <zamba> is there a way i can try to extract still frames from a broken mpg?
[21:17:16 CEST] <zamba> i hear the sound, but there's no video
[21:17:23 CEST] <zamba> so is there a way i can read it "raw" or something?
[21:17:58 CEST] <alone-y> convert all to png?
[21:18:42 CEST] <zamba> alone-y: how can i do that?
[21:19:03 CEST] <alone-y> 1 sec
[21:19:20 CEST] <zamba> the file is 87 MB, so there has to be something there.. and the audio plays back just fine
[21:19:33 CEST] <zamba> but i believe the file is just broken when reading out the metadata
[21:21:13 CEST] <alone-y> can't log in with TeamViewer ;(
[21:21:16 CEST] <alone-y> strange.
[21:24:15 CEST] <Anonrate> Can someone take a look at this and let me know what they think, and if I'm wasting my time or not, or if there is a better approach to achieve this.  https://github.com/FrancescoMagliocco/FFmpeg-Compile-Script.git
[21:30:35 CEST] <tuna> I read that it is possible to use hardware input with ffmpeg encoding, but I have yet to find any resources that show how. My specific issue is I need to be able to somehow pass an OGL/CUDA buffer into FFmpeg such that it can perform h264_nvenc encoding on that image.
[21:30:58 CEST] <tuna> I have spent roughly 2 days on this, any help is greatly appreciated
[21:31:55 CEST] <atomnuker> look in libavutil/hwcontext_cuda.h
[21:33:57 CEST] <atomnuker> wrap your frames in the frame format specified there, create an AVHWDeviceContext from your cuda stuff and make sure all AVFrames have had their hw device context AVBufferRefs set to it
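(A minimal C sketch of atomnuker's description, wrapping an existing CUDA context for ffmpeg's hw frames machinery; the CUcontext, dimensions, and error handling are assumptions, not from the log:)

    #include <libavutil/hwcontext.h>
    #include <libavutil/hwcontext_cuda.h>
    #include <libavutil/pixfmt.h>

    /* hedged sketch: wrap an existing CUcontext from the OGL/CUDA interop */
    static AVBufferRef *wrap_cuda_ctx(CUcontext my_cuctx, int w, int h)
    {
        AVBufferRef *dev_ref = av_hwdevice_ctx_alloc(AV_HWDEVICE_TYPE_CUDA);
        AVHWDeviceContext *dev = (AVHWDeviceContext *)dev_ref->data;
        ((AVCUDADeviceContext *)dev->hwctx)->cuda_ctx = my_cuctx;
        av_hwdevice_ctx_init(dev_ref);

        /* describe a pool of CUDA-side NV12 frames on that device */
        AVBufferRef *frames_ref = av_hwframe_ctx_alloc(dev_ref);
        AVHWFramesContext *fc = (AVHWFramesContext *)frames_ref->data;
        fc->format    = AV_PIX_FMT_CUDA;  /* frames live in GPU memory   */
        fc->sw_format = AV_PIX_FMT_NV12;  /* layout h264_nvenc accepts   */
        fc->width     = w;
        fc->height    = h;
        av_hwframe_ctx_init(frames_ref);
        return frames_ref;  /* set the encoder's hw_frames_ctx and each
                               AVFrame's hw_frames_ctx to a ref of this */
    }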
[21:39:03 CEST] <zamba> could it be that the file just doesn't contain any video? is there a way to confirm that?
[21:40:34 CEST] <zamba> MPEG-PS file format detected.
[21:40:43 CEST] <zamba> MPEG: FATAL: EOF while searching for sequence header.
[21:40:45 CEST] <zamba> Video: Cannot read properties.
[21:41:25 CEST] <zamba> the file is 87 MB big.. and the audio codec is AUDIO: 44100 Hz, 2 ch, s16le, 224.0 kbit/15.87% (ratio: 28000->176400)
[21:41:45 CEST] <zamba> could it be that this consumes the whole file? the reported length of the file is 8 minutes and 35 seconds
[21:44:44 CEST] <zamba> and there is no encryption here, so reading the raw file there should be some image information in here somewhere?
[21:46:00 CEST] <kepstin> zamba: the error: "EOF while searching for sequence header" seems to say what's going on there
[21:46:26 CEST] <kepstin> probably there is video, but it's missing the headers needed to decode it (is this a fragment of an mpg file?)
[21:46:26 CEST] <zamba> kepstin: yes, but as i said.. the file is 87 MB and roughly 8 minutes long.. that doesn't add up
[21:46:32 CEST] <zamba> kepstin: i believe so
[21:46:43 CEST] <zamba> kepstin: it has been cut short somewhere
[21:47:19 CEST] <zamba> what i really just need to try and do is see if i can capture a still frame from it to figure out if that is the video i'm looking for
[21:47:19 CEST] <tuna> What is the     AVCUDADeviceContextInternal *internal; in the AVCUDADeviceContext struct for?
[21:48:08 CEST] <zamba> kepstin: so i mean.. isn't it possible to force a codec or something? to let it try to decode it as that?
[21:48:52 CEST] <kepstin> depends on the codec, but most of them need some initialization parameters to know how to decode the stream. which is what it says it couldn't find.
[21:49:42 CEST] <zamba> kepstin: this was probably a video recorded pre 2010
[21:49:45 CEST] <zamba> kepstin: if that's a hint here
[21:51:01 CEST] <zamba> kepstin: and MPEG-PS is the container, so if there's in fact something in the video part of the file, then i should see that in the raw binary data, right?
[21:51:19 CEST] <kepstin> in all likelihood, it's probably mpeg 2 video, if it's in an mpeg-ps file. Your paste of the audio stuff doesn't include any information about the codec.
[21:52:34 CEST] <zamba> https://pastebin.com/u3t6Zg95
[21:52:37 CEST] <zamba> that's the output of mplayer
[21:53:05 CEST] Action: kepstin would have preferred the output of - *looks at the channel name* ffmpeg, but still.
[21:53:57 CEST] <kepstin> so yeah, mp2 or mp3 audio in an mpeg-ps file are typically used along with mpeg 1 video or mpeg 2 video
[21:54:12 CEST] <alexpigment> well let's put it this way. 8 minutes of 224kbps mp2 will not equal 87MB (224 kbit/s over ~515 s comes to roughly 14 MB)
[21:54:16 CEST] <kepstin> not guaranteed, of course, it could be other options.
[21:54:20 CEST] <zamba> alexpigment: exactly
[21:54:21 CEST] <atomnuker> tuna: it's internal, you can't use it
[21:54:41 CEST] <alexpigment> at any rate, i would be trying as many other players / info programs as possible
[21:54:45 CEST] <kepstin> tuna: anything labelled with internal is stuff... internal to ffmpeg. don't touch it.
[21:54:47 CEST] <alexpigment> what does mediainfo say?
[21:54:47 CEST] <atomnuker> it should be created when you create a device context
[21:54:53 CEST] <alexpigment> does tsmuxer give any info?
[21:54:57 CEST] <alexpigment> ffprobe?
[21:54:58 CEST] <alexpigment> etc
[21:55:03 CEST] <alexpigment> does VLC play it?
[21:55:03 CEST] <tuna> kepstin: figured, ok thanks
[21:55:09 CEST] <zamba> alexpigment: i'll try all of them, one sec
[21:55:20 CEST] <zamba> alexpigment: vlc plays it the way mplayer plays it.. no video, just audio
[21:55:42 CEST] <zamba> ah, mediainfo got something
[21:55:51 CEST] <zamba> it says mpeg video version 1
[21:55:55 CEST] <alexpigment> as a person who has dealt with a lot of "broken" content, there's generally always at least one tool that kinda works. i have a stable of tools for those situations
[21:56:01 CEST] <alexpigment> makes sense
[21:56:15 CEST] <alexpigment> so probably 352x240 for NTSC
[21:56:33 CEST] <alexpigment> or 352x288 for PAL i think
[21:56:43 CEST] <zamba> unfortunately i don't have tsmuxer here
[21:57:03 CEST] <alexpigment> eh, it's a free app and it's sometimes a life saver, especially if you need to demux
[21:57:18 CEST] <zamba> https://pastebin.com/YGtC7Knk
[21:57:33 CEST] <zamba> it says stream size for video 1.55 MiB
[21:57:41 CEST] <alexpigment> the pixel width is weird
[21:57:46 CEST] <alexpigment> pixel0 x pixel0
[21:57:50 CEST] <zamba> 11s 400m
[21:57:54 CEST] <zamba> ms*
[21:58:34 CEST] <alexpigment> have you tried doing ffmpeg.exe -i [input] -c:v copy -c:a an [output.mpg] ?
[21:59:01 CEST] <zamba> unknown encoder "an"
[21:59:08 CEST] <alexpigment> oh right
[21:59:08 CEST] <alexpigment> sorry
[21:59:16 CEST] <alexpigment> ignore -c:a an
[21:59:18 CEST] <alexpigment> just do -an
[21:59:31 CEST] <zamba> Output file #0 does not contain any stream
[21:59:54 CEST] <zamba> ffmpeg spews out lots of error messages
[22:00:11 CEST] <zamba> but that seems to be for the audio (aac, right?)
[22:00:24 CEST] <zamba> Input #0, mpeg, from './recup_dir.28/f21357081.mpg':
[22:00:25 CEST] <zamba>   Duration: 00:08:38.91, start: 3.143667, bitrate: 1391 kb/s
[22:00:27 CEST] <alexpigment> -an is saying to throw away the audio
[22:00:27 CEST] <zamba>     Stream #0:0[0x1e0]: Audio: aac (SSR), 5.0, fltp, 1608 kb/s
[22:00:30 CEST] <zamba>     Stream #0:1[0x1c0]: Audio: mp2, 44100 Hz, stereo, s16p, 224 kb/s
[22:00:47 CEST] <zamba> is the problem the stream #0:0 that is incorrectly identified as audio?
[22:00:48 CEST] <alexpigment> this file is so confusing
[22:00:53 CEST] <alexpigment> yeah, i'd assume so
[22:00:53 CEST] <zamba> good, good :)
[22:00:59 CEST] <zamba> i'm not the only one, then
[22:01:02 CEST] <alexpigment> 1608kbps seems about right for mpeg-1 video
[22:01:23 CEST] <alexpigment> anyway, there are other freeware tools out there that might handle it differently
[22:01:32 CEST] <alexpigment> avidemux *might* handle it (i wouldn't count on it)
[22:01:37 CEST] <alexpigment> again, tsmuxer
[22:02:09 CEST] <alexpigment> i use videoredo a lot (it's got a free trial), and there's a "quickstream fix" option which does some sort of magic whatever to fix broken videos
[22:02:37 CEST] <alexpigment> ultimately, ffmpeg doesn't sound like the right tool
[22:10:18 CEST] <zamba> videoredo says "video program stream not found"
[22:10:57 CEST] <zamba> i believe some raw editing of the binary data here is needed
[22:11:20 CEST] <zamba> how does the mpeg-ps container look? how often does it insert key frames?
[22:12:08 CEST] <kepstin> if you can provide a sample of the broken file in a ticket, it's possible that someone who knows how the format works can improve the detection in ffmpeg.
[22:12:22 CEST] <kepstin> (but maybe not if the file is just completely broken)
[22:13:03 CEST] <zamba> i just find it strange that the audio stream is completely intact
[22:14:33 CEST] <kepstin> mp2/mp3 doesn't really have any stream-wide headers, you can start playing it from any frame.
[22:14:51 CEST] <zamba> kepstin: exactly.. and the video is sequential alongside the audio, right?
[22:15:03 CEST] <kepstin> video is very different from audio
[22:15:06 CEST] <zamba> kepstin: so if every frame of audio is correct, then in theory every frame of video should be intact as well?
[22:15:44 CEST] <kepstin> unlike mp2/mp3, you need some initialization info to set up the video decoder, and you need to start decoding at a keyframe rather than just any old frame.
[22:16:03 CEST] <kepstin> in mpeg-ts, the initialization info is usually periodically repeated so you can resync with the stream
[22:16:10 CEST] <kepstin> but i dunno about how mpeg-ps works there
[22:23:08 CEST] <Yukkuri> hi, is there a way to analyze & lint an mp4 container with h264/aac payload for correct timestamping?
[22:25:43 CEST] <Yukkuri> i am implementing an RTMP server which takes a live rtmp stream with h264/aac payload and remuxes it into mp4 with the layout ((ftyp moov) (moof mdat) [moof mdat]+) here: https://home.eientei.org but playback feels a bit stuttery
[22:26:02 CEST] <Yukkuri> i suspect i'm doing incorrect timestamping, but have no idea how to validate
[22:27:32 CEST] <Yukkuri> h264 NALUs look way too complex for me to fully comprehend how they actually work
[22:28:02 CEST] <Yukkuri> some dumbed-down version of how to distinguish one frame from another would be of help as well
[22:28:25 CEST] <DHE> there is an API called a Parser that may assist you here
[22:29:20 CEST] <Yukkuri> DHE: of libav?
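(Yes, libavcodec. A hedged C sketch of the parser API DHE mentions, splitting an Annex B h264 byte stream into whole frames; the input buffer is a hypothetical placeholder:)

    #include <libavcodec/avcodec.h>

    /* hedged sketch: delimit complete frames (access units) in a raw
     * H.264 byte stream without fully decoding it                     */
    static void split_frames(const uint8_t *in_data, int in_size)
    {
        AVCodecParserContext *parser = av_parser_init(AV_CODEC_ID_H264);
        AVCodecContext *pctx = avcodec_alloc_context3(
            avcodec_find_decoder(AV_CODEC_ID_H264));
        while (in_size > 0) {
            uint8_t *frame_data;   /* set by the parser                  */
            int frame_size;        /* nonzero once a full frame is found */
            int used = av_parser_parse2(parser, pctx,
                                        &frame_data, &frame_size,
                                        in_data, in_size,
                                        AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
            in_data += used;
            in_size -= used;
            if (frame_size > 0) {
                /* frame_data/frame_size delimit one access unit;
                 * parser->pts and parser->dts carry its timestamps */
            }
        }
        av_parser_close(parser);
    }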
[22:55:48 CEST] <tuna> atomnuker: so I created an AVCUDADeviceContext for my nvidia device, assigned a pointer to it to the AVCodecContext's and AVFrame's hw_frames_ctx.....and I registered my texture as a cuda resource, now how do i tell ffmpeg to operate on that resource?
[22:56:55 CEST] <tuna> (as a note, the current way I have it all set up is glReadPixels() to get the ogl buffer into ram and then I pass it to ffmpeg to do the encoding, whether it be hardware or software based encoding)
[22:59:10 CEST] <atomnuker> tuna: right, I forgot how cuda worked
[22:59:29 CEST] <atomnuker> avframe->data[i] should contain your resources
[23:00:31 CEST] <atomnuker> a cuda pointer or whatever it was it used
[23:00:37 CEST] <atomnuker> one for each plane
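(Concretely, a hedged sketch for NV12, which has two planes; the device pointers and pitch are hypothetical:)

    #include <stdint.h>
    #include <libavutil/frame.h>

    /* hedged sketch: each data[i] carries one plane's CUdeviceptr,
     * cast to the uint8_t* that AVFrame expects                     */
    static void attach_cuda_planes(AVFrame *frame, uintptr_t y_devptr,
                                   uintptr_t uv_devptr, int pitch)
    {
        frame->data[0]     = (uint8_t *)y_devptr;   /* luma plane          */
        frame->data[1]     = (uint8_t *)uv_devptr;  /* interleaved chroma  */
        frame->linesize[0] = pitch;                 /* bytes per row; both */
        frame->linesize[1] = pitch;                 /* planes share pitch  */
    }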
[23:09:02 CEST] <tuna> atomnuker: plane here means each image in a texture buffer?
[23:09:27 CEST] <tuna> (I am still learning this OGL/ffmpeg/cuda stuff)
[23:14:26 CEST] <atomnuker> yes
[23:14:34 CEST] <atomnuker> BtbN could tell you more I think
[23:18:47 CEST] <zamba> alexpigment: you still around?
[23:19:44 CEST] <tuna> atomnuker: I am worried that i have something wrong, I cannot tell what cudaGraphicsResource_t is as a base type...and the avframe->data[0] wants a uint8...which seems awfully small for a pointer
[23:26:25 CEST] <alexpigment> zamba: yeah, what's up?
[23:26:56 CEST] <zamba> alexpigment: i'm just very curious if ffprobe isn't initially very confused
[23:27:22 CEST] <zamba> alexpigment: i believe it thinks the video stream is aac audio
[23:27:32 CEST] <alexpigment> yeah, i'd say it's confused
[23:27:45 CEST] <alexpigment> no idea why though
[23:27:59 CEST] <alexpigment> seems much more likely that it's mpeg-1 video with mp2 audio
[23:28:04 CEST] <zamba> exactly
[23:28:08 CEST] <zamba> so is there a way to force the detection?
[23:28:12 CEST] <zamba> "detection"
[23:29:39 CEST] <zamba> basically: do your very best to interpret this as mpeg-1
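(A hedged sketch of that "force it" idea: skip the broken MPEG-PS container and treat the bytes as a raw MPEG video elementary stream, so ffmpeg hunts for sequence/picture start codes itself and dumps whatever decodes; the filename and frame rate are guesses:)

    ffmpeg -err_detect ignore_err -f mpegvideo -framerate 25 \
           -i f21357081.mpg frame_%04d.png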
[23:31:57 CEST] <zamba> [mpeg @ 0x1a98c40] decoding for stream 0 failed
[23:31:59 CEST] <zamba> [mpeg @ 0x1a98c40] Could not find codec parameters for stream 0 (Audio: aac (SSR), 5.0, fltp, 1608 kb/s): unspecified sample rate
[23:32:01 CEST] <zamba> also have this
[00:00:00 CEST] --- Fri Apr 20 2018

