[Ffmpeg-devel-irc] ffmpeg.log.20190806
burek
burek021 at gmail.com
Thu Aug 22 15:07:04 EEST 2019
[00:48:30 CEST] <Henry151> hey another , thanks for the helpful link. This is what I'm getting so far: https://bpaste.net/show/rwlE and the outputted mp4 does not appear to have the subtitles burned into it.
[00:48:48 CEST] <Henry151> any further guidance on what I might be screwing up?
[00:53:27 CEST] <Henry151> i ultimately want to script this similarly to the way i scripted this conversion of shn to flac: https://termbin.com/71nz so that it will go through my whole collection of .mkv videos and convert them each to a .mp4 with the english subs burned into it. I will be needing to figure out how to programmatically determine which subtitle track is the english subtitles, but i feel like i'm getting closer here..
[00:53:33 CEST] <Henry151> I want to figure out how to do it to *one* file first, then I can start struggling with making it work for a whole collection.
[01:00:43 CEST] <another> Henry151: your subtitles are 1920x1080 while your video is 1280x688
[04:49:38 CEST] <YellowOnion> How can I get seek video output without seeking audio output?
[04:51:41 CEST] <Henry151> another: can you give me guidance on how to deal with that discrepancy?
[04:52:41 CEST] <Henry151> please? :)
[04:52:54 CEST] <YellowOnion> I get frame accurate results with "-i blah -ss 10 output.ext", but with "-i video -i audio -ss 10 output.ext" it seeks both the audio and video right? "-ss 10 -i video .." isn't frame accurate and so I get sync issues...I'm confused on the solution to this.
[04:54:28 CEST] <YellowOnion> Actually I don't think I get sync issues, but I lose a few seconds of video because of lack of a key frame...
[04:54:36 CEST] <furq> Henry151: [0:4][0:v]scale2ref[s][v];[v][s]overlay
[04:55:44 CEST] <furq> YellowOnion: -ss before -i should be frame accurate if you're reencoding
[04:57:05 CEST] <furq> if you're copying then it's normal to have the audio come in before you get a keyframe
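(For reference, the two seek placements being discussed look roughly like this; the filenames and codec choice are placeholders, not from the log:)

```shell
# Input seeking (-ss before -i): fast, and frame-accurate when re-encoding.
ffmpeg -ss 10 -i input.mp4 -c:v libx264 out_input_seek.mp4

# Output seeking (-ss after -i): decodes everything and discards frames
# until the timestamp is reached, so it is slower but applies uniformly
# to all input streams.
ffmpeg -i input.mp4 -ss 10 -c:v libx264 out_output_seek.mp4
```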
[04:57:43 CEST] <YellowOnion> I'm reencoding the video, but I'm missing a few frames of video.
[04:58:52 CEST] <furq> is that with -ss before or after -i
[04:59:08 CEST] <YellowOnion> currently it's before.
[04:59:14 CEST] <furq> did you try with it after
[04:59:36 CEST] <YellowOnion> I don't want to seek the audio...
[04:59:44 CEST] <YellowOnion> can I do -ss:v ?
[04:59:49 CEST] <furq> oh
[05:00:19 CEST] <furq> no -ss after -i just discards everything going to the output file until an input hits that timestamp
[05:00:31 CEST] <furq> it doesn't actually seek the inputs as such
[05:01:12 CEST] <YellowOnion> hmm, maybe the best action is to build the container afterwards?
[05:02:24 CEST] <YellowOnion> I'm going to use CoreAudio to encode the audio externally.
[05:02:46 CEST] <Henry151> hey furq , thanks man.
[05:03:44 CEST] <furq> YellowOnion: ffmpeg has audiotoolbox encoding support on osx
[05:03:55 CEST] <YellowOnion> I'm on windows.
[05:04:04 CEST] <furq> oh
[05:04:47 CEST] <YellowOnion> Wait..why doesn't that work on Windows?
[05:05:48 CEST] <Henry151> ffmpeg -i \[Omar\ Hidan\]\ Spirited\ Away\ \[BD\ 720p\ x264\ AAC\ Sub\(Ara\,Jap\,Eng\,Fre\)\]\[4B5D1CE4\].mkv -filter_complex "[0:4][0:v]scale2ref[s][v];[v][s]overlay" -map "[v]" -map 0:a \[Omar\ Hidan\]\ Spirited\ Away\ \[BD\ 720p\ x264\ AAC\ Sub\(Ara\,Jap\,Eng\,Fre\)\]\[4B5D1CE4\].mp4
[05:06:33 CEST] <YellowOnion> wow...that's a lot of slashes.
[05:06:34 CEST] <Henry151> sorry for the ugly paste.. this is telling me "output with label 'v' does not exist in any defined filter graph, or was already used elsewhere
[05:07:02 CEST] <furq> "[0:4][0:v]scale2ref[s][v];[v][s]overlay[o]" -map "[o]" -map 0:a
[05:07:09 CEST] <Henry151> yeah it's ugly. I should have reduced it to something like "ffmpeg -i video.mkv -filter_complex ...etc"
[05:07:35 CEST] <Henry151> thanks again furq :)
[05:08:47 CEST] <furq> does whatever these are being played back on not support mp4 subtitles
[05:09:06 CEST] <furq> because if it does then it would probably be a lot less hassle to do that
[05:09:24 CEST] <Henry151> i want them to be playable "in browser" so that my friends can stream them from my media server
[05:09:51 CEST] <Henry151> i have a nextcloud server and an ampache server so either is fine but i want people to be able to click on it in the browser and just enjoy, without having to download
[05:10:20 CEST] <Henry151> both of them use the media player that is built in to your browser, if i understand correctly.
[05:11:09 CEST] <YellowOnion> furq, there's a 2 year old "patch" to get this working on windows...I wonder why it hasn't been included on mainline...
[05:11:19 CEST] <Henry151> https://romp.network/nextcloud/index.php/s/wybD4BemiDRwPXt
[05:11:28 CEST] <Henry151> this is where i want to "serve them from"
[05:12:26 CEST] <Henry151> you can see there's a built-in media player thing that allows you to click on a video and click "play" but it only works with a very limited number of video types. I'm trying to create a script that will automatically convert any uploaded videos from their current format to something that will stream nicely from there.
[05:12:39 CEST] <another> Henry151: you might want to add -c:a copy -movflags +faststart
[05:12:57 CEST] <Henry151> another: thanks, what does that do?
[05:13:18 CEST] <another> copy the audio stream and make the mp4 streamable
[05:13:30 CEST] <Jonno_FTW> can I have an audio input stream from an mpd output?
[05:14:00 CEST] <Henry151> another: ah, so how i am trying it now will skip the audio stream and leave me with an un-streamable mp4?
[05:15:05 CEST] <Jonno_FTW> considering that the machine has no physical audio outputs since it's a vm
[05:19:15 CEST] <Henry151> another: and, do i add that at the end of my command, or does it have to be somewhere specific within the command?
[05:19:54 CEST] <YellowOnion> Jonno_FTW, https://wiki.archlinux.org/index.php/Advanced_Linux_Sound_Architecture#Virtual_sound_device_using_snd-aloop
[05:22:43 CEST] <another> Henry151: without -c:a copy, ffmpeg will reencode the audio.
[05:23:11 CEST] <another> without faststart the browser will have to download the whole file before it can be played
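(A minimal sketch of another's suggestion; the filenames are placeholders, and in Henry151's case these flags would be added to the overlay command rather than run standalone:)

```shell
# -c:a copy passes the audio stream through unchanged;
# -movflags +faststart moves the moov atom to the front of the mp4
# so browsers can start playback before downloading the whole file.
ffmpeg -i input.mkv -c:a copy -movflags +faststart output.mp4
```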
[05:24:59 CEST] <Jonno_FTW> YellowOnion: thanks
[05:25:30 CEST] <YellowOnion> Henry151, What other formats are supported?
[05:26:32 CEST] <furq> another: it probably won't any more
[05:26:42 CEST] <furq> most browsers will range request the moov atom from the end nowadays
[05:26:49 CEST] <furq> it's still a bit slower than setting faststart though
[05:29:18 CEST] <another> furq: i thought so, but better be on the safe side
[05:29:23 CEST] <YellowOnion> I'm still confused why mp4 wasn't designed to be live streamed...
[05:31:09 CEST] <YellowOnion> They must have hated internet radio...
[05:32:07 CEST] <another> mp4 is usually not used for internet radio
[05:33:37 CEST] <another> you could use fragmented mp4 for that though
[05:35:05 CEST] <Henry151> i think webm is also supported
[05:35:38 CEST] <Henry151> i don't exactly need to be able to "broadcast" this, just to allow people to stream it from that nextcloud interface linked above
[05:36:32 CEST] <Henry151> i understand that mp4 and webm are the two most likely to be able to play in "any browser" ... my limited understanding is that the nextcloud app contains no media player at all, and just relies on whatever media player you have built in to your web browser.
[05:53:35 CEST] <YellowOnion> well I got a correct file this time within the 8MB limit: https://cdn.discordapp.com/attachments/198027456546996225/608145414511656960/23-32.mp4
[09:18:37 CEST] <Jonno_FTW> YellowOnion: I did it without using alsa
[09:39:11 CEST] <YellowOnion> Jonno_FTW, ahh that's good... let me guess, with a named pipe?
[09:59:23 CEST] <Jonno_FTW> YellowOnion: I just used http output in mpd
[11:44:54 CEST] <lofo> Hi! I have a series of raw pixel buffers i have to pull individually. For now i convert them into PNGs and input them into FFmpeg but i feel that the PNG conversion isn't necessary am i right ?
[11:45:54 CEST] <lofo> is there a way to send pixel buffers directly to FFmpeg. Without PNG conversion and without disk IO
[11:46:31 CEST] <BtbN> If you're using the libraries directly, of course
[11:46:52 CEST] <BtbN> If your data is in a well defined pixel format, you might even be able to pipe it in via stdin
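(A sketch of the stdin approach BtbN describes; the producer command, frame size, and frame rate are placeholders for lofo's actual setup:)

```shell
# Feed raw RGBA frames (concatenated, no headers) to ffmpeg on stdin.
# With -f rawvideo, ffmpeg needs the pixel format, frame size, and rate
# up front to know how to slice the byte stream into frames.
your-frame-producer | ffmpeg -f rawvideo -pixel_format rgba \
    -video_size 1280x720 -framerate 30 -i - \
    -c:v libx264 -pix_fmt yuv420p out.mp4
```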
[11:48:19 CEST] <lofo> I run ffmpeg through MobileFFmpeg but i might be able to use the FFmpeg libs directly
[11:49:02 CEST] <BtbN> You're gonna have to talk to whoever made "MobileFFmpeg" then
[11:49:06 CEST] <lofo> What should i look into the documentation to do such thing ?
[11:49:18 CEST] <lofo> if i use the lib directly
[11:49:38 CEST] <BtbN> If you're using the libraries there is nothing special really. The usual flow is you call a decoder, which gives you raw images.
[11:49:47 CEST] <BtbN> You already have raw images, so you just don't have a decoder.
[11:50:55 CEST] <lofo> Yeah but i would need to "package" the raw images in some way to input it into FFmpeg.
[11:51:17 CEST] <lofo> For now i only have a function that outputs me a raw image when called
[11:52:16 CEST] <BtbN> You're gonna need to be more specific with "raw image"
[11:52:30 CEST] <BtbN> Is it a char* with yuv420p/bgr0/bgra?
[11:52:55 CEST] <lofo> like an array of bytes containing RGBA data on some arbitrary length
[11:53:04 CEST] <lofo> yup something like that
[11:53:27 CEST] <BtbN> That's perfectly fine and can probably be passed in untouched and without extra copy
[11:53:54 CEST] <BtbN> Keep in mind though that libav* are C libraries. Your stuff does not sound like it's C/C++ or anything that could call C directly.
[11:54:17 CEST] <lofo> its Swift and i can bridge with C quite easily
[11:56:12 CEST] <lofo> what i tried to ask was: how am i gonna tell FFmpeg how to fetch the next buffer when it needs it ?
[11:57:01 CEST] <BtbN> You don't. The API is push-based. You send in new data as it becomes available.
[11:59:17 CEST] <lofo> oh i see. I think i have all i need for now. Thank you BtbN
[11:59:59 CEST] <lofo> i was confused because i used FFMpeg in a command-line way, which is more pull-based
[15:03:27 CEST] <Henry151> https://termbin.com/7l5x hey y'all
[15:03:52 CEST] <Henry151> I'm working on this and I want to make it automatically select the english subtitle track, any guidance on that?
[15:34:26 CEST] <another> Henry151: you can select all streams of a specific language with -map 0:m:language:eng
[15:35:11 CEST] <another> unfortunately it's not possible to only select subtitles of a certain language
[15:35:55 CEST] <another> however as a workaround you could first extract all subtitles and then filter them by lang
[15:37:12 CEST] <relaxed> Henry151: This will give you the stream index of english subtitles, ffmpeg -i INPUT 2>&1|awk '/\(eng\).*Sub/{sub(/.*#/,"");sub(/\(.*/,""); print $0}'
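(For illustration, here is relaxed's awk snippet run against a hypothetical sample line of `ffmpeg -i INPUT` stderr; the stream details are made up:)

```shell
# One plausible stream line as printed by `ffmpeg -i INPUT`:
line='    Stream #0:4(eng): Subtitle: subrip (default)'
# Match lines with "(eng)" followed by "Sub", strip everything up to '#',
# then everything from '(' on, leaving just the file:stream index:
echo "$line" | awk '/\(eng\).*Sub/{sub(/.*#/,"");sub(/\(.*/,""); print $0}'
# prints: 0:4
```

The resulting index (here `0:4`) can then be fed to `-map` in the burn-in command.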
[15:37:20 CEST] <another> ffmpeg -i input.mkv -c copy -map 0:s subs.mkv; ffmpeg -i input.mkv -i subs.mkv -c copy -map 0:v -map 0:a -map 1:m:language:eng out.mp4
[16:04:19 CEST] <another> relaxed: there was a bug in libaom which broke single pass crf encode. it was recently fixed: https://bugs.chromium.org/p/aomedia/issues/detail?id=2451 would be great if you could update your builds :)
[16:55:51 CEST] <relaxed> another: so this went in Aug 6?
[16:57:10 CEST] <relaxed> er, today :^)
[16:57:18 CEST] <mantas322> Hello everyone
[16:57:27 CEST] <mantas322> I have a stupid question I'd like to ask
[16:57:50 CEST] <mantas322> Is HML5 a specific type of MP4 variety which is most optimal for web friendliness?
[16:58:00 CEST] <mantas322> someone correct my confusion please.
[17:00:21 CEST] <kepstin> I've never heard of "HML5" before, are you sure that's not just a typo of "HTML5"?
[17:00:29 CEST] <another> relaxed: yep. today
[17:03:07 CEST] <mantas322> I checked again
[17:03:13 CEST] <mantas322> it indeed says HML5 Video
[17:03:21 CEST] <mantas322> perhaps thats just shortened
[17:03:25 CEST] <mantas322> to confuse me
[17:03:28 CEST] <kepstin> where does it say that?
[17:03:42 CEST] <kepstin> sounds like a typo to me :/
[17:03:51 CEST] <mantas322> WordPress plugin called SLider Revolution
[17:04:03 CEST] <relaxed> another: and I just uploaded 4.2 release, hah. I'll start rebuilding release and then git master. Thanks for the heads-up
[17:05:08 CEST] <mantas322> see here https://i.imgur.com/gtz3HiI.png
[17:05:17 CEST] <mantas322> when selected, the source file is an mp4
[17:06:12 CEST] <another> relaxed: 4.2 is out? Whoohoo!
[17:06:53 CEST] <another> mantas322: that's definitely a typo. should be HTML5
[17:08:00 CEST] <kepstin> and by "HTML5 video" it probably means "video that can be played the the browser's builtin video support", which is something that varies from browser to browser :/
[17:10:23 CEST] <mantas322> yeah
[17:10:37 CEST] <mantas322> I just assumed HML5 was something new and/or different
[17:13:59 CEST] <mantas322> welp thats it for me
[17:14:00 CEST] <mantas322> thanks.
[17:14:25 CEST] <mantas322> I'll be back later to ask about ffmpeg params for converting stupid nikon video formats
[17:14:29 CEST] <mantas322> mk whatever
[17:14:30 CEST] <kepstin> apply a little occam's razor here - is it more likely that an author of a random wp plugin is on the cutting edge of designing new video formats, or typoed 'HTML5' :)
[17:14:52 CEST] <mantas322> this is a pretty popular paid plugin with tonnes of ongoing updates
[17:15:09 CEST] <mantas322> so I would assume he's starting a trend of calling HTML5 HML5
[17:50:20 CEST] <relaxed> another: 4.2 amd64 release build has the aom fix now
[18:04:56 CEST] <machtl> is there any way i can set some dynamic info like mpegts metadata ? i want to somehow route the "video name" i am playing to ffplay and grab it there from the output. but i am concatenating a lot of videos and want some info at every beginning of the video at ffplay
[18:07:01 CEST] <another> relaxed: great! thanks
[18:12:18 CEST] <Henry151> relaxed: thanks for that snippet, i believe that will work for me, can't try it until i get home later but i sure appreciate it
[18:36:00 CEST] <blizzow> What's the ffmpeg equivalent of this:
[18:36:20 CEST] <blizzow> cvlc v4l2:///dev/video0:chroma=mjpg --v4l2-chroma mjpg --v4l2-width 1280 --v4l2-height 720 --sout '#transcode{vcodec=h264,fps=30,width=1280,height=720}:proto{rtsp-tcp}:rtp{access=tcp,sdp=rtsp://:8888/live.sdp}'
[18:37:38 CEST] <blizzow> Also, is it possible to stream the mjpeg 1280x720 yuv420p format straight to rtsp://
[21:47:26 CEST] <Dotz0cat> hey i am trying to take pictures on a webcam with ffmpeg. the first picture i take always is not right. and for how i want to use the webcam this is not good
[21:51:52 CEST] <piggz_> durandal_1707: kepstin; thx for the help last night, that did the trick
[21:52:28 CEST] <Dotz0cat> example that i took a few minutes ago: https://imgur.com/a/ojvRK0Q
[21:53:12 CEST] <klaxa> is that maybe an issue of the camera(-driver)?
[21:55:03 CEST] <kepstin> yeah, many webcams have a few bad frames at the start from firmware issues or auto-exposure errors or whatnot
[21:56:31 CEST] <kepstin> you could do something like -vf trim=start_frame=2 to make it throw out the first couple of frames if you know they're gonna be bad.
[21:58:00 CEST] <Dotz0cat> yeah even more with the webcam i am using. it is a microsoft lifecam cinema. command here: https://pastebin.com/jqMWyccM
[21:58:41 CEST] <Dotz0cat> i can try adding that
[22:03:15 CEST] <Dotz0cat> thanks that worked
[22:03:55 CEST] <Dotz0cat> now i have to make a script that runs it every 2 seconds
[22:05:40 CEST] <Dotz0cat> also have to set up udev rules and stuff so the webcam can always get certain settings and need to do something about my computer going to sleep
[22:06:30 CEST] <kepstin> it might be better to try to get a single ffmpeg command running continuously that saves a frame every couple seconds, rather than start/stop it all the time
[22:06:44 CEST] <kepstin> would avoid the issue with the camera completely
[22:09:36 CEST] <Dotz0cat> i am planning to use this to take timelapses
[22:10:58 CEST] <kepstin> if you just want 1 frame every 2 seconds, do something like `ffmpeg -f v4l2 -video_size 1280x720 -i /dev/video0 -vf fps=1/2 -c png /path/to/cap%03d.png`
[22:11:29 CEST] <kepstin> (or you can even have ffmpeg encode the timelapse video directly if you like, rather than saving pngs)
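(A sketch of encoding the timelapse directly, as kepstin mentions; the device path, frame size, and playback rate are placeholders:)

```shell
# Grab one frame every 2 seconds and build a 30 fps timelapse in one pass.
# setpts=N/(30*TB) restamps the sampled frames so frame N gets timestamp
# N/30 seconds, i.e. the output plays back at 30 fps.
ffmpeg -f v4l2 -video_size 1280x720 -i /dev/video0 \
    -vf "fps=1/2,setpts=N/(30*TB)" -r 30 -c:v libx264 timelapse.mp4
```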
[22:25:48 CEST] <Dotz0cat> would this serve the purpose: https://pastebin.com/vzrhD3TD
[22:28:28 CEST] <relaxed> day="$(date +%D)"
[22:32:23 CEST] <relaxed> Dotz0cat: or dump them all into one dir and have ffmpeg's output look like this, "$(date --rfc-3339=seconds)"-cap%03d.png
[22:33:58 CEST] <relaxed> so ffmpeg's output would look like this, 2019-08-06 16:06:47-04:00-cap001.png
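(A sketch of relaxed's naming idea, assuming GNU date; prefixing each capture run's output pattern with an RFC 3339 timestamp lets every run dump into one shared directory without collisions:)

```shell
# Timestamp taken once when the run starts; ffmpeg then numbers the
# frames within the run via the %03d pattern.
prefix="$(date --rfc-3339=seconds)"
echo "${prefix}-cap%03d.png"
# e.g. 2019-08-06 16:06:47-04:00-cap%03d.png
```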
[23:01:24 CEST] <sfs> hello
[23:01:40 CEST] <Dotz0cat> hey
[23:01:41 CEST] <sfs> are there any problems with ffmpeg and VSX on big-endian POWER?
[23:02:00 CEST] <sfs> i ask because in configure, there is
[23:02:00 CEST] <sfs> 5535 if ! enabled ppc64 || enabled bigendian; then
[23:02:00 CEST] <sfs> 5536 disable vsx
[23:02:29 CEST] <sfs> i was wondering why VSX was getting disabled even though i passed --enable-vsx
[00:00:10 CEST] --- Wed Aug 7 2019