[Ffmpeg-devel-irc] ffmpeg.log.20190117

burek burek021 at gmail.com
Fri Jan 18 03:05:02 EET 2019


[00:04:41 CET] <DHE> Most C structures are fair game for direct editing, especially if the doxygen docs describe them as such. av_opt_* (and by extension, av_dict_*) are intended for options set externally (like the ffmpeg commandline) or codec-specific values that are not expressed in the standard structures.
[00:11:44 CET] <zerodefect> Sorry, just seen response.
[00:11:45 CET] <zerodefect>  Ok. Thanks for clarification. This is something that has concerned me, so I've explored it a bit more.  You have brought me some relief :D
[00:13:20 CET] <zerodefect> @DHE - so do you ever use av_opt_set_xxx options other than to set codec-specific values?
[00:38:18 CET] <DHE> zerodefect: no.  and even then I use AVDictionary instead
[00:39:25 CET] <zerodefect> Not considered using AVDictionary. What advantage does it give you? Easier somehow?
[00:40:27 CET] <DHE> it's just how I started doing it. All the avcodec and avformat "open" functions take an AVDictionary, so when I learned how to use the API that was the route I started down first.
[00:44:22 CET] <zerodefect> Makes sense.
[01:17:06 CET] <semeion> I am trying to run this command: http://ix.io/1yv7
[01:19:23 CET] <semeion> but have something wrong, because it return an error: http://ix.io/1yv9
[01:19:56 CET] <semeion> " Unsupported input format: bgr0"
[01:21:34 CET] <semeion> so, changing the filter to -filter:v format=nv12,hwupload_cuda,scale_npp=w=1280:h=720:format=nv12:interp_algo=lanczos,hwdownload,format=nv12 it works, but I don't take advantage of GPU conversion between bgr0 and nv12
[01:22:36 CET] <semeion> I tried -filter:v hwupload_cuda,scale_npp=w=1280:h=720:format=bgr0:interp_algo=lanczos,hwdownload,format=nv12 which could work, but doesn't
[01:23:30 CET] <semeion> can someone help me?
[01:23:40 CET] <semeion> please
[01:24:04 CET] <fling> [nut @ 0x5583d85a76c0] frame size > 2max_distance and no checksum
[01:24:12 CET] <fling> What does this mean? ^ I'm getting these a lot
[01:28:30 CET] <semeion> seems like i need convert from bgr0 to nv12 before start the scale_npp filter, but how to convert it using the GPU/CUDA?
[01:29:02 CET] <semeion> and/or how to make the scale_npp convert it?
[01:33:01 CET] <Hello71> fling: what version
[01:48:40 CET] <fling> Hello71: 4.1
[03:23:34 CET] <Zexaron> hello
[03:23:59 CET] <Zexaron> is it possible to software-transform 170° wide fisheye lens video into something more standard looking?
[03:24:06 CET] <Zexaron> and to be a playable video
[03:35:17 CET] <friendofafriend> goog
[03:35:22 CET] <friendofafriend> Sorry.
[03:53:31 CET] <Zexaron> ah, later, sleeptime
[04:00:49 CET] <lovetruth> hello people :)
[04:01:11 CET] <lovetruth> do you know if it's possible to compile ffmpeg with NDI and nvenc?...
[05:23:47 CET] <friendofafriend> Howdy, all.  I'm using ffmpeg to encode opus with the "-application voip" flag.  How can I find out what options are actually being used?
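One way to answer this (a sketch, assuming the libopus encoder is the one in use, and with placeholder filenames) is to ask ffmpeg to list the encoder's private options and defaults, and to raise the log level during an encode so the applied values are printed:

```shell
# List the opus encoder's private options (including -application) and defaults
ffmpeg -h encoder=libopus

# Re-run the encode with verbose logging to see which values are applied
ffmpeg -v debug -i input.wav -c:a libopus -application voip out.opus
```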
[06:01:12 CET] <N0BOX> Would it be outside the scope of this channel to ask how to use ffmpeg to fix broken/missing metadata in a flac file?
[06:04:27 CET] <fling> N0BOX: there is -metadata in the manual, but there are much better apps for tagging, like beets
[06:05:53 CET] <N0BOX> Yeah, I would probably be lazy and use a GUI app on windows to actually fix the metadata if it is possible, but the problem is that the flac file is not showing its bitrate or duration
[06:06:24 CET] <N0BOX> at least, those two specs don't show in foobar2000
[06:07:14 CET] <N0BOX> some apps simply won't play the file, and none of the converters I have tried will convert it to some other format
[06:07:59 CET] <N0BOX> foobar2k and vlc don't mind playing it on windows and Onkyo HFPlayer will play it on Android, but I really want it to play in HiBy Music on android
[06:08:56 CET] <N0BOX> so, I was hoping there might be some way of having ffmpeg analyse the file and come up with its proper duration and bitrate for me to somehow re-tag those bits with some other app
[06:09:12 CET] <fling> N0BOX: try repackaging it with `ffmpeg -i bad.flac -c copy fixed.flac`
[06:09:30 CET] <fling> N0BOX: it would be a good idea to redownload the file if it doesn't get fixed.
[06:10:49 CET] <fling> N0BOX: you should really look at beets if you are into tagging or music library etc
[06:10:51 CET] <N0BOX> interesting:  size=   26526kB time=00:03:10.95 bitrate=1138.0kbits/s speed=1.61e+03x
[06:11:13 CET] <N0BOX> but the 'fixed' flac still lacks the duration and bitrate xD
[06:11:23 CET] <N0BOX> but that at least told me the info I needed to know
[06:11:33 CET] <fling> N0BOX: then set it by hand or let beets set it for you haha :D
[06:11:40 CET] <N0BOX> yep :D
[06:11:48 CET] <fling> N0BOX: we are talking about two particular tags, right?
[06:12:01 CET] <fling> N0BOX: they are missing in the file but you want them to be there?
[06:12:42 CET] <N0BOX> I assume they are metadata tags, but basically in foobar2000 in my playlist window each song has a bitrate and a duration except for this one song that has trouble in other players
[06:12:55 CET] <furq> neither of those are metadata tags
[06:13:16 CET] <furq> it sounds like the streaminfo block is broken
[06:13:21 CET] <N0BOX> ahh, I was kinda afraid of that
[06:13:24 CET] <furq> try ffmpeg again without -c copy
[06:14:31 CET] <N0BOX> ahh, nice, the 'fixed' flac has those bits, now
[06:15:15 CET] <fling> furq: thanks!
[06:15:30 CET] Action: fling reads on streaminfo
[06:20:08 CET] <N0BOX> and, the verdict is in: "It Works!"
[06:20:43 CET] <N0BOX> Thanks for the help, I doubt I would have figured it out on my own with Google telling me all the incorrect answers
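Summarizing the fix from this thread as a sketch (filenames are placeholders): a remux keeps the broken STREAMINFO block, while a decode/re-encode, which is lossless for FLAC, rebuilds it.

```shell
# Remux only: copies the bitstream as-is, so a broken STREAMINFO survives
ffmpeg -i bad.flac -c copy remuxed.flac

# Re-encode: decodes and writes a fresh FLAC, regenerating STREAMINFO
# (FLAC is lossless, so no audio quality is lost in the round trip)
ffmpeg -i bad.flac fixed.flac
```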
[06:21:41 CET] <fling> 60% of time I'm incorrect all the time.
[06:23:09 CET] <N0BOX> haha
[06:23:53 CET] <N0BOX> man, dunno what I bumped on my mouse to cause irssi to drop the window :P
[06:24:29 CET] <N0BOX> oh, one of my buttons is bound to F4, which I have set to close windows :P
[07:14:05 CET] <ossifrage> Getting the right magic order for -threads can be interesting. I'm finally getting it to use more of the available cpu
[07:16:11 CET] <ossifrage> It would be nice if ffmpeg named its threads so you knew who was doing what
[07:17:44 CET] <ossifrage> (it is kinda annoying that linux limits the thread name to 16 chars)
[10:29:54 CET] <TheWild> hello
[10:33:42 CET] <TheWild> I have a video and I would like to put a program-generated overlay on it. How I see it: ffmpeg decodes the frame and sends it as an image to my program, preferably with a timestamp. My program then puts the overlay on it and returns the modified image, and ffmpeg encodes it into a new video.
[10:33:47 CET] <TheWild> and I have no idea where to go
[10:34:30 CET] <TheWild> I don't want to run ffmpeg thousands of times, every time specifying which frame to extract
[10:45:55 CET] <kurosu> maybe programmatically (what you say seems to imply shell script, invoking the ffmpeg binary)
[10:46:49 CET] <kurosu> TheWild: also, https://ffmpeg.org/ffmpeg-filters.html#select maybe if you can somehow instead pass everything to ffmpeg
[10:47:47 CET] <kurosu> ffmpeg can't do accurate seeks in a lot of cases without a lot of setup, so there may be some issue in your approach
[10:49:36 CET] <TheWild> so that's why I want to do it frame-by-frame. I think every frame has a timestamp.
[10:51:02 CET] <furq> the simplest format for piping to/from your program that has timestamps is probably yuv4mpeg
[10:51:20 CET] <furq> assuming your video is yuv
[10:51:44 CET] <TheWild> I think the stuff I want to do is too specific for ffmpeg to handle on its own. I could just decode a video into a set of pictures, but it just unnecessarily takes up space when it could really get pipelined.
[10:51:51 CET] <furq> https://wiki.multimedia.cx/index.php/YUV4MPEG2
[10:52:22 CET] <pink_mist> TheWild: ffmpeg can put an overlay on a video just fine
[10:54:07 CET] <TheWild> YUV4MPEG2? Wow, not RGB, but a quick read makes me think it will serve. And even the format is documented.
[10:54:12 CET] <TheWild> thanks furq
[10:57:53 CET] <kurosu> I suspect TheWild's use case is that the overlay depends on the frame and its timestamp, and is thus generated on the fly according to his needs
[10:58:14 CET] <kurosu> one could obviously use ffmpeg's various filters to generate said overlay
[10:58:19 CET] <TheWild> ^ yup, exactly
[10:58:35 CET] <kurosu> if not too complicated, but that would not fit his need
[10:59:08 CET] <TheWild> the animations and whatever - I want to have control over every pixel in every frame
[10:59:36 CET] <kurosu> so, yeah, I don't think you can do that by just invoking the ffmpeg binary, you'll have to do it programmatically
[11:01:10 CET] <kurosu> open input/encoded stream and output stream (setting encoder parameters and so on), decode some or all frames, get pixels and timestamps, do your overlaying, send frames to encoder then to the output stream
[11:01:33 CET] <kurosu> you probably have to decode all of the frames, though; modifying one usually requires re-encoding most of the following ones
[11:03:00 CET] <TheWild> let's see a thing in hex editor first
[11:03:00 CET] <TheWild> ffmpeg -i output.mkv -f yuv4mpeg2 output.y4m
[11:03:00 CET] <TheWild> ffmpeg -i output.mkv -vcodec yuv4mpeg2 output.y4m
[11:03:08 CET] <TheWild> meh, I'm doing something wrong
[11:03:34 CET] <kurosu> or, furq approach, write to an output pipe the yuv4mpeg data, read it by your program, modify it, write the output to a pipe for input to another ffmpeg instance
[11:03:50 CET] <furq> TheWild: -f yuv4mpegpipe
[11:04:12 CET] <furq> you don't need it if the output ends in .y4m though
[11:04:41 CET] <TheWild> that worked, thanks
[11:04:46 CET] <TheWild> yikes! bitrate=1492993.5kbits/s
[11:04:53 CET] <furq> yeah it's rawvideo
[11:13:46 CET] <kurosu> that's why you want to pipe it, also
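The pipe arrangement kurosu and furq describe might look like this sketch (the overlay program name and filenames are placeholders; assumes the source decodes to a pixel format y4m can carry):

```shell
# Decode to raw yuv4mpeg on stdout, transform it frame-by-frame in a
# custom program, then re-encode the modified raw stream with x264
ffmpeg -i input.mkv -f yuv4mpegpipe - \
  | ./my_overlay_program \
  | ffmpeg -f yuv4mpegpipe -i - -c:v libx264 output.mkv
```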
[11:18:55 CET] <TheWild> do we have a compressed but lossless *RGB* format?
[11:19:47 CET] <TheWild> hmm... libx264rgb
[11:44:03 CET] <TheWild> yuv444p12 means 4:4:4 and 12-bit precision, right?
[13:18:59 CET] <Mavrik> yp
[14:22:04 CET] <wallbroken> hi
[14:22:20 CET] <wallbroken> i want to record my cctv in a loop over a limited amount of my hd
[14:23:41 CET] <DHE> well the super simple answer is the muxer called "segment"
[14:23:53 CET] <DHE> https://ffmpeg.org/ffmpeg-formats.html#segment
[14:24:32 CET] <wallbroken> yes but i want to set a hard disk quota
[14:24:49 CET] <wallbroken> for example for cctv i want to set 10 gb of my hd
[14:25:24 CET] <wallbroken> if the recording fills the quota, it must overwrite
[14:26:02 CET] <DHE> so there is a -segment_wrap parameter which might do that. I'd experiment first
[14:26:50 CET] <DHE> alternatively you can try the "hls" muxer, which is a specific format but also largely similar in terms of splitting, and it has an explicit option to delete old files. just keep in mind that it keeps 2x the number of files you ask for the list size
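A hedged sketch of the quota idea with the segment muxer: cap disk usage by capping how many segments exist at once via -segment_wrap. The input URL, segment length, and segment count are assumptions; the actual quota depends entirely on the camera's bitrate (e.g. ten wrapped segments of roughly 1 GB each approximate a 10 GB cap).

```shell
# Record in 10-minute segments; after rec009.mkv, wrap around and
# overwrite rec000.mkv, so at most 10 segments exist on disk
ffmpeg -i rtsp://camera/stream -c copy \
  -f segment -segment_time 600 -segment_wrap 10 \
  -reset_timestamps 1 rec%03d.mkv
```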
[14:27:19 CET] <wallbroken> DHE: is that present on enigma2?
[14:27:33 CET] <DHE> wat?
[14:27:44 CET] <wallbroken> enigma2 is an OS for video
[14:27:58 CET] <DHE> I have no idea
[14:29:28 CET] <wallbroken> and what if i restart the command?
[14:29:36 CET] <wallbroken> does it overwrite the entire file?
[14:31:42 CET] <DHE> it makes multiple files, and will restart from #1 (or zero?) at startup. while running hls will grow indefinitely but delete old files. segment will restart back at #1 when it hits your wrap target
[14:35:45 CET] <wallbroken> so if i reboot the machine, and when i reboot, it reached #4
[14:35:58 CET] <wallbroken> ffmpeg at next reboot will continue from 4?
[14:36:03 CET] <wallbroken> or will overwrite from 1?
[14:36:23 CET] <wallbroken> if the second, it's a problem
[14:39:36 CET] <DHE> ffmpeg itself doesn't check these things. it'll be on you to check how far it got and request it start at #5 in this case (don't want to overwrite an incomplete #4)
[14:43:36 CET] <wallbroken> in my case i want to continue writing the next file
[14:43:43 CET] <wallbroken> for example if i reboot on #4
[14:43:52 CET] <wallbroken> ffmpeg should write on #5
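A sketch of the restart logic DHE describes: find the highest-numbered existing segment and resume one past it, so an incomplete last segment is never overwritten. The rec%03d.mkv file pattern and paths are assumptions.

```shell
#!/bin/bash
# Print the index ffmpeg should resume from: one past the highest-numbered
# existing segment (possibly incomplete), or 0 if none exist yet.
next_segment_index() {
    local dir=$1 last
    last=$(ls "$dir"/rec[0-9][0-9][0-9].mkv 2>/dev/null \
           | sed 's/.*rec\([0-9][0-9][0-9]\)\.mkv/\1/' | sort -n | tail -1)
    if [ -z "$last" ]; then
        echo 0
    else
        # 10# forces base 10 so leading zeros (e.g. 008) aren't read as octal
        echo $(( 10#$last + 1 ))
    fi
}
```

The result can then be handed to the segment muxer on restart, e.g. `ffmpeg -i "$INPUT" -c copy -f segment -segment_wrap 10 -segment_start_number "$(next_segment_index /srv/cctv)" /srv/cctv/rec%03d.mkv` (input URL and directory are placeholders).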
[16:26:35 CET] <eject_ck> Hi guys, I'm trying to watch IPTV using my linux PC instead of the dumb tvbox. I sniffed the UDP address of the stream which the iptvbox uses to access it. When I connected to the same "network hub" aka bridge I was able to watch that stream from the linux pc. When I changed the mac address and IP address on my linux pc to match the IPTV box settings (I had a suspicion that the provider filters by ip/mac), I see that the stream is not coming.
[16:27:10 CET] <eject_ck> I also found that IGMP packets my box sends are different from packets which ffmpeg sends
[16:27:55 CET] <eject_ck> ffmpeg -i udp://225.0.0.11:5000
[16:28:50 CET] <DHE> IGMP version mismatch?
[16:28:57 CET] <eject_ck> i see that my box was sending IGMPv2 to 224.0.0.1 [IGMP Version: 2] Type: Membership Query (0x11)
[16:29:52 CET] <eject_ck> When I used ffmpeg it sends to 224.0.0.22: igmp v3 report, 1 group record(s)
[16:30:48 CET] <eject_ck> so I see it's using v2 on box and v3 on linux machine, also different addresses
[16:31:02 CET] <eject_ck> I tried to force linux pc to use igmpv3
[16:31:15 CET] <eject_ck> echo 2 > /proc/sys/net/ipv4/conf/ens224/force_igmp_version
[16:32:19 CET] <eject_ck> sorry, v2 I meant, it uses v2, but it's still sending the membership report to something other than 224.0.0.1
[16:32:23 CET] <eject_ck> 225.0.11.63: igmp v2 report 225.0.11.63
[16:32:58 CET] <eject_ck> Is this ok, or can I change that using ffmpeg options or linux settings ?
[16:33:28 CET] <eject_ck> thank you all in advance
[16:39:21 CET] <eject_ck> https://www.thegeekdiary.com/how-to-configure-multicast-on-an-ip-address-interface/
[17:11:41 CET] <tombb> hi all, would like to pick your brain for a bit.. I got an xdcam file that was created by rhozet carbon coder. for some reason it will automatically create it as an mxf with 1 stream and 4 audio channels, (I'm aware of MONO/STEREO/SURROUND, never seen 4 channels in a single stream)
[17:11:41 CET] <aristaware> Hi
[17:12:23 CET] <tombb> considering tomorrow carbon might produce a file with 3 channels in the same track, what would be the right way to split all channels to mono streams, regardless of the number of channels in a single stream?
[17:14:28 CET] <aristaware> I'm trying to join several little clips from my webcam (Yi home). Each clip lasts 1 minute. The problem here is that the camera has motion detection and stops recording till it detects new movement. So I have several <=1' clips and for some minutes there's no video. What I would want is to join them all, but fill the gaps with the last frame of the video preceding the gap.
[18:51:15 CET] <seanrdev> Hello, hope everyone is ok today. I have a question about pulling multiple rtsp streams: does the quality start to degrade if there are too many streams pulled at once?
[18:53:11 CET] <seanrdev> I noticed 5 streams are ok however when attempting to pull 30 streams they come in very bad. Sometimes completely green. I've tested to verify if this was in fact the camera by requesting the stream from another system and the stream is perfect. Is there perhaps any documentation on the limitations of ffmpeg and input streams?
[18:59:13 CET] <friendofafriend> seanrdev: Are you sure that isn't a limitation of your NIC, or something else?
[19:02:34 CET] <seanrdev> The calculations suggest 300Mbps and everything is connected on 1Gbps. NIC, Switch and router.
[19:04:44 CET] <seanrdev> friendofafriend: Oh you know what.... I do have a 10/100 switch with a good amount of cameras connected I apologize. I'll switch that out and try again.
[19:10:23 CET] <friendofafriend> Very glad to hear, seanrdev.  I've had the same problems with lots of video streams.  Good luck.
[20:52:20 CET] <kevinnn> for recording the desktop does anyone know if there is a performance difference between gdigrab and dshow?
[20:56:19 CET] <kepstin> gdigrab can be pretty slow depending on graphics drivers, etc.
[20:56:44 CET] <kepstin> dshow capture depends entirely on what software you have installed that implements the dshow device (this isn't provided by ffmpeg)
[20:58:18 CET] <kevinnn> kepstin: oh... what backends are available for dshow?
[20:59:38 CET] <kepstin> windows doesn't include any directshow screen grab stuff, so none unless you install one
[20:59:53 CET] <kepstin> (dshow on a stock windows install will only do cameras)
[21:01:04 CET] <kevinnn> kepstin: after doing much research i came across this:
[21:01:07 CET] <kevinnn> https://github.com/rdp/screen-capture-recorder-to-video-windows-free
[21:01:16 CET] <kevinnn> This is a backend for dshow right?
[21:01:25 CET] <kevinnn> would this be faster than gdigrab?
[21:01:53 CET] <kevinnn> just to let you in on my use case I want to create a basic screen recording program
[21:02:17 CET] <kevinnn> needs to record a minimum of 30fps
[21:02:22 CET] <kepstin> kevinnn: use obs
[21:02:36 CET] <kevinnn> I took the source code for gdigrab.c and implemented it
[21:02:43 CET] <kevinnn> and it was way slower than 30 fps
[21:02:46 CET] <kevinnn> obs...
[21:02:59 CET] <kevinnn> never considered that, is the source code readable?
[21:03:16 CET] <kevinnn> ffmpeg's gdigrab.c was actually fairly easy to read
[21:03:25 CET] <kepstin> rather than write your own screen capture app, OBS is an existing app that implements high performance screen capture for fullscreen (capable of gaming, etc.)
[21:03:56 CET] <kevinnn> kepstin: for my particular use case it must be written by hand
[21:03:58 CET] Action: kepstin wrote a substantial amount of the code for gdigrab, so he's happy to hear that it's readable, tho :)
[21:04:26 CET] <JEEB> if you are interested in the low-level stuff, the virtualdub.org blog entry from 2011 about DXGI 1.2 is I think a nice guide into screen capture on windows https://web.archive.org/web/20170615115053/http://www.virtualdub.org/blog/pivot/entry.php?id=356
[21:05:39 CET] <kevinnn> JEEB: thank you for that article, I will read through it
[21:06:03 CET] <kevinnn> kepstin: for obs, have you worked with it at all? any pointers as to where in the source I should start?
[21:07:13 CET] <kevinnn> JEEB: hey I
[21:07:25 CET] <kevinnn> i've actually come across desktop duplication
[21:07:35 CET] <kevinnn> which is what the article is talking about
[21:07:44 CET] <kevinnn> but for some reason it doesn't work on my machine
[21:07:47 CET] <kepstin> I'm not familiar with the OBS source, so I can't really help you there. I think that in fullscreen mode it does use the desktop duplication apis.
[21:08:13 CET] <kevinnn> JEEB: my /usr/include/w32api/dxgi1_2.h file doesn't include any references to IDXGIOutputDuplication
[21:08:39 CET] <kevinnn> I have no idea how that is possible and how to update as the duplication api should be available on windows 8+
[21:08:48 CET] <kevinnn> and I am on windows 10 with cygwin
[21:08:55 CET] <kevinnn> JEEB: any pointers?
[21:09:15 CET] <kevinnn> kepstin: that's what I figured too!
[21:09:17 CET] <kepstin> might just be out of date headers :/
[21:09:22 CET] <kevinnn> refer to my comments to JEEB
[21:09:25 CET] <kevinnn> hmm
[21:09:35 CET] <kevinnn> how is that possible? And how can I fix this
[21:11:40 CET] <kepstin> some options I can think of are to compile with an MS dev environment (visual studio) or to try using a newer mingw64 or msys2 release rather than cygwin
[21:11:59 CET] <JEEB> kevinnn: newer mingw-w64
[21:12:08 CET] <JEEB> you can build just the CRT and headers
[21:12:29 CET] <JEEB> of course if you're already on mingw-w64 version 6.x
[21:14:45 CET] <kevinnn> JEEB: any package in specific from mingw?
[21:15:05 CET] <friendofafriend> I'm trying to encode USB webcam video with h264_omx on a Raspberry Pi and stream to icecast in an MPEG-TS container.  It's working intermittently.  Does anyone have a working command line for h264_omx encoding they could share?
[21:15:07 CET] <JEEB> well headers and CRT is what you need
[21:15:17 CET] <kevinnn> cygwin shows a million options for mingw64-x86_64*
[21:15:53 CET] <JEEB> well check the version of the actual headers and crt, the package names I think should contain those words :P
[21:15:53 CET] <kevinnn> I'm not seeing crt as an option
[21:16:01 CET] <JEEB> since that's the parts of the project :P
[21:16:28 CET] <JEEB> anyways, if your cygwin mingw-w64 cross-toolchain doesn't have them then I don't think you'll find it in that package manager :P
[21:16:41 CET] <kevinnn> a search for crt in cygwin doesn't show any results!
[21:17:03 CET] <JEEB> mingw-w64-crt and mingw-w64-headers are the directories within mingw-w64 :P
[21:17:21 CET] <JEEB> kevinnn: if you already have mingw-w64 installed from cygwin then if the stuff's not there it's not new enough :P
[21:17:51 CET] <kepstin> the mingw-w64 downloads page says cygwin includes mingw-w64 v5.0.2, fwiw, but i don't know if that's up to date.
[21:18:25 CET] <JEEB> 6.0.x is current release
[21:18:39 CET] <JEEB> at least when I last checked
[21:19:47 CET] <kevinnn> god why can't programming in windows be as easy as it is in linux...
[21:20:02 CET] <kepstin> it's all reverse-engineered/reimplemented tho, so I wouldn't be surprised if some apis are still missing in 6.0.0
[21:20:35 CET] <kevinnn> kepstin: if that's the case maybe I should just use visualc++?
[21:20:50 CET] <kepstin> kevinnn: I wouldn't be surprised if windows devs who initially started with the vs IDE would say the same thing about linux :)
[21:20:52 CET] <JEEB> a lot of the APIs are just windows DLL end points tho, so you just need the header entries (some of which are even on MSDN), and then exporting the library end points
[21:20:58 CET] <kevinnn> that would have the original version of desktop duplication
[21:21:30 CET] Action: kepstin did most of the dev + testing of the gdigrab.c in Wine, fwiw
[21:21:46 CET] <kepstin> i was really surprised when i tried it on intel drivers in win7 and it was *way* slower than in wine
[21:22:20 CET] <kevinnn> is there anyway I can find out what version of mingw I need to have the desktop duplication API?
[21:22:34 CET] <JEEB> also mingw-w64 6.x seems to have IDXGIOutputDuplication
[21:22:43 CET] <kevinnn> do you think OBS uses the reverse-engineered desktop duplication that mingw uses?
[21:22:44 CET] <JEEB> I just grepped my installed headers
[21:22:58 CET] <kevinnn> like will it be as efficient?
[21:23:11 CET] <kevinnn> okay I am just going to download mingw manually
[21:23:25 CET] <JEEB> the implementation is the same, reverse engineering is a heavy word when you just need to get the header definitions and the library end points :P
[21:23:28 CET] <JEEB> exports that is
[21:23:41 CET] <JEEB> and the header stuff is often find'able in MSDN :P
[21:23:46 CET] <JEEB> like, on the documentation page
[21:24:04 CET] <kevinnn> right, okay, let me install mingw and see if I can get this all working
[21:24:10 CET] <kevinnn> thanks for the help JEEB
[21:24:17 CET] <kepstin> looks like the OBS build instructions for windows say to use visual studio, so they're not using mingw-w64 fwiw.
[21:25:06 CET] <JEEB> the build process for mingw-w64 headers & CRT isn't too hard
[21:25:07 CET] <kevinnn> kepstin: hmm, do you think the people over at mingw did a good job?
[21:25:25 CET] <JEEB> just do the headers first, and then CRT
[21:25:33 CET] <kevinnn> okay I will
[21:25:47 CET] <JEEB> (and configure --host to your cross-compiler + --prefix to your mingw-w64 prefix)
[21:26:01 CET] <JEEB> you probably want to --enable-sdk=all --enable-secure-api for headers, too
[21:26:24 CET] <JEEB> latter enables the """secure""" APIs that windows has
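Collecting JEEB's steps into one sketch (the target triplet and install prefix are assumptions; assumes mingw-w64 v6.x sources unpacked in ./mingw-w64):

```shell
# Headers first, with the extra SDK and "secure" APIs enabled
cd mingw-w64/mingw-w64-headers
./configure --host=x86_64-w64-mingw32 \
            --prefix=/usr/x86_64-w64-mingw32 \
            --enable-sdk=all --enable-secure-api
make install

# Then the CRT, built against the freshly installed headers
cd ../mingw-w64-crt
./configure --host=x86_64-w64-mingw32 \
            --prefix=/usr/x86_64-w64-mingw32
make && make install
```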
[21:26:39 CET] <kepstin> kevinnn: in general? yeah. it's possible to build a pretty wide variety of windows apps using mingw-w64, even cross-compiling from linux.
[21:26:58 CET] <kepstin> without needing to worry about licensing of header files or copying them from a real windows box or we
[22:31:46 CET] <GuiToris> hey I got this message: Filtergraph 'transpose=1' was specified through the -vf/-af/-filter option for output stream 0:0, which is fed from a complex filtergraph -vf/-af/-filter and -filter_complex cannot be used together for the same stream
[22:31:54 CET] <GuiToris> do I have to create intermediate files?
[22:33:03 CET] <Mavrik> No.
[22:33:20 CET] <Mavrik> You should just stop mixing filter_complex and vf parameters.
[22:33:23 CET] <Mavrik> As the message says.
[22:34:52 CET] <GuiToris> Mavrik, it's really difficult to change anything since I have no idea what's going on in the filter_complex, I just copied it from the Internet
[22:35:08 CET] <Mavrik> Not sure what you want me to say.
[22:35:19 CET] <GuiToris> -vf "transpose=1" -lavfi '[0:v]scale=ih*16/9:-1,boxblur=luma_radius=min(h\,w)/20:luma_power=1:chroma_radius=min(cw\,ch)/20:chroma_power=1[bg];[bg][0:v]overlay=(W-w)/2:(H-h)/2,crop=h=iw*9/16'
[22:35:23 CET] <Mavrik> Read documentation to understand what you're running on your computer? :P
[22:35:30 CET] <GuiToris> that's what I've tried
[22:35:45 CET] <Mavrik> What are you trying to transpose exactly?
[22:36:20 CET] <Mavrik> In other words - what are you actually trying to do? :P
[22:36:29 CET] <GuiToris> I have a vertical footage video and I'd like to stretch a blurred copy in the background
[22:36:37 CET] <Mavrik> ok
[22:36:44 CET] <GuiToris> the filter complex does a good job
[22:36:51 CET] <GuiToris> except I also need to rotate the video
[22:37:04 CET] <GuiToris> that's the problem
[22:37:23 CET] <Mavrik> Do you need to rotate both the blurred version and the normal version?
[22:37:51 CET] <GuiToris> the blurred background will be created by the original one, won't it?
[22:38:05 CET] <GuiToris> if I rotate the main video, it'll be rotated as well
[22:38:08 CET] <GuiToris> right?
[22:38:30 CET] <Mavrik> well, depends on where the transposition is done
[22:38:48 CET] <GuiToris> in the first place
[22:38:56 CET] <Mavrik> That's why I'm asking - you can rotate the video at the start (thus rotating everything) or just part of it.
[22:39:25 CET] <Mavrik> So what your complex filter is doing is it's taking the first video input (called [0:v]), doing the whole blur thing and then outputting that as an output named [bg]
[22:39:42 CET] <Mavrik> It then takes [bg] and [0:v] and combines them together
[22:40:44 CET] <GuiToris> then I think I should rotate it first
[22:40:48 CET] <Mavrik> -lavfi '[0:v]transpose=1[transposed];[transposed]scale=ih*16/9:-1,boxblur=luma_radius=min(h\,w)/20:luma_power=1:chroma_radius=min(cw\,ch)/20:chroma_power=1[bg];[bg][transposed]overlay=(W-w)/2:(H-h)/2,crop=h=iw*9/16'
[22:41:02 CET] <Mavrik> Something like that
[22:42:07 CET] <GuiToris> matches no streams
[22:42:14 CET] <GuiToris> did I mess up something
[22:42:15 CET] <GuiToris> ?
[22:42:37 CET] <Mavrik> Hard to tell ;)
[22:42:43 CET] <GuiToris> Stream specifier 'transposed'
[22:42:51 CET] <GuiToris> I forgot to copy the beginning
[22:44:31 CET] <GuiToris> Stream specifier 'transposed' in filtergraph description [0:v]transpose=1[transposed];[transposed]scale=ih*16/9:-1,boxblur=luma_radius=min(h\,w)/20:luma_power=1:chroma_radius=min(cw\,ch)/20:chroma_power=1[bg];[bg][transposed]overlay=(W-w)/2:(H-h)/2,crop=h=iw*9/16 matches no streams
[22:45:24 CET] <Mavrik> Can you pastebin everything you've typed and your full output?
[22:46:45 CET] <GuiToris> there isn't much that you haven't seen: ffmpeg -i input -lavfi '[0:v]transpose=1[transposed];[transposed]scale=ih*16/9:-1,boxblur=luma_radius=min(h\,w)/20:luma_power=1:chroma_radius=min(cw\,ch)/20:chroma_power=1[bg];[bg][transposed]overlay=(W-w)/2:(H-h)/2,crop=h=iw*9/16' -frames 1 output.png
[22:51:14 CET] <GuiToris> does your script work for you?
[22:58:41 CET] <furq> GuiToris: [0:v]transpose=1,split[s0][s1];[s0]scale=ih*16/9:-1,boxblur=luma_radius=min(h\,w)/20:luma_power=1:chroma_radius=min(cw\,ch)/20:chroma_power=1[bg];[bg][s1]overlay=(W-w)/2:(H-h)/2,crop=h=iw*9/16
[23:00:51 CET] <GuiToris> furq, thanks a lot, it's working now
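Putting furq's working graph into a full command (input/output names are placeholders): transpose once, split the rotated stream, blur one copy for the background, then overlay the unblurred copy on top and crop to 16:9.

```shell
ffmpeg -i input.mp4 -lavfi '[0:v]transpose=1,split[s0][s1];[s0]scale=ih*16/9:-1,boxblur=luma_radius=min(h\,w)/20:luma_power=1:chroma_radius=min(cw\,ch)/20:chroma_power=1[bg];[bg][s1]overlay=(W-w)/2:(H-h)/2,crop=h=iw*9/16' output.mp4
```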
[00:00:00 CET] --- Fri Jan 18 2019

