[Ffmpeg-devel-irc] ffmpeg.log.20170131

burek burek021 at gmail.com
Wed Feb 1 03:05:01 EET 2017


[00:57:12 CET] <riccardo654321> Hello. I converted a 24 bit flac to an opus file. I tried to downsample the opus to 44.1khz and 16 bit but ffmpeg says "[libopus @ 0x2834b40] Specified sample rate 44100 is not supported"
[00:58:46 CET] <furq> riccardo654321: opus only supports 48khz
[01:17:33 CET] <thebombzen> I don't remember why
[01:17:42 CET] <thebombzen> but there was a reason for it
[01:32:35 CET] <faLUCE> When I encode mp2 audio I obtain 288 bytes per frame. Then, I call av_interleaved_write_frame() with that packet, and the write callback of the AVIOContext linked to the muxer is called immediately, without buffering more packets into the same MPEG PES. Why? The buffer of the context has a size of 4096, which should be big enough... so how can I compose a muxed PES with more segments?
[01:34:35 CET] <TD-Linux> thebombzen, much more complexity for no benefit
[01:35:08 CET] <TD-Linux> https://people.xiph.org/~xiphmont/demo/celt/demo.html
[01:35:39 CET] <TD-Linux> the critical bands shift around a bit when switching sample rates, so you need different tuning parameters and/or different band layout
[02:21:39 CET] <faLUCE> does anyone know how to fix that? I have the SAME problem that this user reports:  https://ffmpeg.org/pipermail/libav-user/2016-July/009331.html  .   When I call av_interleaved_write_frame() it adds overhead because the muxer writes one PES frame per one ADTS frame... is there a way to fix that?
[02:39:57 CET] <thebombzen> TD-Linux: ah thx
[02:40:10 CET] <thebombzen> the "benefit" is that cd quality audio is 44.1
[02:40:25 CET] <thebombzen> or rather CD Audio is 44.1 kHz
[02:40:42 CET] <thebombzen> but you should be able to use something like lanczos to resample it to 48 before opus anyway
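A minimal sketch of that approach (filenames and bitrate are placeholders); the aresample filter converts the audio to 48 kHz before it reaches libopus:

    ffmpeg -i input.flac -af aresample=48000 -c:a libopus -b:a 160k output.opus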
[02:44:50 CET] <faLUCE> well, I found this option:   "pes_payload_size" which could be useful... but how can I set it for an AVFormatContext ?
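One way to set a muxer-private option such as pes_payload_size is to pass it in an AVDictionary to avformat_write_header(). A minimal sketch, assuming fmt_ctx is an already-configured mpegts AVFormatContext; 2930 is an arbitrary example value:

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    static int write_header_with_pes_size(AVFormatContext *fmt_ctx)
    {
        AVDictionary *opts = NULL;
        /* muxer-private option, consumed by avformat_write_header() */
        av_dict_set(&opts, "pes_payload_size", "2930", 0);
        int ret = avformat_write_header(fmt_ctx, &opts);
        av_dict_free(&opts); /* anything still in opts was not recognized */
        return ret;
    }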
[02:44:55 CET] <TD-Linux> yes, opusenc uses the speex resampler. the original sample rate is also written in a tag if you want to restore it on decode.
[02:47:22 CET] <thebombzen> how does speex's resampler stack up to swresample
[02:57:51 CET] <Thisguy_> Is there somewhere I can go to learn how to record lossless hq audio from a pulseaudio monitor?
[03:11:44 CET] <Thisguy_> I'm invoking ffmpeg like this: http://pastebin.com/vtUrwHEd and this is the output of ffprobe on a file created thusly: http://pastebin.com/F6mmrZui I'm trying to improve the audio quality. It doesn't sound just like it did the first time around when I listen to the video, like it went through a filter of some kind. Am I doing something wrong, or is this the best I can do?
[04:18:56 CET] <xeons> Using filter_complex, I would like to create a nullsrc video of the same height as another video I'm vstacking. How can I set them without knowing the correct height? scale supports the -1 option for letting ffmpeg figure it out: nullsrc=size=300x????,scale=w=300:h=-1[emptyfeed]
[04:19:50 CET] <thebombzen> xeons: -vf scale2ref
[04:20:07 CET] <xeons> I can't use -vf because there are two videos streams being filtered
[04:20:15 CET] <thebombzen> well I mean scale2ref video filter
[04:20:27 CET] <thebombzen> it's a complex filter that takes in two video streams and outputs two video streams
[04:20:43 CET] <thebombzen> the second input is passed through unchanged and the first one is scaled to match the format and size of the second
[04:20:47 CET] <thebombzen> (I think that's the order)
[04:20:51 CET] <thebombzen> see the scale2ref filter
[04:21:02 CET] <xeons> https://ffmpeg.org/ffmpeg-filters.html#scale2ref
[04:21:06 CET] <thebombzen> yes that
[04:21:14 CET] <xeons> Looks like that might be it
[04:22:10 CET] <thebombzen> also if you're vstacking, you want the same width
[04:22:18 CET] <thebombzen> although with scale2ref it shouldn't matter
[04:23:17 CET] <xeons> So 'nullsrc[b];scale2ref[b][a];'?
[04:28:51 CET] <thebombzen> xeons: see this 1920x1080 testsrc? http://0x0.st/Vwb.jpg
[04:29:14 CET] <thebombzen> ffmpeg -i some_1080p_video.mkv -lavfi 'testsrc[b];[b][0:v]scale2ref[c][d];[d]nullsink' -map '[c]'
[04:29:52 CET] <thebombzen> you will have to map the output
[04:30:06 CET] <thebombzen> I don't know if there's a shortcut for the above
[04:31:35 CET] <xeons> So you are scaling the input of testsrc to match some_1080p_video and then throwing that feed (d) away?
[04:32:02 CET] <xeons> but keeping the scaled testsrc[b] (now [c])
[04:32:27 CET] <thebombzen> well I'm discarding the 1080p video
[04:32:30 CET] <thebombzen> cause it wasn't really the point
[04:32:34 CET] <thebombzen> I assume you're not going to discard it
[04:33:07 CET] <thebombzen> why are you vstacking nullsrc?
[04:33:12 CET] <xeons> Yes, I just wanted to make sure I'm reading that right
[04:33:27 CET] <thebombzen> you are
[04:33:46 CET] <thebombzen> but keep in mind that nullsrc isn't useful really unless you're testing a filterchain whose input isn't really important
[04:33:57 CET] <xeons> That was really helpful thank you
[04:34:29 CET] <xeons> So I was vstacking a nullsrc because my feed sometimes is empty (not present) but I still want a box where it was since I'm building a grid output
[04:35:26 CET] <xeons> I could probably achieve the same thing by simply sizing and placing my existing video feeds correctly without making a fake nullsrc feed to stack with.
[04:35:32 CET] <thebombzen> I'd use color= then
[04:35:34 CET] <thebombzen> not nullsrc=
[04:36:08 CET] <xeons> ah, good point. Green isn't very nice.
[04:37:06 CET] <thebombzen> if you're building a grid output I'd recommend the tile filter over successive hstacks and vstacks
[04:37:08 CET] <thebombzen> less ugly
[04:38:23 CET] <thebombzen> so if you're looking to make a 2x2 grid, starting from 1080p frames, you can do -vf tile=2x2
[04:38:41 CET] <thebombzen> this will make 4k video of course so you might want to prepend a scale filter
[04:40:32 CET] <xeons> ... -i one.mkv -i two.... -vf "scale=w=1080:h=-1;tile=2x2" out.mkv
[04:42:53 CET] <thebombzen> if you scale it to 1080 wide before tiling the result will be 2160 wide
[04:43:02 CET] <thebombzen> if that's the goal then sure but that's what will happen
[04:44:11 CET] <xeons> I'll add the scale after then... or perhaps before (but smaller) might be faster for transcoding
[04:44:42 CET] <thebombzen> you should scale beforehand
[04:45:31 CET] <thebombzen> if scale were perfect they'd be the same but it isn't so you should scale first
[04:49:15 CET] <xeons> Simple filtergraph 'scale=w=540:h=-1;tile=2x2' was expected to have exactly 1 input and 1 output. However, it had >1 input(s) and >1 output(s). Please adjust, or use a complex filtergraph (-filter_complex) instead.
[04:49:46 CET] <xeons> silly me, I used -vf instead of -filter_complex
[04:51:35 CET] <xeons> I must be missing something. -filter_complex "tile=2x2;scale=w=540:h=-1" just makes a grid from the first video source ignoring all others
[04:54:02 CET] <xeons> while -filter_complex "scale=w=540:h=-1;tile=2x2" creates a single output video 540x#
[05:12:50 CET] <xeons> thank you again for your help thebombzen
[05:43:43 CET] <teratorn> what's a good rtsp streaming server?
[05:44:32 CET] <teratorn> just need something for testing. something other than ffserver.. any ideas welcome
[06:09:36 CET] <thebombzen> teratorn: I think VLC can do it
[06:09:39 CET] <thebombzen> but I'm not certain
[06:15:58 CET] <teratorn> thebombzen: thx
[09:05:11 CET] <Thisguy_> I'm trying to improve the quality of the recording I'm taking from my pulseaudio monitor source. Any tips?
[09:22:06 CET] <Thisguy_> Good night, everybody
[10:54:01 CET] <dooome> [Parsed_crop_1 @ 0x3706d20] Option 'crop' not found
[10:54:04 CET] <dooome> [AVFilterGraph @ 0x364e820] Error initializing filter 'crop' with args
[10:54:32 CET] <dooome> tried to upgrade ffmpeg, and now i'm getting this..
[10:55:15 CET] <dooome> okej
[11:00:00 CET] <dooome> sry.. found it myself :P
[11:00:41 CET] Action: dooome you can't crop with "crop=crop=.."
[11:01:34 CET] <dooome> damn copy-paste sometimes
[13:07:03 CET] <kerio> lmao
[13:07:07 CET] <kerio> a friend of mine got burned by libav
[13:07:16 CET] <kerio> he thought he had downloaded ffmpeg
[13:07:22 CET] <kerio> he couldn't get ffvhuff to work
[13:08:13 CET] <JEEB> I think the issue more often is "distro packaged libav is ancient" rather than just libav
[13:08:25 CET] <JEEB> partially because libav does a release per year, if even that
[14:24:49 CET] <ChocolateArmpits> Hello, I'm trying to encode a 720p video using mpeg2video but it doesn't respond to any bitrate I set, always opting for around 840kbps
[14:26:49 CET] <Mavrik> !fb ChocolateArmpits
[14:26:57 CET] <Mavrik> This.
[14:33:36 CET] <BtbN> sounds like you are trying to set a way too low bitrate, and 840kbps is the lowest it can possibly achieve
[14:38:41 CET] <barteks2x> I'm not sure if it's an ffmpeg bug or something else, but I'm getting weird graphical glitches in the video when I do screen capture (x11grab). (I think it may not be an actual ffmpeg bug because I've seen similar glitches on the screen at one point when I used nvidia proprietary drivers). What could cause it?
[14:41:00 CET] <barteks2x> And should I report it as a bug actually, even if it may be driver issue
[14:41:22 CET] <ChocolateArmpits> BtbN, I'm trying to encode at 15mbps, here's the pastebin http://pastebin.com/0Jtw8g3w
[14:41:34 CET] <ChocolateArmpits> might be due to no vbv
[14:41:38 CET] <ChocolateArmpits> specified
[14:42:15 CET] <ChocolateArmpits> yeah seems that's the case
[14:45:13 CET] <Mavrik> Indeed.
[14:45:25 CET] <Mavrik> (Encoding into MPEG-2 / PCM looks strange in 2017 tho :))
[14:46:20 CET] <ChocolateArmpits> for performance and size reasons
[14:47:14 CET] <DHE> yeah, also include -maxrate:v 15M -bufsize:v 5M and maybe -minrate:v 15M
[14:47:32 CET] <DHE> bufsize should vary depending on your streaming needs
[14:47:49 CET] <ChocolateArmpits> vbv isn't initialized until maxrate is used, at least that's
[14:47:58 CET] <ChocolateArmpits> the case with libx264
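Putting DHE's suggestion above together with the 15 Mbps target, a sketch with placeholder filenames:

    ffmpeg -i input.ts -c:v mpeg2video -b:v 15M -minrate:v 15M -maxrate:v 15M -bufsize:v 5M -c:a copy output.ts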
[15:29:38 CET] <andreaskarlsson> Hey, hopefully this is the correct place to ask. Is it possible to use the -re flag with the rawvideo muxer? I'm trying to encode raw frames that I get at a rate of 5-60 fps (realtime) into a cfr video at 25 fps or so. The problem is that the rawvideo muxer seems to force the input fps to be 25 (or whatever is specified with the -framerate argument) even though I for example may supply 5 fps, making my video play at 5x the speed
[15:29:42 CET] <andreaskarlsson> My commandline looks like this: http://pastebin.com/8B1u0DxX (frames are fed through stdin).
[15:30:10 CET] <JEEB> -re is a hack based on sleeping and input timestamps
[15:35:06 CET] <andreaskarlsson> hmm, yeah, I figured it wasn't the most robust solution. Is there a better way to feed raw frames to ffmpeg when the input fps is not constant?
[15:37:14 CET] <andreaskarlsson> I might add that I supply the frames from custom software, so I know the correct times of the frames when I hand them over to ffmpeg
[15:38:39 CET] <DHE> no, you'll have to duplicate frames in your own application in order to provide a constant effective framerate to ffmpeg. or you can use filters inside ffmpeg to convert the variable framerate
[15:38:51 CET] <DHE> that would also work
[15:47:02 CET] <andreaskarlsson> I see, I was hoping that ffmpeg could solve it for me :)
[15:47:20 CET] <andreaskarlsson> could I use a filter to set the timecode of each frame in a file?
[15:49:59 CET] <BtbN> you can add a setpts filter, and set the pts based on system time I think
[15:50:05 CET] <BtbN> that should achieve what you are trying to do
[15:51:03 CET] <BtbN> not entirely sure if that's possible though
[15:54:45 CET] <andreaskarlsson> ok, I'll check the filter
[15:58:55 CET] <kerio> can i add a creation date to a video?
[16:14:10 CET] <basiclaser> hi there
[16:18:03 CET] <basiclaser> I'm using ffmpeg to put videos together in a grid, can anyone comment on how I can get the sound from all of them to come together as well? currently only the audio from the first video is being included https://ghostbin.com/paste/x5ume
[16:20:09 CET] <kepstin> basiclaser: you'll have to use the 'amix' filter to combine all the audio streams together, something like '[0:a][1:a][2:a][3:a]amix=inputs=4' in your filter-complex script
[16:21:48 CET] <durandal_170> basiclaser: use vstack/hstack filters instead of overlay. its faster
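Combining the two suggestions, a sketch of a 2x2 grid with all four audio tracks mixed, assuming four same-sized inputs (filenames are placeholders):

    ffmpeg -i a.mp4 -i b.mp4 -i c.mp4 -i d.mp4 \
      -filter_complex "[0:v][1:v]hstack[top];[2:v][3:v]hstack[bot];[top][bot]vstack[v];[0:a][1:a][2:a][3:a]amix=inputs=4[a]" \
      -map "[v]" -map "[a]" out.mp4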
[16:27:49 CET] <basiclaser> i'm completely new to ffmpeg, so thanks for the feedback
[16:39:33 CET] <andreaskarlsson> Ey, found a filter that works just like I wanted in the examples for setpts: https://ffmpeg.org/ffmpeg-filters.html#Examples-106
[16:39:44 CET] <andreaskarlsson> "Generate timestamps from a "live source" and rebase onto the current timebase: setpts='(RTCTIME - RTCSTART) / (TB * 1000000)'"
[16:39:56 CET] <andreaskarlsson> Thanks a lot for the suggestions!
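A sketch of a full command built around that example, with a placeholder frame size; the output-side -r 25 duplicates/drops frames to produce a constant frame rate:

    ffmpeg -f rawvideo -pixel_format rgb24 -s 1280x720 -i - \
      -vf "setpts='(RTCTIME - RTCSTART) / (TB * 1000000)'" -r 25 output.mkv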
[17:35:53 CET] <phillipk> Solved my issue I was having!  In case anyone is interested:
[17:36:41 CET] <phillipk> http://pastebin.com/hbn3AzgP  shows my plan--to layer audios on top of an existing video (with audio).  The problem was the output would have audio sync problems in the parts from the existing video.
[17:37:55 CET] <phillipk> Turns out the problem was that the existing audio/video was produced by concat several videos--all with audio or, if silent, I had "silent" audio in them. But, some of the pieces getting concat'd were created incorrectly.
[17:39:17 CET] <phillipk> namely, I was amix'ing a silent .wav--but that .wav was shorter than the video!  For whatever reason, when you adelay/amix and your original has a silent audio that's not the entire duration, the output is effectively shortened... which was causing my sync issues.
[17:40:17 CET] <phillipk> the current version creates the silent audio on the fly (using anullsrc and lavfi)... and I make sure it's long enough to match the video to which I'm adding the audio.
[17:41:22 CET] <phillipk> HTH
[17:42:19 CET] <kepstin> yeah, my fav way to handle that is to concat an endless silence source to the audio, then use the '-shortest' or '-t' ffmpeg options to trim it to match.
[17:45:17 CET] <phillipk> do you mean a very very long source?
[17:47:34 CET] <shincodex> so uh
[17:47:37 CET] <kepstin> nah, just an aevalsrc or something in a filter graph with no duration set
[17:47:38 CET] <shincodex> void *av_mallocz(size_t size) {     void *ptr = av_malloc(size);     if (ptr)         memset(ptr, 0, size);     return ptr; }
[17:47:40 CET] <furq> phillipk: anullsrc lasts forever
[17:47:55 CET] <furq> you have to give it an explicit duration or just use -shortest or something equivalent
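A sketch of the -shortest variant (filenames are placeholders); since anullsrc runs forever, -shortest ends the output when the video stream ends:

    ffmpeg -i video.mp4 -f lavfi -i anullsrc=r=44100:cl=stereo -c:v copy -c:a aac -shortest out.mp4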
[17:47:55 CET] <shincodex> How safe would it be to well if < 1000 stack alloc and return
[17:47:57 CET] <shincodex> lol
[17:49:00 CET] <shincodex> and looks like further down that function i need to memalign
[17:49:01 CET] <shincodex> whatev
[17:49:02 CET] <phillipk> right on, yeah, I'm doing -t
[17:49:24 CET] <furq> you can just use amix=...:duration=shortest
[17:50:30 CET] <furq> there's also duration=first which will use the first input stream's duration
[17:50:58 CET] <kepstin> hmm, wasn't the problem that (at least one of) the audio streams was too short? so the audio had to be made to match the video, not to one of the other audio streams.
[17:51:09 CET] <furq> i think his silent audio was too short?
[17:51:44 CET] <furq> although afaik if you use duration=first then any shorter tracks will just fall silent when they end
[17:52:31 CET] <furq> it doesn't really make much difference if you only have to do this once
[18:01:53 CET] <phillipk> yeah--it was a multistep process--but when I made the actual .wav file it was only 2 minutes long.  Some of the segments (which ended up getting concat'd and then used in the adelay/amix) were longer than 2 minutes.  So, if I needed to add silence but the segment was 2:20, then the sync issue would appear as video shifting 20 seconds too early.
[18:03:08 CET] <phillipk> it was tricky because when I viewed the concat'd file--the segment that was supposed to be 2:20 was 2:20... it was only when I extracted the audio from the concat'd file that the problems began.
[18:03:40 CET] <phillipk> took me days, but along the way, I cleaned everything with a fine toothed comb and made sure I understood every dang flag in the command.
[18:08:23 CET] <Diag> and thats why im afraid of the color yellow
[18:45:41 CET] <efface> I am running ffprobe via php and my script gets hung up when ffprobe tries to connect to an invalid multicast udp source.  I tried adding the timeout option to the udp address, which causes it to return an error code but the program does not exit...it just hangs...which hangs my script.  Is there a way to make ffprobe exit when an error is returned?
[18:48:46 CET] <shincodex> No? No developers alive
[18:48:50 CET] <shincodex> Jeeb where ye
[18:53:09 CET] <Duality> can i read stdin and stream it with ffmpeg ?
[18:53:29 CET] <Duality> goal is to stream some user generated data somewhere in some format like h264
[18:53:38 CET] <kerio> sure
[18:53:43 CET] <kerio> ffmpeg -i -
[18:55:48 CET] <Duality> the input is rgb24
[18:56:10 CET] <kerio> ffmpeg -f rawvideo -pixel_format rgb24 -i -
[18:57:15 CET] <Duality> i get invalid argument, and just before that picture size 0x0 is invalid
[18:57:52 CET] <kerio> ffmpeg -f rawvideo -pixel_format rgb24 -s WxH -i -
[18:58:12 CET] <Duality> your awesome :)
[18:58:24 CET] <kerio> https://www.ffmpeg.org/ffmpeg-formats.html#rawvideo
[19:04:20 CET] <Duality> here is what i am running so far for a test: cat /dev/urandom | ffmpeg -f rawvideo -pixel_format rgb24 -s 128x64 -i -  >(ffplay 2> /dev/null)
[19:04:41 CET] <Duality> this errors on me with unable to find a suitable output format for '/proc/self/fd/13'
[19:05:15 CET] <efface> is there a way to make ffprobe/ffmpeg exit on error? Trying to get it to exit when it can't connect to a multicast udp....the timeout option only makes it return an error code but not abort
[19:11:40 CET] <faLUCE> Are there pixel formats with more than 8 bits per channel?
[19:11:46 CET] <thebombzen> yes
[19:11:52 CET] <thebombzen> 10bit is popular actually
[19:14:06 CET] <faLUCE> thebombzen: can you tell me a format with that?
[19:14:14 CET] <thebombzen> yuv420p10le
[19:14:57 CET] <furq> Duality: -i - -f rawvideo - | ffplay -i -
[19:14:58 CET] <faLUCE> thebombzen: and in that, there is y (10 bits) u (10 bits) and v( 10 bits) ?
[19:15:14 CET] <thebombzen> yes it's 10 bits per channel
[19:15:28 CET] <thebombzen> although it's still subsampled
[19:15:48 CET] <faLUCE> thebombzen: thanks, do you know if this bit depth is typedef'd?
[19:16:03 CET] <thebombzen> um
[19:16:04 CET] <faLUCE> or is it just cast?
[19:16:10 CET] <Duality> furq: how confusing
[19:16:27 CET] <thebombzen> Duality: no, ffmpeg automatically determines the output format based on the filename you give it
[19:16:32 CET] <thebombzen> but it can't if it's stdout
[19:16:55 CET] <Duality> thebombzen: i was talking about all the -
[19:16:56 CET] <thebombzen> just because you have uncompressed video doesn't mean it has to be in a rawvideo container
[19:16:57 CET] <Duality> :D
[19:17:08 CET] <thebombzen> all the -?
[19:17:09 CET] <faLUCE> if you say it's subsampled I suppose it's not typedef'd
[19:17:14 CET] <thebombzen> also furq that won't work
[19:17:26 CET] <furq> -f nut -c:v rawvideo - | ffplay -i -
[19:17:27 CET] <thebombzen> faLUCE: no it's 4:2:0 chroma subsampled
[19:17:46 CET] <faLUCE> thebombzen: then ffmpeg works only with 8-bit depth?
[19:17:54 CET] <thebombzen> and I would have used -f yuv4mpegpipe - | ffplay -i -
[19:17:58 CET] <furq> i'm not sure what works with ffplay because i don't have it installed, but mpv doesn't like -f rawvideo
[19:18:14 CET] <thebombzen> faLUCE: no it supports yuv420p10le, as the example I gave you
[19:18:27 CET] <basiclaser> Hi again! Your advice earlier helped me create a video grid successfully, thanks - im now trying the same ffmpeg command with a different video with a different resolution - i update the resolution values but im getting errors -> Unable to parse option value "-1" as pixel format \ Last message repeated 1 times \ [buffer @ 0x7fde8a602700] Error setting option
[19:18:27 CET] <basiclaser> pix_fmt to value -1.
[19:18:27 CET] <basiclaser> \ [graph 0 input from stream 0:0 @ 0x7fde8a6025c0] Error applying options to the filter.
[19:18:30 CET] <thebombzen> when I said "it's subsampled" I meant "chroma subsampled"
[19:18:40 CET] <furq> faLUCE: x264 only supports one of 8-bit or 10-bit, depending on how you built it
[19:18:41 CET] <faLUCE> thebombzen: I see that, but if it subsamples it, it works only with 8 bits, actually
[19:18:46 CET] <furq> no
[19:18:48 CET] <basiclaser> here's my code https://ghostbin.com/paste/s98vw
[19:18:55 CET] <furq> the chroma is subsampled
[19:19:06 CET] <thebombzen> faLUCE: http://en.wikipedia.org/wiki/Chroma_subsampling
[19:19:07 CET] <furq> yuv420p is 12bpp, yuv420p10le is 15bpp
[19:19:34 CET] <faLUCE> furq: I see. then, is the other channels' depth typedef'd?
[19:19:50 CET] <faLUCE> or does the library use a brutal cast
[19:19:52 CET] <faLUCE> ?
[19:19:57 CET] <furq> i'm not sure what you're asking
[19:20:23 CET] <furq> it's a pixel format, it works the same as any other pixel format
[19:20:24 CET] <Duality> awesome
[19:20:26 CET] <Duality> i got something
[19:20:40 CET] <Duality> cat /dev/urandom | ffmpeg -s 128x64 -f rawvideo -i - -f nut pipe:1 | ffplay -i -
[19:20:51 CET] <furq> that's encoding to mpeg4
[19:20:58 CET] <furq> add -c:v rawvideo after -f nut
[19:21:19 CET] <faLUCE> furq: if there is a sort of avpixel struct, how are its components typed? uint8_t or some_define_type?
[19:21:20 CET] <kerio> does libx264rgb default to something like rgb24?
[19:21:29 CET] <furq> on which note, why is that the default codec for -f nut
[19:21:30 CET] <kerio> something with 8 bits per channel
[19:21:38 CET] <furq> i believe it's bgr
[19:21:55 CET] <Duality> furq: i did a -f rawvideo why does it encode to mpeg4 when i don't add -c:v rawvideo ?
[19:21:56 CET] <kerio> furq: still 8 bits per channel right
[19:22:01 CET] <furq> Supported pixel formats: bgr0 bgr24 rgb24
[19:22:02 CET] <furq> close enough
[19:22:03 CET] <furq> and yeah
[19:22:15 CET] <furq> Duality: -f rawvideo applies to the input
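Putting the pieces of this exchange together, the working pipeline ends up as:

    cat /dev/urandom | ffmpeg -f rawvideo -pixel_format rgb24 -s 128x64 -i - -f nut -c:v rawvideo - | ffplay -i -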
[19:22:47 CET] <faLUCE> I see:   uint8_t *data[AV_NUM_DATA_POINTERS]   <--- so I suppose it's brutally cast
[19:22:58 CET] <faLUCE> (in AVFrame)
[19:23:13 CET] <thebombzen> furq: mpeg4 is the best lossy encoder without external libraries
[19:23:19 CET] <Duality> furq: so -c selects a codec for the output, in this case rawvideo?
[19:23:24 CET] <thebombzen> Yes
[19:23:26 CET] <furq> but why would you be using nut for mpeg4
[19:23:31 CET] <thebombzen> You wouldn't
[19:23:32 CET] <furq> or any lossy encoder
[19:23:41 CET] <thebombzen> but if you use -f nut then ffmpeg picks the best encoder by default
[19:23:47 CET] <thebombzen> which is mpeg4 without external libraries
[19:23:47 CET] <furq> the default should be rawvideo or ffv1
[19:23:59 CET] <thebombzen> I think it should be ffv1 as well
[19:24:03 CET] <furq> or something that makes sense
[19:24:05 CET] <thebombzen> but someone decided they wanted it to be lossy
[19:24:08 CET] <thebombzen> so mpeg4 it is
[19:25:22 CET] <faLUCE> then, for example, if I want to get a 10-bit pixel component for a channel I have to cast to (uint16_t)
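A minimal sketch of what that cast looks like in practice for a yuv420p10le AVFrame (the helper name is made up for illustration):

    #include <stdint.h>
    #include <libavutil/frame.h>

    /* yuv420p10le stores each sample as a little-endian 16-bit word, so the
     * uint8_t *data pointer is reinterpreted as uint16_t *. */
    static uint16_t get_luma_10bit(const AVFrame *frame, int x, int y)
    {
        const uint16_t *row =
            (const uint16_t *)(frame->data[0] + y * frame->linesize[0]);
        return row[x]; /* 10 significant bits: values 0..1023 */
    }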
[19:25:37 CET] <ljc> peg
[19:27:21 CET] <thebombzen> basiclaser: is there a reason you're not using vstack and hstack
[19:27:40 CET] <basiclaser> thebombzen: would that effect the bug im having?
[19:28:15 CET] <thebombzen> well no
[19:28:20 CET] <thebombzen> but it would make your thing less slow
[19:29:39 CET] <thebombzen> as for your error about invalid pixel formats
[19:29:46 CET] <thebombzen> clearly you didn't post the actual command
[19:29:51 CET] <thebombzen> or the actual output
[19:30:54 CET] <thebombzen> in particular, there's no way it'd be unable to parse a "-1" if you did not type a -1
[19:31:16 CET] <thebombzen> so please come back after you post the actual command and output
[19:34:21 CET] <kerio> deepy my strategy of using yuyv422 worked brilliantly ;o
[19:34:36 CET] <kerio> except that i got a file that's barely smaller than what i get with ffvhuff
[19:34:44 CET] <kerio> after encoding at like 0.01x the speed
[19:35:07 CET] <thebombzen> what codec?
[19:36:06 CET] <kerio> oh lmao deepy isn't even here
[19:36:11 CET] <kerio> thebombzen: lossless h264
[19:36:33 CET] <kerio> i have some gray16 that i want to keep as is
[19:36:39 CET] <thebombzen> I never understood why people insist on using lossless h264
[19:36:50 CET] <kerio> because they need lossless compression?
[19:36:56 CET] <thebombzen> but there's other options
[19:37:00 CET] <kerio> indeed
[19:37:04 CET] <kerio> for instance, ffvhuff
[19:37:05 CET] <furq> lossless x264 is pretty good
[19:37:17 CET] <kerio> which doesn't perform significantly worse than the other lossless codecs on my data
[19:37:21 CET] <kerio> and is MINDBOGGINGLY faster
[19:37:30 CET] <furq> it was significantly worse last time i tried
[19:37:35 CET] <basiclaser> thebombzen: my entire command was pasted in the original link, and the output seems crazy though; it looks like my command as executed, with what seems to be an 'ls' injected into every new line of the command, so i'll inline it and try again https://ghostbin.com/paste/r5cre
[19:37:40 CET] <kerio> how did you try it on my data :O
[19:37:53 CET] <furq> i didn't. i'm saying your data is wrong
[19:38:03 CET] <kerio> furq: lossless h264 was kind of a last hurrah here
[19:38:12 CET] <kerio> gray16 packed into yuyv422
[19:38:36 CET] <furq> don't you mean yuv422p
[19:38:37 CET] <kerio> it did losslessly transfer however
[19:38:59 CET] <furq> i can imagine packing one format into another like that would fuck with the compressibility
[19:38:59 CET] <kerio> furq: surely if i have to misinterpret gray16, it's better to do so as yuyv422
[19:39:22 CET] <kerio> furq: it did result in a smaller file than ffvhuff, fwiw
[19:39:37 CET] <furq> x264 doesn't support packed formats
[19:39:39 CET] <thebombzen> basiclaser: [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f8e6a800600] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 1280x720, 1388 kb/s): unspecified pixel format
[19:39:41 CET] <thebombzen> your file is corrupt
[19:39:55 CET] <kerio> furq: yes, it had to be converted to yuv422p
[19:39:58 CET] <kerio> or whatever
[19:40:03 CET] <basiclaser> thebombzen: (heres a less weird output anyway https://ghostbin.com/paste/56b5v )
[19:40:17 CET] <kerio> but the important part is that my high bits were all luma and my low bits were all chroma
[19:40:32 CET] <furq> but yeah i wouldn't base your opinions on a weird test like that
[19:41:05 CET] <thebombzen> basiclaser: still stands
[19:41:12 CET] <thebombzen> [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fa50b009200] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 1280x720, 1388 kb/s): unspecified pixel format
[19:41:16 CET] <thebombzen> your mp4 file is corrupt
[19:41:18 CET] <furq> oh cool a new pastebin
[19:41:21 CET] <furq> i'll add it to the ball
[19:41:23 CET] <thebombzen> lol
[19:41:23 CET] <thebombzen> xD
[19:41:35 CET] <basiclaser> thebombzen: thanks, they all seem to play, but there is a weird black 2-3 seconds at the beginning of each clip
[19:41:42 CET] <kerio> furq: yeah but
[19:41:48 CET] <kerio> can you even compare anything to ffvhuff
[19:41:54 CET] <kerio> when ffvhuff takes like 50x less time
[19:42:02 CET] <furq> yes
[19:42:03 CET] <thebombzen> for me, ffvhuff requires bgr0 -> rgb24
[19:42:08 CET] <thebombzen> now normally swscale is slow
[19:42:16 CET] <thebombzen> but is bgr0 -> rgb24 slow? (It shouldn't be)
[19:42:19 CET] <thebombzen> (but idk)
[19:42:25 CET] <furq> ffv1 and lossless x264 compress much better in my experience
[19:42:28 CET] <furq> and they're not that much slower
[19:42:33 CET] <furq> maybe 3-4x
[19:42:43 CET] <thebombzen> ffv1 is generally better than ffvhuff
[19:42:49 CET] <thebombzen> given that ffvhuff is intra-only
[19:42:50 CET] <furq> last i checked it was like 3-4x faster but 50% bigger
[19:42:53 CET] <thebombzen> and ffv1 is inter
[19:43:08 CET] <kerio> thebombzen: ffv1 doesn't default to inter
[19:43:12 CET] <thebombzen> it doesn't?
[19:43:14 CET] <thebombzen> really?
[19:43:19 CET] <furq> ffv1 is still better intra-only
[19:43:21 CET] <thebombzen> I thought ffv1 wasn't intra-only
[19:43:35 CET] <furq> is the inter compression in ffmpeg yet
[19:43:49 CET] <furq> i know you can set -g and stuff but that didn't seem to make any difference when i tested
[19:43:51 CET] <kerio> furq: again, on my data, ffvhuff compresses at like 93x speed, compared to 5x for ffv1
[19:44:14 CET] <furq> i'll bear that in mind next time i have to compress gray16
[19:44:23 CET] <thebombzen> welp I just did a quick screenrecord
[19:44:24 CET] <kerio> gray16 at very low dynamic range :3
[19:44:43 CET] <thebombzen> libx264rgb compressed my screen record to 7 Mbps
[19:44:54 CET] <thebombzen> which is pretty good (although screengrabs of a DE are easy to capture)
[19:45:03 CET] <thebombzen> ffvhuff compressed it to 1 Gbps
[19:45:29 CET] <thebombzen> or more precisely 1050 Mbps
[19:45:35 CET] <furq> nice
[19:45:59 CET] <thebombzen> also you said that by default ffv1 is intra-only
[19:46:02 CET] <thebombzen> is there a way to disable this
[19:46:17 CET] <thebombzen> welp nvm seems to not matter
[19:46:42 CET] <furq> testsrc=s=1280x720 gives me 70fps with ffv1 and 280fps with ffvhuff
[19:46:58 CET] <thebombzen> lossless x264rgb compresses it to 7 Mbps and ffv1 to 200 Mbps
[19:47:15 CET] <thebombzen> and ffv1 only goes at 50 fps and x264rgb can do 60+
[19:47:35 CET] <furq> or 250fps if i actually write it to disk
[19:47:45 CET] <thebombzen> lol hard drive bottleneck
[19:47:54 CET] <kerio> well i mean
[19:48:00 CET] <furq> ffvhuff is 142mbit, ffv1 is 2.8mbit
[19:48:01 CET] <kerio> i'm not saying libx264 is not magic
[19:48:03 CET] <kerio> because it is
[19:48:12 CET] <basiclaser> thebombzen: could you comment on my commands that i used for initially chunking the sourcefile  https://ghostbin.com/paste/3pom2
[19:48:45 CET] <furq> i admit neither your source or testsrc is representative, but yeah
[19:48:46 CET] <thebombzen> Why are you chunking the source file
[19:48:58 CET] <thebombzen> Why not just -i trumpspeech.mp4 nine times
[19:49:02 CET] <kerio> good ol fat32
[19:49:04 CET] <thebombzen> with a different -ss before the -i
[19:49:46 CET] <thebombzen> if you put -ss before -i it seeks the input file
[19:49:56 CET] <basiclaser> thebombzen: oh so you mean specify time range within the filter-complex/overlay code itself?
[19:49:58 CET] <basiclaser> i see
[19:50:19 CET] <thebombzen> putting -ss 5 after -i, that will discard the first five seconds
[19:50:28 CET] <thebombzen> putting -ss before -i will seek 5 seconds into the input
[19:50:32 CET] <basiclaser> well i originally wasn't banking on using the same software for every stage of the project so i broke it into steps
[19:50:38 CET] <thebombzen> why not?
[19:51:12 CET] <basiclaser> i asked in here at first and the person listed 3 different softwares for getting the task done,
[19:51:41 CET] <basiclaser> lemme check - "you would generate still frames from movies with mplayer, combine them to one mega-frame each set of four using imagemagick, make a new movie out of them using mencoder"
[19:51:43 CET] <thebombzen> that does not mean you need to use all of them at once
[19:52:03 CET] <thebombzen> that sounds like trolling
[19:52:04 CET] <furq> are you sure that was in here
[19:52:08 CET] <basiclaser> yep
[19:52:12 CET] <furq> that doesn't sound like something we'd recommend at all
[19:52:14 CET] <basiclaser> oh wait
[19:52:22 CET] <thebombzen> that sounds like something CounterPillow would say in #mpv
[19:52:22 CET] <basiclaser> sorry that was in #clojure ha
[19:52:39 CET] <thebombzen> in particular, there's several things wrong with that
[19:52:44 CET] <lanc> Hi all, starting to research libavcodec to understand audio decoding/encoding for a project. Could anyone point me to good resources to understand how the process of decoding works in general? I'm currently reading through the avcodec docs, but the more resources the merrier
[19:52:45 CET] <thebombzen> 1, mencoder is deprecated
[19:52:57 CET] <furq> 2, mplayer is deprecated
[19:52:58 CET] <thebombzen> 2, mplayer isn't in active development (use mpv instead)
[19:53:00 CET] <furq> 3, imagemagick sucks ass
[19:53:04 CET] <basiclaser> so i have a js script which recursively executes ffmpeg that i shared above
[19:53:32 CET] <basiclaser> but you're saying i can do this all inline in the overlay code when i specify the sources, thats neat
[19:53:45 CET] <thebombzen> no, I didn't say that
[19:53:52 CET] <thebombzen> you shouldn't even be overlaying
[19:53:56 CET] <thebombzen> you should be using hstack and vstack
[19:54:06 CET] <basiclaser> bbbbut it works :D
[19:54:10 CET] <thebombzen> it's slow af
[19:54:17 CET] <thebombzen> hstack and vstack is less ugly and not a piece of crap
[19:54:30 CET] <thebombzen> http://ffmpeg.org/ffmpeg-filters.html#hstack
[19:55:13 CET] <basiclaser> thanks ill certainly check it out, would you mind clarifying where i should be specifying the seeking/ranges ?
[19:55:28 CET] <thebombzen> if you put: -ss TIME -i input
[19:55:33 CET] <thebombzen> it'll seek to TIME in that input
[19:55:43 CET] <thebombzen> if you have two inputs you can also do -ss TIME1 -i input1 -ss TIME2 -i input2
[19:55:51 CET] <thebombzen> that'll seek to TIME1 in input1 and TIME2 in input2
[19:57:48 CET] <basiclaser> yeh i think thats what i meant that you meant
[19:59:00 CET] <basiclaser> i will try this out, does it seem clean and not shit? -> https://ghostbin.com/paste/rwxn4
[20:02:09 CET] <llogan> lanc: did you see doc/examples?
[20:03:54 CET] <llogan> basiclaser: looks ok
[20:08:23 CET] <lanc> @llogan I have been! Mostly just curious as to if anyone has any additional, useful resources
[20:09:07 CET] <llogan> there is the doxygen stuff: http://ffmpeg.org/doxygen/trunk/index.html
[20:18:06 CET] <basiclaser> thebombzen: "putting -ss 5 after -i, that will discard the first five seconds
[20:18:06 CET] <basiclaser> 02:50:28 putting -ss before -i will seek 5 seconds into the input"
[20:18:20 CET] <thebombzen> yes I did say that
[20:18:21 CET] <basiclaser> are those not the same thing? how would i define a range?
[20:18:28 CET] <thebombzen> a range?
[20:18:33 CET] <thebombzen> and yea those are not the same thing
[20:18:37 CET] <basiclaser> 0 - 5
[20:18:39 CET] <thebombzen> for single input single output they do the same thing
[20:19:04 CET] <thebombzen> but for multiple inputs, putting -ss before -i only seeks that one input
[20:19:18 CET] <thebombzen> putting it after all the -i will cause the first 5 seconds of the output to be discarded for all inputs
[20:19:25 CET] <basiclaser> ffmpeg -ss 0 -i -ss 10 trumpspeech.mp4
[20:19:35 CET] <basiclaser> thats what i took away from the comment for defining a range :P
[20:19:35 CET] <thebombzen> -ss 0 does nothing
[20:19:38 CET] <thebombzen> "seek to position zero"
[20:19:47 CET] <kepstin> basiclaser: that doesn't make sense, you can't start a video in two different places...
[20:19:57 CET] <thebombzen> if you're looking to take just the first ten seconds, do ffmpeg -i input -t 10 output
[20:20:06 CET] <thebombzen> -t limits the duration of the output
[20:20:20 CET] <thebombzen> so -t 10 will limit the output to 10 seconds (just stop encoding after that)
[20:20:35 CET] <basiclaser> aha, i have the pieces now thanks
[20:21:09 CET] <basiclaser> ffmpeg -ss 10 -i input -t 10 output
[20:21:23 CET] <basiclaser> that would take 10 to 20 right
[20:22:41 CET] <kepstin> basiclaser: yep.
[20:23:23 CET] <basiclaser> also drawing a 3*3 grid with vstack/hstack, how'd i go about that?
[20:23:31 CET] <basiclaser> same as overlay?
[20:23:59 CET] <kepstin> use hstack three times to put three videos side by side each, then vstack once to vertically stack the hstacks
[20:24:05 CET] <kepstin> or in the other order if you prefer
[20:24:47 CET] <basiclaser> and that would work with 9 unique inputs?
[20:24:53 CET] <basiclaser> ha ok ill try
[20:25:19 CET] <basiclaser> i find ffmpeg hard to read coming from a different background
[20:30:18 CET] <kepstin> yeah, I make heavy use of this sort of tiling via filters, but I do it with scripts that build the ffmpeg command lines and filters, rather than by hand most of the time.
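For reference, a sketch of a full 3x3 grid along those lines, assuming nine same-sized inputs (filenames are placeholders):

    ffmpeg -i v1.mp4 -i v2.mp4 -i v3.mp4 -i v4.mp4 -i v5.mp4 -i v6.mp4 -i v7.mp4 -i v8.mp4 -i v9.mp4 \
      -filter_complex "[0:v][1:v][2:v]hstack=inputs=3[r1];[3:v][4:v][5:v]hstack=inputs=3[r2];[6:v][7:v][8:v]hstack=inputs=3[r3];[r1][r2][r3]vstack=inputs=3[grid]" \
      -map "[grid]" grid.mp4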
[20:36:48 CET] <Duality> i am having a weird issue, my screen is green when i send all zeros with this command "ffmpeg -s 128x64 -f rawvideo -i - -an -f nut -c:v rawvideo pipe:1 | ffplay -i -"
[20:37:07 CET] <Duality> any ideas why? am i missing anything obvious :)?
[20:37:09 CET] <furq> add -pixel_format rgb before -i
[20:37:25 CET] <basiclaser> cool the original corruption was as you taught me, the first frames were being discarded due to where i put -ss
[20:37:26 CET] <Duality> for ffmpeg or ffplay ?
[20:37:29 CET] <furq> ffmpeg
[20:39:43 CET] <thebombzen> basiclaser: that was more due to you using -codec copy with -ss and -t
[20:39:45 CET] <thebombzen> you can't do that
[20:39:55 CET] <thebombzen> because you can't discard video if you don't decode it
[20:40:42 CET] <Duality> furq: it tells me no such format rgb
[20:41:01 CET] <furq> rgb24 then
[20:41:39 CET] <Duality> furq: your awesome :) that works :D
[20:45:14 CET] <Duality> maybe an odd question but can i set pixelsize with ffplay ?
[20:45:34 CET] <Duality> like scale pixels by x
[20:45:55 CET] <Duality> or do i have to do that in ffmpeg :)
[20:46:25 CET] <thebombzen> well one pixel is one pixel
[20:46:36 CET] <thebombzen> if you're trying to scale the video you change the number of pixels
[20:46:41 CET] <thebombzen> not the size of a pixel
[20:47:00 CET] <Duality> i understand :)
[20:47:23 CET] <Duality> i don't really wana scale in ffmpeg, but the more i think about it, uh maybe i should
[20:48:11 CET] <thebombzen> things like dpi aren't relevant for video given that you're not printing them
[20:48:18 CET] <thebombzen> videos are only displayed on monitors
[20:49:10 CET] <Duality> yes i understand, but i used to have an application that takes pixel data and scales it by some amount in x and y, that is why i was wondering
[20:53:24 CET] <llogan> Duality: ffplay -vf "scale=iw*4:-1:flags=neighbor" input.foo
[20:53:48 CET] <llogan> https://ffmpeg.org/ffmpeg-filters.html#scale
[20:54:15 CET] <llogan> default flag is bicubic
[20:54:19 CET] <llogan> https://ffmpeg.org/ffmpeg-scaler.html#sws_005fflags
[20:54:52 CET] <furq> oh can you pass flags= to scale now
[20:55:20 CET] Action: llogan hasn't used -sws_flags in a long time
[20:56:52 CET] <thebombzen> wait why is swresample a thing
[20:57:00 CET] <thebombzen> isn't it just a one-dimensional version of swscale
[20:57:36 CET] <furq> swresample is for audio
[20:58:26 CET] <Duality> llogan: your awesome :)
[21:00:45 CET] <llogan> i don't know...pulled a muscle picking up the dog's ball this morning.
[21:00:58 CET] <Diag> Duality: you're*
[21:01:06 CET] <Duality> you are
[21:01:08 CET] <Diag> lol
[21:01:08 CET] <Duality> and i am sorry :D
[21:01:22 CET] <Diag> :D
[21:01:30 CET] <llogan> sure, i can install some 12'x56" drywall but not pick up a ball
[21:01:32 CET] <Duality> this is irc, do i really need to be that correct about it ?
[21:01:45 CET] <Diag> > do i really need to be that correct about it ?
[21:01:47 CET] <Diag> we got a code red
[21:01:53 CET] <furq> we've
[21:01:53 CET] <Diag> grammar nazis assimilate
[21:02:11 CET] <Diag> furq: yeah but they dont say weve
[21:02:22 CET] <furq> don't
[21:02:34 CET] <Diag> just like i dont use apostrophes and shit
[21:02:52 CET] <Diag> or opus
[21:02:54 CET] <Diag> I use wav
[21:03:03 CET] <llogan> i use rawaudio
[21:03:06 CET] <Diag> ?
[21:03:14 CET] <Diag> tf is rawaudio
[21:03:16 CET] <fridgefire> Helo, welkom
[21:03:23 CET] <Diag> wilkommen
[21:04:11 CET] <Diag> wie gehts?
[21:05:49 CET] <furq> geht's
[21:05:56 CET] <Diag> what
[21:05:59 CET] <Diag> is that even a thing
[21:06:19 CET] <furq> wie geht es
[21:07:15 CET] <Diag> im about to wie your gehts bub
[21:08:27 CET] <thebombzen> llogan: it's pcm_s16le lol
[21:08:39 CET] <thebombzen> although realistically it should be rawaudio
[21:08:54 CET] <thebombzen> or at least there should be -c pcm with various sample formats
[21:09:10 CET] <thebombzen> the way that we have -c rawvideo with various pixel formats
[21:27:28 CET] <llogan> it was a joke
[21:37:25 CET] <hyponic> I am doing transcoding of live streams and sometimes the video and audio get out of sync. is there a way to make ffmpeg detect that somehow? i am using a script for transcoding and i would like to be able to detect when that happens without actually watching the live stream. is that possible?
[21:41:25 CET] <faLUCE> when common graphic editors have to modify a raw picture, do they present the user with a chosen common format (i.e. RGB) and then internally convert it to the picture's format?
[21:42:55 CET] <faLUCE> or do they give a YUV color picker for YUYV, RGB for rgb etc. ?
[21:43:31 CET] <faLUCE> or do they give a YUV color picker for YUV (*), RGB for rgb etc. ?
[21:43:47 CET] <Diag> faLUCE: what do you mean?
[21:44:50 CET] <Diag> In photoshop you can set the graphic mode, whether it be indexed, rgb 8888, rgb 16161616, rgb32323232, grayscale, bitmap
[21:45:17 CET] <Diag> color picker is either RBGA, HSUV(iirc, hue, saturation, somethin something)
[21:45:21 CET] <Diag> lemme look
[21:45:59 CET] <faLUCE> Diag: so, when you open a rgb picture, then you can modify it with a chosen format, and you are not forced to use the native format?
[21:46:10 CET] <Diag> faLUCE: you can pick whatever format youd like, sec
[21:46:17 CET] <faLUCE> Diag: ok thnks
[21:46:38 CET] <efface> Is there a way to make ffprobe exit if it can't connect to a udp? i tried the timeout option but that just makes valid udp error out
[21:46:45 CET] <Diag> faLUCE: http://puu.sh/tIM2W/08835c3fbe.jpg
[21:46:54 CET] <Diag> is that what youre asking?
[21:47:37 CET] <Diag> ah thats what it was, HSB, hue saturation brightness
[21:47:44 CET] <faLUCE> Diag: yes, but in the menu I see only RGB
[21:47:52 CET] <faLUCE> I don't see YUV for example
[21:48:01 CET] <Diag> faLUCE: i dont think they use yuv for pictures?
[21:48:03 CET] <Diag> http://puu.sh/tIM9a/665d875411.jpg
[21:48:05 CET] <Diag> color picker
[21:48:26 CET] <faLUCE> Diag: then, always RGB
[21:48:48 CET] <faLUCE> so, they convert RGB to the native fmt
[21:49:04 CET] <Diag> ...?
[21:49:15 CET] <faLUCE> Diag: there's not a YUV color picker
[21:49:23 CET] <Diag> no not yuv
[21:49:34 CET] <Diag> faLUCE: why would you want yuv for images?
[21:49:40 CET] <Diag> It makes no sense to not do rgb
[21:49:45 CET] <faLUCE> Diag: I'm writing an interface
[21:49:48 CET] <Diag> oh
[21:50:10 CET] <faLUCE> and I don't know if to provide YUV and other fmts to the user, for color pickers
[21:50:24 CET] <Diag> i wouldnt bother with yuv color picking
[21:50:29 CET] <Diag> most people are uhh
[21:50:33 CET] <Diag> brought up with hsb
[21:50:48 CET] <faLUCE> Diag: yes, in fact this discourages me from providing other fmts too
[21:51:03 CET] <Diag> lol
[21:51:10 CET] <faLUCE> but in this case I have to make an internal conversion
[21:52:16 CET] <faLUCE> I don't know if it is the best solution
[21:53:31 CET] <faLUCE> maybe a better solution is to convert the image into rgb, modify with rgb and then reconvert to yuv
[21:53:44 CET] <furq> that's not lossless iirc
[21:53:54 CET] <furq> although if you're exporting to yuv then it's probably jpeg anyway so it doesn't really matter
[21:54:10 CET] <faLUCE> furq: but the same problem applies when modifying a pixel
[21:55:06 CET] <faLUCE> I mean: I can open a YUV image, modify with a RGB color picker (which internally converts the pixel into YUYV) and then save as YUV
[21:55:32 CET] <faLUCE> this is lossless as well, although restricted to the pixel I modify
[21:55:49 CET] <faLUCE> so... what to do?
[21:56:35 CET] <faLUCE> (this is NOT lossless)
[22:04:36 CET] <furq> what yuv export format are you planning on using
[22:04:47 CET] <furq> if it's just jpeg then i wouldn't worry too much about it
[22:06:00 CET] <furq> it might also help to use 16-bit rgb (or more) internally, idk
[22:06:50 CET] <PePeYoTe> Hi, I would like to convert a folder that contains .wma files to .mp3. But I can't seem to understand how it works. Can someone help me?
[22:07:23 CET] <DHE> PePeYoTe: the super-quick version is: ffmpeg -i input.wma -b:a 128k output.mp3    and adjust for your desired bitrate, etc
[22:07:56 CET] <kepstin> PePeYoTe: and ffmpeg doesn't have built in batch processing, so you'd have to write a script around ffmpeg to convert an entire directory
[22:08:08 CET] <PePeYoTe> yes of course, but I'd like the whole folder at once
[22:08:21 CET] <DHE> shell scripting
[22:08:45 CET] <PePeYoTe> kepstin: so I'll have to do it one by one?
[22:09:21 CET] <DHE> in bash you can do: for i in *.wma; do ffmpeg -i $i -b:a 128k $(basename $i .wma).mp3 ; done
[22:09:24 CET] <kepstin> no, you can write a script. This is a one-liner in bash.
[22:09:26 CET] <DHE> someone who knows bash better could improve that
[22:09:53 CET] <kepstin> DHE: "${i%.wma}.mp3" instead of basename, but otherwise good :)
[22:10:53 CET] <DHE> yeah that's the kind of thing I need to learn
[22:11:13 CET] <PePeYoTe> I'm quite the newbie, so do I replace the * with my folder name?
[22:11:32 CET] <kepstin> PePeYoTe: first, to confirm, what os are you on?
[22:11:47 CET] <PePeYoTe> ubuntu 16.04LTS
[22:12:34 CET] <kepstin> PePeYoTe: alright, then just open a terminal, cd into the directory containing the music files, and run:
[22:12:35 CET] <kepstin> for i in *.wma; do ffmpeg -i "$i" -b:a 128k "${i%.wma}.mp3" ; done
[22:12:45 CET] <furq> ${i%.*}
[22:12:48 CET] <furq> (either works)
[22:13:12 CET] <kepstin> PePeYoTe: you can edit that to change the '128k' to a different bitrate, or use "-q:a 2" or something for higher-quality vbr
[22:13:25 CET] <furq> yeah you should really use vbr with lame
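The same loop with VBR instead of a fixed bitrate, per the suggestion above (lower -q:a values mean higher quality):

    for i in *.wma; do ffmpeg -i "$i" -q:a 2 "${i%.wma}.mp3"; done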
[22:14:07 CET] <PePeYoTe> ok I'll try
[22:14:16 CET] <kepstin> given that you're transcoding (presumably lossy) wma to mp3, you're gonna get a quality drop anyways :/
[22:14:36 CET] <furq> sure
[22:14:51 CET] <PePeYoTe> yep, but my phone doesn't read .wma...
[22:15:09 CET] <furq> is this just one directory or do you have subdirectories
[22:17:10 CET] <PePeYoTe> seems to be working
[22:17:44 CET] <PePeYoTe> I'll keep it simple to one directory at a time
[22:19:42 CET] <faLUCE> furq: it's more a theoretical problem, I'm writing an interface of an API, and I have to provide a get/set pixel to the user for multiple fmts
[22:20:17 CET] <faLUCE> so I was modeling it with common image editors
[22:22:26 CET] <PePeYoTe> Thanks DHE, kepstin and furq for the help, it worked perfectly!
[23:52:14 CET] <Mateon1> Oh, I didn't know there was a special pastebin for this channel, is it okay if I already have things related to my question pasted somewhere else?
[23:52:52 CET] <llogan> you can pastebin it anywhere
[23:53:34 CET] <Mateon1> My question relates to lavfi filters in ffmpeg (and ffplay). I've been using a filter for a long time in ffplay, and tried converting it to ffplay so I can create an avi with the outputs I need. So far I had no luck (as detailed here): https://ipfs.io/ipfs/QmT7AHGpmWn52hZ42vnB743pGbNnTh74UQKvrXzjiWwGNn
[23:54:23 CET] <Mateon1> converting it to ffmpeg*
[23:55:17 CET] <llogan> use -filter_complex instead of -vf and use the correct -map
[23:55:31 CET] <llogan> and avoid amovie
[23:55:52 CET] <Mateon1> Whoops, apparently I mistyped one of the commands
[23:55:59 CET] <DHE> some channels are anal about the pastebin used. I just want one that doesn't suck. no javascript requirements to render, no ads, a raw video link, etc.
[23:56:03 CET] <Mateon1> Anyway, what is the correct -map?
[23:56:28 CET] <llogan> appears to be: -map "[out3]"
[23:56:39 CET] <Mateon1> Will that catch all three video outputs?
[23:56:59 CET] <DHE> there's only one video output. you used vstack to assemble them into a single output
[23:57:08 CET] <DHE> oh wait, no you didn't...
[23:57:20 CET] <llogan> i assumed the same thing. lazy reader.
[23:57:22 CET] <Mateon1> I have [out1] as audio, and out0, out2 and out3 for video
[23:58:05 CET] <DHE> well, I would personally suggest adding [out0][out2][out3]hstack=inputs=3[video]  and then you can do -map [video] -map [out1] to make a "normal" video
[23:58:29 CET] <Mateon1> Well, the videos are not the same size
[23:58:47 CET] <DHE> or something to that effect. you can do -map [out0] -map [out2] -map [out3] to make a 3-video output but you'll need a format that supports it (mkv) and playing it becomes interesting as well
[23:59:00 CET] <Mateon1> Ah
[23:59:16 CET] <DHE> it's like a multi-angle DVD or whatever...
[23:59:20 CET] <Mateon1> Okay, mkv is fine, and I plan to use vlc for viewing as well
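A sketch of the resulting command shape; the actual filtergraph from the paste is elided as a placeholder:

    ffmpeg -i input -filter_complex "<graph producing [out0][out1][out2][out3]>" \
      -map "[out0]" -map "[out2]" -map "[out3]" -map "[out1]" output.mkv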
[00:00:00 CET] --- Wed Feb  1 2017


