[Ffmpeg-devel-irc] ffmpeg.log.20170731
burek
burek021 at gmail.com
Tue Aug 1 03:05:02 EEST 2017
[00:00:06 CEST] <JEEB> and you get the parameters for the cross compilation from the config posted in your build
[00:00:39 CEST] <JEEB> --enable-cross-compile --cross-prefix=sh4-linux- --target-os=linux --arch=sh4
[00:01:52 CEST] <paul_> I don't have the STM cross-compiler :(
[00:02:02 CEST] <JEEB> well then you're fucked
[00:03:07 CEST] <paul_> Can't I compile it on my laptop? Can't I just set sh4 in the build settings?
[00:03:44 CEST] <JEEB> you need a cross-compiler to compile for SH4
[00:04:26 CEST] <paul_> Do you mean this? # apt-get install gcc-sh4-linux-gnu g++-sh4-linux-gnu
[00:04:54 CEST] <JEEB> yes, the cross-prefix seems to be sh4-linux
[00:05:03 CEST] <JEEB> so sh4-linux-gcc
[00:05:05 CEST] <paul_> and install this on my computer
[00:08:37 CEST] <paul_> Can i do according to this description? https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu
[00:09:25 CEST] <JEEB> you don't even have to go that far, that's meant for x86(_64)
[00:09:41 CEST] <JEEB> make sure you have your toolchain (aka you can run "sh4-linux-gcc")
[00:10:02 CEST] <JEEB> then you have git and that's more or less it?
[00:11:15 CEST] <JEEB> then `git clone "https://git.videolan.org/git/ffmpeg.git" && mkdir -p sh4_build && pushd sh4_build && ../ffmpeg/configure --enable-cross-compile --cross-prefix=sh4-linux- --target-os=linux --arch=sh4`
[00:11:47 CEST] <JEEB> then see how badly the configuration step fails
[00:12:06 CEST] <JEEB> if it passes, you can `make` to compile yourself a ffmpeg binary to test out
[00:12:22 CEST] <JEEB> if not, ffbuild/config.log has verbose logging
[00:13:05 CEST] <paul_> Slowly :)
[00:22:40 CEST] <paul_> ok git clone is ready
[00:23:16 CEST] <paul_> Extracted to home/ffmpeg
[00:24:16 CEST] <paul_> and installed --> apt-get install gcc-sh4-linux-gnu g++-sh4-linux-gnu, too
[00:27:25 CEST] <paul_> On the computer this looks like -> https://pastebin.ca/3849179 so what should I write in the ffmpeg compilation?
[00:29:21 CEST] <JEEB> I have already noted basic compilation steps based on the configuration of that SH4 build you had if you can run sh4-linux-gcc
[00:29:29 CEST] <thebombzen> I have never seen that architecture before :O
[00:29:40 CEST] <JEEB> it's what was in dreamcast
[00:30:05 CEST] <paul_> no, the decoder is an ADB 5800SX
[00:33:16 CEST] <paul_> log --> https://pastebin.ca/3849180
[00:33:56 CEST] <JEEB> that's why I asked
[00:34:01 CEST] <JEEB> can you run sh4-linux-gcc
[00:34:12 CEST] <JEEB> `../ffmpeg/configure: 892: ../ffmpeg/configure: sh4-linux-gcc: not found`
[00:34:45 CEST] <paul_> how do I run it?
[00:35:09 CEST] <JEEB> well how do you run something on the terminal :P
[00:35:26 CEST] <JEEB> if you don't have that compiler then you have to install it
[00:35:57 CEST] <paul_> i have installed sh4-linux-gcc
[00:36:25 CEST] <JEEB> then it's not in your PATH
[00:36:33 CEST] <JEEB> check what that package contains
[00:37:15 CEST] <paul_> How?
[00:37:45 CEST] <JEEB> dpkg-query -L <package_name>
[00:37:56 CEST] <JEEB> as I just googled "ubuntu list files installed by package"
[00:47:08 CEST] <paul_> in /usr/bin I have the file sh4-linux-gnu-gcc
[00:49:02 CEST] <JEEB> ok, then your cross-prefix is "sh4-linux-gnu-"
[00:49:25 CEST] <JEEB> do note that it is likely that this toolchain is different to STM's, so it might or might not work
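A minimal sketch of the adjusted build steps, assuming the Ubuntu sh4-linux-gnu- toolchain found above; as noted, it may still differ from STM's toolchain, so the resulting binary is not guaranteed to run on the box:

    # from inside the sh4_build directory created earlier
    ../ffmpeg/configure --enable-cross-compile --cross-prefix=sh4-linux-gnu- --target-os=linux --arch=sh4
    # if configure passes, build the ffmpeg binary
    make -j"$(nproc)"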
[00:54:06 CEST] <paul_> what is a toolchain?
[00:54:38 CEST] <DHE> the collection of software that together makes your cross compiler (compiler, assembler, linker, and related tools like ar)
[00:57:18 CEST] <paul_> .sh ?
[02:27:16 CEST] <paul_> are you there?
[02:45:32 CEST] <paul_> thx and bye
[03:54:53 CEST] <notdaniel> has anyone used nvenc with a card like a P400? nvidia claims support, but i cant imagine it performs that well given the specs of the card
[03:55:16 CEST] <notdaniel> but perhaps i'm underestimating just how well optimized those cards are for it
[04:03:53 CEST] <furq> if it is supported then it'll be exactly the same as any other pascal card
[04:04:38 CEST] <furq> the gpu itself is irrelevant unless you're doing cuda/opencl filtering etc
[04:04:48 CEST] <furq> the nvenc asic is identical across all cards of the same generation
[04:05:11 CEST] <furq> the only difference is that some cards are locked to two concurrent streams in firmware, and some aren't
[04:12:57 CEST] <notdaniel> ah, so the cuda core count only matters if youre doing something extra
[04:13:17 CEST] <notdaniel> furq, not just a straight up encode to h.264 4k at 60p or something
[04:13:45 CEST] <furq> right
[04:13:57 CEST] <furq> iirc some quadros still have the two streams limitation
[04:14:08 CEST] <notdaniel> yeah that card lists two
[04:14:10 CEST] <furq> and nvidia are not exactly forthcoming with that information
[04:14:12 CEST] <furq> oh ok
[04:14:21 CEST] <notdaniel> but i do not need many
[04:14:21 CEST] <furq> well yeah literally any consumer pascal card will perform the same
[04:14:34 CEST] <furq> so if you just want 4th-gen nvenc then there are probably cheaper ways
[04:15:02 CEST] <notdaniel> have a recommendation? i thought a $200 card was pretty damn cheap
[04:15:12 CEST] <notdaniel> (working with K80s at work and such mostly)
[04:19:21 CEST] <notdaniel> it's not a crazy use case, mostly just capturing from OBS
[04:31:58 CEST] <furq> i've never really touched nvenc so don't take any advice from me as gospel
[04:32:08 CEST] <furq> but the GTX 1050 will be the cheapest card with 4th-gen nvenc
[04:32:20 CEST] <furq> if that's literally all you need it for
[04:34:40 CEST] <stevenliu> https://developer.nvidia.com/nvidia-video-codec-sdk
[04:34:54 CEST] <stevenliu> why not refer to this site?
[04:50:17 CEST] <notdaniel> stevenliu, i did, i just wasnt aware that all the cards processed nvenc the same, whether the $180 one or the $4000
[04:51:55 CEST] <stevenliu> $4000, rich man, haha. A K80 is ok for 4k at 60fps, but hevc may be a problem
[04:52:19 CEST] <notdaniel> 4k at 60 is the intended use case here
[04:52:23 CEST] <stevenliu> Kepler may not support hevc
[04:52:34 CEST] <furq> it doesn't
[04:52:35 CEST] <stevenliu> if AVC, Kepler is ok
[04:52:40 CEST] <furq> he's not using a K80 though
[04:52:42 CEST] <notdaniel> using k80s for work. this is not for that
[04:52:46 CEST] <furq> ^
[04:52:54 CEST] <notdaniel> this is for OBS streaming on my personal time
[04:53:04 CEST] <notdaniel> ive got a 1080 lying around somewhere
[04:53:19 CEST] <furq> problem solved then
[04:54:03 CEST] <notdaniel> was hoping for this to be a separate machine. would the P400 perform better/worse than 1050 or 1080? if theyre all using the same chip as far as nvenc
[04:54:14 CEST] <furq> the encoding will be basically identical
[04:54:33 CEST] <notdaniel> basically?
[04:54:39 CEST] <furq> and the 1080 has more cuda cores and vram, so it should beat the P400 for filtering etc
[04:54:52 CEST] <furq> basically in that nvidia apparently make small undocumented changes within generations
[04:55:07 CEST] <furq> and idk which of those SKUs is newest
[04:55:16 CEST] <furq> (or if it makes any difference at all)
[04:55:48 CEST] <furq> if you have a spare 1080 then it's not worth spending any money
[04:56:22 CEST] <notdaniel> i wanted to use the 1080 for an editing system
[04:56:32 CEST] <notdaniel> so still might be looking to buy a cheap card for this other use
[04:56:37 CEST] <furq> ah
[04:56:45 CEST] <furq> well yeah a 1050 should be more or less the same
[04:56:48 CEST] <notdaniel> cant seem to even find where nvidia lists nvenc chipsets on the non-quadro cards. which i think is my problem
[04:57:01 CEST] <furq> nvidia are famously circumspect with nvenc stuff
[04:57:07 CEST] <notdaniel> thus here i am
[04:57:10 CEST] <furq> lol
[04:57:29 CEST] <furq> yeah we get a lot of people who buy 1030s with no nvenc, or quadros which can only do two streams
[04:57:31 CEST] <notdaniel> but if theyre all pascal, then, should be fine? i thought some of the geforce cards werent actually being fully utilized with nvenc
[04:57:47 CEST] <furq> that's my understanding of it, yeah
[04:57:58 CEST] <furq> if you're not doing any filtering then they're all identical
[04:58:06 CEST] <furq> encode and decode is all done by the same asic on all pascal cards
[04:58:15 CEST] <furq> except the ones that don't have it at all
[04:59:11 CEST] <furq> it will use up some vram, so it's better to have more, but the 1050 and P400 both have 2GB anyway
[04:59:51 CEST] <notdaniel> cant find anything about stream count, etc. on the 1050. since that varies even with pascal cards
[04:59:57 CEST] <notdaniel> i know the p400 does 2
[05:00:03 CEST] <furq> it's always two for consumer cards
[05:00:08 CEST] <furq> it's unlocked on high-end quadros
[05:00:09 CEST] <notdaniel> ah. good to know
[05:00:23 CEST] <furq> it's not an actual hardware limitation, it's just locked to two in firmware
[05:00:28 CEST] <furq> because $$$
[05:00:54 CEST] <notdaniel> sure
[05:01:31 CEST] <notdaniel> only worked with nvenc in THE CLOUD so didnt even realize the consumer cards handled it
[05:02:00 CEST] <notdaniel> great info, thanks
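A hedged sketch of the kind of encode discussed above (straight 4K60 H.264 on the NVENC ASIC), assuming an ffmpeg build with nvenc enabled and a Pascal card; the input file, bitrate and preset are only illustrative placeholders:

    # hardware H.264 encode; 20M is an example bitrate for 4K60, audio is passed through
    ffmpeg -i capture_4k60.mp4 -c:v h264_nvenc -preset slow -b:v 20M -c:a copy out.mp4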
[06:20:15 CEST] <bencc> https://github.com/FFmpeg/FFmpeg/blob/master/libavformat/hlsenc.c#L783
[06:20:28 CEST] <bencc> shouldn't it say:
[06:20:34 CEST] <bencc> hls->max_seg_size == 0 ?
[06:29:30 CEST] <codebam> when encoding from flac to opus, I made the file output name .opus.ogg, however by doing this it makes it a video with the embedded album art as the photo. how can I force it to not output video?
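This question goes unanswered in the log; a minimal sketch of one common fix, assuming the album art shows up as an attached-picture video stream, is to map only the audio:

    # select only audio streams so the cover art is not muxed as a video track
    ffmpeg -i input.flac -map 0:a -c:a libopus -b:a 160k output.opus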
[06:29:52 CEST] <kepstin> hmm, have they actually confirmed that the 1030 has no nvenc?
[06:30:07 CEST] Action: kepstin wasn't sure from the info he could find
[06:35:28 CEST] <furq> https://devtalk.nvidia.com/default/topic/1015541/does-video-codec-sdk-8-0-support-gt-1030-/
[06:35:32 CEST] <furq> this is the best source i can find
[06:35:52 CEST] <furq> that philipl appears to be the same one who has 120 ffmpeg commits
[06:35:59 CEST] <furq> so that's probably trustworthy
[06:47:25 CEST] <kepstin> that said, I have a 1030 and i tried it out of curiosity, and it gave a strange error, but I had no idea whether it was a driver issue or a card limitation.
[06:47:41 CEST] <kepstin> (on linux)
[08:30:56 CEST] <c3r1c3-Win> If it's not a GTX, then it doesn't have NVENC support.
[08:44:12 CEST] <squ> not really
[08:44:36 CEST] <squ> https://www.geforce.com/hardware/desktop-gpus/geforce-gt-710
[08:44:40 CEST] <squ> this one has
[08:58:49 CEST] <furqan> notdaniel: curious what you're using this all for, i work on something that sounds like it's exactly in that space
[08:59:32 CEST] <furqan> when you go from a 1050 -> 1080, the way nvenc operates is that it'll reduce the amount of bandwidth available to process everything else
[08:59:37 CEST] <furqan> at least on the consumer side
[08:59:50 CEST] <furqan> we use aws g3 instances in the cloud as well
[09:26:35 CEST] <notdaniel> furqan, for work, yeah we run a video platform. built the transcoding bit in-house with nvenc on g2 instances in order to stop paying zencoder 15k/month
[09:27:00 CEST] <notdaniel> but the query about 1050/1080/etc is just for a personal streaming thing
[09:52:07 CEST] <atomnuker> it would probably be cheaper and better to get something with a ton of cores and run x264 veryfast
[11:05:31 CEST] <rev0n> Does anybody know how to concatenate realtime webm chunks? I have webm blobs coming over a websocket to the ffmpeg server, but the problem is that each chunk acts like a full movie (getUserMedia() generated blobs). I pipe them to ffmpeg and ffmpeg crashes because it does not expect headers in each and every file. Any idea how I should prepare each chunk other than the first one so ffmpeg understands the incoming data correctly?
[11:17:09 CEST] <paul__> Hi, it's me again :)
[11:25:52 CEST] <furqan> notdaniel: cool, i'm working on a tool that's for streaming (like obs), we're in alpha feel free to check us out: bebo.com
[11:40:27 CEST] <squ> windows 10 only?
[11:53:05 CEST] <paul__> ye
[12:20:09 CEST] <sine0> ok this is going to sound weird. im pulling up some cctv of a robbery that happened at work. it was at night, and the footage is a little bit motion blurred when im stepping frame by frame. is there a way to blend the images or some way of getting a more complete image? i know its the human eye that makes it watchable when its moving...
[12:21:43 CEST] <sine0> I just about know about the ivtc and some issue with scan lines.. or am i way off
[12:36:56 CEST] <squ> google for deblur
[12:38:11 CEST] <squ> https://www.youtube.com/watch?v=IlcLa5JeTrE
[13:14:06 CEST] <sine0> ta
[14:33:20 CEST] <Renari> Hi guys, I'm trying to make a copy of a file that has a lot of attachments as well as subtitles and metadata. I'm doing this: https://gist.github.com/Renari/b3beae9d9023b713b1f6a5920d1b36d3#file-removeda-sh I'm not sure why, but this isn't working as expected. The error shown in that gist is output, yet the file is still created; if I try to view the streams in the new output file I get the duplicate element
[14:33:20 CEST] <Renari> error also shown there.
[14:40:20 CEST] <Renari> Ah I think I finally figured it out, it was because I was only copying the metadata for the video stream and not all the other streams as well.
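A hedged sketch of that fix, assuming the goal is a stream copy of everything (video, audio, subtitles, attachments) plus metadata for all streams from a single input; file names are placeholders:

    # -map 0 selects every input stream (including attachments); -c copy avoids re-encoding
    ffmpeg -i input.mkv -map 0 -map_metadata 0 -c copy output.mkv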
[15:15:40 CEST] <rgerganov_> Hi. I am using the ffmpeg library to play MPEG TS http streams. Are there any options which I can use to control the buffer size? I am using avformat_open_input to open the stream.
[15:33:40 CEST] <selfie> Hi!
[15:34:15 CEST] <selfie> Im planning to create a VOD service with 15TB of videos
[15:34:31 CEST] <selfie> With 20 million viewers
[15:35:15 CEST] <selfie> Anyone have some tips for streaming?
[15:37:22 CEST] <RandomCouch> Um
[15:37:27 CEST] <RandomCouch> I'm actually Vegan
[15:48:04 CEST] <DHE> selfie: that's a lot. competing with netflix?
[15:48:54 CEST] <selfie> Haha actually its just for one partner
[15:49:36 CEST] <selfie> We will have at least 4-5 more with same amount of traffic
[15:50:02 CEST] <selfie> 100M+ totally
[15:50:38 CEST] <DHE> wow. gonna need a lot of machines serving all that up... and/or hire a CDN I think. akamai?
[15:50:56 CEST] <DHE> (not an endorsement, just dropping a name to get you going in the right direction)
[15:54:53 CEST] <Nacht> You would definitely need a CDN with those amounts. And you'd still need a truckload of edges
[15:55:21 CEST] <Nacht> I take it it will involve DRM ?
[17:25:32 CEST] <selfie_> Hey
[17:25:41 CEST] <selfie_> You are right but CDNs are too expensive
[17:26:10 CEST] <selfie_> Actually we are being CDN for those partners
[17:27:05 CEST] <selfie_> Main problem is handling 20k people streaming at the same time
[18:00:04 CEST] <Mavrik> Hmm... is there any good way of speeding up yadif besides making frames small?
[18:01:29 CEST] <CoreX> no
[18:32:07 CEST] <DHE> yadif has some multi-threading support (slice style)
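A minimal sketch of using that, assuming the ffmpeg CLI, where libavfilter's slice threading can be pinned with -filter_threads (by default it already uses the available cores); the thread count and codec settings are illustrative:

    # deinterlace with yadif; slice threading spread over 8 filter threads as an example
    ffmpeg -filter_threads 8 -i input.ts -vf yadif -c:v libx264 -crf 18 -c:a copy output.mkv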
[19:21:35 CEST] <bencc> a patchwork entry means that someone suggested a change?
[19:21:37 CEST] <bencc> https://patchwork.ffmpeg.org/patch/4526/
[19:21:44 CEST] <bencc> or is it already accepted?
[19:24:16 CEST] <thebombzen> how good is libavfilter's deband filter? is it worth using at all? are the default settings fine?
[19:24:22 CEST] <thebombzen> I'm thinking of using something like this
[19:26:50 CEST] <thebombzen> I'm thinking of turning 8-bit 1080p into 10-bit 720p using this filterchain: zscale=hd1080:f=spline36,format=yuv444p16le,deband,zscale=hd720:f=spline36:d=error_diffusion,format=yuv420p10le
[19:27:03 CEST] <thebombzen> I'm not sure if the deband is worth it or useful
[19:30:52 CEST] <thebombzen> also, does it depend on the source type? like if the source is anime, is it different?
[19:31:09 CEST] <thebombzen> compared to film or 3D animation
[20:27:21 CEST] <kepstin> my experience with deband is that you're probably going to need to tweak it a little, and you might want to consider denoising beforehand (and possibly re-adding noise afterwards)
[20:28:49 CEST] <kepstin> I'd probably only use it if the source has visible/annoying banding artifacts.
[20:36:43 CEST] <Cobalt> Hello. I'm making a video from a time-lapse of serially numbered still images. For a slightly less jerky video I wanted each frame to fade out into the subsequent 5 or so frames. I have a total of 5500 frames at 24fps. Is there a way to achieve this fade out? I'm currently using: ffmpeg -framerate 25 -i img%05d.jpg -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p output.mp4
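No direct answer appears below (Cobalt later settles on butterflow); a hedged sketch of an ffmpeg-only alternative, assuming the build includes the minterpolate filter (motion-compensated interpolation, similar in spirit to butterflow), with the target fps as a placeholder:

    # motion-compensated interpolation to smooth the time-lapse; fps=50 is illustrative
    ffmpeg -framerate 25 -i img%05d.jpg -vf "minterpolate=fps=50:mi_mode=mci" \
           -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p output.mp4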
[20:41:09 CEST] <thebombzen> kepstin: the reason I added the deband filter was 8bit -> 10bit will essentially have banding
[20:41:33 CEST] <kepstin> sure, but it only matters if it's visible
[20:41:53 CEST] <thebombzen> well it won't be noticeable if it's not in the 8-bit source, but the 10-bit upscaled version that you feed to x264 will be banded
[20:42:04 CEST] <thebombzen> debanding could potentially improve the compression ratio and prediction
[20:42:24 CEST] <thebombzen> because color banding is essentially when you have insufficient color precision, upscaling to 10-bit doesn't really seem useful unless you have continuous 10-bit input
[20:43:28 CEST] <thebombzen> the idea was that perhaps the deband filter could populate the extra lower-order bits in a way that makes the encoder take advantage of the extra precision
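A sketch of a full command around the filterchain quoted above, assuming an ffmpeg built with libzimg (for zscale) and a 10-bit-capable libx264; CRF and preset are placeholders:

    # 8-bit 1080p -> debanded 10-bit 720p; dithering happens in the final zscale
    ffmpeg -i input.mkv \
           -vf "zscale=s=hd1080:f=spline36,format=yuv444p16le,deband,zscale=s=hd720:f=spline36:d=error_diffusion,format=yuv420p10le" \
           -c:v libx264 -crf 18 -preset slow -c:a copy output.mkv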
[20:44:02 CEST] <kam187> hmm can you stitch multiple videos of different sizes together without reencoding?
[20:44:23 CEST] <furq> by ignoring the last two words in that sentence
[20:44:27 CEST] <kepstin> kam187: you mean concatenate them (one after the other)?
[20:44:35 CEST] <kam187> yeah
[20:44:40 CEST] <thebombzen> kepstin: if you don't deband, I mean the predictors will not be more accurate. You might still get away with more quantization though, but only if libavfilter's deband filter isn't bad
[20:44:40 CEST] <Cobalt> Sorry, I got disconnected.
[20:45:13 CEST] <thebombzen> kam187: no, not realistically. Players generally expect the video stream to be a constant size. You'd have to scale all the components to the same size
[20:45:16 CEST] <kepstin> kam187: you could put them both in mpegts, concatenate them, and some players might be able to handle the reconfiguration to switch size...
[20:45:26 CEST] <Cobalt> I missed any answers if anyone answered in the last 3-4 minutes.
[20:45:38 CEST] <kam187> hmm ok
[20:45:49 CEST] <thebombzen> so basically yes you can technically, but it will not work with most players.
[20:46:11 CEST] <kam187> i'm trying to handle orientation change when capturing a mobile device screen
[20:46:33 CEST] <thebombzen> ah. I believe there might be metadata to do that
[20:46:38 CEST] <kam187> when it rotates it sends a new SPS/PPS and so switches resolution
[20:47:07 CEST] <thebombzen> some image formats (jpeg + jfif) have metadata to rotate. Like when you take a rotated picture with your camera, the image reader knows to rotate it back before displaying.
[20:47:37 CEST] <thebombzen> Some video formats and players might respect that metadata, and some of them might be able to do it dynamically
[20:48:00 CEST] <thebombzen> but it's not likely that you'll be able to do that for all of them
[20:48:19 CEST] <kam187> so right now i'm saving to mp4 and i setup the sps/pps in the extradata field, but i can't change it half way through
[20:48:31 CEST] <kam187> i also can't add a second video stream half way through with new meta data
[20:48:36 CEST] <kepstin> kam187: yeah, you might be able to store the video in a fragmented mp4 or mpegts or something, but again player support is gonna be pretty poor
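A hedged sketch of the mpegts route mentioned above (pure stream copy, no re-encode); as noted, players may or may not survive the resolution change at the join point, and the file names are placeholders:

    # remux each part to MPEG-TS (Annex B H.264), then byte-concatenate the transport streams
    ffmpeg -i part1.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts part1.ts
    ffmpeg -i part2.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts part2.ts
    cat part1.ts part2.ts > combined.ts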
[20:48:50 CEST] <kam187> :(
[20:49:03 CEST] <thebombzen> kam187: you might be able to accomplish this with gpac or lsmash, but I'm not sure. JEEB would know
[20:49:20 CEST] <thebombzen> (author of L-smash)
[20:49:50 CEST] <thebombzen> (I think)
[20:50:04 CEST] <kepstin> kam187: if you're just looking for temporary storage, I'd honestly start a new file on orientation switch, and then later consider doing a resize/re-encode to combine them if desired.
[20:50:16 CEST] <furq> he is not the author of l-smash
[20:50:50 CEST] <kam187> i'm just recording the screen for later playback, so yeah i can make a new file no problem
[20:51:01 CEST] <kam187> but not sure how i would put it together without re-encoding
[20:51:09 CEST] <kam187> i dont' really want to waste resources re-encoding
[20:51:22 CEST] <thebombzen> ah, he's just on the L-smash team, my bad :P
[20:51:59 CEST] <thebombzen> kam187: why would the screen orientation change dynamically if you're recording it?
[20:52:17 CEST] <thebombzen> and why would re-encoding waste resources? if anything it would be a quality hit but not a resource hit
[20:52:18 CEST] <kam187> if the user rotates the device
[20:52:23 CEST] <kepstin> thebombzen: because someone rotated the phone while the screen's being recorded?
[20:52:34 CEST] <thebombzen> oh this is a phone :P
[20:52:40 CEST] <kam187> it takes time and cpu to re-encode
[20:52:44 CEST] <kepstin> i'm actually kind of curious what my phone does in that case, actually - it has a built-in screen recorder
[20:52:48 CEST] <thebombzen> I missed that
[20:52:55 CEST] Action: kepstin should test it
[20:53:02 CEST] <thebombzen> kam187: honestly I'm thinking that if someone rotates the phone, you should just have the video be sideways
[20:53:56 CEST] <thebombzen> like if the person changes the phone from portrait to landscape and the UI adapts, it might make sense to flag the timestamp in a log, but not stop encoding and not adapt the encoder
[20:54:17 CEST] <kam187> i wouldn't mind that at all
[20:54:27 CEST] <thebombzen> maybe you could use some sort of fanciness to force an I-frame
[20:54:38 CEST] <kam187> but it sends a new SPS/PPS and the h264 stream is now in a new size (w/h switched)
[20:54:43 CEST] <kepstin> alright, I just made 2 videos using the sony builtin screen recorder on my android phone - starting landscape and switching to portrait, and vice-versa
[20:54:47 CEST] <kepstin> lets see what they did
[20:55:03 CEST] <kam187> hmm
[20:55:09 CEST] <thebombzen> kam187: how are you recording the screen? I assume this is android, but can't you ignore a device rotate event
[20:55:19 CEST] <kam187> this is ios (airplay)
[20:55:30 CEST] <YokoBR> hi folks
[20:55:31 CEST] <thebombzen> how does one record an iOS screen :O
[20:55:39 CEST] <thebombzen> I've wanted to do this for a while
[20:55:41 CEST] <Cobalt> I'm afraid the docs online only talk about a small number of images used as slideshows with fade ins and outs. :(
[20:55:45 CEST] <YokoBR> how can I transmit an mp3 to an icecast server?
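This question goes unanswered below; a hedged sketch, assuming an ffmpeg build with the icecast protocol, with the host, password and mount point as placeholders:

    # stream an existing mp3 in real time to an Icecast mount point without re-encoding
    ffmpeg -re -i input.mp3 -c:a copy -content_type audio/mpeg -f mp3 icecast://source:hackme@example.com:8000/stream.mp3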
[20:55:58 CEST] <kepstin> kam187: hmm, that's probably more like what happens when doing android with chromecast then, I guess?
[20:56:30 CEST] <kepstin> kam187: since in that case they have known player hardware and aren't storing it (just streaming), they just sense the update and continue on, and the player reconfigures
[20:56:38 CEST] <kepstin> just send*
[20:56:47 CEST] <thebombzen> but kam187: you're saying that iOS internally encodes the video, and when the user rotates the screen, AirPlay updates the metadata sent to the encoder, and thus you end up with a different H.264 stream?
[20:57:07 CEST] <thebombzen> because this might be a really stupid question, but is there any reason you cannot turn on "Lock Screen Orientation" for the duration of the recording?
[20:57:17 CEST] <thebombzen> as in, do you *have* to rotate the screen or have to allow the user to do that?
[20:57:32 CEST] <kam187> yes, ios internally encodes it to h264; when you rotate, it sends a new SPS/PPS and starts sending in the new resolution
[20:57:44 CEST] <kepstin> when I use the internal video recording on my sony phone, the result is always 1920x1080, and the screen is pillarboxed when in portrait
[20:57:55 CEST] <thebombzen> Depending on the reason you're encoding it, you can just turn on lock screen orientation, so you don't accidentally have that crap
[20:58:01 CEST] <kepstin> but I assume that the app is capturing the screen, scaling, *then* encoding on the hw encoder.
[20:59:30 CEST] <kepstin> so that doesn't help you, if you're just getting the encoded video from some other app
[20:59:33 CEST] <kam187> hmm i think multiple files is the way to fo
[20:59:35 CEST] <kam187> *go
[21:23:53 CEST] <YokoBR> hi there
[21:24:18 CEST] <YokoBR> I'm trying to concat a list, but I get code 1: list.txt: operation not permitted
[21:26:12 CEST] <Cobalt> Meh. I think I am going with butterflow.
[21:30:39 CEST] <YokoBR> I'm using node-fluent-ffmpeg kepstin
[21:31:34 CEST] <kepstin> well, that's not really something we support here, although we could help you with more complete output or info about the command that the node module build
[21:31:44 CEST] <kepstin> *that the node module built
[23:11:25 CEST] <rev0n_> Is it possible to ignore file duration in ffmpeg?
[23:11:48 CEST] <JEEB> I don't think much in FFmpeg's libraries use the duration, EOF is the thing that matters
[23:11:48 CEST] <Cobalt> Thanks. Found project butterflow on github which fills the role I was looking for.
[23:12:15 CEST] <Cobalt> It is interesting to note that it lists ffmpeg as one of its dependencies.
[23:12:33 CEST] <JEEB> many things use the libraries (or the command line app)
[23:12:38 CEST] <voip_> hello guys, i need ffmpeg with h264_qsv support
[23:12:46 CEST] <JEEB> I mean, nothing else implements as many things as FFmpeg's libraries
[23:13:16 CEST] <JEEB> VLC wouldn't be able to play most videos, gstreamer without hardware decoding modules would be the same
[23:13:20 CEST] <voip_> where can i find a step by step compile guide?
[23:18:51 CEST] <pgorley> voip_: ./configure && make, if you want to know what options you can configure: ./configure --help
[23:19:57 CEST] <rev0n_> JEEB: do you know which byte marks EOF in webm?
[23:20:22 CEST] <pgorley> voip_: else there's https://trac.ffmpeg.org/wiki/CompilationGuide
[23:20:48 CEST] <voip_> pgorley, i think i will also need the Intel Quick Sync SDK, right?
[23:21:26 CEST] <JEEB> rev0n_: when you keep reading at the end you will get AVERROR_EOF from the reader
[23:24:33 CEST] <pgorley> voip_: it depends on intel media sdk, and is disabled by default, so you need to configure with --enable-libmfx
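A minimal sketch of those configure steps, assuming the Intel Media SDK (libmfx) headers and libraries are already installed where pkg-config can find them:

    # enable the Quick Sync (libmfx) encoders/decoders such as h264_qsv, then build
    ./configure --enable-libmfx
    make -j"$(nproc)"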
[23:25:04 CEST] <rev0n_> JEEB: actually I am analyzing a webm file byte by byte, as I get webm blobs from the browser that need to be converted to a stream. I get 1s chunks, but each chunk acts like a totally new file, which during live streaming makes ffmpeg crash (it does not expect headers in each chunk while piping the data with python). My problem is making the data readable to ffmpeg so I can start an RTMP connection with ffmpeg to my broadcasting server
[23:25:45 CEST] <voip_> pgorley, thank you
[23:25:50 CEST] <TD-Linux> is this from the MediaRecorder API btw?
[23:25:51 CEST] <JEEB> webm is just matroska
[23:26:12 CEST] <rev0n_> TD-Linux: that's right
[23:27:15 CEST] <TD-Linux> note that MediaRecorder doesn't guarantee that each callback == one webm chunk
[23:27:29 CEST] <TD-Linux> unless you are starting and stopping it
[23:28:04 CEST] <TD-Linux> (it's something I wouldn't mind adding to the spec though)
[23:28:30 CEST] <TD-Linux> ah wait timeslice was added since last I checked
[23:30:35 CEST] <rev0n_> JEEB: right, but I was not able to find a detailed description of the file structure. I was able to re-encode one chunk with ffmpeg (so I get the right headers in the file), and I created a longer file just by concatenating that one chunk a few times with ffmpeg. This let me kinda reverse-engineer the bytes through hours of analysis, and I can almost play the created file. Yeah... almost
[23:31:22 CEST] <TD-Linux> rev0n_, https://www.matroska.org/technical/specs/index.html
[23:31:38 CEST] <TD-Linux> basically you want to concatenate just the Cluster elements + one block of headers
[23:32:49 CEST] <rev0n_> TD-Linux: MediaRecorder seems to be getting the complete chunks, as when I save the incoming data with python just by writing the incoming bytes to a file, the file is playable with ffmpeg. MediaRecorder gives me full chunks as far as I know
[23:34:50 CEST] <TD-Linux> ok
[23:36:31 CEST] <rev0n_> TD-Linux: what I do is I take the MediaRecorder blob (that's what it's returning) and send it over a websocket to my Python tornado server. When I connect with my client js app I open a subprocess with ffmpeg and on-message I pipe the incoming data to ffmpeg. But obviously this doesn't quite work, as ffmpeg has problems with the unexpected headers of each and every chunk, which looks (as mentioned before) like a completely new file to ffmpeg. Any idea how
[23:36:31 CEST] <rev0n_> I can work around this?
[23:39:19 CEST] <TD-Linux> rev0n_, yes, you basically want to strip off the header of the second chunk+ and only send the Cluster elements
[23:39:27 CEST] <TD-Linux> as well as possibly fix timestamps
[23:41:48 CEST] <rev0n_> TD-Linux: yes! that's what I want to do, if this will work with ffmpeg. I am banging my head against the wall after spending ~10 hrs in my example file's binary code...
[23:41:55 CEST] <TD-Linux> if you pass the timeslice parameter to MediaRecorder I'd expect it to give you ready-to-join Cluster elements... but I guess not?
[23:42:20 CEST] <TD-Linux> also a quick google found this, maybe you can look at its source code https://www.npmjs.com/package/webm-cluster-stream
[23:42:40 CEST] <rev0n_> ok, just checking
[23:44:18 CEST] <TD-Linux> (most of the magic of that npm package is hidden behind the ebml parser tho)
[23:48:36 CEST] <rev0n_> TD-Linux: I checked the code of the lib and it actually has 39 lines of code, but it requires an external ebml library, which pretty much makes sense :) I will dig a bit deeper into this one and get back here soon
[23:49:20 CEST] <rev0n_> yes, that's correct. Let's see how I can do this in pure js without node
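For the ffmpeg side of this, a hedged sketch of reading the rejoined stream from stdin and pushing it to RTMP; join_clusters.py is a hypothetical helper that strips the per-chunk EBML headers and emits one continuous WebM stream (one header followed by Cluster elements), and the RTMP URL, codecs and preset are placeholders:

    # read one continuous Matroska/WebM stream on stdin, re-encode to H.264/AAC, push as FLV over RTMP
    python join_clusters.py | ffmpeg -i pipe:0 -c:v libx264 -preset veryfast -c:a aac -f flv rtmp://example.com/live/streamkey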
[00:00:00 CEST] --- Tue Aug 1 2017