[Ffmpeg-devel-irc] ffmpeg.log.20160317
[00:29:59 CET] <axc1298> furq: i don't know bash. i think something's wrong with line 1 https://paste.debian.net/416145
[00:30:15 CET] <axc1298> for anyone
[00:31:29 CET] <axc1298> also would be nice if i can put a few different video formats in there, like avi, mkv, mp4
[00:31:46 CET] <furq> put done at the end
[00:33:34 CET] <axc1298> i'm getting 'no such file or directory' when i run the script.
[00:33:45 CET] <axc1298> it doesn't need any arguments does it?
[00:36:40 CET] <axc1298> oh got it now
[00:37:02 CET] <axc1298> it does end with ".webm no such file or directory", but it does work and lists the resolutions now
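The paste has since expired, but from the discussion the script is a bash for loop over video files that prints each file's resolution with ffprobe and was missing its closing "done"; the ".webm no such file or directory" message is what a glob produces when it matches nothing. A minimal sketch along those lines, with the formats, filenames and the nullglob workaround as illustrative assumptions rather than contents of the original paste:

    #!/bin/bash
    # drop patterns that match nothing instead of passing them through literally
    shopt -s nullglob
    for f in *.webm *.avi *.mkv *.mp4; do
        # width and height of the first video stream, e.g. "1280x720"
        res=$(ffprobe -v error -select_streams v:0 \
              -show_entries stream=width,height -of csv=s=x:p=0 "$f")
        echo "$f: $res"
    done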
[00:37:40 CET] <petecouture> Does anyone have any good SDP examples when using RTP for the input? I'm able to get the video fine but the audio sounds horrible. It's supposed to be 22k but ffmpeg says it's running at 4kbs
[01:02:14 CET] <furq> is there any reason to use librtmp over ffmpeg's native rtmp support
[01:09:25 CET] <petecouture> I dunno for rtmp but for licensing issues I have to use native aac over libfaac
[01:11:44 CET] <c_14> petecouture: https://pb.c-14.de/t/kng.CCYLVM ?
[01:11:49 CET] <c_14> furq: whichever you think has less bugs
[01:14:28 CET] <petecouture> c_14: Thank you. *sigh* I don't know SDP at all and it's such a pain.
[01:30:26 CET] <petecouture> Would anyone have any advice on encoding RTP audio into ffmpeg to a different format. http://pastebin.com/wnxVgPdA
[01:31:00 CET] <petecouture> The audio is being recorded at a huge rate. Like 5 seconds shows up as a minute in the ffmpeg output
[01:31:02 CET] <petecouture> size= 9kB time=00:02:00.13 bitrate= 0.6kbits/s
[01:32:23 CET] <petecouture> It's also just pure white noise
[01:33:13 CET] <TD-Linux> ew, 22kbps mp3
[01:33:23 CET] <TD-Linux> I assume ffmpeg is picking up the opus stream?
[01:33:52 CET] <DHE> might want to use ffprobe to analyze the file. see what it thinks about it and whether it's correct or not
[01:34:05 CET] <petecouture> no
[01:34:20 CET] <petecouture> it's picking up the pcm stream
[01:34:43 CET] <petecouture> This rtp stream is being provided by a Kurento server which transcodes it from webrtc
[01:34:52 CET] <TD-Linux> that's unfortunate, the pcm stream is the worst quality of the offers
[01:34:57 CET] <petecouture> the 22kbs was just a test.
[01:35:10 CET] <TD-Linux> err wait there is a 48khz and 8khz pcm stream
[01:35:18 CET] <petecouture> Hmm the response offer shows it would accept opus
[01:35:53 CET] <petecouture> My understanding of SDP is you can list all options and the server/client chooses the best one
[01:35:57 CET] <TD-Linux> I guess the 48khz pcm stream is the highest quality, assuming the source is mono and you have a lot of bandwidth :)
[01:36:13 CET] <TD-Linux> yeah that's basically correct
[01:38:07 CET] <TD-Linux> can you paste the full ffmpeg output
[01:39:01 CET] <petecouture> Sure
[01:39:29 CET] <petecouture> I GOT IT
[01:39:30 CET] <petecouture> Lol
[01:39:37 CET] <petecouture> For some reason you reminded me of opus
[01:39:44 CET] <petecouture> so I removed the PCMU codec from the list
[01:39:48 CET] <petecouture> and it works on opus!!!
[01:40:12 CET] <TD-Linux> petecouture, also fwiw I don't think the VP8 stream should be listed in the m=audio line in that SDP...
[01:41:11 CET] <petecouture> TD-Linux: Ya it was just there for testing. Right now there's two encoders running webrtc to rtp to hls
[01:41:24 CET] <petecouture> I'm trying to passthrough the webrtc directly to ffmpet
[01:41:37 CET] <petecouture> ffmpeg, so the media doesn't need to be encoded
[01:42:51 CET] <TD-Linux> yeah, opus is likely what the webrtc source is sending
[01:43:10 CET] <TD-Linux> in theory the PCM ones should work too, dunno if it's kurento or ffmpeg's fault
[01:44:25 CET] <petecouture> Lol TD-Linux: right this is like almost 5 days of trying to figure it out. Some sort of bug is what i'm thinking as far as PCMU
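For reference, a minimal SDP of the kind being discussed, offering only Opus on the audio line so no PCM codec can be picked; the address, port and payload number are placeholders, not values from petecouture's setup:

    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=RTP audio from Kurento
    c=IN IP4 127.0.0.1
    t=0 0
    m=audio 5004 RTP/AVP 96
    a=rtpmap:96 opus/48000/2

Saved as stream.sdp this can be passed to ffmpeg with -i stream.sdp (newer builds may also want -protocol_whitelist file,udp,rtp).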
[04:04:41 CET] <moli_> hello, could someone please rewrite this command to ffmpeg? >>> mencoder "$1" -srate 44100 -af resample=44100:0:1,format=s16le -oac mp3lame -lameopts cbr:br=128 -ovc lavc -lavcopts vcodec=mpeg4:vqscale=3:vmax_b_frames=0:keyint=15 -ofps 20 -noskiplimit -vf pp=li,expand=:::::224/176,scale=224:176 -ffourcc DX50 -o "$1".fuze.premux
[04:05:37 CET] <rrauzy> whoever helps moli_ also, can I see a command that can go through an mp4 format h.264 encoded video, and tell me the pict_type for each frame? (IE: I, P, B)
[04:05:55 CET] <rrauzy> I just want to iterate over all frames and gather the pict type (I, P, B)
[04:06:24 CET] <rrauzy> for each
[04:07:03 CET] <relaxed> rrauzy: maybe ffprobe's -show_frames
[04:07:50 CET] <rrauzy> relaxed: This is good advice, but show_frames is giving me the audio data in addition to the frame data, and it doesn't seem to give me the pict_type for all frames, everything seems jumbled and disorganized.
[04:08:04 CET] <c_14> -select_streams v
[04:08:05 CET] <rrauzy> that was my first thought
[04:08:13 CET] <J_Darnley> moli_: rewritten and made better: ffmpeg -i INPUT -acodec aac -ab 128k -vcodec libx264 -crf 18 output.mp4
[04:08:27 CET] <relaxed> rrauzy: you can isolate a specific stream with ffprobe
[04:08:28 CET] <c_14> -show_entries frame=pict_type
[04:08:40 CET] <c_14> ^those were both for rrauzy
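Put together, those suggestions give something like the following, which prints one pict_type (I, P or B) per video frame; the csv output format is just one readable choice:

    ffprobe -v error -select_streams v:0 \
            -show_entries frame=pict_type -of csv=p=0 input.mp4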
[04:09:39 CET] <moli_> @J_Darnley: this is missing many options from the original. e.g. keyframes interval. It must be the exact same, otherwise the device will not play it.
[04:10:16 CET] <J_Darnley> Then read the manual if you want to make shit
[04:10:42 CET] <moli_> @J_Darnley: ok, thank you for the advice
[04:10:52 CET] <rrauzy> c_14, thanks
[04:12:51 CET] <moli_> could someone other please rewrite this command to ffmpeg? it must be the exact same conversion. i've read the manual, still came here to ask for your kind help >>> mencoder "$1" -srate 44100 -af resample=44100:0:1,format=s16le -oac mp3lame -lameopts cbr:br=128 -ovc lavc -lavcopts vcodec=mpeg4:vqscale=3:vmax_b_frames=0:keyint=15 -ofps 20 -noskiplimit -vf pp=li,expand=:::::224/176,scale=224:176 -ffourcc DX50 -o "$1".fuze.premux
[04:16:12 CET] <relaxed> moli_: something like, ffmpeg -i INPUT -c:a libmp3lame -b:a 128k -r:a 44100 -vf <#video filters here#> -c:v mpeg4 -q:v 3 -bf 0 -keyint_min 15 -vtag DX50 output.avi
[04:16:59 CET] <relaxed> moli_: https://trac.ffmpeg.org/wiki/FilteringGuide
[04:19:47 CET] <moli_> thank you
[04:19:56 CET] <moli_> how do i do -af format=s16le ?
[04:20:14 CET] <moli_> little indian 16 bit of the audio
[04:20:29 CET] <moli_> *endian
[04:20:53 CET] <J_Darnley> You're using mp3 not pcm
[04:21:50 CET] <moli_> yes, i know, still
[04:21:54 CET] <J_Darnley> It will be transformed into float by lame and then frequency transformed, resulting in the original sample format being meaningless
[04:22:43 CET] <moli_> meaningless to every decoder in the world, except this one device i need this for, unfortunately
[04:23:35 CET] <rrauzy> is the pts an accurate representation of frame order?
[04:23:42 CET] <J_Darnley> Wow. A true AI device with psychic powers has been invented and it only plays crap avi files
[04:24:56 CET] <moli_> if you are interested, for more information please see http://web.archive.org/web/20100304081211/http://forums.sandisk.com/sansa/board/message?board.id=sansafuse&view=by_date_ascending&message.id=31762
[04:25:26 CET] <furq> i wonder if i still have the script i wrote to encode video for my sansa fuze
[04:25:34 CET] <furq> i probably threw it away after i installed rockbox on it
[04:25:58 CET] <furq> i'm pretty certain the input samplerate doesn't make a difference though
[04:26:16 CET] <furq> s/rate/format/
[04:27:17 CET] <c_14> rrauzy: frames are played in order of increasing PTS
[04:27:37 CET] <rrauzy> ok
[04:27:44 CET] <rrauzy> thanks c_14
[04:29:57 CET] <moli_> @furq input sample format? isnt -af format is specifying the output?
[04:31:05 CET] <furq> i've never used mencoder but s16le is meaningless for mp3
[04:32:15 CET] <furq> so i assume that's either being ignored or happening before it gets to the encoder to work around some unrelated issue
[04:32:28 CET] <c_14> it happens before it gets to the encoder
[04:32:56 CET] <c_14> In the worst case the format gets changed twice, in the best case it does nothing in the sense that the filter would have been inserted automatically because libmp3lame wants input in that format anyway.
[04:33:07 CET] <furq> yeah you can safely ignore that
[04:33:30 CET] <furq> the only thing you need to add to relaxed's command is -vf scale=224:176
[04:33:38 CET] <moli_> theoretically yes, but it is mentioned in multiple solutions, and judging by the work it took to make this shitfuze work, maybe it is needed. I am ready to test it out without that one parameter
[04:33:53 CET] <furq> it should encode quickly anyway
[04:34:12 CET] <moli_> does this do nothing but resize the video? -vf pp=li,expand=:::::224/176,scale=224:176
[04:34:21 CET] <furq> moli_: like i said, i assume it's working around some issue with mencoder
[04:35:14 CET] <c_14> you should be able to copy the vf line as is (probably)
[04:35:38 CET] <c_14> hmm, there's no expand filter
[04:35:51 CET] <c_14> Don't even know what that one would do
[04:35:54 CET] <moli_> i am asking because maybe i've got a better vf line >>> -filter:v "scale=iw*min(224/iw\,176/ih):ih*min(224/iw\,176/ih), pad=224:176:(224-iw*min(224/iw\,176/ih))/2:(176-ih*min(224/iw\,176/ih))/2"
[04:36:14 CET] <furq> sure
[04:36:22 CET] <furq> pp=li doesn't seem to do much of value
[04:36:39 CET] <furq> i assume expand is there to preserve the ar
[04:36:54 CET] <furq> but the only bit which matters to the player is scale=224:176
[04:37:38 CET] <c_14> Well, it _might_ need progressive video in which case you could add yadif
[04:37:55 CET] <c_14> iff the input is interlaced
[04:39:03 CET] <moli_> what does this do? harddup
[04:41:28 CET] <furq> It is important that you use harddup as the last filter: it will force MEncoder to write every frame (even duplicate ones) in the output.
[04:41:31 CET] <furq> wow mencoder is dumb
[04:41:58 CET] <moli_> what is the difference between -deinterlace and -filter:v yadif ? the used algorithm?
[04:42:31 CET] <c_14> -deinterlace is deprecated
[04:42:37 CET] <c_14> (it just inserts yadif now)
[04:43:03 CET] <moli_> so harddup is the same as -noskiplimit
[04:44:03 CET] <moli_> then looks like it is all translated. >>> ffmpeg -i "$1" -f avi -vtag DX50 -c:v mpeg4 -q:v 3 -bf 0 -keyint_min 15 -filter:v "scale=iw*min(224/iw\,176/ih):ih*min(224/iw\,176/ih), pad=224:176:(224-iw*min(224/iw\,176/ih))/2:(176-ih*min(224/iw\,176/ih))/2, yadif" -r 20 -vb 700k -minrate 700k -maxrate 700k -c:a libmp3lame -ac 2 -r:a 44100 -b:a 128k "${1%.*}.fuze.premux"
[04:44:23 CET] <moli_> could it maybe be rewritten with better syntax? like the -ac and the -vb parameters
[04:45:18 CET] <c_14> You can replace -vb with -b:v, but it's fine as is
[04:45:49 CET] <moli_> why use both b:v and minrate,maxrate ?
[04:46:45 CET] <c_14> It enforces some constraints
[04:46:54 CET] <furq> moli_: i'm pretty sure you want -g 15, not -keyint_min 15
[04:52:54 CET] <moli_> i am running it, the console is littered with Past duration 0.619987 too large
[04:54:17 CET] <moli_> nevermind
[04:54:35 CET] <moli_> googled it, adding -filter:v "fps=20" helped for me too
[04:58:10 CET] <furq> that's exactly what -r 20 does
[04:59:14 CET] <c_14> If you add more than one filterchain the last filterchain will overwrite the previous ones
[04:59:34 CET] <c_14> ie in this case your scale won't take effect
[04:59:45 CET] <c_14> (assuming the -filter:v is after the -vf)
[04:59:54 CET] <c_14> If you put it in front of it, nothing changed
[05:00:37 CET] <moli_> the command now is >>> ffmpeg -i "$1" -f avi -vtag DX50 -c:v mpeg4 -q:v 3 -filter:v "yadif, fps=20, scale=iw*min(224/iw\,176/ih):ih*min(224/iw\,176/ih), pad=224:176:(224-iw*min(224/iw\,176/ih))/2:(176-ih*min(224/iw\,176/ih))/2" -r 20 -bf 0 -g 15 -b:v 700k -minrate 700k -maxrate 700k -c:a libmp3lame -ac 2 -r:a 44100 -b:a 128k "${1%.*}.fuze.premux"
[05:00:59 CET] <c_14> Ah, that's fine then
[05:01:11 CET] <moli_> all video filters are in one parameter, and their order is important too, i guess, so i reordered them
[05:02:08 CET] <moli_> second video is having an error """Invalid pixel aspect ratio 351/352, limit is 255/255 reducing""" but after that the console says """Stream #0:0(und): Video: 224x176 [SAR 254:255 DAR 3556:2805], SAR 351:352"""
[05:02:39 CET] <moli_> looks like it still applies that pixel aspect ratio?
[05:36:25 CET] <moli_> c_14 & furq thank you very much for your help and efforts. sadly it does not work
[05:52:38 CET] <furq> you could always install rockbox
[05:52:49 CET] <furq> that's a bit drastic but the stock firmware sucks anyway
[05:53:58 CET] <moli_> and i cant install video4fuze gui because of the abandoned mencoder
[05:55:00 CET] <moli_> i even searched for a compiled mencoder binary, despite the security flaws that entails
[05:58:25 CET] <moli_> anyway thanks bye
[06:19:21 CET] <petecouture> Can someone recommend the best node package to use for ffmpeg
[06:19:35 CET] <petecouture> I tried fluent-ffmpeg but it's having issues with rtp based input
[07:53:57 CET] <thebombzen> haha reading above. reminds me how awful it was to work with mencoder
[11:38:01 CET] <AndrewMock> How do I calculate the SSIM of a picture?
[11:42:37 CET] <relaxed> AndrewMock: there's a ssim filter
[11:49:15 CET] <AndrewMock> thx
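A minimal sketch of the ssim filter, assuming a distorted file is compared against a reference of the same resolution; per-frame values go to the stats file and the average is printed at the end of the run:

    ffmpeg -i distorted.mp4 -i reference.mp4 \
           -lavfi "[0:v][1:v]ssim=stats_file=ssim.log" -f null -

The same command works for a single picture if both inputs are images.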
[12:40:21 CET] <tommy``> hi
[12:41:00 CET] <tommy``> anyone have used blackdetect filter?
[12:49:32 CET] <tommy``> i'm trying this: ffmpeg -i file.mkv -vf blackdetect=d=2:pix_th=0.00 -an -f null [but i cant understand the error]
[13:14:57 CET] <zz_> Hi there
[13:15:26 CET] <zz_> I'm trying to run the following ffmpeg: ffmpeg -re \ -i rtp://localhost:5000 \ -i rtp://localhost:5002 \ -i rtp://localhost:5004 \ -i rtp://localhost:5006 \ -map 0:0 -map 0:1 -map 0:2 -map 0:3 -map 0:4 -map 0:5 -c copy \ -f rtp_mpegts rtp://wi-006:5000
[13:16:00 CET] <zz_> it seems the command blocks at the second rtp input
[13:16:23 CET] <zz_> is there a way to receive from multiple simultaneous rtp streams?
[13:19:30 CET] <zz_> nobody there?
[13:30:31 CET] <zuloyd> hi
[13:31:21 CET] <zuloyd> I'd like to record an rtmp stream into multiple mp4 files of 1 minute each
[13:31:25 CET] <zuloyd> is this possible?
[13:31:46 CET] <zuloyd> I thought about just passing a "-t 60" parameter and then starting the whole thing again and again, but then I have gaps between the videos
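One way to avoid the gaps is the segment muxer, which keeps a single ffmpeg process running and rolls over to a new output file every 60 seconds; a sketch, with the rtmp URL as a placeholder:

    ffmpeg -i rtmp://example.com/live/stream -c copy \
           -f segment -segment_time 60 -segment_format mp4 \
           -reset_timestamps 1 out%03d.mp4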
[14:06:17 CET] <blubee> hi guys any linux users on here?
[14:06:43 CET] <J_Darnley> Not a single person in the world uses Linux(!)
[14:06:50 CET] <blubee> I am on debian testing, running ffmpeg with nvidia drivers I get an error; cannot open display :0.0
[14:08:13 CET] <DHE> trying nvenc or opencl encoding?
[14:08:19 CET] <DHE> what exactly are you trying to do?
[14:08:56 CET] <blubee> DHE: just recording the screen
[14:09:22 CET] <blubee> when I purge the nvidia driver i can record desktop no problem
[14:09:28 CET] <DHE> and you do have an X session open for which you have permission to connect?
[14:10:16 CET] <blubee> I think so
[14:10:26 CET] <blubee> this is a single user machine except for root
[14:10:58 CET] <blubee> i log in, w/o the nvidia driver i can just use the ffmpeg screengrab command no problems, if I install the nvidia driver then i get the error
[14:11:31 CET] <cbsrobot> ubitux: in "ffmpeg -i file.ass -c text file.srt" I see the opening <font> tag, but no closing tags - shoudn't the text encode strip all tags ?
[14:15:39 CET] <blubee> here is a log file: http://pastebin.com/TuKjLcJd
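The pastebin has since expired, but the screen-grab command being described is presumably an x11grab one along these lines (resolution and framerate are guesses). The "cannot open display :0.0" message comes from the X connection rather than from ffmpeg's options, so the command has to run inside the session that owns the display, with DISPLAY and X authorization set accordingly:

    ffmpeg -f x11grab -video_size 1920x1080 -framerate 30 -i "$DISPLAY" out.mkv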
[14:39:42 CET] <tommy``> guys, which gui can i use with ffmpeg?
[14:47:25 CET] <J_Darnley> What OS are you on and what exactly do you want to do?
[14:48:57 CET] <tommy``> win10, i need to detect black frames and write out the timings to a txt file
[14:50:12 CET] <Mavrik> Good luck? :)
[14:50:26 CET] <J_Darnley> I can't think of any gui that will let you do that
[14:50:44 CET] <J_Darnley> Partly because I'm not sure what that means
[14:50:55 CET] <tommy``> https://ffmpeg.org/ffmpeg-filters.html#blackdetect
[14:50:56 CET] <tommy``> this
[14:52:08 CET] <J_Darnley> ah okay
[14:52:16 CET] <J_Darnley> Still no
[14:55:02 CET] <tommy``> J_Darnley: is this any sense: ffmpeg -i file.mkv -vf blackdetect=d=2:pix_th=0.00 -f null out
[14:55:38 CET] <J_Darnley> Yes but substitute out for NUL (a special Windows filename)
[14:56:32 CET] <tommy``> speed is 2x, very slow....
[14:58:01 CET] <J_Darnley> Well it won't be fast checking every pixel
[14:58:36 CET] <J_Darnley> Perhaps you should stop it and put the whole output on pastebin
[14:59:07 CET] <tommy``> frame= 2651 fps= 69 q=-0.0 size=N/A time=00:01:46.12 bitrate=N/A speed=2.78x <------
[15:05:55 CET] <tommy``> J_Darnley: my pc is crap to do this work
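For the record, blackdetect writes its black_start/black_end/black_duration lines to the log output (stderr), so on Windows the timings can be captured to a text file with a redirect; a sketch:

    ffmpeg -i file.mkv -vf blackdetect=d=2:pix_th=0.00 -an -f null NUL 2> black.txt

black.txt then contains the [blackdetect @ ...] lines mixed in with the normal console output, which can be filtered afterwards (e.g. findstr blackdetect black.txt).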
[15:23:22 CET] <zz_> I'm trying to run the following ffmpeg: ffmpeg -re -i rtp://localhost:5000 -i rtp://localhost:5002 -i rtp://localhost:5004 -i rtp://localhost:5006 -map 0:0 -map 0:1 -map 0:2 -map 0:3 -map 0:4 -map 0:5 -c copy -f rtp_mpegts rtp://wi-006:5000
[15:24:03 CET] <zz_> it seems the command blocks at the second rtp input. Is there a way to receive from multiple simultaneous rtp streams?
[15:32:31 CET] <zz_> Also, does anyone know why using rtp for output seems to allocate two udp ports instead of just one?
[15:33:41 CET] <jkqxz> The second port is n+1, for RTCP?
[15:33:54 CET] <zz_> the second port seem to be +1
[15:33:57 CET] <zz_> yes
[15:34:06 CET] <zz_> what do you mean for RTCP?
[15:34:41 CET] <zz_> it's a second UDP port, not a TCP one
[15:38:35 CET] <jkqxz> Yes. An RTP stream uses two UDP ports: port n for the RTP data packets and port n+1 for the RTCP control packets.
[15:39:19 CET] <zz_> ok. thanks
[15:39:41 CET] <zz_> i will search for RTCP control packets in the documentation
[15:39:55 CET] <zz_> Do you happen to know about my other question?
[15:40:04 CET] <zz_> Do you happen to know something about my other question?
[15:41:15 CET] <kepstin> if it's blocking during startup, that usually means that the stream probing is blocking - could mean that nothing's being received on one of the ports?
[15:41:45 CET] <zz_> i have another ffmpeg running on the same machine that is sending the rtp streams
[15:42:55 CET] <zz_> ffmpeg -re -i mnt/Archive/Rio\ 2014/matches/Main/match_61.ts -map 0:0 -map 0:1 -map 0:2 -c copy -f rtp_mpegts rtp://localhost:5000
[15:43:13 CET] <zz_> ffmpeg -re -i mnt/Archive/Rio\ 2014/matches/Main/match_61.ts -map 0:3 -c copy -f rtp_mpegts rtp://localhost:5002
[15:43:22 CET] <zz_> I have 4 of those
[17:35:00 CET] <mindheist> Hey People .. I have been searching for videos that are considered typically hard to encode
[17:35:17 CET] <mindheist> we use ffmpeg as our encoding engine in our product
[17:47:06 CET] <jkqxz> mindheist: /dev/urandom? ("ffmpeg -f rawvideo -pix_fmt yuv420p -s:v 1280x720 -r 30 -i /dev/urandom ...".)
[18:22:19 CET] <llogan> mindheist: "parkrun" https://media.xiph.org/video/derf/
[18:24:10 CET] <zz_> increasing the -thread_queue_size parameter improves the situation, but the muxing ffmpeg still misses lots of packets
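-thread_queue_size is a per-input option, so it goes before each -i it should apply to; a sketch based on the command above, with the value chosen arbitrarily. Note also that -map 0:x only pulls streams from the first input, so something like -map 0 -map 1 -map 2 -map 3 is needed to forward everything from all four:

    ffmpeg -re \
      -thread_queue_size 1024 -i rtp://localhost:5000 \
      -thread_queue_size 1024 -i rtp://localhost:5002 \
      -thread_queue_size 1024 -i rtp://localhost:5004 \
      -thread_queue_size 1024 -i rtp://localhost:5006 \
      -map 0 -map 1 -map 2 -map 3 -c copy -f rtp_mpegts rtp://wi-006:5000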
[18:52:03 CET] <mindheist> hmmm .. Thanks
[18:52:24 CET] <mindheist> But I was looking for more of a database of videos that I could use to test
[19:03:24 CET] <J_Darnley> Well as said, parkrun is hard, as is: parkjoy, crowdrun, a section in big buck bunny and another in elephants dream
[19:03:40 CET] <J_Darnley> crew is firly noisy
[19:03:44 CET] <J_Darnley> *fairly
[19:03:50 CET] <J_Darnley> city has many hard edges
[19:04:07 CET] <J_Darnley> (and these are just the ones I've seen)
[19:04:14 CET] <furq> just record some quakeworld demos
[19:30:31 CET] <petecout_> Regarding HLS encoding, has anyone done mid-stream ID3 tag injection over the course of a live stream? I've found documentation to do it pre-stream. http://jonhall.info/how_to/create_id3_tags_using_ffmpeg
[19:43:21 CET] <kepstin> that doesn't make sense; HLS uses mpeg-ts streams which shouldn't contain ID3, and no software would use it if it did.
[19:43:46 CET] <kepstin> if anything, the info should be put into the playlist (although I'm not sure if this is supported), or just communicated out of band.
[20:33:53 CET] <srg2> I'm trying to convert an mp3 file to an mp4. I tried `ffmpeg -i file.mp3 file.mp4`, but it didn't work. Log is here: https://gist.github.com/srguglielmo/d0ca79ddba60aeee586a Anyone know how I can do this?
[20:34:17 CET] <srg2> I don't mind a blank video, but if possible, I'd like to include a still image. But that's not too important.
[20:37:16 CET] <llogan> get a newer ffmpeg from here: http://johnvansickle.com/ffmpeg/
[20:37:23 CET] <furq> [aac @ 0xebb860] The encoder 'aac' is experimental but experimental codecs are not enabled, add '-strict -2' if you want to use it.
[20:37:30 CET] <furq> do what that error says, or preferably do what llogan says
[20:38:20 CET] <llogan> then do: ffmpeg -loop 1 -framerate 5 -i image -i music.mp3 -c:v libx264 -c:a aac -pix_fmt yuv420p -movflags +faststart -shortest output.mp4
[20:38:33 CET] <srg2> What is that error in reference to? Is the mp3 using aac somehow?
[20:38:48 CET] <furq> oh nvm
[20:38:53 CET] <furq> mp4 defaults to aac if you don't specify
[20:38:59 CET] <srg2> ahh
[20:39:00 CET] <furq> add -c copy to keep the mp3
[20:39:43 CET] <furq> or if you want an image, use llogan's command but replace -c:a aac with -c:a copy
[20:40:49 CET] <srg2> thanks!
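Putting that together, a sketch of llogan's command with the mp3 copied instead of re-encoded; the image and audio filenames are placeholders:

    ffmpeg -loop 1 -framerate 5 -i cover.jpg -i music.mp3 \
           -c:v libx264 -c:a copy -pix_fmt yuv420p \
           -movflags +faststart -shortest output.mp4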
[21:06:32 CET] <srg2> -c:v libx264 encodes the image into a video using x264, right?
[21:06:44 CET] <srg2> I don't think this program I'm using likes x264. is there a more common codec?
[21:09:56 CET] <vith> can i use drawtext to overlay the scene detection value onto each frame? i tried -filter_complex "drawtext=text='test %{scene}'" but got %{scene} is not known. my actual goal here is just finding a way to tune the similarity parameter to -vf "select=gt(scene\,0.01)"
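drawtext doesn't expose the scene score, but the select filter exports it as frame metadata, which the metadata filter can print on reasonably recent builds; a sketch of one way to dump the per-frame scores for tuning the threshold (assuming the key appears as lavfi.scene_score):

    ffmpeg -i input.mp4 -vf "select='gte(scene,0)',metadata=print" -an -f null -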
[21:55:43 CET] <axc1298> anyone know how i can modify this to add the following: "ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 input.mp4" (i would need to replace the input.mp4 part). https://pastebin.mozilla.org/8864131
[21:57:14 CET] <axc1298> since i have $f as the filename. would i need another eval line for duration, and then a duration= line?
[22:16:40 CET] <ubitux> cbsrobot: sample?
[22:17:06 CET] <ubitux> maybe the font tag wasn't recognized and so was simply copied
[22:24:35 CET] <llogan> axc1298: ffprobe -v error -of flat=s=_ -select_streams v:0 -show_entries stream=height,width:format=duration "$f"
[22:24:49 CET] <llogan> echo "$format_duration"
[22:39:24 CET] <axc1298> thanks llogan
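A sketch of how that fits into the kind of script being discussed, assuming the flat output is eval'd into shell variables; the variable names follow ffprobe's flat writer with the "_" separator:

    eval "$(ffprobe -v error -of flat=s=_ -select_streams v:0 \
            -show_entries stream=height,width:format=duration "$f")"
    echo "$f: ${streams_stream_0_width}x${streams_stream_0_height}, ${format_duration}s"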
[22:48:56 CET] <petecout_> kepstin: Sorry for the late reply. Was grabbing lunch. My understanding is mpeg-ts does support ID3 tags. https://en.wikipedia.org/wiki/ID3 https://en.wikipedia.org/wiki/MPEG_transport_stream
[23:02:42 CET] <kepstin> petecout_: huh, that's interesting. so they don't have it in the mp3 stream itself, they're rather sending it as a special metadata packet in the mpeg-ts. I haven't seen anything in ffmpeg to handle that (not saying it's not there, but I suspect it isn't).
[23:25:15 CET] <nadermx> I asked a question earlier here regarding multithreading with lame, was told it could not be done. What was suggested was to concat the files. Would that be possible from a youtube dash url? So for example ffmpeg -f concat -i "youtubeurl" -acodec libmp3lame -f mp3 -
[23:26:04 CET] <nadermx> Since it seems ffmpeg with larger files (over 10 mins) just cuts it off around that time, not sure if its because of the ffserver
[23:26:53 CET] <J_Darnley> The concat demuxer reads a plain text file listing files to read.
[23:27:15 CET] <J_Darnley> It isn't going to parse an html page for you.
[23:28:52 CET] <J_Darnley> Or am I not understanding your question properly
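For reference, the concat demuxer expects a plain text list like the one sketched below, fed in with -f concat; whether it accepts the long signed DASH URLs that youtube-dl prints, instead of local paths, is a separate question. Filenames here are placeholders:

    # list.txt
    file 'part1.m4a'
    file 'part2.m4a'

    ffmpeg -f concat -safe 0 -i list.txt -acodec libmp3lame -f mp3 output.mp3

(on newer builds -safe 0 is needed when the list contains absolute paths or URLs)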
[23:29:35 CET] <nadermx> I'm inputting the url that youtube-dl gives for the stream from youtube
[23:29:53 CET] <nadermx> it works fine if its the only song the server is doing, but if its doing multiple songs it cuts them off after a bit
[23:30:24 CET] <nadermx> with larger files
[23:30:28 CET] <nadermx> err songs/vids
[23:31:44 CET] <nadermx> so if i put a concat list with multiple lines of the url, would that be a workaround?
[23:32:01 CET] <J_Darnley> No idea.
[23:41:37 CET] <axc1298> llogan: is ":" used as a separator? so i have "stream=height,width:format=duration" and if i want to add frame rate, can i just do "stream=height,width:format=duration:avg_frame_rate"
[23:58:45 CET] <nadermx> another question, i see the buffer can be set for video, can it be done just for audio? I think what might be happening is that its getting the data faster than it can convert it so it drops at some point
[00:00:00 CET] --- Fri Mar 18 2016