[Ffmpeg-devel-irc] ffmpeg.log.20180306

burek burek021 at gmail.com
Wed Mar 7 03:05:01 EET 2018


[02:47:56 CET] <FishPencil> How do I take two videos, put the left half of i1.mp4 on the left side and the right half of i2.mp4 on the right side, and merge them?
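
One possible approach (a sketch, assuming both inputs have the same height; the audio mapping is an assumption) is to crop each input to half its width and join the halves with hstack:

    ffmpeg -i i1.mp4 -i i2.mp4 \
      -filter_complex "[0:v]crop=iw/2:ih:0:0[l];[1:v]crop=iw/2:ih:iw/2:0[r];[l][r]hstack[v]" \
      -map "[v]" -map "0:a?" out.mp4

Here the audio of the first input is carried over if it has any; drop the second -map if that is not wanted.
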
[04:15:49 CET] <pZombie> hello friends
[04:17:03 CET] <pZombie> it is said that ffmpeg now has an opencl vp9 encoder. What about a vp9 opencl decoder?
[04:49:15 CET] <solidus-river> hmm, i'm getting a return code of -22 from avformat_alloc_output_context2 when following the example provided
[04:49:31 CET] <solidus-river> i've tried specifying "mpeg" and specifying "metroska"
[04:53:44 CET] <solidus-river> hmm, an invalid argument error
[04:53:53 CET] <solidus-river> another case of the docs being bad :?
[04:57:24 CET] <solidus-river> https://pastebin.com/7Gfacrrx
[04:57:38 CET] <solidus-river> attempting to follow the example in https://ffmpeg.org/doxygen/trunk/muxing_8c-example.html
[04:59:42 CET] <solidus-river> huh, i'm following the example, the parameters match up
[04:59:45 CET] Action: solidus-river scratches head
[05:01:07 CET] <kepstin> solidus-river: in your pastebin, there's the typo "metroska" (should be "matroska")
[05:01:46 CET] <kepstin> i dunno what exactly your error is tho; not enough context.
[05:02:05 CET] <solidus-river> how could i give you more context? it fails with matroska as well as mpeg
[05:02:08 CET] <solidus-river> the example uses mpeg
[05:02:29 CET] <solidus-river> the return code is -22, which when translated via av_make_error_string is "Invalid argument"
[05:04:44 CET] <kepstin> what is the contents of the "filename" string?
[05:06:30 CET] <solidus-river> d:/blah/blah/blah/test.mkv
[05:06:38 CET] <solidus-river> in an unsigned char
[05:06:46 CET] <solidus-river> er, const char *
[05:06:58 CET] <solidus-river> but blah blah is the full path
[05:06:59 CET] <kepstin> hmm, I dunno if ffmpeg will accept that filename format
[05:07:10 CET] <solidus-river> what format does it expect?
[05:07:18 CET] <kepstin> it might try to parse d: as a protocol then fail because it doesn't know the d protocol
[05:07:36 CET] <kepstin> for initial testing, just use a relative path or filename only
[05:07:48 CET] <kepstin> if you really need an absolute windows path, hmm.
[05:07:55 CET] <solidus-river> I tried setting filename to NULL which it says is a valid thing to do but that windows
[05:08:00 CET] <kepstin> maybe you can format it into a file:// url somehow
[05:08:41 CET] <solidus-river> it's not physically touching the file, maybe i can just fill it in with a garbage string
[05:09:59 CET] <kepstin> when you call avformat_write_header() it will open the file and write to it.
[05:10:15 CET] <kepstin> but I don't think it opens the file when you allocate the context
[05:10:28 CET] <solidus-river> oh weird.. so i have to use relative paths :?
[05:10:53 CET] <kepstin> there should be a way to use absolute paths on windows, I just don't know how you're supposed to write them.
[05:11:27 CET] <solidus-river> hmm, its still -22
[05:11:34 CET] <solidus-river> maybe i should open the file first?
[05:11:40 CET] <solidus-river> the file definitely doesn't exist at this point
[05:12:00 CET] <kepstin> ffmpeg takes care of opening the file, you don't have to do that
[05:12:07 CET] <solidus-river> but it says the filename could be null, so i guess i'm still confused, if it's null, it shouldn't need to open it
[05:12:32 CET] <solidus-river> so alloc_output_context will create the file?
[05:12:40 CET] <kepstin> some output formats don't require a filename, and in some cases if you're providing a custom io context I don't think you need it
[05:13:02 CET] <kepstin> I don't think the file is opened/created when the context is allocated
[05:13:15 CET] <solidus-river> huh.., wonder what parameter is invalid
[05:13:30 CET] <kepstin> what does your filename look like now?
[05:13:38 CET] <kepstin> try setting it to just "test.mkv"
[05:13:41 CET] <solidus-river> i just hard coded "video.mkv"
[05:13:53 CET] <solidus-river> and i still get invalid parameter
[05:14:15 CET] <kepstin> did you fix the typo and use the correct name "matroska" instead of "metroska"?
[05:14:27 CET] <solidus-river> yes
[05:14:36 CET] <solidus-river> i'm now trying "video.mpeg" and "mpeg"
[05:15:04 CET] <solidus-river> still -22
[05:17:14 CET] <kepstin> when you run that it should be printing an error message on stderr, i think
[05:17:34 CET] <solidus-river> oh let me check
[05:18:30 CET] <solidus-river> hmm, it didn't output anything
[05:18:49 CET] <solidus-river> but av_guess_format("matroska", "mkv", NULL); also returned NULL / failed to find a format
[05:19:08 CET] <solidus-river> am I supposed to call any stateful registry function like avcodec_register_all() prior to attempting to find an output format?
[05:19:16 CET] <kepstin> hmm, the problem is probably that you didn't call av_register_all(); yeah
[05:19:37 CET] <solidus-river> *facepalm*
[05:20:05 CET] <kepstin> ... that's not done in the example file
[05:20:09 CET] <kepstin> which I guess is a bug
[05:20:11 CET] <solidus-river> no, its not
[05:20:12 CET] <solidus-river> lol
[05:20:31 CET] <solidus-river> i looked for it and also checked every function name that began with avformat_ to find something with register in the name
[05:20:44 CET] <solidus-river> but!, i didn't check av_
[05:21:14 CET] <solidus-river> yep, that worked :P
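
A minimal sketch of the initialization order that matters here, assuming an FFmpeg release from this era (av_register_all() was only deprecated later, in 4.0):

    #include <stdio.h>
    #include <libavformat/avformat.h>

    int main(void)
    {
        av_register_all();   /* without this, muxer lookups fail on older FFmpeg */

        AVFormatContext *oc = NULL;
        int ret = avformat_alloc_output_context2(&oc, NULL, "matroska", "video.mkv");
        if (ret < 0) {       /* -22 == AVERROR(EINVAL), e.g. unknown muxer name */
            char buf[AV_ERROR_MAX_STRING_SIZE];
            av_strerror(ret, buf, sizeof(buf));
            fprintf(stderr, "avformat_alloc_output_context2: %s\n", buf);
            return 1;
        }
        avformat_free_context(oc);
        return 0;
    }
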
[05:21:41 CET] <solidus-river> so, prior to this i was just dumping the straight packets to the file, i should let ffmpeg take care of all file handle operations :?
[05:22:06 CET] <kepstin> unless you really want to do it yourself, let ffmpeg do it
[05:22:19 CET] <kepstin> (in some formats it has to seek to update headers after writing the file, etc.)
[05:22:40 CET] <solidus-river> kk, i'm going to keep following the example, but given that it didn't register formats, i'm wary of it
[05:22:48 CET] <solidus-river> :\
[05:39:48 CET] <solidus-river> hmm, including ffmpeg.h makes everything go crazy I must be missing some defines
[05:41:19 CET] <solidus-river> oh, i didn't specify toolchain=msvc on building, I hope that doesn't bite me in the bum later on
[05:41:27 CET] <kepstin> umm, there is no installed header named 'ffmpeg.h'
[05:42:10 CET] <solidus-river> yeah, I was trying to figure out where OutputStream was defined (it's declared at the top of the example, but in the docs it's defined in fftools/ffmpeg.h)
[05:42:10 CET] <kepstin> for most of the ffmpeg libraries, you want #include <libsomething/something.h>
[05:43:04 CET] <kepstin> OutputStream isn't a type provided by the library
[05:43:24 CET] <kepstin> in the muxing example, it's defined in the same C file
[05:43:49 CET] <solidus-river> you're right, my bad, i thought add_stream was defined in the library because it had doxygen docs / defs, but it's local to the file as well
[05:44:36 CET] <kepstin> hint: All the public library functions, types, and variables start with AV or av
[05:45:23 CET] <kepstin> well, there's some exceptions
[05:45:33 CET] <kepstin> e.g. the libswscale stuff starts with SWS or sws
[05:47:40 CET] <kepstin> but it sounds like you're reading the headers directly from the ffmpeg source dir; you really should install ffmpeg if possible, then use the installed headers, so you don't accidentally use internal headers.
[06:59:43 CET] <solidus-river> well, that's a step closer. i merged the example with the code i had; no errors, but it also never outputs packets. going to pick up here tomorrow. also, header parsing fails on the matroska container
[07:03:12 CET] <solidus-river> kepstin, thanks for your help fumbling through that muxer example
[07:25:17 CET] <pZombie_> test
[07:38:51 CET] <pragomer> when extracting the pictures of a video file via "ffmpeg -i myfile.avi image%d.png" I get "fields" (I at least think the English word for this is "field", since video pictures are combined from two "half pictures", which is the literal German term)
[07:39:20 CET] <pragomer> can I improve the quality so that there are no such unsharp "movements" because of these picture "fields"?
[07:40:09 CET] <pragomer> is the right technique here "de-interlacing" ?
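
De-interlacing is indeed the usual fix for this; one possible command (a sketch using the yadif deinterlacing filter, with the rest of the options unchanged) is:

    ffmpeg -i myfile.avi -vf yadif image%d.png
    ffmpeg -i myfile.avi -vf yadif=1 image%d.png    (one output frame per field, doubling the frame count)
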
[08:02:21 CET] <dil12321> Hi, how do I use the unsharp_opencl filter?
[08:06:15 CET] <dil12321> hello
[11:23:51 CET] <BtbN> What the hell. I made a video by concatenating a bunch of 4 second .flv segments recorded by nginx-rtmp. And at some points in the final video, that's just done using -c copy, the audio jumps back to the very beginning of the video, for about 30~60 seconds, and then goes back to normal.
[11:23:59 CET] <BtbN> How can that even happen, 3 hours into a video?
[11:24:40 CET] <BtbN> https://www.youtube.com/watch?v=xyMwIbiec80&t=3h30m40s
[12:40:34 CET] <azarus> When (very roughly is ok too) can we expect an AV1 encoder/decoder that's comparable to x265?
[14:39:08 CET] <fred1807> what is wrong with this script?    for file in *mp4 *avi; do ffmpeg -i "$file" -vn -acodec copy "$file".ffprobe "$file" 2>&1 |sed -rn 's/.Audio: (...), ./\1/p'; done
[14:39:41 CET] <azarus> dunno, show the output when you execute it
[14:40:09 CET] <fred1807> sed: illegal option -- r
[14:40:36 CET] <azarus> are you using GNU sed or not
[14:40:42 CET] <fred1807> oh.. got it
[14:40:53 CET] <fred1807> damn osx
[14:40:59 CET] <azarus> ah, even busybox supports -r
[14:41:14 CET] <azarus> so it's probably just macOS messing with you :P
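
macOS ships BSD sed, which wants -E instead of -r for extended regular expressions (GNU sed accepts -E as well). A more portable form of the probing pipe might look like this; the exact regex is illustrative, not the original:

    ffprobe "$file" 2>&1 | sed -En 's/.*Audio: ([^,]*).*/\1/p'
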
[14:53:38 CET] <fps> i cry every tim
[14:53:42 CET] <fps> when i read ffmpeg docs ;)
[15:11:36 CET] <durandal_1707> fps: why?
[15:13:28 CET] <saml> cause font?
[15:13:39 CET] <saml> don't cry. be glad
[15:14:57 CET] <durandal_1707> is font too small?
[15:19:00 CET] <saml> maybe he cries for joy
[16:43:51 CET] <King_DuckZ> hi, I think I finally got something that works, though the output doesn't really look playable... but nvm that, I'd like to get some warnings fixed if anybody can help? I get lots and lots of buffer underflow messages
[16:44:17 CET] <King_DuckZ> and at the beginning I get VBV buffer size not set, using default size of 130KB blah
[16:47:58 CET] <King_DuckZ> my code is more or less like this https://alarmpi.no-ip.org/kamokan/dw?cpp then, if got_packet is true, I call av_interleaved_write_frame(ctx, packet);
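
One guess at that warning: the mpeg-family encoders print "VBV buffer size not set" (and tend to report buffer underflows) when the rate-control fields on the encoder's AVCodecContext are left at zero before avcodec_open2(). A hedged sketch, with placeholder numbers rather than recommendations:

    /* enc is the video encoder's AVCodecContext; set before avcodec_open2() */
    enc->bit_rate       = 4000000;   /* average bitrate, bits/s */
    enc->rc_max_rate    = 4000000;   /* peak bitrate */
    enc->rc_buffer_size = 8000000;   /* VBV buffer size, bits */
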
[17:18:29 CET] <Chuck_> What does ffmpeg do at the boundaries of an image when scaling with bilinear and bicubic interpolations?
[17:23:46 CET] <JEEB> depends on the scaling filter
[17:26:04 CET] <BtbN> I don't get it. This mp4 plays fine locally. When I upload it to youtube, the audio does the most fucked up shit. Like, re-starting from the beginning in the middle of the video, but only for like 30 seconds.
[17:28:51 CET] <Chuck_> JEEB: The filters used are bilinear and bicubic
[17:44:46 CET] <Chuck_> What does ffmpeg do at the boundaries of an image when scaling with bilinear and bicubic interpolations?
[17:49:10 CET] <jkqxz> They should match the boundaries of the original image with a one-dimensional linear filter for bilinear (I think that just maps the corner pixels directly).  Bicubic will do more but probably has the same edge conditions?
[17:50:00 CET] <JEEB> Chuck_: it depends on the filter (within libavfilter) you're using. for example scale, which uses the internal swscale library, or zscale, which uses the zimg library
[17:50:30 CET] <JEEB> and then you learn what they're doing by looking at the code of the related implementation
[17:52:23 CET] <Chuck_> I just implemented my own version of bilinear scaling and it produces the same scaled image as ffmpeg if I replicate the border pixels. But for bicubic it is different.
[17:52:47 CET] <JEEB> thus check either swscale or zimg's stuff?
[17:52:47 CET] <Chuck_> JEEB: Ill give it a look in that lib, thanks
[17:52:57 CET] <Chuck_> swscale
[17:53:15 CET] <JEEB> virtualdub's author had a great blog
[17:53:26 CET] <JEEB> which went over a lot of basics
[17:53:36 CET] <JEEB> but that's down now :<
[17:54:22 CET] <Chuck_> oh.. i was just looking for it
[18:01:41 CET] <Johnjay> virtualdub guy had a blog?
[18:01:45 CET] <Johnjay> and it went down?
[18:01:47 CET] <Johnjay> T_T
[18:05:53 CET] <JEEB> yup
[18:06:00 CET] <JEEB> it returns 500
[18:06:36 CET] <JEEB> http://virtualdub.org/blog/
[18:09:31 CET] <Johnjay> what does archive.org say?
[19:45:54 CET] <ed_> would ffmpeg -y -i source.ts -c:v copy -c:a copy dest.mp4 be lossless?
[19:46:31 CET] <JEEB> for video and audio yes, the streams should get remuxed
[19:47:02 CET] <Chuck_> I just read that OpenCV and MATLAB resize methods use different values for the coefficient A in bicubic interpolation, -0.75 and -0.5. What value does ffmpeg use?
[19:47:20 CET] <JEEB> nobody's going to respond to those details here :D
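
For context, the coefficient A being discussed is the free parameter of the standard Keys cubic-convolution kernel (A = -0.5 gives Catmull-Rom); which value swscale actually uses, and whether it uses this kernel at all, is best checked in its source, as suggested above:

    W(x) = (A+2)|x|^3 - (A+3)|x|^2 + 1        for |x| <= 1
    W(x) = A|x|^3 - 5A|x|^2 + 8A|x| - 4A      for 1 < |x| < 2
    W(x) = 0                                  otherwise
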
[19:47:22 CET] <ed_> JEEB, my lack of AV knowledge is showing now. what do you mean by remuxed? do you mean things like subtitles would be lost, or do you mean things like multiple audio channels?
[19:47:37 CET] <kepstin> ed_: note that by default it only selects one video and one audio stream; if there are multiple streams in the source they won't all be copied
[19:47:55 CET] <furq> ed_: remuxing = putting the streams in a new container
[19:47:58 CET] <JEEB> ed_: remuxing aka remultiplexing is to demultiplex streams and re-multiplex them
[19:48:22 CET] <furq> remuxing is indeed remuxing
[19:48:43 CET] <ed_> what about weird things like subtitles, do they copy over?
[19:48:58 CET] <furq> not with that command and probably not in general from ts to mp4
[19:49:18 CET] <furq> you can add -c:s copy but that will likely break from ts to mp4 because it's probably something like dvbsub that mp4 doesn't support
[19:50:02 CET] <furq> without that it'll try to convert between subtitle formats where possible
[19:50:08 CET] <ed_> I can live with that. is it possible to extract the subtitles to a time-indexed file, just in case i want them later?
[19:50:15 CET] <furq> but only between text-based subtitle formats
[19:50:39 CET] <furq> that depends on the subtitle format
[19:50:54 CET] <furq> if it's just text subs you can save them to srt
[19:51:02 CET] <furq> but if it's picture-based then there's normally not a standalone container for those
[19:51:11 CET] <furq> you could always mux them into an mkv
[19:51:54 CET] <ed_> ah, this is to get some .ts into a format that is html5 compatible :/
[19:52:00 CET] <furq> -i foo.ts -c copy -map 0:v -map 0:a out.mp4 -c copy -map 0:s out.mkv
[19:52:32 CET] <ed_> :)
[19:52:58 CET] <ed_> Thanks furq I wouldn't have thought of that
[19:52:59 CET] <kepstin> if you have text subs, you could convert them to an external webvtt file, i think most browsers handle those nowadays
[19:53:18 CET] <kepstin> if you have image subs, well, I guess you could burn them in to the video (hard-encoded?) :/
[19:53:19 CET] <furq> do browsers support mov_text yet
[19:54:28 CET] <furq> if you really want to keep the subs and you have image subs then you'd have to OCR them into text subs
[19:54:33 CET] <furq> which isn't something ffmpeg does
[19:54:44 CET] <furq> there are plenty of free tools which do it though
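
A sketch of the two options discussed (the stream index 0:s:0 is an assumption, and the first command only works if that stream is text-based):

    ffmpeg -i source.ts -map 0:s:0 subs.vtt              (text subs to a standalone WebVTT file)
    ffmpeg -i source.ts -map 0:s:0 -c:s copy subs.mkv    (image subs such as dvbsub, kept in an mkv)

Image-based subs can alternatively be burned into the video (e.g. via the overlay filter), at the cost of a re-encode.
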
[21:35:52 CET] <shtomik> Hi guys, can somebody help me with threads, please? I'm changing the transcoding.c example to multithreading... I have this code: https://pastebin.com/ZVGbEhRW, but the params that I pass to a new thread are wrong. What am I doing wrong, guys? Thanks!
[21:36:51 CET] <shtomik> I need to pass two params to a new thread (an int and an AVFrame), but in the new thread I receive incorrect data, thanks.
[21:39:24 CET] <sr90> hi all, I am trying to play an mpd file and see if/when/how dashdec.c code is invoked. How should I go about it?
[21:48:10 CET] <shtomik> My question is close, thanks ;)
[21:49:55 CET] <DHE> shtomik: you're launching a thread per frame. that's not what you want to do
[21:50:30 CET] <DHE> ffmpeg is not thread-safe for any single AVSomethingContext. you're racing the hell out of it and not respecting buffering
[21:51:29 CET] <DHE> generally you make a single thread whose job is to handle all inputs or all outputs. for a transcode job 2 threads should suffice - one decoder, one encoder. The main thread could take one of these roles
[21:54:55 CET] <shtomik> DHE: Okay, can I ask you for a small example, please?
[21:55:37 CET] <shtomik> DHE: How do I pass the AVFrame to another thread after decoding, every time?
[21:56:48 CET] <shtomik> DHE: creating a new thread for each AVFrame, is that a stupid step?
[22:08:27 CET] <kepstin> it's up to you to figure out how to pass the AVFrame from one thread to another - you might want some sort of threadsafe fifo to do it, for example.
[22:08:58 CET] <kepstin> Note that it's safe to share an AVFrame across threads as long as you ref() and unref() it correctly.
[22:13:50 CET] <DHE> there is a thread-safe queue in ffmpeg somewhere...
[22:20:47 CET] <kepstin> looks like libavutil/threadmessage.h is one, yeah
[22:27:40 CET] <shtomik> DHE: kepstin: thanks guys!
[22:28:01 CET] <shtomik> So, I need to use a queue, and work with 2 threads, right
[22:28:02 CET] <shtomik> ?
[22:29:44 CET] <kepstin> yes; have one thread use the decoder context and decode to avframes, then pass the frames to a second thread using a queue, and have that second thread use the encoder context.
[22:30:40 CET] <kepstin> note that the AVThreadMessage queue helpfully also lets you pass error messages and EOF between the threads as well.
[22:38:54 CET] <shtomik> kepstin: oh thanks, I'm going to try this variant! thanks!
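
A minimal sketch of that two-thread handoff using libavutil/threadmessage.h (the FrameMsg type and the function names are made up for illustration; error handling is trimmed):

    #include <libavutil/error.h>
    #include <libavutil/frame.h>
    #include <libavutil/threadmessage.h>

    /* one decoded frame per message */
    typedef struct FrameMsg {
        AVFrame *frame;
    } FrameMsg;

    static AVThreadMessageQueue *queue;
    /* setup: av_thread_message_queue_alloc(&queue, 16, sizeof(FrameMsg)); */

    /* decoder thread: called for every frame from avcodec_receive_frame() */
    static void push_frame(AVFrame *decoded)
    {
        FrameMsg msg = { .frame = av_frame_clone(decoded) };
        if (av_thread_message_queue_send(queue, &msg, 0) < 0)
            av_frame_free(&msg.frame);   /* receiver is gone, drop the frame */
    }

    /* decoder thread, once input is exhausted: receiver sees EOF after draining */
    static void signal_eof(void)
    {
        av_thread_message_queue_set_err_recv(queue, AVERROR_EOF);
    }

    /* encoder thread: loop until EOF (or an error) comes back from the queue */
    static void run_encoder(void)
    {
        FrameMsg msg;
        while (av_thread_message_queue_recv(queue, &msg, 0) >= 0) {
            /* feed msg.frame to the encoder context here */
            av_frame_free(&msg.frame);
        }
    }
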
[23:20:09 CET] <chocolaterobot> I have an OGG file (audio file) that I'd like to upload to youtube. Since Youtube accepts only video, I have to add some image to the ogg file. what ffmpeg command is the best in terms of (1) not altering the audio and (2) not increasing file size so much and (3) not taking a long time for the command to be completed?
[23:20:32 CET] <chocolaterobot> I'm fine if the image is one black pixel -- just the bare minimum.
[23:20:40 CET] <chocolaterobot> using linuxmint 18.3
[23:21:41 CET] <kepstin> something like "ffmpeg -framerate 6 -loop 1 -i image.png -i audio.ogg -c:v libx264 -preset veryfast -c:a copy output.mkv" would do that fine
[23:21:52 CET] <kepstin> for upload to youtube
[23:22:26 CET] <kepstin> (if you're doing something else with the video, add a "-pix_fmt yuv420p" before the output filename)
[23:22:43 CET] <kepstin> oh, and you'll need to add the option "-shortest" as well, otherwise it'll make an infinitely long video :)
[23:22:49 CET] <chocolaterobot> what do you mean " do something else with the video"?
[23:22:58 CET] <chocolaterobot> You mean aside from uploading to youtube?
[23:23:40 CET] <kepstin> yeah
[23:23:52 CET] <chocolaterobot> kepstin: ah, it'll be only to upload to yuotube.
[23:25:05 CET] <shtomik> kepstin: I can't find the AVThreadMessage queue... ;(
[23:25:21 CET] <kepstin> youtube will re-encode the audio and video you upload anyways, so it doesn't really matter what formats you upload - the main thing is to make sure to use "-c:a copy", which tells ffmpeg to copy the audio without re-encoding to avoid an extra generation loss.
[23:25:38 CET] <kepstin> shtomik: scroll up, I mentioned what header file it's in.
[23:26:40 CET] <shtomik> kepstin: thanks!
[23:27:37 CET] <chocolaterobot> kepstin: thanks
[23:30:05 CET] <chocolaterobot> kepstin: will your command produce a file format that youtube accepts+
[23:30:07 CET] <chocolaterobot> ?
[23:30:25 CET] <kepstin> yeah, should be fine.
[23:30:52 CET] <kepstin> don't forget the "-shortest" option tho :)
[23:34:03 CET] <shtomik> kepstin: and with this queue I can send/receive my struct to/from the queue, right?
[23:37:36 CET] <chocolaterobot> `ffmpeg -shortest -framerate 6 -loop 1 -i image.png -i audio.ogg -c:v libx264 -preset veryfast -c:a copy output.mkv` <-- like that? :)
[23:39:00 CET] <chocolaterobot> kepstin: your command says `output.mkv`, but I don't see mkv in  https://support.google.com/youtube/troubleshooter/2888402?hl=en
[23:39:14 CET] <kepstin> chocolaterobot: it works fine, don't worry about it
[23:39:19 CET] <chocolaterobot> ok.
[23:39:36 CET] <kepstin> "-shortest" is an output option, move it to before the output filename
[23:40:59 CET] <chocolaterobot> `ffmpeg -shortest -framerate 6 -loop 1 -i image.png -i audio.ogg -c:v libx264 -preset veryfast -c:a copy -shortest output.mkv` <----like this? :)
[23:41:59 CET] <kepstin> well, you didn't remove the extra '-shortest' at the start there :)
[23:42:12 CET] <kepstin> (i dunno if that would be an error or if it would just be ignored)
[23:46:35 CET] <chocolaterobot> oops
[23:46:57 CET] <chocolaterobot> `ffmpeg -framerate 6 -loop 1 -i image.png -i audio.ogg -c:v libx264 -preset veryfast -c:a copy -shortest output.mkv` <-- what I meant to write :)
[23:54:56 CET] <solidus-river> hey all, my x264 / mkv journey continues today. I'm using the new api, I believe, and have successfully created an AVFormatContext with the appropriate output format for video and the container, and have written headers. I'm at the point where I'm trying to encode packets, and I never get a packet back from avcodec_encode_video2
[23:56:10 CET] <solidus-river> using matroska as the container and libx264 as the encoder. I'm using this bit of code as a reference and will pastebin my code. I've taken some of the conditionals out of the process since I'm sure (and have tested) that I am only ever encoding via libx264 and into the matroska format, so I can rely on some assumptions
[23:56:26 CET] <solidus-river> this is the example I am using https://ffmpeg.org/doxygen/trunk/muxing_8c-example.html
[23:56:57 CET] <solidus-river> and this is my current code base. the flow is that an external program calls startencoding, then calls encode frame once per frame for as many frames as the video needs, then calls finish encoding
[23:58:27 CET] <solidus-river> this is my video encoder class that is used
[23:58:28 CET] <solidus-river> https://pastebin.com/HLHiemEh
[23:59:34 CET] <solidus-river> going to keep picking through it. I never get a packet, but I also don't get an error; at the end of the process my mkv file appears to be empty aside from maybe some headers that are claimed to be corrupt
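
For what it's worth, libx264 buffers a number of frames for lookahead and B-frame decisions, so no packets come out until it has seen several input frames, and the encoder has to be flushed at the end of the stream (with avcodec_encode_video2 that means calling it repeatedly with a NULL frame until got_packet is 0). A sketch of the same flow with the newer send/receive API; the helper name is made up and error handling is minimal:

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* send one frame (or NULL to start draining) and write whatever packets
     * the encoder is ready to hand back */
    static int encode_and_write(AVCodecContext *enc, AVFrame *frame,
                                AVFormatContext *oc, AVStream *st)
    {
        int ret = avcodec_send_frame(enc, frame);   /* frame == NULL starts the flush */
        if (ret < 0)
            return ret;

        for (;;) {
            AVPacket pkt;
            av_init_packet(&pkt);
            pkt.data = NULL;
            pkt.size = 0;

            ret = avcodec_receive_packet(enc, &pkt);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                return 0;                           /* needs more input, or fully drained */
            if (ret < 0)
                return ret;

            av_packet_rescale_ts(&pkt, enc->time_base, st->time_base);
            pkt.stream_index = st->index;
            ret = av_interleaved_write_frame(oc, &pkt);   /* takes ownership of the packet */
            if (ret < 0)
                return ret;
        }
    }

    /* at finish: encode_and_write(enc, NULL, oc, st); then av_write_trailer(oc); */
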
[00:00:00 CET] --- Wed Mar  7 2018

