[Ffmpeg-devel-irc] ffmpeg.log.20140302
burek
burek021 at gmail.com
Mon Mar 3 02:05:01 CET 2014
[00:00] <iive> yes...
[00:00] <iive> well, SD data might be random or just overwritten, then nothing interesting would happen.
[00:01] <iive> but if the file compresses to 1-2MB, then you can't expect to save more than a few seconds...
[00:02] <orangey> iive: it seems to have cut 300mb of 1.1gb
[01:15] <geqsaw> how can I change the resolution and add padding in one command?
[01:28] <cbsrobot> geqsaw: scale and pad
[01:29] <cbsrobot> or crop and pad
[01:29] <cbsrobot> depends on what you want to do
[01:30] <geqsaw> ffmpeg -i in.mkv -map 0 -s 1920x400 -vf pad=1920:540:0:70 -aspect 1920:540 -c:a copy -sn out.mkv
[01:35] <geqsaw> output:
[01:35] <geqsaw> [Parsed_pad_0 @ 0x83f700] Input area 0:70:1920:870 not within the padded area 0:0:1920:540 or zero-sized
[01:35] <geqsaw> [graph 0 input from stream 0:0 @ 0x841f20] Failed to configure input pad on Parsed_pad_0
[01:38] <Dark-knight> what is better 4 ref frames or 16 ref frames?
[01:42] <Dark-knight> ok thats nice
[01:42] <Dark-knight> what is better 4 ref frames or 16 ref frames?
[01:43] <geqsaw> next time...
[01:45] <cbsrobot> geqsaw: ffmpeg -i in.mkv -vf scale=1920:400,pad=1920:540:0:70 -c:a copy -sn out.mkv
[01:45] <cbsrobot> ^wild guessing
[01:46] <cbsrobot> Dark-knight: what for ?
[01:46] <Dark-knight> video stream
[01:47] <Dark-knight> i got 2 files and one is 4 ref frames and one is 16 ref frames
[01:47] <cbsrobot> well I guess it depends on the content and your aim
[01:48] <cbsrobot> quality/bandwidth wise
[01:48] <safani> hello all
[01:49] <safani> I am trying to get the sample data of the waveform from an mp3 with ffmpeg. Could someone give me some direction or any clues as to how to do this.
[01:50] <safani> I would like to create a visualization of the mp3 in html5 canvas. But I need the waveform data for each sample to do this.
[01:51] <geqsaw> cbsrobot: good guessing, it worked...
[01:57] <Dark-knight> cbsrobot: the video with 4 ref frames is 1080p and the video with 16 ref frames is 720p
[01:57] <Dark-knight> i was wondering if 4 was better than 16?
[02:06] <safani> Is there a way to get the sample chunks in number form with ffmpeg in order to map a waveform
[02:06] <safani> ?
[02:15] <blight> hmm
[02:15] <blight> how do i flush/discard the buffer of libswresample? when seeking
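One way to do this (a sketch, assuming an already-initialized SwrContext *swr with interleaved output): swr_convert() with a NULL input flushes whatever the resampler has buffered, and the flushed samples can simply be thrown away before decoding resumes at the new position.

    #include <libswresample/swresample.h>

    /* Drain libswresample's internal buffer and discard the samples,
     * so audio from before the seek does not leak into post-seek output. */
    uint8_t scratch[16384];
    uint8_t *out[1] = { scratch };
    int n;
    do {
        n = swr_convert(swr, out, 1024, NULL, 0);   /* NULL input = flush */
    } while (n > 0);

Freeing the context with swr_free() and setting it up again is a heavier but equally valid way to discard the buffered state.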
[02:33] <Dark-knight> this channel isn't very helpful tonight
[02:33] <Dark-knight> ill bbl
[02:41] <klaxa> this user wasn't very patient tonight
[03:04] <RenatoCRON> hello people, i'm trying to segment streaming JPEG input to segmented videos, how can I do it?
[03:05] <RenatoCRON> i'm trying with this:
[03:05] <RenatoCRON> http://pastebin.com/uWjZ7MGC
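The pastebin contents are not preserved here; as a rough sketch of one way to do this (assuming the JPEGs arrive concatenated on stdin and MPEG-TS segments are acceptable; the rate and segment length are placeholders), the image2pipe demuxer can feed the segment muxer:

    cat jpeg_stream | ffmpeg -r 5 -f image2pipe -c:v mjpeg -i pipe:0 \
        -c:v libx264 -pix_fmt yuv420p -f segment -segment_time 10 out%03d.ts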
[03:16] <JodaZ> [Parsed_ashowinfo_1 @ 0x2912de0] n:0 pts:0 pts_time:0 pos:37130 fmt:fltp channels:2 chlayout:stereo rate:22050 nb_samples:1024 checksum:00000000 plane_checksums: [ 00000000 00000000 ]
[03:20] <JodaZ> nvm
[03:36] <JodaZ> *_* ok, i have 2 problems
[03:37] <sacarasc> Jay-Z has 99. You're lucky.
[03:38] <JodaZ> copyts+copytb do not in fact copy over timestamps for audio exactly, and for some reason i have a zero audio chunk inserted at the start of my output that was not there in the source
[04:35] <sinclair|work> oh wow, there is a channel for this
[04:36] <sinclair|work> i have a question regarding piping with ffmpeg
[04:37] <sinclair|work> i am currently writing a wrapper over ffmpeg, and using pipes to stream data in (stdin) and getting data back on stdout. Im looking to generate thumbnails with ffmpeg from a video source, however, im not sure how i would pipe multiple pngs out over a single stdout pipe
[04:38] <sinclair|work> can anyone lend some assistance ?
[05:24] <sinclair|work> anyone?
[05:25] <klaxa> aren't there already programs better suited for that? :x
[05:25] <klaxa> also, how many thumbnails do you want to make? if it is just one that shouldn't be too hard
[05:28] <sinclair|work> klaxa: well, im past that
[05:29] <sinclair|work> klaxa: now im looking to stream images into video
[05:29] <sinclair|work> klaxa: the important thing to note here is im trying to pipe these images into ffmpeg from stdin / stdout
[05:29] <sinclair|work> which, according to everything i have read, seems perfectly possible
[05:29] <klaxa> yeah it doesn't seem impossible
[05:30] <sinclair|work> klaxa: the issue im having right now is all documentation points to encoding from disk and outputting to disk
[05:30] <sinclair|work> its ... annoying
[05:30] <sinclair|work> i mean, here, you have a perfectly capable technology to handle streaming, and no one seems to be doing that
[05:31] <sinclair|work> klaxa: its a big issue with ffmpeg
[05:32] <sinclair|work> klaxa: no one seems to be using it to its full extent, rather, people are just using it for some scrappy back end task
[05:32] <klaxa> well it's not a general usecase
[05:33] <sinclair|work> klaxa: its a perfectly reasonable usecase
[05:33] <sinclair|work> i mean, anyone even attempting to report back progress on an encoding is lost given the lack of help out there to actually do that
[05:33] <sinclair|work> klaxa: as it turns out, ffmpeg reports back on stderr
[05:34] <klaxa> i never said it's an unreasonable usecase
[05:34] <klaxa> i said it's not a general usecase
[05:34] <sinclair|work> klaxa: youll have to forgive my frustration mate
[05:35] <sinclair|work> but it really does feel like i'm doing something "out of the norm" by actually trying to pipe my own streamed data to ffmpeg for encoding
[05:35] <sinclair|work> when doing that shouldn't be out of the norm at all....it should be the most common means of using ffmpeg
[05:36] <klaxa> the common usecase is you have files
[05:36] <sinclair|work> sure, i can appreciate people get by with running batch jobs in the background, but it need not be the extent to which the tech is used
[05:36] <sinclair|work> klaxa: what if i have a web camera feed?
[05:36] <klaxa> okay
[05:36] <klaxa> so...
[05:37] <klaxa> it actually doesn't matter because in the end everything is a file descriptor
[05:37] <klaxa> the thing with multiple images is
[05:37] <klaxa> they are not concatenated in one stream you read from one filedescriptor
[05:37] <klaxa> but multiple files, each with its own filedescriptor
[05:38] <sinclair|work> klaxa: so, they need delimiting
[05:38] <klaxa> that's the issue you are running into right now
[05:38] <klaxa> no they need independent filedescriptors
[05:38] <klaxa> ffmpeg doesn't handle images like that
[05:38] <klaxa> or it can and i am unaware
[05:38] <sinclair|work> klaxa: internally, its capable of reading images from disk on its own
[05:39] <klaxa> yes
[05:39] <sinclair|work> klaxa: it must be perfectly capable of letting me stream images on its stdin
[05:39] <klaxa> no
[05:41] <sinclair|work> why no?
[05:41] <sinclair|work> klaxa: if i am streaming images in, the most i should need to do is delimit each image, possibly with a fd
[05:41] <klaxa> apparently it is
[05:41] <klaxa> cat *.png | ffmpeg -r 1 -s 1366x768 -c:v png -f image2pipe -i - -c:v libx264 -r 30 -pix_fmt yuv420p -f matroska pipe:1 > shots.mkv
[05:42] <klaxa> that works
[05:42] <klaxa> you were missing the correct format, in this case image2pipe
[05:42] <klaxa> i... don't think you understand how filedescriptors work
[05:42] <klaxa> anyway, you have to add the image2pipe format before your input
[05:42] <sinclair|work> klaxa: one second, let me digest your arguments
[05:42] <klaxa> kk
[05:43] <klaxa> you might also have to set the codec (image format) like i did
[05:43] <sinclair|work> klaxa: in that scenario, you are piping from a cat command
[05:43] <sinclair|work> so, you are telling ffmpeg the file paths basically
[05:43] <klaxa> it doesn't matter what type of command you use to produce the stream
[05:43] <klaxa> no
[05:43] <klaxa> i am concatenating all files into one filestream
[05:44] <sinclair|work> right
[05:44] <sinclair|work> klaxa: so, its that concatenation that i need to do
[05:45] <klaxa> i thought you have a stream of images already?
[05:45] <sinclair|work> klaxa: in this scenario, i have 1 image, that i want to pipe endlessly to ffmpeg
[05:45] <sinclair|work> the format for the image is jpg
[05:46] <klaxa> define endlessly
[05:46] <klaxa> maybe you can use a video filter for that
[05:46] <sinclair|work> i load the jpg once, store its bytes in memory, and then write those bytes to ffmpeg's stdin
[05:46] <sinclair|work> over and over
[05:47] <sinclair|work> klaxa: its a benign example, but its just a test
[05:47] <klaxa> well you can do that i guess
[05:47] <sinclair|work> klaxa: so, how is that different from cat.*?
[05:47] <klaxa> it's not
[05:48] <klaxa> except you are using the same file over and over again which is perfectly fine
[05:48] <sinclair|work> klaxa: so i think i may have got it working
[05:48] <klaxa> nice
[05:50] <sinclair|work> klaxa: well, its streaming something
[05:51] <sinclair|work> klaxa: im down to this...
[05:52] <sinclair|work> -r 1 -c:v mjpeg -f image2pipe -i pipe:0 -c:v libx264 -r 30 -pix_fmt yuv420p -f matroska pipe:1
[05:52] <sinclair|work> klaxa: these arguments feel wrong
[05:52] <klaxa> which ones?
[05:52] <klaxa> what output do you want to have?
[05:57] <sinclair|work> klaxa: mp4
[05:57] <sinclair|work> klaxa: have whittled it to...
[05:57] <sinclair|work> -r 1 -c:v mjpeg -f image2pipe -i pipe:0 -c:v libx264 -pix_fmt yuv420p -g 12 -movflags frag_keyframe+empty_moov -crf: 25 -f mp4 pipe:1
[05:57] <klaxa> looks good
[05:57] <sinclair|work> klaxa: which appears to work,
[05:57] <sinclair|work> ...
[05:58] <klaxa> is forcing 12 images per gop intended?
[05:58] <sinclair|work> klaxa: gop?
[05:58] <klaxa> group of pictures
[05:59] <klaxa> your -g 12
[05:59] <sinclair|work> that is the output isn't it?
[05:59] <klaxa> a group of pictures is a segment in a video that can be independently decoded from all other frames
[05:59] <sinclair|work> oh
[05:59] <sinclair|work> no, that is for mp4 segmenting i think
[05:59] <klaxa> yeah i mean it doesn't really change a lot
[06:00] <sinclair|work> klaxa: i need that (i think) otherwise ffmpeg wants to seek from disk
[06:00] <klaxa> i'm pretty sure you can leave that out, but like i said it's not going to change a lot
[06:00] <klaxa> maybe increase filesize slightly
[06:00] <sinclair|work> klaxa: random question, is this channel like many other quiet channels on freenode?
[06:01] <klaxa> irc is 90% idling
[06:01] <klaxa> so yeah, but as you can see, users can get some support in here
[06:02] <sinclair|work> klaxa: well, i appreciate your help
[06:05] <sinclair|work> klaxa: if you have 5 mins, can test this out if you are interested?
[06:06] <klaxa> sure
[06:11] <sinclair|work> klaxa: just let me get setup here
[06:17] <sinclair|work> klaxa: still there?
[06:17] <klaxa> yes
[06:17] <sinclair|work> can try http://118.90.17.50:9070/webm?w=320&h=200
[06:17] <sinclair|work> stream ok?
[06:18] <sinclair|work> that is the single image
[06:18] <klaxa> yes looks good
[06:19] <sinclair|work> so, all i need to do is generate images on the fly, and i can stream them
[11:00] <relaxed> sinclair|work: you can loop an image with ffmpeg and avoid the piping.
[11:01] <sinclair|work> relaxed: ?
[11:01] <sinclair|work> relaxed: what do you mean, loop a image?
[11:01] <relaxed> are you using one image in your stream? maybe I missed something.
[11:02] <sinclair|work> relaxed: http://118.90.17.50:9070/webm?w=320&h=200
[11:02] <sinclair|work> relaxed: there, i am computing each frame using GDI
[11:03] <sinclair|work> its very rough, but its a start
[11:03] <sinclair|work> relaxed: there is no temporary file
[11:03] <sinclair|work> there are no files at all in fact
[11:03] <sinclair|work> there is just GDI generating random lines, and a circle from left to right
[11:04] <relaxed> oh, I see
[11:04] <sinclair|work> relaxed: the idea is, if i have an image pipeline of some description, be it from a desktop screen capture, or a web camera, or some other feed, i can pipe that to ffmpeg and stream it down the wire
[11:05] <sinclair|work> relaxed: so, that was my sunday code, i have to implement something like this for work
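What relaxed means by looping an image (a sketch; image.jpg and the encoder settings are placeholders): the image2 demuxer can repeat a single still indefinitely with the -loop input option, so no pipe is needed when the frame never changes:

    ffmpeg -loop 1 -i image.jpg -r 30 -c:v libx264 -pix_fmt yuv420p -f matroska pipe:1

For frames generated on the fly, as sinclair|work is doing, the image2pipe approach above is the one that applies.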
[11:48] <dvnl> hello Everyone! I'm stuck configuring an ffmpeg compilation with only the hevc decoder functionality. HEVC decoding from a file works just fine by the default compilation, but it throws me an error 'Invalid Data found when processing input' with the reduced-functionality version. I'm trying the configuration with these options: ./configure --disable-everything --enable-decoder=hevc --enable-parser=hevc --enable-encoder=rawvideo --enabl
[11:48] <dvnl> Can please someone help me out with a working configuration?
[11:58] <relaxed> dvnl: did you add --enable-protocol=file ?
[12:01] <dvnl> relaxed: yes, i added that switch. I also tried adding -f hevc when running ffmpeg.exe, but it says 'Unknown input format: hevc'.
[12:01] <relaxed> --enable-demuxer=rawvideo
[12:01] <dvnl> thank you, i'm trying it now
[12:05] <relaxed> er, did you have --enable-demuxer=hevc ?
[12:05] <relaxed> if not, that's probably it.
[12:10] <dvnl> Now I'm using --enable-demuxer=hevc and --enable-demuxer=rawvideo, but the problem persists :(
[12:11] <relaxed> do you have --enable-gpl?
[12:12] <relaxed> hm, I'm guessing configure would fail if it required that.
[12:12] <dvnl> No, i didn't include that during configuration
[12:12] <relaxed> show me your current ./configure
[12:13] <dvnl> this is my current config: configure --disable-everything --enable-decoder=hevc --enable-parser=hevc --enable-encoder=rawvideo --enable-protocol=file --enable-muxer=hevc --enable-demuxer=rawvideo --toolchain=msvc
[12:14] <ubitux> what command are you trying?
[12:14] <ubitux> what's the format?
[12:16] <dvnl> I'm trying with ffmpeg.exe -i rum_test_x265.bin test.yuv; also tried with .hevc instead of .bin
[12:16] <relaxed> you need --enable-demuxer=hevc
[12:16] <relaxed> .bin ?
[12:18] <dvnl> OK, i try with --enable-demuxer=hevc. Yes, .bin, this is what the HM reference encoder uses. Am I making a mistake trying with that extension? The full compilation of ffmpeg successfully decoded .bins
[12:20] <relaxed> ok, I just haven't seen .bin before.
[12:24] <relaxed> dvnl: you'll probably need all those --enable's for rawvideo too.
[12:25] <dvnl> you're meaning encoder, demuxer, muxer and parser, too? or some more besides these?
[12:27] <relaxed> all but parser
[12:27] <relaxed> encoder, decoder, muxer, and demuxer
[12:28] <dvnl> actually, there is a change in the error message, the decoding started, but it says: 'Unable to find a suitable output format for test.yuv. test.yuv: invalid argument'
[12:28] <dvnl> I think that it's missing the decoder then. Thank you! I hope it solves the problem
[12:29] <relaxed> encoder and muxer for sure
[12:39] <dvnl> wow, I had a compilation error when adding these two, but I'm trying
[12:47] <dvnl> Thank you very much, relaxed!! It works now. I'm grateful.
[12:51] <relaxed> dvnl: you're welcome
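Pieced together from the exchange above (a reconstruction, not quoted from the log), the working configure line presumably ended up along these lines, with the hevc demuxer/parser/decoder covering the input and the rawvideo components covering the .yuv output:

    ./configure --disable-everything --toolchain=msvc --enable-protocol=file \
        --enable-demuxer=hevc --enable-parser=hevc --enable-decoder=hevc \
        --enable-demuxer=rawvideo --enable-decoder=rawvideo \
        --enable-encoder=rawvideo --enable-muxer=rawvideo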
[13:04] <dvnl> relaxed: I would bother you with one more question: is there a way to find out, which .c source files are used for compilation in the case of a given config? a log file from make or something? It would be great being able of compiling ffmpeg with only the hevc decoding functionality manually, by importing the needed sources to a visual studio solution, as in the future, I will need to edit some of the sources and it would be easier sor
[13:04] <dvnl> *files than the whole huge ffmpeg package.
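The question goes unanswered in the log; one approach (a sketch, assuming a POSIX shell) is to let the build report the sources itself, since ffmpeg's make output is terse by default but verbose with V=1:

    make V=1 2>&1 | tee build.log
    grep -oE '[[:alnum:]_/]+\.c' build.log | sort -u

config.mak and config.h, written by configure, also record exactly which components were enabled for that build.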
[13:05] <bparker> do I need to call avcodec_open2 even if I am just muxing from h264 video that's already in non-ffmpeg memory? currently I'm not, and I'm also not allocating a frame or picture with libavcodec, I'm just pointing the muxer's AVPacket to my image data and that's it... is that the wrong way to do it? documentation seems scarce about this topic
[13:06] <bparker> basically I have h264 data that's already been encoded with libx264 and my program is just trying to mux that into an mp4 container
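For mux-only use avcodec_open2() is indeed not needed: the muxer only needs a stream carrying the codec parameters and the SPS/PPS extradata, plus packets whose pts/dts are in the stream's time base. A rough sketch using the current libavformat API (the 2014-era API set the same fields on st->codec rather than st->codecpar; error handling is omitted, and get_next_frame() is a hypothetical source of encoded frames):

    #include <string.h>
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/mem.h>

    /* Hypothetical callback that hands out one pre-encoded H.264 access unit. */
    extern int get_next_frame(uint8_t **data, int *size, int *key);

    /* Mux already-encoded H.264 into MP4 with libavformat only. */
    static void mux_h264(const char *filename, int width, int height, int fps,
                         const uint8_t *sps_pps, int sps_pps_size)
    {
        AVFormatContext *oc = NULL;
        avformat_alloc_output_context2(&oc, NULL, "mp4", filename);

        AVStream *st = avformat_new_stream(oc, NULL);
        st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
        st->codecpar->codec_id   = AV_CODEC_ID_H264;
        st->codecpar->width      = width;
        st->codecpar->height     = height;
        /* avcC-style SPS/PPS extradata produced by libx264 */
        st->codecpar->extradata  = av_mallocz(sps_pps_size + AV_INPUT_BUFFER_PADDING_SIZE);
        memcpy(st->codecpar->extradata, sps_pps, sps_pps_size);
        st->codecpar->extradata_size = sps_pps_size;
        st->time_base = (AVRational){1, fps};   /* a hint; the muxer may change it */

        avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
        avformat_write_header(oc, NULL);

        AVPacket *pkt = av_packet_alloc();
        uint8_t *data; int size, key;
        for (int64_t i = 0; get_next_frame(&data, &size, &key); i++) {
            pkt->data = data;                   /* one pre-encoded access unit */
            pkt->size = size;
            pkt->stream_index = st->index;
            pkt->flags = key ? AV_PKT_FLAG_KEY : 0;
            /* frame i starts at i/fps seconds, expressed in the muxer's time base */
            pkt->pts = pkt->dts = av_rescale_q(i, (AVRational){1, fps}, st->time_base);
            pkt->duration      = av_rescale_q(1, (AVRational){1, fps}, st->time_base);
            av_interleaved_write_frame(oc, pkt);
            av_packet_unref(pkt);
        }
        av_packet_free(&pkt);

        av_write_trailer(oc);
        avio_closep(&oc->pb);
        avformat_free_context(oc);
    }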
[15:22] <jpsaman> Does swscale support multithreaded scaling?
[16:42] <haspor> hello, if i want to decode at3p format to raw pcm_s16le samples, which options i need for the configure, only the decoder or ?
[16:43] <reliability> Is there any part of ffmpeg which has some potential for "local" optimizations (instruction level parallelism, removing potential cache misses, etc., no multithreading though)?
[16:45] <JEEB> haspor, well you will be reading that stuff out of a container (OMA or WAVE) so you will most probably need that, and the atrac3plus decoder will output planar float audio, so you will need to use swresample or avresample to convert it to pcm_s16le
[16:46] <klaxa> reliability: see doc/optimization.txt
[16:46] <klaxa> there might be something of interest
[16:46] <reliability> klaxa: thx
[16:51] <haspor> JEEB, all right
[16:54] <haspor> which ones do i need, demuxer, parser, audio encoder etc etc with that atrac3p decoder?
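Presumably (a guess modeled on the hevc configure discussion above; the decoder should be named atrac3p, which is worth verifying with ./configure --list-decoders) the minimal set is the oma or wav demuxer, the atrac3p decoder, the pcm_s16le encoder, a muxer such as wav, and --enable-protocol=file; the conversion itself is then just:

    ffmpeg -i input.oma -c:a pcm_s16le output.wav

libswresample handles the planar-float to s16 step JEEB mentions.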
[16:57] <leonbienek> hi all
[18:30] <leonbienek> Could anyone help explain to me the best way to stream with low latency over ethernet?
[18:32] <leonbienek> I attempted to use this: ffmpeg -f v4l2 -i /dev/video0 -r 1 -codec copy -f rawvideo udp://[HOST_IP]:6789
[18:39] <Hello71> depending on resolution that's usually too much data
[18:40] <leonbienek> would a smaller size help reduce that?
[18:41] <leonbienek> say ffmpeg -f v4l2 -i /dev/video0 -r 1 -s 640x480 -codec copy -f rawvideo udp://[HOST_IP]:6789
[19:24] <Hello71> leonbienek: use x264 ultrafast
[19:58] <BtbN> leonbienek, i think you're underestimating the size of uncompressed video. You can easily overwhelm even a gbit link with it
[19:58] <BtbN> lossless or extremely high quality/bitrate h264 might be an alternative
[19:59] <leonbienek> I may as well share my intentions, as from the sounds of it, it may not even be possible
[20:00] <leonbienek> I'm looking to get the video and audio from 2 webcams plugged in to a RaspberryPi to a host connected via ethernet
[20:01] <leonbienek> i've been attempting to send the data as rawvideo over udp, but to no real avail. and that was with just 1 webcam
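A sketch of Hello71's suggestion, with a zero-latency tune added here (HOST_IP and the capture size are placeholders): compress with x264 at its lowest-latency settings instead of shipping raw frames, and wrap it in MPEG-TS for UDP:

    ffmpeg -f v4l2 -video_size 640x480 -i /dev/video0 \
        -c:v libx264 -preset ultrafast -tune zerolatency -f mpegts udp://HOST_IP:6789

On the receiving side something like ffplay -fflags nobuffer udp://HOST_IP:6789 keeps the player from adding its own buffering delay.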
[20:33] <thebombzen> leonbienek: raw video is always huge huge huge. If you want to make it smaller, use ffv1 (which is ~ the bitrate of lossless H.264 but faster and less cpu-intensive). However, lossless video will always be super huge
[20:33] <thebombzen> If you want to improve ratios more, and the video is recorded from a webcam, try running it through -filter:v hqdn3d to denoise the video which will improve compression ratios
[20:37] <leonbienek> thanks thebombzen
[20:37] <leonbienek> Would that be -vcodec ffv1?
[20:37] <thebombzen> Yea. It's a lossless video codec developed by FFmpeg
[20:38] <thebombzen> Also, I say "from a webcam" because video recorded from the screen won't improve if you denoise it.
[20:41] <leonbienek> ah ok, i'll give that a shot
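A sketch of thebombzen's suggestion (capture settings are placeholders; lossless output will still be large, as noted):

    ffmpeg -f v4l2 -i /dev/video0 -vf hqdn3d -c:v ffv1 out.mkv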
[20:49] <Hello71> building git head, libavcodec/libavcodec.so: error: undefined reference to 'ff_synth_filter_inner_avx'
[21:03] <leonbienek> thebombzen I cant ffplay that codec on my host machine
[21:30] <bparker> why does a h264/TS file created using a time_base of 1/fps and pts of ++frame_count (starting from 0) produce a zero duration with ffprobe?
[21:42] <quidnunc> Can someone tell me how seeking in segmented MP4 (i.e. HTTP live streaming) works?
[21:42] <quidnunc> (at a high level, not necessarily in FFMPEG)
[21:45] <bparker> quidnunc: the server reads in the mp4 file from whatever duration you seek to, and provides segments from that point in time forwards
[21:56] <quidnunc> bparker: So it can't be done client side (without requesting all data until the seek point)?
[22:03] <bparker> quidnunc: there is no way to get a readable video file (starting at some point in the middle) without the server knowing how to do that
[22:04] <quidnunc> bparker: DASH does it
[22:04] <bparker> because for mp4 files there's information needed to decode the video typically at either the beginning or the end of the file, so you can't just start reading from the middle and expect it to work
[22:04] <bparker> bparker: DASH must be implemented by the server
[22:04] <quidnunc> bparker: Right. Assuming you have the beginning metadata.
[22:05] <quidnunc> bparker: Not really, why can't you implement it client side?
[22:05] <quidnunc> assuming you have the xml
[22:05] <bparker> xml?
[22:05] <bparker> you don't need a streaming protocol to seek within a file you already have
[22:05] <bparker> if it's client side
[22:06] <bparker> or I don't understand your question
[22:06] <quidnunc> I thought there was an output xml file with the segment positions
[22:07] <quidnunc> Let me try restating my question
[22:07] <bparker> I don't know much about dash, but it sounds a bit ridiculous to require an xml file to use it
[22:08] <quidnunc> bparker: It would be used server side
[22:08] <quidnunc> (typically)
[22:08] <quidnunc> I thought that was how the segment positions were stored
[22:08] <quidnunc> But I might be mistaken
[22:09] <quidnunc> Anyway, some background: I know how to seek in a fragmented MP4 using the mfra and mfro atoms (and this can be done client side if HTTP range requests are supported).
[22:09] <bparker> well, there is a difference between 'http live streaming' and 'DASH', the former could mean many different things
[22:10] <bparker> personally I don't understand DASH at all
[22:10] <bparker> it seems extremely complicated
[22:11] <quidnunc> bparker: So how does HTTP live streaming handle seeking?
[22:11] <bparker> define 'http live streaming'
[22:11] <quidnunc> I thought the server was "dumb" and only responded to HTTP range requests
[22:12] <Hello71> building git head, libavcodec/libavcodec.so: error: undefined reference to 'ff_synth_filter_inner_avx'
[22:12] <quidnunc> bparker: http://en.wikipedia.org/wiki/HTTP_Live_Streaming
[22:15] <bparker> I don't understand the difference between HLS and DASH, but I know there were types of 'http live streaming' before HLS was a thing
[22:15] <bparker> so I don't know why the wiki article defines just HLS
[22:17] <quidnunc> bparker: Do you know anything about MP4 segments?
[22:18] <bparker> define segments
[22:18] <bparker> I know the layout of regular MP4 files and their atoms/etc.
[22:19] <bparker> but it sounds like this streaming thing you're referring to is some new extension to mp4 that's different than what people have been using, which I don't know anything about
[22:20] <JEEB> DASH is a whole lot of pain
[22:20] <quidnunc> bparker: segments are defined in the standard
[22:21] <JEEB> and definitely not simple
[22:21] <quidnunc> JEEB: I'm not using it but why?
[22:21] <JEEB> quidnunc, all the crap you have to implement for it :P
[22:21] <JEEB> I think you need an XML parser among other things
[22:21] <JEEB> also are you talking about movie fragments?
[22:21] <JEEB> because that's a MOV/"MP4" feature
[22:22] <JEEB> which lets you have small indexes in the file, and thus you can f.ex. pipe the mux straight into a player or whatever
[22:22] <quidnunc> JEEB: I understand movie fragments. I don't understand MP4 segments
[22:23] <JEEB> specify the actual feature from the spec later and someone might actually comment, I don't remember anything called "segments"
[23:23] <Paranoialmaniac> segments are just incomplete files in the context of iso base media file format
[22:24] <quidnunc> JEEB: Section 8.16 "Segments"
[23:24] <Paranoialmaniac> concatenation of segments makes a full iso base media file
[22:24] <JEEB> ^ this man knows what he's talking about
[22:24] <bparker> JEEB: I'm about to scream I am so pissed off at this PTS/timebase bullsh**
[22:24] <bparker> I still can't understand it
[22:25] <quidnunc> Paranoialmaniac: So seeking is the same as in an unsegmented file? That is, using the sample table?
[22:25] <JEEB> it's supposed to be simple, timebase sets how many ticks is a single second
[22:25] <bparker> I tried setting timebase to 1/fps and pts = 0,1,2,3 etc. and I still get 0 duration
[22:25] <JEEB> does it actually play?
[22:25] <bparker> depends on the player and if they pay attention to the pts
[22:26] <JEEB> because MPEG-TS has no duration per se
[22:26] <JEEB> so you could just be derping off whatever is reading your data :P
[22:26] <JEEB> and calculating the probable duration
[22:26] <bparker> right
[22:26] <JEEB> that, or you're still somewhere ending up with wrong values
[22:27] <Paranoialmaniac> quidnunc: there are roughly two types of segments, index segment and media segment. index segment consists of only movie sample table. media segment consists of contiguous moof+mdat pairs
[22:27] <JEEB> do note that IIRC there's a lot of places with various timescales in lavc and lavf, and I have no idea how much you use both of them
[22:27] <bparker> I switched back to MOV to make sure I get my timebase/pts correct, I wanna see the duration be calculated correctly with ffprobe
[22:28] <JEEB> you can make ffprobe print out the PTS of packets
[22:28] <bparker> as far as I can tell there's 3 different time bases and somehow the PTS is related
[22:28] <JEEB> and with MOV/"MP4" you can use L-SMASH's boxdumper tool
[22:28] <bparker> JEEB: yep, and they're wrong :/
[22:29] <bparker> ffprobe -show_packets blah.mov
[22:29] <bparker> gives me the pts, duration per frame etc.
[22:29] <JEEB> and those are incorrect?
[22:29] <bparker> yep
[22:29] <JEEB> then you're setting/using something wrong in lavc/lavf and it has less to do with the actual PTS/timescale things
[22:29] <JEEB> because the idea of timescale/PTS is rather simple
[22:30] <bparker> I'm sure I'm still setting it wrong
[22:30] <bparker> somehow
[22:30] <JEEB> but how lavf/lavc, especially when used together, poke those values around is another matter
[22:30] <JEEB> are the values off by some kind of amount?
[22:30] <bparker> x264 has a timebase, AVCodecContext has a timebase, and AVStream has a timebase
[22:30] <bparker> no idea what they should all be set to
[22:30] <bparker> whether the same or not
[22:31] <bparker> and there's r_frame_rate in x264 which I'm not certain is correct either
[22:31] <bparker> because it's a rational
[22:31] <bparker> i.e. does it want 1/60 or 60/1
[22:31] <JEEB> frame rate usually is 60/1
[22:31] <quidnunc> Paranoialmaniac: "For segments based on this specification (i.e. based on movie sample tables or movie fragments)" <--- So the segments don't have to be fragments (?)
[22:32] <JEEB> one of them should be num and the other should be den though, no?
[22:32] <bparker> and apparently when I go back to mpeg ts, you have to use a timebase of 1/90000 as it forces it
[22:32] <JEEB> yes
[22:32] <bparker> so I want to make it work with that
[22:32] <JEEB> so you need to have the scaling thingamajig there
[22:32] <JEEB> there's a function for scaling PTS from scale to scale
[22:32] <bparker> there's av_rescale_q(a, b, c)
[22:32] <JEEB> yup
[22:33] <bparker> but I'm not certain what parameters to give it
[22:33] <quidnunc> Paranoialmaniac: Never mind that, suppose I want to seek within a segmented MP4. What are the steps at a high level?
[22:33] <bparker> more like I have no idea at all :p
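A sketch of the pattern being discussed (assuming a constant frame rate fps, a frame counter starting at 0, no B-frames, an AVPacket pkt and an output AVStream *st): av_rescale_q() rescales a value from one time base into another, which is exactly what is needed to carry a per-frame count into the 90 kHz base MPEG-TS uses.

    #include <libavutil/mathematics.h>

    /* frame_index counts encoded frames from 0; fps is the constant frame rate. */
    AVRational frame_tb = (AVRational){1, fps};   /* one tick per frame */
    pkt.pts      = av_rescale_q(frame_index, frame_tb, st->time_base);
    pkt.dts      = pkt.pts;                       /* valid only without B-frames */
    pkt.duration = av_rescale_q(1, frame_tb, st->time_base);
    /* With MPEG-TS st->time_base is 1/90000, so this yields frame_index * 90000 / fps. */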
[22:33] <Paranoialmaniac> quidnunc: as i said index segment uses movie sample table. and media segment uses movie fragement. index segment doesn't contain any media
[22:33] <Paranoialmaniac> quidnunc: we cant seek those types independently. the spec allows mixture of index segment and media segments called indexed self-initialization media segment which is .
[22:34] <Paranoialmaniac> *which is seekable
[22:34] <Paranoialmaniac> youtube splits audio and video stream into two indexed self-initialization media segments
[22:35] <Paranoialmaniac> so, you can seek youtube's audio and video dash files separately
[22:36] <bparker> Paranoialmaniac: does that mean you can turn off the video to save bw?
[22:36] <quidnunc> Paranoialmaniac: I don't see a description of "indexed self-initialization" in ISO/IEC 14496-12. Is it somewhere else?
[22:37] <Paranoialmaniac> quidnunc: 23009-1:2012 6.3 Segment formats for ISO base media file format
[22:37] <Paranoialmaniac> 23009-1 is the spec of DASH
[22:38] <JEEB> welcome to overcomplicated derpiness :P
[22:41] <Paranoialmaniac> indexed self-initializing media segment is something like this http://up-cat.net/p/5c1bf982
[22:46] <Paranoialmaniac> note that a media segment does not always form movie fragments. you can see a media segment could be mpeg-2 ts at 6.4 Segment formats for MPEG-2 transport streams
[22:47] <quidnunc> So the indexed self-initializing media segment holds all the seek points for the file?
[22:47] <quidnunc> Paranoialmaniac: So the indexed self-initializing media segment holds all the seek points for the file?
[22:47] <Paranoialmaniac> yes. self-contained file
[22:50] <quidnunc> Paranoialmaniac: Doesn't that make the initialization segment very large for a large file?
[22:51] <Paranoialmaniac> quidnunc: a dash manifest file (.mpd) points to where the segments are, and handles the presentation of all the described segments
[22:51] <quidnunc> Paranoialmaniac: The spec says that an MPD is not necessary (?)
[22:52] <Paranoialmaniac> without mpd, why would you use dash? :)
[22:54] <quidnunc> Paranoialmaniac: I'm trying to understand DASH and other streaming/seeking methods and why I would use them.
[22:55] <quidnunc> Paranoialmaniac: In my case I'm particularly interested in streaming files without additional metadata and using a dumb server (only support HTTP range requests)
[22:55] <quidnunc> Paranoialmaniac: So is the seek information self-contained in the file or do you need an MPD?
[22:58] <Paranoialmaniac> quidnunc: to be exact, you could seek a media segment independently, but you may not be able to decode the media because the initializing information is in the movie sample table
[22:58] <quidnunc> Paranoialmaniac: But movie sample table is in moov, right?
[23:00] <Paranoialmaniac> yes. so, you get the movie sample table through mpd. mpd describes the location of the segment which contains the movie sample table
[23:01] <quidnunc> Paranoialmaniac: Okay, things are becoming a little clearer. So there is no "global" movie sample table for a segmented MP4?
[23:01] <Paranoialmaniac> quidnunc: yes
[23:01] <quidnunc> Paranoialmaniac: The initialization segment's sample table is empty
[23:01] <Paranoialmaniac> this is why DASH is called Dynamic Adaptive
[23:02] <Paranoialmaniac> quidnunc: sample description table (stsd) is mandatory.
[23:03] <Paranoialmaniac> and not empty
[23:03] <Paranoialmaniac> stsd contains initializing information to decode media
[23:03] <quidnunc> okay, but no seek information
[23:04] <Paranoialmaniac> oh, sorry. i maybe remember something wrongly
[23:05] <Paranoialmaniac> a segment may contain segment index box
[23:06] <Paranoialmaniac> also, moof box contains positions of each sample
[23:06] <Paranoialmaniac> you can seek using this information
[23:07] <quidnunc> Paranoialmaniac: But only once I have the position of a segment, which requires the MPD, right?
[23:07] <Paranoialmaniac> if a media segment is a mpeg-2 ts, segment index boxes will help you
[23:08] <bparker> JEEB: do you have access to an OSX machine to see if one of my files will play?
[23:08] <bparker> on quicktime
[23:09] <quidnunc> Paranoialmaniac: Let me explain what I am doing right now. I'm using fragmented MP4 and no external metadata. Everything is self-contained in the MP4 and I can find the data I need to seek using HTTP range requests: I download the initial bytes of file which gives me the file size. From the file size I get the mfro and then mfra atoms which gives me sync samples.
[23:09] <quidnunc> Paranoialmaniac: Can I do something similar with DASH, HLS or anything else?
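The procedure quidnunc describes can be exercised with nothing beyond suffix range requests (a sketch with curl; the URL is a placeholder): the mfro box is the last 16 bytes of the file and its final 32-bit field is the size of the whole mfra box, so two requests are enough to pull the seek table.

    curl -s -r -16 http://example.com/video.mp4 | xxd    # read mfro; its last 4 bytes are the mfra size
    curl -s -r -$MFRA_SIZE http://example.com/video.mp4 > mfra.bin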
[23:10] <bparker> quidnunc: with jwplayer you can seek regular mp4 files without weird streaming stuff, as long as the moov atom is at the beginning of the file
[23:10] <Hello71> building git head, libavcodec/libavcodec.so: error: undefined reference to 'ff_synth_filter_inner_avx'
[23:11] <bparker> Hello71: are you just going to keep asking the same question over and over
[23:11] <bparker> well, it's not even a question, lol
[23:11] <Hello71> ...
[23:12] <JEEB> bparker, I have a pre-HW decoding model
[23:12] <JEEB> a 2006 macbook :D
[23:13] <Paranoialmaniac> quidnunc: sidx box (segment index box) contains stream access point (SAP) information which is more descriptive about random access points than that of mfra
[23:13] <JEEB> so it will be picky as hell regarding what kind of H.264 streams it would decode (correctly)
[23:15] <Mavrik> bparker, I can check if you want
[23:15] <bparker> http://fiveforty.net/140302-1AAA.mov
[23:15] <quidnunc> bparker: I can't use it for my application because I need to be able to send it data directly (and not a url to data)
[23:16] <bparker> quidnunc: can you send the codec data without the container? that would make life 1000x easier
[23:16] <bparker> or use a different container
[23:16] <bparker> that's better for streaming
[23:16] <quidnunc> bparker: No.
[23:17] <bparker> JEEB: good
[23:17] <Mavrik> bparker, am I looking for anything specific?
[23:17] <Mavrik> QT plays it well.
[23:17] <bparker> Mavrik: just that it plays, at all
[23:17] <Mavrik> 16 secs of color test
[23:17] <bparker> can I ask what version of quicktime/osx ?
[23:17] <bparker> yep.
[23:18] <Mavrik> rMBP 15"/10.9.2/10.3
[23:18] <JEEB> yeah, that's most probably a HW decoding based one
[23:18] <quidnunc> Paranoialmaniac: Just so I understand you are saying to use sidx with segmented files and not sidx with fragmented (and no segmentation)?
[23:18] <bparker> Mavrik: cool thanks
[23:18] <bparker> JEEB: so that answers the question of quicktime supporting annexb :)
[23:19] <Mavrik> JEEB, yeah, I'd hope so -_-
[23:19] <bparker> actually that file is annexb *with* avcC atom :p
[23:19] <JEEB> lol
[23:19] <JEEB> how the fuck does that even work
[23:19] <JEEB> because you're supposed to have the length
[23:19] <JEEB> and then the data
[23:20] <JEEB> the AVCc should contain the length of the NAL units' length thingy
[23:20] <bparker> I guess it just ignores the sps/pps that's stuck in the middle, like it checks for annexb startcode ?
[23:22] <Paranoialmaniac> quidnunc: segmented files have a styp box first in the stream. sidx is not mandatory but a derived file format may require it. i'm not an expert on dash. difficult to answer your questions for me at present
[23:23] <bparker> JEEB: or maybe the muxer sees that my h264 data is annexb and ignores the avcC
[23:23] <bparker> I don't know
[23:23] <quidnunc> Paranoialmaniac: I appreciate your help. Very difficult to find information, not many people understand the details, you know far more than most.
[23:23] <bparker> I guess looking into the file itself would tell me more
[23:25] <Mavrik> it's also possible that HW decoder uses the same codepath and just ignores that always?
[23:25] <quidnunc> Paranoialmaniac: One last question: As far as you know, is it possible to achieve what I am doing now with fragmented files (seek + no metadata + server only does HTTP range requests) with DASH, HLS or any other streaming solution?
[23:25] <Paranoialmaniac> quidnunc: i recommend you should read the section of the sidx box of 14496-12 and the summary of 23009-1
[23:25] <Mavrik> (might be talking out of my arse, came late to the discussion :P)
[23:27] <quidnunc> Paranoialmaniac: I have read the sidx section in 14496-12. It seems only to deal with seeking within segments. But I don't see how to find segments without MPD
[23:27] <quidnunc> I don't understand why MPD wasn't just embedded, at least optionally, like the mfra atom
[23:28] <Paranoialmaniac> quidnunc: mpd is not a part of 14496-12 (ISO Base Media file format)
[23:28] <quidnunc> Paranoialmaniac: Ah, so they couldn't add it.
[23:28] <Paranoialmaniac> mpd is a text file format
[23:28] <Paranoialmaniac> xml like
[23:29] <quidnunc> I know, I meant equivalent information
[23:29] <quidnunc> well, at least locations of the segments
[23:29] <quidnunc> in a contiguous file
[23:30] <quidnunc> Paranoialmaniac: Anyway, as far as you know there is no way to seek to a given location in DASH without the MPD?
[23:32] <Paranoialmaniac> quidnunc: dash works through mpd. a dash file itself makes no sense i think
[23:32] <quidnunc> Paranoialmaniac: Okay, what about HLS?
[23:33] <Paranoialmaniac> i dont know HLS
[23:34] <quidnunc> Paranoialmaniac: Thanks again for your help. Like I said very hard to find information
[23:34] <Paranoialmaniac> http://dashif.org/testvectors/ there are test vectors of DASH. how about looking this?
[23:34] <quidnunc> Paranoialmaniac: Thanks, I will take a look but I doubt they deal with my strange use case.
[23:35] <quidnunc> It really doesn't have to do anything with DASH at all, just segmented MP4
[23:36] <quidnunc> I can probably build a segmented MP4 with a non-empty stbl but then I lose fast-start
[23:38] <thebombzen> leonbienek: You should be able to ffplay a file with ffv1 codec. It's a format designed by the FFmpeg developers, so ffmpeg is its native tool
[23:39] <thebombzen> make sure you compiled in support, by not doing something like ./configure --disable-decoders --enable-encoders=<...>
[23:39] <thebombzen> that's --enable-decoders*
[00:00] --- Mon Mar 3 2014