[Ffmpeg-devel-irc] ffmpeg.log.20181217
burek
burek021 at gmail.com
Tue Dec 18 03:05:01 EET 2018
[00:31:22 CET] <merethan> Let's try ffvhuff in the nut container
[00:31:34 CET] <merethan> brb, mem constraints
[00:56:00 CET] <ServiceRobot> hey guys, so I'm working on a personal project with ffmpeg but I keep running into an invalid data error
[00:56:07 CET] <ServiceRobot> any idea what could be causing it?
[00:57:06 CET] <JEEB> have you checked your code against f.ex. the transcoding example under doc/examples?
[00:57:26 CET] <ServiceRobot> it's a library written in java. can't exactly do that
[00:57:36 CET] <ServiceRobot> but the error code I'm getting is an ffmpeg error
[00:57:43 CET] <ServiceRobot> INDA, aka invalid data
[00:58:10 CET] <JEEB> why can you not? like, the core is the same so if you're getting FFmpeg errors underneath then you can match that against what's being done over JNI
[00:58:31 CET] <JEEB> is it your own java wrapper? if not - go ask whoever made that thing if you cannot figure it out
[00:58:38 CET] <JEEB> sorry for being blunt
[00:59:14 CET] <JEEB> and even if you're using someone's java wrapper, unless it's closed source you should be able to peek WTF it's doing with the APIs
[00:59:27 CET] <JEEB> and compare against API examples
[01:00:30 CET] <ServiceRobot> thing is I'm not exactly adept at C. the library is open source, but I was just hoping to get some more info on the error code
[01:00:34 CET] <ServiceRobot> since it is an ffmpeg error code
[01:01:09 CET] <JEEB> those don't pop out from a single place for eff's sake. and it usually means you're doing something wrong or that your input's broken
[01:01:39 CET] <JEEB> even if you're not adept at C, you should be able to grasp basics in the examples and compare WTF that JNI wrapper is doing
[01:03:11 CET] <ServiceRobot> well, I'll take a look again then
[01:04:58 CET] <JEEB> also make sure that a basic ffmpeg.c binary built with whatever copy of FFmpeg you have can actually open your input
[01:06:34 CET] <ServiceRobot> it can open my input, just not remotely. I'm trying to send packets over the network and de-serialize them. idk what is needed to keep the packets consistent though
[01:07:17 CET] <JEEB> I hope you're not trying to push through network AVFormat specific data structures :P
[01:08:04 CET] <JEEB> you're supposed to push your AVPackets through an AVFormatContext set up with a usable container and a protocol or your own AVIO wrapper to send out the written data
[01:08:19 CET] <JEEB> like mpeg-ts or NUT over some protocol :P
[01:08:36 CET] <JEEB> NUT is often used for raw video/audio, while MPEG-TS is often used for broadcast uses
[01:09:00 CET] <ServiceRobot> hmm, I was using a peer-to-peer library to just send packets over UDP
[01:09:31 CET] <ServiceRobot> what is needed in an avformatcontext? I have video height, width, and pixel format
[01:09:55 CET] <JEEB> valid input data is what's needed
[01:10:16 CET] <ServiceRobot> what is needed for valid input data? the bare minimum?
[01:10:48 CET] <JEEB> for sending side you either demux the input into AVPackets, or you generate them yourself, and open an *output* AVFormatContext with either NUT or MPEG-TS or whatever, with UDP protocol for example
[01:11:14 CET] <JEEB> then on the receiving side you open an *input* AVFormatContext with that same protocol and container, and start reading into it :P
[01:11:36 CET] <ServiceRobot> well I demuxed the input into packets, then I sent those packets across the network by serializing them
[01:11:45 CET] <JEEB> yes, nope - don't do that
[01:11:47 CET] <ServiceRobot> obviously there's missing data
[01:11:49 CET] <ServiceRobot> oh
[01:12:09 CET] <JEEB> AVPackets are not supposed to be network transparent
[01:12:21 CET] <JEEB> they're internal FFmpeg data structures and have references to buffers etc
[01:12:40 CET] <ServiceRobot> the library I'm using allows direct access to the buffer byte array
[01:12:45 CET] <JEEB> that's OK
[01:12:52 CET] Action: merethan out of memory, again
[01:12:53 CET] <JEEB> but jesus christ just open an output lavf context
[01:12:53 CET] <ServiceRobot> which is what I store in the serialized class
[01:13:00 CET] <JEEB> that's not enough
[01:13:10 CET] <JEEB> just open an output lavf context with the UDP protocol for example
[01:13:16 CET] <JEEB> then if it's raw use NUT, if not try MPEG-TS
[01:13:20 CET] <JEEB> and push that out :P
[01:13:35 CET] <JEEB> then on receiving side just read input from UDP as if it's a file with libavformat
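The workflow JEEB describes can be tried without any custom code, using ffmpeg itself on both ends; the addresses, ports, and filenames below are assumptions for illustration:

```shell
# Sender: remux the input into MPEG-TS and push it over UDP.
# Use "-f nut" instead of mpegts if the payload is raw video/audio.
ffmpeg -re -i input.mp4 -c copy -f mpegts "udp://127.0.0.1:5000"

# Receiver: open the same protocol + container as if it were a file.
ffmpeg -i "udp://127.0.0.1:5000" -c copy received.ts
```

Because a standard container travels over the wire, either end can be swapped for ffplay, VLC, or any other FFmpeg-based client when testing.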
[01:13:36 CET] <ServiceRobot> maybe pushing them raw would have been better, since I'd need to decode them on the other end
[01:14:13 CET] <JEEB> anyways, you get crazy person points for even trying to serialize lavf internal structures :P
[01:14:20 CET] <merethan> I invoke ffmpeg twice. First step I output to ffvhuff in nut. Strangely I now run out of memory. Before I did output to h264 in both steps (with associated quality loss) but it did fit precisely in my ram.
[01:14:23 CET] <JEEB> although it seems like you didn't even think it thoroughly enough
[01:14:34 CET] <JEEB> where you would have seen the insanity of it
[01:14:44 CET] <JEEB> just write your AVPackets into X in UDP for example
[01:14:49 CET] <merethan> Does it make sense that processing lossless input takes more ram?
[01:14:50 CET] <JEEB> and receive on another side
[01:14:58 CET] <ServiceRobot> ya, I thought I was being clever, but seems I misunderstood the concept
[01:15:24 CET] <merethan> Anyway, what lossless intra format takes the least ram? :P
[01:15:27 CET] <JEEB> if you could 1:1 replicate all the lavf structures simply
[01:15:30 CET] <JEEB> sure
[01:15:43 CET] <JEEB> but since that's not true, just stop and properly push multimedia over network
[01:15:52 CET] <ServiceRobot> what are all the lavf structures?
[01:15:55 CET] <JEEB> that also lets you use random FFmpeg-based clients to test the stream :P
[01:16:00 CET] <ServiceRobot> alright, I'll have to look at the examples again
[01:18:21 CET] <ServiceRobot> what's even crazier is that I did get audio to work somewhat
[01:19:51 CET] <JEEB> yea, I mean, given correct circumstances and all of the correct parameters (including timestamps and init data etc) passing stuff would work
[01:20:03 CET] <JEEB> after all there are FFmpeg wrappers that work over DirectShow etc
[01:20:10 CET] <ServiceRobot> ya, I was passing timestamps, etc
[01:20:30 CET] <JEEB> but you have to carefully regenerate values of AVPackets as well as the decoder initialization data etc
[01:20:59 CET] <ServiceRobot> which I also tried to do. I sent the decoder over as well
[01:21:10 CET] <ServiceRobot> I probably could have done this in a multitude of better ways
[01:21:34 CET] <JEEB> but really, with network streaming it's just much simpler to do it in a way in which you can then receive and test the stream with something that isn't just your code
[01:21:38 CET] <JEEB> also same for the sending side
[01:21:58 CET] <JEEB> you can test both your receiver and your sender without your code on the other side if you just use a proper container over network :P
[01:22:28 CET] <ServiceRobot> are you saying I could just do peer-to-peer streaming with ffmpeg alone?
[01:23:25 CET] <JEEB> lavf has support for receiving and sending out UDP, including multicast - yes. and there are containers which send out headers in-band
[01:23:40 CET] <ServiceRobot> O___O I'm an idiot then
[01:23:53 CET] <ServiceRobot> I should have just used an ffmpeg wrapper then
[01:25:31 CET] <merethan> Aight, another go (kill of all mem users again)
[01:53:16 CET] <Hello71> maybe someone should tell merethan about swap
[03:19:30 CET] <Keshl> In https://x265.readthedocs.io/en/default/cli.html#psycho-visual-options , I found a bit mentioning "--analysis-save" and related options. Given the description, I'd like to use these, but ffmpeg claims these options don't exist. Are they still present in recent builds?
[03:22:26 CET] <furq> Keshl: -x265-params analysis-save=foo
[03:24:32 CET] <Keshl> Thank you! For some reason, I thought x265-params was deprecated.
[03:25:55 CET] <Keshl> All happy, thank you. -- Holy nuggits, this file's big. o_o'
[03:26:32 CET] <Keshl> That's a "me" problem though. I'm happy.
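For reference, furq's suggestion spelled out as a full command; the input/output names and the analysis filename are assumptions:

```shell
# First pass writes x265's analysis data to a file; a later encode of
# the same content can reuse it via analysis-load=analysis.dat.
ffmpeg -i input.mkv -c:v libx265 \
    -x265-params "analysis-save=analysis.dat" \
    output.mkv
```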
[03:45:02 CET] <Keshl> Unrelated, just curious: Why can't I use yuv410 with x265? From my relatively naive perspective, I can't find any logical reason.
[03:46:17 CET] <furq> nobody implemented it because nobody cares about it
[03:47:44 CET] <Keshl> Am I right in assuming that it's more complicated than just dividing by two?
[09:02:36 CET] <th3_v0ice> Why would a decoder return a frame with a wrong(?) PTS value if the input packet has the proper next PTS timestamp? For example: input packet PTS 2370413408, TB of stream 1/90000, TB of decoder 1/50, after rescaling 1316896, output frame PTS 6089068. Next packet PTS 2370417008, same timebases, after rescaling 1316898, frame PTS 1316884. Why such a big jump? It's very hard to debug this since this
[09:02:36 CET] <th3_v0ice> program has been running 24+ hours and it seems to happen only on long runs. Any advice is welcome. Thanks.
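The rescaled values in the message above can be checked with plain shell integer arithmetic (truncating division, which matches the reported numbers; av_rescale_q itself rounds to nearest by default):

```shell
# Rescale a PTS from a 1/90000 timebase to 1/50: pts * 50 / 90000
pts1=2370413408
pts2=2370417008
echo $(( pts1 * 50 / 90000 ))   # 1316896, matches the first rescaled value
echo $(( pts2 * 50 / 90000 ))   # 1316898, matches the second
```

The two input packets are 3600 ticks (40 ms) apart, i.e. exactly two ticks in the 1/50 timebase, so the rescaled inputs are consistent; the jump appears only in the decoder's output PTS.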
[09:08:43 CET] <durandal_1707> th3_v0ice: which decoder?
[09:20:34 CET] <th3_v0ice> h264
[09:22:47 CET] <JEEB> if it's live input I recommend having something dumping the input as well so with problematic cases you can double-check
[09:23:04 CET] <JEEB> I think that's MPEG-TS? in which case DVBInspector is a good "alternative" to check the timestamps with
[09:30:11 CET] <th3_v0ice> Its indeed an MPEG-TS live UDP source.
[09:33:00 CET] <th3_v0ice> These are the timestamps that I am printing for every packet that comes in from input. Input packet timestamps seem fine to me, but the output from the decoder is for some reason wrong.
[09:34:40 CET] <JEEB> you can use something like multicat to dump the stream into a directory into f.ex. 2 min chunks
[09:34:47 CET] <JEEB> and then you can have a running, say, 18 hour buffer
[09:34:56 CET] <JEEB> so when you notice something weird you can double-check from the dump buffer
[09:35:18 CET] <JEEB> https://code.videolan.org/videolan/multicat
[09:35:36 CET] <JEEB> also packaged in various distributions; I think the versions from Ubuntu 16.04 upwards support it
[09:35:48 CET] <JEEB> the "dumping into directory" option
[09:45:32 CET] <th3_v0ice> So if I understand correctly, you think it's an input packet problem? I am just trying to understand what the possible culprit could be. Because I am logging input pts and timebases exactly for this reason and didn't expect the results that I saw. I will use multicat to dump the input.
[09:47:27 CET] <JEEB> we don't know what the problem is, that is why you need a *reproduce'able* sample
[09:47:46 CET] <JEEB> dumping the input and checking the dump when you notice something weird is the only way to deal with that
[09:49:08 CET] <th3_v0ice> Ok, thanks, will start the dump now.
[15:35:43 CET] <Skandalist> I don't get how to figure out which file extension to use with a certain encoder. When I don't put a file extension, the program says it doesn't know what to do. When I put a wrong file extension I get the same error or an unplayable file. I can get a list of encoders and a list of formats, but there is no info about that.
[15:39:32 CET] <pink_mist> mkv can take most encodings afaik, so just use .mkv as extension
[16:02:58 CET] <justinasvd> Morning, guys.
[16:04:17 CET] <justinasvd> How do I properly capture frames from ffmpeg, say, every 100 seconds? The command ffmpeg -i video_file -f image2 -vf fps=fps=1/100 out%3d.png does not really work.
[16:04:29 CET] <justinasvd> I.e., it outputs frames, but I don't understand what frames.
[16:05:08 CET] <justinasvd> I verify simply: open the same video_file in mplayer and scroll to 1 minute and 40 seconds. It's the frame at 100 seconds. I expect to see this frame in the ffmpeg output.
[16:05:20 CET] <justinasvd> What I see from ffmpeg is something different. What?
[16:07:25 CET] <ed8> The following command return only audio with blank video:
[16:07:27 CET] <ed8> ffmpeg -i ./data/partie-1:-Apprendre-300-mots-du-quotidien-en-LSF.jauvert-laura.hd.mp4 -b:v 1024k -c:v libvpx-vp9 -crf 32 -filter:v scale=1280x720 -loglevel error -maxrate 1485k -minrate 512k -pass 1 -quality good -speed 4 -threads 8 -tile-columns 2 -y ./data/partie-1:-Apprendre-300-mots-du-quotidien-en-LSF.jauvert-laura.hd.webm
[16:13:44 CET] <relaxed> ed8: pastebin.com ffmpeg's output
[16:18:38 CET] <ed8> relaxed: there is no stdout/stderr message
[16:26:06 CET] <saml> hey how can I create a shorter summary video? given a long video, I want to capture important scenes and stitch them together to a few second long video
[16:27:17 CET] <saml> I guess I can use some alpha go ai and get sections of videos. cut them and merge them
[16:27:47 CET] <ed8> relaxed: does the order of the arguments matter?
[16:28:17 CET] <relaxed> ed8: it maybe the file name, try quoting the input/output filename
[16:30:09 CET] <ed8> relaxed: the output file exists and weighs ~11MB but I only get audio
[16:33:56 CET] <justinasvd> ed8: That's because you set log level error. Set it as debug.
[17:01:23 CET] <justinasvd> Found the problem in my case. FFmpeg adds t = 1/2 * 1/fps to the frame time, so all the captured frames are shifted by that exact amount.
[17:09:34 CET] <ed8> Converting from mp4/h264 to webm/vp9 I got audio but no video.
[17:12:01 CET] <ed8> Here is a gist with the ffmpeg command executed, and the `mplayer -identitfy` for input/ouput files
[17:23:26 CET] <Razwelles> testing
[17:23:52 CET] <Razwelles> Hey, does anyone know how to manually calculate pts for variable framerate libx264 frame by frame?
[17:24:23 CET] <kepstin> Razwelles: need more context, what exactly are you trying to accomplish?
[17:25:15 CET] <Razwelles> kepstin: I'm encoding a live stream from a realsense camera, its timestamps are measured in milliseconds
[17:25:21 CET] <Razwelles> I'm able to encode the frames but I can't get the timing right
[17:25:27 CET] <BtbN> You can't calculate it. It's either already there from what produced the VFR content, or that information is lost.
[17:26:09 CET] <Razwelles> I thought the pts was a calculation based on timebase and framenumber?
[17:26:17 CET] <kepstin> Razwelles: set the timebase to 1/1000, put the received timestamps in the timestamp field, done?
[17:26:35 CET] <Razwelles> kepstin: I tried that, it gives me an error about dts, sec
[17:26:37 CET] <kepstin> Razwelles: frame number is irrelevant
[17:26:59 CET] <Razwelles> I did just try putting the timestamp in the pts with that timebase though
[17:27:17 CET] <Razwelles> non-strictly-monotonic PTS
[17:27:17 CET] <Razwelles> (repeated 7 more times)
[17:27:17 CET] <Razwelles> Application provided invalid, non monotonically increasing dts to muxer in stream
[17:27:49 CET] <Razwelles> I think I do check for repeated timecodes but there may be redundant frames
[17:28:23 CET] <kepstin> well, you have to ensure that the pts values your sending to the encoder are monotonically increasing (no backwards steps, no repeated numbers)
[17:28:41 CET] <kepstin> and then the encoder should be generating correct dts
[17:29:52 CET] <Razwelles> I just ran across some docs, it seems there's a timebase for both the codec and the stream. I'd been setting the stream one, so maybe I was in the wrong area altogether. I'll try the check you mentioned, thank you
[17:30:18 CET] <kepstin> ah, yeah, if the codec and muxer timebase are different, then you have to handle converting between them
[17:30:36 CET] <kepstin> note that some muxers will reject the timebase you set and instead use one they pick
[17:30:50 CET] <Razwelles> Yes that might be the problem! Fingers crossed, thank you :)
[17:40:10 CET] <justinasvd> Guys, how do I take snapshots every 100 seconds WITHOUT the initial 50 seconds shift?
[17:40:28 CET] <justinasvd> ffmpeg -ss 0 -i my_video -vf "fps=1/100" -qscale:v 5 "img%03d.jpg" does not help. :-(
[17:42:22 CET] <kepstin> justinasvd: the fps filter has a setting for rounding mode, you can try changing that to a different value
[17:42:52 CET] <kepstin> justinasvd: alternately, you can use the select filter with an expression that matches only the frames you want
[17:46:10 CET] <justinasvd> I will look into that rounding mode knob first.
[17:58:19 CET] <alkmx> Hi everyone, I have been searching but I can not find a solution. Is there a way to send a header with a rtsp protocol, I know exist -header option but this only works for http protocol
[18:01:50 CET] <justinasvd> kepstin: I can't thank you enough. round=down was exactly what I wanted.
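The fix justinasvd settled on, written out as a full command (the input name and JPEG quality are carried over from the earlier attempt):

```shell
# One snapshot per 100 s; round=down makes the fps filter round frame
# timestamps down, removing the half-period (50 s) shift seen earlier.
ffmpeg -i my_video -vf "fps=1/100:round=down" -qscale:v 5 "img%03d.jpg"
```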
[19:27:44 CET] <saml> ffmpeg -f concat -i concat.txt output.mkv -- concat.txt has file, inpoint, outpoint entries, but output.mkv is not precise
[19:28:19 CET] <saml> https://gist.github.com/saml/897cd893f49006cd80493d62a26a0e7d this is example concat.txt
[19:28:53 CET] <saml> [matroska @ 0x55825cac0100] Non-monotonous DTS in output stream 0:1; previous: 5241, current: 3860; changing to 5241. This may result in incorrect timestamps in the output file.
[19:28:57 CET] <saml> lots of output like this
[19:33:46 CET] <saml> one constraint i have is that i'm removing portions from one file (file parameter of concat.txt will be always a single file)
[19:37:45 CET] <kepstin> saml: concat demuxer is restricted in that it can only seek to keyframes
[19:38:20 CET] <kepstin> saml: given how short your segments are, they probably are overlapping given that it has to go to the keyframe before the segment start
[19:38:55 CET] <kepstin> saml: to get frame accurate seeking & concat like that, you're gonna have to use the concat filter.
[19:39:11 CET] <saml> hrm i generated those segment inpoint and outpoint using silencedetect filter.
[19:39:24 CET] <saml> how do I adjust my timestamps so that they fall in keyframes?
[19:39:36 CET] <saml> I don't need frame accurate cuts.
[19:39:36 CET] <saml> I guess I need to list timestamps of all keyframes and combine it with silencedetect filter and find the closest keyframe timestamp
[19:40:58 CET] <kepstin> saml: the easiest way to get a list of keyframe timestamps is probably with ffprobe -show_frames (there should be some additional selection options you can use to make it only show keyframes, or only specific streams)
[19:46:00 CET] <kepstin> (actually, you might want the -show_packets option, and check for K in the flags field, since 'I' frames as reported in -show_frames might not necessarily be valid seek points)
[19:46:10 CET] <saml> ffprobe -loglevel error -skip_frame nokey -select_streams v:0 -show_entries frame=pkt_pts_time -of csv=print_section=0 input.mp4
[19:46:27 CET] <saml> hrm i see. let me try both
[19:46:33 CET] <saml> thanks. merry christmas
[19:48:41 CET] <kepstin> ah, the 'key_frame' field in the frame (and the -skip_frame nokey option) should be equivalent to the K flag in the packet, I think
[19:48:46 CET] <kepstin> should be good with that
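kepstin's packet-flag variant can be sketched as follows; the input filename is an assumption, and the K flag in the `flags` field marks seekable keyframe packets:

```shell
# Print pts_time and flags for every video packet, keeping only
# keyframes (flags contain 'K'), to get usable inpoint/outpoint values.
ffprobe -v error -select_streams v:0 \
    -show_entries packet=pts_time,flags -of csv=p=0 input.mp4 | grep ',K'
```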
[20:13:26 CET] <bch> hello: i have a DVD (resolution: 720x480, aspect ratio: 1.78) which i'd like to convert to mp4. i end up with resolution: 720x480, aspect ratio: 1.50. the data is printed with mpv ("i" key). is there a way to convert a video in such a way that resolution & aspect ratio remain the same?
[20:17:18 CET] <bch> i.e. keep the resolution and aspect ratio the same as the input file. dividing 720/480 is actually 1.5, so it seems that the original DVD is wrongly labeled? not sure...
[20:21:00 CET] <kepstin> how exactly are you inputting the dvd to ffmpeg? Some of the methods (e.g. using a vob file as input) don't keep the correct aspect ratio
[20:22:33 CET] <bch> i tried different versions, for example: cat *.VOB | ffmpeg -vf scale=720:480 -i - ~/lecture01_t.mp4
[20:22:39 CET] <kepstin> that must be ntsc widescreen, so to reset it to the correct value do "-vf setsar=40/33"
[20:22:43 CET] <bch> the pipe might be the problem?
[20:22:55 CET] <kepstin> no, the issue is that vob files don't store any aspect ratio info at all
[20:23:13 CET] <relaxed> filter goes after input
[20:23:25 CET] <bch> ok, thanks. will try with -vf
[20:23:29 CET] <kepstin> well, the input is already 720x480, so the scale filter would do nothing anyways
[20:23:39 CET] <kepstin> but yes - as relaxed said :)
[20:24:25 CET] <bch> ohps yes, thanks. i stumbled upon this a few times. :)
[20:24:52 CET] <kepstin> (note that 720x480 is actually *slightly* wider than 16/9)
[20:25:08 CET] <kepstin> er, 720x480 ntsc widescreen specifically
[20:26:10 CET] <kepstin> if you want exactly 16:9 output, then do "-vf setsar=40/33,crop=704:480:8:0"
[20:28:05 CET] <bch> mhm ok, will try both. but it takes some time. 2-hour DVD.
[20:31:28 CET] <kepstin> you could add a "-t 60" option or something to encode just a minute of video to check the aspect ratio
[20:32:43 CET] <bch> ah that's nice.
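Putting kepstin's suggestions together with bch's original pipe gives something like the following; the VOB glob and output name are assumptions:

```shell
# VOB files carry no aspect-ratio info, so restore the NTSC widescreen
# sample aspect ratio, crop to exactly 16:9, and encode only the first
# minute to verify the result before committing to the full 2-hour run.
cat VTS_01_*.VOB | ffmpeg -i - \
    -vf "setsar=40/33,crop=704:480:8:0" -t 60 ~/lecture01_t.mp4
```

Note the filter goes after `-i -`, as relaxed pointed out.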
[21:51:01 CET] <fullstop> hi all. I have a raw hevc stream and I'd like to put it inside of a container, such as mp4 or mkv, but I'm having some difficulty doing so.
[21:51:24 CET] <durandal_1707> pastebin full uncut console output
[21:51:59 CET] <fullstop> I don't want to reencode, just package the hevc stream and set the framerate.
[21:52:28 CET] <durandal_1707> what you tried?
[21:54:15 CET] <JEEB> IIRC we have issues with that
[21:54:22 CET] <JEEB> esp. with b-frames
[21:54:39 CET] <durandal_1707> with hevc?
[21:54:45 CET] <JEEB> with both H.264 and HEVC, IIRC
[21:54:52 CET] <JEEB> unless that got fixed somewhere
[21:54:55 CET] <fullstop> https://pastebin.com/raw/99WtjFLu
[21:55:17 CET] <JEEB> umm
[21:55:22 CET] <JEEB> that just seems to lack any parameter sets?
[21:55:24 CET] <fullstop> this is from an ip camera which knows nothing about b-frames
[21:55:47 CET] <JEEB> basically parameter sets are what's required for initializing the decoder
[21:55:48 CET] <fullstop> right, the sps is likely missing or is not at the beginning of the file.
[21:56:15 CET] <durandal_1707> fullstop: cant help unless short sample is uploaded
[21:56:16 CET] <JEEB> if it comes somewhere along the road then you will take note of those messages regarding analyzeduration and probesize
[21:56:29 CET] <fullstop> ffplay can play the file, but does spit out some error messages regarding PPS id out of range, etc
[21:56:33 CET] <JEEB> see https://www.ffmpeg.org/ffmpeg-all.html for what those options can take in
[21:56:48 CET] <JEEB> basically make the input parser probe the input file more
[21:57:08 CET] <JEEB> generally those things expect parameter sets right away or very close by
[21:57:28 CET] <JEEB> and to mux into mp4 you need things from the parameter sets
[21:57:39 CET] <durandal_1707> it may be in special container, thats why i ask for sample
[21:57:39 CET] <JEEB> like resolution and generally the sets themselves
[21:58:12 CET] <JEEB> since the person knows about SPS I would guess he would see if it's some weird custom format?
[21:58:21 CET] <JEEB> at least that's my initial guess :P
[21:58:38 CET] <JEEB> if the person had no idea what I meant with parameter sets then I would maybe expect otherwise
[21:59:38 CET] <durandal_1707> or pastebin hexdump of start of sample
[21:59:48 CET] <fullstop> I'm fairly certain that it is a raw hevc stream. It's a rtsp stream straight from a cheap ip camera.. but it likely does not start on an i-frame and the sps stuff might be further in than the parser looks.
[22:00:19 CET] <JEEB> ok, then just look for those two options in the docs I linked
[22:00:27 CET] <JEEB> it should get parsed at some point
[22:04:08 CET] <fullstop> okay, -analyzeduration did the trick
[22:04:44 CET] <fullstop> it looks like the framerate set in the stream is incorrect; it is playing at 25 but the real rate is 15.
[22:05:24 CET] <durandal_1707> does raw h265 even store the rate?
[22:05:31 CET] <kepstin> i know h264 didn't
[22:06:04 CET] <JEEB> there's a possible field for maximum time base
[22:06:15 CET] <JEEB> which many things interpret as frames per second
[22:06:31 CET] <JEEB> I am not sure if it is parsed
[22:06:45 CET] <JEEB> but it of course requires that thing to exist in the stream
[22:06:50 CET] <kepstin> fullstop: you said this was rtsp? It might make sense to send the rtp packets to ffmpeg rather than the hevc data only. Since the rtp packets have the timestamps
[22:23:49 CET] <fullstop> kepstin: this stuff is captured by an embedded device and written to flash.. this device is not capable of running ffmpeg, unfortunately.
[22:26:14 CET] <kepstin> well, if the device that's capturing the stream can't save to a proper container with timestamps, the best you can do is set the '-framerate' input option to the framerate you expect the stream to have when reading the hevc data with ffmpeg.
[22:26:57 CET] <kepstin> and if the camera isn't constant framerate or some frames got dropped, well... :/
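The combination that resolved fullstop's case can be sketched as one command; the filenames and the exact probe sizes are assumptions:

```shell
# Probe deeper so the SPS/PPS parameter sets are found even when they
# appear late in the raw stream, and override the framerate (raw HEVC
# carries none that ffmpeg trusts here) so playback runs at 15 fps.
ffmpeg -probesize 50M -analyzeduration 30M -framerate 15 \
    -i stream.hevc -c copy output.mp4
```

`-analyzeduration` is in microseconds, so `30M` means 30 seconds of probing.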
[22:33:36 CET] <fullstop> the container on the device does happen to have timestamps, so we're good there.
[22:34:17 CET] <kepstin> so, why are you feeding ffmpeg raw hevc then, rather than the video in a container?
[22:36:24 CET] <fullstop> It's a home-grown container. :-)
[22:36:55 CET] <fullstop> I really just needed to get the data into a format playable on a PC.
[22:40:37 CET] <kepstin> in the case of a home-grown container, you might be better served by writing a program (based on one of the encoding examples) that takes packets + timestamps from your container and feeding them to the ffmpeg decoder.
[22:41:51 CET] <durandal_1707> fullstop: look if it is DHAV / DAHUA ?
[22:43:02 CET] <fullstop> https://www.dropbox.com/s/wcivrzpicx868i8/soccer.mp4?dl=0 <-- here's a sample of the video
[22:43:39 CET] <durandal_1707> is that output file?
[22:43:54 CET] <fullstop> no, it's not DHAV/DAHUA... it's a container format that I came up with to minimize flash writes and mux a bunch of cameras into a single file.
[22:44:00 CET] <fullstop> durandal_1707: yes
[22:46:09 CET] <fullstop> anyway, it's all working now. Thanks for the help!
[00:00:00 CET] --- Tue Dec 18 2018