[Ffmpeg-devel-irc] ffmpeg.log.20160423

burek burek021 at gmail.com
Sun Apr 24 02:05:01 CEST 2016


[01:28:26 CEST] <CoffeeFluxx> oh, is there a way to include chapters when muxing to mp4?
[04:18:20 CEST] <fling> How do I capture two sources at once?
[04:18:39 CEST] <fling> I mean I have two video inputs
[04:25:34 CEST] <relaxed> fling: ffmpeg -i one -i two
[04:27:47 CEST] <fling> -map 0:0 -map 1:0 -map 2:0
[04:27:52 CEST] <fling> relaxed: thanks :>
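[Editor's note] The two-input capture discussed above can be sketched as follows; the v4l2 device paths and the choice of mapping one video stream from each input are assumptions, not from the log. The command is only echoed so it can be inspected before running:

```shell
# Build an ffmpeg command that captures two webcams at once and maps
# the first video stream of each input into one output file.
# /dev/video0 and /dev/video1 are hypothetical device paths.
in1=/dev/video0
in2=/dev/video1
cmd="ffmpeg -f v4l2 -i $in1 -f v4l2 -i $in2 -map 0:0 -map 1:0 out.mkv"
echo "$cmd"   # drop the echo (run $cmd directly) to actually capture
```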
[04:45:30 CEST] <fling> why does the first webcam freeze at frame 15? -> https://bpaste.net/show/363f92d64635
[04:47:26 CEST] <fling> but everything works just fine when I capture separately
[04:47:27 CEST] <fling> hmm hmmmm
[11:04:22 CEST] <Wader8> hello
[11:04:48 CEST] <Wader8> trying to batch convert whole folder of files with this, doesn't quite work http://stackoverflow.com/questions/5784661/how-do-you-convert-an-entire-directory-with-ffmpeg
[11:05:40 CEST] <Wader8> basically it's a folder that stuff gets put into with the same exact settings; then, occasionally, I would run it to transcode everything and delete the sources
[11:06:44 CEST] <furq> what about it doesn't work
[11:08:19 CEST] <Wader8> the cmd window closes, maybe I have to add other parameters to ffmpeg
[11:09:42 CEST] <Wader8> but i used the solution people discussed in the comments below the one with double quotes; i may try the original. my filenames don't contain spaces i think, they use underscores - i usually don't use spaces in such files, thank god; i only use spaces on desktop shortcuts and light stuff
[11:09:52 CEST] <Wader8> i could use the original solution then,
[11:14:25 CEST] <Wader8> ah
[11:14:30 CEST] <Wader8> seems like old stuff
[11:14:33 CEST] <Wader8> doesn't work at all
[11:19:57 CEST] <Wader8> okay this worked http://forum.videohelp.com/threads/356314-How-to-batch-convert-multiplex-any-files-with-ffmpeg
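[Editor's note] The batch loop Wader8 needed looks like this in a POSIX shell; on Windows cmd the analogue is a `for %%f in (*.avi) do ...` line inside a .bat file. The .avi input extension and the x264 settings here are placeholders, not taken from the log. The ffmpeg line is echoed rather than run so the loop is safe to test:

```shell
# Re-encode every .avi in the current directory to H.264 in MP4.
# Extension and CRF value are assumptions about the folder's contents.
for f in *.avi; do
  out="${f%.avi}.mp4"                       # clip_01.avi -> clip_01.mp4
  echo ffmpeg -i "$f" -c:v libx264 -crf 23 "$out"   # drop echo to run
done
```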
[11:33:01 CEST] <Wader8> hi again
[11:33:08 CEST] <Wader8> this warning is always shown: Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
[11:33:27 CEST] <Wader8> i use loglevel verbose and v 9
[11:34:03 CEST] <Wader8> it's all ok, it's just that i'm not sure what the warning is about; it doesn't matter whether I use x264 params crf= or -crf
[13:51:44 CEST] <spirou> are there a way to make ffmpeg not change background color on some messages? I really don't like black background on stuff...
[13:54:42 CEST] <furq> you can set AV_LOG_FORCE_NOCOLOR to get rid of colours entirely
[13:55:03 CEST] <furq> i don't think there's a way to disable just background colours
[13:55:08 CEST] <furq> maybe file a feature request
[14:20:38 CEST] <spirou> ok
[14:23:08 CEST] <spirou> I guess best would be to make it possible to have it only do "bold" instead of setting a color, because then it would be the color I set my terminal window to use for bold :-)
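[Editor's note] The option furq mentions is an environment variable (its presence is what matters), not a command-line flag; a minimal invocation sketch, with placeholder file names and the ffmpeg call echoed rather than executed:

```shell
# Suppress all ANSI colour in ffmpeg's log output.
export AV_LOG_FORCE_NOCOLOR=1
echo ffmpeg -i input.mkv output.mp4   # drop echo to run with colours off
```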
[14:54:21 CEST] <spirou> would "PCM" be uncompressed audio? (nondestructive I mean)
[14:54:33 CEST] <Draz> Padding on video appears grey: Applying pad filter to video stream gives non-black padding. http://pastebin.com/vuHEVSGZ
[14:55:02 CEST] <Draz> I've tried a few things, as documented in pastebin. Help appreciated.
[14:56:10 CEST] <printline> Hi. What is the simplest way to read a video stream in h.264 from a webcam using ffmpeg as encoder? I've considered reading it in via UDP, but that would require depacketizing and possibly handling for dropped frames, which I want to avoid. Is there a way to read the stream output of ffmpeg directly from RAM, perhaps as a subprocess?
[14:58:24 CEST] <spirou> I found the description (whether it is correct or not I don't know) of what files my tv can use from the usb-stick, and for .mkv it is "video: H.264, MPEG1,2,4  audio:EAC3/AC3" but for .mp4 it instead lists "video: MPEG4, Xvid * H.264  audio: PCM/MP3", so I'm curious about what the PCM would be
[14:58:37 CEST] <spirou> and how to set ffmpeg to do that if I would like to
[15:02:40 CEST] <furq> spirou: https://en.wikipedia.org/wiki/Pulse-code_modulation#Implementations
[15:04:06 CEST] <furq> i have no idea why it wouldn't support mp3 in mkv though
[15:04:40 CEST] <furq> personally i would probably just remux and then see if it works, those docs are always flaky
[15:07:14 CEST] <spirou> yeah I was thinking about testing some different variants and see what it really can do
[15:09:23 CEST] <spirou> and when it say it can do pcm, would that mean everything listed on https://trac.ffmpeg.org/wiki/audio%20types and making a mp4 with it would be using for example  -acodec pcm_s16le
[15:15:53 CEST] <spirou> aha, I would need to write  -f s16le -acodec pcm_s16le  for that. I'll test with one of those too, and a mkv/mp3 version.
[15:20:59 CEST] <furq> just `-c:a pcm_s16le out.mp4` should do
[15:21:17 CEST] <furq> i'd try it with aac and mp3 though since that should cover most of what you've got
[15:21:34 CEST] <furq> pcm audio will be ~10x the size
[15:21:45 CEST] <spirou> ok
[15:22:25 CEST] <spirou> what is "global headers" btw? I assume the warning is about it adding those because mkv needs that
[15:23:46 CEST] <furq> something like that
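[Editor's note] furq's "~10x the size" estimate follows directly from PCM's fixed bitrate. Assuming a common case of 16-bit stereo at 48 kHz (the exact parameters aren't stated in the log):

```shell
# Uncompressed PCM bitrate = sample_rate * channels * bits_per_sample.
rate=48000; channels=2; bits=16
bps=$((rate * channels * bits))   # bits per second
echo "$bps"                       # 1536000 b/s = 1536 kb/s
# versus a typical 128-192 kb/s MP3/AAC track: roughly 8-12x larger
```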
[17:22:30 CEST] <DHE> API question, is there a reason multi-threaded decoding and/or encoding is not the default?
[17:33:24 CEST] <kepstin> it is in codecs which support automatically determining a suitable number of threads
[17:34:07 CEST] <kepstin> oh, in the api, huh.
[17:34:14 CEST] Action: kepstin doesn't know if it's different there
[17:37:00 CEST] <DHE> I'm just following the examples and not getting multi-threaded decoders. seems my CPU needs it to reliably decode H264
[17:37:15 CEST] <shibly> Hi, what does ffmpeg do?
[17:37:27 CEST] <JEEB> ffmpeg or FFmpeg
[17:37:30 CEST] <JEEB> FFmpeg is project
[17:37:30 CEST] <DHE> I've been explicitly enabling it for encoders because I have specific needs
[17:37:38 CEST] <JEEB> ffmpeg is the command line tool included in FFmpeg
[17:38:10 CEST] <shibly> What's the difference between ffmpeg and FFmpeg?
[17:38:12 CEST] <JEEB> also please don't double-post on -devel :P
[17:38:16 CEST] <JEEB> I just noted the difference :P
[17:38:23 CEST] <kepstin> DHE: do these files play fine with e.g. ffplay?
[17:38:31 CEST] <JEEB> one is a tool in a project and the other is the project itself
[17:38:32 CEST] <shibly> What type of project?
[17:38:38 CEST] <JEEB> software project
[17:38:42 CEST] <kepstin> DHE: you could look at what that tool does to enable multithreaded decoding and copy it if so :/
[17:38:56 CEST] <shibly> What does that FFmpeg project do?
[17:39:12 CEST] <JEEB> FFmpeg contains libraries to handle reading, decoding, filtering, encoding and writing/passing multimedia data
[17:39:20 CEST] <DHE> kepstin: I know HOW to. The program works. I'm asking if there's a reason it isn't the default. H264 in realtime is a bit flaky in terms of being able to keep up
[17:39:35 CEST] <JEEB> FFmpeg also contains tools such as ffprobe, ffmpeg and ffplay that are example usages (kind of) of those libraries
[17:39:36 CEST] <DHE> I just turned on multithreaded decoding to try and keep up
[17:40:24 CEST] <JEEB> when you talk of ffmpeg you generally talk of the ffmpeg tool within FFmpeg, but how you phrased your question made me want to affirm what you wanted to ask about
[17:40:38 CEST] <JEEB> s/affirm/confirm/
[17:40:54 CEST] <shibly> How can i read multimedia data? Suppose i have a file named p.avi
[17:41:08 CEST] <shibly> What would i do reading multimedia data?
[17:41:18 CEST] <JEEB> do you just want to use the ffmpeg tool or actually the APIs?
[17:41:21 CEST] <DHE> traditionally get raw RGB or YUV frame pixels
[18:24:31 CEST] <shibly> There was some problem with internet here.
[18:24:46 CEST] <shibly> How can i read p.avi data with ffmpeg?
[18:24:58 CEST] <shibly> What can i do with reading p.avi data?
[18:25:55 CEST] <JEEB> < JEEB> do you just want to use the ffmpeg tool or actually the APIs?
[18:26:15 CEST] <shibly> I want to use api
[18:26:20 CEST] <JEEB> then take a look at the examples
[18:26:28 CEST] <JEEB> you will be opening the file with avformat
[18:26:41 CEST] <JEEB> you will be then presented packets from the streams and you usually feed them to avcodec for decoding
[18:26:56 CEST] <JEEB> from which you received decoded pictures in the colorspace they are in (usually YCbCr)
[18:27:00 CEST] <JEEB> and so forth
[18:27:20 CEST] <JEEB> shibly: take a look at these examples https://github.com/FFmpeg/FFmpeg/tree/master/doc/examples
[18:27:46 CEST] <JEEB> demuxing and decoding is the first thing I guess
[18:28:02 CEST] <JEEB> after that you can look into what you want to do with that data you get out of it
[18:37:14 CEST] <shibly> https://github.com/FFmpeg/FFmpeg , is it the official source code mirror of ffmpeg project?
[18:39:23 CEST] <JEEB> it's an official mirror and in general has more fancy C syntax highlighting etc
[18:39:32 CEST] <JEEB> so if I want to quickly link something I usually do that :P
[18:39:44 CEST] <JEEB> for actual cloning you will want to use git.videolan.org
[18:39:49 CEST] <JEEB> since that's where the main repo resides
[18:39:58 CEST] <JEEB> http://git.videolan.org/?p=ffmpeg.git;a=summary
[18:41:02 CEST] <andrey_utkin> how could I get AVCC-formatted H.264 packets out of libx264 encoder (AFAIU it produces Annex B)? Would it be viable to implement BSF for this?
[18:43:01 CEST] <DHE> andrey_utkin: there's a BSF for the opposite direction actually...
[18:43:08 CEST] <JEEB> http://git.videolan.org/?p=x264.git;a=blob;f=x264.h;h=5581ab9b9971ea8eb9a1b944b0b360280bb998d0;hb=HEAD#l462
[18:43:22 CEST] <JEEB> andrey_utkin: you can already get the packets out of x264 in that format by setting the b_annexb to correct value. you will not get extradata pre-created of course...
[18:43:36 CEST] <JEEB> but that way you already get the length in the beginning
[18:43:49 CEST] <JEEB> so you don't have to poke the NAL units
[18:45:50 CEST] <DHE> multi-threaded decoding seems to have solved my issues...
[18:54:13 CEST] <andrey_utkin> JEEB, oh nice, thanks a lot, that allows me to reencode selected GOPs (with others remuxed) in mp4 files without conversion to mpegts and back.
[18:54:42 CEST] <andrey_utkin> with stitchable x264 option of course
[19:07:33 CEST] <Amitari> Anyone who knows how to batch-remux a whole directory of .ts-files into MKV?
[19:07:47 CEST] <Amitari> Doing it manually with MKVToolnix is so bothersome.
[19:08:01 CEST] <BtbN> ffmpeg -i something.ts -c copy something.mkv
[19:09:11 CEST] <Amitari> Thanks, but how do I make them get saved with the same file name, but a different extension? Just switching "something" to an asterisk usually causes problems.
[19:10:01 CEST] <BtbN> Use your shell to iterate over all the files and modify the filename string
[19:11:07 CEST] <Amitari> Uh, I don't know how to do that. :(
[19:11:12 CEST] <Amitari> Isn't there any other way?
[19:11:45 CEST] <BtbN> Well, you could do it manually for every single file...
[19:12:19 CEST] <Amitari> Ouch!
[19:12:23 CEST] <Amitari> You sure there isn't any way?
[19:12:34 CEST] <BtbN> The one i just mentioned
[19:12:41 CEST] <BtbN> No idea what's so bad about that
[19:12:55 CEST] <anadon> I'm getting garbage frames from the linked code and I'm not sure what could be going wrong: http://pastebin.com/3MGivRdv
[19:12:57 CEST] <Amitari> I don't understand what you mean.
[19:12:59 CEST] <BtbN> it's a plain for loop over *.ts, and some ${f/.ts/.mkv}
[19:13:14 CEST] <Amitari> Oh, I'm just not very well-versed in that.
[19:14:30 CEST] <Amitari> But I don't understand what you mean with "for loop over" and all that jazz.
[19:17:23 CEST] <furq> Amitari: what os
[19:17:39 CEST] <Amitari> GNU/Linux, Arch, BASH.
[19:18:06 CEST] <furq> for f in *.ts; do ffmpeg -i "$f" -c copy "${f%.*}.mkv"; done
[19:19:20 CEST] <Amitari> Thanks!
[19:19:26 CEST] <Amitari> I'll save that to my command collection. :P
[19:20:23 CEST] <furq> you may also want to add -map 0 if you have multiple audio/subtitle tracks
[19:20:58 CEST] <Amitari> Where in the command should that be added?
[19:21:06 CEST] <furq> between " and -
[19:21:51 CEST] <Amitari> And this won't cause any negative effects if I do it on a clip that doesn't have multiple tracks?
[19:21:56 CEST] <furq> no
[19:22:17 CEST] <Amitari> Good.
[19:22:19 CEST] <Amitari> Thanks! :D
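[Editor's note] furq's one-liner relies on the `${f%.*}` parameter expansion, which strips the last extension before appending `.mkv`. A spelled-out version with the suggested `-map 0` (the ffmpeg call is echoed so the loop is safe to dry-run):

```shell
# Remux every .ts in the current directory to Matroska,
# keeping all streams (-map 0) and copying them without re-encoding.
for f in *.ts; do
  out="${f%.*}.mkv"                           # episode.ts -> episode.mkv
  echo ffmpeg -i "$f" -map 0 -c copy "$out"   # drop echo to run
done
```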
[21:03:15 CEST] <q3cpma> Hello, I'm trying to extract some subrip from a mkv with "ffmpeg -i my.mkv -map 0:3 -c:s copy out.sub" but it gives me Could not write header for output file #0 (incorrect codec parameters ?): Operation not permitted. Any idea why?
[21:06:28 CEST] <relaxed> q3cpma: try, ffmpeg -i my.mkv -map 0:3 -c:s copy out.srt
[21:07:41 CEST] <q3cpma> relaxed: Now it says "Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument" (ffprobe says that it's dvd_subtitle)
[21:08:15 CEST] <q3cpma> relaxed: With "[srt @ 0x2388640] Unsupported subtitles codec: dvd_subtitle" before. Expected.
[21:08:52 CEST] <relaxed> pastebin the command and output
[21:09:19 CEST] <furq> you can't automatically convert dvd_subtitle to srt
[21:09:20 CEST] <relaxed> dvd subtitles are images
[21:09:34 CEST] <furq> i'm not sure what container to use there though
[21:09:40 CEST] <q3cpma> relaxed: Yeah I know, that's why my original command was -> out.sub
[21:10:08 CEST] <furq> try without -c:s copy
[21:10:25 CEST] <q3cpma> At least my subs work now, I'm using a pretty old ffmpeg (2.8.6)
[21:10:25 CEST] <furq> vobsub and dvd_subtitle aren't the same iirc
[21:10:39 CEST] <q3cpma> furq: What's the extension for it, then?
[21:10:52 CEST] <furq> i guess .sub for vobsub
[21:11:12 CEST] <furq> iirc ffmpeg doesn't have very good support for this but i could be getting it conflated with its poor support for ifo
[21:11:13 CEST] <q3cpma> furq: Without -c:s copy, I get "Encoder (codec microdvd) not found for output stream #0:0"
[21:12:47 CEST] <relaxed> what are you trying to do with these subs?
[21:13:10 CEST] <q3cpma> relaxed: They don't show while using mpv or ffplay (but they do with a mpv/ffmpeg git)
[21:13:14 CEST] <furq> you should be able to mux them directly into an output video
[21:13:20 CEST] <q3cpma> So i was trying to examine them.
[21:13:22 CEST] <furq> i don't know if there's a standalone container for dvd_subtitle
[21:13:34 CEST] <furq> and i don't know if ffmpeg can convert them to anything which does have one
[21:13:53 CEST] <q3cpma> Could I put it into a .vob?
[21:13:54 CEST] <furq> you could just mux them into an mkv i guess
[21:15:28 CEST] <q3cpma> Well, anyway, I can watch my shit with git ffmpeg/mpv, so I suppose there's no problem.
[21:15:36 CEST] <q3cpma> Thanks for the help.
[21:15:45 CEST] <relaxed> why use anything else? :)
[21:15:57 CEST] <q3cpma> relaxed: Cause gentoo
[21:16:17 CEST] <furq> is cpma still going
[21:16:24 CEST] <furq> i see promode.org is still dead
[21:16:41 CEST] <q3cpma> furq: Still play it with a friend. But yeah, the mod is dead (or finished)
[21:16:53 CEST] <furq> doubtless there are still some diehards playing ca on q3dm6
[21:17:08 CEST] <q3cpma> furq: I'd say cpm22 more
[21:17:17 CEST] <furq> not for ca
[21:17:29 CEST] <furq> i don't think i ever saw a public server on any other map
[21:17:43 CEST] <q3cpma> duel is all that matters :)
[21:17:54 CEST] <furq> aerowalk is pretty great
[21:18:10 CEST] <furq> it's better in qw though
[21:18:17 CEST] <q3cpma> qw?
[21:18:20 CEST] <furq> quakeworld
[21:18:31 CEST] <furq> the original and best
[21:18:31 CEST] <q3cpma> Must admit I never tried.
[21:18:35 CEST] <relaxed> that's the floating arena map?
[21:18:54 CEST] <furq> https://www.youtube.com/watch?v=kcgh9UNIN60
[21:19:04 CEST] <q3cpma> relaxed: Nah, vertical and small map with teleporters
[21:19:23 CEST] <q3cpma> https://i.ytimg.com/vi/ny4y30-N7wo/maxresdefault.jpg
[21:20:07 CEST] <q3cpma> furq: Looks nice. Is the netcode as good as cpma?
[21:20:32 CEST] <furq> it's pretty good
[21:20:37 CEST] <furq> idk if it has the fancy lag compensation stuff cpma has
[21:21:48 CEST] <q3cpma> Actually, I don't think a cpma player would feel like a stranger. Looks pretty similar.
[21:22:07 CEST] <emilsp> hello, what's the state of h/w accelerated video encode on android, if there is such a thing ?
[21:22:11 CEST] <furq> cpm's movement is based on qw's
[21:22:21 CEST] <furq> aside from the stuff which is from q2 and q3
[21:25:54 CEST] <Mavrik> emilsp, yp, MediaCodec
[21:26:21 CEST] <Mavrik> 18+ for a decently compatible implementation
[21:27:34 CEST] <emilsp> Mavrik, I'm looking for a C-only implementation
[21:28:15 CEST] <Mavrik> yeah, no.
[21:28:17 CEST] <emilsp> is there nothing usable on the native side? The application I am developing would not be running as a regular android app but rather as a development tool
[21:28:20 CEST] <emilsp> oh well
[21:28:46 CEST] <Mavrik> It's a silly requirement though.
[21:28:55 CEST] <Mavrik> If you want to interact with Android, you have to do it via Java.
[21:28:56 CEST] <emilsp> since I want to access /dev/graphics/fb0, I will need adb access rights
[21:29:02 CEST] <Mavrik> huh.
[21:29:08 CEST] <emilsp> the same with input events
[21:29:13 CEST] <Mavrik> What are you trying to do?
[21:29:27 CEST] <Mavrik> Since that sounds very wrong.
[21:29:29 CEST] <emilsp> I am trying to write a non-intrusive android application automation thing
[21:29:43 CEST] <emilsp> this is for my uni project, so it is very wrong
[21:29:57 CEST] <emilsp> because 'in the real world' I would be using HDMI output to obtain the video stream from a phone
[21:30:18 CEST] <Mavrik> Is there a reason why aren't you using Android's API to take screen?
[21:30:27 CEST] <Mavrik> And then use the same API to move surfaces to HW encoder_
[21:30:27 CEST] <emilsp> is there one ?
[21:30:30 CEST] <Mavrik> Yes.
[21:30:39 CEST] <emilsp> could you please refer me to it ?
[21:31:19 CEST] <Mavrik> There's an opensource app that does that: https://github.com/JakeWharton/Telecine/blob/master/telecine/src/main/java/com/jakewharton/telecine/RecordingSession.java
[21:31:28 CEST] <Mavrik> There's also an adb command that starts recording to an mp4 file.
[21:32:19 CEST] <emilsp> I need to annotate the stream frame by frame with the unix timestamp, so just piping to a unix socket (instead of a file) wouldn't work to push the stream to a host
[21:32:27 CEST] <emilsp> thank you very much Mavrik
[21:32:48 CEST] <Mavrik> Well, then take the frame, add timestamp and pass it to mediacodec encoder
[21:33:47 CEST] <emilsp> I was expecting to just have a struct, holding the delta frame/reference frame and the actual timestamp :(
[21:34:07 CEST] <Mavrik> huh
[21:34:16 CEST] <Mavrik> Well depends what you're doing.
[21:34:34 CEST] <emilsp> Mavrik, wouldn't your previous suggetion imply that I just overlay the timestamp on the actual frame ?
[21:35:09 CEST] <Mavrik> I guess, isn't that what you want? :P
[21:35:30 CEST] <emilsp> no, I'd rather not mess with the frame itself
[21:35:54 CEST] <emilsp> since I am going to send the frames off to a host anyway, I could just 'package up' each frame with the timestamp
[21:39:25 CEST] <emilsp> so, in an ideal world, I could memcpy from /dev/graphics/fb0, get_timespec(), pass the copied framebuffer to the encoder, receive a new frame from the encoder, and push both the timestamp and the frame off to a server, so the server would basically receive something along the lines of {'timestamp':1333564552423, alpha_frame:"", delta_frame:""}
[21:39:35 CEST] <emilsp> is what I am envisioning to do completely mental
[21:39:39 CEST] <emilsp> ?
[21:46:40 CEST] <Mavrik> Yes.
[21:46:52 CEST] <Mavrik> Because there's no guarantee fb0 will actually have what you want or work.
[21:46:59 CEST] <Mavrik> Because Android isn't your desktop Linux distro.
[21:47:07 CEST] <Mavrik> Use proper APIs for that in Java.
[21:48:58 CEST] <emilsp> Mavrik, the ADB command that you suggested does exactly the same thing
[21:49:14 CEST] <Mavrik> ok.
[21:49:28 CEST] <emilsp> it's just that I have no experience with video encoding and decoding at all
[21:50:25 CEST] <emilsp> and wrt the adb command, the only difference is that it outputs the framebuffer to stdout or to a file without grabbing timestamps
[21:53:35 CEST] <Mavrik> It's not outputing the framebuffer from /dev
[21:53:45 CEST] <Mavrik> But it creates a surface composer client
[21:53:49 CEST] <Mavrik> Just like proper API does.
[21:54:12 CEST] <emilsp> https://android.googlesource.com/platform/frameworks/base/+/jb-release/cmds/screencap/screencap.cpp
[21:55:36 CEST] <Mavrik> screenrecord is this: https://android.googlesource.com/platform/frameworks/av/+/kitkat-release/cmds/screenrecord/screenrecord.cpp
[21:55:51 CEST] <anadon>  I'm getting garbage frames from the linked code and I'm not sure what could be going wrong: http://pastebin.com/3MGivRdv
[21:56:16 CEST] <Mavrik> It uses private APIs which you don't have guaranteed as a user.
[21:56:33 CEST] <emilsp> Mavrik, I am aware of that
[21:56:41 CEST] <emilsp> and yes, you are correct
[21:56:51 CEST] <emilsp> I guess I'll have to do this at the Java level
[21:59:12 CEST] <Mavrik> You can try doing it via framebuffer but it might not do what you want
[21:59:24 CEST] <Mavrik> Those APIs aren't stable or standard across devices
[22:00:36 CEST] <emilsp> yes, I am aware of that; I feel like you have made me reconsider the way I approach the problem
[22:01:12 CEST] <emilsp> but still, all the examples using MediaCodec deal with streams, which essentially means that i don't get to granularly 'pick out' my frames
[22:03:31 CEST] <Mavrik> But you want to encode video to h.264 right?
[22:04:14 CEST] <emilsp> anything that is optimal, really
[22:04:33 CEST] <emilsp> I don't expect to be able to just push raw pixel values over wifi :(
[22:04:43 CEST] <Mavrik> That's kind of important.
[22:04:47 CEST] <Mavrik> You're talking about picking out frames.
[22:04:50 CEST] <Mavrik> But videos ARE streams.
[22:04:55 CEST] <emilsp> I am aware of that
[22:04:59 CEST] <Mavrik> So not sure which part did I misunderstand? :)
[22:05:13 CEST] <Mavrik> How many frames do you want to send per second?
[22:05:29 CEST] <emilsp> so, videos are streams where a frame may consist of a reference frame or a combination of a reference frame and delta frames, right ?
[22:06:11 CEST] <Mavrik> Um, video is a sequence of frames.
[22:06:13 CEST] <Mavrik> They have timestamps.
[22:06:22 CEST] <Mavrik> When encoded they may reference other frames to be fully constructed.
[22:06:29 CEST] <emilsp> yes
[22:06:36 CEST] <Mavrik> But usually you don't care about that since encoders / decoders handle that for you.
[22:06:39 CEST] <emilsp> but the timestamps are local to the video, are they not ?
[22:06:48 CEST] <Mavrik> You essentially get a sequence of timestamped frames.
[22:07:25 CEST] <Mavrik> Yes, timestamps are in video format specific units and don't reflect actual wall clock
[22:08:00 CEST] <emilsp> so, if I were to take a systemwide timestamp at the start of the recording, would or would I not be able to then 'correlate' the video timestamps with the system timestamp at the start of the recording ?
[22:08:48 CEST] <Mavrik> You could :)
[22:08:50 CEST] <emilsp> I am expecting to correlate different frames with different input events (touch inputs), so I can determine what parts of the UI have changed and thus signify what input is expected to drive an application to the next step
[22:09:00 CEST] <Mavrik> Provided that timestamps are of course correct.
[22:09:01 CEST] <emilsp> but would it be precise ?
[22:09:10 CEST] <emilsp> hmm, and what about dropped frames ?
[22:09:19 CEST] <Mavrik> Dropped frames just aren't there.
[22:09:38 CEST] <Mavrik> Remember, video has to be precise to the ranges of 1/60th of a second or even more ;)
[22:09:42 CEST] <emilsp> but the timestamps for the next frames are still 'relative' to the first frame, right ? not just +1
[22:09:48 CEST] <Mavrik> Even though, I have a feeling that you'd probably get away with screenshots.
[22:09:55 CEST] <Mavrik> Since you usually don't get 60 input events per second.
[22:10:02 CEST] <emilsp> you'd be surprised
[22:10:31 CEST] <Mavrik> The issue you'll mostly have is that a lot of players don't really handle variable framerate well :)
[22:10:32 CEST] <emilsp> anyway, thank you very much for not being dismissive and being helpful and constructive in this discussion
[22:10:56 CEST] <Mavrik> the timestamps for frames are always properly relative between themselves
[22:11:11 CEST] <Mavrik> e.g. if you have a 25fps video with 1/90000 timebase
[22:11:21 CEST] <Mavrik> first frame will have timestamp 0, second 3600, third 7200, etc.
[22:11:42 CEST] <emilsp> I am going to implement 'the player' anyway, since I will basically map input events with frames on the host side and analyze the thing frame by frame and I think also in a streamish manner to determine movement
[22:11:45 CEST] <Mavrik> and 25th (at exactly 00:00:01.000 play time) will have timestamp of 90000
[22:12:08 CEST] <emilsp> yes
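[Editor's note] Mavrik's numbers can be checked directly: with a 1/90000 timebase at 25 fps, each frame advances 90000/25 ticks, so frame n (0-based) has timestamp n * 3600:

```shell
# Frame timestamps in a 1/90000 timebase at 25 fps.
timebase=90000; fps=25
step=$((timebase / fps))    # ticks per frame interval
echo "$step"                # 3600
echo $((2 * step))          # third frame (index 2): 7200
echo $((25 * step))         # 25 intervals = 90000 ticks = exactly 1 s
```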
[22:17:38 CEST] <Mavrik> emilsp, consider if you can get away by just sending a stream of JPEG screenshots
[22:17:45 CEST] <Mavrik> Those aren't all that bandwidth intensive
[22:18:01 CEST] <emilsp> those are resource intensive, are they not ?
[22:18:41 CEST] <Mavrik> less than encoding video
[22:18:47 CEST] <emilsp> well, at least when I attempted to solve this problem in an 'easier' way in the past, piping adb screencap -p | dos2unix >> screenshot gave me about 5fps
[22:18:50 CEST] <Mavrik> they're bigger bandwidth wise
[22:19:06 CEST] <Mavrik> that sounds inefficient :)
[22:19:19 CEST] <Mavrik> What's your required FPS?
[22:19:41 CEST] <emilsp> honestly, I've no idea, but I guess 5fps wouldn't really cut it
[22:20:54 CEST] <Mavrik> Try to check that first, dragging in video encoding if you don't need it is a lot of unnecessary work :)
[22:21:25 CEST] <Mavrik> After all, even most security cameras get away with MJPEG ;)
[22:24:06 CEST] <emilsp> well, this is all in all a researchy PoC thing to get a piece of paper that says I can do what I've been doing for about two years already, so I'm guessing resorting to the least-effort thing (which would be adb screencap -p | dos2unix ) isn't exactly what I want to do here
[22:24:52 CEST] <emilsp> and in the real world, I would be getting HDMI output, so I guess I have to make something that would emulate a unix fd that outputs a video stream
[22:25:12 CEST] <emilsp> otherwise the PoC doesn't really replicate the way this would work, if it even would be able to work
[23:21:56 CEST] <mixfix41> ffplay isn't playing to hdmi through rpi2 on slackware arm
[23:22:07 CEST] <mixfix41> with no display how to get it hearing sound?
[23:22:16 CEST] <sfan5> try using a proper video player
[23:22:44 CEST] <c_14> set up your audio device correctly (alsa/pulse)
[23:26:26 CEST] <mixfix41> maybe I gotta enable mixing lets see
[23:34:31 CEST] <Prelude2004c> hey guys.. need some urgent help
[23:34:33 CEST] <Prelude2004c> http://pastebin.com/ustVEMaa
[23:34:41 CEST] <Prelude2004c> see problem in pastebin below
[00:00:00 CEST] --- Sun Apr 24 2016

