[FFmpeg-devel] [PATCH] avformat/hls: Disallow local file access by default
wm4
nfxjfg at googlemail.com
Wed May 31 16:42:41 EEST 2017
On Wed, 31 May 2017 14:49:19 +0200
Michael Niedermayer <michael at niedermayer.cc> wrote:
> On Wed, May 31, 2017 at 01:13:50PM +0200, wm4 wrote:
> > On Wed, 31 May 2017 12:51:35 +0200
> > Michael Niedermayer <michael at niedermayer.cc> wrote:
> >
> > > On Wed, May 31, 2017 at 11:52:06AM +0200, wm4 wrote:
> > > > On Wed, 31 May 2017 11:29:56 +0200
> > > > Michael Niedermayer <michael at niedermayer.cc> wrote:
> > > >
> > > > > On Wed, May 31, 2017 at 09:03:34AM +0200, Hendrik Leppkes wrote:
> > > > > > On Wed, May 31, 2017 at 2:09 AM, Michael Niedermayer
> > > > > > <michael at niedermayer.cc> wrote:
> > > > > > > On Wed, May 31, 2017 at 01:14:58AM +0200, Hendrik Leppkes wrote:
> > > > > > >> On Wed, May 31, 2017 at 12:52 AM, Michael Niedermayer
> > > > > > >> <michael at niedermayer.cc> wrote:
> > > > > > >> > This prevents an exploit leading to an information leak
> > > > > > >> >
> > > > > > >> > The existing exploit depends on a specific decoder as well.
> > > > > > >> > It does appear though that the exploit should be possible with any decoder.
> > > > > > >> > The problem is that as long as sensitive information gets into the decoder,
> > > > > > >> > the output of the decoder becomes sensitive as well.
> > > > > > >> > The only obvious solution is to prevent access to sensitive information, or to
> > > > > > >> > disable hls or possibly some of its features. More complex solutions, like
> > > > > > >> > checking the path to limit access to only subdirectories of the hls path, may
> > > > > > >> > work as an alternative. But such solutions are fragile and tricky to implement
> > > > > > >> > portably, and would not stop every possible attack nor would they work with all
> > > > > > >> > valid hls files.
> > > > > > >> >
> > > > > > >> > Found-by: Emil Lerner and Pavel Cheremushkin
> > > > > > >> > Reported-by: Thierry Foucu <tfoucu at google.com>
> > > > > > >> >
> > > > > > >> >
> > > > > > >>
> > > > > > >
> > > > > > >> I don't particularly like this. Being able to dump an HLS stream (i.e.
> > > > > > >> all its files) onto disk and simply open it again is a good thing.
> > > > > > >> Maybe it should just be smarter and only allow using the same protocol
> > > > > > >> for the segments as it already used for the m3u8 file, so that a
> > > > > > >> local m3u8 allows opening a local file (plus http(s), in case I only
> > > > > > >> saved the playlist), but an http HLS playlist only allows http
> > > > > > >> segments?
> > > > > > >
> > > > > > > we already prevent every protocol except file and crypto for local
> > > > > > > hls files. We also already block http* in local hls files by default
> > > > > > > through default whitelists (file,crypto for local files)
> > > > > > >
> > > > > > > This is not sufficient, the exploit successfully puts the
> > > > > > > content of a readable file chosen by the attacker into the output
> > > > > > > video, which, if it is given back to the attacker, leaks this information.
> > > > > > >
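The whitelist matching described above can be sketched as follows. This is a simplified, self-contained stand-in written for illustration (libavformat's actual matcher is av_match_list() and the real whitelist plumbing is more involved); the function name proto_in_whitelist is invented here:

```c
#include <assert.h>
#include <string.h>

/* Check whether `proto` appears in a comma-separated whitelist string,
 * such as the "file,crypto" default for local hls playlists.
 * Illustration only, not FFmpeg code. */
static int proto_in_whitelist(const char *proto, const char *whitelist)
{
    size_t plen = strlen(proto);
    const char *p = whitelist;

    while (*p) {
        const char *end = strchr(p, ',');
        size_t len = end ? (size_t)(end - p) : strlen(p);

        /* Require an exact entry match, so "fil" does not match "file". */
        if (len == plen && !strncmp(p, proto, len))
            return 1;
        if (!end)
            break;
        p = end + 1;
    }
    return 0;
}
```

With the "file,crypto" default, a local playlist referencing an http segment would fail this check, which is the behavior Michael describes.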
> > > > > >
> > > > >
> > > > > > Well, I want to be able to store an HLS playlist and its segments
> > > > > > locally and play it, without specifying some obscure flag. So how can
> > > > > > we make that work?
> > > > >
> > > > > What you ask for is to use vulnerable code (hls with local files is
> > > > > pretty much vulnerable by design).
> > > > > Enabling this by default is a bad idea, and it would also be an
> > > > > exception to how it's handled in other demuxers.
> > > > > For example, mov has drefs disabled without the enable_drefs flag.
> > > > >
> > > > > Other means than a flag could be used to let the user enable it.
> > > > > Would this be what you had in mind? If so, what did you have in mind
> > > > > exactly?
> > > > >
> > > > >
> > > > > >
> > > > > > In general, information disclosure is always a risk when you process
> > > > > > unvalidated HLS streams. Even if you load a remote http playlist which
> > > > > > only contains HTTP links, it could reference something on your
> > > > > > intranet and get access to something otherwise unavailable to the
> > > > > > attacker.
> > > > >
> > > > > > I would put the responsibility of ensuring this doesn't happen on
> > > > > > people creating transcoding services, rather than making our protocols
> > > > > > barely usable.
> > > > >
> > > > > According to the authors of the exploit (which they kindly seem to
> > > > > have sent to every affected company but not to us), most if not all
> > > > > of the big names had hls enabled and were vulnerable.
> > > > > So "it's the user's responsibility" alone clearly did not lead to
> > > > > secure code.
> > > > >
> > > > > Also, users cannot review our ever growing feature set for
> > > > > security sensitive features and disable them; that's not practical,
> > > > > nor would it be reasonable of us to ask them to do that.
> > > > >
> > > > > If you see a better solution than disabling hls or local file use in
> > > > > hls by default, then please explain what you suggest.
> > > >
> > > > How is this "vulnerable"? If it's via the completely useless and
> > > > annoying bullshit tty decoder, I'm going to throw a party.
> > >
> > > As stated in the commit message of the patch
> > > "The existing exploit depends on a specific decoder as well.
> > > It does appear though that the exploit should be possible with any decoder."
> > >
> > > The exploit uses a decoder which we could disable for hls, but that
> > > would not do anything to stop the vulnerability.
> > > We do not have a tty decoder. We have a tty demuxer, and the tty
> > > demuxer is not used in the exploit.
> >
> > Well, the tty demuxer / ansi decoder. The combination allows you to
> > dump any text file as video. This would assume the attacker gets the
> > video back somehow (transcoder service?).
> >
> > > Also from the commit message:
> > > "The problem is that as long as sensitive information gets into the decoder,
> > > the output of the decoder becomes sensitive as well."
> > >
> > > This should be quite obvious. If you feed some headers + sensitive
> > > data into a decoder, the output can be used to reconstruct the
> > > sensitive data.
> >
>
> > The commit message doesn't really explain how that is a security issue.
>
> hls is a playlist with links to files to play.
> If hls can refer to local files, then the output of transcoding a
> malicious hls file can contain any file readable by the user.
> That can be his passwords, private keys, and so on.
>
> This renders the output of the transcoder unusable for many purposes,
> and the user is unaware of the potentially sensitive content.
OK... though I know all that, and it changes nothing about what I said in
the previous mail.
>
> >
> > > It's easier with some decoders, harder with others, but it will work
> > > with almost all decoders with some effort. The existing exploit, not
> > > surprisingly, uses a decoder with which the effort to reconstruct the
> > > data is minimal.
> > >
> > > [...]
> >
> > I'm not sure why you keep talking about decoders. Decoders just
> > transform data (unless they have bugs, which should be fixed). Isn't
> > the problem that the "leaked" data from local filesystem access is
> > going somewhere it shouldn't? (Again, I can only imagine a transcoder
> > service that accepts anything without being sandboxed. That's pretty
> > stupid in itself.)
>
> There's no question that automated transcoding services should be using
> a sandbox; that of course is very true.
> Still, this problem is not limited to automated transcoding services.
>
> A user downloading a malicious file (which does not need to end in .hls)
> and then transcoding it can put private data of the user into the
> output file, without the user knowing. If she shares this file,
> she also shares the data embedded in the file.
> This is both a security and privacy issue.
Why would the user not look at the output? What kind of scenario is
that?
>
> >
> > As Hendrik already said, you can get the same with http access. An
> > attacker could poke around in your local network, which could provide
> > access to normally firewalled and private services. Or does your
> > attack scenario randomly stop here?
> >
> > Also, this doesn't prevent that a HLS file on disk accesses network,
> > which could be another problem with services that consume user files
> > without further security measures. An attacker could for example upload
> > a media file (that really is a m3u8 file) to make the server connect to
> > an attacker's server.
>
> The default whitelist for local hls does not contain any network
> protocols. So unless the user application or user overrides this, or
> there's a bug unknown to us, there is no network access from local hls.
>
What is considered local? What if the protocol is samba, or the thing
on the filesystem is a mounted network FS? I'm fairly sure you could
come up with loads of ideas that subtly break the existing "security".
Anyway, a remote URL could still access the local network. Isn't this
worse?
> >
> > I'm also not very welcoming of the added message. How is a user of a
> > software using FFmpeg supposed to know how to set a libavformat option?
>
> by reading the manual of the software she is using.
So the manual is supposed to contain information about random ffmpeg
error messages that are randomly being added?
>
> >
> > The real fix is to leave sanitization of external accesses to the API
> > user by overriding the AVFormatContext.io_open callback.
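A sketch of the filtering logic such an io_open override might delegate to follows. The policy shown (allow only http(s), reject everything else including plain local paths) and the name url_is_allowed are assumptions made for this sketch, not FFmpeg behavior; the callback wiring is shown in a comment because it needs libavformat:

```c
#include <assert.h>
#include <string.h>

/* Illustrative policy helper: allow only http(s) URLs, reject local
 * paths and every other protocol. Illustration only. */
static int url_is_allowed(const char *url)
{
    if (!strncmp(url, "http:", 5) || !strncmp(url, "https:", 6))
        return 1;
    return 0; /* file:, crypto:, scheme-less local paths, ... */
}

/* In a real application this would be wired into the demuxer's nested
 * opens via AVFormatContext.io_open, roughly like:
 *
 *   static int filtered_io_open(AVFormatContext *s, AVIOContext **pb,
 *                               const char *url, int flags,
 *                               AVDictionary **options)
 *   {
 *       if (!url_is_allowed(url))
 *           return AVERROR(EPERM);
 *       return saved_default_io_open(s, pb, url, flags, options);
 *   }
 *
 * where saved_default_io_open is the io_open pointer captured right
 * after avformat_alloc_context(), before installing the override. */
```

As Michael's reply below points out, a real policy would be considerably messier, since it must also admit accesses the user legitimately enabled (mov drefs, wildcard image sequences, and so on).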
>
> There are problems with this
> 1. There is no io_open callback in all affected and supported releases.
Make them update? I don't think the issue is so critical. And it's
very, very, very obvious, and most were probably aware of it.
You might as well disable the http protocol by default because a user
could enter a URL instead of a file name in a web form, and the
transcoder would unintentionally do a network access. This is on a
similar level of ridiculousness.
It's a playlist protocol, and a network protocol that reads references
from somewhere else. Of course it's to be expected that it can access
arbitrary locations.
>
> 2. The io_open callback, where it is supported, did not stop the exploit
> in practice.
Failures of the API user. (Or the API user doesn't consider it a valid
attack scenario.)
> 3. How would an application like ffmpeg or ffplay actually do the
> sanitization?
> We need to allow access if the user enabled mov drefs, for example,
> as well as in any other such cases.
> We also need to allow accesses to any manually specified paths and
> files, images with wildcard expansion, ...
> This would result in a rather messy and complex callback
> with many special cases.
If they want security, they need to sandbox it fully, severely restrict
use-cases, or something similar.
> Security fixes should be as simple as
> possible.
Well, your fix isn't simple. It adds yet another exception with
questionable effect. It makes it more complex and harder to predict
what will actually happen, not simpler.
> If people want, I can limit the local file check to the case where
> the io_open callback is not set?
> That way user applications which do their own sanitization would not be
> affected by the check or error message and would stay in full control of
> what access is allowed.
That would have little value and would make it more complex too.
I'd say a good way to make this secure would be disabling the hls
protocol in builds which are security sensitive.
In general there doesn't seem to be a good way. Feel free to prove me
wrong. (I tried something similar, but in addition to the security vs.
convenience tradeoff, it just didn't work.)