HTTP2 -- does disabling do more harm than good? #107
@Thorin-Oakenpants true. I bundled them because last time I tested, the SPDY pref needed to be enabled for HTTP2 to work in FF 52. Besides, this is as much about HTTP2 (which is almost entirely based on SPDY), and that's certainly not deprecated. |
Tor browser has a different risk profile on this issue: because all Tor traffic is encrypted gibberish invisible to ISPs, Tor users are not vulnerable to either of the two problems that HTTP2 protects against from my OP -- the header traffic manipulation used to track users, and the header traffic analysis used to unmask browsing history over HTTPS. On the other hand, vanilla FF users are vulnerable to these practices. |
Yes, HTTP2 is only ever used over TLS, but that's not relevant to the point I tried to make in my last post. Let me rephrase. For Firefox users:
For Tor users:
The encryption in Tor isn't just TLS. TLS leaves lots of data exposed (e.g. which site you're going to, the size of the headers, etc.), which leaves users vulnerable to the 2nd exploit (header traffic analysis to unmask browsing history over HTTPS). Tor hides all that info, as a VPN would. So when I said Tor has a different risk profile on this issue, I meant that when Tor disables HTTP/2, it doesn't suffer the same drawbacks in privacy protection that regular Firefox does. |
I understand what you are saying, but the fallback in FF (I assume) would still be TLS |
We should probably merge these two together (2614+2615). We should also probably add in |
I think I get your point: that in the absence of HTTP/2, TLS over HTTP/1 would still protect users. This is true only for the 1st of the two exploits I alluded to. Though to be fair, the 2nd is still just academic at this point. Do you have an understanding of what specific privacy exposure is caused by allowing HTTP/2? Because I couldn't make sense of exactly why it's disabled, I had a hard time weighing the options fairly. Because the ISPs who stand to profit by exploiting user traffic data lobbied to oppose SPDY, I was inclined to support it (or rather its successor HTTP/2). |
This is what I was referring to about multiple domains, which seems like a concern to me at first glance
Points on those quotes and the article
EDIT:
http://www.zdnet.com/article/severe-vulnerabilities-discovered-in-http2-protocol/ - is this still a thing? It's an attack on the servers, not people browsing. But it's interesting
http://blog.scottlogic.com/2014/11/07/http-2-a-quick-look.html - good summary of exactly what everything is (has
|
Great finds! Some quick points
PS: the clarification on multiplexing -- especially from different hosts -- makes sense of why Tor doesn't like it. Thanks for that info. |
Just saying it needs to be looked at - not saying it's bad. But my initial thought is .. uggh: https://en.wikipedia.org/wiki/HTTP/2_Server_Push
"BEFORE the browser requests them" .. ring any alarm bells (it kinda does for me). Now 99% of the web will use this for "good" - nothing wrong with speeding up web performance/interaction. I just want to explore if it can be used for "bad" EDIT: look at the example on the wiki page. You request the html only (eg block css via uMatirx), but get sent the css anyway. |
I agree it does feel icky. But guess what else works that way? WebSocket messages; they get pushed from the server. All I'm saying is, let's walk through the privacy implications of the server pushing files as soon as the browser connects (HTTP/2) vs the browser automatically fetching those same files as soon as the initial HTML is parsed (HTTP/1). The exposure is the same, unless HTTP/2 neuters the user's ability to control exposure through privacy tools. By the way, here is the HTTP/2 spec. I'll post it in the OP as well. |
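Side note: if server push specifically is the sticking point, Firefox appears to expose it as its own switch, separate from disabling HTTP/2 wholesale. A minimal user.js line in the style of the 2614 prefs further down (pref name from memory, worth verifying in about:config):

user_pref("network.http.spdy.allow-push", false); // assumption: refuses pushed streams while leaving multiplexing intact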
Agreed about need for an expert, and agreed doubly about the need to know whether the connection would be effectively blocked (simply stopping execution after a download wouldn't be good enough). Enjoy your coffee. |
One other consideration: how many sites currently support HTTP2? I've been using the HTTP/2 and SPDY indicator (https://addons.mozilla.org/en-US/firefox/addon/spdy-indicator/?src=search) and server support seems quite spotty. |
And no doubt it will grow rapidly EDIT: https://w3techs.com/technologies/details/ce-http2/all/all |
The 1st attack OP mentioned is mobile-only. If you can't trust your ISP you're screwed anyway. If you somehow think this also applies to non-mobile - use Tor, problem solved. @RoxKilly wrote:
https://http2.github.io/http2-spec/#security (Pants, maybe we should add those 2 links, at least the 1st because the 2nd is part of the first) @RoxKilly wrote:
How, given that h2 is enabled by default for vanilla FF users?
It has already been demonstrated that servers can be exploited (it has since been fixed), but the research was done by a "data-center security" company and they only looked at the server side, for obvious reasons. So far nobody seems to have looked at vulnerabilities in clients. Therefore I'm completely against changing our default values and opening a whole new can of worms with potentially exploitable flaws that are very likely to exist somewhere (where nobody has looked yet) - just because there's 1 (ONE) very "optimistic", potential but completely theoretical "threat" that an ISP may or may not carry out. If you live in a shitty country where you can't trust your ISP, you're free to enable those prefs in your own user.js if you think the benefits outweigh the unknown risks. But you should really focus on making your country less shitty. OT: they also deliberately and knowingly weakened TLS1.3 for alleged performance reasons, and I absolutely hate that they allowed someone to influence them and let that happen. Who gives a fuck about a few millisecs here or there in a freaking encryption protocol?! Today science is compromised and it's sad AF |
I haven't even looked at that. I assume this is the Summary
The only things I wanted to question were 1. multiplexing and 2. server push (and maybe check on 3. Need a Zilla Engineer in here to calm everyone down :) |
https://queue.acm.org/detail.cfm?id=2716278 - original article by the "dissident" dev |
@Thorin-Oakenpants wrote:
AT&T's patent acknowledges that it would be impossible to insert the identifier into web traffic if it were encrypted using HTTPS, but offers an easy solution – to instruct web servers to force phones to use an unencrypted connection. source: https://www.propublica.org/article/somebodys-already-using-verizons-id-to-track-users |
@Thorin-Oakenpants wrote:
This is where I am for my private settings. I would disable HTTP/2 in my own settings if I learned that it led to my browser connecting to servers and downloading files that it wouldn't have over HTTP/1 (because my uBO settings would have prevented it). For the public template settings, @earthlng and @Thorin-Oakenpants have made a compelling point and I've changed my mind. I think leaving it disabled makes the most sense. Mostly because:
Thanks |
So what are the good things about H2? That it may or may not make things faster? Or that it prevents a theoretical "attack"? Anything else? On the other side we have things like
I think at this point I've seen enough to be able to answer OP's question:
|
I was still typing while he posted that, sorry everyone ;) |
How could I not be salty if this guy proposes we help kill my fellow earthlings?! xD |
@Thorin-Oakenpants wrote:
Well, if you put it that way: gorhill/uBlock#2582. My own test (linked above) suggests that uBO and other content blockers do in fact work over HTTP/2. In Firefox at least. |
@RoxKilly, thanks for testing and letting us know. But IMHO that test site is not really a very representative example that h2 is a lot faster than h1. For example, without clearing the cache between refreshes, the h1 server took its time to reply with 304 Not Modified - up to 12 secs actually, on average maybe 7-8 secs for many of the tiles - while the h2 server only took an average of ~100ms per tile to respond with a 304. They want to advertise their h2 CDN servers, and there are probably a few ways to make the results more in their favor (more powerful server, throttling, etc.). Another thing I noticed is that some h1 tiles that were visible a millisecond before suddenly timed out and the server responded with 404 Not Found. Highly suspect test site IMO. Not to mention that nobody would split such a relatively small image into 200 tiles on an h1 site. |
@earthlng I agree with you that we should not trust that site as a demonstration of speed difference. For it to make sense, the same exact tiles should be fetched from the same server, and the response headers should instruct the browser not to cache anything. Comparing speed wasn't the point of my testing. I wanted to find out whether the browser exposes those HTTP/2 connections to the WebRequest API so that extensions such as uBO can effectively block them (they can). That's all I was looking for, because that's what determines whether I disable HTTP/2 in my browser. |
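For anyone wanting to reproduce that check, here is a minimal sketch of the kind of WebExtension listener involved (a hypothetical background.js; the manifest would additionally need the webRequest, webRequestBlocking and host permissions; blocking stylesheets is just an example rule):

// background.js - cancel every stylesheet request, whatever the protocol version.
// If CSS still rendered on an h2 site, pushed streams would be bypassing the API;
// in the test above they didn't - the requests surfaced and were cancelled.
browser.webRequest.onBeforeRequest.addListener(
  details => (details.type === "stylesheet" ? { cancel: true } : {}),
  { urls: ["<all_urls>"] },
  ["blocking"]
);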
review (remove old 2614+2615, replace with below)

/* 2614: disable HTTP2 (which was based on SPDY which is now deprecated)
* HTTP2 raises concerns with "multiplexing" and "server push", does nothing to enhance
* privacy, and in fact opens up a number of server-side fingerprinting opportunities
* [1] https://http2.github.io/faq/
* [2] http://blog.scottlogic.com/2014/11/07/http-2-a-quick-look.html
* [3] https://queue.acm.org/detail.cfm?id=2716278 ***/
user_pref("network.http.spdy.enabled", false);
user_pref("network.http.spdy.enabled.deps", false);
user_pref("network.http.spdy.enabled.http2", false); |
Implementation status - point 10. SPDY and HTTP/2 - Set to FALSE: network.http.spdy.enabled |
OK, I'm going to ask a really dumb question here - what exactly are the added "fingerprinting" risks/vectors/concerns? The comments on this issue all seem (?) to reference https://queue.acm.org/detail.cfm?id=2716278, but all that it says is:
(Plus some complaining that it doesn't do away with the cookie model). |
Well, we never really delved into the specifics - the word of the resident dev was enough for me .. but here we go .. first google result: https://blogs.akamai.com/2017/06/passive-http2-client-fingerprinting-white-paper.html Edit: tl;dr:
Not a massive leak in my book - UA spoofing is pretty moot anyway |
That's nothing to do with HTTP/2 and everything to do with the generic problem of detectable protocol implementation differences.
So, uh, same as a useragent string then? ;) There's nothing in that white paper about "what about HTTP/1.x". In other words: HTTP/2: no evidence for inherently greater fingerprintability, just "it's new network code of any sorts -> differences between clients." p.s.:
On a discussion about privacy... ah the irony 😁 |
Just throwing things here - yet to read em
Don't forget there are other aspects to HTTP2 - it kills polar bears, has server PUSH, concurrency / multiplexing .. holy crap what is that on page 22 .. deducing system uptime? |
Wow .. page 34 .. entropy in pseudo headers .. every single major browser decided to do it differently :bash-head-on-wall: |
oh great, I had this weird feeling that @smithfred was gonna bring this shit up next. Now I have to read all this stuff again FFS. I mean ... it literally freakin kills polar bears! POLAR BEARS DUDE! What else do you need to know? Don't you care about the environment, huh? HUH??! Don't you have a HEART?? IT'S BAD!!!! HTTP/2 IS BAD! bad bad bad - can't we just leave it that, pleeeease? |
Well, TBH, the one paper Enjoy your reading :) |
Yep, been slogging through the damn thing to customise it, so it's all hitting Issues in order :) Fuck polar bears, murderous bastards. Have you even $image_search-ed "bloodied polar bear"? All their cute little cubs covered too in the blood of poor innocent sea-bunnies? (Or whatever they eat... no-one knows). Also, can we just shoot the planet into the sun and get it over with? I digress... |
you asked
and it seems the main FP issues that have been identified and proven (so far) are:
admittedly those are probably all not that worrying for most people: Clients can be identified by other means as well, spoofing UA is a bad idea (we already knew that) and there are probably a shit ton of similar OSes using any given VPN at all times, so that's not really a problem either AFAICS. But there could be other issues as well, either FP stuff not identified yet or things like PUSH ( The user.js is a template - if you want to enable H2 go for it. |
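For anyone taking that route: flipping the three prefs from the 2614 review above back to their defaults is all it takes, eg in your own overrides:

user_pref("network.http.spdy.enabled", true);
user_pref("network.http.spdy.enabled.deps", true);
user_pref("network.http.spdy.enabled.http2", true);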
uptime? nothing to do with H2 but here you go: http://lcamtuf.coredump.cx/p0f3/ ^^ does it show your uptime? |
BlackHat pdf - page 38 - Akamai dude slideshow - page 22 - I know it says deduce but this intrigues me. Wot the F are they talking about? |
I don't know |
p0f shows "uptime" for me but is completely wrong 😆 |
OS detection is based on the differences between TCP/IP packets generated by various operating systems, such as the TTL value contained in the IP header or other flags. p0f stores the operating system signatures in the file p0f.fp |
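For the curious, the uptime guess in p0f-style tools classically has nothing to do with HTTP either: it comes from TCP timestamps (RFC 1323). Sample a host's TSval twice, derive the tick rate, then extrapolate back to when the counter was (presumably) zero at boot. A toy sketch with invented numbers:

// Two observed TCP timestamp (TSval) samples, taken 10 seconds apart (values made up)
const s1 = { tsval: 123456789, wallMs: 0 };
const s2 = { tsval: 123457789, wallMs: 10000 };
const hz = (s2.tsval - s1.tsval) / ((s2.wallMs - s1.wallMs) / 1000); // tick rate: 100 Hz here
const uptimeHours = s2.tsval / hz / 3600; // assumes the counter started at 0 on boot
console.log(`~${uptimeHours.toFixed(0)}h uptime`); // ~343h - and why p0f's guess can be wildly off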
What an interesting thread. I'll keep in mind that if there are HTTP2 issues, they're not the concern of uBO.
@RoxKilly mentioned above,
The atis.org page is no longer accessible; I found its last snapshot on the Wayback Machine. Old thread already, pity I didn't read it sooner. |
A nice read, if interested:
|
I'd like to file a complaint regarding this pref: the user.js fails to mention polar bears!

Anyways, I read this whole thread and some info elsewhere, and while I didn't read any of the technical docs, as I'm sure they'd mostly go above my head, what I have read has said the multiplexing is nothing more than turning a serial connection into a parallel one, which sounds great for performance. I've not seen anything indicating that it can result in connections being made to other servers; everything suggests it's just more connections to the same one.

So to me, the bigger, and likely only, real issue is the server push, and even that seems at worst a mixed bag. It's confirmed uBO still blocks everything on HTTP/2 that it would on HTTP/1, so you're not going to see any more ads or be exposed to any more JS. The real question is: is uBO capable of keeping the server from sending that stuff, even though it (uBO) would normally block requests for it, or does the stuff get sent regardless due to push? If uBO is able to block it from being sent, then there's no problem. If it's not, then the main/only problem is more bandwidth usage. This is definitely not ideal for a slower or metered connection, and then it becomes a matter of whether the speed benefit gained by multiplexing is more or less than any slowdown imposed by the pushing of unused data (and, ironically, it's the slower connections that would benefit more from multiplexing that would also suffer more from pushing).

I'm currently seeking answers on uBO's subreddit, so while I doubt many of you use Reddit, you may want to jump in on that conversation. Also, I hate to be the bearer (no pun intended) of bad news, but we've probably killed more polar bears running our computers to research and discuss this than HTTP/2 does in a year. 😦 |
AKA Prefetching over multiple streams
The pros aren't that big (an article said around 10%); the cons are more content from big players and improved fingerprinting: Bob goes from site a to site b over http2, where TCP connections are kept open. Both sites are served by the same CDN, which then knows exactly how much time Bob spent on site a. If Bob opens a site c over http2 from the same CDN, the service keeps monitoring poor Bob's surfing time. [1] WikiTextBot on r/uBlockOrigin Also see: gorhill/uBlock#2582 |
The problem is, basically everything I've read so far does not suggest that at all. And there's no reference on that page to where that information is coming from. So I'm not sure if that's actually true. Just because a connection is held open longer between Bob and Amazon, doesn't mean that connection will somehow magically be used between Bob and Google, too. But that seems pretty official, so I guess it's true, in which case then, yeah, multiplexing=bad. Which is unfortunate, because it seems like it would be a nice little bump in performance. With my connection, I'll take whatever I can get, though of course not at the expense of such a huge potential privacy leak. As for the CDN stuff, that makes a bit more sense, and is just another reason to use Decentraleyes.
Already read that, and referenced it in my post on Reddit. The problem is it only partially answers the questions at hand. It would still be nice to know the answer, but I guess it's less important knowing that multiplexing is such a potential concern, whereas I thought before only push might be. |
FYI: https://bugzilla.mozilla.org/show_bug.cgi?id=1337868#c3
When this bugzilla is closed/confirmed, then it might be worthwhile enabling HTTP2 (it does have speed advantages, but probably not earth-shattering - it might have been 20-30% or more, which I read in some Tor trac ticket - depends on the site/content of course) - but only if you have FPI on
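FPI itself is a single pref as of FF52; in user.js terms:

user_pref("privacy.firstparty.isolate", true);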
Hi! P.S.: I've been following this awesome repo for months now and I just wanna say thank you for your hard work. You all are great! EDIT: Just saw you are already discussing the matter in #491, sorry for the bother. |
Not only that, but I think SSL session ticket ids are closely related. The thing is, TBB is different - it uses Tor and changes circuits (I'd have to check now, but it used to be every 10 minutes) |
You're right, I didn't think of that. |
maybe start a new topic "Investigate: SSL + HTTP2 + AltSrv" - this thread is rather long, and 491 is for something else (I'll use it to do a TBB vs ghacks diff when TBB ever gets a final 8, it's still a huge mess IMO). It would be kinda cool to know if SSL session tickets are required for HTTP2 and AltSrv, because that's the overriding factor in even allowing them in the first place. Currently FF only wipes SSL sessions on FF (or last PB mode) close, otherwise it allows up to 2 days (and I'm not sure if it respects that, eg if a site says the id is valid for 5 days, what does FF do, and vice versa if the site says it's shorter) |
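If SSL session tickets do turn out to be the overriding factor, there is - assumption: it's a hidden pref with no default about:config entry, so it has to be created manually and behavior may vary by version - a switch that disables TLS session resumption identifiers outright:

user_pref("security.ssl.disable_session_identifiers", true); // hidden pref: disables session IDs/tickets used for resumption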
Prefs 2614 and 2615 disable SPDY and HTTP2.
The link reference leads to a Tor project page that doesn't give much info about what's wrong with these protocols:
Does anyone understand (1) how exactly SPDY/HTTP2 expose users' privacy more than HTTP1 and (2) whether there is any evidence of harm in the wild?
I bring this up because disabling SPDY/HTTP2 has significant downsides. We may not care that these protocols drastically reduce overhead (header data sent back and forth) and speed up browsing, but we should care that they protect against a real ongoing threat to privacy.
I'm specifically talking about what ISPs call "Header Enrichment": the growing trend of altering customer traffic on the fly, tagging it with unique identifiers to make users easier for advertisers to track regardless of browser settings. In the US, AT&T, Verizon and Comcast have all been caught doing this. Here's a great article on the practice. Below I quote the last paragraph:
Here is evidence of the telecom industry's displeasure with SPDY (scroll down to see the News section). So we have a growing privacy abuse that is actually becoming standard in the telecom industry, but one that SPDY/HTTP2 protect against. Not only that, but these protocols protect against a class of attacks that use header traffic analysis to unmask user activity. Here is an example of another privacy attack that wouldn't work with SPDY/HTTP2. From the article:
So my basic question is: Are we disabling HTTP2/SPDY with a full understanding of the privacy implications of both sides of the coin? I think actual, ongoing threats outweigh theoretical ones, especially when the ongoing threats are becoming industry standard practice.
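For concreteness, header enrichment is observable from the client side: fetch any plain-HTTP endpoint that echoes request headers back and look for the carrier IDs named in the articles above. A sketch (httpbin.org is a public echo service; X-UIDH is the Verizon header, X-ACR reportedly AT&T's; run it from a plain-http page, since enrichment can't touch HTTPS traffic and a secure page can't fetch http:// URLs):

fetch("http://httpbin.org/headers") // must be http://, not https://
  .then(res => res.json())
  .then(({ headers }) => {
    // httpbin echoes request headers as JSON; header-name casing may vary
    const suspects = ["X-Uidh", "X-Acr"];
    const found = suspects.filter(h => h in headers);
    console.log(found.length ? `enrichment headers present: ${found.join(", ")}` : "none seen");
  });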
EDIT - resources