Prototype encrypting data client-side with the system's public key #92
Comments
I feel that the security model of encrypting with the node keys is not the best one, and SD should switch to the same approach used by GlobaLeaks: encrypting with the recipients' keys. In any case, leveraging OpenPGP.js is a good strategy; I've been following the project and it has improved a lot over the past year! Adding client-side crypto with server-provided keys will add a bit of perfect forward secrecy to the communication exchange, but it does require JavaScript on the submission interface. In GlobaLeaks the submission client is fully JS, but I don't know whether using JavaScript on the submission interface is acceptable in the SD threat model.
Oops, didn't mean to close this!
Just say no to Javascript crypto. If the server is compromised to capture plaintext documents, it could just as easily be compromised to corrupt the Javascript crypto code served to the source. So the gains are illusory. In the meantime, you'd be forcing (or at least encouraging) sources to turn off NoScript, making them vastly more vulnerable to Freedom Hosting-style malware.
@klpwired Javascript crypto gives you certain value related to PFS with respect to the following real-risk context:
So, Javascript crypto is valuable provided that you properly assess the kind of protection it will give. In any case, you must consider that in SD the default browser is the Tor Browser Bundle. If you'd like to keep the philosophical choice of "keeping Javascript off", I'll just agree to disagree :-)
@klpwired @dtauerbach @fpietrosanti We're already using Javascript on the source interface (jQuery). @klpwired Re: "If the server is compromised to capture plaintext documents, it could just as easily be compromised to corrupt the javascript crypto code served to the source," I don't agree, because presumably we would perform the client-side Javascript crypto in a browser extension, which the client would have to download [1] before using the source website. This actually provides extra security against server corruption, because the client would have the ability to check the source code of the extension and make sure their documents are actually being encrypted. You can essentially think of the browser extension as an OS-independent, user-friendly front-end to OpenPGP. It could be as safe as using GPG offline, provided the user doesn't have malware in their browser. (Thanks to @Hainish for having this conversation with me last night and bringing up some of these points.) [1] Either we could bundle the extension with the Tor Browser Bundle, or the client could download it over Tor separately as a signed, deterministically-built package. We need to be careful that simply having the extension in your browser doesn't single you out as a SecureDrop source!
I agree that there are dangers to turning on host-served Javascript and using Javascript crypto libraries. But I think the analysis deserves a more nuanced treatment. In particular, host-served Javascript can be compromised, but is also auditable. Suppose an attacker has compromised the source server, and can send malicious Javascript. If the client gets anything except the expected Javascript, it has the opportunity to raise a red flag and fail, or, perhaps more importantly, detect that the server has been compromised. It is much more difficult for the attacker to target particular individuals given that TBB is being used, so even if a handful of clients are doing this auditing due diligence, this raises the cost of serving malicious Javascript quite a lot. On the other hand, if the encryption happens server-side, then the attacker who has compromised the source server (but not the SVS) will simply have plaintext access to the documents and not have to raise extra audit flags by serving malicious Javascript. There are serious downsides of course:
I'd suggest it's worth thinking through carefully. Empirical data could be gathered about downsides 1-4 in order to weigh them against the upside. Client-side encryption provides a major benefit, and makes the increased security of the air-gapped SVS much more significant. A longer-term solution to consider would be to create a browser add-on that ships with TBB. That way the Javascript isn't host-served, but there's still a question of how the public keys of the SecureDrop sites get into the add-on -- the host could send the public key, but there would have to be some way to establish trust in that key.
Well, not to strip the nuance away, but unauthorized plaintext access to documents being leaked to a journalist for publication is not the primary threat. De-anonymization of the source is. Making the system Javascript-dependent increases the risk to the source's anonymity in order to provide (again, illusory) gains in document confidentiality, which is a distant second in importance.
Tor Browser is not the only way people will use SD, either: Tor2web users could well be using Chrome or, even worse, Internet Explorer to access an SD. You will also have sources using throwaway internet-ready cell phones, which can connect through either Tor2web or Orbot. While I am not averse to the use of Javascript for some front-end functions, there are social issues with encrypting client-side with Javascript, even more so when an extension has to be added on to do so. Consider that the scope of potential sources ranges from technophobes to Snowdens. Then ask these two rhetorical questions: Could an Edward Snowden-type whistleblower (or in fact anyone who has read the leaks concerning EGOTISTICALGIRAFFE) be put off using a dead-drop system that employed client-side Javascript to encrypt files, knowing that the vast majority of NSA effort is focused on browser hijacking of the Firefox shipped with Tor? (In fact, anyone in contact with him could well ask the real Snowden for his thoughts on this issue.) Could a technophobe be put off by the extra step in a situation where the extension, or at least the public key, had to be manually installed, rather than being presented with the familiar file-select field all computer users have become accustomed to? Source de-anonymisation is the number one threat if it comes down to a weighing exercise.
@Taipo In the SD threat model Tor2web is not contemplated; it is in the GlobaLeaks one. We need to see what the decision will be regarding #43, but I expect that, following SD philosophy, there will be no compromise. Please consider that most whistleblowers are technologically unskilled and a little bit dumb, so the main effort is to try to protect them from their own mistakes, not from the NSA. @klpwired If de-anonymization of the source is the main risk, then you need a very usable user interface with super-strong-and-useful awareness information. To do that, you will need a fancy UI built with some major JS framework and a proper usability study made by UX design experts based on emotional-design concepts. Social risks are much more relevant than technological risks, IMHO.
@fpietrosanti My point about Tor2web is that it allows a user to access an SD using a wider variety of web browsers than Firefox, so any GPG encryption extensions would need to be available across a much wider range of browsers, or else browser-brand restrictions would be needed. I agree with you about technologically unskilled whistleblowers. That is basically what a 'technophobe' is; it's a slang term for the same thing. My apologies for the language-barrier issues (perhaps).
I've been having this conversation on the securedrop-dev mailing list; I've copied my conversation with Patrick Ball:

Date: Mon, 21 Oct 2013 19:28:51 -0700

hi Seth, Bill, and Micah,

My concern is essentially the same as the audit's final bullet in 3.4. In
The solution that seems to me safest to the host-based-attack I proposed in
If the server has to inject evil javascript in order to compromise encryption
I think that encrypted public and private keys could be stored on the DD
Danny's point that Tor doesn't want to implement anything special for you or
I had a long conversation with Ben Adida about this a couple of years ago,

hope this helps -- PB.

Patrick Ball
Date: Mon, 21 Oct 2013 21:32:48 -0700

Hey Patrick,

I definitely like the idea of the encryption being done on the client side.
But just because an attack is exposable doesn't mean that it will be exposed.
The current problem is actually that the version of Firefox that the Tor

Bill
Date: Tue, 22 Oct 2013 09:49:55 -0700

hi Bill,

first off, yes, certainly you may use anything from this thread in any way

On 21 Oct 2013, at 21:32, William Budington wrote:
I know that Hushmail was encrypting client side, but by "host-based
Of course not. But a non-exposable attack has zero chance of being detected.
True, but there might be a way to detect a necessarily incomplete but
It's not clear to me that a perfect system can be built, but an imperfect
I think they'd be way more open to it if the add-on were somehow
I am convinced by Danny's point that having a SecureDrop-specific extension
I don't think the primitives approach has the same attack surface as HushMail
Good luck! and I look forward to following the developments -- PB.
Regarding the specific OpenPGP.js threat model/uses, please join http://list.openpgpjs.org/ where these kinds of discussions happen every month!
Thanks Bill. One option that I just discussed with @micahflee would be to encrypt client-side if and only if the user has Javascript running; if it is not running, display an alert of some sort encouraging the user to encrypt the documents herself before submitting. In terms of threats, I don't think there is a big delta between "attacker having plaintext access to documents" and "attacker being able to identify the source" -- I think the documents will often be the most identifying piece of information about the source, perhaps more identifying than having root on the computer used to leak. I also don't necessarily agree that Snowden would be turned off by the idea of client-side Javascript-based cryptography but NOT by the idea that the submission platform has you send the documents in plaintext to a host, instead of encrypting them in a way that they can only be decrypted via an SVS. I don't know what the right answer is, but I think this issue deserves careful consideration.
@klpwired The concern here is that plaintext access to documents may lead to de-anonymization of the source due to identifying metadata in the documents.
@diracdeltas As I expressed on the mailing list, I do not believe that change (to allow sources to customize the number of words in their codename) has a good usability/security tradeoff. Given what we know about how NSA tries to de-anonymize Tor users, I think we should be encouraging users to disable JS. The only reason I accepted that change is because the codename chooser gracefully degrades and is still functional with JS disabled. I do not think we should add any functionality that requires JS, and the current existence of JS in the tree should not normalize its further use (without careful consideration).
@dtauerbach As long as it is being served in a signed browser extension, I agree, but this has serious usability problems (although bundling it in TBB would help a lot). In the end, I agree with @klpwired above. If an adversary could compromise our server to the degree that they could access the plaintext of documents being uploaded, then they could also serve a JS-based exploit. This would be much more likely to succeed because while uploaded documents might have identifying metadata, a successful exploit on the client's machine would certainly lead to de-anonymization. Therefore I think we should focus on securing our server and encouraging users to minimize their attack surface by disabling Javascript.
I don't think it likely that the TBB will include a browser plugin for SecureDrop, for a number of reasons. Firstly, every additional plugin is an additional vector for attack of all TBB users, not just the ones that want to leak documents. I don't think they would want to expose their users to that risk. Secondly, it would imply that the TBB is a tool for leaking documents, which is not what they're going for. I think it may be unreasonable to ask the TBB to include such a plugin. As an alternative to a TBB plugin, I think we can develop an additional piece of infrastructure; let's call it a "SecureDrop Directory Server" (SDDS). This server could periodically check the running SecureDrop instances for their HTML and Javascript. Since it is a request over the Tor network, the SecureDrop server could not differentiate between an SDDS and a real leaking client, thus avoiding the HushMail problem of providing a malicious application to specified IPs. The SDDS then verifies whether the set of HTML and JS returned is a verified instance of SecureDrop. This would make detection of malicious SecureDrop instances streamlined, and we could create a directory page that de-lists instances that are not verified (or even instances that are too old and for which security vulns have been found). Provided that the SDDS requests can't be fingerprinted (and we'd have to provide the same headers as the TBB in our requests), this would eliminate the timed attack vector. In addition, the SDDS could be provided a list of public keys for running instances of SD servers, so the attack above that Dan mentioned (the JS providing a MITMed public key) could also be eliminated by having these SDDS servers. One criticism I've heard of this model is that it is basically centralized. But it doesn't have to be; anyone can run an SDDS, including Freedom of the Press Foundation and any other organizations that wish to be guardians of the sanctity of SecureDrop servers. As a side note, above I mentioned that the TBB currently does not support window.crypto.getRandomValues. I talked to Mike Perry and he mentioned that before December 2nd, they will be upgrading to FF 24, which does indeed provide the secure RNG API. This means that in the near future we can conceivably provide a client-side application for encrypting documents to the journalist.
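To make the SDDS idea concrete, here is a minimal sketch of the fetch-and-compare step it might perform. Everything in it is an assumption for illustration: Node.js as the runtime, the `socks-proxy-agent` npm package, Tor's SOCKS port on 127.0.0.1:9050, and a hypothetical pinned-digest table `KNOWN_GOOD`; this is not an actual SecureDrop component.

```js
// Hypothetical SDDS check: fetch an instance's page over Tor (so the
// request looks like a real source's) and compare it to a pinned digest.
const http = require('http');
const crypto = require('crypto');
const { SocksProxyAgent } = require('socks-proxy-agent');

// Hypothetical pinned SHA-256 digests of each instance's approved HTML/JS.
const KNOWN_GOOD = {
  'examplesd.onion': '<pinned sha256 digest of the known-good page>',
};

function checkInstance(host) {
  // socks5h: resolve the .onion hostname through the proxy, not locally.
  const agent = new SocksProxyAgent('socks5h://127.0.0.1:9050');
  http.get({ host, path: '/', agent }, (res) => {
    const hash = crypto.createHash('sha256');
    res.on('data', (chunk) => hash.update(chunk));
    res.on('end', () => {
      const digest = hash.digest('hex');
      if (digest !== KNOWN_GOOD[host]) {
        console.error(`ALERT: ${host} served unexpected content: ${digest}`);
      } else {
        console.log(`${host}: verified`);
      }
    });
  }).on('error', (err) => console.error(`ALERT: ${host} unreachable: ${err.message}`));
}

checkInstance('examplesd.onion');
```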
In-browser "generic crypto tools" is the goal of the W3C Web Crypto Working Group. This is still in development and it is unclear when it will be ready to be implemented. "Ways to audit evil code" is specifically mentioned as a use case here. The TBB developers have in the past entertained this idea, although it would be nontrivial and who knows what they would say now. Ultimately the problem is one of establishing a trust anchor if you want this to be automated. If you don't want to involve the user, you would have to either TOFU or do something similar to pinning. Otherwise you can get the user involved, which offloads the burden onto them (with concomitant risks).
We just released the new ESR, which is based on Firefox 24 and has
Nice one, @Hainish!
Correction, TBB based on FF 24 by Dec 10th:

11:47:20 mikeperry: by dec 2nd, all TBBs should be based on FF24
OK, a quick recap: Patrick, @Hainish, @fpietrosanti, and I seem to favor exploring a host-delivered Javascript approach, trying to maximize the auditability/security of the untrusted code, and noting this will only be possible after Dec 10, when TBB migrates to the new Firefox ESR. @klpwired, @Taipo, and @garrettr warn against requiring a user to use Javascript (I agree). Would the three of you -- or anyone -- like to weigh in on whether you would consider non-required host-delivered Javascript? If a user is not running it, we could show a message suggesting that additional encryption may be helpful. There are other concerns with this approach too -- I tried to enumerate them above. In addition to the host-based Javascript question, there has been discussion by @diracdeltas and others about shipping an extension with TBB, or otherwise requiring a signed extension, and having that extension responsible for the Javascript (so that it is not delivered by the host). This is more work, and poses several additional problems: deniability if the source's computer is compromised, key management, etc. But it has the big advantage of not relying on Javascript delivered by the host. Have I missed anything important?
Great recap, @dtauerbach. I'd still consider non-required host-delivered Javascript harmful. It trains users in the wrong direction. Users should be blocking Javascript (and Flash, ActiveX, Java, Silverlight, whatever) from SecureDrop sites, so that if the host is compromised, the risk of the host successfully delivering malware to the user is minimal. IMO, the best use of Javascript would be: window.alert("You should turn off Javascript");
@dtauerbach +1 on the recap. In an ideal world, I agree that all encryption would be end-to-end from sources to journalists. Currently, there are too many open questions around Javascript cryptography for us to implement it. It is fine for projects like Cryptocat, which advertise their experimental nature and state up front "You should never trust any piece of software with your life, and Cryptocat is no exception". We are asking sources to take enormous risks to share information using our platform, and I think we can best serve them by being as cautious and conservative as possible in our design choices. @klpwired I completely agree with your last comment, and have opened #100 and #101 to address it. This is not to say that I think SecureDrop could never encrypt data client-side using Javascript (using a browser extension, until someone solves the problem of securely delivering Javascript in an auditable manner). I would love to see experimental work in this direction. Perhaps it could be part of a 1.0 release sometime in the future!
@garrettr @klpwired That seems like a reasonable decision, and I definitely agree that users are generally safer not browsing with Javascript (or Flash, or Java, etc). Still, I think it's worth being specific about the concerns. In this case, the main concern seems to be that we don't want to encourage users to turn on Javascript, to the point where we want to actively discourage them. That seems like a good idea to me. I listed other concerns above as well that folks haven't discussed. Are there others we've missed? The reason specificity is important is twofold. First, for the project itself, I agree that being conservative makes sense, but one should be conservative relative to one's design goals, not just generally afraid of doing any crypto via Javascript or in browsers. For example, suspend your disbelief and suppose the Tor Project made TBB ship with Javascript always on, with no option to turn it off. Then I think that might change the decision above, despite the fact that the Javascript libraries used are still experimental and security guarantees of host-based systems are almost non-existent. The decision we've gone with for now for SecureDrop would be analogous to Cryptocat not performing any sort of end-to-end crypto at all (just an IRC/Jabber server). It's hard to argue that Cryptocat as a service is less secure than if Nadim just ran an equivalent Jabber server, and this has been empirically borne out as best I can tell from a cursory look at the bugs that have been identified in the service (e.g. http://tobtu.com/decryptocat.php; yes, they are bad; no, they aren't worse than no encryption at all). So in this case, I think the real concern we've keyed in on is that users are less safe running Javascript and we want to actively discourage them, not that the Javascript crypto is too experimental to deploy from a security perspective*, given that the alternative is no e2e crypto at all. Second, there is a lot of FUD about Javascript crypto. With the meteoric shift of software to the web, it's inevitable that most cryptography will take place in Javascript in browsers sooner than we'd like, if we'd like more than a tiny population to use crypto at all. Specificity allows us to productively move forward and identify showstoppers, to feed back into standards development.
@dtauerbach I totally agree that there is an excessive amount of FUD about Javascript and Javascript crypto, compared to its actual value and the real context of use in anonymous whistleblowing technologies. It's likely that 99.99% of use of a Tor Hidden Service website is done with the default TBB configuration, which has Javascript turned on; if that assumption is true, all the JS/non-JS discussion is moot. That's the reason GlobaLeaks started as a pure-Javascript framework, and the upcoming chat and messaging features are going to be fully JS-crypto based (with Cryptocat, OpenPGP.js and Mailvelope). However, in order to satisfy JS-related sensibilities, we are going to implement a simplified GLClient that exposes a submission interface with only HTML and interacts with the GLBackend over its submission API http://docs.globaleaks.apiary.io/ . This set of security improvements would be the focus of this project proposal:
I just opened a ticket, "Log statistics about javascript support of whistleblowers submitting information" (#109), to collect objective data about the actual use of NoScript on the submission interface of live infrastructures.
The amount of uneducated FUD regarding JS crypto in this thread is terrifying, especially considering the otherwise solid reputation of the people involved. Guys, the concerns @klpwired has about JS crypto are solvable using a signed browser extension to deliver the code. Also, regarding your other concerns on the matter, please do read my blog post on JS crypto, which I hope will dispel a lot of the FUD in this thread.
@diracdeltas SJCL could be a good choice. Again, for performance it might be good to use asm.js (Emscripten-compiled native libraries). This blog post is another good take on "what library should we use to do crypto in the browser?" Generally, I think it's most important that we first define a generic API that can be utilized by a variety of clients (browser plugins, native desktop or mobile apps, etc.) A design document for such an API, and the accompanying protocol, is in progress.
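For reference, SJCL's high-level convenience API is tiny; a minimal sketch follows (symmetric, password-based only; the public-key layer SecureDrop needs would have to sit elsewhere, and the passphrase and strings here are purely illustrative):

```js
// Minimal SJCL sketch: password-based symmetric encryption via the
// library's high-level API. Illustrative only; not SecureDrop code.
const sjcl = require('sjcl');

const ciphertext = sjcl.encrypt('correct horse battery staple', 'the document text');
// sjcl.encrypt returns a JSON string bundling salt, IV, and ciphertext.
console.log(JSON.parse(ciphertext).ct);

// Round-trips back to the plaintext with the same passphrase.
console.log(sjcl.decrypt('correct horse battery staple', ciphertext));
```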
Yesterday at FOSDEM I had a chat with @tasn, who has written a nice browser extension that verifies the PGP signature of web pages. This is something that is worth experimenting with in the context of SecureDrop. In SecureDrop releases, we'd ship JavaScript that encrypts submissions client-side to the instance's public key, and this JS code would be signed with the SecureDrop release key. The browser extension would verify the sig and only execute the JavaScript if the signature verifies (we'd need the release key baked into the extension). We'd fall back to server-side crypto for sources that have JavaScript turned off entirely. We'd also (eventually) need to get this browser extension bundled into Tor Browser. This doesn't address the problem that a malicious server can replace the submission key on the server with an attacker-controlled one, though we can detect this using OSSEC and alert on the replacement of the key so that admins can respond. This would be a significant improvement over the current situation, where a very careful attacker (i.e. one careful not to trigger any OSSEC alerts) that is able to compromise the application server can read submissions from memory without being detected.
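To make that flow concrete, here is a rough sketch of the extension-side check using the OpenPGP.js v5 API; the baked-in `RELEASE_KEY_ARMORED` value and the mechanism by which the detached signature reaches the extension are assumptions for illustration, not SecureDrop's actual design.

```js
// Sketch of verifying host-served page content against a detached PGP
// signature, with the release key baked into the extension at build time.
import * as openpgp from 'openpgp';

const RELEASE_KEY_ARMORED = `-----BEGIN PGP PUBLIC KEY BLOCK-----
...baked into the extension at build time...
-----END PGP PUBLIC KEY BLOCK-----`;

async function pageIsAuthentic(pageSource, armoredSignature) {
  const verificationKeys = await openpgp.readKey({ armoredKey: RELEASE_KEY_ARMORED });
  const result = await openpgp.verify({
    message: await openpgp.createMessage({ text: pageSource }),
    signature: await openpgp.readSignature({ armoredSignature }),
    verificationKeys,
  });
  try {
    await result.signatures[0].verified; // rejects on a bad signature
    return true;  // safe to let the page's JavaScript run
  } catch {
    return false; // refuse to execute the page's JavaScript
  }
}
```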
@redshiftzero covered almost everything; I just have a few comments. The extension verifies user-controlled websites. This means that users can add website + pubkey combinations as they please. I plan on adding a preloaded list of trusted services and their corresponding keys, and would love to add SecureDrop once you are ready. You probably know better if it makes sense, but in my mind I see two alternative ways of using this extension with SecureDrop. The first is that you sign your HTML (for the extension) and instances, e.g. NYT, just upload it as-is. This is the easiest solution, and will let users verify the code is really from SecureDrop. I understand you'd like to use this extension in order to support client-side encryption, which is great, and what the extension was made for (I created it for EteSync's web client), but I think you could already benefit from it, given the sensitive nature of the project. For example, attackers with the ability to modify files on the server (but not sniff transport) could at the moment change the form's target to a server they control and steal data this way. This extension would prevent that. If there's anything I can do to help with integrating the extension, or if you have any suggestions or queries regarding the extension, please let me know.
Added "prototyping" to title to clarify that's what we're committing to for now. |
Looking towards the future, ECDH key pairs could be generated in the trusted crypto VM on the Qubes Reading Room Workstation (RR). A large number of ECDH public keys, all signed by the long-term ECDSA identity key of the RR, are sent to the server through a networked VM (a series of them, really). Clients get served unique* ECDH public keys, verify the signature over them, derive a shared symmetric key for AEAD, and use that to encrypt a document/message. The client then uploads a tuple
So this straightforward hybrid-encryption scheme provides forward secrecy and a measure of sender unlinkability, and the crypto is pretty straightforward to implement. Honestly, the harder part of the implementation will be the complicated security architecture of SD, where instead of client-server we're dealing with CryptoVM-NetworkedVM-server-client. I glossed over some finer details here, including how to achieve forward secrecy for replies and how it might be possible to likewise add a measure of receiver unlinkability (although that seems harder), but I would be happy to flesh this out more and even write a formal spec that I could have smarter cryptographers than I help with/verify, if the SD team is ever serious about implementing this. The above builds on some of the ideas in #3281.
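A minimal browser-side sketch of the client's half of this hybrid step, using the standard WebCrypto API with P-256 and AES-GCM as stand-ins (the proposal above doesn't fix curves or ciphers); the ECDSA signature check on the server-provided key, and the KDF a production design would interpose over the raw shared secret, are deliberately elided:

```js
// Sketch of the client side of the hybrid scheme (assumptions: WebCrypto,
// P-256 ECDH, AES-GCM as the AEAD). Signature verification over the
// server-provided key and a proper KDF step are omitted.
async function encryptForInstance(serverEcdhPublicKey, fileBytes) {
  // Fresh client ephemeral key pair; the public half is uploaded with the
  // ciphertext so the Reading Room can re-derive the shared secret.
  const clientKeys = await crypto.subtle.generateKey(
    { name: 'ECDH', namedCurve: 'P-256' }, false, ['deriveKey']);

  // Shared AEAD key from ECDH(client ephemeral, server-provided key).
  const aeadKey = await crypto.subtle.deriveKey(
    { name: 'ECDH', public: serverEcdhPublicKey },
    clientKeys.privateKey,
    { name: 'AES-GCM', length: 256 }, false, ['encrypt']);

  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv }, aeadKey, fileBytes);

  const clientPub = await crypto.subtle.exportKey('raw', clientKeys.publicKey);
  return { clientPub, iv, ciphertext }; // roughly, the uploaded tuple
}
```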
Cross-referencing: "An End-to-End Encryption Scheme for SecureDrop" (May 2018 student course paper). Unfortunately I wasn't able to find a repo for the example extension code referenced in the paper.
Hey, I've been thinking about this ticket, and am definitely part of the "javascript is risky to enable" camp. I can definitely see some promise with using a plugin that can validate that the javascript code is signed, but would still prefer having a self contained browser plugin. tasn/redshiftzero/others brought up a few ways the js signing could work, sorry for rehashing statements.
As stated earlier, a generic browser plugin that does the encryption for you might be the best option. The user doesn't need to enable any JavaScript, and the largest risk I can see here is a MITM replacing the public key sent to the user with one the attacker controls (via a server compromise). This does not place them in any worse scenario than they are in with the current setup, and it requires an active attacker. @redshiftzero mentioned earlier that this could potentially be monitored with OSSEC controls. Is there any reason against just having a standalone "PGP encrypt file" browser plugin that I missed? It should be generic enough that it wouldn't be SD-specific, and it provides all the functionality without having to figure out how to manage code signing across deployments.
What do you mean by a standalone "PGP encrypt file"? If you mean just a generic browser extension that validates generic pages using normal PGP signatures with normal PGP keys, that's what Signed Pages is (the extension mentioned previously in this thread).
@tasn in this case PGP would be used to encrypt files before uploading them.
@zenmonkeykstop, oops, thanks for the clarification. I can see the confusion now upon re-reading the thread. I thought he was talking about the signature verification, but instead he was talking about having a plugin that encrypts the files being uploaded before they even hit the page. Sorry for the noise. As for the comment: it looks like this solves the file-upload problem quite well, but I think there's still value in verifying the integrity of the page to prevent the running of unapproved JavaScript that could be used for e.g. fingerprinting.
One other point that just occurred to me about client-side encryption of submissions: server-side, submissions are gzipped before GPG encryption, I'd imagine to ease the pain of large file transfers over Tor. As HTTP compression isn't going to help with GPG-encrypted files, a client-side solution will either have to do something similar or deal with said pain. (This is a minor detail compared to the stuff above, obvs.)
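A small sketch of mirroring that gzip-then-encrypt order client-side, assuming a browser with the CompressionStream API (older Tor Browser builds would need a JS library such as pako instead); `encrypt` here is a placeholder for whatever encryption function ends up being used:

```js
// Compress first, then encrypt: ciphertext is effectively incompressible,
// so the server's gzip-then-gpg order has to be mirrored on the client.
// Assumes the browser CompressionStream API; `encrypt` is a placeholder.
async function gzipThenEncrypt(file, encrypt) {
  const gzipped = file.stream().pipeThrough(new CompressionStream('gzip'));
  const compressed = await new Response(gzipped).arrayBuffer();
  return encrypt(new Uint8Array(compressed));
}
```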
@lev-csouffrant Any update on your prototyping work? I see the public repo at https://github.com/lev-csouffrant/Uplocker , should we consider that the final state of your prototyping effort, or are you still planning further work on it? Thanks :)
Hey eloquence, yeah, that prototype is in final-ish state, and we will see how much free time I can put into the last few important pieces (i.e. testing and packaging). Otherwise, it works as a proof of concept for now: a browser plugin that encrypts files via a PGP key (passed to the plugin via an HTML meta tag). I also handled compressing the files before encrypting them, as @zenmonkeykstop suggested. Compressing encrypted files is not going to help much, so any compression should be done before the encryption phase occurs. One thing I am worried about after writing this is the memory usage for files. You need one copy to be stored in memory for the encryption to run on (there's a streaming-file capability, but it didn't look like it was supported in the version of Firefox Tor Browser uses). The encrypted file will also need a copy in memory. If compression is going to be supported, that is a third copy that will be stored. Additionally, you need to transfer the file from a content script to the background script due to limitations on what each portion can run. I used a transferable object to pass it between them, which should mean that it is not putting a fourth copy into memory... With a 500MB max per file, that means at minimum it will probably need 1-2GB of memory just for the file itself because of all of this. Maybe someone smarter at JS stuff can chip in on whether there's a better way to handle this?
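For anyone picking this up, the zero-copy handoff pattern looks roughly like the sketch below, shown across a Worker boundary for simplicity (WebExtension content-script-to-background messaging has its own constraints, as the comment above notes); `encrypt-worker.js` is a hypothetical worker script:

```js
// Zero-copy handoff of a large file buffer. Passing `buf` in the transfer
// list moves the underlying memory rather than copying it; afterwards buf
// is detached in this context (byteLength === 0).
const worker = new Worker('encrypt-worker.js'); // hypothetical worker script

async function sendFileToWorker(file) {
  const buf = await file.arrayBuffer();
  worker.postMessage({ name: file.name, buf }, [buf]);
}
```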
Right now as I understand it, the source uploads a sensitive document, that document is sent over Tor to the hidden service running on the source server, that source server encrypts the document, and it is only decrypted on the SVS. This means that if the source server is somehow compromised, an attacker could recover the plaintext of the document before it is encrypted.
Channeling some of the feedback from Patrick Ball at the techno-activism event tonight, it might make sense to instead encrypt on the client with the public key of the system. That way, if the source server is compromised, the data will still be protected so long as the SVS is secure, since the SVS has a higher security model than the source server.
The way that was suggested to accomplish this is via a browser extension, or baking keys into the browser. In addition to being a lot of work, this brings up the whole can of worms that comes with key distribution (e.g., does the browser extension/patch serve as a CA?)
In the shorter term, one could just provide the public key with Javascript, and encrypt the document using it before sending it to the source server. There are two issues I see with this: first, adding Javascript may open up an attack vector if no Javascript is being used right now. Second, the attacker we've presumed to have control of the source server could modify the Javascript to include a different public key. The second problem, I think, is solvable with a super-basic browser add-on or something that detects when a client sees unexpected Javascript. Not all clients have to run this. Given that the attacker does not know who has submitted documents, she must attack everyone to attack her target. That means even if a small percentage of people run the testing add-on, it will still make an effective attack (against everyone) detectable.
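For concreteness, a sketch of that shorter-term approach against today's OpenPGP.js (v5 API assumed; the thread predates this API). The armored key is whatever the server serves, which is exactly the value the detection add-on above would need to audit:

```js
// Sketch of client-side submission encryption with a server-provided
// public key (OpenPGP.js v5 API assumed). armoredInstanceKey is the very
// value a compromised server could swap, hence the auditing add-on idea.
import * as openpgp from 'openpgp';

async function encryptSubmission(armoredInstanceKey, fileBytes) {
  const encryptionKeys = await openpgp.readKey({ armoredKey: armoredInstanceKey });
  return openpgp.encrypt({
    message: await openpgp.createMessage({ binary: fileBytes }),
    encryptionKeys,
    format: 'binary', // upload raw OpenPGP bytes rather than ASCII armor
  });
}
```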
[There should be a separate bug for if and how to move the conversation with the journalist to use a somewhat similar client-side approach.]