[Image: Wall of surveillance cameras. Photo by Lianhao Qu on Unsplash]

When a tech company tells you your data is private, what does that actually mean?

In most cases it means there is a document — a privacy policy — that says the company won’t look at your data except under certain circumstances. Diagnosis of technical issues. Compliance with legal obligations. Fraud prevention. The language is reassuring, and for the most part the companies mean it.

But “we won’t” is not the same thing as “we can’t.”

Privacy by policy

Years ago, as a young engineer newly arrived in Silicon Valley, I heard about a new product called Gmail. It offered incredible things — search over your entire inbox, access from any browser, seemingly unlimited storage. The catch was that your email had to be visible and searchable by Google. I remember hesitating for a moment. What if they deviate from their friendly, “don’t be evil” ethos? But the convenience was irresistible, and I was happy to accept the invite. (It was invitation-only back then.)

I made the same deal most of us made: I traded access to my data for a better product, trusting that the company’s policy would keep things in line. And for the most part, it did.

Today, every major collaboration and cloud platform — Google Workspace, Microsoft 365, Slack, Zoom — operates on the same basic model: your data sits on their servers, encrypted in transit and at rest, but readable by the provider. The encryption keys are theirs. The infrastructure is theirs. Your privacy depends on their policy, their internal controls, and — critically — their response to outside pressure.

To be fair, Google does offer client-side encryption for Workspace — where the keys stay with you and Google cannot read the content. But it is only available on their most expensive tier (Enterprise Plus, quote-based pricing), requires the purchase of an additional “Assured Controls” add-on, and demands that you set up an external key management service and configure a separate identity provider. It is, by any measure, a project — not a default.

And even if you go through all of that, you are still trusting that Google’s closed-source code does what it claims. There is no way to verify that a backdoor hasn’t been introduced — whether by choice or under legal compulsion. The code is proprietary and will remain so. Privacy is available, if you are willing to pay significantly more, do the work, and still take Google’s word for it. For everyone else, the provider holds the keys.

This distinction matters because policies bend under pressure, and the pressure is increasing.

When policy meets a subpoena

A recent New York Times investigation reported that the Department of Homeland Security has sent hundreds of administrative subpoenas to Google, Meta, Reddit, and Discord, demanding identifying information behind anonymous social media accounts that criticized or tracked Immigration and Customs Enforcement.

These are not court-issued warrants reviewed by a judge. They are administrative subpoenas — issued unilaterally by the agency itself. And the tech companies largely complied.

Google, Meta, and Reddit turned over at least some of the requested data. Some companies gave users 10 to 14 days to challenge the subpoena in court — placing the burden on individual users to hire lawyers and fight the government on their own. As an ACLU attorney quoted in the article put it: “The government is taking more liberties than they used to. It’s a whole other level of frequency and lack of accountability.”

This is not a hypothetical scenario. This is happening now, in the United States, to people exercising their right to free speech. And while these particular subpoenas targeted social media accounts, the same legal mechanism applies to any data a provider holds — your email, your documents, your chat history. If the provider can access it, a subpoena can reach it.

The structural problem

The issue is not malice. It is capability. Most of the time, the policies work as advertised. But if the provider can read your data, then anyone who can compel the provider can also read your data. That includes governments, courts, law enforcement, and — as we are seeing — agencies acting without judicial oversight.

Privacy by policy is a promise. Promises can be broken — by a change in leadership, a shift in legal interpretation, a new administration with different priorities, or simply a subpoena that arrives on a Friday afternoon.

Privacy by default

Privacy by default means something different. It means the system is designed so that the provider cannot access your data, even if they wanted to, even if compelled.

Signal has demonstrated what this looks like in practice. When served with a grand jury subpoena demanding user data, Signal could only provide two pieces of information: the date each account was created and the date it last connected. No messages, no contacts, no groups, no profile names. As they explained: “It’s impossible to turn over data that we never had access to in the first place.” That is not a policy decision — it is an architectural one.

Signal proved this works for messaging; mothertree is built to extend the same principle to the rest of your workspace — chat, documents, video, and email:

  • End-to-end encryption means the server never sees plaintext. There is nothing to hand over.
  • Open source code means you don’t have to trust the provider’s claims — you can verify them yourself.
  • Self-hosting means you can remove the application provider from the equation. You still depend on lower-level infrastructure — DNS, a cloud host or ISP, the operating system — but the attack surface shrinks dramatically. No single vendor has access to both your keys and your data.
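
The first point — that the server only ever holds ciphertext — can be shown with a toy sketch. This is a deliberately simplified one-time-pad demonstration for illustration only, not mothertree's actual protocol; real end-to-end systems use authenticated encryption and key-exchange protocols:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte (toy cipher)."""
    return bytes(d ^ k for d, k in zip(data, key))

# The key is generated on the client and never leaves it.
message = b"meet at noon"
key = secrets.token_bytes(len(message))  # one-time pad, client-side only

# Only the ciphertext is uploaded. This is all the server -- and therefore
# all a subpoena served on the server -- can ever produce.
ciphertext = xor_bytes(message, key)

# The client, holding the key, can recover the plaintext; the server cannot.
assert xor_bytes(ciphertext, key) == message
```

The point of the sketch is structural: because decryption requires a key the provider never possesses, "handing over the data" yields only random-looking bytes.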

We don’t ask you to trust us more than you trust Google or Microsoft. We’d rather you didn’t have to trust us at all.

It shouldn’t require courage to speak freely

The people targeted by those DHS subpoenas were not criminals. They were tracking public information about government activity and commenting on it. In a system where privacy depends on policy, exercising free speech can be quietly de-anonymized with a letter to a tech company.

In a system where privacy is the default, there is nothing to hand over. That is the difference — and it matters now more than ever.

The hard case: email

Not every part of a collaboration platform fits neatly into end-to-end encryption. Email is the obvious exception.

Unlike chat or documents, email is a federated protocol — we don’t control both ends of the conversation. True end-to-end encryption for email exists (PGP, S/MIME), but decades of experience have shown that these technologies are too cumbersome for widespread adoption. We can’t force parties outside mothertree to use them, and we won’t pretend otherwise.

Instead, mothertree uses what we call edge-to-edge encryption: messages are encrypted from the moment they enter mothertree until the moment they leave, using keys that belong to the user, not the server. We cannot read the contents of your email while it is in our care. The tradeoff is that the message is only protected while it is within mothertree — once it leaves our boundary for an external recipient, standard email transport applies.
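
A toy model of that boundary may make the idea concrete. Everything here — the `Mailbox` class, the `ingress`/`read` names, and the one-time-pad cipher — is invented for this sketch and is not mothertree's implementation; it only illustrates that encryption happens at the edge, before storage, with a key the server never holds:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy cipher: XOR data with a same-length key."""
    return bytes(d ^ k for d, k in zip(data, key))

class Mailbox:
    """Toy edge-to-edge mailbox: only ciphertext ever sits at rest."""

    def __init__(self) -> None:
        self.stored: list[bytes] = []

    def ingress(self, message: bytes, user_key: bytes) -> None:
        # Edge in: encrypt with the user's key before anything is stored.
        self.stored.append(xor_bytes(message, user_key))

    def read(self, index: int, user_key: bytes) -> bytes:
        # Edge out: only the key holder can turn storage back into plaintext.
        return xor_bytes(self.stored[index], user_key)

msg = b"quarterly report attached"
user_key = secrets.token_bytes(len(msg))  # held by the user, not the server

box = Mailbox()
box.ingress(msg, user_key)

assert box.read(0, user_key) == msg  # recoverable only with the user's key
```

Once a message crosses the egress edge toward an external recipient, it leaves this model entirely — which is exactly the tradeoff described above.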

We think this is an honest middle ground: maximum protection where we have control, and transparency about where we don’t. This system is still under active development and testing, and we will cover the technical details in a future post.


mothertree is an open-source, end-to-end encrypted collaboration platform. Learn more at mother-tree.org.