author     127.0.0.1 <127.0.0.1@web>              2017-01-21 18:00:51 +0100
committer  amnesia <webmaster@amnesia.boum.org>   2017-01-21 18:00:51 +0100
commit     c550775d9972ba933ca7d6c58a1eaf5cca07099f (patch)
tree       034bd5108796e1ad6eeb29223c81fcf09ab69969 /wiki/src/blueprint/tails_server.mdwn
parent     bba05effe728770932c0257ce32af5475fc8ea8d (diff)
fix typo
Diffstat (limited to 'wiki/src/blueprint/tails_server.mdwn')
-rw-r--r--   wiki/src/blueprint/tails_server.mdwn   2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/wiki/src/blueprint/tails_server.mdwn b/wiki/src/blueprint/tails_server.mdwn
index a5dd7ee..8d6b2ff 100644
--- a/wiki/src/blueprint/tails_server.mdwn
+++ b/wiki/src/blueprint/tails_server.mdwn
@@ -153,7 +153,7 @@ My current proposal is that, until we can use a Tor version with the [next gener
The reasoning for this is that users running onion services in Tails currently face an increased risk of deanonymization. In the default Tor configuration, the first Tor node that the Tor client connects to stays the same for a longer time, currently 60 days. This node is called the entry guard. The reasoning is to reduce the risk of using a bad entry node, because the entry guard is the only node in the Tor network that knows the real IP address of the Tor user. An attacker controlling the entry guard gains important information about the Tor user, which can lead to deanonymization.
-Tails currently does not [persist the Tor state](https://tails.boum.org/blueprint/persistent_Tor_state/), which means that Tor chooses a new entry guard after each system boot. Thus Tails users have a much higher risk to use a bad entry guard at some point, which is bad enough in itself. But when hosting onion services in Tails, this is even worse, because it is a lot easier for a bad entry guard to deanonymize onion services than normal Tor clients. For example, if an attacker knows the onion address of an onion service A and control a Tor node which is used as an entry guard, they can just block all traffic on the entry guard and try to connect to A. If A unreachable only while they block the traffic at their Tor node, they know that it is A who is using their Tor node as an entry guard, so they know the IP address of A.
+Tails currently does not [persist the Tor state](https://tails.boum.org/blueprint/persistent_Tor_state/), which means that Tor chooses a new entry guard after each system boot. Thus Tails users have a much higher risk to use a bad entry guard at some point, which is bad enough in itself. But when hosting onion services in Tails, this is even worse, because it is a lot easier for a bad entry guard to deanonymize onion services than normal Tor clients. For example, if an attacker knows the onion address of an onion service A and control a Tor node which is used as an entry guard, they can just block all traffic on the entry guard and try to connect to A. If A is unreachable only while they block the traffic at their Tor node, they know that it is A who is using their Tor node as an entry guard, so they know the IP address of A.
This attack requires the attacker to know the onion address of the onion service they want to deanonymize. Unfortunately, the current implementation allows attackers controlling a directory server responsible for an onion service to learn that service's onion address. This will be fixed in the next generation onion services. So once we can use the next generation onion services in Tails, it will be sufficient for Tails Server users to keep their onion address secret and only share it with users they trust. I think this will be good enough to make the client authentication optional and display a prominent warning about keeping the onion address secret in Tails Server.
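
The guard behaviour discussed in the paragraphs above comes down to Tor's state handling: the entry guard Tor picks is recorded in the state file inside its DataDirectory, so the choice only survives a reboot if that directory is persisted. A minimal illustrative torrc sketch, assuming a Debian-style data directory (the path and the lifetime value are placeholders, not Tails' actual configuration):

    DataDirectory /var/lib/tor    # the state file recording the chosen entry guard lives here
    UseEntryGuards 1              # default: reuse a single guard as the first hop
    # GuardLifetime 60 days       # illustrative only; rotation is normally governed by the consensus

Because Tails does not persist this state across boots, a fresh guard is selected every session, which is exactly the increased exposure described above.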
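Regarding the client authentication mentioned in the last paragraph, here is a hedged sketch of how current (pre next-generation) onion service client authorization is configured in torrc; the service directory, port mapping, client name, and onion address are hypothetical placeholders rather than Tails Server's actual settings:

    HiddenServiceDir /var/lib/tor/my_service/   # hypothetical service directory
    HiddenServicePort 80 127.0.0.1:8080         # hypothetical virtual port to local port mapping
    HiddenServiceAuthorizeClient stealth alice  # only clients holding the generated cookie can connect

    # Client side (hypothetical values): the cookie that Tor writes into the
    # service's hostname file is added to the connecting client's torrc:
    # HidServAuth abcdefghijklmnop.onion <auth-cookie-from-hostname-file>

Once next generation onion services are available, keeping the onion address secret already closes the directory-server leak described above, which is why the blueprint proposes making this authorization step optional and instead warning users to keep the address secret.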