Why don't we encourage Tor relays in East Asia?

I’ve heard about Tor relays being very EU-centric.

Many Asian countries have some of the best residential connections. Countries like Sweden have tons of relays on fiber ISPs, since fiber is ubiquitous in the Nordics, but South Korea has just as much fiber and very few relays in comparison. Even Russia, a fascist country with terrible laws, was better off for relays pre-censorship than Asian democracies like Japan, or even “free” European countries like Spain.

Yes, bandwidth is expensive in Asia, so maybe it’s a terrible option for VPSes. But that doesn’t apply to residential connections, provided we have a public IP. We should really do good relay outreach to Asian nations. Heck, I’ve always wanted Asian countries to have tons of relays like NA/EU.

I am aware relays might be “slower” in Asia. The bandwidth scanners aren’t fair to Asian relays, because most relays are in Europe. They weren’t even fair to West Coast cities like Seattle and LA; my only saving grace is that I prefer New York, so I moved back. I’m no longer passionate about coding, but we should improve the scanner code and network to make it truly global rather than Eurocentric.

So we do need to encourage relay outreach in East Asia and not just in North America and EU. Why not expand it to the global south as well while we’re at it?

2 Likes

More than half the relays are in Germany alone, most likely because many VPSes are hosted there, and because of Germany’s laws around hosting in general.

So we do need to encourage relay outreach in East Asia and not just in North America and EU.

Agreed.
Are there any hosting providers in East Asia people could recommend?

I’ve always read that using a residential connection was not the best idea or solution. If I use my asymmetrical connection as an example, I would be limited by my egress (outgoing/upload) bandwidth.

My connection is made for surfing, like most home connections: a little bit of data through my upload allocation to request something from a web server, and a ton of stuff back on my download allocation, e.g. stream this program, download this file. Great for me, but not so great for a server.

I could easily receive tons of little bits of data on my incoming (download) allocation from Tor users requesting web pages from somewhere, but I would choke trying to send the return traffic from those websites. The outgoing pipe is not large enough.

Or have I got this wrong?
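For what it’s worth, the constraint described above can be sketched numerically (a hypothetical illustration with made-up numbers, not from this thread):

```python
# Illustrative sketch (hypothetical numbers): a relay forwards traffic,
# so bytes out ~= bytes in, and sustained relay throughput is capped by
# the smaller of the two directions -- usually upload on home lines.
def relay_capacity_mbps(download_mbps: float, upload_mbps: float) -> float:
    """Rough sustained relay throughput on an asymmetric link."""
    return min(download_mbps, upload_mbps)

# A typical asymmetric home line: 300 Mbps down, 30 Mbps up.
print(relay_capacity_mbps(300.0, 30.0))  # -> 30.0: upload is the bottleneck
```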

Agreed cost would be a big factor, probably the biggest. What would my ISP think or do about my bandwidth usage?

I have leased from a lot of providers, and PQ Hosting is very Tor friendly. Here’s a look at where their servers are located:

2 Likes

re: low consensus weight in Asia. We have an open issue waiting on network health analysis:

4 Likes

No XMR payments

1 Like

We now have an AS running a non-exit relay in Japan, with real-life measured bandwidth of 500+ Mbps to major ASes in Japan (expected to become a few Gbps in the future with 10GbE interfaces).

However, we have rarely seen the relay’s CPU usage go above 10%, nor seen the relay’s bandwidth usage go above 10Mbps.

Relay Search: menhera1

We recently raised the relay’s bandwidth limits to 12 MB/s (96 Mbps). We also improved the AS’s connectivity by adding more upstream connections, and we are planning 10 Gbps physical connections (we currently have only 1 Gbps interfaces).
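For reference, limits like these are set in torrc; a minimal sketch (the Burst value here is an assumption, not stated in this thread):

```
# torrc fragment (sketch): cap the sustained rate at 12 MB/s
BandwidthRate 12 MBytes
# Burst value is an assumed example; it must be >= BandwidthRate
BandwidthBurst 24 MBytes
```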

I understand that usually not all of the bandwidth will be used all the time. How can we improve the “observed bandwidth” of our relay, and of other East Asian relays, so that it properly reflects infrastructure/configuration limits? Low consensus weights in East Asia can easily discourage relays in the region.

I believe there are two main issues:

  1. Low consensus weight for relays across Asia (see the ticket above).
  2. c-tor relays not fully utilizing your bandwidth.

For the first problem, we need an analysis from the network health team.

For the second problem, you can run up to 8 tor instances per IPv4 address to use all your bandwidth. If you’re using Debian, you can create these instances with tor-instance-create (man page). Please remember to declare all your relays in MyFamily.

Let me know if, after creating multiple tor instances, your relays start using the available bandwidth.
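The multi-instance setup described above can be sketched like this on Debian (a hypothetical example: the instance name, nickname, and port are made up, and the MyFamily placeholders must be replaced with your relays’ real fingerprints):

```
# Create a second instance (Debian's tor package provides this tool)
sudo tor-instance-create relay2

# /etc/tor/instances/relay2/torrc -- sketch
ORPort 9002
Nickname myrelay2
# List the fingerprints of ALL your relays, in every instance's torrc
MyFamily $FINGERPRINT_OF_RELAY1,$FINGERPRINT_OF_RELAY2

# Then start it:
sudo systemctl enable --now tor@relay2
```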

1 Like

Just set up menhera1b 0D1799FDE49AB2498362D9B2D2542309F6E99E30 on 43.228.174.250:9001 (same IP).

The relay first failed reachability tests for a few hours, even though I had set everything up correctly and could telnet to the ORPort from multiple different external ISPs/VPSes. It finally passed the reachability tests on both IPv4 and IPv6, but how can this happen at all? Did the circuits/relays used in the self-tests themselves have somewhat broken reachability?

Could you provide your tor logs? And can you test whether the Tor directory authorities are reachable? To test, maybe run the OONI Probe Tor test - Install OONI Probe CLI | OONI

Now running OONI Probe tests in the background. Apparently a small number of connections are failing.

Do we need to set up a full DNS resolver even to run a middle relay, or would 1.1.1.1 suffice? If it is not Cloudflare DNS that is blocking some servers, some of our transit providers must be doing the blocking, because this server is connected to the Internet fairly simply with full BGP tables.

Or maybe just some servers tested were offline.

These are the logs from the relay menhera1b (0D1799FDE49AB2498362D9B2D2542309F6E99E30):

Sep 02 18:09:34.000 [notice] Tor 0.4.8.12 opening new log file.
Sep 02 18:09:34.370 [notice] We compiled with OpenSSL 30000020: OpenSSL 3.0.2 15 Mar 2022 and we are running with OpenSSL 30000020: 3.0.2. These two versions should be binary compatible.
Sep 02 18:09:34.371 [notice] Tor 0.4.8.12 running on Linux with Libevent 2.1.12-stable, OpenSSL 3.0.2, Zlib 1.2.11, Liblzma 5.2.5, Libzstd 1.4.8 and Glibc 2.35 as libc.
Sep 02 18:09:34.371 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://support.torproject.org/faq/staying-anonymous/
Sep 02 18:09:34.371 [notice] Read configuration file "/run/tor-instances/menhera1b.defaults".
Sep 02 18:09:34.371 [notice] Read configuration file "/etc/tor/instances/menhera1b/torrc".
Sep 02 18:09:34.372 [notice] Based on detected system memory, MaxMemInQueues is set to 2929 MB. You can override this by setting MaxMemInQueues by hand.
Sep 02 18:09:34.373 [notice] Opening OR listener on 0.0.0.0:9001
Sep 02 18:09:34.373 [notice] Opened OR listener connection (ready) on 0.0.0.0:9001
Sep 02 18:09:34.373 [notice] Opening OR listener on [::]:9001
Sep 02 18:09:34.373 [notice] Opened OR listener connection (ready) on [::]:9001
Sep 02 18:09:34.000 [notice] Parsing GEOIP IPv4 file /usr/share/tor/geoip.
Sep 02 18:09:34.000 [notice] Parsing GEOIP IPv6 file /usr/share/tor/geoip6.
Sep 02 18:09:34.000 [notice] Configured to measure statistics. Look for the *-stats files that will first be written to the data directory in 24 hours from now.
Sep 02 18:09:34.000 [notice] You are running a new relay. Thanks for helping the Tor network! If you wish to know what will happen in the upcoming weeks regarding its usage, have a look at https://blog.torproject.org/lifecycle-of-a-new-relay
Sep 02 18:09:34.000 [notice] It looks like I need to generate and sign a new medium-term signing key, because I don't have one. To do that, I need to load (or create) the permanent master identity key. If the master identity key was not moved or encrypted with a passphrase, this will be done automatically and no further action is required. Otherwise, provide the necessary data using 'tor --keygen' to do it manually.
Sep 02 18:09:34.000 [notice] Your Tor server's identity key fingerprint is 'menhera1b 0D1799FDE49AB2498362D9B2D2542309F6E99E30'
Sep 02 18:09:34.000 [notice] Your Tor server's identity key ed25519 fingerprint is 'menhera1b HTx3lLRiCF9uFRqLxv5J1U1p1izSqC98wc2+hQuabTQ'
Sep 02 18:09:34.000 [notice] Bootstrapped 0% (starting): Starting
Sep 02 18:09:34.000 [notice] Starting with guard context "default"
Sep 02 18:09:34.000 [notice] Signaled readiness to systemd
Sep 02 18:09:35.000 [notice] Opening Control listener on /run/tor-instances/menhera1b/control
Sep 02 18:09:35.000 [notice] Opened Control listener connection (ready) on /run/tor-instances/menhera1b/control
Sep 02 18:09:35.000 [notice] Unable to find IPv4 address for ORPort 9001. You might want to specify IPv6Only to it or set an explicit address or set Address.
Sep 02 18:09:35.000 [notice] Bootstrapped 5% (conn): Connecting to a relay
Sep 02 18:09:35.000 [notice] Bootstrapped 10% (conn_done): Connected to a relay
Sep 02 18:09:36.000 [notice] Bootstrapped 14% (handshake): Handshaking with a relay
Sep 02 18:09:36.000 [notice] Bootstrapped 15% (handshake_done): Handshake with a relay done
Sep 02 18:09:36.000 [notice] Bootstrapped 20% (onehop_create): Establishing an encrypted directory connection
Sep 02 18:09:37.000 [notice] Bootstrapped 25% (requesting_status): Asking for networkstatus consensus
Sep 02 18:09:37.000 [notice] Bootstrapped 30% (loading_status): Loading networkstatus consensus
Sep 02 18:09:39.000 [notice] I learned some more directory information, but not enough to build a circuit: We have no usable consensus.
Sep 02 18:09:39.000 [notice] Bootstrapped 40% (loading_keys): Loading authority key certs
Sep 02 18:09:39.000 [notice] The current consensus has no exit nodes. Tor can only build internal paths, such as paths to onion services.
Sep 02 18:09:39.000 [notice] Bootstrapped 45% (requesting_descriptors): Asking for relay descriptors
Sep 02 18:09:39.000 [notice] I learned some more directory information, but not enough to build a circuit: We need more microdescriptors: we have 0/7561, and can only build 0% of likely paths. (We have 0% of guards bw, 0% of midpoint bw, and 0% of end bw (no exits in consensus, using mid) = 0% of path bw.)
Sep 02 18:09:40.000 [notice] We'd like to launch a circuit to handle a connection, but we already have 32 general-purpose client circuits pending. Waiting until some finish.
Sep 02 18:09:41.000 [notice] Bootstrapped 50% (loading_descriptors): Loading relay descriptors
Sep 02 18:09:41.000 [notice] The current consensus contains exit nodes. Tor can build exit and internal paths.
Sep 02 18:09:44.000 [notice] Bootstrapped 55% (loading_descriptors): Loading relay descriptors
Sep 02 18:09:45.000 [notice] Bootstrapped 60% (loading_descriptors): Loading relay descriptors
Sep 02 18:09:45.000 [notice] Bootstrapped 65% (loading_descriptors): Loading relay descriptors
Sep 02 18:09:46.000 [notice] Bootstrapped 70% (loading_descriptors): Loading relay descriptors
Sep 02 18:09:46.000 [notice] Bootstrapped 75% (enough_dirinfo): Loaded enough directory info to build circuits
Sep 02 18:09:46.000 [notice] Bootstrapped 80% (ap_conn): Connecting to a relay to build circuits
Sep 02 18:09:46.000 [notice] Bootstrapped 85% (ap_conn_done): Connected to a relay to build circuits
Sep 02 18:09:47.000 [notice] Bootstrapped 89% (ap_handshake): Finishing handshake with a relay to build circuits
Sep 02 18:09:47.000 [notice] Bootstrapped 90% (ap_handshake_done): Handshake finished with a relay to build circuits
Sep 02 18:09:47.000 [notice] Bootstrapped 95% (circuit_create): Establishing a Tor circuit
Sep 02 18:09:48.000 [notice] Bootstrapped 100% (done): Done
Sep 02 18:17:50.000 [notice] New control connection opened.
Sep 02 18:21:31.000 [notice] New control connection opened.
Sep 02 18:26:02.000 [notice] New control connection opened.
Sep 02 18:28:09.000 [notice] New control connection opened.
Sep 02 18:30:36.000 [notice] External address seen and suggested by a directory authority: 43.228.174.250
Sep 02 18:49:35.000 [warn] Your server has not managed to confirm reachability for its ORPort(s) at 43.228.174.250:9001 and [2001:df3:14c0:ff00::2]:9001. Relays do not publish descriptors until their ORPort and DirPort are reachable. Please check your firewalls, ports, address, /etc/hosts file, etc.
Sep 02 18:59:46.000 [notice] New control connection opened.
Sep 02 19:09:35.000 [warn] Your server has not managed to confirm reachability for its ORPort(s) at 43.228.174.250:9001 and [2001:df3:14c0:ff00::2]:9001. Relays do not publish descriptors until their ORPort and DirPort are reachable. Please check your firewalls, ports, address, /etc/hosts file, etc.
Sep 02 19:28:39.000 [notice] New control connection opened.
Sep 02 19:29:35.000 [warn] Your server has not managed to confirm reachability for its ORPort(s) at 43.228.174.250:9001 and [2001:df3:14c0:ff00::2]:9001. Relays do not publish descriptors until their ORPort and DirPort are reachable. Please check your firewalls, ports, address, /etc/hosts file, etc.
Sep 02 19:39:31.000 [notice] New control connection opened.
Sep 02 19:49:35.000 [warn] Your server has not managed to confirm reachability for its ORPort(s) at 43.228.174.250:9001 and [2001:df3:14c0:ff00::2]:9001. Relays do not publish descriptors until their ORPort and DirPort are reachable. Please check your firewalls, ports, address, /etc/hosts file, etc.
Sep 02 20:00:08.000 [notice] Self-testing indicates your ORPort 43.228.174.250:9001 is reachable from the outside. Excellent.
Sep 02 20:04:41.000 [notice] Now checking whether IPv6 ORPort [2001:df3:14c0:ff00::2]:9001 is reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Sep 02 20:04:45.000 [notice] Self-testing indicates your ORPort [2001:df3:14c0:ff00::2]:9001 is reachable from the outside. Excellent. Publishing server descriptor.
Sep 02 20:05:03.000 [notice] Your network connection speed appears to have changed. Resetting timeout to 60000ms after 18 timeouts and 1000 buildtimes.
Sep 02 20:05:08.000 [notice] Performing bandwidth self-test...done.
Sep 02 20:17:33.000 [notice] New control connection opened.
Sep 02 22:32:36.000 [notice] No circuits are opened. Relaxed timeout for circuit 3300 (a Measuring circuit timeout 3-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway.
Sep 02 23:47:07.000 [notice] No circuits are opened. Relaxed timeout for circuit 4707 (a Measuring circuit timeout 3-hop circuit in state doing handshakes with channel state open) to 162380ms. However, it appears the circuit has timed out anyway.
Sep 03 00:00:27.000 [notice] Received reload signal (hup). Reloading config and resetting internal state.
Sep 03 00:00:27.000 [notice] Read configuration file "/run/tor-instances/menhera1b.defaults".
Sep 03 00:00:27.000 [notice] Read configuration file "/etc/tor/instances/menhera1b/torrc".
Sep 03 00:00:27.000 [notice] Tor 0.4.8.12 opening log file.
Sep 03 00:09:35.000 [notice] Heartbeat: Tor's uptime is 6:00 hours, with 7 circuits open. I've sent 194.13 MB and received 238.19 MB. I've received 1046 connections on IPv4 and 340 on IPv6. I've made 654 connections with IPv4 and 99 with IPv6.
Sep 03 00:09:35.000 [notice] While bootstrapping, fetched this many bytes: 582343 (consensus network-status fetch); 13358 (authority cert fetch); 6687244 (microdescriptor fetch)
Sep 03 00:09:35.000 [notice] While not bootstrapping, fetched this many bytes: 20714132 (server descriptor fetch); 1080 (server descriptor upload); 829595 (consensus network-status fetch); 19725 (authority cert fetch); 345328 (microdescriptor fetch)
Sep 03 00:09:35.000 [notice] Circuit handshake stats since last time: 0/0 TAP, 1294/1294 NTor.
Sep 03 00:09:35.000 [notice] Since startup we initiated 0 and received 0 v1 connections; initiated 0 and received 0 v2 connections; initiated 0 and received 0 v3 connections; initiated 0 and received 0 v4 connections; initiated 618 and received 1324 v5 connections.
Sep 03 00:09:35.000 [notice] Heartbeat: DoS mitigation since startup: 0 circuits killed with too many cells, 0 circuits rejected, 0 marked addresses, 0 marked addresses for max queue, 0 same address concurrent connections rejected, 0 connections rejected, 0 single hop clients refused, 0 INTRODUCE2 rejected.
Sep 03 01:53:35.000 [notice] No circuits are opened. Relaxed timeout for circuit 6456 (a Measuring circuit timeout 3-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway. [12 similar message(s) suppressed in last 7500 seconds]
Sep 03 01:55:53.000 [notice] Guard Liberation ($BBEAFF24A9D3406DCC95484DEB3B7DDF98A99980) is failing more circuits than usual. Most likely this means the Tor network is overloaded. Success counts are 118/170. Use counts are 0/0. 118 circuits completed, 0 were unusable, 0 collapsed, and 65 timed out. For reference, your timeout cutoff is 60 seconds.
Sep 03 06:09:35.000 [notice] Heartbeat: Tor's uptime is 12:00 hours, with 9 circuits open. I've sent 261.75 MB and received 302.29 MB. I've received 3562 connections on IPv4 and 1110 on IPv6. I've made 1258 connections with IPv4 and 243 with IPv6.
Sep 03 06:09:35.000 [notice] While bootstrapping, fetched this many bytes: 582343 (consensus network-status fetch); 13358 (authority cert fetch); 6687244 (microdescriptor fetch)
Sep 03 06:09:35.000 [notice] While not bootstrapping, fetched this many bytes: 25681944 (server descriptor fetch); 1080 (server descriptor upload); 1128513 (consensus network-status fetch); 43060 (authority cert fetch); 421776 (microdescriptor fetch)
Sep 03 06:09:35.000 [notice] Circuit handshake stats since last time: 0/0 TAP, 3607/3607 NTor.
Sep 03 06:09:35.000 [notice] Since startup we initiated 0 and received 0 v1 connections; initiated 0 and received 0 v2 connections; initiated 0 and received 0 v3 connections; initiated 0 and received 0 v4 connections; initiated 1216 and received 4561 v5 connections.
Sep 03 06:09:35.000 [notice] Heartbeat: DoS mitigation since startup: 0 circuits killed with too many cells, 0 circuits rejected, 0 marked addresses, 0 marked addresses for max queue, 0 same address concurrent connections rejected, 0 connections rejected, 0 single hop clients refused, 0 INTRODUCE2 rejected.
Sep 03 07:20:45.000 [notice] Performing bandwidth self-test...done.
Sep 03 12:09:35.000 [notice] Heartbeat: Tor's uptime is 18:00 hours, with 14 circuits open. I've sent 372.08 MB and received 409.18 MB. I've received 5940 connections on IPv4 and 1826 on IPv6. I've made 3093 connections with IPv4 and 727 with IPv6.
Sep 03 12:09:35.000 [notice] While bootstrapping, fetched this many bytes: 582343 (consensus network-status fetch); 13358 (authority cert fetch); 6687244 (microdescriptor fetch)
Sep 03 12:09:35.000 [notice] While not bootstrapping, fetched this many bytes: 30053010 (server descriptor fetch); 1080 (server descriptor upload); 1388338 (consensus network-status fetch); 62805 (authority cert fetch); 481287 (microdescriptor fetch)
Sep 03 12:09:35.000 [notice] Circuit handshake stats since last time: 0/0 TAP, 3536/3536 NTor.
Sep 03 12:09:35.000 [notice] Since startup we initiated 0 and received 0 v1 connections; initiated 0 and received 0 v2 connections; initiated 0 and received 0 v3 connections; initiated 0 and received 0 v4 connections; initiated 3354 and received 7603 v5 connections.
Sep 03 12:09:35.000 [notice] Heartbeat: DoS mitigation since startup: 0 circuits killed with too many cells, 0 circuits rejected, 0 marked addresses, 0 marked addresses for max queue, 0 same address concurrent connections rejected, 0 connections rejected, 0 single hop clients refused, 0 INTRODUCE2 rejected.
Sep 03 13:03:44.000 [notice] No circuits are opened. Relaxed timeout for circuit 15968 (a Measuring circuit timeout 3-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway. [16 similar message(s) suppressed in last 40140 seconds]
Sep 03 14:43:58.000 [notice] No circuits are opened. Relaxed timeout for circuit 17277 (a Measuring circuit timeout 3-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway.
Sep 03 15:18:49.000 [notice] New control connection opened.
Sep 03 16:04:03.000 [notice] New control connection opened.
Sep 03 17:14:50.000 [notice] No circuits are opened. Relaxed timeout for circuit 19242 (a Measuring circuit timeout 3-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway.
Sep 03 18:09:35.000 [notice] Heartbeat: Tor's uptime is 1 day 0:00 hours, with 11 circuits open. I've sent 529.55 MB and received 563.99 MB. I've received 8283 connections on IPv4 and 2559 on IPv6. I've made 5105 connections with IPv4 and 1308 with IPv6.
Sep 03 18:09:35.000 [notice] While bootstrapping, fetched this many bytes: 582343 (consensus network-status fetch); 13358 (authority cert fetch); 6687244 (microdescriptor fetch)
Sep 03 18:09:35.000 [notice] While not bootstrapping, fetched this many bytes: 34494629 (server descriptor fetch); 1620 (server descriptor upload); 1720902 (consensus network-status fetch); 82514 (authority cert fetch); 683869 (microdescriptor fetch)
Sep 03 18:09:35.000 [notice] Circuit handshake stats since last time: 0/0 TAP, 3565/3565 NTor.
Sep 03 18:09:35.000 [notice] Since startup we initiated 0 and received 0 v1 connections; initiated 0 and received 0 v2 connections; initiated 0 and received 0 v3 connections; initiated 0 and received 0 v4 connections; initiated 5765 and received 10626 v5 connections.
Sep 03 18:09:35.000 [notice] Heartbeat: DoS mitigation since startup: 0 circuits killed with too many cells, 0 circuits rejected, 0 marked addresses, 0 marked addresses for max queue, 0 same address concurrent connections rejected, 0 connections rejected, 0 single hop clients refused, 0 INTRODUCE2 rejected.
Sep 03 18:23:33.000 [notice] No circuits are opened. Relaxed timeout for circuit 20143 (a Measuring circuit timeout 3-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway.
Sep 03 19:18:49.000 [notice] Performing bandwidth self-test...done.
Sep 03 20:13:42.000 [notice] Our directory information is no longer up-to-date enough to build circuits: We're missing descriptors for 1/3 of our primary entry guards (total microdescriptors: 7700/7720). That's ok. We will try to fetch missing descriptors soon.
Sep 03 20:13:42.000 [notice] I learned some more directory information, but not enough to build a circuit: We're missing descriptors for 1/3 of our primary entry guards (total microdescriptors: 7700/7720). That's ok. We will try to fetch missing descriptors soon.
Sep 03 20:13:43.000 [notice] We now have enough directory information to build circuits.

The OONI result is: Tor censorship test result in Japan

Fetching https://proof.ovh.net/files/100Mb.dat through Tor, using this newer relay of ours as the ‘bridge’ from the same network, the throughput is around 2000 kB/s, consistently over multiple runs.

Our two relays, menhera1 and menhera1b, are on the same virtual machine (43.228.174.250 / 2001:df3:14c0:ff00::2), running on ORPort 443 and 9001 respectively. The virtual machine has 4 vCPUs and 4 GB of memory on a 12th-gen Intel Core i5 host (physically 12 cores and 64 GB of memory). The physical host for the VM is not overloaded most of the time.

Just testing: doing concurrent downloads via our relays as bridges, I got the following results:

Test 1 – single relay test

Setup: four Tor clients, running with SOCKSPort 19051, 19052, 19053, 19054. This machine is in the same facility as AS63806, with a 10 Gbps physical connection to the server running the two relays menhera1 and menhera1b. These clients use menhera1b (43.228.174.250:9001) as their bridge (it is not actually a bridge, but setting it as the clients’ only bridge forces it to be the first hop).
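The “force it as the only bridge” trick above can be sketched as a client torrc (a sketch; the SOCKSPort is one of the ports listed above, and the address/fingerprint are menhera1b’s from this thread):

```
# Client torrc sketch: make one specific relay the first hop
UseBridges 1
Bridge 43.228.174.250:9001 0D1799FDE49AB2498362D9B2D2542309F6E99E30
SOCKSPort 19051
```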

Test: run the following script on the host running the Tor clients (dash or bash as /bin/sh; either should work):

dash -c '
URL=https://proof.ovh.net/files/100Mb.dat
SOCKSPORTS="19051 19052 19053 19054"
exec </dev/null
for n in 1 2 3; do
  for port in $SOCKSPORTS; do
    curl -s -w "%{speed_download}\n" \
      -x "socks5h://speedtest:try${n}@127.0.0.1:${port}" \
      "$URL" -o /dev/null &
  done
done
wait
'

Results (bytes/s for each connection):

1034191
1009326
957451
940153
899960
878835
842613
815977
785147
784935
781103
558126

Nyx showed that the relay was handling more than 10MBytes/s during the test.
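Summing the per-connection rates above gives the aggregate the relay was pushing (a quick arithmetic check, consistent with the Nyx observation):

```python
# Per-connection download rates from Test 1, in bytes/s (from the post).
rates = [1034191, 1009326, 957451, 940153, 899960, 878835,
         842613, 815977, 785147, 784935, 781103, 558126]

total = sum(rates)
print(f"{total / 1e6:.1f} MB/s aggregate")  # -> 10.3 MB/s aggregate
```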

Test 2 – multi-relay test with both relays

Setup: an additional four Tor clients on ports 19061-19064, with menhera1 (43.228.174.250:443) as the bridge.

Test:

dash -c '
URL=https://proof.ovh.net/files/100Mb.dat
SOCKSPORTS="19051 19052 19053 19054 19061 19062 19063 19064"
exec </dev/null
for n in 1 2 3; do
  for port in $SOCKSPORTS; do
    curl -s -w "%{speed_download}\n" \
      -x "socks5h://speedtest:try${n}@127.0.0.1:${port}" \
      "$URL" -o /dev/null &
  done
done
wait
'

Results (one failed request):

0
1135920
1042000
978204
974830
948715
920839
873585
856092
853557
849736
840378
825818
822594
763352
759988
746641
722571
688763
640665
560687
539946
473451
457160

The two relays were both sustaining 8-11 MB/s during the test, indicating that the hardware and the network can handle the full specified BandwidthRate of 12 MB/s.
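Summing the Test 2 rates the same way shows the aggregate split across the two relays (consistent with the 8-11 MB/s each observed above):

```python
# Per-connection rates from Test 2, in bytes/s (from the post; one request failed).
rates = [0, 1135920, 1042000, 978204, 974830, 948715, 920839, 873585,
         856092, 853557, 849736, 840378, 825818, 822594, 763352, 759988,
         746641, 722571, 688763, 640665, 560687, 539946, 473451, 457160]

total = sum(rates)
print(f"{total / 1e6:.1f} MB/s aggregate, ~{total / 2e6:.1f} MB/s per relay")
# -> 18.3 MB/s aggregate, ~9.1 MB/s per relay
```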

EDIT: I’ve found each of the two relays can go up to 20 MB/s, but I’ll stop testing there, because it would DoS the Tor network for no good reason. It might also look suspicious to transit providers.

Yuka MORI via Tor Project Forum:

The two relays were both sustaining 8-11MBytes/s during the test, indicating the hardware and the network can handle as much as the specified BandwidthRate of 12MB.

That’s visible on relay-search now, where you can hover over the Advertised Bandwidth section of your relays. That’s one component of your relays’ consensus weight. The other part comes from measurements done by our bandwidth authorities. For that to happen properly, your (good) observed bandwidth has to trickle down and get used by them. This takes longer than it should due to a bug in our measurement tool.

Additionally, our bandwidth measurement system is still location-dependent, alas. It’s not clear why this continues to happen after we deployed congestion control on the network, but we need to solve that issue first, so that relays in e.g. Asia stop being punished with less consensus weight for having long(er) round-trip times to the bandwidth measurement systems.

For what it’s worth, you can already see on https://consensus-health.torproject.org/ how the new observed bandwidth impacts the weight (by entering your relay’s fingerprint into the field at the end of that page): longclaw currently shows bw=18000 for menhera1b, while the other directory authorities, which still need to pick up the observed bandwidth change, show bw=1.

2 Likes

Thanks for the details.

One thing I am wondering about: my relay shows larger bandwidth just after testing or DoS attacks, but the numbers usually drop back down if the consensus weight does not catch up quickly, so the relay goes underutilized for some time, even though our network is not overloaded at all and nothing about the infrastructure has changed.

Yuka MORI via Tor Project Forum:

…my relay has larger bandwidth just after testing or DoS attacks, but the numbers usually drop down soon if the consensus weight does not catch up quickly…

Yeah, this can happen. There is roughly a 5-day window in which some catch-up has to happen. That’s the best we have right now, alas.

The longclaw measurement for https://metrics.torproject.org/rs.html#details/0D1799FDE49AB2498362D9B2D2542309F6E99E30 is pretty encouraging, and we might have some improvements along that line ready for deployment later this year. If those hold water as we hope and expect, we should see some re-weighting of relays, which could benefit relays in e.g. Asia as well. Stay tuned, and thanks for running relays!

2 Likes

Yeah, it would be nice if more hosting providers accepted XMR. It’s great for running exits, since nothing ties back to you.

1 Like

This goes both ways, too.

The majority of relays being in NA/EU means that using Tor from Asia almost always requires cross-ocean traffic, which makes it painfully slow.

These graphs show that German users get ~40-100 Mbps of bandwidth, while Hong Kong users only get ~20-30 Mbps.

I think the challenge is that the people most motivated to set up relays are often heavy Tor users, but the slow speeds in Asia make it difficult for them to justify the effort, since they’re not even using the network themselves due to its poor performance.

1 Like