I’ve been supporting the Tor network for a while now by running a few non-exit relays and bridges. Recently I’ve wanted to do more, so I’ve done some research and set up a few relays on cheap-ish ‘unmetered’ VPSes.
I’ve made a few observations while doing this: 1 vCPU can be a limiting factor, 1 GB of memory is a little too low once a relay starts ramping up, and, importantly, even on 1 Gbps (and sometimes 10 Gbps) links I’ve never seen over 31 MB/s of observed bandwidth; I usually only see between 5 and 20 Mbps.
I’d like to set up more relays, perhaps even my first exit, but I’m unsure how best to proceed to do the most good for the network without wasting money on extra resources that won’t be used.
My main question boils down to:
- Is one dedicated server with truly unmetered bandwidth (and much better specs) better than, say, 4-5 cheap VPSes with ‘unmetered’ bandwidth and shared resources?
I understand that from a resilience point of view, running a few ‘lesser’ relays distributed across different countries is better than running a single higher-powered one. But realistically, what would do the most good for the network?
Additionally, would setting up one higher-powered exit relay be more beneficial than a few more guard/middle relays?
I’d appreciate any other advice on this subject as well, from those who’ve been through a similar journey.
No (if you ask me xd).
You’ll be limited by the IPs assigned to the VM; you can run two nodes per IP. CPU and IOPS are critical. RAM is meh, but go for 2 GB in general. I only run servers with 1 dedicated vCPU, but the core itself is high performance. Tor is bad at multithreading anyway.
Cost-wise it’s so much better to go for small VPSes. I run 5 100 Mbps VPSes at around 10 MB/s sustained for only 1.60 EUR per month per VPS haha. Try beating 50 MB/s sustained for 8 EUR per month lol.
I have nodes peaking at 35 MB/s and only pay 10 EUR: 1 core (boosting to 3.5 GHz), 2 GB RAM, and 30 GB NVMe.
I’m very happy with Aeza, you should check them out: anonymity, good support, good hardware/software, and a solid price-to-bandwidth ratio. I have other hosters in use as well, but this is by far my best so far.
You should do the calculations yourself, but it never made sense to me personally. I also don’t want to deal with the overhead of managing multiple IPs and multiple tor processes all on a single machine. Big SPOF xd.
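In that spirit, here’s a rough sketch of the calculation using the figures quoted in this thread, treating sustained MB/s per euro as the only metric (which ignores resilience, IP diversity, and exit policy entirely):

```python
# Rough cost-efficiency comparison using the figures quoted in this thread.
# Value metric: sustained MB/s per EUR/month -- higher is better.
options = {
    "5x small 100 Mbps VPS (10 MB/s each)": (5 * 10, 5 * 1.60),
    "1x bigger VPS (35 MB/s peak)": (35, 10.00),
}

for name, (mb_per_s, eur_per_month) in options.items():
    print(f"{name}: {mb_per_s / eur_per_month:.2f} MB/s per EUR")
```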
It’s even 4 nodes per IP now. Don’t forget to create a family for those.
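For reference, declaring a family means listing all of your relays’ fingerprints in every relay’s torrc. A minimal sketch (the `$AAAA…`/`$BBBB…` values are placeholders, not real fingerprints):

```
# torrc on EVERY relay you operate -- list ALL of your relays' fingerprints.
# The fingerprints below are placeholders; use your own relays' fingerprints.
MyFamily $AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA,$BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
```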
Oh sick, wasn’t aware of this. Thanks for correcting me.
Thanks for the replies!
@cozybeardev, wow, that’s really good value! At the moment I’m running at around 100 MB/s combined across my relays for ~£60 a month. I have been experimenting with different providers, countries, and system specs, so I should be able to improve on that in the future.
I got some warnings on one of my VPSes when I hit over 35 MB/s; it only had 1 vCPU, which was relatively low performance (and pinned at 100%). I upped it to 2 vCPUs and it seemed better, but then the speed dropped. I’m now experimenting with a more performant 1 vCPU VPS.
I’d never heard of Aeza, they sound worth it! Although, do they have a non-Russian website? I couldn’t find one.
True, I ran two relays on one host for a while to experiment, but the limited shared uplink seemed to be the major bottleneck (on a ~£10 VPS).
Do you have any thoughts on CPU types, e.g. Xeon vs. EPYC vs. Ryzen vs. …? I started on cheaper VPSes with Xeons, but have started to try out the others and the difference is noticeable.
Lastly, do you run any exits, and if so, do you use higher-specced VPSes for them?
I run exits primarily; they require more RAM. I always focus on single-core performance. My whole fleet is AMD, though the type differs: mostly Ryzen, but some Threadripper.
Aeza.com has an English translation as well.
You can search for cozybear on Relay Search and find my nodes. Around 25 right now. 1 Gbit minimum sustained (as in actually used), both up and down, for the whole fleet. Peak is around 1.5-1.7 Gbit per line.
Ah fair enough, thanks, that makes sense
In the theme of ‘doing the most good’, I take it exit relays are preferred rather than guards/middles?
And as an aside, do you have a good way of monitoring your relays? I have ~10 right now, and using nyx, htop, and logging in to various different hosting providers is becoming a pain.
Just to note, I believe it’s now up to 8, per the tor-relays announcement: “8 relays per IP address are allowed from now (end of June 2023) on”.
Exit relays are preferred, primarily because it’s harder: you take the heat for all the garbage traffic that exits through you, which is why most hosters don’t want them either. But don’t let that stop you, any relay is welcome. Choose whatever risk you’re willing to take.
I use Ansible and hardware-based SSH access to manage all servers. I never do anything manually; that would be impossible at this scale. I automatically install the software you mention, but for monitoring I manage it all with netdata.
This is how I manage it myself: GitHub - cozybear-dev/ansible-tor: Automated stateless Tor relay management - [0-XX].tor.shadowbrokers.eu. I simply buy servers and set them up using Ansible.
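As an aside on the monitoring question: if logging into every box gets tedious, the public Onionoo API from Tor Metrics exposes per-relay status, so a quick fleet check can be scripted. A minimal sketch (the search string “cozybear” is just the nickname mentioned above; substitute your own relay nicknames or fingerprints):

```python
# Sketch: summarize relay status via the Tor Metrics Onionoo API
# (https://onionoo.torproject.org). Not a replacement for real monitoring
# like netdata -- just a quick "are my relays up" check.
import json
import urllib.request

ONIONOO = "https://onionoo.torproject.org/details?search="

def summarize(details: dict) -> list[str]:
    """Turn an Onionoo 'details' document into one status line per relay."""
    lines = []
    for r in details.get("relays", []):
        state = "up" if r.get("running") else "DOWN"
        # advertised_bandwidth is in bytes/s; convert to MB/s
        mbps = r.get("advertised_bandwidth", 0) / 1e6
        lines.append(f"{r.get('nickname')}: {state}, {mbps:.1f} MB/s advertised")
    return lines

def fetch(query: str) -> dict:
    with urllib.request.urlopen(ONIONOO + query) as resp:
        return json.load(resp)

if __name__ == "__main__":
    for line in summarize(fetch("cozybear")):
        print(line)
```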
Thanks for the link! I bought a VPS on Aeza and I see they don’t offer IPv6. How did you set it up for your relays?
Every VPS has IPv6; sometimes it doesn’t work, as some locations are WIP, but they fix it rather quickly. What led you to the conclusion that they don’t support it?
Hm. I bought a VPS in the USA and there’s no option for it under Networking. I asked their support and their answer was:
You can add them in the network interface settings: nano /etc/network/interfaces
Unfortunately, we can’t give you more detailed information.
Which led me to believe I need to supply an IPv6 tunnel myself.
Normally IPv6 is added by default on the interface; there’s no need to add it yourself. I never had to in most locations. You can check whether routing works with curl -6 ifconfig.co. If it doesn’t work, verify your kernel has IPv6 enabled, and enable it and reboot if not. Sometimes there were actual routing issues, it does happen, but support was always quick to resolve them.
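To make the “verify your kernel has IPv6 enabled” step concrete, here’s a small sketch that reads the standard Linux sysctl path (assumes a Linux VPS; the routing check is left as a comment since it needs working networking):

```python
# Check whether the Linux kernel has IPv6 enabled by reading the sysctl
# exposed under /proc. A value of "1" means IPv6 is disabled.
from pathlib import Path

def ipv6_kernel_state(proc_root: str = "/proc") -> str:
    p = Path(proc_root) / "sys/net/ipv6/conf/all/disable_ipv6"
    if not p.exists():
        return "not available"  # kernel built without IPv6 (or not Linux)
    return "disabled" if p.read_text().strip() == "1" else "enabled"

print("kernel IPv6:", ipv6_kernel_state())
# If disabled: `sysctl -w net.ipv6.conf.all.disable_ipv6=0`, persist it in
# /etc/sysctl.conf, reboot, then verify routing with `curl -6 ifconfig.co`.
```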