For a long time I've been wanting to contribute to the anti-censorship measurements of the Tor Project, and since I've recently set up my home server, I decided to set up a Snowflake standalone proxy in the same go. So far I've only found this guide on how to set up the standalone server, but it's very brief and I did not manage to fulfill all three of its recommendations :/.
Thus I wanted to share my setup here and ask you more experienced folks for comments, recommendations, or possible improvements. Any help is welcome! There are also a few areas where I'm a bit unsure; I've got some explicit questions there and would be happy about answers :).
The setup
General
I am hosting this on my personal tower server hardware which is physically located in my home. I have a 5G internet connection and a public IP address (luckily no carrier-grade NAT).
This post mentions that it is recommended to have the port range about 2.5x as wide as the number of clients, so I have decided to use the port range 61000-65000 for the Snowflake proxy and limit the number of clients to 1600. Now I've got two questions on this:
Is the recommendation to have ~2.5x as many ports as allowed clients actually good?
I’ve decided to use ports outside of the ephemeral range (32768-60999) to avoid interference with my NAT. Is this fine or might this impair the anti-censorship functionality in any way?
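For what it's worth, the ratio I ended up with can be sanity-checked in a couple of lines (range and capacity taken from above; the 2.5x figure is just the recommendation from that post):

```shell
# Sanity-check the port budget for the chosen range and capacity.
PORTS=$((65000 - 61000 + 1))   # 4001 ports in 61000-65000
CAPACITY=1600
# The proxy reportedly uses ~2 ports per connection, so this should be > 2.
awk -v p="$PORTS" -v c="$CAPACITY" 'BEGIN { printf "ports per client: %.2f\n", p / c }'
```

Which lands right at the recommended ~2.5 ports per client.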
Router
On my OpenWrt router I've added a rule to forward ports 61000-65000 to the Snowflake server. I've also tried to get the NAT from restricted-cone (the default on OpenWrt, checked with turnutils_natdiscovery -f -m stun.voipgate.com) to full-cone, but struggled to accomplish this on OpenWrt (short of loading an extra kernel module and writing custom firewall rules, which sounds like a pain to maintain). So I did not manage to accomplish point two of those recommendations. Now I've got two more questions on this:
How much of an issue is the restricted-cone NAT? What are the implications?
Is there an (easy) way to configure a full-cone or even unrestricted NAT on OpenWrt?
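For reference, the port-forward itself is plain UCI; mine looks roughly like this in /etc/config/firewall (a sketch: the rule name and dest_ip are placeholders, not my actual values):

```
config redirect
	option name 'snowflake'
	option src 'wan'
	option dest 'lan'
	option proto 'udp'
	option src_dport '61000-65000'
	option dest_ip '192.168.1.10'
	option dest_port '61000-65000'
	option target 'DNAT'
```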
Server
I am running the snowflake-proxy binary (installed with apk add snowflake) in an Alpine Linux container with an (automatically started) OpenRC service and the following arguments: -capacity 1600 -ephemeral-ports-range 61000:65000 -metrics -metrics-address 0.0.0.0 -unsafe-logging. Any comments on this?
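For completeness, the OpenRC side is just that argument string in one place. Assuming the Alpine package's init script reads extra flags from a conf.d file (the variable name below is a guess, so check the init script the package actually ships), it would look something like:

```shell
# /etc/conf.d/snowflake -- hypothetical variable name; verify against
# the init script installed by the Alpine snowflake package
SNOWFLAKE_OPTS="-capacity 1600 -ephemeral-ports-range 61000:65000 -metrics -metrics-address 0.0.0.0 -unsafe-logging"
```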
I am especially concerned about any mistakes which make the node reachable but prevent the censorship circumvention from functioning properly. This would hurt the network, as clients would try to use me but fail, which I absolutely want to avoid.
My setup:
A Snowflake standalone proxy compiled with Go using their instructions, running natively on Ubuntu 22.04.4 LTS behind a home router. This machine's primary purpose is to run work units (jobs) using the Berkeley software BOINC. It's 100% CPU-bound, running 24/7 on 32 cores. Hardly any network traffic, so it's a good marriage with Snowflake.
I use the following command and capture the logs, which I analyze:
proxy -nat-retest-interval 0 -ephemeral-ports-range "65497:65534" -capacity 10 -unsafe-logging -verbose
Yes 2.5x is a good number because the proxy actually uses 2 ports per connection. I actually use more just because.
I wonder about "-metrics -metrics-address 0.0.0.0". Why "0.0.0.0"? The default is localhost on port 9999, and you need the geoip data in /usr/share/tor/ from either the Tor Browser, the Tor Expert Bundle, or a Tor relay install.
Not familiar with restricted-cone or not. Mine runs as unrestricted simply because I opened those ephemeral ports on the router and forwarded them to the machine.
2025/12/06 22:33:21 Test WebRTC connection with NAT check probe server established! This means our NAT is unrestricted!
2025/12/06 22:33:21 NAT Type measurement: unknown → unrestricted
2025/12/06 22:33:21 NAT type: unrestricted
What does your log say? grep the log for the word restricted
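To make the grep concrete, here's what it looks like against lines of the kind shown above (the sample file stands in for your real log, whose path depends on how you run the proxy):

```shell
# Demo on sample log lines; in practice point grep at your real proxy log.
cat > /tmp/snowflake-sample.log <<'EOF'
2025/12/06 22:33:21 NAT Type measurement: unknown -> unrestricted
2025/12/06 22:33:21 NAT type: unrestricted
EOF
# Last NAT verdict the proxy logged:
grep -i 'restricted' /tmp/snowflake-sample.log | tail -n 1
```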
1600 connections. WOW! Notice I only allow -capacity 10, but then my connection is not 5G but 10/30 Mbps. I plan to up that -capacity at some point.
You set the capacity to 1600, but you never get to see how many users are actually connected. That would be on my wishlist, along with a way to up the capacity without stopping the proxy and being able to start and stop -metrics on the fly.
A quick and dirty way to see the number of users is in my tip:
I chose 0.0.0.0 to make the metrics page visible on my internal network so that I can view them on http:///internal/metrics. Is this a (security) issue? And yeah, I've also installed the tor bundle for the geoip data after snowflake complained about not finding the geoip databases.
Yeah, I have the same setup where I forward these ports from my router. But the guide I mentioned lists both full-cone NAT and port forwarding. At least that's how I read it, but yeah, I also think needing both is a bit odd. I can't really make sense of the reason for it, tbh; maybe I'm just misinterpreting this?
NAT type: unrestricted
I mean, you can kinda estimate that from the log, because it says how many successful connections there were in the last hour. This number is almost always around 20 for me, so yeah, 1600 is probably a bit overkill. But this is just a maximum anyway, and I still have more than enough free ports available on my router, so I'll just leave it like that.
Ah! So it acts like "any", as in netstat's output. I was thinking of 0.0.0.0 as in the hosts file, where it acts like a blackhole.
I would not put much faith in that number. It only says how many connected in the last hour, not how many from the previous hours are still connected. They could be there for hours, even days. When I analyzed my logs for session durations, I found sessions open for 3 days.
Here's a line from mine which should be impossible, since I only allow 10. Yes, 7 could have disconnected, but that's not the case. 2026/01/14 04:33:19 In the last 1h0m0s, there were 17 completed successful connections
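If you want to track that hourly number over time, the count can be pulled out of lines with that exact wording (sample line from above; pipe your real log in instead):

```shell
# Sample log line in the "completed successful connections" format.
LINE='2026/01/14 04:33:19 In the last 1h0m0s, there were 17 completed successful connections'
# Extract the connection count that precedes the word "completed".
echo "$LINE" | grep -o '[0-9]\+ completed' | awk '{print $1}'
```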
You have 1600, so it makes little difference for you, but with only 10 I don't want anyone monopolizing ports and preventing others from using the proxy. I can't believe someone can have a session open for 10 hours and actually be using it. I monitor it a bit, and after 7 hours I knock them out. Yes, it's not my business what users do or where they go, but it's my bandwidth and my resources, so it's my rules… and I kill plenty.
In that guide you quote, I give a quick and dirty tip to get a count. I've refined it a bit.
This gives a count:
me@proxy:~$ netstat -t4u4wanp | grep -i 'proxy' | grep -i -E -c '141.212.118.18|193.187.88.42'
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
4
me@proxy:~$
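Side note: on systems without net-tools, roughly the same count should be obtainable with ss (same broker/probe IPs as in the netstat line above; I haven't verified the match behavior is byte-for-byte identical):

```shell
# Count proxy connections to the Snowflake broker/probe IPs via ss.
# grep -c exits non-zero when the count is 0, hence the trailing "|| true".
ss -tuanp 2>/dev/null | grep -c -E '141.212.118.18|193.187.88.42' || true
```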
And the output of my own script, which corresponds:
2026/01/14 17:23:56 42619906fa4abea9-824708273586 open n.n.n.n
2026/01/14 19:03:08 6edd0dc0d6358dfb-824768678998 open n.n.n.n
2026/01/14 19:03:51 5d5953b203b44de9-824762437098 open n.n.n.n
2026/01/14 23:34:54 7050c1dca82a2f87-824773550838 open n.n.n.n
2026/01/14 23:37:21 4 connections
That guy from 17:23:56 is close to being knocked out.
I just changed my startup today to: proxy -ephemeral-ports-range "65497:65534" -capacity 11 -unsafe-logging -verbose
Thanks for sharing your setup! Always nice to see Snowflake being tested on lighter systems like Alpine Linux, especially behind an OpenWrt router. From experience, this kind of setup usually works, but the tricky part is almost always routing rather than Snowflake itself.
If Snowflake is starting up and connecting to the broker, that’s already a good sign. When it doesn’t seem to relay much traffic, it’s often due to NAT, firewall rules, or traffic getting pushed through a VPN interface instead of the WAN. On OpenWrt, making sure Snowflake traffic is routed directly out via the WAN (and not policy-routed elsewhere) can make a real difference.
Also worth keeping in mind that Snowflake proxies don't always get constant traffic. The network pairs clients dynamically, so even a correctly configured proxy might sit idle for long periods. That's normal and doesn't mean it's broken; it's similar to running a low-traffic relay or a niche exit, like a Falkland Islands proxy, where usage depends heavily on demand.
This is good to know. Even though I have -capacity set to 11, I can't seem to get more than 7, just as in the days when I ran restricted (the logs now say unrestricted). So I started looking to see if something was wrong.
It fluctuates a lot, but it usually is around ~10. For this reason I’ve also decided to use -capacity 512 instead. I’ve also changed the ephemeral port range to 58500:59999 to move it into the usual Linux ephemeral range.
That’s good to hear :). Regarding the routing that should be perfectly fine, I do not even have a VPN interface on the Router.
@BobbyB — appreciate the reassurance about how the Snowflake network pairs clients dynamically. Good to know that long idle periods don’t necessarily mean a misconfiguration, and that getting consistent utilization can vary quite a lot even with a higher -capacity setting like 11 — it’s expected behavior for Snowflake proxies. I’ll keep an eye on the logs and traffic over time rather than expecting a steady client count.
@nullptr — good point about the fluctuations in numbers you see — ~10 seems pretty reasonable from what others have shared as well. Your adjustment to -capacity 512 and moving the ephemeral range into the standard Linux ephemeral space makes sense, and I’ll consider similar tweaks if needed over time.
Right now everything should be reachable (no VPN interface on the router, and the forwarding rules are in place), so it’s mainly about letting it run and seeing how it behaves over a longer period. If I spot any consistently low pairing or odd NAT behavior despite unrestricted logs, I’ll report back with log snippets — maybe someone else will spot a clue. Thanks again both!
In the README.md, count.sh also warns about going trigger-happy: don't overdo it or your IP may get blacklisted by ip-api. But you want to know this info more than once or twice a day, so maybe the ip-api part could be optional. Knowing the country in real time is not that useful; I am more interested in "How am I doing?". I also want to know the count, because after a long period of time I knock out the session.
@BobbyB Yes, the script is not made for your situation of kicking people out of their connection. And personally I do not like your approach, because there might be circumstances you do not know. An individual choosing to use snowflake to access the internet will have reasons and nobody should interfere with this decision.
For me the script is nice when I see traffic spikes and want to know which country causes them. Atm there is no traffic from Iran on Snowflake or WebTunnels, for obvious reasons…
It also helps to detect if one country is missing, so I can change the IP/location of the Snowflake.
I do not want to monitor individuals, I want to see aggregated information, this is why I modified the script so it does not expose the full IPs of the individuals to the API.
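The change itself is tiny; the idea is just to zero the host part before the lookup, something like this (the helper name is hypothetical, and ip-api still resolves the country for most /24s):

```shell
# Zero the last octet so the geo API never sees the full client IP.
anon_ip() {
	echo "$1" | awk -F. '{ print $1 "." $2 "." $3 ".0" }'
}
anon_ip 203.0.113.57   # -> 203.0.113.0
```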
I couldn't have said it better myself. While some users need a connection and are waiting for one, others block the available resources by hanging on to their connection, possibly keeping it alive by pinging pointlessly.
My Snowflake proxy shortens the time after which an idle connection is disconnected, depending on the duration of a client connection.
Maybe. My red line is about 7+ hours. I just can't believe that a person is doing something at a computer for 7+ hours. I believe the session is artificially being held open "just in case". It's like holding the elevator door open with a chair so I don't have to wait in line when I want to use it. I call this behavior abusive. Yesterday one session was 11+ hours. It was overnight. I have to sleep sometime.