Snowflake on OpenWRT as standalone proxy

My router is powerful enough to run a Snowflake proxy.

The router itself already runs 24/7 anyway, and it’s low-powered compared to a dedicated machine.

As far as I know (looking here for info), the setup should be:

opening the ports in the firewall (/etc/config/firewall), e.g.:

config rule
	option name 'Snowflake'
	option src 'wan'
	option dest_port 'min-max'
	option target 'ACCEPT'
	list proto 'udp'

and starting Snowflake with:

procd_set_param command "$PROG" -capacity 5 -ephemeral-ports-range min:max

in an init.d script should do the trick, right?
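
For reference, a minimal procd init script along those lines might look like this (a sketch, not a tested service file: the binary path /usr/bin/snowflake-proxy and the port range are assumptions):

```shell
#!/bin/sh /etc/rc.common
# Minimal procd service sketch for a standalone Snowflake proxy.
# /usr/bin/snowflake-proxy and the port range are assumptions.

START=99
USE_PROCD=1
PROG=/usr/bin/snowflake-proxy

start_service() {
	procd_open_instance
	procd_set_param command "$PROG" \
		-capacity 5 \
		-ephemeral-ports-range 65525:65534
	procd_set_param respawn
	procd_close_instance
}
```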

(Maybe adding -verbose for a few hours to see some details.)

Is there a good default for the min:max port range?
(Other than the advice that it should be 2x as wide as the number of clients.)

What also bothers me is how to keep Snowflake traffic away from any VPN that may be deployed on the router itself.

The Snowflake traffic should go out directly via the ISP; routing it through a VPN has no point and is unlikely to be beneficial (potentially many connections, plus the latency a VPN adds).

How to do that?
Does anyone have config tips for OpenWRT that could facilitate such a setup?

Thank you in advance.

2 Likes

I use 3.5x. It is now really 4x because I went from -capacity 12 to -capacity 9 and never changed the range. In a previous post someone mentioned 2.5x just to be safe, I guess in case tearing down a used port is not yet complete while those exact 2 ports are being re-used for a new connection.

I started at 65534 and went backwards; I avoided the last port 65535 just because. So for you, 2x would be 65525-65534.

I use -ephemeral-ports-range "65497:65534" -capacity 9 -unsafe-logging -verbose and I keep stats.

> What also bothers me is how to keep Snowflake traffic away from any VPN that may be deployed on the router itself.
I don’t understand this. Do you mean someone compromising your system? Hmm, probably you don’t want to have to use a VPN on your router.

With -verbose and -unsafe-logging you will see exactly which IP is connecting and all kinds of other stuff.

I have no tips for OpenWRT since I only tried it once many years ago.

Good luck

1 Like

I don’t think it matters much. I’d just pick a range above 1024; maybe it doesn’t even have to be in the ephemeral port range (above 32768).

1 Like

(Links for the FAQ were redacted in the text below because I’m not allowed to make a post with more than two.)

Well, I dug in a bit and there is a package named pbr (policy-based routing).

It allows me to specify quite easily, per LAN device, whether it goes through the VPN or bypasses it via the WAN (the easiest settings are available in the LuCI web interface: IP-, MAC-, port-, or protocol-based; so IP-based it is).

(FAQ: [basically IPv4 only], so I switched my network into this mode.)
Source: https://docs.openwrt.melmac.net/pbr/#AWordAboutIPv6Routing

However, with Snowflake and a ‘route all traffic’ VPN on the router, it’s really tricky, because of the [default routing]: https://docs.openwrt.melmac.net/pbr/#AWordAboutDefaultRouting

ip route get 1.1.1.1 will naturally return the WireGuard VPN interface, and because its function is to route essentially all traffic through the tunnel, that can’t easily be avoided.

I initially created a rule with only the ‘ephemeral ports’ range, but that covers only part of the communication.
There is a lot of communication with the broker, so it would likely split the traffic: broker via VPN, clients via WAN(?). Not sure whether this is fine by design (assuming snowflake reports its environment to the broker, it should be good).
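
For reference, that attempt roughly corresponds to a policy like this in /etc/config/pbr (a sketch only: the range, policy name, and interface name are assumptions, and the option names should be checked against your pbr version):

```
config policy
	option name 'Snowflake_WAN'
	option interface 'wan'
	option chain 'output'
	option proto 'udp'
	option src_port '65497-65534'
```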

So I ended up with the port field empty, meaning all ports.

I’m also not sure which IP/MAC to pick for the PBR rule; the WireGuard interface does not have a MAC.

The packets originating from processes on the router do come from the LAN IP (br-lan), but a lot still goes through WG and not the WAN.

I have it set as chain ‘output’ and protocol ‘all’.

A possible solution could be to unset the WireGuard tunnel as the default route, but then it would no longer serve its purpose (in my case, catching all unspecified devices and sending them into the VPN).

https://docs.openwrt.melmac.net/pbr/#WireGuardtunnel

I checked this with:

ip route show > the basic routing table of the currently running OpenWRT.
wg show all (or wg showconf WG) > just to see the WireGuard status.

tcpdump -i WG -n -vv -s 0 -c 20 tcp port 443 > to catch a few connections (my WG endpoint host port is 443, not the default).

lsof -i@someipfromdump > this shows the snowflake process and its PID.

In my case all the IPs belong to Snowflake (checked via DNS PTR and A records).
(Because I set all LAN clients to go out via WAN, so that really only the router’s own traffic remains.)

lsof -p PID | grep IPv4 > prints all the network connections the snowflake process has.

From logread -e "snowflake" one can sum up the transfer totals Snowflake facilitated, as it prints them to the log each hour.
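
As a sketch, those hourly totals can be summed with awk; this assumes log lines shaped like “… Traffic Relayed ↑ 10 KB, ↓ 500 KB.” (the exact wording varies between snowflake versions, so adjust the field positions, and note it naively ignores mixed KB/MB units):

```shell
# Sum the hourly "Traffic Relayed" lines read from stdin.
# Assumed line shape: "... Traffic Relayed ↑ 10 KB, ↓ 500 KB."
# Naively adds the numbers; mixed KB/MB units are not converted.
sum_relayed() {
  awk '/Traffic Relayed/ {up += $(NF-4); down += $(NF-1)}
       END {printf "up: %s  down: %s\n", up, down}'
}

# On the router (hypothetical usage):
# logread -e snowflake | sum_relayed
```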

The U/D ratio is about 1:50, as it should be (it hasn’t run long enough to be representative, but normal browsing is mostly one-way traffic).

But interestingly, wg show all is almost 1:1 for the same period, so PBR is sending part of the traffic outside the VPN, but definitely not all of it.
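
One more quick check: comparing the kernel’s per-interface byte counters for the WAN and WireGuard interfaces shows where the bytes actually leave. A sketch (the interface names wan and WG are assumptions; the directory argument exists so the helper can be exercised off-router):

```shell
# Print tx/rx byte counters for the named interfaces, reading from a
# /sys/class/net-style directory passed as the first argument.
# 'wan' and 'WG' are assumed interface names; adjust to your setup.
iface_bytes() {
  base="$1"; shift
  for i in "$@"; do
    printf '%s tx=%s rx=%s\n' "$i" \
      "$(cat "$base/$i/statistics/tx_bytes")" \
      "$(cat "$base/$i/statistics/rx_bytes")"
  done
}

# On the router (hypothetical usage):
# iface_bytes /sys/class/net wan WG
```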

So far this set-up attempt has been unsuccessful (or I have a mistake in there somewhere).

Overall this exercise helped me a lot to understand the big picture. So definitely check it out.

And lastly, my snowflake is reporting: “Type measurement: unknown → restricted = restricted”.

So maybe it’s not even serving the greater good.
How can one tell whether one’s Snowflake is a drag on the network (mainly on the broker) or actually adding value?

2 Likes

So, in summary, what is the problem that you are trying to solve at this point? It seems that you have managed to start Snowflake.

I’m pretty sure that binding to the WAN interface instead of the VPN’s interface is the way to bypass a VPN, as long as any “kill switch” feature (which blocks direct WAN traffic) is disabled.
Snowflake has the -outbound-address parameter; you could try setting it to your WAN address to bypass the VPN.
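
For what that could look like (a sketch only: 203.0.113.5 is a placeholder for the real WAN address, the binary path is assumed, and the other flags are taken from earlier in this thread):

```shell
# Hypothetical invocation; 203.0.113.5 stands in for your WAN IP
# and /usr/bin/snowflake-proxy for wherever your binary lives.
/usr/bin/snowflake-proxy \
  -outbound-address 203.0.113.5 \
  -ephemeral-ports-range 65497:65534 \
  -capacity 9
```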

Although the broker will incorrectly count proxy country stats then (https://snowflake-broker.torproject.net/metrics), it is otherwise fine to use different IPs for broker communication and for serving clients.

Note that if you take the Snowflake process as a whole and count its inbound/outbound traffic, you should expect to see a 1:1 ratio, because it’s just a proxy: it sends pretty much exactly as much as it receives.


“Restricted” doesn’t mean it’s a drag. The majority (97.7%) of Snowflake proxies are restricted (see the metrics), and this is expected. Generally if a proxy is able to serve clients, then it’s good for the network.

1 Like

I ran restricted for almost 4 months and was never able to get more than 6 clients regardless of setting -capacity above 6, but then 6 is better than 0.

I was thinking about this yesterday, but I had never noticed the -outbound-address parameter. I’m curious to see if this solves it. You will keep us informed, I hope, just for my own knowledge.

1 Like

That’s because the vast majority of clients are also restricted, so 95% of clients have to get paired with 5% of unrestricted relays.
A proxy’s usefulness is not measured by how many clients it gets. Snowflake proxy network’s strength is in numbers.

1 Like

So this leaves 95% of relays being restricted (from the docs: browser extensions). Then who connects to the restricted ones? That leaves only unrestricted clients.

For a Snowflake operator, there is really no way to tell how many clients are connected at any one time. I analyze the logs, but a user here sent me a mod that outputs this in the log. His is more scientific, and it validates my method.

I am not sure what your point or question is.
I’m saying that if you did get at least one client, then this pretty much means that your proxy is working, and thus that it is useful.

You don’t even know that, but agreed: if you did, then it is useful. If 1 is good enough, then you could just stick to the extension, the way I started this project. For me it was not. If you go to all that effort, you at least want to know, roughly, how useful it is.

OK, here is a quick and dirty way to know. You will get output like the listing below.
netstat -t4u4wanp | grep -i 'proxy'

We know that it opens 2 connections per client, so the connections to 193.187.88.42:443 and/or 141.212.118.18:443 tell you how many clients there are. Notice that the sockets bound to 0.0.0.0 use our ephemeral ports.

tcp        0      0 10.10.10.10:42926     193.187.88.42:443       ESTABLISHED 1264/proxy          
tcp        0      0 10.10.10.10:52406     141.212.118.18:443      ESTABLISHED 1264/proxy          
tcp        0      0 10.10.10.10:60734     141.212.118.18:443      ESTABLISHED 1264/proxy          
tcp   199394    600 10.10.10.10:42152     193.187.88.42:443       ESTABLISHED 1264/proxy          
tcp        0      0 10.10.10.10:49180     193.187.88.42:443       ESTABLISHED 1264/proxy          
tcp        0      0 10.10.10.10:40532     141.212.118.18:443      ESTABLISHED 1264/proxy          
tcp        0      0 10.10.10.10:58098     141.212.118.18:443      ESTABLISHED 1264/proxy          
tcp        0      0 10.10.10.10:51502     193.187.88.42:443       SYN_SENT    1264/proxy          
tcp        0      0 10.10.10.10:57356     193.187.88.42:443       ESTABLISHED 1264/proxy          
udp        0      0 0.0.0.0:65510         0.0.0.0:*                           1264/proxy          
udp        0      0 0.0.0.0:65512         0.0.0.0:*                           1264/proxy          
udp        0      0 0.0.0.0:65513         0.0.0.0:*                           1264/proxy          
udp        0      0 0.0.0.0:65517         0.0.0.0:*                           1264/proxy          
udp        0   1280 0.0.0.0:65524         0.0.0.0:*                           1264/proxy          
udp        0      0 0.0.0.0:65529         0.0.0.0:*                           1264/proxy          
udp        0      0 0.0.0.0:65530         0.0.0.0:*                           1264/proxy          
udp        0      0 0.0.0.0:65532         0.0.0.0:*                           1264/proxy
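
Building on that, the counting can be scripted; a sketch that halves the ESTABLISHED connection count (the two IPs are copied from the listing above and may change over time):

```shell
# Count Snowflake clients from netstat output on stdin: two
# ESTABLISHED TCP connections per client to the addresses seen in
# the listing above (these IPs may change over time).
count_clients() {
  awk '/proxy/ && /ESTABLISHED/ &&
       /(193\.187\.88\.42|141\.212\.118\.18):443/ {n++}
       END {print int(n/2), "client(s)"}'
}

# On the router (hypothetical usage):
# netstat -t4u4wanp | count_clients
```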