Remote returned status code 400

Is it just me, or do others also see a lot of

2024/10/06 11:13:36 Generating answer...
2024/10/06 11:13:36 Answer:
        v=0
        ...
2024/10/06 11:13:36 error sending answer to client through broker: error sending answer to broker: remote returned status code 400

in the standalone Snowflake proxy output (-verbose)?

In the answer, do you see a line starting with a=candidate?

All answers sent by my proxy have a

a=candidate:[...] typ host

line, but in some the

a=candidate:[...] typ srflx raddr 0.0.0.0 rport [...]

line with my public IPv4 address is missing.

Right now it seems like it’s solved by an increase in -ephemeral-ports-range.

I don’t know if there is a relation here, but with -capacity 10, how many ports need to be opened with -ephemeral-ports-range? Are 10 enough or should/must there be more?

I don’t have your answer, but I checked my logs (1 month’s worth) and only see one:
2024/09/14 21:18:11 error sending answer to client through broker: broker returned client timeout

In the a=candidate:[...] typ srflx raddr 0.0.0.0 rport [...] line, my IPv4 address and port are there, and the rport is filled in as well.

I know. Not much help.

Edited later:
I use -capacity 20 and changed nothing with -ephemeral-ports-range

Edited even later:
I found this in snowflake/server/dial_linux.go if it helps.

// dialerControl prepares a syscall.RawConn for a future bind-before-connect by
// setting the IP_BIND_ADDRESS_NO_PORT socket option.
//
// On Linux, setting the IP_BIND_ADDRESS_NO_PORT socket option helps conserve
// ephemeral ports when binding to a specific IP address before connecting
// (bind before connect), by not assigning the port number when bind is called,
// but waiting until connect. But problems arise if there are multiple processes
// doing bind-before-connect, and some of them use IP_BIND_ADDRESS_NO_PORT and
// some of them do not. When there is a mix, the ones that do will have their
// ephemeral ports reserved by the ones that do not, leading to EADDRNOTAVAIL
// errors.
//
// tor does bind-before-connect when the OutboundBindAddress option is set in
// torrc. Since version 0.4.7.13 (January 2023), tor sets
// IP_BIND_ADDRESS_NO_PORT unconditionally on platforms that support it, and
// therefore we must do the same, to avoid EADDRNOTAVAIL errors.
//
// # References

I did not list the 5 references but they are in:

snowflake/server/dial_linux.go
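
For illustration, here is a minimal sketch of what such a dialer control can look like on Linux. It assumes the IP_BIND_ADDRESS_NO_PORT constant from golang.org/x/sys/unix; this is a sketch of the technique, not the exact Snowflake code:

package main

import (
	"net"
	"syscall"

	"golang.org/x/sys/unix"
)

// dialerControl sets IP_BIND_ADDRESS_NO_PORT on the socket before bind(2),
// so the kernel defers picking an ephemeral port until connect(2).
func dialerControl(network, address string, c syscall.RawConn) error {
	var sockErr error
	err := c.Control(func(fd uintptr) {
		sockErr = unix.SetsockoptInt(int(fd), unix.IPPROTO_IP,
			unix.IP_BIND_ADDRESS_NO_PORT, 1)
	})
	if err != nil {
		return err
	}
	return sockErr
}

func main() {
	// Bind-before-connect: fix the local IP, let connect(2) pick the port.
	// 192.0.2.1 is a documentation address; substitute your outbound IP.
	dialer := net.Dialer{
		LocalAddr: &net.TCPAddr{IP: net.ParseIP("192.0.2.1")},
		Control:   dialerControl,
	}
	conn, err := dialer.Dial("tcp4", "example.com:443")
	if err == nil {
		conn.Close()
	}
}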

I have to use -ephemeral-ports-range as I have to open and forward ports in my router.

With

-ephemeral-ports-range 40731:40733 -capacity 3

(3 ports for 3 clients) I get the above-mentioned 400 error.

With

-ephemeral-ports-range 40731:40736 -capacity 3

(6 ports for 3 clients) all is fine.

Is this documented anywhere?

You got me on that -ephemeral-ports-range thing. I do nothing to my router. It just happens.

I thought punching holes in the router was the job of that WebRTC/ICE protocol stuff which is in the Snowflake proxy.

I know that when a packet goes out the NAT router, the return-trip rule is created by the router so packets can get back to the source IP:port. This is standard networking for regular home/small-business routers.

I read the documentation “Connecting _ WebRTC for the Curious.PDF” and this is what I understood about how it gets its job done. I checked, and the traffic comes through because the connections are in the ctstate RELATED,ESTABLISHED, and I presumed that the WebRTC/ICE protocol did that.

BTW I run Ubuntu 22.04 LTS as an OS. I compiled my proxy using the Golang instructions for Debian.

So what is different about your setup?

Yeah, a narrow port range could very well be the reason for this. I think it has to be at least a bit wider than -capacity, to account, e.g., for old connections not freeing up their ports immediately.

I don’t think it’s stated in the Snowflake docs explicitly.

If -ephemeral-ports-range and -capacity are both given and the port range is too small, it would be good to have a warning in the log about this misconfiguration.
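
A hypothetical startup check could look like this (the flag names are real; the function and the 2x threshold are made up for illustration, loosely matching the 6-ports-for-3-clients observation above):

package main

import "log"

// warnIfPortRangeTooSmall is a hypothetical check: warn when the range given
// by -ephemeral-ports-range cannot comfortably cover -capacity, since ports
// of closing connections are not freed immediately.
func warnIfPortRangeTooSmall(portLow, portHigh uint16, capacity int) {
	ports := int(portHigh) - int(portLow) + 1
	if ports < 2*capacity {
		log.Printf("warning: -ephemeral-ports-range provides %d ports for -capacity %d; "+
			"consider widening the range to avoid failed answers (broker status 400)",
			ports, capacity)
	}
}

func main() {
	warnIfPortRangeTooSmall(40731, 40733, 3) // 3 ports for 3 clients: warns
	warnIfPortRangeTooSmall(40731, 40736, 3) // 6 ports for 3 clients: quiet
}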

For my own curiosity, can you tell me why you need this -ephemeral-ports-range? One thing I can think of is that your router denies everything outbound (egress) unless it is explicitly allowed, like in a school or something; the opposite of a home router.

With or without -ephemeral-ports-range, I have to manually open/forward some ports in the router. Without forwarded ports in the router, the proxy always reports NAT type: restricted.

For example, if I forward ports 33331 to 33339 in the router I set -ephemeral-ports-range to 33331:33339. Now my proxy reports NAT type: unrestricted.

As I always recommend, create a separate user that will launch the program (snowflake-proxy in your case). Then configure the firewall to accept incoming traffic on a specific port range of your choice, but only for this user. This means other processes will not be able to receive traffic on the specified port range, so you can open a wider range of ports more safely. If you are on Linux, you can restrict snowflake-proxy’s access to the file system using systemd as an additional security measure.

If you are interested, let me know. I’ll tell you how to do it (if you are on Linux).

@tobrop
Then I have to ask: what kind of router do you have?
I get that message also: NAT Type measurement: restricted → restricted.
I do absolutely nothing special; have you tried that? The measurement says restricted, but it then kicks in automatically with connections.

Here is a sample from the logs after a startup.
2024/10/06 22:58:13 Probetest: Created Offer
2024/10/06 22:58:13 Probetest: Set local description
2024/10/06 22:58:13 Probetest offer:
stuff here
2024/10/06 22:58:38 NAT Type measurement: restricted → restricted
2024/10/06 23:05:10 Received Offer From Broker:
2024/10/06 23:05:10 Generating answer…
2024/10/06 23:05:31 Timed out waiting for client to open data channel.
2024/10/06 23:07:02 Received Offer From Broker:
2024/10/06 23:07:02 Generating answer…
2024/10/06 23:07:22 Timed out waiting for client to open data channel.
2024/10/06 23:24:42 Received Offer From Broker:
2024/10/06 23:24:42 Generating answer…
2024/10/06 23:24:44 New Data Channel snowflake-510f685401de1573-824633800310
2024/10/06 23:24:44 Connection successful
2024/10/06 23:24:44 Data Channel snowflake-510f685401de1573-824633800310 open

and it then keeps on going with connections

@excurso
I do none of that and it works completely automatically.

I have a normal home router. As I mentioned above, WebRTC/ICE does all that rule stuff by itself. With all its negotiations it convinces the router that the traffic from the client is related/established. It’s the first rule in the iptables INPUT filter and FORWARD filter.
With the state related,established, the incoming packet must go to the process that initiated the connection.

I do not use the machine. It is on 24/7 doing community computing for four health and medical science non-profit NGOs. I have not taken into account anything rogue. The machine is in my basement and no one touches it.

Correction: it is the second rule in ufw-before-input, and ufw-before-input is the first rule in the INPUT filter. Ignore the FORWARD filter comment.

The problem with having a restricted NAT on a snowflake proxy (or with WebRTC in general):
The client that requests some website, for example, will in most cases also be behind a restricted NAT.

Two machines that are both behind a restricted NAT cannot connect directly to each other. Instead, both connect out to an additional node (a TURN server) through which the data flows (client <=> TURN server <=> snowflake proxy). That is a waste of resources and makes data transmission slower, since an additional node is involved; if the TURN server is heavily loaded, transmission slows down further. So a restricted NAT is not a good thing at all in the case of a snowflake proxy.
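
For reference, relaying is something a peer opts into by listing a TURN server in its ICE configuration. Here is a minimal sketch with pion/webrtc (the WebRTC library Snowflake builds on); the TURN URL and credentials are placeholders, and Snowflake itself does not configure TURN by default:

package main

import "github.com/pion/webrtc/v3"

func main() {
	// ICE configuration with a STUN server (for address discovery) and a
	// TURN server (for relaying when both peers are behind restricted NATs).
	config := webrtc.Configuration{
		ICEServers: []webrtc.ICEServer{
			{URLs: []string{"stun:stun.l.google.com:19302"}},
			{
				URLs:       []string{"turn:turn.example.com:3478"}, // placeholder
				Username:   "user",                                // placeholder
				Credential: "pass",                                // placeholder
			},
		},
	}
	pc, err := webrtc.NewPeerConnection(config)
	if err != nil {
		panic(err)
	}
	defer pc.Close()
	// With TURN configured, ICE gathering can produce "relay" candidates
	// in addition to host/srflx ones.
}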

If you have a restricted NAT and don’t want to change it, then it’s better not to run a snowflake proxy at all than to annoy users with slow transmission.

My snowflake-proxy 2.9.2 (a524830e), compiled with Go 1.23, runs as user nobody in a Docker container.

docker-compose.yml:

services:
    snowflake:
        ...
        # I do not use 'network_mode: host'
        ports:
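            # This published range must match the -ephemeral-ports-range flag below.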
            - "192.168.x.x:33331-33339:33331-33339/udp"
        networks:
            - sf-net
        # Prevent in-container privilege escalation
        security_opt:
            - no-new-privileges:true
        # Limit CPU resources
        cpus: "1.0"
        command: [ "-ephemeral-ports-range", "33331:33339", ... ]

networks:
    sf-net:
        driver: bridge
        driver_opts:
            # Disable inter-container connectivity (ICC). Must be a string.
            com.docker.network.bridge.enable_icc: "false"
        # Disable manual container attachment
        attachable: false

And I don’t want to waste my time with this iptables fiddling.

Is this the advice you would give to all people running a restricted-NAT proxy? All 125,000 unique IP addresses, as per the paper “Snowflake, a censorship circumvention system using temporary WebRTC proxies”. OK, most are browser-extension versions, but they are still behind restricted NAT.
https://www.bamsoftware.com/papers/snowflake/proxy-nat-type.svg
https://www.bamsoftware.com/papers/snowflake/proxy-type.svg

Let me quote the wiki for Snowflake: One of the assumptions in Snowflake is that both the client and proxy are behind NAT. Snowflake discovers this pathway without requiring the user to manually configure port forwarding. Automatic NAT traversal in Snowflake is possible due to ICE negotiation.

Then a little below that, read “Caveats for STUN and TURN”:
Right now, Snowflake is only configured to utilize STUN by default (this is two-year-old info).

The rest of the article is also interesting.

How do we know these people are annoyed? A slow connection is better than no connection, I think.

In my post “A follow up question about Snowflake”, which lasted almost one month and had 32 posts and 636 views, no one suggested I should not run the proxy behind a restricted NAT. I was convinced by @WofWca and @SirNeo to give it a go.

I have serious questions about your TURN server thing but don’t have enough facts yet to question it.

If enough people suggest I pull the plug, then I will; not a problem. I learnt a lot about Linux, which was one goal.

@tobrop
Agreed about iptables. It can get quite complex.
Good that it works.

This is getting a bit off-topic, but

While this is true for two particular peers, it’s not the case for Snowflake. Snowflake doesn’t utilize TURN servers (although it could). It simply makes sure that restricted clients get paired only with unrestricted proxies.
From the Snowflake paper:

Snowflake clients and proxies self-assess their NAT type and report it to the broker, which then avoids making matches that would require TURN.
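
In other words, the pairing constraint is roughly the following (a hypothetical sketch; the names are illustrative, not the actual broker code):

package main

import "fmt"

// compatible sketches the broker's NAT matching rule: a restricted client
// needs an unrestricted proxy; an unrestricted client can use any proxy.
func compatible(clientNAT, proxyNAT string) bool {
	if clientNAT == "restricted" {
		return proxyNAT == "unrestricted"
	}
	return true
}

func main() {
	fmt.Println(compatible("restricted", "restricted"))   // false: would need TURN
	fmt.Println(compatible("unrestricted", "restricted")) // true: restricted proxies still serve these clients
}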

So…

This advice is bad.
Although unrestricted proxies are much rarer and more in demand, running a restricted one is still useful to clients with an unrestricted NAT.

This is a discussion about nothing. I should have mentioned explicitly that I mean the standalone snowflake proxy.
It’s OK when someone doesn’t know how to configure NAT or a firewall but still wants to contribute with a proxy. But if you are an advanced user, why not simply forward the ports? End of discussion.

That doesn’t make things any better. It means the clientele of such a snowflake proxy is very limited. The TURN server is the main reason this whole WebRTC stack exists at all, because most clients are behind a firewalled NAT. If all clients could connect directly to each other, WebRTC would be pointless.

When I use torsocks and sometimes don’t get over 200 KB/s, that advice is very good.

Sure. Forward the ports. But don’t take down your proxy altogether if you don’t forward the ports.

No. Snowflake works fine without TURN, and it would not work at all without WebRTC.

This has nothing to do with NAT though.

It works fine with an unrestricted NAT only. Clients behind a restricted NAT cannot communicate with a proxy behind a restricted NAT (and vice versa) without a TURN server. So a proxy behind a restricted NAT is only available to clients behind an unrestricted NAT, but such clients are rare nowadays.
I’m quitting the discussion on this topic. Decide for yourself whether you want to limit your proxies 🙂