@Felix @trinity-1686a I found it! It was unrelated to your questions, but they did push me to debug it a bit more.
I used lsof to see if port 30001 was bound on the host and found out it was never bound. So there was the issue. Then I dug further into it and, stupidly enough, realised that NodePort does not expose the port to the outside the way I expected. I have to admit that I've never used NodePort before, and the reason I didn't catch this is because:
- I took the code from a Helm chart that is no longer maintained; I guess they used a LoadBalancer to actually expose the port (see the sketch below).
- ChatGPT hallucinated a lot on this one and never caught the error. (Other than that, ChatGPT is great for writing Kubernetes manifests :-))
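For reference, a Service of type LoadBalancer in front of the relay would look roughly like the sketch below. The selector and ports are taken from my Deployment further down; the Service name is just an example, and this only works if the cluster has a load-balancer implementation (a cloud provider or something like MetalLB on bare metal), which mine doesn't, hence the hostPort approach.

```yaml
# Sketch only: a LoadBalancer Service that would expose the relay ports.
# Requires a load-balancer implementation (cloud provider or e.g. MetalLB).
apiVersion: v1
kind: Service
metadata:
  name: tor-relay          # example name
  namespace: tor-prod
spec:
  type: LoadBalancer
  selector:
    app: tor-relay         # matches the labels of the tor-relay Deployment below
  ports:
    - name: orport
      port: 30001
      targetPort: 30001
      protocol: TCP
    - name: dirport
      port: 30030
      targetPort: 30030
      protocol: TCP
```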
I used a sample server (manifest below) to check whether I could reach it. This one worked with a hostPort:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment
  namespace: tor-prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world-container
          image: hashicorp/http-echo # Using hashicorp/http-echo for simplicity
          args:
            - "-text=Hello World"
          ports:
            - containerPort: 5678 # The port the container listens on
              hostPort: 30002     # Exposing the container port on the host's port 30002
              protocol: TCP
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node.kubernetes.io/role
                    operator: In
                    values:
                      - arm-worker
      tolerations:
        - key: "node.kubernetes.io/role"
          operator: "Equal"
          value: "arm-worker"
          effect: "NoSchedule"
```
Here is my final Deployment manifest for future reference, in case anyone else wants it:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tor-relay
  namespace: tor-prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tor-relay
  template:
    metadata:
      labels:
        app: tor-relay
    spec:
      containers:
        - name: tor
          image: <myimage>
          ports:
            - name: orport
              containerPort: 30001
              hostPort: 30001
              protocol: TCP
            - name: dirport
              containerPort: 30030
              hostPort: 30030
              protocol: TCP
          volumeMounts:
            - name: tor-data
              mountPath: /var/lib/tor
            - name: tor-config
              mountPath: /config/torrc
              subPath: torrc
      volumes:
        - name: tor-data
          persistentVolumeClaim:
            claimName: tor-data-pvc
        - name: tor-config
          configMap:
            name: tor-config
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node.kubernetes.io/role
                    operator: In
                    values:
                      - arm-worker
      tolerations:
        - key: "node.kubernetes.io/role"
          operator: "Equal"
          value: "arm-worker"
          effect: "NoSchedule"
```
Note: `<myimage>` is just a simple Docker container that starts the tor daemon with the `-c /config/torrc` flag; that torrc is a ConfigMap inside the cluster and is mounted when the pod starts.
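The Deployment above expects a ConfigMap named tor-config (holding the torrc) and a PVC named tor-data-pvc to exist in the same namespace. For completeness, they look roughly like this; the torrc directives and the storage size below are just an example, adjust them to your own relay:

```yaml
# Sketch of the two objects the Deployment references. The torrc lines and
# the storage request are examples only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tor-config
  namespace: tor-prod
data:
  torrc: |
    Nickname MyRelay
    ORPort 30001
    DirPort 30030
    DataDirectory /var/lib/tor
    ExitRelay 0
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tor-data-pvc
  namespace: tor-prod
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```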
Once again thank you! This topic can be closed