Hi everyone, here's your monthly dose of sysadmin minutes... Note the
interesting discussion about LLMs below: it's not a formal policy, just
a discussion for now.
# Roll call: who's there and emergencies
All members present.
# Express check-in
How are you doing, and are there any blockers? Then pass the mic to the
next person.
# Roadmap discussion
Review the [2026 roadmap][], especially with the [STF grant][] not
going through. With one month left in the quarter, we might get some
input from upstairs for the next one.
[STF grant]: https://gitlab.torproject.org/tpo/operations/proposals/-/issues/70
[2026 roadmap]: https://gitlab.torproject.org/groups/tpo/tpa/-/epics/2
Here's what everyone will be working on:
## zen
- [profiles merge][]
- [planning phase for Tails migration to Prometheus][]
[planning phase for Tails migration to Prometheus]: https://gitlab.torproject.org/tpo/tpa/team/-/issues/41946
[profiles merge]: https://gitlab.torproject.org/tpo/tpa/team/-/issues/42103
## lavamind
- donate-neo
- download page
## groente
- anarcat raised the idea that email should be prioritized
- let's meet with nadya to get a better idea of the requirements;
anarcat will check whether tomorrow's timeslot can be reused
## lelutin
- prometheus HA
- garage conversion
## anarcat
- wiki problem? experimented a lot with mkdocs/zensical for a [personal project][]
[personal project]: https://lora.reseaulibre.ca/
# AI brainstorm?
- anarcat: was extremely skeptical about the capabilities and ethics of
AI; now he's worried about the profession. We might lose an entire
generation of programmers. Worried about the reliability of software,
and about the environmental and political impact. But he's using it --
not vibe coding, and rarely for coding, since it causes skills
atrophy -- mostly for spell checking. Disclosure should be mandatory:
disclose the model and prompt (e.g. in commit messages).
(Most of the remaining comments are unattributed, but each is from a
team member.)
- It's hard, and getting harder, not to use it: e.g. Google pushes you
towards it when you search. Concerned about our jobs and the pressure
for "efficiency", and that we might be forced to use LLMs. Could we
have [RAG][] for our documentation?
[RAG]: https://en.wikipedia.org/wiki/Retrieval-augmented_generation
- groente: As for job security and pressure for "efficiency", one
would hope there will always be a market for people that understand
how systems work. The pressure to use LLMs will likely grow: compare
it to household appliances (or many other technologies) that were
thought to save us time, but in the end just created higher
expectations. Resisting this will likely be extremely hard within a
capitalist environment, but hopefully we can carve out some space
within non-profits at least. It's kind of ironic that this LLM use is
once again creating a huge dependency on a small number of US tech
companies. There's now a lot of geopolitical momentum to decrease
infrastructural dependency on the US, but at the same time we're
massively walking into the same pitfall with AI. Let's hope Tor is
smarter than that. The earlier push towards cloud computing was
successfully resisted, so I'm still hopeful here.
- We still have jobs after the cloud migration, even though we were
told our jobs would stop existing. I like to code, so I don't want to
use AI, but all my friends are using it, and the reason I was getting
crap output on first attempts was that I wasn't sending enough
context. Concerned about people's mental health, especially after an
article [shared this morning about "How to Talk to Someone
Experiencing 'AI Psychosis'"][].
[shared this morning about "How to Talk to Someone Experiencing 'AI Psychosis'"]: How to Talk to Someone Experiencing 'AI Psychosis'
- Not using it. Search results are getting worse and worse. Really
hesitant about jumping in; it feels like a fundamentally misanthropic
technology.
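The [RAG][] question above could start small. As a toy illustration of
the retrieval half only, the sketch below ranks stand-in "wiki pages"
by bag-of-words cosine similarity to a question and pastes the best
match into a prompt; the documents, scoring method, and prompt format
are all made up for the example (real setups would use embeddings and a
vector store):

```python
# Toy sketch of the retrieval step in RAG over local documentation.
# Everything here (the docs, the scoring) is illustrative only.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(docs: dict, question: str, k: int = 1) -> list:
    """Return the names of the k docs most similar to the question."""
    q = vectorize(question)
    ranked = sorted(docs, key=lambda name: cosine(q, vectorize(docs[name])),
                    reverse=True)
    return ranked[:k]

docs = {  # hypothetical stand-ins for wiki pages
    "howto/upgrades": "schedule reboots and pending upgrades per host",
    "howto/prometheus": "prometheus scrape configs alerting rules HA",
    "service/mail": "mail servers postfix spam filtering aliases",
}
best = retrieve(docs, "how do we configure prometheus alerting rules?")
# The retrieved page would then be prepended to the model prompt:
prompt = f"Context:\n{docs[best[0]]}\n\nQuestion: ..."
print(best[0])  # → howto/prometheus
```

The appeal of this shape is that the model only ever sees our own
documentation as context, which helps with grounding answers in what we
actually run.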
As a temporary moratorium, anarcat announced that TPA team members are
forbidden to grant any sort of execution access to LLMs for machines
under their stewardship. If you want to experiment with AI, you can
copy-paste things, feed it input, but in no way should you execute
code from LLMs without the same level of review you would give to an
untrusted contributor. This applies, in particular, to agents: if you
experiment with agents like Claude Code, run those in a virtual
machine.
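The "run agents in a virtual machine" rule above could look something
like the following sketch; the base image name, memory size, and the
dry-run wrapper are illustrative assumptions, not a blessed TPA setup:

```shell
#!/bin/sh
# Sketch: run agent experiments in a disposable qemu VM so anything the
# agent executes stays off the host. Dry run by default: commands are
# printed, not executed.
set -eu

run() { echo "+ $*"; }   # change body to: "$@" to actually execute

BASE=debian-stable.qcow2        # assumed pre-built base image
SCRATCH=agent-scratch.qcow2

# Copy-on-write overlay: the agent's writes land in the overlay,
# leaving the base image untouched.
run qemu-img create -f qcow2 -b "$BASE" -F qcow2 "$SCRATCH"

# -snapshot additionally discards overlay writes at shutdown;
# user-mode networking keeps the guest off the local network.
run qemu-system-x86_64 -m 4G -snapshot \
    -drive file="$SCRATCH",if=virtio -nic user
```

Discarding the whole guest after each session keeps the review burden
where it belongs: on whatever output you choose to copy back out.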
# Next meeting
Next month's first Monday is Easter in some cultures and a holiday for
most of our workers, so the roadmap meeting will be held on the second
Monday of April.
# Metrics of the month
* host count: 95, LDAP 140 (!), Puppet 136 (!)
* number of Apache servers monitored: 32, hits per second: 659
* number of self-hosted nameservers: 6, mail servers: 13
* pending upgrades: 0, reboots: 0
* average load: 1.38, memory available: 4.3 TB/6.8 TB, running processes: 160
* disk free/total: 111.3 TB/216.5 TB
* bytes sent: 534.4 MB/s, received: 362.1 MB/s
* [GitLab tickets][]: 272 tickets including...
* Needs Triage: 1
* Not Scheduled: 164
* Backlog: 66
* Next: 29
* Doing: 12
* (closed: 4416)
* [~Needs Information][]: 3 open, 125 closed
* [~Needs Review][]: 7 open, 201 closed
* [~Technical Debt][]: 10 open, 38 closed
[GitLab tickets]: Issue Boards · Development · Boards · The Tor Project / TPA / TPA team · GitLab
[~Needs Information]: Issues · TPA · GitLab
[~Needs Review]: Issues · TPA · GitLab
[~Technical Debt]: Issues · TPA · GitLab
Upgrade prediction graph lives at trixie · Wiki · The Tor Project / TPA / TPA team · GitLab
--
Antoine Beaupré
torproject.org system administration
_______________________________________________
tor-project mailing list -- tor-project@lists.torproject.org
To unsubscribe send an email to tor-project-leave@lists.torproject.org