If I understand correctly, it’s because of possible fingerprinting.
But why not let the user decide how much they want to be fingerprinted, instead of hardcoding either the light or the dark theme?
Fingerprinting is “exponential”: even a binary choice such as light vs dark doubles the number of possible results.
Imagine 4 OSes (Windows, Linux, macOS, Android) x 41 supported languages x 2 architectures x 12 browser inner window sizes … and so on - that’s almost 4k different fingerprints, and I’m barely scratching the surface. Allow light vs dark and we’re suddenly at 8k. So whilst you might think “it’s only 1 bit (light or dark)”, it’s actually devastating to overall fingerprint health - think of the smaller bundles of users with a less common fingerprint (Icelandic on Mac with a small resolution).
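To see the arithmetic, here’s a toy calculation using the example dimensions above (the names and counts are just those examples, nothing more):

```ts
// Toy arithmetic: multiply the option counts per dimension.
const dims = { os: 4, languages: 41, arch: 2, innerWindowSizes: 12 };
let combos = Object.values(dims).reduce((a, b) => a * b, 1);   // 3,936 fingerprints
console.log(combos, `~${Math.log2(combos).toFixed(1)} bits`);  // 3936 "~11.9 bits"
combos *= 2;  // add one binary metric (light vs dark)
console.log(combos, `~${Math.log2(combos).toFixed(1)} bits`);  // 7872 "~12.9 bits"
```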
Prefers light/dark is arbitrary - websites do not have to honor it. So it’s not a universal solution - it’s a nice-to-have. But we are aware, and have issues open, to address accessibility - but prefers-color-scheme is not it!
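For context, reading that preference takes a single call to the standard matchMedia API - a free bit for any script that asks:

```ts
// One matchMedia call hands over the light/dark bit to any fingerprinting script.
const prefersDark = window.matchMedia("(prefers-color-scheme: dark)").matches;
console.log(prefersDark ? "dark" : "light");
```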
This could be solved, on the one hand, by adding more restrictions to the “NoScript” extension, and on the other hand by generating random values for User-Agent etc. That way fingerprinters would receive fake data.
Maybe I misunderstand what’s meant by “fingerprinting” here.
The definition I currently think of is: leaving traces on the Internet that allow a website to identify you when you revisit.
If it’s the wrong definition in this case, let me know.
There are several “tracking” methods - i.e. ways of linking your traffic by (re)identifying you. What you described above is “state” tracking - i.e. the identifier is written to state on disk and is available on your next visit - think cookies. You visit a site, it checks for a “cookie” or other identifier - if you have it, then you’re re-ID’ed; if you don’t, it assigns one to you for the future. In Tor Browser we limit state tracking to the first party (i.e. the website you are on, so third-party websites can’t link you), and we sanitize (clear) all state on close.
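To make the distinction concrete, a minimal sketch of first-party state tracking (the cookie name `tracker_id` is made up; the check-or-assign pattern is the point):

```ts
// Minimal sketch of state tracking: check for an ID cookie, assign one if absent.
function getOrAssignId(): string {
  const match = document.cookie.match(/(?:^|; )tracker_id=([^;]+)/);
  if (match) return match[1];       // cookie present: returning visitor, re-ID'ed
  const id = crypto.randomUUID();   // cookie absent: assign an ID for next time
  document.cookie = `tracker_id=${id}; max-age=31536000; SameSite=Lax`;
  return id;
}
```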
Fingerprinting is stateless (nothing is written to disk), and nothing needs to be assigned - it’s just given freely by the browser when asked. The client (browser) runs JS given to it by the website to compute a hash based on metrics - metrics such as your userAgent, your languages, your timeZone name, your screen size, etc. Edit: there’s also passive (non-JS) fingerprinting that can be collected server-side.
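A minimal sketch of the active (JS) variant, using only standard browser APIs - the exact metric list varies per script, this is just illustrative:

```ts
// Minimal sketch of active fingerprinting: hash metrics the browser gives away for free.
async function fingerprint(): Promise<string> {
  const metrics = [
    navigator.userAgent,
    navigator.languages.join(","),
    Intl.DateTimeFormat().resolvedOptions().timeZone,
    `${screen.width}x${screen.height}`,
    `${window.innerWidth}x${window.innerHeight}`,
  ].join("|");
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(metrics));
  return [...new Uint8Array(digest)].map(b => b.toString(16).padStart(2, "0")).join("");
}
```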
All randomizing can be detected - if not via math in the first party (e.g. known pixel tests), then by checking with a third party. Ultimately, randomizing does nothing against advanced scripts. It’s also costly and hard - the bugs and holes in Brave’s and Mozilla’s randomizing are/were legion. Keep it simple.
The only benefit to randomizing is usability. And about the only places it makes sense are canvas and WebGL rendering, where subtle randomness still means a usable image.
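For illustration, a toy version of the “known pixel test” idea mentioned above (a made-up sketch, not any browser’s or script’s actual code):

```ts
// Toy "known pixel" test: draw deterministic content twice and compare.
// Naive per-read noise changes between reads and is caught immediately; a
// per-session seed survives this test and has to be compared against
// known-good renders instead, or cross-checked via a third party.
function detectPerReadCanvasNoise(): boolean {
  const draw = (): string => {
    const c = document.createElement("canvas");
    c.width = 64;
    c.height = 16;
    const ctx = c.getContext("2d")!;
    ctx.fillText("known test string", 2, 12);
    return c.toDataURL();
  };
  return draw() !== draw(); // true: the browser is injecting per-read randomness
}
```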
Tor Browser 14, based on ESR 128, is the first release to even have the engineering to randomize per eTLD+1 per session (you need to do this to protect the seed), because upstream at Mozilla they built that in for their own purposes. We might be able to leverage that in the future to make things harder for fingerprinters.
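To illustrate why the seed needs protecting, a hedged sketch of the general idea - this is not Mozilla’s or Tor Browser’s actual implementation, and `domainSeed` is a made-up helper:

```ts
// Sketch: derive every noise value from HMAC(sessionKey, eTLD+1), so values stay
// stable within a site for the whole session (hard to catch with repeat reads),
// are unlinkable across sites, and the raw seed itself is never exposed.
async function domainSeed(sessionKey: CryptoKey, etldPlusOne: string): Promise<Uint8Array> {
  const sig = await crypto.subtle.sign("HMAC", sessionKey, new TextEncoder().encode(etldPlusOne));
  return new Uint8Array(sig);
}

// Fresh, non-extractable key per browsing session; discarded on close like other state.
const sessionKey = await crypto.subtle.generateKey(
  { name: "HMAC", hash: "SHA-256" }, /* extractable */ false, ["sign"],
);
```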
Because Tor Browser is for anonymity. Part of that is preventing your traffic from being linked, because linkability can be used to unmask real identities. Fingerprinting is one such method of tracking users (i.e. their traffic) and compromising anonymity.