Digital Privacy and Ethics
It’s the end of my fall semester, and I just got home from school. All I am doing with my time is reflecting on random things, binge-watching whatever comes up on my YouTube feed, and playing games like MapleStory to pass the time in the limbo that is winter break. I ended up stumbling upon the Free Software Foundation website, which got me thinking about the topic of technology and ethics.
Having taken “Defense Against the Dark Arts” this past semester, I narrowed the reflection down to the topic of browser fingerprinting and the user profiling and content-curation algorithms built on top of it. In this blog, I want to walk through my own thoughts on the ethics of the internet and how it is deeply tied up with myth-making through user tracking mechanisms (browser fingerprinting). In doing so, I question the following: Should any server work on the assumption of who you are “as a person”?
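To make "browser fingerprinting" concrete before the argument: the basic idea is that a server combines many individually mundane browser attributes into one identifier that is stable across visits, with no cookie involved. Here is a minimal, hypothetical sketch of that combination step; the attribute names and the choice of SHA-256 are my own illustrative assumptions, and real fingerprinting scripts draw on far more signals (canvas rendering, installed fonts, audio stack, etc.).

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Hash browser-exposed attributes into a single stable ID.

    Hypothetical sketch: sort keys so the same visitor always
    produces the same canonical string, then hash it.
    """
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Example attributes a browser routinely exposes (values are made up).
visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen": "1920x1080",
    "timezone": "America/Chicago",
    "language": "en-US",
}

print(fingerprint(visitor))  # same visitor, same ID, no cookie needed
```

The point of the sketch is only that no single attribute identifies you; the combination does, which is why the profiling discussed below can happen without any explicit login.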
Why I’m Against Tracking
Identity and Algorithms
The interplay between identity and algorithms is an interesting one. To be clear, I am not talking about identity in the user-authentication sense, e.g., bank logins. I am talking about the assumptions made on the part of the server about who the user is as a construct, e.g., is user X an alcoholic? That is their ontic status. An algorithm generates a label for user X. Say, for example, you are browsing Amazon, and it has determined you are a cat lover. Is there anything wrong with that?
One thing might be that you are actually a dog lover, so the algorithm is plainly wrong. Now, grant that user X is never 100% a dog lover or a cat lover. Is there still something off about the algorithm deciding which one you are, especially when it uses that label to shape how you interact with the website? My response is yes.
Taking from Wittgenstein:
“The limits of my language mean the limits of my world.” (Tractatus Logico-Philosophicus)
Wittgenstein points out that language is inherently tied to a project of myth-making about reality: language does not speak to the absolute world but acts as a symbolic construct for it1. The implication is that the labels an algorithm prescribes are immoral, or maybe just unvirtuous, to the extent that they say, “Only these labels are of the world.” In that way, an algorithm enforcing such labels is wrong, and worse, it does so mechanically. Under some labeling algorithms, you will forever be labeled as A or B, no matter how much new data you generate2.
To make the point of labeling clearer and explain why humans are more “unique” in the sense of generating new labels, we can take a look at university departments. Each field of study3 has its name because it delineates some part of the universe to be studied or engineered. I mean, you could say all the students are students of World Science or World Engineering or Brainwashed, but that wouldn’t say much, right? That’s why there are sub-labels for specific fields of study like physics, psychology, etc. Now, what happens when there’s a new field or paradigm such that the existing labels don’t map precisely to a field? Make a new label. You can become a quantum psychologist now.
I make the point above to say that when algorithms try to predict a non-physical process or label, their set of labels might be wrong at some level, independent of whether any individual prediction is wrong. A prediction at time-point T inevitably constrains the user and gives an answer based on probabilities, not possibilities. Also, algorithms don’t just generate labels; they act on the labels predicted, which leads to the point of the next section. At best, algorithms are maintaining who you are; at worst, they are subtly changing your identity.
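The "probabilities, not possibilities" point can be shown in a few lines. Below is a toy labeler of my own invention (the label set and scores are hypothetical, not any real recommender): because the output is forced into a closed label set, a user who fits neither label still gets one, and no amount of new behavior can produce a label outside the set.

```python
# A closed label set: the "quantum psychologist" case from the
# university-departments analogy simply cannot be expressed here.
LABELS = ["cat_lover", "dog_lover"]

def label_user(scores: dict) -> str:
    """Pick the highest-scoring label from the fixed set.

    Scores for labels outside LABELS are silently ignored, so the
    answer reflects probabilities over the set, never possibilities
    beyond it.
    """
    return max(LABELS, key=lambda label: scores.get(label, 0.0))

print(label_user({"cat_lover": 0.51, "dog_lover": 0.49}))  # cat_lover
# A user whose behavior fits neither label still gets forced into one
# (ties break to the first label in the list):
print(label_user({"bird_lover": 0.99}))  # cat_lover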
Knowledge and Algorithms
What do I mean by “Knowledge and Algorithms”? I mean the ways in which algorithms fundamentally restrict some kinds of information based on user data. This is a different task from content moderation, which works on some collective standard; what I have qualms with is more of a per-user content moderation. I think even if you disagree with the problems of labeling, you can agree that the use of those labels for this kind of moderation can be problematic. In making my argument, I want to consider the common defense that content algorithms are good because “I don’t want to see what I don’t want to see” or “I want to see what I want to see.”
The counterpoint can be summed up as: “Then that’s all you’ll ever see.” As a passive user of YouTube, there will almost never be content thrown in willy-nilly to test how much you might like another area of discourse. Yes, you might be diametrically opposed to another discourse’s arguments, maybe deem it “nonsense,” but is there not something dangerous in deciding never to hear another out?4 I view the trade-off of accepting such a filter bubble as trading convenience for complacency in thought.
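The filter-bubble mechanic described above can be sketched as a recommender that only ever surfaces items matching the user's predicted label, making the label self-reinforcing. This is a deliberately simplified toy of my own; the catalog and topic names are hypothetical, and real curation systems rank rather than hard-filter.

```python
# Hypothetical catalog; in a real system this would be millions of items.
catalog = [
    {"title": "Cat grooming tips", "topic": "cats"},
    {"title": "Dog park etiquette", "topic": "dogs"},
    {"title": "Intro to birdwatching", "topic": "birds"},
]

def recommend(user_label: str, items: list) -> list:
    """Return only items matching the user's assigned label.

    Nothing outside the label is ever shown, so the user's clicks can
    only confirm the label the system already assigned; there is no
    willy-nilly item to discover a new interest through.
    """
    return [item["title"] for item in items if item["topic"] == user_label]

print(recommend("cats", catalog))  # ['Cat grooming tips']
```

A less complacency-inducing design would mix in a small fraction of off-label items (exploration), which is exactly the "willy-nilly" content the passive-user critique says is missing.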
Defenses Against Tracking
I thought about writing this section here, but I decided to split it into another post (if I ever get around to writing it). That part would cover the technical and political fights against actors attempting to implement tracking.
Citations
The ethics of advertising for health care services. (2014). In The American Journal of Bioethics (Vol. 14, Issue 3, pp. 34–43) [Journal-article]. https://doi.org/10.1080/15265161.2013.879943
Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D. M., & Watts, D. J. (2021). Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences, 118(32). https://doi.org/10.1073/pnas.2101967118