If you’ve ever found yourself clicking into the echo chamber known as “recommended for you”, then you’ve either perfected your personal algorithm through lifetimes of online scrolling, or you’ve embraced the consumption habits of a persona that understands only a small part of who you really are.
People are complex, with varied interests and evolving tastes and opinions. Hyper-personal analytics attempt to measure the personal traits, values and attitudes of every individual who has used the internet, in order to create an algorithm that will (hopefully) accurately predict behaviour.
The faith we have in the creation of an algorithm that truly understands our individual identities, when we are yet to fully understand ourselves, is concerning. There are also severe consequences in expecting these expert algorithms to define us.
How much meaning can be derived from an in-depth analysis of your data?
Algorithms allow bias and stereotypes, born of assumptions and limitations in how their creators understand data, to define and perpetuate our online behaviour. In a 2016 Futurism article, the authors celebrate the success of a team of humans teaching a machine learning algorithm how to judge a person based solely on their looks. Mel McCurrie, from the University of Notre Dame, created the algorithm to read faces and score them for what the researchers perceived as positive traits: dominance and trustworthiness. The objective of the experiment was to encourage people who did not meet the algorithm’s standard to change their looks and behaviour. Surprisingly, no action was required from the (clearly judgemental) 6,300 people whose ratings taught the algorithm its bias in the first place.
More recently, the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm has come under fire for exhibiting racial bias in its predictions of reoffending and the sentencing decisions based on them. In ProPublica’s 2016 analysis of the algorithm’s effectiveness, it was found that, “The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants. White defendants were mislabeled as low risk more often than black defendants.”
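To make that claim concrete, here is a minimal sketch of the metric behind it: the false positive rate per group, i.e. how often people who did not reoffend were nonetheless flagged as high risk. The records below are invented toy data, not the actual COMPAS dataset.

```python
from collections import defaultdict

# Toy records: (group, predicted_high_risk, actually_reoffended).
# These values are invented for illustration; they are NOT COMPAS data.
records = [
    ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("A", True,  False), ("B", False, False), ("B", True,  True),
    ("B", False, True),  ("B", False, False),
]

counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, predicted_high, reoffended in records:
    if not reoffended:                 # person did NOT reoffend...
        counts[group]["negatives"] += 1
        if predicted_high:             # ...but was still flagged high risk
            counts[group]["fp"] += 1

for group, c in sorted(counts.items()):
    print(f"group {group}: false positive rate = {c['fp'] / c['negatives']:.2f}")
```

A gap between the two printed rates is exactly the kind of disparity the quoted finding describes: one group pays for the model’s mistakes far more often than the other.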

If subjectivity is a human trait, and humans write the rules for algorithms, how accurate can the analytics be in defining who we are and what we really want or need? Algorithms will understand people’s behaviour only as far as their creators interpret that behaviour themselves. We would first need to achieve neutrality in ourselves before we could impart it to the algorithms that reinforce our reactions.
“Algo-trading” played its part in the 2008 financial crisis. The financial market put its faith in software-driven algorithms, designed by so-called brilliant quantitative analysts, creating trillions’ worth of mythical wealth. Like unpredictable markets, people are irrational. Algorithms can turn us into “rational”, predictable beings, provided we never question our choices and keep scrolling into techno-oblivion. But living in a world without choice or deviation places us inside a simulation created by an algorithm of our ‘past’ selves, and requires us to give up a significant part of our humanity: reality.
We’re teaching algorithms to teach us to stay the same
Consider the time we spend on digital social platforms. Our algorithm collects data on: the time we spend on each platform and what we are viewing; what we engage with, and whether we disliked or enjoyed it; the people in our network whose posts we respond to quickly, or like and comment on most often. It then combines all of this with the entirety of our past online traffic (a toy sketch of such a profile follows below). Although there are moments when we are searching for something specific, a lot of the time we spend online feeding our algorithm is, unfortunately, spent out of boredom. The more we feed it this generic recipe, the more generic the algorithm becomes in the content it delivers back to us.
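As a rough illustration, here is a minimal sketch of such an engagement profile. The signal names and weights are invented for the example; real platforms use far richer, and far more opaque, feature sets.

```python
from collections import Counter

# Invented signal weights for illustration only.
SIGNAL_WEIGHTS = {"view": 1.0, "like": 2.0, "comment": 3.0, "share": 4.0}

def update_profile(profile: Counter, topic: str, signal: str,
                   seconds_viewed: float) -> None:
    """Fold one interaction into a running per-topic interest score."""
    profile[topic] += SIGNAL_WEIGHTS.get(signal, 0.0) + seconds_viewed / 60

profile: Counter = Counter()
update_profile(profile, "politics", "like", 90)
update_profile(profile, "cooking", "view", 15)
update_profile(profile, "politics", "comment", 120)

# The top-scoring topics dominate what gets recommended next.
print(profile.most_common())
```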

Once we have a concept or ideology that we believe in, we search for, and become more interested in, similar content that can validate our argument. As we engage with that content, the algorithm, on the assumption that this search is all we are and ever will be, creates a feedback loop around the argument (a toy simulation of this loop follows below). It propels us toward people with similar viewpoints, and we find our social tribe at the expense of opening our minds to any alternative perspective. We allow our identity to remain trapped in one sphere of our lives without actualising the changes necessary for societies to progress.
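Here is a toy simulation of that rich-get-richer dynamic, assuming an invented five-topic catalogue and a simple “recommend in proportion to past engagement” rule. Real recommender systems are vastly more complex, but the narrowing tendency is similar.

```python
import random

# Invented catalogue for illustration only.
TOPICS = ["politics", "cooking", "travel", "science", "music"]

def simulate(rounds: int = 200, seed: int = 0) -> None:
    rng = random.Random(seed)
    weights = {t: 1.0 for t in TOPICS}   # start with no preference
    for _ in range(rounds):
        # Recommend in proportion to past engagement...
        topic = rng.choices(TOPICS, weights=[weights[t] for t in TOPICS])[0]
        # ...and each engagement makes that topic more likely next time.
        weights[topic] += 1.0
    total = sum(weights.values())
    print({t: round(w / total, 2) for t, w in weights.items()})

simulate()  # a few topics tend to crowd out the rest
```

Early, essentially random clicks get amplified round after round, which is why the loop settles on a narrow slice of the catalogue rather than reflecting the whole person.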
Real-world interactions shape us and our internal dialogues in ways we are yet to fully understand. Even with voice recognition and the eavesdropped soundbites gathered from our devices, the profundity of unspoken nuance is mysterious and difficult to define. Reality is real for, and understood differently by, every individual. The greatest minds have attempted, and given up on, understanding human consciousness. For an algorithm to really know us, it would require conscious code that gathers the illogical data surrounding our unique perceptions of reality.
With the amount of time we spend online, our algorithm defines our psyche. It should be the other way around. We need to reclaim our power of choice and, essentially, confuse the f*ck out of algorithms and their creators. Confused algorithms are less likely to recommend the same content over and over, and less likely to trap us in a box where all we can hear is the sound of our own unchanging voice. With this power also comes the responsibility of using the internet with purpose and strategic intent. By engaging only with valuable and informative content, we reset our own algorithms and, collectively, the content disseminated online.
Alternatively, if consciousness is not your thing, by all means keep scrolling…