Mapping Ideas and Connections
During this session, I created a mindmap to visually explore and organise my initial research. This process helped me break down complex ideas, make connections between different themes, and identify areas I wanted to investigate further. The mindmap became a useful tool for guiding my thinking, highlighting gaps in my knowledge, and laying the foundation for the next stages of our project. On this page, I’ll walk through some of the key ideas that came out of that session and reflect on how the mapping process shaped our direction moving forward.
Feedback
As part of the session, we also spent time looking at each other's mindmaps, sharing feedback by placing stickers on points we found particularly interesting or insightful. I gathered the pointers the class had marked on my mindmap and sorted them into categories.
⑴ Posthumanism
• Explore identity beyond human-centred perspectives by emphasising the hybridity of human and machine in shaping digital selves
• Implications for algorithmic identity construction, where identity is viewed as a fluid phenomenon co-constructed with technologies and algorithms
⑵ Anthropomorphism
• Affects trust and interaction with digital systems
• Leads to perceptions of algorithms as intentional identity shapers
⑶ What
• Processes and systems that shape identity: algorithms mediate and construct multiple facets of digital identity
⑷ Why
• Perspectives that are necessary: critically analysing power, ethics, and representation in algorithmic identity
⑸ How
• Ways for application: design innovation
⑹ Thoughts
• Putting a face to algorithms
• How does the system categorise us?
Andreas also gave me feedback from another angle: Manipulation in Algorithms
How can we visualise an algorithm being manipulative?
How can we experience this manipulation in algorithms?
Where does this manipulation live? Physical? Arduino code? Analogue?
Reflection & Thoughts
Andreas' feedback really stuck with me during the session. It made me stop and ask myself: are algorithms manipulative? The more I thought about it, the more I realised how complex and layered that question is.
Research shows that algorithms don't just passively deliver content; they actively shape user behaviour, beliefs, and even identity, often without users fully realising it. They do this by exploiting users' cognitive biases, filtering and controlling the flow of information we see, and optimising for metrics like engagement or profit rather than our actual well-being.
RPO Framework
To deepen the conceptual framework, I created diagrams for each pillar that map out how the processes work and how they relate to one another. These diagrams do not stand alone. They highlight the relationships between pillars. Datafication feeds into feedback loops, which in turn create openings for manipulation.
By making these flows visible, the diagrams clarify how algorithmic identity is constructed as a layered and interconnected process. They became a bridge between theory and analysis, translating abstract readings into a visual logic that underpins my argument.
Pillar 1: Datafication
This diagram shows how everyday actions are transformed into data that fuels algorithmic identity. User behaviours like clicks or swipes are captured as traces, sorted, and classified into categories that shape a data double.
Cheney-Lippold argues that such profiling reduces individuals into measurable identities (Cheney-Lippold 24), while Ortiz-Freuler and Venkatraman highlight how these classifications embed power and normative assumptions (Ortiz-Freuler and Venkatraman). Schroeder extends this by noting that platforms don’t merely reflect users but actively structure how identities are understood and enacted (Schroeder 57).
Together, these perspectives reveal datafication as a process that constructs identities through categorisation, continuously feeding into recommendations that reinforce the algorithmic self.
Pillar 2: Algorithmic Feedback Loops
This diagram illustrates how algorithmic feedback loops reinforce identity through cycles of curation and perception. User actions such as clicks, likes, or shares feed into algorithmic curation and ranking systems, which determine the content users see next. These curated feeds and recommendations are then interpreted as reflections of the self, shaping user perception and identity. Over time, these outputs reinforce biases by narrowing what is made visible or salient.
As Bucher argues, algorithms act as “invisible mediators” that curate experiences in ways users often cannot perceive (Bucher 4). Joseph shows how such algorithmic events, like Spotify Wrapped, are taken up as identity markers, blurring reflection and construction (Joseph 296–304). Glickman and Sharot further demonstrate that feedback loops can exploit cognitive biases, making users more susceptible to reinforcement and narrowing of self-concepts (Glickman and Sharot 18). Together, this evidence highlights how feedback loops subtly transform platforms into mirrors that not only reflect but also entrench identities.
Pillar 3: Manipulation
This diagram shows how manipulation is not an isolated process but emerges at the intersection of datafication and feedback loops. Once user actions are reduced to data traces and categorised, recommendations shape perception and identity, which in turn steer behaviour in subtle ways. As Vangeli argues, algorithmic manipulation operates by narrowing what information is visible, producing filter bubbles that shape worldviews (Vangeli 3–4).
Carroll et al. caution that platforms exploit cognitive and emotional biases to nudge choices, often blurring the line between guidance and control (Carroll et al. 93–100). Fu and Sun note that users sometimes resist or attempt to manipulate these systems in return, revealing identity as a contested space rather than a fixed profile (Fu and Sun 179–93). Joseph extends this by showing how helpful recommendations can gradually redefine notions of choice and volition (Joseph 457–65).
Taken together, these perspectives highlight manipulation as the point where algorithmic processes shift from reflection to intervention, actively steering how identities are enacted.
Precedent: Spotify Blend
Spotify has a feature called Blend that merges the listening habits of two or more users into a single shared playlist. It dynamically analyses each person's recent music behaviour, such as their favourite genres, frequently played tracks, mood tendencies, and artists.
It uses Spotify's recommendation algorithms to generate a playlist that represents their "shared listening identity". Blend's colours and shapes often represent each participant's "slice" of the playlist, a visual indicator of how much of the playlist comes from each person.
Experiment 1: Playlist Visualiser
While Spotify Blend visualises the shared identity of two or more listeners, I was interested in the opposite direction: What happens when you strip away the social dimension and reduce the system’s focus to one user?
Instead of comparing tastes or building a relational identity, I wanted the system to generate a self-contained visual composite: an identity portrait that emerges purely from the user’s own listening habits.
By removing the “Blend” between individuals, the system becomes a mirror. The question shifts from “How much do our tastes overlap?” to: “What does my music say about me when it is reduced, encoded, and visualised as data?”
The idea for this experiment emerged from my research into datafication: platforms reduce us into patterns, metrics, and signals. So instead of celebrating shared identity, I wanted to experiment with a version that visualises that reductive logic. It also expands on the Mini Experiment: CHAD, shifting from verbal judgment to a generative portrait of a self.
Try Experiment 1
Catalogue of Making (Experiment 1)
To get the different elements of the playlist, the system extracts public metadata through Spotify's oEmbed API. This metadata is then converted into numerical traits (energy, valence, consistency, novelty, focus); with ChatGPT's help, I seeded a pseudo-random function so that each playlist generates a unique but stable signature.
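The seeding step above can be sketched roughly as follows. The five trait names match those listed in the text, but the hash function (FNV-1a) and PRNG (mulberry32) are illustrative stand-ins, since the experiment's actual code is not reproduced here; the point is only that the same playlist metadata always yields the same signature.

```javascript
// Turn a metadata string (e.g. the playlist title from oEmbed)
// into a 32-bit seed. FNV-1a is an assumed choice for illustration.
function hashString(str) {
  let h = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0;
}

// mulberry32: a tiny seeded PRNG, so the same seed always
// produces the same stream of values in [0, 1).
function mulberry32(seed) {
  return function () {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Derive the five traits from the seeded stream. A given playlist
// title therefore maps to a unique but stable signature.
function playlistTraits(title) {
  const rand = mulberry32(hashString(title));
  return {
    energy: rand(),
    valence: rand(),
    consistency: rand(),
    novelty: rand(),
    focus: rand(),
  };
}
```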
Using Perlin noise, the rings gently distort and drift, creating a behavioural field that changes according to the playlist’s emotional profile. High energy increases movement; low consistency increases wobble; high valence brightens colour; novelty adds variation across rings; focus sharpens the palette.
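The trait-to-visual mappings described above can be summarised as a single lookup. The parameter names and numeric ranges below are assumptions chosen to illustrate the mapping, not the experiment's actual values:

```javascript
// Map the five traits (each in [0, 1]) onto the visual behaviours
// described in the text. All ranges are illustrative.
function ringStyle(traits) {
  return {
    driftSpeed: 0.002 + traits.energy * 0.01,  // high energy → more movement
    wobble: (1 - traits.consistency) * 20,     // low consistency → more wobble
    brightness: 40 + traits.valence * 60,      // high valence → brighter colour
    ringVariation: traits.novelty,             // novelty → variation across rings
    paletteSpread: 1 - traits.focus,           // high focus → sharper palette
  };
}
```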
At the top right, there is a classification box where the system classifies the user based on the metadata extracted: “gender,” “age,” and “genre”. This mirrors how actual platforms reduce complex behaviour into categorical identity, allowing users to see how their listening habits are being interpreted.
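A classification box like this could amount to little more than threshold rules over the derived traits. The thresholds and labels below are invented purely to make the reductiveness of such a mapping visible; they are not the experiment's actual rules:

```javascript
// Illustrative-only thresholds: the point is how crude the mapping
// from behaviour to categorical identity is, not its accuracy.
function classify(traits) {
  return {
    genre: traits.energy > 0.6 ? "electronic" : "acoustic",
    age: traits.novelty > 0.5 ? "18-24" : "25-34",
    gender: traits.valence > 0.5 ? "F" : "M",
  };
}
```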
The "Save" button allows users to save their identity into the archive, and the "Archive" button lets users view all the saved playlists that other people have added to the system. Together, this process exposes how easily personal taste can be transformed into data, encoded into traits, and mapped into reductive classifications, mirroring the simplifying logic of algorithmic identity systems.
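The Save/Archive flow boils down to appending serialised entries to persistent storage and reading them back. A minimal sketch, assuming entries are kept as a JSON array; in the browser this would sit on top of localStorage, and a stand-in with the same interface is included here so the logic works anywhere:

```javascript
// Assumed storage key and entry shape, for illustration only.
const ARCHIVE_KEY = "playlist-archive";

// Append one saved identity to the archive.
function saveEntry(storage, entry) {
  const raw = storage.getItem(ARCHIVE_KEY);
  const archive = raw ? JSON.parse(raw) : [];
  archive.push({ ...entry, savedAt: Date.now() });
  storage.setItem(ARCHIVE_KEY, JSON.stringify(archive));
}

// Read back every saved identity.
function loadArchive(storage) {
  const raw = storage.getItem(ARCHIVE_KEY);
  return raw ? JSON.parse(raw) : [];
}

// Minimal in-memory stand-in mimicking the localStorage interface,
// so the save/load logic can run outside the browser too.
function memoryStorage() {
  const store = {};
  return {
    getItem: (k) => (k in store ? store[k] : null),
    setItem: (k, v) => { store[k] = String(v); },
  };
}
```

Guarding the JSON round trip like this (parse on read, stringify on write) is also what keeps the archive from crashing when it is empty or holds older entries.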
Challenges Faced
The first major obstacle I faced was data access. Spotify's Web API requires authentication, making it impossible for me to fetch the full playlist audio features with a simple browser-based setup. I had to seek ChatGPT's help with using the oEmbed endpoint instead, which sadly only returns limited metadata.
Another challenge was mapping metadata to visuals in a way that felt expressive and meaningful. I struggled with the classification logic where I went through rounds of iteration to ensure that the "system identity classification" does not come out too reductive.
The archive system also introduced further technical difficulties. At first the archive did not save entries, and I had to seek ChatGPT's help once again to sort out the storing and loading of past outputs without crashes or performance problems.
Visual Archive
Feedback Moving Forward
Experiment 1: Playlist Visualiser raised conversations about data visualisation. While the visual outputs aren't very similar overall, certain shared aspects appear across them. This raised questions like: Does the algorithm know we're friends? Does it even understand what kind of connection we have?
For future experiments or expanding on this experiment, I can explore the idea of compatibility: how different platforms interpret taste, connection, or "matchability." Opposites might attract, or maybe algorithms just can't capture human dynamics fully. This is definitely a thread I want to pull on further.