Week 2 / Atelier Presentation

Introduction to Algorithms

Key Terms

⑴ Algorithm: a set of step-by-step instructions that tells a machine exactly what to do
⑵ AI (Artificial Intelligence): systems designed to learn from data, recognise patterns, and make decisions
⑶ Large Language Model (LLM): a type of AI trained on vast text corpora, capable of understanding and generating human-like language (e.g. ChatGPT)
⑷ Machine Learning (ML): a set of computational techniques that enable computers to learn from data and improve their performance over time without being explicitly programmed
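To make term ⑴ concrete, here is a minimal illustrative algorithm of my own (not taken from any platform below): explicit, ordered steps a machine can follow exactly.

```python
# A minimal example of an algorithm: explicit, ordered steps.
# This one finds the largest number in a list.
def find_max(numbers):
    """Step through the list, keeping the largest value seen so far."""
    largest = numbers[0]          # Step 1: assume the first item is largest
    for n in numbers[1:]:         # Step 2: examine every remaining item
        if n > largest:           # Step 3: if an item is bigger, remember it
            largest = n
    return largest                # Step 4: report the result

print(find_max([3, 41, 7, 19]))  # → 41
```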

Examples of the types of algorithms popular social media apps use:

⑴ Google: Search Ranking Algorithms
⑵ Instagram: ML-Powered Algorithms
⑶ Reddit: Voting Algorithm
⑷ TikTok: ML-Powered Algorithms
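Of the four, Reddit's voting algorithm is the most transparent: its classic "hot" ranking was once open-sourced. The sketch below is a simplified reconstruction of that published formula, not Reddit's current production code.

```python
# Simplified reconstruction of Reddit's classic open-sourced "hot"
# ranking: vote score is log-dampened, and newer posts get a time bonus.
from math import log10

def hot(ups, downs, epoch_seconds):
    score = ups - downs
    # Logarithm: the first 10 net votes matter as much as the next 100
    order = log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    # Time bonus grows with post age relative to a fixed epoch (Dec 2005),
    # so newer posts start with a higher baseline than older ones
    seconds = epoch_seconds - 1134028003
    return round(sign * order + seconds / 45000, 7)

# A newer post outranks an older one with the same vote balance:
print(hot(100, 10, 1_700_000_000) > hot(100, 10, 1_690_000_000))  # → True
```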

Seminal Thinkers & Scholars

The project is grounded in the work of seminal thinkers who redefine how power, identity, and data operate in contemporary life. These scholars form the intellectual backbone of my exploration of "A-I: Algorithmic Identities."

Michel Foucault
French Historian and Philosopher
Lev Manovich
American Theorist of Digital Culture
John Cheney-Lippold
American Assistant Professor at the University of Michigan
Shoshana Zuboff
American Philosopher and Scholar

We Are Data

One of the seminal readings, John Cheney-Lippold’s book, We Are Data: Algorithms and the Making of Our Digital Selves, meticulously examines how algorithms continuously construct and reconstruct our identities from the vast data generated by our online behaviours.

It explores key themes across its chapters, including the process of categorization, in which complex human experiences are reduced to "measurable types" for computational use; this reduction is a key concept in this project.

Furthermore, the analysis delves into algorithmic control and algorithmic regulation, illustrating how systems dictate perceptions of citizenship and self-worth through dynamic, data-driven assessments, often without user consent or transparency.

Ultimately, the work advocates for a critical re-evaluation of privacy, proposing the idea of "dividual privacy" and suggesting strategies like obfuscation to resist a world where corporate and state entities leverage data to define our existence.

Precedents

Training Humans (Trevor Paglen, Kate Crawford)
AI: More than Human (Google Arts & Culture, Barbican)

Training Humans exposes how facial-recognition datasets are built by collecting, classifying, and commodifying human images, often without consent.

Exploring this work pushed me to confront the uncomfortable reality behind AI systems: what we call "intelligence" is built on vast archives of human faces taken, sorted, and labelled without consent. It made me realise that algorithmic systems don't just analyse identity. They actively produce it by imposing categories, assumptions, and biases onto people who never chose to participate. These systems are grounded in choices about who gets seen, how they are defined, and what is done with their data.

AI: More than Human is an exhibition exploring the evolving relationship between humans and technology.

This exhibition shifted my understanding of AI from a purely technical system to a deeply human story about co-creation. AI is not something outside of us. It is built through our data, our narratives, and our cultural histories. The installations highlight how easily boundaries blur: where human intention ends and machine inference begins. It made me more aware of how AI systems quietly reshape perception and behaviour, not through force but through participation.

Mini Experiment: CHAD

CHAD is a single AI agent (LLM) that I built on Coze after discovering during my TikTok internship that Douyin (the Chinese version of TikTok) uses the same platform to prototype AI-driven products.

It is inspired by a weekly team activity where we used ChatGPT to roast each other’s music playlists and guess the owner. I built CHAD to streamline that entire process: users can simply upload the URL of their playlist, and the agent instantly generates a personalised roast.

What started as an office game became a way for me to explore how AI can both reflect and distort identity through data, turning something personal like a playlist into a site of humor, judgment, and algorithmic personality.

Get Roasted by CHAD (Telegram)
Coze Backend

To build CHAD, I treated Coze as the backend engine that powers the agent’s logic, personality, and behaviour. Instead of coding a full server, I used Coze’s internal tooling to structure the “brain” of the agent in three modular steps.

⑴ Persona & Prompt
This stage functioned like writing the "backend logic" or core algorithm of CHAD. I designed CHAD by breaking its behaviour into a role, skills, response formats, and restrictions. Its skills range from playlist dissection to personality profiling, all shaped by its character and bounded by its restrictions.
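The persona is ultimately structured text. The sketch below shows roughly how CHAD's persona could be assembled from those four parts; the field names are my own shorthand, not Coze's actual schema.

```python
# Illustrative sketch of CHAD's persona prompt, broken into the four
# parts described above. Field names are my own, not Coze's schema.
PERSONA = {
    "role": "CHAD, a brutally funny music critic who roasts playlists",
    "skills": [
        "playlist dissection: infer mood, era, and guilty pleasures",
        "personality profiling: guess the owner's habits from their tracks",
    ],
    "response_format": "one short, punchy roast of 3-5 sentences",
    "restrictions": [
        "only comment on the music, never the person's appearance",
        "keep the roast playful, not cruel",
    ],
}

def build_system_prompt(p):
    """Flatten the structured persona into a single system prompt."""
    lines = [f"Role: {p['role']}"]
    lines += [f"Skill: {s}" for s in p["skills"]]
    lines.append(f"Format: {p['response_format']}")
    lines += [f"Restriction: {r}" for r in p["restrictions"]]
    return "\n".join(lines)

print(build_system_prompt(PERSONA))
```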

⑵ Arrangement
The second layer of the backend is where I refined how the model behaves statistically. I adjusted its generation diversity and its input and output settings to keep CHAD consistent and dramatic.

⑶ Preview & Debug
The final stage is an iterative debugging loop. After each change to the persona or arrangement settings, I tested CHAD against various playlists, checked whether each roast was accurate to the playlist metadata, and evaluated its tone.
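That manual loop could be sketched as an automated check. This is my own illustrative harness, not Coze's preview panel; `roast_playlist` is a placeholder standing in for a call to the deployed agent.

```python
# Sketch of the Preview & Debug loop as an automated metadata check.
def roast_playlist(playlist):
    """Placeholder for the CHAD agent call; returns a canned roast."""
    artists = ", ".join(t["artist"] for t in playlist["tracks"])
    return f"A playlist leaning this hard on {artists}? Bold choice."

def debug_roast(playlist):
    """Check the roast actually references the playlist's metadata."""
    roast = roast_playlist(playlist)
    mentioned = [t["artist"] for t in playlist["tracks"] if t["artist"] in roast]
    return {"roast": roast, "grounded": len(mentioned) > 0}

sample = {"tracks": [{"artist": "Taylor Swift"}, {"artist": "Drake"}]}
print(debug_roast(sample)["grounded"])  # → True
```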

Presentation Feedback

The feedback I received encouraged me to critically explore the tension between algorithmic systems and personal identity, especially in a fast-paced, data-driven society. I found the idea of "algorithmic performers" compelling, but the suggestion to approach these tools with a lighter, more playful attitude was valuable.

While I aim to maintain a critical lens, this made me reconsider how tone can impact engagement and accessibility. It also pushed me to reflect on how algorithms vary across platforms and how this shapes user behavior differently. Going forward, I want to balance critique with curiosity, allowing space for experimentation and unexpected outcomes.