Shoe-Field (Sonya Rapoport)
Shoe-Field is an early example of a home computer being used to create an interactive artwork in a gallery context. The project invited audience participation: visitors were asked about the shoes they were wearing, how much they liked them, and why they bought them, and their responses were entered into a computer in the gallery. The computer then assigned each participant a “shoe-psyche” value on a scale from –2 to +2.
Using a custom program (based loosely on concepts like the inverse-square law from electromagnetic field theory), Rapoport transformed these responses into a generative “map”, originally printed in ASCII/dot-matrix output, which visualised the emotional and psychological associations tied to footwear.
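Rapoport's original program is not documented here, but the inverse-square idea she borrowed can be sketched in a few lines. The following is purely illustrative and not her code: each participant's shoe-psyche value is treated as a point source whose influence on a grid cell falls off with the square of distance, and the summed field is rendered as ASCII density. The glyph palette, the +1 distance offset, and the scaling factor are all my own assumptions.

```javascript
// Illustrative only: NOT Rapoport's actual program. Each participant's
// shoe-psyche value (-2..+2) acts as a point source whose influence on
// a grid cell falls off with 1/d^2; the summed field becomes ASCII.
function fieldMap(participants, width, height) {
  const glyphs = " .:-=+*#"; // low-to-high field strength (assumed palette)
  const rows = [];
  for (let y = 0; y < height; y++) {
    let line = "";
    for (let x = 0; x < width; x++) {
      let field = 0;
      for (const p of participants) {
        const d2 = (x - p.x) ** 2 + (y - p.y) ** 2;
        field += Math.abs(p.psyche) / (d2 + 1); // +1 avoids divide-by-zero
      }
      const i = Math.min(glyphs.length - 1, Math.floor(field * 4));
      line += glyphs[i];
    }
    rows.push(line);
  }
  return rows;
}
```

Cells near a strong response print dense glyphs; distant cells fade to blank space, giving the "force-field" look of the original printouts.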
Research Relevance
Studying Rapoport's work helped me articulate key ideas central to my own project around algorithmic identity and data-driven visualisation.
⑴ Data = Identity
Rapoport showed how personal and emotional dimensions can be transformed into data profiles. This parallels how algorithms today analyse our data, translating our inner preferences and behaviours into identity markers.
⑵ Participation & Co-Creation
The audience was not just a passive viewer but an active contributor: their shoes and responses became part of the dataset and part of the final artwork. This foregrounds the idea that digital identity is co-constructed: not static, not solely “given”, but shaped by both individual choices and systemic coding.
⑶ Use of ASCII / Dot-Matrix
Rapoport’s use of what was then low-fidelity computing (home computers, dot-matrix printouts, ASCII maps) demonstrated how even limited computational tools can become expressive media for identity, proving that aesthetic and conceptual power need not come from high-tech visuals. This inspired me directly: I realised I could use ASCII (one of the earliest and most basic computer character sets) to reinterpret personal data or language through a minimalist, symbolic code, a visual language that strips away polish and shows the skeleton of data.
⑷ Critical Awareness of Systems
By making footwear and personal taste subject
to algorithmic assessment (shoe-psyche number, “force-field” mapping), the project provoked
reflection on how everyday consumer objects and habits can be quantified and re-interpreted. It
mirrors contemporary concerns about surveillance capitalism, profiling, and algorithmic
categorisation.
Experiment 2: Text Reduction
Expanding from Prototype 1, Feed Your Garden, where a user's words plant a seed that blooms into flora in the system, I began questioning what really happens when personal expression enters an algorithmic environment. Rapoport's work became a key reference here. In her work, intimate, subjective data, like how people feel about their shoes, was translated into numerical values and visual maps. This transformation of lived, emotional information into computable form directly echoed my own interest in how identities are flattened when processed by systems.
I started by asking: what really happens when our words meet a system? In my research, I kept returning to how algorithms read patterns and signals, reducing rich human behaviours into simplified categories, metrics, and features. What if I made that reduction visible? That led me to use ASCII as a way to expose this reduction.
Experiment 2 becomes a descendant of both Prototype 1 and Rapoport's methodology: reveal what is usually invisible, the moment where lived experience becomes data, and where identity is quietly reshaped through reduction.
Read more about the algorithmic process breakdown in Experiment 2's Catalogue of Making.
Catalogue of Making (Experiment 2)
I first defined four ASCII “shape families”:
⑴ Short Words (Circles): o O @ •
⑵ Long Words (Squares): # ▓ ░ +
⑶ Starting with Vowels (Triangles): ^ /\ >
⑷ Ending with Punctuation (Lines): | / \
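The four families above can be encoded as a small classifier. This is a hypothetical reconstruction, not the project's actual code: the rule order (punctuation endings checked first, then vowel starts, then length) and the four-letter threshold for "short" are my assumptions.

```javascript
// Hypothetical reconstruction of the four-family word classifier.
// Rule order and the length-4 threshold are assumptions.
const SHAPE_FAMILIES = {
  lines:     ["|", "/", "\\"],     // ends with punctuation
  triangles: ["^", "/\\", ">"],    // starts with a vowel
  circles:   ["o", "O", "@", "•"], // short words
  squares:   ["#", "▓", "░", "+"], // long words
};

function getAsciiSetForWord(word) {
  const w = word.trim();
  if (/[.,!?;:]$/.test(w)) return SHAPE_FAMILIES.lines;
  if (/^[aeiou]/i.test(w)) return SHAPE_FAMILIES.triangles;
  if (w.length <= 4)       return SHAPE_FAMILIES.circles;
  return SHAPE_FAMILIES.squares;
}

getAsciiSetForWord("echo");    // triangle family (vowel start)
getAsciiSetForWord("planted"); // square family (long word)
```

Because each word matches exactly one family, every typed word maps deterministically to a visual vocabulary, which is what lets the grid feel like a "reasoning" system rather than noise.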
I assigned rules to each of the ASCII shape families to classify each word based on length, vowel beginnings, or punctuation endings. From there, I asked for ChatGPT's help to create a dynamic grid that regenerates based on what the user types. The density of the grid increases as more words are entered, while the ASCII size shrinks as input grows, making the visual feel increasingly compressed, like the process of reduction. The key function (getAsciiSetForWord) determines which ASCII family the system will use. With ChatGPT's help I managed to make regeneratePattern() rebuild the entire field of characters each time, ensuring that the grid visually reflects the user's input. Through these rules, the technical process reenacts the computational reduction that inspired the experiment.
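The behaviour described above could look something like the following sketch. It is not the project's actual regeneratePattern(): the base grid of 24 columns, the growth rate, the 6px font-size floor, and the single-character triangle glyphs (the two-character "/\" is simplified to keep cells one column wide) are all my assumptions.

```javascript
// Hypothetical sketch of regeneratePattern(): the whole field is
// rebuilt on every call, the grid densifies as more words arrive,
// and a font-size value shrinks to suggest compression.
function getAsciiSetForWord(word) {
  // compact version of the four-family classifier (rule order assumed);
  // triangles use single glyphs here so every cell stays one char wide
  if (/[.,!?;:]$/.test(word)) return ["|", "/", "\\"];
  if (/^[aeiou]/i.test(word)) return ["^", "<", ">"];
  if (word.length <= 4) return ["o", "O", "@", "•"];
  return ["#", "▓", "░", "+"];
}

function regeneratePattern(inputText) {
  const words = inputText.split(/\s+/).filter(Boolean);
  const cols = 24 + words.length * 2;              // denser with more input
  const rows = Math.ceil(cols / 3);
  const fontSize = Math.max(6, 24 - words.length); // glyphs shrink as input grows
  const grid = [];
  for (let r = 0; r < rows; r++) {
    let line = "";
    for (let c = 0; c < cols; c++) {
      // rebuild every cell from scratch: cycle through the words and
      // draw a glyph from whichever family each word falls into
      const word = words.length ? words[(r * cols + c) % words.length] : "";
      const set = word ? getAsciiSetForWord(word) : [" "];
      line += set[(r + c) % set.length];
    }
    grid.push(line);
  }
  return { grid, fontSize };
}
```

Calling this on every keystroke discards the previous field entirely, so the visible pattern is always a pure function of the current input, which is what makes the reduction legible to the user.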
Visual Archive
Challenges Faced
The biggest difficulty was making regeneratePattern() rebuild the grid properly. ASCII characters behave unpredictably at different scales, so keeping the grid readable while still reacting dynamically to user input required repeated recalibration of spacing, density, and cell size.
Another issue was designing rules that feel “algorithmic” without becoming overly literal. Early versions produced patterns that looked random rather than meaningfully reduced, forcing me to refine classification logic (vowels, punctuation, length) so users could sense the system’s “reasoning.”
Reducing text without drifting into sentiment analysis or real semantic interpretation meant constantly rethinking the rules to ensure the system behaves like a classifier, not an interpreter.