Eleven Electronic Media Questions for Artists

2015-07-31 12.01.23

Q1. How do you like to communicate? Rank everything from voice phone call, texting, email, in person, etc. Top five only.

Q2. How do you search for art on the internet? What do you find?

Q3. What is your oldest digital file? How do you store it?

Q4. What is your oldest extant artwork? Where is it? How is it stored or displayed?

Q5. Do you have an art documentation system, and if so, what? Have you seen other art documentation systems that you liked? Do you remember how you made work ten years ago?

Q6. Search for yourself. Google, Facebook, etc. What do you see? Do you see anything missing?

Q7. Where is your oldest digital self hidden?

Q8. Do you have any media policies? If yes, detail. A media policy is an action or protocol that you put in place that regulates media. For instance, not having a Facebook account is a media policy. Only watching two hours of television a night is a media policy. Figuring out when to post on Instagram (Instagram Prime Time) is a media policy. Etc, etc.

Q9. List all current media and social media accounts accessed via the internet. Specify two or three favorites. Do you have any aliases? If so, detail to your desired comfort level. Have you ever had an online account suspended or deactivated? If so, detail as above. Do you archive your social media history or track metadata? If so, detail.

Q10. Is your public representation a result of a deliberate strategy or strategies on your part, or is this just internet magic? Discuss. Do you own FIRSTNAME.LASTNAME?

Q11. What simple things do you do that are likely to keep working in the future? Especially in areas like password managers, image file formats, archival data, and online services.

Something great from the New Yorker: The GNU Manifesto turns 30 by Maria Bustillos. Previously as Dream Freely. See Rafael Lozano-Hemmer’s Best practices for conservation of media art from an artist’s perspective for art best practices, and Lynda Schmitz Fuhrig at the Smithsonian Institution Archives in Preserving Your Treasures.

Thirty Years of the GNU Manifesto

The free software movement’s entry into its 30th year is an ideal time to reflect on previous successes and prioritize future work.

The success has been massive. The software industry has been irrevocably changed from a model of selecting the best-fit commodity proprietary software component for a given task into a knowledge-based model of crafting known-good components into custom software-hardware machines: changing the software to fit the task, not the other way around. The idea of contributing back to development communities and empowering others has moved from the periphery to a central part of all software development. It’s not all rosy; some obstinate people and institutions still don’t get it. But even ten years of perspective gives confidence that the worst structural barriers of the old proprietary software model have been banished for good.

There is still much to do, with both the social and the technology aspects of the free software movement.

The software aspects.

The software universe is expanding at an exponential rate. Software components are being combined into custom systems with exponentially increasing complexity. Managing this complexity is the top technical priority of the free software movement today. Solutions include conscientious documentation practices and the development and incorporation of new visual tools into software engineering practice that supplement the usual perception of source code as literary text. Visual grammars that change perception, comprehension, and analysis of software sources exist today within proprietary confines and must be surpassed by yet-to-be-devised free forms. The free software community must lead this effort and make sure that the tools and visual solutions adopted are free for all to use and fully model the capabilities of free software ecosystems.

The organizational and social aspects.

Free and open software communities are not fixed forms, and need to evolve as social movements. Experiments with new organizational forms that encourage equal participation regardless of gender, generational cohort, geographic region, or corporate sponsorship should be encouraged.

Why Algo

I exploit free software algorithms and commodity hardware to remove asymmetries in computational power. This aesthetic of computing makes media machines that first represent, then simulate, and finally construct reality in a way that can demystify control structures and make transparent instruments of exploitation. I use these media machines to explore perceptual limits, search for hidden structural beauty, and reveal new aesthetic domains or conceptual territories that are otherwise obscured by our normal human sensory apparatuses.

This work is created from a set of experiments that seek to combine computer science with media. These experiments are collectively called The Machine is Learning, and consist of images generated by training computers to watch and analyze print, television, film, and social media. Works created from The Machine is Learning incorporate many mediums and cross multiple disciplines, embrace the failures of machine seeing, the proven weaknesses of human perception, and the racial and gender biases encoded in mass media. Some examples:

1. The Machine is Learning the “The Man Trap”
Samples the characters and the story of the Star Trek episode “The Man Trap,” and uses facial recognition algorithms to mark up this source so that the computer and its facial recognition algorithms become a new character in an augmented story, creating an intelligent automated protagonist. This new character emphasizes humanoid elements like eyes and faces, allowing the crew to be perceived “as if” from the viewpoint of the native life form, or even as an omnipotent force present simultaneously in the consciousness of the native life form and all the Star Trek crew members.

2. All the Uhuras (left/right), All the Uhuras (center)
Samples the frames from two seasons of Star Trek episodes, identifies all the frames containing Uhura, the series’ only African American female regular-cast character, sorts the resulting frames by the position of Uhura’s face in the frame, and arranges these samples as cropped portrait photos forming a broken grid on two large sheets of paper.
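Detector aside, the sort-and-arrange step can be sketched in a few lines. The detections below are stubbed sample data (hypothetical frame IDs and face positions), since the actual detector output and episode frames are not in the source; only the sorting and grid logic is shown.

```python
# Sketch of the arrangement step behind "All the Uhuras": given
# per-frame face detections, sort by the face's horizontal position
# and lay the crops out row by row into a grid.

def arrange_by_face_position(detections, columns):
    """Sort (frame_id, face_x) pairs by horizontal face position,
    then chunk the ordered list into rows of `columns` cells."""
    ordered = sorted(detections, key=lambda d: d[1])
    return [ordered[i:i + columns] for i in range(0, len(ordered), columns)]

# Hypothetical detections: (frame_id, normalized face x-position 0..1).
samples = [("ep2_f311", 0.82), ("ep1_f004", 0.10),
           ("ep1_f250", 0.55), ("ep2_f090", 0.33)]

grid = arrange_by_face_position(samples, columns=2)
# First row holds the left-most faces, last row the right-most.
```

The real piece would feed detector output (e.g. bounding-box centers from a face detector run over every frame) into the same sort, and the “broken grid” comes from rows that end short when the frame count is not a multiple of the column count.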

3. Equal Weight Uhuras
Samples six characters from two seasons of Star Trek episodes, and re-constructs two seasons in a condensed form, where all the characters have the same amount of “screen time” as the character Uhura, with the added proviso that all the characters are shown talking equally to each other in a random fashion, or conversing alone with a representation of space.

4. Valentine Homography
Two-channel video art installation. Two 34-inch LED televisions, wall-mounted so they touch in landscape orientation, play two synchronized 19:44-minute 1080p loops of a metadata composition generated by combining portrait photography, Tumblr and Pinterest social media image scrapes, and computer vision software.

I am experimenting with color contour detection in images, using the SURF homography algorithm in the Open Source Computer Vision (OpenCV) software library. I apply this algorithm, as a Trevor Paglen-defined seeing machine, to social media images: profile portraits, liked images, and disliked images. This computational lens simulates computer matchmaking in a visual form that is but one instance in a sea of many thousand “known-good” or positive test cases that the machine learning behind websites such as Facebook, Tinder, Grindr, OkCupid, etc. must crunch to create a single user’s match.

To create this positive test case, I collaborated with my real-life partner to stage ten normcore portraits counter to the prevailing profile portraiture aesthetic, and collected forty positive and ten negative images from each of our preferred social media platforms. These input images were used by the machine learning system to simulate what a positive match “looks like” when using its “match the humans” algorithm.

On the first screen, an image is matched with an image on the second screen. The results of this match are visualized as Edward Tufte-inspired blue circles, indicating the parts of the image with enough color contrast to serve as useful points of comparison. As the algorithm expands the size of the points, less accurate and lower-frequency blue circles appear. Blue circles on the first screen are “matched” to blue circles on the second, with red lines marking the connecting route in between. Often, the matching lines seem nonsensical and wrong to the human observer.
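The cross-screen pairing can be sketched as brute-force nearest-neighbor matching over keypoint descriptors, which is how OpenCV typically pairs SURF features before estimating a homography. The two-dimensional descriptor vectors below are toy values, not real SURF output (actual SURF descriptors are 64- or 128-dimensional), so this is a minimal sketch of the matching logic only.

```python
# Toy version of the cross-screen matching: each blue circle carries
# a descriptor vector, and every circle on screen one is paired with
# the nearest descriptor on screen two (what OpenCV's brute-force
# matcher does for SURF features). All vectors here are illustrative.

def nearest_matches(desc_a, desc_b):
    """For each descriptor in desc_a, return the index of the closest
    descriptor in desc_b by squared Euclidean distance."""
    def dist2(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))
    return [min(range(len(desc_b)), key=lambda j: dist2(a, desc_b[j]))
            for a in desc_a]

screen_one = [(0.9, 0.1), (0.2, 0.8)]    # circle descriptors, screen 1
screen_two = [(0.25, 0.75), (1.0, 0.0)]  # circle descriptors, screen 2

links = nearest_matches(screen_one, screen_two)
# links[i] == j means a red line runs from circle i on screen one
# to circle j on screen two.
```

Matches that look “nonsensical and wrong” to a human observer arise exactly here: descriptor distance measures local color-contrast similarity, not semantic similarity, so visually unrelated regions can still be nearest neighbors.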

To structure the generated images, I use a composition of blinks, winks, slow fades, and swipes right. These are contemporary gestures that mobile interfaces and applications associate with love and liking. I re-purpose and re-imagine these repertoires as a video editing grammar. Experimental film and video artists like James Benning, Takeshi Murata, and Nicolas Provost have also used editing grammars to explore new art spaces.

5. Asama Loops
A combinatoric iconography machine composed of four black-and-white images, one composite black-and-white image, three pure colors, and three modified color compositions. These images are combined with carousel swipes, synthetic vertical rolls, and one- and two-minute still loops to represent a previous conceptual work on paper (Asama OG) in video art form.