I exploit free software algorithms and commodity hardware to remove asymmetries in computational power. This aesthetic of computing makes media machines that first represent, then simulate, and finally construct reality in a way that can demystify control structures and render instruments of exploitation transparent. I use these media machines to explore perceptual limits, search for hidden structural beauty, and reveal new aesthetic domains or conceptual territories that are otherwise obscured by our normal human sensory apparatus.
This work is created from a set of experiments that seek to combine computer science with media. These experiments are collectively called The Machine is Learning, and consist of images generated by training computers to watch and analyze print, television, film, and social media. Works created from The Machine is Learning incorporate many mediums, cross multiple disciplines, and embrace the failures of machine seeing, the proven weaknesses of human perception, and the racial and gender biases encoded in mass media. Some examples:
1. The Machine is Learning “The Man Trap”
Samples the characters and story of the Star Trek episode “The Man Trap,” and uses facial recognition algorithms to mark up this source so that the computer and its facial recognition algorithms become a new character in an augmented story: an intelligent automated protagonist. This new character emphasizes humanoid elements like eyes and faces, allowing the crew to be perceived “as if” from the viewpoint of the native life form, or even as an omnipotent force present simultaneously in the consciousness of the native life form and of all the Star Trek crew members.
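A minimal sketch of this marking-up stage, assuming OpenCV’s stock Haar cascade face detector and a placeholder video file (the work’s actual detector and pipeline may differ):

```python
import cv2

# Sketch: detect faces in each frame of a video and draw the machine's
# attention as rectangles. The cascade is OpenCV's stock frontal-face
# model; the video path is a hypothetical stand-in for the episode.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture("the_man_trap.mp4")  # hypothetical source file

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow("augmented story", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```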
2. All the Uhuras (left/right), All the Uhuras (center)
Samples the frames from two seasons of Star Trek episodes, identifies every frame containing Uhura, the only African American woman in the series’ regular cast, sorts the resulting frames by the position of Uhura’s face within the frame, and arranges these samples as cropped portrait photos forming a broken grid on two large sheets of paper.
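The sorting step might be sketched like this, assuming face detection has already reduced each qualifying frame to a bounding box (the function name and crop margin are illustrative):

```python
# Sketch of the sort-and-crop step, assuming detection has already produced
# (frame, (x, y, w, h)) pairs for every frame containing Uhura; frames are
# NumPy image arrays.
def sort_and_crop(samples):
    # Order samples left to right by the horizontal center of the face box.
    samples.sort(key=lambda s: s[1][0] + s[1][2] / 2)
    crops = []
    for frame, (x, y, w, h) in samples:
        pad = w // 4  # loose margin around the face for a portrait crop
        crops.append(frame[max(0, y - pad):y + h + pad,
                           max(0, x - pad):x + w + pad])
    return crops
```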
3. Equal Weight Uhuras
Samples six characters from two seasons of Star Trek episodes and reconstructs those seasons in a condensed form in which every character has the same amount of “screen time” as Uhura, with the added proviso that the characters are shown talking to each other equally, paired at random, or conversing alone with a representation of space.
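A rough sketch of this pairing logic, with a hypothetical cast list and slot count standing in for the actual reconstruction:

```python
import random

# Every character gets the same number of speaking slots, paired off at
# random; a character left without a distinct partner converses with a
# representation of space. Cast list and slot count are illustrative.
CHARACTERS = ["Uhura", "Kirk", "Spock", "McCoy", "Sulu", "Scott"]

def schedule(slots_per_character):
    pool = CHARACTERS * slots_per_character
    random.shuffle(pool)
    shots = []
    while pool:
        a = pool.pop()
        idx = next((i for i, c in enumerate(pool) if c != a), None)
        b = pool.pop(idx) if idx is not None else "space"
        shots.append((a, b))
    return shots

print(schedule(4))
```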
4. Valentine Homography
Two-channel video art installation: two 34-inch LED televisions, wall-mounted edge-to-edge in landscape orientation, playing two synchronized 19:44 1080p loops of a metadata composition generated by combining portrait photography, Tumblr and Pinterest social media image scrapes, and computer vision software.
I am experimenting with color contour detection in images, using SURF feature detection and homography estimation from the Open Source Computer Vision (OpenCV) software library. I apply this algorithm as a seeing machine, in Trevor Paglen’s sense, to social media images: profile portraits, liked images, and disliked images. This computational lens simulates computer matchmaking in a visual form that is but one instance in a sea of many thousand “known-good” or positive test cases that the machine learning behind websites such as Facebook, Tinder, Grindr, OkCupid, Match.com, and Jdate.com must crunch to create a single user’s match.
To create this positive test case, I collaborated with my real-life partner to stage ten normcore portraits that run counter to the prevailing profile portraiture aesthetic, and collected forty positive and ten negative images from each of our preferred social media platforms. These input images were used by the machine learning system to simulate what a positive match “looks like” when running its “match the humans” algorithm.
On the first screen, an image is matched with an image on the second screen. The results of this match are visualized as Edward Tufte-inspired blue circles, indicating the parts of the image with enough color contrast to serve as useful points of comparison. As the algorithm expands the size of the points, less accurate, lower-frequency blue circles appear. Blue circles on the first screen are “matched” to blue circles on the second with red lines marking the connecting route between them. Often, the matching lines seem nonsensical and wrong to a human observer.
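A minimal sketch of this detect-match-draw pipeline, assuming OpenCV’s Python bindings; the file names, Hessian threshold, and ratio-test cutoff are placeholders rather than the work’s actual parameters, and SURF itself ships only in opencv-contrib builds with the nonfree modules enabled:

```python
import cv2
import numpy as np

# Load the two screens' images (placeholder file names).
img1 = cv2.imread("portrait_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("portrait_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect SURF keypoints: blob-like regions with enough local contrast
# to serve as points of comparison.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)

# Keep only descriptor matches that pass Lowe's ratio test.
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
assert len(good) >= 4, "homography needs at least four good matches"

# Estimate the homography mapping one screen's points onto the other's;
# the RANSAC mask flags the geometrically consistent pairs.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Blue circles on keypoints, red lines along the connecting routes (BGR).
vis = cv2.drawMatches(img1, kp1, img2, kp2, good, None,
                      matchColor=(0, 0, 255),
                      singlePointColor=(255, 0, 0),
                      matchesMask=mask.ravel().tolist())
cv2.imwrite("match_vis.png", vis)
```

Even after RANSAC discards geometrically inconsistent pairs, the surviving matches track local texture rather than semantic content, which is one reason the connecting lines can look nonsensical to a human observer.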
To structure the generated images, I use a composition of blinks, winks, slow fades, and right swipes. These are contemporary gestures that mobile interfaces and applications associate with love and liking. I re-purpose and re-imagine these repertoires as a video editing grammar. Experimental film and video artists like James Benning, Takeshi Murata, and Nicolas Provost have also used editing grammars to explore new art spaces.
5. Asama Loops
A combinatoric iconography machine composed of four black-and-white images, one composite black-and-white image, three pure colors, and three modified color compositions. These images are combined with carousel swipes, synthetic vertical rolls, and one- and two-minute still loops to represent a previous conceptual work on paper (Asama OG) in video art form.