Two Channel Video Art Installation. Two 34 inch LED televisions, wall-mounted so they touch in landscape orientation; two synchronized 19:44 minute 1080p loops playing a metadata composition generated by combining portrait photography, Tumblr and Pinterest social media image scrapes, and computer vision software.
Experiments with color contour detection in images, using SURF feature matching and homography estimation from the Open Source Computer Vision (OpenCV) software library. I apply this algorithm, in Trevor Paglen's sense of a seeing machine, to social media images: profile portraits, liked images, and disliked images. This computational lens renders computer matchmaking in visual form, showing one instance among the many thousands of “known-good” positive test cases that the machine learning behind websites such as Facebook, Tinder, Grindr, OkCupid, Match.com, and Jdate.com must crunch to produce a single user’s match.
To create this positive test case, I collaborated with my real-life partner to stage ten normcore portraits counter to the prevailing profile portraiture aesthetic, and collected forty positive and ten negative images from each of our preferred social media platforms. These input images were used by the machine learning system to simulate what a positive match “looks like” when running its “match the humans” algorithm.