Looking Eye
Looking Eye is the connected documentary I am working on for a class called Connected Documentary, which inspired me to take eye tracking into the realm of documentary filmmaking.
It will use eye tracking glasses to capture and edit footage dynamically. The editing may come from a content-analyzing algorithm, or from dynamic web editing by the viewer. The subject of the documentary is the language of the eye.
Looking Eye presents the image that an eye receives and the image it creates. It attempts to explore the difference between the seen and the assumed. The viewer is able to manipulate the eye movement, and in that way engages with the language of the eyes, something we all employ but seldom acknowledge or understand.
Using eye tracking technology, the video is captured along with data of the eye's movement around the scene, and presented as a collage of what appears, inviting the viewer to infer what is perceived, or to re-envision the eye movement.
The language of the eyes, both in direct communication and in the way we use it to inform ourselves, is a huge topic. At first I was interested in addressing communication with the eyes, but I have discovered a lot of interesting ideas and texts about using the eyes to study mental patterns, thought, and culture. There is a lot that can be revealed through the eyes, and explained through this language we are all familiar with.
Update:
story-wise
This was originally supposed to be all live and dynamic, but I am really interested in the push toward a story arc.
I don't know how to reconcile the two. I can't let go of the live and dynamic approach, because it is the only way eye tracking as editing works for me. I don't see the point otherwise, though I am sure one could be there. After thinking of it this way, I think I should stick with the live idea and see what emerges as room for eye tracking in the viewing, or for editing based on a story arc.
tech-wise
I have been trying to make it live on a small Windows tablet that is internet-enabled and has two USB ports: lamer than, but similar to, the Raspberry Pi idea, while the Raspberry Pi remains unavailable. I have also heard about some good examples of eye tracking using only the camera video feeds from cell phones with two-way cameras, and have been working on that. And I am still hacking my Bluetooth cameras, because they are small and could work via phone. I met with somebody who may be able to help me do it, but the process so far has been slow.
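One possible browser prototype for the camera-only direction (just a sketch of the idea, and an assumption on my part rather than what I am running now) would be a webcam gaze estimation library such as WebGazer.js, which predicts a rough on-screen gaze point from the front camera feed:

```javascript
// Sketch only: webcam-based gaze estimation in the browser with WebGazer.js
// (assumes the webgazer.js script has been included on the page).
webgazer.setGazeListener(function (data, elapsedTime) {
  if (data == null) { return; }             // no prediction for this frame
  console.log('gaze at', data.x, data.y);   // predicted screen coordinates
}).begin();
```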
web-interaction-wise
In this realm I am torn about the meaning of the interaction. I am starting with a mouse-based approach: moving the mouse is like moving the eyes, superseding the camera feed's eye position. So if the video is showing a certain eye position, the viewer can change it. I think this will be good enough for now, but I want the viewer to be able to edit the footage, to store pieces of what is collected, and to arrange them.
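A minimal sketch of what I mean by the mouse superseding the gaze, assuming the gaze feed arrives as simple x/y coordinates (the handler name is a placeholder, not my actual code):

```javascript
// The displayed "eye position" follows the gaze feed until the viewer
// moves the mouse, at which point the mouse takes over.
var gazePoint = { x: 0, y: 0 };   // latest point from the eye tracking feed
var viewerPoint = null;           // set once the viewer moves the mouse

// placeholder hook for incoming gaze data
function onGazeData(x, y) {
  gazePoint = { x: x, y: y };
}

document.addEventListener('mousemove', function (e) {
  viewerPoint = { x: e.clientX, y: e.clientY };   // the viewer overrides the feed
});

// whichever point is active drives the reveal effect each frame
function currentEyePosition() {
  return viewerPoint || gazePoint;
}
```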
My goal for this week is to rewrite the JavaScript code so that the scan is visible: the video is revealed based on eye location, then paused and blended into the current video that is showing the eye movement.
And to combine it with the eye tracker that is attached to the computer.
Then to do one interview, where the eye tracking data is synced after the fact with the (wide-angle) footage.
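The sync itself would basically be matching gaze samples to the footage by timestamp. A minimal sketch, assuming the tracker exports samples as { t, x, y } with t in seconds from a shared start point (that format is an assumption):

```javascript
// gazeSamples: array of { t, x, y }, sorted by t (seconds from a shared start).
// Returns the sample closest to the footage's current playback time, so the
// recorded gaze can be replayed over the wide-angle interview footage.
function gazeAtTime(gazeSamples, videoTime) {
  var best = gazeSamples[0];
  for (var i = 1; i < gazeSamples.length; i++) {
    if (Math.abs(gazeSamples[i].t - videoTime) < Math.abs(best.t - videoTime)) {
      best = gazeSamples[i];
    }
  }
  return best;
}

// during playback: var p = gazeAtTime(samples, video.currentTime);
```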
On the web side, I have been using seriously.js, but it has been hard to decipher some of it. Since I only need one of the effects, and I need to manipulate it a lot, I am trying to figure out how to write it myself in JavaScript; I am at an early stage of that process. The idea is to use the coordinates of the gaze to reveal the image, and to slowly freeze the frame in each area as the coordinates move away, so that the gaze is the only real-time footage of the video, and the past areas of gaze fade into stillness and eventually into white.
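A rough sketch of that effect with a plain canvas instead of seriously.js (just how I imagine it could work, not a finished implementation): each frame, wash the whole canvas with a little white, then draw the live video only inside a circle around the current gaze point, so areas the gaze has left freeze and slowly whiten.

```javascript
var video = document.querySelector('video');
var canvas = document.querySelector('canvas');
var ctx = canvas.getContext('2d');
var RADIUS = 80;    // size of the live gaze spot (arbitrary)
var FADE = 0.03;    // how quickly abandoned areas wash out to white (arbitrary)

function draw() {
  // everything outside the current gaze freezes and slowly fades toward white
  ctx.fillStyle = 'rgba(255, 255, 255, ' + FADE + ')';
  ctx.fillRect(0, 0, canvas.width, canvas.height);

  // reveal real-time video only inside a circle around the gaze point
  var p = currentEyePosition();   // mouse/gaze point from the sketch above
  ctx.save();
  ctx.beginPath();
  ctx.arc(p.x, p.y, RADIUS, 0, Math.PI * 2);
  ctx.clip();
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  ctx.restore();

  requestAnimationFrame(draw);
}
requestAnimationFrame(draw);
```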