07 August 2011

From Internet, to Interact: Glyph Graves' "Composition in Realities"

Twenty years ago, on 6 Aug 1991, Tim Berners-Lee created the world's first website, using HTML, a markup language he derived from SGML (which had long been used to structure all sorts of documents). The Internet, which had formerly been used mainly to transmit data and simple email, began its transformation into what we understand it to be today.

Fittingly, today Glyph Graves opened a new exhibition, Composition in Realities, of pieces that combine the physical world and the virtual world. In his words, "It does something other than cross the RL/SL boundaries. It ignores them." Many of the works take real-time data from various RL sources and transform it into responsive shapes. For example, one piece draws on data about wind at locations around Antarctica. Another uses the solar wind's interaction with the earth's magnetic field. There are a few more pieces, but the showstopper is "Faceted Existence," in which Glyph uses a Kinect to control a panel of spheres that move outward to render his face -- changing as he moves, live. It is a kind of performance piece. ColeMarie Soleil made a video (better than my pile of stills) that shows it in action:


You can't see it at the small scale of the video, but the spheres aren't simply changing color: actually, a video is playing on them. I have a couple of photos that give the idea (click to enlarge):



In a sense, a data source is a data source is a data source; from that perspective, it doesn't make much difference whether one uses wind sensors or a Kinect to control prims. But this particular bridge has been much awaited: the human body's entrance into virtual worlds, to create (again in Glyph's words) a secondary avatar. "Faceted Existence" is part of the effort to take the third step in this direction, after the manual creation of fixed animation files for standing, walking, dancing, and so forth, and then the use of motion capture (mocap) technology to do the same. The move from static files to streaming data is a significant one. Already this year, the Kinect has been used to trigger preexisting avatar actions (like gestures and flying) within Second Life. Glyph's piece goes an important step further by connecting SL to arbitrary human actions not already within the program. This is crucial for the final stage, in which people's avatars can move within SL in accordance with their RL bodies.
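To make the plumbing concrete, here is a minimal sketch of what such a bridge might look like. It is not Glyph's actual code: the endpoint URL is hypothetical (in LSL, a script can obtain one with llRequestURL and read each POST in its http_request event), and the sensor read is a stub standing in for a real Kinect SDK call, so the sketch runs without hardware.

    import json
    import math
    import time
    import urllib.request

    # Hypothetical URL handed out by an in-world LSL script via llRequestURL().
    SL_ENDPOINT = "http://example-sim.example.com:12046/cap/abc123"

    def read_head_position(t):
        # Stand-in for a real Kinect SDK call; the values drift sinusoidally
        # so the sketch produces changing data without any hardware attached.
        return {"x": 0.1 * math.sin(t), "y": 0.1 * math.cos(t), "z": 1.6}

    def main():
        start = time.time()
        while True:
            sample = read_head_position(time.time() - start)
            body = json.dumps(sample).encode("utf-8")
            req = urllib.request.Request(
                SL_ENDPOINT, data=body,
                headers={"Content-Type": "application/json"})
            try:
                urllib.request.urlopen(req, timeout=2)
            except OSError:
                pass  # drop the frame; the next sample supersedes it anyway
            time.sleep(0.2)  # a stream of live samples, not a static file

    if __name__ == "__main__":
        main()

On the SL side, the receiving script would map each sample onto the panel of spheres, for instance with llSetLinkPrimitiveParamsFast; the point is only that the prims (or eventually the avatar) are driven by arbitrary live motion rather than a canned animation.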

But let me return to my comment above: data is data. What difference does it make whether it's prims that move, or an avatar? The issue presents itself as psychological: on some level or other, to one degree or another, most of us identify with our avatars -- not necessarily in the sense of feeling that the avatar is an expression of selfhood, but rather that we experience our avatar's "experience" as our own. When a friend's avatar gives my avatar a hug, I feel hugged. When I go to a sim with large-scale builds like Kolor's Studio, I'm overwhelmed by their enormousness, even though in physical reality they're only images on a computer screen not even a foot high.

This sense of experiential identity is, however, not just psychological: it's neurological. Evidence suggests that human brains have "mirror neurons," which generate approximately the same pattern whether a person is actually doing something or watching someone else do it. If a movie character steps on a nail, part of our experience is the brain triggering the same pattern of neural activity as it would if we had stepped on the nail ourselves; when we watch someone dance, our brains are also dancing. At some point, we may physically make the gesture of hugging as another person does the same, our avatars will hug, and we will feel it with even less distance than we do now.

And so we stood there, some dozen and a half of us at the opening, transfixed for an hour watching Glyph's face doing almost nothing (he was not making exaggerated movements for show). We were, so to speak, facing the future of virtual worlds: when the internet provides new ways to interact.

1 comment:

  1. Nice post Dividni. I'm finding myself increasingly intrigued by the body-identification aspect of avatars in Second Life. This has come out of wondering why seeing pics or machinima of mine and other people's work never seems to give the same experience, even though you are still only looking at a flat screen when you are using your avatar. The difference seems to me to be the additional dimension provided by a sense of body identification.

    This difference becomes an important factor especially when trying to convey work to people outside of the SL circle.

    On a lighter note, it's fun doing things I haven't done before. The other day Mabs was hanging around close to the front of Faceted Existence... so I opened my prim mouth, leant forward, and consumed her... and while I was manoeuvring it, it really did feel like my avatar.
