08 August 2011

Holy %^#@!! Linden Lab finally added region environment controls!

For a while, many of us have wanted Linden Lab to make it feasible for landowners to establish a particular Windlight setting on their land. Not long ago, the team behind Phoenix made this possible, at least for the Phoenix and (I believe) Firestorm viewers. Well, in today's announcement from Linden Lab, after all the silly stuff, is this:
You’ll also notice some changes relating to Region Environment Settings. These give region managers the ability to customize environment settings — like atmosphere, sun, time of day, and cloud coverage. Visiting Residents will automatically see those settings, unless they’ve already chosen to use a personal override. For example: a Gothic castle might have a gloomy, spooky environment — and a tropical island might be bright and sunny. This feature is sometimes referred to by its previous moniker, Windlight Environment Settings. Check out the Knowledge Base for more info.
This is great news! Yes, it took them long enough; and because it's region-based rather than parcel-based, Phoenix still has the edge in flexibility (Phoenix can apply different Windlights depending on altitude, too). On the other hand, it doesn't require handing visitors a Windlight file. So on balance, definitely something for the SL art world to cheer about!

There's also an item about enabling parcels to have hidden avatars. Nice for people having sex, yeah. But somehow I suspect that people like Bryn Oh and Oberon Onmura (who's recently been trying things with bots) will find ways to make use of this feature....

07 August 2011

From Internet, to Interact: Glyph Graves's "Composition in Realities"

Twenty years ago, on 6 August 1991, Tim Berners-Lee published the world's first website, built with HTML, a markup language he derived from SGML (which was already used for all sorts of texts). The Internet, which had formerly been used mainly to transmit data and simple email, began its transformation into what we understand it to be today.

Fittingly, today Glyph Graves opened Composition in Realities, a new exhibition of pieces that combine the physical world and the virtual world. In his words, "It does something other than cross the RL/SL boundaries. It ignores them." Many of the works take real-time data from various RL sources and transform it into responsive shapes. For example, one piece draws on data about wind in locations around Antarctica. Another uses the solar wind's interaction with the earth's magnetic field. There are a few more, but the showstopper is "Faceted Existence," in which Glyph uses a Kinect to control a panel of spheres that move out to show his face -- changing as he moves, live. It is a kind of performance piece. ColeMarie Soleil made a video (better than my pile of stills) that shows it in action:


You can't see it at the small scale of the video, but the spheres aren't simply changing color: actually, a video is playing on them. I have a couple of photos that give the idea (click to enlarge):



In a sense, a data source is a data source is a data source; from that perspective, it doesn't make much difference whether one uses wind sensors or a Kinect to control prims. But this particular bridge has been long awaited: the human body's entrance into virtual worlds, to create (again in Glyph's words) a secondary avatar. "Faceted Existence" is part of the effort to take the third step in this direction, after the manual creation of fixed animation files for standing, walking, dancing, and so forth, and then the use of motion capture (mocap) technology to do the same. The move from static files to streaming data is a significant one. Already this year, the Kinect has been used to trigger preexisting avatar actions (like gestures and flying) within Second Life. Glyph's piece goes an important step further by connecting SL to arbitrary human actions not already within the program. This is crucial for the final stage, in which people's avatars will move within SL in accordance with their RL bodies.
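
For the technically curious, here's my guess at the general shape of the plumbing behind the data pieces: a tiny outside-world relay that polls a live feed, rescales the numbers, and hands them to an in-world script over HTTP. To be clear, this is a hedged sketch, not Glyph's actual code -- the feed URL, field name, and scaling are stand-ins I've invented -- though llRequestURL and HTTP-in are the real SL mechanisms such a relay would talk to.

```python
"""A minimal sketch (my assumptions, not Glyph's code): poll a
real-world data feed and stream the values to an in-world script."""
import json
import time
import urllib.request

# Hypothetical: a public JSON feed of wind readings near Antarctica.
FEED_URL = "https://example.org/antarctica/wind.json"
# In-world, an LSL script gets its own inbound URL via llRequestURL();
# this placeholder stands in for whatever URL it reports back.
SL_INWORLD_URL = "https://example-sim.example.com/cap/inworld-script"

def fetch_wind_speed() -> float:
    """Pull one reading from the (hypothetical) feed."""
    with urllib.request.urlopen(FEED_URL, timeout=10) as resp:
        data = json.load(resp)
    return float(data["wind_speed_ms"])  # assumed field name

def to_prim_height(speed: float, lo: float = 0.0, hi: float = 40.0) -> float:
    """Map a wind speed onto a 0-10 m prim displacement."""
    clamped = max(lo, min(hi, speed))
    return 10.0 * (clamped - lo) / (hi - lo)

def push_to_sl(value: float) -> None:
    """POST the scaled value; the in-world script parses the body
    and repositions its prims accordingly."""
    body = str(round(value, 2)).encode("utf-8")
    req = urllib.request.Request(SL_INWORLD_URL, data=body, method="POST")
    urllib.request.urlopen(req, timeout=10)

if __name__ == "__main__":
    while True:  # stream, don't snapshot
        push_to_sl(to_prim_height(fetch_wind_speed()))
        time.sleep(5)  # be gentle; SL throttles HTTP-in anyway
```

The Kinect case has exactly the same shape: swap the weather feed for a stream of depth-camera readings and the relay barely changes -- which is part of my point that data is data.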

But let me return to my comment above: data is data. What difference does it make whether it's prims that move, or an avatar? The issue presents itself as psychological: on some level or other, to one degree or another, most of us identify with our avatars -- not necessarily in the sense of feeling that the avatar is an expression of selfhood, but rather that we experience our avatar's "experience" as our own. When a friend's avatar gives my avatar a hug, I feel hugged. When I go to a sim with large-scale builds like Kolor's Studio, I'm overwhelmed by their enormousness, even though in physical reality they're only images on a computer screen not even a foot high.

This sense of experiential identity is, however, not just psychological: it's neurological. Evidence suggests that human brains have "mirror neurons," which generate approximately the same pattern whether a person is actually doing something or watching someone else do it. If a movie character steps on a nail, part of our experience is the brain triggering the same pattern of neural activity as it would if we had stepped on the nail ourselves; when we watch someone dance, our brains are also dancing. At some point, we may physically make the gesture of hugging as another person does the same, our avatars will hug, and we will feel it with even less distance than we do now.

And so we stood there, some dozen and a half of us at the opening, transfixed for an hour watching Glyph's face doing almost nothing (he was not making exaggerated movements for show). We were, so to speak, facing the future of virtual worlds: when the internet provides new ways to interact.