Wednesday, June 13, 2007

Making Connections, virtual reality, agent computing, robots, and even real human beings

So I spent a few minutes digging around after reading the Slashdot article about using AI, agents, and 3D visualization to train firefighters. The original article, by Roland Piquepaille, is over on ZDNet.

ZDNet describes the system this way:

The system is currently used by the Los Angeles Fire Department. DEFACTO has committees of AI ‘agents’ which can create disaster scenarios with images and maps seen in 3-D by the trainees. The software agents also evaluate the trainees’ answers and help them to take better decisions.

This is interesting in several ways.

Virtual simulation and training

One of the great potential uses of virtual worlds is the creation of immersive training and simulation environments. Anecdotally, I'd observe that interacting in a 3D environment through an avatar provides a surprisingly effective experience. Situations like a fire or a disaster are prime candidates for such an application. Other uses might include immersive language learning, law enforcement training, or hospital/medical scenarios.

Collaborative visualization, ambient data, situational awareness

Collaborative is the key word here, because there are better, higher-resolution methods for exploring data through visualization. A simple recipe might be to combine your avatar, the avatars of collaborators, and the visualization itself, so that remotely distributed teams can fly around, point, manipulate, and refer to parts of a visualization as a group. This is somewhat linked to the themes illustrated by multi-touch displays, such as the Microsoft Surface computer that I mentioned a few posts back.

I'm mostly looking at Second Life, for many reasons. It's safe to say that SL is not a platform for visualizations, but I have tried several small prototypes with the premise that the collaborative nature of these environments yields qualitatively different experiences. Another way of saying this is that it might be useful to look at ways of creating 3D visualizations within virtual environments, not necessarily as the best visualization tool, but as points of reference in virtual collaboration.

Take a look at this image from the DEFACTO page, and imagine how that application, combined with a collaborative, avatar-based environment, opens up interesting possibilities, even as far as visualizing and managing an actual event rather than a simulation.

Agents again!

I had a brief run on some earlier projects where I looked at agent technology. At the time, we were looking at the state of context-aware computing, especially as it applied to the development of smarter mobile applications (location awareness, etc.). This was mostly using the JADE agent framework, and was based on a research framework called CoBrA. Honestly, I had not been thinking about agents for a while, but this article got me thinking about agent technology again. Agents are a great model when you have heterogeneous, autonomous entities that need to cooperate. Especially important is the ability to dynamically form associations and negotiate to solve a shared task. Web 2.0 talks about small pieces, loosely joined; agents share that same philosophy in their own arena.
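To make the negotiation idea concrete, here's a toy, framework-free sketch in Python of a contract-net style exchange: a manager announces a task, capable agents bid, and the best bid wins. All the class and field names are invented for illustration; real frameworks like JADE express this with asynchronous FIPA ACL messages rather than direct method calls.

```python
# Toy contract-net negotiation: agents dynamically form an association
# around a task by bidding, and the manager awards it to the best bidder.

class Agent:
    def __init__(self, name, capability):
        self.name = name
        self.capability = capability  # 0.0..1.0, skill level for the task

    def bid(self, task):
        # An autonomous agent decides for itself whether to participate.
        if task["skill"] <= self.capability:
            return {"agent": self, "cost": 1.0 - self.capability}
        return None  # decline to bid

def announce(task, agents):
    """Manager side of the protocol: collect bids, award to the cheapest."""
    bids = [b for a in agents if (b := a.bid(task)) is not None]
    if not bids:
        return None  # nobody on the team can do this task
    return min(bids, key=lambda b: b["cost"])["agent"]

agents = [Agent("truck-1", 0.4), Agent("truck-2", 0.9), Agent("medic-1", 0.6)]
assigned = announce({"skill": 0.5}, agents)
print(assigned.name)  # the most capable eligible agent wins: truck-2
```

The point is the shape of the interaction, not the code: no agent is hard-wired to the task, and the association forms at run time through the announce/bid/award exchange.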

Agents have always struck me as not getting enough play in the whole 'next generation web' yap-space, especially considering the merging of the virtual (web) and physical worlds through the explosion of sensors and actuators that are starting to talk on the web. Both agent technology and the physical/virtual merging still seem like blind spots, when both may play an important part in the post-Web 2.0 world.

In this case, agents serve as proxies for what Machinetta calls RAPs. Machinetta is one of the underpinnings of the DEFACTO system; it is essentially an agent framework that supports negotiation, role assignment, and other aspects of teamwork. RAP is the Machinetta term for "Robot, Agent and/or Person". Cool...we got robots too!
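The RAP idea is easy to sketch: hide robots, software agents, and people behind one proxy interface, so the team-coordination layer can assign roles without caring what kind of entity fills them. This is a hypothetical Python illustration of that abstraction, not Machinetta's actual API.

```python
# Robots, agents, and people all look the same to the coordinator:
# a named proxy advertising the roles it can fill.
from dataclasses import dataclass

@dataclass
class RAPProxy:
    name: str
    kind: str        # "robot", "agent", or "person" -- the coordinator ignores this
    roles: set       # roles this entity can fill

def allocate(roles_needed, team):
    """Greedy allocation: one role per team member, first capable match wins."""
    assignment = {}
    free = list(team)
    for role in roles_needed:
        for member in free:
            if role in member.roles:
                assignment[role] = member.name
                free.remove(member)
                break
    return assignment

team = [
    RAPProxy("rescue-bot", "robot", {"search"}),
    RAPProxy("planner", "agent", {"route", "search"}),
    RAPProxy("chief", "person", {"command"}),
]
print(allocate(["command", "search", "route"], team))
# {'command': 'chief', 'search': 'rescue-bot', 'route': 'planner'}
```

A real team framework would negotiate and re-allocate roles as the situation changes, but the uniform proxy interface is what lets a robot, a piece of software, or a human be slotted in interchangeably.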

Virtual/Physical merging

So this was just mentioned, and it bears repeating. The web is not only information and people, but also the parts of the physical world that are being hooked in. This has been going on for a while; what is interesting is to see that merging play out in something suggestive of a virtual environment as well. This is actually something I've been experimenting with in Second Life, though at a much less sophisticated level. The DEFACTO application seems to suggest some of the same notions, in any case.

Virtual ambient information

The last point I'd make is that this application shares characteristics with many of the location-aware mash-ups that are everywhere, especially those built with tools like Google Maps, Google Earth, and now Google Mapplets. This gets back to the original point about interacting with visualizations in an immersive environment. In a virtual 3D space, the potential is there for mash-ups on steroids. Here's a shot from an earlier post of a modest example using 3D symbols on a map...

It might be hard to get the gist of this, but, just like in DEFACTO, virtual worlds can represent ambient information about state and situation through the appearance and behavior of objects. There is no reason these objects could not link to DEFACTO RAPs, for example, and provide handles to communicate with or interrogate the state of the various agents.
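As a hypothetical sketch of that idea, an in-world proxy object could simply map each RAP's reported state onto a visual cue, so an avatar flying over the scene can read the situation at a glance. The state names and color scheme below are invented for illustration.

```python
# Ambient state display: each agent state maps to an RGB color that an
# in-world object would take on. Unknown states fall back to "idle" grey.
STATE_COLORS = {
    "idle":     (0.5, 0.5, 0.5),  # grey: nothing happening
    "en-route": (1.0, 1.0, 0.0),  # yellow: moving to the scene
    "engaged":  (1.0, 0.0, 0.0),  # red: actively working the incident
    "complete": (0.0, 1.0, 0.0),  # green: task finished
}

def appearance_for(state):
    """Return the RGB color an agent's proxy object should display."""
    return STATE_COLORS.get(state, STATE_COLORS["idle"])

def scene_summary(agent_states):
    """Roll a dict of {agent_name: state} up into colored scene markers."""
    return {name: appearance_for(state) for name, state in agent_states.items()}
```

The same handle that drives the object's color could also accept queries back to the agent, which is the "interrogate the state" half of the idea.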

Lots of possibilities!
