Showing posts with label agent.

Wednesday, March 26, 2008

Android and Agents

A few projects have let me revive my interest in agent computing a bit, especially the development of a Social Computing Room on the UNC-Chapel Hill campus. The whole smart space, ambient computing thing really plays into where I see the web evolving...that is, an always-connected web of people and things, with a continuous flow of information shaped by location, presence, situation, and the filtering effects of social networks.

Agents, to me, are the perfect interface between myself, my devices, my environment, and others around me. Agents can also play a part in mediating between my 'personal cloud' and the larger web. This mediation is two-way...I may be life-blogging, sending real-time media, location reports, etc. I may also be watching for events, conditions, or proximity.

Anyhow, I am looking at setting up some agents to automate things in the Social Computing Room, so I popped out to the JADE site to see if I had the latest version, only to find that they are working on a JADE agent toolkit for the Android platform:

Version 1.0 of JADE-ANDROID, a software package that allows developing agent oriented applications based on JADE for the ANDROID platform, has been released. Android is the software stack for mobile devices including the operating system released by the Open Handset Alliance in November 2007. The possibility of combining the expressiveness of FIPA communication supported by JADE agents with the power of the ANDROID platform brings, in our opinion, a strong value in the development of innovative applications based on social models and peer-to-peer paradigms. See the JADE-ANDROID guide for more details.

That looks really interesting; note their (tilab's) own observation about the relation of Android to social-network-enabled, peer-to-peer applications.

Incidentally, I note that I have crossed the 100th blog post line, so w00t!

Thursday, August 2, 2007

A bit part on virtual worlds last night

NC-17 news did a piece on virtual worlds last night. See if you can spot the nerd. Link to video here.

I'm working on JADE agents today. I'm somewhat surprised that agent frameworks like JADE are not applied more, especially in this 'come to me web' era. As we get an excess of computer cycles in our individual 'infrastructures', and as we become more mobile, there certainly seems to be a niche that agent computing could fill.
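
For anyone who hasn't played with JADE, a minimal agent is tiny. This is a bare-bones sketch using the standard JADE API; the HelloAgent name is just mine:

import jade.core.Agent;
import jade.core.behaviours.OneShotBehaviour;

// A minimal JADE agent: prints a greeting once and then sits in the container.
public class HelloAgent extends Agent {
    protected void setup() {
        addBehaviour(new OneShotBehaviour(this) {
            public void action() {
                System.out.println("Hello from " + myAgent.getLocalName());
            }
        });
    }
}

You'd launch it with something like 'java jade.Boot hello:HelloAgent' (classpath details omitted).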

Wednesday, August 1, 2007

Real Agents working with virtual spaces

It's been a while since I looked at agents, but I was happy to see that JADE 3.5 had been released. That's actually old news for some. I'm working on a project that embeds a physical space within a virtual 'building', and using the JADE agent framework to tie the virtual and the physical worlds together seems like just the ticket.

Anyhow, the things that jumped out at me about 3.5 were the ability to communicate between agents using a pub-sub topic model, and a re-working of the web services integration gateway.
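
To make the topic model concrete, here's a hedged sketch based on my reading of the JADE docs. The topic service has to be switched on when the container starts (e.g. with -services jade.core.messaging.TopicManagementService), and the topic name here is just an example:

import jade.core.AID;
import jade.core.Agent;
import jade.core.messaging.TopicManagementHelper;
import jade.lang.acl.ACLMessage;

// Sketch: register interest in a topic, then publish to it. Publishing is
// just addressing a message to the topic's AID.
public class TopicSketchAgent extends Agent {
    protected void setup() {
        try {
            TopicManagementHelper helper =
                (TopicManagementHelper) getHelper(TopicManagementHelper.SERVICE_NAME);
            AID sensorEvents = helper.createTopic("sensor-events"); // example topic
            helper.register(sensorEvents); // receive anything published here

            ACLMessage msg = new ACLMessage(ACLMessage.INFORM);
            msg.addReceiver(sensorEvents);
            msg.setContent("touch event from the virtual space");
            send(msg);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}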

In a previous post, I talked about a small framework to tie virtual space (in this case Second Life) to external applications. The framework uses sensors (and I'm looking at other means) to detect and inventory objects in the virtual space, and provides a facility to pipe messages to those objects from outside. Yesterday, I used that to create a control panel GUI that can run on a small tablet. This control panel uses the framework to send information into the virtual space, causing alterations to the environment.
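
Roughly, the control panel boils down to something like the following. This is purely illustrative; the endpoint, parameter names, and wire format are all invented, not the actual framework's API:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

// Hypothetical client sketch: address a command to (region, objectName) and
// let the framework relay it into the virtual space.
public class ControlPanelClient {
    private final URL endpoint; // invented: wherever the framework listens

    public ControlPanelClient(URL endpoint) {
        this.endpoint = endpoint;
    }

    public void sendToObject(String region, String objectName, String command)
            throws Exception {
        String body = "region=" + URLEncoder.encode(region, "UTF-8")
                + "&object=" + URLEncoder.encode(objectName, "UTF-8")
                + "&command=" + URLEncoder.encode(command, "UTF-8");
        HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        OutputStream out = conn.getOutputStream();
        out.write(body.getBytes("UTF-8"));
        out.close();
        conn.getResponseCode(); // fire-and-forget for the sketch
        conn.disconnect();
    }
}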

Over the weekend, I added the facility to push events out of the virtual space to subscribing listeners. Objects in SL can generate events for touch, creation (on_rez), collision, and so forth. By dropping a script in an object, the framework can trap these events and communicate them to a hub. The hub takes these events and sends them to the framework. Here's a pic where the 'pyramid' is the event generator, and the sphere is the hub. I simply 'touch' the pyramid, and the hub is messaged, sending the event to the framework for dispatching to subscribed listeners.

[image: the pyramid event generator and the sphere hub in Second Life]

Below is a shot of the 'touch' event in the framework. There is a facility that inspects events coming out of the virtual space and compares them to subscribers. A subscriber picks the 'region', or island, the object name, and the desired event. The subscriber also sets up a callback, and receives the parameters associated with each event. I want to add more flexible subscriptions, using regular expressions, etc., but that's more than I need right now. It might also be cool to add the ability to specify a script to run when an event is encountered, but for now it can just call back to a subscriber at a given URL. Here are the basics of the event as it arrived at the framework. What's not shown is a generic 'payload' field, where I plan on pushing in the variables associated with each SL event.

[image: the 'touch' event as received by the framework]
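
To pin down what a subscription looks like in this scheme, here's an illustrative sketch; the class and field names are mine, not the framework's:

// Illustrative only: a subscription keyed on (region, objectName, eventType).
public class EventSubscription {
    public final String region;      // the SL region, or island
    public final String objectName;  // e.g. "pyramid"
    public final String eventType;   // e.g. "touch"
    public final String callbackUrl; // where matching events get delivered

    public EventSubscription(String region, String objectName,
                             String eventType, String callbackUrl) {
        this.region = region;
        this.objectName = objectName;
        this.eventType = eventType;
        this.callbackUrl = callbackUrl;
    }

    // An inbound event matches when all three keys line up; the regex-based
    // matching mentioned above would relax these exact comparisons.
    public boolean matches(String evtRegion, String evtObject, String evtType) {
        return region.equals(evtRegion)
            && objectName.equals(evtObject)
            && eventType.equals(evtType);
    }
}
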
At any rate, the 'control panel' I wrote for the tablet uses the ability to push messages into the sim by using a known region and object name. With events now flowing out of the virtual space to subscribers, hooking those events to agents on the 'real life' side is next on the plate. I think topic-based subscriptions on the agent side will help me figure out cool things to do given that I can hook into virtual events, plus it is just plain geek-fun.

The first task will be to have an avatar push a doorbell button in the sim, pick up that event, push it to the agent, and have the agent kick off a real-life doorbell chime. A stupid pet-trick, true, but the point will be to exercise this thing end-to-end; then I'll have established a workable two-way bridge to interesting things later. So far, the scripting/framework approach is working out. Time will tell how well it scales, how lag-inducing it can be, etc. I've taken the approach of using conservative ping and sense rates, and it's been pretty smooth and stable so far.
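
On the 'real life' side, the doorbell listener could be a plain JADE agent along these lines. This is a sketch: the 'doorbell' content convention and the chime stand-in are mine:

import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;
import jade.lang.acl.MessageTemplate;

// Sketch: wait for a "doorbell" event relayed from the sim and react.
public class DoorbellAgent extends Agent {
    protected void setup() {
        final MessageTemplate mt =
            MessageTemplate.MatchPerformative(ACLMessage.INFORM);
        addBehaviour(new CyclicBehaviour(this) {
            public void action() {
                ACLMessage msg = myAgent.receive(mt);
                if (msg == null) {
                    block(); // sleep until the next message arrives
                } else if ("doorbell".equals(msg.getContent())) {
                    ringChime();
                }
            }
        });
    }

    private void ringChime() {
        System.out.println("ding-dong!"); // stand-in for the physical chime
    }
}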

Something whack that would be a fun side-project: wrapping virtual devices with Jini, and having discoverable virtual services under a framework like that. This gets back to an idea I had a while back, using virtual spaces, virtual sensors, virtual actuators, and virtual people to develop and prototype smart, ambient computing services. Given the collaborative nature of these environments, it might make sense!

Wednesday, June 13, 2007

Making Connections, virtual reality, agent computing, robots, and even real human beings

So I spent a few minutes digging around after reading the Slashdot article about using AI, agents, and 3D visualization to train firefighters. The original article, by Roland Piquepaille, is over on ZDNet.

ZDNet describes the system this way:

The system is currently used by the Los Angeles Fire Department. DEFACTO has committees of AI ‘agents’ which can create disaster scenarios with images and maps seen in 3-D by the trainees. The software agents also evaluate the trainees’ answers and help them to take better decisions.

This is interesting in several ways.

Virtual simulation and training

One of the great potential uses of virtual worlds is the creation of immersive training and simulation environments. I'd anecdotally observe that interacting in a 3D environment with an avatar provides a pretty effective experience. Situations like a fire or a disaster are prime candidates for such an application. Other uses might include immersive language learning, law enforcement, or hospital/medical situations.

Collaborative visualization, ambient data, situational awareness

Collaborative is the key word here, because there are better, higher-resolution methods for exploring data through visualization. A simple recipe may be to combine your avatar, the avatars of collaborators, and the visualization, so that remotely distributed teams can fly around, point, manipulate, and refer to parts of a visualization as a group. This is somewhat linked to the themes illustrated by multi-touch displays, such as the Microsoft Surface Computer that I mentioned a few posts back.

I'm mostly looking at Second Life, for many reasons. It's safe to say that SL is not a visualization platform, but I have tried several small prototypes on the premise that the collaborative nature of these environments yields qualitatively different experiences. Another way of saying this: it might be useful to look at ways of creating 3D visualizations within virtual environments, not necessarily as the best visualization tool, but as points of reference in virtual collaboration.

Take a look at this image from the DEFACTO page, and imagine how that application, combined with a collaborative, avatar-based environment, could have interesting possibilities, even as far as visualizing and managing an actual event rather than a simulation.

Agents again!

I had a brief run at agent technology on some earlier projects. At the time, we were looking at the state of context-aware computing, especially as it applied to the development of smarter mobile applications (location awareness, etc.). This was mostly using the JADE agent framework, and was based on a research framework called CoBrA. Honestly, I had not been thinking about agents for a while, but this article made me think about agent technology again. Agents are a great model when you have heterogeneous, autonomous entities that need to cooperate. Especially important is the ability to dynamically form associations and negotiate to solve a shared task. Web2.0 talks about 'small pieces, loosely joined', and agents share that same philosophy in their own arena.
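
JADE actually ships the FIPA interaction protocols (jade.proto.*) for this kind of negotiation. As a hedged sketch, the 'announce a task and invite bids' step of a contract net boils down to a call-for-proposals message; the agent names and task string here are illustrative:

import jade.core.AID;
import jade.core.Agent;
import jade.core.behaviours.OneShotBehaviour;
import jade.domain.FIPANames;
import jade.lang.acl.ACLMessage;

// Sketch: announce a task with a call-for-proposals (CFP). Collecting the
// PROPOSE/REFUSE replies and accepting the best bid is what
// jade.proto.ContractNetInitiator automates.
public class TaskAnnouncerAgent extends Agent {
    protected void setup() {
        addBehaviour(new OneShotBehaviour(this) {
            public void action() {
                ACLMessage cfp = new ACLMessage(ACLMessage.CFP);
                cfp.setProtocol(FIPANames.InteractionProtocol.FIPA_CONTRACT_NET);
                cfp.setContent("survey sector 7"); // illustrative task
                cfp.addReceiver(new AID("scout1", AID.ISLOCALNAME));
                cfp.addReceiver(new AID("scout2", AID.ISLOCALNAME));
                myAgent.send(cfp);
            }
        });
    }
}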

Agents have always struck me as not getting enough play in the whole 'next generation web' yap-space, especially considering the merging of the virtual (web) and physical worlds through the explosion of sensors and actuators that are starting to talk on the web. Both agent technology and the physical/virtual merging still seem like blind spots, when both may play an important part in the post-Web2.0 world.

In this case, agents are seen as proxies for what Machinetta calls RAPs. Machinetta is one of the underpinnings of the DEFACTO system; it is essentially an agent framework that supports negotiation, assignment of roles, and other aspects of teamwork. RAP is the Machinetta term for "Robot, Agent and/or Person". Cool...we got robots too!

Virtual/Physical merging

So this was just mentioned, and bears repeating. The web is not only the information and the people, but also the parts of the physical world that are being hooked in. This has gone on for a while, but what is interesting is to see that merging play out in a virtual environment as well. This is actually something I've been messing with in Second Life, though at a much less sophisticated level. The DEFACTO application seems to suggest some of the same notions, in any case.

Virtual ambient information

The last point I'd make is that this application shares some common characteristics with many of the location-aware mash-ups that are everywhere, especially using tools like Google Maps, Google Earth, and now Google Mapplets. This gets back to the original point about interacting with visualizations in an immersive environment. In a virtual 3D space, it seems like the potential is there for mash-ups on steroids. Here's a shot from an earlier post of a modest example using 3D symbols on a map...

[image: 3D symbols on a map in Second Life]

It might be hard to get the gist of this, but, just like in DEFACTO, virtual worlds can represent ambient information about state and situation through the appearance and behavior of their objects. There is no reason these objects could not link to DEFACTO RAPs, for example, and provide handles to communicate with or interrogate the state of the various agents.

Lots of possibilities!

Tuesday, January 23, 2007

DARPA PAL, Clippy, armed to the teeth

So there's all this talk about Web3.0 now, as I described in a previous blog entry. In poking around DARPA's site, I ran across their PAL (Personalized Assistant that Learns) project. The vision of PAL is:

DARPA intends to make major and long-term contributions to the field of cognitive systems, by:
  1. producing long-term scientific and technical innovations in the areas of machine learning, reasoning, perception, and multi-modal interaction;

  2. developing prototype PAL systems that bring together the best individual technologies to create integrated cognitive assistants;

  3. conducting a progression of increasingly more capable and robust prototypes to be tested and used in real world situations.

So this sounds familiar! In more pedestrian applications, I think mobility and Web3.0 look a lot like the PAL vision.