
Friday, May 9, 2008

Techno-travels and HASTAC Part II

In brief, here's a demo of a physical/virtual mashup. In this case, UbiSense tracking follows individuals within a space called the Social Computing Room, and their positions are depicted within a virtual representation of the same space.
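The plumbing behind a demo like this is conceptually simple. Here's a minimal sketch in Ruby, under assumptions of my own (the room size, virtual origin, and scale below are made-up numbers, not the real configuration), of how a tracked UbiSense reading might be mapped onto the virtual copy of the room:

```ruby
# Hypothetical sketch: map a UbiSense (x, y) reading, in meters from the
# room's corner, onto the corresponding spot in the virtual SCR model.
# ROOM constants are assumptions for illustration, not real values.

VIRTUAL_ORIGIN = { x: 120.0, y: 80.0 }  # corner of the virtual room in the sim (assumed)
VIRTUAL_SCALE  = 1.0                    # virtual meters per physical meter (assumed)

def to_virtual(reading)
  {
    x: VIRTUAL_ORIGIN[:x] + reading[:x] * VIRTUAL_SCALE,
    y: VIRTUAL_ORIGIN[:y] + reading[:y] * VIRTUAL_SCALE
  }
end

# A tag reading for one person, as it might arrive from the tracking system.
puts to_virtual({ x: 2.5, y: 4.0 }).inspect   # => {:x=>122.5, :y=>84.0}
```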

One can think of a ton of ways to take this sort of thing. There are many examples of using the virtual world as a control panel for real-world devices and sensors, such as the Eolus One project. How can this idea be applied to communication between people, for social applications, and so on? What sorts of interactions between people in the SCR and remote visitors are possible? I have this idea that virtual visitors would fly in and view the actual SCR from a video wall. Then they could fly through the wall (through the looking glass) to see and communicate with the virtual people as they are arranged in the room. A fun thing we'll be using as a demo at HASTAC.



Friday, May 2, 2008

Techno-Travels and HASTAC Part I


I'll be presenting at the HASTAC conference on May 24th at UCLA. The conference theme is "TechnoTravels/TeleMobility: HASTAC in Motion". I'll quote the description of the theme:

This year’s theme is “techno-travels” and explores the multiple ways in which place, movement, borders, and identities are being renegotiated and remapped by new locative technologies. Featured projects will delve into mobility as a modality of knowledge and stake out new spaces for humanistic inquiry. How are border-crossings being re-conceptualized, experienced, and narrated in a world permeated by technologies of mobility? How is the geo-spatial web remapping physical geographies, location, and borderlands? How are digital cities interfacing with physical space? How do we move between virtual worlds? And what has become of sites of dwelling and stasis in a world saturated by techno-travels?

OK...so how do you take a bite out of that apple? In my case, the presentation is going to center on something called the 'Social Computing Room' (SCR), part of a visualization center at UNC Chapel Hill. There are lots of different ways to approach the SCR. It's a visualization space for research, a canvas for art and new media projects, a classroom, a video conference center, a gaming and simulation environment, and a physical space that acts as a port between the physical world and the digital world. It's difficult, when talking about interesting new ideas, to avoid overstating the potential, but I'll try to use the SCR to talk about how physical and digital worlds converge, using the 'port' metaphor. Thinking about the SCR as a port can start by looking at a picture of the space. Now compare that picture with a capture of a virtual version, in this case within Second Life:




To me, the SCR is a port in the sense that it exists in both worlds, and the ongoing evolution of the space will explore the ways these two sides of the coin interact. Before I go there, perhaps a bit more about the HASTAC theme. In this installment, let's talk about borders in a larger sense, coming back to the SCR a bit down the road.

Techno-travels? Borders? Mobility? Borders are falling away in our networked world: the borders between geographic places, and the borders between the physical and virtual worlds. The globe is a beehive of activity, and that activity can be comprehended in real time from any vantage point. Cases in point are real-time mashups between RSS feeds and Google Maps, such as flickrvision and twittervision. These mashups show photos being uploaded to Flickr and Twitter posts appearing around the globe, plotted on a map. You can watch the action unfold from your desktop, no matter where you are. Borders between places start to disappear as you watch ordinary life unfold across the map, and from this perspective, physical borders seem to mean less, like the theme song to that old kids' show 'Big Blue Marble', if you want to date yourself. Sites like MySpace and Orkut have visitors from all over the world, as illustrated by this comScore survey, and social networks don't seem to observe these borders either.

The term 'neogeography', described by Joab Jackson in National Geographic News, refers to the markup of the world by mashing up mapping with blogs. Sites such as Platial serve as an example of neogeography in action, essentially providing social bookmarking of places. Google Earth is being marked up as well... using Google Earth and Google Street View, you can see and tag the whole world. Tools like SketchUp allow you to add 3D models to Google Earth, such as this Manhattan view:



So we're marking up the globe, and moving beyond markup to include 3D modeling. Web2.0 and 'neogeography' add social networking too. At the outset, I also waved my hands a bit at the SCR by comparing real and virtual pictures of this 'port'. That's a bunch of different threads that can be tied together by including some of the observations in an excellent MIT Technology Review article called 'Second Earth'. In that article, Wade Roush looks at virtual worlds such as Second Life, and at Google Earth, and asks, "As these two trends continue from opposite directions, it's natural to ask what will happen when Second Life and Google Earth, or services like them, actually meet." Beyond socially marking up the world, the crucial element is the ability to be present at the border between real and virtual, to recognize others who are inhabiting that place at that time, and to connect, communicate, and share experiences in those places. That gets to how I would define the SCR as a port.

The drawback to starting out with this 'Second Earth' model is that it limits the terrain to a recognizable spatial point. While a real place can sometimes serve as a point of reference in the virtual world, that also unnecessarily constrains the meaning. What is an art exhibit? What is a scientific visualization? What is any collection of information? As naturally as we mark up the world, we're also marking up the web, collaborating, and experiencing media in a continuous two-way conversation... that's a lot of what Web2.0 is supposed to be about. How can we create the same joint experience, where we're all present together as our real or virtual selves, sharing a common experience? That to me is the central goal of 'techno-travels', and perhaps it expands a bit on the idea of border crossing.

Anyhow, I'm trying to come up with my HASTAC presentation, and thinking aloud while I do it.

Monday, February 25, 2008

Sun Worldwide Education and Research Conference

Here's an item that I'll probably check on this week...

Sun is moving fast on many fronts in 3D worlds - but focusing on education. I hear they will have an important announcement this week at their Worldwide Education & Research Conference in SF: http://www.events-at-sun.com/wwerc08/agenda.html

The agenda shows SUN Founder Scott McNealy speaking wedged between the Immersive Education Forum and Lunch the second day. I'm GUESSING that this placement is intentional and hints that Sun has BIG news for educators interested in immersive environments.

I played a bit with MPK20 (the Sun virtual environment). It still has limited features, but it's open, and it's among the ones to watch, along with Croquet. I may put up the feed in the Social Computing Room as availability permits; if anyone is interested in viewing it there, let me know.

Other than that, I'm wrestling a bit with UbiSense again!

Monday, December 3, 2007

Taking the Social Web into the virtual world

Here's an interesting article on IBM's efforts to blend social software, identity, and the virtual world through Lotus.

IBM Lotus programmers and engineers from IBM's research groups are currently working on ways to employ virtual reality technologies with Lotus Connections social computing software, said Jeff Schick, vice president of social computing for IBM.


I heard someone describe the 3D web by saying 'people are back', and I think this is the true strength of environments like Second Life. It's not about the graphics or a particular virtual space, it's about the people and experiences that are available. Maintaining and monitoring a social network is Web2.0; the 3D web is about being present with friends and peers in meaningful ways, through virtual experiences.

Thursday, August 2, 2007

A bit part on virtual worlds last nite

NC-17 news did a piece on virtual worlds last night. See if you can spot the nerd. Link to video here.

I'm working on JADE agents today. I'm somewhat surprised that agent frameworks like JADE are not applied more, especially in this 'come to me web' era. As we get an excess of computer cycles in our individual 'infrastructures', and as we become more mobile, there certainly seems to be a niche that agent computing could fill.

Wednesday, August 1, 2007

Real Agents working with virtual spaces

It's been a while since I looked at agents, but I was happy to see that Jade 3.5 had been released. That's actually old news for some. I'm working on a project that embeds a physical space within a virtual 'building', and utilizing the JADE agent framework to tie the virtual and the physical worlds together seems like the ticket.

Anyhow, the things that jumped out at me about 3.5 were the ability to communicate between agents using a pub-sub topic model, and a re-working of the web services integration gateway.

In a previous post, I had talked about a small framework to tie virtual space (in this case Second Life) to external applications. The framework uses sensors (and I'm looking at other means) to detect and inventory objects in the virtual space, and gives the facility to pipe messages to those objects from outside. Yesterday, I used that to create a control panel GUI that can run on a small tablet. This control panel uses the framework to send information into the virtual space, causing alterations to the environment.
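Just to give a flavor of what the control panel does, its call into the framework boils down to something like the following Ruby sketch. The endpoint path, parameter names, and object name here are placeholders I've made up for illustration, not the framework's actual interface:

```ruby
require 'net/http'
require 'uri'

# Hypothetical sketch of the control panel pushing a message toward a named
# object in the sim, via the framework's HTTP front door.
def send_to_object(region, object_name, command)
  uri = URI.parse('http://example.org/framework/messages')
  Net::HTTP.post_form(uri, 'region' => region,
                           'object' => object_name,
                           'body'   => command)
end

# e.g. dim the lights in the virtual room when a slider moves on the tablet
send_to_object('MyIsland', 'room_lights', 'dim:40')
```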

Over the weekend, I added the facility to push events out of the virtual space to subscribing listeners. Objects in SL can generate events for touch, creation (on_rez), collision, and so forth. By dropping a script in an object, the framework can trap these events and communicate them to a hub. The hub takes these events and sends them to the framework. Here's a pic where the 'pyramid' is the event generator, and the sphere is the hub. I simply 'touch' the pyramid, and the hub is messaged, sending the event to the framework for dispatching to subscribed listeners.





Below is a shot of the 'touch' event in the framework. There is a facility that inspects events coming out of the virtual space and compares them to subscribers. A subscriber picks the 'region', or island, the object name, and the desired event. The subscriber also sets up a callback, and receives the parameters associated with each event. I want to add more flexible subscriptions, using regular expressions, etc., but that's more than I need right now. It might also be cool to add the ability to specify a script to run when an event is encountered, but for now it can just call back to a subscriber at a given URL. Here's the basics of the event as it arrived at the framework. What's not shown is a generic 'payload' field, where I plan on pushing in the variables associated with each SL event.
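To make the matching step concrete, here's a rough Ruby sketch of the idea; the structure, names, and callback URL are invented for illustration and differ from the real code, but the gist is to compare each incoming event against stored subscriptions and POST the event (payload included) to each matching callback:

```ruby
require 'net/http'
require 'uri'
require 'json'

# Hypothetical subscription record: exact match on region, object name, and
# event type, with a callback URL to notify. (Regex matching would slot in here.)
Subscription = Struct.new(:region, :object_name, :event, :callback_url)

SUBSCRIPTIONS = [
  Subscription.new('MyIsland', 'pyramid', 'touch', 'http://example.org/listeners/touch')
]

# Called when the hub forwards an event out of the virtual space.
def dispatch(event)
  SUBSCRIPTIONS.each do |sub|
    next unless sub.region == event[:region] &&
                sub.object_name == event[:object_name] &&
                sub.event == event[:event]
    # Push the event, including its generic payload, to the subscriber.
    Net::HTTP.post_form(URI.parse(sub.callback_url),
                        'object'  => event[:object_name],
                        'event'   => event[:event],
                        'payload' => (event[:payload] || {}).to_json)
  end
end

dispatch(region: 'MyIsland', object_name: 'pyramid', event: 'touch',
         payload: { toucher: 'SomeAvatar' })
```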




At any rate, the 'control panel' I wrote for the tablet uses the ability to push messages into the sim by using a known region and object name. Building on the new ability to push events out of the virtual space to subscribers is next on the plate, hence the interest in using agents on the 'real life' side. I think topic-based subscriptions on the agent side will help me figure out cool things to do given that I can hook into virtual events, plus it's just plain geek fun.

The first task will be to have an avatar push a doorbell button in the sim, pick up that event, push it to the agent, and have the agent kick off a real-life doorbell chime. A stupid pet trick, true, but the point will be to exercise this thing end-to-end, and then I'll have established a workable two-way bridge to interesting things later. So far, the scripting/framework approach is working out. Time will tell how well it scales, how lag-inducing it can be, etc. I've taken the approach of using conservative ping and sense rates, and it's been pretty smooth and stable so far.
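In terms of the subscription sketch above, the listener end of the doorbell test might look something like this; the port, path, and the shell command that actually rings the bell are stand-ins, not real code from the project:

```ruby
require 'webrick'

# Hypothetical listener for the doorbell test: the framework POSTs the 'touch'
# event here, and we kick off a real-world chime (a stand-in shell command).
server = WEBrick::HTTPServer.new(Port: 4567)

server.mount_proc '/doorbell' do |req, res|
  if req.query['event'] == 'touch'
    system('play chime.wav')   # whatever actually rings the bell goes here
  end
  res.body = 'ok'
end

trap('INT') { server.shutdown }
server.start
```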

Something whack that would be a fun side-project would be to wrap virtual devices with Jini, and have discoverable virtual services under a framework like that. This gets back to an idea I had a while back, using virtual spaces, virtual sensors, virtual actuators, and virtual people, to develop and prototype smart, ambient computing services. Given the collaborative nature of these environments, it might make sense!

Tuesday, July 31, 2007

Sun's open source 3D World, and 3D web as a training tool for WMD management

A couple of interesting links: first is Sun's Project Wonderland, an open source client and server for their 3D world. This looks like it's in the early stages of development, but you can run the client and the server on your own, always a plus!

The vision for this multi-user virtual environment is to provide an environment that is robust enough in terms of security, scalability, reliability, and functionality that organizations can rely on it as a place to conduct real business. Organizations should be able to use Wonderland to create a virtual presence to better communicate with customers, partners, and employees.


Second, the Idaho Bioterrorism Awareness and Preparedness Program is using the 3D web (in this case Second Life) for incident management training...

This virtual environment spreads over two islands Asterix and Obelix (65536 x 2 sq. meters), with one island dedicated to a virtual town and the other a virtual hospital. The design of this virtual environment is influenced by dioramas frequently used by emergency services to support their tabletop exercises.

Monday, July 30, 2007

NCSU, and using 3D environments for learning

UNC has a lot of interesting projects applying the 3D web to education; visit the UNC island sometime! Here's some info about similar initiatives at NCSU!

I went to an interesting Croquet demo last week; I've got some notes and am working on a write-up of what I heard.

Friday, July 27, 2007

Wired, SL, 3D web, the hype curve in action

This is sort of fun: the back and forth about advertising and the 3D web based on a Wired mag article, sort of like that previous LA Times article. In the bubble days, businesses thought they could sell sock monkeys and pet food, and that just because it was on the web, they'd be millionaires. The corporations that think they are going to sell mac & cheese just because they put up a virtual store are just as deluded, and the press will be all over that, I imagine. The most useful thing in the back and forth is the fact that sites in virtual space often appeal to the long tail rather than the mass market. The long tail is ignored in the original Wired write-up, and I think that's the critical omission.

There's that old saw about asking a farmer what would help his farm work better, and his answering a better plow or a stronger mule, rather than automated farm equipment. In other words, the farmer can only apply the world he knows to the question. I'd say we're in the middle of a prime example of the phenomenon. There's also the old adage that we overestimate change in the short term and underestimate it in the long term, and this has a lot to do with the shape of the hype curve.

History repeats itself, and at an accelerating rate, it seems.

Using the Wii with 3D Web as a training/sim device

From Wired...

For Stone, the Wiimote is the key to building realistic training simulators within the virtual world of Second Life. He is helping companies and universities do that through his WorldWired consultancy. Clients include a company interested in training workers for its power plants, a manufacturer of medical devices and pest-control firm Orkin.

Tuesday, July 24, 2007

The future of virtual worlds, LA Times says it's bleak

It's odd sometimes, the way backlashes go. I've been blogging a bit, and being somewhat evangelistic about the 3D web. I didn't invent the term, and wasn't among the first to catch on, but I have a gut feeling (me and Chertoff) that the term means something. Lately, especially after the LA Times article about the death of commerce on the 3D web, I've been approached by multiple people who want to explain to me that this is all a tempest in a teapot.

A proper, direct response to the LA Times article about Second Life can be found here, and here, so I won't try to recapitulate the common myths that make up such negative press. I was considering reasons why I am intrigued by the whole topic of virtual worlds, though, and I wanted to jot some of these down. These observations are shaped by my own interests, by past projects I've worked on, and so forth. You may discover other reasons to pay attention to the 3D web.

Real and Virtual are Merging

This is a drum I was beating well before I delved into Second Life. In this older post, and this one soon after, I talked about Mobile2.0 and Web2.0, trying to relate these terms to this larger idea of real and virtual merging. The main idea was that mobility and the new web were, in part, about the 'web of things'. Sensors and actuators talking on the web, and smarter applications to discover and manipulate this explosion of new information and services. Whole new types of applications stretching the definition of the web. Virtual worlds are important because they are a metaphor for this merging. In a way, our avatars allow us to cross the barrier, and physically inhabit the web of things. That's a bit sketchy, but I see the 3D web riding the coat-tails of the emerging web of things.

There are potential, practical benefits to visiting the virtual world to understand and manipulate the physical one, too. I've been interested lately in the development of EOLUS One, as described in this UgoTrade blog entry. This is a fairly wide-ranging project, but it does serve as an interesting illustration of real world/virtual world merging.

People Make a Comeback

The ubiquitous social networking web site provides many benefits. I'll pick on a few, and tie them to a virtual world experience:

  • a venue to expand social/professional networks
  • a tool to maintain connections to existing friends
  • a platform to shape and present our own identity
  • a tool to filter and flag important information (use of social networks to compensate for a deficit of attention)
  • a collective tool to add value, from which we individually extract benefit
All of these points can be extracted from classic definitions of Web2.0, and many of the points are mirrored in virtual worlds such as Second Life.

Expanding Social Networks

The first point, virtual worlds as a venue for expanding social networks, is primarily a function of the ability of a virtual world to create an event, or common experience. Think about where friendships start: they are often based on some shared experience, like a college course, or a conference, or some notable event. Virtual worlds can provide an immersive, compelling experience from which these connections can take root.

Social networking also relies on sharing connections, in a friend-of-a-friend style. Given the existence of shared experience in virtual worlds, the familiar mechanism of meeting new people through current friends has a virtual analogue.

Maintaining Current Connections

It's probably a question for sociologists, but what quality of social experiences can be achieved in a virtual environment? I'm quite sure it's not the same as real life, but I also suspect it's a richer interaction than one would expect.

We're using tools like Twitter, Flickr, and Facebook as a way to keep up with our friends and colleagues when we're separated by time or distance. The functions of these tools do not map onto the real-time nature of virtual worlds, but I suspect that virtual worlds can add some unique new tools to serve these ends. One example that comes to mind is the ability to establish 'hang-outs' particular to a group of friends and colleagues.

Shaping Identity

People use social applications as a way to shape and present themselves. Virtual worlds such as Second Life have an economy partially based on the customization of personal avatars. People take great care to build an image of themselves. Does this aspect of virtual worlds play into this basic function of social networking applications? I guess this is another one for the sociologists...

Tapping into Collective Power

Successful Web2.0 sites often become so because they provide tools to build something interesting, let the tools loose on the world, and leverage the resulting content. I'd toss out Wikipedia and Flickr as two prime examples. There's a fundamental principle at work there, and a lesson that virtual world developers need to take to heart.

Professional 3D developers really don't like Second Life. I picked that up! I can see why; I think the building tools are crummy. This is something I had observed in a previous blog entry, but it bears repeating: the quality of the tools matters, but more important than professional-level, sophisticated building tools are accessible tools, available in-world, that let the average Joe get something done. There are indeed master builders within environments like Second Life who could take advantage of special tools, but I will guess that the vast majority rely on simple constructs, and use the ecosystem to purchase the rest.

I think about how bad HTML is, and how crude the tools still are, and would not be surprised to find out that, back in the day, the web was dismissed as poor technology in the hands of unqualified developers. I know there are two sides to the coin, as I still encounter poorly designed sites with flaming clip-art, but I look at how far the web has come based on simple HTML and simple scripting, and don't think it wise to assume it won't happen again.

It's not there yet!

Don't take this as a Second Life fan site. There are lots of things lacking in Second Life, and lots of other virtual worlds out there. I'm going to a Croquet presentation this afternoon; I've begun looking at that tool, am getting used to Blender, and intend to learn Squeak. The dust has not settled on the particulars, but I really do think the 3D web means something.

There are 'virtual natives' coming up fast. Under my watchful eye, my little kids spend a little time wandering around Nicktropolis and similar sites that approximate virtual worlds. These kids don't even blink; they just jump right in, and they are right at home. It's a mistake to put our own preconceptions and limits on a new technology, based on our own experiences and habits. I liken this to the way that younger people don't have a problem editing and keeping documents out on the web, or in alternative, open-source office suites, versus the old MS Office stand-by. I look at my own kids, and it makes me think that the metaverse is as natural to them as Tom and Jerry was to me.

It's especially clear that issues like identity, security, scalability, and application development support all are lacking in many of the current contenders. The power of open source and standards needs to be applied to this space, but the 3D web is here, and it's going to keep growing, I feel confident in saying, even if everyone wants to observe that this is just a game with no future...heck, I'm still waiting for the death of Java!

Wednesday, June 20, 2007

3D web as disruptive technology - Mitch Kapor

I've had the videos of the IBM & MIT Media Labs conference on virtual worlds running in the corner of my monitor all morning, and I was highly impressed with Mitchell Kapor, Linden Lab Chair, and his view of virtual worlds as disruptive technology. He brings up the term macromyopia, which is a nice word that captures the idea that we overestimate change in the short term and underestimate it in the long term.

Anyhow, his talk is entertaining and thought-provoking, and worth the investment of about 45 minutes of your time.

Food for thought

Check out the prologue to "Everything is Miscellaneous", by David Weinberger. He's an interesting and engaging speaker, and writes in the same style.

Anyhow, I loved this quote, and I think about it in terms of what's happening with the 3D internet...

Those differences are significant. But they’re just the starting point. For something much larger is at stake than how we lay out our stores. The physical limitations that silently guide the organization of an office supply store also guide how we organize our businesses, our government, our schools. They have guided—and limited—how we organize knowledge itself. From management structures to encyclopedias, to the courses of study we put our children through, to the way we decide what’s worth believing, we have organized our ideas with principles designed for use in a world limited by the laws of physics.

Suppose that now, for the first time in history, we are able to arrange our concepts without the silent limitations of the physical. How might our ideas, organizations, and knowledge itself change?

For me this neatly captures a central idea of the 3D web. In a world without limitations, physics, or other constraints, how can we use the tools in a way that feels real, but that doesn't carry the limits of the physical world, or a static organization of information, into the virtual? This quote highlights both a mistake to be made and new ways to think.

It also struck me last night that there is a common thread between the (admittedly modest) things I'm doing within Second Life, and past work I did on smart spaces and context aware computing. In some ways, the tools you wish were there in the physical world can be modeled in the virtual, sometimes with the same ends. In each case, physical and virtual, the goal is to respond to each individual, and provide a mesh of services around that person as they navigate the environment. I'm intrigued by the idea that some of these context-aware computing concepts could be applied within the metaverse toward the aims that David Weinberger describes. By the same token, I am interested in how the metaverse could be a testbed for context-aware applications. The whole environment is scripted, you can build sensors and actuators, have location, manipulate the environment, add social elements, etc. Model a smart home, classroom, or office in Second Life...It's certainly faster and cheaper than trying to build a testbed or living lab!


As a note, I happened upon this tidbit from Bob Sutor's blog: links to video are now available from the MIT & IBM conference "Virtual Worlds: Where Business, Society, Technology & Policy Converge", which took place on Friday at MIT Media Labs.

Monday, June 18, 2007

Second Earth in MIT Technology Review

From MIT Technology Review...

The World Wide Web will soon be absorbed into the World Wide Sim: an immersive, 3-D visual environment that combines elements of social virtual worlds such as Second Life and mapping applications such as Google Earth. What happens when the virtual and real worlds collide?

This is worthy of a read. The basic premise is that 3D worlds as part of a mash-up with real life locations and data will transform the way we view the 'web'. I'm down with that...

Friday, June 15, 2007

IBM Conference on Virtual Worlds

Coverage...

"We are now at the threshold of newly emerging (Web) platforms focused on participation and collaboration," he said. "The power of collaboration and community are one of the major drivers of innovation as companies figure out the capabilities to accelerate collaborative innovation."

Parris described some of IBM's initial uses of virtual worlds in a business context, including enhanced training, immersive social-shopping experiences, simulations for learning and rehearsing business processes, and event hosting.

Wednesday, June 13, 2007

Making Connections, virtual reality, agent computing, robots, and even real human beings

So I spent a few minutes digging around after reading the Slashdot article about using AI, agents, and 3D visualization to train firefighters. Off on ZDNet is the original article, by Roland Piquepaille.

ZDNet describes the system this way:

The system is currently used by the Los Angeles Fire Department. DEFACTO has committees of AI ‘agents’ which can create disaster scenarios with images and maps seen in 3-D by the trainees. The software agents also evaluate the trainees’ answers and help them to take better decisions.

This is interesting in several ways.

Virtual simulation and training

One of the great potential uses of virtual worlds is the creation of immersive training and simulation environments. I'd anecdotally observe that interacting in a 3D environment with an avatar provides a pretty effective experience. Situations like a fire or a disaster are prime candidates for such an application. Other uses might include immersive language learning, law enforcement, or hospital/medical situations.

Collaborative visualization, ambient data, situational awareness

Collaborative is the key word here, because there are better, higher resolution methods for exploring data through visualization. A simple equation may be to combine your avatar, the avatars of collaborators, and the visualization, so that remotely distributed teams can fly around, point, manipulate, and refer to parts of a visualization as a group. This is somewhat linked to the themes illustrated by multi-touch displays, such as the Microsoft Surface Computer that I mentioned a few posts back.

I'm mostly looking at Second Life, for many reasons. It's safe to say that SL is not a platform for visualizations, but I have tried several small prototypes with the premise that the collaborative nature of these environments yields qualitatively different experiences. Another way of saying this is that it might be useful to look at ways of creating 3D visualizations within virtual environments, not necessarily as the best visualization tool, but as points of reference in virtual collaboration.

Take a look at this image from the DEFACTO page, and imagine how that application, combined with a collaborative, avatar-based environment, could have interesting possibilities, even as far as visualizing and managing an actual event, versus a simulation.

Agents again!

I had a brief run on some earlier projects where I looked at agent technology. At the time, we were looking at the state of context-aware computing, especially as it applied to the development of smarter mobile applications (location awareness, etc). This was mostly using the JADE agent framework, and was based on a research framework called CoBrA. Honestly, I have not been thinking about agents for a while, but this article made me think about agent technology again. Agents are a great model when you have heterogeneous, autonomous, entities that need to cooperate. Especially important is the ability to dynamically form associations, and negotiate to solve a shared task. Web2.0 talks about small things, loosely joined, and agents share that same philosophy in their own arena.

Agents have always struck me as not getting enough play in the whole 'next generation web' yap-space, especially considering the merging of the virtual (web) and physical world through the explosion of sensors and actuators that are starting to talk on the web. Both agent technology, and the physical/virtual merging still seem like blind-spots, when both may play an important part in the post-web2.0 world.

In this case, agents are seen as proxies for what Machinetta calls RAPs. Machinetta is one of the underpinnings of the DEFACTO system, and it is essentially an agent framework that supports negotiation, assignment of roles, and other aspects of teamwork. RAP is the Machinetta term for "Robot, Agent and/or Person". Cool...we got robots too!

Virtual/Physical merging

So this was just mentioned, and bears repeating. The web is not only the information and people, but also the parts of the physical world that are being hooked in. This has gone on for a while, but what is interesting is to see that merging playing out on something suggestive of a virtual environment as well. This is actually something I've been messing with in Second Life, though at a much less sophisticated level. The DEFACTO application seems to suggest some of the same notions, in any case.

Virtual ambient information

The last point I'd make is that this application shares some common characteristics of many of the location-aware mash-ups that are everywhere, especially using tools like Google Maps, Google Earth, and now Google Mapplets. This gets back to the original point about interacting with visualizations in an immersive environment. In a virtual, 3D space, it seems like the potential is there for mash-ups on steroids. Here's a shot from an earlier post of a modest example using 3D symbols on a map...





It might be hard to get the gist of this, but, just like in DEFACTO, virtual worlds can represent ambient information about state and situation through the appearance and behavior of the objects. There is no reason that these objects could not link to DEFACTO RAPs, for example, and provide handles to communicate with or interrogate the state of the various agents.

Lots of possibilities!

Wednesday, May 30, 2007

Microsoft Surface Computing - HCI and UbiComp

Sometimes, it's hard for me to pin down what my own blog is about. I tend to run many threads at once, and end up thrashing sometimes, as I suspect anyone working in technology does these days. The past few weeks, it's been about Second Life, and that continues, but I'm looking at other areas as well, such as plain old Web2.0, ubiquitous computing, agent computing, mobility, location aware services, SOA, and dynamic scripting languages (specifically Ruby and Rails these days). In the mix somewhere is my original interest in Java/J2EE, along with things like Spring.

Rather than a testament to a short attention span, I think this wide variation in themes is actually a sign of the times we live in. Developers no longer learn one language, and roll into every project with the same set of tools. The evolving web, the evolution of mobility, and the pervasive field of networked information and devices that surround us everywhere we go make for an interesting and challenging time. I'd like to suggest that the disparate topics covered in this blog are on a converging trajectory. Maybe that's what this blog can be about.

Case in point: check out this short video on Microsoft's Surface Computer. I think this is an exciting platform that brings together a bunch of ideas. Essentially, this is a big, touch-sensitive display that uses gestures to manipulate data. The cool thing is that it's multi-touch, so you can gesture with both hands, and multiple people can interact with the computer at the same time. In addition, the Surface Computer is sensitive to physical objects. It can sense these objects, and also interact with other devices placed on the surface.

  • The 'multi-touch' is collaborative. Technology is getting more and more social. This reality is core to Web2.0, as well as the evolving 3D web. We're not isolated from each other anymore: we Twitter and blog, we IM and message, and now we can compute together.
  • The Surface Computer bridges the physical and the virtual. In the video, they demonstrate placing a device on the surface, having it dynamically connect, and using a gesture to shoot a photograph into the device. The natural action of placing a device of interest on the collaborative surface, and being able to manipulate it, is a step towards useful ubiquitous computing.
  • The Surface Computer could be an interesting new metaphor for web collaboration in the way that avatar representation in Second Life creates a sense of immersion. I think it won't be long until you could assemble remotely around a common 3D web surface, with remote participants as avatars.
The combination of natural interface, immersion, and the ability to easily incorporate data from the web, or from other devices, in collaborative ways seems like a natural progression.

Thursday, May 24, 2007

IBM Promo Video on their new Virtual Biz Center

The simple thing that stuck with me, back when I heard Dr. Irving Wladawsky-Berger talk as part of the RENCI Distinguished Lecture series, was the observation that the '2D' web was about taking the catalog and putting it into the browser, while the 3D web is about taking the whole store, sales staff included, and putting it in a virtual space. The purely commercial side of the web is not the whole story by any means, but I suspect that the next Amazon or eBay will rise from the 3D web, and that makes this stuff exciting to watch.

So dig this little YouTube video about the new IBM Virtual Biz center. I took five and logged in, and happened upon the real virtual site, and saw this thing on the Eightbar blog. IBM's Second Life presence is impressive, and I often point people there when they let me know that the virtual web is just a game....

Friday, May 18, 2007

Some SL Shots...another mash-up too

In previous posts, I'd been talking about "SlIcer". It's really a simple framework, and it's only in rev 1, so don't let me give you the impression that it's more than some Ruby scripts, but I do think SlIcer is illustrating some cool 3D web mash-up ideas.

I am working on interacting with 'mash-up' info in the 3D world, concentrating on a few concepts:

  • Representation, and interaction with spatial information in a 3D environment.
  • Triggering real-world actions from a 3D interface.
  • Creating visualizations of real-world information using primitives in Second Life.
Here are a few simple illustrations I've been working on. First is a 'Mapping Room'...



There are a few things going on in this pic. See that small green sphere above the map? That's the Second Life part of SlIcer: an object sensor/hub. This thing scans for objects in range, and sends an inventory up to the RL server with key, object name, and position. I'm standing on a big map, and on this map are 'counters' for real-world objects. The counter closest to me on the right is a representation of a communications truck with a balloon antenna. This has a known name, and using SlIcer, I can hook up a GPS signal from the truck, get the lat/long as well as other information, and push it into SL to move the truck counter on the map as the real truck moves. It's hard to see, but the object has a balloon that deploys as the real-world balloon deploys. People collaborating on the map in Second Life can walk around, point, etc. This has a couple of benefits:

  • Collaborators in Second Life are able to point, position their avatars, and refer to objects on the maps. This is not possible with other forms of collaboration.
  • Situational information is presented as ambient information. In a funny way, you can create virtual physical devices representing virtual data. How cool is that? As a side-line, our work with AmbientDevices orbs is going to merge in here too, where an orb can reflect information about what is happening in the virtual space.
Note also that SlIcer is keeping track of objects on the map, and it is possible to view the positions of counters as symbols generated by a GIS map server. Individuals in Second Life could move a counter to a particular place on the virtual map, causing real-life GIS to be updated! This works (with some bugs to work out), but it was hosing up when I took the screenshot. There is a media viewer in the above picture that's displaying a 2D barcode link to approx31's blog...drat...it's supposed to show a map with the Second Life data as symbols in a GML layer. Just trust me, it does kinda work.
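For the GPS-to-counter piece, the real-life side boils down to something like this Ruby sketch. The bounding box, map size, endpoint, and message format are invented for illustration, not SlIcer's actual interface:

```ruby
require 'net/http'
require 'uri'

# Assumed geographic extent of the virtual map, and its size in the sim (meters).
MAP_BOUNDS = { north: 36.2, south: 35.8, east: -78.8, west: -79.2 }  # made-up box
MAP_SIZE   = { x: 20.0, y: 20.0 }                                    # made-up map prim size

# Convert a GPS fix into a position on the virtual map.
def latlong_to_map(lat, lon)
  x = (lon - MAP_BOUNDS[:west])  / (MAP_BOUNDS[:east]  - MAP_BOUNDS[:west])  * MAP_SIZE[:x]
  y = (lat - MAP_BOUNDS[:south]) / (MAP_BOUNDS[:north] - MAP_BOUNDS[:south]) * MAP_SIZE[:y]
  { x: x.round(2), y: y.round(2) }
end

# Queue a 'move' message for the named counter; the in-world hub delivers it.
def move_counter(object_name, lat, lon)
  pos = latlong_to_map(lat, lon)
  Net::HTTP.post_form(URI.parse('http://example.org/slicer/messages'),
                      'object' => object_name,
                      'body'   => "move:#{pos[:x]},#{pos[:y]}")
end

move_counter('comm_truck_counter', 35.91, -79.05)
```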

Anyhow, here's another mash-up, attempting to do visualization of data. If only you could load textures on prims dynamically from a URL! This pic shows a tropical storm mash-up. It's a mock-up right now, as I look for a good data source. The idea would be to parse a data stream, such as RSS, that shows current tropical depressions and hurricanes, and depict them on a map. Also shown is a fly-out I'm working on. As it operates now, you click on a storm, and the fly-out rezzes. The fly-out uses prims to depict intensity (the red bar), storm direction (the compass rose and pointer), and storm track speed (the blue bar). If I can find a data feed, this could be wrapped up into a stand-alone mash-up!

The storms are prims too, in (real) Second Life they spin and stuff, way cool. It's dumb stuff like that that amuses us programmers.

Wednesday, May 16, 2007

REST-ful Rails reflections, SlIcer, and the 3D Web

So I've been quite busy throwing together a web/Second Life mash-up, implementing ideas outlined in a previous post. The basic premise is to build a framework (I apply the term loosely; perhaps 'hack' is the more appropriate term) that could start bridging the gap between Second Life and the web.

I called the framework SlIcer for the hell of it, and began implementing it using Rails. The basic functions I'm working on include:

  • A set of sensors, deployed into Second Life, to detect people and scripted objects within range. These sensors are installed in various areas, and tuned for sensor range and angle to provide localized service. The sensors repeatedly sweep for named objects and people, and report up to SlIcer via HttpRequest with the object name, Second Life key, and the location vector. It would certainly be possible to add state properties to the report for real-life storage as well.
  • Co-located with certain sensor types are 'hubs', which act as the bridge between Second Life and real life. This may be a dumb idea, but I thought it useful to create a database to store and forward incoming data in SlIcer. The hub would poll SlIcer, and get messages as bundles. The bundles are pulled into Second Life, marked as distributed in the database, and then distributed to the target objects in range. The main benefits are:
    • A real-time view of the objects in range, and their current positions. Again, state properties are also possible. External applications could use this data in ways I haven't quite come up with yet...
    • An ability for real life apps to address objects within Second Life by a 'plain english' handle. This obviates the need to know the current key of a rezzed object in the sim.
  • A plain HTTP call can be put into any application to post a message bound for a Second Life object. Any application can push data into Second Life without any SL cruft, as long as the name of the target object is known.
  • The (future) ability to add reliable delivery (sequencing of messages, and delivery confirmation) semantics if this seems useful.
A later development might be to have some sort of event pub/sub mechanism. For example, anytime an object within a location is touched, it could report that event, and listeners outside of Second Life could tie in. We're looking at ambient orbs, as an example. It would be possible, given these mechanisms, to watch for certain thresholds of people occupying a certain room in a sim, and change the orb status...really, any number of interesting things would be possible.
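To make the store-and-forward idea concrete, here's a bare-bones sketch of what the Rails side could look like. The model, column, and action names are invented for illustration rather than lifted from the actual SlIcer code:

```ruby
# Hypothetical sketch of SlIcer's store-and-forward queue (names invented).
# Assumes a messages table with target_object, body, and delivered columns.

class Message < ActiveRecord::Base
  scope :pending, -> { where(delivered: false) }
end

class MessagesController < ApplicationController
  # Any real-life app POSTs here to address an object in the sim by name.
  def create
    Message.create!(target_object: params[:object], body: params[:body])
    render plain: 'queued'
  end

  # The in-world hub polls here; it gets the pending bundle, and the rows are
  # marked delivered so they aren't sent twice.
  def bundle
    pending = Message.pending.to_a
    pending.each { |m| m.update!(delivered: true) }
    render plain: pending.map { |m| "#{m.target_object}|#{m.body}" }.join("\n")
  end
end
```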

So I've got the basics working using Rails. I have not tried NetBeans with Rails yet, sticking with RadRails for now; that seems to be all I need at this point. My original impulse was to use a REST-ful approach, given the new facilities in Rails for scaffold_resource. The particulars are described in this great tutorial. This was particularly attractive as the llHttpRequest LSL function supports GET, POST, PUT, and DELETE. At any rate, I had some initial success, but rapidly ran into strange problems with Second Life. There are likely some odd Accept headers sent out from an HttpRequest, because I began running into 406 Not Acceptable errors. Lazy developer that I am, I just decided to punt on REST for now, and use plain GET and POST within normal Rails controllers. Now it works fine. I'll go back and dig when I have time...but I just don't.
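In routing terms, the punt amounts to swapping the resourceful declaration for a couple of plain named routes. A rough sketch, with invented action names and written in current Rails routing syntax rather than the exact code I have, looks like this:

```ruby
# config/routes.rb -- hypothetical sketch of the non-REST fallback.
Rails.application.routes.draw do
  # What the REST-ful version would have been:
  # resources :messages            # GET/POST/PUT/DELETE, tripped up by the 406s
  # The plain GET/POST fallback that the in-world HTTP calls are happy with:
  post 'messages', to: 'messages#create'   # external apps queue a message
  get  'bundle',   to: 'messages#bundle'   # the in-world hub polls for bundles
  post 'reports',  to: 'sensors#report'    # sensor sweeps report their inventory
end
```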

It's an odd thing if you think about it. I was going through all sorts of gyrations to implement REST-ful interfaces, which was supposed to make things 'simple'. Really, the gyrations were there so I could code to a particular programming philosophy, and while attaining a philosophically pure implementation still has its attraction, I was wasting time. Ironic!

I want to give props to a pretty interesting blog on the 3D web; there's more out there than I thought, and here I figured I was being pretty tiresome hyping this technology around my shop! Look through some of the reported developments, and see if you agree...hype or not?

Well, the sim is down for maintenance, hence the attention to the blog. I'll put up some screenshots of how we're using SlIcer when I can get back in! Cheers...