Saturday, December 8, 2007

Rails to the rescue

It's good to keep the tools in your box sharp. I'm happy I've been messing with Rails on my own, because I need to knock out a prototype for work really quickly. Rails to the rescue!

A couple of notes: NetBeans 6.0 is out, and I highly recommend it as your Rails development environment. It took me only a few minutes to switch to the new IDE, and it's very smooth. Alas, when I switched, I had a very hard time getting plug-ins, and getting Rails to upgrade via Gems; I was receiving weird buffer errors. Google to the rescue: there was a simple fix that involved upgrading RubyGems itself. Just run this command:

gem update --system

and you should be in business. Thanks to the DRY blog for the heads-up! I'm also incorporating the WillPaginate plug-in in this round, and I've got to mash up with Google Maps. I did find a gem for Google Maps; we'll see how far I get, but I'll post any findings.


Apparently, will_paginate returns nil if your collection is smaller than the per-page size. I was having a problem where I was returning an array of results for my 'position log', and it would not show up in the view where I had used

<%=will_paginate @obs_log %>

I'm not sure why, but here's the code for will_paginate:

# Renders Digg-style pagination. (We know you wanna!)
# Returns nil if there is only one page in total (can't paginate that).
def will_paginate(entries = @entries, options = {})
  if entries.page_count > 1
    renderer = WillPaginate::LinkRenderer.new entries, options, self
    links = renderer.items

    content_tag :div, links, renderer.html_options
  end
end
So this really does me no good. I'm sure I'm missing something obvious, but it should at least render the collection without the paging cruft. I'm just in too much of a hurry, and this was frustrating. I punted and just showed the last 30 or so with a :limit in my find_all.
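To make the behavior concrete, here's a tiny stand-in that mimics what the helper does; the method name and the simplified paging math are mine, not the plugin's API. It returns nil whenever the collection fits on one page, which is exactly why nothing shows up in the view:

```ruby
# A stand-in mimicking will_paginate's surprising behavior; the name and
# paging math here are my own sketch, not the plugin's API.
def paginate_links(entries, per_page)
  page_count = (entries.length.to_f / per_page).ceil
  return nil if page_count <= 1 # one page: the helper renders nothing
  (1..page_count).map { |p| "[#{p}]" }.join(" ")
end

paginate_links((1..5).to_a, 10)  # => nil, so the view shows nothing
paginate_links((1..25).to_a, 10) # => "[1] [2] [3]"
```

Guarding the helper call, or falling back to a plain :limit query as I did, avoids the silently empty view.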

I want to present a RESTful interface for my app to accept data from a GPS-equipped mobile. I ran across this RailsCast on REST, which seems useful...
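As a sketch of the convention involved (plain Ruby, not Rails itself, and the 'positions' resource name is hypothetical): a RESTful interface maps HTTP verb + path pairs onto a small, fixed set of actions, so the GPS client can simply POST its readings to one URL.

```ruby
# A toy router showing the RESTful verb/path -> action convention for a
# hypothetical 'positions' resource. Rails generates this mapping for
# you via map.resources; this just makes the idea concrete.
def route(verb, path)
  case [verb, path]
  when ["GET",    "/positions"]   then :index   # list readings
  when ["POST",   "/positions"]   then :create  # GPS device submits here
  when ["GET",    "/positions/1"] then :show
  when ["PUT",    "/positions/1"] then :update
  when ["DELETE", "/positions/1"] then :destroy
  end
end

route("POST", "/positions") # => :create
```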

Monday, December 3, 2007

will paginate

I've been working on a Rails app as a sort of hobby, it's for a site that I'm prototyping for fun, and we'll see where it goes. I've been off and on, depending on the home schedule, but having a concrete goal helps in my effort to keep my chops up. My areas of focus right now are Ruby, Rails, AJAX, JavaScript (with the Dojo toolkit), as well as CSS and design (never my strong suit). The things I work on lately are quite fun and challenging, but have taken me away from heads-down web app coding for quite a while.

Anyhow, working with the project, pagination turns out to be deprecated in Rails. I grabbed the new plug-in, classic_pagination, but the first thing I got is a notice that it's dead code, and I should move to WillPaginate. I love working with Rails, but the capricious nature of open source does have its drawbacks! Anyhow, I'm working with it, it looks nice. I found a quick jump-start on a RailsCast episode dedicated to pagination, linked here. RailsCast is a fine resource, and I highly recommend it.

One thing I would point out, and something that I'm still working to change, is to wake up to the fact that Rails and Ruby let you pop open a console and work with your code while you code. Try things out, see what responses you get, and if it looks good, copy it into your codebase. As a Java monkey, this is still a foreign mode of operation. The point is, if you're used to coding Java like I am, get used to having that console open, and monkey with your code! Check out the RailsCast, and give WillPaginate a try.

Taking the Social Web into the virtual world

Here's an interesting article on IBM's efforts to blend social software, identity, and the virtual world through Lotus.

IBM Lotus programmers and engineers from IBM's research groups are currently working on ways to employ virtual reality technologies with Lotus Connections social computing software, said Jeff Schick, vice president of social computing for IBM.

I heard someone describe the 3D web by saying 'people are back', and I think this is the true strength of environments like Second Life. It's not about the graphics or a particular virtual space; it's about the people and experiences that are available. Maintaining and monitoring a social network is Web2.0; the 3D web is about being present with friends and peers in meaningful ways, through virtual experiences.

Thursday, November 29, 2007

Building the 'web of things' means breaking some eggs

I've been writing a lot about the 3D web lately, and I'm still pretty jazzed about what I've seen. Part of the appeal is the way that environments like Second Life serve as a metaphor for the merging of the physical and virtual worlds that is taking place all around us.

Anyhow, way back in the day, I was writing here about the next web, or Web3.0, or whatever you want to call it. Back in those old days, about 12 months ago, I was really thinking more about the new web, and the mobile web, like in this dusty old post. Someone had already fronted Mobile2.0 as a variant of Web3.0, in the confusing cloud of infotech talking heads.

Anyhow, one barrier to 'Mobile2.0' is the fact that networks and devices can't be ubiquitous if you can't get your cell phone to run an app or use the network you want; witness the entire iPhone hacking phenomenon. The subtext of the whole Google Android platform seems to be an attempt to smash through the walled gardens of your typical telecom; witness the mission of the Open Handset Alliance. The actions of Google, and the recent announcement by Verizon that it will open its network to any app and any service, hint at the cracks that are appearing in the walled gardens. Does this hint at a new wave of innovation driven by the availability of an open platform? I'd think so. At any rate, changes in the mobile space are coming, and they'll contribute to the next 'version' of the web!

Monday, November 19, 2007

Virtual SunSPOT controlled by a real SunSPOT

Pardon the Zapruder-like quality of this film, but it shows the hack I mentioned in my last post. I'm in Second Life, controlling a virtual SunSPOT from a real one. In this case, I'm tapping into the 3D accelerometer to pick up the xyz rotation and sending it through my framework to rotate the virtual one. It's a bit laggy, and not 100 percent there, but enough to get the idea.

If I ever find the time, the next cool example would be to implement the ectoplasmic bouncing ball demo using one real and one virtual SPOT. Anyhow, it works. The point really is to learn about the SPOT, and why not do something interesting while testing them...?

Thursday, November 15, 2007

SunSPOTS talk and demo today at Sitterson

Paul Jones sent out this note, and I'll be attending for sure:

About SunSPOTS

Where: Sitterson 014 When: Thursday November 15th at 3:30
Who: A member from the Sun Labs, David Simmons

David has hands-on experience building applications for SunSPOTs and was instrumental in their design and development. He will be on hand to offer his insight into this amazing product.

The SunSPOT (Small Programmable Object Technology) was developed in the Sun Labs and represents the future of embedded systems. Already used throughout academia, students and professors alike are finding new and interesting uses for SunSPOTs. Each SunSPOT comes equipped with a processor, memory, eight external tri-color LEDs, light sensors, temperature sensors, an accelerometer, and several digital/analog inputs and outputs, offering up seemingly countless practical uses.

At its core, a SunSPOT is an embedded system. But, unlike other embedded systems that must be programmed using a low-level language such as assembly or C, SunSPOT applications are developed in Java. By allowing Java applications to be uploaded and run on an internal Java Virtual Machine, Sun is not only opening up SunSPOTs to more users than many other embedded systems, it is also leaving the final function of each SunSPOT up to the end user. By following a simple API with which to interface the SunSPOT, developers nationwide have created unique uses for SunSPOTs - everything from animal research to rocket testing and much more!

I'm currently working with the SunSPOT developer's kit, and have been going through (and hacking on) the demo apps. One of the first things I am trying is to tap into the 3D accelerometer. I took the telemetry example and added tilt to the packets coming off the SunSPOT, and have that available on the host. At the same time, I've created a virtual SunSPOT in Second Life, and have scripted it to mirror the pitch, yaw, and roll coming into the LSL script. Just a few more tweaks, and the virtual SunSPOT will be controllable from a real one. This has been done before, but not in Second Life. The lag will probably be pretty bad, but I want to explore how multiple SunSPOTs, used by different people in an immersive environment, can create cool experiences.
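For reference, the tilt I'm adding to the packets comes down to standard accelerometer trigonometry. This sketch (plain Ruby for illustration, rather than the SunSPOT's Java API) shows pitch and roll computed from a 3-axis reading:

```ruby
# Pitch and roll (in degrees) from a 3-axis accelerometer sample, using
# the standard atan2 formulas; ax/ay/az are accelerations in g's.
def tilt(ax, ay, az)
  pitch = Math.atan2(ax, Math.sqrt(ay**2 + az**2)) * 180.0 / Math::PI
  roll  = Math.atan2(ay, Math.sqrt(ax**2 + az**2)) * 180.0 / Math::PI
  [pitch, roll]
end

tilt(0.0, 0.0, 1.0) # lying flat: pitch and roll are both 0
tilt(1.0, 0.0, 0.0) # tipped on end: pitch is about 90 degrees
```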

Anyway, here's a shot of the virtual SunSPOT, when I get it hooked up, I'll shoot a video. I might have it by this afternoon, if the creek don't rise. Anyhow, see you all at the talk this afternoon!

Tuesday, November 13, 2007

My good deed for today - UdpClient in C# joining a multicast group

Why are simple things so difficult? I spent a couple of hours banging my head against the wall on this one. All I wanted to do was push out multicast UDP packets and pick them up from a C# program. The UdpClient is not very well documented, and the examples I found didn't work. Quite simply, this is what I had to do, marked in the comments below. Now this works!

I hope I saved someone a minor headache.

using System;
using System.Collections.Generic;
using System.Text;
using System.Net.Sockets;
using System.Net;
using System.Threading;

namespace UbiSenseUdpClientTest
{
    class UbiSenseUdpListener
    {
        // the group address value was dropped from the original listing;
        // substitute the multicast address your sender actually uses
        private static readonly IPAddress GroupAddress =
            IPAddress.Parse("224.168.100.2");
        private const int GroupPort = 64555;

        private static void StartListener()
        {
            bool done = false;

            // even though the samples show the no-args constructor for
            // UdpClient, you must specify the port you are going to use
            // if you want to receive multicast packets
            UdpClient listener = new UdpClient(GroupPort);
            listener.JoinMulticastGroup(GroupAddress);
            IPEndPoint groupEP = new IPEndPoint(GroupAddress, GroupPort);

            // even though the MSDN examples say to connect, don't connect
            // before you receive, or you will sit and block at the Receive
            // below, 'waiting for broadcast'
            //listener.Connect(groupEP);

            try
            {
                while (!done)
                {
                    Console.WriteLine("Waiting for broadcast");
                    byte[] bytes = listener.Receive(ref groupEP);

                    Console.WriteLine("Received broadcast from {0} :\n {1}\n",
                        groupEP.ToString(),
                        Encoding.ASCII.GetString(bytes, 0, bytes.Length));
                }
            }
            catch (Exception e)
            {
                Console.WriteLine(e.ToString());
            }
            finally
            {
                listener.Close();
            }
        }

        static void Main(string[] args)
        {
            StartListener();
        }
    }
}
Thursday, October 11, 2007

Turning Turing Around

I was reading Irving Wladawsky-Berger's blog today when I happened upon this wonderful observation..

I was reminded of the Turing Test recently, as I have been watching the huge progress we are making in social networks, virtual worlds and personal robots. Our objective in these applications can perhaps be viewed as the flip side of the Turing Test. We are leveraging technology to enable real people to infuse virtual objects - avatars, personal robots, etc - with intelligence, - as opposed to leveraging technology to enable machines and software to behave as if they are intelligent.

What intrigues me so much about virtual worlds like Second Life is this ability of avatar-based virtual spaces to allow you to push through the barrier and cross over. How's that for a bunch of metaphysical BS! This is a different aim than something like Looking Glass, which is trying to apply a 3D metaphor to a 2D desktop. This is about stepping through to live with the data, or the sensors, or the other distant collaborators. As the real world becomes more inhabited by pervasive computing, it only seems natural that we go and visit the virtual on its own turf. One wonders about the definition of an application interface in the future. As machines grow smarter, perhaps we'll pop into the 'living room' of our personal agent to have a chat.

At any rate, there are a couple of fun things I'll be looking at in the near future that can tie in to these ideas. First, the idea of pervasive, wireless sensors everywhere. I'm waiting for a SunSpot Developers Kit, and there will be some sensor applications coming down the pike that could involve these extremely cool sensors. The fact that they use Java is a plus in my book. Needless to say, I'll be brushing up on my J2ME.

The next thing I see coming down the pike is real time location tracking, using the UbiSense platform. This is being leveraged for an intriguing space called a Social Computing Room, and has all sorts of potential uses. Here, I'm going to be doing some .Net programming.

Like the blog quote above, I've had a unique chance to push the physical into the virtual, and with the mentioned projects, there's a chance to work in the other direction. Where these meet is getting to be a pretty interesting space!

Wednesday, October 3, 2007

Science 2.0 talk on Friday

I'm going!

Center for the Digital Libraries (CRADLE) presents

Speaker: Bora Zivkovic, the Online Community Manager at PLoS ONE (Public Library of Science)
Date: Friday, Oct 5
Time: 12:00-1:00pm
Location: Manning Hall, Room 208

Title: Science 2.0

The development of the Web has provided new ways for doing science, publishing and communicating science, networking within a scientific community, and teaching science. Blogs and wikis, existing social networking sites (e.g, Facebook), new science networking sites, Open Access Publishing and Open Notebook Science are just some of the many ways that scientists, students and interested laypeople are starting to change the way science (communication) is done, connected, used and archived and the future is difficult to predict - which does not stop us from speculating, which is the fun part, so let’s speculate!

Friday, September 28, 2007

Why Delete?

Yesterday, I attended a discussion at the Wilson Library posing the question...Storage is Cheap: Why Select? Here's the premise:

Storage media for digital information are extremely cheap and getting exponentially cheaper over time. The price of a terabyte of hard drive space is a few hundred dollars, and in a decade it will be less than a dollar. The cost of the expertise of well-trained information professionals, on the other hand, is quite high and likely to increase over time. So shouldn't we stop worrying about selection and just capture and keep as much material as possible?

There are some drawbacks to keeping everything, including the cost of storage and maintenance (file formats and media change, for example), moral and ethical questions (do I want this propaganda or hate speech to be available to the public?), and the impact of preserving institutional records on liability.

The main point of contention was the value of selection, where the archivist's choice of what to preserve is in itself a valuable data point. In essence, the archivist applies the values and understanding of material at the time of selection, which can preserve the context of a collection for future viewers. Another key point is that, even if material is preserved, it may not be findable. A Google search that returns 5000 hits is great, but if the item shows up as hit 4999, it's effectively lost.

Fine, but I came away thinking that the game really has changed, and we're relying too much on the present to see how the future is developing. Paul Jones really hit the key point: it's preserved, and it is not going away, even if you think you have deleted it. There are backups, and things leak to the web or are documented through other channels.

I wonder how much impact an individual archive can have on our understanding of an event, time, or place in the read/write web era! An archive/archivist seems like a throwback idea. This relates to Paul's point, but I see the future of archives as distributed. Storage will be cheap enough to keep everything, search algorithms will improve, and the cost of preserving media will continue to decline (but free the formats!). We will throw the data up on the web in widely distributed formats, and the power of (buzzword points) the long tail, collective wisdom, and the value added by participation will turn the web into a huge, searchable, participatory archive of everything. I notice that I've departed from pinning preservation on traditional institutions, because this can be seen more broadly, but I can still see great value in the editing, selection, and context provided by an archivist for a narrow, specialized collection of data.

There are also so many new sources of data, and I wonder how these play into the idea of archives and collections. How about personal archives, life-blogs, uploaded media, records of digital communications, and the coming deluge of data from sensors in the environment? Nobody in the present can imagine how all of these data sources may be used in combination by a future researcher or viewer. Given the emerging participatory web, the way that people use and link information will itself provide context and assist in findability, creating spontaneous collections that have nothing to do with the original intent when the data was first stored.

So that's a pretty confused picture, and what is surprising to conclude is that being a librarian sounds pretty cool.

Thursday, August 23, 2007

A goldmine of information about Second Life and Education

This came to me via Kathy Kyzer at ITS-Teaching and Learning, who have done wonderful work on Second Life (visit the UNC island sometime).

These are the proceedings of 2007 Second Life Education Workshop (PDF warning), lots of information about experiments and experience using Second Life in an educational setting.

Friday, August 10, 2007

The realities of virtual worlds for corporate sites

There has been a swirl of hype and anti-hype around the idea of 'Serious Virtual Worlds'. Of note were the Wired article and a blog posting by Chris Anderson that generated a lively exchange. Even more recent was a Gartner report cautioning corporate America about the risks of doing business in Second Life.

Metaversed yesterday held a really nice event in the SAP space on Silicon Island on Corporate Challenges in Virtual Worlds. The panel was:

Here's a shot...hardly a 'ghost town' for tech firms in Second Life!

The panel seemed to be in agreement that much of the recent coverage of virtual business was uninformed, or based on misconceptions. As an example, Gartner cautioned that corporations could not control access to their virtual sites. Expectation plays a big part here too. I think it is true that corporations that expect to build a virtual commercial for a product will be disappointed when nobody shows up. Second Life seems driven by events and gatherings, and is very much a social animal, organized around small networks. Increasingly, real-life blogs and things like Twitter are playing the same role in virtual space as they do in real life, acting as an alert system for happenings and reporting events later to a wider audience.

The virtual medium has characteristics that distinguish it from the web. Instead of an ad that is encountered for a brief few seconds by a wide audience, virtual interactions involve deeper contacts with customers or contributors in small groups. It was pointed out that surveying a corporate office and seeing it empty can be misleading. The vendors represented pointed out the fact that one must understand the point of a particular build, it could be used for various events, could host customers for private exchanges and training, and also has ripple effects in the 2D media that must be counted in any calculation.

This was alluded to by one of the presenters, but I think about trucking out to the Airport Sheraton for Oracle Tech Days, or similar, vendor-driven events. In those cases, vendors ship out their employees, arrange conference space, and put on an all-day show to a small group. How is this any different from the corporate uses described by the panelists? Not very!

Anyhow, the Metaversed podcast carries the content. I continue to be impressed with the richness of these experiences in virtual space, including all of the networking that occurs before and after these presentations, which in itself demonstrates one of the true, unique properties of the 3D web.

I took a tram into the fourth dimension
Cos I had the blues, the blues of throwing it all away
Just gimme a Tequila, I'll slam it the 4-D way
And when I got there you know it had certain similarities

Like no smoking anywhere
And hiding in the khazi to avoid paying the fare
4-D Tequila anyone
And dont think we didn't dance to records by the Fifth Dimension

-Joe Strummer

Thursday, August 9, 2007

Interesting SL Event Today...

I picked this up from Metaversed...this is at 12PM SL time today. People from Sun, Amazon, Xerox, etc...

There's been a lot of negativity in the press of late over the marketing failures of corporations in the virtual world of Second Life. Analyst firm Gartner have even warned companies away from public worlds recently. With all of that in mind, Metaversed has put together a panel of active real life firms in Second Life to discuss their experiences, and lessons learned from being part of the community. The debate will no doubt prove useful to others and be of great interest to anyone involved in the business side of virtual worlds.

Monday, August 6, 2007

JADE, RMI error running JADE Gateway using Glassfish

Nothing comes up on Google, so I'll throw this out for reference. I was trying to use the new Glassfish server to run a JADE GatewayAgent on localhost, connecting to a main container on the same host. Anyhow, it blows off with a MalformedURLException: no protocol yadda yadda yadda when trying to add the child node, failing in the RMIMTPManager$PlatformManagerAdapter.addNode method.

This looks like an incompatibility with the RMI implementation on Glassfish. In the short term, I punted to Tomcat 5.5 and all is right with the world again.

I'm on J2SE 1.6_02, btw.

Thursday, August 2, 2007

A bit part on virtual worlds last nite

NC-17 news did a piece on virtual worlds last night. See if you can spot the nerd. Link to video here.

I'm working on JADE agents today. I'm somewhat surprised that agent frameworks like JADE are not applied more, especially in this 'come to me web' era. As we get an excess of computer cycles in our individual 'infrastructures', and as we become more mobile, there certainly seems to be a niche that agent computing could fill.

Wednesday, August 1, 2007

IT Conversations: Faculty Summit Opening Panel

This week's Technometria features the Microsoft Research Faculty Summit's opening panel, moderated by Ed Lazowska and including a number of leading academics and Microsoft researchers.

IT Conversations: Faculty Summit Opening Panel

Real Agents working with virtual spaces

It's been a while since I looked at agents, but I was happy to see that Jade 3.5 had been released. That's actually old news for some. I'm working on a project that embeds a physical space within a virtual 'building', and utilizing the JADE agent framework to tie the virtual and the physical worlds together seems like the ticket.

Anyhow, the things that jumped out at me about 3.5 were the ability to communicate between agents using a pub-sub topic model, and a re-working of the web services integration gateway.

In a previous post, I had talked about a small framework to tie virtual space (in this case Second Life) to external applications. The framework uses sensors (and I'm looking at other means) to detect and inventory objects in the virtual space, and gives the facility to pipe messages to those objects from outside. Yesterday, I used that to create a control panel GUI that can run on a small tablet. This control panel uses the framework to send information into the virtual space, causing alterations to the environment.

Over the weekend, I added the facility to push events out of the virtual space to subscribing listeners. Objects in SL can generate events for touch, creation (on_rez), collision, and so forth. By dropping a script in an object, the framework can trap these events and communicate them to a hub. The hub takes these events and sends them to the framework. Here's a pic where the 'pyramid' is the event generator, and the sphere is the hub. I simply 'touch' the pyramid, and the hub is messaged, sending the event to the framework for dispatching to subscribed listeners.

Below is a shot of the 'touch' event in the framework. There is a facility that inspects events coming out of the virtual space and compares them to subscriptions. A subscriber picks the 'region' (or island), the object name, and the desired event. The subscriber also sets up a callback, and receives the parameters associated with each event. I want to add more flexible subscriptions, using regular expressions, etc., but that's more than I need right now. It might also be cool to add the ability to specify a script to run when an event is encountered, but for now it can just call back to a subscriber at a given URL. Here are the basics of the event as it arrived at the framework. What's not shown is a generic 'payload' field, where I plan on pushing in the variables associated with each SL event.
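In plain Ruby, the subscribe-and-dispatch idea looks something like this; the class and method names are my own sketch, not the actual framework:

```ruby
# A minimal sketch of the event hub idea: subscribers register for a
# (region, object, event) triple plus a callback, and incoming events
# from the virtual space are matched and dispatched to them.
class EventHub
  def initialize
    @subscribers = []
  end

  def subscribe(region, object, event, &callback)
    @subscribers << { region: region, object: object,
                      event: event, callback: callback }
  end

  # Returns the number of subscribers the event was delivered to.
  def dispatch(region, object, event, payload = {})
    matched = @subscribers.select do |s|
      s[:region] == region && s[:object] == object && s[:event] == event
    end
    matched.each { |s| s[:callback].call(payload) }
    matched.size
  end
end

hub = EventHub.new
hub.subscribe("MyIsland", "pyramid", "touch") do |payload|
  puts "pyramid touched: #{payload.inspect}"
end
hub.dispatch("MyIsland", "pyramid", "touch", avatar: "SomeAvatar") # delivers to 1 subscriber
```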

At any rate, the 'control panel' I wrote for the tablet uses the ability to push messages into the sim by using a known region and object name. The new addition of the ability to push events out of the virtual space to subscribers is next on the plate, hence the interest in using agents on the 'real life' side. I think topic-based subscriptions on the agent side will help me figure out cool things to do given that I can hook into virtual events, plus it is just plain geek-fun.

The first task will be to have an avatar push a doorbell button in the sim, pick up that event, push it to the agent, and have the agent kick off a real-life doorbell chime. A stupid pet-trick, true, but the point will be to exercise this thing end-to-end, and then I'll have established a workable 2-way bridge to interesting things later. So far, the scripting/framework approach works out. Time will tell how well it scales, how lag-inducing it can be, etc. I've gone the approach of using conservative ping and sense rates, and it's been pretty smooth and stable so far.

Something whack that would be a fun side-project would be to wrap virtual devices with Jini, and have discoverable virtual services under a framework like that. This gets back to an idea I had a while back, using virtual spaces, virtual sensors, virtual actuators, and virtual people, to develop and prototype smart, ambient computing services. Given the collaborative nature of these environments, it might make sense!

Tuesday, July 31, 2007

Sun's open source 3D World, and 3D web as a training tool for WMD management

A couple of interesting links. First is Sun's Project Wonderland, an open source client and server for their 3D world. This looks like it's in the early stages of development, but you can run the client and the server on your own, always a plus!

The vision for this multi-user virtual environment is to provide an environment that is robust enough in terms of security, scalability, reliability, and functionality that organizations can rely on it as a place to conduct real business. Organizations should be able to use Wonderland to create a virtual presence to better communicate with customers, partners, and employees.

Second, the Idaho Bioterrorism Awareness and Preparedness Program is using the 3D web (in this case Second Life) for incident management training...

This virtual environment spreads over two islands Asterix and Obelix (65536 x 2 sq. meters), with one island dedicated to a virtual town and the other a virtual hospital. The design of this virtual environment is influenced by dioramas frequently used by emergency services to support their tabletop exercises.

IBM Employees and Second Life Guidelines

IBM tells employees to behave in Second Life, from Network World.

Monday, July 30, 2007

NCSU, and using 3D environments for learning

UNC has a lot of interesting projects applying the 3D web to education, visit the UNC island sometime! Here's some info about similar initiatives at NCSU!

I went to an interesting Croquet demo last week; I've got some notes and am working on a write-up of what I heard.

Friday, July 27, 2007

Wired, SL, 3D web, the hype curve in action

This is sort of fun: the back and forth about advertising and the 3D web based on a Wired mag article, sort of like that previous LA Times article. In the bubble days, businesses thought they could sell sock monkeys and pet food and, just because it was on the web, they'd be millionaires. The corporations that think they are going to sell mac & cheese because they put up a virtual store are just as deluded, and the press will be all over that, I imagine. The most useful thing in the back and forth is the fact that sites in virtual space often appeal to the long tail, versus a mass-market appeal. The long tail is ignored in the original Wired write-up, and I think that's the critical omission.

There's that old saw about asking a farmer what would help his farm work better, and him responding "a better plow or a stronger mule" rather than pointing to automated farm equipment. In other words, the farmer can only apply the world he knows to the question. I'd say we're in the middle of a prime example of that phenomenon. There's also the old adage that we overestimate change in the short term and underestimate it in the long term, and this has a lot to do with the shape of the hype curve.

History repeats itself, and at an accelerating rate, it seems.

Using the Wii with 3D Web as a training/sim device

From Wired...

For Stone, the Wiimote is the key to building realistic training simulators within the virtual world of Second Life. He is helping companies and universities do that through his WorldWired consultancy. Clients include a company interested in training workers for its power plants, a manufacturer of medical devices and pest-control firm Orkin.

Tuesday, July 24, 2007

The future of virtual worlds, LA Times says it's bleak

It's odd sometimes, the way backlashes go. I've been blogging a bit, and being somewhat evangelistic about the 3D web. I didn't invent the term, and wasn't among the first to catch on, but I have a gut feeling (me and Chertoff) that the term means something. Lately, especially after the LA Times article about the death of commerce on the 3D web, I've been approached by multiple people who want to explain to me that this is all a tempest in a teapot.

A proper, direct response to the LA Times article about Second Life can be found here, and here, so I won't try to recapitulate the common myths that make up such negative press. I was considering reasons why I am intrigued by the whole topic of virtual worlds, though, and I wanted to jot some of these down. These observations are shaped by my own interests, by past projects I've worked on, and so forth. You may discover other reasons to pay attention to the 3D web.

Real and Virtual are Merging

This is a drum I was beating well before I delved into Second Life. In this older post, and this one soon after, I talked about Mobile2.0 and Web2.0, trying to relate these terms to this larger idea of real and virtual merging. The main idea was that mobility and the new web were, in part, about the 'web of things'. Sensors and actuators talking on the web, and smarter applications to discover and manipulate this explosion of new information and services. Whole new types of applications stretching the definition of the web. Virtual worlds are important because they are a metaphor for this merging. In a way, our avatars allow us to cross the barrier, and physically inhabit the web of things. That's a bit sketchy, but I see the 3D web riding the coat-tails of the emerging web of things.

There are potential, practical benefits to visiting the virtual world to understand and manipulate the physical too. I've been interested lately in the development of EOLUS One, as described in this UgoTrade blog entry. This is a fairly wide-ranging project, but it does serve as an interesting illustration of real world/virtual world merging.

People Make a Comeback

The ubiquitous social networking web site provides many benefits. I'll pick on a few, and tie them to a virtual world experience:

  • a venue to expand social/professional networks
  • a tool to maintain connections to existing friends
  • a platform to shape and present our own identity
  • a tool to filter and flag important information (use of social networks to compensate for a deficit of attention)
  • a collective tool to add value, from which we individually extract benefit
All of these points can be extracted from classic definitions of Web2.0, and many of the points are mirrored in virtual worlds such as Second Life.

Expanding Social Networks

The first point, virtual worlds as a venue for expanding social networks, is primarily a function of the ability of a virtual world to create an event, or common experience. Think about where friendships start: it is often some shared experience, like a college course, a conference, or some notable event. Virtual worlds can provide an immersive, compelling experience from which these connections can take root.

Social networking also relies on sharing connections, in a friend-of-a-friend style. Given the existence of shared experience in virtual worlds, the familiar mechanism of meeting new people through current friends has a virtual analogue.

Maintaining Current Connections

It's probably a question for sociologists, but what quality of social experiences can be achieved in a virtual environment? I'm quite sure it's not the same as real life, but I also suspect the interaction is richer than one would expect.

We're using tools like Twitter, Flickr, and Facebook as a way to keep up with our friends and colleagues when we're separated by time or distance. The functions of these tools do not map onto the real-time nature of virtual worlds, but I suspect that virtual worlds can add some unique new tools to serve these ends. One example that comes to mind is the ability to establish 'hang outs' particular to a group of friends and colleagues.

Shaping Identity

People use social applications as a way to shape and present themselves. Virtual worlds such as Second Life have an economy partially based on the customization of personal avatars. People take great care to build an image of themselves. Does this aspect of virtual worlds play into this basic function of social networking applications? I guess this is another one for the sociologists...

Tapping into Collective Power

Successful Web2.0 sites often become so because they provide tools to build something interesting, let the tools loose on the world, and leverage the resulting content. I'd toss out Wikipedia and Flickr as two prime examples. There's a fundamental principle at work there, and a lesson that virtual world developers need to take to heart.

Professional 3D developers really don't like Second Life. I picked that up! I can see why, I think the building tools are crummy. This is something I had observed in a previous blog entry, but it bears repeating...the quality of the tools matters, but more important than professional level, sophisticated building tools are accessible tools, available in-world, suitable for the average Joe to get something done. There are indeed master builders within environments like Second Life that could take advantage of special tools, but I will guess that the vast majority rely on simple constructs, and use the ecosystem to purchase the rest.

I think about how bad HTML is, and how crude the tools still are, and would not be surprised to find out that, back in the day, the web was dismissed as consisting of poor technology in the hands of unqualified developers. I know there are two sides to the coin, as I still encounter poorly designed sites with flaming clip-art, but I look at how far the web has come based on simple HTML, and simple scripting, and don't think it wise to assume it won't happen again.

It's not there yet!

Don't take this as a Second Life fan site. There are lots of things lacking in Second Life, and lots of other virtual worlds out there. I'm going to a Croquet presentation this afternoon, and have begun looking at that tool, getting used to Blender, and intending to learn Squeak. The dust has not settled on the particulars, but I really do think the 3D web means something.

There are 'virtual natives' coming up fast. Under my watchful eye, my little kids spend a little time wandering around Nicktropolis, and similar sites that approximate virtual worlds. These kids don't even blink, they just jump right in, and they are right at home. It's a mistake to put our own preconceptions and limits on a new technology, based on our own experiences and habits. I liken this to the way that younger people don't have a problem editing and keeping documents out on the web, or in alternative, open-source office suites, versus the old MS Office stand-by. I look at my own kids, and it makes me think that the metaverse is as natural to them as Tom and Jerry was to me.

It's especially clear that issues like identity, security, scalability, and application development support are all lacking in many of the current contenders. The power of open source and standards needs to be applied to this space, but the 3D web is here, and it's going to keep growing, I feel confident in saying, even if everyone wants to observe that this is just a game with no future...heck, I'm still waiting for the death of Java!

Friday, July 20, 2007

Try NetBeans 6.0 Milestone as your Rails IDE

I don't have a comprehensive analysis, just a general feeling. I really like coding Rails apps using the NetBeans 6.0 Milestone. I love Eclipse, and switch between the IDE's depending on the specific task, so this is not coming from a particular camp.

I like RadRails a lot, but it seems to have stalled a bit. I kept having problems where the IDE would lose my Rake tasks. I found a fix to manually add an Eclipse builder to the project, point it at rake, etc. Even so, I still periodically see the app forget about Rake. A small complaint, really, but it frustrated me enough to switch. What I found in NetBeans is a rather tight-feeling, smooth IDE for Ruby on Rails. No big analysis, just a nice experience. I'm back to coding, and my IDE seems to not forget about rake. Now if only I can remember my anniversary coming up!

Tuesday, July 17, 2007

Mobile AJAX, then some really cool SL stuff

I read this blog posting about mobile AJAX with some interest. The main premise was that, due to the difficulty of porting to so many mobiles, even with the efforts of the J2ME specification, large-scale deployments remain prohibitively expensive. The blog concludes:

AJAX offers a potentially better solution in comparison to the incumbents (J2ME and XHTML) due to a combination of fewer potential choke points because of its distribution mechanism. The economic models do not favour J2ME and AJAX offers a superior user experience to XHTML. It has the support of the developer community.

I think the idea is cool, but the one fly in the ointment is the fact that mobile connectivity is still so sketchy, and applications need to really support a 'sometimes connected', or even 'mostly connected' environment, as well as the ability to receive pushed-in data. I suppose there are all sorts of creative ways around this, but for the foreseeable future, web browsing on mobiles still sucks.

To other things, we're working on a really cool idea for physical/virtual mash-ups. Imagine a physical space embedded within a virtual space, where real people can see and interact with avatars, and vice versa. In this environment, the virtual avatars can 'reach in' and alter the physical environment, and real life individuals that inhabit this space can use physical 'things' to alter the virtual environment. Sort of a twilight-zone between two parallel universes!

The pic below shows me standing in the virtual room that surrounds the physical room. Imagine that the virtual walls are projected on the physical walls that surround the room occupants. On the walls are windows that avatars may approach. The avatars themselves see real-time streams of audio and video, so from their perspective they are looking into the real room. This sort of thing has been done before, but I think the context is unique. It'll be a cool place to explore the merging of the physical world and the virtual world. This is being tested right now by pushing a custom Second Life client through four monitors arranged as the walls of the room, and will eventually be projected on the actual walls of the real room. (I didn't write the client!)

It's just the start, a fascinating range of possibilities opens up from here. One SL friend, Uskala, mentioned the idea that the walls of the room could change from a meeting space to an auditorium, so that's something that I just implemented, where the walls reconfigure to reveal an auditorium space, and seating rises from the floor. Imagine giving a SL presentation by standing at a podium, looking out onto an audience of avatars. If time permits, I'd like to program these room alterations into the room control system, so a physical knob or button could select different room configurations. This is really cool stuff! More later as it develops, but the first tries today look promising.

Thursday, July 12, 2007


An obvious point, but write your Rails model validation code before developing your unit and functional tests very far; otherwise, you have to re-do a bunch of test cases....
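For example (a plain-Ruby sketch with made-up model and attribute names, standing in for a real ActiveRecord model), a fixture-style object written before the validations exist will start failing the moment they land:

```ruby
# Stand-in for a Rails model; in real code this check would come from
# something like validates_presence_of :latitude, :longitude.
class Observation
  attr_accessor :latitude, :longitude

  def valid?
    !latitude.nil? && !longitude.nil?
  end
end

obs = Observation.new   # a bare test object built before validations
obs.valid?              # false now that the validation exists
obs.latitude, obs.longitude = 36.07, -79.79
obs.valid?              # true, but every old bare fixture needs fixing
```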


Wednesday, July 11, 2007

Way cool video explaining the new web

This came to me from my friend Joel, at UNCG. In a couple of minutes, this cool YouTube video highlights many of the changes happening on the web, recommended!

Wednesday, June 20, 2007

3D web as disruptive technology - Mitch Kapor

I've had the videos of the IBM & MIT Media Labs conference on virtual worlds running in the corner of my monitor all morning, and I was highly impressed with Mitchell Kapor, Linden Lab Chair, and his view of virtual worlds as disruptive technology. He brings up the term macromyopia, which is a nice word that captures the idea that we overestimate change in the short term and underestimate it in the long term.

Anyhow, his talk is entertaining and thought-provoking, and worth the investment of about 45 minutes of your time.

Food for thought

Check out the prologue to "Everything is Miscellaneous", by David Weinberger. He's an interesting and engaging speaker, and writes in the same style.

Anyhow, I loved this quote, and I think about it in terms of what's happening with the 3D internet...

Those differences are significant. But they’re just the starting point. For something much larger is at stake than how we lay out our stores. The physical limitations that silently guide the organization of an office supply store also guide how we organize our businesses, our government, our schools. They have guided—and limited—how we organize knowledge itself. From management structures to encyclopedias, to the courses of study we put our children through, to the way we decide what’s worth believing, we have organized our ideas with principles designed for use in a world limited by the laws of physics.

Suppose that now, for the first time in history, we are able to arrange our concepts without the silent limitations of the physical. How might our ideas, organizations, and knowledge itself change?

For me this neatly captures a central idea of the 3D web. In a world without limitations, physics, or other constraints, how can we use the tools in a way that feels real, but that doesn't place the limits of physical world, or a static organization of information, into the virtual? This quote highlights both a mistake to be made, and new ways to think.

It also struck me last night that there is a common thread between the (admittedly modest) things I'm doing within Second Life, and past work I did on smart spaces and context aware computing. In some ways, the tools you wish were there in the physical world can be modeled in the virtual, sometimes with the same ends. In each case, physical and virtual, the goal is to respond to each individual, and provide a mesh of services around that person as they navigate the environment. I'm intrigued by the idea that some of these context-aware computing concepts could be applied within the metaverse toward the aims that David Weinberger describes. By the same token, I am interested in how the metaverse could be a testbed for context-aware applications. The whole environment is scripted, you can build sensors and actuators, have location, manipulate the environment, add social elements, etc. Model a smart home, classroom, or office in Second Life...It's certainly faster and cheaper than trying to build a testbed or living lab!

As a note, I happened upon this tidbit from Bob Sutor's blog: links to video are now available from the MIT & IBM conference Virtual Worlds: Where Business, Society, Technology & Policy Converge, which took place on Friday at MIT Media Labs.

Monday, June 18, 2007

Pics from the SL iCommons Summit

I popped in, and blogged a bit about this event here, and here's a link to some interesting pics from the occasion.

Second Earth in MIT Technology Review

From MIT Technology Review...

The World Wide Web will soon be absorbed into the World Wide Sim: an immersive, 3-D visual environment that combines elements of social virtual worlds such as Second Life and mapping applications such as Google Earth. What happens when the virtual and real worlds collide?

This is worthy of a read. The basic premise is that 3D worlds as part of a mash-up with real life locations and data will transform the way we view the 'web'. I'm down with that...

Friday, June 15, 2007

IBM Conference on Virtual Worlds


"We are now at the threshold of newly emerging (Web) platforms focused on participation and collaboration," he said. "The power of collaboration and community are one of the major drivers of innovation as companies figure out the capabilities to accelerate collaborative innovation."

Parris described some of IBM's initial uses of virtual worlds in a business context, including enhanced training, immersive social-shopping experiences, simulations for learning and rehearsing business processes, and event hosting.

Waiting for Lessig at 11:00

Popped into the iCommons summit in Second Life, waiting for 11:00 SL time when Larry Lessig will be speaking. Here's where I'm at..

Some audio probs right now that they are working out.

Cool..things are worked out, watching a short film about the remix culture...

Oh well, looking at a grey screen, having media probs here..

OK, switched computers and I'm able to see; right now Jonathan Zittrain is speaking.

Larry talking about debate with Brett Cottle re copyright. How do people in the creative commons movement get respect? We get that respect by demanding it loudly, like they (copyright people) do. Who are 'we'...iow creative commons?

CC is a movement of open source for culture. Copyright's power is like the command line interface that gets to the core of the machine: great for geeks, not good for most people. For most people, layers are put on top. Think about CC as a GUI overlay for the copyright system's power. Another function is as a signal. The people displaying the cc sign send the message that they are part of the sharing economy. Money is not part of the terms of exchange...instead it's poison. This economy is an important provider of value (wikipedia, flickr). Money is not why people participate.

CC has a role in protecting the sharing economy. CC protects participation in the sharing economy.

"You are helping artists to starve!" as a criticism. Responds that CC can help artists cross over from a sharing economy to a commercial economy when they want to, and when appropriate. New component, beatnick, from creative commons, that allows commercial licensing of creative commons content. Enables bottom-up creativity. You share, and choose when to allow work to be commercially exploited.

We have allowed the other side to frame this as a debate about piracy, as if we are fighting for the right to steal, e.g. in the defense of p2p, as if CC is fighting for the 'right to steal'.

How to respond? This is not a movement about the right to take, it's about the right to create, the right to share in the sense that the artists, creators can be free to choose without the government speaking for them.

General problem with the people controlling government: they only listen to money. (Global warming, healthcare as examples).

CC people need to stand for the movement and make it grow. Standing O.

Wednesday, June 13, 2007

iCommons Summit in Second Life

This looks interesting, another SL overlay of a RL conference...

The USC Center on Public Diplomacy, Linden Lab and iCommons are delighted to announce that the iCommons 2007 Summit in Dubrovnik, Croatia, will be run in parallel in Second Life!

The aim of running the iSummit 2007 in Second Life is to mix the real and virtual world for both attendees of the Summit, and for those who are unable to make it to Dubrovnik, thus expanding the community who will be able to learn, collaborate and share their knowledge and experiences of the Summit. The parallel summit will also help to introduce new users to Second Life and to build the global diversity of participants who are collaborating in-world.

The schedule is posted here, looks like some interesting people will be there, e.g. Larry Lessig, Jimmy Wales.

Making Connections, virtual reality, agent computing, robots, and even real human beings

So I spent a few minutes digging around after reading the Slashdot article about using AI, agents, and 3D visualization to train firefighters. Off on ZDNet is the original article, by Roland Piquepaille.

ZDNet describes the system this way:

The system is currently used by the Los Angeles Fire Department. DEFACTO has committees of AI ‘agents’ which can create disaster scenarios with images and maps seen in 3-D by the trainees. The software agents also evaluate the trainees’ answers and help them to take better decisions.

This is interesting in several ways.

Virtual simulation and training

One of the great potential uses of virtual worlds is the creation of immersive training and simulation environments. I'd anecdotally observe that interacting in a 3D environment with an avatar provides a pretty effective experience. Situations like a fire or a disaster are prime candidates for such an application. Other uses might include immersive language learning, law enforcement, or hospital/medical situations.

Collaborative visualization, ambient data, situational awareness

Collaborative is the key word here, because there are better, higher resolution methods for exploring data through visualization. A simple equation may be to combine your avatar, the avatars of collaborators, and the visualization, so that remotely distributed teams can fly around, point, manipulate, and refer to parts of a visualization as a group. This is somewhat linked to the themes illustrated by multi-touch displays, such as the Microsoft Surface Computer that I mentioned a few posts back.

I'm mostly looking at Second Life, for many reasons. It's safe to say that SL is not a platform for visualizations, but I have tried several small prototypes with the premise that the collaborative nature of these environments yields qualitatively different experiences. Another way of saying this is that it might be useful to look at ways of creating 3D visualizations within virtual environments, not necessarily as the best visualization tool, but as points of reference in virtual collaboration.

Take a look at this image from the DEFACTO page, and imagine how that application, combined with a collaborative, avatar-based environment, could have interesting possibilities, even as far as visualizing and managing an actual event, versus a simulation.

Agents again!

I had a brief run on some earlier projects where I looked at agent technology. At the time, we were looking at the state of context-aware computing, especially as it applied to the development of smarter mobile applications (location awareness, etc). This was mostly using the JADE agent framework, and was based on a research framework called CoBrA. Honestly, I have not been thinking about agents for a while, but this article made me think about agent technology again. Agents are a great model when you have heterogeneous, autonomous, entities that need to cooperate. Especially important is the ability to dynamically form associations, and negotiate to solve a shared task. Web2.0 talks about small things, loosely joined, and agents share that same philosophy in their own arena.
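As a toy illustration of that negotiate-to-solve-a-task idea, here's a contract-net-style sketch in plain Ruby; the agent names and costs are invented, and this is not JADE or CoBrA code:

```ruby
# Contract net in miniature: a manager announces a task, each agent
# answers with a bid, and the task is awarded to the best bidder.
class Agent
  attr_reader :name

  def initialize(name, cost)
    @name, @cost = name, cost
  end

  # A real framework would weigh capability, load, location, etc.
  def bid(_task)
    @cost
  end
end

def award(task, agents)
  agents.min_by { |agent| agent.bid(task) }
end

team = [Agent.new("rover", 5), Agent.new("drone", 3)]
award("survey the site", team).name   # the cheapest bidder, "drone"
```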

Agents have always struck me as not getting enough play in the whole 'next generation web' yap-space, especially considering the merging of the virtual (web) and physical world through the explosion of sensors and actuators that are starting to talk on the web. Both agent technology, and the physical/virtual merging still seem like blind-spots, when both may play an important part in the post-web2.0 world.

In this case, agents are seen as proxies for what Machinetta calls RAP's. Machinetta is one of the underpinnings of the DEFACTO system, and it is essentially an agent framework that supports negotiation, assignment of roles, and other aspects of team-work. RAP's are the Machinetta term for "Robot, Agent and/or Person". Cool...we got robots too!

Virtual/Physical merging

So this was just mentioned, and bears repeating. The web is not only the information and people, but also the parts of the physical world that are being hooked in. This has gone on for a while, but what is interesting is to see that merging playing out in a virtual environment as well. This is actually something I've been messing with in Second Life, though at a much less sophisticated level. The DEFACTO application seems to suggest some of the same notions, in any case.

Virtual ambient information

The last point I'd make is that this application shares some common characteristics of many of the location-aware mash-ups that are everywhere, especially using tools like Google Maps, Google Earth, and now Google Mapplets. This gets back to the original point about interacting with visualizations in an immersive environment. In a virtual, 3D space, it seems like the potential is there for mash-ups on steroids. Here's a shot from an earlier post of a modest example using 3D symbols on a map...

It might be hard to get the gist of this, but, just like in DEFACTO, virtual worlds can represent ambient information about state and situation by the appearance and behavior of the objects. There is no reason that these objects could not link to DEFACTO RAP's for example, and provide handles to communicate or interrogate the state of the various agents.

Lots of possibilities!

Monday, June 11, 2007

Rails Active Scaffold - from a DHSB

Saw this come across from my network, an article from IBM about the Rails ActiveScaffold plug-in...

This is a plug-in that nicely handles all the CRUD that still requires lots of coding in the vanilla Rails framework. The promised benefits include (quoting from the ActiveScaffold page):

  • An AJAXified table interface for creating, updating, and deleting objects
  • Automatic handling of ActiveRecord associations
  • Sorting, Search and Pagination
  • Graceful JavaScript degradation
  • RESTful API support (XML/YAML/JSON) baked in
  • Sexy CSS styling and theming support
  • More extension points than you can shake a stick at
  • Guaranteed to work on Firefox 1+, IE 6+ and Safari 2+
  • Released under the MIT License, the same one as Rails itself, so you can use it freely in your commercial applications.
Worth a try! Of the plug-ins and Rails extensions I've seen lately, this one looks promising.

This plug-in is good for me, as I found I'm a DHSB, from this programmer's test...what are you?

Your programmer personality type is:


You're a Doer.
You are very quick at getting tasks done. You believe the outcome is the most important part of a task and the faster you can reach that outcome the better. After all, time is money.

You like coding at a High level.
The world is made up of objects and components, you should create your programs in the same way.

You work best in a Solo situation.
The best way to program is by yourself. There's no communication problems, you know every part of the code allowing you to write the best programs possible.

You are a liBeral programmer.
Programming is a complex task and you should use white space and comments as freely as possible to help simplify the task. We're not writing on paper anymore so we can take up as much room as we need.

Google Developer Days video streams

YouTube has a wealth of info from the recent Google developer days (gears, mash-up, etc). Worth a look on a slow day.

Sunday, June 10, 2007

Bit of vid showing RL/SL mashup

Willi had captured a bit of video from Second Life showing a campus walkabout with his mobile -> second life reporter. What can I say, it was a nice day on campus!

This afternoon, the band I'm in starts recording a new CD. As we're complete unknowns (and probably deservedly so), we're all DIY. The first time we did this, about eight years ago, it was to a Tascam 80-8 with one bad channel, using a DBX unit that had 4 good channels. We're starting this one out with capability for about 80 tracks, and all the eq's, compressors, and misc. rack gear that we'd rationally want to use. In addition, we can carry the project between home studios and our tracking 'shed', and do independent overdubs. The whole thing is probably going direct to the web under Creative Commons...the march of technology is changing the lives of anyone with any creative impulse, and the web will allow us to reach the dozen people that would want to listen to us prattle on, long tail indeed!

Friday, June 8, 2007

Next version of SlIcer deployed

I've been up late a few nights on this, so allow me to go on a bit...This is the next version of SlIcer, which is essentially a utility for hooking up things in Second Life to things in real life. I've seen things like ObjectOverlord that work on the client code, but I wanted to do things that would work within the vanilla client. Good idea? Not sure, but at least it's workable.

What it does now:

  • Inventories people and objects within a sim using scripted sensor objects that are placed in strategic locations. This inventory can be for multiple regions, and is kept in a database.
  • Creates a queue for messages bound for Second Life. These messages are stored in a database, and delivered through scripted hub objects (co-located with the sensors). Essentially, the hubs poll the database for pending messages, which come down in a bundle, and are distributed to target nodes.
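The queueing behavior might be sketched like this in plain Ruby (names invented; the real thing keeps messages in a database and delivers them through the scripted hubs):

```ruby
# In-memory stand-in for the SlIcer message queue: external apps enqueue
# messages per region, and a polling hub drains them in a bundle.
class MessageQueue
  def initialize
    @pending = Hash.new { |h, k| h[k] = [] }   # region name => messages
  end

  def enqueue(region, target, body)
    @pending[region] << { :target => target, :body => body }
  end

  # Called by the in-world hub on its polling cycle; returns the whole
  # bundle and clears it, so each message is delivered once.
  def poll(region)
    bundle = @pending[region]
    @pending[region] = []
    bundle
  end
end

queue = MessageQueue.new
queue.enqueue("sandbox", "rover_counter", "move 12,34,0")
queue.poll("sandbox").length   # the bundle holds the one pending message
queue.poll("sandbox").length   # empty now; it was delivered
```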
The test case is a mapping room, with a map on the floor, and 3D symbols that reflect state and position. Messages can come in from external applications, and the objects on the map change position, reconfigure to reflect state changes, and can also display floating text for other messages.

That's a picture of the map floor. The round object floating in air is the sensor/hub. Against the wall is our people sensor that looks for individuals, sort of a virtual RFID. The SlIcer web app, which is still very much a work in progress, can show an inventory of everything discovered by our in-world sensors.

That's a screen shot that shows, for example, a rover_counter on the map. The database contains info like the last sense time, the x,y,z coordinates, etc. The cool thing is, there are simple URL's that an external app can call, targeting a region and object by their known name. This obviates the need to keep up with SL UUID's.
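The URL shape below is only my guess at the idea, not SlIcer's actual routes; the point is that the caller addresses the region and object by their known names rather than by UUID:

```ruby
require 'uri'

# Hypothetical call: /slicer/update/<region>/<object>?x=..&y=..
url = URI.parse("http://example.com/slicer/update/sandbox/rover_counter?x=12&y=34")

_, app, action, region, object = url.path.split("/")
params = Hash[url.query.split("&").map { |pair| pair.split("=") }]

region   # "sandbox"
object   # "rover_counter"
params   # {"x" => "12", "y" => "34"}
```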

I'm an awful object builder, but this is my pitiful truck object in a 'stowed' state. An external source (such as a mobile GPS unit) could send telemetry by calling a URL; this enqueues a message for delivery to the sim...

And bammo...state/position change...

What's next:

  • I've already got a database of objects, and it will be easy to add a table of arbitrary name/value properties per object. This gives a Silo-like capability to maintain object state outside of the sim. Objects could update their own state, or pick up state changes pushed in from the web. What would be cool is that that state can survive object name changes, and also re-rezzing. The drawback is that objects have to have a unique given name; I don't do duplicates.
  • Thinking about a pub/sub system for events. For example, do something when an object is rezzed, when an object moves, when a certain person walks into a room, etc. I thought about putting this up in an additional sim, and doing some stupid pet tricks where moving an object in one sim causes a change in an object in another.
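The pub/sub piece might boil down to something like this plain-Ruby sketch (event names invented for illustration):

```ruby
# Subscribers register a handler per event type; publishing an event
# fans the payload out to every handler registered for it.
class EventBus
  def initialize
    @subscribers = Hash.new { |h, k| h[k] = [] }
  end

  def subscribe(event, &handler)
    @subscribers[event] << handler
  end

  def publish(event, payload)
    @subscribers[event].each { |handler| handler.call(payload) }
  end
end

bus = EventBus.new
moved = []
bus.subscribe(:object_moved) { |name| moved << name }
bus.publish(:object_moved, "truck")
moved   # ["truck"]: the cross-sim stupid pet trick would hang off this
```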
From there, I'm not quite sure, but it seems to open a lot of possibilities up. I have some cruft in the database for doing some reliable delivery stuff, but that's not a burning issue right now. The whole thing is done using Ruby on Rails, which I am really keen on these days. This has not taken a huge effort; development can go very quickly once you make the mental jump!

Rails, SL, mash-ups, all in cool is that...

Thursday, June 7, 2007

Cool visualizations of the net

Nice web interface from Akamai showing visualizations of real-time web traffic info...check it out here.

Wednesday, June 6, 2007

Progress on acts_as_authenticated and authorization in Rails

I'm happy to say I fairly quickly was able to implement authentication using 'acts_as_authenticated', a Rails plug-in. Props to the helpful Rubyists of Second Life for turning me on to that. I prefer the plug-in model to the engine model, much easier for me to grok.

So the steps were fairly simple. First, I went and grabbed acts_as_authenticated, per the helpful instruction page. If you have not tried a plug-in, it's worth it to do a bit of background to understand what is happening, I'd suggest the link-fest on the Rails Wiki as a primer. This gives you a basic database user repository. Then you can, in your controller, say things like:

class RolesController < ApplicationController

before_filter :login_required

This filter will divert to a login page, along with signup, logout, password hashing, and other basic facilities. Badda-bing, badda-boom. Note that you can exclude various controller actions from the login requirement, so you can have guest pages, and other non-critical data in plain view.

Important to note, acts_as_authenticated only does the authentication part, so you need to go the extra mile to add authorization. There are a couple advertised plug-ins that sit on top of acts_as_authenticated, and I took a stab at the acl_system (actually, I grabbed acl_system2 out of SVN). The files and directories from acl_system2 go in your vendor/plugins directory in your Rails application. There are also a few pre-reqs to using the acl_system, as explained in the instructions:

You will need to have a current_user method that returns the currently logged in user. And you will need to make your User or Account model (or whatever you named it) have a has_and_belongs_to_many :roles. So you need a model called Role that has a title attribute. Once these two things are satisfied you can use this plugin.

So I created a role table in MySQL, with id, title, and the usual created and updated dates. I added the following to my User model:

class User < ActiveRecord::Base
  has_and_belongs_to_many :roles
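For reference, the role table described above can be built with a migration along these lines — a sketch in Rails 1.x migration syntax, with the join table named per the habtm convention (alphabetical order, no primary key):

```ruby
class CreateRolesAndJoin < ActiveRecord::Migration
  def self.up
    # the role table: id, title, and the usual timestamps
    create_table :roles do |t|
      t.column :title,      :string
      t.column :created_at, :datetime
      t.column :updated_at, :datetime
    end
    # habtm join table linking users and roles
    create_table :roles_users, :id => false do |t|
      t.column :role_id, :integer
      t.column :user_id, :integer
    end
  end

  def self.down
    drop_table :roles_users
    drop_table :roles
  end
end
```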

Along with this, I have a join table to link users and roles. Once this is configured, you can add additional filtering to the above authentication filter, as in this simple example:

class RolesController < ApplicationController

  before_filter :login_required
  access_control [:list, :show, :new, :create, :update, :edit, :destroy, :index] => '(administrator)'

Note that this is a simple example, and the specification of complex action/role mappings looks fairly flexible. At any rate, it works in initial testing. Lots more to go, but this took much less time than grokking and implementing the user engine. YMMV, of course!
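Per the acl_system docs, the role string is not limited to a single role; boolean expressions are supported. A hedged sketch extending the example above — the moderator role and the exact expression syntax are my assumptions, so check the plugin's README before copying:

```ruby
class RolesController < ApplicationController
  before_filter :login_required
  # admins can do anything; moderators may also edit and update
  access_control :DEFAULT => '(administrator)',
                 [:edit, :update] => '(administrator | moderator)'
end
```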

Nice Comparison of Ruby/Rails IDEs

This morning, my RadRails seems to have forgotten about my project's Rake tasks. I've seen that before; somewhat frustrating. Made me look at the grass across the fence again.

Here's a nice comparison of Rails IDEs, as part of my short detour into alternatives. It looks like grabbing the latest NetBeans 6.0 milestone gives you the Ruby support.

Monday, June 4, 2007

Rails Authentication

So as I'm working on a couple of Rails apps, I'm worried about the best way to authenticate. I had originally done some things with Rails authentication and authorization using a Rails engine. I pretty much got it to work, but it seemed a bit kludgey. Part of this, I'm sure, is not quite grokking how the engine was wired into my app. Chalk it up to a state of perpetual newbie-ness.

So perhaps my feeling about engines is not totally unfounded. I'm not jumping into the whole debate, but there seems to be a split in the Rails community about engines, enough to look for alternatives. Anyhow, I asked a few more experienced Rails programmers, and like a chorus they all told me to forget about engines and go with Acts as Authenticated. That's on my plate; I'm going to try this plug-in as part of the SlIcer mash-up.

Additionally, I came across this nice review of Rails authorization tools...a good read.

Sunday, June 3, 2007

More along the lines of the MS Surface Computer

I'm linking to a video at Perceptive Pixel, which I ran across via Joho the Blog. It's worth a quick look, and it expands on themes that surfaced last week with the little wave in the blogosphere around the Microsoft Surface Computer.

In Joho, David Weinberger makes the point that cool UIs are not necessarily usable UIs, which kind of bummed me out; I can't get past how cool it looks.

I know this isn't the point of Joho, but how would access to this thing make using computers a different experience? That is the question, versus how you would make the UI do what your keyboard and mouse do today. Also, break out of the 2D browser model: what about navigating tag clouds or graphs of links as part of searching, or tossing something into an RSS feed out to collaborators?

Anyhow, take a look at the video...

Friday, June 1, 2007

3D RL/SL location aware mash-up working...

Well, mostly...we have some tweaks to line up the RL coordinates and the offset on the map. We might need to re-do the map we use to texture our sandbox.

Anyhow, this image is my avatar on our 3D interactive mapping floor. The green prim represents Willi (approx31) walking around on the UNC campus with a mobile phone/GPS. Check Willi's blog for particulars, but we take the live GPS signal, and send the lat/long to a PHP script. That script converts to x and y offset from a known origin on our mapping table, which is then scaled to the map on the SL object. The upshot is that the map symbol moves in real-time based on Willi's GPS report. After we tighten down the positioning, we'll be looking for more things to instrument. A cool thing would be to outfit various vehicles with GPS transponders, and other status telemetry, as well as various individuals. Then, at any time, you could see the corresponding 3D symbols moving about and changing state. This is really cool to watch.
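Our actual conversion lives in a PHP script, but the arithmetic is simple enough to sketch in Ruby. The origin and scale constants below are placeholder assumptions, not the real values from our mapping table, and the longitude spacing uses a rough equirectangular approximation:

```ruby
# Sketch of the GPS-to-map-offset conversion described above.
# ORIGIN_* and SCALE are made-up placeholders, not our real table values.
ORIGIN_LAT = 35.9100               # known origin (south-west corner), assumed
ORIGIN_LON = -79.0550
METERS_PER_DEG_LAT = 111_320.0     # rough meters per degree of latitude
SCALE = 0.1                        # map-table units per real-world meter, assumed

def gps_to_map(lat, lon)
  # degrees of longitude shrink with latitude (equirectangular approximation)
  meters_per_deg_lon = METERS_PER_DEG_LAT * Math.cos(ORIGIN_LAT * Math::PI / 180)
  # meters east (dx) and north (dy) of the origin
  dx = (lon - ORIGIN_LON) * meters_per_deg_lon
  dy = (lat - ORIGIN_LAT) * METERS_PER_DEG_LAT
  # scale real-world offsets down to map-table units
  [dx * SCALE, dy * SCALE]
end
```

With a real origin and scale plugged in, the resulting x/y pair is what gets scaled onto the map texture on the SL object.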

By the way, this is using the SlIcer framework, which I had originally proposed a few posts back, and which is up and running in its first pass. I'm busy doing a second pass, with lots of optimizations and new features. Really, that next version will be the first 'usable' one, and maybe it can be used in other places, such as the UNC Island. One thing that's cool in the coming version is the ability to inventory and message across multiple regions, so you could move a prim on one sim, and have something happen on another...

Thursday, May 31, 2007

Google Gears

This was touted as Google going straight at Microsoft. A framework for going off-line with on-line apps, called Google Gears. It appears that Google Reader and other apps are going to be outfitted with this capability, which is very cool.

One of the most frequently requested features for Google's web applications is the ability to use them offline. Unfortunately, today's web browsers lack some fundamental building blocks necessary to make offline web applications a reality. In other words, we found we needed to add a few new gears to the web machinery before we could get our apps to run offline. Gears is a browser extension that we hope -- with time and plenty of input and collaboration from outside of Google -- can make not just our applications but everyone's applications work offline.

Too many irons in the fire to play with this now, but I'll file it away, sort of today's mini-buzz after yesterday's MSoft Surface Computer wave. I'm also catching that people are complaining about the Google Street View, check out Boing Boing for the blowback on that!

update: Here's another article on Gears.

Wednesday, May 30, 2007

Microsoft Surface Computing - HCI and UbiComp

Sometimes, it's hard for me to pin down what my own blog is about. I tend to run many threads at once, and end up thrashing sometimes, as I suspect anyone working in technology does these days. The past few weeks, it's been about Second Life, and that continues, but I'm looking at other areas as well, such as plain old Web2.0, ubiquitous computing, agent computing, mobility, location aware services, SOA, and dynamic scripting languages (specifically Ruby and Rails these days). In the mix somewhere is my original interest in Java/J2EE, along with things like Spring.

Rather than a testament to a short attention span, I think this wide variation in themes is actually a sign of the times we live in. Developers no longer learn one language, and roll into every project with the same set of tools. The evolving web, the evolution of mobility, and the pervasive field of networked information and devices that surround us everywhere we go make for an interesting and challenging time. I'd like to suggest that the disparate topics covered in this blog are on a converging trajectory. Maybe that's what this blog can be about.

Case in point, check out this short video on Microsoft's Surface Computer. I think this is an exciting platform that brings together a bunch of ideas. Essentially, this is a big, touch-sensitive display that uses gestures to manipulate data. The cool thing is that it's multi-touch, so you can gesture with both hands, and multiple people can interact with the computer at the same time. In addition, the Surface Computer is sensitive to physical objects. It can sense these objects, and also interact with other computers placed on the surface.

  • The 'multi-touch' is collaborative. Technology is getting more and more social. This reality is core to Web2.0, as well as the evolving 3D web. We're not isolated from each other anymore, we Twitter and blog, we IM and message, now we can compute together.
  • The Surface Computer bridges the physical and the virtual. In the video, they demonstrate placing a device on the surface, having it dynamically connect, and using a gesture to shoot a photograph into the device. The natural action of placing a device of interest on the collaborative surface, and being able to manipulate it, is a step towards useful ubiquitous computing.
  • The Surface Computer could be an interesting new metaphor for web collaboration in the way that avatar representation in Second Life creates a sense of immersion. I think it won't be long until you could assemble remotely around a common 3D web surface, with remote participants as avatars.
The combination of natural interface, immersion, and the ability to easily incorporate data from the web, or from other devices, in collaborative ways seems like a natural progression.

Tuesday, May 29, 2007

Google Street View

Just amazing. Alas, Franklin Street is not available...

Second Life Best Practices in Education - Link Dump

Here's a nice link dump to catch up on the SLBP confo last week...

WUNC today, the rise and fall of Friendster

On 'the Story' today..Jonathan Abrams from Friendster.


Dick talks with Jonathan about what he learned from the success and later failure of Friendster, and how he plans to compete with a new social networking project in what has become a very crowded field.

Friday, May 25, 2007

More of SL Best Practices in Education confo

A few keynotes this afternoon worth catching...catching the end of the IBM keynote. Place is SRO.

Search IBM and Second Life on YouTube (do that later) to see some examples of their use of virtual worlds. Question on when virtual worlds truly become mainstream? Mainstream is difficult to argue.. what is mainstream? Talk in terms of Internet 1 (democratization of access): getting lots of people connected took about 10 years. Web2.0 (democratization of participation) took about half the time. The 3D internet is about people coming back. People are going to be involved in every aspect of the environment, and it will happen fast.

Pirate Shipman, adjunct faculty at Pepperdine, gives the next keynote. Right-brain attitudes are important in today's world:

  • Design
  • Story
  • Symphony
  • Empathy
  • Play
  • Meaning
Basic premise is that we need to focus on these areas, versus left-brain logic. Pirate did a class project to develop virtual content reflecting these six aptitudes. This comes from the book "A Whole New Mind". Students were given a small area and a 150-prim limit for their projects. We watch slides of the building progress on the class island. Students experimenting... Conclusion was that it was a powerful learning experience for all involved.

That's the keynote in progress... do we teach in a virtual world? We need to discuss strengths and weaknesses.


  • SL is a spatial experience. Virtual world has physicality. Shapes, sizes, movement, spatial relationships take on deep meaning.
  • SL is an immersive experience. We can respond as if we are really there... being in a virtual situation affects emotion and mood.
  • SL is a social platform. We can craft and present an identity to others. "This makes the us that engages with others easier to become". (Interesting).
  • SL has tools to connect to and communicate with others. LSL allows us to develop socially aware objects.
  • SL democratizes the ability to create content and learning artifacts, it's a participatory medium. (This is what I think is the key point, which SL captures well).
  • SL enables collaborative development of objects. (I think building things with others, and having the tools in-world, is the key... this is why SL works, and why importing from Blender, etc. is not important, and rather not the point, but that's just me!).
Some talk about the playful spirit that is the game part of the environment... the keynote speaker is wearing a pirate eyepatch.


  • Effective communication of large amounts of data is difficult.
  • Technological overhead high.
  • Combo of 2D with SL lacks synergy most of the time. Showing 'flat' images, for example, is still easier in a browser interface..
  • Activities outside of the scope of what second life does can usually be done better outside of SL. "Sometimes, though, the novelty may be enough".

OK...gonna hit some posters, then I gotta go....

Here's a parting shot of one of the posters, this one for the SL Genetics Center. All in all a remarkable day, and an effective use of SL. I've even got an inventory full of junk now that I have to sift through!