Wednesday, June 20, 2007

3D web as disruptive technology - Mitch Kapor

I've had the videos of the IBM & MIT Media Labs conference on virtual worlds running in the corner of my monitor all morning, and I was highly impressed with Mitchell Kapor, Linden Lab Chair, and his view of virtual worlds as disruptive technology. He brings up the term macromyopia, which is a nice word that captures the idea that we overestimate change in the short term and underestimate it in the long term.

Anyhow, his talk is entertaining and thought-provoking, and worth the investment of about 45 minutes of your time.

Food for thought

Check out the prologue to "Everything is Miscellaneous", by David Weinberger. He's an interesting and engaging speaker, and writes in the same style.

Anyhow, I loved this quote, and I think about it in terms of what's happening with the 3D internet...

Those differences are significant. But they’re just the starting point. For something much larger is at stake than how we lay out our stores. The physical limitations that silently guide the organization of an office supply store also guide how we organize our businesses, our government, our schools. They have guided—and limited—how we organize knowledge itself. From management structures to encyclopedias, to the courses of study we put our children through, to the way we decide what’s worth believing, we have organized our ideas with principles designed for use in a world limited by the laws of physics.

Suppose that now, for the first time in history, we are able to arrange our concepts without the silent limitations of the physical. How might our ideas, organizations, and knowledge itself change?

For me this neatly captures a central idea of the 3D web. In a world without physics or other limitations, how can we use the tools in a way that feels real, but that doesn't import the limits of the physical world, or a static organization of information, into the virtual? This quote highlights both a mistake to avoid and new ways to think.

It also struck me last night that there is a common thread between the (admittedly modest) things I'm doing within Second Life, and past work I did on smart spaces and context aware computing. In some ways, the tools you wish were there in the physical world can be modeled in the virtual, sometimes with the same ends. In each case, physical and virtual, the goal is to respond to each individual, and provide a mesh of services around that person as they navigate the environment. I'm intrigued by the idea that some of these context-aware computing concepts could be applied within the metaverse toward the aims that David Weinberger describes. By the same token, I am interested in how the metaverse could be a testbed for context-aware applications. The whole environment is scripted, you can build sensors and actuators, have location, manipulate the environment, add social elements, etc. Model a smart home, classroom, or office in Second Life...It's certainly faster and cheaper than trying to build a testbed or living lab!


As a note, I happened upon this tidbit from Bob Sutor's blog: links to video are now available from the MIT & IBM conference, Virtual Worlds: Where Business, Society, Technology & Policy Converge, which took place on Friday at MIT Media Labs.

Monday, June 18, 2007

Pics from the SL iCommons Summit

I popped in, and blogged a bit about this event here, and here's a link to some interesting pics from the occasion.

Second Earth in MIT Technology Review

From MIT Technology Review...

The World Wide Web will soon be absorbed into the World Wide Sim: an immersive, 3-D visual environment that combines elements of social virtual worlds such as Second Life and mapping applications such as Google Earth. What happens when the virtual and real worlds collide?

This is worthy of a read. The basic premise is that 3D worlds as part of a mash-up with real life locations and data will transform the way we view the 'web'. I'm down with that...

Friday, June 15, 2007

IBM Conference on Virtual Worlds

Coverage...

"We are now at the threshold of newly emerging (Web) platforms focused on participation and collaboration," he said. "The power of collaboration and community are one of the major drivers of innovation as companies figure out the capabilities to accelerate collaborative innovation."

Parris described some of IBM's initial uses of virtual worlds in a business context, including enhanced training, immersive social-shopping experiences, simulations for learning and rehearsing business processes, and event hosting.

Waiting for Lessig at 11:00


Popped into the iCommons summit in Second Life, waiting for 11:00 SL time when Larry Lessig will be speaking. Here's where I'm at..

Some audio probs right now they are working out.

Cool..things are worked out, watching a short film about the remix culture...

Oh well, looking at a grey screen, having media probs here..



OK, switched computers and I'm able to see; right now Jonathan Zittrain is speaking.





Larry talking about his debate with Brett Cottle re copyright. How do people in the creative commons movement get respect? We get that respect by demanding it loudly, like they (the copyright people) do. Who are 'we'...in other words, Creative Commons?

CC is a movement of open source for culture. Copyright's power comes from its complexity...it's the command-line interface that gets to the core of the machine: great for geeks, not good for most people. For most people, layers are put on top. Think about CC as a GUI overlay for the copyright system's power. Another function is as a signal: the people displaying the CC sign send the message that they are part of the sharing economy. Money is not part of the terms of exchange...instead it's poison. This economy is an important provider of value (Wikipedia, Flickr). Money is not why people participate.

CC has a role in protecting the sharing economy. CC protects participation in the sharing economy.

"You are helping artists to starve!" as a criticism. Responds that CC can help artists cross over from a sharing economy to a commercial economy when they want to, and when appropriate. New component, beatnick, from creative commons, that allows commercial licensing of creative commons content. Enables bottom-up creativity. You share, and choose when to allow work to be commercially exploited.

We have allowed the other side to frame this as a debate about piracy, as if we are fighting for the right to steal...e.g. the defense of p2p, as if CC is fighting for the 'right to steal'.

How to respond? This is not a movement about the right to take, it's about the right to create, the right to share in the sense that the artists, creators can be free to choose without the government speaking for them.

General problem with people controlling government: they only listen to money. (Global warming, healthcare as examples.)

CC people need to stand for the movement and make it grow. Standing O.

Wednesday, June 13, 2007

iCommons Summit in Second Life

This looks interesting, another SL overlay of a RL conference...

The USC Center on Public Diplomacy, Linden Lab and iCommons are delighted to announce that the iCommons 2007 Summit in Dubrovnik, Croatia, will be run in parallel in Second Life!

The aim of running the iSummit 2007 in Second Life is to mix the real and virtual world for both attendees of the Summit, and for those who are unable to make it to Dubrovnik, thus expanding the community who will be able to learn, collaborate and share their knowledge and experiences of the Summit. The parallel summit will also help to introduce new users to Second Life and to build the global diversity of participants who are collaborating in-world.

The schedule is posted here; looks like some interesting people will be there, e.g. Larry Lessig and Jimmy Wales.

Making Connections, virtual reality, agent computing, robots, and even real human beings

So I spent a few minutes digging around after reading the Slashdot article about using AI, agents, and 3D visualization to train firefighters. The original article, by Roland Piquepaille, is over on ZDNet.

ZDNet describes the system this way:

The system is currently used by the Los Angeles Fire Department. DEFACTO has committees of AI ‘agents’ which can create disaster scenarios with images and maps seen in 3-D by the trainees. The software agents also evaluate the trainees’ answers and help them to take better decisions.

This is interesting in several ways.

Virtual simulation and training

One of the great potential uses of virtual worlds is the creation of immersive training and simulation environments. I'd anecdotally observe that interacting in a 3D environment with an avatar provides a pretty effective experience. Situations like a fire or a disaster are prime candidates for such an application. Other uses might include immersive language learning, law enforcement, or hospital/medical situations.

Collaborative visualization, ambient data, situational awareness

Collaborative is the key word here, because there are better, higher resolution methods for exploring data through visualization. A simple equation may be to combine your avatar, the avatars of collaborators, and the visualization, so that remotely distributed teams can fly around, point, manipulate, and refer to parts of a visualization as a group. This is somewhat linked to the themes illustrated by multi-touch displays, such as the Microsoft Surface Computer that I mentioned a few posts back.

I'm mostly looking at Second Life, for many reasons. It's safe to say that SL is not a platform for visualizations, but I have tried several small prototypes with the premise that the collaborative nature of these environments yields qualitatively different experiences. Another way of saying this is that it might be useful to look at ways of creating 3D visualizations within virtual environments, not necessarily as the best visualization tool, but as points of reference in virtual collaboration.

Take a look at this image from the DEFACTO page, and imagine how that application, combined with a collaborative, avatar-based environment, could have interesting possibilities, even as far as visualizing and managing an actual event, versus a simulation.

Agents again!

I had a brief run on some earlier projects where I looked at agent technology. At the time, we were looking at the state of context-aware computing, especially as it applied to the development of smarter mobile applications (location awareness, etc.). This was mostly using the JADE agent framework, and was based on a research framework called CoBrA. Honestly, I have not been thinking about agents for a while, but this article made me think about agent technology again. Agents are a great model when you have heterogeneous, autonomous entities that need to cooperate. Especially important is the ability to dynamically form associations and negotiate to solve a shared task. Web2.0 talks about small things, loosely joined, and agents share that same philosophy in their own arena.

Agents have always struck me as not getting enough play in the whole 'next generation web' yap-space, especially considering the merging of the virtual (web) and physical world through the explosion of sensors and actuators that are starting to talk on the web. Both agent technology and the physical/virtual merging still seem like blind spots, when both may play an important part in the post-web2.0 world.

In this case, agents are seen as proxies for what Machinetta calls RAPs. Machinetta is one of the underpinnings of the DEFACTO system, and it is essentially an agent framework that supports negotiation, assignment of roles, and other aspects of teamwork. RAP is the Machinetta term for "Robot, Agent and/or Person". Cool...we got robots too!

Virtual/Physical merging

So this was just mentioned, and bears repeating. The web is not only the information and people, but also the parts of the physical world that are being hooked in. This has gone on for a while, but what is interesting is to see that merging playing out on something suggestive of a virtual environment as well. This is actually something I've been messing with in Second Life, though at a much less sophisticated level. The DEFACTO application seems to suggest some of the same notions, in any case.

Virtual ambient information

The last point I'd make is that this application shares some common characteristics of many of the location-aware mash-ups that are everywhere, especially using tools like Google Maps, Google Earth, and now Google Mapplets. This gets back to the original point about interacting with visualizations in an immersive environment. In a virtual, 3D space, it seems like the potential is there for mash-ups on steroids. Here's a shot from an earlier post of a modest example using 3D symbols on a map...





It might be hard to get the gist of this, but, just like in DEFACTO, virtual worlds can represent ambient information about state and situation through the appearance and behavior of the objects. There is no reason these objects could not link to DEFACTO RAPs, for example, and provide handles to communicate with or interrogate the state of the various agents.

Lots of possibilities!

Monday, June 11, 2007

Rails Active Scaffold - from a DHSB

Saw this come across from my del.icio.us network, an article from IBM about the Rails ActiveScaffold plug-in...


This is a plug-in that nicely handles all the CRUD that still requires lots of coding in the vanilla Rails framework. The promised benefits include (quoting from the ActiveScaffold page):

  • An AJAXified table interface for creating, updating, and deleting objects
  • Automatic handling of ActiveRecord associations
  • Sorting, Search and Pagination
  • Graceful JavaScript degradation
  • RESTful API support (XML/YAML/JSON) baked in
  • Sexy CSS styling and theming support
  • More extension points than you can shake a stick at
  • Guaranteed to work on Firefox 1+, IE 6+ and Safari 2+
  • Released under the MIT License, the same one as Rails itself, so you can use it freely in your commercial applications.
Worth a try! Of the plug-ins and Rails extensions I've seen lately, this one looks promising.

This plug-in is good for me, as I found I'm a DHSB, from this programmer's test...what are you?

Your programmer personality type is:

DHSB

You're a Doer.
You are very quick at getting tasks done. You believe the outcome is the most important part of a task and the faster you can reach that outcome the better. After all, time is money.


You like coding at a High level.
The world is made up of objects and components, you should create your programs in the same way.


You work best in a Solo situation.
The best way to program is by yourself. There's no communication problems, you know every part of the code allowing you to write the best programs possible.


You are a liBeral programmer.
Programming is a complex task and you should use white space and comments as freely as possible to help simplify the task. We're not writing on paper anymore so we can take up as much room as we need.

Google Developer Days video streams

YouTube has a wealth of info from the recent Google developer days (gears, mash-up, etc). Worth a look on a slow day.

Sunday, June 10, 2007

Bit of vid showing RL/SL mashup

Willi had captured a bit of video from Second Life showing a campus walkabout with his mobile -> second life reporter. What can I say...it was a nice day on campus!

This afternoon, the band I'm in starts recording a new CD. As we're complete unknowns (and probably deservedly so), we're all DIY. The first time we did this, about eight years ago, it was to a Tascam 80-8 with one bad channel, using a DBX unit that had 4 good channels. We're starting this one out with capability for about 80 tracks, and all the EQs, compressors, and misc. rack gear that we'd rationally want to use. In addition, we can carry the project between home studios and our tracking 'shed', and do independent overdubs. The whole thing is probably going direct to the web under Creative Commons...the march of technology is changing the lives of anyone with any creative impulse, and the web will allow us to reach the dozen people who would want to listen to us prattle on. Long tail indeed!

Friday, June 8, 2007

Next version of SlIcer deployed

I've been up late a few nights on this, so allow me to go on a bit...This is the next version of SlIcer, which is essentially a utility for hooking up things in Second Life to things in real life. I've seen things like ObjectOverlord that work on the client code, but I wanted to do things that would work within the vanilla client. Good idea? Not sure, but at least it's workable.

What it does now:

  • Inventories people and objects within a sim using scripted sensor objects that are placed in strategic locations. This inventory can be for multiple regions, and is kept in a database.
  • Creates a queue for messages bound for Second Life. These messages are stored in a database, and delivered through scripted hub objects (co-located with the sensors). Essentially, the hubs poll the database for pending messages, which come down in a bundle, and are distributed to target nodes.
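To make that polling flow concrete, here is a minimal sketch of the queue logic. All of the names here are made up for illustration, and an in-memory array stands in for the database table of pending messages; the real SlIcer presumably does this through ActiveRecord.

```ruby
# Sketch of the SlIcer message queue logic (hypothetical names).
# An in-memory array stands in for the database table of pending messages.
class MessageQueue
  Message = Struct.new(:region, :target, :payload)

  def initialize
    @pending = []
  end

  # External apps enqueue messages bound for Second Life.
  def enqueue(region, target, payload)
    @pending << Message.new(region, target, payload)
  end

  # A scripted hub polls for its region and takes the whole bundle at once;
  # the hub then distributes each message to its target node in-world.
  def bundle_for(region)
    bundle, @pending = @pending.partition { |m| m.region == region }
    bundle
  end
end
```

The point of the bundle is to keep the polling cheap: one HTTP round trip per hub per poll, no matter how many messages are waiting.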
The test case is a mapping room, with a map on the floor, and 3D symbols that reflect state and position. Messages can come in from external applications, and the objects on the map change position, reconfigure to reflect state changes, and can also display floating text for other messages.



That's a picture of the map floor. The round object floating in air is the sensor/hub. Against the wall is our people sensor that looks for individuals, sort of a virtual RFID. The SlIcer web app, which is still very much a work in progress, can show an inventory of everything discovered by our in-world sensors.


That's a screen shot that shows, for example, a rover_counter on the map. The database contains info like the last sense time, the x,y,z coordinates, etc. The cool thing is, there are simple URLs that an external app can call, targeting a region and object by their known names. This obviates the need to keep up with SL UUIDs.
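As a rough illustration of what such a "simple URL" might look like, here's a sketch of a helper an external app could use. The path and parameter names are invented for this example; the original post doesn't specify the actual URL scheme.

```ruby
require 'uri'

# Hypothetical sketch: builds the URL an external app would call to
# update an object's state, addressing it by region and given name
# (no UUIDs needed). The /slicer/message path is made up.
def state_update_url(host, region, object, state)
  query = URI.encode_www_form(:region => region, :object => object, :state => state)
  "http://#{host}/slicer/message?#{query}"
end
```

An external app would simply GET or POST that URL, and the message lands in the queue for the next hub poll.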


I'm an awful object builder, but this is my pitiful truck object in a 'stowed' state...an external source (such as a mobile GPS unit) could send telemetry by calling a URL, which enqueues a message for delivery to the sim...




And bammo...state/position change...




What's next:

  • I've already got a database of objects, and it will be easy to add a table of arbitrary name/value properties per object. This gives a Silo-like capability to maintain object state outside of the sim. Objects could update their own state, or pick up state changes pushed in from the web. What would be cool is that the state could survive object name changes, and also re-rezzing. The drawback is that objects have to have a unique given name; I don't do duplicates.
  • Thinking about a pub/sub system for events. For example, do something when an object is rezzed, when an object moves, when a certain person walks into a room, etc. I thought about putting this up in an additional sim, and doing some stupid pet tricks where moving an object in one sim causes a change in an object in another.
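The name/value property table from the first item could be sketched as a simple Rails migration. All the table and column names here are made up for illustration; the key idea is that state is keyed on the object's unique given name rather than its UUID, which is why it survives re-rezzing.

```ruby
# Hypothetical sketch of the per-object name/value property table.
# Keyed on the object's unique given name, not the SL UUID, so state
# survives re-rezzing (which changes the UUID).
class CreateObjectProperties < ActiveRecord::Migration
  def self.up
    create_table :object_properties do |t|
      t.column :object_name, :string    # the object's unique given name
      t.column :name,        :string    # property key
      t.column :value,       :string    # property value
    end
    # One row per (object, property) pair; duplicates not allowed.
    add_index :object_properties, [:object_name, :name], :unique => true
  end

  def self.down
    drop_table :object_properties
  end
end
```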
From there, I'm not quite sure, but it seems to open up a lot of possibilities. I have some cruft in the database for doing some reliable delivery stuff, but that's not a burning issue right now. The whole thing is done using Ruby on Rails, which I am really keen on these days. This has not taken a huge effort; development can go very quickly once you make the mental jump!

Rails, SL, mash-ups, all in one project...how cool is that...

Thursday, June 7, 2007

Cool visualizations of the net

Nice web interface from Akamai showing visualizations of real-time web traffic info...check it out here.

Wednesday, June 6, 2007

Progress on acts_as_authenticated and authorization in Rails

I'm happy to say I fairly quickly was able to implement authentication using 'acts_as_authenticated', a Rails plug-in. Props to the helpful Rubyists of Second Life for turning me on to that. I prefer the plug-in model to the engine model, much easier for me to grok.

So the steps were fairly simple. First, I went and grabbed acts_as_authenticated, per the helpful instruction page. If you have not tried a plug-in, it's worth doing a bit of background reading to understand what is happening; I'd suggest the link-fest on the Rails Wiki as a primer. This gives you a basic database user repository. Then you can, in your controller, say things like:

class RolesController < ApplicationController

before_filter :login_required

This filter will divert to a login page, along with signup, logout, password hashing, and other basic facilities. Badda-bing, badda-boom. Note that you can exclude various controller actions from the login requirement, so you can have guest pages, and other non-critical data in plain view.
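For example, excluding guest-viewable actions uses the standard :except option on before_filter; the controller and action names below are made up for illustration.

```ruby
class PagesController < ApplicationController
  # Everything requires a login except the guest-viewable actions.
  # :except is standard before_filter syntax; the names are hypothetical.
  before_filter :login_required, :except => [:index, :show]
end
```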

Important to note, acts_as_authenticated only does the authentication part, so you need to go the extra mile to add authorization. There are a couple of advertised plug-ins that sit on top of acts_as_authenticated, and I took a stab at the acl_system (actually, I grabbed acl_system2 out of SVN). The files and directories from acl_system2 go in your vendor/plugins directory in your Rails application. There are also a few pre-reqs to using the acl_system, as explained in the instructions:

You will need to have a current_user method that returns the currently logged in user. And you will need to make your User or Account model (or whatever you named it) have a has_and_belongs_to_many :roles. So you need a model called Role that has a title attribute. Once these two things are satisfied you can use this plugin.

So I created a role table in MySQL, with id, title, and the usual created and updated dates. I added the following to my User model:

class User < ActiveRecord::Base
has_and_belongs_to_many :roles

Along with this, I have a join table to link users and roles. Once this is configured, you can add additional filtering to the above authentication filter, as in this simple example:

class RolesController < ApplicationController

before_filter :login_required
access_control [:list, :show, :new, :create, :update, :edit, :destroy, :index] => '(administrator)'

Note that this is simple, and the specification of complex action/role mappings looks fairly flexible. At any rate, it works in initial testing. Lots more to go, but this took much less time than grokking and implementing the user engine. YMMV, of course!
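For reference, the join table that has_and_belongs_to_many expects is named from the two tables in alphabetical order and has no primary key. Here's a migration sketch; the structure follows Rails conventions, though the original post doesn't show how the table was actually created.

```ruby
# Sketch of the users/roles join table for has_and_belongs_to_many.
# Rails derives the name (roles_users) alphabetically; no id column.
class CreateRolesUsers < ActiveRecord::Migration
  def self.up
    create_table :roles_users, :id => false do |t|
      t.column :user_id, :integer
      t.column :role_id, :integer
    end
  end

  def self.down
    drop_table :roles_users
  end
end
```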

Nice Comparison of Ruby/Rails IDE's

This morning, my RadRails seems to have forgotten about my project's Rake tasks. I've seen that before; somewhat frustrating. Made me look at the grass across the fence again.

Here's a nice comparison of Rails IDEs, as part of my short detour into alternatives. It looks like grabbing the latest NetBeans 6.0 milestone gives you the Ruby support.

Monday, June 4, 2007

Rails Authentication

So as I'm working on a couple of Rails apps, I'm worried about the best way to authenticate. I had originally done some things with Rails authentication and authorization using a Rails engine. I pretty much got it to work, but it seemed a bit kludgey. Part of this, I'm sure, is not quite grokking how the engine was wired into my app. Chalk it up to a state of perpetual newbie-ness.

So perhaps my feeling about engines is not totally unfounded. I'm not jumping into the whole debate, but there seems to be a split in the Rails community about engines, enough to look for alternatives. Anyhow, I asked a few more experienced Rails programmers, and like a chorus they all told me to forget about engines and go with Acts as Authenticated. That's on my plate; I'm going to try this plug-in as part of the SlIcer mash-up.

Additionally, I came across this nice review of Rails authorization tools...a good read.

Sunday, June 3, 2007

More along the lines of the MS Surface Computer

I'm linking to a video at Perceptive Pixel, which I ran across on Joho the Blog...It's worth a quick look, and expands on themes that surfaced last week with the little wave in the blogosphere around the Microsoft Surface Computer.

In Joho, David Weinberger makes the point that cool UIs are not necessarily usable UIs, which kind of bummed me out; I can't get past how cool it looked.

I know this isn't the point of Joho, but how would access to this thing make using computers a different thing? That is the question, versus how you would make the UI do what your keyboard and mouse do today. Also, break out of the 2D browser model. What about navigating tag clouds or graphs of links as part of searching...or tossing something into an RSS feed out to collaborators?

Anyhow, take a look at the video...

Friday, June 1, 2007

3D RL/SL location aware mash-up working...


Well, mostly...we have some tweaks to line up the RL coordinates and the offset on the map. We might need to re-do the map we use to texture our sandbox.

Anyhow, this image is my avatar on our 3D interactive mapping floor. The green prim represents Willi (approx31) walking around on the UNC campus with a mobile phone/GPS. Check Willi's blog for particulars, but we take the live GPS signal, and send the lat/long to a PHP script. That script converts to x and y offset from a known origin on our mapping table, which is then scaled to the map on the SL object. The upshot is that the map symbol moves in real-time based on Willi's GPS report. After we tighten down the positioning, we'll be looking for more things to instrument. A cool thing would be to outfit various vehicles with GPS transponders, and other status telemetry, as well as various individuals. Then, at any time, you could see the corresponding 3D symbols moving about and changing state. This is really cool to watch.
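The conversion step could look something like the sketch below, in Ruby rather than the PHP we actually used. All of the constants are made-up placeholders: the origin would be the real-world lat/long of the map's corner, and the scale factor maps real-world metres to map units.

```ruby
# Sketch of the lat/long -> map-offset conversion (all constants are
# made-up placeholders; the production script runs in PHP).
ORIGIN_LAT = 35.9050              # real-world latitude of the map origin
ORIGIN_LON = -79.0470             # real-world longitude of the map origin
METERS_PER_DEG_LAT = 111_320.0    # metres per degree of latitude
METERS_PER_DEG_LON = 90_122.0     # roughly cos(latitude) * 111,320
SCALE = 0.01                      # map units per real-world metre

# Converts a GPS fix to an (x, y) offset from the map's known origin,
# scaled to the map texture on the SL object.
def gps_to_map_offset(lat, lon)
  x = (lon - ORIGIN_LON) * METERS_PER_DEG_LON * SCALE
  y = (lat - ORIGIN_LAT) * METERS_PER_DEG_LAT * SCALE
  [x, y]
end
```

The resulting offset is what gets enqueued as a position message for the map symbol, so the prim moves as new GPS reports arrive.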

By the way, this is using the SlIcer framework, which I had originally proposed a few posts back, and which is up and running in its first pass. I'm busy doing a second pass, with lots of optimizations and new features. Really, that next version will be the first 'usable' one, and maybe can be used in other places, such as the UNC Island. One thing that's cool in the coming version is the ability to inventory and message across multiple regions, so you could move a prim on one sim, and have something happen on another...