Monday, November 17, 2008

Renci Multi-Touch blog

Here's the multi-touch blog from Renci. There are two form factors that Renci is working with: a large multi-touch wall at Duke and a horizontal touch-table at Europa.




My hope is that a touch-table will grace the Social Computing Room. A long-term vision would be to extend our Collage/InfoMesa ideas in the SCR, using the 360-degree display to provide visual real estate. Imagine a group working around a touch table, shooting images out to the 360-degree wall with gestures on the touch table.

Thursday, November 13, 2008

Putting Google Earth into a WPF window

This may end up being fruitless, but I was inspired by this multi-channel version of Microsoft Virtual Earth. It looks like they linked multiple copies of Virtual Earth with different camera settings, and I wanted to try something similar with Google Earth for our Global Immersion dome. That four-projector rig needs four viewports, each with its own camera view, to work right. Can I create a full-screen app that has those four viewports, with four synchronized instances of Google Earth running behind them? I don't know, but the first step was to see if I could create a WPF app with Google Earth embedded in the WPF window. You can at least do that, and that's interesting in itself, because I can add a Google Earth widget to my InfoMesa/Collage experiments described here...
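Just to pin down what "synchronized" would mean, here's a rough sketch of the camera linking I have in mind. I've kept it independent of the exact EARTHLib signatures; from my reading of the COM SDK, the real call per instance would be something like SetCameraParams (lat, lon, alt, range, tilt, azimuth, speed), but treat that, and the assumption that multiple instances can be driven at once, as unverified. The only real idea here is that each projector's view looks 90 degrees further around than the last:

    using System;

    // Sketch only: given the master view's heading, compute the heading each of the
    // four projector views should use and hand it to whatever call actually drives
    // that instance's camera (the delegate stands in for the EARTHLib call).
    static void SyncViews(double masterAzimuthDegrees, Action<int, double> setViewAzimuth)
    {
        for (int view = 0; view < 4; view++)
        {
            // each projector looks 90 degrees further around the dome than the last
            double azimuth = (masterAzimuthDegrees + 90.0 * view) % 360.0;
            setViewAzimuth(view, azimuth);
        }
    }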

Anyhow, here's the window in all its (yawn) glory:



It was a bit of a slog to get it right, so I'll share the code that worked. First, I had to get Google Earth, which gives you this COM SDK. I did this using C# in Visual Studio 2008. I added a project reference to the Google Earth COM library and created a class that extends HwndHost.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;
using System.Windows.Interop;
using System.Runtime.InteropServices;
using EARTHLib;

namespace GeTest
{
    class MyHwndHost : HwndHost
    {
        // Win32 interop declarations used to re-parent the Google Earth render window
        [DllImport("user32.dll")]
        static extern int SetParent(int hWndChild, int hWndParent);

        IApplicationGE iGeApp;

        [DllImport("user32.dll", EntryPoint = "GetDC")]
        public static extern IntPtr GetDC(IntPtr ptr);

        [DllImport("user32.dll", EntryPoint = "GetWindowDC")]
        public static extern IntPtr GetWindowDC(Int32 ptr);

        [DllImport("user32.dll", EntryPoint = "IsChild")]
        public static extern bool IsChild(int hWndParent, int hwnd);

        [DllImport("user32.dll", EntryPoint = "ReleaseDC")]
        public static extern IntPtr ReleaseDC(IntPtr hWnd, IntPtr hDc);

        [DllImport("user32.dll", CharSet = CharSet.Auto)]
        public extern static bool SetWindowPos(int hWnd, IntPtr hWndInsertAfter, int X, int Y, int cx, int cy, uint uFlags);

        [DllImport("user32.dll", CharSet = CharSet.Auto)]
        public static extern IntPtr PostMessage(int hWnd, int msg, int wParam, int lParam);

        // PInvoke declaration for creating the static window that will host Google Earth
        [DllImport("user32.dll", EntryPoint = "CreateWindowEx", CharSet = CharSet.Auto)]
        internal static extern IntPtr CreateWindowEx(int dwExStyle,
            string lpszClassName,
            string lpszWindowName,
            int style,
            int x, int y,
            int width, int height,
            IntPtr hwndParent,
            IntPtr hMenu,
            IntPtr hInst,
            [MarshalAs(UnmanagedType.AsAny)] object pvParam);

        readonly IntPtr HWND_BOTTOM = new IntPtr(1);
        readonly IntPtr HWND_NOTOPMOST = new IntPtr(-2);
        readonly IntPtr HWND_TOP = new IntPtr(0);
        readonly IntPtr HWND_TOPMOST = new IntPtr(-1);

        static readonly UInt32 SWP_NOSIZE = 1;
        static readonly UInt32 SWP_NOMOVE = 2;
        static readonly UInt32 SWP_NOZORDER = 4;
        static readonly UInt32 SWP_NOREDRAW = 8;
        static readonly UInt32 SWP_NOACTIVATE = 16;
        static readonly UInt32 SWP_FRAMECHANGED = 32;
        static readonly UInt32 SWP_SHOWWINDOW = 64;
        static readonly UInt32 SWP_HIDEWINDOW = 128;
        static readonly UInt32 SWP_NOCOPYBITS = 256;
        static readonly UInt32 SWP_NOOWNERZORDER = 512;
        static readonly UInt32 SWP_NOSENDCHANGING = 1024;

        static readonly Int32 WM_CLOSE = 0x0010;
        static readonly Int32 WM_QUIT = 0x0012;

        private IntPtr GEHrender = (IntPtr)0;
        private IntPtr GEParentHrender = (IntPtr)0;

        internal const int
            WS_CHILD = 0x40000000,
            WS_VISIBLE = 0x10000000,
            LBS_NOTIFY = 0x00000001,
            HOST_ID = 0x00000002,
            LISTBOX_ID = 0x00000001,
            WS_VSCROLL = 0x00200000,
            WS_BORDER = 0x00800000;

        public ApplicationGEClass googleEarth;

        protected override HandleRef BuildWindowCore(HandleRef hwndParent)
        {
            // start Google Earth and get the handle of its render window
            googleEarth = new ApplicationGEClass();
            int ge = googleEarth.GetRenderHwnd();

            IntPtr hwndControl = IntPtr.Zero;
            IntPtr hwndHost = IntPtr.Zero;
            int hostHeight = 200;
            int hostWidth = 300;

            // create a host window that is a child of this HwndHost. I'll plug this HwndHost class in
            // as a child of a Border element in my WPF app.
            hwndHost = CreateWindowEx(0, "static", "",
                WS_CHILD | WS_VISIBLE,
                0, 0,
                hostWidth, hostHeight,
                hwndParent.Handle,
                (IntPtr)HOST_ID,
                IntPtr.Zero,
                0);

            // set the parent of the Google Earth render window to be the host I created here
            int oldPar = SetParent(ge, (int)hwndHost);

            // check to see if I'm now a child, for my own amusement
            if (IsChild(hwndHost.ToInt32(), ge))
            {
                System.Console.WriteLine("now a child");
            }

            // return a ref to the hwndHost, which should now be the parent of the Google Earth window
            return new HandleRef(this, hwndHost);
        }

        protected override void DestroyWindowCore(HandleRef hwnd)
        {
            // cleanup isn't handled in this quick prototype
            throw new NotImplementedException();
        }
    }
}



In its constructor, the main window of my WPF app just plugs this HwndHost in as the child of a Border control:

public Window1()
{
    InitializeComponent();

    // host the Google Earth render window inside the Border defined in the window's XAML
    MyHwndHost hwndHost = new MyHwndHost();
    border1.Child = hwndHost;
}
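For completeness, the XAML side is trivial; the window just needs a Border named border1 for the host to land in. Something like this minimal sketch (the title and size are arbitrary, not copied from my actual project):

    <Window x:Class="GeTest.Window1"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="Google Earth in WPF" Height="400" Width="500">
        <Grid>
            <Border Name="border1" />
        </Grid>
    </Window>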


And you are off to the races! I found a lot of different approaches to this all over the web, but none of them seemed to work, as is often the case. Maybe this will work for you, or maybe it adds to the confusion.

UPDATE: can't run more than one copy of Google Earth, so the dome idea is a bust, but this still has a use in my InfoMesa/Collage project. I wonder about Virtual Earth?

Wednesday, November 12, 2008

Video for Ubisense/MIDI demo

Here's the video that was described in this post. The point of this experiment was to see if we could reasonably map carpet squares in the Social Computing Room to MIDI notes, and output those notes to the on-board MIDI implementation.




Oh well...off to a Games4Learning event...see ya there!

Monday, November 10, 2008

A bit about the Social Computing Room

In the last blog entry, I had left a placeholder to link to a description of the 'Social Computing Room', then realized that I didn't have one. So I wanted to fill in a few details and fix that link.

The Social Computing Room (or SCR for short) is a visualization space at the Renci Engagement Center at UNC Chapel Hill. We're over near the hospital in the ITS-Manning building in Chapel Hill. It's one of three spaces, the others being the Showcase Dome (a 5-meter tilted Global Immersion dome) and Teleimmersion, which is a 4K (4 x HD resolution) stereo environment. We're working on some virtual tours for a new web site, so there should be more info soon on those other spaces.

One of the primary features of the SCR is its 360-degree display. The room is essentially a 12,288x768 Windows desktop. (I've also tested a Mac in this environment, and it works as well). Here's a pic of the SCR...



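Since the room really is just one very wide desktop, targeting it from code is mundane. Here's a minimal sketch (mine, for illustration, not anything we actually run) of a borderless WPF window stretched across the whole 12,288x768 strip:

    using System;
    using System.Windows;

    // Sketch: one chrome-less WPF window spanning the SCR's 12,288x768 desktop,
    // so all four walls become a single drawing surface.
    public class ScrShell
    {
        [STAThread]
        public static void Main()
        {
            var window = new Window
            {
                WindowStyle = WindowStyle.None,   // no title bar or borders at the wall seams
                ResizeMode = ResizeMode.NoResize,
                Left = 0,
                Top = 0,
                Width = 12288,                    // full 360-degree strip, per the resolution above
                Height = 768
            };

            new Application().Run(window);
        }
    }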
The room has multiple cameras, wireless mics, multi-channel sound, 3D location tracking for people and objects, and is ultra-configurable (plenty of cat-6 for connecting things, Unistrut ceiling for adding new hardware). The room has so many possibilities that it gets difficult to keep up with all of the ideas. I think of it as a place where you can paint the walls with software, and make it into anything you want. There are currently a few emerging themes:

  • The SCR is a collaborative visualization space. The room seems especially suited for groups considering a lot of information, doing comparison, interpretation, and grouping. There is a lot of visual real estate, and the four-wall arrangement seems to lend itself to spatial organization of data. As groups use the space for this purpose, I'm trying to capture how they work and what they need. The goal is to create a seamless experience for collaboration. This is the reason I've been interested in WPF, and the InfoMesa technology demonstrator, as covered in this previous post.
  • The SCR is a new media space. It's been used for art installations, and it has interesting possibilities for all sorts of interactive experiences, as illustrated by this recent experiment.
  • The SCR is a place for interacting with the virtual world. We're working on a Second Life client that would have a 360-degree perspective, so that we can embed the SCR inside of a larger virtual environment, enabling all sorts of new possibilities.
These are just a few of the areas I'm interested in. Each of them can be enhanced by different types of sensors and robotics, and I've started with Wiimotes, SunSpots, and the Ubisense location tracking hardware.

That's a bit about the SCR. It's a really fascinating environment, and if you are on the UNC campus, give me a shout out and I'll show you around!


Friday, November 7, 2008

Music and Media in the SCR

UPDATE: delay on getting the video done, should be here by early this week...MC

Here's an interesting prototype that combines the Social Computing Room with Max/MSP/Jitter and UbiSense. Video will follow soon.

The Social Computing Room (described here) has many potential applications, and one intriguing use is as an 'interactive media space'. The idea is that 360-degree visuals, combined with various types of sensors, software, and robotics, can create new kinds of experiences. Two examples I can point to are the 'Spectacular Justice' exhibit from last winter and student work with Max/MSP/Jitter.

In this case, a prototype was written that uses UbiSense, which provides location tracking in 3D through an active tag. A Max object was written in Java to take the UbiSense location data off of a multicast stream, and push it out into Max-land. A second Max object was created to take the x,y,z data from UbiSense, in meters, and convert it into numbers that match up to the carpet squares on the floor of the Social Computing Room. Given those two new objects, the 'pad' number of the carpet square can be mapped to a MIDI note, and sent out through the Max noteout object.
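To make that mapping concrete, here's roughly what the second object does, expressed in C# rather than as the Max external written in Java. The carpet-square size, grid width, and base note below are illustrative assumptions, not the SCR's actual measurements:

    using System;

    // Sketch of the coordinate-to-pad-to-note mapping described above.
    public static class PadMapper
    {
        const double SquareSizeMeters = 0.5;   // assumed carpet-square pitch
        const int SquaresPerRow = 12;          // assumed number of squares along one wall

        // UbiSense reports x,y in meters; quantize to a carpet-square "pad" number.
        public static int ToPad(double xMeters, double yMeters)
        {
            int col = (int)(xMeters / SquareSizeMeters);
            int row = (int)(yMeters / SquareSizeMeters);
            return row * SquaresPerRow + col;
        }

        // Map a pad number onto a playable MIDI note (here C2 = 36 and up), the kind of
        // number the patch hands to Max's noteout object.
        public static int ToMidiNote(int pad)
        {
            return 36 + (pad % 48);
        }
    }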

Here's a picture of a simple patch:



What I'm really trying to show are techniques for interacting with music and video. I could see using objects in the room that can be arranged to create musical patterns with an arpeggiator or a loop player, combined with video on all four walls. MIDI or other methods could simultaneously control lights and other electronics. You could create a human theremin by having two people move around the room in relation to each other.
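The human theremin is the same trick with two tags instead of one: pitch follows the distance between two tracked people. A hypothetical sketch (the room span and note range are invented for illustration):

    using System;

    // Sketch: map the distance between two UbiSense tags to a MIDI note.
    public static class HumanTheremin
    {
        public static int DistanceToNote(double x1, double y1, double x2, double y2)
        {
            double d = Math.Sqrt((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1));

            // clamp to a rough room span (assumed ~8 m) and spread over two octaves above middle C
            double t = Math.Min(d / 8.0, 1.0);
            return 60 + (int)(t * 24);
        }
    }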

It's also interesting to let several people move around the SCR holding multiple tags; you can create semi-musical patterns by working as a group. It's a fun thing, but it points to some interesting possibilities. I've also adapted a Wiimote in the same manner.