InfoMesa is a project to allow scientists to do more science and more discovery in a collaborative and data-rich environment. The metaphor that we have elected to use as the underlying fabric of the InfoMesa is a Whiteboard.
InfoMesa allows any kind of data or visualization to be added to the Whiteboard. Far from static, these tools are interactive, allowing data to be absorbed from data sources like Oracle, SQL Server, Excel spreadsheets, XML, or even cloud-based web services. InfoMesa, when complete, will support imagery, video, 2D connected models, 3D models (lit in a photorealistic manner), web searches, results from web service calls, Image Tile Maps, ScatterPlots, Sticky Notes, Ink Notes, Rich Annotations and Associations.
Check the blog link for screen shots; it's a really interesting application, as well as a nice example of the capabilities of WPF (Windows Presentation Foundation). As a dyed-in-the-wool Java and Ruby programmer, I don't necessarily fit the typical Microsoft bandwagon profile, but I am having quite good success leveraging the WPF framework for the challenging environment in our Social Computing Room, which is essentially a 12,288 x 768 desktop.
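For the curious, getting a WPF app to cover that desktop is mostly a matter of a borderless window positioned by hand over all four walls. A minimal sketch of the idea (the class and names here are mine, not InfoMesa's):

```csharp
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

public static class CollageHost
{
    [STAThread]
    public static void Main()
    {
        // A borderless window positioned manually so it spans the entire
        // 12,288 x 768 virtual desktop -- no chrome, no title bar.
        var wall = new Window
        {
            WindowStyle = WindowStyle.None,
            ResizeMode = ResizeMode.NoResize,
            Left = 0,
            Top = 0,
            Width = 12288,
            Height = 768,
            Background = Brushes.Black,
            Content = new Canvas() // the whiteboard surface the widgets live on
        };
        new Application().Run(wall); // shows the window and pumps messages
    }
}
```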
A bit of a rewind: the Social Computing Room is part of the Renci Engagement Center at UNC Chapel Hill. The SCR has a 360-degree visual display running on all four walls, with three projectors per wall (12 in all, each at 1024 x 768, which is where the 12,288 x 768 desktop comes from). The relevant model we were already working on was a supporting environment for researchers working in collaborative groups and considering LOTS of data at one time. The SCR is a great venue for approaching problems that fit the 'small multiples' mold.
One of my colleagues, Dave Borland, had created a prototype called 'Collage'. This prototype used OpenGL and C++ (including some nifty wxWidgets sleight of hand to let OpenGL render such a large visual application). Collage handled cool things like letting the mouse, and any images, wrap all the way around the room. Collage could also play videos, and we were looking at adding other capabilities. Another cool part of Collage was the ability to intelligently 'lay out' images; for example, a common activity was to size each image to exactly one 'projector', avoiding any stitch lines between displays. We were also working on displaying metadata about the images on command, sorting data in various ways, and generally assisting 'small multiples' visualization tasks.
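At heart, the wrap-around behavior is modular arithmetic on the x coordinate. Here's the idea translated into a C# sketch (my own rendering of it; Collage's actual C++ is more involved):

```csharp
public static class Wraparound
{
    // Four walls laid end to end: 12 projectors x 1024 pixels.
    public const double RoomWidth = 12288.0;

    // Map any x back into [0, RoomWidth), so the mouse or an image
    // dragged off the right edge of wall four reappears on wall one.
    public static double WrapX(double x)
    {
        double wrapped = x % RoomWidth;
        return wrapped < 0 ? wrapped + RoomWidth : wrapped;
    }
}
```

The fiddly part is an image straddling the seam: it has to be drawn a second time, offset by RoomWidth, so both halves show.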
The downside of Collage was that it was a bit hard to extend, requiring a lot of OpenGL and wxWidgets prestidigitation to add new features. There were further plans to add Wiimote integration for multi-user input, and the ability to assign functionality to each wall: e.g., a magnification wall, where thumbnails dragged onto it would automatically resize for side-by-side comparison.
After seeing InfoMesa in prototype form, I realized that many of the ideas we had in Collage mapped nicely onto the InfoMesa concept, and InfoMesa really moved the ball down the field. The first question I had was whether WPF would support, in a performant manner, a 12,288 x 768 desktop, and I was pleasantly surprised! The thing I've been working on for the past couple of weeks is taking the InfoMesa code and adapting it to cover the functionality we already had in the OpenGL Collage prototype. I've been concentrating on the visual interface so far, leaving persistence, annotation, and metadata for later work. Here's a short vid of the CollageWPF adaptation of InfoMesa:
I wanted to hit on a few of the 'features' we've added, some requested by researchers who are using the prototype:
- InfoMesa is turned 'inside out', maximizing real estate. Wrapping controls and toolbars around the whole display doesn't work on a big wall, so I went with a right-click context menu (it shows up in the 'Universe' sketch after this list). It would be cool for InfoMesa to be able to toggle between expanding to the full desktop and displaying the standard interface! Otherwise, it might be worth concentrating on partitioning the application so that the host 'window' can be easily customized for various display types.
- Ability to automatically lay out and size imagery, which I implemented by creating a SceneManager to describe the environment and a LayoutManager that can be subclassed for various layouts (see the sketch after this list). The first LayoutManager does a scale and position to get one image per projector. The idea is that SceneManagers could be created for other viz environments, such as a 3x3 viz wall or a 4K high-def display.
- I started thinking about a base 'widget' that can be subclassed to create other tools; here I've still got things to learn about InfoMesa! I also started thinking about how these subclassed widgets would keep and share metadata, and how the host 'canvas', or 'Universe', would know its widgets and be able to manipulate them for things like layouts (a rough sketch follows the list).
- I added a widget to display 'time series' images in a player (also sketched after the list). It will eventually work by synchronizing multiple 'time series' viewers so researchers can consider different model runs simultaneously. I also added a widget that digests a PowerPoint, breaks it into images, and then lays out those images.
- The original InfoMesa zooms the entire desktop; researchers were really looking to scale individual images instead (one possible mouse-wheel approach appears in the widget sketch below).
- I added mouse-over tool tips to display image metadata (also folded into the widget sketch below).
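Since several of those bullets gesture at code, here are a few sketches. First, roughly how I've been shaping the SceneManager/LayoutManager split; treat the names and signatures as a work in progress, not settled API:

```csharp
using System.Collections.Generic;
using System.Windows;
using System.Windows.Controls;

// Describes the physical display. The SCR is 12 projectors in a row,
// each 1024 x 768; a 3x3 viz wall or a 4K display would be other instances.
public class SceneManager
{
    public readonly int Columns;
    public readonly int Rows;
    public readonly Size ProjectorSize;

    public SceneManager(int columns, int rows, Size projectorSize)
    {
        Columns = columns;
        Rows = rows;
        ProjectorSize = projectorSize;
    }
}

// Subclass per layout strategy.
public abstract class LayoutManager
{
    public abstract void Layout(SceneManager scene, IList<FrameworkElement> widgets);
}

// Scales and positions each widget onto exactly one projector,
// so no image crosses a stitch line between displays.
public class OnePerProjectorLayout : LayoutManager
{
    public override void Layout(SceneManager scene, IList<FrameworkElement> widgets)
    {
        for (int i = 0; i < widgets.Count; i++)
        {
            int col = i % scene.Columns;
            int row = (i / scene.Columns) % scene.Rows;
            widgets[i].Width = scene.ProjectorSize.Width;
            widgets[i].Height = scene.ProjectorSize.Height;
            Canvas.SetLeft(widgets[i], col * scene.ProjectorSize.Width);
            Canvas.SetTop(widgets[i], row * scene.ProjectorSize.Height);
        }
    }
}
```

For the SCR that's `new SceneManager(12, 1, new Size(1024, 768))`; swapping in a different SceneManager retargets the same layouts at a different room.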
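Next, the base widget and 'Universe' ideas, with the per-image mouse-wheel zoom and the metadata tooltips from the later bullets folded in. Again, this is a guess at a shape rather than finished code:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

// Base class for whiteboard tools. Each widget carries its own metadata.
public abstract class CollageWidget : ContentControl
{
    public readonly Dictionary<string, string> Metadata =
        new Dictionary<string, string>();

    private readonly ScaleTransform scale = new ScaleTransform(1.0, 1.0);

    protected CollageWidget()
    {
        RenderTransform = scale;

        // Mouse-over tooltip built from whatever metadata the widget holds.
        MouseEnter += (s, e) =>
            ToolTip = string.Join("\n",
                Metadata.Select(kv => kv.Key + ": " + kv.Value));

        // Scale just this widget on the mouse wheel -- not the whole desktop.
        MouseWheel += (s, e) =>
        {
            double factor = e.Delta > 0 ? 1.1 : 1.0 / 1.1;
            scale.ScaleX *= factor;
            scale.ScaleY *= factor;
        };
    }
}

// The host canvas. It knows its widgets, so a LayoutManager can
// reposition them, and it hangs all commands off a right-click menu.
public class Universe : Canvas
{
    public readonly List<CollageWidget> Widgets = new List<CollageWidget>();

    public Universe()
    {
        var menu = new ContextMenu();
        var layout = new MenuItem { Header = "One image per projector" };
        // layout.Click would run a LayoutManager over Widgets.
        menu.Items.Add(layout);
        ContextMenu = menu;
    }

    public void AddWidget(CollageWidget widget)
    {
        Widgets.Add(widget);
        Children.Add(widget);
    }
}
```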
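Finally, the time-series player is essentially a timer flipping an Image through its frames, and sharing one timer across players is how I picture the synchronization working. A rough sketch under those assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.Windows.Controls;
using System.Windows.Media.Imaging;
using System.Windows.Threading;

// Steps through a sequence of time-step images. Players that share a
// single DispatcherTimer stay in lockstep, so researchers can compare
// different model runs frame by frame.
public class TimeSeriesPlayer : Image
{
    private readonly List<BitmapImage> frames = new List<BitmapImage>();
    private int current;

    public TimeSeriesPlayer(IEnumerable<Uri> frameUris, DispatcherTimer clock)
    {
        foreach (var uri in frameUris)
            frames.Add(new BitmapImage(uri));

        if (frames.Count > 0)
            Source = frames[0];

        clock.Tick += (s, e) =>
        {
            current = (current + 1) % frames.Count;
            Source = frames[current]; // advance to the next time step
        };
    }
}
```

One clock can drive every player: `var clock = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(500) }; clock.Start();`. In CollageWPF proper, this would hang off the widget base class above.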
More later.... Be sure to vote tomorrow. As a political junkie, I'm sure I'll be pretty tired-looking on Wednesday morning.