Friday, June 27, 2008
Mad props to BrianInSuwon, a poster on the Panda3D forums. I had wrestled with getting an animated Actor into Panda3D after creating it in Blender. I think part of my problem was that I was using the newest version of Blender, and using envelopes to deform the mesh I had put over my armatures.
I'm not really trying to become a 3D animator; rather, I'm learning the mechanics of the animation and art pipeline to better understand game engine development. I'm picking up Blender while picking up Panda. There are several ways to do bone animation in Blender, and you really do have to do it one particular way to get it to work. I won't repeat the instructions here; instead I'll link off to the forum post, but this does work using the Chicken exporter plug-in for Blender.
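For reference, here's a minimal sketch of what loading the exported character looks like on the Panda3D side, once Chicken has produced the egg files. The file names ('models/ninja', 'models/ninja-walk') and the animation name are made up for illustration; substitute whatever you exported:
import direct.directbase.DirectStart
from direct.actor.Actor import Actor
# 'models/ninja' and 'models/ninja-walk' are hypothetical eggs produced by the Chicken exporter
actor = Actor("models/ninja", {"walk": "models/ninja-walk"})
actor.reparentTo(render)
actor.loop("walk")   # play the exported armature animation in a loop
run()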
Wednesday, June 25, 2008
Panda3D in a Global Immersion Dome
To the outside world (at least the four or five people who have visited my blog), the topics covered must seem pretty random from week to week. Well, this week is no exception! I really like what I'm doing, because I never know what's around the next corner. Really, my experience is probably not all that new. Deep knowledge of a narrow area has its place (e.g. the hot-shot Oracle DB administrator), but that doesn't seem to be the game anymore. Rather, the profile seems to be the ultra-adaptable, multi-lingual (C, C++, Java, Python, Ruby, etc.) developer who can work in any environment or platform, multi-task, turn on a dime, and work on multiple teams at once. I'm not saying that I'm the best example, but it's what I'm striving to be.
That said, I thought I'd pass along this tidbit from my latest adventure...
Serious gaming is a hot topic on campus. How can we use game engine platforms to create new tools for simulation, interactive visual applications, training, and learning? Marry this with unique assets such as an interactive dome, an ultra-high-resolution (4K) stereo environment, and a cool 360-degree 'viz wall', and it gets even more interesting. Over the summer, my group is exploring this intersection of game engines and unique environments, and reaching out to folks on campus to find like-minded 'explorers'.
The true topic of this post is getting Panda to work in our dome, which turned out to be fairly straightforward. The example I'm using is a bit 'hacky' because it reflects a lot of trial and error; fortunately, Python is great for that. I'll try to annotate what did the trick, using the baked-in panda and scene you get with the download...
# our dome has a 2800x2100 display, create a window at the 0,0 origin with no borders like so...
from pandac.PandaModules import loadPrcFileData
loadPrcFileData("", """win-size 2800 2100
win-origin 0 0
undecorated 1""")
import direct.directbase.DirectStart
import math
from direct.task import Task
from direct.actor import Actor
from direct.interval.IntervalGlobal import *
from pandac.PandaModules import *
# create a function that sets up a camera to a display region, pointing a given direction
# note that we're essentially creating one borderless window divided into four equal regions.
# each region depicts a camera with a precise eye-point and aspect ratio.
#
# NOTE this geometry is particular to our dome
def createCamera(dispRegion, h, p, r):
    # make a camera bound to the given display region and aim it with the given heading/pitch/roll
    camera = base.makeCamera(base.win, displayRegion=dispRegion)
    camera.node().getLens().setViewHpr(h, p, r)
    camera.node().getLens().setFov(112, 86)
    return camera
# set the default display region to inactive so we can remake it
dr = base.camNode.getDisplayRegion(0)
dr.setActive(0)
#settings for main cam, which we will not really be displaying. Actually, this code might be
# unnecessary!
base.camLens.setViewHpr(45.0, 52.5, 0)
base.camLens.setFov(112)
# set up my dome-friendly display regions to reflect the dome geometry
window = dr.getWindow()
dr1 = window.makeDisplayRegion(0, 0.5, 0, 0.5)
dr1.setSort(dr.getSort())
dr2 = window.makeDisplayRegion(0.5, 1, 0, 0.5)
dr2.setSort(dr.getSort())
dr3 = window.makeDisplayRegion(0, 0.5, 0.5, 1)
dr3.setSort(dr.getSort())
dr4 = window.makeDisplayRegion(0.5, 1, 0.5, 1)
dr4.setSort(dr.getSort())
# create four cameras, one per region, with the dome geometry. Note that we're not using the
# base cam. I tried this at first, pointing the base cam at region 1. It worked, but it threw the
# geometry off for some reason. The fix was to create four cameras, parent them to the base
# cam, and off we go.
cam1 = createCamera((0, 0.5, 0, 0.5), 45.0, 52.5, 0)
dr1.setCamera(cam1)
cam2 = createCamera((0.5, 1, 0, 0.5), -45.0, 52.5, 0)
dr2.setCamera(cam2)
cam3 = createCamera((0, 0.5, 0.5, 1), 135.0, 52.5, 0)
dr3.setCamera(cam3)
cam4 = createCamera((0.5, 1, 0.5, 1), -135, 52.5, 0)
dr4.setCamera(cam4)
# loading some baked-in model
environ = loader.loadModel("models/environment")
environ.reparentTo(render)
environ.setScale(0.25,0.25,0.25)
environ.setPos(-8,42,0)
cam1.reparentTo(base.cam)
cam2.reparentTo(base.cam)
cam3.reparentTo(base.cam)
cam4.reparentTo(base.cam)
# rest of code follows...this works!
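One nice consequence of parenting the four dome cameras to base.cam is that you can steer the whole panorama by moving base.cam alone. As a quick illustration (my own hypothetical addition, not part of the original demo), a task like this would slowly pan the entire dome view:
# rotate base.cam's heading; all four dome cameras follow since they are parented to it
def spinDomeView(task):
    base.cam.setH(task.time * 10.0)   # 10 degrees per second
    return Task.cont
taskMgr.add(spinDomeView, "spinDomeView")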
Friday, May 9, 2008
Techno-travels and HASTAC Part II
In brief, here's a demo of a physical/virtual mashup. In this case, UbiSense tracking follows individuals within a space called the Social Computing Room (SCR), and their positions are depicted within a virtual representation of the same space.
One can think of a ton of ways to take this sort of thing. There are many examples of using the virtual world as a control panel for real-world devices and sensors, such as the Eolus One project. How can this idea be applied to communication between people, to social applications, and so on? What sorts of person-to-person interactions between people in the SCR and remote visitors are possible? I have this idea that virtual visitors would fly in and view the actual SCR from a video wall. Then they could fly through the wall (through the looking glass) to see and communicate with the virtual people as they are arranged in the room. A fun thing we'll be using as a demo at HASTAC.
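As a rough sketch of the mashup mechanics (not the actual demo code), the core of it is just mapping a tracked position in the physical room onto coordinates in the virtual copy of the room. The origin points and scale below are made-up values for illustration:
# Hypothetical mapping from a UbiSense (x, y) reading in meters to
# coordinates in the virtual model of the Social Computing Room.
ROOM_ORIGIN = (0.0, 0.0)          # tracked coordinates of one room corner (assumed)
VIRTUAL_ORIGIN = (100.0, 100.0)   # same corner in the virtual space (assumed)
SCALE = 1.0                       # virtual units per meter (assumed)
def room_to_virtual(x_m, y_m):
    vx = VIRTUAL_ORIGIN[0] + (x_m - ROOM_ORIGIN[0]) * SCALE
    vy = VIRTUAL_ORIGIN[1] + (y_m - ROOM_ORIGIN[1]) * SCALE
    return vx, vy
# e.g. a person tracked at (3.2, 4.5) meters lands at (103.2, 104.5) in the virtual room
print(room_to_virtual(3.2, 4.5))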
Friday, May 2, 2008
Techno-Travels and HASTAC Part I

I'll be presenting at the HASTAC conference on May 24th at UCLA. The conference theme is 'TechnoTravels/TeleMobility: HASTAC in Motion'. I'll quote the description of the theme:
This year’s theme is “techno-travels” and explores the multiple ways in which place, movement, borders, and identities are being renegotiated and remapped by new locative technologies. Featured projects will delve into mobility as a modality of knowledge and stake out new spaces for humanistic inquiry. How are border-crossings being re-conceptualized, experienced, and narrated in a world permeated by technologies of mobility? How is the geo-spatial web remapping physical geographies, location, and borderlands? How are digital cities interfacing with physical space? How do we move between virtual worlds? And what has become of sites of dwelling and stasis in a world saturated by techno-travels?
OK...so how do you take a bite out of that apple? In my case, the presentation is going to center on something called the 'Social Computing Room' (SCR), part of the visualization center at UNC Chapel Hill. There are lots of different ways to approach the SCR. It's a visualization space for research, a canvas for art and new media projects, a classroom, a video conference center, a gaming and simulation environment, and a physical space that acts as a port between the physical world and the digital world. When talking about interesting new ideas, it's difficult to avoid overstating the potential, but I'll try to use the SCR to talk about how physical and digital worlds converge, using the 'port' metaphor. Thinking about the SCR as a port can start by looking at a picture of the space. Now compare that picture with a capture of a virtual version, in this case within Second Life:

To me, the SCR is a port in the sense that it exists in both worlds, and the ongoing evolution of the space will explore the ways these two sides of the coin interact. Before I go there, perhaps a bit more about the HASTAC theme. In this installment, let's talk about borders in a larger sense, coming back to the SCR a bit down the road.
Techno-travels? Borders? Mobility? Borders are falling away in our networked world: the borders between geographic places, and the borders between the physical and virtual worlds. The globe is a beehive of activity, and that activity can be comprehended in real time from any vantage point. A case in point is the real-time mashups between RSS feeds and Google Maps, such as flickrvision and twittervision, which show photos being uploaded to Flickr and twitters being posted around the globe. You can watch the action unfold from your desktop, no matter where you are. Borders between places start to disappear as you watch ordinary life unfold across the map, and from this perspective the physical borders seem to mean less, like the theme song to that old kids' show 'Big Blue Marble', if you want to date yourself. Sites like MySpace and Orkut have visitors from all over the world, as illustrated by this ComScore survey, and social networks don't seem to observe these borders either.
The term 'neogeography' was described by Joab Jackson in National Geographic News as the markup of the world by mashing up mapping with blogs. Sites such as Platial serve as an example of neogeography in action, essentially providing social bookmarking of places. Google Earth is being marked up as well: using Google Earth and Google Street View, you can see and tag the whole world. Tools like SketchUp let you add 3D models to Google Earth, such as this Manhattan view:

So we're marking up the globe, and moving beyond markup to include 3D modeling. Web2.0 and 'neogeography' add social networking too. At the outset, I also waved my hands a bit at the SCR by comparing real and virtual pictures of this 'port'. That's a bunch of different threads that can be tied together with some of the observations in an excellent MIT Technology Review article called 'Second Earth'. In that article, Wade Roush looks at virtual worlds such as Second Life, and at Google Earth, and asks, "As these two trends continue from opposite directions, it's natural to ask what will happen when Second Life and Google Earth, or services like them, actually meet." Instead of socially marking up the world, the crucial element is the ability to be present at the border between real and virtual, to recognize others who are inhabiting that place at that time, and to connect, communicate, and share experiences in those places. That gets to how I would define the SCR as a port.
The drawback to starting out with this 'Second Earth' model is that it limits the terrain to a recognizable spatial point. While a real place can sometimes serve as a point of reference in the virtual world, that also unnecessarily constrains the meaning. What is an art exhibit? What is a scientific visualization? What is any collection of information? As naturally as we mark up the world, we're also marking up the web, collaborating, and experiencing media in a continuous two-way conversation...that's a lot of what Web2.0 is supposed to be about. How can we create the same joint experience, where we're all present together as our real or virtual selves sharing a common experience? That to me is the central goal of 'techno-travels', and perhaps expands a bit on the idea of border crossing.
Anyhow, I'm trying to come up with my HASTAC presentation, and thinking aloud while I do it.
Tuesday, April 15, 2008
Hands-free control of Second Life
Cool video, though this interaction style still seems a bit awkward. I think the more interesting idea is the capture of gestures, which would be great for speaking to a group via an avatar, for example.
video here...
Monday, April 14, 2008
NetBeans 6.0.1 running like a pig...here's how I fixed it
NetBeans 6.0.1 was running like a pig on my ThinkPad T60p. I did a bit of poking around and found this set of config changes quite helpful, so I'll pass them along:
This is in my netbeans.conf, which should be under Program Files/NetBeans 6.0.1/etc on Windows. The critical change was the memory config:
netbeans_default_options="-J-Dcom.sun.aas.installRoot=\"C:\Program Files\glassfish-v2ur1\" -J-client -J-Xss2m -J-Xms32m -J-XX:PermSize=32m -J-XX:MaxPermSize=200m -J-Xverify:none -J-Dapple.laf.useScreenMenuBar=true"
Now NetBeans is running quite well. I'm hacking some Sun code samples to get data from the accelerometer to build a prototype air mouse. This isn't a standard mouse, but rather a way for multiple users to manipulate visualizations in the Social Computing Room. For grins, here's a shot of the space...

Thursday, April 10, 2008
Note to Self
I knew this, but forget where I wrote it down, so I'll memorialize it here. I needed to add some files to a SunSPOT project (in this case a desktop client), but couldn't remember the property in the Ant script that points to additional classpath entries...voilà!
main.class=org.sunspotworld.demo.TelemetryFrame
#main.args=COM1
#port=COM1
user.classpath=lib/log4j-1.2.15.jar
Of course, now I'll forget that I stuck it in my blog. I'm looking at using the spots to create a multi-user input interface to a 360 degree visualization environment (our Social Computing Room), at least as a proof-of-concept.