Well, if I may jump in here: I've also had my eye on the Leap Motion interface device and have been brainstorming ways to implement it... though this is all just ideas, because I'm not a programmer.
So, I can imagine that if there were a way for the SDK to detect the tips of the fingers/stylus/chopstick you want to use, that could translate into a raw Vector3 variable. With that, you could apply it to any game that requires interaction in the field (such as an RTS, a third-person Diablo-style click-to-move game, or even just a physics puzzler).
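To make that concrete, here is a minimal sketch of the idea: poll a tracker for a fingertip position and hand it to the game as a plain 3D vector. The `tracker` object and its `get_frontmost_fingertip()` call are hypothetical stand-ins for whatever the real SDK exposes, not its actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Vector3:
    x: float
    y: float
    z: float

def fingertip_to_world(tip_mm: Vector3, scale: float = 0.001) -> Vector3:
    """Convert a tracker-space fingertip (assumed millimetres) into game-world units."""
    return Vector3(tip_mm.x * scale, tip_mm.y * scale, tip_mm.z * scale)

def poll_cursor(tracker) -> Optional[Vector3]:
    """Return the current 'cursor' position, or None if no finger is visible."""
    tip = tracker.get_frontmost_fingertip()  # hypothetical SDK call
    return fingertip_to_world(tip) if tip is not None else None
```

The point is just that once the position is reduced to a single Vector3 per frame, the game (or a Playmaker variable) doesn't need to know anything else about the device.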
From what I can tell, its strengths are mostly about getting data into the computer, and that data can then be used for a lot of things... I think, however, given the nature of Playmaker, the best option would be to set it up so it simply extracts the data in a format Playmaker can read and understand.
Some potential ways I can see it being used: if the SDK is doing most of the heavy lifting, have it detect when a hand is in the field, then detect the joints on that hand and the tips of the fingers... then you can create a "hand" mesh that aligns itself to those points, so essentially you have a basic telemetry device (a rough sketch of that alignment step is below). Mostly I can see this being used much like the mouse cursor from Black & White... that "god sim" game that took the "god" part literally and became, essentially, one giant Tamagotchi-like game.
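Here's a hedged sketch of that alignment step: each frame, snap a "hand" rig's named points onto the tracked joint positions, with a little smoothing so the mesh doesn't jitter. The joint naming and the shape of the tracked data are assumptions, not the real SDK's layout.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class Vector3:
    x: float
    y: float
    z: float

def lerp(a: Vector3, b: Vector3, t: float) -> Vector3:
    """Move a fraction t of the way from point a toward point b."""
    return Vector3(a.x + (b.x - a.x) * t,
                   a.y + (b.y - a.y) * t,
                   a.z + (b.z - a.z) * t)

def align_hand_rig(rig: Dict[str, Vector3],
                   tracked: Dict[str, Vector3],
                   smoothing: float = 0.5) -> None:
    """Pull each rig point part-way toward the matching tracked joint position."""
    for joint, target in tracked.items():
        if joint in rig:
            rig[joint] = lerp(rig[joint], target, smoothing)
```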
Though, not knowing how to code and not having access to the SDK, I'm not sure how this would be done... so maybe the way to implement the Leap Motion system is a sort of "engine add-on" that handles the tracking and just passes the appropriate vector data on to the game, since it does look like the device can provide raw vector information.
As for how to recognize hand gestures in-game with Playmaker... all I can really think of would be to have a "hand" system like the one described above and have the system watch the angles of the joints. When the angles match a particular "gesture" (which you'd probably have to define manually, or tailor a rig to understand), it would toggle an event or conditional statement you can use. So, when the joint angles in the "hand" approach a "pinch" gesture, you could translate that into a "grab" system that picks up a unit on the field (aligning it to the tip of one of the "fingers" with an appropriate offset), and when the "pinch" is released, it "drops" the unit. This is with an RTS perspective in mind, though I'm sure it could apply to sim games like SimCity as well; a sketch of that pinch-to-grab logic follows below.
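A rough sketch of that pinch-to-grab idea: rather than measuring every joint angle, this version uses the distance between the thumb tip and index tip as a stand-in for "pinch", with two thresholds so the grab doesn't flicker on and off right at the boundary. All names and threshold values here are illustrative assumptions.

```python
from dataclasses import dataclass
import math

@dataclass
class Vector3:
    x: float
    y: float
    z: float

def distance(a: Vector3, b: Vector3) -> float:
    return math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2)

class PinchGrabber:
    """Tracks pinch state and picks up / drops a unit accordingly."""

    GRAB_THRESHOLD = 0.02     # fingertips closer than ~2 cm -> start a grab
    RELEASE_THRESHOLD = 0.05  # fingertips farther than ~5 cm -> drop the unit

    def __init__(self):
        self.held_unit = None

    def update(self, thumb_tip: Vector3, index_tip: Vector3, pick_unit_at, offset: Vector3):
        gap = distance(thumb_tip, index_tip)
        if self.held_unit is None and gap < self.GRAB_THRESHOLD:
            # "pinch" detected: try to pick up whatever unit is under the fingertip
            self.held_unit = pick_unit_at(index_tip)
        elif self.held_unit is not None and gap > self.RELEASE_THRESHOLD:
            # pinch released: drop the unit where it currently sits
            self.held_unit = None
        if self.held_unit is not None:
            # keep the held unit aligned to the fingertip, with an offset
            self.held_unit.position = Vector3(index_tip.x + offset.x,
                                              index_tip.y + offset.y,
                                              index_tip.z + offset.z)
```

In Playmaker terms, the two threshold crossings are exactly the kind of thing you'd expose as events ("PINCH START" / "PINCH END") and let the rest of the FSM react to.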
Is the main issue with integrating it more about which actions to make to help detect it? If so, I'd probably want to have a bit of a tête-à-tête with someone who's much better at programming, to help translate it so I can understand it more thoroughly.
... This and the Oculus are the two things I'm totally jazzed for! The Leap because it could change the entire way we think about interacting with computers, and the Oculus because it's a VR system priced well within reach of the average consumer. (People spend $400+ on televisions and monitors... hell, some are more than happy to drop that much on game consoles and significantly more on computer hardware... so it's priced appropriately, I'd think.)