Friday, 9 May 2014

Retargeting research

I wanted to experiment with retargeting motion with constraints from one joint chain to another, but as I don't yet have the motion capture data I'd need, I've set up some simple tests in Maya.

Firstly I've decided to just try applying movement from one joint to another without any complex rigs in place; after that I think it would be a good idea to apply the movement to the lamp rig, just to check that the rig doesn't interfere with the motion or vice versa.

I've set up three tests: uniform movement translation, non-uniform movement, and the multiply node that reverses movement. The last one is just to double-check it does what Norbert says it does (which I don't doubt) but also to check I know how to use it.

For the first test I used a parent constraint, which seems to work fine so long as the target joint chain is in the same position as the source. If it isn't, limit the constraint to just the rotation channels: including translation causes the entire joint chain to move and gives undesirable results. And if the target chain doesn't match the orientation of the source joints then, again, undesirable results show up in the path of the movement...
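For reference, the rotation-only version is simple enough to set up in Maya. Here's a minimal sketch; the joint names are placeholders of my own, not the actual rig:

```python
# Rotation-only parent constraint: skip all translate channels so only
# rotation is carried across; maintainOffset compensates for a target
# chain that doesn't share the source's rest position/orientation.
import maya.cmds as cmds

source_joint = 'source_elbow'  # placeholder name
target_joint = 'target_elbow'  # placeholder name

cmds.parentConstraint(source_joint, target_joint,
                      maintainOffset=True,
                      skipTranslate=['x', 'y', 'z'])
```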

I've been reading a paper by Hsieh, Chen and Ouhyoung from National Taiwan University:
http://graphics.im.ntu.edu.tw/docs/cadcg05_hsieh.pdf

It actually looks at retargeting to differently articulated figures, and it seems the best solution they came up with is to make a transition skeleton. This takes away the issues I was having when trying to retarget to a bone with a different orientation. They combat the difference in the number of bones making up each skeleton by applying the motion of one source bone to many target bones.
They create the skeleton by repositioning each bone of the transition skeleton to align with the corresponding source bone.

While they do align each bone's core direction, they don't consider the twist along the bone, which can lead to some unexpected results.
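As a note to myself, the alignment step itself should be straightforward to prototype in Maya. A rough sketch of the idea under my own assumptions (the bone mapping and all names are placeholders, and this ignores the twist problem mentioned above):

```python
# Snap each transition-skeleton joint onto the matching source joint so
# the chains line up before motion is carried across. In the paper one
# source bone can also drive several transition bones.
import maya.cmds as cmds

bone_map = {
    'trans_hip':  'src_hip',   # placeholder mapping
    'trans_knee': 'src_knee',
    'trans_foot': 'src_foot',
}

for trans_joint, src_joint in bone_map.items():
    pos = cmds.xform(src_joint, query=True, worldSpace=True, translation=True)
    cmds.xform(trans_joint, worldSpace=True, translation=pos)
```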






Thursday, 1 May 2014

Mocap Session

I recently had another session in the MoCap suite, this time with Charlie in the suit. Hoping to get it up and running using a human template, I had some varied success, but the new setup of markers was a little problematic and will need remedying. For example, the right leg being tied so closely to the left caused a lot of occlusion, which made the cleanup lengthy; however, it seems this could be fixed by using only one set of markers on the leg.

As you can see from this screenshot, the template itself did take to the markers and was responding in a reasonable manner.



Even if there were a couple of glitches, like his right leg becoming detached...


Overall, the main problem that kept recurring was how best to limit the movement. The clingfilm was bound around the upper legs and then again, separately, around the lower legs to allow for movement around the knees, but this still seemed to limit the movement a little too much, leaving many of the actions stiff, which could affect performance. The clingfilm also slipped out of place, moving the markers that were on top of it, but this was solved with some double-sided sticky tape securing it in place. The feet wouldn't stay together either, but binding them together would have made Charlie a little too unstable, as he was already a bit shaky. We placed crash mats around to protect him, and Mike was close by to give physical support if he needed it.

If I go with using only the one leg of markers then I wouldn't need to clingfilm him at all, but it may make the balance look a little off... sacrifices, sacrifices...

At this rate it seems much less hassle to just keyframe-animate the lamp; however, the process may be more worthwhile for more complex characters... but maybe not.


Monday, 28 April 2014

MoCap Planning



Original clip: just something short where it will be clear if things go right... or, more likely, wrong.




77 frames long - 3.2 (ish) seconds

The length isn't so important for matching, but it gives me a rough idea of timing when directing the shots. Whether an actor can move as sharply as the lamp can when keyframed might be another issue this flags up. As usual I'm filling in a shot list so that everything is pre-planned before going in.




Obviously it's just the one shot, but I'll need the information there anyway, so there's no need to cannibalise what I already have.

For the prop to signify the ball, I was thinking it could basically just be someone giving a visual cue; as it's not being recorded, I can add in a virtual ball afterwards. This means I'll need someone else in the room helping me, other than Charlie in the suit, which is commonplace during regular shoots.

I'd also like to take in a tripod to film the shoot, to help show the difference between what was actually performed and what the captured result was, but I presume I'll need to get Charlie to sign something stating that he is OK with this.

Mocap suite visit 2 - Human template

I had a few ideas on how to get the custom VST working and calibrated to the lamp setup from last time, but after spending a good while on my lonesome in the mocap suite it turned out everything I'd read wasn't quite going to cut it, so I gave up on creating a full custom setup and started trying to get the human template to work with the markers I had.

It turns out it can be quite tricky mapping the custom skeleton when many of the markers are different: you have to consider how they are connected to one another, and also what movement needs to be captured.

Below is the information I have available to me: the top is the naming of the markers for the human VST, and the bottom is the actual structure of the VST that would then be calibrated to the markers.



I thought adding in extra, unnecessary joints to help with the naming conventions might be an option, in order to get the markers to link up in a suitable manner; however, this would give the skeleton a way of bending that I don't want. There might be a way of countering this by adding in an extra stick (the green and yellow lines in the top image), which seems able to restrict movement somewhat, but this will need to be something I test when next in the mocap suite. If this doesn't work then I can probably try to fix it in the retargeting phase, but that may be a massive pitfall for the project.

After careful thought I'm going to attempt the setup that uses the knees as the main bend. That marker setup within the human template will allow for better data capture and range of movement.

The final setup of markers I will be using for this project will be as follows:

This setup allows the human layout to be applied with ease without sacrificing movement. However, the capture will be more difficult to direct, as the knees will need to be reversed during retargeting.

Thursday, 13 March 2014

Motion Capture - Session One

This session I was focusing on trying to work out custom templates. I also recognized that a major problem I have is overlooking the simple solutions... like using the help documentation for Blade.

The session had some successes and produced its own set of problems that need to be looked into before going back into the studio.

The marker setup we ended up landing on for this session was as follows:

The three markers on the head would be used for the rotation, with their placement helping to track the direction of the face. The translation of the top half would be taken from the distance between the markers around the shoulders and the hips. The main part of the acting would come from the hip movements. The translation of the bottom half of the lamp would come from the information between the hips and the knees, and the rotation for the base would come from the ankle and foot markers. The markers were placed along the central line of the body, especially when placing them on the toes, as the legs would be held together somehow.


This marker setup proved to have some problems with occlusion, and with swapping between the marker on the chin and the marker on the chest. This could be solved in two ways:

Either move the central marker from the chest to both shoulders, like this:


Or, as suggested by Norbert, use the multiply node in Maya to take the movement from the knee area and reverse it to bend the right way, eliminating the need for the shoulder/chest markers at all.



This is a particularly interesting suggestion as it may yield more realistic results in terms of range of movement.
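I haven't built this yet, but as I understand the suggestion, the Maya side would look something like the sketch below. All node and joint names are placeholders of my own:

```python
# A multiplyDivide node flips the captured knee rotation so the joint
# bends the opposite way on the target. Connections are made per axis.
import maya.cmds as cmds

mult = cmds.createNode('multiplyDivide', name='kneeReverse_mult')

for axis in 'XYZ':
    cmds.setAttr(mult + '.input2' + axis, -1)  # multiply each axis by -1
    cmds.connectAttr('source_knee.rotate' + axis, mult + '.input1' + axis)
    cmds.connectAttr(mult + '.output' + axis, 'lamp_target.rotate' + axis)
```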

Actually creating the custom skeleton in Blade proved to be quite a task. While there is functionality to build your own structure, we were unable to find a way of renaming the markers or applying a custom set of bones to them, meaning the session didn't result in any actual capture other than the initial ROM (range of motion).

Before the next session I want to look into applying custom bones (probably looking towards dynamic props for a solution) and also test the ability to apply only the rotation of a source to a target rig.


Tuesday, 11 March 2014

Lamp/human comparisons

As I've chosen to retarget to the lamp rig, there are a few things I need to do:

  • Look at the rig and how it's constructed
  • Figure out how movement from the source data can be applied across (marker placement)
  • Create a motion analysis of how the lamp should move, to help judge how successful the final result is

The Rig
So the rig for the lamp is pretty simple in the grand scheme of things, which is handy. The structure should also be useful when figuring out where the markers should be placed in order to get the right sort of movement from the actor during capture.



Marker placement on people

A first pass of analysis led pretty easily to this marker setup. It takes into account the joints in the rig and the joints on the actor.
But when watching this, one of the poses jumped out as being an awkward one to manage:

The pose seen in the images below demonstrates a couple of issues with the current setup. The first is something that would need to be considered during the capture process: props to help with balance when the lamp is over-extended. The second is the positioning of the head. The proposed solution in the image below might not actually help at all; at present there are five joints driving two in order to control the head, and this might cause some issues for head mobility. I'd like to try some posing tests where I take images of the lamp in various poses and then get people to replicate them. This might highlight more issues that should influence changes to the marker system, or the need for more props during capture. I'll be paying close attention to the head movement in particular.
Motion analysis - lamp movement

I've done a bit of motion analysis for lamp movement, based mainly on Luxo Jr in the Pixar short. I also tried to look for real movement with a similar structure, and it led me to believe that the best movement reference for an anglepoise lamp would probably come from a three-legged cat with a front leg being the one that's missing. This is due to the bone structure of the front leg, neck and head, the agility a cat possesses, and the weight needing to be placed on the single limb to add emphasis (although a regular cat was working out fine).

While doing this motion research I started to consider how to measure success, and I came to the conclusion that the best way of testing would probably be to hand-animate a scene with the lamp rig I have and then try to get the motion capture to achieve a similar result. So that's a task I need to do pretty promptly.
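When I get to that comparison, a rough numeric check could sit alongside the eyeballing. Here's a minimal Maya sketch under my own assumptions; the joint names and frame range are placeholders:

```python
# Sample two joints over a frame range and report the mean absolute
# per-axis rotation difference (in degrees) between them.
import maya.cmds as cmds

def rotation_error(joint_a, joint_b, start=1, end=100):
    total, samples = 0.0, 0
    for frame in range(start, end + 1):
        cmds.currentTime(frame, edit=True)
        rot_a = cmds.getAttr(joint_a + '.rotate')[0]
        rot_b = cmds.getAttr(joint_b + '.rotate')[0]
        total += sum(abs(a - b) for a, b in zip(rot_a, rot_b))
        samples += 3
    return total / samples

# e.g. keyframed lamp neck vs. the retargeted version (placeholder names)
print(rotation_error('lamp_keyed_neck', 'lamp_mocap_neck'))
```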


Later today
Technical side: looking at creating a custom VST (Vicon skeleton template). I think it might involve manually labelling the markers in order to achieve something, but I'm wondering if you could use the human skeleton and then just not record some parts... would it complain?

Using the existing human template but labelling the markers "wrong" (to give the illusion of the lamp's structure) might give interesting results in the viewport, but with selective retargeting later in the pipeline this could prove to be a beneficial technique. Furthermore, if you could turn off certain limbs, as you can with some rigs in Maya to gain better visibility of the current focus, this might limit the distraction from the parts of the existing template that aren't labelled in the traditional manner.

Things I want to try (in a more concise manner)

  • Selective markers applied to the existing human template (traditional labelling, i.e. the hip is the hip)
  • Attempt to make a custom template (although I've found no help online regarding how to do this)
  • Complete skeleton, using the existing template but labelled in a way that forms the structure of the lamp within the viewport (likely to be messy)


Extra thoughts:
Adding more subtle performance. Performance is still a major factor in what this project is trying to achieve: ending up with a lamp that moves technically correctly is the baseline I'm hoping to build on with this process, which is why these aspects are being considered.
For movements in the neck it might be nice to use the extra joint to influence the angle between joints 2 and 3... maybe giving the option of blending between the two angles during the cleanup process? This could help to plus the performance at key moments.
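If I go down that route, a pairBlend node might be one way to wire it up in Maya. A rough sketch, with placeholder names for the two angle sources and the lamp joint:

```python
# Blend two rotation sources for the neck with a keyable weight,
# so the mix can be animated during cleanup.
import maya.cmds as cmds

blend = cmds.createNode('pairBlend', name='neckAngle_blend')

cmds.connectAttr('neck_angleA.rotate', blend + '.inRotate1')
cmds.connectAttr('neck_angleB.rotate', blend + '.inRotate2')
cmds.connectAttr(blend + '.outRotate', 'lamp_neck.rotate')

# weight 0 = fully source A, 1 = fully source B; key it at key moments
cmds.setAttr(blend + '.weight', 0.5)
```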


The arms need to be kept from adding movement... the knees need to be kept together, but I'm not sure how the distance between the hips and the ankles can be kept consistent... I might need to reconsider where the markers are on the lower section of the body.
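One small thing I can do on the Maya side is keep an eye on that hip-to-ankle distance once the data is imported. A minimal sketch, with placeholder marker names:

```python
# A distanceBetween node reading two marker transforms; if this value
# drifts noticeably over a take, the capture probably needs redoing.
import maya.cmds as cmds

dist = cmds.createNode('distanceBetween', name='hipAnkle_dist')
cmds.connectAttr('hip_marker.translate', dist + '.point1')
cmds.connectAttr('ankle_marker.translate', dist + '.point2')

print(cmds.getAttr(dist + '.distance'))
```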



Monday, 3 March 2014

Pipeline focus

Things have changed somewhat to help fit within the scope of my abilities and the time allowed. I knew I would struggle to sort out a small project, but this has been a little ridiculous. Here's a brief summary of the new plan.


Redefine the project to just one pipeline, with tweaks.

Split into three parts: Pre-Capture, Mid-Capture and Post-Capture.

Pre: Templates within Vicon Blade to help with the varying structure; this encompasses the technical side of setting up a motion capture session to accommodate non-human rigs.

Mid: Augmenting the data using props and acting techniques.

Post: Most likely using spline IK and lattice deforms to help constrain the data to the appropriate motions; a rough sketch of what that might look like in Maya is below.
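This is only a guess at the Maya setup for the Post phase, with placeholder joint and geometry names:

```python
# Spline IK along the lamp's neck chain, plus a lattice deformer on the
# geometry for coarse shaping; createCurve lets Maya fit the curve.
import maya.cmds as cmds

handle, effector, curve = cmds.ikHandle(startJoint='lamp_neck1',
                                        endEffector='lamp_head',
                                        solver='ikSplineSolver',
                                        createCurve=True)

ffd, lattice, base = cmds.lattice('lamp_geo',
                                  divisions=(2, 3, 2),
                                  objectCentered=True)
```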


More research is needed into the "Pre" phase. So far I have been looking into what information there is on the template function in Vicon; it seems like most of the documentation concerns the human biped setup. There are tools to help label the markers appropriately in relation to one another. It could be as simple as figuring out which markers should be labelled as what in order to reconfigure the template for non-human subjects. This would need a little extra research into the anatomy of the creatures intended to be used as rigs, to help assess how the markers would best be laid out.

Having spoken to Norbert, I now know that adapting the skeletons to fit the appropriate creature shouldn't be all that difficult. The translations of the bones aren't recorded apart from the root joint, so the rotations are the only things that'll need to be considered when capturing and establishing the range of motion. Again, this will need to focus around the motion analysis to help inform marker placement.

I will only need one rig other than the human control, so I think for time's sake the anglepoise lamp may be the best option, as its anatomy is pretty self-explanatory thanks to the joints it has. Its range of movement should be pretty self-explanatory too.

I have a mocap session scheduled for either Wednesday or Friday (Update: Friday confirmed) to allow for messing with templates. I'll book another for sometime after that to actually acquire the footage for the mid and post sections of the pipeline.

I think I'll be using the lamp as it's the most non-human, with a load of reference footage of one being animated well (Pixar). I've started doing little motion studies on how the lamp moves and what motion it mimics. I'll upload scans later on, as I'm doing these in my personal sketchbook.

Monday, 24 February 2014

Rig choice

Based on the research done for the proposal, I had overlooked some key examples of non-human rigs. I was still thinking "alive" when in actual fact that doesn't need to be the case. The revised list of rig types I would like to test the process on is as follows:

  • Standard human
  • Quadruped
  • Inanimate object
  • Non human biped

This should allow a wide enough spread of rigs to see the results clearly. Also, some methods might work better for one type of rig than another.

The actual rigs to use are as follows:

Standard Human:
For this I will use the rig I produced last semester, as I have a better understanding of its skeleton and capabilities than I would of a rig found online. I also built into the skeleton the naming conventions MotionBuilder prefers when retargeting, which should ease things down the line.

Singer

Quadruped:
I'm opting to use a rig that should show up problems clearly due to its long limb length.
The deer rig is freeware for non-commercial purposes (creator: John Vassallo).


Inanimate object:
The Disney research I found used a lamp as one of their examples, as there is clear reference for how it could move (Pixar's Luxo Jr). This means there is already a basis for how a lamp can appear expressive, which will give me something to compare the results to.


Non-Human Biped:
Again, this example of a biped will have plenty of reference to go off when it comes to performance during the capture.


I need to get a move on with this. I'm trying to arrange a meeting with Lynn so I can refocus properly. Now I have my rigs chosen (though I still want to run them past Lynn), I should be able to plan and book out the mocap suite... then the retargeting methods can begin.

Friday, 31 January 2014

Physical Props (Proposal Breakdown Part 2)

The second section of my proposal spoke of physical props, although this will actually cover all the things to think about when capturing the initial data.


For example, this quote from "Puppetology: Science or Cult?" by Brad deGraf and Emre Yilmaz concerns the initial capture with non-human target characters in mind.
Link: http://www.awn.com/mag/issue3.11/3.11pages/degrafmotion.php3

"Ironically, it's often better to have less data than more -- we usually use only 12 body sensors. If you had one sensor for every single moving part of the body, you'd have a lot more information tying you to the human form, but for our purposes we just want enough sensors to convey the broad lines and arcs of the body."

Through my research I found a number of interesting contraptions that were used to modify human motion for use in motion capture... they all looked a little dangerous though.

This was used to help influence the movements of Pleo the dinosaur toy:
Pleo the Dinosaur


For Planet of the Apes, Andy Serkis used stilt-like extensions on his arms to help mimic apelike movement more effectively.
Andy Serkis


Giant Studios motion-captured live performers in choreographed fights to provide critical data for the animators of Real Steel.

Real Steel


Groot in Guardians of the Galaxy, due to be released this year, is played by Vin Diesel, who has been sighted getting used to the props that will help him with the extra height needed when performing.




Acting Techniques
Using acting techniques to manipulate the data during capture is a useful tactic and much safer for me to investigate than some of the contraptions mentioned.

Andy Serkis again is a great example of this, in both his role as Caesar in Planet of the Apes and as Gollum in The Lord of the Rings and Hobbit films.

Andy Serkis
Disney did some research into this technique, but instead of taking the straight capture they took key poses from the source data and matched them to the equivalent poses for the target character.

From the research paper by Yamane et al. (2010)
link: http://www.disneyresearch.com/wp-content/uploads/nonhumanoid_sca10.pdf

Moving Forward
I think the main focus will be the acting techniques rather than props, other than maybe basic ones to help with weighting; for example, if the character is meant to have more weight on the arms, I could get the actor to hold something to imitate the extra weight.




Preserving Motion Through Retargeting (Proposal Breakdown Part 1)

This will be quite a long post outlining the work I did researching for the proposal (I might split it into a few posts). I generally used my notebook to keep track of ideas rather than my blog, which is why it has been a bit neglected; however, my notes are a little all over the place, so I'll do my best to keep this from being confusing.

The main sections for my literature review were as follows:

  • Preserving motion through retargeting
  • Physical Props
  • Acting and Key Poses
  • Lattice deformations and Spline IK
  • Spasm: Procedural Animation and real-time retargeting.

1. Preserving motion through retargeting.

Gleicher developed a solver which preserves desirable qualities of motion, using spacetime constraints to allow interactive control of animation via a UI. The process described in his paper delivered as part of SIGGRAPH '98 initially uses the angles of the source character's joints to drive the movement in the target character; however, this leads to problems when the character is scaled, as shown in figure 2, taken from the paper itself.

To fix this he used IK solvers to apply constraints to the character's feet, forcing them to stay planted. But as IK solvers consider each frame independently, the solver doesn't know what the motion is in relation to the frames around it, resulting in "high frequency jerkiness".


Spacetime constraints (Witkin and Kass, 1988) use three factors to dictate the movement needed: what the character needs to do, i.e. the mechanical action of what's being moved (from one place to another, onto something, etc.); how it needs to do it, i.e. the characteristics of the movement; and how the physical structure of the character would affect the movement, for example how the joints are able to move, what the mass of the character is, and so on.
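As I understand it (this is my own paraphrase, simplifying heavily, and not either paper's exact notation), the retargeting version of the optimisation has roughly this shape:

```latex
% Find a new motion x(t) that stays close to the source motion x_0(t)
% while satisfying constraints such as planted feet at given frames.
\min_{x(t)} \int \left\| x(t) - x_0(t) \right\|^2 dt
\quad \text{subject to} \quad f_i\bigl(x(t_i)\bigr) = c_i \ \text{for each constraint } i
```

Solving over the whole motion at once, rather than frame by frame, is what avoids the jerkiness problem mentioned above.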

The main problem Gleicher had was that, despite being able to replicate the movements of the source character, the process took out all the personality.

Gleicher's work may be beneficial for transplanting movements from source to target when replicating the mechanics of the movement, but when retargeting to a target with a different structure from the source, I would have thought a little flex from the original movement might be useful. Then again, with spacetime constraints one of the factors is the physical structure of the character, which could either break the movement entirely or cope with it... might be a useful thing to look into.

Moving Forward
I mentioned in my dissertation Unity's Mecanim and Maya's HumanIK; both of these tools focus specifically on the retargeting side of things. I feel like exploring the possibilities these two processes offer would be in my best interest, so I'll dedicate a post specifically to research into their capabilities, hopefully resulting in pipelines specific to each.