Thursday, 13 March 2014

Motion capture: session one

This session I focused on trying to work out custom templates. I also recognized that a major problem I have is overlooking the simple solutions, like using the help documentation for Blade.

The session had some successes and produced its own set of problems that need to be looked into before going back into the studio.

The marker setup we ended up landing on for this session was as follows:

The three markers on the head would be used for rotation, with their placement helping to track the direction of the face. The translation of the top half would be taken from the distance between the markers around the shoulders and the hips. The main part of the acting would come from the hip movements. The translation of the bottom half of the lamp would come from the information between the hips and the knees, and the rotation for the base would come from the ankle and foot markers. The markers were placed along the central line of the body, especially on the toes, as the legs would be held together somehow. A rough sketch of this mapping as a data structure is below.
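To keep track of which marker groups drive which parts of the lamp, something like the following mapping could be kept alongside the session notes. This is a minimal sketch in Python; the marker and control names are hypothetical placeholders, not labels from the actual Blade session.

    # Hypothetical mapping from body marker groups to lamp rig controls.
    # Marker and control names are placeholders, not actual session labels.
    MARKER_TO_LAMP = {
        ("head_front", "head_left", "head_right"): "lamp_head_rotate",
        ("shoulder_l", "shoulder_r", "hip_l", "hip_r"): "lamp_upper_translate",
        ("hip_l", "hip_r", "knee_l", "knee_r"): "lamp_lower_translate",
        ("ankle_l", "ankle_r", "toe_l", "toe_r"): "lamp_base_rotate",
    }

    for markers, control in MARKER_TO_LAMP.items():
        print(f"{', '.join(markers)} -> {control}")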


This marker setup proved to have some problems with occlusion, and with swapping between the marker on the chin and the marker on the chest. This could be solved in two ways:

Either move the central marker on the chest out to both shoulders, like this:


Or, as was suggested by Norbert, use a multiply node in Maya to take the movement from the knee area and reverse it so it bends the right way, eliminating the need for the shoulder/chest markers at all. A rough sketch of this is below.
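As a sanity check on the idea, here is a minimal sketch of how that could look in Maya using a multiplyDivide utility node. The joint names are hypothetical placeholders; the actual joints would come from the solved skeleton.

    # Minimal sketch: invert a knee joint's X rotation and use it to
    # drive the lamp's mid joint, as per Norbert's suggestion.
    # Joint names ("knee_jnt", "lamp_mid_jnt") are hypothetical.
    import maya.cmds as cmds

    mult = cmds.shadingNode('multiplyDivide', asUtility=True, name='knee_invert')
    cmds.setAttr(mult + '.input2X', -1)  # flip the bend direction
    cmds.connectAttr('knee_jnt.rotateX', mult + '.input1X')
    cmds.connectAttr(mult + '.outputX', 'lamp_mid_jnt.rotateX')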



This is a particularly interesting suggestion as it may yield more realistic results in terms of range of movement.

Actually creating the custom skeleton in Blade proved to be quite a task. While there is functionality to build your own structure, we were unable to find a way of renaming the markers or applying a custom set of bones to them, meaning the session didn't result in any actual capture other than the initial ROM.

Before the next session I want to look into applying custom bones (probably looking towards dynamic props for a solution) and also test whether only the rotation of the source can be applied to a target rig. A sketch of one way to test that is below.
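One quick way to test rotation-only transfer, assuming the data is already in Maya, would be an orient constraint between a source joint and the corresponding target joint. A minimal sketch, with hypothetical joint names:

    # Minimal sketch: drive a target joint with only the rotation of a
    # source joint, leaving the target's translation untouched.
    # Joint names are hypothetical placeholders.
    import maya.cmds as cmds

    cmds.orientConstraint('source_hip_jnt', 'lamp_base_jnt', maintainOffset=True)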


Tuesday, 11 March 2014

Lamp/human comparisons

As I've chosen to retarget to the lamp rig, there are a few things I need to do:

  • Look at the rig and how it's constructed
  • Figure out how movement from the source data can be applied across (marker placement)
  • Create a motion analysis of how the lamp should move, to help judge how successful the final result is

The Rig
So the rig for the lamp is pretty simple in the grand scheme of things, which is handy. The structure should also be useful when figuring out where the markers should be placed in order to get the right sort of movement from the actor during capture.



Marker placement on people

A first pass of analysis led pretty easily to this marker setup. It takes into account the joints in the rig and the joints on the actor.
But when watching this, one of the poses jumped out as being an awkward one to manage:

The pose seen in the images below demonstrates a couple of issues with the current setup. The first is something that would need to be considered during the capture process: props to help with balance when the lamp is over-extended. The second is the positioning of the head. The proposed solution in the image below might not actually help at all; at present there are five joints driving two in order to control the head, and this might cause some issues for head mobility. I'd like to try some posing tests where I take images of the lamp in various poses and then get people to replicate them. This might highlight more issues that need to influence changes to the marker system, or more need for props during capture. I'll be paying close attention to the head movement in particular.
Motion analysis: lamp movement

I've done a bit of motion analysis for lamp movement, based mainly on Luxo Jr in the Pixar short. I also tried to look for movement with a similar structure, and it led me to believe that the best movement reference for an anglepoise lamp would probably come from a three-legged cat with a front leg missing. This is due to the bone structure of the front leg, neck and head, the agility a cat possesses, and the weight that needs to be placed on the single limb to add emphasis (although a regular cat was working out fine).

While doing this motion research I started to consider how to measure success, and I came to the conclusion that the best way of testing would probably be to hand-animate a scene with the lamp rig I have and then try to get the motion capture to achieve something similar. So that's a task I need to do pretty promptly. A rough sketch of how the comparison itself could work is below.
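Assuming both the hand-animated version and the retargeted capture end up as sampled joint angles, one simple (if crude) way to score the match would be a root-mean-square error over the curves. A minimal sketch in Python, with made-up sample data:

    # Minimal sketch: compare two sampled rotation curves with RMSE.
    # The arrays here are made-up placeholders, not real capture data.
    import numpy as np

    hand_animated = np.array([0.0, 5.2, 11.8, 20.1, 26.4])  # degrees per frame
    retargeted    = np.array([0.0, 4.9, 12.5, 19.0, 27.2])

    rmse = np.sqrt(np.mean((hand_animated - retargeted) ** 2))
    print(f"RMSE: {rmse:.2f} degrees")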


Later today
Technical side: looking at creating a custom VST (Vicon skeleton template). I think it might involve manually labelling the markers in order to achieve anything, but I'm wondering if you can use the human skeleton and then just not record some parts... would it complain?

Using the existing human template but labelling the markers "wrong" (to give the illusion of the lamp's structure) might give interesting results in the viewport, and combined with selective retargeting later in the pipeline this could prove to be a beneficial technique. Furthermore, if you could turn off certain limbs, as you can with some rigs in Maya, in order to gain better visibility on the current focus, this might limit the distraction of the parts of the existing template that aren't labelled in the traditional manner.

Things I want to try (in a more concise manner)

  • Selective markers applied to the existing human template (traditional labelling, i.e. the hip is the hip)
  • Attempt to make a custom template (although I've found no help online in regards to doing this)
  • Complete skeleton, using the existing template but labelled in a way that forms the structure of the lamp within the viewport (likely to be messy)


Extra thoughts:
Adding more subtle performance. Performance is still a major factor in what this project is trying to achieve; ending up with a lamp that moves technically correctly is what I'm hoping to build on using this process, which is why certain aspects are being considered.
For movements in the neck it might be nice to use the extra joint to influence the angle between joints 2 and 3, maybe giving the option of blending between the two angles during the cleanup process. This could help to plus the performance at key moments; see the sketch below.
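One way to get that kind of blend in Maya would be a pairBlend node, whose weight attribute could be keyed during cleanup to shift between the two rotation sources. A minimal sketch, assuming hypothetical joint names:

    # Minimal sketch: blend between two neck joint rotations with a
    # keyable weight, so the angle can be "plussed" during cleanup.
    # Joint names are hypothetical placeholders.
    import maya.cmds as cmds

    blend = cmds.createNode('pairBlend', name='neck_angle_blend')
    cmds.connectAttr('neck_jnt2.rotate', blend + '.inRotate1')
    cmds.connectAttr('neck_jnt3.rotate', blend + '.inRotate2')
    cmds.connectAttr(blend + '.outRotate', 'neck_extra_jnt.rotate')
    cmds.setAttr(blend + '.weight', 0.5)  # key this over time during cleanup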


The arms need to be kept from adding movement, and the knees need to be kept together, but I'm not sure how the distance between the hips and the ankles can be kept consistent... I might need to reconsider where the markers are on the lower section of the body.



Monday, 3 March 2014

Pipeline focus

Things have changed somewhat to help fit within the scope of my abilities and the time allowed. I knew I would struggle to sort out a small project, but this has been a little ridiculous. Here's a brief summary of the new plan.


Redefine project to just one pipeline with tweaks.

Split into three parts: Pre Capture, Mid Capture and Post Capture.

Pre: Templates within Vicon Blade to help with varying structure; this encompasses the technical side of setting up a motion capture session in order to accommodate non-human rigs.

Mid: Augmenting the data using props and acting techniques

Post: Most likely using spline IK and lattice deformers to help constrain the data to the appropriate motions (a rough spline IK sketch is below).
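For the Post phase, the spline IK part at least is easy to prototype in Maya. A minimal sketch, with hypothetical joint names, just to confirm the kind of setup I have in mind:

    # Minimal sketch: put a spline IK handle on the lamp's neck chain so
    # the captured motion can be constrained to a smooth curve later.
    # Joint names are hypothetical placeholders.
    import maya.cmds as cmds

    handle, effector, curve = cmds.ikHandle(
        startJoint='lamp_neck_jnt1',
        endEffector='lamp_neck_jnt4',
        solver='ikSplineSolver',
        createCurve=True,
    )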


More research is needed into the "Pre" phase. So far I have been looking into what information there is on the template function in Vicon; it seems like most of the documentation concerns the human biped setup. There are tools to help label the markers appropriately in relation to one another. It could be as simple as figuring out which markers should be labelled as what in order to reconfigure the template to be suitable for non-human subjects. This would need a little extra research into the anatomy of the creatures intended to be used as rigs, to help assess how the markers should best be laid out.

Having spoken to Norbert, I now know that adapting the skeletons to go into the appropriate creature shouldn't be all that difficult. The translations of the bones aren't recorded apart from the root joint, so the rotations are the only things that'll need to be considered when capturing and establishing the range of motion. Again, this will need to focus on the motion analysis to help inform marker placement.

I will only need one rig other than the human control, so I think for time's sake the anglepoise lamp may be the best option, as its anatomy is pretty self-explanatory due to the joints it has. Its range of movement should be pretty self-explanatory too.

I have a mocap session scheduled for either Wednesday or Friday (Update: Friday confirmed) to allow for messing with templates. I'll book another for sometime after that to actually acquire the footage for the mid and post sections of the pipeline.

I think I'll be using the lamp, as it's the most non-human option with a load of reference footage of one being animated well (Pixar). I've started to do little motion studies on how the lamp moves and what motion it mimics. I'll upload scans later on, as I'm using my personal sketchbook for these.

Monday, 24 February 2014

Rig choice

Based on the research done for the proposal, I had overlooked some key examples of non-human rigs. I was still thinking of things that are alive, when in actual fact that doesn't need to be the case. The revised list of rig types I would like to test the process on is as follows:

  • Standard human
  • Quadruped
  • Inanimate object
  • Non human biped

This should allow a wide enough spread of rigs to see the results clearly. Also, some methods might work better for one type of rig than another.

The actual rigs to use are as follows:

Standard Human:
For this I will use the rig I produced last semester, as I have a better understanding of its skeleton and capabilities than I would of a rig found online. I also built into the skeleton the naming conventions MotionBuilder prefers when retargeting, which should ease things down the line.

Singer

Quadruped:
I'm opting to use a rig that should show up problems clearly due to its long limb length.
The deer rig is freeware for non-commercial purposes. (Creator: John Vassallo)

Screen1

Inanimate object:
The Disney research I found used a lamp as one of its examples, as there is clear reference for how it could move (Pixar's Luxo). This means there is a basis already available for how a lamp can appear to be expressive, which will give me something to compare the results to.

Screen1

Non-Human Biped:
Again, this example of a biped will have plenty of reference to go off when it comes to performance during the capture.

Screen1

I need to get a move on with this. I'm trying to arrange a meeting with Lynn so I can refocus properly. Now I have my rigs chosen (but I still want to run them past Lynn) I should be able to plan and book out the mocap suite... then the retargeting methods can begin.

Friday, 31 January 2014

Physical Props (Proposal Breakdown Part 2)

The second section of my proposal spoke of physical props, although this will actually cover all the things to think about when capturing the initial data.


For example, this quote from "Puppetology: Science or Cult?" by Brad deGraf and Emre Yilmaz, in regards to the initial capture with non-human target characters in mind:
link: http://www.awn.com/mag/issue3.11/3.11pages/degrafmotion.php3

"Ironically, it's often better to have less data than more -- we usually use only 12 body sensors. If you had one sensor for every single moving part of the body, you'd have a lot more information tying you to the human form, but for our purposes we just want enough sensors to convey the broad lines and arcs of the body."

Through my research I found a number of interesting contraptions that were used to modify human motion for use in motion capture... they all looked a little dangerous though.

This was used to help influence the movements of Pleo the dinosaur toy:
Pleo the Dinosaur


For Planet of the Apes, Andy Serkis used stilt-like extensions on his arms to help mimic apelike movement more effectively.
Andy Serkis


Giant Studios motion-captured live performers in choreographed fights to provide critical data for the animators of Real Steel.

Real Steel


Groot in the Guardians of the Galaxy film due to be released this year is played by Vin Diesel, who has been sighted getting used to the props that will help him with the extra height needed when performing.




Acting Techniques
Using acting techniques to manipulate the data during capture is a useful tactic and much safer for me to investigate than some of the contraptions mentioned.

Andy Serkis again is a great example of this, both in his role as Caesar in Planet of the Apes and as Gollum in The Lord of the Rings and Hobbit films.

Andy Serkis
Disney did some research into this technique, but instead of taking the straight capture they took key poses from the source data and matched them to the equivalent poses for the target character.

From the research paper by Yamane (2010)
link: http://www.disneyresearch.com/wp-content/uploads/nonhumanoid_sca10.pdf

Moving Forward
I think the main focus will be the acting techniques rather than the props, other than maybe basic ones to help with weighting. For example, if the character is meant to have more weight on the arms, I could get the actor to hold something to imitate the extra weight.




Preserving Motion Through Retargeting (Proposal Breakdown Part 1)

This will be quite a long post outlining the work I did researching for the proposal (I might split it into a few posts). I generally used my notebook to keep track of ideas rather than my blog, which is why it has been a bit neglected; however, my notes are a little all over the place, so I'll do my best to keep this from being confusing.

The main sections for my literature review were as follows:

  • Preserving motion through retargeting
  • Physical Props
  • Acting and Key Poses
  • Lattice deformations and Spline IK
  • Spasm: Procedural Animation and real-time retargeting.

1. Preserving motion through retargeting.

Gleicher developed a solver which preserved desirable qualities of motion, using spacetime constraints to allow interactive control of animation via a UI. The process described in his 1998 paper, delivered as part of SIGGRAPH '98, initially uses the angles of the source character's joints to drive the movement in the target character; however, this leads to problems when the character is scaled, as shown in figure 2 taken from the paper itself. A quick worked example of why is below.
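To convince myself of the scaling problem, here's a toy calculation: copying the same hip and knee angles onto a leg with different segment lengths puts the foot somewhere else entirely, which is exactly the foot-slide issue. A minimal sketch in Python with made-up angles and lengths:

    # Toy example: copying joint angles onto differently-scaled limbs
    # moves the end effector (the foot), causing foot slide.
    # Angles and segment lengths are made-up values.
    import math

    def foot_height(hip_angle, knee_angle, thigh_len, shin_len):
        # Planar two-segment leg, angles in degrees from vertical.
        a1 = math.radians(hip_angle)
        a2 = math.radians(hip_angle + knee_angle)
        return thigh_len * math.cos(a1) + shin_len * math.cos(a2)

    source = foot_height(30, 45, thigh_len=0.45, shin_len=0.45)
    target = foot_height(30, 45, thigh_len=0.60, shin_len=0.60)  # scaled-up leg
    print(f"source foot drop: {source:.3f} m, target: {target:.3f} m")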

To fix this he used IK solvers to apply constraints to the character's feet, forcing them to stay planted; but as IK solvers consider each frame independently, the solver doesn't know what the motion is in relation to the frames around it, resulting in "high frequency jerkiness".


Spacetime constraints (Witkin and Kass, 1988) use three factors to help dictate the movement needed: what the character needs to do (the mechanical action of what's being moved, so from one place to another, onto something, etc.); how it needs to do it (the characteristics of the movement); and how the physical structure of the character would affect the movement (for example, how the joints are able to move, what the mass of the character is, etc.). A toy version of the idea is sketched below.
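To get my head around the idea, here is a toy one-dimensional version: find a trajectory that hits required start and end positions at rest (what to do) while minimising total acceleration (how to do it). This is only an illustrative sketch using scipy, not Witkin and Kass's actual solver:

    # Toy 1D "spacetime constraints": minimise squared acceleration over a
    # trajectory, subject to required start/end conditions. Illustrative only.
    import numpy as np
    from scipy.optimize import minimize

    n = 20  # number of frames

    def total_acceleration(x):
        accel = np.diff(x, n=2)    # second differences ~ acceleration
        return np.sum(accel ** 2)  # "how it needs to do it": smoothly

    constraints = [
        {'type': 'eq', 'fun': lambda x: x[0]},           # "what": start at 0
        {'type': 'eq', 'fun': lambda x: x[-1] - 1.0},    # "what": end at 1
        {'type': 'eq', 'fun': lambda x: x[1] - x[0]},    # start at rest
        {'type': 'eq', 'fun': lambda x: x[-1] - x[-2]},  # end at rest
    ]

    result = minimize(total_acceleration, np.linspace(0, 1, n),
                      constraints=constraints, method='SLSQP')
    print(result.x.round(3))  # smooth ease-in/ease-out style trajectory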

The main problem Gleicher had was that despite being able to replicate the movements of the source character, the process took out all the personality.

Gleicher's work may be beneficial in transplanting movements from source to target for replicating the mechanics of the movement, but when retargeting to a target that has a different structure to the source, I would have thought a little flex from the original movement might be useful. Then again, with spacetime constraints one of the factors is the physical structure of the character, which could either break the movement entirely or cope with it. Might be a useful thing to look into?

Moving Forward
I mentioned in my dissertation Unity's Mecanim and Maya's HumanIK; both these tools focus specifically on the retargeting side of things. I feel like exploring the possibilities these two processes offer would be in my best interest, so I'll dedicate a post specifically to research into their capabilities, hopefully resulting in pipelines specific to each.


Wednesday, 18 December 2013

Presentation slides and notes

This is long overdue. I got really bogged down with other coursework, which is no excuse.



Slide 1
As written, my aim is to explore methods of retargeting motion capture data to non-human characters in the hope of discovering a more efficient pipeline than those currently available.

Slide 2
This slide shows an example of the main thing I'm hoping to avoid. Especially when it comes to biped characters, a real problem is getting them to move as if they aren't just a guy in a suit at a theme park.

Slide 3
This is the first of the existing examples I found. It's taken from a short called "40 Years", and the stills on screen are taken from a breakdown video they made. While it's very nice to see it visually, I haven't been able to find any documentation on the processes they needed to go through to get to this point.

Slide 4
The other example is Ted. They had Seth MacFarlane on set in the motion capture suit (the kind that uses sensors rather than markers, which eliminates the problem of occlusion and marker swapping as the sensors resonate at different frequencies). As you can tell from the image, there's a problem with proportions when it's first captured, which then needs to be fixed in cleanup.

Slide 5
So as a result of the research (which I have documented really badly on this blog, but I'll get better!) I have four methods I can look into to try to improve the pipeline a little. I think it would be best if I focused on these four alone to narrow the scope of the project, but at the same time, if I find something interesting that's relevant and worthwhile, it's better that I follow it.

Slide 6
This is an example of how props can be used. I won't be able to do something as grand as these, but I would be able to get the actor to hold weights to help shift their own weight. The obvious problems that come from using props to adapt a performance are listed in the bullet points: the center of gravity might appear wrong for the character if it's a quadruped, but also marker occlusion/shifting and confidence. If you're in one of those rigs, I'm not sure how confidently you'd be able to move.

Slide 7
So a big chunk of my research currently has been looking at the procedural methods of animation found in Spore. The animators had the massive task of having to produce animation for creatures they'd never seen, so it's an interesting, if very wordy, read.

Slide 8
Tasks that I'd need to do to complete things.

Slide 9
A thankful hippo

Comments
The main comment I got after presenting was that it seemed like my scope was too big for the time we have. I think this was because my slides didn't really express that I only want to find ways of improving the pipeline slightly using the techniques I found, rather than fixing it entirely. Because of this I should probably look at revising my aim and objectives to reflect that more clearly.