{{merge from|Motion-capture acting|discuss=Talk:Motion-capture_acting#Merge_into_Motion_capture|date=September 2021}}
{{short description|Process of recording the movement of objects or people}}
{{original research | date= June 2013}}
[[File:Temporal-Control-and-Hand-Movement-Efficiency-in-Skilled-Music-Performance-pone.0050901.s001.ogv|thumb|300px|Motion capture of two [[pianist]]s' right hands playing the same piece (slow motion, no sound)<ref>{{Cite journal | last1 = Goebl | first1 = W. | last2 = Palmer | first2 = C. | editor1-last = Balasubramaniam | editor1-first = Ramesh | doi = 10.1371/journal.pone.0050901 | title = Temporal Control and Hand Movement Efficiency in Skilled Music Performance | journal = PLOS ONE | volume = 8 | issue = 1 | pages = e50901 | year = 2013 | pmid = 23300946 | pmc = 3536780 | bibcode = 2013PLoSO...850901G | doi-access = free }}</ref>]]
[[File:Two repetitions of a walking sequence of an individual recorded using a motion-capture system.gif|thumb|300px|Two repetitions of a walking sequence recorded using a motion-capture system<ref>{{Citation |last1=Olsen | first1=NL |last2=Markussen |first2=B | last3=Raket | first3=LL| year=2018 |title=Simultaneous inference for misaligned multivariate functional data |journal= Journal of the Royal Statistical Society, Series C |volume=67 |issue=5 |pages=1147–76 |doi=10.1111/rssc.12276|arxiv=1606.03295 | s2cid=88515233 }}</ref>]]
'''Motion capture''' (sometimes referred to as '''mo-cap''' or '''mocap''', for short) is the process of recording the [[motion (physics)|movement]] of objects or people. It is used in [[Military science|military]], [[entertainment]], [[sports]], and medical applications, and for validation of computer vision<ref>David Noonan, Peter Mountney, Daniel Elson, Ara Darzi and Guang-Zhong Yang. A Stereoscopic Fibroscope for Camera Motion and 3D Depth Recovery During Minimally Invasive Surgery. In proc ICRA 2009, pp. 4463–68. http://www.sciweavers.org/external.php?u=http%3A%2F%2Fwww.doc.ic.ac.uk%2F%7Epmountne%2Fpublications%2FICRA%25202009.pdf&p=ieee</ref> and robotics.<ref>Yamane, Katsu, and Jessica Hodgins. "[https://pdfs.semanticscholar.org/8de6/2ececd067c3d9e7d6f3462164a9a821d9e0a.pdf Simultaneous tracking and balancing of humanoid robots for imitating human motion capture data]." Intelligent Robots and Systems, 2009. IROS 2009. IEEE/RSJ International Conference on. IEEE, 2009.</ref> In [[filmmaking]] and [[video game development]], it refers to recording the actions of [[Motion capture acting|human actors]] and using that information to animate [[digital character]] models in 2D or 3D [[computer animation]].<ref>NY Castings, Joe Gatt, [http://www.nycastings.com/dmxreadyv2/blogmanager/v3_blogmanager.asp?post=motioncaptureactors Motion Capture Actors: Body Movement Tells the Story] {{webarchive|url=https://web.archive.org/web/20140703113656/http://www.nycastings.com/dmxreadyv2/blogmanager/v3_blogmanager.asp?post=motioncaptureactors |date=2014-07-03 }}, Accessed June 21, 2014</ref><ref name=twsBackstage>Andrew Harris Salomon, Feb. 22, 2013, Backstage Magazine, [http://www.backstage.com/news/spotlight/growth-performance-capture-helping-gaming-actors-weather-slump/ Growth In Performance Capture Helping Gaming Actors Weather Slump], Accessed June 21, 2014, "..But developments in motion-capture technology, as well as new gaming consoles expected from Sony and Microsoft within the year, indicate that this niche continues to be a growth area for actors. And for those who have thought about breaking in, the message is clear: Get busy...."</ref><ref name=twsGuardian>Ben Child, 12 August 2011, The Guardian, [https://www.theguardian.com/film/2011/aug/12/andy-serkis-motion-capture-acting Andy Serkis: why won't Oscars go ape over motion-capture acting? Star of Rise of the Planet of the Apes says performance capture is misunderstood and its actors deserve more respect], Accessed June 21, 2014</ref> When it includes face and fingers or captures subtle expressions, it is often referred to as '''performance capture'''.<ref name=twsWired>Hugh Hart, January 24, 2012, Wired magazine, [https://www.wired.com/2012/01/andy-serkis-oscars/ When will a motion capture actor win an Oscar?], Accessed June 21, 2014, "...the Academy of Motion Picture Arts and Sciences' historic reluctance to honor motion-capture performances .. Serkis, garbed in a sensor-embedded Lycra body suit, quickly mastered the then-novel art and science of performance-capture acting. ..."</ref> In many fields, motion capture is sometimes called '''motion tracking''', but in filmmaking and games, motion tracking usually refers more to '''[[match moving]]'''.


In motion capture sessions, movements of one or more actors are sampled many times per second. Whereas early techniques used [[3D reconstruction from multiple images|images from multiple cameras to calculate 3D positions]],<ref>Cheung, German KM, et al. "[https://www.researchgate.net/profile/Takeo_Kanade/publication/3854315_Real_time_system_for_robust_3D_voxel_reconstruction_of_human_motions/links/02e7e51c9c14d5ba39000000/Real-time-system-for-robust-3D-voxel-reconstruction-of-human-motions.pdf A real time system for robust 3D voxel reconstruction of human motions]." Computer Vision and Pattern Recognition, 2000. Proceedings. IEEE Conference on. Vol. 2. IEEE, 2000.</ref> often the purpose of motion capture is to record only the movements of the actor, not their visual appearance. This ''animation data'' is mapped to a 3D model so that the model performs the same actions as the actor. This process may be contrasted with the older technique of [[rotoscoping]].
{{more citations needed section | date= February 2014}}
Motion capture offers several advantages over traditional [[computer animation]] of a 3D model:
* Low-latency, near-real-time results can be obtained. In entertainment applications this can reduce the costs of keyframe-based [[animation]].<ref name="Xsens MVN Animate - Products">{{Cite web|url=https://www.xsens.com/products/xsens-mvn-animate/|title=Xsens MVN Animate – Products|website=Xsens 3D motion tracking|language=en-US|access-date=2019-01-22}}</ref> The [[Hand Over]] technique is an example of this.
* The amount of work does not vary with the complexity or length of the performance to the same degree as when using traditional techniques. This allows many tests to be done with different styles or deliveries, giving a different personality only limited by the talent of the actor.
* Complex movement and realistic physical interactions such as secondary motions, weight and exchange of forces can be easily recreated in a physically accurate manner.<ref>{{cite magazine|title=The Next Generation 1996 Lexicon A to Z: Motion Capture|magazine=[[Next Generation (magazine)|Next Generation]]|issue=15 |publisher=[[Imagine Media]]|date=March 1996|page=37}}</ref>
* Specific hardware and special software programs are required to obtain and process the data.
* The cost of the software, equipment and personnel required can be prohibitive for small productions.
* The capture system may have specific requirements for the space in which it is operated, depending on camera field of view or magnetic distortion.
* When problems occur, it is easier to shoot the scene again rather than trying to manipulate the data. Only a few systems allow real-time viewing of the data to decide if the take needs to be redone.
* The initial results are limited to what can be performed within the capture volume without extra editing of the data.
* Movement that does not follow the laws of physics cannot be captured.
* Traditional animation techniques, such as added emphasis on anticipation and follow-through, secondary motion, or manipulating the shape of the character, as with [[squash and stretch]] animation techniques, must be added later.
* If the computer model has different proportions from the capture subject, artifacts may occur. For example, if a cartoon character has large, oversized hands, these may intersect the character's body if the human performer is not careful with their physical motion.
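The proportion-mismatch artifact described above is commonly mitigated by motion retargeting. A minimal sketch of the idea, in Python, rescales each captured per-bone offset by the ratio of the character's bone length to the performer's; the bone names and lengths here are hypothetical, not data from any real capture pipeline:

```python
# Minimal retargeting sketch: captured joint offsets are rescaled by the
# ratio of the character's bone lengths to the performer's, so motion
# recorded on one body roughly fits a differently proportioned character.
# All names and measurements below are illustrative.

def retarget(offsets, performer_lengths, character_lengths):
    """Rescale per-bone offset vectors (in metres) for a character whose
    limbs have different lengths from the capture subject."""
    retargeted = {}
    for bone, (dx, dy, dz) in offsets.items():
        scale = character_lengths[bone] / performer_lengths[bone]
        retargeted[bone] = (dx * scale, dy * scale, dz * scale)
    return retargeted

# One frame of hypothetical capture data: each bone's end-joint offset
# from its parent joint.
frame = {"forearm": (0.25, 0.0, 0.05), "hand": (0.08, 0.02, 0.0)}
performer = {"forearm": 0.25, "hand": 0.1}
# A cartoon character with oversized hands: double the hand length.
character = {"forearm": 0.25, "hand": 0.2}

print(retarget(frame, performer, character))
```

This naive per-bone scaling does not resolve self-intersections by itself, which is why careful performance and manual cleanup are still needed.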


==Applications==
[[File:Motion Capture Performers.png|thumb|right|250px|Motion capture performers from Buckinghamshire New University]]


[[Video games]] often use motion capture to animate athletes, [[martial artists]], and other in-game characters.<ref>Jon Radoff, Anatomy of an MMORPG, {{cite web |url=http://radoff.com/blog/2008/08/22/anatomy-of-an-mmorpg/ |title=Archived copy |access-date=2009-11-30 |url-status=dead |archive-url=https://web.archive.org/web/20091213053756/http://radoff.com/blog/2008/08/22/anatomy-of-an-mmorpg/ |archive-date=2009-12-13 }}</ref><ref name="GPro82">{{cite journal|title=Hooray for Hollywood! Acclaim Studios|journal=[[GamePro]]|issue=82|publisher=[[International Data Group|IDG]]|date=July 1995|pages=28–29}}</ref> As early as 1988, an early form of motion capture was used to animate the [[2D computer graphics|2D]] [[player characters]] of [[Martech]]'s video game ''[[Vixen (video game)|Vixen]]'' (performed by model [[Corinne Russell]])<ref>{{cite magazine|magazine=[[Retro Gamer]]|title=Martech Games - The Personality People|page=51|issue=133|first=Graeme|last=Mason|url=https://issuu.com/michelfranca/docs/retro_gamer____133}}</ref> and [[Magical Company]]'s 2D arcade [[fighting game]] ''Last Apostle Puppet Show'' (to animate digitized [[Sprite (computer graphics)|sprites]]).<ref>{{cite web |title=Pre-Street Fighter II Fighting Games |url=http://www.hardcoregaming101.net/fighters/fighters8.htm |website=Hardcore Gaming 101 |page=8 |access-date=26 November 2021}}</ref> Motion capture was later notably used to animate the [[3D computer graphics|3D]] character models in the [[Sega Model 1|Sega Model]] [[arcade games]] ''[[Virtua Fighter (video game)|Virtua Fighter]]'' (1993)<ref name="CVG158">{{cite magazine |url=https://retrocdn.net/images/8/84/CVG_UK_158.pdf#page=12 |title=Sega Saturn exclusive! 
Virtua Fighter: fighting in the third dimension |magazine=[[Computer and Video Games]] |publisher=[[Future plc]] |issue=158 (January 1995) |date=15 December 1994 |pages=12–3, 15–6, 19}}</ref><ref name="Maximum">{{cite journal|title=Virtua Fighter|journal=Maximum: The Video Game Magazine|issue=1|publisher=[[Emap International Limited]]|date=October 1995|pages=142–3}}</ref> and ''[[Virtua Fighter 2]]'' (1994).<ref>{{cite web|last=Wawro|first=Alex|title=Yu Suzuki Recalls Using Military Tech to Make Virtua Fighter 2 |url=http://www.gamasutra.com/view/news/228512/Yu_Suzuki_recalls_using_military_tech_to_make_Virtua_Fighter_2.php|website=[[Gamasutra]]|access-date=18 August 2016|date=October 23, 2014}}</ref> In mid-1995, developer/publisher [[Acclaim Entertainment]] had its own in-house motion capture studio built into its headquarters.<ref name="GPro82"/> [[Namco]]'s 1995 arcade game ''[[Soul Edge]]'' used passive optical system markers for motion capture.<ref>{{cite web |url=http://www.motioncapturesociety.com/resources/industry-history |title=History of Motion Capture |publisher=Motioncapturesociety.com |access-date=2013-08-10 |archive-url=https://web.archive.org/web/20181023162411/http://www.motioncapturesociety.com/resources/industry-history |archive-date=2018-10-23 |url-status=dead }}</ref>


Movies use motion capture for CG effects, in some cases replacing traditional cel animation, and for completely [[computer-generated imagery|computer-generated]] creatures, such as [[Gollum]], [[The Mummy (1999 film)|The Mummy]], [[Peter Jackson's King Kong|King Kong]], [[Davy Jones (Pirates of the Caribbean)|Davy Jones]] from ''[[Pirates of the Caribbean (film series)|Pirates of the Caribbean]]'', the [[Pandoran biosphere#Na'vi|Na'vi]] from the film [[Avatar (2009 film)|''Avatar'']], and Clu from ''[[Tron: Legacy]]''. The Great Goblin, the three [[Troll (Middle-earth)#Troll types|Stone-trolls]], many of the orcs and goblins in the 2012 film ''[[The Hobbit: An Unexpected Journey]]'', and [[Smaug]] were created using motion capture.
The film ''[[Batman Forever]]'' (1995) used some motion capture for certain special effects. [[Warner Bros]] had acquired motion capture technology from [[arcade video game]] company Acclaim Entertainment for use in the film's production.<ref>{{cite magazine |title=Coin-Op News: Acclaim technology tapped for "Batman" movie |magazine=[[Play Meter]] |date=October 1994 |volume=20 |issue=11 |page=22 |url=https://archive.org/details/play-meter-volume-20-number-11-october-1994/Play%20Meter%20-%20Volume%2020%2C%20Number%2011%20-%20October%201994/page/22}}</ref> Acclaim's 1995 [[Batman Forever (video game)|video game of the same name]] also used the same motion capture technology to animate the digitized [[Sprite (computer graphics)|sprite]] graphics.<ref>{{cite magazine |title=Acclaim Stakes its Claim |magazine=RePlay |date=January 1995 |volume=20 |issue=4 |page=71 |url=https://archive.org/details/re-play-volume-20-issue-no.-4-january-1995/RePlay%20-%20Volume%2020%2C%20Issue%20No.%204%20-%20January%201995/page/n68}}</ref>


''[[Star Wars: Episode I – The Phantom Menace]]'' (1999) was the first feature-length film to include a main character created using motion capture (that character being [[Jar Jar Binks]], played by [[Ahmed Best]]), and [[India]]n-[[United States|American]] film ''[[Sinbad: Beyond the Veil of Mists]]'' (2000) was the first feature-length film made primarily with motion capture, although many character animators also worked on the film, which had a very limited release. 2001's ''[[Final Fantasy: The Spirits Within]]'' was the first widely released movie to be made primarily with motion capture technology.  Despite its poor box-office intake, supporters of motion capture technology took notice. ''[[Total Recall (1990 film)|Total Recall]]'' had already used the technique, in the scene of the x-ray scanner and the skeletons.
In Marvel's ''[[The Avengers (2012 film)|The Avengers]]'', Mark Ruffalo used motion capture so he could play his character [[Bruce Banner (Marvel Cinematic Universe)|the Hulk]], rather than have him be only CGI as in previous films, making Ruffalo the first actor to play both the human and the Hulk versions of Bruce Banner.


[[FaceRig]] software uses facial recognition technology from ULSee Inc. to map a player's facial expressions, and body-tracking technology from [[Perception Neuron]] to map body movement, onto the motion of a 3D or 2D character onscreen.<ref>{{cite web|url=http://www.polygon.com/2014/6/30/5858610/this-facial-recognition-software-lets-you-be-octodad|title=This facial recognition software lets you be Octodad|first=Alexa Ray|last=Corriea|date=30 June 2014|access-date=4 January 2017|via=www.polygon.com}}</ref><ref>{{cite web|url=http://kotaku.com/turn-your-human-face-into-a-video-game-character-1490049650|title=Turn Your Human Face Into A Video Game Character|first=Luke|last=Plunkett|work=kotaku.com|access-date=4 January 2017}}</ref>


During the 2016 [[Game Developers Conference]] in San Francisco, [[Epic Games]] demonstrated full-body motion capture live in Unreal Engine. The whole scene, from the upcoming game ''[[Hellblade: Senua's Sacrifice|Hellblade]]'' about a woman warrior named Senua, was rendered in real time. The keynote<ref>{{cite web|url=https://www.fxguide.com/featured/put-your-digital-game-face-on/|title=Put your (digital) game face on|date=24 April 2016|work=fxguide.com|access-date=4 January 2017}}</ref> was a collaboration between [[Unreal Engine]], [[Ninja Theory]], [[3Lateral]], [[Cubic Motion]], [[IKinema]] and [[Xsens]].
The Indian film ''[[Adipurush]]'', based on the [[Ramayana]], tells the story of Lord Ram. It is reported to use high-end, real-time technology, such as the Xsens motion capture and facial capture used by Hollywood, to bring the world of Adipurush to life.


==Methods and systems==
An object with markers attached at known positions is used to calibrate the cameras and obtain their positions, and the lens distortion of each camera is measured. If two calibrated cameras see a marker, a three-dimensional fix can be obtained. Typically a system will consist of around 2 to 48 cameras; systems with over three hundred cameras exist to try to reduce marker swap. Extra cameras are required for full coverage around the capture subject and for multiple subjects.
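The "three-dimensional fix" from two calibrated cameras can be sketched as a ray-intersection problem: each camera's 2D marker detection, combined with its calibrated position, defines a ray from the camera center, and the marker is estimated as the point closest to both rays (the midpoint method). A minimal Python sketch, with illustrative camera positions and directions:

```python
# Minimal sketch of triangulating one marker seen by two calibrated
# cameras: each camera contributes a ray (optical center plus a
# direction through the detected marker), and the marker estimate is
# the midpoint of the closest points on the two rays.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate(c1, d1, c2, d2):
    """Midpoint of closest approach of rays p1(s)=c1+s*d1, p2(t)=c2+t*d2."""
    w0 = [a - b for a, b in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # zero only if the rays are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = [ci + s * di for ci, di in zip(c1, d1)]
    p2 = [ci + t * di for ci, di in zip(c2, d2)]
    return [(u + v) / 2 for u, v in zip(p1, p2)]

# A marker at (1, 2, 3) seen from two camera centers; the directions
# point at the marker, as they would after calibration and undistortion.
print(triangulate((0, 0, 0), (1, 2, 3), (10, 0, 0), (-9, 2, 3)))
# → [1.0, 2.0, 3.0]
```

With noisy detections the two rays no longer meet, which is why the midpoint (or a least-squares fit over more than two cameras) is used rather than an exact intersection.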


Vendors provide constraint software to reduce the problem of marker swapping, since all passive markers appear identical. Unlike active marker systems and magnetic systems, passive systems do not require the user to wear wires or electronic equipment.<ref>{{cite journal|title=Motion Capture: Optical Systems|journal=[[Next Generation (magazine)|Next Generation]]|issue=10|publisher=[[Imagine Media]]|date=October 1995|page=53}}</ref> Instead, hundreds of rubber balls are attached with reflective tape, which needs to be replaced periodically. The markers are usually attached directly to the skin (as in biomechanics), or they are [[velcro]]ed to a performer wearing a full-body spandex/lycra [[Mo-cap suit|suit designed specifically for motion capture]]. This type of system can capture large numbers of markers at frame rates usually around 120 to 160 fps, although by lowering the resolution and tracking a smaller region of interest they can track as high as 10,000 fps.
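Because passive markers are indistinguishable, carrying labels from one frame to the next is a correspondence problem. A minimal greedy nearest-neighbour sketch in Python (with illustrative marker names and coordinates) shows the basic idea, and also why markers that pass close to each other can "swap" without the constraint software mentioned above:

```python
import math

# Minimal sketch of frame-to-frame labeling for passive markers, which
# all look identical: each labeled position from the previous frame
# greedily claims its nearest detection in the current frame. When two
# markers cross paths, this can assign the wrong detections — the
# "marker swap" that vendors' constraint software tries to prevent.

def label_markers(prev_labeled, detections):
    """prev_labeled: {name: (x, y, z)}; detections: list of (x, y, z)."""
    remaining = list(detections)
    labeled = {}
    for name, pos in prev_labeled.items():
        nearest = min(remaining, key=lambda p: math.dist(p, pos))
        labeled[name] = nearest
        remaining.remove(nearest)   # each detection is claimed once
    return labeled

prev = {"left_wrist": (0.0, 1.0, 0.0), "right_wrist": (0.5, 1.0, 0.0)}
frame = [(0.52, 1.01, 0.0), (0.03, 0.99, 0.0)]  # unordered detections
print(label_markers(prev, frame))
```

Real systems add skeletal constraints (fixed inter-marker distances) and velocity prediction on top of proximity to keep labels stable.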


===Active marker===


====Traditional systems====
Traditionally, markerless optical motion tracking has been used to track various objects, including airplanes, launch vehicles, missiles and satellites. Many such optical motion tracking applications occur outdoors, requiring different lens and camera configurations. High-resolution images of the tracked target can also provide more information than motion data alone. Imagery obtained from NASA's long-range tracking system during the Space Shuttle Challenger's fatal launch provided crucial evidence about the cause of the accident. Optical tracking systems are also used to identify known spacecraft and space debris, although they have a disadvantage compared to radar in that the objects must reflect or emit sufficient light.<ref>{{Cite journal| doi = 10.1007/BF00216781| title = Optical tracking of artificial satellites| year = 1963| last1 = Veis | first1 = G.| journal = Space Science Reviews| volume = 2| issue = 2| pages = 250–296| bibcode=1963SSRv....2..250V| s2cid = 121533715}}</ref>


An optical tracking system typically consists of three subsystems: the optical imaging system, the mechanical tracking platform and the tracking computer.


The software that runs such systems is also customized for the corresponding hardware components. One example of such software is OpticTracker, which controls computerized telescopes to track moving objects at great distances, such as planes and satellites. Another option is the software SimiShape, which can also be used in hybrid mode in combination with markers.
====RGB-D cameras====
RGB-D cameras such as the [[Kinect]] capture both color and depth images. By fusing the two images, colored 3D [[voxel]]s can be reconstructed, allowing motion capture of 3D human motion and the human surface in real time.
Because a single-view camera is used, the captured motions are usually noisy. Machine learning techniques have been proposed to automatically reconstruct such noisy motions into higher-quality ones, using methods such as [[lazy learning]]<ref>{{cite journal |last1=Shum |first1=Hubert P. H. |last2=Ho |first2=Edmond S. L. |last3=Jiang |first3=Yang |last4=Takagi |first4=Shu |title=Real-Time Posture Reconstruction for Microsoft Kinect |journal=IEEE Transactions on Cybernetics |date=2013 |volume=43 |issue=5 |pages=1357–1369 |doi=10.1109/TCYB.2013.2275945|pmid=23981562 |s2cid=14124193 }}</ref> and [[Gaussian]] models.<ref>{{cite journal |last1=Liu |first1=Zhiguang |last2=Zhou |first2=Liuyang |last3=Leung |first3=Howard |last4=Shum |first4=Hubert P. H. |title=Kinect Posture Reconstruction based on a Local Mixture of Gaussian Process Models |journal=IEEE Transactions on Visualization and Computer Graphics |date=2016 |volume=22 |issue=11 |pages=2437–2450 |doi=10.1109/TVCG.2015.2510000|pmid=26701789 |s2cid=216076607 |url=http://nrl.northumbria.ac.uk/id/eprint/25559/1/07360215.pdf }}</ref> Such methods generate motion accurate enough for serious applications like ergonomic assessment.<ref>{{cite journal |last1=Plantard |first1=Pierre |last2=Shum |first2=Hubert P. H. |last3=Pierres |first3=Anne-Sophie Le |last4=Multon |first4=Franck |title=Validation of an Ergonomic Assessment Method using Kinect Data in Real Workplace Conditions |journal=Applied Ergonomics |date=2017 |volume=65 |pages=562–569 |doi=10.1016/j.apergo.2016.10.015|pmid=27823772 }}</ref>
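The depth side of RGB-D fusion amounts to back-projecting each depth pixel into a 3D point using the camera intrinsics (focal lengths and principal point). A minimal sketch, with illustrative intrinsic values rather than actual Kinect calibration data:

```python
# Hypothetical sketch: back-project a depth pixel (u, v) into camera space.
# fx, fy are focal lengths in pixels; (cx, cy) is the principal point.
# The values used below are illustrative, not real calibration data.

def depth_pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) with depth in metres."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel 100 columns right of the principal point, 2 m from the camera:
point = depth_pixel_to_point(419.5, 239.5, 2.0, 525.0, 525.0, 319.5, 239.5)
```

Doing this for every pixel, then colouring each 3D point from the registered RGB image, yields the coloured voxel/point representation the text describes.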


==Non-optical systems==


===Inertial systems===
Inertial motion capture<ref>{{Cite web|url=http://www.xsens.com/images/stories/PDF/MVN_white_paper.pdf|title=Full 6DOF Human Motion Tracking Using Miniature Inertial Sensors}}</ref> technology is based on miniature inertial sensors, biomechanical models and [[sensor fusion]] algorithms.<ref>{{Cite web|url=https://www.xsens.com/fascination-motion-capture/|title=A history of motion capture|website=Xsens 3D motion tracking|language=en-US|access-date=2019-01-22}}</ref> The motion data of the inertial sensors ([[inertial guidance system]]) is often transmitted wirelessly to a computer, where the motion is recorded or viewed. Most inertial systems use inertial measurement units (IMUs) containing a combination of gyroscope, magnetometer and accelerometer to measure rotational rates. These rotations are translated to a skeleton in the software. Much like optical markers, the more IMU sensors used, the more natural the resulting data. No external cameras, emitters or markers are needed for relative motions, although they are required to give the absolute position of the user if desired. Inertial motion capture systems capture the full six-degrees-of-freedom body motion of a human in real time, and can give limited direction information if they include a magnetic bearing sensor, although such sensors have much lower resolution and are susceptible to electromagnetic noise. Benefits of inertial systems include capturing in a variety of environments, including tight spaces, no solving, portability, and large capture areas. Disadvantages include lower positional accuracy and positional drift, which can compound over time. These systems are similar to Wii controllers but are more sensitive and have greater resolution and update rates. They can accurately measure the direction to the ground to within a degree. The popularity of inertial systems is rising among game developers,<ref name="Xsens MVN Animate - Products"/> mainly because of the quick and easy setup, which results in a fast pipeline.
A range of suits is now available from various manufacturers, with base prices ranging from US$1,000 to US$80,000.
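The sensor fusion mentioned above can be illustrated for a single joint angle with a complementary filter, which blends the gyroscope's integrated rate (accurate short-term, but drifting) with the accelerometer's gravity-derived angle (noisy, but drift-free). This is a generic textbook sketch, not any vendor's actual algorithm, and the sample values are invented:

```python
# Hypothetical sketch: complementary-filter fusion of gyro and accelerometer
# for one joint angle. alpha weights the gyro path; 1 - alpha the accel path.

def complementary_step(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One filter update over a time step dt (seconds); angles in degrees."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# A stationary sensor actually tilted 10 degrees: the gyro reads ~0, so the
# estimate converges to the accelerometer's angle instead of drifting away.
angle = 0.0
for _ in range(1000):
    angle = complementary_step(angle, gyro_rate=0.0, accel_angle=10.0, dt=0.01)
```

Full inertial suits use richer fusion (e.g. Kalman-style filters over quaternions, plus the magnetometer for heading), but the drift-correction principle is the same.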


===Mechanical motion===


=== Stretch sensors ===
Stretch sensors are flexible parallel-plate capacitors that measure stretch, bend, shear or pressure, and are typically produced from silicone. When the sensor stretches or squeezes, its capacitance value changes. This data can be transmitted via Bluetooth or direct input and used to detect minute changes in body motion. Stretch sensors are unaffected by magnetic interference and are free from occlusion. The stretchable nature of the sensors also means they do not suffer from the positional drift that is common with inertial systems. On the other hand, due to the material properties of their substrates and conducting materials, stretch sensors suffer from a relatively poor [[signal-to-noise ratio]], requiring [[Filter (software)|filtering]] or [[machine learning]] to make them usable for motion capture. These solutions result in higher [[Latency (engineering)|latency]] compared to alternative sensors.
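The capacitance-to-strain relationship can be sketched for an idealised sensor: assuming an incompressible dielectric under uniaxial stretch by ratio λ, the plate area grows as √λ while the dielectric thins as 1/√λ, so C = εA/d scales linearly with λ. This is a simplified textbook model of such sensors, not a description of any specific product:

```python
# Hypothetical sketch: for an ideal incompressible parallel-plate stretch
# sensor, capacitance scales linearly with stretch ratio (C = C0 * lambda),
# so strain can be read directly from the capacitance ratio.

def strain_from_capacitance(c, c0):
    """Engineering strain from measured capacitance c and rest capacitance c0."""
    stretch_ratio = c / c0      # lambda = C / C0 under the ideal model
    return stretch_ratio - 1.0

# A reading of 1.2 nF against a 1.0 nF rest value implies 20% stretch:
strain = strain_from_capacitance(1.2e-9, 1.0e-9)
```

Real sensors deviate from this linear model, which is one reason the raw signal needs the filtering or learned correction described above.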


==Related techniques==