{"id":337,"date":"2020-08-19T19:01:06","date_gmt":"2020-08-19T19:01:06","guid":{"rendered":"https:\/\/realitylab.uw.edu\/staging\/?p=337"},"modified":"2020-08-19T19:07:49","modified_gmt":"2020-08-19T19:07:49","slug":"our-researchers-at-eccv20","status":"publish","type":"post","link":"https:\/\/realitylab.uw.edu\/staging\/?p=337","title":{"rendered":"Our Researchers at ECCV&#8217;20"},"content":{"rendered":"\n<div class=\"wp-block-image\"><figure class=\"alignright is-resized\"><a href=\"https:\/\/eccv2020.eu\/\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/realitylab.uw.edu\/staging\/wp-content\/uploads\/2020\/07\/logo_eccv20-1-1005x1024.png\" alt=\"logo of ECCV'20 online\" class=\"wp-image-405\" width=\"191\" height=\"194\" srcset=\"https:\/\/realitylab.uw.edu\/staging\/wp-content\/uploads\/2020\/07\/logo_eccv20-1-1005x1024.png 1005w, https:\/\/realitylab.uw.edu\/staging\/wp-content\/uploads\/2020\/07\/logo_eccv20-1-294x300.png 294w, https:\/\/realitylab.uw.edu\/staging\/wp-content\/uploads\/2020\/07\/logo_eccv20-1-768x783.png 768w, https:\/\/realitylab.uw.edu\/staging\/wp-content\/uploads\/2020\/07\/logo_eccv20-1.png 1155w\" sizes=\"auto, (max-width: 191px) 100vw, 191px\" \/><\/a><\/figure><\/div>\n\n\n\n<p><a href=\"https:\/\/eccv2020.eu\/\">The 2020 European Conference on Computer Vision<\/a> (ECCV&#8217;20) has accepted papers from several of our UW Reality Lab researchers: <em>Reconstructing NBA Players<\/em>; <em>People as Scene Probes<\/em>; and <em>Lifespan Age Transformation Synthesis<\/em> (all listed below). These papers will be presented at the August 23-28 conference, which is being held entirely online due to the COVID-19 pandemic. 
ECCV is &#8220;<em>the top European conference in the image analysis area.<\/em>&#8221; <\/p>\n\n\n\n<p>Links to the project sites are below, along with abstracts, videos, and code, where available.<\/p>\n\n\n\n<p>Congratulations to all these researchers!<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<p style=\"color:#064dc6\" class=\"has-text-color has-background has-medium-font-size has-light-gray-background-color\"><a href=\"http:\/\/grail.cs.washington.edu\/projects\/lifespan_age_transformation_synthesis\/\"><strong>Lifespan Age Transformation Synthesis<\/strong><\/a> <br><a href=\"https:\/\/homes.cs.washington.edu\/~royorel\/\">Roy Or-El<\/a>, <a href=\"https:\/\/homes.cs.washington.edu\/~soumya91\/\">Soumyadip Sengupta<\/a>, <a href=\"https:\/\/www.ohadf.com\/\">Ohad Fried<\/a>, <a href=\"https:\/\/research.adobe.com\/person\/eli-shechtman\/\">Eli Shechtman<\/a>, <a href=\"https:\/\/sites.google.com\/view\/irakemelmacher\/\/\">Ira Kemelmacher-Shlizerman<\/a><\/p>\n\n\n\n<p style=\"text-align:left\"><a href=\"https:\/\/github.com\/royorel\/Lifespan_Age_Transformation_Synthesis\"><strong>CODE<\/strong><\/a><strong>    &#8212;    <\/strong><a href=\"https:\/\/colab.research.google.com\/github\/royorel\/Lifespan_Age_Transformation_Synthesis\/blob\/master\/LATS_demo.ipynb\"><strong>COLAB DEMO<\/strong><\/a><strong>    &#8212;    <\/strong><a href=\"https:\/\/github.com\/royorel\/FFHQ-Aging-Dataset\"><strong>DATA<\/strong><\/a><\/p>\n\n\n\n<div class=\"wp-block-media-text\" style=\"grid-template-columns:34% auto\"><figure class=\"wp-block-media-text__media\"><video controls 
src=\"https:\/\/realitylab.uw.edu\/staging\/wp-content\/uploads\/2020\/07\/video2.webm\"><\/video><\/figure><div class=\"wp-block-media-text__content\">\n<p><strong><em>Abstract:<\/em> <\/strong>We address the problem of single photo age progression and regression&#8212;the prediction of how a person might look in the future, or how they looked in the past. Most existing aging methods are limited to changing the texture, overlooking transformations in head shape that occur during the human aging and growth process. This limits the applicability of previous methods to aging of adults to slightly older adults, and application of those methods to photos of children does not produce quality results. We propose a novel multi-domain image-to-image generative adversarial network architecture, whose learned latent space models a continuous bi-directional aging process. The network is trained on the FFHQ dataset, which we labeled for ages, gender, and semantic segmentation. Fixed age classes are used as anchors to approximate continuous age transformation. Our framework can predict a full head portrait for ages 0&#8211;70 from a single photo, modifying both texture and shape of the head. 
We demonstrate results on a wide variety of photos and datasets, and show significant improvement over the state of the art.<\/p>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-columns has-3-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-video\"><video height=\"256\" style=\"aspect-ratio: 256 \/ 256;\" width=\"256\" controls src=\"https:\/\/realitylab.uw.edu\/staging\/wp-content\/uploads\/2020\/07\/video1-10.webm\"><\/video><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-video\"><video height=\"256\" style=\"aspect-ratio: 256 \/ 256;\" width=\"256\" controls src=\"https:\/\/realitylab.uw.edu\/staging\/wp-content\/uploads\/2020\/07\/video4-5.webm\"><\/video><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-video\"><video height=\"256\" style=\"aspect-ratio: 256 \/ 256;\" width=\"256\" controls src=\"https:\/\/realitylab.uw.edu\/staging\/wp-content\/uploads\/2020\/07\/video3-5.webm\"><\/video><\/figure>\n\n\n\n<p><\/p>\n<\/div>\n<\/div>\n\n\n\n<figure class=\"wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Lifespan Age Transformation Synthesis\" width=\"800\" height=\"450\" src=\"https:\/\/www.youtube.com\/embed\/9fulnt2_q_Y?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<p style=\"color:#064dc6\" class=\"has-text-color has-background has-medium-font-size has-light-gray-background-color\"><strong><a 
href=\"https:\/\/grail.cs.washington.edu\/projects\/shadow\/\">People as Scene Probes<\/a><\/strong><br><a href=\"https:\/\/homes.cs.washington.edu\/~yifan1\/\">Yifan Wang<\/a>, <a href=\"https:\/\/homes.cs.washington.edu\/~seitz\/\">Steve Seitz<\/a>, <a href=\"https:\/\/homes.cs.washington.edu\/~curless\/\">Brian Curless<\/a><\/p>\n\n\n\n<div class=\"wp-block-media-text alignwide\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"928\" height=\"467\" src=\"https:\/\/realitylab.uw.edu\/staging\/wp-content\/uploads\/2020\/07\/PeopleSceneProbes1.jpg\" alt=\"\" class=\"wp-image-354\" srcset=\"https:\/\/realitylab.uw.edu\/staging\/wp-content\/uploads\/2020\/07\/PeopleSceneProbes1.jpg 928w, https:\/\/realitylab.uw.edu\/staging\/wp-content\/uploads\/2020\/07\/PeopleSceneProbes1-300x151.jpg 300w, https:\/\/realitylab.uw.edu\/staging\/wp-content\/uploads\/2020\/07\/PeopleSceneProbes1-768x386.jpg 768w\" sizes=\"auto, (max-width: 928px) 100vw, 928px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p><strong><em>Abstract:<\/em> <\/strong>By analyzing the motion of people and other objects in a scene, we demonstrate how to infer depth, occlusion, lighting, and shadow information from video taken from a single camera viewpoint. This information is then used to composite new objects into the same scene with a high degree of automation and realism. In particular, when a user places a new object (2D cut-out) in the image, it is automatically rescaled, relit, and occluded properly, and it casts realistic shadows that fall in the correct direction relative to the sun and conform properly to scene geometry. 
We demonstrate results (best viewed in supplementary video) on a range of scenes and compare to alternative methods for depth estimation and shadow compositing.<\/p>\n<\/div><\/div>\n\n\n\n<figure class=\"wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"People as Scene Probes\" width=\"800\" height=\"450\" src=\"https:\/\/www.youtube.com\/embed\/bYJ_WdnsEbI?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<p style=\"color:#064dc6\" class=\"has-text-color has-background has-medium-font-size has-light-gray-background-color\"><strong><a href=\"http:\/\/grail.cs.washington.edu\/projects\/nba_players\/\">Reconstructing NBA Players<\/a><\/strong><br><a href=\"https:\/\/homes.cs.washington.edu\/~lyzhu\/\">Luyang Zhu<\/a>, <a href=\"http:\/\/www.krematas.com\/\">Konstantinos Rematas<\/a>, <a href=\"https:\/\/homes.cs.washington.edu\/~curless\/\">Brian Curless<\/a>, <a href=\"https:\/\/homes.cs.washington.edu\/~seitz\/\">Steve Seitz<\/a>, <a href=\"https:\/\/sites.google.com\/view\/irakemelmacher\/\/\">Ira Kemelmacher-Shlizerman<\/a><\/p>\n\n\n\n<figure class=\"wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Reconstructing NBA Players\" width=\"800\" height=\"450\" src=\"https:\/\/www.youtube.com\/embed\/jIn5sO4DAF0?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p><strong><em>Abstract:<\/em><\/strong> Great progress has been made in 3D body pose and shape estimation from a 
single photo. Yet, state-of-the-art results still suffer from errors due to challenging body poses, modeling clothing, and self-occlusions. The domain of basketball games is particularly difficult, as it exhibits all of these challenges. In this paper, we introduce a new approach for reconstruction of basketball players that outperforms the state of the art. Key to our approach is a new method for creating poseable, skinned models of NBA players, and a large database of meshes (derived from the NBA2K19 video game), which we are releasing to the research community. Based on these models, we introduce a new method that takes as input a single photo of a clothed player in any basketball pose and outputs a high-resolution mesh and 3D pose for that player. We demonstrate substantial improvement over state-of-the-art, single-image methods for body shape reconstruction.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The 2020 European Conference on Computer Vision (ECCV&#8217;20) has accepted papers from several of our UW Reality Lab researchers: Reconstructing NBA Players; People as Scene Probes; and Lifespan Age Transformation&#8230; <a class=\"read-more-link\" href=\"https:\/\/realitylab.uw.edu\/staging\/?p=337\">Read more 
&raquo;<\/a><\/p>\n","protected":false},"author":8,"featured_media":445,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":"","_links_to":"","_links_to_target":""},"categories":[1],"tags":[41,43,45,40,46,39,42,28,47,48],"class_list":["post-337","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized","tag-curless","tag-eccv","tag-eccv-2020","tag-kemelmacher","tag-or-el","tag-rematas","tag-seitz","tag-sengupta","tag-wang","tag-zhu","gt-excerpt","gt-excerpt-thumbnail-square"],"_links":{"self":[{"href":"https:\/\/realitylab.uw.edu\/staging\/index.php?rest_route=\/wp\/v2\/posts\/337","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/realitylab.uw.edu\/staging\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/realitylab.uw.edu\/staging\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/realitylab.uw.edu\/staging\/index.php?rest_route=\/wp\/v2\/users\/8"}],"replies":[{"embeddable":true,"href":"https:\/\/realitylab.uw.edu\/staging\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=337"}],"version-history":[{"count":58,"href":"https:\/\/realitylab.uw.edu\/staging\/index.php?rest_route=\/wp\/v2\/posts\/337\/revisions"}],"predecessor-version":[{"id":442,"href":"https:\/\/realitylab.uw.edu\/staging\/index.php?rest_route=\/wp\/v2\/posts\/337\/revisions\/442"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/realitylab.uw.edu\/staging\/index.php?rest_route=\/wp\/v2\/media\/445"}],"wp:attachment":[{"href":"https:\/\/realitylab.uw.edu\/staging\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=337"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/realitylab.uw.edu\/staging\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=337"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/realitylab.uw.edu\/staging\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=337"
}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}