The Reality Lab Lectures bring important researchers and practitioners from a variety of disciplines to the University of Washington to present their work in Augmented, Virtual, and Mixed Reality and to discuss the past and future of the field.
These lectures are free and ordinarily open to the public, but due to the health concerns around COVID-19, the lectures from 2020 onward have been held virtually and later posted to our YouTube channel. Subscribe to be notified when new talks are uploaded!
Body mapping is an established arts and health research method that invites participants to visually translate their embodied experience of pain and emotions by drawing onto an outline of the body. Hatsumi is a creative research startup that has translated this process into a virtual reality experience. By enabling participants to illustrate in an immersive environment with 3D drawing tools, this approach offers new opportunities to drastically enhance and expand its applications across healthcare, research design, and knowledge translation.
This talk will explore the theories underpinning body mapping and virtual art therapy, and demonstrate examples of the experience in use. We will also explore the potential of digitising body mapping as a new research tool: one that gathers quantitative and qualitative data and can serve as a tool for diagnosis and new discoveries related to the diversity of human sensory realities.
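To make the research-tool idea concrete, below is a minimal sketch of what a structured record for a digitised body map might look like: each 3D stroke carries both quantitative fields (position, intensity, body region) and a qualitative annotation. The field names and region taxonomy are illustrative assumptions, not Hatsumi's actual BodyMap data model.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Stroke:
    """One 3D brush stroke drawn onto the virtual body outline (hypothetical schema)."""
    points: List[Tuple[float, float, float]]  # (x, y, z) samples along the stroke
    color: str                                # hex color chosen by the participant
    body_region: str                          # e.g. "left_shoulder" (assumed taxonomy)
    intensity: float                          # 0.0-1.0 self-reported intensity
    label: str = ""                           # free-text annotation (qualitative data)

@dataclass
class BodyMapSession:
    participant_id: str
    strokes: List[Stroke] = field(default_factory=list)

    def region_summary(self) -> dict:
        """Aggregate stroke counts per body region (a simple quantitative view)."""
        counts: dict = {}
        for s in self.strokes:
            counts[s.body_region] = counts.get(s.body_region, 0) + 1
        return counts

# Example: two strokes recorded in one session
session = BodyMapSession("p01")
session.strokes.append(Stroke([(0.10, 1.40, 0.0)], "#ff0000", "left_shoulder", 0.8, "burning"))
session.strokes.append(Stroke([(0.10, 1.30, 0.0)], "#ff8800", "left_shoulder", 0.5))
print(session.region_summary())  # {'left_shoulder': 2}
```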
Bio: Sarah is a producer, consultant and founder of Hatsumi (which means "to see for the first time"). They develop work at the intersection of immersive technology, participatory art, and storytelling to improve physical and mental health. Over the last few years they have been creating BodyMap, a participatory tool that helps people communicate and understand the embodied experience of pain and emotion using 3D drawing and sound. She is the producer of Explore Deep, an award-winning, clinically validated, breath-controlled VR experience designed to reduce anxiety, developed in close collaboration with the Games for Emotional and Mental Health Lab at Radboud University. She has worked with a number of organisations across the immersive and healthcare space, including Immerse UK, Health Education England and the XR Safety Initiative Medical Council, and continues to create opportunities to bring together practitioners across academia, healthcare and the creative industries to create an equitable and just future. She is also an End of Life Doula in training, a non-medical support for people going through the end of life process.

There are many challenges to realizing this vision properly. The NYU Future Reality Lab and its collaborators are working on many of them. This talk will give an overview of many of the key areas of research, including how to guarantee universal accessibility, user privacy and rights management, low latency networking, design and construction of shared virtual worlds, correct rendering of spatial audio, biometric sensing, and a radical rethinking of user interface design.
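To give a flavor of one of the listed research areas, here is a minimal sketch of constant-power stereo panning, the simplest directional cue used in spatial audio rendering. Production systems instead rely on HRTFs, distance models, and head tracking; this toy model only illustrates the basic idea of direction-dependent gain.

```python
import math

def constant_power_pan(azimuth_deg: float) -> tuple:
    """Return (left_gain, right_gain) for a source at the given azimuth.

    Constant-power panning: -90 deg = fully left, +90 deg = fully right.
    The squared gains always sum to 1, keeping perceived loudness steady
    as the source moves.
    """
    clamped = max(-90.0, min(90.0, azimuth_deg))
    theta = (clamped + 90.0) / 180.0 * (math.pi / 2)  # map to [0, pi/2]
    return math.cos(theta), math.sin(theta)

left, right = constant_power_pan(30.0)  # source 30 deg to the listener's right
print(f"L={left:.3f} R={right:.3f}")    # L=0.500 R=0.866
```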
Bio: Ken Perlin, a professor in the Department of Computer Science at New York University, directs the Future Reality Lab and is a participating faculty member at NYU MAGNET. His research interests include future reality, computer graphics and animation, user interfaces, and education. He is chief scientist at Parallux, Tactonic Technologies and Autotoon, an advisor for High Fidelity, and a Fellow of the National Academy of Inventors. He received an Academy Award for Technical Achievement from the Academy of Motion Picture Arts and Sciences for his noise and turbulence procedural texturing techniques, which are widely used in feature films and television. His honors also include membership in the ACM/SIGGRAPH Academy, the 2020 New York Visual Effects Society Empire Award, the 2008 ACM/SIGGRAPH Computer Graphics Achievement Award, the TrapCode award for achievement in computer graphics research, the NYC Mayor's award for excellence in Science and Technology, the Sokol award for outstanding Science faculty at NYU, and a Presidential Young Investigator Award from the National Science Foundation.

Dr. Jayaram's professional achievements and philosophy have evolved through three distinct spheres of work: leadership of a global engineering team at Intel; entrepreneurship as a co-founder of three start-ups; and academic research, teaching, and mentorship as a professor at Washington State University. The three companies she co-founded were in the areas of VR media experiences for sports/concerts, CAD interoperability, and VR/CAD/AI for enterprise design. The company VOKE VR was acquired by Intel in 2016.
At Intel, Uma built teams and technologies for high-profile executions such as the first ever Live Olympics VR experience at the Winter Olympics in 2018 and experiences for world-class leagues and teams such as NFL, NBA, PGA, NCAA, and La Liga. The teams continued to evolve technologies for VR and Volumetric/Spatial Computing executions including media streaming & processing, cloud integrations, platform services, off-site NOC services, distribution over CDN, and SDKs for mobile apps and HMDs. Uma’s teams have a very distinctive culture that combines intellectual vitality, disciplined execution, and a strong employee-centric foundation.
Uma has an undergraduate degree in Mechanical Engineering from IIT Kharagpur. She earned her MS and PhD degrees from Virginia Tech, where she recently received a Distinguished Alumna award and currently sits on the advisory board.
Dr. S. (Jay) Jayaram has driven the digitization and personalization of sports for fans through immersive technologies. Through his influential work in virtual reality over the past 25 years, he has brought VR to a wide spectrum of domains - from live events in sports and concerts to VR for engineering applications and training. He has brought to market highly innovative solutions and products combining virtual reality, immersive environments, powerful user controllable media experiences, and social networks. The work done by his team in Virtual Assembly and Virtual Prototyping in the 90s continues to be widely referenced by groups around the world. Dr. Jay has co-founded several companies including VOKE (acquired by Intel in 2016), Integrated Engineering Solutions, and Translation Technologies. He was also a Professor at Washington State University and co-founded the WSU Virtual Reality Laboratory in 1994. Most recently he was Chief Technology Officer and Chief Product Officer at Intel Sports. He is currently the CEO of a startup, QuintAR, Inc.
Abstract: For Virtual Reality and Augmented Reality to become the primary devices for interacting with digital content, beyond the form factor we need to understand what we can do in VR that would be impossible with other technologies. That is, what does spatial computing bring to the table? For one, the spatialization of our senses: we can enhance audio or proprioception in completely new ways. We can grab and touch objects with new controllers like never before, even in the empty space between our hands. But how fast do we adapt to these new sensory experiences?
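As a concrete (and hypothetical) illustration of grabbing "in the empty space between our hands," the sketch below detects a pinch-grab from tracked fingertip positions. The threshold and joint representation are assumptions for illustration, not any specific hand-tracking SDK's API.

```python
import math

PINCH_THRESHOLD_M = 0.02  # assumed 2 cm fingertip distance counts as a "pinch"

def distance(a: tuple, b: tuple) -> float:
    """Euclidean distance between two 3D points, in meters."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def is_pinching(thumb_tip: tuple, index_tip: tuple) -> bool:
    """A grab gesture fires when thumb and index fingertips nearly touch."""
    return distance(thumb_tip, index_tip) < PINCH_THRESHOLD_M

# Example frame of (made-up) tracked fingertip positions in world space
thumb = (0.10, 1.20, -0.30)
index = (0.11, 1.21, -0.30)
print(is_pinching(thumb, index))  # True: fingertips ~1.4 cm apart
```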
Avatars are also unique to VR. They represent other humans but can also substitute for our own bodies, and since we perceive the world through our bodies, avatars also change our perception of our surroundings. In this presentation we will explore the uniqueness of VR, from perception to avatars, and how they can ultimately change our behavior and interactions with digital content.
Bio: Dr. Mar Gonzalez Franco is a computer scientist at Microsoft Research. Her expertise and research span from VR to avatars to haptics, and her interests include furthering the understanding of human perception and behavior. In her research she uses real-time computer graphics and tracking systems to create immersive experiences and study human responses. Prior to pivoting into industrial research she held several positions and completed her studies at leading academic institutions including the Massachusetts Institute of Technology, University College London, Tsinghua University and Universidad de Barcelona. Later she also took roles as a scientist in the startup world, joining Traity.com to build mathematical models of trust, and created and led an Immersive Technologies Lab for the leading aeronautic company Airbus. Over the years her work has been covered by tech and global media such as Fortune Magazine, TechCrunch, The Verge, ABC News, GeekWire, Inverse, Euronews, El Pais, and Vice. She was named one of 25 young tech talents by Business Insider, and won the MAS Technology award in 2019. She continues to serve as a field expert for governmental organizations worldwide, such as the US NSF, NSERC in Canada, and the EU, evaluating Future and Emerging Technologies projects.

Abstract: In this talk, I will describe early steps taken at FRL Pittsburgh in achieving photorealistic telepresence: realtime social interactions in AR/VR with avatars that look like you, move like you, and sound like you.
Telepresence is, perhaps, the only application that has the potential to bring billions of people into VR. It is the next step along the evolution from telegraphy to telephony to videoconferencing. Just like telephony and video-conferencing, the key attribute of success will be “authenticity”: users' trust that received signals (e.g., audio for the telephone and video/audio for VC) are truly those transmitted by their friends, colleagues, or family. The challenge arises from this seeming contradiction: how do we enable authentic interactions in artificial environments?
Our approach to this problem centers around codec avatars: the use of neural networks to address the computer vision (encoding) and computer graphics (decoding) problems in signal transmission and reception. The creation of codec avatars requires capture systems of unprecedented 3D sensing resolution, which I will also describe.
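To ground the encoding/decoding framing, here is a heavily simplified autoencoder sketch in PyTorch: the encoder compresses a captured face image into a compact latent code (the signal that would actually cross the network), and the decoder reconstructs the appearance on the receiving side. This is a toy illustration of the codec idea only, not FRL's actual architecture, which is trained on multi-view capture data and drives a full 3D avatar.

```python
import torch
import torch.nn as nn

class ToyCodecAvatar(nn.Module):
    """Toy encoder/decoder: image -> small latent code -> reconstructed image.

    In the codec-avatar framing, only the latent code is transmitted;
    the decoder on the receiving side re-renders the avatar from it.
    """
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(                  # "computer vision" side
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        self.decoder = nn.Sequential(                  # "computer graphics" side
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64 * 3), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        code = self.encoder(x)                         # compact signal to transmit
        return self.decoder(code).view(-1, 3, 64, 64)  # reconstruction at receiver

model = ToyCodecAvatar()
frame = torch.rand(1, 3, 64, 64)                       # stand-in for a captured face
reconstruction = model(frame)
print(reconstruction.shape)                            # torch.Size([1, 3, 64, 64])
```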
Bio: Yaser Sheikh is an Associate Professor at the Robotics Institute, Carnegie Mellon University. He also directs the Facebook Reality Lab in Pittsburgh, which is devoted to achieving photorealistic social interactions in AR and VR. His research broadly focuses on machine perception and rendering of social behavior, spanning sub-disciplines in computer vision, computer graphics, and machine learning. With colleagues and students, he has won the Honda Initiation Award (2010), Popular Science's "Best of What's New" Award, the best student paper award at CVPR (2018), best paper awards at WACV (2012), SAP (2012), SCA (2010), and ICCV THEMIS (2009), the best demo award at ECCV (2016), and the Hillman Fellowship for Excellence in Computer Science Research (2004). Yaser has served as a senior committee member at leading conferences in computer vision, computer graphics, and robotics, including SIGGRAPH (2013, 2014), CVPR (2014, 2015, 2018), ICRA (2014, 2016), and ICCP (2011), and has served as an Associate Editor of CVIU. His research has been featured by various media outlets including The New York Times, BBC, MSNBC, and Popular Science, and in technology media such as WIRED, The Verge, and New Scientist.

In this special lecture, we invited leaders of Seattle-area VR/AR startups to share their experiences in the form of a panel discussion. Topics of discussion included each company's key product and vision, challenges, strategies, fundraising, and other experiential topics.
There are dangers and many unknowns in using VR, but it also can help us hone our performance, recover from trauma, improve our learning and communication abilities, and enhance our empathic and imaginative capacities. Like any new technology, its most incredible uses might be waiting just around the corner.
Bio: Jeremy Bailenson is founding director of Stanford University's Virtual Human Interaction Lab, Thomas More Storke Professor in the Department of Communication, Professor (by courtesy) of Education, Professor (by courtesy) in the Program in Symbolic Systems, a Senior Fellow at the Woods Institute for the Environment, and a Faculty Leader at Stanford's Center for Longevity. He earned a B.A. cum laude from the University of Michigan in 1994 and a Ph.D. in cognitive psychology from Northwestern University in 1999. He spent four years at the University of California, Santa Barbara as a Post-Doctoral Fellow and then an Assistant Research Professor.
Bailenson studies the psychology of Virtual Reality (VR), in particular how virtual experiences lead to changes in perceptions of self and others. His lab builds and studies systems that allow people to meet in virtual space, and explores the changes in the nature of social interaction. His most recent research focuses on how VR can transform education, environmental conservation, empathy, and health. He is the recipient of the Dean’s Award for Distinguished Teaching at Stanford.
He has published more than 100 academic papers, in interdisciplinary journals such as Science and PLoS One, as well as in domain-specific journals in the fields of communication, computer science, education, environmental science, law, marketing, medicine, political science, and psychology. His work has been continuously funded by the National Science Foundation for 15 years.
Bailenson consults pro bono on VR policy for government agencies including the State Department, the US Senate, Congress, the California Supreme Court, the Federal Communications Commission, the U.S. Army, Navy, and Air Force, the Department of Defense, the Department of Energy, the National Research Council, and the National Institutes of Health.
His first book, Infinite Reality, co-authored with Jim Blascovich, was quoted by the U.S. Supreme Court in outlining the effects of immersive media. His new book, "Experience on Demand", was reviewed by The New York Times, The Wall Street Journal, The Washington Post, Nature, and The Times of London, and was an Amazon best-seller.
Abstract: What’s the social good issue you are passionate about? To change people’s hearts and minds, what is the story that needs to be experienced immersively? What if you could connect with a team of like-minded members with the cross-functional skills to realize your idea in VR? What if you could get started NOW? What would you build?
In this talk, we will discuss our experiences with the VR for the Social Good Initiative at the University of Florida (www.vrforthesocialgood.com), and provide a plan for those interested to implement the program at your institution.
The VR for the Social Good Initiative was started in 2015 by journalism professor Sriram Kalyanaraman and computer science professor Benjamin Lok. Their goal was to connect people seeking solutions to social good issues (e.g. researchers, startups, non-profits) with people who could solve those problems by creating virtual reality stories (e.g. students). Connecting seekers and solvers enables many new ideas for applying VR to social good issues to be generated, tested, and evaluated.
The VR for the Social Good Initiative started offering classes in the Summer of 2017. The class has no prerequisites, requires no programming, and is open to students of all majors, from freshmen to graduate students. Students in the class come from a wide set of backgrounds, including nursing, psychology, journalism, engineering, graphic design, education, building construction, and the sciences, among others.
This approach scales beyond traditional "VR classes" because the VR for the Social Good Initiative leverages the concepts of lean software development, the Agile development methodology, and the Scrum framework. In the first year of the class, over 175 students created 36 projects; next year, over 300 students across multiple colleges will participate. Students from the course have gone on to join research groups, been involved in publications, generated initial data for funding, and participated in prestigious competitions such as Oculus's Top 100 Launchpad bootcamp.
We are expanding the class to hundreds, and potentially a thousand, students a year. This is an opportunity to train thousands of people to create immersive stories that address social good problems, which would be transformative for both the VR field and society in general. Everything here can be replicated at your school and in your community; all materials for the class are available online at www.vrforthesocialgood.com. Empowering those who know social good issues best to be creators of immersive stories would transform how society addresses our toughest problems. We are enabling all to become creators of solutions, not just consumers.
Professor Lok received a UF Term Professorship (2017-2020), the Herbert Wertheim College of Engineering Faculty Mentoring Award (2016), an NSF CAREER Award (2007-2012), and the UF ACM CISE Teacher of the Year Award in 2005-2006. He and his students in the Virtual Experiences Research Group have received Best Paper Awards at ACM I3D (Top 3, 2003) and IEEE VR (2008). He currently serves as the chair of the Steering Committee of the IEEE Virtual Reality conference. Professor Lok is an associate editor of Computers & Graphics and ACM Computing Surveys.