The Reality Lab Lectures bring important researchers and practitioners from a variety of disciplines to the University of Washington to present their work in Augmented, Virtual, and Mixed Reality and to discuss the past and future of the field.

These lectures are free and ordinarily open to the public, but due to the health concerns around COVID-19, the lectures from 2020 onward have been held virtually and later posted to our YouTube channel. Subscribe to be notified when new talks are uploaded!

Shahram Izadi speaking at the UW Reality Lab Lectures

Lectures at a Glance

2023


Thomas Lewis

Spatial Computing Cloud Advocate Lead,
Microsoft
"A Guided Tour Across the Metaverse"
Watch on YouTube

February 28, 2023


Abstract: The volume of news around the Metaverse, as well as Mixed Reality across Virtual and Augmented Reality, keeps increasing, and it can be a bit confusing. Also, what is a Metaverse? Does the Metaverse exist today, or is it a conceptual north star? Join us for a fast-paced tour of all the realities and how we can think about the Metaverse today and what it will be in the future.

Bio: Thomas Lewis is a Spatial Computing Cloud Advocate Lead in Microsoft’s Developer Relations. Thomas has worked in a variety of roles and geographies at Microsoft for over 20 years. He’s currently advocating on behalf of developers, designers, creators, and builders of Mixed Reality experiences. After putting on a Mixed Reality headset, he knew that he had experienced a taste of the future and sees the beauty, sadness, and hope that it can bring to humans.

Mark Billinghurst

Director, Empathic Computing Laboratory
Professor, University of South Australia in Adelaide
"Empathic Computing; New Directions for Collaborative XR"
Watch on YouTube

February 21, 2023


Abstract: This talk introduces Empathic Computing, a new approach for developing collaborative systems using Augmented Reality (AR) and Virtual Reality (VR). Empathic Computing combines AR, VR, and physiological sensing with machine learning to create systems that increase understanding between people. An overview of the core concepts of Empathic Computing will be given, as well as examples of current work in the field. Finally, directions for future work will be discussed in the context of the Metaverse, digital characters, AI, and other important technology trends.

Bio: Mark Billinghurst is Director of the Empathic Computing Laboratory, and Professor at the University of South Australia in Adelaide, Australia, and also at the University of Auckland in Auckland, New Zealand. He earned a PhD in 2002 from the University of Washington and conducts research on how virtual and real worlds can be merged, publishing over 700 papers on Augmented Reality, Virtual Reality, remote collaboration, Empathic Computing, and related topics. In 2013 he was elected as a Fellow of the Royal Society of New Zealand, and in 2019 was given the ISMAR Career Impact Award in recognition of his lifetime contribution to AR research and commercialization. In 2022 he was inducted into the inaugural class of the IEEE VGTC VR Academy, and in 2023 elected as a Fellow of the IEEE.

Ben Poole

Research Scientist,
Google Brain
"2D Priors for 3D Generation"
Watch on YouTube

January 31, 2023


Abstract: Large scale datasets of images with text descriptions have enabled powerful models that represent and generate pixels. But progress in 3D generation has been slow due to the lack of 3D data and efficient architectures. In this talk, I’ll present DreamFields and DreamFusion: two approaches that enable 3D generation from 2D priors using no 3D data. By turning 2D priors into loss functions, we can optimize 3D models (NeRFs) from scratch via gradient descent. These methods enable high-quality generation of 3D objects from diverse text prompts. Finally, I’ll discuss a fundamental problem with our approach and how continued progress on pixel-space priors like Imagen Video can unlock new 3D capabilities.
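
To make the "2D prior as a loss function" idea concrete, here is a deliberately tiny Python sketch (not the DreamFields or DreamFusion code): a hand-written analytic image score stands in for the frozen text-to-image diffusion prior, and a parametric blob stands in for the NeRF, so that scene parameters can be optimized purely by gradient descent on rendered images, with no 3D data. All names here (render, prior_loss, grad) are illustrative placeholders.

    import numpy as np

    def render(theta, size=32):
        """Toy stand-in for a differentiable 3D renderer (one view of a 'scene').
        theta = (cx, cy, radius) of a bright blob on a dark background."""
        cx, cy, r = theta
        ys, xs = np.mgrid[0:size, 0:size] / size
        return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * r ** 2))

    def prior_loss(image, size=32):
        """Toy stand-in for a 2D image prior turned into a loss: renders that look
        like a bright, centered object score well. In the real methods this role is
        played by a frozen text-conditioned diffusion model, which is NOT used here."""
        ys, xs = np.mgrid[0:size, 0:size] / size
        target = np.exp(-((xs - 0.5) ** 2 + (ys - 0.5) ** 2) / 0.02)
        return float(np.mean((image - target) ** 2))

    def grad(theta, eps=1e-3):
        """Finite-difference gradient of the 2D-prior loss w.r.t. scene parameters."""
        g = np.zeros_like(theta)
        for i in range(len(theta)):
            plus, minus = theta.copy(), theta.copy()
            plus[i] += eps
            minus[i] -= eps
            g[i] = (prior_loss(render(plus)) - prior_loss(render(minus))) / (2 * eps)
        return g

    theta = np.array([0.2, 0.8, 0.15])     # initial "scene": a blob far from the target look
    for _ in range(200):
        theta -= 0.5 * grad(theta)          # gradient descent driven only by the 2D prior
        theta[2] = max(theta[2], 0.02)      # keep the blob radius positive
    print("optimized scene parameters:", theta)

In the actual methods the gradient comes from a pretrained diffusion model via score distillation rather than from a toy target image, and the 3D representation is a full NeRF rendered from many random viewpoints.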

Bio: Ben Poole is a research scientist at Google Brain in San Francisco working on deep generative models for images, video, and 3D. He completed his PhD at Stanford University advised by Surya Ganguli in the Neural Dynamics and Computation lab. His thesis was on computational tools to develop a better understanding of both biological and artificial neural networks. He’s worked at DeepMind, Google Research, Intel Research Pittsburgh, and the NYU Center for Neural Science.

2022


Chloe LeGendre

Research Scientist,
Netflix
"Relighting Portraits using Machine Learning"
Watch on YouTube

February 22, 2022


Abstract: Until very recently, when you captured a portrait photograph, the lighting of the original scene was “baked” into the image. However, recent advances in computer graphics techniques leveraging machine learning have enabled photographers to change the lighting in a portrait after its capture. In this talk, I will introduce the general problem of computational portrait relighting, and I will then introduce two recent machine learning based approaches that tackle this problem. The first approach forms the basis for a recent computational photography feature developed by Google, called “Portrait Light,” which allows users to add a synthetic fill flash into a portrait image. This feature launched for the Google Pixel Phone and in Google Photos in October 2020. The second approach tackles more generalized relighting - allowing you to relight a portrait and realistically composite its subject into an entirely new scene. These recent advancements mean that photographers and cinematographers will be able to spend less time setting up lighting, as changing the scene’s lighting can now be accomplished in post-production -- well after principal photography.

Bio: Chloe LeGendre is a Senior Research Scientist at Netflix, working in a computer graphics group focused on research at the intersection of machine learning and filmmaking. She earned her Ph.D. in Computer Science at the University of Southern California's Institute for Creative Technologies (USC ICT) in 2019, advised by Professor Paul Debevec as an Annenberg Research Fellow. From 2011 to 2015, she was an applications scientist in imaging and augmented reality for L'Oreal USA Research and Innovation, where she helped launch the AR cosmetics try-on app “Makeup Genius,” with 20 million+ global downloads. From 2018 to 2021, she was a software engineer and researcher at Google AR/VR and Google Research, where she worked on ML-driven advancements for ARCore and then computational photography features for Google Photos and the Google Pixel phone. Chloe also obtained an M.S. in Computer Science in 2015 from Stevens Institute of Technology, where she recently taught Intro to Computer Graphics as adjunct faculty.

Bob Crockett

Co-Founder, HaptX
Professor, Cal Poly State Univ.
"Hardware is Hard: Blazing a Trail to Develop Natural Haptics for VR"
Watch on YouTube

February 15, 2022


Abstract: HaptX, Inc. was founded in 2012 with a compelling, if audacious, vision: develop a system capable of providing accurate physical inputs (touch, force, torque, temperature) across an entire user’s body to a degree that the virtual world becomes indistinguishable from the physical world. While this system does not require breakthrough technologies, it is both an engineering and a business challenge. This presentation will cover some of the highlights and pitfalls of the journey to bring such a system to market. With increasing corporate and public awareness of the Metaverse, haptic technologies are moving to center stage as an enabling piece of the puzzle; creating haptic technologies that are enterprise-grade is a challenge that HaptX has been working on for a decade now. Yes, it is hard to start a hardware-based startup company. But it continues to be one heck of a ride.

Bio: Dr. Crockett is a specialist in development and commercialization of disruptive technologies. Over the past three decades he has worked in the Aerospace, Biotechnology, Medical Device, and Consumer Products industries in leadership roles on both the strategic and tactical sides of engineering and IP development. Dr. Crockett received his Ph.D. from University of Arizona in Materials Science and Engineering. He holds an M.B.A. from Pepperdine University and a B.S. in Mechanical Engineering from University of California, Berkeley. He has recently served as Associate Dean for Innovation Infrastructure in the College of Engineering, and is currently a Professor in Biomedical Engineering. In addition to his academic work in Innovation & Entrepreneurship, Dr. Crockett is currently involved in four technology-based startup companies, including serving as the Director of X-Lab for HaptX, Inc.

Meredith Ringel Morris

Director of People + AI Research,
Google
"Accessible by Design: An Opportunity for Virtual Reality"

February 1, 2022


Abstract: Too often, the accessibility of technology to people with disabilities is an afterthought (if it is considered at all); post-hoc or third-party patches to accessibility, while better than no solution, are less effective than interface designs that consider ability-based concerns from the start. Virtual Reality (VR) technologies are at a crucial point of near-maturity, with emerging, but not yet widespread, commercialization; as such, VR technologies have an opportunity to integrate accessibility as a fundamental design principle, developing cross-industry standards and guidelines to ensure high-quality, inclusive experiences that could revolutionize the power and reach of this medium. In this talk, I will discuss the needs, opportunities, and challenges of creating accessible VR. I will then present several inclusive VR designs: the Canetroller, which provides audio and haptic information to allow a completely blind person to navigate a VR scene; SeeingVR, a toolkit that can modify a Unity-based VR scene post hoc to support a range of accessibility options for people with several low vision conditions; and several prototypes exploring sound accessibility in VR for end-users who are d/Deaf or hard of hearing.

Bio: Meredith Ringel Morris is Director of People + AI Research at Google. Prior to joining Google Research, Dr. Morris was Research Area Manager for Interaction, Accessibility, and Mixed Reality at Microsoft Research, where she founded Microsoft’s Ability research group. She is also an Affiliate Professor at the University of Washington in the Allen School of Computer Science & Engineering and in The Information School. Dr. Morris is an ACM Fellow and a member of the ACM SIGCHI Academy. Her research on collaboration and social technologies has contributed new systems, methods, and insights to diverse areas of computing including gesture interaction, information retrieval, and accessibility. Dr. Morris earned her Sc.B. in Computer Science from Brown University and her M.S. and Ph.D. in Computer Science from Stanford University.

Sebastià V. Amengual Garí

Research Scientist,
Reality Labs Research @ Meta
"Towards Audio Presence in Mixed Realities"
Watch on YouTube

January 25, 2022


Abstract: True audio presence in Mixed Reality (XR) occurs when virtual sounds seamlessly blend with the rest of our environment, belonging to our space and being truly indistinguishable from real sources. The challenges of achieving audio presence in portable devices with limited compute lie in developing novel methods for fast, lightweight, dynamic, and yet perceptually accurate modelling of the entire binaural audio rendering chain. In this talk we will review some of the most recent research conducted within the Audio Team at Reality Labs Research (RL-R), including real-time room acoustics rendering, HRTF modelling, high fidelity 6 DoF research systems, and auditory perception.
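
As a rough illustration of one link in the binaural rendering chain mentioned above, the sketch below (generic textbook-style Python, not RL-R's renderer) convolves a mono source with a left/right pair of head-related impulse responses; the HRIRs here are fabricated placeholders rather than measured or modelled HRTFs.

    import numpy as np

    def binaural_render(mono, hrir_left, hrir_right):
        """Generic binaural rendering step: convolve a mono source with the
        head-related impulse responses (HRIRs) for its direction to get a
        left/right pair. Real renderers interpolate measured or modelled HRTFs
        per listener and add room acoustics; none of that is modelled here."""
        left = np.convolve(mono, hrir_left)
        right = np.convolve(mono, hrir_right)
        out = np.zeros((max(len(left), len(right)), 2))
        out[:len(left), 0] = left
        out[:len(right), 1] = right
        return out

    # Fabricated example: 0.1 s of noise as the source, plus two made-up HRIRs that
    # crudely mimic a source on the listener's right (earlier and louder in the right ear).
    fs = 48000
    source = np.random.randn(fs // 10)
    hrir_right_ear = np.zeros(256)
    hrir_right_ear[0] = 1.0
    hrir_left_ear = np.zeros(256)
    hrir_left_ear[30] = 0.6          # ~0.6 ms interaural delay, attenuated
    stereo = binaural_render(source, hrir_left_ear, hrir_right_ear)
    print(stereo.shape)              # (samples, 2): left and right ear signals

A production system would additionally interpolate personalized HRTFs as the listener and source move, and layer in room acoustics, all within a tight real-time budget.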

Bio: Sebastià V. Amengual Garí received the Diploma in telecommunications with a major in sound and image from the Polytechnic University of Catalonia, Barcelona, Spain, and completed his master’s thesis at the Norwegian University of Science and Technology, Trondheim, Norway, in 2014. He received the Doctoral degree (Dr.-Ing.) from the University of Music, Detmold, Germany, in 2017, with a focus on the interaction of room acoustics and live music performance using virtual acoustic environments. Since 2018, he has been a Research Scientist with Reality Labs Research @ Meta (formerly Oculus Research, and Facebook Reality Labs) working on room acoustics, spatial audio, and auditory perception. His research interests lie in the intersection of audio, perception, and music.

2021


David Smith

CEO and Founder,
Croquet Corporation
"The Augmented Conversation"
Watch on YouTube

April 20, 2021


Abstract: Human communication mediated by computers and Augmented Reality devices will enable us to dynamically express, share and explore new ideas with each other via live simulations as easily as we talk about the weather. This collaboration provides a “shared truth” — what you see is exactly what I see, I see you perform an action as you do it, and we both see exactly the same dynamic transformation of this shared information space. When you express an idea, the computer, a full participant in this conversation, instantly makes it real for both of us, enabling us to critique and negotiate the meaning of it. This shared virtual world will be as live, dynamic, pervasive, and visceral as the physical. Augmented Reality is not just the next wave of collaboration and computing, it is a fundamental shift in how we will engage with our world and each other, and how we will understand and solve the huge problems we face as a species. AR will replace your PC, your phone, your tablet. It will be an always-on and always-on-you supercomputer — it will amplify our intentions and ideas, enabling all of us to create and explore new universes together. In this talk, David Smith will introduce the Croquet programming platform for live collaboration, built by some of the best-known researchers in the history of computer science, and also touch on the core cognitive psychology first principles on which it is based.

Bio: David Alan Smith is the CEO and Founder of Croquet Corporation, developing Croquet Greenlight, a first step toward an Augmented Reality Collaborative Operating System. Smith is a computer scientist and entrepreneur who has focused on interactive 3D and using 3D as a basis for new user environments and entertainment for over thirty years. His specialty is system design and advanced user interfaces. He is a pioneer in 3D graphics, robotics, telepresence, artificial intelligence and augmented reality (AR). He creates world-class teams and ships impossible products. Smith was Chief Innovation Officer and Lockheed Martin Senior Fellow at Lockheed Martin, where he was focused on next generation human centric computing and collaboration platforms. Before joining Lockheed Martin, Smith was the chief architect of the Croquet Project, an open source virtual world collaboration platform where he worked with Alan Kay (Turing Award winner) and David P Reed (created UDP, co-created TCP/IP) and was later CTO and co-founder of Teleplace, Inc. providing a collaboration platform developed specifically for enterprises based on Croquet. In 1987, Smith created The Colony, the very first real time 3D adventure game/shooter and the precursor to today's first-person shooters. The game was developed for the Apple Macintosh and won the "Best Adventure Game of the Year" award from MacWorld Magazine. Smith founded Virtus Corporation in 1990 and developed Virtus Walkthrough, the first real-time 3D design application for personal computers. Walkthrough won the first "Breakthrough Product of the Year" from MacUser Magazine. Smith also co-founded Red Storm Entertainment with author Tom Clancy and co-created the Rainbow Six game franchise. David gave the keynote address at IEEE VR 2017.

Andy Wilson

Partner Research Manager,
Microsoft Research
"RealityShader: Holograms without Headsets"
Watch on YouTube

April 27, 2021


Abstract: At Microsoft Research we have been exploring the use of depth cameras and projectors to augment reality without the use of headsets. Projects such as IllumiRoom, RoomAlive, and Room2Room video conferencing transform the physical environment using projection mapping. More recent work demonstrates fluid transition from traditional VR use, to a mode of VR where parts of the physical environment are rendered with the virtual scene, to using projection mapping in place of the headset. This body of work demonstrates the broad applicability of augmented reality, transcending form factor.

Bio: Andy Wilson is a partner researcher at Microsoft Research. There he has been applying sensing technologies to enable new modes of human-computer interaction. These days he is focused on augmented and virtual reality, ubiquitous computing, and real-time interactive computer vision. He contributed to Microsoft’s earliest efforts to commercialize depth cameras, leading to Kinect, and worked extensively on the original Surface interactive table. Before joining Microsoft, Andy obtained his BA at Cornell University, and MS and PhD at the MIT Media Laboratory.

Andreea Ion Cojocaru

CEO and Co-Founder,
NUMENA
"Where Are You, Who Are You? The Thinning Thickness of the Real"
Watch on YouTube

May 11, 2021


Abstract: Neither what we refer to as “reality” nor “we” or “I” are unified a priori concepts. In fact, reality is constructed and we are constructed within it. The first part of the lecture will define the nature of this process both with and without technological mediation from a phenomenological perspective. The second part will look at physical and virtual architecture in the context of virtual reality. I will argue that the use of VR as mediating technology creates a pluralistic I and redefines the traditional subject-object relationship. The talk will end with speculation on the implications of this conclusion for a near future lived across realities.

Bio: Andreea Ion Cojocaru is a licensed architect and software developer. Her work focuses on developing a framework for spatial experience that spans from physical space to the technologically mediated space of immersive tech. The central thesis of her work is that by expanding the possibilities and affordances of spatial experience, we are expanding and redefining notions of identity, subjectivity and modes of collective being. Her methodology is grounded in phenomenology, from classical phenomenology (Husserl and Merleau-Ponty) to post-phenomenology (an approach to the philosophy of technology developed by Don Ihde), to experimental phenomenology. Andreea is also the CEO and co-founder of NUMENA, an award-winning interdisciplinary company that designs and develops both physical and virtual spaces. NUMENA has worked with clients such as BMW and B. Braun to develop experimental virtual experiences and is currently developing a virtual reality tool for spatial design. Andreea was formally trained as an architect at MIT and Yale University, where she was awarded the gold medal for best graduating master student by the American Institute of Architects. Prior to NUMENA, she gained design and project management experience in architecture practices such as Kohn Pedersen Fox and Robert A.M. Stern in New York. She is the recipient of numerous fellowships and is a frequent guest speaker at international events in the XR space.

Sarah Ticho

Founder & CEO,
Hatsumi
"Investigating Our Sensory Realities Through Immersive Art"
Watch on YouTube

May 25, 2021


Abstract: We are feeling bodies, seeking the novel and dulling the painful every day. But how can we communicate experiences to others when sometimes words are not enough? How do we make invisible experiences visible?

Body mapping is an existing arts and health research method that invites participants to visually translate their embodied experience of pain and emotions by drawing onto an outline of the body. Hatsumi is a creative research startup that has translated this process into a virtual reality experience. By enabling participants to illustrate in an immersive environment with 3d drawing tools, this approach offers new opportunities to drastically enhance and expand its applications across healthcare, research design and knowledge translation.

This talk will explore the theories underpinning body mapping and virtual art therapy, and demonstrate examples of the experience in use. We will also explore the potential of digitising body mapping as a new research tool to gather quantitative and qualitative data, and to serve as a tool for diagnosis and new discoveries related to the diversity of human sensory realities.

Bio: Sarah is a producer, consultant and founder of Hatsumi (which means to see for the first time). They develop work at the intersection of immersive technology, participatory art, and storytelling to improve physical and mental health. Over the last few years they have been creating BodyMap, a creative participatory tool created to help people communicate and understand the embodied experience of pain and emotion using 3D drawing and sound. She is the producer of Explore Deep, an award-winning, clinically validated, breath-controlled VR experience designed to reduce anxiety, developed in close collaboration with the Games for Emotional and Mental Health Lab, Radboud University. She has worked with a number of organisations across the immersive and healthcare space including Immerse UK, Health Education England and the XR Safety Initiative Medical Council, and continues to create opportunities to bring together practitioners across academia, healthcare and the creative industries to create an equitable and just future. She is also an End of Life Doula in training, providing non-medical support for people going through the end of life process.

2020


Ken Perlin

Professor of Computer Science,
NYU Future Reality Lab
"How to Build a Holodeck"
Watch on YouTube

May 26, 2020


Abstract: In the age of COVID-19 it is more clear than ever that there is a compelling need for better remote collaboration. Fortunately a number of technologies are starting to converge which will allow us to take such collaborations to a whole new level. Imagine that when you join an on-line meeting you are present with your entire body, and that you can see and hear other people as though you are all in the same room.

There are many challenges to realizing this vision properly. The NYU Future Reality Lab and its collaborators are working on many of them. This talk will give an overview of many of the key areas of research, including how to guarantee universal accessibility, user privacy and rights management, low latency networking, design and construction of shared virtual worlds, correct rendering of spatial audio, biometric sensing, and a radical rethinking of user interface design.

Bio: Ken Perlin, a professor in the Department of Computer Science at New York University, directs the Future Reality Lab, and is a participating faculty member at NYU MAGNET. His research interests include future reality, computer graphics and animation, user interfaces and education. He is chief scientist at Parallux, Tactonic Technologies and Autotoon. He is an advisor for High Fidelity and a Fellow of the National Academy of Inventors. He received an Academy Award for Technical Achievement from the Academy of Motion Picture Arts and Sciences for his noise and turbulence procedural texturing techniques, which are widely used in feature films and television, as well as membership in the ACM/SIGGRAPH Academy, the 2020 New York Visual Effects Society Empire Award, the 2008 ACM/SIGGRAPH Computer Graphics Achievement Award, the TrapCode award for achievement in computer graphics research, the NYC Mayor's award for excellence in Science and Technology, the Sokol award for outstanding Science faculty at NYU, and a Presidential Young Investigator Award from the National Science Foundation.

Brian Schowengerdt

Co-Founder and Chief Science Officer, Magic Leap
"Extended Reality: Use Cases and Design Considerations for AR, MR, and VR"
Watch on YouTube

May 19, 2020


Abstract: Extended reality refers to a superset of experiences that involve the presentation of digital content with varying degrees of real world visibility (from virtual reality, in which the real world is invisible, to mixed reality systems that integrate digital content into the real world in roughly equal proportions, to systems that show the real world with only light digital augmentations). In addition, today’s augmented reality, mixed reality, and virtual reality systems vary across a number of other criteria (small to large field of view, head-tracked vs. non-tracked, head-worn vs. external display, stereoscopic vs. non-stereoscopic, single focus vs. multi-focus, and their ability to represent occlusions). We will discuss a number of representative examples across the XR spectrum, what kinds of experiences are best suited to the different platforms, and design considerations for each platform — with particular emphasis on how these technologies interact with human perception and cognition.

Bio: Brian Schowengerdt is the Co-Founder and Chief Science and Experience Officer of Magic Leap, and an Affiliate Assistant Professor of Mechanical Engineering in the University of Washington's Human Photonics Lab. Schowengerdt received his Bachelor's degree (summa cum laude) in 1997 from the University of California, Davis, with a triple major in psychology, philosophy, and German. He received his Ph.D. (2004, U.C. Davis) in psychology, with an emphasis in cognition and perception, and conducted his doctoral research at the U.W. Human Interface Technology Lab, where he studied display system design, optical engineering, and mechanical engineering in the course of developing mixed reality and virtual reality systems. He is an inventor on more than 100 issued and pending patents, has given numerous plenary and invited presentations at display industry conferences, and has authored a variety of papers on light field displays, novel microdisplays, and human perception. Schowengerdt has served as the Chair of the Display System committee and Program Vice Chair for 3D for the SID International Symposia, and the Program Committee Co-Chair for the Laser Display Conference, and as associate editor of the Journal of the SID and guest editor for Information Display magazine. Since 2000, he has combined knowledge of sensory physiology, optics, and mechanical engineering to develop and miniaturize mixed reality systems matched to the needs of human perceptual systems.

Rosie Summers

3D Animator, VR Artist, Tilt Brush Live Performer
"Building Virtual Realities"
Watch on YouTube

May 12, 2020


Abstract: In this talk, I will be sharing my industry experience of building virtual realities from the inside out, using VR creative tools such as Tilt Brush and Quill. I will explore the benefits of using these powerful tools to bring a whole new dimension to the way you create characters and worlds, alongside their rapid visualisation, from pre-vis to production. I will talk about my personal journey into immersive mediums and the tips and tricks I have learnt along the way on all things VR worldbuilding, even including a live demonstration of Tilt Brush (if all goes well).

Bio: Rosie is a 3D animator and Virtual Reality Artist at XR Games, which recently released Angry Birds Movie 2 VR: Under Pressure for PlayStation VR. She brings characters and worlds to life in high speed through her work as a VR artist, covering all areas of the production pipeline, and has a breadth of experience working with a whole range of headsets. She also loves the performative element that comes with creating art in virtual space and has performed live VR paintings at numerous festivals and events, working with high profile clients such as the BBC and Google.

Uma Jayaram & Jay Jayaram

Former Managing Director and Principal Engineer, Intel Sports. ASME Fellow

CEO, QuintAR Inc. Former Chief Technology and Product Officer, Intel Sports
"Bringing VR/AR experiences to Live Sports – Opportunities and Challenges"
Watch on YouTube

April 28, 2020


Abstract: This presentation will focus on the significant opportunity for VR and AR in the sports industry and use the journey of VOKE VR from startup through acquisition by Intel and subsequent growth to illustrate challenges and successes along the way. The speakers will bring out the nuances of working in the sports industry, bringing new technology to fans, specific VR-related technical and production challenges, and the future of immersive media in sports and entertainment.

Bio:

Dr. Jayaram’s professional achievements and philosophy have evolved through three distinct spheres of work: leadership of a global engineering team at Intel; entrepreneurship as a co-founder of three start-ups; and academic research, teaching, and mentorship as a professor at Washington State University. The three companies she co-founded were in the areas of VR media experiences for sports/concerts, CAD interoperability, and VR/CAD/AI for enterprise design. The company VOKE VR was acquired by Intel in 2016.

At Intel, Uma built teams and technologies for high-profile executions such as the first ever Live Olympics VR experience at the Winter Olympics in 2018 and experiences for world-class leagues and teams such as NFL, NBA, PGA, NCAA, and La Liga. The teams continued to evolve technologies for VR and Volumetric/Spatial Computing executions including media streaming & processing, cloud integrations, platform services, off-site NOC services, distribution over CDN, and SDKs for mobile apps and HMDs. Uma’s teams have a very distinctive culture that combines intellectual vitality, disciplined execution, and a strong employee-centric foundation.

Uma has an undergraduate degree in Mechanical Engineering from IIT Kharagpur. She earned her MS and PhD degrees from Virginia Tech, where she recently received a Distinguished Alumna award and currently sits on the advisory board.

Dr. S. (Jay) Jayaram has driven the digitization and personalization of sports for fans through immersive technologies. Through his influential work in virtual reality over the past 25 years, he has brought VR to a wide spectrum of domains - from live events in sports and concerts to VR for engineering applications and training. He has brought to market highly innovative solutions and products combining virtual reality, immersive environments, powerful user controllable media experiences, and social networks. The work done by his team in Virtual Assembly and Virtual Prototyping in the 90s continues to be widely referenced by groups around the world. Dr. Jay has co-founded several companies including VOKE (acquired by Intel in 2016), Integrated Engineering Solutions, and Translation Technologies. He was also a Professor at Washington State University and co-founded the WSU Virtual Reality Laboratory in 1994. Most recently he was Chief Technology Officer and Chief Product Officer at Intel Sports. He is currently the CEO of a startup, QuintAR, Inc.


Mar Gonzalez Franco

Senior Researcher, Microsoft
"Impossible outside Virtual Reality"
Watch on YouTube

April 21, 2020



Abstract: For Virtual Reality and Augmented Reality to become the primary devices for interacting with digital content, beyond the form factor, we need to understand what types of things we can do in VR that would be impossible with other technologies. That is, what does spatial computing bring to the table? For one, the spatialization of our senses. We can enhance audio or proprioception in completely new ways. We can grab and touch objects with new controllers, like never before, even in the empty space between our hands. But how fast do we adapt to the new sensory experiences?

Avatars are also unique to VR. They represent other humans but can also substitute our own bodies. And we perceive our world through our bodies. Hence avatars also change our perception of our surroundings. In this presentation we will explore the uniqueness of VR, from perception to avatars and how they can ultimately change our behavior and interactions with the digital content.

Bio: Dr Mar Gonzalez Franco is a computer scientist at Microsoft Research. Her expertise and research span from VR to avatars to haptics, and her interests include the further understanding of human perception and behavior. In her research she uses real-time computer graphics and tracking systems to create immersive experiences and study human responses. Prior to pivoting into industrial research she held several positions and completed her studies in leading academic institutions including the Massachusetts Institute of Technology, University College London, Tsinghua University and Universidad de Barcelona. Later she also took roles as a scientist in the startup world, joining Traity.com to build mathematical models of trust. She created and led an Immersive Technologies Lab for the leading aeronautic company Airbus. Over the years her work has been covered by tech and global media such as Fortune Magazine, TechCrunch, The Verge, ABC News, GeekWire, Inverse, Euronews, El Pais, and Vice. She was named one of 25 young tech talents by Business Insider, and won the MAS Technology award in 2019. She continues to serve as a field expert for governmental organizations worldwide such as the US-NSF, the NSERC in Canada and the EU, evaluating Future and Emerging Technologies projects.

Henry Fuchs

Professor of Computer Science,
UNC Chapel Hill
"Nextgen AR Glasses: Autofocus, Telepresence, Personal Assistants"
Watch on YouTube

April 7, 2020


Abstract: Just as today's mobile phones are much more than simply telephones, tomorrow's Augmented Reality glasses will be much more than simply displays in one's prescription eyewear. For starters, these glasses will include autofocus capabilities for more comfortable viewing of real-world surroundings than today's bifocals and progressive lenses. Their internal displays will depth-accommodate the virtual objects for comfortable viewing of the combined real-and-virtual scene. The glasses will include eye-, face-, and body tracking for gaze control, for user interfaces, and for scene capture to support telepresence. The glasses will also display virtually embodied personal assistants next to the user. These assistants will be more useful than today's Siri and Alexa because they will be aware of the user's situation, gaze, and pose. These virtual assistants will appear and move around in the user's real surroundings. Although early versions of these capabilities are starting to emerge in commercial products, widespread impact will occur only when a critical mass of capabilities is integrated into a single device. This talk will describe several recent experiments and speculate on future directions.

Bio: Henry Fuchs (PhD, Utah 1975) is the Federico Gil Distinguished Professor of Computer Science and Adjunct Professor of Biomedical Engineering at UNC Chapel Hill. He has been active in computer graphics since the 1970s, with rendering algorithms (BSP Trees), high performance graphics hardware (Pixel-Planes), office of the future, virtual reality, telepresence, and medical applications. He is a member of the National Academy of Engineering, a fellow of the American Academy of Arts and Sciences, recipient of the SIGGRAPH Steven Anson Coons Award, and an honorary doctorate from TU Wien, the Vienna University of Technology.

2019


Yelena Rachitsky

Oculus VR
"The Hierarchy of Being: Embodying our Virtual Selves"
Watch on YouTube

May 28, 2019

CSE2, Room G10
2:00pm

Abstract: I'll take an interdisciplinary approach to investigating how the body, movement, and presence of others can deliver immersion and specific behaviors in VR. The talk will bridge academic ideas with currently available VR experiences to connect ideas around embodiment, environment, and social interactions, making a strong case around the need for academics and content creators to work more closely together.

Bio: Yelena Rachitsky is an Executive Producer of Experiences at Oculus, overseeing dozens of groundbreaking, narrative-driven VR projects that range from Pixar's first VR project to original independent work. Prior to Oculus, she was the Creative Producer at Future of Storytelling (FoST), which aims to change how people communicate and tell stories in the digital age. Yelena also helped program for the Sundance Film Festival and Institute's New Frontier program and spent four years in the documentary division at Participant Media, working on films like Food Inc. and Waiting for Superman. She's passionate about big creative ideas that will make technology meaningful.

Philip Rosedale

Founder, Second Life
Founder, High Fidelity
"VR and Virtual Worlds"
Watch on YouTube

May 21, 2019

CSE2, Room G10
2:00pm

Abstract: I'll cover what I've learned and seen so far, from early VR hardware prototypes in the 90s, to the creation of Second Life starting in 1999, through the Rift Kickstarter and the founding of High Fidelity. Finally, I'll offer what thoughts I can on how VR and Virtual Worlds may affect humanity in the near future.

Bio: Philip Rosedale is CEO and co-founder of High Fidelity, a company devoted to exploring the future of next-generation shared virtual reality. Prior to High Fidelity, Rosedale created the virtual civilization Second Life, populated by one million active users generating US$700M in annual transaction volumes. In addition to numerous technology inventions (including the video conferencing product called FreeVue, acquired by RealNetworks in 1996 where Rosedale later served as CTO), Rosedale has also worked on experiments in distributed work and computing.

Jessica Brillhart

Founder, Vrai Pictures
Director, m ss ng p eces
"Radical Experimentation: Creating Content for Emerging Technologies"
Watch on YouTube

May 14, 2019

CSE2, Room G10
2:00pm

Abstract: Emerging technology has the capacity to expand our understanding of the world and evolve our ability to connect to each other – but in order to do this in any meaningful way, it is imperative to consider the technology’s effects on current pipelines and ecosystems, its impact on culture as a whole, and its ability to satiate – or change entirely – the current wishes and desires of a media-consuming society. The talk will be presented in three parts or “case studies,” each focusing on a specific emergent technology where creative development was leveraged as a means to test and better understand the technology in question. Each case study will explore how the considerations previously listed were met and the output that resulted. The first case study will focus on Jump, a 360-degree stereoscopic live-action virtual reality pipeline developed at Google. The second case study will focus on Bose AR, an audio-only augmented reality technology developed at Bose which allows users to experience spatial and immersive audio. The third will focus on Inception, a convolutional network initially trained on ImageNet to extrapolate the contents of images; the process was then reversed to instead identify and enhance patterns in those images – essentially to dream upon them. The result was a computer vision program called DeepDream. I propose that in order to truly and adequately address the potential societal and cultural effects of emerging technology, iterative creative output and experimentation must not only be implemented but continuously encouraged. Not only does this necessary process maximize an emergent technology’s chances of success – thus playing a crucial role in that technology’s development – but it also prepares it for introduction into a modern society. This process initiates an ecosystem in which users at scale are able to understand a technology’s potential, thus willfully embracing it as part of their lives in both an observational and active capacity.

Bio: Jessica Brillhart is an immersive director, writer, and theorist who is widely known for her pioneering work in virtual reality. She is the founder of the mixed reality studio, Vrai Pictures. Previously, Brillhart was the Principal Filmmaker for VR at Google where she worked with engineers to develop Google Jump, a virtual reality live-action ecosystem. Since then, Brillhart has made a range of highly acclaimed immersive experiences, working with such groups as NASA, Bose, the Philharmonia Orchestra in London, Googleʼs Artists and Machine Intelligence program, the Montreal Canadiens, Frank Gehry, and (unofficially) the Weather Channel. Her work explores the potential of immersive mediums while also diving into a number of important medium- and media-related issues, such as access, disability, and cultural representation. Brillhart has taken the stage at Google IO, Oculus Connect, FMX, and the New Yorker Tech Fest; she has worked as an advisor for Sundance New Frontiers, the Independent Film Project (IFP), and Electric South; and has been a judge for World Press Photo, ADC Young Guns, SXSW, and the Tribeca Film Festival. Her Medium publication, In the Blink of a Mind, has been used by universities, master classes, and creators all over the world. She was recognized as a pioneer in the field of immersive technology and entertainment by MIT and was part of their TR35 list in 2017. Most recently, Brillhart delivered the Convergence Keynote at SXSW 2019 and launched a spatial audio platform, Traverse, which won SXSW’s Special Jury Prize for The Future of Experience.

Yaser Sheikh

Director, Facebook Reality Labs, Pittsburgh
CMU
"Photorealistic Telepresence"
Watch on YouTube

May 7, 2019

CSE2, Room G10
2:00pm


Abstract: In this talk, I will describe early steps taken at FRL Pittsburgh in achieving photorealistic telepresence: realtime social interactions in AR/VR with avatars that look like you, move like you, and sound like you.

Telepresence is, perhaps, the only application that has the potential to bring billions of people into VR. It is the next step along the evolution from telegraphy to telephony to videoconferencing. Just like telephony and video-conferencing, the key attribute of success will be “authenticity”: users' trust that received signals (e.g., audio for the telephone and video/audio for VC) are truly those transmitted by their friends, colleagues, or family. The challenge arises from this seeming contradiction: how do we enable authentic interactions in artificial environments?

Our approach to this problem centers around codec avatars: the use of neural networks to address the computer vision (encoding) and computer graphics (decoding) problems in signal transmission and reception. The creation of codec avatars requires capture systems of unprecedented 3D sensing resolution, which I will also describe.
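
One way to picture the encoder/decoder split described above is as a compact code crossing the network instead of raw video. The Python sketch below is only a structural illustration, with untrained, randomly initialized linear maps and invented sizes; it is not Facebook Reality Labs' architecture.

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented sizes: a 64x64 grayscale face crop and a 128-dim latent "code".
    PIXELS, CODE = 64 * 64, 128

    # Randomly initialized linear maps stand in for deep encoder/decoder networks that
    # would be trained end-to-end on multi-view capture data. Left untrained on purpose:
    # the point is the data flow (sender encodes, a small code crosses the network,
    # receiver decodes), not the model itself.
    W_enc = rng.normal(scale=0.01, size=(CODE, PIXELS))
    W_dec = rng.normal(scale=0.01, size=(PIXELS, CODE))

    def encode(face_image):
        """Sender side ("computer vision"): compress a captured frame into a compact code."""
        return W_enc @ face_image.reshape(-1)

    def decode(code):
        """Receiver side ("computer graphics"): reconstruct the avatar's appearance.
        A real decoder would output view-dependent geometry and texture, not raw pixels."""
        return (W_dec @ code).reshape(64, 64)

    frame = rng.random((64, 64))   # stand-in for a frame from a capture rig or headset
    code = encode(frame)           # ~128 floats cross the network instead of raw pixels
    avatar_frame = decode(code)
    print(code.shape, avatar_frame.shape)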

Bio: Yaser Sheikh is an Associate Professor at the Robotics Institute, Carnegie Mellon University. He also directs the Facebook Reality Lab in Pittsburgh, which is devoted to achieving photorealistic social interactions in AR and VR. His research broadly focuses on machine perception and rendering of social behavior, spanning sub-disciplines in computer vision, computer graphics, and machine learning. With colleagues and students, he has won the Honda Initiation Award (2010), Popular Science’s "Best of What’s New" Award, best student paper award at CVPR (2018), best paper awards at WACV (2012), SAP (2012), SCA (2010), ICCV THEMIS (2009), best demo award at ECCV (2016), and he received the Hillman Fellowship for Excellence in Computer Science Research (2004). Yaser has served as a senior committee member at leading conferences in computer vision, computer graphics, and robotics including SIGGRAPH (2013, 2014), CVPR (2014, 2015, 2018), ICRA (2014, 2016), ICCP (2011), and served as an Associate Editor of CVIU. His research has been featured by various media outlets including The New York Times, BBC, MSNBC, Popular Science, and in technology media such as WIRED, The Verge, and New Scientist.

Timoni West

Director of XR, Unity
"Tools & Systems for Spatial Computing"
Watch on YouTube

April 29, 2019

CSE, Room 691
2:00pm

Abstract: Timoni West leads Unity's XR research arm, focusing on new tools for augmented and mixed reality today, and helping to define what systems we need to put into place in order to build the foundation for strong spatial computing in the future. In this talk she will go over the vision for the future, the systems being proposed, and the current tools in progress to help developers build robust, interesting augmented reality applications.

Bio: Timoni West is the Director of XR Research at Unity, where she leads a team of cross-disciplinary artists and engineers exploring new interfaces for human-computer interaction. Currently, her team focuses on spatial computing: how we will live, work, and create in a world where digital objects and the real world live side-by-side. One of her team’s first tools, EditorXR, a tool for editing Unity projects directly in virtual reality, won SF Design Week’s first-ever Virtual Tech award in 2018. A longtime technologist, Timoni was formerly SVP at Alphaworks, co-founder of Recollect, and CEO of Department of Design, a digital agency. She's worked for startups across the country, including Foursquare, Flickr, Causes, and Airtime. Timoni serves on the OVA board and is an advisor to Tvori and Spatial Studios, among others. In 2017, Timoni was listed in Next Reality News’ Top 50 to Watch. Additionally, she serves on XRDC’s advisory board, is a Sequoia Scout, and was a jury member for ADC’s 2018 Awards in Experiential Design.

Andrew Rabinovich

Head of AI, Magic Leap
"Multi Task Learning for Computer Vision"
Watch on YouTube

April 23, 2019

CSE2, Room G10
2:00pm

Abstract: Deep multitask networks, in which one neural network produces multiple predictive outputs, are more scalable and often better regularized than their single-task counterparts. Such advantages can potentially lead to gains in both speed and performance, but multitask networks are also difficult to train without finding the right balance between tasks. In this talk I will present novel gradient-based methods which automatically balance the multitask loss function by directly tuning the gradients to equalize task training rates. We show that for various network architectures, for both regression and classification tasks, and on both synthetic and real datasets, these techniques improve accuracy and reduce overfitting over single networks, static baselines, and other adaptive multitask loss balancing techniques. They match or surpass the performance of exhaustive grid search methods. Thus, what was once a tedious search process which incurred exponentially more compute for each task added can now be accomplished within a few training runs, irrespective of the number of tasks. Ultimately, we hope to demonstrate that gradient manipulation affords us great control over the training dynamics of multitask networks and may be one of the keys to unlocking the potential of multitask learning.
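
As a loose illustration of the loss-balancing idea (equalizing task training rates), the Python sketch below nudges per-task weights toward lagging tasks using the ratio of current to initial loss; it is a simplified stand-in, not the exact gradient-tuning algorithm presented in the talk.

    import numpy as np

    def rebalance_weights(initial_losses, current_losses, weights, alpha=0.12):
        """Nudge per-task loss weights toward tasks that are training more slowly
        (higher current/initial loss ratio), then renormalize so the weights sum to
        the number of tasks. A simplified stand-in for the gradient-tuning methods
        described in the talk, not their exact algorithm."""
        rates = np.array(current_losses) / np.array(initial_losses)  # per-task training rate
        relative = rates / rates.mean()                              # >1 means the task is lagging
        weights = np.array(weights) * (1.0 + alpha * (relative - 1.0))
        return weights * len(weights) / weights.sum()

    # Toy two-task run: task 0 trains quickly, task 1 lags, so its weight grows.
    w = np.array([1.0, 1.0])
    initial = [1.0, 1.0]
    for step, current in enumerate([[0.80, 0.97], [0.55, 0.93], [0.35, 0.90]]):
        w = rebalance_weights(initial, current, w)
        weighted_loss = float(np.dot(w, current))   # the multitask loss actually optimized
        print(f"step {step}: weights={np.round(w, 3)}, weighted loss={weighted_loss:.3f}")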

Bio: Andrew Rabinovich is a leading scientist in Deep Learning and computer vision research. He has been studying machine learning with an emphasis on computer vision for over 15 years, is the author of numerous patents and peer-reviewed publications, and founded a biotechnology startup. Andrew received a PhD in Computer Science from UC San Diego in 2008, worked on R&D for Google, and is currently Director of Deep Learning and Head of AI at Magic Leap.

Tom Furness

Professor, University of Washington
Director, HIT Lab
"My Attempts to Save the World"
Watch on YouTube

April 9, 2019

CSE2, Room G10
2:00pm

Abstract: Over a career spanning 53 years, Prof. Furness has been exploring and developing technologies for facilitating bandwidth between humans and computing machines. His work has encompassed fighter cockpits, virtual reality, retinal displays, educational tools, medical simulators, pain, phobias, molecular modeling, scanning fiber endoscopes and entertainment systems. This quest has been punctuated with side trips and ‘aha’ experiences that have led to unanticipated destinations. Dr. Furness plans to talk about lessons learned on his journey including unexpected delights…with an aim to inspire, entertain and challenge.

Bio: Thomas Furness is a pioneer in human interface technology and the grandfather of virtual reality. He is the founder of the Human Interface Technology Laboratories (HIT Lab) at UW, at the University of Canterbury, New Zealand, and the University of Tasmania. He developed advanced cockpits and virtual interfaces for the U.S. Air Force and authored their Super Cockpit program. Currently, he is Professor of Industrial and Systems Engineering and Adjunct Professor in Electrical & Computer Engineering and Human Centered Design & Engineering (HCDE) at The University of Washington.

2018


VR Start-Up Panel Discussion

December 4, 2018



Jared Cheshier

CTO/Co-Founder, PlutoVR

Forest Key

CEO/Founder, Pixvana

John SanGiovanni

CEO/Co-Founder, Visual Vocal

In this special lecture, we invited leaders of Seattle-area VR/AR startups to share their experiences in the form of a panel discussion. Topics of discussion included each company's key product and vision, challenges, strategies, fundraising, and other experiential topics.


Shahram Izadi

Director, AR/VR at Google
"Virtual Teleportation"

November 27, 2018


Abstract: From the standpoint of the core technology, AR/VR has made massive advances in recent years, from consumer headsets to low-cost and precise head tracking. Arguably however, AR/VR is still a technology in need of the killer app. In this talk, I'll argue for why the killer app is immersive telepresence, aka virtual teleportation. The concept of virtual teleportation is not new; we've all been dreaming about it since the holograms of Star Wars. However, with the advent of consumer AR/VR headsets, it is now tantalisingly close to becoming fact rather than just science fiction. At its core, however, there's a fundamental machine perception problem still to solve -- the digitization of humans in 3D and in real time. In this talk I'll cover the work that we have done at Microsoft, perceptiveIO and now Google on this topic. I'll outline the challenges ahead for us to create a consumer product in this space. I'll demonstrate some of the core algorithms and technologies that can get us closer to making virtual teleportation a reality in the future.

Bio: Dr. Shahram Izadi is a director at Google within the AR/VR division. Prior to Google he was CTO and co-founder of perceptiveIO, a Bay-Area startup specializing in real-time computer vision and machine learning techniques for AR/VR. His company was acquired by Alphabet/Google in 2017. Previously he was a partner and research manager at Microsoft Research (both Redmond US and Cambridge UK) for 11 years where he led the interactive 3D technologies (I3D) group. His research focuses on building new sensing technologies and systems for AR/VR. Typically, this meant developing new sensing hardware (depth cameras and imaging sensors) alongside practical computer-vision or machine-learning algorithms and techniques for these technologies. He was at Xerox PARC in 2000-2002, and obtained his PhD from the Mixed Reality Lab at the University of Nottingham, UK, in 2004. In 2009, he was named one of the TR35, an annual list published by MIT Technology Review magazine, naming the world's top 35 innovators under the age of 35. He has published over 120 research papers (see DBLP & Google Scholar), and more than 120 patents. His work has led to products and projects such as the Microsoft Touch Mouse, Kinect for Windows, Kinect Fusion, and most recently HoloLens and Holoportation.

Cassidy Curtis

Technical Art Lead, Google
"From Windy Day to Age of Sail: Five Years of Immersive Storytelling at Google Spotlight Stories"

November 20, 2018


Abstract: How can you make a movie, but give the audience the camera? This is the question that launched Google Spotlight Stories. Technical Art Lead Cassidy Curtis will talk about how the group’s work has evolved from its origins in mobile immersive storytelling to VR, film and beyond. He’ll show examples from stories that span a range of visual styles, directorial voices and storytelling strategies, from linear (Age of Sail, Pearl) to highly interactive (Back to the Moon, Rain or Shine) and discuss the discoveries the team has made along the way.

Bio: Cassidy Curtis has worked in computer animation for three decades, in many corners of the field. As a math major (and art minor) from Brown University, he got his start developing image processing and particle systems, and animating TV commercials at R/Greenberg, Xaos, and PDI. He was a researcher and instructor in UW’s GRAIL lab, exploring non-photorealistic rendering (Computer Generated Watercolor and Loose and Sketchy Animation) and teaching an early iteration of the Capstone Animation class. At DreamWorks Animation he rigged characters for Shrek, and then animated them on films from Madagascar to How to Train Your Dragon (on which he co-supervised the main character, Toothless.) In 2015, he jumped into real-time graphics and immersive storytelling, joining Google Spotlight Stories to develop the non-photorealistic look of Patrick Osborne’s Oscar-nominated and Emmy-winning short Pearl, and has continued on to work on Jorge Gutierrez’ Emmy-nominated Son of Jaguar and John Kahrs’ Age of Sail, which recently premiered at the 2018 Venice Film Festival.

Gordon Stoll

Engineer, Valve
"The Devil in the Details: Measurement, calibration, and why it's hard to make a high-quality VR system"

November 6, 2018


Abstract: Recently we've seen the arrival of a new wave of virtual reality devices, arguably including the first genuinely usable consumer VR. There have been a large number of different devices built by different players, large and small, and even though their top-level architectures are similar they vary wildly in the quality of the end-user's experience. At best the user is genuinely transported to another world, and at worst they tear the headset off and never try it again. In this talk I'll discuss the non-obvious differences in VR systems and how errors that intuitively seem negligible are not so negligible when they're strapped to your face. I’ll talk at a high level about our work at Valve on the complex puzzle of diagnosing these errors and figuring out how to measure them in order to make higher-quality VR systems. I'll go into some detail on one (hopefully) useful example: a simple technique for measuring room-scale 3D tracking quality against ground truth.
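
The talk does not spell out the measurement technique, but one standard, generic way to score a tracked trajectory against ground truth, shown in the hedged Python sketch below, is to rigidly align the two trajectories and report the root-mean-square position error; the data here is fabricated for illustration.

    import numpy as np

    def align_rigid(est, ref):
        """Least-squares rigid alignment (rotation + translation) of an estimated
        trajectory to a reference trajectory via the standard SVD (Kabsch) construction."""
        mu_e, mu_r = est.mean(axis=0), ref.mean(axis=0)
        U, _, Vt = np.linalg.svd((est - mu_e).T @ (ref - mu_r))
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflections
        R = (U @ S @ Vt).T
        t = mu_r - R @ mu_e
        return (R @ est.T).T + t

    def tracking_rmse(est, ref):
        """Root-mean-square position error after alignment: one common, generic way
        to score tracking against ground truth (not necessarily Valve's method)."""
        aligned = align_rigid(est, ref)
        return float(np.sqrt(np.mean(np.sum((aligned - ref) ** 2, axis=1))))

    # Fabricated data: a ground-truth path and a "tracked" copy with a pose offset and noise.
    rng = np.random.default_rng(1)
    truth = np.column_stack([np.linspace(0, 2, 500),
                             np.sin(np.linspace(0, 6, 500)),
                             np.zeros(500)])
    tracked = (truth @ np.array([[0.999, -0.04, 0], [0.04, 0.999, 0], [0, 0, 1]]).T
               + np.array([0.05, -0.02, 0.0])
               + rng.normal(scale=0.002, size=truth.shape))
    print(f"tracking error (RMSE): {tracking_rmse(tracked, truth) * 1000:.1f} mm")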

Bio: Gordon is an engineer working on virtual reality at Valve. Over the past 6+ years he has helped to develop the technology behind the original Valve "Room" demo, the HTC Vive, and Valve's SteamVR tracking ("Lighthouse"). Most of his work has been in figuring out how to measure things, which is much, much more fun than it sounds. He developed the methods used to calibrate and test the HMD optics and the tracking basestations through multiple generations and has contributed to a number of other measurement and calibration systems including those for tracked objects and cameras.

Paul Debevec

Senior Scientist, Google VR
"Creating Photoreal Digital Actors (and Environments) for Movies, Games, and Virtual Reality"
Watch on YouTube

October 30, 2018


Abstract: Presenting recent work from USC ICT and Google VR for recording and rendering photorealistic actors and environments for movies, games, and virtual reality. The Light Stage facial scanning systems are geodesic spheres of inward-pointing LED lights which have been used to help create digital actors based on real people in movies such as Avatar, Benjamin Button, Maleficent, Furious 7, Blade Runner: 2049, and Ready Player One. Light Stages can also reproduce recorded omnidirectional lighting environments and have recently been extended with multispectral LED lights to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. Our full-body Light Stage 6 system was used in conjunction with natural language processing and an automultiscopic projector array to record and project interactive hologram-like conversations with survivors of the Holocaust. I will conclude the talk by presenting Google VR's "Welcome to Light Fields", the first downloadable virtual reality light field experience which records and displays 360 degree photographic environments that you can move around inside of with six degrees of freedom, creating VR experiences which are far more comfortable and immersive.

Bio: Paul Debevec is a Senior Scientist at Google VR and an adjunct research professor at the USC Institute for Creative Technologies in Los Angeles. His Ph.D. thesis (1996) under Prof. Jitendra Malik presented Façade, an image-based modeling and rendering system for creating photoreal architectural models from photographs. Using Façade he led the creation of virtual cinematography of the Berkeley campus for his 1997 film The Campanile Movie, whose techniques were used to create virtual backgrounds in The Matrix. Debevec pioneered high dynamic range image-based lighting techniques. At USC ICT, he continued the development of Light Stage devices for recording geometry and appearance, and helped create new 3D display devices for telepresence and teleconferencing. http://www.debevec.org/

Jeremy Bailenson

Professor, Stanford
"Experience On Demand: What Virtual Reality Is, How It Works, and What It Can Do"
Watch on YouTube  Watch on YouTube  ❐

October 23, 2018


Jeremy Bailenson
Abstract: Virtual reality is able to effectively blur the line between reality and illusion, pushing the limits of our imagination and granting us access to any experience imaginable. With well-crafted simulations, these experiences, which are so immersive that the brain believes they’re real, are already widely available with a VR headset and will only become more accessible and commonplace. But how does this new medium affect its users, and does it have a future beyond fantasy and escapism?

There are dangers and many unknowns in using VR, but it also can help us hone our performance, recover from trauma, improve our learning and communication abilities, and enhance our empathic and imaginative capacities. Like any new technology, its most incredible uses might be waiting just around the corner.



Bio: Jeremy Bailenson is founding director of Stanford University’s Virtual Human Interaction Lab, Thomas More Storke Professor in the Department of Communication, Professor (by courtesy) of Education, Professor (by courtesy) in the Program in Symbolic Systems, a Senior Fellow at the Woods Institute for the Environment, and a Faculty Leader at Stanford’s Center for Longevity. He earned a B.A. cum laude from the University of Michigan in 1994 and a Ph.D. in cognitive psychology from Northwestern University in 1999. He spent four years at the University of California, Santa Barbara as a Post-Doctoral Fellow and then an Assistant Research Professor.

Bailenson studies the psychology of Virtual Reality (VR), in particular how virtual experiences lead to changes in perceptions of self and others. His lab builds and studies systems that allow people to meet in virtual space, and explores the changes in the nature of social interaction. His most recent research focuses on how VR can transform education, environmental conservation, empathy, and health. He is the recipient of the Dean’s Award for Distinguished Teaching at Stanford.

He has published more than 100 academic papers in interdisciplinary journals such as Science and PLoS One, as well as in domain-specific journals in the fields of communication, computer science, education, environmental science, law, marketing, medicine, political science, and psychology. His work has been continuously funded by the National Science Foundation for 15 years.

Bailenson consults pro bono on VR policy for government agencies including the State Department, the US Senate, Congress, the California Supreme Court, the Federal Communications Commission, the U.S. Army, Navy, and Air Force, the Department of Defense, the Department of Energy, the National Research Council, and the National Institutes of Health.

His first book, Infinite Reality, co-authored with Jim Blascovich, was quoted by the U.S. Supreme Court in outlining the effects of immersive media. His new book, Experience on Demand, was reviewed by The New York Times, The Wall Street Journal, The Washington Post, Nature, and The Times of London, and was an Amazon best-seller.


Doug Lanman

Director of Computational Imaging, Facebook Reality Labs
"Reactive Displays: Unlocking Next-Generation VR/AR Visuals with Eye Tracking"

October 16, 2018


Doug Lanman
Abstract: As personal viewing devices, head-mounted displays offer a unique means to rapidly deliver richer visual experiences than past direct-view displays occupying a shared environment. Viewing optics, display components, and sensing elements may all be tuned for a single user. It is the latter element that most clearly differentiates these devices from the past, with individualized eye tracking playing an important role in unlocking higher resolutions, wider fields of view, and more comfortable visuals than past displays. This talk will explore the “reactive display” concept and how it may impact VR/AR devices in the coming years.
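
One widely discussed example of a display "reacting" to gaze (used here only as an illustration, not as a description of Facebook Reality Labs' implementation) is foveated rendering, which concentrates shading work where the viewer is looking and coarsens it in the periphery. A minimal Python sketch, assuming gaze and pixel positions in the same screen coordinates and a roughly constant pixels-per-degree; the eccentricity thresholds are illustrative, not taken from the talk:

```python
import math

def eccentricity_deg(pixel, gaze, pixels_per_degree):
    """Angular distance (degrees) between a pixel and the current gaze point."""
    dx, dy = pixel[0] - gaze[0], pixel[1] - gaze[1]
    return math.hypot(dx, dy) / pixels_per_degree

def shading_rate(ecc_deg):
    """Pick a shading rate that coarsens with eccentricity."""
    if ecc_deg < 5.0:      # fovea: shade every pixel
        return 1
    elif ecc_deg < 15.0:   # parafovea: one shade per 2x2 block
        return 2
    else:                  # periphery: one shade per 4x4 block
        return 4

# Example: at ~20 pixels per degree, a pixel 400 px from the gaze point
# sits at ~20 degrees eccentricity and would be shaded at quarter rate.
print(shading_rate(eccentricity_deg((1400, 800), (1000, 800), 20.0)))  # 4
```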

Bio: Douglas Lanman is the director of computational imaging at Facebook Reality Labs, where he leads investigations into advanced display and imaging technologies. His prior research has focused on head-mounted displays, glasses-free 3D displays, light-field cameras, and active illumination for 3D reconstruction and interaction. He received a B.S. in applied physics with honors from Caltech in 2002 and M.S. and Ph.D. degrees in electrical engineering from Brown University in 2006 and 2010, respectively. He was a senior research scientist at NVIDIA Research from 2012 to 2014, a postdoctoral associate at the MIT Media Lab from 2010 to 2012, and an assistant research staff member at MIT Lincoln Laboratory from 2002 to 2005.

Ben Lok

Professor, University of Florida
"Virtual Storytelling, Real Change"
Watch on YouTube  Watch on YouTube  ❐

October 9, 2018


Ben Lok

Abstract: What’s the social good issue you are passionate about? To change people’s hearts and minds, what is the story that needs to be experienced immersively? What if you could connect with a team of like-minded members with the cross-functional skills to realize your idea in VR? What if you could get started NOW? What would you build?

In this talk, we will discuss our experiences with the VR for the Social Good Initiative at the University of Florida (www.vrforthesocialgood.com), and provide a plan for those interested in implementing the program at their own institutions.

The VR for the Social Good Initiative was started in 2015 by journalism professor Sriram Kalyanaraman and computer science professor Benjamin Lok. Their goal was to connect people (e.g., researchers, startups, non-profits) who are seeking solutions to social good issues with people (e.g., students) who could solve those problems by creating virtual reality stories. Connecting seekers and solvers enables many new ideas for applying VR to social good issues to be generated, tested, and evaluated.

The VR for the Social Good Initiative started offering classes in the summer of 2017. The class has no prerequisites and requires no programming, and it is open to students of all majors, from freshmen to graduate students. Students in the class come from a wide range of backgrounds, including nursing, psychology, journalism, engineering, graphic design, education, building construction, and the sciences, among others.

This approach scales beyond traditional "VR classes" because the VR for the Social Good Initiative leverages lean software development, the Agile development methodology, and the Scrum framework. In the first year of the class, over 175 students created 36 projects, and next year over 300 students across multiple colleges will participate. Students from the course have gone on to join research groups, been involved in publications, generated initial data for funding, and participated in prestigious competitions such as Oculus's Top 100 Launchpad bootcamp.

We are expanding the class to hundreds, and potentially a thousand, students a year. This is an opportunity to train thousands of people to create immersive stories that address social good problems, which would be transformative for both the VR field and society in general. Everything described here can be replicated at your school and in your community; all materials for the class are available online at www.vrforthesocialgood.com. Empowering those who know social good issues best to become creators of immersive stories would change how society addresses its toughest problems. We are enabling everyone to become creators of solutions, not just consumers.



Bio: Ben Lok is a Professor in the Computer and Information Sciences and Engineering Department at the University of Florida and co-founder of Shadow Health, Inc., an education company. His research focuses on virtual humans and mixed reality in the areas of virtual environments, human-computer interaction, and computer graphics. Professor Lok received a Ph.D. (2002, advisor: Dr. Frederick P. Brooks, Jr.) and M.S. (1999) from the University of North Carolina at Chapel Hill, and a B.S. in Computer Science (1997) from the University of Tulsa. He did a post-doc fellowship (2003) under Dr. Larry F. Hodges.

Professor Lok received a UF Term Professorship (2017-2020), the Herbert Wertheim College of Engineering Faculty Mentoring Award (2016), an NSF CAREER Award (2007-2012), and the UF ACM CISE Teacher of the Year Award in 2005-2006. He and his students in the Virtual Experiences Research Group have received Best Paper Awards at ACM I3D (Top 3, 2003) and IEEE VR (2008). He currently serves as the chair of the Steering Committee of the IEEE Virtual Reality conference, and is an associate editor of Computers & Graphics and ACM Computing Surveys.


Gordon Wetzstein

Asst. Professor, Stanford
"Computational Near-eye Displays: Engineering the Interface between our Visual System and the Digital World"
Watch on YouTube  Watch on YouTube  ❐

June 8, 2018


Gordon Wetzstein
Abstract: Immersive visual and experiential computing systems, i.e. virtual and augmented reality (VR/AR), are entering the consumer market and have the potential to profoundly impact our society. Applications of these systems range from communication, entertainment, education, collaborative work, simulation, and training to telesurgery, phobia treatment, and basic vision research. In every immersive experience, the primary interface between the user and the digital world is the near-eye display. Thus, developing near-eye display systems that provide a high-quality user experience is of the utmost importance. Many characteristics of near-eye displays that define the quality of an experience, such as resolution, refresh rate, contrast, and field of view, have improved significantly in recent years. However, a significant source of visual discomfort persists: the vergence-accommodation conflict (VAC). Further, natural focus cues are not supported by any existing near-eye display. In this talk, we discuss frontiers of engineering next-generation opto-computational near-eye display systems to increase visual comfort and provide realistic and effective visual experiences.
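
For readers unfamiliar with the vergence-accommodation conflict: the eyes converge on a virtual object at its rendered distance, while accommodation is driven toward the headset's fixed focal plane, and the mismatch is commonly expressed in diopters (inverse meters). A tiny Python sketch; the 2 m focal plane below is an assumed example value, not a specification from the talk:

```python
def vac_diopters(vergence_distance_m, focal_distance_m):
    """Dioptric mismatch between where the eyes converge (the virtual object)
    and where the headset optics place the focal plane."""
    return abs(1.0 / vergence_distance_m - 1.0 / focal_distance_m)

# An object rendered 0.5 m away on a headset focused at 2.0 m:
# |1/0.5 - 1/2.0| = 1.5 D of conflict; the same object at 2.0 m gives 0 D.
print(vac_diopters(0.5, 2.0))  # 1.5
```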

Bio: Gordon Wetzstein is an Assistant Professor of Electrical Engineering and, by courtesy, of Computer Science at Stanford University. He is the leader of the Stanford Computational Imaging Lab, an interdisciplinary research group focused on advancing imaging, microscopy, and display systems. At the intersection of computer graphics, machine vision, optics, scientific computing, and perception, Prof. Wetzstein's research has a wide range of applications in next-generation consumer electronics, scientific imaging, human-computer interaction, remote sensing, and many other areas. Prior to joining Stanford in 2014, Prof. Wetzstein was a Research Scientist in the Camera Culture Group at the MIT Media Lab. He received a Ph.D. in Computer Science from the University of British Columbia in 2011 and graduated with Honors from the Bauhaus-Universität Weimar in Germany before that. He is the recipient of an Alain Fournier Ph.D. Dissertation Award, an NSF CAREER Award, an Alfred P. Sloan Fellowship, a Terman Fellowship, an Okawa Research Grant, the Electronic Imaging Scientist of the Year 2017 Award, and a Laval Virtual Award, as well as Best Paper and Demo Awards at ICCP 2011, 2014, and 2016 and at ICIP 2016.

2016


"Questions, Answers and Reflections"
Watch on YouTube  Watch on YouTube  ❐

May 31, 2016


Neal Stephenson
Bio: Neal Stephenson is an American writer and game designer known for his works of speculative fiction. His novels have been variously categorized as science fiction, historical fiction, cyberpunk, and "postcyberpunk". Other labels, such as "baroque", have been used. Stephenson's work explores subjects such as mathematics, cryptography, linguistics, philosophy, currency, and the history of science. He also writes non-fiction articles about technology in publications such as Wired. He has worked part-time as an advisor for Blue Origin, a company (funded by Jeff Bezos) developing a manned sub-orbital launch system, and is also a cofounder of Subutai Corporation, whose first offering is the interactive fiction project The Mongoliad.





"Lessons Learned in Prototyping for Emerging Hardware"

May 24, 2016


Drew Skillman and Patrick Hackett
Abstract: Drew Skillman and Patrick Hackett are veterans of the videogame industry, currently working at Google on Tilt Brush, a virtual reality painting application. They've spent the past 5 years working with emerging hardware, including the Kinect, Leap Motion Controller, PS4 Camera, Oculus Rift, Meta AR Glasses, GearVR, and the HTC Vive. They'll show off some of the things they've made, share the lessons they've learned, and talk about where they'd like to see things go.

Bio: Drew Skillman works with the Google VR team developing Tilt Brush, a Virtual Reality application that allows anyone to paint in 3D space at room scale. That project developed as part of a number of different augmented reality, virtual reality, and natural motion experiences at the company he co-founded (Skillman & Hackett) before it was acquired by Google in 2015. Prior to his work in VR, Drew developed games at Double Fine Productions in San Francisco as a technical artist, visual effects artist, lighting artist, and project lead. His shipped titles include Happy Action Theater, Kinect Party, Brutal Legend, Stacking, Iron Brigade, Dropchord, Little Pink Best Buds, Autonomous, and DLC for "The Playroom", a PS4 launch title. Drew received a B.A. in Physics from Reed College and focuses on interesting problems at the intersection of art and technology.

Patrick Hackett is currently on the Google VR Team developing Tilt Brush, a virtual reality application that allows anyone to paint in 3D space at room scale. Tilt Brush developed as part of a number of different augmented reality, virtual reality, and natural motion experiences at the company he co-founded, Skillman & Hackett, before it was acquired by Google in 2015. Patrick is a long-time proponent of rapid prototyping, and gave a talk at GDC 2013 on the various ways he and his team at Double Fine contorted the Kinect to create the game Double Fine Happy Action Theater. His shipped titles include Tilt Brush, Dropchord, Kinect Party, Happy Action Theater, Iron Brigade, Brutal Legend, and MX vs. ATV: Untamed, with contributions to Massive Chalice, The Cave, and numerous other prototypes and experiments.

Michael Gourlay

Principal Dev. Lead, HoloLens
"Insider Tips for Developing on Virtual and Augmented Reality Platforms"
Watch on YouTube  Watch on YouTube  ❐

May 17, 2016


Michael Gourlay
Abstract: Developing games and applications for VR and AR platforms entails special constraints and abilities unique to those platforms. This talk will explain enough of the technology behind how these platforms work that developers can exploit their strengths and avoid their pitfalls. The talk will also cover how the distinction between VR and AR impacts developers.

Bio: Dr. Michael J. Gourlay works as a Principal Development Lead in the Environment Understanding group of Analog R&D, on augmented reality platforms such as HoloLens. He previously worked at Electronic Arts (EA Sports) as the Software Architect for the Football Sports Business Unit, as a senior lead engineer on Madden NFL, on character physics and the procedural animation system used by EA, on Mixed Martial Arts (MMA), and as a lead programmer on NASCAR. He wrote the visual effects system used in EA games worldwide and patented algorithms for interactive, high-bandwidth online applications. He also architected FranTk, the game engine behind Connected Career and Connected Franchise. He also developed curricula for and taught at the University of Central Florida (UCF) Florida Interactive Entertainment Academy (FIEA), an interdisciplinary graduate program that teaches programmers, producers and artists how to make video games and training simulations. He is also a Subject Matter Expert for Studio B Productions, and writes articles for Intel on parallelized computational fluid dynamics simulations for video games. Prior to joining EA, he performed scientific research using computational fluid dynamics (CFD) and the world's largest massively parallel supercomputers. His previous research also includes nonlinear dynamics in quantum mechanical systems, and atomic, molecular and optical physics, stealth, RADAR and massively parallel supercomputer design. He also developed pedagogic orbital mechanics software. Michael received his degrees in physics and philosophy from Georgia Tech and the University of Colorado at Boulder.

Michael Abrash

Chief Scientist, Oculus
"Virtual reality – The Biggest Step since Personal Computing... and maybe more"
Watch on YouTube  Watch on YouTube  ❐

May 3, 2016


Michael Abrash
Abstract: Over the last 40 years the personal computing paradigm that came together at Xerox PARC has hugely changed how we work, play, and communicate, by bringing the digital world into the real world in human-oriented ways. Now we’re at the start of the next great paradigm shift - virtual reality - which puts us directly into the digital world. The long-term impact is unknowable, but potentially even greater than personal computing; taken to its logical limit, VR can create the full range of experiences of which humans are capable. The technology required to move VR forward is broad and challenging, and a lot of time and research will be required, but VR is very likely to once again change the way we work, play, and communicate. This talk will take a high-level look at what will be needed to make that happen.

Bio: Michael Abrash is Chief Scientist of Oculus. He was the GDI development lead for the first two versions of Windows NT, joined John Carmack to write Quake at id Software, worked on the first two versions of Xbox, co-authored the Pixomatic software renderer at RAD Game Tools, worked on Intel’s Larrabee project, worked on both augmented and virtual reality at Valve, and currently leads the Oculus Research team working on advancing the state of the art of AR and VR. He is also the author of several books, including Michael Abrash’s Graphics Programming Black Book, and has written and spoken frequently about graphics, performance programming, and virtual reality.

Steve Sullivan

Lead, Holographic Video Team, Microsoft
"Video Holograms for MR and VR"

April 26, 2016


Steve Sullivan
Abstract: We will discuss Microsoft’s recent work on free-viewpoint video, covering the algorithms, production process, and application to video holograms on HoloLens, VR, and traditional 2D experiences.

Bio: Steve currently leads the Holographic Video Team for HoloLens at Microsoft, creating free-viewpoint video of people and performances for mixed reality, virtual reality, and traditional 2D experiences. Prior to joining Microsoft, Steve was Director of R&D at ILM and then Senior Technology Officer for Lucasfilm. He led R&D across the Lucasfilm divisions, advancing the state of the art in computer graphics and content creation technology for film, TV, and games. He contributed to over 70 films and received three Academy Awards for Technology for matchmoving, image-based modeling, and on-set motion capture. He is a member of the Academy of Motion Picture Arts and Sciences, and currently serves on the Academy's Science and Technology Council. He received a PhD in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign, with an emphasis on automatic object modeling, recognition, and surface representations.

Clay Bavor

VP, VR/AR, Google
"Place, Time, and Memory"
Watch on YouTube  Watch on YouTube  ❐

April 19, 2016


Clay Bavor
Abstract: With some of Google's technical investments in VR as a background, Clay will share his thoughts on how VR will change much about the way we live, including the nature of place, time, and memory.

Bio: Clay joined Google in 2005 and has been involved with a number of projects across the company, including Search, AdWords, and AdMob. Since 2012, he has led product management and design for some of Google's most popular applications, such as Gmail, Google Docs, and Google Drive. Clay was one of the creators of Google Cardboard and has led the growth of Google's VR projects. Clay grew up in Los Altos, California, and created his first VR project at the age of 11, when he used HyperCard and hundreds of scanned photographs to create a virtual version of his parents' house. Clay holds a B.S.E. in Computer Science from Princeton University.

Ashraf Michail

Software Architect, HoloLens
"HoloLens – From Product Vision to Reality"

April 12, 2016


Ashraf Michail
Abstract: "Ashraf will talk about the HoloLens vision, some of the technical challenges, and how the HoloLens vision was turned into reality. This talk will include a discussion of some of the difficult problems solved by HoloLens including:
  •   - How HoloLens displays a stable hologram
  •   - How HoloLens understands the environment you are in
  •   - How HoloLens understands your input including gaze, gesture, and voice
  •   - How HoloLens custom silicon innovation such as the HPU enabled HoloLens to become an untethered holographic computer"


Bio: Ashraf has been the Software Architect for Microsoft HoloLens for the past several years, working on both software and hardware design. He has worked on HoloLens from the early days of product inception to shipping the HoloLens Development Edition. Ashraf has been developing platform and graphics technologies for the Microsoft Windows operating system groups since 1998, contributing to a variety of devices including Windows Desktop, Xbox, and Windows Phone. Prior to his work on HoloLens, Ashraf was known for computer graphics innovation and operating system work throughout a variety of Microsoft products.

Brian Murphy

Artist, Microsoft Studios
"4 Years Sculpting With Light"
Watch on YouTube  Watch on YouTube  ❐

April 5, 2016


Brian Murphy
Abstract: Brian will talk about what he's learned over the last 4 years designing holographic experiences. Specifically, he’ll cover lessons learned developing an immersive virtual travel experience called "HoloTour", dozens of commercial partner applications, and a variety of experiments, games, and demos. He’ll talk about what worked, what didn’t, and how designing for holograms presents unique challenges relative to conventional Virtual Reality.

Bio: Brian Murphy is an artist and designer who has spent the last 4 years within Microsoft Studios developing experiences for the HoloLens. Before that he was involved with many incubation projects, including major contributions to Kinect during its earliest stages. He has co-authored 12 patents related to emerging technologies and has helped ship 5 titles within Team Xbox, including "Kinect Adventures," which has sold more than 24 million units worldwide. Prior to his 9+ years in the game industry, Brian rattled around as a filmmaker, musician, editorial illustrator, and construction worker... so if you need a wall knocked out, he’s still pretty handy with a sledgehammer.

Nick Whiting

Technical Director, Epic Games
"The Making of Bullet Train"
Watch on YouTube  Watch on YouTube  ❐

March 29, 2016


Nick Whiting
Abstract: The session will cover the entire process of creating Epic Games' "Bullet Train" VR demo, from start to finish, highlighting design considerations surrounding the user experience of adding interaction to traditionally passive experiences and including a breakdown of alternative paths that were considered but didn't make the cut. The speakers will discuss where they had to diverge from their original design choices in order to match the players' expectations of the world they interact with. See how a small team created the entire "Bullet Train" VR demo from scratch in only 10 weeks, and understand the specific design considerations and tradeoffs used to match players' expectations of a highly-kinetic interactive VR experience.

Bio: Nick Whiting oversees the development of the award-winning Unreal Engine 4's virtual reality efforts, as well as the Blueprint visual scripting system. In addition to shipping the recent "Bullet Train," "Thief in the Shadows," "Showdown," and "Couch Knights" VR demos, he has helped ship titles in the blockbuster "Gears of War" series, including "Gears of War 3" and "Gears of War: Judgment."