The Reality Lab Lectures bring important researchers and practitioners from a variety of disciplines to the UW campus to present their work in Augmented, Virtual, and Mixed Reality and to discuss the future of the field.

Lectures are free and open to the public, with limited seating. The public will be admitted on a first-come, first-served basis. Some of these lectures will be filmed and later posted on our YouTube channel - subscribe to be notified of newly uploaded talks!

Shahram Izadi speaking at the UW Reality Lab Lectures

Lectures at a Glance

2019


Tom Furness

Professor, University of Washington
Director, HIT Lab
"My Attempts to Save the World"
Watch on YouTube

April 9, 2019

CSE2, Room G10
2:00pm

Furness pic
Abstract: Over a career spanning 53 years, Prof. Furness has been exploring and developing technologies for facilitating bandwidth between humans and computing machines. His work has encompassed fighter cockpits, virtual reality, retinal displays, educational tools, medical simulators, pain, phobias, molecular modeling, scanning fiber endoscopes and entertainment systems. This quest has been punctuated with side trips and ‘aha’ experiences that have led to unanticipated destinations. Dr. Furness plans to talk about lessons learned on his journey including unexpected delights…with an aim to inspire, entertain and challenge.

Bio: Thomas Furness is a pioneer in human interface technology and grandfather of virtual reality. He is the founder of the Human Interface Technology Laboratories (HIT Lab) at UW, at the University of Canterbury, New Zealand, and the University of Tasmania. He developed advanced cockpits and virtual interfaces for the U.S. Air Force and authored their Super Cockpit program. Currently, he is Professor of Industrial and Systems Engineering and Adjunct Professor in Electrical & Computer Engineering and Human Centered Design and Engineering (HCDE) at The University of Washington.

Andrew Rabinovich

Head of AI, Magic Leap
"Multi Task Learning for Computer Vision"
Watch on YouTube

April 23, 2019

CSE2, Room G10
2:00pm

Rabinovich pic
Abstract: Deep multitask networks, in which one neural network produces multiple predictive outputs, are more scalable and often better regularized than their single-task counterparts. Such advantages can potentially lead to gains in both speed and performance, but multitask networks are also difficult to train without finding the right balance between tasks. In this talk I will present novel gradient-based methods which automatically balance the multitask loss function by directly tuning the gradients to equalize task training rates. We show that for various network architectures, for both regression and classification tasks, and on both synthetic and real datasets, these techniques improve accuracy and reduce overfitting over single-task networks, static baselines, and other adaptive multitask loss balancing techniques. They match or surpass the performance of exhaustive grid search methods. Thus, what was once a tedious search process which incurred exponentially more compute for each task added can now be accomplished within a few training runs, irrespective of the number of tasks. Ultimately, we hope to demonstrate that gradient manipulation affords us great control over the training dynamics of multitask networks and may be one of the keys to unlocking the potential of multitask learning.
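To make the balancing idea concrete, here is a minimal sketch (in PyTorch, purely illustrative; the function name, update rule, and hyperparameters are simplifications rather than the speaker's actual method) of adjusting per-task loss weights so that the gradient norms of the weighted task losses stay close to their mean. GradNorm-style methods refine this by also accounting for each task's relative training rate.

```python
import torch

def balance_task_weights(task_losses, shared_params, weights, lr=0.025):
    """One simplified step of gradient-based multitask loss balancing.

    task_losses:   list of scalar task losses (with their graphs still attached)
    shared_params: list of shared-trunk parameters used to measure gradient norms
    weights:       1-D tensor of per-task loss weights (requires_grad=False)
    """
    # Measure the gradient norm each weighted task loss induces on the shared trunk.
    grad_norms = []
    for w, loss in zip(weights, task_losses):
        grads = torch.autograd.grad(w * loss, shared_params, retain_graph=True)
        grad_norms.append(torch.cat([g.flatten() for g in grads]).norm())
    grad_norms = torch.stack(grad_norms)

    # Nudge each weight toward equalizing gradient norms across tasks:
    # tasks producing small gradients get larger weights, and vice versa.
    target = grad_norms.mean()
    with torch.no_grad():
        weights += lr * (target - grad_norms) / (target + 1e-8)
        weights.clamp_(min=1e-3)
        weights *= len(weights) / weights.sum()  # keep weights summing to the task count

    # Weighted total loss to backpropagate through the whole network as usual.
    total_loss = sum(w * loss for w, loss in zip(weights, task_losses))
    return total_loss, weights
```

In a training loop, total_loss would replace a hand-tuned weighted sum of task losses, which is the grid-search burden the abstract refers to.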

Bio: Andrew Rabinovich is a leading scientist in Deep Learning and computer vision research. He has been studying machine learning with an emphasis on computer vision for over 15 years, is the author of numerous patents and peer-reviewed publications, and founded a biotechnology startup. Andrew received a PhD in Computer Science from UC San Diego in 2008, worked on R&D for Google, and is currently Director of Deep Learning and Head of AI at Magic Leap.

Timoni West

Director of XR, Unity
"Tools & Systems for Spatial Computing"
Watch on YouTube

April 29, 2019

CSE, Room 691
2:00pm

West pic
Abstract: Timoni West leads Unity's XR research arm, focusing on new tools for augmented and mixed reality today, and helping to define what systems we need to put into place in order to build the foundation for strong spatial computing in the future. In this talk she will go over the vision for the future, the systems being proposed, and the current tools in progress to help developers build robust, interesting augmented reality applications.

Bio: Timoni West is the Director of XR Research at Unity, where she leads a team of cross-disciplinary artists and engineers exploring new interfaces for human-computer interaction. Currently, her team focuses on spatial computing: how we will live, work, and create in a world where digital objects and the real world live side-by-side. One of her team’s first tools, EditorXR, a tool for editing Unity projects directly in virtual reality, won SF Design Week’s first-ever Virtual Tech award in 2018. A longtime technologist, Timoni was formerly SVP at Alphaworks, co-founder of Recollect, and CEO of Department of Design, a digital agency. She's worked for startups across the country, including Foursquare, Flickr, Causes, and Airtime. Timoni serves on the OVA board and is an advisor to Tvori and Spatial Studios, among others. In 2017, Timoni was listed in Next Reality News’ Top 50 to Watch. Additionally, she serves on XRDC’s advisory board, is a Sequoia Scout, and was a jury member for ADC’s 2018 Awards in Experiential Design.

Yaser Sheikh

Director, Facebook Reality Labs, Pittsburgh
Associate Professor, CMU
"Photorealistic Telepresence"
Watch on YouTube

May 7, 2019

CSE2, Room G10
2:00pm

Sheikh pic

Abstract: In this talk, I will describe early steps taken at FRL Pittsburgh in achieving photorealistic telepresence: realtime social interactions in AR/VR with avatars that look like you, move like you, and sound like you.

Telepresence is, perhaps, the only application that has the potential to bring billions of people into VR. It is the next step along the evolution from telegraphy to telephony to videoconferencing. Just like telephony and video-conferencing, the key attribute of success will be “authenticity”: users' trust that received signals (e.g., audio for the telephone and video/audio for VC) are truly those transmitted by their friends, colleagues, or family. The challenge arises from this seeming contradiction: how do we enable authentic interactions in artificial environments?

Our approach to this problem centers around codec avatars: the use of neural networks to address the computer vision (encoding) and computer graphics (decoding) problems in signal transmission and reception. The creation of codec avatars requires capture systems of unprecedented 3D sensing resolution, which I will also describe.
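As a rough illustration of that encoder/decoder split, here is a toy sketch (in PyTorch, with assumed image sizes, layer choices, and class name; it is not FRL's actual codec avatar architecture): the sender's captured imagery is encoded into a compact latent code that would be transmitted, and the receiver decodes that code, together with a desired viewpoint, into a rendered view of the avatar.

```python
import torch
import torch.nn as nn

class CodecAvatarSketch(nn.Module):
    """Toy codec-avatar-style model: encode sender imagery into a small latent
    code (the transmitted signal), then decode it plus a view direction into an
    avatar image seen from that viewpoint."""

    def __init__(self, latent_dim=128, image_size=64):
        super().__init__()
        feat = image_size // 4
        # Encoder ("computer vision" side): capture cameras -> compact latent code.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * feat * feat, latent_dim),
        )
        # Decoder ("computer graphics" side): latent code + view direction -> image.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + 3, 64 * feat * feat), nn.ReLU(),
            nn.Unflatten(1, (64, feat, feat)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, sender_images, view_dir):
        code = self.encoder(sender_images)  # this is what would cross the network
        return self.decoder(torch.cat([code, view_dir], dim=1))
```

Training such a model would minimize reconstruction error against views from a multi-camera capture rig, which is where the high-resolution capture systems mentioned above come in.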

Bio: Yaser Sheikh is an Associate Professor at the Robotics Institute, Carnegie Mellon University. He also directs the Facebook Reality Lab in Pittsburgh, which is devoted to achieving photorealistic social interactions in AR and VR. His research broadly focuses on machine perception and rendering of social behavior, spanning sub-disciplines in computer vision, computer graphics, and machine learning. With colleagues and students, he has won the Honda Initiation Award (2010), Popular Science’s "Best of What’s New" Award, best student paper award at CVPR (2018), best paper awards at WACV (2012), SAP (2012), SCA (2010), ICCV THEMIS (2009), best demo award at ECCV (2016), and he received the Hillman Fellowship for Excellence in Computer Science Research (2004). Yaser has served as a senior committee member at leading conferences in computer vision, computer graphics, and robotics including SIGGRAPH (2013, 2014), CVPR (2014, 2015, 2018), ICRA (2014, 2016), ICCP (2011), and served as an Associate Editor of CVIU. His research has been featured by various media outlets including The New York Times, BBC, MSNBC, Popular Science, and in technology media such as WIRED, The Verge, and New Scientist.

Jessica Brillhart

Founder, Vrai Pictures
Director, m ss ng p eces
"Radical Experimentation: Creating Content for Emerging Technologies"
Watch on YouTube

May 14, 2019

CSE2, Room G10
2:00pm

Brillhart pic
Abstract: Emerging technology has the capacity to expand our understanding of the world and evolve our ability to connect to each other – but in order to do this in any meaningful way, it is imperative to consider the technology’s effects on current pipelines and ecosystems, its impact on culture as a whole, and its ability to satiate – or change entirely – the current wishes and desires of a media-consuming society. The talk will be presented in three parts or “case studies,” each focusing on a specific emergent technology where creative development was leveraged as a means to test and better understand the technology in question. Each case study will explore how the considerations previously listed were met and the output that resulted. The first case study will focus on Jump, a 360-degree stereoscopic live-action virtual reality pipeline developed at Google. The second case study will focus on Bose AR, an audio-only augmented reality technology developed at Bose which allows users to experience spatial and immersive audio. The third will focus on Inception, a convolutional network. Initially trained on ImageNet to extrapolate the contents of images, its process was then reversed to instead identify and enhance patterns in those images – essentially to dream upon them. The result was a computer vision program called DeepDream. I propose that in order to truly and adequately address the potential societal and cultural effects of emerging technology, iterative creative output and experimentation must not only be implemented but continuously encouraged. Not only does this necessary process maximize an emergent technology’s chances of success – thus playing a crucial role in that technology’s development – but it also prepares it for introduction into a modern society. This process initiates an ecosystem in which users at scale are able to understand a technology’s potential, thus willfully embracing it as part of their lives in both an observational and active capacity.

Bio: Jessica Brillhart is an immersive director, writer, and theorist who is widely known for her pioneering work in virtual reality. She is the founder of the mixed reality studio, Vrai Pictures. Previously, Brillhart was the Principal Filmmaker for VR at Google where she worked with engineers to develop Google Jump, a virtual reality live-action ecosystem. Since then, Brillhart has made a range of highly acclaimed immersive experiences, working with such groups as NASA, Bose, the Philharmonia Orchestra in London, Google’s Artists and Machine Intelligence program, the Montreal Canadiens, Frank Gehry, and (unofficially) the Weather Channel. Her work explores the potential of immersive mediums while also diving into a number of important medium- and media-related issues, such as access, disability, and cultural representation. Brillhart has taken the stage at Google IO, Oculus Connect, FMX, and the New Yorker Tech Fest; she has worked as an advisor for Sundance New Frontiers, the Independent Film Project (IFP), and Electric South; and has been a judge for World Press Photo, ADC Young Guns, SXSW, and the Tribeca Film Festival. Her Medium publication, In the Blink of a Mind, has been used by universities, master classes, and creators all over the world. She was recognized as a pioneer in the field of immersive technology and entertainment by MIT and was part of their TR35 list in 2017. Most recently, Brillhart delivered the Convergence Keynote at SXSW 2019 and launched a spatial audio platform, Traverse, which won SXSW’s Special Jury Prize for The Future of Experience.

Philip Rosedale

Founder, Second Life
Founder, High Fidelity
"VR and Virtual Worlds"
Watch on YouTube

May 21, 2019

CSE2, Room G10
2:00pm

Rosedale pic
Abstract: I'll cover what I've learned and seen so far, from early VR hardware prototypes in the 90s, to the creation of Second Life starting in 1999, through the Rift Kickstarter and the founding of High Fidelity. Finally, I'll offer some thoughts on how VR and Virtual Worlds may affect humanity in the near future.

Bio: Philip Rosedale is CEO and co-founder of High Fidelity, a company devoted to exploring the future of next-generation shared virtual reality. Prior to High Fidelity, Rosedale created the virtual civilization Second Life, populated by one million active users generating US$700M in annual transaction volumes. In addition to numerous technology inventions (including the video conferencing product called FreeVue, acquired by RealNetworks in 1996 where Rosedale later served as CTO), Rosedale has also worked on experiments in distributed work and computing.

Yelena Rachitsky

Executive Producer of Experiences, Oculus
"The Hierarchy of Being: Embodying our Virtual Selves"
Watch on YouTube

May 28, 2019

CSE2, Room G10
2:00pm

Rachitsky pic
Abstract: I'll take an interdisciplinary approach to investigating how the body, movement, and presence of others can deliver immersion and specific behaviors in VR. The talk will bridge academic ideas with currently available VR experiences to connect ideas around embodiment, environment, and social interactions, making a strong case around the need for academics and content creators to work more closely together.

Bio: Yelena Rachitsky is an Executive Producer of Experiences at Oculus, overseeing dozens of groundbreaking, narrative-driven VR projects that range from Pixar's first VR project to original independent work. Prior to Oculus, she was the Creative Producer at Future of Storytelling (FoST), which aims to change how people communicate and tell stories in the digital age. Yelena also helped program for the Sundance Film Festival and Institute's New Frontier program and spent four years in the documentary division at Participant Media, working on films like Food Inc. and Waiting for Superman. She's passionate about big creative ideas that will make technology meaningful.

2018


VR Start-Up Panel Discussion

December 4, 2018


Cheshier's Pic

Jared Cheshier

CTO/Co-Founder, PlutoVR
Key's Pic

Forest Key

CEO/Founder, Pixvana
Giovanni's Pic

John SanGiovanni

CEO/Co-Founder, Visual Vocal

In this special lecture, we invited leaders of Seattle-area VR/AR startups to share their experiences in the form of a panel discussion. Topics of discussion included each company's key product and vision, challenges, strategies, fundraising, and other experiential topics.


Shahram Izadi

Director, AR/VR at Google
"Virtual Teleportation"
Video Coming Soon!

November 27, 2018


Izadi's Pic
Abstract: From the standpoint of the core technology, AR/VR has made massive advances in recent years, from consumer headsets to low-cost and precise head tracking. Arguably, however, AR/VR is still a technology in need of the killer app. In this talk, I'll argue for why the killer app is immersive telepresence, aka virtual teleportation. The concept of virtual teleportation is not new; we've all been dreaming about it since the holograms of Star Wars. However, with the advent of consumer AR/VR headsets, it is now tantalisingly close to becoming fact rather than just science fiction. At its core, however, there's a fundamental machine perception problem still to solve -- the digitization of humans in 3D and in real-time. In this talk I'll cover the work that we have done at Microsoft, perceptiveIO and now Google on this topic. I'll outline the challenges ahead for us to create a consumer product in this space. I'll demonstrate some of the core algorithms and technologies that can get us closer to making virtual teleportation a reality in the future.

Bio: Dr. Shahram Izadi is a director at Google within the AR/VR division. Prior to Google he was CTO and co-founder of perceptiveIO, a Bay-Area startup specializing in real-time computer vision and machine learning techniques for AR/VR. His company was acquired by Alphabet/Google in 2017. Previously he was a partner and research manager at Microsoft Research (both Redmond US and Cambridge UK) for 11 years where he led the interactive 3D technologies (I3D) group. His research focuses on building new sensing technologies and systems for AR/VR. Typically, this meant developing new sensing hardware (depth cameras and imaging sensors) alongside practical computer-vision or machine-learning algorithms and techniques for these technologies. He was at Xerox PARC in 2000-2002, and obtained his PhD from the Mixed Reality Lab at the University of Nottingham, UK, in 2004. In 2009, he was named one of the TR35, an annual list published by MIT Technology Review magazine, naming the world's top 35 innovators under the age of 35. He has published over 120 research papers (see DBLP & Google Scholar), and more than 120 patents. His work has led to products and projects such as the Microsoft Touch Mouse, Kinect for Windows, Kinect Fusion, and most recently HoloLens and Holoportation.

Cassidy Curtis

Technical Art Lead, Google
"From Windy Day to Age of Sail: Five Years of Immersive Storytelling at Google Spotlight Stories"

November 20, 2018


Cassidy Curtis's Pic
Abstract: How can you make a movie, but give the audience the camera? This is the question that launched Google Spotlight Stories. Technical Art Lead Cassidy Curtis will talk about how the group’s work has evolved from its origins in mobile immersive storytelling to VR, film and beyond. He’ll show examples from stories that span a range of visual styles, directorial voices and storytelling strategies, from linear (Age of Sail, Pearl) to highly interactive (Back to the Moon, Rain or Shine) and discuss the discoveries the team has made along the way.

Bio: Cassidy Curtis has worked in computer animation for three decades, in many corners of the field. As a math major (and art minor) from Brown University, he got his start developing image processing and particle systems, and animating TV commercials at R/Greenberg, Xaos, and PDI. He was a researcher and instructor in UW’s GRAIL lab, exploring non-photorealistic rendering (Computer Generated Watercolor and Loose and Sketchy Animation) and teaching an early iteration of the Capstone Animation class. At DreamWorks Animation he rigged characters for Shrek, and then animated them on films from Madagascar to How to Train Your Dragon (on which he co-supervised the main character, Toothless.) In 2015, he jumped into real-time graphics and immersive storytelling, joining Google Spotlight Stories to develop the non-photorealistic look of Patrick Osborne’s Oscar-nominated and Emmy-winning short Pearl, and has continued on to work on Jorge Gutierrez’ Emmy-nominated Son of Jaguar and John Kahrs’ Age of Sail, which recently premiered at the 2018 Venice Film Festival.

Gordon Stoll

Engineer, Valve
"The Devil in the Details: Measurement, calibration, and why it's hard to make a high-quality VR system"

November 6, 2018


Gordon Stoll's Pic
Abstract: Recently we've seen the arrival of a new wave of virtual reality devices, arguably including the first genuinely usable consumer VR. There have been a large number of different devices built by different players, large and small, and even though their top-level architectures are similar they vary wildly in the quality of the end-user's experience. At best the user is genuinely transported to another world, and at worst they tear the headset off and never try it again. In this talk I'll discuss the non-obvious differences in VR systems and how errors that intuitively seem negligible are not so negligible when they're strapped to your face. I’ll talk at a high level about our work at Valve on the complex puzzle of diagnosing these errors and figuring out how to measure them in order to make higher-quality VR systems. I'll go into some detail on one (hopefully) useful example: a simple technique for measuring room-scale 3D tracking quality against ground truth.

Bio: Gordon is an engineer working on virtual reality at Valve. Over the past 6+ years he has helped to develop the technology behind the original Valve "Room" demo, the HTC Vive, and Valve's SteamVR tracking ("Lighthouse"). Most of his work has been in figuring out how to measure things, which is much, much more fun than it sounds. He developed the methods used to calibrate and test the HMD optics and the tracking basestations through multiple generations and has contributed to a number of other measurement and calibration systems including those for tracked objects and cameras.

Paul Debevec

Senior Scientist, Google VR
"Creating Photoreal Digital Actors (and Environments) for Movies, Games, and Virtual Reality"
Watch on YouTube

October 30, 2018


Paul Debevec's Pic
Abstract: Presenting recent work from USC ICT and Google VR for recording and rendering photorealistic actors and environments for movies, games, and virtual reality. The Light Stage facial scanning systems are geodesic spheres of inward-pointing LED lights which have been used to help create digital actors based on real people in movies such as Avatar, Benjamin Button, Maleficent, Furious 7, Blade Runner: 2049, and Ready Player One. Light Stages can also reproduce recorded omnidirectional lighting environments and have recently been extended with multispectral LED lights to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. Our full-body Light Stage 6 system was used in conjunction with natural language processing and an automultiscopic projector array to record and project interactive hologram-like conversations with survivors of the Holocaust. I will conclude the talk by presenting Google VR's "Welcome to Light Fields", the first downloadable virtual reality light field experience which records and displays 360 degree photographic environments that you can move around inside of with six degrees of freedom, creating VR experiences which are far more comfortable and immersive.

Bio: Paul Debevec is a Senior Scientist at Google VR and an adjunct research professor at the USC Institute for Creative Technologies in Los Angeles. His Ph.D. thesis (1996) under Prof. Jitendra Malik presented Façade, an image-based modeling and rendering system for creating photoreal architectural models from photographs. Using Façade he led the creation of virtual cinematography of the Berkeley campus for his 1997 film The Campanile Movie whose techniques were used to create virtual backgrounds in The Matrix. Debevec pioneered high dynamic range image-based lighting techniques. At USC ICT, he continued the development of Light Stage devices for recording geometry and appearance, and helped create new 3D Display devices for telepresence and teleconferencing. http://www.debevec.org/

Jeremy Bailenson

Professor, Stanford
"Experience On Demand: What Virtual Reality Is, How It Works, and What It Can Do"
Watch on YouTube

October 23, 2018


Jeremy Bailenson's Pic
Abstract: Virtual reality is able to effectively blur the line between reality and illusion, pushing the limits of our imagination and granting us access to any experience imaginable. With well-crafted simulations, these experiences, which are so immersive that the brain believes they’re real, are already widely available with a VR headset and will only become more accessible and commonplace. But how does this new medium affect its users, and does it have a future beyond fantasy and escapism?

There are dangers and many unknowns in using VR, but it also can help us hone our performance, recover from trauma, improve our learning and communication abilities, and enhance our empathic and imaginative capacities. Like any new technology, its most incredible uses might be waiting just around the corner.



Bio: Jeremy Bailenson is founding director of Stanford University’s Virtual Human Interaction Lab, Thomas More Storke Professor in the Department of Communication, Professor (by courtesy) of Education, Professor (by courtesy) in the Program in Symbolic Systems, a Senior Fellow at the Woods Institute for the Environment, and a Faculty Leader at Stanford’s Center for Longevity. He earned a B.A. cum laude from the University of Michigan in 1994 and a Ph.D. in cognitive psychology from Northwestern University in 1999. He spent four years at the University of California, Santa Barbara as a Post-Doctoral Fellow and then an Assistant Research Professor.

Bailenson studies the psychology of Virtual Reality (VR), in particular how virtual experiences lead to changes in perceptions of self and others. His lab builds and studies systems that allow people to meet in virtual space, and explores the changes in the nature of social interaction. His most recent research focuses on how VR can transform education, environmental conservation, empathy, and health. He is the recipient of the Dean’s Award for Distinguished Teaching at Stanford.

He has published more than 100 academic papers, in interdisciplinary journals such as Science and PLoS One, as well as in domain-specific journals in the fields of communication, computer science, education, environmental science, law, marketing, medicine, political science, and psychology. His work has been continuously funded by the National Science Foundation for 15 years.

Bailenson consults pro bono on VR policy for government agencies including the State Department, the US Senate, Congress, the California Supreme Court, the Federal Communications Commission, the U.S. Army, Navy, and Air Force, the Department of Defense, the Department of Energy, the National Research Council, and the National Institutes of Health.

His first book Infinite Reality, co-authored with Jim Blascovich, was quoted by the U.S. Supreme Court outlining the effects of immersive media. His new book, “Experience on Demand”, was reviewed by The New York Times, The Wall Street Journal, The Washington Post, Nature, and The Times of London, and was an Amazon Best-seller.


Doug Lanman

Director of Computational Imaging, Facebook Reality Labs
"Reactive Displays: Unlocking Next-Generation VR/AR Visuals with Eye Tracking"
Video Coming Soon!

October 16, 2018


Doug Lanman's Pic
Abstract: As personal viewing devices, head-mounted displays offer a unique means to rapidly deliver richer visual experiences than past direct-view displays occupying a shared environment. Viewing optics, display components, and sensing elements may all be tuned for a single user. It is the latter element that most differentiates these devices from the past, with individualized eye tracking playing an important role in unlocking higher resolutions, wider fields of view, and more comfortable visuals than past displays. This talk will explore the “reactive display” concept and how it may impact VR/AR devices in the coming years.

Bio: Douglas Lanman is the director of computational imaging at Facebook Reality Labs, where he leads investigations into advanced display and imaging technologies. His prior research has focused on head-mounted displays, glasses-free 3D displays, light-field cameras, and active illumination for 3D reconstruction and interaction. He received a B.S. in applied physics with honors from Caltech in 2002 and M.S. and Ph.D. degrees in electrical engineering from Brown University in 2006 and 2010, respectively. He was a senior research scientist at NVIDIA Research from 2012 to 2014, a postdoctoral associate at the MIT Media Lab from 2010 to 2012, and an assistant research staff member at MIT Lincoln Laboratory from 2002 to 2005.

Ben Lok

Professor, University of Florida
"Virtual Storytelling, Real Change"
Watch on YouTube

October 9, 2018


Ben Lok's Pic

Abstract: What’s the social good issue you are passionate about? To change people’s hearts and minds, what is the story that needs to be experienced immersively? What if you could connect with a team of like-minded members with the cross-functional skills to realize your idea in VR? What if you could get started NOW? What would you build?

In this talk, we will discuss our experiences with the VR for the Social Good Initiative at the University of Florida (www.vrforthesocialgood.com), and provide a plan for those interested in implementing the program at their own institutions.

The VR for the Social Good Initiative was started in 2015 by journalism professor Sriram Kalyanaraman and computer science professor Benjamin Lok. Their goal was to connect people (e.g. researchers, startups, non-profits) who are seeking solutions to social good issues with people (e.g. students) who could solve those problems by creating virtual reality stories. Connecting seekers and solvers enables many new ideas for applying VR to social good issues to be generated, tested, and evaluated.

The VR for the Social Good Initiative started offering classes in the summer of 2017. The class has no prerequisites and requires no programming. It is open to students of all majors, from freshmen to graduate students. Students in the class come from a wide set of backgrounds, including nursing, psychology, journalism, engineering, graphic design, education, building construction, and the sciences.

This approach scales beyond traditional “VR classes” because the VR for the Social Good Initiative leverages the concepts of lean software development, the Agile development methodology, and the scrum framework. In the first year of the class, over 175 students created 36 projects. Next year, over 300 students across multiple colleges will participate. Students from the course have gone on to join research groups, contribute to publications, generate initial data for funding, and participate in prestigious competitions, such as Oculus’s Top 100 Launchpad bootcamp.

We are expanding the class to hundreds, and potentially a thousand, students a year. This is an opportunity to have thousands of people trained to create immersive stories to solve social good problems, which would be transformative to both the VR field and society in general. Everything here can be replicated at your school and in your community. All materials for the class are available online at www.vrforthesocialgood.com. Empowering those who know social good issues best to be creators of immersive stories would transform how society addresses our toughest problems. We are enabling everyone to become creators of solutions, not just consumers.



Bio: Ben Lok is a Professor in the Computer and Information Sciences and Engineering Department at the University of Florida and co-founder of Shadow Health, Inc., an education company. His research focuses on virtual humans and mixed reality in the areas of virtual environments, human-computer interaction, and computer graphics. Professor Lok received a Ph.D. (2002, advisor: Dr. Frederick P. Brooks, Jr.) and M.S. (1999) from the University of North Carolina at Chapel Hill, and a B.S. in Computer Science (1997) from the University of Tulsa. He did a post-doc fellowship (2003) under Dr. Larry F. Hodges.

Professor Lok received a UF Term Professorship (2017-2020), the Herbert Wertheim College of Engineering Faculty Mentoring Award (2016), an NSF CAREER Award (2007-2012), and the UF ACM CISE Teacher of the Year Award in 2005-2006. He and his students in the Virtual Experiences Research Group have received Best Paper Awards at ACM I3D (Top 3, 2003) and IEEE VR (2008). He currently serves as the chair of the Steering Committee of the IEEE Virtual Reality conference. Professor Lok is an associate editor of Computers & Graphics and ACM Computing Surveys.


Gordon Wetzstein

Asst. Professor, Stanford
"Computational Near-eye Displays: Engineering the Interface between our Visual System and the Digital World"
Watch on YouTube

June 8, 2018


Gordon Wetzstein's Pic
Abstract: Immersive visual and experiential computing systems, i.e. virtual and augmented reality (VR/AR), are entering the consumer market and have the potential to profoundly impact our society. Applications of these systems range from communication, entertainment, education, collaborative work, simulation and training to telesurgery, phobia treatment, and basic vision research. In every immersive experience, the primary interface between the user and the digital world is the near-eye display. Thus, developing near-eye display systems that provide a high-quality user experience is of the utmost importance. Many characteristics of near-eye displays that define the quality of an experience, such as resolution, refresh rate, contrast, and field of view, have been significantly improved in recent years. However, a significant source of visual discomfort prevails: the vergence-accommodation conflict (VAC). Further, natural focus cues are not supported by any existing near-eye display. In this talk, we discuss frontiers of engineering next-generation opto-computational near-eye display systems to increase visual comfort and provide realistic and effective visual experiences.

Bio: Gordon Wetzstein is an Assistant Professor of Electrical Engineering and, by courtesy, of Computer Science at Stanford University. He is the leader of the Stanford Computational Imaging Lab, an interdisciplinary research group focused on advancing imaging, microscopy, and display systems. At the intersection of computer graphics, machine vision, optics, scientific computing, and perception, Prof. Wetzstein's research has a wide range of applications in next-generation consumer electronics, scientific imaging, human-computer interaction, remote sensing, and many other areas. Prior to joining Stanford in 2014, Prof. Wetzstein was a Research Scientist in the Camera Culture Group at the MIT Media Lab. He received a Ph.D. in Computer Science from the University of British Columbia in 2011 and graduated with Honors from the Bauhaus in Weimar, Germany before that. He is the recipient of an Alain Fournier Ph.D. Dissertation Award, an NSF CAREER Award, an Alfred P. Sloan Fellowship, a Terman Fellowship, an Okawa Research Grant, the Electronic Imaging Scientist of the Year 2017 Award, and a Laval Virtual Award as well as Best Paper and Demo Awards at ICCP 2011, 2014, and 2016 and at ICIP 2016.

2016


"Questions, Answers and Reflections"
Watch on YouTube

May 31, 2016


Neal Stephenson's Pic
Bio: Neal Stephenson is an American writer and game designer known for his works of speculative fiction. His novels have been variously categorized as science fiction, historical fiction, cyberpunk, and "postcyberpunk". Other labels, such as "baroque", have been used. Stephenson's work explores subjects such as mathematics, cryptography, linguistics, philosophy, currency, and the history of science. He also writes non-fiction articles about technology in publications such as Wired. He has worked part-time as an advisor for Blue Origin, a company (funded by Jeff Bezos) developing a manned sub-orbital launch system, and is also a cofounder of Subutai Corporation, whose first offering is the interactive fiction project The Mongoliad.





"Lessons Learned in Prototyping for Emerging Hardware"

May 24, 2016


Drew Skillman and Patrick Hackett's Pic
Abstract: Drew Skillman and Patrick Hackett are veterans of the videogame industry, currently working at Google on Tilt Brush, a virtual reality painting application. They've spent the past 5 years working with emerging hardware, including the Kinect, Leap Motion Controller, PS4 Camera, Oculus Rift, Meta AR Glasses, GearVR, and the HTC Vive. They'll show off some of the things they've made, share the lessons they've learned, and talk about where they'd like to see things go.

Bio: Drew Skillman works with the Google VR team developing Tilt Brush, a Virtual Reality application that allows anyone to paint in 3D space at room scale. That project developed as part of a number of different augmented reality, virtual reality, and natural motion experiences at the company he co-founded (Skillman & Hackett) before it was acquired by Google in 2015. Prior to his work in VR, Drew developed games at Double Fine Productions in San Francisco, as a technical artist, visual effects artist, lighting artist, and project lead. His shipped titles include Happy Action Theater, Kinect Party, Brutal Legend, Stacking, Iron Brigade, Dropchord, Little Pink Best Buds, Autonomous, and DLC for "The Playroom", a PS4 launch title. Drew received a B.A. in Physics from Reed College and focuses on interesting problems at the intersection of art and technology.

Patrick Hackett is currently on the Google VR Team developing Tilt Brush, a virtual reality application that allows anyone to paint in 3D space at room scale. Tilt Brush developed as part of a number of different augmented reality, virtual reality, and natural motion experiences at the company he co-founded, Skillman & Hackett, before it was acquired by Google in 2015. Patrick is a long-time proponent of rapid prototyping, and gave a talk at GDC 2013 regarding the various ways he and his team at Double Fine contorted the Kinect to create the game Double Fine Happy Action Theater. Shipped titles include Tilt Brush, Dropchord, Kinect Party, Happy Action Theater, Iron Brigade, Brutal Legend, and MX vs. ATV: Untamed, with contributions to Massive Chalice, The Cave, and numerous other prototypes and experiments.

Michael Gourlay

Principal Dev. Lead, Hololens
"Insider Tips for Developing on Virtual and Augmented Reality Platforms"
Watch on YouTube

May 17, 2016


Michael Gourlay's Pic
Abstract: Developing games and applications for VR and AR platforms entails special constraints and abilities unique to those platforms. This talk will explain enough technology behind how they work so that developers can exploit strengths and avoid pitfalls. Also, the talk will cover how the distinction between VR and AR impacts developers.

Bio: Dr. Michael J. Gourlay works as a Principal Development Lead in the Environment Understanding group of Analog R&D, on augmented reality platforms such as HoloLens. He previously worked at Electronic Arts (EA Sports) as the Software Architect for the Football Sports Business Unit, as a senior lead engineer on Madden NFL, on character physics and the procedural animation system used by EA, on Mixed Martial Arts (MMA), and as a lead programmer on NASCAR. He wrote the visual effects system used in EA games worldwide and patented algorithms for interactive, high-bandwidth online applications. He also architected FranTk, the game engine behind Connected Career and Connected Franchise. He also developed curricula for and taught at the University of Central Florida (UCF) Florida Interactive Entertainment Academy (FIEA), an interdisciplinary graduate program that teaches programmers, producers and artists how to make video games and training simulations. He is also a Subject Matter Expert for Studio B Productions, and writes articles for Intel on parallelized computational fluid dynamics simulations for video games. Prior to joining EA, he performed scientific research using computational fluid dynamics (CFD) and the world's largest massively parallel supercomputers. His previous research also includes nonlinear dynamics in quantum mechanical systems, and atomic, molecular and optical physics, stealth, RADAR and massively parallel supercomputer design. He also developed pedagogic orbital mechanics software. Michael received his degrees in physics and philosophy from Georgia Tech and the University of Colorado at Boulder.

Michael Abrash

Chief Scientist, Oculus
"Virtual reality – The Biggest Step since Personal Computing... and maybe more"
Watch on YouTube

May 3, 2016


Michael Abrash's Pic
Abstract: Over the last 40 years the personal computing paradigm that came together at Xerox PARC has hugely changed how we work, play, and communicate, by bringing the digital world into the real world in human-oriented ways. Now we’re at the start of the next great paradigm shift - virtual reality - which puts us directly into the digital world. The long-term impact is unknowable, but potentially even greater than personal computing; taken to its logical limit, VR can create the full range of experiences of which humans are capable. The technology required to move VR forward is broad and challenging, and a lot of time and research will be required, but VR is very likely to once again change the way we work, play, and communicate. This talk will take a high-level look at what will be needed to make that happen.

Bio: Michael Abrash is Chief Scientist of Oculus. He was the GDI development lead for the first two versions of Windows NT, joined John Carmack to write Quake at Id Software, worked on the first two versions of Xbox, co-authored the Pixomatic software renderer at Rad Game Tools, worked on Intel’s Larrabee project, worked on both augmented and virtual reality at Valve, and currently leads the Oculus Research team working on advancing the state of the art of AR and VR. He is also the author of several books, including Michael Abrash’s Graphics Programming Black Book, and has written and spoken frequently about graphics, performance programming, and virtual reality.

Steve Sullivan

Lead, Holographic Video Team, Microsoft
"Video Holograms for MR and VR"

April 26, 2016


Steve Sullivan's Pic
Abstract: We will discuss Microsoft’s recent work on free-viewpoint video, covering the algorithms, production process, and application to video holograms on Hololens, VR, and traditional 2D experiences.

Bio: Steve currently leads the Holographic Video Team for Hololens at Microsoft, creating free-viewpoint video of people and performances for mixed reality, virtual reality, and traditional 2D experiences. Prior to joining Microsoft, Steve was Director of R&D at ILM and then Senior Technology Officer for Lucasfilm. He led R&D across the Lucas Divisions, advancing the state of the art in computer graphics and content creation tech for film, TV, and games. He contributed to over 70 films and received three Academy Awards for Technology for matchmoving, image-based modeling, and on-set motion capture. He is a member of the Academy of Motion Picture Arts and Sciences, and currently serves on the Academy's Science and Technology Council. He received a PhD in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign, with an emphasis on automatic object modeling, recognition, and surface representations.

Clay Bavor

VP, VR/AR, Google
"Place, Time, and Memory"
Watch on YouTube

April 19, 2016


Clay Bavor's Pic
Abstract: With some of Google's technical investments in VR as a background, Clay will share his thoughts on how VR will change much about the way we live, including the nature of place, time, and memory.

Bio: Clay joined Google in 2005 and has been involved with a number of projects across the company, including Search, AdWords, and AdMob. Since 2012, he has led product management and design for some of Google's most popular applications, such as Gmail, Google Docs, and Google Drive. Clay was one of the creators of Google Cardboard and has led the growth of Google's VR projects. Clay grew up in Los Altos, California, and created his first VR project at the age of 11, when he used HyperCard and hundreds of scanned photographs to create a virtual version of his parents' house. Clay holds a B.S.E. in Computer Science from Princeton University.

Ashraf Michail

Software Architect, Hololens
"HoloLens – From Product Vision to Reality"

April 12, 2016


Ashraf Michail's Pic
Abstract: "Ashraf will talk about the HoloLens vision, some of the technical challenges, and how the HoloLens vision was turned into reality. This talk will include a discussion of some of the difficult problems solved by HoloLens including:
  •   - How HoloLens displays a stable hologram
  •   - How HoloLens understands the environment you are in
  •   - How HoloLens understands your input including gaze, gesture, and voice
  •   - How HoloLens custom silicon innovation such as the HPU enabled HoloLens to become an untethered holographic computer"


Bio: Ashraf has been the Software Architect for Microsoft HoloLens for the past several years, working on both software and hardware design. He has worked on HoloLens from the early days of product inception to shipping the HoloLens Development Edition. Ashraf has been developing platform and graphics technologies for the Microsoft Windows operating system groups since 1998, contributing to a variety of devices including Windows Desktop, Xbox, and Windows Phone. Prior to his work on HoloLens, Ashraf was known for computer graphics innovation and operating system work throughout a variety of Microsoft products.

Brian Murphy

Artist, Microsoft Studios
"4 Years Sculpting With Light"
Watch on YouTube

April 5, 2016


Brian Murphy's Pic
Abstract: Brian will talk about what he's learned over the last 4 years designing holographic experiences. Specifically, he’ll cover lessons learned developing an immersive virtual travel experience called "HoloTour", dozens of commercial partner applications, and a variety of experiments, games, and demos. He’ll talk about what worked, what didn’t, and how designing for holograms presents unique challenges relative to conventional Virtual Reality.

Bio: Brian Murphy is an artist and designer who has spent the last 4 years working within Microsoft Studios, developing experiences for the HoloLens. Before that, he was involved with many incubation projects, including major contributions to Kinect during its earliest stages. He has co-authored 12 patents related to emerging technologies and has helped ship 5 titles within Team Xbox, including "Kinect Adventures", which has sold more than 24 million units worldwide. Prior to his 9+ years in the game industry, Brian rattled around as a filmmaker, musician, editorial illustrator, and construction worker... so, if you need a wall knocked out, he’s still pretty handy with a sledgehammer.

Nick Whiting

Technical Director, Epic Games
"The Making of Bullet Train"
Watch on YouTube

March 29, 2016


Nick Whiting's Pic
Abstract: The session will cover the entire process of creating Epic Games' "Bullet Train" VR demo, from start to finish, highlighting design considerations surrounding the user experience of adding interaction to traditionally passive experiences, including a breakdown of alternative paths that were considered but didn't make the cut. The speakers will discuss where they had to diverge from their original design choices in order to match the players' expectations of the world they interact with. See how a small team created the entire "Bullet Train" VR demo from scratch in only 10 weeks, and understand the specific design considerations and tradeoffs used to match players' expectations of a highly-kinetic interactive VR experience.

Bio: Nick Whiting oversees the development of the award-winning Unreal Engine 4's virtual reality efforts, as well as the Blueprint visual scripting system. In addition to shipping the recent "Bullet Train," "Thief in the Shadows," "Showdown," and "Couch Knights" VR demos, he has helped ship titles in the blockbuster "Gears of War" series, including "Gears of War 3" and "Gears of War: Judgment."