Keynote Speakers

Andrew Glassner
Wētā FX
Short Bio
Andrew Glassner is a Senior Research Engineer at Wētā FX, where he develops tools to help artists produce amazing visual effects for movies and television.
Glassner has served as Papers Chair for SIGGRAPH ’94, Founding Editor of the Journal of Computer Graphics Tools, and Editor-in-Chief of ACM Transactions on Graphics. Andrew is a well-known writer of numerous technical papers and books. Some of his books include the “Graphics Gems” series, the “Andrew Glassner’s Notebook” series, “Principles of Digital Image Synthesis,” and “Deep Learning: A Visual Approach.” His most recent book is “Quantum Computing: From Concepts to Code.” He has carried out research at the NYIT Computer Graphics Lab, Xerox PARC, Microsoft Research, the Imaginary Institute, Unity, and Wētā FX.
Glassner has written and directed live-action and animated films, written several novels and screenplays, and was writer-director of an online multiplayer murder-mystery game for The Microsoft Network.
In his spare time, Andrew paints, plays and writes music, and hikes.
Quantum Computing and Computer Graphics
Some technological revolutions change societies and the tools they depend on. Recently, electronics, computers, and cell phones have upended our cultures, and AI seems to be doing it again. Next on the horizon are quantum computers.
These devices – already built and working – offer us capabilities completely unlike those of classical computers. One of their key features is called quantum parallelism. This refers to the ability of a quantum computer to evaluate an arbitrary number of inputs (billions! trillions! any number you can dream of) simultaneously, in the time it takes to evaluate only one. Nothing is perfect, though: from these vast results, we can only extract one output at a time – and we usually cannot even choose which one we’ll get! Navigating this situation, and others like it, is leading us into a new art of programming based on new ideas.
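For readers new to the idea, here is a minimal toy sketch (an editor's illustration in plain Python with NumPy, not material from the talk) of the asymmetry described above: a function is applied to every basis state of a small simulated register at once, yet a measurement returns only one randomly chosen outcome. The function f and the register size are arbitrary stand-ins.

```python
import numpy as np

# Toy illustration of quantum parallelism (NOT from the talk):
# an n-qubit register in uniform superposition holds 2^n inputs at once,
# but measuring it yields only one outcome, chosen at random.

n = 3                                   # 3 qubits -> 8 basis states
N = 2 ** n

# Uniform superposition: amplitude 1/sqrt(N) on every basis state |x>.
state = np.full(N, 1.0 / np.sqrt(N), dtype=complex)

# A made-up Boolean function f(x); a phase oracle applies it to ALL
# basis states in a single step (this is the "parallel" evaluation).
def f(x: int) -> int:
    return int(bin(x).count("1") % 2 == 0)   # example: parity of x

phases = np.array([(-1) ** f(x) for x in range(N)])
state = phases * state                  # every f(x) is now encoded as a phase

# Measurement: we obtain exactly ONE basis state, with probability
# |amplitude|^2, and we cannot choose which one.
probs = np.abs(state) ** 2
outcome = np.random.choice(N, p=probs)
print(f"measured |{outcome:0{n}b}>  (one outcome out of {N} evaluated inputs)")
```

The toy only mirrors the asymmetry the abstract describes; real quantum algorithms are designed so that the single outcome we do obtain is likely to be a useful one.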
When quantum computers become plentiful, cheap, and reliable (and they’re becoming more of all of these things every day), many of the algorithms we use every day in computer graphics will be radically changed. We’ll use quantum computing in tasks from modeling and rendering to simulation and interaction. In this talk I’ll discuss the key ideas underlying quantum computers, and speculate on their applications in computer graphics. Quantum computing will transform our field – this is the perfect time to prepare!

Ayush Bhargava
Meta Reality Labs
Short Bio
Dr. Ayush Bhargava is a User Experience Researcher at Meta Reality Labs. His work at Meta sits at the intersection of human perception, input, and interaction, regularly pushing the boundaries of spatial computing. He uses the lens of perception to understand human behavior in order to improve the overall user experience of immersive systems.
Dr. Bhargava earned his PhD in Computer Science from Clemson University, focusing on affordance perception and interaction in Virtual Reality. His past work in the field of VR has covered a wide variety of topics, including self-avatars, perceptual calibration, perception-action, educational simulations, 3D interaction, tangibles, and cybersickness.
Affordances and Interaction in Immersive Environments
Immersive environments, such as VR, MR, and AR, present unique challenges and opportunities for human-computer interaction. Central to effective interaction in these spaces is the concept of affordance: the perceived relationship between an object and the actor that determines what actions can be performed. This keynote explores research investigating foundational principles of affordance perception, the critical role affordances play in shaping user experience, and the design of affordances within immersive environments. The talk will examine how affordances guide user behavior, enable intuitive interactions, and support seamless engagement with virtual objects and spaces. Drawing on interdisciplinary research and existing guidelines, the talk will highlight design principles for creating clear, discoverable, and meaningful affordances that enhance usability and presence. Attendees will gain insights into leveraging affordances to bridge the gap between physical and digital realities, ultimately advancing the design of immersive technologies that feel natural, accessible, and empowering.

Daniel Aliaga
Purdue University
Short Bio
Dr. Aliaga is an Associate Professor of Computer Science at Purdue University. He obtained his Ph.D. and M.S. from UNC Chapel Hill, and his Bachelor’s from Brown University. He joined Purdue in 2003, co-founding the Computer Graphics and Visualization Laboratory (CGVLAB). Dr. Aliaga has held visiting professor positions at ETH Zurich (in both Information Architecture and Computer Science), INRIA Sophia-Antipolis, and KAUST in Saudi Arabia. Dr. Aliaga has over 160 refereed publications covering multiple disciplines, has served on 90+ program committees, has ongoing international multi-disciplinary collaborations (e.g., with computer science, urban planning, architecture, meteorology, atmospheric/earth sciences, engineering, archaeology, and more), and has given over 50 invited talks (in the US, Brazil, Colombia, Ecuador, France, Japan, Korea, Peru, Qatar, Sweden, and Switzerland; TEDx). Further, Dr. Aliaga has been a technical advisor to multiple startups (Synthicity, UrbanSim, Authentise). Prof. Aliaga has obtained 30 external peer-reviewed grants totaling $45M (and is PI on 22 of them), with funding sources including NSF, IARPA, USDA, Internet2, MTC, Google, Microsoft, and Adobe.
Dr. Aliaga is currently advising 5 Ph.D. students and 2 M.S. students, has graduated 13 Ph.D. students and 3 M.S. students, has served on the committees of 27 additional Ph.D. students, has supervised the research of 24 undergraduates, and has hosted 9 visiting scholars. His Ph.D. advisees have received a total of 11 Purdue fellowships/grants. In addition, Prof. Aliaga is an Associate Editor for IEEE TVCG and for the Visual Computing Journal (previously for Computer Graphics Forum and Graphical Models) and a PC member for SIGGRAPH, CVPR, ICCV, Eurographics, AAAI, I3D, and IEEE VIS. He has received a Fulbright Scholar Award and a Discovery Park Faculty Research Fellowship, is a member of ACM SIGGRAPH and ACM SIGGRAPH Pioneers, and has served multiple times as Chair of Faculty Diversity for the College of Science at Purdue.
Urban Visual Computing
As the world population grows, unorganized migration to cities can be disastrous and thus there is a widespread need to improve urban design, modeling, and simulation to yield a sustainable urbanization process. Towards this, urban visual computing develops semi-automatic, automatic, and neural algorithms that convert incomplete and unstructured data into controllable and editable models for use in urban digital twins, simulation, visualization, education, cultural heritage, and entertainment. Given the interdisciplinary nature of my research area, my projects have been in collaboration with experts in urban planning, atmospheric sciences, remote sensing, civil engineering, architecture, urban ecology/forestry, and archaeology.
In this talk, I will present my work in urban visual computing. First, I will present my methodology focused on inverse procedural modeling, where the rules and parameter values defining geometric structures are inferred, rather than given. This includes multiple novel image processing algorithms, 3D reconstruction methods, and 3D spatial augmented reality. Second, I will summarize my generative modeling and urban AI approaches which enable inferring 3D models, urban layouts, images, sketches, point clouds, roads, facades, buildings, cities, and vegetation. Third, I will report on the development and impact of several large computational urban projects in gray (e.g., buildings) and green (e.g., trees) design, planning, and modeling (e.g., U-Tree, Core3D, WUDAPT, UrbanVision) and technology transfer as an advisor to several startups.
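To make "inferred, rather than given" concrete, here is a deliberately tiny sketch (an editor's illustration, not the speaker's method): the parameters of a trivial procedural facade rule are recovered from observed measurements by brute-force search. All names and numbers below are hypothetical.

```python
# Toy inverse procedural modeling (illustrative only): recover the parameters
# of a simple facade "rule" from observations, instead of being given them.
# Rule: a building is `floors` stacked floors, each `floor_h` meters tall,
# with `wins_per_floor` windows per floor.

def generate(floors: int, floor_h: float, wins_per_floor: int):
    height = floors * floor_h
    windows = floors * wins_per_floor
    return height, windows

# "Observed" building (e.g., measured from an image or a point cloud).
obs_height, obs_windows = 33.0, 44

best, best_err = None, float("inf")
for floors in range(1, 30):
    for wins in range(1, 10):
        for floor_h in [2.8, 3.0, 3.2, 3.5]:
            h, w = generate(floors, floor_h, wins)
            err = abs(h - obs_height) + abs(w - obs_windows)
            if err < best_err:
                best, best_err = (floors, floor_h, wins), err

print("inferred rule parameters (floors, floor height, windows/floor):", best)
```

Real inverse procedural modeling replaces this brute-force search with optimization or learned inference over far richer grammars, but the goal is the same: recover the rule parameters that explain the observed geometry.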

Diego Thomas
Kyushu University
Short Bio
Professor Diego Thomas completed his Master’s degree at ENSIMAG-INPG (Engineering School of Computer Science and Mathematics), Grenoble, France, in 2008. He received his Ph.D. from the National Institute of Informatics, Tokyo, Japan, in 2012, as a student of SOKENDAI. He has been an Associate Professor at Kyushu University, Fukuoka, Japan, since 2023. His research interests are 3D vision, motion synthesis, computer graphics, and digital humans. He is author or co-author of 90 peer-reviewed journal/international conference papers and a regular reviewer for international conferences and journals in computer vision. He has also served on the committees of several international conferences, including PSIVT’19 (area chair), MPR’19 (program chair), IPSJ’21 (session chair), and 3DV’20 (local chair). He received the MIRU Nagao Award in 2024.
Creating bridges between the digital and physical realms with 3D vision
In this talk, I will present our recent advances in building digital 3D human avatars and explore how these innovations are poised to transform the future of VR and human–machine interaction. Modern societies rely extensively on machines, yet most consume vast amounts of energy and resources. To ensure sustainable progress, we must design intelligent systems that are both efficient and adaptive. Recent breakthroughs in artificial intelligence, such as Large Language Models (LLMs), have revolutionized the digital domain. However, meaningful human–machine interaction takes place in the physical world, where perception, embodiment, and motion are key. To bridge this gap, our research develops AI-driven 3D vision models capable of perceiving, reconstructing, and understanding the human body in rich detail. By leveraging modern machine learning techniques, we can capture 3D body shapes, deformations, and interactions across diverse environments. I will present our latest results in 3D shape reconstruction from single, few, and multi-view images, as well as new approaches for 3D human motion retargeting and text-driven motion synthesis. Together, these advances mark an important step toward the creation of realistic, responsive avatars—laying the foundation for the next generation of VR experiences and human–machine collaboration.

Dinesh Manocha
University of Maryland, College Park
Short Bio
Dinesh Manocha is the Paul Chrisman Iribe Professor of Computer Science and Electrical and Computer Engineering and a Distinguished University Professor at the University of Maryland, College Park. His research interests include virtual environments, physically based modeling, and robotics. His group has developed a number of software packages that have become standards and are licensed to 60+ commercial vendors. He has published more than 750 papers and supervised 50 PhD dissertations. He is a fellow of the Association for the Advancement of Artificial Intelligence, the American Association for the Advancement of Science, the ACM, the Institute of Electrical and Electronics Engineers (IEEE), and the National Academy of Inventors. He is also a member of the ACM’s Special Interest Group on Computer Graphics and Interactive Techniques and the IEEE Visualization and Graphics Technical Community’s Virtual Reality Academy. Manocha is the recipient of a Pierre Bézier Award from the Solid Modeling Association, a Distinguished Alumni Award from the Indian Institute of Technology Delhi, and a Distinguished Career Award in Computer Science from the Washington Academy of Sciences. He was also a co-founder of Impulsonic, a developer of physics-based audio simulation technologies that was acquired by Valve Corporation in November 2016.
Audio Processing in the Age of Large Language Models
Audio comprehension—including speech, non-speech sounds, and music—is essential for AI agents to interact effectively with the world. Yet research in audio processing has lagged behind other areas like language and vision, hindered by limited datasets, the need for advanced architectures, and training methods suited to the inherent complexities of audio. Our group is trying to bridge this gap with innovative solutions, starting with GAMA, our large audio-language model designed for advanced audio perception and complex reasoning. GAMA is built with a specialized architecture, optimized audio encoding, and a novel alignment dataset used for audio understanding, reasoning, and hallucination reduction. GAMA’s development builds on our past research, such as MAST, SLICER, and EH-MAM, which are novel approaches for learning strong audio representations from unlabeled data. Complementing this, we introduced ReCLAP, a state-of-the-art audio-language encoder, and CompA, one of the first projects to tackle compositional reasoning in audio-language models—a critical challenge given audio’s inherently compositional nature. We recently developed Audio Flamingo 2, an audio-language model with advanced long-audio understanding and reasoning capabilities. Audio Flamingo 2 and 3 achieve state-of-the-art performance across a large number of benchmarks.
Looking forward, we envision large audio-language models (LALMs) becoming integral to daily life, capable of conversational speech QA, information-extraction-based QA, and addressing knowledge-driven questions about diverse audio inputs. Achieving these ambitious goals requires both advanced data and architectures. Synthio, our latest synthetic data generation framework, supports this mission by generating data for complex audio understanding. Progress must also be measurable, so we are dedicated to establishing comprehensive benchmarks. Our recent work, MMAU, rigorously tests LALMs on real-world tasks.

Gregory F. Welch
University of Central Florida
Short Bio
Gregory Welch is a Pegasus Professor and the AdventHealth Endowed Chair in Healthcare Simulation at the University of Central Florida (UCF), with appointments in the College of Nursing, the College of Engineering & Computer Science (Computer Science), and the Institute for Simulation & Training, and is a Co-Director of the Synthetic Reality Laboratory. He received a B.S. degree in Electrical Engineering Technology from Purdue University in 1986, with Highest Distinction, and a Ph.D. in Computer Science from the University of North Carolina at Chapel Hill in 1996. Prior to UCF he was a Research Professor at UNC; earlier, he worked on the Voyager Spacecraft Project at NASA’s Jet Propulsion Laboratory and on airborne electronic countermeasures at Northrop Grumman’s Defense Systems Division. He conducts research in areas including virtual and augmented reality, human-computer interaction, human motion tracking, and human surrogates for training and practice, with a focus on applications such as healthcare and defense. He has co-chaired numerous international conferences, workshops, and seminars in these areas, co-authored over 150 associated publications, and is a co-inventor on multiple patents. His 1995 introductory article on the Kalman filter has been cited over 9000 times. His awards include the 2018 Institute of Electrical and Electronics Engineers (IEEE) Virtual Reality Technical Achievement Award and the 2016 IEEE International Symposium on Mixed and Augmented Reality Long Lasting Impact Paper Award. He is presently serving on the World Economic Forum’s Global Future Council on Virtual Reality and Augmented Reality and the International Virtual Reality Healthcare Association’s Advisory Board, as an Associate Editor for the journals PRESENCE: Virtual and Augmented Reality and Frontiers in Virtual Reality, and as an expert witness on intellectual property matters. He is a Fellow of the IEEE and a Fellow of the National Academy of Inventors (NAI), and a Member of the UCF Chapter of the National Academy of Inventors, the Association for Computing Machinery (ACM), the European Association for Computer Graphics, and multiple healthcare-related societies. He is an ACM SIGGRAPH Pioneer and serves as an IEEE Technical Expert for Virtual, Augmented and Mixed Reality.
Beyond XR: The Human Filter
Extended Reality (XR) systems, including Virtual Reality (VR) and Augmented Reality (AR), are rapidly advancing, with growing capabilities to model a user’s behavior, appearance, and surroundings. XR systems can sense head position, posture, eye movement, voice, and even cognitive load, and can display virtual stimuli through standard sensory channels in ways that may be indistinguishable from real-world stimuli. While today’s XR systems are almost exclusively dedicated to the practice of what we would typically think of as XR, e.g., for training, education, or entertainment, they could do so much for humans beyond simply “doing XR.”
In this talk I will discuss leveraging the nexus of developments in XR systems, smartphones, and smartwatches, together with well-established principles and mechanisms from control theory, to develop a holistic, principled, and generalized means for the continuous optimal estimation of a range of intrinsic human characteristics. I will also discuss how, in a complementary manner, head-worn and other devices could be used to produce visual, aural, and tactile stimuli for individual users at any moment, in the context of whatever they are doing, to influence the user in helpful ways. I will motivate the ideas, discuss a possible theoretical framework, and describe some example application areas.
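Given the speaker's long association with the Kalman filter (see the bio above), a minimal 1D Kalman filter sketch may help ground the phrase "continuous optimal estimation"; the estimated quantity, noise levels, and code below are illustrative stand-ins chosen by the editor, not anything presented in the talk.

```python
import random

# Minimal 1D Kalman filter (illustrative only): continuously fuse noisy
# sensor readings into an optimal estimate of a slowly drifting quantity,
# e.g., some hypothetical intrinsic human characteristic.

x_est, p_est = 0.0, 1.0      # initial estimate and its variance
q, r = 0.01, 0.25            # process noise and measurement noise variances

true_x = 0.5
for step in range(50):
    true_x += random.gauss(0.0, q ** 0.5)      # the quantity drifts slowly
    z = true_x + random.gauss(0.0, r ** 0.5)   # noisy sensor reading

    # Predict: the state is assumed to persist, so only uncertainty grows.
    p_pred = p_est + q

    # Update: blend prediction and measurement by their relative confidence.
    k = p_pred / (p_pred + r)                  # Kalman gain
    x_est = x_est + k * (z - x_est)
    p_est = (1.0 - k) * p_pred

print(f"final estimate {x_est:.3f} vs. true value {true_x:.3f}")
```

The talk's proposal generalizes this kind of estimator to many characteristics and many sensing devices at once; the sketch only shows the predict/update loop that "continuous optimal estimation" refers to.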

Guodong Rong
Meta Reality Labs
Short Bio
Dr. Guodong Rong received his Ph.D. degree from the National University of Singapore. He is currently a software engineer at Meta Reality Labs, where he is the tech lead of the VR compositor. He has over 20 years of experience in graphics and VR-related areas in both academia and industry. Before joining Meta, he worked at NVIDIA, Samsung, Google, Huawei, Baidu, and LG as a software engineer, and at the University of Texas at Dallas as a postdoctoral researcher. His research interests include computer graphics, VR/AR, computational geometry, and autonomous driving simulation.
Why is VR Graphics Hard?
VR graphics has many unique properties that make it challenging to achieve a good user experience. This talk will explain some of these challenges, related both to VR system hardware and to human factors, so that the audience can learn why VR graphics is hard. Some optimization techniques will also be briefly covered to show how Meta addresses some of those challenges in its VR devices.
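As a rough sense of scale (illustrative numbers chosen for this write-up, not Meta device specifications): at a 90 Hz refresh rate the renderer has only about 11 ms per frame, and it must fill two high-resolution eye buffers within that budget.

```python
# Back-of-the-envelope VR rendering budget (illustrative numbers only).
refresh_hz = 90                      # typical VR refresh rate
eye_w, eye_h = 2000, 2000            # assumed per-eye render resolution
eyes = 2

frame_budget_ms = 1000.0 / refresh_hz
pixels_per_frame = eye_w * eye_h * eyes
pixels_per_second = pixels_per_frame * refresh_hz

print(f"frame budget: {frame_budget_ms:.1f} ms")             # ~11.1 ms
print(f"pixels per frame: {pixels_per_frame / 1e6:.1f} M")    # 8.0 M
print(f"pixel throughput: {pixels_per_second / 1e6:.0f} M/s") # 720 M/s
```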

Luciana Nedel
UFRGS
Short Bio
Luciana Nedel is a full professor at the Institute of Informatics of UFRGS, where she has been teaching and doing research in virtual reality, interactive visualization, and human-computer interaction since 2002. She received her PhD in Computer Science from the Swiss Federal Institute of Technology (EPFL) in Lausanne, Switzerland, in 1998. In her research career, she has been involved in projects with industry and in cooperation with different universities abroad. Her main research interests include virtual and augmented reality, immersive visual analytics, and 3D user interfaces (3DUI). She is a member of IEEE, ACM, and SBC, and has served as program committee chair many times: IEEE VR 2025 full papers, Interact 2025 short papers, IEEE VR 2022 journal papers, etc. She is also an associate editor for IEEE TVCG (Transactions on Visualization and Computer Graphics), Computers & Graphics, IEEE Computer Graphics & Applications, The Visual Computer Journal (TVC), Frontiers in Virtual Reality, and the SBC JBCS (Journal of the Brazilian Computer Society).
Designing Immersive Simulators: Where Technology Meets Human Learning
“What I hear, I forget… what I see, I remember… but what I do, I learn!” (Confucius)
For centuries, human learning has been based on direct exposure to tasks and the environments in which they occur. Over time, symbolic tools were developed to teach complex concepts in safer ways, as in the use of chess to explore principles of military strategy. Today, applied games and immersive environments carry this tradition forward, including flight simulators, virtual reality medical training, and collaborative platforms where teams can test strategies in complex but safe scenarios.
In this talk, I will discuss some challenges we encounter when creating immersive simulators. On the technological side, we must address the demand for realistic, real-time graphics, precise motion tracking, low latency, and the seamless integration of multiple sensory inputs. On the human side, we face the challenge of designing experiences that are truly effective for learning. This involves balancing realism with usability, preventing cognitive overload, accommodating individual differences, and designing scenarios that reflect real-world tasks while allowing learners to critically reflect on their actions. Throughout the talk, I will share lessons we’ve learned and highlight some open questions that continue to drive this field forward.

Marcio Filho
ACJOGOS-RJ
Short Bio
Márcio Filho is one of the leading institutional figures in Brazil’s electronic games sector, having served as the re-elected president of the Associação de Criadores de Jogos do Estado do RJ (ACJOGOS-RJ) for the 2024–2026 term. He took an active part in formulating and negotiating the Marco Legal dos Games (Federal Law No. 14,852/2024), passed in 2024, establishing himself as one of the main political articulators of the sector’s regulation in the country. He also served as a reviewer for Brazil’s first public call for Geek Culture projects (Niterói, RJ) and as a Participatory Budgeting councilor for Culture between 2021 and 2023. His work also includes organizing national academic symposia on games, virtual reality, and computing in partnership with the Sociedade Brasileira de Computação (Brazilian Computer Society).
With more than 16 years of experience developing games and gamified solutions, Márcio is the founder of GF Corp and the creator of the CASE platform, an international reference in game-based innovation for education and training. He is certified in Gamification by the Wharton Business School (UPenn) and is a specialist in virtual teaching from the University of Columbia-Irvine (UCI). Between 2008 and 2025 he developed more than 60 games for organizations such as SESI, SESC, and FURNAS, among others, and registered more than a dozen intellectual properties with INPI. His trajectory combines technical excellence, strategic thinking, and strong articulation between the creative sector and public policy.
Before and After the Marco Legal dos Games
What the games sector in Brazil was, is, and may become after Federal Law 14,852/2024
Only 40 years after the release of the first Brazilian game was the economic activity of creating electronic games recognized by the national state. In this talk, Márcio Filho will explore what brought us here, the setbacks and behind-the-scenes work involved in passing the law, and what lies ahead after the approval of the most advanced regulatory text for the sector in the world.
Marcio Filho is president of ACJOGOS-RJ, the Associação de Criadores de Jogos do Rio de Janeiro, a specialist in games and society, and has been an entrepreneur in the field for more than 20 years. He was an articulator of the Marco Legal dos Games (Federal Law 14,852/2024), securing a legal text that served the sector’s interests.

Ming Lin
University of Maryland at College Park
Short Bio
Ming C. Lin is currently a Distinguished University Professor and the Barry Mersky and Capital One E-Nnovate Endowed Professor of Computer Science at the University of Maryland at College Park. She is also an Amazon Scholar, former Elizabeth Stevinson Iribe Chair of Computer Science at UMD, and John R. & Louise S. Parker Distinguished Professor Emerita of Computer Science at the University of North Carolina (UNC) – Chapel Hill. She received her B.S., M.S., and Ph.D. degrees in Electrical Engineering and Computer Science from the University of California, Berkeley. She is a Fellow of the National Academy of Inventors, ACM, IEEE, Eurographics, the ACM SIGGRAPH Academy, and the IEEE VR Academy.
Dynamics-Aware Learning: From Simulated Reality to the Physical World
In this talk, we present an overview of some of our recent work on the differentiable programming paradigm for learning, control, and inverse modeling. This ranges from dynamics-inspired, learning-based algorithms for detailed garment recovery from video and 3D human body reconstruction from single- and multi-view images, to differentiable physics for robotics, quantum computing, and VR applications. Our approaches adopt statistical, geometric, and physical priors and a combination of parameter estimation, shape recovery, physics-based simulation, neural network models, and differentiable physics, with applications to virtual try-on and robotics. We conclude by discussing possible future directions and open challenges.
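As a loose sketch of the differentiable-programming idea (an editor's toy example, not the speakers' systems), the snippet below differentiates through a tiny spring simulation to recover an unknown stiffness from an observed trajectory by gradient descent; real differentiable-physics pipelines use automatic differentiation rather than the finite differences shown here.

```python
# Toy "differentiable physics" inverse problem (illustrative only):
# recover an unknown spring stiffness k from an observed trajectory by
# differentiating a simple simulator with respect to k.

def simulate(k, steps=100, dt=0.01):
    x, v = 1.0, 0.0                      # initial displacement and velocity
    traj = []
    for _ in range(steps):
        a = -k * x                       # spring force, unit mass
        v += a * dt
        x += v * dt
        traj.append(x)
    return traj

k_true = 4.0
target = simulate(k_true)                # "observed" trajectory

def loss(k):
    return sum((a - b) ** 2 for a, b in zip(simulate(k), target))

k, lr, eps = 1.0, 0.05, 1e-4             # initial guess, step size, FD epsilon
for _ in range(200):
    grad = (loss(k + eps) - loss(k - eps)) / (2 * eps)   # d(loss)/dk
    k -= lr * grad

print(f"recovered stiffness k = {k:.3f} (true value {k_true})")
```

The same pattern, gradients of a simulation with respect to its parameters, is what lets differentiable physics drive inverse modeling, control, and learning at much larger scale.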

Sri Kalyanaraman
Michigan State University
Short Bio
Sriram “Sri” Kalyanaraman is Senior Associate Dean for Research at Michigan State University’s College of Communication Arts and Sciences. His prior academic experience includes stints at the University of North Carolina-Chapel Hill and the University of Florida. At UF, he directed the Media Effects and Technology Lab and was a co-founder of the VR for Social Good (VR4SG) initiative, which resulted in one of the largest VR classes in the world. His current program of research primarily focuses on immersive media platforms and technologies to create, test, and disseminate stories and messages to improve the human spirit and condition.
Beyond Empathy: The Science of Immersive Storytelling for Social Good
Immersive and interactive technologies have transformed mediated communication by enabling audiences to experience vivid perspective-taking. Such perspective-taking has led to technologies such as virtual reality (VR) being widely proclaimed as “empathy machines.” In addition to empathy, immersive experiences also foster sensemaking, shared experiences, and accelerated futures. This talk discusses how the science of immersive storytelling is especially effective in sustainability research, with implications in such areas as climate science, health and well-being, and social equality.

Soraia Raupp Musse
PUC/RS
Short Bio
Soraia Raupp Musse is a Full Professor at the Polytechnic School of PUCRS (Pontifical Catholic University of Rio Grande do Sul, Brazil) and a CNPq Productivity Fellow. She holds degrees in Computer Science from PUCRS (BSc, 1990), UFRGS (MSc, 1994), and EPFL in Switzerland (MSc, 1997; Ph.D., 2000), with a postdoctoral fellowship at the University of Pennsylvania (2016). Her research focuses on graphics processing, including virtual humans, crowd simulation, visual perception, and computer vision. She has authored over 220 publications in leading journals and conferences such as Elsevier Computers & Graphics, IEEE TVCG, Computer Graphics Forum, SIGGRAPH, and MIG, and co-authored four internationally published books with Springer-Verlag, including the first book on Crowd Simulation. She has supervised more than 180 theses and served on over 140 academic committees. Her work has been recognized with 48 awards, including the Google Research Award (2022), the Santander Science and Innovation Award (2013), and the Finep Innovation Award (2003). She is currently Editor-in-Chief of the Journal of the Brazilian Computer Society (JBCS) and has chaired numerous conferences, including service on national research committees for CNPq and CAPES. In 2024, she was honored as the Featured Researcher at SBGames, South America’s premier conference on digital games, and will serve as a keynote speaker at SIBGRAPI, SVR, and SBGames 2025. She also coordinates the newly established INCT-SiM-AI, a Brazilian National Institute of Science and Technology focused on AI-driven personalized solutions for climate disaster response.
From Human Bias to Embodied AI: Shaping the Future of Virtual Humans
This talk explores the evolution of virtual humans, tracing their development from early computer graphics representations to today’s intelligent, embodied agents. We begin by revisiting the historical and conceptual foundations of virtual humans and their roles in simulations and entertainment, highlighting how these milestones have shaped the way we perceive and design VHs in digital environments. The discussion then focuses on two contemporary aspects of virtual humans: first, understanding human perception and bias toward VHs, and second, the rise of Embodied Conversational Agents (ECAs), with an emphasis on how advances in speech, emotion modeling, and non-verbal behavior have enhanced human–agent interaction. Building on these trends, we examine how integrating ECAs and virtual humans with Large Language Models (LLMs) is significantly enhancing agents’ ability to reason, contextualize, and engage in fluid, human-like dialogue. The talk concludes by reflecting on the future of embodied interaction, outlining the opportunities and challenges emerging at the intersection of computer graphics, cognitive modeling, and generative AI.