Challenges and Opportunities of Using Artificial Intelligence and Multisensory Social Interactions in Immersive Extended Realities

A CHI 2023 workshop


CALL FOR PARTICIPATION

Ten to fifteen years from now, Virtual, Augmented, and eXtended Reality (VR/AR/XR) are likely to be as ubiquitous as smartphones are today, leading to new kinds of social media. Interaction with others will be more natural, while entirely new experiences and ways of sharing become possible. How will our social interactions change in light of increasing virtualization, unprecedented social-scale information load, and ubiquitous intelligence? How will novel multisensory interfaces (haptics and audio) and digital assistants manifest in social XR platforms? And how can we anticipate, predict, and avoid technological misuse in areas such as trust, privacy, and social exclusion and divides?

In this one-day, in-person CHI workshop, we will discuss these topics together with four expert keynote speakers and panelists, and will then work together to identify some of the challenges and opportunities of using artificial intelligence and multisensory social interactions in immersive extended realities. To that end, we invite interested participants to submit a 2–4-page abstract showing how their own research fits this theme. Your paper can be anything from a demonstrator, a proof of concept, a work in progress, or completed but unpublished research or technical work, to a vision paper that stimulates research communities to pursue and innovate in new directions related to the workshop theme.

For each accepted paper, at least one author must register for both the workshop and at least one day of the conference. Accepted papers will be made publicly available on the workshop website.

Important information

Submissions are accepted through EasyChair (link TBD)

Format: ACM Primary Article Template, single column, 2–4 pages, including references.

Submission Deadline: 12th February 2023

Acceptance Notification: 20th February 2023

Workshop Date: 23rd or 24th April 2023

FURTHER INFO AND MOTIVATION

Our goal is to bring together the growing community of academic and industry experts working on AI and multisensory technologies for social XR applications and have an open expert discussion about the challenges and opportunities ahead. 

Workshop Organizers:

  1. Orestis Georgiou (Ph.D. 2011) is Head of R&D Partnerships at Ultraleap and is co-PI of the TOUCHLESS H2020 project. He has published over 80 articles in leading journals and conferences in Mathematics, Physics, Engineering, Computer Science, and Medicine.

  2. Michele Geronazzo (M.Sc. 2009, Ph.D. 2014) is an Associate Professor in Computer Engineering at the University of Padova and part of the coordination unit of the EU-H2020 project SONICOM at Imperial College London. He is the Editor of the book “Sonic Interactions in Virtual Environments”. His main research interests involve spatial audio modeling/synthesis, multimodal XR, and sound in human-computer interaction.

  3. Noshaba Cheema (M.Sc. 2019) is a Ph.D. student at the Max Planck Institute for Informatics and a researcher at the German Research Center for Artificial Intelligence (DFKI) in Germany. Through DFKI she is involved in the EU-H2020 project CAROUSEL as leader of the “Motion Intelligence” work package.

  4. Yuri De Pra (M.Sc. 2012, Ph.D. 2021) is a postdoctoral researcher at the Research Center “E. Piaggio”, University of Pisa, and part of the EU-H2020 project EXPERIENCE. His main research interests involve the design of multimodal interfaces and experimental protocols for advanced human-computer interaction.

  5. Esen Kucuktutuncu (M.Sc. 2020) is a Ph.D. student at the Event Lab at the University of Barcelona. She is involved in the EU-H2020 project GuestXR, focusing on the development of social VR spaces and the integration of RL-based virtual agents to foster pro-social behavior.

Key Challenges to be addressed at the Workshop:

KC1 - Haptics for the metaverse

The haptics technology market is experiencing an explosion of interest, driven mostly by the ever-increasing integration of haptics into consumer devices and keen adoption by the medical and automotive industries. Meanwhile, the recent pandemic has pushed more people online, for work and for leisure, revealing a “touch gap” that permeates every aspect of modern society: from our languages and the words we use to describe touch, our ways of teaching and learning, our art, and our commerce, through to our technologies and the very science that underpins them. At the same time, immersive and realistic experiences are developing rapidly, and the emerging metaverse proposes new use cases in which prolonged use and social interaction become more frequent and effective than in reality. This change in direction presents the haptics and HCI communities with new challenges and opportunities. To that end, KC1 calls for a discussion on innovative cutaneous and tactile technologies enhanced by AI, machine learning, and big data, as well as their use toward the broader metaverse vision.

How can we enable machines to automatically generate the haptic signals currently lost in the virtual transition?
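
One minimal sketch of what such automatic generation could look like, assuming a simple audio-to-haptics pipeline in which an in-world audio event drives a vibrotactile actuator, is shown below. The hand-crafted envelope-extraction step is exactly the kind of component a learned model would replace; all function and parameter names are illustrative, not taken from any particular toolkit.

    import numpy as np

    def audio_to_vibrotactile(audio, sr, carrier_hz=250.0, frame=512):
        # Rectify and smooth the signal to estimate its amplitude envelope.
        env = np.convolve(np.abs(audio), np.ones(frame) / frame, mode="same")
        # Remodulate the envelope onto a carrier near the ~250 Hz peak
        # sensitivity of the vibration-sensing (Pacinian) channel.
        t = np.arange(len(audio)) / sr
        return env * np.sin(2 * np.pi * carrier_hz * t)

    # Example: derive a haptic drive signal from a synthetic "impact" sound.
    sr = 48_000
    t = np.arange(sr) / sr
    impact = np.exp(-30 * t) * np.sin(2 * np.pi * 800 * t)  # decaying 800 Hz ping
    drive = audio_to_vibrotactile(impact, sr)

The 250 Hz default is chosen because vibrotactile sensitivity peaks around that frequency, which is why envelope-plus-carrier schemes are a common baseline for audio-driven haptics.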

KC2 - Immersive audio in social XR

Although technologies for spatializing audio are becoming available in consumer products, popular platforms for teleconferencing and music remain mostly limited to simple monophonic or stereophonic audio rendering. State-of-the-art techniques for rendering immersive audio enable personalization for each listener within an acoustic scene/space. Artificial intelligence models offer several opportunities here, for example dynamically changing the locations of participants in a teleconference to support communication and meaningful social interactions. The level of personalization can range from static spatial mixing to automatic optimization that dynamically adapts to the relevance of the content and to each participant’s auditory capabilities (e.g., hearing impairments). Reaching a high level of comfort and, more generally, a high overall quality of social experience requires exploring the relationships between listeners, audio technologies, and extended reality environments. Accordingly, KC2 is intended to strengthen the spatial centrality of sound in embodied social interactions, as evidenced by the emerging discipline of sonic interactions in virtual environments.

How can auditory virtual scenes be personalized and adapted for meaningful social experiences in extended realities?
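
To make the rendering side of this question concrete, the sketch below places teleconference participants in a stereo scene using coarse interaural time and level differences (ITD/ILD), assuming nothing beyond NumPy. It deliberately stops short of the measured or personalized HRTFs that the state-of-the-art rendering mentioned above would use; constants and names are illustrative.

    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s
    HEAD_RADIUS = 0.0875    # m, an average adult head radius

    def spatialize(mono, sr, azimuth_deg):
        # Woodworth-style approximation of the interaural time difference
        # (ITD) for a source at the given azimuth (0 = front, +90 = right).
        az = np.radians(azimuth_deg)
        itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + np.sin(az))
        delay = int(round(abs(itd) * sr))
        # Simple sine-law panning gains stand in for the interaural
        # level difference (ILD).
        g_right = np.sqrt(0.5 * (1 + np.sin(az)))
        g_left = np.sqrt(0.5 * (1 - np.sin(az)))
        left, right = g_left * mono, g_right * mono
        pad = np.zeros(delay)
        if itd > 0:    # source on the right: sound reaches the left ear later
            left = np.concatenate([pad, left])[: len(mono)]
        elif itd < 0:  # source on the left: delay the right ear instead
            right = np.concatenate([pad, right])[: len(mono)]
        return np.stack([left, right], axis=1)

    # Place two participants at -40 and +60 degrees around the listener.
    sr = 48_000
    voice = np.random.randn(sr)  # stand-in for one second of speech
    scene = spatialize(voice, sr, -40) + spatialize(voice, sr, 60)

Personalization would then amount to replacing these population-average constants and gains with listener-specific ones, which is where AI-driven HRTF selection or adaptation enters.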


KC3 - Digital characters and dancing in XR

The current online experience is passive and disconnected. Internet users are isolated from real-world sensations and feelings such as the presence of others, their touch, or their emotions. Current online applications fail to address the stress and mental health problems caused by the lack of contact, isolation, and loneliness. The pandemic, with its lockdowns and social distancing, has made this problem even more acute and visible to society. Dance is a profoundly social human activity that uniquely combines creativity, feeling, sensing, thinking, and doing. Together with music and language, dancing is one of the few behaviors that occurs naturally in children and is attested universally across cultures. With progress in immersive technologies, haptics, and artificial intelligence, online communication can be extended to body language and the communication of emotions. AI can further be used to create, animate, and control digital characters that interact bodily with humans, stimulating interaction and giving a feeling of presence and/or of a crowd. To this end, KC3 calls for a discussion on the use of digital characters to make humans feel happy and connected, in order to combat loneliness and isolation.

How can we combat loneliness and isolation by using immersive technologies and artificial intelligence to make people move and dance together?
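
As a toy illustration of the control problem only, and not of the learned motion-synthesis models this challenge envisions, the sketch below selects a clip from a hypothetical annotated motion library so that its tempo matches the music and can be time-stretched onto the beat. Every clip name and annotation here is an assumption made for illustration.

    # Hypothetical library of short dance clips, annotated with the tempo
    # (BPM) they were captured at and a coarse "energy" label.
    MOTION_CLIPS = [
        {"name": "sway_loop", "bpm": 90, "energy": "low"},
        {"name": "step_touch", "bpm": 110, "energy": "medium"},
        {"name": "jump_bounce", "bpm": 128, "energy": "high"},
    ]

    def pick_clip(music_bpm, user_energy):
        # Prefer clips matching the user's current movement energy, then
        # take the one whose tempo is closest to the music. A learned
        # motion-generation model would replace this lookup in a real system.
        candidates = [c for c in MOTION_CLIPS if c["energy"] == user_energy]
        return min(candidates or MOTION_CLIPS, key=lambda c: abs(c["bpm"] - music_bpm))

    def playback_rate(clip, music_bpm):
        # Time-stretch factor so the clip's footfalls land on the beat.
        return music_bpm / clip["bpm"]

    clip = pick_clip(music_bpm=120, user_energy="medium")
    print(clip["name"], playback_rate(clip, 120))  # step_touch, ~1.09

Even this trivial loop highlights the open questions: sensing the user’s energy, keeping the character responsive rather than scripted, and doing both for a crowd of characters at once.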


KC4 - Personalized XR platforms and the assessment of space-time perception

Extended Personal Reality (EPR) comprises the complex interplay between a subject’s multisensory perception, emotional responses, past experiences, and mental representation of self in space and time. To this end, a solid scientific framework to assess and manipulate time perception is needed. Time perception is a subjective internal representation of objective time and may be modulated by neurological disorders as well as by mental state, cognitive load, arousal, and attention. In humans, time perception is linked with space perception, and disentangling the two dimensions can be challenging, especially in a VR environment. Space-time representation may be assessed by exploiting multisensory illusions: in the Kappa effect, increasing the spatial separation between two stimuli dilates the perceived time interval between them, whereas in the Tau effect, the time elapsed between consecutive stimuli delivered to different locations affects their perceived positions. KC4 calls for a discussion of methods and technologies to estimate and manipulate space-time perception, for example in VR scenarios, through EPR.

How can we estimate and manipulate space-time perception in a personalized extended reality?
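
One way to make this concrete is a toy “imputed velocity” reading of the Kappa effect described above: if observers implicitly assume stimuli travel at a roughly constant speed, the perceived interval becomes a mix of the actual interval and the travel time that speed would imply. The weighting and speed parameters below are illustrative placeholders that a real study would fit per participant.

    def kappa_perceived_duration(t_actual, distance, assumed_speed=0.5, w=0.3):
        # Toy imputed-velocity model: blend the actual interval (s) with
        # the time a constant-speed motion would need to cover `distance`
        # (m). `assumed_speed` (m/s) and the weight `w` are free parameters.
        t_expected = distance / assumed_speed
        return (1 - w) * t_actual + w * t_expected

    # Same 0.5 s interval, growing spatial separation: the predicted
    # perceived duration dilates with distance, as in the Kappa effect.
    for d in (0.1, 0.3, 0.6):
        print(f"d = {d:.1f} m -> perceived = {kappa_perceived_duration(0.5, d):.2f} s")

Manipulating space-time perception in VR would then correspond to steering the terms of such a model, e.g. by controlling the apparent spatial separation of stimuli while holding their physical timing fixed.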


KC5 - Fostering pro-social behaviors in multisensory platforms

The rapid development of immersive XR platforms, along with the gradual adoption of multisensory integration within these environments, brings forth the challenge of ensuring that people using these platforms have a pleasant time and fruitful conversations. Social virtual spaces are becoming hubs where diverse people from different parts of the world and different demographics come together, and where complex themes such as identity, gender, environment, and politics are prominent. We now have many examples of how complex it can be to foster pro-social behavior in such spaces and to tackle challenges such as harassment, discrimination, and social exclusion. A feasible approach to tackling these challenges is to integrate AI into the platform in forms such as a moderator, an assistant, or an agent that promotes pro-social interactions in XR. KC5 calls for a discussion of creative and beneficial uses of AI that foster positive behavior within these mixed-reality, multisensory platforms.

How can we leverage machine learning and multisensory integration to promote pro-social interactions in XR?
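
Purely as a shape for this discussion, and not as a proposed design, the sketch below shows the skeleton of such an AI moderator: score each utterance and choose a graduated intervention rather than an outright ban. The keyword heuristic stands in for the learned classifier a real system would need, and all names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Utterance:
        speaker: str
        text: str

    def toxicity_score(text: str) -> float:
        # Stand-in for a learned classifier (e.g. a fine-tuned language
        # model); a trivial keyword heuristic is used here for illustration.
        flagged = {"idiot", "shut up", "go away"}
        return 1.0 if any(w in text.lower() for w in flagged) else 0.0

    def moderate(utterance: Utterance, threshold: float = 0.8):
        # One step of the moderator loop: intervene gently above the
        # threshold, otherwise stay out of the conversation.
        if toxicity_score(utterance.text) >= threshold:
            return f"[moderator] gently redirect {utterance.speaker}"
        return None

    action = moderate(Utterance("avatar_7", "shut up already"))
    if action:
        print(action)

The same skeleton accommodates the assistant and agent roles mentioned above by swapping the intervention policy, which is exactly the design space KC5 proposes to discuss.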

Flow Diagram of the Workshop Process

[Figure: flow diagram of the workshop process]