Dr. Chao Mei is currently the Director of Human Factors Engineering at TCL Rayneo XR, where he leads research on human-computer interaction for AR wearable devices. He actively contributes to the IEEE VR and 3DUI community and serves as the Chair of the IEEE VR 3DUI Contest. Before TCL Rayneo, he worked at Meta Reality Labs, the OPPO U.S. Research Center, and the Department of Software Engineering and Game Development at Kennesaw State University. He is an NSF grant awardee. He received his Ph.D. in Computer Science from The University of Texas at San Antonio in August 2016, advised by Dr. John Quarles, director of the San Antonio Virtual Environment (SAVE) lab. His main research areas are human-computer interaction (virtual/augmented reality), special education, educational gaming, and software engineering. He also holds an M.B.A. from the Georgia Institute of Technology.
A prototype of AVET
The overall logic of AVET is to first expose the user to a Virtual Environment (VE) in which they must complete a task (e.g., listening to a virtual teacher) amid potential distractions. In the second step, the system detects the user's attention from eye-tracking data and physiological signals, then removes all detected distractions (e.g., a red mug, the virtual teacher's necklace) from the VE. The last step gradually puts the distractions back into the VE, exposing the user to one additional distraction object at a time. In this step, the system provides positive/negative reinforcement to reinforce the desired attention. Figure 1 – a) shows the initial VE, a classroom with a virtual teacher standing in front of the user. The cyan and red balls represent the current gaze points of the user's left and right eyes; they are shown here for demonstration purposes only and are hidden from actual users. b) shows the user gazing at the teacher. c) shows the user distracted by, and gazing at, the globe. d) shows another distraction – the table. e) shows the scene with all distractions removed. f) shows one of the
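The detection step above can be sketched as gaze-dwell counting: flag any non-target object that attracts more than some fraction of gaze samples, then schedule its reintroduction one object at a time. This is a minimal illustration only; the function names, the 10% threshold, and the sample format are my assumptions, not the actual AVET implementation.

```python
# Illustrative sketch of AVET's distraction detection (step 2) and
# gradual reintroduction (step 3). Names and threshold are hypothetical.
from collections import Counter

DWELL_THRESHOLD = 0.10  # flag objects drawing >10% of gaze samples (assumed)

def detect_distractions(gaze_samples, task_target="teacher"):
    """gaze_samples: one object name per eye-tracker frame (the object
    hit by the gaze ray). Returns the objects to remove from the VE."""
    counts = Counter(gaze_samples)
    total = len(gaze_samples)
    return [
        obj for obj, n in counts.items()
        if obj != task_target and n / total > DWELL_THRESHOLD
    ]

def reintroduction_schedule(distractions):
    """Step 3: re-expose the user to one more distraction at a time."""
    return [distractions[: i + 1] for i in range(len(distractions))]

# Example session: mostly on-task, but the globe draws sustained gaze.
samples = ["teacher"] * 70 + ["globe"] * 20 + ["table"] * 8 + ["mug"] * 2
found = detect_distractions(samples)
print(found)                          # ['globe']
print(reintroduction_schedule(["globe", "table"]))
```

A dwell-fraction rule like this is only one plausible detector; the real system also fuses physiological signals, which this sketch omits.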
We conducted a preliminary evaluation
Virtual Reality (VR) training games have many potential benefits for autism spectrum disorder (ASD) therapy, such as increasing motivation and improving the ability to perform activities of daily living. Persons with ASD often have deficits in hand-eye coordination, which makes many activities of daily living difficult. A VR game that trains hand-eye coordination could help users with ASD improve their quality of life. Moreover, incorporating users' interests into the game could be a good way to build a motivating game for users with ASD.
I developed a Customizable Virtual Human (CVH) system that enables users with ASD to easily customize the appearance of a virtual human and then interact with it in a 3D task. Specifically, we investigated the effects of CVHs in a VR hand-eye coordination training game – Imagination Soccer – and conducted a user study with adolescents with high-functioning ASD.
As the main developer on our team, I built this project for the National MS Society as a virtual online extension of their annual fundraising event. In the traditional offline event, people gather and walk one or two miles to raise funds for the MS Society, supporting MS research and MS patients' daily lives. Although many people are enthusiastic about fighting MS together, they cannot all be present in one place at the same time. We simulated the whole event as an online virtual reality game, so that participation could be extended to people all over the world. For more information and to download this project, please visit: http://184.108.40.206/index.html
This project works with the Oculus Rift and Razer Hydra. I built it for my research on evaluating the usability of 3DUIs for kids with autism. With virtual hands controlled by their real hands, users can pick up the milk and cereal boxes, then pour and mix cereal and milk in the bowl. Measuring cups can be used to deliver a precise amount of milk and cereal. During the users' interactions with the simulator, data such as the movement trails of the users' hands and task performance were recorded and analyzed. My goal is to find general guidelines for building 3DUI virtual reality tools for autism therapy. To download and try this simulator, please click here.
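To give a sense of the kind of data the simulator records, here is a small sketch of logging a hand-movement trail and reducing it to a path-length metric. The CSV format, column names, and metric choice are illustrative assumptions, not the simulator's actual logging code.

```python
# Hypothetical sketch of hand-trail logging and analysis, in the spirit
# of the simulator's recorded data. Format and names are assumptions.
import csv
import math

def path_length(trail):
    """Total 3D distance travelled along a hand trail of (x, y, z) points."""
    return sum(math.dist(a, b) for a, b in zip(trail, trail[1:]))

def save_trail(trail, filename):
    """Persist one hand's trail as a simple CSV for later analysis."""
    with open(filename, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["x", "y", "z"])
        writer.writerows(trail)

# Two straight segments of lengths 5 and 6:
trail = [(0, 0, 0), (0, 3, 4), (0, 3, 10)]
print(path_length(trail))  # 11.0
```

A longer path length for the same pouring task could, for example, indicate less efficient hand movement, which is one way such trails feed into usability analysis.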
In addition, I made a small test to measure your ability to manipulate 3D virtual objects; it can be downloaded by clicking here. The task is simply to rotate or translate the house on the left side until its orientation matches the house on the right side.
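One plausible way to score this matching task is the residual angle between the two houses' orientations, computed from their quaternions. This is a sketch under that assumption; the test itself may score differently.

```python
# Hypothetical scoring for the orientation-matching task: the smallest
# rotation angle between two unit quaternions (w, x, y, z).
import math

def quat_angle(q1, q2):
    """Smallest rotation angle in radians taking orientation q1 to q2."""
    dot = abs(sum(a * b for a, b in zip(q1, q2)))
    dot = min(1.0, dot)  # guard against floating-point overshoot
    return 2.0 * math.acos(dot)

identity = (1.0, 0.0, 0.0, 0.0)
# 90-degree rotation about the y axis:
quarter_turn_y = (math.cos(math.pi / 4), 0.0, math.sin(math.pi / 4), 0.0)
print(math.degrees(quat_angle(identity, quarter_turn_y)))  # ~90.0
```

When the residual angle (and, for the translation variant, the positional offset) falls below a tolerance, the orientations can be considered matched.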
This project is a collaboration between The University of Texas at San Antonio and the University of Florida. I am the developer of the anesthesia simulator tool. Below is a video overview of this project, though it shows a relatively old version; for the most current version, please go to the projects page and download it.
The research we conducted with this tool:
AIM: We studied whether anesthesia providers account for racial differences in propofol sensitivity (reported in the anesthesia literature) when selecting loading doses for propofol sedation and analgesia.
METHODS: We developed a mixed reality simulator consisting of a 3D virtual human and a physical mannequin. Based on published data, propofol pharmacodynamics was altered, in order of increasing sensitivity (loss of consciousness, LOC, determined by loss of response to verbal commands at lower effect site concentrations), for Caucasian, Black and Indian (South Asian) patients. With IRB approval and informed consent, anesthesia providers administered propofol sedation and analgesia for upper GI endoscopy to three consecutive simulated male patients (Caucasian, Indian, Black) that were otherwise similar. Users interacted with the mannequin (verbal, jaw thrusts, shaking); the virtual representation depicted movement and pain response and a different patient appearance based on race.
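The altered-pharmacodynamics idea in the methods can be illustrated as a threshold model: LOC occurs once the propofol effect-site concentration crosses a patient-specific threshold, with more sensitive patients having lower thresholds. The threshold values below are purely illustrative and are not the study's actual parameters.

```python
# Illustrative-only threshold model of race-dependent propofol
# sensitivity (LOC at lower effect-site concentrations for more
# sensitive patients). Values are hypothetical, not from the study.
LOC_THRESHOLD_UG_ML = {
    "Caucasian": 3.0,  # least sensitive (assumed value)
    "Black": 2.6,      # intermediate (assumed value)
    "Indian": 2.2,     # most sensitive (assumed value)
}

def is_unconscious(race, effect_site_conc):
    """LOC once effect-site concentration reaches the patient's threshold."""
    return effect_site_conc >= LOC_THRESHOLD_UG_ML[race]

# At the same effect-site concentration, only the most sensitive
# simulated patient has reached LOC:
for race in LOC_THRESHOLD_UG_ML:
    print(race, is_unconscious(race, 2.5))
```

Under such a model, a race-blind loading dose produces longer oversedation in the more sensitive simulated patients, which is the behavior the study measured.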
RESULTS: There were 37 study participants (23 males, 14 females; 13 faculty members, 10 residents, 8 nurse anesthetists, 3 fellows, 3 anesthesiology assistants; age: 28-68, 38.6±10.1 years; experience delivering propofol during sedation and analgesia: 1-20, 6.8±5.8 years). The loading doses were Caucasian (0.27-1.71, 0.77±0.31 mg/kg), Indian (0.29-1.71, 0.80±0.32 mg/kg), and Black (0.25-1.71, 0.79±0.28 mg/kg). The durations of oversedation (LOC) were Caucasian (0-318, 147±85 s), Indian (26-338, 207±68 s), and Black (0-367, 191±81 s). Between patient races, there was no significant difference in loading doses (p=???) and a significant difference in LOC duration (p=???).
CONCLUSIONS: If the above data, collected in a simulated environment at an academic health center in the Southeast United States, are representative of actual clinical practice, they indicate a race-blind formulaic approach that predisposes sensitive races to oversedation, and a learning gap that may need to be addressed in training programs.