Status Quo
The development of Figments.nrw began at the end of 2020, shortly after the kick-off of the AR/VR.nrw project. Initially, the focus was on creating functional prototypes as quickly as possible; the current focus is on the first evaluation prototype (slated for the end of 2021).
A series of smaller sub-steps has already been pursued to this end:
- Selection of a suitable version control system (VCS), in our case a GitLab instance hosted by the Institute of Visual Computing at Bonn-Rhein-Sieg University of Applied Sciences.
- Establishment of suitable communication channels for the scientific staff working on the project, realized by means of Mattermost.
- Selection of a suitable integrated development environment, taking into account the requirements and needs of all stakeholders involved in the project. After ample consideration, Unity was chosen here over Unreal Engine; we will use Unity in its 2020 Long Term Support (LTS) version.
- Building the basic software framework for the realization of augmented and virtual reality, here using OpenXR in Unity.
- Adoption of a suitable coding style for C#.
- Development of a website for communication about the activities in the project.
- Establishment of regular operational and strategic meetings of the project management and the project staff.
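To illustrate the OpenXR-in-Unity framework mentioned above, a minimal sketch of how controller input can be read through Unity's XR input API (available with the OpenXR plugin) might look as follows; the class name and the choice of the trigger button are our own assumptions, not part of the actual Figments.nrw codebase:

```csharp
using UnityEngine;
using UnityEngine.XR;

// Hypothetical sketch: polls the right-hand controller's trigger each frame
// via Unity's XR input API.
public class TriggerPoller : MonoBehaviour
{
    void Update()
    {
        // Look up the device currently mapped to the right hand.
        InputDevice rightHand = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);
        if (!rightHand.isValid)
            return;

        // CommonUsages.triggerButton is the binary "trigger pressed" feature.
        if (rightHand.TryGetFeatureValue(CommonUsages.triggerButton, out bool pressed) && pressed)
        {
            Debug.Log("Right trigger pressed");
        }
    }
}
```

Because this path goes through OpenXR, the same script works across the supported VR-HMDs without device-specific code.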
Current Prototype & Features
This work led to a current prototype, which allows several participants to simultaneously enter a virtual learning environment via VR-HMD, PC or Android devices. So far, the following is (more or less) possible in this environment (list not complete):
- Import of 3D objects and environments
- Manipulation of objects (e.g. grabbing, dropping, bumping)
- Navigation via room-scale VR and virtual teleportation
- “Holographic” user interface that is worn on the virtual wrist and can be used for more complex (system) settings.
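A wrist-worn menu like the one described above can be realized with a world-space canvas that follows the tracked hand. The following is a minimal sketch under our own assumptions (field names and the exact offset are illustrative, not the project's actual implementation):

```csharp
using UnityEngine;

// Hypothetical sketch: keeps a world-space UI canvas attached to the tracked
// wrist and oriented toward the user's head.
public class WristMenu : MonoBehaviour
{
    public Transform wrist;   // e.g. the left-hand controller/hand anchor
    public Transform head;    // the main camera of the XR rig
    public Vector3 localOffset = new Vector3(0f, 0.05f, 0f);

    void LateUpdate()
    {
        // Follow the wrist with a small offset above it...
        transform.position = wrist.TransformPoint(localOffset);
        // ...and rotate the menu so it always faces the viewer.
        transform.rotation = Quaternion.LookRotation(transform.position - head.position);
    }
}
```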
Experimental Features
Experimental functions include, for example:
- Integration of PowerPoint slides via a virtual projection surface
- Mirroring of the desktop (only if PCs are used)
- Drawing on virtual surfaces
Dev #01: Current Activities
As mentioned in our roadmap, the first user study is planned for the coming semester. The focus of our current efforts is to develop the necessary functions for this.
We will publish additional information on the subject-matter focus of this first study (in the area of crystalline structures) later. In addition to the basic functions of Figments.nrw, we will use this opportunity to test the first modular extensions (import of subject-specific file formats). The main focus will be on investigating the usability of the software and the suitability of different device types (especially VR-HMDs) for various teaching scenarios.
The project team discusses current and planned developments in weekly meetings. In the past week, the following points were addressed, among others:
Scenes & Settings
Virtual Avatars
Spatial Audio & Voice Chat
In addition to avatars, which enable non-verbal communication in virtual space through a visible, observable counterpart, we are currently developing further interaction, collaboration and communication tools.
Spoken communication is a very important aspect here, and one we wanted to address as early as possible in the development of Figments.nrw. There is a lot to consider, from auditory factors and audio quality to synchronization over the network, so we are devoting considerable attention to this point.
Virtual reality offers advantages here: participants (as well as any sound sources in the room) can be located precisely in space, which can be exploited using spatial audio. Signals become audible binaurally and can thus contribute significantly to orientation in the virtual space. This also opens up possibilities for auditory feedback, e.g. in learning processes.
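In Unity, the positional audio described above essentially comes down to configuring an `AudioSource` for fully spatialized, distance-attenuated playback. The following is a minimal sketch with assumed attenuation distances (the actual values and spatializer plugin used in Figments.nrw may differ):

```csharp
using UnityEngine;

// Hypothetical sketch: sets up an AudioSource for fully spatialized,
// distance-attenuated playback, as one would for positional voice in VR.
[RequireComponent(typeof(AudioSource))]
public class SpatialVoice : MonoBehaviour
{
    void Awake()
    {
        AudioSource src = GetComponent<AudioSource>();
        src.spatialBlend = 1f;                          // 1 = fully 3D, 0 = flat 2D
        src.spatialize = true;                          // hand off to the configured binaural spatializer
        src.rolloffMode = AudioRolloffMode.Logarithmic; // natural distance falloff
        src.minDistance = 1f;                           // full volume within 1 m (assumed value)
        src.maxDistance = 15f;                          // fades out toward ~15 m (assumed value)
    }
}
```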
Drawing & Writing in 3D
Another way to enable communication in virtual space is through writing and drawing. Via controller or hand tracking, a natural interaction is possible that comes surprisingly close to writing in reality (e.g., using a pen or brush).
While we have already successfully implemented three-dimensional drawing in virtual reality, we are currently working on writing on two-dimensional virtual surfaces (e.g., a whiteboard) as well. We see potential both in the familiarity with which users can pick up these drawing tools and in the possibility of saving and exporting these two-dimensional drawings conventionally (e.g., as JPG/PNG) for later use.
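Exporting such a two-dimensional drawing surface as a PNG can be sketched as follows, assuming the whiteboard is rendered into a `RenderTexture` (the helper name and file path handling are our own illustrative assumptions):

```csharp
using System.IO;
using UnityEngine;

// Hypothetical sketch: reads a whiteboard's RenderTexture back to the CPU
// and writes it to disk as a PNG file.
public static class WhiteboardExport
{
    public static void SavePng(RenderTexture board, string path)
    {
        RenderTexture previous = RenderTexture.active;
        RenderTexture.active = board;

        // Copy the GPU texture into a CPU-readable Texture2D.
        var tex = new Texture2D(board.width, board.height, TextureFormat.RGBA32, false);
        tex.ReadPixels(new Rect(0, 0, board.width, board.height), 0, 0);
        tex.Apply();

        RenderTexture.active = previous;

        File.WriteAllBytes(path, tex.EncodeToPNG());
        Object.Destroy(tex); // free the temporary texture
    }
}
```

`Texture2D.EncodeToJPG()` could be substituted for `EncodeToPNG()` to produce the JPG variant mentioned above.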