Dev #03: Input, video streaming, voice chat & crystals

Today we cover the integration of video streaming and voice chat with Agora, the implementation of individualised avatars with the help of ReadyPlayer.Me, and a first learning scenario.

Dev #03: Current activities

Slowly but surely, development of the application is progressing. The developers are returning from their well-deserved summer break, so development is picking up speed again. In recent weeks the focus was on implementing new features, which will be merged in the near future to create an updated source. In addition, some small problems with the camera controls in desktop mode were fixed.

Refactoring of the input methods

Currently, the manual binding of user input to the actions it should trigger is a potential source of errors, hard to read, and difficult to maintain. To eliminate this source of errors and improve the readability of the code, we are revising the input system so that all input bindings are defined in code. This will mean additional work, especially during the merge, but we expect that the improved readability and error prevention will outweigh the effort in the long run.
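The idea of code-defined bindings can be sketched language-agnostically. Here is a minimal Python sketch (all names hypothetical, not our actual engine API): actions are registered and dispatched entirely in code, so a duplicate or missing binding fails loudly at startup rather than silently misbehaving at runtime.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class InputMap:
    """Binds named input actions to handler functions entirely in code,
    so bindings are versioned, reviewable, and validated on registration."""
    bindings: Dict[str, Callable[[], None]] = field(default_factory=dict)

    def bind(self, action: str, handler: Callable[[], None]) -> None:
        # Duplicate bindings are a classic source of subtle input bugs,
        # so we reject them immediately instead of overwriting.
        if action in self.bindings:
            raise ValueError(f"Action '{action}' is already bound")
        self.bindings[action] = handler

    def trigger(self, action: str) -> None:
        # Fails loudly on unbound actions instead of silently doing nothing.
        self.bindings[action]()


input_map = InputMap()
input_map.bind("teleport", lambda: print("teleporting"))
input_map.trigger("teleport")
```

The point of the sketch is the error behaviour: both a duplicate bind and an unbound trigger raise, which is exactly what manual, config-driven binding tends to hide.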

Video streaming & voice chat

For many learning scenarios, communication between teachers and learners as well as among learners is an important element. For this reason, the implementation of communication in our application is quite important. Besides voice chat, the integration of video chat can be an enriching additional communication method, which we want to investigate further. We are currently using Agora to avoid a custom development. One disadvantage of this implementation is that the licensing conditions of this subscription-based solution are not compatible with our open-source approach. Therefore, building on the modular basic structure of our application, we will make the integration of individual modules for voice/video chat as easy as possible. In addition to the technical implementation, the settings menu was also extended so that users can now adjust the desired device settings in VR, i.e. select which audio or video devices are to be used. However, as with the other user interfaces implemented so far, these are functional placeholders; the design will be revised from a UX/UI point of view at a later stage.
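The modular approach described above can be sketched as a small plug-in interface. This is an illustrative Python sketch only (the interface and class names are hypothetical, not our real architecture or the Agora SDK): the application talks to an abstract communication module, so the Agora backend could later be swapped for an open-source implementation without touching the rest of the code.

```python
from abc import ABC, abstractmethod
from typing import List


class CommunicationModule(ABC):
    """Hypothetical plug-in interface for voice/video chat backends.
    Agora would be one implementation; an open-source stack another."""

    @abstractmethod
    def join(self, channel: str) -> None: ...

    @abstractmethod
    def set_audio_device(self, device_id: str) -> None: ...

    @abstractmethod
    def leave(self) -> None: ...


class LoggingModule(CommunicationModule):
    """Stand-in backend used here only to show the plug-in shape;
    it records calls instead of opening real audio/video sessions."""

    def __init__(self) -> None:
        self.events: List[str] = []

    def join(self, channel: str) -> None:
        self.events.append(f"join:{channel}")

    def set_audio_device(self, device_id: str) -> None:
        self.events.append(f"audio:{device_id}")

    def leave(self) -> None:
        self.events.append("leave")


def run_session(module: CommunicationModule) -> None:
    # Application code depends only on the abstract interface.
    module.join("classroom-1")
    module.set_audio_device("headset-mic")
    module.leave()
```

The in-VR device settings menu would then simply call `set_audio_device` (or a video equivalent) on whichever backend is active.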

Learning content: Crystal structures

Presentation mode:

The first prototypical learning unit is currently under construction. Crystal structures were selected as the topic for the demonstrator because they are important for many subject areas, especially materials science. One problem in materials science teaching is that although illustrative physical models exist, they are expensive and fragile, which makes it impractical to pass the models around in class and experience the structures haptically. By visualising the structures in VR, we want to address this challenge. In this first learning unit it will be possible to build crystal structures from individual atoms, arrange them according to different lattice types, and visualise them.
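To illustrate what "arranging atoms according to different structures" means computationally, here is a minimal Python sketch that generates atom positions for the standard simple-cubic, body-centred cubic, and face-centred cubic lattices (the fractional basis coordinates are textbook crystallography; the function name is hypothetical):

```python
from itertools import product

# Fractional basis positions within one unit cell for common cubic lattices.
BASES = {
    "sc":  [(0.0, 0.0, 0.0)],
    "bcc": [(0.0, 0.0, 0.0), (0.5, 0.5, 0.5)],
    "fcc": [(0.0, 0.0, 0.0), (0.5, 0.5, 0.0),
            (0.5, 0.0, 0.5), (0.0, 0.5, 0.5)],
}


def lattice_positions(kind: str, n: int = 1, a: float = 1.0):
    """Atom positions for an n*n*n block of unit cells with lattice constant a."""
    basis = BASES[kind]
    return [((i + x) * a, (j + y) * a, (k + z) * a)
            for i, j, k in product(range(n), repeat=3)
            for x, y, z in basis]


# A 2x2x2 BCC block: 8 cells with 2 atoms each.
print(len(lattice_positions("bcc", n=2)))  # 16
```

In the learning unit, positions like these would be the anchor points at which learners place individual atoms in VR.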

Avatars with ReadyPlayer.Me

As described in Dev #01, the personal avatar is also relevant for immersion in a virtual environment. We will therefore enable teachers and learners to use individualised avatars. Currently, ReadyPlayer.Me is being integrated. This web app makes it possible to create customised avatars and either integrate them directly into one's own software via an API or download them as a glTF file. We came closer to implementing this function last week. As explained in Dev #02, the application can be used both on the desktop and in VR. While the integration of avatars in the latter has already been completed, the implementation for desktop users will follow soon.
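Conceptually, loading such an avatar boils down to turning an avatar ID into a model download URL. The Python sketch below shows that idea only; the host and path scheme are assumptions about ReadyPlayer.Me's public endpoints and should be verified against their current documentation before use.

```python
def avatar_url(avatar_id: str, fmt: str = "glb") -> str:
    """Build a download URL for a ReadyPlayer.Me avatar model.

    NOTE: the host and path scheme here are assumptions for illustration;
    check the current ReadyPlayer.Me documentation before relying on them.
    """
    if fmt not in ("glb", "gltf"):
        raise ValueError(f"Unsupported format: {fmt}")
    return f"https://models.readyplayer.me/{avatar_id}.{fmt}"


# The returned URL could then be fetched and the glTF/GLB model
# imported into the engine's avatar rig at runtime.
print(avatar_url("6185a4acfb622cf1cdc49348"))
```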
