Release Version 0.2.0

We need your Feedback

After a lot of work and an unbelievable amount of input from our inner circle and from external experts, we are now happy to provide you with the first version of the application to test and scrutinize. As always in such comprehensive projects, there were setbacks and delays, and unfortunately we could not yet implement everything we had planned.

Version 0.2.0 thus bears in its name what we want to elaborate on in the course of this article: the development, even if we have already achieved a lot, is not yet finished. There is always more to do, bugs to fix, and your feedback to consider.

And here we hope for your cooperation:
Please test!

We have set up a public Gitlab where you can not only download the latest release, but also submit bug reports and feedback.

Scope of Version 0.2.0

As announced in the last post, there are unfortunately still open construction sites. While we were able to fix some of them in the last two weeks, there are still things that could be improved. Specifically, we see a need for action in the following functions, which are therefore not yet, or not fully, available in this version:

  • Network & Collaboration: We could not yet completely fix the bugs in the network communication via dedicated servers. We were able to successfully network with each other again and enter the virtual world together in VR/desktop mode, but without voice chat: WebRTC in conjunction with university firewalls is still throwing a spanner in the works. We are working on it. Since we believe that meaningful collaboration in VR also needs active communication options, without having Discord, Zoom or the like running in the background, we will add this feature in a later version.
  • Create Figments: Figments represent a collection of one or more assets that have been spatially positioned and related to each other. They can also contain multiple states, such as different configurations or arrangements of the assets. While they are initially loaded in a default state, users can navigate the individual states linearly, similar to a PowerPoint presentation. At this point, the last user interfaces for the creation of figments are unfortunately still missing; there was no time for them yet due to the bug fixes in preparation for this release.
  • Data exchange: Data exchange currently works via a WebDAV interface. The shared folder we use for this is provided by the University of Wuppertal. This allows us to distribute large files to many different users at the same time and be sure that they are hosted on local servers. We plan to additionally give you the possibility of including your own Sciebo folder as another potential directory for your files. This function has already been created in the main menu, but is still locked for this release.

For this version, this means that you can import your own content, but not yet save it as a figment. This will be done in the next version (see roadmap).

Download, Installation and User Directories

Version 0.2.0 is initially released as a Windows-based application only and can be used with wired or wirelessly connected VR headsets as well as in desktop mode. Since we use OpenXR as the interface to the hardware, all common headsets should be supported, as long as they are OpenXR compatible. Later versions will additionally be released as an Android build that can be sideloaded to Android-based headsets.

The latest version can be downloaded from our public Gitlab. Just follow the link above, which will take you directly to the releases. Alternatively, you can use the direct links at the beginning of this article. v0.2.0 is available both as an installable version (recommended) and as a ZIP archive.

  • Installer: This installs the application on your computer, just as you know it from other applications.
  • Archive: If you do not want to install anything, you can also download the archive and unpack it. You can then start the application via “FigmentsNRW.exe”.

With the installation, or on the first run, your user directory is set up. It can be found in the Windows Explorer at the following address:


In order to internally ensure a correct mapping between file format and intended use, we use a directory-based approach. In concrete terms, this means that custom content can only be imported correctly if it has been stored in the correct directory. The most relevant folders are as follows:

  • \GTLF\: Currently we only support the import of 3D models in GLTF format. These can be in binary form (.glb), as an embedded version (.gltf without attachments) or as a separate version (.gltf + .bin + texture folder). You can copy the files directly into the \GTLF\ directory or create your own folder structures. Just make sure that GLTFs with external textures always have their texture folder in the same directory. GLTF is specifically designed for efficient transfer and loading of 3D scenes and models in virtual reality (VR) and other real-time 3D applications. It is an open standard maintained by the Khronos Group. GLTF files are compact and contain both the geometry and the appearance and animations of 3D models. The format aims to balance file size and rendering quality, making it ideal for VR environments where performance and visual fidelity are critical.
  • \Images\: For image files (currently .png).
  • \PresentationSlides\: For presentations, although we currently only support those that were exported as single images (e.g. from PowerPoint). Simply copy the folder containing the slides of your presentation into this directory. The folder title corresponds to the presentation title.
  • \CrystallineStructures\: As an example of the extensibility of our FileIO core, an importer for crystalline structures has been implemented. You can store them in this folder as Crystallographic Information Files (.cif). They are dynamically visualized as 3D models and can be configured based on the stored data.
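The directory-based mapping described above can be thought of as a simple lookup from file extension to target subfolder. The following Python sketch is purely illustrative (the `FOLDER_FOR_EXTENSION` table and `target_folder` function are assumptions, not the actual importer code):

```python
from pathlib import Path

# Hypothetical sketch of the directory-based mapping: the importer
# infers the intended use of a file from the folder it lives in, so
# files must be copied into the matching subfolder of the user directory.
FOLDER_FOR_EXTENSION = {
    ".glb": "GTLF",   # binary GLTF
    ".gltf": "GTLF",  # embedded or separate GLTF
    ".png": "Images",
    ".cif": "CrystallineStructures",
}

def target_folder(filename: str) -> str:
    """Return the user-directory subfolder a file should be copied to."""
    suffix = Path(filename).suffix.lower()
    try:
        return FOLDER_FOR_EXTENSION[suffix]
    except KeyError:
        raise ValueError(f"unsupported file type: {suffix!r}")

print(target_folder("molecule.cif"))  # CrystallineStructures
```

A file placed in the wrong folder would simply not be picked up for its intended use, which is why the folder layout matters.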

The remaining folders are used for internal processes, e.g. to store your individualized avatars, figments, drawings or audio recordings locally.

Main Menu

Desktop and VR Mode

After starting the application, you will get to the main menu. If you want to jump right in, you can click the Desktop or VR button on the right side. If you click VR but end up in desktop mode, your headset was not recognized correctly. In that case, please check whether the headset is correctly connected to the computer, e.g. via Pico Link and SteamVR. If you are using multiple headsets from different manufacturers, please make sure that the correct OpenXR runtime is used (e.g. in the SteamVR settings). Some manufacturers like to write themselves into the default, e.g. when switching between Windows Mixed Reality (e.g. for the Reverb G2) and SteamVR (e.g. for the Pico).


We have been partnering with an avatar platform for some time, with whose help we realize customizable avatars. With the avatar editor you can create your own avatars and use them in the application. Just add your avatar code under Profile, either as a full-body avatar for desktop mode or as a half-body avatar for virtual reality. You can also give your avatar a name here.

Settings and localization

The application is mostly localized in English and German. We are pretty sure we made some mistakes along the way, so if you notice any text that is not translated or is translated incorrectly, please let us know. By default, the application uses the current default language of your system, but you can change it within the application at any time (also in-game).

With the network settings and the settings for external asset databases, two more advanced functions are hidden here that are not yet part of this release.

The Basics


In desktop mode, you move around as you would in a computer game using the mouse and keyboard. Use WASD to move around and rotate by holding down the right mouse button. You can open the menu by pressing the escape key.


More information on the individual functions available in virtual reality can be found below. A basic feature is that the application supports room-scale tracking and is operated with two controllers.

  • Navigation: You can move naturally at any time if you have the space to do so in reality. If you don’t have that space, you can teleport using the right controller. Just hold the analog stick (or the touch equivalent) [Primary2DAxis] forward, aim and release. Snap-Turn is also enabled, meaning that if you tap the analog stick left or right, you will automatically turn 90°. Alternative navigation metaphors, like Fly or Grab Move, are introduced below.
  • Activation: To activate functions or to operate the menu, the trigger of the controller is used [trigger]. With most controller types, this is ergonomically most sensible to reach with the index finger.
  • Manipulation: To grip objects (or the world) we use the grip trigger of the controller [grip]. On most controllers, this is operated with the middle finger (Pico, Reverb, Quest) or the palm (Index).
  • Quick UI: To be able to quickly call different functions we also use the primary button of the controller [primaryButton], in most cases called “A” or similar.

Quick UI

The right controller also gives you access to the Quick UI – the interface to all advanced functions of the application.

To open the Quick UI, simply hold down the primary button on your controller (on the Pico, for example, this is the “A” button). A few shortcuts to interaction and navigation metaphors are then displayed on your hand in a semicircle. While the menu represents a simple function call, the other shortcuts usually switch you to another player state. What these are conceptually about can be found below.

To select one of these functions, simply move your hand in the approximate direction of an icon. This doesn’t have to be exact, and after a short time it can be done almost off the cuff. Then you just have to release the primary button and the function is activated. The following functions can be found in the QuickUI:

  • Menu: Opens the main menu, through which you can import objects, create figments and enter rooms.
  • Audio Annotations: With this you can record your own voice and place it as a note in the room.
  • 3D Drawing: Draw your notes in space.
  • Grab-Move: An alternative locomotion metaphor where you grab and move the world around you. More on that in a moment.
  • Object Selection: Objects in the virtual world can be edited. In order to explicitly distinguish between the interaction during learning (usually grasping, moving and activating) and the design phase (editing), we have separated out the selection.
  • Visual Scripting: With the help of these tools, objects can be logically linked with each other. Unfortunately, this is currently still primarily an experimental feature.
  • Default: The central button brings you back to the standard state, e.g. to exit drawing mode. Since you don’t have to move your hand for it, a short press of the button returns you to the standard state at any time.

Players and their states

Conceptual information on the player states

With player states we control how you can interact with and manipulate the virtual world. Since we use a variety of different interaction and navigation metaphors, overlaps in the control concepts unfortunately cannot be avoided. For example, if the navigation metaphor “teleport” involves using the analog stick, we can’t use it for the navigation metaphor “fly” at the same time.

Depending on what you want to do in the virtual environment at a given time, we switch between the required player states and thus avoid, for example, a button doing something you do not expect.
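The idea of player states resolving input conflicts can be sketched as a small lookup table: each state binds the same physical controls to different actions, so switching states is a single dictionary swap rather than scattered conditionals. The state and action names below are made up for illustration and do not reflect the actual implementation:

```python
# Illustrative sketch: each player state maps the same physical inputs
# (trigger, grip, analog stick) to a different action, so a control can
# never be claimed by two metaphors at once.
BINDINGS = {
    "default":   {"trigger": "activate", "grip": "grab_asset", "stick": "teleport"},
    "drawing":   {"trigger": "draw",     "grip": "grab_asset", "stick": "teleport"},
    "grab_move": {"trigger": "activate", "grip": "grab_world", "stick": "teleport"},
}

def action_for(state: str, control: str) -> str:
    """Look up what a physical control does in the given player state."""
    return BINDINGS[state][control]

print(action_for("drawing", "trigger"))  # draw
```

Switching the active state then changes what the trigger does without rebinding anything else.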

Internal player states

We use this system internally to differentiate between player states, for example to distinguish whether you want to pick up an object directly in your hand, grab it from a distance using a ray, or simply press it using a poke interaction. These internal states are usually transient, meaning they are only active for the short period of time they are needed.

Active player states

When you switch to a dedicated active state via the Quick UI or the menu, for example, some of the functions you can perform with the controllers change. The active states are listed below. In parentheses is the input that is changed by the state.

  • Default: In standard mode, when you are in no other active player state, you can teleport (and rotate grabbed objects) with the analog stick, trigger activations with the trigger, and grab and manipulate assets with the grip trigger. Additionally, you can also grab assets with both hands to scale and rotate them.
  • Audio-Annotation (Trigger): In this state, your right controller switches to recording mode. While you hold down the trigger, a recording is started via your microphone. As soon as you release it, the recording stops and the annotation is finalized. Currently, in addition to the audio-only component, it is visualized by a speaker that you can pick up and position like any other asset. Via a context menu on the recording, you can, among other things, set whether it should be played via surround sound or broadcast and whether it has to be triggered or starts automatically.
  • 3D-Drawing (Trigger): With the trigger held down, your right controller switches to drawing mode and you can now draw. These annotations are also created as 3D objects and can be manipulated and positioned in the environment after creation.
  • Grab-Move (Grip): In this state, the left and right controllers switch to Grab Move mode, and you can move as if you were grabbing the entire world around you. If you hold down the grip trigger one-handed and move your arm, you’ll essentially pull yourself along the world. If you hold down the grip triggers with both hands, you can not only move, but also manipulate the world: bring your hands together or apart to scale it, or move them around an imaginary axis between them to rotate it. A small visual aid shows you your relative size to the world. Currently we have limited the scaling to the range 0.1x–250x. Let us know if you want to get even bigger or smaller.
  • Object Selection (Trigger): In this state, you can select objects with your right hand. Simply point to them or touch them and press the trigger. An outline visualizes currently selected objects. If you click on an already selected object again, the selection is cancelled. Selected objects can be edited (more easily) in the Asset Editor (see below).
  • Visual Scripting (Trigger): In this mode several things happen, which will be described later in a separate article. Functionally, you can create a new node on an object with a long press of the trigger, which allows you to extend it with logic and functions. In addition, a developer menu is currently floating in front of you in this mode, from which global functions (e.g. ProximityTrigger) can be dragged into space and new programs can be generated.
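The two-handed scaling in Grab-Move mode boils down to scaling the world by the ratio of the current hand distance to the distance when the grab started, clamped to the 0.1x–250x range mentioned above. The following function is a hypothetical sketch of that calculation, not the actual code:

```python
def world_scale(start_hand_distance: float,
                current_hand_distance: float,
                current_scale: float,
                min_scale: float = 0.1,
                max_scale: float = 250.0) -> float:
    """Scale the world proportionally to how far the hands moved apart,
    clamped to the 0.1x-250x range described in the text."""
    factor = current_hand_distance / start_hand_distance
    return max(min_scale, min(max_scale, current_scale * factor))

# Doubling the distance between the hands doubles the world scale:
print(world_scale(1.0, 2.0, 1.0))  # 2.0
```

The clamp is what keeps users from shrinking or growing the world beyond the supported range.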

Menu structure

The menu is divided into two core components: the toolbar, which is always visible and from which you can access thematically defined areas of the menu, and the canvas above it, on which the selected menu area is displayed. Conceptually, we distinguish here between the tools that are most likely to be of interest to different user groups, supplemented by status displays and settings.

  • Home: The view that is displayed when the menu is called up for the first time. Currently, we only use this for information about us and the current version, but it would be conceivable to expand this in the future. For example, basic information about the current learning content or the currently connected learners could be collected here.
  • Learning: This area is currently used to display information about the current session and to create annotations. This is complementary to the Quick UI, but allows the pen color to be selected for 3D drawings, for example. Conceptually, we see a lot of potential here to integrate e.g. learning tasks, success checks and feedback methods.
  • Authoring: The core needed for designing immersive worlds. This is where all the methods are collected that can be used to import, edit, merge, store, and share content.
  • Settings: Here, various parameters of the application can be adjusted, e.g. for the method of movement and the personal protection sphere.

How these tools work and how you can use them is described below.

Virtual Reality: Stationary or player-bound user interfaces

While the menu can be opened via Escape on the desktop and otherwise behaves like common user interfaces, additional methods are available in virtual reality:

If you access the menu via the Quick UI, it will appear right in front of you. It will also follow you as you navigate virtual space, whether you walk, teleport, fly, or move the world. It is now effectively in a player-locked mode, also known as Lazy Follow.

The menu can also be grabbed and moved by you just like any other asset. To prevent this from happening accidentally, there is a white bar below the menu, the so-called GrabBar. When you grab the menu, it stays at the position you chose; it is then in stationary mode. The menu remains there until you either close it or reopen it. So that you don’t have to search for it in large rooms, it always appears in front of you in Lazy Follow mode when you call it up, even if it was previously fixed in the room.
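A common way to implement the Lazy Follow behavior described above is to ease the menu a fraction of the remaining distance toward a target point in front of the player each frame. The sketch below illustrates the idea; the function name and smoothing value are assumptions, not the actual implementation:

```python
def lazy_follow(menu_pos: tuple, target_pos: tuple, smoothing: float = 0.1) -> tuple:
    """Return the menu's new position, eased a fraction of the way
    toward the target point in front of the player (Lazy Follow)."""
    return tuple(m + (t - m) * smoothing for m, t in zip(menu_pos, target_pos))

# After one step, the menu has covered 10% of the distance:
print(lazy_follow((0.0, 0.0, 0.0), (10.0, 0.0, 0.0)))  # (1.0, 0.0, 0.0)
```

Repeating this every frame gives the characteristic trailing motion: the menu never snaps, it drifts after the player.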

Tools and kits

Conceptual: From assets to figures and spaces

The following key components form the basis of our immersive virtual experiences – spaces, figments and assets. In order to adequately define these terms and illustrate how these components interact in the design process, let’s expand a bit:

  • Spaces: From a software development perspective, spaces are essentially Unity scenes. These scenes comprise a static 3D environment with precalculated lighting. Spaces effectively serve as an immersive framework into which assets and figments can be imported, creating a cohesive and interactive virtual environment. Since it is unfortunately not technically possible to perform the computationally intensive calculation of the lighting (so-called light mapping) at runtime, these spaces can only be created in the development environment.
  • Figments: Figments are curated sets of assets that can encapsulate multiple states, each representing different configurations or arrangements. Each figment begins with a base state that represents the assets as they were originally created. Furthermore, figments can include multiple additional states, each of which can either house the same assets in a new configuration, alternate assets, or a mixture of both. This flexibility enables dynamic presentations and interactive experiences, making Figments a versatile tool for presenting content. States can be navigated in a linear fashion, similar to a structured progression, which is especially useful for presenting educational or linear learning content within our application.

    You can create and share figments directly in Virtual Reality and use, reuse, and customize them as needed. For repetitive configurations, you can create a figment template and then reuse it as the basis for new creations.

    At least in theory, this system can also be used to create room-filling environments if, for example, a suitably large 3D model is imported as a GLTF. However, as described above, lightmapping is not possible (e.g. to make environments aesthetically pleasing), so we recommend either creating rooms directly or integrating the lightmap into the model’s textures. However, since the latter is a rather complex procedure that also makes subsequent lighting more difficult, we will not go into this in more detail.
  • Assets: In immersive worlds, we generally use three-dimensional objects for visualization. There is therefore an increased need to be able to use 3D data as effectively as possible. The relationship between the creation effort and the added value generated by using this data must be as favorable as possible. In order to be able to integrate 3D data from external sources, e.g. as open educational resources (OER) and/or from object databases, as many of the common file formats as possible should be supported. We have chosen GLTF as the exchange and import format for 3D assets, as described above.

    In addition, we summarize under the term assets all content that you can import via the file browsers, i.e. not only 3D models but also images, texts, audio files and the like.
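The figment concept above (a base state plus additional states navigated linearly) can be sketched as a tiny class. Class and field names here are assumptions for illustration, not the actual data model:

```python
# Conceptual sketch of a figment: a named set of states, where
# states[0] is the base state and the rest can be stepped through
# linearly, like slides in a presentation.
class Figment:
    def __init__(self, name: str, states: list):
        self.name = name
        self.states = states  # states[0] is the base state
        self.current = 0

    def next_state(self) -> str:
        self.current = min(self.current + 1, len(self.states) - 1)
        return self.states[self.current]

    def previous_state(self) -> str:
        self.current = max(self.current - 1, 0)
        return self.states[self.current]

f = Figment("engine demo", ["base", "exploded view", "assembled"])
print(f.next_state())  # exploded view
```

Clamping at both ends means stepping past the last state simply stays there, which matches the linear, slide-like navigation described above.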

Space Travel

We have prepared a number of spaces for you to explore as you wish. If you have not changed anything in the start menu, you will always start in our empty template (TemplateScene). Via the menu you can also show help elements and gizmos, e.g. a grid on the floor. Via the browser you can select and enter one of the alternative spaces. If you are in the virtual room with several people (which is only possible via your local network in the current version), all logged-in users will also be teleported to the new space.

Importing Assets

You can use the Asset Browser to import your own files into the virtual environment. By default, your local user directory is selected as the source. However, you can also use the dropdown to select the WebDAV directory we created, which is hosted by the University of Wuppertal. The application is inherently designed for collaborative use. In the context of file import, this means that assets must not only be visible on your side, but also available to all other logged-in users at runtime. In addition, movements and general state changes must be synchronized. For this reason, all assets you import are copied to the central WebDAV directory, from where they are distributed to all users and thus duplicated. Please take this into account if you want to include your own assets.

Assets, whether from your local file system or from WebDAV, are distributed and synced to all logged-in users (and stragglers) during import. This is visualized by a small “loading capsule” so that it is already apparent during the upload/download where an asset will appear. By default, assets spawn about 3m in front of you, but in the menu you can also set them to appear in the center of the room. You can also decide whether the menu should be closed automatically after the import.
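The default spawn point ("about 3 m in front of you") is essentially the player position offset along the view direction. A hypothetical one-liner, assuming a unit-length forward vector:

```python
def spawn_position(player_pos: tuple, forward: tuple, distance: float = 3.0) -> tuple:
    """Point a fixed distance in front of the player (default 3 m),
    assuming `forward` is a unit-length view-direction vector."""
    return tuple(p + f * distance for p, f in zip(player_pos, forward))

# Player at the origin looking along +z spawns the asset 3 m ahead:
print(spawn_position((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # (0.0, 0.0, 3.0)
```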

Editing Assets

The Asset Editor allows you to edit individual objects, for example to include them in the physics simulation (or not). You can choose whether all assets in the current room are displayed in the view or only the assets you have selected. Essentially, the following things can be set here:

  • Transform: The position, rotation and scaling of the asset.
  • Physics: Whether, and how the asset participates in the physics simulation.
  • Interactions: This can be used to lock individual interactions per asset, e.g. to prevent an asset from being moved or deleted by users.


Roadmap

Future releases will be numbered based on Semantic Versioning and follow the MAJOR.MINOR.PATCH scheme.

Bug fix updates for version 0.2.0, presented in this post, will be released in the form of patches via our Gitlab.

Version 1.0.0 will thus be the first major release; until then, the following minor releases are planned:

  • v0.3.0: Figments-Browser and -Editor (Refactoring & UI)
  • v0.4.0: Visual Scripting (Refactoring & UI)
  • v0.5.0: Netcode Update

These version targets are quite fluid and will be adjusted according to need and capacity on our part. There is no schedule for these versions, as we are now doing this work on a voluntary basis.


The source code will be published on our Gitlab outside the outlined release scheme as soon as the coordination with the corresponding departments and legal consultations are completed.
