UI and Spatial

TL;DR

In this sprint I focused mainly on refactoring the UI system and starting work on the spatial systems. Specifically:

Unity UI GraphView

I was interested to see if I could build a graph-based UI system with screens and actions to hook together a UI. First I created screen nodes as scriptable objects which automagically check whether they have buttons and add button ports, to which I can then attach actions in the graph. I followed the YouTube tutorials by Inde Wafflus and The Kiwi Coder to create the graph interface in GraphView.
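To give a rough idea of the button-port trick (this is a simplified sketch, not my actual node code; the screen data source is a stand-in), a GraphView node can add one output port per button it finds on the screen asset:

```csharp
using System.Collections.Generic;
using UnityEditor.Experimental.GraphView;

// Sketch: a GraphView node that adds one output port per button found on
// the screen ScriptableObject, so actions can be wired to button clicks.
public class ScreenNode : Node
{
    public void AddButtonPorts(IEnumerable<string> buttonNames)
    {
        foreach (var buttonName in buttonNames)
        {
            // One output port per button; action nodes connect to these.
            var port = InstantiatePort(Orientation.Horizontal, Direction.Output,
                                       Port.Capacity.Multi, typeof(bool));
            port.portName = buttonName;
            outputContainer.Add(port);
        }
        RefreshExpandedState();
        RefreshPorts();
    }
}
```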

I was able to build a (mostly) working graph UI system with which I could create screens and hook up actions in series or parallel, and it all worked at runtime. Amazing! However, I quickly realised that the complexity of even the starting scene (including registration, login etc.) made the graph grow very quickly and become quite hard to read. I then decided to abandon the Graph UI system in favour of a more event-based approach.

Refactoring the UI for Unity Atoms

Seeing that the project was growing rapidly with just the starting UI needs, I decided to abandon the Graph UI in favour of a more robust scriptable object solution. Thankfully, the way I implemented the Graph UI nodes made it easy to transfer them to the new event-based system. I integrated the wonderful Unity Atoms package, which gave me a great starting base to build upon. Unity Atoms revolves around using scriptable objects as events and actions to make the code modular (read more here if interested). First I created a Screen Manager Runner that references a Scriptable Object Screen Manager and injects the UI document that I want to populate with all things UI.

The simple MonoBehaviour that injects the UI Document into the screen manager
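For illustration, a minimal sketch of what such a runner can look like (the ScreenManager type and its Initialize method are stand-ins for my actual classes):

```csharp
using UnityEngine;
using UnityEngine.UIElements;

// Minimal sketch: hands the scene's UIDocument to the ScriptableObject
// screen manager so all screen logic can live outside the scene.
public class ScreenManagerRunner : MonoBehaviour
{
    [SerializeField] private ScreenManager screenManager; // the SO asset
    [SerializeField] private UIDocument uiDocument;       // the document to populate

    private void Awake()
    {
        // Inject the document; the manager takes over from here.
        screenManager.Initialize(uiDocument);
    }
}
```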

The screen manager houses all the screens, defines a starting screen, references the events that are important for this UI and also references the notification events. The screen manager listens for relevant events and opens or closes screens if able (if a screen has not been loaded yet, the manager waits and retries). If some part of the UI dispatches a notification, the screen manager injects it into the currently open screen.

The Screen Manager
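Sketched out, the manager looks roughly like this. The Unity Atoms StringEvent usage is illustrative, the retry logic is reduced to a comment, and the AtomScreen type is the one sketched further below:

```csharp
using System.Collections.Generic;
using UnityAtoms.BaseAtoms;
using UnityEngine;
using UnityEngine.UIElements;

// Sketch of the ScriptableObject screen manager; names are illustrative.
[CreateAssetMenu(menuName = "UI/Screen Manager")]
public class ScreenManager : ScriptableObject
{
    [SerializeField] private List<AtomScreen> screens;
    [SerializeField] private AtomScreen startScreen;
    [SerializeField] private StringEvent openScreenEvent;   // raised with a screen name
    [SerializeField] private StringEvent notificationEvent; // raised with a message

    private UIDocument document;
    private AtomScreen currentFullscreen;

    public void Initialize(UIDocument uiDocument)
    {
        document = uiDocument;
        openScreenEvent.Register(OnOpenScreenRequested);
        notificationEvent.Register(OnNotification);
        OnOpenScreenRequested(startScreen.name);
    }

    private void OnOpenScreenRequested(string screenName)
    {
        var screen = screens.Find(s => s.name == screenName);
        if (screen == null || !screen.IsLoaded)
            return; // the real manager queues the request and retries until loaded

        if (screen.IsFullscreen)
        {
            // Only one fullscreen at a time; popups draw on top instead.
            currentFullscreen?.Close();
            currentFullscreen = screen;
        }
        screen.Open(document.rootVisualElement);
    }

    private void OnNotification(string message)
    {
        // Notifications get injected into whichever screen is open.
        currentFullscreen?.ShowNotification(message);
    }
}
```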

Finally, what would a screen manager be without screens to display? I created custom screen elements (which I called Atom Screens) which are Scriptable Objects. Each Atom Screen needs a UXML housing all the screen elements. I can then plug any AtomActions I want into different parts of the screen. I can set actions to run after loading a screen (say, registering for some specific events), after opening a screen (say, checking whether the screen should be skipped) and after closing a screen (say, clearing the notification queue). I can of course also add actions to button click events. The buttons are automagically extracted from the UXML, and each button can trigger any number of actions. The screens also have a specific type: fullscreens cover the whole display and only one can be active at a time; these live at the back of the display. Popup screens sit on a layer in front and are drawn on top of fullscreens without closing them.

The Atom Screen
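A reduced sketch of such a screen, with Unity Atoms VoidEvents standing in for the pluggable AtomActions and a Debug.Log as a placeholder for the real per-button action lists:

```csharp
using UnityAtoms.BaseAtoms;
using UnityEngine;
using UnityEngine.UIElements;

// Sketch of an Atom Screen: a ScriptableObject wrapping a UXML asset with
// action hooks for the screen lifecycle. Member names are illustrative.
[CreateAssetMenu(menuName = "UI/Atom Screen")]
public class AtomScreen : ScriptableObject
{
    [SerializeField] private VisualTreeAsset uxml;   // houses all screen elements
    [SerializeField] private bool isFullscreen = true;
    [SerializeField] private VoidEvent onLoaded;     // e.g. register for specific events
    [SerializeField] private VoidEvent onOpened;     // e.g. check if the screen should be skipped
    [SerializeField] private VoidEvent onClosed;     // e.g. clear the notification queue

    private VisualElement root;

    public bool IsLoaded => uxml != null;
    public bool IsFullscreen => isFullscreen;

    public void Load() => onLoaded?.Raise();

    public void Open(VisualElement parent)
    {
        root = uxml.Instantiate();
        parent.Add(root);

        // Buttons are extracted automagically from the UXML; each one can
        // trigger any number of actions (a log call stands in for them here).
        root.Query<Button>().ForEach(button =>
            button.clicked += () => Debug.Log($"{button.name} clicked"));

        onOpened?.Raise();
    }

    public void Close()
    {
        root?.RemoveFromHierarchy();
        onClosed?.Raise();
    }

    public void ShowNotification(string message)
    {
        // The real implementation injects the message into a notification element.
        Debug.Log($"[{name}] {message}");
    }
}
```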

Testing Spatial Assets

Once the UI was refactored and working to a degree, I moved on to investigating the possibilities of adding spatial capabilities. I found the GOMap and Online Maps assets on the asset store. Both assets worked great out of the box (with some minor tweaking) and were able to generate maps in Unity. However, GOMap didn't offer a straightforward way of using custom Mapbox styles (at least I didn't figure one out), and Online Maps moves the map and all objects rather than the character GameObject, which seemed like the wrong fit for my project. I eventually opted for the Mapbox Unity SDK. Unfortunately, development and maintenance of this SDK seem to have been discontinued (I tried contacting Mapbox several times with no answer on this).

Nevertheless, after deleting all the unnecessary AR stuff and adjusting the code here and there, the SDK offers a great base to build on top of. In addition, the Mapbox SDK is built in a way that felt intuitive to me: locations come from a location provider which can be set differently for device and editor; the needed map tiles are identified either around a transform or in the camera view and then relayed to the tile factory, which makes a tile in Unity, downloads the texture and adds it to the object. There are many events to tap into and many methods for spatial recalculations: for example, OnLocationUpdated fires every time the “user's” location changes, and GeoToWorldPosition takes a latitude/longitude plus the map and translates the coordinates into Unity world space. Awesome!
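Put together, the two hooks look roughly like this (a sketch from memory of the SDK's API; names may differ slightly between versions):

```csharp
using Mapbox.Unity.Location;
using Mapbox.Unity.Map;
using UnityEngine;

// Sketch: follow the user by listening for location updates and converting
// lat/lon into Unity world space relative to the map.
public class PlayerLocationFollower : MonoBehaviour
{
    [SerializeField] private AbstractMap map;

    private ILocationProvider locationProvider;

    private void Start()
    {
        // Device vs. editor provider is resolved by the SDK's factory.
        locationProvider = LocationProviderFactory.Instance.DefaultLocationProvider;
        locationProvider.OnLocationUpdated += OnLocationUpdated;
    }

    private void OnLocationUpdated(Location location)
    {
        // Translate lat/lon into Unity world space and move the player there.
        Vector3 worldPos = map.GeoToWorldPosition(location.LatitudeLongitude, true);
        transform.position = worldPos;
    }

    private void OnDestroy()
    {
        if (locationProvider != null)
            locationProvider.OnLocationUpdated -= OnLocationUpdated;
    }
}
```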

Uber H3

For this location-based game to work, I need some form of hierarchical spatial indexing system to keep track of different regions in Firebase and to allow interactions with space. Think of it like the game board. I opted for Uber's H3 hexagonal indexing system, as hexagons are the bestagons, especially when it comes to spatial phenomena, and they look good in a game. I managed to get pocketken's H3.NET port to C# working in Unity, which gave me access to all the nice H3 methods. So the workflow is: I get the user's location and calculate the H3 index the user is in. I then get the vertex coordinates of each of the corners of the H3 hexagon and build the hexagon's triangles, normals and UVs in Unity world space. I implemented two methods, one to create a 2D plane of the hexagon and another to create a fully 3D extruded hexagon.
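The 2D variant boils down to a triangle fan. Here is a sketch assuming you already have the six corner positions in Unity world space (i.e. the H3 cell boundary run through GeoToWorldPosition):

```csharp
using UnityEngine;

// Sketch: build a flat hexagon mesh from six corner positions by fanning
// six triangles out from the centre vertex.
public static class HexMeshBuilder
{
    public static Mesh BuildFlatHex(Vector3[] corners, Vector3 center)
    {
        var vertices = new Vector3[7];
        vertices[0] = center;
        for (int i = 0; i < 6; i++) vertices[i + 1] = corners[i];

        // Six triangles around the centre. Depending on the winding of the
        // H3 boundary, you may need to flip the triangle order so the face
        // points up.
        var triangles = new int[18];
        for (int i = 0; i < 6; i++)
        {
            triangles[i * 3] = 0;
            triangles[i * 3 + 1] = i + 1;
            triangles[i * 3 + 2] = (i + 1) % 6 + 1;
        }

        // Map the corners onto a unit circle in UV space.
        var uvs = new Vector2[7];
        uvs[0] = new Vector2(0.5f, 0.5f);
        for (int i = 0; i < 6; i++)
        {
            float angle = i * Mathf.PI / 3f;
            uvs[i + 1] = new Vector2(0.5f + 0.5f * Mathf.Cos(angle),
                                     0.5f + 0.5f * Mathf.Sin(angle));
        }

        var mesh = new Mesh { vertices = vertices, triangles = triangles, uv = uvs };
        mesh.RecalculateNormals(); // flat hexagon: all normals point up
        return mesh;
    }
}
```

The extruded 3D version adds a second ring of vertices offset along the up axis and stitches side quads between the two rings.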

The 3D Hexagon

Once I was able to draw the user's current H3 tile, it was easy to get the neighbouring tiles and draw those too. The tile manager also listens for user location changes and draws the necessary tiles when the user's location changes (sidenote: I will start working on the caching system soon). This allows me to move the user around while the map tiles as well as the H3 tiles update in real time. Oh, and this also works on my phone, where the location provider uses my device's GPS coordinates instead of a preset location.
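The redraw loop is simple in outline. In this sketch, H3Helpers is a hypothetical wrapper around the H3.NET cell-lookup and neighbour-ring calls, and DrawTile stands in for the mesh building shown above:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: redraw hex tiles whenever the user crosses into a new H3 cell.
public class HexTileManager : MonoBehaviour
{
    private ulong currentCell; // H3 index of the cell the user is in
    private readonly Dictionary<ulong, GameObject> drawnTiles =
        new Dictionary<ulong, GameObject>();

    // Called whenever the location provider reports a new position.
    public void OnUserLocationChanged(double lat, double lon)
    {
        ulong cell = H3Helpers.LatLonToCell(lat, lon); // hypothetical wrapper
        if (cell == currentCell) return; // still inside the same hexagon

        currentCell = cell;
        foreach (ulong neighbour in H3Helpers.NeighbourRing(cell, 1)) // hypothetical
        {
            if (!drawnTiles.ContainsKey(neighbour))
                drawnTiles[neighbour] = DrawTile(neighbour);
            // A caching/eviction pass for far-away tiles will come later.
        }
    }

    private GameObject DrawTile(ulong cell)
    {
        // Placeholder: the real version builds the hexagon mesh as above.
        return new GameObject($"hex-{cell}");
    }
}
```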

Clouds

One of the major parts of the game will be exploration and discovering new and familiar places. I thus need a way to show the player which regions have been discovered and which ones are still shrouded in mystery, if possible without breaking immersion. The idea is that regions that have not been discovered will be covered in clouds, or maybe an arcane mist or some other visual indication. When a player then unlocks a region in the game, the region will be freed of the mist and the areas within unlocked regions will become interactable. So, each player will slowly clear away the fog/clouds/arcane mist by playing the game. This also lets me potentially tap into real-time weather data and adjust the visuals accordingly, which would be quite cool. For now, I just made a simple cloud shader (following this YouTube tutorial).