H3 Regions and Touch Interactions

TL;DR

In this sprint I focussed on: building an H3 tile provider and renderer, setting up basic data transfers to Firebase, creating an input system that works on both PC and touchscreen, building a third-person camera controller with Cinemachine, introducing a naming convention, making better clouds with Unity’s particle system, and recording my first ever YouTube tutorial.

H3 Tile Provider

A first big task of the past weeks was implementing an H3 tile provider. It takes in a settings object and builds all the H3 hexagonal tiles that are needed.

The H3 provider settings object used to generate the H3 regions around the player’s position

Once the tile provider has figured out which H3 tiles need to be drawn to the screen, it creates the individual H3 tile objects and adds a tile renderer to each, which builds the mesh.
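To make the flow concrete, here is a minimal sketch of what the provider could look like. The settings fields, class names and the static H3 wrapper are all placeholders, not my actual code (the wrapper mirrors the core H3 library’s latLngToCell/gridDisk operations); the tile renderer it attaches is sketched in the next section.

```csharp
using System.Collections.Generic;
using UnityEngine;

// The settings object holding everything the provider needs (illustrative fields).
[System.Serializable]
public class H3ProviderSettings
{
    public int resolution = 9; // H3 cell resolution of the generated tiles
    public int ringCount = 3;  // rings of cells to build around the player
}

public class H3TileProvider : MonoBehaviour
{
    public H3ProviderSettings settings;

    // Build one tile object per H3 cell around the player's position.
    public void BuildTiles(double playerLat, double playerLng)
    {
        ulong center = H3.LatLngToCell(playerLat, playerLng, settings.resolution);
        foreach (ulong index in H3.GridDisk(center, settings.ringCount))
        {
            var tile = new GameObject($"h3-{index:x}");
            tile.transform.SetParent(transform, false);
            tile.AddComponent<H3TileRenderer>().Init(index); // renderer builds the mesh
        }
    }
}

// Stand-in for whatever C# H3 binding is used; the method names mirror the
// core library's latLngToCell / gridDisk / cellToLatLng / cellToBoundary.
public static class H3
{
    public static ulong LatLngToCell(double lat, double lng, int res) => 0;
    public static IEnumerable<ulong> GridDisk(ulong origin, int rings) => new ulong[0];
    public static Vector2 CellToLatLng(ulong index) => Vector2.zero;      // (lat, lng)
    public static Vector2[] CellToBoundary(ulong index) => new Vector2[6];
}
```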

H3 Tile Renderer

The tile renderer really only needs a tile’s H3Index. It gets the centroid of this index and the boundary vertices. Using this information, the H3 tile renderer projects the lat-lng coordinates of the individual tile vertices to Unity world space using the functions provided with the Mapbox SDK. It then creates a face for each triangle (the centroid plus two neighbouring boundary vertices). In addition, I have added the possibility to draw only a hex ring instead of a full hexagon, which is useful for things like highlighting a tile. If you are interested in how this works on a basic hex mesh, check out the awesome tutorials on YouTube by Game Dev Guide.
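A minimal sketch of that fan triangulation, reusing the placeholder H3 wrapper from the provider sketch above; the Project() stub stands in for the Mapbox SDK’s geo-to-world conversion:

```csharp
using UnityEngine;

[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class H3TileRenderer : MonoBehaviour
{
    public void Init(ulong h3Index)
    {
        Vector2 centroid = H3.CellToLatLng(h3Index);
        Vector2[] boundary = H3.CellToBoundary(h3Index); // 6 vertices (5 for pentagons)

        // Vertex 0 is the projected centroid, followed by the boundary ring.
        var vertices = new Vector3[boundary.Length + 1];
        vertices[0] = Project(centroid);
        for (int i = 0; i < boundary.Length; i++)
            vertices[i + 1] = Project(boundary[i]);

        // One face per boundary edge: centroid plus two neighbouring vertices.
        var triangles = new int[boundary.Length * 3];
        for (int i = 0; i < boundary.Length; i++)
        {
            triangles[i * 3 + 0] = 0;
            triangles[i * 3 + 1] = i + 1;
            triangles[i * 3 + 2] = (i + 1) % boundary.Length + 1; // wrap around the ring
        }

        var mesh = new Mesh { vertices = vertices, triangles = triangles };
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }

    // In the real project this calls the Mapbox SDK's lat/lng -> world conversion.
    Vector3 Project(Vector2 latLng) => new Vector3(latLng.y, 0f, latLng.x);
}
```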

H3 Firebase

In addition, I started setting up some basic data transfers between the server (Google Firebase) and the game. Now, when a user selects an area, the contribution window opens. The user can upload a description, which is then saved to the server. I visualised H3 tiles that already have some information on Firebase with purple clouds instead of the whiter default clouds.
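As an illustration, saving the description could look something like the sketch below. It assumes the Realtime Database flavour of Firebase via the Firebase Unity SDK; the "h3-tiles" path and field name are made up for this example.

```csharp
using Firebase.Database;
using UnityEngine;

public class TileContribution : MonoBehaviour
{
    public async void UploadDescription(ulong h3Index, string description)
    {
        // Hypothetical path: /h3-tiles/<hex index>/description
        DatabaseReference tileRef = FirebaseDatabase.DefaultInstance
            .RootReference.Child("h3-tiles").Child(h3Index.ToString("x"));

        await tileRef.Child("description").SetValueAsync(description);
        Debug.Log($"Saved description for tile {h3Index:x}");
    }
}
```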

PC and Touchscreen Input System

Arcane Shift will primarily be played on mobile phones (at first, at least), but development takes place on the PC. I thus needed a way to design and test the game on the PC, then build it to the phone and have the same interactions work. I decided to use Unity’s new input system, and after reading up on many articles, indulging in the great tutorials by SamYam and CodeMonkey, and a lot of trial and error and debugging, I built a simple input system that works with both the mouse and the touchscreen. Basic gestures such as rotating and zooming the camera now work, as do various “touch” interactions such as tapping, double-tapping and tap-holding.
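For flavour, here is a minimal sketch (not my exact setup) of how one action can serve both mouse and touch with the new input system; the same performed callback fires in the Editor and on the phone:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class TapInput : MonoBehaviour
{
    private InputAction tapAction;

    void Awake()
    {
        // One action, two bindings: mouse for the Editor, touch for the phone.
        tapAction = new InputAction("Tap", InputActionType.Button);
        tapAction.AddBinding("<Mouse>/leftButton", interactions: "tap");
        tapAction.AddBinding("<Touchscreen>/primaryTouch/tap");
        tapAction.performed += _ => Debug.Log("Tap!");
    }

    void OnEnable()  => tapAction.Enable();
    void OnDisable() => tapAction.Disable();
}
```

Tap-holding and double-tapping work the same way with the built-in "hold" and "multiTap" interactions.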

Third-Person Camera with Cinemachine

For this project, I am using Unity’s new input system. To add some additional effects and make life easier in the long run, I followed the common recommendation to use Cinemachine as the camera provider. Adding a FreeLook camera opened up a lot of cool features; however, there did not seem to be a camera controller for a typical third-person camera. Specifically, I wanted something along the lines of the World of Warcraft camera, where you move the camera around the player by clicking and dragging the mouse and zoom in and out by scrolling. The touchscreen equivalent would be tap-and-drag to move the camera and pinching to zoom in or out. Unfortunately, I did not find much information on camera controllers for Cinemachine using Unity’s new input system, so I created my own camera controller with a lot of different features. I was then able to hook up the camera controller to the different input events and had a working system both for testing in the Editor and for building to the smartphone.

The Cinemachine Camera Controller script...

I implemented the camera controller with a lot of different settings so it can be adapted to other projects. First, we can set a Cinemachine FreeLook camera; if none is set, it will try to get the camera on the current object. Then there are various starting properties such as min and max distances, as well as general settings. We can also set a zoom curve, which allows the zoom sensitivity to be adjusted over the zoom distance. This is useful, for example, to have smaller zoom steps close to an object and larger steps further away.

It also comes with separate sensitivity values for X and Y camera movement (orbital and vertical rotations), as well as a zoom speed (how fast the camera moves from the current zoom to the new zoom) and a zoom sensitivity (e.g. how far we zoom with every mouse wheel tick).
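To illustrate the zoom curve idea, here is a stripped-down sketch (field names are illustrative, not the actual controller): a curve maps the normalised zoom distance to a sensitivity, and the camera eases towards the target distance at the configured zoom speed.

```csharp
using UnityEngine;
using Cinemachine;

public class FreeLookZoom : MonoBehaviour
{
    public CinemachineFreeLook freeLook;
    public float minDistance = 2f;
    public float maxDistance = 30f;
    public float zoomSpeed = 8f;                // how fast we move towards the target zoom
    public AnimationCurve zoomSensitivity =     // sensitivity over normalised zoom distance
        AnimationCurve.Linear(0f, 0.2f, 1f, 2f);

    private float targetDistance = 10f;
    private float currentDistance = 10f;

    void Awake()
    {
        // Fall back to the FreeLook camera on this object if none is set.
        if (freeLook == null) freeLook = GetComponent<CinemachineFreeLook>();
    }

    // Call this from the scroll / pinch input event with a signed delta.
    public void OnZoom(float delta)
    {
        float t = Mathf.InverseLerp(minDistance, maxDistance, targetDistance);
        targetDistance = Mathf.Clamp(
            targetDistance - delta * zoomSensitivity.Evaluate(t),
            minDistance, maxDistance);
    }

    void Update()
    {
        // Ease towards the target; scale the middle orbit as a stand-in
        // (a full controller would scale all three FreeLook rigs).
        currentDistance = Mathf.Lerp(currentDistance, targetDistance, zoomSpeed * Time.deltaTime);
        freeLook.m_Orbits[1].m_Radius = currentDistance;
    }
}
```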

Introducing a Naming Convention

Seeing as the project is getting bigger by the week, it was time to bring a bit of order into the chaos of my filesystem. I decided to introduce my own project-specific naming convention so that I can keep an overview of my files and make searching for them easier. I am using kebab-casing and naming files as follows:

[type]_[location]__[domain]--[name]--[interaction / modifier]

[type]

The type of file this is

a: action

e: event

o: object

v: variable

c: constant

p: prefab

[location]

The location of this file

ms: multi scene

bs: boot scene

ss: starting scene

es: exploration scene

[domain]

The domain this file is used in

auth: the authentication system

ui: the ui system

h3: h3 system

cam: camera system

art: game art

uxml: unity ui xml file

uss: unity style sheet

[name]

The name of this file

region-toggle-interactions

[interaction / modifier]

Additional info about this file

interactions: open, close, enable, disable etc.

modifiers: single, double, hold etc.

For example, an event used only in the Exploration Scene to toggle the loading screen in the notification manager could be named:

e_es__nm--notification-toggle-loading

Or an Atom Action that opens the start screen of the starting scene after a set delay would be called:

a_ss__sm--start-screen-open--delayed

Or the H3 regional tile settings object:

o_es__h3--region-settings

I will probably update this convention as I go along and work with it, but for now this is a good start.

Clouds

And finally, I was not too happy with the clouds I made using Shader Graph. So I looked around for other easy solutions that would look a bit better and stumbled upon making clouds with Unity’s particle system. Inspired by these tutorials (by Game Dev Guide and Etredal), I tinkered my way to my own solution: a simple particle system that is fed the mesh renderers from the H3 tile meshes I created earlier. This looks great in my opinion, and since I can easily adjust the density of the clouds by increasing or decreasing the particle amounts and spawn rates, I can tune them to my needs.
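In case you want to try it yourself, this is roughly the idea (names and density values are illustrative): each tile’s mesh renderer becomes the emission shape of a particle system, so the clouds spawn across the tile’s surface.

```csharp
using UnityEngine;

public class TileClouds : MonoBehaviour
{
    public ParticleSystem cloudPrefab;

    public void AddClouds(MeshRenderer tileRenderer, float density)
    {
        ParticleSystem clouds = Instantiate(cloudPrefab, tileRenderer.transform);

        // Emit particles from the tile's mesh surface.
        var shape = clouds.shape;
        shape.shapeType = ParticleSystemShapeType.MeshRenderer;
        shape.meshRenderer = tileRenderer;

        // Density controls both the particle cap and the spawn rate,
        // so locked tiles can get a thicker cover than unlocked ones.
        var main = clouds.main;
        main.maxParticles = Mathf.RoundToInt(200 * density);

        var emission = clouds.emission;
        emission.rateOverTime = 20f * density;
    }
}
```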

Now why do I think this is important? Well, one of the key motivators in games is the element of exploration and discovery. I want users to be intrigued by a thick cloud cover, only to discover some see-through areas when zooming in. I want users to be able to distinguish between tiles that have never been unlocked (thicker cloud coverage, hard to see through), tiles that have been unlocked by other players (lighter cloud coverage, easier to see through) and tiles that have already been unlocked by the player themselves (no clouds, sunshine all the way!). I want players to be motivated to uncover the world in Arcane Shift and, in doing so, upload much-needed data for scientific analyses later. Anyways, clouds, check it out:

Tutorial

And as an additional extra, I thought I would start sharing some of the things I figure out, so I made my first ever YouTube tutorial on how to use Unity’s new input system with modifiers to detect pinch interactions on a touchscreen. Check it out here.
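The gist of it, as a rough polling sketch rather than the modifier-based setup from the tutorial: track both primary touches and compare the change in distance between them each frame.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class PinchDetector : MonoBehaviour
{
    private float previousDistance;

    void Update()
    {
        var touchscreen = Touchscreen.current;
        if (touchscreen == null) return;

        var t0 = touchscreen.touches[0];
        var t1 = touchscreen.touches[1];
        if (!t0.press.isPressed || !t1.press.isPressed)
        {
            previousDistance = 0f; // pinch ended or never started
            return;
        }

        float distance = Vector2.Distance(t0.position.ReadValue(), t1.position.ReadValue());
        if (previousDistance > 0f)
        {
            float pinchDelta = distance - previousDistance; // >0 zoom in, <0 zoom out
            // feed pinchDelta into the camera controller's zoom here
        }
        previousDistance = distance;
    }
}
```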