
Specialization - Dynamic UI

 

After working with UI on a few of the earlier projects, I often felt it was treated as something of an afterthought that had to stand back for other "more important" parts of the project. It was just something that had to be there to load levels, change settings and show some stats in game. That, and the fact that it usually just consists of some images that react to hover and click but are otherwise quite static, makes most UI not very fun to interact with. I wanted to try to do something more interesting and dynamic.

One of the best parts about studying at TGA is all the nice and talented people you meet along the way. So once I had decided on this as my specialization, it was very natural for me to reach out to fellow student Kate Ekberg, whom I had worked with before and knew was interested in UI art. I pitched my idea to her and showed some clips of what I had in mind. As luck would have it, she agreed to do this together with me.

Kate is not just an amazing UI artist but also a great person to work with and very dependable. You can find this project described from her point of view as well as her other work at kateekberg.artstation.com.

Inspiration

 

The main inspiration for this has been the Persona games, Persona 5 in particular. It's just so visually distinct and interesting: no matter what you do in the menu, it comes alive and reacts to your input. It also really brings home the theme of the game and enhances it rather than breaking the immersion.

Startup

 

Right from the start I had a list of features I wanted to try to implement: transitions between scenes with "hidden" loading and unloading, animated UI elements (animation, fading and moving around) and some morphing effects when you interact with things.

 

The fact that I did not do this alone meant that some extra steps had to be taken at the start. So the first week was all about planning and preparation, as well as establishing a pipeline for the work to come. I had some meetings with Kate to find a scope that would allow us both to show off what we wanted to. My main fear at this point was overscoping, so I was hesitant to include too many gameplay features, since this was a standalone thing and not part of a game project. I already had a clear idea of what features I wanted to do, but it was important that those goals aligned with what Kate wanted to do as well. We decided on a detective "mini game" with some conversations and an evidence menu where you would get clues that would trigger notifications and changes in the UI. The base for the UI system was built in Project 7 and you can read about that here.


Animated UI elements

 

I started with what I assumed would be the easiest feature to implement: animation of UI elements. Since I was familiar with the engine from Project 7, I knew where to start and only had to make small adjustments to the existing UI system. I added a struct with all animation-related information that I kept in a shared_ptr and treated like any other information that UI elements have. As for the actual animations themselves, it was just a regular spritesheet setup with some added information to be able to loop, oscillate and reverse as needed. I also created the Services I needed for fading and moving elements, mostly just lerping between set values for the objects.
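As a rough illustration of that setup, here is a minimal sketch of a per-element animation struct with loop, oscillate and reverse support. All names (AnimationInfo, UIElement, UpdateAnimation) are assumptions for illustration; the real engine's types and update flow will differ.

```cpp
#include <algorithm>
#include <memory>

// Hypothetical per-element animation data, shared between elements
// the same way other UI-element information is.
struct AnimationInfo
{
    int   frameCount   = 1;     // frames in the spritesheet
    int   currentFrame = 0;
    float frameTime    = 0.1f;  // seconds per frame
    float timer        = 0.f;
    bool  looping      = false;
    bool  oscillating  = false; // ping-pong instead of wrapping
    int   direction    = 1;     // +1 forward, -1 reversed
};

struct UIElement
{
    // Kept in a shared_ptr so several elements can reference the same setup.
    std::shared_ptr<AnimationInfo> animation;
};

// Advance an animation by dt, handling loop / oscillate / reverse.
void UpdateAnimation(AnimationInfo& anim, float dt)
{
    anim.timer += dt;
    while (anim.timer >= anim.frameTime)
    {
        anim.timer -= anim.frameTime;
        anim.currentFrame += anim.direction;

        if (anim.currentFrame >= anim.frameCount || anim.currentFrame < 0)
        {
            if (anim.oscillating)
            {
                // Reverse direction and step back past the edge frame.
                anim.direction = -anim.direction;
                anim.currentFrame += 2 * anim.direction;
            }
            else if (anim.looping)
            {
                // Wrap around in either direction.
                anim.currentFrame =
                    (anim.currentFrame + anim.frameCount) % anim.frameCount;
            }
            else
            {
                // One-shot: clamp at the last (or first) frame.
                anim.currentFrame =
                    std::clamp(anim.currentFrame, 0, anim.frameCount - 1);
            }
        }
    }
}
```

The fading and moving Services mentioned above would sit beside this, lerping position or alpha between set values each frame.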

An animation that triggers when new evidence is added.


Transition

My first idea was to use a masking shader for the transitions. The idea was to load a Canvas while it was masked, then unmask it while masking the one I wanted to unload until the transition was done. After some research I set up a masking shader to test my theory. While it was very cool, and I could see uses for that type of shader in other areas, I didn't feel it would help me here, since it would be hard to get timings and precision right with transitions. It was clear my problem required a different solution. Luckily for me, the solution was basically done already, I just hadn't thought about it. If transitions were animations, I could use the system I had just designed. By adding a "Triggerframe" to my animation struct (or several, if I really wanted to) I could check when during an animation my scenes were hidden and ready to load/unload, and set that frame as the trigger. This also allowed me to chain animations and different events off each other with these triggers.
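The trigger-frame idea can be sketched roughly like this: a frame number paired with a callback that fires the first time the animation reaches it, so scene loading/unloading (or a chained animation) can start exactly when the screen is covered. The names here (TriggerFrame, TransitionAnimation) are made up for the sketch.

```cpp
#include <functional>
#include <vector>

// A frame at which something should happen, fired at most once per play.
struct TriggerFrame
{
    int frame = 0;
    bool fired = false;
    std::function<void()> onReached; // e.g. queue a scene load/unload
};

struct TransitionAnimation
{
    int currentFrame = 0;
    int frameCount = 0;
    std::vector<TriggerFrame> triggers;

    // Advance one frame and fire any triggers we have reached.
    void Step()
    {
        if (currentFrame < frameCount - 1)
            ++currentFrame;
        for (TriggerFrame& t : triggers)
        {
            if (!t.fired && currentFrame >= t.frame)
            {
                t.fired = true;
                t.onReached();
            }
        }
    }
};
```

Chaining works the same way: one trigger's callback can simply start the next animation.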

Staggered transition animations playing, triggering loading/unloading of the scenes at the right time.


Morphing

 

Since I hadn't worked with shaders that much, I knew this would be a challenge. I did some more research and asked around for advice on possible solutions. In the end I settled on using the VertexShader to solve the morphing. I had a pretty clear idea of how I wanted to implement it from a logical standpoint: each corner in a rect could have a radius and a speed for the morphing, allowing me lots of flexibility and ways to adjust them as needed to create cool effects. A quick test in the engine revealed a small issue: the VertexShader didn't work its magic on the UI rects the way I had assumed, and my images were split into triangles and morphed all over the screen. While it looked cool (really cool, actually, and I saved it in the back of my mind for potential later use), it wasn't what I needed. But a few less-than-elegant if statements later it finally worked as I envisioned, and I could start updating the UI editor to test it out for real.
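The per-corner idea can be illustrated with a small sketch: each corner gets a radius, a speed and a phase, and is displaced on a small circle over time. In the project this math ran in the vertex shader; here it is written as plain C++ for readability, and all names (Vec2, CornerMorph, MorphCorner) are assumptions.

```cpp
#include <cmath>

struct Vec2 { float x = 0.f, y = 0.f; };

// Per-corner morph parameters; one of these per rect corner.
struct CornerMorph
{
    float radius = 0.f; // max displacement in pixels
    float speed  = 0.f; // radians per second
    float phase  = 0.f; // offset so corners don't move in sync
};

// Displace one corner of the rect at time t along a small circle.
Vec2 MorphCorner(Vec2 corner, const CornerMorph& m, float t)
{
    const float angle = m.phase + m.speed * t;
    return { corner.x + m.radius * std::cos(angle),
             corner.y + m.radius * std::sin(angle) };
}
```

Setting different radii, speeds and phases per corner is what gives the blob-like wobble; a zero radius leaves the corner where it is.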

I was really happy with how it behaved and the amount of control I had over the movements. But another issue I hadn't considered surfaced: blending. The engine used additive RGB blending while I was looking for CMYK-style blending, so the colors didn't mix the way I was hoping for. In addition, the alpha blending meant that all colors would be added, even from other elements in the background, and I had no way to make my morphs render independently. After some discussions it was clear that the engine would not be able to support what I wanted, so I settled on using the morphs without the blending, just on regular images.

But when I started testing with images, it revealed yet another issue: shaders (i.e. morphs) render after regular UI elements in the engine. So any morph I added would obstruct anything behind it, less than ideal since I obviously wanted to be able to layer things freely with the priority system already in place. The solution was simply to split my UI elements into two layers, each with individual priorities. Anything that needed to be rendered in front of a shader became a morph with 0s as its default morphing values, and was thus rendered in the correct order.
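The two-layer workaround above amounts to a two-pass sort order, which can be sketched like this (RenderItem and SortForRender are illustrative names, not the engine's):

```cpp
#include <algorithm>
#include <vector>

struct RenderItem
{
    int  priority = 0;     // draw order within its pass
    bool isMorph  = false; // morphs (shader pass) draw after plain sprites
};

// Plain sprites first, then morphs; within each pass, sort by priority.
void SortForRender(std::vector<RenderItem>& items)
{
    std::stable_sort(items.begin(), items.end(),
        [](const RenderItem& a, const RenderItem& b)
        {
            if (a.isMorph != b.isMorph)
                return !a.isMorph; // sprite pass before morph pass
            return a.priority < b.priority;
        });
}
```

A sprite that must appear on top of a morph is simply tagged `isMorph = true` with zero morph values, moving it into the later pass without visibly morphing.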

Finally I had all the pieces in place. A morph in the back and an animated exclamation mark in front.

Finishing touches

 

I had all that I needed from a tech perspective, and Kate's art started popping up, so I could start putting everything together. It was all smooth sailing from here on, or was it? It is here that I made the mistake I had tried hard to avoid from the start: overscoping. I had not really accounted for how to handle all the information from the game states and conversations. How would I know which conversation triggered what? When should new evidence appear? Lots of gameplay-related questions I did not really have time to build systems for. But that didn't stop me from trying, and I spent too much time during the last two weeks trying to make a game instead of focusing on what we wanted to show off. In the end I scrapped those systems and kept it simple and linear.


Result and reflections

 

Overall I am very happy with how it turned out. I managed to create support for the features I wanted and for the most part managed to follow the plan we set up at the start. I learned a ton of new things and Kate made everything look amazing as always!

This is a video of the full demo we created!

Special thanks!

 

I have asked a lot of people for advice and help during this project. But two people in particular have stood out and always helped when I needed support or just wanted to run my thoughts by someone.

Neo Nemeth is one of the engine and graphics programmers working on the LLL engine I used during this project, and his advice and input regarding shaders and rendering have been invaluable.

Erik Ljungman has been a constant source of support and knowledge, whether it comes to the engine, the UI system or just general programming.
