Programming

Combat System

Overview

I implemented our combat system according to a combat mechanic diagram provided to me by the content producers.


The goal was to make combat fluid, with simple combos, animation-cancelling mechanics, support for multiple weapons, critical hits, and status effects that affect entities.


I used Unity's ScriptableObjects to create combo attacks that can each be given their own animation and attack parameters. I also experimented with creating an interface for attacks.
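

A minimal sketch of what one of these combo pieces and the attack interface could look like; all class and field names here are illustrative assumptions, not the actual project code:

    using UnityEngine;

    // Sketch only: one combo piece authored as an asset and filled in from the inspector.
    [CreateAssetMenu(menuName = "Combat/Attack Data")]
    public class AttackData : ScriptableObject
    {
        public AnimationClip attackAnimation;   // animation played for this combo piece
        public float damage = 10f;              // base damage of the attack
        public float critChance = 0.1f;         // chance for a critical hit
        public float comboWindow = 0.4f;        // time allowed to chain into the next piece
    }

    // The kind of interface experimented with for attacks.
    public interface IAttack
    {
        void Execute(GameObject attacker);
    }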


Game designers can choose from a multitude of options to easily build different attacks and combine them into interesting combos.


Attack handling then uses the provided combo pieces to execute attacks in sequence.
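

Continuing with the assumed AttackData asset from the sketch above, the sequencing could look roughly like this: step through the assigned combo pieces and reset once the combo window runs out.

    using UnityEngine;

    // Sketch only: plays the combo pieces in order and resets when the chain is dropped.
    public class ComboHandler : MonoBehaviour
    {
        [SerializeField] private AttackData[] combo;    // combo pieces assigned in the inspector
        [SerializeField] private Animator animator;

        private int comboIndex;
        private float lastAttackTime;

        public void OnAttackInput()
        {
            // Reset the combo if the window of the previous piece has already passed.
            if (Time.time - lastAttackTime > combo[Mathf.Max(comboIndex - 1, 0)].comboWindow)
                comboIndex = 0;

            AttackData attack = combo[comboIndex];
            // Assumes the animator states are named after their clips.
            animator.Play(attack.attackAnimation.name);

            comboIndex = (comboIndex + 1) % combo.Length;
            lastAttackTime = Time.time;
        }
    }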


Animation cancelling is handled inside the animator by checking the current attack phase, which can be done with animation-frame precision.
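

Roughly, the idea can be sketched like this: animation events placed on specific frames of the attack clip update a phase value, and other scripts check that phase before interrupting the animation. The phase names and methods here are assumptions, not the project code.

    using UnityEngine;

    // Sketch only: animation events call these methods on the frames where each phase starts.
    public class AttackPhaseTracker : MonoBehaviour
    {
        public enum AttackPhase { None, WindUp, Active, Recovery }

        public AttackPhase CurrentPhase { get; private set; } = AttackPhase.None;

        // Hooked up as animation events inside the attack clips.
        public void EnterWindUp()   => CurrentPhase = AttackPhase.WindUp;
        public void EnterActive()   => CurrentPhase = AttackPhase.Active;
        public void EnterRecovery() => CurrentPhase = AttackPhase.Recovery;
        public void ClearPhase()    => CurrentPhase = AttackPhase.None;

        // Other scripts ask this before letting a dodge or the next attack cancel the animation.
        public bool CanCancel => CurrentPhase == AttackPhase.Recovery || CurrentPhase == AttackPhase.None;
    }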

Experiences

The main challenge was implementing from design: the combat calculations were precisely specified, but getting the attack timings right and making the system as modifiable as possible was challenging in both planning and implementation.


Overall I met both goals: the system is easily accessible from the inspector and can be expanded if more weapons or attacks are added.


This system taught me about implementing from design documentation, where I have to consult the game designers about how they want the system to work and feel in their hands.


I wanted to use C# interfaces as part of studying better programming practices. I feel I got a grasp of how to use an interface when designing a feature, but knowing when to use one will require more work in the future.


Currently the game has only one simple sword combo for the player, and the enemies only use a single base attack. Adding different attacks would let the player attack in different patterns.


Status effects exist in code, but they are not tested for production and there is no planned way of applying them to entities.

Enemy AI

Overview

I designed and developed our core AI and pathfinding using a weighted system where the enemy selects a random point in its patrol area if it has not yet located the player.
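

As a minimal sketch (assuming a 2D top-down setup and illustrative names), the patrol destination could be picked like this:

    using UnityEngine;

    // Sketch only: picks a random destination inside a circular patrol area
    // while the player has not yet been located.
    public class PatrolPointPicker : MonoBehaviour
    {
        [SerializeField] private Transform patrolCenter;
        [SerializeField] private float patrolRadius = 5f;

        public Vector2 PickPatrolPoint()
        {
            return (Vector2)patrolCenter.position + Random.insideUnitCircle * patrolRadius;
        }
    }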


Once one enemy spots the player, all enemies are alerted and try to pathfind towards the player.


The system uses directed raycasts to determine whether the path in a given direction hits any obstacles, and then weights that direction using a few simple parameters.
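

A rough sketch of that weighting, again assuming 2D physics and illustrative parameter names: rays are cast in a set of directions, directions that hit an obstacle are penalized, and the direction best aligned with the target wins.

    using UnityEngine;

    // Sketch only: weighted direction selection for the enemy movement.
    public class WeightedSteering : MonoBehaviour
    {
        [SerializeField] private int directionCount = 8;      // how many directions to sample
        [SerializeField] private float rayLength = 2f;        // obstacle check distance
        [SerializeField] private float obstaclePenalty = 1f;  // weight removed when a ray hits something
        [SerializeField] private LayerMask obstacleMask;

        public Vector2 ChooseDirection(Vector2 toTarget)
        {
            Vector2 best = Vector2.zero;
            float bestWeight = float.MinValue;

            for (int i = 0; i < directionCount; i++)
            {
                float angle = i * Mathf.PI * 2f / directionCount;
                Vector2 dir = new Vector2(Mathf.Cos(angle), Mathf.Sin(angle));

                // Base weight: how well this direction lines up with the direction to the target.
                float weight = Vector2.Dot(dir, toTarget.normalized);

                // Penalize the direction if a raycast in it hits an obstacle.
                if (Physics2D.Raycast(transform.position, dir, rayLength, obstacleMask))
                    weight -= obstaclePenalty;

                if (weight > bestWeight)
                {
                    bestWeight = weight;
                    best = dir;
                }
            }

            return best;
        }
    }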


Once the player is in attack range, the enemy executes an attack, which the player can cancel by stunning the enemy.


After attacking, the enemy attempts to move towards the player again until it kills the player or dies itself.

[Figure: Enemy vision visualization]

Experiences

The main challenge was making the enemy predictable enough that the player can dodge and avoid its attacks while still providing challenging content.


Overall the pathfinding works in our environments, but it could use more parameters for determining a path, such as other enemy locations and player proximity.


The design process and the resulting document really helped me understand how much work it takes to make even something basic work, both mechanically and in terms of game feel.


The enemies could also be more responsive to player actions, but I feel I gained a basic understanding of both the design and implementation of enemy AI.


For example, just making the enemy move towards the player limits the player's options for dealing with the threat. Adding delays and readability to enemy actions instantly gives the player more choice in how to approach combat situations.


Visualizing the enemy attack range, both in-game and in the editor, is something that should still be done, either via VFX or debug gizmos.
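

A debug gizmo version would only take a few lines; a minimal sketch with assumed names:

    using UnityEngine;

    // Sketch only: draws the attack range in the Scene view while the enemy is selected.
    public class AttackRangeGizmo : MonoBehaviour
    {
        [SerializeField] private float attackRange = 1.5f;

        private void OnDrawGizmosSelected()
        {
            Gizmos.color = Color.red;
            Gizmos.DrawWireSphere(transform.position, attackRange);
        }
    }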


Creating design documentation by myself really helped me understand how many moving parts actually need to be described for the instructions to be clear and measurable enough to implement.


Currently only the melee variant of the enemy AI is implemented, but the groundwork allows for expanding it, which was one of my set learning goals.

Controller Support

Overview

One of the key features we had envisioned was having controller support for the game.


I used Unity's Input System package to accept both mouse and keyboard as well as controller inputs simultaneously in-game.


Ensuring that the controller input would be equally fluid required constant testing using both input methods.


Controlling the character and navigating the various menus is possible with both input methods, and transitions between them are smooth.


The input system exposes events that various scripts can subscribe to, letting them know which inputs have been pressed and use them in their own logic.
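

A rough sketch of that event-based flow, assuming the Input System package and an action bound to both MnK and gamepad; all names here are illustrative:

    using UnityEngine;
    using UnityEngine.InputSystem;

    // Sketch only: raises a plain C# event when the attack action is performed,
    // so other scripts subscribe instead of polling input every frame.
    public class InputReader : MonoBehaviour
    {
        [SerializeField] private InputActionReference attackAction;   // bound to both MnK and gamepad

        public event System.Action OnAttackPressed;

        private void OnEnable()
        {
            attackAction.action.performed += HandleAttack;
            attackAction.action.Enable();
        }

        private void OnDisable()
        {
            attackAction.action.performed -= HandleAttack;
            attackAction.action.Disable();
        }

        private void HandleAttack(InputAction.CallbackContext ctx)
        {
            // Forward the input to whoever is listening (combat, UI, and so on).
            OnAttackPressed?.Invoke();
        }
    }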

Experiences

The main challenge was making sure that both input methods feel natural and can be switched between fluidly. Unity's Input System package made the process easy to understand.


Unity Events, C# events, and delegates were new topics to me, and I often needed to refer to the Unity documentation to fully grasp how they work. I gained a decent understanding of how and when to use these techniques, and later used events when refactoring the skill code.
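

As a simple illustration of the kind of C# event usage involved in that refactor (hypothetical names, not the actual skill code):

    using System;
    using UnityEngine;

    // Sketch only: the skill raises an event when used, and the UI listens for it.
    public class Skill : MonoBehaviour
    {
        public event Action<float> SkillUsed;   // passes the cooldown duration to listeners

        [SerializeField] private float cooldown = 3f;

        public void Use()
        {
            // ...actual skill logic would go here...
            SkillUsed?.Invoke(cooldown);
        }
    }

    public class SkillCooldownUI : MonoBehaviour
    {
        [SerializeField] private Skill skill;

        private void OnEnable()  => skill.SkillUsed += OnSkillUsed;
        private void OnDisable() => skill.SkillUsed -= OnSkillUsed;

        private void OnSkillUsed(float cooldown)
        {
            Debug.Log($"Skill used, cooldown {cooldown} seconds.");
        }
    }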


Constant testing with both input methods increased the workload for every feature, but in the end the controller gameplay feels, in my opinion, better than playing with the standard mouse-and-keyboard (MnK) input.


Custom Key Remapping was a requested feature that we did not have the time to start implementing.


Currently the game works with both MnK and controller inputs, and they can be swapped between at runtime.


Summary

Overview

Started out with a basic understanding of Unity and C#, with no idea how interfaces, events, or delegates work. Put everything in Update() instead of calling methods when needed.


Read a small book about game programming patterns, which helped me understand the scale of thinking one should employ in a game project.


Learned Unity editor scripts, Gizmos, and to think about making the inspector accessible to game designers.


Learned to use interfaces, events, and delegates in C#. The singleton pattern was useful for managers. Best practices are still a question mark.


Procedural generation was never done, but the learning goals of writing better-performing and more extendable code were met. Readability is a future goal.

Experiences

I wanted to challenge myself to find new ways to build mechanics I have already implemented before and to create a more structured game architecture.


In some systems I succeeded (Combat System), and in some I got it working (Room Selection), but there is always room for improvement.


After successfully creating something that functions, looks good in the inspector, and is expandable, I kept trying to emulate that success instead of figuring out a new way of implementing the feature.


For example, seeing how well the singleton pattern worked in Game Manager, I immediately copied it into the player manager and skill selection manager.
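

For reference, a minimal sketch of that singleton setup (the actual manager code likely differs):

    using UnityEngine;

    // Sketch only: one persistent instance, globally reachable through the static Instance property.
    public class GameManager : MonoBehaviour
    {
        public static GameManager Instance { get; private set; }

        private void Awake()
        {
            // Keep only the first instance alive across scene loads.
            if (Instance != null && Instance != this)
            {
                Destroy(gameObject);
                return;
            }

            Instance = this;
            DontDestroyOnLoad(gameObject);
        }
    }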


I'd say I managed to get about 90% of the features working correctly, but the game feel is lacking and some of the code functions without being pretty. "Is this a correct way of doing this?" is a question I ask myself often.


I've become better at identifying and solving problems in the code implementation, but my understanding of game design is holding me back.


A lot of responsibility was placed on me: I produced ~80% of the game's codebase and was responsible for all of the base systems and the gameplay loop logic.