Testing

Overview

Our High Level Test Plan included a biweekly stability test in which test cases were evaluated against their in-game implementation.


We held internal concept discussions with the goal of making the vision clear to every member of the team.


We held two public concept tests. In the first, individual playtesters came in, recorded their gameplay sessions, and gave feedback on their experience.


The second was a blind test in which testers gave feedback via a questionnaire form.


I then compiled the results from each test held during a sprint and discussed them with the team to produce action points we could tackle during the next sprint.

Reporting

I covered our initial sprints with written reports, but as time went on the need for these faded since no one actually read them. After two full sprints I moved to verbal reporting based on the written report.


Reporting bugs was difficult, as I was essentially sending them back to my own desk. Later on we decided that other members of the group should also take part in reporting bugs so that fixes would be actionable.


I came to understand how much work goes into properly documenting each and every little change, but also that not everything can be written down; I had to balance fixing content against writing reports.

Results

Verbal reporting gave us discussion points, but providing data and analysis gave us better direction for planning the next sprint.


Improved camera controls, settings, movement bugs, and enemy pathfinding were the first things we worked on after receiving feedback on them.


Refocusing on the core game mechanics during sprint 4 was also a result of public testing and internal discussion about what made the game fun to play in the first place.


Mechanics like a right-click ranged ability and a jump attack were not implemented due to conflicting ideas about the vision of the game.

Experiences

The first weeks of testing felt inconclusive and pointless because there wasn't much of a game to test, but towards the end testing became easier and felt more meaningful.


Understanding the amount of work that goes into ensuring a feature is fully ready for production took me almost the entire course. It didn't help that I was my own gatekeeper.


I took on too much responsibility and should have left the testing itself to available personnel instead of trying to brute-force through test cases. During the later stages we got our bug reporting process to work, but there are still improvements to be considered.


For example, gameplay test 1 consisted of me writing 20 test cases, testing them myself, and filing bug reports when needed. This devolved into a process where we would let 'basically complete' features pass, which caused problems later on. I should have pushed for a more robust testing process, but in the end the work would have landed back on my desk.


I came to understand the importance of regular, robust testing as bugs piled up without being properly recorded, so nobody could keep track of what actually worked and what didn't.


We eventually fixed this with a more freeform bug reporting process, essentially turning the bug report document into a task list.


Overall, I started out with a good procedure that fell to pieces, and only picked it back up later in the course. Better-structured testing would have saved us a lot of headaches and time.


I would definitely like to do testing in the future, but I was too focused on actually making the game to create a proper testing environment.