Welcome to the Eipix blog!

We hope to make it a valuable resource for aspiring game developers, for experienced game developers who share some of our problems, and for general gaming aficionados who enjoy learning about how games are made. We will do our best to regularly shine a light on various aspects of the game development process through first-hand accounts from our colleagues.

The honor of being the first Eipix blogger falls to our colleague Nemanja Turanski. Nemanja is a mid-level game tester at Eipix with more than two years of experience chasing bugs in our games. He finds automated testing software immensely helpful in his everyday work, but he is also aware of its limitations. Without further ado – take it away, Nemanja!

[Image 1: Nemanja Turanski on his daily job of virtual pest removal]

Before we get into the dirty details, let’s just establish one simple fact: game testing is grueling, time-consuming work. With our hectic production schedule, it’s safe to say that it would be impossible for us to test the games properly without some sort of automation. With that in mind, automated testing software is every tester’s best friend (sad, I know).

The need for and the possibilities of automated testing vary considerably depending on the genre of the games being tested. In the hidden object puzzle adventure (HOPA) genre, in which Eipix mainly operates, test automation is both applicable and useful, but it has its share of flaws alongside its advantages, and it can never fully replace good old manual testing.

In HOPA testing, automation boils down to one very useful function, implemented in a number of different ways: autoplay, or the game engine playing through the game on its own.

Some of the ways in which autoplay is implemented at Eipix are (see the sketch after this list):

  • General autoplay: the first and primary form of autoplay; it runs from start to finish without interruption, with the sole purpose of reaching the end of the game.
  • Dialogue autoplay: this type of autoplay stops at every dialogue scene, which makes it easier for the testers to test the dialogues and saves time.
  • Minigame autoplay: this form of autoplay stops at every mini-game (mini-puzzle).
  • HO autoplay: this form of autoplay stops at every hidden object puzzle (a scene with hidden elements).
  • Cinematic autoplay: this type of autoplay stops at every cinematic scene (animated scenes without player interactions that serve to accentuate the most important moments in the game).
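
To make the distinction concrete, here is a minimal sketch of the idea in Python. This is not our engine code – the names (`StepKind`, `Step`, `run_autoplay`) and the walkthrough data are invented for illustration – but it captures how each autoplay variant is essentially the same runner with a different set of stop conditions:

```python
from dataclasses import dataclass
from enum import Enum, auto


class StepKind(Enum):
    """Kinds of content an autoplay run can encounter (hypothetical)."""
    GAMEPLAY = auto()
    DIALOGUE = auto()
    MINIGAME = auto()
    HIDDEN_OBJECT = auto()
    CINEMATIC = auto()


@dataclass
class Step:
    """One scripted action in the autoplay walkthrough."""
    kind: StepKind
    description: str


# Each autoplay variant is just the same runner with a different
# set of step kinds to pause on.
AUTOPLAY_MODES = {
    "general": set(),                      # start to finish, no stops
    "dialogue": {StepKind.DIALOGUE},
    "minigame": {StepKind.MINIGAME},
    "ho": {StepKind.HIDDEN_OBJECT},
    "cinematic": {StepKind.CINEMATIC},
}


def run_autoplay(steps, mode="general"):
    """Play the walkthrough, pausing at the mode's stop points."""
    stop_on = AUTOPLAY_MODES[mode]
    for step in steps:
        if step.kind in stop_on:
            # Hand control back to the tester for manual inspection.
            input(f"paused at {step.kind.name}: {step.description} "
                  "(press Enter to resume autoplay) ")
        print(f"autoplaying: {step.description}")


walkthrough = [
    Step(StepKind.DIALOGUE, "intro conversation"),
    Step(StepKind.HIDDEN_OBJECT, "attic search scene"),
    Step(StepKind.MINIGAME, "lock-picking puzzle"),
]
run_autoplay(walkthrough, mode="minigame")
```

Dialogue autoplay, minigame autoplay and the rest then differ only in which step kinds hand control back to the tester.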

Now that we know this, let’s get into the game of pros and cons…

Advantages of test automation

The one obvious advantage of test automation is saving time, which – as any half-good game developer will tell you – is the scarcest and most valuable currency in the gamedev world.

Autoplay allows us to inspect the game’s functionality quickly and simply, with the tester there only to monitor the process. It will run through the entire game much faster than any tester could manually, which lets us detect problems quickly and efficiently. These problems generally fall into two groups: game blockers, which effectively prevent us from continuing the game, and minor defects, such as graphic elements not appearing where they are supposed to, which don’t dramatically interrupt the gameplay but do spoil the aesthetics, and with them the overall impression of the game.
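
In reporting terms, the split might look something like this – a toy Python sketch where the `Severity` and `Issue` types and the sample data are made up, not our actual bug-tracking format:

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    BLOCKER = "blocker"       # progress impossible, the run must stop
    MINOR = "minor defect"    # cosmetic, the run continues but logs it


@dataclass
class Issue:
    scene: str
    description: str
    severity: Severity


def report(issues):
    """Log every issue; abort the run on the first blocker."""
    for issue in issues:
        print(f"[{issue.severity.value}] {issue.scene}: {issue.description}")
        if issue.severity is Severity.BLOCKER:
            print("stopping the run -- the game cannot continue")
            break


report([
    Issue("chapel", "candle sprite offset by 40 px", Severity.MINOR),
    Issue("crypt", "door never unlocks after the puzzle", Severity.BLOCKER),
])
```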

Autoplay also allows us to focus on the specific task at hand, because it takes us precisely to the desired point in the game without wasting time on any of the stops along the way. The automated process doesn’t just mimic manual testing at a faster rate – it can also trigger and reveal errors that would be very difficult to detect manually, thanks to its ability to perform many operations in a compressed period of time.

Bugs are sometimes found even in the simplest actions that we humans would normally take for granted and simply skip. With experience, game testers develop habits and assumptions that lead us to perform certain actions without verifying them first, thus sometimes (OK, often) failing to notice some very obvious glitches. Not the thorough, meticulous mind of a machine, though. Autoplay never skips those obvious bugs. Observe the graphic proof in the GIF below:

[GIF 1: the hint arrow highlighting the wrong area due to a misplaced pivot]

What you see here is a simple case of a misplaced pivot. OK, I know, let me clarify: when you use a hint, an arrow highlights the area of the screen containing a certain object or action. The pivot is the precise point in the scene that the hint arrow targets. So, when a pivot is wrongly placed, the hint highlights the wrong area of the screen. Manual testing will often fail to detect this problem, because sooner or later we will click somewhere in the vicinity of the hinted object or action; autoplay, however, clicks the exact pivot point, so it immediately stops and reveals the problem.
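
A check along these lines is easy to automate. Here is a hedged Python sketch – the `Rect` hotspot, the coordinates, and `check_pivot` are all invented for illustration – of the validation an autoplay run effectively performs when it targets the exact pivot point:

```python
from dataclasses import dataclass


@dataclass
class Rect:
    """Axis-aligned clickable area of a scene object, in screen pixels."""
    x: float
    y: float
    width: float
    height: float

    def contains(self, px: float, py: float) -> bool:
        return (self.x <= px <= self.x + self.width
                and self.y <= py <= self.y + self.height)


def check_pivot(object_name, hotspot, pivot):
    """Flag a pivot that falls outside its object's hotspot.

    A human tester clicking "somewhere near" the hinted object would
    never notice the mismatch; a click on the exact pivot point fails
    immediately and exposes the bug.
    """
    px, py = pivot
    if not hotspot.contains(px, py):
        print(f"BUG: pivot {pivot} for '{object_name}' lies outside "
              f"its hotspot {hotspot}")
        return False
    return True


# Made-up data: the key's hotspot vs. a pivot accidentally placed
# on the other side of the screen.
check_pivot("rusty key", Rect(x=400, y=220, width=64, height=32),
            pivot=(180, 236))
```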

Now that we have highlighted the numerous advantages of test automation, let’s get down to the areas where manual testing still reigns supreme.

The flaws of test automation

The most obvious flaw of autoplay is that it is not human. Duh.

As I’ve already mentioned, autoplay can perform simultaneous actions at a pace impossible for human beings. It can therefore reveal problems that cannot be reproduced manually – in other words, it sometimes detects problems that no human player could ever cause. The end result is game testers wasting a whole bunch of time trying to reproduce a problem that will never actually occur in real life (real-life gaming, that is). The following GIF shows a common instance of this issue.

[GIF 4: autoplay stuck after outrunning a click-through sequence]

What happens here is that, due to its speedy style, autoplay has gone through the click-through game (basically, a mini-game without a “Skip” option) before the click-through elements have even been activated. In other words, autoplay acts as if it has already resolved a situation that hasn’t even begun, and therefore, it cannot continue with the game. In “human” gaming this issue would never occur.
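
The underlying pattern is a race between the autoplay driver and the game’s own activation logic. A simplified Python sketch – the `ClickThroughElement` class, the timings, and the whole “engine” here are invented stand-ins, not how our engine actually schedules things:

```python
import time
from dataclasses import dataclass


@dataclass
class ClickThroughElement:
    name: str
    activates_at: float   # seconds after the scene starts
    active: bool = False


def play_scene(elements, click_delay):
    """Click each element in order, waiting `click_delay` between clicks."""
    scene_start = time.monotonic()
    for elem in elements:
        time.sleep(click_delay)
        # The engine activates elements on its own schedule.
        elem.active = time.monotonic() - scene_start >= elem.activates_at
        if not elem.active:
            print(f"clicked '{elem.name}' too early -- click ignored, "
                  "autoplay is now stuck")
            return False
        print(f"clicked '{elem.name}': OK")
    return True


lever = [ClickThroughElement("lever", activates_at=0.5)]

play_scene(lever, click_delay=0.05)  # autoplay speed: outruns activation
play_scene(lever, click_delay=1.0)   # human speed: works fine
```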

Due to the variety of devices players use to play our games, we are required to test them on a whole range of hardware, from high-end gaming machines to tired old low-end PCs that are past their expiration date. Because of its high performance speed, autoplay can occasionally cause the FPS (frames per second) to drop, sometimes overloading weaker devices and freezing them entirely.
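
Drops like that can at least be logged automatically while autoplay runs. A minimal sketch in Python – the frame durations are simulated, since hooking a real engine’s frame loop is engine-specific:

```python
def watch_framerate(frame_times, min_fps=20.0):
    """Flag every frame whose duration implies an FPS below `min_fps`."""
    worst = float("inf")
    for i, dt in enumerate(frame_times):
        fps = 1.0 / dt
        worst = min(worst, fps)
        if fps < min_fps:
            print(f"frame {i}: {fps:.1f} FPS -- possible overload")
    print(f"worst observed frame rate: {worst:.1f} FPS")


# Simulated frame durations in seconds: autoplay hammering a low-end
# machine spikes two frames from ~16 ms to 200-250 ms.
frames = [0.016] * 5 + [0.20, 0.25] + [0.017] * 5
watch_framerate(frames)
```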

Since autoplay’s sole purpose is to find faults in the gameplay, it will often miss minor faulty game elements that don’t affect the gameplay – mainly corrupted or missing graphics that are obvious to the human eye. Observe the example below:

[GIF 3: a momentary black patch in the scene and stray cursor hover marks, ignored by autoplay]

See how the top center section of the scene goes black for a moment, and how the cursor leaves hover marks behind. It’s not supposed to be that way, but autoplay will just skip through that part as if everything is OK, because the gameplay was not affected. It may not seem like a big deal, but the gaming experience is all about immersion, and anything that breaks the beautiful illusion is a big no-no.
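
One way to catch this class of defect automatically – not something our autoplay does, just a common technique worth sketching – is to diff captured frames against approved reference screenshots. A Python sketch using the Pillow library; the file paths are hypothetical placeholders:

```python
from PIL import Image, ImageChops  # pip install Pillow


def frames_differ(reference_path, capture_path, tolerance=0):
    """Return True if a captured frame deviates from its reference.

    ImageChops.difference yields the per-pixel absolute difference;
    getbbox() is None only when every pixel matches.
    """
    ref = Image.open(reference_path).convert("RGB")
    cap = Image.open(capture_path).convert("RGB")
    diff = ImageChops.difference(ref, cap)
    if tolerance:
        # Suppress sub-threshold noise such as compression artifacts.
        diff = diff.point(lambda value: 0 if value <= tolerance else value)
    return diff.getbbox() is not None


# Hypothetical paths: an approved capture vs. the frame where the
# top-center section of the scene momentarily went black.
if frames_differ("scene12_reference.png", "scene12_frame0412.png",
                 tolerance=8):
    print("visual regression: frame deviates from the reference capture")
```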

Due to its specific programming, autoplay doesn’t exercise all game elements. For instance, in order to save time (and to save our engine programmers from writing too much unnecessary code), autoplay will skip certain game situations instead of taking the “pedestrian” step-by-step path of completing each sequential action. However, in certain specific situations, like the one illustrated below, autoplay is unable to “cheat” its way past the required sequence of actions, and the testing has to be done manually.

[GIF 2: an in-scene mini-game that autoplay cannot skip]

The illustrated situation shows a mini-game that autoplay was unable to skip. Our autoplay doesn’t actually click the “skip”/“hint” options available to players; instead it uses its own custom “cheat” function that mimics those options. The problem here is that this is not a standard mini-game that opens in a new window – it happens within the scene itself. In this specific case, autoplay cannot use its “cheat” option, and therefore stops, even though there’s nothing wrong with the gameplay. In real-life play, the player will simply perform the required sequence of actions or use the “skip” option.
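
In pseudocode terms, the decision looks roughly like this – a Python sketch where `cheat_table`, the mini-game dictionaries, and the window/in-scene distinction are all simplified stand-ins for the real engine hooks:

```python
def resolve_minigame(minigame, cheat_table):
    """Force-complete a mini-game via the cheat hook, or defer to a human.

    `cheat_table` maps mini-game ids to functions that force-complete
    them -- a stand-in for the engine hook described above, which only
    works for mini-games that open in their own window.
    """
    cheat = cheat_table.get(minigame["id"])
    if minigame["opens_in_new_window"] and cheat:
        cheat()
        return "skipped via cheat"
    # In-scene mini-games have no window to short-circuit, so the
    # cheat hook does not apply and a tester has to take over.
    return "needs manual testing"


cheats = {"lockpick": lambda: print("lockpick force-completed")}

print(resolve_minigame({"id": "lockpick", "opens_in_new_window": True},
                       cheats))
print(resolve_minigame({"id": "rope_knot", "opens_in_new_window": False},
                       cheats))
```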

Finally, one crucial flaw of any automated testing software, including our autoplay, is that it follows the same predetermined course of action in every situation. As a consequence, it has no flexibility and is unable to detect errors that surface only when certain actions are performed in a slightly different order. Since players are far less predictable than machines and tend to do things in their own weird individual ways, we as game testers must do our best to think outside the box and try every conceivable way to “confuse” the game, because some players out there will most certainly find a way to do it.
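
This is where deliberately randomizing or permuting the action order can complement a fixed script. A toy Python sketch – the “game” below is a made-up state machine with a planted order-dependent bug, not real engine logic:

```python
import itertools


def simulate(actions):
    """Toy game logic with a planted order-dependent bug.

    The ghost dialogue was scripted assuming the player opens the
    drawer first; any other order blocks the game.
    """
    state = set()
    for action in actions:
        if action == "talk_to_ghost" and "open_drawer" not in state:
            return f"BLOCKED by order {actions}"
        state.add(action)
    return "finished"


scripted_order = ["open_drawer", "take_key", "talk_to_ghost"]

# A fixed autoplay script only ever tries one ordering and passes;
# trying every permutation (or a random sample of them) finds the bug.
for perm in itertools.permutations(scripted_order):
    result = simulate(list(perm))
    if result != "finished":
        print(result)
```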

[Image 2: Our QA team delighting in the fact that machines cannot replace them (yet)]

Ultimately, playing a video game is an intuitive and therefore highly individual process. As no two players are completely alike, no machine, no matter how smart and sophisticated, can fully replace the human element in testing – and that’s not just something I’m saying because my bosses are reading this.

As much as automated testing helps, we all still have to go through the entire game manually dozens of times. Still, a smart combination of automated and manual testing will save you countless hours of work, and every effort to optimize and improve the automation will pay off handsomely in the long run. Just don’t expect it to do your work for you.