Ritual Lead Tester Slams Sony's Quality Assurance
Michael Russell, the quality assurance manager at Ritual, detailed his experiences with Sony's QA department after being offered a job at the company.
After Sony flew the former Microsoft Game Studios lead tester in first class for a job interview back in April, Russell claims he turned down the job because of Sony's general QA practices.
"The straw that broke the camel's back came in the last hour of my interview. I was told that the way that Sony tests their games is that there are one or two test leads on a project starting at about six months out. At T-8 weeks, between 80 and 100 temporary testers are brought on to test the game for those eight weeks. That's it. This was done for financial reasons, and as a QA Manager, I would be expected to run test the same way."
Russell disagrees with this approach, arguing that it raises the odds of game delays or release-day bugs compared with having more testers in place from the beginning of development. Aside from the technical issues, Russell also disliked the general attitude Sony seemed to have toward its game testers.
"There was talk about issues that only came up on production UMD's for PSP games, major friction between test and development teams with little to no management backing for test, little to no shared technology, extremely lax "user effect" bug metrics for determining whether or not to fix something, and a variety of other fairly hefty issues, not just from a process standpoint, but a overall culture standpoint," said Russell. "Microsoft is known for giving QA a bit too much say in the products that are developed, but the feeling I got inside Sony was that QA was seen as nothing but a bunch of monkeys with controllers."
The blog post was sparked by Sam Kalman, another game tester, who discussed the reproducible bugs found in the Sony-published Genji for the PS3. The first is an issue that prevents the player from finishing a quest if he tries the wrong solution to a puzzle.
"I'm 100% sure that this bug was a known issue when the decision to ship was made," said Kalman. "It's more likely that the stakeholders decided that the bug was not severe enough to fix in the remaining time prior to ship. Someone might have performed a calculation comparing the estimated number of people that this bug would affect and the cost of money and/or risk involved in fixing the bug for the final game."
Kalman went on to detail bugs that prevent the player from finishing the game at all: "Of course, there is no way for me to be sure what happened during testing, or why this game did ship with what I would consider to be a severity 1 bug. But I do know that this game did ship with a blocking issue. You can't finish the damn game because of this bug, and you have to start over from the beginning unless you were lucky like Chris. In my mind, this is worse than a crash bug, and right up there with data loss bugs. I think it's inexcusable that a game of this profile would ship with this type of issue, and I encourage anyone affected by this bug to scream bloody murder to Sony Support."
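The ship/no-ship triage Kalman speculates about can be sketched roughly as an expected-cost comparison. Everything below is an illustrative assumption on my part (the function name, the dollar figures, the risk factor), not Sony's actual process:

```python
# Hypothetical sketch of the triage calculation Kalman describes:
# weigh the expected cost of shipping a bug against the cost of fixing it.
# All names and numbers here are made up for illustration.

def should_fix(affected_users, cost_per_affected_user, fix_cost, regression_risk):
    """Return True when the expected cost of shipping the bug
    exceeds the cost of fixing it (padded for regression risk)."""
    expected_ship_cost = affected_users * cost_per_affected_user
    expected_fix_cost = fix_cost * (1 + regression_risk)
    return expected_ship_cost > expected_fix_cost

# A puzzle bug most players never hit can lose this comparison,
# but a blocking "severity 1" bug hits everyone who reaches it:
print(should_fix(affected_users=200, cost_per_affected_user=2.0,
                 fix_cost=20_000, regression_risk=0.5))     # obscure bug
print(should_fix(affected_users=50_000, cost_per_affected_user=2.0,
                 fix_cost=20_000, regression_risk=0.5))     # blocking bug
```

The point of the sketch is Kalman's: under a purely financial calculation, a bug that is catastrophic for the affected player can still look "cheap" if the stakeholders underestimate how many players will hit it.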
As a former EA employee with many friends who worked as the "monkeys with controllers," I can verify that this is indeed both the environment and the mindset of Sony's QA department. I worked with one former Sony QA lead who told me stories of how he and other leads allowed their testers to throw gamepads in frustration if they felt they needed to let off some steam. EA is also a testing sweatshop, but that kind of behavior is completely insane. Testing is a crucial part of game development that some of the larger publishers like Sony and EA tend to shrug off when facing corporate deadlines.

As gamers, we all understand the frustration of not getting our hands on an exciting new title, but we are also the most vocal group in the market when a game ships with "Severity 1" bugs. Game publishing is a business, so shipping an unfinished game can still make business sense for any publisher. If the game-buying public wants the quality of their games to improve, we need to vote with our wallets. Having worked in a small corner of the industry myself, I always wait a couple of weeks after a game's release to see what the final quality of the title is. I'm not one of those fanatics who boycotts EA or Sony across the board, but I won't give them my hard-earned cash for a bug-infested title like BF2.
This makes me wonder whether there should be game-industry firms dedicated strictly to quality assurance. There may already be some, for all I know. It would seem to be the best solution to the current state of things. Instead of maintaining a large, permanent, and costly QA department, or hiring a few busloads of beta testers every cycle, publishers and developers could hire a QA firm to handle alpha and beta testing. That way you get to "hire" experienced, dedicated personnel for less money. There would still have to be a small group of dedicated in-house build testers, but why not contract the rest of the QA duties out to a QA firm?
That's putting a lot of onus on the indie QA firm. Example:
Sony hires "SuperGames QA Firm" and the firm begins testing Audible The Porcupine. SuperGames finds a BUNCH of "Severity 1" bugs and reports them to Sony. Sony says, "Thanks, but this will cost a buttload to fix, so no."
The fallout is that Sony then has the opportunity to shrug off any future bug reports and say, "Not our fault - we hired SuperGames to do the QA!" Then the poor QA firm takes the heat for doing shitty QA on a triple-A title and doesn't get any more business from elsewhere.
I worked for a software development company (I'm assuming there is some correlation with game development), and as we continued to build new products, QA became more and more valued. QA only went over budget on the poorly planned projects; the ones with solid functional and technical specifications didn't blow their QA budget. Looking back at it all now, it's that simple.
If proper QA is too costly for a game developer, then I'd be more worried about the competence of the developer. The cost of QA is inversely proportional to the quality of planning and execution.
One of the most central reasons why large publishers will probably never fully back third-party test labs is security. Can you picture M$oft sending Halo 3 out to a third-party lab via FTP or UPS? Never happen. The potential for theft, loss, or malicious distribution is too high. Most large companies will probably always try to handle QA internally.