Part 1 – An Automated Web We Weave: QA Automation and Moab at Adaptive Computing

“Things would be so much simpler if we had more automation.” I know no one has ever heard…or said that. It seems obvious: if computers are running all your tests for you, there is less work for people to do, right? Quality Assurance can work from the beach, and you only need one QA Engineer, or maybe none at all… right? I suppose that might work if you stopped all development and refused to support any new operating systems, hardware, or software integrations.

While automation has lots of benefits, like faster test results and a better baseline due to more frequent execution, it takes a lot of work to keep it in working order. Even if you are only trying to keep up with OS changes and third-party library changes, there is a ton of work to do. If you want to maintain your automation level after you add a couple dozen prolific developers, you can easily keep as many QA Engineers busy as any manual testing shop would employ. Add into the mix multiple release branches, multiple feature branches, multiple product branches, and a dash of special code branches that need testing, and you have a lot of test runs to coordinate, let alone maintain. On top of that, each suite of tests runs for a different length of time, and some have special hardware or system requirements. For example, some test runs need multiple servers, a GPU, or special software installed (like Valgrind to check for memory leaks).
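
To make that concrete, here is a minimal, hypothetical sketch of the kind of metadata each test run carries. The suite names, durations, and machine pool are invented for illustration; the point is simply that matching suites to servers with the right hardware is only half the problem, and fitting their very different run lengths into the calendar is the other half.

```python
# Hypothetical sketch of per-suite requirements (names and numbers invented).
TEST_SUITES = {
    "moab_smoke":    {"hours": 2,  "servers": 1, "gpu": False, "valgrind": False},
    "moab_full":     {"hours": 18, "servers": 3, "gpu": False, "valgrind": False},
    "torque_gpu":    {"hours": 6,  "servers": 2, "gpu": True,  "valgrind": False},
    "moab_memcheck": {"hours": 30, "servers": 1, "gpu": False, "valgrind": True},
}

# Hypothetical server pool with differing capabilities.
POOL = [
    {"name": "qa01", "gpu": False, "valgrind": True},
    {"name": "qa02", "gpu": True,  "valgrind": False},
    {"name": "qa03", "gpu": False, "valgrind": False},
]

def eligible_servers(suite_name):
    """Return the servers in the pool that meet a suite's special requirements."""
    needs = TEST_SUITES[suite_name]
    return [s["name"] for s in POOL
            if (s["gpu"] or not needs["gpu"])
            and (s["valgrind"] or not needs["valgrind"])]

if __name__ == "__main__":
    for suite in TEST_SUITES:
        print(suite, "->", eligible_servers(suite))
```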

Here is a summary of what we run on a weekly basis at Adaptive Computing:

  • 36 servers
  • 63 active branches across 14 branch families
  • About 3,000 tests per Moab branch
  • About 800 tests per TORQUE branch
  • About 35,000 tests per day
  • 331 active runs per week

If you have an unlimited hardware budget, you can buy as many servers as it takes to run all these tests as often as you want. In the real world, this becomes a real challenge. Sometimes it is easy to see where you have a block of free time to run tests. Most of the time, though, everything looks busy, yet when you ask for more servers, IT tells you that you are not even using 20% of your available CPU over an average 24-hour period. You can only get so far with cron jobs and a pair of human eyes. With every run you add, it gets harder and harder to find sufficient gaps for ever-larger automated test runs.
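
The arithmetic behind that IT complaint is easy to reproduce. Here is a small, hypothetical back-of-the-envelope sketch (the server count, slot size, and run lengths are made up, not our real numbers) of why a calendar full of fixed cron slots can look completely booked while the servers sit idle most of a 24-hour day:

```python
# Hypothetical illustration: fixed cron slots sized for the slowest suite
# block out far more server time than the runs actually use.
SERVERS = 6
SLOT_HOURS = 8                      # fixed nightly window reserved per run
RUN_HOURS = [1, 2, 3, 8, 1, 2]      # what the runs in those slots actually take

reserved = SERVERS * SLOT_HOURS     # server-hours blocked out on the calendar
used = sum(RUN_HOURS)               # server-hours of real test execution

print(f"Reserved: {reserved} server-hours, actually used: {used} server-hours")
print(f"Utilization of the reserved windows: {used / reserved:.0%}")
print(f"Utilization over a full 24-hour day: {used / (SERVERS * 24):.0%}")
```

With these invented numbers the reserved windows are only about a third used, and utilization over the whole day lands near 12%, even though every slot on the calendar is spoken for.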

In part two of this blog, I’ll explain how we in Adaptive Computing QA eat our own cooking to solve this automated test scheduling dilemma. (Spoiler alert: it’s Moab!)

 
