As a tester working on the ground-breaking new version of SQL Response I’m haunted by the following question: “How do you performance test a product like SQL Response?”
Yes, I am literally haunted by voices in my head asking me this question. After all, this is a product that needs to exist on a customer’s network with the lowest possible impact while monitoring potentially a couple of hundred highly active servers for dozens of symptoms that could imply problems. Compounding this problem is the fact that the performance metrics have to cover not just the installed application host machine but also the monitored servers, domain controllers and the network itself. So it’s not an easy problem. It’s going to take a LOT of discussion, meticulous planning and technical inspiration.
A lot of Red Gate employees have SQL Server installed on their machines, so it would be simple for us to ask for volunteers and have our wonderful new product happily monitoring 150 SQL Servers… However, many of these SQL Servers are doing precisely nothing. We could build a product that monitored them for eternity without the slightest performance hiccup. But we’re a bit better informed than that. Red Gate makes a lot of effort to communicate with SQL Server DBAs, and one thing we never hear is “Our SQL Servers just sit on our desktop machines, ready for the occasional time when we need to do a quick bit of testing. Then we all go home for the weekend and don’t have to worry.”
There are some decent tools out there, notably the free Microsoft tools SQLIO and SQLIOSim. Although these definitely stress a system, they don’t actually involve SQL Server in any way, so they wouldn’t necessarily create the kind of realistic activity we’re interested in seeing. We’ve also evaluated a tool called SQL Stress, which seems very nice. One problem: it doesn’t have command-line support, so it isn’t great for an automated test system. We could always roll our own SQL scripts and write a very basic multi-threaded C# app to run them. Easy to do, but again not likely to create realistic activity. The most promising idea we have is to create a clone of the server that runs the hugely popular (and Red Gate sponsored) community site SQL Server Central. With that clone in place, we could use a trace replay tool to continuously replay trace data collected from the site itself. This would be quite an undertaking, but we feel it would be worth it to create a realistic performance scenario.
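To give a feel for what the “roll our own scripts” option looks like, here is a minimal sketch of a multi-threaded load generator. It is illustrative only: it uses Python with SQLite as a stand-in database (the real harness would be a C# app talking to SQL Server), and the table and statements in `WORKLOAD` are made-up examples, not anything from our actual test plan.

```python
# Sketch of a multi-threaded SQL load generator. sqlite3 stands in
# for SQL Server here purely so the example is self-contained; a real
# harness would open connections to the monitored instance instead.
import sqlite3
import threading
import time

# Hypothetical workload: a fixed mix of statements replayed in a loop.
WORKLOAD = [
    "INSERT INTO orders (amount) VALUES (42)",
    "SELECT COUNT(*) FROM orders",
    "UPDATE orders SET amount = amount + 1 WHERE id = 1",
]

def worker(db_path, stop_event, counter, lock):
    # Each thread opens its own connection, as a real client would.
    # A generous timeout lets writers wait out each other's locks.
    conn = sqlite3.connect(db_path, timeout=30)
    while not stop_event.is_set():
        for stmt in WORKLOAD:
            conn.execute(stmt)
            conn.commit()
            with lock:
                counter[0] += 1  # total statements executed so far
    conn.close()

def run_load(db_path, n_threads=4, duration_secs=1.0):
    """Hammer the database from n_threads threads for duration_secs;
    return the total number of statements executed."""
    setup = sqlite3.connect(db_path)
    setup.execute("CREATE TABLE IF NOT EXISTS orders "
                  "(id INTEGER PRIMARY KEY, amount INTEGER)")
    setup.commit()
    setup.close()

    stop_event = threading.Event()
    counter, lock = [0], threading.Lock()
    threads = [threading.Thread(target=worker,
                                args=(db_path, stop_event, counter, lock))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    time.sleep(duration_secs)
    stop_event.set()
    for t in threads:
        t.join()
    return counter[0]
```

This is exactly the kind of thing that is easy to write and automate, and exactly why it falls short: the statement mix is synthetic and repetitive, nothing like the bursty, varied traffic a production server sees. Hence our interest in replaying real traces instead.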
So… those are our ideas. But what we’d love to know is: what ideas do you experienced DBAs have about this kind of thing? Maybe what we’re trying to achieve sounds similar to a problem you’ve encountered in the past? Maybe you’ve already thought of an inspired solution, packaged it, released it, and are reading this from a luxury yacht somewhere off the coast of Hawaii? Or maybe there’s a well-known solution that has become standard DBA practice? We’d love to hear those ideas.