Without better performance data from your live system I can only guess, but you are probably running into disk I/O issues.
When doing data center storage architecture, I assume an average capability of about 100 native IOPS per spindle for 7.2K RPM drives. If you are running 5.4K RPM drives, it gets worse.
A generic read/write mix on the pool might be 3:1, meaning for every 3 reads there is 1 write; every workload on every pool varies somewhat. If your workload fits that mix, then for every 4 pool-level I/O operations there are 5 spindle I/O operations: 3 reads plus 2 writes (1 to each drive in a mirror pair), so the pool delivers roughly 80% of the raw spindle capacity. I'd therefore calculate the total IOPS capacity of the 4-drive RAID1+0 set as (4 x 100) * 0.8, or 320 IOPS.
Again, this is a generic ratio/mix, not one representative of your particular workload. If it were 100% reads, your setup would net an average of 400 IOPS; if it were 100% writes, it would drop to 200 IOPS. That discounts caching, and it also assumes that I/O sizes are well matched to the pool ashift and dataset blocksizes.
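To make that back-of-envelope arithmetic explicit, here's a minimal Python sketch of the estimate. The per-spindle IOPS figure, the read fraction, and the 2x write penalty for mirrored pairs are the same assumptions as above; it ignores caching, ashift/blocksize mismatch, queueing, and everything else a real workload does, so treat it as a rough bound, not a prediction.

```python
def mirror_pool_iops(spindles, iops_per_spindle=100, read_fraction=0.75):
    """Rough IOPS estimate for a striped-mirror (RAID1+0) pool.

    Assumptions (same as the math above):
      - each read costs 1 spindle I/O
      - each write costs 2 spindle I/Os (one per side of the mirror pair)
      - caching, ashift/blocksize matching, and queueing are ignored
    """
    raw_capacity = spindles * iops_per_spindle                 # total spindle IOPS
    cost_per_pool_io = read_fraction * 1 + (1 - read_fraction) * 2
    return raw_capacity / cost_per_pool_io

# 4 x 7.2K drives at ~100 IOPS each:
print(mirror_pool_iops(4, 100, 0.75))  # 320.0  (3:1 read/write mix)
print(mirror_pool_iops(4, 100, 1.0))   # 400.0  (all reads)
print(mirror_pool_iops(4, 100, 0.0))   # 200.0  (all writes)
```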
Running a database server and a nameserver on something with a capability between 200 and 400 IOPS doesn't give you a lot of room to work with. Depending on things like how the slave database is updated (synchronous commits vs. periodic log shipping), database size, indexing, number of online users, batch processing, etc., your database performance could swing anywhere from acceptable to abysmal.
Sorry this doesn’t spell out a precise answer for you, but given the paucity of information that you’ve provided, that’s about the best answer I can give.