Disks being randomly kicked out

I have a Supermicro 36-bay chassis and currently run a 35-drive zpool made up of 7 raidz1 vdevs of 5 drives each. Every so often a drive disappears from the zpool and I have to dismount the pool, pull the drive, plug it back in, do some refreshes in the unRAID GUI, and the drive comes back. I remount the pool, it resilvers, and all is good. This has happened three or four times in the past week, and I have been copying data to the pool the whole time. I am running this under unRAID 6.12.3.

I’ve had this server for a long time and never experienced anything like this. I have a single HBA connecting the front and rear backplanes, and the hardware is fine as far as I can tell. Since this is a backup server, I’m not using top quality drives; there are many Seagate Archive drives in the pool, which worked perfectly in my other unRAID server in the past. I’m sure it’s a big no-no to use these drives with ZFS, and this might be my problem: all the drives that have been kicked out of the zpool so far have been Seagate Archive drives. I have four WD Datacenter drives in the zpool and not one of them has had an issue.
Just curious as to why this is happening; happy to join this community.

Since this is a backup server, I’m not using top quality drives; there are many Seagate Archive drives in the pool

This is probably the issue, especially if the failures appear randomly and evenly distributed across the drives (and the HBA ports). Although I’ve seen folks say they’ve had no trouble running ZFS on Seagate Archive drive-managed SMR disks, I’ve seen quite a few more reporting exactly the kind of intermittent dropouts you’re describing.

What’s most likely happening is that when the pool gets heavily loaded, some of these SMR drives stop responding to commands for long enough that ZFS decides they’re deranged gear and kicks them out of their vdev, provided sufficient redundancy remains for it to keep operating without the evicted disk. Drive-managed SMR disks buffer incoming writes in a small CMR cache region and re-shingle them in the background; under sustained writes that cache fills up and the drive can stall for long stretches while it catches up, which is exactly what a week of continuous copying onto the pool will trigger.
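
If you want to confirm that before blaming the HBA or cabling, the next time a drive drops it’s worth checking the pool’s error counters and the kernel log before you re-seat anything. Something along these lines, as a rough sketch only (swap in your actual pool name and the device node of the kicked drive):

zpool status -v yourpool          # which disk was faulted, plus its read/write/checksum error counts
zpool events -v | tail -n 40      # recent ZFS events, including I/O delays and fault diagnoses
dmesg | grep -iE 'timeout|reset|i/o error'    # SATA command timeouts or link resets around the time of the dropout
smartctl -a /dev/sdX              # the drive's own SMART error log and health attributes

If the kernel log shows command timeouts on the Archive drives while the WD Datacenter drives stay quiet, that points pretty squarely at the SMR behaviour rather than the controller.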