If Nvidia has, over the years, evolved its GPUs to exploit x86's strengths and work around its warts, that might make it harder for non-x86 platforms to slot into the role. Maybe it's the way x86 handles interrupts, or maybe its SIMD instructions are well suited to managing and mangling the flow of data.
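As one concrete example of the SIMD angle: on x86, GPU drivers commonly push command and vertex data into write-combined PCIe BAR mappings using non-temporal (streaming) stores, which bypass the cache and let the write-combining buffers coalesce small stores into full bus bursts. A minimal sketch of the pattern, where the function name and the buffer setup are hypothetical, just to illustrate:

    #include <emmintrin.h>  /* SSE2: _mm_stream_si128, _mm_load_si128 */
    #include <stddef.h>

    /* Hypothetical helper: copy a command buffer into a write-combined
     * GPU BAR mapping with non-temporal stores. _mm_stream_si128 skips
     * the cache, so the write-combining buffers can merge the 16-byte
     * stores into larger PCIe bursts. Assumes dst and src are 16-byte
     * aligned and len is a multiple of 16. */
    static void copy_to_gpu_wc(void *dst, const void *src, size_t len)
    {
        __m128i *d = (__m128i *)dst;
        const __m128i *s = (const __m128i *)src;
        for (size_t i = 0; i < len / 16; i++) {
            _mm_stream_si128(&d[i], _mm_load_si128(&s[i]));
        }
        _mm_sfence();  /* make the streaming stores globally visible */
    }

To be fair, ARMv8 has analogous non-temporal hints (LDNP/STNP), so this particular trick isn't x86-exclusive; the question is whether driver tuning around details like write-combining behavior differs enough to matter.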
I would be interested in learning more about this, out of idle curiosity. If this is a real issue, it sure seems like something Nvidia should be able to fix, given that they control their own GPU architecture. And if ARM's Neoverse designs aren't quite up to snuff, designing their own ARM cores (with custom instructions, a different interrupt methodology, etc., if necessary) seems like something a company with a $4.5 trillion market cap could afford.