It's also a way to design planned obsolescence into their own products. Assuming Nvidia doesn't see AMD as a real threat, then without meaningful updates consumers would have no reason to buy new Nvidia GPUs, and their year-on-year gaming revenue would fall.
There's a reason the results all look the same. AI draws on results from ALL online human work, but acts as a single entity. So what it outputs is essentially the cumulative human viewpoint, focused as if it came from a single person. It looks samey because it is, in effect, one viewpoint.
I don't think Nvidia's promise, that this isn't slop but just another form of customization where you can still be unique, can ever be true. When you draw as an artist, you control every single aspect of the design, down to individual lines. To be genuinely unique you'd need that level of control, and at that point you lose the whole point of neural rendering, because everything you don't do yourself gets filled in by the computer.
So eventually the vision seems to be that you do everything through an AI prompt. I guess we're back to typewriters again.
Also, remember that graphics cards basically started out approximating output rather than computing it accurately. That's one way the Voodoo founders achieved acceptable performance at low cost: using lower-precision arithmetic (e.g., FP32 instead of FP64). The grainy, blurry textures you're seeing with DLSS are related to this. To save compute, it approximates even further instead of being accurate.
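To make the precision point concrete, here's a minimal sketch (using NumPy, not anything from an actual GPU driver) of how a value representable in FP64 simply vanishes when rounded to FP32:

```python
import numpy as np

# A tiny increment that FP64 can hold (its epsilon is ~2.2e-16)
x64 = np.float64(1.0) + np.float64(1e-8)

# FP32's epsilon is ~1.2e-7, so the increment is below its resolution
x32 = np.float32(x64)

print(x64)                      # 1.00000001
print(np.float64(x32))          # 1.0 -- the detail is gone
print(x64 - np.float64(x32))    # the approximation error, ~1e-8
```

Each halving of precision throws away detail like this; the renderer just hopes the error stays below what you can see.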
Using neural rendering throughout future GPUs would create far more distortion, though. For AI they are going as low as Int8. If that becomes the future of gaming, the artifacts you currently see with upscaling and anti-aliasing like DLSS would show up everywhere: in textures, movement, lighting, vertex transformations, and geometry.