Linux seems to be outperforming Windows on every single benchmark, but the 3D V-Cache made no difference on most of them.
I don't know much about it other than that it should be a faster cache?
Is it possible neither OS is fully utilizing it?
It's a bigger L3 cache with lower latency. It's possible none of the benchmark software is attuned to that, although they're running so many benchmarks it may just mean very narrow scenarios where you see the benefit.
> it may just mean very narrow scenarios where you see the benefit.
I would assume this to be the case. It should be beneficial at higher thread counts, when more code and shared data fits in the cache. All the tests, however, seem to be very CPU-bound and graphics-related. I'm curious about the chess ones, where it seems one model benefited more from the extra L3 than the other, as well as the one where Windows had significantly lower performance (could it be small-file I/O?).
Finding the cases where the larger cache is most beneficial would be an interesting project on its own.
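If anyone wants to poke at this themselves, here's a minimal sketch (my own, not from the article) of the kind of microbenchmark that exposes the extra L3: a random pointer chase over working sets of increasing size. The file name, sizes, and iteration counts below are arbitrary; the idea is just that on a V-Cache part the latency cliff should show up at a larger working set (96 MB vs. 32 MB of L3 on the recent X3D chips).

```c
/* cache_sweep.c — hypothetical microbenchmark, not from the linked article.
 * Random pointer chase over working sets of increasing size; the average
 * load latency jumps once the working set no longer fits in L3. */
#define _POSIX_C_SOURCE 199309L
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define MAX_BYTES (256u * 1024 * 1024)   /* sweep up to 256 MiB */
#define CHASES    (20 * 1000 * 1000)     /* dependent loads timed per size */

static uint64_t rng = 0x9E3779B97F4A7C15ull;
static uint64_t xorshift64(void) {       /* tiny PRNG, avoids RAND_MAX limits */
    rng ^= rng << 13; rng ^= rng >> 7; rng ^= rng << 17;
    return rng;
}

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

static volatile size_t sink;             /* keeps the chase from being optimized out */

int main(void) {
    size_t *next = malloc(MAX_BYTES);
    if (!next) return 1;

    for (size_t bytes = 256 * 1024; bytes <= MAX_BYTES; bytes *= 2) {
        size_t n = bytes / sizeof(size_t);

        /* Sattolo's algorithm: a random single-cycle permutation, so every
         * load depends on the previous one and the prefetcher can't help. */
        for (size_t i = 0; i < n; i++) next[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = xorshift64() % i;
            size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }

        size_t idx = 0;
        double t0 = now_sec();
        for (long k = 0; k < CHASES; k++) idx = next[idx];
        double ns_per_load = (now_sec() - t0) * 1e9 / CHASES;
        sink = idx;

        printf("%9zu KiB: %6.2f ns/load\n", bytes / 1024, ns_per_load);
    }
    free(next);
    return 0;
}
```

Compile with something like `gcc -O2 cache_sweep.c -o cache_sweep` and plot ns/load against working-set size. Real software benefits roughly to the extent its hot working set falls between the two parts' cliffs, which is presumably why most of these benchmarks don't move.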