mitchman
Gold Member
> Doesn't he write for Windows Central? Being an Xbox fanboy is probably part of the job description.
He writes for The Verge.
> And the settings for PS5 and XSX are high according to Alex, not medium settings. So the PS5 is outperforming the 5700 XT.
Here's the same with high settings: yes, the PS5 is outperforming in resolution, as it should, but it's where I expect it. It's not punching above its weight.
Why is this happening? The TF difference should be able to compensate for the lack of good tools.
> Listen, we coders suck at parallel processing. The theory is right, the tools are known, but sometimes it's just not that easy. It can be a problem of the architecture, the libraries, the third-party software and even the management. So of course a faster, weaker system is going to be more utilised than a broader and more powerful one. Also, I think the "only 10GB fast" design was a mistake.
A faster but weaker machine can't provide a comparable level of performance just because it is more utilised.
> A faster but weaker machine can't provide a comparable level of performance just because it is more utilised.
As you can see in the OP, it can. If you use 9 out of 10 TF in a narrow design and 8 out of 12 TF in a broad one, you are losing with the most powerful machine. I know it shouldn't be like that, but look: the PS3 had 9 processors back in 2006, and Intel still beat AMD CPUs with far more raw throughput because of single-core performance this very last generation. Games released a few years ago tell the same story in the DF CPU reviews. It's not just that easy.
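To put rough numbers on that argument: effective throughput is peak TFLOPS multiplied by the utilisation you actually achieve, so a narrow design that keeps its CUs busy can deliver more real work than a wider one that idles. A minimal sketch, using the illustrative 9-of-10 and 8-of-12 figures from the post above rather than any measured data:

```python
# Effective throughput = peak TFLOPS x achieved utilisation.
# Utilisation numbers are the illustrative ones from the post above, not measurements.

def effective_tflops(peak_tflops: float, utilisation: float) -> float:
    """Work actually extracted from the GPU per second."""
    return peak_tflops * utilisation

narrow = effective_tflops(10.0, 0.90)   # "9 out of 10 TF" in a narrow, high-clock design
broad  = effective_tflops(12.0, 0.67)   # "8 out of 12 TF" in a wide design that is harder to saturate

print(f"narrow design: {narrow:.1f} effective TF")  # 9.0
print(f"broad design:  {broad:.1f} effective TF")   # 8.0
```

Whether either console actually lands at 90% or 67% utilisation is exactly what the thread is arguing about; the point is only that peak TFLOPS alone doesn't settle it.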
> There are two possibilities here. Either:
> - Phil Spencer had 3-4 years to prepare for next gen but inexplicably wound up without a single new XGS title to launch alongside the new hardware. Somehow the development tools are not ready either, despite MS specialising in this area and being the creators of DX12. If that is true he should be sacked because, as David Jaffe said, this is total incompetence and taking the piss out of their own fans.
> - Teraflops, marketed for the past year as the ultimate metric for performance, are in reality far from it: just one part of the real-world picture, and a theoretical one at that. Xbox fans have been taken for a ride with this marketing, as in reality the PS5 outperforms the XSX quite handily.
> Which is it? I think it's probably a mix of both. The unifying truth here is incompetence, either in leadership or in marketing/design.
Yep, when people say "it's because of the old tools available" they are basically stating MS incompetence with their new console launch.
> That's not true at all. Vsync at 60 FPS waits for a frame to be finished within each 16.6 ms interval; it doesn't matter if it finished in 16.5 ms or in 10 ms. The only thing vsync needs to stay locked at 60 fps is for the frame rate to be consistently higher than 60 fps, but that doesn't mean the uncapped framerate would be 70 or 80 fps.
That is not what I said.
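As a toy model of what that quoted post describes (a single frame, buffering and input lag ignored): with vsync on a 60 Hz panel, a finished frame is simply held until the next vblank, so finishing early buys nothing visible.

```python
import math

REFRESH_MS = 1000 / 60          # ~16.67 ms between vblanks on a 60 Hz panel

def vsync_present_ms(render_ms: float) -> float:
    """With vsync, a finished frame is held until the next vblank boundary."""
    return math.ceil(render_ms / REFRESH_MS) * REFRESH_MS

# Whether the GPU needed 10 ms or 16.5 ms, the frame is shown at the same vblank:
for render_ms in (10.0, 16.5):
    print(f"rendered in {render_ms} ms -> displayed at {vsync_present_ms(render_ms):.2f} ms")
# Both print 16.67 ms: a locked 60 fps output, even though the uncapped rates
# would have been 100 fps and roughly 61 fps respectively.
```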
> As you can see in the OP, it can. If you use 9 out of 10 TF in a narrow design and 8 out of 12 TF in a broad one, you are losing with the most powerful machine. I know it shouldn't be like that, but look: the PS3 had 9 processors back in 2006, and Intel still beat AMD CPUs with far more raw throughput because of single-core performance this very last generation. Games released a few years ago tell the same story in the DF CPU reviews. It's not just that easy.
It doesn't work like this. What you're saying is generic nonsense.
Thank God you're here, we needed an amateur armchair analysis
> Yes I know, but the video I posted is above 60 fps as well.
And 1440p.
> It doesn't work like this. What you're saying is generic nonsense.
So, how do you explain the delta in performance from a weaker machine?
> So, how do you explain the delta in performance from a weaker machine?
It's not that much weaker, and it has its own advantages.
Aren't the PC and Series X versions virtually the same code? That's the purpose of the new GDK and GameCore. Wouldn't that mean that if there is a bug in the Xbox version, it would also be present in the PC version? Do the PS4 and Xbox One versions have similar performance drops with the torch?
I will add one more thing: Valhalla also stutters on the world map. Is the map render that costly? There is clearly something wrong with certain scenarios that tank Xbox performance, and it doesn't look like a bottleneck so much as a software bug.
> He writes for The Verge.
To be honest it's the same shit xD
> Yep, when people say "it's because of the old tools available" they are basically stating MS incompetence with their new console launch.
It's the usual "hope and future promises" narrative for Xbox. Xbox sales sucked? "Phil only just took over from Mattrick, he will right this ship."
> That works too - iirc The Verge, Ars Technica and Polygon/Gies were the biggest defenders of Microsoft's original 2013 vision of Xbox and its no-used-games policy.
Don't forget Ben "fart in the wind" Kuchera; the American press always shills hard for Microsoft.
> It's not that much weaker, and it has its own advantages.
Talk about generic nonsense. If both operate at full output, it is weaker. Within the 10 GB of RAM the bandwidth advantage is significant. The raw output of operations is 20% bigger in this theoretical scenario, and in the end that is what you need to render; there are no two ways around it. Tools aside (right now the XSX is performing about 40% below its theoretical differential with the PS5, which will change over time), the logical explanation is that the PS5 is far more efficient - that is its own advantage: an architecture built to take the most out of those 10.3 TF.
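For reference, the paper-spec deltas that post is leaning on, straight from the public spec sheets (peak figures only, nothing measured):

```python
# Spec-sheet numbers: compute and bandwidth deltas between the two consoles.
xsx_tf, ps5_tf = 12.15, 10.28
xsx_bw_fast, xsx_bw_slow, ps5_bw = 560, 336, 448   # GB/s; XSX splits 10 GB fast / 6 GB slow

print(f"compute advantage:      {xsx_tf / ps5_tf - 1:+.0%}")       # roughly +18%
print(f"bandwidth (10 GB pool): {xsx_bw_fast / ps5_bw - 1:+.0%}")  # roughly +25%
print(f"bandwidth (6 GB pool):  {xsx_bw_slow / ps5_bw - 1:+.0%}")  # roughly -25%
```

The ~18% compute gap is where the "20% bigger raw output" claim comes from; the bandwidth picture depends on which XSX memory pool the data lives in.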
So they run their consoles as virtual machines? I did wonder, but hadn't read it anywhere.
If so, that's probably the most straightforward explanation. There may be no issues in the GDK per se; it's just inherently less efficient, but NOT egregiously so. It would probably also explain MS's drive for raw performance, as they need the headroom to cover that overhead.
Nick Baker: There was lot of bitty stuff to do. We had to make sure that the whole system was capable of virtualisation, making sure everything had page tables, the IO had everything associated with them. Virtualised interrupts.... It's a case of making sure the IP we integrated into the chip played well within the system. Andrew?
Andrew Goossen: I'll jump in on that one. Like Nick said there's a bunch of engineering that had to be done around the hardware but the software has also been a key aspect in the virtualisation. We had a number of requirements on the software side which go back to the hardware. To answer your question Richard, from the very beginning the virtualisation concept drove an awful lot of our design. We knew from the very beginning that we did want to have this notion of this rich environment that could be running concurrently with the title. It was very important for us based on what we learned with the Xbox 360 that we go and construct this system that would disturb the title - the game - in the least bit possible and so to give as varnished an experience on the game side as possible but also to innovate on either side of that virtual machine boundary.
We can do things like update the operating system on the system side of things while retaining very good compatibility with the portion running on the titles, so we're not breaking back-compat with titles because titles have their own entire operating system that ships with the game. Conversely it also allows us to innovate to a great extent on the title side as well. With the architecture, from SDK to SDK release as an example we can completely rewrite our operating system memory manager for both the CPU and the GPU, which is not something you can do without virtualisation. It drove a number of key areas... Nick talked about the page tables. Some of the new things we have done - the GPU does have two layers of page tables for virtualisation. I think this is actually the first big consumer application of a GPU that's running virtualised. We wanted virtualisation to have that isolation, that performance. But we could not go and impact performance on the title.
We constructed virtualisation in such a way that it doesn't have any overhead cost for graphics other than for interrupts. We've contrived to do everything we can to avoid interrupts... We only do two per frame. We had to make significant changes in the hardware and the software to accomplish this. We have hardware overlays where we give two layers to the title and one layer to the system and the title can render completely asynchronously and have them presented completely asynchronously to what's going on system-side.
System-side it's all integrated with the Windows desktop manager but the title can be updating even if there's a glitch - like the scheduler on the Windows system side going slower... we did an awful lot of work on the virtualisation aspect to drive that and you'll also find that running multiple system drove a lot of our other systems. We knew we wanted to be 8GB and that drove a lot of the design around our memory system as well.
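The "two layers of page tables" Goossen mentions is standard nested paging: the title's page tables translate a guest-virtual address to a guest-physical one, and the hypervisor's tables translate that again to real memory. A toy illustration, with dictionaries standing in for page-table walks (addresses and page size invented for the example; real hardware caches these translations in the TLB, which is why the steady-state cost stays small):

```python
PAGE = 4096  # 4 KiB pages, purely for illustration

# Hypothetical page tables: guest-virtual -> guest-physical (owned by the title's OS),
# then guest-physical -> host-physical (owned by the hypervisor).
guest_page_table = {0x0040_0000: 0x0001_0000}
host_page_table  = {0x0001_0000: 0x7FFF_0000}

def translate(guest_virtual: int) -> int:
    """Two translations per access: the second layer is the virtualisation cost."""
    page, offset = guest_virtual & ~(PAGE - 1), guest_virtual & (PAGE - 1)
    guest_physical = guest_page_table[page]           # layer 1: title page tables
    host_physical = host_page_table[guest_physical]   # layer 2: hypervisor page tables
    return host_physical + offset

print(hex(translate(0x0040_0ABC)))  # 0x7fff0abc
```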
> And 1440p.
Yes, and you would know that I know that if you had read all my previous posts regarding this.
> It is an optimised version of what they described here AFAIK: https://www.eurogamer.net/articles/digitalfoundry-the-complete-xbox-one-interview
Virtualization is nothing new and is not a problem (unless they seriously fucked something up upgrading the software).
> And a year ago Phil Spencer said he took his console home with him.
And that's possible, if it was running only games written against the XDK and not the new GDK. That was probably one of the first finished units. We know the GDK was not finished even in July this year, and we know that from official MS documentation. The first info about the Series S was in the June release (in a note saying that GPU profiling for Lockhart is currently not working in some scenarios).
> That *could* be an issue... but I know the cost of adding another dynamic light source into the scene. Every single object rendered now has to go through yet another light loop to determine shadows for each of the objects. It's just that expensive, so the *cost* of it is plausible. I would not chalk it up to DX12 optimization. They could try to optimize that code, but I doubt it, as it sits too close to the rendering engine. My 3090 is already running at 96%+ GPU usage; there isn't much more to squeeze out.
> Transparencies are also VERY costly. And this map is a 3D one, not 2D.
I'm not convinced about that map. AFAIK it's exactly the same map as in Origins and Odyssey, and those were displayed without tearing and dips on the standard Xbox One.
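The "light loop" cost the quoted post describes is easy to picture with a back-of-the-envelope model: in a forward-style shader every shaded fragment evaluates every active light, so one extra shadow-casting torch scales the shading work. The numbers below are invented purely to show the shape of the cost, not how the game's engine actually shades anything:

```python
# Rough cost model for a forward-style light loop: every shaded fragment evaluates
# every active light, so adding one light (the torch) scales the shading work.
# All numbers are made up for illustration.

def shading_cost(fragments: int, lights: int, cost_per_light_eval: float = 1.0) -> float:
    return fragments * lights * cost_per_light_eval

frags = 3840 * 2160  # a 4K frame's worth of fragments
print(shading_cost(frags, lights=3))   # baseline scene lights
print(shading_cost(frags, lights=4))   # +1 shadow-casting torch -> ~33% more shading work here
```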
Already debunked by Alex Battlestar Galactica himself.
> Virtualization is nothing new and is not a problem (unless they seriously fucked something up upgrading the software).
Virtualization has overhead: the hardware call needs to pass through the virtualization logic to reach the actual hardware.
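In the simplest terms, the indirection being described looks like the sketch below (a toy model, not the actual Xbox hypervisor path, which per the interview above is engineered to trap only a couple of times per frame):

```python
# Toy model of the indirection the post describes: under virtualisation a privileged
# hardware access traps to the hypervisor, which validates it and forwards it on.

def device_write(register: int, value: int) -> None:
    print(f"MMIO[{register:#x}] = {value:#x}")   # stand-in for touching real hardware

def hypervisor_trap(register: int, value: int) -> None:
    # The extra hop: validate/remap the access, then perform it on the title's behalf.
    device_write(register, value)

device_write(0x10, 0xFF)       # native path: one call
hypervisor_trap(0x10, 0xFF)    # virtualised path: the same work plus the trap in front of it
```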
Well, you know how some of the guys are... "imagine later in the gen when we can get full native 4k with 50 dynamic lights in the scene all running every RT feature there is at the same framerate!!!"
> I want to address that torch. It takes away a good 10 FPS from a 3090 at 4K/Ultra. The torch is an expensive render. Why? Because having more than one shadow-casting light source wreaks havoc on ALL GPUs. It's a shame that this is still the case, but here we are: creating shadows continues to be very expensive. The PS5 drivers are just really good at the dynamic res and probably drop resolution down enough to keep the FPS high.
The resolution can actually be lower in busier sections on XSX than on PS5.
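A minimal sketch of the kind of dynamic-resolution controller that post credits the PS5 version with: if the last frame blew the 16.7 ms budget, shade fewer pixels next frame. The feedback heuristic, clamps and frame times are all invented for illustration:

```python
BUDGET_MS = 1000 / 60

def next_pixel_fraction(current: float, last_frame_ms: float,
                        lo: float = 0.60, hi: float = 1.00) -> float:
    # GPU cost is roughly proportional to pixels shaded, so scale the pixel count
    # by budget/actual and clamp to the allowed window.
    target = current * (BUDGET_MS / last_frame_ms)
    return max(lo, min(hi, target))

fraction = 1.0
for frame_ms in (15.0, 19.0, 21.0, 16.0):   # torch lit -> frames go over budget, then recover
    fraction = next_pixel_fraction(fraction, frame_ms)
    print(f"{frame_ms:>5.1f} ms -> shade {fraction:.0%} of the output pixels")
```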
> XSX version has better textures. [image]
That looks more like the res being lower on PS5 than Xbox rather than a texture difference.
> Who is saying it is a problem? It is still not 100% free, I do not get the point.
Lots of people in this thread, when in reality virtualization probably has less effect on performance than the API differences between PS5 and XSX.
> XSX version has better textures. [image]
On PC, the differences between character texture settings are much bigger than that. It's probably just an optical illusion.
> XSX version has better textures. [image]
Amazing difference...
XSX version has better textures.
[image]
Vsync causes stuttering when a frame can't be output at the exact interval: it then has to wait a full frame interval, causing a stutter. VRR is fine with black levels as long as the frametimes aren't too far off the ideal frametime.
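A toy comparison of the two behaviours described there (one frame in isolation, and assuming the panel's VRR range covers these frame times): a frame that misses the 16.7 ms deadline waits a whole extra refresh under fixed vsync, while VRR just presents it a millisecond or two late.

```python
import math

REFRESH_MS = 1000 / 60

def displayed_after_ms(render_ms: float, vrr: bool) -> float:
    """When the new frame reaches the screen, in this simplified model."""
    if vrr:
        return render_ms                                   # VRR: refresh when the frame is ready
    return math.ceil(render_ms / REFRESH_MS) * REFRESH_MS  # vsync: wait for the next vblank

for render_ms in (16.9, 18.0):   # just barely over the 16.7 ms budget
    print(f"{render_ms} ms frame -> vsync shows it after "
          f"{displayed_after_ms(render_ms, vrr=False):.1f} ms, "
          f"VRR after {displayed_after_ms(render_ms, vrr=True):.1f} ms")
```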
> Lots of people in this thread, when in reality virtualization probably has less effect on performance than the API differences between PS5 and XSX.
It still has overhead.
AC games are terrible at optimization, just to let you guys know. AC Unity ran better on the Xbox One than on the PS4. That does not mean the console is weaker; there is more to this than just that.