
Qualcomm Unveils Snapdragon X Elite CPU PC Benchmarks: Oryon Core Faster & Efficient Than Intel 13th Gen & Apple M2 Max, GPU Faster Than AMD RDNA 3

Yep. The only real hurdle has been the hardware. The upclocked mobile-phone SoCs Qualcomm has been providing haven't had the juice to do justice to desktop workloads.

That changes next year, fingers crossed




Nuvia/Qualcomm didn’t just slap on an ARM Cortex reference design and call it a day…
That wasn't my point.

My point was that ARM, demonstrably at least, has yet to boast a performance ceiling rivaling that of x86.
 

Thick Thighs Save Lives

NeoGAF's Physical Games Advocate Extraordinaire
They've also released bench results for their new Snapdragon 8 Gen 3 SoC and it looks like it trumps the new A17 Pro from Apple in both GPU perf and multicore CPU tests. They're still behind in single-threaded perf (for now) but that might change when they release the new Nuvia cores sometime next year.



It completely overshadows every other smartphone chipset and even seems to match the performance of a Radeon 780M/GTX 1050 Ti with an overclock.
 

Panajev2001a

GAF's Pleasant Genius
That wasn't my point.

My point was that ARM, demonstrably at least, has yet to boast a performance ceiling rivaling that of x86.
Apple is doing some amazing work with their ARM cores and their GPUs (I cannot wait to see in detail how they implemented RT on top of their PowerVR-based TBDR GPU design… PowerVR/IMG Tech had some advanced RT tech that has not had time to shine yet, with some features beyond what NVIDIA has at the moment, but they had nobody big implementing it… until now, I guess).

Still, their best designs are nowhere near the best AMD or Intel can produce performance-wise, even without going to crazy power consumption levels. We will see with the M3 and M3 Pro/Max/Ultra designs soon (they are on 3nm processes, btw, so that should be taken into consideration vs the competition on 5nm).
 

Panajev2001a

GAF's Pleasant Genius
They've also released bench results for their new Snapdragon 8 Gen 3 SoC and it looks like it trumps the new A17 Pro from Apple in both GPU perf and multicore CPU tests. They're still behind in single-threaded perf (for now) but that might change when they release the new Nuvia cores sometime next year.


We will see outside of synthetic benchmarks (which is where Apple shines: their OS and frameworks are designed for their HW, and the apps devs make leverage those frameworks). Even in those benchmarks they do not outshine Apple across the board: they are a bit ahead in some, match them in some, and lose to them in others. I think the big win is that they are on 4nm vs Apple on 3nm, but I might be wrong.

Brain drain at Apple is real though…
 

64bitmodels

Reverse groomer.
Now they are switching to ARM, which Nintendo have been using from the original Gameboy right up to the Switch 2.
Y'know, it's crazy to see ARM's evolution like that. From running what were essentially slower NES games to running Xbox 360-tier games at higher frame rates and resolutions than the original consoles.
 

Ozriel

M$FT
Interesting, but they are certainly selective with their benchmarks. E.g. focusing on single thread and then claiming a 30% power advantage over the M2 Max, when there really isn't much difference in single-threaded performance between the M2/M2 Pro/M2 Max: the differences between them are the number of CPU cores (multi-core), or the Max having the same CPU cores as the Pro but twice the GPU power, which does nothing for CPU performance (only graphics performance) but does increase power usage. M2 Max versions of laptops have slightly worse battery life than M2 Pro versions.

So basically they’re following Apple’s lead in being selective with charts and benchmarks?
 

Ozriel

M$FT
They've also released bench results for their new Snapdragon 8 Gen 3 SoC and it looks like it trumps the new A17 Pro from Apple in both GPU perf and multicore CPU tests. They're still behind in single-threaded perf (for now) but that might change when they release the new Nuvia cores sometime next year.



Hmm. If an SD chipset's GPU can hit 1050 Ti tier when clocked high and (I presume) actively cooled, it's certainly worth getting some hopes up for an Nvidia-designed GPU in the next Switch console.
 
Is it confirmed that the latest chips from Qualcomm use the latest Armv9.2-A architecture? I don't know if Apple's A17 chip does?!
 
If Wi-Fi 7 can go up to 46 gigabits per second, does that mean your ISP is going to provide 46-gigabit data speeds? What is the fastest ISP in the United States right now?
 

LordOfChaos

Member
If Wi-Fi 7 can go up to 46 gigabits per second, does that mean your ISP is going to provide 46-gigabit data speeds? What is the fastest ISP in the United States right now?

There's no real relation; Wi-Fi speed also gets cut down quickly by a number of factors like distance, devices, and other interference. Most people aren't on connections faster than even 1 Gbps; most of the benefit is within the local network, in signal integrity and range.

Google Fiber will offer 20 Gbps
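The link-rate-vs-ISP point above can be sketched as a one-line bottleneck calculation. All the numbers below are illustrative assumptions, not measurements:

```python
# Rough illustration of why a Wi-Fi 7 link rate does not translate into
# internet speed: the end-to-end connection is capped by its slowest link,
# which is usually the ISP plan, not the Wi-Fi hop.

WIFI7_PEAK_PHY_GBPS = 46.0   # theoretical over-the-air maximum
REAL_WORLD_FACTOR = 0.5      # assumed loss to overhead, distance, interference
ISP_PLAN_GBPS = 1.0          # assumed typical fast residential plan

def effective_throughput_gbps(phy_rate, factor, isp_cap):
    """End-to-end throughput is the minimum of the Wi-Fi link and the ISP cap."""
    return min(phy_rate * factor, isp_cap)

print(effective_throughput_gbps(WIFI7_PEAK_PHY_GBPS, REAL_WORLD_FACTOR, ISP_PLAN_GBPS))
# prints 1.0 -- the ISP plan, not the Wi-Fi link, is the bottleneck
```

Even with a generous real-world factor, the Wi-Fi side only matters once the ISP side catches up.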

 

winjer

Gold Member
First benchmarks in the wild, quite good. I wonder if M3 launched today can consistently be a bit over 3000 single core.



Some people don't know what real world performance means.
Geekbench is a synthetic benchmark, a good one, but not a real world program.
Let's wait and see how it performs in reality.
 

Papa_Wisdom

Member
Hacking Rocco Botte GIF by Mega64


I just want to know how this will improve my “e1ite HaX0r Sk1llz”
 
Apple switched its entire product line from Intel to Apple Silicon so developers were forced to switch over too. I suspect that every new application now runs natively on an M1/M2/M3 chip. But where's the incentive for Windows developers to release ARM versions when there are hardly any people using the platform?

It's a huge effort to make Windows on ARM a success when the benefits of switching are negligible. When you buy an Intel/AMD PC/laptop you know it's going to be compatible with every application, game, and piece of hardware you ever wanted to use; with ARM it's always going to be a question mark. How much cheaper and more energy efficient does an ARM laptop have to be to outweigh the negatives?

The quality of the x86 emulation is a major factor in whether or not ARM ever takes over the desktop space. It needs to be seamless and performant, to the point that the end user doesn't even realize it is happening.
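As a toy illustration of that transparency point, here is a minimal Python sketch (an assumed approach, not any official Microsoft API) of how an x86 build might detect that it is running emulated on a Windows-on-ARM machine, based on the standard `PROCESSOR_ARCHITECTURE`/`PROCESSOR_ARCHITEW6432` environment variables. The environment is passed in as a dict so the sketch stays testable on any machine:

```python
# Toy sketch (assumed approach) of detecting that an x86/x64 process is
# running emulated on an ARM64 Windows host. On Windows,
# PROCESSOR_ARCHITECTURE reflects the current process's architecture, while
# PROCESSOR_ARCHITEW6432 exposes the real host architecture when the
# process runs under emulation.

def is_emulated_x86(env):
    """Return True if an x86/x64 process appears to be emulated on ARM64."""
    process_arch = env.get("PROCESSOR_ARCHITECTURE", "").upper()
    host_arch = env.get("PROCESSOR_ARCHITEW6432", "").upper()
    return process_arch in ("X86", "AMD64") and host_arch == "ARM64"

# Hypothetical example: an x64 binary on an ARM64 Windows machine
print(is_emulated_x86({"PROCESSOR_ARCHITECTURE": "AMD64",
                       "PROCESSOR_ARCHITEW6432": "ARM64"}))  # True
```

In real code you would pass `os.environ`; whether `PROCESSOR_ARCHITEW6432` is populated for x64-on-ARM64 emulation is an assumption here and worth checking against current Windows documentation.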
 

GymWolf

Gold Member
The desktop CPU space is about to get very interesting with both these guys and Nvidia developing products.

Long overdue, it's been getting stale with AMD dragging Intel along of late.
Nvidia is developing cpus?

So an all nvidia console is finally a possibility? (An extremely remote one but still...)
 

LordOfChaos

Member
Nvidia is developing cpus?

So an all nvidia console is finally a possibility? (An extremely remote one but still...)


The 2025 party is going to be WILD. From the old duopoly to Nvidia, Qualcomm, AMD, and Apple all throwing down in the ARM space. Hopefully Intel has gotten the memo too and is just downplaying ARM until theirs is ready, like a company does.
 

GymWolf

Gold Member
Designing these processors to emulate x86 is not sustainable in the long run. Microsoft will most likely release an updated SDK (Project Volterra) using the Snapdragon Elite processor, but software developers can use the current Project Volterra kit to get the groundwork started, at least for apps. If they do that on an old Qualcomm processor, the Elite will run it blazing fast.

With Build 2024, I hope MSFT will develop updated tools and APIs, and even utilize generative AI to help coders produce native, optimized code for both x86 and ARM in the easiest way possible. They have already incorporated Xamarin into Visual Studio (one code base for Windows, Android, and iOS), so this shouldn't be that hard.

They also need to optimize pen support for standalone Windows tablets/laptops/all-in-one PCs utilizing the GPU/NPU (since it's getting beefier now, instead of those shitty iGPUs from Intel), with low latency/lag and close to 1:1 touch-to-response.
 
It is what is needed though, as many devs of older games cannot or will not port their games to ARM. We need emulation for those obscure titles.

I think you will run into a similar situation to Microsoft having the hardest time persuading developers who make apps for Android and iOS to make native apps for the Windows Store (that's where you run into shit like this):



An SDK with generative AI assisting developers to move Windows x86 code to Windows ARM, and Android+iOS apps to Windows x86 and Windows ARM, is the key. I think MSFT will have to spoon-feed developers.
 

Tams

Member
This is the technology Qualcomm picked up when it acquired the startup Nuvia.

The founders of Nuvia are all ex-Apple chip design people, and presumably worked on Apple Silicon before jumping ship to found Nuvia.

What I'm saying is this is a big deal. x86 is decrepit and should have been replaced a decade ago or earlier. Getting PCs off x86 is good for everyone except Intel, and honestly, fuck Intel.

You do know that Qualcomm are just as scummy a company, right?

Even other megacorporations don't like working with them. Why do you think Samsung keeps plugging away at Exynos?
 

LordOfChaos

Member
Nanoreview has some scores for the M3 family, not sure if they're real but they're not out of line

M3: 3027 and 11883
M3 Pro: 3087 and 14982
M3 Max: 3187 and 21890

That would make the M3 Pro already tie the X Elite (due mid next year) on the CPU end, and with a much better GPU. The question is what priced laptops the X Elite ends up in: does it bring M3 Pro CPU perf to much cheaper systems? Average load power and idle power are still open questions too.
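For a quick sanity check, the quoted scores work out to the following multi-to-single-core scaling (Python; the scores are the unverified numbers above):

```python
# Multi/single-core scaling from the Geekbench 6 numbers quoted above
# (treated as rough, unverified leaks, not confirmed results).

scores = {            # chip: (single-core, multi-core)
    "M3":     (3027, 11883),
    "M3 Pro": (3087, 14982),
    "M3 Max": (3187, 21890),
}

for chip, (single, multi) in scores.items():
    print(f"{chip}: {multi / single:.2f}x the single-core score in multi-core")
```

The scaling factors track core counts loosely, which is at least consistent with the numbers not being fabricated out of thin air.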
 

Neo_GAF

Banned
The Windows-on-ARM project is going on a decade old at this point, and MS still doesn't have anything remotely close to Rosetta. People don't really understand how strong Apple is at software emulation like this, going all the way back to the original Motorola 68k to PowerPC transition decades ago. Apple also designs Apple Silicon, so there are hardware optimizations that help with conversion of code from Intel to Apple Silicon, above and beyond basic software emulation.

That said, Apple is very good at making developers get off their asses and port code ASAP or die. This is something that Microsoft has never been able to do, MS exercises like zero control over the Windows developer ecosystem. I don't even run any apps on my M1 MacBook Pro which are still Intel native except mother fucking Steam because mother fucking Valve are the laziest mother fucking pieces of shit in the universe. How the fuck has Steam not been made Apple Silicon native yet, Gabe?
Guess fucking why: there is no money to be made on Apple Silicon. Most of the games do not work, the Rosetta emulation doesn't work, and most people on a Mac don't care about gaming.
I have three friends who code (on Mac) and all of them play games on a beefy rig or consoles.
One person tried Shadow PC on their Mac, was satisfied, and then the CS screwed them over, which is why they stopped caring about it in the end.

Would love to see progress here, but I don't see Apple doing anything. When was the last time Apple did any shit for gaming on a Mac?
The 90s?
 
AMD and NVIDIA in particular are not going to make ARM-based CPU/NPU/GPUs only to have a stuttery/janky Windows 11/12 ARM OS, with x86 games running slow/laggy/low-res through emulation. MSFT, along with the CPU/GPU manufacturers, will have to coordinate to make it NATIVE and run smooth as silk. Qualcomm will simply benefit from this. If it were not for Qualcomm, I don't think we would even have Windows on ARM (starting with Windows RT).
 
The future of chip making is pretty exciting:

1) Intel using glass substrate instead of organic substrate (possible for other chip manufacturers)



2) New semiconductor materials (rhenium, selenium, chlorine) that speed up the flow of electrons 100-1000x:
New Scientist Article on Semiconductors

3) Nanoimprint lithography possibly replacing EUV lithography


4) Further reduction and shrinkage on nodes (0.2nm etc):



5) GAAFET replacing FinFET (?!)

"With GAAFET transistors still in the testing phase, we cannot declare final performance or efficiency improvements. Samsung estimates that at the same lithographic process of 3nm, the figures are 50% power savings, 30% performance improvement, and 45% area reduction."

6) More efficient applied materials process
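Taking the Samsung GAAFET estimates in item 5 at face value, here is a back-of-the-envelope calculation of what they would imply for performance per watt (both figures are the quoted estimates, not measurements):

```python
# Back-of-the-envelope: Samsung's quoted 3nm GAAFET estimates vs. FinFET at
# the same node are 50% power savings and 30% performance improvement.
# If both held simultaneously, performance per watt would improve by:

power_ratio = 0.5   # quoted estimate: 50% power savings
perf_ratio = 1.3    # quoted estimate: 30% performance improvement

perf_per_watt_gain = perf_ratio / power_ratio
print(f"{perf_per_watt_gain:.1f}x performance per watt")  # prints "2.6x performance per watt"
```

In practice vendors usually hit one figure or the other (iso-power performance or iso-performance power), so the combined 2.6x should be read as an upper bound on the estimate, not an expectation.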

 
Will future all-in-one PCs and tablet/laptop PCs finally improve webcams? I am so sick of shitty webcams. Usually the built-in cameras are NOT integrated with the motherboard (the way iPhone and Android phone cameras are); they connect via shitty old USB. But perhaps with USB4 (inside the laptop, connected to the motherboard) we can finally get 4K, DSLR-type video and photos?

The current best webcam is the Opal C1, but it connects externally rather than being built in.

 

tusharngf

Member

Snapdragon X Elite 12-Core CPU Benchmarks Leak Out: On Par With Current-Gen AMD & Intel Chips​

The top Qualcomm Snapdragon X Elite CPU will come in 12-core configurations with a total of 8 high-performance and 4 efficiency-optimized cores based on TSMC's 4nm process node. The clock speeds for the chip will be set at 4.3 GHz across 1-2 cores and 3.8 GHz for all-core while adopting a large 42 MB cache. The chip was leaked out earlier in Geekbench 6 where it showed competitive performance but the latest results give us a look at what we can expect from the final silicon that is heading out around the mid of 2024.​

Qualcomm-Snapdragon-X-Elite-12-Core-CPU-Benchmarks-Leak.png


 

Imtjnotu

Member

Snapdragon X Elite 12-Core CPU Benchmarks Leak Out: On Par With Current-Gen AMD & Intel Chips​

The top Qualcomm Snapdragon X Elite CPU will come in 12-core configurations with a total of 8 high-performance and 4 efficiency-optimized cores based on TSMC's 4nm process node. The clock speeds for the chip will be set at 4.3 GHz across 1-2 cores and 3.8 GHz for all-core while adopting a large 42 MB cache. The chip was leaked out earlier in Geekbench 6 where it showed competitive performance but the latest results give us a look at what we can expect from the final silicon that is heading out around the mid of 2024.​

Qualcomm-Snapdragon-X-Elite-12-Core-CPU-Benchmarks-Leak.png


The wild thing about this is it runs mostly at 7W, with highs up to 15W.

Qualcomm has always had amazing TOPS too, so AI-accelerated tasks are going to fly on this machine with passive cooling.
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
The wild thing about this is it runs mostly at 7W, with highs up to 15W.

Qualcomm has always had amazing TOPS too, so AI-accelerated tasks are going to fly on this machine with passive cooling.

What do the current AMD chips run at?
 

night13x

Member
It looks good on paper but I am interested to see how it does in actual real world examples. Still - more competition is good for everyone.
 

Tams

Member
What do the current AMD chips run at?

15W to 35W or 45W, depending on the BIOS settings.

Below 15W, they don't really see any noticeable increase in efficiency (i.e. you lose more performance than the power you save).

The one in the Steam Deck is down around 10-12W, but it's also Zen 2, not Zen 3.
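If the ~7W sustained figure claimed earlier in the thread holds, and performance really is on par, a rough comparison against the low end of the mobile Ryzen range looks like this (both numbers are just the ones claimed in this thread, unverified):

```python
# Rough comparison using numbers claimed in this thread (unverified):
# the X Elite reportedly sustains ~7W, while mobile Ryzen parts start at ~15W.
# Assuming comparable benchmark scores, sustained power would differ by:

x_elite_sustained_w = 7.0   # claimed sustained draw for the X Elite
ryzen_low_w = 15.0          # low end of the mobile Ryzen range quoted above

ratio = ryzen_low_w / x_elite_sustained_w
print(f"~{ratio:.1f}x lower sustained power for the same (assumed) performance")
```

That ratio is exactly the kind of claim that needs independent battery-life and sustained-load testing before anyone takes it at face value.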
 

magnumpy

Member
Man, Intel has seriously fallen off. And with software emulation, x86 isn't really a reason to pick an architecture anymore; and without x86, Intel is less and less attractive. I hear their foundry technology, specifically chip packaging, is leading edge, but this looks very bad for them. Even AMD CPUs are taking them to the cleaners, with Intel only winning on clock speed, which has additional heat and power consumption downsides :eek:
 