It's 30 FPS, but it's also doing much more visually; it's not a straight-up conversion that simply cut the performance in half.
Original (GameCube):
485 MHz single-core PowerPC "Gekko" CPU + 162 MHz "Flipper" GPU (a pre-pixel-shader design built around fixed-function color combiners), 3 MB of eDRAM, 24 MB of main RAM (low-latency 1T-SRAM), plus another very low-bandwidth 16 MB DRAM pool (ARAM). The game is rendered at 480i@60 Hz… let's assume the effective load on the GPU is 240p@60 Hz to make the comparison even more lenient.
Remaster (Switch):
4 Cortex-A57 cores running at 1.02 GHz in handheld mode (up to ~1.7 GHz in boost mode when docked) + a 307.2 MHz GPU in handheld mode (up to 460 MHz in handheld boost mode, and 768 MHz docked) + 4 GB of RAM.
Now, the game would be essentially the same CPU-load-wise, so no contest there. Resolution-wise, even assuming Nintendo was doing PS2-style field rendering on GameCube (the GPU only renders the 240 odd lines of one frame, then the 240 even lines of the next, and so on… if the framerate drops even a single frame below 60 Hz, the game looks noticeably more aliased and lower resolution), the remaster would be roughly a 3x jump in pixel throughput… with a much, much faster GPU to boot.
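As a sanity check on that 3x figure, here's the pixel-throughput arithmetic — a minimal sketch, assuming the remaster targets a 720p framebuffer at 30 FPS in handheld mode (the output resolution is my assumption for illustration, not a confirmed spec):

```python
# Rough pixel-throughput comparison (all figures approximate).
# GameCube with field rendering: 640x240 per field, 60 fields/s.
gcn_pixels_per_sec = 640 * 240 * 60        # ~9.2 Mpixels/s

# Remaster: assumed 720p handheld framebuffer at 30 FPS.
switch_pixels_per_sec = 1280 * 720 * 30    # ~27.6 Mpixels/s

print(switch_pixels_per_sec / gcn_pixels_per_sec)  # -> 3.0
```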
GCN's (quality) pixel fillrate was about 648 MPixels/s, while Switch should be closer to 4.9 GPixels/s undocked at the base GPU clock of 307.2 MHz (and well over 12 GPixels/s docked). The GPU can also reach 460 MHz undocked in boost mode, which is quite a jump for such a parallel system.
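Those fillrate figures are just clock × pixels-per-clock: Flipper writes 4 pixels per clock, and the Switch's Maxwell GPU (GM20B) has 16 ROPs. A quick sketch of the arithmetic:

```python
# Peak pixel fillrate = pixels per clock (ROPs) * clock speed.
gcn_fillrate    = 4  * 162.0e6   # Flipper: 4 px/clk -> ~648 Mpixels/s
switch_handheld = 16 * 307.2e6   # GM20B, base clock -> ~4.9 Gpixels/s
switch_docked   = 16 * 768.0e6   # GM20B, docked clock -> ~12.3 Gpixels/s

for name, rate in [("GCN", gcn_fillrate),
                   ("Switch handheld", switch_handheld),
                   ("Switch docked", switch_docked)]:
    print(f"{name}: {rate / 1e9:.2f} Gpixels/s")
```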
While FLOPS are not everything, we are talking about ~9 GFLOPS vs ~157 GFLOPS, assuming the base undocked clocks.
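The ~157 GFLOPS number falls out of the standard shader arithmetic for the Tegra X1's GPU (256 CUDA cores, 2 FLOPs per core per cycle from a fused multiply-add); the GameCube's ~9 GFLOPS is the commonly quoted figure for its fixed-function pipeline, so it isn't derived the same way:

```python
# FP32 throughput = CUDA cores * 2 FLOPs/cycle (fused multiply-add) * clock.
cuda_cores = 256         # Maxwell GM20B in the Tegra X1
base_clock = 307.2e6     # undocked base GPU clock, in Hz
switch_flops = cuda_cores * 2 * base_clock
print(f"{switch_flops / 1e9:.1f} GFLOPS")  # -> 157.3, roughly 17x GCN's ~9
```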