[Nikkei/Rumor] Next Switch development is "progressing well", due in 2024 [UPDATE: VGC/Eurogamer share more details]

What would be the best time for a new Switch 2024 launch?

  • April-May 2024: 158 votes (42.8%)
  • June-September 2024: 77 votes (20.9%)
  • October-November 2024: 134 votes (36.3%)
  • Total voters: 369
One thing to keep in mind is that what you will see on reveal day is what you will get for the next seven years. The BotW -> TotK transition is a perfect example of that.

So the jump in visuals and playability better be impressive.
I doubt this is the case, because BOTW was a Wii U game and Switch was just a slight upgrade over that hardware (I think the jump is like PS4 to PS4 Pro), while early big Switch 2 games will probably be cross-gen with Switch 1, so there will be some limitations because of that. I think this will be more of a PS5/Series X situation, where it takes a couple of years to get first-party games that take full advantage of the hardware. Of course, the difference is a lot smaller than it used to be; gone are the PS3 days, when early vs. late games on the same console seemed almost a full console generation apart.
 
One thing to keep in mind is that what you will see on reveal day is what you will get for the next seven years. The BotW -> TotK transition is a perfect example of that.

So the jump in visuals and playability better be impressive.

Usually, we don’t see the best graphics on a console until a couple of years into the cycle, once publishers have learned to max out the hardware. BOTW -> TOTK isn’t a perfect example, in my view, as it was an iterative sequel utilizing the same engine developed for the Wii U.

Switch 2 will have a slew of cross-gen games for a good long while, too, so outside Nintendo, I don’t expect publishers to be pushing the new system to its limit.

Not that anyone looking to buy a Nintendo Console day one is going to be super focused on graphics anyway.
 
One thing to keep in mind is that what you will see on reveal day is what you will get for the next seven years. The BotW -> TotK transition is a perfect example of that.

So the jump in visuals and playability better be impressive.

Well, the jump from this
[image: screen-lrg-2.jpg]

to this was actually pretty impressive for a portable device
[image: 05kV0iHXtm9IkEu1PMkh9pX-4..v1569481406.jpg]

Nowadays, even imagining a strong jump going by this
[image: DMoLvj9UQAAOQds.jpg]

to this
[image: RKDNhBbH8eGWdqlurhFJFTY5.jpg]

my personal opinion is that it won't be as effective as the Switch was compared to the 3DS.
Of course, the "trick" Nintendo pulled was a very well managed one: putting all their eggs in one basket and migrating ALL of their fanbase (which has always been bigger on portable devices than on home consoles) onto a hybrid system.

If you look at it from a (visual) Wii U-to-Switch point of view, the topic changes a lot.
 
Yes, AMD is behind Nvidia for RT (and likely always will be, so long as they adhere to the DirectX standard), but that's not what I meant. I just meant the ray accelerators in both are pretty substandard and badly implemented; waiting two years for RDNA 3 probably would've boosted things dramatically due to the architectural changes (which I expect the PS5 Pro will get).
I look forward to whoever can improve on ray accelerators, but it might have to come through accelerating more than just BVH traversal (which only Intel and Nvidia do). Intersection testing probably can't be improved all that much.
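For anyone who wants a concrete picture of what "BVH traversal" and "intersection testing" actually are, here is a toy, purely illustrative Python sketch of the per-ray workflow being discussed (not any vendor's real code; every structure in it is made up for illustration): a stack-based walk of a bounding-volume hierarchy that calls box and triangle intersection tests. Roughly speaking, Nvidia's RT cores run both the walk and the tests in dedicated hardware, while AMD's ray accelerators handle the intersection tests but leave the traversal loop to ordinary shader code, which is the hand-off this discussion is about.

```python
# Toy model of the per-ray ray tracing workflow: BVH traversal + intersection tests.
# Purely illustrative CPU-side Python, not vendor code. Comments note which stage
# each vendor accelerates in dedicated hardware.
from dataclasses import dataclass
from typing import Optional

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

@dataclass
class BVHNode:
    lo: tuple                      # bounding-box min corner
    hi: tuple                      # bounding-box max corner
    left: Optional["BVHNode"] = None
    right: Optional["BVHNode"] = None
    triangles: tuple = ()          # non-empty only for leaf nodes

def hit_aabb(origin, inv_dir, node):
    """Ray vs. bounding-box slab test (intersection testing: hardware on both vendors)."""
    tmin, tmax = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, node.lo, node.hi):
        t0, t1 = (lo - o) * inv, (hi - o) * inv
        tmin, tmax = max(tmin, min(t0, t1)), min(tmax, max(t0, t1))
    return tmin <= tmax

def hit_triangle(origin, direction, tri, eps=1e-7):
    """Möller–Trumbore ray/triangle test (intersection testing: hardware on both vendors)."""
    v0, v1, v2 = tri
    e1 = tuple(b - a for a, b in zip(v0, v1))
    e2 = tuple(b - a for a, b in zip(v0, v2))
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None
    inv_det = 1.0 / det
    s = tuple(o - a for o, a in zip(origin, v0))
    u = dot(s, p) * inv_det
    if u < 0 or u > 1:
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv_det
    if v < 0 or u + v > 1:
        return None
    t = dot(e2, q) * inv_det
    return t if t > eps else None

def trace(origin, direction, root):
    """BVH traversal loop (RT cores on Nvidia; plain shader code on current RDNA)."""
    inv_dir = tuple(1.0 / d if d != 0 else float("inf") for d in direction)
    closest, stack = None, [root]
    while stack:
        node = stack.pop()
        if not hit_aabb(origin, inv_dir, node):
            continue
        if node.triangles:                                  # leaf: test the geometry
            for tri in node.triangles:
                t = hit_triangle(origin, direction, tri)
                if t is not None and (closest is None or t < closest):
                    closest = t
        else:                                               # inner node: keep walking
            stack.extend(n for n in (node.left, node.right) if n)
    return closest

# Tiny usage: a single triangle in one leaf node, hit at distance ~5.
tri = ((0, 0, 5), (1, 0, 5), (0, 1, 5))
root = BVHNode(lo=(0, 0, 5), hi=(1, 1, 5), triangles=(tri,))
print(trace((0.1, 0.1, 0.0), (0.0, 0.0, 1.0), root))        # -> 5.0
```

The point of the sketch is just that "ray tracing hardware" is really two separable jobs, which is why accelerating traversal and accelerating intersection testing can be discussed independently.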

One thing to keep in mind is that what you will see on reveal day is what you will get for the next seven years. The BotW -> TotK transition is a perfect example of that.

So the jump in visuals and playability better be impressive.
Ratchet and Clank Rift Apart graphics on a Nintendo handheld could be that

 
Usually, we don’t see the best graphics on a console until a couple of years into the cycle, once publishers have learned to max out the hardware. BOTW -> TOTK isn’t a perfect example, in my view, as it was an iterative sequel utilizing the same engine developed for the Wii U.
Your argument could be used to reach another conclusion: if TotK was built on the same engine, then that would have allowed the development team to spend comparatively more time polishing the visuals.

Based on experience, whatever the Switch 2's output is, it'd better deliver on Day 1. It will, mind you, but I hope it will be enough to carry us to 2030 without feeling long in the tooth.
 
Your argument could be used to reach another conclusion: if TotK was built on the same engine, then that would have allowed the development team to spend comparatively more time polishing the visuals.

Based on experience, whatever the Switch 2's output is, it'd better deliver on Day 1. It will, mind you, but I hope it will be enough to carry us to 2030 without feeling long in the tooth.


TotK actually has more polished visuals
And frame rate
And physics

They spent A LOT of optimization effort on making it work flawlessly

But I get your point

Thing is: Nintendo always focuses hardware resources on aspects other than graphics
 
Your argument could be used to reach another conclusion: if TotK was built on the same engine, then that would have allowed the development team to spend comparatively more time polishing the visuals.

Based on experience, whatever the Switch 2's output is, it'd better deliver on Day 1. It will, mind you, but I hope it will be enough to carry us to 2030 without feeling long in the tooth.
The visuals and performance are very much improved. I just don’t think either game was designed to be a graphical showcase. You can get a better feel for the generational differences by looking at other first-party titles.

I don‘t really see how something that looks great in 2024 can avoid looking long in the tooth in 2030. At least on the TV. They can always make screen improvements like they did with the OLED.
 
The physics systems in TotK with Ultra Hand in particular wouldn't have been possible on the Wii U. Some of the devs in an interview also said that they wanted to have huge underground sections in BotW already but the hardware prevented that.
That's where the main generational differences are.
 
Since it was brought up:

The main difference between AMD and Nvidia’s ray tracing solutions, based on my reading, is that Nvidia feeds the entire ray tracing workflow into the RT cores, whereas AMD's ray tracing acceleration only computes a piece of it in hardware and passes the rest of the workflow to the general-purpose shader cores, which can bottleneck the RT workflow. That’s why Sony is considering a dedicated RTU chip unless AMD changes their hardware acceleration method.
You're reading far more into my comments than you need to.

And you’re also pigeonholing Nintendo's options when they still have a fuckton more. Like, why use FSR1 when they could use FSR2? It's because, as you even said, they don't want to use more cycles than they have to. EPD also might prefer a raw image to the softer image TAA solutions generate. Besides, the biggest reason I don't see them using DLSS is that it's not an open product. If Nintendo needs to enact nuclear options, then they lose access to DLSS. It would make sense for them to spin up their own solution through a team like NERD, which is already doing so in the offline space.
I’m not sure how anyone could read something different into your comments when the comment was so broad. “I don’t think Nintendo will use DLSS” is just way too broad and includes no nuance or context; there’s only one way to read that, especially when DLSS is already all-but-confirmed as part of NVN2.

A raw image at 4K simply is not going to be possible with the hardware proposed. And they’re not using FSR 2 likely because the Maxwell GPU in Switch does not meet FSR 2’s minimum hardware requirements (the lowest-spec Nvidia graphics cards I’ve seen supported by FSR 2 are on the Pascal architecture, the gen above Maxwell).

And the thing is that AI upsampling appears to be a technology that is here to stay in some fashion or another, when even the newest graphics card market entrant (Intel) is opting to include it in their own graphics cards.

The real concern with DLSS is the closed nature of the neural network it relies on, but it seems that Nintendo has already patented a workaround, one that puts the neural network used in image generation under their own purview rather than Nvidia’s: neural networks trained specifically for each game, genre or series in question (in lieu of the broader, general-purpose neural network Nvidia uses); neural network updates distributed within the game package itself rather than through firmware/driver updates; further training via frame data collected passively on consumer devices during play, across the hundreds of millions of devices they intend to sell; and letting games specify the neural network they intend to use to process the upsampled output (meaning 3rd parties could opt to use Nvidia’s neural network, if they so chose).

It still uses DLSS Super Resolution, but changes it to permit multiple neural network options, which lets Nintendo lock down all the important training data under their own purview. That data is then potentially transferable (ensuring backward compatibility/future emulation) if they opt to walk away from Nvidia in the future and go for a different GPU that includes tensor math accelerators, as is seemingly the future of GPUs for the mid-to-long term. And if this patent happens to prevent any other device maker from leveraging DLSS/XeSS/etc. in this fashion without paying them to license that patent? So much the better for Nintendo, I guess.

A fully bespoke AI upsampling solution does not make much sense; they don’t have to reinvent the wheel to protect themselves in the event of a change in SoC supplier.
Also the super resolution side can already make a game run smoother.
Well… can, yes, but also not necessarily? On the PC end, it’s billed as a method to increase frames per second because that’s what benefits PC gaming more often than not, but DLSS Super Resolution more simply reduces GPU workload per frame to generate a 4K image. How a developer intends to use the compute power freed up on the GPU by not rendering in native 4K is entirely up to said developer and what else they may wish to prioritize. For a dedicated mobile game device running at lower clocks than a typical PC hardware setup, that could mean prioritizing any number of other things over framerate.
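To put rough numbers on that trade-off, here's a back-of-the-envelope sketch. Every figure in it is an invented illustration (the 30 fps native target, the 2 ms reconstruction cost, the assumption that shading scales linearly with pixel count), not a measurement of any real GPU or of Switch 2:

```python
# Back-of-the-envelope frame-budget math for upsampling.
# All numbers are invented for illustration; real costs are messier
# (pixel cost isn't perfectly linear, and the upscale pass itself varies).

def pixels(w, h):
    return w * h

NATIVE_4K    = pixels(3840, 2160)
INTERNAL     = pixels(1920, 1080)   # assumed internal render resolution
NATIVE_COST  = 33.3                 # ms per frame at native 4K (assumed 30 fps target)
UPSCALE_COST = 2.0                  # ms assumed for the reconstruction pass itself

# If shading cost scales roughly with pixel count:
internal_cost = NATIVE_COST * INTERNAL / NATIVE_4K
frame_time    = internal_cost + UPSCALE_COST
saved         = NATIVE_COST - frame_time

print(f"internal render: {internal_cost:.1f} ms/frame")
print(f"with upscale:    {frame_time:.1f} ms/frame (~{1000 / frame_time:.0f} fps if all spent on framerate)")
print(f"headroom:        {saved:.1f} ms/frame to spend on framerate, detail, or lower clocks")
```

Whether that reclaimed headroom becomes higher framerate, heavier shading, or simply lower clocks and better battery life is exactly the developer-side choice described above.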
 
Besides, the biggest reason I don't see them using DLSS is that it's not an open product. If Nintendo needs to enact nuclear options, then they lose access to DLSS. It would make sense for them to spin up their own solution through a team like NERD, which is already doing so in the offline space.
Nintendo uses chips with specific features all the time in their handhelds and consoles.

As long as Nintendo uses Nvidia chips, they can use DLSS. If they decide to use a GPU from another company, they will use other upscaling techniques.

DLSS is hardware accelerated; it is faster than any software implementation.
 
Your argument could be used to reach another conclusion: if TotK was built on the same engine, then that would have allowed the development team to spend comparatively more time polishing the visuals.

Based on experience, whatever the Switch 2's output is, it'd better deliver on Day 1. It will, mind you, but I hope it will be enough to carry us to 2030 without feeling long in the tooth.
I think anything beyond PS4 is when most games start looking 'good enough', and when development resources, not the hardware's power, start becoming the limiting factor in a game's visuals.
 
Nintendo uses chips with specific features all the time in their handhelds and consoles.

As long as Nintendo uses Nvidia chips, they can use DLSS. If they decide to use a GPU from another company, they will use other upscaling techniques.

DLSS is hardware accelerated; it is faster than any software implementation.
That's because the Tensor cores come with the architecture; they can't get one without the other, and it's part of the selling point of getting Nintendo to use it. I just think EPD will continue their lack of AA and instead keep going with high-resolution raw pixel output, until I'm proven incorrect, of course. DLSS will be there for everyone else, from Monolith Soft to the EPD teams that work with external companies, like they did with Pikmin.
 
Since it was brought up:

The main difference between AMD and Nvidia’s ray tracing solution based on my reading is that Nvidia feeds the entire ray tracing workflow into the RT cores, whereas AMD ray tracing acceleration only computes a piece of it in its hardware acceleration method and passes the rest of the workflow to the shader modules, which can bottleneck the RT workflow. That’s why Sony is considering a dedicated RTU chip unless AMD changes their hardware acceleration method.

I’m not sure how anyone could read something different into your comments when the comment was so broad. “I don’t think Nintendo will use DLSS” is just way too broad and includes no nuance or context; there’s only one way to read that, especially when DLSS is already all-but-confirmed as part of NVN2.

A raw image at 4K simply is not going to be possible with the hardware proposed. And they’re not using FSR 2 likely because the Maxwell GPU in Switch does not meet FSR 2’s minimum hardware requirements (the lowest-spec Nvidia graphics cards I’ve seen supported by FSR 2 are on the Pascal architecture, the gen above Maxwell).

And the thing is that AI upsampling appears to be a technology that is here to stay in some fashion or another, when even the newest graphics card market entrant (Intel) is opting to include it in their own graphics cards.

The real concern with DLSS is the closed nature of the neural network it relies on, but it seems that Nintendo has already patented a workaround, one that puts the neural network used in image generation under their own purview rather than Nvidia’s: neural networks trained specifically for each game, genre or series in question (in lieu of the broader, general-purpose neural network Nvidia uses); neural network updates distributed within the game package itself rather than through firmware/driver updates; further training via frame data collected passively on consumer devices during play, across the hundreds of millions of devices they intend to sell; and letting games specify the neural network they intend to use to process the upsampled output (meaning 3rd parties could opt to use Nvidia’s neural network, if they so chose).

It still uses DLSS Super Resolution, but changes it to permit multiple neural network options, which lets Nintendo lock down all the important training data under their own purview. That data is then potentially transferable (ensuring backward compatibility/future emulation) if they opt to walk away from Nvidia in the future and go for a different GPU that includes tensor math accelerators, as is seemingly the future of GPUs for the mid-to-long term. And if this patent happens to prevent any other device maker from leveraging DLSS/XeSS/etc. in this fashion without paying them to license that patent? So much the better for Nintendo, I guess.

A fully bespoke AI upsampling solution does not make much sense; they don’t have to reinvent the wheel to protect themselves in the event of a change in SoC supplier.

Well… can, yes, but also not necessarily? On the PC end, it’s billed as a method to increase frames per second because that’s what benefits PC gaming more often than not, but DLSS Super Resolution more simply reduces GPU workload per frame to generate a 4K image. How a developer intends to use the compute power freed up on the GPU by not rendering in native 4K is entirely up to said developer and what else they may wish to prioritize. For a dedicated mobile game device running at lower clocks than a typical PC hardware setup, that could mean prioritizing any number of other things over framerate.
I'm not terribly familiar with all of this, but do newer Nvidia chips not have tensor math acceleration?
 
I suspect Nintendo probably isn't too worried about their internal software using or not using some Nvidia proprietary technologies; by the time they would need to worry about getting the software to run on another platform, there will likely be many workarounds.
 
I'm not terribly familiar with all of this, but do newer Nvidia chips not have tensor math acceleration?
they have dedicated silicon for it

I suspect Nintendo probably isn't too worried about their internal software using or not using some Nvidia proprietary technologies; by the time they would need to worry about getting the software to run on another platform, there will likely be many workarounds.
they're capable of making their own since NERD has already accomplished some of that work
 
I'm not terribly familiar with all of this, but do newer Nvidia chips not have tensor math acceleration?
So, first off, just to clear the air because I recognize I'm using technical words a bit loosely:

"Tensor math" is a just a simplified phrase I picked up along the way to refer to "the necessary math used by machine learning tools". What it (roughly) means in reality is the manipulation of tensors, which are (again, roughly) arrays of matrices. Real hardcore algebra shit no one outside a handful of professionals needs to know more about (and I sure didn't need to know even that much, but here we are). "Tensor math" is just how I've come to describe it, others describe it as "matrix calculations" or lord knows what other mathematical jargon that could be used to talk about roughly the same bloody thing. So, for any pedants out there, if I or anyone else like me uses "tensor math" incorrectly, don't be too harsh about it, we're not all MIT grads here. But with that out of the way...

All Nvidia RTX GPUs since the GeForce RTX 20 series have featured "Tensor cores", Nvidia's tensor math hardware accelerators. As Nvidia has iterated on their GPU architecture, their Tensor cores have become more and more capable (won't bore anyone with the how), with their Ampere GPU architecture currently featuring the 3rd generation of them. For clarity, the T239 being developed for Nintendo uses the Ampere architecture and is 100% expected to include Tensor cores.

And as I mentioned, they are not alone, Intel's big swing at the GPU market, the Arc series released last year, features a similar hardware acceleration method that they call "XMX cores" (short for Xe Matrix eXtension cores). It's roughly doing the same thing as a Tensor core, but slightly differently (and seemingly better, not bad for a first swing by Intel there).

So 1 of the 2 big GPU makers and the newest GPU market entrant are both betting big on hardware acceleration for AI image upsampling (and ray tracing). It's a tech advance that's not going away and we should largely expect AMD to move in that general direction... eventually.
I suspect Nintendo probably isn't too worried about their internal software using or not using some Nvidia proprietary technologies; by the time they would need to worry about getting the software to run on another platform, there will likely be many workarounds.
There is going to be constant iteration and variation on the idea of AI image upsampling, we can't even be guaranteed Nvidia will keep calculating things at the hardware level the way it does now even 5 years from now. The most important bit of the technology (and the way to keep things hardware-agnostic) is retaining a separate "neural network", or specifically all the data used to properly train it. And Nintendo seems pretty geared up to go in that direction if that patent I remembered is any indication.

Let's be clear: "DLSS" doesn't mean anything and cannot do anything meaningfully without the "neural network" that is fed into it. And Nintendo is not and never has been confined to using Nvidia's neural network. And if it does use its own, it's still technically "DLSS", because it's using the exact same tools (Tensor cores) and the same method with a different data set that it controls.

Basically, how this works is you give a supercomputer with similar but more powerful capability to perform specific math equations (in this case, tensor math) a series of low-res images, then give it matching high quality "target images" that you want the low-res image to look like, and it is tasked with finding the most efficient way mathematically to transform the low-res image into the target image. Now do that ad infinitum, pick out the methods that produced the results closest to the target images, and repeat with slight variations on the successful methods to try and improve the result. Then do that with thousands upon thousands (perhaps upon millions) of low-res and target images.
To describe it a bit like CGP Grey does but in context, a supercomputer is doing millions of practice reproductions from worse-quality images with a bunch of variations of "artist bots". What it spits out is a "neural network", which is basically a bot or bots with the computer equivalent of muscle memory and pattern recognition, selecting only those bots that created images that near-flawlessly resembled the specified target images given to it in the least amount of time. The more you train, the better the bots. It's more involved than that, but you get the idea.

"DLSS" is taking a "neural network" created using Tensor cores in a supercomputer environment over time into an on-the-fly reproduction, creating a brand-new image only from similar-but-likely-different new low-res images, in a much more time-constrained environment but using the same tools (in Nvidia's case, Tensor cores on a lower-scale GPU) and likely getting the best possible replication of what the "target image" would have been if it had been created beforehand.

But without that first step of creating the "neural network", you could never achieve the frame upsampling in a tiny fraction of a second that DLSS provides. So long as you have all of the data used to train a "neural network" on upsampling using Tensor cores and you retain control of the most efficient training methods to achieve the desired result, you can re-create that "neural network" for ANY purpose-built math accelerator like Intel's XMX cores or whatever comes next with very little fuss (by replacing some server blades and re-training a new one with the same data, basically).
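If it helps to see the "train the artist bots, then use them live" idea in code, here is a bare-bones sketch of generic supervised super-resolution training in PyTorch: low-res frames in, known high-res "target images" as the answer key, weights nudged toward whatever reproduces the targets best, then the trained network applied to a frame it has never seen. This is the textbook version of the concept, not Nvidia's or Nintendo's actual pipeline, and the tiny network and random data are stand-ins:

```python
# Minimal, generic sketch of training an image-upscaling network.
# This is the textbook supervised-learning loop, not any vendor's real pipeline.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """Stand-in network: learn features at low res, then upsample 2x."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * 4, 3, padding=1),   # 4 = 2x2 upscale factor
            nn.PixelShuffle(2),                   # rearrange channels into a 2x larger image
        )

    def forward(self, low_res):
        return self.net(low_res)

model = TinyUpscaler()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()   # "how far is the guess from the target image?"

def training_step(low_res_batch, target_batch):
    """One pass of: guess -> compare to target -> nudge the weights."""
    optimizer.zero_grad()
    guess = model(low_res_batch)
    loss = loss_fn(guess, target_batch)
    loss.backward()          # figure out which nudges reduce the error
    optimizer.step()         # apply them
    return loss.item()

# Fake data standing in for (low-res frame, matching high-res target) pairs.
low  = torch.rand(8, 3, 64, 64)      # batch of 8 RGB frames at 64x64
high = torch.rand(8, 3, 128, 128)    # the 2x "target images"
for epoch in range(3):
    print(f"epoch {epoch}: loss {training_step(low, high):.4f}")

# Once trained, the "neural network" is used live on frames it has never seen
# (the DLSS-style inference step), which is the comparatively cheap part:
with torch.no_grad():
    new_frame = torch.rand(1, 3, 64, 64)
    upscaled = model(new_frame)          # shape (1, 3, 128, 128)
```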

So long story short, you're right, even if Nintendo just used Nvidia's "neural network" for its own games, it's not the big problem it's being made out to be because they can create their own at any time, but they're still using DLSS, be it with their own bespoke "neural network" or the game-agnostic one Nvidia has been creating over the past several years. Their own "neural network" would just make any hypothetical future hardware transition to a new SoC manufacturer so much easier from a backwards compatibility/emulation standpoint. DLSS/Tensor cores will be the least of Nintendo's concerns in that regard, quite frankly.
 
So, first off, just to clear the air because I recognize I'm using technical words a bit loosely:

"Tensor math" is a just a simplified phrase I picked up along the way to refer to "the necessary math used by machine learning tools". What it (roughly) means in reality is the manipulation of tensors, which are (again, roughly) arrays of matrices. Real hardcore algebra shit no one outside a handful of professionals needs to know more about (and I sure didn't need to know even that much, but here we are). "Tensor math" is just how I've come to describe it, others describe it as "matrix calculations" or lord knows what other mathematical jargon that could be used to talk about roughly the same bloody thing. So, for any pedants out there, if I or anyone else like me uses "tensor math" incorrectly, don't be too harsh about it, we're not all MIT grads here. But with that out of the way...

All Nvidia RTX GPUs since the GeForce RTX 20 series have featured "Tensor cores", Nvidia's tensor math hardware accelerators. As Nvidia has iterated on their GPU architecture, their Tensor cores have become more and more capable (won't bore anyone with the how), with their Ampere GPU architecture currently featuring the 3rd generation of them. For clarity, the T239 being developed for Nintendo uses the Ampere architecture and is 100% expected to include Tensor cores.

And as I mentioned, they are not alone, Intel's big swing at the GPU market, the Arc series released last year, features a similar hardware acceleration method that they call "XMX cores" (short for Xe Matrix eXtension cores). It's roughly doing the same thing as a Tensor core, but slightly differently (and seemingly better, not bad for a first swing by Intel there).

So 1 of the 2 big GPU makers and the newest GPU market entrant are both betting big on hardware acceleration for AI image upsampling (and ray tracing). It's a tech advance that's not going away and we should largely expect AMD to move in that general direction... eventually.

There is going to be constant iteration and variation on the idea of AI image upsampling, we can't even be guaranteed Nvidia will keep calculating things at the hardware level the way it does now even 5 years from now. The most important bit of the technology (and the way to keep things hardware-agnostic) is retaining a separate "neural network", or specifically all the data used to properly train it. And Nintendo seems pretty geared up to go in that direction if that patent I remembered is any indication.

Let's be clear: "DLSS" doesn't mean anything and cannot do anything meaningfully without the "neural network" that is fed into it. And Nintendo is not and never has been confined to using Nvidia's neural network. And if it does use its own, it's still technically "DLSS", because it's using the exact same tools (Tensor cores) and the same method with a different data set that it controls.

Basically, how this works is you give a supercomputer with similar but more powerful capability to perform specific math equations (in this case, tensor math) a series of low-res images, then give it matching high quality "target images" that you want the low-res image to look like, and it is tasked with finding the most efficient way mathematically to transform the low-res image into the target image. Now do that ad infinitum, pick out the methods that produced the results closest to the target images, and repeat with slight variations on the successful methods to try and improve the result. Then do that with thousands upon thousands (perhaps upon millions) of low-res and target images.
To describe it a bit like CGP Grey does but in context, a supercomputer is doing millions of practice reproductions from worse-quality images with a bunch of variations of "artist bots". What it spits out is a "neural network", which is basically a bot or bots with the computer equivalent of muscle memory and pattern recognition, selecting only those bots that created images that near-flawlessly resembled the specified target images given to it in the least amount of time. The more you train, the better the bots. It's more involved than that, but you get the idea.

"DLSS" is taking a "neural network" created using Tensor cores in a supercomputer environment over time into an on-the-fly reproduction, creating a brand-new image only from similar-but-likely-different new low-res images, in a much more time-constrained environment but using the same tools (in Nvidia's case, Tensor cores on a lower-scale GPU) and likely getting the best possible replication of what the "target image" would have been if it had been created beforehand.

But without that first step of creating the "neural network", you could never achieve the frame upsampling in a tiny fraction of a second that DLSS provides. So long as you have all of the data used to train a "neural network" on upsampling using Tensor cores and you retain control of the most efficient training methods to achieve the desired result, you can re-create that "neural network" for ANY purpose-built math accelerator like Intel's XMX cores or whatever comes next with very little fuss (by replacing some server blades and re-training a new one with the same data, basically).

So long story short, you're right, even if Nintendo just used Nvidia's "neural network" for its own games, it's not the big problem it's being made out to be because they can create their own at any time, but they're still using DLSS, be it with their own bespoke "neural network" or the game-agnostic one Nvidia has been creating over the past several years. Their own "neural network" would just make any hypothetical future hardware transition to a new SoC manufacturer so much easier from a backwards compatibility standpoint.
Speak to me like someone with no tech knowledge. On the next Nintendo console, will it be possible to play, let's say, Final Fantasy 7 Remake, which came out on PS4? Or will the console be less powerful?
 
Speak to me like someone with no tech knowledge. On the next Nintendo console, will it be possible to play, let's say, Final Fantasy 7 Remake, which came out on PS4? Or will the console be less powerful?
Anything that released on PS4 should be portable, tech-wise, outside of PSVR stuff. As for PS5/XSX stuff, Switch 2 will be below Series S but should be able to compete with it thanks to superior RAM and DLSS (it will also have better RT, but for both consoles RT won't really be a thing). Switch 2 will be closer to XSS than Switch was to the base One, which should make ports easier, but I'd still expect a significant number of AAA games to skip it, though fewer than on Switch (plus Switch 2 comes off of Switch 1's success, so more early support).
 
Speak to me like someone with no tech knowledge. On the next Nintendo console, will it be possible to play, let's say, Final Fantasy 7 Remake, which came out on PS4? Or will the console be less powerful?
It will depend on clocks and memory bandwidth but most estimates come in somewhere around the last gen midcycle refreshes (PS4 Pro, One X) for general docked capability. Then it'll have hardware accelerated AI reconstruction and ray-tracing on top which could help it punch up closer to current gen. Series S lowering the bar a bit further (relatively) helps too but storage may end up the bigger differentiator (or not).

Basically anything that could run on PS4 will almost certainly run (better) on Switch 2, and most things that can run on Series S can probably be downported depending on Switch 2's approach to storage.
 
Speak to me like someone with no tech knowledge. On the next Nintendo console, will it be possible to play, let's say, Final Fantasy 7 Remake, which came out on PS4? Or will the console be less powerful?
Yes. It will play games designed to PS4 specification. What we were discussing is just the specifics of an included technology that can take that PS4-quality game the new hardware can do and make it look like it was rendered in 4K/2160p while docked using AI witchcraft.
 
Power-wise, the most interesting game to watch for is GTA6. If the rumors of GTA6 releasing on PS4 end up being true, there won't be ANY PS5 game that anyone can say "this won't be possible on Switch 2" about. At that point it'd all be a matter of "don't want", and not "cannot".
 
Power-wise, the most interesting game to watch for is GTA6. If the rumors of GTA6 releasing on PS4 end up being true, there won't be ANY PS5 game that anyone can say "this won't be possible on Switch 2" about. At that point it'd all be a matter of "don't want", and not "cannot".
Something could still waylay a port though. Like storage issues killing the early GTAV port on Switch.
 
Something could still waylay a port though. Like storage issues killing the early GTAV port on Switch.
If Nintendo wants GTA6 that badly, they will get it on their system. Storage won't be as much of a problem if Rockstar splits the game up and offers one portion as a download, like they now do for GTA5.
 
So, first off, just to clear the air because I recognize I'm using technical words a bit loosely:

"Tensor math" is a just a simplified phrase I picked up along the way to refer to "the necessary math used by machine learning tools". What it (roughly) means in reality is the manipulation of tensors, which are (again, roughly) arrays of matrices. Real hardcore algebra shit no one outside a handful of professionals needs to know more about (and I sure didn't need to know even that much, but here we are). "Tensor math" is just how I've come to describe it, others describe it as "matrix calculations" or lord knows what other mathematical jargon that could be used to talk about roughly the same bloody thing. So, for any pedants out there, if I or anyone else like me uses "tensor math" incorrectly, don't be too harsh about it, we're not all MIT grads here. But with that out of the way...
For those who are interested, tensors were first applied outside of a pure abstract math context in Albert Einstein's Theory of General Relativity. They are best described metaphorically as a matrix but made into a cube matrix rather than a square/rectangle matrix*. Then, the values in the rows/columns, z dimension etc. change according to which direction one views it from.

The fact that Einstein saw what was originally a purely abstract and challenging mathematical construct and saw how to apply it to describing the real world is still just stunning to me. That is one reason why his name is still synonymous with genius decades after his death. His earlier work on Special Relativity and the Photoelectric Effect are much more accessible reads on Wikipedia. His solving the Photoelectric Effect is still stunning genius over a century later.

I think it's incredible that Einstein's work paved the way for fancier handheld toys!



(*this cube shape can be generalized to higher order mathematical dimensions similar to how you can take a number then square it then cube it than to the fourth power and fifth power etc. I learned to not try to think about what any of this means in terms of geometry but just follow the math)
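To make the "cube matrix" picture concrete with something runnable (plain NumPy on a CPU, nothing GPU-specific): a rank-3 tensor is just a stack of matrices, and the bread-and-butter operation that Tensor/XMX cores accelerate is this kind of batched multiply-accumulate, done in enormous volume at reduced precision:

```python
# "Tensor math" at its most boiled-down: n-dimensional arrays and batched
# multiply-accumulate. This is just NumPy on the CPU; Tensor/XMX cores exist
# to do huge piles of exactly this multiply-add in hardware, at low precision.
import numpy as np

# A rank-3 tensor: think "a stack of matrices" (the cube described above).
batch = np.random.rand(4, 3, 5)      # 4 matrices, each 3x5
weights = np.random.rand(4, 5, 2)    # 4 matrices, each 5x2

# Batched matrix multiply: one matmul per slice of the stack.
out = np.matmul(batch, weights)      # shape (4, 3, 2)
print(out.shape)

# The same thing spelled out as multiply-accumulate, which is the
# primitive the dedicated silicon performs:
manual = np.zeros((4, 3, 2))
for b in range(4):
    for i in range(3):
        for j in range(2):
            manual[b, i, j] = np.sum(batch[b, i, :] * weights[b, :, j])
print(np.allclose(out, manual))      # True
```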
 
Power-wise, the most interesting game to watch for is GTA6. If the rumors of GTA6 releasing on PS4 end up being true, there won't be ANY PS5 game that anyone can say "this won't be possible on Switch 2" about. At that point it'd all be a matter of "don't want", and not "cannot".

I think that GTA5 could run on Switch, but it isn't on there, so yeah: it could totally be a case of "don't want / not interested in", as opposed to the understandable "cannot run".

Switch 2 could improve over Switch in terms of third parties, but it will still be a Nintendo platform, oriented around a different form factor and business model, whatever happens.
 
I think that GTA5 could run on Switch, but it isn't on there, so yeah: it could totally be a case of "don't want / not interested in", as opposed to the understandable "cannot run".

Switch 2 could improve over Switch in terms of third parties, but it will still be a Nintendo platform, oriented around a different form factor and business model, whatever happens.
One of the big differences on Switch 2 will be (according to rumors) the higher storage. If GTA6 can fit on the internal storage, that'd be a huge point for a port.
 
I think the bigger problem with GTA5 was that, at the time, you downloaded one product, GTA5+GTAO. They've since separated them, so it's two downloads. So that would sort of solve the file size problem.

One of the big differences on Switch 2 will be (according to rumors) the higher storage. If GTA6 can fit on the internal storage, that'd be a huge point for a port.
since leaks stated that dev kits had up to 512GB, that won't be a problem. for now
 
since leaks stated that dev kits had up to 512GB, that won't be a problem. for now
512GB is enough to not be an issue for developers for the whole gen; it's big enough that file size isn't a problem. Of course, at some point it will fall short for consumers in the long run, but it's enough to fit any future game (I don't see 500GB+ games this gen).
 
GTA6 is going to be a heavy live-service game, I presume. If it skips Switch, it will probably be due to business reasons, as in not having a big enough market to continually invest resources in. Supporting both PS5/Xbox and Switch 2 will be a heavy load on developers, since there will be significant extra work. I think the market will be significant, but I could see a situation where there is not enough staff to work on both mobile and high-end platforms.
 
One thing to keep in mind is that what you will see on reveal day is what you will get for the next seven years. The BotW -> TotK transition is a perfect example of that.

So the jump in visuals and playability better be impressive.
TotK is not a good example, since all the spare overhead from optimization is spent on gameplay instead of visuals. The increased RAM goes toward having more objects on screen, and they don't disappear until a much greater distance. The physics system and the NPC assaults take most of the CPU improvement. A BotW made with TotK's level of optimization could look much better visually.
 
Well, the jump from this
[image: screen-lrg-2.jpg]

to this was actually pretty impressive for a portable device
[image: 05kV0iHXtm9IkEu1PMkh9pX-4..v1569481406.jpg]

Nowadays, even imagining a strong jump going by this
[image: DMoLvj9UQAAOQds.jpg]

to this
[image: RKDNhBbH8eGWdqlurhFJFTY5.jpg]

my personal opinion is that it won't be as effective as the Switch was compared to the 3DS.
Of course, the "trick" Nintendo pulled was a very well managed one: putting all their eggs in one basket and migrating ALL of their fanbase (which has always been bigger on portable devices than on home consoles) onto a hybrid system.

If you look at it from a (visual) Wii U-to-Switch point of view, the topic changes a lot.
Xenoblade games on the Switch were held back by low image quality because of the low resolution, but with a simple resolution improvement, even without other post-processing effects, they're going to look closer to TOA, and even more impressive given the difference in scale between the Xeno games' areas and TOA's area sizes.

[image: kODQ1Pi.jpg]
[image: Jt0VCYh.jpg]
[image: MywF8yn.jpg]
 
Xenoblade games on the Switch were held back by low image quality because of the low resolution, but with a simple resolution improvement, even without other post-processing effects, they're going to look closer to TOA, and even more impressive given the difference in scale between the Xeno games' areas and TOA's area sizes.

[image: kODQ1Pi.jpg]
[image: Jt0VCYh.jpg]
[image: MywF8yn.jpg]


Would DLSS push exactly this kind of result/improvement in terms of image quality, based on the rumors?
 
GTA6 is going to be a heavy live-service game, I presume. If it skips Switch, it will probably be due to business reasons, as in not having a big enough market to continually invest resources in. Supporting both PS5/Xbox and Switch 2 will be a heavy load on developers, since there will be significant extra work. I think the market will be significant, but I could see a situation where there is not enough staff to work on both mobile and high-end platforms.
What do you mean by "market"? The Switch platform is massive, larger than the other two. As of September 2022, NSO had 36 million paid subscribers. If you mean the online infrastructure, and how silly Nintendo's execution of it is, then yes, that lies solely on Nintendo's shoulders.

The market is absolutely there.
 
Would DLSS push exactly this kind of result/improvement in terms of image quality, based on the rumors?
Probably even better. Remember that DLSS video where "Control" had even more detail added thanks to DLSS, compared to its native higher resolution?
 
What do you mean by "market"? The Switch platform is massive, larger than the other two. As of September 2022, NSO had 36 million paid subscribers. If you mean the online infrastructure, and how silly Nintendo's execution of it is, then yes, that lies solely on Nintendo's shoulders.

The market is absolutely there.
I think by market they mean one that is very receptive to MTX. Presumably that would be one R* looks at since the money maker is GTAO. We know there is one due to the ABK acquisition which told us about Minecraft. However it remains to be seen if other companies capitalize on establishing themselves there with quality products.
 
Is Drake going to use SD cards, or what Xbox and PS are using?

That is the big question. There is fast internal storage Nintendo can use, but nothing for expandable storage. What Xbox and PS are using is too big and too energy-hungry. The available options are some combination of expensive, energy-hungry, or low-availability. An SD card is the likely option, but that means not being able to leverage the fast internal storage if games can be played off the card. Or it will only be trunk storage, and you will have to move games in and out of it to play them.
 
That is the big question. There is fast internal storage Nintendo can use, but nothing for expandable storage. What Xbox and PS are using is too big and too energy-hungry. The available options are some combination of expensive, energy-hungry, or low-availability. An SD card is the likely option, but that means not being able to leverage the fast internal storage if games can be played off the card. Or it will only be trunk storage, and you will have to move games in and out of it to play them.
Cold storage isn't going to be a thing anymore given how prohibitive it is. Games made for Drake will just have to take into account the read speeds of an SD card, and if those are too prohibitive, then the game won't make it. I don't expect a lot of Ratchet & Clank: Rift Apart-esque game designs, and even then, ports can be tuned to diminish those effects.
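As a rough illustration of why those read speeds shape game design (the throughput figures below are generic ballpark numbers for each class of storage, not specs of any confirmed Switch 2 configuration, and the 5 GB chunk is arbitrary):

```python
# Rough load-time math for pulling in a single 5 GB chunk of assets.
# Throughputs are generic ballpark figures for each storage class,
# NOT specs of any confirmed Switch 2 storage configuration.
CHUNK_GB = 5
tiers_mb_per_s = {
    "microSD card (UHS-I class)": 90,
    "UFS card / fast eMMC class": 500,
    "PC NVMe SSD class":          3500,
}
for name, throughput in tiers_mb_per_s.items():
    seconds = CHUNK_GB * 1024 / throughput
    print(f"{name:<30} ~{seconds:5.1f} s for {CHUNK_GB} GB")
```

That spread between tiers is the gap a port either has to hide behind loading screens, stream around, or rule out entirely.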
 
I think that GTA5 could run on Switch, but it isn't on there, so yeah: it could totally be a case of "don't want / not interested in", as opposed to the understandable "cannot run".

Switch 2 could improve over Switch in terms of third parties, but it will still be a Nintendo platform, oriented around a different form factor and business model, whatever happens.

Physical Cart storage size and price will be the other deciding factor on how many AAA games get ported to Switch 2.
 
Usually, we don’t see the best graphics on a console until a couple of years into the cycle, once publishers have learned to max out the hardware. BOTW -> TOTK isn’t a perfect example, in my view, as it was an iterative sequel utilizing the same engine developed for the Wii U.

Switch 2 will have a slew of cross-gen games for a good long while, too, so outside Nintendo, I don’t expect publishers to be pushing the new system to its limit.

Not that anyone looking to buy a Nintendo Console day one is going to be super focused on graphics anyway.

Also TOTK is actually a solid visual/tech upgrade. Better load times, longer draw distance, visual touch ups, a LOT more going on in the world with all the wild contraptions, even more physics shenanigans at once, sky islands in the distance and more demanding powers, the seamless transition between the sky and surface etc.
 
Also TOTK is actually a solid visual/tech upgrade. Better load times, longer draw distance, visual touch ups, a LOT more going on in the world with all the wild contraptions, even more physics shenanigans at once, sky islands in the distance and more demanding powers, the seamless transition between the sky and surface etc.
And at least for me, in addition to all the above, while BOTW was a battery hog on my Day 1 Switch, ToTK wasn't at all, despite all improvements.
 
That is the big question. There is fast internal storage Nintendo can use, but nothing for expandable storage. What Xbox and PS are using is too big and too energy-hungry. The available options are some combination of expensive, energy-hungry, or low-availability. An SD card is the likely option, but that means not being able to leverage the fast internal storage if games can be played off the card. Or it will only be trunk storage, and you will have to move games in and out of it to play them.
Low availability is a solvable problem for some of the other options, particularly the most cost-effective and energy efficient of the bunch (UFS Card). SanDisk in particular, who have been selling Nintendo-licensed microSD cards, would roughly know what kind of sales volume they could anticipate for external storage on new hardware, so a few conference calls and a signed contract later could get SanDisk to ramp up production on a format with low to no availability because the market for such a product would be provable.
Physical Cart storage size and price will be the other deciding factor on how many AAA games get ported to Switch 2.
There have been some positive developments there, based on Macronix's published technology papers and production roadmap, as discussed over on Famiboards; it's really good news.
 
Cold storage isn't going to be a thing anymore given how prohibitive it is. Games made for Drake will just have to take into account the read speeds of an SD card, and if those are too prohibitive, then the game won't make it. I don't expect a lot of Ratchet & Clank: Rift Apart-esque game designs, and even then, ports can be tuned to diminish those effects.
I agree. Don't see many games relying on super fast storage for their gameplay. That said, I am loving the super fast loading time on current gen games and I don't want to go back. It's torture playing through Lego City Undercover and Red Faction Guerilla on the Switch.

Low availability is a solvable problem for some of the other options, particularly the most cost-effective and energy efficient of the bunch (UFS Card). SanDisk in particular, who have been selling Nintendo-licensed microSD cards, would roughly know what kind of sales volume they could anticipate for external storage on new hardware, so a few conference calls and a signed contract later could get SanDisk to ramp up production on a format with low to no availability because the market for such a product would be provable.
Believe me, I preach the gospel of the UFS card any chance I get. It's energy-efficient due to being designed for mobile devices, its speeds are improving rapidly, and it's very cost-effective due to the large-scale manufacturing of its embedded variant. I can't see a better replacement for the SD card, not just for the Switch but for my PC and laptop as well. With Nintendo's sales volume, they could easily kickstart UFS into a mass consumer format.
 
I don't see UFS cards ever taking off. M.2 2230s are more viable in my eyes. They can be power-limited and still be fast enough.

Sadly, you're probably right, and M.2 drives are pretty great, but they're not really a replacement for SD cards. They're not meant to be carried around and plugged in and out to transfer files between my devices. With how large movies can be, a high-speed option would be great. Portable drives are great, but being able to leave a card in the device until you need it elsewhere is a great benefit.
 
I don't see UFS cards ever taking off. M.2 2230s are more viable in my eyes. They can be power-limited and still be fast enough.
3 times the size in length, width and depth compared to a microSD or UFS card. Packaging them in a way that makes them approachable to an average consumer only adds to their physical size. Not hot-swappable (they cannot be removed while the device is running).
And those are just the top-of-mind reasons why it's a poor replacement, from a consumer-adoption and industrial-design perspective, for the type of device this console is going to be and the people it will sell to.
 
There are USB 3.2 USB-C enclosures for 2230 SSDs.

MicroSD cards have their uses, but they're way too expensive for the storage space and long-term reliability they provide (it's very easy to get data corruption on them).

You can now get 2TB 2230 SSDs for less than the price of a 1TB microSD card.
 
There have been some positive developments there, based on Macronix's published technology papers and production roadmap, as discussed over on Famiboards; it's really good news.
That's good to hear. Switch cart prices and sizes were definitely a negative factor this gen.
 
Eurogamer report

A recent report pinned Switch 2's arrival for the latter part of next year, with development kits now in the hands of some key partners. This chimed with what Eurogamer had also previously heard, though on timing I understand Nintendo is keen to launch the system sooner if possible.

After TGS the floodgates will really open at this pace.
 