The memory limits are starting to bite

Why you run out of memory

32-bit gaming is going to come to an end. Not today, and not tomorrow, but a lot sooner than most people think.

That's because no matter how much memory your PC has, and no matter how much virtual memory you have, a given process on a 32-bit Windows machine gets only 2 gigabytes of address space (if the OS had been better designed, it would have been 4 gigs, but that's another story).

Occasionally you run into people in the forums who say "I got an out of memory error." And for months we couldn't figure it out. We don't have any memory leaks that we know of, and the people who reported it had plenty of virtual memory. So what was the deal?

The problem was a basic misunderstanding of how memory in Windows is managed. We (myself included) thought that each process in Windows only gets 2 gigabytes of memory, but that if it ran out, Windows would simply swap to the disk drive. Thus, if a user had a large enough page file, no problem. But that's not how it works. Once a process has used up its 2 gigabytes of address space, the system simply won't allocate it any more memory, no matter how big the page file is. The allocation fails, and you end up with a crashed game.
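To make that failure mode concrete, here's a minimal sketch (my illustration, not Stardock code) that keeps reserving address space in 100 MB chunks until Windows says no; on stock 32-bit Windows it gives out a little under 2 GB no matter how much RAM or page file the machine has:

```cpp
// Minimal sketch: probe the per-process address space ceiling.
// MEM_RESERVE claims address space without committing RAM or page file,
// showing the 2 GB wall is about addresses, not physical memory.
#include <windows.h>
#include <cstdio>

int main()
{
    const SIZE_T chunk = 100 * 1024 * 1024;  // try 100 MB at a time
    SIZE_T total = 0;

    // On stock 32-bit Windows this loop stops a little under 2 GB,
    // regardless of installed RAM or page file size.
    while (VirtualAlloc(NULL, chunk, MEM_RESERVE, PAGE_NOACCESS) != NULL)
        total += chunk;

    printf("Got %u MB of address space before allocation failed\n",
           (unsigned)(total / (1024 * 1024)));
    return 0;
}
```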

This is a very significant problem. In Galactic Civilizations II v1.7, we'll at least be able to address this with more aggressive deallocation routines (which I really hate having to do -- I prefer to keep something around once it's been loaded, for performance; I've always been a proponent of performance over memory use). But we'll be able to do it here without any noticeable effect on performance.

No, the real problem is in future games. If 2 gigabytes is the limit and a relatively low-impact game like Galactic Civilizations is already running into it (and it's no memory hog), what's coming up next? How about this -- when your video card runs out of memory for textures, it goes to system memory. And I think (but haven't tested this) that the memory it grabs belongs to the game process.

Console game developers would simply laugh at our complaints and say that we just need to get better at de-allocating memory.  But that's only a short-term solution.  Gamers, particularly PC gamers, want to see their games get better looking and more sophisticated. 

So at some point in the next few years, high-end games are going to start requiring 64-bit machines, and serious gamers are going to need them. It'll probably be several years before that becomes common, but it's coming.

The good short-term news for GalCiv players is that we'll be able to have larger galaxies in GalCiv II: Twilight of the Arnor without running out of memory, and users of GalCiv II will be able to avoid running out of memory once they get v1.7.

Reply #1

I'm not sure I would have expected any different from the group that (even if slightly misquoted) originally thought 64K of RAM would be plenty for anyone. Actually, they were probably talking about 640K, but even that wasn't enough.

The size limits and restrictions that Microsoft and Intel have been using have been problematic for years, but we keep going, and going, and going with backwards compatibility and a stubborn refusal to change from what we have used in the past to some whole new system that would require all-new versions of every application we could ever imagine running.

You well know that's a big part of the reason OS/2 (which many would say was the better operating system) was never able to displace Windows. Of course, another part of the reason was that application support for any Windows alternative always lagged, and IBM never wanted to pay enough green-mail to developers to move their future work to OS/2.

If people would work within the model Microsoft used with the Xbox 360 -- where only a handful of selected games (applications) were backwards compatible through an emulation system, and basically everyone else started over from scratch -- then things might be a whole lot better off now. Some would say this approach is exactly what is happening with Vista, where you do see 64-bit support and applications being rewritten to take advantage of that power -- or they work within a 32-bit emulation world where their applications may run, but perform poorly or don't quite work as they should because of security models, hardware access restrictions, etc.

Reply #2
It's good to see an article that illustrates positive thinking and action. It gets tiring to see tirade after tirade about what might have been, could have been, or should have been, usually ending in a self-serving "told you so" -- negative hindsight is an easy skill to achieve. I am more interested in people who resolve issues and get on with life. Yet again, full marks to Stardock; I look forward to v1.7 and ongoing developments.
Reply #3
I second Zydor's compliment, with the exception of preferring good critical thinking to "positive thinking."

I'm guilty of some of the whinging that Zydor mentions, but it is worth knowing why things went wrong as well as what you're going to do next. For example, the key problem Brad points out could well have been avoided by changes to the pacing of development cycles or to the influence that sales and marketing units tend to have over external communications. Because such problems can and likely will recur, we'd all be better off if the organizations involved were more open to constructive criticism than is currently popular in this ad-saturated world.
Reply #4
I have a 64-bit system running XP 64-bit edition. Will we eventually see 64-bit builds of GalCiv II?
Reply #5
Oh, I can whinge with the best of them; I am no paragon of virtue. It's just nice to see a positive article.
Reply #6
WARNING: geek content following...

This is not just a Windows issue. Userspace applications on 32-bit systems have had this limit since virtual memory managers were created. Every process is given 4 GB of virtual address space, the most that can be addressed in 32 bits. Half is given to the kernel and half to the process; that is where the 2 GB comes from. And it doesn't stop there: code space, data space, and stack space all have to come out of the 2 GB that's left. So by the time you're ready to allocate your first set of bytes, there is really not that much virtual address space left for doing really big things.
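You can even ask Windows for those numbers. A hedged sketch using the documented GlobalMemoryStatusEx call: for a normal 32-bit process, ullTotalVirtual comes back at roughly 2 GB however much RAM is installed:

```cpp
// Sketch: report how much user-mode virtual address space this process
// gets. For an ordinary 32-bit process ullTotalVirtual is ~2 GB,
// independent of installed RAM or page file size.
#include <windows.h>
#include <cstdio>

int main()
{
    MEMORYSTATUSEX ms = { sizeof(ms) };  // dwLength must be set first
    if (GlobalMemoryStatusEx(&ms))
    {
        printf("Physical RAM:         %llu MB\n", ms.ullTotalPhys / (1024 * 1024));
        printf("User address space:   %llu MB\n", ms.ullTotalVirtual / (1024 * 1024));
        printf("Address space unused: %llu MB\n", ms.ullAvailVirtual / (1024 * 1024));
    }
    return 0;
}
```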

The good news, though, is that this is a per-process limit, meaning every process gets a fresh virtual address space to play in. So with interprocess communication (IPC) and shared memory, the 32-bit architecture can keep moving forward despite the 4 GB addressing limitation.
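As an illustration of that escape hatch, here's a rough sketch of pagefile-backed shared memory using the standard CreateFileMapping/MapViewOfFile APIs (the section name and size are made up); each cooperating process maps the region into its own separate 2 GB:

```cpp
// Sketch: shared memory between two 32-bit processes. Each process has
// its own 2 GB address space, so splitting data across cooperating
// processes is one (clumsy) way past the per-process cap.
#include <windows.h>

int main()
{
    const DWORD size = 64 * 1024 * 1024;  // 64 MB shared region

    // "Local\\DemoSharedHeap" is a made-up name for illustration.
    HANDLE h = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,  // pagefile-backed
                                  PAGE_READWRITE, 0, size,
                                  "Local\\DemoSharedHeap");
    if (h == NULL) return 1;

    void* view = MapViewOfFile(h, FILE_MAP_ALL_ACCESS, 0, 0, size);
    if (view != NULL)
    {
        // ... read/write shared data; another process that opens the same
        // name with OpenFileMappingA sees the same bytes ...
        UnmapViewOfFile(view);
    }
    CloseHandle(h);
    return 0;
}
```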

Technologies such as PAE and 4GT have also come along so that 32-bit hardware can stick around and address more than 4 GB of physical memory. Maybe 64-bit will catch on and these limitations will be removed. Only time will tell...
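(For reference, 4GT is the documented /3GB boot switch on 32-bit Windows: it shrinks the kernel's half of the address space so large-address-aware programs can see up to 3 GB. It looks something like this in boot.ini, though the exact entry varies by machine:

```
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows XP" /fastdetect /3GB
```

Note the application must also be linked large-address-aware to benefit; more on that flag further down the thread.)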
Reply #7
Have you brought this up with Microsoft?
Reply #8
I stated two years ago that memory was going to be the driving force behind 64-bit computing. Gone are the days when memory conservation was the way to a top programming job (I do remember those days). But as coding in general has gotten sloppier (code optimization is a thing of the past), memory usage has skyrocketed. And I blame Microsoft for that. Eventually we had to hit the 32-bit wall, and it is now, not in the future. In two years, only low-end systems will be non-64-bit.
Reply #9
I stated two years ago that memory was going to be the driving force behind 64-bit computing. Gone are the days when memory conservation was the way to a top programming job (I do remember those days). But as coding in general has gotten sloppier (code optimization is a thing of the past), memory usage has skyrocketed. And I blame Microsoft for that. Eventually we had to hit the 32-bit wall, and it is now, not in the future. In two years, only low-end systems will be non-64-bit.


I am hitting that same wall with my 64-bit system, although it doesn't crash; it just stops saving.
Reply #10
That wall is up in the teraoctets (i.e., thousands of GB) for 64-bit OSes, so no. What you hit is the "emulated" 32-bit memory wall, which still affects 64-bit systems running 32-bit apps. That's why, with GC2 or any other 32-bit app, using a 64-bit OS makes no difference (except lower performance, of course).
Reply #11
64-bit has its own problems on the hardware side, though. How many people here have tried routing a 64-bit bus line on a microprocessor before? This isn't like the "lack of vision" problem Bill Gates had with the 640K memory thing. You pay a significant performance, price, and power penalty when you go 64-bit in hardware. That's why things like the Intel Itanium were outrageously overpriced, YEARS behind schedule, and found little market. To put it really simply: if you make everything 64-bit, you are more than doubling the size of your chip. And if you double the chip size, it computes at half the speed, at twice the cost, and twice the wattage. Now, there are several tricks we are trying in the industry to make it not that bad, but it's kinda hard when the average guy at Best Buy expects 64-bit chips to be FASTER than 32-bit.

I *guarantee* you this problem is more easily solved in software. We may see more 64-bit operating systems in the next few years that do some swizzling on the hardware's 32-bit address lines (which incurs a performance penalty in itself), but if you want 64-bit hardware--you're gonna pay. Understand that you're going to pay more for a CPU that runs slower and hotter, in return for your >4 GB support. Maybe at some point in the distant future the penalty for running 64-bit hardware will be less than that of emulating 64-bit behavior in software, but it's just that -- distant. For now, I highly recommend squeezing all you can out of 32-bit while you can. Sorry.
Reply #12
If you've got an app that keeps eating memory, what difference does it make what the limit is?

This just seems like common sense. Even if Windows did start swapping more memory, the game would suffer (and eventually you'd hit a wall there too).

I'm sorry, but if you're going to use dynamic memory, you'd better clean up after yourself. So does this mean the Out of Memory problem may actually get fixed?
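For what it's worth, "cleaning up after yourself" is exactly what C++ RAII automates: tie each allocation to a stack object's lifetime and it can't leak. A trivial sketch:

```cpp
// Sketch: RAII -- the allocation's lifetime is bound to the scope, so
// the memory is released automatically, even on early return or
// exception; no manual cleanup to forget.
#include <vector>

void BuildFleet()
{
    std::vector<int> hullIds(10000);  // allocated here...
}                                     // ...freed here, automatically
```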
Reply #13
So the out of memory error will finally be addressed in 1.7? I certainly hope so. My frustration reached its limit last week when my well-advanced game would not go more than a turn or two, and I just got so tired of reloading and replaying the same turns over and over. (Yes, I have the latest NVIDIA drivers, latest Vista patches, 2 GB of RAM, etc.) I'm afraid I'm simply going to have to quit playing the game for now, and will look forward to 1.7 and a return to GalCiv 2.
Reply #14
but if you want 64-bit hardware--you're gonna pay.


Uh, pretty much all consumer-level hardware has been 64-bit for a while now -- 32-bit died with the Athlon XP and Pentium 4E. Sure, the majority of OSes are still 32-bit, but a typical PC has been capable of far more for a couple of years now.
Reply #15
Well, I'll be happy with the memory leak addressed. I didn't think SD would leave me high and dry, as their customer support has been top notch, but blaming it all on the OS seems a bit strange. I understand all too well the limitations of certain software and hardware, but the way this game can eat memory is out of this world. I'm not on the forefront of the 32/64-bit information line, so I'll let you computer guys work it out and I'll just pay the difference for another good computer later. But my good computer now, which runs every game I own at full (or near-full) settings, can't run GC2 without crashing. I know that different games allocate memory differently, so it's nearly apples and oranges, but it seems to me that if a Supreme Commander match with a near-equal map size and an equal number of enemies (while not nearly as intelligent as the GC2 AI) runs smoother, longer, and completely without fail, and GC2 will run a few turns and then CTD, without exception, then something is amiss.

I'm not sure what it was, but GC2:DA worked great when I bought it -- it reinvigorated the experience for me and I played countless games -- and now it doesn't. The sooner it's fixed, the happier I'll be, because I'm not willing to buy another expansion with the last one still broken (for me). Here's hoping 1.7 and 2.0 squash it. Or they'll re-release the last patch that didn't have the error.
Reply #16
64-bit has its own problems on the hardware side, though. How many people here have tried routing a 64-bit bus line on a microprocessor before? This isn't like the "lack of vision" problem Bill Gates had with the 640K memory thing. You pay a significant performance, price, and power penalty when you go 64-bit in hardware. That's why things like the Intel Itanium were outrageously overpriced, YEARS behind schedule, and found little market.


I tend to disagree. 64-bit CPUs have been around for ages (since 1961), and the technology has been well tested. Lots of supercomputers and other big non-consumer hardware already use 64-bit computing. Only recently has 64-bit entered the consumer world.

Note that the Itanium still isn't consumer hardware; it was designed for servers and high-performance computing. The reason it failed isn't 64-bit, it's the overall architecture:
"Itanium's architecture differs dramatically from the x86 and x86-64 architectures used in other Intel processors. The architecture is based on explicit instruction-level parallelism, with the compiler making the decisions about which instructions to execute in parallel."

And that's the problem: you need a special compiler, and you need to write your code in a special way so that the compiler can easily auto-parallelize it.

This gets even more difficult on other architectures, like the Cell used in the PS3. Still, most console titles use only a fraction of the Cell's capability, mainly because most console games are ported to every single console there is. And since these games are made by companies interested in profit, not high performance, they don't optimize their code for one platform.

Now back to 64-bit in the consumer world: the current 64-bit consumer-level CPUs are a dream. They are fully compatible with old 32-bit code and have a 64-bit instruction set that's very similar to the 32-bit one (which makes optimization relatively easy for compiler makers).

Greatly lifted memory limits, virtualization support, multiple cores -- those are the key features of the new consumer-level CPUs, in my opinion. To my knowledge, the next version of Windows will no longer be available in a 32-bit edition, so in five years or so I think no new computers will ship with a 32-bit-only CPU. Skip forward a few more years and CPU vendors will start to throw the legacy x86 stuff overboard.

It's a slow process, but be glad it's a smooth transition; nobody is forced to throw away all their old software and start from scratch from one day to the next.
Reply #17
How about this -- when your video card runs out of memory for textures, it goes to system memory. And I think (but haven't tested this) that the memory it grabs belongs to the game process.


My understanding is that however much texture memory you're using on the video card is automatically duplicated within the game's process already (either the game does it itself, or it's handled by DirectX). So for people (like myself) with 512 MB graphics cards, if the game uses up 500 MB or so for textures, it's going to use up 500 MB of that process's virtual address space storing a duplicate of everything in video RAM. That is, unless it's running DirectX 10, which apparently doesn't have to do that. And if you run out of texture memory, it starts allocating system memory (via the chipset GART for AGP cards or the onboard GART for PCI Express cards).
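That duplication is visible in Direct3D 9's resource pools: a texture created in D3DPOOL_MANAGED keeps a system-memory backup copy inside the process, while D3DPOOL_DEFAULT lives in video memory only. A sketch under that assumption -- g_device is a hypothetical, already-created IDirect3DDevice9*:

```cpp
// Sketch: the D3D9 pool choice. D3DPOOL_MANAGED textures keep a shadow
// copy in system RAM that counts against the process's 2 GB address
// space; D3DPOOL_DEFAULT textures live in video memory only, but are
// lost on device reset and must be rebuilt by hand.
#include <d3d9.h>

extern IDirect3DDevice9* g_device;  // assumed created elsewhere

IDirect3DTexture9* CreateManagedTexture(UINT w, UINT h)
{
    IDirect3DTexture9* tex = NULL;
    g_device->CreateTexture(w, h, 1, 0, D3DFMT_A8R8G8B8,
                            D3DPOOL_MANAGED,  // shadow copy in system RAM
                            &tex, NULL);
    return tex;  // NULL on failure; caller must check
}
```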

Microsoft issued a patch for Vista just recently that addresses something related to this. I don't understand the specifics; I just know that it cut down my overall memory usage as well as the memory used by my games.

I'm curious. Is GC2 large address-aware? Most games these days should be.
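For the curious: large-address-aware is a single bit in the EXE's PE header, set with the /LARGEADDRESSAWARE linker switch (you can also inspect it with dumpbin /headers). A hedged sketch that reads the bit straight from a file on disk -- the path is a placeholder:

```cpp
// Sketch: check whether an EXE has IMAGE_FILE_LARGE_ADDRESS_AWARE set.
// A 32-bit EXE with this bit gets 4 GB of address space on 64-bit
// Windows (or 3 GB on 32-bit Windows booted with /3GB).
#include <windows.h>
#include <cstdio>

int main()
{
    FILE* f = fopen("Game.exe", "rb");  // placeholder path
    if (!f) return 1;

    IMAGE_DOS_HEADER dos;
    fread(&dos, sizeof(dos), 1, f);

    IMAGE_NT_HEADERS32 nt;
    fseek(f, dos.e_lfanew, SEEK_SET);   // PE header starts at e_lfanew
    fread(&nt, sizeof(nt), 1, f);
    fclose(f);

    bool laa = (nt.FileHeader.Characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE) != 0;
    printf("Large address aware: %s\n", laa ? "yes" : "no");
    return 0;
}
```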
Reply #18

As I said, there isn't a memory leak in GalCiv. What happens is that through the course of the game, more and more ships, ship designs, etc. get created, which uses more and more memory (especially in larger galaxies).

What can be done (and is being done) is to more aggressively deallocate memory for things that aren't on screen, but there is a price to pay: performance. The difference, hopefully, will be negligible.
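A minimal sketch (my illustration, not Stardock's actual code) of the trade-off being described -- drop the heavyweight data of anything off screen, and pay a rebuild cost when it scrolls back into view:

```cpp
// Sketch: release graphics data for off-screen objects and rebuild it
// on demand. Trades an occasional rebuild hitch for a lower memory
// ceiling.
struct ShipGraphics { /* meshes, textures, ... */ };

struct Ship
{
    bool          onScreen;
    ShipGraphics* gfx;       // heavy; safe to drop and recreate

    Ship() : onScreen(false), gfx(0) {}

    void UpdateResidency()
    {
        if (!onScreen) { delete gfx; gfx = 0; }       // free memory now...
        else if (!gfx) { gfx = new ShipGraphics(); }  // ...rebuild when visible
    }
};
```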

People really need to quit assuming that because an application or game consumes memory over the course of running, it has some sort of "leak". When people are so quick to yell "leak", it can distract developers from the report of a legitimate one.

Reply #19
In IC design we draw millions and millions of shapes, and we stay comfortably below 1 GB of memory. The performance optimization we use is that when you zoom out, most of the smaller shapes don't need to be drawn -- but they're still resident in memory. I have to wonder what exactly in GalCiv is hogging the memory. I would expect the galaxy itself to be a sparse matrix: you just have an array of pointers, most of which are nil. And I would expect the ship and planet textures to be templates. If you're flattening the data structure and storing the graphical information on each and every instance of an object, that would do it.

Maybe what Stardock is doing already is similar to what we do: assume the user doesn't zoom in that much, and don't store all that intricate data in RAM. Because if you store ALL of the graphical data for a gigantic galaxy, as if the user were zoomed in on all of it, that's a killer. But the user is not going to zoom in on an entire galaxy. Even then, I would expect the planet and ship textures to be stored in their own objects, not on every instance.
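The pattern being described is essentially the flyweight idiom: every ship instance points at one shared template instead of owning a copy of the mesh and texture data. A rough sketch (all names invented):

```cpp
// Sketch: flyweight ships. Many instances share one immutable template
// (hull mesh, textures), so per-instance cost is just a pointer plus
// position/state, not duplicated graphical data.
#include <map>
#include <string>

struct ShipTemplate { /* shared mesh + texture data, loaded once */ };

std::map<std::string, ShipTemplate> g_templates;  // one entry per design

struct ShipInstance
{
    const ShipTemplate* tmpl;  // shared, not owned
    float x, y;                // tiny per-instance state
};

ShipInstance MakeShip(const std::string& design, float x, float y)
{
    ShipInstance s = { &g_templates[design], x, y };
    return s;
}
```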


P.S. Contrary to what Intel and AMD marketing say, their microprocessor architectures are not truly 64-bit native. They fetch 64-bit assembly instructions and then decode them into 32-bit micro-ops (uOps). It's emulation, the same way you would do it in software. The Itanium and PowerPC64, by contrast, are native. If you compare benchmarks of 32-bit versus 64-bit software on PowerPC64, you will find the 32-bit applications are slightly slower (that's 64-bit hardware emulating 32-bit). But on the Pentium 4 and Hammer architectures it's the other way around: you take a hit running 64-bit software on their "64-bit machines". We are trying all sorts of tricks to reduce the performance penalty of running 64-bit, but if companies sank equal man-hours into 32-bit and 64-bit processor designs, the 32-bit would always outperform the 64. Maybe that's a sacrifice people are willing to make in exchange for the larger memory space, but really you're better off staying 32-bit for as long as you can.
Reply #20

but really you're better off staying 32-bit for as long as you can.

But even if GalCiv fully optimized their code and made it nice and neat, there is still the limitation problem that we are already running into in non-gaming apps, for the simple reason that Microsoft does not optimize code, period. So running Office and IE can kill you in 2 GB -- or even 4, if the "bug" were eliminated. It may not be time to go to 64-bit yet, but it is time to start planning the move in the near future.

Reply #21
If IE is hitting a 2 GB limit, we are in a sad state indeed. That would mean you're transmitting millions of shapes over the net.

It would be nice if the industry would accept 48-bit instead of 64. That would mean a 50% area penalty on our chips instead of 100%. I can't imagine what application requires that the number of bits be a power of 2.
Reply #22
Any word on memory fragmentation? (Actually, fragmentation of the logical address space assigned to the application.)

If plenty of space is still free, fragmentation won't ever be an issue. But as the memory limit is approached, fragmentation can make things worse.
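A sketch of why that's true: reserve the address space in small blocks, free every other one, and a large request can still fail even though half the space is "free", because no single hole is big enough:

```cpp
// Sketch: address-space fragmentation. After freeing every other 1 MB
// block, ~50% of the space is free, yet a single 100 MB request can
// fail because no contiguous hole is large enough.
#include <windows.h>
#include <vector>
#include <cstdio>

int main()
{
    const SIZE_T block = 1024 * 1024;  // 1 MB
    std::vector<void*> blocks;

    // Reserve as much address space as possible in 1 MB pieces.
    for (void* p; (p = VirtualAlloc(NULL, block, MEM_RESERVE, PAGE_NOACCESS)) != NULL; )
        blocks.push_back(p);

    // Release every other block: plenty free, but only in small holes.
    for (size_t i = 0; i < blocks.size(); i += 2)
        VirtualFree(blocks[i], 0, MEM_RELEASE);

    void* big = VirtualAlloc(NULL, 100 * block, MEM_RESERVE, PAGE_NOACCESS);
    printf("100 MB request after fragmentation: %s\n", big ? "succeeded" : "FAILED");
    return 0;
}
```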
Reply #23
I'm not a programmer, but I'm pretty sure that if you set a large-address-aware flag (or something -- I don't remember its exact name), Windows x64 versions would be able to give 3 or 4 GiB of address space to the game. This could be particularly useful for Vista, since some changes to how graphics memory is handled in Vista make the process use even more virtual memory than in XP. There have been a few articles on AnandTech dealing with this issue (Part 1, Part 2 and Part 3). If you're really going to be getting close to the limit on XP, people with modern graphics cards on Vista may get into trouble, but apparently this can be fixed, at least for the x64 versions, if you set the appropriate flag.

I'm sure you know all this much better than I do, but I just thought I'd mention it anyway.

edit: I see Pyrion mentioned this earlier. As you can see in the AnandTech articles, a very recent game that is actually crashing on large maps because of this issue (Supreme Commander) is not large-address-aware. The developers seem to have really dropped the ball on that one, as it should apparently not be very difficult to support, and it would have saved a lot of players from a lot of crashes.
Reply #24
What advice do I get out of this article?

Buy a new computer with Windows Vista 64-bit and plug in 4 GB of RAM?

Or stick with Windows XP 32-bit and 2 GB of RAM?
Reply #25
What advice do I get out of this article?

users of GalCiv II will be able to avoid running out of memory once they get v1.7.