Memory allocation - problems when used RAM is greater than 1.2 GB
I am the developer of GeoControl (a terrain generator, written in PB).
More and more users (mostly game developers) are asking for terrains greater than 4096*4096 points. After building the possibility for 8192*8192 terrains into GC, memory allocation errors occurred.
To keep the RAM below 1.5 GB I did many "Global Dims", setting arrays to zero and back to full size, permanently during execution (a sketch of this pattern is shown below). It is certain that at no point more than 1.2 GB of RAM was needed. Nevertheless, Windows ran into problems and failed to manage the memory, with random allocation errors.
Does someone here know the reason? I know from many applications not written in PB, like Vue or Carrara, that they have the same problem.
Would the memory usage change if someone ran GeoControl on a 64-bit system, or does the emulation to 32 bit also keep the internal memory management of Windows at 32 bit?
Requests for large terrains will grow fast, as the new game engines can handle 32768*32768 maps for shading.
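For readers wondering what the "Dim to zero and back" pattern looks like, here is a minimal sketch. The array name and the 8192*8192 size are only illustrative assumptions, not code taken from GeoControl:

Code:

; Rough sketch of the "Dim to zero and back" pattern described above.
; Array name and sizes are examples only, not GeoControl code.

Global Dim Height.f(8191, 8191)   ; ~256 MB working array (8192*8192*4 bytes)

; ... heavy processing on Height() ...

Global Dim Height.f(0, 0)         ; shrink the array so its memory can be freed
Global Dim Height.f(8191, 8191)   ; re-allocate at full size for the next pass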
Hi JoRo,
My basic understanding of the Windows memory model (32-bit) is as follows:
Maximum 32-bit address space = 2 ^ 32 = 4GB
But MS Windows reserves the upper 2GB for the kernel, reducing the address space available to each process to 2GB.
This remaining 2GB needs to accommodate the system routines/services, libraries (DLLs), heap and stack.
Therefore, the larger the requirements of the heap/stack/etc., the less address space is available for general use. So, if 500MB is used for the heap/stack/DLLs, etc., that leaves around 1.5GB for dynamic allocation.
This might explain why you are now hitting limits.
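A quick way to see this limit in practice is to probe for the largest single block the process can still allocate. This is only a rough sketch (the starting value and step size are arbitrary), but on a 32-bit system the largest successful allocation is usually well below 2GB:

Code:

; Rough sketch: probe the largest contiguous block AllocateMemory() still grants.
; On a 32-bit process this is usually well below 2 GB, because DLLs, heap and
; stack already occupy (and fragment) parts of the address space.

For MB = 2000 To 100 Step -100
  *Block = AllocateMemory(MB * 1048576)
  If *Block
    Debug "Largest single block: " + Str(MB) + " MB"
    FreeMemory(*Block)
    Break
  EndIf
Next MB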
I believe that in a 64-bit environment, as the address range is greater, the limits increase accordingly. However, I believe there are still limits, but I'm not sure what they are.
There are other people far more clever than me who can probably give you a lot more clarification... it won't be long before they post

If the temperature today was 0 degrees, how can it be twice as cold tomorrow?
Hi again,
I just thought this article might be useful:
http://msdn.microsoft.com/msdnmag/issues/0700/hood/
It is a little dated (2000), but it gives a lot of info on the Windows memory management model.
Sorry if it is a bit heavy.
Hi JoRo
you didn't mention the physical memory...
I guess the machine should have 2GB of RAM to avoid a lot of swapping when the program needs 1.2GB.
If you allocate and free too fast and need a lot of swapping while doing so,
I can frankly imagine Windows failing to always get it right.
I don't know about a 64-bit environment, whether a 32-bit prog could be run there with a full 4GB address space....
If you are heading towards the really big maps
> the new game engines can handle 32768*32768 maps for shading.
you should convert the whole project to 64 bit to be on the safe side.
oh... and have a nice day.
Hi JoRo,
I've written some quick test code which reproduces this problem.
I'm not sure, but afaik Windows only swaps whole allocated blocks - it doesn't split these blocks to swap them to HDD.
So it works quite reliably with smaller memory blocks, but doesn't work with large ones.
As you can see in the test code, PB cannot allocate the memory for the Dim field if the required memory block is larger than the free physical memory in your system (you can see the free physical memory in the Task Manager, for example).
With free physical memory of about 780 MB, the resulting debugger output looks like this:
Code:

Structure BigStruct
  BigSpace.l[25600] ; Will reserve 100KB for each structured Element
EndStructure

For n = 5000 To 16000 Step 1000 ; This Loop will reserve between 500 MB and 1.6 GB of Ram in 100 MB Steps
  Debug "Trying to allocate "+Str(n * 102400)+" Bytes."
  Global Dim Test.BigStruct(0)
  Global Dim Test.BigStruct(n)
  Debug "Memoryadress of the Array: "+Str(@Test()) : Debug ""
  Test(n-1)\BigSpace[25599] = n
  MessageRequester("Memoryallocation by Dim() Test",Str(n * 102400)+" Bytes allocated by 'Global Dim'."+#CRLF$+"Testvalue: "+Str(Test(n-1)\BigSpace[25599]))
Next n
Greets, PL.

Debugger wrote:
Trying to allocate 512000000 Bytes.
Memoryadress of the Array: 23199796
Trying to allocate 614400000 Bytes.
Memoryadress of the Array: 537460788
Trying to allocate 716800000 Bytes.
Memoryadress of the Array: 537460788
Trying to allocate 819200000 Bytes.
Memoryadress of the Array: 0
Hmmm .... I just changed the above code to increase the array size step-by-step using ReDim().
First it looked like the memory swapping to HDD would work.
But increasing the array size that way seems to fragment the memory quite badly and ends up with an "invalid memory access" from ReDim().
[Edit] @Fred:
On my system ReDim() generates an IMA while trying to allocate 600 MB.
Just before ReDim() generates the IMA, the memory usage of this process grows to over 800 MB (observed with Task Manager) - even though ReDim() should only increase the size to 6000 fields (600 MB).
Could it be that there is a bug in ReDim()?
Code:

Structure BigStruct
  BigSpace.l[25600] ; Will reserve 100KB for each structured Element
EndStructure

For n = 5000 To 16000 Step 1000 ; This Loop will reserve between 500 MB and 1.6 GB of Ram in 100 MB Steps
  Debug "Trying to allocate "+Str(n / 10)+" MByte." : Debug ""
  Global Dim Test.BigStruct(0)
  For m = 1000 To n Step 1000
    ReDim Test.BigStruct(m)
    Debug "Allocating Block "+Str(m/1000)+" - Arraysize is now "+Str(m / 10)+" MByte"
    Debug "Memoryadress of the Array: "+Str(@Test())
  Next m
  Debug ""
  Test(n-1)\BigSpace[25599] = n
  MessageRequester("Memoryallocation by ReDim() Test",Str(n / 10)+" MByte allocated by 'ReDim'."+#CRLF$+"Testvalue: "+Str(Test(n-1)\BigSpace[25599]))
Next n
In some cases splitting the memory would be possible, but not in all. The single block/array size is about 260 MB (8192*8192*4 bytes).
I am thinking about tiling, so that the maximum block size would be 4192*4192*4, which runs rock stable, but that is only possible with swapping to HDD and back, which will cost a lot more time. As time increases with increasing size, this is a problem. A terrain calculation for an erosion may then take half a day.
I hoped that on a 64-bit system the internal memory addressing works with 64 bits, even if the application uses 32 bits.
JoRo wrote: I hoped that on a 64-bit system the internal memory addressing works with 64 bits, even if the application uses 32 bits.

I strongly doubt it; I would guess this is very much OS related (or maybe PB related, I suppose). I wonder if the same thing happens on 2000/XP & Vista? Or, a better test, does it happen on Windows Server 2003, where memory management for larger apps is better?
Page fault counters in perfmon may help you determine which methods cause more swapping than others, which, for what you are doing, sounds like an important area for performance optimising.
Paul Dwyer
“In nature, it’s not the strongest nor the most intelligent who survives. It’s the most adaptable to change” - Charles Darwin
“If you can't explain it to a six-year old you really don't understand it yourself.” - Albert Einstein
JoRo wrote: I hoped that on a 64-bit system the internal memory addressing works with 64 bits, even if the application uses 32 bits.
pdwyer wrote: I strongly doubt it, I would guess this is very much OS related (or maybe PB related I suppose).

You need a 64-bit program on a 64-bit OS to take advantage of 2GB+ of memory.
JoRo wrote: I am the developer of GeoControl (a terrain generator, written in PB). More and more users (mostly game developers) are asking for terrains greater than 4096*4096 points. After building the possibility for 8192*8192 terrains into GC, memory allocation errors occurred.

You need to rethink your memory model/use.
Also: (8192*8192)*4 (4 bytes per point, i.e. 32 bit) sums up to 268435456 bytes, i.e. 256 MB. Not exactly 1.2GB, is it?
My advice is to store the whole thing on disk instead, as a kind of virtual bitmap, and only load into memory the parts that are displayed/used.
Load them in the form of tile sections - see the sketch below. Sure, it's a bit slower, but you no longer have any size limits this way. Besides available disk space, that is
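To make the tile idea a bit more concrete, here is a rough sketch of reading a single tile from a flat file on disk. The file name, tile size and on-disk layout are made-up assumptions for illustration, not anything from GeoControl:

Code:

; Rough sketch of the tile idea above. File name, sizes and layout are only
; assumptions; tiles are assumed to be stored one after another, row by row.

#TileSize  = 1024                       ; tile edge length in points
#MapSize   = 8192                       ; map edge length in points
#TileBytes = #TileSize * #TileSize * 4  ; 4 bytes per point -> 4 MB per tile

Procedure ReadTile(File, TileX, TileY, *Buffer)
  TilesPerRow = #MapSize / #TileSize
  Offset.q    = (TileY * TilesPerRow + TileX) * #TileBytes
  FileSeek(File, Offset)
  ProcedureReturn ReadData(File, *Buffer, #TileBytes)
EndProcedure

File = ReadFile(#PB_Any, "heightmap.raw")  ; hypothetical file holding the full map
If File
  *Tile = AllocateMemory(#TileBytes)
  If ReadTile(File, 3, 5, *Tile)           ; fetch only tile (3,5) into RAM
    ; ... process this 1024*1024 tile, then write it back or drop it ...
  EndIf
  FreeMemory(*Tile)
  CloseFile(File)
EndIf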

Most Geo projects like that tend to run on specifically set up hardware and software.
Trying to do it on a "desktop" computer will sadly always be a pain, and slow.
A 64-bit app on a 64-bit OS will obviously work better, but if that too is a desktop system you will still get performance and memory issues.
Even 16GB of memory may not be enough if things grow really big.
You need (if you are to be "future proof") to deal with several TB of data.
Try googling for some source code out there, you might get a nice tip or two on disk caching stuff like this.
Sorry that there is no easy or magic solution

File mapping could perhaps be of help in this case? Though you would still run into OS filesystem limits. A rough sketch of the idea is below.
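For anyone who wants to try the file-mapping route, here is a very rough WinAPI sketch. The file name and the 256 MB view size are arbitrary assumptions, and error handling is minimal:

Code:

; Rough sketch of file mapping via the Windows API (assumes "heightmap.raw"
; already exists and is at least 256 MB). Windows pages the touched parts of
; the view in and out on demand, instead of one huge AllocateMemory() block.

hFile = CreateFile_("heightmap.raw", #GENERIC_READ | #GENERIC_WRITE, 0, 0, #OPEN_EXISTING, #FILE_ATTRIBUTE_NORMAL, 0)
If hFile <> #INVALID_HANDLE_VALUE
  hMapping = CreateFileMapping_(hFile, 0, #PAGE_READWRITE, 0, 0, 0)
  If hMapping
    *View = MapViewOfFile_(hMapping, #FILE_MAP_ALL_ACCESS, 0, 0, 268435456) ; map a 256 MB window only
    If *View
      ; ... read and write *View here like normal memory ...
      UnmapViewOfFile_(*View)
    EndIf
    CloseHandle_(hMapping)
  EndIf
  CloseHandle_(hFile)
EndIf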
( PB6.00 LTS Win11 x64 Asrock AB350 Pro4 Ryzen 5 3600 32GB GTX1060 6GB)
( The path to enlightenment and the PureBasic Survival Guide right here... )