AllocateMemory and local variables
Posted: Wed Dec 27, 2006 3:40 pm
by hellhound66
Removed.
Posted: Wed Jan 03, 2007 8:49 pm
by kinglestat
I agree.
A directive to switch it on and off, such as
ZeroMemoryEnable
ZeroMemoryDisable
would be perfect.
Posted: Thu Jan 04, 2007 3:33 am
by Kaeru Gaman
Yup, really good idea.
With memory allocation, zeroing is unnecessary.
Even for variables it's not always needed, especially when you work with EnableExplicit,
where you have to Define a variable anyway.
There the "= 0" could be implemented as a default behaviour of Define,
and the standard zeroing could be switched off completely.
For Dim it would also be possible to add some "zero" flag
for when you really need an empty array, which in my opinion would be a rare case.
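For reference, a minimal sketch of what the compiler already does today as I understand it (this is the current behaviour, not the proposed switch):
Code: Select all
EnableExplicit

Define a.l          ; today a freshly defined variable starts out zero-initialised
Define b.l = 123    ; an explicit initial value is already possible
Dim c.l(9)          ; array elements are also zero-initialised

Debug a    ; 0
Debug b    ; 123
Debug c(5) ; 0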
Posted: Thu Jan 04, 2007 4:40 am
by ts-soft
For the memory you can simply use a macro:
Code: Select all
#HEAP_ZERO_MEMORY = 8

Macro AllocateMemory(Size, Flag = #HEAP_ZERO_MEMORY)
  HeapAlloc_(GetProcessHeap_(), Flag, Size)
EndMacro

Macro FreeMemory(Memory)
  HeapFree_(GetProcessHeap_(), 0, Memory)
EndMacro

Macro MemorySize(Memory)
  HeapSize_(GetProcessHeap_(), 0, Memory)
EndMacro

Mem = AllocateMemory(1000, 0)
If Mem
  Debug PeekS(Mem, 1000)
  Debug MemorySize(Mem)
  Debug FreeMemory(Mem)
EndIf
AllocateMemory(Size, 0) ; for a non-zeroed allocation
That's good enough for me.
IMHO a native version wouldn't be faster.
Posted: Sat Jan 06, 2007 7:18 pm
by hellhound66
Removed.
Posted: Sun Jan 07, 2007 9:44 pm
by Rescator
Not much of an issue for me. If performance is an issue and you blame the zeroing when allocating memory, you really need to rethink the way you use memory.
Even ReAllocateMemory is not advised if you are doing lots of allocations and deallocations.
In fact, just the act of many allocations and deallocations is highly discouraged; allocate a buffer of memory instead and keep track of how much of it is used with your own variable. This would save you at least two calls or more.
Likewise, if you need to allocate a lot of memory pieces, it may be better to allocate just one large chunk and use an array of pointers (or similar) to point to the start of each area.
(Nice if you load many small files into memory that you need to work with at the same time.)
So if the automatic zeroing in PB's memory allocation slows down your software, you need to rethink the way you handle memory instead.
Personally I try to allocate all the memory I need at program start;
that chunk of memory is reused as much as possible, and since its size is static it has a large speed gain over the reallocation call as well.
The drawback is that you need to know how much memory you will need at program start, which is not always easy.
However, even when allocating later on you could simply check the file sizes in the directory, make the temporary memory the size of the largest file (depending on file size, obviously) and then process the files.
There is also the Windows API file mapping, which lets Windows deal with all the memory handling. Not sure what the speed gain is compared with a static chunk of memory that is reused; maybe file mapping is a bit slower.
Constantly allocating and deallocating is discouraged for another reason: memory fragmentation.
I like the idea of a flag or similar, but for me it does not matter as I rarely need the memory zeroed and most of the time I never rely on it being zeroed anyway (since I tend to reuse memory allocations as much as possible).
Then again the zeroing does not hurt me that much, since it is only done a few times while running a program.
A ZeroMemory(*ptr, len) is something I've missed though. It's not that hard to make a loop and do it yourself, obviously, but it would be nice to have an optimized and even native one. (On Windows the API one could be wrapped, I guess.)
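To make the "one large chunk plus an array of pointers" idea above concrete, here is a minimal sketch; the sizes and names are made up for illustration only:
Code: Select all
#ChunkSize  = 1024 * 1024             ; one 1 MB allocation instead of many small ones
#PieceSize  = 4096                    ; fixed size of each piece
#PieceCount = #ChunkSize / #PieceSize

*Chunk = AllocateMemory(#ChunkSize)
If *Chunk
  Dim *Piece(#PieceCount - 1)         ; array of pointers into the chunk
  For i = 0 To #PieceCount - 1
    *Piece(i) = *Chunk + i * #PieceSize
  Next
  ; ... work with *Piece(0), *Piece(1), ... here ...
  FreeMemory(*Chunk)                  ; a single call releases everything
EndIf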
Posted: Sat Jan 13, 2007 12:47 am
by hellhound66
Removed.
Posted: Sat Jan 13, 2007 4:05 am
by Tranquil
Rescator wrote:Not much of an issue for me. If performance is an issue and you blame the zeroing when allocating memory, you really need to rethink the way you use memory. [...]
Hm, what do you do if you are coding a network server? You need a buffer for sending and receiving data for each connected client. Allocate them all at start, at maximum capacity?
Zero-filling should be optional, I think. It shouldn't be very hard for the PB team to implement, should it?
Posted: Sat Jan 13, 2007 9:09 am
by Rescator
hellhound66 wrote:It's very kind of you, but I didn't ask about coding tricks'n'tips, how I can evade this problem, but to post a request.
Hey! I'm actually agreeing with you here

Posted: Sat Jan 13, 2007 9:22 am
by Rescator
Tranquil wrote:Hm, what do you do if you are coding a network server? You need a buffer for sending and receiving data for each connected client. Allocate them all at start, at maximum capacity?
I would probably allocate (or reserve) two extra pieces of memory at startup, at a fixed size.
If it's files, this works great since they are read/written buffered from/to disk anyway, and if it all happens "in memory" I'd still try to use a fixed size; the last thing you need is exploitable allocations (never trust values you have no control over). When it comes to files I tend to prefer 64KB out of an old habit. And you would not need a buffer for each client; heck, you could probably get by with a single in/out buffer, as the code is sequential anyway. When it comes to multiple threads the issue is worse, and you'd have to mess with mutexes and so on. I believe Apache and similar servers run multiple threads, each handling "x" number of clients with a single shared buffer per thread. (Just a wild guess; it's what I would do anyway.)
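A rough sketch of that single shared in/out buffer idea, assuming a plain sequential (non-threaded) server; the port, buffer size and processing step are placeholders only:
Code: Select all
#BufferSize = 65536                      ; 64 KB, as mentioned above

*Buffer = AllocateMemory(#BufferSize)    ; allocated once at startup, reused for every client

; (older PureBasic versions also need InitNetwork() before any network command)
If *Buffer And CreateNetworkServer(0, 8080)
  Repeat
    If NetworkServerEvent() = #PB_NetworkEvent_Data
      ClientID = EventClient()
      Bytes = ReceiveNetworkData(ClientID, *Buffer, #BufferSize)
      ; ... process the Bytes received in *Buffer, then reuse the same buffer for the reply ...
      SendNetworkData(ClientID, *Buffer, Bytes)
    Else
      Delay(1)                           ; don't burn CPU while waiting for events
    EndIf
  ForEver
EndIf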
Anyway, to steer back to the topic:
I'm all for shaving off a few CPU cycles by making zeroing optional at allocation/reallocation.
I really like what ts-soft did in his post, as it's backwards compatible, and Fred/the PB team could probably do it in a similar way.
And then just add a ZeroMemory() function (wrap SecureZeroMemory() on PB Windows?) like I mentioned in my post further up.
Posted: Sat Jan 13, 2007 9:38 am
by Rescator
This is to complete ts-soft's code snippet:
Code: Select all
#HEAP_ZERO_MEMORY = 8

Macro AllocateMemory(Size, Flag = #HEAP_ZERO_MEMORY)
  HeapAlloc_(GetProcessHeap_(), Flag, Size)
EndMacro

Macro FreeMemory(Memory)
  HeapFree_(GetProcessHeap_(), 0, Memory)
EndMacro

Macro MemorySize(Memory)
  HeapSize_(GetProcessHeap_(), 0, Memory)
EndMacro

Macro ZeroMemory(Memory)
  RtlZeroMemory_(Memory, HeapSize_(GetProcessHeap_(), 0, Memory))
EndMacro

Mem = AllocateMemory(1000, 0)
If Mem
  Debug PeekS(Mem, 1000)
  Debug MemorySize(Mem)
  ZeroMemory(Mem)          ; RtlZeroMemory_ has no return value, so no Debug here
  Debug FreeMemory(Mem)
EndIf
Note! I see that originally local variables were mentioned as well.
I doubt it would be possible to easily change the default behaviour there while still remaining backwards compatible (too much code relies on variables being 0 at creation).
So yeah, the only option there is an EnableZeroMemory and DisableZeroMemory directive, and to remain backwards compatible EnableZeroMemory would be on by default, obviously.
Posted: Sat Jan 13, 2007 1:29 pm
by hellhound66
Removed.