Hmm, the compiler should complain here, this can't work.
@Fred: The compiler should complain about any code
written inside a Structure definition. I've seen too many new users
try something like 'NewList Something.l()' inside a Structure and wonder
why it doesn't work.
Timo
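A minimal sketch of the distinction Timo is pointing at (the structure and field names are just for illustration): a Structure body may only contain field declarations, while the NewList itself is declared outside of it.

Structure Something
  value.l                     ; only plain field declarations belong in here
EndStructure

NewList Items.Something()     ; the list is declared at program scope, not inside the Structure
AddElement(Items())
Items()\value = 42
Debug Items()\value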
... where x\union would contain a pointer to the data, and you'd assign an appropriate handle for it by checking the value of x\tymed (as in the select code in the union definition). Eg.
If x\tymed = #TYMED_GDI
  hBitmap.l = @x\union
  ; Or possibly PeekL(@x\union), or something like that...
EndIf
(I'm assuming some function is going to be stuffing data into this structure here.)
The above is 99% certain NOT to be the actual correct structure, but I think this could be one way to approach it... but I've been known to be completely wrong before
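Purely as a hedged illustration of that tymed-plus-union pattern (this is not claimed to be the real STGMEDIUM layout; the structure and field names below are invented, and it assumes the #TYMED_GDI constant is available as in the snippet above), the idea could be expressed like this, where naming the union members lets the handle be read straight from the appropriate field:

Structure DataMedium
  tymed.l
  StructureUnion
    hBitmap.l       ; meaningful when tymed = #TYMED_GDI
    hGlobal.l       ; meaningful for a global-memory medium
    *fileName       ; meaningful when the medium is a file name
  EndStructureUnion
EndStructure

Define x.DataMedium
; (imagine some function stuffing data into x here)
If x\tymed = #TYMED_GDI
  hBitmap.l = x\hBitmap   ; the handle comes straight out of the union field
EndIf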
Hi-Toro: your example would work ok too, I forgot to mention it. It just doesn't let you use different names for the same data. StructureUnion is useful to reduce memory usage when dealing with different but similar objects.
Union can mean different things, but in structures it means "memory overlay". That is, the different field names and types in a StructureUnion lie on top of each other, and the last one changed determines the content for all of them, though not necessarily the entire content.
By this, I just mean that the first memory address is shared by them all, but if one field is one byte long, another is two bytes, and a third is four bytes, then the only memory the three share in common is the first byte. Change the first byte and it affects all three. Change the second byte, and only the 2-byte and 4-byte fields are affected, though neither one completely.
If you want the same structure to give you four 1-byte references, two 2-byte references, and one 4-byte reference in the same memory area, you would create a structure for four 1-bytes, another for two 2-bytes, and a third for a single 4-byte. Then with a StructureUnion to overlay them, you end up with the 4-byte item being dealt with one byte at a time if you like, or either 2-byte item being dealt with one byte at a time, or even the 4-byte item being accessed two bytes at a time (upper and lower byte pairs).
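As a quick PureBASIC sketch of that overlay (the structure and field names are mine): write through the 4-byte view, then read the same memory back a word or a byte at a time.

Structure Bytes4
  b0.b
  b1.b
  b2.b
  b3.b
EndStructure

Structure Words2
  w0.w
  w1.w
EndStructure

Structure Overlay
  StructureUnion
    b.Bytes4    ; four 1-byte views of the same 4 bytes
    w.Words2    ; two 2-byte views
    l.l         ; one 4-byte view
  EndStructureUnion
EndStructure

Define x.Overlay
x\l = $11223344     ; change the 4-byte field...
Debug Hex(x\w\w0)   ; ...and the 2-byte fields see it ($3344 on a little-endian CPU)
Debug Hex(x\w\w1)   ; $1122
Debug Hex(x\b\b0)   ; lowest byte: $44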
Let's see if I can demonstrate this a little bit better:
Well, that isn't too bad on preview. It looks screwy as all get out in the source because of the difficulty with proportional spacing and vertical alignment. But if it looks okay, it might help with illustrating my point.
has-been wanna-be (You may not agree with what I say, but it will make you think).
Hi-Toro: your example would work ok too, I forgot to mention it.
Ah, cool -- wish I'd figured that out back in the AmiBlitz days -- remember the problems they caused?
Nice explanation of the different type size possibilities there -- though why anyone would deliberately use this stuff other than for porting evil C code, I don't know!
StructureUnion actually has had many justifications. A common one is
receiving data in a fixed-size format, then using some part of the content to figure out which of several valid formats a given record might belong to:
a billing record, an order transaction record, a log record, a database update record, and so on. By standardizing on a record size, any given record could easily be located over the length of the resulting file, read, replaced, or even deleted without disturbing the records either before it or after it.
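A hedged sketch of that pattern in PureBASIC: a fixed-size record with a type tag up front and a StructureUnion so the same bytes can be read through whichever layout the tag says applies. All record kinds and field names here are invented for illustration.

Structure BillingData
  customerId.l
  amountCents.l
EndStructure

Structure LogData
  severity.l
  code.l
EndStructure

Structure DataRecord
  recordType.l                ; 1 = billing, 2 = log, ...
  StructureUnion
    billing.BillingData       ; overlaid interpretations of the same payload bytes
    logEntry.LogData
  EndStructureUnion
EndStructure

Procedure HandleRecord(*rec.DataRecord)
  Select *rec\recordType
    Case 1
      Debug "Billing for customer " + Str(*rec\billing\customerId)
    Case 2
      Debug "Log entry, severity " + Str(*rec\logEntry\severity)
  EndSelect
EndProcedure

Define r.DataRecord
r\recordType = 1
r\billing\customerId = 1234
r\billing\amountCents = 9999
HandleRecord(@r)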
This technique was particularly useful with older equipment, like tape drives. Consider if you decided to use your VCR tape to record data and had an accurate way of determining the precise area on the tape where a given record should be. You could seek at high speed to that position, slow down, and read just the one record. In that way you could hop over records of a different type and accumulate just those records that had a correlation to each other. Of course, you either have to have an external reference to indicate which records correspond to each other, or you have to embed pointers or offsets to where the previous or following record should be.
You might consider this an obsolete concept, but actually, it is how we manage to create files on serial media like CDs, hard drives, and floppy disks, and even how we move data from multiple sessions over a single internet connection. It is even part of our ability to handle multiple files simultaneously on a single drive.
It also means that each record can contain virtually any type of data that can be imagined. You normally just need a few fields in the record that identify the record so that you can verify it is the correct one sought, identify where it fits in the sequence of records so that you know the chain to it has not been broken, identify the previous and/or following record, and provide a checksum/CRC to verify the contents of this record; then let the software that receives the record worry about the content and structure of the data in the record.
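For what it's worth, here is one way those few bookkeeping fields might be laid out; the field choices and sizes are mine, not any standard format.

Structure RecordHeader
  recordType.l   ; identifies what kind of record this is
  sequence.l     ; where it fits in the chain of records
  prevRecord.l   ; index of the previous record, or -1 if none
  nextRecord.l   ; index of the following record, or -1 if none
  crc.l          ; checksum/CRC over the payload that follows this header
EndStructure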
And the software that receives those records could again use the record size as a primary concern and standardize on a size that allows any subsequent record's position to be accurately determined beforehand, so that you can push the hardware to get there as quickly as possible. Then you could read or write that one record without concern about an impact on adjacent records. Because you are not resizing the record, there is no need to contract or expand what has already been stored. You can add another record at the end if you need to, delete an existing record by nulling out some part of the information it needs to be considered valid, and even relink the previous and following records to point to each other, skipping over the deleted record entirely. And in an effort towards greater efficiency and a constraint on runaway storage needs, you can reassign deleted records when new records are needed.
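Because every record has the same size, record n starts at n * SizeOf(record), so it can be read and rewritten in place without touching its neighbours. A sketch, assuming a file "records.dat" already written in such a fixed-size format:

Structure FixedRecord
  id.l
  flags.l
  payload.b[56]    ; pad the record out to a fixed 64 bytes
EndStructure

Define rec.FixedRecord
Define index = 10
If OpenFile(0, "records.dat")
  FileSeek(0, index * SizeOf(FixedRecord))   ; jump straight to record 10
  ReadData(0, @rec, SizeOf(FixedRecord))     ; read it
  rec\flags = rec\flags | 1                  ; change it...
  FileSeek(0, index * SizeOf(FixedRecord))
  WriteData(0, @rec, SizeOf(FixedRecord))    ; ...and write it back in place
  CloseFile(0)
EndIf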
Sounds a bit like creeping fragmentation on a hard drive, doesn't it? Well, that is what happens, and usually at some point an effort to realign the records in a storage medium to be adjacent and consecutive will pay big dividends in terms of recovered access speed, but in the meantime, this method of interweaving fixed-size records lets us do things with serial media that just could not be done well otherwise.
If you were planning a database, you might believe that you want a free-form structure where anything of any length could be stored. But the penalty is that you could only process that file from beginning to end, since you could never know otherwise where anything was or how to get to it. You could use an external index and lookup structure, but you would still have to contend with the fact that deleting a record of one size would not provide enough space to write a larger record, so you would have to append to the end of the file; and if you overwrote a deleted record with one somewhat shorter, the unused gap eats up disk space as well, which again means creep in file size, and no effective way to recover the lost space until the whole file is regenerated.
A way to deal with this might be to decide on a minimum-size record instead, and allow several records to be chained together to form larger, less constrained records. You might have a primary record in one database, then use secondary records in a second database, and the primary record would point precisely to where the first record occurs in the secondary database; that record points to the next, if there is a next, and so on. So in the first database, any given primary record can quickly and easily be found, and from that, it can be determined whether any secondary records are involved and where they are located, at least the first one, and from there, additional records can be sought in the secondary databases (there can be more than one).
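A rough sketch of the two record layouts just described, with invented names; the chain is walked by following nextDetail from record to record until it comes back as -1.

Structure PrimaryRecord
  key.l           ; what the record is looked up by
  firstDetail.l   ; index of the first record in the secondary file, or -1 if none
EndStructure

Structure SecondaryRecord
  data.b[60]      ; a fixed-size chunk of the larger, free-form content
  nextDetail.l    ; index of the next chunk, or -1 if this is the last one
EndStructure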
Why bother building databases when you can just use existing code, such as SQL or ODBC? Maybe just to learn how it is done. Maybe because it might take less time to roll your own than to learn someone else's code. Maybe because you have some ideas about databasing you want to explore. Maybe because you want a nice, tight, fast database, and a huge, chunky, overly flexible standard just does not seem like the best way to optimize your application.
If you consider a record as a sequence of byte data that can represent most anything, but in your case should conform to certain rules, you can either determine which set of rules applies using Select Case and If/ElseIf statements, parsing up those bytes on the fly, or you can just position them into a Structure that resides in a StructureUnion, and then, based on a simple determination of which record form is involved, immediately use the associated field structure to access the contents in the correct manner.
The first method is a dynamic approach, where you would use commands such as Mid(), Left(), Right(), and Val() to convert the data into some preferred form. But you run up against the limitation that you cannot handle a zero-byte (null) character with string commands in PureBASIC (or C/C++, or some other BASICs as well). A structure allows you to handle it as bytes, or words, or longs, or doublewords (dwords), or, if need be, as Chr() or strings.
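The "passive" approach, sketched out: instead of slicing a string with Mid()/Val(), point a structured pointer at the raw bytes and read the fields directly. Embedded null bytes are no problem because nothing is ever treated as a string along the way. The layout and names are illustrative only.

Structure WireRecord
  kind.w
  length.w
  value.l
EndStructure

; Build 8 raw bytes as they might arrive from a file or a socket
Define *buffer = AllocateMemory(8)
PokeW(*buffer, 7)            ; kind
PokeW(*buffer + 2, 0)        ; a zero word, which string commands would choke on
PokeL(*buffer + 4, 123456)   ; value

Define *rec.WireRecord = *buffer   ; overlay the structure on the buffer
Debug *rec\kind      ; 7
Debug *rec\length    ; 0
Debug *rec\value     ; 123456

FreeMemory(*buffer)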
Long-winded, I know, but I'm just trying to convey some of the inherent power that comes with using Structures, and where StructureUnion can often play an important role in putting data passively (no active code involved) into more useful forms.
Has-been Wanna-be (You may not like what I say, but it will make you think)