Page 3 of 3

Posted: Thu Jul 17, 2003 9:24 am
by Pupil
matthew180 wrote: As for the files over 2G, etc., I only meant that in order to maintain a byte offset into a file where 2G < size < 4G, you need an unsigned 32-bit number, and of course I know that you are well aware of that...
Personally, I don't think an unsigned long is suitable for files over 2G. The Win API doesn't use it, so why mess things up? Better to use 64-bit quads for these files, to be more in line with the API.

Posted: Thu Jul 17, 2003 3:15 pm
by matthew180
That's probably a better solution (a 64-bit quad). I have no idea what the Win32 API uses, I never looked. I do know that in *BSD we use off_t, which is a signed 64-bit number. But that means even more work for Fred! ;-)

Matthew

Wrote my first test program and found this out the hard way

Posted: Sun Jul 27, 2003 4:35 pm
by netmon
In the first little test I did, I found I can't use an unsigned byte. Then I looked around for the .ub thing and did a search for more info... not surprised to see this mentioned. Why do I have to use a 2-byte variable just to do the same job as 1 byte minus the negative sign (no pun intended)? I can only guess arrays are affected by this as well.

And I like the idea of casting!

Posted: Sun Apr 11, 2004 10:39 pm
by Shannara
About that todo list.. is there one out there somewhere?

Posted: Fri Jul 16, 2004 12:57 am
by newbie
Still no news about unsigned variable type support in a future PB version?

..

Posted: Fri Jul 16, 2004 3:54 am
by NoahPhense
*post removed because poster is too stupid to understand topic*

- np

Posted: Fri Jul 16, 2004 6:05 pm
by ivory
Thought I would add my 2 cents.

Most mainframe compilers allow mixing of signed and unsigned integers. In fact, most go a step further and let the programmer define integers and non-integers with programmer-defined lengths (and programmer-defined fixed-decimal lengths for the non-integers).

Now, for the mainframe, fixed-point math was built into the original hardware (I say original because most of that hardware has since been replaced by POWER RISC processors emulating it).

But my point is this: mixing signed and unsigned was inefficient, because the processor had to strip the sign (force to unsigned) as part of the operation. All programmers were warned of the speed penalty, but they were allowed to do it, because sometimes it's the right thing to do.

And as far as casting is concerned, please don't force me to cast everything, but by all means allow me to.

Posted: Fri Jul 16, 2004 6:34 pm
by blueznl
When I come to think of it, do we need unsigned? To save a byte? Nah... if I had to choose between quads (64-bit, signed) and bytes (unsigned), that would be a no-brainer to me.

In fact, I have a tendency to stick to ints for most purposes, even when dealing with bytes...

Re. the Windows API: the Win API (32-bit) uses 32 + 32 bits to handle files larger than 2 gig; with a quad we'd be able to handle that for a while :-)

Posted: Sun Jul 18, 2004 4:17 am
by oldefoxx
I think some clarification is in order. When you say "casting", you seem
to be saying "the number of bytes required to represent that value in
PureBasic". Typically, depending upon the value involved, you would
use 1 (Byte), 2 (Word, unsigned, or Integer, signed), 4 (DWord,
unsigned, or Long, signed), and 8 (Quad, signed) bytes in conjunction
with each other.

Now the difference between signed and unsigned values lies in (1)
what the uppermost bit represents, and (2) what the remaining bits
represent if the uppermost bit is set and the number is signed.

Computations involving signed and unsigned values should not be
inherently different in terms of processing speed, unless you decide
that negative signed values must be converted to positive unsigned
values first. That would take an extra instruction, the ASM "Neg"
instruction. Not much of a speed penalty there, actually, except that
you also have to use a compare-and-jump so that you do not negate
positive values. But this is not necessary if you do not start by
presuming that sign conversion needs to be done automatically by the
compiler under these conditions. If you mix signed and unsigned
numbers in computations, you had best know what the outcome is
going to be, because the compiler cannot be sure of how you intend
to manage this. In cases of uncertainty, all you have to do is surround
the signed value with Abs() to resolve the matter as Microsoft would.
There are other concepts possible as well - unsigned values treated
as signed, or efforts made to block expressions involving the mixing of
signed and unsigned terms in the same expression.

The real distinction between signed and unsigned values is how the
compiler needs to evaluate them. Should it use the "Above", or the
"Above Or Equal To" test, or should it use the "Greater Than", or the
"Greater Than Or Equal To" test? Or checking the carry/borrow, overflow/
underflow flags? One set of terms tests for unsigned relationships, and
the other tests for signed relationships. Unfortunately, you cannot
really tell the compiler which evaluation method to use, except by
reverting to inline ASM code. But that is always your fallback option.

Additional data types (numeric and more) do three things for us: (1)
allow for a greater range of values, (2) permit more ways to convert
code from other sources into PureBasic, and (3) give us more ways to
code in meeting new challenges. So I am highly in favor of it. But the
"How" of making these new types integrate into what we already have is
going to require a lot of core rewrites, and obviously an extended syntax.