multiply(a,b)

Posted: Sat Aug 30, 2003 1:15 pm
by sec
Code updated For 5.20+

Code:

;Mul(a,b)
;Program by sec :)
; Shift-and-add multiplication: walk through the bits of a,
; adding b (doubled each pass) whenever the current low bit of a is set.
; Note: works for a >= 0 only, since >> is an arithmetic shift.
Procedure mul(a.l,b.l)
  mult.l=0
  Repeat
    If a & %1 = 1      ; low bit of a set?
      mult = mult + b  ; add the current multiple of b
    EndIf
    a = a >> 1         ; move to the next bit of a
    b = b << 1         ; double b
  Until a=0
  ProcedureReturn mult
EndProcedure

a=12
b=13
MessageRequester("","mul("+Str(a)+","+Str(b)+")=" + Str(mul(a,b)),0)
:idea:

Posted: Sat Aug 30, 2003 8:17 pm
by plouf
Nice procedure, looks like a software multiplier.
Maybe I am missing something, but what is the difference from * ?
156 = mul(12,13)
156 = 12 * 13

Posted: Sun Aug 31, 2003 6:39 am
by sec
Maybe no different, but using shifts should be faster.

Posted: Sun Aug 31, 2003 7:13 am
by Paul
Speed Test.....

Code:

Procedure mul(a.l,b.l) 
  mult.l=0 
  Repeat 
    If a & %1 = 1 
      mult = mult + b 
    EndIf 
    a = a >> 1 
    b = b << 1 
  Until a=0 
  ProcedureReturn mult 
EndProcedure 


a=12 
b=13 

tick=GetTickCount_()
For tmp=1 To 1000000
  result=mul(a,b)
Next
endtick=GetTickCount_()
Debug endtick-tick


tick=GetTickCount_()
For tmp=1 To 1000000
  result=a*b
Next
endtick=GetTickCount_()
Debug endtick-tick
On my computer:
mul(a,b) ... 437 ms
a*b ... 31 ms

Posted: Sun Aug 31, 2003 11:42 am
by sec
Paul: You defeat me.

Posted: Sun Aug 31, 2003 7:25 pm
by Berikco
sec wrote: Maybe no different, but using shifts should be faster.
The PureBasic compiler is smart; it already does this kind of optimization :)

Posted: Mon Sep 01, 2003 9:33 pm
by Psychophanta
Hey, at least sec's routine, even if it may be absurd, is a little bit tricky, isn't it?

8)

Integer Multiply

Posted: Tue Sep 02, 2003 11:08 pm
by D'Oldefoxx
Internally, the CPU does shift-and-add for each integer multiply operation. It also does the reverse for divide, which is shift and trial subtract. Division is going to take longer, since you multiply two 16-bit or 32-bit values together, but you divide a 32-bit value by a 16-bit one, or a 64-bit value by a 32-bit one, to get a result. Emulating the same process with shifts and adds is going to be somewhat less efficient, because you have to execute multiple instructions to get the same result that the CPU gets internally from an optimized hardware process.
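To make the "shift and trial subtracts" half concrete, here is a rough sketch of restoring division, in Python rather than PureBasic purely for illustration (the function name is made up, not from this thread): the remainder is shifted left one bit at a time, and the divisor is trial-subtracted whenever it fits.

```python
def shift_sub_div(dividend, divisor, bits=32):
    """Unsigned restoring division: shift and trial-subtract, one bit per pass."""
    assert divisor != 0
    quotient = 0
    remainder = 0
    for i in range(bits - 1, -1, -1):
        # Bring down the next bit of the dividend into the remainder.
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        quotient <<= 1
        if remainder >= divisor:   # trial subtract succeeds
            remainder -= divisor
            quotient |= 1          # record a 1 in the quotient
    return quotient, remainder

print(shift_sub_div(156, 13))  # (12, 0)
print(shift_sub_div(100, 7))   # (14, 2)
```

One trial subtract per bit is exactly why division takes more cycles than the shift-and-add multiply.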

So the concept is sound. In fact, this is the likely technique if you want to build your own multiply and divide routines for exceptionally large numbers that involve many byte segments.

The problem is that if you add two values, such as 999 + 999, you can get a result that is too big to fit in the same number of digits. Here, if you only have 3 digits allowed, the answer would appear to be 998, since the 1 would overflow. The same thing happens with binary registers, where the overflow goes into the Carry bit. However, if you multiply 999 * 999, the result would be 998001, which needs twice as many digits to hold. If you only allowed for three digits, you would see the result 001.
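Those overflow examples are easy to check with a quick sketch (Python used purely for illustration): truncating to the low digits, or masking to the low bits, simulates a fixed-width register.

```python
# Simulate a 3-decimal-digit register by keeping only the low 3 digits.
print((999 + 999) % 1000)   # 998: the leading 1 overflows
print((999 * 999) % 1000)   # 1: the product 998001 needs twice the digits

# The same effect in a binary 8-bit register.
print((200 + 100) & 0xFF)   # 44: the carry out of bit 7 is lost
print((200 * 200) & 0xFF)   # 64: the high half of the 16-bit product is lost
```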

These limitations are unacceptable, right? So you decide that you want registers to hold six digits. But now suppose you want to add 998001 and 12345 together. You would get 1010346, but with a six-digit limit you end up with 010346. And multiplies can produce results that require 12 digits, twice the six-digit limit. Working with powers, fractions, or even logs and trig functions can require substantially more digits to get acceptable results.

In other words, you rapidly reach a point where you need a substantial number of digits to represent values precisely, or you need some method of representing exceptionally large or small numbers to a reasonable degree of accuracy. This is where floating point numbers come in.

But the loss of accuracy with floating point may be unacceptable for some purposes, and the answer may be to adopt a Big Integer concept, which just means using as many bytes as necessary to represent any integer, up to the memory size or addressing available.
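A minimal sketch of that Big Integer idea, again in Python purely for illustration (the name big_mul is made up): schoolbook multiplication over little-endian byte segments, base 256, with the carries handled explicitly just like the carry bit in a CPU register.

```python
def big_mul(a, b):
    """Multiply two little-endian byte lists (values 0..255), return a byte list."""
    result = [0] * (len(a) + len(b))     # product needs at most len(a)+len(b) bytes
    for i, x in enumerate(a):
        carry = 0
        for j, y in enumerate(b):
            t = result[i + j] + x * y + carry
            result[i + j] = t & 0xFF     # keep the low byte in this segment
            carry = t >> 8               # propagate the rest as carry
        result[i + len(b)] = carry       # final carry of this row
    while len(result) > 1 and result[-1] == 0:
        result.pop()                     # trim leading zero bytes at the high end
    return result

print(big_mul([12], [13]))       # [156], i.e. 12 * 13
print(big_mul([231, 3], [231, 3]))  # [113, 58, 15], i.e. 999 * 999 = 998001
```

Converting back with int.from_bytes(bytes(result), "little") confirms the values; the same segment-by-segment scheme extends to division via the trial-subtract approach described above.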