I admit my ignorance of the fine art of hacking running programs on a
working system. My experience with computers dates back to the mid-1960s,
when code writing was an effort to advance the art and form, not
to create code that was injurious or destructive in nature. So when I say
that I don't know anything about SoftICE and don't intend to, it isn't
because I feel smug in that ignorance - I simply have a problem
with the idea of going in that direction.
I have to ask what it is that I need to protect, or for that matter, what
others need to protect, that would require PureBasic to adopt such a method.
Most people here are probably interested in learning to write their own
programs, but once they learn how difficult it is to complete elaborate
projects, they will likely divert much of that energy and interest
elsewhere. For instance, some will continue to treat it as a hobby,
perhaps collecting sample code and writing a few utilities; others will
look for commercial outlets for their programming talents, likely
moving on to other languages; and perhaps a few will focus on making
that major game or application a reality. I confess to fitting in the first
category, having found that full-time programming can consume you.
I went through some burnout phases after getting in too deep, and I
also found I don't have the imagination for creating games from scratch.
As you know, walking a stream of ASM statements, watching registers,
and trying to track what is actually going on is both tedious and
difficult. You not only have to know a lot of details about the architecture
of the PC, but you also have to draw conclusions as to what the sum of all
those activities comes to. I think you gloss over just how demanding
that effort is when you compare what C++ does to what
PureBasic does. In fact, PureBasic is probably more structured
and repetitive when viewed in ASM mode, and uses only a small
subset of the operations available to a C++ programmer, but
interpreting that code and comprehending what is happening within the
program is still a major challenge - unless you are the type of gearhead
that literally feeds on this kind of stuff.
The white paper you are so enamored with may be all that you think it
is. But if you bought the toughest hasp lock in the world, you would then
have to think about the skimpy hasp behind it. If you installed a stronger
hasp, then you would have to think about the weak hinges that were used.
If you upgraded the hinges, then you would have to think about the flimsy
wood or the thin sheet metal of the box itself. And once you decide
that everything needs to be bolstered, you might want to think about
whether the contents deserve that much protection.
For me, it would be definite overkill. I publish on the forums or give away
most of what I do that I find has some value. For you, where you are
banking your livelihood on what you accomplish, perhaps no safe could be
too strong. So you have your wants and needs, and I have mine.
The point is, why would Fred be compelled to support either of us? If
this technique would work for him, for his product, then perhaps he would
want to explore and extend it to protect PureBasic. But why would he
then divulge it to anyone else? It only weakens his defenses if he
makes it available to others, or even brings it to their attention. Why would
you want a feature in PureBasic that you would use and depend upon,
but that might ultimately be turned against you?
The best protection is ignorance. The furtherance of ignorance is
secrecy. In military parlance, you can have cover and you can have
concealment. If covered but in sight, you draw fire, and with enough
force, you can be overwhelmed. If concealed, you draw no fire, until you
are discovered. Staying concealed is the best way to defend yourself,
but having good cover as well extends the time before your defenses
completely fail.
What it sounds like to me is that you have an expert in the area of
software and system security who says, "You know, if you did this and
this, then it would be so tough that even I would have a hard time
getting past that defense." So he thinks now. Then he publishes. And
people adopt his technique. Then someone else says, "Hey, I know how
to identify if and where this technique is being used, and I figured out
a way to counter it." And in time it comes to light that the protection
scheme has been compromised.
So who gets the blame? The guy that wrote the white paper? The guy
that implemented the method? The software producer that was deluded
into thinking this method would ultimately and always protect his code?
You still hold to the concept that one man's faith in his own intellectual
achievement will withstand any assault. Alexander the Great cut the
Gordian knot rather than accept the challenge posed by the master
craftsman who created that tangle. Saddam wanted a ground war where
he was strongest, but you don't deliberately take on your opponent where
he wants or expects you. The Great Wall of China didn't keep the
Mongols out.
I once worked for a company that sold systems and software to clients,
and we had secret administrative accounts that we could use in an
emergency to get into their systems remotely, override their settings,
and restore the system if they screwed it up. You would call these
"back doors", and we were confident that these accounts were secure.
We got into the systems over dialup lines, which we had to ask the
clients to enable - that was their assurance that we were not getting in
there unless there was a real need. But we rarely told them what we had
to do to overcome their problem, and we were not to show them or tell
them what software was involved. That software was there to help us in
our task; clients were not permitted to use it themselves, since they had
not paid for it, and it was only there for us to use in those controlled
situations when it was absolutely necessary.
One client promptly attached a serial printer to the serial port in parallel
with the modem, and got a hardcopy of our user name, our password, and
every command we gave and every response back from the system. This
went on for the longest time, until one guy on site decided to brag about
the fact that he knew exactly what I was doing, what software I was using,
and that they had often gotten in using the same account. I reported this
to my manager, who just shrugged. Without physical control over the
client's premises, we had no true protection.
I repeat: without physical control over the outer layers, you have no true
defense. If a software protection scheme became universal (such as
becoming a feature of PureBasic), then that in itself would guarantee that
someone, somewhere, is going to hack it. All you are doing is promoting
a temporary and false sense of security, and leaving the method's
implementer at possible risk of liability for any promises he makes in the
use of it.
It is well known that banks have vaults. That does not keep them from
being robbed. Any metal can be cut through or blown apart. What the
alarm electronics and timers do is alert security forces to what is
happening, so that they can respond before the walls or door are
breached. But the electronics can be bypassed as well, assuming that
the thieves are knowledgeable enough or have access to the right
information. By defeating the alarms, and given enough time to work on
the hardware without detection, the whole protection scheme can be made
to fail. So a necessary ingredient is close supervision over the
physical and software layers as well. You don't get this when the client
has the system and your software is installed and running on it.
There was a drive manufacturer that produced a drive that worked with
the Commodore C64 computer. It was an intelligent drive, meaning that
the C64 would issue high-level commands, and the drive would interpret
these and respond appropriately. It worked fine, and proved to be
fully compatible with programs written for the Commodore version of the
drive - even accepting overwrite software that would replace the
drive's internal code and permit custom operations to be performed.
So naturally, people wondered how this clone could do this, unless it
copied Commodore's internal software exactly or was a clean-room
reverse-engineered effort. Otherwise, the suspicion was that the
internal software was an unlicensed, illegal copy of Commodore's
code.
The surprise was that the code buried in the internal ROM chip was
complete garbage. It made no sense, especially since the guys that
checked it out recognized the processor chip that was installed. They
then monitored the data lines between the drive and the computer, and
could see that the dialogue was normal - too normal, in fact.
So they then took a closer look at the drive's board layout, and saw that
the ROM chip's lines were crossed where they were attached to the
processor. When they monitored the crossed lines at the processor end,
they saw that the code had swapped bit positions and was now an
exact duplicate of the code used in the Commodore drive. The company
that produced the drive was forced off the market, and its defense that
the ROM code was not storing the copyrighted code from Commodore was
rejected by the court. If they could have kept people out of the box
longer, they could have gotten away with it longer. But eventually,
someone would probably have gotten in, or found some means to prove it
had to be Commodore's code anyway.
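What those investigators found can be sketched in a few lines: crossed data lines between the ROM and the processor amount to a fixed bit permutation, so a raw dump of the chip looks like garbage while the processor sees the real code. The particular swap (bits 0 and 7) and the sample bytes below are my own illustration, not the clone's actual wiring:

```python
# Crossed ROM data lines modeled as a bit permutation. Swapping two data
# lines scrambles every byte the same way; reading back through the same
# swap recovers the original code. The swapped positions are hypothetical.

def swap_bits(byte: int, a: int, b: int) -> int:
    """Exchange bit positions a and b within one byte."""
    if ((byte >> a) & 1) != ((byte >> b) & 1):
        byte ^= (1 << a) | (1 << b)   # the bits differ, so flip both
    return byte

def reroute(rom: bytes, a: int = 0, b: int = 7) -> bytes:
    """Model the crossed lines; the swap is its own inverse."""
    return bytes(swap_bits(x, a, b) for x in rom)

real_code = bytes([0xA9, 0x01, 0x8D, 0x20, 0xD0])  # made-up 6502-style bytes
dump = reroute(real_code)          # what a straight ROM read produces
assert dump != real_code           # the dump looks like garbage...
assert reroute(dump) == real_code  # ...until the line swap is undone
```

Monitoring the lines on the processor side of the crossing, as the investigators did, is exactly the `reroute(dump)` step.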
has-been wanna-be (You may not agree with what I say, but it will make you think).
Having devoted a lot of effort to trying to show why a generalized,
widely known method of software security is at great risk of being
successfully hacked, and that all software security is only secure
by a perpetuated state of ignorance (secrecy), I did want to make
one further observation:
You don't have to ask for this feature in PureBasic. If you can do
this in another language that is capable of producing libraries, then
you can create a library that provides for that need. But you would
then be foolish to divulge what it does or how it does it. Mum's the
word, right? Of course it can still be hacked and eventually
defeated, but if there is only a limited amount of distribution, then
the justification for hacking it might remain low.
Look at the cotton gin. Once it was established that it was a method
for separating cotton fibers from seeds, an endless array of copycats
and variants became the norm. It was too necessary and too successful
to stay exclusive. If Eli Whitney could have figured out a way to hide
the concept behind his gin, he could have been far more
successful with it. But to patent it, he had to make a full disclosure.
That is why the Coca-Cola formula has never been patented, just as many
other recipes are never divulged. That is why many companies guard their
source code as closely as possible. But there is no real protection
against reverse engineering of running code or clean-room duplication
of functionality.
I worked for a company that refused to give clients access to the
source code. However, one client got a clause in their contract that
said they would get one copy of all the source code installed
on the system. This caused a real stink in the company. But finally, the
code was delivered to the client - several pallets of huge boxes of
printouts for over two and a half million lines of code, unindexed, in
no certain order. As expected, the client found the code to be less
than useful. In fact, I heard they eventually destroyed it, deciding they
were in the service business, not the business of hacking their
supplier's code.
The method described in the "White Paper" involves an effort to randomly
select a region of memory, then to randomly select an offset position
somewhere in that region, and to randomly fill the unused portion of the
region with characters. The region of memory assigned will become a
pointer reference. The offset will either be a pointer offset or a modified
pointer. Any reference to it then becomes one based on pointers. That
in itself is a clue to what is going on. A call to any secure memory
manager would also be a clue. A call to any validation process would
be exposed as a Procedure, and an examination of what the Procedure
does (checking code against some area of memory outside the limits
of the program and returning TRUE if they match) means that all the
hacker has to do is modify the code to pass back a TRUE and exit the
Procedure. A quick hack job, just like Alexander the Great's stroke of
the sword.
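As a concrete sketch of that weak point (the names and layout here are invented for illustration, not taken from the white paper): however elaborate the random region and offset are, the validation Procedure boils down to one comparison whose result a hacker can hard-wire.

```python
import os

# Illustrative version of the scheme described above: pick a region,
# fill it with random bytes, hide the payload at a random offset, and
# validate by comparing the program's copy against the hidden one.

def build_region(payload: bytes, size: int = 256) -> tuple:
    """Random filler with the payload hidden at a random offset."""
    offset = int.from_bytes(os.urandom(2), "big") % (size - len(payload))
    filler = bytearray(os.urandom(size))
    filler[offset:offset + len(payload)] = payload
    return bytes(filler), offset

def validate(block: bytes, region: bytes, offset: int) -> bool:
    """The whole defense ends in this one boolean."""
    return region[offset:offset + len(block)] == block

payload = b"\xDE\xAD\xBE\xEF"
region, offset = build_region(payload)
assert validate(payload, region, offset)
# The hack: patch validate() to 'return True' (one instruction in ASM)
# and the region, the offset, and the randomness never matter again.
```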
Of course a good program will try to image itself in such a way that any
change in its code will cause it to fail, so it may rely on a recheck of its
checksum, check the image in memory against the one on the hard
drive, check the last time it was updated, or use some custom process
that leads to the same end. But these can also be detected and hacked.
There's no way around it. Software is inherently insecure.
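A minimal sketch of the checksum variant mentioned above, assuming the program hashes its own on-disk image (the path handling and choice of hash are mine, not any particular product's):

```python
import hashlib
import sys

# Self-integrity check: hash the executable image on disk and compare
# against a known-good value. Where that value is stored, and how the
# comparison is hidden, is the whole game - and the whole weakness.

def image_checksum(path: str) -> str:
    """SHA-256 of the file as it sits on disk."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def self_check(expected: str, path: str = sys.argv[0]) -> bool:
    """Refuse to run if the image was modified."""
    return image_checksum(path) == expected
# As with the pointer scheme, this compiles down to one comparison and
# one conditional branch, and patching that branch disables the check.
```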
At some point the program has to get on with the business at hand, the
tasks that it is supposed to perform, and all a hacker has to do is find
where that point is and go right to it. Then all the elaborate safeguards
have been blown away. By encrypting the program, you ensure that the
decryption process has to run successfully before that point can be
found and used, but the weakness is that the decryption key has to be
known to the decryption process, so if you can uncover the key in
your hack, you can run the decryption process at will. You don't even
have to worry about how the encryption/decryption process works.
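That flaw is easy to show in miniature. XOR stands in here for any symmetric cipher, and every name is illustrative; the only point is that the key must ship inside the program, so extracting it lets anyone run the decryption at will:

```python
# The decryptor needs its key at run time, so the key is in the binary.
# Whoever dumps the binary holds everything required to decrypt.

KEY = b"s3cr3t"  # embedded key - readable by anyone with the executable

def xor_crypt(data: bytes, key: bytes = KEY) -> bytes:
    """XOR cipher: the same call both encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

protected = xor_crypt(b"the program's real entry point")
# A hacker who recovers KEY never needs to understand the cipher:
assert xor_crypt(protected) == b"the program's real entry point"
```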
You see? Everything that you can think to do in software to protect it
becomes laid out in tiny ASM steps in your program for others to
examine, determine what it is that you are doing, and then neutralize. You
can't prevent that from happening, so your best hope is social
engineering - counting on the fact that most people tend to be honest
(or too lazy to be dishonest), that most people will not bother if they see
that it isn't straightforward (too much time and effort involved, with
no known endpoint), and that if they finally consent to just buy the program,
most people have no strong motivation to give it away - not if they
want to see the vendor continue in business in order to provide support
or create upgrades later.
I think the main issue is whether the user feels that they got fair value
for their dollar. Software is expensive to create, but cheap to duplicate
and distribute. Volume of sales should play a role in the final cost.
When the price is perceived as too high (as with Microsoft
products), people look for reasonable (less expensive) alternatives or
for illegal copies. I don't think there is any doubt that Microsoft charges
more than it has to, but at the same time, I tend to feel that it charges
less than some other company might if they had that leverage on their
side.
Microsoft was not unopposed in the OS market, or the office suite
market, or the browser market, or any other market that it entered,
but it was highly competitive, either beating out the competition,
underpricing the competition (even giving away the operating system
with every new computer), or absorbing the competition. I remember
when decent production software started at about $3,000 for a single
license (back when PCs were nothing like they are today, and cost as
much as or more than a new car). Microsoft has been instrumental in
setting a lower price point for much of the software in use today. Some
software still runs into the thousands of dollars, but increasingly, there
are competing products that underprice them, though perhaps not yet
with the same range of features. Some really decent software can be
had for $10 or less, and most independent producers seem to be looking
at the $15 to $50 range as the sweet spot for selling their own wares.
Lower prices increase the number of sales, provided that the product has
some market appeal. However, selling at too low a price point
undervalues a product, causing the buyer to become wary that the
product is overhyped or somehow inferior. You have to understand that
people expect to pay a fair and reasonable price for what they get.
Their experience shows that there are a lot of exceptions with the network
and the software available over it, but they will still pay for something if
they are convinced that it meets their needs and isn't unfairly priced. Any
excessive effort to safeguard the software, especially if it goes far
beyond the inherent value of the software, will anger legitimate
customers to no end; they often feel that, regardless of licensing terms,
once they pay for the software it is theirs to use how and where they
will. You bought a program for your laptop - you aren't going to install it
on your home desktop as well? You buy a new computer - you aren't
going to transfer all your data files and applications to it from your old
one? You find one or more apps or games that won't run after you try to
copy them - are you going to have kind thoughts about the people that
put out that program? What about protected software that you have
to uninstall from your present machine to reinstall on your new machine
- do you find that excessive and frustrating? What if you were planning
to keep that old PC around as your emergency backup?
Technique, then, is one question. Need is another. And justification is
a subjective matter in the mind of the beholder. You have to find a
balance between these different factors. People who put trust ahead
of profit are not always rewarded, but put profit ahead of your customer,
and watch your business go up in flames. It happened to TurboTax,
which found that guarding the store kept the customers out in droves (or
was that TurboCut? Not quite sure now). Anyway, it cost them, because it
broke the bond of customer loyalty by showing a lack of faith in the
customer's integrity, regardless of whether that trust was deserved or not.
has-been wanna-be (You may not agree with what I say, but it will make you think).
Every one of your points, in between delusional rants, can be countered. Your post shows your ignorance not only of Thinstall's ability to protect your .exe on the hard drive, but of more advanced programming techniques involving methods of coding that make it harder to hack a program in a computer's memory.
In all your figurative language and pie-in-the-sky poetry of words, you fail to grasp the idea that sections of code that can randomly change their memory location at time of execution make a program much more difficult to hack. It's really that simple. Your rants about other protection are irrelevant to the purpose of this message thread. I used to think like you before Thinstall. As you seem to be so interested in my feature request, you should go to the website and browse the forums. The good stuff is now password protected, but there is still some valuable information in the public forums. There was even a challenge and debate between Jonathan and one of the greatest hackers (and a competitor of Thinstall) I've ever seen. Back and forth, Jonathan created a protected .exe and this guy hacked it, back and forth for days, until Jonathan learned how he hacked it and made a modification to Thinstall that is simply brilliant. Jonathan has been at this for years, and continues to challenge anyone on Earth to hack a Thinstall .exe. To date, no one has been able to do it, or at least no one has come forward, even as an anonymous poster. The secret of business is to know something that nobody else knows.
Your post reminds me that you are where I used to be in my thinking about software protection:
10 = little understanding
9 <----- you are here - throw in the towel, not possible, it's all relative dude
8
7
6
5
4
3
2
1 <------ Jonathan Clark is here - very possible, and this is how.
All of your points are amateur points, points made time and time again that reflect a conceptual understanding of about 9, with little non sequiturs and history mixed in. No matter how beautifully and artfully you make your points, they still reflect an understanding of 9. The reason you maintain that understanding level is precisely as you stated: your livelihood does not depend on the security of your software.
Any man in this life, in any discipline, whose livelihood depends on that something is a man far more capable and determined in that something than a man who does it lackadaisically, for fun, or as a hobby. Necessity never made a good bargain.
Men of genius do not excel in any profession because they labor in it, but rather they labor in it because they excel.
Now go learn more about software protection and security. Go learn more about Thinstall; there's a free download. Try and hack it. Break into the .exe in memory and step through the ASM instructions; look for the pointers. Try and find someone else who can hack it. Buy it and test it for yourself. Get the manuals and learn how it works. In other words, quit talking in pie-in-the-sky, sage-on-the-stage pretty language from your hobbyist armchair, and get your hands dirty with some practical hands-on learning and testing. Only when you do this - when you do practical hands-on research on your own, and stop recycling information you've heard from others - will your conceptual level of understanding come down from a 9. Only then will you fully understand and appreciate this feature request, and I would expect you to post a message stating: "Man, I thought I knew so much until I did what you said. I now understand software security so much better than I did before. I now fully appreciate your feature request."
As far as I'm concerned, you are small fry, not even an employee of Fantaisie Software, and you have nothing to do with the value chain associated with PureBasic and my business. I don't have the time or desire to continue this pissing contest with you, other than to say you need to humble yourself and not interfere with other people's feature requests, for you will make enemies that way. Go and do your delusional rants and debates elsewhere if you desire, but stay out of my feature request communications to Fantaisie Software. For if you nip at my heels, you are gonna get kicked. NUFF SAID.
Speaking of rants, I stopped reading your response after a few lines.
You are equating my limited knowledge of the art of hacking with the
obsessive idea that nobody can beat a given product. I never made that
claim myself, and as you point out, one successful hack of the software
had to be countered to bring it to its present level. And as you also
point out, you don't know of another successful hack - but then, why would
someone tell you if they succeeded? There may be greater rewards in
not telling you and keeping the method to themselves.
Conceptually, any knot can be untied. Any trail can be unravelled.
Anything that follows a logical sequence will yield to a logical examination.
You might succeed in completely erasing all evidence you were ever there
if you abandon that sequence, but a program walks that sequence every
time it runs. It becomes the roadmap for what you are doing and how
you did it. A gifted hacker will be able to understand and work it out.
I've tried to illustrate this, but if you don't like my examples, there are
others. It's just a tug of war between those that seek to protect and
those that seek to expose.
If you are deluded into thinking you have the unobtainable, the perfect
solution to the need to protect software against all forms of attack and
analysis, then that's fine by me. I am afraid that I regard your perception
as even more limited and blinded than mine. I understand the greater
principle involved. You are blinded by the immediacy of an association
that has been of benefit to you. It would be a pity if you were ever
proved wrong. You may never be proved wrong, or you may never
know of it firsthand. That doesn't mean it won't happen. It certainly
does not mean it can't happen.
Randomness is in fact an illusion - it merely means we cannot factor in all
the necessary elements to determine an outcome. And any form of
managed randomness is only a pseudo-random sequence. That is, from
the same given seed, you will always get the same outcome. That makes
it useful. That also makes it vulnerable. The seed is the prime
vulnerability, but even the pattern demonstrated as the pseudo-random
sequence is followed becomes a signature.
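In code, the seed problem looks like this: a protector that draws its "random" memory offsets from a seeded generator reproduces the identical layout for anyone who learns the seed. The generator and the offsets below are illustrative, not any real product's scheme:

```python
import random

# Managed randomness is deterministic: same seed, same sequence.

def random_layout(seed: int, count: int = 5) -> list:
    """Hypothetical protector choosing 'random' 16-bit offsets."""
    rng = random.Random(seed)
    return [rng.randrange(0x10000) for _ in range(count)]

# The defender and an attacker who recovered the seed see the same thing:
assert random_layout(0xBEEF) == random_layout(0xBEEF)
# A different seed gives a different (but equally reproducible) layout:
assert random_layout(0xBEEF) != random_layout(0xF00D)
```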
However, this debate is not worth my time anymore. Really. It's not a
topic I need to follow, it's not a product I would buy or deploy, it's not a
feature I would use, and it is not something I would try to prove or
disprove on my own. So it has no value for me.
has-been wanna-be (You may not agree with what I say, but it will make you think).