I'm using it to convert an ASCII file name to Unicode for an API call, in an application that runs on both 32-bit and 64-bit systems.
1. Does PokeS() operate any differently on 64-bit than it does on 32-bit? (Just trying to verify something.)
2. How do you verify that the conversion from ASCII to Unicode actually took place, without looking at the result? (Is that even possible?)
3. How do you detect whether the original ExeName$, before it was passed to PokeS(), was actually ASCII to begin with?
The advantage of a 64 bit operating system over a 32 bit operating system comes down to only being twice the headache.
2. Providing you have a memory buffer of the correct length, is PokeS() ever going to fail?
3. Well, PokeS() assumes that the given source string is ASCII-encoded if you are compiling without the Unicode switch, and Unicode-encoded otherwise. Within the context of the code you posted, the question of whether ExeName$ is ASCII-encoded is therefore moot: how could it contain an ASCII string if the program is running in Unicode mode? It cannot.
Seems to me that this last question is really only appropriate when using memory buffers as opposed to PB strings. In that case, compare the original buffer with the newly created Unicode string; that will tell you whether the original buffer held a Unicode string. Beyond that, determining whether an arbitrary buffer contains a Unicode string is notoriously difficult.
I may look like a mule, but I'm not a complete ass.
Just wanted assurance that there is no difference in operation between 64-bit and 32-bit here. I found my answer though: there isn't.
The app can be compiled as ASCII rather than Unicode. So if I feed it something for ExeName$, how do I ensure that ExeName$ was ASCII to begin with if I don't know in advance? It's not the compilation that's in question; the app is compiled as ASCII, but ExeName$ can be either ASCII or Unicode, so how does one determine which it is? The API still requires a Unicode string regardless of the compilation, but the input to the program can contain either ASCII or Unicode. If it's fed Unicode I don't need the conversion; if it's fed ASCII, the conversion is needed. So I need to determine at the start whether it's being fed ASCII or Unicode.
Your question does not make sense to me. If you are compiling in ASCII mode then ALL PureBasic strings can really only hold ASCII strings. If you attempt to place a Unicode character array into such a string variable and that array happens to contain an ASCII-range character, then the string will terminate at that character (because the high byte will be zero, and this zero will terminate an ASCII string variable).
If you are compiling in ASCII and the API function requires a Unicode string, then make the conversion. If you are compiling in Unicode, don't. Conditional compilation can sort this out.
srod wrote:Your question does not make sense to me. If you are compiling in ASCII mode then ALL PureBasic strings can really only hold ASCII strings. If you attempt to place a Unicode character array into such a string variable and that array happens to contain an ASCII-range character, then the string will terminate at that character (because the high byte will be zero, and this zero will terminate an ASCII string variable).
If you are compiling in ASCII and the API function requires a Unicode string, then make the conversion. If you are compiling in Unicode, don't. Conditional compilation can sort this out.
The app itself can hold only ASCII strings internally. But it's not just the app itself: the app receives input from external sources, and that input can be either ASCII or Unicode. So I wanted to determine whether what the app was being fed was ASCII or Unicode.
Never mind, I've got it figured out, sort of. The key difference here is that ASCII uses one byte per character, while the Unicode format the Windows APIs use (UTF-16) uses two bytes per character. So based on that, I'm going to test the characters for size (one byte or two).
Well, even so, you cannot stuff a Unicode string into a PB string variable when compiling in ASCII mode, for the aforementioned reasons. For this you should work with memory buffers rather than PB string variables.
Yes, that's what I'm going to do.