Most programming languages have a means of defining a character as a numeric code and, conversely, converting the code back to the character.
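In Python, for example, the built-in `ord` and `chr` functions perform these two conversions:

```python
# ord() maps a character to its numeric character code;
# chr() maps a numeric code back to the character.
print(ord("A"))   # 65 (41h)
print(chr(97))    # a
```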
ASCII - American Standard Code for Information Interchange. A coding standard that assigns numeric codes to letters, digits, punctuation symbols, and control characters. The 128 standard ASCII codes are shared by most other character sets; extended character sets agree with ASCII in the first 128 characters but differ in the remaining ones.
The ASCII character set (excluding the extended characters defined by IBM) is divided into four groups of 32 characters.
The first 32 characters, ASCII codes 0 through 1Fh, form a special set of non-printing characters called the control characters. We call them control characters because they perform various printer/display control operations rather than displaying symbols.
Examples of common control characters include the carriage return (which positions the cursor at the beginning of the current line), the line feed (which moves the cursor down one line on the output device), and the backspace (which moves the cursor back one position to the left).
Unfortunately, different control characters perform different operations on different output devices. There is very little standardization among output devices. To find out exactly how a control character affects a particular device, you will need to consult its manual.
The second group of 32 ASCII character codes comprises various punctuation symbols, special characters, and the numeric digits. The most notable characters in this group are the space character (ASCII code 20h) and the numeric digits (ASCII codes 30h through 39h).
Note that the numeric digits differ from their numeric values only in the high order nibble. By subtracting 30h from the ASCII code for any particular digit you can obtain the numeric equivalent of that digit.
The third group of 32 ASCII characters is reserved for the upper case alphabetic characters.
The ASCII codes for the characters "A" through "Z" lie in the range 41h through 5Ah. Since there are only 26 different alphabetic characters, the remaining six codes hold various special symbols.
The fourth, and final, group of 32 ASCII character codes is reserved for the lower case alphabetic symbols, five additional special symbols, and another control character (delete).
Note that the lower case character symbols use the ASCII codes 61h through 7Ah. If you compare the binary representations of the upper and lower case characters, you will notice that each upper case symbol differs from its lower case equivalent in exactly one bit position.
The only place these two codes differ is in bit five. Upper case characters always contain a zero in bit five; lower case alphabetic characters always contain a one in bit five. You can use this fact to quickly convert between upper and lower case. If you have an upper case character you can force it to lower case by setting bit five to one. If you have a lower case character and you wish to force it to upper case, you can do so by setting bit five to zero. You can toggle an alphabetic character between upper and lower case by simply inverting bit five.
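The three bit-five operations described above can be sketched as follows (in Python, for illustration; 20h is the mask with only bit five set):

```python
def to_lower(c):
    # Set bit five (20h) to force an upper case letter to lower case.
    return chr(ord(c) | 0x20)

def to_upper(c):
    # Clear bit five (AND with DFh) to force a lower case letter to upper case.
    return chr(ord(c) & 0xDF)

def toggle_case(c):
    # Invert bit five (XOR with 20h) to flip between the two cases.
    return chr(ord(c) ^ 0x20)

print(to_lower("A"), to_upper("a"), toggle_case("M"))  # a A m
```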
View the ASCII CODE CHART
Indeed, bits five and six determine which of the four groups in the ASCII character set you're in:
|Bit 6||Bit 5||Group|
|0||0||Control Characters|
|0||1||Digits and Punctuation|
|1||0||Upper Case and Special|
|1||1||Lower Case and Special|
So you could, for instance, convert any upper or lower case (or corresponding special) character to its equivalent control character by setting bits five and six to zero.
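Continuing the Python sketch, clearing bits five and six amounts to masking with 1Fh, which leaves only the low five bits and so maps a letter to its corresponding control character:

```python
def to_control(c):
    # Clear bits five and six, leaving a code in the range 00h-1Fh.
    return chr(ord(c) & 0x1F)

# "H" (48h) becomes 08h, the backspace control character (Ctrl-H).
print(hex(ord(to_control("H"))))  # 0x8
```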
Consider, for a moment, the ASCII codes of the numeric digit characters:
|Char||Decimal||Hex|
|"0"||48||30h|
|"1"||49||31h|
|"2"||50||32h|
|"3"||51||33h|
|"4"||52||34h|
|"5"||53||35h|
|"6"||54||36h|
|"7"||55||37h|
|"8"||56||38h|
|"9"||57||39h|
The decimal representations of these ASCII codes are not very enlightening. However, the hexadecimal representation of these ASCII codes reveals something very important: the low order nibble of the ASCII code is the binary equivalent of the represented number.
By stripping away (i.e., setting to zero) the high order nibble of a numeric character, you can convert that character code to the corresponding binary representation. Conversely, you can convert a binary value in the range 0 through 9 to its ASCII character representation by simply setting the high order nibble to three. Note that you can use the logical-AND operation to force the high order bits to zero; likewise, you can use the logical-OR operation to force the high order bits to 0011 (three).
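Sketched in Python, the AND and OR operations just described convert a single digit in both directions:

```python
def digit_char_to_value(c):
    # AND with 0Fh strips the high order nibble: 30h -> 0, ..., 39h -> 9.
    return ord(c) & 0x0F

def value_to_digit_char(n):
    # OR with 30h forces the high order nibble to three: 5 -> 35h ("5").
    return chr(n | 0x30)

print(digit_char_to_value("7"))  # 7
print(value_to_digit_char(5))    # 5
```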
Note that you cannot convert a string of numeric characters to their equivalent binary representation by simply stripping the high order nibble from each digit in the string. Converting 123 (31h 32h 33h) in this fashion yields three bytes: 010203h, not the correct value which is 7Bh. Converting a string of digits to an integer requires more sophistication than this; the conversion above works only for single digits.
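The extra sophistication is a running total: process the digits left to right, multiplying the accumulated value by ten before adding each new digit's value. A minimal Python sketch:

```python
def ascii_to_int(s):
    value = 0
    for c in s:
        # Shift the running total one decimal place, then add the
        # digit's value (low order nibble of its ASCII code).
        value = value * 10 + (ord(c) & 0x0F)
    return value

print(ascii_to_int("123"))  # 123 (7Bh)
```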
Bit seven in standard ASCII is always zero. This means that the ASCII character set consumes only half of the possible character codes in an eight bit byte. The PC uses the remaining 128 character codes for various special characters including international characters (those with accents, etc.), math symbols, and line drawing characters. Note that these extra characters are a non-standard extension to the ASCII character set. Most printers support the PC's extended character set.
Should you need to exchange data with machines that are not PC-compatible, you have only two alternatives: stick to standard ASCII or ensure that the target machine supports the extended IBM-PC character set. Some machines, like the Apple Macintosh, do not provide native support for the extended IBM-PC character set; however, you may obtain a PC font that lets you display the extended character set. Other computers (e.g., Amiga and Atari ST) have similar capabilities. Nevertheless, the 128 characters in the standard ASCII character set are the only ones you should count on transferring from system to system.
Despite the fact that it is a "standard", simply encoding your data using standard ASCII characters does not guarantee compatibility across systems. While it's true that an "A" on one machine is most likely an "A" on another machine, there is very little standardization across machines with respect to the use of the control characters. Indeed, of the 32 control codes plus delete, only four are commonly supported: backspace (BS), tab, carriage return (CR), and line feed (LF). Worse still, different machines often use these control codes in different ways. End of line is a particularly troublesome example. MS-DOS, CP/M, and other systems mark end of line with the two-character sequence CR/LF. The Apple Macintosh, Apple II, and many other systems mark end of line with a single CR character. UNIX systems mark the end of a line with a single LF character. Needless to say, attempting to exchange simple text files between such systems can be an experience in frustration. Even if you use standard ASCII characters in all your files on these systems, you will still need to convert the data when exchanging files between them. Fortunately, such conversions are rather simple.
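One simple approach, sketched here in Python, normalizes any of the three end-of-line conventions to a single one (LF in this example):

```python
def normalize_newlines(text):
    # Convert CR/LF pairs first, then any remaining bare CRs, to LF.
    return text.replace("\r\n", "\n").replace("\r", "\n")

# Mixed MS-DOS, Macintosh, and UNIX line endings all become LF.
print(repr(normalize_newlines("one\r\ntwo\rthree\n")))
```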
Despite some major shortcomings, ASCII data is the standard for data interchange across computer systems and programs. Most programs can accept ASCII data, and likewise most programs can produce ASCII data. If you program in assembly language, you will be dealing with ASCII characters, so it would be wise to study the layout of the character set and memorize a few key ASCII codes (e.g., "0", "A", "a", etc.).
To obtain an ALT character: hold down the ALT key, type the character's decimal ASCII code on the numeric keypad, and then release the ALT key.
For example, you can produce the letter "a" by holding down ALT and typing its decimal ASCII value, 97, on the numeric keypad.
The rapid growth of the Web has created a need for a new global character encoding standard, so that computers in one world language community can communicate with those in another language community. This new standard is Unicode, whose first 128 codes are identical to 7-bit ASCII. For recent developments on Unicode, visit http://www.unicode.org/.
Sources: Various books, the Internet, and various encyclopedias.