So what exactly makes the Universal Asynchronous Receiver-Transmitter universal? The UART has a long history, starting way back in the 1840s with some of the first telegraph systems. Back then, when the telegraph key was held down, a current would flow in the receiver, pushing a stylus into a strip of paper and leaving a "mark". The Morse code signals would then be displayed visually on the paper, making it simple to read the transmitted message. It didn't take long before operators became so used to hearing the patterns of clicks that they found they could just as easily listen to the message as read it off the paper, and sound replaced the mechanical stylus. The sound was still produced only while current was flowing in the receiver, so the signal remained divided into "marks", where current was flowing, and "spaces", where it was not. In other words, the "standard" had changed from a stylus on paper to listening by ear, but the "protocol" of Morse code stayed the same.
Morse code was a phenomenal technological change, making it possible to send messages easily over very long distances, particularly once radio was implemented and wires connecting the source and destination were no longer needed. Meanwhile, someone realized the technology could be put to financial use, and thus was born the first ticker tape machine for the stock market. These machines changed the technique slightly. Instead of using special codes for each character, a series of pulses would be sent to turn a printing wheel from its current position to the next letter to be printed. A special pulse signal would instruct the printer to stamp the current letter onto the tape. As technology improved, the rotary printing wheel gave way to the Baudot code, a new protocol equating particular pulse patterns to particular characters.
Like telegraphy, the teletype grew with the new technologies of radio and, in particular, the computer. Teletype machines became useful not only as a means of communication between people, but also as an interface to early computers. Instructions could be sent by typing a particular pattern of keys, which sent a particular pattern of pulses to the computer. Results would come back as a similar pattern of pulses to a printer, which translated them into the letters and numbers we need to understand them. But even though the technologies had taken different paths, both came from the same beginnings with Samuel Morse. As such, some characteristics and naming conventions stuck; in particular, the use of "mark" and "space" to designate when current was flowing (logic high) and when it was not (logic low).
In computers, a change was made from detecting current flow to simply measuring a voltage. Some of the conventions carried over, which is why the RS-232 standard represents logic high with a negative voltage. The negative voltage originally switched on the current loop of a teletype machine to produce a "mark" signal; a positive voltage would cut off the current, producing a "space". As it turns out, different circumstances (and sometimes just different companies) required slightly different standards for sending serial data. Connectors and voltage levels for mark and space wouldn't be the same, but the protocol (the way of encoding characters in pulses) would carry over. In particular, the use of transistors made it easy to create a universal system that could be understood by any computer or device, as long as each device had something to convert the transistor-transistor logic (TTL) signals into whatever standard it expected. The protocol changed as well, using ASCII to encode the data into digital information. Thus was born the Universal Asynchronous Receiver-Transmitter. (Note that TTL uses a positive voltage, be it 5 V, 3.3 V, or anything else, for "mark" and 0 V for "space".)
Asynchronous
Now that it's clear what makes the UART universal, let's look at what is meant by asynchronous. In radio, you can send a message (by voice, digital code, Morse code, or whatever), but the message cannot be received unless someone is listening. Unfortunately, serial communication requires more than just a signal saying data is ready to be transmitted. Imagine a system where I'm going to send a message to you by holding up a giant sign. We agree beforehand that at 1:32 PM I will put the sign up; at the designated time you look in my direction, see the sign, and read the message. This would constitute a parallel type of transmission: every letter is visible all at once. Now let's say I don't have access to a big enough piece of paper to write the whole message, but I can send you one letter at a time. So we agree that every 10 seconds, I'll hold up a new letter. You come at the specified time and see me hold up the first letter, which you record on a piece of paper. Every 10 seconds you look back, see the new letter I'm holding, and record it. This is serial communication. But what happens if one of us has a bad clock that says 10 seconds are up when, say, 12 seconds have passed? Eventually the mismatched timing causes you to either record the same letter twice or miss a letter completely, depending on whose clock is faster. In order to ensure the message gets through, our clocks need to be synchronized.
There are synchronous methods of serial communication, including both SPI and I2C, which we'll address in the future. These methods keep the clocks synchronized by sharing a single clock between the sender and the receiver. The disadvantage is that sharing a clock means another wire. From the history that produced the UART, it's clear why a clock signal was not included alongside the message; instead, the transmitter and receiver agree beforehand at what rate the data will be sent. This allows sender and receiver to have their own clocks, which don't have to agree on when the second hand ticks, but it does require that each clock runs at an accurate rate. Asynchronous communication simplifies the connection by not needing a second signal in parallel with the data, at the cost of needing an accurate way to time the intervals between data.
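To put a rough number on "accurate": a UART receiver re-synchronizes on the edge of each start bit and samples each bit near its center, so the two clocks only need to agree for the length of a single frame. Here's a back-of-the-envelope sketch in C (my own illustration, not from any particular datasheet) of how much mismatch a typical 10-bit frame can tolerate:

```c
#include <stdio.h>

/* Rough estimate: the receiver re-syncs on the start-bit edge and
 * samples each bit at its center, so the accumulated drift over one
 * frame must stay under half a bit period. */
int main(void)
{
    const double bits_per_frame = 10.0;             /* start + 8 data + stop */
    const double sample_point = bits_per_frame - 0.5; /* center of last bit */
    double total_tolerance = 0.5 / sample_point;    /* worst-case mismatch  */
    double per_side = total_tolerance / 2.0;        /* split between devices */

    printf("Total clock mismatch allowed: %.1f%%\n", total_tolerance * 100.0);
    printf("Per-device tolerance:         %.1f%%\n", per_side * 100.0);
    return 0;
}
```

That works out to about 5% total, or roughly 2.5% per device; that's the bar the "good clock" discussed at the end of this post has to clear.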
Receiver-Transmitter
Enough history; let's look at how the UART actually transmits information. Whatever protocol we may be using, we are able to encode data as a series of 1s and 0s. We can encode a number as its binary representation, or we can encode a character as a particular binary number. In any case, we have a certain number of 1s and 0s to send. In a UART, we also add at least two extra bits: one to designate the start of a new set of data, and one to designate the end. These start and stop bits, with the data bits in between, constitute what we call one "frame" of data. With ASCII encoding, often 7 bits of data are sent. In addition, a tenth bit can be sent between the data and the stop bit to help determine whether the data received was correct. If the sender and receiver agree that every frame will have an even number of 1s in it, then this "parity bit" is set to 1 or 0, depending on the number of 1s in the rest of the message. The receiver can then look at the 7 data bits and the parity bit, add up the number of 1s, and if the total is even, be confident it received the right message. In 8- and 16-bit systems like a microcontroller, it can also make sense to send data in 8-bit segments instead. Often no parity bit is included in this case, to keep the total length of each frame to 10 bits. The compromise is that there is no way to check for errors in the transmission, but generally error checking is only necessary under particular circumstances.
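As a concrete illustration of that receiver-side check, here's a minimal sketch in C (the function name and types are my own, not any standard API):

```c
#include <stdio.h>

/* Even-parity check as described above: count the 1s across the 7
 * data bits plus the parity bit; if the total is even, the frame
 * passes. An odd total means at least one bit was flipped in transit. */
static int parity_ok(unsigned char data7, int parity_bit)
{
    int ones = parity_bit;
    for (int i = 0; i < 7; i++)
        ones += (data7 >> i) & 1;
    return (ones % 2) == 0;
}

int main(void)
{
    /* 'D' = 0b1000100 has two 1s, so its even-parity bit is 0. */
    printf("%s\n", parity_ok('D', 0) ? "frame OK" : "parity error");
    return 0;
}
```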
Let's say we want to encode the letter "D" using 7-bit ASCII encoding and odd parity. The ASCII code for the letter "D" is 0x44, or 0b1000100 in 7-bit binary. Now we face a choice: do we send the least significant bit first, or the most significant bit first? The typical convention in a UART is what we call "little-endian", meaning we start with the least significant bit. (This makes sense when you think in terms of a shift register; the shift register in a UART shifts the data out from the low end, so the lowest bit leaves first.)
[Figure: Representation of the ASCII character "D" in UART TTL.]
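Here's a small sketch that prints that same frame bit by bit, in the order it would appear on the wire, assuming the 7-bit, odd-parity setup from the example:

```c
#include <stdio.h>

/* Transmit-side view of the "D" example: start bit, 7 data bits sent
 * LSB first, odd parity, stop bit. The line idles at 1 (mark), so the
 * start bit is 0 (space) and the stop bit returns to 1. */
int main(void)
{
    char c = 'D';                       /* 0x44 = 0b1000100 */
    int ones = 0;

    printf("start : 0\n");
    for (int i = 0; i < 7; i++) {       /* least significant bit first */
        int b = (c >> i) & 1;
        ones += b;
        printf("data%d : %d\n", i, b);
    }
    /* Odd parity: the parity bit makes the total count of 1s odd. */
    printf("parity: %d\n", (ones & 1) ? 0 : 1);
    printf("stop  : 1\n");
    return 0;
}
```

For "D", with its two 1s, the odd-parity bit comes out as 1, giving the wire sequence 0 0010001 1 1.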
Hopefully this gives you a clear picture of how UART works. There are really no limitations on the protocol you use, so long as you have a start and a stop bit. As long as the sender and receiver agree on what goes in the middle and at what rate the information comes, it will work. Standard protocols such as those illustrated here are convenient because they make it very simple to use a computer to read the data coming from the microcontroller. In addition, keep in mind that there are standard speeds for transmitting bits (bits per second (bps), or baud), which are left over from the old teletype days. Many systems are limited to these rates, so it's often a good idea to standardize on them. If you have your own internal system, use whatever baud rate is convenient, but do remember that a lot of computers and devices may expect one of the more conventional rates, like 300, 1200, 4800, 9600, or 115200 baud.
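As a hedged illustration of why the standard rates matter in practice: many (though not all) UART peripherals derive their bit timing by dividing a system clock, commonly with 16x oversampling. The sketch below assumes a hypothetical 8 MHz clock and no particular chip's registers; note how poorly 115200 baud fits such a clock:

```c
#include <stdio.h>

/* For each standard rate, find the nearest integer clock divisor
 * (assuming 16x oversampling) and the resulting baud-rate error,
 * which eats into the per-device clock tolerance computed earlier. */
int main(void)
{
    const double f_clk = 8000000.0;     /* assumed 8 MHz system clock */
    const long rates[] = {300, 1200, 4800, 9600, 115200};

    for (int i = 0; i < 5; i++) {
        double ideal = f_clk / (16.0 * (double)rates[i]);
        long divisor = (long)(ideal + 0.5);             /* round to nearest */
        double actual = f_clk / (16.0 * (double)divisor);
        double err = (actual - (double)rates[i]) / (double)rates[i] * 100.0;
        printf("%6ld baud: divisor %5ld, error %+.2f%%\n",
               rates[i], divisor, err);
    }
    return 0;
}
```

With this particular clock, 115200 baud lands more than 8% off, which is exactly the kind of mismatch that breaks frames; this is one reason serial-friendly crystal frequencies exist.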
We've identified one of the key things we'll need for a successful UART: a good clock. Next time we'll look at some options, their limitations, and how to implement them.
Reader Exercises: Using 7-bit encoding with even parity, what would the bit stream look like to send the character "j"? How about the character "k"?
Using 8-bit encoding with no parity, what would the bit stream look like to send the newline character, "\n"? How about the character "&"?