CDMA vs TDMA
Last Updated: 03-Mar-2003
NOTE: During this discussion I will use the generic term 
"CDMA" to refer to the IS-95 standard. Technically speaking, CDMA is only a 
means to transmit bits of information, while IS-95 is a transmission protocol 
that employs CDMA. You may also hear the term "TDMA" used to refer generically 
to the IS-136 standard. Once again, TDMA is only a method of transmitting bits, 
while IS-136 is a protocol that happens to employ TDMA.
I spend quite a 
bit of time reading the messages that flow through the various PCS newsgroups 
and forums on the Internet, and if one thing is abundantly clear, it is that 
people don't seem to know the true differences between CDMA and TDMA. And who 
could blame them? There is so much hype surrounding these two competing 
technologies that it is difficult for a regular PCS subscriber to know who is 
telling the truth.
I personally am NOT an RF engineer, nor do I work for 
any of the cellular or PCS companies. It is, however, my hobby to keep up with 
the latest developments in mobile communication (as this web site amply 
demonstrates). I would like to clear the air by interjecting my own spin on this 
debate. I hope that by the time you finish reading this editorial, you will have 
a better understanding of the true strengths and weaknesses of both 
technologies.
The Basics
Let's 
begin by learning what these two acronyms stand for. TDMA stands for "Time 
Division Multiple Access", while CDMA stands for "Code Division Multiple 
Access". Three of the four words in each acronym are identical, since each 
technology essentially achieves the same goal, but by using different methods. 
Each strives to better utilize the radio spectrum by allowing multiple users to 
share the same physical channel. You heard that right. More than one person can 
carry on a conversation on the same frequency without causing interference. This 
is the magic of digital technology.
Where the two competing technologies 
differ is in the manner in which users share the common resource. TDMA does it 
by chopping up the channel into sequential time slices. Each user of the channel 
takes turns transmitting and receiving in a round-robin fashion. In reality, 
only one person is actually using the channel at any given moment, but he or she 
only uses it for short bursts, then gives up the channel momentarily to allow 
the other users to have their turn. This is very similar to how a computer with 
just one processor can seem to run multiple applications 
simultaneously.
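To make the round-robin idea concrete, here is a tiny Python sketch. The three-user frame layout is an invention of mine for illustration; it is not the actual IS-136 or GSM slot structure:

    # Round-robin time slicing: only one user owns the channel per slot.
    FRAME = ["user_A", "user_B", "user_C"]   # hypothetical 3-slot frame

    def transmissions(num_frames):
        # Yield (slot, user) pairs: one user occupies the channel at a time.
        slot = 0
        for _ in range(num_frames):
            for user in FRAME:
                yield slot, user
                slot += 1

    for slot, user in transmissions(2):
        print(f"slot {slot}: {user} transmits a short burst")

Each user gets the whole channel, but only for a brief, strictly scheduled moment.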
CDMA on the other hand really does let everyone transmit at the 
same time. Conventional wisdom would lead you to believe that this is simply not 
possible. Using conventional modulation techniques, it most certainly is 
impossible. What makes CDMA work is a special type of digital modulation called 
"Spread Spectrum". This form of modulation takes the user's stream of bits and 
splatters them across a very wide channel in a pseudo-random fashion. The 
"pseudo" part is very important here, since the receiver must be able to undo 
the randomization in order to collect the bits together in a coherent 
order.
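To illustrate why the "pseudo" part matters, here is a toy Python sketch of direct-sequence spreading: each bit is XORed with a pseudo-random chip sequence, and a receiver that knows the same seed can undo the scrambling. Everything here (the seed, the chip rate, the majority-vote decoding) is a simplification of my own; real IS-95 uses Walsh codes and far longer PN sequences:

    import random

    CHIPS_PER_BIT = 8   # arbitrary spreading factor for this toy

    def pn_sequence(seed, length):
        rng = random.Random(seed)    # "pseudo": same seed, same chips
        return [rng.randint(0, 1) for _ in range(length)]

    def spread(bits, seed):
        chips = pn_sequence(seed, len(bits) * CHIPS_PER_BIT)
        out = []
        for i, bit in enumerate(bits):
            for chip in chips[i * CHIPS_PER_BIT:(i + 1) * CHIPS_PER_BIT]:
                out.append(bit ^ chip)
        return out

    def despread(rx, seed):
        chips = pn_sequence(seed, len(rx))
        bits = []
        for i in range(0, len(rx), CHIPS_PER_BIT):
            votes = sum(r ^ c for r, c in
                        zip(rx[i:i + CHIPS_PER_BIT], chips[i:i + CHIPS_PER_BIT]))
            bits.append(1 if votes > CHIPS_PER_BIT // 2 else 0)  # majority vote
        return bits

    data = [1, 0, 1, 1]
    assert despread(spread(data, seed=42), seed=42) == data

Without the right seed, the received chips look like random noise; with it, the original bits fall right out.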
If you are still having trouble understanding the differences 
though, perhaps this analogy will help you. This is my own version of an excellent 
analogy provided by Qualcomm:
Imagine a room full of people, all trying 
to carry on one-on-one conversations. In TDMA each couple takes turns talking. 
They keep their turns short by saying only one sentence at a time. As there is 
never more than one person speaking in the room at any given moment, no one has 
to worry about being heard over the background din. In CDMA, each couple talks at 
the same time, but they all use a different language. Because none of the 
listeners understand any language other than that of the individual to whom they 
are listening, the background din doesn't cause any real 
problems.
Voice Encoding
At this point many people 
confuse two distinctly different issues involved in the transmission of digital 
audio. The first is the WAY in which the stream of bits is delivered from one 
end to the other. This part of the "air interface" is what makes one technology 
different from another. The second is the compression algorithm used to squeeze 
the audio into as small a stream of bits as possible.
This latter 
component is known as the "Voice Coder", or Vocoder for short. Another term 
commonly used is CODEC which, like modem, is a blend of two words: it combines 
the terms "COder" and "DECoder". Although each technology has chosen its own 
unique CODECs, there is no rule saying that one transmission method needs to use 
a specific CODEC. People often lump a technology's transmission method with its 
CODEC as though they were single entities. We will discuss CODECs in greater 
detail later on in this article.
Voice encoding schemes differ slightly 
in their approach to the problem. Because of this, certain types of human voice 
work better with some CODECs than they do with others. The point to remember is 
that all PCS CODECs are compromises of some sort. Since human voices 
have such a fantastic range of pitch and tonal depth, one cannot expect any 
single compromise to handle each one equally well. This inability to cope with 
all types of voice at the same level does lead some people to choose one 
technology over another.
All of the PCS technologies try to minimize 
battery consumption during calls by keeping the transmission of unnecessary 
data to a minimum. The phone decides whether or not you are presently 
speaking, or if the sound it hears is just background noise. If the phone 
determines that there is no intelligent data to transmit, it blanks the audio 
and it reduces the transmitter duty cycle (in the case of TDMA) or the number of 
transmitted bits (in the case of CDMA). When the audio is blanked, your caller 
would suddenly find themselves listening to "dead air", and this may cause them 
to think the call has dropped.
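The decision itself amounts to a crude voice-activity detector. Here is a bare-bones Python sketch of the idea; the energy threshold is completely arbitrary, and real vocoders use far more sophisticated detectors than this:

    # Decide, frame by frame, whether to transmit or blank the audio.
    SPEECH_THRESHOLD = 0.1   # arbitrary energy cutoff for this sketch

    def frames_to_send(audio_frames):
        decisions = []
        for frame in audio_frames:
            energy = sum(s * s for s in frame) / len(frame)
            decisions.append("full-rate" if energy > SPEECH_THRESHOLD else "blanked")
        return decisions

    quiet = [0.01] * 80        # background noise only
    loud = [0.5, -0.4] * 40    # active speech
    print(frames_to_send([quiet, loud, quiet]))   # blanked, full-rate, blanked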
To avoid this psychological problem, many 
service providers insert what is known as "Comfort Noise" during the blanked 
periods. Comfort Noise is synthesized white noise that tries to mimic the volume 
and structure of the real background noise. This fake background noise assures 
the caller that the connection is alive and well. 
However, in newer CODECs such as EVRC (used exclusively on 
CDMA systems), background noise is generally suppressed even while the user is 
talking. This piece of magic makes it sound as though the cell phone user is 
not in a noisy environment at all. Under these conditions, Comfort Noise 
is neither necessary nor desirable. I cover EVRC in greater detail in a separate article on this site.
CDMA
Now that we have a rudimentary understanding of the two 
technologies, let's try and examine what advantages they provide. We'll begin 
with CDMA, since this new technology has created the greatest "buzz" in the 
mobile communications industry.
One of the terms you'll hear in 
conjunction with CDMA is "Soft Handoff". A handoff occurs in 
any cellular system when your call switches from one 
cell site to another as you travel. In all other technologies, this handoff 
occurs when the network informs your phone of the new channel to which it must 
switch. The phone then stops receiving and transmitting on the old channel, and 
commences transmitting and receiving on the new channel. It goes without saying 
that this is known as a "Hard Handoff".
In CDMA however, every site is 
on the SAME frequency. In order to begin listening to a new site, the phone only 
needs to change the pseudo-random sequence it uses to decode the desired data 
from the jumble of bits sent for everyone else. While a call is in progress, the 
network chooses two or more alternate sites that it feels are handoff 
candidates. It simultaneously broadcasts a copy of your call on each of these 
sites. Your phone can then pick and choose between the different sources for 
your call, and move between them whenever it feels like it. It can even combine 
the data received from two or more different sites to ease the transition from 
one to the other.
This arrangement therefore puts the phone in almost 
complete control of the handoff process. Such an arrangement should ensure that 
there is always a new site primed and ready to take over the call at a moment's 
notice. In theory, this should put an end to dropped calls and audio 
interruptions during the handoff process. In practice it works quite well, but 
dropped calls are still a fact of life in a mobile environment. However, CDMA 
rarely drops a call due to a failed handoff.
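Here is a rough Python sketch of the phone-side view of this process. The site names, pilot strengths, and single add-threshold are inventions of mine for illustration; real IS-95 pilot management involves several thresholds and timers:

    # The phone watches pilots from many sites at once; since all sites share
    # one frequency, "listening to" a site just means decoding with its PN offset.
    ADD_THRESHOLD_DB = -14.0   # hypothetical: pilots above this join the active set

    def active_set(pilot_strengths):
        # pilot_strengths: {site_name: pilot strength in dB}
        return {site: db for site, db in pilot_strengths.items()
                if db >= ADD_THRESHOLD_DB}

    pilots = {"site_A": -9.5, "site_B": -12.0, "site_C": -18.5}
    print(active_set(pilots))   # the phone can combine frames from A and B

The phone can then move between, or combine, any members of that set without the network having to orchestrate a hard switch.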
A big problem facing CDMA 
systems is channel pollution. This occurs when signals from too many 
base stations are present at the subscriber's phone, but none are dominant. When 
this situation occurs, audio quality degrades rapidly, even when signals seem 
otherwise very strong. Pollution occurs frequently in densely populated urban 
environments where service providers must build many sites in close proximity. 
Channel pollution can also result from massive multipath problems caused by many 
tall buildings. Taming pollution is a tuning and system design issue. It is up 
to the service provider to reduce this phenomenon as much as possible.
In 
defense of CDMA however, I should point out that the new EVRC CODEC is far 
more robust than either of the earlier CODECs. Because of its increased 
robustness, it provides much more consistent audio in the face of high frame 
error rates. EVRC is an 8 kilobit CODEC that provides audio quality that is 
almost as good as that of the older 13 kilobit CODEC. Since each CDMA user 
consumes capacity in proportion to the number of bits transmitted, switching 
everyone to an 8 kilobit CODEC was an inevitable move. 
Don't confuse EVRC with the old (and unlamented) 8 kilobit 
CODEC implemented in the early days of CDMA deployment. That CODEC was simply 
awful, and very few good things could be said about it. EVRC is a far more 
advanced compression algorithm that cleans up many of the stability problems 
inherent in the two older CODECs. The sound reproduction is slightly 
muddier than that of the 13 kilobit CODEC, but the improvement in stability makes up for 
this.
Supporters often cite capacity as one of 
CDMA's biggest assets. Virtually no one disagrees that CDMA has a very high 
"spectral efficiency". It can accommodate more users per MHz of bandwidth than 
any other technology. What experts do not agree upon is by how much. 
Unlike other technologies, in which the capacity is fixed and easily computed, 
CDMA has what is known as "Soft Capacity". You can always add just one 
more caller to a CDMA channel, but once you get past a certain point, you begin 
to pollute the channel such that it becomes difficult to retrieve an error-free 
data stream for any of the participants.
The ultimate capacity of 
a system is therefore dependent upon where you draw the line. How much 
degradation is a carrier willing to subject their subscribers to before they 
admit that they have run out of useable capacity? Even if someone does 
set a standard error rate at which these calculations are made, it does not mean 
that you personally will find the service particularly acceptable at that error 
rate.
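To see why "where you draw the line" matters so much, consider this toy Python model. The error-rate formula is pure invention on my part; only the shape of the argument matters:

    # Toy model of soft capacity: every added user raises the interference
    # floor, so usable capacity depends on the quality line you draw.
    def frame_error_rate(users):
        return min(1.0, 0.002 * users ** 1.5)   # invented curve, for shape only

    def soft_capacity(max_acceptable_fer):
        users = 0
        while frame_error_rate(users + 1) <= max_acceptable_fer:
            users += 1
        return users

    print(soft_capacity(0.01))   # a strict carrier finds fewer users fit
    print(soft_capacity(0.05))   # a lenient one claims far more capacity

Two carriers running identical equipment can thus quote very different capacity figures, simply by tolerating different error rates.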
TDMA
Let's move 
away from CDMA now and have a look at TDMA. Before we can go any further though, 
I should note that there are actually three different flavors of TDMA 
in the PCS market. Each of these technologies implements TDMA in a slightly 
different way. The most complex implementation is, without a doubt, GSM. It 
overlays the basic TDMA principles with many innovations that reduce the 
potential problems inherent in the system.
To reduce the effects of 
co-channel interference, multipath, and fading, the GSM network 
can use something known as Frequency Hopping. 
This means that your call literally jumps from one channel to another at fairly 
short intervals. By doing this, the likelihood of a given RF problem is 
randomized, and the effects are far less noticeable to the end user. Frequency 
Hopping is always available, but not mandated. This means that your GSM provider 
may or may not use it.
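In Python, the scheme is about as simple as it sounds. The channel list below is hypothetical, and real GSM derives its hopping sequence from parameters broadcast by the network rather than from a seeded generator, but the principle is the same:

    import random

    # Slow frequency hopping: the call jumps channels on a pseudo-random
    # schedule known to both ends, so a problem on any one frequency only
    # corrupts a fraction of the frames.
    ALLOCATED_CHANNELS = [512, 515, 520, 523, 530]   # hypothetical allocation

    def hop_sequence(seed, num_frames):
        rng = random.Random(seed)   # handset and base station share the seed
        return [rng.choice(ALLOCATED_CHANNELS) for _ in range(num_frames)]

    print(hop_sequence(seed=7, num_frames=10))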
IS-136 is another form of TDMA, and it is this 
implementation that people generically refer to as TDMA. I personally wish they 
wouldn't do this, since it confuses the issue. It makes it sound as though 
IS-136 is the only TDMA technology. Naming conventions aside, IS-136 is 
probably the crudest implementation of TDMA. It will suffer from various 
maladies far more easily than GSM, but it does have one unique feature that 
compensates for its crudeness. It is the only technology that integrates with 
existing analog systems. While CDMA can provide handoffs from digital to analog, 
there is no way to send the call back to digital. In IS-136 you can go both ways 
at any time.
iDEN is a proprietary Motorola technology in which no other 
company seems to participate. Only Motorola makes iDEN phones, and only 
Motorola makes iDEN infrastructure equipment. Perhaps the company guards its 
technology on purpose. iDEN performs reasonably well, but its chosen CODEC is 
not quite as good as those on GSM or CDMA. In my experience, the quality of iDEN 
depends a lot on which iDEN phone you use. Some of Motorola's later models (such 
as the i85, i80, and i90) have improved things markedly.
Each of the 
three TDMA technologies uses a different CODEC. GSM sports a CODEC called EFR 
(short for Enhanced Full Rate). This CODEC is arguably the best-sounding one 
available in the PCS world. IS-136 used to sound horrible, but in the fall of 
1997 they replaced their old CODEC with a new one. This new CODEC sounds much 
better than the old, but it doesn't quite match the GSM and CDMA 
entries.
TDMA systems still rely on the switch to determine when to 
perform a handoff. Unlike the old analog system however, the switch does not do 
this in a vacuum. The TDMA handset constantly monitors the signals coming from 
other sites, and it reports this information to the switch without the caller 
being aware of it. The switch then uses this information to make better handoff 
choices at more appropriate times. 
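A bare-bones Python sketch of that reporting loop might look like this. The hysteresis margin, site names, and signal levels are all hypothetical; actual switches apply far more elaborate rules:

    # Mobile-assisted handoff: the handset measures neighbor sites between
    # its slots and reports them; the switch decides whether to hand off.
    def handoff_target(serving_site, serving_dbm, neighbor_reports,
                       hysteresis_db=6.0):
        # Hand off only when a neighbor beats the server by a clear margin.
        best = max(neighbor_reports, key=neighbor_reports.get)
        if neighbor_reports[best] > serving_dbm + hysteresis_db:
            return best
        return serving_site

    report = {"site_B": -82.0, "site_C": -95.0}
    print(handoff_target("site_A", -91.0, report))   # switches to site_B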
Perhaps the most annoying aspect of TDMA systems to some 
people is the obviousness of handoffs. Some people don't seem to hear them, and 
I can only envy those individuals. Those of us who are sensitive to the slight 
interruptions caused by handoffs will probably find GSM the most frustrating. 
Its handoffs are by far the messiest. When handoffs occur infrequently (such 
as when we are stationary or in areas with few sites), they really don't present 
a problem at all. However, when they occur very frequently (while travelling in 
an area with a huge number of sites) they can become 
annoying.
Spectral Efficiency
Channel capacity in a TDMA system is fixed and indisputable. 
Each channel carries a finite number of "slots", and you can never 
accommodate a new caller once each of those slots is filled. Spectral efficiency 
varies from one technology to another, but computing a precise number is still a 
contentious issue. For example, GSM provides 8 slots in a channel 200 kHz wide, 
while IS-136 provides 3 slots in a channel only 30 kHz wide. GSM therefore 
consumes 25 kHz per user, while IS-136 consumes only 10 kHz per user.
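The arithmetic is simple enough to verify in a couple of lines of Python:

    # Per-user spectrum consumption, straight from the figures above.
    def khz_per_user(channel_khz, slots):
        return channel_khz / slots

    print(khz_per_user(200, 8))   # GSM:    25.0 kHz per user
    print(khz_per_user(30, 3))    # IS-136: 10.0 kHz per user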
One 
would be sorely tempted to proclaim that IS-136 has 2.5 times the capacity of 
GSM. In a one-cell system this is certainly true, but once we start deploying 
multiple cells and channel reuse, the situation becomes more complex. Due to 
GSM's better error management and frequency hopping, the interference of a 
co-channel site is greatly reduced. This allows frequencies to be reused more 
aggressively without a degradation in the overall quality of the 
service.
Capacity is measured in "calls per cell per MHz". For an IS-136 
system using N=7 reuse (this means you have 7 different sets of frequencies to 
spread out around town), the figure is 7.0 (which is an unfortunate coincidence, 
as there is no direct relationship to the N=7 value). In GSM we get figures of 
5.0 for N=4 and 6.6 for N=3. It was hoped that IS-136 could use tighter reuse 
than N=7, but its inability to cope with interference made this 
impossible.
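For the curious, here is one way to reproduce those figures in Python. They only work out if each "MHz" is taken to mean paired spectrum (uplink plus downlink), so that half of it carries traffic in each direction; that pairing assumption is mine, not something stated with the published figures:

    # calls per cell per MHz, assuming paired (duplex) spectrum
    def calls_per_cell_per_mhz(channel_khz, slots, reuse_n):
        usable_khz = 1000 / 2                 # half of each paired MHz per direction
        channels_per_cell = usable_khz / channel_khz / reuse_n
        return channels_per_cell * slots

    print(round(calls_per_cell_per_mhz(30, 3, 7), 1))    # IS-136, N=7: ~7.1
    print(round(calls_per_cell_per_mhz(200, 8, 4), 1))   # GSM,    N=4:  5.0
    print(round(calls_per_cell_per_mhz(200, 8, 3), 1))   # GSM,    N=3: ~6.7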
Computing this figure for CDMA requires that certain 
assumptions are made. Formulas have been devised, and using very 
optimistic assumptions, CDMA can provide a whopping 45 users per cell 
per MHz. However, when using more pessimistic (and perhaps more 
realistic) assumptions, the value is 12. That still gives CDMA an almost 2:1 
advantage over the TDMA competition.
In-building Coverage
Now let's deal with another issue involving CDMA and TDMA. 
In-building coverage is something that many people talk about, but few people 
properly understand. Although CDMA has a slight edge in this department, due to 
a marginally greater tolerance for weak signals, all the technologies fare about 
the same. This is because the few dB advantage CDMA has is often "used up" when 
the provider detunes the sites to take advantage of this process 
gain.
A CDMA phone might be able to produce a reasonable 
call with a signal level of -106 dBm, whereas a GSM phone might need -99 dBm to 
provide the same level of service. Does this mean that CDMA networks will always 
have a 7 dB advantage? If all things were equal, then yes, but they aren't 
equal. As I mentioned earlier, channel pollution is a big issue with CDMA 
networks, and to keep channel pollution to a minimum in urban environments a 
CDMA provider needs to keep site overlap to a minimum. Consequently, a CDMA 
network engineer will spend that 7 dB margin by 
de-tuning the network accordingly. This means that CDMA users will frequently 
see markedly lower signal levels indoors than a GSM user will, but in the end it 
all works out about the same.
Buildings come in many configurations, but 
the most important aspect to their construction is the materials used. Steel 
frame buildings, or those with metal siding, shield their interiors more 
thoroughly than buildings made of wood. Large window openings allow signals to 
penetrate more deeply into buildings, unless the windows have metallic tint on 
them. Malls with glass roofs will generally provide better service than fully 
enclosed ones. More important than the type of building however is the proximity 
of the nearest site. When a site is located just outside of a building it can 
penetrate just about any building material. When a site is much further away 
however, the signals have a much harder time getting past the walls of a 
structure.
When it comes to distance, remember that signals are subject to the "distance squared law". This means that signal strength decreases with the square of the distance. A site 0.25 kilometers away will deliver 4 times the signal strength of a site 0.50 kilometers away, and 16 times that of a site 1.0 kilometers away. Distance squared, however, is the rate of signal reduction in free space. Recent studies have shown that terrestrial communications are usually subject to rates as high as "distance cubed", or even "distance to the 4th". If the latter is true, then a site 1.0 kilometers away will actually be 256 times weaker than a site 0.25 kilometers away.
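Those ratios all fall out of a one-line formula, shown here in Python for any path-loss exponent (2 for free space, 3 or 4 for cluttered terrestrial paths):

    # Relative signal strength of a near site versus a far one.
    def relative_strength(near_km, far_km, exponent):
        return (far_km / near_km) ** exponent

    print(relative_strength(0.25, 0.50, 2))   # 4x   (inverse square)
    print(relative_strength(0.25, 1.00, 2))   # 16x
    print(relative_strength(0.25, 1.00, 4))   # 256x (distance to the 4th)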
In-building penetration is therefore less a technology issue 
than it is an implementation issue. Service providers who have sites close to 
the buildings you commonly visit will inevitably look better than those who don't. 
Never use someone else's in-building experiences unless you expect to go in the 
same buildings as they do. You cannot make useful generalizations about 
in-building coverage based upon one person's experience.
CDMA does have 
one peculiarity concerning in-building penetration that does not affect TDMA. 
When the number of users on a channel goes up, the general level of signal 
pollution goes up in tandem. To compensate for this the CDMA system directs each 
phone to transmit with slightly more power. However, if a phone is already at 
its limit (such as might be the case inside a building) it cannot do anything to 
"keep up with the pack". This condition is known as "the shrinking coverage 
phenomenon" or "site breathing". During slow periods of the day you might find 
coverage inside a specific building quite good. During rush hour however, you 
might find it exceedingly poor (or non-existent).
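A toy Python model makes the effect easy to see. Every number below (the power limit, the per-user increment, the indoor phone's starting requirement) is invented purely for illustration:

    # "Site breathing": rising load forces every phone to transmit harder;
    # a phone already at its limit falls out of the usable footprint.
    MAX_PHONE_POWER_DBM = 23.0   # hypothetical handset transmit limit

    def still_covered(required_dbm_at_low_load, extra_users):
        required = required_dbm_at_low_load + 0.2 * extra_users
        return required <= MAX_PHONE_POWER_DBM

    indoor_phone = 21.5   # already near the limit deep inside a building
    print(still_covered(indoor_phone, extra_users=5))    # True: off-peak
    print(still_covered(indoor_phone, extra_users=20))   # False: rush hour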
Some Final Observations
CDMA really comes into 
its own when you are out in the countryside with few sites covering large 
expanses of land. Under these conditions CDMA provides extremely stable audio 
with few frame errors to mess things up. This is because Channel Pollution is 
almost non-existent in these situations. Under similar conditions TDMA suffers 
too readily from interference and it will often blank the audio. Many people who 
use CDMA systems in sparsely populated areas have given this technology 
extremely high marks.
TDMA systems also have great difficulties in open 
regions just outside densely populated areas. In this situation your phone is 
exposed to signals coming from countless sites in the densely populated areas, 
but there are no dominant signals from a close-by site. CDMA can suffer under 
these conditions too (due to channel pollution), but not quite so badly. Valleys 
don't present a big problem for TDMA, but high ground is a killer. You can 
experience choppiness in the audio even when your signal indicator is reading 2 
or 3 bars.
So in the end, can we really proclaim a winner in the CDMA vs 
TDMA war? For the time being I think not. Perhaps in the future when newer 
technologies built around wider bandwidth CDMA technologies come into existence 
the issue will warrant another look. By that time, even GSM will have moved to 
CDMA as its air interface of choice, but don't let that fool you into believing 
that they think the current TDMA air interface is inadequate for its purpose. 
Future standards are being built around high speed data.
If you are 
presently in the market for a new phone, my advice to you is to ignore the hype 
surrounding the technologies and look at the service providers instead. Judge each 
with an eye to price, phone choice, coverage, and reputation. Technology should 
play a very small role in your choice. If you follow this advice, you'll 
probably be much happier with the phone and service you inevitably wind up 
with.