2G, 3G, 4G, and everything in between: an Engadget wireless primer (repost)
# PDA - 掌中宝
t*5
1
Where did you all read this?
c*d
2
[Forwarded from the LeisureTime discussion board]
From: liquer (咣咣珰), Board: LeisureTime
Subject: Spring Festival character-guessing game, round three (with participant IDs)
Posted: BBS 未名空间站 (Sun Feb 7 12:29:17 2010, US Eastern)
[puzzles 14 through 21: images not shown]
IDs that have submitted so far (partial list; I left out any ID I wasn't sure of, to avoid misleading anyone. Feel free to identify yourself):
ttango, jz09, iamwhatieat, riversider, LeChuck, JazzCat2008, phoenixali,
NINI621, musicloa, alittlewitch, upupupu, ifulemitbbs,
MountEverest, loca, sasa, suwen
If I missed your ID, please speak up here or send me a site message. Thanks!
f*m
3
http://www.engadget.com/2011/01/17/2g-3g-4g-and-everything-in-between-an-engadget-wireless-prim/
It's hard to believe nowadays, but in a simpler time, cellphones really
were called "cell phones," not dumbphones, smartphones, feature phones,
or superphones. They bulged in your pocket -- if they fit in your pocket
at all, that is -- and they made calls. That's it. None of this social
networking, messaging, browsing, Instagramming, Flash 10.1 nonsense.
They didn't upload 5 megapixel photos to Flickr, and they most certainly
didn't turn into wireless hotspots.
Of course, those bleak days are mercifully behind us now -- but as
carriers around the world start to light up a promising new generation
of high-speed wireless networks, things are beginning to get a little
confusing. Just what is "4G," anyway? It's one higher than 3G, sure, but
does that necessarily mean it's better? Why are all four national
carriers in the US suddenly calling their networks 4G? Is it all the
same thing? Answering those questions requires that we take a
little walk through wireless past, present, and future... but we think
it's a walk you'll enjoy.
First things first: "G" stands for "generation," so when you hear
someone refer to a "4G network," that means they're talking about a
wireless network based on fourth-generation technology. And actually,
it's the definition of a "generation" in this context that has us in
this whole pickle in the first place; it's the reason why there's so
much confusion. But more on that in a bit -- first, let's take a trip
down memory lane into the primordial ooze that gave rise to the first
generation way back in the day.
1G
Our journey begins in the early 1980s with the introduction of several
groundbreaking network technologies: AMPS in the US and a combination of
TACS and NMT in Europe. The meanings of those acronyms are unimportant -
- there won't be a quiz later. All you really need to know is that
unlike earlier systems, these new standards were given enough spectrum
for reasonably heavy use by subscribers, were fully automated on the
carrier's end without requiring any human operator intervention, and
used electronics that could be miniaturized enough to fit into smallish
packages (think Motorola DynaTAC -- early prototype pictured right).
Though there were several generations of mobile telephone services
before these that date all the way back to the 1950s, the trifecta of
AMPS, TACS, and NMT is commonly considered to be the first generation --
"1G," if you will -- because they made cellphones practical to the
masses for the very first time. They were robust, reliable, and would
eventually come to blanket the entirety of many industrialized nations
around the world.
Thing is, no one was thinking about data services in the 1G days; these
were purely analog systems that were conceived and designed for voice
calls and very little else. Modems existed that could communicate over
these networks -- some handsets even had them built-in -- but because
analog cellular connections were susceptible to far more noise than
conventional landlines, transfer speeds were ridiculously slow. And even
if they'd been fast, it wouldn't have really mattered; per-minute rates
on AMPS networks in the 80s made cellphones luxuries for the wealthy and
business necessities for Wall Street power brokers, not must-haves for the everyman.
Besides, the technology didn't exist for an awesome smartphone that
could consume that much data anyhow. Oh, and YouTube had yet to be
invented. The stars simply hadn't yet aligned.
2G
The early nineties saw the rise of the first digital cellular networks,
which had a number of obvious benefits over the analog networks they
were supplanting: improved sound quality, better security, and higher
total capacity, just to name a few biggies. GSM got off to an early
start in Europe, while D-AMPS and an early version of Qualcomm's CDMA
known as IS-95 took hold in the US. (You might remember D-AMPS better as
"TDMA," though that's technically not descriptive enough -- GSM also
employs the TDMA multiplexing scheme, even though the two standards are
incompatible.) No one disputes that these systems collectively
represented the second generation of wireless networks -- they were
authentically different and revolutionary. Furthermore, a solid decade
had gone by at that point since the first 1G networks had gone live.
This stuff was definitely born of a new generation.
Still, these nascent 2G standards didn't have intrinsic, tightly-coupled
support for data services woven into them. Many such networks supported
text messaging, though, so that was a start -- and they also supported
something called CSD, circuit-switched data. CSD allowed you to place a
dial-up data call digitally, so that the network's switching station was
receiving actual ones and zeroes from you rather than the screech of an
analog modem. Put simply, it meant that you could transfer data faster -
- up to 14.4kbps, in fact, which made it about as fast as an early- to
mid-nineties landline modem.
At the end of the day, though, CSD was a hack -- a way to repurpose
these voice-centric networks for data. You still had to place a "call"
to connect, so the service wasn't always available. The experience was
very similar to using a dial-up modem at home: either you were online,
or you weren't. Services like push email and instant messaging to your
phone were basically science fiction. Furthermore, because a CSD
connection was a call, you were burning minutes to get connected -- and
these technologies were in play at a time when monthly minute buckets on
cellular plans were measured in the dozens, not the hundreds or
thousands. Unless you had a company writing a check for your wireless
bill every month, using CSD for anything more than an occasional novelty
wasn't practical.
2.5G: you know you're in trouble when you need a decimal place
The 4G identity crisis we're dealing with today really started well over
a decade ago, around the time that standards bodies were hard at work
finalizing 3G technologies. The General Packet Radio Service -- GPRS --
marked a watershed moment in cellular history when it was hammered out
in 1997, because it offered a bolt-on for GSM networks that permitted
"always-on" data services. No more dial-up CSD silliness -- as long as
your account and phone were appropriately configured, you could
seamlessly use data whenever and however you needed it. For all
practical purposes, your handset was permanently connected to the
internet. It was also much faster than anything CSD could offer: over
100kbps in theory (though we don't recall ever getting anything much
above 40kbps in practice during the service's heyday). GPRS also marked
one of the first times that operators could effectively bill by the
kilobyte, rather than by the minute. Kind of a good news / bad news
situation there, we suppose.
Needless to say, GPRS was a big deal -- especially since it came right
at the time when people started really checking their email accounts
incessantly. In fact, it was transformational. AT&T Wireless, Cingular,
and VoiceStream (which would later become T-Mobile USA) all deployed the
service, as did virtually every GSM operator in the world. It didn't
earn the right to add a generational notch, though. You see, by the time
GPRS was on the market, the UN's International Telecommunications Union
had already put together its IMT-2000 standard, an official list of
specifications that a "true" 3G technology would have to meet. Most
importantly, IMT-2000 required stationary speeds of 2Mbps and mobile
speeds of 384kbps -- benchmarks that GPRS couldn't meet even on its best
day.
And that's the story of how GPRS got stuck as a tweener: better than 2G,
not good enough to be 3G. It was important enough that it might have
earned the right to be called 3G had the ITU not already drawn the line,
but that's how the cookie crumbles. Turns out it would just be the first
of many, many generational schisms over the next decade.
3G, 3.5G, 3.75G... oh, and 2.75G, too
In addition to the aforementioned speed requirements, the ITU's official
3G specification also called out that compatible technologies should
offer smooth migration paths from 2G networks. To that end, a standard
called UMTS rose to the top as the 3G choice for GSM operators, and
CDMA2000 came about as the backward-compatible successor to IS-95.
Following the precedent set by GPRS, CDMA2000 offered CDMA networks an
"always-on" data connection in the form of a technology called 1xRTT.
Here's where it gets a little confusing: even though CDMA2000 on the
whole is officially a 3G standard, 1xRTT is only slightly faster than
GPRS in real-world use -- 100kbps or so -- and therefore is usually
lumped in with GPRS as a 2.5G standard. Fortunately, CDMA2000 also
defined the more advanced 1xEV-DO protocol, and that's where the real 3G
money was at, topping out at around 2.5Mbps.
The first CDMA2000 and UMTS networks launched between 2001 and 2003, but
that wasn't to say that manufacturers and standards organizations were
standing still with the 2G technology path, either. EDGE -- Enhanced
Data-rates for GSM Evolution -- was conceived as an easy way for
operators of GSM networks to squeeze some extra juice out of their 2.5G
rigs without investing serious money on UMTS hardware upgrades and
spectrum. With an EDGE-compatible phone, you could get speeds over
double what you got on GPRS; not bad at the time. Many European
operators didn't bother with EDGE, having already committed to going big
with UMTS, but Cingular -- likely looking to buy itself time -- jumped
at the opportunity and became the first network to roll it out in 2003.
So where would EDGE fit, then? Depends who you ask. It's not as fast as
UMTS or EV-DO, so you might say it's not 3G. But it's clearly faster
than GPRS, which means it should be better than 2.5G, right? Indeed,
many folks would call EDGE a 2.75G technology, eliciting sighs from
fraction-haters everywhere. The ITU doesn't help matters, officially
referring to EDGE as an IMT-2000 Narrowband technology -- basically, a
2G standard capable of eking out 3G-esque speeds.
As the decade rolled on, CDMA2000 networks would get a nifty software
upgrade to EV-DO Revision A, offering slightly faster downlink speeds
and significantly faster uplink speeds -- the original specification
(called EV-DO Revision 0) only allowed for uploads of about 150kbps,
impractical for the rampant picture and video sharing we're all doing
with our phones and laptops these days. Revision A can do about ten
times that. Can't very well lump an upgrade that big in with 3G, can
you? 3.5G it is, then! Ditto for UMTS: HSDPA would add significantly
faster downlink speeds, and HSUPA would do the same for the uplink.
Further refinements to UMTS would produce HSPA+, dual-carrier HSPA+, and
HSPA+ Evolution, ranging in theoretical speeds from 14Mbps all the way
past a mind-boggling 600Mbps. So, what's the deal? Is it safe to say
we've hit a new generation yet, or is this just 3.75G the same way that
EDGE was 2.75G?
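Before we push on to 4G, it helps to see the whole taxonomy in one place. Here's a minimal Python sketch that condenses the last few sections into a quick reference -- the labels are informal industry shorthand rather than official ITU designations, and the speeds are just the rough figures quoted above:

# The informal generational ladder built up over the last few sections,
# using only the rough figures cited in this article.
generations = [
    # (technology, informal label, speed as quoted above)
    ("CSD",          "2G",    "up to 14.4 kbps, billed like a voice call"),
    ("GPRS",         "2.5G",  "100+ kbps theoretical, ~40 kbps typical"),
    ("1xRTT",        "2.5G",  "~100 kbps real-world"),
    ("EDGE",         "2.75G", "over double GPRS"),
    ("UMTS",         "3G",    "meets IMT-2000's 384 kbps mobile floor"),
    ("EV-DO Rev. 0", "3G",    "~2.5 Mbps down, ~150 kbps up"),
    ("EV-DO Rev. A", "3.5G",  "a bit faster down, ~10x the Rev. 0 uplink"),
    ("HSDPA/HSUPA",  "3.5G",  "faster UMTS downlink and uplink"),
    ("HSPA+ family", "3.75G", "14 Mbps to past 600 Mbps theoretical"),
]

for tech, label, speed in generations:
    print(f"{tech:<14}{label:<7}{speed}")

Nine rows to summarize two decades of engineering -- no wonder the marketing departments got creative.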
Lies, damn lies, and 4G
Just as it did with the 3G standard -- IMT-2000 -- the ITU has taken
ownership of 4G, bundling it into a specification known as IMT-Advanced.
It's no slouch, either: the document calls for 4G technologies to
deliver downlink speeds of 1Gbps when stationary and 100Mbps when
mobile, roughly 500-fold and 250-fold improvements over IMT-2000,
respectively. Those are truly wild speeds that would easily outstrip the
average DSL or cable broadband connection, which is why the FCC has been
so insistent that wireless technology plays a key role in getting
broadband data to rural areas -- it's more cost-effective to plant a
single 4G tower that can cover several dozen miles than it is to blanket
farmland with fiber optics.
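Those multipliers are easy to sanity-check yourself; here's the back-of-the-envelope arithmetic as a quick Python sketch, using the ITU figures above:

# ITU requirements: IMT-2000 (3G) floors vs. IMT-Advanced (4G) targets,
# all expressed in bits per second.
imt2000_stationary = 2_000_000      # 2 Mbps
imt2000_mobile     = 384_000        # 384 kbps
imt_adv_stationary = 1_000_000_000  # 1 Gbps
imt_adv_mobile     = 100_000_000    # 100 Mbps

print(imt_adv_stationary / imt2000_stationary)  # 500.0  -> the 500-fold figure
print(imt_adv_mobile / imt2000_mobile)          # ~260.4 -> the "roughly 250-fold" figure

(The mobile ratio actually lands closer to 260-fold, hence the "roughly.")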
Unfortunately, those specs are so aggressive that no commercialized
standard in the world currently meets them. Historically, WiMAX and
Long-Term Evolution (LTE), the standard generally accepted to succeed
both CDMA2000 and GSM, have been thought of as "4G technologies," but
that's only partially true: they both make use of a newer, extremely
efficient multiplexing scheme called orthogonal frequency-division
multiple access (OFDMA, as opposed to the older CDMA or TDMA we've been
using for the past twenty years) and they both lack a dedicated voice
network -- 100 percent of their spectrum is used for data services,
which means that voice calls would be treated as VoIP (not unlike Skype
or Vonage). Considering how heavily data-centric our mobile society has
become, that's a good thing.
Where WiMAX and LTE fall short, though, is in raw speed. The former tops
out at around 40Mbps and the latter around 100Mbps theoretical, while
practical, real-world speeds on commercial networks so far have tended
to range between around 4Mbps and 30Mbps -- well short of IMT-Advanced's
lofty (and, arguably, most important) goal. Updates to these standards -
- WiMAX 2 and LTE-Advanced, respectively -- promise to do the job, but
neither has been finalized yet... and production networks that make use
of them are still years away.
That said, you could still easily argue that the original WiMAX and LTE
standards are authentically different enough from the classically-
defined 3G standards to call them a true generational upgrade -- and
indeed, most (if not all) of the carriers around the world that have
deployed them have referred to them as "4G." It's an obvious marketing
advantage for them, and the ITU -- for all the good it's trying to do --
has no jurisdiction to stop it. Both technologies (LTE in particular)
will be deployed to many, many more carriers around the globe over the
next several years, and the use of the "4G" moniker is only going to
grow. It can't be stopped.
The story's not over, though. T-Mobile USA, which has no publicly-
announced plans to upgrade its HSPA network to LTE any time soon,
decided late last year to start branding its HSPA+ upgrade as "4G," too.
In principle, the move makes some sense: the 3G technology path will
ultimately scale to speeds well beyond what mere LTE can achieve today,
eventually coming within spitting distance of IMT-Advanced. There are
many markets where T-Mobile's 21Mbps HSPA+ network is faster than
Sprint's WiMAX network. And none of Sprint, Verizon, and MetroPCS -- the
three American carriers with live WiMAX / LTE networks -- is offering
VoIP yet; they're still falling back on their 3G airwaves for voice, and
will continue to do so for some time. Furthermore, T-Mobile will upgrade
to 42Mbps this year, still without touching LTE!
Arguably, it was T-Mobile's move that really sparked a fundamental
rethinking of what "4G" means to the phone-buying public. AT&T, which is
in the process of upgrading to HSPA+ and will start offering LTE in some
markets later this year, is calling both of these networks 4G -- and
naturally, neither Sprint nor Verizon has even thought about backing
down on their end. All four US national carriers seem entrenched at this
point, having successfully stolen the 4G label from the ITU -- they've
taken it, run with it, and reshaped it.
Wrap-up
So where does this all leave us? In short, carriers seem to have won
this battle: the ITU recently backed down, saying that the term 4G "may
also be applied to the forerunners of these technologies, LTE and WiMAX,
and to other evolved 3G technologies providing a substantial level of
improvement in performance and capabilities with respect to the initial
third generation systems now deployed." And in a way, we think that's
fair -- no one would argue that the so-called "4G" network of today
resembles the 3G network of 2001. We can stream extremely high-quality
video, upload huge files in the blink of an eye, and -- given the right
circumstances -- even use some of these networks as DSL replacements.
Sounds like a generational leap to us.
Whether WiMAX 2 and LTE-Advanced will ultimately be called "4G" by the
time they're available is unclear, but our guess is that they won't --
the experience you'll have on those networks will be vastly different
than the 4G of today. And let's be honest: the world's marketing
departments have no shortage of Gs at their disposal.
c*d
4
There's also this one

[Quoting c*****d's post:]
: [Forwarded from the LeisureTime discussion board]
: From: liquer (咣咣珰), Board: LeisureTime
: Subject: Spring Festival character-guessing game, round three (with participant IDs)
: Posted: BBS 未名空间站 (Sun Feb 7 12:29:17 2010, US Eastern)
: [puzzles 14-19: images not shown; quote truncated]
