John Fremlin's blog: An abundance of unused local radio bandwidth

Posted 2010-01-09 01:15:00 GMT

Interesting point (October 2009) from Brough Turner about larger buffer sizes actually being harmful for data connexions at phone basestations: Has AT&T Wireless data congestion been self-inflicted? The idea is that the layers below the IP stack should not buffer too aggressively (which seems desirable to help out people who are driving through tunnels, for example), because in normal operation the large buffers may fill up and add latency without any benefit.
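To put some entirely illustrative numbers on that: the extra latency from a full buffer is just its size divided by the rate at which the bottleneck link can drain it, and at the kind of rates I was seeing that quickly runs into seconds. A quick Python sketch (the buffer sizes are assumptions, not measurements):

    # Entirely illustrative figures: the extra queueing delay added by a full
    # downstream buffer, at the couple of kB/s I was actually getting.
    def queueing_delay_ms(buffer_bytes, link_bits_per_s):
        """Time to drain a full buffer over the bottleneck link."""
        return buffer_bytes * 8 / link_bits_per_s * 1000

    link = 2 * 1024 * 8  # roughly 2 kB/s, expressed in bits per second

    for buf_kb in (8, 64, 256):  # assumed buffer sizes, purely for illustration
        delay = queueing_delay_ms(buf_kb * 1024, link)
        print(f"{buf_kb:4d} kB buffer -> {delay:8.0f} ms of added latency")

Even a modest 64 kB buffer sitting in front of a 2 kB/s link means half a minute of queueing delay once it fills.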

Here on holiday in Thailand, 3G licences are apparently still being held up by the lack of transparent governance, but in Malaysia, where I was last week, Cellcom has rolled out 3G quite widely. Despite good signal strength, the data rates through Cellcom are typically a couple of kB/s: might as well save battery with plain GPRS and even forget about EDGE! It was extremely apparent that a repeated assumption in Brough's post does not hold here, namely that the bottleneck is the over-the-air link, i.e. the connection from the radio access network or UTRAN [the radio towers] to the Mobile Station (MS) [the phone]. (Without this assumption the argument that buffer sizes should not be too large is actually stronger; and, to be pedantic, over-the-air microwave links are often used as backhaul.)

In fact, getting a decent pipe to the basestations seems not to be easy. I understand that in more plain-dealing jurisdictions operators are augmenting the expensive telecoms links with cheaper DSL connexions, but given that many broadband consumers in Malaysia are languishing on (claimed downstream) 2 Mbps ADSL lines that seem to be woefully under-provisioned, this might not be so effective. Implementing 7.2 Mbps HSDPA with an achieved data rate of, say, 3 Mbps means there is more bandwidth between the phone and the basestation than from the basestation to the wider Internet.
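A back-of-the-envelope sketch of the oversubscription, using the rates above and a made-up number of simultaneously active phones:

    # Oversubscription arithmetic; the number of active users is a guess.
    air_rate_mbps = 3.0   # achieved HSDPA rate over the air, per active phone
    backhaul_mbps = 2.0   # claimed downstream of the ADSL backhaul
    active_users = 10     # hypothetical simultaneously active phones per cell

    per_user_mbps = backhaul_mbps / active_users
    print(f"Backhaul share per active user: {per_user_mbps:.2f} Mbps")
    print(f"Fraction of the air interface usable: {per_user_mbps / air_rate_mbps:.1%}")

With those figures each active phone gets about 0.2 Mbps, well under a tenth of what the air interface could carry.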

Many years ago, I volunteered at a (semi-charitable) ISP in Mozambique that was facing a similar squeeze, trying to resell a 1 Mbps (downstream) satellite link to numerous customers, each with a much higher bandwidth amplified-antenna wifi connexion to the central routers.

One possible way out is to supply each basestation with a cache of content and allow that to be accessed quickly. A few terabytes of disk is not costly nowadays — massive amounts of multimedia could be delivered (the concern being obtaining agreement from the content monopolists), but how about a fast local Wikipedia cache? Is any company offering solutions in this area?
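The heart of such a thing would be a dumb local store keyed by URL that only touches the backhaul on a miss. The Python sketch below is purely illustrative (no expiry, no Cache-Control handling, and the cache directory is a made-up path), not any vendor's product:

    # Purely illustrative local content cache for a basestation: serve hits
    # from local disk, only touch the congested backhaul on a miss.
    import hashlib
    import os
    import urllib.request

    class BasestationCache:
        def __init__(self, cache_dir="./basestation-cache"):
            self.cache_dir = cache_dir
            os.makedirs(cache_dir, exist_ok=True)

        def _path(self, url):
            digest = hashlib.sha256(url.encode("utf-8")).hexdigest()
            return os.path.join(self.cache_dir, digest)

        def fetch(self, url):
            path = self._path(url)
            if os.path.exists(path):               # hit: stays on the local link
                with open(path, "rb") as f:
                    return f.read()
            with urllib.request.urlopen(url) as resp:  # miss: uses the backhaul
                body = resp.read()
            with open(path, "wb") as f:
                f.write(body)
            return body

Point the phones (or a transparent proxy in front of them) at something like BasestationCache().fetch(url) and popular pages never leave the cell.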

Another alternative is to encourage peer-to-peer sharing over the abundant local bandwidth.
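As a toy illustration of keeping traffic on the local segment, peers could announce what they hold over link-local multicast and only fall back to the backhaul when nobody nearby answers. The group address, port and message format below are arbitrary choices of mine, not any existing protocol:

    # Toy illustration, not an existing protocol: announce a content hash on
    # the local segment over UDP multicast so nearby peers can answer before
    # any request crosses the backhaul.
    import socket
    import struct

    GROUP, PORT = "239.255.42.99", 5007

    def announce(content_hash):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # keep it local
        sock.sendto(("HAVE " + content_hash).encode(), (GROUP, PORT))

    def listen():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        while True:
            data, addr = sock.recvfrom(1024)
            print(addr, data.decode())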

Both of these ideas are useful even to ISPs with decent upstream pipes. And provided the laws of physics are not rewritten, local communications will always be more efficient than long-distance ones (a very important principle at all stages of a computing architecture). It is a shame that NNTP is, as far as I know, the only widespread solution in this space, as the protocol has been ripe for an overhaul for aeons. The concept should be appealing to the social media enthusiasts: somewhere between an area-specific conversation and a local broadcast.

If only the Web two point whoa! flat-earth tunnel-visionaries would stop trying to see shapes in clouds and form some buzzword cults in this environmentally beneficial, natural community-building, opportunity-laden space . . .
