6 Capabilities of Mobile Satellite Systems (MSS)
The most significant attribute of any satellite communication system is the wide area coverage that can be provided with very high guarantees of availability and consistency of service. The satellite component of UMTS can potentially provide the terrestrial service user with a global service without regard to incompatible terrestrial standards used elsewhere. Existing satellite mobile services have proved very attractive to the maritime and aeronautical sectors and they have also been of great benefit to emergency services, relief agencies, journalists, and expeditions over recent years.
Services are now extending to the land mobile market where hand-portable voice terminals are now technically feasible. The next subclauses address the key attributes of wide area coverage and types of services appropriate for satellite UMTS.
6.1 Large area coverage
A single satellite can see a very large area of the Earth: a single LEO can illuminate an area of 6 000 km diameter and a GSO can illuminate about one third of the globe. Within these areas, the spacecraft antenna can be designed to maintain a near-constant power flux density on the Earth's surface irrespective of range. However, for GSO and HEO (and possibly LEO or MEO), the spacecraft antenna may need to be arranged as a cluster of spot beams (1 000 km to 2 000 km diameter) in order to make hand-held terminals feasible and to achieve spectrum efficiency. Such spot beams require large spacecraft antennas for either GSO or HEO systems. The advantages of HEO and GSO are that it is possible to deploy a satellite system to fulfil a regional requirement rather than a global one, and frequency planning and co-ordination may be relatively straightforward. Furthermore, the ground infrastructure to support the satellites could follow traditional Land Earth Station (LES) approaches.
The only satellite system that cannot provide polar coverage is GSO. With this restriction, any satellite constellation can provide assured line-of-sight global coverage unaffected by weather. Operation to shadowed or in-building terminals would require an additional link margin of the order of 20 dB or more, depending on the coverage required. Note that in cities the terrestrial UMTS service is likely to be available, so in-building and city coverage may not be essential.
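To put these coverage figures into perspective, the following short Python sketch compares the free-space path loss from different orbit altitudes and shows the effect of the additional 20 dB shadowing margin mentioned above. The altitudes and the 2 GHz carrier frequency are illustrative assumptions, not figures from this report, and nadir range is used in place of true slant range for simplicity.

    import math

    C = 3.0e8  # speed of light, m/s

    def fspl_db(distance_m, freq_hz):
        """Free-space path loss: 20*log10(4*pi*d/lambda)."""
        return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C)

    FREQ = 2.0e9  # assumed 2 GHz mobile link
    orbits = {"LEO (1 400 km)": 1.4e6,
              "MEO (10 000 km)": 1.0e7,
              "GSO (35 786 km)": 3.5786e7}

    for name, rng in orbits.items():
        loss = fspl_db(rng, FREQ)
        # A shadowed or in-building user needs roughly 20 dB on top.
        print(f"{name}: FSPL = {loss:.1f} dB, "
              f"with 20 dB shadow margin = {loss + 20.0:.1f} dB")

The spread of nearly 30 dB in path loss between LEO and GSO illustrates why hand-held operation becomes progressively harder at higher altitudes and why high-gain spot beams are needed for GSO and HEO systems.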
The line-of-sight case requires polarisation matching between the satellite and the mobile terminal. To avoid the need for polarisation tracking, mobile communications have traditionally used circular polarisation.
6.2 Flexible networks and services
A feature of most present day satellites is the use of "transparent transponders". Compared with a conventional cellular base station, the satellite transponder is little more than a frequency-shifting amplifier. This has drawbacks for some aspects of system design, but it also means that any one satellite is reasonably independent of modulation system, access method, service data rate, or networking. This has led to satellites being used for a variety of applications, each with different terrestrial architectures. Provided the basic satellite parameters are satisfactory, these services can be introduced long after launch. Future satellites may not be quite so flexible, as some studies propose to use on-board processing to improve capacity, spectrum efficiency, and satellite payload performance. The transparency concept has, however, proved extremely cost-effective, and any on-board processing function is likely to be at least re-configurable and re-programmable. Another feature that might be introduced for MEO or LEO is the inter-satellite link, to simplify terrestrial networking between satellites during handover.
The transparency concept has enabled mobile satellite systems to efficiently support a range of services beyond that of voice telephony:
- high data rate services (up to 64 kbit/s) to larger antenna (0,15 m to 1,0 m, 8 dBi to 20 dBi) mobile or fixed terminals;
- group call and broadcasting;
- low data rate paging, alerting and two-way messaging;
- terminal location finding.
Some current satellite systems are designed so that extra services can be provided at very little additional cost. This is particularly effective when services are offered as a package to perhaps offset the requirement for line-of-sight paths for low-cost voice telephony.
7 Limitations of Mobile Satellite Systems
7.1 Delay and Doppler
The delay and Doppler effects associated with satellite links are due entirely to the mechanical laws governing the satellite orbit. Any system design must take full account of these effects. For example, simple delay has an impact on speech quality that will require echo cancellers to be used at interfaces with the analogue network. Delay also requires allowances to be made in signalling protocols and power level control.
Changes in delay, the integral of the Doppler shift (which also acts on the bit rate), are significant during a call for all orbits except GSO, particularly during satellite handover. Such changes are likely to require a data buffer to maintain the delay at a constant maximum value. The data buffer can reside in either the LES or the mobile terminal, between the two echo control devices, and is required for both receive and transmit. Doppler shift itself complicates signal acquisition and spectrum management. The Doppler shift will not be identical for the in-bound and out-bound links, owing to the different feeder and mobile link microwave frequencies; furthermore, the shift is in different directions if corrected at the mobile terminal. For LEO and MEO orbits, the shift may need to be corrected individually for each mobile; for HEO, common Doppler compensation can be incorporated in the LES or on board the satellite.
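As a rough illustration of the magnitudes involved, the sketch below estimates the worst-case Doppler shift for circular orbits using only Kepler's laws and horizon geometry. The 2 GHz carrier and the specific altitudes are assumptions for illustration; real constellations will differ.

    import math

    C = 2.998e8        # speed of light, m/s
    MU = 3.986e14      # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6.371e6  # mean Earth radius, m

    def max_doppler_hz(altitude_m, carrier_hz):
        """Worst-case Doppler for a circular orbit, seen at the horizon.

        The radial velocity component is bounded by the orbital speed
        scaled by the horizon geometry, v_orb * R_earth / (R_earth + h).
        """
        r = R_EARTH + altitude_m
        v_orb = math.sqrt(MU / r)        # circular orbital speed
        v_radial = v_orb * R_EARTH / r   # horizon-grazing component
        return carrier_hz * v_radial / C

    # Assumed 2 GHz mobile link; LEO at 1 400 km vs MEO at 10 000 km.
    for name, h in [("LEO", 1.4e6), ("MEO", 1.0e7)]:
        print(f"{name}: max Doppler = {max_doppler_hz(h, 2.0e9)/1e3:.1f} kHz")

This gives tens of kilohertz for LEO and around ten kilohertz for MEO, which is why per-mobile correction may be needed for those orbits while the slowly varying HEO geometry allows a single common compensation.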
7.2 Low link margins
Emphasis has already been placed on the importance of keeping impairment margins low. An illustration can be based on the calculation of Carrier-to-Noise (C/N) ratios for the uplink, the downlink, and the total link. Assuming the downlink has C/Nd = 10 dB and is near the performance threshold, the feeder uplink will need a 13 dB margin (C/Nu = 23 dB) to keep the degradation to about 0,2 dB (i.e. C/Nt = 9,8 dB). Operation at levels just above threshold is only feasible for satellite links because of the stable propagation path and because most impairments (including the large noise contribution) can be considered to be random. These low margins, compared with the terrestrial environment, result in longer signal acquisition times.
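The figures quoted above follow directly from adding the uplink and downlink noise contributions in linear terms; a minimal sketch reproducing the calculation:

    import math

    def db_to_lin(db):
        return 10.0 ** (db / 10.0)

    def lin_to_db(lin):
        return 10.0 * math.log10(lin)

    def total_cn_db(cn_up_db, cn_down_db):
        """Combine uplink and downlink C/N: the noise contributions add,
        so 1/(C/N)_t = 1/(C/N)_u + 1/(C/N)_d in linear terms."""
        return lin_to_db(1.0 / (1.0 / db_to_lin(cn_up_db)
                                + 1.0 / db_to_lin(cn_down_db)))

    # Figures from the text: downlink 10 dB, uplink 13 dB better (23 dB).
    cn_t = total_cn_db(23.0, 10.0)
    # Prints C/Nt = 9.79 dB, i.e. about 0.2 dB degradation as stated above.
    print(f"C/Nt = {cn_t:.2f} dB (degradation {10.0 - cn_t:.2f} dB)")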
All impairments must be carefully analysed, including imperfect in-band filtering, group delay, out-of-channel emissions (which demand very tight power amplifier linearity), carrier-to-interference ratios, etc. Multipath requires special attention. In the terrestrial environment, multipath propagation normally results in inter-symbol interference that can be compensated with equalisers; the effects of multipath fading itself are often negligible within the main service areas because the detected signal level is sufficiently above threshold. On the satellite path, multipath delays are often short enough to be ignored (except for aircraft and ships) owing to the comparatively high elevation angle of the radio path. However, multipath fading, in which a multipath signal partially cancels the main signal, can reduce the final signal below the modem operating threshold. Hand-offs between successive LEO, MEO, or HEO satellites will be more complex because the small operating margins make it difficult to detect signal disappearance promptly. Satellite signal quality often cannot be assessed from signal level (which is swamped by thermal noise) and is instead estimated from the activity within the forward error correction algorithm. This requires time averaging and cannot be an instantaneous measurement. Satellite diversity reception might alleviate some of these issues. Mobile terminals with high gain (directive) antennas have further problems with signal acquisition, as the antenna may need to be mechanically or electronically steered towards the satellite before the signal rises above the detection threshold.
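To make the point about averaging concrete, here is a minimal sketch of estimating link quality from FEC correction activity. The decoder interface, smoothing constant, and threshold are invented for illustration and do not correspond to any real modem: the exponential average necessarily lags a sudden shadowing event, which is exactly why prompt detection of signal disappearance is difficult.

    class FecQualityEstimator:
        """Estimate link quality from FEC correction activity.

        Signal level alone is unusable near threshold (it is swamped by
        thermal noise), so quality is inferred from how hard the forward
        error correction is working, smoothed with an exponential average.
        """

        def __init__(self, alpha=0.05, handover_ber=1e-3):
            self.alpha = alpha              # smoothing constant (assumed)
            self.handover_ber = handover_ber
            self.avg_ber = 0.0

        def update(self, corrected_bits, block_bits):
            """Feed one decoded block; return True if handover is advised."""
            instantaneous = corrected_bits / block_bits
            # Exponential moving average: it cannot react instantaneously,
            # which is the difficulty described in the text.
            self.avg_ber += self.alpha * (instantaneous - self.avg_ber)
            return self.avg_ber > self.handover_ber

    # Example: a sudden shadowing event takes several blocks to show up.
    est = FecQualityEstimator()
    for i in range(100):
        corrected = 1 if i < 50 else 40   # link degrades at block 50
        if est.update(corrected, 10_000):
            print(f"handover advised at block {i}")  # a few blocks late
            break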
7.3 Spectrum and orbit matters
Limited spectrum availability will constrain the potential capacity of the satellite component and hence will orientate personal satellite services towards low bit rate voice and data. Spectrum issues are very complex but can be broadly classified into three areas:
- feeder link planning;
- mobile frequency co-ordination;
- mobile frequency re-use and spectrum efficiency.
Global agreements exist for planning GSO systems via the ITU Radiocommunication Bureau (formerly the IFRB) for designated frequency bands. Feeder links are normally in one of the established Fixed Satellite Service (FSS) bands and are straightforward except for the large bandwidths required to support peak traffic on each satellite. Mobile frequency co-ordination is not simple, however, particularly as mobile antenna patterns are near omni-directional and any mobile system is likely to require exclusive access to a frequency band. The next problem, that of re-using the frequencies as often as possible, is very similar in concept to terrestrial cellular planning except that isolation is provided by satellite antenna beam shaping rather than by geographical spacing. Feeder links for non-GSO satellites are more complex, particularly because of the lack of established procedures for the many possible orbits. Furthermore, there is no orbital registration akin to that for the GSO orbit, where orbital positions are assigned to particular operators and countries. LEO and MEO systems may require several widely spaced feeder LESs per satellite sector, or inter-satellite links, to prevent the feeder link interfering with the geostationary orbit. In either case, there will be additional delay and Doppler jumps. For HEO orbits, where the satellites appear to operate at the same part of the celestial sphere, feeder link planning may not be difficult, as GSO-type procedures could be applied. The magnitude of the orbit and spectrum planning problems is partly illustrated by figure 2, which shows an azimuth-elevation diagram for a fixed land earth station site at a latitude of approximately 50° North. (The diagram is not computed from simulated systems but shows only the principle; slight differences from simulated orbit constellations may therefore exist.)
The dotted line, extending from East to West in the shape of an arc, represents the geostationary orbit with two fixed GSO satellites designated 1 and 2. The three LEO tracks belong to one system of approximately polar orbits. The LEO satellites designated 1 and 2 travel North-South; LEO satellite 1 is about to hand over to LEO satellite 2. LEO satellite 3 travels South-North, but, owing to the Earth's rotation, it appears on this track only at a time offset of half a day with respect to the satellites travelling North-South. The slight westward drift of the three LEO satellites is caused by the Earth's rotation, and hence the rotation of the earth station site, towards the East.
At the north-north-western horizon and to the East of the zenith there are two loop-shaped tracks of the HEO satellites designated 1, 2 and 3. The dotted lines extending from the loop near the zenith show the branches of the track where the communication payloads are inactive, as is the case here for HEO satellite 2. From the diagram one can conclude that a fixed earth station (according to CCIR Recommendation 465 [1]) at this site can communicate with GSO satellite 1 and HEO satellite 1 even when the LEO system is in operation. By contrast, the links with GSO satellite 2 and HEO satellite 3 could not co-exist with LEO satellite 3, since it passes both of the other satellite positions.
If the LEO satellites' orbital period were not synchronised with the Earth's rotation, the LEO satellite tracks would scan across the sky like the lines on a television screen, and co-existence with neither GSO nor HEO satellites on the same frequencies would be possible.
7.4 Scope for technical developments
7.4.1 Signal to Noise (S/N) levels
Satellite systems operate very close to theoretical signal-to-noise demodulation thresholds. There is virtually no scope for reduction in receive thermal noise levels at the satellite or at the mobile terminal, as noise levels are dominated by the Earth's background thermal noise (290 K). The noise performance of modern amplifiers is almost insignificant against this background. The only scope for improving signal-to-noise margins (for example, to provide shadowed or in-building operation) is to improve satellite antenna gain.
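A short sketch of the underlying arithmetic (the 25 kHz channel bandwidth and the 50 K amplifier noise temperature are assumptions for illustration) shows how little a better amplifier buys against the 290 K background:

    import math

    K_BOLTZ = 1.380649e-23  # Boltzmann constant, J/K

    def noise_power_dbm(temp_k, bandwidth_hz):
        """Thermal noise power N = k*T*B, expressed in dBm."""
        return 10.0 * math.log10(K_BOLTZ * temp_k * bandwidth_hz * 1e3)

    BW = 25e3        # assumed 25 kHz channel
    T_EARTH = 290.0  # Earth background temperature seen by the receiver
    T_LNA = 50.0     # assumed noise temperature of a good modern LNA

    floor = noise_power_dbm(T_EARTH, BW)
    with_lna = noise_power_dbm(T_EARTH + T_LNA, BW)
    print(f"background noise floor: {floor:.1f} dBm")
    print(f"with LNA contribution:  {with_lna:.1f} dBm "
          f"(+{with_lna - floor:.2f} dB)")  # well under 1 dB difference

Even a perfect, noiseless amplifier would recover less than 1 dB here, which is why antenna gain is the only practical lever.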
7.4.2 Hand-held terminal antennas
Present operational mobile satellite systems provide voice services with medium gain steered antennas in the gain range 8 dBi to 15 dBi. Low data rate services can use unsteered lower gain antennas with gains between 0 dBi and 4 dBi. The challenge for UMTS is to provide voice telephony to hand-held terminals using unsteered low gain antennas. The hand-held target imposes practical limitations on the form of antenna, and it is unlikely that such antennas will have usable gains greater than 0 dBi. However, this does not prohibit the use of higher gain antennas for particular applications or circumstances.
7.4.3 Satellites
Satellite technology and commercial launcher capabilities have matured over the past ten years allowing systems planners to design complex systems with confidence. However, reliability is paramount for commercial satellite services and therefore only well established technology, proven in space, is normally considered for major projects.
The satellite antenna is a critical system element. In order to allow operation with low-performance hand-held PESs, the satellite antenna must provide a high gain. This can only be achieved by using advanced array-type antenna technology, including electronic beam forming and beam steering. The resulting spot (cell) diameters on the Earth's surface are typically in the range 1 000 km to 3 000 km.
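The relationship between antenna size and spot size can be sketched with the usual beamwidth rule of thumb (the 2 GHz frequency and the 6 m aperture are assumed values for illustration, not figures from this report):

    import math

    C = 3.0e8  # speed of light, m/s

    def spot_diameter_km(freq_hz, aperture_m, alt_m):
        """Nadir spot diameter from the ~70*lambda/D half-power
        beamwidth rule of thumb for an aperture antenna."""
        lam = C / freq_hz
        beamwidth_rad = math.radians(70.0 * lam / aperture_m)
        return 2.0 * alt_m * math.tan(beamwidth_rad / 2.0) / 1e3

    # Assumed 2 GHz link from GSO altitude with a 6 m antenna aperture.
    d = spot_diameter_km(2.0e9, 6.0, 3.5786e7)
    # About 1 100 km, within the 1 000 km to 3 000 km range quoted above.
    print(f"spot diameter = {d:.0f} km")

Halving the spot diameter (and so roughly quadrupling the gain) requires doubling the aperture, which is why large deployable arrays are needed for hand-held service from GSO or HEO.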
7.4.4 Digital modulation techniques
The potential capacity of any satellite system is limited essentially by the availability of frequency spectrum and on-board satellite DC power. Hence, for the most cost-effective operation, it is of paramount importance that power- and spectrally-efficient transmission schemes are employed. Current research continues to make worthwhile progress in this area.
7.4.5 Voice coding
Mobile satellite systems have made wider use of low bit rate voice codecs than terrestrial systems in order to reduce power and spectrum requirements. Continuing codec development, coupled with advances in semiconductor integration, is likely to yield improved speech quality and some reduction in overall power/spectrum demands. Target performances for UMTS speech codecs have been set for both the terrestrial and satellite components, taking into account the progress that is expected to be made by the time UMTS is introduced.
Growth drivers of mobile satellite communication services
Deregulation
Governments throughout the world are opening up their telecommunications systems whether it be through spectrum allocations, privatization, competition or access. Governments have realized that there is a strong correlation between telecommunications services and economic growth. Therefore they have started to knock down the walls that existed within their telecommunications markets and are encouraging investment in the latest technologies so that their countries do not fall behind in communications.
Technology
Technological developments have improved the power and versatility of satellites: today they have greater capacity and lower costs. For instance, the smaller size of many of today's satellites lowers the cost of launching them. At the same time, recent digital technologies (TDMA, Time Division Multiple Access; CDMA, Code Division Multiple Access) are being applied to satellite systems, which increases capacity and lowers the cost of launching a system.
Globalization
People are no longer isolated from the world. People are affected by trade like never before; Nike and Gillette are no longer just U.S. companies. Because people travel halfway around the world at a moment's notice, there is demand for communications services that allow them to stay in touch no matter where they are. People want to be able to make a phone call and receive one -- they want one telephone number that can be used anytime, anywhere in the world. Thus, we feel the development of the global economy is a key driver of the mobile communications business.
Economic growth
Economic growth throughout the world has increased living standards, which also drives demand for communications services. As individuals increase their economic stature, one of the first things they desire is a phone. This is a positive for satellite service providers. As developing economies continue to grow and enter the global economy, the demand for satellite services will increase because people will be able to afford them, and the need for mobile services will increase.
Demand for phone service
More than 3 billion of the world's people do not have phone service. The waiting list for landline telephone service has over 50 million names, with the average wait greater than 1.5 years. On average, there are slightly fewer than 12 phone lines per 100 people in the world, far lower than in developed countries such as Sweden (68 lines) and the U.S. (60 lines). We believe that because the wait is so long, many do not even attempt to get service -- so the actual number of people waiting for phone service may be understated. At the same time, Iridium (through research by Booz Allen and Gallup) has determined that the demand from worldwide travelers for mobile satellite services will be 42 million people by the year 2002. Regardless of how you look at the numbers, there is a significant number of people without phone service throughout the world. Also, phone services are not well developed in many countries, so travelers are unable to access a reliable phone. Satellite communications services will address the needs of worldwide travelers and provide phone services to many areas of the world that currently do not have them.
Mobile communications trends
Cellular demand continues to explode throughout the world, with some estimates of 500 million subscribers by the year 2002. Cellular phone bills in Third World countries are higher than the average bill in the U.S. This suggests that demand for mobile communications services continues to grow at a very fast pace and that developing countries are willing to pay for phone services. A ubiquitous phone service offered by satellite companies will benefit from these trends in cellular communications.
2. Overview
Today, the SIM card’s basic functionality in wireless communications is subscriber authentication and roaming. Although such features may be achieved via a centralized intelligent network (IN) solution or a smarter handset, there are several key benefits that could not be realized without the use of a SIM card, which is external to a mobile handset. These benefits—enhanced security, improved logistics, and new marketing opportunities—are key factors for effectively differentiating wireless service offerings. This tutorial assumes a basic knowledge of the wireless communications industry and will discuss the security benefits, logistical issues, marketing opportunities, and customer benefits associated with smart cards.
2.1. Smart Card Overview
The smart card is one of the latest additions to the world of information technology (IT). The size of a credit card, it has an embedded silicon chip that enables it to store data and communicate via a reader with a workstation or network. The chip also contains advanced security features that protect the card’s data.
Smart cards come in two varieties: microprocessor and memory. Memory cards simply store data and can be viewed as small floppy disks with optional security. Memory cards depend on the security of a card reader for their processing. A microprocessor card can add, delete, and manipulate information in its memory on the card. It is like a miniature computer with an input and output port, operating system, and hard disk with built-in security features.
Smart cards have two different types of interfaces. Contact smart cards must be inserted into a smart-card reader. The reader makes contact with the card module’s electrical connectors that transfer data to and from the chip. Contactless smart cards are passed near a reader with an antenna to carry out a transaction. They have an electronic microchip and an antenna embedded inside the card, which allow it to communicate without a physical contact. Contactless cards are an ideal solution when transactions must be processed quickly, as in mass transit or toll collection.
A third category now emerging is a dual interface card. It features a single chip that enables a contact and contactless interface with a high level of security.
Two characteristics make smart cards especially well suited for applications in which security-sensitive or personal data is involved. First, because a smart card contains both the data and the means to process it, information can be processed to and from a network without divulging the card's data. Second, because smart cards are portable, users can carry data with them on the smart card rather than entrusting that information to network storage or a back-end server, where the information could be sold or accessed by unknown persons (see Figure).
Figure. Information and Personalization
A smart card can restrict the use of information to an authorized person with a password. However, if this information is to be transmitted by radio frequency or telephone lines, additional protection is necessary. One form of protection is ciphering (scrambling data). Some smart cards are capable of ciphering and deciphering, so the stored information can be transmitted without compromising confidentiality. A smart card can cipher under billions of possible keys and choose a different key at random every time it communicates. This process ensures that only authenticated cards and computers are used and makes hacking or eavesdropping virtually impossible.
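The authentication mechanism alluded to here can be illustrated with a generic challenge-response sketch. This uses HMAC-SHA256 purely for illustration; the actual GSM algorithms (A3/A8) are different and are not shown here. The key property is that the secret never leaves the card: only the challenge and the computed response cross the air interface.

    import hmac
    import hashlib
    import os

    def card_response(secret_key, challenge):
        """What the card computes: a keyed MAC over the challenge.
        The secret key never leaves the card."""
        return hmac.new(secret_key, challenge, hashlib.sha256).digest()

    # The network holds a copy of the subscriber's key in its auth centre.
    subscriber_key = os.urandom(16)

    # 1. Network issues a random challenge.
    challenge = os.urandom(16)

    # 2. Card answers with the keyed response.
    response = card_response(subscriber_key, challenge)

    # 3. Network recomputes and compares in constant time.
    expected = card_response(subscriber_key, challenge)
    print("authenticated:", hmac.compare_digest(response, expected))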
The top five applications for smart cards throughout the world currently are as follows:
1. public telephony—prepaid phone memory cards using contact technology
2. mobile telephony—mobile phone terminals featuring subscriber identification and directory services
3. banking—debit/credit payment cards and electronic purse
4. loyalty—storage of loyalty points in retail and gas industries
5. pay-TV—access key to TV broadcast services through a digital set-top box
The benefits of using smart cards depend on the application. In general, applications supported by smart cards benefit consumers where their lifestyles intersect with information access and payment-related processing technologies. These benefits include the ability to manage or control expenditures more effectively, reduce fraud and paperwork, and eliminate the need to complete redundant, time-consuming forms. The smart card also provides the convenience of having one card with the ability to access multiple services, networks, and the Internet.
3 smartcard
3. Introduction to Smart Cards in Wireless Communications
Smart cards provide secure user authentication, secure roaming, and a platform for value-added services in wireless communications. Presently, smart cards are used mainly in the Global System for Mobile Communications (GSM) standard in the form of a SIM card. GSM is an established standard first developed in Europe. In 1998, the GSM Association announced that there were more than 100 million GSM subscribers. In the last few years, GSM has made significant inroads into the wireless markets of the Americas.
Initially, the SIM was specified as a part of the GSM standard to secure access to the mobile network and store basic network information. As the years have passed, the role of the SIM card has become increasingly important in the wireless service chain. Today, SIM cards can be used to customize mobile phones regardless of the standard (GSM, personal communications service [PCS], satellite, digital cellular system [DCS], etc.).
Today, the SIM is the major component of the wireless market, paving the way to value-added services. SIM cards now offer new menus, prerecorded numbers for speed dialing, and the ability to send presorted short messages to query a database or secure transactions. The cards also enable greeting messages and company logotypes to be displayed.
Other wireless communications technologies rely on smart cards for their operations. Satellite communications networks (Iridium and Globalstar) are chief examples. Eventually, new networks will have a common smart object and a universal identification module (UIM), performing functions similar to SIM cards.
4. Easing Logistical Issues
All subscribers may easily personalize and depersonalize their mobile phone by simply inserting or removing their smart cards. The card’s functions are automatically enabled by the electronic data interchange (EDI) links already set between carriers and secure personalization centers. No sophisticated programming of the handset is necessary.
By placing subscription information on a SIM card, as opposed to a mobile handset, it becomes easier to create a global market and a distribution network of phones. These noncarrier-specific phones can increase the diversity, number, and competition in the distribution channel, which can ultimately help lower the cost of customer acquisition.
Smart cards make it easier for households and companies to increase the number of subscriptions, thereby increasing usage. They also help to create a market for ready-to-use preowned handsets that require no programming before use.
Managing fraud is also eased by smart cards. In a handset-centric system, if a phone is cloned, the customer must go to a service center to have the handset reprogrammed, or a new phone must be issued to the customer. In a smart card–based system, the situation can be handled by merely issuing a new card; customers can continue using their current phones. The savings in terms of cost and convenience to both carrier and customer can be substantial.
Sunday, August 15, 2010
1. INTRODUCTION
In a world of rapid technological change, people increasingly need to communicate and connect with each other and to gain timely access to information, regardless of where the individuals or the information are located. The growing demands on ubiquitous wireless communication systems have created a need for a better understanding of key issues in communication theory and electromagnetics, and of their implications for the design of high-capacity wireless systems. As the mobile environment develops, the major wireless providers in the market are closely monitoring the growth of fourth-generation (4G) mobile technology. 2G and 3G are well established as mobile technologies worldwide; 3G has faced obstacles in gaining market share for many different reasons, while 4G is gaining a certain confidence.
In 2010, the total mobile subscriber base in North America, Europe, and Asia-Pacific is expected to grow to 2.5 billion, with penetration above 50%. This kind of demand growth will require support from higher-capacity networks.
Figure 1. Estimated combined population of mobile subscribers.
Large-scale technologies such as 4G mobile promise people greater convenience and an easier lifestyle. With its "anytime, anywhere, anything" capability, 4G wireless technology will benefit people regardless of time and place. From a global perspective, it is set to become the most pervasive way to communicate and stay connected. Given ubiquitous networks, electronic commerce (m-commerce), unified messaging, and peer-to-peer networking can reach their full potential in the wireless and mobile environment. The trail toward 4G mobile technology includes many significant trends. Major players have been investing on the back of 2G's success. 4G mobile technologies are expected to provide fast, high-data-rate, packetized data communications. Since 4G is still awaiting sensible standardization, bodies such as the ITU and the IEEE have a number of task groups working on the possible realization of 4G mobile standards. The recent boom in Internet use has forced the industry to look for means to provide high data rates regardless of mobility. 4G is being discussed as a solution in research and vision statements, and its requirements are being standardized in various standards bodies. This research offers a vision of 4G services. There is still considerable room to shape that vision of application services: 3G is lagging in its marketing, and roughly a decade of change remains before 4G. We believe this work promotes the discussion of 4G services through the presentation of our vision of them.
This paper also outlines current trends in next-generation wireless communications and investigates candidate 4G technologies. Based on this research, four scenarios are discussed to predict and analyze 4G. The final section offers some policy implications and issues.
What is 3G?
- 3G refers to third-generation mobile telephony (i.e., cellular) technology. The third generation, as its name suggests, follows two earlier generations (1G and 2G).
- 3G technology is now well established in current mobiles across the world.
What is 4G?
- Following the lineage of cellular technology through 1G, 2G, and 3G, 4G describes entirely new networks of value beyond advanced 3G.
- 4G, also known as "beyond 3G" or "fourth generation" mobile technology, refers to a brand-new and complete replacement of 3G wireless communications.
- Just as data transmission speeds increased from 2G to 3G, the jump from 3G to 4G promises data rates even higher than those of previous generations. 4G promises voice, data, and real-time (streaming) multimedia, anytime and anywhere.
Generations of mobile networks:
➢ First generation (1G): analog voice systems
➢ Second generation (2G): digital voice systems
➢ Second generation advanced (2.5G): combined voice and data communications
➢ Third generation (3G): digital voice and data communications:
• Development of a broader mobile network
• Handling of Internet access, email, messaging, and multimedia
• Access to any service (voice, video, data, etc.)
• Requires high-quality transmission
➢ Fourth generation (4G): all-IP mobile network
- Ubiquitous wireless communications
- Transparent to all services
- Integration of multi-
2. 4G Technology Trends
2.1. 4G FEATURES
Convergence of services: Convergence means creating an environment that can eventually provide seamless, highly reliable, high-quality mobile communications and ubiquitous broadband services over converged wired and wireless networks, without the constraint of limited land space, through ubiquitous connectivity. Convergence between industries is also being accelerated by the formation of alliances and participation in various projects to provide converged services. 4G mobile systems are mainly characterized by a horizontal communication model, in which different access technologies such as cellular, wireless LAN, short-range wireless connectivity, and wired systems are combined on a common platform so that they complement one another in the best possible way for the service requirements of different situations and radio environments. This development is expected to push the progressive trend of information technology beyond its current focus, toward fully mobile and pervasive media convergence. From the service perspective, the trends include the integration of services and the convergence of service-delivery mechanisms. Following these trends, the mobile network architecture will become flexible and versatile, and new services will be easy to implement.
Broadband services: Broadband is the basis for enabling multimedia communications, including video services that require the transmission of large amounts of data; it also raises the issue of media convergence, based on packet transport and advocating the integration of different media at different qualities. The growth of broadband services such as asymmetric digital subscriber line (ADSL) systems, optical-fiber access, and office or home LANs is expected to lead to demand for similar services in the mobile communication environment. The application-service characteristics of 4G offer the following broadband-service benefits:
1) Low cost: to make broadband services available to users exchanging various types of information, costs must be reduced considerably, to at or below the cost of existing services.
2) Wide-area coverage: a defining characteristic of mobile communications is its availability and omnipresence. This advantage is important for future mobile communications. In particular, it is important to maintain the service area in which terminals of the new system can be used during the transition from an existing system to a new one.
3) Wide range of services: mobile communications must serve different types of users. In the future, we expect advanced system performance and functionality to introduce a variety of services beyond traditional telephony. These services should be easy for anyone to use.
INTERACTIVE BCN (ALL-IP) WITH HOME-NETWORK, TELEMETRY, AND SENSOR-NETWORK SERVICES: Technologies are becoming more collaborative and essential, and the development of services on an All-IP network is needed for more converged services. The broadband convergence network (BCN) is an IP-based unified network for services, reaching well beyond wired quality through any access network. All-IP convergence and a next-generation IP-based wired backbone can be implemented quickly once convergence proceeds. All-IP networks and IP multimedia services are the main trends in wired and wireless networking. The idea of the BCN is to provide a common, unified, and flexible service architecture that can support multiple types of services and management applications across multiple types of transport networks. The main purpose of making the broadband-convergence-driven 4G network more interactive lies in its applicability to home networks, telemetry, and sensor networks. A partnership between the converged network and its applications will be all the more beneficial, especially to broadband users and their suppliers. As an example of such a service, the creation and deployment of home networks is bound to bring users and society more benefit from broadband connectivity. Beyond the broadband convergence network itself, telemetry applications will make the deployment of 4G mobile technology more tangible.
FLEXIBILITY AND PERSONAL SERVICE:
The main concern in the design of 4G networks is flexibility. 4G systems support comprehensive and personalized services while providing stable system performance and quality of service. To support multimedia services, a reliable high-speed data system will be provided, while keeping data-transmission costs low. To meet the demands of these diverse users, service providers should design personal, customized services for them. Personal mobility is a concern in mobility management: it focuses on the movement of users rather than of user terminals, and it involves delivering personal, customized operating environments. Implementing software-defined radio (SDR) in 4G offers benefits to service providers, manufacturers, and users. For service providers:
1) Improved effectiveness of infrastructure resources.
2) Higher spectrum efficiency.
3) Reduced operating expenses, owing to a reduced need for hardware upgrades at the site.
4) Reduced capital cost, because of increased use of available network elements.
5) Improved and faster time to market for new services and applications. The benefit of SDR for manufacturers is a decrease in the number of independent platforms needed to support different wireless technologies.
2.2. CANDIDATE SERVICES BEYOND 3G:
3GPP LTE:
While competing paths in wireless technology standards have caused considerable confusion in the market, 3GPP LTE, the Third Generation Partnership Project's Long Term Evolution, is the name given to a project to evolve the Universal Mobile Telecommunications System (UMTS) mobile-phone standard to cope with and manage future needs in wireless technology. Its objectives are to improve efficiency, reduce costs, improve services, make use of new spectrum opportunities, and integrate better with other open standards. The project, currently under way, has set specific targets aimed at improving UMTS toward fourth-generation mobile communications: essentially wireless broadband Internet with voice and other services built on top.
The aims of the project include:
Downlink rates of 100 Mbps and uplink rates of 50 Mbps in 20 MHz of spectrum, with sub-5 ms latency for small IP packets.
Greater spectrum flexibility, with spectrum slices as small as 1.6 MHz.
Coexistence with legacy standards (users can transparently start a call or data transfer in an area covered by LTE and, where LTE coverage is not available, continue the operation without any special action via GSM/GPRS or W-CDMA-based UMTS).
3GPP LTE is emerging from the 3GPP standards-development process. The project has been positioned as a technology for the 2.5 GHz "3G extension band." Compared with UMTS, 3GPP LTE is exclusively packet-switched and IP-based, which means the circuit-switched core network no longer exists.
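As a quick sanity check on the stated targets, the implied spectral efficiency can be worked out directly from the figures quoted above. The short Python sketch below is an illustration only, not part of any standard:

    # Spectral efficiency implied by the LTE targets quoted above:
    # 100 Mbps down / 50 Mbps up in 20 MHz of spectrum.
    downlink_bps = 100e6
    uplink_bps = 50e6
    bandwidth_hz = 20e6

    print("Downlink: %.1f bit/s/Hz" % (downlink_bps / bandwidth_hz))  # 5.0
    print("Uplink:   %.1f bit/s/Hz" % (uplink_bps / bandwidth_hz))    # 2.5

Five bits per second per hertz on the downlink is several times what early 3G deployments achieved, which is why spectrum efficiency is treated here as a key 4G requirement.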
WiMAX and WiBro:
WiMAX stands for Worldwide Interoperability for Microwave Access, a standard created by the IEEE on the basis of IEEE 802.16. WiBro is the name of the mobile WiMAX service in Korea; WiBro uses the mobile WiMAX system profile. The system profile contains the complete list of features that equipment is required or permitted to support, so WiBro offers the same capabilities and features as mobile WiMAX. The technology is described as an alternative to cable and DSL: a standards-based technology that enables the delivery of last-mile wireless broadband access.
The aims of the project include:
➢ Peak downlink sector data rates up to 46 Mbps, assuming a DL/UL ratio of 3:1, and peak uplink sector data rates up to 14 Mbps, assuming a DL/UL ratio of 1:1, in a 10 MHz channel (see the sketch after this list).
➢ Support for end-to-end IP-based QoS.
➢ Channel bandwidths from 1.25 to 20 MHz to meet varied requirements worldwide.
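Because the two peak rates above assume different DL/UL splits of the same 10 MHz TDD channel, they are not directly comparable. A rough normalization is sketched below in Python; the frame-share fractions are our own reading of the stated 3:1 and 1:1 ratios, not figures from the standard:

    # Normalizing the quoted mobile WiMAX sector rates by TDD airtime.
    channel_hz = 10e6

    dl_bps, dl_share = 46e6, 3.0 / 4.0  # downlink, 3:1 split -> 3/4 of frame
    ul_bps, ul_share = 14e6, 1.0 / 2.0  # uplink, 1:1 split -> 1/2 of frame

    print("DL: %.1f bit/s/Hz of downlink airtime" % (dl_bps / (channel_hz * dl_share)))  # 6.1
    print("UL: %.1f bit/s/Hz of uplink airtime" % (ul_bps / (channel_hz * ul_share)))    # 2.8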
Dominant market operators are most interested in using WiMAX as low-cost transport for low-cost voice services. WiMAX has a two-step evolution. First, expansion of the global fixed-wireless market will not happen immediately as a result of WiMAX technology, because of slow migration toward purchasing WiMAX equipment: service providers are skeptical about adoption and deployment, and are waiting until prices fall to the point where they can no longer afford to disregard WiMAX. At present, users are seeing the beginning of the second stage of WiMAX, the dawn of metropolitan-area coverage. Since the so-called 802.16e mobile wireless broadband standard has already been approved, laptops and other mobile devices can now integrate WiMAX chipsets, so users can access the Internet ubiquitously in WiMAX-covered areas. The second phase of WiMAX could therefore be very disruptive for 3G operators and could lead to overlay WiMAX networks in urban areas.
IEEE 802.20:
The so-called IEEE 802.20, or Mobile Broadband Wireless Access (MBWA), specification is the first IEEE standard that explicitly addresses the needs of mobile customers in moving vehicles. The design parameters of the specification include support for vehicular mobility up to 250 kilometers per hour; this criterion supports use in fleet cars and trucks, as well as in high-speed commuter trains across much of the world. Whereas 802.16e's roaming support is generally limited to local and regional areas, 802.20 shares with 3G the ability to support global roaming. Like 802.16e, 802.20 provides QoS suitable for low-latency services, unlike 3G cellular data service, which is an inherently high-latency architecture. 802.20 and 802.16e also share symmetric efficiency between the uplink and downlink, unlike 3G cellular networks, whose uplinks are less efficient than their downlinks. More efficient uplinks can benefit business users who must perform large data synchronizations or uploads from mobile systems to central corporate systems. The 802.20 standard plans to combine a number of desirable features of 802.16e networks and 3G cellular data while reducing the limitations of both. 802.20 solutions therefore address the need for a comprehensive spectrum of functionality for mobile business and personal-computing deployments.
3. SCENARIOS AND APPLICATIONS
SCENARIO PRESENTATION:
A key feature of 4G is likely to be the availability of data rates significantly higher than those of the third generation (3G). It has been suggested that transmission rates of up to 100 Mbps for high mobility and 1 Gbps for low mobility should be the target values. Such rates imply that greater spectrum efficiency and a lower cost per bit will be key requirements for these future systems. Other important expected elements are increased flexibility of mobile terminals and networks, multimedia facilities, and high-data-rate connections. Future systems will, of course, converge on other characteristics as well. Given these views on the characteristics of fourth-generation (4G) wireless telecommunications, the reopened issue of spectrum allocation, and the feasibility of the technology, the advent of 4G service will bring a series of changes to the competitive environment, to regulation and policy, and to the wireless communication services of the future. It is therefore very important to prepare well for 4G service.
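To make those targets concrete, consider how long a sizeable file transfer would take at each suggested rate. The Python sketch below uses the rates quoted above; the 700 MB file size is our own illustrative assumption:

    # Time to transfer a 700 MB file at the suggested 4G target rates.
    file_bits = 700 * 8 * 1e6  # 700 megabytes in bits (decimal megabytes)

    for label, rate_bps in (("High mobility, 100 Mbps", 100e6),
                            ("Low mobility,    1 Gbps", 1e9)):
        print("%s: %.0f s" % (label, file_bits / rate_bps))  # 56 s and 6 s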
EVOLUTION PATH OPTIONS TOWARD 4G
Several scenarios are described to show the possible paths of the wireless communications industry toward 4G. These scenarios are based on different wireless access technologies such as WiMAX, WiBro, 3G LTE, and IEEE 802.20. In the ongoing standardization work of the 4G industry and related bodies, one objective is to establish a comprehensive system that seamlessly connects improved forms of existing wireless systems such as 3G WCDMA and HSDPA. In this scenario, existing companies maintain their current customer base and integrate 4G services. On the other hand, technological innovation has made it possible for wireless services not descended from 3G, such as WiMAX or the widening IEEE 802 family, to compete against 3G services. In addition, individuals and organizations have begun to provide open, free wireless communication through various technologies. Figure 2 shows these different evolution paths to 4G. For the scenarios provided in this paper, it is assumed that 4G service will arrive after 2012. Whether a 4G service survives will be determined by whether it can technically support the 4G characteristics and hold the market through service differentiation from competitors. To give concrete form to 4G systems, four scenarios are constructed in this paper.
Applications
• Voice
Voice is, and will remain, the most important type of application in mobile telecommunications.
The most important features of the Advanced Speech Call Items (ASCI) include the following:
Voice broadcast service (VBS): the ability of a single phone to talk to a group of mobiles;
Voice group call service (VGCS): the ability of a group of mobiles to talk to each other;
Enhanced multi-level precedence and pre-emption (EMLPP): emergency calls can pre-empt less urgent calls.
• Messaging
• Multimedia Messaging Service (MMS)
- Text, sounds, images, and video
- Transition from the Short Message Service (SMS)
- Open Internet standards for messaging
• Internet Access
• Web Applications
- Information Portals
- Wireless Markup Language (WML) pages using the Wireless Application Protocol (WAP)
• Location-Based Applications
➢ Emergency services
• E911 (Enhanced 911)
➢ Value-added personal services
• Friend finder, directions
➢ Commercial services
• Coupons or offers from local shops
➢ Internal network use
• Traffic and coverage measurements
➢ Lawful interception
• Extensions
➢ Location (in 3D), speed, and direction
• With date and time
➢ Measurement accuracy
➢ Response time
• As a measure of quality of service
➢ Security and privacy
• Authorized clients, secure information exchange, and privacy control by the user and/or operator
• Games
➢ Games will be another important application segment in 3G/4G.
• Electronic Agents
➢ Electronic agents are expected to play an important role in future mobile work: agents are sent out to carry out searches and tasks on the Internet and then report back to their owners.
➢ They include e-care, e-secretary, e-consultant, and mailing-administrator agents, etc. This kind of control is also what we expect of home-automation applications.
• Dating applications
➢ These are already very popular in Asia.
➢ This can be a simple bulletin board carrying dating announcements combined with an anonymous e-mail server, or it could be a mobile chat room.
Strengths, Weaknesses, Opportunities and Threats of 4G
Considering the features of 4G, the expected scenarios, and market trends and applications, we can identify the strengths, weaknesses, opportunities, and threats of 4G with a better understanding. The lists follow.
Strengths in 4G:
➢ 4G visions take into account the installed base and past investments
➢ A strong market position is expected for telecommunications providers
➢ Faster data transmission and higher bit rates and bandwidth, allowing more business and marketing applications
➢ Advantages in customizing multimedia communication tools
Weaknesses in 4G:
➢ No large community of advanced mobile-data-application users yet
➢ A growing gap between vendors and telecom operators
➢ May not offer a full Internet experience because of limited speed and bandwidth
➢ Comparatively higher cost of infrastructure use and deployment than earlier mobile generations
Opportunities in 4G:
➢ An evolutionary approach can provide opportunities for 4G
➢ The emphasis on heterogeneous networks leverages past investments
➢ Strategic alliances and coalitions, with opportunities for non-traditional telecommunications industries
➢ Sophisticated and mature commercialization of 4G technology
➢ Encourages more e-commerce and m-commerce applications
➢ Stimulates consumption in the global economy and restores consumer confidence, creating opportunities for telecommunications
➢ Consumers are expected to continue replacing terminals with new technologies at a rapid pace
➢ With higher data capacity in demand, the growth opportunity for 4G is very bright
Threats to 4G:
➢ Faster rates of growth and evolution in other regions
➢ Since 3G mobile is still in the market, competition in the mobile industry is tightening
4. PROSPECTS FOR 3G/4G/NEXT-GENERATION MOBILE TECHNOLOGY
• 5G (the real wireless world), the completed WWWW (World Wide Wireless Web):
The idea of the WWWW (World Wide Wireless Web) starts from 4G technologies. The next evolution builds on 4G and completes the idea of forming a real wireless world. 5G should therefore make an important difference and add more services and benefits over 4G; 5G should be a smarter technology that interconnects the entire world without limits.
5. CONCLUSION
From this analysis of 4G technology, it seems inevitable that 4G will completely replace 3G in the long term. In the short term, however, 4G and 3G will tend to maintain a relationship of cooperative competition. For 4G to grow in the future market, it will inevitably compete with 3G and for 3G customer acquisition. As analyzed and investigated through the scenarios, a comparison was made here among the three candidates for 4G: in the cases of 3GPP LTE, WiMAX, and WiBro, service providers and manufacturers all pursue strategies targeting high data rates at high mobility. However, mainstream service providers remain concerned about regulation, market uncertainty, and financial burden. There are also new spectrum-allocation issues that must be resolved and settled, as well as the viability of the technology. Either way, there are still many opportunities for 4G. Under these circumstances, for the future telecommunications market to flourish, each technology should be completed soon, and its standards and systems developed to meet consumer demands in a timely manner. At the same time, technical development, change, and innovation should be reflected in future regulatory policy.
Wednesday, August 11, 2010
IP Tracing
Getting the Internet Protocol (IP) address of a remote system is said to be the most important step in hacking a system. Sometimes, however, we get an IP in order to find out more about someone or some host. But how can an IP address be used to get more information on the location, etc., of a system? Well, this manual is aimed at answering just that question.
Actually, the IP address (in fact, the entire TCP/IP protocol) is structured such that one cannot tell in which country a system with a given IP is situated simply by looking at the address. An IP address has no field that tells you the country in which the computer using it resides. So myths like 'the second or the third field of an IP stands for the country in which the system using it resides' are definitely false. However, one can sometimes guess or deduce the country, and even the city, in which a system resides simply by looking at the first three fields of its IP. Let us take an example to understand this. But before I move on to the example, let us understand how exactly IP addresses are awarded to you.
Firstly, your ISP registers with the central authority and gets a particular range of IP addresses from which its various customers (the people who dial into its servers) can be awarded addresses. Most ISPs are given a Class C network address. A Class C network address contains a 24-bit network prefix (the first three fields) and an 8-bit host number (the last field). It is referred to as a "/24" and is commonly used by most ISPs.
Just as in the real world everyone has an individual home address or telephone number at which that particular person can be contacted, all computers connected to the Internet are given a unique Internet Protocol (IP) address that can be used to contact that particular computer. In geek language, an IP address is a dotted-decimal notation that divides the 32-bit Internet address into four 8-bit fields.
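As a minimal sketch of what "four 8-bit fields of a 32-bit address" means, the following Python packs the dotted-decimal form into a single 32-bit integer and unpacks it again; the example address is the one used below:

    # Pack a dotted-decimal IP into one 32-bit integer, and unpack it.
    def ip_to_int(ip):
        a, b, c, d = (int(x) for x in ip.split("."))
        return (a << 24) | (b << 16) | (c << 8) | d

    def int_to_ip(n):
        return ".".join(str((n >> s) & 0xFF) for s in (24, 16, 8, 0))

    print(ip_to_int("202.144.49.110"))   # 3398447470
    print(int_to_ip(3398447470))         # 202.144.49.110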
Does the IP address give me some information or do the numbers stand for anything?
Let us take the following IP address as an example: 202.144.49.110. The first field, the number before the first dot, i.e., 202, tells you the class of the address (here, Class C) and therefore how it splits into a network prefix and a host number. For this Class C address, the network prefix is formed by the first three fields, 202.144.49, which identify the network in which the host sits, and the last field, 110, is the host number within that network. Within the same network, every host shares the same network prefix. In order to provide flexibility in the size of networks, there are different classes of IP addresses:
Address Class          Dotted-Decimal Notation Range
Class A (/8 prefixes)  1.xxx.xxx.xxx through 126.xxx.xxx.xxx
Class B (/16 prefixes) 128.0.xxx.xxx through 191.255.xxx.xxx
Class C (/24 prefixes) 192.0.0.xxx through 223.255.255.xxx
The various classes will be clearer after reading the next few lines.
Each Class A network address contains an 8-bit network prefix followed by a 24-bit host number. Class A networks are considered primitive and are referred to as "/8s" or just "8s", as they have an 8-bit network prefix. A Class B network address has a 16-bit network prefix followed by a 16-bit host number and is referred to as a "/16". A Class C network address contains a 24-bit network prefix and an 8-bit host number; it is referred to as a "/24" and is the kind most commonly used by ISPs. Due to the growing size of the Internet, network administrators faced many problems: Internet routing tables were beginning to grow, and administrators had to request another network number from the Internet authority before a new network could be installed at their site. This is where subnetting came in.
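The classful split can be read straight off the first field, as the table above shows. Here is a small Python sketch of that rule (Classes D and E are omitted, as they are in the table):

    # Classify an address by its first field, per the classful table above.
    def ip_class(ip):
        first = int(ip.split(".")[0])
        if 1 <= first <= 126:
            return "Class A (/8)"
        if 128 <= first <= 191:
            return "Class B (/16)"
        if 192 <= first <= 223:
            return "Class C (/24)"
        return "outside Classes A-C"

    print(ip_class("202.144.49.110"))  # Class C (/24)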
Now, if your ISP is a big one and provides you with dynamic IP addresses, you will most probably see that whenever you log on to the net, your IP address has the same first 24 bits and only the last 8 bits keep changing. This is because, with subnetting, the IP address structure becomes:
xxx.xxx.zzz.yyy
where the first two parts are the network prefix, zzz is the subnet number, and yyy is the host number. You are always connected to the same subnet within the same network, so the first three parts remain the same and only the last part, yyy, varies.
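In other words, the stable part of such a dynamic address is its /24 prefix. A short sketch with Python's standard ipaddress module makes this concrete (the two addresses are made-up examples):

    # Extract the stable /24 network from two dynamically assigned IPs.
    import ipaddress

    def prefix_24(ip):
        return str(ipaddress.ip_network(ip + "/24", strict=False))

    print(prefix_24("203.98.12.57"))   # 203.98.12.0/24
    print(prefix_24("203.98.12.101"))  # 203.98.12.0/24 -- same subnet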
For example, if an ISP called xyz is given the network address 203.98.12.xx, then you can be awarded any IP whose first three fields are 203.98.12. Get it?
So basically this means that each ISP has a particular range in which to allocate addresses to all its subscribers. In other words, all subscribers connecting to the Internet through the same ISP will fall within this range and are likely to have the same first three fields in their IP addresses. This means that if you have done a lot (and by this I really mean a lot) of research, you could figure out which ISP a person is using simply by looking at his IP. The ISP name could then be used to figure out the person's city and country. Right? Let me take an example to show how cumbersome but easy (once the research is done) the above method can be. Say there are three main ISPs in my country:
ISP Name Network Address Allotted
ISP I 203.94.47.xx
ISP II 202.92.12.xx
ISP III 203.91.35.xx
Now, if I get to know the IP of an e-pal of mine, and it reads: 203.91.35.12, then I
Getting the Internet Protocol or the IP Address of a remote system is said to the most important step in hacking of a system. Sometimes, however we get an IP in order to get more information on someone or some host. But, how can
an IP Address be used to get more information on the location etc of a system? Well, this manual is aimed at answering just this question.
Actually, the IP address (Actually the entire TCP/IP Protocol) is structured or designed such that one cannot tell as to in which country a system having the given IP is situated, by simply looking at it. An IP Address has no fields, which
tell you the country in which the computer using it resides in. So, all myths like ‘The Second or the third field of an IP stands for the country in which the system using it resides’ are definitely false and untrue. However, yes sometimes one can guess or deduce as to in which country and even in which city the system using an
IP resides in, by simply looking at the first three fields of the IP. Let us take an example to understand what I mean to say by this. Now, before I move on the example, let us understand how exactly IP Addresses are awarded to you.
Firstly, your ISP registers at the central authority and gets a particular range of IP addresses between which the various customers (people who dial into their servers) can be awarded IP addresses. Most ISP’s are given a Class C
network Address. A class C Network address contains a 24-bit Network Prefix (the first three fields) and an 8-bit Host number (the last field). It is referred to as "24's" and is commonly used by most ISP's.
Like in the real world, everyone has got an individual Home Address or telephone number so that, that particular individual can be contacted on that number or address, similarly all computers connected to the
Internet are given a unique Internet Protocol or IP address which can be used to contact that particular computer. In geek language an IP address would be a decimal notation that divides the 32- bit Internet addresses (IP) into four 8-
bit fields.
Does the IP address give me some information or do the numbers stand for anything?
Let take the example of the following IP address: 202.144.49.110 Now the first part, the numbers before the first decimal i.e. 209 is the Network number or the Network Prefix.. This means that it identifies the number of the
network in which the host is. The second part i.e. 144 is the Host Number that is it identifies the number of the host within the Network. This means that in the same Network, the network number is same. In order to provide flexibility in the size of the Network, here are different classes of IP addresses:
Address Class Dotted Decimal Notation Ranges
Class A ( /8 Prefixes) 1.xxx.xxx.xxx through 126.xxx.xxx.xxx
Class B ( /16 Prefixes) 128.0.xxx.xxx through 191.255.xxx.xxx
Class C ( /24 Prefixes) 192.0.0.xxx through 223.255.255.xxx
The various classes will be clearer after reading the next few lines.
Each Class A network address contains an 8-bit Network Prefix followed by a 24-bit Host number; these are considered the most primitive allocations and are referred to as "/8's" or just "8's". A Class B network address contains a 16-bit Network Prefix followed by a 16-bit Host number and is referred to as a "/16". A Class C network address contains a 24-bit Network Prefix and an 8-bit Host number; it is referred to as a "/24" and is the type most commonly given to ISP's. As the Internet grew, network administrators faced problems: routing tables were growing, and an administrator had to request another network number from the Internet registry before a new network could be installed at a site. This is where sub-netting came in.
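To make this concrete, here is a minimal sketch in Python (my choice of illustration language; nothing in this manual depends on it) that splits a dotted-decimal IP into Network Prefix and Host number using the first-octet ranges from the table above:

# Classful split: how many leading fields are network depends on the class.
def classful_split(ip):
    octets = ip.split(".")
    first = int(octets[0])
    if 1 <= first <= 126:        # Class A: /8  -> 1 network field
        n = 1
    elif 128 <= first <= 191:    # Class B: /16 -> 2 network fields
        n = 2
    elif 192 <= first <= 223:    # Class C: /24 -> 3 network fields
        n = 3
    else:
        raise ValueError("outside the Class A/B/C ranges")
    return ".".join(octets[:n]), ".".join(octets[n:])

print(classful_split("202.144.49.110"))  # ('202.144.49', '110') -- Class C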
Now, if your ISP is a big one and provides you with dynamic IP addresses, you will most probably see that whenever you log on to the net, your IP address has the same first 24 bits and only the last 8 bits keep changing. This is because, once sub-netting comes in, the IP Address structure becomes:
xxx.xxx.zzz.yyy
where the first two parts are the Network Prefix, zzz is the Subnet number and yyy is the Host number. You are always connected to the same subnet within the same network, so the first three parts remain the same and only the last part, i.e. yyy, is variable.
For example, if an ISP xyz is given the network address 203.98.12.xx, then you can be awarded any IP whose first three fields are 203.98.12. Get it?
So basically this means that each ISP has a particular range in which to allocate all its subscribers; in other words, all people connecting to the Internet through the same ISP fall in this range. In effect, all people using the same ISP are likely to have the same first three fields in their IP Addresses. This means that if you have done a lot (and by this I really mean a lot) of research, you could figure out which ISP a person is using by simply looking at his IP. The ISP name could then be used to figure out the city and country of the person. Right? Let me take an example to show how cumbersome but easy (once the research is done) the above method can be. In my country, say there are three main ISP's:
ISP Name Network Address Allotted
ISP I 203.94.47.xx
ISP II 202.92.12.xx
ISP III 203.91.35.xx
Now, if I get to know the IP of an e-pal of mine, and it reads 203.91.35.12, then I can pretty easily figure out that he uses ISP III to connect to the internet. Right? You might say that any idiot would be able to do this. Well, yes and no.
You see, the above method of finding out the ISP of a person was successful only because we already had the ISP and Network Address Allotted list with us. My point is that the above method can be successful only after a lot of research and experimentation. And I do think such research can be helpful sometimes.
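Once you have such a list, the lookup itself is trivial. A tiny Python sketch using the made-up table above:

# Made-up ISP table from the example above; building a real one is the hard part.
isp_table = {
    "203.94.47": "ISP I",
    "202.92.12": "ISP II",
    "203.91.35": "ISP III",
}

def guess_isp(ip):
    prefix = ".".join(ip.split(".")[:3])   # first three fields
    return isp_table.get(prefix, "unknown ISP")

print(guess_isp("203.91.35.12"))  # ISP III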
Also, this would not work on a larger scale. What if the IP that you have belongs to someone living in a remote igloo at the North Pole? You could not possibly get the network addresses of all the ISP's in the world, could you?
NOTE: In the above case, you also get to know the city of the system using the given IP, as most ISP’s use different
network addresses in different cities. Also, some ISP’s are operational in a single city.
So, is there a better method of getting the location of an IP? Yes, Reverse DNS lookups hold the key.
Just as a DNS lookup converts a hostname into an IP address, a Reverse DNS lookup converts the IP address of a host into its hostname. By hostname, what I mean is the name of the remote system in letters, numbers and periods. For example, mail2.bol.net.in would be a hostname, while 203.45.67.98 would not.
The popular and wonderful Unix utility ‘nslookup’ can be used for performing Reverse DNS lookups.
So, if you are using a *nix box or have access to a shell account, the first thing to do is to locate where the nslookup command is hidden by issuing the following command:
' whereis nslookup '.
Once you locate where the utility is hidden, you could easily use it to perform both normal and reverse DNS lookups.
As this is not a manual on the 'nslookup' command, I will simply give a basic, relevant outline. To get a more detailed description of how it works and how to use it, read the *nix man pages or the documentation.
We can use ‘nslookup’ to perform a reverse DNS lookup by mentioning the IP of the host at the prompt.
For Example,
$>nslookup IP Address
Note: The below IP’s and corresponding hostnames have been made up. They may not actually exist.
Let us say that above, instead of IP Address, we type 203.94.12.01 (the IP I want to trace):
$>nslookup 203.94.12.01
Then, you would receive a response similar to: mail2.bol.net.in
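If you would rather script this than type at a prompt, Python's standard library can do a reverse DNS lookup too. A small sketch (the IP is the made-up one from above, so a real run would most likely report no entry):

import socket

ip = "203.94.12.1"   # made-up example IP
try:
    hostname, aliases, addresses = socket.gethostbyaddr(ip)
    print(hostname)                      # e.g. mail2.bol.net.in
except socket.herror:
    print("no reverse DNS entry for", ip)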
Now, if you look carefully at the hostname that the Reverse DNS lookup gave us, the last part reveals the country in which the system resides. You see, the '.in' part signifies that the system is located in India. All countries have been allotted country codes, which more often than not form the last part of the hostnames of systems located in that country. This method can also be used to figure out which country a person lives in if you know his email address: for example, if a person has an email address ending in .ph then he probably lives in the Philippines, and if it ends in .il then he lives in Israel, and so on. Some common country codes are:
Country Code
Australia .au
Indonesia .id
India .in
Japan .jp
Israel .il
Britain .uk
For a complete list of country codes, visit:
http://www.alldomains.com/
http://www.iana.org/domain-names.html
General Extra Tip: To get the complete list of US State Abbreviation codes, visit:
http://www.usps.gov/ncsc/lookups/abbr_state.txt
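Picking the country hint out of a hostname is easy to automate. A sketch using only the handful of codes listed above (a real tool would load the complete list from the sites just mentioned):

codes = {"au": "Australia", "id": "Indonesia", "in": "India",
         "jp": "Japan", "il": "Israel", "uk": "Britain"}

def country_hint(hostname):
    # The last dot-separated label is often a two-letter country code.
    tld = hostname.rstrip(".").split(".")[-1].lower()
    return codes.get(tld, "no country hint (.com/.net etc.)")

print(country_hint("mail2.bol.net.in"))  # India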
Windows users can perform Reverse DNS queries by downloading a utility called SamSpade from:
http://www.samspade.com/
Another method of getting the exact geographical location of a system on the globe is to make use of the WHOIS database. The WHOIS database is basically the main database containing a variety of information, such as contact details and the name of the person who owns a particular domain name. In a WHOIS query, one supplies the WHOIS service with the hostname on which one wants more information, and the service replies with the information stored in its database. This method can be used to get some pretty accurate information on a particular IP or hostname; however, it is probably of no use if you are trying to pin down the exact location of a dynamic IP. Even then, it can be used to get at least the city in which the ISP used by the victim is situated.
You can carry out WHOIS queries at: http://www.alldomains.com/
You could also perform a WHOIS enquiry directly by entering the following in the location bar of your browser:
http://205.177.25.9/cgi-bin/whois?abc.com
Note: Replace abc.com with the domain name on which you want to perform a WHOIS query.
This method cannot be used to get the contact address of a person if the IP that you use to trace him belongs to his ISP. So either you need to know a domain name registered in his name, or you have to remain satisfied knowing only the city (and ISP) used by the person.
Say the victim has registered a domain name and you want to use it to find out the city in which he resides. One thing to remember in this case is that if the victim has registered the domain name using any of the various free .com registration services like Namezero.com etc., then the domain name would probably be registered in the company's name and not the victim's name. So a WHOIS query will give information on the ISP and not the victim.
NEWBIE NOTE: The WHOIS service by default runs on Port 43 of a system. Try performing a WHOIS query by
telnetting to Port 43 and manually typing out the query. I have never tried it, however, it might be fun.
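For the curious, here is roughly what that looks like scripted rather than typed into telnet. This is only a sketch; whois.internic.net is one well-known WHOIS server, not the only choice:

import socket

def whois(domain, server="whois.internic.net"):
    s = socket.create_connection((server, 43))   # WHOIS runs on port 43
    s.sendall((domain + "\r\n").encode())        # the query is just the name
    chunks = []
    while True:
        data = s.recv(4096)
        if not data:                             # server closes when done
            break
        chunks.append(data)
    s.close()
    return b"".join(chunks).decode(errors="replace")

print(whois("abc.com"))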
Yet another method, and probably the second most efficient (after Reverse DNS queries) for tracing an IP to its exact geographical location, is to carry out a 'traceroute' on it. The 'tracert' or 'traceroute' commands give you the names or IP's of the routers through which a packet passes before reaching the destination. Windows users can perform a trace of an IP by typing the following at the command prompt:
C:\windows>tracert IP or Hostname
For more information about the usage and syntax of this command, type 'tracert' at the command prompt. Anyway, let us now see the result when I do a tracert on my IP. Remember, I live in New Delhi, which is a city in India. Watch the hostnames closely, as you will find that they reveal the cities through which the packet passes.
C:\windows>tracert 203.94.12.54
Tracing route to 203.94.12.54 over a maximum of 30 hops
1 abc.netzero.com (232.61.41.251) 2 ms 1 ms 1 ms
2 xyz.Netzero.com (232.61.41.0) 5 ms 5 ms 5 ms
3 232.61.41.10 (232.61.41.251) 9 ms 11 ms 13 ms
4 we21.spectranet.com (196.01.83.12) 535 ms 549 ms 513 ms
5 isp.net.ny (196.23.0.0) 562 ms 596 ms 600 ms
6 196.23.0.25 (196.23.0.25) 1195 ms 1204 ms
7 backbone.isp.ny (198.87.12.11) 1208 ms 1216 ms 1233 ms
8 asianet.com (202.12.32.10) 1210 ms 1239 ms 1211 ms
9 south.asinet.com (202.10.10.10) 1069 ms 1087 ms 1122 ms
10 backbone.vsnl.net.in (203.98.46.01) 1064 ms 1109 ms 1061 ms
11 newdelhi-01.backbone.vsnl.net.in (203.102.46.01) 1185 ms 1146 ms 1203 ms
12 newdelhi-00.backbone.vsnl.net.in (203.102.46.02) 1159 ms 1073 ms
13 mtnl.net.in (203.194.56.00) 1052 ms 642 ms 658 ms
So the above shows us that the route taken by the data to reach the supplied IP is somewhat like this:
Netzero (ISP from which the data is sent) -> Spectranet (a backbone provider) -> New York ISP -> New York backbone -> Asia -> South Asia -> India backbone -> New Delhi backbone -> another router in the New Delhi backbone -> New Delhi ISP.
So, basically this tracert does reveal my real location, which is: New Delhi, India, South Asia. Get it?
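If you want to automate the eyeballing, you could run the trace from a script and pull out the hop hostnames. A rough sketch, assuming Windows (on *nix substitute traceroute); the regular expression is deliberately crude:

import re
import subprocess

def trace_hostnames(target):
    out = subprocess.run(["tracert", target],
                         capture_output=True, text=True).stdout
    # Grab anything that looks like a hostname on the hop lines.
    return re.findall(r"[\w\-]+(?:\.[\w\-]+)*\.[a-z]{2,3}", out)

for name in trace_hostnames("203.94.12.54"):
    print(name)   # look for hints like 'newdelhi' or a '.in' ending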
Sometimes, doing a 'tracert' on an IP does not give useful information. In the above example, the hostnames returned revealed the city or country in which the system is located. More often than not you will get such helpful hostnames, but sometimes the hostnames returned are very vague and unhelpful. So what do you do then? Well, fret not; simply do the following. Let us say that the trace ends at the hostname abc.com. This is very vague and gives absolutely no clue as to where the system is located. However, what you could do is launch your browser and visit http://www.abc.com/. Now, abc.com is probably an ISP, and an ISP will definitely give its location and the cities in which it operates, so you could still have a good chance of learning the city of the victim. A very interesting utility is VisualRoute (http://www.visualroute.com/), which traces a hostname or IP and shows the path taken by the packet to reach the destination on a world map. It is very useful and reveals some excellent information; however, it does sometimes tend to be inaccurate.
Sunday, August 8, 2010
1. INTRODUCTION
A DNA computer can store billions of times more information than your PC hard drive and solve complex problems in less time. We know that computer chip manufacturers are racing to make the next microprocessor ever faster, but microprocessors made of silicon will eventually reach their limits of speed and miniaturization. Chip makers need a new material to produce faster computing speeds.
To understand DNA computing, let us first examine how a conventional computer processes information. A conventional computer performs mathematical operations by using electrical impulses to manipulate zeroes and ones on silicon chips. A DNA computer is based on the fact that information is "encoded" within deoxyribonucleic acid (DNA) as patterns of molecules known as nucleotides. By manipulating how the nucleotides combine with each other, a DNA computer can be made to process data. The branch of computing dealing with DNA computers is called DNA Computing.
The concept of DNA computing was born in 1993, when Professor Leonard Adleman, a mathematician specializing in computer science and cryptography, accidentally stumbled upon the similarities between conventional computers and DNA while reading a book by James Watson. A little more than a year later, in 1994, Adleman, a professor at the University of Southern California, created a storm of excitement in the computing world when he announced that he had solved a famous computational problem. This computer solved the traveling salesman problem, also known as the directed "Hamiltonian path" problem, which is explained later. DNA was shown to have massively parallel processing capabilities that might allow a DNA-based computer to solve hard computational problems in a reasonable amount of time.
There was nothing remarkable about the problem itself, which dealt with finding the shortest route through a series of points. Nor was there anything special about how long it took Adleman to solve it — seven days — substantially greater than the few minutes it would take an average person to find a solution. What was exciting about Adleman’s achievement was that he had solved the problem using nothing but deoxyribonucleic acid (DNA) and molecular chemistry.
2. Some Information About DNA:-
"Deoxyribonucleic acid": the molecules inside cells that carry genetic information and pass it from one generation to the next. See mitosis, chromosomes.
We have heard the term DNA a million times. You know that DNA is something inside cells. We know that each of us looks different, and this is because we have different DNA.
Have you ever wondered how the DNA in ONE egg cell and ONE sperm cell can produce a whole human being different from any other? How does DNA direct a cell's activities? Why do mutations in DNA cause such trouble (or have a positive effect)? How does a cell in your kidney "know" that it's a kidney cell as opposed to a brain cell or a skin cell or a cell in your eye? How can all the information needed to regulate the cell's activities be stuffed into a tiny nucleus?
A basic tenet is that all organisms on this planet, however complex they may be perceived to be, are made from the same type of genetic blueprint. The way that blueprint is encoded is the factor that decides our physical makeup, from the colour of our eyes to the very fact that we are human.
To begin to find the answers to all these questions, you need to learn about the biological molecules called nucleic acids.
An organism (be it bacterium, rosebush, ant or human) has some form of nucleic acid, which is the chemical carrier of its genetic information. There are two types of nucleic acids, deoxyribonucleic acid (DNA) and ribonucleic acid (RNA), which code for all the information that determines the nature of the organism's cells. As a matter of fact, DNA codes for all the instructions needed for the cell to perform different functions. Did you know that human DNA contains enough information to produce about 100,000 proteins?
Genes are made up of DNA, which is shaped like a twisted ladder with rungs made up of molecules called nucleotide bases linked together in specific pairs. The arrangement of these bases along the DNA provides the cell with instructions on making proteins. DNA is tightly coiled into rod-shaped structures called chromosomes, which are stored in the nucleus of the cell. There are 22 pairs of chromosomes in each body cell, plus two sex chromosomes.
2.1) Structure of DNA:-
This structure has two helical chains each coiled round the same axis (see diagram). We have made the usual chemical assumptions, namely, that each chain consists of phosphate diester groups joining ß-D-deoxyribofuranose residues with 3',5' linkages. The two chains (but not their bases) are related by a dyad perpendicular to the fibre axis. Both chains follow right- handed helices, but owing to the dyad the sequences of the atoms in the two chains run in opposite directions.
There is a residue on each chain every 3.4 Å in the z-direction. We have assumed an angle of 36° between adjacent residues in the same chain, so that the structure repeats after 10 residues on each chain, that is, after 34 Å. The distance of a phosphorus atom from the fibre axis is 10 Å. As the phosphates are on the outside, cations have easy access.
The structure is an open one, and its water content is rather high. At lower water contents we would expect the bases to tilt so that the structure could become more compact.
The novel feature of the structure is the manner in which the two chains are held together by the purine and pyrimidine bases. The planes of the bases are perpendicular to the fibre axis. They are joined together in pairs, a single base from one chain being hydrogen-bonded to a single base from the other chain, so that the two lie side by side with identical z-co-ordinates. One of the pair must be a purine and the other a pyrimidine for bonding to occur.
The hydrogen bonds are made as follows : purine position 1 to pyrimidine position 1 ; purine position 6 to pyrimidine position 6.
If it is assumed that the bases only occur in the structure in the most plausible tautomeric forms (that is, with the keto rather than the enol configurations) it is found that only specific pairs of bases can bond together. These pairs are : adenine (purine) with thymine (pyrimidine), and guanine (purine) with cytosine (pyrimidine).
In other words, if an adenine forms one member of a pair, on either chain, then on these assumptions the other member must be thymine ; similarly for guanine and cytosine. The sequence of bases on a single chain does not appear to be restricted in any way. However, if only specific pairs of bases can be formed, it follows that if the sequence of bases on one chain is given, then the sequence on the other chain is automatically determined.
It has been found experimentally (3,4) that the ratio of the amounts of adenine to thymine, and the ratio of guanine to cytosine, are always very close to unity for deoxyribose nucleic acid.
It is probably impossible to build this structure with a ribose sugar in place of the deoxyribose, as the extra oxygen atom would make too close a van der Waals contact. The previously published X-ray data (5,6) on deoxyribose nucleic acid are insufficient for a rigorous test of our structure. So far as we can tell, it is roughly compatible with the experimental data, but it must be regarded as unproved until it has been checked against more exact results. Some of these are given in the following communications. We were not aware of the details of the results presented there when we devised our structure, which rests mainly though not entirely on published experimental data and stereochemical arguments.
It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.
2.2) Arrangement of Nucleotides in DNA :-
• One strand:-
Strands of DNA are long polymers of millions of linked nucleotides. These nucleotides consist of one of four nitrogen bases, a five-carbon sugar and a phosphate group. The nucleotides that make up these polymers are named after the nitrogen bases they contain, namely Adenine (A), Cytosine (C), Guanine (G) and Thymine (T). These nucleotides only combine in such a way that C always pairs with G, and T always pairs with A. The two strands of a DNA molecule are anti-parallel, in that each strand runs in the opposite direction. The figure below shows two strands of DNA and the bonding principles of the four types of nucleotides.
The linkage of the sugar-phosphate "backbone" of a single DNA strand is such that there is a directionality. That is, the phosphate on the 5' carbon of deoxyribose is linked to the 3' carbon of the next deoxyribose. This lends a directionality to a DNA strand which is said to have a 5' to 3' direction. The two strands of a DNA double helix are arranged in opposite directions and are said to be anti-parallel in that one strand is 5' - 3' and the complementary strand is 3' - 5'.
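The pairing rules are simple enough to capture in a few lines of code. A sketch (Python is just my choice of illustration language) showing the base-wise complement and the reversed, anti-parallel second strand:

# C pairs with G and T pairs with A, so the second strand of a double
# helix is the base-wise complement of the first, read in the opposite
# (anti-parallel) direction.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    return "".join(PAIR[b] for b in strand)

def antiparallel_strand(strand):
    return complement(strand)[::-1]   # reversed: 3'->5' versus 5'->3'

print(complement("ATGCCG"))           # TACGGC
print(antiparallel_strand("ATGCCG"))  # CGGCAT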
• Double Helix:-
The particular order of the bases arranged along the sugar-phosphate backbone is called the DNA sequence, and the combinations of the four nucleotides in the estimated millions-long polymer strands result in billions of combinations within a single DNA double helix. These massive numbers of combinations allow for the multitude of differences between every living thing on the planet, from the large scale (for example, mammals as opposed to plants) to the small scale (differences in human hair colour). The figure above shows the double-helix shape of DNA.
3. Operations on DNA :
While a number of equivalent formalizations exist, we follow the descriptions below. Note that the types of operations available are a result of the capabilities of molecular biology rather than the wishes of algorithm designers. Also note that these operations are performed in constant time on test tubes which, for the sake of this discussion, may be of arbitrary size. The operations are:
1) MERGE :
This is the simple operation of combining the contents of two test tubes in a third tube.
2) ANNEAL :
This is the process by which complementary strands of DNA are paired to form the famous double-helix structure of Watson and Crick. Annealing is achieved by cooling a DNA solution, which encourages pairing. Adleman uses this in step 1 to generate all legal paths through the graph.
3) MELT :
Melting is the inverse operation of annealing. By heating the contents of a tube, double-stranded DNA sequences are denatured, or separated into their two single-stranded parts.
4) SEPARATION BY LENGTH :
The contents of a test tube can be separated by increasing length. This is achieved by gel electrophoresis, whereby longer strands travel more slowly through the gel. This operation was used by Adleman in step 3 of his solution to the HP problem.
5) SEPARATION BY SEQUENCE :
This operation allows one to remove from solution all the DNA strands that contain a desired sequence. It is performed by generating strands whose complement is the desired sequence. These newly generated strands are attached to a magnetic substance which is used to extract the sequences after annealing. This operation is the crux of Adleman's step 4.
6) COPYING/AMPLIFICATION :
Copies are made of DNA strands in a test tube. The strands to be copied must have known sequences at both the beginning and end in order for this operation to be performed.
7) APPEND :
This process makes a DNA strand longer by adding a character or strand to the end of each sequence.
8) DETECT :
It is also possible to analyze a test tube in order to determine whether or not it contains at least one strand of DNA.
This operation, for example, is the last in Adleman’s algorithm where we attempt to find a DNA sequence that has survived the previous steps.
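Abstractly, a test tube can be pictured as a multiset of strings over {A, C, G, T}, and the operations above as manipulations of that multiset. A sketch of a few of them in Python (this is only a model of the formalism; in the lab each step is chemistry, not code):

from collections import Counter

def merge(t1, t2):                    # 1) MERGE: pour two tubes together
    return t1 + t2

def separate_by_length(tube, n):      # 4) SEPARATION BY LENGTH
    return Counter({s: c for s, c in tube.items() if len(s) == n})

def separate_by_sequence(tube, seq):  # 5) SEPARATION BY SEQUENCE
    return Counter({s: c for s, c in tube.items() if seq in s})

def detect(tube):                     # 8) DETECT: any strand left at all?
    return sum(tube.values()) > 0

tube = Counter({"GCTACGCTAGTA": 3, "GCTACG": 5})
print(detect(separate_by_sequence(tube, "CTAGTA")))  # True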
4. Adleman’s Hamiltonian path problem:-
The Hamiltonian Path problem.
In 1994, Leonard M. Adleman solved an unremarkable computational problem with a remarkable technique. It was a problem that a person could solve in a few moments, or that an average desktop machine could solve in the blink of an eye. It took Adleman, however, seven days to find a solution. Why then was this work exceptional? Because he solved the problem with DNA. It was a landmark demonstration of computing on the molecular level.
The type of problem that Adleman solved is a famous one. It's formally known as a directed Hamiltonian Path (HP) problem, but is more popularly recognized as a variant of the so-called "traveling salesman problem." In Adleman's version of the traveling salesman problem, or "TSP" for short, a hypothetical salesman tries to find a route through a set of cities so that he visits each city only once. As the number of cities increases, the problem becomes more difficult until its solution is beyond analytical methods altogether, at which point it requires brute-force search. TSPs with a large number of cities quickly become computationally expensive, making them impractical to solve on even the latest supercomputer. Adleman's demonstration involves only seven cities, making it in some sense a trivial problem that can easily be solved by inspection. Nevertheless, his work is significant for a number of reasons.
It illustrates the possibilities of using DNA to solve a class of problems that is difficult or impossible to solve using traditional computing methods.
It's an example of computation at a molecular level, potentially a size limit that may never be reached by the semiconductor industry. It demonstrates unique aspects of DNA as a data structure. It demonstrates that computing with DNA can work in a massively parallel fashion.
5. Adleman’s experiment :
There is no better way to understand how something works than by going through an example step by step. So let’s solve our own directed Hamiltonian Path problem, using the DNA methods demonstrated by Adleman. The concepts are the same but the example has been simplified to make it easier to follow and present.
Suppose that I live in Boston and need to visit four cities: Atlanta, San Diego, St.Louis and NY, with NY being my final destination. The airline I'm taking has a specific set of connecting flights that restrict which routes I can take (i.e. there is a flight from Boston to San Diego, but no flight from St.Louis to San Diego). What should my itinerary be if I want to visit each city only once?
Figure 1. A sample traveling salesman problem involving the shortest path connecting all cities. Arrows indicate the direction that someone can travel. For example, a voyager can leave Atlanta and arrive in St. Louis, and vice versa
It should take you only a moment to see that there is only one route. Starting from Boston you need to fly to San Diego , Atlanta, St.Louis and then to N.Y. Any other choice of cities will force you to miss a destination, visit a city twice, or not make it to N.Y. For this example you obviously don’t need the help of a computer to find a solution. For six, seven, or even eight cities, the problem is still manageable. However, as the number of cities increases, the problem quickly gets out of hand. Assuming a random distribution of connecting routes, the number of itineraries you need to check increases exponentially.
Pretty soon you will run out of pen and paper listing all the possible routes, and it becomes a problem for a computer.....or perhaps DNA. The method Adleman used to solve this problem is basically the shotgun approach mentioned previously. He first generated all the possible itineraries and then selected the correct itinerary. This is the advantage of DNA. It’s small and there are combinatorial techniques that can quickly generate many different data strings. Since the enzymes work on many DNA molecules at once, the selection process is massively parallel.
Specifically, the method based on Adleman’s experiment would be as follows:
1) Generate all possible routes.
2) Select itineraries that start with the proper city and end with the final city.
3) Select itineraries with the correct number of cities.
4) Select itineraries that contain each city only once.
All of the above steps can be accomplished with standard molecular biology techniques.
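Before the molecular details, here is the same generate-and-filter idea as ordinary code, with an assumed flight list chosen to match the route worked out above. The point of the DNA version is that each of these filters acts on some 10^13 strands at once:

from itertools import permutations

# Assumed connectivity for this example (the figure is not reproduced here).
flights = {("Boston", "San Diego"), ("San Diego", "Atlanta"),
           ("Atlanta", "St.Louis"), ("St.Louis", "Atlanta"),
           ("St.Louis", "New York")}

def legal(route):
    return all((a, b) in flights for a, b in zip(route, route[1:]))

# Steps 2-4 hold by construction: fixed endpoints, five cities, each once.
for middle in permutations(["San Diego", "Atlanta", "St.Louis"]):
    route = ("Boston",) + middle + ("New York",)
    if legal(route):               # step 1's annealing only forms legal paths
        print(" -> ".join(route))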
Part I: Generate all possible routes
Strategy : Encode city names in short DNA sequences. Encode itineraries by connecting the city sequences for which routes exist.
DNA can simply be treated as a string of data. For example, each city can be represented by a "word" of six bases:
Boston GCTACG
San Diego CTAGTA
Atlanta TCGTAC
St.Louis CTACGG
New York ATGCCG
The entire itinerary can be encoded by simply stringing together these DNA sequences that represent specific cities. For example, the route from Boston -> San Diego -> Atlanta -> St.Louis -> New York would simply be GCTACGCTAGTATCGTACCTACGGATGCCG, or equivalently it could be represented in double stranded form with its complement sequence.
So how do we generate this? Synthesizing short single-stranded DNA is now a routine process, so encoding the city names is straightforward. The molecules can be made by a machine called a DNA synthesizer or even custom-ordered from a third party. Itineraries can then be produced from the city encodings by linking them together in proper order. To accomplish this you can take advantage of the fact that DNA hybridizes with its complementary sequence.
For example, you can encode the routes between cities by encoding the complement of the second half (last three letters) of the departure city and the first half (first three letters) of the arrival city. The route between St.Louis (CTACGG) and NY (ATGCCG) can be made by taking the second half of the coding for St.Louis (CGG) and the first half of the coding for NY (ATG). This gives CGGATG. By taking the complement of this you get GCCTAC, which not only uniquely represents the route from St.Louis to NY, but will connect the DNA representing St.Louis and NY by hybridizing itself to the second half of the code representing St.Louis (...CGG) and the first half of the code representing NY (ATG...). For example:
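In code, this linker scheme looks as follows; it takes the base-wise complement exactly as in the worked example (a real splint strand would also have to account for orientation):

PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}
city = {"Boston": "GCTACG", "San Diego": "CTAGTA", "Atlanta": "TCGTAC",
        "St.Louis": "CTACGG", "New York": "ATGCCG"}

def route_linker(depart, arrive):
    # second half of departure city + first half of arrival city, complemented
    splice = city[depart][3:] + city[arrive][:3]
    return "".join(PAIR[b] for b in splice)

print(route_linker("St.Louis", "New York"))  # GCCTAC, as worked out above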
Random itineraries can be made by mixing city encodings with the route encodings. Finally, the DNA strands can be connected together by an enzyme called ligase. What we are left with are strands of DNA representing itineraries with a random number of cities and random set of routes. For example:
We can be confident that we have all possible combinations including the correct one by using an excess of DNA encodings, say 10^13 copies of each city and each route between cities. Remember DNA is a highly compact data format, so numbers are on our side.
Part II: Select itineraries that start and end with the correct cities:
Strategy: Selectively copy and amplify only the sections of DNA that start with Boston and end with NY by using the Polymerase Chain Reaction.
After Part I, we now have a test tube full of various lengths of DNA that encode possible routes between cities. What we want are routes that start with Boston and end with NY. To accomplish this we can use a technique called the Polymerase Chain Reaction (PCR), which allows you to produce many copies of a specific sequence of DNA. PCR is an iterative process that cycles through a series of copying events using an enzyme called polymerase. Polymerase will copy a section of single-stranded DNA starting at the position of a primer, a short piece of DNA complementary to one end of the section of DNA that you're interested in.
By selecting primers that flank the section of DNA you want to amplify, the polymerase preferentially amplifies the DNA between these primers, doubling the amount of DNA containing this sequence. After many iterations of PCR, the DNA you're working on is amplified exponentially. So, to selectively amplify the itineraries that start and stop with our cities of interest, we use primers that are complementary to Boston and NY. What we end up with after PCR is a test tube full of double-stranded DNA of various lengths, encoding itineraries that start with Boston and end with NY.
Part III: Select itineraries that contain the correct number of cities.
Strategy: Sort the DNA by length and select the DNA whose length corresponds to 5 cities.
Our test tube is now filled with DNA encoded itineraries that start with Boston and end with NY, where the number of cities in between Boston and NY varies. We now want to select those itineraries that are five cities long. To accomplish this we can use a technique called Gel Electrophoresis, which is a common procedure used to resolve the size of DNA. The basic principle behind Gel Electrophoresis is to force DNA through a gel matrix by using an electric field. DNA is a negatively charged molecule under most conditions, so if placed in an electric field it will be attracted to the positive potential.
However, since the charge density of DNA is constant (charge per length), long pieces of DNA move as fast as short pieces when suspended in a fluid. This is why one uses a gel matrix. The gel is made up of a polymer that forms a meshwork of linked strands. The DNA is forced to thread its way through the tiny spaces between these strands, which slows it down at a rate depending on its length. What we typically end up with after running a gel is a series of DNA bands, each band corresponding to a certain length. We can then simply cut out the band of interest to isolate DNA of a specific length. Since we know that each city is encoded with 6 base pairs of DNA, knowing the length of the itinerary gives the number of cities. In this case we would isolate the DNA that was 30 base pairs long (5 cities times 6 base pairs).
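The band-cutting step amounts to filtering by length. A sketch with two made-up strands:

# Keep only strands whose length matches the wanted number of cities.
def select_band(tube, n_cities, bases_per_city=6):
    return [s for s in tube if len(s) == n_cities * bases_per_city]

tube = ["GCTACGCTAGTATCGTACCTACGGATGCCG",   # 30 bases: five cities
        "GCTACGCTAGTAATGCCG"]               # 18 bases: too short
print(select_band(tube, 5))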
Part IV: Select itineraries that have a complete set of cities:
Strategy: Successively filter the DNA molecules by city, one city at a time. Since the DNA we start with contains five cities, we will be left with strands that encode each city once.
DNA containing a specific sequence can be purified from a sample of mixed DNA by a technique called affinity purification. This is accomplished by attaching the complement of the sequence in question to a substrate such as a magnetic bead. The beads are then mixed with the DNA, and DNA that contains the sequence you're after hybridizes with the complement sequence on the beads. These beads can then be retrieved and the DNA isolated.
So we now affinity-purify five times, using a different city complement for each run. For example, for the first run we use Boston'-beads (where the ' indicates the complement strand) to fish out DNA sequences which contain the encoding for Boston (which should be all the DNA because of step 3); the next run we use Atlanta'-beads, then San Diego'-beads, St.Louis'-beads and finally NY'-beads.
The order isn't important. If an itinerary is missing a city, then it will not be "fished out" during one of the runs and will be removed from the candidate pool. What we are left with are the itineraries that start in Boston, visit each city once, and end in NY. This is exactly what we are looking for. If the answer exists, we will retrieve it at this step.
Reading out the answer:
One possible way to find the result would be to simply sequence the DNA strands. However, since we already have the sequence of the city encodings, we can use an alternative method called graduated PCR. Here we do a series of PCR amplifications using the primer corresponding to Boston, with a different primer for each city in succession. By measuring the various lengths of DNA for each PCR product we can piece together the final sequence of cities in our itinerary. For example, we know that the DNA itinerary starts with Boston and is 30 base pairs long, so if the PCR product for the Boston and Atlanta primers was 18 base pairs long, we know Atlanta is the third city in the itinerary (18 divided by 6). Finally, if we were careful in our DNA manipulations, the only DNA left in our test tube should be the DNA itinerary encoding Boston, San Diego, Atlanta, St.Louis, and NY. So if the succession of primers used is Boston & San Diego, Boston & Atlanta, Boston & St.Louis, and Boston & NY, then we would get PCR products with lengths 12, 18, 24, and 30 base pairs.
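The readout arithmetic is just the product length divided by the six bases per city. A quick sketch:

def city_position(product_len, bases_per_city=6):
    # A product running from the Boston primer to a city's primer spans
    # that many complete city encodings.
    return product_len // bases_per_city

print(city_position(18))  # 3 -> Atlanta is the third city
print(city_position(30))  # 5 -> NY is the fifth and last city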
Caveats:
Adleman's experiment solved a seven city problem, but there are two major shortcomings preventing a large scaling up of his computation. The complexity of the traveling salesman problem simply doesn’t disappear when applying a different method of solution - it still increases exponentially. For Adleman’s method, what scales exponentially is not the computing time, but rather the amount of DNA. Unfortunately this places some hard restrictions on the number of cities that can be solved; after the Adleman article was published, more than a few people have pointed out that using his method to solve a 200 city HP problem would take an amount of DNA that weighed more than the earth. Another factor that places limits on his method is the error rate for each operation. Since these operations are not deterministic but stochastically driven (we are doing chemistry here), each step contains statistical errors, limiting the number of iterations you can do successively before the probability of producing an error becomes greater than producing the correct result. For example an error rate of 1% is fine for 10 iterations, giving less than 10% error, but after 100 iterations this error grows to 63%.
Conclusions :
So will DNA ever be used to solve a traveling salesman problem with a higher number of cities than can be done with traditional computers? Well, considering that the record is a whopping 13,509 cities, it certainly will not be done with the procedure described above. It took this group only three months, using three Digital AlphaServer 4100s (a total of 12 processors) and a cluster of 32 Pentium-II PCs. The solution was possible not because of brute force computing power, but because they used some very efficient branching rules. This first demonstration of DNA computing used a rather unsophisticated algorithm, but as the formalism of DNA computing becomes refined, new algorithms perhaps will one day allow DNA to overtake conventional computation and set a new record.
On the side of the "hardware" (or should I say "wetware"), improvements in biotechnology are happening at a rate similar to the advances made in the semiconductor industry. For instance, look at sequencing; what once took a graduate student 5 years to do for a Ph.D thesis takes Celera just one day. With the amount of government funded research dollars flowing into genetic-related R&D and with the large potential payoffs from the lucrative pharmaceutical and medical-related markets, this isn't surprising. Just look at the number of advances in DNA-related technology that happened in the last five years. Today we have not one but several companies making "DNA chips," where DNA strands are attached to a silicon substrate in large arrays (for example Affymetrix's genechip). Production technology of MEMS is advancing rapidly, allowing for novel integrated small scale DNA processing devices. The Human Genome Project is producing rapid innovations in sequencing technology. The future of DNA manipulation is speed, automation, and miniaturization.
And of course we are talking about DNA here, the genetic code of life itself. It certainly has been the molecule of this century and most likely the next one. Considering all the attention that DNA has garnered, it isn’t too hard to imagine that one day we might have the tools and talent to produce a small integrated desktop machine that uses DNA, or a DNA-like biopolymer, as a computing substrate along with set of designer enzymes. Perhaps it won’t be used to play Quake IV or surf the web -- things that traditional computers are good at -- but it certainly might be used in the study of logic, encryption, genetic programming and algorithms, automata, language systems, and lots of other interesting things that haven't even been invented yet.
There is no better way to understand how something works than by going through an example step by step. So let’s solve our own directed Hamiltonian Path problem, using the DNA methods demonstrated by Adleman. The concepts are the same but the example has been simplified to make it easier to follow and present.
Suppose that I live in Boston and need to visit four cities: Atlanta, San Diego, St.Louis, and NY, with NY being my final destination. The airline I'm taking has a specific set of connecting flights that restrict which routes I can take (i.e. there is a flight from Boston to San Diego, but no flight from St.Louis to San Diego). What should my itinerary be if I want to visit each city only once?
Figure 1. A sample directed Hamiltonian Path problem: find a route that visits every city exactly once, respecting the available connections. Arrows indicate the direction that someone can travel. For example, a voyager can leave Atlanta and arrive in St.Louis, and vice versa.
It should take you only a moment to see that there is only one route. Starting from Boston you need to fly to San Diego, Atlanta, St.Louis, and then to NY. Any other choice of cities will force you to miss a destination, visit a city twice, or not make it to NY. For this example you obviously don't need the help of a computer to find a solution. For six, seven, or even eight cities, the problem is still manageable. However, as the number of cities increases, the problem quickly gets out of hand. Assuming a random distribution of connecting routes, the number of itineraries you need to check increases exponentially.
Pretty soon you will run out of pen and paper listing all the possible routes, and it becomes a problem for a computer... or perhaps DNA. The method Adleman used to solve this problem is basically the shotgun approach mentioned previously. He first generated all the possible itineraries and then selected the correct one. This is the advantage of DNA: it's small, and there are combinatorial techniques that can quickly generate many different data strings. And since the enzymes work on many DNA molecules at once, the selection process is massively parallel.
Specifically, the method based on Adleman’s experiment would be as follows:
1) Generate all possible routes.
2) Select itineraries that start with the proper city and end with the final city.
3) Select itineraries with the correct number of cities.
4) Select itineraries that contain each city only once.
All of the above steps can be accomplished with standard molecular biology techniques.
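To make the logic concrete, here is a minimal in-silico sketch of the same generate-and-filter strategy in Python. The flight map is an assumption standing in for Figure 1 (only a few of its routes are stated in the text); the filtering mirrors the four steps above.

```python
from itertools import permutations

# Hypothetical flight map consistent with the example; Figure 1 is not
# reproduced in the text, so this adjacency list is an assumption.
flights = {
    "Boston":    ["San Diego", "Atlanta"],
    "San Diego": ["Atlanta"],
    "Atlanta":   ["St.Louis", "New York"],
    "St.Louis":  ["Atlanta", "New York"],
    "New York":  [],
}

start, end = "Boston", "New York"
middle = [c for c in flights if c not in (start, end)]

# Steps 1-4 collapsed into one pass: generate every ordering of the
# middle cities, then keep only orderings whose legs are real flights.
for perm in permutations(middle):
    route = [start, *perm, end]
    if all(b in flights[a] for a, b in zip(route, route[1:])):
        print(" -> ".join(route))
```

Running this prints the single surviving itinerary, Boston -> San Diego -> Atlanta -> St.Louis -> New York. The DNA procedure below performs essentially the same generate-and-filter computation, but chemically and in parallel.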
Part I: Generate all possible routes
Strategy: Encode city names in short DNA sequences. Encode itineraries by connecting the city sequences for which routes exist.
DNA can simply be treated as a string of data. For example, each city can be represented by a "word" of six bases:
Boston GCTACG
San Diego CTAGTA
Atlanta TCGTAC
St.Louis CTACGG
New York ATGCCG
The entire itinerary can be encoded by simply stringing together these DNA sequences that represent specific cities. For example, the route from Boston -> San Diego -> Atlanta -> St.Louis -> New York would simply be GCTACGCTAGTATCGTACCTACGGATGCCG, or equivalently it could be represented in double stranded form with its complement sequence.
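In string terms this is just concatenation. A quick Python sketch using the table above (the complement here is computed base for base, ignoring strand orientation for simplicity):

```python
# City encodings from the table above.
codes = {
    "Boston":    "GCTACG",
    "San Diego": "CTAGTA",
    "Atlanta":   "TCGTAC",
    "St.Louis":  "CTACGG",
    "New York":  "ATGCCG",
}

route = ["Boston", "San Diego", "Atlanta", "St.Louis", "New York"]
itinerary = "".join(codes[c] for c in route)

pair = str.maketrans("ACGT", "TGCA")   # Watson-Crick base pairing
print(itinerary)                        # GCTACGCTAGTATCGTACCTACGGATGCCG
print(itinerary.translate(pair))        # the complementary strand
```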
So how do we generate this? Synthesizing short single-stranded DNA is now a routine process, so encoding the city names is straightforward. The molecules can be made by a machine called a DNA synthesizer or even custom ordered from a third party. Itineraries can then be produced from the city encodings by linking them together in the proper order. To accomplish this you can take advantage of the fact that DNA hybridizes with its complementary sequence.
For example, you can encode a route between two cities by taking the complement of the second half (last three letters) of the departure city and the first half (first three letters) of the arrival city. The route between St.Louis (CTACGG) and NY (ATGCCG) can be made by taking the second half of the coding for St.Louis (CGG) and the first half of the coding for NY (ATG). This gives CGGATG. Taking the complement of this gives GCCTAC, which not only uniquely represents the route from St.Louis to NY, but will connect the DNA representing St.Louis and NY by hybridizing to the second half of the code representing St.Louis (...CGG) and the first half of the code representing NY (ATG...).
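The same arithmetic in code; `route_oligo` is a hypothetical helper name, but the slicing and complement match the construction just described:

```python
pair = str.maketrans("ACGT", "TGCA")

def route_oligo(depart_code, arrive_code):
    """Complement of (last half of departure code + first half of arrival code)."""
    return (depart_code[3:] + arrive_code[:3]).translate(pair)

# St.Louis (CTACGG) -> NY (ATGCCG): CGG + ATG = CGGATG, complement GCCTAC
print(route_oligo("CTACGG", "ATGCCG"))  # GCCTAC
```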
Random itineraries can be made by mixing city encodings with the route encodings. Finally, the DNA strands can be connected together by an enzyme called ligase. What we are left with are strands of DNA representing itineraries with a random number of cities and a random set of routes.
We can be confident that we have all possible combinations, including the correct one, by using an excess of DNA encodings, say 10^13 copies of each city and each route between cities. Remember, DNA is a highly compact data format, so the numbers are on our side.
Part II: Select itineraries that start and end with the correct cities:
Strategy: Selectively copy and amplify only the sections of DNA that start with Boston and end with NY by using the Polymerase Chain Reaction.
After Part I, we now have a test tube full of various lengths of DNA that encode possible routes between cities. What we want are routes that start with Boston and end with NY. To accomplish this we can use a technique called Polymerase Chain Reaction (PCR), which allows you to produce many copies of a specific sequence of DNA. PCR is an iterative process that cycles through a series of copying events using an enzyme called polymerase. Polymerase will copy a section of single-stranded DNA starting at the position of a primer, a short piece of DNA complementary to one end of the section of DNA you're interested in.
By selecting primers that flank the section of DNA you want to amplify, the polymerase preferentially amplifies the DNA between these primers, doubling the amount of DNA containing this sequence with each cycle. After many iterations of PCR, the DNA you're working on is amplified exponentially. So to selectively amplify the itineraries that start and end with our cities of interest, we use primers that are complementary to the Boston and NY encodings. What we end up with after PCR is a test tube full of double-stranded DNA of various lengths, encoding itineraries that start with Boston and end with NY.
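Viewed as string processing, the PCR step acts like a filter that keeps (and massively copies) only strands with the right two ends. A sketch, where the `soup` list is an illustrative stand-in for the tube's contents after ligation:

```python
boston, ny = "GCTACG", "ATGCCG"

# Illustrative stand-in for the random itineraries produced in Part I.
soup = [
    "GCTACGCTAGTATCGTACCTACGGATGCCG",  # Boston ... NY: amplified
    "CTAGTATCGTACCTACGG",              # starts mid-route: ignored
    "GCTACGCTAGTATCGTAC",              # right start, wrong end: ignored
]

amplified = [s for s in soup if s.startswith(boston) and s.endswith(ny)]
print(amplified)
```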
Part III: Select itineraries that contain the correct number of cities.
Strategy: Sort the DNA by length and select the DNA whose length corresponds to 5 cities.
Our test tube is now filled with DNA-encoded itineraries that start with Boston and end with NY, where the number of cities between Boston and NY varies. We now want to select those itineraries that are five cities long. To accomplish this we can use a technique called Gel Electrophoresis, a common procedure used to separate DNA by size. The basic principle behind Gel Electrophoresis is to force DNA through a gel matrix using an electric field. DNA is a negatively charged molecule under most conditions, so if placed in an electric field it will be attracted to the positive potential.
However, since the charge density of DNA (charge per unit length) is constant, long pieces of DNA move just as fast as short pieces when suspended in a fluid. This is why we use a gel matrix. The gel is made up of a polymer that forms a meshwork of linked strands. The DNA is forced to thread its way through the tiny spaces between these strands, which slows it down at a rate that depends on its length. What we typically end up with after running a gel is a series of DNA bands, with each band corresponding to a certain length. We can then simply cut out the band of interest to isolate DNA of a specific length. Since we know that each city is encoded with 6 base pairs of DNA, knowing the length of the itinerary gives the number of cities. In this case we would isolate the DNA that is 30 base pairs long (5 cities times 6 base pairs).
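The in-silico analogue of cutting out the 30 bp band is a simple length filter:

```python
CITY_LEN = 6  # bases per city word

def gel_select(strands, n_cities=5):
    """Keep only strands whose length corresponds to exactly n_cities."""
    return [s for s in strands if len(s) == n_cities * CITY_LEN]

print(gel_select(["GCTACGCTAGTATCGTACCTACGGATGCCG",   # 30 bp: kept
                  "GCTACGCTAGTAATGCCG"]))              # 18 bp: discarded
```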
Part IV: Select itineraries that have a complete set of cities:
Strategy: Successively filter the DNA molecules by city, one city at a time. Since the DNA we start with contains five cities, we will be left with strands that encode each city once.
DNA containing a specific sequence can be purified from a sample of mixed DNA by a technique called affinity purification. This is accomplished by attaching the complement of the sequence in question to a substrate such as a magnetic bead. The beads are then mixed with the DNA, and any strand that contains the sequence you're after hybridizes with the complement sequence on the beads. These beads can then be retrieved and the DNA isolated.
So we now affinity purify five times, using a different city complement for each run. For example, for the first run we use Boston'-beads (where the ' indicates the complement strand) to fish out DNA sequences that contain the encoding for Boston (which should be all the DNA, because of step 2); the next run uses Atlanta'-beads, then San Diego'-beads, St.Louis'-beads, and finally NY'-beads.
The order isn't important. If an itinerary is missing a city, then it will not be "fished out" during one of the runs and will be removed from the candidate pool. What we are left with are the itineraries that start in Boston, visit each city once, and end in NY. This is exactly what we are looking for. If the answer exists, we would retrieve it at this step.
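As a string operation, each affinity run is a membership test; a strand survives all five runs only if every city code appears in it. A sketch (in a real tube a code could in principle also arise by chance across a word boundary, a detail ignored here):

```python
codes = {
    "Boston":    "GCTACG",
    "San Diego": "CTAGTA",
    "Atlanta":   "TCGTAC",
    "St.Louis":  "CTACGG",
    "New York":  "ATGCCG",
}

def affinity_select(strands):
    """One filtering pass per city; the order of the passes doesn't matter."""
    for code in codes.values():
        strands = [s for s in strands if code in s]
    return strands

print(affinity_select([
    "GCTACGCTAGTATCGTACCTACGGATGCCG",  # all five cities: survives
    "GCTACGCTAGTACTAGTACTACGGATGCCG",  # San Diego twice, no Atlanta: removed
]))
```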
Reading out the answer:
One possible way to find the result would be to simply sequence the DNA strands. However, since we already have the sequence of the city encodings, we can use an alternate method called graduated PCR. Here we do a series of PCR amplifications using the primer corresponding to Boston together with a different city primer in each run. By measuring the lengths of the DNA in each PCR product we can piece together the final sequence of cities in our itinerary. For example, we know that the DNA itinerary starts with Boston and is 30 base pairs long, so if the PCR product for the Boston and Atlanta primers is 18 base pairs long, Atlanta must be the third city in the itinerary (18 divided by 6). Finally, if we were careful in our DNA manipulations, the only DNA left in our test tube should be the itinerary encoding Boston, San Diego, Atlanta, St.Louis, and NY. So if the succession of primer pairs used is Boston & San Diego, Boston & Atlanta, Boston & St.Louis, and Boston & NY, we would get PCR products with lengths 12, 18, 24, and 30 base pairs.
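In code, the readout amounts to locating each city's word in the surviving strand; the fragment length for a city, divided by 6, gives its position. A sketch over the expected answer strand:

```python
answer = "GCTACGCTAGTATCGTACCTACGGATGCCG"  # the strand that survives Part IV

codes = {
    "San Diego": "CTAGTA",
    "Atlanta":   "TCGTAC",
    "St.Louis":  "CTACGG",
    "New York":  "ATGCCG",
}

# Graduated PCR in string terms: the Boston-to-city fragment runs from
# the start of the strand to the end of that city's word.
for city, code in codes.items():
    fragment_len = answer.index(code) + 6
    print(city, fragment_len, "bp -> position", fragment_len // 6)
```

This prints San Diego at 12 bp (position 2), Atlanta at 18 bp (position 3), St.Louis at 24 bp (position 4), and New York at 30 bp (position 5), recovering the itinerary.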
Caveats:
Adleman's experiment solved a seven-city problem, but there are two major shortcomings preventing a large scaling up of his computation. The complexity of the traveling salesman problem simply doesn't disappear when applying a different method of solution: it still increases exponentially. For Adleman's method, what scales exponentially is not the computing time, but rather the amount of DNA. Unfortunately this places some hard restrictions on the number of cities that can be solved; after the Adleman article was published, more than a few people pointed out that using his method to solve a 200-city Hamiltonian Path problem would take an amount of DNA that weighed more than the Earth. Another factor that places limits on his method is the error rate of each operation. Since these operations are not deterministic but stochastically driven (we are doing chemistry here), each step contains statistical errors, limiting the number of iterations you can do in succession before the probability of producing an error becomes greater than that of producing the correct result. For example, an error rate of 1% is fine for 10 iterations, giving less than 10% cumulative error, but after 100 iterations this error grows to 63%.
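The error arithmetic is just compounding of per-step success probabilities; a one-liner verifies the figures quoted above:

```python
# Probability of at least one failure in n steps with per-step error p.
p = 0.01
for n in (10, 100):
    print(n, "steps:", round(1 - (1 - p) ** n, 2))  # 10 -> 0.1, 100 -> 0.63
```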
Conclusions:
So will DNA ever be used to solve a traveling salesman problem with a higher number of cities than can be done with traditional computers? Well, considering that the record stands at a whopping 13,509 cities, it certainly will not be done with the procedure described above. The group that set that record needed only three months, using three Digital AlphaServer 4100s (a total of 12 processors) and a cluster of 32 Pentium-II PCs, and their solution was possible not because of brute-force computing power but because of some very efficient branching rules. Adleman's first demonstration of DNA computing used a rather unsophisticated algorithm, but as the formalism of DNA computing becomes refined, new algorithms may one day allow DNA to overtake conventional computation and set a new record.
On the side of the "hardware" (or should I say "wetware"), improvements in biotechnology are happening at a rate similar to the advances made in the semiconductor industry. Look at sequencing, for instance: what once took a graduate student five years to do for a Ph.D. thesis takes Celera just one day. With the amount of government-funded research money flowing into genetics-related R&D, and with the large potential payoffs from the lucrative pharmaceutical and medical markets, this isn't surprising. Just look at the number of advances in DNA-related technology in the last five years. Today we have not one but several companies making "DNA chips," where DNA strands are attached to a silicon substrate in large arrays (for example, Affymetrix's GeneChip). MEMS production technology is advancing rapidly, allowing for novel integrated small-scale DNA processing devices. The Human Genome Project is driving rapid innovation in sequencing technology. The future of DNA manipulation is speed, automation, and miniaturization.
And of course we are talking about DNA here, the genetic code of life itself. It certainly has been the molecule of this century, and most likely will be of the next one. Considering all the attention that DNA has garnered, it isn't too hard to imagine that one day we might have the tools and talent to produce a small integrated desktop machine that uses DNA, or a DNA-like biopolymer, as a computing substrate along with a set of designer enzymes. Perhaps it won't be used to play Quake IV or surf the web -- things that traditional computers are good at -- but it certainly might be used in the study of logic, encryption, genetic programming and algorithms, automata, language systems, and lots of other interesting things that haven't even been invented yet.