Thursday, June 11, 2009

Challenges in the Migration to 4G

Second-generation (2G) mobile systems were very successful in the previous decade. Their success prompted the development of third-generation (3G) mobile systems. While 2G systems such as GSM, IS-95, and cdmaOne were designed to carry speech and low-bit-rate data, 3G systems were designed to provide higher-data-rate services. During the evolution from 2G to 3G, a range of wireless systems, including GPRS, IMT-2000, Bluetooth, WLAN, and HiperLAN, were developed. All of these systems were designed independently, targeting different service types, data rates, and users. Since each system has its own merits and shortcomings, no single one is good enough to replace all the other technologies. Instead of putting effort into developing new radio interfaces and technologies for 4G systems, as some researchers are doing, we believe that establishing 4G systems that integrate existing and newly developed wireless systems is a more feasible option.
Researchers are currently developing frameworks for future 4G networks. Different research programs, such as Mobile VCE, MIRAI, and DoCoMo, have their own visions of 4G features and implementations. Some key features of 4G networks (mainly from the user's point of view) are as follows:
* High usability: anytime, anywhere, and with any technology
* Support for multimedia services at low transmission cost
* Personalization
* Integrated services
First, 4G networks are all-IP-based heterogeneous networks that allow users to use any system at any time and anywhere. Users carrying an integrated terminal can use a wide range of applications provided by multiple wireless networks.
Second, 4G systems provide not only telecommunications services, but also data and multimedia services. To support multimedia services, high-data-rate services with good system reliability will be provided. At the same time, a low per-bit transmission cost will be maintained.
Third, personalized service will be provided by this new-generation network. It is expected that when 4G services are launched, users in widely different locations, occupations, and economic classes will use the services. In order to meet the demands of these diverse users, service providers should design personal and customized services for them.
Finally, 4G systems also provide facilities for integrated services. Users can use multiple services from any service provider at the same time. Just imagine a 4G mobile user, Mary, who is looking for information on movies shown in nearby cinemas. Her mobile may simultaneously connect to different wireless systems. These wireless systems may include the Global Positioning System (GPS) (for tracking her current location), a wireless LAN (for receiving previews of the movies in nearby cinemas), and a code-division multiple access (CDMA) cellular network (for making a telephone call to one of the cinemas). In this example Mary is actually using multiple wireless services that differ in quality of service (QoS) levels, security policies, device settings, charging methods, and applications. It will be a significant revolution if such highly integrated services are made possible in 4G mobile applications.
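The kind of integration Mary's terminal needs can be pictured as a simple service-to-network mapping. The sketch below is purely illustrative and not part of any 4G standard: the network names, data rates, and selection rule are assumptions. The terminal picks, for each service, an available network whose type and bandwidth match the service's needs.

    # Illustrative only: a 4G terminal choosing a radio interface per service.
    # Network names, rates, and the selection rule are hypothetical assumptions.
    NETWORKS = {
        "GPS":  {"kind": "positioning", "downlink_kbps": 0},
        "WLAN": {"kind": "data",        "downlink_kbps": 11000},   # assumed 802.11b-class rate
        "CDMA": {"kind": "voice+data",  "downlink_kbps": 144},     # assumed cellular data rate
    }

    SERVICE_NEEDS = {
        "location_tracking": {"kind": "positioning", "min_kbps": 0},
        "movie_preview":     {"kind": "data",        "min_kbps": 500},
        "phone_call":        {"kind": "voice+data",  "min_kbps": 12},
    }

    def pick_network(service, available=NETWORKS):
        """Return a network that can carry the given service, or None."""
        need = SERVICE_NEEDS[service]
        candidates = [name for name, net in available.items()
                      if net["kind"] == need["kind"] and net["downlink_kbps"] >= need["min_kbps"]]
        # Prefer the highest-bandwidth candidate; a real terminal would also
        # weigh cost, QoS class, security policy, and battery use.
        return max(candidates, key=lambda n: available[n]["downlink_kbps"]) if candidates else None

    for service in SERVICE_NEEDS:
        print(service, "->", pick_network(service))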
To migrate current systems to 4G with the features mentioned above, we have to face a number of challenges. In this article these challenges are highlighted and grouped into various research areas. An overview of the challenges in future heterogeneous systems will be provided. Each area of challenges will be examined in detail. The article is then concluded.

Friday, March 20, 2009

Mobile agent

In computer science, a mobile agent is a composition of computer software and data which is able to migrate (move) from one computer to another autonomously and continue its execution on the destination computer.
A mobile agent is, in other words, a type of software agent with the features of autonomy, social ability, learning, and, most importantly, mobility.
When the term mobile agent is used, it refers to a process that can transport its state from one environment to another, with its data intact, and still be able to perform appropriately in the new environment. Mobile agents decide for themselves when and where to move next, an idea that evolved from RPC. So how exactly does a mobile agent move? Much as a user does not really visit a website but only receives a copy of it, a mobile agent accomplishes the move through data duplication: when a mobile agent decides to move, it saves its own state, transports this saved state to the next host, and resumes execution from the saved state.
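A rough Python sketch of this save/transport/resume cycle is shown below. It is only a conceptual illustration, not a real agent platform: the "transport" is simulated with an in-memory byte string, and pickle stands in for whatever serialization a real mobile-agent system would use.

    import pickle

    # Conceptual sketch of mobile-agent migration: save state, ship it,
    # resume on the destination. The "network" is just an in-memory byte string.
    class CountingAgent:
        """A toy agent that remembers which hosts it has visited."""
        def __init__(self):
            self.visited = []

        def run(self, hostname):
            self.visited.append(hostname)
            print(f"running on {hostname}, visited so far: {self.visited}")

        def save_state(self):
            return pickle.dumps(self)            # serialize the agent for transport

        @staticmethod
        def resume(saved_state):
            return pickle.loads(saved_state)     # rebuild the agent on the new host

    agent = CountingAgent()
    agent.run("host-a")                          # the agent runs on host A...
    packet = agent.save_state()                  # ...then saves its state to move

    agent_on_b = CountingAgent.resume(packet)    # on host B, execution resumes
    agent_on_b.run("host-b")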
Mobile agents are a specific form of mobile code and software agents paradigms. However, in contrast to the Remote evaluation and Code on demand paradigms, mobile agents are active in that they may choose to migrate between computers at any time during their execution. This makes them a powerful tool for implementing distributed applications in a computer network.

Advantages
1) Move computation to data, reducing network load.
2) Asynchronous execution on multiple heterogeneous network hosts
3) Dynamic adaptation - actions are dependent on the state of the host environment
4) Tolerant to network faults - able to operate without an active connection between client and server
5) Flexible maintenance - to change an agent's actions, only the source (rather than the computation hosts) must be updated

Applications
1) Resource availability, discovery, monitoring
2) Information retrieval
3) Network management
4) Dynamic software deployment

Thursday, March 19, 2009

Open GPS Tracker

The Open GPS Tracker is a small device which plugs into a $20 prepaid mobile phone to make a GPS tracker. The Tracker responds to text message commands, detects motion, and sends you its exact position, ready for Google Maps or your mapping software. The Tracker firmware is open source and user-customizable.

The current supported hardware platform is:

* Tyco Electronics A1035D GPS module
* Motorola C168i AT&T GoPhone prepaid mobile phone
* Atmel ATTINY84-20PU AVR microcontroller

The project requires no interface chips! All you need is a GPS module, a phone, an ATTINY84, a voltage regulator, a PNP transistor, and a few passive components. This is a commercial-grade tracker and is currently a second-generation stable beta (V0.17).

This version stores messages while out of GSM coverage, and forwards them when it regains coverage.

[Images: tracker components; phone showing a location report; street map with a GPS fix]

Introduction


Project status: We currently have second-generation stable firmware and a reference hardware design. All parts are available from Mouser Electronics, and the phone is available from Target, Walmart, or Radio Shack. This site provides the firmware with source code, theory of operation, parts list, and exact assembly and checkout instructions. If you can solder, this is a one-sitting project. No PC board or surface-mount capability is required.

Programmed parts will be available as soon as the firmware is out of beta. We intend to have kits and assembled units available for purchase shortly thereafter. Commercial products are planned, but the firmware will remain open source.


We intend to support more phones and GPS devices in the future.
The Tracker's features are competitive with, or better than, many commercial products:

  • SiRFstar III receiver gets a fix inside most buildings.

  • Sends latitude, longitude, altitude, speed, course, date, and time.

  • Sends to any SMS-capable mobile phone, or any email address.

  • Battery life up to 14 days, limited by mobile phone. Longer life possible with external batteries.

  • GoPhone costs $10 per month for 1000 messages per month.

  • Configurable over-the-air via text message commands.

  • Password security and unique identifier.

  • Manual locate and automatic tracking modes controlled via text message.

  • Automatic tracking mode sends location when the tracker starts moving,
    when it stops moving, and at programmable intervals while moving.

  • Alerts when user-set speed limit is exceeded.

  • Retains tracking messages if out of coverage, and sends when back in coverage.

  • Retains and reports last good fix if it loses GPS coverage.

  • Remote reporting of mobile phone battery and signal status.

  • Extended runtime mode switches phone on and off to save battery life.

  • Watchdog timer prevents device lockup.

  • Firmware is user-customizable with a $35.91 programmer and free software.

In addition to being a GPS tracker, the firmware is easily modified to monitor and control anything from a weather station to a vending machine via text messaging.
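To make the location-report idea concrete, here is a rough Python sketch of turning a GPS fix into an SMS body with a Google Maps link. It is not the actual Open GPS Tracker firmware (which runs on the ATTINY84); the exact message layout below is an assumption for illustration only.

    # Illustrative only: format a GPS fix as an SMS-sized location report.
    # The real firmware runs on the ATTINY84; this message layout is assumed.
    def location_report(lat, lon, alt_m, speed_kmh, course_deg, timestamp):
        maps_link = f"https://maps.google.com/?q={lat:.5f},{lon:.5f}"
        return (f"FIX {timestamp} "
                f"lat={lat:.5f} lon={lon:.5f} alt={alt_m:.0f}m "
                f"spd={speed_kmh:.0f}km/h crs={course_deg:.0f} "
                f"{maps_link}")

    msg = location_report(40.44906, -79.98983, 260, 12, 87, "2009-03-19 14:02Z")
    print(msg)
    print(len(msg), "characters (a single SMS carries up to 160)")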


Smart NoteTaker

The Smart NoteTaker is a helpful product that meets the needs of people in today's fast-paced, technological life. The product can be used in many ways. The Smart NoteTaker lets people who are busy with other work take notes quickly and easily. With its help, users will be able to write notes in the air while getting on with their work. The written note will be stored on the pen's memory chip and can be read in digital form after the job is done. This will save time and make life easier.
The Smart NoteTaker is also good and helpful for blind users, who can think and write freely with it. Another place where the product can play an important role is a telephone conversation: the two subscribers are apart while they talk, and they may want to use figures or text to understand each other better. It is also especially useful for instructors giving presentations. An instructor may not want to present the lecture from the front of the board; the drawn figure can instead be processed and sent directly to the server computer in the room, which can then broadcast the drawn shape over the network to all of the computers present. In this way, lectures are intended to be more efficient and fun. The product will be simple but powerful. It will be able to sense the 3D shapes and motions that the user tries to draw. The sensed information will be processed, transferred to the memory chip, and then shown on the display device. The drawn shape can then be broadcast to the network or sent to a mobile device.
An additional feature of the product will display previously taken notes in the application program used on the computer. This application program can be a word processor or an image editor. The figures drawn in the air will be recognized and, with the help of the software we will write, the desired character will be printed in the word document. If the application program is a paint-related program, the most similar shape will be chosen by the program and then drawn on the screen.
Since a Java applet is suitable for both drawings and strings, all these applications can be put together in a single Java program. The Java code that we develop will also be installed on the pen, so that the processor inside the pen will type and draw the desired shape or text on the display panel.
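The "most similar shape" step could be approached with simple template matching. The sketch below (in Python rather than Java, for consistency with the other sketches in this post) is only a guess at one possible approach, not the product's actual algorithm: each drawn stroke is resampled to a fixed number of points and compared to stored templates by average point-to-point distance.

    import math

    # Hypothetical shape matcher: compare a drawn stroke (a list of (x, y)
    # points) against stored templates and pick the closest one.
    def resample(points, n=16):
        """Resample a stroke to n points spaced evenly along its length."""
        dists = [0.0]
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
        total = dists[-1] or 1.0
        out, j = [], 0
        for i in range(n):
            target = total * i / (n - 1)
            while j < len(dists) - 2 and dists[j + 1] < target:
                j += 1
            span = (dists[j + 1] - dists[j]) or 1.0
            t = (target - dists[j]) / span
            (x0, y0), (x1, y1) = points[j], points[j + 1]
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        return out

    def distance(a, b):
        return sum(math.hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(a, b)) / len(a)

    def best_match(stroke, templates):
        """Return the name of the stored template closest to the drawn stroke."""
        stroke = resample(stroke)
        return min(templates, key=lambda name: distance(stroke, resample(templates[name])))

    templates = {
        "horizontal line": [(0, 0), (10, 0)],
        "vertical line":   [(0, 0), (0, 10)],
    }
    print(best_match([(0, 1), (5, 0.8), (10, 1.2)], templates))   # -> horizontal line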

Data Compression Techniques

Data compression is the process of converting an input data stream (the source stream, or the original raw data) into another data stream that has a smaller size. Data compression is popular for two reasons:
1) People like to accumulate data and hate to throw anything away. No matter how large a storage device may be, sooner or later it is going to overflow. Data compression is useful because it delays this inevitability.
2) People hate to wait a long time for data transfers.
There are many known methods of data compression. They are based on different ideas and are suitable for different types of data. They produce different results, but they are all based on the same basic principle: they compress data by removing the redundancy from the original data in the source file. The idea of compression by reducing redundancy suggests the general law of data compression, which is to "assign short codes to common events and long codes to rare events". Data compression is done by changing the data's representation from an inefficient to an efficient form.
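As a tiny worked example of "short codes to common events and long codes to rare events", the sketch below builds a Huffman prefix code for a short string and compares the coded length with a plain 8-bits-per-character representation. It is a minimal illustration of the principle, not a production coder.

    import heapq
    from collections import Counter

    def huffman_code(text):
        """Build a prefix code: frequent symbols get short codewords, rare ones long."""
        freq = Counter(text)
        # Each heap item: (frequency, tie-breaker, {symbol: codeword-so-far}).
        heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        if len(heap) == 1:                      # degenerate single-symbol input
            return {next(iter(freq)): "0"}
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)     # two least frequent groups
            f2, i2, c2 = heapq.heappop(heap)
            merged = {s: "0" + w for s, w in c1.items()}
            merged.update({s: "1" + w for s, w in c2.items()})
            heapq.heappush(heap, (f1 + f2, i2, merged))
        return heap[0][2]

    text = "this is an example of a huffman tree"
    code = huffman_code(text)
    coded_bits = sum(len(code[ch]) for ch in text)
    print(f"plain 8-bit ASCII: {8 * len(text)} bits, Huffman-coded: {coded_bits} bits")
    print("common ' ' ->", code[" "], "  rare 'x' ->", code["x"])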
The main aim of the field of data compression is, of course, to develop methods for better and better compression. Experience shows that fine-tuning an algorithm to squeeze out the last remaining bits of redundancy from the data gives diminishing returns. Data compression has become so important that some researchers have proposed the "simplicity and power theory". Specifically, it says that data compression may be interpreted as a process of removing unnecessary complexity in information, thus maximizing simplicity while preserving as much as possible of its non-redundant descriptive power.

Basic Types Of Data Compression
There are two basic types of data compression.
1. Lossy compression
2. Lossless compression
Lossy Compression: In lossy compression some information is lost during processing; the image data is sorted into important and unimportant data, and the system then discards the unimportant data. It provides much higher compression rates, but there will be some loss of information compared to the original source file. The main advantage is that the loss may not be visible to the eye, i.e. the result is visually lossless. Visually lossless compression is based on knowledge about colour images and human perception.
Lossless Compression: In this type of compression no information is lost during the compression and the decompression process. Here the reconstructed image is mathematically and visually identical to the original one. It achieves only about a 2:1 compression ratio. This type of compression technique looks for patterns in strings of bits and then expresses them more concisely.
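Run-length encoding is one of the simplest examples of a lossless scheme that looks for patterns and expresses them more concisely: repeated runs of the same symbol are replaced by (count, symbol) pairs, and decoding restores the input exactly. A minimal sketch:

    def rle_encode(data):
        """Lossless run-length encoding: 'AAAABBB' -> [(4, 'A'), (3, 'B')]."""
        runs = []
        for ch in data:
            if runs and runs[-1][1] == ch:
                runs[-1] = (runs[-1][0] + 1, ch)
            else:
                runs.append((1, ch))
        return runs

    def rle_decode(runs):
        return "".join(ch * count for count, ch in runs)

    original = "AAAAAABBBCCCCCCCCAA"
    encoded = rle_encode(original)
    assert rle_decode(encoded) == original      # nothing is lost
    print(encoded)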

Internet Access via Cable TV Network

The Internet is a network of networks in which various computers around the world connect to each other. The connection to other computers is made possible with the help of an ISP (Internet Service Provider). Most Internet users depend on dialup connections to reach the Internet, which has many disadvantages such as very poor speed and frequent disconnections. To solve this problem, Internet data can be transferred through cable networks wired to the user's computer. The different types of connections used are PSTN connections, ISDN connections, and Internet via cable networks. The main advantages are high availability, high bandwidth at low cost, high-speed data access, always-on connectivity, and so on. The huge growth in the number of Internet users every year has resulted in traffic congestion on the net, resulting in slower and more expensive Internet access. As cable TV has a strong reach into homes, it is the best medium for providing the Internet to households with faster access at feasible rates.
We are witnessing an unprecedented demand from residential and business customers, especially in the last few years, for access to the Internet, corporate intranets and various online information services. The Internet revolution is sweeping the country with a burgeoning number of the Internet users. As more and more people are being attracted towards the Internet, traffic congestion on the Net is continuously increasing due to limited bandwidths resulting in slower and expensive Internet access.
The number of households getting on the Internet has increased exponentially in the recent past. First-time Internet users are amazed at the Internet's richness of content and personalization, never before offered by any other medium. But this initial awe lasts only until they experience the slow speed of Internet content delivery, hence the popular name "World Wide Wait" (rather than World Wide Web). There is a pent-up demand for high-speed (or broadband) Internet access for fast web browsing and more effective telecommuting.
India has a cable penetration of 80 million homes, offering a vast network for leveraging the internet access. Cable TV has a strong reach to the homes and therefore offering the Internet through cable could be a scope for furthering the growth of internet usage in the homes.
Cable is an alternative medium for delivering Internet services. In the US there are already a million homes with cable modems, enabling high-speed Internet access over cable. In India, we are in the initial stages, experiencing innumerable local problems in Mumbai, Bangalore, and Delhi, along with an acute shortage of international Internet connectivity.
Accessing the Internet over the public switched telephone network (PSTN) still has a lot of problems, such as dropouts, and it takes a long time to download or upload large files. One also has to pay both for Internet connectivity and for telephone usage during that period. Since it is technically possible to offer higher bandwidth over cable, both home and corporate users are likely to welcome it. Many people cannot afford a PC at their premises, and hardware obsolescence is the main problem for the home user, who cannot afford to upgrade a PC every year. Cable-TV-based ISP solutions offer an economical alternative.
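A rough back-of-envelope comparison shows why cable access feels so different from dialup. The dialup rate below is the standard 56 kbps modem ceiling; the cable-modem rate is an assumed figure of 2 Mbps, since actual cable speeds vary by operator and plan.

    # Back-of-envelope download times; the 2 Mbps cable rate is an assumption.
    def download_minutes(size_megabytes, rate_kbps):
        bits = size_megabytes * 8 * 1024 * 1024
        return bits / (rate_kbps * 1000) / 60

    for name, rate_kbps in [("56 kbps dialup", 56), ("2 Mbps cable modem (assumed)", 2000)]:
        print(f"{name}: a 5 MB file takes about {download_minutes(5, rate_kbps):.1f} minutes")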

MPEG-7

As more and more audiovisual information becomes available from many sources around the world, many people would like to use this information for various purposes. This challenging situation led to the need for a solution that quickly and efficiently searches for and/or filters the various types of multimedia material that interest the user. For example, finding information by rich spoken queries, hand-drawn images, and humming improves the user-friendliness of computer systems and finally addresses what most people have been expecting from computers. For professionals, a new generation of applications will enable high-quality information search and retrieval.
For example, TV program producers can search with "laser-like precision" for occurrences of famous events or references to certain people, stored in thousands of hours of audiovisual records, in order to collect material for a program. This will reduce program production time and increase the quality of its content. MPEG-7 is a multimedia content description standard (to be defined by September 2001) that addresses how humans expect to interact with computer systems, since it develops rich descriptions that reflect those expectations.
The Moving Picture Experts Group, abbreviated MPEG, is part of the International Organization for Standardization (ISO) and defines standards for digital video and digital audio. The original task of this group was to develop a format for playing back video and audio in real time from a CD. Since then the demands have grown: besides the CD, the DVD needs to be supported, as well as transmission channels such as satellites and networks. All these uses are covered by a broad selection of standards, the best known being MPEG-1, MPEG-2, MPEG-4, and MPEG-7.
Each standard provides levels and profiles to support special applications in an optimized way. It's clearly much more fun to develop multimedia content than to index it. The amount of multimedia content available -- in digital archives, on the World Wide Web, in broadcast data streams and in personal and professional databases -- is growing out of control. But this enthusiasm has led to increasing difficulties in accessing, identifying and managing such resources due to their volume and complexity and a lack of adequate indexing standards. The large number of recently funded DLI-2 projects related to the resource discovery of different media types, including music, speech, video and images, indicates an acknowledgement of this problem and the importance of this field of research for digital libraries.

IP spoofing

Criminals have long employed the tactic of masking their true identity, from disguises to aliases to caller-id blocking. It should come as no surprise then, that criminals who conduct their nefarious activities on networks and computers should employ such techniques. IP spoofing is one of the most common forms of on-line camouflage. In IP spoofing, an attacker gains unauthorized access to a computer or a network by making it appear that a malicious message has come from a trusted machine by "spoofing" the IP address of that machine. In the subsequent pages of this report, we will examine the concepts of IP spoofing: why it is possible, how it works, what it is used for and how to defend against it.
Brief History of IP Spoofing
The concept of IP spoofing was initially discussed in academic circles in the 1980s. In the April 1989 article entitled "Security Problems in the TCP/IP Protocol Suite", author S. M. Bellovin of AT&T Bell Labs was among the first to identify IP spoofing as a real risk to computer networks. Bellovin describes how Robert Morris, creator of the now infamous Internet Worm, figured out how TCP created sequence numbers and forged a TCP packet sequence. This TCP packet included the destination address of his "victim", and using an IP spoofing attack Morris was able to obtain root access to his targeted system without a user ID or password. Another infamous attack, Kevin Mitnick's Christmas Day crack of Tsutomu Shimomura's machine, employed IP spoofing and TCP sequence prediction techniques. While the popularity of such cracks has decreased due to the demise of the services they exploited, spoofing can still be used and needs to be addressed by all security administrators. A common misconception is that "IP spoofing" can be used to hide your IP address while surfing the Internet, chatting on-line, sending e-mail, and so forth. This is generally not true: forging the source IP address causes the responses to be misdirected, meaning you cannot create a normal network connection. However, IP spoofing is an integral part of many network attacks that do not need to see responses (blind spoofing).
2. TCP/IP Protocol Suite
IP Spoofing exploits the flaws in TCP/IP protocol suite. In order to completely understand how these attacks can take place, one must examine the structure of the TCP/IP protocol suite. A basic understanding of these headers and network exchanges is crucial to the process.
2.1 Internet Protocol - IP
The Internet Protocol (or IP, as it is generally known) is the network layer of the Internet. IP provides a connectionless service: its job is to route and send a packet to the packet's destination, and it provides no guarantees whatsoever for the packets it tries to deliver. IP packets are usually termed datagrams. The datagrams go through a series of routers before they reach the destination. At each node that the datagram passes through, the node determines the next hop for the datagram and routes it onward. Since the network is dynamic, it is possible that two datagrams from the same source take different paths to the destination. Since the network has variable delays, it is not guaranteed that the datagrams will be received in sequence. IP only attempts a best-effort delivery. It does not take care of lost packets; this is left to the higher-layer protocols. There is no state maintained between two datagrams; in other words, IP is connectionless.
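The root of the problem is that the source address is just a field the sender writes into the IP header; nothing in IP itself verifies it. The sketch below builds a bare 20-byte IPv4 header with Python's standard struct module purely to show where that field sits. Nothing is sent anywhere, and the addresses are illustrative.

    import socket
    import struct

    # Build a minimal 20-byte IPv4 header to show that the source address is
    # simply a value the sender fills in. Nothing here is transmitted.
    def ipv4_header(src_ip, dst_ip, payload_len=0, proto=socket.IPPROTO_TCP):
        version_ihl = (4 << 4) | 5              # IPv4, 5 x 32-bit header words
        tos, ident, flags_frag, ttl = 0, 0x1234, 0, 64
        checksum = 0                            # normally computed over the header
        return struct.pack("!BBHHHBBH4s4s",
                           version_ihl, tos, 20 + payload_len,
                           ident, flags_frag, ttl, proto, checksum,
                           socket.inet_aton(src_ip),   # whatever the sender claims
                           socket.inet_aton(dst_ip))

    header = ipv4_header("10.0.0.1", "192.0.2.7")      # forged source, example destination
    print(header.hex())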

Cyber Terrorism

Cyberterrorism is a new terrorist tactic that makes use of information systems or digital technology, especially the Internet, as either an instrument or a target. As the Internet becomes more and more a way of life for us, it is becoming easier for its users to become targets of cyberterrorists. The number of areas in which cyberterrorists could strike is frightening, to say the least.
The difference between the conventional approaches of terrorism and new methods is primarily that it is possible to affect a large multitude of people with minimum resources on the terrorist's side, with no danger to him at all. We also glimpse into the reasons that caused terrorists to look towards the Web, and why the Internet is such an attractive alternative to them.
The growth of Information Technology has led to the development of this dangerous web of terror, for cyberterrorists could wreak maximum havoc within a small time span. Various situations that can be viewed as acts of cyberterrorism have also been covered. Banks are the most likely places to receive threats, but it cannot be said that any establishment is beyond attack. Tips by which we can protect ourselves from cyberterrorism have also been covered which can reduce problems created by the cyberterrorist.
We, as the information technology people of tomorrow, need to study and understand the weaknesses of existing systems and figure out ways of ensuring the world's safety from cyberterrorists. A number of issues here are ethical, in the sense that computing technology is now available to the whole world, but if this gift is used wrongly, the consequences could be disastrous. It is important that we understand and mitigate cyberterrorism for the benefit of society and try to curtail its growth, so that we can heal the present and live the future…

Windows DNA

For some time now, both small and large companies have been building robust applications for personal computers that continue to be ever more powerful and available at increasingly lower costs. While these applications are being used by millions of users each day, new forces are having a profound effect on the way software developers build applications today and the platform in which they develop and deploy their application.
The increased presence of Internet technologies is enabling global sharing of information-not only from small and large businesses, but individuals as well. The Internet has sparked a new creativity in many, resulting in many new businesses popping up overnight, running 24 hours a day, seven days a week. Competition and the increased pace of change are putting ever-increasing demands for an application platform that enables application developers to build and rapidly deploy highly adaptive applications in order to gain strategic advantage.
It is possible to think of these new Internet applications needing to handle literally millions of users, a scale difficult to imagine just a few short years ago. As a result, applications need to deal with user volumes of this scale, be reliable enough to operate 24 hours a day, and be flexible enough to meet changing business needs. The application platform that underlies these types of applications must also provide a coherent application model along with a set of infrastructure and prebuilt services for enabling development and management of these new applications.
Introducing Windows DNA: Framework for a New Generation of Computing Solutions
Today, the convergence of Internet and Windows computing technologies promises exciting new opportunities for savvy businesses: to create a new generation of computing solutions that dramatically improve the responsiveness of the organization, to more effectively use the Internet and the Web to reach customers directly, and to better connect people to information any time or any place. When a technology system delivers these results, it is called a Digital Nervous System. A Digital Nervous System relies on connected PCs and integrated software to make the flow of information rapid and accurate. It helps everyone act faster and make more informed decisions. It prepares companies to react to unplanned events. It allows people to focus on business, not technology.
Creating a true Digital Nervous System takes commitment, time, and imagination. It is not something every company will have the determination to do. But those who do will have a distinct advantage over those who don't. In creating a Digital Nervous System, organizations face many challenges: How can they take advantage of new Internet technologies while preserving existing investments in people, applications, and data? How can they build modern, scalable computing solutions that are dynamic and flexible to change? How can they lower the overall cost of computing while making complex computing environments work?

Digital Micromirror Device technology

Digital Micromirror Device, or DMD, is an optical semiconductor that is the core of DLP projection technology, and was invented by Dr. Larry Hornbeck and Dr. William E. 'Ed' Nelson of Texas Instruments (TI) in 1987. The DMD project began as the Deformable Mirror Device in 1977, using micromechanical, analog light modulators. The first analog DMD product was the TI DMD2000 airline ticket printer, which used a DMD instead of a laser scanner. A DMD chip has on its surface several hundred thousand microscopic mirrors arranged in a rectangular array which correspond to the pixels in the image to be displayed. The mirrors can be individually rotated ±10-12°, to an on or off state. In the on state, light from the bulb is reflected onto the lens, making the pixel appear bright on the screen.
In the off state, the light is directed elsewhere (usually onto a heatsink), making the pixel appear dark. To produce greyscales, the mirror is toggled on and off very quickly, and the ratio of on time to off time determines the shade produced (binary pulse-width modulation). Contemporary DMD chips can produce up to 1024 shades of gray. See DLP for a discussion of how color images are produced in DMD-based systems. The mirrors themselves are made out of aluminum and are around 16 micrometres across. Each one is mounted on a yoke which in turn is connected to two support posts by compliant torsion hinges. In this type of hinge, the axle is fixed at both ends and literally twists in the middle.
Because of the small scale, hinge fatigue is not a problem and tests have shown that even 1 trillion operations does not cause noticeable damage. Tests have also shown that the hinges cannot be damaged by normal shock and vibration, since it is absorbed by the DMD superstructure. Two pairs of electrodes on either side of the hinge control the position of the mirror by electrostatic attraction. One pair acts on the yoke and the other acts on the mirror directly. The majority of the time, equal bias charges are applied to both sides simultaneously. Instead of flipping to a central position as one might expect, this actually holds the mirror in its current position. This is because the attraction force on the side the mirror is already tilted towards is greater, since that side is closer to the electrodes. To move the mirror, the required state is first loaded into an SRAM cell located beneath the pixel, which is also connected to the electrodes. The bias voltage is then removed, allowing the charges from the SRAM cell to prevail, moving the mirror. When the bias is restored, the mirror is once again held in position, and the next required movement can be loaded into the memory cell. The bias system is used because it reduces the voltage levels required to address the pixels such that they can be driven directly from the SRAM cell, and also because the bias voltage can be removed at the same time for the whole chip, meaning every mirror moves at the same instant. The advantages of the latter are more accurate timing and a more filmic moving image.
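The binary pulse-width modulation described above can be made concrete with a small calculation: each bit of the grey value gets a slice of the frame time proportional to its weight, and the mirror is "on" during the slices of the bits that are set. The 10-bit depth below follows the 1024-shade figure; the frame period is an assumed number for illustration, not TI's actual timing.

    # Binary PWM for one DMD pixel: bit k of the grey value keeps the mirror
    # "on" for a slice of the frame proportional to 2**k. Timing is illustrative.
    FRAME_MS = 10.0          # assumed frame period
    BITS = 10                # 2**10 = 1024 shades of gray

    def on_time_ms(grey):
        """Total time the mirror spends in the 'on' state during one frame."""
        assert 0 <= grey < 2 ** BITS
        slice_ms = FRAME_MS / (2 ** BITS - 1)   # duration of a weight-1 slice
        return sum(((grey >> k) & 1) * (2 ** k) * slice_ms for k in range(BITS))

    for grey in (0, 256, 512, 1023):
        print(f"grey {grey:4d}: mirror on for {on_time_ms(grey):6.3f} ms of a {FRAME_MS} ms frame")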

High Altitude Platforms for communication

HAPs can be considered a novel solution for providing telecommunications services. HAPs are well established as a concept: essentially they are quasi-stationary vehicles in the stratosphere, generally unmanned and often solar powered, that can support payloads for communications relay in a similar fashion to a satellite and thereby provide a range of tactical and strategic wireless services.
They usually operate in the stratosphere at altitudes of up to 22 km to provide communication services, and they can exploit the best features of both terrestrial and satellite schemes. The platforms may be aeroplanes or airships, and may be manned or unmanned, with autonomous operation coupled with remote control from the ground.
HAPs have similarities to and differences from terrestrial wireless and satellite systems. The most important advantages of HAP systems are their easy and incremental deployment, flexibility/reconfigurability, low-cost operation, low propagation delay, high elevation angles, broad coverage, broadcast/multicast capability, broadband capability, ability to move around in emergency situations, etc.
A very interesting feature is that for the same bandwidth allocation terrestrial systems need a huge number of base stations to provide the needed coverage, while GEO satellites face limitations on the minimum cell size projected on the earth surface and LEO satellites suffer from handover problems. Therefore, HAPs seem to be a very good design compromise.
HAPs represent an economically attractive way for the provision of communications. The cost for the development of satellite systems is much greater, and it may be economically more efficient to cover a large area with many HAPs rather than with many terrestrial base stations or with a satellite system. In addition, due to their long development period, satellite systems always run the risk of becoming obsolete by the time they are in orbit. HAPs also enjoy more favorable path-loss characteristics compared to both terrestrial and satellite systems, while they can frequently take off and land for maintenance and upgrading.
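The path-loss advantage can be quantified with the standard free-space path-loss formula, FSPL(dB) = 20 log10(d_km) + 20 log10(f_MHz) + 32.44. The 2 GHz carrier and the link distances below (22 km for a HAP overhead, roughly 35,786 km for a GEO satellite) are representative values chosen for illustration.

    import math

    def fspl_db(distance_km, freq_mhz):
        """Free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44 dB."""
        return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

    FREQ_MHZ = 2000                              # illustrative 2 GHz carrier
    for name, d_km in [("HAP at 22 km", 22), ("GEO satellite at 35786 km", 35786)]:
        print(f"{name}: {fspl_db(d_km, FREQ_MHZ):.1f} dB")
    print(f"HAP advantage: {fspl_db(35786, FREQ_MHZ) - fspl_db(22, FREQ_MHZ):.1f} dB")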
Actually, today it is very interesting and challenging to examine and evaluate a mixed infrastructure comprising HAPs, terrestrial, and satellite systems, which could lead to a powerful integrated network infrastructure in which each segment makes up for the weaknesses of the others. Moreover, the growing demands for mobility and ubiquitous access to multimedia services call for the development of new-generation wireless telecommunications systems. In this respect, 4G networks are expected to fulfill the vision of optimal connectivity anywhere, anytime, providing higher bit rates at low cost, and towards this end HAPs can play an important role in the evolution of systems beyond 3G. Among the wide spectrum of services that 4G networks are called on to support, multicast services represent one of the most interesting categories. However, if Multimedia Broadcast and Multicast Services (MBMS) were to be provided by the terrestrial segment alone, they would lead to a high traffic load.

3-D CHIP DESIGN

There is a saying in real estate: when land gets expensive, multi-storied buildings are the alternative solution. We have a similar situation in the chip industry. For the past thirty years, chip designers have considered whether building integrated circuits in multiple layers might create cheaper, more powerful chips.

Performance of deep-submicrometer very large scale integrated (VLSI) circuits is being increasingly dominated by the interconnects, due to decreasing wire pitch and increasing die size. Additionally, heterogeneous integration of different technologies on one single chip is becoming increasingly desirable, for which planar (2-D) ICs may not be suitable.

The three dimensional (3-D) chip design strategy exploits the vertical dimension to alleviate the interconnect related problems and to facilitate heterogeneous integration of technologies to realize system on a chip (SoC) design. By simply dividing a planar chip into separate blocks, each occupying a separate physical level interconnected by short and vertical interlayer interconnects (VILICs), significant improvement in performance and reduction in wire-limited chip area can be achieved.

In the 3-D design architecture, an entire chip is divided into a number of blocks, and each block is placed on a separate layer of Si; these layers are stacked on top of each other.
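A back-of-envelope way to see the interconnect benefit: if a die of area A is split across N stacked layers, the footprint shrinks to A/N, so the longest in-plane wires scale roughly with the side length, i.e. by a factor of 1/sqrt(N), while the vertical inter-layer connections remain only micrometres long. The sketch below just evaluates that simple scaling argument with an illustrative die size; it is not a full wire-length model.

    import math

    # Toy scaling argument: splitting a planar die over N tiers shrinks the
    # footprint to A/N, so the longest in-plane wires shorten by ~1/sqrt(N).
    def longest_wire_mm(die_area_mm2, tiers):
        side = math.sqrt(die_area_mm2 / tiers)   # footprint side length per tier
        return 2 * side                          # corner-to-corner Manhattan length

    AREA_MM2 = 400.0                             # illustrative 20 mm x 20 mm die
    for n in (1, 2, 4):
        print(f"{n} tier(s): longest cross-chip wire ~ {longest_wire_mm(AREA_MM2, n):.1f} mm")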

AOP (Agent Oriented Programming)

We need open architectures that continuously change and evolve to accommodate new components and meet new requirements. More and more software must operate on different platforms, without recompilation and with minimal assumptions about its operating systems and users. It must be robust, autonomous and proactive. These circumstances motivated the development of Agent Oriented Programming.
The objective of Agent Oriented (AO) technology is to build systems applicable to the real world that can observe and act on changes in the environment. Such systems must be able to behave rationally and autonomously in completing their designated tasks. AO technology is an approach for building complex, real-time, distributed applications. This technology is built on the belief that a computer system must be designed to exhibit rational, goal-directed behaviour similar to that of a human being. AO technology achieves this by building entities called agents, which are purposeful, reactive, communication-based, and sometimes team-oriented.
There are different programming methods. Object-oriented programming is the successor of structured programming, and agent-oriented programming can be seen as an improvement and extension of object-oriented programming. Since the word "programming" is attached, both concepts are close to the programming-language and implementation level. The term "Agent-Oriented Programming" (AOP) was introduced by Shoham. AOP is a fairly new programming paradigm that supports a societal view of computation. In AOP, objects known as agents interact to achieve individual goals. Agents can be autonomous entities, deciding their next step without the interference of a user, or they can be controllable, serving as a mediator between the user and another agent. In AOP, programming is performed at an abstract level. Agent-oriented software engineering is being described as a new paradigm for the research field of software engineering, but in order to become a new paradigm for the software industry, robust and easy-to-use methodologies and tools have to be developed.
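To give a rough flavour of the agent abstraction, here is a toy Python sketch (not any particular agent platform, nor Shoham's original AGENT-0 language): an agent holds beliefs, perceives its environment, and autonomously picks its next action toward a goal.

    # Toy illustration of the agent abstraction: beliefs plus a
    # perceive / decide / act loop. Not a real agent platform.
    class ThermostatAgent:
        """Keeps a room near a goal temperature, acting autonomously each cycle."""
        def __init__(self, goal_temp):
            self.beliefs = {"room_temp": None}
            self.goal_temp = goal_temp

        def perceive(self, environment):
            self.beliefs["room_temp"] = environment["room_temp"]

        def decide(self):
            temp = self.beliefs["room_temp"]
            if temp is None:
                return "wait"
            if temp < self.goal_temp - 1:
                return "heat"
            if temp > self.goal_temp + 1:
                return "cool"
            return "idle"

        def act(self, environment):
            action = self.decide()
            if action == "heat":
                environment["room_temp"] += 0.5
            elif action == "cool":
                environment["room_temp"] -= 0.5
            return action

    env = {"room_temp": 17.0}
    agent = ThermostatAgent(goal_temp=21.0)
    for step in range(10):
        agent.perceive(env)
        print(f"step {step}: temp={env['room_temp']:.1f}, action={agent.act(env)}")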

Tuesday, March 17, 2009

AIR FUELLED CARS

Have you been to the gas station this week? Considering that we live in a very mobile society, it's probably safe to assume that you have. While pumping gas, you've undoubtedly noticed how much the price of gas has soared in recent years. Gasoline, which has been the main source of fuel for the history of cars, is becoming more and more expensive and impractical (especially from an environmental standpoint). These factors are leading car manufacturers to develop cars fueled by alternative energies. Two hybrid cars took to the road in 2000, and in three or four years fuel-cell-powered cars will roll onto the world's highways.
While gasoline prices in the United States have not yet reached their highest point ($2.66/gallon in 1980), they have climbed steeply in the past two years. In 1999, prices rose by 30 percent, and from December 1999 to October 2000, prices rose an additional 20 percent, according to the U.S. Bureau of Labor Statistics. In Europe, prices are even higher, costing more than $4 a gallon in countries like England and the Netherlands. But cost is not the only problem with using gasoline as our primary fuel. It is also damaging to the environment, and since it is not a renewable resource, it will eventually run out. One possible alternative is the air-powered car.
Air-powered cars run on compressed air instead of gasoline. Such a car is powered by a two-cylinder compressed-air engine, which can run either on compressed air alone or act as an internal combustion (IC) engine. Compressed air is stored in glass or fiber tanks at a pressure of 4351 psi.
Within the next two years, you could see the first air-powered vehicle motoring through your town. Most likely, it will be the e.Volution car that is being built by Zero Pollution Motors. The cars have generated a lot of interest in recent years, and the Mexican government has already signed a deal to buy 40,000 e.Volutions to replace gasoline- and diesel-powered taxis in the heavily polluted Mexico City.

Tuesday, March 3, 2009

FingerPrint Based Security System

Personal safes are revolutionary locking storage cases that open with just the touch of your finger. These products are designed as secure storage for medications, jewelry, weapons, documents, and other valuable or potentially harmful items. They use fingerprint recognition technology to allow access only to those whose fingerprints you choose, and they contain all the necessary electronics to let you store, delete, and verify fingerprints with just the touch of a button. Stored fingerprints are retained even in the event of complete power failure or battery drain. This eliminates the need to keep track of keys or to remember a combination or PIN. A safe can only be opened when an authorized user is present, since there are no keys or combinations to be copied or stolen, and no locks that can be picked.