Wednesday, 31 December 2008

Route Optimization in Mobile IP

One of the problems in Mobile IP version 4 is asymmetric routing, also called triangle routing. Triangle routing arises because all packets addressed to the mobile node must first pass through the HA, while packets in the reverse direction follow regular IP routing. Route optimization [3, 13, 14] is a set of extensions to the basic Mobile IP protocol that allows more efficient routing, so that IP packets can be routed from a correspondent node to a mobile node without first passing through the HA. The home agent sends a binding update message containing the COA of the mobile node to every correspondent node that requires it. The correspondent node stores this binding and uses it to tunnel its own IP packets directly to the care-of address indicated in the binding, so the detour through the HA is avoided. During the initiation phase, however, IP packets sent by the correspondent node still follow the triangle route until the binding update message sent by the HA reaches the correspondent node.

Other extensions allow IP packets sent by a correspondent node with an out-of-date stored binding, or packets already in transit, to be forwarded directly to the new COA of the MN. The authentication mechanisms used in route optimization are the same as those used in the basic version of Mobile IP. This authentication generally relies on a mobility security association established in advance between the sender and receiver of such messages. The route optimization protocol operates in four steps (a small illustrative sketch follows the list):


1. A binding warning control message may be sent to the HA indicating
that the correspondent node is unaware of the new COA of the mobile
node.

2. A binding request message is sent by a correspondent node to the HA
when it determines that its binding should be refreshed.

3. An authenticated binding update message is sent by the HA to those
correspondent nodes that require it, containing the current COA
of the mobile node.

4. When smooth handoffs occur, the mobile node transmits a binding
update and has to be sure that the update has been received. To do so,
it can request a binding acknowledgment from the correspondent
node.
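To make these steps concrete, here is a minimal, hypothetical sketch (not code from the Mobile IP specification) of how a correspondent node might consult a binding cache before sending a packet: a fresh binding lets it tunnel directly to the care-of address, otherwise it falls back to triangle routing via the HA and asks for a binding refresh. The class names, lifetime value, and addresses are all invented for illustration.

```python
import time

BINDING_LIFETIME = 300  # seconds; illustrative value, not mandated by the RFCs

class CorrespondentNode:
    """Toy model of a correspondent node with a route-optimization binding cache."""

    def __init__(self):
        self.binding_cache = {}  # mobile home address -> (care-of address, timestamp)

    def receive_binding_update(self, home_addr, care_of_addr):
        # Step 3: an authenticated binding update from the HA installs/refreshes a binding.
        self.binding_cache[home_addr] = (care_of_addr, time.time())

    def send(self, home_addr, payload):
        entry = self.binding_cache.get(home_addr)
        if entry and time.time() - entry[1] < BINDING_LIFETIME:
            care_of_addr, _ = entry
            return ("tunnel", care_of_addr, payload)      # direct route, HA bypassed
        # No fresh binding: use triangle routing and ask for a refresh (step 2).
        self.request_binding(home_addr)
        return ("via_home_agent", home_addr, payload)

    def request_binding(self, home_addr):
        print(f"binding request for {home_addr} sent to its home agent")

# Usage: the first packet goes through the HA, the next one is tunnelled directly.
cn = CorrespondentNode()
print(cn.send("10.0.0.7", b"hello"))                 # ('via_home_agent', ...)
cn.receive_binding_update("10.0.0.7", "192.0.2.44")  # HA answers with the current COA
print(cn.send("10.0.0.7", b"hello again"))           # ('tunnel', '192.0.2.44', ...)
```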

The handoff procedure in Mobile IPv4

When a mobile node performs a handoff from one foreign domain to another, it can either send a deregistration message to the previous foreign agent (e.g., FA1) or simply hand off and let its registration with FA1 time out. After the mobile node enters the new foreign network, it waits for an agent advertisement from a foreign agent. As soon as the mobile node receives the advertisement, it sends a registration request to the home agent using the address of the new foreign agent (FA2) as the care-of address. The HA processes the request and sends back a registration reply.
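For illustration only, the toy code below mirrors that exchange with hypothetical classes and invented addresses; it is a sketch of the message flow, not an implementation of the Mobile IPv4 specification.

```python
class ForeignAgent:
    def __init__(self, care_of_addr):
        self.care_of_addr = care_of_addr
        self.visitors = set()

    def agent_advertisement(self):
        return self.care_of_addr            # advertised care-of address

    def deregister(self, home_addr):
        self.visitors.discard(home_addr)

class HomeAgent:
    def __init__(self):
        self.bindings = {}                  # home address -> current care-of address

    def register(self, home_addr, care_of_addr):
        self.bindings[home_addr] = care_of_addr
        return "accepted"                   # registration reply

def handoff(home_addr, old_fa, new_fa, ha):
    """Simplified Mobile IPv4 handoff: leave FA1, re-register through FA2."""
    old_fa.deregister(home_addr)            # or just let the old registration time out
    coa = new_fa.agent_advertisement()      # learn the new care-of address
    return ha.register(home_addr, coa)      # HA updates its binding and replies

fa1, fa2, ha = ForeignAgent("198.51.100.1"), ForeignAgent("203.0.113.9"), HomeAgent()
print(handoff("10.0.0.7", fa1, fa2, ha), ha.bindings)   # accepted {'10.0.0.7': '203.0.113.9'}
```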

Wednesday, 24 December 2008

Unbreakable glass!

It is all possible with a new process for strengthening glass and ceramics developed by an Alfred University researcher.

Alfred University has signed a royalty agreement with Santanoni Glass and Ceramics, Inc., of Alfred Station, NY, for proprietary technology related to the strengthening of glass.

The process allows Santanoni to produce “unbreakable” glassware such as wine glasses, canning jars, bottles, tumblers, goblets and mugs at a cost that allows the products to be competitive with normal, un-strengthened glassware.

Dr. William LaCourse, a professor of Glass Science at the New York State College of Ceramics at Alfred University and president of the company, which is located in the Ceramics Corridor Innovation Center in Alfred, has researched processes for strengthening glasses for more than 30 years.

“No glass is unbreakable, but our process produces the highest strength glassware available today, and at a price that makes it affordable,” said LaCourse. “It has the potential to save restaurants, catering services and families up to 80 percent, and perhaps more, on their glassware costs. We have dropped glass bottles from 10 feet high onto a concrete floor, and the glass simply bounces.”

Under the agreement, Santanoni will have access to the technology developed by LaCourse and his graduate students. The glassware will be processed in Alfred Station, NY at the Sugar Hill Industrial Park, and will be marketed nationally.

“We are working with a couple of distributors for some specialty products, but will do the majority of consumer marketing through gift shops and the Internet. We are also contacting various food service companies where we believe the products can save them thousands of dollars per year due to reduced breakage and lower inventory costs.”

Alfred University President Charles Edmondson heralded the agreement with Santanoni Glass, calling it “significant for Alfred University and the Southern Tier. It is an indication of how our high-tech materials research can generate job creation and economic growth.”

Over the years the research was partially funded by Alfred’s Center for Advanced Ceramic Technology (CACT), as well as Santanoni. “The help of our CACT was critical in getting the company started. We could not have done it without its constant support. I owe a lot to the CACT and especially to Alfred University for providing the laboratories, equipment and financial support,” said LaCourse. “It is time to pay back.”

Santanoni’s Ultra-HS glass products are now available in limited quantities as the company prepares to ramp up production levels.

A robot scientist: without humans

A robot scientist that can generate its own hypotheses and run experiments to test them has made its first real scientific discoveries. Dubbed Adam, the robot is the handiwork of researchers at Aberystwyth University and the University of Cambridge in the UK. All by itself it discovered new functions for a number of genes in Saccharomyces cerevisiae, aka brewer's yeast. Ross King, a computational biologist at Aberystwyth who leads the project, said that Adam's results were modest, but real. "It's certainly a contribution to knowledge. It would be publishable," he says.

Adam, which actually consists of a small roomful of lab equipment, has four personal computers that act as a brain, and possesses robot arms, cameras, liquid handlers, incubators and other equipment. The team gave the robot a freezer containing a library of thousands of mutant strains of yeast with individual genes deleted. It was also equipped with a database containing information about yeast genes, enzymes, and metabolism, and a supply of hundreds of metabolites.

To discover which genes coded for which enzymes, Adam cultured a mutant yeast with a certain gene knocked out, and monitored how well the mutant grew without a particular metabolite. If the strain grew poorly without the metabolite, Adam learned something about the function of the knocked-out gene. The robot could carry out more than 1000 of these experiments a day. In all, Adam formulated and tested 20 hypotheses about genes coding for 13 enzymes. Twelve hypotheses were confirmed. For instance, Adam correctly hypothesised that three genes it identified encode an enzyme important in producing the amino acid lysine. The researchers confirmed Adam's work with their own experiments. The team is now working on a new robot, called Eve, which will search for new drugs.


Source: http://cr4.globalspec.com/

Free-space optical communication

Definition
Mention optical communication and most people think of fiber optics. But light travels through air for a lot less money. So it is hardly a surprise that clever entrepreneurs and technologists are borrowing many of the devices and techniques developed for fiber-optic systems and applying them to what some call fiber-free optical communication. Although it only recently, and rather suddenly, sprang into public awareness, free-space optics is not a new idea. It has roots that go back over 30 years--to the era before fiber-optic cable became the preferred transport medium for high-speed communication. In those days, the notion that FSO systems could provide high-speed connectivity over short distances seemed futuristic, to say the least. But research done at that time has made possible today's free-space optical systems, which can carry full-duplex (simultaneous bidirectional) data at gigabit-per-second rates over metropolitan distances of a few city blocks to a few kilometers.

FSO first appeared in the 1960s, for military applications. At the end of the 1980s it appeared as a commercial option, but technological restrictions prevented it from succeeding. Short transmission reach, low capacity, severe alignment problems, and vulnerability to weather interference were the major drawbacks at that time. Wireless optical communication has evolved considerably since then: today, FSO systems deliver 2.5 Gb/s rates with carrier-class availability, and metropolitan, access and LAN networks are reaping the benefits.

The use of free-space optics is particularly attractive once we recognize that most customers do not have access to fiber, and that fiber installation is expensive and time-consuming. Moreover, right-of-way costs and the difficulty of obtaining government licenses for new fiber installation are further problems that have turned FSO into the option of choice for short-reach applications.
FSO uses lasers, or light pulses, to send packetized data in the terahertz (THz) spectrum range. Air, not fiber, is the transport medium. This means that urban businesses needing fast data and Internet access have a significantly lower-cost option.


FSO
FSO technology is implemented using laser devices. These laser devices, or terminals, can be mounted on rooftops, on corners of buildings, or even inside offices behind windows. FSO devices look like security video cameras.

Low-power infrared beams, which do not harm the eyes, are the means by which free-space optics technology transmits data through the air between transceivers, or link heads, mounted on rooftops or behind windows. It works over distances of several hundred meters to a few kilometers, depending upon atmospheric conditions.
Commercially available free-space optics equipment provides data rates much higher than digital subscriber lines or coaxial cables can ever hope to offer. And systems even faster than the present range of 10 Mb/s to 1.25 Gb/s have been announced, though not yet delivered.
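To see why range depends so strongly on the weather, a commonly used back-of-the-envelope FSO link budget combines a geometric beam-spreading loss with an atmospheric attenuation term. The sketch below uses that textbook-style approximation with made-up example numbers (transmit power, apertures, divergence, attenuation coefficients); it is illustrative only, not any vendor's link model.

```python
import math

def fso_received_power_dbm(p_tx_mw, tx_aperture_m, rx_aperture_m,
                           divergence_rad, range_km, atten_db_per_km):
    """Rough FSO link budget: geometric spreading loss plus atmospheric attenuation."""
    range_m = range_km * 1000.0
    beam_diameter = tx_aperture_m + divergence_rad * range_m       # beam spread at the receiver
    geo_loss_db = -20 * math.log10(rx_aperture_m / beam_diameter)  # fraction of the beam collected
    atmo_loss_db = atten_db_per_km * range_km                      # clear air is well under 1 dB/km, fog far more
    p_tx_dbm = 10 * math.log10(p_tx_mw)
    return p_tx_dbm - geo_loss_db - atmo_loss_db

# The same hypothetical 1 km link in clear air vs. fog (all numbers are assumptions):
print(fso_received_power_dbm(10, 0.025, 0.08, 0.002, 1.0, atten_db_per_km=0.5))
print(fso_received_power_dbm(10, 0.025, 0.08, 0.002, 1.0, atten_db_per_km=30))
```

The second call shows how quickly the received power collapses when attenuation rises, which is why FSO links are usually specified for a few hundred meters to a few kilometers.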

CT scans

Introduction

There are two main limitations of using conventional x-rays to examine internal structures of the body. Firstly, superimposition of the 3-dimensional information onto a single plane makes diagnosis confusing and often difficult. Secondly, the photographic film usually used for making radiographs has a limited dynamic range, so only objects that have a large variation in x-ray absorption relative to their surroundings produce enough contrast on the film to be distinguished by the eye. Thus, while the details of bony structures can be seen, it is difficult to discern the shape and composition of soft-tissue organs accurately.

CT uses special x-ray equipment to obtain image data from different angles around the body and then shows a cross section of body tissues and organs; that is, it can show several types of tissue (lung, bone, soft tissue and blood vessels) with great clarity. CT of the body is a patient-friendly exam that involves little radiation exposure.

Basic Principle
In CT scanning, the image is reconstructed from a large number of absorption profiles taken at regular angular intervals around a slice, each profile being made up from a parallel set of absorption values through the object. That is, CT also passes x-rays through the body of the patient, but the detection method is usually electronic in nature, and the data are converted from an analog signal to digital impulses in an A/D converter. This digital representation of the x-ray intensity is fed into a computer, which then reconstructs an image.

The original method of tomography uses an x-ray detector that translates linearly on a track across the x-ray beam; when the end of the scan is reached, the x-ray tube and the detector are rotated to a new angle and the linear motion is repeated. The latest generation of CT machines use a 'fan-beam' geometry with an array of detectors which simultaneously detect x-rays on a number of different paths through the patient.
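As a rough illustration of how an image is recovered from those absorption profiles, the sketch below performs simple unfiltered backprojection with NumPy/SciPy: each profile is smeared back across the image plane at the angle it was recorded, and the contributions are summed. Real scanners use filtered backprojection or iterative reconstruction; this only shows the basic idea, and the toy phantom is invented for the demo.

```python
import numpy as np
from scipy.ndimage import rotate

def backproject(sinogram, angles_deg):
    """Unfiltered backprojection of parallel-beam absorption profiles.

    sinogram: array of shape (num_angles, num_detectors), one profile per row.
    angles_deg: gantry angle at which each profile was measured.
    """
    n = sinogram.shape[1]
    image = np.zeros((n, n))
    for profile, angle in zip(sinogram, angles_deg):
        smear = np.tile(profile, (n, 1))                       # spread the 1-D profile over the plane
        image += rotate(smear, angle, reshape=False, order=1)  # back at its measurement angle
    return image / len(angles_deg)

# Toy example: profiles of a single bright pixel measured every 10 degrees.
angles = np.arange(0, 180, 10)
phantom = np.zeros((64, 64)); phantom[40, 25] = 1.0
sino = np.array([rotate(phantom, -a, reshape=False, order=1).sum(axis=0) for a in angles])
print(backproject(sino, angles).shape)   # (64, 64) blurred reconstruction of the point
```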

CT Scanner
A CT scanner is a large, square machine with a hole in the centre, something like a doughnut. The patient lies still on a table that can move up and down and slide into and out of the centre of the hole. Within the machine, an X-ray tube on a rotating gantry moves around the patient's body to produce the images.

Procedure
In CT the film is replaced by an array of detectors which measure the X-ray profile. Inside the scanner is a rotating gantry with an X-ray tube mounted on one side and an arc-shaped detector mounted on the opposite side. The X-ray beam is emitted as a fan while the rotating frame spins the X-ray tube and detector around the patient. Each time the X-ray tube and detector make a 360-degree rotation and the X-rays pass through the patient's body, the image of a thin section is acquired. During each rotation the detector records about 1000 profiles of the expanded X-ray beam, and these profiles are then reconstructed by a dedicated computer into a two-dimensional image of the section that was scanned.

Touch-screens

A type of display screen that has a touch-sensitive transparent panel covering the screen. Instead of using a pointing device such as a mouse or light pen, you can use your finger to point directly to objects on the screen. Although touch screens provide a natural interface for computer novices, they are unsatisfactory for most applications because the finger is such a relatively large object; it is impossible to point accurately to small areas of the screen. In addition, most users find touch screens tiring to the arms after long use.

Touch-screens are typically found on larger displays and in phones with integrated PDA features. Most are designed to work with either your finger or a special stylus. Tapping a specific point on the display will activate the virtual button or feature displayed at that location on the display. Some phones with this feature can also recognize handwriting written on the screen using a stylus, as a way to quickly input lengthy or complex information.

A touchscreen is an input device that allows users to operate a PC by simply touching the display screen. Touch input is suitable for a wide variety of computing applications, and a touchscreen can be used with most PC systems as easily as other input devices such as trackballs or touch pads.

History Of Touch Screen Technology
A touch screen is a special type of visual display unit with a screen which is sensitive to pressure or touching. The screen can detect the position of the point of touch. The design of touch screens is best for inputting simple choices and the choices are programmable. The device is very user-friendly since it 'talks' with the user when the user is picking up choices on the screen.

Touch technology turns a CRT, flat panel display or flat surface into a dynamic data entry device that replaces both the keyboard and mouse. In addition to eliminating these separate data entry devices, touch offers an "intuitive" interface. In public kiosks, for example, users receive no more instruction than 'touch your selection.'
Specific areas of the screen are defined as "buttons" that the operator selects simply by touching them. One significant advantage to touch screen applications is that each screen can be customized to reflect only the valid options for each phase of an operation, greatly reducing the frustration of hunting for the right key or function.

Pen-based systems, such as the Palm Pilot® and signature capture systems, also use touch technology but are not included in this article. The essential difference is that the pressure levels are set higher for pen-based systems than for touch. Touch screens come in a wide range of options, from full color VGA and SVGA monitors designed for highly graphic Windows® or Macintosh® applications to small monochrome displays designed for keypad replacement and enhancement.

Specific figures on the growth of touch screen technology are hard to come by, but a 1995 study by Venture Development Corporation predicted overall growth of 17%, with at least 10% in the industrial sector. Other vendors agree that touch screen technology is becoming more popular because of its ease of use, proven reliability, expanded functionality, and decreasing cost.

A touch screen sensor is a clear glass panel with a touch responsive surface. The touch sensor/panel is placed over a display screen so that the responsive area of the panel covers the viewable area of the video screen. There are several different touch sensor technologies on the market today, each using a different method to detect touch input. The sensor generally has an electrical current or signal going through it and touching the screen causes a voltage or signal change. This voltage change is used to determine the location of the touch to the screen.
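As an illustration of that last step, here is a hypothetical two-point linear calibration that converts raw ADC readings from a touch sensor into pixel coordinates. The raw ranges and screen size are invented values, and real controllers usually perform a multi-point calibration that also corrects for rotation; treat this as the simplest possible version of the idea.

```python
def make_calibration(raw_min, raw_max, screen_px):
    """Return a function mapping a raw ADC reading (one axis) to a pixel coordinate."""
    span = raw_max - raw_min
    def to_pixel(raw):
        raw = min(max(raw, raw_min), raw_max)          # clamp noisy readings to the panel edges
        return round((raw - raw_min) / span * (screen_px - 1))
    return to_pixel

# Hypothetical panel: X readings 180..3900, Y readings 220..3800, 1024x768 display.
raw_to_x = make_calibration(180, 3900, 1024)
raw_to_y = make_calibration(220, 3800, 768)
print(raw_to_x(2040), raw_to_y(2010))   # roughly the centre of the screen: 512 384
```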

Exploding PC Burns Man to Death

A young software engineer was found dead in front of his burned-up computer; the cause of the explosion has yet to be determined, baffling police.


As a rule of thumb, it's not nice to make light of the misfortunes of others, especially when physical harm comes into play. There are certainly numerous jokes that involve exploding PCs, especially where gaming, overclocking, and porn are concerned. Many of them have no tact for obvious reasons, and there's no doubt that readers may recall one or two by the end of this article. But truth be told, news surrounding a burned-up man sitting in front of smoking debris that once served as a PC only conjures up comical images seen in cartoons.

According to The Times of India, a 28-year-old software engineer--named only as Vijayakumar in the article--was found dead this past Friday in his home on Telugu Brahmin Street in Velachery. Vijayakumar shared the house with two other software engineers--Vignesh (26) and Ram Prasad (26)--although the latter roommate was the only other tenant present at the time of Vijayakumar's demise. Apparently, Prasad went to take a bath, and then rushed back out after hearing a loud blast. He told police that he saw the charred remains of Vijayakumar still sitting in front of a smoldering PC... and then fainted.

So what happened to Vijayakumar? According to local police, the PC exploded and burned its user alive. “We are yet to ascertain the cause of the blast," a police officer told the Times of India. "The computer was completely damaged and the deceased was charred." The officer also went on to say that the case has baffled the investigating officers, sounding rather unbelievable, and something they had never seen before.

"But the scene of the accident seems to suggest that the youth was killed in an accident as his body was in the sitting position in front of the burnt computer,” the official added.

For now, the police have not offered any other information. Certainly many factors could have caused the PC to short circuit: faulty wiring, spilled liquids, maybe even a jolt of lightning crashing through the power outlet. Perhaps the power supply arced and burned him up on the spot, or the PC had faulty power cables. It would be understandable had Vijayakumar's demise been the direct result of an exploding battery in a laptop. But an exploding PC? That remains questionable. Still, because Vijayakumar was found burned in a sitting position, it's easy to assume that whatever happened was close to instantaneous.

And where was Vignesh during all of this? Something about the entire incident sounds fishy. With Prasad the only other individual in the house, the grim setting sounds like a plot yanked straight out of a thriller movie. Hopefully, more information will surface soon because, quite frankly, if there are faulty parts out there on the market, then we as consumers need to know. Period.

Source: xcpus.com

Touch screen: for the blind


Finnish scientists have created a vibrating touch screen phone, for the visually challenged, that can simulate Braille characters. A Nokia 770 mobile Internet tablet was the main research tool used, and since it already has haptic feedback built in to the screen, it's relatively easy to develop and test the technique.

Instead of recreating the 2 x 3 matrix of raised spots that represents a Braille character, the new system just vibrates the screen using the transducers. As a reading finger is touched to the screen, its position is logged relative to the conventional text character beneath: the Braille is then emulated as a Morse code-like chain of intense and weak vibrations of the screen. A strong one relates to a Braille dot, and a weak one represents a Braille space--it's incredibly simple.

Volunteers involved in the research have been able to transition between conventional Braille and the new technique without too much difficulty, reading single characters in around 1.25 seconds.
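A hedged sketch of the encoding idea: each Braille cell has six dot positions, and the prototype renders them one after another as strong (dot present) or weak (dot absent) vibration pulses. The dot patterns for the few sample letters below follow standard Braille, but the tiny character table and the pulse durations are made-up illustration values, not the researchers' actual parameters.

```python
# Standard 6-dot Braille patterns (dots numbered 1-6) for a few sample letters.
BRAILLE_DOTS = {"a": {1}, "b": {1, 2}, "c": {1, 4}, "l": {1, 2, 3}}

STRONG_MS, WEAK_MS = 300, 100   # assumed pulse durations, for illustration only

def character_to_pulses(ch):
    """Turn one character into a sequence of six (intensity, duration) vibration pulses."""
    dots = BRAILLE_DOTS[ch.lower()]
    return [("strong", STRONG_MS) if pos in dots else ("weak", WEAK_MS)
            for pos in range(1, 7)]

print(character_to_pulses("c"))
# [('strong', 300), ('weak', 100), ('weak', 100), ('strong', 300), ('weak', 100), ('weak', 100)]
```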

Flight Using Alternative Energy

Surveillance Vehicles Take Flight Using Alternative Energy

Undetectable from the ground, unmanned aerial vehicles (UAVs) are widely used by the military to scan terrain for possible threats and intelligence. Now, fuel cell powered UAVs are taking flight as part of an Office of Naval Research (ONR)-sponsored program to help tactical decision-makers gather critical information more efficiently… and more quietly.

Piloted remotely or autonomously, UAVs have long provided extra "eyes in the sky" especially for missions that are too dangerous for manned aircraft. This latest technology is showcased by Ion Tiger, a UAV research program at the Naval Research Laboratory (NRL) that merges two separate efforts — UAV technology and fuel cell systems.

In particular, the Ion Tiger UAV tests a hydrogen-powered fuel cell design, which can travel farther and carry heavier payloads than earlier battery-powered designs. Ion Tiger employs stealthy characteristics due to its small size, reduced noise, low heat signature and zero emissions.

"Pursuing energy efficiency and energy independence are core to ONR's Power and Energy Focus Area," said Rear Admiral Nevin Carr, Chief of Naval Research. "ONR's investments in alternative energy sources, like fuel cell research, have application to the Navy and Marine Corps mission in future UAVs and vehicles. These investments also contribute directly to solving some of the same technology challenges faced at the national level."

Fuel cells create an electrical current when they convert hydrogen and oxygen into water and are pollution-free. A fuel cell propulsion system can also deliver potentially twice the efficiency of an internal combustion engine — while running more quietly and with greater endurance.

"In this size range, we are hopefully able to conduct very productive surveillance missions at low cost with a relatively small vehicle, and a high-quality electric payload," says NRL Principal Investigator Dr. Karen Swider-Lyons.

This spring, Ion Tiger's flight trial is expected to exceed the duration of previous flights seven-fold.

"This will really be a 'first of its kind' demonstration for a fuel cell system in a UAV application for a 24-hour endurance flight, with a 5 pound payload," says ONR Program Manager Dr. Michele Anderson. "That's something nobody can do right now."

In 2005, NRL backed initial research in fuel cell technologies for UAVs. Today, says Swider-Lyons, it's paying off with a few lessons learned from the automotive industry.

"With UAVs, we are dealing with relatively small fuel cells of 500 watts," she explains. "It is hard to get custom, high-quality fuel cell membranes built just for this program. So we are riding along with this push for technology from the automotive industry."

"What's different with fuel cell cars is that developers are focused on volume…so they want everything very compact," adds Swider-Lyons. "Our first issue is weight, our second issue is weight and our third issue is weight!"

Besides delivering energy savings and increased power potential, fuel cell technology spans the operational spectrum from ground vehicles to UAVs, to man-portable power generation for Marine expeditionary missions, to meeting power needs afloat. In fact, it's technology that Marines at Camp Pendleton are using today to power their General Motors fuel cell vehicles.

Across the board, the Navy and Marine Corps are seeking more efficient sources of energy. ONR has been researching and testing power and energy technology for decades. Often the improvements to power generation and fuel efficiency for ships, aircraft, vehicles and installations yield a direct benefit to the public.

"ONR has been a visionary in terms of providing support for this program," says Swider-Lyons.

Other Ion Tiger partners include Protonex Technology Corporation and the University of Hawaii. NRL's work on UAVs also leverages funding from the Office of the Secretary of Defense.

Info: Naval Research Laboratory.

New Laser Technique Advances Nanofabrication

The ability to create tiny patterns is essential to the fabrication of computer chips and many other current and potential applications of nanotechnology. Yet, creating ever smaller features, through a widely-used process called photolithography, has required the use of ultraviolet light, which is difficult and expensive to work with. John Fourkas, Professor of Chemistry and Biochemistry in the University of Maryland College of Chemical and Life Sciences, and his research group have developed a new, table-top technique called RAPID (Resolution Augmentation through Photo-Induced Deactivation) lithography that makes it possible to create small features without the use of ultraviolet light. This research is to be published in Science magazine and released on Science Express on April 9, 2009.

Photolithography uses light to deposit or remove material and create patterns on a surface. There is usually a direct relationship between the wavelength of light used and the feature size created. Therefore, nanofabrication has depended on short wavelength ultraviolet light to generate ever smaller features.

"The RAPID lithography technique we have developed enables us to create patterns twenty times smaller than the wavelength of light employed,"explains Dr. Fourkas, "which means that it streamlines the nanofabrication process. We expect RAPID to find many applications in areas such as electronics, optics, and biomedical devices."

"If you have gotten a filling at the dentist in recent years,"says Fourkas, "you have seen that a viscous liquid is squirted into the cavity and a blue light is then used to harden it. A similar process of hardening using light is the first element of RAPID. Now imagine that your dentist could use a second light source to sculpt the filling by preventing it from hardening in certain places. We have developed a way of using a second light source to perform this sculpting, and it allows us to create features that are 2500 times smaller than the width of a human hair."

Both of the laser light sources used by Fourkas and his team were of the same color, the only difference being that the laser used to harden the material produced short bursts of light while the laser used to prevent hardening was on constantly. The second laser beam also passed through a special optic that allowed for sculpting of the hardened features in the desired shape.

"The fact that one laser is on constantly in RAPID makes this technique particularly easy to implement,"says Fourkas, "because there is no need to control the timing between two different pulsed lasers."

Fourkas and his team are currently working on improvements to RAPID lithography that they believe will make it possible to create features that are half of the size of the ones they have demonstrated to date.

“Achieving lambda/20 Resolution by One-Color Initiation and Deactivation of Polymerization” was written by Linjie Li, Rafael R. Gattass, Erez Gershgorem, Hana Hwang and John T. Fourkas.


This University of Maryland News Release is available at: http://www.newsdesk.umd.edu/scitech/release.cfm?ArticleID=1862



Reference: www.smalltimes.com

Opportunities and Challenges in Wireless Sensor Networks





Due to advances in wireless communications and electronics over the last few years, the development of networks of low-cost, low-power, multifunctional sensors has received increasing attention. These sensors are small in size and able to sense, process data, and communicate with each other, typically over an RF (radio frequency) channel. A sensor network is designed to detect events or phenomena, collect and process data, and transmit sensed information to interested users. Basic features of sensor networks are:



• Self-organizing capabilities

• Short-range broadcast communication and multihop routing

• Dense deployment and cooperative effort of sensor nodes

• Frequently changing topology due to fading and node failures

• Limitations in energy, transmit power, memory, and computing power



These characteristics, particularly the last three, make sensor networks different from other wireless ad hoc or mesh networks.
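To make "short-range broadcast communication and multihop routing" a little more concrete, here is a hypothetical toy simulation: nodes can only hear neighbours within a fixed radio range, so a reading reaches the sink by being rebroadcast hop by hop. It is a sketch of the idea only (a flooding breadth-first search over invented node positions), not a real WSN routing protocol.

```python
import math

RADIO_RANGE = 1.5   # assumed maximum distance a node's broadcast can reach

def neighbours(nodes, src):
    sx, sy = nodes[src]
    return [n for n, (x, y) in nodes.items()
            if n != src and math.hypot(x - sx, y - sy) <= RADIO_RANGE]

def flood(nodes, source, sink):
    """Flood a reading from `source` toward `sink`; return the hop count, or None."""
    frontier, visited, hops = [source], {source}, 0
    while frontier:
        if sink in frontier:
            return hops
        nxt = [m for n in frontier for m in neighbours(nodes, n) if m not in visited]
        visited.update(nxt)
        frontier, hops = nxt, hops + 1
    return None

# Four sensor nodes and a sink laid out in a line; only adjacent nodes hear each other.
layout = {"s1": (0, 0), "s2": (1, 0), "s3": (2, 0), "s4": (3, 0), "sink": (4, 0)}
print(flood(layout, "s1", "sink"))   # 4 hops: the reading is relayed node by node
```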



Clearly, the idea of mesh networking is not new; it has been suggested for some time for wireless Internet access or voice communication. Similarly, small computers and sensors are not innovative per se. However, combining small sensors, low-power computers, and radios makes for a new technological platform that has numerous important uses and applications, as will be discussed in the next section.







Growing Research and Commercial Interest

Research and commercial interest in the area of wireless sensor networks are currently growing exponentially, which is manifested in many ways:



• The number of Web pages (Google: 26,000 hits for sensor networks; 8,000 for wireless sensor networks in August 2003)

• The increasing number of:

  - Dedicated annual workshops, such as IPSN (information processing in sensor networks), SenSys, EWSN (European workshop on wireless sensor networks), SNPA (sensor network protocols and applications), and WSNA (wireless sensor networks and applications)

  - Conference sessions on sensor networks in the communications and mobile computing communities (ISIT, ICC, Globecom, INFOCOM, VTC, MobiCom, MobiHoc)

  - Research projects funded by NSF (apart from ongoing programs, a new specific effort now focuses on sensors and sensor networks) and DARPA through its SensIT (sensor information technology), NEST (networked embedded software technology), MSET (multisensor exploitation), UGS (unattended ground sensors), NETEX (networking in extreme environments), ISP (integrated sensing and processing), and communicator programs



Special issues and sections in renowned journals are common, e.g., in the IEEE Proceedings [1] and in signal processing, communications, and networking magazines. Commercial interest is reflected in investments by established companies as well as start-ups that offer general and specific hardware and software solutions.

Compared to the use of a few expensive (but highly accurate) sensors, the strategy of deploying a large number of inexpensive sensors has significant advantages, at smaller or comparable total system cost: much higher spatial resolution; higher robustness against failures through distributed operation; uniform coverage; small obtrusiveness; ease of deployment; reduced energy consumption; and, consequently, increased system lifetime. The main point is to position sensors close to the source of a potential problem phenomenon, where the acquired data are likely to have the greatest benefit or impact.

Pure sensing in a fine-grained manner may revolutionize the way in which complex physical systems are understood. The addition of actuators, however, opens a completely new dimension by permitting management and manipulation of the environment at a scale that offers enormous opportunities for almost every scientific discipline. Indeed, Business 2.0 (http://www.business2.com/) lists sensor robots as one of “six technologies that will change the world,” and Technology Review at MIT and Global Future identify WSNs as one of the “10 emerging technologies that will change the world” (http://www.globalfuture.com/mit-trends2003.htm). The combination of sensor network technology with MEMS and nanotechnology will greatly reduce the size of the nodes and enhance the capabilities of the network.

The remainder of this chapter lists and briefly describes a number of applications for wireless sensor networks, grouped into different categories. However, because the number of areas of application is growing rapidly, every attempt at compiling an exhaustive list is bound to fail.

Basics: Nanotechnology

Nanotechnology operates at the first level of organization of atoms and molecules for both living and anthropogenic systems. This is where the properties and functions of all systems are defined. Such fundamental control promises a broad and revolutionary technology platform for industry, biomedicine, environmental engineering, safety and security, food, water resources, energy conversion, and countless other areas.

The first definition of nanotechnology to achieve some degree of international acceptance was developed after consultation with experts in over 20 countries in 1997–1998 (Siegel et al., 1999; Roco et al., 2000). However, despite its importance, there is no globally recognized definition. Any nanotechnology definition would include three elements:


1. The size range of the material structures under consideration — the intermediate length scale between a single atom or molecule and about 100 molecular diameters, or about 100 nm. Here we have the transition from individual to collective behavior of atoms. This length-scale condition alone is not sufficient, because all natural and manmade systems have a structure at the nanoscale.

2. The ability to measure and restructure matter at the nanoscale; without it we do not have new understanding and a new technology. Such ability has been reached only partially so far, but significant progress was achieved in the last five years.

3. Exploiting properties and functions specific to the nanoscale as compared to the macro- or microscales; this is a key motivation for researching the nanoscale.


According to the National Science Foundation and NNI, nanotechnology is the ability to understand, control, and manipulate matter at the level of individual atoms and molecules, as well as at the “supramolecular” level involving clusters of molecules (in the range of about 0.1 to 100 nm), in order to create materials, devices, and systems with fundamentally new properties and functions because of their small structure. The definition implies using the same principles and tools to establish a unifying platform for science and engineering at the nanoscale, and employing the atomic and molecular interactions to develop efficient manufacturing methods.

There are at least three reasons for the current interest in nanotechnology. First, the research is helping us fill a major gap in our fundamental knowledge of matter. At the small end of the scale — single atoms and molecules — we already know quite a bit from using tools developed by conventional physics and chemistry. And at the large end, likewise, conventional chemistry, biology, and engineering have taught us about the bulk behavior of materials and systems. Until now, however, we have known much less about the intermediate nanoscale, which is the natural threshold where all living and manmade systems work. The basic properties and functions of material structures and systems are defined here and, even more importantly, can be changed as a function of the organization of matter via "weak" molecular interactions (such as hydrogen bonds, electrostatic dipoles, van der Waals forces, various surface forces, electro-fluidic forces, etc.). The intellectual drive toward smaller dimensions was accelerated by the discovery of size-dependent novel properties and phenomena. Only since 1981 have we been able to measure the size of a cluster of atoms on a surface (IBM, Zurich), and begun to provide better models for self-organization and self-assembly in chemistry and biology. Ten years later, in 1991, we were able to move atoms on surfaces (IBM, Almaden). And after ten more years, in 2002, we assembled molecules by physically positioning the component atoms. Yet we cannot visualize or model with proper spatial and temporal accuracy a chosen domain of engineering or biological relevance at the nanoscale. We are still at the beginning of this road.

A second reason for the interest in nanotechnology is that nanoscale phenomena hold the promise for fundamentally new applications. Possible examples include chemical manufacturing using designed molecular assemblies, processing of information using photons or electron spin, detection of chemicals or bioagents using only a few molecules, detection and treatment of chronic illnesses by subcellular interventions, regenerating tissue and nerves, enhancing learning and other cognitive processes by understanding the "society" of neurons, and cleaning contaminated soils with designed nanoparticles. Using input from industry and academic experts in the U.S., Asia Pacific countries, and Europe between 1997 and 1999, we have projected that $1 trillion in products incorporating nanotechnology and about 2 million jobs worldwide will be affected by nanotechnology by 2015 (Roco and Bainbridge, 2001). Extrapolating from information technology, where for every worker another 2.5 jobs are created in related areas, nanotechnology has the potential to create 7 million jobs overall by 2015 in the global market. Indeed, the first generation of nanostructured metals, polymers, and ceramics has already entered the commercial marketplace.

Build Your Own Multi-touch Surface Computer

Maximum PC didn't feel like shelling out $12,000 for Microsoft's Surface technology, so the staff made its own multi-touch table PC for only $350


The online magazine's original task was to publish an article about future user interfaces. However, after extensive research into multi-touch applications such as Apple's iPhone and Microsoft Surface, the staff at Maximum PC uncovered a whole community of DIY engineers "perfecting the art" of creating homemade multi-touch surfaces. Home-built multi-touch surfaces should come as no surprise: there are websites dedicated to hands-on construction of unique technologies such as a Commodore 64 laptop, a speech-controlled trash can, and even a lemon-charged battery. Needless to say, if the industry can build it, then the online community will find a way to build it even better... and cheaper.

With that said, Maximum PC decided to create a multi-touch surface computer using methods found online at the Natural User Interface Group. Ultimately, the online magazine didn't go out and spend $12,000, but rather just $350. Out of various processes used to construct the homemade multi-touch surface, the staff decided to use the FTIR (Frustrated Total Internal Reflection) screen setup. This consists of a sheet of transparent acrylic, a chain of infrared LEDs, and a camera with an IR sensor. According to the site, the LEDs are arranged around the outside of the acrylic sheet so that they shine directly into the side. The IR light thus shoots into the acrylic, reflecting off the top and bottom of the material, remaining contained within.

When a finger presses against the sheet, the reflecting light hits the spot and bounces downward into the cabinet mounted underneath. A modified webcam mounted in the cabinet--altered to detect only infrared light--views the finger touch as white spots, and then sends the image to software running on a connected PC. The software maps the movements and applies the coordinates to whatever application is running. The PC then transmits the on-screen image via a projector back onto the surface using a mirror and a piece of heat-absorbing glass. Granted, this brief overview sounds rather simple, but the process of creating the multi-touch surface PC takes a bit of work, from polishing the sides of the acrylic sheet to altering the webcam.
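A hedged sketch of the software side of that loop: threshold each infrared camera frame, find the bright blobs where fingers touch, and convert the blob centroids into screen coordinates. It assumes OpenCV 4.x and camera/screen resolutions chosen purely for illustration; the actual Maximum PC build used the Touchlib library rather than code like this.

```python
import cv2

CAM_W, CAM_H = 640, 480          # assumed capture resolution of the modified webcam
SCREEN_W, SCREEN_H = 1024, 768   # assumed resolution of the projected image

def touches_from_frame(gray_frame, threshold=200, min_area=20):
    """Return (x, y) screen coordinates for each bright IR blob in a grayscale frame."""
    _, mask = cv2.threshold(gray_frame, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    points = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] < min_area:          # ignore tiny specks of noise
            continue
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # blob centroid in camera pixels
        points.append((int(cx * SCREEN_W / CAM_W), int(cy * SCREEN_H / CAM_H)))
    return points

cap = cv2.VideoCapture(0)                         # the IR-modified camera
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    print(touches_from_frame(gray))               # e.g. [(512, 384)] for one finger near the centre
cap.release()
```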

But wait... Maximum PC didn't just use any webcam; the site implemented the $35 PlayStation 3 Eye, using a rectangular razor blade to gain access to the poor camera's IR filter. As with the rest of the article, the site shows the step-by-step process of removing the unwanted filter. "The infrared sensor is the innermost piece of glass on the lens assembly," the site reads. "When it catches the light, it looks ruby red – a dead giveaway that this is the piece filtering out infrared light. In order to remove it we simply used a razor blade to gouge out the plastic in a circle around the filter, allowing us to easily pop it out." Why remove the filter? So that the PlayStation 3 Eye can pick up infrared light.

As for the connected computer, the staff didn't use anything meaty, only a PC containing a Core 2 Duo and 2 GB of memory. With that said, DIY builders won't need anything outrageously fast, but more than likely a rig that hit the market within the last few years. Additionally, the camera and PC don't necessarily need to be within the cabinet; the cables for the PS3 Eye and projector can run out of the cabinet and hook up to a laptop if needed.

Ultimately, the actual multi-touch screen was 24 inches by 30 inches, with the acrylic sheet 3/8-inch thick. The IR LEDs lining each side were 1 inch apart; however, the staff wired the LEDs together the hard way, soldering the leads together rather than just using a wire-wrap gun (which would have made the task quicker and more environmentally safe... meaning no lead). The cabinet itself was constructed from 3/8-inch MDF, with a stained hardwood frame on top, standing waist high. To get the entire contraption to work, the team installed Touchlib on the PC, an open source library that takes the visual data received by the camera and parses it into touch events. Someone even wrote a driver that enables the PS3 Eye to work on the PC.

"We completed this project over the course of about two weeks' work," the article reads. "All said and done, everything worked out pretty well. We ended up with a fully functional, highly responsive multi-touch surface."

For a meager $350, the DIY multi-touch project sounds like great fun, and may end up as something we do here at Tom's just for kicks. After all, many of us don't have a whopping $12,000 stored in the underwear drawer (well, maybe Tuan). Still, this example definitely proves that anything is possible on a small budget. All it takes is a little patience, a little research, and a dedicated community to help along the way.

Source: maximumpc.com

Next-gen chips

IBM, Samsung Electronics, STMicroelectronics, and others are teaming up on the development of next-generation chip technology for small, low-power devices with one wary eye on Intel, which is expediting its move to chips with smaller geometries.


IBM and its semiconductor technology alliance partners are announcing the availability of 28-nanometer (nm) chip technology, a little more than a generation beyond the 45nm technologies currently used by Intel and Advanced Micro Devices in their latest chips.

The first products using chips based on this technology are expected in the second half of 2010, an IBM spokesman said. Devices will include smartphones and consumer electronics products.

The largest, single countervailing force to the IBM-led group is Intel. The Santa Clara, Calif.-based chip giant's chief executive, Paul Otellini, said Tuesday in a first-quarter earnings conference call that Intel is "pulling in" the release of "Westmere" chips based on 32nm technology and will ship silicon later this year.

Generally, the smaller the geometry, the faster and more power efficient the chip is.

The IBM alliance--which also includes the AMD manufacturing spin-off Globalfoundries, Chartered Semiconductor, and Infineon Technologies--is jointly developing the 28nm chipmaking process based on the partners' "high-k metal gate" (which minimizes current leakage), low-power complementary metal oxide semiconductor (CMOS) process technology.

The technology "can provide a 40 percent performance improvement and a more than 20 percent reduction in power, in a chip that is half the size, compared with 45nm technology," IBM said in a statement. "These improvements enable microchip designs with outstanding performance, smaller feature sizes and low standby power, contributing to faster processing speed and longer battery life in next-generation mobile Internet devices and other systems."

IBM said customers can begin their designs now using 32nm technology and then transition to 28nm for density and power advantages without the need for a major redesign.

One prominent customer is U.K.-based ARM, whose basic chip design has been used in billions of devices all over the world. ARM is collaborating with the IBM alliance to develop a design platform for 32nm and 28nm technology and is tuning its Cortex processor family and future processors to exploit the technology's capabilities, IBM said.


Ref: Cnet

Biorefinery: Renewable energy

Renewable energy deriving from solar, wind, and biomass sources has great potential for growth to meet our future energy needs. Fuels such as ethanol, methane, and hydrogen are characterized as biofuels because they can be produced by the activity of biological organisms.

Which of these fuels will play a major role in our future? The answer is not clear, as factors such as land availability, future technical innovation, environmental policy regulating greenhouse gas emissions, governmental subsidies for fossil fuel extraction and processing, implementation of net metering, and public support for alternative fuels will all affect the outcome. A critical point is that as research and development continue to improve the efficiency of biofuel production processes, economic feasibility will continue to improve.

Biofuel production is best evaluated in the context of a biorefinery. In a biorefinery, agricultural feedstocks and by-products are processed through a series of biological, chemical, and physical processes to recover biofuels, biomaterials, nutraceuticals, polymers, and specialty chemical compounds [2,3]. This concept can be compared to a petroleum refinery in which oil is processed to produce fuels, plastics, and petrochemicals. The recoverable products in a biorefinery range from basic food ingredients to complex pharmaceutical compounds and from simple building materials to complex industrial composites and polymers. Biofuels, such as ethanol, hydrogen, or biodiesel, and biochemicals, such as xylitol, glycerol, citric acid, lactic acid, isopropanol, or vitamins, can be produced for use in the energy, food, and nutraceutical/pharmaceutical industries. Fibers, adhesives, biodegradable plastics such as polylactic acid, degradable surfactants, detergents, and enzymes can be recovered for industrial use. Many biofuel compounds may only be economically feasible to produce when valuable coproducts are also recovered and when energy-efficient processing is employed. One advantage of microbial conversion processes over chemical processes is that microbes are able to select their substrate from a complex mixture of compounds, minimizing the need for isolation and purification of the substrate prior to processing. This can translate to more complete use of substrate and lower chemical requirements for processing.

Early proponents of the biorefinery concept emphasized the zero-emissions goal inherent in the plan—waste streams, water, and heat from one process are utilized as feed streams or energy for another, to fully recover all possible products and reduce waste with maximized efficiency [2,3]. Ethanol and biodiesel production can be linked effectively in this way. In ethanol fermentation, 0.96 kg of CO2 is produced per kilogram of ethanol formed. The CO2 can be fed to algal bioreactors to produce oils used for biodiesel production. Approximately 1.3 kg of CO2 is consumed per kilogram of algae grown, or per 0.5 kg of algal oil produced by oleaginous strains. Another example is the potential application of microbial fuel cells to generate electricity by utilizing waste organic compounds in spent fermentation media from biofuel production processes.
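Using only the figures quoted above, a quick back-of-the-envelope calculation shows how the two processes could be coupled. Reading the ambiguous sentence as "0.5 kg of oil per kilogram of algae" is an interpretation on our part, so the oil figure is illustrative.

```python
CO2_PER_KG_ETHANOL = 0.96   # kg CO2 released per kg ethanol fermented (figure from the text)
CO2_PER_KG_ALGAE   = 1.3    # kg CO2 consumed per kg algae grown (figure from the text)
OIL_PER_KG_ALGAE   = 0.5    # kg oil per kg algae for oleaginous strains (assumed reading of the text)

ethanol_kg = 1000.0                                   # one tonne of ethanol
co2_kg     = ethanol_kg * CO2_PER_KG_ETHANOL          # CO2 available to feed the algae
algae_kg   = co2_kg / CO2_PER_KG_ALGAE                # algae that CO2 could support
oil_kg     = algae_kg * OIL_PER_KG_ALGAE              # potential biodiesel feedstock

print(f"{co2_kg:.0f} kg CO2 -> {algae_kg:.0f} kg algae -> about {oil_kg:.0f} kg algal oil")
# 960 kg CO2 -> 738 kg algae -> about 369 kg algal oil
```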

Also encompassed in a sustainable biorefinery is the use of “green” processing technologies to replace traditional chemical processing. For example, supercritical CO2 can be used to extract oils and nutraceutical compounds from biomass instead of using toxic organic solvents such as hexane [4]. Ethanol can be used in biodiesel production from biological oils in place of the toxic petroleum-based methanol traditionally used. Widespread application of biorefineries would allow for replacement of petroleum-derived products with sustainable, carbon-neutral, low-polluting alternatives. In addition to the environmental benefits of biorefining, there are economic benefits as new industries grow in response to need [2,3]. A thorough economic analysis, including ecosystem and environmental impact, harvest, transport, processing, and storage costs, must be considered. The R&D Act of 2000 and the Energy Policy Act of 2005 recommend increasing biofuel production from 0.5 to 20 percent and biobased chemicals and materials from 5 to 25 percent [5], a goal that may best be reached through a biorefinery model.

Description: Biofuels

The origin of all fuel and biofuel compounds is ultimately the sun, as solar energy is captured and stored as organic compounds through photosynthetic processes. Certain biofuels, such as oils produced by plants and algae, are direct products of photosynthesis. These oils can be used directly as fuel or chemically transesterified to biodiesel. Other biofuels such as ethanol and methane are produced as organic substrates are fermented by microbes under anaerobic conditions. Hydrogen gas can be produced by both routes, that is, by photosynthetic algae and cyanobacteria under certain nutrient- or oxygen-depleted conditions, and by bacteria and archaea utilizing organic substrates under anaerobic conditions. Electrical energy produced by microbial fuel cells—specialized biological reactors that intercept electron flow from microbial metabolism—can fall into either category, depending on whether electron harvest occurs from organic substrates oxidized by organotrophic cultures or from photosynthetic cultures.

A comparison of biofuel energy contents reveals that hydrogen gas has the highest energy density of common fuels expressed on a mass basis. For liquid fuels, biodiesel, gasoline, and diesel have energy densities in the 40 to 46 kJ/g range. Biodiesel fuel has about 13 percent lower energy density than petroleum diesel fuel, but combusts more completely and has greater lubricity [7]. The infrastructure for transportation, storage, and distribution of hydrogen is lacking, which is a significant advantage for the adoption of biodiesel.

Another measure of energy content is energy yield (YE), the energy produced per unit of fossil fuel energy consumed. YE for biodiesel from soybean oil is 3.2, compared to 1.5 for ethanol from corn and 0.84 and 0.81 for petroleum diesel and gasoline, respectively [8]. Even greater YE values are achievable for biodiesel created from algal sources or for ethanol from cellulosic sources [9]. The high net energy gain for biofuels is attributed to the solar energy captured, compared to an overall net energy loss for fossil fuels.

What Do Hackers Want from You?

What Do They Want from You?

So the question remains: What could anyone possibly find on your computer or home network that would be of value to him or her? The answer might surprise you. For example, they might want to:

1. Steal your Microsoft Money and Quicken files, where you store personal financial information.

2. Get their hands on your personal savings and checking account numbers.

3. Search for your personal PIN numbers.

4. Steal electronic copies of your taxes that have been prepared using desktop tax reporting applications.

5. Steal your credit card numbers or any other financial information that is of value.

6. Steal important business information on your computer that might be of value to a competitor.

7. Launch distributed denial-of-service attacks against other Internet computers and Web sites.

All these types of information can easily be captured and sent to the hacker using a worm program, as depicted in Figure 1.4. A worm can be initially implanted on your computer by hiding inside an e-mail attachment which, when double-clicked, silently installs the worm on your hard drive. The worm then goes to work searching your hard disk for valuable information that it can relay back to its creator.

Money and personal secrets might not be the only things of value your computer can provide to hackers. Some people simply delight in causing trouble or playing practical jokes. It is not fun to find out that somebody has hacked into your computer and deleted important files or filled up your hard drive with useless garbage, but to some crackers this is a form of amusement.

A cracker can also take control of your computer without your knowledge and use it, along with thousands of other computers, to launch attacks on commercial Web sites and other corporate communications systems. Crackers achieve this by breaking into individual computer systems and planting Trojan horses that, once installed, communicate back to the cracker's computer and perform whatever instructions they are told to do. To prevent this sort of silent hostile takeover, you need to install a personal firewall and configure it to block all unapproved outgoing traffic from your computer. As you will see in Chapter 3, you can configure your firewall with a list of approved Internet applications such as Internet Explorer and Outlook Express. Your personal firewall will then deny Internet access to any application that is not on this list, including any Trojan horse applications.
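The "approved applications" idea can be illustrated with a tiny, hypothetical outbound filter: any program not on the allowlist is denied network access. Real personal firewalls hook into the operating system's network stack; this sketch only shows the decision logic, and the example program names are just illustrations.

```python
APPROVED_APPS = {"iexplore.exe", "msimn.exe"}   # e.g. Internet Explorer, Outlook Express

def allow_outbound(process_name, remote_host, remote_port):
    """Return True only for connections opened by an approved application."""
    allowed = process_name.lower() in APPROVED_APPS
    verdict = "ALLOW" if allowed else "BLOCK"
    print(f"{verdict} {process_name} -> {remote_host}:{remote_port}")
    return allowed

allow_outbound("iexplore.exe", "example.com", 80)           # ALLOW: on the approved list
allow_outbound("backorifice.exe", "198.51.100.23", 31337)   # BLOCK: unknown program phoning home
```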


The term Trojan horse comes from the trick that the Greek attackers used to penetrate the defenses of the city of Troy. It describes a program that sneaks onto your computer by hiding within a seemingly legitimate piece of software. The horse later begins to run amuck. Back Orifice made the Trojan horse software attack famous. Back Orifice is a Trojan horse program whose name mimics the Microsoft Back Office suite of network applications. Once planted, the Back Orifice program provides the hacker with complete control over the infected computer.

Sniffing




Sniffing is the use of a network interface to receive data not intended for the machine in which the interface resides. A variety of types of machines need to have this capability. A token-ring bridge, for example, typically has two network interfaces that normally receive all packets traveling on the media on one interface and retransmit some, but not all, of these packets on the other interface. Another example of a device that incorporates sniffing is one typically marketed as a “network analyzer.” A network analyzer helps network administrators diagnose a variety of obscure problems that may not be visible on any one particular host. These problems can involve unusual interactions between more than just one or two machines and sometimes involve a variety of protocols interacting in strange ways. Devices that incorporate sniffing are useful and necessary. However, their very existence implies that a malicious person could use such a device or modify an existing machine to snoop on network traffic. Sniffing programs could be used to gather passwords, read inter-machine e-mail, and examine client-server database records in transit. Besides these high-level data, low level information might be used to mount an active attack on data in another computer system.

Sniffing: How It Is Done



In a shared media network, such as Ethernet, all network interfaces on a network segment have access to all of the data that travels on the media. Each network interface has a hardware-layer address that should differ from all hardware-layer addresses of all other network interfaces on the network. Each network also has at least one broadcast address that corresponds not to an individual network interface, but to the set of all network interfaces. Normally, a network interface will only respond to a data frame carrying either its own hardware-layer address in the frame’s destination field or the “broadcast address” in the destination field. It responds to these frames by generating a hardware interrupt to the CPU. This interrupt gets the attention of the operating system, and passes the data in the frame to the operating system for further processing.
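As a rough, hypothetical illustration of that acceptance rule (not any particular driver's logic), the decision a non-promiscuous interface makes for each frame can be sketched as follows; the hardware addresses are placeholders.

# Minimal sketch of the frame-acceptance rule described above. A real NIC applies
# this test in hardware; the addresses here are illustrative placeholders.
MY_ADDRESS = "00:1a:2b:3c:4d:5e"          # this interface's hardware-layer address
BROADCAST_ADDRESS = "ff:ff:ff:ff:ff:ff"   # the broadcast address

def accept_frame(destination: str, promiscuous: bool = False) -> bool:
    """Return True if the interface should interrupt the CPU for this frame."""
    if promiscuous:
        return True                        # promiscuous mode: accept every frame
    return destination in (MY_ADDRESS, BROADCAST_ADDRESS)

print(accept_frame("00:1a:2b:3c:4d:5e"))                      # True  - addressed to us
print(accept_frame("aa:bb:cc:dd:ee:ff"))                      # False - someone else's traffic
print(accept_frame("aa:bb:cc:dd:ee:ff", promiscuous=True))    # True  - a sniffer sees it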



Note

The term "broadcast address" is somewhat misleading. When the sender wants to get the attention of the operating systems of all hosts on the network, he or she uses the "broadcast address." Most network interfaces are capable of being put into a "promiscuous mode." In promiscuous mode, network interfaces generate a hardware interrupt to the CPU for every frame they encounter, not just the ones with their own address or the "broadcast address." The term "shared media" indicates to the reader that such networks broadcast all frames; the frames travel on all the physical media that make up the network.
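For the curious, here is a minimal sketch of how a program can request promiscuous mode on Linux using a raw packet socket. It assumes root privileges and an interface named eth0 (an assumption, not from the text); portable tools such as tcpdump do the equivalent through the libpcap library instead.

# Minimal sketch: open a raw packet socket and request promiscuous mode on Linux.
# Requires root. The interface name "eth0" is an assumption.
import socket
import struct

ETH_P_ALL = 0x0003            # receive frames of every protocol
SOL_PACKET = 263              # Linux socket level for packet options
PACKET_ADD_MEMBERSHIP = 1
PACKET_MR_PROMISC = 1

sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
sock.bind(("eth0", 0))

# struct packet_mreq: { int mr_ifindex; unsigned short mr_type, mr_alen; unsigned char mr_address[8]; }
mreq = struct.pack("iHH8s", socket.if_nametoindex("eth0"), PACKET_MR_PROMISC, 0, b"")
sock.setsockopt(SOL_PACKET, PACKET_ADD_MEMBERSHIP, mreq)

frame, _ = sock.recvfrom(65535)   # now receives frames addressed to any host on the segment
print(len(frame), "bytes captured")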



At times, you may hear network administrators talk about their networking trouble spots when they observe failures in a localized area. They will say a particular area of the Ethernet is busier than other areas of the Ethernet where there are no problems. All of the packets travel through all parts of the Ethernet segment. Interconnection devices that do not pass all the frames from one side of the device to the other form the boundaries of a segment. Bridges, switches, and routers divide segments from each other, but low-level devices that operate on one bit at a time, such as repeaters and hubs, do not divide segments from each other. If only low-level devices separate two parts of the network, both are part of a single segment. All frames traveling in one part of the segment also travel in the other part.

The broadcast nature of shared media networks affects network performance and reliability so greatly that networking professionals use a network analyzer, or sniffer, to troubleshoot problems. A sniffer puts a network interface in promiscuous mode so that the sniffer can monitor each data packet on the network segment. In the hands of an experienced system administrator, a sniffer is an invaluable aid in determining why a network is behaving (or misbehaving) the way it is. With an analyzer, you can determine how much of the traffic is due to which network protocols, which hosts are the source of most of the traffic, and which hosts are the destination of most of the traffic. You can also examine data traveling between a particular pair of hosts and categorize it by protocol and store it for later analysis offline. With a sufficiently powerful CPU, you can also do the analysis in real time.

Most commercial network sniffers are rather expensive, costing thousands of dollars. When you examine these closely, you notice that they are nothing more than a portable computer with an Ethernet card and some special software. The only item that differentiates a sniffer from an ordinary computer is software. It is also easy to download shareware and freeware sniffing software from the Internet or various bulletin board systems.

The ease of access to sniffing software is great for network administrators because this type of software helps them become better network troubleshooters. However, the availability of this software also means that malicious computer users with access to a network can capture all the data flowing through the network. The sniffer can capture all the data for a short period of time or selected portions of the data for a fairly long period of time. Eventually, the malicious user will run out of space to store the data; the network I use often has 1,000 packets per second flowing on it, and just capturing the first 64 bytes of data from each packet (roughly 64 KB per second, or about 230 MB per hour) fills up my system's local disk space within the hour.



Note

Esniff.c is a simple 300-line C language program that works on SunOS 4.x. When run by the root user on a Sun workstation, Esniff captures the first 300 bytes of each TCP/IP connection on the local network. It is quite effective at capturing all usernames and passwords entered by users for telnet, rlogin, and FTP.

TCPDump 3.0.2 is a common, more sophisticated, and more portable Unix sniffing program written by Van Jacobson, a famous developer of high-quality TCP/IP software. It uses the libpcap library for portably interfacing with promiscuous mode network interfaces. The most recent version is available via anonymous FTP to ftp.ee.lbl.gov.

NetMan contains a more sophisticated, portable Unix sniffer in several programs in its network management suite. The latest version of NetMan is available via anonymous FTP to ftp.cs.curtin.edu.au in the directory /pub/netman.

EthDump is a sniffer that runs under DOS and can be obtained via anonymous FTP from ftp.eu.germany.net in the directory /pub/networking/inet/ethernet/.



WARNING

On some Unix systems, TCPDump comes bundled with the vendor OS. When run by an ordinary, unprivileged user, it does not put the network interface into promiscuous mode. Even so, with this command available, a user can see only data being sent to the Unix host, but is not limited to seeing data sent to processes owned by that user. System administrators concerned about sniffing should remove user execution privileges from this program.





Sniffing: How It Threatens Security

Sniffing data from the network leads to loss of privacy of several kinds of information that should be private for a computer network to be secure. These kinds of information include the following:

* Passwords

* Financial account numbers

* Private data

* Low-level protocol information

The following subsections provide examples of each of these kinds of information.

Sniffing Passwords

Perhaps the most common loss of computer privacy is the loss of passwords. Typical users type a password at least once a day. Data is often thought of as secure because access to it requires a password. Users usually are very careful about guarding their password by not sharing it with anyone and not writing it down anywhere.

Passwords are used not only to authenticate users for access to the files they keep in their private accounts; other passwords are often employed within multilevel secure database systems. When the user types any of these passwords, the system does not echo them to the computer screen, to ensure that no one will see them. After the user has jealously guarded these passwords, and the computer system has reinforced the notion that they are private, a setup that sends each character of a password across the network is extremely easy for any Ethernet sniffer to exploit. End users do not realize just how easily these passwords can be found by someone using a simple and common piece of software.
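To underline how little software this takes, the following sketch (Linux only, root required, and assuming plain Ethernet II frames carrying IPv4 with no VLAN tags) extends the raw packet socket shown earlier to print the plaintext payload of telnet and FTP control connections, which is exactly where usernames and passwords travel.

# Minimal sketch of password sniffing over telnet (23) and FTP (21). Linux only,
# root required. Promiscuous mode would be enabled as in the earlier sketch to
# see traffic between other hosts on the segment.
import socket
import struct

ETH_P_ALL = 0x0003
sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))

while True:
    frame, _ = sock.recvfrom(65535)
    if len(frame) < 54:                      # too short for Ethernet + IP + TCP headers
        continue
    if struct.unpack("!H", frame[12:14])[0] != 0x0800:
        continue                             # not IPv4
    ihl = (frame[14] & 0x0F) * 4             # IP header length in bytes
    if frame[23] != 6:
        continue                             # not TCP (protocol number 6)
    tcp = 14 + ihl
    if len(frame) < tcp + 20:
        continue
    src_port, dst_port = struct.unpack("!HH", frame[tcp:tcp + 4])
    if 23 not in (src_port, dst_port) and 21 not in (src_port, dst_port):
        continue                             # only telnet and FTP control traffic
    data_offset = (frame[tcp + 12] >> 4) * 4
    payload = frame[tcp + data_offset:]
    if payload:
        # Usernames and passwords show up here as ordinary readable text.
        print(payload.decode("ascii", errors="replace"), end="")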

Sniffing Financial Account Numbers

Most users are uneasy about sending financial account numbers, such as credit card numbers and checking account numbers, over the Internet. This apprehension may be partly because of the carelessness most retailers display when tearing up or returning carbons of credit card receipts. The privacy of each user’s credit card numbers is important. Although the Internet is by no means bulletproof, the most likely location for the loss of privacy to occur is at the endpoints of the transmission. Presumably, businesses making electronic transactions are as fastidious about security as those that make paper transactions, so the highest risk probably comes from the same local network in which the users are typing passwords. However, much larger potential losses exist for businesses that conduct electronic funds transfer or electronic document interchange over a computer network. These transactions involve the transmission of account numbers that a sniffer could pick up; the thief could then transfer funds into his or her own account or order goods paid for by a corporate account. Most credit card fraud of this kind involves only a few thousand dollars per incident.

Sniffing Private Data

Loss of privacy is also common in e-mail transactions. Many e-mail messages have been publicized without the permission of the sender or receiver. Remember the Iran-Contra affair, in which President Reagan's secretary of defense, Caspar Weinberger, was indicted; a crucial piece of evidence was backup tapes of PROFS e-mail on a National Security Council computer. The e-mail was not intercepted in transit, but in a typical networked system it could have been. It is not at all uncommon for e-mail to contain confidential business information or personal information. Even routine memos can be embarrassing when they fall into the wrong hands.

Sniffing Low-Level Protocol Information

The information that network protocols send between computers includes the hardware addresses of local network interfaces, the IP addresses of remote network interfaces, IP routing information, and the sequence numbers assigned to bytes on a TCP connection. A sniffer can obtain any of these data, and knowledge of any of them can be misused by someone interested in attacking the security of machines on the network; see the second part of this chapter for more information on how these data can pose risks to network security. After an attacker has this kind of information, he or she is in a position to turn a passive attack into an active attack with even greater potential for damage.

Ref: Gaining Access and Securing the Gateway

Laws and Safety Regulations


The construction industry is one of the biggest industries in the United Kingdom, although most workers are employed by small companies employing fewer than 25 people. The construction industry carries out all types of building work, from basic housing to offices, hotels, schools and airports. In all of these construction projects the Electrotechnical Industry plays a major role in designing and installing the electrical systems to meet the needs of those who will use the completed buildings.

The construction process is potentially hazardous, and many construction sites these days insist on basic safety standards being met before you are allowed on site. All workers must wear hard hats and safety boots or safety trainers and use low voltage or battery tools. When the building project is finished, all safety systems will be in place and the building will be safe for those who will use it. However, during the construction period, temporary safety systems are in place. People work from scaffold towers, ladders and stepladders. Permanent stairways and safety handrails must be put in by the construction workers themselves.

When the electrical team arrives on site to, let us say, ‘first fix’ a new domestic dwelling house, the downstairs floorboards and the ceiling plasterboards will probably not be in place, and the person putting in the power cables for the downstairs sockets will need to step over the floor joists, or walk and kneel on planks temporarily laid over the floor joists. The electrical team spend a lot of time on their hands and knees in confined spaces, on ladders, scaffold towers and on temporary safety systems during the ‘first fix’ of the process and, as a consequence, slips, trips and falls do occur.

To make all working environments safer, laws and safety regulations have been introduced. To make your working environment safe for yourself and those around you, you must obey all the safety regulations that are relevant to your work. The many laws and regulations controlling the working environment have one common purpose: to make the working environment safe for everyone.

Let us now look at some of these laws and regulations as they apply to the Electrotechnical Industry.



Ref: Trevor Linsley

Simple Lie Detector

Here's a simple lie detector that can be built in a few minutes, but can be incredibly useful when you want to know if someone is really telling you the truth. It is not as sophisticated as the ones the professionals use, but it works: it measures skin resistance, which goes down when a person lies.


Here are the details of the specific parts you will need:
Part | Total Qty. | Description
R1   | 1 | 33K 1/4W Resistor
R2   | 1 | 5K Pot
R3   | 1 | 1.5K 1/4W Resistor
C1   | 1 | 1uF 16V Electrolytic Capacitor
Q1   | 1 | 2N3565 NPN Transistor
M1   | 1 | 0-1 mA Analog Meter
MISC | 1 | Case, Wire, Electrodes (see Notes)

Notes
1. The electrodes can be alligator clips (although they can be painful), electrode pads (like the type they use in the hospital), or just wires and tape.

2. To use the circuit, attach the electrodes to the back of the subject's hand, about 1 inch apart. Then adjust the meter for a reading of 0. Ask your questions; you know the subject is lying when the meter reading changes.

Encryption/Decryption

While other security mechanisms provide protection against unauthorized access
and destruction of resources and information, encryption/decryption protects
information from being usable by the attacker. Encryption/decryption is a security
mechanism where cipher algorithms are applied together with a secret key
to encrypt data so that they are unreadable if they are intercepted. Data are then
decrypted at or near their destination. This is shown in Figure 3.8 .
As such, encryption/decryption enhances other forms of security by protecting
information in case other mechanisms fail to keep unauthorized users from
that information. There are two common types of encryption/decryption: public
key and private key. Software implementations of both types
are commonly available. Examples include data encryption standard (DES)
private key encryption, triple DES private key encryption, and Rivest, Shamir, and
Adleman (RSA) public key encryption.
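As a present-day illustration of the two types (not drawn from the text, which predates these libraries), here is a minimal sketch using the third-party Python "cryptography" package. AES-based Fernet stands in for the older DES/3DES private key ciphers, and RSA illustrates public key encryption.

# Minimal sketch contrasting private (symmetric) and public (asymmetric) key
# encryption with the "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

message = b"confidential payload"

# Private (symmetric) key: the same secret key encrypts and decrypts.
secret_key = Fernet.generate_key()
f = Fernet(secret_key)
assert f.decrypt(f.encrypt(message)) == message

# Public (asymmetric) key: encrypt with the public key, decrypt with the private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(message, oaep)
assert private_key.decrypt(ciphertext, oaep) == message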

Public key infrastructure (PKI) is an example of a security infrastructure that
uses both public and private keys. Public key infrastructure is a security infrastructure
that combines security mechanisms, policies, and directives into a system that
is targeted for use across unsecured public networks (e.g., the Internet), where
information is encrypted through the use of a public and a private cryptographic
key pair that is obtained and shared through a trusted authority. PKI is targeted
toward legal, commercial, official, and confidential transactions, and includes cryptographic
keys and a certificate management system. Components of this system are:

■ Managing the generation and distribution of public/private keys
■ Publishing public keys with UIDs as certificates in open directories
■ Ensuring that specific public keys are truly linked to specific private keys
■ Authenticating the holder of a public/private key pair

PKI uses one or more trusted systems known as Certification Authorities (CA),
which serve as trusted third parties for PKI. The PKI infrastructure is hierarchical,
with issuing authorities, registration authorities, authentication authorities, and
local registration authorities.
Another example is Secure Sockets Layer (SSL). Secure Sockets Layer is
a security mechanism that uses RSA-based authentication to recognize a party’s
digital identity and uses RC4 to encrypt and decrypt the accompanying transaction
or communication. SSL has grown to become one of the leading security protocols
on the Internet.
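For a sense of what this looks like in practice, here is a minimal sketch using Python's standard ssl module; note that modern implementations negotiate TLS, the successor to SSL, and have replaced RC4 with stronger ciphers such as AES.

# Minimal sketch: wrapping a TCP connection with SSL/TLS using the standard library.
import socket
import ssl

context = ssl.create_default_context()          # loads trusted CA certificates
with socket.create_connection(("www.example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls:
        # The server's certificate (its digital identity) has been verified against
        # the trusted CAs, and all further traffic on this socket is encrypted.
        print(tls.version())                    # e.g. 'TLSv1.3'
        print(tls.getpeercert()["subject"])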
One trade-off with encryption/decryption is a reduction in network performance.
Depending on the type of encryption/decryption and where it is implemented in
the network, network performance (in terms of capacity and delay) can be degraded
from 15% to 85% or more. Encryption/decryption usually also requires administration
and maintenance, and some encryption/decryption equipment can be expensive.
While this mechanism is compatible with other security mechanisms, trade-offs
such as these should be considered when evaluating encryption/decryption.

WHY DO YOU NEED A PERSONAL FIREWALL?

Because you are reading this chapter’s introduction, chances are very good that you already know a little about the Internet and why it is both an incredible and dangerous place to visit. The Internet is a gold mine of information and opportunity. Unfortunately, it has also become a hunting ground for less-than-scrupulous individuals who have both the tools and the know-how to penetrate your computer and steal your personal and financial information, or who simply enjoy playing practical jokes or deliberately harming other people’s computer systems. The introduction of widespread high-speed Internet access makes your computer an easier and more attractive target for these people. The mission of this book is to introduce you to personal firewalls and to help you protect your data and your privacy when you are surfing around the World Wide Web.

1 Learn about the hacker community and the dangers of surfing unprotected
on the Internet
2 Examine the dangers of high-speed cable and DSL access
3 Discover how easy it is to protect yourself by installing your own personal
firewall
4 Review the differences between software and hardware firewalls and decide
which solution is best for you
5 Find out which features you should look for when you go firewall shopping

The Hacker Community

Hackers are more than just isolated individuals roaming the Internet looking to
cause trouble. In fact, you might be surprised to know that there is an active hacker
community flourishing on the Internet. This community has a heritage that goes
back to the 1960s and can trace its roots back to the first hackers who used to hack
into the phone company to steal long-distance service. These people eventually gave
themselves the title of phone phreaks. As you will see, colorful names abound in the
hacker community.
Perhaps the best way to learn about and understand the hacker community is to
examine its various self-named members. These classifications include:
Hacker
Cracker
Whacker
Samurai
Larva
Demigod

Hacker
A hacker is an individual who possesses a technical mastery of computing skills and
who thrives on finding and solving technical challenges. This person usually has a
very strong UNIX and networking background. A hacker’s networking background
includes years of experience on the Internet and the ability to break into and infiltrate
other networks. Hackers can program using an assortment of programming
languages. In fact, this person can probably learn a new language in a matter of
days. The title of hacker is not something that you can claim. Instead, your peers
must give it to you. These people thrive on the admiration of their peers. In order to
earn this level of respect, an individual must share his or her knowledge. It is this
sharing of knowledge that forms the basis of the hacker community.

UNIX is one of the oldest and most powerful operating systems in the world. It’s also
one of the most advanced. UNIX provides most of the computing infrastructure that
runs the Internet today and a comprehensive understanding of UNIX’s inner workings is
a prerequisite for a true hacker.

One basic premise of this community is that no one should ever have to solve the
same problem twice. Time is too precious to waste reinventing the wheel. Therefore,
hackers share their knowledge and discoveries and as a result their status within the
hacker community grows as does the community itself.
Hackers believe that information is meant to be free and that it is their duty to make
sure that it is. Hackers are not out to do any harm. Their mission, they think, is to
seek a form of personal enlightenment, to constantly learn and explore and to
share. Of course, this is a terribly self-gratifying view but that is how hackers see
each other. They see their conduct as honorable and noble.
But the bottom line is that hackers use their computing skills to break into computers
and networks. Even though they might not do harm, it is still an unethical and
illegal act. Hacking into someone else’s computer is very much the same thing as
breaking into their home. Whether it makes them more enlightened or not is insufficient
justification for the crimes that they commit.

Cracker
Another group in the hacker community is the group that gives hackers a bad
name. The individuals in this group are known as crackers. Crackers are people who
break into computers and networks with the intent of creating mischief. Crackers
tend to get a great deal of media attention and are always called hackers by the TV
news and press. This, of course, causes hackers much frustration. Hackers have little
respect for crackers and want very much to distinguish themselves from them. To a
hacker, a cracker is a lower form of life deserving no attention. Of course, crackers
always call themselves hackers.
Usually, a cracker doesn’t have anywhere near the skill set of a true hacker,
although they do possess a certain level of expertise. Mostly they substitute brute
force attacks and a handful of tricks in place of the ingenuity and mastery wielded
by hackers.

Whacker
Whacker is another title that you might have heard. A whacker is essentially a person
who shares the philosophy of the hacker, but not his or her skill set. Whackers
are less sophisticated in their techniques and ability to penetrate systems. Unlike a
hacker, a whacker is someone who has never achieved the goal of making the perfect
hack. Although less technically sophisticated, whackers still possess a formidable
skill set, and although they might not produce new discoveries, they are able to follow
in the footsteps of hackers and can often reproduce their feats in an effort to
learn from them.

Samurai
A samurai is a hacker who decides to hire out his or her finely honed skills in order
to perform legal activities for corporations and other organizations. Samurai are
often paid by companies to try to break into their networks. The samurai is modeled
after the ancient Japanese Samurai and lives by a rigid code of honor that prohibits
the misuse of his or her craft for illegal means.

Larva
Larvas are beginner hackers. They are new to the craft and lack the years of experience
required to be a real hacker. They idolize true hackers and in time hope to
reach true hacker status.
So what do hackers, crackers, whackers, Samurai, or larva want with you or your
computer? After all there are plenty of corporate and government computers and
networks in the world that must offer far more attractive targets. Well, although
hackers, whackers, and Samurai might not be targeting them, home computers can
often be viewed as low-hanging fruit for crackers who want easy access to financial
information and a fertile training ground for larva to play and experiment.
But the biggest threat of all might come from a group of people not associated with
the hacker community. This group consists of teenagers and disgruntled adults with
too much time on their hands. These people usually have little if any real hacking
skills. And were it not for the information sharing code of the hacker community,
these people would never pose a threat to anybody. However, even with very little
know-how, these people can still download and execute scripts and programs developed
by real hackers. In the wrong hands, these programs seek out and detect vulnerable
computers and networks and wreak all kinds of destruction.

Other Hacker Terms
In addition to the more common titles previously presented, there are a few other
hacker terms that you should be aware of. For example, a wannabee is an individual
who is in the beginning larva stage of his or her hacking career. Wannabees are
seen as very eager pupils and can be dangerous because of their inexperience even
when their intentions are good. A dark-side hacker is an individual who, for one reason
or another, has lost his or her faith in the hacker philosophy and now uses those skills
maliciously. A demigod is a hacker with decades of experience and a worldwide reputation.

Just remember that somebody is always watching you; on the Internet nothing is private anymore, and it is not always the bad guys that you need to be worried about. In early 2000, the FBI installed a device called Carnivore at major ISPs that allowed it to trap and view every IP packet that crossed the wire. It has since been given the less intimidating name DCS1000. The FBI installed this surveillance hardware and software, they say, so that they can collect court-ordered information regarding specifically targeted individuals. It's kind of scary, but it is true. Just be careful with whatever you put into your e-mail, because you never know who will read it.

Selasa, 16 Desember 2008

DEVELOPING A SECURITY AND PRIVACY PLAN

The development of each component architecture is based on our understanding
of why that function is needed for that particular network. While one may argue
that security is always necessary, we still need to ensure that the security mechanisms
we incorporate into the architecture are optimal for achieving the security
goals for that network. Therefore, toward developing a security architecture, we
should answer the following questions:


1. What are we trying to solve, add, or differentiate by adding security mechanisms
to this network?

2. Are security mechanisms sufficient for this network?


While it is likely that some degree of security is necessary for any network, we
should have information from the threat analysis to help us decide how much
security is needed. As with the performance architecture, we want to avoid implementing
(security) mechanisms just because they are interesting or new.
When security mechanisms are indicated, it is best to start simple and work
toward a more complex security architecture when warranted. Simplicity may be
achieved in the security architecture by implementing security mechanisms only in
selected areas of the network (e.g., at the access or distribution [server] networks),
or by using only one or a few mechanisms, or by selecting only those mechanisms
that are easy to implement, operate, and maintain.
In developing the security architecture, you should determine what problems
your customer is trying to solve. This may be clearly stated in the problem definition,
developed as part of the threat analysis, or you may need to probe further to
answer this question. Some common areas that are addressed by the security architecture
include:

■ Which resources need to be protected
■ What problems (threats) we are protecting against
■ The likelihood of each problem (threat)

This information becomes part of your security and privacy plan for the network.
This plan should be reviewed and updated periodically to reflect the
current state of security threats to the network. Some organizations review
their security plans yearly, others more frequently, depending on their requirements
for security.
Note that there may be groups within a network that have different security
needs. As a result, the security architecture may have different levels of security.
This equates to the security perimeters or zones introduced in the previous chapter.
How security zones are established is discussed later in this chapter.
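As a purely hypothetical illustration of how the areas listed above (resources, threats, and their likelihood) might be recorded per security zone at the start of a security and privacy plan, consider the following sketch; all names and values are invented.

# Hypothetical sketch of recording threat-analysis results for a security and privacy plan.
from dataclasses import dataclass

@dataclass
class ThreatEntry:
    resource: str        # which resource needs to be protected
    threat: str          # what problem (threat) we are protecting against
    likelihood: str      # estimated likelihood of that threat
    security_zone: str   # zone/perimeter where the mechanism will apply

security_plan = [
    ThreatEntry("payroll database", "unauthorized access", "medium", "server (distribution) network"),
    ThreatEntry("user workstations", "worms delivered by e-mail", "high", "access network"),
]

for entry in security_plan:
    print(f"{entry.resource}: protect against {entry.threat} "
          f"(likelihood {entry.likelihood}, zone: {entry.security_zone})")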
Once you have determined which problems will be solved by each security
mechanism, you should then determine if these security mechanisms are sufficient
for that network. Will they completely solve the customer’s problems, or are they
only a partial solution? If they are a partial solution, are there other mechanisms that
are available, or will be available within your project time frame? You may plan to
implement basic security mechanisms early in the project, and upgrade or add to
those mechanisms at various stages in the project.