Smart Dust


What Is Smart Dust?

'Smart dust' refers to sensor-laden, networked computer nodes that are just cubic millimetres in volume. The Smart Dust project envisions a complete sensor-network node, including power supply, processor, sensor, and communication mechanisms, in a single cubic millimetre. Smart dust motes could run for years, given that a cubic-millimetre battery can store about 1 J and could be backed up with a solar cell or a vibrational energy source.

The Mems Technology In Smart Dust

        Smart dust requires revolutionary advances mainly in miniaturization, integration, and energy management. Designers have therefore used MEMS technology to build small sensors, optical communication components, and power supplies. Microelectromechanical systems (MEMS) consist of extremely tiny mechanical elements, often integrated with electronic circuitry. They are measured in micrometers, that is, millionths of a meter, and they are made in much the same fashion as computer chips. The advantage of this manufacturing process is not simply that small structures can be achieved but also that thousands or even millions of system elements can be fabricated simultaneously. This allows systems to be both highly complex and extremely low-cost.


 Active-Steered Laser Systems

For mote-to-mote communication, an active-steered laser communication system uses an onboard light source to send a tightly collimated light beam toward an intended receiver. Steered laser communication has the advantage of high power density; for example, a 1-milliwatt laser radiating into 1 milliradian (about 3.4 arc minutes) has a density of approximately 318 kilowatts per steradian (there are 4π steradians in a sphere), as opposed to a 100-watt lightbulb that radiates about 8 watts per steradian isotropically. A Smart Dust mote's emitted beam would have a divergence of approximately 1 milliradian, permitting communication over enormous distances using milliwatts of power. Each mote must weigh its needs to sense, compute, and communicate against its energy-reserve status before allocating precious nanojoules of energy to turn on its transmitter or receiver.
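As a rough sanity check on those figures, the short Python sketch below compares the watts-per-steradian of an isotropic bulb with that of a narrow beam, treating the beam as a simple cone of the stated divergence (the exact solid-angle convention behind the 318 kW/sr figure may differ, so take the beam number as an order-of-magnitude illustration only).

    import math

    def isotropic_density(power_w):
        """Power per steradian for a source radiating equally in all directions."""
        return power_w / (4 * math.pi)          # a full sphere subtends 4*pi steradians

    def beam_density(power_w, divergence_rad):
        """Power per steradian for a beam confined to a cone with the given
        full divergence angle (small-angle approximation)."""
        solid_angle = math.pi * (divergence_rad / 2) ** 2
        return power_w / solid_angle

    print(isotropic_density(100.0))    # ~8 W/sr for a 100 W bulb
    print(beam_density(1e-3, 1e-3))    # ~1.3 kW/sr for 1 mW into a 1 mrad cone,
                                       # orders of magnitude above the isotropic figure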

Abstract

        Advances in hardware technology have enabled very compact, autonomous, and mobile nodes, each having one or more sensors, computation and communication capabilities, and a power supply. The Smart Dust project is exploring whether an autonomous sensing, computing, and communication system can be packed into a cubic-millimeter mote to form the basis of integrated, massively distributed sensor networks. It focuses on reducing power consumption, size, and cost. To build these small sensors, processors, communication devices, and power supplies, designers have used MEMS (microelectromechanical systems) technology.

Major Challenges

1.     To incorporate all these functions while maintaining low power consumption

2.     Maximising operating life given the limited volume of energy storage

Listening To A Dust Field


Many Smart Dust applications rely on direct optical communication from an entire field of dust motes to one or more base stations. These base stations must therefore be able to receive a large number of simultaneous optical transmissions. Further, communication must be possible outdoors in bright sunlight, which has an intensity of approximately 1 kilowatt per square meter, even though the dust motes each transmit with only a few milliwatts of power. Using a narrow-band optical filter to reject all sunlight except the portion near the wavelength used for communication can partially solve this second problem, but the ambient optical power often remains much stronger than the received signal power.
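A back-of-the-envelope sketch of why the filter only partially solves the problem follows; every numeric parameter below (receiver aperture, filter rejection, received signal power) is an illustrative assumption, not a value from the Smart Dust design.

    # Rough estimate of filtered ambient sunlight vs. received signal power.
    # All parameters below are illustrative assumptions.
    SOLAR_IRRADIANCE_W_M2 = 1000.0   # ~1 kW/m^2 in bright sunlight
    APERTURE_AREA_M2 = 1e-4          # assumed 1 cm^2 receiver aperture
    FILTER_FRACTION = 0.001          # assumed fraction of sunlight passed by the
                                     # narrow-band optical filter
    RECEIVED_SIGNAL_W = 1e-9         # assumed nanowatt-level signal after path loss

    ambient_w = SOLAR_IRRADIANCE_W_M2 * APERTURE_AREA_M2 * FILTER_FRACTION
    print(f"filtered ambient: {ambient_w:.2e} W, signal: {RECEIVED_SIGNAL_W:.2e} W")
    print(f"ambient / signal ratio: {ambient_w / RECEIVED_SIGNAL_W:.1e}")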


Ovonic Unified Memory


Ovonic Unified Memory

        Among the above-mentioned non-volatile memories, Ovonic Unified Memory is the most promising. “Ovonic Unified Memory” is the registered name for the non-volatile memory based on the class of materials called chalcogenides. The term “chalcogen” refers to the Group VI elements of the periodic table, and “chalcogenide” refers to alloys containing at least one of these elements, such as the alloy of germanium, antimony, and tellurium discussed here. Energy Conversion Devices, Inc. has used this particular alloy to develop a phase-change memory technology of the kind used in commercially available rewritable CD and DVD disks. This phase-change technology uses a thermally activated, rapid, reversible change in the structure of the alloy to store data. Since the binary information is represented by two different phases of the material, it is inherently non-volatile, requiring no energy to keep the material in either of its two stable structural states.
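As a toy illustration of the read-out idea (not Energy Conversion Devices' actual circuitry), a phase-change bit can be recovered simply by comparing cell resistance against a threshold, since the amorphous phase is highly resistive and the crystalline phase is conductive; the resistance values below are assumptions.

    # Hypothetical illustration of phase-change (chalcogenide) bit readout.
    # Resistance values and the threshold are assumptions for illustration only.
    AMORPHOUS_OHMS = 1_000_000     # high-resistance ("reset") state
    CRYSTALLINE_OHMS = 10_000      # low-resistance ("set") state
    READ_THRESHOLD_OHMS = 100_000

    def read_bit(cell_resistance_ohms: float) -> int:
        """Return 1 for the low-resistance crystalline state, 0 for amorphous."""
        return 1 if cell_resistance_ohms < READ_THRESHOLD_OHMS else 0

    print(read_bit(CRYSTALLINE_OHMS))   # -> 1
    print(read_bit(AMORPHOUS_OHMS))     # -> 0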

Introduction

        We are now living in a world driven by electronic equipment. Semiconductors form the fundamental building blocks of the modern electronic world, providing the brains and the memory of products all around us, from washing machines to supercomputers. Semiconductors consist of arrays of transistors, each transistor being a simple switch between an electrical 0 and 1. Now often bundled together in their tens of millions, they form highly complex, intelligent, reliable semiconductor chips, which are small and cheap enough to proliferate into products all around us.


If scaling is to continue to and below the 65 nm node, alternatives to CMOS designs will be needed to provide a path to device scaling beyond the end of the roadmap. However, these emerging research technologies face an uphill technological challenge. For digital applications, the challenges include exponentially increasing leakage currents (gate, channel, and source/drain junctions), short-channel effects, and so on, while for analogue or RF applications the challenges include sustained linearity, low noise figure, power-added efficiency, and transistor matching. One of the fundamental approaches to managing this challenge is to use new materials to build the next generation of transistors.

Fundamental Ideas Of Emerging Memories

        The fundamental idea behind all of these technologies is the bistable nature of the selected material. FeRAM works on the basis of the bistable position of the central atom of a selected crystalline material. A voltage applied across the crystal polarizes the internal dipoles up or down, and the difference between these two states can be sensed electrically. FeRAM uses a non-linear read capacitor: the crystal unit placed between two electrodes remains in the state into which it was polarized by the applied electric field until another field, strong enough to switch the crystal's central atom to the other state, is applied.

Abstract

                Nowadays, digital memories are used in every field of day-to-day life. Semiconductors form the fundamental building blocks of the modern electronic world, providing the brains and the memory of products all around us, from washing machines to supercomputers. But we are now entering an era of material-limited scaling, and continued scaling has required the introduction of new materials.

Conclusion

                Unlike conventional flash memory, Ovonic Unified Memory can be randomly addressed. An OUM cell can be written about 10 trillion times, far more than a conventional flash cell. Computers using OUM would not be subject to critical data loss when the system hangs or when power is abruptly lost, as are present-day computers using DRAM and/or SRAM. OUM also requires fewer steps in an IC manufacturing process, resulting in reduced cycle times, fewer defects, and greater manufacturing flexibility.


Smart Fabrics


Abstract

 Based on the advances in computer technology, especially in the field of miniaturization, wireless technology and worldwide networking, the vision of wearable computers emerged. We already use a lot of portable electronic devices like cell phones, notebooks and organizers. The next step in mobile computing could be to create truly wearable computers that are integrated into our daily clothing and always serve as our personal assistant. This paper explores this from a textile point of view. Which new functions could textiles have? Is a combination of textiles and electronics possible? What sort of intelligent clothing can be realized?  Necessary steps of textile research and examples of current developments are presented as well as future challenges.

 Today, the interaction of human individuals with electronic devices demands specific user skills. In future, improved user interfaces can largely alleviate this problem and push the exploitation of microelectronics considerably. In this context the concept of smart clothes promises greater user-friendliness, user empowerment, and more efficient services support. Wearable electronics responds to the acting individual in a more or less invisible way. It serves individual needs and thus makes life much easier. We believe that today, the cost level of important microelectronic functions is sufficiently low and enabling key technologies are mature enough to exploit this vision to the benefit of society. In the following, we present various technology components to enable the integration of electronics into textiles.


Principles Behind Elektex

                ElekTex is essentially a laminate of textiles comprising two conductive outer layers separated by a partially conductive central layer. The outer layers each have two conductive-fabric electrode strips arranged so that the upper conductive layer has tracks which make contact across its opposing top and bottom edges and the lower conductive layer has conductive tracks up its left and right sides. The partially conductive central layer provides the magic which makes ElekTex work. Its role is to act as an insulator in the resting state which, when touched, allows electrical current to flow between the top and bottom layer. Pressure applied to the ElekTex fabric causes two effects. First, the conducting fibres in the central layer are locally compressed allowing contact between neighbouring conducting fibres to form a conductive channel through the central layer. Second, the applied pressure brings the two outer layers into contact with the conductive channel running through the central layer allowing a local circuit to be established between the upper and lower layers.
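A minimal sketch of how an X/Y touch position could be read from such a layered fabric follows, treating it like a conventional four-wire resistive touch sensor. The ADC interface, resolution, and idle-reading convention are assumptions for illustration, not Eleksen's actual drive scheme.

    # Hypothetical 4-wire resistive readout for a fabric touch sensor.
    # read_adc() stands in for whatever hardware interface is actually available.

    ADC_FULL_SCALE = 1023  # assumed 10-bit converter

    def read_adc(drive_layer: str, sense_layer: str) -> int:
        """Placeholder for hardware access: drive one outer layer with a voltage
        gradient and sample the other; here it just returns a simulated reading."""
        return 512 if drive_layer == "left_right" else 384

    def read_touch():
        """Return (x, y) in [0, 1], or None if the layers are not in contact."""
        x_raw = read_adc(drive_layer="left_right", sense_layer="top_bottom")
        y_raw = read_adc(drive_layer="top_bottom", sense_layer="left_right")
        if x_raw == 0 and y_raw == 0:      # assumed idle reading when untouched
            return None
        return x_raw / ADC_FULL_SCALE, y_raw / ADC_FULL_SCALE

    print(read_touch())   # e.g. (0.50, 0.38) for the simulated readings above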

Other Interesting "Smart Clothing"  
                              
                    There are also other "Smart Clothes" aimed at consumer use. For example, Philips, the Dutch consumer electronics manufacturer, has developed new fabrics blended with conductive materials and powered by removable 9 V batteries. These fabrics have been tested in wet conditions and have proven resilient and safe for wearers. One prototype Philips has developed is a child's "bugsuit" that integrates a GPS receiver and a digital camera woven into the fabric, with an electronic game panel on the sleeve. This allows parents to monitor the child's location and activities. Another Philips product is a life-saving ski jacket with a built-in thermometer, GPS, and proximity sensor. The thermometer monitors the skier's body temperature and heats the fabric if it detects a drastic fall in body temperature.

Wearable Intelligence

Self-heating hats and glow-in-the-dark sweatshirts might correctly be labeled as ‘smart’, but how about a shirt that ‘knows’ whether you are free to take a cell phone call, or that can retrieve information from a 1000-page safety manual displayed on your inside pocket? Such items, termed ‘intelligent’ clothing to distinguish them from their lower-tech cousins, have proved more difficult to patch unobtrusively into everyday apparel. Indeed, the first prototype ‘wearable computers’ of the early 1990s required users to strap on a head-mounted visor and carry heavy battery packs in their pockets, leading some to question the appropriateness of the term ‘wearable’.

Sensitive Fabric Surfaces

                 Creating sensors that are soft and malleable and that conform to a variety of physical forms will greatly change the way computing devices appear and feel. Currently, creating beautiful and unusual computational objects, like keyboards and digital musical instruments, is  a difficult problem. Keyboards today are made from electric contacts printed on plastic backing. These contacts are triggered by mechanical switches and buttons. Digital musical instruments rely on film sensors, like piezoelectric and resistive strips. All these sensors require rigid physical substrates to prevent de-lamination, and the mechanical incorporation of bulky switches. This drastically limits the physical form, size and tactile properties of objects using these sensors.

Conclusion


                 What smart fabrics cannot do is not as important as what they can. These intelligent textiles have managed to pervade places where you would least expect to find them, and that is the real charm of knowing about them. They can engender a myriad of wild imaginings, none of which are impossible.


Surround Sound System


 Surround Sound Formats

The principal format for digital discrete surround is the “5.1 channel” system. The name stands for five channels of full-bandwidth audio (20 Hz to 20 kHz): in front, left, centre, and right; behind, left surround and right surround. A sixth channel will, at times, carry additional bass information to maximize the impact of scenes such as explosions. This channel has only a narrow frequency response (3 Hz to 120 Hz), so it is referred to as the “.1” channel, giving the system its “5.1” name.
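For reference, the layout above can be captured as a small lookup table; the Python dictionary below is simply a restatement of the figures just given, not part of any standard.

    # The 5.1 channel layout described above, as a simple lookup table (Hz ranges).
    CHANNELS_HZ = {
        "left":           (20, 20_000),
        "centre":         (20, 20_000),
        "right":          (20, 20_000),
        "left surround":  (20, 20_000),
        "right surround": (20, 20_000),
        "LFE (.1)":       (3, 120),       # low-frequency effects channel
    }

    full_bandwidth = [name for name, (lo, hi) in CHANNELS_HZ.items() if hi >= 20_000]
    print(len(full_bandwidth), "full-bandwidth channels plus the LFE channel")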

DTS

DTS was introduced to the public in 1993 with the release of Jurassic Park. Presently, over 5000 theatres worldwide are equipped with DTS playback equipment, and over 100 movies to date have been DTS encoded. The DTS format for movie theatres, also known as DTS-6, and the format proposed for home entertainment are quite different. We will discuss the former first.



Introduction

We are now entering the Third Age of reproduced sound. The monophonic era was the First Age, which lasted from Edison's invention of the phonograph in 1877 until the 1950s. During that time, the goal was simply to reproduce the timbre of the original sound; no attempt was made to reproduce directional properties or spatial realism. The stereo era was the Second Age. Based on inventions from the 1930s, it reached the public in the mid-1950s and has provided great listening pleasure for four decades. Stereo improved the reproduction of timbre and added two dimensions of space: the left-right spread of performers across a stage and a set of acoustic cues that allow listeners to perceive a front-to-back dimension.

7 – Channel Surround Circuit

It creates a 5-channel real surround, with the other two channels being entirely virtual. This becomes possible by adopting a radically different approach: the output from the amplifier is split into low and high frequencies and fed into separate speakers. The lows are fed to the rear speakers and the highs to the front ones. One distinct feature is that the orientation of the speakers is directly opposite to that in the Dolby Surround system.
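A minimal sketch of the low/high split described above, implemented as a simple Butterworth crossover in Python, is shown below; the 200 Hz crossover frequency, filter order, and sample rate are illustrative assumptions rather than the circuit's actual values.

    import numpy as np
    from scipy.signal import butter, lfilter

    FS = 48_000          # sample rate in Hz (assumed)
    CROSSOVER_HZ = 200   # assumed crossover point between "lows" and "highs"

    def split_lows_highs(audio, fs=FS, fc=CROSSOVER_HZ, order=4):
        """Split a mono signal into a low band (rear speakers) and a high band
        (front speakers) with complementary Butterworth filters."""
        b_lo, a_lo = butter(order, fc, btype="low", fs=fs)
        b_hi, a_hi = butter(order, fc, btype="high", fs=fs)
        return lfilter(b_lo, a_lo, audio), lfilter(b_hi, a_hi, audio)

    # Example: a 1-second test tone mixing 50 Hz and 2 kHz components.
    t = np.arange(FS) / FS
    tone = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 2000 * t)
    lows, highs = split_lows_highs(tone)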

SDDS

Sony Dynamic Digital Sound is the newest of the three formats to hit the market. The system was released in early 1994, and kits were made available to dubbing studios and film printers to adapt industry standard film printing equipment so as to provide an easy ability to record and print SDDS films.

THX IN 5.1

The THX Sound System was developed in 1982 during the production of Return of the Jedi. The system was developed by Lucasfilm's corporate technical director Tomlinson Holman; thus the new sound system was referred to as the Tomlinson Holman eXperiment. THX is a sound system designed specifically to reproduce film sound exactly as it was recorded by the film maker. THX systems are more 5.1-ready than most other systems. Home THX systems have always employed separate amplifier channels for the two surrounds, because THX controllers apply “decorrelation” to convert Pro Logic's mono surround signal into two spacious “stereoized” surrounds. So a THX system is already using the stereo surround amplification that DSD or DTS requires for the surround channels.

Conclusion

With the amount of technological advance occurring in the field of surround sound, it will be no surprise if the stereo systems we know become obsolete soon. The newly proposed DVD format has already chosen DSD as its preferred audio format.
  

Treating Cardiac Diseases Based On Catheter Based Tissue Heating


Abstract

In microwave ablation, electromagnetic energy would be delivered via a catheter to a precise location in a coronary artery for selective heating of a targeted atherosclerotic lesion. Advantageous temperature profiles would be obtained by controlling the power delivered, the pulse duration, and the frequency. The major components of an apparatus for microwave ablation would include a microwave source, a catheter/transmission line, and an antenna at the distal end of the catheter. The antenna would focus the radiated beam so that most of the microwave energy would be deposited within the targeted atherosclerotic lesion.

 Microwave Cardiac Ablation

Another application of catheter-based microwave heating is the treatment of abnormal heart rhythm, or cardiac arrhythmia. This life-threatening disease, which affects over 300,000 Americans yearly, is caused by anomalous electrical activity in certain areas of the heart. Although drugs can be used to control the excessively rapid heartbeat, mechanically removing or destroying a section of this tissue is more effective in curing arrhythmias. Selective catheter-fed ablation, or excessive heating of tissue, destroys the region of the heart responsible for the anomalous electrical activity.


Effects Of Electric Field

When the electric field is oriented parallel to the artery wall, the field is the same on both sides of each LWC/HWC boundary. The dissipated power density is σ|E|²/2, and since the conductivity is much greater in HWC than in LWC tissue, more power is deposited on the HWC side. Conversely, an electric field perpendicular to the artery wall is greater on the LWC side by the ratio ε_H/ε_L, so power is preferentially dissipated on the LWC side by the ratio (σ_L/σ_H)(ε_H/ε_L)². Figure 2 shows schematically the relative sizes of the electric fields (arrow lengths) and deposited power (box volumes) for the two field orientations. Using the normal electric-field polarization ensures that waves with a radially polarized electric field deposit more power in the plaque layer than in the healthy artery wall.
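A small numeric illustration of those two ratios follows; the conductivity and permittivity values are assumed order-of-magnitude figures for high-water-content (HWC) and low-water-content (LWC) tissue, not measured data.

    # Relative microwave power deposition across an LWC/HWC tissue boundary.
    # sigma = conductivity (S/m), eps = relative permittivity; values are assumptions.
    sigma_h, eps_h = 2.0, 50.0     # high-water-content tissue (healthy artery wall)
    sigma_l, eps_l = 0.3, 10.0     # low-water-content tissue (fatty plaque)

    # E parallel to the wall: E is continuous, so power scales with sigma on each side.
    parallel_ratio_lwc_over_hwc = sigma_l / sigma_h

    # E perpendicular to the wall: eps_l*E_l = eps_h*E_h, so E_l/E_h = eps_h/eps_l,
    # and the deposited-power ratio picks up that factor squared.
    perpendicular_ratio_lwc_over_hwc = (sigma_l / sigma_h) * (eps_h / eps_l) ** 2

    print(f"parallel E:      P_LWC/P_HWC = {parallel_ratio_lwc_over_hwc:.2f}")
    print(f"perpendicular E: P_LWC/P_HWC = {perpendicular_ratio_lwc_over_hwc:.2f}")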

Mca Applicators

        Microwave catheter ablation (MCA) antenna applicators have been used experimentally for cardiac ablation. These applicators fall into two categories: monopole antennas and helical-coil antennas. Both types radiate in the normal mode, with waves propagating perpendicular to the axis of the helix. Monopole antennas are usually one-half of the tissue wavelength in length and generate a well-defined, football-shaped heating pattern along their axis.
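For a rough sense of scale, the in-tissue wavelength that sets the monopole length can be estimated from the operating frequency and the tissue's relative permittivity. The sketch below ignores conductive losses, and the 2.45 GHz frequency and permittivity value are assumptions for illustration, not parameters taken from the MCA designs above.

    # Approximate wavelength inside tissue, ignoring conductive losses.
    C = 3.0e8                 # speed of light in vacuum, m/s
    FREQ_HZ = 2.45e9          # assumed ISM-band operating frequency
    EPS_R_TISSUE = 50.0       # assumed relative permittivity of high-water-content tissue

    wavelength_m = C / (FREQ_HZ * EPS_R_TISSUE ** 0.5)
    print(f"tissue wavelength ~ {wavelength_m * 100:.1f} cm, "
          f"half-wave monopole ~ {wavelength_m * 50:.1f} cm")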

 Introduction

For decades, scientists have been using electromagnetic and sonic energy to serve medicine. But, aside from electrosurgery, their efforts have focused on diagnostic imaging of internal body structures, particularly in the case of x-ray, MRI, and ultrasound systems. Lately, however, researchers have begun to see acoustic and electromagnetic waves in a whole new light, turning their attention to therapeutic rather than diagnostic applications. Current research exploits the ability of radio-frequency (RF) and microwave energy to generate heat, essentially by exciting molecules. This heat is used predominantly to ablate cells. Of the two technologies, RF was the first to be used in a marketable device.

Conclusions 

        Two applications of microwave internal biological heating have been discussed. Both MABA and MCA consist of an antenna applicator fed by a coaxial cable that passes through a catheter. The antenna designs take advantage of the polarization and phase effects of microwaves to create specific power-deposition patterns. MABA, with a helix and mode-filter balloon, uses the large differences in the dielectric characteristics of HWC and LWC tissue to preferentially heat and weld plaque while sparing healthy artery walls. The wide-aperture MCA uses an unfurlable spiral antenna within a balloon to generate a deep, large ablation volume in diseased cardiac tissue. Theoretical studies have been validated with a variety of in-vitro and in-vivo experiments. There is less potential for tissue-surface charring with microwaves than with RF ablation, and live-animal studies indicate that MCA is well tolerated.



Utility FOG


Abstract

Nanofog is a highly advanced nanotechnology, which the Technocratic Union has developed as the ultimate multi-purpose tool. It is a user-friendly, completely programmable collection of Avogadro numbers (6 × 10²³) of nanomachines that can form a vast range of machinery, from wristwatches to spaceships. It can simulate any material from gas to liquid to solid, and in sufficient quantities it can even be used to implement the ultimate in virtual reality. ITx researchers suggest that more complex applications could include uploading human minds into planet-sized collections of Utility Fog. An active, polymorphic material, Utility Fog can be designed as a conglomeration of 100-micron robotic cells called foglets. Such robots could be built with the techniques of molecular nanotechnology.

Synergistic Combination With Other Technologies

The counterintuitive inefficiency in communications is an example, possibly the most extreme one, of a case where macroscopic mechanisms outperform the Fog at some specific task. This will be even more true when we consider nano-engineered macroscopic mechanisms. We could imagine a robot, human-sized, that was formed of a collection of nano-engineered parts held together by a mass of Utility Fog. The parts might include "bones", perhaps diamond-fiber composites, having great structural strength; motors, power sources, and so forth. The parts would form a sort of erector set that the surrounding Fog would assemble to perform the task at hand. The Fog could do directly all subtasks not requiring the excessive strength, power, and so forth that the special-purpose parts would supply.


Introduction

Nanotechnology is based on the concept of tiny, self-replicating robots. The Utility Fog is a very simple extension of the idea: suppose, instead of building the object you want atom by atom, the tiny robots linked their arms together to form a solid mass in the shape of the object you wanted? Then, when you got tired of that avant-garde coffee table, the robots could simply shift around a little and you'd have an elegant Queen Anne piece instead. The color and reflectivity of an object are results of its properties as an antenna in the micron wavelength region. Each robot could have an "antenna arm" that it could manipulate to vary those properties, and thus the surface of a Utility Fog object could look just about however you wanted it to. A "thin film" of robots could act as a video screen, varying their optical properties in real time.

Rather than paint the walls, coat them with Utility Fog and they can be a different color every day, or act as a floor-to-ceiling TV. Indeed, make the entire wall of Fog and you can change the floor plan of your house to suit the occasion. Make the floor of it and it never gets dirty, looks like hardwood but feels like foam rubber, and extrudes furniture in any form you desire. Indeed, your whole domestic environment can be constructed from Utility Fog; it can form any object you want (except food), and whenever you don't want an object any more, the robots that formed it spread out and form part of the floor again. You may as well make your car of Utility Fog, too; then you can have a "new" one every day. But better than that, the *interior* of the car is filled with robots as well as its shell. You'll need to wear holographic "eyephones" to see, but the Fog will hold them up in front of your eyes and they'll feel and look as if they weren't there. Although heavier than air, the Fog is programmed to simulate the physical properties of air, so you can't feel it: when you move your arm, it flows out of the way. Except when there's a crash! Then it forms an instant form-fitting "seatbelt" protecting every inch of your body. You can take a 100-mph impact without messing your hair.

Other Desirable Limitations

In 1611, William Shakespeare wrote his final play, "The Tempest." 345 years later, an obscure science fiction writer named W. J. Stuart updated The Tempest's plot into a story called "Forbidden Planet," and created a modern myth.

Communications And Control


In the macroscopic world, microcomputer-based controllers (e.g. the widely used Intel 8051 series microcontrollers) typically run on a clock speed of about 10 MHz. They emit control signals, at most, on the order of 10 kHz (usually less), and control motions in robots that are at most 10 Hz, i.e. a complete motion taking one tenth of a second. This million-clocks-per-action is not strictly necessary, of course; but it gives us some concept of the action rate we might expect for a given computer clock rate in a digitally controlled nanorobot.
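Restated as a trivial calculation (assuming the same million-clocks-per-action ratio carries over, which is only a rule of thumb, not a design law):

    # "Million clocks per action" rule of thumb from the macroscopic examples above.
    CLOCKS_PER_ACTION = 1_000_000

    def expected_action_rate_hz(clock_hz: float) -> float:
        """Expected complete actions per second for a given controller clock rate."""
        return clock_hz / CLOCKS_PER_ACTION

    print(expected_action_rate_hz(10e6))   # 10 MHz microcontroller -> ~10 actions/s
    print(expected_action_rate_hz(1e9))    # hypothetical 1 GHz nanorobot -> ~1000 actions/s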


VLSI Computations


The Future

The next generation of computers, to be used in the 1990s, will rely on very large-scale integrated (VLSI) chips along with high-density modular design. Multiprocessors, like the 16 processors in the S-1 project at Lawrence Livermore National Laboratory and in Denelcor's HEP, will be required. The Cray-2 is expected to have four processors and to be delivered in 1985. More than 1000 mega floating-point operations per second (megaflops) are expected of these future supercomputers.

Mapping Algorithms Into Vlsi Arrays

Procedures to map cyclic loop algorithms into special-purpose VLSI arrays are described below.  The method is based on mathematical transformation of the index sets and the data-dependence vectors associated with a given algorithm.  After the algorithmic transformation, one can devise a more efficient array structure that can better exploit parallelism and pipelining by removing unnecessary data dependencies.
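As a concrete, much-simplified illustration of mapping a regular loop algorithm onto an array of identical cells, the following Python sketch simulates a square systolic array computing a matrix product, with operands skewed and pipelined through neighbouring cells one step per cycle. It is a software model for intuition only, not a description of any particular VLSI design discussed here.

    import numpy as np

    def systolic_matmul(A, B):
        """Simulate an n x n systolic array computing C = A @ B.
        Row i of A is skewed by i cycles and streamed in from the left;
        column j of B is skewed by j cycles and streamed in from the top.
        Each cell accumulates a_in * b_in and passes its inputs onward."""
        n = A.shape[0]
        C = np.zeros((n, n))
        total_cycles = 3 * n - 2            # time for all skewed data to drain through
        for t in range(total_cycles):
            for i in range(n):
                for j in range(n):
                    k = t - i - j           # which operand pair reaches cell (i, j) now
                    if 0 <= k < n:
                        C[i, j] += A[i, k] * B[k, j]
        return C

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[5.0, 6.0], [7.0, 8.0]])
    print(np.allclose(systolic_matmul(A, B), A @ B))   # True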



Need For Parallel Processing

Achieving high performance depends not only on using faster and more reliable hardware devices, but also on major improvements in computer architecture and processing techniques. State-of-the-art parallel computer systems can be characterized into three structural classes: pipelined computers, array processors, and multiprocessor systems. Parallel processing computers provide a cost-effective means to achieve high system performance through concurrent activities.

The Systolic Array Architecture

The choice of an appropriate architecture for any electronic system is very closely related to the implementation technology.  This is especially true in VLSI.  The constraints of power dissipation, I/O pin count, relatively long communication delays, difficulty in design and layout, etc., all important problems in VLSI, are much less critical in other technologies.  As a compensation, however, VLSI offers very fast and inexpensive computational elements with some unique and exciting properties.  For example, bi-directional transmission gates (Pass transistors) enable a full barrel shifter to be configured in a very compact NMOS array.

Vlsi Computing Structures

Highly parallel computing structures promise to be a major application area for the million-transistor chips that will be possible in just a few years.  Such computing systems have structural properties that are suitable for VLSI implementation.  Almost by definition, parallel structures imply a basic computational element repeated perhaps hundreds or thousands of times.  This architectural style immediately reduces the design problem by similar orders of magnitude.  In this section, we examine some VLSI computing structures that have been suggested by computer researchers.  We begin with a characterization of the systolic architecture.  Then we describe methodologies for mapping parallel algorithms into processor arrays.  Finally, we present reconfigurable processor arrays for designing algorithmically specialized, modularly structured VLSI computing systems.  Described below are the key attributes of VLSI computing structures.

Reconfigurable Processor Array

Algorithmically specialized processors often use different interconnection structures.  As demonstrated in Figure 10.30, five array structures have been suggested for implementing different algorithms.  The mesh is used for dynamic programming.  The hexagonally connected mesh was shown in the previous section for L-U decomposition.  The torus is used for transitive closure.  The binary tree is used for sorting.  The double-rooted tree is used for searching.  The matching of the structure to the right algorithm has a fundamental influence on performance and cost effectiveness.

Conclusion


The applications of VLSI computations appear in real-time image processing as well as real-time signal processing. The VLSI feature extraction introduced by Foley and Sammon in 1975 enables signal- and image-processing computations to be performed effectively and speedily. Pattern embedding by the wafer-scale integration (WSI) method introduced by Hedlum is another application of VLSI computing structures. Modular VLSI architectures for implementing large-scale matrix arithmetic processors have also been introduced.


Wearable Bio Sensors


LED Modulation  
 
The power consumption problem can be solved with a light-modulation technique. Instead of lighting the skin continually, the LED is turned on only for a short time, say 100-1000 ns, and the signal is sampled within that period. High-frequency, low-duty-cycle modulation minimizes LED power consumption.
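The saving is simple arithmetic: average LED power is roughly the drive power scaled by the duty cycle. The drive power, on-time, and pulse rate below are illustrative assumptions, not the ring sensor's actual operating point.

    # Average LED power under high-frequency, low-duty-cycle modulation.
    # All numeric values are illustrative assumptions.
    LED_DRIVE_W = 10e-3        # assumed power while the LED is on (10 mW)
    ON_TIME_S = 500e-9         # on for ~500 ns per sample (within the 100-1000 ns range)
    SAMPLE_RATE_HZ = 1000      # assumed pulse/sampling rate

    duty_cycle = ON_TIME_S * SAMPLE_RATE_HZ
    print(f"duty cycle: {duty_cycle:.1e}, average power: {LED_DRIVE_W * duty_cycle:.2e} W")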

Basic Principle Of Ring Sensor

Each time the heart muscle contracts, blood is ejected from the ventricles and a pulse of pressure is transmitted through the circulatory system. As this pressure pulse travels through the vessels, it causes a vessel-wall displacement that is measurable at various points. To detect the pulsatile blood-volume changes photoelectrically, photoconductors (normally photoresistors) are used, and phototransistors are used for amplification.

Working    

The LEDs and photodiode (PD) are placed on the flanks of the finger; either a reflective or a transmittal configuration can be used. To avoid motion disturbances, the more stable transmittal method is used. The transmittal type needs a powerful LED to transmit light across the finger, and this power consumption problem can be solved with the light-modulation technique described above, using high-speed devices.



Ring Sensor     

It is a pulse oximetry sensor that allows one to continuously monitor heart rate and oxygen saturation in a totally unobtrusive way. The device is shaped like a ring and thus it can be worn for long periods of time without any discomfort to the subject.

Abstract

          Recent advances in miniature devices have fostered a dramatic growth of interest in wearable technology. Wearable Bio-Sensors (WBS) will permit continuous cardiovascular (CV) monitoring in a number of novel settings. WBS could play an important role in the wireless surveillance of people during hazardous operations (military, firefighting, etc.), or such sensors could be dispensed during a mass civilian casualty occurrence. They typically rely on wireless, miniature sensors enclosed in a ring or a shirt.

Impact Of The Smart Shirt 

          The smart shirt will have a significant impact on the practice of medicine, since it fulfills the critical need for a technology that can enhance the quality of life while reducing health care costs across the continuum of life, that is, from newborns to senior citizens, and across the continuum of medical care, that is, from hospitals to homes and everywhere in between.

Introduction

          Wearable sensors and systems have evolved to the point that they can be considered ready for clinical application. The use of wearable monitoring devices that allow continuous or intermittent monitoring of physiological signals is critical for the advancement of both the diagnosis as well as treatment of diseases.  

Future Trends    
 
By providing the “platform” for a suite of sensors that can be used to monitor an individual unobtrusively, Smart Shirt technology opens up exciting opportunities to develop “adaptive and responsive” systems that can “think” and “act” based on the user's condition, stimuli, and environment. Thus, the rich vital-signs data stream from the smart shirt can be used to design and implement “real-time” feedback mechanisms (as part of the smart shirt system) to enhance the quality of care for the individual by providing appropriate and timely medical intervention.

Conclusion  
   
The ring sensor and the smart shirt form an effective, comfortable, and mobile information infrastructure that can be tailored to the individual's requirements to take advantage of advances in telemedicine and information processing. Just as special-purpose chips and processors can be plugged into a computer motherboard to obtain the required information-processing capability, the smart shirt is an information infrastructure into which the wearer can “plug in” the desired sensors and devices, thereby creating a system for monitoring vital signs in an efficient and cost-effective manner with the “universal“ interface of clothing.


Wavelet Transforms


Abstract

Mathematical transformations are applied to signals to obtain further information that is not readily available in the raw signal. By applying the various transformations available today, the frequency information in these signals can be obtained. Many such transforms are used quite often by engineers and mathematicians.

Wavelets – Theory

Wavelet analysis is performed using a prototype function called a wavelet, which has the effect of a band-pass filter. Wavelets are functions defined over a finite interval and having an average value of zero. The basic idea of the wavelet transform is to represent any arbitrary function f(t) as a superposition of a set of such wavelets or basis functions. These basis functions are derived from a single prototype called the mother wavelet.
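In the usual textbook notation (consistent with the description above, though not spelled out in this text), the basis functions are generated from the mother wavelet ψ by scaling and translation, and the continuous wavelet transform is the inner product of f(t) with them:

\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\, \psi\!\left( \frac{t-b}{a} \right), \qquad a > 0

W_f(a,b) = \int_{-\infty}^{\infty} f(t)\, \psi_{a,b}^{*}(t)\, dt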


Importance Of The Frequency Information

Oftentimes, information that cannot be readily seen in the time domain can be seen in the frequency domain. Most signals in practice are time-domain signals in their raw format; that is, whatever the signal is measuring is a function of time. In other words, when we plot the signal, one of the axes is time (the independent variable) and the other (the dependent variable) is usually the amplitude. When we plot time-domain signals, we obtain a time-amplitude representation of the signal.

The Wavelet Transform

The wavelet transform provides a time-frequency representation. Often a particular spectral component occurring at a particular instant is of special interest, and in these cases it is very useful to know the time intervals in which these spectral components occur. For example, in EEGs, the latency of an event-related potential is of particular interest.

Introduction

Wavelet transforms have been one of the important signal processing developments in the last decade, especially for the applications such as time-frequency analysis, data compression, segmentation and vision. During the past decade, several efficient implementations of wavelet transforms have been derived. The theory of wavelets has roots in quantum mechanics and the theory of functions though a unifying framework is a recent occurrence. Wavelet analysis is performed using a prototype function called a wavelet. Wavelets are functions defined over a finite interval and having an average value of zero. The basic idea of the wavelet transform is to represent any arbitrary function f (t) as a superposition of a set of such wavelets or basis functions.

Fourier Transforms

The Fourier transform is used in many areas and applications to obtain the frequency representation of a signal. If the Fourier transform of a time-domain signal is taken, the frequency-amplitude representation of that signal is obtained.

The Fourier transform is defined by the following two equations:

X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-j 2\pi f t}\, dt    ...(1)

x(t) = \int_{-\infty}^{\infty} X(f)\, e^{j 2\pi f t}\, df    ...(2)
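A minimal numerical counterpart of equation (1), using NumPy's FFT to obtain the frequency-amplitude representation of a test signal (the 10 Hz and 40 Hz tones are arbitrary choices):

    import numpy as np

    fs = 1000                               # sampling rate, Hz
    t = np.arange(0, 1, 1 / fs)             # 1 second of samples
    x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

    X = np.fft.rfft(x)                      # discrete analogue of equation (1)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)

    # The two strongest spectral bins land at the 10 Hz and 40 Hz components.
    peaks = freqs[np.argsort(np.abs(X))[-2:]]
    print(sorted(peaks.tolist()))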

Conclusion

The development of wavelets is an example where ideas from many different fields combined to form a whole that is more than the sum of its parts. Wavelet transforms have been widely employed in signal processing applications, particularly in image compression research, and have been used extensively in multi-resolution analysis (MRA) for image processing.


VISNAV


Introduction

        Nowadays there are several navigation systems for positioning objects, and several research efforts have been carried out in the field of six-degrees-of-freedom (6 DOF) estimation for rendezvous and proximity operations. One such navigation system used for 6 DOF position and attitude estimation is the VISion-based NAVigation (VISNAV) system. It aims at achieving better accuracy in 6 DOF estimation using a simpler and more robust approach. The VISNAV system uses a Position Sensitive Diode (PSD) sensor for 6 DOF estimation. The output currents from the PSD sensor determine the azimuth and elevation of a light source with respect to the sensor, and by having four or more light sources, called beacons, at known positions in the target frame, the six-degrees-of-freedom data associated with the sensor can be calculated.
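A minimal sketch of the first step in that chain follows: turning the four PSD electrode currents into a light-spot position and then into azimuth and elevation angles for one beacon. The current-to-position relation is the simplified standard one for a lateral-effect PSD, and the focal length, active-area size, and example currents are illustrative assumptions rather than VISNAV calibration values.

    import math

    FOCAL_LENGTH_MM = 10.0   # assumed effective focal length of the sensor optics
    HALF_SIZE_MM = 5.0       # assumed half-width of the PSD active area

    def psd_spot(i_right, i_left, i_top, i_bottom):
        """Light-spot position (mm) from the four PSD electrode currents,
        using the simplified lateral-effect PSD relation per axis."""
        x = (i_right - i_left) / (i_right + i_left)
        y = (i_top - i_bottom) / (i_top + i_bottom)
        return x * HALF_SIZE_MM, y * HALF_SIZE_MM

    def azimuth_elevation(i_right, i_left, i_top, i_bottom):
        """Line-of-sight angles to the active beacon, in degrees."""
        x_mm, y_mm = psd_spot(i_right, i_left, i_top, i_bottom)
        azimuth = math.degrees(math.atan2(x_mm, FOCAL_LENGTH_MM))
        elevation = math.degrees(math.atan2(y_mm, FOCAL_LENGTH_MM))
        return azimuth, elevation

    print(azimuth_elevation(1.2e-6, 0.8e-6, 1.0e-6, 1.0e-6))  # example currents (A)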

Aerial Refueling

        The aim of this application is to extend the operational envelope of unmanned aerial vehicles by designing an autonomous in-flight refueling system. One of the most difficult technical problems in autonomous flight refueling is accuracy: a highly accurate sensor is needed to measure the relative location of the tanker and the aircraft. Currently, the Global Positioning System (GPS) is limited to an accuracy of approximately one foot.



Dsp Implementation

         The beacons are multiplexed in FDM mode. A low-power fixed-point DSP, the TMS320C55x [2], is used for the beacon-separation and demodulation algorithm. An analog-to-digital converter samples the sensor's four currents asynchronously and feeds the samples to the TMS320C55x [2]. Each current has frequency components corresponding to the frequencies of the different beacons. For the case of eight beacons, the carrier frequencies start from 48.5 kHz with an inter-channel separation of 0.5 kHz, in order to stay clear of low-frequency background noise.

Demodulation

        Since we need to determine both the amplitude of the sinusoidal carrier and the associated variation caused by the relative movement of the sensor, an approach similar to AM demodulation is used here; the main difference is that we are also interested in the carrier amplitude itself. Although analog circuits could be used to perform the channel separation and demodulation, the DSP-based approach provides a more cost-effective solution with a higher degree of reliability, programmability, and scalability.
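A hedged Python sketch of the channel-separation and envelope-recovery idea (prototyped offline rather than on the TMS320C55x) is shown below: each beacon channel is isolated with a band-pass filter around its carrier, and its amplitude is estimated from the analytic-signal envelope. The sample rate, filter order, and bandwidth are illustrative assumptions, not the actual DSP implementation.

    import numpy as np
    from scipy.signal import butter, sosfilt, hilbert

    FS = 200_000                                         # assumed sample rate, Hz
    CARRIERS_HZ = [48_500 + 500 * k for k in range(8)]   # 48.5 kHz carriers, 0.5 kHz apart

    def channel_amplitude(signal, carrier_hz, bandwidth_hz=200, fs=FS):
        """Band-pass one beacon channel and return its mean envelope amplitude."""
        band = (carrier_hz - bandwidth_hz / 2, carrier_hz + bandwidth_hz / 2)
        sos = butter(2, band, btype="band", fs=fs, output="sos")
        narrow = sosfilt(sos, signal)
        return float(np.mean(np.abs(hilbert(narrow))))   # AM-style envelope estimate

    # Example: beacons 1 and 4 present with different strengths.
    t = np.arange(0, 0.05, 1 / FS)
    x = (1.0 * np.sin(2 * np.pi * CARRIERS_HZ[0] * t)
         + 0.3 * np.sin(2 * np.pi * CARRIERS_HZ[3] * t))
    print([round(channel_amplitude(x, f), 2) for f in CARRIERS_HZ[:4]])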

Abstract

        Spacecraft missions such as spacecraft docking and formation flying require high-precision relative position and attitude data. Although the Global Positioning System (GPS) can provide this capability near the Earth, deep-space missions require the use of alternative technologies. One such technology is the vision-based navigation (VISNAV) sensor system developed at Texas A&M University. It comprises an electro-optical sensor combined with light sources, or beacons. This patented sensor has an analog detector in the focal plane with a rise time of a few microseconds.

Conclusion


        A new method for operating the beacons and demodulating the beacon currents of the VISNAV sensor system has been introduced here. It is shown that target differentiation based on FDM yields higher signal-to-noise ratios for the sensor measurements, and that demodulation in the digital domain using multirate signal-processing techniques brings reliability and flexibility to the sensor system. The algorithm implemented on the DSP is robust when there are four or more line-of-sight measurements, except near certain geometric conditions that are rarely encountered.
