Solar Power Satellites (SPS)

About

The new millennium has brought increased pressure to find new renewable energy sources. The exponential increase in population has led to global crises such as global warming, environmental pollution, climate change and the rapid depletion of fossil fuel reserves. The demand for electric power is also increasing at a much higher pace than other energy demands as the world becomes industrialized and computerized. Under these circumstances, research has been carried out into the possibility of building a power station in space that transmits electricity to Earth by way of radio waves: the Solar Power Satellite. A Solar Power Satellite (SPS) converts solar energy into microwaves and sends them in a beam to a receiving antenna on Earth for conversion back into ordinary electricity. SPS is a clean, large-scale, stable electric power source. The Solar Power Satellite is known by a variety of other names, such as Satellite Power System, Space Power Station, Space Power System, Solar Power Station and Space Solar Power Station.

Klystron

Here a high-velocity electron beam is formed, focused and sent down a glass tube to a collector electrode, which is at a high positive potential with respect to the cathode. As the electrons, travelling at constant velocity, approach gap A, they are velocity-modulated by the RF voltage existing across this gap. Thus, as the beam progresses further down the drift tube, bunching of the electrons takes place. Eventually the current passes the catcher gap in quite pronounced bunches and therefore varies cyclically with time. This variation in current enables the klystron to have significant gain. Thus the catcher cavity is excited into oscillations at its resonant frequency and a large output is obtained.
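
To make the bunching mechanism concrete, here is a minimal numerical sketch in Python, with all beam parameters assumed for illustration: electrons leave gap A with a sinusoidally modulated velocity, and their arrival times at the catcher gap pile up into bunches.

```python
import numpy as np

# Toy model of klystron bunching (illustrative values, not a real design).
# Electrons cross the buncher gap at times t0 with velocity v = v0*(1 + m*sin(w*t0));
# after drifting a distance L they arrive at the catcher at t = t0 + L/v.
v0 = 1.0e7          # mean beam velocity, m/s (assumed)
m = 0.05            # depth of velocity modulation (assumed)
f = 3.0e9           # RF frequency, Hz (assumed)
L = 0.05            # drift-tube length, m (assumed)
w = 2 * np.pi * f

t0 = np.linspace(0, 2 / f, 2000)        # two RF cycles of gap-crossing times
v = v0 * (1 + m * np.sin(w * t0))       # velocity modulation at gap A
t_arrive = t0 + L / v                   # arrival times at the catcher gap

# Bunching shows up as a strongly non-uniform arrival-time histogram:
counts, _ = np.histogram(t_arrive, bins=50)
print("peak-to-average arrival density:", counts.max() / counts.mean())
```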

Figure: Klystron amplifier schematic diagram

Construction Of SPS From Non-Terrestrial Materials

The SPS, as mentioned before, is massive, and because of its size it must be constructed in space. Recent work also indicates that this unconventional but scientifically well-based approach should permit the production of power satellites without the need for any rocket vehicle more advanced than existing ones. The plan envisioned sending small segments of the satellites into space using the space shuttle. The projected cost of an SPS could be considerably reduced if extraterrestrial resources are employed in the construction. One often-discussed road to lunar resource utilization is to start with the mining and refining of lunar oxygen, the most abundant element in the Moon's crust, for use as a component of rocket fuel to support a lunar base as well as exploration missions. Lunar aluminum and silicon can be refined to produce solar arrays.

Beam Control

A key system and safety aspect of WPT is its ability to control the power beam. Retrodirective beam control systems have been the preferred method of achieving accurate beam pointing. As shown in the figure, a coded pilot signal is emitted from the rectenna towards the SPS transmitter to provide a phase reference for forming and pointing the power beams. To form the power beam and point it back towards the rectenna, the phase of the pilot signal captured by the receiver located at each subarray is compared to an onboard reference frequency distributed equally throughout the array. If a phase difference exists between the two signals, the received signal is phase-conjugated and fed back to the DC-RF converters. In the absence of the pilot signal, the transmitter automatically dephases its power beam, and the peak power density decreases by the ratio of the number of transmitter elements.
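
The following Python sketch illustrates the phase-conjugation idea (element count and path phases are arbitrary assumptions): each subarray retransmits the conjugate of the pilot phase it received, so the contributions re-add coherently at the rectenna, while without a pilot the sum collapses.

```python
import numpy as np

# Minimal sketch of retrodirective beam control (all values assumed).
rng = np.random.default_rng(0)
n_elements = 64
path_phase = rng.uniform(0, 2 * np.pi, n_elements)  # unknown phase from rectenna to each subarray

pilot_at_array = np.exp(1j * path_phase)             # pilot signal as received per subarray
tx_phase = np.conj(pilot_at_array)                   # phase conjugation per subarray

# The power beam accumulates the same path phase on the way back down:
signal_at_rectenna = np.sum(tx_phase * np.exp(1j * path_phase))
print("coherent gain:", abs(signal_at_rectenna) / n_elements)   # ~1.0 => fully focused

# Without the pilot, phases are uncorrelated and the beam defocuses:
defocused = np.sum(np.exp(1j * rng.uniform(0, 2 * np.pi, n_elements)) * np.exp(1j * path_phase))
print("defocused gain:", abs(defocused) / n_elements)           # ~1/sqrt(N)
```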

Rectenna

The rectenna is the microwave-to-DC converting device, composed mainly of a receiving antenna and a rectifying circuit. Fig. 8 shows the schematic of the rectenna circuit. It consists of a receiving antenna, an input low-pass filter, a rectifying circuit and an output smoothing filter. The input filter is needed to suppress re-radiation of the high harmonics that are generated by the nonlinear characteristics of the rectifying circuit. Because the rectifier is a highly nonlinear circuit, harmonic power levels must be suppressed. One method of suppressing harmonics is to place a frequency-selective surface in front of the rectenna circuit that passes the operating frequency and attenuates the harmonics.
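
As a rough illustration of the rectifying chain, the Python sketch below models an ideal half-wave rectifier followed by a first-order RC smoothing filter; the frequency and component values are assumptions, not a rectenna design.

```python
import numpy as np

# Toy rectenna chain: antenna voltage -> ideal diode rectifier -> RC smoothing.
f = 2.45e9                      # a typical ISM power-beam frequency, Hz
fs = 100 * f                    # simulation sample rate (assumed)
t = np.arange(0, 50 / f, 1 / fs)
v_rf = 1.0 * np.sin(2 * np.pi * f * t)      # received microwave voltage

v_rect = np.maximum(v_rf, 0.0)              # ideal half-wave rectification

# First-order RC low-pass as the output smoothing filter:
rc = 5 / f                                   # time constant >> RF period (assumed)
alpha = (1 / fs) / (rc + 1 / fs)
v_dc = np.zeros_like(v_rect)
for i in range(1, len(v_rect)):
    v_dc[i] = v_dc[i - 1] + alpha * (v_rect[i] - v_dc[i - 1])

print("smoothed DC output ~", v_dc[-1], "V (ideal half-wave average is 1/pi ~ 0.318 V)")
```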

Figure: Retrodirective beam control concept with an SPS

Conclusion


The SPS will be a central attraction of space and energy technology in the coming decades. However, large-scale retrodirective power transmission has not yet been proven and needs further development. Another important area of technological development will be the reduction of the size and weight of individual elements in the space segment of the SPS. Large-scale transportation and robotics for the construction of large structures in space are among the other major fields of technology requiring further development. These technical hurdles are expected to be cleared in the coming one or two decades.

Bio Molecular Computers

About

Molecular computing is an emerging field to which chemistry, biophysics, molecular biology, electronic engineering, solid-state physics and computer science contribute to a large extent. It involves the encoding, manipulation and retrieval of information at a macromolecular level, in contrast to current techniques, which accomplish these functions via IC miniaturization of bulk devices. Biological systems have unique abilities such as pattern recognition, learning, self-assembly and self-reproduction, as well as high-speed and parallel information processing.

Protein Based Optical Computing Memories

Much research has gone into developing a high-speed optical random access memory based on bacteriorhodopsin. Bacteriorhodopsin is a purple-coloured pigment occurring in the cell membrane of Halobacterium halobium. It utilizes solar energy to move protons across the membrane, resulting in a proton concentration difference. It is now known that under a high proton concentration the formation of ATP takes place, and this ATP is used to catalyse a reaction; by measuring the rate of reaction, one can create a logic gate. On being cooled to sufficiently low temperatures, a nanometer-sized section of the bR molecule will kink out of shape when struck by a green laser. Most importantly, the altered bR molecules can be made to snap back to their original form if hit by a red laser. Hence, bR can act as the basis for a molecular binary switch. This can be used to make large optical memories with access times potentially below two nanoseconds. Currently, access times of 20 ns have been achieved, the major limitation being the speed at which optical beams can be positioned to read or write single bits.
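
A toy Python sketch of the switching behaviour described above (a state machine, not a photochemical model) might look like this: a green pulse writes a 1, a red pulse writes a 0.

```python
# Toy model of a bacteriorhodopsin (bR) binary switch:
# a green laser kinks the molecule into its altered state (logical 1),
# a red laser snaps it back to the ground state (logical 0).

class BRSwitch:
    def __init__(self):
        self.state = 0            # ground state = binary 0

    def hit(self, laser: str) -> None:
        if laser == "green":
            self.state = 1        # photo-induced conformational change
        elif laser == "red":
            self.state = 0        # reverse transition back to ground state

    def read(self) -> int:
        return self.state

# A "word" of bR molecules behaves like an optically addressed RAM word:
word = [BRSwitch() for _ in range(8)]
for bit, laser in zip(word, ["green", "red"] * 4):
    bit.hit(laser)
print([b.read() for b in word])   # -> [1, 0, 1, 0, 1, 0, 1, 0]
```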



What About Efficiency?

In both the solid-surface glass-plate approach and the test tube approach, each DNA strand represents one possible answer to the problem that the computer is trying to solve. The strands have been synthesized by combining the building blocks of DNA, called nucleotides, with one another, using techniques developed for biotechnology. The set of DNA strands is manufactured so that all conceivable answers are included. Because a set of strands is tailored to a specific problem, a new set would have to be made for each new problem.
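
A minimal sketch of this enumeration idea, with a purely hypothetical bit-to-nucleotide encoding, might look like the following.

```python
from itertools import product

# Illustrative sketch (assumed encoding): represent each bit of a candidate
# answer by a short nucleotide codeword, then enumerate every conceivable
# answer as its own "strand", as the text describes.
BIT0, BIT1 = "ACGT", "TGCA"       # hypothetical codewords for 0 and 1

def strands_for(n_bits: int):
    """All 2**n_bits candidate answers, one synthetic strand per answer."""
    for bits in product((0, 1), repeat=n_bits):
        yield bits, "".join(BIT1 if b else BIT0 for b in bits)

for bits, strand in strands_for(3):
    print(bits, strand)
# A real experiment would then apply selection steps to discard the strands
# that violate the problem's constraints, leaving only valid answers.
```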

Design And Fabrication

The costs to design and build a 64-megabit memory chip run into billions of dollars, and these costs rise higher still for larger-capacity chips. In contrast, some biomolecular systems like bR offer the promise of being economically grown in a vat and quickly harvested, in a normal environment, controlled via ordinary chemistry or off-the-shelf laser diodes.

Quantum Effects

Quantum effects are introduced by the very small size of solid-state devices. This becomes important when the feature size is reduced to the point where one is dealing with individual atoms. Quantum effects such as unwanted tunneling of electrons pose a great difficulty. These effects can be nullified by averaging the output of redundant circuits, which makes fabrication costlier.
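
One way to picture the averaging-through-redundancy idea is a majority vote over unreliable copies of the same gate; the following Python sketch uses assumed error probabilities.

```python
import random
from collections import Counter

# Sketch: run N copies of a circuit whose outputs are individually
# unreliable (e.g. due to electron tunneling) and take the majority
# as the "averaged" output. All probabilities are assumed.

def noisy_gate(correct: int, error_prob: float) -> int:
    return correct ^ (random.random() < error_prob)

def redundant_output(correct: int, copies: int, error_prob: float) -> int:
    votes = Counter(noisy_gate(correct, error_prob) for _ in range(copies))
    return votes.most_common(1)[0][0]

random.seed(1)
trials = 10_000
errors = sum(redundant_output(1, copies=5, error_prob=0.1) != 1 for _ in range(trials))
print(f"error rate with 5-way redundancy: {errors / trials:.4f}")  # << 0.1
```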

Thermal Build Up

Semiconductor designers are always trying to shrink circuit line widths in order to increase overall processor speed, but this causes massive thermal dissipation problems. The tightly spaced electronic switches generate huge amounts of heat, which has to be dissipated at high speed. Such problems will not arise in biomolecular devices.

New Developments

The first applications were "brute force" solutions in which random DNA molecules were generated and then the correct sequence was identified. The first problems solved by DNA computation involved finding the optimal path by which a travelling salesman could visit a fixed number of cities once each. Recent work has shown how DNA can be employed to carry out a fundamental computer operation: the addition of two numbers expressed in binary.
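
The Boolean logic such a DNA adder has to realize is ordinary ripple-carry addition; a compact Python sketch of that logic is shown below.

```python
# The "fundamental computer operation" the text mentions is binary addition.
# A sketch of the Boolean logic a DNA adder must realize, per bit position:
def full_adder(a: int, b: int, carry_in: int):
    s = a ^ b ^ carry_in                        # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))  # carry bit
    return s, carry_out

def add_binary(x: str, y: str) -> str:
    """Ripple-carry addition of two binary strings."""
    a, b = x.zfill(len(y)), y.zfill(len(x))
    carry, out = 0, []
    for i in range(len(a) - 1, -1, -1):
        s, carry = full_adder(int(a[i]), int(b[i]), carry)
        out.append(str(s))
    if carry:
        out.append("1")
    return "".join(reversed(out))

print(add_binary("1011", "110"))   # 11 + 6 = 17 -> '10001'
```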

Conclusion

Biomolecular computers have real potential for solving problems of high computational complexity; however, many problems are still associated with this field. The difficulties include devising an interface, the sensitive dependence on a biological environment, and susceptibility to degradation, senescence, infection, etc. Nevertheless, this approach offers the best path to human cognitive equivalence.


Bubble Power


About

The standard of living in a society is measured by the amount of energy it consumes. In the present scenario, where conventional fuels are being depleted at a very fast rate, current energy reserves are not expected to last more than 100 years. Improving the harnessing efficiency of non-conventional energy sources like solar and wind as substitutes for conventional sources is under research.

A new development in this field is 'Bubble Power', a revolutionary new energy source that works on the principle of sonofusion. For several years, sonofusion research teams from various organizations have joined forces to create the Acoustic Fusion Technology Energy Consortium (AFTEC) to promote the development of sonofusion. Sonofusion was derived from a related phenomenon known as sonoluminescence. It involves tiny bubbles, imploded by sound waves, that can make hydrogen nuclei fuse and may one day become a practical energy source.

Sonofusion

The apparatus consists of a cylindrical Pyrex glass flask 100 mm high and 65 mm in diameter. A lead-zirconate-titanate ceramic piezoelectric crystal in the form of a ring is attached to the flask's outer surface. The piezoelectric ring works like the loudspeaker in a sonoluminescence experiment, although it creates much stronger pressure waves. When a positive voltage is applied to the piezoelectric ring, it contracts; when the voltage is removed, it expands to its original size.


Action Of Vacuum Pump

Naturally occurring gas bubbles cannot withstand high temperature and pressure, so virtually all the gas bubbles dissolved in the liquid are removed by attaching a vacuum pump to the flask and acoustically agitating the liquid.

Action Of The Wave Generator

To initiate the sonofusion process, we apply an oscillating voltage with a frequency of about 20,000 hertz to the piezoelectric ring. The alternating contractions and expansions of the ring, and thereby of the flask, send concentric pressure waves through the liquid. The waves interact, and after a while they set up an acoustic standing wave that resonates and concentrates a huge amount of sound energy. This wave causes the region at the flask's centre to oscillate between a maximum pressure (1,500 kPa) and a minimum pressure (-1,500 kPa).
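
The sketch below reproduces this drive numerically using the figures from the text (20 kHz, ±1,500 kPa); the sample rate is an assumption. It also locates the pressure minimum, which matters for the timing described in the next section.

```python
import numpy as np

# Idealised standing-wave pressure at the flask centre (values from the text).
f_drive = 20_000.0                 # drive frequency, Hz
p_amp = 1_500.0                    # pressure amplitude, kPa
fs = 10_000_000                    # simulation sample rate, Hz (assumed)
t = np.arange(0, 3 / f_drive, 1 / fs)

p_centre = p_amp * np.sin(2 * np.pi * f_drive * t)

# The neutron generator (next section) is fired exactly at minimum pressure:
i_min = np.argmin(p_centre)
print(f"first pressure minimum: {p_centre[i_min]:.0f} kPa at t = {t[i_min]*1e6:.1f} us")
# -> -1500 kPa at 37.5 us (three quarters of one 50 us acoustic period)
```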

Action Of The Neutron Generator

Precisely when the pressure reaches its lowest point, a pulsed neutron generator is fired. This is a commercially available, baseball-bat-sized device that sits next to the flask. The generator emits high-energy neutrons at 14.1 mega-electron-volts in a burst that lasts about six microseconds and goes out in all directions.

Other Approaches Of Fusion Reaction

There are two main approaches to fusion reactions other than bubble power:

1.      Laser Beam Technique

2.      Magnetic Confinement Fusion

Magnetic Confinement Fusion

It uses powerful magnetic fields to create immense heat and pressure in hydrogen plasma contained in a large toroidal device known as a tokamak. The fusion produces high-energy neutrons that escape the plasma and strike a liquid-filled blanket surrounding it. The idea is to use the heat produced in the blanket to generate steam to drive a turbine and thus generate electricity.

Conclusion

With the steady growth of world population and with economic progress in developing countries, average electricity consumption per person has increased significantly. Therefore, seeking new sources of energy isn't just important; it is necessary. For more than half a century, thermonuclear fusion has held out the promise of cheap, clean and virtually limitless energy. Unleashed through a fusion reactor of some sort, the energy from one gram of deuterium, an isotope of hydrogen, would be equivalent to that produced by burning 7,000 litres of gasoline.
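
As a hedged back-of-envelope check of that closing figure, assuming roughly 7 MeV released per deuteron for a complete burn and about 34 MJ per litre of gasoline:

```python
# Back-of-envelope check of the closing claim (all figures approximate,
# textbook-level assumptions rather than exact reaction accounting).
AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13

deuterons_per_gram = AVOGADRO / 2.014          # molar mass of D ~ 2.014 g/mol
energy_joules = deuterons_per_gram * 7.0 * MEV_TO_J   # ~7 MeV per deuteron (assumed)
gasoline_litres = energy_joules / 34e6                 # ~34 MJ/litre (assumed)

print(f"~{energy_joules:.2e} J per gram of deuterium")
print(f"~{gasoline_litres:,.0f} litres of gasoline equivalent")
# -> a few times 10^11 J, i.e. thousands of litres: the same order of
#    magnitude as the 7000 litres quoted above.
```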



Mobile IP


About

While Internet technologies largely succeed in overcoming the barriers of time and distance, existing Internet technologies have yet to fully accommodate the increasing mobile computer usage. A promising technology used to eliminate this current barrier is Mobile IP. The emerging 3G mobile networks are set to make a huge difference to the international business community. 3G networks will provide sufficient bandwidth to run most of the business computer applications while still providing a reasonable user experience. However, 3G networks are not based on only one standard, but a set of radio technology standards such as cdma2000, EDGE and WCDMA.

Mobile IP defines a Home Agent as an anchor point with which the mobile client always has a relationship, and a Foreign Agent, which acts as the local tunnel endpoint at the access network the mobile client is visiting. Depending on which network the mobile client is currently visiting, its point of attachment (Foreign Agent) may change. At each point of attachment, Mobile IP either requires the availability of a standalone Foreign Agent or the use of a co-located care-of address in the mobile client itself.
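
A minimal sketch of the Home Agent's bookkeeping (class and field names are illustrative, not the RFC message formats) might look like this:

```python
# The HA keeps a binding between each mobile node's permanent home address
# and its current care-of address, and tunnels any packet arriving for the
# home address to that care-of address.

class HomeAgent:
    def __init__(self):
        self.bindings = {}                    # home address -> care-of address

    def register(self, home_addr: str, care_of_addr: str) -> None:
        """Registration request relayed by the Foreign Agent."""
        self.bindings[home_addr] = care_of_addr

    def forward(self, dst_home_addr: str, payload: bytes):
        care_of = self.bindings.get(dst_home_addr)
        if care_of is None:
            return ("deliver_locally", dst_home_addr, payload)
        # Encapsulate: outer header to the care-of address (the forward tunnel)
        return ("tunnel", care_of, (dst_home_addr, payload))

ha = HomeAgent()
ha.register("198.51.100.7", "203.0.113.42")   # mobile node visits a foreign network
print(ha.forward("198.51.100.7", b"hello"))
```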

Abstract

The Mobile IP protocol was designed to support seamless and continuous Internet connectivity for mobile computing devices such as notebook PCs, cell phones, PDAs, etc. Utilizing Mobile IP, the mobile computing device is able to stay connected as it moves about and changes its point of attachment to the Internet. Both home and local resources, such as location based services, instant messaging, and email, are continuously accessible.

Reverse Tunneling

Another problem is that many Internet routers strictly filter out packets that do not originate from a topologically correct subnet. The solution to these problems is a technique called "reverse tunneling". Essentially, reverse tunneling means that in addition to the "forward tunnel" (from the Home Agent to the Foreign Agent), the Foreign Agent also tunnels packets from the mobile node back to the Home Agent instead of sending them directly to the Correspondent Node.
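
The following sketch shows both tunnels as plain IP-in-IP encapsulation; the addresses are illustrative, and real Mobile IP negotiates reverse tunneling during registration (RFC 3024).

```python
from dataclasses import dataclass

@dataclass
class IPPacket:
    src: str
    dst: str
    payload: object

HOME_AGENT, FOREIGN_AGENT = "192.0.2.1", "203.0.113.42"
MOBILE_HOME_ADDR, CORRESPONDENT = "198.51.100.7", "198.51.100.99"

# Forward tunnel: the HA wraps traffic for the mobile node toward the FA.
inner_fwd = IPPacket(CORRESPONDENT, MOBILE_HOME_ADDR, b"data")
forward_tunnel = IPPacket(HOME_AGENT, FOREIGN_AGENT, inner_fwd)

# Reverse tunnel: the FA wraps the mobile node's outgoing packet back to the
# HA, so the inner source address never appears bare on a foreign subnet
# where ingress filtering would drop it.
inner_rev = IPPacket(MOBILE_HOME_ADDR, CORRESPONDENT, b"reply")
reverse_tunnel = IPPacket(FOREIGN_AGENT, HOME_AGENT, inner_rev)

print(forward_tunnel)
print(reverse_tunnel)
```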


How A Mobile Node Sends Packets

Tunneling is generally not required when the mobile node sends a packet. The mobile node transmits an IP packet with its home address as the source IP address, and the packet is routed directly to its destination without unnecessarily traversing the home network. This technique fails, however, in networks that do source IP address checking, so reverse tunneling can be used if necessary.

ARP Resolution

IP is a logical address; for actual communication, a link-level address (the MAC address) is required. IP addresses are resolved into physical addresses using ARP (Address Resolution Protocol). But when the mobile node is away from its home network, the normal working of ARP is hindered, because the mobile node is not present in the home network to answer ARP requests. To handle this problem, Mobile IP describes two special uses of ARP: Proxy ARP and Gratuitous ARP.
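
For illustration, a gratuitous ARP payload can be built by hand; in gratuitous ARP the sender and target IP fields both carry the node's own address. The byte layout below follows the standard ARP packet, while the MAC and IP values are made up.

```python
import struct

def gratuitous_arp(mac: bytes, ip: bytes) -> bytes:
    """28-byte ARP payload announcing the sender's own IP (sender IP == target IP)."""
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,                 # hardware type: Ethernet
        0x0800,            # protocol type: IPv4
        6, 4,              # hardware / protocol address lengths
        2,                 # opcode 2 = reply (commonly used for gratuitous ARP)
        mac, ip,           # sender MAC and IP
        b"\xff" * 6, ip,   # target MAC (broadcast placeholder) and the same IP
    )

frame = gratuitous_arp(bytes.fromhex("02aabbccddee"), bytes([192, 0, 2, 7]))
print(len(frame), frame.hex())     # 28-byte ARP payload
```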

Private and Public Networks

We use the term "public network" to mean an IP network with public IP addresses. All public networks are interconnected via routers and thereby form the Internet. A private network, on the other hand, is an IP network that is isolated from the Internet in some way. A private network may use private or public IP addresses, and it may be connected to the Internet via a network address translator or a firewall. However, it is not part of the Internet, since its internal resources are protected from the Internet. Private networks may also use the Internet to interconnect multiple sites into a multi-site VPN solution.
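
Python's standard ipaddress module encodes exactly this distinction, which makes for a quick illustration:

```python
import ipaddress

# RFC 1918 private ranges report is_private = True; globally routable
# addresses report False.
for addr in ["10.1.2.3", "172.16.0.5", "192.168.1.1", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(addr, "->", "private" if ip.is_private else "public")
```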

AAA And Mobile IP Interworking

AAA (Authentication, Authorization and Accounting) protocols used in IP environments include the well-known RADIUS [11] protocol as well as the upcoming Diameter protocol [3,4]. Diameter is the successor of RADIUS and features, among other things, more advanced security functions as well as better means of tracking peer availability. Diameter is still undergoing standardization within the IETF AAA working group.

Conclusions

In this paper we have touched on multiple areas related to mobility in IP design, such as multi-access network mobility, applicable to both wire-line and wireless networks. We emphasize application-independent mobility with inherent support for all IP-based applications. Mobile IP together with AAA combines personal and terminal mobility with roaming services.

Datalogger

About
          A data logger (or datalogger) is an electronic instrument that records data over time or in relation to location. Increasingly, but not necessarily, data loggers are based on a digital processor (or computer). They may be small, battery-powered and portable, and vary from general-purpose types for a range of measurement applications to very specific devices for measuring in one environment only. It is common for general-purpose types to be programmable. Standardisation of protocols and data formats is growing in the industry, and XML is increasingly being adopted for data exchange. The development of the Semantic Web is likely to accelerate this trend. A smart protocol, SDI-12, exists that allows some instrumentation to be connected to a variety of data loggers, though its use has not gained much acceptance outside the environmental industry.

The Extensible Markup Language

          XML is a text-based markup language that is fast becoming the standard for data interchange on the Web. As with HTML, you identify data using tags (identifiers enclosed in angle brackets, like this: <...>). Collectively, the tags are known as "markup". But unlike HTML, XML tags identify the data rather than specifying how to display it. Where an HTML tag says something like "display this data in bold font" (<b>...</b>), an XML tag acts like a field name in your program: it puts a label on a piece of data that identifies it.
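
A short illustration with Python's standard ElementTree parser and a hypothetical logger record:

```python
import xml.etree.ElementTree as ET

# A hypothetical data-logger record: each field is named by its tag
# rather than identified by its position or display style.
record = """
<reading>
  <sensor>TH-01</sensor>
  <timestamp>2010-06-01T12:00:00</timestamp>
  <temperature unit="C">21.4</temperature>
</reading>
"""

root = ET.fromstring(record)
print(root.find("sensor").text)                    # TH-01
print(root.find("temperature").get("unit"))        # C
print(float(root.find("temperature").text))        # 21.4
```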


Data Logging Versus Data Acquisition

          The terms data logging and data acquisition are often used interchangeably. However, in a historical context they are quite different. A data logger is a data acquisition system, but a data acquisition system is not necessarily a data logger.

 Data loggers are implicitly stand-alone devices, while a typical data acquisition system must remain tethered to a computer to acquire data. This stand-alone aspect of data loggers implies on-board memory that is used to store acquired data. Sometimes this memory is very large, to accommodate many days or even months of unattended recording. It may be battery-backed static RAM, flash memory or EEPROM.
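
A minimal sketch of this stand-alone behaviour, with a bounded buffer standing in for on-board memory (all names illustrative):

```python
import time
from collections import deque

class DataLogger:
    def __init__(self, capacity: int):
        self.memory = deque(maxlen=capacity)   # stand-in for flash/SRAM

    def sample(self, read_sensor) -> None:
        """Runs unattended, no computer attached."""
        self.memory.append((time.time(), read_sensor()))

    def download(self):
        """Later, a computer is attached and drains the stored records."""
        records = list(self.memory)
        self.memory.clear()
        return records

logger = DataLogger(capacity=4)
for v in (20.1, 20.3, 20.2, 20.6, 21.0):       # oldest sample is overwritten
    logger.sample(lambda v=v: v)
print(logger.download())                        # the 4 most recent readings
```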

About The Modbus Protocol

          Modbus is a serial communications protocol published by Modicon in 1979 for use with its programmable logic controllers (PLCs). It has become a de facto standard communications protocol in industry, and is now the most commonly available means of connecting industrial electronic devices.
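
As an illustration, a Modbus RTU read request can be assembled by hand; the sketch below builds function 0x03 (Read Holding Registers) with the standard CRC-16/Modbus checksum, with the slave address and register range assumed.

```python
import struct

def crc16_modbus(data: bytes) -> int:
    """Standard CRC-16/Modbus (polynomial 0xA001, init 0xFFFF)."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def read_holding_registers(slave: int, start: int, count: int) -> bytes:
    pdu = struct.pack(">BBHH", slave, 0x03, start, count)
    return pdu + struct.pack("<H", crc16_modbus(pdu))   # CRC sent low byte first

frame = read_holding_registers(slave=1, start=0x0000, count=2)
print(frame.hex())   # '010300000002c40b'
```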

About The SDI-12 Protocol

          SDI-12 stands for serial data interface at 1200 baud. It is a standard for interfacing battery-powered data recorders with microprocessor-based sensors designed for environmental data acquisition (EDA).
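
A sketch of talking to such a sensor through a serial adapter with pyserial (assumed installed; port name and sensor address are assumptions). SDI-12 uses 1200 baud with 7 data bits, even parity and 1 stop bit; note that a real SDI-12 bus also requires a break signal before each command, which adapters usually generate.

```python
import serial  # pyserial, assumed installed

# SDI-12 line settings: 1200 baud, 7E1. The port name is an assumption.
port = serial.Serial(
    "/dev/ttyUSB0",
    baudrate=1200,
    bytesize=serial.SEVENBITS,
    parity=serial.PARITY_EVEN,
    stopbits=serial.STOPBITS_ONE,
    timeout=1.0,
)

port.write(b"0M!")            # ask the sensor at address 0 to start a measurement
reply = port.readline()       # e.g. b"00013\r\n": ready in 001 s, 3 values follow
print(reply)
port.close()
```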

Usage Of Micro-Processor Based Sensors

          A micro-processor in the sensor may calibrate the sensor, control sensor measurements, and convert raw sensor readings into engineering units. The micro-processor also controls the SDI-12 interface. It accepts and decodes instructions received from the data recorder, starts the measurements, controls all timing, and uses the SDI-12 protocol to communicate with the data recorder.

Online Analysis

          This step includes any analysis you would like to do before storing the data. A common example of this is converting the voltage measurement to meaningful scientific units, such as degrees Celsius. You can complete these complex calculations and data compression before logging the data. Controlling part of a system based on current measurements -- for example, a kill switch -- is also part of online analysis.
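
A small sketch of both steps, with an assumed 10 mV/°C sensor scaling and an assumed 85 °C trip point:

```python
# Online analysis: convert a raw voltage to engineering units, and run a
# kill switch off the live value. Scaling and trip point are assumptions.

def volts_to_celsius(v: float) -> float:
    return v / 0.010            # e.g. an LM35-style 10 mV per degree C sensor

def process_sample(v: float, log, shutdown):
    temp_c = volts_to_celsius(v)
    if temp_c > 85.0:           # kill switch: act before logging continues
        shutdown(temp_c)
    log(temp_c)

process_sample(
    0.214,
    log=lambda t: print(f"logged {t:.1f} C"),
    shutdown=lambda t: print(f"TRIP! {t:.1f} C exceeds limit"),
)
```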

Conclusion

          Data loggers are changing more rapidly now than ever before. The original model of a stand-alone data logger is changing to one of a device that collects data but also has access to wireless communications for alarming on events and automatic reporting of data. Data loggers are beginning to serve web pages of current readings, email their alarms and FTP their daily results into databases or directly to users. Technically speaking, a data logger is any device that can be used to store data.


Brain Gate


History

          After 10 years of study and research, Cyberkinetics, a biotech company in Foxboro, Massachusetts, developed BrainGate in 2003. Dr. John Donoghue, director of the brain science program at Brown University, Rhode Island, and chief scientific officer of Cyberkinetics, the company behind the brain implant, led the team that researched and developed this brain implant system.

Working

          The sensor, the size of a contact lens, is implanted in the brain's precentral gyrus, which controls hand and arm movements. A tiny wire connects the chip to a small pedestal secured in the skull, and a cable connects the pedestal to a computer. The brain's 100 billion neurons fire between 20 and 200 times a second. The sensor implanted in the brain senses these electrical signals and passes them to the pedestal through the wire; the pedestal passes the signals on to the computer through the cable.
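
As a toy illustration of how such signals could drive a cursor, the sketch below maps a vector of firing rates to a 2-D velocity through a linear decoder; the weights are random stand-ins, whereas a real system fits them to training data.

```python
import numpy as np

# Rate-based decoding sketch: each electrode reports a firing rate (the text
# gives 20-200 spikes/s) and a linear decoder maps the rate vector to a
# 2-D cursor velocity. All numbers are illustrative.
rng = np.random.default_rng(42)
n_electrodes = 100

W = rng.normal(scale=0.01, size=(2, n_electrodes))     # decoder, (vx, vy) rows
rates = rng.uniform(20, 200, size=n_electrodes)        # spikes/s from the array

baseline = rates.mean()
cursor_velocity = W @ (rates - baseline)               # modulation around baseline
print("cursor velocity (vx, vy):", cursor_velocity)
```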

BrainGate Neural Interface System

          The BrainGate Neural Interface System is currently the subject of a pilot clinical trial being conducted under an Investigational Device Exemption (IDE) from the FDA. The system is designed to restore functionality for a limited, immobile group of severely motor-impaired individuals. It is expected that people using the BrainGate System will employ a personal computer as the gateway to a range of self-directed activities. These activities may extend beyond typical computer functions (e.g., communication) to include the control of objects in the environment such as a telephone, a television and lights.


About

BrainGate is a brain implant system developed by the biotech company Cyberkinetics in 2003, in conjunction with the Department of Neuroscience at Brown University. The device was designed to help those who have lost control of their limbs or other bodily functions, such as patients with amyotrophic lateral sclerosis (ALS) or spinal cord injury. The computer chip, which is implanted into the brain, monitors brain activity in the patient and converts the intention of the user into computer commands. Cyberkinetics states that "such applications may include novel communications interfaces for motor impaired patients, as well as the monitoring and treatment of certain diseases which manifest themselves in patterns of brain activity, such as epilepsy and depression." Currently the chip uses 100 hair-thin electrodes that sense the electromagnetic signature of neurons firing in specific areas of the brain, for example, the area that controls arm movement. The activities are translated into electrically charged signals, which are then sent to and decoded by a program that can move either a robotic arm or a computer cursor.

Brain-Computer Interface

          A brain-computer interface (BCI), sometimes called a direct neural interface or a brain-machine interface, is a direct communication pathway between a human or animal brain (or brain cell culture) and an external device. In one-way BCIs, computers either accept commands from the brain or send signals to it (for example, to restore vision), but not both. Two-way BCIs would allow brains and external devices to exchange information in both directions, but have yet to be successfully implanted in animals or humans. In this definition, the word brain means the brain or nervous system of an organic life form rather than the mind. Computer means any processing or computational device, from simple circuits to silicon chips (including hypothetical future technologies such as quantum computing).

Future Of Neural Interfaces

          Cyberkinetics has a vision, CEO Tim Surgenor explained to Gizmag, but it is not promising "miracle cures", or that quadriplegic people will be able to walk again - yet. Their primary goal is to help restore many activities of daily living that are impossible for paralyzed people and to provide a platform for the development of a wide range of other assistive devices.  Cyberkinetics hopes to refine the BrainGate in the next two years to develop a wireless device that is completely implantable and doesn't have a plug, making it safer and less visible.  Surgenor also sees a time not too far off where normal humans are interfacing with BrainGate technology to enhance their relationship with the digital world - if they're willing to be implanted.

Conclusion

The invention of BrainGate is a revolution in the medical field. This remarkable breakthrough offers hope that people who are paralysed will one day be able to independently operate artificial limbs, computers or wheelchairs.


Asynchronous Chips


What Are The Potential Benefits Of Asynchronous Systems?

          First, asynchrony may speed up computers. In a synchronous chip, the clock's rhythm must be slow enough to accommodate the slowest action in the chip's circuits. If it takes a billionth of a second for one circuit to complete its operation, the chip cannot run faster than one gigahertz. Even though many other circuits on that chip may be able to complete their operations in less time, these circuits must wait until the clock ticks again before proceeding to the next logical step. In contrast, each part of an asynchronous system takes as much or as little time for each action as it needs.

          Complex operations can take more time than average, and simple ones can take less. Actions can start as soon as the prerequisite actions are done, without waiting for the next tick of the clock. Thus the system's speed depends on the average action time rather than the slowest action time.
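
A toy simulation of this claim, with randomly assumed per-operation delays:

```python
import random

# A clocked pipeline advances at the pace of its SLOWEST operation, a
# self-timed one at the AVERAGE pace. Delays are random stand-ins for
# real circuit latencies.
random.seed(7)
n_ops = 10_000
delays = [random.uniform(0.2, 1.0) for _ in range(n_ops)]   # ns per operation

clock_period = 1.0                    # must cover the worst-case 1.0 ns action
synchronous_time = n_ops * clock_period
asynchronous_time = sum(delays)       # each action takes only the time it needs

print(f"synchronous : {synchronous_time:.0f} ns")
print(f"asynchronous: {asynchronous_time:.0f} ns  "
      f"(~{synchronous_time / asynchronous_time:.1f}x faster)")
```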

How Fast Is Your Personal Computer?

          When people ask this question, they are typically referring to the frequency of a minuscule clock inside the computer, a crystal oscillator that sets the basic rhythm used throughout the machine. In a computer with a speed of one gigahertz, for example, the crystal "ticks" a billion times a second. Every action of the computer takes place in these tiny steps; complex calculations may take many steps. All operations, however, must begin and end according to the clock's timing signals.


Asynchronous Logic

          Asynchronous logic is a data-driven circuit design technique in which, instead of the components sharing a common clock and exchanging data on clock edges, data is passed on as soon as it is available. This removes the need to distribute a common clock signal throughout the circuit with acceptable clock skew. It also helps to reduce power dissipation in CMOS circuits, because gates only switch when they are doing useful work rather than on every clock edge.

About

          Computer chips of today are synchronous: they contain a main clock, which controls the timing of the entire chip. There are problems, however, involved with these clocked designs that are common today.

          One problem is speed. A chip can only work as fast as its slowest component; therefore, if one part of the chip is especially slow, the other parts are forced to sit idle. This wasted compute time is obviously detrimental to the speed of the chip.

         The other major problem with clocked design is power consumption. The clock consumes more power than any other component of the chip. The most disturbing thing about this is that the clock serves no direct computational use: a clock does not perform operations on information; it simply orchestrates the computational parts of the computer.

Local Operation

          To describe how asynchronous systems work, we often use the metaphor of the bucket brigade. A clocked system is like a bucket brigade in which each person must pass and receive buckets according to the tick-tock rhythm of the clock. When the clock ticks, each person pushes a bucket forward to the next person down the line; when the clock tocks, each person grasps the bucket pushed forward by the preceding person. The rhythm of this brigade cannot go faster than the time it takes the slowest person to move the heaviest bucket. Even if most of the buckets are light, everyone in the line must wait for the clock to tick before passing the next bucket.

Abstract

          Breaking the bounds of the clock on a processor may seem a daunting task to those brought up through a typical engineering program. Without the clock, how do you organize the chip and know when you have the correct data or instruction? We may have to take this task on very soon.

Today, we have the advanced manufacturing processes to make chips extremely accurate. Because of this, it is possible to create prototype processors without a clock. But will these chips catch on? A major hindrance to the development of clockless chips is the competitiveness of the computer industry. Presently, it is nearly impossible for companies to develop and manufacture a clockless chip while keeping the cost reasonable. Until this is possible, clockless chips will not be a major player in the market.

Conclusion


Clocks have served the electronics design industry very well for a long time, but there are significant difficulties looming for clocked design in the future. These difficulties are most obvious in complex SoC development, where electrical noise, power and design costs threaten to render the potential of future process technologies inaccessible to clocked design.


Brain Chips


About

An implantable brain-computer interface the size of an aspirin has been clinically tested on humans by the American company Cyberkinetics. The BrainGate device can provide paralyzed or motor-impaired patients with a mode of communication through the translation of thought into direct computer control. The technology driving this breakthrough in the brain-machine interface field has a myriad of potential applications, including the development of human augmentation for military and commercial purposes. In the current human trials of the BrainGate system, a 25-year-old quadriplegic has successfully been able to switch on lights, adjust the volume on a TV, change channels and read e-mail using only his brain. Crucially, the patient was able to do these tasks while carrying on a conversation and moving his head at the same time. John Donoghue, the chairman of the Department of Neuroscience at Brown University, led the original research project and went on to co-found Cyberkinetics, where he is currently chief scientific officer overseeing the clinical trial. It is expected that people using the BrainGate system will employ a personal computer as the gateway to a range of self-directed activities. These activities may extend beyond typical computer functions (e.g., communication) to include the control of objects in the environment such as a telephone, a television and lights. Usually the brain is connected to an external computer system through a chip composed of electrodes.


Invasive BCIs

Invasive BCI research has targeted repairing damaged or congenitally absent sight and hearing and providing new functionality to paralyzed people. There has been great success in using cochlear implants in humans as a treatment for non-congenital deafness, but it is not clear that these can be considered brain-computer interfaces. There is also promising research in vision science, where direct brain implants have been used to treat non-congenital blindness. One of the first scientists to come up with a working brain interface to restore sight was the private researcher William Dobelle. Dobelle's first prototype was implanted into Jerry, a man blinded in adulthood, in 1978. A single-array BCI containing 68 electrodes was implanted onto Jerry's visual cortex and succeeded in producing phosphenes. The system included TV cameras mounted on glasses to send signals to the implant. Initially the implant allowed Jerry to see shades of grey in a limited field of vision at a low frame rate, and also required him to be hooked up to a two-ton mainframe. Shrinking electronics and faster computers later made his artificial eye more portable and allowed him to perform simple tasks unassisted.

Abstract

Thousands of people around the world suffer from paralysis, rendering them dependent on others to perform even the most basic tasks. But that could change because of the latest achievements in the brain-computer interface (BCI) field, which could help them regain a portion of their lost independence. Even normal humans may be able to use brain chip technology to enhance their relationship with the digital world, provided they are willing to receive the implant. The term 'brain-computer interface' refers to the direct interaction between a healthy brain and a computer. Intense efforts and research in the BCI field over the past decade have recently resulted in a human BCI implantation, which is great news for all of us, especially for those who have been resigned to spending their lives in wheelchairs.

Conclusion

We conclude that neural interfaces have emerged as effective interventions to reduce the burden associated with some neurological diseases, injuries and disabilities. The BrainGate helps quadriplegic patients, who cannot perform even simple actions without the help of another person, to do things like checking e-mail, turning the TV on or off, and controlling a prosthetic arm, with just their thoughts.




Artificial Intelligence Substation Control


About

          Electric substations are facilities in charge of voltage transformation to provide safe and effective energy to consumers. This energy supply has to be carried out with sufficient quality and should guarantee equipment security. The associated cost of ensuring quality and security during supply in substations is high.

          Even when all the magnitudes to be controlled cannot be included in the analysis (mostly due to the great number of measurements and status variables of the substation and, therefore, to the number of rules that would be required by the controller), it is possible to control the desired status while supervising some important magnitudes, such as the voltage, power factor and harmonic distortion, as well as the present status.

Experimental Results

          To carry out the experiment, software for the Windows 9x/2000 platform was developed using Delphi, and signal generators were used for the analog input variables. The experiment started from status 0011. During the first 400 measurements, 243 actions could not be determined by the controller; an expert gave the answers, and as a result 243 new controller rules were extracted.


Plant Description

           The system under study represents a test substation with two 30 kVA three-phase transformers, two CBs, two switches, three current transformers and two potential transformers. It also contains an autotransformer (to regulate the input voltage) as well as an impedance to simulate the existence of a transmission line. The input and output voltages are the same (220 V); this characteristic was selected in order to analyze the operation of the controller at laboratory scale in a second stage of the development of the present work. The first transformer therefore increases the voltage to a value of 13.2 kV, while the second lowers it again to 220 V. A fixed filter, an automatic filter for control of the power factor and regulation of the voltage, and three feeding lines with diverse types of loads of different natures (including nonlinear loads) are connected through CBs to the output bar.

Inference Module

          The controller outputs are decided by searching the rule base. In this step, called inference, the firing degree of each rule is calculated. Since the consequent part of each rule only deals with status variables whose values are crisp numbers (0 and 1), a defuzzification method is not necessary. Therefore, the controller output in each case will be the consequent part of the rule with the largest firing degree.
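
A minimal sketch of this inference step, with made-up membership values and rules: the firing degree is taken as the minimum of the antecedent memberships, and the output is the crisp consequent of the best-firing rule.

```python
# Fuzzy inference with crisp consequents, so no defuzzification is needed.
# Membership values, variable names and rules below are illustrative.

def firing_degree(memberships: dict, antecedent: dict) -> float:
    return min(memberships[var][term] for var, term in antecedent.items())

# Fuzzified measurements: membership degree of each magnitude in each set.
memberships = {
    "voltage":      {"low": 0.2, "normal": 0.7, "high": 0.1},
    "power_factor": {"poor": 0.8, "good": 0.2},
}

# Rules: fuzzy antecedent -> crisp breaker-status consequent.
rules = [
    ({"voltage": "normal", "power_factor": "poor"}, {"capacitor_cb": 1}),
    ({"voltage": "high",   "power_factor": "good"}, {"capacitor_cb": 0}),
]

best = max(rules, key=lambda rule: firing_degree(memberships, rule[0]))
print("firing degree:", firing_degree(memberships, best[0]), "->", best[1])
# -> 0.7 via min(0.7, 0.8); the controller switches the capacitor bank in.
```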
       
Abstract

Controlling a substation by a fuzzy controller speeds up response time and diminishes the possibility of risks normally related to human operations. The automation of electric substations is an area under constant development. Our research has focused on the selection of the magnitudes to be controlled, the definition and implementation of the soft-computing techniques, and the elaboration of a programming tool to execute the control operations. It is possible to control the desired status while supervising some important magnitudes, such as the voltage, power factor and harmonic distortion, as well as the present status. The status of the circuit breakers can be controlled by using a knowledge base that relates some of the operating magnitudes, mixing status variables with time variables and fuzzy sets.

Conclusion

Electric substations are facilities in charge of voltage transformation to provide safe and effective energy to consumers. This energy supply has to be carried out with sufficient quality and should guarantee equipment security, and the associated cost of ensuring this is high. Automatic mechanisms are generally used on a greater or lesser scale, although they mostly operate according to an individual control and protection logic related to the equipment itself.
  