Brain Gate


History

          After 10 years of study and research, Cyberkinetics, a biotech company in Foxboro, Massachusetts, developed BrainGate in 2003. Dr. John Donoghue, director of the brain science program at Brown University, Rhode Island, and chief scientific officer of Cyberkinetics, the company behind the brain implant, led the team that researched and developed this brain implant system.

Working

          The sensor, about the size of a contact lens, is implanted in the brain's precentral gyrus, which controls hand and arm movements. A tiny wire connects the chip to a small pedestal secured in the skull, and a cable connects the pedestal to a computer. The brain's 100 billion neurons fire between 20 and 200 times a second. The sensor implanted in the brain picks up these electrical signals and passes them to the pedestal through the wire, and the pedestal passes them on to the computer through the cable.
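          As a rough illustration of what the computer does with these signals, the sketch below (not the actual BrainGate decoder) maps per-electrode firing rates to a two-dimensional cursor velocity using a simple linear decoder whose weights are invented for the example.

import numpy as np

# Minimal sketch (not the actual BrainGate algorithm): map per-electrode
# firing rates to a 2-D cursor velocity with a linear decoder.
N_ELECTRODES = 100                              # the implanted array has about 100 electrodes
W = np.random.randn(2, N_ELECTRODES) * 0.01     # hypothetical decoding weights

def decode_cursor_velocity(firing_rates):
    """firing_rates: spikes per second on each electrode (roughly 20-200 Hz)."""
    return W @ firing_rates                     # (vx, vy) in arbitrary screen units

# Example: one short window of simulated firing rates
rates = np.random.uniform(20, 200, N_ELECTRODES)
vx, vy = decode_cursor_velocity(rates)
print(f"cursor velocity: ({vx:.2f}, {vy:.2f})")

          In a real system the decoding weights would be fitted during a calibration session in which the user imagines moving a cursor that is actually driven by the computer.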

Braingate Neural Interface System

          The BrainGate Neural Interface System is currently the subject of a pilot clinical trial being conducted under an Investigational Device Exemption (IDE) from the FDA. The system is designed to restore functionality for a limited, immobile group of severely motor-impaired individuals. It is expected that people using the BrainGate System will employ a personal computer as the gateway to a range of self-directed activities. These activities may extend beyond typical computer functions (e.g., communication) to include the control of objects in the environment such as a telephone, a television and lights.


About

BrainGate is a brain implant system developed in 2003 by the bio-tech company Cyberkinetics in conjunction with the Department of Neuroscience at Brown University. The device was designed to help those who have lost control of their limbs or other bodily functions, such as patients with amyotrophic lateral sclerosis (ALS) or spinal cord injury. The computer chip, which is implanted into the brain, monitors brain activity in the patient and converts the intention of the user into computer commands. According to Cyberkinetics, "such applications may include novel communications interfaces for motor impaired patients, as well as the monitoring and treatment of certain diseases which manifest themselves in patterns of brain activity, such as epilepsy and depression." Currently the chip uses 100 hair-thin electrodes that sense the electromagnetic signature of neurons firing in specific areas of the brain, for example, the area that controls arm movement. This activity is translated into electrical signals that are then sent to and decoded by a program, which can move either a robotic arm or a computer cursor.

Brain-Computer Interface

          A brain-computer interface (BCI), sometimes called a direct neural interface or a brain-machine interface, is a direct communication pathway between a human or animal brain (or brain cell culture) and an external device. In one-way BCIs, computers either accept commands from the brain or send signals to it (for example, to restore vision), but not both. Two-way BCIs would allow brains and external devices to exchange information in both directions but have yet to be successfully implanted in animals or humans. In this definition, the word brain means the brain or nervous system of an organic life form rather than the mind. Computer means any processing or computational device, from simple circuits to silicon chips (including hypothetical future technologies such as quantum computing).

Future Of Neural Interfaces

          Cyberkinetics has a vision, CEO Tim Surgenor explained to Gizmag, but it is not promising "miracle cures", or that quadriplegic people will be able to walk again - yet. Their primary goal is to help restore many activities of daily living that are impossible for paralyzed people and to provide a platform for the development of a wide range of other assistive devices. Cyberkinetics hopes to refine the BrainGate over the next two years into a wireless device that is completely implantable and doesn't have a plug, making it safer and less visible. Surgenor also sees a time, not too far off, when able-bodied humans will interface with BrainGate technology to enhance their relationship with the digital world - if they're willing to be implanted.

Conclusion

The invention of BrainGate is a revolution in the medical field. This remarkable breakthrough offers hope that people who are paralysed will one day be able to independently operate artificial limbs, computers or wheelchairs.


Asynchronous Chips


What Are The Potential Benefits Of Asynchronous Systems?

          First, asynchrony may speed up computers. In a synchronous chip, the clock’s rhythm must be slow enough to accommodate the slowest action in the chip’s circuits. If it takes a billionth of a second for one circuit to complete its operation, the chip cannot run faster than one gigahertz. Even though many other circuits on that chip may be able to complete their operations in less time, these circuits must wait until the clock ticks again before proceeding to the next logical step. In contrast, each part of an asynchronous system takes as much or as little time for each action as it needs.

          Complex operations can take more time than average, and simple ones can take less. Actions can start as soon as the prerequisite actions are done, without waiting for the next tick of the clock. Thus the system's speed depends on the average action time rather than the slowest action time.
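          The argument can be made concrete with a toy calculation. In the sketch below the per-operation delays are invented; a clocked design pays the worst-case delay on every operation, while a self-timed design pays only the delays actually needed.

import random

# Toy comparison (illustrative only): a clocked pipeline must tick at the
# slowest operation's delay, while an asynchronous design lets each
# operation take only as long as it actually needs.
random.seed(0)
op_delays_ns = [random.uniform(0.2, 1.0) for _ in range(10_000)]  # hypothetical delays

clock_period_ns = max(op_delays_ns)            # synchronous: every step waits for the worst case
sync_time = clock_period_ns * len(op_delays_ns)
async_time = sum(op_delays_ns)                 # asynchronous: each step finishes when it is done

print(f"synchronous : {sync_time / 1e3:.1f} us")
print(f"asynchronous: {async_time / 1e3:.1f} us  (bounded by the average, not the slowest, delay)")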

How Fast Is Your Personal Computer?

          When people ask this question, they are typically referring to the frequency of a minuscule clock inside the computer, a crystal oscillator that sets the basic rhythm used throughout the machine. In a computer with a speed of one gigahertz, for example, the crystal “ticks” a billion times a second. Every action of the computer takes place in tiny steps; complex calculations may take many steps. All operations, however, must begin and end according to the clock’s timing signals.


Asynchronous Logic

          Asynchronous logic is a data-driven circuit design technique in which, instead of the components sharing a common clock and exchanging data on clock edges, data is passed on as soon as it is available. This removes the need to distribute a common clock signal throughout the circuit with acceptable clock skew. It also helps to reduce power dissipation in CMOS circuits, because gates only switch when they are doing useful work rather than on every clock edge.
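          The data-driven idea can be loosely mimicked in software. The sketch below is only an analogy, not circuit design: a consumer acts on each value the moment the producer makes it available, with no shared clock edge; the delays are invented.

import queue, random, threading, time

# Illustrative sketch of data-driven (self-timed) flow: the consumer acts as
# soon as data arrives, with no shared clock. Delays are hypothetical.
channel = queue.Queue()

def producer():
    for value in range(5):
        time.sleep(random.uniform(0.01, 0.05))  # data becomes ready at irregular times
        channel.put(value)                      # "request": data is available
    channel.put(None)                           # end-of-stream marker

def consumer():
    while (item := channel.get()) is not None:  # blocks only until data actually arrives
        print("processed", item)

threading.Thread(target=producer).start()
consumer()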

About

          Computer chips of today are synchronous. They contain a main clock, which controls the timing of the entire chip. There are problems, however, involved with the clocked designs that are common today.

          One problem is speed. A chip can only work as fast as its slowest component. Therefore, if one part of the chip is especially slow, the other parts of the chip are forced to sit idle. This wasted compute time is obviously detrimental to the speed of the chip.

         The other major problem with clocked design is power consumption. The clock consumes more power than any other component of the chip. The most disturbing thing about this is that the clock serves no direct computational use. A clock does not perform operations on information; it simply orchestrates the computational parts of the computer.

 Local Operation

          To describe how asynchronous systems work, we often use the metaphor of the bucket brigade. A clocked system is like a bucket brigade in which each person must pass and receive buckets according to the tick-tock rhythm of the clock. When the clock ticks, each person pushes a bucket forward to the next person down the line. When the clock tocks, each person grasps the bucket pushed forward by the preceding person. The rhythm of this brigade cannot go faster than the time it takes the slowest person to move the heaviest bucket. Even if most of the buckets are light, everyone in the line must wait for the clock to tick before passing the next bucket.

Abstract

          Breaking the bounds of the clock on a processor may seem a daunting task to those brought up through a typical engineering program. Without the clock, how do you organize the chip and know when you have the correct data or instruction? We may have to take this task on very soon.

Today, we have the advanced manufacturing technology to make chips extremely accurate. Because of this, it is possible to create prototype processors without a clock. But will these chips catch on? A major hindrance to the development of clockless chips is the competitiveness of the computer industry. Presently, it is nearly impossible for companies to develop and manufacture a clockless chip while keeping the cost reasonable. Until this is possible, clockless chips will not be a major player in the market.

Conclusion


Clocks have served the electronics design industry very well for a long time, but there are significant difficulties looming for clocked design in the future. These difficulties are most obvious in complex SOC development, where electrical noise, power and design costs threaten to render the potential of future process technologies inaccessible to clocked design.


Brain Chips


About

An implantable brain-computer interface the size of an aspirin has been clinically tested on humans by the American company Cyberkinetics. The 'BrainGate' device can provide paralyzed or motor-impaired patients a mode of communication through the translation of thought into direct computer control. The technology driving this breakthrough in the brain-machine interface field has a myriad of potential applications, including the development of human augmentation for military and commercial purposes. In the current human trials of the BrainGate system, a 25-year-old quadriplegic has successfully been able to switch on lights, adjust the volume on a TV, change channels and read e-mail using only his brain. Crucially, the patient was able to do these tasks while carrying on a conversation and moving his head at the same time. John Donoghue, the chairman of the Department of Neuroscience at Brown University, led the original research project and went on to co-found Cyberkinetics, where he is currently chief scientific officer overseeing the clinical trial. It is expected that people using the BrainGate system will employ a personal computer as the gateway to a range of self-directed activities. These activities may extend beyond typical computer functions (e.g., communication) to include the control of objects in the environment such as a telephone, a television and lights. The brain is connected to the external computer system through a chip composed of electrodes.


Invasive Bcis

Invasive BCI research has targeted repairing damaged or congenitally absent sight and hearing and providing new functionality to paralyzed people. There has been great success in using cochlear implants in humans as a treatment for non-congenital deafness, but it is not clear that these can be considered brain-computer interfaces. There is also promising research in vision science where direct brain implants have been used to treat non-congenital blindness. One of the first scientists to come up with a working brain interface to restore sight was the private researcher William Dobelle. Dobelle's first prototype was implanted into Jerry, a man blinded in adulthood, in 1978. A single-array BCI containing 68 electrodes was implanted onto Jerry’s visual cortex and succeeded in producing phosphenes. The system included TV cameras mounted on glasses to send signals to the implant. Initially the implant allowed Jerry to see shades of grey in a limited field of vision at a low frame rate, and it also required him to be hooked up to a two-ton mainframe. Shrinking electronics and faster computers made his artificial eye more portable and allowed him to perform simple tasks unassisted.

Abstract

Thousands of people around the world suffer from paralysis, rendering them dependent on others to perform even the most basic tasks. But that could change because of the latest achievements in the brain-computer interface (BCI), which could help them regain a portion of their lost independence. Even able-bodied people may be able to use brain chip technology to enhance their relationship with the digital world, provided they are willing to receive the implant. The term ‘brain-computer interface’ refers to the direct interaction between a healthy brain and a computer. Intense effort and research in the BCI field over the past decade have recently resulted in a human BCI implantation, which is great news for all of us, especially for those who have been resigned to spending their lives in wheelchairs.

 Conclusion

We conclude that neural interfaces have emerged as effective interventions to reduce the burden associated with some neurological diseases, injuries and disabilities. BrainGate allows quadriplegic patients who cannot perform even simple actions without the help of another person to do things like check e-mail, turn the TV on or off, and control a prosthetic arm with just their thoughts.




Artificial Intelligence Substation Control


About

          Electric substations are facilities in charge of voltage transformation to provide safe and effective energy to consumers. This energy supply has to be carried out with sufficient quality and should guarantee equipment security. The associated cost of ensuring quality and security during supply in substations is high.

                     Even though not all the magnitudes to be controlled can be included in the analysis (mostly due to the great number of measurements and status variables of the substation and, therefore, to the number of rules that would be required by the controller), it is possible to control the desired status while supervising some important magnitudes such as the voltage, power factor and harmonic distortion, as well as the present status.

Experimental Results

          To carry out the experiment, software for the Windows 9x/2000 platform was developed in Delphi. Signal generators were used for the analog input variables. The experiment started from status 0011. During the first 400 measurements, 243 actions could not be determined by the controller; an expert supplied the answers, and as a result 243 new controller rules were extracted.


Plant Description

           The system under study represents a test substation with two 30 kVA three-phase transformers, two CBs, two switches, three current transformers and two potential transformers. It also contains an autotransformer (to regulate the input voltage) as well as an impedance to simulate the existence of a transmission line. The input and output voltages are the same (220 V); this characteristic was selected in order to analyze the operation of the controller at laboratory scale in a second stage of the development of the present work. Therefore, the first transformer raises the voltage to 13.2 kV, while the second lowers it again to 220 V. A fixed filter, an automatic filter for the control of the power factor and the regulation of the voltage, and three feeding lines with diverse types of loads of different natures (including nonlinear loads) are connected through CBs to the output bar.

Inference Module

                 The controller outputs are decided by searching the rule base. In this step, called inference, the firing degree of each rule is calculated. Since the consequent part of each rule only deals with status variables whose values are crisp numbers (0 and 1), a defuzzification method is not necessary. Therefore, the controller output in each case will be the consequent part of the rule with the largest firing degree.
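                 A minimal sketch of this inference step is given below; the membership functions, the supervised magnitudes and the three rules are invented for illustration and are not the substation's actual rule base.

# Illustrative sketch of the inference step described above. Membership
# functions and rules are hypothetical, not the real substation rule base.

def trap(x, a, b, c, d):
    """Trapezoidal membership function."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Example fuzzy sets for two supervised magnitudes
voltage_ok  = lambda v: trap(v, 210, 215, 225, 230)
voltage_low = lambda v: trap(v, 180, 190, 210, 215)
pf_good     = lambda p: trap(p, 0.85, 0.92, 1.00, 1.00)
pf_poor     = lambda p: trap(p, 0.00, 0.00, 0.80, 0.90)

def infer(voltage, power_factor):
    # Each rule: (firing degree of the antecedent, crisp breaker status as consequent)
    rules = [
        (min(voltage_ok(voltage),  pf_good(power_factor)), "0011"),  # keep present status
        (min(voltage_ok(voltage),  pf_poor(power_factor)), "0111"),  # switch in the capacitor filter
        (min(voltage_low(voltage), pf_good(power_factor)), "1011"),  # reconfigure for low voltage
    ]
    degree, status = max(rules, key=lambda r: r[0])
    return status if degree > 0 else None    # None: no rule fires, so ask the expert

print(infer(voltage=220, power_factor=0.95))   # -> "0011"
print(infer(voltage=220, power_factor=0.75))   # -> "0111"

                 The "ask the expert" branch corresponds to the rule-extraction procedure described in the experimental results: whenever no rule fires, the expert's answer is stored as a new rule.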
       
 Abstract

Controlling a substation with a fuzzy controller speeds up the response time and diminishes the possibility of the risks normally related to human operation. The automation of electric substations is an area under constant development. Our research has focused on the selection of the magnitudes to be controlled, the definition and implementation of the soft-computing techniques, and the elaboration of a programming tool to execute the control operations. It is possible to control the desired status while supervising some important magnitudes such as the voltage, power factor and harmonic distortion, as well as the present status. The status of the circuit breakers can be controlled using a knowledge base that relates some of the operating magnitudes, mixing status variables with time variables and fuzzy sets.

Conclusion

Electric substations are facilities in charge of voltage transformation to provide safe and effective energy to consumers. This energy supply has to be carried out with sufficient quality and should guarantee equipment security. The associated cost of ensuring quality and security during supply in substations is high. Automatic mechanisms are generally used on a greater or lesser scale, although they mostly operate according to an individual control and protection logic related to the equipment itself.
  

Blu Ray Disc


Blu-Ray Disc And HD-DVD

The HD-DVD format, originally called AOD or Advanced Optical Disc, is based on much of today's DVD technology and, as a result, suffers from many of its limitations. The format does not provide as big a technological step as Blu-ray Disc. For example, its pre-recorded capacities are only 15 GB for a single-layer disc, or 30 GB for a double-layer disc. Blu-ray Disc provides 67% more capacity per layer, at 25 GB for a single-layer and 50 GB for a double-layer disc. Although the HD-DVD format claims to keep initial investments for disc replicators and media manufacturers as low as possible, they still need to make substantial investments in modifying their production equipment to create HD-DVDs. But what is more important is that HD-DVD can be seen as just a transition technology, with a capacity not sufficient for the long term. It might not offer enough space to hold a high-definition feature along with bonus material in HD quality and additional material that can be revealed upon authorization via a network. When two discs are needed, this degrades the so-called cost benefit substantially.

Different Formats of Blu-ray Disc

BD-ROM : a read-only format developed for prerecorded content
BD-R   : a write-once format developed for PC storage
BD-RW  : a rewritable format developed for PC storage
BD-RE  : a rewritable format developed for HDTV recording


Data Management Parameters

 The logical organization of data on the disk and how those data are used are considerations for data management. Data management considerations have important implications in the application of optical disk technology to storage for HDTV. For example, simply using a more advanced error correction scheme on DVDs allows a 30% higher disk capacity compared to CDs. Data rate, video format, bit-rate scheme and HDTV play time are all data management issues. There is a basic difference in data management between CDs and DVDs. Since CDs were designed for audio, data are managed in a manner similar to data management for magnetic tape: long, contiguous files are used that are not easily subdivided and written in a random-access pattern. Efficient data retrieval is accomplished when these long files are read out in a contiguous fashion. To be sure, CDs are much more efficient than magnetic tape for pseudorandom access, but the management philosophy is the same. DVDs, on the other hand, are more like magnetic hard disks, where the file structure is designed to be used in a random-access architecture; that is, efficient recovery of variable-length files is achieved. In addition, the original error correction strategy for CDs was designed for error concealment when listening to audio, whereas DVDs use true error correction. Later generations of optical disks also follow the DVD model.
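 As a back-of-the-envelope example of one such data management issue, HDTV play time follows directly from disc capacity and average video bit rate. The sketch below uses an assumed 20 Mbps HDTV rate, which is an illustrative figure rather than a format specification.

# Rough sketch: HDTV play time as a function of disc capacity and an
# assumed average video bit rate (illustrative numbers only).

def play_time_hours(capacity_gb, avg_bitrate_mbps):
    capacity_bits = capacity_gb * 1e9 * 8
    seconds = capacity_bits / (avg_bitrate_mbps * 1e6)
    return seconds / 3600

for capacity in (15, 25, 30, 50):    # HD-DVD and Blu-ray single/double layer capacities
    print(f"{capacity} GB at 20 Mbps HDTV -> {play_time_hours(capacity, 20):.1f} h")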

Applications

·       High Definition Television Recording
     ·       High Definition Video Distribution
     ·       High Definition Camcorder Archiving
     ·       Mass Data Storage
     ·       Digital Asset Management and Professional Storage

Mass Data Storage

           In its day, CD-R/RW, with its 650 MB, meant a huge increase in storage capacity compared to traditional storage media. Then DVD surpassed this amount by offering 4.7 to 8.5 GB of storage, an impressive 5 to 10 times increase. Now consumers demand an even bigger storage capacity. The growing number of broadband connections allowing consumers to download vast amounts of data, as well as the ever-increasing audio, video and photo capabilities of personal computers, has led to yet another level of data storage requirements. In addition, commercial storage requirements are growing exponentially due to the proliferation of e-mail and the migration to paperless processes. The Blu-ray Disc format again offers 5 to 10 times as much capacity as traditional DVD, allowing 25 to 50 GB of data to be stored on a single rewritable or recordable disc.

Conclusion


           The BD represents a major advancement in capacity as well as data transfer rate, and it would be an ideal choice for secondary storage purposes. Semiconductor storage used as secondary memory is bulky, consumes more power and is more expensive. HDTV video recording and reproduction essentially require the large storage capacity and data transfer rates offered by the Blu-ray Disc. The Blu-ray Disc has a wide variety of applications and is a storage device that could lead to digital convergence, ultimately bringing together PC and CE technologies. In the opinion of many researchers (including those in the BDF group themselves), BD possibly represents the last of the plastic-based, visible-laser optical disc systems.


Digital Hubbub


About

Entertainment is a major aspect of any human's life, which brings out the importance of consumer electronics. Consumer electronics play a very important role in today's household entertainment devices such as TVs, VCRs, and music systems like CD and MP3 players. A lot of innovation is taking place in the field of consumer electronics.

Neologism In Digital Hubbub

As far as the electronic component market is concerned, the success of a product relies on its compactness and low cost. Like any electronic device, the digital hubbub is also required to be compact and cheap. Compactness is achieved by implementing chips with multiple functions, or, put another way, by merging two or three chips to perform a single function. Therefore, several electronics firms are doing a lot of research and development to bring about a much more compact and cheap digital hubbub.

Hardware

At the core of a home entertainment hub are
     ·       Central Processing Unit
     ·       Digital signal processing chips 
     ·       Hard disk drive
     ·       Universal serial bus port
     ·       PCMCIA connector
     ·       Ethernet jack

Central processing unit   :- As in a computer system, the CPU is the master of the hub. It deals with the data transfers that take place between the different peripherals and the hub, and it checks on the parallel operations taking place in the hub.


Digital signal processing chips :- The analog signals from various peripherals, like a TV set or a tape recorder, are received by analog-to-digital converters. The digitized data is accessed by the digital signal processing chips via their serial ports.

Hard disk drive  :- The hard disk drive is under the direct control of the CPU via a disk controller. As in any device, the hard disk drive is used to store data.

Universal Serial Bus (USB) :- The USB is a synchronous protocol that supports isochronous and asynchronous data and messaging transfers.

Personal Computer Memory card International Association (PCMCIA) :- PCMCIA cards are credit card size adapters which fit into PCMCIA slots found in most handheld and laptop computers.

Software

As in a normal personal computer, software looks after the user interface, applications, etc. The software of the digital hubbub can be considered as a series of layers. Innermost is an operating system that manages resources such as storage and CPU timing. The next layer is the middleware, which handles such housekeeping details as displaying text and graphics on TV screens. The middleware interprets the input from the front panel or remote control and enables the CPU to generate signals for the concerned function. It also deals with communication with the cable that supplies the digital video and data streams. The outermost layer handles the various applications.
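This layering can be sketched roughly as follows; the class and method names are hypothetical and only mirror the division of labour described above.

# Rough sketch of the software layering described above (names are hypothetical).

class OperatingSystem:                    # innermost layer: manages resources
    def allocate_storage(self, mb): print(f"OS: reserved {mb} MB on the hard disk")
    def schedule(self, task):       print(f"OS: scheduled '{task}' on the CPU")

class Middleware:                         # housekeeping: TV display, remote-control input
    def __init__(self, os): self.os = os
    def draw_text(self, text):      print(f"MW: rendering '{text}' on the TV screen")
    def on_remote_key(self, key):   return {"REC": "record", "PLAY": "playback"}.get(key)

class RecorderApp:                        # outermost layer: one of several applications
    def __init__(self, mw): self.mw = mw
    def handle(self, key):
        if self.mw.on_remote_key(key) == "record":
            self.mw.os.allocate_storage(4000)
            self.mw.os.schedule("encode incoming video stream")
            self.mw.draw_text("Recording...")

RecorderApp(Middleware(OperatingSystem())).handle("REC")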

A436 Parallel Video DSP Chip Overview

    ·       Highly optimized and efficient, general purpose, very high performance, 512b advanced imaging parallel DSP and 32b RISC processor (no MMU) in a single chip in a single instruction stream
    ·       Performance of 50,000 RISC MIPS for motion estimation and 3 billion MACS with only 100 MHz CPU clock.
    ·       Achieves very high performance with moderate CPU clock rate and main-stream fabrication

Abstract

As far as consumer electronics is concerned, the latest talk of the town is the digital hubbub. This device is used as a hub to interconnect the various devices in a home. Along with this interconnecting capability, the hub also incorporates several functions, such as the recording and playback of data streams from the various electronic devices in the house.


The merged company intends to roll out Moxi's software on Motorola's set-top box hardware; it is also moving forward with tests of Moxi media center prototypes among subscribers to Echostar Communications Corp. (Littleton, Colo.), a satellite TV service. Established makers of set-top boxes, including Royal Philips Electronics (Amsterdam, the Netherlands) and Pioneer Corp. (Tokyo)—and, of course, Motorola—are building boxes that include high-speed data connections and home-network capabilities, in addition to the digital TV decoders of ordinary cable systems.


BiCMOS Technology


BiCMOS

BiCMOS technologies possess better integration capability than bipolar-only technologies. It is not possible to develop very-high-density digital circuits in bipolar-only technologies, as these bipolar logic circuits consume static power. Also of importance to SOC systems is the vast set of large macro cells, including microprocessors, memory macros, and DSPs, that are available in most CMOS technologies, as well as the computer-aided design (CAD) infrastructure for multi-million gate systems, that exist in fine-line CMOS technologies.

About

          The history of semiconductor devices starts in the 1930s, when Lilienfeld and Heil first proposed the MOSFET. However, it took 30 years before this idea was applied to functioning devices used in practical applications, and the trend took a turn in the late 1980s when MOS technology caught up and there was a crossover between bipolar and MOS market share. CMOS found more widespread use due to its low power dissipation, high packing density and simple design, such that by 1990 CMOS covered more than 90% of total MOS sales.


System On Chip (Soc) Fundamentals

The concept of system-on-chip (SOC) has evolved as the number of gates available to a designer has increased and as CMOS technology has migrated from a minimum feature size of several microns to close to 0.1 µm. Over the last decade, the integration of analog circuit blocks has become an increasingly common feature of SOC development, motivated by the desire to shrink the number of chips and passives on a PC board. This, in turn, reduces system size and cost and improves reliability by requiring fewer components to be mounted on a PC board. Power dissipation of the system also improves with the elimination of the chip input-output (I/O) interconnect blocks. Superior matching and control of integrated components also allow new circuit architectures to be used that cannot be attempted in multi-chip architectures. Driving PC board traces consumes significant power, both in overcoming the larger capacitances on the PC board and through the larger signal swings needed to overcome signal crosstalk and noise on the PC board.

Process Integration Of The Sige Device

Next consider the low-cost integration of SiGe bipolar devices with the core CMOS process for mixed RF, analog and digital SOC chips. Figure 1 shows a cross section of the silicon wafer with both the CMOS and SiGe bipolar devices. The addition of a SiGe bipolar transistor module introduces a low-cost, high-performance, super-self-aligned (double-poly) graded-SiGe-base NPN transistor to the CMOS process. In a double-polysilicon bipolar transistor, the base and emitter polysilicon define the placements of the base and emitter regions, respectively. The emitter is formed using arsenic-doped poly. The capacitances from emitter to base (Ceb) and from base to collector (Cbc) are reduced because no extra implant width is required to account for registration errors between the active element of the device and its contact.

Capacitors

          Without any add-on modules, a CMOS process offers only the gate-to-semiconductor capacitance for capacitor formation. Not only is this highly nonlinear as the device transitions from accumulation to depletion, but it also results in a parasitic junction capacitance on the silicon side of the device that makes the capacitor incompatible with many circuits. Traditional CMOS processes designed for analog and mixed-signal applications have included an extra layer of polysilicon to form a poly-poly capacitor. This capacitor is much more linear, and its parasitic capacitance from the lower poly layer to the substrate is significantly reduced.

Abstract

          The need for high-performance, low-power and low-cost systems for network transport and wireless communications is driving silicon technology toward higher speed, higher integration and more functionality. Furthermore, this integration of RF and analog mixed-signal circuits into high-performance digital signal-processing (DSP) systems must be done with minimum cost overhead to be commercially viable. While some analog and RF designs have been attempted in mainstream digital-only complementary metal-oxide semiconductor (CMOS) technologies, almost all designs that require stringent RF performance use bipolar or BiCMOS technology.

Inductors

          Conflicting substrate requirements limit the integration of high-Q inductors with high-performance CMOS devices. Inductors fabricated using CMOS technologies based on epi/p+ substrates (Figure 8a) are severely degraded because of eddy-current losses in the substrate; the typical maximum quality factor Q reported on epi/p+ substrates is only 3.

 Conclusion

We have presented an overview of a SiGe modular BiCMOS process technology. Through the use of add-on modules compatible with the core CMOS process technology, large-scale chips combining digital, analog and RF technologies can be produced. Modules are added as required by the chip under development. By using the core process with added modules, the economies of scale associated with large-volume CMOS production are maintained without compromising the performance of the analog or RF circuits.


AI for Speech Recognition


What Is A Speech Recognition System?

A speech recognition system is a type of software that allows the user to have their spoken words converted into written text in a computer application such as a word processor or spreadsheet. The computer can also be controlled by the use of spoken commands.

            Speech recognition software can be installed on a personal computer of appropriate specification. The user speaks into a microphone (a headphone microphone is usually supplied with the product). The software generally requires an initial training and enrolment process in order to teach the software to recognise the voice of the user. A voice profile is then produced that is unique to that individual. This procedure also helps the user to learn how to ‘speak’ to a computer.

 About

         When you dial the telephone number of a big company, you are likely to hear the sonorous voice of a cultured lady who responds to your call with great courtesy, saying “Welcome to company X. Please give me the extension number you want.” You pronounce the extension number, your name, and the name of the person you want to contact. If the called person accepts the call, the connection is made quickly. This is artificial intelligence at work: an automatic call-handling system is used without employing any telephone operator.


 Working Of The System

              The voice input to the microphone produces an analogue speech signal. An analogue-to-digital converter (ADC) converts this speech signal into binary words that are compatible with a digital computer. The converted binary version is then stored in the system and compared with previously stored binary representations of words and phrases.
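              The comparison step can be pictured with a small sketch. The feature vectors and word templates below are invented for illustration; a real engine would use features such as MFCCs and models such as HMMs or dynamic time warping rather than a plain nearest-template match.

import numpy as np

# Minimal sketch of the "compare against stored representations" idea: each
# word has a stored feature template, and the spoken input is matched to the
# closest one. (Feature values here are made up for the example.)

templates = {
    "yes":  np.array([0.9, 0.1, 0.4, 0.7]),
    "no":   np.array([0.2, 0.8, 0.5, 0.1]),
    "stop": np.array([0.6, 0.6, 0.9, 0.3]),
}

def recognize(features):
    word, dist = min(((w, np.linalg.norm(features - t)) for w, t in templates.items()),
                     key=lambda wt: wt[1])
    return word, dist

utterance = np.array([0.85, 0.15, 0.45, 0.65])   # digitized, feature-extracted input
print(recognize(utterance))                       # -> ('yes', small distance)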

What Software Is Available?

There are a number of publishers of speech recognition software. New and improved versions are regularly produced, and older versions are often sold at greatly reduced prices. Invariably, the newest versions require the most modern computers of well above average specification. Using the software on a computer with a lower specification means that it will run very slowly and may well be impossible to use. There are two main types of speech recognition software: discrete speech and continuous speech.

Acceptance And Rejection

               When the recognition engine processes an utterance, it returns a result. The result can be either of two states: acceptance or rejection. An accepted utterance is one in which the engine returns recognized text. Whatever the caller says, the speech recognition engine tries very hard to match the utterance to a word or phrase in the active grammar.

              Sometimes the match may be poor because the caller said something that the application was not expecting, or the caller spoke indistinctly. In these cases, the speech engine returns the closest match, which might be incorrect. Some engines also return a confidence score along with the text to indicate the likelihood that the returned text is correct. Not all utterances that are processed by the speech engine are accepted. Acceptance or rejection is flagged by the engine with each processed utterance.
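              The way an application might act on such a confidence score can be sketched as below; the 0.60 threshold and the function itself are illustrative assumptions, not part of any particular engine's API.

# Sketch of accept/reject handling using a confidence score returned by the
# engine (the 0.60 threshold is an arbitrary example, not a standard value).

def handle_result(recognized_text, confidence, threshold=0.60):
    if recognized_text is None or confidence < threshold:
        return "REJECTED: please repeat that"          # re-prompt the caller
    return f"ACCEPTED: '{recognized_text}' ({confidence:.0%} confident)"

print(handle_result("check my balance", 0.91))
print(handle_result("chick mob balance", 0.42))        # poor match -> rejected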

The Limits Of Speech Recognition

            To improve speech recognition applications, designers must understand acoustic memory and prosody. Continued research and development should be able to improve certain speech input, output, and dialogue applications. Speech recognition and generation is sometimes helpful for environments that are hands-busy, eyes-busy, mobility-required, or hostile and shows promise for telephone-based services.

              Dictation input is increasingly accurate, but adoption outside the disabled-user community has been slow compared to visual interfaces. Obvious physical problems include fatigue from speaking continuously and the disruption in an office filled with people speaking.

       By understanding the cognitive processes surrounding human “acoustic memory” and processing, interface designers may be able to integrate speech more effectively and guide users more successfully. By appreciating the differences between human-human interaction and human-computer interaction, designers may then be able to choose appropriate applications for human use of speech with computers.

Conclusion

        Speech recognition will revolutionize the way people conduct business over the Web and will, ultimately, differentiate world-class e-businesses. VoiceXML ties speech recognition and telephony together and provides the technology with which businesses can develop and deploy voice-enabled Web solutions TODAY! These solutions can greatly expand the accessibility of Web-based self-service transactions to customers who would otherwise not have access and, at the same time, leverage a business’ existing Web investments.

