Chapter 1
1. INTRODUCTION

An artificial passenger (AP) is a device used in a motor vehicle to make sure that the driver stays awake. IBM has developed a prototype that holds a conversation with a driver, telling jokes and asking questions intended to determine whether the driver can respond alertly enough. In the IBM approach, an artificial passenger uses a microphone for the driver, and a speech generator and the vehicle's audio speakers to converse with the driver. The conversation is based on a personalized profile of the driver. A camera can be used to evaluate the driver's "facial state", and a voice analyzer to evaluate whether the driver is becoming drowsy. If a driver seems to display too much fatigue, the artificial passenger might be programmed to open all the windows, sound a buzzer, increase the background music volume, or even spray the driver with ice water.

Studies of road safety found that human error was the sole cause of more than half of all accidents. One of the reasons why humans commit so many errors lies in the inherent limitations of human information processing. With the increase in popularity of Telematics services in cars (such as navigation, cellular telephone, and internet access), there is more information that drivers need to process and more devices that drivers need to control, which may contribute to additional driving errors. This topic is devoted to a discussion of these and other aspects of driver safety.

The AP is an artificial intelligence–based companion that will be resident in software and chips embedded in the automobile dashboard. The heart of the system is a conversation planner that holds a profile of you, including details of your interests and profession. A microphone picks up your answer and breaks it down into separate words with speech-recognition software. A camera built into the dashboard also tracks your lip movements to improve the accuracy of the speech recognition. A voice analyzer then looks for signs of tiredness by checking to see if the answer matches your profile. Slow responses and a lack of intonation are signs of fatigue. This research suggests that we can make predictions about various aspects of driver performance based on what we glean from the movements of a driver's eyes, and that a system can eventually be developed to capture this data and use it to alert people when their driving has become significantly impaired by fatigue.
The natural dialog car system analyzes a driver's answer and the contents of the answer, together with his voice patterns, to determine if he is alert while driving. The system warns the driver or changes the topic of conversation if it determines that the driver is about to fall asleep. The system may also detect whether a driver is affected by alcohol or drugs.

1.1 Artificial Passenger Overview

The AP is an artificial intelligence–based companion that will be resident in software and chips embedded in the automobile dashboard. The heart of the system is a conversation planner that holds a profile of you, including details of your interests and profession. When activated, the AP uses the profile to cook up provocative questions such as, "Who was the first person you dated?" via a speech generator and in-car speakers. A microphone picks up your answer and breaks it down into separate words with speech-recognition software. A camera built into the dashboard also tracks your lip movements to improve the accuracy of the speech recognition. A voice analyzer then looks for signs of tiredness by checking to see if the answer matches your profile. Slow responses and a lack of intonation are signs of fatigue. If you reply quickly and clearly, the system judges you to be alert and tells the conversation planner to continue the line of questioning. If your response is slow or doesn't make sense, the voice analyzer assumes you are dropping off and acts to get your attention.

The system, according to its inventors, does not go through a suite of rote questions demanding rote answers. Rather, it knows your tastes and will even, if you wish, make certain you never miss Paul Harvey again. This is from the patent application: "An even further object of the present invention is to provide a natural dialog car system that understands content of tapes, books, and radio programs and extracts and reproduces appropriate phrases from those materials while it is talking with a driver. For example, a system can find out if someone is singing on a channel of a radio station. The system will state, 'And now you will hear a wonderful song!' or detect that there is news and state, 'Do you know what happened now—hear the following,' and play some news. The system also includes a recognition system to detect who is speaking over the radio and alert the driver if the person speaking is one the driver wishes to hear."

Just because you can express the rules of grammar in software doesn't mean a driver is going to use them. The AP is ready for that possibility. It provides for a natural dialog car system directed to human factor engineering, for example, people using different strategies to talk (for instance, short vs. elaborate responses). In this manner, the individual is guided to talk in a certain way so as to make
the system work, e.g., "Sorry, I didn't get it. Could you say it briefly?" Here, the system defines a narrow topic of the user reply (answer or question) via an association of classes of relevant words via decision trees. The system builds a reply sentence by asking: what are the most probable word sequences that could follow the user's reply?
Chapter 2
2. BACKGROUND OF THE INVENTION

During the night the driver can become sleepy, which makes him prone to accidents. In order to overcome sleepiness, the driver has traditionally taken one or more of the following precautions:

• Use of stimulant drinks (e.g., coffee and tea)
• Tablets to prevent sleeping
• A miniature system installed in the driver's hat

These methods are sometimes inefficient and may affect the health of the driver. In order to overcome their disadvantages, IBM introduced a new sleep-prevention technology called the "Artificial Passenger", developed by Dimitry Kanevsky and Wlodek Zadrozny. This software holds a conversation with the driver to determine whether the driver can respond alertly enough. The name "artificial passenger" was first suggested in New Scientist magazine; the system was designed to make solo journeys safer and more bearable.

Early techniques for determining head-pose used devices that were fixed to the head of the subject to be tracked. For example, reflective devices were attached to the subject's head and, using a light source to illuminate the reflectors, the reflector locations were determined. As such reflective devices are more easily tracked than the head itself, the problem of tracking head-pose was simplified greatly. Virtual-reality headsets are another example of the subject wearing a device for the purpose of head-pose tracking. These devices typically rely on a directional antenna and radio-frequency sources, or directional magnetic measurement, to determine head-pose. Wearing a device of any sort is clearly a disadvantage, as the user's competence with, and acceptance of, the device then directly affects the reliability of the system. Devices are generally intrusive and will affect a user's behavior, preventing natural motion or operation.

Structured-light techniques that project patterns of light onto the face in order to determine head-pose are also known. The light patterns are structured to facilitate the recovery of 3D information using simple image processing. However, the technique is prone to error under lighting variation and is therefore unsuitable for use under natural lighting conditions.
Chapter 3
3. APPLICATIONS OF ARTIFICIAL PASSENGER

First introduced in the US, this sensor/software system detects and counteracts sleepiness behind the wheel. Seventies staples John Travolta and the Eagles made successful comebacks, and another is trying: that voice in the automobile dashboard that used to remind drivers to check the headlights and buckle up could return to new cars in just a few years: this time with jokes, a huge vocabulary, and a spray bottle. The following are the applications of the artificial passenger:
• Artificial Passenger is broadly used to prevent accidents.
• Prevents the driver from falling asleep during long and solo trips.
• If the driver has a heart attack or is drunk, it sends signals to nearby vehicles so that their drivers become alert.
• In any problem it alerts the nearby vehicles, so that the drivers there become alert.
• Opens and closes the doors and windows of the car automatically.
• It is also used for entertainment.
• It provides a natural dialog car system that understands the content of tapes, books, and radio programs.
• This system can also be used in other situations, such as:
  - Security guards
  - Operators at nuclear plants
  - Pilots of airplanes
  - Cabins in airplanes
  - Watercraft such as boats
  - Trains and subways

3.1 Why Artificial Passenger

IBM received a patent in May 2001 for a sleep prevention system for use in automobiles that is, according to the patent application, "capable of keeping a driver awake while driving during a long trip or one that extends into the late evening. The system carries on a conversation with the driver on various topics utilizing a natural dialog car system." Additionally, the application said, "The natural dialog car system analyzes a driver's answer and the contents of the answer together with his voice patterns to determine if he is alert while driving. The system warns the driver or changes the topic of conversation if the system determines that the driver is about to fall asleep. The system may also detect whether a driver is affected by alcohol or drugs."
If the system thinks your attention is flagging, it might try to perk you up with a joke, though most of us probably think an IBM engineer's idea of a real thigh-slapper is actually a signal to change the channel: "The stock market just fell 500 points! Oh, I am sorry—I was joking." Alternatively, the system might abruptly change radio stations for you, sound a buzzer, or summarily roll down the window. If those don't do the trick, the Artificial Passenger (AP) is ready with a more drastic measure: a spritz of icy water in your face.
Chapter 4
4. FUNCTIONS OF ARTIFICIAL PASSENGER
4.1 Voice Control Interface

One of the ways to address driver safety concerns is to develop an efficient system that relies on voice instead of hands to control Telematics devices. It has been shown in various experiments that well-designed voice control interfaces can reduce a driver's distraction compared with manual control. One of the ways to reduce a driver's cognitive workload is to allow the driver to speak naturally when interacting with a car system (e.g., when playing voice games or issuing commands via voice). It is difficult for a driver to remember a complex speech command menu (e.g., recalling specific syntax, such as "What is the distance to JFK?", "How far is JFK?", or "How long to drive to JFK?"). This fact led to the development of Conversational Interactivity for Telematics (CIT) speech systems at IBM Research. CIT speech systems can significantly improve the driver-vehicle relationship and contribute to driving safety. But the development of fully fledged Natural Language Understanding (NLU) for CIT is a difficult problem that typically requires significant computer resources, which are usually not available in the local processors that car manufacturers provide for their cars. To address this, NLU components should be located on a server that cars access remotely, or NLU should be downsized to run on local computing devices (typically based on embedded chips). Some car manufacturers see advantages in using upgraded NLU and speech processing on the client in the car, since remote connections to servers are not available everywhere, can have delays, and are not robust.

Our department is developing a "quasi-NLU" component, a "reduced" variant of NLU that can run on CPU systems with relatively limited resources. It extends concepts described in the paper [3]. In our approach, possible variants of spoken commands are kept in special grammar files (one file for each topic or application). When the system gets a voice response, it searches through the files (starting with the most relevant topic). If it finds an appropriate command in some file, it executes the command; a sketch of this lookup is given below. Otherwise the system executes other options that are defined by a Dialog Manager (DM). The DM component is a rule-based subsystem that can interact with the car, with external systems (such as weather forecast services, e-mail systems, and telephone directories), and with the driver to reduce task complexity for the NLU system.
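The grammar-file lookup just described can be pictured with a minimal sketch. The grammar "files" (one dictionary per topic), the topic ordering, and the Dialog Manager fallback below are illustrative assumptions, not IBM's implementation.

```python
# Minimal sketch of the quasi-NLU command lookup described above. The
# grammar "files" (one dict per topic), topic ordering, and the Dialog
# Manager fallback are illustrative assumptions, not IBM's code.
from typing import Callable, Dict, List

class QuasiNLU:
    def __init__(self, grammars: Dict[str, Dict[str, Callable[[], None]]],
                 dialog_manager: Callable[[str], None]):
        self.grammars = grammars            # one grammar per topic
        self.dialog_manager = dialog_manager

    def handle(self, utterance: str, topics_by_relevance: List[str]) -> None:
        text = utterance.lower().strip()
        # Search topic files starting with the most relevant topic
        for topic in topics_by_relevance:
            for variant, action in self.grammars.get(topic, {}).items():
                if variant in text:
                    action()   # found an appropriate command: execute it
                    return
        # Otherwise fall back to the rule-based Dialog Manager
        self.dialog_manager(utterance)

# Usage: three phrasings of the same query map to one command.
nlu = QuasiNLU(
    grammars={"navigation": {
        "distance to jfk": lambda: print("Distance to JFK: 12 miles"),
        "how far is jfk": lambda: print("Distance to JFK: 12 miles"),
        "how long to drive to jfk": lambda: print("ETA to JFK: 25 min"),
    }},
    dialog_manager=lambda u: print(f"DM: could you say it briefly? ({u})"),
)
nlu.handle("What is the distance to JFK?", ["navigation"])
```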
The following are examples of conversations between a driver and the DM that illustrate some of the tasks an advanced DM should be able to perform:

1. Ask questions (via a text-to-speech module) to resolve ambiguities:
   - (Driver) Please plot a course to Yorktown
   - (DM) Within Massachusetts?
   - (Driver) No, in New York

2. Fill in missing information and remove ambiguous references from context:
   - (Driver) What is the weather forecast for today?
   - (DM) Partly cloudy, 50% chance of rain
   - (Driver) What about Ossining?
   - (DM) Partly sunny, 10% chance of rain

   (The DM assumes that the driver means Yorktown, NY, from the earlier conversational context. Also, when the driver asks the inexplicit question "What about Ossining?" it assumes that the driver is still asking about weather.)

3. Manage failure and provide contextual, failure-dependent help and actions:
   - (Driver) When will we get there?
   - (DM) Sorry, what did you say?
   - (Driver) I asked when we will get there.

The problem of instantaneous data collection can be dealt with systematically by creating a learning transformation (LT) system.
Examples of LT tasks are as follows:

• Monitor driver and passenger actions in the car's internal and external environments across a network;
• Extract and record the Driver Safety Manager's relevant data in databases;
• Generate and learn patterns from stored data;
• Learn from this data how Safety Driver Manager components and driver behavior could be adjusted to improve Driver Safety Manager performance and driving safety.

4.2 Embedded Speech Recognition
Car computers are usually not very powerful due to cost considerations. The growing necessity of the conversational interface demands significant advances in processing power on the one hand, and in speech and natural language technologies on the other. In particular, there is a significant need for a low-resource speech recognition system that is robust, accurate, and efficient: for example, a system executed by a 50 DMIPS processor augmented by 1 MB or less of DRAM. In what follows we give a brief description of the IBM embedded speech recognition system, which is based on the research papers.

Logically, a speech system is divided into three primary modules: the front-end, the labeler, and the decoder. When processing speech, the computational workload is divided approximately equally among these modules. In this system the front-end computes standard 13-dimensional mel-frequency cepstral coefficients (MFCC) from 16-bit PCM sampled at 11.025 kHz.

Front-End Processing: Speech samples are partitioned into overlapping frames of 25 ms duration with a frame shift of 15 ms. A 15 ms frame shift instead of the standard 10 ms was chosen since it reduces the overall computational load significantly without affecting recognition accuracy. Each frame of speech is windowed with a Hamming window and represented by a 13-dimensional MFCC vector. We empirically observed that noise sources, such as car noise, have significant energy in the low frequencies, while speech energy is mainly concentrated in frequencies above 200 Hz. The 24 triangular mel filters are therefore placed in the frequency range [200 Hz, 5500 Hz], with center frequencies equally spaced on the corresponding mel-frequency scale. Discarding the low frequencies in this way improves the robustness of the system to noise. The front-end also performs adaptive mean removal and adaptive energy normalization to reduce the effects of the channel and of high variability in signal levels, respectively.

The labeler computes first and second differences of the 13-dimensional cepstral vectors and concatenates these with the original elements to yield a 39-dimensional feature vector. The labeler then computes the log likelihood of each feature vector according to the observation densities associated with the states of the system's HMMs. This computation yields a ranked list of the top 100 HMM states. Likelihoods are inferred from the rank of each HMM state by table lookup. The sequence of rank likelihoods is then forwarded to the decoder.
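As a rough illustration of the front-end parameters above, the following sketch uses the open-source librosa library; the embedded system had its own fixed-point code, and the plain cepstral mean subtraction shown here stands in for the adaptive mean removal described in the text.

```python
# Rough sketch of the front-end above: 13 MFCCs from PCM at 11.025 kHz,
# 25 ms Hamming frames with a 15 ms shift, 24 mel filters on
# [200, 5500] Hz. librosa is used for illustration only.
import numpy as np
import librosa

def front_end(pcm: np.ndarray, sr: int = 11025) -> np.ndarray:
    frame_len = int(0.025 * sr)    # 25 ms -> 275 samples
    frame_shift = int(0.015 * sr)  # 15 ms -> 165 samples (vs. usual 10 ms)
    mfcc = librosa.feature.mfcc(
        y=pcm, sr=sr, n_mfcc=13,
        n_fft=frame_len, hop_length=frame_shift, window="hamming",
        n_mels=24, fmin=200.0, fmax=5500.0,  # discard the noisy low band
    )
    # Stand-in for adaptive mean removal (channel compensation)
    return mfcc - mfcc.mean(axis=1, keepdims=True)

feats = front_end(np.random.randn(11025).astype(np.float32))  # 1 s audio
print(feats.shape)  # (13, number_of_frames)
```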
The system uses the familiar phonetically based hidden Markov model (HMM) approach. The acoustic model comprises context-dependent sub-phone classes (allophones). The context for a given phone is composed of only one phone to its left and one phone to its right. The allophones are identified by growing a decision tree using the context-tagged training feature vectors and specifying the terminal nodes of the tree as the relevant instances of these classes. Each allophone is modeled by a single-state hidden Markov model with a self-loop and a forward transition. The training feature vectors are poured down the decision tree, and the vectors that collect at each leaf are modeled by a Gaussian mixture model (GMM) with diagonal covariance matrices to give an initial acoustic model. Starting with these initial sets of GMMs, several iterations of the standard Baum-Welch EM training procedure are run to obtain the final baseline model. In our system, the output distributions on the state transitions are expressed in terms of the rank of the HMM state instead of in terms of the feature vector and the GMM modeling the leaf. The rank of an HMM state is obtained by computing the likelihood of the acoustic vector using the GMM at each state, and then ranking the states on the basis of their likelihoods.

The decoder implements a synchronous Viterbi search over its active vocabulary, which may be changed dynamically. Words are represented as sequences of context-dependent phonemes, with each phoneme modeled as a three-state HMM. The observation densities associated with each HMM state are conditioned upon one phone of left context and one phone of right context only. A discriminative training procedure was applied to estimate the parameters of these phones. MMI training attempts to simultaneously (i) maximize the likelihood of the training data given the sequence of models corresponding to the correct transcription, and (ii) minimize the likelihood of the training data given all possible sequences of models allowed by the grammar describing the task.

In 2001, speech evaluation experiments yielded relative improvements of 20% to 40%, depending on testing conditions (e.g., a 7.6% error rate at 0 mph and 10.1% at 60 mph).
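For reference, the criterion described in (i) and (ii) is conventionally written as the following objective; this is the standard textbook form of MMI, assumed here rather than quoted from the IBM papers.

```latex
% Standard MMI objective (textbook form, assumed rather than quoted from
% the IBM papers). For each training utterance u with acoustics O_u and
% correct transcription W_u, raising F_MMI simultaneously (i) raises the
% numerator likelihood of the correct models and (ii) lowers the
% denominator sum over all word sequences W allowed by the task grammar.
\[
  F_{\mathrm{MMI}}(\theta)
    = \sum_{u} \log
      \frac{p_{\theta}(O_u \mid W_u)\, P(W_u)}
           {\sum_{W} p_{\theta}(O_u \mid W)\, P(W)}
\]
```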
Fig 4.1 Embedded speech recognition indicator
Fig 4.2 Embedded speech recognition devices
4.3 Driver Drowsiness Prevention

Fatigue causes more than 240,000 vehicular accidents every year. Currently, drivers who are alone in a vehicle have access only to media such as music and radio news, which they listen to passively. Often these do not provide sufficient stimulation to assure wakefulness. Ideally, drivers should be presented with external stimuli that are interactive, to improve their alertness. Driving, however, occupies the driver's eyes and hands, thereby limiting most current interactive options. Among the efforts in this general direction, the invention [8] suggests fighting drowsiness by detecting drowsiness via speech biometrics and, if needed, increasing arousal via speech interactivity. When the patent was granted on May 22, 2001, it received favorable worldwide media attention. It became clear from the numerous press articles and interviews on TV, in newspapers, and on radio that the Artificial Passenger was perceived as having the potential to dramatically increase the safety of drivers who are highly fatigued.

It is a common experience for drivers to talk to other people while they are driving to keep themselves awake. The purpose of the Artificial Passenger part of the CIT project at IBM is to provide a higher level of interaction with a driver than current media, such as CD players or radio stations, can offer. This is envisioned as a series of interactive modules within the Artificial Passenger that increase driver awareness and help to determine if the driver is losing focus. These can include both conversational dialog and interactive games, using voice only. The scenarios for the Artificial Passenger currently include quiz games, reading jokes, asking questions, and interactive books. In the Artificial Passenger (ArtPas) paradigm, the awareness-state of the driver is monitored, and the content is modified accordingly. Drivers evidencing fatigue, for example, are presented with more stimulating content than drivers who appear to be alert. This could enhance the driver experience and may contribute to safety.

The Artificial Passenger interaction is founded on the workload manager concept of psychological arousal. Most well-known emotion researchers agree that arousal (high, low) and valence (positive, negative) are the two fundamental dimensions of emotion. Arousal reflects the level of stimulation of the person as measured by physiological aspects such as heart rate, cortical activation, and respiration. For someone to be sleepy or fall asleep, they have to have a very low level of arousal. There is a lot of research into what factors increase psychological arousal, since this can result in higher levels of attention, information retention, and memory. We know that
movement, human voices and faces (especially if larger than life), and scary images (fires, snakes) increase arousal levels. We also know that speaking and laughing create higher arousal levels than sitting quietly. Arousal levels can be measured fairly easily with a biometric glove (from MIT), which glows when arousal levels are higher (it reacts to galvanic skin responses such as temperature and humidity).

The following is a typical scenario involving the Artificial Passenger. Imagine driver "Joe" returning home after an extended business trip during which he has spent many late nights. His head starts to nod...

ArtPas: Hey Joe, what did you get your daughter for her birthday?
Joe (startled): It's not her birthday!
ArtPas: You seem a little tired. Want to play a game?
Joe: Yes.
ArtPas: You were a wiz at "Name that Tune" last time. I was impressed. Want to try your hand at trivia?
Joe: OK.
ArtPas: Pick a category: Hollywood Stars, Magic Moments or Hall of Fame?
Joe: Hall of Fame.
ArtPas: I bet you are really good at this. Do you want the $100, $500 or $1000?
Joe: 500
ArtPas: I see. Hedging your bets, are you?

By the time Joe has answered a few questions and engaged with the dynamics of the game, his activation level has gone way up. Sleep recedes to the edges of his mind. If Joe loses his concentration on the game (e.g., does not respond to the questions the Artificial Passenger asks), the system activates a physical stimulus (e.g., a verbal alarm). The Artificial Passenger can detect that a driver is not responding because his concentration is on the road, and will not distract the driver with questions. On longer trips the Artificial Passenger can also tie into a car navigation system and direct the driver to a local motel or hotel.

4.4 Workload Manager

In this section we provide a brief analysis of the design of the workload manager, a key component of the Driver Safety Manager. An object of the workload manager is to determine a moment-to-moment analysis of the user's cognitive workload. It accomplishes this by collecting data about user conditions, monitoring local and remote events, and
prioritizing message delivery. There is rapid growth in the use of sensory technology in cars. These sensors allow for the monitoring of driver actions (e.g., application of brakes, changing lanes), provide information about local events (e.g., heavy rain), and provide information about driver characteristics (e.g., speaking speed, eyelid status). There is also a growing amount of potentially distracting information that may be presented to the driver (e.g., phone rings, radio, music, e-mail) and a growing number of actions that a driver can perform in the car via voice control.

The relationship between a driver and a car should be consistent with the information from sensors. The workload manager should be designed in such a way that it can integrate sensor information with rules on when and whether distracting information is delivered. This can be designed as a "workload representational surface". One axis of the surface would represent stress on the vehicle and another, orthogonally distinct, axis would represent stress on the driver. Values on each axis could conceivably run from zero to one. Maximum load would be represented by the position where there is both maximum vehicle stress and maximum driver stress, beyond which there would be "overload". A small sketch of this surface is given at the end of this section.

The workload manager is closely related to the event manager, which detects when to trigger actions and/or make decisions about potential actions. The system uses a set of rules for starting and stopping interactions (or interventions). It controls interruption of a dialog between the driver and the car dashboard (for example, interrupting a conversation to deliver an urgent message about traffic conditions on an expected driver route). It can use answers from the driver and/or data from the workload manager relating to driver conditions, such as how often the driver answered correctly and the length of delays in answers. It interprets the status of a driver's alertness, based on his/her answers as well as on information from the workload manager. It makes decisions on whether the driver needs additional stimuli, on what types of stimuli should be provided (e.g., verbal stimuli via speech applications, or physical stimuli such as a bright light or loud noise), and on whether to suggest that the driver stop for rest. The system permits the use and testing of different statistical models for interpreting driver answers and information about driver conditions. The driver workload manager is connected to a driving risk evaluator that is an important component of the Safety Driver Manager.
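Here is a minimal sketch of the workload representational surface, assuming a 0-to-1 scale on both axes as in the text; the mapping from raw sensors to the axes and the 0.7 cutoff used to gate messages are illustrative assumptions.

```python
# Minimal sketch of the workload representational surface: both axes run
# from 0 to 1 as in the text. How raw sensor readings map onto the axes
# and the 0.7 cutoff used to gate messages are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class WorkloadPoint:
    vehicle_stress: float  # 0.0..1.0, e.g. braking, lane changes
    driver_stress: float   # 0.0..1.0, e.g. speaking speed, eyelid status

def clamp01(v: float) -> float:
    return min(max(v, 0.0), 1.0)

def may_deliver_message(p: WorkloadPoint, cutoff: float = 0.7) -> bool:
    """Gate distracting information (phone, e-mail, radio) on load."""
    v = clamp01(p.vehicle_stress)
    d = clamp01(p.driver_stress)
    # Positions approaching (1, 1) on the surface approach "overload"
    return max(v, d) < cutoff

print(may_deliver_message(WorkloadPoint(0.3, 0.4)))  # True: deliver e-mail
print(may_deliver_message(WorkloadPoint(0.9, 0.5)))  # False: hold message
```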
4.5 Safety Driver Manager

The goal of the Safety Driver Manager is to evaluate the potential risk of a traffic accident by producing measurements related to stresses on the driver and/or vehicle, the driver's cognitive workload, environmental factors, etc. An important input to the workload manager is provided by the situation manager, whose task is to recognize critical situations. It receives various media as input (audio, video, car sensor data, network data, GPS, biometrics) and produces a list of situations as output. Situations can be simple, complex, or abstract.

Simple situations include, for instance: a dog locked in a car; a baby in a car; another car approaching; the driver's eyes are closed; the car windows are closed; a key is left on a car seat; it is hot in the car; there are no people in the car; the car is located in New York City; the driver has diabetes; the driver is on the way home.

Complex situations include, for example: a dog is locked in a car AND it is hot in the car AND the car windows are closed; a baby is in a car AND there are no people in the car; another car is approaching AND the driver is looking in the opposite direction; a key is left on a car seat AND the driver is in the midst of locking the car; the driver is diabetic AND has not taken medicine for 4 hours.

Abstract situations include, for example: goals (get to work, to the cleaners, to a movie); driver preferences (typical routes, music to play, restaurants, shops); driver history (accidents, illness, visits).

Situation information can be used by different modules, such as the workload, dialog, and event managers; by systems that learn driver behavioral patterns; by driver distraction detection; and by message delivery prioritization. For example, when the workload manager performs a moment-to-moment analysis of the driver's cognitive workload, it may well deal with complex situations such as the following: the driver speaks over the phone AND the car moves at high speed AND the car changes lanes; the driver asks for a stock quotation AND presses the brakes AND it is raining outside; another car approaches on the left AND the driver is playing a voice interactive game. A sketch of how such conjunctions can be recognized is given below.

The dialog manager may at times require uncertainty resolution involving complex situations, as exemplified in the following verbal query by a driver: "How do I get to Spring Valley Rd?" Here, the uncertainty resides in the lack of an expressed (geographical) state or municipality. The uncertainty can be resolved through situation recognition; for example, the car may already be in New York State (as determined via GPS) and it may be known that the driver rarely visits other states.
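The following minimal sketch shows how a situation manager might compose simple situations into complex ones by conjunction; the predicate names and sensor fields are invented for the example.

```python
# Minimal sketch of the situation manager composing simple situations
# into complex ones by conjunction, per the examples above. Predicate
# names and sensor fields are invented for the example.
SIMPLE = {
    "dog_locked_in_car": lambda s: s["dog_in_car"] and s["car_locked"],
    "hot_in_car":        lambda s: s["cabin_temp_c"] > 30,
    "windows_closed":    lambda s: s["windows_closed"],
}

COMPLEX = {
    # "a dog is locked in a car AND it is hot AND the windows are closed"
    "pet_heat_danger": ["dog_locked_in_car", "hot_in_car", "windows_closed"],
}

def recognize(sensors: dict) -> list:
    active = {name for name, test in SIMPLE.items() if test(sensors)}
    # A complex situation fires when all of its simple parts are active
    return [name for name, parts in COMPLEX.items()
            if all(p in active for p in parts)]

sensors = {"dog_in_car": True, "car_locked": True,
           "cabin_temp_c": 34, "windows_closed": True}
print(recognize(sensors))  # ['pet_heat_danger'] -> alert the driver
```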
Learning driver behavioral patterns can be facilitated by a particular driver's repeated routines, which provide a good opportunity for the system to "learn" habitual patterns and goals. So, for instance, the system could assist in determining whether drivers are going to pick up their kids in time by, perhaps, reordering a path through the cleaners, the mall, the grocery store, etc.
Fig 4.3 Condition Sensors Device
Fig 4.4 Mobile indicator device
4.6 Privacy and Social Aspects

• Addressing privacy concerns: The Safety Driver Manager framework should be designed such that it is straightforward for application designers to protect the end user's privacy. This should include encryption of the message traffic from the vehicle, through a carrier's network, and into the service provider's secure environment, such that the driver's responses cannot be intercepted. This can be achieved through the use of IBM WebSphere Personalization Server or Portal Server, allowing the end user an interface to select options and choices about the level of privacy and/or the types of content presented. An example of such an option is the opportunity for drivers to be informed about the existence of the Artificial Passenger capability, with clear instructions about how to turn it off if they opt not to use it.

• Addressing social concerns: The Safety Driver Manager is being developed to enable service providers to enhance the end user's driving experience, and the system will be designed to ensure that it has this desired effect. The social impact of the system will be managed by making sure that users clearly understand what the system is, what it can and cannot do, and what they need to do to maximize its performance to suit their unique needs. For example, in the Artificial Passenger paradigm the interaction can be customized to suit the driver's conversational style, sense of humor, and the amount of "control" that he/she chooses to leave to the Artificial Passenger system (e.g., some drivers might find it disconcerting if the Artificial Passenger system opens the window for them automatically; others might find this a key feature). The system will include a learning module that detects and records customer feedback; e.g., if a driver does not laugh at a certain type of joke, then that type will not be presented. Positive feedback in one area (football scores from a driver's home town) leads to additional related content (baseball scores from the same town, weather, etc.). The social concerns associated with the Artificial Passenger can be addressed by allowing users to specify their desires and requirements through the subscriber management tools. A general approach to privacy, social, and legal issues in Telematics can be found in the literature; some elements of this approach (e.g., Privacy Manager, Insurance) are reflected here.
4.7 Distributive User Interface Between Cars

The safety of a driver depends not only on the driver himself but also on the behavior of other drivers nearby. Existing technologies can attenuate the risks a driver faces in managing his/her own vehicle, but they do not attenuate the risks presented to other drivers who may be in "high risk" situations because they are near or passing a car whose driver is distracted by playing games, listening to books, or engaging in a telephone conversation. It would thus appear helpful at times to inform a driver about such risks associated with drivers in other cars. In some countries, drivers younger than 17 are required to display a mark on their cars to indicate this. In Russia (at least in Soviet times), deaf or hard-of-hearing drivers were required to announce this fact on the back window of the car. There is, then, an acknowledged need for a more dynamic arrangement to highlight a variety of potentially dangerous situations to drivers of other cars, and to ensure that drivers of other cars do not bear the added responsibility of discovering these themselves through observation, as this presents its own risks.

Information about driver conditions can be provided by sensors located in that car. The following are examples of information about drivers that can affect driving conditions:

- mood (angry, calm, laughing, upset)
- physical condition (tired, drowsy, sick, or having chronic illnesses that can affect driving, like diabetes)
- attention (looking at the road or at a navigation map in the car, talking to a baby in a back seat, talking over the telephone, listening to e-mail)
- driver profile (number of traffic accidents, age)

There are several ways to assess this information. The driver's overall readiness for safe driving can be evaluated by the safety manager in his/her own car. It can be ranked by some metric (e.g., on a scale from 1 to 5) and this evaluation can then be sent to the driver safety managers in nearby cars. Alternatively, a driver manager in one car can have access to information from the driver profiles and sensors in other cars. This second method allows individual car drivers to customize their priorities and use personal estimators for driving risk factors. For example, someone who is more worried about young drivers may request that this information be provided to his/her driver safety manager rather than an overall risk estimate expressed as a single number. If a driver safety manager finds that there is additional risk associated with driver behavior in a car located nearby, it may prevent a telephone from ringing or interrupt a dialog between the driver and the car system. It can also advise someone who is calling a driver that the driver is busy and should not be disturbed at this time. The information can be sent anonymously to the driver safety manager in another car, and this manager would then adjust the risk factor in its estimation of the surrounding environment for this car.
This allows the system to address privacy concerns that drivers may have. One can also offer reduced insurance payments to a driver if s/he agrees to disclose information to other cars. Operators of truck fleets may be particularly interested in this approach, since it allows a reduction in traffic accidents.
Chapter 5
5. ARCHITECTURE

5.1 General Architecture
Fig 5.1 General Architecture of Artificial Passenger
Microphone: Picks up the driver's words so they can be separated into individual words by internally used speech software for the conversation.

Camera: Tracks the lip movements of the driver; this is also used to improve the accuracy of the speech recognition.

External service provider: Linked to the dialog system by a wireless network and coupled with the car media, driver profile, and conversational planner.

Driver analyzer module: Controls interruption of a dialog between the driver and the car dashboard (for example, interrupting a conversation to deliver an urgent message about traffic conditions on an expected driver route).
Temperature indicator: Measures the temperature inside the vehicle and helps maintain a steady temperature.

Door lock sensor: Sounds an alarm when a door is not locked.

Odor sensor: Periodically sprinkles scented air inside the vehicle.

Speaker: Generally used for entertainment purposes.
5.2 Working Components of Artificial Passenger

Fig 5.2 Working Components

The following components support the working of the system:

• Automatic Speech Recognizer (ASR)
• Natural Language Processor (NLP)
• Driver analyzer
• Conversational planner (CP)
• Alarm
• External service provider
• Microphone
• Camera

Automatic Speech Recognition (ASR): There are two ASRs used in the system:
• A speaker-independent ASR decodes the driver's voice; the decoded voice signals are output to the Natural Language Processor (NLP).
• A second ASR operates with the car's voice media, decoding tapes, audio books, and telephone mail. The decoded output of this ASR module is analyzed by an intelligent text processor, which outputs data to the conversational planner.

Natural Language Processor (NLP): Processes the decoded textual data from the ASR module, identifies the semantic and syntactic content of the decoded message, produces variants of responses, and outputs this data to the text input of the driver analyzer.

Driver analyzer: Receives the textual data and voice data from the NLP and measures the response time using a clock. From these response times it draws conclusions about the driver's alertness, which it outputs to the conversational planner; a sketch of this timing logic follows below. This analysis is both objective and subjective.

Conversational planner: Generally referred to as the heart of the system; it instructs the language generator to produce the response. If the driver remains in good condition, the conversational planner instructs the language generator to continue the conversation; otherwise the language generator is instructed to change the conversation.

Alarm: If the conversational planner receives information that the driver is about to fall asleep, it activates an alarm system.
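A minimal sketch of the driver analyzer's timing logic follows. The 3-second "slow response" threshold and the two-level alertness scale are illustrative assumptions; the full system also weighs intonation and answer content.

```python
# Minimal sketch of the driver analyzer's timing logic. The 3-second
# threshold and the two-level alertness scale are illustrative
# assumptions; the full system also weighs intonation and content.
import time
from typing import Optional

class DriverAnalyzer:
    def __init__(self, slow_after_s: float = 3.0):
        self.slow_after_s = slow_after_s
        self._asked_at: Optional[float] = None

    def question_asked(self) -> None:
        # Clock starts when the conversational planner poses a question
        self._asked_at = time.monotonic()

    def answer_received(self, text: str) -> str:
        if self._asked_at is None:
            return "alert"  # nothing was asked; no timing to evaluate
        delay = time.monotonic() - self._asked_at
        # Slow or empty responses are treated as signs of fatigue
        if not text.strip() or delay > self.slow_after_s:
            return "drowsy"  # planner changes topic or raises the alarm
        return "alert"       # planner continues the line of questioning

analyzer = DriverAnalyzer()
analyzer.question_asked()
print(analyzer.answer_received("It's not her birthday!"))  # 'alert'
```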
Chapter 6
6. WORKING OF ARTIFICIAL PASSENGER

The working cycle of the Artificial Passenger is the one described in Section 1.1: the conversation planner uses the driver's personalized profile to pose questions via the speech generator and in-car speakers; the microphone and the dashboard camera (tracking lip movements) feed the speech-recognition software; and the voice analyzer checks response speed, content, and intonation against the profile for signs of fatigue, telling the conversation planner either to continue the line of questioning or to act to get the driver's attention.
Driver fatigue causes at least 100,000 crashes, 1,500 fatalities, and 71,000 injuries annually, according to estimates prepared by the National Highway Traffic Safety Administration, which further estimated that the annual cost to the economy in property damage and lost productivity is at least $12.5 billion. The Federal Highway Administration, the American Trucking Association, and Liberty Mutual co-sponsored a study in 1999 that subjected nine volunteer truck drivers to a protracted laboratory simulation of over-the-road driving. Researchers filmed the drivers during the simulation, and other instruments measured heart function, eye movements, and other physiological responses. "A majority of the off-road accidents observed during the driving simulations were preceded by eye closures of one-half second to as long as 2 to 3 seconds," Stern said. A normal human blink lasts 0.2 to 0.3 second. Stern said he believes that by the time long eye closures are detected, it is too late to prevent danger. "To be of much use," he said, "alert systems must detect early signs of fatigue, since the onset of sleep is too late to take corrective action." Stern and other researchers are attempting to pinpoint various irregularities in eye movements that signal oncoming mental lapses: sudden and unexpected short interruptions in mental performance that usually occur much earlier in the transition to sleep. "Our research suggests that we can make predictions about various aspects of driver performance based on what we glean from the movements of a driver's eyes," Stern said, "and that a system can eventually be developed to capture this data and use it to alert people when their driving has become significantly impaired by fatigue." He said such a system might be ready for testing in 2004.
Fig 6.1 Camera for detection of lip movement
6.1 Devices That Are Used in Artificial Passenger

The main devices used in the Artificial Passenger are:
1) An eye tracker
2) A voice recognizer or speech recognizer
6.2 How Does Eye Tracking Work?

Collecting eye movement data requires hardware and software specifically designed to perform this function. Eye-tracking hardware is either mounted on the user's head or mounted remotely. Both systems measure the corneal reflection of an infrared light-emitting diode (LED), which illuminates and generates a reflection off the surface of the eye. This causes the pupil to appear as a bright disk in contrast to the surrounding iris and creates a small glint underneath the pupil. It is this glint that head-mounted and remote systems use for calibration and tracking.
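The bright-pupil-and-glint principle can be sketched as a simple threshold-and-blob pass; the threshold value of 220 and the use of OpenCV are illustrative assumptions, not the design of any particular commercial tracker.

```python
# Sketch of the bright-pupil principle described above: under IR
# illumination the pupil shows as a bright disk with a small glint, so a
# threshold-and-largest-blob pass can localize it. The threshold of 220
# and the OpenCV calls are illustrative assumptions.
import cv2
import numpy as np
from typing import Optional, Tuple

def find_pupil(ir_gray: np.ndarray) -> Optional[Tuple[int, int]]:
    # Keep only the brightest pixels (bright pupil plus glint)
    _, bright = cv2.threshold(ir_gray, 220, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(bright)
    if n < 2:
        return None  # no bright region found in this frame
    # Largest non-background component is the candidate pupil disk
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    cx, cy = centroids[largest]
    return int(cx), int(cy)

frame = np.zeros((240, 320), np.uint8)
cv2.circle(frame, (160, 120), 8, 255, -1)  # synthetic bright pupil
print(find_pupil(frame))                   # approximately (160, 120)
```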
Chapter 7
7. FEATURES OF ARTIFICIAL PASSENGER
7.1 Conversational Telematics

IBM's Artificial Passenger is like having a butler in your car: someone who looks after you, takes care of your every need, is bent on providing service, and has enough intelligence to anticipate your needs. This voice-actuated Telematics system helps you perform certain actions within your car hands-free: turn on the radio, switch stations, adjust the HVAC, make a cell phone call, and more. It provides uniform access to devices and networked services in and outside your car. It reports car conditions and external hazards with minimal distraction. Plus, it helps you stay awake with some form of entertainment when it detects you're getting drowsy.

In time, the Artificial Passenger technology will go beyond simple command-and-control. Interactivity will be key. So will natural-sounding dialog. For starters, it won't be repetitive ("Sorry your door is open, sorry your door is open..."). It will ask for corrections if it determines it misunderstood you. The amount of information it provides will be based on its "assessment of the driver's cognitive load" (i.e., the situation). It can learn your habits, such as how you adjust your seat. Parts of this technology are 12 to 18 months away from broad implementation.
7.2 Improving Speech Recognition

You're driving at 70 mph, it's raining hard, a truck is passing, the car radio is blasting, and the A/C is on. Such noisy environments are a challenge to speech recognition systems, including the Artificial Passenger. IBM's Audio-Visual Speech Recognition (AVSR) cuts through the noise. It reads lips to augment speech recognition. Cameras focused on the driver's mouth do the lip reading; IBM's Embedded ViaVoice does the speech recognition. In places with moderate noise, where conventional speech recognition has a 1% error rate, the error rate of AVSR is less than 1%. In places roughly ten times noisier, speech recognition has about a 2% error rate; AVSR's is still pretty good (a 1% error rate). When the ambient noise is just as loud as the driver talking, speech recognition loses about 10% of the words; AVSR, 3%. Not great, but certainly usable.
7.3 Analyzing Data

The sensors and embedded controllers in today's cars collect a wealth of data. The next step is to have them "phone home," transmitting that wealth back to those who can use it. Making sense of that detailed data is hardly a trivial matter, though, especially when divining transient problems or analyzing data about the vehicle's operation over time. IBM's Automated Analysis Initiative is a data management system for identifying
failure trends and predicting specific vehicle failures before they happen. The system comprises capturing, retrieving, storing, and analyzing vehicle data; exploring data to identify features and trends; developing and testing reusable analytics; and evaluating as well as deriving corrective measures. It involves several reasoning techniques, including filters, transformations, fuzzy logic, and clustering/mining. Since 1999, this sort of technology has helped Peugeot diagnose and repair 90% of its cars within four hours, and 80% of its cars within a day (versus days). An Internet-based diagnostics server reads the car data to determine the root cause of a problem or to lead the technician through a series of tests. The server also takes a "snapshot" of the data and repair steps. Should the problem reappear, the system has the fix readily available.

7.4 Sharing Data

Collecting dynamic and event-driven data is one problem. Another is ensuring data security, integrity, and regulatory compliance while sharing that data. For instance, vehicle identifiers, locations, and diagnostics data from a fleet of vehicles can be used by a variety of interested, and sometimes competing, parties. These data can be used to monitor the vehicles (something the fleet agency will definitely want to do, and so too may an automaker eager to analyze its vehicles' performance), to trigger emergency roadside assistance (a third-party service provider), and to feed the local "traffic helicopter" report. This IBM project is the basis of a "Pay as You Drive" program in the United Kingdom. By monitoring car model data and the driving habits of policy holders who opt in, an insurance company can establish fair premiums based on car model and the driver's safety record. The technology is also behind the "black boxes" readied for New York City's yellow taxis and limousines. These boxes help prevent fraud, especially when accidents occur, by radioing vehicular information such as speed, location, and seat belt use.
7.5 Retrieving Data on Demand

"Plumbing": the infrastructure stuff. In time, Telematics will be another web service, using sophisticated back-end processing of "live" and stored data from a variety of distributed, sometimes unconventional, external data sources, such as other cars, sensors, phone directories, e-coupon servers, even wireless PDAs. IBM calls its solution the "Resource Manager," a software server for retrieving and delivering live data on demand. This server will have to manage a broad range of data that change frequently, constantly, and rapidly. The server must give service providers the ability to declare what data they want, even without knowing exactly where those data reside.
Moreover, the server must scale to encompass the increasing numbers of Telematics-enabled cars, the huge volumes of data collected, and all the data out on the Internet. A future application of this technology would provide you with "shortest-time" routing based on road conditions changing because of weather and traffic, remote diagnostics of your car and the cars on your route, destination requirements (your flight has been delayed), and nearby incentives ("e-coupons" for restaurants along your way).

7.6 Face Recognition

Human face detection by computer systems has become a major field of interest. Face detection algorithms are used in a wide range of applications, such as security control, video retrieval, biometric signal processing, human-computer interfaces, face recognition, and image database management. However, it is difficult to develop a completely robust face detector due to varying lighting conditions, face sizes, face orientations, backgrounds, and skin colors. In this report, we propose a face detection method for color images. Our method detects skin regions over the entire image and then generates face candidates based on a connected-component analysis. Finally, the face candidates are divided into human face and non-face images by an enhanced version of the template-matching method. Experimental results demonstrate successful face detection over the EE368 training images. For color images, various studies have shown that it is possible to separate human skin regions from a complex background based on either the YCbCr or the HSV color space; a sketch of the YCbCr variant is given below. The face candidates can be generated from the identified skin regions. Numerous approaches can be applied to classify face and non-face images from the face candidates, such as wavelet packet analysis; template matching for faces, eyes, and mouths; and feature extraction using watersheds and projections.
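Here is a minimal sketch of the YCbCr skin-region step; the Cr/Cb bounds are commonly cited values, assumed here rather than taken from the report's EE368 experiments.

```python
# Minimal sketch of the YCbCr skin-region step described above. The
# Cr/Cb bounds are commonly cited values, assumed here rather than
# taken from the report's EE368 experiments.
import cv2
import numpy as np

def face_candidates(bgr: np.ndarray, min_area: int = 500):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    # Channels are (Y, Cr, Cb); skin is roughly Cr in [133, 173] and
    # Cb in [77, 127] under many lighting conditions.
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    n, _, stats, _ = cv2.connectedComponentsWithStats(skin)
    boxes = []
    for i in range(1, n):  # component 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h))  # candidate region for matching
    return boxes

img = cv2.imread("frame.jpg")  # hypothetical input image
if img is not None:
    print(face_candidates(img))
```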
Chapter 8
8. MONITORING HEAD/EYE MOTION FOR DRIVER ALERTNESS WITH ONE CAMERA

Visual methods and systems are described for detecting the alertness and vigilance of persons under conditions of fatigue, lack of sleep, and exposure to mind-altering substances such as alcohol and drugs. In particular, the invention can have particular applications for truck drivers, bus drivers, train operators, pilots, watercraft controllers, stationary heavy equipment operators, and students and employees, during either daytime or nighttime conditions. The invention robustly tracks a person's head and facial features with a single on-board camera in a fully automatic system that can initialize automatically, reinitialize when it needs to, and provide outputs in real time. The system can classify rotation in all viewing directions, detect eye/mouth occlusion, detect eye blinking, and recover the 3-D gaze of the eyes. In addition, the system is able to track both through occlusions such as eye blinking and through occlusions such as rotation. Outputs can be visual and sound alarms to the driver directly. Additional outputs can slow the vehicle down and/or cause the vehicle to come to a full stop.
8.1 Representative Image

This invention relates to visual monitoring systems, and in particular to systems and methods that use digital cameras to monitor head motion and eye motion with computer vision algorithms. These monitor alertness and vigilance for drivers of vehicles, trucks, buses, planes, trains, and boats, and for operators of stationary and moveable heavy equipment, against driver fatigue, loss of sleep, and the effects of alcohol and drugs, as well as monitoring students and employees during educational, training, and workstation activities.
Fig 8.1 Flowchart for monitoring driver alertness

8.2 A Driver Alertness System Comprising

(a) a single camera within a vehicle aimed at the head region of a driver;

(b) means for simultaneously monitoring head rotation, yawning, and full eye occlusion of the driver with said camera, the head rotation including nodding up and down and moving left to right, and the full eye occlusion including eye blinking and complete eye closure; the monitoring means includes means for determining left-to-right rotation and up-and-down nodding;
(c) alarm means for activating an alarm in real time when a threshold condition in the monitoring means has been reached, whereby the driver is alerted into vigilance. The monitoring means includes means for determining the gaze direction of the driver and a detected condition selected from at least one of: lack of sleep of the driver, driver fatigue, alcohol effects, and drug effects of the driver.
8.3 Method of Detecting Driver Vigilance

The method comprises the following steps:

1. Aiming a single camera at the head of the driver of a vehicle;
2. Detecting the frequency of up-and-down nodding and left-to-right rotations of the head within a selected time period with the camera;
3. Determining the frequency of eye blinking and eye closings of the driver within the selected time period with the camera;
4. Determining the frequency of yawning of the driver within the selected time period with the camera;
5. Generating an alarm signal in real time if the frequency of the up-and-down nodding, the left-to-right rotations, the eye blinking, the eye closings, or the yawning exceeds a selected threshold value.

A sketch of this threshold test follows.
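```python
# Minimal sketch of the claimed threshold test: count events observed in
# a time window and raise the alarm when any per-minute rate crosses its
# limit. All limit values are illustrative assumptions.
from collections import Counter
from typing import Dict, List

def vigilance_alarm(events: List[str], window_s: float,
                    per_minute_limits: Dict[str, float]) -> bool:
    """events: labels such as 'nod', 'rotation', 'blink', 'closing',
    'yawn' observed within the last window_s seconds."""
    counts = Counter(events)
    for kind, limit in per_minute_limits.items():
        rate = counts[kind] * 60.0 / window_s
        if rate > limit:
            return True  # threshold reached: alert the driver in real time
    return False

limits = {"nod": 6, "blink": 30, "closing": 3, "yawn": 4, "rotation": 10}
print(vigilance_alarm(["blink"] * 12 + ["yawn"] * 2, 20.0, limits))  # True
```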
Chapter 9
9. DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

Before explaining the disclosed embodiment of the present invention in detail, it is to be understood that the invention is not limited in its application to the details of the particular arrangement shown, since the invention is capable of other embodiments. Also, the terminology used herein is for the purpose of description and not of limitation.

The novel invention can analyze video sequences of a driver to determine when the driver is not paying adequate attention to the road. The invention collects data with a single camera that can be placed on the car dashboard. The system focuses on rotation of the head and eye blinking, two important cues for determining driver alertness, to make a determination of the driver's vigilance level. Our head tracker consists of tracking the lip corners, eye centers, and sides of the face. Automatic initialization of all features is achieved using color predicates and a connected-components algorithm.

A connected-components algorithm is one in which every element in a component has a given property, and each element in the component is adjacent to another element either by being to the left, right, above, or below it. Other types of connectivity can also be allowed. An example of a connected component follows: if we are given various land masses, then one could say that each land mass is a connected component, because the water separates the land masses. However, if a bridge were built between two land masses, the bridge would connect them into one land mass.

For the invention, "occlusion" of the eyes and mouth often occurs when the head rotates or the eyes close, so our system tracks through such occlusion and can automatically reinitialize when it mis-tracks. Also, the system performs blink detection and determines the 3-D direction of gaze. These are necessary components for monitoring driver alertness. The novel method and system can track through local lip motion like yawning; it presents a robust tracking method for the face, and in particular the lips, and can be extended to track during yawning or opening of the mouth. A general overview of the novel method and system for daytime conditions is given below, and can include the following steps:
1. Automatically initialize the lips and eyes using color predicates and connected components.
2. Track the lip corners using the dark line between the lips and the color predicate, even through large mouth movement such as yawning.
3. Track the eyes using affine motion and color predicates.
4. Construct a bounding box of the head.
5. Determine rotation using distances between the eye and lip feature points and the sides of the face.
6. Determine eye blinking and eye closing using the number and intensity of pixels in the eye region.
7. Determine the driver's vigilance level using all acquired information.
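As promised above, here is a minimal sketch of a connected components algorithm of the kind used in step 1: a 4-connected flood fill over a binary mask. The flood-fill formulation and the toy mask are illustrative, not the invention's actual implementation.

# Minimal 4-connected component labeling over a binary mask,
# illustrating the "land masses separated by water" analogy:
# pixels marked 1 that touch left, right, above, or below
# belong to the same component.
from collections import deque

def label_components(mask):
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                current += 1                     # start a new component
                queue = deque([(r, c)])
                labels[r][c] = current
                while queue:                     # flood-fill its neighbors
                    y, x = queue.popleft()
                    for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

# Two "land masses": the zeros between them act as water.
mask = [[1, 1, 0, 0, 1],
        [1, 0, 0, 1, 1]]
_, count = label_components(mask)
print(count)  # prints 2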
The above steps can be modified for nighttime conditions. The novel invention can provide quick, substantially real-time monitoring responses. For example, driver vigilance can be determined in as few as approximately 20 frames, which would be within approximately ⅔ of a second under some conditions (when the camera is taking pictures at a rate of approximately 30 frames per second). Prior art systems usually require substantial amounts of time, such as at least 400 frames, which can take in excess of 20 seconds if the camera is taking pictures at approximately 30 frames per second. Thus, the invention is vastly superior to prior art systems.

The video sequences throughout the invention were acquired using a video camera placed on a car dashboard. The system runs on an UltraSparc using 320×240 size images with 30 fps video. The system first determines day or night status. It is nighttime if, for example, a camera clock time period is set to be between 18:00 and 07:00 hours. Alternatively, day or night status can be checked by whether the driver has the nighttime driving headlights on, by wiring the system to the headlight controls of the vehicle. Additionally, night status can be set if the intensity of the image is below a threshold, in which case it must be dark. For example, if the intensity of the image (intensity is defined in many ways; one such way is the average of all RGB (Red, Green, Blue) values) is below approximately 40, then the nighttime method could be used. The possible range of values for the average RGB value is 0 to 255, with the units being arbitrarily selected for the scale.

If daytime is determined, then the left side of the flow chart (FIG.) is followed: first, the system initializes to find the face. A frame is grabbed from the video output. Tracking of the feature points is performed in steps. Measurements of the rotation and orientation of the face are made. Eye occlusion, such as blinking and eye closure, is examined. Yawning is detected. The rotation, eye occlusion, and yawning information is then used to measure the driver's vigilance.
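A compact sketch of the three day/night tests just described (the 18:00–07:00 clock window, the headlight signal, and a mean-RGB threshold of approximately 40 on the 0–255 scale); the function names and inputs are illustrative assumptions.

# Sketch of the day/night decision described above. Any one of the
# three tests selecting "night" switches the system to the nighttime
# algorithm. Names and inputs are illustrative assumptions.

def mean_rgb_intensity(frame):
    """Average of all R, G, B values in the frame (0-255 scale)."""
    total = count = 0
    for row in frame:
        for (r, g, b) in row:
            total += r + g + b
            count += 3
    return total / count

def is_nighttime(hour, headlights_on, frame, threshold=40):
    in_night_window = hour >= 18 or hour < 7   # 18:00-07:00 clock test
    too_dark = mean_rgb_intensity(frame) < threshold
    return in_night_window or headlights_on or too_dark

frame = [[(20, 25, 30), (35, 30, 25)]]         # a tiny dark test image
print(is_nighttime(hour=13, headlights_on=False, frame=frame))  # True (dark)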
If nighttime is determined, then the right-hand series of steps in the flow chart occurs: first, the system initializes to find the face. Next, a frame is grabbed from the video output. Tracking of the lip corners and eye pupils is performed. Rotation and orientation of the face are measured, and the feature points are corrected if necessary. Eye occlusion, such as blinking and eye closure, is examined. Yawning detection is performed. The rotation, eye occlusion, and yawning information is used to measure the driver's vigilance.
9.1
Daytime Conditions
For the daytime scenario, initialization is performed to find the face feature points. A frame is taken from a video stream of frames. Tracking is then done in stages: lip tracking is done, and there are multiple stages in the eye tracker, with Stage 1 and Stage 2 operating independently. A bounding box around the face is constructed, from which the facial orientation can be computed. Eye occlusion is determined. Yawning is detected. The rotation, eye occlusion, and yawning information is fused to determine the vigilance level of the operator. The method then grabs another frame from the video stream and continues.

The system initializes itself as follows. The lip and eye colors (RGB: Red, Green, Blue) are marked in the image offline, so that these colors can be recognized by the system. Marking the lip pixels in the image is important; all other pixel values in the image are considered unimportant. Each pixel has a Red (R), Green (G), and Blue (B) component. For a pixel that is marked as important, go to its location in the RGB array, indexing on the R, G, and B components. This array location is incremented by equation (1):

exp(−1.0*(i*i + j*j + k*k)/(2*sigma*sigma))    (1)

where sigma is approximately 2, i refers to the component in the x direction, j to the component in the y direction, and k to the component in the z direction, each ranging from approximately −2 to approximately 2. Thus, values are simply incremented in the x, y, and z directions from approximately −2 to approximately +2, using the above function. As an example of running through equation (1), given that sigma is 2, let i=0, j=1, and k=−1; the function then evaluates to exp(−1.0*(0+1+1)/(2*2*2)) = exp(−2/8) = 0.77880, where exp is the standard exponential function (e^x). Equation (1) is applied for every pixel that is marked as important; if a color, or pixel value, is marked as important multiple times, its new value is added to the current value. Pixel values that are marked as unimportant decrease the value of the RGB-indexed location via equation (2):

exp(−1.0*(i*i + j*j + k*k)/(2*(sigma−1)*(sigma−1)))    (2)

where sigma is approximately 2 and i, j, and k again range from approximately −2 to approximately 2. As an example of running through equation (2), with sigma equal to 2 and again i=0, j=1, k=−1, the function evaluates to exp(−1.0*(0+1+1)/(2*1*1)) = exp(−1) = 0.36788. The values in the array which are above a threshold are marked as being one of the specified colors; the values below the threshold are marked as not being of the specified color. An RGB array of the lip colors is generated, and the endpoints of the biggest lip-colored component are selected as the mouth corners.

The driver's skin is marked as important in the same way: all other pixel values in the image are considered unimportant, each pixel marked as important increments its RGB array location via equation (1), and each pixel marked as unimportant decreases the RGB-indexed location via equation (2). The values in the array which are above a threshold are marked as being one of the specified colors. Another RGB array is thus generated for the skin colors, and the largest non-skin components above the lips are marked as the eyes. The method then searches above the lips in a vertical manner until it finds two non-skin regions, which are between approximately 15 and approximately 800 pixels in area. The marking of pixels can occur automatically by considering the common color of various skin and lip tones.
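The color-predicate construction of equations (1) and (2) can be sketched as follows: each pixel marked as important adds a small Gaussian bump to a 256×256×256 RGB array, and each unimportant pixel subtracts a narrower one. The final threshold of 0.5 and the sample pixel colors are illustrative assumptions, as the text does not give their values.

# Sketch of the color-predicate construction in equations (1) and (2).
# "Important" (lip or skin) pixels add a Gaussian bump to a
# 256x256x256 RGB array; "unimportant" pixels subtract a narrower one.
import numpy as np

SIGMA = 2
OFFSETS = range(-2, 3)   # i, j, k each run from -2 to +2

def splat(array, r, g, b, sign):
    # Equation (1) uses 2*sigma^2; equation (2) uses 2*(sigma-1)^2.
    denom = 2 * SIGMA * SIGMA if sign > 0 else 2 * (SIGMA - 1) * (SIGMA - 1)
    for i in OFFSETS:
        for j in OFFSETS:
            for k in OFFSETS:
                ri, gj, bk = r + i, g + j, b + k
                if 0 <= ri < 256 and 0 <= gj < 256 and 0 <= bk < 256:
                    array[ri, gj, bk] += sign * np.exp(
                        -(i * i + j * j + k * k) / denom)

predicate = np.zeros((256, 256, 256), dtype=np.float32)
splat(predicate, 180, 60, 70, +1)    # a pixel marked as lip-colored
splat(predicate, 120, 120, 120, -1)  # a pixel marked as unimportant
is_lip = predicate > 0.5             # cells above an (assumed) threshold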
9.2
Nighttime Conditions
If it is nighttime, the following steps are performed. Night is declared when any of the three conditions described above holds: the camera clock is between 18:00 and 07:00 hours, the driver has the nighttime driving headlights on, or the intensity of the image is below a threshold, in which case it must be dark; the nighttime algorithm steps are then used.

The invention initializes the eyes by finding bright spots with dark regions around them. In the first two frames, the system finds the brightest pixels with dark regions around them; these points are marked as the eye centers. In subsequent frames, the brightest regions are referred to as the eye bright tracker estimates. If these estimates are too far from the previous values, the old values are retained as the new eye location estimates. The next frame is then grabbed.

The system runs two independent subsystems. In the first subsystem, the dark pixel is located and tested to see whether it is close enough to the previous eye location; if the new estimates are too far from the previous values, the system retains the old values as the new eye location estimates, and if they are close, the new estimates are kept. The second subsystem finds the image transform. This stage tries to find a common function between two images in which the camera moved some amount; the function transforms every pixel in one image to the corresponding point in the other image. It is called an affine function, has six parameters, and serves as the motion estimation equation.
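For reference, a six-parameter affine motion model is conventionally written as below; this is the standard general form, offered as an assumption, since the description here does not spell out the parameterization. It maps each pixel (x, y) in one frame to (x′, y′) in the next:

x′ = a1*x + a2*y + b1
y′ = a3*x + a4*y + b2

where the four coefficients a1 through a4 capture rotation, scale, and shear, and b1 and b2 capture translation, giving six parameters in total.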
Chapter 10
10. CONCLUSIONS We suggested that important issues related to driver safety, such as controlling Telematics devices and detecting drowsiness, can be addressed by a special speech interface. This interface requires interactions with workload, dialog, event, privacy, situation, and other modules. We showed that basic speech interactions can be performed on a low-resource embedded processor, and that this allows the development of a useful local component of the Safety Driver Manager. The reduction of conventional speech processes to low-resource processing was achieved by reducing the signal processing and decoding load in such a way that it did not significantly affect decoding accuracy, and by the development of quasi-NLU principles.

We observed that an important application like the Artificial Passenger can be sufficiently entertaining for a driver with relatively modest dialog complexity requirements: playing simple voice games with a vocabulary of a few words. Successful implementation of the Safety Driver Manager would allow the use of various services in cars (such as reading e-mail, navigation, and downloading music titles) without compromising driver safety. Providing new services in a car environment is important for making the driver comfortable, and it can be a significant source of revenue for Telematics.

We expect that the novel ideas presented here regarding the use of speech and distributed user interfaces in Telematics will have a significant impact on driver safety, and that they will be the subject of intensive research and development in the forthcoming years at IBM and other laboratories.