Gesture Recognition, Proximity Sensors Drive Advances in Automotive Infotainment

At a time when social media interactivity is making its way into cars, the instrument control panel must evolve if drivers are to connect and perform tasks simultaneously and safely. Improved gesture and speech recognition are two of the more prominent cockpit human-machine interface (HMI) technologies carmakers are developing to deliver safe, reliable interaction between driver and vehicle.
Gesture recognition: Hands in command
Gesture recognition technology is widely expected to be the next-generation in-car user interface. A gesture recognition system determines whether the driver has performed a recognizable hand or finger gesture within a defined space, without any contact with a touchscreen. For example, an approaching hand can activate the in-car infotainment system, or in a more sophisticated system, the driver can touch the steering wheel and then tilt his or her head left or right to turn the stereo volume up or down. A camera placed in the steering wheel or on the dashboard is programmed to watch for certain gestures. When it sees them, it sends a signal to the processor that handles the connected infotainment hardware. The data is analyzed to determine what the driver is doing, ascertain which display controls the driver wants to adjust, and then activate the appropriate features.
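To make that camera-to-infotainment flow concrete, the following Python sketch classifies a short track of hand observations as a gesture and dispatches it to an infotainment action. The gesture set, thresholds and action names are illustrative assumptions, not any carmaker’s actual implementation.

```python
# Hypothetical sketch of the flow described above: per-frame hand
# observations are matched against known gestures and a command is
# dispatched to the infotainment controls. Names are illustrative.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class HandObservation:
    present: bool   # hand detected in the interaction zone
    x: float        # normalized horizontal position (0..1)
    y: float        # normalized vertical position (0..1)

def classify_gesture(track: List[HandObservation]) -> str:
    """Very simplified classifier: infer a gesture from a short track."""
    if not track or not track[-1].present:
        return "none"
    dx = track[-1].x - track[0].x
    if dx > 0.3:
        return "swipe_right"
    if dx < -0.3:
        return "swipe_left"
    return "hover"   # an approaching, steady hand wakes the system

# Map recognized gestures onto infotainment actions.
ACTIONS: Dict[str, Callable[[], None]] = {
    "hover":       lambda: print("wake infotainment display"),
    "swipe_right": lambda: print("next radio station"),
    "swipe_left":  lambda: print("previous radio station"),
}

def dispatch(track: List[HandObservation]) -> None:
    gesture = classify_gesture(track)
    if gesture in ACTIONS:
        ACTIONS[gesture]()

# Example: a hand sweeping left to right across the interaction zone.
dispatch([HandObservation(True, 0.2, 0.5), HandObservation(True, 0.7, 0.5)])
```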
This technology may well be familiar to gamers. That’s because it is not unlike Microsoft’s Kinect system for the Xbox game console, which detects motion from distances of up to approximately 10 feet. However, rather than tracking a user’s entire body motion as the Kinect does, automotive infotainment applications analyze only the user’s hand gestures.
A market study conducted in 2013 by IHS Automotive examined gesture-recognition technology and proximity sensing. According to IHS, the global market for automotive proximity and gesture recognition systems that allow motorists to control their infotainment systems with a simple wave of their hand will grow to more than 38 million units in 2023, up from about 700,000 in 2013. Automakers including Audi, BMW, Cadillac, Ford, GM, Hyundai, Kia, Lexus, Mercedes-Benz, Nissan, Toyota and Volkswagen are all in the process of implementing some form of gesture technology into their automobiles.
Hyundai’s HCD-14 is a luxury four-door concept sedan featuring gesture controls for audio, HVAC, navigation and smartphone connectivity functions. The driver selects a main function by gazing at a heads-up display (HUD), presses a thumb button on the steering wheel to confirm the selection and then performs gestures, which include moving a hand in or out from the dashboard to zoom in or out on the navigation system, a dialing motion to adjust volume and a side-to-side gesture to change radio stations.
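That gaze-confirm-gesture sequence maps naturally onto a small state machine in which gestures are ignored until a gazed-at function has been confirmed. The Python sketch below illustrates the idea; the class and method names are hypothetical, not Hyundai’s design.

```python
# Illustrative state machine for an HCD-14-style interaction sequence:
# gaze selects a function, the thumb button confirms it, and only then
# are gestures routed to that function. All names are hypothetical.

class MultiModalHMI:
    def __init__(self):
        self.gazed_function = None    # function the driver is looking at
        self.active_function = None   # function confirmed for gesture control

    def on_gaze(self, function: str) -> None:
        self.gazed_function = function

    def on_thumb_button(self) -> None:
        # The confirmation step guards against accidental gesture input.
        self.active_function = self.gazed_function

    def on_gesture(self, gesture: str) -> None:
        if self.active_function is None:
            return   # ignore gestures until a function is confirmed
        print(f"{self.active_function}: {gesture}")

hmi = MultiModalHMI()
hmi.on_gaze("navigation")
hmi.on_thumb_button()
hmi.on_gesture("zoom_in")   # -> navigation: zoom_in
```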
Similarly, Visteon’s Horizon cockpit concept, which the company has been demonstrating to global vehicle manufacturers, uses 3-D gesture recognition to transform the way a driver controls such features as interior temperature, audio and navigation. In the Horizon cockpit concept, controls can be manipulated by moving the hand or just a finger. Radio volume, for example, can be adjusted by making a turning motion with one’s hand without making contact with the instrument cluster. The gesture recognition technology uses a camera system to map the user’s hand and replicates a virtual hand on the center stack display.
The industry answers with new sensors and tools
Semiconductor suppliers are developing the hardware needed to enable user command input with natural hand and finger movements. For example, Microchip’s MGC3130 is a three-dimensional gesture recognition and tracking controller chip based on the company’s patented GestIC technology, which uses an electric field (E-field) to provide gesture information as well as positional data of the human hand in real time. E-fields are generated by electrical charges and propagate three-dimensionally around the surface carrying the charge. Applying a DC voltage to an electrode results in a constant electric field. If a person’s hand or finger intrudes into the field, the field becomes distorted. GestIC technology uses a minimum of four receiver (Rx) electrodes to detect the E-field variations at different positions and determine the origin of the distortion from the varying signals received. This information is used to calculate the hand’s position, track its movements and classify movement patterns (gestures).
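One simple way to picture how a position can be recovered from four Rx electrodes is a weighted centroid: electrodes seeing a larger field deviation, because the hand is closer, pull the estimate toward themselves. The Python sketch below demonstrates that idea; the electrode layout and the centroid math are assumptions for illustration, not Microchip’s actual algorithm.

```python
# Sketch of position recovery from four Rx electrodes. The corner layout
# and weighted-centroid estimate are illustrative assumptions only.

# Four receiver electrodes at the corners of a unit sensing area (x, y).
ELECTRODES = {
    "north_west": (0.0, 1.0),
    "north_east": (1.0, 1.0),
    "south_west": (0.0, 0.0),
    "south_east": (1.0, 0.0),
}

def estimate_position(deviation: dict):
    """Weighted centroid: electrodes with a larger E-field deviation
    (a closer hand) pull the position estimate toward themselves."""
    total = sum(deviation.values())
    if total == 0:
        return None   # no hand in the field
    x = sum(ELECTRODES[k][0] * v for k, v in deviation.items()) / total
    y = sum(ELECTRODES[k][1] * v for k, v in deviation.items()) / total
    return (x, y)

# A hand near the north-east corner distorts that electrode's field most.
print(estimate_position(
    {"north_west": 0.1, "north_east": 0.7, "south_west": 0.05, "south_east": 0.15}
))
```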
A number of automotive touchscreens now employ proximity sensing, a technology that can enable touch-free interfaces in infotainment systems, keyless entry systems and lighting controls. The Cadillac User Experience (CUE) was the first system to offer proximity sensing in a mass-market production vehicle. A pair of infrared sensors just below the screen detects when a user’s hand approaches the screen and activates frequently used menus, such as a list of mixed presets and navigation options.
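In the spirit of the CUE behavior, the minimal Python sketch below shows proximity-triggered menu activation with hysteresis, so the menu appears when the infrared reading crosses an approach threshold and does not flicker on the way out. The thresholds and the sensor-read interface are illustrative assumptions.

```python
# Minimal sketch of proximity-triggered menu activation. The menu is
# shown when the IR reading crosses an approach threshold and hidden
# with hysteresis so it does not flicker. Values are assumptions.

SHOW_THRESHOLD = 0.6   # normalized IR reflection: hand close to the screen
HIDE_THRESHOLD = 0.3   # lower threshold on the way out (hysteresis)

def update_menu(ir_level: float, menu_visible: bool) -> bool:
    if not menu_visible and ir_level > SHOW_THRESHOLD:
        print("show preset and navigation shortcuts")
        return True
    if menu_visible and ir_level < HIDE_THRESHOLD:
        print("hide shortcuts, restore full-screen content")
        return False
    return menu_visible

visible = False
for reading in [0.1, 0.4, 0.7, 0.5, 0.2]:   # simulated approach and retreat
    visible = update_menu(reading, visible)
```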
A detector IC can be used together with up to three separate infrared LEDs to form a sensor that detects 3-D movement of objects in front of the device. With this information, the content of a display can be controlled simply by waving a hand. Detection ranges of up to 20 cm are achievable, and the range can easily be extended by using high-power emitters. The SFH 7770 E6 from OSRAM Opto Semiconductors combines the functions of a digital ambient-light sensor with those of a digital proximity sensor: it determines the ambient brightness, detects the presence of a nearby object and can tell whether the object is moving closer or farther away. The product can be used wherever short-range gesture recognition is needed.
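The approach/recede classification such a sensor enables can be sketched by comparing successive proximity samples and ignoring changes below a noise dead band, as the Python example below does. The register access, scaling and dead-band value are assumptions for illustration, not the device’s datasheet API.

```python
# Sketch of approach/recede classification from proximity samples.
# Successive readings are compared; changes within a dead band are
# treated as noise. Counts and scaling are illustrative assumptions.

DEAD_BAND = 5   # raw counts of change treated as noise

def motion_direction(previous: int, current: int) -> str:
    delta = current - previous
    if delta > DEAD_BAND:
        return "approaching"   # stronger reflection -> object closer
    if delta < -DEAD_BAND:
        return "receding"
    return "steady"

samples = [10, 14, 30, 55, 52, 20]   # simulated proximity counts
for prev, cur in zip(samples, samples[1:]):
    print(motion_direction(prev, cur))
```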
Voice control: Tell me what you want
When traffic situations make driver interaction with infotainment apps potentially hazardous, voice control based on speech recognition can be a useful complement to conventional HMIs. Market research company IHS expects more than half of new automobiles in 2019 to integrate voice technologies such as voice recognition, text-to-speech and speech-to-text to enable drivers to control entertainment and navigation systems simply by using their voices.
To get there, speech interfaces in vehicles must overcome a low adoption rate among drivers, caused primarily by perceived accuracy issues. Voice functions must actually work, providing true, reliable voice integration. And if voice systems are to make the leap from purely task-oriented command and control to a more sophisticated, user-centric interface designed to reduce distraction, they must interact in a more conversational way instead of being restricted to a list of fixed, predefined menu phrases.
The good news is that speech recognition systems in the form of hands-free text-to-speech or speech-to-text operations are now delivering greater accuracy and employing more flexible grammar libraries. For example, Ford’s SYNC, based on the Microsoft Windows CE operating system, supports up to 10,000 voice commands with no training required for the system to recognize them.
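To show what grammar-based command matching of this general kind looks like, the Python sketch below matches a recognized utterance against command templates with slots. The grammar and phrases are invented for illustration; they are not SYNC’s actual command set.

```python
# Hedged sketch of fixed-grammar voice command matching: a recognized
# utterance is matched against templates with slots. The grammar here
# is invented for illustration.

import re

GRAMMAR = {
    r"^call (?P<name>\w+)$":            lambda m: f"dialing {m['name']}",
    r"^tune to (?P<freq>[\d.]+) fm$":   lambda m: f"radio set to {m['freq']} FM",
    r"^set temperature to (?P<t>\d+)$": lambda m: f"climate set to {m['t']} degrees",
}

def handle_utterance(text: str) -> str:
    text = text.lower().strip()
    for pattern, action in GRAMMAR.items():
        match = re.match(pattern, text)
        if match:
            return action(match.groupdict())
    return "command not recognized"   # the fixed-grammar failure mode

print(handle_utterance("Tune to 101.5 FM"))        # radio set to 101.5 FM
print(handle_utterance("Play something relaxing")) # command not recognized
```

The failure mode in the second call is exactly the rigidity the previous paragraph describes: anything outside the predefined phrase list is simply rejected.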
It will take considerable computing muscle to power next-generation speech-based infotainment applications, enabling them to be context-aware and to respond more quickly and accurately. Some speech functions will be performed on board by the infotainment system, while others, like dictating email or web content, may be performed off board by servers in the Internet service provider’s data center.
Processors such as the Intel Atom E640 series have the computing headroom to perform critical noise- and echo-cancellation functions, thereby eliminating the need for a separate digital signal processor (DSP) and providing speech applications, and far-end listeners on phone calls, with a cleaner input signal.
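The kind of echo cancellation such a processor can run in software is sketched below: a normalized least-mean-squares (NLMS) adaptive filter subtracts an estimate of the loudspeaker echo from the microphone signal. This is a textbook algorithm shown for illustration, not Intel’s implementation; the filter length and step size are assumed values.

```python
# Sketch of software echo cancellation: an NLMS adaptive filter learns
# the loudspeaker-to-microphone echo path and subtracts its estimate.
# Textbook algorithm for illustration; parameters are assumptions.

import numpy as np

def nlms_echo_cancel(mic, speaker, taps=64, mu=0.5, eps=1e-6):
    """Return the mic signal with the adaptively estimated echo removed."""
    w = np.zeros(taps)                    # adaptive filter weights
    out = np.zeros_like(mic)
    for n in range(taps, len(mic)):
        x = speaker[n - taps:n][::-1]     # recent speaker samples (echo source)
        echo_estimate = w @ x
        e = mic[n] - echo_estimate        # error = cleaned sample
        w += mu * e * x / (x @ x + eps)   # normalized weight update
        out[n] = e
    return out

# Simulated call: the mic picks up a delayed, attenuated copy of the
# far-end audio plus a little noise.
rng = np.random.default_rng(0)
speaker = rng.standard_normal(4000)
echo = 0.4 * np.concatenate([np.zeros(10), speaker[:-10]])
mic = echo + 0.05 * rng.standard_normal(4000)
cleaned = nlms_echo_cancel(mic, speaker)
print(f"power before: {np.mean(mic**2):.3f}, after: {np.mean(cleaned[200:]**2):.3f}")
```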
Multimedia information consoles are now the predominant feature of dashboards in new cars. Safety concerns are driving demand for in-car hand gesture and voice recognition technology as a potential answer to driver distraction, an issue that will become more conspicuous as connectivity on the road becomes a default feature of most new cars. The potential for both technologies in human-machine interface development is immense, and IC manufacturers are quickly integrating the necessary functionality into MCUs and sensors to make that potential a reality.