October 8, 2008:
An American firm (Conversay) has
developed a voice recognition system that not only does not require
"training" (repeating dozens of phrases into a microphone so the
software can adapt itself to your speech patterns), but can immediately adapt
itself to a wide variety of accents. European air forces are installing this
system in their new Typhoon fighters. The new software can immediately tell the
difference between American, British, German, Italian, and Spanish-accented English. Even in the U.S. and Britain, there is wide variation in how English
is pronounced. English is the universal
language for commercial and military pilots, at least those who have to operate at foreign airports or airbases. For NATO, this allows pilots from many
different nations to speak the same language to each other during joint
operations. Ironically, the Conversay software will understand all these
accents of English better than many of the human pilots. Incomprehensible
English accents are a common complaint of air traffic controllers dealing with
pilots for whom English is a second language.
The French
air force pioneered the use of voice recognition in the cockpit. In the 1990s,
the French introduced such software, and it even took into account voice
distortion under stress (including g-stress, as when a fast-moving aircraft makes a tight turn). The new Conversay software builds on this work. The U.S.
F-35 is also being equipped with another speech recognition system. This is the
first time an American aircraft will be using such a system. The French success
with this sort of thing has encouraged other nations to go in the same
direction.
With this
voice recognition, many tasks that previously required a button push can now
be executed with a spoken command. Tests in actual cockpits have demonstrated
accuracy of 98%, which is higher than many human crews are capable of when
manually flipping a switch or pressing a button. Typical tasks for spoken
commands and electronic ears are requests for information on aircraft condition
or changing the status of a sensor or weapon system (which can be presented on
the see-through computer display built into the visors of many pilot helmets).
A typical speech system can recognize hundreds of words, including some in
slurred speech common during high-stress maneuvers. The spoken commands save
the pilot the time required to press a button or flip a switch, or glance
sideways to view a display.
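As a rough illustration, such a voice interface can be thought of as a dispatch table that maps a fixed vocabulary of recognized phrases to cockpit actions, with a confidence cutoff for rejecting garbled speech. The Python sketch below is purely hypothetical; the phrases, threshold, and handler names are invented for illustration and are not Conversay's design.

# Hypothetical sketch of a cockpit voice-command dispatcher. The
# vocabulary, confidence threshold, and handlers are invented for
# illustration; a real system's command set is far larger.

CONFIDENCE_THRESHOLD = 0.90  # reject low-confidence recognitions

def report_fuel(state):
    return f"Fuel remaining: {state['fuel_lbs']} lbs"

def radar_standby(state):
    state["radar_mode"] = "standby"
    return "Radar set to standby"

def arm_missiles(state):
    state["weapons_armed"] = True
    return "Missiles armed"

# Fixed command vocabulary: recognized phrase -> cockpit action.
COMMANDS = {
    "fuel status": report_fuel,
    "radar standby": radar_standby,
    "arm missiles": arm_missiles,
}

def handle_utterance(phrase, confidence, state):
    """Dispatch a recognized phrase, or prompt the pilot to repeat."""
    if confidence < CONFIDENCE_THRESHOLD:
        return "Say again"           # recognition too uncertain
    action = COMMANDS.get(phrase)
    if action is None:
        return "Unknown command"     # phrase not in the vocabulary
    return action(state)

state = {"fuel_lbs": 6400, "radar_mode": "search", "weapons_armed": False}
print(handle_utterance("fuel status", 0.97, state))   # Fuel remaining: 6400 lbs
print(handle_utterance("arm missiles", 0.72, state))  # Say again

Because the vocabulary is fixed and small, the recognizer only has to distinguish a few hundred command phrases rather than open-ended speech, which is part of why such systems can skip per-pilot training.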
What is developing here is, in effect, the computerized co-pilot. These
systems use computers to constantly collect and examine information from the
dozens of sensors on board. These sensors range from the familiar fuel gauge to
radar and radar warning devices. Often overlooked by civilians are the numerous
calculations and decisions pilots must make in flight. For example, on an
interception mission, the pilot must decide how best to approach distant enemy
aircraft. Radar will usually spot other aircraft long before weapons can be
used. There may also be ground-based missile systems aiming radars at you. These conditions present several options: Should you go after the enemy aircraft with long-range missiles? Or speed up and engage with more accurate cannon and short-range missiles? You also have to worry about your own fuel
situation, and which of your systems might be malfunctioning. An AI (artificial intelligence) computer's memory contains the experience of numerous veteran pilots, as well as instant information on the rapidly changing
situation. You can ask your electronic assistant what the options are and which
one has the best chance of success. The pilot can then make decisions more
quickly and accurately. When enemy aircraft are sighted, the electronic
assistant can suggest which of the many maneuvers available are likely to work.
If the aircraft is damaged, the electronic co-pilot can rapidly report what the
new options are. You become quite fond of computers once they have saved your bacon a few times. Many of these capabilities are being installed piecemeal, as
part of electronic countermeasures or radar systems. And, bit-by-bit, these
"thinking systems" are being merged, producing an electronic
co-pilot.
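In software terms, an electronic co-pilot of this sort amounts to a decision-support loop: fuse sensor readings into a situation snapshot, then score a menu of tactics against it and present the ranking to the pilot. The rule-based sketch below is an invented toy, not any fielded system's logic; the scoring weights and field names are assumptions.

# Hypothetical sketch of an electronic co-pilot ranking intercept
# options from a fused sensor snapshot. The scoring rules are
# invented for illustration only.

def rank_options(situation):
    """Score each tactic; higher is better. Rules are illustrative."""
    options = []

    # Long-range missile shot: favored when the target is far out,
    # fuel is tight, or a SAM radar has you locked up.
    score = 0.0
    if situation["target_range_km"] > 40:
        score += 0.5
    if situation["fuel_fraction"] < 0.4:
        score += 0.3
    if situation["sam_radar_lock"]:
        score += 0.2        # stay out of the SAM envelope
    options.append(("long-range missiles", score))

    # Closing to cannon/short-range missile range: favored with
    # ample fuel, no SAM threat, and a nearby target.
    score = 0.0
    if situation["fuel_fraction"] >= 0.4:
        score += 0.4
    if not situation["sam_radar_lock"]:
        score += 0.4
    if situation["target_range_km"] <= 40:
        score += 0.2
    options.append(("close and engage", score))

    return sorted(options, key=lambda o: o[1], reverse=True)

snapshot = {"target_range_km": 65, "fuel_fraction": 0.35, "sam_radar_lock": True}
for tactic, score in rank_options(snapshot):
    print(f"{tactic}: {score:.1f}")

Real systems fold in far more inputs (radar warning receivers, weapon status, fuel flow, damage reports), but the shape is the same: a ranked list of options the pilot can ask for by voice.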
There are
other uses for this voice recognition. One option is for one human pilot to
lead a group of aircraft that includes one manned aircraft and three UAVs. The
human pilot would be the flight leader, and would give orders to the UAVs. The
most dangerous jobs, like putting bombs on heavily defended targets, would go
to the UAVs. While the UAVs could also be commanded from the ground, or an
AWACS, a human pilot on the spot would always have a better view of the
situation, and be able to make decisions more quickly. That's something combat
pilots are trained to do.
The British
Royal Air Force recently ran a successful test of flight control software that
allows the pilot of one warplane to control up to four nearby UAVs. The U.S.
Navy has been working on a similar system. It's all in the software. The UAVs
must have software that enables them to do a lot of things by themselves, like flying themselves effectively and understanding verbal commands.
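A toy sketch of what "leader plus UAV wingmen" control might look like in software: parsed voice commands are routed by callsign to followers that otherwise fly themselves. The command format, callsigns, and UAV interface here are invented for illustration, not drawn from the British or U.S. Navy systems.

# Toy sketch of a flight leader issuing verbal orders to UAV wingmen.
# The command grammar and UAV interface are invented for illustration.

class UAV:
    def __init__(self, callsign):
        self.callsign = callsign
        self.task = "hold formation"   # default autonomous behavior

    def order(self, task):
        self.task = task
        return f"{self.callsign}: wilco, {task}"

class Flight:
    """One manned leader controlling up to four UAVs by callsign."""
    def __init__(self, uavs):
        self.uavs = {u.callsign: u for u in uavs}

    def command(self, utterance):
        # Expected (invented) format: "<callsign>, <task>"
        callsign, _, task = utterance.partition(", ")
        uav = self.uavs.get(callsign)
        if uav is None:
            return "No such wingman"
        return uav.order(task)

flight = Flight([UAV("viper two"), UAV("viper three"), UAV("viper four")])
print(flight.command("viper three, attack target alpha"))
# -> viper three: wilco, attack target alpha

The hard part is the autonomy between orders: each UAV has to keep flying itself, hold formation, and stay safe until the leader's next command arrives.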