Perceptual Development Workshop
APPENDIX TWO
FlashSonar: the next step in echolocation instruction
by Daniel Kish, M.A., M.A., COMS, NOMC
Perception and Imaging
The ability to direct our interactions with the environment is connected to the perceptual imaging system. The brain gathers information through the perceptual system to create images that represent our experiences. The quality of these images impacts how we interact with the environment. What we perceive feeds our comprehension, which further improves our interaction. (For a discussion of perception, see the article "A PERCEPTION BASIS FOR CANE LENGTH CONSIDERATIONS" in another issue.)
A healthy operational image is rich in character derived from multiple data streams of experience. When vision is disrupted, the brain naturally works to maintain image integrity by optimizing perception and discovery to heighten the quality of meaningful information gathered from experience. The inability to see with the eyes need not disrupt achievement when the brain learns to "see" with intact perceptual and self-management systems. Thus, self-directed interaction is indispensable to brain development. While certain neurological impairments can disrupt the perceptual imaging process, the brain's fundamental drive to perceive and discover persists across most types of impairment.
For blind people, hearing can become the dominant sense for conveying spatial information about the world at intermediate distances and for facilitating dynamic interactions with it. As brain-imaging research has shown, the capacity of audition to discriminate, recognize, and image multiple events in dynamic space, called scene analysis, is very pronounced. However, it is little understood or applied. It cannot be emphasized enough that hearing in blind people must be recognized and carefully cultivated to improve environmental interaction.
Sight and hearing both interpret patterns of energy reflecting from the environment. Reflected sound energy is called echo. Echoes can be used to perceive three characteristics of environmental elements - location, dimension (height and width), and density (solid vs. sparse, reflective vs. absorbent). This allows extraction of a functional image of single or multiple elements of the environment for hundreds of yards, depending on circumstances. For example, a parked car may be perceived as a large object that starts out low at one end, rises in the middle, and drops off at the other end. The difference in height and slope at either end can distinguish the front from the back; typically, the front will be lower, with a more gradual slope to the roof. Distinguishing between vehicle types is also possible. A pickup truck, for instance, is usually taller, with a hollow sound reflecting from its bed. An SUV is usually tall and boxy overall, with a distinctly blocky rear end. A tree is relatively narrow and solid at the bottom, broadening in all directions and becoming more sparse toward the top. More specific characteristics, such as size, leafiness, or height of the branches, can also be determined. Using this information, a scene can be analyzed and imaged, allowing the listener to establish orientation and direction through the scene. As with the visual system, this process becomes unconscious.
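The physics behind this imaging is simple: an echo returns after the sound makes a round trip to the reflecting surface and back, at roughly 343 meters per second. As a rough illustration of the time scales involved, the short Python sketch below computes these delays; the speed of sound and the sample distances are illustrative assumptions, not values drawn from echolocation practice.

    # Back-of-envelope echo timing, assuming sound travels ~343 m/s.
    # All figures here are illustrative assumptions.
    SPEED_OF_SOUND = 343.0  # meters per second, near room temperature

    def echo_delay_seconds(distance_m: float) -> float:
        """Round-trip time for a click to reach a surface and return."""
        return 2.0 * distance_m / SPEED_OF_SOUND

    for d in (0.5, 2.0, 10.0, 100.0):
        ms = echo_delay_seconds(d) * 1000.0
        print(f"surface at {d:6.1f} m -> echo after {ms:6.1f} ms")

A surface half a meter away returns its echo in about three milliseconds; one a hundred meters away takes more than half a second, which is one reason perception at such ranges depends so heavily on circumstances.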
Sonar can be passive or active. Passive sonar is more common among blind humans, but is very uncommon elsewhere in nature. It relies on sounds in the environment or casually produced by the listener, such as footsteps or cane taps. The images thus extracted are relatively vague. Active sonar involves the use of a signal deliberately produced by the listener. The greater effectiveness of active sonar lies in the brain's control over and familiarity with the signal, which allow it to distinguish the characteristics of the signal it produces from those of the returning echo. The returning signal is systematically changed by the qualities of whatever returns it, and these changes carry information about what the signal encounters. This relative precision is why active sonar is used most widely in nature and in technical applications. We use the term FlashSonar because the ideal echo signal is a flash of sound, resembling the flash of a camera, and the brain captures the reflection of the signal, much like the film of a camera.
Perhaps the greatest advantage of FlashSonar is that an active signal can be produced very consistently, so the brain can tune to this specific signal very intently. This allows for easier recognition of echoes even in complex or noisy environments. It is like recognizing a familiar face in a crowd: the more familiar the face, the easier it is to recognize. The characteristics of an active signal can also be deliberately controlled to fit the situation, and the brain is primed to attend to each echo by virtue of its control over the signal.
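Technical sonar exploits this same advantage through what engineers call matched filtering: cross-correlating the received sound against a stored copy of the emitted signal, so that anything resembling that signal stands out from noise. The Python (NumPy) sketch below is a minimal illustration of that engineering principle; the sample rate, click shape, echo delay, and noise level are all assumed values, and it is an analogy to, not a model of, what the brain does.

    import numpy as np

    rate = 44100                               # samples per second (assumed)
    t = np.arange(0, 0.002, 1.0 / rate)        # a 2 ms click template
    click = np.sin(2 * np.pi * 3000 * t) * np.hanning(t.size)

    # Simulated recording: the outgoing click, then a much fainter echo
    # 29 ms later (a surface about 5 m away at ~343 m/s), buried in noise.
    recording = np.zeros(int(0.1 * rate))
    recording[: click.size] += click
    delay = int(0.029 * rate)
    recording[delay : delay + click.size] += 0.2 * click
    recording += 0.05 * np.random.randn(recording.size)

    # Matched filter: because the signal is known exactly, correlating
    # the recording with it makes even a quiet echo stand out.
    score = np.correlate(recording, click, mode="valid")
    direct = int(np.argmax(score))             # the outgoing click itself
    masked = score.copy()
    masked[: direct + click.size] = 0.0        # ignore the outgoing click
    echo = int(np.argmax(masked))              # the echo's arrival
    round_trip = (echo - direct) / rate
    print(f"round trip {round_trip * 1000:.1f} ms -> "
          f"surface about {343 * round_trip / 2:.2f} m away")

A casual, inconsistent signal would correspond to correlating against a template that keeps changing, which is one way to think about why passive sonar yields vaguer images than active sonar.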
Sonar struggles most with figure-ground distinction - distinguishing one object or feature from others near it. Elements tend to blur together, blending small elements with large. Also, high noise levels or wind can mask echoes, making them difficult to hear and requiring louder clicks and more head scanning.
Tongue clicks should be sharp, similar to a finger snap or the pop of chewing gum. Most students can produce them without instruction. An older student can be taught the click by explaining that the tip of the tongue should not slap against the bottom of the mouth. The part of the tongue used to make the click is the same part that produces a "k" sound. If the student keeps the tip up while pulling down with the back of the tongue, a pop sound results. This sound becomes controlled with practice. Young blind children quickly learn to imitate others making the clicks, so we teach the clicks to family members and other instructors. Hand claps or clickers may also be used as a backup, but these require the use of the hands and are not as easily controlled. Clickers are generally too loud for indoor use. They should never be sounded near the ears, and never clicked more than once every two or three seconds. Cane taps can be used in a pinch, but the signal is poorly aligned with the ears, and it is inconsistent as surface characteristics change. We find that sonar signals are rarely noticed by the general public, so they raise little concern about appearing unusual. Sonar use generally results in improved posture, more natural head movement and gait, greater confidence, and more graceful environmental interaction.
The brain learns to see by using systematic stimulus differentiation. This natural process may be sped up with formal instruction.
We start by sensitizing students to echoes, usually by having them detect and locate easy targets, such as large plastic panels or bowls. This helps them get a sense of what echoes sound like. Once this is established, we reduce the size of the panels and move to subtler and more complex stimuli.
Stimulus clarification helps students perceive a stimulus that they may not at first detect, such as an open door. To clarify the stimulus, we may use a wider doorway, or a more reverberant room beyond the doorway. Then we return to the original doorway.
With stimulus comparison, we exemplify the sounds of characteristics by using A-B comparisons when possible. For example, solid vs. sparse may be typified by comparing a fence (with nothing behind it) to a nearby wall. A high wall could be found near a low wall, a tree near a pole, or a large alcove near a smaller one. We try to locate training environments that are rich with varied stimuli. The characteristics of almost any object or feature can be better understood when compared to something distinctly different. Then subtler differences can be compared.
Stimulus association is the conceptual version of stimulus comparison. Instead of comparing elements in the environment, we compare them in our minds by drawing upon mental references. For example, when facing a hedge, a student might say, "It sounds solid?" I might reply, "As solid as the wall of your house?" "No, not that solid," he might reply. "As sparse as the fence of your yard?" "No, more solid than that," he might answer. Now we have a range of relativity to work with. "Does it remind you of anything else near your house, maybe in the side yard?" "Bushes?" he might query. "But what seems different from those bushes?" "These are sort of flat, like a fence." If he can't put a word to it, we have him touch it to determine that it's a hedge, and we may discuss why it sounds the way it does.
We also like to encourage precision interaction by, for instance, having a student practice walking through a doorway without touching it, with the gap narrowed by closing the door a little more with each trial, or having a student locate the exact position of a pole and reach out to touch it without fishing for it. We also work on maintaining orientation and connectedness with surfaces in complex spaces by, for example, moving diagonally from one corner to another across increasingly large rooms. Students learn to hear the corner opening up behind them while closing in before them, keeping their line between the two. Once this is mastered, we place obstacles to be negotiated while still maintaining orientation.
Ultimately, we foster students' ability to establish orientation and direct themselves through space. We practice finding and establishing the relative locations of objects and reference points in a large, complex environment, such as a park or college campus. Students walk through the area, keeping track of their location with respect to things they can hear. They are encouraged to venture off pathways and across open spaces. We find and map more objects and features until the space is learned.