Software Tools for Blind and Visually Impaired in GNOME2 - Accessibility in GNOME2
Anale. Seria Informatică. Vol. III fasc. I - Annals. Computer Science Series. 3rd Tome, 1st Fasc.

Dr. Victoria Iordan, Associate Professor, University of the West, Timisoara, Romania

ABSTRACT. This paper presents and analyses the possibilities for a visually impaired person to use the computer with the GNOME desktop. The first part introduces the notion of accessibility in general and identifies existing solutions that give meaning to this notion for people with visual impairments, analysing the GNOME desktop and the support it offers for accessibility. The second part is dedicated to the analysis of the only screen reader for GNOME (at least for the moment): Gnopernicus. Gnopernicus is part of GAP (the GNOME Accessibility Project); the project is managed by Sun Microsystems in collaboration with BAUM Retec AG, while the actual implementation is carried out in Romania by BAUM Engineering S.R.L. The paper analyses how the user can be given auditory feedback (communication with different TTS - Text To Speech - engines) and also how Braille feedback can be provided (how several types of Braille devices and several Braille tables can be supported).

1. What does Accessibility mean?

Accessibility is about enabling people with disabilities to participate in substantial life activities that include work and the use of services, products, and information. There is no doubt that PC technology can be tremendously empowering. A significant portion of the growth in the U.S. economy during the 1990s can be directly attributed to productivity improvements brought on by PC technology. With each new generation of PCs, new applications are enabled that offer more benefits. As PCs become more powerful, they improve most people's lives in more ways every day.

PC technology can be particularly empowering for people who are blind or visually impaired. In an increasingly information-driven economy, the potential benefit of personal computers on employment for blind and visually impaired people is dramatic. The unemployment rate for blind and severely visually impaired people in the U.S. is currently over 70%. PCs, paired with accessibility technology, have allowed disabled users to effectively overcome their visual disabilities and to live and work with able-bodied people in ways that would otherwise have been impossible. By opening up new opportunities to hold mainstream jobs (good jobs that make full use of a person's mental talents and abilities), PCs can bring blind and visually impaired people out of a world of dependence and isolation and into one of full productivity and participation. Indeed, PCs have promised to be the most important tool for the empowerment of blind and visually impaired people since the invention of Braille or the magnifying glass.

History of the Development of Blind and Low-Vision AT

The history of accessibility technology for blind and visually impaired users is one of rapid innovation and continual adaptation. In the early days of computers, accessibility for blind users was not terribly difficult.
A standard 80x24 text screen used by DOS and other text-only operating systems could be reliably translated into refreshable Braille or speech (using special hardware connected to a serial port). These were very exciting days for blind people. Using the new technologies, they could stand practically level with their sighted colleagues.

Then came Windows... Windows 3.0 and 3.1 did not work with the traditional DOS screen readers. The visual interface was fundamentally more complex, as were the underlying operating system technologies. Unfortunately, Windows 3.1 did not come with any support for accessibility utilities. Screen reader companies had to re-invent their products, using much more complex methods for gathering and interpreting information about what was happening on the screen. It took a while for screen reader companies to develop these utilities, and in the interim a number of blind people lost their jobs as their employers switched to Windows.

As adoption of Windows accelerated and the introduction of Windows 95 (another major architectural revision of the operating system) loomed, blind people saw that they could once again lose jobs by the thousands. To remedy the situation, people within the Massachusetts Commission for the Blind, the Massachusetts Assistive Technology Partnership (MATP), the National Council on Disability, and the National Federation of the Blind lobbied Microsoft to fix the problem by making Windows accessible. Specifically, they asked for standard "hooks" in the operating system that would support the creation of robust and reliable accessibility utilities. Their requests were generally rebuffed. When these methods failed, the advocates began to lobby state governments to get them to boycott Microsoft products until adequate accessibility technologies could be developed. That got Microsoft's attention.

Microsoft's response to the demand to provide "hooks" in the operating system for use by accessibility utilities was the creation of the MSAA interfaces, which facilitate direct communication between applications and accessibility utilities. Initially it was hoped that this technology would solve the problem of making applications accessible. Unfortunately, it took several years to complete the first version of MSAA, relatively few applications outside of Microsoft use it, and prospects for widespread adoption are not good. No corresponding standard lower-level hooks have been provided to implement reliable support for legacy and non-compliant applications (which grow geometrically in number every year).

So the current situation is one of mixed progress and a similarly mixed outlook toward the future. While the overall reliability of screen readers for Windows has increased in the last 5 years, they are still plagued by technical problems. The Windows operating system has increased support for accessibility, but it also introduces significant new problems with each major update.

But what about Linux? Are there solutions that allow people with visual disabilities to use it? The answer is YES: visually impaired people are able to use the GNOME desktop (and there are many signs that KDE will also be accessible in the near future) by using Gnopernicus as a screen reader and magnifier.
How Screen Readers Work - Overview

Screen readers are special utility programs that run concurrently with other regular programs to figure out what is on the screen, interpret it, and then interact with the blind or visually impaired user through text-to-speech or refreshable Braille. Screen readers use two primary methods to gather information about what is on the screen:

a) direct communication with the applications that are running. This is the preferred method, but it requires additional work on the part of the application vendor;
b) "hooks" in the operating system that are used to intercept low-level operations, such as text output, in order to deduce what is going on.

These and other sources of information are combined in real time to construct an alternate view of the world, frequently called an "off-screen model" (OSM).

Since utility programs do not have eyes with which to observe the output to the screen, they must take a more sophisticated approach: collecting (directly or indirectly) information from multiple sources within the system during normal program operation. Considered abstractly, some information is collected at a higher level (closer to the user) and some information is collected at a lower level (closer to the hardware).

An example of a high-level operation would be direct communication between the screen reader and a running application. In this instance the screen reader might ask, "What is under the mouse cursor?" or "What text is in this rectangle on screen?". This kind of direct communication represents the ideal situation, because the application has the opportunity to provide information about what is happening on screen that is sufficiently detailed and complete that the screen reader can provide an adequate translation for the user.

In the less ideal (though more common) situation, when the application does not provide a direct method for querying what is happening on the screen, the fallback strategy for the screen reader is to infer what is happening from lower-level information. For example, word processors generally send blocks of text to the screen using APIs provided by the operating system. By intercepting calls to the text-display APIs, the screen reader can see where most of the text is on the screen, and can then try to infer what is actually going on. This is a simplified example: screen readers frequently rely on several types of intercepted information when making inferences about what is happening on the screen, because each source of information on its own is incomplete.

The process of intercepting API calls in the system is called "hooking". Historically there have been two primary methods of hooking calls in the operating system. "Hooking high" means hooking the APIs that the operating system provides for use by applications. "Hooking low" means hooking the display driver interfaces (DDIs), a step closer to the hardware (the video card). The logic that must be used for interpretation and translation of the available high-level and low-level information is frequently very specific to individual applications. This means that the screen reader must effectively be tuned to each new application in order to work with it well.
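To make the notion of an off-screen model more concrete, the following Python sketch shows, in a purely illustrative way, how a screen reader might accumulate intercepted text-output calls into a simple model it can later query. The hook mechanism itself is platform specific and not shown here; all names in the sketch (OffScreenModel, on_text_drawn, text_in_rectangle) are hypothetical and not taken from any particular screen reader.

class OffScreenModel:
    """Toy reconstruction of what applications have drawn to the screen."""

    def __init__(self):
        # Maps the (row, column) position of an intercepted draw call
        # to the text that was drawn there.
        self.cells = {}

    def on_text_drawn(self, x, y, text):
        """Called by the (not shown) hook each time an application draws text at (x, y)."""
        self.cells[(y, x)] = text

    def text_in_rectangle(self, top, left, bottom, right):
        """Answers the screen-reader query 'what text is in this rectangle on screen?'."""
        hits = [
            txt for (row, col), txt in sorted(self.cells.items())
            if top <= row <= bottom and left <= col <= right
        ]
        return " ".join(hits)


if __name__ == "__main__":
    osm = OffScreenModel()
    # Pretend the hook intercepted two text-display calls from a word processor.
    osm.on_text_drawn(10, 5, "File")
    osm.on_text_drawn(40, 5, "Edit")
    # The speech or Braille layer queries the model instead of "looking" at pixels.
    print(osm.text_in_rectangle(0, 0, 20, 200))   # prints: File Edit

The real work, as the text above notes, lies in the hooking and in the application-specific logic that decides how intercepted fragments relate to one another; the model itself is only the place where those fragments are gathered.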
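On the GNOME desktop, the "direct communication" route (method a above) is provided by the AT-SPI accessibility framework on which Gnopernicus relies. As a minimal sketch only, assuming the modern Python binding pyatspi rather than the earlier CORBA-based AT-SPI that Gnopernicus itself used, an assistive technology can register for events and ask applications directly for the accessible name and role of the focused widget instead of inferring them from drawing calls:

# A minimal sketch, assuming the pyatspi binding to GNOME's AT-SPI registry.
# It only illustrates the direct-communication idea, not Gnopernicus' actual code.
import pyatspi

def on_focus_changed(event):
    # Only react when a widget gains focus (detail1 == 1), not when it loses it.
    if not event.detail1:
        return
    acc = event.source  # the accessible object exposed by the application
    # Its role and name are what a screen reader would speak or send to Braille.
    print(f"{acc.getRoleName()}: {acc.name}")

# Ask the AT-SPI registry to deliver focus-change events from all applications.
pyatspi.Registry.registerEventListener(on_focus_changed,
                                       "object:state-changed:focused")
pyatspi.Registry.start()  # enters the event loop; stop with Ctrl+C

Because the application itself answers the query, no guessing from intercepted drawing calls is needed, which is exactly why direct communication is described above as the preferred method.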