US 20070211071 A1
(19) United States
(12) Patent Application Publication    (10) Pub. No.: US 2007/0211071 A1
     Slotznick et al.                   (43) Pub. Date: Sep. 13, 2007

(54) METHOD AND APPARATUS FOR INTERACTING WITH A VISUALLY DISPLAYED DOCUMENT ON A SCREEN READER

(76) Inventors: Benjamin Slotznick, Mt. Gretna, PA (US); Stephen C. Sheetz, Lititz, PA (US)

Correspondence Address:
AKIN GUMP STRAUSS HAUER & FELD L.L.P.
ONE COMMERCE SQUARE
2005 MARKET STREET, SUITE 2200
PHILADELPHIA, PA 19103 (US)

(21) Appl. No.: 11/642,247

(22) Filed: Dec. 20, 2006

Related U.S. Application Data

(60) Provisional application No. 60/751,855, filed on Dec. 20, 2005.

Publication Classification

(51) Int. Cl. G09G 5/02 (2006.01)
(52) U.S. Cl. ............ 345/594

(57) ABSTRACT

User interaction with a visually displayed document is provided via a graphical user interface (GUI). The document includes, and is parsed into, a plurality of text-based grammatical units. An input device modality is selected from a plurality of input device modalities which determines the type of input device with which a user interacts to make a selection. One or more grammatical units of the document are then selected using the selected type of input device. Each grammatical unit that is selected is read aloud to the user by loading the grammatical unit into a text-to-speech engine. The text of the grammatical unit is thereby automatically spoken. Furthermore, a switching modality is selected from a plurality of switching modalities. The switching modality determines the manner in which one or more switches are used to make a selection. Using the selected switching modality, a user steps through at least some of the grammatical units in an ordered manner by physically activating one or more switches associated with the GUI. Each activation steps through one grammatical unit. Each grammatical unit that is stepped through is read aloud by loading the grammatical unit into a text-to-speech engine, thereby causing the text of the grammatical unit to be automatically spoken.

[Front-page representative figure (corresponding to FIG. 15): flow chart of the reading mode, items 1501-1527. Starting at 1501, the software waits for the next input (1502) until a button is pushed (1503). The step forward button highlights and reads the next sentence or object (1505, 1507); the repeat step button re-reads the sentence or object (1509, 1511); the step backward button highlights and reads the previous sentence or object (1513, 1515); the activate button activates the link or clickable object (1517, 1521) and gets a new page (1523); the change mode button changes to hyperlink mode (1525, 1527); otherwise the software returns to the wait state (1502).]

[Drawing Sheets 1-13 contain FIGS. 1-13 of the drawings, several of which are labeled "PRIOR ART".]

[FIG. 14 (Sheet 14): "Some key assignments for using standard keyboard." The figure tabulates which keys and fingers perform each action:

Fingers to be used                Key           Action          Finger
Left hand only:                   (not legible) change mode     little finger
                                  2             step backward   ring finger
                                  3             repeat step     middle finger
                                  4             step forward    pointer finger
                                  Space bar     activate        thumb
Left hand plus one finger         Tab           change mode     little finger
on the right hand:                2             step backward   ring finger
                                  3             repeat step     middle finger
                                  4             step forward    pointer finger
                                  Enter         activate        right hand pointer
Left hand plus two fingers        2             step backward   ring finger
on the right hand:                3             repeat step     middle finger
                                  4             step forward    pointer finger
                                  Enter         activate        right hand pointer
                                  Ctrl or Alt   change mode     right hand thumb
Right hand only:                  Space bar     activate        thumb
                                  0 (zero)      step forward    pointer finger
                                  - (minus)     repeat step     middle finger
                                  + (plus)      step backward   ring finger
                                  \             change mode     little finger
Right hand plus one finger        Tab           activate        left hand pointer
on the left hand:                 0 (zero)      step forward    pointer finger
                                  - (minus)     repeat step     middle finger
                                  + (plus)      step backward   ring finger
                                  \             change mode     little finger
Right hand plus two fingers       Tab           activate        left hand pointer
on the left hand:                 Ctrl or Alt   change mode     left hand thumb
                                  0 (zero)      step forward    pointer finger
                                  - (minus)     repeat step     middle finger
                                  + (plus)      step backward   ring finger]

[FIG. 15 (Sheet 15): Flow chart of the reading mode. While the software waits for the next input (1501), a button is pushed (1503). The step forward button highlights and reads the next sentence or object (1505, 1507); the repeat step button re-reads the sentence or object (1509, 1511); the step backward button highlights and reads the previous sentence or object (1513, 1515); the activate button activates the link or clickable object (1517, 1521) and gets a new page (1523); the change mode button changes to hyperlink mode (1525, 1527); otherwise the software returns to the wait state (1502).]

[FIG. 16 (Sheet 16): Flow chart of the hyperlink mode. While the software waits for the next input (1601), a button is pushed (1603). The step forward button highlights and reads the next hyperlink or clickable object (1605, 1607); the repeat step button re-reads the hyperlink or clickable object (1609, 1611); the step backward button highlights and reads the previous hyperlink or clickable object (1613, 1615); the activate button activates the link or clickable object (1617, 1621) and gets a new page (1623); the change mode button changes to navigation mode (1625, 1627); otherwise the software returns to the wait state (1601).]

[FIG. 17 (Sheet 17): Flow chart of the navigation mode. While the software waits for the next input (1701), a button is pushed (1703). The step forward button highlights and reads the next navigation button or tab (1705, 1707); the repeat step button re-reads the navigation button or tab (1709, 1711); the step backward button highlights and reads the previous navigation button or tab (1713, 1715); the activate button activates the navigation button or tab (1717, 1719) and gets a new page (1723); the change mode button changes to reading mode (1725, 1727); otherwise the software returns to the wait state (1701).]

[FIG. 18 (Sheet 18): Screen shot of a sample web page as displayed by an embodiment designed for one- or two-switch step scanning (reference numerals in the 1800 series).]

[FIG. 19 (Sheet 19): Screen shot of the option page by which the user chooses among input device modalities and switching modalities (reference numeral 1909).]
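As an informal illustration of the behavior charted in FIGS. 14-17, the sketch below shows one way a five-key, three-mode controller of this kind could be organized in JavaScript. It is a simplified sketch under stated assumptions only: the keys 2, 3, 4 and the space bar follow the left-hand-only column of FIG. 14, Tab stands in for the change-mode key (whose assignment is not legible above), and readAloud() is a stand-in for the text-to-speech call; none of the helper names are taken from the patent.

    // Simplified sketch of the five-key, three-mode behavior of FIGS. 14-17.
    var modes = {
      reading:    { items: [], label: "reading mode" },      // sentences and objects (FIG. 15)
      hyperlink:  { items: [], label: "hyperlink mode" },     // links and clickable objects (FIG. 16)
      navigation: { items: [], label: "navigation mode" }     // navigation buttons and tabs (FIG. 17)
    };
    var order = ["reading", "hyperlink", "navigation"];
    var current = 0;      // index of the active mode
    var position = -1;    // index of the highlighted item within the active mode
    var lastItem = null;

    function collectItems() {
      modes.reading.items    = Array.prototype.slice.call(document.querySelectorAll("p, li"));
      modes.hyperlink.items  = Array.prototype.slice.call(document.querySelectorAll("a, button"));
      modes.navigation.items = Array.prototype.slice.call(document.querySelectorAll(".toolbar button"));
    }

    function readAloud(text) {
      console.log("TTS: " + text);   // a real implementation would queue text for the speech engine
    }

    function highlight(item) {
      if (lastItem) { lastItem.style.backgroundColor = ""; }
      item.style.backgroundColor = "yellow";
      lastItem = item;
    }

    function step(direction) {                 // +1 forward, -1 backward, 0 repeat
      var items = modes[order[current]].items;
      if (items.length === 0) { return; }
      position = (position + direction + items.length) % items.length;
      highlight(items[position]);
      readAloud(items[position].textContent || items[position].value || "");
    }

    function handleKey(event) {
      if (event.key === "4")      { step(1); }              // step forward
      else if (event.key === "3") { step(0); }              // repeat step
      else if (event.key === "2") { step(-1); }             // step backward
      else if (event.key === " ") {                         // activate
        var items = modes[order[current]].items;
        if (position >= 0 && items[position]) { items[position].click(); }
      } else if (event.key === "Tab") {                     // change mode
        current = (current + 1) % order.length;             // reading -> hyperlink -> navigation -> reading
        position = -1;
        readAloud(modes[order[current]].label);
        event.preventDefault();
      }
    }

    collectItems();
    document.addEventListener("keydown", handleKey);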


METHOD AND APPARATUS FOR INTERACTING WITH A VISUALLY DISPLAYED DOCUMENT ON A SCREEN READER

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of U.S. Provisional Application No. 60/751,855 filed Dec. 20, 2005 entitled "User Interface for Stepping Through Functions of a Screen Reader."

COMPACT DISC APPENDIX

[0002] This patent application includes an Appendix on one compact disc having a file named appendix.txt, created on Dec. 19, 2006, and having a size of 36,864 bytes. The compact disc is incorporated by reference into the present patent application. This compact disc appendix is identical in content to the compact disc appendix that was incorporated by reference into U.S. Patent Application Publication No. 2002/0178007 (Slotznick et al.).

COPYRIGHT NOTICE AND AUTHORIZATION

[0003] Portions of the documentation in this patent document contain material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office file or records, but otherwise reserves all copyright rights whatsoever.

BACKGROUND OF THE INVENTION

[0004] The present invention discloses novel techniques for adding multiple input device modalities and multiple switching modalities, such as switch-scanning (or step-scanning) capabilities, to screen-reader software, such as the Point-and-Read® screen-reader disclosed in U.S. Patent Application Publication No. 2002/0178007 (Slotznick et al.). Portions of U.S. Patent Application Publication No. 2002/0178007 are repeated below, and portions not repeated below are incorporated by reference herein.

[0005] 1. Using the Point-and-Read screen-reader: To use the Point-and-Read screen-reader, the user moves the cursor (most frequently controlled by a mouse or other pointing device) over the screen. The Point-and-Read software will highlight in a contrasting color an entire sentence when the cursor hovers over any part of it. If the user keeps the cursor over the sentence for about a second, the software will read the sentence aloud. Clicking is not necessary. If the user places the cursor over a link, and keeps it there, the software will first cause the computer to read the link; then, if the cursor remains over the link, the software will cause the computer to navigate to the link. Pointing devices coupled with highlighting and clickless activation also operate the control features of the software (i.e., the "buttons" located on toolbars, such as "Back", "Forward", "Print", and "Scroll Down"). Keystroke combinations can also be used for a handful of the most important activities, such as reading text and activating links. These actions can be varied through options and preferences.

[0006] Unlike many screen-readers, the Point-and-Read screen-reader is designed for people who may have multiple disabilities, such as the following types of people:

[0007] i. People who cannot read or who have difficulty reading, but who can hear and comprehend conversational speech.

[0008] ii. People who may have poor vision, but who can see high contrasts.

[0009] iii. People with hand-motor limitations who can move and position a pointing device (such as a mouse) but nonetheless may have difficulty clicking on mouse buttons.

[0010] iv. People who may have learning disabilities, or cognitive disabilities (such as traumatic brain injury, mental retardation, or Alzheimer's disease) which make reading difficult.

[0011] However, there are people whose vision or manual dexterity is even more limited than currently required for Point-and-Read. Just as importantly, many disabilities are progressive and increase with age, so that some people who have the ability to use Point-and-Read may lose that ability as they age. The present invention is intended to extend some of the benefits of using a screen-reader like Point-and-Read to such people.

[0012] (Most screen-readers, and much of assistive technology, focus on compensating for one physical disability, usually by relying upon other abilities and mental acuity. This approach does not help people who have multiple disabilities, especially if one of their disabilities is cognitive.)

[0013] With the present invention, by increasing the functionality of a screen-reader such as Point-and-Read, a user's vision can range from good to blind and a user's motor skills can range from utilizing a mouse to utilizing only one switch. This allows a user to continue employing the same software program user interface as he or she transitions over time or with age from few moderate disabilities to many severe ones.

[0014] 2. Using switches to control computers: Some people with severe physical disabilities or muscle degenerative diseases such as Lou Gehrig's disease (ALS) may have only one or two specific movements or muscles that they can readily control. Yet ingenious engineers have designed single switches that these people can operate to control everything from a motorized wheelchair to a computer program. For example, besides hand-operated switches, there are switches that can be activated by an eyelid blinking, or by puffing on a straw-like object.

[0015] Many people who are blind or have low vision cannot see (or have difficulty seeing) the computer cursor on a computer screen. They find it difficult or impossible to use a computer pointing device, such as a mouse, to control software. For these people, software that is controlled by a keyboard or switch(es) is easier to use than software controlled by a pointing device, even if these people do not have hand-motor-control disabilities.

[0016] 3. Using switches and automated step scanning: Automated step scanning allows a person who can use only one switch to select from a multitude of actions. The computer software automatically steps through the possible choices one at a time, and indicates or identifies the nature of each choice to the user by highlighting it on a computer screen, or by reading it aloud, or by some other indicia appropriate to the user's abilities.

The choice is highlighted (read or identified) for a preset time, after which the software automatically moves to the next choice and highlights (reads or identifies) this next choice. The user activates (or triggers) the switch when the option or choice that he or she wishes to choose has been identified (e.g., highlighted or read aloud). In this way, a single switch can be used with on-screen keyboards to type entire sentences or control a variety of computer programs. Different software programs may provide different ways of stepping through choices. This type of a process is referred to as "single-switch scanning", "automatic scanning", "automated scanning", or just "auto scanning".

[0017] 4. Using two-switch step scanning: If the person can control two different switches, then one switch can be used to physically (e.g., manually) step through the choices, and the other switch can be used to select the choice the user wants. A single switch is functionally equivalent to two switches if the user has sufficient control over the single switch to use it reliably in two different ways, such as by a repeated activation (e.g., a left-mouse click versus a left-mouse double-click) or by holding the switch consistently for different durations (e.g., a short period versus a long period as in Morse code). However, in either event, this will be referred to as "two-switch step scanning" or "two-switch scanning".

[0018] Automatic scanning may be physically easier for some people than two-switch step scanning. However, two-switch scanning offers the user a simpler cognitive map, and may also be more appropriate for people who have trouble activating a switch on cue.

[0019] For both automatic scanning and physical (e.g., manual) step scanning, there is sometimes an additional switch provided that allows the user to cancel his or her selection.

[0020] 5. Using directed step scanning: The term "directed scanning" is sometimes used when more than two switches are employed to direct the pattern or path by which a scanning program steps through choices. For example, a joy-stick (or four directional buttons) may be used to direct how the computer steps through an on-screen keyboard.

[0021] Some software programs not designed primarily for people with disabilities still have scanning features. For example, when Microsoft's Internet Explorer is displaying a page, hitting the Tab key will advance the focus of the program to the next clickable button or hyper-link. (Hitting the Enter key will then activate the link.) Repeatedly hitting the Tab key will advance through all buttons and links on the page.

[0022] 6. Additional background information: All of these various automated and physical (e.g., manual) methods will be referred to as "switch-scanning".

[0023] "Scanning" is also the term used for converting a physical image on paper (or other media such as film stock) into a digital image, by using hardware called a scanner. This type of process will be referred to as "image scanning".

[0024] The hardware looks, and in many ways works, like a photocopy machine. A variety of manufacturers including Hewlett-Packard and Xerox make scanners. The scanner works in conjunction with image-scanning software to convert the captured image to the appropriate type of electronic file.

[0025] In the assistive technology field, products such as the Kurzweil 3000 combine an image scanner with optical character recognition (OCR) software and text-to-speech software to help people who are blind or have a difficulty reading because of dyslexia. Typically, the user will put a sheet of paper with printed words into the scanner, and press some keys or buttons. The scanner will take an image of the paper, the OCR software will convert the image to a text file, and the text-to-speech software will read the page aloud to the user.

[0026] The present invention is primarily concerned with switch-scanning. However, when a computer is controlled by switch-scanning, the switch-scanning may be used to activate an image-scanner that is attached to the computer. Also, when an image-scanner is used to convert a paper document into an electronic one, switch-scanning may be used to read the document one sentence at a time.

[0027] Assistive technology has made great progress over the years, but each technology tends to assume that the user has only one disability, namely, a complete lack of one key sensory input. For example, technology for the blind generally assumes that the user has no useful vision but that the user can compensate for lack of sight by using touch, hearing and mental acuity. As another example, technology for switch-users generally assumes that the user can operate only one or two switches, but can compensate for the inability to use a pointing device or keyboard by using sight, hearing and mental acuity. As another example, a one-handed keyboard (such as the BAT Keyboard from Infogrip, Inc., Ventura, Calif.) will have fewer keys, but often rely upon "chording" (hitting more than one key at a time) to achieve all possible letters and control keys, thus substituting mental acuity and single-hand dexterity for two-handed dexterity. (The BAT Keyboard has three keys for the thumb plus four other keys, one for each finger.)

[0028] If the user has multiple disabilities, disparate technologies frequently have to be cobbled together in a customized product by a rehabilitation engineer. Just as importantly, a person with multiple disabilities may have only partial losses of several inputs. But because each technology usually assumes a complete loss of one type of input, the cobbled-together customized product does not use all the abilities that the user possesses. In addition, the customized product is likely to rely more heavily on mental acuity.

[0029] However, this is not helpful for people with cognitive disabilities (such as traumatic brain injury or mental retardation), who frequently have some other partial impairment(s), such as poor hand-motor control, or poor vision.

[0030] Just as importantly, most screen-readers also tend to focus on one level of disability, so that they are too intrusive for a person with a less severe disability and don't provide sufficient support for a person with a more severe disability. This approach does not help the many people who acquire various disabilities as they age and whose disabilities increase with aging. Just when an aging person needs to switch technologies to ameliorate various increased physical disabilities, he or she might be cognitively less able to learn a new technology.

BRIEF SUMMARY OF THE INVENTION

[0031] User interaction with a visually displayed document is provided via a graphical user interface (GUI). The document includes, and is parsed into, a plurality of text-based grammatical units. An input device modality is selected from a plurality of input device modalities which determines the type of input device with which a user interacts to make a selection. One or more grammatical units of the document are then selected using the selected type of input device. Each grammatical unit that is selected is read aloud to the user by loading the grammatical unit into a text-to-speech engine. The text of the grammatical unit is thereby automatically spoken. Furthermore, a switching modality is selected from a plurality of switching modalities. The switching modality determines the manner in which one or more switches are used to make a selection. Using the selected switching modality, a user steps through at least some of the grammatical units in an ordered manner by physically activating one or more switches associated with the GUI. Each activation steps through one grammatical unit. Each grammatical unit that is stepped through is read aloud by loading the grammatical unit into a text-to-speech engine, thereby causing the text of the grammatical unit to be automatically spoken.

BRIEF DESCRIPTION OF THE DRAWINGS

[0032] The above summary, as well as the following detailed description of preferred embodiments of the invention, will be better understood when read in conjunction with the following drawings. For the purpose of illustrating the invention, there is shown in the drawings an embodiment that is presently preferred, and an example of how the invention is used in a real-world project. It should be understood that the invention is not limited to the precise arrangements and instrumentalities shown. In the drawings:

[0033] FIG. 1 shows a flow chart of a prior art embodiment that is related to the present invention;

[0034] FIG. 2 shows a flow chart of a particular step in FIG. 1, but with greater detail of the sub-steps;

[0035] FIG. 3 shows a flow chart of an alternate prior art embodiment that is related to the present invention;

[0036] FIG. 4 shows a screen capture associated with FIG. 3;

[0037] FIG. 5 shows a screen capture of the prior art embodiment related to the present invention displaying a particular web page with modified formatting, after having navigated to the particular web page from the FIG. 3 screen;

[0038] FIG. 6 shows a screen capture of the prior art embodiment related to the present invention after the user has placed the cursor over a sentence in the web page shown in FIG. 5; and

[0039] FIGS. 7-13 show screen captures of another prior art embodiment related to the present invention.

[0040] FIG. 14 shows different ways in which five of the keys on a standard QWERTY keyboard can be used to simulate a BAT keyboard or similar five-key keyboards in accordance with one preferred embodiment of the present invention.

[0041] FIG. 15 shows a flow chart of what actions are taken in the reading mode when any of the five keys are pressed in accordance with one preferred embodiment of the present invention.

[0042] FIG. 16 shows a flow chart of what actions are taken in the hyperlink mode when any of the five keys are pressed in accordance with one preferred embodiment of the present invention.

[0043] FIG. 17 shows a flow chart of what actions are taken in the navigation mode when any of the five keys are pressed in accordance with one preferred embodiment of the present invention.

[0044] FIG. 18 shows a screen shot of an embodiment of the present invention designed for one- or two-switch step scanning.

[0045] FIG. 19 shows a screen shot of one preferred embodiment of the present invention which may be operated in several different input device modalities and several different switching modalities. The screen shot shows the option page by which the user chooses among the modalities.

DETAILED DESCRIPTION OF THE INVENTION

[0046] Certain terminology is used herein for convenience only and is not to be taken as a limitation on the present invention. In the drawings, the same reference letters are employed for designating the same elements throughout the several figures.

1. Definitions

[0047] The following definitions are provided to promote understanding of the present invention.

[0048] The term "standard method" is used herein to refer to operation of a screen-reader (like Point-and-Read) which operates as described in U.S. Patent Application Publication No. 2002/0178007. Most personal computer programs expect that a user will be able to operate a computer mouse or other pointing devices such as a track-ball or touch-pad. A screen-reader employing the standard method is operated primarily by a pointing device (such as a mouse) plus clickless activation. The standard method can include some switch-based features, for example, the use of keystrokes like Tab or Shift+Tab as described in that application.

[0049] In contrast, the term "switch-based method" is used in this patent application to refer to operation of a screen-reader in which all features of the screen-reader can be operated with a handful of switches. Switch-based methods include directed scanning, physical (e.g., manual) step scanning and automated step-scanning, as well as other control sequences. A switch-based method includes control via six switches, five switches, two switches, or one switch. Switches include the keys on a computer keyboard, the keys on a separate keypad, or special switches integrated into a computing device or attached thereto.

[0050] The term "input device modality" is used herein to refer to the type of input device by which a user interacts with a computer to make a selection. Exemplary input device modalities include a pointing device modality as described above, and a switch-based modality wherein one or more switches are used for selection.

[0051] The term "switching modality" is used herein to refer specifically to the number of switches used in the switch-based method to operate the software.
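As a small illustration of how the modality choices defined in paragraphs [0049]-[0051] might be represented as user options (compare the option page of FIG. 19), the following JavaScript sketch shows one possible data layout. The field names and values are illustrative assumptions, not taken from the patent.

    // Illustrative sketch only: user-selected modalities as an options object.
    var userOptions = {
      inputDeviceModality: "switch-based",   // or "pointing-device"; see [0050]
      switchingModality: 2,                  // number of switches, e.g. 1, 2, 5 or 6; see [0051]
      scanningStyle: "manual-step"           // or "automated-step", "directed"; see [0049]
    };

    function describeOptions(options) {
      return "Input device: " + options.inputDeviceModality +
             "; switches: " + options.switchingModality +
             "; scanning: " + options.scanningStyle;
    }

    console.log(describeOptions(userOptions));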

[0052] The term "activating a switch" is used in this patent application to refer to pressing a physical switch or otherwise causing a physical switch to close. Many special switches have been designed for people with disabilities, including those activated by blinking an eyelid, sipping or puffing on a straw-like object, touching an object (e.g. a touchpad or touch screen), placing a finger or hand over an object (e.g. a proximity detector), breaking a beam of light, moving one's eyes, or moving an object with one's lips. The full panoply of switches is not limited to those described in this paragraph.

[0053] The term "document modes" is used herein to refer to the various ways in which a document can be organized or abstracted for display or control. The term includes a reading mode which comprises all objects contained in the document or only selected objects (e.g., only text-based grammatical units), a hyperlink mode which comprises all hyperlinks in an html document (and only the hyperlinks), a cell mode which comprises all cells found in tables in a document (and only the cells), and a frame mode which comprises all frames found in an html document (and only the frames). The hyperlink mode may also include other clickable objects in addition to links. The full delineation of document modes is not limited to those described in this paragraph. Changing a document mode may change the aspects of a document which are displayed, or it may simply change the aspects of a document which are highlighted or otherwise accessed, activated or controlled.

[0054] The term "control mode" is used herein to refer to the organization or abstraction of the set of user commands available from a GUI. Most frequently, the control mode is conceived of as a set of buttons on one or more toolbars, but the control mode can also be (without limitation) a displayed list of commands or an interactive region on a computer screen. The control mode can also be conceived of as an invisible list of commands that is recited by a synthesized voice to a blind (or sighted) user. The term "control mode" includes a navigation mode which comprises a subset of the navigation buttons and tool bars used in most Windows programs. Placing the software in control mode allows the user to access controls and commands for the software, as opposed to directly interacting with any document that the software displays or creates.

[0055] The term "activating an object" is used herein to refer to causing an executable program (or program feature) associated with an on-screen object (i.e., an object displayed on a computer screen) to run. On-screen objects include (but are not limited to) grammatical units, hyperlinks, images, text and other objects within span tags, form objects, text boxes, radio buttons, submit buttons, sliders, dials, widgets, and other images of buttons, keys, and controls. Ways of activating on-screen objects include (but are not limited to) click events, mouse events, hover (or dwell) events, code sequences, and switch activations. In any particular software program, some on-screen objects can be activated and others cannot.

2. Overview of One Prior Art Preferred Embodiment of Present Invention

[0056] A preferred embodiment of the present invention takes one web page which would ordinarily be displayed in a browser in a certain manner ("WEBPAGE 1") and displays that page in a new but similar manner ("WEBPAGE 2"). The new format contains additional hidden code which enables the web page to be easily read aloud to the user by text-to-speech software.

[0057] The present invention reads the contents of WEBPAGE 1 (or more particularly, parses its HTML code) and then "on-the-fly" in real time creates the code to display WEBPAGE 2, in the following manner:

[0058] (1) All standard text (i.e., sentence or phrase) that is not within link tags is placed within link tags to which are added an "onMouseover" event. The onMouseover event executes a JavaScript function which causes the text-to-speech reader to read aloud the contents within the link tags, when the user places the pointing device (mouse, wand, etc.) over the link. Font tags are also added to the sentence (if necessary) so that the text is displayed in the same color as it would be in WEBPAGE 1 rather than the hyperlink colors (default, active or visited hyperlink) set for WEBPAGE 1. Consequently, the standard text will appear in the same color and font on WEBPAGE 2 as on WEBPAGE 1, with the exception that in WEBPAGE 2, the text will be underlined.

[0059] (2) All hyperlinks and buttons which could support an onMouseover event (but do not in WEBPAGE 1 contain an onMouseover event) are given an onMouseover event. The onMouseover event executes a JavaScript function which causes the text-to-speech reader to read aloud the text within the link tags or the value of the button tag, when the user places the pointing device (mouse, wand, etc.) over the link. Consequently, this type of hyperlink appears the same on WEBPAGE 2 as on WEBPAGE 1.

[0060] (3) All buttons and hyperlinks that do contain an onMouseover event are given a substitute onMouseover event. The substitute onMouseover event executes a JavaScript function which first places text that is within the link (or the value of the button tag) into the queue to be read by the text-to-speech reader, and then automatically executes the original onMouseover event coded into WEBPAGE 1. Consequently, this type of hyperlink appears the same on WEBPAGE 2 as on WEBPAGE 1.

[0061] (4) All hyperlinks and buttons are preceded by an icon placed within link tags. These link tags contain an onMouseover event. This onMouseover event will execute a JavaScript function that triggers the following hyperlink or button. In other words, if a user places a pointer (e.g., mouse or wand) over the icon, the browser acts as if the user had clicked the subsequent link or button.

As is evident to those skilled in the art, WEBPAGE 2 will appear almost identical to WEBPAGE 1 except all standard text will be underlined, and there will be small icons in front of every link and button. The user can have any sentence, link or button read to him by moving the pointing device over it. This allows two classes of disabled users to access the web page, those who have difficulty reading, and those with dexterity impairments that prevent them from "clicking" on objects.
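By way of illustration, the following JavaScript sketch shows one way items (1) through (4) above could be carried out when rewriting WEBPAGE 1 into WEBPAGE 2. It is a simplified sketch under stated assumptions: CursorOver is used as the name of the function that queues text for the text-to-speech reader (consistent with the example in paragraphs [0062]-[0064] below), while the icon file name and the helper function names are hypothetical.

    // Illustrative sketch only; not the actual parsing routines of the invention.

    // Item (1): wrap a plain sentence in a link tag that speaks it on mouseover.
    // (Assumes sentenceText contains no quote characters.)
    function wrapSentence(sentenceText, fontColor) {
      return '<a href="#" onMouseover="CursorOver(\'' + sentenceText + '\');">' +
             '<font color="' + fontColor + '">' + sentenceText + '</font></a>';
    }

    // Items (2) and (3): give a hyperlink or button an onMouseover handler that
    // speaks its text; if WEBPAGE 1 already had a handler, run it afterwards.
    function addSpeechHandler(linkElement) {
      var original = linkElement.onmouseover;            // null for item (2)
      var linkText = linkElement.innerText || linkElement.value || "";
      linkElement.onmouseover = function (event) {
        CursorOver(linkText);                            // queue text for the TTS reader
        if (original) {
          original.call(this, event);                    // item (3): keep WEBPAGE 1 behavior
        }
      };
    }

    // Item (4): place a small icon before a link; hovering on the icon acts as
    // if the user had clicked the following link or button.
    function addActivationIcon(linkElement) {
      var icon = document.createElement("a");
      icon.innerHTML = '<img src="activate.gif" alt="">';   // icon image name is assumed
      icon.onmouseover = function () { linkElement.click(); };
      linkElement.parentNode.insertBefore(icon, linkElement);
    }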

[0062] In many implementations of JavaScript, for part (3) above, both the original onMouseover function call (as in WEBPAGE 1) and the new onMouseover function call used in part (2) can be placed in the same onMouseover handler. For example, if a link in WEBPAGE 1 contained the text "Buy before lightning strikes" and a picture of clear skies, along with the code

[0063] onMouseOver="ShowLightning()"

which makes lightning flash in the sky picture, WEBPAGE 2 would contain the code

[0064] onMouseOver="CursorOver('Buy before lightning strikes.'); ShowLightning();"

[0065] The invention avoids conflicts between function calls to the computer sound card in several ways. No conflict arises if both function calls access Microsoft Agent, because the two texts to be "spoken" will automatically be placed in separate queues. If both functions call the sound card via different software applications and the sound card has multi-channel processing (such as ESS Maestro2E), both software applications will be heard simultaneously. Alternatively, the two applications can be queued (one after another) via the coding that the present invention adds to WEBPAGE 2. Alternatively, a plug-in is created that monitors data streams sent to the sound card. These streams are suppressed at user option. For example, if the sound card is playing streaming audio from an Internet "radio station", and this streaming conflicts with the text-to-speech synthesis, the streaming audio channel is automatically muted (or softened).

[0066] In an alternative embodiment, the href value is omitted from the link tag for text (part 1 above). (The href value is the address or URL of the web page to which the browser navigates when the user clicks on a link.) In at least some browsers, such as Microsoft's Internet Explorer, the text in WEBPAGE 2 then retains the original font color of WEBPAGE 1 and is not underlined. Thus, WEBPAGE 2 appears even more like WEBPAGE 1.

[0067] In an alternative embodiment, a new HTML tag is created that functions like a link tag, except that the text is not underlined. This new tag is recognized by the new built-in routines. WEBPAGE 2 appears very much like WEBPAGE 1.

[0068] In an alternate embodiment, when the onMouseover event is triggered, the text that is being read appears in a different color, or appears as if highlighted with a Magic Marker (i.e., the color of the background behind that text changes) so that the user knows visually which text is being read. When the mouse is moved outside of this text, the text returns to its original color. In an alternate embodiment, the text does not return to its original color but becomes some other color so that the user visually can distinguish which text has been read and which has not. This is similar to the change in color while a hyperlink is being made active, and after it has been activated. In some embodiments these changes in color and appearance are effected by Cascading Style Sheets.

[0069] An alternative embodiment eliminates the navigation icon (part 4 above) placed before each link. Instead, the onMouseover event is written differently, so that after the text-to-speech software is finished reading the link, a timer will start. If the cursor is still on the link after a set amount of time (such as 2 seconds), the browser will navigate to the href URL of the link (i.e., the web page to which the link would navigate when clicked in WEBPAGE 1). If the cursor has been moved, no navigation occurs. WEBPAGE 2 appears identical to WEBPAGE 1.

[0070] An alternative embodiment substitutes "onClick" events for onMouseover events. This embodiment is geared to those whose dexterity is sufficient to click on objects. In this embodiment, the icons described in (4) above are eliminated.

[0071] An alternative embodiment that is geared to those whose dexterity is sufficient to click on objects does not place all text within link tags, but keeps the icons described in (4) in front of each sentence, link and button. The icons do not have onMouseover events, however, but rather onClick events which execute a JavaScript function that causes the text-to-speech reader to read the following sentence, link or button. In this embodiment, clicking on the link or button on WEBPAGE 2 acts the same as clicking on the link or button on WEBPAGE 1.

[0072] An alternative embodiment does not have these icons precede each sentence, but only each paragraph. The onClick event associated with the icon executes a JavaScript function which causes the text-to-speech reader to read the whole paragraph. An alternate formulation allows the user to pause the speech after each sentence or to repeat sentences.

[0073] An alternative embodiment has the onMouseover event, which is associated with each hyperlink from WEBPAGE 1, read the URL where the link would navigate. A different alternative embodiment reads a phrase such as "When you click on this link it will navigate to a web page" before reading the URL. In some embodiments, this onMouseover event is replaced by an onClick event.

[0074] In an alternative embodiment, the text-to-speech reader speaks nonempty "alt" tags on images. ("Alt" tags provide a text description of the image, but are not necessary code to display the image.) If the image is within a hyperlink on WEBPAGE 1, the onMouseover event will add additional code that will speak a phrase such as "This link contains an image of a" followed by the contents of the alt tag. Stand-alone images with nonempty alt tags will be given onMouseover events with JavaScript functions that speak a phrase such as "This is an image of" followed by the contents of the alt tag.

[0075] An alternate implementation adds the new events to the arrays of objects in each document container supported by the browser. Many browsers support an array of images and an array of frames found in any particular document or web page. These are easily accessed by JavaScript (e.g., document.frames or document.images). In addition, Netscape 4.0+ supports tag arrays (but Microsoft Internet Explorer does not). In this implementation, JavaScript code then makes the changes to properties of individual elements of the array or all elements of a given class (P, H1, etc.). For example, by writing

[0076] document.tags.H1.color="blue";

[0077] all text contained in <H1> tags turns blue.

In this implementation (which requires that the tag array allow access to the hyperlink text as well as the onMouseover event), rather than parsing each document completely and adding HTML text to the document, all changes are made using JavaScript. The internal text in each tag is read, and then placed in new onMouseover handlers. This implementation requires less parsing, so is less vulnerable to error, and reduces the document size of WEBPAGE 2.

[0078] In a preferred embodiment of the present invention, the parsing routines are built into a browser, either directly, or as a plug-in, as an applet, as an object, as an add-in, etc. Only WEBPAGE 1 is transmitted over the Internet. In this embodiment, the parsing occurs at the user's client computer or Internet appliance; that is, the browser/plug-in combination gets WEBPAGE 1 from the Internet, parses it, turns it into WEBPAGE 2 and then displays WEBPAGE 2. If the user has dexterity problems, the control objects for the browser (buttons, icons, etc.) are triggered by onMouseover events rather than the onClick or onDoubleClick events usually associated with computer applications that use a graphical interface.

[0079] In an alternative embodiment, the user accesses the present invention from a web page with framesets that make the web page look like a browser ("WEBPAGE BROWSER"). One of the frames contains buttons or images that look like the control objects usually found on browsers, and these control objects have the same functions usually found on browsers (e.g., navigation, search, history, print, home, etc.). These functions are triggered by onMouseover events associated with each image or button. The second frame will display web pages in the form of WEBPAGE 2. When a user submits a URL (web page address) to the WEBPAGE BROWSER, the user is actually submitting the URL to a CGI script at a server. The CGI script navigates to the URL, downloads a page such as WEBPAGE 1, parses it on-the-fly, converts it to WEBPAGE 2, and transmits WEBPAGE 2 to the user's computer over the Internet. The CGI script also changes the hrefs of the links that it parses in WEBPAGE 1. The links call the CGI script with a variable consisting of the original hyperlink URL. For example, in one embodiment, if the hyperlink in WEBPAGE 1 had an href=http://www.nytimes.com and the CGI script was at http://www.simtalk.com/cgi-bin/webreader.pl, then the href of the hyperlink in WEBPAGE 2 reads href=http://www.simtalk.com/cgi-bin/webreader.pl?originalUrl=www.nytimes.com. When the user activates this link, it invokes the CGI script and directs the CGI script to navigate to the hyperlink URL for parsing and modifying. This embodiment uses more Internet bandwidth than when the present invention is integrated into the browser, and greater server resources. However, this embodiment can be accessed from any computer hooked to the Internet. In this manner, people with disabilities do not have to bring their own computers and software with them, but can use the computers at any facility. This is particularly important for less affluent individuals who do not have their own computers, and who access the Internet using public facilities such as libraries.

[0080] An alternative embodiment takes the code from the CGI script and places it in a file on the user's computer (perhaps in a different computer programming language). This embodiment then sets the home page of the browser to that file. The modified code for links then calls that file on the user's own computer rather than a CGI server.

[0081] Alternative embodiments do not require the user to place a cursor or pointer on an icon or text, but "tab" through the document from sentence to sentence. Then, a keyboard command will activate the text-to-speech engine to read the text where the cursor is placed. Alternatively, at the user's option, the present invention automatically tabs to the next sentence and reads it. In this embodiment, the present invention reads aloud the document until a pause or stop command is initiated. Again at the user's option, the present invention begins reading the document (WEBPAGE 2) once it has been displayed on the screen, and continues reading the document until stopped or until the document has been completely read.

[0082] Alternative embodiments add speech recognition software, so that users with severe dexterity limitations can navigate within a web page and between web pages. In this embodiment, voice commands (such as "TABRIGHT") are used to tab or otherwise navigate to the appropriate text or link, other voice commands (such as "CLICK" or "SPEAK") are used to trigger the text-to-speech software, and other voice commands activate a link for purposes of navigating to a new web page. When the user has set the present invention to automatically advance to the next text, voice commands (such as "STOP", "PAUSE", "REPEAT", or "RESUME") control the reader.

[0083] The difficulty of establishing economically viable Internet-based media services is compounded in the case of services for the disabled or illiterate. Many of the potential users are in lower socio-economic brackets and cannot afford to pay for software or subscription services. Many Internet services are offered free of charge, but seek advertising or sponsorships. For websites, advertising or sponsorships are usually seen as visuals (such as banner ads) on the websites' pages. This invention offers additional advertising opportunities.

[0084] In one embodiment, the present invention inserts multi-media advertisements as interstitials that are seen as the user navigates between web pages and websites. In another embodiment, the present invention "speaks" advertising. For example, when the user navigates to a new web page, the present invention inserts an audio clip, or uses the text-to-speech software to say something like "This reading service is sponsored by Intel." In an alternative embodiment, the present invention recognizes a specific meta tag (or meta tags, or other special tags) in the header of WEBPAGE 1 (or elsewhere). This meta tag contains a commercial message or sponsorship of the reading services for the web page. The message may be text or the URL of an audio message. The present invention reads or plays this message when it first encounters the web page. The web page author can charge sponsors a fee for the message, and the reading service can charge the web page for reading its message. This advertising model is similar to the sponsorship of closed captioning on TV.

[0085] Several products, including HELPRead, Browser Buddy, and the U.S. Pat. No. 7,137,127 (Slotznick), use and teach methods by which a link can be embedded in a web page, and the text-to-speech software can be launched by clicking on that link. In a similar manner, a link can be embedded in a web page which will launch the present invention in its various embodiments. Such a link can distinguish which embodiment the user has installed, and launch the appropriate one.

[0086] Text-to-speech software frequently has difficulty distinguishing heterophonic homographs (or isonyms): words that are spelled the same, but sound different. An example is the word "bow" as in "After the archer shoots his bow, he will bow before the king." A text-to-speech engine will usually choose one pronunciation for all instances of the word. A text-to-speech engine will also have difficulty speaking uncommon names or terms that do not obey the usual pronunciation rules. While this is not practical in the text of a document meant to be read, a "dictionary" can be associated with a document which sets forth the phonemes (phonetic spelling) for particular words in the document. In one embodiment of the present invention, a web page creates such a dictionary and signals the dictionary's existence and location via a pre-specified tag, object, function, etc. Then, the present invention will get that dictionary, and when parsing the web page, will substitute the phonetic spellings within the onMouseover events.

[0087] The above-identified U.S. Pat. No. 7,137,127 (Slotznick) discloses a method of embedding hidden text captions or commentary on a web page, whereby clicking on an icon or dragging that icon to another window would enable the captions to be read (referred to herein as "spoken captions"). The hidden text could also include other information such as the language in which the caption or web page was written. An alternative embodiment of the present invention uses this information to facilitate real-time on-the-fly translation of the caption or the web page, using the methods taught in the above-identified U.S. Pat. No. 7,137,127 (Slotznick). The text is translated to the language used by the text-to-speech engine.

[0088] In an alternative embodiment, the present invention alters the code in the spoken captions as displayed in WEBPAGE 2, so that the commentary is "spoken" by the text-to-speech software when the user places a cursor or pointer over the icon.

[0089] In an alternative embodiment of the present invention, a code placed on a web page, such as in a meta tag in the heading of the page, or in the spoken caption icons, identifies the language in which the web page is written (e.g., English, Spanish). The present invention then translates the text of the web page, sentence by sentence, and displays a new web page (WEBPAGE 2) in the language used by the text-to-speech engine of the present invention, after inserting the code that allows the text-to-speech engine to "speak" the text. (This includes the various onMouseover commands, etc.) In an alternate embodiment, the new web page (WEBPAGE 2) is shown in the original language, but the onMouseover commands have the text-to-speech engine read the translated version.

[0090] In an alternative embodiment, the translation does not occur until the user places a pointer or cursor over a text passage. Then, the present invention uses the information about what language WEBPAGE 1 is written in to translate that particular text passage on-the-fly into the language of the text-to-speech engine, and causes the engine to speak the translated words.

[0091] While the above embodiments have been described as if WEBPAGE 1 were an HTML document, primarily designed for display on the Internet, no such limitation is intended. WEBPAGE 1 also refers to documents produced in other formats that are stored or transmitted via the Internet, including ASCII documents, e-mail in its various protocols, and FTP-accessed documents, in a variety of electronic formats. As an example, the Gutenberg Project contains thousands of books in electronic format, but not HTML. As another example, many web-based e-mail services (particularly "free" services such as Hotmail) deliver e-mail as HTML documents, whereas other e-mail programs such as Microsoft Outlook and Eudora use a POP protocol to store and deliver content. WEBPAGE 1 also refers to formatted text files produced by word processing software such as Microsoft Word, and files that contain text whether produced by spreadsheet software such as Microsoft Excel, by database software such as Microsoft Access, or any of a variety of e-mail and document production software. Alternate embodiments of the present invention "speak" and "read" these several types of documents.

[0092] WEBPAGE 1 also refers to documents stored or transmitted over intranets, local area networks (LANs), wide area networks (WANs), and other networks, even if not stored or transmitted over the Internet. WEBPAGE 1 also refers to documents created, stored, accessed, processed or displayed on a single computer and never transmitted to that computer over any network, including documents read from removable discs regardless of where created.

[0093] While these embodiments have been described as if WEBPAGE 1 was a single HTML document, no such limitation is intended. WEBPAGE 1 may include tables, framesets, referenced code or files, or other objects. WEBPAGE 1 is intended to refer to the collection of files, code, applets, scripts, objects and documents, wherever stored, that is displayed by the user's browser as a web page. The present invention parses each of these and replaces appropriate symbols and code, so that WEBPAGE 2 appears similar to WEBPAGE 1 but has the requisite text-to-speech functionality of the present invention.

[0094] While these embodiments have been described as if alt values occurred only in conjunction with images, no such limitation is intended. Similar alternative descriptions accompany other objects, and are intended to be "spoken" by the present invention at the option of the user. For example, closed captioning has been a television broadcast technology for showing subtitles of spoken words, but similar approaches to providing access for the disabled have been and are being extended to streaming media and other Internet multi-media technologies. As another example, accessibility advocates desire that all visual media include an audio description and that all audio media include a text captioning system. Audio descriptions, however, take up considerable bandwidth. The present invention takes a text captioning system and, with text-to-speech software, creates an audio description on-the-fly.

[0095] While these embodiments have been described in terms of using "JavaScript functions" and function calls, no such limitation is intended. The "functions" include not only true function calls but also method calls, applet calls and other programming commands in any programming languages including but not limited to Java, JavaScript, VBScript, etc. The term "JavaScript functions" also includes, but is not limited to, ActiveX controls, other control objects and versions of XML and dynamic HTML.

[0096] While these embodiments have been described in terms of reading sentences, no such limitation is intended. At the user's option, the present invention reads paragraphs, or groups of sentences, or even single words that the user points to.
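As a simplified illustration of the pronunciation "dictionary" described in paragraph [0086] above, the following JavaScript sketch substitutes phonetic spellings into text before it is handed to the text-to-speech call. The dictionary format, the example entry, and the helper name are assumptions; the patent does not specify a data structure at this level of detail.

    // Illustrative sketch only: substitute phonetic spellings before speaking.
    var pronunciationDictionary = {
      "slotznick": "slots nick"    // an uncommon name that defies the usual pronunciation rules
    };

    function applyDictionary(text, dictionary) {
      return text.replace(/[A-Za-z']+/g, function (word) {
        var key = word.toLowerCase();
        return Object.prototype.hasOwnProperty.call(dictionary, key) ? dictionary[key] : word;
      });
    }

    // When building WEBPAGE 2, the substituted text is what would be placed
    // inside the onMouseover call, for example:
    console.log(applyDictionary("Slotznick et al. describe a screen reader.", pronunciationDictionary));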

3. Detailed Description of Prior Art Embodiment (Part One)

[0097] FIG. 1 shows a flow chart of a preferred embodiment of the present invention. At the start 101 of this process, the user launches an Internet browser 105, such as Netscape Navigator or Microsoft Internet Explorer, from his or her personal computer 103 (Internet appliance or interactive TV, etc.). The browser sends a request over the Internet for a particular web page 107. The computer server 109 that hosts the web page will process the request 111. If the web page is a simple HTML document, the processing will consist of retrieving a file. In other instances, for example, when the web page invokes a CGI script or requires data from a dynamic database, the computer server will generate the code for the web page on-the-fly in real time. This code for the web page is then sent back 113 over the Internet to the user's computer 103. There, the portion of the present invention in the form of plug-in software 115 will intercept the web page code before it can be displayed by the browser. The plug-in software will parse the web page and rewrite it with modified code of the text, links, and other objects as appropriate 117.

[0098] After the web page code has been modified, it is sent to the browser 119. There, the browser displays the web page as modified by the plug-in 121. The web page will then be read aloud to the user 123 as the user interacts with it.

[0099] After listening to the web page, the user may decide to discontinue or quit browsing 125, in which case the process stops 127. On the other hand, the user may decide not to quit 125 and may continue browsing by requesting a new web page 107. The user could request a new web page by typing it into a text field, or by activating a hyperlink. If a new web page is requested, the process will continue as before.

[0100] The process of listening to the web page is illustrated in expanded form in FIG. 2. Once the browser displays the web page as modified by the plug-in 121, the user places the cursor of the pointing device over the text which he or she wishes to hear. The code (e.g., JavaScript code placed in the web page by the plug-in software) feeds the text to a text-to-speech module 205 such as DECtalk, originally written by Digital Equipment Corporation, or TruVoice by Lernout and Hauspie. The text-to-speech module may be a stand-alone piece of software, or may be bundled with other software. For example, the Virtual Friend animation software from Haptek incorporates DECtalk, whereas Microsoft Agent animation software incorporates TruVoice. Both of these software packages have animated "cartoons" which move their lips along with the sounds generated by the text-to-speech software (i.e., the cartoons lip-sync the words). Other plug-ins (or similar ActiveX objects), such as Speaks for Itself by DirectXtras, Inc., Menlo Park, Calif., generate synthetic speech from text without animated speakers. In any event, the text-to-speech module 205 converts the text 207 that has been fed to it 203 into a sound file. The sound file is sent to the computer's sound card and speakers where it is played aloud 209 and heard by the user.

[0101] In an alternative embodiment in which the text-to-speech module is combined or linked to animation software, instructions will also be sent to the animation module, which generates bitmaps of the cartoon lip-syncing the text. The bitmaps are sent to the computer monitor to be displayed in conjunction with the sound of the text being played over the speakers.

[0102] In any event, once the text has been "read" aloud, the user must decide if he or she wants to hear it again 211. If so, the user moves the cursor off the text 213 and then moves the cursor back over the text 215. This will again cause the code to feed the text to the text-to-speech module 203, which will "read" it again. (In an alternate embodiment, the user activates a specially designated "replay" button.) If the user does not want to hear the text again, he or she must decide whether to hear other different text on the page 217. If the user wants to hear other text, he or she places the cursor over that text 201 as described above. Otherwise, the user must decide whether to quit browsing 123, as described more fully in FIG. 1 and above.

[0103] FIG. 3 shows the flow chart for an alternative embodiment of the present invention. In this embodiment, the parsing and modifying of WEBPAGE 1 does not occur in a plug-in (FIG. 1, 115) installed on the user's computer 103, but rather occurs at a website that acts as a portal using software installed in the server computer 303 that hosts the website. In FIG. 3, at the start 101 of this process, the user launches a browser 105 on his or her computer 103. Instead of requesting that the browser navigate to any website, the user then must request the portal website 301. The server computer 303 at the portal website will create the home page 305 that will serve as the WEBBROWSER for the user. This may be simple HTML code, or may require dynamic creation. In any event, the home page code is returned to the user's computer 307, where it is displayed by the browser 309. (In alternate embodiments, the home page may be created in whole or part by modifying the web page from another website as described below with respect to FIG. 3 items 317, 111, 113, 319.)

[0104] An essential part of the home page is that it acts as a "browser within a browser" as shown in FIG. 4. FIG. 4 shows a Microsoft Internet Explorer window 401 (the browser) filling about 3/4 of a computer screen 405. Also shown is "Peedy the Parrot" 403, one of the Microsoft Agent animations. The title line 407 and browser toolbar 409 in the browser window 401 are part of the browser. The CGI script has suppressed other browser toolbars. The area 411 that appears to be a toolbar is actually part of a web page. This web page is a frameset composed of two frames: 411 and 413. The first frame 411 contains buttons constructed out of HTML code.

[0105] These are given the same functionality as a browser's buttons, but contain extra code triggered by cursor events, so that the text-to-speech software reads the function of the button aloud. For example, when the cursor is placed on the "Back" button, the text-to-speech software synthesizes speech that says, "Back." The second frame 413 displays the various web pages to which the user navigates (but after modifying the code).

[0106] Returning to frame 411, the header for that frame contains code which allows the browser to access the text-to-speech software. To access Microsoft Agent software, and the Lernout and Hauspie TruVoice text-to-speech software that is bundled with it, "object" tags are placed in the header of the top frame 411.
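The fragment below sketches, in simplified form, the kind of markup paragraphs [0104]-[0106] describe: a two-frame frameset whose top frame holds HTML buttons that speak their function on a cursor event and, after a dwell, carry it out without a click. The file names, frame sizes, two-second dwell, and the speakText() helper are illustrative assumptions; the actual pages generated by the CGI script are not reproduced in this excerpt.

    <!-- Illustrative sketch only: a "browser within a browser" like FIG. 4. -->
    <frameset rows="80,*">
      <frame name="toolbar" src="toolbar.html">    <!-- frame 411: buttons built from HTML -->
      <frame name="content" src="webpage2.html">   <!-- frame 413: displays WEBPAGE 2 -->
    </frameset>

    <!-- Inside toolbar.html: hovering on a button speaks its function aloud;
         if the cursor stays on the button, the function runs without clicking. -->
    <script>
      var dwellTimer = null;
      function speakText(text) {
        // In the described embodiment this call would go to the Microsoft Agent /
        // TruVoice objects loaded by the frame header; here it simply logs.
        console.log("TTS: " + text);
      }
    </script>
    <input type="button" value="Back"
           onMouseover="speakText('Back');
                        dwellTimer = setTimeout(function () { parent.frames['content'].history.back(); }, 2000);"
           onMouseout="clearTimeout(dwellTimer);">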

... the onMouseout event triggers the CursorOut function, which cancels the Speak function before it can occur. When the Speak function occurs, the "delayedText" variable is sent ... "CursorOver", "CursorOut", and "Speak" ...

EARTHQUAKE SEVERS UNDERSEA CABLES

[0115] For more details click here.
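Only a fragment of this passage survives in the present excerpt, but it names three JavaScript functions, CursorOver, CursorOut, and Speak, and a delayedText variable. The sketch below shows one plausible arrangement of such a delayed-read mechanism consistent with that fragment and with the dwell behavior described earlier (hovering for about a second before text is read). The one-second delay and the logging stand-in for the text-to-speech call are assumptions, since the original listing is not preserved here.

    // Illustrative sketch of a delayed-read mechanism using the names visible
    // in the surviving fragment; details are assumed, not taken from the patent.
    var delayedText = "";     // text waiting to be spoken
    var speakTimer = null;    // timer started by CursorOver

    function CursorOver(text) {
      delayedText = text;                              // remember what to read
      speakTimer = window.setTimeout(Speak, 1000);     // speak after a short dwell
    }

    function CursorOut() {
      if (speakTimer) {                                // cancel Speak before it can occur
        window.clearTimeout(speakTimer);
        speakTimer = null;
      }
    }

    function Speak() {
      speakTimer = null;
      // Here "delayedText" would be sent to the text-to-speech engine (e.g., the
      // Microsoft Agent / TruVoice objects of paragraph [0106]); the sketch logs it.
      console.log("TTS: " + delayedText);
    }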