Unified Web Evaluation Methodology (UWEM)


<p>Web Accessibility Benchmarking Cluster</p><p>Sixth Framework Programme, Information Society Technologies Priority</p><p>D-WAB4 Unified Web Evaluation Methodology (UWEM 1.1)</p><p>Contractual Date of Delivery to the EC: 05 July 2006. Actual Date of Delivery to the EC: 05 July 2006. Revision Date: 16 April 2007. Editors: Eric Velleman (Accessibility Foundation), Carlos A Velasco (Fraunhofer Institute for Applied Information Technology FIT), Mikael Snaprud (Agder University College). Contributors: see Appendix D for the contributors list. Workpackage: WAB1a. Security: Public. Nature: Report. Version: L. Total number of pages: 141.</p><p>Keywords: Web Accessibility, WAB Cluster, World Wide Web Consortium, W3C, Web Accessibility Initiative, WAI, Web Content Accessibility Guidelines, WCAG, Unified Web site Evaluation Methodology, UWEM, Evaluation and Report Language, EARL.</p><p>Please see Appendix A for the document license.</p><p>WAB Cluster Deliverable D-WAB4 (Public), 16 April 2007</p><p>DOCUMENT HISTORY</p><p>
Version A (2006-06-26, Eric Velleman): Initial version based upon version 0.5 review, comments and testing. Changes made in all sections to shift to 1.0.
Version B (2006-06-26, Nils Ulltveit-Moe, Agata Sawicka, Eric Velleman): Added error margin of 95% confidence interval as sampling quality measure, and indicated algorithms for random sampling. Changed scorecards according to Agata's suggestions. EV changed appendix template report.
Version C (2006-07-03, Annika Nietzio): Updated version of section on aggregation.
Version D (2006-07-05, Eric Velleman): Changes overall in checkpoints following review comments and work on scope and sampling.
Version E (2006-07-05, Eric Velleman): Added references provided by Annika Nietzio in the aggregation section.
Version F (2006-07-05, Carlos A Velasco): Overall copy-editing.
Version G (2006-07-05, Eric Velleman): Updated Section 5 expected results and editorial changes thanks to review comments by Christophe, Annika and Jesse.
Version H (2007-03-29, Annika Nietzio): Updates to several sections in response to comments (internal and external).
Version I (2007-03-30, Colin Meerveld): Updates to section 5 in response to comments (internal and external).
Version J (2007-04-03, Christophe Strobbe): Updates to section 5 in response to comments (internal and external).
Version K (2007-04-16, Carlos A Velasco): Final editorial changes and copy-editing.
Version L (2007-06-28, Morten Goodwin Olsen): WAB cluster meeting adjustments.
</p><p>Table of Contents</p><p>
1 Executive summary
2 Introduction
2.1 Methodology definition
2.2 Target audience of the document
2.3 Target technologies of this document
2.4 Expertise for evaluating Accessibility
2.5 Acknowledgements
2.6 More information about the WAB Cluster
3 Evaluation procedures and conformance
3.1 Types of evaluation
3.2 Website Conformance claims
3.3 Tool conformance claims
4 Scope and Sampling of resources
4.1 Definitions
4.2 Procedure to express the scope
4.3 Procedure to generate evaluation samples (4.3.1 The Core Resource List; 4.3.2 The Sampled Resource List; 4.3.3 Manual sample size)
5 Tests for conformance evaluation
5.1 Introduction
5.2 Guideline 1 (Checkpoint 1.1: (X)HTML tests 1.1_HTML_01 to 1.1_HTML_10, tests for external objects 1.1_external_01 to 1.1_external_02; Checkpoint 1.2: 1.2_HTML_01; Checkpoint 1.3: 1.3_external_01 to 1.3_external_02; Checkpoint 1.4: 1.4_external_01)
5.3 Guideline 2 (Checkpoint 2.1: 2.1_HTML_01 to 2.1_HTML_03, 2.1_CSS_01; Checkpoint 2.2: 2.2_HTML_01, 2.2_CSS_01)
5.4 Guideline 3 (Checkpoint 3.1: 3.1_HTML_01 to 3.1_HTML_03; Checkpoint 3.2: 3.2_HTML_01 to 3.2_HTML_02, 3.2_CSS_01; Checkpoint 3.3: 3.3_HTML_01 to 3.3_HTML_02; Checkpoint 3.4: 3.4_HTML_01 to 3.4_HTML_03, 3.4_CSS_01; Checkpoint 3.5: 3.5_HTML_01 to 3.5_HTML_05; Checkpoint 3.6: 3.6_HTML_01 to 3.6_HTML_07, 3.6_CSS_01; Checkpoint 3.7: 3.7_HTML_01 to 3.7_HTML_04)
5.5 Guideline 4 (Checkpoint 4.1: 4.1_HTML_01 to 4.1_HTML_04, 4.1_CSS_01)
5.6 Guideline 5 (Checkpoint 5.1: 5.1_HTML_01 to 5.1_HTML_02; Checkpoint 5.2: 5.2_HTML_01 to 5.2_HTML_03; Checkpoint 5.3: 5.3_HTML_01; Checkpoint 5.4: 5.4_HTML_01 to 5.4_HTML_05)
5.7 Guideline 6 (Checkpoint 6.1: 6.1_HTML_01; Checkpoint 6.2: 6.2_HTML_01 to 6.2_HTML_03; Checkpoint 6.3: 6.3_HTML_01 to 6.3_HTML_02; Checkpoint 6.4: 6.4_HTML_01 to 6.4_HTML_02, 6.4_external_01; Checkpoint 6.5: 6.5_HTML_01 to 6.5_HTML_03)
5.8 Guideline 7 (Checkpoint 7.1: 7.1_HTML_01 to 7.1_HTML_03, 7.1_CSS_01, 7.1_external_01 to 7.1_external_02; Checkpoint 7.2: 7.2_HTML_01 to 7.2_HTML_03, 7.2_CSS_01 to 7.2_CSS_02, 7.2_external_01 to 7.2_external_02; Checkpoint 7.3: 7.3_HTML_01 to 7.3_HTML_02, 7.3_CSS_01, 7.3_external_01 to 7.3_external_02; Checkpoint 7.4: 7.4_HTML_01 to 7.4_HTML_02, 7.4_external_01; Checkpoint 7.5: 7.5_HTML_01 to 7.5_HTML_02, 7.5_external_01)
5.9 Guideline 8 (Checkpoint 8.1: 8.1_HTML_01, 8.1_external_01)
5.10 Guideline 9 (Checkpoint 9.1: 9.1_HTML_01; Checkpoint 9.2: 9.2_external_01; Checkpoint 9.3: 9.3_HTML_01)
5.11 Guideline 10 (Checkpoint 10.1: 10.1_HTML_01 to 10.1_HTML_03; Checkpoint 10.2: 10.2_HTML_01)
5.12 Guideline 11 (Checkpoint 11.1: 11.1_HTML_01; Checkpoint 11.2: 11.2_HTML_01 to 11.2_HTML_02; Checkpoint 11.4: 11.4_HTML_01 to 11.4_HTML_03)
5.13 Guideline 12 (Checkpoint 12.1: 12.1_HTML_01 to 12.1_HTML_02; Checkpoint 12.2: 12.2_HTML_01; Checkpoint 12.3: 12.3_HTML_01 to 12.3_HTML_15; Checkpoint 12.4: 12.4_HTML_01 to 12.4_HTML_02)
5.14 Guideline 13 (Checkpoint 13.1: 13.1_HTML_01 to 13.1_HTML_02; Checkpoint 13.2: 13.2_HTML_01; Checkpoint 13.3: 13.3_HTML_01; Checkpoint 13.4: 13.4_HTML_01)
5.15 Guideline 14 (Checkpoint 14.1: 14.1_HTML_01)
6 Aggregation model for test results
6.1 Introduction
6.2 Approach
6.3 Definitions and mathematical background (6.3.1 Definitions; 6.3.2 Notation; 6.3.3 Example (Confidence weighting); 6.3.4 Mathematical background: The UCAB model)
6.4 Stage 2: Accessibility Barrier Probability Fp (6.4.1 Example (Fp for single Web page); 6.4.2 Combining results from different testing procedures)
6.5 Stage 3: Accessibility Barrier Probability Fs (6.5.1 Example (Fs for single Web site))
6.6 Limitations of the UCAB model (6.6.1 Aspects not covered by the model; 6.6.2 Underlying assumptions)
7 Reporting of test results
8 Scorecard report (8.1 Scorecard report; 8.1.1 Scores indicating barrier status; 8.1.2 Scores indicating barrier change; 8.2 Aggregation of statistics)
9 Glossary
10 References
11 Appendix A: Document License
12 Appendix B: Template for expert evaluation (12.1 Introduction; 12.2 Executive Summary; 12.3 Background about Evaluation; 12.3.1 Web Site Evaluated; 12.3.2 Reviewer(s); 12.3.3 Evaluation Process; 12.4 Resources List; 12.5 Check Results; 12.6 References; 12.7 Appendices)
13 Appendix C: RDF Schemas for Resources Lists (13.1 Expressing resources; 13.2 Resource Lists; 13.3 Sample EARL report for test results)
14 Appendix D: Contributors to this and previous versions of this document
15 Appendix E: W3C® DOCUMENT LICENSE
</p><p>List of Figures: Figure 1: UWEM Evaluation Types. Figure 2: The first and second stage of the Web site evaluation process. Figure 3: The third stage of the Web site evaluation process.</p><p>List of Tables: Table 1: UWEM barrier score represented by letter and colour to indicate barrier probability (note that the levels chosen are preliminary values that will be evaluated when real data measurements from the EIAO project are available). Table 2: UWEM barrier change score represented by a symbol indicating change in barrier probability.</p><p>1 Executive summary</p><p>This document is the first revision of version 1.0 of the Unified Web Evaluation Methodology (UWEM). This revision incorporates comments received by the WAB Cluster since the publication of UWEM 1.0. 
UWEM is the result of a joint effort by 23 European organisations that participate in three European projects, combined in a cluster to develop the methodology. This document describes a methodology for evaluating conformance of Web sites with the W3C Web Content Accessibility Guidelines 1.0 [WCAG10]. The next version of UWEM will be synchronized with the foreseen migration from WCAG 1.0 to WCAG 2.0 [WCAG20].</p><p>The projects involved in creating this methodology aim to ensure that evaluation tools and methods developed for global monitoring or for local evaluation are compatible and coherent among themselves and with WAI. Their aim is to increase the value of evaluations by basing them on a shared interpretation of WCAG 1.0. Therefore, UWEM offers a harmonized interpretation of the WCAG 1.0 guidelines agreed among the participants within the three aforementioned projects.</p><p>The UWEM provides an evaluation procedure consisting of a system of principles and practices for expert and automatic evaluation of Web accessibility for humans and machine interfaces. The methodology aims to be fully conformant with the WCAG 1.0 guidelines. Currently the UWEM is limited to priority 1 and priority 2 guidelines and combines tool and expert evaluation.</p><p>The methodology is suitable for detailed evaluations of one Web page, an entire site (irrespective of size), or multiple sites. It covers sampling, clarifications of the checkpoints, and the information necessary for reporting, interpretation and integration/aggregation of results.</p><p>The UWEM will be used by the projects included in the Cluster for an observatory (EIAO project), tools for benchmarking (BenToWeb project) and a certification scheme (SEAM project). More information about the Cluster, the UWEM and the projects involved can be found at: http://www.wabcluster.org/. 
</p><p>The document is organized as follows: Section 2 outlines the requirements, the target audience and the technologies covered, and serves as a description of the basic properties of UWEM. Section 3 describes the UWEM evaluation procedures and conformance. Section 4 describes the approaches for scoping and sampling to provide resource lists for conformance evaluation of a Web site; the procedures are designed to ensure that the identified samples are representative, so that the evaluation results are reliable. Section 5 on conformance testing describes how to carry out tool and expert tests for conformance evaluation of WCAG 1.0 checkpoints for priority 1 and 2. Section 6 describes a model for aggregating test results from the procedures described in section 5; the model also supports estimation of accessibility barriers within a Web site as well as across groups of Web sites. Section 7 describes the reporting of test results with EARL and also proposes a template for reporting expert evaluation results. Section 8 describes a reporting method specially tailored for policy makers, using a scorecard approach. Sections 9 and 10 provide a glossary of terms and a list of the references used in the document. Appendix A provides the document license for UWEM. Appendix B provides a template for reporting evaluation results tailored to expert evaluation using UWEM. Appendix C provides pointers to the relevant RDF vocabularies to be used within UWEM for expressing results. Appendix D contains a list of contributors to this document. 
Appendix E provides the W3C document license for the referenced W3C work in this UWEM document.</p><p>2 Introduction</p><p>The Unified Web Evaluation Methodology should ensure that evaluation tools and methods developed for large-scale monitoring or for local evaluation are compatible and coherent among themselves and with W3C/WAI. This document is the result of a joint effort of three European projects, with 23 organisations collaborating in the WAB Cluster to develop UWEM.</p><p>The UWEM describes a methodology for evaluation of conformance with the W3C Web Content Accessibility Guidelines 1.0 [WCAG10]. For practical reasons, this version of the methodology focuses on the current WCAG 1.0 guidelines. The WCAG 1.0 guidelines are broadly accepted and have been a stable factor in accessibility since May 1999. Already in 2002, the EU recommended that they should be adopted by the public sector in Member States, and in some countries they have found their way into legislation. In its second version, UWEM will be synchronized with the foreseen migration from WCAG 1.0 to WCAG 2.0 [WCAG20].</p><p>The purpose of the UWEM is to provide a basis for evaluating the methodology with regard to the intended types of testing: expert and automatic evaluation of Web resources.1 The evaluation of the UWEM is also planned to provide feedback and contribute to W3C/WAI for future guidelines or versions of guidelines. W3C/WAI staff have reviewed and provided input into previous drafts of this document in order to minimize potential fragmentation of technical content. This does not imply W3C or WAI endorsement of any part of this document in any way.</p><p>Part of the material presented in section 5 of this document consists of annotations of W3C documents. 
In particular, we are targeting the following two documents:</p><p>• Web Content Accessibility Guidelines 1.0 [WCAG10], and</p><p>• Techniques for Web Content Accessibility Guidelines 1.0 [WCAG10-TECHS], and other Techniques documents linked from it.</p><p>According to the Intellectual Rights FAQ from W3C,2 section 5 of the UWEM falls under an annotation “... that does not require the copying and modification of the document being annotated.”3 Therefore, all references to guidelines and checkpoints are duly quoted, and the URL to the original document is included. W3C is not responsible for any content not found at the original URL, and the annotations in this document are non-normative.</p><p>1 Evaluation of Web resources via user testing will be covered by future UWEM releases. 2 http://www.w3.org/Consortium/Legal/IPR-FAQ-20000620 3 http://www.w3.org/Consortium/Legal/IPR-FAQ-20000620#annotate</p><p>2.1 Methodology definition</p><p>The Unified Web Evaluation Methodology provides an evaluation procedure offering a system of principles and practices for expert and automatic evaluation of Web accessibility for humans and machine interfaces. The Methodology is designed to be conformant with WCAG 1.0 priority 1 and 2 checkpoints with regard to technical criteria.</p><p>The UWEM aims to increase the value of evaluations by basing them on a shared interpretation of WCAG 1.0 and a set of tests that are sufficiently robust to give stakeholders confidence in the results. 
Web content producers may also wish to evaluate their own content, and UWEM aims also to be suitable for these users.</p><p>The methodology is designed to meet the following requirements:</p><p>• Technical conformance to existing Web Accessibility Initiative (WAI) Recommendations and Techniques documents.</p><p>• Tool and browser independence: questions and tests are given in a 'pure' form, making them as tool-independent as possible.</p><p>• Unique interpretation: questions shall have only one way of being interpreted.</p><p>• Replicability: different Web accessibility evaluators who perform the same tests on the same site should get the same results within a given tolerance.</p><p>• Translatability: the methodology will address localisation issues.</p><p>• Compliance with Regulation (EC) No 808/2004 of the European Parliament and of the Council of 21 April 2004 concerning Community statistics on the information society.</p><p>In the methodology we have included information about:</p><p>• Scope and sampling.</p><p>• Reporting, interpretation and integration/aggregation of results.</p><p>2.2 Target audience of the document</p><p>The target audiences for this document include, but are not limited to:</p><p>• Web accessibility benchmarking projects (European Commission, national governments, disability groups, EDeAN, and research institutions)</p><p>• Possible Certification Authorities</p><p>• Web content producers wishing to evaluate their content</p><p>• Developers of Evaluation and Repair Tools</p><p>• Policy makers and Web site owners</p><p>• Other organisations evaluating Web sites</p><p>The European Commission, national governments and other organisations that wish to benchmark Web accessibility will be able to use the UWEM to carry out evaluations and compare their results in a meaningful way.</p><p>The UWEM is an evaluation methodology and is not intended to provide information for 
Web content producers wishing to produce content conformant with WCAG 1.0. That information is provided in the WCAG 1.0 Techniques documents available through the W3C/WAI Web site [WCAG10-TECHS].</p><p>2.3 Target technologies of this document</p><p>The UWEM primarily covers methods to evaluate documents based on the following technologies:</p><p>• HTML 4.01,</p><p>• XHTML 1.0 and 1.1,</p><p>• CSS 2.x,</p><p>• other embedded objects in (X)HTML resources.</p><p>2.4 Expertise for evaluating Accessibility</p><p>The W3C evaluation suite4 extensively describes the expertise necessary for evaluating the accessibility of Web content for people with disabilities. Evaluation activities require diverse kinds of expertise and perspectives: individuals evaluating the accessibility of Web content require training and experience across a broad range of disciplines, and a collaborative approach can bring better results.</p><p>The W3C/WAI evaluation suite writes:5 “Effective evaluation of Web accessibility requires more than simply running an evaluation tool on a Web site. Comprehensive and effective evaluations require evaluators with an understanding of Web technologies, evaluation tools, barriers that people with disabilities experience, assistive technologies and approaches that people with disabilities use, and accessibility guidelines and techniques.”</p><p>This document describes the expert and automatic evaluation methodology. Automatic evaluation can significantly reduce the time and effort needed for an evaluation, but many accessibility checks require human judgement and must be carried out manually. 
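</p><p>To illustrate the distinction, the following is a minimal sketch (not part of UWEM; the class name and the HTML fragment are purely illustrative) of a check in the spirit of the Checkpoint 1.1 tests of section 5: a tool can decide automatically whether an img element carries an alt attribute at all, whereas judging whether the alt text is an adequate equivalent remains a task for a human evaluator.</p>

```python
from html.parser import HTMLParser

class ImgAltCheck(HTMLParser):
    """Illustrative checker: separates machine-decidable failures
    (img without an alt attribute) from results that still need
    human judgement (is the alt text actually equivalent?)."""

    def __init__(self):
        super().__init__()
        self.fails = []       # automatable outcome: alt attribute missing
        self.for_review = []  # expert outcome: alt present, adequacy unknown

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "alt" not in attrs:
                self.fails.append(attrs.get("src"))
            else:
                self.for_review.append((attrs.get("src"), attrs["alt"]))

# Hypothetical page fragment: one image fails automatically,
# the other is queued for expert review.
checker = ImgAltCheck()
checker.feed('<img src="logo.png"><img src="chart.png" alt="Sales chart">')
```

<p>After feeding the fragment, `checker.fails` holds the machine-decided failure and `checker.for_review` the cases left for an expert, mirroring the split between "fully automatable" tests and manual tests in section 5.</p><p>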
This is described in more detail in the tests of section 5.</p><p>4 http://www.w3.org/WAI/eval/ 5 http://www.w3.org/WAI/eval/reviewteams.html</p><p>More information on using tools can be found in the W3C/WAI evaluation suite section “selecting tools”.6</p><p>2.5 Acknowledgements</p><p>The following organizations worked on this UWEM document:</p><p>Accessibility Foundation (The Netherlands, Cluster coordinator); Agder University College (Norway, EIAO coordinator); Fraunhofer Institute for Applied Information Technology FIT (Germany, BenToWeb coordinator); Association BrailleNet (France, SupportEAM coordinator); Vista Utredning AS (Norway); Forschungsinstitut Technologie-Behindertenhilfe der Evangelischen Stiftung Volmarstein (Germany); The Manchester Metropolitan University (UK); Nettkroken as (Norway); FBL s.r.l. (Italy); Warsaw University of Technology, Faculty of Production Engineering, Institute of Production Systems Organisation (Poland); Aalborg University (Denmark); Intermedium as (Norway); Fundosa Technosite (Spain); Dublin City University (Ireland); Universität Linz, Institut integriert studieren (Austria); Katholieke Universiteit Leuven, Research & Development (Belgium); Accessinmind Limited (UK); Multimedia Campus Kiel (Germany); Department of Product and Systems Design, University of the Aegean (Greece); University of York (UK); ISdAC International Association (Belgium); FernUniversität in Hagen (Germany).</p><p>We thank the Web Accessibility Initiative's team from the World Wide Web Consortium for all the useful comments on the draft versions of this document.</p><p>2.6 More information about the WAB Cluster</p><p>The projects participating in the WAB Cluster are funded by the European Union in the second FP6 IST call (2003) of the eInclusion Strategic Objective. The WAB Cluster Web site is available at http://www.wabcluster.org/. 
More information about the projects can be found on the project Web sites:</p><p>• http://bentoweb.org/</p><p>• http://www.eiao.net/</p><p>• http://www.support-eam.org/</p><p>6 http://www.w3.org/WAI/eval/selectingtools.html</p><p>3 Evaluation procedures and conformance</p><p>This section describes the procedures for expert and automatic evaluation, states their objectives, and provides information about conformance claims based on the UWEM. The evaluation procedures in this document have the following objectives:</p><p>• To improve replicability of the evaluation results by definition of a repeatable sampling scheme in section 4.</p><p>• To clarify interpretation, facilitate automatic evaluation and improve replicability of test results (see section 5).</p><p>• To support aggregation of the evaluation results for individual Web pages, Web sites and even sets of Web sites (section 6). The UWEM will also support aggregations across geographical regions as well as economic sectors, as indicated in section 8.</p><p>• To support the reporting of test results (sections 7 and 8).</p><p>• To support large-scale evaluation of Web sites, in order to identify problem areas in Web sites.</p><p>• To evaluate conformance to the checks of section 5; the methodology includes procedures for evaluation down to very specific tests for expert and automatic evaluation.</p><p>3.1 Types of evaluation</p><p>Accessibility testing may be done via automatic, expert and user testing. The different types of evaluation methods have a number of strengths and weaknesses.</p><p>Figure 1 describes three different evaluation methods, of which two (automatic and expert) are covered in the UWEM. 
The figure shows, for example, that automatic evaluation (Tool 1 or Tool 2) can only test for conformance to a subset of the checkpoints (such as the set of tests marked as "fully automatable" in section 5), which means that only a subset of all possible accessibility barriers can be identified reliably by automatic testing. Therefore, the coverage of automatic evaluation as an overall indicator of accessibility is low; however, it can identify many barriers reliably. It may also be applied efficiently to test very large numbers of Web resources within one Web site as well as across multiple Web sites. Tool 1 and Tool 2 here are two fully automatic assessment tools that focus on checking accessibility issues, possibly with some overlap of functionality.</p><p>Some tools can also act as support systems in an expert evaluation process. The tools provide reliable results for a subset of tests and can not only speed up the process by performing some tasks automatically, but also, by providing hints about barrier locations, indicate areas the expert evaluators should focus on.</p><p>User testing is able to identify barriers that are not caught by other testing means, and is also capable of estimating the accessibility for tested scenarios. However, user testing is quite specialised and thus not generally suitable for conformance testing, since it is not able to test all aspects of the tests of section 5. The best approach to ensure both accessibility and UWEM conformance is to use a combined approach encompassing all evaluation methods: automatic evaluation, expert evaluation and user testing of the Web site. </p><p>[Figure 1 relates the evaluation types to the test sets: user testing (not addressed in this UWEM version), expert testing (plus support tools), and automatic testing (tool #1 + tool #2 + ...), set against accessibility testing overall, the UWEM tests, and the UWEM automatic tests.]</p><p>Figure 1: UWEM Evaluation Types.</p><p>User testing is not covered in this version of UWEM.
How to involve users in the evaluation of web content is also described on the W3C/WAI website in the evaluation suite.7</p><p>The main advantages of automatic testing are:</p><p> The method is replicable (although the dynamic nature of some Web technologies means that complete replication is impossible [LEVENE99]). </p><p>7 http://www.w3.org/WAI/eval/users.html</p><p> It may be applied to a very large number of resources on a Web site and to multiple Web sites. 3.2 Website conformance claims</p><p>A conformance claim states whether a web site meets the accessibility standards described in section 5. To claim conformance with the UWEM, it is minimally required that:</p><p>1. The resource sample and the scope for the evaluation are defined according to section 4.</p><p>2. All resources in the sample pass all applicable tests at the corresponding conformance level.</p><p>The conformance levels of the UWEM replicate those of [WCAG10], i.e.:</p><p> Conformance Level 1: all tests relevant to [WCAG10] Priority 1 checkpoints are satisfied (see section 5).</p><p> Conformance Level 2: all tests relevant to [WCAG10] Priority 1 and Priority 2 checkpoints are satisfied (see also section 5).</p><p>More information on the meaning of priorities can be found in: http://www.w3.org/TR/WCAG10/#priorities.</p><p>Claims of accessibility conformance according to the UWEM methodology must use the following form:</p><p>1. The UWEM version and its URI identifier, i.e., http://www.wabcluster.org/UWEM1.1/</p><p>2. The URI to a document detailing the scope and the sample (see section 4) to which the claim refers.</p><p>3. The level of conformance. 3.3 Tool conformance claims</p><p>Evaluation tools can also claim conformance to UWEM 1.1.
In that way, experts evaluating web sites according to UWEM 1.1 will be able to rely on the results of the tool for the fully automatable tests of the methodology.</p><p>To claim conformance to UWEM 1.1, the tool MUST implement all fully automatable tests of section 5, at the corresponding conformance level (see previous section). This conformance claim must use the following form:</p><p>1. The UWEM version and its URI identifier, i.e., http://www.wabcluster.org/UWEM1.1/</p><p>2. The URI to a document detailing a set of public test files where this conformance claim has been verified.</p><p>3. The level of conformance.</p><p>4 Scope and Sampling of resources 4.1 Definitions</p><p>Within UWEM, conformance claims need to refer to a list of resources evaluated within the scope of the web site(s). This section provides the background definitions for the different concepts used.</p><p> Resource: a network data object identified by a URI [RFC3986]. This definition is adapted from the definition of resource in [RFC2616]. This concept is included to cover non-HTTP resources, e.g., those accessed via the FTP protocol. This type of resource must be expressed via an instance of the earl:Content Class (see Error: Reference source not found and [EARL10-Schema] for further details).</p><p> HTTP Resource: a network data object identified by a single HTTP request. This type of resource must be expressed via an instance of the earl:Content Class (see Error: Reference source not found for further details), which may contain additional components of the [HTTP-RDF] W3C Note.
This distinction between the resource types is due to the underlying complexity of the HTTP protocol [RFC2616], where content negotiation can lead to different versions of a resource (e.g., language versions via the Accept-Language HTTP header).</p><p> Resources list: Conformance claims in UWEM are related to a given resources list, which is expressed as an RDF sequence of resources (of any type). Error: Reference source not found describes the RDF syntax used to express a resources list.</p><p>According to the needs of different applications of UWEM, this resources list may be specified by a variety of different participants in the evaluation process – such as a site owner, a site operator, an inspection organisation, etc. This document only explains how such a list should be unambiguously expressed. 4.2 Procedure to express the scope</p><p>For the purposes of the UWEM, a Web site is defined as an arbitrary collection of hyperlinked Web resources, each identified according to the procedure described in section 4.1. </p><p>The purpose of UWEM is to guarantee replicability of results. The unambiguous expression of tested resources is therefore of key importance for the aggregation and comparison of results. Consequently, blanket statements of the type “http://example.org/ is conformant to UWEM 1.1 Level 1” will not be accepted as UWEM 1.1 conformance claims. Such a statement would imply that a set of “seed” resources has been crawled to the end following certain pre-determined limits or constraints. However, bearing in mind the wide variety of existing crawlers, and the different technologies that they use, it is not possible to verify the reliability of such statements.
Furthermore, the different RFCs related to Domain Names leave room for interpretation regarding the concept of a subdomain and its resolution.</p><p>Therefore, for UWEM 1.1 conformance claims, the scope of a Web site MUST be expressed in the form of a list of resources (see Error: Reference source not found). 4.3 Procedure to generate evaluation samples</p><p>In general it will not be practical to test all site resources against all evaluation criteria. Accordingly, after determining and disclosing a list of resources to be evaluated and the targeted conformance level, we propose to identify certain subsets or “samples”.</p><p>The resources to sample should include the Core Resource List supplemented with a selection of arbitrary resources. We call this the Sampled Resource List.</p><p>4.3.1 The Core Resource List</p><p>The Core Resource List is a set of generic resources which are likely to be present in most Web sites, and which are core to the use and accessibility evaluation of a site. The Core Resource List therefore represents a set of resources which should be included in any accessibility evaluation of the site. The Core Resource List cannot, in general, be identified automatically, but requires human judgement to select. In the case of completely automatic testing (e.g., in an observatory), the Core Resource List may be determined via heuristic methods. The Core Resource List should consist of as many of the following resources as are applicable:</p><p> The "Home" resource. This is defined as the resource identified by a URL consisting only of an HTTP protocol component followed by the host name. The mapping from this URL to the Home resource may rely on server-side redirection.
This resource is required to be present on the site (though it may be differently named).</p><p> The "Contact Information" resource (if any).</p><p> The generic "Help" resource (if any).</p><p> The "Site Map" resource (if any).</p><p> The resources comprising the "Primary Site Search" service (if any). This shall include at least one resource where a search can be initiated and at least one resource showing the results from a sample search.</p><p> Examples of resources representative of the primary intended uses of the site (if identifiable).</p><p> If the site provides services which involve a user systematically traversing a sequence of resources (e.g., a multi-page form or transaction), then representative resources accessed at each step of each such key scenario should be included (if applicable and only if within the scope).</p><p> Resources representative of each category of resources having a substantially distinct “look and feel” (typically representative of distinct underlying site “templates”) (if identifiable).</p><p> Resources representative of each of the following distinct web technologies (where they are in use):</p><p> forms,</p><p> frames,</p><p> data tables,</p><p> client-side scripting,</p><p> Cascading Style Sheets,</p><p> applets, plugins, multimedia, etc.</p><p>Of course, any single resource may belong to more than one of the categories above: the requirement is simply that the Core Resource List as a whole should, as far as possible, collectively address all the applicable sampling objectives. Any given resource should appear only once in the Core Resource List.</p><p>4.3.2 The Sampled Resource List</p><p>A Sampled Resource List is a set which can be generated by automatic recursive crawling from a set of “seed” resources.
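The crawl-then-sample approach can be sketched as follows. This is an illustrative Python sketch over an in-memory link graph; the function names and graph representation are my own, and a production crawler would additionally need HTTP handling, URI normalisation, politeness rules and crawl limits.

```python
import random
from collections import deque

def crawl(seeds, links, max_resources=1000):
    """Breadth-first crawl over an in-memory link graph {url: [urls]}.
    Stands in for a real HTTP crawler starting from the seed resources."""
    seen, queue = set(), deque(seeds)
    while queue and len(seen) < max_resources:
        url = queue.popleft()
        if url not in seen:
            seen.add(url)
            queue.extend(links.get(url, []))
    return sorted(seen)

def sample_resources(resources, n, rng=random):
    """Uniform random sample without replacement: an unbiased subset."""
    return rng.sample(resources, min(n, len(resources)))
```

Uniform sampling without replacement is the simplest way to obtain the unbiased subset that the aggregation in section 6 assumes.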
A Sampled Resource List would typically be used in the context of evaluations carried out over large numbers of sites (against automatic criteria only), where it is not feasible or necessary to evaluate the complete set of web pages for each site.8 If a sampled approach is used, then the sampled result must be representative and unbiased, which means that it must be a random subset of the total number of resources. The Sampled Resource List for large-scale automatic evaluation should therefore be built with a sampling algorithm that samples the resource set using a random uniform, or near-random uniform, sampling algorithm [HENZINGER00], or a random set of samples from the complete set of web pages (provided that the complete set of web pages is available).9 The UWEM aggregation method in section 6 operates at the</p><p>8 Note that if a small web site is evaluated entirely, then the mean value of the aggregated samples can be calculated exactly. For larger web sites, it is tolerable to sample a random subset of the web pages, as long as the error margin of the 95% confidence interval is disclosed. 9 As long as the algorithm used selects a random, unbiased set of web pages, the sample is valid and should provide the same results within the calculated error margin for a 95% confidence interval.</p><p>web page level, so each sample unit should resemble the set of web resources that together form a rendered web page. </p><p>The error margin of the 95% confidence interval of the mean value of the aggregated samples for a web site, using the UWEM aggregation method in section 6, should be clearly denoted in the test results.
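As an illustration of the two sampling quality rules used here (the error margin m = z · σ / √n of the 95% confidence interval, and the minimum manual sample size of section 4.3.3), a minimal sketch follows. The function names are illustrative; the UWEM prescribes only the rules, not an implementation.

```python
import math

def error_margin(sigma, n, z=1.96):
    """Error margin m = z * sigma / sqrt(n); z = 1.96 corresponds to a
    95% confidence interval around the sample mean."""
    return z * sigma / math.sqrt(n)

def manual_sample_size(estimated_site_size):
    """Minimum Sampled Resource List size for manual evaluation
    (section 4.3.3): 30 unique resources (if available), plus 2 per
    1000 resources of estimated site size, capped at 50."""
    base = 30 + 2 * (estimated_site_size // 1000)
    return min(50, base, estimated_site_size)
```

Because the error margin shrinks as 1/√n, a tool can either report the margin for the sample it drew, or keep sampling until a target margin is reached.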
It is up to the tool vendor whether to present the error margin for each web site, or to perform sampling until a given error margin is reached,10 so that the maximum error margin is presented once.11</p><p>The error margin m of a confidence interval is defined to be the value added to or subtracted from the sample mean which determines the length of the interval:</p><p> m = z · σ / √n </p><p>where z = 1.96 for a 95% confidence interval, n is the sample size and σ is the standard deviation of the aggregated samples for the web site using the aggregation method in section 6.</p><p>Note that both the sampling algorithm used, and any further restrictions limiting or biasing the result, including, but not limited to, the set of restrictions below, should be explicitly disclosed in any evaluation report:</p><p> restriction to pages without form interaction,</p><p> restriction by content type (e.g., no JavaScript/Flash/PDF),</p><p> restriction by linkage depth from any seed resource,</p><p> restriction on the specific seed resources used,</p><p> restriction on the total number of resources in the sample,</p><p> restriction on the total amount of resource data in the sample.</p><p>4.3.3 Manual sample size</p><p>As an alternative to the tool-assisted random sampling described above, an expert doing a manual evaluation of a web site can also select the Sampled Resource List based on simple random sampling. In this case the minimum number of resources in the Sampled Resource List depends on the estimated</p><p>10 Sampling until a given error margin is achieved is based on the fact that an increase in sample size will decrease the length of the confidence interval without reducing the level of confidence. This is because the standard deviation of the sample mean decreases as n increases.
11 Note that it is also possible to determine a minimum number of samples that will provide results within a given error margin even in the worst case, with a variance of 0.5. However, this will require more samples than strictly necessary for all web sites that are better (less variance) than the worst case. With this in mind, we believe it is better to require disclosure of the error margin of the result than to require a fixed number of samples.</p><p>web site size. </p><p>The minimum sample size consists of 30 unique resources (if available), adding 2 unique resources per 1000 resources of estimated web site size, up to a maximum of 50 resources in the Sampled Resource List. This is an arbitrary number. More detailed recommendations for sample sizes will be added in a later version of the UWEM and will be largely based on the results from experiments within the EIAO and BenToWeb projects.</p><p>5 Tests for conformance evaluation 5.1 Introduction</p><p>This section covers the UWEM automatic and expert testing of Priority 1 and Priority 2 checkpoints of WCAG 1.0 [WCAG10]. For this purpose, this section provides tests for expert and/or automatic evaluation.</p><p>The structure of the tests in this section is the following:</p><p>1. Guideline</p><p>Quotation of the corresponding WCAG 1.0 guideline. Pointers to additional clarifications might be added.</p><p> Checkpoint</p><p>Quotation of the corresponding WCAG 1.0 checkpoint. Pointers to additional clarifications might be added. For each checkpoint, a set of one or more tests is defined. If no automatic tests are defined for a certain technology, there are no applicable tests for automated testing of that technology.</p><p> a) (X)HTML-specific tests</p><p>Set of tests to be made for conformance claims for (X)HTML resources.
Each test consists of:</p><p> Title and ID: short descriptive title (informative) and unique identifier (normative).</p><p> Applicability criteria: elements, attributes and combinations thereof used to determine the applicability of the test. Whenever possible, the criteria are presented as XPath expressions; otherwise a prose description is given. XPath expressions always refer to generated code, including code generated by client-side script.</p><p> Test procedure: tool-independent description of the test procedure. The procedure may consist of multiple steps and is written so as to enable possible machine-testing. The test procedure needs to be applied to each individual item (element, attribute, object, etcetera) covered by the applicability criteria, unless the test procedure states otherwise.</p><p> Expected results: statement defining the fail or pass conditions with regard to one or more steps in the test procedure. The elements or content specified in the applicability criteria pass the test if the result is not FAIL.</p><p> Fully automatable: statement whether the test procedure can be fully automated (yes/no).</p><p> b) CSS-specific tests</p><p>Set of tests to be made for conformance claims for CSS resources. Each test consists of:</p><p> Title and ID: short descriptive title (informative) and unique identifier (normative).</p><p> Applicability criteria: CSS selectors, properties and combinations thereof used to determine the applicability of the test.</p><p> Test procedure: tool-independent description of the test procedure. The procedure may consist of multiple steps and is written so as to enable possible machine-testing.
The test procedure needs to be applied to each individual item (CSS selector, property, etcetera) covered by the applicability criteria, unless the test procedure states otherwise.</p><p> Expected results: statement defining the fail or pass conditions with regard to one or more steps in the test procedure. The elements or content specified in the applicability criteria pass the test if the result is not FAIL.</p><p> Fully automatable: statement whether the test procedure can be fully automated (yes/no).</p><p> c) Tests for external objects</p><p>Set of tests to be made for conformance claims for objects included or embedded in web pages through HTML elements or CSS-generated content. This includes applets, Flash, video and audio. Each test consists of:</p><p> Title and ID: short descriptive title (informative) and unique identifier (normative).</p><p> Applicability criteria: elements, attributes and combinations thereof used to determine the applicability of the test.</p><p> Test procedure: tool-independent description of the test procedure. The procedure may consist of multiple steps and is written so as to enable possible machine-testing. The test procedure needs to be applied to each individual item (element, attribute, object, CSS rule, etcetera) covered by the applicability criteria, unless the test procedure states otherwise.</p><p> Expected results: statement defining the fail or pass conditions with regard to one or more steps in the test procedure. The elements or content specified in the applicability criteria pass the test if the result is not FAIL.</p><p> Fully automatable: statement whether the test procedure can be fully automated (yes/no).</p><p>2. (Optional) Additional clarification issues, such as definition pointers.</p><p>This section does not repeat information available in W3C documents.
Instead, it provides pointers to the relevant places and extends the information only when necessary for the defined tests.</p><p>Web content passes a checkpoint if it fails none of the applicable tests for that checkpoint. Web content fails a checkpoint if it fails any of the applicable tests for that checkpoint. Web content fails an applicable test if any item covered by the applicability criteria causes a FAIL after using the test procedure. 5.2 Guideline 1</p><p>“Provide equivalent alternatives to auditory and visual content.” (See http://www.w3.org/TR/WCAG10/#gl-provide-equivalents) This guideline provides information on how to support auditory and visual content with complementary text alternatives.</p><p>5.2.1 Checkpoint 1.1</p><p>Provide a text equivalent for every non-text element (e.g., via "alt", "longdesc", or in element content). This includes: images, graphical representations of text (including symbols), image map regions, animations (e.g., animated GIFs), applets and programmatic objects, art, frames, scripts, images used as list bullets, spacers, graphical buttons, sounds (played with or without user interaction), stand-alone audio files, audio tracks of video, and video. [Priority 1]</p><p>(See http://www.w3.org/TR/WCAG10/#tech-text-equivalent and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-text-equivalent)</p><p>5.2.1.1 (X)HTML tests</p><p>5.2.1.1.1 Test 1.1_HTML_01</p><p>This test is targeted to check that non-text content has a text equivalent.</p><p> Applicability criteria: all non-text elements that support the alt attribute.</p><p>//img //area //input[@type='image'] //applet  Test procedure: Check that the element has an alt attribute.</p><p> Expected results: PASS if true.
FAIL if false.</p><p> Fully automatable: yes.</p><p>5.2.1.1.2 Test 1.1_HTML_02</p><p>This test is targeted to analyse non-text elements with an empty text alternative.</p><p> Applicability criteria: non-text elements with an empty text alternative.</p><p>//img[@alt=''] //area[@alt=''] //input[@type='image'][@alt=''] //applet[@alt=''][count(*[local-name()!='param'])=0] //object[count(*[local-name()!='param'])=0]  Test procedure: 1. Check that the image/content is purely decorative. 2. If #1 is false, check that there is a text alternative adjacent to the non-text content.</p><p> Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false.</p><p> Fully automatable: no.</p><p>5.2.1.1.3 Test 1.1_HTML_03</p><p>This test is targeted to analyse non-text elements with a non-empty text alternative. </p><p> Applicability criteria: all non-text elements with a non-empty text alternative.</p><p>//img[@alt][@alt!=''] //area[@alt][@alt!=''] //input[@type='image'][@alt][@alt!=''] //applet[@alt][@alt!=''] //object[count(*[local-name()!='param'])>0]</p><p> Test procedure: Check that the text alternative12 represents the function of the non-text element within the context.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.2.1.1.4 Test 1.1_HTML_04</p><p>This test is targeted to analyse long descriptions of media elements.</p><p> Applicability criteria: all long descriptions of images and media elements. </p><p>//img/@longdesc //object//a/@href  Test procedure: 1. Check that the long description referenced by the longdesc or href attribute is available. 2.
Check that it describes the element.</p><p> Expected results: FAIL if #1 or #2 is false.</p><p> Fully automatable: no.</p><p>5.2.1.1.5 Test 1.1_HTML_05</p><p>This test is targeted to find complex images and non-text content that require a long description.</p><p> Applicability criteria: all img and object elements without a long description.</p><p>//img[not(@longdesc)]</p><p>//object</p><p> Test procedure: 1a. For each img element check that it does not require a long description. 1b. For each object element that does not contain or reference a long description, check that it does not require a long description.</p><p> Expected results: PASS if 1a and 1b are true. FAIL if 1a or 1b is false.</p><p> Fully automatable: no.</p><p>5.2.1.1.6 Test 1.1_HTML_06</p><p>This test is targeted at non-text content embedded with the non-standard embed element.</p><p>12 If there is text content adjacent to the non-text element, the text alternative can consist of this text content combined with the non-text element's alt attribute value.</p><p>Since there is no defined method of providing alternatives for embed, embed is inherently inaccessible.</p><p> Applicability criteria: all embed elements.</p><p>//embed  Test procedure: Select the elements.</p><p> Expected results: FAIL if any element is selected.</p><p> Fully automatable: yes.</p><p>5.2.1.1.7 Test 1.1_HTML_07</p><p>This test is targeted to check for text alternatives for non-text content loaded into an inline frame.</p><p> Applicability criteria: all iframe elements.</p><p>//iframe  Test procedure: 1. Check if the element loads non-text content. 2. If #1 is true, check that the element contains a text alternative or a link to a text alternative for the non-text content. Note that the iframe content cannot be updated when the content loaded by the iframe changes as a result of user interaction or script execution. </p><p> Expected results: PASS if #2 is true.
FAIL if #2 is false.</p><p> Fully automatable: no.</p><p>5.2.1.1.8 Test 1.1_HTML_08</p><p>This test is targeted to check for frames that directly load non-text content.</p><p> Applicability criteria: all frame elements.</p><p>//frame  Test procedure: Check whether the element directly loads non-text content.</p><p> Expected results: FAIL if true.</p><p> Fully automatable: no.</p><p>5.2.1.1.9 Test 1.1_HTML_09</p><p>This test is targeted to find embedded or linked audio-only components without a text transcript.</p><p> Applicability criteria: all audio-only components.</p><p>//object //applet //a  Test procedure: Check that there is a text transcript of the audio-only component.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.2.1.1.10 Test 1.1_HTML_10</p><p>This test is targeted to analyse text transcripts of embedded or linked audio-only components.</p><p> Applicability criteria: all audio-only components with a transcript.</p><p>//object //applet //a  Test procedure: Check that the text transcript fully describes all the important information in the audio track(s) of the audio-only component, including spoken words and non-spoken sounds such as sound effects.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.2.1.2 Tests for external objects</p><p>5.2.1.2.1 Test 1.1_external_01</p><p>This test is targeted to find linked or embedded multimedia presentations without associated captions.</p><p> Applicability criteria: all multimedia presentations with at least one audio and at least one video track.</p><p>//object //applet //a  Test procedure: Check that all applicable components have associated captions.</p><p> Expected results: PASS if true.
FAIL if false.</p><p> Fully automatable: no.</p><p>5.2.1.2.2 Test 1.1_external_02</p><p>This test is targeted to analyse the associated captions of multimedia presentations.</p><p> Applicability criteria: all multimedia presentations with at least one audio and at least one video track.</p><p>//object //applet //a  Test procedure: Check that the associated captions fully convey all the important information in the audio track(s) of the multimedia presentation, including spoken words and non-spoken sounds such as sound effects.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.2.2 Checkpoint 1.2</p><p>Provide redundant text links for each active region of a server-side image map. [Priority 1] </p><p>(See http://www.w3.org/TR/WCAG10-TECHS/#tech-redundant-server-links and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-redundant-server-links) 5.2.2.1 (X)HTML tests</p><p>5.2.2.1.1 Test 1.2_HTML_01</p><p>This test is targeted to find active regions of a server-side image map without redundant text links.</p><p> Applicability criteria: all server-side image maps.</p><p>//img[@ismap] //input[@type='image'][@ismap]  Test procedure: 1. Identify all active regions of the image map. 2. Check that there is a redundant text link for each active region. </p><p> Expected results: FAIL if #2 is false (for at least one active region).</p><p> Fully automatable: no.</p><p>5.2.3 Checkpoint 1.3</p><p>Until user agents can automatically read aloud the text equivalent of a visual track, provide an auditory description of the important information of the visual track of a multimedia presentation.
[Priority 1] </p><p>(See http://www.w3.org/TR/WCAG10-TECHS/#tech-auditory-descriptions and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-auditory-descriptions) 5.2.3.1 Tests for external objects</p><p>5.2.3.1.1 Test 1.3_external_01</p><p>This test is targeted to find multimedia presentations without an auditory description of the important information of their visual track.</p><p> Applicability criteria: all multimedia presentations with at least one visual track.</p><p>//object //applet //a  Test procedure: Check that there is an auditory description of the important information of the visual track.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.2.3.1.2 Test 1.3_external_02</p><p>This test is targeted to analyse the auditory description of linked and embedded multimedia presentations.</p><p> Applicability criteria: all multimedia presentations with at least one visual track.</p><p>//object //applet //a  Test procedure: Check that the auditory description effectively conveys all important visual elements of the presentation, including information about actors, actions, body language, graphics, and scene changes.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.2.4 Checkpoint 1.4</p><p>For any time-based multimedia presentation (e.g., a movie or animation), synchronize equivalent alternatives (e.g., captions or auditory descriptions of the visual track) with the presentation.
[Priority 1]</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#tech-synchronize-equivalents and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-synchronize-equivalents) 5.2.4.1 Tests for external objects</p><p>5.2.4.1.1 Test 1.4_external_01</p><p>This test is targeted to check the synchronisation of equivalent alternatives for multimedia presentations.</p><p> Applicability criteria: all multimedia presentations with equivalent alternatives.</p><p>//a //applet //object  Test procedure: Check that the equivalent alternative (captions, auditory descriptions or other equivalent alternative, as applicable) is synchronised with the presentation.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no. 5.3 Guideline 2</p><p>“Don't rely on color alone.” (See http://www.w3.org/TR/WCAG10/#gl-color) This guideline provides information on how to use colour appropriately.</p><p>5.3.1 Checkpoint 2.1</p><p>Ensure that all information conveyed with color is also available without color, for example from context or markup. [Priority 1]</p><p>(See http://www.w3.org/TR/WCAG10/#tech-color-convey and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-color-convey)</p><p>5.3.1.1 (X)HTML tests</p><p>5.3.1.1.1 Test 2.1_HTML_01</p><p>This test is targeted to find phrases in text that refer to parts of a document only by mentioning their colour.</p><p> Applicability criteria: all text.</p><p>//body  Test procedure: 1. Check that the text does not refer to parts of a document only by mentioning their colour. 2. If #1 is false, check that references in text via colour are redundant.</p><p> Expected results: PASS if #1 or #2 is true.
FAIL if #1 and #2 are false.</p><p> Fully automatable: no.</p><p>5.3.1.1.2 Test 2.1_HTML_02</p><p>This test is targeted to find phrases in non-text content that refer to parts of a document only by mentioning their colour.</p><p> Applicability criteria: all text in non-text content.</p><p>//img //area //input[@type='image'] //applet //object  Test procedure: 1. Check that the text does not refer to parts of a document only by mentioning their colour. 2. If #1 is false, check that references in text via colour are redundant.</p><p> Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false.</p><p> Fully automatable: no.</p><p>5.3.1.1.3 Test 2.1_HTML_03</p><p>This test is targeted to find coloured elements without redundant methods of conveying the information.</p><p> Applicability criteria: all coloured HTML elements.</p><p>//*/@color //*/@bgcolor //*/@text  Test procedure:</p><p>Check that the colour information is redundant, i.e. colour is not the only method to provide the information.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.3.1.2 CSS tests</p><p>5.3.1.2.1 Test 2.1_CSS_01</p><p>This test is targeted to find coloured elements without redundant methods of conveying the information.</p><p> Applicability criteria: all coloured content produced by CSS.</p><p> color background-color background border-color border outline-color outline  Test procedure: Check that the colour information is redundant.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.3.2 Checkpoint 2.2</p><p>Ensure that foreground and background color combinations provide sufficient contrast when viewed by someone having color deficits or when viewed on a black and white screen. 
[Priority 2 for images, Priority 3 for text].</p><p>(See http://www.w3.org/TR/WCAG10/#tech-color-contrast and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT- TECHS/#tech-color-contrast) 5.3.2.1 (X)HTML tests</p><p>5.3.2.1.1 Test 2.2_HTML_01</p><p>This test is targeted to find images without sufficient colour contrast.</p><p> Applicability criteria: all images:</p><p>//img //area //input[@type='image']</p><p>16 April 2007 Page 42 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p>//object  Test procedure: Check that the contrast between foreground and background colour is sufficient to convey the information.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: yes (if colour contrast algorithm is available13).</p><p>5.3.2.2 CSS tests</p><p>5.3.2.2.1 Test 2.2_CSS_01</p><p>This test is targeted to find images without sufficient colour contrast.</p><p> Applicability criteria: images referenced from CSS styles.</p><p> background-image background content cursor list-style-image14  Test procedure: Check that the contrast between foreground and background colour is sufficient to convey the information.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: yes (if colour contrast algorithm is available). 5.4 Guideline 3</p><p>“Use markup and style sheets and do so properly.” (See http://www.w3.org/TR/WCAG10/#gl-structure-presentation) 5.4.1 Checkpoint 3.1</p><p>When an appropriate markup language exists, use markup rather than images to convey information. 
[Priority 2]</p><p>(See http://www.w3.org/TR/WCAG10/wai-pageauth.html#tech-use- markup and the techniques in http://www.w3.org/TR/WAI- WEBCONTENT-TECHS/#tech-use-markup) 5.4.1.1 (X)HTML tests</p><p>5.4.1.1.1 Test 3.1_HTML_01</p><p>This test is targeted to check that there are no images containing text that can be replaced by markup constructs.</p><p>13 An example of such an algorithm is available in WCAG 2.0: http://www.w3.org/TR/2006/WD- WCAG20-20060427/appendixA.html#luminosity-contrastdef. 14 These CSS properties can specify the URI of an image to use as content or background.</p><p>16 April 2007 Page 43 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p> Applicability criteria: all images.</p><p>//img //input[@type='image'] //object  Test procedure: 1. Check that the image contains text. 2. If #1 is true check that the image can be replaced by a markup construct without loss of information conveyed by the image. </p><p> Expected result: FAIL if #2 is true.</p><p> Fully automatable: no.</p><p>5.4.1.1.2 Test 3.1_HTML_02</p><p>This test is targeted to check that there are no images of mathematical equations that can be replaced by markup constructs.</p><p> Applicability criteria: all images.</p><p>//img //input[@type='image'] //object  Test procedure: 1. Check that the image contains a mathematical equation. 2. If yes, check that the image can be replaced by a markup construct.</p><p> Expected result: FAIL if #2 is true. </p><p> Fully automatable: no.</p><p>5.4.1.1.3 Test 3.1_HTML_03</p><p>This test is targeted to check that there are no bitmap images that contain text or mathematical equations and can be replaced by markup. </p><p> Applicability criteria: all bitmap images that contain text or mathematical equations.</p><p>//img //input[@type='image'] //object  Test procedure: Check that there is no markup language able to convey the information in the image.</p><p> Expected result: PASS if true. 
</p><p> Fully automatable: no.</p><p>16 April 2007 Page 44 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p>5.4.2 Checkpoint 3.2</p><p>Create documents that validate to published formal grammars. [Priority 2]</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#tech-identify- grammar and the techniques in http://www.w3.org/TR/WAI- WEBCONTENT-TECHS/#tech-identify-grammar) 5.4.2.1 (X)HTML tests</p><p>5.4.2.1.1 Test 3.2_HTML_01</p><p>This test is targeted to check that the document contains a valid document type declaration.</p><p>Note: W3C Quality Assurance maintains a document entitled “Recommended DTDs to use in your Web document” at http://www.w3.org/QA/2002/04/valid- dtd-list.html. </p><p> Applicability criteria: content preceding the HTML element of any HTML 4.x or XHTML 1.0 document.</p><p> Test procedure: Check that the doctype declaration is valid.15</p><p> Expected result: PASS if true. FAIL if false.</p><p> Fully automatable: yes.</p><p>5.4.2.1.2 Test 3.2_HTML_02</p><p>This test is targeted to find violations against the formal schema for HTML 4.x or XHTML 1.0.</p><p> Applicability criteria: Any HTML 4.x or XHTML 1.0 document.</p><p> Test procedure: a) For HTML, check that the document validates against the specified document type using a validating SGML parser. b) For XHTML, check that the document is well formed and that it validates against the specified document type using a validating XML parser.</p><p> Expected result: FAIL if false.</p><p> Fully automatable: yes.</p><p>15 In XHTML 1.0 documents, the public identifier in the DOCTYPE declaration must reference one of three DTDs using the respective Formal Public Identifier. 
See 3.1.1 ("Strictly Conforming Documents") of XHTML™ 1.0 The Extensible HyperText Markup Language (Second Edition) - A Reformulation of HTML 4 in XML 1.0 - W3C Recommendation 26 January 2000, revised 1 August 2002: http://www.w3.org/TR/2002/REC-xhtml1-20020801/#normative.</p><p>5.4.2.2 CSS tests</p><p>5.4.2.2.1 Test 3.2_CSS_01</p><p>This test is targeted to find violations against the formal grammar for CSS 1.0 or CSS 2.x.</p><p> Applicability criteria: any CSS style rules.</p><p> Test procedure: a) For style rules inside the style element or in the style attributes in an (X)HTML file: check that they conform to the formal grammar defined at http://www.w3.org/TR/REC-CSS2/grammar.html with a SAC parser.16</p><p> b) For CSS files: check that parsing each CSS file with a SAC parser causes no errors. Note that the W3C's “CSS Validator” does more than checking CSS rules against the formal grammar: it also checks for (un-)defined properties and their values, which are not included in the grammar. The grammar does not define the actual “vocabulary” of CSS.</p><p> Expected result: PASS if true. FAIL if false.</p><p> Fully automatable: yes.</p><p>5.4.3 Checkpoint 3.3</p><p>Use style sheets to control layout and presentation. 
[Priority 2]</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#tech-style-sheets and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-style-sheets) 5.4.3.1 (X)HTML tests</p><p>5.4.3.1.1 Test 3.3_HTML_01</p><p>This test is targeted to find whitespace that is used to control spacing between characters within words.</p><p>Note: There is no language-independent definition of the term “word”, so evaluators need to check whether the term “word” is applicable to the language of the content they are evaluating and, if so, make sure that they understand what the term “word” means in that language.</p><p> Applicability criteria: any “word” containing whitespace.</p><p>16 SAC: Simple API for CSS; see 'SAC: Simple API for CSS - W3C Note, 28 July 2000' at http://www.w3.org/TR/SAC/ and 'SAC: The Simple API for CSS' at http://www.w3.org/Style/CSS/SAC/.</p><p> text()  Test procedure: Check that the whitespace is not used to convey emphasis or importance.</p><p> Expected result: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.4.3.1.2 Test 3.3_HTML_02</p><p>This test is targeted to determine if layout or presentation of one or more elements has been achieved via means other than CSS.</p><p> Applicability criteria: elements and attributes that can be used to position or influence presentation.</p><p>//img //font //td (in layout table) //th (in layout table) //center //u //b //i //blink //strong (unless used semantically) //em (unless used semantically) //*/@align //*/@border //*/@hspace //*/@vspace //*/@bgcolor  Test procedure: Check that the resulting position and/or presentation could not be achieved using style sheets.</p><p> Expected result: FAIL if false.</p><p> Fully automatable: no.</p><p>5.4.4 Checkpoint 3.4</p><p>Use relative rather than absolute units in markup language attribute values and style sheet property values. 
[Priority 2]</p><p>(See http://www.w3.org/TR/WCAG10/wai-pageauth.html#tech- relative-units and the techniques in http://www.w3.org/TR/WAI- WEBCONTENT-TECHS/#tech-relative-units)</p><p>16 April 2007 Page 47 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p>5.4.4.1 (X)HTML tests</p><p>5.4.4.1.1 Test 3.4_HTML_01</p><p>This test is targeted to check for relative values in (X)HTML attributes of type %Length;.</p><p> Applicability criteria: attributes that specify height, width, cell padding, cell spacing or a character offset as a number of pixels or a percentage.</p><p>//table/@cellpadding //table/@cellspacing //col/@charoff //colgroup/@charoff //tbody/@charoff //td/@charoff //tfoot/@charoff //th/@charoff //thead/@charoff //tr/@charoff //iframe/@height //td/@height //th/@height //img/@height //object/@height //applet/@height //hr/@width //iframe/@width //img/@width //object/@width //table/@width //td/@width //th/@width //applet/@width  Test procedure: Check that the value of the attribute is a percentage value (positive integer + '%') or that it is an absolute value that does not interfere with the readability of other text elements.</p><p> Expected result: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.4.4.1.2 Test 3.4_HTML_02</p><p>This test is targeted to check for relative values in (X)HTML attributes of type multi-length17 ("%MultiLength;" in the HTML 4.01 DTD).</p><p> Applicability criteria: attributes that specify the width of columns or column groups.</p><p>//col/@width //colgroup/@width</p><p>17 http://www.w3.org/TR/1999/REC-html401-19991224/types.html#type-multi-length</p><p>16 April 2007 Page 48 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p> Test procedure: Check that the value of the attribute is a percentage value (positive integer + '%') or an * (asterisk) value or that it is an absolute value that does not interfere with the readability of other text elements.</p><p> Expected result: PASS if true. 
FAIL if false.</p><p> Fully automatable: no.</p><p>5.4.4.1.3 Test 3.4_HTML_03</p><p>This test is targeted to check for relative values in (X)HTML attributes of type multi-length-list18 or ("%MultiLengths;" in the HTML 4.01 DTD: a comma- separated list of MultiLength).</p><p> Applicability criteria: attributes that specify a list of lengths in pixels, a percentage or a relative value.</p><p>//frameset/@cols //frameset/@rows  Test procedure: Check that each value listed in the attribute is a percentage value (positive integer + '%') or an * (asterisk) value or that it is an absolute value that does not interfere with the readability of other text elements.</p><p> Expected result: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.4.4.2 CSS tests</p><p>5.4.4.2.1 Test 3.4_CSS_01</p><p>This test is targeted to check for relative value units in CSS properties that may contain <length> values.</p><p> Applicability criteria19: CSS properties that specify length, width, height, size, spacing or offset.</p><p> background-position border-spacing bottom font-size height left letter-spacing line-height marker-offset max-height</p><p>18 http://www.w3.org/TR/1999/REC-html401-19991224/types.html#type-multi-length 19 This includes shorthands derived from the CSS properties list in the applicability criteria. </p><p>16 April 2007 Page 49 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p> max-width min-height min-width right size text-indent text-shadow top vertical-align width word-spacing  Test procedure: 1. Check that the unit of the value is not cm, mm, in, pt, pc or px20. 2. Check that the value is not xx-small, x-small, small, medium, large, x- large or xx-large21. 3. 
If an absolute value is used, check that the absolute value does not interfere with the readability of any text element.</p><p> Expected result: PASS if #1 and #2 are true, or if #3 is true.</p><p> Fully automatable: no.</p><p>5.4.5 Checkpoint 3.5</p><p>Use header elements to convey document structure and use them according to specification. [Priority 2]</p><p>(See http://www.w3.org/TR/WCAG10/wai-pageauth.html#tech- logical-headings and the techniques in http://www.w3.org/TR/WAI- WEBCONTENT-TECHS/#tech-logical-headings) 5.4.5.1 (X)HTML tests</p><p>5.4.5.1.1 Test 3.5_HTML_01</p><p>This test is targeted to find markup constructs that conceptually represent headings, but are not marked up with hx elements.</p><p> Applicability criteria: the body of a web page.</p><p>//body//*  Test procedure: 1. Select markup constructs that conceptually represent headings. 2. Check whether the headings are marked up with hx elements. </p><p>Hint: candidates for insufficient markup are e.g. combinations of font-</p><p>20 The CSS 2.0 specification lists 'px' (pixel) as a relative unit. However, the size of a pixel is relative to the computer display, not to any properties defined in web content. The CSS 2.0 specification also defines a “reference pixel” with an absolute size. 21 These are defined as absolute values at http://www.w3.org/TR/1998/REC-CSS2- 19980512/fonts.html#value-def-absolute-size.</p><p>16 April 2007 Page 50 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p> weight/font-style changes (HTML b, i elements; CSS font-weight, font- style properties) and font-size enlargements (HTML big, font elements; CSS font-size property). 
This list is not exhaustive.</p><p> Expected result: PASS if true.</p><p> Fully automatable: no.</p><p>5.4.5.1.2 Test 3.5_HTML_02</p><p>This test is targeted to check that there is no heading element in the page that has a higher level number than the first heading.</p><p> Applicability criteria: all heading elements except h622.</p><p>//h1 //h2 //h3 //h4 //h5  Test procedure: Check that the heading element does not have a higher level number than the first heading element in the document.</p><p> Expected result: PASS if true.</p><p> Fully automatable: yes.</p><p>5.4.5.1.3 Test 3.5_HTML_03</p><p>This test is targeted to check that no levels are skipped in the heading hierarchy.</p><p> Applicability criteria: all heading elements except h1 and h223.</p><p>//h3 //h4 //h5 //h6  Test procedure: Check that the inspected heading element does not skip one or more levels in the structure (e.g., check that for h5 the preceding heading element is either h4, h5 or h6).</p><p> Expected result: PASS if true.</p><p> Fully automatable: yes.</p><p>22 h6 elements don't need to be checked because they represent the lowest heading level. 23 h1 and h2 don't have to be checked because there is no header level which is two levels higher.</p><p>16 April 2007 Page 51 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p>5.4.5.1.4 Test 3.5_HTML_04</p><p>This test is targeted to check if heading elements have been used (improperly) for font formatting.</p><p> Applicability criteria: all heading elements (h1, ..., h6)</p><p>//h1 //h2 //h3 //h4 //h5 //h6  Test procedure: Check that headings are not used to create font formatting effects.</p><p> Expected result: PASS if true. 
FAIL if false.</p><p> Fully automatable: no.</p><p>5.4.5.1.5 Test 3.5_HTML_05</p><p>This test is targeted to check for the correct heading level hierarchy.</p><p> Applicability criteria: the whole document.</p><p>//body  Test procedure: Check that the heading elements convey the logical structure of the document.</p><p> Expected result: PASS if true.</p><p> Fully automatable: no.</p><p>5.4.6 Checkpoint 3.6</p><p>Mark up lists and list items properly. [Priority 2]</p><p>(See http://www.w3.org/TR/WCAG10/#tech-list-structure and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT- TECHS/#tech-list-structure) Encode list structure and list items (UL, OL, DL, LI) properly. The HTML list elements DL, UL, and OL (available in HTML 3.2 and HTML 4.0) should only be used to create lists, not for formatting effects such as indentation. When possible, use ordered (numbered) lists to help navigation.</p><p>16 April 2007 Page 52 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p>5.4.6.1 (X)HTML tests</p><p>5.4.6.1.1 Test 3.6_HTML_01</p><p>Authors can disable the default list style of ordered and unordered list and manually create multi-level numbering (for example, 1, 1.1, 1.2, 1.2.1).This test is targeted to check that manually added list numbering conveys the depth of the list to users.</p><p> Applicability criteria: all nested ordered and unordered lists with manually inserted multi-level numbers.</p><p>//li/ol //li/ul  Test procedure: Check that the numbering does not skip levels or numbers.</p><p> Expected result: PASS if true.</p><p> Fully automatable: no.</p><p>5.4.6.1.2 Test 3.6_HTML_02</p><p>This test is targeted to find out whether the List elements (li) are appropriate for the context of the document, i.e. 
to create lists, not for formatting such as indentation.</p><p> Applicability criteria: all list item elements, including definitions in definition lists.</p><p>//ul/li //ol/li //dl/dd  Test procedure: Check each that li or dd elements are used to mark up list items and not for formatting effects.</p><p> Expected result: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.4.6.1.3 Test 3.6_HTML_03</p><p>This test is targeted to find paragraphs, line breaks and numbers that are used to simulate numbered lists and which can be replaced with the ol element.</p><p> Applicability criteria: all paragraphs starting with a counter (number or character that indicates an order or sequence).</p><p>//p //p//br</p><p>16 April 2007 Page 53 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p> Test procedure: 1. Check that the document does not contain sequences of paragraphs that start with counters to simulate numbered lists. 2. Check that the document does not contain paragraphs with line breaks followed by counters to simulate numbered lists. </p><p> Expected result: PASS if #1 and #2 are true.</p><p> Fully automatable: yes.</p><p>5.4.6.1.4 Test 3.6_HTML_04</p><p>This test is targeted to find paragraphs, line breaks and certain characters such as asterisk and hyphens that are used to simulate unordered lists and which can be replaced with the ul element.</p><p> Applicability criteria: all paragraphs starting with characters that can be used to simulate list items.</p><p>//p //p//br  Test procedure: 1. Check that the document does not contain sequences of paragraphs that start with characters such as asterisk or hyphen to simulate unordered lists. 2. Check that the document does not contain paragraphs with line breaks followed by characters such as asterisk or hyphen to simulate unordered lists. </p><p> Expected result: PASS if #1 and #2 are true. 
FAIL if #1 or #2 are false.</p><p> Fully automatable: yes.</p><p>5.4.6.1.5 Test 3.6_HTML_05</p><p>This test is targeted to find paragraphs, line breaks and images displaying numbers that are used to simulate ordered lists and which can be replaced with the ol element and CSS.</p><p> Applicability criteria: all paragraphs starting with images displaying a number or other types of counters.</p><p>//p//img //p//br/following-sibling::img  Test procedure: 1. Check that the document does not contain sequences of paragraphs that start with images displaying numbers or other types of counters to simulate ordered lists. 2. Check that the document does not contain paragraphs with line breaks followed by images of consecutive numbers or other types of counters to </p><p>16 April 2007 Page 54 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p> simulate numbered lists. </p><p> Expected result: PASS if #1 and #2 are true. FAIL if #1 or #2 are false.</p><p> Fully automatable: no.</p><p>5.4.6.1.6 Test 3.6_HTML_06</p><p>This test is targeted to find paragraphs, line breaks and images (especially bullet images) that are used to simulate unordered lists and which can be replaced with the ul element and CSS.</p><p> Applicability criteria: all paragraphs starting with bullet images.</p><p>//p//img //p//br/following-sibling::img  Test procedure: 1. Check that the document does not contain sequences of paragraphs that start with images of bullets to simulate unordered lists. 2. Check that the document does not contain paragraphs with line breaks followed by images of bullets to simulate unordered lists.</p><p> Expected result: PASS if #1 and #2 are true. 
FAIL if #1 or #2 are false.</p><p> Fully automatable: no.</p><p>5.4.6.1.7 Test 3.6_HTML_07</p><p>This test is targeted to find paragraphs, line breaks and formatting effects that are used to simulate definition lists and which can be replaced with the dt and dd elements.</p><p> Applicability criteria: all paragraphs starting with a term followed by a definition.</p><p>//p //p//br  Test procedure: Check that the document does not contain paragraphs that should be replaced by a definition list.</p><p> Expected result: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.4.6.2 CSS tests</p><p>5.4.6.2.1 Test 3.6_CSS_01</p><p>This test is targeted to check that a fall-back list style is present if images are used as list bullets.</p><p> Applicability criteria: all list-style properties.</p><p>*{list-style:...;}, *{list-style-image:url(...);}, *{list-style-type:...;}  Test procedure: Check that a fall-back bullet style (e.g., 'disc') is specified in case a bullet image cannot be loaded.</p><p> Expected result: PASS if true.</p><p> Fully automatable: yes. </p><p>5.4.7 Checkpoint 3.7</p><p>Mark up quotations. Do not use quotation markup for formatting effects such as indentation. [Priority 2]</p><p>(See http://www.w3.org/TR/WCAG10/#tech-quotes and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-quotes) 5.4.7.1 (X)HTML tests</p><p>5.4.7.1.1 Test 3.7_HTML_01</p><p>This test is targeted to check that quotation elements are used properly to mark up quotations and not for formatting or indentation effects.</p><p> Applicability criteria: all blockquote elements.</p><p>//blockquote  Test procedure: Check that the blockquote is used to mark up a quotation.</p><p> Expected result: PASS if true. 
FAIL if false.</p><p> Fully automatable: no.</p><p>5.4.7.1.2 Test 3.7_HTML_02</p><p>This test is targeted to check that short quotations (q element) are used properly for quotations and not for layout purposes.</p><p> Applicability criteria: all q elements.</p><p>//q  Test procedure: Check that the q element is used to mark up a quotation.</p><p> Expected result: PASS if true. FAIL if false.</p><p>16 April 2007 Page 56 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p> Fully automatable: no.</p><p>5.4.7.1.3 Test 3.7_HTML_03</p><p>This test is targeted to find quotations that have not been marked up with q or blockquote.</p><p> Applicability criteria: all text.</p><p>//p  Test procedure: 1. Are there any quotations in the selected paragraphs, e.g., passages that contain quotation marks in the markup or have CSS-generated quotation marks? 2. If yes, check that the quotations are marked up with q or blockquote.</p><p> Expected result: PASS if #2 is true. FAIL if #2 is false.</p><p> Fully automatable: no.</p><p>5.4.7.1.4 Test 3.7_HTML_04</p><p>This test is targeted to find any cite and address elements that are used to italicise text.</p><p> Applicability criteria: all cite and address elements.</p><p>//cite //address  Test procedure: 1. Select any cite and address elements. 2. Determine if they are used to italicise text instead of marking up a citation or providing information on the author of the document, respectively.</p><p> Expected result: FAIL if #2 is true.</p><p> Fully automatable: no. 
5.5 Guideline 4</p><p>“Clarify natural language usage.” (See http://www.w3.org/TR/WAI-WEBCONTENT/#gl-abbreviated-and-foreign) This guideline provides information on how to facilitate pronunciation or interpretation of abbreviated or foreign text.</p><p>5.5.1 Checkpoint 4.1</p><p>Clearly identify changes in the natural language of a document's text and any text equivalents (e.g., captions). [Priority 1]</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#tech-identify-changes and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-identify-changes) 5.5.1.1 (X)HTML tests</p><p>5.5.1.1.1 Test 4.1_HTML_01</p><p>This test is targeted to find changes in human language that are not marked up.</p><p> Applicability criteria: all elements containing text.</p><p>//*[true(text())]  Test procedure: 1. Select elements starting at the lowest level (the "leaves" in the tree structure), and move upwards to parent elements, checking every element in the document. 2. For each text segment24 in the current element, determine the human language. Note that proper nouns, technical terms, and words or phrases that have become part of the language of the rest of the element are not considered as changes in human language for the purpose of this test.</p><p>3. Check that the human language found in #2 is the same for the rest of the content in the current element. 4. Determine the human language of the text in the parent element. (If the parent element is the html element, its language is defined either by the lang attribute or from page context.) 5. If two different human languages are found in #2 and #4, check that the (current) element has a lang attribute that identifies the human language by means of the corresponding two-letter code defined in ISO63925.</p><p> Expected results: PASS if #3 or #5 is true. 
FAIL if both #3 and #5 are false.</p><p> Fully automatable: no.</p><p>5.5.1.1.2 Test 4.1_HTML_02</p><p>This test is targeted to find changes in human language that are not marked up.</p><p>24 In many languages, a text segment corresponds to single words or groups of words; in some languages, especially those with a logographic (sometimes called "ideographic") writing system, a text segment can even correspond to a single character. 25 The HTTP 'Content-Language' header is not taken into account here because this header should not be used for text processing. See http://www.w3.org/TR/i18n-html-tech- lang/#ri20040808.110827800 for further information.</p><p>16 April 2007 Page 58 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p> Applicability criteria: all attributes that specify alternative text, advisory information (the title attribute), a table summary, a label in a hierarchical menu (the label attribute), a standby text (the standby attribute) or other textual content.</p><p>//img/@alt //applet/@alt //area/@alt //input/@alt //meta/@content //option/@label //optgroup/@label //object/@standby //table/@summary //*/@title26) //input[@type='text']/@value //input[@type='submit']/@value //frame/@name //iframe/@name  Test procedure: 1. Determine the human language of the text in the attribute. 2. Determine the human language of the text in the nearest ancestor of the attribute's parent element. 3. If the languages found in steps 1 and 2 are different, check that the attribute's parent element has a lang attribute that identifies the human language of the attribute value by means of the corresponding two-letter code defined in ISO639.</p><p> Expected results: PASS if true. 
FAIL if false.</p><p> Fully automatable: no.</p><p>5.5.1.1.3 Test 4.1_HTML_03</p><p>This test is targeted to find block-level elements with changes in text direction (for human languages) that are not marked up.</p><p> Applicability criteria: all block-level elements containing text.27</p><p>26 All elements except base, basefont, head, html, meta, param, script and title. 27 Note the following information from the section "Inheritance of text direction information" in HTML 4.01 (http://www.w3.org/TR/html401/struct/dirlang.html#blocklevel-bidi): "The Unicode bidirectional algorithm requires a base text direction for text blocks. To specify the base direction of a block-level element, set the element's dir attribute. The default value of the dir attribute is "ltr" (left-to-right text). When the dir attribute is set for a block-level element, it remains in effect for the duration of the element and any nested block-level elements. Setting the dir attribute on a nested element overrides the inherited value. (...) Inline elements, on the other hand, do not inherit the dir attribute. This means that an inline element without a dir attribute does not open an additional level of embedding with respect to the bidirectional algorithm. (Here, an element is considered to be block-level or inline based on its default presentation. Note that the INS and DEL elements can be block-level or inline depending on their context.)"</p><p>16 April 2007 Page 59 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p>//*[true(text())]  Test procedure: 1. Determine the direction of the text in the (current) element. 2. If the current element is the html or body element and the text- direction if right to left, check that the element has a dir attribute with the value "rtl". 3. If the current element is not the html element, determine the direction of the text in the parent element. 4. 
If two different text directions are found in step 3, check that the (current) element has a dir attribute that identifies the text direction by means of the corresponding value (rtl or ltr).</p><p> Expected results: PASS if #2 and #4 are true. FAIL if #2 or #4 is false.</p><p> Fully automatable: no.</p><p>5.5.1.1.4 Test 4.1_HTML_04</p><p>This test is targeted to find changes in text direction (for human languages) that are not marked up.</p><p> Applicability criteria: all attributes that specify alternative text, advisory information (the title attribute), a table summary, a label in a hierarchical menu (the label attribute), a standby text (the standby attribute) or other textual content.</p><p>//img/@alt //applet/@alt //area/@alt //input/@alt //meta/@content //option/@label //optgroup/@label //object/@standby //table/@summary //*/@title28) //input[@type='text']/@value //input[@type='submit']/@value //frame/@name</p><p>Also note the following information from the section "The effect of style sheets on bidirectionality" in HTML 4.01 (http://www.w3.org/TR/1999/REC-html401- 19991224/struct/dirlang.html#style-bidi): "In general, using style sheets to change an element's visual rendering from block-level to inline or vice-versa is straightforward. However, because the bidirectional algorithm relies on the inline/block-level distinction, special care must be taken during the transformation. When an inline element that does not have a dir attribute is transformed to the style of a block- level element by a style sheet, it inherits the dir attribute from its closest parent block element to define the base direction of the block. When a block element that does not have a dir attribute is transformed to the style of an inline element by a style sheet, the resulting presentation should be equivalent, in terms of bidirectional formatting, to the formatting obtained by explicitly adding a dir attribute (assigned the inherited value) to the transformed element." 
28 All elements except base, basefont, head, html, meta, param, script and title.</p><p>//iframe/@name  Test procedure: 1. Determine the direction of the text in the attribute. 2. Check that the parent element has a dir attribute that identifies the text direction by means of the corresponding value (rtl or ltr).</p><p> Expected results: PASS if #2 is true. FAIL if #2 is false.</p><p> Fully automatable: no.</p><p>5.5.1.2 CSS tests</p><p>5.5.1.2.1 Test 4.1_CSS_01</p><p>This test is targeted to find each CSS style that generates text in a different human language than the language of the parent element of the element or elements for which the style is defined.</p><p>Note the following information from the CSS 2.0 specification: "As their names indicate, the :before and :after pseudo-elements specify the location of content before and after an element's document tree content" (emphasis added)29. For this reason, the language of the generated content should be the same as the language of the parent of the element for which the CSS style is defined.</p><p> Applicability criteria: each text string generated by CSS styles.</p><p>*:after {content: "...";} *:before {content: "...";}  Test procedure: 1. Determine the language of the text string. 2. Determine the language defined for or inherited by the parent element of the element for which the CSS style is defined. 3. Check that the languages found in #1 and #2 are the same.</p><p> Expected results: PASS if #3 is true. 
FAIL if #3 is false.</p><p> Fully automatable: no.</p><p>5.6 Guideline 5</p><p>“Create tables that transform gracefully.” (See http://www.w3.org/TR/WCAG10/#gl-table-markup) This guideline provides information on how to identify properly marked up tables.</p><p>29 http://www.w3.org/TR/1998/REC-CSS2-19980512/generate.html#before-after-content</p><p>5.6.1 Checkpoint 5.1</p><p>For data tables, identify row and column headers. [Priority 1] </p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-table-headers) 5.6.1.1 (X)HTML tests</p><p>5.6.1.1.1 Test 5.1_HTML_01</p><p>This test is targeted to find data tables that do not have row and column headers.</p><p> Applicability criteria: all data tables.</p><p>//table  Test procedure: 1. Select the data cells in the data table. 2. For each data cell, check that there is a row header cell and a column header cell that can be identified by the sections "Algorithm to find heading information" or "Associating header information with data cells" in HTML 4.01.30</p><p> Expected results: PASS if #2 is true.</p><p> Fully automatable: no.</p><p>5.6.1.1.2 Test 5.1_HTML_02</p><p>This test is targeted to identify preformatted text used to display tabular information. Preformatted text does not have mechanisms to specify row and column headings.</p><p> Applicability criteria: preformatted text.</p><p>//pre  Test procedure: Determine if the preformatted text is visually rendered as a table.</p><p> Expected results: FAIL if true.</p><p> Fully automatable: no.</p><p>5.6.2 Checkpoint 5.2</p><p>For data tables that have two or more logical levels of row or column headers, use markup to associate data cells and header cells. 
[Priority 1] </p><p>30 http://www.w3.org/TR/1999/REC-html401-19991224/struct/tables.html#h-11.4</p><p>(See http://www.w3.org/TR/WCAG10-TECHS/#tech-table-structure and the techniques in http://www.w3.org/TR/WCAG10-HTML-TECHS/#identifying-table-rows-columns) 5.6.2.1 (X)HTML tests</p><p>5.6.2.1.1 Test 5.2_HTML_01</p><p>This test is targeted to identify tables with two or more logical levels of rows or columns that are not marked up properly by using table markup that associates rows and columns.</p><p> Applicability criteria: data tables where content in each data cell has a relationship with at least two row headers and/or at least two column headers.</p><p>//table  Test procedure: 1. For each data cell, check that at least one of the following applies: 1.a. the headers attribute contains a space-separated list of all the values of the id attributes of the header cells with which the data cell has a relationship; 1.b. each column header cell that provides header information for the rest of the column that contains it, has a scope attribute with the value 'col'; each column header cell that provides header information for the rest of the column group that contains it, has a scope attribute with the value 'colgroup'; each row header cell that provides header information for the rest of the row that contains it, has a scope attribute with the value 'row'; each row header cell that provides header information for the rest of the row group that contains it, has a scope attribute with the value 'rowgroup'.</p><p> Expected results: PASS if #1.a or #1.b is true.</p><p> Fully automatable: no.</p><p>5.6.2.1.2 Test 5.2_HTML_02</p><p>This test is targeted to determine if header cells in a heading with two or more levels are categorised consistently. 
This test does not require that axis should always be used, but that the categories identified by the attribute are appropriate or logical.</p><p> Applicability criteria: table headers with two or more levels.</p><p>//table[count(descendant::tr[th]) > 1] //table[count(descendant::tr[td[@scope]]) > 1] //table[descendant::tr[count(th) > 1]] //table[descendant::tr[count(td[@scope]) > 1]] //table[descendant::td[boolean(substring-after(substring-after(normalize-space(@headers), ' '), ' '))]]  Test procedure: </p><p>For each header cell in a table header with two or more levels, check that any axis attribute consistently labels the category to which the header cell belongs. Note that the value of the axis attribute is a label that may be presented to a user, instead of a merely machine-readable class or name.</p><p> Expected results: FAIL if false.</p><p> Fully automatable: no.</p><p>5.6.2.1.3 Test 5.2_HTML_03</p><p>This test is targeted to find inconsistent structuring of tables. This test does not require that colgroup, thead, tfoot or tbody should always be used, but that their use is appropriate or logical.</p><p> Applicability criteria: tables defining column groups, table headings, table footers and table bodies.</p><p>//table[colgroup] //table[thead] //table[tfoot] //table[tbody]  Test procedure: Check that each of the selected elements correctly structures the table.</p><p> Expected results: FAIL if false.</p><p> Fully automatable: no.</p><p>5.6.3 Checkpoint 5.3</p><p>Do not use tables for layout unless the table makes sense when linearized. Otherwise, if the table does not make sense, provide an alternative equivalent (which may be a linearized version). 
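"Makes sense when linearized" refers to reading the table's cells in source order, row by row, left to right. The sketch below is illustrative only and not part of UWEM: it produces such a linearisation with Python's standard xml.etree for well-formed XHTML input. The judgement of whether the linearised output conveys the same information as the rendered table remains a manual step.

```python
import xml.etree.ElementTree as ET

def linearise(table):
    """Read a table's cells in source order: one output line per tr,
    cell texts separated by single spaces."""
    lines = []
    for row in table.iter("tr"):
        cells = ["".join(cell.itertext()).strip()
                 for cell in row if cell.tag in ("td", "th")]
        lines.append(" ".join(cells))
    return "\n".join(lines)

layout = ET.fromstring(
    "<table><tr><td>Navigation</td><td>Main article text</td></tr>"
    "<tr><td>Related links</td><td>More article text</td></tr></table>")
print(linearise(layout))
```

For the two-row layout table above this prints "Navigation Main article text" followed by "Related links More article text"; an evaluator then decides whether that reading order still makes sense.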
[Priority 2]</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-avoid-table-for-layout) 5.6.3.1 (X)HTML tests</p><p>5.6.3.1.1 Test 5.3_HTML_01</p><p>This test is targeted to find layout tables that do not convey the same information when linearised.</p><p> Applicability criteria: layout tables.</p><p>//table[not(@summary) and not(child::caption)] //table  Test procedure: </p><p>Check that the table conveys the same information when linearised.31 </p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.6.4 Checkpoint 5.4</p><p>If a table is used for layout, do not use any structural markup for the purpose of visual formatting. [Priority 2]</p><p>(See http://www.w3.org/TR/WCAG10-TECHS/#tech-table-structure and the techniques in http://www.w3.org/TR/WCAG10-HTML-TECHS/#tech-table-layout) 5.6.4.1 (X)HTML tests</p><p>5.6.4.1.1 Test 5.4_HTML_01</p><p>This test is targeted to check that table headers are only used in data tables.</p><p> Applicability criteria: tables with header cells.</p><p>//table[descendant::th] //table[descendant::td[@scope]] //table[descendant::td[@axis]]  Test procedure: Check that the table is a data table.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.6.4.1.2 Test 5.4_HTML_02</p><p>This test is targeted to check that table headers and table footers are only used in data tables.</p><p> Applicability criteria: tables with headers and/or footers.</p><p>//table[thead] //table[tfoot]  Test procedure: Check that the table is a data table.</p><p> Expected results: PASS if true. 
FAIL if false.</p><p> Fully automatable: no.</p><p>31 The W3C has a Table Linearizer Web Service at http://www.w3.org/WAI/ER/WG/tabletest/tablin.</p><p>16 April 2007 Page 65 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p>5.6.4.1.3 Test 5.4_HTML_03</p><p>This test is targeted to check that id and headers attributes are only used in data tables.</p><p> Applicability criteria: tables with one or more data cells with a headers attribute and one or more header cells with the id attribute.</p><p>//table[descendant::th[@id]] //table[descendant::td[@id]] //table[descendant::td[@headers]]  Test procedure: Check that the table is a data table.</p><p> Expected result: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.6.4.1.4 Test 5.4_HTML_04</p><p>This test is targeted to check that captions are only used for data tables.</p><p> Applicability criteria: tables with a caption.</p><p>//table[descendant::caption]  Test procedure: Check that the table is a data table.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.6.4.1.5 Test 5.4_HTML_05</p><p>This test is targeted to check that cells are only categorised in data tables.</p><p> Applicability criteria: Tables in which cells are categorised by means of the axis attribute.</p><p>//table[descendant::th[@axis]] //table[descendant::td[@axis]]  Test procedure: Check that the table is a data table.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no. 5.7 Guideline 6</p><p>“Ensure that pages featuring new technologies transform gracefully.”</p><p>16 April 2007 Page 66 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p>(See http://www.w3.org/TR/1999/WAI-WEBCONTENT-19990505/#gl- new-technologies) This guideline provides information on ensuring that pages are accessible even when newer technologies are not supported or are turned off.</p><p>5.7.1 Checkpoint 6.1</p><p>Organize documents so they may be read without style sheets. 
For example, when an HTML document is rendered without associated style sheets, it must still be possible to read the document. [Priority 1]</p><p>(See http://www.w3.org/TR/WCAG10-TECHS/#tech-order-style-sheets and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-order-style-sheets) 5.7.1.1 (X)HTML tests</p><p>5.7.1.1.1 Test 6.1_HTML_01</p><p>This test analyses the effect on the document's readability of CSS applied in standalone style sheets, embedded style sheets and the style attributes of document elements.</p><p>Note that the CSS 2.0 specification defines a style sheet as "A set of statements that specify presentation of a document"32. This includes the statements in style attributes.</p><p> Applicability criteria: HTML 4.01 and XHTML 1.0 documents with one or more associated style sheets. This includes style sheets attached to documents by means of HTTP headers (see the section "Linking to style sheets with HTTP headers"33 in HTML 4.01).</p><p>//link[@rel='stylesheet'] //link[@rel='alternate stylesheet'] //style //*/@style</p><p> Test procedure: 1. Switch off, remove or deactivate all associated style sheets. 2. Check that content does not become invisible. 3. Check that content is not obscured by other content. 4. Check that the meaning is not changed by changes in reading order caused by step 1.</p><p> Expected results: PASS if #2-4 are true. FAIL if #2, #3 or #4 is false.</p><p> Fully automatable: no.</p><p>32 http://www.w3.org/TR/1998/REC-CSS2-19980512/conform.html#x10 33 http://www.w3.org/TR/1999/REC-html401-19991224/present/styles.html#h-14.6</p><p>5.7.2 Checkpoint 6.2</p><p>Ensure that equivalents for dynamic content are updated when the dynamic content changes. 
[Priority 1]</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-dynamic-source and the techniques in http://www.w3.org/TR/WCAG10-TECHS/#tech-dynamic-source) 5.7.2.1 (X)HTML tests</p><p>5.7.2.1.1 Test 6.2_HTML_01</p><p>This test analyses the text equivalent of any non-text content loaded into a frame.</p><p> Applicability criteria: any non-text content referenced by the src attribute of a frame element.</p><p> document(//frame/@src) document(//iframe/@src)</p><p> Test procedure: Check that there is a text equivalent that describes the current version of the non-text content.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.7.2.1.2 Test 6.2_HTML_02</p><p>This test analyses the text equivalent of any non-text content loaded into the frame by the browser as a result of link activation or script execution.</p><p> Applicability criteria: any non-text content that is loaded into a frame as a result of the activation of a link or the execution of a script.</p><p>//script //a/@href //*/@onfocus //*/@onblur //*/@onkeypress //*/@onkeydown //*/@onkeyup //*/@onsubmit //*/@onreset //*/@onselect //*/@onchange //*/@onload //*/@onclick //*/@ondblclick //*/@onmousedown //*/@onmouseup //*/@onmouseover //*/@onmousemove //*/@onmouseout</p><p> Test procedure: Check that there is a text equivalent that describes the current version of the non-text content.</p><p> Expected results: PASS if true. 
FAIL if false.</p><p> Fully automatable: no.</p><p>5.7.2.1.3 Test 6.2_HTML_03</p><p>This text is targeted to check that there are appropriate equivalents for non-text content that is added to the markup or the Document Object Model (DOM) by means of scripts.</p><p> Applicability criteria: any non-text content that is added to the markup or the DOM by means of scripts.</p><p>//script //a/@href //*/@onfocus //*/@onblur //*/@onkeypress //*/@onkeydown //*/@onkeyup //*/@onsubmit //*/@onreset //*/@onselect //*/@onchange //*/@onload //*/@onclick //*/@ondblclick //*/@onmousedown //*/@onmouseup //*/@onmouseover //*/@onmousemove //*/@onmouseout</p><p> Test procedure: Check that there is a text equivalent that describes the current version of the non-text content.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>16 April 2007 Page 69 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p>5.7.3 Checkpoint 6.3</p><p>Ensure that pages are usable when scripts, applets, or other programmatic objects are turned off or not supported. If this is not possible, provide equivalent information on an alternative accessible page. [Priority 1]</p><p>(See http://www.w3.org/TR/WCAG10-TECHS/#tech-scripts and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT- TECHS/#tech-scripts) 5.7.3.1 (X)HTML tests</p><p>5.7.3.1.1 Test 6.3_HTML_01</p><p>This test determines whether information and functionality provided by embedded content is also available without said content.</p><p> Applicability criteria: applets (in a wide sense, not limited to Java; this includes Flash).</p><p>//applet //object</p><p> Test procedure: 1. Disable support for applets. 2. Check that the page is usable, i.e. that all functionality is still available. 3. If the page is not usable , check that there is an alternative accessible page with equivalent information.</p><p> Expected results: PASS if #2 or #3 are true. 
FAIL if #2 and #3 are false.</p><p> Fully automatable: no.</p><p>5.7.3.1.2 Test 6.3_HTML_02</p><p>This test determines whether information and functionality provided by script is also available when script is not executed.</p><p> Applicability criteria: (functionality provided by) scripts.</p><p>//script //a[starts-with(@href, 'javascript:')] //*/@onfocus //*/@onblur //*/@onkeypress //*/@onkeydown //*/@onkeyup //*/@onsubmit //*/@onreset //*/@onselect //*/@onchange //*/@onload //*/@onunload //*/@onclick //*/@ondblclick //*/@onmousedown //*/@onmouseup //*/@onmouseover //*/@onmousemove //*/@onmouseout</p><p> Test procedure: 1. Turn off or disable (the browser's support for) scripts. 2. Check that the page is usable, i.e. that all functionality is still available. 3. If the page is not usable, check that there is an alternative accessible page with equivalent information.</p><p> Expected results: PASS if #2 or #3 is true. FAIL if #2 and #3 are false.</p><p> Fully automatable: no.</p><p>5.7.4 Checkpoint 6.4</p><p>For scripts and applets, ensure that event handlers are input device-independent. 
[Priority 2]</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#tech-keyboard-operable-scripts and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-keyboard-operable-scripts) 5.7.4.1 (X)HTML tests</p><p>5.7.4.1.1 Test 6.4_HTML_01</p><p>This test is targeted to check that mouse-specific event handlers have a keyboard-specific (or device-independent) version.</p><p> Applicability criteria: elements with attributes for mouse-specific events, including event attributes added by scripts.</p><p>//*[@onclick] //*[@onmousedown] //*[@onmouseup] //*[@onmouseout] //*[@onmouseover]  Test procedure: Check that each of these elements has a keyboard-specific event handler attribute that triggers exactly the same function or functions as the mouse-specific event handler attribute.</p><p>The following mapping pairs each mouse-specific event handler with its keyboard-specific or device-independent counterpart: onmousedown with onkeydown; onmouseup with onkeyup; onclick34 with onkeypress; onmouseover with onfocus; onmouseout with onblur.</p><p> Expected results: PASS if true.</p><p> Fully automatable: yes.</p><p>5.7.4.1.2 Test 6.4_HTML_02</p><p>This test is targeted to check for mouse-specific event handlers for which no device-independent or keyboard-specific equivalent handlers are defined in the HTML 4 specification.</p><p> Applicability criteria:</p><p>//*[@ondblclick] //*[@onmousemove]  Test procedure: 1. Select any elements with a mouse-specific event handler attribute that does not have a keyboard-specific event handler attribute that executes exactly the same function. 2. Check that the functions performed by the event handlers can also be implemented in a mouse-independent way.</p><p> Expected results: PASS if #2 is true. 
FAIL if #2 is false.</p><p> Fully automatable: no.</p><p>5.7.4.2 Tests for external objects</p><p>5.7.4.2.1 Test 6.4_external_01</p><p>This test is targeted to check that event handlers in applets are device-independent.</p><p> Applicability criteria: applets (in a wide sense, not limited to Java; this includes Flash).</p><p>-  Test procedure: 1. Select any applets loaded in a page. 2. Check that each function can be triggered through a keyboard interface.</p><p>34 Most browsers will trigger the mouse onclick event handler when the Enter key is pressed; therefore it is not required to provide onclick when onkeypress is specified.</p><p> Expected results: PASS if #2 is true. FAIL if #2 is false.</p><p> Fully automatable: no.</p><p>5.7.5 Checkpoint 6.5</p><p>Ensure that dynamic content is accessible or provide an alternative presentation or page. [Priority 2]</p><p>(See http://www.w3.org/TR/WCAG10-TECHS/#tech-fallback-page and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-fallback-page) This checkpoint handles the accessibility of content that is dynamic. There are two ways of creating dynamic content: server-side dynamic content and client-side dynamic content. In terms of (semi-)automatic testing, we are only interested in the client-side dynamic content, because we cannot really identify the presence of server-side dynamic content.</p><p>5.7.5.1 (X)HTML tests</p><p>5.7.5.1.1 Test 6.5_HTML_01</p><p>This test is targeted to find framesets with inaccessible dynamic content and without a noframes section.</p><p> Applicability criteria: framesets without a noframes section.</p><p>//frameset[not(descendant::noframes)]  Test procedure: Check that the dynamic content is accessible.</p><p> Expected results: PASS if true. 
FAIL if false.</p><p> Fully automatable: no.</p><p>5.7.5.1.2 Test 6.5_HTML_02</p><p>This test is targeted to find framesets with inaccessible dynamic content and without a noframes section.</p><p> Applicability criteria: framesets with dynamic content.</p><p>//frameset  Test procedure: 1. Check that the dynamic content is accessible. 2. If #1 is false, check that the frameset contains a noframes element with an alternative presentation or a link to an alternative presentation.</p><p> Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false.</p><p>16 April 2007 Page 73 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p> Fully automatable: no.</p><p>5.7.5.1.3 Test 6.5_HTML_03</p><p>This test is targeted to find links that use javascript.</p><p> Applicability criteria: links with the 'javascript:' pseudo-protocol.</p><p>//a[starts-with(@href, 'javascript:')]  Test procedure: 1. Check that the URI does not use the 'javascript:' pseudo-protocol. 2. If step 1 is false, check that there is an alternative presentation or page with the same content. </p><p> Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false.</p><p> Fully automatable: no. 5.8 Guideline 7</p><p>“Ensure user control of time-sensitive content changes.” (See http://www.w3.org/TR/WAI-WEBCONTENT/#gl-movement) This guideline provides information on moving, blinking, scrolling, or auto- updating objects or pages, which make it difficult, sometimes even impossible, to read or access content.</p><p>5.8.1 Checkpoint 7.1</p><p>Until user agents allow users to control flickering, avoid causing the screen to flicker. [Priority 1]</p><p>Note. 
People with photosensitive epilepsy can have seizures triggered by flickering or flashing in the 4 to 59 flashes per second (Hertz) range, with a peak sensitivity at 20 flashes per second, as well as quick changes from dark to light (like strobe lights).</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#tech-avoid-flicker and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-avoid-flicker) Note: for checkpoints with an "until user agents" clause, WCAG 1.0 refers to the document "User Agent Support for Accessibility"35 for information about user agent support for accessibility features. The current version of this document (last updated on 11 August 2005) states: "Netscape Navigator (versions, platform), Microsoft Internet Explorer (versions, platform), and Opera (versions, platform) allow the user to turn off loading of images, scripts, and applets. Turning these off will allow the user to avoid flicker caused by images, scripts, and applets. For other plug-ins the user can choose not to load the plug-in. However, it would be ideal if users could stop, pause, or step through animations, scripts, or other dynamic content that may cause flicker as discussed in the UAGL checkpoint 3.7 and UAGL checkpoint 3.10."</p><p>35 http://www.w3.org/WAI/Resources/WAI-UA-Support</p><p>5.8.1.1 (X)HTML tests</p><p>5.8.1.1.1 Test 7.1_HTML_01</p><p>This test is targeted to find marquee text that causes blinking. Marquee does not normally cause blinking, but certain combinations of scroll amount, scroll delay, font size and colour might cause parts of the screen to blink.</p><p> Applicability criteria: marquee elements.</p><p>//marquee  Test procedure: 1. Check that the marquee does not cause flickering or flashing in the 4 to 59 flashes per second (Hertz) range. 2. 
If false, check that the flicker is an unavoidable aspect of the presentation and that it can be controlled by the user, i.e. it can be turned on and off.</p><p> Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false.</p><p> Fully automatable: no.</p><p>5.8.1.1.2 Test 7.1_HTML_02</p><p>This test is targeted to find animated GIF files that cause flicker. (Other image file types for inclusion in HTML pages – JPEG and PNG – do not support animation.)</p><p> Applicability criteria: animated GIF files.</p><p>//img //object  Test procedure: 1. Check that the playback speed of frames and the colour contrast between subsequent frames do not cause flickering or flashing in the 4 to 59 flashes per second (Hertz) range. 2. If false, check that the flicker is an unavoidable aspect of the presentation and that it can be controlled by the user, i.e. it can be turned on and off.</p><p> Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false.</p><p> Fully automatable: no.</p><p>36 The selector //object[@type='image/gif'] will miss certain GIF files if the object element's type attribute is not defined or incorrect.</p><p>5.8.1.1.3 Test 7.1_HTML_03</p><p>This test is targeted to find client-side scripts that cause flicker or flashing.</p><p> Applicability criteria: client-side scripts.</p><p>//script //*/@onfocus //*/@onblur //*/@onkeypress //*/@onkeydown //*/@onkeyup //*/@onsubmit //*/@onreset //*/@onselect //*/@onchange //*/@onload //*/@onclick //*/@ondblclick //*/@onmousedown //*/@onmouseup //*/@onmouseover //*/@onmousemove //*/@onmouseout  Test procedure: 1. Check that the script does not cause flicker or flashing at a rate between 4 and 59 Hertz. 2. If false, check that the flicker is an unavoidable aspect of the presentation and that it can be controlled by the user, i.e. it can be turned on and off.</p><p> Expected results: PASS if #1 or #2 is true. 
FAIL if #1 and #2 are false.</p><p> Fully automatable: no.</p><p>5.8.1.2 CSS tests</p><p>5.8.1.2.1 Test 7.1_CSS_01</p><p>This test is targeted to find CSS-generated content that causes flicker or flashing.</p><p> Applicability criteria: objects (images, videos, animations) embedded into a web page by means of CSS.</p><p>*:after {content: url(...);} *:before {content: url(...);}  Test procedure: 1. Check that the object does not cause flicker at a rate between 4 and 59 Hertz. 2. If false, check that the flicker is an unavoidable aspect of the presentation and that it can be controlled by the user, i.e. it can be turned on and off.</p><p> Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false.</p><p> Fully automatable: no.</p><p>5.8.1.3 Tests for external objects</p><p>5.8.1.3.1 Test 7.1_external_01</p><p>This test is targeted to find Java applets that cause flicker or flashing.</p><p> Applicability criteria: Java applets.</p><p>//object[@codetype='application/java'] //object[@codetype='application/java-archive'] //object[starts-with(@codetype, 'application/x-java-applet')] //applet any content sent by HTTP with MIME types 'application/java', 'application/java-archive', 'application/x-java-applet'  Test procedure: 1. Check that the applet does not cause flicker or flashing at a rate between 4 and 59 Hertz. 2. If false, check that the flicker is an unavoidable aspect of the presentation and that it can be controlled by the user, i.e. it can be turned on and off.</p><p> Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false.</p><p> Fully automatable: no.</p><p>5.8.1.3.2 Test 7.1_external_02</p><p>This test is targeted to find any video content that causes flicker or flashing.</p><p> Applicability criteria: video content.</p><p>//object[starts-with(@type, 'video/')]  Test procedure: 1. 
Check that the video content does not cause flicker or flashing at a rate between 4 and 59 Hertz. 2. If false, check that the flicker is an unavoidable aspect of the presentation and that it can be controlled by the user, i.e. it can be turned on and off.</p><p> Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false.</p><p> Fully automatable: no.</p><p>5.8.2 Checkpoint 7.2</p><p>Until user agents allow users to control blinking, avoid causing content to blink (i.e., change presentation at a regular rate, such as turning on and off). [Priority 2]</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#tech-avoid-blinking and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-avoid-blinking) 5.8.2.1 (X)HTML tests</p><p>5.8.2.1.1 Test 7.2_HTML_01</p><p>This test is targeted to find any blink elements.</p><p> Applicability criteria: blink elements.</p><p>//blink  Test procedure: Check whether any such elements are found. </p><p> Expected results: FAIL if true.</p><p> Fully automatable: yes.</p><p>5.8.2.1.2 Test 7.2_HTML_02</p><p>This test is targeted to find animated GIF files that cause blinking. (Other image file types for inclusion in HTML pages – JPEG and PNG – do not support animation.)</p><p> Applicability criteria: animated GIF files.</p><p>//img //object  Test procedure: Check that the image does not cause blinking.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: yes, if contrast threshold is known.</p><p>5.8.2.1.3 Test 7.2_HTML_03</p><p>This test is targeted to find scripts that cause blinking.</p><p> Applicability criteria: scripts.</p><p>//script //*/@onfocus //*/@onblur //*/@onkeypress</p><p>37 The selector //object[@type='image/gif'] will miss certain GIF files if the object element's type attribute is not defined or incorrect. 
38 Blink: turn on and off between 0.5 and 3 times per second.</p><p>//*/@onkeydown //*/@onkeyup //*/@onsubmit //*/@onreset //*/@onselect //*/@onchange //*/@onload //*/@onunload //*/@onclick //*/@ondblclick //*/@onmousedown //*/@onmouseup //*/@onmouseover //*/@onmousemove //*/@onmouseout  Test procedure: Check that the script does not cause blinking.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.8.2.2 CSS tests</p><p>5.8.2.2.1 Test 7.2_CSS_01</p><p>This test is targeted to find CSS-generated content that causes blinking.</p><p> Applicability criteria: images, video and animations generated by CSS styles.</p><p>*:after {content: url(...);} *:before {content: url(...);}  Test procedure: Check that the content does not cause blinking.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.8.2.2.2 Test 7.2_CSS_02</p><p>This test is targeted to find CSS rules that cause content to blink.</p><p> Applicability criteria: CSS rules with text-decoration: blink</p><p>* { text-decoration: blink;}  Test procedure: Check that there are no CSS rules with text-decoration: blink. </p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: yes.</p><p>5.8.2.3 Tests for external objects</p><p>5.8.2.3.1 Test 7.2_external_01</p><p>This test is targeted to find Java applets that cause blinking.</p><p> Applicability criteria: applets (Java, Flash or other).</p><p>//applet //object  Test procedure: Check that the applet does not cause blinking. 
</p><p> Expected results: PASS if true.</p><p> Fully automatable: no.</p><p>5.8.2.3.2 Test 7.2_external_02</p><p>This test is targeted to find any video content that causes blinking.</p><p> Applicability criteria: video.</p><p>//object40  Test procedure: Check that the video content does not cause blinking. </p><p> Expected results: PASS if true.</p><p> Fully automatable: no.</p><p>5.8.3 Checkpoint 7.3</p><p>Until user agents allow users to freeze moving content, avoid movement in pages. [Priority 2]</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#gl-movement and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT- TECHS/#tech-avoid-movement)</p><p>39 Object elements for Java applets can have a codetype attribute with the values 'application/java', 'application/java-archive' or 'application/x-java-applet'. The test is also applicable to any content sent by HTTP with the MIME types 'application/java', 'application/java- archive' or 'application/x-java-applet' . 40 Object element for video content can have a type attribute that starts with the value 'video/'. The test is also applicable to any content sent by HTTP with a MIME type that starts with 'video/'.</p><p>16 April 2007 Page 80 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p>5.8.3.1 (X)HTML tests</p><p>5.8.3.1.1 Test 7.3_HTML_01</p><p>This test is targeted to find marquee elements.</p><p> Applicability criteria: marquee elements.</p><p>//marquee  Test procedure: Check that no such elements are found.</p><p> Expected results: PASS if true. 
FAIL if false.</p><p> Fully automatable: yes.</p><p>5.8.3.1.2 Test 7.3_HTML_02</p><p>This test is targeted to check for scripts that cause movement.</p><p> Applicability criteria: Scripts.</p><p>//script //*/@onfocus //*/@onblur //*/@onkeypress //*/@onkeydown //*/@onkeyup //*/@onsubmit //*/@onreset //*/@onselect //*/@onchange //*/@onload //*/@onunload //*/@onclick //*/@ondblclick //*/@onmousedown //*/@onmouseup //*/@onmouseover //*/@onmousemove //*/@onmouseout  Test procedure: 1. Check that the script does not cause movement. 2. If false, check that the movement can be frozen.</p><p> Expected results: PASS if #1 is true. PASS if #1 is false and #2 is true. FAIL if #1 and #2 are false.</p><p> Fully automatable: no.</p><p>16 April 2007 Page 81 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p>5.8.3.2 CSS tests</p><p>5.8.3.2.1 Test 7.3_CSS_01</p><p>This test is targeted to find CSS-generated content that causes movement.</p><p> Applicability criteria: CSS-generated content.</p><p>*:after {content: url(...);} *:before {content: url(...);}  Test procedure: Check that the content does not cause movement.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.8.3.3 Tests for external objects</p><p>5.8.3.3.1 Test 7.3_external_01</p><p>This test is targeted to check for external objects that cause or contain movement.</p><p> Applicability criteria: applets (Java, Flash or other).</p><p>//applet //object41  Test procedure: 1. Check that the content of the applet does not contain or cause movement. 2. If false, check that the movement can be frozen.</p><p> Expected results: PASS if #1 is true. PASS if #1 is false and #2 is true. 
FAIL if #1 and #2 are false.</p><p> Fully automatable: no.</p><p>5.8.3.3.2 Test 7.3_external_02</p><p>This test is targeted to check for video content that causes or contains movement.</p><p> Applicability criteria: video.</p><p>Note 41: Object elements for Java applets can have a codetype attribute with the values 'application/java', 'application/java-archive' or 'application/x-java-applet'. The test is also applicable to any content sent by HTTP with the MIME types 'application/java', 'application/java-archive' or 'application/x-java-applet'.</p><p>//object Test procedure: 1. Check that the video content does not contain or cause movement. 2. If false, check that the movement can be frozen.</p><p> Expected results: PASS if #1 is true. PASS if #1 is false and #2 is true. FAIL if #1 and #2 are false.</p><p> Fully automatable: no.</p><p>5.8.4 Checkpoint 7.4</p><p>Until user agents provide the ability to stop the refresh, do not create periodically auto-refreshing pages. [Priority 2]</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#gl-movement and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-no-periodic-refresh)</p><p>5.8.4.1 (X)HTML tests</p><p>5.8.4.1.1 Test 7.4_HTML_01</p><p>This test is targeted to find elements that can cause page refreshing.</p><p> Applicability criteria: meta elements with http-equiv="refresh".</p><p>//meta[@http-equiv='refresh'] Test procedure: 1. Check that the meta element contains a URI in its content attribute. 2. If true, check that the URI in the meta content attribute is not equal to the URI of the HTML resource containing the meta element.</p><p> Expected results: PASS if #1 and #2 are true. FAIL if #1 is false.
FAIL if #1 is true and #2 is false.</p><p> Fully automatable: yes.</p><p>5.8.4.1.2 Test 7.4_HTML_02</p><p>This test is targeted to find scripts that can cause page refreshing.</p><p> Applicability criteria: scripts.</p><p>Note 42: Object elements for video content can have a type attribute that starts with the value 'video/'. The test is also applicable to any content sent by HTTP with a MIME type that starts with 'video/'.</p><p>//script //*/@onfocus //*/@onblur //*/@onkeypress //*/@onkeydown //*/@onkeyup //*/@onsubmit //*/@onreset //*/@onselect //*/@onchange //*/@onload //*/@onunload //*/@onclick //*/@ondblclick //*/@onmousedown //*/@onmouseup //*/@onmouseover //*/@onmousemove //*/@onmouseout Test procedure: 1. Check that the script does not cause page refreshing. 2. If false, check that the refreshing can be stopped.</p><p> Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false.</p><p> Fully automatable: no.</p><p>5.8.4.2 Tests for external objects</p><p>5.8.4.2.1 Test 7.4_external_01</p><p>This test is targeted to check for external objects that can cause page refreshing.</p><p> Applicability criteria: external objects.</p><p>//applet //object Test procedure: 1. Check that the object does not cause page refreshing. 2. If false, check that the refreshing can be stopped.</p><p> Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false.</p><p> Fully automatable: no.</p><p>5.8.5 Checkpoint 7.5</p><p>Until user agents provide the ability to stop auto-redirect, do not use markup to redirect pages automatically. Instead, configure the server to perform redirects.
[Priority 2]</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#gl-movement and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-no-auto-forward)</p><p>5.8.5.1 (X)HTML tests</p><p>5.8.5.1.1 Test 7.5_HTML_01</p><p>This test is targeted to find elements that can cause page redirecting.</p><p> Applicability criteria: meta elements with http-equiv="refresh".</p><p>//meta[@http-equiv='refresh'] Test procedure: Check that the URI in the meta element's content attribute is equal to the URI of the document containing the meta element.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: yes.</p><p>5.8.5.1.2 Test 7.5_HTML_02</p><p>This test is targeted to find scripts that cause redirecting without providing a mechanism to stop the redirect.</p><p> Applicability criteria: scripts.</p><p>//script //*/@onfocus //*/@onblur //*/@onkeypress //*/@onkeydown //*/@onkeyup //*/@onsubmit //*/@onreset //*/@onselect //*/@onchange //*/@onload //*/@onunload //*/@onclick //*/@ondblclick //*/@onmousedown //*/@onmouseup //*/@onmouseover //*/@onmousemove //*/@onmouseout Test procedure: 1. Check that the script does not cause redirecting. 2. If false, check that the redirecting can be stopped.</p><p> Expected results: PASS if #1 is true. PASS if #1 is false and #2 is true. FAIL if #1 and #2 are false.</p><p> Fully automatable: no.</p><p>5.8.5.2 Tests for external objects</p><p>5.8.5.2.1 Test 7.5_external_01</p><p>This test is targeted to check for external objects that can cause redirecting without providing a mechanism to stop this.</p><p> Applicability criteria: external objects.</p><p>//applet //object Test procedure: 1. Check that the object does not cause redirecting. 2. If false, check that the redirecting can be stopped.</p><p> Expected results: PASS if #1 is true.
PASS if #1 is false and #2 is true. FAIL if #1 and #2 are false.</p><p> Fully automatable: no.</p><p>5.9 Guideline 8</p><p>“Ensure direct accessibility of embedded user interfaces.” (See http://www.w3.org/TR/WCAG10/#gl-own-interface) This guideline provides information on how to create accessible embedded user interfaces.</p><p>5.9.1 Checkpoint 8.1</p><p>Make programmatic elements such as scripts and applets directly accessible or compatible with assistive technologies. [Priority 1 if functionality is important and not presented elsewhere, otherwise Priority 2.]</p><p>(See http://www.w3.org/TR/WCAG10/#tech-directly-accessible and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-directly-accessible)</p><p>5.9.1.1 (X)HTML tests</p><p>5.9.1.1.1 Test 8.1_HTML_01</p><p>This test is targeted to find scripts that are not directly accessible or compatible with assistive technologies.</p><p> Applicability criteria: scripts.</p><p>//script //*/@onfocus //*/@onblur //*/@onkeypress //*/@onkeydown //*/@onkeyup //*/@onsubmit //*/@onreset //*/@onselect //*/@onchange //*/@onload //*/@onunload //*/@onclick //*/@ondblclick //*/@onmousedown //*/@onmouseup //*/@onmouseover //*/@onmousemove //*/@onmouseout Test procedure: 1. If the script is a user agent, check that it conforms to the User Agent Accessibility Guidelines 1.0. 2. If the script is an authoring tool, check that it conforms to the Authoring Tool Accessibility Guidelines 1.0. 3. In other cases, check that the object is directly accessible or compatible with assistive technologies, either through an Accessibility API if available or by ensuring that all functionality is available when only a keyboard interface is used.</p><p> Expected results: PASS if #1 or #2 or #3 is true.
FAIL if #1, #2 and #3 are false.</p><p> Fully automatable: no.</p><p>5.9.1.2 Tests for external objects</p><p>5.9.1.2.1 Test 8.1_external_01</p><p>This test is targeted to find external objects that are not directly accessible or compatible with assistive technologies.</p><p> Applicability criteria: all external objects.</p><p>//applet //object Test procedure: 1. If the external object is a user agent, check that it conforms to the User Agent Accessibility Guidelines 1.0 (http://www.w3.org/TR/2002/REC-UAAG10-20021217/). 2. If the external object is an authoring tool, check that it conforms to the Authoring Tool Accessibility Guidelines 1.0 (http://www.w3.org/TR/2000/REC-ATAG10-20000203/). 3. In other cases, check that the object is directly accessible or compatible with assistive technologies, either through an Accessibility API if available or by ensuring that all functionality is available when only a keyboard interface is used.</p><p> Expected results: PASS if #1 or #2 or #3 is true. FAIL if #1, #2 and #3 are false.</p><p> Fully automatable: no.</p><p>5.10 Guideline 9</p><p>“Design for device-independence.” (See http://www.w3.org/TR/WCAG10/#gl-device-independence) This guideline provides information on how to create web content that does not rely on one specific input or output device.</p><p>5.10.1 Checkpoint 9.1</p><p>Provide client-side image maps instead of server-side image maps except where the regions cannot be defined with an available geometric shape.
[Priority 1]</p><p>(See http://www.w3.org/TR/WCAG10/#tech-client-side-maps and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-client-side-maps)</p><p>5.10.1.1 (X)HTML tests</p><p>5.10.1.1.1 Test 9.1_HTML_01</p><p>This test is targeted to find server-side image maps.</p><p> Applicability criteria: server-side image maps.</p><p>//a//img[@ismap] //input[@type='image'] Test procedure: Check that the server-side image map cannot be replaced with a client-side image map.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: yes (see note on automatability below).</p><p>5.10.2 Checkpoint 9.2</p><p>Ensure that any element that has its own interface can be operated in a device-independent manner. [Priority 2]</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#tech-keyboard-operable and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-keyboard-operable)</p><p>5.10.2.1 Tests for external objects</p><p>5.10.2.1.1 Test 9.2_external_01</p><p>This test is targeted to check for the device-independence of the interface of embedded or included objects.</p><p> Applicability criteria: all embedded or included objects.</p><p>//applet //object Test procedure: 1. Check that the external object does not have its own interface. 2. If false, check that any operation supported by the interface can be operated in a device-independent manner by using only a keyboard or keyboard alternative interface.</p><p> Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false.</p><p> Fully automatable: no.</p><p>5.10.3 Checkpoint 9.3</p><p>For scripts, specify logical event handlers rather than device-dependent event handlers.
[Priority 2]</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#tech-device-independent-events and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-device-independent-events)</p><p>5.10.3.1 (X)HTML tests</p><p>5.10.3.1.1 Test 9.3_HTML_01</p><p>This test is targeted to analyse device-dependent event handlers.</p><p>Note on Test 9.1_HTML_01: that test is automatable as it will always produce a FAIL, because every geometric region can be defined by a polygon and every pixel can be defined as a circle with radius 0.</p><p> Applicability criteria: elements with device-dependent event handler attributes.</p><p>//*/@onclick //*/@ondblclick //*/@onkeydown //*/@onkeypress //*/@onkeyup //*/@onmousedown //*/@onmousemove //*/@onmouseout //*/@onmouseover //*/@onmouseup Test procedure: 1. Check that invoking the device-dependent event handler causes only purely decorative effects, such as the disappearance of an underline, a change in text colour or an image switch. 2. If #1 is false, check that replacing the device-dependent event-handler attribute with a device-independent one (such as onblur, onchange, onfocus, onload, onreset, onselect, onsubmit and onunload) causes loss of content or functionality.</p><p> Expected results: PASS if #1 is true. PASS if #1 is false and #2 is true. FAIL if #1 and #2 are false.</p><p> Fully automatable: no.</p><p>5.11 Guideline 10</p><p>“Use interim solutions.” (See http://www.w3.org/TR/WCAG10/#gl-interim-accessibility)</p><p>5.11.1 Checkpoint 10.1</p><p>Until user agents allow users to turn off spawned windows, do not cause pop-ups or other windows to appear and do not change the current window without informing the user.
[Priority 2]</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#tech-avoid-pop-ups and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-avoid-pop-ups)</p><p>5.11.1.1 (X)HTML tests</p><p>5.11.1.1.1 Test 10.1_HTML_01</p><p>This test is targeted to find any target attributes that cause content to be opened in a new window without informing the user.</p><p> Applicability criteria: a, area, form or link elements with a target attribute with a value that is not one of “_self”, “_parent” or “_top”.</p><p>//a[@target!='_self' and @target!='_parent' and @target!='_top'] //area[@target!='_self' and @target!='_parent' and @target!='_top'] //form[@target!='_self' and @target!='_parent' and @target!='_top'] //link[@target!='_self' and @target!='_parent' and @target!='_top'] Test procedure: Check that the content warns the user that activating the element spawns a new window.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.11.1.1.2 Test 10.1_HTML_02</p><p>This test is targeted to find base elements with a target attribute that cause content to be opened in a new window without informing the user.</p><p> Applicability criteria: base elements with a target attribute with a value that is not one of “_self”, “_parent” or “_top”.</p><p>//base[@target!='_self' and @target!='_parent' and @target!='_top'] Test procedure: Check that the content warns the user that activating any a, area, form or link element spawns a new window.</p><p> Expected results: PASS if true.
FAIL if false.</p><p> Fully automatable: no.</p><p>5.11.1.1.3 Test 10.1_HTML_03</p><p>This test is targeted to find scripts that cause content to be opened in a new window without informing the user.</p><p> Applicability criteria: scripts that cause content to be opened in a new window.</p><p>//script //*/@onblur //*/@onchange //*/@onfocus //*/@onload //*/@onreset //*/@onselect //*/@onsubmit //*/@onunload //a[starts-with(@href, 'javascript:')]  Test procedure:</p><p>16 April 2007 Page 91 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p>Check that the content warns the user that the script spawns a new window.</p><p> Expected results: PASS if true. FAIL if false</p><p> Fully automatable: no.</p><p>5.11.2 Checkpoint 10.2</p><p>Until user agents support explicit associations between labels and form controls, for all form controls with implicitly associated labels, ensure that the label is properly positioned. [Priority 2] </p><p>(See http://www.w3.org/TR/WCAG10/#tech-unassociated-labels and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT- TECHS/#tech-unassociated-labels) 5.11.2.1 (X)HTML tests</p><p>5.11.2.1.1 Test 10.2_HTML_01</p><p>This test is targeted to find form controls with improperly positioned, implicitly associated labels.</p><p> Applicability criteria: visible form controls that can have labels.</p><p>//input[not(@type='hidden')][not(@type='submit')] [not(@type='button')][not(@type='reset')] //select //textarea  Test procedure: Check that any implicitly associated label for the control is properly positioned.</p><p> Expected results: PASS if true. 
FAIL if false.</p><p> Fully automatable: no.</p><p>5.12 Guideline 11</p><p>Use W3C technologies and guidelines.</p><p>(See http://www.w3.org/TR/1999/WAI-WEBCONTENT-19990505/#gl- use-w3c) This guideline recommends using W3C technologies and describes what to do if other technologies are used.</p><p>16 April 2007 Page 92 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p>5.12.1 Checkpoint 11.1</p><p>Use W3C technologies when they are available and appropriate for a task and use the latest versions when supported. [Priority 2]</p><p>(See http://www.w3.org/TR/WCAG10-TECHS/#tech-latest-w3c-specs and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT- TECHS/#tech-latest-w3c-specs) 5.12.1.1 (X)HTML tests</p><p>5.12.1.1.1 Test 11.1_HTML_01</p><p>This test is targeted to find out whether the latest versions of W3C technologies for HTML and XHML have been used.</p><p> Applicability criteria: HTTP content header and document type declaration.</p><p>-  Test procedure: 1. Check that the HTTP content type is text/html or application/xhtml+xml. 2. If HTTP content type is text/html, check that the document type is HTML 4.01 or XHTML 1.0. 3. If HTTP content type is application/xhtml+xml, check that the document type is XHTML 1.1. 4. If document type is HTML 4.01, check that the document conforms to the DTD with a validating SGML parser. 5. If document type is XHTML 1.0 or XHTML 1.1, check that the document conforms to the DTD with a validating XML parser.</p><p> Expected results: FAIL if any is false.</p><p> Fully automatable: yes.</p><p>5.12.2 Checkpoint 11.2</p><p>Avoid deprecated features of W3C technologies. [Priority 2] </p><p>(See http://www.w3.org/TR/WCAG10-TECHS/#tech-avoid-deprecated and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT- TECHS/#tech-avoid-deprecated) Note that “deprecated” can mean different things in different specifications. 
In HTML 4.01 it means the following:</p><p>“A deprecated element or attribute is one that has been outdated by newer constructs. Deprecated elements are defined in the reference manual in appropriate locations, but are clearly marked as deprecated. Deprecated elements may become obsolete in future versions of HTML. User agents should continue to support deprecated elements for reasons of backward compatibility.”</p><p>It is possible that a deprecated feature is deprecated in favour of another feature that is not well supported (e.g. object is intended to replace applet).</p><p>5.12.2.1 (X)HTML tests</p><p>5.12.2.1.1 Test 11.2_HTML_01</p><p>This test is targeted to find deprecated HTML elements.</p><p> Applicability criteria: deprecated elements in the HTML 4.01 specification.</p><p>//applet //basefont //center //dir //font //isindex //menu //s //strike //u Test procedure: Check whether any of these elements are present.</p><p> Expected results: PASS if false.
FAIL if true.</p><p> Fully automatable: yes.</p><p>5.12.2.1.2 Test 11.2_HTML_02</p><p>This test is targeted to find deprecated HTML attributes.</p><p> Applicability criteria: deprecated attributes in the HTML 4.01 specification.</p><p>//body/@bgcolor //body/@link //body/@alink //body/@vlink //body/@background //body/@text //caption/@align //iframe/@align //img/@align //img/@border //img/@hspace //img/@vspace //object/@align //object/@border //object/@hspace //object/@vspace //input/@align</p><p>16 April 2007 Page 94 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p>//legend/@align //li/@value //table/@align //table/@bgcolor //tr/@bgcolor //th/@bgcolor //th/@height //th/@nowrap //th/@width //td/@bgcolor //td/@height //td/@nowrap //td/@width //hr/@align //hr/@noshade //hr/@size //hr/@width //p/@align //div/@align //h1/@align //h2/@align //h3/@align //h4/@align //h5/@align //h6/@align //br/@clear //dl/@compact //ol/@compact //ol/@start //ol/@value //ul/@compact //script/@language XHTML 1.048 formally deprecates the use of: //a/@name //applet/@name //form/@name //frame/@name //iframe/@name //img/@name  Test procedure: Check that any of these attributes are present.</p><p> Expected results: PASS if false. FAIL if true.</p><p> Fully automatable: yes.</p><p>5.12.3 Checkpoint 11.4</p><p>If, after best efforts, you cannot create an accessible page, provide a link to an alternative page that uses W3C technologies, is accessible, has equivalent information (or functionality), and is updated as often </p><p>48 Section 4.10 in the XHTML 1.0 specification: "The elements with 'id' and 'name' attributes": http://www.w3.org/TR/xhtml1/#h-4.10.</p><p>16 April 2007 Page 95 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p> as the inaccessible (original) page. 
[Priority 1]</p><p>(See http://www.w3.org/TR/WCAG10-TECHS/#tech-alt-pages and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-alt-pages)</p><p>5.12.3.1 (X)HTML tests</p><p>5.12.3.1.1 Test 11.4_HTML_01</p><p>This test looks for the alternative content and checks whether it is equivalent.</p><p> Applicability criteria: linked alternative page.</p><p>document(//a/@href) document(//link/@href)</p><p> Test procedure: 1. Compare functionality and information content of the two delivery units. 2. Check that they are equivalent.</p><p> Expected results: PASS if #2 is true. FAIL if #2 is false.</p><p> Fully automatable: no.</p><p>5.12.3.1.2 Test 11.4_HTML_02</p><p>This test looks for the alternative content and checks whether it is accessible.</p><p> Applicability criteria: linked alternative page.</p><p>document(//a/@href) document(//link/@href)</p><p> Test procedure: 1. Identify the alternative content. 2. Check that it conforms to all tests for Priority 1 checkpoints in this section of UWEM other than this one, 11.4.</p><p> Expected results: PASS if #2 is true. FAIL if #2 is false.</p><p> Fully automatable: no.</p><p>5.12.3.1.3 Test 11.4_HTML_03</p><p>This test looks for the alternative content and checks whether the original content could have been made accessible.</p><p> Applicability criteria: linked alternative page.</p><p>document(//a/@href) document(//link/@href)</p><p> Test procedure: 1. Identify the alternative content. 2. Check that the original content could not have been made accessible without undue effort.</p><p> Expected results: PASS if #2 is true.
FAIL if #2 is false.</p><p> Fully automatable: no.</p><p>5.13 Guideline 12</p><p>“Provide context and orientation information.” (See http://www.w3.org/TR/WAI-WEBCONTENT/#gl-complex-elements) This guideline provides information on how to provide contextual and orientation information to help users understand complex pages or elements.</p><p>5.13.1 Checkpoint 12.1</p><p>Title each frame to facilitate frame identification and navigation. [Priority 1]</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#tech-frame-titles)</p><p>5.13.1.1 (X)HTML tests</p><p>5.13.1.1.1 Test 12.1_HTML_01</p><p>This test is targeted to find frames without description.</p><p> Applicability criteria: frame elements without a title attribute.</p><p>//frame[not(@title)] //iframe[not(@title)] Test procedure: Check whether any such frames are found.</p><p> Expected results: PASS if false. FAIL if true.</p><p> Fully automatable: yes.</p><p>5.13.1.1.2 Test 12.1_HTML_02</p><p>This test is targeted to check whether the title attribute identifies the frame.</p><p> Applicability criteria: frame elements with a title attribute.</p><p>//frame[@title] //iframe[@title]</p><p> Test procedure: 1. Select elements. 2. Check that the title identifies the frame.</p><p> Expected results: PASS if #2 is true. FAIL if #2 is false.</p><p> Fully automatable: no.</p><p>5.13.2 Checkpoint 12.2</p><p>Describe the purpose of frames and how frames relate to each other if it is not obvious by frame titles alone.
[Priority 2]</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#tech-frame-longdesc)</p><p>5.13.2.1 (X)HTML tests</p><p>5.13.2.1.1 Test 12.2_HTML_01</p><p>This test is targeted to check whether the long description represents the context of the frame, if it is not clear by the frame title alone.</p><p> Applicability criteria: documents referenced by a frame element's longdesc attribute or by an a element's href attribute in a noframes element, the purpose of which is to describe the context of a frame.</p><p>document(//frame/@longdesc) document(//noframes//a/@href) Test procedure: 1. Select the long description document referenced by the element. 2. Check that the frame and its context are described by the text in the document, if not obvious by the frame title alone.</p><p> Expected results: PASS if #2 is true. FAIL if #2 is false.</p><p> Fully automatable: no.</p><p>5.13.3 Checkpoint 12.3</p><p>Divide large blocks of information into more manageable groups where natural and appropriate. [Priority 2]</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#tech-group-information)</p><p>5.13.3.1 (X)HTML tests</p><p>5.13.3.1.1 Test 12.3_HTML_01</p><p>This test is targeted to find fieldsets without legend.</p><p> Applicability criteria: fieldset elements without a legend child element.</p><p>//fieldset[not(legend)] Test procedure: Check whether any such fieldsets are found.</p><p> Expected results: PASS if false. FAIL if true.</p><p> Fully automatable: yes.</p><p>5.13.3.1.2 Test 12.3_HTML_02</p><p>This test is targeted to check whether the legend describes the meaning of the fieldset.</p><p> Applicability criteria: legend elements.</p><p>//legend Test procedure: Check that the legend represents the context of the fieldset parent element.</p><p> Expected results: PASS if true.
FAIL if false.</p><p> Fully automatable: no.</p><p>5.13.3.1.3 Test 12.3_HTML_03</p><p>This test is targeted to check whether the elements are grouped in a practical way.</p><p> Applicability criteria: fieldset elements.</p><p>//fieldset  Test procedure: Check that form control elements in the fieldset are grouped in a practical way.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.13.3.1.4 Test 12.3_HTML_04</p><p>This test is targeted to find optgroup elements without label.</p><p> Applicability criteria: optgroup elements without a label attribute.</p><p>//optgroup[not(@label)]  Test procedure: Check that any such optgroup elements are found.</p><p>16 April 2007 Page 99 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p> Expected results: FAIL if true.</p><p> Fully automatable: yes.</p><p>5.13.3.1.5 Test 12.3_HTML_05</p><p>This test is targeted to check whether the label describes the meaning of the optgroup.</p><p> Applicability criteria: label attribute of optgroup elements.</p><p>//optgroup/@label  Test procedure: Check that the label represents the context of the optgroup.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.13.3.1.6 Test 12.3_HTML_06</p><p>This test is targeted to check whether the option elements are grouped in a practical way.</p><p> Applicability criteria: optgroup elements.</p><p>//optgroup  Test procedure: Check that the elements are grouped in a practical way.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.13.3.1.7 Test 12.3_HTML_07</p><p>This test is targeted to find tables without caption.</p><p> Applicability criteria: data tables implemented by a table element without caption child element.</p><p>//table[not(caption)]  Test procedure: Check that any such table elements are found.</p><p> Expected results: PASS if false. 
FAIL if true.</p><p> Fully automatable: no.</p><p>16 April 2007 Page 100 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p>5.13.3.1.8 Test 12.3_HTML_08</p><p>This test is targeted to check whether the caption describes the meaning of the table.</p><p> Applicability criteria: caption element in a data table.</p><p>//caption  Test procedure: Check that caption describes the nature of the table.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.13.3.1.9 Test 12.3_HTML_09</p><p>This test is targeted to check whether the elements are grouped in a practical way.</p><p> Applicability criteria: thead, tbody, tfoot, and colgroup elements.</p><p>//thead //tbody //tfoot //colgroup  Test procedure: Check that their child elements are grouped in a practical way.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.13.3.1.10 Test 12.3_HTML_10</p><p>This test is targeted to check whether the elements are grouped in a practical way.</p><p> Applicability criteria: ul, ol, and dl elements.</p><p>//ul //ol //dl  Test procedure: Check that their child elements are grouped in a practical way.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>16 April 2007 Page 101 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p>5.13.3.1.11 Test 12.3_HTML_11</p><p>This test is targeted to check whether the elements are structured in a practical way.</p><p> Applicability criteria: paragraphs (p elements).</p><p>//p  Test procedure: Check that the paragraph is used to structure text in a practical way.</p><p> Expected results: PASS if true. 
FAIL if false.</p><p> Fully automatable: no.</p><p>5.13.3.1.12 Test 12.3_HTML_12</p><p>This test is targeted to check whether form controls need grouping.</p><p> Applicability criteria: form element without a fieldset descendant element.</p><p>//form[not(.//fieldset)]  Test procedure: Check that the form controls in the form do not need grouping.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.13.3.1.13 Test 12.3_HTML_13</p><p>This test is targeted to check whether the options need grouping.</p><p> Applicability criteria: select element without optgroup child element.</p><p>//select[not(optgroup)]  Test procedure: Check that options in the select element do not need grouping.</p><p> Expected results: PASS if true.</p><p> Fully automatable: no.</p><p>5.13.3.1.14 Test 12.3_HTML_14</p><p>This test is targeted to check whether the table rows need grouping.</p><p> Applicability criteria: tables without headers or footers.</p><p>//table[not(thead) or not(tfoot) or count(tbody)<2]  Test procedure:</p><p>16 April 2007 Page 102 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p>Check that the table rows do not need grouping.</p><p> Expected results: PASS if true.</p><p> Fully automatable: no.</p><p>5.13.3.1.15 Test 12.3_HTML_15</p><p>This test is targeted to check whether text needs grouping with headings and paragraphs.</p><p> Applicability criteria: body text.</p><p>//body//text()  Test procedure: 1. Check that chapters, subtopics and sections are introduced by headings.</p><p>2. Check that the headings introduce or describe the chapters, subtopics or sections that they belong to.</p><p> Expected results: PASS if #1 and #2 are true. FAIL if #1 or #2 are false.</p><p> Fully automatable: no.</p><p>5.13.4 Checkpoint 12.4</p><p>Associate labels explicitly with their controls. 
[Priority 2] </p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#tech-associate-labels) 5.13.4.1 (X)HTML tests</p><p>5.13.4.1.1 Test 12.4_HTML_01</p><p>This test is targeted to find form control elements without an id.</p><p> Applicability criteria: input, select, and textarea form control elements without an id attribute.</p><p>//input[not(@type='hidden')][not(@type='submit')][not(@type='reset')][not(@type='button')][not(@type='image')][not(@id)] //select[not(@id)] //textarea[not(@id)]  Test procedure: Check that no such elements are found.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: yes.</p><p>5.13.4.1.2 Test 12.4_HTML_02</p><p>This test is targeted to find form control elements without a label element.</p><p> Applicability criteria: input, select, and textarea form control elements without an associated label element.</p><p>//input[not(@type='hidden')][not(@type='submit')][not(@type='reset')][not(@type='button')][not(@type='image')][@id] //select[@id] //textarea[@id]  Test procedure: Check that there is a label element in the document whose for attribute equals the form control element's id attribute.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: yes.</p><p>5.14 Guideline 13</p><p>Provide clear navigation mechanisms.</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#gl-facilitate-navigation) This guideline provides information on how to provide contextual and orientation information to help users understand complex pages or elements.</p><p>5.14.1 Checkpoint 13.1</p><p>Clearly identify the target of each link.
[Priority 2]</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#tech-meaningful-links and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-meaningful-links) 5.14.1.1 (X)HTML tests</p><p>5.14.1.1.1 Test 13.1_HTML_01</p><p>This test is targeted to find a and area elements with the same title and text but different link targets (href). If no title attribute is provided, only the element text is checked.</p><p> Applicability criteria: a and area elements with the same link text and title attributes, but different link targets.</p><p>//a //area  Test procedure: 1. Select elements pairwise. 2. Check that two links with the same link text (including alternative text for non-text elements) and the same title attribute (if provided) point to the same resource.</p><p> Expected results: PASS if #2 is true. FAIL if #2 is false.</p><p> Fully automatable: yes.</p><p>5.14.1.1.2 Test 13.1_HTML_02</p><p>This test is targeted to find a and area elements with link text that does not clearly identify the target of the link.</p><p> Applicability criteria: a and area elements.</p><p>//a //area  Test procedure: 1. Check that the link text (including alternative text for non-text elements) clearly describes the effect of activating the element.</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no.</p><p>5.14.2 Checkpoint 13.2</p><p>Provide metadata to add semantic information to pages and sites. [Priority 2]</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#tech-use-metadata and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#tech-use-metadata) 5.14.2.1 (X)HTML tests</p><p>5.14.2.1.1 Test 13.2_HTML_01</p><p>This test is targeted to find a document title element in a web resource.</p><p> Applicability criteria: documents.</p><p> Test procedure: 1.
Check that there is a title element. 2. Check that the title is not empty. 3. Check that the title element contains text that identifies the purpose of the resource.</p><p>16 April 2007 Page 105 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p> Expected results: PASS if true. FAIL if false.</p><p> Fully automatable: no</p><p>5.14.3 Checkpoint 13.3</p><p>Provide information about the general layout of a site (e.g., a site map or table of contents). [Priority 2] In describing site layout, highlight and explain available accessibility features.</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#tech-site-description and the techniques in http://www.w3.org/TR/WAI-WEBCONTENT- TECHS/#tech-site-description) 5.14.3.1 (X)HTML tests</p><p>5.14.3.1.1 Test 13.3_HTML_01</p><p>This test is targeted to find a web site without information about the general layout of the site.</p><p> Applicability criteria: the whole web site.</p><p> Test procedure: 1. Check that the web site contains a document containing a description of the general layout of a site. A site map or a table of contents would meet this requirement. 2. Check that this document can be reached from the index page.</p><p> Expected results: PASS if #1 and #2 true. FAIL if #1 or #2 is false.</p><p> Fully automatable: no.</p><p>5.14.4 Checkpoint 13.4</p><p>Use navigation mechanisms in a consistent manner. [Priority 2]</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#tech-clear-nav- mechanism and the techniques in http://www.w3.org/TR/WAI- WEBCONTENT-TECHS/#tech-clear-nav-mechanism) 5.14.4.1 (X)HTML tests</p><p>5.14.4.1.1 Test 13.4_HTML_01</p><p>This test is targeted to check whether navigation mechanisms are used in a consistent manner.</p><p>16 April 2007 Page 106 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p> Applicability criteria: navigation menus and navigation bars.</p><p> Test procedure: 1. Select navigation elements on several pages. 2. 
Check that the navigation facilities are similar with respect to presentation (location in rendered document, location in content rendered without style sheets, colours, font) and behaviour.</p><p> Expected results: PASS if #2 is true. FAIL if #2 is false.</p><p> Fully automatable: no.</p><p>5.15 Guideline 14</p><p>Ensure that documents are clear and simple.</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#gl-facilitate- comprehension) This guideline provides information on how to create clear and simple documents.</p><p>5.15.1 Checkpoint 14.1</p><p>Use the clearest and simplest language appropriate for a site's content. [Priority 1]</p><p>(See http://www.w3.org/TR/WAI-WEBCONTENT/#tech-simple-and- straightforward and the techniques in http://www.w3.org/TR/WAI- WEBCONTENT-TECHS/#tech-icons) 5.15.1.1 (X)HTML tests</p><p>5.15.1.1.1 Test 14.1_HTML_01</p><p>This test is targeted to analyse the readability of the content.</p><p> Applicability criteria: whole document.</p><p> Test procedure: 1. Read the text. 2. Check that the language is simple. 3. If not, check that the language used in the document is necessary to convey the information.</p><p> Expected results: PASS if #2 or #3 is true. FAIL if #2 and #3 are false.</p><p> Fully automatable: no.</p><p>16 April 2007 Page 107 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p>16 April 2007 Page 108 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p>6 Aggregation model for test results 6.1 Introduction</p><p>For automatic and/or expert evaluation it is important that the individual assessments can be aggregated into an end-result that is easy to understand and has a clear interpretation. Aggregation of test results is possible on different levels like the checkpoint or test level. </p><p>One of the metrics that can also be interesting for policy makers, is the accessibility barrier probability. 
The European Commission regulation 808/2004 concerning community statistics for the information society explicitly states that one characteristic to be provided is barriers to the use of ICT, Internet and other electronic networks, e-commerce and e-business processes. This section describes a model for calculating the accessibility barrier probability for single Web pages and Web sites.49</p><p>Note that the aggregation approach is not conflicting with the conformance claim approach. A conformance claim can only be made if a site passes all the tests, i.e. the site has a barrier probability of 0. The aggregation result on the other hand can be calculated for any evaluation and can serve as an indication of how good or bad the accessibility of a web site is in comparison to other sites. 6.2 Approach</p><p>Web accessibility evaluation can be viewed as a three stage process. Figure 2 and Figure 3 summarise the Web accessibility evaluation process. The notation in the figures that will be used throughout this section is introduced in section 6.3.</p><p>In the first stage, W3C's Evaluation and Report Language (EARL) is used as a standardised format for collecting and conveying test results from accessibility assessment tools according to any given standard. The model also supports the weighting of the test reports according to their error probability such that the contribution of tests with lower confidence to the overall result is reduced.</p><p>The second stage performs aggregation of the individual test results into one comprehensive figure for a web page. The calculation is based on a statistical model (user centric accessibility barrier model, UCAB, introduced in section 6.3.4). 
The underlying assumption is that the accessibility barriers within a web page accumulate.</p><p>To present the results to the public, e.g., people who experience barriers (such as people with disabilities) or users of the data for planning and development</p><p>49 This probabilistic model for accessibility barriers is work in progress. It will be evaluated and refined if necessary during the evaluation phases of UWEM. Aggregation of accessibility barriers will probably be based around these general ideas, but the exact model is subject to change during the evaluation phase.</p><p>16 April 2007 Page 109 of 141 WAB Cluster Deliverable D-WAB4 (Public) purposes (such as policy makers and stakeholders) the third stage of Web accessibility evaluation needs to provide means for interpreting the results. This includes statistical analyses of the findings, presentation of average values for a Web site or for groups of Web sites by geographical region or business sector. This stage can be viewed as the “business logic” for estimating the accessibility barriers. The reporting of findings will be covered in section 7 and section 8.</p><p>Figure 2: The first and second stage of the Web site evaluation process.</p><p>16 April 2007 Page 110 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p>Figure 3: The third stage of the Web site evaluation process Section 6.4 concentrates on the second stage of Web accessibility evaluation and provides further details on the statistical model used in the aggregation. Section 6.5 describes how an average value for a Web site can be derived from the results of stage 2. 6.3 Definitions and mathematical background</p><p>In this section we explain the main concepts involved in the modelling of Web accessibility barriers. 
Subsequently we show how they can be transferred into a statistical model, and we introduce the UWEM User Centric Accessibility Barrier Model (UCAB).</p><p>6.3.1 Definitions</p><p> Web page </p><p>A resource on the web as defined by section 4.</p><p> Barrier</p><p>An accessibility barrier is modelled as a product failure caused by an incompatibility between the needs of a disabled user and product functionality. The incompatibility is caused by the web page, i.e., it is not the user's fault.</p><p> Barrier type</p><p>A barrier type is related to a test procedure (as described in section 5) and provides a unique interpretation of its result. For example, a non-text element can constitute a barrier in several ways. One way is described by barrier type 1.1_HTML_01 ("alt attribute is missing"). </p><p>Barrier types are an integral part of barrier modelling because they have two important properties:</p><p>1. Barrier types can be measured objectively with reproducible results.</p><p>2. The results of the measurements can be aggregated.</p><p>Additionally it is assumed that each barrier gives the same result when the same element is checked more than once, so that the same barrier type will be reported every time, and therefore also the same barrier probability will apply every time the element is tested. </p><p> Accessibility</p><p>Under the scope of this section, accessibility is defined as the absence of barriers within the Sampled Resource List.</p><p>6.3.2 Notation</p><p>The following notation is used to refer to the quantities that are involved in the calculations.</p><p>The results from each test in stage one are given by a report R_pb ∈ [0; 1], where p is a page and b a barrier type. R_pb = 1 means that the test for barrier type b failed, whereas R_pb = 0 means that the test for barrier type b passed.
For expert evaluation these results will usually be given in a tabular test report.</p><p>Automatic evaluation has the capacity to assess all elements within the page p. In this case the result can also be given as the ratio of the number of failed tests B_pb to the number of all relevant elements N_pb for barrier type b:</p><p>R_pb^automatic = B_pb / N_pb</p><p>If the confidence level of a test procedure is known, the reports can be adapted to reflect this. Let P_b^fp denote the probability that the test for b yields a false positive and P_b^fn the probability that the test for b yields a false negative.50 The following report includes confidence weighting:</p><p>R_pb^expert = P_b^fp if test b passed; 1 − P_b^fn if test b failed</p><p>If ratio reports are used, the calculation changes to</p><p>R_pb^automatic = (B_pb / N_pb) · (1 − P_b^fn) + ((N_pb − B_pb) / N_pb) · P_b^fp</p><p>50 This version of UWEM does not specify how the parameters should be selected. Instead the default values P_b^fp = P_b^fn = 0 will be used. Note that with this parameter selection the results are the same as without confidence weighting. </p><p>The barrier probability of barrier type b is given by F_b ∈ [0; 1].51 The barrier probabilities constitute a fixed set of parameters. A small value of F_b indicates that the probability that a disabled user encountering a barrier of type b will experience an accessibility problem is small. All parameters are initially set to the same value F_b = 0.05 for all b.
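As an illustration of the automatic ratio report defined above, the applicability expression of a section 5 test can be evaluated over a page and the failure ratio computed. The sketch below uses a hypothetical XHTML fragment and Python's standard library rather than a full XPath engine; UWEM does not prescribe any particular implementation:

```python
import xml.etree.ElementTree as ET

# Hypothetical page fragment: ten img elements, three of them lacking an alt attribute.
page = "<body>" + '<img src="a.png" alt="logo"/>' * 7 + '<img src="b.png"/>' * 3 + "</body>"
root = ET.fromstring(page)

imgs = root.findall(".//img")                        # applicability: all img elements
failed = [e for e in imgs if e.get("alt") is None]   # test 1.1_HTML_01: alt attribute missing

N_pb = len(imgs)    # number of relevant elements
B_pb = len(failed)  # number of failed tests
R_pb = B_pb / N_pb  # automatic ratio report
print(R_pb)         # prints 0.3
```
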
The parameters are being validated and will be tuned within EIAO with regard to automatic evaluation for future versions of UWEM.</p><p>The result of the aggregation F_p is interpreted as an accessibility barrier probability:</p><p>F_p is the probability that the Web page p constitutes an accessibility barrier for a disabled user.</p><p>6.3.3 Example (Confidence weighting)</p><p>The goal is to estimate the proportion of images that do not have an appropriate alternative text. The inspected web page contains ten image elements with alternative text. The (stage one) evaluation reports that for three of the images the alternative text is not appropriate.</p><p>The barrier ratio is R_pb = 3/10 = 0.3.</p><p>Suppose that the probability of false negatives (i.e., false "fail" results) is small, because it is relatively easy to recognise suspicious image descriptions like file names or placeholder text: P_b^fn = 0.01. On the other hand, the probability of false positives (i.e., false "pass" results) is higher, because sometimes the whole page context needs to be taken into account to determine whether the description is appropriate: P_b^fp = 0.1.</p><p>The estimate is adapted accordingly:</p><p>51 The accessibility barrier probability F_b for a barrier type b can be estimated via user testing using a representative set of users. It may also be estimated to some extent with expert testing or semi-automatic testing; however, the level of precision in detecting real barriers will be less than for user testing. In this simple first approach the barrier probability is set to a fixed value. In later versions this parameter could be used to provide a finer grading of the severity of the barrier types, e.g.
by introducing different values for different disability groups.</p><p>R_pb = (3/10) · (1 − P_b^fn) + ((10 − 3)/10) · P_b^fp = (3/10) · 0.99 + (7/10) · 0.1 = 0.367</p><p>6.3.4 Mathematical background: The UCAB model</p><p>The main purpose of the aggregation of the reports from stage one is to model the experience of a disabled user trying to access a Web site. The goal is to determine the probability that a user cannot complete a task because of the accessibility barriers they encounter. Depending on the severity, the barriers might either stop the disabled user right away or prevent the completion of the task because too much time and effort are required. To address this goal we introduce the User Centric Accessibility Barrier Model (UCAB).</p><p>There are two main statistical assumptions underlying the UCAB model:</p><p>1. Independence of barrier occurrences</p><p>Each test passes or fails independently of the others, i.e., the reports R_pb used as random variables are mutually independent.</p><p>2. Barriers within a Web page accumulate</p><p>Each barrier that a disabled user encounters within a Web page reduces the overall accessibility, i.e., increases the accessibility barrier probability F_p. This is modelled as the probability that the user encounters any barrier within the web page.</p><p>Let A and B be two independent events. Then the probability that A or B occurs is given by:</p><p>P(A ∪ B) = 1 − P(Ā ∩ B̄) = 1 − P(Ā) · P(B̄) = 1 − (1 − P(A)) · (1 − P(B))</p><p>where P(A) denotes the probability of A and Ā is the complementary event of A. 6.4 Stage 2: Accessibility Barrier Probability Fp</p><p>An accessibility barrier probability F_p is modelled as a product failure caused by an incompatibility between a disabled user's needs and product functionality. It is assumed that a failure mode reported by a test procedure will introduce an accessibility barrier with some known probability F_b.
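The confidence-weighted ratio report of section 6.3.3 can be sketched in Python as follows (an illustrative sketch only; the function name is ours, not part of UWEM):

```python
def weighted_ratio(B_pb, N_pb, p_fn, p_fp):
    """Confidence-weighted automatic ratio report (section 6.3.2).

    B_pb: number of failed tests, N_pb: number of relevant elements,
    p_fn: false-negative probability, p_fp: false-positive probability.
    """
    return (B_pb / N_pb) * (1 - p_fn) + ((N_pb - B_pb) / N_pb) * p_fp

# Example of section 6.3.3: 3 of 10 alternative texts flagged as inappropriate.
r = weighted_ratio(3, 10, p_fn=0.01, p_fp=0.1)  # ≈ 0.367
```
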
</p><p>Application of the UCAB model yields the following formula (notation as described in section 6.3.2):</p><p>F_p = 1 − ∏ (1 − R_pb · F_b), where the product runs over all barrier types b.</p><p>6.4.1 Example (Fp for single Web page)</p><p>The (stage one) evaluation of a Web page yielded reports that were generated by three different tests:</p><p> b0 = 1.1_HTML_01</p><p> b1 = 2.2_HTML_02</p><p> b2 = 9.3_HTML_03</p><p>In detail, the reports might look like this:</p><p> R_p,b0 = 1 (images without alt attribute)</p><p> R_p,b1 = 0 (the colour contrast is ok)</p><p> R_p,b2 = 1 (event handler ondblclick has been used)</p><p>Assuming that the barrier probabilities of the failure modes have the values F_b0 = F_b1 = F_b2 = 0.05, the accessibility barrier probability of the Web page calculated from the UCAB model is:</p><p>F_p = 1 − (1 − R_p,b0 · F_b0) · (1 − R_p,b1 · F_b1) · (1 − R_p,b2 · F_b2) = 1 − (1 − 0.05) · (1 − 0) · (1 − 0.05) = 1 − 0.95² = 0.0975</p><p>6.4.2 Combining results from different testing procedures</p><p>It is possible to combine the results from evaluations performed with different tools or by different experts if the following conditions apply:</p><p>1. No double reports (No barrier should be included more than once in the aggregation. To meet this, the evaluation has to observe a division of the tests that are performed, e.g. into different sets of Web pages or into automatic and expert evaluation).</p><p>2. Same sample (The reports have to cover the same data, i.e. the same version and selection of Web resources). 6.5 Stage 3: Accessibility Barrier Probability Fs </p><p>The Web page accessibility barrier probabilities naturally lend themselves to aggregation, so that the average barrier probability and variance for a Web site can be calculated from the sampled Web pages for a Web site.
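The two aggregation stages (the per-page formula of section 6.4 and the site-level mean and standard deviation of this section) can be sketched in Python; this is a minimal illustration with names of our own choosing:

```python
from math import prod, sqrt

def barrier_probability(reports, F_b=0.05):
    """Stage 2 (UCAB): F_p = 1 - product over b of (1 - R_pb * F_b)."""
    return 1 - prod(1 - r * F_b for r in reports)

def site_statistics(page_probs):
    """Stage 3: mean F_s and standard deviation S_s over the sampled pages."""
    n = len(page_probs)
    F_s = sum(page_probs) / n
    S_s = sqrt(sum((f - F_s) ** 2 for f in page_probs) / n)
    return F_s, S_s

# Example 6.4.1: two failed tests and one passed test, all with F_b = 0.05.
F_p = barrier_probability([1, 0, 1])          # 1 - 0.95**2 = 0.0975
# Example 6.5.1: two sampled pages.
F_s, S_s = site_statistics([0.0975, 0.1425])  # F_s = 0.12, S_s = 0.0225
```
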
Similarly, aggregation can be further performed over several Web sites, regions or countries.</p><p>The barrier probability F_s for a Web site s is calculated as the mean of the barrier probabilities of the Web pages that have been sampled from the Web site:</p><p>F_s = (1/n) · Σ_{j=1..n} F_pj</p><p>where n is the number of Web pages sampled from Web site s and F_pj is the barrier probability for Web page pj, calculated as described in section 6.4.</p><p>The standard deviation S_s is given by</p><p>S_s = sqrt( (1/n) · Σ_{j=1..n} (F_pj − F_s)² )</p><p>6.5.1 Example (Fs for single Web site)</p><p>Two pages have been sampled from Web site s. The barrier probabilities have been calculated in stage 2:</p><p> F_p1 = 0.0975</p><p> F_p2 = 0.1425</p><p>Then the average barrier probability of s is</p><p>F_s = (1/2) · (F_p1 + F_p2) = (1/2) · (0.0975 + 0.1425) = (1/2) · 0.24 = 0.12</p><p>The standard deviation is</p><p>S_s = sqrt( (1/2) · ((F_p1 − F_s)² + (F_p2 − F_s)²) ) = sqrt( (1/2) · ((0.0975 − 0.12)² + (0.1425 − 0.12)²) ) = sqrt(0.00050625) = 0.0225</p><p>6.6 Limitations of the UCAB model</p><p>Clearly we are assuming an idealised Web in this model, and some of the assumptions may not be true for a real Web site. The evaluation phase of UWEM will be used to verify whether the model is usable, and also to improve the model where necessary.</p><p>6.6.1 Aspects not covered by the model</p><p> Information redundancy</p><p>Information can be presented in several ways on the Web, and in some cases there may be sufficient redundant information, so that what looks like a barrier is not perceived as a barrier by the disabled user due to information redundancy (e.g., a title can compensate for a missing alt attribute in some specific scenarios). The model does not consider information redundancy.</p><p> User behaviour</p><p>User behaviour is not modelled yet.
It is conceivable that we could model the time it takes until a user successfully performs the given task as a stochastic variable. The integral of the probability distribution function could then be used to calculate the probability that a disabled user would be able to finish a task within a given time. It could also be used to model when most users would give up due to frustration.</p><p> Complexity of Web page</p><p>All Web pages are treated equally. The complexity (e.g., size or the number of test locations that have been identified) is not taken into account. However, it seems unnatural that a large Web page has a higher barrier probability than a smaller page with the same barrier density. </p><p>Our current approach is justifiable as long as we assume that the sampling algorithm collects samples that are uniformly distributed and comparable.</p><p>6.6.2 Underlying assumptions</p><p> Ideal results from stage 1</p><p>The statistical model assumes that the results from stage 1 are correct, i.e., that no false reports are included in the calculation. To improve the robustness of the model it might be useful to introduce some kind of error analysis when implementing it.</p><p> Independence of barrier occurrences</p><p>The different elements of a Web site are not totally independent, so a model that includes these dependencies might be more precise. In practice, the independence assumption therefore introduces a bias into this model.</p><p>If several barriers of the same type are present within a Web page, it should not be assumed that they contribute to the result in the same way as barriers of different types.</p><p> Independence of barrier types</p><p>Some barrier types are probably related to each other (for example, the tests for checkpoints 13.3 and 13.4). If several tests are applied to the same element, the applicability constraints are designed to ensure that each problem is counted only once.
In this way most of the dependencies between the barrier types can be avoided.</p><p> Constant barrier probabilities Fb</p><p>The methodology assumes that barriers can be categorised into a fixed number of barrier types. Each time a barrier of type b occurs within a Web page it has the same severity (i.e., it contributes the same barrier probability F_b to the calculation).</p><p>7 Reporting of test results</p><p>Reporting of test results under UWEM needs to be unambiguously defined. To that end, the most suitable language is the Evaluation and Report Language EARL [EARL10-Schema], which is soon to become a Candidate Recommendation within W3C. For an example of what an EARL document looks like, see Error: Reference source not found. However, it must be acknowledged that this format is not the most suitable for expert evaluation, especially when there may be neither tools at hand that support EARL nor RDF knowledge on the part of the expert. Therefore, UWEM offers the possibility of an alternative format to EARL that follows a descriptive approach based on the template of Error: Reference source not found. This template is based on the W3C/WAI evaluation suite template that serves as a model for accessibility evaluation reports.</p><p>The proposed minimal level of detail for expert evaluation reporting is per checkpoint for the entire sample. More detailed reporting (such as reporting per test and/or reporting for each sampled resource) is optional.</p><p>The Evaluation and Report Language is a framework to express test results. The concept of a “test” under EARL is taken in a wider meaning, and can include bug reports, software unit tests, test suite evaluations, and conformance claims, among others.
EARL is based upon RDF.</p><p>8 Scorecard report</p><p>The two evaluation types outlined earlier support the identification of possible barriers. A single value describes the probability of creating a barrier by violating any of the tests outlined in section 5. Aggregation of single values makes it possible to compute the barrier probability for a single Web site (section 6). Scoring is based on this data and makes it possible to compare results obtained via the same evaluation type. In the following, we describe in more detail the scorecards for monitoring the status and progress of single Web sites and groups of Web sites. 8.1 Scorecard report</p><p>The balanced scorecard is an approach to strategic management developed by Kaplan and Norton [KAPLAN96]. The balanced scorecard method is used, among others, by the European Commission in the Bologna process52 to measure the progress made in three areas of higher education. This is done to visualise the status of, and improvements in, the process. </p><p>Scorecards give the “big picture” of status and progress, and make it possible to monitor a set of indicators. In UWEM, we use a simple scorecard to show the status at a given point in time and the progress since the previous evaluation, if available. The overview perspective is useful for directing additional evaluation resources to areas where the results are especially interesting or alarming. </p><p>8.1.1 Scores indicating barrier status</p><p>Table 1 explains the colour and letter codes used for the UWEM barrier status score.
The barrier scores are coded with letter codes and coloured fields, with A and dark green indicating the best score, and E and red indicating the worst (no data is indicated with “n/a” and a grey field).</p><p>Score | Barrier probability Fs | Level of accessibility | Interpretation</p><p>A | Fs = 0.00% | Full accessibility | Accessibility is fully supported</p><p>B | 0% < Fs <= 25% | Very good accessibility | Within levels of confidence, no or very few accessibility barriers have been identified</p><p>C | 25% < Fs <= 50% | Medium accessibility | A few accessibility barriers have been identified</p><p>D | 50% < Fs <= 75% | Poor accessibility | Several accessibility barriers have been identified</p><p>E | 75% < Fs <= 100% | Very poor accessibility | Web site is inaccessible</p><p>n/a | Not measured | n/a | Not tested or not applicable</p><p>Reference to the type of evaluation (Expert and/or Automatic) and to the set of tests used for the automatic or expert evaluation.</p><p>52 http://www.bologna-bergen2005.no/Bergen/050509_Stocktaking.pdf</p><p>Table 1: UWEM barrier score represented by letter and colour to indicate barrier probability. Note that the levels chosen are preliminary values that will be evaluated when real data measurements from the EIAO project are available. The scorecards presented in Table 1 can be used to describe the assessment of single Web sites or groups of Web sites based on UWEM. They may also be used to describe the results of both automatic and expert testing. However, it is important to remember that the results for different web sites or groups of web sites may be compared only if they are obtained using the same set of tests.</p><p>The score should always be presented together with information about which tests have been used. The set of tests that can be automatically evaluated is a subset of all defined tests (see section 5).
The other tests can be conducted mainly manually by experts.</p><p>8.1.2 Scores indicating barrier change </p><p>If a set of web resources was previously evaluated using the same assessment method, the score indicating the change in barrier probability can be determined. This score simply indicates the direction of change compared to the previous evaluation, as shown in Table 2. An increase in accessibility is indicated with an arrow pointing up, a decrease with an arrow pointing down. No change is indicated with a horizontal bar, and no previous data with the text “n/a”.</p><p>Code | Barrier probability (Fs) change since previous evaluation | Accessibility change</p><p>↑ | Status score decreased | Accessibility improved</p><p>— | Status score unchanged | Accessibility unchanged</p><p>↓ | Status score increased | Accessibility declined</p><p>n/a | No previous evaluation available | No previous evaluation available</p><p>Table 2: UWEM barrier change score represented by a symbol indicating change in barrier probability. To provide consistency between status and change scores, the change score relates to a change in the actual status scorecard (see Table 1) rather than to changes in barrier probability Fs (a change in the scorecard colour is needed to trigger a decrease/increase in the change score). 8.2 Aggregation of statistics</p><p>The single Web site scores can be grouped according to:</p><p> NACE (Nomenclature générale des Activités Economiques dans les Communautés Européennes)53 business sectors and </p><p> NUTS (Nomenclature of Territorial Units)54 regions. 
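The status score of Table 1 and the group averaging described in this section can be sketched as follows (the thresholds are taken from Table 1; the function names are ours and purely illustrative):

```python
def status_score(F_s):
    """Map a site barrier probability F_s to the Table 1 letter score (A best, E worst)."""
    if F_s is None:
        return "n/a"   # not tested or not applicable
    if F_s == 0.0:
        return "A"     # full accessibility
    if F_s <= 0.25:
        return "B"
    if F_s <= 0.50:
        return "C"
    if F_s <= 0.75:
        return "D"
    return "E"

def group_score(site_probs):
    """Average barrier probability over the sites of a NACE sector or NUTS region."""
    return sum(site_probs) / len(site_probs)

print(status_score(0.12))                       # prints B
print(status_score(group_score([0.12, 0.60])))  # prints C
```
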
</p><p>The scores are then computed as averages over the barrier probabilities of the web sites in the NACE or NUTS group.</p><p>53 http://epp.eurostat.cec.eu.int/cache/ITY_OFFPUB/KS-BG-06-002/EN/KS-BG-06-002-EN.PDF 54 http://epp.eurostat.cec.eu.int/cache/ITY_OFFPUB/KS-BD-05-001/EN/KS-BD-05-001-EN.PDF</p><p>16 April 2007 Page 122 of 141 WAB Cluster Deliverable D-WAB4 (Public)</p><p>9 Glossary</p><p>Term Use in UWEM Accessibility Accessibility is here defined as the absence of accessibility barriers for a given checkpoint, Web resource or key use scenario. Accessibility A test referring to one or more Web pages resulting in an test EARL report. Accessibility Fp indicates the probability for an accessibility barrier for a barrier given web page p. probability Fp Aggregation Grouping of data to get an overview of the result set. Authored units Some set of material created as a single entity by an author. Examples include a collection of markup, a style sheet, and a media resource, such as an image or audio clip. Balanced Approach to strategic management developed by Robert Scorecard Kaplan and David Norton. The approach can be used to monitor progress towards a defined set of goals. Barrier A barrier on a Web page can be viewed as an incompatibility between the user and the Web page that causes the user to be not able to accomplish a task in a given key use scenario. BenToWeb Benchmarking Tools and Methods for the Web. Project in the WAB cluster: http://bentoweb.org/ Core resource The Core Resource List is a set of generic resource types list which are likely to be present in most Web sites, and which are core to the use and accessibility evaluation of a site. It represents a minimal set of resources which should be included in any accessibility evaluation of the site. It cannot, in general, be automatically identified, but requires human judgement to select. 
CP: Checkpoint</p><p>Crawl Web site: Recursively retrieve Web pages, until the whole Web site is downloaded, or until sampling criteria terminate the crawling process.</p><p>Data cell: The cells in a data table marked up with td elements.</p><p>Data table: Table where content in data cells has a relationship with other content in both the containing row and column.</p><p>EARL: Evaluation and Report Language</p><p>EIAO: European Internet Accessibility Observatory. Project in the WAB cluster: http://www.eiao.net/</p><p>Evaluation type: The manner in which the accessibility evaluation is performed. The different evaluation types mentioned in UWEM are automatic, expert and user testing. This version of UWEM describes only automatic and expert testing.</p><p>Header cell: A cell in a data table marked up with a th element, or a td element with a scope attribute or an axis attribute.</p><p>Key use scenario: The sequence of events and Web pages that must be traversed to successfully perform a task on a Web page.</p><p>A, AA, AAA: Levels of conformance with WCAG 1.0 as defined by W3C.</p><p>NACE: Acronym from the French 'Nomenclature statistique des Activités économiques dans la Communauté Européenne', the statistical classification of economic activities in the European Community.</p><p>Non-text content: Content that is not represented by a Unicode character or sequence of Unicode characters when rendered in a user agent according to the formal specification of the content type. Note: This includes ASCII Art, which is a pattern of characters.55</p><p>NUTS: Nomenclature of Territorial Units for Statistics</p><p>Preformatted text: Text where layout is defined by whitespace and line break characters and which is typically rendered with a monospace font.
QA: Quality Assurance</p><p>Replicable: If tests are repeated, the same results are expected, with certain limitations.</p><p>Sampling: Procedure to identify a subset of the Complete Resource Set (a single complete Web site) which is to be evaluated.</p><p>S-EAM: Support EAM. Project in the WAB cluster: http://www.support-eam.org/</p><p>User testing: Evaluation by use of a representative set of users for each failure mode.</p><p>UWEM: Unified Web Evaluation Methodology</p><p>WAB: Web Accessibility Benchmarking cluster</p><p>WCAG: Web Content Accessibility Guidelines</p><p>See also http://www.w3.org/2003/glossary/ for explanation of common Web terms and abbreviations.</p><p>55 This is the WCAG 2.0 definition quoted from http://www.w3.org/WAI/GL/WCAG20/WD-WCAG20-20060317/appendixA.html#non-text-contentdef</p><p>10 References</p><p>[BRIN98] Brin S, Page L (1998). The anatomy of a large-scale hypertextual Web search engine. In: Enslow P H, Ellis A (eds), Proceedings of the Seventh International Conference on World Wide Web (Brisbane, Australia), pp. 107–117. Amsterdam: Elsevier Science Publishers B.V. DOI: http://dx.doi.org/10.1016/S0169-7552(98)00110-X</p><p>[EARL10-Schema] Abou-Zahra S (ed) (2007). Evaluation and Report Language (EARL) 1.0 Schema. W3C Working Draft 23 March 2007. Available at: http://www.w3.org/TR/EARL10-Schema/</p><p>[KAPLAN96] Kaplan R S, Norton D P (1996). The Balanced Scorecard: Translating Strategy into Action. Boston, MA: Harvard Business School Press.</p><p>[HENZINGER00] Henzinger M R, Heydon A, Mitzenmacher M, Najork M (2000). On Near-Uniform URL Sampling. In: Proceedings of the Ninth International World Wide Web Conference, May 2000.</p><p>[HTTP-RDF] Koch J, Velasco C A, Abou-Zahra S (eds) (2007). HTTP Vocabulary in RDF. W3C Working Draft 23 March 2007. Available at: http://www.w3.org/TR/HTTP-in-RDF/</p><p>[KRUG00] Krug S (2000). Don't Make Me Think: A Common Sense Approach to Web Usability.
New Riders Press (2nd edition).</p><p>[LEVENE99] Levene M, Loizou G (1999). Navigation in Hypertext is easy only sometimes. SIAM Journal on Computing, 29, pp. 728–760.</p><p>[RDFS] Brickley D, Guha R V (2004). RDF Vocabulary Description Language 1.0: RDF Schema. W3C Recommendation, 10 February 2004. Available at: http://www.w3.org/TR/rdf-schema/</p><p>[RFC2119] Bradner S (1997). Key words for use in RFCs to Indicate Requirement Levels. Request for Comments: 2119. IETF. Available at: http://www.ietf.org/rfc/rfc2119.txt</p><p>[RFC2616] Fielding R, Gettys J, Mogul J, Frystyk H, Masinter L, Leach P, Berners-Lee T (1999). Hypertext Transfer Protocol – HTTP/1.1. Request for Comments: 2616. IETF. Available at: http://www.ietf.org/rfc/rfc2616.txt</p><p>[RFC3986] Berners-Lee T, Fielding R, Masinter L (2005). Uniform Resource Identifier (URI): Generic Syntax. Request for Comments: 3986. IETF. Available at: http://www.ietf.org/rfc/rfc3986.txt</p><p>[WCAG10] Chisholm W, Vanderheiden G, Jacobs I (eds) (1999). Web Content Accessibility Guidelines 1.0. W3C Recommendation 5 May 1999. World Wide Web Consortium. Available at: http://www.w3.org/TR/WCAG10/</p><p>[WCAG10-TECHS] Chisholm W, Vanderheiden G, Jacobs I (eds) (2000). Techniques for Web Content Accessibility Guidelines 1.0. W3C Note 6 November 2000. World Wide Web Consortium. Available at: http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/</p><p>[WCAG20] Caldwell B, Chisholm W, Slatin J, Vanderheiden G, White J (eds) (2005). Web Content Accessibility Guidelines 2.0. W3C Working Draft 30 June 2005. World Wide Web Consortium. Available at: http://www.w3.org/TR/WCAG20/</p><p>[ZENG04] Zeng X (2004). Evaluation and Enhancement of Web Content Accessibility for Persons with Disabilities. Ph.D. Thesis. University of Pittsburgh.
Available at: http://etd.library.pitt.edu/ETD/available/etd-04192004-155229/unrestricted/XiaomingZeng_April2004.pdf</p><p>11 Appendix A: Document License</p><p>Copyright © 2005 European Commission and WAB Cluster members</p><p>[License based upon the W3C document license56. NOTICE: This license applies specifically to UWEM and WAB Cluster materials. None of the documents from the World Wide Web Consortium or its Web Accessibility Initiative that are referenced or quoted in this document are subject to the conditions of this license. Materials from the World Wide Web Consortium or the Web Accessibility Initiative that are referenced or quoted in this document, or in documents derived from it, are subject to the W3C© Document License.]</p><p>By using and/or copying this document, or the document from which this statement is linked (Unified Web Evaluation Methodology, UWEM), you (the licensee) agree that you have read, understood, and will comply with the following terms and conditions:</p><p>Permission to copy and distribute the contents of this document, or the document from which this statement is linked, in any medium for any purpose and without fee or royalty is hereby granted, provided that you include the following on ALL copies of the document, or portions thereof, that you use:</p><p>• A link or URL to the original document.</p><p>• The pre-existing copyright notice of the original author, or if it doesn't exist, a notice (hypertext is preferred, but a textual representation is permitted) of the form: "Copyright © 2005 European Commission and WAB Cluster members. All Rights Reserved."</p><p>• The STATUS of the document.</p><p>When space permits, inclusion of the full text of this NOTICE should be provided.
We request that authorship attribution be provided in any software, documents, or other items or products that you create pursuant to the implementation of the contents of this document, or any portion thereof.</p><p>56 http://www.w3.org/Consortium/Legal/2002/copyright-documents-20021231. Our license allows extensions and modifications of UWEM, as long as references to the original document are given and copies of this license are provided.</p><p>This license allows the use, modification and extension of this document by any organisation, royalty free, under the conditions expressed above. In case of modifications made outside the standardisation body (or equivalent entity) selected by the copyright holders, neither the term “Unified Web (Site) Evaluation Methodology” nor the acronym “UWEM” may be used to denominate the resulting work.</p><p>THIS DOCUMENT IS PROVIDED "AS IS," AND COPYRIGHT HOLDERS MAKE NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, OR TITLE; THAT THE CONTENTS OF THE DOCUMENT ARE SUITABLE FOR ANY PURPOSE; NOR THAT THE IMPLEMENTATION OF SUCH CONTENTS WILL NOT INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS.</p><p>COPYRIGHT HOLDERS WILL NOT BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF ANY USE OF THE DOCUMENT OR THE PERFORMANCE OR IMPLEMENTATION OF THE CONTENTS THEREOF.</p><p>The name and trademarks of copyright holders may NOT be used in advertising or publicity pertaining to this document or its contents without specific, written prior permission.
Title to copyright in this document will at all times remain with copyright holders.</p><p>12 Appendix B: Template for expert evaluation</p><p>This appendix presents a template to be used for expert evaluation when presenting the results of a UWEM evaluation. The template is based upon the W3C/WAI template for evaluation reports,57 with minor changes and some additions required by the Unified Web Evaluation Methodology (UWEM). Please note that some points can only be used if applicable.</p><p>[start of proposed template]</p><p>12.1 Introduction</p><p>This evaluation report describes the evaluation of the web site http://example.org/ according to UWEM. The list of resources tested for conformance can be found in: http://example.org/tests/test1.rdf</p><p>12.2 Executive Summary</p><p>This report describes the conformance of the http://example.org/ Web site with UWEM. The evaluation results are described below and are based on the Unified Web Evaluation Methodology (UWEM) as provided by the WAB Cluster on http://www.wabcluster.org/UWEM1/.</p><p>Based on this evaluation, using the UWEM tests for every WCAG 1.0 checkpoint of priority 1 and 2, the http://example.org/ Web site [meets/does not meet] UWEM Conformance Level 1 (identical to WCAG Level A) and/or 2 (identical to WCAG Double-A). Detailed evaluation results are available in section 12.5. Resources for follow-up study are listed in section 12.6. Feedback on this evaluation is welcome.</p><p>12.3 Background about Evaluation</p><p>Conformance evaluation of Web accessibility can be a combination of evaluation tools and manual evaluation by an experienced reviewer. The evaluation results in this report are based on evaluation conducted on the following date(s): ______ [and using the following tool(s): ______]. The Web site may have changed since that time. Additional information on the UWEM evaluation process is available at http://www.wabcluster.org/UWEM1/.
</p><p>57 http://www.w3.org/WAI/eval/template.html</p><p>12.3.1 Web Site Evaluated</p><p>• [Name of Web site] [and purpose of site, if relevant]</p><p>• [Base URL of site]</p><p>• [Scope of the evaluation, with exclusions explicitly disclosed]</p><p>• [Link to Sampled Resources List]</p><p>• [Exact date, or range of dates, on which the evaluation was conducted]</p><p>• [Natural language(s) of Web site]</p><p>12.3.2 Reviewer(s)</p><p>• [Name of reviewer or review team, unless anonymous]</p><p>• [Organization with which reviewer(s) is/are affiliated, if relevant and if not anonymous]</p><p>• [Contact information for reviewer(s) or reviewer(s)' organization, unless anonymous]</p><p>12.3.3 Evaluation Process</p><p>• [WCAG 1.0 Level for which conformance was tested using UWEM, e.g. WCAG 1.0 Level A, Double-A]</p><p>• UWEM process</p><p>12.4 Resources List</p><p>Description of the UWEM sample specific to this web site.</p><p>The Sampled Resource List can be found in Appendix [YY/URL].</p><p>12.5 Check Results</p><p>• [Interpretative summary of evaluation results]</p><p>○ [e.g.
this Web site [meets/does not meet] WCAG 1.0 [Level A, Double A]]</p><p>○ [accessibility features in which this site is strong include ______]</p><p>• [Detailed results, structured according to WCAG 1.0 Checklist]</p><p>○ [include links to WCAG 1.0 Checkpoints and Techniques for all non-conformant items]</p><p>○ [provide links to examples of non-conformances]</p><p>○ [attach or link to specific reports in appendices, e.g., output of validators and evaluation tools]</p><p>○ [provide links to recommendations for addressing non-conformant checkpoints]</p><p>• [If available, attach more detailed test results as an Appendix to this evaluation]</p><p>• [Describe or point to a suggested program of ongoing monitoring of Web site accessibility, re-evaluation of authoring tools, etc.]</p><p>12.6 References</p><p>• Web Content Accessibility Guidelines 1.0: http://www.w3.org/TR/WCAG10/</p><p>• WAB cluster: http://www.wabcluster.org</p><p>• Support-EAM project: http://www.support-eam.org</p><p>• BenToWeb project: http://www.bentoweb.org</p><p>• EIAO project: http://www.eiao.org</p><p>• Checklist for Web Content Accessibility Guidelines 1.0: http://www.w3.org/TR/WCAG10/full-checklist.html</p><p>• Techniques for Web Content Accessibility Guidelines 1.0: http://www.w3.org/TR/WCAG10-TECHS/</p><p>• Evaluating Web Sites for Accessibility: http://www.w3.org/WAI/eval/</p><p>• Evaluation, Repair, and Transformation Tools for Web Content Accessibility: http://www.w3.org/WAI/ER/existingtools.html</p><p>• Selecting and Using Authoring Tools for Web Accessibility [draft]: http://www.w3.org/WAI/EO/Drafts/impl/software5.html</p><p>• Review Teams for Evaluating Web Site Accessibility [draft]: http://www.w3.org/WAI/EO/Drafts/review/reviewteams.html</p><p>12.7 Appendices</p><p>Attach any necessary appendices here.</p><p>[end of proposed template]</p><p>13 Appendix C: RDF Schemas for Resources Lists</p><p>13.1 Expressing resources</p><p>As declared in section 4, resources must be expressed via the earl:Content class [EARL10-Schema]. For a detailed description of how to use them, please refer to the aforementioned document, in combination with the HTTP Vocabulary in RDF [HTTP-RDF]. Depending on the type of resource, some of the properties may be optional.</p><p>13.2 Resource Lists</p><p>Resource lists can be expressed as a simple RDF Sequence containing a list of simple or HTTP resources. In the following, we present two examples of how to express a Resource List.</p><p>The first example is a simple Resource List for a server with no content negotiation:</p><p><?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
  xmlns:earl="http://www.w3.org/ns/earl#"
  xmlns:http="http://www.w3.org/2006/http#"
  xml:base="http://example.org/20070401/test#">

<!-- Resource sequence -->
<rdf:Seq rdf:ID="listOfResources">
  <rdf:li rdf:resource="#res1" />
  <rdf:li rdf:resource="#res2" />
  <rdf:li rdf:resource="#res3" />
  <rdf:li rdf:resource="ftp://example.org/css/global.css" />
</rdf:Seq>

<!-- Resources -->
<earl:Content rdf:ID="res1">
  <earl:context rdf:resource="#request1" />
</earl:Content>
<earl:Content rdf:ID="res2">
  <earl:context rdf:resource="#request2" />
</earl:Content>
<earl:Content rdf:ID="res3">
  <earl:context rdf:resource="#request3" />
</earl:Content>
<earl:Content rdf:about="ftp://example.org/css/global.css" />

<!-- Main connection -->
<http:Connection rdf:ID="conn">
  <http:connectionAuthority>example.org:80</http:connectionAuthority>
  <http:request rdf:parseType="Collection">
    <http:Request rdf:about="#request1" />
    <http:Request rdf:about="#request2" />
    <http:Request rdf:about="#request3" />
  </http:request>
</http:Connection>

<!-- Requests -->
<http:GetRequest rdf:ID="request1">
  <http:abs_path>/index.html</http:abs_path>
</http:GetRequest>
<http:GetRequest rdf:ID="request2">
  <http:abs_path>/search.html</http:abs_path>
</http:GetRequest>
<http:GetRequest rdf:ID="request3">
  <http:abs_path>/sitemap.html</http:abs_path>
</http:GetRequest>

</rdf:RDF></p><p>The second example is a little bit more complex, and offers a Resource List where the sitemap is presented depending on the Accept-Language header of the user's User-Agent:</p><p><?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
  xmlns:earl="http://www.w3.org/ns/earl#"
  xmlns:http="http://www.w3.org/2006/http#"
  xml:base="http://example.org/20070401/test#">

<!-- Resource sequence -->
<rdf:Seq rdf:ID="listOfResources">
  <rdf:li rdf:resource="#res1" />
  <rdf:li rdf:resource="#res2" />
  <rdf:li rdf:resource="#res3" />
</rdf:Seq>

<!-- Resources -->
<earl:Content rdf:ID="res1">
  <earl:context rdf:resource="#request1" />
</earl:Content>
<earl:Content rdf:ID="res2">
  <earl:context rdf:resource="#request2" />
</earl:Content>
<earl:Content rdf:ID="res3">
  <earl:context rdf:resource="#request3" />
</earl:Content>

<!-- Main connection -->
<http:Connection rdf:ID="conn">
  <http:connectionAuthority>example.org:80</http:connectionAuthority>
  <http:request rdf:parseType="Collection">
    <http:Request rdf:about="#request1" />
    <http:Request rdf:about="#request2" />
    <http:Request rdf:about="#request3" />
  </http:request>
</http:Connection>

<!-- Requests -->
<http:GetRequest rdf:ID="request1">
  <http:abs_path>/index.html</http:abs_path>
</http:GetRequest>
<http:GetRequest rdf:ID="request2">
  <http:abs_path>/search.html</http:abs_path>
</http:GetRequest>
<http:GetRequest rdf:ID="request3">
  <http:abs_path>/sitemap.html</http:abs_path>
  <http:version>1.1</http:version>
  <http:header rdf:parseType="Collection">
    <http:MessageHeader>
      <http:fieldName
        rdf:resource="http://www.w3.org/2006/http-header#accept" />
      <http:fieldValue rdf:parseType="Collection">
        <http:HeaderElement>
          <http:elementName>text/html</http:elementName>
          <http:param>
            <http:Param>
              <http:paramName>q</http:paramName>
              <http:paramValue>1.0</http:paramValue>
            </http:Param>
          </http:param>
        </http:HeaderElement>
        <http:HeaderElement>
          <http:elementName>application/xml</http:elementName>
          <http:param>
            <http:Param>
              <http:paramName>q</http:paramName>
              <http:paramValue>0.9</http:paramValue>
            </http:Param>
          </http:param>
        </http:HeaderElement>
        <http:HeaderElement>
          <http:elementName>*/*</http:elementName>
          <http:param>
            <http:Param>
              <http:paramName>q</http:paramName>
              <http:paramValue>0.01</http:paramValue>
            </http:Param>
          </http:param>
        </http:HeaderElement>
      </http:fieldValue>
    </http:MessageHeader>
    <http:MessageHeader>
      <http:fieldName
        rdf:resource="http://www.w3.org/2006/http-header#accept-language" />
      <http:fieldValue rdf:parseType="Collection">
        <http:HeaderElement>
          <http:elementName>de-DE</http:elementName>
          <http:param>
            <http:Param>
              <http:paramName>q</http:paramName>
              <http:paramValue>1.0</http:paramValue>
            </http:Param>
          </http:param>
        </http:HeaderElement>
        <http:HeaderElement>
          <http:elementName>de</http:elementName>
          <http:param>
            <http:Param>
              <http:paramName>q</http:paramName>
              <http:paramValue>0.75</http:paramValue>
            </http:Param>
          </http:param>
        </http:HeaderElement>
        <http:HeaderElement>
          <http:elementName>en</http:elementName>
          <http:param>
            <http:Param>
              <http:paramName>q</http:paramName>
              <http:paramValue>0.25</http:paramValue>
            </http:Param>
          </http:param>
        </http:HeaderElement>
        <http:HeaderElement>
          <http:elementName>*</http:elementName>
          <http:param>
            <http:Param>
              <http:paramName>q</http:paramName>
              <http:paramValue>0.01</http:paramValue>
            </http:Param>
          </http:param>
        </http:HeaderElement>
      </http:fieldValue>
    </http:MessageHeader>
  </http:header>
</http:GetRequest>

</rdf:RDF></p><p>13.3 Sample EARL report for test results</p><p>The following is a simple EARL report that presents the results of an evaluation.</p><p><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
  xmlns:earl="http://www.w3.org/ns/earl#"
  xmlns:dc="http://purl.org/dc/elements/1.1/"
  xmlns:dct="http://purl.org/dc/terms/"
  xmlns:foaf="http://xmlns.com/foaf/0.1/">

<earl:Assertion rdf:about="#assertion">
  <earl:subject rdf:resource="http://www.example.org/page1.html"/>
  <earl:assertedBy rdf:resource="#tool"/>
  <earl:test rdf:resource="http://www.wabcluster.org/uwem/tests/guideline-1/#Test1-1_HTML_01"/>
  <earl:result rdf:resource="#result"/>
  <earl:mode rdf:resource="http://www.w3.org/ns/earl#automatic"/>
</earl:Assertion>

<earl:Content rdf:about="http://www.example.org/page1.html">
  <earl:sourceCopy rdf:parseType="Literal" xml:lang="en">
    <html xmlns="http://www.w3.org/1999/xhtml">
      <head>
        <title>Hello World!</title>
      </head>
      <body>
        <p>Hello World!
</p>
      </body>
    </html>
  </earl:sourceCopy>
  <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#date">2007-03-25</dc:date>
</earl:Content>

<earl:Software rdf:about="#tool">
  <dc:title xml:lang="en">checkUWEM</dc:title>
  <dc:description xml:lang="en">The UWEM check tool</dc:description>
  <foaf:homepage rdf:resource="http://example.org/tools/#checkUWEM"/>
  <dct:hasVersion>1.0</dct:hasVersion>
</earl:Software>

<earl:TestResult rdf:about="#result">
  <earl:outcome rdf:resource="http://www.w3.org/ns/earl#fail"/>
  <dc:title xml:lang="en">Missing alt attribute</dc:title>
  <dc:description rdf:parseType="Literal" xml:lang="en">
    <div xmlns="http://www.w3.org/1999/xhtml">
      <p>The <code>alt</code> attribute is missing in the <code>img</code> element.</p>
    </div>
  </dc:description>
  <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#date">2007-03-28</dc:date>
</earl:TestResult>

</rdf:RDF></p><p>14 Appendix D: Contributors to this and previous versions of this document</p><p>• Eric Velleman</p><p>• Mikael Snaprud</p><p>• Carlos A Velasco</p><p>• Dominique Burger</p><p>• Jesse Tan</p><p>• Colin Meerveld</p><p>• Barry McMullin</p><p>• Nils Ulltveit-Moe</p><p>• Johannes Koch</p><p>• Annika Nietzio</p><p>• Helen Petrie</p><p>• Jenny Craven</p><p>• Alan Chuter</p><p>• Gerhard Weber</p><p>• Christophe Strobbe</p><p>• Sandor Herramhof</p><p>• Evangelos Vlachogiannis</p><p>• Pierre Guillou</p><p>• Alistair Garrison</p><p>• Olaf Perlick</p><p>• Yehya Mohamad</p><p>• Henrike Gappa</p><p>• Gabriele Nordbrock</p><p>• Dirk Stegemann</p><p>• Finn Aslaksen</p><p>• Torben Bach Pedersen</p><p>• Patrizia Bertini</p><p>• Terje Gjøsæter</p><p>• Morten Goodwin Olsen</p><p>• Ole-Christoffer Granmo</p><p>• Andreas Prinz</p><p>• Agata Sawicka</p><p>• Edith
Helene Unander</p><p>15 Appendix E: W3C® DOCUMENT LICENSE</p><p>http://www.w3.org/Consortium/Legal/2002/copyright-documents-20021231</p><p>Public documents on the W3C site are provided by the copyright holders under the following license. By using and/or copying this document, or the W3C document from which this statement is linked, you (the licensee) agree that you have read, understood, and will comply with the following terms and conditions:</p><p>Permission to copy, and distribute the contents of this document, or the W3C document from which this statement is linked, in any medium for any purpose and without fee or royalty is hereby granted, provided that you include the following on ALL copies of the document, or portions thereof, that you use:</p><p>1. A link or URL to the original W3C document.</p><p>2. The pre-existing copyright notice of the original author, or if it doesn't exist, a notice (hypertext is preferred, but a textual representation is permitted) of the form: "Copyright © [$date-of-document] World Wide Web Consortium, (Massachusetts Institute of Technology, European Research Consortium for Informatics and Mathematics, Keio University). All Rights Reserved. http://www.w3.org/Consortium/Legal/2002/copyright-documents-20021231"</p><p>3. If it exists, the STATUS of the W3C document.</p><p>When space permits, inclusion of the full text of this NOTICE should be provided. We request that authorship attribution be provided in any software, documents, or other items or products that you create pursuant to the implementation of the contents of this document, or any portion thereof.</p><p>No right to create modifications or derivatives of W3C documents is granted pursuant to this license.
However, if additional requirements (documented in the Copyright FAQ) are satisfied, the right to create modifications or derivatives is sometimes granted by the W3C to individuals complying with those requirements.</p><p>THIS DOCUMENT IS PROVIDED "AS IS," AND COPYRIGHT HOLDERS MAKE NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, OR TITLE; THAT THE CONTENTS OF THE DOCUMENT ARE SUITABLE FOR ANY PURPOSE; NOR THAT THE IMPLEMENTATION OF SUCH CONTENTS WILL NOT INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS.</p><p>COPYRIGHT HOLDERS WILL NOT BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF ANY USE OF THE DOCUMENT OR THE PERFORMANCE OR IMPLEMENTATION OF THE CONTENTS THEREOF.</p><p>The name and trademarks of copyright holders may NOT be used in advertising or publicity pertaining to this document or its contents without specific, written prior permission. Title to copyright in this document will at all times remain with copyright holders.</p><p>This formulation of W3C's notice and license became active on December 31 2002. This version removes the copyright ownership notice such that this license can be used with materials other than those owned by the W3C, moves information on style sheets, DTDs, and schemas to the Copyright FAQ, reflects that ERCIM is now a host of the W3C, includes references to this specific dated version of the license, and removes the ambiguous grant of "use". See the older formulation for the policy prior to this date. Please see our Copyright FAQ for common questions about using materials from our site, such as the translating or annotating specifications.
Other questions about this notice can be directed to [email protected].</p><p>Joseph Reagle <[email protected]></p><p>Last revised $Id: copyright-documents-20021231.html,v 1.6 2004/07/06 16:02:49 slesch Exp $</p>
