
ABSTRACT

LIN, JUAN. Factors Affecting the Perception and Measurement of Optically Brightened Textiles. (Under the direction of Prof. Renzo Shamey).

Whites comprise approximately 3% of the total volume of CIELAB color space, but their importance far exceeds the small volume of the color solid that they occupy.

Textile products commonly contain many different textures and patterns. Variations in surface roughness can significantly affect the colorimetric attributes of textile substrates. In addition to texture, other influential factors include background color, luminance in the viewing field, physical size of samples, and sample presentation mode. Moreover, the effect of surface texture on the perception and measurement of white substrates, including those containing fluorescent brightening agents (FBAs), is not fully understood.

It is well established that surface roughness influences color perception. A number of recommended color-difference formulae, e.g., CMC (l:c), CIE94 and CIEDE2000, include adjustment factors to account for the varying interaction of light with different surfaces. More than 100 whiteness indices have also been developed. The main factors influencing the performance of these equations include:

1. the visual experimental data on which they are based;

2. the accuracy and precision of the spectrophotometers used to measure optically brightened materials;

3. the accuracy of the correlation model between measured and visual data; and

4. the uniformity of the SPD of the light sources used for visual assessments in viewing booths.

A number of studies have reported unsatisfactory correlations between visual responses and CIE whiteness models, especially for tinted white samples. The unsatisfactory performance is not solely due to errors in the formula, but may be due to one or more critical variables that currently are not adequately controlled. These variables include:

1. differences in geometry and light sources between the spectrophotometers used for measurement of fluorescent materials; and

2. unknown or non-standardized UV emission of the lamps used in standard viewing booths.

The objective of this research is to investigate several factors affecting the perception and measurement of optically brightened white textiles, with a view to determining whether the performance of the whiteness index can be improved. Several approaches were examined to achieve this objective:

1. Preparation of textile sample sets with various textures to be used in visual and instrumental evaluation of whiteness;

2. Visual/instrumental evaluation of the effect of texture on lightness and whiteness under sources simulating illuminant D65;

3. Visual/instrumental evaluation of the effect of texture on perceived whiteness under sources simulating illuminant D75;

4. Visual/instrumental evaluation of the effect of texture on perceived whiteness under light source/illuminant A and source U30;

5. Examination of the role of UV content in viewing booths simulating illuminants D65 and D75, and determination of the effect of UV on visual assessment and instrumental measurement of fluorescent white samples;

6. Assessment of the uniformity of a monitor and examination of the uniformity boundary of the screen for display of white textile samples;

7. Development of software incorporating a color management system to generate patches containing simulated textures;

8. Generation of images of the textured substrates with similar lightness and whiteness properties using linear transformation methods;

9. Design of a visual assessment protocol to evaluate perceived whiteness, and conduct of visual assessments using reference anchors based on a forced-ranking method;

10. Examination of the effect of background on the perception of simulated white textiles on the monitor;

11. Analysis of the whiteness perception results based on texture variation;

12. Correlation of responses from perceived whiteness of knitted textures to perceived whiteness of simulated textures; and

13. Modification and examination of the whiteness index by incorporating the effect of texture.

© Copyright 2013 by Juan Lin

All Rights Reserved

Factors Affecting the Perception and Measurement of Optically Brightened White Textiles

by Juan Lin

A dissertation submitted to the Graduate Faculty of North Carolina State University in partial fulfillment of the requirements for the degree of Doctor of Philosophy

Fiber and Polymer Science

Raleigh, North Carolina

2013

APPROVED BY:

_______________________________          _______________________________
Dr. Renzo Shamey                          Dr. David Hinks
Committee Chair

_______________________________          _______________________________
Dr. Henry Joel Trussell                   Dr. Douglas Gillan

DEDICATION

To my whole family, my mother, father and sister, for without their help and support, graduate school would surely have been impossible.

To my boyfriend, who has been accompanying and helping me throughout this long journey.

BIOGRAPHY

Juan Lin grew up in Shanghai, China. She finished her undergraduate studies at the

Information Engineering University, China, in 2006 and then received an MS in image

processing in 2009. Her main research interests include color science, image processing,

color measurement and color management.

ACKNOWLEDGMENTS

The author would like to thank Prof. Renzo Shamey, for his advice, support, patience,

guidance and financial support throughout this research; and Prof. Joel Trussell, member of the advisory committee, for his great contribution to the author's development of image processing skills. Their continuous input and encouragement provided immeasurable support for the author. Also, the author would like to extend her gratitude to Prof. David Hinks and

Prof. Douglas Gillan, members of the advisory committee, for their valuable suggestions and encouragement.

In addition, the author thanks all observers who willingly participated in the visual assessments related to this work.

The author is very grateful to Mr. Jeff Krauss for the help received in bleaching and brightening fabric samples and Mr. Brian Davis for the help in setting up the machinery which enabled the author to prepare knitted samples with various textures.

Thanks are also due to Dr. Yuzheng Lu and Mr. Renbo Cao, for their help and suggestions during experiments.

Finally, the author would like to thank Ms. Wenwen Zhang, Mr. Nanshan Zhang, Ms. Ting

He and Mr. Xiaofeng Qin for their continued love, support, help and encouragement throughout this study.

TABLE OF CONTENTS

LIST OF FIGURES ...... xiii
LIST OF TABLES ...... xxii
TERMS AND NOMENCLATURE ...... xxvii
I. Introduction ...... 1
II. Literature Review ...... 2
1. Color Vision and Factors Affecting Color Perception ...... 2
1.1 Perception of Color ...... 2
1.1.1 Structure of the Eye ...... 2
1.1.2 Theories of Color Vision ...... 8
1.1.2.1 Trichromatic Theory ...... 8
1.1.2.2 Opponent Color Vision Theory ...... 9
1.1.2.3 Zone Theory ...... 11
1.1.3 Color Constancy ...... 13
1.1.3.1 Color Contrast ...... 14
1.1.3.2 Lightness Crispening ...... 20
1.2 White and Whiteness ...... 21
1.3 Illuminants and Light Sources ...... 27
1.3.1 ...... 29
1.3.2 Light Sources ...... 31
1.3.3 CIE Standard Illuminants ...... 33
1.3.4 Fluorescent Lamps and Tubes ...... 37
1.3.5 LEDs ...... 39
1.4 Texture Analysis ...... 40
1.4.1 Definition of Texture ...... 40
1.4.2 Effect of Texture on Color Perception ...... 42
1.5 Improving the Whiteness of Material ...... 49
1.5.1 Bleaching and Bluing ...... 50
1.5.2 Application of FBAs and Optical Brightening ...... 51
1.5.3 Fluorescence ...... 52
2. Measuring Whiteness ...... 55
2.1 Whiteness Formulas ...... 55
2.1.1 One-dimensional Whiteness Formulas ...... 55
2.1.2 Two-dimensional Whiteness Formulas ...... 63
2.2 Instrumental Assessment ...... 67
2.2.1 The Spectrophotometer ...... 67
2.2.2 The Spectroradiometer ...... 72
2.2.3 Effect of UV Content on Whiteness ...... 75
3. Psychophysical Evaluation of Whiteness ...... 76
3.1 Ordinal Scaling Methods ...... 78
3.1.1 Rank Order Method ...... 78
3.1.2 Paired-Comparison Method ...... 80
3.2 Psychophysical Experiments ...... 82
3.2.1 Factors Effecting Visual Assessments ...... 82
3.2.2 Observers' Evaluation ...... 84
3.2.2.1 Average Rank ...... 84
3.2.2.2 PF/3 ...... 85
3.2.2.3 Standardized Residual Sum of Squares (STRESS) ...... 86
3.2.2.4 Correlation Coefficient ...... 88
4. Surface Texture ...... 89
4.1 Overview of Various Methods for Texture Analysis ...... 90
4.1.1 Grey-level Histogram ...... 91
4.1.2 Grey Level Co-occurrence Matrices ...... 93
4.1.3 Discrete Fourier Transformation ...... 95
4.2 Modulation Transform Function ...... 96
III. Experimental Methodology and Results ...... 99
1. The Effect of Texture on Perception and Measurement of White Knitted Textiles under D65 Illumination ...... 99
1.1 Experimental ...... 100
1.1.1 Preparation of Samples ...... 100
1.1.2 Instrumental Measurement ...... 104
1.1.3 Visual Assessment ...... 106
1.2 Data Analysis ...... 107
1.2.1 Cluster Analysis ...... 108
1.2.2 Weighted Probability Analysis ...... 108
1.3 Results and Discussion ...... 111
1.3.1 The Effect of Texture on Measured and Perceived Lightness ...... 112
1.3.2 Correlation between Perceived Lightness and Perceived Whiteness ...... 115
1.3.3 Correlation between L* and Perceived Whiteness ...... 117
1.3.4 The Effect of Texture on Whiteness ...... 119
1.4 Conclusions ...... 123
2. The Effect of Light Source on Perception and Whiteness of Knitted Structures ...... 125
2.1 Experimental ...... 125
2.2 Data Analysis ...... 126
2.3 Results and Discussions ...... 131
2.3.1 Correlation between Perceived Lightness and Measured Lightness ...... 132
2.3.2 Correlation between Perceived Whiteness Rank and Mean Perceived Lightness ...... 135
2.3.3 Effect of Texture on Perceived Whiteness ...... 137
2.4 Conclusions ...... 140
3. Factors Affecting the Whiteness of Optically Brightened Material ...... 141
3.1 Methods ...... 142
3.1.1 Materials ...... 142
3.1.2 Preparation of Fluorescent Cotton Samples ...... 143
3.1.3 Spectrophotometric Measurement of Materials ...... 143
3.1.4 Spectrophotometric Measurement of Light Sources ...... 144
3.1.5 Determination of UV Spectra ...... 147
3.1.6 Perceptual Assessments ...... 149
3.2 Results and Discussions ...... 151
3.3 Conclusions ...... 170
4. The Investigation of Spatial Uniformity and Whiteness Boundary of an EIZO Monitor ...... 172
4.1 Calibration of the LCD Monitor ...... 172
4.2 Spatial Uniformity of Monitor ...... 173
4.3 Whiteness Boundary of an EIZO Monitor ...... 176
4.3.1 Experimental ...... 177
4.3.1.1 Samples Preparation ...... 177
4.3.1.2 Sample Color Measurement ...... 178
4.3.2 Visual Assessment ...... 179
4.3.3 Results and Discussions ...... 182
5. Investigation of Various Methods to Generating Texture ...... 188
5.1 Mapping Samples Used to Determine Device Dependent Whiteness Boundary ...... 188
5.1.1 Polynomial Regression Method ...... 189
5.1.2 Artificial Neural Networks ...... 191
5.1.3 Look-Up-Table Method ...... 194
5.2 Comparison of Color Space Mapping between Polynomial Regression and Neural Network Methods ...... 194
5.3 Generating Texture Images with Different Surfaces ...... 197
5.3.1 ICC Color Management ...... 198
5.3.1.1 PCS ...... 199
5.3.1.2 ICC Profile ...... 199
5.3.1.3 Rendering Intent ...... 200
5.3.2 Little CMS ...... 202
5.3.2.1 Assessment of Accuracy with EIZO Monitor Profile ...... 205
5.3.2.2 Measurement of Woolen Samples with a SF 600X Spectrophotometer ...... 207
5.3.2.3 Measurement of Woolen Samples with a PR-670 Spectroradiometer ...... 208
5.3.2.4 A Comparison of Accuracy Different Rendering Intents ...... 211
5.3.2.5 Conclusions ...... 217
5.3.3 Generated Texture Images with Similar Lightness and Whiteness based on woolen Samples ...... 219
5.3.3.1 The Effect of Surround on the Measured Value of Displayed Images ...... 219
5.3.3.2 Generation of Texture Images with Similar L*a*b* Values ...... 222
5.3.3.3 Generated Anchor (reference) Images ...... 228
5.3.3.4 Preliminary Design of Experiment ...... 229
5.3.3.5 Adjustment of Whiteness of Displayed Images ...... 233
5.3.3.6 Generation of A New Set of Images based on the AATCC Standard ...... 234
5.3.3.7 Generation of a New Set of Texture Images with Improved Whiteness ...... 235
5.3.3.8 Visual Assessment ...... 239
5.3.3.9 Results and Discussions ...... 245
6. Visual Perception of Texture ...... 257
6.1 Investigation of the Effects of Texture on Color Perception ...... 257
6.1.1 Experimental Preparation ...... 259
6.1.2 Frequency Content and Color Perception ...... 259
6.1.3 Results of Preliminary Study ...... 265
6.1.4 Summary and Conclusions ...... 267
6.2 Influence of Texture on Perceived Whiteness of Objects ...... 268
6.2.1 Generating White Textured Images ...... 268
6.2.2 Visual Assessment of Whiteness ...... 270
6.2.3 Features based on Texture Analysis ...... 270
6.2.3.1 Transformation to Grey Images ...... 271
6.2.3.2 Roughness ...... 272
6.2.3.3 Directionality ...... 274
6.2.3.4 Density ...... 276
6.2.4 The Influence of Texture-determined Factors on Perceived Whiteness ...... 278
6.2.5 Conclusions ...... 282
7. Conclusions and Future Work ...... 288
APPENDICES ...... 322

LIST OF FIGURES

Figure 1. Cross section of the structure of the human eye. ...... 3
Figure 2. Schematic diagram of the human retina. ...... 5
Figure 3. The distribution of rods and cones in the human retina. ...... 6
Figure 4. Diagram representing zone color vision theory. ...... 12
Figure 5. Successive contrast. ...... 15
Figure 6. Simultaneous contrast in lightness. ...... 17
Figure 7. Assimilation / spreading effect. ...... 19
Figure 8. Lightness crispening effect. ...... 21
Figure 9. Spectral reflectance of white textures. ...... 22
Figure 10. Chromaticity loci of perceived white objects. ...... 25
Figure 11. The effect of additive mixing and subtractive mixing on lightness. ...... 27
Figure 12. Spectrum distribution of sun light. ...... 28
Figure 13. Relative SPD in visible region normalized at 555 nm. ...... 30
Figure 14. Relative SPD of different phases of daylight, normalized at 555 nm: (a) cloud-free zenith skylight, (b) cloud-free north skylight, (c) overcast skylight, (d) medium daylight, and (e) direct sunlight. ...... 32
Figure 15. Relative SPD of illuminant A. ...... 34
Figure 16. Relative SPD curves of illuminants B and C. ...... 35
Figure 17. Relative SPD of CIE illuminant D series. ...... 36
Figure 18. Relative SPD of illuminant F series. ...... 37
Figure 19. SPDs of three types of fluorescent lamp. ...... 38
Figure 20. Semiconductor junction laser. ...... 39
Figure 21. Light propagation in a fiber. ...... 42
Figure 22. Light propagation in a colored medium. ...... 43
Figure 23. Polar distribution of reflected light for various surfaces. ...... 44
Figure 24. Schematic model of light reflection from a woven textile. ...... 45
Figure 25. KES-FB4 surface tester. ...... 46
Figure 26. Effect of fluorescence on the spectral reflectance. ...... 54
Figure 27. Basic features of a dual-beam spectrophotometer (Datacolor SF500). ...... 70
Figure 28. Flowchart of measuring containing FWA based on ...... 71
Figure 29. The diagram of a scanning telespectroradiometer (Bentham instrument). ...... 73
Figure 30. Sample arrangement in grey-scale assessments. ...... 82
Figure 31. Texture segmentation. ...... 90
Figure 32. Typical modulation transfer function. ...... 98
Figure 33. Bleached woolen yarn, scoured woolen knitted fabric, bleached woolen fabric, bleached & optically brightened woolen fabric (from left to right). ...... 102
Figure 34. Different surface textures examined in the study. ...... 103
Figure 35. Procedure employed for the perceptual assessment of white textile substrates. ...... 107
Figure 36. The weighted probability of different woolen textures being ranked as the most white. ...... 111
Figure 37. Correlation of mean perceived lightness magnitude against L* for woolen samples. ...... 112
Figure 38. Correlation of mean observer lightness rank against L* for cotton and woolen samples separately (a) and as a group (b). ...... 114
Figure 39. Correlation of mean observer lightness ranks against mean whiteness ranks for woolen (a) and cotton (b) samples. ...... 116
Figure 40. Mean observer lightness rank against mean whiteness rank for woolen and cotton samples based on texture. ...... 117
Figure 41. Correlation between perceived whiteness and measured lightness (L*) of samples separately (a) and as a group (b). ...... 118
Figure 42. Correlation of observer's rank against CIE Whiteness Index of knitted woolen samples. ...... 120
Figure 43. Textured samples ranked based on perceived whiteness from least white (left) to most white (right). ...... 121
Figure 44. Correlation of observer whiteness rankings against CIE Whiteness Index values of knitted cotton samples. ...... 122
Figure 45. Weighted probability of different cotton and woolen structures being ranked as the most white. ...... 127
Figure 46. Correlation of mean perceived lightness against measured lightness for cotton under illuminant U30. ...... 133
Figure 47. Correlation of mean perceived lightness against measured lightness for cotton under illuminant A. ...... 133
Figure 48. Correlation of mean perceived lightness against measured lightness for wool under source U30. ...... 134
Figure 49. Correlation of mean perceived lightness against measured lightness for wool under illuminant A. ...... 134
Figure 50. Correlation of perceived whiteness rank against mean perceived lightness for cotton under source U30. ...... 135
Figure 51. Correlation of perceived whiteness rank against mean perceived lightness for cotton under illuminant A. ...... 136
Figure 52. Correlation of perceived whiteness rank against mean perceived lightness for wool under source U30. ...... 136
Figure 53. Correlation of perceived whiteness rank against mean perceived lightness for wool under illuminant A. ...... 137
Figure 54. Correlation of perceived whiteness rank against CIE Whiteness Index for cotton under source U30. ...... 138
Figure 55. Correlation of perceived whiteness rank against CIE Whiteness Index for cotton under source A. ...... 138
Figure 56. Correlation of perceived whiteness rank against CIE Whiteness Index for wool under source U30. ...... 139
Figure 57. Correlation of perceived whiteness rank against CIE Whiteness Index for wool under source A. ...... 139
Figure 58. SPD of standard illuminants D65, D75 and the simulated daylight sources in the viewing booths, including the simulated daylight sources with a supplementary UV source. ...... 147
Figure 59. The arrangement used to block approximately 25% of the UV radiation using opaque dark gray cardboard rings placed around the UV light bulb. ...... 149
Figure 60. Visual assessment of optically brightened white samples under varying UV levels. ...... 150
Figure 61. Measured spectral radiance curves of (a) the PTFE plate (b) untreated cotton substrate (c) 0.025% FBA treated (d) 0.25% FBA treated and (e) 2.5% FBA treated white materials irradiated at various relative UV intensities as measured by a reflectance spectrophotometer using illuminant D65. ...... 153
Figure 62. CIE WI of white samples calculated from measurements with a spectrophotometer employing sources D65 (a) and D75 (b) filtered to contain various UV contents. ...... 154
Figure 63. Uchida WI of white samples calculated from measurements with a reflectance spectrophotometer employing sources D65 (a) and D75 (b) filtered to contain various UV contents. ...... 155
Figure 64. The correlation between perceived whiteness and predicted whiteness from Uchida and CIE WI under D65 for all UV levels (a). The correlation between perceived whiteness and predicted whiteness from Uchida and CIE WI under D75 for all UV levels (b). ...... 158
Figure 65. Spectral irradiance of fluorescent white materials illuminated with D65 (a) and D65 +UV (b) sources in a SpectraLight III viewing booth determined radiometrically. Spectral irradiance of fluorescent white materials illuminated with D75 (c) and D75 +UV (d) sources in a SpectraLight III viewing booth determined radiometrically. ...... 160
Figure 66. Comparison of spectral irradiance curves measured over the surface of untreated (a), 0.025% FBA treated (b), 0.25% FBA treated (c) and 2.5% FBA treated (d) white materials between D65 against D75 and D65+UV against D75+UV in SpectraLight III viewing booths. ...... 162
Figure 67. Spectral irradiance curves measured in various illuminant combinations in a SpectraLight III viewing booth. ...... 163
Figure 68. Variations in total UV energy measured by summing up the spectral irradiance from the surface of optically brightened samples illuminated under different conditions in SpectraLight III viewing booths. ...... 164
Figure 69. Perceived whiteness of five samples treated with various amounts of FBA and evaluated under ten different illumination conditions. ...... 166
Figure 70. Schematic representation of the division of an EIZO monitor panel. ...... 174
Figure 71. The selected blocks (76 in total) depicted in the L*a*b* color space. ...... 179
Figure 72. Viewing and display configuration (10° field of view). ...... 181
Figure 73. Samples display in the center of EIZO screen. Color of the block was changed ...... 182
Figure 74. The relationship between perceptibility responses and color difference magnitudes. ...... 186
Figure 75. Samples considered as white at acceptable threshold of 50%. ...... 187
Figure 76. Schematic diagram of an artificial neural network with one hidden layer. ...... 192
Figure 77. Schematic diagram of how ICC profile works. ...... 200
Figure 78. Transformation of a bitmap image between a scanner profile and EIZO monitor profile. ...... 203
Figure 79. The interface generated using Visual Studio 2008, Qt Project including little CMS. ...... 204
Figure 80. Colorimetric distribution of selected data in L*a*b* color space. ...... 205
Figure 81. Schematic representation of the methodology to test the accuracy of EIZO monitor profile. ...... 206
Figure 82. Calibration of Spectroradiometer PR-670. ...... 209
Figure 83. Measurement of 10 woolen samples with Spectroradiometer PR-670. ...... 209
Figure 84. The interface generated with STS of displaying the scanned woolen sample. ...... 212
Figure 85. A schematic demonstration of the position of the black dot in PR-670 spectroradiometer's view finder focused in an image during measurements. ...... 220
Figure 86. The 10 normalized images with the approximate L*a*b* values displayed on an EIZO monitor. ...... 226
Figure 87. Investigation of the uniformity of the EIZO monitor. ...... 227
Figure 88. Display arrangement of the normalized images in the center of the monitor. ...... 230
Figure 89. Two anchors with L* values of 70 and 98 respectively (left to right), displayed above the texture image, and measured with a PR-670 spectroradiometer (a); three anchor samples with L* values of 75, 80 and 85 respectively (L to R), displayed above the texture image and measured with a PR-670 spectroradiometer (b). ...... 232
Figure 90. Display of the 10 normalized woolen texture images with improved whiteness. ...... 236
Figure 91. Visual assessment of 10 normalized texture images using three anchors. ...... 238
Figure 92. Visual assessment of normalized textured white image using three anchor samples on top. ...... 240
Figure 93. The interface designed for the visual assessment of textured images. ...... 242
Figure 94. Twelve texture images displayed mostly within 6 blocks. ...... 243
Figure 95. The interface designed for the visual assessment of textured images. ...... 244
Figure 96. The mechanism of lateral inhibition affecting the perceived whiteness of the twelve texture images. ...... 248
Figure 97. Segmentation from stripe A (front) and stripe B (back). ...... 253
Figure 98. MTF of human eye. ...... 262
Figure 99. Total Energy of the L-band of Camera vs. L* measured by spectrophotometer. ...... 264
Figure 100. Difference of total energy and weighted energy vs. perceived lightness. ...... 267
Figure 101. Scanned images of the woolen knitted samples representing various textures examined. ...... 269
Figure 102. The relationship between roughness and wP. ...... 279
Figure 103. The relationship between directionality and wP. ...... 280
Figure 104. The relationship between density and wP. ...... 280

LIST OF TABLES

Table 1. Constants for Illuminants A, C and D65 for 2° and 10° observers. ...... 58
Table 2. Illustration of data matrix R. ...... 79
Table 3. Gray Scale grades corresponding with CIELAB Color Difference. ...... 81
Table 4. Some texture features extracted from Gray Level Co-occurrence Matrices. ...... 95
Table 5. CIE L* and perceived lightness of textured woolen samples. ...... 104
Table 6. CIE whiteness index and L* of textured samples. ...... 105
Table 7. Percentage weighted probabilities of different woolen samples ranked from most to least white. ...... 110
Table 8. Percent probabilities of cotton and woolen samples being ranked ...... 128
Table 9. Mean perceived lightness for each sample for wool and cotton samples. ...... 131
Table 10. Units for radiance and irradiance. ...... 146
Table 11. Mean inter- and intra-subject variability (in CIEWI units) in determination of perceived whiteness. ...... 156
Table 12. Effect of variations in UV on measured CIE whiteness index of non-brightened and (0-2.5%) FBA treated samples. Correlation coefficients are reported for each UV level shown in the column and the corresponding PW values under the same relative UV amounts used for perceptual assessments in viewing booths (i.e., 25% vs. 25%). ...... 165
Table 13. Effect of UV content on perceived whiteness (PW) of non-brightened and (0-2.5%) FBA treated samples. Correlation coefficients are reported between each level and the calibrated UV level in the spectrophotometer based on AATCC TM110. ...... 167
Table 14. Effect of variations in UV content on perceived whiteness (PW) of non-brightened and (0-2.5%) FBA treated samples. Results in each column show % mean change in perceived whiteness of each sample for the UV intensity level shown against zero supplementary UV. ...... 168
Table 15. Colorimetric values of 30 blocks covering the entire screen of the EIZO display. ...... 175
Table 16. The color difference, ΔE*ab, between each of the 29 blocks against the reference block H4V3. ...... 176
Table 17. RGB values of selected blocks for examining the whiteness boundary. ...... 178
Table 18. Parameter estimates. ...... 184
Table 19. Chi-Square tests. ...... 184
Table 20. Augmented matrix used in polynomial regression method. ...... 190
Table 21. Model performance, in terms of ΔE*ab, using various polynomials for five components. ...... 195
Table 22. Model performance, in terms of ΔE*ab, by various polynomials for all 76 color samples. ...... 195
Table 23. Model performance, in terms of ΔE*ab, by MLPs for five components. ...... 196
Table 24. Model performance, in terms of ΔE*ab, by MLPs for all 76 color samples. ...... 197
Table 25. Color difference between (Lab)CMS and (Lab)m for selected 180 samples. ...... 207
Table 26. L*a*b* values measured with a Datacolor SF600X Spectrophotometer with UV and without UV. ...... 208
Table 27. L*a*b* values obtained using a PR-670 Spectroradiometer with and without UV. ...... 210
Table 28. The color difference, ΔE*ab, between SF600X photometer and PR-670 radiometer with UV and without UV. ...... 211
Table 29. L*a*b* values of 10 scanned woolen samples displayed on EIZO monitor measured with spectroradiometer PR-670 under four rendering intents. ...... 214
Table 30. Color differences calculated for 10 woolen samples under different rendering. ...... 215
Table 31. Color difference obtained when samples displayed in the viewing booth and measured with a radiometer versus those measured with a spectrophotometer SF600X. ...... 217
Table 32. L*a*b* values of 6 images with different background settings. ...... 221
Table 33. The mean, maximum and minimum L*a*b* values of 10 scanned texture ...... 223
Table 34. L*a*b* values of 10 normalized images, minimum, maximum, mean as well as L*a*b* differences. ...... 223
Table 35. The mean, maximum and minimum Lab values of 10 scanned ...... 224
Table 36. L*a*b* values of 10 normalized images, minimum, maximum, mean and L*a*b* differences. ...... 225
Table 37. L*a*b* values of a target image corresponding to block positions 8, 9, 10, 11, 14, 15, 16 and 17 on an EIZO monitor. ...... 227
Table 38. L*a*b* values of normalized reference images (anchors). ...... 229
Table 39. L*a*b* values of normalized images displayed in the center of the monitor and measured by a PR-670 spectroradiometer. ...... 231
Table 40. Whiteness and Tint values of the 10 normalized images based on measured XYZ values. ...... 234
Table 41. XYZ, L*a*b*, whiteness and tint values of 13 normalized AATCC Std. samples. ...... 235
Table 42. XYZ, L*a*b*, CIEWI and Tint values of the 10 converted woolen texture images. ...... 237
Table 43. Ordered sample number and relative perceived whiteness of 10 texture images displayed under three backgrounds with different lightness. ...... 245
Table 44. Mean, minimum and maximum relative perceived whiteness in three different backgrounds. ...... 246
Table 45. Correlations for imaging texture parameters; PL represents Perceived Lightness. ...... 266
Table 46. Roughness of the samples tested together with weighted probability (wP) of sample appearing as most white. ...... 273
Table 47. Directionality of the samples tested together with weighted probability of sample appearing as most white (wP). ...... 276
Table 48. Density of the scanned textures together with weighted probability of sample appearing as most white (wP). ...... 277
Table 49. Results of the regression analysis. ...... 281
Table 50. WINCSU and weighted probability (wP) of sample appearing as most white, normalized for TUVCS at 100, for illuminant D65. ...... 285
Table 51. Weighted probability (wP) of sample appearing as most white and WINCSU under U30 and A. ...... 286

TERMS AND NOMENCLATURE

CIE: The Commission Internationale de l'Eclairage, or the International Commission on Illumination (ICI).

Lightness: The brightness of an area judged relative to the brightness of a similarly illuminated area that appears to be white or highly transmitting.

Luminance: The total integrated luminous flux for all wavelengths, emitted per unit solid angle, per unit of projected area in a given direction, of the luminous surface.

Illuminance: The area density of luminous flux received by an illuminated body, integrated for all wavelengths and all directions.

SPD: Spectral power distribution; a plot showing the variation across the electromagnetic spectrum of the radiant power emitted by a light source per unit wavelength.

Irradiance: The area density of radiant flux received by an illuminated body, integrated for all wavelengths and all directions; the radiant power per unit area incident onto a surface, with units of watts per square meter.

Radiance: A measure of the power emitted from a source or a surface, rather than incident upon a surface, per unit area per unit solid angle, with units of watts per square meter per steradian.

Brightness: Attribute of a visual sensation according to which an area appears to emit more or less light.

Fluorescence: The phenomenon by which a compound absorbs UV radiation and, after a small loss of energy within the excited state, re-emits visible light at a longer wavelength than that absorbed.

Whiteness: In colorimetry, the degree to which a surface is perceived as white.

Chroma: Colorfulness of an area judged as a proportion of the brightness of a similarly illuminated area that appears white or highly transmitting.

Saturation: Attribute of a visual sensation according to which an area appears to exhibit more or less chromatic color, judged in proportion to its brightness.

Tint: In color theory, the mixture of a color with white, which increases lightness; a shade is the mixture of a color with black, which decreases lightness.

I. Introduction

The primary goal of this research was to examine the performance of the current CIE

whiteness index formulae and the effect of texture on perceived and measured whiteness. In

the course of this exercise, several aspects including illumination type, UV content in light

sources and viewing booths, and especially the effect of texture on perception and

measurement of whiteness were investigated to determine which factor(s) should be further

considered in development of a new whiteness index. To examine the effect of texture on

perceived whiteness, textile samples with different surface features were evaluated under

sources simulating illuminants D65, D75, A and source U30. Factors influencing the

perception of optically brightened materials were studied in two SpectraLight III (SPL III) viewing booths equipped with D65 or D75 sources. In addition, samples were scanned and displayed on a color-calibrated EIZO monitor. The display uniformity and whiteness boundary of the EIZO monitor were investigated before the different surface textures were simulated and displayed on the monitor to a panel of observers. The effect of background luminance on the perception of displayed images was also examined. Lateral inhibition and figure-ground concepts involved in the occurrence of contrast or assimilation were used to analyze observers' perceived whiteness results from a psychophysical perspective. The modulation transfer function of the eye and texture features were also examined for their correlation with the perceived lightness and whiteness of physical and simulated samples.


II. Literature Review

The visual perception of colored objects requires three essential elements: a light source, an object and the eye. All three influence the appearance of the object as seen by the observer.

While vision is common in the animal kingdom, color vision is limited to species with color receptors. To understand color vision, one must first pay attention to the physics of light and its interaction with objects. Color vision is the capacity of an organism or machine to distinguish objects based on the spectral distribution of the light they reflect or emit. Two complementary theories of color vision are the trichromatic theory and the opponent-process theory [1]. Color receptors sample the spectral distribution of the incoming light and produce signals that are processed by the brain, resulting in the perception of color.

1. Color Vision and Factors Affecting Color Perception

1.1 Perception of Color

1.1.1 Structure of the Eye

The eyes of most vertebrate animals, from fish to mammals, have a similar basic structure. A schematic diagram of the human eye is shown in Figure 1 [2].


Figure 1. Cross section of the structure of the human eye.

In general, the eyes of all animals resemble cameras: the lens of the eye forms an inverted image of objects in front of it and projects it onto the light-sensitive retina, which corresponds to the film in a conventional camera. The cornea at the front of the eye serves as a simple fixed lens that begins to gather and concentrate light so that it eventually forms a sharp image on the rear interior surface of the eye. The colored membrane behind the cornea, surrounding a central hole, is the iris; light enters through this hole, called the pupil. In bright light, the pupil may contract reflexively to as little as 2 mm in diameter. This constriction serves an important function, similar to reducing the size of the aperture in a camera; conversely, the pupil increases in size to let in more light under dim illumination.

The lens in most vertebrate eyes is located directly behind the pupil. Since the curvature of the lens determines the amount by which light entering it is bent, its shape is critical in bringing an image into focus at the rear of the eye. The process by which the lens changes its focus is called accommodation. In other words, variation in lens shape allows the eye to adjust its optical power to maintain a focused image of objects at different distances; this process is affected by natural ageing. With aging, the quality of vision worsens even in the absence of eye disease. The aging lens and cornea cause glare through light scattering, especially at shorter wavelengths. The yellowing of the lens with age is believed to be responsible for a reduced ability to discriminate blues and blue-greens.

Contrast sensitivity shows a significant age-related decline [3].

The image formed by the cornea and the lens of the eye is focused on a screen of neural elements at the back of the eye called the retina. The retina of each eye contains over 100 million photoreceptor cells, responsible for converting light energy into neural activity, a process called transduction.


Figure 2. Schematic diagram of the human retina.

The retina consists of three major layers of neural tissue where the light is transformed into a neural response. The layer of the retina closest to the sclera wall contains the photoreceptors.

There are two types of photoreceptors in the human eye, which are distinguishable on the basis of their shapes: long, thin, cylindrical cells called rods, and shorter, thicker, and more tapered cells called cones. When light particles strike these receptors, their location in the image, their frequency (color) and their intensity (brightness) are recorded and transmitted to the visual cortex in the brain. The cones are divided into three categories: cones that absorb long-wavelength (red) light denoted as L, cones that absorb short wavelength (blue) light


denoted as S, and cones that absorb medium-wavelength (green) light denoted as M. There is a region in the retina known as the fovea that contains only cones; there are no rods at all. Foveal cones have a different shape than the more peripheral cones, as depicted in Figure 2 [4]. Outside the fovea the number of cones decreases rapidly. The number of rods, on the other hand, increases rapidly as one leaves the foveal region, reaching a peak concentration at about 20° of visual angle from the fovea and then decreasing again, as shown in Figure 3 [5]. Compared with the rest of the retina, the cones in the foveal pit have a smaller diameter and can thus be more densely packed. This high spatial density of cones accounts for the high visual acuity at the fovea.

Figure 3. The distribution of rods and cones in the human retina.


The blind spot shown in Figure 3 is the place in the visual field that corresponds to the lack

of light-detecting photoreceptor cells on the optic disc of the retina where the optic nerve is

connected [5]. Since there are no cells to detect light on the optic disc, the section of the

image associated with this part of the field of vision is not perceived.

The outer segments of the photoreceptor cells contain pigments that absorb light as shown in

Figure 2. The middle layer of the retina consists of bipolar cells, which are neurons with two long extended processes. One end makes synapses with the photoreceptors; the other end makes synapses with the large ganglion cells in the third layer of the retina.

In addition to the three forward-feeding layers of cells in the retina - photoreceptors, bipolar,

and ganglion cells - there are two types of cells that make connections laterally, i.e.,

horizontal cells and amacrine cells. Both the horizontal and amacrine cells modify the visual

signal and allow adjacent cells in the retina to communicate and interact with one another [4].

Light reception occurs within the rods which are more sensitive to light intensity than cones.

Color perception, on the other hand, begins with the specialized retinal cells containing

pigments with different spectral sensitivities, i.e. cone cells. Human eyes contain three types

of cones, L (red), M (green) and S (blue), which are sensitive to three different spectral

bands, resulting in trichromatic color vision. The perception of color is achieved by a

complex process that starts with the differential output of these cells in the retina and is

finalized in the visual cortex and associative areas of the brain. A complex arrangement of

neurons transmits the signals produced by the reception of light through neural pathways to the primary visual cortex at the back of the brain. The brain then reconstructs and re-inverts the images in a highly complex process that is not yet fully understood.

1.1.2 Theories of Color Vision

It might be thought that a single theory of color vision should predict all the known

perceptual attributes of color and related color phenomena. However, no such theory

that could fully explain the complex array of perceptual experiences has thus far been

developed. Several theories are widely used in describing key aspects of color vision,

however, and three of the most important models are described briefly in the following

section.

1.1.2.1 Trichromatic Theory

In 1802, Thomas Young proposed that human color vision occurs through the combination of

sensitivity to red, green, and blue colors. This theory, modified by Hermann von Helmholtz

in 1852, came to be known as the Young-Helmholtz or trichromatic (three-color) theory of color vision [6]. The basic idea was that the eye responds to three primary colors, and combining the three primary colors via an additive color mixing process results in all other colors.

It was not until 1983 that the presence of three different types of cells in the retina was unequivocally proven [7]. The finding that there are three types of color-sensitive cone


receptors in the retina supported the trichromatic color vision theory. One set of receptors is

sensitive to long wavelengths such as red, one to medium wavelengths such as green, and

one is sensitive to short wavelengths such as blue. Thus the trichromatic theory has some

physiological support.

However, certain aspects of color vision cannot be accounted for by the trichromatic theory,

for example, the phenomenon of color afterimages. If one stares at a red dot, then moves

their gaze to a white wall, they will see a green dot as an afterimage. If one stares at a green

dot, followed by viewing a white surface, they will see a red afterimage. The same thing

happens with yellow and blue [8, 9].

In addition, several color vision facts are difficult to explain based on the trichromatic theory, as summarized below:

1. observers are capable of identifying four unique hues: red, green, yellow and blue;

2. dichromats can still perceive white and yellow; and

3. color discrimination functions and opponent color perceptions cannot be explained.

1.1.2.2 Opponent Color Vision Theory

Based on the existence of color afterimages, Ewald Hering proposed the opponent process

theory of color vision in 1878 [10]. He suggested that color vision occurred in three channels where "opposite" colors (called complementary colors) are in a form of competition. For example, red and green are complementary colors. When one stares at something red, their


red detectors are fatigued. The green receptors, as opponents gain the upper hand, and one

sees a green afterimage when viewing a white surface.

The modern form of this theory assumes there are three basic channels involved in color

vision [11]. One channel is the red/green opponent channel; and another is the yellow/blue channel. A third channel, the black/white or brightness/darkness channel, may also provide information relevant to color vision, though this is a complex issue which is debated among researchers.

The yellow/blue channel may seem odd, because there are no yellow-sensitive cones in the retina. Yellow light stimulates a combination of long-wavelength (red-sensitive) and medium

wavelength (green-sensitive) cones. If there is more activity in blue receptors (compared to

red plus green receptors) the brain interprets this as blue. If there is more red plus green

activity (as compared to blue) the brain interprets this as yellow. The result is a yellow/blue

channel. Yellow and blue act as opponent processes just like red and green. If one stares at a

blue image, one gets a yellow afterimage; if one stares at a yellow dot, one gets a blue afterimage.
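The channel arithmetic described above can be made concrete with a small numerical sketch. The fragment below is an illustrative simplification only, not a model used in this work; the equal-weight sums are assumed placeholders rather than physiological values.

```python
# Illustrative sketch of opponent-channel coding from cone responses.
def opponent_channels(L: float, M: float, S: float):
    achromatic = L + M + S        # black/white (brightness) channel
    red_green = L - M             # positive -> reddish, negative -> greenish
    yellow_blue = (L + M) - S     # positive -> yellowish, negative -> bluish
    return achromatic, red_green, yellow_blue

# "Yellow" light excites L and M strongly and S weakly, so the yellow/blue
# channel is driven positive even though no cone is tuned to yellow itself.
print(opponent_channels(L=0.9, M=0.8, S=0.1))   # roughly (1.8, 0.1, 1.6)
```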

Color-blind people usually lack one or more of the cone types, or their normal function: red-sensitive, green-sensitive, or blue-sensitive. The result is a disorder in one or both color channels. The most common type of colorblindness is red/green colorblindness. Genetic studies show this type of color-blindness is usually caused by a defective gene on the X chromosome. If this gene is defective, women (having two X chromosomes) are "protected" by a duplicate copy of the gene on the other X chromosome. Males (having one X and one Y


chromosome) do not have the extra copy, so red/green colorblindness is about 20 times more

common in men than in women.

A person with no color-sensitive pigments, therefore no color vision, is called a

monochromat (one-color person). To such a person, the world looks like a black-and-white

TV picture: colors are shades of gray. A person with a defect in one channel, either the red/green or the yellow/blue channel, is called a dichromat. Both colors in a channel are affected, so a person who cannot distinguish red also cannot distinguish green, and a person who cannot see blue as a distinct color will also not see yellow as a distinct color. People with normal color vision use all three channels (black/white, red/green, and yellow/blue) and are called trichromats.

1.1.2.3 Zone Theory

Despite their success in explaining certain color phenomena, none of these theories

alone provides full explanations and proper predictions of color experience. However, when

these theories are combined into a single one, namely, zone theory, additional color vision

phenomena can be explained as well.

As shown in Figure 4, three types of independent cones initiate color vision by absorbing light and sending responses in the form of electrical signals; this stage accounts well for additive color matching. In the second stage, the cone signals are passed to bipolar cells in the retina that generate three new signals: one achromatic signal and two chromatic signals. The chromatic signals correspond to the opponent theory proposed by Hering. The pairs of opponent signals are then passed to the ganglion cells in the retina for processing in an opponent manner [12, 13].

Figure 4. Diagram representing zone color vision theory.


In the final stage, the signals are interpreted in a complex manner in relation to the other

spatial and temporal information associated with light from previous visual experience such

as memory.

1.1.3 Color Constancy

The human visual system is a miraculous part of the body which provides us with a three-

dimensional perception of our world. When the reflected light enters our eyes, it is recorded

by our retinal cells; however, most of the analysis of this information is carried out in the brain.

How this kind of information is actually processed is still largely unknown. In the computer

science domain, a large number of algorithms have been developed to imitate and match the

capabilities of our human visual system. Their goal typically revolves around maximum

color reproduction fidelity. How to duplicate color in different environments such as various

illuminants, media, etc., and keep color constancy is a key factor that needs to be considered.

Without color constancy, objects could no longer be reliably identified by their color.

Color is actually not an attribute that can be attached to the objects around us. It is basically a

result of the complex processing of information received by the retina within the brain. The human visual system is able to determine the colors of objects irrespective of the illuminant.

This ability is called color constancy [14]. To provide an example let’s consider an object

which is illuminated under sunlight in which the level and color of the illumination can vary

very considerably. It is obvious that the object’s appearance would be different when it is


moved to a living room decorated with electric tungsten-filament lighting. However, our human visual system is amazingly capable of compensating for changes in both the level and the color of the lighting in a visual process known as adaptation, and recognizing the object as having nearly the same color in various conditions, which is the phenomenon known as color constancy [15].
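Computational color constancy algorithms of the kind alluded to above often start from very simple assumptions. The sketch below is purely illustrative and is not the approach taken in this dissertation: the classic gray-world method estimates the illuminant as the mean color of the scene and divides it out, a crude analogue of visual adaptation.

```python
# Gray-world white balance: assume the average scene reflectance is achromatic,
# so the mean of each RGB channel estimates the illuminant color cast.
import numpy as np

def gray_world(image: np.ndarray) -> np.ndarray:
    """image: float array of shape (H, W, 3) with values in [0, 1]."""
    channel_means = image.reshape(-1, 3).mean(axis=0)   # illuminant estimate
    gains = channel_means.mean() / channel_means         # per-channel correction
    return np.clip(image * gains, 0.0, 1.0)

# Example: a scene with a warm (reddish) cast is pulled back toward neutral.
scene = np.random.rand(4, 4, 3) * np.array([1.0, 0.8, 0.6])
balanced = gray_world(scene)
print(balanced.reshape(-1, 3).mean(axis=0))  # channel means become nearly equal
```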

1.1.3.1 Color Contrast

In the real world, colors are always seen in relation to their spatial context. Furthermore, our perception of the brightness of a target often depends more on the luminance of adjacent objects than on the luminance of the target itself. For example, objects that have only a small contrast with respect to their background are difficult to observe. The associated phenomena are commonly divided into three fundamental types of contrast: successive contrast, simultaneous contrast and the spreading effect. The causes of these slight failures of color constancy are still under active scientific investigation, and whether they truly represent failures of constancy is still debated. All may be broadly regarded as reflecting the fact that our visual system does not work like an objective measuring device that sends raw light and color data to the brain.

Successive contrast occurs when the perception of currently viewed stimuli is modulated by previously viewed stimuli. A practical example of this is when one stares at the dot shown in

Figure 5 in the center of one of the two colored disks on the top row for a few seconds and then looks at the dot in the center of the disk on the same side in the bottom row, the two


lower disks, though identically colored, will appear different with the left-bottom color appearing greenish and the right-bottom color appearing reddish [16,17].

Figure 5. Successive contrast.

Everyday experience tells us that our conscious experience of brightness will increase as the amount of light reaching the eye increases. Unfortunately, our perceptual experiences often defy such common sense. Despite increases in the amount of light reaching the eye, the brightness of a surface may actually decrease depending on the illumination of the background on which it rests, because simultaneous brightness contrast is greater at higher levels of illumination [18]. Simultaneous contrast, also known as simultaneous brightness contrast or chromatic induction [16,19], identified by Michel E. Chevreul [20], refers to the phenomenon whereby the appearance of a color can be significantly affected by the presence of other colors around it.

Figure 6 demonstrates the effect of simultaneous contrast on lightness: a dark surround makes a color look lighter and a light surround makes it look darker. Based on Figure 6-a [21], it is also true that a surround color close to the central sample enhances apparent contrast, while a surround color greatly different from the central sample lowers apparent contrast.


Figure 6. Simultaneous contrast in lightness.

From a psychophysical perspective, lateral inhibition could be used to explain simultaneous contrast, Hering grid illusion, and Mach band effect. Lateral inhibition refers to the inhibition that neighboring neurons in brain pathways have on each other. In the visual system, neighboring pathways from the receptors to the optic nerve, which carries information to the


visual areas of the brain, show lateral inhibition. In other words, neighboring visual neurons respond less if they are activated at the same time than if one is activated alone. Therefore, the fewer neighboring neurons stimulated, the more strongly a neuron responds [22]. This process greatly increases the visual system's ability to respond to the edges of a surface. Neurons responding to the edge of a stimulus respond more strongly than neurons responding to the middle because the "edge" neurons receive inhibition only from neighbors on one side, the side away from the edge, whereas neurons stimulated by the middle of a surface receive inhibition from all sides. This makes even faint edges look much sharper; the function of lateral inhibition is thus to make edges stand out.

As shown in Figure 6-b [23], most people see the center rectangle on the left as lighter than the one on the right, even though physically they are identical. The center rectangle on the left receives little lateral inhibition from its dark surround, whereas the center rectangle on the right receives considerable lateral inhibition from its light surround. Therefore, the light from the center rectangle on the left sends a stronger neural signal to the brain than does the same light from the center rectangle on the right, so the rectangle on the left appears brighter.
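A minimal numerical sketch, not taken from this dissertation, can illustrate how lateral inhibition of this kind sharpens edges. In the toy model below each "neuron" responds to its own input minus a fraction of its neighbors' inputs, so the response overshoots just inside the bright side of an edge and undershoots just inside the dark side, producing Mach-band-like contrast enhancement; the inhibition weight is an arbitrary assumption.

```python
# Toy one-dimensional lateral inhibition: response = input - k * (neighbors).
import numpy as np

def lateral_inhibition(stimulus: np.ndarray, k: float = 0.2) -> np.ndarray:
    """Each unit's response is its own input minus k times its two neighbors."""
    padded = np.pad(stimulus, 1, mode="edge")   # replicate the border values
    neighbors = padded[:-2] + padded[2:]         # left neighbor + right neighbor
    return stimulus - k * neighbors

# A step edge: a dark field (luminance 10) next to a bright field (luminance 50).
stimulus = np.array([10.0] * 6 + [50.0] * 6)
response = lateral_inhibition(stimulus)

print("stimulus:", stimulus)
print("response:", np.round(response, 1))
# Units just inside the bright side of the edge respond more strongly than units
# in the middle of the bright field, and units just inside the dark side respond
# less than those in the middle of the dark field, so the edge is exaggerated.
```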

When stimuli are seen at small angular subtenses, the opposite of simultaneous contrast can occur: colors become more, instead of less, like their surroundings, an effect termed assimilation or the spreading effect.


Figure 7. Assimilation / spreading effect.

Figure 7 [24] demonstrates the assimilation phenomenon. The colored rectangle appears darker behind the black pattern and lighter behind the white pattern. This is an example where cognitive effects may alter or override the effects of lateral inhibition on brightness; it is the opposite of the prediction lateral inhibition would make, namely that the region behind the black pattern should appear lighter and that behind the white pattern darker. Peripheral physiological mechanisms can contribute to assimilation; when the black and white patterns instead fall into the off (inhibitory) region of the receptive field, simultaneous contrast is once again observed [25]. Higher-level processes that compute an average illumination level across the scene may also be involved [26].


1.1.3.2 Lightness Crispening

Another special phenomenon of lightness contrast is the so called crispening effect, which results in increasing the apparent contrast between two colors of similar lightness when the surround lightness has a value straddling that of the stimuli. This is displayed in Figure 8 which shows the perceived lightness contrast between grayscale steps is amplified when the lightness of the background is close to that of the grayscale steps. The lighter values are compressed slightly toward white, and match the nominal white at the lightest step of the scale; the darker values are compressed slightly toward black and match that of the nominal black scale. This shift is strongest within a one half Munsell value step on either side of the background value, and then becomes constant for the rest of the scale. The apparent contrast is greatest close to the lightness value of the background as shown in the middle rectangle.

The contrast between two areas of slightly different lightness is greater against a background with a lightness value between them. This can have a profound effect on the visual evaluation of objects. In practical terms, if the lightness of the viewing booth surround is changed, as between European and US booths, note must be taken of the effect of lightness crispening on the results.


Figure 8. Lightness crispening effect.

1.2 White and Whiteness

Color perception is not a physical quantity but rather a purely psychophysical response, usually to visible light entering the eye. It is thus not measurable by normal engineering methods. However, it is possible to describe colors in an objective manner by quantifying them with at least three distinct numbers. These numbers are called color values and are dimensionless quantities.

White is normally defined as the opposite of black and has the highest luminosity of all colors. Objects described as white appear to be neither colored nor grayish. The Munsell color system, originated by A. H. Munsell, is a color space that specifies colors on three dimensions: lightness (value), hue and chroma [27,28]. In the Munsell system, the chroma of white objects is very low, usually less than a few tenths, and their Munsell value is very high, usually more than 9, according to the ISCC-NBS Method of Designating Colors [29]. White is in a range where Munsell chroma is no higher than 0.5 for all hues, except for 4Y to 9Y where up to 0.7 is acceptable, and the Munsell value is at least 8.5.
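These limits can be restated as a simple membership test. The short sketch below only re-expresses the numerical bounds just quoted (value of at least 8.5; chroma no higher than 0.5, or 0.7 for hues from 4Y to 9Y); the function name and the string encoding of the hue are assumptions made for this illustration and are not part of the ISCC-NBS method itself.

```python
# Hypothetical helper restating the Munsell limits for "white" quoted above.
def is_munsell_white(value: float, chroma: float, hue: str) -> bool:
    """hue is a Munsell hue string such as '5Y' or '2.5PB'."""
    if value < 8.5:                              # whites must be very light
        return False
    # Slightly higher chroma is tolerated only for hues from 4Y to 9Y.
    if hue.endswith("Y") and not hue.endswith("GY"):
        try:
            step = float(hue[:-1])
        except ValueError:
            return False
        if 4.0 <= step <= 9.0:
            return chroma <= 0.7
    return chroma <= 0.5

print(is_munsell_white(9.2, 0.4, "5PB"))   # True: very light and nearly neutral
print(is_munsell_white(9.2, 0.65, "5Y"))   # True: yellowish hues get more latitude
print(is_munsell_white(8.0, 0.2, "5Y"))    # False: not light enough
```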

Alternatively, whites reside in a very small area within the CIE chromaticity diagram and associated color spaces. Although the white region appears fairly narrow, there are about 5,000 distinguishable white colors, and some 30,000 so-called -ish whites such as bluish white, greenish white and yellowish white [30].

Figure 9. Spectral reflectance of white textures.


Physically, a white surface reflects strongly throughout the visible spectrum. As this spectral

reflectance becomes higher and more uniform, the surface appears whiter as shown in Figure

9 [31]. In geometrical terms, a white surface such as cotton reflects diffusely in all directions; white objects have high scattering coefficients and low absorption coefficients. A glossy white tile used in spectrophotometer calibration, for example, shows an approximately constant, high reflectance across the visible spectrum.

In general, the bluer and the lighter the sample, the whiter it appears. However, if a surface is lighter but less blue, it may or may not appear whiter, depending on the balance of the two factors.

Bleaching increases reflectance across the whole of the visible spectrum, especially at the blue end, thus improving both lightness and blueness. Bleaching therefore increases the whiteness of textiles, pulp and paper, although some materials still appear yellowish because they absorb more strongly in the blue region of the electromagnetic spectrum. The whiteness of textiles can be further increased by bluing, that is, shading the material with a small amount of a bright blue dye. Bluing results in selective absorption of light in regions of the spectrum other than blue. If the dye is not fluorescent, this decreases the lightness of the substrate while leaving the blue reflectance essentially unchanged.

Since the effect of the increased apparent blueness on perceived whiteness outweighs that of the decreased lightness, the shaded substrate appears whiter. The perceived whiteness can alternatively be increased by treating textiles with a fluorescent whitening agent (FWA). FWAs strongly absorb energy in the UV range and re-emit it via fluorescence as visible light, thus


causing a large increase in the blueness of the substrate as well as in its lightness, and thereby a significant increase in the perceived lightness and whiteness of the treated material.

The definition of perfect white is dependent on the illuminant employed; perfect colorimetric

white is thus not an invariant. A perfect diffuser is also a perfect white, i.e. with the

maximum value of luminosity without saturation or hue. White samples possess certain

characteristics such as high levels of luminosity, no saturation at all and consequently no hue.

Whiteness is an experience perceived by the human observer; samples thus appear white in a rather subjective manner, and this perception depends strongly on the illumination, the surround and the observers themselves. The color coordinates of some white samples do not lie on the black-white axis but show a finite saturation with a blue hue; such samples, which still exhibit whiteness, are located in the blue region of the CIE chromaticity chart. The region for samples exhibiting whiteness can thus be depicted as shown in Figure 10 [32].


Figure 10. Chromaticity loci of perceived white objects.

The whiteness space can be characterized by the whiteness axis, which starts at the achromatic point and shifts towards the dominant wavelength of 470 nm while still being perceived as neutral by the human observer. The parameter defining the increase in whiteness per unit saturation is denoted ∂W/∂S and is the metric used in whiteness evaluation formulae. It can be seen that whiteness values are increased by increasing the luminosity Y or the blue saturation. The angle of inclination, denoted ϕ, is used to determine regional preferences.

Independently increasing either of these two factors will result in enhanced whiteness; however, an increase in saturation cannot be continued indefinitely, since at a certain point samples will appear blue rather than white. Shade deviation associated with


whites is represented by the regions deviating from the whiteness axis, which contain samples perceived as white but showing certain shades, e.g., reddish or greenish, as compared with neutral whites. The angle of inclination can be used to determine regional preferences.

In general, a color may be defined as a combination of three attributes, given by the tristimulus values for the standard colorimetric observer. The perceived color results from the light of a source modulated by the reflectance factors of the object, or from the self-luminance of the object. Generally, each process can be understood in terms of two mechanisms.

In additive color mixing, the desired color can be obtained as the mixture of light coming from three light sources with primary colors, red, green and blue. For example, on a CRT monitor, very small dots of red, green and blue phosphors are excited to generate various lights on the screen. The main characteristic of additive color mixing is that the luminosity value of the mixed color is always higher than those of its components.

In subtractive color mixing, which is more common but usually involves much more complicated processes, the desired color is obtained by mixing three colorants, usually cyan, magenta and yellow. Examples of this process include mixing paints, or dyeing substrates with a mixture of dyes in the textile and printing industries. The main characteristic of subtractive color mixing is that the luminosity value of the mixed color is always lower than those of its components [33].


Figure 11. The effect of additive mixing and subtractive mixing on lightness.

Luminosity plays an important role when choosing a proper mechanism for increasing whiteness due to different impacts of color mixing mechanisms on the appearance of whites as shown in Figure 11 [34].

In textiles and printing industries, values for luminosity can be increased up to a certain extent by adding optimal amounts of fluorescent brightening agents.

1.3 Illuminants and Light Sources

The perception of fluorescent white materials is a complex process that involves three

essential elements, light, substrate and vision, as mentioned before. Light is a form of


electromagnetic radiation; its scientific study dates back to the publication of Newton's descriptions and explanations of the effect of passing white light from the sun through combinations of prisms [35]. The visible region of the electromagnetic spectrum makes up a very small part of the total spectrum, as shown in Figure 12 [36], where electromagnetic radiation is arranged according to wavelength.

Figure 12. Spectral distribution of sunlight.

A light source emits light. It can be physically switched on and off and its corresponding

Spectral Power Distribution (SPD) can be experimentally determined. An illuminant refers to a light defined by a specific SPD and may not be physically realizable as a source.


1.3.1 Color Temperature

The spectral power distribution (SPD) curve of a light source plots the variation of emitted radiant power per unit wavelength across the electromagnetic spectrum. Color temperature is defined as the temperature of an ideal blackbody radiator, given in the SI unit kelvin (K). A black body is an idealized source that would radiate energy perfectly and, conversely, absorb incident light perfectly without reflecting any of it. It is an idealized concept because no known material actually absorbs all incident radiant energy. Figure 13 shows the spectral radiance of a blackbody at several temperatures [37].
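For reference, the blackbody spectral radiance underlying the curves in Figure 13 is described by Planck's law; the expression below is standard physics quoted here for convenience rather than taken from references [37, 38]:

\[
L_\lambda(\lambda, T) = \frac{2hc^{2}}{\lambda^{5}} \cdot \frac{1}{\exp\!\left(\frac{hc}{\lambda k_{B} T}\right) - 1}
\]

where h is Planck's constant, c is the speed of light in vacuum and k_B is the Boltzmann constant.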


Figure 13. Relative SPD in visible region normalized at 555 nm.

As shown in Figure 13 [38], the peak wavelength shifts to shorter wavelengths as the temperature of the blackbody increases; in other words, the lower the color temperature, the warmer the light. For some sources, such as a tungsten filament lamp, the true temperature is always lower than the associated color temperature, which can be estimated via empirical formulae [39, 40]. Moreover, color temperature cannot properly describe the spectral properties of fluorescent lamps for the same reason. Therefore, the correlated color temperature (CCT) of a given stimulus is defined as the temperature of the


Planckian radiator whose perceived color most closely resembles that of the given stimulus at the same brightness and under specified viewing conditions [41].
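As a concrete illustration of the CCT concept, the short sketch below estimates CCT from CIE 1931 chromaticity coordinates using McCamy's cubic approximation; the function name and example values are illustrative choices rather than part of the cited references, and the approximation is only reasonable for chromaticities close to the Planckian locus.

```python
def mccamy_cct(x, y):
    """Approximate correlated color temperature (K) from CIE 1931 (x, y)
    using McCamy's cubic formula; valid only near the Planckian locus."""
    n = (x - 0.3320) / (0.1858 - y)  # slope relative to the epicenter (0.3320, 0.1858)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# Example: the chromaticity of CIE illuminant D65 (2-degree observer)
print(round(mccamy_cct(0.3127, 0.3290)))  # prints roughly 6500
```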

1.3.2 Light Sources

The most important natural source of light is the sun. The daylight seen at the earth's surface is formed by the absorption and scattering of sunlight by particulate matter in the atmosphere.

As shown in Figure 14, scattering at short wavelengths leads to the blue appearance of a clear sky, while the rapid fall-off of energy in the near-UV region is depicted by the SPD curves. Typically, daylight is preferred for color matching and color assessment. Figure 14 also shows that the relative SPDs of the different phases of daylight vary significantly [42].


Figure 14. Relative SPD of different phases of daylight, normalized at 555 nm: (a) cloud-free zenith skylight, (b) cloud-free north skylight, (c) overcast skylight, (d) medium daylight, and (e) direct sunlight.

Incandescence is the release of electromagnetic radiation, usually in the infrared and visible regions, from a body at high temperature. Two common types of incandescent lamps are tungsten-filament and tungsten-halogen lamps. Typical tungsten lamps generally

operate from 2800 K to 3800 K and generate approximately white light. A tungsten-halogen

lamp is produced by the addition of a small quantity of halogen to the filling gas of a

tungsten filament lamp. Tungsten-halogen lamps exhibit a number of important advantages


over ordinary tungsten lamps. They are more compact, provide a better light output, and can be operated at higher temperatures than tungsten-filament lamps, thus providing SPDs with correspondingly higher color temperatures, from 2900 K to 3300 K [43,44].

1.3.3 CIE Standard Illuminants

The Commission Internationale de l'Eclairage (CIE) has recommended a set of spectral radiant power distributions known as the CIE standard illuminants; CIE standard sources A, B and C were adopted in 1931 as approximations to three common illumination conditions [45].

Illuminant A

Illuminant A was recommended by the CIE in 1931 to represent light from the ideal

blackbody radiator at 2856K [45]. It is realized by an actual source such as a filament lamp

operating at a correlated color temperature of 2856K. However, the color of the light from

such an illuminant is relatively yellowish as it is deficient in power in the blue region and

rich in the red region of the visible spectrum as shown in Figure 15 [46].


Figure 15. Relative SPD of illuminant A.

Illuminants B and C

Illuminant B was designed to simulate direct noon sunlight with a correlated color

temperature of 4874 K, and illuminant C was designed to simulate average daylight with a correlated color temperature of 6774 K. They are not used much in practice because they have too little power in the UV region, which is very important for the assessment of fluorescent materials; in particular, their relative SPD curves do not adequately match the spectral distributions

of daylight, particularly in the UV region of the spectrum as shown in Figure 16 [46]. They

are therefore not recommended for use in assessing the fluorescent materials where the UV

content of the source would be of importance.


Figure 16. Relative SPD curves of illuminants B and C.

CIE D Series

The CIE recommended the D series of illuminants, with SPD curves covering the UV, visible and near-IR regions, to represent various phases of daylight, as shown in Figure 17. Illuminants D50 and D55, with CCTs of 5000 and 5500 K, respectively, represent mid-morning and mid-afternoon daylight. Illuminant D75, with a CCT of 7500 K, represents north-sky daylight, which is a bluer phase of daylight and is preferred in some parts of the American continent for the color assessment of automotive components. Illuminant D65 represents a phase of natural daylight at an approximate CCT of 6500 K and is now widely accepted as a standard illuminant for


color assessment and measurement. Illuminant D65 exhibits a higher UV content when compared with illuminants A, B and C.

Figure 17. Relative SPD of CIE illuminant D series.

CIE Standard Illuminant F Series

The CIE also recommended the F series of illuminants to represent various types of fluorescent lighting that are currently widely used. The SPD curves of some of these illuminants are shown in Figure 18, in which illuminant F2 represents cool white fluorescent, F11 represents


triband fluorescent, and illuminants F7 and F8 represent daylight fluorescent lamps as

approximations of D65 and D50, respectively [46].

Figure 18. Relative SPD of illuminant F series.

1.3.4 Fluorescent Lamps and Tubes

The most common fluorescent tube consists of a long glass vessel containing mercury vapor at low pressure, sealed at each end with metal electrodes. Phosphors coat the interior surface of the tube and are excited by the high-energy UV lines of the mercury spectrum. The spectrum generated depends on the phosphor type used. The lamps vary from the cool white lamp to the broad-band type, in which long-wavelength phosphors are


incorporated to enhance the color rendering properties of the source. Another type, the three-

band fluorescent lamp, such as the TL84 or prime color lamps, uses narrow-line phosphors to give emission at approximately 435 nm, 545 nm and 610 nm as well as an overall white light color with better color rendering properties. The SPDs of these three types of fluorescent tube are

compared in Figure 19 [48].

Figure 19. SPDs of three types of fluorescent lamp.

The first two lamps show prominent line emissions at the mercury wavelengths of 404, 436, 546 and 577 nm. The three-band fluorescent lamps, with their much higher efficacy, are advantageous compared to the other two types in terms of energy consumption and are widely used in store lighting; however, their specific SPDs have also given rise to associated problems [48, 49].


The UV content of lamps is often neither measured nor standardized. A variation in UV content significantly affects the perception of white samples treated with fluorescent brightening agents. Moreover, the UV content of the sources used in viewing booths for visual assessments does not necessarily correlate with that used for the measurement of whites [50].

1.3.5 LEDs

Semiconductor materials are used in the manufacture of light emitting diodes (LEDs) and the phenomenon of electroluminescence is used to generate light in the LEDs [51].

Figure 20. Semiconductor junction laser.

Under electrical excitation, the conduction band of the n-type semiconductor carries an excess of electrons, while the valence band of the p-type material carries a shortage of electrons,


referred to as holes; the two bands are separated by an energy gap. Light emission occurs via electron and

hole recombination across the p-n semiconductor junction as shown in Figure 20 [52]. This

effect is called electroluminescence and the color of the light, corresponding to the energy of

the photon, is determined by the energy gap of the semiconductor.

LEDs emit a very narrow band of wavelengths, only 50-80 nm wide, that depends on the chemical

composition of the semiconductor materials. LEDs generate white light either by mixing red,

green, and blue monochromatic LEDs or by a color conversion process, which is more

common [53, 54]. It relies on a mixture of blue LED light and subsequent re-emission from green, yellow, or red phosphor materials to make white light. Recent studies have highlighted the importance of a sophisticated design of the color conversion element (CCE) to make superior-quality white LED light sources. This includes not only the CCE’s arrangement

within the white LED package and use of multiple phosphors to increase color rendering, but

also the design of the CCE itself [55, 56, 57].

1.4 Texture Analysis

1.4.1 Definition of Texture

Texture refers to properties that represent the surface of an object. Over the last few years,

different definitions of texture have been proposed by a number of vision researchers in

psychophysics as well as computer vision [58,59]. Psychophysicists have studied texture

since it provides them with a means of understanding early human visual information


processing. Computer vision researchers have used measures of texture to discriminate

between different objects and segment scenes.

Richards & Polit [60] define texture as an attribute of a field having no components that

appear enumerable. The phase relations between the components are thus not apparent. Nor

should the field contain an obvious gradient. The intent of this definition is to direct attention

of the observer to the global properties of the display - i.e., its overall "coarseness",

"bumpiness", or "fineness". Physically, non-enumerable patterns are generated by stochastic as opposed to deterministic processes. Perceptually, however, the set of all patterns without obvious enumerable component will include many deterministic textures [60].

ASTM (The American Society for Testing and Materials) defines texture as the "visible" surface structure depending on the size and organization of small constituent parts of a material, typically, surface structure of a woven fabric [61].

The human response to texture often is described with terms like fine, coarse, grained and smooth, etc. Alternatively, texture can be described as a variation in tone (intensity or lightness) and structure. Other responses to a physical surface can be described in the following terms:

• Roughness;

• Smoothness;

• Ripple - the appearance of irregularity of a surface resembling the skin of an orange;

• Apparent mottle - a spotty non-uniformity of color appearance on a scale that is larger

than the colorant particle, typically 1 to 10 mm; and


• Speckle - a phenomenon in which the scattering of light by a rough surface or

inhomogeneous medium generates a random-intensity distribution of light that gives

the surface or medium a granular appearance.

1.4.2 Effect of Texture on Color Perception

A textile substrate consists of a large number of long, fine fibers, which are initially arranged irregularly and tangled together. During spinning, fibers line up more or less parallel to one another, and the insertion of twist leads to the formation of yarn. Weaving or knitting processes are then used to generate different surface textures [62].

Figure 21. Light propagation in a fiber.


From a microscopic perspective, when light hits a fiber, it may be partly transmitted, absorbed or reflected as shown in Figure 21 [62]. The relative contribution of each of these components determines the visual appearance of the fiber, including its color shade and luster. From a macroscopic perspective, surface appearances influence our color perception directly. Consider a beam of white light incident on the surface of an object. As soon as the light meets the substrate's surface the beam undergoes refraction, while some of the light is reflected. The refracted beam entering the layer undergoes absorption and scattering, and it is the combination of these two processes which gives rise to the underlying color of the medium as shown in Figure 22 [63].

Figure 22. Light propagation in a colored medium.

However, the extent of scattering will depend on the particle size and on the refractive index difference between the pigment particles and the medium in which they are dispersed


according to Fresnel's law. Depending on their surface properties, media exhibit a balance between specular and diffusely reflected light; this is schematically depicted by the directions and sizes of the arrows representing reflection in Figure 23 [64].

Figure 23. Polar distribution of reflected light for various surfaces.

Figure 24 shows a model of light reflection from the surface of a textile. Light reflected from the textile consists of three kinds of reflection: (1) diffuse reflection at the surface layer of fibers; (2) diffuse reflection arising from multiple reflections between the surfaces of internal fibers; and (3) regular (specular) reflection at the surface of fibers [65].


Figure 24. Schematic model of light reflection from a woven textile.

Different textile surfaces are widely produced for common use, but from an optical point of view they are difficult to define. This difficulty is demonstrated by the number of different definitions of texture attempted by vision and imaging scientists. Texture is a property that represents the surface and structure of a medium [66]. Generally speaking, in the case of textile substrates texture can be defined as the repetition of an element or pattern on the surface. In textile fabrics the texture is quite regular, that is, the elementary woven pattern is repeated over the entire substrate; the texture level may therefore be described by a roughness or coarseness index. Variations in surface roughness can significantly affect the colorimetric attributes of textile substrates from a physical perspective. Often, the effect of surface patterns is negated by repeated measurement of the substrate at different orientations and averaging of the results.


Figure 25. KES-FB4 surface tester.

The KES system, designed for assessing fabric surface properties, measures the height of the fabric surface over a 2 cm length (forwards and backwards) along the principal directions, as shown in Figure 25 [67].

Surface friction and roughness are characterized by two quantities defined below: MIU, the mean value of the coefficient of friction (dimensionless), and SMD, the mean deviation of surface roughness (in microns) [68].

The forwards and backwards motion gives two values for geometrical roughness, SMD1 and SMD2. The geometrical roughness is a measure of the surface contour of the fabric, and an increase in SMD usually indicates an increase in the surface variation of a fabric [69]. For the surface tests, a piano-wire contactor is used to measure friction, under a constant force of 10 g and a frequency of 30 Hz. The sample size for the surface tests is 20 × 3.5 cm. Different sampling rates are applied according to the requirements.

The characteristic MIU and SMD values are obtained from Equation 1 [70]:


MIU = (1/X) ∫ µ dx

SMD = (1/X) ∫ |T − T̄| dx (1)

where both integrals are taken over the scan length from 0 to X, and

µ: frictional force / pressure (normal) force;

x: displacement of the contactor on the surface of the specimen;

X: scan length (2 cm is taken in the standard measurement);

T: thickness of the specimen at position x, as measured by the contactor;

T̄: mean value of T.
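In practice the integrals in Equation 1 are evaluated from sampled traces. The sketch below is a minimal discrete approximation assuming uniform sampling along the scan; the array names and the example data are hypothetical and are not KES output formats.

```python
import numpy as np

def miu_smd(mu, thickness):
    """Discrete approximation of Equation 1.
    mu        : sampled coefficient of friction along the scan
    thickness : sampled thickness/height T at the same positions
    With uniform sampling, (1/X) * integral of f dx reduces to the mean of f."""
    mu = np.asarray(mu, dtype=float)
    t = np.asarray(thickness, dtype=float)
    miu_value = mu.mean()                      # MIU
    smd_value = np.abs(t - t.mean()).mean()    # SMD, in the units of `thickness`
    return miu_value, smd_value

# Hypothetical 20-point traces over a 2 cm scan (thickness in microns)
mu_trace = [0.21, 0.23, 0.22, 0.25, 0.24] * 4
t_trace = [310.0, 312.5, 309.0, 315.0, 311.0] * 4
print(miu_smd(mu_trace, t_trace))
```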

The SMD metric, however, does not always correlate with perceptual assessments of roughness for a given surface. Several parametric effects influence the perceived color of products, including lightness and whiteness attributes, and variations in these parameters can change the magnitude of perceived differences amongst otherwise identical objects. In addition to texture, other contributing factors include the color of the background, the luminance in the viewing field, the physical size of samples, the mode of sample presentation, the magnitude of color differences, and whether the color is perceived as a surface or a self-luminous color.

A further weakness in most discussions of texture is the assumption that the perception of texture is a tonal effect and that chromaticity is of secondary importance. When dealing with color and color-difference, however, the texture may also have a great impact on the perceived difference of color. A number of recommended color difference formulae, e.g.,

CMC(l:c), CIE94, CIEDE2000, etc., include adjusting factors to account for the varied


interaction of light with different surfaces. More recently, the influence of texture on color

difference evaluation [71] and on suprathreshold lightness differences [72] was reported.

Therefore, texture is an important parametric effect that needs to be incorporated in color-

difference metrics. Attributes supporting texture perception such as lightness, brightness,

whiteness as well as darkness have been widely used in textiles from a colorimetric

perspective. Also properties such as roughness/smoothness and coarseness provide means of

physical measurement.

In the field of imaging, texture refers to an areal construct that defines local spatial information of spatially varying spectral values repeated over a region of larger spatial scale. The perception of texture is therefore a function of spatial and radiometric scales.

Descriptors providing measures of properties such as smoothness, coarseness and regularity, which are also used in the textile domain, are used to quantify the texture content of an object. The most common method adopted in image analysis is based on the spatial distribution of gray values: local features are computed at each point in the image, and a set of statistics is derived from the distribution of these local features [73].
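As a minimal sketch of this statistical approach, the code below derives first-order gray-level statistics and a simple local-contrast measure from an image array; the function, window size and test pattern are illustrative assumptions and are not taken from reference [73].

```python
import numpy as np

def texture_stats(gray, window=5):
    """First-order gray-level statistics plus a mean local standard deviation."""
    g = np.asarray(gray, dtype=float)
    stats = {
        "mean": g.mean(),
        "std": g.std(),                               # overall tonal variation
        "smoothness": 1.0 - 1.0 / (1.0 + g.var()),    # ~0 for flat, approaches 1 for rough
    }
    # Local contrast: standard deviation within non-overlapping windows, then averaged
    h, w = g.shape
    local_std = [g[i:i + window, j:j + window].std()
                 for i in range(0, h - window + 1, window)
                 for j in range(0, w - window + 1, window)]
    stats["mean_local_std"] = float(np.mean(local_std))
    return stats

# Hypothetical 20 x 20 patch with a periodic, woven-like intensity pattern
y, x = np.mgrid[0:20, 0:20]
patch = 128 + 40 * np.sin(x) * np.sin(y)
print(texture_stats(patch))
```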

For human perception, the perceived texture can change with the viewing conditions, such as the distance between the observer and the sample, because of characteristics of our eyes, e.g. the contrast sensitivity of human vision. Contrast sensitivity is the ability of the visual system to discriminate spatial information of luminance and chromatically defined form, and it is affected by the angular display size. For example, the texture of a sample may be perceived as


coarse at a close distance, but the same sample could be perceived as having a fine texture from a

longer distance.

Since textiles have geometric characteristics, such as natural convolution of fibers, cross-

sectional shapes of fiber, twists of yarn, and surface fluff, incident light beams are scattered

at different strengths in different directions, according to surface geometry, particularly

surface roughness. When a subject views a real textile sample, light reflected from the surface stimulates the subject's eyes and provides two-dimensional color images on the subject's retinas as an image of the woven construction, at which point the subject registers a three-dimensional image by way of recognizing memories of experiences with fabrics [74].

1.5 Improving the Whiteness of Material

In the paper as well as textile industries, the generation of suitable white materials is important to

satisfy high lightness and neutral saturation requirements from an aesthetics perspective and

to provide a suitable base for coloration via dyeing and printing. However, most natural and

synthetic fibers contain slightly colored, conjugated organic components that detract from

their apparent whiteness [75, 76, 77, 78]. Consequently, whiteness of such substrates can be

improved in three ways, as described in the following sections.


1.5.1 Bleaching and Bluing

Bleaching agents convert impurities into colorless particles. Color is imparted by a

chromophore, i.e. a moiety usually involving alternating carbon-carbon single and double

bonds [79]. The bleaching process increases the blue reflectance of the substrate by destroying the

coloring matter with strong reducing or oxidizing agents. The function of bleaching is to

destroy blue-absorbing yellow contaminants via oxidation so that there is a large increase in

the whiteness.

However, even the most effective bleaching cannot remove all traces of yellowish cast.

Therefore, an additional whitening stage is often essential.

Bluing is an age-old practice in which the bleached substrate is treated with a very small amount of a brilliant blue or violet dye that absorbs selectively in regions of the spectrum other than blue, offsetting the yellow cast of the substrate to improve the visual impression of whiteness. Even though the bluing agent slightly decreases the lightness of the substrate, it shifts the shade of the yellowish material towards blue. Hence, for cultures in which bluish whites are preferred, the eye registers an increase in whiteness. However, bluing may produce a dull perceived white owing to the decreased lightness. If an excessive amount of blue dye is used, the substrate will become first grayish and finally bluish, so that it appears colored rather than white to the eye.


1.5.2 Application of FBAs and Optical Brightening

The third way is to use fluorescent brightening agents (FBAs), fluorescent whitening agents

(FWA) or optical brightening agents (OBA). Unlike bleaching and bluing, FBAs offset the

yellowish cast and at the same time improve lightness, because they do not subtract green-

yellow light, but rather add blue light. FBAs are virtually colorless compounds which, when

present on a material, have the ability to absorb mainly invisible ultraviolet light in the 300-

400 nm range and re-emit violet to blue fluorescent light. The emitted fluorescent light is added

to the light reflected by the treated material, producing an apparent increase in reflectance in

the blue region. Dazzling whiteness may be perceived, especially on a well-bleached

material. A slight improvement in base whiteness enhances whiteness of FBA-treated

material significantly.

FBAs are used to brighten not only textile materials but also paper, leather and plastics. They

are important constituents of household detergent formulations. More specialized areas of

application include lasers, liquid crystals and biological stains. By far the most important

uses of FBAs, however, are in applications to textiles and paper. FBAs should be applicable without undesirable side-effects, such as staining or subsequent photosensitization or

degradation of the substrate to which they are applied.

The object of bleaching is to produce white fabrics by destroying the coloring matter with the

help of bleaching agents with minimum degradation of the fiber. The bleaching agents either

oxidize or reduce the coloring matter which is washed out and whiteness thus obtained is of


permanent nature. Chemical bleaching of textile fibers is further aided by addition of optical brighteners [80].

Certain organic compounds possess the ability to fluoresce, whereby they absorb UV light and re-emit it at longer wavelengths within the visible spectrum. Therefore, at a specific wavelength, a surface containing a fluorescent compound can emit more light than the amount of daylight that falls on it, giving an intensely brilliant white. Compounds possessing these properties are called optical brightening agents, or OBAs. The effect is only operative when the incident light contains a significant proportion of ultraviolet radiation, as sunlight does. When OBAs are exposed to UV fluorescent light bulbs they glow, a sure-fire way of identifying fibers that have been treated with optical brighteners [81]. When OBAs are not exposed to UV light they are not activated, so the eye sees the actual color of the substrate, which may look creamy or somewhat yellow. The extent of OBA fluorescence and the loss in activity will vary depending upon how much exposure the sample has had to UV light.

1.5.3 Fluorescence

Fluorescence is the phenomenon by which a compound absorbs UV light and, after internal energy losses in the excited state, re-emits light at a longer wavelength than that which was absorbed [81]. For most fluorescent brightening agents

(FBAs), the emitted light is in the short wavelength range of the visible spectrum leading to a


bright bluish white color [82]. However, other tints of white can be made by modifying the

emission properties of fluorescent brightening agents. FBAs are widely employed in the

processing of paper and textiles to improve their white appearance. The whiteness of

fluorescent white materials depends on the fluorescence properties of the FBA employed and

the UV content of the incident light source(s) [83-86]. While the quantity of fluorescence is related to the amount of FBA applied, maximum whiteness is attained at an optimum concentration of FBA. An overload of FBA limits UV absorption and causes quenching of fluorescence either of which results in reduction of perceived whiteness of fluorescent white materials [87]. Fluorescence, therefore, is dependent not only on the structure of the molecule, but also on its condition.

The fundamental idea behind using fluorescent materials to produce better whiteness is to enlarge the blue reflectance by absorbing light in the ultraviolet and violet regions (340-370 nm) and re-emitting it in the blue region (420-470 nm). The re-emitted blue light offsets the yellow light reflected from the substrate and increases the total reflected light, producing a 'whiter than white' effect and thus a vivid and dazzling white [88]. The mode of action that increases the whiteness of a substrate is illustrated in Figure 26. The first step is the absorption of UV and violet radiation, resulting in a change from curve A to curve B. Secondly, the absorbed energy is emitted as blue fluorescence radiation (curve C). The addition of the true reflected intensity (curve B) and the fluorescence intensity (curve C) gives the radiation perceived by an observer (curve D), which is called the total spectral radiance factor [89].
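Using a common notation for fluorescent colorimetry (not necessarily that of reference [89]), the quantity represented by curve D can be written as the sum of the reflected and luminescent contributions:

\[
\beta_T(\lambda) = \beta_R(\lambda) + \beta_L(\lambda)
\]

where β_R(λ) is the reflected radiance factor (curve B) and β_L(λ) is the luminescent (fluorescence) radiance factor (curve C).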


Figure 26. Effect of fluorescence on the spectral reflectance.

In other words, when present on a substrate an effective FBA increases the apparent reflectance of the medium in the blue-violet region of the spectrum.


2. Measuring Whiteness

2.1 Whiteness Formulas

By the 1990s, more than 100 whiteness indices had been developed [90, 91]. In whiteness measurement, the issue is how to develop a single formula that gives an appropriate weighting to the tristimulus values. In the equations developed thus far, tristimulus values are incorporated using different algebraic functions to assign a suitable weight that takes into account the role of blue hues in the perception of white objects. The human visual system classifies a slightly blue white object as whiter than objects reflecting perfectly over the whole visible range [92]. Moreover, a single whiteness formula is often not sufficient for general applications, since the perception of the appearance of an object varies amongst individuals [93]. In addition, there is no general agreement on the best method for the evaluation of whiteness. Nonetheless, some of the important whiteness formulae are described in the following sections.

2.1.1 One-dimensional Whiteness Formulas

The first attempts at describing whiteness were based on lightness, yellowness or blueness. Equation 2, shown below [93],

W = Y (2)


quantifies whiteness in relation to lightness only, where the result is a relative quantity based on a preferred white defined by a magnesium oxide or barium sulfate tablet, using the CIE function ȳ(λ) to describe the luminance factor under a given observer and illuminant setting.

This is purely a luminance value and does not report whether the observed object is bluish or yellowish. Equation 3 [93] relates whiteness to a blue reflectance defined by the CIE function z̄(λ).

W = B (3)

Moreover, it is clear that Equation 3 gives no negative values regardless of the real color of the observed substrate; the values are not corrected by the relative amount of absorbed yellow light, and the formula does not take bluing techniques into account.

These equations were later corrected using yellowness factors that considered the relative amounts of blue and yellow in the reflected light. A large number of whiteness calculations are based on measuring the reflectance at two wavelengths: one at a short wavelength for blueness and another at a long wavelength for redness. The methods were originally devised for non-fluorescent whites, to determine the general level of reflectance and the decreased reflectance in the blue region due to the yellowish cast. The Stephansen formula, an empirical relation based on a similar approach, is shown in Equation 4,

WI Stephansen = B - (R - B) = 2B - R (4)

where B and R are reflectances measured with a filter photometer [93]. The effective wavelengths are 430 nm for B and 670 nm for R.


The Harrison formula is shown in Equation 5. The bracketed terms in Equations 4 and 5 refer to the difference between the reflectance levels in the red and blue regions and are intended to measure the yellowish cast that lowers whiteness. Harrison's formula assigns the value of 100 to the physically ideal white [94]. However, these four whiteness models are not applicable to materials containing a blue dye [93].

WI Harrison = 100 - (R - B) (5)

The non-standardized band-pass filters used resulted in a loss of popularity for this type of formula, especially after the introduction of filter colorimeters that employed G, B and A (green, blue and amber) filters related to the CIE ȳ(λ), z̄(λ) and x̄(λ) color matching functions weighted by a CIE standard illuminant. Generally, the following relationships, shown in Equation 6, were used to model the characteristics of these filters.

A = (1/a) ⋅ X − (b/(a ⋅ c)) ⋅ Z

G = Y (6)

B = (1/c) ⋅ Z

where X, Y, Z are the tristimulus values of the measured object and a, b and c are the constants given in Table 1.


Table 1. Constants for illuminants A, C and D65 for the 2° and 10° observers

Observer   Illuminant        a          b          c
2°         A            1.044623   0.053849   0.355824
2°         C            0.783185   0.197520   1.182246
2°         D65          0.770180   0.180251   1.088814
10°        A            1.057190   0.054170   0.352020
10°        C            0.777180   0.195660   1.161440
10°        D65          0.768417   0.179707   1.073241

The Taube formula based on the above approach is shown in Equation 7 [96],

W Taube = 4B - 3G (7) where B and G are defined in Eq. 6.

Hunter Whiteness Formula

It was recognized that yellowness formulas, or those based on the relative differences of blue and yellow light, were not sufficient to describe whiteness, especially for objects whitened through bluing techniques, and that the contribution of lightness to whiteness perception, as reflected in the Hunter formula, had to be taken into account. Hunter whiteness is given by

Equation 8 [96,97]:

WH = L - 3b (8)

where L and b are the Hunter coordinates defined as shown in Equation 9.


L = 100 ⋅ √(Y/Yn)

a = 175 ⋅ √(0.0102 ⋅ Xn) ⋅ (X/Xn − Y/Yn) / √(Y/Yn) (9)

b = 70 ⋅ √(0.00847 ⋅ Zn) ⋅ (Y/Yn − Z/Zn) / √(Y/Yn)

and Xn, Yn, Zn are the tristimulus values of the achromatic (white) point. The simplicity of the Hunter formula is remarkable, and it clearly takes into account the importance of having a high lightness together with a low, or negative (bluish), b value.
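A small sketch of Equations 8 and 9 is given below. It assumes tristimulus values on the usual 0-100 scale; the default white point values (approximately those of illuminant C) and the function names are illustrative assumptions.

```python
import math

def hunter_lab(X, Y, Z, Xn=98.04, Yn=100.0, Zn=118.11):
    """Hunter L, a, b following Equation 9 (defaults: approximate illuminant C white point)."""
    yr = math.sqrt(Y / Yn)
    L = 100.0 * yr
    a = 175.0 * math.sqrt(0.0102 * Xn) * (X / Xn - Y / Yn) / yr
    b = 70.0 * math.sqrt(0.00847 * Zn) * (Y / Yn - Z / Zn) / yr
    return L, a, b

def hunter_whiteness(X, Y, Z):
    """Hunter whiteness WH = L - 3b (Equation 8)."""
    L, a, b = hunter_lab(X, Y, Z)
    return L - 3.0 * b

# Hypothetical, slightly bluish white sample
print(round(hunter_whiteness(82.0, 85.0, 97.0), 2))
```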

MacAdam Formula

A close relative of the Hunter formula is the MacAdam formula given by Equation 10 [96],

W MacAdam = √(Y − k ⋅ p²) (10)

where Y is the luminance factor, k is a constant of 6700 [96] and p is the colorimetric or excitation purity, that is:

p = √[(x − xw)² + (y − yw)²] / √[(xd − xw)² + (yd − yw)²]

where x and y are the chromaticity co-ordinates of the sample, xd and yd are the chromaticity

co-ordinates of the dominant wavelength, and xw and yw are the chromaticity coordinates of

the white corresponding to source C:

xw = 0.3101, yw = 0.3163.


Selling Formula

The Selling formula [96] is shown in Equation 11:

W Selling = 100 − [100 ⋅ (Δ√Y)² + (k ⋅ Δs)²]^1/2 (11)

where Δ√Y = √(Y MgO) − √(Y sample); Y MgO is the luminance of a perfect diffuser based on MgO used as the whiteness standard, Δs is the deviation of the sample from the neutral color of the same lightness as measured on MacAdam's UCS diagram, and k is a constant of 6700 [98].

Berger Whiteness Formula

This formula was developed by A. Berger in 1959, mainly for application in the paper and textile industries, and is shown in Equation 12 [98].

W Berger = Y + a ⋅ Z - b ⋅ X (12)

where, for illuminant D65, the numerical parameters are:

a = 3.440, b = 3.895 (2° observer)

a = 3.448, b = 3.904 (10° observer)

Generally, the formula has a preference for greenish whites, in other words, white samples

having a greenish shade will possess higher Berger whiteness values.


Whiteness Index (ASTM)

The ASTM-whiteness index is defined according to Equation 13 [99],

WI = 3.388 ⋅ Z - 3⋅ Y (13)

where Z and Y are the tristimulus values of the object.

This formula requires measurement with a source equivalent to illuminant C; the instrument can be a three-filter colorimeter or a spectrophotometer with 45/0 geometry.

C/V Index

The perceived whiteness evaluation formulas discussed so far are empirical ones based on the results of visual evaluations, without considering the visual mechanism that regulates perceived whiteness; the basis of the evaluation in these formulas is therefore not explicit. Also, these perceived whiteness indices are restricted to evaluation under the CIE standard illuminant D65 or C, and an evaluation method under an arbitrary illuminant has not been

Also, these perceived whiteness indices are subject to the evaluation under the CIE standard

illuminant D65 or C, and the evaluation method under an arbitrary illuminant has not been

established. However, it may be argued that white papers are rarely viewed outdoors; most

are used under a source with the correlated color temperature of less than 6500K [100]. One

approach to predict whiteness under varying conditions might be to use a color appearance

model since color appearance models such as CIECAM02 predict the appearance of an

object under arbitrary illuminants [100,101].

However, it is difficult to describe the perceived whiteness exactly by the chroma of the color

appearance model because the chromaticity point of the illuminant is not necessarily in line


with the highest perceived whiteness. Moreover, if a chromatic adaptation correction is applied to the existing whiteness indices, the changes in perceived whiteness due to a change of illuminant cannot be predicted, because the tristimulus values of the white sample are normalized to those of the corresponding colors under the reference illuminant [102].

By considering the relationship between the chromatic strength and the perceived whiteness, a perceived whiteness evaluation index called the C/V index was proposed; it is presented in Equation 14 [103].

C/V index = [∫ P(λ) R(λ) C(λ) dλ / ∫ P(λ) R(λ) V(λ) dλ] ⋅ Y ⋅ a (14)

where both integrals are taken from 380 to 780 nm, P(λ) is the spectral distribution of the illumination, R(λ) is the spectral reflectance factor of the white object, V(λ) is the CIE spectral luminous efficiency function, Y is the luminous reflectance factor, and a is an adjustment factor. C(λ) is the wavelength function of the vector luminance obtained using Guth's color vision model, which attempts to predict descriptors of colors under different viewing conditions according to the stages of the visual system

[104]. This model begins with nonlinear receptor (cone) responses, followed by two stages of opponent responses, and finally includes a neural compression of the opponent signals.

Also,

Y = [∫ P(λ) R(λ) ȳ(λ) dλ / ∫ P(λ) ȳ(λ) dλ] ⋅ 100 (15)

where ȳ(λ) = V(λ), so the C/V index can be expressed as in Equation 16.


C/V index = [∫ P(λ) R(λ) C(λ) dλ / ∫ P(λ) V(λ) dλ] ⋅ 100 ⋅ a (16)

2.1.2 Two-dimensional Whiteness Formulas

The introduction of a second dimension resolves the problem posed by the existence of multiple preferred whites; each white sample is characterized by a whiteness number, W, and

a tint or shade deviation value T calculated with formulas shown in Equation 17 [105],

W = Y + P ⋅ (x − x0) + Q ⋅ (y − y0)

T = m ⋅ (x − x0) − n ⋅ (y − y0) (17)

where the whiteness number refers to a neutral white characterized by a dominant wavelength of 472 nm; the perfect diffuser is assigned the whiteness value of 100; x0, y0 are the chromaticity coordinates of the white point for the given illuminant, and P, Q, m, n are all constants.

Ganz Whiteness Formula

The Ganz whiteness formula was the first to refer to a neutral white and to the second dimension of tint or shade deviation. The whiteness formula that Ganz proposed is as follows [106, 107, 108]:

WGanz = (D ⋅ Y) + (P ⋅ x) + (Q ⋅ y) + C (18)

where Y is the luminance factor, x and y are the chromaticity coordinates, and


D, P, Q and C are formula parameters.

Ganz and Griesser introduced a tint deviation formula as an additional factor in the calculation of whiteness to make instrumental whiteness assessments more accurate.

Tint Deviation = (m ⋅ x) + (n ⋅ y) + k (19)

where x and y are colorimetric variables and m, n and k are formula parameters specific to the measuring instrument; for D65/10° the coefficients are given as:

P = -1868.322, Q = -3695.690, C = 1809.441;

m = -1001.223, n = 748.366, k = 68.261.

Tint deviation can also be correlated with a white scale used as a reference. This equation indicates whether a sample has the same hue as an equally white reference, or whether it is greener or redder, and by how much. In other words, in the case of D65/10°,

Tint > 0 indicates that the white has a greenish shade, and Tint < 0 that it has a reddish shade. Samples differing in whiteness by less than 5 Ganz units appear indistinguishable to the human eye; samples differing in tint by less than 0.5 Ganz-Griesser units also appear indistinguishable to the human eye.
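A sketch of Equations 18 and 19 with the D65/10° parameters quoted above is shown below. The luminance coefficient D is taken as 1, which is an assumption since it is not listed with the other parameters, and the function names are illustrative.

```python
def ganz_whiteness(Y, x, y, D=1.0, P=-1868.322, Q=-3695.690, C=1809.441):
    """Ganz whiteness, Equation 18, using the D65/10-degree parameters from the text.
    D = 1 is assumed for the luminance coefficient."""
    return D * Y + P * x + Q * y + C

def ganz_griesser_tint(x, y, m=-1001.223, n=748.366, k=68.261):
    """Ganz-Griesser tint deviation, Equation 19 (D65/10-degree parameters)."""
    return m * x + n * y + k

# Evaluated at the D65/10-degree white point, the whiteness comes out close to 100
print(ganz_whiteness(100.0, 0.3138, 0.3310), ganz_griesser_tint(0.3138, 0.3310))
```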

CIE Whiteness Index

In 1981, CIE started extensive studies to solve the problem created by the excessive number of equations available for determination of whiteness. Using results from Ganz [106, 107],


the CIE recommended an equation for the whiteness W, related to basic CIE tristimulus

measurements, having the form shown in Equation 20:

WCIE = Y + 800 (xn − x) + 1700 (yn − y) (20)

where x and y are the CIE chromaticity coordinates of the specimen, and xn and yn are the chromaticity coordinates of the perfect diffuser for the illuminant used; for illuminant D65 and the 10° observer, xn and yn are 0.313795 and 0.330972, respectively [100, 107]. A tint index for evaluating the hue direction of white objects was also established, as shown in Equation 21 [109].

T = 1000 (xn − x) − 650 (yn − y) for a 2° observer

or (21)

T = 900 (xn − x) − 650 (yn − y) for a 10° observer

These give tint values in the red or green direction for the 1931 or 1964 CIE standard observer, respectively. A positive value of T indicates greenness and a negative value

indicates redness.

These equations can be used only in a limited region. Criteria for whiteness are that the

values of W fall within the limits given by:

5Y − 280 > WCIE > 40, and the tint value T shall fall within the limits given by:

2 > T > –3

The W formula describes an axis in the blue-yellow direction with a dominant wavelength of

470 nm in the CIE chromaticity diagram and the inequalities limit the extent to which a


sample may enter the blue or yellow regions or stray towards the red or green and still be

classified as white. According to this definition, the perfect reflecting diffuser has a

whiteness of 100 and a zero tint value.

The CIE whiteness formula is a linear model with a baseline of 470 nm dominant wavelength

which is based on Ganz's study [109, 110]. However, the CIE whiteness index shows a low correlation with the visual evaluation of white samples containing various hues.
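The sketch below implements Equations 20 and 21 together with the validity limits quoted above; the default white point values are those given in the text for D65 and the 10° observer, and the function name is illustrative.

```python
def cie_whiteness_tint(Y, x, y, xn=0.313795, yn=0.330972, observer=10):
    """CIE whiteness (Equation 20) and tint (Equation 21) with the validity check."""
    W = Y + 800.0 * (xn - x) + 1700.0 * (yn - y)
    T = (900.0 if observer == 10 else 1000.0) * (xn - x) - 650.0 * (yn - y)
    within_limits = (40.0 < W < 5.0 * Y - 280.0) and (-3.0 < T < 2.0)
    return W, T, within_limits

# Hypothetical measurement of an FBA-treated fabric
print(cie_whiteness_tint(Y=88.0, x=0.305, y=0.318))
```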

Uchida Whiteness Index

While very good agreement has been found between the visual assessment results and the

CIE whiteness formula values for samples that have similar tint or fluorescence, visual

results for white samples treated by different fluorescent whitening agents or different tints show a degree of deviation from their CIE WI values. In order to improve the performance of the CIE whiteness index, Uchida proposed a whiteness index based on the CIE formula.

According to Uchida's whiteness formula, white samples can be divided into in-base and out-of-base samples. For samples whose CIE whiteness indices lie within the base boundaries (40 < WCIE < 5Y-225), Uchida proposed the whiteness index shown in

Equation 22. For samples whose CIE whiteness indices are out of boundary (WCIE > 5Y-

275), the suggested whiteness index for the 1964 standard observer is calculated using Equation

23 [111].

W = WCIE − 2 (TW)² (22)


W = PW − 2 (TW)² (23)

where

PW = (5Y − 275) − 800 [0.2742 + 0.00127 (100 − Y) − x]^0.82 + 1700 [0.2762 + 0.00176 (100 − Y) − y]^0.82

TW has the same definition as in Equation 21. Uchida claimed that this whiteness formula deals with tint and excitation purity in a more expanded space [111, 112]. However, the study reported by Jafari et al. showed that the CIE WI formula performs better than the Uchida index against visual assessments for both in- and out-of-boundary white fabrics [112].

2.2 Instrumental Assessment

Instrumental measurement of whiteness simulating a psychometric scale is accompanied by

an additional term that indicates to what extent and in what direction the appearance of the

sample deviates from the maximum whiteness. Instrumental whiteness assessment will be

absolute only if the uncertainties of measurement and of evaluation are overcome [113]. A

range of devices can be used to determine the degree of whiteness of objects. These are

briefly described in the following sections.

2.2.1 The Spectrophotometer

The Colorimetric Mechanism of the Spectrophotometer

Three color-matching functions were defined by the CIE (Commission Internationale de

l'Eclairage) in 1931 [114]. These correspond to amounts of red, green and blue primary


colored lights required by an observer possessing normal color vision to match colors under a

2° field of view, because the CIE visual observations were conducted with a visual area subtending a 2° visual angle. In 1964, the CIE defined an additional set of color matching functions corresponding to a 10° observer. The transformed functions are generally used to estimate the human cone receptor's response to incident radiation from an object, if the radiation is known [115, 116].

Spectrophotometers measure the amount of radiation versus wavelength and, therefore the cone responses can be estimated. The X, Y and Z tristimulus values are calculated in terms of the spectral response and together represent an unambiguous description of a color, calculated or measured by reference to the CIE standard illuminant and observer functions.

The CIE standard observer defines three color matching functions, x̄(λ), ȳ(λ) and z̄(λ): x̄(λ) peaks in the red region of the spectrum, ȳ(λ) corresponds to the green response and to luminosity, and z̄(λ) peaks in the blue region. Tristimulus colorimeters attempt to filter light so as to reproduce the human cone responses, from which the X, Y, Z values can be calculated [116, 117].

Each tristimulus value (X, Y, Z) corresponds to an axis in a three-dimensional space, and all

together a sample’s tristimulus values define a position in that three-dimensional space. A

two-dimensional chromaticity diagram is obtained by performing two sequential projections.

Tristimulus values are converted into two variables giving a two dimensional map. That is,

magnitudes of tristimulus values are transformed into ratios of tristimulus values or in other

words into chromaticity coordinates, given by:


x = X/(X+Y+Z)

y = Y/(X+Y+Z) (24)

z = Z/(X+Y+Z)

Measurement Principles of Spectraflash 500

Since certain fluorescent materials absorb UV energy at around 350 nm and re-emit energy in the blue part of the visible spectrum at around 450 nm, light sources of instruments for whiteness assessment should correspond to the spectral profile of daylight, down to 300 nm

in the UV region. This is a strong limitation from the instrumental point of view because

standard light sources such as incandescent lamps cannot produce enough UV to approach the level encountered under daylight illumination. For this reason, only instruments equipped with xenon flash lamps are generally used for measuring whiteness; the flash lamp must produce a level of UV that is higher than the one encountered in daylight.

The Datacolor SF500 is a classical dual-beam reference spectrophotometer shown in Figure

27 and widely used in the standardizing laboratories [118].


Figure 27. Basic features of a dual-beam spectrophotometer (Datacolor SF500).

A filtered pulsed xenon lamp is used to approximate illuminant D65 and illuminate a 15 cm

diameter sphere coated with barium sulfate. The UV content of this light source is controlled by a motorized filter wheel, since the UV content significantly changes the colorimetric values obtained when measuring fluorescent materials. Using a movable filter allows the UV level to be set to correspond to that of daylight. The UV portion of the illuminating SPD can be cut off below 400, 420 or 460 nm, which enables the effect of fluorescent brightening agents to be measured. Dual-

beam measurements are preferred due to their inherent stability, as the measurement is of a

ratio rather than an absolute value. Errors due to drift of the measurement electronics

or variation in the light source are therefore effectively eliminated.

The software provided to operate this instrument must be able to conduct the adjusting

procedure. It is possible to determine the amount and profile of the fluorescence and separate


it from the passive reflectance of the substrate by inserting different filters. The main advantage of the Datacolor SF500 is that, besides a calibration based on whiteness values, it can be calibrated in terms of fluorescence intensity because it can determine fluorescence directly. Fluorescence calibration can be used to successfully determine the degree of

whiteness of samples containing FWAs.

UV calibration is carried out by measuring a set of one or more fluorescent standards such as

TUVCS according to the AATCC Test Method 110 [119]. In this method, samples with known values of whiteness are measured and the UV filter is adjusted until measured values match with those in the certificate as shown in Figure 28.

Figure 28. Flowchart of measuring whites containing FWA based on the AATCC Test Method 110.


It is important to observe that adjusting the UV amount alters the overall spectral distribution of the light source; therefore, a whiteness calibration must be carried out after any filter repositioning in order to obtain reliable results.

In the measurement of the optical properties of fluorescent materials, it is essential that the

UV content of the light source be defined, as well as techniques developed for calibrating or adjusting the UV content.

2.2.2 The Spectroradiometer

A spectroradiometer measures the radiometric output of an optical source as a function of wavelength and minimally contains a dispersing element (monochromator) and a detector.

Spectroradiometers can be configured to measure either spectral irradiance or radiance; a limiting aperture is required to measure the latter. The instrument used for measuring the distribution of radiant flux is the telespectroradiometer [120]. Telespectroradiometers are useful when measuring:

• Luminance, radiance or spectral radiance of uniform, diffusely emitting light sources;

• Luminous, radiant, or spectral radiant intensity of small or point sources; and

• Illuminance, irradiance, or spectral irradiance of collimated or point sources.

A schematic diagram representing the working mechanism of a typical telespectroradiometer is shown in Figure 29.


Figure 29. The diagram of a scanning telespectroradiometer (Bentham instrument).

Light from a certain area is collected through an aperture by the telescope and conveyed by a fiber optic link to a monochromator, which disperses the incoming light by means of a prism or diffraction grating and samples a small wavelength interval, usually 1 nm or 5 nm, through a narrow slit. The radiant flux in this fixed interval is detected by a photomultiplier tube and amplified as an analogue voltage, then converted into a digital value and stored in the software set-up for the telespectroradiometer. The software in the host computer controls the process, stepping the monochromator sequentially through the full range of wavelengths.

Adjustments can be made to the lens and sampling aperture in the telescope and to the wavelength interval. Such instruments can achieve high levels of accuracy, but may take a long time to complete one total scan across all wavelengths.


Another type of spectroradiometer, with an alternative mechanism to a monochromator, is the 'fast scan' variety, in which the spectral dispersion is performed by a polychromator that measures the flux at all wavelength intervals simultaneously. This is achieved by directing the light output of the dispersive element onto a silicon detector array. Such instruments have lower inherent accuracy and are limited by the dynamic range of the silicon detectors, but they can complete a measurement in a fraction of a second [121].

In order to calibrate the spectroradiometer to a known radiance standard, a reference source is always required. Generally, this takes the form of a tungsten lamp inside an integrating sphere coated on the inside with a uniform white reflective material, such as a polytetrafluoroethylene (PTFE) diffuse reflectance standard, with a small aperture through which the diffuse radiation is emitted. A UV lamp calibration is also necessary for radiant flux measurement if the wavelength range from 200 nm to 400 nm is to be investigated. By measuring the standard lamp and comparing the results with the reference data, the control software for the telespectroradiometer can determine a correction factor at each wavelength to convert the measured signals into accurate radiance values.
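The correction step can be expressed compactly. The sketch below is a simplified illustration, not vendor software; in practice the reference radiance would come from the standard lamp's calibration certificate and the measured signal from a scan of that lamp, but synthetic values are used here so the example runs on its own.

import numpy as np

wavelengths = np.arange(380, 781, 5)                       # nm, example interval
reference_radiance = 1e-3 * np.exp(-((wavelengths - 600) / 200.0) ** 2)
measured_signal = 5e4 * reference_radiance * (1 + 0.02 * np.sin(wavelengths / 30))

correction = reference_radiance / measured_signal          # factor per wavelength

def to_radiance(raw_scan):
    """Convert a raw telespectroradiometer scan into radiance values."""
    return raw_scan * correction

print(np.allclose(to_radiance(measured_signal), reference_radiance))  # True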

In order to effectively use a spectroradiometer, several factors need to be considered, which

include:

1. The overall wavelength range must be sufficient to include all radiant energy;
2. The dynamic range of the detector must be sufficient to handle the variation in the light level; and
3. Stray light leakage through the monochromator must be eliminated as much as possible.


2.2.3 Effect of UV Content on Whiteness

Fluorescent white textile materials exhibit enhanced whiteness due to the phenomenon of fluorescence and its typical mode of action on textile materials. The appearance of materials containing FWAs is mainly influenced by the FWA type, its age, the illumination (lamp characteristics), and the design of the instrument. A change in the illuminant generally changes the whiteness and shade deviation of a shaded white, along with a probable change in the behavior of the FWA.

The degree of fluorescence is a function of the UV content of the incident radiation, which must therefore be carefully defined if the whiteness value is to be accurate. The intensity of fluorescence of samples containing a fluorescent whitening agent (FWA) depends on the spectral power distribution of the illumination, especially in the UV region. Differences among the sources employed in visual assessment booths and among different measurement devices therefore also arise from differences in the spectral power distribution of the illumination.

Spectrophotometric and radiometric techniques can be used to determine the extent of variability in fluorescence due to variations in UV content, and thus to analyze the effect of UV content on whiteness [122].

The contribution of fluorescence to the final whiteness is a variable quantity that depends solely on the amount of UV radiation available to the FWA. The magnitude of perceived whiteness is determined by the combination of the whiteness of the substrate and the absorption from the shading agent and the FWA.


3. Psychophysical Evaluation of Whiteness

Whiteness is an attribute of color with high luminous reflectance and low purity, situated in a relatively small region of the color space. The color white is distinguished by its high lightness and its very low (ideally zero) saturation. Depending on the hue of a near white, its perception may differ. For example, an object with a blue cast will be perceived as whiter than an object that has a yellow cast, even when saturation and lightness are the same for both objects. Perception of whiteness depends on observers and observer preferences, and visual assessment of color is therefore affected by individual preferences. Instrumental whiteness assessment will be absolute only if the uncertainties of measurement and of evaluation are overcome. The assessment results also depend on the assessment method applied, such as ranking, pair comparison, difference scaling, or ratio scaling, for a particular observer. Many varying conditions, such as the level and spectral power distribution of sample irradiation, the color of the surrounding whites, and the desired appearance of various products, also have a direct impact on the perceived whiteness. The intensity of fluorescence of samples containing FWA (Fluorescent Whitening Agent), for example, depends on the spectral power distribution of the illumination, especially in the UV region. Differences among visual assessments, measurements, and different measurement devices may also result from differences in the spectral power distribution of the illumination. On the other hand, there is general agreement that samples are considered less white if they are yellower or darker.


According to Ernst Ganz [123], three uncharacteristic observations are made if samples that differ only moderately in whiteness are compared:

1. Different observers may give different weights to lightness and blueness. A sample can be ranked for its whiteness even though a difference in lightness and/or blueness may be clearly perceived.

2. Some observers prefer whites with a greenish tint while others prefer a reddish tint. This causes contradictory evaluations of whiteness where there is a hue difference in a given sample. At the same time, samples with an intermediate bluish or neutral tint are assessed more consistently.

3. There is, in general, agreement among observers on hue differences, which are assessed independently of the perceived difference in whiteness.

These three effects could explain why samples with different luminous reflectance, purity, and dominant wavelength may be ranked linearly for their perceived whiteness values under given conditions by observers, although no general agreement on whiteness may be reached

[106]. It is important to note that whiteness must be defined based on perceptual evaluations and psychometric techniques before instrumentally measuring whiteness. In reality, the appearance of an object is evaluated by the observer. Therefore, an objective measure cannot be built until the subjective reality has been analyzed [122].

Thus, from the perspective of an observer, ordinal scaling methods, namely the ranking method and the paired-comparison method, have been widely used in psychophysical evaluations [124]. In terms of the observers employed in any experiment, the magnitude of their experience or skill in such assessments can play a significant role in the results [125]. Therefore, the methodology used to evaluate observer variation, including inter- and intra-observer variability, is a subject which needs to be considered [126].

3.1 Ordinal Scaling Methods

Visual experiments tend to fall into two categories: threshold and matching experiments, which are designed to measure visual sensitivity to small changes in stimuli, and scaling experiments, which are intended to generate a relationship between the physical and perceptual magnitudes of a stimulus. It is critical to determine which class of experiment is appropriate for a desired application. Scaling experiments have been widely applied in developing models that establish a relationship between the visual assessment of perceived differences and measured data.

3.1.1. Rank Order Method

Asking observers to rank samples is perhaps the simplest ordinal scaling method to administer. The observer is asked to rank the textured samples in order, from best to worst, along an attribute defined by the instructions, such as texture or whiteness. If there are n samples, where n should not be so large that the viewing angle is significantly altered, then the ranks go from one to n, where n is usually assigned to the greatest amount of the attribute. The usual strategy is to obtain rankings from m observers, who assign a number to each rank, and then to calculate an average result for the ranks. Observer instructions are essential to guide the assessors toward a more accurate experiment. One of the first protocols for rank order scaling assessments was described by Bartleson [127]. The observers' task was to arrange a series of samples in order of some "ness", e.g., lightness, brightness, whiteness, etc.

Two possible methods of data collection for rank order scaling are available; both generate the same scale values but represent the data in two different ways. The first option is to form a data matrix, R, with a column representing each sample, numbered 1 to n, as shown in Table 2. Each row of this data matrix contains the responses of an observer, recorded as numbers from 1 to n representing the rank of each sample, entered in the column for that sample. Observers are identified in the first column, numbered 1 to m in our example. For each observer, a record of the rank given by the observer to the stimulus identified by the column is thus obtained. The data matrix, R, is of size m rows by n columns.

Table 2. Illustration of data matrix R.

Observer   Sample 1   Sample 2   Sample 3   ...   Sample n
1          3          1          n          ...   6
2          2          3          n-1        ...   5
3          3          1          n-2        ...   6
...
m          4          1          n          ...   7


This method of data representation is convenient for converting the ranking data into a proportion matrix, and is therefore the preferred method. Once the proportion matrix is obtained, other analysis techniques can be used to generate the desired results. The rank order method has been applied in a significant number of visual assessments in the field of color science [128, 129].

3.1.2 Paired-Comparison Method

The technique of paired comparisons is attributed to Gustav Fechner, who described it in

1860. Paired comparisons are rarely used for scale generation because the procedure is time-consuming. However, the method has been applied in visual assessments where samples are compared with a standard, in other words an anchor, to estimate their perceived attributes.

Suppose we have a set of n color samples and wish to scale their degree of whiteness. The samples are presented to an observer in pairs and the observer responds by selecting the sample that has the greatest amount of whiteness. This pairwise presentation is repeated for all possible n(n-1)/2 pairs, the number of all possible combinations of n objects taken two at a time.
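The pair set itself is straightforward to generate; the following minimal sketch (with invented sample names) enumerates the n(n-1)/2 combinations and randomizes their presentation order.

from itertools import combinations
import random

samples = [f"sample_{k}" for k in range(1, 11)]    # n = 10 white samples (invented)

# All n(n-1)/2 unordered pairs; each pair is shown to the observer, who
# selects the member perceived as whiter.
pairs = list(combinations(samples, 2))
random.shuffle(pairs)                               # randomize presentation order
print(len(pairs))                                   # 45 pairs for n = 10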

A general procedure for visual assessment is described in AATCC Evaluation Procedure 8 for evaluating the color difference between two specimens using a paired-comparison method [130]. A standard gray scale consisting of pairs of standard gray chips is used, the pairs representing progressive differences in color or contrast corresponding to numerical colorfastness grades.

The gray scale can be placed along the edges of the test sample pair, which consists of a standard sample and its corresponding batch or trial sample, and the perceived visual difference between the standard and the batch sample is obtained. Gray scale grades can be transformed into CIELAB color difference values using the colorimetric tolerance data defined for the standards, as shown in Table 3 [131,132].

Table 3. Gray Scale grades corresponding with CIELAB Color Difference.

Gray Scale Grade   CIELAB Color Difference   Description
5                  0.0                       Equal
4-5                0.8 ± 0.2
4                  1.7 ± 0.3                 Slight
3-4                2.5 ± 0.3
3                  3.4 ± 0.4                 Noticeable
2-3                4.8 ± 0.5
2                  6.8 ± 0.6                 Considerable
1-2                9.6 ± 0.7
1                  13.6 ± 1.0                Much

A number of other methodologies have also been attempted [133], such as those depicted in Figure 30, where S denotes the standard. Researchers at NC State University also reported a similar grey scale series assessment approach based on larger gray samples [134].


Figure 30. Sample arrangement in grey-scale assessments.

3.2 Psychophysical Experiments

3.2.1 Factors Affecting Visual Assessments

Some of the techniques used to derive psychophysical scales were elaborated in the previous section. However, many issues arise in the design of a visual assessment that have a significant impact on the experimental results, particularly when those results are applied to establish a model that predicts color variation. Many of these experimental factors are key variables that illustrate the need to extend basic colorimetry by developing an accurate relationship between the average perceived magnitude of human vision and the predicted data. A list of the key variables identified to date, which should be considered in practical applications, is shown below [135].

∗ Observer age
∗ Surround conditions
∗ Observer global experience
∗ Control and history of eye movements
∗ Observer local experience
∗ Adaptation scale
∗ Number of observers
∗ Complexity of observer task
∗ Screening for color vision deficiencies
∗ Controls
∗ Observer acuity
∗ Repetition rate
∗ Instructions
∗ Range effects
∗ Context
∗ Image content
∗ Feedback
∗ Number of images
∗ Rewards
∗ Duration of observation sessions
∗ Illumination level
∗ Number of observation sessions
∗ Illumination color
∗ Observer motivation
∗ Illumination geometry
∗ Cognitive factors
∗ Background conditions
∗ Statistical significance of results

These factors are real and must be addressed to assure high-quality scale values. A general procedure for observer selection and evaluation is also outlined in ASTM Standard E 1499-97 [136], which is primarily oriented toward color appearance judgments and gives detailed guidelines for the selection, evaluation, and training of observers. Several factors should be considered carefully in a scaling study, including:

∗ Presentation mode, which occurs in two basic ways: all at once or one at a time.

∗ Viewing distance, which should be controlled during observers' judgments.

∗ Sample illumination, with two principal variables to control, namely the absolute light level incident on the samples and the source spectral power distribution.

∗ Environmental factors such as room temperature, humidity, and psychological features.

∗ Observer motivation.

3.2.2 Observers' Evaluation

A commonly held belief among naive observers involved in psychometric scaling is that experts see things differently, or give different scale values than inexperienced observers.

This may or may not be true, and depends on the scaling task. Observers who participate in scaling studies are eager to help, and will often use various methods to provide the "correct" answers.

3.2.2.1 Average Rank

In order to evaluate observer variation, numerous approaches and methods have been investigated over the past 20 years. Bartleson [127] put forward an approach based on Equation 25:

$$\mathrm{AvgRank}_i = \frac{1}{m}\,[1\ 1\ 1\ \cdots\ 1]\;R \qquad (25)$$

Here the row vector of ones has length m. Equation 25 computes the column sums of R and divides the result by the number of observers, yielding the average rank for each stimulus or sample. The vector AvgRank_i has length n, the number of samples, and R is the matrix shown in Table 2.
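A minimal numerical illustration of Equation 25, using an invented rank matrix, is given below.

import numpy as np

# Invented rank matrix R (m observers x n samples); each entry is the rank an
# observer gave a sample, as in Table 2.
R = np.array([[3, 1, 4, 2],
              [2, 1, 4, 3],
              [3, 2, 4, 1]])

m = R.shape[0]                        # number of observers
avg_rank = np.ones(m) @ R / m         # Equation 25: column sums divided by m
print(avg_rank)                       # one average rank per sample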

A number of performance factors, such as PF/3, PF/4, and STRESS, have been put forward to evaluate the performance of different color difference equations; these methods have also been extended to assess observers' variability.

3.2.2.2 PF/3

PF/3 is a composite index that was proposed by Guan and Luo [137,138,139], with the corresponding parameters shown in Equations 26:

$$\log_{10}\gamma=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left[\log_{10}\!\left(\frac{\Delta E_i}{\Delta V_i}\right)-\overline{\log_{10}\!\left(\frac{\Delta E_i}{\Delta V_i}\right)}\right]^{2}} \qquad (26)$$

$$V_{AB}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\frac{\left(\Delta E_i-F\,\Delta V_i\right)^{2}}{\Delta E_i\,F\,\Delta V_i}}\,,\qquad \text{with } F=\sqrt{\frac{\sum \Delta E_i/\Delta V_i}{\sum \Delta V_i/\Delta E_i}}$$

$$CV=100\sqrt{\frac{\frac{1}{N}\sum\left(\Delta E_i-f\,\Delta V_i\right)^{2}}{\overline{\Delta E}^{\,2}}}\,,\qquad \text{with } f=\frac{\sum \Delta E_i\,\Delta V_i}{\sum \Delta V_i^{2}}$$

$$PF/3=\frac{100\left[\left(\gamma-1\right)+V_{AB}+CV/100\right]}{3}$$

where N indicates the number of color pairs (with visual and computed differences $\Delta V_i$ and $\Delta E_i$, respectively), F and f are factors adjusting the $\Delta E$ and $\Delta V$ values to the same scale, and the upper bar on a variable indicates the arithmetic mean. For perfect agreement between $\Delta E_i$ and $\Delta V_i$, CV and $V_{AB}$ should equal zero and $\gamma$ should equal one, in such a way that PF/3 becomes zero. A higher PF/3 value indicates worse agreement.
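The following sketch computes PF/3 from Equation 26 for arrays of computed and visual differences; the data in the example are invented, and perfect agreement returns a PF/3 of zero, as expected.

import numpy as np

def pf3(dE, dV):
    """PF/3 performance factor from Equation 26 for computed differences dE
    and visual differences dV (arrays of length N)."""
    dE, dV = np.asarray(dE, float), np.asarray(dV, float)
    log_ratio = np.log10(dE / dV)
    gamma = 10 ** np.sqrt(np.mean((log_ratio - log_ratio.mean()) ** 2))
    F = np.sqrt(np.sum(dE / dV) / np.sum(dV / dE))
    Vab = np.sqrt(np.mean((dE - F * dV) ** 2 / (dE * F * dV)))
    f = np.sum(dE * dV) / np.sum(dV ** 2)
    CV = 100 * np.sqrt(np.mean((dE - f * dV) ** 2)) / dE.mean()
    return 100 * ((gamma - 1) + Vab + CV / 100) / 3

# Invented toy data: identical inputs give PF/3 = 0.
print(pf3([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))    # 0.0
print(pf3([1.2, 2.6, 2.4], [1.0, 2.0, 3.0]))    # > 0, worse agreement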

3.2.2.3 Standardized Residual Sum of Squares (STRESS)

The PF/3 index has a number of shortcomings and as such the STRESS index was proposed

as an alternative. The standardized residual sum of squares (STRESS) index [140], shown in

Eq. (27), is an important tool for measuring the strength of the relationship between the visual color difference ($\Delta V$) and the computed color difference ($\Delta E$) for a color pair.

$$\mathrm{STRESS}=100\sqrt{\frac{\sum\left(\Delta E_i-F_1\,\Delta V_i\right)^{2}}{\sum F_1^{2}\,\Delta V_i^{2}}}\,,\qquad F_1=\frac{\sum \Delta E_i^{2}}{\sum \Delta E_i\,\Delta V_i} \qquad (27)$$

For a given set of i = 1 ... N color pairs, the visually perceived color difference is designated by $\Delta V_i$ and the computed color difference by $\Delta E_i$. A STRESS value of zero indicates perfect agreement between results from different trials or amongst different observers.

STRESS can be employed to determine observer variability in visual assessments. Two types of variability are often calculated, namely intra-observer variability, which indicates the repeatability of the same observer in different trials, and inter-observer variability, which determines the reproducibility of results given by an observer in relation to the general response from a group of observers. Inter-observer variability, also called observer accuracy, is the deviation of the mean results of each observer from the mean results of a panel of observers, while intra-observer variability is the deviation among the results of a given observer in replicated trials of an experiment. For the assessment of intra-observer variability, $\Delta V_i$ and $\Delta E_i$ are replaced by the visual responses of a given observer in two different assessment sessions. For the assessment of inter-observer variability, $\Delta V_i$ and $\Delta E_i$ are replaced by the mean responses obtained from a given observer and those from all observers, respectively.
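A minimal sketch of Eq. (27), applied to invented intra- and inter-observer data as described above, is given below.

import numpy as np

def stress(dE, dV):
    """STRESS index of Eq. (27) between computed (dE) and visual (dV) data."""
    dE, dV = np.asarray(dE, float), np.asarray(dV, float)
    F1 = np.sum(dE ** 2) / np.sum(dE * dV)
    return 100 * np.sqrt(np.sum((dE - F1 * dV) ** 2) / np.sum(F1 ** 2 * dV ** 2))

# Intra-observer variability: invented responses of one observer in two sessions.
session_1 = [3.0, 1.5, 4.2, 2.8]
session_2 = [2.8, 1.7, 4.0, 3.1]
print(stress(session_1, session_2))        # repeatability of the observer

# Inter-observer variability: one observer's means against the panel means.
observer_mean = [3.1, 1.6, 4.1, 2.9]
panel_mean = [2.9, 1.9, 3.8, 3.2]
print(stress(observer_mean, panel_mean))   # reproducibility against the panel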

The STRESS index has also been applied in different fields to evaluate the performance of various color difference equations, and it is a valuable method because it allows inference on the statistical significance of the difference between two color-difference formulas with respect to a given set of visual data. However, a study by Kirchner & Dekker showed that STRESS has no meaningful interpretation in this type of regression analysis in comparison with Pearson's correlation coefficient (r), nor does the performance factor PF/3 [141]. This indicates that supplementary statistical methods, such as the correlation coefficient, may be needed when analyzing results to ensure they are not misinterpreted.

3.2.2.4 Correlation Coefficient

A correlation coefficient [142] has also been used to evaluate the performance of color difference formulae, as shown in Eq. 28.

$$r=\frac{n\sum X_i Y_i-\sum X_i\sum Y_i}{\sqrt{\left[n\sum X_i^{2}-\left(\sum X_i\right)^{2}\right]\left[n\sum Y_i^{2}-\left(\sum Y_i\right)^{2}\right]}} \qquad (28)$$

where $X_i$ is the instrumental value, $\Delta E$, $Y_i$ is the visual value, $\Delta V$, and n is the number of pairs of samples. A correlation coefficient of 1 means perfect agreement between the

instrumental and visual data. However, since color perception is multi-dimensional it is

questionable whether r can be used as an effective tool to gauge the agreement between

perceived and computed values.
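For completeness, a short numerical sketch of Eq. 28 with invented data follows; it gives the same value as numpy's built-in corrcoef.

import numpy as np

# Invented instrumental (X, i.e. computed dE) and visual (Y, i.e. dV) values.
X = np.array([1.2, 2.5, 2.4, 3.8, 0.9])
Y = np.array([1.0, 2.0, 3.0, 3.5, 1.1])

n = len(X)
r = (n * np.sum(X * Y) - X.sum() * Y.sum()) / np.sqrt(
    (n * np.sum(X ** 2) - X.sum() ** 2) * (n * np.sum(Y ** 2) - Y.sum() ** 2))
print(r, np.corrcoef(X, Y)[0, 1])          # the two values agree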


4. Surface Texture

Image texture, defined as a function of the spatial variation in pixel intensities (gray values),

is useful in a variety of applications and has been a subject of intense study by many

researchers. One immediate application of image texture is the recognition of image regions

using texture properties. Three primary issues in the study of texture analysis are texture classification, texture segmentation, and texture synthesis [143].

Texture classification refers to the process of grouping images of textures into classes, where each resulting class contains similar patterns according to some similarity criterion. Texture segmentation is used to refer to the process of dividing an image up into homogeneous regions according to some homogeneity criterion as shown in Figure 31 [144].


Figure 31. Texture segmentation.

Texture synthesis is often used for image-compression applications. It is also important in computer graphics, where the goal is to render object surfaces that are as realistic as possible.

4.1 Overview of Various Methods for Texture Analysis

Mathematical methods of texture analysis of 2D images can be categorized as statistical and syntactic methods [66].

Statistical approaches usually compute different properties and are suitable when the primitive sizes of texture are comparable with the pixel sizes. Statistical methods categorize texture as smooth, coarse and grainy, etc. Furthermore, statistics of some local geometrical


features such as edges, peaks, valleys etc. can also give a measure of specific texture

properties in an image. This method is useful when the texture primitives are small, resulting

in micro-textures. Alternatively, when the size of the texture primitives is large, it becomes

necessary to first determine the shape and properties of the basic primitive and then the rules

which govern the placement of these primitives, forming macro-textures [145]. Syntactic and hybrid (combined statistical and syntactic) methods are suitable for textures whose primitives can be described using a larger variety of properties than just tonal properties, for example shape descriptions. Using these properties, the primitives can be identified, defined, and assigned a label. This section discusses some of the statistical approaches for texture analysis used to characterize textile textures.

4.1.1 Grey-level Histogram

The simplest method for describing texture is to use statistical moments of the grey-level

histogram of an image or region-of-interest of that texture. The first-order histogram probability p(i) can be given in Eq. 29 [146]:

$$p(i)=\frac{N(i)}{M}\,,\qquad i=0,1,2,\ldots,k-1 \qquad (29)$$

N(i) is the number of pixels with pixel value i; k is the total number of intensity levels (k is 256 for 8-bit image data); and M is the number of pixels in the image or region-of-interest. The features based on the first-order histogram probability are the mean, standard deviation, skewness, kurtosis, energy, and entropy.

The mean is the average value, so it can be described as the general brightness of the image

as shown in Eq. 30:

$$\bar{i}=\sum_{i=0}^{k-1} i\,P(i) \qquad (30)$$

The standard deviation, which is also known as the square root of the variance, is related to contrast, so a low contrast image will have a small variance.

$$\sigma^{2}=\sum_{i=0}^{k-1}\left(i-\bar{i}\right)^{2}P(i) \qquad (31)$$

The variance σ² can also be used to establish descriptors of relative smoothness.

$$R=1-\frac{1}{1+\sigma^{2}(i)} \qquad (32)$$

R is 0 for areas of constant intensity since the variance is 0.

Skewness is a measure of symmetry, or more accurately, the lack of symmetry.

$$S=\frac{1}{\sigma^{3}}\sum_{i=0}^{k-1}\left(i-\bar{i}\right)^{3}P(i) \qquad (33)$$

A distribution is symmetric if it looks the same to the left and right of the center point. The skewness for any symmetric data should have a value near zero.

The energy measure has a maximum value of unity for an image with a constant value and gets increasingly smaller as the pixel values are distributed across more grey level values.

$$U=\sum_{i=0}^{k-1}P^{2}(i) \qquad (34)$$

Entropy is a measure of variability and is zero for a constant image. The measure tends to vary inversely with the energy.

$$e=-\sum_{i=0}^{k-1}p(i)\,\log_{2}\left[p(i)\right] \qquad (35)$$

However, measures of texture based upon statistics of the grey-level histograms do not take into account information regarding the relative position of the pixels with respect to each other.
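The first-order features of Equations 29-35 can be computed directly from the histogram, as in the following sketch (the test patch is invented).

import numpy as np

def histogram_features(image, k=256):
    """First-order texture features (Eqs. 29-35) of an 8-bit grey-level image."""
    p = np.bincount(image.ravel(), minlength=k) / image.size   # Eq. 29
    i = np.arange(k)
    mean = np.sum(i * p)                                        # Eq. 30
    var = np.sum((i - mean) ** 2 * p)                           # Eq. 31
    smoothness = 1 - 1 / (1 + var)                              # Eq. 32
    skewness = np.sum((i - mean) ** 3 * p) / var ** 1.5         # Eq. 33
    energy = np.sum(p ** 2)                                     # Eq. 34
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))             # Eq. 35
    return mean, np.sqrt(var), smoothness, skewness, energy, entropy

patch = np.random.randint(0, 256, size=(64, 64))   # invented test patch
print(histogram_features(patch))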

4.1.2 Grey Level Co-occurrence Matrices

The GLCM (Grey Level Co-occurrence Matrix), proposed by Haralick et al. [147], has been used to describe the homogeneity characteristics of samples and to identify regions of interest in an image. This method is based on the joint probability distributions of pairs of pixels.

The intensities of the current pixel i and its neighbor j constitute a single co-occurrence of i and j, given the sampling parameters d (distance) and θ (orientation). The frequencies of all co-occurrences are stored in an M×N matrix. Since the range of the gray level of an image is from 0 to 255, the GLCM dimension is consequently 256×256. Entry (i, j) in the matrix is therefore the number of (i, j) pairs sampled in the image. The GLCM is denoted as follows:

$$P(g_1,g_2)=\frac{N\{\left[(x_1,y_1),(x_2,y_2)\right]\in S \mid f(x_1,y_1)=g_1 \;\&\; f(x_2,y_2)=g_2\}}{N(S)} \qquad (36)$$

where P(g1, g2) is the frequency of a co-occurrence with the gray level varying from g1 to g2 under a given distance and direction, and N{·} denotes a count. The denominator on the right is the total number of pixel pairs in the image under the given distance and direction, while the numerator is the number of pairs whose gray level changes from g1 to g2. The GLCM shows how often each gray level occurs at a pixel located at a fixed geometric position relative to each other pixel, as a function of the gray level [66]. Fourteen indices were proposed to describe the characteristics of the matrix; some of these features are listed in Table 4.


Table 4. Some texture features extracted from Gray Level Co-occurrence Matrices.

Texture Feature   Formula

Energy            $\sum_i \sum_j P_d(i,j)^{2}$

Entropy           $-\sum_i \sum_j P_d(i,j)\,\log P_d(i,j)$

Contrast          $\sum_i \sum_j (i-j)^{2}\,P_d(i,j)$

Homogeneity       $\sum_i \sum_j \dfrac{P_d(i,j)}{1+|i-j|}$

Correlation       $\dfrac{\sum_i \sum_j (i-\mu_x)\left(j-\mu_y\right)P_d(i,j)}{\sigma_x\,\sigma_y}$
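The sketch below computes a normalized co-occurrence matrix for a single distance and direction and evaluates a few of the Table 4 features; it is a compact illustration rather than Haralick's full fourteen-feature set, and the test patch is invented.

import numpy as np

def glcm(image, dx=1, dy=0, levels=256):
    """Normalized grey level co-occurrence matrix for one offset (dx, dy),
    i.e. one choice of distance d and orientation theta (dx, dy >= 0 assumed)."""
    img = np.asarray(image)
    rows, cols = img.shape
    P = np.zeros((levels, levels))
    for x in range(rows - dy):
        for y in range(cols - dx):
            P[img[x, y], img[x + dy, y + dx]] += 1
    return P / P.sum()

def glcm_features(P):
    """Energy, entropy, contrast and homogeneity from Table 4."""
    i, j = np.indices(P.shape)
    energy = np.sum(P ** 2)
    entropy = -np.sum(P[P > 0] * np.log(P[P > 0]))
    contrast = np.sum((i - j) ** 2 * P)
    homogeneity = np.sum(P / (1 + np.abs(i - j)))
    return energy, entropy, contrast, homogeneity

patch = np.random.randint(0, 256, size=(32, 32))   # invented test patch
print(glcm_features(glcm(patch)))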

4.1.3 Discrete Fourier Transformation

Psychophysical studies have shown that the human brain carries out a frequency analysis of an image. This section therefore includes texture analysis algorithms that rely on signal-processing methods [148]. The Fourier transformation is used to transform images from the spatial domain to the frequency domain. Spatial frequency is related to texture in that fine textures are rich in high spatial frequencies while coarse textures are rich in low spatial frequencies. The discrete Fourier transformation is shown in Equation (37).


$$F(u,v)=\frac{1}{N\times M}\sum_{x=0}^{N-1}\sum_{y=0}^{M-1} f(x,y)\,e^{-j2\pi\left(\frac{xu}{N}+\frac{yv}{M}\right)} \qquad (37)$$

where (x, y) is the coordinate in the spatial domain, (u, v) is the coordinate in the frequency domain, and f is the luminance of the grey-level image. Each image is in an N×M grid. F(u, v) is the spectral value at the frequency corresponding to (u, v). When the image is square, that is, N = M, the transformation simplifies to that shown in Equation (38):

$$F(u,v)=\frac{1}{N\times N}\sum_{x=0}^{N-1}\sum_{y=0}^{N-1} f(x,y)\,e^{-j2\pi\left(\frac{xu+yv}{N}\right)} \qquad (38)$$

Since Fourier transforms have the conjugate symmetry property, the lower frequency region, where (u, v) is close to (0, 0), can be relocated to the center and the range of (u, v) can thus be changed to (−N/2, N/2). The spectral power of the entire image, P(u, v), can thus be calculated using Equation (39):

$$P(u,v)=\left|F(u,v)\right|^{2} \qquad (39)$$
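In practice the transform of Equations 37-39 is computed with the fast Fourier transform; the following sketch (with an invented test image) shows the shift of the low frequencies to the center and the spectral power of Eq. (39).

import numpy as np

img = np.random.rand(128, 128)          # invented grey-level test image

F = np.fft.fft2(img) / img.size         # Eq. (37) with the 1/(N x M) factor
F = np.fft.fftshift(F)                  # move (u, v) = (0, 0) to the center
P = np.abs(F) ** 2                      # Eq. (39): spectral power

# Fine textures place more power far from the center (high spatial frequencies).
print(P.shape, float(P.max()))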

4.2 Modulation Transfer Function

The human visual system consists of optical processing (the eye) and neural processing. One of the weaknesses of many analyses of texture is that the human visual system is often not considered. All image-forming optical systems, including the human eye, create an image of the object being viewed that is not identical to it and that has lost some contrast [149]. In other words, a texture may be visible at a certain distance but may not be visible if the viewing distance is increased. The property of the visual system that is pertinent to this phenomenon is the contrast sensitivity function. It is, therefore, reasonable to consider the human visual system, and especially the effect of the contrast sensitivity function, on the perception of texture.

Let us assume a point of light and an observer with normal color vision in a scene. If the human visual system were a perfect optical system, the image of this point on the retina would be identical to the original point of light. However, the human optical system, our eyes, is not perfect, and the relative intensity of this point of light is distributed across the retina. The standard way to describe the resolution characteristic of an image-forming optical system is its modulation transfer function. The optical system forms images of various sine-wave grating targets, and the contrast in the image is compared to the contrast of the target. For each spatial frequency, the modulation transfer (MT), from object to image, for the lens only, is computed as shown in Eq. (40).

MT = image contrast / object contrast (40)

The modulation transfer function of a normal lens is shown in Figure 32 [150]. A value of modulation transfer of 1.0 shows that the image contrast is the same as that found in the object, that is, no contrast is lost.

This property corresponds to the frequency response of a linear, shift-invariant system, which is composed of a magnitude response and a phase response; in the visual system it can be used to describe the sensitivity to signals of different frequencies.

Figure 32. Typical modulation transfer function.

A practical model for the MTF is given by the radially symmetric function, shown in

Equation (41) [151]:

$$H(\omega)=A\left(\alpha+\frac{\omega}{\omega_0}\right)\exp\!\left[-\left(\frac{\omega}{\omega_0}\right)^{\beta}\right] \qquad (41)$$

where ω is the circular frequency measured in cycles per degree, and can be transformed to

cycles per mm. In Equation (41), A = 2.5, α = 0.0192, ω0 = 8.772, and β = 1.1.
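Equation 41 with the quoted parameters can be evaluated directly, as in the short sketch below.

import numpy as np

def mtf(omega, A=2.5, alpha=0.0192, omega0=8.772, beta=1.1):
    """Radially symmetric MTF model of Eq. (41); omega in cycles per degree."""
    return A * (alpha + omega / omega0) * np.exp(-(omega / omega0) ** beta)

freqs = np.linspace(0.5, 60, 8)          # example spatial frequencies
print(np.round(mtf(freqs), 3))           # sensitivity peaks at mid frequencies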


III. Experimental Methodology and Results

1. The Effect of Texture on Perception and Measurement of White Knitted

Textiles under D65 Illumination

In the textile industry, products commonly contain a range of different textures and patterns.

Variations in surface roughness, via introduction of different textures, can significantly affect

the colorimetric attributes of textile substrates. In a global supply chain, a correct

understanding of the effect of such variations is critical to maintaining a competitive edge.

Several parametric effects influence the perceived color of products and variations in these

parameters can change the magnitude of differences amongst otherwise identical objects. In

addition to texture, other factors include the color of the background, luminance in the

viewing field, the physical size of samples, the mode of sample presentation, the magnitude

of color differences, as well as whether the color can be described as a surface or self-

luminous.

Assessments of product quality in textiles are carried out using visual, as well as instrumental

techniques. It has been long established that surface roughness influences the perception of

color. Various attempts have been made to model and predict the change in color as a result

of change in texture. Indeed, a number of recommended color difference formulae, e.g.,

CMC, CIE94, CIEDE2000, etc., include adjusting factors to account for the varied

interaction of light with different surfaces. More recently, the influence of texture on

suprathreshold lightness differences [71] and on color difference evaluation [72] was

reported. However, studies reporting an examination of the effect of surface texture on the perception and measurement of white substrates, including those containing fluorescent brightening agents, were not found in the literature.

As a part of a larger study, the aim of this section of the work was to develop a range of textile substrates with a uniform white base but different surface characteristics and determine the role of texture on perception and measurement of white materials. A

preliminary set of results was reported previously [152]. Psychophysical assessments as well

as instrumental methods involving spectrophotometric techniques were used to analyze the

effect of surface features on perception and measurement of white substrates.

1.1 Experimental

1.1.1 Preparation of Samples

Several knitted woolen and cotton fabrics with different surface patterns were prepared. A

series of suitable textures from the large set of prepared samples was then selected. In the

first part of the study, 10 woolen samples were obtained using two methods. Initially scoured

only woolen yarns were knitted and samples representing varying levels of surface roughness

were generated. The knitted fabrics were then bleached using a commercial recipe containing

sodium borohydride (SBH) and sodium bisulphate (SBS) [153]. To increase the level of

whiteness attained, samples were simultaneously optically brightened with a commercially

available fluorescent brightening agent, UVITEX, at nine different concentrations (0.1, 0.25,

0.5, 0.75, 1.0, 1.25, 1.5, 1.75, and 2.0% o.w.f). The optimal conditions for bleaching as well


as the amount of brightening agent to generate fabrics of appropriate base whiteness were

examined using a panel of five expert visual assessors under controlled illumination and

viewing conditions. A SpectraLight III calibrated viewing booth (X-Rite) illuminated with

filtered tungsten bulbs simulating D65 illuminant at a color temperature of 6489 K and an

illumination intensity of 1400 lux inside the chamber was used. The cabinet's UV light was added to the simulated D65 illuminant (D65+UV) during the assessments. A 0/45

illumination viewing geometry was employed for psychophysical assessments. Figure 33

shows a scoured woolen yarn package, a knitted sample from scoured woolen yarn, a bleached woolen fabric, as well as a bleached and optically brightened sample for comparison of appearance.


Figure 33. Bleached woolen yarn, scoured woolen knitted fabric, bleached woolen fabric, bleached & optically brightened woolen fabric (from left to right).

A second set of woolen samples was obtained by first bleaching the woolen yarn under optimal bleaching and brightening conditions (1.25% o.w.f. FBA) followed by knitting bleached yarns to generate different surface patterns as shown in Figure 34.


Figure 34. Different surface textures examined in the study.

A different set of ten knitted cotton fabrics with different surface patterns was also generated from cotton yarns that were already bleached and brightened. Cotton samples were then washed to remove any impurities that may have been introduced during sample preparation.

This method minimized variability in the degree of whiteness attained during wet processing.

Table 5 gives a description of wool fabric patterns used in the study as well as their L* and

mean perceived lightness values, as determined by observers in visual assessments.


Table 5. CIE L* and perceived lightness of textured woolen samples.

Textures L* Perceived Lightness

a Zigzag Effect 82.86 82.54

b Jersey Face 82.65 83.07

c Bias Effect 82.59 82.53

d Jersey Back 82.47 82.65

e Racking Effect 82.40 82.49

f 2×3 Rib 82.07 81.95

g Half Cardigan Back 81.22 80.83

h 1×1 Rib 81.07 81.11

i Half Cardigan Face 80.79 80.99

j Full Cardigan 80.71 80.96

1.1.2 Instrumental Measurement

Colorimetric attributes and the whiteness of textured samples were measured with a

Datacolor SF600X spectrophotometer using the D65 illuminant, the 1964 CIE supplementary standard observer (10°), and with specular light and UV included, based on the AATCC

Evaluation Procedure 11 [154] but at a specific predetermined orientation. The same orientation was used for all measurements and for visual assessments. The CIE Whiteness

Index (WI) formula, shown in Equation 20, was used to calculate the whiteness of samples from their tristimulus values according to the AATCC Test Method 110 [155].


Table 6. CIE whiteness index and L* of textured samples.

Textures             Wool L*   Wool CIEWI   Cotton L*   Cotton CIEWI
Zigzag Effect        82.86     54.88        94.96       150.69

Jersey Face 82.65 57.74 93.34 143.55

Bias Effect 82.59 52.71 95.84 152.48

Jersey Back 82.47 56.76 94.37 149.36

Racking Effect 82.40 54.34 95.01 149.28

2×3 Rib 82.07 56.52 94.03 147.15

Half Cardigan Back 81.22 51.37 93.82 147.77

1×1 Rib 81.07 53.71 93.45 145.25

Half Cardigan Face 80.79 53.05 93.86 147.07

Full Cardigan 80.71 53.50 93.24 147.20

In addition to the whiteness index, the CIE Tint Index, using illuminant D65 and the 1964 CIE supplementary standard observer (10°), can be used to determine variations in tint amongst white substrates, as shown in Equation 21 [156]. It should be noted, however, that measurement of tint among different instruments is subject to considerable variation, and thus results may vary significantly for a set of measured substrates.
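As an aside, the sketch below assumes the standard CIE formulation of the whiteness and tint indices for illuminant D65 and the 10° observer, which is presumably what Equations 20 and 21 refer to; the neutral-point chromaticities used are the usual tabulated values, and the sample values in the example are invented.

def cie_whiteness_tint_10(Y10, x10, y10, xn=0.3138, yn=0.3310):
    """Standard CIE whiteness and tint for D65 and the 1964 10-degree observer;
    (xn, yn) is the usual tabulated D65/10-degree neutral point (assumed)."""
    W10 = Y10 + 800 * (xn - x10) + 1700 * (yn - y10)
    T10 = 900 * (xn - x10) - 650 * (yn - y10)
    return W10, T10

# Invented colorimetric values for an optically brightened sample.
print(cie_whiteness_tint_10(Y10=94.0, x10=0.3050, y10=0.3180))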

Both sets of cotton and wool samples were visually evaluated and the analysis of psychophysical evaluations from the samples is given in the following section. Table 6 shows

L* and CIE whiteness values for woolen and cotton samples.


1.1.3 Visual Assessment

In the first part of the study a panel of 25 naïve observers (13F and 12M, average age 26) assessed the perceptual whiteness of selected woolen samples. The AATCC EP9 guidelines were followed during the visual assessments [157]. All observers were color normal according to the Neitz test for color vision [158]. Observers repeated assessments three times for woolen samples (for a total of 1500 assessments) and twice for cotton (800 assessments), with a time gap of at least 24 hours between trials. The viewing booth used and the specifications of sources were identical to those described in section 1.1.1. Figure 35 illustrates the psychophysical assessment methodology employed. Observers were adapted to the viewing conditions for at least two minutes prior to assessments. In each trial, observers ranked the perceived whiteness of samples from most white (10) to least white (1) and a mean rank rating for each sample from all observations was obtained. Observers were then asked to rank samples from most light to least light and a mean ranking based on the lightness of samples was also obtained. A reference white textile sample (AATCC optically brightened white standard with L* ~ 100), and a reference gray (L* ~ 50), shown in Figure

34, were provided as an anchor reference pair to aid observers in their assessment of the perceived magnitude of lightness of the textured samples. The grand mean of the perceived magnitude of lightness of each sample, based on responses from all observations, was obtained.

In the second part of the study, a panel of 20 naïve observers (9M and 11F, average age 26.6) assessed the magnitude of the perceived whiteness of knitted cotton samples. Some of the observers taking part in this study also participated in the first study involving woolen


samples. The perceived lightness of cotton samples was ranked by observers from the most

light to the least light.

Figure 35. Procedure employed for the perceptual assessment of white textile substrates.

1.2 Data Analysis

Responses from observers were analyzed to determine the role of texture on perceived

whiteness and lightness of appropriate samples. It was found that the most and least white

samples could be identified with a high degree of confidence using simple statistical techniques. However, ranking of samples that exhibited medium apparent whiteness did not


generate a strong correlation with sample texture and therefore different analyses, namely

clustering and weighted probability, were also employed.

1.2.1 Cluster Analysis

Cluster analysis, also called segmentation analysis, provides an abstraction from individual

data to the clusters in which data objects reside and has been widely used in the analysis and

retrieval of information [159]. Cluster analysis can implement this by seeking to identify a

set of groups which both minimizes within-group variation and maximizes between-group

variation. This method was used in our study. In the cluster method observations that had

similar properties were clustered into one group. Two different whiteness groups were thus

generated from samples ranked as medium white, where responses for the racking effect,

1×1 rib and bias effect samples were classified into a single group, and responses for half-

cardigan (face), zigzag and full-cardigan samples were clustered into another group. In this work the common features among clustered samples were not further examined; such algorithms could be incorporated to help distinguish similar structures, and this may be the subject of future work.
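As an illustration of how such a grouping can be automated, the sketch below clusters samples by the mean and standard deviation of their observer ranks using k-means (scikit-learn assumed available); the numbers are invented, and this is not the exact procedure used in the study.

import numpy as np
from sklearn.cluster import KMeans

# Invented rank statistics for the medium-white samples: [mean rank, std dev].
rank_stats = np.array([[4.8, 2.1],    # racking effect
                       [5.1, 2.3],    # 1x1 rib
                       [5.0, 2.2],    # bias effect
                       [6.0, 2.6],    # half cardigan face
                       [5.9, 2.8],    # zigzag effect
                       [6.2, 2.7]])   # full cardigan

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(rank_stats)
print(labels)    # samples sharing a label fall into the same whiteness group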

1.2.2 Weighted Probability Analysis

Weighted probability analysis, which is based on the assumption that each rank of perceived evaluation contributes in different amounts to predicting the whole perceptual assessment, is another approach that is widely used in data analysis. In this approach, ranking positions are


weighted differently based on their impact on the feature being modeled. Samples are ordered according to the highest probability of appearing in their rank position in each group.

The weighted probability of appearing in each rank for each sample was then calculated.

Results are listed in Table 7.

A value of 1 was assigned to rank 10 (most white) and a value of 0.1 was assigned to rank 1

(least white) with other ranks separated by an equal interval of 0.1 unit. The different weights were then used to mathematically add the impact of different ranking positions. The probability of appearing in each rank was multiplied by its weight and values were then summed up for the final weighted probability. Results are shown in Figure 36. The same methodology was repeated for the analysis of the perceptual lightness of the woolen samples.
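The calculation reduces to a weighted sum over the rank-probability matrix, as in the following sketch with an invented probability matrix.

import numpy as np

rng = np.random.default_rng(0)
# P[s, k]: probability that sample s appears at rank k+1 (rank 10 = most white).
# The matrix below is invented; in the study it comes from observer responses.
P = rng.dirichlet(np.ones(10), size=3)          # 3 samples x 10 rank probabilities

weights = np.arange(0.1, 1.01, 0.1)             # 0.1 for rank 1 ... 1.0 for rank 10
weighted_probability = P @ weights              # probability x weight, summed per sample
print(weighted_probability)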


Table 7. Percentage weighted probabilities of different woolen samples ranked from most to least white.

Textures             Most white   [2]     [3]     [4]     [5]
Zigzag Effect        1.39         5.56    4.17    11.11   11.11
Jersey Face          86.11        8.33    0.00    4.17    0.00
Bias Effect          2.78         2.78    6.94    12.50   19.44
Jersey Back          5.56         61.11   5.56    5.56    5.56
Racking Effect       1.39         4.17    19.44   22.22   13.89
2×3 Rib              1.39         6.94    37.50   19.44   11.11
Half Cardigan Back   0.00         1.39    1.39    0.00    1.39
1×1 Rib              0.00         4.17    20.83   13.89   16.67
Half Cardigan Face   0.00         4.17    2.78    5.56    8.33
Full Cardigan        1.39         1.39    1.39    5.56    12.50

Textures             [6]     [7]     [8]     [9]     Least white
Zigzag Effect        6.67    20.83   18.06   8.33    2.78
Jersey Face          0.00    0.00    0.00    0.00    1.39
Bias Effect          16.67   12.50   9.72    11.11   4.17
Jersey Back          4.17    2.78    2.78    4.17    4.17
Racking Effect       16.67   9.72    9.72    1.39    2.78
2×3 Rib              13.89   2.78    5.56    0.00    1.39
Half Cardigan Back   6.94    12.50   5.56    16.67   54.17
1×1 Rib              4.17    6.94    8.33    13.89   11.11
Half Cardigan Face   12.50   16.67   20.83   20.83   6.94
Full Cardigan        8.33    15.28   19.44   23.61   11.11


Figure 36. The weighted probability of different woolen textures being ranked as the most white.

1.3 Results and Discussion

Whiteness is influenced by two main parameters, lightness and tint. In this work, identical bleaching and brightening conditions were implemented as an attempt to keep tint for all samples constant. Variations in perceived lightness and whiteness of samples would thus be presumably mainly due to different surface textures as a result of varying levels of light scattering. Thus lightness variation was considered to be the main variable affecting the perception of whiteness of samples. To verify the validity of this assumption, the relationship between perceived lightness and L* for woolen samples was determined.


1.3.1 The Effect of Texture on Measured and Perceived Lightness

To investigate how variations in surface patterns influenced the perceived lightness of brightened woolen samples, the lightness magnitude was evaluated visually and mean responses were compared to measured lightness (L*) values. A reference white textile sample

(AATCC standard optically brightened white with L* ~ 100), as well as a reference gray (L*

~ 50) was used as an anchor pair to aid observers in their assessments. The grand mean

perceived lightness based on responses from all observers in all trials was obtained. Figure

37 shows results for woolen samples which exhibit a strong correlation.

Figure 37. Correlation of mean perceived lightness magnitude against L* for woolen samples (R² = 0.9118).


In Figure 38a-b, mean perceived lightness rankings for both woolen and cotton samples, separately, and as a group is shown. Figure 38a shows that as L* increases mean perceived lightness ranks exhibit a weaker correlation against measured values. In fact the correlation for woolen samples is considerably stronger (0.62) than that (0.17) for cotton. This indicates that observers’ ability in distinguishing small changes in lightness in the L* range of 90-100 is poor.

However, despite the narrow lightness range of samples employed, observers clearly detected slight variations in lightness at the lower L* range (80-90) for woolen samples as indicated by the stronger correlation obtained. When samples are viewed as a group, the correlation of perceived lightness rank against measured L* is significantly improved (0.90). This is expected since the differences in lightness of two sets of samples are large and clearly distinguishable and observer variability in detection differences between samples becomes much smaller, resulting in a good correlation of perceived lightness rank against measured values.

Figure 38. Correlation of mean observer lightness rank against L* for cotton and woolen samples separately (a: wool R² = 0.3705, cotton R² = 0.1747) and as a group (b: R² = 0.9013).


1.3.2 Correlation between Perceived Lightness and Perceived Whiteness

The study was aimed at exploring the effect of variations in lightness as the key variable on perception of whiteness. It was expected that observer ranks of perceived lightness and perceived whiteness would exhibit a strong correlation. Figure 39 shows a strong correlation between perceived lightness and perceived whiteness for both cotton and wool samples. This supports the validity of the stated assumption. It must be stressed, however, that results differ for the apparent lightness and whiteness values of wool and cotton samples. For instance, while jersey face texture was perceived to be the whitest woolen sample, it was considered to be the least white cotton specimen. Similarly, 2×3 rib was considered to be the third-most white woolen sample whereas it was perceived as the second least white cotton sample. This is summarized in Figure 40. This indicates that certain surface features influence the perception of lightness and whiteness of objects differently at different luminance factor levels, although other unknown contributing factors may also be present.

Figure 39. Correlation of mean observer lightness ranks against mean whiteness ranks for woolen (a, R² = 0.9907) and cotton (b, R² = 0.9611) samples.


Figure 40. Mean observer lightness rank against mean whiteness rank for woolen and cotton samples based on texture.

1.3.3 Correlation between L* and Perceived Whiteness

In view of the strong correlation between perceived lightness and perceived whiteness, the

strength of the correlation between L* and perceived whiteness for different patterns was also examined.

Figure 41. Correlation between perceived whiteness and measured lightness (L*) of samples separately (a: wool R² = 0.3639, cotton R² = 0.2647) and as a group (b: R² = 0.9014).


Figure 41a shows that measured lightness (L*) correlates poorly with the perceived whiteness of woolen samples. At higher lightness values the correlation between measured

L* values and perceived whiteness is even worse (R² = 0.27). Figure 41b shows that the correlation improves (R² = 0.90) when the entire set of samples is considered collectively. It is interesting to note that a similar correlation was also obtained between perceived lightness

and L* for the same set of samples. However, observers' perceived lightness correlates

strongly with their perceived whiteness of samples as shown in Figure 39a-b. Again due to

the large differences in whiteness and lightness of cotton and wool samples, agreement amongst observers in perceived whiteness rank of the entire set improves significantly which is expected.

1.3.4 The Effect of Texture on Whiteness

Figure 42 shows the correlation of CIE whiteness index values against mean and mode visual

whiteness evaluations of woolen samples. Observer agreement in relation to the most and the

least white samples was strong and in those cases responses correlated well with the CIE

whiteness values. Observers ranked the top three most white samples (jersey face, jersey

back and 2×3 rib respectively) as well as the least white (half-cardigan back) sample quickly and their responses were in agreement with the measured CIE whiteness values. However, agreements in ranking woolen samples that exhibited medium apparent whiteness, e.g., 53-

55, were relatively weak, with a very poor correlation (R² = 0.04) against the CIE whiteness values. Mode and mean visual assessment results were compared, and rankings based on the mean exhibited a stronger correlation (0.83) with the CIE whiteness values than those based on the mode (0.59).

Figure 42. Correlation of observers' rank against CIE Whiteness Index of knitted woolen samples (mean rank R² = 0.8327; mode rank R² = 0.5881).

The clustering method was also used to examine the correlation of ranked whiteness responses against the CIE whiteness values, as shown in Figure 43. For samples that did not exhibit a clear rank, standard deviation of observers' whiteness ranks were used to generate two cluster groups. Two sets of clustered samples were thus obtained. In each cluster, the sample with the highest probability of appearing in the rank represented the rank of samples in the cluster. It can be seen that in all methods similar responses were obtained for samples


that appeared as the most or the least white. However, for samples of apparent medium

whiteness differences in ranking were observed according to each method. Initially it appears

that clustering does not provide a clear method to distinguish whiteness of samples based on their texture. However, clustering incorporates observers’ variability in visual assessment of whiteness for samples that appear to have similar texture. Clustering exhibits the overall

trend for the relationship between texture and mean observer rank responses.


Figure 43. Textured samples ranked based on perceived whiteness from least white (left) to most white (right).


The same analyses were employed for psychophysical assessments of knitted cotton samples.

Results are shown in Figure 44, which also indicate that observers were in agreement in

ranking the most and the least white samples, but that the agreement was poor in evaluation

of samples that had medium apparent whiteness. The correlation of CIE whiteness index values for cotton samples against visual assessments as determined by the grand mean and mode is 0.37 and 0.41 respectively, both of which are lower than those for the woolen samples. For higher CIE whiteness values, such as for certain optically brightened cotton samples, CIE Whiteness Index does not perform as a satisfactory metric when compared with visual evaluations.

Figure 44. Correlation of observer whiteness rankings against CIE Whiteness Index values of knitted cotton samples (mean rank R² = 0.3708; mode rank R² = 0.4132).


1.4 Conclusions

The aim of this study was to investigate the effect of texture on perception and measurement

of whiteness to help model variations in assessment due to surface roughness. Results from

measurement and psychophysical assessments of twenty knitted woolen and cotton samples

with varying surface textures showed that variations in the perceived whiteness of objects

due to increased surface roughness can be predicted. Correlation of perceived lightness and

perceived whiteness against CIE Whiteness Index values, respectively, was found to range

from modest for wool samples with moderate L* values (80-90) to poor for those at high L*

(90-100) values. This indicates that the CIEWI does not perform satisfactorily for samples with a high radiance factor.

In general, increased apparent surface texture was found to diminish the apparent whiteness of the object. Indeed in the case of brightened woolen samples (L* ~ 80-90) observers judged

the smoothest surfaces as the whitest. However, for optically brightened samples of high radiance this relationship did not always hold, possibly due to the emission of light by samples (fluorescence) and the increased complexity of the visual experience.

In addition, it was found that inter- and intra-observer variability in assessment of perceived

lightness and whiteness of samples increases as L* increased from 80 to 100. Observer

variability in perceptual assessment of whiteness of brightened cotton samples was

particularly high, perhaps due to observers’ inability to distinguish small variations in

whiteness of samples at high CIEWI values. Inter- and intra-observer repeatability in


assessment of lightness and whiteness improved at lower L* values (80-90) as found in the assessment of optically brightened woolen samples.

An examination of various statistical methods shows that clustering in combination with weighted probability methods may be useful in distinguishing differences in perceived whiteness and lightness of samples that have similar surface features. These results, in conjunction with other analytical techniques, may be used to predict the whiteness of products that vary in texture, such as those due to variations in weave, knit or structural patterns.


2. The Effect of Light Source on Perception and Whiteness of Knitted Structures

In the case of textiles, the fabric is not usually uniform and generally has patterns or textures

because of the weave structure as well as different finishing processes. It is well known that

perception of color is influenced by objects' surface characteristics as well as the light source

employed. In a previous work, the effect of different knitted structures, made from cotton and

wool yarns, on perceived whiteness of the substrate, under D65 light source, was studied.

The aim of this part of the work was to study the effect of variations in surface features on the perception of whiteness of textiles under sources A and U30. This was a component of a larger study to determine the role of texture on perception and measurement of white objects.

2.1 Experimental

Ten woolen and ten cotton knitted fabrics with different structures, as shown in Figure 33, were used in this study; their CIE Whiteness Index and L* values are shown in Table 6. A SpectraLight III calibrated viewing booth (X-Rite), illuminated with tungsten or fluorescent bulbs at a correlated color temperature (CCT) of 2856 K or 3000 K, simulating illuminants A or U30 respectively, was used. A 0/45 illumination viewing geometry was employed for psychophysical assessments.


A panel of 26 observers (13 Females, 13 Males, average age 22) took part in the visual

assessments. All observers were color normal according to the Neitz test for color vision

[158]. Two sets of samples, 10 each for wool and cotton were used.

The visual assessment was divided into two parts as described in section 1.1.3. In the first part of the study a range of ten white samples with different textures was rated using a forced ranking method from most white to least white. In the second part, a reference white

(AATCC optically brightened sample) with L* ~ 100 and a reference grey sample with L* ~ 50 were used as anchor points to aid observers in the assessment of perceived lightness. The perceived lightness of each white sample was rated on a scale of 0 to 100. All samples were illuminated first using the U30 source and then using the incandescent source simulating illuminant A in a viewing booth. The observers were adapted to each light source for 2 minutes. Each assessment was repeated three times at a time interval of at least 24 hours.

2.2 Data Analysis

Observer responses were analyzed to determine the role of texture on perceived whiteness

and lightness of samples under sources A and U30. For the ranking method, weighted probability analysis was used. Observer responses for samples were ordered according to the highest probability of appearance in each rank position in each group. Table 8 shows the weighted probability calculated for appearing in each rank for each sample. Samples were ranked from 1 (least white) to 10 (most white). A weight of 1 was assigned to the most white rank and 0.1 to the least white rank, with an interval of 0.1 units between each rank. Figure 45 shows the weighted probability of being ranked as most white for the different cotton and woolen samples. For lightness, the mean lightness for each observer as well as for each sample was calculated. Table 9 shows the mean perceived lightness, under the different light sources, for each sample. Regression analysis was performed to determine the correlation between perceived lightness, perceived whiteness rank, measured lightness and the CIE Whiteness Index of the samples.
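A minimal sketch of this weighted-probability calculation, under the reading that the per-rank percentages of Table 8 are combined with the 0.1-1.0 rank weights described above, is given below. The rank tallies shown are hypothetical and the helper names are illustrative only.

# Sketch of the weighted-probability analysis of the forced-ranking data.
# rank_counts[r] = number of times one sample was placed in rank r
# (10 = most white, 1 = least white); the counts below are hypothetical.
rank_counts = {10: 38, 9: 6, 8: 5, 7: 5, 6: 3, 5: 5, 4: 7, 3: 3, 2: 2, 1: 4}

total_judgements = sum(rank_counts.values())
weights = {r: r / 10.0 for r in range(1, 11)}   # 1.0 for rank 10 down to 0.1 for rank 1

# Percent probability of the sample appearing in each rank (cf. Table 8).
percent = {r: 100.0 * n / total_judgements for r, n in rank_counts.items()}

# Single weighted-probability score for the sample (cf. Figure 45).
weighted_probability = sum(weights[r] * percent[r] for r in rank_counts)

print(percent[10], round(weighted_probability, 2))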

[Bar chart: weighted probability (%) of being ranked most white versus knitted structure, for cotton and wool under sources U30 and A.]

Figure 45. Weighted probability of different cotton and woolen structures being ranked as the most white.


Table 8. Percent probabilities of cotton and woolen samples being ranked as the Most [10] and Least white [1].

Cotton, source U30 — probability (%) of each rank, from most white [10] to least white [1]:

Textures              [10]    [9]    [8]    [7]    [6]    [5]    [4]    [3]    [2]    [1]
2×3 Rib               2.56   1.28   2.56   6.41   2.56   7.69  14.10  12.82  32.05  17.95
Half Cardigan Front   3.85   2.56   5.13  11.54  14.10  26.92  16.67  10.26   8.97   0.00
Half Cardigan Back    1.28   8.97   6.41  10.26  14.10  12.82  14.10  20.51  10.26   1.28
Jersey Front          2.56   5.13  11.54   5.13   6.41   6.41   5.13   1.28  10.26  46.15
Jersey Back          48.72   7.69   6.41   6.41   3.85   6.41   8.97   3.85   2.56   5.13
Racking Effect        8.97  12.82  23.08  21.79  10.26   8.97   2.56   6.41   3.85   1.28
1×1 Rib              14.10  17.95  17.95   7.69   6.41   7.69  12.82   8.97   6.41   0.00
Zigzag Effect         7.69  10.26   6.41   5.13  10.26   6.41   7.69  11.54  14.10  20.51
Bias Effect           6.41  24.36  17.95  12.82  16.67   3.85   5.13   6.41   2.56   3.85
Full Cardigan         3.85   8.97   2.56  12.82  15.38  12.82  12.82  17.95   8.97   3.85

Cotton, source A:

Textures              [10]    [9]    [8]    [7]    [6]    [5]    [4]    [3]    [2]    [1]
2×3 Rib               1.28   1.28   1.28   1.28   3.85   8.97  10.26  30.77  25.64  15.38
Half Cardigan Front   3.85   6.41   5.13  17.95   8.97  20.51  17.95  11.54   7.69   0.00
Half Cardigan Back    3.85   2.56  11.54  12.82  14.10  15.38  15.38  12.82   8.97   2.56
Jersey Front          1.28   7.69   5.13   7.69   7.69   3.85   2.56   7.69  14.10  42.31
Jersey Back          46.15  11.54   7.69   5.13   6.41   3.85   6.41   3.85   6.41   2.56
Racking Effect       12.82  14.10  25.64  14.10  14.10  10.26   6.41   1.28   3.85   0.00
1×1 Rib               8.97  23.08  15.38   8.97  11.54   3.85   6.41  12.82   6.41   2.56
Zigzag Effect         8.97  10.26   5.13   7.69   7.69   7.69  12.82   5.13   8.97  25.64
Bias Effect          11.54  19.23  17.95  16.67  12.82   7.69   7.69   1.28   2.56   0.00
Full Cardigan         1.28   3.85   5.13   7.69  12.82  17.95  14.10  12.82  15.38   8.97

Wool, source U30:

Textures              [10]    [9]    [8]    [7]    [6]    [5]    [4]    [3]    [2]    [1]
2×3 Rib               1.28   7.69  12.82  19.23  14.10  14.10  10.26   5.13  10.26   5.13
Half Cardigan Front   1.28   6.41   6.41  11.54  10.26  16.67  16.67  11.54  15.38   3.85
Half Cardigan Back    2.56   2.56   2.56  12.82  15.38  15.38  10.26  19.23  11.54   7.69
Jersey Front         56.41  11.54   1.28   6.41   5.13   5.13   8.97   2.56   2.56   0.00
Jersey Back           2.56   3.85   6.41   2.56   3.85   7.69   3.85  11.54  10.26  47.44
Racking Effect       16.67  11.54  21.79   8.97  12.82   7.69   3.85  10.26   3.85   2.56
1×1 Rib               1.28  11.54  19.23   7.69   8.97   8.97   7.69  12.82  14.10   7.69
Zigzag Effect         1.28   8.97   8.97   5.13   8.97  10.26  19.23  11.54  11.54  12.82
Bias Effect          10.26  32.05  16.67  17.95   6.41   5.13   7.69   1.28   3.85   0.00
Full Cardigan         6.41   3.85   3.85   7.69  14.10   8.97  11.54  14.10  16.67  12.82

Wool, source A:

Textures              [10]    [9]    [8]    [7]    [6]    [5]    [4]    [3]    [2]    [1]
2×3 Rib               1.28  10.26  16.67   8.97  10.26  16.67  14.10   8.97   8.97   3.85
Half Cardigan Front   3.85   1.28  10.26  10.26   8.97  23.08  16.67  15.38   6.41   3.85
Half Cardigan Back    3.85   7.69   6.41  10.26   8.97  12.82   6.41  16.67  17.95   8.97
Jersey Front         42.31  19.23   6.41   0.00   7.69   6.41  10.26   5.13   2.56   0.00
Jersey Back           2.56   2.56   2.56   3.85   3.85   5.13  12.82   7.69  11.54  47.44
Racking Effect       12.82  19.23   8.97  20.51  14.10   3.85   8.97   5.13   6.41   0.00
1×1 Rib               5.13  11.54  15.38  10.26  10.26   3.85   7.69  16.67  15.38   3.85
Zigzag Effect         3.85   7.69  12.82   6.41   3.85   7.69  12.82   5.13   8.97  25.64
Bias Effect          19.23  16.67  16.67  16.67  11.54   7.69   7.69   1.28   2.56   0.00
Full Cardigan         5.13   3.85   3.85  12.82  20.51  17.95  14.10  12.82  15.38   8.97


Table 9. Mean perceived lightness for each sample for wool and cotton samples.

                      Cotton             Wool
Textures              U30      A         U30      A
2×3 Rib               79.45    80.32     74.87    74.28
Half Cardigan Front   86.92    86.41     71.55    72.60
Half Cardigan Back    86.08    85.23     73.35    74.81
Jersey Front          79.83    82.58     75.82    76.44
Jersey Back           85.09    86.97     72.90    72.86
Racking Effect        88.77    88.73     77.00    79.47
1×1 Rib               85.86    85.47     72.92    73.49
Zigzag Effect         83.81    85.23     72.96    74.09
Bias Effect           86.49    86.33     77.46    77.87
Full Cardigan         86.64    87.37     72.47    74.38

A paired t-test was applied to the data in Table 9; the significance (p) values for the difference between U30 and A were 0.157 for the cotton samples and 0.012 for the wool samples, indicating that the mean perceived lightness of the woolen samples differed significantly between the U30 and A light sources, whereas that of the cotton samples did not.
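For illustration, this paired comparison can be reproduced directly from the values in Table 9; the sketch below assumes SciPy is available and should return significance values close to those reported above.

# Paired t-test of mean perceived lightness under U30 vs. A (data from Table 9).
from scipy import stats

cotton_u30 = [79.45, 86.92, 86.08, 79.83, 85.09, 88.77, 85.86, 83.81, 86.49, 86.64]
cotton_a   = [80.32, 86.41, 85.23, 82.58, 86.97, 88.73, 85.47, 85.23, 86.33, 87.37]
wool_u30   = [74.87, 71.55, 73.35, 75.82, 72.90, 77.00, 72.92, 72.96, 77.46, 72.47]
wool_a     = [74.28, 72.60, 74.81, 76.44, 72.86, 79.47, 73.49, 74.09, 77.87, 74.38]

t_cotton, p_cotton = stats.ttest_rel(cotton_u30, cotton_a)
t_wool, p_wool = stats.ttest_rel(wool_u30, wool_a)
print(f"cotton: p = {p_cotton:.3f}, wool: p = {p_wool:.3f}")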

2.3 Results and Discussions

As discussed in Chapter 1, whiteness is influenced by tint and lightness. In this study samples were bleached and brightened under identical conditions in an attempt to keep tint constant for all samples. Hence, variations in perceived lightness and whiteness were considered to be mainly due to variation in the surface texture of the samples. Variations in surface texture resulted in varying levels of light scattering.


2.3.1 Correlation between Perceived Lightness and Measured Lightness

The correlation between perceived lightness and measured lightness (L*) was examined. It was found that, for both whiteness and lightness, the correlations against measured values are poor for the cotton samples (Figures 46 and 47) and moderate for the wool samples (Figures 48 and 49). This may imply that observers' ability to distinguish small changes in lightness at relatively high L* values (90-100 range) is poor. However, observers were able to detect small variations in lightness in the L* range of 80-90, i.e. for the woolen samples. Similar results were obtained under the D65 source. The type of light source used seems to have almost no effect on the mean perceived lightness of samples when compared against measured lightness, as evidenced by the R² values.


[Scatter plot: mean perceived lightness versus measured L*; R² = 0.0831.]

Figure 46. Correlation of mean perceived lightness against measured lightness for cotton under illuminant U30.

[Scatter plot: mean perceived lightness versus measured L*; R² = 0.086.]

Figure 47. Correlation of mean perceived lightness against measured lightness for cotton under illuminant A.


[Scatter plot: mean perceived lightness versus measured L*; R² = 0.3964.]

Figure 48. Correlation of mean perceived lightness against measured lightness for wool under source U30.

[Scatter plot: mean perceived lightness versus measured L*; R² = 0.2175.]

Figure 49. Correlation of mean perceived lightness against measured lightness for wool under illuminant A.


2.3.2 Correlation between Perceived Whiteness Rank and Mean Perceived Lightness

One of the main aims of this study was to explore the effect of variations in lightness on the perceived whiteness of objects. For both cotton (Figures 50 and 51) and wool (Figures 52 and 53) samples a moderate correlation was obtained between perceived whiteness rank and mean perceived lightness. However, the whiteness ranks for wool and cotton samples were different. For example, in the case of cotton the jersey back had the highest whiteness rank, while for wool the jersey front ranked highest. Ranks also differed under the different light sources, but the differences were very small, and the most and least white samples under both light sources were the same for cotton and for wool.

[Scatter plot: perceived whiteness rank versus mean perceived lightness; R² = 0.538.]

Figure 50. Correlation of perceived whiteness rank against mean perceived lightness for cotton under source U30.


[Scatter plot: perceived whiteness rank versus mean perceived lightness; R² = 0.5666.]

Figure 51. Correlation of perceived whiteness rank against mean perceived lightness for cotton under illuminant A.

[Scatter plot: perceived whiteness rank versus mean perceived lightness; R² = 0.6235.]

Figure 52. Correlation of perceived whiteness rank against mean perceived lightness for wool under source U30.


[Scatter plot: perceived whiteness rank versus mean perceived lightness; R² = 0.4623.]

Figure 53. Correlation of perceived whiteness rank against mean perceived lightness for wool under illuminant A.

2.3.3 Effect of Texture on Perceived Whiteness

As shown in Figures 54-57, the correlation between perceived whiteness rank and the CIE whiteness index is unsatisfactory. In the case of cotton the correlation was fair, but for wool it was poor. One reason might be that the CIE whiteness index was calculated from values obtained under the D65 illuminant, whereas the visual assessments were carried out under sources U30 and A. Another reason could be that the wool samples lie at the lower border of the CIE whiteness range, which may have affected the calculated results as well as observers' perception of such samples as being "white".


[Scatter plot: perceived whiteness rank versus CIE Whiteness Index; R² = 0.2295.]

Figure 54. Correlation of perceived whiteness rank against CIE Whiteness Index for cotton under source U30.

[Scatter plot: perceived whiteness rank versus CIE Whiteness Index; R² = 0.2414.]

Figure 55. Correlation of perceived whiteness rank against CIE Whiteness Index for cotton under source A.


[Scatter plot: perceived whiteness rank versus CIE Whiteness Index; R² = 0.0231.]

Figure 56. Correlation of perceived whiteness rank against CIE Whiteness Index for wool under source U30.

[Scatter plot: perceived whiteness rank versus CIE Whiteness Index; R² = 0.0275.]

Figure 57. Correlation of perceived whiteness rank against CIE Whiteness Index for wool under source A.


2.4 Conclusions

As a continuation of the work in Chapter 1, which investigated the effect of texture on the perception and measurement of whiteness under a source simulating D65, this section examined how texture affects the whiteness of substrates under different sources. The conclusions are summarized below:

1. Knitted textures with different surfaces affected the perceived lightness and whiteness of objects.

2. Increasing the apparent roughness of the object surface diminished its perceived whiteness.

3. The coefficients of determination (R²) between perceived whiteness rank and mean perceived lightness for cotton and woolen samples are 0.5380 and 0.6235 respectively under source U30, and 0.5666 and 0.4623 respectively under source A. This supports the well-known conclusion that whiteness is significantly affected by lightness.

4. In the case of wool the difference in mean perceived lightness of objects between sources U30 and A was statistically significant. However, in the case of cotton the difference was not statistically significant, based on a paired t-test.

5. In addition, the correlation between perceived whiteness rank and the CIE whiteness index for cotton and woolen samples under sources U30 and A indicates that the perceived whiteness of the samples was not significantly influenced by the light source.


3. Factors Affecting the Whiteness of Optically Brightened Material

As noted earlier, common sources employed in viewing booths or luminaires simulate daylight poorly in the UV region. Furthermore, the CIE currently does not recommend standard sources for daylight illuminants such as D65 or D75. A distinction must therefore be made between standard illuminants, which are recommended by the CIE, and the sources used to illuminate objects.

Measurement and visual evaluation of optically brightened white material are technically complicated by the need to standardize and closely control the UV content of the light source used in the instrument and the light booth in which the materials are visually evaluated.

A number of studies have reported unsatisfactory correlations between visual responses and

CIE whiteness models [103,109,110,111] especially for tinted white samples. The unsatisfactory performance is not solely due to errors in the formula, but may be due to one or more critical variables that currently are not adequately controlled. These variables include:

• Differences in geometry and light sources between spectrophotometers used for

measurement of fluorescent materials, and

• Unknown or non-standardized UV emission of lamps used in standard viewing

booths.

These factors result in different relationships between the indices calculated from measurements and the perceptual assessments. Lack of control or consistency in the UV light incident on a white material containing an FBA, in measuring instruments as well as in light booths, can lead to


variability in the assessment of whiteness and reduce the utility and performance confidence

of a whiteness index.

As discussed, the investigation of the effect of texture on the perception and measurement of a series of fluorescent white objects resulted in an unsatisfactory correlation between perceived whiteness and the CIE WI for textured substrates [162]. Plain woven substrates with identical

structure were used in the current study to eliminate the effect of variations in texture on

results.

A series of cotton substrates were treated with varying amounts of a commercially available

FBA. The whiteness, radiance factor, extent of UV absorption, and intensity of visible light

emission of the optically brightened substrates were measured or calculated.

Spectrophotometric and radiometric techniques were used to determine the extent of

variability in radiance factor due to variations in UV content. Subsequently, samples were

visually assessed in standard viewing booths to determine the agreement between measured

and perceptual results.

3.1 Methods

3.1.1 Materials

C.I. Fluorescent Brightener 28 was purchased from Sigma-Aldrich (St Louis, MO). Bleached cotton knit fabric (187 g/m²) was purchased from Testfabrics, Inc. (West Pittston, PA). The

AATCC Textile UV Calibration Standard (TUVCS) was purchased from AATCC (Research


Triangle Park, NC). A PolyTetraFluoroEthylene (PTFE) Diffuse Reflectance Standard (OL

55RS) was purchased from Optronic Laboratories, Inc. (Orlando, FL).

3.1.2 Preparation of Fluorescent Cotton Samples

Cotton fabric samples (10 g) were whitened with C.I. Fluorescent Brightener 28 at 95 °C for

40 min using 0%, 0.025%, 0.25% and 2.5% on weight of fabric (owf) concentrations in an

exhaustion process using an Ahiba Nuance Infrared Laboratory Machine (Datacolor

International, USA) at a liquor-to-goods ratio of 20:1. For purposes of opacity of samples

four layers of the fluorescent whitened cotton samples were mounted on 3×3 inch stiff

cardboards [50].

3.1.3 Spectrophotometric Measurement of Materials

Spectrophotometric reflectance measurements were performed using a Spectraflash SF600X

(Datacolor International, USA) reflectance spectrophotometer. The spectrophotometer was

calibrated for measuring the reflectance spectra and whiteness using two different settings:

1. Normal calibration: large area view (30 mm), specular included, CIE 1964 Standard

Colorimetric Observer (10°), UV filter at “Calibrator” providing UV transmission of 0,

25, 50, 75 and 100%. The manufacturer supplied white tile was used as the reference

during the calibrations.

2. AATCC recommended calibration for measurement of whiteness: large area view (30

mm), specular included, CIE 1964 standard colorimetric observer (10°), UV filter at


“Illuminant D65” followed by UV calibration using TUVCS according to AATCC

Evaluation Procedure 11-2007 [119]. The TUVCS standard was used to adjust the

instrument’s UV intensity for D65 illuminant for the assessment of optically brightened

material.

The calibration of UV level in spectrophotometers according to AATCC TM110 is only

designed to improve the inter-laboratory agreement in measurement of optically white

material based on the CIE WI of the AATCC standard textile white substrate and does not

determine a precise UV level for general use. The two calibration methods, however, provide

a means of comparison for measurement of fluorescent objects under varying amounts of UV

in method 1, and a set amount of UV, according to AATCC EP11, in method 2.

Measured results for optically brightened material were thus reported in terms of the radiance

factor to take into account reflectance plus emission of light in the visible range. CIE

Whiteness Index (CIE WI) values were calculated for illuminant data D65 and were also

determined for illuminant D75 using the tabulated data based on the ASTM Standard E308-

06 [163] and AATCC Test Method 110-2000 [156,164,165].
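For reference, the CIE whiteness and tint formulae for the 10° observer used in these calculations take the standard form shown below; the exact coefficients should be confirmed against the cited standards.

W_{10} = Y_{10} + 800\,(x_{n,10} - x_{10}) + 1700\,(y_{n,10} - y_{10})

T_{W,10} = 900\,(x_{n,10} - x_{10}) - 650\,(y_{n,10} - y_{10})

where Y_{10} is the luminance factor of the sample, (x_{10}, y_{10}) are its chromaticity coordinates, and (x_{n,10}, y_{n,10}) are the chromaticity coordinates of the perfect diffuser for the illuminant in question (D65 or D75).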

3.1.4 Spectrophotometric Measurement of Light Sources

Two SpectraLight III (X-rite, Inc., Grand Rapids, MI) standard viewing booths were used, one equipped with a filtered incandescent source simulating daylight at a correlated color temperature (CCT) of 6500 K, representing the D65 illuminant, and the other simulating daylight at a CCT of 7500 K, representing the D75 illuminant. Radiometric measurements of the D65


and D75 daylight simulators with the sources employed in the SpectraLight III viewing booths were carried out using a double-monochromator scanning OL750 automated spectroradiometer (Optronic Laboratories, FL) equipped with fiber optic probes and a reflex telescope. The spectroradiometer was calibrated for two modes, radiance and irradiance, in the wavelength range of 200 to 1100 nm at a 1 nm interval, according to the instrument manufacturer's instructions [166]. A UV lamp calibration is necessary for radiant flux measurement if radiant variations between 200 nm and 400 nm are investigated. Specifically, a

40-Watt deuterium arc lamp ultraviolet standard for spectral irradiance (OL UV-40, Optronic

Laboratory) was used for UV calibrations. In addition a 45-watt quartz-halogen tungsten coiled filament lamp (OL 245M, Optronic Laboratory) was used for calibration in the UV, visible and near infra-red range from 250-1100 nm. The combined calibration file included data from both sources and was adjusted to ensure the continuity of the data [176]. The UV calibration was used only for the determination of the UV content of sources in the UV range. However, the combined UV-Vis calibration was used when the FBA treated samples were tested. The upper range was the upper detection limit of the instrument.

In order to calibrate the spectroradiometer to a known radiance standard, a reference tungsten source and a uniform white reflective material, such as a polytetrafluoroethylene (PTFE) diffuse reflectance standard, were used. For radiance measurements the reflex telescope was positioned at a distance of 100 cm from the SpectraLight III light source. Setting the reflex mirror to the “View” position, the tested light source was focused by the telescope until a sharp image was seen through the reflex mirror. Using the eyepiece reticle, once the field of


view (aperture) area was overfilled by the source, the reflex mirror was switched to the

“Measurement” position for radiance measurement.

For irradiance measurements, a PTFE Diffuse Reflectance Standard or the fluorescently whitened cotton samples were positioned at a 45° angle on the floor of the SpectraLight III viewing booth and were irradiated by each of the tested light sources. The optical axis of the reflex telescope was normal to the measured samples at a distance of 100 cm. The samples were focused and measured in the irradiance mode, in the same way as described for the radiance measurements. Units for radiance and irradiance are given in Table 10.

Table 10. Units for radiance and irradiance.

                     Name                              Units
Illuminated object   Irradiance (radiant incidence)    W·m⁻²
                     Spectral irradiance               W·m⁻²·nm⁻¹
Light source         Radiant exitance                  W·m⁻²
                     Radiant intensity                 W·sr⁻¹
                     Spectral radiant intensity        W·sr⁻¹·nm⁻¹
                     Radiance                          W·m⁻²·sr⁻¹
                     Spectral radiance                 W·m⁻²·sr⁻¹·nm⁻¹


3.1.5 Determination of UV Spectra

The interior of the viewing booths used is painted light gray, approximately equivalent to Munsell N7. In order to improve the correlation of the simulated daylight sources with the illuminant data in the UV range, a 6 W F6T5 BLB UV light bulb with an emission maximum at 352 nm is included in the SpectraLight III viewing booths. In the assessment of fluorescent material the light from the supplementary UV source can be added to the light emitted by the simulated daylight source. Here the addition of the supplementary UV source to the simulated illuminants is denoted as D65 + UV and D75 + UV, respectively.

Figure 58. SPD of standard illuminants D65, D75 and the simulated daylight sources in the viewing booths, including the simulated daylight sources with a supplementary UV source.


Figure 58 shows the spectral power distribution of standard illuminants D65, D75, the

simulated daylight sources employed in the viewing booths, as well as the simulated sources in the presence of 100% supplementary UV. In order to determine the effect of variations in UV on the perception of fluorescent white material, the amount of UV was reduced by proportionally blocking the UV light from the source. Grey matte paper cardboard with L* = 37 was cut

into rings of 0.94 inch width and 0.4 inch radius. Triple layers of cardboard rings were placed

around the UV light bulb evenly such that approximately 25%, 50%, and 75% of the bulb

was physically blocked. Five relative UV intensity levels, for both D65 and D75 sources,

namely, 0%, 25%, 50% and 75% UV, as well as 100% UV were thus obtained. Figure 59

shows the arrangement employed to block UV amount by approximately 25% in the viewing

booth. Measured values based on the use of 0%, 25%, 50%, 75% and 100% UV, according to

calibration method 1, were also obtained for comparison.


Figure 59. The arrangement used to block approximately 25% of the UV radiation using opaque dark gray cardboard rings placed around the UV light bulb.

The relative spectral power distributions of various D+UV illuminations, in the UV range, were obtained with the aid of an OL750 automated spectroradiometer, using a PTFE diffuse

reflectance standard placed at 45° against the back of the viewing booth. The light reflected

from the surface of PTFE was collected and the irradiance values were recorded from 200

nm to 400 nm and are discussed in the following sections.

3.1.6 Perceptual Assessments

A custom-made 6×12 inch grey paperboard box (L* = 84, a* = 0.96, b* = 0.76) housed the PTFE diffuse reflectance standard and a given white sample, as shown in Figure 60. The box contained two 3×3 inch windows. The PTFE tile was placed below the left-hand-side window as the standard and was given an arbitrary whiteness value of 100.


Samples to be tested were placed below the right hand side window at a distance of 3 inches from the standard white. Both samples were viewed on the same plane using a 0/45 illumination viewing geometry, as shown in Figure 60.

Three color-normal expert subjects (one female, two males), with an average age of 43 and an average of 15 years of experience in color assessment, took part in the perceptual assessments.

Each subject was adapted to the viewing condition for at least two minutes prior to assessing samples under each D+UV condition. Each subject compared the white sample with the standard and provided a numerical perceived whiteness value. No upper limit restriction was placed for the perceived whiteness of samples. Five samples, containing different amounts of

FBA, were presented to the subject three times each randomly on the same day. A total of

150 assessments were completed by each subject in approximately 80 minutes.

Figure 60. Visual assessment of optically brightened white samples under varying UV levels.


3.2 Results and Discussions

Some of the sources simulating D65 and D75 illuminants lack sufficient UV radiation

intensity in comparison with north-sky daylight. Ideally the illumination conditions used for viewing should correlate closely with the spectral power distributions (SPDs) of the standard illuminants from 300 to 700 nm. Due to lack of standard daylight sources, visual assessments should be carried out under good daylight simulators such as CIE Grade BC sources (or better) and the source should have sufficient amounts of UV radiation in the 350-400 nm

range to bring the spectral power distribution of these sources in the UV range closer in line

with the standard daylight spectral power functions.

The spectral irradiance functions of simulated D65 and D75 daylight illuminants with

supplementary UV source in viewing booths were radiometrically characterized in two

SpectraLight III viewing booths. The results are discussed later in this section.

The effect of UV filter settings on the radiance spectra of the PTFE and the samples

containing varying levels of FBA, as measured spectrophotometrically, was also

determined, as shown in Figures 61a-e. Here, measurements refer to spectral radiance to account for the cumulative effect of reflectance and emission of light. As expected, the

PTFE, commonly used as a diffuse reflectance standard, exhibits a constant spectral reflectance function regardless of the UV filter setting of the light source, due to the absence of FBA in the PTFE standard (Figure 61a). The spectral radiance function of the cotton substrate with no FBA treatment, while deficient in the blue region, is also constant regardless of variations in UV incident upon the substrate (Figure 61b). Samples containing


varying amounts of FBA, however, exhibit increased spectral radiance as the relative UV content of the light source was approximately increased from 0% to 100% (Figure 61c-61e).


Figure 61. Measured spectral radiance curves of (a) the PTFE plate (b) untreated cotton substrate (c) 0.025% FBA treated (d) 0.25% FBA treated and (e) 2.5% FBA treated white materials irradiated at various relative UV intensities as measured by a reflectance spectrophotometer using illuminant D65.


Figures 62a-b show the CIE WI values of treated substrates based on D65 and D75 standard illuminants. The Uchida WI values for cotton samples based on D65 and D75 illuminants are

shown in Figures 63a-b. The CIE WI of FBA treated substrates generally increases according

to the UV content of the light source for all substrates under D65 and D75 sources.

Figure 62. CIE WI of white samples calculated from measurements with a spectrophotometer employing sources D65 (a) and D75 (b) filtered to contain various UV contents.


Figure 63. Uchida WI of white samples calculated from measurements with a reflectance spectrophotometer employing sources D65 (a) and D75 (b) filtered to contain various UV contents.

The CIE WI values also rise with increased FBA content with the exception of samples treated with 2.5% FBA. This is likely due to fluorescence quenching at this treatment level.

A similar general trend is also observed for the Uchida WI values as shown in Figures 63a-b.


The AATCC Test Method 110 recommends a UV calibration method using a UV calibration standard, the TUVCS [166]. According to this test method, an appropriate UV content for spectrophotometric measurements is set by operating the device in the ‘UV calibrated’ mode. Figures 62a-b also show that the calculated CIE WI of each sample varies when the source light’s UV content deviates from that measured under the UV calibrated mode. The WI values of the control cotton sample (Cotton-FBA 0%) are not affected by variations in the UV content, as expected.

Subjects vary considerably in perceptual assessments of color and thus it should be noted that the number of expert subjects employed in this study is likely not sufficient to generate a complete means of comparison between perceptual and measured results. However, results may be used to indicate general trends. Inter- and intra-subject variability among subjects, expressed in CIEWI units, was calculated and is given in Table 11.

Table 11. Mean inter- and intra-subject variability (in CIEWI units) in determination of perceived whiteness.

UV %    Mean intra-subject variability    Mean inter-subject variability
0       3.83                              3.94
25      1.31                              1.35
50      2.13                              2.10
75      3.09                              3.13
100     0.88                              0.90


For the limited number of trials, assessments are reasonably repeatable. While the number of assessments may be considered inadequate to derive overall conclusions, the nature of the relationship between visual and calculated whiteness values was examined. The relationship between the perceived whiteness and measured whiteness based on the Uchida and CIE WI are shown in Figures 64a-b.


Figure 64. The correlation between perceived whiteness and predicted whiteness from the Uchida and CIE WI under D65 for all UV levels (a), and under D75 for all UV levels (b).

The CIE WI is designed for optimal performance under D65, but results show relatively poor agreement with perceptual assessments under simulated D65 (R² = 0.46). A similar, though slightly improved, performance is obtained under simulated D75 (R² = 0.52). In addition, the CIE WI performed better for the fluorescent samples studied, under both D65 and D75, in comparison with the Uchida model, which has R² = 0.26 for both simulated D65 and D75 illuminants and shows no significant difference in performance under the two illuminants. A large difference in the WI of the TUVCS was, however, noted between the CIE and Uchida whiteness values. A unit change in CIE Tint values significantly affects the Uchida WI. The tint index of the TUVCS was -4, and this resulted in a significantly lower Uchida WI for this sample, which was not in agreement with perceptual results. This supports the work of Jafari et al., who reported better agreement between perceptual and CIE WI values compared to whiteness values based on the Uchida model [167].

The irradiance for D65 and D75 sources, using a PTFE diffuse reflectance standard plate as well as fluorescent brightened samples, was obtained according to the methodology described in the experimental section.

Figures 65a-d compare the UV/Visible spectra for D65 and D65+UV as well as D75 and

D75+UV illuminations employed. Due to the higher UV content of the D75 source, emission in the blue region under this source would be expected to be higher. A comparison of results for simulated D65 and D75 illuminants is given in Figures 66a-d.


Figure 65. Spectral irradiance of fluorescent white materials illuminated with D65 (a) and D65 +UV (b) sources in a SpectraLight III viewing booth determined radiometrically. Spectral irradiance of fluorescent white materials illuminated with D75 (c) and D75 +UV (d) sources in a SpectraLight III viewing booth determined radiometrically.



Figure 66. Comparison of spectral irradiance curves measured over the surface of untreated (a) 0.025% FBA treated (b), 0.25% FBA treated (c) and 2.5% FBA treated (d) white materials between D65 against D75 and D65+ UV against D75+ UV in SpectraLight III viewing booths.

Differences in spectra between simulated D65 and D75 illuminants when the supplementary

UV component is not present are also shown. Results show a reduction in irradiance from the surface of treated samples in the UV range, with an increase in the quantity of FBA applied to the cotton substrates. Generally an increase in the amount of FBA employed directly correlates with an increased absorption of UV light and emission of visible light up to a certain extent beyond which further increases in FBA concentration could have a deleterious effect on the whiteness attained.


It was expected that a similar trend would be observed for the perceived whiteness of fluorescent white material as a function of UV content and the spectral radiance factor of light sources in viewing booths. To verify this, the amount of UV in viewing booths was changed and the spectral power distribution of illuminations in the UV range was assessed radiometrically.

Figure 67. Spectral irradiance curves measured in various illuminant combinations in a SpectraLight III viewing booth.

Results shown in Figure 67 demonstrate a gradual change in the UV amount as the emission of light from the supplementary UV bulb is changed from 100% (fully on), through partially blocked levels between 75 and 25%, to 0% (fully off). In general, irradiance in the


UV range was found to be higher for the D75 + UV illuminations compared with those based on D65 + UV. The UV intensity between 380 - 400 nm was approximately the same for all conditions examined. The biggest difference in UV intensity levels was observed between

340 and 370 nm, where changes in UV from zero to 100% resulted in progressive increases in irradiance.

Figure 68. Variations in total UV energy measured by summing up the spectral irradiance from the surface of optically brightened samples illuminated under different conditions in SpectraLight III viewing booths.

Figure 68 shows the total energy in the UV range for the various sources. It can be seen that the D75 + UV source had approximately 32% higher irradiance compared to the other sources examined.

The FBA treated cotton samples absorb some of the UV light and exhibit varying levels of fluorescence depending upon the amount of the FBA employed. An increase in the concentration of FBA is shown to be inversely related to the measured irradiance in the UV range. It should be noted, however, that in the case of the samples containing 2.5% FBA a portion of the absorbed UV contributes to fluorescence quenching.
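The total UV energy values of Figure 68 are obtained by summing the spectral irradiance over the UV wavelengths; a minimal sketch of this integration is given below, assuming the spectroradiometer output is available as wavelength and spectral-irradiance arrays (the 300-400 nm limits are illustrative).

# Sum (integrate) spectral irradiance over the UV range to obtain total UV energy.
# Requires NumPy; wavelength_nm and spectral_irradiance are whatever the
# spectroradiometer export provides (nm and W.m-2.nm-1 respectively).
import numpy as np

def total_uv_energy(wavelength_nm, spectral_irradiance, lo=300.0, hi=400.0):
    wl = np.asarray(wavelength_nm, dtype=float)
    irr = np.asarray(spectral_irradiance, dtype=float)
    mask = (wl >= lo) & (wl <= hi)
    # Trapezoidal integration; at a 1 nm interval this is essentially a direct sum.
    return np.trapz(irr[mask], wl[mask])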

The CIE whiteness index values for various samples under different UV amounts are shown in Table 12. The mean perceived whiteness values for various FBA treated substrates under different UV levels are presented in Figure 69 and Table 13.

Table 12. Effect of variations in UV on measured CIE whiteness index of non-brightened and (0-2.5%) FBA treated samples. Correlation coefficients are reported for each UV level shown in the column and the corresponding PW values under the same relative UV amounts used for perceptual assessments in viewing booths (i.e., 25% vs. 25%).

D65 illumination          0%       25%      50%      75%      100%
TUVCS                     58.10    65.76    92.01    111.41   116.92
0%                        70.98    70.87    71.85    71.62    72.29
0.025%                    80.57    87.13    107.26   123.79   127.96
0.25%                     81.79    90.57    121.34   143.16   146.85
2.50%                     70.21    79.10    109.94   132.15   137.09
Correlation Coefficient   0.60     0.79     0.83     0.93     0.95

D75 illumination          0%       25%      50%      75%      100%
TUVCS                     56.67    64.66    91.84    111.41   117.43
0%                        70.29    70.19    71.20    70.97    71.65
0.025%                    79.93    86.72    107.50   124.47   128.74
0.25%                     80.76    89.94    121.53   143.81   147.54
2.50%                     68.16    77.37    109.13   131.84   136.91
Correlation Coefficient   0.65     0.61     0.87     0.92     0.90


Figure 69. Perceived whiteness of five samples treated with various amounts of FBA and evaluated under ten different illumination conditions.

The correlation between measurements under UV calibrated mode and those from perceptual

assessments under similar percentages of the supplementary UV content available in the

booth is also shown in Table 12. It should be noted that, unfortunately, the scanning

spectroradiometer could not be used to determine the spectral composition or the exact

intensity of the flash xenon source employed in the spectrophotometer and the UV% of the source was adjusted using the filter settings available. Thus adjusted UV amounts may not correspond exactly to those set in the viewing booth. Accordingly, an indirect comparison of

UV contents was made using the measured radiance factor at different UV settings under


both conditions. High agreement between radiance factors at similar UV% levels would indicate that the actual UV amounts under the two conditions were reasonably similar.

Table 13. Effect of UV content on perceived whiteness (PW) of non-brightened and (0-2.5%) FBA treated samples. Correlation coefficients are reported between each level and the calibrated UV level in the spectrophotometer based on AATCC TM110.

                          PW for supplementary UV% in viewing booths               CIE WI at cal. UV% (78.7)
                          0        25       50       75       100
D65 illumination
TUVCS                     93.10    96.70    105.30   107.90   105.60               116.44
0.000%                    81.90    82.20    80.90    81.10    81.30                71.92
0.025%                    102.40   106.70   109.00   105.10   105.10               123.33
0.250%                    108.70   111.40   113.20   113.20   109.00               144.61
2.500%                    96.10    98.00    96.90    104.30   104.20               132.43
Correlation Coefficient   0.92     0.91     0.87     0.95     0.95

D75 illumination
TUVCS                     93.30    102.90   111.00   111.00   113.70               116.93
0.000%                    81.70    83.30    81.20    81.20    81.80                71.27
0.025%                    107.20   104.70   107.70   108.60   107.40               124.01
0.250%                    113.30   112.90   116.40   118.10   117.80               145.28
2.500%                    86.70    97.00    106.00   106.00   106.10               132.15
Correlation Coefficient   0.71     0.90     0.95     0.95     0.92

Results were thus compared between UV settings in the spectrophotometer and those set to the same levels (i.e. 75% and 75%) in the viewing booth. Results were also compared between various UV levels in the booth and the calibrated UV amount in the


spectrophotometer. As can be seen a good agreement is noted for measured and perceived

values under UV amounts set to the same levels. Table 13 shows measured values under

calibrated UV settings according to AATCC TM110 (in this case 78.7%) and the correlation

between values under this setting and perceived results under different amounts of

supplementary UV in the viewing booth. As can be seen for both D65 and D75 illuminations,

the correlation is strongest at or above 75% supplementary UV. This indicates that the use of the appropriate amount of UV in the viewing booth improves the agreement between measured and perceived results.

Table 14. Effect of variations in UV content on perceived whiteness (PW) of non-brightened and (0-2.5%) FBA treated samples. Results in each column show the % mean change in perceived whiteness of each sample for the UV intensity level shown against zero supplementary UV.

D65 illumination    25%      50%      75%      100%     Mean     Std Dev
TUVCS               3.9      13.1     15.9     13.4     11.6     5.3
0.000%              0.4      -1.2     -1.0     -0.7     -0.6     0.7
0.025%              4.2      6.5      2.6      2.6      4.0      1.8
0.250%              2.5      4.1      4.1      0.3      2.8      1.8
2.500%              2.0      0.8      8.5      8.4      4.9      4.1

D75 illumination
TUVCS               10.3     19.0     19.0     21.9     17.5     5.0
0.000%              2.0      -0.6     -0.6     0.1      0.2      1.2
0.025%              -2.3     0.5      1.3      0.2      -0.1     1.6
0.250%              -0.4     2.7      4.2      4.0      2.7      2.1
2.500%              11.9     22.3     22.3     22.4     19.7     5.2


Table 14 shows the mean change in perceived whiteness values for each sample under each

UV intensity level compared to the same source when no supplementary UV was added.

Results show that the perceived whiteness of the sample containing no FBA was approximately constant under all conditions, as expected. For the remaining samples, increases in UV content correlate well with increases in perceived whiteness of samples. The biggest variations are noted for the 2.5% FBA treated sample and the AATCC TUVCS. In these cases increasing the amount of UV in viewing booths resulted in increased perceived whiteness values for nearly all samples. The sample containing 0.25% FBA was perceived to be the whitest among all samples under all illumination conditions and further increases in the FBA content resulted in fluorescence quenching, which reduced the perception of whiteness. The UV calibration method specified by AATCC Test Method 110 resulted in a

UV filter setting of roughly 78.7%. An accurate adjustment of the UV content in the viewing booth to that employed in spectrophotometers would likely result in better agreement between perceived whiteness of optically brightened material and measured values.

The relationship between irradiance and perceived whiteness was nonetheless investigated. As shown in Figures 65 and 69, the UV emission in the viewing booth under the D75 source is higher than that under D65, and this results in higher perceived whiteness values. Results confirm that the inclusion of the supplementary UV source improves irradiance in the UV range, as expected. The maximum irradiance peaks appear around 460 nm, as shown in Figures 66a-d, where irradiance increases with an increase in FBA content from 0.25% to 2.5%. Results in Figure 64 also show that the Uchida WI model does not perform satisfactorily for the TUVCS substrate, likely due to its tint value, and generates WI values that are about 20 units smaller than those of samples of approximately similar perceived whiteness.

3.3 Conclusions

Visual perception of optically brightened white material is directly affected by the absorption

of UV radiation and emission of visible light, generally in the blue region. However,

currently no standard protocol for perceptual assessment of fluorescent objects exists. While

the amount of UV light incident on a sample can be adjusted in spectrophotometers for the measurement of optically brightened substrates, the adjusted UV content may not correlate with the UV content available in calibrated viewing booths.

The D65 and D75 daylight simulators in the calibrated viewing booths used in this study did not have sufficient UV compared to standard illuminants and the extent of UV deficiency for

D65 and D75 daylight sources was also found to be different. The inclusion of a supplementary UV source in the booth, which should be ideally adjustable, improved the agreement between the SPD of sources and standard illuminants in the UV range and significantly influenced the perception of fluorescent white objects under both sources.

The amount of supplementary UV in the viewing booth under both D65 and D75 was changed from 0 to 25, 50, 75 and 100% to determine its effect on perceptual assessment of fluorescent white objects. Variations in UV directly influenced radiometric and perceptual assessments of optically brightened substrates. Results show that, in general, the perceived whiteness of samples improved with an increase in the UV content of the source. Due to the


higher total UV energy available under the D75 source, correspondingly higher perceived whiteness values were obtained under that source.

The performance of the CIE WI and the Uchida whiteness models against perceptual results was found to be modest, and results showed that the CIE WI correlated slightly better under both D65 and D75 illumination conditions. The CIE whiteness index values under the UV calibrated mode (in this study approximately 78.7% UV) showed the best agreement with perceptual assessments made under approximately 75% supplementary UV. In the absence of quantifiable absolute amounts of UV available when measuring or viewing fluorescent samples, a precise determination of the reasons for this observation is not possible. However, it is likely due to improved agreement between the UV amounts available for fluorescence under the two conditions.

Ideally, the extent and nature of UV radiation in all sectors of the supply chain, including manufacturing, quality control, the illumination industry, measurement technology, and viewing booths, should be standardized to minimize variations between perceptual and measured data due to UV radiation.


4. The Investigation of Spatial Uniformity and Whiteness Boundary of an

EIZO Monitor

In the first three chapters, the results of investigations pertaining to the effect of texture on the perception of whiteness and to the factors affecting the whiteness of optically brightened materials were reported. In order to isolate the effect of texture on perceived whiteness, a series of whites with different surface textures was designed for display on an LCD monitor. Prior to observer assessment of these surfaces, several issues such as monitor uniformity, the acceptable whiteness boundary and viewing conditions were examined; these are explained in this chapter.

4.1 Calibration of the LCD Monitor

In order to ensure that images are viewed the way they are intended, monitors should be calibrated. Stimuli were generated on a high-resolution (1600×1200 pixels, 0.27 mm dot pitch) LCD monitor, EIZO ColorEdge CG211 [169]. The computer system connected to the

EIZO monitor had a 24-bit (RGB) color graphics card operating at a 60Hz refresh rate.

A colorimetric calibration of the LCD display was performed using a spectrophotometer (GretagMacbeth Eye One). The display was calibrated to a D65 white point of 80 cd/m², with gamma set to 2.2 for each of the three color primaries. The CIE 1931 x,y chromaticity coordinates of the primaries were (0.638, 0.322), (0.299, 0.611) and (0.145, 0.058) for the red, green and blue


primaries respectively. After calibration, the settings of this EIZO monitor were found to be

close to the sRGB standard monitor profile [151, 170].
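As an illustration of what this characterization implies, the RGB-to-XYZ relationship of the calibrated display can be reconstructed from the measured primary chromaticities and the white point. The sketch below assumes the D65 white chromaticity (x = 0.3127, y = 0.3290) and a simple gamma-2.2 transfer function; it is illustrative only and not a description of the i1 Match internals.

# Reconstruct an RGB -> XYZ characterization of the calibrated display from the
# primary chromaticities quoted above; illustrative only (not the i1 Match code).
import numpy as np

def xy_to_xyz(x, y):
    """Chromaticity (x, y) to XYZ with Y normalized to 1."""
    return np.array([x / y, 1.0, (1.0 - x - y) / y])

primaries = np.column_stack([xy_to_xyz(0.638, 0.322),    # red
                             xy_to_xyz(0.299, 0.611),    # green
                             xy_to_xyz(0.145, 0.058)])   # blue
white = xy_to_xyz(0.3127, 0.3290)          # assumed D65 white chromaticity
scale = np.linalg.solve(primaries, white)  # relative luminances of the primaries
M = primaries * scale                      # linear RGB -> XYZ matrix

def display_rgb_to_xyz(rgb, peak_luminance=80.0):
    """Digital counts 0-255 to absolute XYZ (cd/m^2), assuming gamma 2.2."""
    linear = (np.asarray(rgb, dtype=float) / 255.0) ** 2.2
    return peak_luminance * (M @ linear)

print(display_rgb_to_xyz([255, 255, 255]))   # approximately D65 white at 80 cd/m^2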

However, during the calibration of the EIZO monitor with the i1 spectrophotometer and the i1 Match software application some observations were made. Most importantly, it was noted that the orientation of the GretagMacbeth i1 spectrophotometer on the monitor panel affected the spectrophotometric readings [168]. This is due to the polarized nature of the display panel, which affects the emission of light from the display.

4.2 Spatial Uniformity of Monitor

The 1600×1200 pixel display panel of the EIZO screen was divided into equal blocks as shown in Figure 70. The screen was divided into 5 rows and 6 columns; the horizontal index runs from H1 to H6 and the vertical index from V1 to V5. Each block was 267 pixels wide and 240 pixels high. The spacing between blocks was 4% of the block height.


Figure 70. Schematic representation of the division of an EIZO monitor panel.

A PR-670 SpectraScan Spectroradiometer (Photo Research, Inc.) was used to measure the emission from the EIZO monitor, and the data were accessed via the SpectraWin 2 software [171]. The RGB values for each block were set to 255,255,255. Tristimulus values for each block were then measured using the PR-670 spectroradiometer, and L*a*b* values were calculated from the corresponding tristimulus values, as shown in Table 15.
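The conversion from the measured tristimulus values to L*a*b* follows the standard CIE equations; the sketch below uses the illuminant D65 (2° observer) reference white, which appears consistent with the values reported in Table 15.

# XYZ -> CIELAB conversion; the D65 (2 degree observer) reference white is
# assumed here, which reproduces the tabulated values to within rounding.
import numpy as np

WHITE_D65 = np.array([95.047, 100.0, 108.883])   # Xn, Yn, Zn

def xyz_to_lab(xyz, white=WHITE_D65):
    t = np.asarray(xyz, dtype=float) / white
    # CIE 1976 cube-root function with the linear segment for small ratios.
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    return 116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])

print(xyz_to_lab([83.85, 88.11, 94.89]))   # block H4V3: approx. (95.21, 0.20, 0.70)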


Table 15. Colorimetric values of 30 blocks covering the entire screen of the EIZO display.

         H1V5     H2V5     H3V5     H4V5     H5V5     H6V5
X        83.40    83.17    82.66    83.15    84.01    81.74
Y        87.71    87.23    86.76    87.41    88.50    86.06
Z        94.98    94.91    94.10    94.57    95.11    92.30
L*       95.04    94.84    94.64    94.91    95.37    94.34
a*        0.07     0.50     0.38     0.13    -0.20    -0.11
b*        0.35     0.05     0.25     0.41     0.83     0.96

         H1V4     H2V4     H3V4     H4V4     H5V4     H6V4
X        84.81    85.22    84.39    84.83    81.86    82.61
Y        89.09    89.55    88.69    89.33    86.10    87.17
Z        96.74    96.98    96.08    96.16    91.60    93.09
L*       95.62    95.81    95.45    95.72    94.36    94.81
a*        0.25     0.20     0.18    -0.14     0.05    -0.47
b*        0.17     0.35     0.32     0.73     1.46     1.23

         H1V3     H2V3     H3V3     H4V3     H5V3     H6V3
X        83.02    83.81    83.80    83.85    83.70    82.26
Y        87.47    88.15    88.04    88.11    88.04    86.68
Z        93.69    94.78    94.79    94.89    94.79    93.08
L*       94.94    95.22    95.18    95.21    95.18    94.60
a*       -0.23     0.05     0.23     0.20     0.04    -0.24
b*        1.04     0.80     0.72     0.70     0.72     0.88

         H1V2     H2V2     H3V2     H4V2     H5V2     H6V2
X        79.03    80.68    81.79    82.22    79.60    79.50
Y        83.26    85.09    86.16    86.42    83.68    83.71
Z        88.56    91.27    92.56    92.87    90.38    89.08
L*       93.13    93.92    94.38    94.49    93.31    93.32
a*       -0.21    -0.38    -0.20     0.16     0.13    -0.13
b*        1.46     0.94     0.85     0.83     0.51     1.44

         H1V1     H2V1     H3V1     H4V1     H5V1     H6V1
X        77.16    78.44    78.99    78.26    77.92    78.53
Y        81.52    82.78    83.16    82.44    81.94    82.62
Z        86.60    88.57    88.84    88.03    87.69    87.88
L*       92.36    92.92    93.08    92.77    92.55    92.85
a*       -0.65    -0.48    -0.10    -0.19     0.08     0.00
b*        1.53     1.09     1.19     1.21     1.08     1.46


Considering that the textured samples were to be displayed around the center section of the display panel in the subsequent experiments, the H4V3 block was used as the reference, and the color differences (CIELAB) between the individual blocks and the reference block were calculated, as shown in Table 16.
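The color differences in Table 16 are the usual CIELAB differences between each block and the reference block:

\Delta E^{*}_{ab} = \sqrt{(\Delta L^{*})^{2} + (\Delta a^{*})^{2} + (\Delta b^{*})^{2}}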

Table 16. The color difference, ΔE*ab, between each of the 29 blocks and the reference block H4V3.

ΔE*ab    H1       H2       H3       H4       H5       H6
V5       0.411    0.810    0.751    0.422    0.454    0.958
V4       0.669    0.699    0.448    0.615    1.155    0.939
V3       0.609    0.182    0.046    0.000    0.163    0.771
V2       2.252    1.431    0.929    0.729    1.907    2.047
V1       3.082    2.419    2.199    2.523    2.688    2.486

Based on results shown in Table 16, blocks H2V3, H3V3 and H5V3 have relatively small color differences against the reference block and could be used to display samples.

4.3 Whiteness Boundary of an EIZO Monitor

The device dependent perceptually noticeable boundary of whiteness was determined for the

EIZO monitor employed in this study. The procedure is described in the following section.


4.3.1 Experimental

4.3.1.1 Samples Preparation

A 2×2 inch white block was designed for display in the center of the EIZO screen. The distance from the white block's edges to the left and upper edges of the screen was 756 and 565 pixels, respectively. The color of the block was adjusted in the RGB color space with Adobe Photoshop CS5, and the initial setting for the block was (255, 255, 255). The color was then adjusted by decreasing the value of one of the RGB channels one unit at a time. For example, the value of the R channel was reduced by 1 unit to 254 while the G and B channel values were kept at 255. This allowed an examination of the effect of variation in the red channel on the perception of whiteness of the self-luminous block. The same procedure was applied to the green and blue channels; that is, the white block's color was adjusted from (255,255,255) to (255,240,255) in steps of one unit in the green channel, and from (255,255,255) to (255,255,240) in steps of one unit in the blue channel. In addition, the R and G values were simultaneously reduced by 2 or 3 units per step while the B value was kept at 255, to determine the effect of yellowness on the whiteness boundary of the white block. All RGB values were also simultaneously reduced by 1 unit at a time to determine the effect of overall luminance on the whiteness of the sample. Table 17 shows the RGB values of the blocks changed in either one channel or in two or more channels at the same time.


Table 17. RGB values of selected blocks for examining the whiteness boundary.

Change in RGB
color space     R             G             B             R&G           R&G&B
1               255,255,255   255,255,255   255,255,255   255,255,255   255,255,255
2               254,255,255   255,254,255   255,255,254   252,252,255   254,254,254
3               253,255,255   255,253,255   255,255,253   250,250,255   253,253,253
4               252,255,255   255,252,255   255,255,252   247,247,255   252,252,252
5               251,255,255   255,251,255   255,255,251   245,245,255   251,251,251
6               250,255,255   255,250,255   255,255,250   242,242,255   250,250,250
7               249,255,255   255,249,255   255,255,249   240,240,255   249,249,249
8               248,255,255   255,248,255   255,255,248   237,237,255   248,248,248
9               247,255,255   255,247,255   255,255,247   235,235,255   247,247,247
10              246,255,255   255,246,255   255,255,246   232,232,255   246,246,246
11              245,255,255   255,245,255   255,255,245   230,230,255   245,245,245
12              244,255,255   255,244,255   255,255,244   228,228,255   244,244,244
13              243,255,255   255,243,255   255,255,243   -             243,243,243
14              242,255,255   255,242,255   255,255,242   -             242,242,242
15              241,255,255   255,241,255   255,255,241   -             241,241,241
16              240,255,255   255,240,255   255,255,240   -             240,240,240
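The single-channel and neutral series of Table 17 can be generated programmatically; a minimal sketch is given below (the mixed R&G series, which decreases by two to three units per step, is simply enumerated as listed in the table).

# Generate the single-channel (R, G, B) and neutral (R&G&B) test series of
# Table 17, starting from the display white (255, 255, 255).
def single_channel_series(channel, start=255, stop=240):
    """Reduce one channel by 1 unit per step while the other two stay at 255."""
    series = []
    for value in range(start, stop - 1, -1):
        rgb = [255, 255, 255]
        rgb[channel] = value
        series.append(tuple(rgb))
    return series

red_series   = single_channel_series(0)   # (255,255,255) ... (240,255,255)
green_series = single_channel_series(1)   # (255,255,255) ... (255,240,255)
blue_series  = single_channel_series(2)   # (255,255,255) ... (255,255,240)
grey_series  = [(v, v, v) for v in range(255, 239, -1)]   # R, G and B reduced together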

4.3.1.2 Sample Color Measurement

Color measurements of the block were performed using a PR670 spectroradiometer, which was described earlier. Calibration and standardization of the instrument were accomplished


with strict adherence to the procedures stated in the equipment operating manual prior to and during the measurements [171]. Tristimulus values were obtained from the PR670 spectroradiometer and L*a*b* values were calculated; their distribution is shown in Figure 71.

Figure 71. The selected blocks (76 in total) depicted in the L*a*b* color space.

4.3.2 Visual Assessment

Two criteria were satisfied when selecting observers for the visual judgments: they had no prior experience in judging color differences or color matching, and they were color normal according to the Neitz test [158]. The observer population (5 males, 5 females) comprised college students ranging in age from 21 to 27, with a mean age of 24.


In order to minimize variability and to ensure that observers' understanding of the test and their performance were consistent, subjects were given a uniform set of instructions and prepared for the test in the same manner. In addition, the viewing sessions were limited to short periods of 15-20 minutes to prevent observer fatigue and the resultant poor decisions.

Each observer was allowed at least one minute to become adapted to the illumination provided by the screen, which was the only source of illumination in the test room. During the visual assessment, seating was arranged to provide comfort and to ensure a 0° viewing angle with the center of the EIZO monitor, where the white block was displayed. With the sample placed in the center of the EIZO monitor, the average distance, L, from eye to sample was measured and the required sample size calculated to produce a 10° field of view. Sample tiles were designed to be 2×2 inch and L was approximately 68 cm (an arm's length) to subtend a 10° viewing angle, as shown in Figure 72.

Observers were asked not to view a particular sample for more than 10 seconds and to proceed with their initial interpretation/assessment. Sample tiles were presented in a manner intended to minimize superfluous psychological contributions. Each observer participated in 3 trials with a time interval between successive trials of at least 24 hours. In the course of the visual assessment, each observer was asked to wear a grey lab coat to minimize the effect of surround on judgments. The blocks were designed as described in Section 4.3.1 and presented in order of gradually changing R, G, B, R&G and R&G&B “channels”, as shown in Figure 73.

The RGB values of the grey background of the interface shown in Figure 73 were 176,176,176, with approximate L*a*b* values of (72, 0, 0). On the outer edge of the background, a white frame with RGB values of 255, 255, 255 was placed to provide white adaptation. In the course of the visual assessment, the observers were asked to determine whether or not they considered the color of the block to be white.

Figure 72. Viewing and display configuration (10o field of view).


Figure 73. Sample display in the center of the EIZO screen. The color of the block was changed gradually by changing the RGB channel values.

4.3.3 Results and Discussions

Color tolerance is defined as a permissible color difference between a particular standard and a sample under specified conditions [172]. Color tolerance determinations can fall into two


categories, acceptability and perceptibility, depending on the judgment basis and intent of the experiments.

In establishing an acceptability database for modeling or validating color differences, a sound statistical analysis must be performed to ensure the validity of the results. For color difference experiments the response data should display a normal distribution, which is an important statistical requirement for the analysis and a basic assumption in this experiment.

Probit analysis was employed to determine the response frequency at which 50 percent of the observer population rejects a given color-difference stimulus when compared to a near-neutral standard anchor pair [173]. Three variables are necessary to run this statistical program: ΔE, the number of observers, and the number of observer rejections. In this experiment, ΔE is the color difference between each color batch displayed in the center of the EIZO monitor and the background with a lightness of 72. The procedure involves transforming the information to a probit scale and iteratively fitting a straight line to the data points by maximum likelihood estimation. The software application SPSS Statistics 17.0 can be used to provide numerical information such as probabilities of rejection for associated stimulus levels, standard deviation and chi-squared values. Graphical depictions can also be used to generate the sigmoidal response curve and the linear probit line fit to the data points.
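A probit fit of this kind can also be reproduced outside SPSS; the sketch below fits the rejection probability as Φ(b0 + b1·ΔE) by maximum likelihood using SciPy, with hypothetical ΔE values and rejection counts standing in for the experimental data.

# Maximum-likelihood probit fit of rejection probability versus color difference.
# Requires NumPy and SciPy; delta_e and rejections are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

delta_e = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])   # hypothetical stimulus levels
rejections = np.array([1, 4, 9, 15, 22, 27])          # hypothetical "not white" counts
n_judgements = 30                                     # observers x repeats per stimulus

def neg_log_likelihood(params):
    b0, b1 = params
    p = np.clip(norm.cdf(b0 + b1 * delta_e), 1e-9, 1 - 1e-9)
    return -np.sum(rejections * np.log(p) + (n_judgements - rejections) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=np.array([0.0, 1.0]), method="Nelder-Mead")
b0, b1 = fit.x
t50 = -b0 / b1    # delta E at 50% rejection probability (the median tolerance, T50)
print(b0, b1, t50)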

A few critical assumptions were required in structuring this experiment to allow the use of probit analysis in estimating color-difference tolerances. One assumption on which the experiment was based was that the perceptibility response of a color-normal population to color-difference magnitudes follows a normal distribution. This is significant because the probit model is based on this same distribution and is a linear transformation of the normal distribution. The probit procedure also provides a remedy for distributions that do not conform to the normal distribution; this normalization is achieved by transforming the stimulus values to a logarithmic scale. The probit analysis results are shown in Tables 18 and 19 [174].

Table 18. Parameter estimates.

                                              95% Confidence Interval
        Parameter          Std. Error    Lower bound    Upper bound
Logit   Color difference     2.086         -31.451        -23.272
        Intercept            2.978          36.563         42.520

Table 19. Chi-Square tests.

                                         Chi-Square    df    Sig.
LOGIT   Pearson Goodness-of-Fit Test      972.455      74    .000


Based on the standard error and chi-squared values shown in Tables 18 and 19, which are derived from the observers' responses to each color sample, it is reasonable to conclude that probit analysis is not suitable for analyzing the results of this experiment. The chi-squared test is used to determine how well the empirical data agree with the probit model, and significant chi-squared values indicate that the data do not adequately fit the probit regression line. One possible explanation for the high chi-squared values is that the distribution of the observers' perceptibility responses versus color-difference magnitude does not follow a normal distribution. The relationship between perceptibility responses and color-difference magnitudes, shown in Figure 74, supports this conclusion.


Figure 74. The relationship between perceptibility responses and color difference magnitudes.

Although probit analysis is not suitable in this case, the most precise estimate, the tolerance at 50% rejection probability (i.e., the median tolerance, denoted T50), could still be used to set the threshold of the rejection probability. Figure 75 shows the colorimetric attributes of the 76 samples used in this experiment, distributed in L*a*b* color space. Green dots denote samples judged white and red dots denote samples judged not white, with at least 50% agreement among observers. The corresponding colorimetric properties of the 76 selected samples are given in Appendix A.


Figure 75. Samples considered white at the 50% acceptance threshold.


5. Investigation of Various Methods of Generating Texture

The main aim of this research was to determine the effect of texture on the perceived whiteness of objects. To this end, various textures representing different surface features were generated and displayed to observers, whose responses to the magnitude of perceived whiteness were collected and analyzed. In order to examine the effect of texture on the whiteness of self-luminous images, the spatial uniformity of the EIZO monitor used in this study, as well as the whiteness boundary of the device, were examined. As discussed in Chapter 4, monitor uniformity and the whiteness boundary were examined to determine how to present samples to observers. In this section, the color mapping of the solid "white" blocks is described first. The determination of perceived whiteness in the presence of texture is then explained, and finally the generation of white surface images with different textures is examined.

5.1 Mapping Samples Used to Determine Device Dependent Whiteness Boundary

In this section the RGB-to-L*a*b* mapping of the images that were used to determine the whiteness boundary on the EIZO monitor is explained. According to Green [175] there are three main methods to achieve this type of mapping: physical models, look-up tables, and numerical methods. Typically, least-squares polynomial modeling, neural networks, and three-dimensional look-up tables with interpolation and extrapolation can be used to derive a transformation between display RGB values and XYZ values. These are described in further detail below.


5.1.1 Polynomial Regression Method

In order to produce a smoothing effect during the transformation process, approximate but controlled interpolation is employed by minimizing the least-squares differences in the higher derivatives where the spline curves meet. The idea is to minimize the sum of the squares of the residuals. For a second-order polynomial, the best fit corresponds to minimizing the function shown in Equation (42) [176]:

S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i - a_2 x_i^2 \right)^2          (42)

In general, for an mth-order polynomial, this would mean minimizing the function shown in

Equation (43):

S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i - a_2 x_i^2 - \cdots - a_m x_i^m \right)^2          (43)

With respect to this approach, the reference target in this study contained 76 color samples, where the RGB values of each color sample can be represented by a 1×3 vector ρi (i = 1 … N), and the corresponding L*a*b* values can be represented by a 1×3 vector xi (i = 1 … N). The idea behind using polynomials is that the vector ρi can be expanded by adding more terms (e.g., r², g², b², etc.); the polynomials used are shown in Table 20 below:


Table 20. Augmented matrix used in polynomial regression method.

M×3     Augmented matrix
6×3     [R  G  B  RG  GB  RB]
8×3     [R  G  B  RG  GB  RB  RGB  1]
9×3     [R  G  B  RG  GB  RB  R²  G²  B²]
11×3    [R  G  B  RG  GB  RB  R²  G²  B²  RGB  1]
14×3    [R  G  B  RG  GB  RB  R²  G²  B²  RGB  R³  G³  B³  1]

Let R denote an N×3 matrix of vectors ρi and H the corresponding matrix of vectors xi. The

mapping from RGB to L*a*b* can be represented by

H = MR (44)

Where M is the unknown transformation matrix sought. The size of matrix M changes from

6×3, 8×3, 9×3, 11×3 to 14×3 depending on the polynomial being solved. For example, for

the 6×3 matrix M, a polynomial relationship between RGB and L*a*b* is given in Equation

45:

L* = a11·R + a12·G + a13·B + a14·RG + a15·GB + a16·RB
a* = a21·R + a22·G + a23·B + a24·RG + a25·GB + a26·RB          (45)
b* = a31·R + a32·G + a33·B + a34·RG + a35·GB + a36·RB

where a total of 18 coefficients need to be determined; in this case, matrix M is a 3×6 matrix containing the coefficients a11–a36. In our experiment, the best M is defined as the


one that minimizes the color differences for all 76 color samples. Therefore, least-squares fitting in the RGB color space is employed, which is equivalent to minimizing the function shown in Equation (46):

E = \sum_{i=1}^{N} \left( x_i^{T} - M \rho_i^{T} \right)^{2}          (46)

The least-squares solution minimizing E is shown in Equation (47):

M = (R^{T} R)^{-1} R^{T} H          (47)

where R^{T} denotes the transpose of R and (R^{T} R)^{-1} its inverse.
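The sketch below illustrates this least-squares solution for the 6×3 augmented polynomial [R G B RG GB RB] using NumPy; the RGB patches and "measured" L*a*b* values are random placeholders, not the actual 76-sample target.

import numpy as np

def augment_6(rgb):
    """Expand an Nx3 RGB array into the Nx6 terms [R, G, B, RG, GB, RB]."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.column_stack([r, g, b, r * g, g * b, r * b])

# Hypothetical training data: 76 RGB patches and their measured L*a*b* values.
rng = np.random.default_rng(0)
rgb = rng.uniform(0, 1, size=(76, 3))
lab = rng.uniform(0, 100, size=(76, 3))             # placeholder for measured values

R = augment_6(rgb)                                   # N x 6 augmented matrix
H = lab                                              # N x 3 target matrix
M, *_ = np.linalg.lstsq(R, H, rcond=None)            # 6 x 3, equivalent to (R^T R)^-1 R^T H

pred = R @ M
print("M shape:", M.shape, " mean abs error:", round(np.abs(pred - H).mean(), 3))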

5.1.2 Artificial Neural Networks

In the field of artificial intelligence, artificial neural networks (ANNs) define a set of computational methods inspired by studies of how humans process information to solve problems. Despite the large variety of network structures that have been developed, the majority of practical applications of neural computing are based on one class of neural network known as the multi-layer perceptron (MLP) [177,178].

An MLP consists of layers of processing units known as neurons. Each neuron receives input and performs some function on this input to produce an output. The function between input and output for any neuron is known as the activation function or transfer function and is normally nonlinear. A typical nonlinear transfer function is the sigmoid (S-shaped) function, although linear transfer functions are sometimes used for the neurons in the output layer.


The input for each unit is the weighted sum of the outputs from all of the units in the

previous layer. However, training the neural networks can be quite time-consuming, and

there are many parameters to determine such as the number of hidden units.

Generally speaking, the units in the first layer, also known as the input layer, receive their input from an input vector, and those in the last layer, the output layer, generate an output vector. Each unit in the hidden and output layers also receives a weighted input from a bias unit whose output is fixed at unity. In other words, the network as a whole can be thought of as a universal function that attempts to find a mapping between input vectors and output vectors, as shown in Figure 76.

Figure 76. Schematic diagram of an artificial neural network with one hidden layer.


The number of units in the input and output layers is determined by the nature of the problem being solved, whereas other parameters, such as the number of hidden units, are typically determined empirically. For example, if the network is being used to perform a mapping between a four-dimensional vector and a one-dimensional vector, then the number of units in the input and output layers would be four and one, respectively.

With respect to ANNs it is very important to be aware of the difference between the training mode and testing mode. During the training mode, an example of input-output pairs is

presented to the network, the error between the desired output and the actual output is then

computed, and the values of the weights are modified to reduce the error. This process is

repeated for each input-output pair in the training set and the presentation of the whole

training set in this way is known as a training epoch. Training may require thousands of

epochs and typically the training procedure is very computationally intensive. Eventually, at

the end of the training period the values of the weights are fixed. In the course of the testing

mode, input vectors are presented to the network and output vectors are computed. The

performance of the network in testing mode using the data from the training set is known as

the training error. An obvious problem with MLPs is that they are prone to over-fitting the

training data. As the number of hidden layers or units in the network increases, the training error should decrease; in the limit of a sufficiently complex network, an MLP can achieve a training error of zero. However, such a network may show poor generalization performance. Generalization refers to the ability of the network to perform well on data that were not used during the training mode. A separate dataset, known as the testing dataset, is therefore used to determine the testing error. In a further step, known as validation, a set of known or new data can be used to test the performance of the system and determine the accuracy of the algorithms employed.
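A minimal sketch of such an MLP mapping is given below, using scikit-learn rather than the software used in this work; the 2×7×7 architecture mirrors one of the configurations examined in section 5.2, and the training data are random placeholders.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
rgb = rng.uniform(0, 1, size=(76, 3))               # hypothetical display RGB patches
lab = rng.uniform(0, 100, size=(76, 3))             # placeholder measured L*a*b* values

X_train, X_test, y_train, y_test = train_test_split(rgb, lab, test_size=0.2, random_state=0)

mlp = MLPRegressor(hidden_layer_sizes=(7, 7), activation="logistic",
                   solver="lbfgs", max_iter=5000, random_state=0)
mlp.fit(X_train, y_train)                           # training mode
print("training score (R^2):", mlp.score(X_train, y_train))
print("testing  score (R^2):", mlp.score(X_test, y_test))   # generalization check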

5.1.3 Look-Up-Table Method

In a one-dimensional LUT, a single color component indexes the table and the value stored at that index is used as the output. A 3D LUT uses the entire color to index a three-dimensional array, lut[r,g,b]; the entry at that index is an RGB color which is used as the output. 3D LUTs can represent almost any color correction, including all those that 1D LUTs can perform, although they are much larger than 1D LUTs. A 3D LUT is useful for color corrections that adjust hue and saturation, whereas 1D LUTs can only adjust individual component levels (such as gamma, brightness and contrast) [179,180]. The little CMS library used in section 5.3 of this study is essentially an application of the look-up-table method.

5.2 Comparison of Color Space Mapping between Polynomial Regression and Neural Network Methods

As illustrated in section 5.1, polynomial regression was employed to analyze the samples used for determining device dependent whiteness boundaries. Table 21 shows the results of average color difference, DEab*, obtained from 76 color samples changing in five components, that is, red, green, blue, R&G and R&G&B separately.


Table 21. Model performance, in terms of DEab*, using various polynomials for five components.

Average ∆E 14×3 11×3 9×3 8×3 6×3

Red 0.040 0.041 0.035 0.081 0.036

Green 0.036 0.037 0.032 0.088 0.045

Blue 0.110 0.118 0.104 0.180 0.100

R&G 0.050 0.052 0.042 0.146 0.046

R&G&B 0.155 0.155 0.087 0.153 0.085

Table 22. Model performance, in terms of DEab*, by various polynomials for all 76 color samples.

∆E 14×3 11×3 9×3 8×3 6×3

MIN 0.007 0.003 0.008 0.024 0.010

MAX 0.938 0.929 0.941 0.955 0.960

AVG 0.074 0.077 0.058 0.128 0.061

STD 0.114 0.113 0.107 0.123 0.108

Table 22 shows the maximum, minimum, average and standard deviation of the color differences for all 76 color samples. As shown in Table 22, the 9×3 form of the matrix M, the unknown transformation matrix sought, gave the best performance in the color space mapping from RGB to XYZ.

Table 23 shows the results of the average color difference, DEab*, obtained from all color samples in five components, that is, red, green, blue, R&G and R&G&B separately. Here a few hidden layers with various architectures were tested to determine a range of average DE around 5. Based on the results, a single hidden layer with 7 and with 10 units, as well as two hidden layers with 7 and 7 units and with 10 and 10 units respectively, were selected to test the performance using MLPs.

Table 23. Model performance, in terms of DEab*, by MLPs for five components.

Average ∆E 1×7 1×10 2×7×7 2×10×10

Red 0.073 0.080 0.061 0.061

Green 0.090 0.089 0.072 0.069

Blue 0.155 0.327 0.118 0.116

R&G 0.119 0.059 0.032 0.087

R&G&B 0.085 0.075 0.043 0.084

Table 24 shows the maximum, minimum, average and standard deviation of the color differences for all 76 color samples. The model with two hidden layers of 7 and 7 units resulted in the best performance for the data mapping from RGB to XYZ color space, as shown in Table 24.

Table 24. Model performance, in terms of DEab*, by MLPs for all 76 color samples.

∆E 1×7 1×10 2×7×7 2×10×10

MIN 0.015 0.012 0.004 0.008

MAX 0.381 0.634 0.197 0.325

AVG 0.106 0.129 0.066 0.083

STD 0.074 0.141 0.044 0.059

5.3 Generating Texture Images with Different Surfaces

The polynomial regression and neural network approaches introduced in section 5.1 are effective in reproducing the data mapping from RGB to XYZ color space, especially for solid color blocks. In our study, however, the final target is to generate images containing different textures, which is difficult to achieve with the polynomial regression and neural network methods. Therefore, another approach, little CMS, was used to address this issue; it is introduced in the following sections.


5.3.1 ICC Color Management

Color management serves the purpose of ensuring color accuracy throughout the entire

workflow from initial draft through to the finished displayed or printed product. In other

words, color management is a workflow that allows users to achieve consistent, predictable

color across multiple platforms and multiple devices. Ideally the task of color management is

to maintain color accuracy from start to finish, regardless of the hardware and software used.

In order to achieve this it is necessary to map the color characteristics of each input and

output device within the XYZ color space and to store this information in color profiles.

The International Color Consortium (ICC) [181] developed standards and guidelines for

color profiles, which are the data files that characterize each device's color. This standardization made it possible for different software and hardware devices to communicate color in the same consistent and repeatable way [182].

The key to a color managed workflow is the color conversion process, that is, how to convert and maintain color consistency from one device to another. Color gamut is the range of colors reproducible by a particular device, for example computer monitors generally have a larger color gamut than inkjet printers, and can therefore reproduce and display a greater range of color than typical inkjet printers.

An ICC color management system includes the PCS (profile connection space, which is normally CIELAB), the device profiles, a CMM (color management module) and a rendering intent. Attaching a color profile provides the transforms to be applied to the encoded image data in order to produce an image in a profile connection space (PCS) describing a specified medium, including its associated viewing conditions. Appropriate transforms to and from the PCS are linked by the color management module (CMM) [183].

5.3.1.1 PCS

PCS, or profile connection space, is the color space in which the color conversion or color remapping takes place. This standard color space is the interface which provides an unambiguous connection between the input and output profiles as illustrated in Figure 74. It

allows the profile transforms for input, display, and output devices to be decoupled so that

they can be produced independently. A well-defined PCS provides the common interface for the individual device profiles. The profile connection space is based on the CIE 1931 standard colorimetric observer, an experimentally derived standard observer that provides a very good representation of the color-matching capabilities of the human visual system. The PCS is either CIELAB (L*a*b*) or CIEXYZ, since these are device-independent color spaces [182].

Illuminant D50 is commonly used in graphic arts and is the illuminant of choice for this

industry. The PCS adopted white chromaticity is the chromaticity of the D50 illuminant

defined in ISO 3664 [184].

5.3.1.2 ICC Profile

An ICC profile is a set of data that characterizes a color input or output device, according to

the standards promulgated by the International Color Consortium (ICC). Figure 77 shows

how an ICC profile works; a monitor profile describes the color characteristic of the monitor


being used. The workflow, in basic terms, is to convert the colors of an image from the color space of the monitor to the color space of the output device [182,183].

Figure 77. Schematic diagram of how ICC profile works.

5.3.1.3 Rendering Intent

Transforms between corresponding colors in different viewing conditions often apply the chromatic adaptation component of a color appearance model. The cross-media objective is often not to reproduce appearance exactly; thus, color rendering approaches independently use appearance models to deal with viewing-condition differences and gamut mapping to deal with gamut differences. Because ICC profiling is nothing more than a translation from one color space to another, a rendering intent is needed to provide the color management software with an approach for making that translation, that is, for mapping the colors in the image from one color space to another and for dealing with out-of-gamut colors (colors that are outside the range of the color space to which the image is being translated) [182]. Four rendering intents are used in ICC profiles. These are:

• Absolute colorimetric intent: for in-gamut colors, the chromatically adapted CIEXYZ tristimulus values are left unchanged by the transformation, so that the output color (measured relative to the illuminant) matches the input color as closely as possible.

• Relative colorimetric intent: the chromatically adapted tristimulus values of in-gamut colors are rescaled so that the white point of the actual medium is mapped to the PCS white point; the output color (measured relative to the paper) should match the input color where possible.

• Perceptual intent: PCS values represent hypothetical measurements of a color reproduction on the reference reflective medium. Basically speaking, the PCS represents the appearance of that reproduction as viewed in the reference viewing environment by a human observer adapted to that environment.

• Saturation intent: this transformation involves compromises such as trading off preservation of hue in order to preserve the vividness of pure colors; in other words, the color transforms should maintain saturation as much as possible.


5.3.2 Little CMS

Little CMS is an open-source color management system released as a software library for use in other programs, allowing the use of International Color Consortium (ICC) profiles. lCMS is one of the first open-source CMS systems and currently the only one with a GUI; it was initiated by Marti Maria in 1998 [184]. Little CMS accepts profiles conformant with ICC version 4.2 or below and supports all features described in the ICC specification. Figure 78 depicts the main code section for transforming a bitmap between two ICC profiles (a scanner profile and a monitor profile) using the lCMS API [184,185].


Figure 78. Transformation of a bitmap image between a scanner profile and EIZO monitor profile.

The scanner profile and EIZO monitor profile described in section 5.3.1.2 were generated using GretagMacbeth ProfileMaker 5.0 [186]. The code in Figure 78 shows the transformation of an image between the two ICC profiles. In the course of the data transformation, a suitable rendering intent, introduced in section 5.3.1.3, is chosen to maximize image fidelity. Visual Studio 2008 (64-bit) and the Qt Project [187] were used to generate the simulated texture software (STS) incorporating the lCMS system; its test interface is shown in Figure 79.
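For readers who wish to reproduce this kind of profile-to-profile conversion, the minimal sketch below uses Pillow's ImageCms module, which wraps littleCMS; the image and profile file names are hypothetical, and the thesis software itself was built with the lCMS C API under Qt rather than with this wrapper.

from PIL import Image, ImageCms

src = Image.open("scanned_woolen_sample.tif").convert("RGB")    # hypothetical scanned image
scanner_profile = ImageCms.getOpenProfile("scanner.icc")        # hypothetical ICC profiles
monitor_profile = ImageCms.getOpenProfile("eizo_monitor.icc")

# Build the scanner -> monitor transform with a chosen rendering intent (section 5.3.1.3).
transform = ImageCms.buildTransform(
    scanner_profile, monitor_profile, "RGB", "RGB",
    renderingIntent=ImageCms.INTENT_PERCEPTUAL)

out = ImageCms.applyTransform(src, transform)
out.save("woolen_sample_for_display.tif")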

Figure 79. The interface generated using Visual Studio 2008, Qt Project including little CMS.

In this software, images could be imported using the shortcut key F5, zoomed in with F6, zoomed out with F7, and dragged using the mouse.


5.3.2.1 Assessment of Accuracy with EIZO Monitor Profile

In order to test the accuracy of the monitor profile generated using Eye-one Match 3, a set of 180 images with lightness values varying from 80 to 100 was designed (as shown in Appendix B); their colorimetric distributions are shown in Figure 80.

Figure 80. Colorimetric distribution of selected data in L*a*b* color space.

The EIZO monitor profile provides the relationship between L*a*b* and RGB. The dataset included (Lab)CMS values from (81,0,0) to (100,0,0) with a lightness interval of 1 unit, together with variations of 0.5 or 1 unit in the a*b* directions. The corresponding (RGB)CMS values were then generated via the EIZO monitor profile. Separately, the (Lab)m values of a set of samples with specific (RGB)m values, displayed in the center of the EIZO monitor, were measured. The generated (RGB)CMS values were matched to the (RGB)m dataset and their corresponding (Lab)m values obtained. The DE values were then obtained by calculating the color difference between (Lab)CMS and (Lab)m.

Figure 81. Schematic representation of the methodology to test the accuracy of EIZO monitor profile.


The workflow shown in Figure 81 illustrates the methodology employed to test the accuracy of the EIZO monitor profile.
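The color-difference step of this workflow amounts to a Euclidean distance in CIELAB; a minimal sketch with placeholder values (not the actual 180-sample dataset) is shown below.

import numpy as np

lab_cms = np.array([[95.0, 0.0, 0.0],
                    [90.0, 0.2, 1.1]])               # hypothetical profile-derived values
lab_m   = np.array([[93.8, 0.3, 0.9],
                    [89.1, 0.1, 2.0]])               # hypothetical measured values

de = np.sqrt(((lab_cms - lab_m) ** 2).sum(axis=1))   # DEab*: Euclidean distance in CIELAB
print("per-sample DEab*:", np.round(de, 2))
print("min %.2f  max %.2f  mean %.2f  sd %.2f" % (de.min(), de.max(), de.mean(), de.std()))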

The color differences between (Lab)CMS and (Lab)m for the 180 samples are summarized in Table 25; these differences are too large for the purposes of this study.

Table 25. Color difference between (Lab)CMS and (Lab)m for the selected 180 samples.

                      Min.    Max.    Ave.    SD
Color difference      0.55    3.30    1.83    0.48

5.3.2.2 Measurement of Woolen Samples with a SF 600X Spectrophotometer

The woolen samples were measured with a Datacolor SF600X spectrophotometer according to the AATCC procedure for UV calibration of white samples: UV included as well as UV excluded, specular excluded, LAV aperture, 6 readings, illuminant D65 and the 10-degree observer. The measurements were repeated three times and the average L*a*b* values are shown in Table 26.


Table 26. L*a*b* values measured with a Datacolor SF600X spectrophotometer with and without UV.

Sample (SF600X +UV)     1      2      3      4      5      6      7      8      9      10
L*                    82.21  81.02  81.26  83.08  82.51  82.82  81.19  82.60  82.80  80.82
a*                     0.89   0.62   0.69   0.69   0.54   0.76   0.56   0.77   0.75   0.52
b*                     3.48   2.91   2.72   2.73   3.21   2.81   2.71   2.67   3.02   3.41

Sample (SF600X -UV)     1      2      3      4      5      6      7      8      9      10
L*                    81.49  80.33  80.51  82.34  81.79  82.04  80.47  82.04  82.20  80.19
a*                    -1.00  -1.29  -1.22  -1.24  -1.37  -1.16  -1.31  -1.20  -1.18  -1.36
b*                    11.70  11.09  10.96  11.20  11.57  11.14  10.50  11.04  11.05  11.37

5.3.2.3 Measurement of Woolen Samples with a PR-670 Spectroradiometer

Figure 82 shows the white point calibration of the PR-670 spectroradiometer. The woolen samples were measured with the PR-670 at a distance of approximately 60 cm from the radiometer to the grey stand; Figure 83 shows the measurement of the 10 woolen samples. A PTFE (polytetrafluoroethylene) reflectance standard, mounted on top of a grey stand at an angle of 45°, was used for the white point calibration.


Figure 82. Calibration of Spectroradiometer PR-670.

Figure 83. Measurement of 10 woolen samples with Spectroradiometer PR-670.


Table 27 shows the colorimetric properties obtained with the PR-670 spectroradiometer when the 10 woolen samples were presented in a SpectraLight III standard viewing booth (X-Rite, Inc.) under D65 with and without UV. Each sample was measured three times at intervals of 1 hour and the averaged results are shown in Table 27.

Table 27. L*a*b* values obtained using a PR-670 spectroradiometer with and without UV.

Sample (+UV)     1      2      3      4      5      6      7      8      9      10
L*             80.32  80.23  81.08  82.41  80.18  80.72  80.86  81.04  81.51  78.76
a*              0.30   0.23   0.20   0.40   0.19   0.39   0.24   0.30   0.41   0.07
b*              2.86   3.46   3.13   3.63   4.26   3.59   3.48   4.38   3.56   4.71

Sample (-UV)     1      2      3      4      5      6      7      8      9      10
L*             80.67  79.90  80.34  82.04  79.39  80.87  80.57  80.76  81.03  78.67
a*             -0.43  -0.55  -0.59  -0.42  -0.56  -0.28  -0.51  -0.45  -0.35  -0.64
b*              5.87   6.60   6.15   6.39   7.09   6.28   6.21   7.04   6.52   7.31

The color difference, DEab*, between the SF600X spectrophotometer and the PR-670 spectroradiometer measurements, with and without UV, was calculated as shown in Table 28.


Table 28. The color difference, DEab*, between the SF600X spectrophotometer and the PR-670 spectroradiometer with and without UV.

Sample          1     2     3     4     5     6     7     8     9     10
DEab* (+UV)   2.07  1.04  0.66  1.16  2.58  2.27  0.90  2.36  1.44  2.48
DEab* (-UV)   5.91  4.57  4.85  4.89  5.15  5.08  4.37  4.27  4.75  4.39

In the presence of supplementary UV, the color difference ranges from 0.66 to 2.58 with a mean value of 1.70, which is much smaller than the mean color difference without UV, 4.82. Therefore, in the absence of supplementary UV, the color difference measured by the radiometer in the viewing booth, and likely the perceived difference, does not correlate well with the results measured using the spectrophotometer. Moreover, as evidenced by the reasonably large differences between measured results even in the presence of UV, accurate calibration of all instruments, including their UV content, is required to ensure that the color attributes obtained from these instruments reasonably accurately represent the samples' perceived colorimetric characteristics. However, due to differences in a number of parameters, this may still be inadequate to resolve the variation between instrumental results.

5.3.2.4 A Comparison of the Accuracy of Different Rendering Intents

Four rendering intents (absolute colorimetric, relative colorimetric, perceptual and saturation) can be used for the transformation from a device-dependent color space to a device-independent color space. It was therefore decided to examine the accuracy of the transformations based on the various rendering intents. The woolen samples were scanned using a calibrated EPSON 10000XL flatbed scanner at a resolution of 300 dpi. Images were then displayed one by one in the center of the EIZO monitor via the STS software, which incorporated little CMS. Figure 84 shows the interface displaying one of the 10 woolen samples. The colorimetric properties of the 10 scanned woolen samples were obtained using a PR-670 spectroradiometer under the four rendering intents and are shown in Table 29.

Figure 84. The STS interface displaying a scanned woolen sample.


In order to test the accuracy of the little CMS incorporated in the simulated texture software (STS), and to determine which rendering intent is more suitable for the transformation between the input and output color spaces, color differences were calculated among the different transformed cases. Three types of measurements of the woolen samples were compared:

1. spectrophotometric measurement of the samples using a Datacolor SF600X under the conditions specified in section 5.3.2.2;

2. radiometric measurement of the identical samples displayed on a grey stand inside a SpectraLight III viewing booth illuminated with simulated daylight (D65), with and without supplementary UV, according to the procedure described in section 5.3.2.3; and

3. radiometric measurement of images of the identical samples displayed on the EIZO monitor via the STS software, using a PR-670 spectroradiometer, according to the procedure described in section 5.3.2.4.


Table 29. L*a*b* values of the 10 scanned woolen samples displayed on the EIZO monitor and measured with the PR-670 spectroradiometer under four rendering intents.

Rendering Intent               1      2      3      4      5
Relative Colorimetric   L*   84.91  83.32  82.38  86.99  82.98
                        a*   -0.24  -0.38  -0.54  -0.41  -0.67
                        b*    4.19   5.12   5.55   4.32   6.12
Perceptual              L*   84.80  83.37  82.52  86.42  83.63
                        a*    0.81   0.73   0.08   0.89   0.17
                        b*    3.61   5.03   5.33   3.39   4.93
Saturation              L*   84.69  83.25  82.36  86.25  83.47
                        a*   -0.04  -0.06  -0.54   0.24  -0.46
                        b*    3.75   5.15   5.51   3.58   5.11
Absolute Colorimetric   L*   84.11  82.69  81.84  85.72  82.95
                        a*   -0.02  -0.09  -0.72   0.05  -0.64
                        b*    4.79   6.18   6.46   4.59   6.08

Rendering Intent               6      7      8      9     10
Relative Colorimetric   L*   83.68  83.41  82.40  85.33  82.05
                        a*   -0.36  -0.40  -0.40  -0.05  -0.39
                        b*    5.54   4.59   5.03   3.85   4.90
Perceptual              L*   83.86  83.41  82.01  85.84  81.64
                        a*    0.53   0.55   0.37   1.17   0.44
                        b*    4.47   4.69   4.43   2.37   5.21
Saturation              L*   83.69  83.24  81.85  85.88  81.48
                        a*   -0.10  -0.08  -0.25  -0.34  -0.18
                        b*    4.66   4.88   4.61   5.43   5.39
Absolute Colorimetric   L*   83.17  82.72  81.33  85.14  80.97
                        a*   -0.29  -0.27  -0.43   0.33  -0.36
                        b*    5.63   5.84   5.57   3.58   6.34


Table 30. Color differences calculated for the 10 woolen samples under different rendering intents.

Sample                    1     2     3     4     5     6     7     8     9    10    Mean
(V.B.+UV)_(STS P.)      4.57  3.55  2.63  4.05  3.52  3.27  2.84  0.98  4.56  2.94   3.29
(V.B.+UV)_(STS R.C.)    4.81  3.56  2.84  4.70  3.47  3.63  2.86  1.67  3.86  3.32   3.47
(V.B.+UV)_(STS S.)      4.47  3.47  2.80  3.85  3.46  3.20  2.78  1.01  4.81  2.81   3.27
(V.B.+UV)_(STS A.C.)    4.27  3.68  3.54  3.46  3.41  3.27  3.05  1.43  3.63  2.77   3.25
(SP.+UV)_(STS P.)       2.60  3.17  2.96  3.41  2.08  1.97  2.97  1.90  3.14  1.98   2.62
(SP.+UV)_(STS R.C.)     3.01  3.34  3.28  4.36  3.19  3.07  3.06  2.64  2.78  2.13   3.09
(SP.+UV)_(STS S.)       2.66  3.24  3.24  3.32  2.35  2.22  3.05  2.32  4.06  2.20   2.87
(SP.+UV)_(STS A.C.)     2.48  3.74  4.04  3.29  3.13  3.03  3.58  3.38  2.45  3.06   3.22
(V.B.-UV)_(STS P.)      4.88  4.03  2.43  5.47  4.82  3.58  3.39  3.01  6.54  3.79   4.19
(V.B.-UV)_(STS R.C.)    4.57  3.74  2.13  5.36  3.73  2.91  3.27  2.59  5.07  4.16   3.75
(V.B.-UV)_(STS S.)      4.57  3.69  2.12  5.11  4.54  3.26  3.02  2.67  4.97  3.43   3.74
(V.B.-UV)_(STS A.C.)    3.63  2.86  1.55  4.12  3.70  2.39  2.20  1.58  5.10  2.51   2.96
(SP.-UV)_(STS P.)       8.93  7.07  6.12  9.07  7.06  7.11  6.77  6.80  9.70  6.58   7.52
(SP.-UV)_(STS R.C.)     8.28  6.74  5.77  8.34  5.62  5.89  6.66  6.07  7.93  6.80   6.81
(SP.-UV)_(STS S.)       8.62  6.73  5.79  8.69  6.74  6.77  6.39  6.50  6.77  6.23   6.92
(SP.-UV)_(STS A.C.)     7.45  5.58  4.72  7.53  5.66  5.69  5.28  5.57  8.17  5.19   6.08

Table 30 summarizes the results of these measurements. According to these results, the perceptual rendering intent gave the smallest average color differences: the color differences between samples measured in the viewing booth with UV, or with the spectrophotometer with UV, and the images displayed using the perceptual rendering intent are smaller than those for the other cases.

Table 31 shows a comparison of color differences for radiometric measurements of samples displayed in the viewing booth under simulated D65 and the same samples measured with the SF600X spectrophotometer, with and without UV.


Table 31. Color differences obtained when samples were displayed in the viewing booth and measured with the radiometer versus those measured with the SF600X spectrophotometer.

Sample                    1     2     3     4     5     6     7     8     9    10    Mean
(V.B.+UV)_(STS P.)      0.68  0.86  2.22  1.57  1.15  1.14  0.98  1.91  1.91  1.07   1.35
(SP.+UV)_(STS P.)       2.39  0.58  1.89  2.87  1.44  0.79  0.97  2.88  2.65  1.72   1.82
(V.B.+UV)_(S.P.+UV)     2.31  0.94  2.02  2.44  0.66  0.63  1.18  1.33  3.06  2.01   1.66
(V.B.-UV)_(STS P.)      6.72  5.45  4.89  5.90  5.33  5.02  4.79  4.28  7.41  4.37   5.42
(SP.-UV)_(STS P.)       6.28  4.94  4.35  7.30  4.52  4.87  5.69  8.17  6.40  5.44   5.80
(V.B.-UV)_(S.P.-UV)    11.43  8.82  7.54 11.53  8.80  8.84  8.40  8.58 10.61  8.26   9.28

In Tables 30 and 31, V.B., SP., P., R.C., S. and A.C. denote viewing booth, spectrophotometer, perceptual, relative colorimetric, saturation and absolute colorimetric, respectively.

5.3.2.5 Conclusions

The results indicate that, compared with the other three rendering intents, the perceptual rendering intent performs better; it was therefore selected for the color space transformation with little CMS.


The average color difference between the 10 woolen samples displayed in the viewing booth and measured with the radiometer and the same woolen samples measured with the spectrophotometer in the presence of UV was 1.70. Although such a difference between colored samples would likely not be acceptable for textile applications, the acceptability tolerance for whites at this lightness level has not yet been independently established. In addition, despite all the control measures employed, a number of external factors likely contribute both to the measurement of samples with the radiometer and to the measurements with the spectrophotometer, so such a difference is not unexpected.

For measurements with the same device (i.e., the radiometer), the average calculated color difference between the woolen samples displayed in the viewing booth in the presence of supplementary UV and those displayed on the monitor using the simulated texture software was 3.29. The average color difference between the woolen samples measured using the spectrophotometer with UV and the images displayed on the monitor using the simulated texture software was 2.62. These differences relate, in part, to the accuracy of the transformation process, including the little CMS incorporated in the simulated texture software, which is apparently not sufficiently accurate for textile applications. The differences may, however, be considered reasonable for imaging applications, though alternative methods for displaying images should also be considered.


5.3.3 Generating Texture Images with Similar Lightness and Whiteness Based on Woolen Samples

In section 5.3.2, the simulated texture software was used in conjunction with little CMS to generate texture images with similar lightness and whiteness, as verified by measurements using the SF600X spectrophotometer. However, measurements using the PR-670 spectroradiometer deviated significantly from those based on the spectrophotometer, and the color differences for the 10 scanned woolen samples displayed on the EIZO monitor reached up to 3 DEab* units, which is not acceptable. Another approach, based on adjusting the 10 scanned woolen samples, was therefore applied to examine the effect of texture on perceived whiteness.

5.3.3.1 The Effect of Surround on the Measured Value of Displayed Images

In order to investigate the effect of surround on the measured values of the displayed images, six 2.5 × 2.5 inch images were generated and displayed in the center of the EIZO monitor with RGB values ranging from (250,250,250) to (0,0,0) in intervals of (50,50,50). The surround RGB values ranged from (50,50,50) to (250,250,250) in intervals of (50,50,50). The L*a*b* values of the six images under the different surrounds were obtained using the PR-670 spectroradiometer, as shown in Table 32.

The distance from the PR-670 spectroradiometer to the EIZO monitor was set to 125 inches to ensure that the black circle in the PR-670's viewfinder covered the majority of the image without touching its edges (as determined visually). This is shown schematically in Figure 85.

Figure 85. A schematic demonstration of the position of the black dot in the PR-670 spectroradiometer's viewfinder focused on an image during measurements (labeled regions: surround, displayed image, measured region).


Table 32. L*a*b* values of the 6 images under different surround (background) settings.

Image RGB values       (255,255,255)  (200,200,200)  (150,150,150)  (100,100,100)  (50,50,50)  (0,0,0)
L* (bg 50,50,50)            99.67         84.02          65.03          44.38        21.55       1.90
L* (bg 100,100,100)         99.64         84.03          65.06          44.38        21.53       1.89
L* (bg 150,150,150)         99.67         84.02          65.08          44.32        21.50       1.90
L* (bg 200,200,200)         99.54         84.02          65.08          44.36        21.52       1.88
L* (bg 250,250,250)         99.53         84.02          65.06          44.39        21.54       1.87
DL*                          0.15          0.01           0.06           0.07         0.05       0.03
a* (bg 50,50,50)             0.49          0.63           0.46           0.40        -0.25       0.14
a* (bg 100,100,100)          0.48          0.65           0.46           0.33        -0.20       0.17
a* (bg 150,150,150)          0.49          0.64           0.47           0.37        -0.25       0.17
a* (bg 200,200,200)          0.49          0.67           0.44           0.31        -0.20       0.13
a* (bg 250,250,250)          0.42          0.65           0.42           0.34        -0.25       0.12
Da*                          0.07          0.04           0.06           0.09         0.05       0.05
b* (bg 50,50,50)             0.17          0.08          -0.18           0.02         0.02      -0.75
b* (bg 100,100,100)          0.18          0.12           0.02           0.09         0.05      -0.77
b* (bg 150,150,150)          0.30          0.12          -0.01           0.11         0.10      -0.76
b* (bg 200,200,200)          0.28          0.12           0.02           0.16         0.15      -0.76
b* (bg 250,250,250)          0.35          0.14           0.02           0.15         0.16      -0.73
Db*                          0.18          0.06           0.20           0.14         0.14       0.04

The difference between the minimum and maximum L*, a* and b* values for each image was calculated across the different surrounds. The maximum difference in L*a*b* values was found to be less than 0.20, which indicates that the effect of the surround on the colorimetric values of images displayed on the EIZO monitor is minimal.


5.3.3.2 Generation of Texture Images with Similar L*a*b* Values

Ten woolen samples were scanned using an Epson XL10000 flatbed scanner with a resolution of 300 dpi, using no color management, and texture images were thus generated in TIFF format. The 10 texture images in sRGB color space were transformed to XYZ color space and subsequently to CIELAB color space. The converted mean, maximum and minimum L*a*b* values of each image are shown in Table 33.
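A minimal sketch of this sRGB to XYZ to L*a*b* conversion is given below, assuming 8-bit sRGB input, the standard sRGB matrix and the D65 white point; the pixel values shown are arbitrary examples rather than data from the scanned images.

import numpy as np

M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])
XN, YN, ZN = 95.047, 100.0, 108.883                  # D65 reference white

def srgb_to_lab(rgb8):
    rgb = rgb8.astype(float) / 255.0
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = 100.0 * lin @ M_SRGB_TO_XYZ.T              # scaled so the white point has Y = 100
    f = lambda t: np.where(t > (6/29) ** 3, np.cbrt(t), t / (3 * (6/29) ** 2) + 4/29)
    fx, fy, fz = f(xyz[..., 0] / XN), f(xyz[..., 1] / YN), f(xyz[..., 2] / ZN)
    return np.stack([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)], axis=-1)

pixels = np.array([[255, 255, 255], [230, 228, 215]], dtype=np.uint8)   # example pixels
print(np.round(srgb_to_lab(pixels), 2))              # per-image mean/max/min follow the same route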

In order to investigate the effect of texture on the perceived whiteness of object colors, the mean L*a*b* values of each image were adjusted to the same value. In one approach, the minimum mean L*a*b* values among the 10 scanned texture images were used to normalize the L*a*b* values of all images. In other words, the weight coefficients 89.32/L*mean_i, -1.01/a*mean_i and 4.69/b*mean_i (i = 1, ..., 10) were applied to transform the L*a*b* values of each image.

The L*a*b* values of the 10 normalized images, displayed in the center of the EIZO monitor, were then measured using the PR-670 spectroradiometer; the results are shown in Table 34.


Table 33. The mean, maximum and minimum L*a*b* values of the 10 scanned texture images after the sRGB - XYZ - L*a*b* transformation.

Images      1      2      3      4      5      6      7      8      9      10
L*mean    89.96  89.32  89.49  90.45  90.20  90.95  90.17  90.61  90.54  89.43
L*max     96.77  96.05  96.82  95.48  97.95  97.50  96.35  98.25  97.27  96.54
L*min     74.56  67.80  72.13  78.00  78.93  79.59  71.11  79.03  78.70  75.99
a*mean    -1.14  -1.21  -1.23  -1.11  -1.14  -1.01  -1.17  -1.08  -1.09  -1.27
a*max      0.71   3.09   1.61   0.87   1.03   1.68   1.42   2.10   1.61   2.70
a*min     -3.57  -3.70  -3.64  -3.17  -2.92  -2.60  -3.59  -2.69  -2.69  -4.31
b*mean     4.81   5.21   5.07   4.88   5.10   4.86   4.82   4.92   4.69   5.19
b*max     13.87  14.19  15.16  14.24  14.40  14.09  14.41  12.77  13.79  13.72
b*min      1.07   0.50   1.15   1.20   0.90   0.95   1.06   0.60   0.88   0.74

Table 34. L*a*b* values of the 10 normalized images, with the minimum, maximum, mean and range of L*a*b* differences.

Normalized Images   1      2      3      4      5      6      7      8      9      10
L*                80.67  81.34  81.11  80.97  81.08  80.98  81.54  80.49  81.32  80.74
a*                -0.22  -0.26  -0.44  -0.20  -0.30  -0.37  -0.26  -0.26  -0.34   0.05
b*                10.42  10.27   9.90  10.47  10.47  10.69  10.27  10.82  10.21  10.74

        MIN     MAX     MEAN    DIFF.
L*     80.49   81.54   81.02    1.05
a*     -0.44    0.05   -0.26    0.49
b*      9.90   10.82   10.43    0.93


The L*a*b* differences were unacceptable; thus, an alternative approach based on normalizing the XYZ values of each image was considered. Table 35 shows the XYZ values of the 10 scanned texture images after the transformation from sRGB to XYZ.

Table 35. The mean, maximum and minimum XYZ values of the 10 scanned texture images after the sRGB-to-XYZ transformation.

Images 1 2 3 4 5 6 7 8 9 10

Xmean 72.24 70.34 71.29 73.19 72.24 74.14 72.24 73.19 73.19 71.29

Xmax 86.50 85.55 87.45 83.64 90.30 88.40 86.50 90.30 88.40 86.50

Xmin 43.72 35.17 40.87 50.38 51.33 52.28 38.97 51.33 51.33 46.57

Ymean 76.00 75.00 75.00 77.00 77.00 78.00 77.00 78.00 78.00 75.00

Ymax 92.00 90.00 92.00 89.00 95.00 94.00 91.00 96.00 93.00 91.00

Ymin 48.00 38.00 44.00 53.00 55.00 56.00 42.00 55.00 54.00 50.00

Zmean 77.32 75.14 75.14 77.32 77.32 79.50 77.32 78.41 78.41 75.14

Zmax 96.92 94.74 98.01 93.65 101.28 99.10 96.92 103.46 99.10 95.83

Zmin 41.38 33.76 37.03 44.65 45.74 46.83 35.94 49.01 45.74 42.47

A similar normalization approach to that used for the L*a*b* values was applied, where the weight coefficients 70.34/Xmean_i, 75.00/Ymean_i and 75.14/Zmean_i (i = 1, ..., 10) were used to normalize the XYZ values of all images. The XYZ values of the 10 normalized images, displayed in the center of the EIZO monitor, were measured using the PR-670 spectroradiometer; the results are shown in Table 36.
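The normalization itself reduces to scaling each image's XYZ channels by a per-channel weight; a minimal sketch, using the target means quoted above and a random placeholder image in place of a scanned texture, is shown below.

import numpy as np

target_mean = np.array([70.34, 75.00, 75.14])        # target X, Y, Z means from Table 35

def normalize_image_xyz(xyz_image):
    """Scale an HxWx3 XYZ image so its channel means equal target_mean."""
    weights = target_mean / xyz_image.reshape(-1, 3).mean(axis=0)
    return xyz_image * weights                        # broadcast per-channel weights

# Hypothetical 4x4 "image" with arbitrary XYZ values, just to exercise the function.
demo = np.random.default_rng(2).uniform(40, 100, size=(4, 4, 3))
print(normalize_image_xyz(demo).reshape(-1, 3).mean(axis=0))   # ~ [70.34, 75.00, 75.14]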


Table 36. L*a*b* values of the 10 normalized images, with the minimum, maximum, mean and range of L*a*b* differences.

Normalized Images   1      2      3      4      5      6      7      8      9      10
L*                80.53  80.51  80.50  80.58  80.38  80.40  80.47  80.39  80.24  80.62
a*                -0.23  -0.11  -0.36  -0.24  -0.24  -0.23  -0.14  -0.21  -0.22   0.05
b*                 9.72   9.58   9.46   9.72   9.66   9.82   9.75   9.85   9.96   9.77

        MIN     MAX     MEAN    DIFF.
L*     80.24   80.62   80.46    0.38
a*     -0.36    0.05   -0.19    0.41
b*      9.46    9.96    9.73    0.49

The differences between the minimum and maximum L*a*b* values among the normalized images, shown in Table 36, are 0.38, 0.41 and 0.49 respectively; these were acceptable for the purpose of this study. An approximate representation of the 10 normalized images, based on the L*a*b* values measured with the PR-670 spectroradiometer, is given in Figure 86.


Figure 86. The 10 normalized images with the approximate L*a*b* values displayed on an EIZO monitor.

An examination of the uniformity of the EIZO monitor indicated that the center of the screen is the most uniform region. As shown in Figure 87, the EIZO monitor was divided into 5 × 6

blocks where blocks 8, 9, 10, 11, 14, 15, 16, 17 were found to be the most uniform.

One of the 10 normalized images was randomly chosen and then displayed in positions 8, 9,

10, 11, 14, 15, 16 and 17, and its colorimetric values were measured using a PR-670

spectroradiometer. The L*a*b* values of this block in various positions are shown in Table

37.


Figure 87. Investigation of the uniformity of the EIZO monitor.

Table 37. L*a*b* values of a target image displayed at block positions 8, 9, 10, 11, 14, 15, 16 and 17 on the EIZO monitor.

Display Position     8      9      10     11     14     15     16     17
L*                 79.98  80.28  80.25  80.03  79.99  80.34  80.35  80.01
a*                 -0.29  -0.26  -0.47  -0.48  -0.54  -0.58  -0.63  -0.71
b*                  9.33   9.28   9.48   9.35   9.65   9.49   9.51   9.56

        MIN     MAX     MEAN    DIFF.
L*     79.98   80.35   80.15    0.37
a*     -0.71   -0.26   -0.50    0.45
b*      9.28    9.65    9.45    0.37


While the overall difference in L*a*b* values across the blocks shown is less than 0.5, the difference in lightness within the central region (blocks 9, 10, 15 and 16) is smaller than 0.1, which indicates that these differences may be neglected.

5.3.3.3 Generated Anchor (reference) Images

A reference white textile sample (AATCC standard optically brightened white with L* ~ 100)

was scanned with an Epson XL10000, at a resolution of 300 dpi and without utilizing the

color management software.

Twelve anchor images were then obtained by normalizing the Y value of the scanned

AATCC standard. The samples were displayed on the center of the EIZO monitor and

measured using a PR-670 spectroradiometer. Table 38a-b shows two sets of normalized reference samples in two L*a*b* ranges. In Table 38a, the L* was changed from 67-99, while

in Table 38b a narrower range of 77-84, based on L* of actual woolen samples, was selected.


Table 38. L*a*b* values of the normalized reference images (anchors).

(a) Normalized Anchors    1      2      3      4      5      6      7      8      9      10     11     12
L*                      67.98  69.87  71.69  73.54  75.45  77.39  79.33  81.34  83.34  85.36  98.08  98.11
a*                       0.60   0.59   0.62   0.67   0.66   0.70   0.73   0.69   0.74   0.79   0.83   0.85
b*                       1.90   1.96   2.00   2.02   2.05   2.10   2.07   2.20   2.21   2.19   2.38   2.36

(b) Normalized Anchors    1      2      3      4      5      6      7      8      9      10     11     12
L*                      77.16  77.63  78.12  79.09  79.60  80.08  80.60  81.08  81.58  82.07  82.55  83.07
a*                       0.75   0.70   0.75   0.70   0.76   0.76   0.70   0.76   0.69   0.75   0.76   0.78
b*                       1.99   1.98   1.99   2.01   2.04   2.03   2.08   2.07   2.11   2.08   2.03   2.13

5.3.3.4 Preliminary Design of Experiment

A 21.3" EIZO ColorEdge CG211 monitor with 17.04 × 12.78" width and height dimensions was selected for the display of samples in this study. The display screen was divided into 5 ×

6 blocks, with a block size of 2.84” × 2.55” (W × H).

The visual assessment was divided into two parts. In the first part, the observer was asked to rank the 10 "normalized" texture images in terms of perceived whiteness. However, due to the stated issues with the uniformity of the EIZO monitor, ideally only blocks 9, 10, 15 and 16 should be used to display the texture images.

Figure 88. Display arrangement of the normalized images in the center of the monitor.

The position and size of images were examined and it was found that 9 images could be displayed in three rows with the last image shown in the middle of the fourth row as shown in Figure 88.


Table 39. L*a*b* values of the normalized images displayed in the center of the monitor and measured with the PR-670 spectroradiometer.

Normalized Images   1      2      3      4      5      6      7      8      9      10
L*                80.63  80.58  80.55  80.44  80.48  80.50  80.57  80.49  80.56  80.53
a*                -0.05  -0.27  -0.17  -0.07  -0.37  -0.11  -0.17  -0.29  -0.12  -0.41
b*                 9.79   9.75  10.05   9.95   9.99   9.76  10.11   9.83  10.12  10.04

        MIN     MAX     MEAN    DIFF.
L*     80.44   80.63   80.53    0.20
a*     -0.41   -0.05   -0.20    0.36
b*      9.75   10.12    9.94    0.37

For the images shown in Figure 88, the weight coefficients 70.34/Xmean_i, 75.00/Ymean_i and 75.14/Zmean_i (i = 1, ..., 9) were applied to adjust the XYZ values of the first 9 images, and the weight coefficients 70.34/Xmean_10, 74.19/Ymean_10 and 75.14/Zmean_10 were applied to adjust the XYZ values of the 10th image in the fourth row. The L*a*b* values were then measured using the PR-670 spectroradiometer and are shown in Table 39.


Figure 89. (a) Two anchors with L* values of 70 and 98, respectively (left to right), displayed above the texture image and measured with the PR-670 spectroradiometer; (b) three anchor samples with L* values of 75, 80 and 85, respectively (left to right), displayed above the texture image and measured with the PR-670 spectroradiometer.


The second part of the visual assessment involved displaying each of the normalized images

between blocks 15 and 16 with the reference samples shown on the top row in blocks 9 and

10. In order to select suitable anchor pairs (references) one approach considered was to

display two images with L* values of 70 and 98 respectively, as measured with the

spectroradiometer. Another approach involved displaying three images with L* values of 75,

80 and 85 respectively, again measured with the spectroradiometer, as shown in Figure 89a-

b.

5.3.3.5 Adjustment of Whiteness of Displayed Images

The whiteness and tint values of the 10 normalized samples were calculated from the CIE whiteness index and tint index shown in Equations 20 and 21, respectively. As shown in Table 40, the whiteness values of the 10 normalized images ranged from 4.80 to 6.83, which is far outside the acceptable "white" range. In order to improve the whiteness of the displayed images, and bring them within the CIE whiteness range, the XYZ values of the images were once again readjusted.

The AATCC standard was scanned using the same method used for the 10 woolen samples, and its XYZ values (90.63, 95.11 and 98.40, respectively) were measured with the PR-670 spectroradiometer. The corresponding L*a*b* values of the sample (98.08, 0.83 and 2.38, respectively), as well as its whiteness and tint values (84.66 and -2.275), were calculated, which shows that the sample is within the CIE whiteness boundary.
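For reference, the sketch below computes the CIE whiteness and tint indices from measured XYZ values, assuming the standard CIE formulation for illuminant D65 and the 10-degree observer (the exact white-point chromaticities used in the thesis may differ slightly); applied to the AATCC values quoted above, it returns approximately 84.6 and -2.2, consistent with the figures cited.

import numpy as np

XN10, YN10 = 0.3138, 0.3310                          # approx. D65/10-deg white chromaticity

def cie_whiteness_tint(X, Y, Z):
    x = X / (X + Y + Z)
    y = Y / (X + Y + Z)
    W = Y + 800 * (XN10 - x) + 1700 * (YN10 - y)     # CIE whiteness index
    Tw = 900 * (XN10 - x) - 650 * (YN10 - y)         # CIE tint index (10-degree form)
    return W, Tw

# Example: the measured AATCC standard values quoted above.
print([round(v, 2) for v in cie_whiteness_tint(90.63, 95.11, 98.40)])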


Table 40. Whiteness and tint values of the 10 normalized images based on measured XYZ values.

Normalized Images   1      2      3      4      5      6      7      8      9      10
X                 54.79  54.62  54.61  54.45  54.41  54.55  54.63  54.46  54.64  54.47
Y                 57.81  57.72  57.67  57.46  57.54  57.58  57.69  57.56  57.68  57.62
Z                 51.73  51.68  51.34  51.24  51.27  51.54  51.30  51.45  51.28  51.30
Whiteness          6.77   6.83   5.20   5.50   5.31   6.62   4.89   6.17   4.80   5.22
Tint              -4.12  -3.70  -3.94  -4.13  -3.66  -4.03  -4.05  -3.74  -4.14  -3.60

5.3.3.6 Generation of A New Set of Images based on the AATCC Standard

In order to improve the whiteness of the 10 texture images, it was first decided to generate a series of images based on the standard AATCC sample with lightness values ranging from 85 to 97 in intervals of 1. Because the lightness of the scanned AATCC sample is 98.08, which is very close to 100, each selected lightness value was divided by the mean lightness of the scanned AATCC standard sample image, that is, Li/Lmean_AATCC (i = 85, 86, ..., 97). Images with the adjusted L* values were then displayed in the center of the EIZO monitor and measured with the spectroradiometer, and their XYZ, L*a*b*, whiteness and tint values were obtained, as shown in Table 41.


Table 41. XYZ, L*a*b*, whiteness and tint values of the 13 normalized AATCC standard samples.

Selected L*    85      86      87      88      89      90
X            41.16   43.83   46.68   49.73   52.91   56.29
Y            43.20   45.99   48.99   52.18   55.51   59.08
Z            44.54   47.44   50.54   53.83   57.34   60.94
L*           71.69   73.54   75.45   77.39   79.33   81.34
a*            0.62    0.67    0.66    0.70    0.73    0.69
b*            2.00    2.02    2.05    2.10    2.07    2.20
CIE WI       31.83   34.62   37.62   40.81   44.56   47.71
CIE Tint     -2.29   -2.29   -2.29   -2.29   -2.32   -2.29

Selected L*    91      92      93      94      95      96      97
X            59.85   63.60   69.59   73.87   78.36   83.24   88.26
Y            62.80   66.72   72.85   77.36   82.07   87.18   92.46
Z            64.81   68.94   75.60   80.25   85.16   90.53   95.94
L*           83.34   85.36   88.38   90.49   92.61   94.81   97.01
a*            0.74    0.79    1.13    1.09    1.10    1.12    1.10
b*            2.21    2.19    1.99    2.06    2.08    2.08    2.17
CIE WI       51.60   55.94   63.34   67.68   72.39   77.75   82.86
CIE Tint     -2.35   -2.39   -2.76   -2.69   -2.69   -2.67   -2.60

5.3.3.7 Generation of a New Set of Texture Images with Improved Whiteness

The 13 adjusted AATCC standard sample images were displayed in the center of the EIZO monitor, and three observers were asked to judge the whiteness appearance of the displayed images. It was found that the whiteness of the AATCC image adjusted by 97/Lmean_AATCC was acceptable. The XYZ values of this AATCC image were 88.26, 92.46 and 95.94, respectively, and these were used to convert the XYZ values of the 10 scanned texture images to a new set with higher whiteness, based on a normalization involving X = 88.26/Xmean_i, Y = 92.46/Ymean_i and Z = 95.94/Zmean_i (i = 1, 2, ..., 10). The 10 normalized texture images with significantly improved whiteness were then displayed in the central blocks of the EIZO monitor, as shown in Figure 90.

Figure 90. Display of the 10 normalized woolen texture images with improved whiteness.

The role of the observer was to rank the samples in terms of perceived whiteness from 1

(least white) to 10 (most white).


Table 42 shows the XYZ, L*a*b*, whiteness and tint values of the 10 converted woolen texture images, which were displayed in the center of the EIZO monitor and measured with the PR-670 spectroradiometer.

Table 42. XYZ, L*a*b*, CIE WI and tint values of the 10 converted woolen texture images.

Normalized Images   1      2      3      4      5      6      7
X                 88.52  89.38  88.73  88.92  88.46  88.48  88.64
Y                 92.74  93.45  93.00  93.17  92.64  92.68  92.84
Z                 96.72  97.62  96.93  97.15  96.16  96.27  96.57
L*                97.12  97.41  97.23  97.30  97.08  97.10  97.16
a*                 1.09   1.43   1.02   1.07   1.16   1.12   1.14
b*                 1.84   1.74   1.88   1.86   2.15   2.10   2.01
Whiteness         84.64  85.35  84.73  84.90  83.21  83.33  83.91
Tint              -2.45  -2.45  -2.39  -2.39  -2.66  -2.58  -2.61

Normalized Images   8      9      10      MIN     MAX     MEAN    DIFF.
X                 88.42  88.36  89.13      -       -       -       -
Y                 92.65  92.61  93.39      -       -       -       -
Z                 96.24  95.97  97.05      -       -       -       -
L*                97.09  97.07  97.39    97.07   97.41   97.19    0.34
a*                 1.07   1.03   1.08     1.02    1.43    1.12    0.41
b*                 2.10   2.25   2.08     1.74    2.25    2.00    0.51
Whiteness         83.88  82.35  84.12    82.35   85.35   84.04    3.00
Tint              -2.44  -2.83  -2.48    -2.83   -2.39   -2.53    0.45

It was found that the scanned AATCC standard sample could not suitably be used as an anchor, since the highest whiteness value of the adjusted image was 82, which was below the whiteness of the 10 normalized texture images. Three solid light grey anchor images with CIE WI of 88, 99 and 108, respectively, were therefore generated and displayed in blocks 9 and 10 with a gap of approximately 1 inch between the blocks. The 10 normalized texture images were displayed below the anchors within blocks 15 and 16, as shown in Figure 91.

Figure 91. Visual assessment of 10 normalized texture images using three anchors.

The role of the observer was to assign a whiteness value to the texture sample in relation to a set of arbitrary whiteness values given to each of the anchor samples above. It was found experimentally that the selection of three anchors instead of two provided a better means of narrowing down the WI of the sample image; this was tested with a select group of observers before starting the larger study. The improvement may have been due to the narrower range of WI for a given sample, which provided a more convenient means of setting values.

5.3.3.8 Visual Assessment

A panel of 26 observers (13F, 13M, average age: 27) with normal color vision took part in this experiment. The distance from the EIZO monitor to the observer was approximately an arm's length, or 68 cm. The observers were asked to wear a grey lab coat to minimize the effect of surround, and external lighting was excluded during the visual assessment.


Figure 92. Visual assessment of normalized textured white image using three anchor samples on top.

Three anchors displayed in blocks 9 and 10 were used to aid observers in their magnitude estimation of the perceived whiteness of textured images which were shown directly below the reference images as shown in Figure 92.

In Figure 92, the references on the top row were designed to have similar tint values, verified experimentally as -2.63, -2.49 and -2.11 (left to right). Their CIE WI, however, ranged from 92 (left) to 96 (middle) to 101 (right). During the visual assessment, the reference samples were assigned arbitrary whiteness values of 1, 5 and 10 (from left to right). Each of the ten normalized textured white images was displayed randomly and separately below the reference samples. The textured images were modified to have similar lightness and tint values, with L* ranging from 97.07 to 97.41 and tint values ranging from -2.83 to -2.39. The observer was able to press the down arrow key on the keyboard to change the texture image shown below the reference samples, and was asked to determine the magnitude of the perceived whiteness of the test sample based on the values given to the reference samples above.

The lightness (L*) of the grey background was set, separately, to three different levels (30, 50 and 72) and the assessments were repeated under each lightness level in random order. The grey background was bordered by a 0.12 inch strip of white, obtained by setting the RGB values of the pixels to their maximum, to enable white-point adaptation. The interface designed for the visual assessment is shown in Figure 93.


Figure 93. The interface designed for the visual assessment of textured images.

As a supplementary component of the study, all texture images were displayed simultaneously in four rows within blocks 9, 10, 15, 16, 21 and 22, as shown in Figure 94. An alphabetic character, shown in the right-hand corner of each image in random order, was used to identify the samples. In order to keep the surround similar for all samples, two of the 10 samples were displayed twice to create four rows each containing three samples.


Observers were not given the option to move images during the assessment, to eliminate issues associated with variations in the uniformity of the monitor. Image attributes were also adjusted spatially to produce similar lightness, and variations in L* among all images were smaller than 0.3.

Figure 94. Twelve texture images displayed mostly within 6 blocks.


The observer was asked to rank images in terms of perceived whiteness from most white to least white as shown in Figure 95. The grey background was also changed to one of the three set lightness values for all assessments to determine the effect of surround on visual judgments.

Figure 95. The interface designed for the visual assessment of textured images.


In order to minimize the memory effect in the evaluation of rank orders, images were assigned randomized letters in each assessment. The same random order was used for all observers in a given trial. The instructions given to observers for the visual assessments are shown in Appendix C.

5.3.3.9 Results and Discussions

Each observer gave a relative perceived whiteness value, from 1 to 10, to each of the textured images under the three different backgrounds, and the mean relative perceived whiteness (RPW) of the samples from the 26 observers was obtained.

Table 43. Ordered sample number and relative perceived whiteness of 10 texture images displayed under three backgrounds with different lightness.

Background          (Most white  →  Least white)
L*=30   OSN    10     2     6     3     1     9     8     7     4     5
        RPW    8.26  7.82  6.98  6.70  6.52  6.43  6.43  6.16  5.43  5.14
L*=50   OSN    10     2     6     8     9     3     7     1     4     5
        RPW    8.27  7.29  6.99  6.61  6.56  6.38  6.13  5.79  5.14  4.85
L*=72   OSN    10     2     6     8     9     3     7     1     4     5
        RPW    8.11  7.12  6.62  6.53  6.42  6.11  5.79  5.38  4.56  4.52


Ordered sample number (OSN) and relative perceived whiteness (RPW) of samples obtained under the three backgrounds of different lightness are shown in Table 43. In Table 43, the letter tag displayed with each image has been converted back to the sample numbers used previously. For L* of 50 and 72, the ordered sample number from most white to least white is the same; however, the mean perceived whiteness at L* of 50 is slightly higher than that at L* of 72, as shown in Table 44.

Table 44. Mean, minimum and maximum relative perceived whiteness in three different backgrounds.

Background   L*=30   L*=50   L*=72
Mean          6.59    6.40    6.12
Min           5.14    4.85    4.52
Max           8.26    8.27    8.11
Diff.         3.12    3.42    3.59

The visual system processes information at many levels of sophistication. At the retina, there is low-level vision, including light adaptation and the center-surround receptive fields of ganglion cells. At the other extreme is high-level vision, which includes cognitive processes that incorporate knowledge about objects, materials and scenes. Mid-level vision is simply an ill-defined region between low and high. The representations and the processing in the middle stages are commonly thought to involve surfaces, contours, grouping and so on. Lightness perception seems to involve all three levels of processing.

Moreover, Tables 43 and 44 also indicate that the retina does not simply record light intensities; in other words, perceived brightness is not equal to the actual physical intensity of the stimulus. Rather, retinal responses depend on the surrounding context (the center-surround receptive field) involved in low-level vision, even though these texture images physically have the same lightness, as shown in Appendix D. The darker the background, the brighter the texture images are perceived to be. Figure 96 gives a schematic explanation of the effect of lateral inhibition on the perception of the texture images displayed against a darker background.

Figure 96a depicts the visual mechanism for the image selected from Figure 95 under a darker background and five receptors that are stimulated by different regions of the dark context. Receptor A is stimulated by light from the main image, which appears to have higher whiteness compared to that seen against the lighter backgrounds. The surrounding receptors B, C, D and E are located on the four dark frames. It is important to note that all images were adjusted to have similar lightness and chroma. Figure 96b shows a three-dimensional view of the grid and the receptors. Let us assume that our perception of the lightness and whiteness at A is determined by the response of its bipolar cell. It would be more accurate to use ganglion cells, because they are the neurons that send signals out of the retina, but to simplify things for the purpose of this example, let us focus on the bipolar cells. Figure 96b shows the bipolar cells and the lateral inhibition, indicated by the arrows, affecting receptor A's bipolar cell. The output of the bipolar cell that receives signals from receptor A is determined as follows.


Figure 96. The mechanism of lateral inhibition affecting the perceived whiteness of the twelve texture images.


The size of the bipolar cell response depends on how much stimulation it receives from its receptor and on the amount by which this response is decreased by the lateral inhibition it receives from its neighboring cells. Let us assume that light falling on A generates a response of 100 units in its bipolar cell. This would be the response of the bipolar cell if no inhibition were present. The amount of inhibition is determined by making the following assumption: the lateral inhibition sent by each receptor's bipolar cell is one-tenth of that receptor's response. Because receptors B, C, D and E receive the same illumination as receptor A, their response is also 100. Taking one-tenth of this, receptors B and E are each taken to be responsible for 10 units of lateral inhibition, because of the wider dark stripe around them compared with the stripes around receptors C and D. The C and D receptors are each responsible for 2 units of lateral inhibition, due to the narrower dark stripes. To calculate the response of A's bipolar cell, we start with A's initial response of 100 and subtract the inhibition sent from the other four bipolar cells, as shown in Figure 96c. Likewise, under the lighter backgrounds of L* of 50 and 72, the lateral inhibition sent from the four receptors B, C, D and E is higher than that at L* of 30. It is assumed that the C and D receptors are each responsible for 4 and 6 units of lateral inhibition, respectively, due to the lighter background; therefore the perceived whiteness computed for this image is 72 and 68, as shown in Figure 96c.
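As an illustration only (not part of the experimental software of this thesis), the bookkeeping above can be written as a few lines of Python; the response of 100 units and the inhibition values per receptor are the assumed quantities stated in the text.

```python
def bipolar_response(initial, inhibitions):
    """Initial receptor response minus the lateral inhibition from its neighbors."""
    return initial - sum(inhibitions)

# Dark background (L* = 30): B and E contribute 10 units each, C and D 2 units each.
dark_bg = bipolar_response(100, [10, 10, 2, 2])     # -> 76

# Lighter backgrounds: C and D are assumed to contribute 4 and then 6 units each.
mid_bg = bipolar_response(100, [10, 10, 4, 4])      # -> 72
light_bg = bipolar_response(100, [10, 10, 6, 6])    # -> 68
```

The values 72 and 68 reproduce the perceived whiteness computed for the lighter backgrounds in Figure 96c.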

The above phenomenon and the results shown in Tables 43 and 44 and Figure 96 also indicate that:

1. Photoreceptor sensitivity depends on the average/ambient light intensity, due to light adaptation.

2. Retinal ganglion cell responses depend on the difference between the light intensity in the center and that in the immediate surround.

In the low-level visual processing described above, neurons stimulated by a bright background inhibit the less stimulated neurons of the inner square, whereas a dark background does not exert such an inhibitory effect, so the square appears brighter.

There is an important transformation that takes place in visual perception between the analog representation of the visual image in the retina and surfaces and objects as they appear to us. The retinal image represents numerous different brightness levels and colors at a very large number of different points in space, with no explicit representation of which points belong together. But the image we perceive consists of a much smaller number of surfaces and objects that are segregated from the background and from each other. There are probably many stages of this transformation, but one stage of this process is known to be of great importance: visual segmentation.

Segmentation is a process of parsing the different surfaces in an image, as well as grouping together the parts of the same surface that are separated from each other in the image by another, occluding surface. Segmentation therefore involves resolving the depth relationships between surfaces and objects in a scene. At this stage local edges between segregating regions are detected [188]. The texture segregation process is supposed to signal discontinuities in space through a local analysis in which only elements lying in the neighborhood of each element are involved [189]. The ability to segregate two regions depends on the compactness of the texture surfaces due to the density of their elements [190].

Chubb et al. showed that texture contrast was strongly diminished when the surround texture had high contrast and both textures had the same spatial frequency and orientation. This result can be taken as evidence, in texture segregation, for interactions involving a larger surround beyond the neighborhood of each texture element. These interactions suppress both the contrast and the contour perception of a texture image.

It has long been recognized that early visual mechanisms are insensitive to a uniform area of luminance or wavelength, while extracting discontinuities such as lines and edges. It is also true that we usually see the brightness and color of a uniform area as surrounded by edges [191, 192].


At the surface representation level, one representative process is the figure-ground phenomenon. In viewing an object in a scene, one sees this object, that is, one sees a figure against a background, called the ground. In other words, as we examine the world, we divide it into two regions: the figure, which is the object we are examining; and the ground, which is the background to this figure. The figure is characterized by asymmetries in contours (which belong only to the figure), by differences in surface appearance between figure and ground, by depth stratification (the figure appearing in a front plane), and by completion of the background behind the occluding figure.

With respect to the ten texture images, in the first three perceived as most white, i.e. full-cardigan, half-cardigan front and racking effect, the stripes with the same vertical texture pairs are grouped together as figures in front and the remaining vertical stripes with the same texture are grouped as ground in the back. The brightness contrasts in the figure are arranged such that the stripes grouped together look darker in the back than they do when they are viewed in front. This is because of the enhanced effect of brightness contrast across borders that define a figure and on the regions to which such borders are attached. This is one of many illustrations of the deep consequences of figure-ground assignment. Moreover, another consequence of the importance of figure-ground is that people remember the shapes of figures, not grounds. Figure-ground assignment is a special case of a more general problem in vision, the assignment of border ownership.

Consider the image in Figure 97, a region chosen from the computerized image after mathematical transformation. The border between stripe A (perceived as the front) and stripe B (perceived as the back) involves a difference in luminance; the border includes stretches of boundary that are lighter on one side and darker on the other, stretches where the colors are reversed, and stretches where there is no local visual information to signal the boundary. Nevertheless, we perceive a smooth, continuous occlusion boundary at the edge of each image. It is as if the visual system possesses the capability of segmenting regions of the image based on a local textural property.

Figure 97. Segmentation from stripe A (front) and stripe B (back).

Assimilation is a perceived change in color/brightness in one part of a picture in the direction of the color/brightness of another part or other parts of the picture. Festinger et al. explained the occurrence of contrast or assimilation in terms of foreground/background perception [193]. If part of a stimulus is considered foreground, contrast occurs between this part and the background, but if the same part of the stimulus is seen as background, assimilation occurs within this area. This implies that the occurrence of assimilation does not precede figure-ground separation, or at least is dependent upon this separation to some extent. In the extreme case this means that assimilation only occurs after figure-ground separation, which is unlikely to be a retinal phenomenon. The neon spreading effect [194] also points to the idea that figure-ground analysis, or probably a splitting of material and illumination properties, must have occurred before assimilation takes place, but not the reverse. In the context of this study, texture images are perceived in the first stage as figure-ground separation before assimilation happens.

In this study, texture images are adjusted to have similar tristimulus values, as verified by measurements using a PR-670 spectroradiometer. The mathematical transformation of the raw image makes neighboring points of the computerized image inhibit each other, which makes the very faint edges in the original texture much sharper. This is why lateral inhibition is important: it makes edges stand out. A contour is an edge, and it is these contours that are important guides for our eye movements. When we view a scene, we tend to move our eyes to regions of the scene that are rich in contours. The changes in visual stimulation that can create contours can be based on changes in luminance, color, texture, motion and even depth [195]. For example, in one of the textures examined, the luminance changes through most of the surface, and edges are simply areas where the change in luminance is greater than that in the rest of the image. To be able to perceive the entire texture, our perceptual systems need some way to extract these changes and determine how to proceed across the texture.

Given the major premise that all texture images have similar tristimulus values, after the segmentation of the figure from the ground, and in addition to the border effects among stripes illustrated above, the textures with more regions belonging to the figure layer and more borders will be perceived as lighter, and vice versa.

However, the surface-based representations involved in judgments of textured regions in the stimulus are complicated and are determined by high-level visual processing [196]. Using region-based as well as edge-based mechanisms, it may be possible to separate these processes and analyze this visual phenomenon.

From a psychological perspective, this experiment shows that:

1. Perceived whiteness is not equal to the actual physical intensity of the stimulus. Rather, perceived whiteness depends on the surface reflectance, independent of the illumination conditions.

2. The visual stimulus that reaches the eyes depends both on the illumination level and the reflectance of the surfaces.

3. Retinal responses depend on the local average image intensity. The darker the background, the smaller the average intensity of the image.

4. Whiteness judgments can be analyzed by middle and high-level perceptual mechanisms (e.g. figure-ground, edge-based).

5. Texture can be analyzed by its features, e.g. roughness, density, directionality, etc., and a predictive model based on a combination of these features could be established to relate texture to perceptual results.


6. Visual Perception of Texture

In Chapter 5, the visual experiment pertaining to the assessment of object whiteness and an initial analysis of the results from a psychological point of view were presented. The visual perception of textured objects involves figure-ground and edge-based mechanisms that are considered middle and high-level cognitive processes. These are not yet fully understood, owing to their complexity and because the underlying perceptual experience cannot be observed directly.

Despite these unknowns and limitations in elucidating the mechanism of visual perception from a psychological perspective, the image analysis described in the following sections was used to provide a framework for further exploration of the problem.

6.1 Investigation of the Effects of Texture on Color Perception

Textures are common distinguishing features used in the segmentation and characterization of images. Texture segmentation and classification have been explored extensively via image processing and by computer vision scientists since the 1980s. It is common to characterize textures in a statistical manner using various first and second order statistics. Previous work has shown that texture influences the observer's ability to perceive color differences. By considering the frequency content of the texture patterns in relation to the color frequency response of the human eye, an attempt was made to explain the results of the perceptual experiments in a more quantitative manner and to lay a foundation for improved segmentation in computer vision applications.


Surface texture in materials can lead to directionality of reflected light and result in gloss, opacity and luster which affect perceived as well as measured color. The geometry of the instrument has a direct impact on the color measurement of specimens with such physical characteristics and may indeed create non-reproducible results [197]. In terms of psychophysical assessments, while various standardization organizations note that any directional specimens shall be oriented in the same direction during assessments [157], there is no standardized procedure that specifies the orientation of textured samples for viewing under standard assessment procedures.

The majority of spectrophotometric measurements involving textile samples use devices that are equipped with integrating spheres that simulate diffuse illumination. The light reflected from the sample is collected near the normal, and the instrument can be set to either include or exclude the small residual specular component of the reflected light. Due to their surface characteristics, in most cases textiles are measured with the specular component included, whereby surface reflection at different directions (via rotation) is averaged to account for the effect of texture when viewed from different angles. Since this mode of measurement may be a contributing factor to the discrepancy between observers' perception of the object and the measured lightness of textured samples, it was decided to measure textured samples at the same orientation that was seen by the observer.

Another important factor that needs to be taken into account when attempting to correlate perceived and measured colorimetric values of objects is the size of the stimulus. In textiles, a 10 degree field of view is often selected in measurements to enable an accurate representation of the often large colored material. In the field of imaging, however, a 2 degree field of view is preferred to represent the much smaller stimulus size. A change in the field of view size is a well-known cause of metamerism, but it can also affect the strength of the correlation between measured and perceived values.

In terms of measurements, spectrophotometers are routinely used to provide a high degree of accuracy and repeatability in measuring uniform surfaces. However, the degree of inter- and intra-instrument variability is significantly affected by variations in the geometry of illumination and detection, especially for textured surfaces. Main issues affecting the accuracy of measurements and the imaging system are described in the following sections.

6.1.1 Experimental Preparation

The experimental data from chapter 1 for the wool fabric samples, shown in Figure 34 and described in Table 5, which were used in the study of perceived lightness, were utilized in this section. The correlation in Figure 37 reflects the distribution shown in Table 5.

6.1.2 Frequency Content and Color Perception

As discussed earlier in this thesis, experimental work indicated that texture affects the perception of lightness for textured samples. In this part of the study, the aim was to examine the incorporation of the optical transfer function of the eye and its interaction with the frequency content of a given pattern. Monochrome patterns can be generated using the L-band of an image after transforming it to CIELAB space, or the Y band of the transformed sRGB image, as shown elsewhere [71]. Colored textures are to be considered in subsequent work. In this section, the perceptual effect of lightness of textured samples was investigated. Similar investigations have been reported in recent years [71, 72].

Since the colorimetric properties of samples were considered, the image attributes were transformed into a colorimetric space. For this study, the images were obtained with a Nikon D90 camera using a light box with controlled illumination. The output of the camera was in nominal sRGB values scaled between 0 and 255. While it is well known that the transformation from the camera sensor space to sRGB color space is not exact, it was assumed that for the purpose of this investigation it was sufficiently accurate. The sRGB data was then transformed to CIELAB space using the standard transformation with a D65 white point [151]. Since only lightness was of concern, the L-band was considered. However, since the L-band involves a nonlinear transformation, it was also decided to use a linear approximation to lightness in the form of the Y-band of the YIQ space [71, 198].

Since the Y-band is based on a linear transformation, it may better represent the initial response of the cones to the textured patterns compared to the L-band. On the other hand, the L-band represents the response of the visual system after the signals have been processed by the neural layers in the retina and the visual cortex, so it may be more representative of the overall experience of the observer when viewing the image.
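For readers who wish to reproduce these two bands, a minimal Python sketch is given below. It assumes the standard IEC sRGB decoding, the sRGB-to-XYZ matrix for a D65 white point, and the usual YIQ luma weights (0.299, 0.587, 0.114); it is not the exact code used in this work.

```python
import numpy as np

def srgb_to_lab_L(rgb):
    """CIELAB L* (D65 white) from sRGB values scaled to [0, 1]."""
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124, 0.3576, 0.1805],      # sRGB -> XYZ (D65)
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    Y = (lin @ M.T)[..., 1]                      # relative luminance, Yn = 1
    f = np.where(Y > 0.008856, np.cbrt(Y), 7.787 * Y + 16.0 / 116.0)
    return 116.0 * f - 16.0

def rgb_to_luma_Y(rgb):
    """Linear luma approximation (the Y band of the YIQ space)."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
```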


Let an image f(x, y) have a Fourier transform F(u, v), as illustrated in Eq. 37. The modulation transfer function (MTF) of the human eye is given by H(u, v). Note that H(u, v) is defined in cycles per degree. For this analysis, cycles per distance were converted based on the viewing distance, which for samples used in this study was about 600 mm. There are several quantities in the image that relate to lightness, including the average value (DC) and the total image power, which includes the power distributed over all frequencies.

The total power of the image, P_T, is given by Equation (48):

P_T = ∫∫ |F(u, v)|² du dv    (48)

The power influenced by the texture, P_H, is defined using Equation (49):

P_H = ∫∫ |H(u, v) F(u, v)|² du dv    (49)

where both integrals extend over the entire frequency plane. Since previous studies show that a uniform field is perceived to be lighter than a textured field [72], the effect of the interaction of texture with the human MTF must be to reduce brightness. The eye is not sensitive to variations at very low frequencies, nor at very high frequencies. This means

H(u, v) ≈ 0, for small u and v, and for large u and v

A one-dimensional plot of the MTF, described in Equation 52, is shown in Figure 98 [151]. A radially symmetric extension of this function in two dimensions was used here, although there is evidence that the eye is slightly less sensitive to diagonal frequencies than to horizontal and vertical frequencies [151].

Figure 98. MTF of human eye.

It may be conjectured that increased image power in the range of human sensitivity would reduce the apparent lightness. A function, g(P_T, P_H), that models this property should thus decrease monotonically as P_H increases. The obvious choices include a ratio with P_H in the denominator and a difference where P_H is subtracted from a constant. We will consider the difference in energy/power as shown in Equation (50):


D_P = P_T − P_H    (50)

There are many variations on these basic forms using logarithms and exponentials. However, the basic idea can be tested by using a rank correlation test to determine whether increasing frequency content is negatively correlated with perceived lightness as well as whiteness.
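A hedged sketch of how Equations 48-50 can be evaluated for a digital image is given below. The eye MTF uses the parameter values quoted later for Equation 52, and pixels_per_degree is an assumed conversion factor that depends on viewing distance and image resolution; neither is prescribed in this section.

```python
import numpy as np

def mtf_eye(freq_cpd):
    """Approximate human-eye MTF (the form of Equation 52, with the assumed
    parameter values A = 2.5, alpha = 0.0192, w0 = 8.772, beta = 1.1)."""
    w = np.asarray(freq_cpd, dtype=float)
    return 2.5 * (0.0192 + w / 8.772) * np.exp(-(w / 8.772) ** 1.1)

def texture_powers(grey, pixels_per_degree):
    """Total spectral power P_T, eye-weighted power P_H and their difference
    D_P (Equations 48-50) for a 2-D grey-level image."""
    F = np.fft.fftshift(np.fft.fft2(grey))
    power = np.abs(F) ** 2
    # spatial-frequency grid converted to cycles per degree
    fy = np.fft.fftshift(np.fft.fftfreq(grey.shape[0])) * pixels_per_degree
    fx = np.fft.fftshift(np.fft.fftfreq(grey.shape[1])) * pixels_per_degree
    FY, FX = np.meshgrid(fy, fx, indexing="ij")
    radial = np.hypot(FX, FY)                       # radially symmetric extension
    P_T = power.sum()
    P_H = (power * mtf_eye(radial) ** 2).sum()      # |H F|^2 = H^2 |F|^2
    return P_T, P_H, P_T - P_H
```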

A digital image was required to analyze the effect of texture, since the various measures of texture require statistics related to the variation of pixels. However, there is no precise transformation from the recorded sRGB values of the camera to colorimetric values such as CIEXYZ or CIELAB. The formulas for making these transformations are well known [198] but approximate, since the sensitivities of the RGB filters in the color filter array of the camera are not within a linear transformation of the CIE color matching functions. This problem is illustrated in Figure 99, where the total energy of the digital image is plotted against the L* value of the sample obtained from the spectrophotometer. Here it can be seen that the general trend is correct but the correlation coefficient is only 0.5673. The relationship between perceived lightness and measured L*, by comparison, shows a correlation coefficient of 0.9118, as noted in Figure 37.


Figure 99. Total Energy of the L-band of Camera vs. L* measured by spectrophotometer.

While there has been much work to develop transformations from 3-band camera sensor values to colorimetric values, the commonly used methods are currently the "best" ones available, and it was decided to consider customized approaches. It has been noted that the use of three bands places a serious limitation on the accuracy of such transformations. In addition, previous work on the development of such transformations was based on uniform color patches. In the case of uniform color patches, a camera and a spectrophotometer roughly measure the same thing.

However, in the measurement of textured objects, where variations are in three dimensions, the correlation of measurements based on a camera and a spectrophotometer is less clear.

In this study, however, an attempt was made to duplicate the measurement conditions of the two devices as closely as possible. The spectrophotometer measured a 1.5" circular area with diffuse lighting. The sensor was at 8° from the normal. The distance from the sample to the sensor was about 10". The image of the sample was obtained using a DigiEye light box with simulated D65 diffuse illumination. A Nikon D90 camera was placed on a parallel plane about 20" above the sample. All samples, which were created from identical yarns, were captured in the same high-resolution mode, as shown in Figure 33. Each of the ten 2"×2" (630×630 pixel) samples was extracted, and a 512×512 pixel image of each sample was used for the computations. Recognizing that there were several influential challenges, it was decided to continue to explore various relationships based on the available data from the study.

6.1.3 Results of Preliminary Study

The conjecture tested was that the difference in the total energy and the weighted energy of Eq. 50 would be positively related to the perceived lightness. There was some uncertainty concerning the use of the Y component in the linear space or the L* component in the CIELAB space. To test the effect of the component on the prediction, the correlation was obtained in both spaces. It was found that while all parameters (the DC value, the total energy and the difference) were positively correlated with perceived lightness, none was as highly correlated with perceived lightness as the L* measure of the spectrophotometer. The results are shown in Table 45. The linear trend for the difference in total energy and weighted energy is shown in Figure 100.


Since one measure of lightness was the ranking of the samples in perceived order of lightness, the rank correlations of this ranking with L* and with the various parameters derived from the images were also checked. The rank correlation of L* with perceived lightness was only 0.6848; however, none of the image parameters approached this value. The most correlated ranking was that of the eye-weighted power, at 0.4545. In the limited region noted earlier, where L* was particularly poor at ranking, the DC parameter did very well, interchanging only the first and second ranks.

Table 45. Correlations for imaging texture parameters; PL represents Perceived Lightness

Variable   L*        PL (Perceived Lightness)
L*         100.00    0.9118
DC(L)      0.4186    0.3539
TP(L)      0.5673    0.3665
DP(L)      0.6669    0.5269
DC(Y)      0.3852    0.3205
TP(Y)      0.5427    0.3422
DP(Y)      0.6488    0.5091


Figure 100. Difference of total energy and weighted energy vs. perceived lightness.

6.1.4 Summary and Conclusions

At this point in this work, the idea of using the MTF of the eye with the power frequency distribution has yielded interesting results. There is clearly room for improvement over using the L* value as an indicator of perceived lightness, and much work remains along this line to establish a definitive relationship between the perceived qualities and the quantities computed from digital images. Areas of subsequent investigation include using better models for the transformation from camera RGB to a colorimetric space, a better understanding of the effect of texture on the spectrophotometric measurement of the samples, and perhaps the use of fitting parameters in the relationship between the DC value, total energy and weighted energy of the image data. There is also existing work on the frequency response of the eye in both the luminance and chrominance channels, which has been used in image coding. It has been noted that the sensitivity of the eye is not radially symmetric in 2-D: we are less sensitive to frequencies at 45 degrees than to those at 0 or 90 degrees. It is possible that this could be used to help improve the prediction of perceived color differences for textured objects.

6.2 Influence of Texture on Perceived Whiteness of Objects

The analysis done in chapter 6.1 was based on the Fourier transform and the modulation transfer function from the field of imaging. With the rapid growth of digital technologies, an increasing number of advances in imaging techniques, including the development of textures, are being examined. Different sectors of industry aim to generate textures that invoke feelings such as complexity, comfort and warmth [199]. In order to address this challenge, a number of steps have been examined, which include identifying the texture characteristics that affect the human perception of visual complexity, developing indices to define these characteristics, and then correlating the perception of visual complexity with the texture characteristics.

6.2.1 Generating White Textured Images

Irrespective of image sharpness, the texture characteristics obtained from photographs taken with a Nikon D90 camera under well-controlled illumination conditions, as described in chapter 6.1, did not correlate with perceived lightness as well as was expected. The 10 woolen knitted textures described in chapter 1 were therefore scanned using an Epson XL10000 scanner at a resolution of 300 dpi, with no color management profile applied. All texture images were generated in TIFF format and are presented in Figure 101. The samples were then visually examined as described in the following section.

Figure 101. Scanned images of the woolen knitted samples representing various textures examined.


6.2.2 Visual Assessment of Whiteness

As explained in chapter 1, ten knitted woolen textures with different surface features were generated using identical bleached yarns, and their perceived whiteness (PW) was ranked by 25 naïve observers (13 F and 12 M, mean age 26). A rank of 10 was given to the most white and a rank of 1 to the least white sample, under simulated daylight illumination with a color temperature of 6500 K and an intensity of 1400 lux in the middle of a calibrated SPLIII viewing booth (X-Rite). For the analysis of results, a weighting factor of 1 was assigned to rank 10 (most white) and 0.1 to rank 1 (least white), with the other samples receiving weighting factors between 0.1 and 1 according to rank, at an interval of 0.1. The weighted probability rank for each texture was then calculated by summing the products of the rank and the corresponding weighting factors. Results are shown in Figure 36.
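The aggregation described above can be sketched as follows. The mapping of rank r to a weighting factor of r/10 follows the description in the text, while summing rank times weight over observers is one possible reading of that description and is shown here only as an assumption.

```python
def weighted_probability_rank(ranks_for_texture):
    """ranks_for_texture: the rank (1-10) that each observer gave to one texture.
    Rank r is assumed to carry a weighting factor of r / 10 (1.0 for rank 10,
    0.1 for rank 1, step 0.1); rank x weight is summed over observers."""
    return sum(r * (r / 10.0) for r in ranks_for_texture)
```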

6.2.3 Features based on Texture Analysis

The human perception of texture may be identified with five characteristics, namely regularity, understandability, roughness, directionality and density [199]. In this study, the regularity and understandability properties were ignored since, for the purpose of the analysis here, samples can be considered to be uniform. The relationship between perceived whiteness (PW), based on weighted probability (wP), and texture was therefore examined using only three features: roughness, directionality and density. This is described in the following sections.

6.2.3.1 Transformation to Grey Images

Earlier research has shown that the human visual system is composed of a luminance and a chrominance component [71]. In this study, all samples were knitted using identical woolen yarns; therefore the difference between samples in terms of chrominance was not significant. This was verified instrumentally using spectrophotometric measurements, and the difference in chroma between samples was found to be less than 0.2 [152]. Nonetheless, the scanned samples were transformed to grey images to isolate the effect of luminance on the perception of texture for the samples examined in this study.

The scanned images were transformed into grey-level images using Equation 51 [200], as was described in section 6.1:

Y = 0.299R + 0.587G + 0.114B (51)

Here Y is the value of the luminance channel in the YIQ color space, and R, G and B are the values of each pixel of each scanned image in the RGB color space. Texture features of the images were thus extracted from their corresponding grey-level images.
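A minimal sketch of this conversion, applying the weights of Equation 51 to a scanned image array, is shown below; it mirrors the luma computation used in section 6.1 and is illustrative only.

```python
import numpy as np

def to_grey_Y(rgb_image):
    """Grey-level (Y) image from a scanned RGB image, per Equation 51."""
    rgb = np.asarray(rgb_image, dtype=float)
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
```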


6.2.3.2 Roughness

The GLCM (grey level co-occurrence matrix) has been used to describe the homogeneity characteristics of samples and to assess the roughness of a texture [199]. However, this method is not suitable here since the textures have low homogeneity; in other words, the GLCM and the image homogeneity change significantly when the distance and the direction between the co-occurrence points are altered. In this study the texton size and the texture directions differ considerably among samples, as shown in Figure 95, and thus the homogeneity derived from the GLCM method could not be used to compare their roughness. Nonetheless, according to the GLCM method, two factors influence the roughness property: a change in frequency, and a change in scope (magnitude). These two factors can be represented separately by the frequency and the power of the image spectrum obtained using a Fourier transform. The roughness of a texture can thus be described as a sum of the spectral power of the image, using appropriate weighting ratios based on frequency. Combining this feature with the modulation transfer function (MTF) accounts for the visual sensitivity to different frequency signals, as indicated in Equation 52 and Equations 37, 38 and 39.

H(ω) = A (α + ω/ω₀) exp[−(ω/ω₀)^β]    (52)

where ω is the circular frequency measured in cycles per degree, which can be converted to cycles per mm. In Equation 52, A = 2.5, α = 0.0192, ω₀ = 8.772, and β = 1.1. The roughness of the scanned texture was calculated using Equation 53:


R = ∫∫ P(u, v) · H(u, v) du dv    (53)

where R is the roughness of a texture. The roughness values obtained for the various images, together with the weighted probability (wP) of each sample appearing as most white, are shown in Table 46.

Table 46. Roughness of the samples tested together with weighted probability (wP) of sample appearing as most white.

Texture               Roughness     wP
Jersey Face              74.64     99.7
Jersey Back              77.22     77.5
2×3 Rib                 107.24     67.8
Racking Effect           80.91     60.3
1×1 Rib                 130.13     51.4
Bias Effect              80.15     50.1
Zigzag Effect            77.09     48.2
Half Cardigan Face      142.92     38.9
Full Cardigan           112.68     37.1
Half Cardigan Back      121.79     22.1

Samples in Table 46 are listed from the most white (first row) to the least white, based on the mean whiteness ranks obtained in Chapter 1. While the trend is not universal, it can be seen that samples perceived as whiter, such as Jersey face and Jersey back, exhibit lower roughness values than those perceived as least white, such as the Half and Full Cardigan samples. This indicates that roughness has an inverse effect on perceived whiteness.
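As an illustration, the roughness of Equation 53 can be computed from a grey-level image as sketched below. The MTF parameters are those given for Equation 52, and the pixels-per-degree conversion is an assumption that depends on the scan resolution and viewing distance; the sketch is not the thesis implementation.

```python
import numpy as np

def roughness(grey, pixels_per_degree):
    """Roughness as MTF-weighted spectral power (Equation 53)."""
    F = np.fft.fftshift(np.fft.fft2(grey))
    P = np.abs(F) ** 2                                   # spectral power P(u, v)
    fy = np.fft.fftshift(np.fft.fftfreq(grey.shape[0])) * pixels_per_degree
    fx = np.fft.fftshift(np.fft.fftfreq(grey.shape[1])) * pixels_per_degree
    FY, FX = np.meshgrid(fy, fx, indexing="ij")
    w = np.hypot(FX, FY)                                 # radial frequency, cycles/degree
    H = 2.5 * (0.0192 + w / 8.772) * np.exp(-(w / 8.772) ** 1.1)   # Equation 52
    return float((P * H).sum())
```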


6.2.3.3 Directionality

Directionality is defined in terms of the orientation of image edges, because image edges have a significant influence on visual perception [199]. Directionality describes the main orientation of the edges in the different directions of an image.

The line-likeness and orientation of edges were used to characterize the directionality of a texture after detecting the edges of the texture image using the Canny algorithm [199], and the directional co-occurrence matrix (DCM) was used to calculate the line-likeness as shown in Equation 54:

F_dir = [Σᵢⁿ Σⱼⁿ P_Dd(i, j) · cos((i − j) · 2π/n)] / [Σᵢⁿ Σⱼⁿ P_Dd(i, j)]    (54)

where P_Dd is the n×n local directional co-occurrence matrix of points at distance d. This matrix is defined as the relative frequency with which two neighboring cells, separated by a distance d along the edge direction, occur on the image. Variables i and j are the direction codes in the matrix P_Dd. The disadvantage of the DCM method, however, is that only 8 directions can be used to depict the directionality of an image. With respect to the Fourier transform, a texture with a directionality angle of θ will have a high value of P(u, v) in the perpendicular direction (90° + θ). The histogram of angular spectral power can be used to describe the direction distribution of a texture [199].


The angle parameter A used in Equation 55 ranges from 0° to 179° because of the conjugate symmetry property of the Fourier transform; angles outside the [0°, 179°] range are represented by the conjugate angle. The angular Fourier spectral power is calculated using Equation 55 [199]:

P(A) = Σ_{θ=A} P(r, θ),    A = 0°, 1°, 2°, …, 179°    (55)

where A is the angle in the Fourier spectrum and P(A) is the sum of the spectral power in direction A.

The directionality of a texture is a relative parameter describing the line-likeness property; the angular spectral power is normalized as shown in Equation 56:

W(A) = P(A) / Σ_{A=0°}^{179°} P(A)    (56)

Assuming that the angle α has the highest probability (maximum W), Equation 57 is obtained by combining Equations 55 and 56 with Equation 54.

Dir = Σ_θ W(θ) · cos|θ − α|    (57)

where Dir is the directionality of a texture and α is the main orientation of the texture. The directionality of the textures in this study was calculated using Equation 57 and the results, together with the weighted probability of each sample appearing as most white (wP), are shown in Table 47.


Table 47. Directionality of the samples tested together with weighted probability of sample appearing as most white (wP).

Texture               Dir     wP
Jersey Face           0.80   99.7
Jersey Back           0.70   77.5
2×3 Rib               0.82   67.8
Racking Effect        0.71   60.3
1×1 Rib               0.82   51.4
Bias Effect           0.84   50.1
Zigzag Effect         0.61   48.2
Half Cardigan Face    0.86   38.9
Full Cardigan         0.90   37.1
Half Cardigan Back    0.65   22.1

The directionality feature is used to describe the line-likeness properties of a texture, and directionality values close to 1 indicate that a texture is more uniform. The Zigzag effect, Half Cardigan back and Jersey back samples exhibit lower directionality values, which means the orientation (directionality) in these textures is more varied.
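A sketch of how Equations 55-57 can be evaluated numerically is shown below. The one-degree binning of the angular spectrum is an implementation choice, not a value prescribed in the text, and the function is illustrative rather than the code used in this work.

```python
import numpy as np

def directionality(grey):
    """Directionality of a texture from the angular distribution of Fourier
    spectral power (Equations 55-57), binned at 1-degree resolution."""
    F = np.fft.fftshift(np.fft.fft2(grey))
    P = np.abs(F) ** 2
    h, w = grey.shape
    yy, xx = np.mgrid[0:h, 0:w]
    yy = yy - h // 2
    xx = xx - w // 2
    # angle of each frequency sample, folded into [0, 180) by conjugate symmetry
    angle = np.degrees(np.arctan2(yy, xx)) % 180.0
    bins = np.floor(angle).astype(int)                                   # 0 .. 179
    P_A = np.bincount(bins.ravel(), weights=P.ravel(), minlength=180)    # Eq. 55
    W_A = P_A / P_A.sum()                                                # Eq. 56
    theta = np.arange(180.0)
    alpha = theta[np.argmax(W_A)]                                        # main orientation
    return float(np.sum(W_A * np.cos(np.radians(np.abs(theta - alpha)))))  # Eq. 57
```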

6.2.3.4 Density

The perceived density of a texture refers to the density distribution of a texture in visual assessments [199]. Humans are more sensitive to variations at edges or in regions of color change/interest (ROI). A texture, originating from various textons, can be regarded as a combination of different edges, and the density of the edges in an image can therefore be used to represent the density of a texture. The density of a texture is determined by the ratio between the number of pixels in the extracted edges and the number of pixels in the whole texture, as shown in Equation 58 [199]:

Den = N_edges / N_img    (58)

where Den is the density of a texture, N_edges is the number of pixels in the extracted edges of a texture, and N_img is the number of pixels in the texture. The range of the Den index is (0, 1), and a value of 0 would mean that all the extracted edges of a texture have the same direction, which is not realistic due to texture roughness.
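A minimal sketch of Equation 58 is given below; it assumes the Canny implementation available in scikit-image and an arbitrary smoothing parameter sigma, neither of which is specified in the text.

```python
import numpy as np
from skimage import feature   # scikit-image Canny edge detector

def density(grey, sigma=1.0):
    """Density (Equation 58): fraction of pixels belonging to Canny edges.
    sigma is an assumed smoothing parameter, not a value from the text."""
    edges = feature.canny(np.asarray(grey, dtype=float), sigma=sigma)
    return edges.sum() / edges.size               # N_edges / N_img
```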

The density of the scanned textures was calculated using Equation 58, and the results obtained are shown in Table 48.

Table 48. Density of the scanned textures together with weighted probability of sample appearing as most white (wP).

Texture               Den     wP
Jersey Face           0.26   99.7
Jersey Back           0.26   77.5
2×3 Rib               0.22   67.8
Racking Effect        0.21   60.3
1×1 Rib               0.22   51.4
Bias Effect           0.19   50.1
Zigzag Effect         0.20   48.2
Half Cardigan Face    0.17   38.9
Full Cardigan         0.18   37.1
Half Cardigan Back    0.20   22.1


The density of the images decreases from the most white to the least white samples, which means the number of edge points in the whiter samples is larger than in the less white samples. An increase in the density of a texture indicates a more compact distribution of the edges in the image; in other words, the texture visually looks more condensed. The density property of a texture thus appears to have a positive effect on the perceived whiteness of the tested samples.

6.2.4 The Influence of Texture-determined Factors on Perceived Whiteness

The wP (weighted probability of a sample appearing as most white) examined in this study was affected by three factors: the roughness, directionality and density of the corresponding grey-level images. The analysis of the influence of these attributes on wP was done using two approaches: examination of the effect of each individual factor on wP, and examination of the effect of the combined features on wP. These are briefly described in the following sections.

The relationship between wP and each of the stated texture features is shown individually in Figures 102-104.

The roughness factor has a negative effect on perceived whiteness: increasing roughness decreases the perceived whiteness of the object, as shown in Figure 102. The R² is only 0.383, while the correlation coefficient (r) was -0.619. The negative correlation coefficient confirms the inverse role of roughness on wP, while its magnitude implies that the individual differences in wP are not accidental and are related to variability in roughness. It should be pointed out, however, that the number of samples tested in this study was limited, and thus additional results would likely be needed to draw firm conclusions.

The effect of directionality on the perceived whiteness of samples was not clear, as shown in Figure 103, and the trend line is nearly horizontal with R² = 0.002. Directionality is a relative parameter indicating the directional distribution of a texture. Thus it seems appropriate to examine the effect of directionality in combination with the other texture features.

Density had a positive effect on the wP of the tested samples, with R² = 0.723 and r = 0.850. The results show that density plays the most important role in the wP of samples (see Figure 104). Increasing density increases the number of edges in a given area, and thus the texture appears more compact and condensed.

Figure 102. The relationship between roughness and wP.


Figure 103. The relationship between directionality and wP.

Figure 104. The relationship between density and wP.

As indicated, directionality is a relative parameter that represents the directional distribution of a texture, and the parameter (Dir) is obtained from the normalized angular distribution of the spectral power of the image. Roughness (R), on the other hand, is an absolute parameter related to the 'power' perception of a texture based on variations in frequency. In order to determine the potential role of directionality on wP, directionality was multiplied by roughness to generate a combined factor denoted 'directional power'. The role of 'directional power' on wP was considered to be additive.

All texture features were combined and a regression analysis was carried out to determine the relationship among the parameters. The data used in the regression are shown in Table 49.

Table 49. Results of the regression analysis.

Textures              R        Dir    R×Dir     Den     wP
TUVCS                 62.34    0.69    43.01   43.07   100
Jersey Face           74.64    0.80    59.80    0.26    99.7
Jersey Back           77.22    0.70    53.97    0.26    77.5
2×3 Rib              107.24    0.82    87.55    0.22    67.8
Racking Effect        80.90    0.71    57.45    0.11    60.3
1×1 Rib              130.13    0.82   106.08    0.22    51.4
Bias Effect           80.15    0.67    67.26    0.19    50.1
Zigzag Effect         77.09    0.61    47.08    0.20    48.2
Half Cardigan Face   142.92    0.86   123.00    0.17    38.9
Full Cardigan        112.68    0.90   101.52    0.18    37.1
Half Cardigan Back   121.79    0.65    78.83    0.20    22.1


In Table 49, R, R×Dir and Den are regarded as the independent variables, and wP is considered to be the dependent variable. The data were fitted by regression using Matlab, and the model shown in Equation 59 was obtained.

wP = c - 1.02 ⋅ R + 0.94 ⋅ (R ⋅ Dir) + 585.58⋅ Den (59)

where c is a constant.

The F value and R², used to assess the significance of the model parameters, were 19.13 and 0.91 respectively, indicating that the model is reliable. The p-value of each independent variable was also examined, since the variables are not normalized to the same magnitude. All the p-values obtained were smaller than 0.02, which means each of the independent variables plays an important role in the model.
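For illustration, Equation 59 can be re-fitted by ordinary least squares from the ten texture rows of Table 49 (TUVCS excluded), as sketched below. Because some tabulated entries may differ from the data actually used, the coefficients obtained need not reproduce the reported values exactly.

```python
import numpy as np

# Feature values transcribed from Table 49 (ten texture samples).
R   = np.array([ 74.64,  77.22, 107.24,  80.90, 130.13,  80.15,  77.09, 142.92, 112.68, 121.79])
Dir = np.array([  0.80,   0.70,   0.82,   0.71,   0.82,   0.67,   0.61,   0.86,   0.90,   0.65])
Den = np.array([  0.26,   0.26,   0.22,   0.11,   0.22,   0.19,   0.20,   0.17,   0.18,   0.20])
wP  = np.array([ 99.7,   77.5,   67.8,   60.3,   51.4,   50.1,   48.2,   38.9,   37.1,   22.1])

# Design matrix: intercept c, R, R*Dir, Den  (wP = c + b1*R + b2*(R*Dir) + b3*Den)
X = np.column_stack([np.ones_like(R), R, R * Dir, Den])
coef, *_ = np.linalg.lstsq(X, wP, rcond=None)
print("c, b_R, b_RDir, b_Den =", coef)
```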

The coefficient of 'directional power' (R×Dir) is positive, which indicates that 'directional power' is directly related to wP, and increasing 'directional power' will increase the perceived whiteness. Directionality is a parameter that assesses the line-likeness property within the image, and a directionality value close to 1 indicates that the texture is more uniform. Increasing the directional power results in an increase in texture uniformity, which is shown to improve the perceived whiteness.

6.2.5 Conclusions

The influence of texture on perceived whiteness was investigated in this section. Ten woolen samples with different textures, representing common surface patterns in textiles, were developed using identical yarns. The knitted samples were then assessed by a panel of color normal observers. Visual assessment results were obtained, and the individual as well as combined effects of three factors, namely roughness, directionality and density, on perceived whiteness were examined using simulation and regression methods.

It was shown that the roughness feature of a texture has a negative effect on perceived whiteness: increasing roughness resulted in decreased perceived whiteness. Directionality is a parameter that is used to define the line-likeness property of a texture and thus the uniformity of the texture; it was found that increasing uniformity improved perceived whiteness. Density is related to the compactness of a texture and was also found to have a positive effect on perceived whiteness.

Using regression analysis, an initial model was developed based on the three texture features and perceived whiteness. While promising, the model is limited by the small number of textures examined as well as by the low R² values for the roughness and directionality factors versus perceived whiteness. Extended studies involving additional samples with a wider range of surface patterns could result in the development of improved models that would elucidate the relationship between perceived and measured whiteness of textured objects.

With respect to the influence of each individual factor on perceived whiteness, the Density factor was found to be the most significant parameter in determining the texture effect on perceived whiteness, based on the R² value obtained between density and perceived whiteness. In addition, the coefficients of the independent variables in Equation 59, R, (R×Dir) and Den, were -1.012, 0.944 and 585.58 respectively, again indicating that Density is the most important factor affecting perceived texture. The statistical analysis of Equation 59 is given in Appendix E.

The CIE whiteness index may be modified by adding a texture factor, T, which in this case is limited to the density factor for textile substrates. The data were modeled and the formulas shown in Equations 60 and 61 were generated.

WI_NCSU = CIE WI · 1 / (−0.246·T + 1.02)    (60)

In other words,

WI_NCSU = [Y + 800(x_n − x) + 1700(y_n − y)] · 1 / (−0.246·T + 1.02)    (61)

where x and y are the CIE chromaticity coordinates of the specimen and x_n and y_n are the chromaticity coordinates of the illuminant (0.313795 and 0.330972, respectively, for illuminant D65). T represents the texture effect, which here is confined to the density factor, with the boundary condition 0 < T ≤ 1. The definition of the density factor is shown in Equation 58. According to this definition, rough surfaces have low density and smoother surfaces have higher density; thus T has a direct relationship with the smoothness of the substrate. WI_NCSU and wP results for samples assessed under source D65 are shown in Table 50.
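A small sketch of Equations 60 and 61 is given below; the function name and argument layout are illustrative only, with the D65 chromaticity values taken from the text.

```python
def wi_ncsu(Y, x, y, T, xn=0.313795, yn=0.330972):
    """Texture-modified whiteness index of Equations 60-61.
    Y, x, y: luminance factor and chromaticity of the specimen;
    xn, yn: illuminant chromaticity (D65 values from the text);
    T: texture (density) factor, 0 < T <= 1."""
    cie_wi = Y + 800.0 * (xn - x) + 1700.0 * (yn - y)   # CIE whiteness index
    return cie_wi / (-0.246 * T + 1.02)                  # Equation 60
```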


Table 50. WINCSU and weighted probability (wP) of sample appearing as most white, normalized for TUVCS at 100, for illuminant D65.

Textile Substrates     CIE WI (D65)    wP (D65)    WI_NCSU
TUVCS                     122.47        100.00      128.76
Jersey Face                57.74         99.67       60.39
Jersey Back                56.76         77.50       59.37
2×3 Rib                    56.52         67.78       58.52
Racking Effect             54.34         60.28       56.12
1×1 Rib                    53.71         51.39       55.61
Bias Effect                52.71         50.14       54.16
Zigzag Effect              54.88         48.19       56.53
Half Cardigan Face         53.05         38.89       54.23
Full Cardigan              53.5          37.08       54.83
Half Cardigan Back         51.37         22.08       52.92

The R² coefficient for WI_NCSU, which incorporates the effect of texture, versus CIE WI for illuminant D65 is 0.99986, which is very good. The correlation between WI_NCSU and the weighted probability of a sample appearing as most white (wP) is 0.619, which is reasonable.

The experimental results obtained with different sources, U30 and A, as discussed in Chapter 2, were also used to examine the validity of Equation 61, since the significance values of perceived whiteness under U30, A and D65 are high. Table 51 shows the weighted probability of perceived whiteness under U30 and A illumination. Figure 35 shows the weighted probability of perceived whiteness under D65, and the data in Figure 35 were added to Table 51 as the predicted whiteness index.


Table 51. Weighted probability (wP) of sample appearing as most white and WINCSU under U30 and A.

Weighted probability of PW    wP (U30)    wP (A)    WI_NCSU
TUVCS                           100        100       128.76
Jersey Face                      82.82      78.72     99.70
Bias Effect                      75.65      73.73     50.10
Racking Effect                   67.95      68.45     60.30
2×3 Rib                          55.64      55.13     67.80
1×1 Rib                          52.81      54.62     51.40
Half Cardigan Face               48.34      50.26     38.90
Half Cardigan Back               45.75      46.93     22.10
Zigzag Effect                    45.38      44.36     48.20
Full Cardigan                    44.75      48.85     37.10
Jersey Back                      30.90      28.97     22.10

The R² values obtained between the weighted probability of a sample appearing as most white and WI_NCSU under U30 and A are 0.9022 and 0.8928, respectively.

Overall, the predictive model shown in Equation 61 is reasonably adequate even for the experimental results under U30 and A illumination, where the R² values are acceptable. Thus it seems that the effect of texture on perceived whiteness could be reasonably adequately modeled mathematically. However, the application of this equation would be restricted for several reasons. First and foremost, the number of samples used in this study was limited, which reduces the validity of the predicted model. Moreover, the neural processing of texture, depth and color signals is still not fully understood, and this is a considerable obstacle in developing accurate predictive models. The processing of texture, depth and color within the human brain should be further elucidated by designing appropriate psychological experiments. It is likely that different weighting coefficients might be needed for each of these parameters to improve the accuracy of the predictive models. It seems there is still a long way to go before accurate models that incorporate texture can be generated. Such efforts will require collaboration among scientists from different fields, from psychology and neuro-visual science to imaging and color science.


7. Conclusions and Future Work

The goal of this research was to investigate several factors affecting the perception and measurement of optically brightened white textiles with a view to determine whether the performance of the current CIE whiteness index can be improved. The following works were carried out to achieve this objective:

1. A panel of 10 woolen and 10 cotton samples was prepared. The knitted woolen fabrics representing varying levels of surface roughness were bleached using a commercial recipe containing sodium borohydride (SBH) and sodium bisulphate (SBS). To increase the level of whiteness attained, samples were simultaneously optically brightened with a commercially available fluorescent brightening agent, UVITEX, at nine different concentrations (0.1, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, and 2.0% o.w.f). The optimal conditions for bleaching, as well as the amount of brightening agent needed to generate fabrics of appropriate base whiteness, were examined using a panel of five expert visual assessors under controlled illumination and viewing conditions. A second set of woolen knitted samples was also obtained by first bleaching the woolen yarn under optimal bleaching and brightening conditions (1.25% o.w.f. FBA), followed by knitting the bleached yarns to generate different surface patterns. A different set of ten knitted cotton fabrics with different surface patterns was also generated from cotton yarns that were already bleached and brightened. The knitted samples represented common textile structures and covered a broad range of surface characteristics commonly employed in textile applications.


2. A SpectraLight III calibrated viewing booth (X-Rite) illuminated with filtered tungsten bulbs simulating the D65 illuminant was used to assess samples. Supplementary UV light was added to the simulated D65 illuminant during the assessments, denoted (D65+UV). A 0/45 illumination/viewing geometry was employed for the psychophysical assessments. The L* and whiteness values of the 20 woolen and cotton samples were obtained using an SF600X spectrophotometer. The relationship between the perceptual assessments and the CIE whiteness index of samples was then examined using cluster and weighted probability methods. Results showed that the correlation of perceived lightness and perceived whiteness against CIE whiteness index values was modest for the wool samples, with moderate L* ranging from 80 to 90, and poor for the cotton samples, with high L* ranging from 90 to 100, under source D65. The correlation between perceived whiteness rank and mean perceived lightness for the cotton and woolen samples under source D65 verified the well-known conclusion that whiteness is significantly affected by lightness.

3. A set of cotton fabric samples was whitened with C.I. Fluorescent Brightener 28 at 0%, 0.025%, 0.25% and 2.5% on weight of fabric (o.w.f.) concentrations, and four layers of fabric were mounted on 3×3 inch stiff cardboards. Spectrophotometric reflectance measurements were obtained from a PTFE plate and the FBA-treated samples using a Spectraflash SF600X reflectance spectrophotometer. The CIE WI of the FBA-treated substrates generally increased with the UV content of the light source for all substrates under the D65 and D75 illuminants. Generally, an increase in the amount of FBA employed, up to an optimal amount, directly correlates with increased absorption of UV light and emission of visible light.

4. The same samples and the same visual assessment protocol were employed in the examination of the effect of texture on perceived whiteness under sources U30 and A. The results showed poor correlations between perceived whiteness and the CIE whiteness index for the cotton (L*: 90 to 100) and woolen (L*: 80 to 90) samples under these sources. Perceptually, the most white brightened woolen sample (with L* around 80 to 90) was found to be the one with the smoothest surface under sources U30 and A. However, this relationship did not apply for optically brightened samples of high radiance, possibly because of the emission of light due to fluorescence. For the woolen samples, the difference in mean perceived lightness of objects between U30 and A was statistically significant, but this did not apply for the cotton samples.

5. The above observations also indicated that the CIE Whiteness Index does not accurately predict the colorimetric properties of fluorescent whites. While the amount of UV light incident on a sample can be adjusted in a spectrophotometer for the measurement of optically brightened substrates, the adjusted UV content may not correlate with the UV content available in calibrated viewing booths. The D65 and D75 daylight simulators in the calibrated viewing booth used in this study did not have sufficient UV compared to the standard illuminants, and the extent of the UV deficiency differed between the D65 and D75 sources. To determine the effect on perceptual assessment of fluorescent white objects, the amount of supplementary UV in the viewing booth was changed from 0 to 25, 50, 75 and 100% under both D65 and D75; the resulting variations in UV directly influenced radiometric and perceptual assessments of the optically brightened substrates. The results showed that the perceived whiteness of the samples improved with an increase in the UV content of the source in the booth, and higher perceived whiteness was obtained under D75 because of the higher total UV energy available under this source. The performance of the CIE Whiteness Index and the Uchida whiteness models was examined against the perceptual results; both showed modest correlations, with the CIE WI performing slightly better than the Uchida models under the D65 and D75 illumination conditions. In addition, the CIE WI obtained from measurements in the UV-calibrated mode (78% UV included in this case) showed the best agreement with perceptual assessments under approximately 75% supplementary UV in the viewing booth.
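The comparison of total UV energy between the D65 and D75 simulators amounts to integrating each measured SPD over the UV band. A minimal sketch, assuming hypothetical wavelength and spectral-power arrays from a spectroradiometer scan of the booth:

```python
import numpy as np

def uv_energy(wavelengths, spd, uv_cutoff=400.0):
    """Integrate a measured SPD below the UV cutoff (trapezoidal rule),
    giving a relative measure of the UV power of the source."""
    mask = wavelengths <= uv_cutoff
    return np.trapz(spd[mask], wavelengths[mask])

# e.g. uv_d65 = uv_energy(wl, spd_d65_sim); uv_d75 = uv_energy(wl, spd_d75_sim)
```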

6. In order to isolate the effect of texture on perceived whiteness, spatial uniformity and

whiteness boundary of an EIZO monitor were investigated to determine how best to

simulate the effect of different textures on the perceived whiteness of displayed images. Dividing the display surface into blocks and assessing the variability among them showed that blocks H2V3, H3V3 and H5V3 had relatively small color differences (< 0.2) against the reference block and were therefore used to display the samples. The whiteness boundary was determined by varying five components (red, green, blue, yellow and white); 27 out of 76 samples were considered white, with 50% or more of the observers providing positive white responses. Several methods, including polynomial regression and artificial neural networks, were tested for mapping selected images from RGB to device-independent (LAB/XYZ) color space in order to determine the device-dependent whiteness boundaries when displayed on the EIZO monitor. For the polynomial regression method, a 9×3 transformation matrix gave the best performance in mapping from RGB to XYZ, while for the artificial neural networks a two-hidden-layer, 7×7 architecture performed best for the RGB-to-XYZ mapping.
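The 9×3 matrix mentioned above corresponds to a least-squares fit from a nine-term polynomial expansion of RGB to XYZ. The sketch below illustrates that characterization step under the assumption of a common nine-term expansion (linear, cross and squared terms); the exact terms used in the study are not restated in this summary.

```python
import numpy as np

def expand9(rgb):
    """Nine-term polynomial expansion of normalized RGB, one row per patch."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.column_stack([r, g, b, r*g, g*b, r*b, r**2, g**2, b**2])

def fit_rgb_to_xyz(rgb, xyz):
    """Least-squares 9x3 matrix M such that expand9(rgb) @ M approximates xyz.

    rgb : (n, 3) displayed patch values; xyz : (n, 3) spectroradiometer readings.
    """
    M, *_ = np.linalg.lstsq(expand9(rgb), xyz, rcond=None)
    return M                      # shape (9, 3)

def predict_xyz(M, rgb):
    return expand9(rgb) @ M       # predicted XYZ for new RGB values
```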

7. Little CMS was incorporated into a simulated-texture software application created to convert device-independent color values to device-dependent values, so that texture images with similar whiteness could be displayed on the EIZO monitor to simulate the samples viewed in the booth. The accuracy of the generated monitor profile was examined using Eye-One Match 3; the mean color difference in L*a*b* color space between the CMS-generated images and those measured on the EIZO monitor was 1.83, which is relatively large for accurate display of images on the monitor. Four rendering intents were tested during the color space transformation by displaying ten scanned woolen samples on the EIZO monitor. The perceptual rendering intent produced the smallest average color difference; however, even this color difference was too large for faithful display of the texture images.
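The 1.83 figure is a mean CIELAB color difference. For reference, a ΔE*ab computation between a profile-predicted color and the color measured on the display (illustrative values only) can be written as:

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIELAB color difference Delta E*ab between two (L*, a*, b*) triples."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

# Illustrative values only:
predicted = (95.2, -0.4, 1.1)   # color requested through the CMS
measured  = (94.1,  0.2, 2.3)   # color measured on the EIZO monitor
print(delta_e_ab(predicted, measured))
```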

8. Following a series of test trials, the XYZ values of images with similar CIE whiteness and tint values were converted into a new set with higher whiteness using a normalization method. The new set of XYZ values, yielding higher whiteness indices for the 10 scanned texture images, was obtained from normalization factors of X = 88.26/X_mean,i, Y = 92.46/Y_mean,i and Z = 95.94/Z_mean,i, where i = 1, 2, ..., 10 indexes the scanned images. The XYZ values of the 10 scanned texture images were measured with a PR-670 spectroradiometer and used to calculate the corresponding whiteness. The new values were found to be acceptable for the use and display of the images on the EIZO monitor.
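A sketch of the normalization step is given below, under the assumption that the reported ratios act as per-image scale factors applied to every pixel's tristimulus values; the exact implementation is not restated in this summary.

```python
import numpy as np

# Target mean tristimulus values used in the normalization.
X_T, Y_T, Z_T = 88.26, 92.46, 95.94

def normalize_image_xyz(xyz_image):
    """Scale an image's X, Y and Z channels so that their means equal the targets.

    xyz_image : array of shape (height, width, 3) holding per-pixel X, Y, Z.
    """
    means = xyz_image.reshape(-1, 3).mean(axis=0)   # X_mean_i, Y_mean_i, Z_mean_i
    k = np.array([X_T, Y_T, Z_T]) / means           # per-channel scale factors
    return xyz_image * k                            # broadcast over all pixels
```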

9. In terms of the visual assessment protocol on the EIZO monitor, the scanned AATCC

standard sample could not be suitably used as an anchor since the highest whiteness value

obtained was even below the whiteness of the 10 normalized texture images. Three solid

whitish anchor images with CIE WI of 88, 99 and 108 respectively were thus generated.

In the first part of the visual assessment, observers judged the magnitude of the test sample's whiteness and provided a numerical value with the aid of the three anchors (assigned arbitrary whiteness values of 1, 5 and 10). A range of 12 white samples containing different textures, distributed within a region of the display considered spatially uniform, was then shown to observers, who were asked to rank the samples from the most white to the least white against three backgrounds of varying luminance factor.

10. The effect of the surround on the colorimetric values of images displayed in the center of the EIZO monitor was found to be minimal for solid samples with RGB values ranging from (250,250,250) to (0,0,0) in steps of (50,50,50). However, the visual assessments of the perceived whiteness of different textures against backgrounds with L* of 30, 50 and 72 showed that perceived whiteness under the relatively darker background differed from that under the lighter backgrounds.


11. Lateral inhibition and figure-ground theory, accompanied by the occurrence of contrast or assimilation and involving mid- and high-level perceptual mechanisms, are responsible for the effect of the background on perceived whiteness.

12. The modulation transfer function (MTF) of the eye, together with the power frequency distribution, was used to assess whether a definitive relationship could be established between the perceived qualities and quantities computed from digital images. Results showed that the DC value, the total energy and the difference between them were positively correlated with perceived lightness; however, none was as highly correlated with perceived lightness as the L* measured by the spectrophotometer.
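The DC value and total energy are standard quantities obtained from the two-dimensional spectrum of a grayscale image. The minimal sketch below omits the weighting by the eye's MTF used in the study, and reading "the difference" as the energy excluding the DC term is an assumption of this sketch.

```python
import numpy as np

def spectral_descriptors(gray):
    """DC value, total spectral energy and their difference for a grayscale image."""
    g = np.asarray(gray, float)
    F = np.fft.fft2(g)
    power = np.abs(F) ** 2
    dc = power[0, 0] / g.size        # energy at zero spatial frequency (mean level)
    total = power.sum() / g.size     # total energy (Parseval: sum of squared pixels)
    return dc, total, total - dc     # 'difference' read here as energy excluding DC
```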

13. Roughness, directionality and density texture features were established and correlated against the visual assessment results for the 10 scanned woolen samples. The three features were then combined in a predictive model of perceived whiteness; the performance of the model was found to be modest.
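In outline, the combined model is a multiple linear regression of perceived whiteness on the texture features. The sketch below uses a design matrix mirroring the variables listed for Equation 59 in Appendix E (a constant, R, R·Dir and Den); the exact form of that equation is assumed rather than quoted, and the input arrays are hypothetical.

```python
import numpy as np

def fit_whiteness_model(R, Dir, Den, W_visual):
    """Least-squares fit of perceived whiteness on texture features."""
    R, Dir, Den, W_visual = (np.asarray(v, float) for v in (R, Dir, Den, W_visual))
    A = np.column_stack([np.ones_like(R), R, R * Dir, Den])   # constant, R, R*Dir, Den
    coeffs, *_ = np.linalg.lstsq(A, W_visual, rcond=None)
    return coeffs, A @ coeffs                                  # coefficients, fitted values
```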

14. Density, which had the highest correlation with perceived whiteness, was selected as the primary texture factor and incorporated as an additional parameter in the CIE WI formula. The incorporation of this parameter aimed to improve the predictions of the whiteness index model for textured objects. The results of the visual assessments under sources U30 and A were used to verify the performance of the modified model; the R² values between the weighted probability of perceived whiteness under U30 and A and the predicted whiteness were acceptable.
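The verification amounts to computing R² between the visual results (the weighted probability of perceived whiteness) and the whiteness predicted by the modified formula; a minimal sketch:

```python
import numpy as np

def r_squared(observed, predicted):
    """Coefficient of determination between visual results and model predictions."""
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```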


The following areas are suggested topics for additional research:

1. The main shortcoming of this study was the limited number of samples tested, which reduced the statistical validity of the results. More physical as well as synthesized texture images with surface variations should be examined and incorporated to increase the sample size and the validity of the results.

2. The UV content of the sources in the viewing booth should be made adjustable in order to improve the agreement between the SPD of the sources and the standard illuminants in the UV range, as this clearly influences the perception of fluorescent white objects under both sources. A study relating adjusted UV content to perceived whiteness should then be carried out to determine the correlation with the exact UV content for fluorescent objects.

3. In the study of UV content in viewing booths and measurements it was concluded that the

CIE WI at the UV calibrated mode (78% UV in this case) had the best correlation with

perceptual assessment under approximately 75% UV in an SPLIII viewing booth. While

this finding was useful, quantifiable absolute amounts of UV available in the

spectrophotometer or the viewing booth were not available when measuring or viewing

fluorescent samples, thus a precise determination of UV amount needs to be determined

in future studies attempting to examine the effect of UV on measurements and perception

of fluorescent white objects.


4. The method of examining the accuracy of the monitor profile was not precise in this study because the RGB information at the output could not be obtained. Future work should therefore improve access to the data at either the input or the output stage.

5. Due to time limitations, Little CMS, which is open-source software, was used in this study. However, the look-up table and algorithms associated with this code could not be obtained because of the system's encapsulation. The development of a look-up table specifically designed for this purpose would be beneficial in future work.

6. Highly significant differences in perceived whiteness were found between U30 or A and D65, though these data could not be used for general conclusions. The textures should be further examined under source D65 to test the validity of the predicted whiteness model.

7. Texture features, such as roughness, directionality, density, etc. are important in texture

discrimination. However, the viewing conditions also play a significant role in texture

perception. Generally, viewers are more attracted to local features in images at a shorter

viewing distance and to global features at a longer viewing distance. Besides viewing distance as the main factor, gender, the order of viewing positions, and prior knowledge have also been shown quantitatively to have a significant influence on texture perception. Additional work could include constructing numerical relationships between viewing distance and texture perception, as well as investigating cognitive biases in global and local perception. In other words, a dynamic texture classification model that considers factors such as viewing distance, gender ratio and prior knowledge is needed.


APPENDICES


Appendix A. A panel of 76 selected samples evaluated against the 50% acceptance threshold for whiteness. Columns: No.; R, G, B; X, Y, Z; L*, a*, b*; number of positive white responses; regarded as white at the 50% threshold (Y/N).

No. regarded R G B X Y Z L* a* b* 50% as white 1 255 255 255 93.20 98.11 106.00 99.26 0.32 -0.45 28 Y 2 254 255 255 92.89 97.94 106.03 99.20 0.06 -0.59 23 Y 3 253 255 255 92.52 97.73 106.00 99.12 -0.25 -0.71 23 Y 4 252 255 255 92.19 97.57 106.00 99.05 -0.57 -0.82 23 Y 5 251 255 255 91.93 97.45 106.03 99.01 -0.83 -0.92 22 Y 6 250 255 255 91.55 97.25 106.00 98.93 -1.17 -1.04 19 Y 7 249 255 255 91.25 97.09 105.90 98.86 -1.44 -1.08 16 Y 8 255 255 255 93.17 98.08 106.03 99.25 0.32 -0.49 28 Y 9 255 254 255 92.90 97.55 106.03 99.04 0.74 -0.85 24 Y 10 255 253 255 92.61 96.99 106.00 98.82 1.17 -1.21 21 Y 11 255 252 255 92.23 96.35 105.93 98.57 1.58 -1.61 21 Y 12 255 251 255 91.94 95.82 105.87 98.36 1.97 -1.93 15 Y 13 255 255 255 93.29 98.19 106.13 99.30 0.35 -0.48 28 Y 14 255 255 254 92.97 97.98 104.30 99.17 0.05 0.94 22 Y 15 255 255 253 92.82 97.87 103.53 99.21 0.13 0.53 21 Y 16 255 255 252 92.81 97.87 103.53 99.17 0.03 0.94 18 Y 17 255 255 251 92.65 97.75 102.63 99.12 -0.05 1.44 17 Y 18 255 255 250 92.52 97.66 101.90 99.09 -0.13 1.84 15 Y 19 255 255 255 93.31 98.20 106.17 99.30 0.37 -0.50 28 Y 20 254 254 255 92.54 97.34 105.97 98.96 0.45 -0.96 28 Y 21 253 253 255 91.91 96.64 105.97 98.69 0.51 -1.43 27 Y 22 252 252 255 91.29 95.87 105.90 98.38 0.71 -1.92 23 Y 23 251 251 255 90.76 95.25 105.93 98.13 0.82 -2.36 22 Y 24 250 250 255 90.09 94.49 105.87 97.83 0.92 -2.85 15 Y 25 255 255 255 93.27 98.19 106.17 99.30 0.31 -0.51 28 Y 26 252 252 252 90.78 95.52 103.37 98.24 0.39 -0.56 18 Y 27 250 250 250 89.34 93.97 101.60 97.62 0.45 -0.50 17 Y 1 247 247 247 86.87 91.43 98.81 96.59 0.34 -0.46 11 N 2 245 245 245 85.44 89.87 97.18 95.94 0.44 -0.50 7 N 3 242 242 242 83.10 87.34 94.61 94.88 0.56 -0.60 3 N 4 240 240 240 81.71 85.83 92.97 94.24 0.65 -0.60 0 N 5 237 237 237 79.40 83.46 90.40 93.22 0.54 -0.59 0 N 6 235 235 235 78.02 81.97 88.75 92.56 0.61 -0.56 0 N 7 232 232 232 75.71 79.54 86.04 91.48 0.61 -0.50 0 N


Appendix A. Continued

No. regarded R G B X Y Z L* a* b* 50% as white 8 230 230 230 74.30 78.05 84.34 90.80 0.62 -0.43 0 N 9 228 228 228 73.04 76.72 82.91 90.19 0.63 -0.43 0 N 10 248 255 255 90.88 96.89 106.00 98.78 -1.77 -1.28 10 N 11 247 255 255 90.51 96.70 105.90 98.71 -2.12 -1.35 6 N 12 246 255 255 90.19 96.54 105.90 98.65 -2.42 -1.46 2 N 13 245 255 255 89.89 96.38 105.87 98.58 -2.70 -1.55 0 N 14 244 255 255 89.59 96.21 105.90 98.52 -2.95 -1.68 0 N 15 243 255 255 89.25 96.04 105.83 98.45 -3.28 -1.76 0 N 16 242 255 255 88.91 95.87 105.90 98.38 -3.62 -1.92 0 N 17 241 255 255 88.58 95.70 105.83 98.31 -3.93 -1.99 0 N 18 240 255 255 88.31 95.55 105.80 98.25 -4.17 -2.07 0 N 19 255 250 255 91.64 95.24 105.87 98.13 2.43 -2.33 13 N 20 255 249 255 91.41 94.78 105.83 97.95 2.80 -2.62 9 N 21 255 248 255 91.02 94.13 105.70 97.68 3.23 -2.99 6 N 22 255 247 255 90.74 93.60 105.70 97.47 3.64 -3.36 2 N 23 255 246 255 90.44 93.03 105.60 97.24 4.09 -3.69 1 N 24 255 245 255 90.14 92.49 105.60 97.02 4.49 -4.07 0 N 25 255 244 255 89.84 91.93 105.60 96.79 4.93 -4.47 0 N 26 255 243 255 89.54 91.39 105.50 96.57 5.34 -4.79 0 N 27 255 242 255 89.22 90.79 105.43 96.32 5.82 -5.17 0 N 28 255 241 255 88.94 90.25 105.40 96.10 6.27 -5.53 0 N 29 255 240 255 88.68 89.75 105.37 95.89 6.68 -5.87 0 N 30 255 255 249 92.35 97.55 101.00 99.04 -0.25 2.35 11 N 31 255 255 248 92.20 97.45 100.23 99.01 -0.35 2.78 6 N 32 255 255 247 92.05 97.33 99.32 98.96 -0.41 3.29 3 N 33 255 255 246 91.90 97.23 98.64 98.92 -0.51 3.67 1 N 34 255 255 245 91.74 97.13 97.85 98.88 -0.63 4.12 0 N 35 255 255 244 91.56 97.00 97.00 98.83 -0.73 4.60 0 N 36 255 255 243 91.41 96.91 96.09 98.79 -0.85 5.14 0 N 37 255 255 242 91.29 96.82 95.36 98.76 -0.91 5.57 0 N 38 255 255 241 91.14 96.71 94.49 98.71 -0.99 6.08 0 N


Appendix A. Continued

No. regarded R G B X Y Z L* a* b* 50% as white 39 255 255 240 90.96 96.60 93.67 98.67 -1.1 6.56 0 N 40 249 249 255 89.37 93.69 105.73 97.51 0.99 -3.32 13 N 41 248 248 255 88.74 92.97 105.70 97.22 1.09 -3.80 12 N 42 247 247 255 88.14 92.29 105.70 96.94 1.18 -4.28 12 N 43 246 246 255 87.50 91.53 105.60 96.63 1.34 -4.75 10 N 44 245 245 255 86.98 90.89 105.67 96.36 1.51 -5.25 5 N 45 244 244 255 86.15 90.01 105.40 96.00 1.52 -5.70 5 N 46 243 243 255 85.62 89.35 105.40 95.73 1.71 -6.18 0 N 47 242 242 255 84.99 88.62 105.40 95.42 1.84 -6.70 0 N 48 241 241 255 84.41 87.97 105.33 95.15 1.92 -7.13 0 N 49 240 240 255 83.89 87.33 105.30 94.88 2.09 -7.58 0 N


Appendix B. A panel of 180 samples used for testing the accuracy of the EIZO monitor profile.

Sample L* a* b* R G B X Y Z 1 81 0 0 199.8 199.8 199.8 56.4 58.5 48.2 2 82 0 0 202.6 202.6 202.6 58.1 60.3 49.7 3 83 0 0 205.4 205.4 205.4 59.9 62.2 51.3 4 84 0 0 208.3 208.3 208.3 61.8 64.1 52.8 5 85 0 0 211.1 211.1 211.1 63.6 66.0 54.4 6 86 0 0 214.0 214.0 214.0 65.6 68.0 56.1 7 87 0 0 216.9 216.9 216.9 67.5 70.0 57.7 8 88 0 0 219.7 219.7 219.7 69.5 72.1 59.4 9 89 0 0 222.6 222.6 222.6 71.5 74.2 61.2 10 90 0 0 225.5 225.5 225.5 73.6 76.3 62.9 11 91 0 0 228.4 228.4 228.4 75.7 78.5 64.7 12 92 0 0 231.3 231.3 231.3 77.8 80.7 66.6 13 93 0 0 234.3 234.3 234.3 80.0 83.0 68.4 14 94 0 0 237.2 237.2 237.2 82.2 85.3 70.3 15 95 0 0 240.1 240.1 240.1 84.5 87.6 72.3 16 96 0 0 243.1 243.1 243.1 86.8 90.0 74.2 17 97 0 0 246.1 246.1 246.1 89.1 92.4 76.3 18 98 0 0 249.0 249.0 249.0 91.5 94.9 78.3 19 99 0 0 252.0 252.0 252.0 93.9 97.4 80.4 20 100 0 0 255.0 255.0 255.0 96.4 100.0 82.5 21 81 -1 1 199.0 200.6 198.0 56.0 58.5 47.4 22 81 1 -1 200.7 199.0 201.6 56.8 58.5 49.1 23 81 -0.5 0.5 199.3 200.2 198.9 56.2 58.5 47.8 24 81 0.5 -0.5 200.2 199.4 200.7 56.6 58.5 48.7 25 82 -1 1 201.8 203.4 200.8 57.7 60.3 48.9 26 82 1 -1 203.5 201.8 204.4 58.6 60.3 50.6 27 82 -0.5 0.5 202.2 203.0 201.7 57.9 60.3 49.3 28 82 0.5 -0.5 203.0 202.2 203.5 58.3 60.3 50.2 29 83 -1 1 204.6 206.3 203.6 59.5 62.2 50.4 30 83 1 -1 206.3 204.6 207.3 60.4 62.2 52.2 31 83 -0.5 0.5 205.0 205.8 204.5 59.7 62.2 50.8 32 83 0.5 -0.5 205.9 205.0 206.4 60.1 62.2 51.7 33 84 -1 1 207.4 209.1 206.5 61.3 64.1 51.9 34 84 1 -1 209.1 207.5 210.1 62.2 64.1 53.8 35 84 -0.5 0.5 207.9 208.7 207.4 61.6 64.1 52.4


Appendix B. Continued

Sample L* a* b* R G B X Y Z 36 84 0.5 -0.5 208.7 207.9 209.2 62.0 64.1 53.3 37 85 -1 1 210.3 211.9 209.3 63.2 66.0 53.5 38 85 1 -1 212.0 210.3 213.0 64.1 66.0 55.4 39 85 -0.5 0.5 210.7 211.5 210.2 63.4 66.0 54.0 40 85 0.5 -0.5 211.6 210.7 212.0 63.9 66.0 54.9 41 81 -1 -1 198.2 200.4 201.6 56.0 58.5 49.4 42 81 1 1 201.4 199.3 198.1 56.8 58.5 47.4 43 81 -0.5 -0.5 199.0 200.1 200.7 56.2 58.5 48.7 44 81 0.5 0.5 200.6 199.5 198.9 56.6 58.5 47.8 45 82 -1 -1 201.0 203.2 204.4 57.7 60.3 50.6 46 82 1 1 204.3 202.1 200.9 58.6 60.3 48.9 47 82 -0.5 -0.5 201.8 202.9 203.5 57.9 60.3 50.2 48 82 0.5 0.5 203.4 202.3 201.8 58.3 60.3 49.3 49 83 -1 -1 203.8 206.0 207.2 59.5 62.2 52.2 50 83 1 1 207.1 204.9 203.7 60.4 62.2 50.4 51 83 -0.5 -0.5 204.6 205.7 206.3 59.7 62.2 51.7 52 83 0.5 0.5 206.3 205.2 204.6 60.1 62.2 50.8 53 84 -1 -1 206.6 208.9 210.0 61.3 64.1 53.8 54 84 1 1 209.9 207.7 206.5 62.2 64.1 51.9 55 84 -0.5 -0.5 207.5 208.6 209.2 61.6 64.1 53.3 56 84 0.5 0.5 209.1 208.0 207.4 62.0 64.1 52.4 57 85 -1 -1 209.5 211.7 212.9 63.2 66.0 55.4 58 85 1 1 212.8 210.6 209.4 64.1 66.0 53.5 59 85 -0.5 -0.5 210.3 211.4 212.0 63.4 66.0 54.9 60 85 0.5 0.5 212.0 210.8 210.3 63.9 66.0 54.0 61 86 -1 1 213.1 214.8 212.2 65.1 68.0 55.1 62 86 1 -1 214.8 213.2 215.8 66.0 68.0 57.0 63 86 -0.5 0.5 213.6 214.4 213.1 65.3 68.0 55.6 64 86 0.5 -0.5 214.4 213.6 214.9 65.8 68.0 56.6 65 87 -1 1 214.4 213.6 214.9 65.8 68.0 56.6 66 87 1 -1 217.7 216.0 218.7 68.0 70.0 58.7 67 87 -0.5 0.5 216.4 271.3 215.9 67.3 70.0 57.3 68 87 0.5 -0.5 217.3 216.4 217.8 67.7 70.0 58.2 69 88 -1 1 218.9 220.5 217.9 69.0 72.1 58.5


Appendix B. Continued

Sample L* a* b* R G B X Y Z 70 88 1 -1 220.6 218.9 221.6 69.9 72.1 60.4 71 88 -0.5 0.5 219.3 220.1 218.8 69.3 72.1 59.0 72 88 0.5 -0.5 220.2 219.3 220.7 69.7 72.1 59.9 73 89 -1 1 221.7 223.4 220.8 71.0 74.2 60.2 74 89 1 -1 223.5 221.8 224.5 72.0 74.2 62.2 75 89 -0.5 0.5 222.2 223.0 221.7 71.3 74.2 60.7 76 89 0.5 -0.5 223.1 222.2 223.5 71.7 74.2 61.7 77 90 -1 1 224.6 226.3 223.6 73.1 76.3 61.9 78 90 1 -1 226.4 224.7 227.4 74.1 76.3 64.0 79 90 -0.5 0.5 225.1 225.9 224.6 73.3 76.3 62.4 80 90 0.5 -0.5 225.9 225.1 226.4 73.8 76.3 63.5 81 86 -1 -1 212.3 214.6 215.8 65.1 68.0 57.0 82 86 1 1 215.7 213.0 212.2 66.0 68.0 55.1 83 86 -0.5 -0.5 213.2 214.3 214.9 65.3 68.0 56.6 84 86 0.5 0.5 214.8 213.7 213.1 65.8 68.0 55.6 85 87 -1 -1 215.2 217.4 218.6 67.0 70.0 58.7 86 87 1 1 218.5 216.3 215.1 68.0 70.0 56.8 87 87 -0.5 -0.5 216.0 217.1 217.7 67.3 70.0 58.2 88 87 0.5 0.5 217.7 216.6 216.0 67.7 70.0 57.3 89 88 -1 -1 218.0 220.3 221.5 69.0 72.1 60.4 90 88 1 1 221.4 219.2 217.9 69.9 72.1 58.5 91 88 -0.5 -0.5 218.9 220.0 220.6 69.3 72.1 59.9 92 88 0.5 0.5 220.6 219.4 218.8 69.7 72.1 59.0 93 89 -1 -1 220.9 223.2 224.4 71.0 74.2 62.2 94 89 1 1 224.3 222.0 220.8 72.0 74.2 60.2 95 89 -0.5 -0.5 221.8 222.9 223.5 71.3 74.2 61.7 96 89 0.5 0.5 223.5 222.3 221.7 71.7 74.2 60.7 97 90 -1 -1 223.8 226.1 227.3 73.1 76.3 64.0 98 90 1 1 227.2 224.9 223.7 74.1 76.3 61.9 99 90 -0.5 -0.5 224.7 225.8 226.4 73.3 76.3 63.5 100 90 0.5 0.5 226.4 225.2 224.6 73.8 76.3 62.4 101 91 -1 1 227.5 229.2 226.5 75.2 78.5 63.7 102 91 1 -1 229.3 227.6 230.3 76.2 78.5 65.8 103 91 -0.5 0.5 228.0 228.8 227.5 75.4 78.5 64.2


Appendix B. Continued

Sample L* a* b* R G B X Y Z 104 91 0.5 -0.5 228.9 228.0 229.3 75.9 78.5 65.3 105 92 -1 1 230.5 232.2 229.5 77.3 80.7 65.5 106 92 1 -1 232.2 230.5 233.2 78.3 80.7 67.7 107 92 -0.5 0.5 230.9 231.8 230.4 77.6 80.7 66.0 108 92 0.5 -0.5 231.8 230.9 232.3 78.1 80.7 67.1 109 93 -1 1 233.4 235.1 232.4 79.5 83.0 67.4 110 93 1 -1 235.1 233.4 236.1 80.5 83.0 69.5 111 93 -0.5 0.5 233.8 234.7 233.3 79.7 83.0 67.9 112 93 0.5 -0.5 234.7 233.8 235.2 80.3 83.0 69.0 113 94 -1 1 236.3 238.0 235.3 81.7 85.3 69.2 114 94 1 -1 238.1 236.4 239.1 82.7 85.3 71.5 115 94 -0.5 0.5 236.7 237.6 236.3 82.0 85.3 69.8 116 94 0.5 -0.5 237.6 236.8 238.1 82.5 85.3 70.9 117 95 -1 1 239.3 241.0 238.2 84.0 87.6 71.1 118 95 1 -1 241.0 239.3 242.0 85.0 87.6 73.4 119 95 -0.5 0.5 239.7 240.6 239.2 84.2 87.6 71.7 120 95 0.5 -0.5 240.6 239.7 241.1 84.7 87.6 72.8 121 91 -1 -1 226.7 229.0 230.2 75.2 78.5 65.8 122 91 1 1 230.1 227.8 226.6 76.2 78.5 63.7 123 91 -0.5 -0.5 227.6 228.7 229.3 75.4 78.5 65.3 124 91 0.5 0.5 229.3 228.1 227.5 75.9 78.5 64.2 125 92 -1 -1 229.6 231.9 233.1 77.3 80.7 67.7 126 92 1 1 233.0 230.7 229.5 78.3 80.7 65.5 127 92 -0.5 -0.5 230.5 231.6 232.2 77.6 80.7 67.1 128 92 0.5 0.5 232.2 231.0 230.4 78.1 80.7 66.0 129 93 -1 -1 232.5 234.8 236.1 79.5 83.0 69.5 130 93 1 1 236.0 233.7 232.4 80.5 83.0 67.4 131 93 -0.5 -0.5 233.4 234.5 235.2 79.7 83.0 69.0 132 93 0.5 0.5 235.1 234.0 233.3 80.3 83.0 67.9 133 94 -1 -1 235.5 237.8 239.0 81.7 85.3 71.5 134 94 1 1 238.9 236.6 235.4 82.7 94.0 69.2 135 94 -0.5 -0.5 236.3 237.5 238.1 82.0 69.8 70.9 136 94 0.5 0.5 238.1 236.9 236.3 82.5 85.3 69.8 137 95 -1 -1 238.4 240.7 242.0 84.0 87.6 73.4


Appendix B. Continued

Sample L* a* b* R G B X Y Z 138 95 1 1 241.9 239.5 238.3 85.0 87.6 71.1 139 95 -0.5 -0.5 239.3 240.4 241.1 84.2 87.6 72.8 140 95 0.5 0.5 241.0 239.8 239.2 84.7 87.6 71.7 141 96 -1 1 242.2 243.9 241.2 86.2 90.0 73.1 142 96 1 -1 244.0 242.3 245.0 87.3 90.0 75.4 143 96 -0.5 0.5 242.7 243.5 242.1 86.5 90.0 73.7 144 96 0.5 -0.5 243.5 242.7 244.0 87.1 90.0 74.8 145 97 -1 1 245.2 246.9 244.2 88.6 92.4 75.1 146 97 1 -1 247.0 245.2 248.0 89.7 92.4 77.4 147 97 -0.5 0.5 245.6 246.5 245.1 88.9 92.4 75.7 148 97 0.5 -0.5 246.5 245.6 247.0 89.4 92.4 76.8 149 98 -1 1 248.1 249.9 247.1 91.0 94.9 77.1 150 98 1 -1 249.9 248.2 250.9 92.1 94.9 79.5 151 98 -0.5 0.5 248.6 249.5 248.1 91.2 94.9 77.7 152 98 0.5 -0.5 249.5 248.6 250.0 91.8 94.9 78.9 153 99 -1 1 251.1 252.9 250.1 93.4 97.4 79.2 154 99 1 -1 252.9 251.2 253.9 94.5 97.4 81.6 155 99 -0.5 0.5 251.6 252.4 251.1 93.7 97.4 79.8 156 99 0.5 -0.5 252.5 251.6 253.0 94.2 97.4 81.0 157 100 -1 1 254.1 255.0 253.1 95.8 100.0 81.3 158 100 1 -1 255.0 254.2 255.0 97.0 100.0 83.7 159 100 -0.5 0.5 254.6 255.0 254.0 96.1 100.0 81.9 160 100 0.5 -0.5 255.0 254.6 255.0 96.7 100.0 83.1 161 96 -1 -1 241.4 243.7 244.9 86.2 90.0 75.4 162 96 1 1 244.8 242.5 241.3 87.3 90.0 73.1 163 96 -0.5 -0.5 242.2 243.4 244.0 86.5 90.0 74.8 164 96 0.5 0.5 244.0 242.8 242.2 87.1 90.0 73.7 165 97 -1 -1 244.3 246.7 247.9 88.6 92.4 77.4 166 97 1 1 247.8 245.5 244.2 89.7 92.4 75.1 167 97 -0.5 -0.5 245.2 246.4 247.0 88.9 92.4 76.8 168 97 0.5 0.5 246.9 245.8 245.1 89.4 92.4 75.7 169 98 -1 -1 247.3 249.6 250.9 91.0 94.9 79.5 170 98 1 1 250.8 248.4 247.2 92.1 94.9 77.1 171 98 -0.5 -0.5 248.2 249.3 250.0 91.2 94.9 78.9


Appendix B. Continued

172 98 0.5 0.5 249.9 248.8 248.1 91.8 94.9 77.7 173 99 -1 -1 250.3 252.6 253.9 93.4 97.4 81.6 174 99 1 1 253.7 251.4 250.2 94.5 97.4 79.2 175 99 -0.5 -0.5 251.1 252.3 252.9 93.7 97.4 81.0 176 99 0.5 0.5 252.9 251.7 251.1 94.2 97.4 79.8 177 100 -1 -1 253.3 255.0 255.0 95.8 100.0 83.7 178 100 1 1 255.0 254.4 253.1 97.0 100.0 81.3 179 100 -0.5 -0.5 254.1 255.0 255.0 96.1 100.0 83.1 180 100 0.5 0.5 255.0 254.7 254.1 96.7 100.0 81.9


Appendix C. Subjects visual assessment instructions

Thank you for agreeing to take part in this study. The results of this study will help improve our understanding of human color vision. In the first part of the study you will be tested for color vision using a test known as the Neitz test. You will then be asked to wear a gray lab coat and sit in front of a monitor at a specific distance. Please do not move excessively during assessments and do not lean forward or back when attempting to provide a response. The lighting in the room will be switched off, but there will be sufficient light from the monitor during assessments.

You will be shown a screen with a gray background and asked to focus on the gray display for one minute. This will help you adapt to the viewing conditions. During the adaptation period the experiment will be explained. The visual assessment is divided into two parts. In the first part, you will be shown three reference square samples on the monitor, which have assigned whiteness values of 1, 5 and 10 from your left to your right. You will also be shown a test sample below them. Your task is to assess the magnitude of the test sample's whiteness and provide a numerical value. If you feel the whiteness of the sample shown is higher than that of the reference set above, you can give a value greater than 10. Ratings do not have to be integers; fractional values, e.g. 6.7, may also be used. Once you are happy with your response, press the down arrow key to proceed to the next image. In the second part, a range of white samples (12 in total) containing different textures will be shown. Your task is to rank these samples from most white to least white.

Once you have completed parts 1 and 2 for one gray background, you will be shown a different gray background and the procedure will be repeated twice for two more gray backgrounds. The entire assessment should not last more than 10 minutes, but there are no time restrictions. We ask that you repeat the assessment three times in total, each time on a different day. There are no right or wrong answers, so relax, and let me know if you have any questions.


Appendix D. Textures displayed on the EIZO monitor under different backgrounds.


Appendix E. Statistical analysis of Equation 59.

Variable   Coefficient   Standard Error   t-test    P-value   Lower 95%   Upper 95%
c            -39.5634        32.978       -1.199     0.28      -120.26      41.128
R            585.5762       113.52         5.158     0.002      307.80     863.36
R·Dir         -1.0164         0.30        -3.393     0.015       -1.75      -0.28
Den            0.9444         0.30         3.153     0.019        0.21       1.68
