DATES: To be assured consideration, comments must be received at one of the addresses provided below, no later than 5 p.m. on March 17, 2009.

ADDRESSES: When commenting, refer to file code CMS–2252–P. Facsimile (FAX) transmission is not accepted. You may submit comments in one of four ways:

1. Electronically. You may submit electronic comments to http://www.regulations.gov. Follow the instructions under the ‘‘More Search Options’’ tab.

2. By regular mail. You may mail written comments to the following address: Centers for Medicare & Medicaid Services, Department of Health and Human Services, Attention: CMS–2252–P, P.O. Box 8016, Baltimore, MD 21244–1850.

3. By express or overnight mail. Centers for Medicare & Medicaid Services, Department of Health and Human Services, Attention: CMS–2252–P, Mail Stop C4–26–05, 7500 Security Boulevard, Baltimore, MD 21244–1850.

4. By hand or courier. You may deliver (by hand or courier) your written comments (one original) before the close of the comment period to either of the following addresses:

a. Room 445–G, Hubert H. Humphrey Building, 200 Independence Avenue, SW., Washington, DC 20201. Because access to the interior of the Hubert H. Humphrey (HHH) Building is not readily available to persons without Federal Government identification, commenters are encouraged to leave their comments in the CMS drop slots located in the main lobby of the building. (A stamp-in clock is available for persons wishing to retain proof of filing by stamping in and retaining an extra copy of the comments being filed.)

b. 7500 Security Boulevard, Baltimore, MD 21244–1850. If you intend to deliver your comments to the Baltimore address, please call (410) 786–9994 in advance to schedule your arrival with one of our staff members.

Requested Comments by CMS

A. Cytology Challenges and New Technology

1) Is the proposed definition for ‘‘cytology challenge’’ appropriate to address future technological advances?

Yes. The term “cytology challenge” is sufficiently nonrestrictive to encompass future technological advances in the screening and interpretation of gynecologic cytology.

2) Should criteria be included in the regulations for pilot testing before CMS approval of any new cytology testing media? If so, please specify the appropriate criteria.

No. There should be no criteria for pilot testing required in the regulations. It would be impossible to describe and design criteria for reasonable and timely pilot studies for unknown technologies. Providers should design and execute pilot studies with rigorous statistical analysis based upon the methods of testing employed.

3) Should pilot testing include a comparison to current technology? What is an acceptable comparison?

No specific criteria for pilot testing should be stipulated in the regulations. However, providers should outline the specific methods used for pilot testing, including inter-method comparison, and comparisons should require rigorous statistical validation.

4) If specific criteria for pilot testing are required, what burden would be incurred by PT programs and laboratories participating in a pilot test?

No pilot testing should be stipulated in the regulations. However, all pilot testing performed by providers should require statistical validation. The costs of pilot testing to providers and laboratories are burdensome: providers must incur the costs of setting up a pilot study, and laboratories participating in the pilot study incur the time costs of providing pathologists (i.e., technical supervisors), cytotechnologists, and proctors for two tests (i.e., the standard glass slide examination and the new technology examination).

5) Would requiring pilot testing cause an increase in the cost of cytology PT?

Pilot testing should not be stipulated in the regulations. However, pilot testing would cause an increase in the cost and the additional cost would be passed on to participants.

B. Testing Individuals

1) Should enrollment and participation in an educational program be required for all cytology laboratories? If so, how would this enrollment be monitored by CMS?

No. Enrollment and participation in educational programs are best managed and conducted by professional organizations, state licensing boards, and hospital and laboratory accreditation programs such as the College of American Pathologists (CAP) Laboratory Accreditation Program, or even the Joint Commission. These organizations already exist and determine appropriate criteria, format, and valid content for educational programs, and they are approved by medical education accrediting agencies such as the Accreditation Council for Continuing Medical Education (ACCME) for physicians. In addition, the American Society of Cytopathology (ASC) provides educational programs accredited by the ACCME for physician members, as well as Continuing Medical Laboratory Education (CMLE) for cytotechnologists that meets the continuing education requirements of the ASCP Board of Registry Certification Maintenance Program.

REFERENCE: Birdsong G, Howell L, Atkison K, et al. Scientific issues related to the cytology proficiency testing regulations. CytoJournal. 2006 Apr 18;3:11.

2) If enrollment and participation in educational programs were to be required, what criteria would be appropriate for CMS to adopt through rulemaking to evaluate these programs?

CMS should not mandate participation in an educational program such as proficiency testing. If CMS adopted a rule for mandatory education, the criteria for educational materials should not be determined by CMS. The criteria for these educational materials or programs should be under the purview of accrediting educational organizations with well-established histories in the field of education and teaching.

3) If enrollment and participation in educational programs were to be required, how might CMS monitor or evaluate an individual’s participation in such a program?

If CMS adopted a rule for mandatory education instead of gynecologic proficiency testing, the process should be monitored by its deemed laboratory accrediting organizations. CMS does not have the resources to monitor or evaluate an individual’s participation in educational programs.

4) If educational programs were required, what enforcement actions might be appropriate for laboratories if laboratories/individuals did not participate in the required programs?

Accreditation agencies should monitor education in laboratories and provide enforcement for lack of participation. Laboratories should provide the opportunity for education for individuals. Laboratories that do not maintain compliance would be managed according to their deemed laboratory accrediting organization’s protocol.

C. Frequency of Testing

1) How many cytology challenges per test event are appropriate to assess individual performance?

Individual performance does not indicate proficiency; proficiency is a multifactorial process of which one portion consists of the screening and interpretation tasks embodied in the current PT event. Other considerations involved in proficiency may include such items as judgment as to when to obtain additional history, when to consult with other care providers and cytologists, and when to consult reference material (items precluded in the current testing protocol). No one test can assess individual performance in an environment of evolving technology and future molecular testing. The law states “periodic confirmation and evaluation of the proficiency of individuals involved in screening or interpreting cytologic preparations, including announced and unannounced on-site testing of individuals, with testing to take place, to the extent practicable, under normal working conditions.” PT, as currently configured, does not provide a “normal working conditions” environment.

Statistical theory dictates that the higher the number of challenges per test event, the better a program can assess individual performance; however, there are practical limitations to large challenge sets. When choosing the optimal number of challenges, it is accepted practice to estimate the misclassification rates using binomial expansion theory, applying the test criteria to example per-challenge accuracy rates for both “competent” and “incompetent” individuals. For example, the misclassification or false pass rate for incompetent individuals (per-challenge accuracy of 80%) against a 90% pass rule drops from 38% with a 10 slide (challenge) set to 20% with a 20 slide set. By contrast, the misclassification or false failure rate for competent individuals (per-challenge accuracy of 95%) drops only from 9% with a 10 slide set to 8% with a 20 slide set. Therefore, based upon the change in false pass rates, there may be a rationale to move from a 10 slide testing event to a 20 slide testing event; however, there is not as much benefit in improvement of the false failure rates.
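Under the binomial model described above, these misclassification rates can be reproduced directly. The short sketch below (the function and parameter names are our own illustration, not part of any PT program) computes the probability of meeting a 90% pass rule for a given per-challenge accuracy:

```python
from math import ceil, comb

def pass_probability(n_challenges, accuracy, pass_fraction=0.9):
    """P(score >= pass threshold) when each of n_challenges is answered
    correctly with independent per-challenge probability `accuracy`
    (a simple binomial model of a pass/fail slide test)."""
    k_min = ceil(pass_fraction * n_challenges)  # e.g. 9 of 10, 18 of 20
    return sum(
        comb(n_challenges, k) * accuracy**k * (1 - accuracy)**(n_challenges - k)
        for k in range(k_min, n_challenges + 1)
    )

for n in (10, 20):
    false_pass = pass_probability(n, 0.80)      # "incompetent" examinee passes
    false_fail = 1 - pass_probability(n, 0.95)  # "competent" examinee fails
    print(f"{n} slides: false pass {false_pass:.1%}, false fail {false_fail:.1%}")
# → 10 slides: false pass 37.6%, false fail 8.6%
# → 20 slides: false pass 20.6%, false fail 7.5%
```

These values match the rounded figures cited above (38%/20% false pass and 9%/8% false failure); the small differences are rounding.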

It is not possible to determine the appropriate number of challenges per test event without performing a power analysis for each potential scoring grid. The ‘evidence’ for increasing the number of challenges derives from sources that ask the regulatory authorities to “contract with experts in statistics and test theory, through interaction with knowledgeable cytopathologists and cytotechnologists, (to) design an equitable and scientifically well-founded system” (Nagy and Naryshkin, 2007). It is not acceptable to base drastic changes in a nationwide proficiency testing program on ‘evidence’ that concedes that dichotomous statistics do not fit the current model of testing (Nagy and Naryshkin, 2007). Valid statistical modeling brings together theory and practice and should be applicable to the currently available (or proposed) proficiency test setting. It is not acceptable to base significant changes in proficiency testing on improper statistical applications simply because they may be the only available evidence at this time, or because the dichotomous test design is ‘simple, transparent and mathematically calculable’ yet inappropriate for application to modern proficiency testing (Nagy and Naryshkin, 2007). Sample sizes will vary based upon the assumptions that derive from the statistical analysis; those assumptions must be valid, and the parameters in the statistical model must be based upon evidence, practice, and theory. In addition, testimony at a professional society meeting, without being subject to a peer review process, should not be considered substantial and statistically valid evidence for changing proficiency testing methodology.
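A grid-specific power analysis of the kind called for here can be sketched by Monte Carlo simulation. Everything numeric below, the confusion matrix for a modeled examinee and the weighted scoring grid, is an illustrative assumption rather than the CMS grid or any measured data; the sketch only demonstrates that pass probability depends jointly on the scoring weights, the category mix, the number of challenges, and the pass rule.

```python
import random

random.seed(0)

# P(response | true category) for a hypothetical examinee -- assumed values.
# A = Unsatisfactory, B = Negative, C = LSIL, D = HSIL/cancer.
CONFUSION = {
    "A": {"A": 0.90, "B": 0.08, "C": 0.01, "D": 0.01},
    "B": {"A": 0.05, "B": 0.90, "C": 0.04, "D": 0.01},
    "C": {"A": 0.01, "B": 0.09, "C": 0.80, "D": 0.10},
    "D": {"A": 0.01, "B": 0.04, "C": 0.15, "D": 0.80},
}

# Illustrative weighted (multinomial) scoring grid: points per (truth, response).
WEIGHTS = {
    ("A", "A"): 10, ("B", "B"): 10, ("C", "C"): 10, ("D", "D"): 10,
    ("C", "D"): 5, ("D", "C"): 5,  # partial credit within epithelial abnormality
}  # every other (truth, response) pair scores 0

def sample_response(truth):
    """Draw one response from the examinee's confusion distribution."""
    r, cum = random.random(), 0.0
    for resp, p in CONFUSION[truth].items():
        cum += p
        if r < cum:
            return resp
    return resp  # guard against floating-point shortfall

def simulate_pass_rate(test_set, pass_percent=90, trials=10_000):
    """Estimate P(total score >= pass_percent of maximum) by simulation."""
    max_score = 10 * len(test_set)
    passes = sum(
        100 * sum(WEIGHTS.get((t, sample_response(t)), 0) for t in test_set)
        / max_score >= pass_percent
        for _ in range(trials)
    )
    return passes / trials

# A hypothetical 20-challenge set: 2 unsatisfactory, 10 negative, 4 LSIL, 4 HSIL.
test_set = ["A", "B", "B", "B", "B", "B", "C", "C", "D", "D"] * 2
print(f"Estimated pass probability: {simulate_pass_rate(test_set):.2f}")
```

Changing any assumed weight, the category mix, or the challenge count changes the estimated pass probability, which is the point made above: each proposed scoring grid requires its own power analysis.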

REFERENCE: Nagy GK, Naryshkin S. The dysfunctional federally mandated proficiency test in cytopathology: a statistical analysis. Cancer Cytopathol. 2007;111:467–76.

2) Should annual testing continue to be required with 10 slides per test?

No. Proficiency testing, as now practiced and as currently proposed, is not statistically valid to assess individual performance. Regarding annual testing, there are no published data showing that performance significantly declines over time. The American Board of Pathology (ABP) recommends recertification for pathology proficiency by testing at 10-year intervals. There are no valid data available to support an annual (or any other) testing interval. By precedent from ABP recertification, testing intervals can be as long as every 10 years. The Cytology Education and Technology Consortium (CETC) recommends a 5-year interval for cytology proficiency testing.

3) Is 2 years an appropriate testing interval using 20 slides per test?

Proficiency testing, as now practiced and as currently proposed, is not statistically valid to assess individual performance. There are no published data showing that performance significantly declines over time. If proficiency testing continues as proposed, the testing interval should be lengthened to 10 years, as the American Board of Pathology (ABP) recommends recertification by testing at 10-year intervals. The ABP process is the only other pathology testing program designed to monitor individual proficiency, including cytology proficiency in Anatomic Pathology. The Cytology Education and Technology Consortium (CETC) recommends a 5-year interval for cytology proficiency testing.

4) Why would a testing frequency longer than every 2 years be appropriate?

Proficiency testing, as now practiced and as currently proposed, is not statistically valid to assess individual performance. There are no published data showing that performance significantly declines over time. If proficiency testing continues as proposed, the testing interval should be lengthened to 10 years, as the American Board of Pathology recommends recertification by testing at 10-year intervals. The ABP process is the only other pathology testing program designed to monitor individual proficiency, including cytology proficiency in Anatomic Pathology. The Cytology Education and Technology Consortium (CETC) recommends a 5-year interval for cytology proficiency testing. The American Board of Medical Specialties implemented Maintenance of Certification for 24 medical boards utilizing a recertification cycle of 6 to 10 years.

5) If an individual is allowed to pass a 20 cytology challenge test when an HSIL or cancer (Category D) cytology challenge is reported as Normal or Benign Changes (Category B), how long should the timeframe be between testing events?

Proficiency testing, as now practiced and as currently proposed, is not statistically valid to assess individual performance. The misclassification or false failure rate for competent individuals drops only from 9% with a 10 slide set to 8% with a 20 slide set. If mandatory individual PT is to continue with punitive consequences, there should be no alteration in the interval of testing based upon individual responses to specific challenges. There are no published data suggesting that performance significantly declines over time. If proficiency testing continues as proposed, the testing interval should be lengthened to 10 years, as the American Board of Pathology (ABP) recommends recertification by testing at 10-year intervals. The ABP process is the only other pathology testing program designed to monitor individual proficiency, including cytology proficiency in Anatomic Pathology.

6) What type of data should be collected to determine if a longer interval between testing is appropriate? Who should collect the data? How long should the data be collected?

Proficiency testing, as now practiced and as currently proposed, is not statistically valid to assess individual performance. The current program does not produce data that can be used to support any other testing interval. There are no published data suggesting that performance significantly declines over time. If proficiency testing continues as proposed, the testing interval should be lengthened to 10 years, as the American Board of Pathology (ABP) recommends recertification by testing at 10-year intervals. The ABP process is the only other pathology testing program designed to monitor individual proficiency, including cytology proficiency in Anatomic Pathology. The Cytology Education and Technology Consortium (CETC) recommends a 5-year interval for cytology proficiency testing. The American Board of Medical Specialties implemented Maintenance of Certification for 24 medical boards utilizing a recertification cycle of 6 to 10 years.

Data in support of testing at any interval other than annual would be impossible to gather under the current program, because even looking at individual performance at various yearly intervals does not take into account the fact that the individual actually took the test (and therefore studied/prepped for the test) in the intervening years. Since no population of cytologists exists outside of the current annual testing system, there is no population of practitioners available for study of other testing intervals.

7) What types of data are needed to validate testing less frequent than annually?

Proficiency testing, as now practiced and as currently proposed, is not statistically valid to assess individual performance. There are no data currently available to support annual testing, or any other interval of testing, that reflect proficiency in the interpretation of gynecologic cytology. The current program does not produce data that can be used to support any other testing interval.

Data in support of testing at any interval other than annual would be impossible to gather under the current program, because even looking at individual performance at various yearly intervals does not take into account the fact that the individual actually took the test (and therefore studied/prepped for the test) in the intervening years. Since no population of cytologists exists outside of the current annual testing system, there is no population of practitioners available for study of other testing intervals.

D. Number of Cytology Challenges

1) Are there logistical concerns and costs associated with administering testing events with more than 20 cytology challenges?

Yes, there are logistical concerns and costs of administering more than 20 challenges. These concerns include, but are not limited to:

• Quality control: Any increase over 10 slides/challenges multiplies the staff time proportionally for quality control, including QC of slides/challenges, instructions, result forms, evaluations, etc.

• Fulfillment: Any increase over 10 slides/challenges multiplies the staff time proportionally for design and slide packing, if applicable.

• Testing:

  - Number of consecutive days: The number of consecutive days that a laboratory is allowed to test is of concern. Because the number of slides/challenges may be 20 or more, it will take a cytotechnologist more than twice as long to prescreen the test slides as with the current 10 slide set, and more than twice as long for additional examinees to have their turn with the slides/challenges. The number of consecutive testing days must be increased, causing logistical problems for both laboratories and the PT provider. The time needed to test 20 challenges, whether they are slides or new technology, has not been determined.

  - Number of examinees per slide set/challenge set: The number of individuals that can be tested using one 20 slide set has not been determined, nor has the number of individuals that can be tested using 20 challenges with new technology. The same is true for more than 20 challenges.

  - Time: The time needed per test must be extended. If the estimated time allotted is based upon the current model of 10 slides in two hours, which effectively removes cytotechnologists and pathologists from patient care services for 2 hours per day, then 20 slides would require at least a half day (4 hours) per examinee and increase proportionally from there. The time a laboratory and its individuals would need to test 20 challenges using future technology has not been determined.

  - Proctors: Proctors will be taken away from their regular work to administer gynecologic proficiency testing. Technical supervisors (pathologists), cytotechnologists, and other laboratory personnel, such as medical technologists, will also be affected by PT administration, ultimately affecting patient care. This will affect all laboratories; especially hard hit will be small laboratories, which have limited staff.

• Managing the efficient administration of challenges: Based on the current model, if the time allowed for consecutive testing days were not increased, more slide sets would have to be provided per laboratory. More slide sets per laboratory require more cytotechnologist and slide quality control staff to review slides between shipments, higher slide costs, etc. The impact of using new technology to administer 20 or more challenges has not been determined.

2) If 20 cytology challenges are used, thereby requiring a 4 hour timeframe to administer the test, what would be the impact on the laboratory operation?

Proficiency testing, as now practiced and as currently proposed, is not statistically valid to assess individual performance. However, if a biennial 20 slide challenge does occur, the impact on laboratory operations would be excessive. For each individual tested, a proctor’s time is also taken away from the daily workload schedule. If a laboratory is large and several proctors are needed to administer the test to multiple individuals, the workload is affected proportionately. Routine Pap test screening, QC Pap test screening, and non-gynecologic cytopathology are all interrupted by this process. A four hour time frame underestimates the disruption of clinical services in laboratories where central screening is performed and a pathologist interprets and reports the Pap test at a remote site. Because this practice constitutes “normal working conditions” and is acceptable under the law, if CMS implements a four hour testing limit, it should also reconsider its position on cytology PT referral in this setting, as it represents an onerous disruption of clinical care impacting patient safety. Laboratories that operate in a multi-hospital system have already found the cytology PT referral rule to cause undue stress on their staffing, as a cytotechnologist, pathologist, and/or proctor must travel (sometimes distances greater than 50 miles) to complete the current testing event. This burden would increase with the proposed 20 slide/4 hour testing event. Since CMS has deemed that the testing must also occur on consecutive days, the impact of multi-day testing to accommodate all participants becomes even more pronounced. The testing event would be directly linked to the number of slide sets sent per laboratory, which is in turn linked to the PT provider’s supply of slide sets.

3) Would laboratories prefer a 4 hour testing timeframe biennially, rather than the current 2 hour testing timeframe annually?

Proficiency testing, as now practiced and as currently proposed, is not statistically valid to assess individual performance. Laboratories would prefer not to have gynecologic cytology proficiency testing at all, and would consider educational testing in gynecologic cytology a more optimal solution for staying current with emerging technologies and cervical carcinogenesis. There should be no predetermined time limit for testing written in the regulations, as the time allotment needed for challenges using new technologies is unknown.

4) Should there be a requirement for each test set to contain at least one cytology challenge from each of the four response categories or more than one cytology challenge from each response category?

Proficiency testing, as now practiced and as currently proposed, is not statistically valid to assess individual performance. If mandatory individual PT is to continue with punitive consequences, the following is recommended: in a three-tiered system, each test set should contain at least one challenge from the Unsatisfactory category (category A), at least one from Negative for intraepithelial lesion or malignancy (category B), and two challenges from Epithelial cell abnormality (category C).

We are also soliciting comments on the effects of these proposals on PT programs as follows:

1) Are there a sufficient number of referenced cytology challenges available to assemble 20 cytology challenge test sets to test all cytology personnel nationally?

There will be very serious logistical concerns in assembling 20 challenge cytology test sets.

2) Would increasing the number of cytology challenges increase the PT program’s cost to administer the program?

Yes. Costs will increase at least proportionally, including costs of materials, staff time for quality control, infrastructure development, shipping, etc. The challenge type will also have an impact on PT costs, with additional expense increases based on changes in the technology type used for the challenge.

3) Would program costs to participants increase from a 10 slide annual test to a 20 cytology challenge biennial test?

Yes, costs to participants would increase. Any additional costs to administer the test will ultimately be reflected in charges to participants to cover the cost of the program.

4) What statistical methods and testing research could CMS use to better determine the statistical power of a cytology proficiency test with 20 challenges and a multinomial, weighted scoring scheme?

Individual performance does not indicate proficiency; proficiency is a multifactorial process of which one portion consists of the screening and interpretation tasks embodied in the current PT event. Other considerations involved in proficiency may include such items as judgment as to when to obtain additional history, when to consult with other care providers and cytologists, and when to consult reference material (items precluded in the current testing protocol). No one test can assess individual performance in an environment of evolving technology and future molecular testing. The law states “periodic confirmation and evaluation of the proficiency of individuals involved in screening or interpreting cytologic preparations, including announced and unannounced on-site testing of individuals, with testing to take place, to the extent practicable, under normal working conditions.” PT, as currently configured, does not provide a “normal working conditions” environment.

E. Response Categories

1) Should criteria be defined in the regulation for ‘‘unsatisfactory’’ cytology challenges?

No, criteria for an unsatisfactory specimen should not be defined in regulations. Criteria change with increased understanding of the disease process. Criteria should be established by expert consensus not by regulation. If the regulations continue to add more specificity, then they will once again become outdated as technology advances.

One of the currently proposed definitions for Unsatisfactory is incorrect, specifically paragraph 493.945(b)(1)(ii), “absence of endocervical/transformation zone component.” A specimen that has the minimum number of squamous cells and is not obscured more than 75% by a factor (e.g., inflammation) should NOT be considered Unsatisfactory merely because it lacks an endocervical or transformation zone (T-zone) component. Studies have NOT shown that women lacking EC/TZ components are more likely to have squamous lesions on follow-up (The Bethesda System for Reporting Cervical Cytology, Second Edition. New York: Springer-Verlag; 2004). Under normal working conditions, specimens are NOT signed out as “Unsatisfactory due to lack of endocervical/transformation zone component”; this information is provided only as an explanatory note in the specimen adequacy section of the patient report, not in the interpretation or general category. In the 2006 American Society for Colposcopy and Cervical Pathology (ASCCP) guidelines, clinicians are advised not to call the patient back for a repeat Pap test when the Pap test is satisfactory but does not contain evidence of T-zone sampling; the patient is to continue with routine screening in one year if no clinical symptoms are present (Davey et al. J Low Genit Tract Dis. 2008 Apr;12(2):71–81). The definition of Unsatisfactory in proposed paragraph 493.945(b)(1)(ii) should be deleted.

2) If criteria for ‘‘unsatisfactory’’ are described, should the regulations include descriptions or criteria specific to each preparation type?

If mandatory individual PT is to continue with punitive consequences, criteria for an unsatisfactory specimen should not be under the purview of regulations. Under “normal working conditions,” these consensus defined criteria can be accessed as needed and should not be defined in regulations. Criteria change with advances in technology and methods and federal regulatory responses are unlikely to be flexible enough to respond rapidly.

3) Should a fifth response category be required, separating HSIL or cancer (Category D) to more closely follow Bethesda terminology? We note that Bethesda 2001 separates LSIL (Category C) from HSIL (Category D), and also separates HSIL from cancer (Category D).

If mandatory individual PT is to continue with punitive consequences, the following is recommended: No fifth response category should be required. The response categories for PT should reflect the biologic understanding and the clinical management of women at risk for cervical cancer.

If mandatory individual PT is to continue with punitive consequences, there should be three response categories reflecting the Bethesda “General Categorization” (Unsatisfactory, Negative for intraepithelial lesion or malignancy, Epithelial cell abnormality). In the context of patient management, CMS should reconsider its rejection of the Cytology Education and Technology Consortium (CETC) and American Society for Cytotechnology (ASCT) suggestion to combine LSIL (Category C) and HSIL (Category D) into one category. LSIL and HSIL constitute a morphologic spectrum, especially when the underlying lesion is CIN 2. The College of American Pathologists Practical Guide to Gynecologic Cytopathology: Morphology, Management and Molecular Methods reiterates this with the following statement: “In real practice situations, morphology represents a spectrum of change without sharp cut-offs between entities.”

The ASCT and the CETC are both correct when they state that current patient management guidelines dictate that both categories (C and D) require patients to be referred to colposcopy (depending upon patient age). We recommend combining C and D into one category, “Epithelial Cell Abnormality: Further Patient Management Required.” Since the Pap test is a screening process that requires patient triage and appropriate follow-up, a three-tiered response category best reflects clinical practice. Precedent has been set for a three-tiered system in an educational setting. In 1996, the CAP Interlaboratory Comparison Program was the vehicle the CAP Laboratory Accreditation Program used to monitor the performance of cytology laboratories. Laboratories that failed to obtain a 90% score during a testing cycle were identified, held accountable, and required to take corrective action. The CAP three-tiered program is an example of a successful educational and accredited monitoring program that identified those laboratories that did not perform adequately as compared to their peers in the peer review process.

4) If a fifth category of cancer is required, should an individual who has an incorrect response in this category be allowed to pass PT?

If mandatory individual PT is to continue with punitive consequences, no fifth category should be required. There is an apparent incorrect assumption that the Pap test is a diagnostic test. We reiterate that the Pap test is a screening procedure that may identify patients who require further clinical management. It is unrealistic to assume that these response categories carry the same significance as a confirmatory diagnostic test.

F. Cytology Challenge Referencing

1) Should the review of cytology challenges by three physicians certified in anatomic pathology be on undotted slides?

No. The initial review by anatomic pathologists serves as the entry point for a challenge into PT testing. The process of field validation provides the robustness of the challenge. A blinded (undotted) initial review is not required to identify those challenges that may perform well; the field validation establishes whether the challenge is referenced into the correct response category. The challenges should be initially evaluated and validated in the manner that they will be evaluated in the testing situation. Therefore, slides/challenges should be first screened by cytotechnologists, marked, and then evaluated by pathologists.

The ASC recommends that cytology laboratories utilize cytotechnologists for all manual primary screening of Pap tests. Implicit in this recommendation is that challenges referred to pathologists will be dotted; this represents “normal working conditions.”

The vast majority of gynecologic cytology Pap test specimens in the United States are initially screened by cytotechnologists, and data support the value of the cytotechnologists’ role. Most pathology residency training programs and cytopathology fellowships do not include specific training and evaluation in primary screening of Pap tests that is as extensive as that provided in the formal training given in schools of cytotechnology. However, in certain unusual or emergency circumstances the primary screening of Pap tests may be done by a pathologist.
Approved by the ASC Executive Board, April 2006.

2) Should the three physicians certified in anatomic pathology independently determine the response category for each cytology challenge?

No. Three pathologists should review the challenges to allow entry into the field validation process, which establishes the robustness of the challenge. For the initial entry into the program, independent review is not necessary, as the field validation process ultimately assigns the reference category prior to entrance into a PT event. Slides/challenges should be first screened by cytotechnologists, marked, and then evaluated by pathologists.

3) Should PT programs be required to include cytotechnologists in the review process for referencing cytology challenges? If so, describe a process for including cytotechnologists.

Yes. If mandatory individual PT is to continue with punitive consequences, we recommend that cytotechnologists be included in the review process. Cytotechnologist screening, along with review by three pathologists, constitutes the best method of entering challenges into the field validation process, paralleling “normal working conditions.”

G. Biopsy Confirmation

1) Should the requirement for biopsy confirmation of LSIL (Category C) cytology challenges for PT be retained?

No. The ASC does not support the requirement for biopsy confirmation of LSIL. If mandatory individual PT is to continue with punitive consequences, no biopsy confirmation should be required on LSIL challenges. Studies have shown that LSIL is the most reproducible category in cytology – actually far more reproducible than CIN 1 biopsies. [Reference: Stoler MH, Schiffman M. Interobserver reproducibility of cervical cytologic and histologic interpretations: realistic estimates from the ASCUS-LSIL Triage Study. JAMA 2001;285:1500-5.] There are many reasons why the biopsy does not represent the “gold standard” for LSIL verification. These reasons include: resolution of HPV infections prior to the biopsy being performed, sampling issues during colposcopy, and variable reproducibility of CIN 1 histologic interpretation.

2) How many pathologists’ diagnoses should be required for biopsy confirmation of these PT samples?

The reference category for each individual challenge is determined by the field validation process and ultimately not by biopsy confirmation. Therefore, biopsy confirmation should not be required for any PT challenge.

H. Validation of Cytology Challenges

1) Should the regulations include a requirement for field validation of each cytology challenge before inclusion in a test set?

Yes, only field-validated challenges should be used if mandatory individual PT is to continue with punitive consequences. “[Field-] validated slides showed higher concordance: laboratories 98.3%, pathologists 96.6% and cytotechnologists 97.9%.” [Reference: 1997 PAP Year-End Summary. CAP Press; Northfield, IL] In addition, there are many scientific papers published in peer-reviewed journals describing the performance characteristics of challenges that have performed well and poorly in an interlaboratory comparison program. [See full bibliography at end of document]

2) Should criteria for this initial field validation be stated in the regulations?

No. If mandatory individual PT is to continue with punitive consequences, stipulating criteria would not allow flexibility when new technologies are introduced. Criteria for field validation should be established by rigorous statistical analyses appropriate for each challenge. Validation criteria used for challenges must be made available to participants and any other interested parties.

3) If so, how should the criteria be defined?

Validation criteria should not be defined by CMS. Criteria for field validation should be established by rigorous statistical analyses appropriate for each challenge, but not established by regulation.

4) Should continuous monitoring of each cytology challenge be required?

Yes. If mandatory individual PT is to continue with punitive consequences, cytology challenge performance should be continuously monitored, as changes in challenge performance occur due to technical factors.

5) Should continuous monitoring criteria be specified in the regulations? If so, what criteria should be required?

No. If mandatory individual PT is to continue with punitive consequences, criteria should be independently established by the PT provider in a manner appropriate for the technology of the challenge. Regulatory stipulation of criteria would not allow flexibility when introducing new technologies.

6) Will the requirement for continuous field validation add any additional costs?

Yes, any requirement not currently included in the regulations will add costs to a program.

I. Scoring Scheme

1) Should the automatic failure for misdiagnosing an HSIL or cancer (Category D) as a Normal or Benign Change (Category B) be retained for pathologists and cytotechnologists?

No. If mandatory individual PT is to continue with punitive consequences, automatic failure should not occur if the participant responds with Category B when the field-validated challenge is Category D. Statistically, the misclassification or false-failure rate for competent individuals (assuming 95% per-slide accuracy) is 9% with a 10-slide set or 8% with a 20-slide test. A single-slide error is more likely the result of a false failure than of an incompetent practitioner.
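The 9% and 8% figures above follow from a simple binomial model. The sketch below, which assumes a competent examinee answers each slide independently with 95% accuracy, must score at least 90% to pass, and faces no automatic-failure slides, reproduces them:

```python
# Minimal binomial sketch of the false-failure estimate (assumptions:
# independent slides, 95% per-slide accuracy, 90% passing score, no
# automatic-failure rules).
from math import ceil, comb

def false_failure_rate(n_slides, p_correct=0.95, pass_fraction=0.90):
    """Probability that a competent examinee still scores below the cutoff."""
    need = ceil(n_slides * pass_fraction)
    p_pass = sum(
        comb(n_slides, k) * p_correct**k * (1 - p_correct)**(n_slides - k)
        for k in range(need, n_slides + 1)
    )
    return 1 - p_pass

print(round(false_failure_rate(10), 2))  # 0.09  (10-slide set)
print(round(false_failure_rate(20), 2))  # 0.08  (20-slide set)
```

Note that doubling the number of slides barely changes a competent individual's chance of a false failure, which is the letter's point about test sensitivity.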

2) Should pathologists and cytotechnologists be evaluated using the same scoring scheme? If not, how should the scoring grid be composed?

No. If the current non-statistically-validated, inaccurate, mandatory individual PT is to continue with punitive consequences, we recommend that a thorough multinomial statistical analysis be completed BEFORE any changes are made. Unfortunately, cytopathology PT is not well founded in statistics and science.1 Statistical methods have not been applied to the current or proposed PT. As quoted in the Federal Register, part VI, Regulatory Impact Statement, section q, “the increase in cytology challenges should increase test sensitivity.” However, no statistical analysis has been done to offer proof. Simply adding ten more slides to the current ten-slide test does NOT make it a more sensitive test. There is no other required laboratory proficiency testing applied to individuals for comparison, nor is there published evidence-based literature. A high level of “mathematical and statistical understanding by the designers of the test is crucial if a fair and scientifically valid system of proficiency testing in cytopathology (PTC) is to be established.”2 If scoring grids are to be evaluated, whether they contain two-, three-, four-, or five-tiered responses, a separate power analysis must be completed for each potential grid. Furthermore, it is impossible to comment on proposed scoring grids without knowing the true test slide number. As it stands, “the results of the CLIA’88-mandated PTC mostly mirror the statistical chances and not the examinees’ skills.”2

If cytology PT is to continue, the ASC urges CMS to develop a fair and statistically validated test. As expressed by Drs. Nagy and Naryshkin:

We emphasize that the theoretical underpinnings of PTC are much more complex than may be perceived readily. We hope that, if mandatory, nationwide PTC remains in any form, then it is redesigned to be a valid and reliable proficiency testing system or possibly a board-type examination. We believe that accomplishing this would require the engagement of both cytologists and experts who are well versed in the practical and theoretical aspects of modern test theory. This does not mean that more descriptive data from the existing results of the CLIA’88-mandated PTC should be collected. On the contrary, because the design of the CLIA’88-mandated test is flawed, little true insight may be gained by amassing and further studying descriptive data from such a source. Rather, we advocate the careful application of more inferential or theoretical statistics, which would allow a fairer conceptual design of PTC while leaving the final decisions in the hands of expert cytopathologists and cytotechnologists who are familiar with the wider aspects of our difficult discipline.

Once it is statistically validated, we recommend a three-tiered system created by combining C and D into one category, “Epithelial Cell Abnormality (ECA): Further Patient Management Required.” Since the Pap test is a screening modality that requires patient triage and appropriate follow-up, a three-tiered response category best reflects clinical practice.

The following is a grading scheme that could be used with a three-tiered scheme and a 20-slide field-validated challenge PT:

Cytotechnologist - 20 field-validated slides - proposed point values

                        CT response
Validated response      Unsat*    NILM**    ECA***
Unsat (Category A)        5         0        2.5
NILM (Category B)        2.5        5        2.5
ECA (Category C)          0        -5         5

Pathologist - 20 field-validated slides - proposed point values

                        Technical supervisor (pathologist) response
Validated response      Unsat*    NILM**    ECA***
Unsat (Category A)        5         0         0
NILM (Category B)        2.5        5         0
ECA (Category C)          0        -5         5

* Unsat = unsatisfactory specimen. ** NILM = negative for intraepithelial lesion or malignancy. *** ECA = epithelial cell abnormality, further management required.
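As an illustration, the proposed grid can be expressed as a lookup table. The sketch below is hypothetical (the function and variable names are ours); the point values are taken from the cytotechnologist table above:

```python
# Hypothetical sketch of the proposed three-tiered grid applied to a
# 20-slide test set. Point values come from the table above; the names
# are our own illustration.
CT_GRID = {  # (validated category, examinee response) -> points
    ("Unsat", "Unsat"): 5,   ("Unsat", "NILM"): 0,  ("Unsat", "ECA"): 2.5,
    ("NILM",  "Unsat"): 2.5, ("NILM",  "NILM"): 5,  ("NILM",  "ECA"): 2.5,
    ("ECA",   "Unsat"): 0,   ("ECA",   "NILM"): -5, ("ECA",   "ECA"): 5,
}

def score(grid, answers):
    """answers: list of (validated_category, response) pairs.
    Returns the score as a percentage of the maximum (5 points per slide)."""
    earned = sum(grid[(validated, response)] for validated, response in answers)
    return 100.0 * earned / (5 * len(answers))

# A cytotechnologist who calls one validated-ECA slide NILM on a 20-slide
# set lands exactly at the 90% threshold: (19*5 - 5) / 100 = 90%.
answers = [("NILM", "NILM")] * 19 + [("ECA", "NILM")]
print(score(CT_GRID, answers))  # 90.0
```

The -5 penalty for calling a validated ECA slide NILM makes a single false negative cost two slides' worth of credit, while partial credit elsewhere reflects responses that would still trigger appropriate follow-up.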

REFERENCES:

1. Nagy GK, Newton LE. Cytopathology Proficiency Testing: Where Do We Go From Here? Diagn Cytopathol 2006;34:257–264.

2. Nagy GK, Naryshkin S. The Dysfunctional Federally Mandated Proficiency Test in Cytopathology: A Statistical Analysis. Cancer Cytopathol 2007;111:467–476.

3) Should the cytotechnologist scoring scheme be more stringent than the current regulations?

No. If the current non-statistically-validated, inaccurate, mandatory individual PT is to continue with punitive consequences, the cytotechnologist should not be evaluated with a more stringent scoring scheme. We recommend combining C and D into one category, “Epithelial Cell Abnormality (ECA): Further Patient Management Required.” Since the Pap test is a screening modality that requires patient triage and appropriate follow-up, a three-tiered response category best reflects clinical practice.

The following is a grading scheme that could be used with a three-tiered scheme and a 20-slide field-validated challenge PT:

Cytotechnologist - 20 field-validated slides - proposed point values

                        CT response
Validated response      Unsat*    NILM**    ECA***
Unsat (Category A)        5         0        2.5
NILM (Category B)        2.5        5        2.5
ECA (Category C)          0        -5         5

Pathologist - 20 field-validated slides - proposed point values

                        Technical supervisor (pathologist) response
Validated response      Unsat*    NILM**    ECA***
Unsat (Category A)        5         0         0
NILM (Category B)        2.5        5         0
ECA (Category C)          0        -5         5

* Unsat = unsatisfactory specimen. ** NILM = negative for intraepithelial lesion or malignancy. *** ECA = epithelial cell abnormality, further management required.

4) How would the same scoring scheme meet the statutory requirement for evaluating workplace performance of both cytotechnologists and pathologists with respect to their responsibilities in reviewing cytology preparations?

If the current non-statistically-validated, inaccurate, mandatory individual PT is to continue with punitive consequences, it cannot justly be used to evaluate the workplace performance of cytotechnologists and pathologists. The test does not simulate normal working conditions. Cytology PT does not reflect the examinees’ skills but mirrors the statistical chances of a test.2 A single-slide error is more likely the result of a false failure than of an incompetent practitioner.

REFERENCES:

1. Nagy GK, Newton LE. Cytopathology Proficiency Testing: Where Do We Go From Here? Diagn Cytopathol 2006;34:257–264.

2. Nagy GK, Naryshkin S. The Dysfunctional Federally Mandated Proficiency Test in Cytopathology: A Statistical Analysis. Cancer Cytopathol 2007;111:467–476.

CMS has requested additional information from cytology PT providers to analyze trends in PT failures over time. This information should include, at a minimum, the impact of automatic failures due to missed high-grade lesions (HSIL), and the impact of false positives and false negatives on scores over time. Examples of information to be collected include:

a) The number of automatic failures
b) The number of automatic failures with additional false positives
c) The number of automatic failures with additional false negatives
d) The number of automatic failures with both additional false positives and false negatives
e) The number and types of false positives that led to PT failure
f) The number and types of false negatives that led to PT failure over time.

We recommend the following changes to the CMS request for additional information from cytology PT providers:

a. The number of “unacceptable” scores due to false positives
b. The number of “unacceptable” scores due to false negatives
c. The number of “unacceptable” scores due to a mixture of false positives and false negatives
d. The number and types of false positives that led to “unacceptable” scores
e. The number and types of false negatives that led to “unacceptable” scores.

J. Retesting and Remediation

1) Should the PT programs provide more specific information concerning incorrect responses to the laboratory and individual to improve the testing process? Please clarify what information should be provided.

Yes. If mandatory individual PT is to continue with punitive consequences, we recommend that the PT provider give at least the following information to the participant: case number (e.g., 1-20), the participant’s response by category, and the reference response by category. For those individuals receiving a score of less than 90%, information on the incorrect response categories should be given to the laboratory director. The laboratory director is responsible for providing documented remedial training and education in the area of deficiency before the second test (first retest), and so on.

2) Should all testing be conducted in the laboratory or should some testing be conducted at the location of the PT program?

If mandatory individual PT is to continue with punitive consequences, the PT provider should determine the location of testing, which should not be stipulated in the regulations. Additionally, the first and second test should be allowed to occur under “normal working conditions” as required by law. Examinees should be allowed to examine challenges in the environment and under the standard operating procedures of their laboratory. If CMS continues individual PT with punitive consequences, it should also reconsider its position on cytology PT referral in this setting, as it represents an onerous disruption of clinical care impacting patient safety.

3) How many times should an individual be permitted to take a retest? Please provide rationale to support your recommendation.

If mandatory individual PT is to continue with punitive consequences, the following is recommended: no limit should be set on the number of retests permitted. The currently proposed regulations stipulate that unsuccessful PT (less than 90%) after the fourth test (third retest) results in a continuous loop of 35 hours of documented continuing education focusing on incorrect response categories and a discontinuation of examining GYN cytology until a score of 90% is achieved on a retest.

K. Appeals Process

1) What criteria should be included in an appeals process?

If mandatory individual PT is to continue with punitive consequences, criteria should not be established by CMS or written in the regulations. Each provider should establish its own appeals process, and whatever appeals process is established must be transparent and available to the participant.

2) Should PT programs be required to provide participants with a description of their appeals process?

Yes. The PT programs should be required to provide participants with a description of their appeals process. It should be readily available to all laboratories and participants prior to their enrollment and at any time during the testing cycle.

3) When should a description of the appeals process be shared with the participants?

The appeals process should be made available prior to the time of enrollment and at any time on the PT provider’s Web site.

L. Testing Site for the First Event

1) …, a few individuals have requested more choices for testing locations including but not limited to professional meetings, seminars, and trade shows. We are soliciting the public’s comments on this proposal.

Off-site testing may be allowed but should not be a requirement of PT programs. Off-site testing does not approximate “normal working conditions,” but should be available to participants in special circumstances. Providers of off-site testing at meetings would incur additional costs (room rental, microscope rental and transport, staffing, and proctor administration), and these costs would be passed on to examinees. Examinees would incur extra cost if they required an accompanying cytotechnologist.

M. Proctors

1) What specific criteria should there be for selection of the proctor?

Regulations should not specify criteria for the proctor. While CMS must approve the criteria, training, and oversight of proctors in PT provider applications, placing specific criteria in the regulations does not allow for improving the proctor process. Delete paragraph 493.945(b)(5) of the proposed regulations.

2) How often should proctor training and testing be required?

Regulations should not specify criteria for the proctor. While CMS must approve the criteria, training, and oversight of proctors in PT provider applications, placing specific criteria in the regulations does not allow for improving the proctor process. Delete paragraph 493.945(b)(5) of the proposed regulations.

3) What penalties should be applied to laboratories and individuals when testing is not conducted according to requirements?

The proctoring process should not be specified in regulations, nor should penalties. All PT provider applications should specify the duties and responsibilities of the proctor and would be subject to approval by CMS. Delete paragraph 493.853(b)(4) of the proposed regulations.

Collection of Information Requirements (Submit these comments to OMB as well)

1) The need for the information collection and its usefulness in carrying out the proper functions of our agency.
2) The accuracy of our estimate of the information collection burden.
3) The quality, utility, and clarity of the information to be collected.
4) Recommendations to minimize the information collection burden on the affected public, including automated collection techniques.

The ASC offers the following suggestions to be included in the estimate of information burden: The discussion regarding the burden to notify each laboratory employee of the date, time, and location of testing (17.8 hours annually based on biennial testing) fails to account for the full burden: before notifying everyone of the date, time, and location, there is significant administrative burden to determine a suitable testing date. We conservatively estimate that 20 minutes per laboratory is needed for this step biennially: 0.33 hours per lab x 2,142 labs = 714 hours biennially, or 357 hours annually. Therefore, a more realistic conservative assessment would be:

18,877.64 hours total information collection burden (See notes below)

If the total information time is translated into a slide equivalent based upon workload limits prescribed by CLIA ’88, it translates into 89,893 slides that cannot be screened annually, representing significant time lost from patient care for a process that has no proven benefit (18,877.64 hours / 0.21 hours per slide = 89,893 slides).
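The 18,877.64-hour total and its slide equivalent can be checked against the itemized notes below. This is a sketch using only the letter's own figures (the annual-hour estimates from the notes, the ASC's 357-hour scheduling estimate, and the 0.21 hours-per-slide CLIA ’88 workload figure); the component labels are our shorthand:

```python
# Sketch verifying the burden arithmetic; all figures are the letter's own.
components = {
    "identify employees to be tested": 513.24,
    "notify employees (as published by CMS)": 17.8,
    "determine a suitable testing date (ASC estimate)": 357.0,
    "schedule make-up examinations": 22.1,
    "post-2nd-failure training and documentation": 220.0,
    "post-2nd-failure rescreening": 17325.0,
    "post-2nd-failure rescreening documentation": 247.5,
    "post-3rd-failure continuing education": 175.0,
}
total_hours = sum(components.values())   # 18,877.64 hours annually
slides = int(total_hours / 0.21)         # slide equivalent, truncated
print(f"{total_hours:,.2f} hours = {slides:,} slides not screened")
```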

Additionally, the time required for testing individuals, accompanying cytotechnologists, or proctors to travel to remote sites or main laboratories to be tested is not indicated in the calculations. With the interpretation of PT referral recently proffered by CMS, there is a significant time burden, especially for practitioners in far-flung, rural, or less populated areas. The time to travel depends upon the distance to laboratory sites, the number of required travelers (either (1) a pathologist or cytotechnologist traveling to a main facility, or (2) a proctor and cytotechnologist traveling to a remote site to test a pathologist), and the number of remote sites per practice. This is not only a burden of time for the testing of personnel but also time taken from patient care activities. Although the additionally described requirements that must be fulfilled by the PT providers are not “counted” directly in the burden assessment, it is important to note that the PT providers pass these burdens to the participants by charging fees to the laboratories whose personnel are required to be tested. Therefore, the overall burden to the participants includes not only the loss of valuable time as calculated above, but also a financial loss for each laboratory, for a process that has no proven benefit.

There is an additional burden to be considered that was not mentioned: the burden on the physician or cytotechnologist who changes jobs or works at more than one location to provide PT documentation to the laboratory director at the new workplace. The added average time burden associated with this requirement is at least 20 minutes per affected person.

Provide the Proficiency Testing (PT) program with information necessary to identify all laboratory employees at its facility who are to be tested. = 513.24 hours annually based on biennial testing

Notify each laboratory employee of the date, time, and location of testing. = 17.8 hours annually based on biennial testing [This fails to account for the full burden: before notifying everyone of the date, time, and location, there is significant administrative burden to determine a suitable testing date. We conservatively estimate that 20 minutes per laboratory is needed for this step biennially: 0.33 hours per lab x 2,142 labs = 714 hours biennially, or 357 hours annually.]

Contact the PT program to determine the date, time, and location of the make-up examination for excused absences.= 22.1 hours annually based on biennial testing

Employee after 2nd unsuccessful attempt at test to complete training and obtain documentation of that training. = 220 hours annually based on biennial testing

Pap smears screened by employee after 2nd unsuccessful attempt at test must be re-examined by a laboratory employee who has passed the PT test and the re-examination must be documented. =17,325 hours annually for rescreening plus 247.5 hours annually for documentation

Employee after 3rd unsuccessful attempt at test to obtain and document the continuing education = 175 hours annually

Archives Articles for Bibliography

Moriarty AT, Crothers BA, Bentz JS, Souers RJ, Fatheree LA, Wilbur DC. Automatic Failure in Gynecologic Cytology Proficiency Testing (PT): Results from the College of American Pathologists (CAP) Proficiency Testing Program (PAP PT). Arch Pathol Lab Med (In press).

Moriarty AT, Darragh TM, Souers R, Fatheree LA, Wilbur DC. Performance of Candida: Fungal-Induced Atypia and Proficiency Testing: Observations from the College of American Pathologists (CAP) Proficiency Testing (PT) Program. Arch Pathol Lab Med (In press).

Moriarty AT, Darragh TM, Souers R, Fatheree LA, Wilbur DC. Performance of Herpes Simplex Challenges in Proficiency Testing: Observations from the College of American Pathologists (CAP) Proficiency Testing (PT) Program. Arch Pathol Lab Med (In press).

Eversole G, Moriarty AT, Schwartz MR, Clayton AR, Souers R, Fatheree LA, Tench WD, Henry MD, Wilbur DC. Bethesda 2001 Implementation and Reporting Rates: 2006 Practices of Participants in the College of American Pathologists Interlaboratory Comparison Program in Cervicovaginal Cytology. Arch Pathol Lab Med (In press).

Crothers BA, Moriarty AT, Fatheree LA, Booth CN, Tench WD, Wilbur DC. Appeals in Gynecologic Cytology Proficiency Testing (PT): Review and Analysis of Data from the 2006 PAP PT Program of the College of American Pathologists (CAP). Arch Pathol Lab Med 2009;133:44–48.

Jonathan H. Hughes, Joel S. Bentz, Lisa Fatheree, Rhona J. Souers, David C. Wilbur, for the Cytopathology Resource Committee, College of American Pathologists. Changes in Participant Performance in the “Test-Taking” Environment: Observations From the 2006 College of American Pathologists Gynecologic Cytology Proficiency Testing Program. Arch Pathol Lab Med 2009;133:279–282.

Ann T. Moriarty MD, Mary R. Schwartz MD, Galen Eversole MD, Marilee Means PhD, SCT(ASCP), Amy Clayton MD, Rhona Souers MS, Lisa Fatheree BS, SCT(ASCP), William D. Tench MD, Michael Henry MD and David C. Wilbur MD. 2008: Human Papillomavirus Testing and Reporting Rates: Practices of Participants in the College of American Pathologists Interlaboratory Comparison Program in Gynecologic Cytology in 2006. Archives of Pathology and Laboratory Medicine: Vol. 132, No. 8, pp. 1290–1294.

Joel S. Bentz MD, Jonathan H. Hughes MD, Lisa A. Fatheree SCT(ASCP), Mary R. Schwartz MD, Rhona J. Souers MS, David C. Wilbur MD and for the Cytopathology Resource Committee, College of American Pathologists. 2008: Summary of the 2006 College of American Pathologists Gynecologic Cytology Proficiency Testing Program. Archives of Pathology and Laboratory Medicine: Vol. 132, No. 5, pp. 788–794.

Andrew A. Renshaw, MD; Molly K. Walsh, PhD; Barbara Blond, MBA; Ann T. Moriarty, MD; Dina R. Mody, MD; Terence J. Colgan, MD; for the Cytopathology Resource Committee, College of American Pathologists 2006. Robustness of Validation Criteria in the College of American Pathologists Interlaboratory Comparison Program in Cervicovaginal Cytology Archives of Pathology and Laboratory Medicine: Vol. 130, No. 8, pp. 1119–1122.

Nancy A. Young MD, Ann T. Moriarty MD, Molly K. Walsh PhD, Edward Wang PhD and David C. Wilbur MD. 2006: The Potential for Failure in Gynecologic Regulatory Proficiency Testing With Current Slide Validation Criteria: Results From the College of American Pathologists Interlaboratory Comparison in Gynecologic Cytology Program. Archives of Pathology and Laboratory Medicine: Vol. 130, No. 8, pp. 1114-1118.

Andrew A. Renshaw MD, Dina R. Mody MD, Edward Wang PhD, Jennifer Haja CT(ASCP), Terence J. Colgan MD and for the Cytopathology Resource Committee, College of American Pathologists. 2006: Hyperchromatic Crowded Groups in Cervical Cytology—Differing Appearances and Interpretations in Conventional and ThinPrep Preparations: A Study From the College of American Pathologists Interlaboratory Comparison Program in Cervicovaginal Cytology. Archives of Pathology and Laboratory Medicine: Vol. 130, No. 3, pp. 332–336.

Andrew A. Renshaw MD, Mary R. Schwartz MD, Edward Wang PhD, Jennifer Haja CT(ASCP) and Jonathan H. Hughes MD, PhD. 2006: Cytologic Features of Adenocarcinoma, Not Otherwise Specified, in Conventional Smears: Comparison of Cases That Performed Poorly With Those That Performed Well in the College of American Pathologists Interlaboratory Comparison Program in Cervicovaginal Cytology. Archives of Pathology and Laboratory Medicine: Vol. 130, No. 1, pp. 23–26.

Andrew A. Renshaw MD, Michael R. Henry MD, George G. Birdsong MD, Edward Wang PhD, Jennifer Haja CT(ASCP) and Jonathan H. Hughes MD, PhD. 2005: Cytologic Features of Squamous Cell Carcinoma in Conventional Smears: Comparison of Cases That Performed Poorly With Those That Performed Well in the College of American Pathologists Interlaboratory Comparison Program in Cervicovaginal Cytology. Archives of Pathology and Laboratory Medicine: Vol. 129, No. 9, pp. 1097–1099.

Andrew A. Renshaw MD, Dina R. Mody MD, Edward Wang PhD, David C. Wilbur MD, Terence J. Colgan MD and for the Cytopathology Resource Committee, College of American Pathologists. 2005: Measuring the Significance of Participant Evaluation of Acceptability of Cases in the College of American Pathologists Interlaboratory Comparison Program in Cervicovaginal Cytology. Archives of Pathology and Laboratory Medicine: Vol. 129, No. 9, pp. 1093–1096.

Tamela M. Snyder MD, Andrew A. Renshaw MD, Patricia E. Styer PhD, Dina R. Mody MD, Terence J. Colgan MD and for the Cytopathology Resource Committee, College of American Pathologists. 2005: Altered Recognition of Reparative Changes in ThinPrep Specimens in the College of American Pathologists Gynecologic Cytology Program. Archives of Pathology and Laboratory Medicine: Vol. 129, No. 7, pp. 861–865.

Andrew A. Renshaw MD, Marianne U. Prey MD, Lori Hodes CT(ASCP), Maggie Weisson CT(ASCP), Jennifer Haja CT(ASCP), Ann T. Moriarty MD and for the Gynecologic Cytology Committee, College of American Pathologists. 2005: Cytologic Features of High-Grade Squamous Intraepithelial Lesion in Conventional Slides: What Is the Difference Between Cases That Perform Well and Those That Perform Poorly?. Archives of Pathology and Laboratory Medicine: Vol. 129, No. 6, pp. 733–735.

Andrew A. Renshaw MD, Edward Wang PhD, Dina R. Mody MD, David C. Wilbur MD, Diane D. Davey MD, Terence J. Colgan MD and for the Cytopathology Resource Committee, College of American Pathologists. 2005: Measuring the Significance of Field Validation in the College of American Pathologists Interlaboratory Comparison Program in Cervicovaginal Cytology: How Good Are the Experts?. Archives of Pathology and Laboratory Medicine: Vol. 129, No. 5, pp. 609–613.

Andrew A. Renshaw MD, Barbara Dubray-Benstein CT(ASCP), Jennifer Haja CT(ASCP), Jonathan H. Hughes MD, PhD and for the Cytopathology Resource Committee, College of American Pathologists. 2005: Cytologic Features of Low-Grade Squamous Intraepithelial Lesion in ThinPrep Papanicolaou Test Slides and Conventional Smears: Comparison of Cases That Performed Poorly With Those That Performed Well in the College of American Pathologists Interlaboratory Comparison Program in Cervicovaginal Cytology. Archives of Pathology and Laboratory Medicine: Vol. 129, No. 1, pp. 23–25.

Andrew A. Renshaw MD, Michael A. Schulte MD, Elizabeth Plott CT(ASCP), Barbara Dubray-Benstein CT(ASCP), Camilla J. Cobb MD, Richard L. Lozano MD, Margaret H. Neal MD, Jonathan H. Hughes MD, PhD, Nancy A. Young MD and Marianne Prey MD; for the Cytopathology Resource Committee, College of American Pathologists. 2004: Cytologic Features of High-Grade Squamous Intraepithelial Lesion in ThinPrep Papanicolaou Test Slides: Comparison of Cases That Performed Poorly With Those That Performed Well in the College of American Pathologists Interlaboratory Comparison Program in Cervicovaginal Cytology. Archives of Pathology and Laboratory Medicine: Vol. 128, No. 7, pp. 746–748.

Andrew A. Renshaw MD, Barbara Dubray-Benstein CT(ASCP), Camilla J. Cobb MD, Richard L. Lozano MD, Margaret H. Neal MD, Marianne Prey MD, Michael A. Schulte MD; and for the Gynecologic Cytology Committee, College of American Pathologists. 2004: Cytologic Features of Squamous Cell Carcinoma in ThinPrep Slides: Evaluation of Cases That Performed Poorly Versus Those That Performed Well in the College of American Pathologists Interlaboratory Comparison Program in Cervicovaginal Cytology. Archives of Pathology and Laboratory Medicine: Vol. 128, No. 4, pp. 403–405.

Andrew A. Renshaw MD, Dina R. Mody MD, Richard L. Lozano MD, Emily E. Volk MD, Molly K. Walsh PhD, Diane D. Davey MD and George G. Birdsong MD. 2004: Detection of Adenocarcinoma In Situ of the Cervix in Papanicolaou Tests: Comparison of Diagnostic Accuracy With Other High-Grade Lesions. Archives of Pathology and Laboratory Medicine: Vol. 128, No. 2, pp. 153–157.

Andrew A. Renshaw MD, Nancy A. Young MD, George G. Birdsong MD, Patricia E. Styer PhD, Diane D. Davey MD, Dina R. Mody MD and Terence J. Colgan MD. 2004: Comparison of Performance of Conventional and ThinPrep Gynecologic Preparations in the College of American Pathologists Gynecologic Cytology Program. Archives of Pathology and Laboratory Medicine: Vol. 128, No. 1, pp. 17–22.

Diane D. Davey MD, Margaret H. Neal MD, David C. Wilbur MD, Terence J. Colgan MD, Patricia E. Styer PhD and Dina R. Mody MD. 2004: Bethesda 2001 Implementation and Reporting Rates: 2003 Practices of Participants in the College of American Pathologists Interlaboratory Comparison Program in Cervicovaginal Cytology. Archives of Pathology and Laboratory Medicine: Vol. 128, No. 11, pp. 1224–1229.

Andrew A. Renshaw MD, Diane D. Davey MD, George G. Birdsong MD, Molly Walsh PhD, Patricia E. Styer PhD, Dina R. Mody MD and Terence J. Colgan MD. 2003: Precision in Gynecologic Cytologic Interpretation: A Study From the College of American Pathologists Interlaboratory Comparison Program in Cervicovaginal Cytology. Archives of Pathology and Laboratory Medicine: Vol. 127, No. 11, pp. 1413–1420.

Diane D. Davey MD and Richard J. Zarbo MD, DMD. 2003: Introduction and Commentary, Strategic Science Symposium: Human Papillomavirus Testing—Are You Ready for a New Era in Cervical Cancer Screening? Archives of Pathology and Laboratory Medicine: Vol. 127, No. 8, pp. 927–929.

Dina R. Mody MD. 2003: The Pap Smear: Controversies in Practice. Archives of Pathology and Laboratory Medicine: Vol. 127, No. 4, p. 510.

Terence J. Colgan MD, Sherry L. Woodhouse MD, Patricia E. Styer PhD, Mary Kennedy CT(ASCP), MPH and Diane D. Davey MD. 2001: Reparative Changes and the False-Positive/False-Negative Papanicolaou Test. Archives of Pathology and Laboratory Medicine: Vol. 125, No. 1, pp. 134–140.

Bruce A. Jones MD and Diane D. Davey MD. 2000: Quality Management in Gynecologic Cytology Using Interlaboratory Comparison. Archives of Pathology and Laboratory Medicine: Vol. 124, No. 5, pp. 672–681.

Diane D. Davey MD, Sherry Woodhouse MD, Patricia Styer PhD, Janet Stastny DO and Dina Mody MD. 2000: Atypical Epithelial Cells and Specimen Adequacy. Archives of Pathology and Laboratory Medicine: Vol. 124, No. 2, pp. 203–211.

Sherry L. Woodhouse MD, Janet F. Stastny DO, Patricia E. Styer PhD, Mary Kennedy CT(ASCP), MPH, Amy H. Praestgaard MS and Diane D. Davey MD. 1999: Interobserver Variability in Subclassification of Squamous Intraepithelial Lesions. Archives of Pathology and Laboratory Medicine: Vol. 123, No. 11, pp. 1079–1084.