
Malavika Attur

Dr. Harold Varmus

MHC 360: The Purpose, Practice and Politics of Science

Discovery vs. Invention in Scientific Patents

Until the recent ruling in Association for Molecular Pathology v. Myriad Genetics in 2013, it was possible to patent the genetic material of a living organism that had been isolated and/or manipulated from the host organism. The Supreme Court ruling that “…a naturally occurring DNA segment is a product of nature and not patent eligible merely because it has been isolated…” (Myriad Genetics, 2013) left scientists and lawyers alike curious as to the impact the decision would have on the research community, and how patent rules and limitations would change in the future.

The patent history that led up to this case is lengthy and complex. The first patent statute passed by the US government was the US Patent Act of 1790. It defined a patentable invention as “…any useful art, manufacture, engine, machine or device, or any improvement thereon not before known or used” (USPTO, 2002). The ultimate mission of a patent system is to incentivize and enable both individuals and businesses to turn creative ideas into useful goods that improve some aspect of human life. It is clear that the US patent system has brought many economic benefits to the country – from promoting innovation to encouraging investment, growth and efficient use of resources to allowing for monetization and financial returns. The life sciences in particular have been significantly shaped by the evolution of patent law.

As a subgroup of all existing patents, biological patents began to appear as early as the 1900s. The first biological patent is considered to have been filed by Jokichi Takamine in 1906 for adrenaline isolated from the suprarenal glands of an animal (Beauchamp, 2013). As discussed later in this paper, this patent resulted in a court case whose decision highlighted the four patent judgement criteria found in Title 35 of the United States Code – that the invention in question must have a proper use (useful), must not have been known or used before filing (novel), must not be an improvement easily made by someone specialized in the relevant area (non-obvious), and must enable one skilled in the field to use it for its specified purpose (enabled) (35 U.S.C. §§ 101, 102, 103, 112). While initially broad, the understanding and implementation of these rules have changed as years have passed due to the differing decisions of numerous patent cases.

It is currently thought that there are three exceptions to patent eligibility – laws of nature, natural phenomena and abstract thought. It is important to note that these exceptions are not described in 35 U.S.C.; they are judicially created as a result of patent case decisions (Beauchamp, 2013). Any patent cases contingent on these exceptions are important because they set precedent for future cases on similar matters, and they hint towards the increasingly blurry line between invention and discovery in biological research. This is due to the fast-paced innovations in biotechnology and sequencing techniques, which have significantly broadened the range of potentially patentable subject matter (Dutfield, 2003). To understand the current state of patentable subject matter for biological patents, we will look at landmark court cases which have had the largest impact on the narrative of genetic material patent eligibility.

The Case of the First Biological Patent

One of the foundational cases in the debate over gene patents is Parke-Davis & Co. v. H. K. Mulford Co., a dispute which revolved around the previously mentioned Jokichi Takamine. Decided in 1911, the case upheld Takamine’s patent for the isolated adrenaline of the suprarenal gland after suit was filed against Mulford Co. for creating a similar product. Learned Hand, the judge who ruled on the side of Parke-Davis (which represented Takamine), deemed that the adrenaline was eligible for patent because it was identified, isolated and purified to derive the active hormone (Beauchamp, 2013). The victory in this case was integral in establishing the distinction between natural and non-natural things. The case was also a watershed moment in the practice of patenting isolated biological compounds, key to the expansion of the biomedical and pharmaceutical fields, even though patent cases in later years would be starkly anti-product of nature (Beauchamp, 2013). Additionally, the ruling from this case has been at the center of decades of frenzied debate, and is still brought up in discussion of the Myriad ruling of 2013. Those in favor of the decision say that this case is the foundation of the isolation-and-purification rationale for allowing gene patenting, while those against it say that it is a sharp departure from prior understanding of products of nature.

Patentability of Natural Principles

The next two court cases helped establish the patent exceptions of natural phenomena and laws of nature. The first is Funk Bros Seed Co. v. Kalo Inoculant Co. in 1948 (Funk Bros Seed Co., 1948). The invention in question was a packaged mixture of Rhizobium bacteria. Before 1948, packets of Rhizobium bacteria, used by certain plants to help fix nitrogen from the air for conversion into organic nitrogenous compounds, were sold to farmers. Different strains of the bacteria generally have mutually inhibitive effects on one another and so were usually only sold separately. Bond, the man behind the patent, found unique strains that did not have mutually inhibitory effects and thus began to sell mixed inoculant packages (Beauchamp, 2013). Funk Bros Seed Co. began to sell similar mixtures, resulting in a court case that eventually reached the Supreme Court. There, it was ruled that “He who discovers a hitherto unknown phenomenon of nature has no claim to a monopoly of it which the law recognizes. If there is to be invention from such a discovery, it must come from the application of the law of nature to a new and useful end” (Funk Bros Seed Co., 1948). This was a fundamentally important case for two main reasons – it made clear that phenomena of nature are unpatentable, and it gave a statement on the judgement criteria for applications of such discoveries. The use of the natural principle must not be a product of skill, but one of invention. Anyone who learned of the non-inhibitive qualities of the Rhizobium strains could easily mix them together, deeming the mixture to be “falling short of invention” (Funk Bros Seed Co., 1948).

The Mayo Collaborative Services v. Prometheus Laboratories case of 2012 was a dispute that tackled an issue similar to that of Funk Bros Seed Co. v. Kalo Inoculant Co. – whether laws of nature are patentable (Mayo Collaborative Services, 2012). The patents in question were held by Prometheus, the exclusive licensee of multiple patents that dealt with the use of thiopurine drugs in the treatment of autoimmune diseases. When thiopurine drugs are administered to the patient, they are metabolized into metabolites. Because all patients metabolize the drugs differently, it is difficult for doctors to determine the correct dosage. The patents claimed by Prometheus concerned the process used to determine the correlation between metabolite levels after different drug doses: an administering step, a determining step and a change-in-dose step. The Supreme Court unanimously ruled in 2012 that the correlation between the naturally produced metabolites and therapeutic efficacy was a natural law, and it consequently invalidated the patents (Mayo Collaborative Services, 2012). The implication of this case is similar to the one previously discussed, namely that when a process relies on a natural law to work, it is additionally required to work in a way that is not routine or conventional.

Both the Funk Bros and Mayo cases are significant for the limiting approach they took towards patenting laws of nature and natural phenomena. It is clear from both that the boundary between a patentable application of a natural principle and an unpatentable natural principle is decided on a case-by-case basis – at that point the boundary between discovery and invention was still very unclear.

Abstract Thought

Diamond v. Diehr was a foundational case in the discussion of the patentability of abstract thought. Diehr and Lutton used a mathematical equation to determine the time needed to cure synthetic rubber for desired characteristics. They used the equation to write a computer program that performed the calculations for the curing process, opening the molding cavity when temperature detectors determined that the rubber had reached its desired state. They applied for a patent for the curing process, which was denied by the USPTO, and the case eventually reached the Supreme Court (Diamond, 1981). The Court ruled that while the execution of the physical process was patentable, mathematical formulas in the abstract are not. The process itself was considered patentable subject matter because the invention aided in the transformation of synthetic rubber to a new state. This case was the center of great discussion amongst both the Court’s judges and industry professionals. Dissenters from the decision argued that there was still a lack of understanding of the distinction between what the scientist claims to have discovered and whether the invention is actually novel – essentially, whether the language used in the patent application or in prior patent cases should take precedence in deciding whether something qualifies as patentable subject matter (Stobbs, 2016). As with the cases concerning natural principles, it is clear that Diamond v. Diehr does not give a solid answer as to what constitutes an unpatentable abstract idea – a case-by-case basis was to be used to refine the courts’ understanding of discovery vs. invention.


Modified Organisms

Similar to Parke-Davis & Co. v. H. K. Mulford Co., the Diamond v. Chakrabarty case of 1980 was a landmark in genetic material patent eligibility. In 1972, Chakrabarty filed a patent for a genetically modified bacterium he had developed with the addition of plasmids that gave the bacterium the ability to break down crude oil. Sidney Diamond, Commissioner of Patents and Trademarks, helped bring the case through the court circuit up to the Supreme Court, where it was ruled that a live, human-made micro-organism is patentable subject matter because it constitutes a composition of matter (Diamond, 1980). This decision revolutionized the biotechnology industry – the ruling that lifeforms derived from nature can be eligible for patenting if modified gave biotech companies the assurance that emerging technology could be protected by patents. As a result of this ruling, many subsequent patent filings have been granted on genetically modified organisms such as plants, animals, stem cells and tissue – one patent of note was the Harvard mouse, the first transgenic animal patent (Schneider, 1988). Gene patents in particular were becoming popular – by 2012, over 20 percent of the human genome was privately owned (ACLU, 2013). The competitive climate facilitated by the case decision eventually led to the infamous Myriad Genetics patent dispute.

Gene Patenting

The issue of gene patenting came to public attention with Association for Molecular Pathology v. Myriad Genetics in 2013. Myriad Genetics is an American molecular diagnostic company that obtained patents on the human tumor suppressor genes BRCA1 and BRCA2 (both of which make proteins responsible for repairing DNA in breast tissue), on multiple separate proteins, and on associated diagnostic tests that were useful in identifying susceptibility to breast cancer (Association for Molecular Pathology, 2013). Doctors and numerous other professionals in the medical field became unhappy with Myriad’s consistent accusations of patent violation against others who tried to identify the genes and develop their own tests, and with its willingness to block innovation for profit. A later lawsuit led to a Supreme Court decision that ruled “a naturally occurring DNA segment is a product of nature and not patent eligible merely because it has been isolated” (Association for Molecular Pathology, 2013). The Myriad decision calls into question the rationale of Parke-Davis & Co. v. H. K. Mulford Co., which, as previously mentioned, held that isolating or manipulating naturally occurring molecules was a basis for patent eligibility. While the decision does disrupt long-held beliefs and expectations of scientists and other professionals in the biotech and medicine industries, the Court emphasized that there would still be other ways to preserve the patent eligibility of subject matter fundamental to the industry. The example provided was an important one – complementary DNA (cDNA) was still eligible for patenting (Feldman, 2014).

Concluding Thoughts

There is no simple, concise answer to the question of the difference between discovery and invention in scientific patents. It is clear from the discussed court cases that the understanding of whether products of nature can be patented changes on a case-by-case basis – some patents have been issued and some have been denied. Although the historical foundations of patent eligibility are rather shaky, there is intense debate on the implications of the Myriad case and whether there should be increasing constraints on patent eligibility. Those who argue for an expansive interpretation of invention say that the locating and isolating of biological matter takes skill and ingenuity, fulfilling the novelty criterion for patents. In addition, they also note that patent ownership attracts investment to up-and-coming biotech companies for risky research and drug development (Dutfield, 2003). On the other hand, proponents of patent restrictions argue that the isolation and modification of biological matter lacks an inventive step, as any professional in the field can easily recreate the invention. They also bring up the fact that the current understanding of gene function is far beyond the simple premise of “DNA makes RNA makes protein” – rather than being isolated islands, genes work together for successful protein manufacturing (Dutfield, 2003). Additionally, patent thickets and the anticommons effect, both of which were seen in the Myriad case, restrict downstream research and product development, leading to large increases in research and development costs as the result of multiple licensing deals (Heller, 1998).

It is clear that patenting in the sciences, particularly the biological sciences, has become an integral part of the industry. The recent ruling in the Association for Molecular Pathology v. Myriad Genetics case of 2013 boldly went against prescribed patentability narratives by declaring genes to be unpatentable as products of nature. More time is needed to understand the implications of the decision and to see how it establishes precedent for future cases, in order to get a better sense of how the distinction between discovery and invention has evolved.

It may be interesting to have future policy discussions on whether patent eligibility should be eased for new areas of research that are monetarily risky, for the sake of downstream research. In the end, one thing that can be agreed upon is that patenting any invention should not impede any technological development or progression in science research that would ultimately improve the quality and longevity of human life.


Works Cited

ACLU. (2013). Legal Challenges to Human Gene Patents. Retrieved from https://www.aclu.org/files/pdfs/freespeech/brca_qanda.pdf

Association for Molecular Pathology v. Myriad Genetics. 569 U.S. ___ (2013). Retrieved from https://supreme.justia.com/cases/federal/us/569/12-398/

Beauchamp, C. (2013). Patenting Nature: A Problem of History. Stanford Technology Law Review. 16(2), 257.

Diamond v. Chakrabarty. 447 U.S. 303. (1980). Retrieved from https://supreme.justia.com/cases/federal/us/447/303/case.html

Diamond v. Diehr. 450 U.S. 175. (1981). Retrieved from https://supreme.justia.com/cases/federal/us/450/175/case.html

Dutfield, G. (2003). Intellectual property and basic research: discovery vs. invention. [Policy brief]. Retrieved from: http://www.scidev.net/global/policy-brief/intellectual-property-and-basic-research-discovery.html

Feldman, R. (2014, November 2). Gene Patenting After the U.S. Supreme Court Decision – Does Myriad Matter? Stanford Law and Policy Review. 26,16.

Funk Brothers Seed Co. v. Kalo Inoculant Co. 333 U.S. 127. (1948). Retrieved from https://supreme.justia.com/cases/federal/us/333/127/case.html

Heller, M., & Eisenberg, R. (1998). Can Patents Deter Innovation? The Anticommons in Biomedical Research. Science. 280(5364), 698-701.

Mayo Collaborative Services v. Prometheus Laboratories Inc. 566 U.S. ___ (2012). Retrieved from https://supreme.justia.com/cases/federal/us/566/10-1150/opinion3.html

Schneider, K. (1988). Harvard Gets Mouse Patent, A World First. The New York Times. Retrieved from http://www.nytimes.com

Stobbs, G. (2016). Business Method Patents. New York, New York: Wolters Kluwer.

United States Patent and Trademark Office. (2002). The U.S. Patent System Celebrates 212 Years [Press Release]. Retrieved from https://www.uspto.gov/about-us/news-updates/us-patent-system-celebrates-212-years


Caitlin Larsen

MHC 360

Professor Harold Varmus

The Public Role of Science

The American Museum of Natural History (AMNH) has been providing access to science since 1869 to people not only from New York City, but from all around the world. From the aloof blue whale suspended over the dimly lit Milstein Hall of Ocean Life, to the jaw-dropping, imposing Tyrannosaurus rex in the Dinosaur Hall, to the Milky Way that forms part of the wondrous expanse of the universe condensed into the Hayden Planetarium, every corner of the museum provides an opportunity for visitors to get a glimpse of all that our planet has to offer.

However, there is much more to the museum than its exhibits. Within the museum’s home between 79th and 81st Streets, there is also well-developed research and education going on behind the scenes. In an effort to expand the educational programs the museum already offers, the museum has proposed an expansion called the Richard Gilder Center for Science, Education, and Innovation. In the words of Dan Slippen, the Senior Director of Science Education at AMNH, during a personal interview: “This facility will be an all-inclusive museum building. It will carry educational classrooms, scientific spaces, collection storage, and a new theater that will be able to give the visiting public an opportunity to see what they can’t with their own naked eye.” This expansion’s exhibits focus on seeing things at the microscopic level: taking a deeper look into the brain, the human body, the depths of the ocean, or a grain of sand. However, the goals of the Gilder Center are perhaps too big, if not in scope then in terms of the actual size of the expansion. Some members of the public are not very thrilled about the expansion; namely, a small sliver of the community that has organized into a group called Community United to Protect Theodore Roosevelt Park. Of the 218,000 square feet the Gilder Center plans to occupy, 11,600 square feet will encroach upon Theodore Roosevelt Park, which has been home to the museum for over a century, since 1877. Additionally, the expansion will include a new entrance at Columbus Avenue and 79th Street. Community United has resisted the Gilder Center due to its expansion into parkland, its likely detrimental effect on the surrounding neighborhood, the museum’s irresponsible use of public funds, and the museum’s lack of transparency and honest communication with the public.

It seems as if every week there is a new fight in New York City over new housing developments and tall towers that will adversely affect a community. In my paper, I seek to find what distinguishes this particular public controversy over the museum’s expansion from the typical NYC real estate battle over prized space. More specifically, I want to ask how a prestigious scientific, academic institution, supported by influential donors and powerful politicians, accomplishes its programmatic goals over community objections. Although scientific institutions like AMNH have the backing of politicians and millions of dollars, what separates an expanding scientific institution from any other development project is the veneration of the public benefits and positive impact science has on a community. In the words of Helen Rosenthal, the City Councilmember for District 6, where the museum resides: “The American Museum of Natural History is a gem. And there’s an opportunity to grow that space that would be available for the natural sciences—who wouldn’t jump on that?” Due to the combination of all of these factors, the progress of the Gilder Center’s expansion has been remarkably smooth, without the community’s resistance providing much of an obstacle.

I had the opportunity to speak with William Raudenbush, the Vice President of Community United. Upon sending an email to the organization that I was unsure would even be seen, I found myself with a response mere hours later. Mr. Raudenbush expressed keen interest in speaking with me, stating that I was “in the perfect place at the perfect time to have a meaningful conversation about an important topic that does not always have the opportunity to see the light of day.” When I got on the phone with Mr. Raudenbush, I was immediately struck by his fervent passion in his fight against the expansion.

Community United’s first argument is that the Gilder Center will encroach on and reduce the community’s valuable public park space, a place where parents take their children to play after school and form important memories. However, the Museum certainly has the upper hand in this argument. As Dan Slippen explained to me, the legislation that originally permitted AMNH to build their facilities in Theodore Roosevelt Park not only allowed the Museum complex to reside in the park, but also gave them permission to expand for the purposes of the Museum. In addition, the Gilder Center would take up only 11,600 square feet of the 765,349 total square footage of the park: roughly 1.5% of the park, according to the NYC Parks Department. Only seven trees would be impacted by the expansion. The Museum, aware of the community’s love for the park, even compromised on their proposal for the Gilder Center. They expressed in their Environmental Impact Statement draft that they had scaled back the expansion to take up only a quarter of an acre of the 17.57-acre park, as opposed to half an acre. Additionally, according to their Environmental Impact Statement draft, the Museum is removing three of their existing buildings in order to minimize any detrimental impact on the park.
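As a quick check of these figures (my own arithmetic, not a calculation given by the Museum or the Parks Department), the two ways of stating the footprint are consistent:

11,600 sq ft ÷ 765,349 sq ft ≈ 0.0152 ≈ 1.5%, while 0.25 acre ÷ 17.57 acres ≈ 0.0142 ≈ 1.4% (a quarter acre is about 10,890 sq ft, close to the 11,600 sq ft cited).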

Michael Kimmelman of the New York Times highlights a possible trade-off between the museum and nearby residents in his article “Fair Trade: A Museum Expansion for an Open Park.” As part of the NYC Parks Department initiative called Parks Without Borders, the gated and closed southwest section of the park could be opened to the public. This would make approximately an acre of new parkland available for the community, which is even more acreage than what it would lose with the Gilder Center. However, this suggestion highlights another potential weakness of groups like Community United against AMNH: the subjective definition of community. As Kimmelman outlines, the residents surrounding the park are divided between the north and south sides. While the Gilder Center has already been accepted by the Theodore Roosevelt Park Neighborhood Association of residents around 81st Street, a group of residents a mere four blocks south on 77th Street has its own group, called Friends of Roosevelt Park. This group opposes the Parks Without Borders plan, which would disturb the peace outside their homes by opening this section of the park up to the public. Even Dan Slippen from AMNH acknowledged the wide range of groups when asked about any resistance the Gilder Center has encountered: “There’s been a number of organizations that were once all together, then splintered off into separate organizations.”

While Dan Slippen related to me that “the overall issues we’ve been hearing are directed towards impact on the park, not about the museum and what it does,” Mr. Raudenbush’s next point of contention focused on AMNH’s irresponsible financial budgeting. In fact, he spent the bulk of our conversation discussing this issue, claiming that “what’s galling about the project isn’t about parkland; it’s everything together.”

According to Dan Slippen, the total cost of the Gilder Center is expected to be approximately $325 million, with two thirds of that amount coming from private funding, and the rest – about $108 million – from public funding. According to a New York Times article by Robin Pogrebin, Richard Gilder himself has donated $50 million to the new addition to AMNH.
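A quick check of the split (my own arithmetic, not a figure from the sources): one third of the $325 million total is about $108.3 million, matching the public share cited above; the private share would then be roughly $216.7 million, of which Gilder’s $50 million gift alone covers close to a quarter.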

Raudenbush’s first complaint about the museum’s funding revolved around an important distinction in the city’s budget: expense funds versus capital funds. According to NYC’s Independent Budget Office, capital funds are investments the city makes in projects over multi-year periods. On the other hand, the expense budget limits spending by funding costs for only one fiscal year at a time. The Gilder Center’s public funding comes primarily from the city’s capital budget. In fact, in 2014, $15 million was included in the capital budget for the Museum’s expansion, according to Robin Pogrebin of the New York Times. What Raudenbush finds appalling about this fact is that he does not view the Museum as a fiscally responsible body. Raudenbush provided me with a document detailing AMNH’s financial statements, labeled “Series 2015 AMNH bonds.” This document reveals that the Museum is over $400 million in debt. With this level of debt, Raudenbush wonders why the city would essentially enable the Museum to rack up even more debt in this costly endeavor, the total of which does not include the maintenance, heating, and cooling of the new building.

Despite Mr. Raudenbush’s research, the Museum has a hand on the purse strings of prominent politicians in the community. As of 2016, NYC had committed a total of $44.3 million to the project. The Museum’s hope to receive one third of total costs from public funding does not appear to be too much of a stretch. An article written by Jackson Chen for the local newspaper Manhattan Express details the extent of the public funding received so far. Since fiscal year 2012, big-name politicians such as Mayors Michael Bloomberg and Bill de Blasio, Borough Presidents Scott Stringer and Gale Brewer, and City Councilmembers such as Dan Garodnick, Jimmy Van Bramer, and even the Councilmember for the Museum’s district, Helen Rosenthal, have approved the allocation of funds for the project. Community United’s next point of contention is that contributions of millions were given by these politicians as early as 2012, before the project was fully planned, and before the community was asked for its input.

This lack of community transparency has continued to the present day. Mr. Raudenbush related to me that in 2015, AMNH conducted a study over one week, in which 1,500 people were polled. The Museum used this study to argue in its application for the Richard Gilder Center that the new facility will bring 1.4 million more visitors in its first year, only 20% of whom will use the new entrance on Columbus Avenue. Raudenbush and members of his group feel that this study should have been conducted by an independent body, because the Museum’s vested interest in the expansion could possibly have led to biased results. The main concern is that, if the museum’s numbers are off, there is no system in place to mitigate a drastic increase in crowds and transportation. This, in turn, could potentially affect the services the community receives in terms of police, ambulances, and the fire department. However, when I questioned a source at the Museum, who asked to remain anonymous, the person was frustrated by this claim. The source explained that not only is there already an entrance on Columbus Avenue that would merely be moved when the Gilder Center is built, but that the majority of visitors use the museum entrance that is closer to train lines and access the rest of the complex from within the museum.

Finally, Mr. Raudenbush expressed concern about the Gilder Center’s main goal: expanding science education to children in populations around New York City. According to the Museum’s Environmental Impact Statement draft, the Gilder Center renovations will include three new classrooms for middle school students and six renovated classrooms for elementary school students. These state-of-the-art classrooms will be available to public school students who would not otherwise have access to scientific equipment and research. Additionally, the Gilder Center has plans to work with the Department of Education to invite schools to come to the museum for field trips. Finally, the Research Library and Learning Center that exists in the Museum will be renovated to make it more accessible and to provide additional space for programs that foster adult learning. Lisa Gugenheim, senior vice president at AMNH, has told Madeleine Thompson of the NY Press that, taken together, these are “spaces that are specially meant to support learners across their lifespan.” Dan Slippen supported this notion, telling me: “the museum provides access for schools with a K-12 educational program, starting with early-childhood education to science apprenticeship to science research mentoring, to paid internships in college.” The Museum already contains an Urban Advantage Middle School Initiative, which serves over 62,000 students from more than 220 schools. This program does not merely provide for one-time visits to the museum. According to the program’s website, it supports students in the long term, and even provides professional training for teachers and scientific equipment for schools.

Science education is a prominent goal of the Museum, according to President Ellen Futter, who made a public statement to Robin Pogrebin of the New York Times on the “gap in the public understanding of science at the same time when many of the most important issues have science as their foundation – human health, environment, biodiversity, climate change, mass extinction. This museum has a role to play in society in terms of enhancing the role of science.” With such a lofty goal in their mission statement, it is hard to argue with the Museum. While they could be profiting off their exhibits, they are giving back to the community by shaping a future generation of scientists. My undisclosed source related to me that there is a program hosted by the museum called the Master of Arts in Teaching, which the museum website further highlights. Participants in this program must agree to teach science in the city upon completion of their degree. In fact, the program was specifically created in response to the shortage of science teachers in NYC’s inner-city public schools.

Community United, in a surprising twist, supports the educational mission of the Museum. Mr. Raudenbush stressed the importance of science to our future. However, he thinks the city should reallocate to our public schools the millions it now gives to the Museum. Instead of the Museum inviting to its new facility those public school students who don’t have access to research facilities, the public schools themselves could host classrooms like the ones at the Gilder Center with the reallocated funds. I found this to be the most convincing argument on Community United’s part. However, the Museum is an organization that wants to do public good, and they are doing this in what they consider to be the best, most concrete way possible. They do have programs for children that extend beyond mere field trips for the day, such as the Urban Advantage Middle School Initiative; and the expanded resources of the Museum, such as the behind-the-scenes exhibits to which children have access, cannot necessarily be brought to a classroom. Here, young minds can see scientists at work in their facilities and find inspiration.

Though the city could rethink their allocation of city expense funds in the future, the Gilder Center is an expansion that goes beyond mere self-interest on the part of the Museum. That, in my opinion, is why Dan Slippen related to me happily that the Gilder Center is well on its way to becoming a concrete reality. The Center has been approved by the Landmarks Preservation Commission and local Community Board 7. It is in the process of putting together its Environmental Impact Statement, a draft of which is expected to be released shortly. Slippen said, “That will take a couple of months, get approved, and we’ll be ready to go by early 2018.” He did not seem at all concerned that the Gilder Center might not move forward.

Community United is also aware that the Gilder Center project is too large and beneficial for Community United to curtail. Mr. Raudenbush told me with a note of determination in his voice that they have raised funds to hire a lawyer; however, he mentioned that this fight has become less about stopping the Gilder Center and more about focusing on an honest discussion of its impact on the community.

Ultimately, the Museum has compromised quite a bit with the community while simultaneously balancing the needs of its program, garnering support from a variety of groups. Even Mr. Raudenbush expressed to me the fact that his organization and the community’s residents love the Museum and what it does. Slippen mentioned that communication with all other community groups has been easier, at one point involving the whole community in the task of helping to shape the Museum’s design for the Landmarks Preservation Commission. Though Community United does have some valid concerns, according to the organization’s website, their new lawyer promises not to give up in the fight to get as much compromise and transparent communication as possible for their neighborhood. Ultimately, compared to the number of residents that will be affected by the expansion – residents who, by the way, already have access to Central Park adjacent to Theodore Roosevelt Park – the sheer size of the community that will be helped by AMNH seems to provide a rationale for the expansion that far outweighs community concerns. The wealth that this scientific institution has gained through not only its public exhibits but private and public funding is a function of science’s venerated status in the world in which we live today and the public good it accomplishes for others.


Works Cited

American Museum of Natural History Taxable Bonds, Series 2015. New York: American Museum of Natural History, 2015. PDF.

Chen, Jackson. "Public Bucks –– Tens of Millions –– Already Sunk Into Natural History Expansion, Critics Note." Manhattan Express. Manhattan Express News, 05 May 2016. Web. 07 May 2017.

Community United to Protect Theodore Roosevelt Park. N.p., n.d. Web. 07 May 2017.

“History 1869-1900.” AMNH. American Museum of Natural History, n.d. Web. 21 May 2017.

Kimmelman, Michael. "Natural History Museum's Expansion: Part Dr. Seuss, Part Jurassic Park." The New York Times. The New York Times, 05 Nov. 2015. Web. 07 May 2017.

Kimmelman, Michael. “Fair Trade: A Museum Expansion for an Open Park.” The New York Times. The New York Times, 25 Jan. 2017. Web. 07 May 2017.

“MAT Program Overview.” AMNH. N.p., n.d. Web. 07 May 2017.

Pogrebin, Robin. "American Museum of Natural History Plans an Addition." The New York Times. The New York Times, 10 Dec. 2014. Web. 07 May 2017.

Raudenbush, William. Personal Interview. 01 May 2017.

"Richard Gilder Center for Science, Education, and Innovation." AMNH. American Museum of Natural History, n.d. Web. 07 May 2017.

Rosenberg, Zoe. "Natural History Museum's $325M Expansion Plan, Revealed." Curbed NY. Curbed NY, 05 Nov. 2015. Web. 07 May 2017.

Slippen, Dan. Personal Interview. 02 May 2017.

Thompson, Madeleine. "Putting the Stars on Display." Putting the Stars on Display | Manhattan, New York, NY | Local News. New York Press, 18 Jan. 2017. Web. 07 May 2017.

Understanding New York City’s Budget: A Guide. NYC’s Independent Budget Office, June 2013. Web. 07 May 2017.


United States. New York City Department of Parks and Recreation. American Museum of Natural History Gilder Center for Science, Education, and Innovation Environmental Impact Statement Draft Scope of Work. New York: n.p., 2016. Print.

“Urban Advantage NYC.” Urban Advantage. N.p., n.d. Web. 07 May 2017.


Corey Tam

MHC 360: The Politics of Science

Professor H. Varmus

22 May 2017

Addressing the Continued Lead Use in the United States

When most people in the United States turn on their faucet, they reasonably presume that the tap water will be safe for consumption. It most likely will be, in the loosest sense. But the harsh reality is that the water exiting the faucet will contain trace amounts of metals, organic matter, and other substances. One of the metals that may be present in higher concentrations in tap water is lead, which is known to have severe negative consequences when absorbed by the body, especially in minors. The question then becomes, “Why would lead be found in higher concentrations in water?” The answer is simple: lead is still present in the antiquated water infrastructure. The continued use of lead is clearly problematic, as exemplified by the worst-case scenario of the Flint water crisis. But before addressing the issue at hand, the topics of how lead became such a prominent material for water infrastructure and how current regulations deal with lead need to first be expounded.

While lead is a natural compound found in the Earth’s crust, most people are exposed to anthropogenic or artificial lead. 1 As a metal, lead is heavy and bluish-gray and has a low melting point. 2 However, lead is not naturally present in its metal form, but rather is merged with other elements to form lead compounds. 3 Lead is commonly used in alloys to create “pipes, storage batteries, weights, shot and ammunition, cable covers, and sheets used to shield us from radiation.” 4 It is also used in caulk and in the pigments of “paints, dyes, and ceramic glazes” to make the coatings more durable and resistant to moisture. 5 As previously exemplified, there are many lead-containing objects that can expose people to the element.

1. Toxicological Profile for Lead. (2007). Atlanta, GA: Agency for Toxic Substances and Disease Registry, pp.2. 2. Ibid., pp.1. 3. Ibid., pp.1.

How does lead enter and exit the body? There are three pathways through which lead can enter the body. Inorganic lead, or lead that is not bound to carbon atoms, can enter the body through inhalation, ingestion, or dermal exposure, but dermal exposure is comparatively less effective for lead absorption. 6 According to the Agency for Toxic Substances and Disease Registry (ATSDR) of the U.S. Department of Health and Human Services, animal studies have demonstrated the flip side: organic lead, or lead that is bound to carbon atoms, is readily absorbed through dermal exposure. 7 The primary pathway of lead intake is ingestion, followed by inhalation, and lastly dermal exposure. 8 For drinking water, lead is ingested as a result of leaching from lead pipes, lead solders (i.e. pipe joints), and lead-alloyed brass fixtures on faucets. 9 Almost all lead that is inhaled is absorbed into the body, but only twenty to seventy percent of all lead that is ingested is absorbed into the body. 10 The amount and rate at which lead is absorbed upon ingestion depend on both the characteristics of the individual and the characteristics of the lead-containing object. 11 For instance, age, fasting, nutrition (i.e. calcium and iron levels), and pregnancy were all statuses of the individual that influenced the lead intake. 12 Similarly, size, composition, solubility, and lead species of the ingested object influenced the lead intake. 13 The subsequent expulsion of lead – irrespective of its pathway – necessarily involves the excretory system. 14 Lead is removed through the major routes of urine and feces and the minor routes of sweat, saliva, hair, nails, and breast milk. 15

4. Ibid., pp.1-2. 5. Ibid., pp.2. 6. Ibid., pp.156; The two means of lead intake, inhalation and ingestion, are respectively distinguished as “occupational” and “non-occupational” for self-explanatory reasons. 7. Ibid., pp.156. 8. Royce, S. E., & Wigington, P. S. (2000). Case Studies in Environmental Medicine (CSEM): Lead Toxicity. Atlanta, GA: U.S. Dept. of Health & Human Services, Public Health Service, Agency for Toxic Substances and Disease Registry, pp.16. 9. Ibid., p.12. 10. Ibid., pp.16. 11. Toxicological Profile for Lead. (2007). Atlanta, GA: Agency for Toxic Substances and Disease Registry, pp.156.

The bodily retention of lead has manifest health consequences. Adopted by the Centers for Disease Control and Prevention (CDC) in 2012, the upper value of the reference range for blood lead level (BLL) in children is five micrograms per deciliter (5µg/dL). 16 As of 2015, the National Institute for Occupational Safety and Health (NIOSH) of the CDC altered its previous value of 10µg/dL to 5µg/dL as the reference BLL for adults. 17 Despite these designated ranges for BLL, there is no safe BLL, because any lead content within the blood is harmful. 18 As the ATSDR states, the reference levels of BLL only serve as “advisory level[s] for environmental and educational intervention.” 19 The distinction between “adult” and “children” BLL is important because children (i.e. minors below the age of 18) generally absorb more lead than adults. 20 Because their bodies are still developing, they absorb more lead and are more sensitive to the effects of lead. 21 Depending on the amount of lead retained, the effects of high BLL on children can vary from minor disruptions to behavioral and physical growth to “anemia, kidney damage, colic, muscle weakness, and brain damage.” 22 The effects of high BLL on adults are fewer but still substantial; they affect the reproductive, gastrointestinal, hematological, endocrine, developmental, renal, and cardiovascular functions. 23 To make matters worse, the lead migrates within the body from the blood to the soft tissues and organs to the teeth and bones, where it will be released in times of bodily stress like lactation, pregnancy, etc. 24

12. Ibid., pp.158. 13. Ibid., pp.158. 14. Ibid., pp.156. 15. Ibid., pp.156. 16. Royce, S. E., & Wigington, P. S. (2000). Case Studies in Environmental Medicine (CSEM): Lead Toxicity. Atlanta, GA: U.S. Dept. of Health & Human Services, Public Health Service, Agency for Toxic Substances and Disease Registry, pp.22. 17. Ibid., pp.22. 18. Ibid., pp.22. 19. Ibid., pp.22. 20. Toxicological Profile for Lead. (2007). Atlanta, GA: Agency for Toxic Substances and Disease Registry, pp.7. 21. Ibid., pp.9-10. 22. Ibid., pp.10.

But how did lead enter water supplies? Toxicologist Richard Rabin explains in his journal article, “The Lead Industry and Lead Water Pipes ‘A Modest Campaign,’” the role that the lead industry played in the implementation of lead pipes throughout the United States and in the resulting public health issues. Since the late nineteenth century, there were concerns voiced by sanitary engineers, physicians, and public health officials about the use of lead and the adverse health effects associated with it. 25 Consequently, the number of implemented lead pipes declined until it reached its trough in 1930. 26 Fearing the loss of its revenue as a result of the growing distaste for lead products, six lead companies (i.e. the National Lead Company, American Smelting and Refining, Anaconda, the Hecla Mining Company, Eagle Picher, and the St. Joseph Lead Company) formed a conglomeration known as the Lead Industries Association (LIA) in order to lobby for the use of lead pipes. 27 They argued that the benefits of lead outweighed the health concerns. They highlighted that the malleability and durability of lead pipes made them better economic investments. 28 They denied that the lead pipes would leach lead by stating that lead that comes in contact with water – with the exception of “soft” or acidic water – will form a protective layer that prevents corrosion. 29 Their lobbying campaign was so successful that they were able to recreate a market for lead pipes and to revert the plumbing codes of many municipalities to incorporate lead infrastructure. 30 The implementation of lead pipes continued until 1986, when the Safe Drinking Water Act was amended to prohibit it. 31 Nonetheless, the successful lobbying of the six companies under the LIA created a legacy of worn lead pipes and other lead-containing components that is still in the water infrastructure of cities today.

23. Royce, S. E., & Wigington, P. S. (2000). Case Studies in Environmental Medicine (CSEM): Lead Toxicity. Atlanta, GA: U.S. Dept. of Health & Human Services, Public Health Service, Agency for Toxic Substances and Disease Registry, pp.32-38. 24. Toxicological Profile for Lead. (2007). Atlanta, GA: Agency for Toxic Substances and Disease Registry, pp.7-8. 25. Rabin, R. (2008, September). The Lead Industry and Lead Water Pipes “A MODEST CAMPAIGN.” American Journal of Public Health, 98(9), pp.1585. DOI: 10.2105/AJPH.2007.113555. 26. Ibid., pp.1589. 27. Ibid., pp.1587. 28. Ibid., pp.1586. 29. Ibid., pp.1586.

The Safe Drinking Water Act is the federal act that, with some exceptions, officially ended the implementation of future lead infrastructure. Pursuant to the amendment of 1986, the following was added to the original act:

“No person may use any pipe, any pipe or plumbing fitting or fixture, any solder, or any flux, after June 19, 1986, in the installation or repair of— (i) any public water system; or (ii) any plumbing in a residential or nonresidential facility providing water for human consumption.” 32

This clause was what ended the further construction of lead infrastructure, but the use of the word “after” does not change the fact that there were – and still are – many lead pipes, solders, and fixtures laid during the time before the act. In addition, the definition of “lead-free,” as promulgated from this act, signifies: “(A) when used with respect to solders and flux refers to solders and flux containing not more than 0.2 percent lead, and (B) when used with respect to pipes and pipe fittings refers to pipes and pipe fittings containing not more than 8.0 percent lead.” 33 This portion of the act is actually the beginning of what would be known as the Lead and Copper Rule. As demonstrated by the direct passage, so-called “lead-free” infrastructure after 1986, as set forth by the Lead and Copper Rule, may not actually be lead-free, for it permits the future infrastructure to have some lead content. In addition to the definition of “lead-free,” the Lead and Copper Rule possesses the additional measure of action levels for lead and copper, which are 0.015mg/L and 1.3mg/L, respectively. 34 According to the EPA website, action levels are not the maximum contaminant levels allowed in the water; rather, they are judged against the ninetieth percentile of samples collected. 35 So, what this additional measure mandates is that water suppliers conduct tests on water that is first drawn from the consumer tap; if the tests demonstrate lead levels that exceed the action level, then the water suppliers must increase their corrosion inhibitors, provide additional tests, and inform water consumers about the risks of lead and how to prevent lead consumption. 36

30. Ibid., pp.1587. 31. Ibid., pp.1590. 32. Title XIV of The Public Health Service Act: Safety of Public Water Systems (Safe Drinking Water Act), Environmental Protection Agency, pp.990. 33. Ibid., pp.991.
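To make the two numeric tests just described concrete, they can be sketched in code. The following is a minimal illustration in Python with invented sample values; the EPA’s official percentile-ranking procedure has additional rules (for example, for very small sample counts), so this is an approximation of the idea rather than the regulatory formula.

import math

# 1986 "lead-free" thresholds, as quoted above (invented function names).
MAX_SOLDER_LEAD_PCT = 0.2    # solders and flux
MAX_PIPE_LEAD_PCT = 8.0      # pipes and pipe fittings

LEAD_ACTION_LEVEL_MG_L = 0.015
COPPER_ACTION_LEVEL_MG_L = 1.3

def is_lead_free_solder(lead_pct):
    return lead_pct <= MAX_SOLDER_LEAD_PCT

def is_lead_free_pipe(lead_pct):
    return lead_pct <= MAX_PIPE_LEAD_PCT

def ninetieth_percentile(samples):
    # Rank-based percentile: sort ascending, take the value at
    # 1-based rank ceil(0.9 * n).
    ordered = sorted(samples)
    return ordered[math.ceil(0.9 * len(ordered)) - 1]

def exceeds_action_level(samples, action_level=LEAD_ACTION_LEVEL_MG_L):
    # An exceedance does not mean every tap is contaminated; it means the
    # high end (90th percentile) of first-draw samples is too high, which
    # triggers corrosion control, more testing, and public education.
    return ninetieth_percentile(samples) > action_level

# Ten hypothetical first-draw lead results in mg/L (invented numbers).
lead_samples = [0.002, 0.004, 0.001, 0.020, 0.003,
                0.006, 0.012, 0.002, 0.005, 0.017]
print(ninetieth_percentile(lead_samples))   # 0.017
print(exceeds_action_level(lead_samples))   # True: 0.017 > 0.015
print(is_lead_free_pipe(7.9))               # True under the 1986 rule

Note that in this sketch the ninetieth-percentile result (0.017 mg/L) exceeds the action level even though most individual samples are well below it, and a pipe containing 7.9 percent lead still counts as “lead-free” under the 1986 definition – precisely the point made above that “lead-free” infrastructure may not actually be lead-free.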

As exemplified by the worst-case scenario of the Flint Water Crisis, the continued use of lead in water infrastructure is like the wait before a ticking bomb detonates. The city of Flint, in the state of Michigan, passed an ordinance in 1897 that mandated all connection pipes to main pipes be constructed out of lead. 37 In 1967, the city swapped its water source from the Flint River under the Flint Water Service Center (FWSC) to Lake Huron under the Detroit Water and Sewerage Department (DWSD) in order to accommodate a growing population and avoid the poor water conditions of the Flint River. 38 After that year, the FWSC became a spare water treatment facility used to mix its treated water with that of DWSD to offset costs. In 2013, officials in Flint decided to join a newly emerging water company, the Karegnondi Water Authority (KWA), but had to wait for the completion and furnishing of the waterworks. 39 After failing to secure a contract with DWSD, the city decided to use its spare facility, the FWSC, to furnish water to everyone in Flint. 40 The problem was that the facility was inadequately equipped to handle such a large operation: the facility did not have a corrosion-control plan, and reports of the treatment processes were haphazardly documented. 41 As a result, within several months, from April to October 2014, the event known as the Flint Water Crisis ensued. The water that was produced exceeded the action level for lead by three times at one point – amongst a number of other problems, like Legionella. 42 And it was also found that the BLL in children increased by a factor of 2.5 during the incident. 43

34. Lead and Copper Rule: A Quick Reference Guide (2008, June). Environmental Protection Agency, pp.1. 35. Ibid., pp.2. 36. Ibid., pp.2. 37. Masten, Susan J., Davies, Simon H., McElmurry, Shawn P. (2016, December). Flint Water Crisis: What Happened and Why? Journal American Water Works Association, 108(12), pp.23. DOI: 10.5942/jawwa.2016.108.0195. 38. Ibid., pp.23. 39. Ibid., pp.23.

The event at Flint highlights the problem of discrimination within the Lead and Copper Rule of the SDWA. Water that exits the water treatment facility and the main pipe is typically lead-free. The actual contamination occurs in the connecting pipes, solders, and fixtures. When lead is used in the water infrastructure, as in the case of Flint, lead can leach out from all these sources. Lead levels in water can then exceed the action level and prompt the water companies to go to the contaminated households and inform their inhabitants of the dangers of lead. However, liability to fix the pipes is assigned not to the water suppliers, but to the inhabitants. So Flint residents, who are predominantly low-income families with minority backgrounds, are essentially asked to fix the problem themselves with no monetary means of doing so.

There are several policy considerations that can be made in order to prevent another water crisis like that in Flint. First, accountability, defined here as transparency, should be available to property buyers. Real estate agents are not obligated to disclose whether there are lead pipes within a property. So, what is being proposed is that the presence of lead pipes should be revealed to the potential buyer of real estate. Similarly, brass fixtures should have warning labels to indicate the amount of lead alloyed so that buyers are at least conscious of their decisions. In a different sense, accountability should be shifted from the individual to the owner of the property. That is, the individual should not be the one paying for the replacement of lead pipes. Instead, the owner of the property should pay for the replacement of lead pipes as part of a whole routine that includes contaminated-soil excavation. Those who may not be able to afford the removal of the pipes should be able to contact the city or state government and apply for price abatement. These prophylactic measures should be taken because they are more economically sound than waiting for the event of a public scandal to replace the lead infrastructure.

40. Ibid., pp.23. 41. Ibid., pp.26. 42. Ibid., pp.24. 43. Ibid., pp.24.

When deliberating over the matter of the crisis in Flint, it is important to first consider how the pre-existing conditions of Flint enabled the crisis to occur. The continued use of lead in water infrastructure poses a huge problem. It affects everyone, whether they realize it or not. Moreover, the failure of the government and its people to deal with the issue means more Flint-like crises are potentially on the horizon. To prevent such crises, the Lead and Copper Rule should be revised to specifically protect the population that is usually affected by these water issues – low-income families with minority backgrounds.


Allegra DePasquale

MHC 360

Professor Varmus

Final Paper

The Politics of Field Research in the Developing World: A Case Study

Introduction

Despite its reputation for objectivity, science is far from an apolitical endeavor. For scientists conducting field research in the Global South, navigating the politics of the developing world is one of the greatest challenges. Typically, as post-colonial nations, these countries grapple with issues of weak institutions, uncertain political legitimacy, and rampant corruption. Given the troubled nature of post-colonial politics, scientists who wish to conduct research in these countries must carefully navigate political difficulties in order to operate a successful research project. In this paper, I ask how Western scientists conducting research in the Global South do science. I will present the challenges faced by scientists conducting primatological research in the East African country of Uganda as a case study. These challenges include issues of armed conflict, corruption, weak institutions, and instability. Ultimately, I will attempt to make suggestions as to how to overcome these challenges as primates face increasingly precarious circumstances.

Armed Conflict

Perhaps the most well-known example of armed conflict disturbing primatological research is that of Dian Fossey studying the mountain gorillas in Zaire (now the Democratic Republic of Congo). In the course of a rebel uprising, Fossey was kidnapped and sexually assaulted by rebel soldiers. She was forced to abandon her field site in Zaire, and instead relocate to Rwanda (Fossey 1983). While this is an extreme example of a primate researcher experiencing violence during the course of her research, conflict is common in the post-colonial countries in which primates are located.

Since its independence from Great Britain in 1962, Uganda has seen periods of significant conflict, both internal and external. Internally, Uganda has been involved in a civil war more or less since 1986, when Yoweri Museveni came to power. Since then, the Lord’s Resistance Army, an insurgent group led by the infamously cruel Joseph Kony, has terrorized Northern Uganda in the hopes of overthrowing Museveni’s regime (Gersony 1997). Externally, Uganda has seen spillover from its tumultuous borders with Rwanda, the Democratic Republic of Congo, and Sudan. The presence of violent conflict has made primatological research in Uganda extremely difficult, especially in Bwindi Impenetrable National Park (BINP), which sits right on the border with the Congo and Rwanda.

In 1999, Hutu rebels from Rwanda infiltrated BINP and kidnapped 14 eco-tourists and a field assistant to USC primatologist Craig Stanford. The rebels murdered eight of these eco-tourists, but Stanford’s assistant survived. According to Stanford, he had to carefully consider whether or not he could continue working in Bwindi. Stanford was, very reasonably, afraid. He shut down his project for six months while the border conflict subsided, though eventually he decided to continue (Silsby 2001). The threat of Rwandan rebels put his research project in a precarious position, in which he had to consider issues that Western scientists working in their home country (here, the US) would never have to consider: safety and physical well-being. It is in the light of armed conflict that working in the developing world is perhaps most challenging, as it transcends politics to become violence. Further, the threat of conflict disrupts primate research significantly more than other branches of biological research, as reliable primate behavior data must be longitudinal and continuous. By having to cease operations, the researcher jeopardizes the quality of their data set by sacrificing sample size. The choice between safety and science is a choice one would never have to make in the US.

Another example that demonstrates the challenge of dealing with armed conflict while conducting primate research in Uganda comes from Hunter College primatologist Jessica Rothman. She, too, worked in Bwindi Impenetrable National Park in the late 90s, when there was considerable Hutu activity in the park. Like Stanford, she was forced to question whether or not she could continue working in Uganda – at least in Bwindi. She was working in Bwindi at the time of the 1999 kidnappings, though fortunately she wasn't affected. She remembers being in the park with the rebels, though, hiding in the forest and swearing she'd never come back to Uganda to do research. Armed conflict is one of the more visceral aspects of post-colonial politics that make doing science difficult. It's not just the science that's difficult to conduct in the field; it's assuring one's own safety.

Corruption

In addition to facing armed conflict, researchers must also navigate corruption at the national, regional, and local levels. According to Transparency International, Uganda ranks 151 out of 176 countries on the Corruption Perceptions Index, indicating that corruption is ubiquitous throughout the Ugandan political system. Researchers, who must interact with the national and regional governments to obtain research permits and permissions, are not absolved from engaging with these corrupt governmental agencies. While there are formal avenues for conducting primatological research in Uganda, outlined by the Uganda Wildlife Authority (UWA) on their website (http://www.ugandawildlife.org/wildlife-a-conservation-2/researchers-corner/research-a-monitoring), these formal routes are not always followed to the letter. As is common in post-colonial political systems, a kind of informal politics reigns (Rubongoya 2007). Thus, researchers must carefully engage with the informal politics of Uganda in order to expedite their permissions and permits, relying as much on their personality and their networks as on their science (Rothman, personal communication).

A more common struggle for primatologists in Uganda, though, is local corruption, in which the rangers and officers charged with preventing and arresting intruders fail to do so, leaving protected areas vulnerable. Oftentimes the rangers charged with patrolling the national parks and reserves are the same people who enter the park for bushmeat and firewood, draining these protected areas of their resources and leading to deforestation and defaunation (McLennan 2010). This kind of localized corruption is ultimately more harmful to the goals of a primatological research project than bureaucratic informal politics. Faced with ecological disruption, scientists are forced to allocate resources away from research and towards conservation in order to preserve their study species and their ecological community.

It is near impossible to study primates in Uganda without engaging in conservation at some level, as primates are significantly threatened by this kind of local resource degradation stemming from a lack of law enforcement and from corruption within the national parks. The choice to divert funds towards conservation is an easy one: if you do not actively engage in concerted conservation efforts to combat local corruption and resource extraction, you will lose your research subjects and, therefore, your data. An example is the Kibale Snare Removal Project (KSRP), an offshoot of the Kibale Chimpanzee Project (KCP). KCP established KSRP to combat the snaring of chimpanzees within Kibale National Park, one of the most productive primatological field sites in the world. Snaring is the illegal use of wire and plastic snare traps to catch prey, typically duiker or bush pig. Before KSRP, chimps were often snared, leading to serious injury and, in some cases, death. KSRP patrols the park and removes as many snares as possible in order to reduce the rate of chimp snaring (https://kibalechimpanzees.wordpress.com/snare-removal-program/). This has been a resounding success, though snares continue to appear in the forest. Local corruption remains a significant hurdle to primatological research in Uganda.

Weak Institutions

Often intertwined with corruption is the issue of weak institutions, with which Uganda, like most other post-colonial states, also struggles. The legislative and judicial branches are exceptionally weak, and power is concentrated in the executive – the presidency. Museveni has been president since 1986, through elections that are widely considered to be unfair, with political legitimacy that is questionable at best (Rubongoya 2007). Most pertinent to primatologists, however, is the issue of law enforcement and punishment, which, of course, is also wrapped up in issues of corruption. Formally, Uganda has an adequate legislative framework to protect its biodiversity, designating around 16% of its land for conservation, according to the World Bank. It also has a fairly robust agency, the Uganda Wildlife Authority, charged with managing wildlife. However, as mentioned in the section on corruption, these laws and procedures are inadequately enforced, in part due to local-level corruption. For example, rangers in Bwindi Impenetrable National Park can be found illegally offering tourists a day trekking mountain gorillas, despite these tourists not going through the official UWA registration and vetting process. This can lead to the spread of pathogens and parasites to the gorillas, an already endangered species (Rothman, personal communication). Uganda has poor institutional capacity, and despite Museveni's rhetoric regarding the importance of wildlife, little is enforced in practice (Sandbrook and Roe 2010). Poaching, deforestation, and resource extraction still occur in protected areas, to the detriment of long-term primate studies.

As a result, primates, even in protected areas, remain vulnerable, exacerbating or creating conservation issues that primatologists must address if they are to conduct research. This is markedly unlike working in protected areas outside the developing world – in the US, for example, conservation issues are seldom paramount in zoological research. Weak institutions complicate research by failing to protect primates and their habitats, and by failing to punish those who violate them.

Political Instability

Lastly, Uganda has faced significant political instability, though this has died down considerably in the last two decades as Museveni has retained and strengthened his vice grip on the Ugandan presidency. Uganda has thus been relatively stable since the turn of the 21st century.

Historically, though, this has not been the case, and instability has severely affected primate research in the country, particularly in Bwindi. Due to instability and military presence in Bwindi in the 90s, two research sites, Nkuringo and Ruhija, were forced to close and cease data collection (Newton-Fisher et al. 2006). In addition, Mudakikwa et al. (1998) suggested that military presence in the park increases the risk of disease transmission to the gorillas, just as tourist presence does (mentioned above). Thus, gorillas face a substantial threat from political instability and the ensuing military presence within the park. Not only is less research conducted in the park due to all that accompanies instability (conflict, lack of funds), but ultimately the gorillas' health is jeopardized.

Luckily, the political climate has improved, and primatologists working in Uganda no longer have to fear regime change or political upheaval as significant impediments to their research. It is important to note, though, that instability has been a problem in Ugandan politics in the past, and it very well could be again in the future, as Museveni has molded the current political institutions to suit him, and him alone.

Conclusion

It is clear that the local, regional, and national politics of Uganda significantly affect primatological research, though I believe these challenges of working in the developing world apply to any field that engages with such political institutions. Armed conflict, corruption, weak institutions, and political instability are all too common in post-colonial countries, which are often conflated with the developing world. In order for Western scientists to successfully do science in these countries, where institutions operate much differently, they must engage with and navigate these post-colonial politics. They must operate in a political climate much different from that of their home country. Researchers must play by the rules of the country in question, even if that forces them to engage with corrupt officials and rangers, informal politics, and weak law enforcement. This adds another dimension to science, one that researchers from the West may not be used to. However, it is critical to acknowledge and address these challenges head-on in order to successfully collect data and continue research operations.


References

Fossey, D. (1983). Gorillas in the mist. Bronx, NY: Ishi Press International.

Gersony, R. (1997). The Anguish of Northern Uganda: Results of a Field Based Assessment of the Civil Conflicts in Northern Uganda. United States Embassy, USAID Mission Kampala.

Kibale Snare Removal Program. (n.d.). Retrieved from https://kibalechimpanzees.wordpress.com/snare-removal-program/

McLennan, M. R. (2010). Chimpanzee responses to researchers in a disturbed forest-farm mosaic at Bulindi, western Uganda. American Journal of Primatology, 72(10), 907-918.

Mudakikwa, A. B., Sleeman, J., Foster, J., et al. (1992). An indicator of human impact: Gastrointestinal parasites of mountain gorillas (Gorilla gorilla beringei) from the Virunga Volcanoes Region, Central Africa. In Proceedings of the Joint Meeting of the American Association of Zoo Veterinarians and the American Association of Wildlife Veterinarians, 436-437.

Naughton-Treves, L., Alix-Garcia, J., & Chapman, C. A. (2011). Lessons about parks and poverty from a decade of forest loss and economic growth around Kibale National Park, Uganda. Proceedings of the National Academy of Sciences, 108(34).

Newton-Fisher, N. E. (2011). Primates of Western Uganda. New York: Springer.

Rubongoya, J. (2007). Regime Hegemony in Museveni's Uganda: Pax Musevenica. New York: Palgrave Macmillan.

Sandbrook, C., & Roe, D. (2010). Linking Conservation and Poverty Alleviation: The Case of Great Apes. London: Arcus Foundation.

Silsby, G. (2001, October 8). Undeterred by Ugandan Terrorism, USC Researcher Makes a Startling Discovery. Retrieved May 15, 2017, from http://news.usc.edu/4424/Undeterred-by-Ugandan-Terrorism-USC-Researcher-Makes-a-Startling-Discovery/

Transparency International. (n.d.). Corruption Perceptions Index 2016. Retrieved May 15, 2017, from http://www.transparency.org/news/feature/corruption_perceptions_index_2016

Uganda Wildlife Authority Research & Monitoring. (n.d.). Retrieved May 15, 2017, from http://www.ugandawildlife.org/wildlife-a-conservation-2/researchers-corner/research-a-monitoring

World Bank. (n.d.). Terrestrial protected areas (% of total land area). Retrieved May 15, 2017, from http://data.worldbank.org/indicator/ER.LND.PTLD.ZS?page=2&year_high_desc=false


From Stem Cell Therapy to Cancer Treatment

Elisha Edwards MHC 360: The Purpose, Practice, and Politics of Science Dr. Harold Varmus May 15th, 2017


Geron Corporation was founded by Michael D. West on November 28th, 1990. It is currently a clinical-stage biopharmaceutical company that develops and studies a telomerase inhibitor called Imetelstat for hematologic myeloid malignancies. However, Geron Corporation initially geared its resources toward gerontology and human embryonic stem cell technology (2,3).

Stem cells are undifferentiated cells that have the potential to transform into other types of cells, such as muscle, bone, and skin cells. Stem cells are extracted from either adult tissue or embryos at the blastocyst stage. Embryonic stem cells are classified as pluripotent because they are capable of serving as the precursor to a wide variety of human cells. Furthermore, they are easily sustained in culture and can reproduce themselves indefinitely (5,6).

All clinical trials that introduce the testing of a new pharmaceutical drug must be reviewed and approved by the U.S. Food and Drug Administration (FDA). First, the scientific researchers of a company must complete an Investigational New Drug application, which is then reviewed by an institutional review board comprised of doctors, researchers, and community members. Application details include safety concerns, manufacturing procedures, and the protocol and criteria of the study. The drug must then pass through several trial phases. In Phase I clinical trials, the safety of the drug is examined in a small group of participants. Phase II clinical trials include a larger group of participants and test the new drug's potency and performance. Phase III clinical trials consist of anywhere from 1,000 to 3,000 participants; this phase examines the efficacy of the drug, establishes side effects, and compares the drug's results to those of similar drugs currently on the market. New drugs must successfully pass through the three aforementioned phases in order to be marketed to the public. Lastly, Phase IV is used to gain additional information once the drug is released for sale (2).
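Because this phase sequence recurs throughout the rest of this paper, a minimal Python sketch of the gating rule described above may help; the phase descriptions are paraphrases of this paragraph, and the helper name is invented for illustration:

    # Schematic of the FDA approval pipeline described above. Purposes are
    # paraphrases of the text, not regulatory definitions.
    PHASES = {
        "I": "safety in a small group of participants",
        "II": "potency and performance in a larger group",
        "III": "efficacy, side effects, and comparison to similar marketed drugs "
               "(1,000-3,000 participants)",
        "IV": "post-market monitoring after release",
    }

    def may_be_marketed(phases_passed):
        """Per the text, a drug must clear Phases I-III before it can be sold."""
        return {"I", "II", "III"} <= set(phases_passed)

    print(may_be_marketed(["I", "II"]))         # False
    print(may_be_marketed(["I", "II", "III"]))  # True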

The U.S. Food and Drug Administration granted Geron Corporation approval for the first human clinical trial of embryonic stem cells in January 2009. Geron Corporation aimed to conduct its study on ten patients who had suffered complete thoracic-level spinal cord injuries, but it only ended up observing four patients. A complete spinal cord injury is defined as the loss of the brain's ability to send impulses down the spinal cord below the site of the injury (2). The therapeutic product being tested was named GRNOPC1, consisting of embryonic stem cell-derived oligodendrocyte progenitor cells that have demonstrated remyelination and nerve growth-inducing properties. Oligodendrocytes provide support and insulation to axons in the central nervous system by creating the myelin sheath. The trial was originally expected to start in the summer of 2009; however, it was delayed by the FDA after cysts were discovered in mice that had been injected with these stem cells. Phase I of the clinical trial was approved to start in 2010 (1).

In 2005, a study of human embryonic stem cell-derived oligodendrocyte progenitor cell transplants in laboratory rats was completed at the University of California, Irvine (4). Following anesthetization, a contusion injury was induced in the spinal area of female adult rats using an Infinite Horizon Impactor, which can deliver a sudden impact of a desired force to a specific area. One group of rats was injected with the stem cells 7 days after injury, and the other group was injected 10 months after injury (4).

In both groups, the transplanted cells survived, migrated over short distances, and differentiated into oligodendrocytes. Although the rats that received the stem cells 7 days after injury showed improvements in remyelination and locomotor ability, the rats that received them 10 months after injury showed no neurological or muscular improvements. Keirstead et al. concluded that embryonic stem cells were capable of differentiating into functional oligodendrocytes. Their study also supported the idea that intervention at earlier time points after spinal injury leads to greater therapeutic improvement. All rats were euthanized 8 weeks after stem cell transplantation (4).

In 2010, Geron Corporation began to enroll patients in its trial. In order to be eligible to participate, patients were required to have sustained a complete spinal cord injury within seven to fourteen days before enrollment and had to be between the ages of 18 and 65. Patients also could not have a history of cancer or significant organ damage, could not be pregnant or nursing, and could not participate in any other interventional studies. Participants received one dose of GRNOPC1, consisting of over two million stem cells.
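To make the screening logic concrete, here is a small, hypothetical Python sketch of the eligibility rules just listed; the field names and the helper itself are invented for illustration and are not part of Geron's actual protocol:

    # Hypothetical encoding of the GRNOPC1 enrollment criteria described above.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        days_since_injury: int          # days since complete spinal cord injury
        age: int
        history_of_cancer: bool
        significant_organ_damage: bool
        pregnant_or_nursing: bool
        in_other_interventional_study: bool

    def eligible(c: Candidate) -> bool:
        """Apply the trial's stated inclusion and exclusion criteria."""
        return (7 <= c.days_since_injury <= 14
                and 18 <= c.age <= 65
                and not c.history_of_cancer
                and not c.significant_organ_damage
                and not c.pregnant_or_nursing
                and not c.in_other_interventional_study)

    # A 21-year-old enrolled two weeks after injury, like the study's first
    # participant, falls inside the enrollment window:
    print(eligible(Candidate(14, 21, False, False, False, False)))  # True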

The first patient, Timothy Atchison, was enrolled in the study two weeks after he was involved in a car accident. After being injected with GRNOPC1, he began to experience a few slight sensations and could feel discomfort when his leg hairs were pulled (2,7).

Preliminary results of this study demonstrated that no changes to the spinal cord were present, and no adverse side effects were reported. In November 2011, Geron Corporation publicly announced that it was closing the study in order to turn its main focus toward cancer research (7). This caused the company's stock price to drop quickly from $2.28 to $1.50 per share. The scientific community was highly disappointed, as many had been looking forward to advancing their knowledge of stem cells based on the official results of the study (3). Another biotechnology company, BioTime, Inc., acquired several of the patents on stem cell products from Geron Corporation. Despite the discontinuation of the stem cell research, Geron Corporation agreed to continue monitoring the patients' progress in the coming years (2).

Presently, Geron Corporation has fully invested its time and money in Imetelstat. Imetelstat is revolutionary in that it binds telomerase with high affinity, directly inhibiting telomerase enzymatic activity, whereas its drug counterparts produce only an indirect effect via inhibition of protein translation (2). The only drug approved for treatment of myelofibrosis is Jakafi, developed and marketed by Incyte Corporation. Jakafi resolves some of the symptoms associated with myelofibrosis, while Imetelstat is the first drug ever to induce partial and complete responses in myelofibrosis patients during early-stage clinical trials. If Geron Corporation's drug manages to pass through all trial phases, Imetelstat could entirely replace Jakafi and earn Geron Corporation a substantial increase in revenue. Johnson & Johnson's biotechnology subsidiary, Janssen, is handling the drug's clinical program through a collaborative license with Geron Corporation (3).

Preclinical studies have supported that Imetelstat inhibits telomerase activity and decreases the length of telomeres. It also inhibits the rapid reproduction of various types of tumor cells, reducing the growth of primary tumors and thus reducing metastases. When coupled with approved anti-cancer therapies, such as chemotherapy, Imetelstat produces a synergistic anti-cancer effect (9).

Tefferi et al. (9) conducted a study with 33 patients who had high-risk or intermediate-2-risk myelofibrosis. Imetelstat was administered as a 2-hour intravenous infusion, with patients receiving 9.4 mg per kg of body weight every one to three weeks. A complete or partial remission of myelofibrosis was observed in 7 patients, with a median response duration of 18 months for complete remissions and 10 months for partial remissions. Imetelstat was found to be active in patients with myelofibrosis, but it also had the potential to cause myelosuppression, a condition in which bone marrow activity is diminished, resulting in fewer blood cells. Other adverse effects included grade 4 thrombocytopenia in 18% of patients, grade 4 neutropenia in 12% of patients, grade 3 anemia in 30% of patients, and grade 1 or 2 elevations in levels of total bilirubin in 12% of patients, alkaline phosphatase in 21% of patients, and aspartate aminotransferase in 27% of patients (9).
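As a worked example of the dosing arithmetic in this study (the 70 kg body weight below is a hypothetical patient, not a figure from the paper):

    # Imetelstat in the Tefferi et al. (9) pilot study: 9.4 mg per kg of
    # body weight per 2-hour infusion, every one to three weeks.
    DOSE_MG_PER_KG = 9.4

    def infusion_dose_mg(body_weight_kg):
        """Total imetelstat delivered in a single infusion."""
        return DOSE_MG_PER_KG * body_weight_kg

    print(infusion_dose_mg(70.0))   # 658.0 mg for a 70 kg patient
    print(round(7 / 33 * 100))      # 21 -> ~21% of the 33 patients achieved remission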

In another study by Tefferi et al. (8), nine patients took Imetelstat for refractory anemia with ring sideroblasts, with or without thrombocytosis. Based on the previous study's concern about the development of myelosuppression, the 2-hour intravenous infusion of Imetelstat was reduced to 7.5 mg per kg of body weight every four weeks. Four patients remained on treatment throughout the study, while the other five patients' treatments were discontinued for reasons including death unrelated to Imetelstat and the discovery of a second malignancy. Three patients became transfusion-independent in a median time of 11 weeks, and one patient had their leukocytosis and thrombocytosis resolved (8).

Imetelstat is presently being studied in two separate clinical trials: one evaluating the activity of two different dosages of Imetelstat in myelofibrosis, and the other evaluating how Imetelstat affects myelodysplastic syndrome. Both trials are in Phase II.

On April 10th, 2017, Geron Corporation's stock price rose an outstanding 19.53%, reaching $2.59 per share by market close (3). This surge followed a report that their current studies affirmed 9.4 mg per kg of body weight as the appropriate dosage for patients with relapsed or refractory myelofibrosis (2). As of market closing on May 12th, 2017, Geron Corporation's stock price was $2.85 per share (3).
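A quick sanity check on the reported move (the prior close below is inferred from the stated percentage, not reported in the text):

    # A 19.53% single-day gain that ends at $2.59 implies a prior close of
    # roughly 2.59 / 1.1953, or about $2.17.
    close_price = 2.59
    daily_gain = 0.1953
    prior_close = close_price / (1 + daily_gain)
    print(round(prior_close, 2))  # 2.17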

References

1. Alper J. Geron gets green light for human trial of ES cell-derived product. Nature Biotechnology. 2009; 27:213-214.

2. Geron Corporation. “For Patients: Clinical Trials.” Accessed on April 21, 2017.

3. Geron Corporation. “Investors: Press Releases.” Accessed on May 12, 2017.

4. Keirstead HS, Nistor G, Bernal G, Totoiu M, Cloutier F, Sharp K, and Steward O. Human embryonic stem cell-derived oligodendrocyte progenitor cell transplants remyelinate and restore locomotion after spinal cord injury. The Journal of Neuroscience. 2005; 25:4694-4705.

5. Lebacqz K, Mendiola M, Peters T, Young EWD, and Zoloth-Dorfman L. Research with human embryonic stem cells: ethical considerations. Hastings Center Report. 1999; 29:31-36.

6. Schwartz SD, Hubschman JP, Heilwell G, Franco-Cardenas V, Pan CK, Ostrick RM, Mickunas E, Gay R, Klimanskaya I, and Lanza R. Embryonic stem cell trials for macular degeneration: a preliminary report. Lancet. 2012; 379:713-720.

7. Stein, Rob. “First test of human embryonic stem cell therapy in people discontinued.” The Washington Post. Nov. 2011. Web. Accessed on April 21, 2017.

8. Tefferi A, Al-Kali A, Begna KH, Patnaik MM, Lasho TL, Rizo A, Wan Y, and Hanson CA. Imetelstat therapy in refractory anemia with ring sideroblasts with or without thrombocytosis. Blood Cancer Journal. 2016; 6:405.

9. Tefferi A, Lasho TL, Begna KH, Patnaik MM, Zblewski DL, Finke CM, Laborde RR, Wassie E, Schimek L, Hanson CA, Gangat N, Wang X, and Pardanani A. A pilot study of the telomerase inhibitor imetelstat for myelofibrosis. The New England Journal of Medicine. 2015; 373:908-919.


Kelsy Hillesheim and Ellianna Schwab MHC 360 Professor Harold Varmus 22 May 2017

Open Science and Collaboration in the Computational Era

Thanks to the expansive capabilities of modern computation and the free-natured spirit of the internet, science is yet again on the brink of a revolution. This time, the revolution is rooted in what is known as open science. There are many definitions of open science floating around, ranging from simply removing the paywall barring access to scientific publications to openly sharing every detail of one's scientific work, down to the brainstorming and the analytic code behind each analysis. The latter is how we choose to define open science here. As it takes hold, this radical open science also leaves the most at stake for the scientists who practice it. This kind of open science has the potential to speed up and prioritize experimental reproducibility once again, this time through the sharing of everything, including lab notes. However, with this openness comes palpable risk to the careers of scientists, especially early-career scientists. Given these concerns, will open science gain acceptance as the new scientific standard? If it does, will it threaten the careers of scientists and irrevocably change science as we know it? These are the questions we seek to answer. To answer them effectively, we will explore the roots of the traditional model of scientific collaboration, look at how the traditional model differs from the open science model, and finally weigh the risks and rewards of embracing open science.

The Traditional Model of Scientific Collaboration

The traditional model of scientific collaboration as we understand it today began during the scientific revolution with a network of correspondence historians call the "republic of letters."1 It did not rely on strokes of genius, but on strokes of the pen. Limited primarily to members of the bourgeoisie, this idea exchange dramatically altered the scope and structure of science. Every educated person owned a "cabinet of curiosities" to show off to whomever might stop by. In the salons of France and the coffeehouses of England, reading materials were abundant and ideas ran rampant. In France, the Encyclopedie was seen as the authoritative text for every educated household to own. Due to its costly price, owning one was a sign of bourgeois status, and it closely resembled what we now call scientific journals. Much like the most respected journals of today, having one's work published in the Encyclopedie was a great honor. The free flow of ideas between the people doing science in the era of the Republic of Letters expanded the scope of science in that it allowed for relatively rapid reproducibility. The structural changes that ensued -- mainly that publication became the priority -- remain foundational to modern science as we know it. It also involved a competitive rush to publication, as ownership was generally awarded to those who published first. We can see this in the priority dispute between Newton and Leibniz over the invention of calculus. Although the more secretive Newton won, purely due to his position at the helm of the Royal Society, Leibniz had the last laugh. While Leibniz's name does not hold the same recognition that Newton's does, Leibniz notation is widely used today due to his widespread establishment of primacy through publication.

Since the end of the scientific revolution, the practice of science has often been viewed as it is portrayed in Arrowsmith,2 the movie we watched at the beginning of the semester. An ambitious, early-career scientist like Dr. Arrowsmith has a promising idea or discovers something new. In the most quiet, secretive manner, they race to claim scientific precedence through the publishing of results. Like Dr. Arrowsmith, many scientists traditionally collaborated within their home institution in small groups, perhaps with a mentor and a trusted colleague or two. However, these scientists took great care in how much progress they might share with colleagues elsewhere, for fear of "being scooped." Once a scientist achieved publishable results and won the race, those results were written up and shared in well-respected, high-impact journals like Nature, Science and the like. Other scientific groups could access only the data and results written in each paper, after paying a subscription fee. If they wanted more access -- raw data or inquiry into analysis algorithms -- they had to contact the authors and independently build a collaboration from there.

What is Different About Open Science?

In its earliest stages, open science was defined as the same finished-product publication with the paywall removed. Thanks to the quickly growing internet and innovative scientists like Paul Ginsparg, who founded the publicly accessible and searchable website the ArXiv in 1991, scientists -- mostly physicists, astronomers and mathematicians -- began uploading "pre-print" copies of their soon-to-be published papers for open access to anyone with an internet connection.3 Ginsparg writes that at first the ArXiv and other "open-access" journals were considered controversial, as we discussed in class earlier this semester. Paywall journals grew concerned that they would lose income if scientific papers were available for free, and scientists worried that the appeal of open access would cause others to stop seeking peer review. However, time proved neither of these worries true. "[ArXiv and peer-reviewed journals] maintain different roles," Ginsparg points out in his written history of the site. The majority of ArXiv papers remain pre-prints of accepted submissions to peer-reviewed journals, and institutions and individual scientists generally maintain their paid subscriptions.

As the internet, and indeed computational capability, grew, "open science" began to expand beyond the boundaries of open access to finished-product science. Websites like Sourceforge, founded in 1999, allowed scientists to upload and share scientific analysis algorithms in both public and private accounts, and giant servers linked across the internet allowed for more natural collaboration between scientists around the globe.4 A newer, "more radical" open science -- to quote Professor David Hogg of New York University -- began to blossom, one that was rawer and messier but encouraged open collaboration from the very beginning. Professor Hogg calls this "extreme" open science, and defines it as allowing open access not only to finished-product science, but also to the entire process:

"[My group] benefits from its extreme openness — not just my blogging [of our progress], but our web-exposed code, paper [drafts], and [telescope and grant] proposal repository, and our open-source software projects.5"

Professor Hogg, never one to shy away from controversy, marched in the March for Science with a double-sided sign: 'arxiv.org #openscience' on one side and 'No ban, No wall' on the other. He is instrumental in the construction of new scientific institutions dedicated to this radically open science. One, the Simons Foundation's Center for Computational Astrophysics (CCA) in downtown NYC, is a collaboration between the many scientific institutions based in and located near NYC: NYU, Columbia, CUNY, AMNH, Princeton and Yale. It is in that very place, the CCA, that one can begin to gain clarity on the nature of open science. Stepping into the facility, the open-concept layout is immediately obvious. The minimalist, modern architecture of the lounge leads naturally into glass offices and meeting rooms. One could be forgiven for mistaking the CCA for the most high-tech of startups, if it weren't for the astrophysics equations scrawled on the gray, black, and white shaded walls. Most striking are the limitations on privacy. The glass walls mean that every scientist can see everyone else, even as closed doors on meetings and offices keep a quiet and serene atmosphere.

What Rewards Does Open Science Offer?

The appeal of open science is often immediately apparent: it offers an unusually accessible ease of collaboration. At a CUNY graduate-level course in Astrostatistics taught by Professor Kelle Cruz of Hunter College, undergraduate and graduate students willingly lend a hand to one another and work together when stuck on a challenging component of their analytical code. The use of Python as the language of choice by most students is indicative of the culture of the internet inspiring the culture of open science. Python developers have a culture of being very clear and descriptive with their code, as the language was created in the era of the internet to be shared in an open-source fashion.6 At the graduate course in the CCA, there is no tension or animosity between students, but a sense of ease that transcends present challenges. There is a sense of solidarity among the students -- that together, they can make their code work and achieve results on more hypotheses by relying on everyone's strengths. However, this ease requires the collaborative spirit that infuses the CCA. Such collaboration is not welcome everywhere. Prizes that bring with them elusive glory loom over the heads of researchers, making some fields significantly more competitive than others. Can collaboration work for these fields too? As Professor Hogg puts it: "It is easy to forget that when [my group] first went fully open, we did so because it made it easier for us to find our own code.7" He elaborates, though, that they found there were not only 'huge' advantages for science, but also for the individual. Extreme open science allows for fluid collaboration and helpful feedback from the very beginning of a project.

Professor Kelle Cruz teaches her students how to construct their analysis as open source as soon as they begin a project. Using Github, a site founded in 2008 for code-sharing that allows open comments and feedback, students can track project contributions, "watch" others' code, and even request that their own code be added to others' projects. Originally created for software and internet developers, Github is built for collaboration across institutional and global boundaries.8 Such ambitious collaboration requires methodologies for tracking individual participation, so that no one contributor is ever taken advantage of. On its easy-to-use interface, Github shows every contributor to a project and the quantifiable percentage of their contribution. If someone has an idea, they type it into the project and are immediately logged as a contributor. If another takes the idea and translates it into code, they are added to the contributor list as well. Professor Cruz believes that this methodology is essential to being a "better" scientist and puts that belief into practice in her career. She contributes all of her analysis code to Github and stores her paper drafts on the site. Once a paper is accepted for publication, she not only uploads it to the ArXiv, but also makes all the initial drafts, data, and comments available to the public. "This transparency pushes all science forward," Professor Cruz says.9 It allows her peers and colleagues to reproduce her methods, and when she proposes a new method of stellar categorizing, it allows them to see the scientific reasoning from its first iteration through its final polished result. It also provides students, postdocs and early-career faculty with examples of scientific practice. She doesn't need to spend as much time verbally teaching students one-by-one how to write a paper or how to code an equation; they can see her methodologies in action for themselves.

What is the Risk in Practicing Open Science?

Professor Cruz is quick to point out that extreme open science can be risky for students and early-career scientists. "Some scientists are eager to find mistakes, even if they've been corrected. Letting people see your drafts means that they can see wrong directions you might have taken before arriving at the correct answer. Some people won't see the correction and will only see the wrongdoing." Professor Cruz doesn't think that this should deter students from adopting open science practices, and instead wants them to be aware of the risk and the possible reward. "Not everyone will continue on to academia. Having documentable, openly-available code and analysis practices makes you more marketable in other professions." She recommends that young scientists keep this in mind when they choose what to make public -- if their main goal is academia, they might choose to be less open with their entire scientific process until they have secured tenure. Dr. Adrian Price-Whelan, a prize postdoc funded by both Princeton and the CCA, disagrees. "I've never experienced online harassment or getting 'scooped,' but I have gotten benefits of posting my code on the web."10 He says that posting his code and brainstorming early and often has led to contributions and collaborations with people he wouldn't have known otherwise. He attributes his documentable early-career success to his practice of extreme open science. Dr. Price-Whelan completed his Ph.D. in only five years and immediately won a prestigious prize postdoctoral position, one that many scientists are chosen for only after one or more prior postdoc positions.

Will Open Science Succeed or Fail?

What is remarkable about the science and collaboration at the CCA is how everyone there seems to occupy a duality of spaces. On the one hand, the CCA's scientists and students are present in the physical space of the Simons center. At the same time, everyone is immersed in the digital space of the internet, pushing their code to Github and often communicating between office rooms using Slack, an internet office chat system. In order for open science to succeed and be widely accepted, the Internet must be legitimized as a space of science. Github must become as valid a space of science as a traditional biology lab.

It is unclear whether radical open science will attain legitimacy in all fields of science, though. The traditional spaces of science were built to be exclusive, and as a result, modern science as we know it was built on this very reality. The values that traditional science has always held stand in direct opposition to the values of the Internet. Traditional science was a gentleman's pursuit and as such, only gentlemen were allowed to observe and comment upon it. One had to be invited in order to enter the spaces of science. Even those who were granted consistent access were not always permitted to practice science. A key example of this can be found in the relationship between Robert Boyle and Robert Hooke during Boyle's years at the Royal Society.11 When Hooke served as Boyle's lab assistant, he was a mere technician in Boyle's eyes. This is not the fault of Boyle, however, as this was the way lab assistants were viewed at the time. Technicians were seen as fully qualified to carry out scientific experiments, but not to comment on them. These technicians were a separate class of people from scientists, a divide that remains today due to the increased professionalization of science. It is important to note that although they carried the experiments out, they were not seen as active forces in science. To "try" an experiment meant to observe it taking place, not to physically carry it out. The manual labor involved was not seen as fitting for a gentleman member of society. This compartmentalization extended even to the way lab notes were taken. While the observer-scientist was allowed to state his observations in the first person, the technician was relegated to third-person language, thereby removing the technician from the picture (as he is not permitted to be an observer).12 This third-person preference remains the standard in science today, as does the professionalized structure that was ultimately built on the exclusivity of the European genteel classes.

In contrast to the traditional spaces of science, the more egalitarian Internet is accessible to all with a device that connects to it. The Internet's early creators valued openness most of all, even above things like security, and structured their creation to preserve this openness. Tim Berners-Lee, head creator of the World Wide Web, speaks often about the importance of an open, free Internet.13 He is quoted in the New York Times saying, "I spent a lot of time trying to make sure people could put anything on the web, that it was universal."14 On the occasion of the 25th anniversary of the Internet, he began circulating a "Magna Carta" online bill of rights document to preserve the values of openness and ease of access that the Internet was built on. The purpose of this document was to further establish an international standard of openness on the Web.15 He is an avid supporter of net neutrality, the idea that companies should not be able to pay for consumers to have a faster browsing experience. Further, he seems to think that the openness of the Internet is foundational to democracy itself in the present era, telling The Guardian in the very same article, "Unless we have an open, neutral Internet we can rely on without worrying about what's happening at the back door, we can't have open government, good democracy, good healthcare, connected communities and diversity of culture." It is clear that if Berners-Lee has a say in the matter, the openness of the Internet is not going anywhere.

Ultimately, traditional science and open science spaces cannot coexist without one of them budging on their core values. The Internet's foundation of egalitarianism and science's foundation of exclusivity are necessarily at odds with one another. In order for a resolution to be reached, one or both must budge.16 We are currently at that crossroads, and we think we are beginning to witness the start of a paradigm shift within science -- this time driven not by a grand new theory, but by a new methodology: that of open science. The initial stages of this shift are already observable at places like the CCA, as well as in how every branch of science is currently thinking about how it might be affected by radical open science. That is exactly what a paradigm shift does, according to Kuhn; it affects every branch of science in some way, and once the shift occurs, we cannot imagine going back to the way things used to be. As the egalitarian nature of the CCA shows, this paradigm shift has the potential to be an incredibly positive force in science, making it more widely accessible to all. The CCA, a champion of open science practices in the astrophysics community, is a relatively new organization -- it just opened its doors this past November. Many established scientists remain skeptical of welcoming extreme open science while the ethics surrounding transparency, privacy, and the ability to give proper credit are still being worked out. As Professor Hogg says, "This is all the more reason that we should encourage ethical discussions in the community, and also encourage those discussions to be specific and technical."

However, in this new era of giant datasets, supercomputers, and cloud storage, open science might be the wisest way to go. When terabytes upon terabytes of information are collected every night from telescopes surveying the universe,17 there is too much data to be studied alone. We, as scientists, might only get to the biggest answers by embracing the risk of sharing the workload in the most collaborative and open way that we can. Once traditional science sees that the power to answer some of the most outstanding questions lies in extreme computation and collaboration, it might have no choice but to make room for open science at the table.

Notes

1. Shapin, Steven. 1996. The Scientific Revolution. Chicago, IL: University of Chicago Press.
2. Arrowsmith. Directed by John Ford. Produced by Samuel Goldwyn. By Sinclair Lewis. Performed by Ronald Colman and Helen Hayes.
3. Ginsparg, Paul. "It was twenty years ago today . . ." ArXiv.org, September 13, 2011, 1-9. Accessed May 21, 2017. doi:arXiv:1108.2700v2.
4. McGovern, Patrick. "SourceForge.net, Where Technology is Going." Lecture, Technology Group, November 15, 2004.
5. Hogg, David W. "open science." Hogg's Research (web log), May 14, 2010. Accessed May 21, 2017. https://hoggresearch.blogspot.com/.
6. Van Rossum, Guido. Computer Programming for Everybody (Revised Proposal). Corporation for National Research Initiatives. July 1999. Accessed May 21, 2017.
7. Hogg, David W. "#DSESummit, day 1." Hogg's Research (web log), October 5, 2015. Accessed May 21, 2017. https://hoggresearch.blogspot.com/.
8. Gousios, Georgios, Bogdan Vasilescu, Alexander Serebrenik, and Andy Zaidman. "Lean GHTorrent: GitHub data on demand." Proceedings of the 11th Working Conference on Mining Software Repositories - MSR 2014, May 31, 2014. doi:10.1145/2597073.2597126.
9. Cruz, Kelle. "Questions about Open Science." Interview by authors. May 3, 2017.
10. Price-Whelan, Adrian. "Questions about Open Science." E-mail interview by authors. May 11, 2017.
11. Shapin, Steven. "The House of Experiment in Seventeenth-Century England," Isis 79 (1988): 373-404.
12. Dear, Peter. "Totius in verba: Rhetoric and Authority in the Early Royal Society." Isis 76 (1985): 145-161.
13. Williams, Lauren C. "On The Internet's 25th Birthday, The Creator Of The Web Pushes For An Online Bill Of Rights." ThinkProgress. March 12, 2014. Accessed May 22, 2017. https://thinkprogress.org/on-the-internets-25th-birthday-the-creator-of-the-web-pushes-for-an-online-bill-of-rights-7db5d9c89fd0.
14. Bilton, Nick. "As the Web Turns 25, Its Creator Talks About Its Future." The New York Times, 11 Mar. 2014. Web. 22 May 2017.
15. Kiss, Jemima. "An online Magna Carta: Berners-Lee calls for bill of rights for web." The Guardian. Guardian News and Media, 12 Mar. 2014. Web. 22 May 2017.
16. Kuhn, Thomas S. 1962. The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
17. Large Synoptic Survey Telescope. "About LSST." Accessed May 21, 2017. https://www.lsst.org/about.


Sharon Huang

MHC 360

Stem cell therapies and approaching regulation for these procedures

The term "stem cell" dates back to the late 1800s, when the scientist Ernst Haeckel used it to describe the fertilized egg that gives rise to all cells of an organism. Stem cells are undifferentiated cells that have the potential to turn into other types of cells. In the human body, they are important for internal repair and can replace damaged tissue. Much of the excitement surrounding the use of stem cells comes from the possibilities they present for regenerative medicine. I am interested in discussing the current uses of stem cells, the disturbing marketing of certain stem cell therapies directly to consumers in ways that escape oversight from the Food and Drug Administration (FDA), and the regulatory environment surrounding stem cells in the United States.

It is important to differentiate between the different types of stem cells and their sources. There are a number of stem cell types, but those of primary focus in scientific research are pluripotent and multipotent stem cells. Multipotent stem cells can give rise to a limited number of cell types and are derived from tissues such as bone marrow, adipose tissue, and peripheral blood. Pluripotent stem cells can give rise to almost any type of cell and are primarily derived from human embryos. In 2006, it was discovered that the introduction of certain transcription factors could convert adult somatic cells to act like pluripotent stem cells; these cells are known as induced pluripotent stem (iPS) cells. However, the conversion method is slow and quite inefficient, and further research is required before clinical use of iPS cells is possible. The discovery of pluripotent stem cells, and the fact that they are derived from human embryos, has prompted controversy over whether or not it is ethical to study and manipulate human embryos for scientific purposes. Less of this kind of attention has been given to the use of multipotent stem cells for therapeutic effects, and some U.S. businesses have begun to advertise unregulated and potentially dangerous procedures.

Stem cell therapies make use of certain types of multipotent stem cells. One therapy that has entered clinical trials is hematopoietic stem cell transplantation. This treatment is used to treat hematologic and lymphoid cancers and has been studied extensively since the 1960s.1 For this procedure, the patient must first undergo a preparative regimen that involves chemotherapy to get rid of cancerous cells. Donor stem cells harvested from peripheral blood or bone marrow are then transplanted via intravenous infusion to replenish blood stem cells. The "donor" stem cells can come from the patients themselves (autologous) or from histocompatible donors (allogeneic). The procedure has resulted in higher cure rates and longer remission times, but mortality rates caused by treatment complications are still high as well. One of these complications is graft-versus-host disease (GVHD). Each person has a unique set of histocompatibility antigens present on the surface of their cells that help the immune system recognize "self" versus foreign cells. The degree of histocompatibility between the patient and the donor cells is related to the severity of GVHD, and older patients are more vulnerable to it. The inherent toxicity of the preparative regimens also contributes to the mortality rate.

In the past decade, adipose-derived stem cell therapies have made an appearance on the market in the United States. The process typically entails a blood extraction and a liposuction. The extract obtained from the liposuction is treated with enzymes to obtain the stromal vascular fraction (SVF), which contains a mix of mesenchymal stem cells and other body cells. Mesenchymal stem cells have the potential to differentiate into bone, cartilage, fat, or muscle cells. The patient's blood is centrifuged to obtain platelet-rich plasma, which is mixed with the stromal vascular fraction and then injected into the desired treatment site. In contrast with the hematopoietic stem cell transplant procedure, this process is quite crude. Clinics that provide these therapies promise to treat a multitude of diseases, but use only anecdotal evidence and patient testimonials to back up their claims. There is a lack of peer-reviewed evidence that the procedure has any effectiveness, which is very worrisome. The marketing of these treatments is rife with ethical issues, which I will discuss through an examination of certain cases.

In November 2016, a lawsuit was filed against U.S. Stem Cell (previously known as Bioheart), a publicly traded company located in Florida. Three patients, all elderly women in their late 70s and early 80s, had each paid $5,000 in 2015 to receive adipose-derived stem cell injections in both eyes to treat their age-related macular degeneration.2 All three presented with detached retinas shortly after the procedure. One patient is now completely blind because of the treatment. The other two patients have lost most of their eyesight and are in a much worse state than the level of vision loss they might have experienced after one year had they not received the "treatment." Two of the patients had discovered the clinic through its posting of a clinical trial on www.clinicaltrials.gov and mistakenly thought they were participating in a clinical trial approved by the government. Dr. Thomas A. Albini at the University of Miami School of Medicine had treated two of the patients and noted the bad practice of injecting both eyes at once.3 One would usually treat one eye at a time, both to confirm the treatment was effective and so that, in case of any problems, the patient would at least still have vision in one eye. This case made three things clear: patients are in danger because of these stem cell procedures, potentially dangerous procedures are escaping FDA oversight, and patients need to be better informed and educated about how clinical trials work and are approved.

Another lawsuit was filed in November 2016 against StemGenex, a La Jolla clinic that also offered adipose-derived stem cell treatments.4 This class-action lawsuit seeks reparations for misleading advertising. The lawsuit claims that StemGenex profited by "targeting the ill and the elderly" and had "no reasonable basis for its marketing claim that the Stem Cell Treatments were effective to treat diseases as advertised." Patients cite that the StemGenex website boasted a 100% customer satisfaction rate in a misleading pie chart and claimed that its stem cell therapy could effectively treat Parkinson's, lung disease, multiple sclerosis, Alzheimer's, and a number of other diseases. Patients paid $15,000 for their treatments and saw no positive effects afterwards. Thankfully, no patients reported any serious consequences after their treatment.

U.S. Stem Cell and other clinics have used the argument that autologous stem cell interventions are not classified as drugs or drug-related devices5 since they come from the patients themselves and are for "homologous use." They also claim that the cells are "minimally manipulated," so the FDA has no hand in approving or disapproving the procedures. However, a 2014 draft guidance from the FDA states that production of the SVF via enzymatic digestion does not meet the criteria for minimal manipulation.6 Furthermore, the FDA defines "homologous use" as "the repair, reconstruction, replacement, or supplementation of a recipient's cells or tissue with an HCT/P (human cells, tissues, and cellular and tissue-based product) that perform the same basic function or functions in the recipient as in the donor," which is clearly not the case with adipose-derived stem cell interventions. Attempting to repair photoreceptor cells with a solution of mesenchymal stem cells and other materials does not, in my interpretation, fall within the statement "that perform the same basic function." Clinics also cite a surgical exemption, 21 CFR 1271.5, to condone their procedures, which states: "You are not required to comply with the requirements of this part if you are an establishment that removes HCT/P...and implants such HCT/P into the same individual during the same surgical procedure." The FDA has since clarified that the processing of adipose tissue to obtain SVF means the procedure is no longer returning the same HCT/P in the "same" surgical procedure. As such, it seems that the exemptions clinics claim to have are actually not applicable to these stem cell therapies according to the FDA. All of these procedures need to be approved by the FDA before going to market, and the FDA needs to be more active in investigating businesses that do not comply with its regulations.

Official legal concerns aside, it seems to me that the clinics described want to jump over regulatory fences so they can turn a larger profit from patients drawn in by the "miraculous possibilities" stem cells might provide -- a real ethical concern. Leigh Turner, a professor of bioethics at the University of Minnesota, has written a number of papers about the ethical and regulatory problems surrounding the stem cell industry in its current state. Turner co-authored a paper that looked into the prevalence of clinics advertising stem cell therapies through direct-to-consumer marketing.7 By performing online keyword searches for terms such as "stem cell treatment" and "stem cell therapy," the authors discovered that 570 clinics in the United States advertised some type of stem cell intervention (as of 2016) and that 351 of those clinics engaged in direct-to-consumer marketing of stem cell treatments. As mentioned before, these clinics market their procedures for a variety of applications, such as neurological disorders, degenerative conditions, and even cosmetic purposes, and there is no peer-reviewed evidence that the therapies are useful for the treatment of any of the mentioned conditions. Patients are also left to decide whether the treatment would be beneficial with no unbiased professional opinion to guide their decision. This advertising targets a vulnerable patient population, and the treatments expose patients to potential psychological and physical harm. The combination of misleading advertising and the high costs of these procedures presents a serious ethical issue and seriously undermines the FDA. It might be a different story if the treatments were completely benign, with no possible long-term consequences for patients (perhaps something similar to supplements, which are not regulated by the FDA), but this is not the case. These procedures must undergo clinical trials before being allowed to be sold to the public.

In essence, what clinics that market directly to consumers benefit from is the fact that most have not tested their products in the rigorous clinical trials that are usually quite costly. Clinical trials proceed through five phases. Phase 0 tests the pharmacodynamics and pharmacokinetics of the potential drug in 10-15 subjects. Phase 1 consists of 20-80 subjects and screens for safe dosage ranges and possible side effects. Phase 2 establishes the efficacy of the treatment and further evaluates the overall safety and possible side effects of the drug in 100-300 subjects. Phase 3 is similar to Phase 2 but is of a larger scope, testing 1,000-3,000 patients. Phase 4 takes place after the drug goes to market; it is the continual monitoring of users to confirm that the drug remains safe. Phase 3 is usually the most costly and the most difficult to pass successfully. All phases are necessary for safety, to observe how different types of people will react to the drug, and it is unethical to market potentially unsafe treatments to unsuspecting patients.
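The phase structure just described can be restated compactly; the figures below are those given in this paragraph, offered as a descriptive summary rather than a regulatory specification:

    # Clinical trial phases as described in the text:
    # phase -> (subjects, primary question).
    TRIAL_PHASES = {
        0: ("10-15 subjects", "pharmacodynamics and pharmacokinetics"),
        1: ("20-80 subjects", "safe dosage range and side effects"),
        2: ("100-300 subjects", "efficacy and further safety evaluation"),
        3: ("1,000-3,000 patients", "large-scale efficacy; costliest to pass"),
        4: ("post-market users", "continued monitoring for safety"),
    }

    for phase, (subjects, question) in TRIAL_PHASES.items():
        print(f"Phase {phase}: {question} ({subjects})")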

In the past, there were complaints about regulations on stem cell research and therapies being too restrictive and burdensome. Even if these claims were true, scientists should work in a way that attempts to comply with regulations rather than circumvent them altogether. To get an idea of the regulatory environment surrounding stem cell therapies, I will outline some of the major events that have occurred in the past decade. Under President George W. Bush, federal funding of stem cell research was not allowed, except for work on certain pre-existing cell lines. During that time, many scientists were reluctant to touch stem cells for fear of losing their grants, which pushed stem cell research into the private sector. In 2009, President Obama overturned that decision and removed the funding restrictions on stem cell research. Since then, other documents calling for the lowering of safety and efficacy standards for stem cell therapies have been presented to the House of Representatives. Proponents of the Reliable and Effective Growth for Regenerative Health Options that Improve Wellness (REGROW) Act wanted the FDA to provide conditional approval of cell therapies that demonstrated safety and a "reasonable expectation" of efficacy. This would essentially allow conditionally approved therapies to skip Phase 3 clinical trials. The act did not pass, but a similar document, the 21st Century Cures Act, passed in December 2016. This legislation creates a "Regenerative Advanced Therapies" designation that allows therapies related to regenerative medicine to be expedited for review and approval under the "Breakthrough Therapy" designation if "preliminary clinical evidence indicates that the drug has the potential to address unmet medical needs." It is much less extreme than the REGROW Act and leaves the FDA considerable power over which therapies can get approved. It isn't clear whether this will be a good solution in the long run, but it seems acceptable as long as standards for safety and efficacy are not compromised in favor of approving stem cell therapies.

However, interpretation of the 21st Century Cures Act will rely heavily on the FDA commissioner, which might be worrying under our current presidential administration.

It is interesting to see legislation passed to accelerate the approval of stem cell therapies when there is already a surplus of businesses offering unregulated ones. The Cures Act might not be a bad piece of legislation, but the FDA faces a more serious issue in dealing with these unregulated therapies. These clinics should be held responsible for noncompliance with federal regulations because of the danger they pose to ill people. It also seems that patients need to be better educated about how clinical trials are performed and what they should expect from a legitimate clinical trial, should they choose to participate in one. Still, the main concern that needs to be addressed going forward is the current lack of oversight of the stem cell industry in the United States and the bad medical practice that may be occurring within these clinics. I implore the FDA to broaden its investigations and continue to monitor the market very closely in the future.


References

1. Copelan EA. Hematopoietic Stem-Cell Transplantation. N Engl J Med 2006;354:1813-26.

2. Kuriyan AE, Albini TA, Townsend JH, Rodriguez M, Pandya HK, Leonard II RE, Parrott MB, Rosenfeld PJ, Flynn Jr. HW, Goldberg JL. Vision Loss after Intravitreal Injection of Autologous "Stem Cells" for AMD. N Engl J Med 2017;376:1047-53.

3. Grady D. Patients Lose Sight After Stem Cells Are Injected Into Their Eyes. The New York Times 2017. https://www.nytimes.com/2017/03/15/health/eyes-stem-cells-injections.html

4. Hiltzik M. The stem cell therapies offered by this La Jolla clinic aren't FDA approved, may not work — and cost $15,000. The LA Times 2017. http://www.latimes.com/business/hiltzik/la-fi-hiltzik-stemgenex-20170330-story.html

5. NSI Stem Cell. FDA Compliant Adipose Stem Cell Therapy In U.S. https://nsistemcell.com/fda-compliant-adipose-stem-cell-therapy/

6. Turner L. US stem cell clinics, patient safety, and the FDA. Trends Mol Med. 2015;21(5):271-3.

7. Turner L, Knoepfler P. Selling Stem Cells in the USA: Assessing the Direct-to-Consumer Industry. Cell Stem Cell. 2016 Aug 4;19(2):154-7.


“Political Scientists”

Lisa Li

MHC 360

The relationship between science and politics in the United States has been one marked by caution and apprehension. Science, an institution built on the values of objectivity and neutrality, questions the physical and natural world through the scientific method to avoid bias. Politics, on the contrary, is founded on a socially constructed set of ethics and values. Despite their differences, science and politics have crossed paths in the past. In 1945, the development of the atomic bomb put an end to World War II. And during the Cold War, an increase in state funding catalyzed technological advances that would play a significant role in the arms race and the space race. However, when scientists decide to take a political stance or engage in activism, they are often scrutinized by both the scientific community and the general public. The question remains whether this scrutiny is justified. After all, science is political. For scientists, success in the field is contingent on the ability to stay objective. But should scientists who make political statements risk losing their credibility?

Many scientists have been cautious about taking a strong stance on politics. Nevertheless, there were also scientists who embraced their social conscience – some of whom used their platform for advocacy that would impact the political discourse. One such figure was Linus C. Pauling, often described as a founder of molecular biology. Pauling was a brilliant scientist whose work contributed to the understanding of the cause of sickle cell anemia, the nature of chemical bonds, and the structure of proteins. His work on genes and proteins would eventually propel the 1953 discovery of the DNA double helix by James Watson and Francis Crick, who later shared the Nobel Prize with Maurice Wilkins. Pauling received international plaudits as a chemist, but also developed a reputation as a peace activist. Before the United States' involvement in World War II, Pauling was openly in favor of intervention to stop the spread of fascism. In fact, Pauling and many others at Caltech benefitted from the increased wartime funding – with Pauling overseeing the development of innovations to be used during the war, including an artificial substitute for blood plasma and an apparatus that could measure oxygen levels in submarines.

Initially keeping his political views relatively private, Pauling began to shift his views when he witnessed a growing sense of nationalism and racial tension. Pauling had hired George H. Nimaka, a Japanese-American, as a gardener at his home.

Several days later, Pauling discovered anti-Japanese graffiti on his garage door. In bright red, a painted message read, "Americans die but we love Japs- Japs work here Pauling," alongside an image of the rising sun flag. When he condemned the incident, he received threats against himself and his family. But it was the atomic bombing of Hiroshima and Nagasaki in 1945, which killed thousands of innocent civilians, that would most greatly influence Pauling's political involvement. By 1949, he helped organize a Congress for Peace. Dr. W. E. B. DuBois, singer Paul Robeson, actor Charlie Chaplin, and O. John Rogge, a former Assistant Attorney General of the United States, were among the participants. In response, the State Department denounced the group, claiming that it was "devoted to providing an apologia for the Moscow point of view." Nevertheless, Pauling continued to advocate for peace by starting a "peace crusade."

Pauling's efforts raised concerns from Senator Joseph R. McCarthy, the Wisconsin Republican who was chairman of the Senate Permanent Subcommittee on Investigations. He claimed that Pauling had a record of memberships in Communist organizations, despite Pauling's denial that he was ever a Communist. Partly as a result of the Senator's attack, the State Department denied Pauling a passport in April 1952, preventing him from travelling abroad. Due to this restriction, Pauling was unable to participate in a conference on the structure of proteins. Meanwhile, Watson and Crick were able to access X-ray crystallography data from researchers at King's College in London that allowed them to disprove Pauling's triple helix model of DNA. Many years later, Pauling continued to advocate for international peace and against the testing of nuclear weapons. With regard to nuclear weapons, Pauling said, "The power to destroy the world by the use of nuclear weapons is a power that cannot be used – we cannot accept the idea of such monstrous immorality. The time has now come for morality to take its proper place in the conduct of world affairs; the time has now come for the nations of the world to submit to the just regulation of their conduct by international law."

Pauling's controversial position was one that many of his colleagues avoided. Yet, as a scientist, this only gave him additional reasons to promote a political message of peace. The damaging consequences of nuclear weapons were clear to Pauling. Despite the ability of these weapons to greatly influence international politics, investing in their development would be devastating to the health of those affected. In Pauling's view, the decision to become a peace activist was not only necessary but also a moral obligation.

The head of the Manhattan Project and "father of the atomic bomb," J. Robert Oppenheimer, also found himself conflicted after the atomic bombing of Hiroshima and Nagasaki. Oppenheimer's work led to the end of World War II and earned him the admiration of many Americans, but it also generated controversy surrounding the ethics of such a powerful weapon. Among the scientific community, the atomic bomb was a tragedy; the idea that scientific endeavors were involved in such a ruthless act raised deep concerns. Despite Oppenheimer celebrating the end of the war and the success of the Manhattan Project, the death toll and chilling descriptions of radiation sickness had a sobering effect. In his first meeting with President Truman on October 25, 1945, Oppenheimer said, "Mr. President, I feel I have blood on my hands." Indeed, the moral implications of this event would thereafter change Oppenheimer's fate in cooperating with the government.

After the war, the Atomic Energy Commission (AEC) was set up to replace the Manhattan Project. The AEC was charged with overseeing all atomic research and development in the United States. Oppenheimer served as the chairman of its General Advisory Committee and resisted efforts to develop the hydrogen bomb. During the era of the Cold War, his reluctance caused suspicion and controversy. To further complicate the issue, many of his graduate students had been under investigation in 1943 for left-wing sympathies and conspiracies. Several months earlier, Oppenheimer had been approached by "intermediaries" for an official at the Soviet consulate, who discussed passing on secret work being done at Berkeley. Oppenheimer declined to share such information and later refused to reveal most of the men to General Leslie Groves, claiming they posed no security risk. In his role as a political advisor, Oppenheimer had made enemies, including Lewis Strauss, an AEC commissioner who had long resented Oppenheimer for humiliating him before Congress years earlier. On December 21, 1953, Strauss accused Oppenheimer of disloyalty and presented a list of charges against him. Oppenheimer refused to resign and demanded a hearing. Strauss arranged for the FBI to tap Oppenheimer's phones, and detailed transcripts of Oppenheimer's discussions with his lawyer were provided to Strauss. Many scientists and public officials attested to Oppenheimer's loyalty and indisputable service to the nation, but on May 27, 1954, Oppenheimer was denied security clearance and lost his position at the AEC.

Despite Pauling and Oppenheimer's scientific contributions to war efforts, they would both be reprimanded for their political involvement. This uneasy relationship between scientists and their social conscience revealed the severity of expressing political opposition. Scientists could successfully cooperate with politics under one condition: that they align themselves with the ideology of those in political power. If a scientist chose to challenge the political discourse, that scientist would be vulnerable to persecution. Moreover, a scientist who was also a government official would be less free to express discontent about political concerns. Because Pauling did not hold any classified information and was not a government official, he was able to advocate more freely.
Oppenheimer, having been part of a government assignment, would face greater repercussions; he would lose his position as a government scientist and his dignity – regardless of his contributions that undeniably changed the course of World War II. The harsh reality was that political ideologies, unlike scientific facts, change based on the ever-shifting atmosphere of international and domestic events. However, Americans' activism against decisions based on political ideologies should be respected. The notion that scientists are better off staying neutral to avoid the risk of losing credibility is a dangerous one. As with every American, scientists should be able to freely engage in activism or express their concerns without the fear of being punished for doing so. Especially at times when science becomes integrated into political action, the perspective of scientists ought to be even more crucial.

In recent years, the debate about scientists finding a political voice has continued. In the field of environmental science, scientists are using their platform to raise awareness of their concerns over the lack of scientific advice integrated into regulation policies. In fact, the attention some scientists have gained through this activism has brought them into the limelight of the mainstream media. While some scientists have warned that being a scientist obligates one to respect the principle of neutrality, one scientist could not have had a more opposing outlook. Barry Commoner, often called the "Paul Revere of Ecology," was among the most prominent environmentalists of the 1950s and 1960s. He was one of the first scientists to declare that scientists held the responsibility to keep the public informed about the dangers posed by advances in science and technology. Commoner, like Pauling and Oppenheimer, recognized after the atomic bombings the power that scientific technologies and political action had over life and death. But as an environmentalist, he also recognized the environmental dangers that would accompany nuclear technologies. His research on the global effects of radioactive fallout, which included studying the concentrations of strontium-90 in the baby teeth of children, contributed materially to the adoption of the Nuclear Test Ban Treaty of 1963. Commoner stood out as a scientist-activist, though, by viewing the environmental crisis as the product of a flawed economic and social system. He claimed that corporate greed, misguided government priorities, and the misuse of technology were responsible. Moreover, he argued that environmental dangers always disproportionately affect the poor and minorities. Unafraid to use political explanations for scientific concerns, Commoner's enthusiasm for science and activism eventually led to his running for president in 1980. He ran on the ticket of the Citizens' Party, which he had founded a year earlier. Although he gained less than 1 percent of the vote, he was able to use his platform as a politician to highlight environmental concerns. Some even suspect that Commoner's decision to run for president was an opportunity to raise awareness of environmental issues. Whether or not this speculation is true, one thing was certain: for Commoner, being a scientist and being an activist were inseparable. As the 'green' movement started to evolve, a scientist named James Hansen would shift the focus of environmental science toward global warming.
In the late 1980s, Hansen became the first scientist to offer evidence that the burning of fossil fuels heats up the planet. Hansen, like Commoner, was an enthusiastic activist. He testified before Congress, marched in rallies, and participated in protests against the Keystone XL pipeline – all while serving as the top climate scientist at the National Aeronautics and Space Administration (NASA). But his position at NASA would restrict his ability to voice his concerns about global warming. In late 2005, after he called on the United States to reduce greenhouse gas emissions, he found that NASA officials had begun filtering public statements and press interviews in an effort to limit his ability to publicly express scientific concern that clashed with the Bush administration's policy of opposing mandatory reductions in greenhouse gas emissions. But Hansen refused to be silenced, arguing that his loyalty was to NASA's mission statement – "to understand and protect our home planet." Hansen, like Commoner, also believed that communicating with the public was essential, and perhaps the "only thing capable of overcoming the special interests that have obfuscated the topic." As the head of the Goddard Institute for Space Studies at NASA, he asserted that avoiding the worst impacts of climate change would require sweeping changes in energy and politics, including investments in new nuclear technology, a carbon tax on fossil fuels, and a new political party free of corporate interests. His public stand eventually brought about reforms of NASA's public relations policy. In February 2006, NASA Administrator Michael Griffin issued an agency-wide statement clarifying that the role of public affairs officers was not "to alter, filter or adjust engineering or scientific material produced by NASA's technical staff." In 2013, Hansen quit his position at NASA to devote himself to the fight against climate change. "As a government employee, you can't testify against the government," he said in an interview.

For scientists, being politically outspoken is rare because of the uneasy relationship science and politics share. Yet it is essential, as it is for any American, that they be able to express their political views without being penalized for doing so – whether those views are related to science or not. At times when science is involved in political action, scientists' knowledge can help guide the political discourse. The political effects Pauling, Commoner, and Hansen had on peace, nuclear testing, and public relations policy were not insignificant. Pauling's recognition, reflected in his Nobel Peace Prize in 1962, Commoner's contribution to the Nuclear Test Ban Treaty of 1963, and Hansen's effect on science communication prove this point. Recently, scientists have begun to react to current political agendas that undermine the role of science in our society. Despite receiving some backlash, this rude awakening may be needed for science, as an institution, to realize its potential to play a role in politics. Science, like politics, affects everyone – not just scientists. Scientists must understand that science does not stand alone. It is contingent on societal acceptance, and often this acceptance must be accompanied by morality. Scientists need to display this morality to relay the message science sends to the general public. If they do not, scientists risk becoming background noise. Instead of standing on the sidelines, scientists should be politically outspoken because science will always be political.
But perhaps scientists could help make politics more scientific.


Veena Mehta

Dr. Harold Varmus

The Purpose, Practice, and Politics of Science

15 May 2017

Biting Down on Bite Mark Analysis: Limitations of Forensic Odontology

Richard Buckland, wrongfully accused of the brutal murder and rape of two teenage girls, probably sighed in relief when pioneering DNA sequencing technology pointed to Colin Pitchfork as the actual murderer. The case, closed in 1987, became the first to use DNA technology, which matched Pitchfork's DNA to semen found at the crime scene5. DNA sequencing technology quickly gained popularity as a method for evidence analysis, and since 1987, many other wrongfully convicted individuals have been released because of DNA evidence. The new technology was able to identify an exact genetic sequence from DNA found at a crime scene, which would likely match the DNA of only one person. There was certainty in this type of analysis, unlike other forms of identification such as eyewitness testimony or lie detection technologies. Other scientists began to offer new ideas for analyzing evidence from a scientific perspective, and slowly, forensic science blossomed into the widespread field of study that it is today. Forensic science is commonly defined as the use of scientifically based methods to analyze and interpret evidence found at a crime scene, which can then be used in courtrooms to convict guilty individuals or exonerate wrongfully accused ones8. Because of its scientific basis, evidence presented by forensic scientists has historically been considered very accurate and hard to disprove. However, as these methods have been analyzed further, their true validity and accuracy have become questionable and grounds for extensive debate.

A 2016 report written by the President's Council of Advisors on Science and Technology (PCAST) took a stab at the forensic science community when it concluded that the science actually showed very little validity and reliability. The report specifically focused on forensic "feature-comparison" methods, such as DNA identification, fingerprint comparisons, and hair analysis. In contrast to methods like forensic entomology or pathology, these comparison methods aim to compare evidence from the crime scene to a "source" sample. Feature-comparison methods are useful in two different ways – for either inclusion or exclusion of an individual as a suspect. These methods always compare evidence at a crime scene to a known source sample. If the two samples do not match, the known source sample can be excluded from a list of suspects. However, if there is a match, the person from whom the source sample was obtained can be placed in contact with the victim. It is important to note that a "match" does not necessarily equate to "guilty." These methods should ideally be used in conjunction with other evidence and testimony to finally convict a suspect.
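The inclusion/exclusion logic described above can be summarized in a few lines of Python. The feature-set representation is a deliberate simplification of mine, but it captures the key asymmetry: a mismatch excludes a source, while a match merely fails to exclude one.

    # Feature comparison as inclusion vs. exclusion. Features are modeled as
    # simple sets of measured attributes; real methods are far messier.
    def compare(crime_scene: set, source: set) -> str:
        if not crime_scene <= source:
            # Any unexplained difference is grounds for exclusion.
            return "excluded"
        # A match is not proof of guilt; corroborating evidence is required.
        return "cannot be excluded"

    print(compare({"ridge_A", "ridge_B"}, {"ridge_A", "ridge_C"}))  # excluded
    print(compare({"ridge_A"}, {"ridge_A", "ridge_B"}))             # cannot be excluded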

According to the PCAST, the results from feature-comparison methods were not sufficiently reproducible; in the report's terms, the methods lacked foundational validity. Furthermore, the methods seemed to lack applied validity, or proof that the particular method had been applied accurately in practice. One of the techniques that the PCAST discredited rather harshly was bite mark analysis, which attempts to compare dental impressions made on skin or other surfaces to the dental features of a suspect. Not only does the report claim that the current methods lack both foundational and applied validity, the PCAST also projects that bite mark analysis will likely never develop into a scientifically valid method8.

Bite mark analysis is just one of many methods utilized in the broader field of forensic odontology. Forensic odontology, or forensic dentistry, is a field of study that employs teeth and other dental markings for identification and crime investigation. Although the field was only accepted for courtroom use in the mid-1800s, dental evidence has been documented for use in identification as early as 49 AD. Many sources cite the case of Agrippina, wife of the Roman Emperor Claudius, as the beginning of forensic odontology. Jealous of Lollia Paulina, a rich divorcee who seemed to threaten her position as Empress, Agrippina ordered her soldiers to kill Lollia and bring back the severed head as proof of death. According to the story, the severed head was completely distorted, and Agrippina was only able to positively identify Lollia after examining her discolored front teeth9. Many other cases are often cited to illustrate the long history of simple dental identification, but the famous case of the Bazar de la Charité fire in 1897 marks the first use of forensic odontology as a testable and rigorous science. The fire killed 126 Parisian aristocrats, all but 30 of whom were identified using jewelry or other personal items. The rest remained unidentified until a group of dentists was invited to examine the dental remains and reportedly identified all but 5 victims based on dental "records" and unique dental markings10. From its humble beginnings, forensic odontology has grown into a prominent field and is now a specialty within the dentistry profession. Specialists are involved in a wide variety of cases that include techniques other than bite mark analysis. As in the Bazar de la Charité fire case, forensic odontology has proved useful in human identification, particularly when the corpse is old or heavily damaged. Teeth are coated with enamel, the most durable substance in the human body, making them very difficult to damage. In addition, forensic dentists play a role in analyzing maxillofacial trauma, which can be useful in determining how an assault or murder happened11.

Despite the field's significant contributions, the PCAST's bleak outlook can also be supported by many historical cases. For example, the Innocence Project was instrumental in exonerating Ray Krone, an Arizona native who served 10 years in prison on a wrongful conviction of murder and kidnapping. The victim was found with bite marks on her chest and neck, and Krone was identified as having been with the victim on the night of the murder. Maintaining that he was innocent, Krone willingly submitted to making a Styrofoam impression of his teeth for comparison with the bite marks found on the victim. To Krone's surprise, the forensic odontologist on the case testified that Krone's dental impression was a "perfect match" for the bite marks found on the victim. Krone was sentenced to death for the murder and to a consecutive twenty-one-year sentence for the kidnapping. Ten years after his sentencing, DNA testing was used to compare saliva and blood found at the crime scene to Krone's DNA. The results supported Krone's plea of innocence and actually pointed to Kenneth Phillips as the murderer. Krone's case is just one of many examples that highlight the unreliability of bite mark analysis7.

Two main reasons explain this unreliability. One is the lack of precision and accuracy in the technique itself. Bite marks are often analyzed from skin, which is elastic and constantly changing over time. In deep marks, excessive inflammation of the area and the healing process may distort the mark. Since analysts often isolate slight differences in the dental structure to identify a potential match, changes in skin elasticity or distortion in highly inflamed areas are cause for concern. It is also possible that natural changes in dentition may occur, resulting in potential false positive results2. Furthermore, bite mark analysis, like many other "feature-comparison" methods of forensic science, is based on the assumption that all potential "source" samples are significantly different from one another, so that the evidence at the crime scene would match only one source. However, this assumption has been proven false on many occasions. In 2011, Bush et al. analyzed the anterior lower tooth angles in 344 dental scans to identify potential matches. Specifically looking at 6 of the lower teeth, the researchers compared the placement and angles of these six teeth across the scans, for a total of 58,996 pairwise comparisons. The researchers expected the chance of finding an exact match for all 6 teeth to be less than one in a trillion. However, their results indicate that 16 of the dental scans showed the same placement and angle of all 6 teeth, a significantly higher proportion than expected1. This proportion would likely increase when elasticity of the skin and inflammation are considered. Practically, this means that a bite mark found at a crime scene could actually match a number of people. Since analysts only compare evidence to bite marks of potential suspects, it is probable that a suspect would match a particular bite mark even if he were never actually in contact with the victim or at the crime scene. In methods like forensic entomology, where time of death can be estimated from insect development and colonization at the crime scene, there is no need for comparison to a "source," making the validity and accuracy of these methods less concerning.
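A quick back-of-the-envelope check shows how striking the Bush et al. result is. The only inputs below are taken from the passage above (344 scans, all-pairs comparison, 16 observed matches, an expected rate below one in a trillion); the rest is standard combinatorics.

    from math import comb

    n_scans = 344
    n_pairs = comb(n_scans, 2)      # 344 choose 2 = 58,996 pairwise comparisons
    observed = 16 / n_pairs         # ~2.7e-4, the observed match rate
    expected = 1e-12                # the "less than one in a trillion" expectation

    print(f"pairwise comparisons: {n_pairs}")
    print(f"observed match rate: {observed:.1e}")
    print(f"observed / expected: {observed / expected:.1e}")  # ~2.7e+08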

The second reason for unreliability in bite mark analysis is the frequent bias of those who perform these techniques. Despite the overwhelming evidence that bite marks do not have the needed specificity, a survey of seventy-two forensic odontologists showed that 91% believed dental markings and patterns were unique to every individual6. Because there are no established national or international standards of comparison or databases for reference, much of the analysis is based on the experience and subjective opinion of the particular analyst. Results from an experiment conducted by the American Board of Forensic Odontology show that there is even disagreement among dentists when trying to characterize a bite mark. In this study, 38 practicing, board-certified forensic odontologists were independently presented with 100 cases and asked to determine whether the bite mark was human and whether there were distinct characteristics that could be used for identification. 90% or more of the odontologists agreed on these questions in only 8 of the 100 cases2,4. In addition, many of the cases that involve bite marks are high profile, usually involving a sexual assault or murder, and this can place unwanted pressure on odontologists to secure the conviction of a particular suspect. This bias has been analyzed in the larger field of forensic science, and a 2015 press release by the FBI actually cited purposeful, erroneous testimony by hair analysts favoring the prosecution in 96% of the 268 cases analyzed3. This combination of questionable techniques and biased scientists creates an atmosphere of distrust toward forensic science, and although the field has created a platform for objective analysis in crime investigations, there is also significant room for error.

Despite the PCAST's claim that bite mark analysis will never be a scientifically valid method, some changes to the field of forensic odontology, and to the broader field of forensic science, might prevent wrongful convictions like Ray Krone's. First, significant research needs to be conducted in order to determine national standards for analyzing and comparing bite marks. For example, extensive databases of known DNA sequences are widely available, which removes bias and subjective interpretation from DNA analysis. If two sequences match, the origin of the unknown sample can be stated with a quantifiable level of uncertainty. In addition, much research has been conducted to identify the genetic profiles of various populations, which makes it easy to assign an uncertainty to a particular DNA "match"8. These resources and standards do not yet exist for forensic odontology or bite mark analysis.
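To illustrate how such population data yields a quantifiable uncertainty, the sketch below computes a random match probability with the standard "product rule," assuming Hardy-Weinberg equilibrium and independent loci. The allele frequencies are invented for illustration; real casework uses measured population frequencies.

    # Random match probability via the product rule across independent loci.
    def genotype_freq(p: float, q: float, heterozygous: bool) -> float:
        # Expected genotype frequency under Hardy-Weinberg equilibrium.
        return 2 * p * q if heterozygous else p * p

    # Hypothetical profile typed at four STR loci: (p, q, heterozygous?)
    profile = [(0.10, 0.20, True), (0.05, 0.05, False),
               (0.15, 0.30, True), (0.08, 0.12, True)]

    rmp = 1.0
    for p, q, het in profile:
        rmp *= genotype_freq(p, q, het)
    print(f"random match probability: {rmp:.2e}")  # ~1.7e-07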

Considering the misconceptions held by many analysts, forensic odontologists who analyze evidence and testify in courtrooms should be directly involved in research to improve the field. This will allow for better understanding of the limitations of the field itself and better communication of those limitations when testifying. Finally, all forensic scientists should be given evidence to analyze blindly. Allowing scientists to know the circumstances surrounding a case creates unnecessary biases and pressures when conducting the analysis. In its current state, testimony from these unreliable methods may be causing more harm than good. However, with certain changes and an effort to remove bias, the field of forensic science has the potential to transform crime investigation and safeguard against wrongful convictions.


References

1. Bush, M.A., Bush, P.J., and H.D. Sheets. (2011). Statistical evidence for the similarity of the human dentition. Journal of Forensic Sciences, 56(1), 118-23.

2. Committee on Identifying the Needs of the Forensic Sciences Community & National Research Council. Strengthening Forensic Science in the United States: A Path Forward. National Academy of Sciences: National Academies Press, 2009. Print.

3. "FBI Testimony on Microscopic Hair Analysis Contained Errors in at Least 90 Percent of Cases in Ongoing Review." FBI. FBI, 20 Apr. 2015. Web.

4. Freeman, A. and Pretty, I. "Construct validity of bitemark assessments using the ABFO decision tree." Annual Meeting of the American Academy of Forensic Sciences, 2016. Conference Presentation.

5. "No parole for Colin Pitchfork: First killer caught by DNA." BBC News, BBC, 29 April 2016. Web. 7 May 2017.

6. Pretty, I. A. (2003). A Web-Based Survey of Odontologist's Opinions Concerning Bitemark Analyses. Journal of Forensic Sciences, 48(5), 1117-20. Web.

7. "Ray Krone." Innocence Project. Innocence Project, n.d. Web. 06 May 2017.

8. Report to the President, Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods. Washington, D.C.: Executive Office of the President of the United States, President's Council of Advisors on Science and Technology, 2016. Print.

9. Singh K, Anandani C, Bhullar RK, Agrawal A, Chaudhary H, et al. (2012). Teeth and their Secrets - Forensic Dentistry. Journal of Forensic Research, 3(1), 141.

10. Taylor, J. (2009). A brief history of forensic odontology and disaster victim identification practices in Australia. Journal of Forensic Odonto-Stomatology, 27(2), 64-74.

11. Verma, A. K., Kumar, S., Rathore, S., & Pandey, A. (2014). Role of dental expert in forensic odontology. National Journal of Maxillofacial Surgery, 5(1), 2-5.


Michael Joyce

Dr. Varmus

HON 360

May 22, 2017

The Influences on Scientific Research

Scientific research is often thought to be free from much outside influence; however, the necessity of funding and the protection of work have guided and constrained the direction of different fields of scientific inquiry. These two sources of influence are important because they can guide the direction of scientific research or create an environment of less freedom for scientists, in an effort to prevent losing money. Though one can understand investors' desire not to lose money and their choice to invest in something likely to be profitable, it is important to look at the implications that efforts to protect wealth have on research. Science and scientific research are important to many aspects of life, and thus have many influences, the greatest of which is arguably those who fund and support research. In this paper, I will explore how money and patents are affecting the freedom of scientific research. To explore the problems these constraints have created, I will examine the environment the fledgling Indian government built around scientific research. By looking at what happened once the government withdrew from directing those fields, I will illustrate the possibilities for scientific research today when it is freer from influences such as funding and the patenting of research.

Funding has become an important part of scientific research, and those who fund research can influence where their money goes and what it is used for. Scientists who make breakthroughs in many different fields today are very often supported by an institution or another money-granting corporation. This means that a large initial investment can occur before the scientific process begins. This trend is a concern because it "...limits the independence of university researchers, impedes some avenues of investigation, limits access to some research materials, and constrains the open diffusion of research findings" (Glenna 149). There is, unfortunately, a limit to what can be done with the money provided to scientific researchers. Sometimes these limits are placed on the technology and resources that can be used before the funds run out. If more funding became available, researchers might have better resources at their disposal to aid in their research, as well as more opportunities to pursue their findings further without needing to apply for more funding. The limits funders place on how their money is used directly shape the course of scientific research.

An additional way in which funders influence scientific research is the use of patents, a means of protecting an initial investment in scientific research. Glenna states that this trend can help protect advancements and is a way in which the initial outlay of money can be recovered, possibly even making a profit for the investors. This is a change from the way scientific research was conducted long ago, when it was not as motivated by profit as it is today. Fortunately, the physical practice of science has not greatly changed, though it now proceeds with more advanced equipment and more knowledge than were available in the past. Though there has always been a need for financial support of scientific endeavors, the aim has shifted from simply making discoveries and changing the world; there is now a drive to make a substantial profit in the process of doing science. Curiosity about a particular question or phenomenon is no longer grounds enough to perform research; the possibility of financial profit through a patent usually plays an important role in determining which projects will get funding and can continue. Funders of science are able to exert a particularly strong influence on scientific research because of their desire to make back their money, which can typically be done through the use of patents.

Patenting itself reflects larger issues in scientific research, which is increasingly producing intellectual property. Scientists' concerns about intellectual property rights are limiting the possibilities of scientific research. Intellectual property is a creation of the mind that can be used commercially in some way. Though there are emerging fields where physical discoveries are being made on a daily basis, there are also times when the discoveries and breakthroughs are ideas that result from the study. This is a fairly novel phenomenon that is changing the way science is practiced. Instead of scientists creating or finding something physical that they can sell to make a profit, the product of their work is something non-physical. This has led scientists to call for the "...design [of] intellectual property policies to enable inventors to benefit sufficiently from investment of their time and resources to warrant their commitment, but without preventing future inventions or improvements on that existing invention" (Glenna 154). Scientists and those who sponsor their work are calling for changes in the way their work is protected, but they do not want to make those protections so binding that no new scientist can do research. Many believe that their property and hard work must be protected in order to fuel further research, but they also understand that too much protection could backfire and cause a decrease in interest in the area. In time, a balance may be reached at which researchers are rewarded for their research, their work is protected by patents, and young scientists feel they have a chance to make a difference in a field not inundated with patents.

As conditions currently stand, it appears that the focus on patenting and protecting the work of researchers is deterring new scientists from going into research and discouraging scientists from publishing their work. It is often difficult to define intellectual property, especially since it is not usually a physical object that can be delineated easily. There are also questions about whether the thought process and methods that lead to an idea are protected under the patent, especially since that knowledge could eventually lead to the patented final idea. Because of these questions and uncertainties, "...scientists delay scientific publication of research they expect to patent and are reluctant to freely share research results and materials with other scientists" (Bentwich 137). This hesitancy to release information from studies is the result of the drive for stronger patent protection, which has otherwise been a great protector of scientific property. Unfortunately, the problem is that scientists are afraid to share and patent their own work because it is all too possible they would be violating another's patent, which would have monetary and legal consequences. This problem becomes increasingly daunting to someone who is thinking of starting a career in scientific research. These people will have to navigate a labyrinth of legal documents and ownership claims to find a niche in their field that is free for them to work on in order to make their mark on the world. As these niches shrink or fill, there may be a decrease in the number of people entering scientific research, because they will not have a place to make a name for themselves. The patenting of scientific research currently benefits scientists who are already doing research and making strides, as well as those supporting them financially with grants, but the practice will eventually deter, and possibly even prevent, people from conducting research as freely.

The potential improvement of scientific research, free from the impact of outside influence, can be seen by examining the history of scientific research in the modern country of India. Part of what makes India an interesting place to look to for guidance is its history. India is historically a young country with roots stretching back thousands of years, a long tradition of using plants to make medicine, and strong colonial influence from its extended time as a British colony. As a result, scientific institutions have been closely tied to the Indian government, which had strong influence over the types of research done. This was especially evident in the years following India's independence from Britain, when the young Indian government emphasized technology and engineering to develop the country quickly (Mallick 629). Along with the pressure to study these particular fields came an urging toward publication, which was how many academics were able to keep their positions (Mallick 633). Scientists in India found that they did not have the freedom to decide what their research would be about; they were instead pushed into producing results and publications on a consistent basis. Over time, however, the practice of science in India improved and "...practice of science and its products are increasingly being intertwined with social, economic, political, cultural, legal, ethical, institutional and ideological issues" (Mallick 632). As the environment changed and scientists were slowly given more latitude, their work began to span and connect many different fields. Thus, it is likely that some loosening of regulations, and the ability of scientists to explore and connect a variety of fields, helped improve the freedom of scientific research and increase the number of those interested in it, while still protecting the inventions being made.

Outside influences of many forms, especially financial or governmental, have the potential to impinge on the freedom of scientists and scientific research. When those influences are eased or modified, as they were in India, greater liberty in scientific research is possible. In an ever-changing world, the question of how science should be carried out and how best to foster continued scientific research will need to be addressed. When looking back at history, the question of scientific research can easily be overlooked because the problems are not seen, only the solutions. The question of how scientific research can be produced and driven forward has always been present, but the problems that go along with it have changed over time. In the past, there was less concern about the results of scientific research, which were more likely to be a physical item. Today, however, the results of scientific research are more frequently ideas rather than something physical. With this shift, it is becoming difficult for investors in science to recover the money that was invested in the research that made a discovery possible. The result is an increase in patent applications for ideas, which become roadblocks for young scientists and others in the field who want to do innovative research. To solve this dilemma, we can look to the pasts of countries where scientific research and institutions found a balance between growth and freedom in research. Over time, a standard will be set, and the present concerns will fade. Instead, new conflicts will arise that will need to be settled in order for science to continue developing forward.


Works Cited

Bentwich, Miriam. "Changing the Rules of the Game: Addressing the Conflict between Free Access to Scientific Discovery and Intellectual Property Rights." Nature Biotechnology, vol. 28, no. 2, Feb. 2010, pp. 137-140. EBSCOhost, doi:10.1038/nbt0210-137.

Glenna, Leland L., et al. "Intellectual Property, Scientific Independence, and the Efficacy and Environmental Impacts of Genetically Engineered Crops." Rural Sociology, vol. 80, no. 2, June 2015, pp. 147-172. EBSCOhost, doi:10.1111/ruso.12062.

Mallick, Sambit and Haribabu Ejnavarzala. "The Intellectual Property Rights Regime and Emerging Institutional Framework of Scientific Research: Responses from Plant Molecular Biologists in India." Asian Journal of Social Science, vol. 38, no. 1, Jan. 2010, pp. 79-106. EBSCOhost, doi:10.1163/156853110790799973.

Mallick, Sambit. "The Intellectual Property Rights Regime and the Changing Structure of Scientific Research in India: Lessons from the Developing World." Perspectives on Global Development & Technology, vol. 8, no. 4, Dec. 2009, pp. 628-654. EBSCOhost, doi:10.1163/156915009X12583611836172.


Nisma Zakria

Religious Influence on Assisted Reproductive Technologies

Introduction

The overlap between religion and science is far more common than it is perceived to be. Some individuals in society who hold their religion in high regard make some or all of their decisions based on religious attitudes. Examples where religious affiliation has an effect on decisions should be an open discussion, especially when talking about more sensitive matters (i.e., procreation, abortion, infertility therapy). A topic that has generated controversy among religious leaders is assisted reproductive technologies (ART). Hence, I will be looking at some controversial decisions surrounding ART in the context of religion. Afterwards, I will discuss three of the procedures most sought after by people who want to conceive children. Subsequent to that, I will discuss the stances of the primary monotheistic religions, along with Hinduism and Buddhism, on each of these procedures, as these stances play a role in decision-making.

The world of assisted reproductive technologies changed forever when the first test-tube baby, Louise Brown, was born in 1978. She was conceived using in vitro fertilization, which ultimately paved the way for other ART procedures to be invented. Some examples include third-party gamete donation of sperm, eggs, embryos, and uteruses (surrogacy), embryonic stem cell research, and intracytoplasmic sperm injection to counteract male infertility.

It is no secret that approximately 50-80 million people worldwide (8-12% of couples) experience difficulties with procreation (Mindes, Ingram, Kliener, & James, 2003). About one million of these people will eventually turn to ART (Centers for Disease Control and Prevention [CDC], 2007) in order to try to conceive. Many couples that utilize ART take into account a variety of factors that carry significance in decision-making. For some, the decision to utilize ART is influenced by religious values and ideology (Culley, Hudson & Van Roij, 2009). Another reason for discussing these bioethical concepts is that cities or countries governed by a certain religious ruling can determine whether or not ART is allowed in that area. The majority of people who use ART are couples that are unable to conceive or same-sex couples who want a child. Since a large portion of the research focuses on heterosexual couples, I will be exploring issues and problems regarding this population.

Under the umbrella of ART are a number of procedures that all have the end goal of treating infertility. The main types of ART that I will delve into are IVF, the use of donor gametes, and the use of a surrogate in order to conceive a child. As defined by Traina et al. (2008), in vitro fertilization (IVF) is commonly known as the "test-tube baby" procedure, which is helpful for women with blocked fallopian tubes. It begins with a drug regimen to induce superovulation; the harvested ova are then mixed with homologous or donor sperm and cultured for 48-72 hours. At the end of this period, the physician removes a small number of embryos from the culture dish and transfers them to the woman's uterus through a catheter, where one or several may implant and grow normally. Unused viable embryos may be frozen for future use or destroyed, and unused ova may be donated or discarded. The process of in vitro fertilization can also be done with the sperm of a donor if the intended male partner lacks adequate sperm; this falls under the category of a donor gamete. The procedure of surrogacy is used when there is no viable womb. In this case, another woman, the surrogate, goes through IVF and carries the child, a practice that is not legal in most U.S. states (Connor et al., 2012). People may resort to this because the intended mother is prone to miscarriages or cannot carry the baby herself due to health-related issues. Additionally, a single person or a gay couple may use this procedure to have their own child (Traina et al., 2008).

The aforementioned ART procedures entail slightly different ethical dilemmas for couples that are looking to utilize them (Shenfield & Sureau, 2002). One dilemma concerns conceiving outside the body or using a surrogate mother to carry the baby. Another is deciding what to do with leftover embryos. The predicament lies in the moral status of the pre-embryo (Fasouliotis & Schenker, 2000). Three of the most popular perspectives hold that: (a) the pre-embryo does not have a moral status, since it is just a part of the mother's body; (b) the pre-embryo has the potential to become a human being; and (c) the pre-embryo is considered a human being (Fasouliotis & Schenker, 2000). Some religious principles also make it controversial whether to have an anonymous donor, a known donor, or a donor at all; this dilemma revolves around kinship and staying within the family's heritage. A few other dilemmas include whether male masturbation is permissible and, lastly, whether to keep the procedure a secret from others (Connor et al., 2012). Understanding these different viewpoints can help us comprehend any opposition to ART (Connor et al., 2012).

Given all of these ethical dilemmas among the population of people that resorts to ART, it is important to note that a majority of people have core beliefs, most likely tied to their religion, that impact the decision-making process. Therefore, in practice, medical practitioners should at least have an idea of people's religious backgrounds. A Gallup poll shows that Americans reported spiritual beliefs including believing in God (95%), having a religious preference (92%), and being a member of a church (66%) (Gallup, 2002). In the discussion of religious influence, I will cover the positions of Judaism, Roman Catholicism, Protestant Christianity, Islam, Buddhism, and Hinduism, each with its fundamental rules on one's duty to have children. These positions cover a variety of aspects of ART, including natural conception, what is considered a life, and the standpoint on using a surrogate (Connor et al., 2012).

Judaism

The religion of Judaism is categorized into three sects: Reform, Orthodox, and Conservative (Silber, 2005). The sects comprise roughly 85% Reform, 10% Orthodox, and 5% Conservative (Silber, 2005), and most of the rulings on assisted reproductive technologies are based on Orthodox views that have been debated by rabbis (Kahn, 2006). Moreover, the Jewish attitude toward infertility can be learned from the fact that the first commandment of God to Adam was "Be fruitful and multiply" (Genesis 1:28). It can be interpreted from this commandment that Jews are encouraged by their religion to have children. Another source that Jews adhere to stems from rabbinic authorities, who point out that having children can lead to fewer problems and potentially protect a marriage (Kahn, 2006).

The use of donor gametes is controversial among Orthodox Jews. Some rabbis forbid the use of donors, while others say it should remain a private matter between the couple (Silber, 2005). This controversy stems from concerns about kinship and a desire to strictly continue the Jewish heritage line. In terms of surrogacy, most rabbis permit it as long as the surrogate is a Jewish woman. It must also be taken into consideration that these rules were put into place at a time when DNA testing had not yet been invented. That is no longer the case; DNA testing is now far more common and accurate than it once was.

Before any of these procedures are carried out, however, the intended mother is first tested for any abnormalities. This is done because masturbation is considered taboo among Orthodox Jewish men (Haimov-Kochman et al., 2008); hence, checking the woman first may allow the man to avoid this step of the procedure.

Christianity

Roman Catholicism

Christianity has roughly 2 billion followers, and the Roman Catholic Church represents the largest Christian church. The Catholic Church has strong views against ART, and these views are based on the teachings of the Bible and traditional practices. One exceptionally influential piece of literature is St. Augustine of Hippo's "The City of God." This work includes teachings that would shape attitudes toward sexuality and the rules of marriage within the Roman Catholic sect (Benagiano & Mori, 2009). Specifically, in 1987, the Congregation for the Doctrine of the Faith established the principle of nonseparation to officially ban any practice of assisted reproduction, including inseminations, IVF, donor gametes, and gestational carriers. One of the reasons the Catholic Church condemns the use of ART is that it does not respect the sanctity of human life, since procreation is turned into a scientific procedure. The Roman Catholic Church believes that the process of creating life should only be the work of God and not that of man. The ability to create or manipulate life with IVF or otherwise defies the natural process of life (Fasouliotis & Schenker, 2000). The church also holds that if a couple is infertile, this is an experience that allows them to reflect on the virtue of patience and parenthood (Schenker). Another reason for this principle is that separating procreation from sexual intercourse violates the nature of marriage (Benagiano & Mori, 2009). It can be deduced that Roman Catholics see this process as undermining the main purpose of marriage, and it is therefore not permissible.

Protestant Christianity

Protestant Christianity stemmed from an effort to reform the Catholic Church and has therefore separated its ideological principles from those of Catholicism. Since Protestants do not have a centralized organization, their rulings generally come from Christian bioethicists and theologians, and within the denomination itself there are opposing views. Specifically, the Church of England released an official statement in 1984 supporting the use of ART. However, one of its committee members, O'Donovan, was outspoken about his disagreement with the Church's statement (Sutton, 2008). He believes that the use of ART can affect how a parent views their child from a religious point of view; the way a child is conceived can lead a parent to think of their children as possessions instead of gifts from God (Sutton, 2008). With this, the topic of what to do with leftover embryos arises again. Protestant Christians disagree on when an embryo is considered a human, and when there are leftover embryos from an IVF procedure, some Christians deem it impermissible to discard them.

However, options such as freezing an embryo, donating it for research, or even donating it to another couple are becoming more prevalent (Connor et al., 2012).

Islam

Worldwide, Islam is the second largest religion, with about 1.5 billion Muslims (adherents.com, 2010). Within Islam there are two major sects, the Sunni and the Shia, each with its own opinions about the use of ART. Overall, Islam generally supports scientific and technological advances, but with a few limitations depending on the type of procedure. IVF is permitted only when using the wife's eggs and the husband's sperm, with implantation back into the wife's uterus. Third-party donors are not allowed, whether they are providing sperm, eggs, or embryos, or serving as a surrogate (Moosa, 2003). These third-party donors are excluded because of guidelines regarding familial relationships and kinship. Islam holds that gamete donation amounts to adultery, since it brings a foreign person into the production of the child. Moreover, it creates the possibility of incest, as a person might accidentally marry someone conceived from the same donor's gametes, and it complicates kinship, paternity, and inheritance.

Buddhism

As a religion, Buddhism does not have a central authority, and its practice therefore differs around the world; each area in which Buddhism is practiced maintains its own interpretations. Only monks are required to follow the religion's rules strictly, while lay followers of Buddhism may do as they wish so long as it does not harm other people or the natural order. This means that Buddhism generally supports ART procedures, though it still has restrictions. One issue that arises is what to do with surplus embryos, since Buddhists consider them to be lives. As for using third-party donor sperm, it is not prohibited in Buddhism; however, it is suggested that people refrain from this procedure if they can (Schenker, 1992).

Hinduism

Similar to Buddhism, Hinduism also supports the use of ART, with some restrictions. The use of IVF rests on basic fundamental views regarding marriage and family: (1) marriage is considered sacred and permanent; (2) male infertility is not a cause for divorce; (3) the emphasis in reproduction is not just on having children, but on having a male offspring; and (4) it is a religious duty to provide male offspring. The wife of a sterile male could therefore be authorized to have intercourse with a brother-in-law or another member of the husband's family for the purpose of having a male offspring. This is permissible only if the couple has had 8 years of infertility or after 11 years of delivering only female offspring (Schenker, 1992). Hence, the religion suggests that a woman first look within her husband's family for a way to conceive a male child before seeking a sperm donor. If it is not possible to conceive a child with another member of the husband's family, then the sperm donor must be closely related to the husband.

Conclusion

Setting a platform to discuss the effect religious perspectives have on scientific technologies is important both to medical practitioners and to family therapists, who are heavily involved in this process. The concept and practice of assisted reproductive technology is a sensitive topic that must be handled with full awareness of the various cultural and religious perspectives involved. Such awareness can allow the process to flow smoothly without added complications, and a practitioner can then accurately guide a patient to the appropriate form of ART based on the patient's religious affiliation as well as personal beliefs.


References

Adherents.com. (2010). Major religions of the world ranked by adherents. Retrieved from http://www.adherents.com/Religions_By_Adherents.html

Benagiano, G., & Mori, M. (2009). The origins of human sexuality: Procreation or recreation? Ethics, Bioscience, and Life, 4(1), 50-59.

Centers for Disease Control and Prevention. (2007). 2005 assisted reproductive technology success rates: National summary and fertility clinic reports. Atlanta, GA: Author.

Connor, J., Sauer, C., & Doll, K. (2012). Assisted reproductive technologies and world religions: Implications for couples therapy. Journal of Family Psychotherapy, 23(2), 83-98.

Culley, L., Hudson, N., & Van Rooij, F. (2009). Marginalized reproduction: Ethnicity, infertility and reproductive technologies. Sterling, VA: Earthscan.

Fasouliotis, S. J., & Schenker, J. G. (2000). Ethics and assisted reproduction. European Journal of Obstetrics & Gynecology and Reproductive Biology, 90, 171-180.

Gallup, G. H. (2002). Gallup index of leading religious indicators. Retrieved from http://www.gallup.com/poll/5317/gallup-index-leading-relgious-indicators.aspx?

Haimov-Kochman, R., Rosenak, D., Orvieto, R., & Hurwitz, A. (2010). Infertility counseling for orthodox Jewish couples. Fertility & Sterility, 93(6), 1816-1819.

Kahn, S. M. (2006). Making technology familiar: Orthodox Jews and infertility support, advice, and inspiration. Culture, Medicine, & Psychiatry, 30, 467-480.

Mindes, E. J., Ingram, K. M., Kliewer, W., & James, C. A. (2003). Longitudinal analyses of the relation between unsupportive social interactions, coping, threat appraisals, and psychological adjustment among women with fertility problems. Social Science & Medicine, 56, 2165-2180.

Schenker, J. G. (1992). Religious views regarding treatment of infertility by assisted reproductive technologies. Journal of Assisted Reproduction and Genetics, 9(1).

Schenker, J. G. (2000). Women's reproductive health: Monotheistic religious perspectives. International Journal of Gynecology & Obstetrics, 70, 77-86.

Shenfield, F., & Sureau, C. (Eds.). (2002). Ethical dilemmas in reproduction. New York, NY: Parthenon Publishing.

Silber, S. J. (2005). Religious perspectives of ethical issues in ART: Infertility, IVF and Judaism. Middle East Fertility Society Journal, 10, 200-204.

Sutton, A. (2008). Christian bioethics: A guide for the perplexed. London, England: T&T Clark.

Traina, C., Georges, E., Inhorn, M., Kahn, S., & Ryan, M. A. (2008). Compatible contradictions: Religion and the naturalization of assisted reproduction. In Altering nature: Volume II: Religion, biotechnology, and public policy.


Spogmay Khan

MHC 360

Dr. Varmus

Patents and EpiPen’s Monopolistic Control

What happens when one has a life-threatening allergic reaction and the drug he or she needs is unaffordable? This is the situation for individuals who suffer severe allergic reactions and struggle financially to purchase EpiPens. Anaphylaxis, the most severe form of allergic reaction, is life-threatening and occurs very quickly; it is dangerous because of the body's rapid response within minutes of exposure to the allergen. It can be triggered by an allergy to a particular food, an insect bite or sting, a medication, latex, or a variety of other allergens. When the allergen comes into contact with IgE antibodies on human cells, histamine is released to neighboring cells. Histamine release, through a chain of further interactions, causes redness, swelling, constriction of smooth muscle (in the breathing and digestive pathways), and itching. The reaction, however, can be treated in a number of ways.

For example, injections of adrenaline (epinephrine), oxygen, intravenous replacement fluids, breathing tubes, and/or antihistamines for skin rashes are available treatments. Among these remedies, injection of epinephrine is the most effective because it immediately elevates heart rate and blood pressure, enhances blood flow into muscle (relaxing smooth muscle), and increases metabolic rate and blood sugar. These effects almost instantaneously reverse the adverse effects caused by anaphylaxis. It is therefore crucial for individuals who have severe allergic reactions to have access to an EpiPen. Recently, there has been an uproar among consumers over the spike in the price of the EpiPen. Mylan, the company that now owns the EpiPen, holds a patent on the device's design, and this is the essential problem preventing competitors from developing a similar device. As a result, numerous individuals struggle to meet the cost of an EpiPen even with insurance coverage.

The drug epinephrine itself is relatively inexpensive, costing less than a dollar per milliliter, roughly the amount of drug contained in three EpiPens. It is not the medication that is expensive; rather, the cost lies in the device's design and in administering the correct dosage, which Mylan's patent has a lock on.
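As a rough check of that claim, assume (a widely published specification, not stated in this text) that each standard adult auto-injector delivers 0.3 mL of a 1 mg/mL epinephrine solution:

\[ 3 \times 0.3\ \text{mL} = 0.9\ \text{mL} \approx 1\ \text{mL} \]

so the active drug in three pens together costs on the order of a dollar, while a two-pack of the devices lists for hundreds of dollars.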

Patents allow ownership of intangible creations of the human mind. The owner has the right to exclude others from making, using, offering for sale, or selling the invention. The law also sets specific limits on the scope and enforcement of a patent; for instance, a patent's duration is limited to 20 years from the date of application. A patent can cover a product, an article of manufacture, or even a process.

To meet the requirements for a patent, an invention must never have been disclosed before and must not be obvious to a person ordinarily skilled in the field.

Additionally, an improvement on a previously patented invention can itself be an invention and thus be accepted as a new patent.

In order for a patent to be approved, it has to go through a rigorous review process in which it is determined whether the requirements have been met. The first part includes comparing the claims of the patent against published literature, including previous patents. This examination prevents claiming a patent on an invention that already exists.

After approval, the patent is valid for 20 years, allowing investors to provide the financial resources necessary for the invention's research and allowing time to develop manufacturing and place the product on the market. The main purpose of a patent is to maximize profit for the inventor and investors.

Before a product is put on the market, its developers must understand the market for the invention embodied in the patent, so that sales can support the cost of development and return a profit.

Generally, markets are neutral. The main principle in marketing is scarcity: scarce products cost more than widely available ones. A patent thus serves as a reward for the risk taken by the company and by those who financed the research and development. Although the concept of the patent is fundamentally the same everywhere, the laws and norms governing patents differ among industries. In the electronics industry, for instance, patents are generally shared through pooling and cross-licensing; this is necessary because a single product often contains many patented technologies. Conversely, the pharmaceutical, chemical, and biotechnology industries do not share patents, because in essence the patent is the product.

In the pharmaceutical industry, patents protect the extensive investment in research and clinical testing required before a drug can be placed on the market. Patents are extremely crucial because the manufacturing process is often easy to replicate: a competitor could copy the product with a fraction of the original investment, since much of that investment goes into laboratory research and clinical trials rather than manufacturing.

Therefore, patent exclusivity is the only way to protect the product and receive a return on investment.

Overall, patents exist for two reasons: to stimulate interest in research and in finding solutions to problems, and to promote the broader good of the country. In essence, the granting of a patent was designed to advance the interests of its creator and the economy and well-being of the nation. However, the patent system has become corrupted. Patent protection, specifically in the drug industry, has been distorted by the political system, intense lobbying, and large campaign contributions. Many companies find it simpler and safer from a financial perspective either to buy the rights to drugs developed by others and increase the prices, or to obtain a medication that already exists and exploit their monopolistic control by raising the price by as much as 500% or more, as in the case of the EpiPen.

The EpiPen was first approved by the Food and Drug Administration (FDA) in 1987. Its effective design appealed to numerous patients with an aversion to needles. Kaplan's original design releases a spring-loaded syringe pre-filled with a dose of adrenaline, and it allows the drug to be delivered through a person's clothing. This eliminates the slow process of filling a syringe in an emergency, and consumers can be trained with a short tutorial. Drugmaker Mylan bought the EpiPen in 2007, when it generated about $200 million in revenue; current revenue exceeds $1 billion. Mylan launched a clever marketing and advocacy campaign aimed at getting schools to stock EpiPens, invoking the high-profile deaths of several schoolchildren. In 2013, President Obama signed the School Access to Emergency Epinephrine Act, which provides financial incentives to schools that stock epinephrine auto-injectors. Meanwhile, the price of a two-pack increased from $94 to over $600. Much of the cost is covered by insurance, but patients still bear a large share: EpiPens can still cost about $415 after an insurance company discount.

Currently, some doctors suggest that patients cut costs by carrying kits composed of epinephrine vials and syringes, the older system that the auto-injector was designed to replace. However, a manual syringe does not administer the correct dosage as reliably or as safely as a well-calibrated EpiPen, and it carries the risk of injection into a vein instead of muscle, which can be fatal. By comparison, EpiPens are considerably cheaper in other countries, such as France, where they cost about $85. Mylan claims that the price increase is the result of added features, including the flip-top case, and says it is "investing substantial amounts in research into additional improvements," such as a formulation with a shelf life longer than a year.

Pharmaceutical companies are free to set the price of a drug at whatever level the market will bear and that maximizes their profits. Prices have increased for treatments for hepatitis C, cancer, and high cholesterol. For instance, Daraprim, which treats a life-threatening parasitic infection, increased from $13.50 to $750 per tablet. Although there is another epinephrine auto-injector on the market, there is still a lack of competition. Much of the complication stems from the regulatory process created by the patent system and the FDA.

The alternative device Auvi-Q is shaped like a credit card and provides audio instructions. However, it was recalled after only a few years on the market for delivering inconsistent doses of the drug. Currently, Adrenaclick is the only other auto-injector still available to consumers. It costs about a quarter of the price of an EpiPen, and its manufacturer also makes a generic version. Its instructions differ only slightly from the EpiPen's, yet even that small difference could prove inconvenient in an emergency.

The EpiPen's design is very simple, yet it cannot be mimicked because of the patent, which lasts through 2025. The EpiPen has been dominant on the market for so long that it is difficult to create a different method of administering the drug, or even a different design. Part of the difficulty lies in creating a design without infringing on Mylan's patent while still meeting FDA standards. The FDA's regulatory process is slow and expensive, further preventing drugs from entering or maintaining a position on the market; the lengthy, costly approval process is the most challenging part of the system. For instance, Teva Pharmaceuticals failed to obtain regulatory approval, delaying its entry into the market.

Upon the expiration of the patent in 2025, additional products will enter the market. Because of the EpiPen's long history on the market, however, doctors and patients have developed a brand loyalty to it, so it may be difficult for competitors to gain consumers in the face of EpiPen's brand equity.


Palwasha Syar

MHC 360

Dr. Harold Varmus

Hepatitis C: Drug Pricing

Hepatitis C is a disease caused by a virus that infects the liver. Hepatitis C, or Hep C for short, is just one of the hepatitis viruses; the other most common hepatitis viruses are A and B, which differ from Hep C in the way they are spread and treated (World Health Organization, 2016). Hep C often has no noticeable symptoms and can be spread via blood contact to other people without the knowledge of the infected person. It can begin as an acute infection, but in some people it progresses to a long-term illness that can result in serious liver problems, such as cirrhosis and liver cancer. According to the Centers for Disease Control and Prevention (CDC), an estimated 2.7-3.9 million people in the United States have chronic hepatitis C, and a person has a 75%-85% chance of developing chronic Hep C after being infected with the virus.

Hepatitis C is most commonly spread via blood contact with an infected person. Before 1992, the disease was most often spread by blood transfusions, organ transplants and hemodialysis (CDC).

Today, people most commonly become infected with the Hepatitis C virus by sharing needles or other equipment to inject drugs. These populations are often very poor, homeless, chronic drug users or incarcerated individuals (CDC).

Unlike hepatitis A and hepatitis B, there is no vaccine against hepatitis C. However, there are medications available that have a high success rate in treating Hep C and have fewer side effects than previous treatment options. One of the most successful Hep C treatment drugs, produced by Gilead Sciences, Inc., is Harvoni, a combination of the already approved drug sofosbuvir (Sovaldi) and a new drug called ledipasvir. According to the Gilead Sciences webpage,

Harvoni had a very high success rate in its clinical trials: after a 12-week treatment plan, 96-99% of patients with Hep C genotype 1 were cured. Harvoni is most successful against genotype 1 of the Hep C virus, which is the most prevalent genotype in the United States. However, the incredible success of Harvoni is offset by its staggering price, roughly $1,000 per pill and $94,500 per standard treatment (Gilead Sciences, Inc.). Harvoni prices are highest in the United States at $94,500 per standard treatment plan, a stark contrast to the price of about $900 in India (Staton, 2016). In response to immediate backlash and public outcry at the excessively high prices, Gilead Sciences began offering a co-pay program in the United States intended to reduce the cost of a Harvoni pill from $1,000 to only $5 (Gilead Sciences, Inc.). However, this program is fraught with problems, one of the main ones being that it covers only 25% of the 12-week treatment plan, so the cost of a 12-week treatment plan only goes down to about $75,000 (Harvoni Cost). Furthermore, in order to be eligible for the co-pay program the participant must not have Medicare, an eligibility requirement that is remote and unjust for the poor population of Hep C patients. Aside from Harvoni, other successful treatment options are also highly expensive. Sovaldi, by Gilead Sciences, Inc., is priced at $84,000 for a 12-week treatment plan; it was released before Harvoni and treated fewer genotypes of Hep C (Harvoni Cost). The cheapest Hep C treatment plan to date is Zepatier by Merck, which was approved by the FDA on January 28, 2016 and costs approximately $56,000 per standard treatment (Merck Newsroom).
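A brief arithmetic aside helps reconcile the per-pill and per-course figures above, assuming (the text does not specify dosing) a standard course of one pill daily for 12 weeks:

\[ 12 \times 7 = 84\ \text{pills}, \qquad \frac{\$94{,}500}{84} \approx \$1{,}125\ \text{per pill}, \qquad 84 \times \$1{,}000 = \$84{,}000 \]

so the widely quoted "$1,000 per pill" matches Sovaldi's $84,000 course exactly, while Harvoni's $94,500 course works out to slightly more per pill.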

The high price of Hep C drugs, especially Harvoni, is partly due to the money invested in the science behind their development. For many years after the hepatitis C virus was discovered in 1989, scientists did not have an effective way to replicate the virus in vitro. Because Hep C was difficult to reproduce in the lab, it could not be effectively replicated and analyzed for its response to candidate drugs. In 1997,

Charles Rice at Washington University in St. Louis developed a revolutionary technique that allowed the Hep C virus's RNA to be cloned and used to infect chimpanzees

(Robbins, 2016). Around the same time, the German scientist Ralf Bartenschlager inserted a special gene into host cells that gave them the ability to stay alive. He discovered that some host cells were able to stay alive longer than others because, through several rounds of replication, they had acquired a mutation that enhanced their ability to replicate

(Robbins, 2016). The revolutionary breakthrough occurred when both scientists managed to insert the sequence of the mutation into the original RNA of the virus, thus allowing the

Hep C-infected cells to stay alive longer and be effectively replicated for drug testing. For their work, Ralf Bartenschlager and Charles Rice received the Lasker Award in 2016

(laskerfoundation.org). They shared the award with Michael Sofia, an industry scientist who used this research to help invent the Hep C drug sold as Sovaldi. Sovaldi, one of the two drugs combined in Harvoni, is a nucleotide polymerase inhibitor: it interferes with the reproduction of the virus's genetic material. The other component, ledipasvir, is an NS5A inhibitor, which interferes with the life cycle of the Hep C virus in the cell (Bhatia et al.,

2014).

In addition to clinical trial and manufacturing costs, Gilead Sciences, which released Sovaldi and Harvoni, spent $11 billion in 2011 to acquire Pharmasset Inc., the pharmaceutical company that originally developed Sovaldi (Staton, 2016). Since a large sum of money was invested in the research and development of Sovaldi and Harvoni, that cost is included in the pricing of both drugs. Furthermore, investors and firms put a great deal of money into the manufacturing and clinical trials of drugs with the intent of recouping the investment once a drug is approved. Drug companies therefore often set their prices very high so that they can recover their investments and make a profit. The high pricing of drugs in the United States can be "attributed to the American free market and the oligopolistic nature of the drug industry" (Spinello, 1992). In most instances, drug companies are allowed to charge any price that they believe the consumer market can handle. Many companies also defend their pricing policies by stating that the profits obtained from sales will be used in the research and development of more advanced and improved drugs; consumer costs that may seem unethical and extreme are, by this argument, investments in more effective treatments. Another argument often used to justify high drug prices is that drugs should be sold based on their "value" to the patient. Because Harvoni and Sovaldi have very high success rates at curing Hep C, the value of both drugs is very high. Furthermore, the cost of these treatments is low compared to the alternative of treating the complications that arise from chronic Hep C.

In order to reduce the price of Hep C drugs and make them more available to the patients who need them most, there are certain policies and measures that drug companies can adopt. Drug companies that have a powerful social impact can adopt the stakeholder model in company decisions such as the distribution and pricing of drugs (Spinello, 1992). This model treats consumers as important stakeholders and links strategic decisions, such as pricing, with social and ethical concerns. By becoming responsible social agents, companies will be able to better understand the consequences of their decisions and adapt their pricing to suit the needs and financial capability of the market (Spinello, 1992). By adopting this model and understanding the various social dimensions of their decisions, companies will be able to avoid angry public outcries at their prices. Another approach to lowering the price of Hep C medication would be to replicate the European regulatory system.

According to the Regulatory Affairs Professionals Society, prices in European countries are often lower due to negotiations between governments and drug companies (2016). European governments bargain with pharmaceutical companies by covering some of a company's research and development costs; the pharmaceutical company therefore does not transfer its entire drug cost onto consumers, and the overall drug price is lower. Even though the average drug price is lower, drug companies, for example Gilead Sciences, still make a profit. It may even be more feasible for a company to set lower prices, because doing so could guarantee more sales and therefore a larger profit. A final mode of action that may lower drug prices is to extend patent life. The average patent life for a drug is 20 years, and during that period the drug company will try to maximize its profit by astronomically increasing the drug price. The drug company may file for another patent to extend a drug's life, but that action is often challenged by generic companies in court (Herper, 2002). In response, drugmakers can sue generic manufacturers, which keeps generic versions off the market, because the FDA is required by law to freeze approval of the generic for 30 months unless the court case is settled (Herper, 2002). Once the original patent is close to expiring, the drug company and the generic company will try to stifle each other's profits by suing in court. These patent fights slow medical innovation and are detrimental to both the companies and the public. It may therefore be more fruitful if patent life were lengthened for pharmaceutical companies. Although this would give companies a longer period in which to accumulate profit, companies would also be less inclined to raise their prices, because they would not have to worry about recouping their investment in a very short period of time. Regulating drug prices is a complicated process. However, with planning and contributions from the government, the public, and the pharmaceutical companies, fairer prices for Hep C and other expensive drugs can be achieved.

References

"What Is Hepatitis?" World Health Organization. World Health Organization, July 2016.

Web.

Brennan, Zachary. "European Drug Prices: New Commission Report on What Policies Work

and What Could Work." Regulatory Affairs Professionals Society, 25 Feb. 2016. Web.

Bhatia, Harmeet Kaur et al. “Sofosbuvir: A Novel Treatment Option for Chronic Hepatitis C

Infection.” Journal of Pharmacology & Pharmacotherapeutics 5.4 (2014): 278–284.

PMC. Web.

Robbins, Rebecca. "The Lab Breakthrough That Paved the Way for Hepatitis C Cures." STAT.

STAT, 13 Sept. 2016. Web. 18 May 2017.

Spinello, Richard. “Ethics, Pricing and the Pharmaceutical Industry.” Journal of Business

Ethics. 11.8 (1992): 617-626. Web

Maron, Dina Fine. "Inventor of Hepatitis C Cure Wins a Major Prize--and Turns to the Next

Battle." Scientific American. N.p., 12 Sept. 2016. Web.

Grady, Denise. "Are New Drugs for Hepatitis C Safe? A Report Raises Concerns." The New

York Times. The New York Times, 24 Jan. 2017. Web. 109

Herper, Matthew. "Solving The Drug Patent Problem." Forbes. Forbes Magazine, 02 May

2002. Web.

"What Is Hepatitis C?" American Liver Foundation. HepC 123, 2016. Web.

"Viral Hepatitis." Centers for Disease Control and Prevention. Centers for Disease Control

and Prevention, 17 Oct. 2016. Web.

"Harvoni Cost in US, Canada, India (Updated), UK, Egypt and Europe." Hepatitis C Society.

Hepatitis C Society, 2016. Web.

“Merck Receives FDA Approval of ZEPATIER™ (elbasvir and Grazoprevir) for the

Treatment of Chronic Hepatitis C Virus Genotype 1 or 4 Infection in Adults

Following Priority Review | Merck Newsroom Home. Merck Inventing for Life, 28

Jan. 2016. Web. 18 May 2017.


Fran Yeh

Interdisciplinary Research and Education: What is it? Why do it? How to do it?

Interdisciplinary research and education have gained increasing interest in recent years for a number of reasons. However, this form of academia also raises a number of issues. This paper will examine what interdisciplinary research and education are, the reasons for supporting them, the issues they may hold, and the efforts to bring these ideas into reality and resolve the issues posed.

First and foremost, the definition of interdisciplinary research and education must be stated. The definition given by the National Academies Press poses that interdisciplinary research "integrates information, data, technology, tools, perspectives, concepts and theories from two or more disciplines … to advance fundamental understanding or to solve problems whose solutions are beyond the scope of a single discipline…" ("What is Interdisciplinary Research? | NSF - National Science Foundation," n.d.). This definition gives large leeway for the further discussion of this topic. However, it poses an issue: what is a discipline? Although there is much debate surrounding this question, for the scope of this essay the definition of a discipline will be taken broadly as

“academic studies that focus on a self-imposed limited field of knowledge.” (Cohen & Lloyd,

2014). This definition is also very broad, but it gives boundaries to each academic study that acknowledges itself as a unit discrete from others similar to it. It is important to note that under this definition a discipline is not completely analogous to a specialty, although a specialty can eventually become a separate discipline of its own. For example, an immunologist who specializes in T cells is in the discipline of immunology, not T-cell immunology. However, if the scientific community or the public eventually recognizes T-cell immunology as distinct enough from the rest of immunology to be considered a different discipline, a discipline of T-cell immunology may emerge. Although this definition is far from complete or holistic, it is the definition that this paper will use in its analysis of the different reasons, issues, and methods.

Recent years have shown increased support for interdisciplinary research in the form of academic discourse, academic papers, and government reports. In the realm of research, Gardy and Brinkman discuss the problems created by having so many "super-focused" disciplines. The analogy they propose is that the environment of modern science is akin to the story of the six blind men trying to describe an elephant. The six blind men bicker among themselves: one, who touched the side of the elephant, argues that an elephant is a wall; another, who touched the elephant's foot, insists that an elephant is a tree. As enlightened 21st-century citizens, however, we understand that an elephant is an amalgamation of these descriptions. Modern science is similar in that specialized scientists become blind to other disciplines that could help illuminate the problems they are trying to solve

(Gardy & Brinkman, 2003). This is where interdisciplinary research could be used. In the governmental report by Koonin and Varmus, there was discussion of pushing the

Department of Energy (DoE) and the National Institutes of Health (NIH) closer together to work in collaboration to help with biomedical advancement. The argument that Koonin and

Varmus posed was that the DoE has been developing big-data analysis capabilities. These would be ideally suited to emerging developments in genetics and other subfields of biomedical science that require the storage and manipulation of huge datasets ("9-22-16_Report of the SEAB TF on Biomedical Sciences with transmittal.pdf," n.d.). Additionally, as biology has learned to optimize the production of certain chemicals and to take advantage of interesting physical mechanisms, the engineering world would benefit from studying biologically optimized characteristics across the spectrum of life. This can be seen in the development of the DARPA-funded

"gecko gloves" that enable humans to climb vertical walls. This technology draws upon close inspection of the biology of gecko paws, whose fine hairs create enough surface area for Van der Waals forces to let geckos cling to walls and ceilings ("Z-Man," n.d.). However, it is not just the research world and climbers that could benefit from interdisciplinary research and education. A National Academies Press report argued that interdisciplinary research and education could eventually lead to a better economy by increasing job attractiveness in an increasingly globalized world. The authors cite multiple reasons why companies locate themselves in certain areas: "… availability and quality of research and innovation talent… availability of quality workforce… indirect costs (such as healthcare, etc.) … quality of research universities …"

(Rising Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future, 2007). In all of these cases, interdisciplinary research and education can help strengthen each of these factors in the US. As described above, interdisciplinary research and education can help create quality research at institutions across the US; by integrating different disciplines, new ideas are expected to emerge. With further biomedical advancement, healthcare for employees would improve, providing another incentive to bring jobs back to the US. On the question of a more qualified workforce, we can turn to Newell in his discussion with Benson in support of interdisciplinary undergraduate education. In this discussion, Newell notes that modern jobs are more frequently concerned with solving problems than with staying entrenched in a specific discipline. As an example, he observes that there are now more public officials and salesmen than bench chemists. Public officials and salesmen are required to take pieces of information from different sources bearing on a specific issue or product and use them to resolve the issue or pitch the product. This requires the skill of integrating information to produce a cohesive argument, a skill that is uniquely interdisciplinary

(Newell & Miller, 1983).

However, against these many theoretical benefits of interdisciplinary studies, some academics have raised concerns. Academics and institutions, such as Koonin and Varmus and the National Academies Press, frequently cite the lack of funding for interdisciplinary work. There is also the issue of qualified researchers and educators, as noted by Benson. He observes that because the culture of PhD training programs calls for specialization, many PhDs are unqualified to lead interdisciplinary projects or classes. As Benson notes, this could cause such projects and classes to become unfocused, or biased for or against certain disciplines. This culture of specialization in PhD training programs, together with the culture of narrowly defined disciplines, may also produce faculty members who are entrenched in their preferred discipline of study and tend not to stray from it, even when their research questions eventually require straddling another discipline.

Although problems with interdisciplinary studies have been raised, most of them have fairly straightforward resolutions. For the issue of funding, the easiest remedy would be to increase the available funding pool, and this has been proposed in many different ways; the Koonin and Varmus report, for example, cited the creation of a separate grant type specifically for projects that propose to incorporate both DoE and NIH support. For the issue of unqualified faculty, Newell as well as Koonin and Varmus note the use and implementation of more collaborations, meetings, and interdisciplinary conferences. These will ultimately help scientists and academics become more well-versed in interdisciplinary thinking and consequently more qualified to conduct interdisciplinary research and lead interdisciplinary classes. Another method would be to increase funding to encourage graduate programs to initiate interdisciplinary topics, a suggestion made by both the National Academies Press and the

Koonin and Varmus report. In fact, we can see the fruits of these reports' efforts now, with the emergence of graduate programs such as Virginia Tech's Translational Biology, Medicine, and Health program, a PhD training program that incorporates neuroscience, cardiology, immunology, and public health into its curriculum and focus, and Vanderbilt University's Interdisciplinary Graduate Program, another PhD training program that draws on various disciplines in biology and attempts to integrate them into a single PhD training experience. Personally, I have visited Virginia Tech's Translational Biology, Medicine, and Health program. There, the professors came from a wide array of disciplines, from psychology to engineering, immunology to cardiology. All of them were genuinely invested in this interdisciplinary program, noting that many of their successful projects stemmed from having graduate students whose undergraduate or work experience was in a completely different discipline from the one the projects were based upon. However hopeful, this graduate program is still only five years old, and its graduates have yet to find their place in society to implement the interdisciplinary training they have received.

There are also other success stories of interdisciplinary research and education. The sequencing of the genome in the Human Genome Project could not have been done without the help of computer scientists, who are able to handle big-data analyses. Additionally, advances in engineering helped speed up the sequencing of the genome, further increasing the impact of the project. Another issue that has drawn interdisciplinary attention is irritable bowel syndrome (IBS). In April of 2016, I attended the Experimental Biology conference in San Diego, where there was a series of talks on IBS. The background of IBS is still unclear; scientists are still debating why it happens and what is happening. To a person without much knowledge of the issue, IBS may seem like a purely gastrointestinal problem. However, these talks revealed that there may be an intricate link between the nervous system, the immune system, and the gut microbiota. The speakers explained that the vagus nerve, a large nerve that controls many involuntary functions, innervates certain immune cell "depots" in the gastrointestinal tract. When the vagus nerve is stimulated, these depots activate, causing the immune cells to attack the gastrointestinal microbiota. This causes further inflammation and can lead to IBS; interestingly, the inflammation in turn leads to even more vagal stimulation, snowballing into a great deal of stomach pain and gastric trouble. These findings could not have come to fruition without an interdisciplinary outlook.

As the world continues to globalize and modernize, specialization and ever more focused disciplines are inevitable. However, interdisciplinary research and education, though sometimes more time-consuming and resource-intensive, may elucidate issues more clearly and more holistically. Furthermore, interdisciplinary research and education will ultimately create a more well-rounded workforce, capable of seeing the elephant as a whole rather than merely understanding the anatomy of its foot.


The Relationship Between Age and Scientific Creativity

William Zeng

22 May 2017


ABSTRACT

This paper gives a short introduction to some of the theories about age and creativity.

The critical question presented is: Are younger scientists more creative than older scientists?

Before discussing approaches to scientific creativity, however, what it means to be creative in science is discussed. The first theory assumes that younger scientists are more creative than their older coworkers. The second delves into a two-pole theory of creativity. The third stipulates that creativity, if productivity is held constant, is random: neither younger nor older scientists are more creative than the other. Recommendations based upon these theories are presented.

INTRODUCTION

When an average person is asked to picture a scientist making a breakthrough that would affect them, they don't often picture someone older. There is a saying that the "young are our future," and many would hope that breakthroughs come from the younger generation. Even the term "creative spark," common in popular use, implies that creativity is a fleeting thing, something that fades with time but that younger scientists have more of. Charles Darwin, for example, was 29 when he came up with his theory of natural selection. Einstein had his miracle year at age 26; "Marie Curie made big discoveries about radiation in her late 20s. Mozart's Symphony No. 1 in E flat: 8 years old" (Carey, 2016). Perhaps in the past, because average life expectancy was around 30 years, many of the massive discoveries that shaped the modern world were made by men and women who, by modern standards, were disproportionately young. But as medicine advanced and life expectancy rose, scientists, much like the general population, lived longer on average. This did not erase the common assumption that the young lead the breakthroughs while older scientists do not.

If the common understanding is true and young scientists make the most breakthroughs, why doesn't America cycle through scientists like day-old bread? Doesn't America want to be the best?

Why might younger scientists be more keen to make breakthroughs than older scientists? Is it because grey hairs inhibit genius? Maybe the less wrinkly one's hands are, the better he or she can hold beakers! Many theories about young dominance have been posed in the popular lexicon, but the assertion that young scientists are simply privy to a pool of creativity that has dried up for older scientists is incredibly interesting. The idea goes that scientists, and perhaps the human race, are more creative the younger they are. Case closed? Should America now fire all of her old scientists? Hold on a second, because younger scientists may not be all that they're cracked up to be, and perhaps older scientists have been wrongly maligned.

CURRENT ATMOSPHERE

The inspiration for this article came from a paper about the aging scientific workforce

(Blau & Weinberg, 2017). It was given to me by Professor Harold Varmus after I talked with him about the path I wanted to take the article down on. Blau and Weinberg looked through large data sets from the career trajectories of 73,000 scientists from many disciplines and concluded that the scientific workforce is aging rapidly. “From 1993 to 2008, the share of scientists aged 55 and older increased by nearly 90 percent… By comparison, the share of all American workers aged 55 and older increased by little more than 50 percent during that period.” (Yin, 2017). The article explains this changing by the large amount of baby boomer scientists who are aging and in the context of the 1994 ending of mandatory retirement in U.S. universities. The researchers are concerned about this development because scientists are often perceived as being more 120 creative and doing their best work when young. This aging may limit scientific progress for the foreseeable future and may be a crisis in science.

DEFINING SCIENTIFIC CREATIVITY

Defining creativity is a lot like applying for a patent. There are two main elements to creativity: it (1) must be original, that is, new, unusual, novel, and unexpected, and (2) must be valuable, that is, useful, good, adaptable, and appropriate. The psychologist Dean Simonton writes that "You can't be creative unless you come up with something that hasn't been done before," and that the "idea also has to work, or be adaptive or be functional in some way; it has to meet some criteria of usefulness." (Carey, 2016)

But how does one decide if an idea is original or valuable? This is a tricky question because originality is highly subjective; what is ingenious and original to one person might be completely obvious to another. The best researchers can do is approximate creativity through metrics. Two metrics that researchers use are citation counts and evaluation by experts following complex guidelines. Citation metrics measure how frequently a paper is cited in other articles. Generally, this gives an idea of the paper's impact and may imply how groundbreaking other researchers perceive it to be. If two papers' citation counts differ by just a few hundred, it cannot be said for certain which paper made more of an impact on the scientific community and is more creative. However, if one paper received four or no citations and another received hundreds of thousands, it may be clear that the latter made a bigger splash and may also be regarded as more creative than the former. There are severe limitations to this approach, however. If a paper is published under a famous name, like Stephen Hawking or Neil deGrasse Tyson, it is bound, by virtue of the name alone, to receive more mentions and citations than another paper of similar grade and quality. Similarly, qualities not necessarily related to the quality of the paper, such as the status of the journal in which it was published, the language in which it was written, and the topic it deals with, may also affect citation numbers (Heinrich, 1995). The second metric is an evaluation of scientific quality by an expert, who judges several aspects of the creativeness of the paper. One of the better-known approaches is the "Creative Product Analysis Matrix," which uses 14 criteria, including novelty, resolution, and elaboration and synthesis, to judge how creative a paper truly is. Evaluation by experts is rarely used (Heinrich; Torrance).
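To make the citation-metric idea concrete, here is a minimal sketch (not drawn from any of the cited papers; the paper names and the order-of-magnitude threshold are illustrative assumptions) of counting citations from a toy citation graph and applying the caution above that only large gaps are meaningful:

```python
from collections import Counter

# Hypothetical citation graph: (citing paper, cited paper) pairs.
citations = [
    ("paper_C", "paper_A"),
    ("paper_D", "paper_A"),
    ("paper_D", "paper_B"),
    ("paper_E", "paper_A"),
]

# A paper's citation count is how many other papers cite it.
counts = Counter(cited for _, cited in citations)
print(counts["paper_A"], counts["paper_B"])  # 3 vs. 1

def clearly_higher_impact(a, b, ratio=10):
    """Per the text's caution, call `a` clearly higher impact only
    when the gap is large, here a full order of magnitude."""
    return counts[a] >= ratio * max(counts[b], 1)

print(clearly_higher_impact("paper_A", "paper_B"))  # False: 3 vs. 1 is too close
```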

YOUNG DOMINANCE

The question to be answered in this article is: are younger scientists more creative than older scientists? That is, do younger scientists write more big-impact, field-changing papers because they are more creative than older scientists? I am not disputing that younger scientists write higher-impact papers than older scientists, because that is a well-documented fact (Yin; Carey; Callaway); rather, the reason this occurs is what interests me. Is it a question of young dominance in creativity?

"Young researchers are much more likely to study more innovative topics," according to a study by Packalen and Bhattacharya (Callaway, 2015). They arrived at this conclusion after a text analysis of more than 20 million biomedical papers. They developed a computer program that extracted every one-, two-, and three-word string in the title and abstract of those papers, then looked at when each word string was first mentioned and how frequently it appeared in subsequent papers to determine its popularity. The program then identified which papers were about the most popular topics; overwhelmingly, it was younger scientists, or scientists earlier in their careers, who wrote about these popular topics (Packalen and Bhattacharya). This method does not directly measure creativity, because, again, there is no tool that directly measures creativity, but it does measure young researchers' "willingness to embrace" new ideas, which may lead to out-of-the-box, creative thought. A physicist at Cornell University offered a small critique: because the researchers analyzed only the titles and abstracts of papers, which are tiny parts of the whole, a different picture might emerge if the analysis were repeated on papers in their entirety. Perhaps older researchers include novel ideas sprinkled later throughout their papers.
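As a rough illustration of the kind of analysis described above (a minimal sketch, not Packalen and Bhattacharya's actual pipeline; the toy papers, author ages, and five-year novelty window are assumptions):

```python
from collections import Counter

def extract_ngrams(text, max_n=3):
    """Return every one-, two-, and three-word string in a text."""
    words = text.lower().split()
    return {
        " ".join(words[i:i + n])
        for n in range(1, max_n + 1)
        for i in range(len(words) - n + 1)
    }

# Toy corpus: (publication year, author age, title/abstract text).
papers = [
    (1995, 62, "protein folding pathways in yeast"),
    (2003, 31, "microarray analysis of gene expression"),
    (2004, 29, "gene expression profiling with microarrays"),
]

first_seen = {}    # n-gram -> year it first appeared in the corpus
usage = Counter()  # n-gram -> total number of papers using it
for year, _, text in sorted(papers):
    for ng in extract_ngrams(text):
        first_seen.setdefault(ng, year)
        usage[ng] += 1

# Flag, for each paper, the recently coined n-grams that later caught on;
# papers rich in such n-grams are "about the most popular new topics."
for year, age, text in papers:
    hot = [ng for ng in extract_ngrams(text)
           if year - first_seen[ng] <= 5 and usage[ng] > 1]
    print(f"{year} (author age {age}): {sorted(hot)}")
```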

TWO MAIN TYPES OF CREATIVITY

David W. Galenson proposed a radical take on creativity. He argued that there are basically two main types of innovators, two main types of creativity. There are those who just seem to "get it" and make great things very quickly; he calls these people "conceptual innovators." The other group are those who never quite seem to get it: they try and try, erase, demur, and hesitate year after year before they are satisfied and get it right. Galenson calls them "experimental innovators" (Galenson, 2003).

Conceptual innovators are those who create near-perfect work very quickly, and they tend to do their best work early in life. They plan meticulously and precisely and execute perfectly. When society thinks of a genius, it often pictures a conceptual innovator; child prodigies are conceptual innovators. Mozart, Einstein, Picasso, F. Scott Fitzgerald, and Bob Dylan are conceptual innovators. The problem for conceptual innovators is that once one has written Gatsby, it is hard to top. Conceptual innovators peak early in life and remain relatively obscure for the rest of their lives.

On the other hand, experimental innovators never have a clear, easily articulated idea. They do not work quickly and often take years to finish their work; they are never satisfied and make endless drafts. They are late bloomers. Mark Twain finished the "Adventures of Huckleberry Finn" at the ripe age of 49. The painter Cézanne refused to sign his name at the bottom of any of his paintings, because doing so would admit that he was finished with the painting. Theirs is a genius that is subtler and less celebrated by common society, but perhaps just as beautiful and just as creative as that of the conceptual innovators. Galenson argues that these are the two main trajectories for scientific creativity: it sparks either early or late, or never at all. Some find this theory too simplistic and argue that creativity should be judged on a spectrum.

PRODUCTIVITY, LUCK, AND Q

But why should creativity have an overarching theme and order? A lot of things in this world do not make sense; there is entropy and chaos. The third and last theory assumes that there is no order to creativity and that it pops up randomly in a scientist's career regardless of his or her age. This means it may be just as likely for a scientist to publish a creative paper at 25 as it is to publish a creative and impactful paper at 50.

Sinatra et al. wrote that the highest-impact work in a scientist's career is randomly distributed throughout his or her body of work, assuming productivity is held constant. Their method examined the careers of many scientists and applied quantitative modeling using a random-impact rule. This random-impact model included three variables: productivity, luck (p), and Q. Q is a broad variable that roughly translates to "skill"; it includes many factors, such as I.Q., drive, motivation, and willingness to work with others. Q can also capture an indirect or unexpected skill, for example the clear and concise writing of a researcher that makes a convoluted mathematical concept easier for non-mathematicians to understand. These three factors must all be present in high amounts for a paper to be creative. Q is a constant; luck, or p, is not. Luck is defined as the topic choice and the public interest in the subject matter. Productivity is highest when a scientist is younger, because he or she has more time to dedicate to projects and more options. This is why younger scientists author more high-impact, popular papers: perhaps not because they are more creative but simply because they are more productive than older researchers. If productivity is held constant, the only variable that changes is p, or luck (because Q, or skill, does not change much throughout life), and luck does not depend on age; it is completely random for any particular project. With productivity held constant, then, age would not be a factor in the creativity of a scientific paper. Productivity is, of course, a factor in real life; nonetheless, this research suggests that creativity might be a simpler concept to understand than assumed, and it offers older researchers hope that their golden years are not forever behind them.
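A minimal simulation sketch of this random-impact idea (my own illustration under the stated assumptions, not Sinatra et al.'s code): each paper's impact is a constant Q times an independent lognormal luck draw, and with productivity held fixed, the career position of the biggest hit comes out uniform, with a mean near the career midpoint.

```python
import random

random.seed(1)

def career(num_papers, Q):
    """One simulated career: impact of each paper = Q * luck,
    with luck drawn independently for every paper."""
    return [Q * random.lognormvariate(0, 1) for _ in range(num_papers)]

# Where in the career does the highest-impact paper fall?
positions = []
for _ in range(10_000):
    impacts = career(num_papers=50, Q=random.uniform(0.5, 2.0))
    best = impacts.index(max(impacts))
    positions.append(best / 49)  # 0.0 = first paper, 1.0 = last paper

# Under the random-impact rule the mean position should be ~0.5:
# the big hit is equally likely early or late in the career.
print("mean position of best paper:", sum(positions) / len(positions))
```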

DISCUSSION

The effect of age on creativity is inconclusive. It is true that younger scientists produce more talked-about articles than older scientists, but the idea that this happens because younger scientists are more creative may not be accurate. The scientific community is aging, and if it were unquestionably true that younger scientists are more creative, the entire scientific enterprise would be in danger. Thankfully, this is not clearly the case, but more research is needed, particularly using novel metrics and novel approaches to defining creativity. Lastly, even if younger scientists are shown not to be more creative than the older cohort, this demographic change may still be troubling, because younger scientists offer different perspectives, unique insights, and many other interesting ideas that might otherwise not be heard.

There may be a crisis of diversity in the future.

Works Cited

Blau, D., & Weinberg, B. (2017). Why the US science and engineering workforce is aging rapidly. Procesings of the National Academy of Sciences of the United States of America , 114 (15), 3879-3884. Callaway, E. (2015). Young scientists lead the way on fresh ideas. Nature , 518 (7539), 283- 284. Carey, B. (2016, Nov 3). NYTimes. Galenson, D. W. (2003). The two life cycles of human creativity. The National Bureau of Economic Research , n/a. Heinrich, S. (1995). Scientific creativity: A short overview. Educational Pyschology Review , 7 (3), 225-241. Packalen, M., & Bhattacharya, J. (2015, Jan). Age and the trying out of new ideas. National Bureau of Economic Research , n/a. Pink, D. (2006, July 1). What kind of genius are you. Retrieved from Wired: https://www.wired.com/2006/07/genius/ Sinatra, R., Wang, D., Deville, P., Song, C., & Barabási, A.-L. (2016). Quantifying the evolution of individual scientific impact. Science , 354 (6312), aaf5239. Torrance, E. (1965). Scientific Views of Creativity and Factors Affecting its Growth. Daedalus , 94 (3), 663-681. Yin, S. (2017, Apr 17). NYTimes.